Many client applications require caching of data to work with low-bandwidth connections. Previously, caching was driven by a versions API, which returns a record like the following for an event:

```json
[{
    "event_id": 6,
    "event_ver": 1,
    "id": 27,
    "microlocations_ver": 0,
    "session_ver": 4,
    "speakers_ver": 3,
    "sponsors_ver": 2,
    "tracks_ver": 3
}]
```

The number corresponding to each "*_ver" key is the number of modifications made to that resource list. For instance, "tracks_ver": 3 means there have been three revisions of the tracks inside the event (/events/:event_id/tracks). So when the user starts the app, the app makes a request to the versions API, checks whether the response matches the local cache, and updates accordingly.

This approach had some shortcomings, such as checking modifications for individual resources. If the resource list of a particular service (microlocation, track, etc.) inside an event needed to be checked for updates, a separate call to the versions API was required.

ETag based caching for GET APIs

The concept of ETag (Entity Tag) based caching is simple. When a client requests (GET) a resource or a resource list, a hash of the resource/resource list is calculated at the server. Only small modifications were needed to handle ETags for GET requests. Flask-Restplus includes a Resource class that defines a resource; it is a pluggable view. Pluggable views need to define a dispatch_request method that returns the response.

```python
import json
from hashlib import md5

from flask import request
from flask.ext.restplus import Resource as RestplusResource

# Custom Resource class
class Resource(RestplusResource):
    def dispatch_request(self, *args, **kwargs):
        resp = super(Resource, self).dispatch_request(*args, **kwargs)
        # ETag checking.
        # Check only for GET requests, for now.
        if request.method == 'GET':
            old_etag = request.headers.get('If-None-Match', '')
            # Generate hash
            data = json.dumps(resp)
            new_etag = md5(data).hexdigest()
            if new_etag == old_etag:
                # Resource has not changed
                return '', 304
            else:
                # Resource has changed, send new ETag value
                return resp, 200, {'ETag': new_etag}
        return resp
```

To add support for ETags, I sub-classed the Resource class to extend the dispatch_request method. First, I grabbed the response for the arguments provided to RestplusResource's dispatch_request method. old_etag contains the value of the ETag sent in the If-None-Match header. Then the hash of the resp response is calculated. If both ETags are equal, an empty response is returned with the 304 HTTP status (Not Modified). If they are not equal, a normal response is sent together with the new value of the ETag.

```
[smg:~] $ curl -i
HTTP/1.0 200 OK
Content-Type: application/json
Content-Length: 1061
ETag: ada4d057f76c54ce027aaf95a3dd436b
Server: Werkzeug/0.11.9 Python/2.7.6
Date: Thu, 21 Jul 2016 09:01:01 GMT

{"description": "string", "sessions": [{"id": 1, "title": "Fantastische Hardware Bauen & L\u00f6ten Lernen mit Mitch (TV-B-Gone) // Learn to Solder with Cool Kits!"}, {"id": 2, "title": "Postapokalyptischer Schmuck / Postapocalyptic Jewellery"}, {"id": 3, "title": "Query Service Wikidata "}, {"id": 4, "title": "Unabh\u00e4ngige eigene Internet-Suche in wenigen Schritten auf dem PC installieren"}, {"id": 5, "title": "Nitrokey Sicherheits-USB-Stick"}, {"id": 6, "title": "Heart Of Code - a hackspace for women* in Berlin"}, {"id": 7, "title": "Free Software Foundation Europe e.V."}, {"id": 8, "title": "TinyBoy Project - a 3D printer for education"}, {"id": 9, "title": "LED Matrix Display"}, {"id": 11, "title": "Schnittmuster am PC erstellen mit Valentina / Valentina Digital Pattern Design"}, {"id": 12, "title": "PC mit Gedanken steuern - Brain-Computer Interfaces"}, {"id": 14, "title": "Functional package management with GNU Guix"}], "color": "GREEN", "track_image_url": "", "location": "string", "id": 1, "name": "string"}
```

```
[smg:~] $ curl -i --header 'If-None-Match: ada4d057f76c54ce027aaf95a3dd436b'
HTTP/1.0 304 NOT MODIFIED
Connection: close
Server: Werkzeug/0.11.9 Python/2.7.6
Date: Thu, 21 Jul 2016 09:01:27 GMT
```

ETag based caching has a drawback.
http://blog.fossasia.org/etag-based-caching-for-get-apis/
In this article, I am going to describe the process of writing and building a simple driver module for Linux OS, touching on several related questions along the way. The article concerns Linux kernel version 2.6.32, because other kernel versions may have changes in the API used in the examples or in the build system.

Linux is a monolithic kernel. That is why a driver for it must either be compiled together with the kernel itself or be implemented in the form of a kernel module, which avoids recompiling the kernel whenever a driver has to be added. This article deals with kernel modules. A module is an object file prepared in a special way: the Linux kernel can load a module into its address space and link the module with itself.

The Linux kernel is written in two languages: C and assembler (for the architecture-dependent parts). The development of drivers for Linux OS is therefore possible only in C and assembler, but not in C++ (unlike the Microsoft Windows kernel). The reason is that kernel source code, namely the header files, can contain C++ keywords such as new and delete, and the assembler pieces of code can contain the '::' lexeme.

The module code is executed in the kernel context. This places some additional responsibility on the developer: if there is an error in a user-level program, the results of this error will affect mainly that user program; if an error occurs in a kernel module, it may affect the whole system. One of the specifics of the Linux kernel, however, is a rather high resistance to errors in module code. If there is a non-critical error in a module (such as dereferencing a null pointer), an oops message will be displayed (an oops is a deviation from the normal work of Linux; in this case, the kernel creates a log record with the error description). Then the module in which the error appeared is unloaded, while the kernel itself and the rest of the modules continue working. However, after an oops message, the system kernel can often be in an inconsistent state, and further work may lead to a kernel panic.

The kernel and its modules are built into what is practically a single program module. That is why it is worth remembering that within one program module, one global namespace is used. To clutter up the global namespace as little as possible, a module should export only the necessary minimum of global symbols, and all exported global symbols should have unique names (a good practice is to prefix each symbol name with the name of the module that exports it).

The piece of code required for the creation of the simplest module is very short and laconic. It looks as follows:

```c
#include <linux/init.h>
#include <linux/module.h>

static int my_init(void)
{
    return 0;
}

static void my_exit(void)
{
    return;
}

module_init(my_init);
module_exit(my_exit);
```

This code does nothing except allow the module to be loaded and unloaded. When loading the driver, the my_init function is called; when unloading the driver, the my_exit function is called. We inform the kernel about them with the help of the module_init and module_exit macros. These functions must have exactly the following signatures:

```c
int init(void);
void exit(void);
```

The inclusion of the linux/module.h header file is necessary for adding information to the module about the kernel version for which it is built. Linux OS will not allow loading a module that was built for another kernel version: the kernel API changes intensively, and a change in the signature of one of the functions used in the module would lead to corruption of the stack when calling that function.
The linux/init.h header file contains the declarations of the module_init and module_exit macros. We will not dwell on such a simple module; instead, I would like to demonstrate working with device files and with logging in the kernel. These are tools that will be useful for almost every driver and that somewhat expand kernel-mode development for Linux OS.

First, a few words about the device file. A device file is a file that is usually located in the /dev/ directory hierarchy. It is the easiest and most accessible way for user code and kernel code to interact. In short: everything that is written to such a file is passed to the kernel, to the module that serves this file; everything that is read from such a file comes from the module that serves the file. There are two types of device files: character (non-buffered) and block (buffered) files. A character file allows reading and writing information one character at a time, whereas a block file allows reading and writing only a data block as a whole. This article touches upon only character device files.

In Linux OS, device files are identified by two positive numbers: the major device number and the minor device number. The major device number usually identifies the module that serves the device file, or a group of devices served by one module. The minor device number identifies a specific device within the range of the given major device number. These two numbers can either be defined as constants in the driver code or be obtained dynamically. In the first case, the system will try to use the specified numbers, and if they are already in use, it will return an error. Functions that allocate device numbers dynamically also reserve the allocated numbers, so that a dynamically allocated device number cannot be used by another module while it is allocated or in use.
To register a character device, the following function can be used:

```c
int register_chrdev(unsigned int major, const char *name,
                    const struct file_operations *fops);
```

It registers the device with the specified name and major device number (or allocates a major device number if the major parameter is equal to zero) and links the file_operations structure with the device. If the function allocates the major device number, the returned value is the allocated number. Otherwise, zero means successful completion and a negative value means an error. The registered device is associated with the given major device number and with minor device numbers in the range 0 to 255.

The string passed as the name parameter is the name of the device (or of the module, if it registers only one device) and is used to identify the device in /sys/devices. The file_operations structure contains pointers to the functions that process manipulations of the device file (such as open, read, write, etc.) and a pointer to the module structure that identifies the module implementing these functions. For kernel version 2.6.32, the structure looks as follows (abbreviated; only the most commonly used members are shown):

```c
struct file_operations {
    struct module *owner;
    loff_t (*llseek) (struct file *, loff_t, int);
    ssize_t (*read) (struct file *, char __user *, size_t, loff_t *);
    ssize_t (*write) (struct file *, const char __user *, size_t, loff_t *);
    int (*open) (struct inode *, struct file *);
    int (*release) (struct inode *, struct file *);
    /* ... further members omitted ... */
};
```

It is not necessary to implement all functions from the file_operations structure in order to use the file. If a function is not implemented, the corresponding pointer can be left zero; in that case, the system provides some default behavior for this function. For our example, it is enough to implement the read function.

As our driver will serve devices of one type, we can create a global static file_operations structure and fill it statically. It can look as follows:

```c
static struct file_operations simple_driver_fops = {
    .owner = THIS_MODULE,
    .read  = device_file_read,
};
```

Here, the THIS_MODULE macro (declared in linux/module.h) expands to a pointer to the module structure that corresponds to our module. device_file_read is a pointer to a function, whose body we will write later, with the following prototype:

```c
static ssize_t device_file_read(struct file *, char __user *, size_t, loff_t *);
```

So, once we have the file_operations structure, we can write a pair of functions for registration and unregistration of the device file:

```c
static int device_file_major_number = 0;
static const char device_name[] = "Simple-driver";

static int register_device(void)
{
    int result = 0;

    printk( KERN_NOTICE "Simple-driver: register_device() is called." );

    result = register_chrdev( 0, device_name, &simple_driver_fops );
    if( result < 0 )
    {
        printk( KERN_WARNING "Simple-driver: can't register character device with error code = %i", result );
        return result;
    }

    device_file_major_number = result;
    printk( KERN_NOTICE "Simple-driver: registered character device with major number = %i and minor numbers 0...255", device_file_major_number );
    return 0;
}
```

We store the major device number in the device_file_major_number global variable, as we will need it to unregister the device file at the end of the driver's life.

In the listing above, the only function not yet mentioned is printk(). It is used for logging messages from the kernel. printk() is declared in linux/kernel.h and works like the printf library function, with one nuance: as you may have noticed, each format string of printk in this listing has a KERN_SOMETHING prefix. This is the message priority, which can be one of eight levels, from the highest level zero (KERN_EMERG), which reports that the kernel is unstable, to the lowest level seven (KERN_DEBUG).
The string formed by the printk function is written to a circular buffer, from which it is read by the klogd daemon and passed to the system log. The printk function is written in such a way that it can be called from any place in the kernel. The worst that can happen is an overflow of the circular buffer, in which case the oldest messages do not reach the system log.

Now we only need to write the function for unregistering the device file. Its logic is simple: if the device file registration succeeded, the device_file_major_number value is non-zero and we can unregister the device with the help of the unregister_chrdev function declared in linux/fs.h. Its first parameter is the major device number, and the second is the device name string. The unregister_chrdev function is, in its action, fully symmetric to the register_chrdev function. We get the following piece of code for the device unregistration:

```c
void unregister_device(void)
{
    printk( KERN_NOTICE "Simple-driver: unregister_device() is called" );
    if(device_file_major_number != 0)
    {
        unregister_chrdev(device_file_major_number, device_name);
    }
}
```

Next, we need to write the function for reading characters from the device. It must have a signature compatible with the one in the file_operations structure:

```c
ssize_t (*read) (struct file *, char __user *, size_t, loff_t *);
```

The first parameter of this function is a pointer to the file structure, from which we can find out details such as: which file we are working with, what private data is associated with it, and so on. The second parameter is a buffer allocated in user space for the data to be read. The third parameter is the number of bytes to be read. The fourth parameter is the offset (position) in the file, starting from which bytes should be read. After the function has run, the position in the file should be updated.
The function should also return the number of successfully read bytes.

One of the actions that our read function must perform is copying information into the buffer that the user allocated in the user-mode address space. We cannot simply dereference a pointer from the user address space, because the address it refers to may map to something else in the kernel address space. There is a special set of functions and macros (declared in asm/uaccess.h) for working with pointers from the user address space. The copy_to_user() function is the best fit for our task: as its name suggests, it copies data from a buffer in the kernel to a buffer allocated by the user. In addition, copy_to_user() checks the validity of the pointer and the sufficiency of the size of the buffer allocated in user space, which makes error handling in the driver easier. The copy_to_user prototype looks like the following:

```c
long copy_to_user( void __user *to, const void *from, unsigned long n );
```

The first parameter is the user pointer to the destination buffer, the second is a pointer to the data source, and the third is the number of bytes to be copied. The function returns 0 on success and non-zero on error. The __user macro in the prototype is used for documentation purposes. It also allows the code to be analyzed for correct use of pointers from the user address space by means of the sparse static code analyzer; pointers from the user address space should always be marked as __user.

We are creating only an example driver, and we do not have a real device. So it will be sufficient if reading from our device file always returns a fixed text string (e.g., Hello world from kernel mode!).
Now we can start writing the code of the read function:

```c
static const char    g_s_Hello_World_string[] = "Hello world from kernel mode!\n\0";
static const ssize_t g_s_Hello_World_size = sizeof(g_s_Hello_World_string);

static ssize_t device_file_read(
                    struct file *file_ptr
                  , char __user *user_buffer
                  , size_t count
                  , loff_t *position)
{
    printk( KERN_NOTICE "Simple-driver: Device file is read at offset = %i, read bytes count = %u"
            , (int)*position
            , (unsigned int)count );

    /* If position is behind the end of the file, we have nothing to read */
    if( *position >= g_s_Hello_World_size )
        return 0;

    /* If a user tries to read more than we have, read only as many bytes as we have */
    if( *position + count > g_s_Hello_World_size )
        count = g_s_Hello_World_size - *position;

    if( copy_to_user(user_buffer, g_s_Hello_World_string + *position, count) != 0 )
        return -EFAULT;

    /* Move the reading position */
    *position += count;
    return count;
}
```

Now that the whole driver code is written, we would like to build it and see how it works. For kernels of version 2.4, to build a module the developer had to prepare the compilation environment himself and compile the driver with the GCC compiler; the .o file produced by the compilation was the module loadable into the kernel. Since then, the procedure for building kernel modules has changed. Now the developer only writes a special makefile that starts the kernel build system and tells it what the module should be built from. To build a module from one source file, it is enough to write a one-line makefile and start the kernel build system:

```makefile
obj-m := source_file_name.o
```

The module name will correspond to the source file name, and the module itself will have the .ko extension.
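As an aside, the clamping arithmetic inside device_file_read (return 0 past the end of the data, shorten the request to what remains, then advance the position) is independent of the kernel APIs. The sketch below replays that bookkeeping as an ordinary Python function so the edge cases can be checked outside the kernel; it is an illustration of the logic only, not kernel code:

```python
DATA = b"Hello world from kernel mode!\n"

def read_at(position, count):
    """Mirror of device_file_read's bookkeeping: returns (chunk, new_position)."""
    if position >= len(DATA):            # past the end: nothing to read
        return b"", position
    if position + count > len(DATA):     # clamp the request to what remains
        count = len(DATA) - position
    chunk = DATA[position:position + count]
    return chunk, position + count       # advance the "file" position
```

Repeated calls with the returned position walk through the buffer exactly the way successive read() calls on the device file do, ending with a zero-length read.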
To build a module from several source files, we should add one more line:

```makefile
obj-m := module_name.o
module_name-objs := source_1.o source_2.o ... source_n.o
```

We start the kernel build system with the help of the make command:

```
make -C KERNEL_MODULE_BUILD_SYSTEM_FOLDER M=`pwd` modules
```

for the module build, and

```
make -C KERNEL_MODULES_BUILD_SYSTEM_FOLDER M=`pwd` clean
```

for cleaning the build folder. The module build system is usually located in the /lib/modules/`uname -r`/build folder. Before building the first module, we should prepare the module build system: go to the build system folder and execute the following:

```
#> make modules_prepare
```

Let's unite this knowledge into a single makefile:

```makefile
TARGET_MODULE:=simple-module

# If we are being run by the kernel build system
ifneq ($(KERNELRELEASE),)
	$(TARGET_MODULE)-objs := main.o device_file.o
	obj-m := $(TARGET_MODULE).o
# If we are being run directly from the command line
else
	BUILDSYSTEM_DIR:=/lib/modules/$(shell uname -r)/build
	PWD:=$(shell pwd)

all:
	$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) modules

clean:
	$(MAKE) -C $(BUILDSYSTEM_DIR) M=$(PWD) clean

load:
	insmod ./$(TARGET_MODULE).ko

unload:
	rmmod ./$(TARGET_MODULE).ko

endif
```

The load and unload targets load the built module into the kernel and delete it from the kernel, respectively. In our example, the driver is compiled from two source files, main.c and device_file.c, and the resulting module is named simple-module.ko. When the module is built, we can load it by executing the following command in the folder with the source files:

```
#> make load
```

After that, a line with the name of our driver appears in the special /proc/modules file, and a line with the device registered by our module appears in the special /proc/devices file. It looks as follows:

```
Character devices:
  1 mem
  4 tty
  4 ttyS
  ...
250 Simple-driver
  ...
```

The number before the device name is the major number associated with it.
We know the range of minor numbers for our device (0...255), so we can create the device file in the /dev virtual file system:

```
#> mknod /dev/simple-driver c 250 0
```

Once the device file is created, we can check that everything works correctly by displaying its contents with the cat command:

```
$> cat /dev/simple-driver
Hello world from kernel mode!
```
http://www.codeproject.com/Articles/112474/A-Simple-Driver-for-Linux-OS?msg=3877622&PageFlow=FixedWidth
kontocheck 5.4.2

Python ctypes wrapper of the konto_check library.

This module is based on konto_check, a small library to check German bank accounts. It implements all check methods and IBAN generation rules published by the German Central Bank.

Example

```python
import kontocheck
kontocheck.lut_load()
bankname = kontocheck.get_name('37040044')
iban = kontocheck.create_iban('37040044', '532013000')
kontocheck.check_iban(iban)
bic = kontocheck.get_bic(iban)
```

Changelog

- v5.4.2
  - Updated the LUT data file, since it contained an invalid BIC
- v5.4.1
  - Fixed a bug on Windows systems: failed to load msvcrt
- v5.4.0
  - Updated the konto_check library to version 5.4
- v5.3.0
  - Updated the konto_check library to version 5.3
  - Fixed a bug in the function get_name that did not recognize an IBAN
- v5.2.1
  - Replaced Cython with ctypes, since it is easier to maintain for different platforms

Downloads (All Versions):
- 41 downloads in the last day
- 382 downloads in the last week
- 1913 downloads in the last month

Author: Thimo Kraemer
License: LGPLv3
Package Index Owner: joonis
DOAP record: kontocheck-5.4.2.xml
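Under the hood, the final validity test on an IBAN is the standard ISO 7064 mod-97 check defined by ISO 13616; konto_check additionally applies the bank-specific account check methods, which this sketch does not cover. The pure mod-97 part can be written in plain Python without the library:

```python
def iban_mod97_ok(iban):
    """ISO 7064 mod-97 check: a valid IBAN leaves a remainder of 1."""
    s = iban.replace(" ", "").upper()
    # Move the country code and check digits to the end of the string.
    rearranged = s[4:] + s[:4]
    # Replace every letter with its numeric value (A=10, B=11, ..., Z=35).
    digits = "".join(str(int(ch, 36)) for ch in rearranged)
    return int(digits) % 97 == 1
```

For example, the well-known example IBAN DE89 3704 0044 0532 0130 00, which embeds the bank code and account number used above, passes the check, while altering any single digit makes it fail.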
https://pypi.python.org/pypi/kontocheck/5.4.2
Code:

```
#include "std_disclaimer.h"
/*
 * Your warranty is now void.
 */
```

All the source code for CyanogenMod is available in the CyanogenMod GitHub repo. And if you would like to contribute to CyanogenMod, please visit Gerrit Code Review.

Wiki

Official CyanogenMod Wiki:

Installation

First time CyanogenMod 14.1 installation on your Nexus 4:

Upgrading from an earlier version of CyanogenMod (even from 13.0):
- Push the new CM 14.1 zip to your SD card
- Boot into Recovery
- Flash the CM 14.1 zip from SD card
- Flash the most recent GApps for 7.1 if you are upgrading from an earlier Android version
- Reboot

Downloads

Latest Nightly 14:
Latest Recovery (Nightly):
Latest Snapshot:
Latest Recovery (Snapshot):
-----------------------------------------------------------------------------
Google Apps (arm): wiki.cyanogenmod.org/w/gapps

All the builds:

The CyanogenMod team would like to thank everyone involved in helping with testing, coding, debugging & documenting! Enjoy!

Contributors: cyanogen, CM team

Source Code:
https://forum.xda-developers.com/nexus-4/orig-development/official-cyanogenmod-14-1-nexus-4-t3507532
Introduction to Java Assertion

In Java, an assertion is a statement that tests the correctness of assumptions made in a program. It is done with the help of the assert statement. When the written assumption holds on execution, it is considered true. If it is false, an AssertionError will be thrown by the Java Virtual Machine. The main reasons why assertions are used are:

- To confirm that code assumed to be unreachable is in fact never reached.
- To check whether the assumptions stated in the comments are correct or not.
- To confirm that the default case in a switch is not reached.
- To check assumptions after the invocation of a method.
- To check the state of an object.

Syntax

Below is the syntax of the Java assert statement. Either form can be used, depending on the requirement:

```java
assert expression;
assert expr1 : expr2;
```

How does Assertion work in Java?

As already mentioned, assert can be written in two forms.

- The form assert expression; is used to test a Boolean expression. If the expression is false, the program terminates by throwing an AssertionError. Unlike normal exceptions, assertions are disabled at runtime by default.
- The form assert expr1 : expr2; is used in cases where the program has some extra information that helps diagnose the failure.
- Similar to uncaught exceptions in Java, assertion errors are labeled in the stack trace with the file and line number from which the exception was thrown.

Even though these are the main pros of assertion, there are certain situations where assertions should not be used. They are:

- Error message replacement.
- Argument checking in public methods.
- Command-line argument processing.

Syntax

```
java -ea ProgramName
```
or
```
java -enableassertions ProgramName
```

Steps to use Eclipse Java Assertion

In Eclipse, it can be done using the steps below.

Step 1: Select Run Configurations.

Step 2: Go to the left panel, select Java Application, and right-click on it.
Step 3: Select New Configuration and type -ea in VM Arguments. Once it is done, click.

Similarly, assertions can be disabled using the syntax given below:

```
java -da ProgramName
```

Examples to Implement Java Assertion

Now, let us see some sample programs in order to get a clear idea of assertion.

Example #1

Java program to check whether a particular value is higher than 20.

Code:

```java
class AssertionExample {
    public static void main(String args[]) {
        int val = 14;
        assert val >= 20 : "Value is not valid";
        System.out.println("The given value is: " + val);
    }
}
```

Output:

- On executing the code, the message "The given value is: 14" gets displayed.
- It can be clearly seen that the assertion is not checked here, since the condition that the value is at least 20 is never evaluated.
- In order to enable assertions, -ea has to be set before running. For that, perform the steps mentioned above.
- If you then run the code again, it can be clearly seen that an AssertionError is thrown, since the value is less than 20.

Example #2

Java program to check whether a particular user input value is higher than 20.

Code:

```java
import java.util.Scanner;

class AssertionExample {
    public static void main(String args[]) {
        Scanner sc = new Scanner(System.in);
        System.out.print("Enter a number to check assertion ");
        // store the input value in the variable val
        int val = sc.nextInt();
        // assertion checks whether the input value is at least 20
        assert val >= 20 : "Value is not valid";
        System.out.println("The given value is: " + val);
    }
}
```

Output:

- On executing the code, the user is asked to input a number. In the result below, the number 13 is given as input. Since it is less than 20, an AssertionError is thrown.
- At the same time, when a value greater than 20 is given, no error is thrown and the message gets displayed.

Example #3

Java program to check the number of days in a week.
Code:

```java
class AssertionExample {
    // main method
    public static void main(String args[]) {
        // declare a string array of days
        String[] days = { "Monday", "Holiday", "Saturday", "Tuesday",
                          "Wednesday", "Sunday", "Thursday", "Friday" };
        // assert that there are 7 days
        assert days.length == 7 : "7 days are present in a week, your input is wrong";
        // print the line below
        System.out.println("There are " + days.length + " days in a week");
    }
}
```

Output:

- On executing the code, an AssertionError is thrown, as the number of days in a week is 7 and the string array given as input contains more than that.
- Let us remove "Holiday" from the input values and see what the output will be. Yes: the line gets printed, as the input now satisfies the assertion condition.

Advantages of using Assertion in Java

The following are the main advantages of using assertions:

- Efficient detection and correction of bugs.
- Boilerplate code is removed, which helps in creating readable code.
- Execution time is not affected, since assertions are disabled at runtime by default.
- Quick bug detection and correction.
- Code optimization and refactoring can be done with high confidence that the code functions correctly.

In addition to the above points, below are important points to know while studying assertions:

- Assertion in Java was introduced in version JDK 1.4.
- assert is the keyword used to implement assertion.
- Enabling and disabling of assertions can be done at runtime with the help of the corresponding flags.
- Even though assertion complements exceptions, it does not replace them.
- It does not replace unit testing, even if it helps in the validation of conditions.
- Never use assertions for validating the arguments or parameters of a public method.

Conclusion

Java Assertion is a statement that checks the truth of a particular condition. It is commonly used in testing during software development, together with Boolean expressions.
In this article, several aspects of assertion, such as its syntax, how it works, pros, cons, and examples, are explained in detail.

Recommended Articles

This is a guide to Java Assertion. Here we discuss an introduction to Java Assertion, how it works, steps to use it, advantages, and examples for better understanding. You can also go through our other related articles to learn more.
https://www.educba.com/java-assertion/
Glenmorangie Part I (was) Re: Shape of the Distillation Column Affects The Product

--- In Distillers@yahoogroups.com, "Sasha" <blackrabbit.namespace@h...> wrote:

> Greetings,
> I have been spending a lot of time reading everything I can get my
> hands on about distillation, but I cannot say that I am not totally
> overwhelmed; there seem to be as many theories about distillates as
> there are grains in a field.
> The goal that I have set is to produce a Scotch whiskey that is not
> dissimilar to Glenmorangie (that being my favorite). Whether this
> thesis proves to be possible or not, I intend to either succeed or
> have a coffin made of my research data papers. Needless to say, I
> don't need flame-mail on how naive I probably am, since I know what
> I am, and that is a gromit at distilling.
> I am not, however, unblooded when it comes to fermentation: I have
> been making my own malt, brewing my own beer, and making my own wine
> since I was 11, but I have never distilled.
> I would like to offer my experiences to this group as I breach the
> world of distillation, and perhaps you might re-live your first
> times as I experience mine.
> As I write this, I have the pieces of my still laying on the floor,
> and in the coming weeks I intend to assemble it and start my first
> run... however, I don't want to fail before I begin... I read some
> startling theory on the outcome of flavors on homedistiller.org and
> I quote:
>
> "... progress of vapours into the neck. The subsequent, sudden
> widening of the neck, and relatively cooler temperature,
> consequently increases reflux."
>
> There is more, but that I believe is sufficient to pose a question
> as to what my column should look like. At the moment the design plan
> that I am following is a duplicate of the valved-reflux column found
> at. My grand plan was to un-tweak the reflux column and use it in
> part as a modified pot-still.
However basedon > some of the new readings I am thinking that this perhaps is notthe > best way to continue. I would like some qualified opinions ifanyone > is willing to take the time to offer them. Because this is ahobby I > am somewhat limited in my budget by my better-half so baring thatin > mind would be appreciated. I have the chance here of modifying myto > future before I weld it firmly in place so to speak, and I choose > wait a minute and find out what more experienced people say, sinceWelcome to the black arts, Sasha. ;-)) > this may be my only shot until I save some more to manufacture > another still-head. > > Sincerely. > Me. > Well I admire your choice of challenges for your first foray into distilling. Making Scotch of whatever flavour is no easy task, even for the few among us who have spent some years at it. However, it is do-able, and seeing as how you have a background of beer brewing, then you have a definite headstart. So, on with the quest... ;-)) Cloning Glenmorangie Part I Still design: By and large, copying still designs of the old Scotch distilleries is not really much use to a homedistiller, unless you've got a lot of money, time, metalsmith skills and patience. They don't scale well. It is better to understand what the various designs achieve in the final product, then build a standard home potstill with a few tweaks to mimic the outcome of their much larger cousins. Glenmorangie potstills have a neck (column) some 5.5 metres tall, quite a bit more than the average, in fact the tallest of the Scotch potstills. At the base of the neck, where it joins the boiler lid, is a spherical shape known as a 'boil-ball'. The boil-ball and the overlength neck have but one purpose; to produce as much reflux and separation as possible in a potstill. This brings about an end product considerably lighter in congeners compared to other Scotch Malts. 
This, combined with Glenmorangie's much stricter cuts (1/5, most others cut 1/3), makes a product that homedistillers should be able to duplicate more easily than the heavier malts, as homedistillers' potstills invariably make lighter spirits than commercial stills.

At this point I must say one thing. Homedistillers' reflux stills are NOT potstills. However, it is quite possible to use them as potstills if you remove about 3/4 of the mesh packing, leaving just a little to increase the separation, just as the boil-ball does in the commercial stills. If you already have a potstill, you can achieve a similar result by angling the lyne arm at the top of the neck to a 45° angle for about 60cm of length, then directing it downward to the condenser or worm.

The suggested tweaking methods for the two still designs will give product lighter than the usual, which is the first step in trying to duplicate Glenmorangie. By the way, it hasn't yet been decided just WHICH Glenmorangie is to be cloned. Is it to be the Glenmorangie 10 Years? Glenmorangie 18 Years? Glenmorangie Madeira Wood Finish? Glenmorangie Port Wood Finish? Glenmorangie Sherry Wood Finish? Or just Glenmorangie in general, then worry about the finish later? As a famous TV ad from years gone by said, "Oils ain't oils, Sol".

Part II to follow, when I get my second wind.
Slainte!
regards
Harry
Moderator

Practice aging 90%abv vodka, as some 80% of the characteristics come from aging in oak barrels (ex-Spanish sherry, ex-US bourbon). Then take into account barley quality, peat-dried malt quality, mashing technique, yeast variety, distillation technique, water quality. But then Glenmorangie never set out to copy anybody - so feel free to make your own house style.
wal

--- In Distillers@yahoogroups.com, "Sasha" <blackrabbit.namespace@h...> wrote:
>
> Greetings,
> I have been spending a lot of time reading everything I can get my
> hands on about distilation but I cannot say that I am not totally
> overwhelmed, there seems to be as many theories about distilates as
> there is grains in a field.
> The goal that I have set is to produce a Scotch whiskey that is not
> disimilar to Glenmorangie (that being my favorite) weather this
> thesis prove to be possible or not I intend to either succeed or have
> a coffin made of my research data papers. Needless to say I don't
> need flame-mail on how nieve I probabily am, since I know what I am
> and that is a gromit at distilling.
>
https://groups.yahoo.com/neo/groups/Distillers/conversations/topics/33397
The code below is the first step I have taken to create a program for an assignment that is supposed to read from a file that contains information about gold medal winners from the Olympics. I have attached the .txt file I am using as my source for input. My problem is that for some reason, my BufferedReader starts reading a line 3/4 of the way through the file, which is not what I need; I need it to start reading from the first line! Any help is greatly appreciated; my code is below! Thanks!

import java.io.*;

public class TestGoldMedals {
    public static void main(String[] args) {
        BufferedReader input;
        String line;
        try {
            input = new BufferedReader(new FileReader("a2a.txt"));
            line = input.readLine();
            while (line != null) {
                line = input.readLine();
                System.out.println(line);
            }
            input.close();
        } catch (IOException ioe) {
            System.out.println(ioe.getMessage());
        }
    }
}
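Incidentally, the loop as posted has a subtle bug worth noting: the result of the first readLine() is overwritten before it is ever printed, so the first line is always skipped, and the final iteration prints null. The usual fix is to read inside the while condition. The sketch below is a reworked version, not the original poster's code; the readLines helper and its name are ours:

```java
import java.io.*;
import java.util.*;

public class ReadAllLines {
    // Collect every line from the reader, starting with the first one.
    static List<String> readLines(Reader src) throws IOException {
        List<String> lines = new ArrayList<>();
        try (BufferedReader input = new BufferedReader(src)) {
            String line;
            // The read happens in the loop condition, so no line is
            // skipped and the terminating null is never added.
            while ((line = input.readLine()) != null) {
                lines.add(line);
            }
        }
        return lines;
    }

    public static void main(String[] args) throws IOException {
        // Print every line of the file named on the command line, if any.
        if (args.length > 0) {
            for (String line : readLines(new FileReader(args[0]))) {
                System.out.println(line);
            }
        }
    }
}
```

The try-with-resources block also guarantees the reader is closed even if an exception is thrown mid-read, which the original try/catch does not.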
https://www.javaprogrammingforums.com/file-i-o-other-i-o-streams/11811-bufferedreader-starting-read-line-3-4-way-through-file-help.html
#include <stddef.h>
#include <sys/types.h>
#include "my_dbug.h"
#include "my_inttypes.h"
#include "my_sqlcommand.h"
#include "sql/query_result.h"
#include "sql/sql_cmd_dml.h"
#include "sql/sql_data_change.h"
#include "sql/sql_list.h"
#include "sql/table.h"

Go to the source code of this file.

Check that all fields that aren't null_fields are used.

Prepare triggers for an INSERT-like statement.

Validates the default values of fields which are not specified in the column list of an INSERT statement.

Write a record to a table, with optional deletion of conflicting records; invoke the proper triggers if needed. Once this record is written to the table buffer, any AFTER INSERT trigger will be invoked. If instead of inserting a new record we end up updating an old one, both ON UPDATE triggers will fire instead. Similarly, both ON DELETE triggers will be invoked if we are to delete conflicting records.

Call thd->transaction.stmt.mark_modified_non_trans_table() if the table is a non-transactional table.

< Flag for fatal errors
https://dev.mysql.com/doc/dev/mysql-server/latest/sql__insert_8h.html
Microsoft Corporation October 2002 Applies to: Microsoft® Visio® Professional 2002 Microsoft Visio Standard 2002 Summary: The Visio 2002 Primary Interop Assembly (PIA) is available for download as part of the Office XP PIA download. Use this PIA to integrate Visio with your managed code applications. (1 printed page) Download the Oxppia.exe from the Microsoft Download Center. The Microsoft® Visio® 2002 PIA is now available as part of the Microsoft Office XP Primary Interop Assemblies Web download. The Visio 2002 PIA allows developers to create Microsoft .NET Framework managed code solutions that integrate with Visio 2002, which is a Component Object Model (COM)-based application. It is strongly recommended that managed code developers use the Visio 2002 PIA available in the Office XP PIA Web download for interoperability with Visio 2002. The Visio 2002 PIA (Microsoft.Office.Interop.Visio.dll) in the Office XP PIA Web download has a different name and namespace than the Visio 2002 SDK PIA (Microsoft.Visio.Interop.dll), as well as several important bug fixes and updates. In addition, the Visio 2002 PIA in the Office XP PIA Web download is installed in the Global Assembly Cache (GAC). For more information about the updates in the Visio 2002 PIA, refer to Office XP Primary Interop Assemblies Known Issues. If you have developed a managed code application using the Visio 2002 SDK PIA, you should update your application to use the Visio 2002 PIA from the Office XP PIA Web download. All managed code components that share Visio objects must use the same Visio 2002 PIA. The Visio 2002 PIA object types in the Office XP PIA Web download are not compatible with the object types in the Visio 2002 SDK PIA or with any other interop assemblies based on the Visio type library.
http://msdn.microsoft.com/en-us/library/aa140373(office.10).aspx
On Thu, Jan 07, 2010 at 10:03:51AM -0500, Michael Breuer wrote:
> On 1/7/2010 3:21 AM, Jarek Poplawski wrote:
> > On Thu, Jan 07, 2010 at 02:55:20AM -0500, Michael Breuer wrote:
> > > Unless I misread the code, I think that in some cases the skb is actually
> > > freed if the cfq (among others perhaps) scheduler returns an error on
> > > enqueue (flow control perhaps). Thus with alternative 1, it is possible
> > > that the skb is acted upon after being freed - this would be consistent
> > > with the DMAR errors I saw.
> >
> > I can't see your point: could you give some scenario?
> >
> > Jarek P.
>
> With NET_CLS_ACT set, net_dev_enqueue can return an error after
> freeing the skb. Alternative 1 disregards the error and assumes the
> skb is still valid. The original code and alternative 2 exit the
> loop assuming the skb has been freed.

Not exactly: alternative 1 disregards the error, and tries to send
next skbs if the message was longer. After consuming all the message
it returns without an error code (at least wrt. dev_queue_xmit). It is
quite common practice to skip dev_queue_xmit()'s return value (try to
grep in net/). It should never touch any part of an earlier sent skb.

Jarek P.
https://lkml.org/lkml/2010/1/7/240
I have 2 scenes and wish to pass info from one to another. I have created in the first scene a script Constants on an empty game object that retains the data that I wish to save for the next scene. Here it is:

public class Constants : MonoBehaviour {
    public string scratchImageFront;
    public string scratchImageBack;

    void setScratchFront(string _scratchFront) {
        scratchImageFront = _scratchFront;
    }

    void setScratchBack(string _scratchBack) {
        scratchImageBack = _scratchBack;
    }
}

Before moving to the next scene, I make a call so the Constants object is not destroyed:

GameObject constants = GameObject.Find("Constants");
Constants script;
script = constants.transform.GetComponentInChildren<Constants>();
DontDestroyOnLoad(constants);
Application.LoadLevel("scratch");

I then collect the data in my second scene and all is good. After I'm done in my second scene, I go back to my first scene:

Application.LoadLevel("start");

and redo the same steps as I first did, but the Constants script doesn't retain the new strings passed to it; it retains the old ones. Why? I need to destroy the Constants script in my second scene so that everything works as it should, but I don't want this. What am I doing wrong?

I know someone is going to disapprove of this idea. I personally hate the scene changing experience because of this problem. And I hated that there wasn't a direct way to save data after you end runtime. So I solved both problems by saving all relevant data values to a text file, which you can look up how to do in C# with FileStream, I believe; to get it to work with Unity you have to look up Application.dataPath.

Answer by fafase · Oct 19, 2012 at 02:25 PM

A simple way that is ok here is to use a static class.

static class DataToStore {
    var data1 : int;
    var data2 : int;
    var wava : string;
}

Put this in a script, no need to give the script to any object.
In your game, just call:

DataToStore.data1 = theDataToStore;

A static class is made of static variables that are created before the game starts and are destroyed after the game is over (or crashes). You can then access your data anywhere and anytime in your game. It can be convenient to have a storage class for info that you want to keep all over the game, like score and achievements.
https://answers.unity.com/questions/334959/not-passing-new-values-to-object-that-has-not-been.html
Aug 5 JavaScript – Much more than Java’s Mini-Me Filed Under Computers & Tech, Software Development on August 5, 2006 at 9:01 pm Something that has annoyed me for a long time is that JavaScript is looked on by many people as just being a stripped down version of Java. You take Java, you take out most of the features and you get JS. This is completely wrong. The two are two completely different languages which follow different paradigms. One is Hard Typed and Object Oriented, the Other is Loosely Typed and Object Based. To give you an idea of just how different the languages are I would say that Java is to JavaScript like C/C++ is to Perl. I.e. they are completely different languages in just about every respect but their syntax is superficially similar. Far from being a stripped down version of Java, JS is in many ways a more powerful language and is certainly more feature-rich. And I’m not talking about little conveniences that make programming a little easier but major features that make some things all but impossible to do with Java but which JS does simply and naturally. In this article I’m going to look at some of these features. While I was writing this article, I came up with many less dramatic advantages which JS has over Java, which just make things easier with JS. Initially I had also included those in this article but they made it too long for the modern attention span. Instead, I’m compiling them into a separate article with the working title Hidden JS which I hope to publish within the next week or so. The inspiration for this article was a post by Joel Spolsky entitled Can your programming language do this? which details one of the advantages JS has over Java. Functions as Variables This is the feature Joel discussed in great detail so go read his article, he explains it perfectly so I’m not going to waste my time doing it again (and probably less well!). 
Multiple Inheritance

Before I get into the guts of this one it's probably a good idea to point out that JS does all the regular OO stuff. It lets you create objects to encapsulate data and functions, and does inheritance and polymorphism. Yes, it's true that it does it in an odd way but it does it all none the less (and no less oddly than Perl). In fact, it does it in multiple ways and the different ways have different advantages and disadvantages. The way I do OO in JS could best be described as creating classes by writing one function that acts as their constructor. Everything is created within this one function: data structures, functions, and, occasionally, even inner classes. To illuminate this let's have a look at a very simple example, a class to represent a lamp:

[js]
//create the class
function Lamp(initialState){
  this.on = initialState;
  this.turnOn = function(){
    this.on = true;
  };
  this.turnOff = function(){
    this.on = false;
  };
  this.isOn = function(){
    return this.on;
  };
}

//use the class
var myLamp = new Lamp(true);
window.alert(myLamp.isOn());
myLamp.turnOff();
window.alert(myLamp.isOn());
[/js]

The above code shows how to create a class with one property, on, and three functions. What happens is that when you create an object with new, a new blank object gets passed to the function after the word new and that function builds up the object by adding properties and functions to it. Given this it is not hard to imagine how one might implement inheritance: call another function on the object you're instantiating from within your 'constructor' function. Somewhat annoyingly you do this in two steps: first you map a pointer to the other constructor as some property of your new object, then you call it on yourself.
I think an example may make things a little clearer, so below is the code for making a coloured lamp by extending our lamp class from above:

[js]
//create a coloured lamp class
function ColouredLamp(initState, initColour){
  //first inherit from Lamp
  this.inheritFrom = Lamp;
  this.inheritFrom(initState);

  //then create any extra attributes needed
  this.colour = initColour;

  //then create any extra functions needed
  this.getColour = function(){
    return this.colour;
  };
  this.setColour = function(c){
    this.colour = c;
  };
}
[/js]

Now the thing to note is that there is nothing special about the name I chose for the label for the constructor function we are grabbing from outside (inheritFrom). The line that does the inheriting is really no different to calling any function the object contains on itself. Hence, there is no reason you can't inherit from more constructors. You can, in fact, inherit from as many as you like. Let us now imagine that we have created a simple class to represent a paper weight with the properties weight and shape. Then, let us imagine that we were mad inventors and we had just invented a combined paperweight and lamp and called it a Paper-Ilumin-Weight and that we wanted a JS object to represent it. We could start from scratch and implement a class with the properties colour, state, weight, and shape. But wait, we already have classes for doing both these things complete with all the functions we need for manipulating them. Hence we can just re-use both classes to create our PaperIluminWeight class like so:

[js]
//create our PaperIluminWeight class
function PaperIluminWeight(initColour, initShape, initWeight){
  //inherit from ColouredLamp
  this.inheritFrom = ColouredLamp;
  this.inheritFrom(false, initColour);

  //inherit from PaperWeight
  this.alsoInheritFrom = PaperWeight;
  this.alsoInheritFrom(initShape, initWeight);
}
[/js]

Now let's have a look at the inheritance tree for the PaperIluminWeight class. Can you do this with Java?
In a word, no, because Java does not allow multiple inheritance. You can achieve some of this functionality with interfaces, but not a lot. In Java you would have to choose one of the two parent classes to inherit from and then re-create the functionality for the other. Because ColouredLamp is more complex (having already inherited from Lamp), let's choose that class to inherit from. If we then want to be able to interact with a PaperIluminWeight object as a Lamp or a ColouredLamp it will work fine, but we won't be able to interact with it as a PaperWeight. We can make a fudge and re-name our class PaperWeight to PaperWeightImplementation and create an interface called PaperWeight that defines the functions that all Paper Weights should have and then have both PaperWeightImplementation and PaperIluminWeight implement that interface, but that only gets us half way there. We can now use PaperIluminWeight objects as Paper Weights within our code but, and this is a hugely important but, we still have to re-implement all the functionality in the PaperWeightImplementation class in the PaperIluminWeight class. I.e. copy and paste the whole lot in. Now, you don't need to be a software engineering guru to see that that is an exceptionally bad thing. Just as the ability to pass functions as variables results in more code re-use and less code replication, so multiple inheritance in JS does the same. JS lets us re-use code in a situation where Java would have us slavishly re-type the same code again with all the software engineering problems that that brings.

Dynamically Build and Execute Code

Most scripting languages will allow you to build up a string programmatically and then execute it as code and grab the result. This is generally done with a method called eval, which is exactly how JS does it. Java does not support this. There is no eval method in Java.
If you REALLY need to create Java code on the fly and execute it you can do it in a number of really convoluted ways, but for all intents and purposes Java does not support dynamic code generation and execution. You might wonder why you might want to do that though. So far I have not used it often but I have had to use it. Sometimes I use it for form validation when the form to validate could have a variable number of elements with the same name pattern, but this is not a great example because you can also achieve this functionality with DOM objects like document.forms. A better example would be a simple calculator widget where you let users build up a string in a disabled text box by clicking buttons on your onscreen calculator and then simply get the answer by putting the value property of that textbox through eval.

One Function, as Many or as Few Arguments as You Like

In JS you can pass as many arguments to a function as you like and that function can then decide what to do with those arguments on the fly. You might wonder what I'm on about so let me give you an example. You have to write a function that takes any number of integers and adds them together. Before we do this in JS let's do it in Java:

[java]
public static int sum(int[] nums){
  int ans = 0;
  for(int i = 0; i < nums.length; i++){
    ans += nums[i];
  }
  return ans;
}
[/java]

Because Java insists on a fixed argument list, we have to wrap our numbers up into a single container before passing them in: an array here, though we could equally have used a java.util.LinkedList or a java.util.Vector. What wrapper we decide to use is irrelevant, the point is that we have to somehow wrap our variable length list of arguments into a single argument to keep Java happy. Now let's have a look at the right way to implement this in JS:

[js]
function sum(){
  var ans = 0;
  for(var i = 0; i < arguments.length; i++){
    ans += arguments[i];
  }
  return ans;
}
[/js]

Inside every JS function there is a special variable called arguments. JS, like Perl, stores its arguments in an array but lets you name some or all of them for convenience if you want. (Unlike Perl, JS does let you pass an array literal as a single argument.) You would be right in saying that there is very little difference between the JS version and the Java version when it comes to writing the function.
There is a line-for-line equivalence between the functions. Where the difference becomes obvious is when you go to use your functions, as illustrated below. Java first:

[java]
int[] numbers1 = {1, 2, 3, 4};
int ans1 = sum(numbers1);

int[] numbers2 = {5, 6, 7, 8};
int ans2 = sum(numbers2);
[/java]

Now the JS:

[js]
var ans1 = sum(1, 2, 3, 4);
var ans2 = sum(5, 6, 7, 8);
[/js]

You'll notice that the JS code is much simpler. There is only one step involved: call the function and pass it what you want. The Java code on the other hand needs to prepare the array each time before sending it. Things get even worse if you want to do this kind of thing with Objects rather than literals. Have a look at the nasty Java code below:

[java]
Integer[] numbers1 = new Integer[4];
numbers1[0] = new Integer(1);
numbers1[1] = new Integer(2);
numbers1[2] = new Integer(3);
numbers1[3] = new Integer(4);
Integer sum1 = sum(numbers1);

Integer[] numbers2 = new Integer[4];
numbers2[0] = new Integer(5);
numbers2[1] = new Integer(6);
numbers2[2] = new Integer(7);
numbers2[3] = new Integer(8);
Integer ans2 = sum(numbers2);
[/java]

Now the power of this feature of the JS language should be obvious. More perceptive readers have, at this stage, probably realised that this means that JS does not do function overloading, and you would be right. In each scope/namespace there is only one function with a particular name. If you create a second function with the same name but a different argument list it will replace the first one, ignore the second or throw an error, depending on your implementation of JS. You are now probably asking yourself, is this a problem? The answer is no, it isn't. You do all the work within one function and if you need to do different things depending on what arguments you got, you have a look at what's in the arguments array and take appropriate action.
In fact, function overloading is nothing more than a kludge to allow strongly typed languages like Java to approximate variable length argument lists. In general when two functions overload each other they do the same thing but with different data, and you just do some converting/default setting and then pass the data on to the overloaded function that does the actual work. In JS you just write one function that does the work regardless of how you called it. Anything you can do with multiple overloaded functions you can do with one single function in JS, and TBH that makes a lot more sense to me; less hoop-jumping and all that.

Edit Classes on the Fly

Although it is easy, in Java, to take the functionality from any System object (like java.lang.String) and add it to an object of your own (you just extend the class), it is not possible to add functionality to the System classes. You can't, for example, decide that there should be a function in java.lang.Integer to return an Integer which is made up of the digits of the original Integer reversed and add it in to the class (no idea why you'd want to, mind!). No prizes for guessing, but in JS you can. You can take any of the classes of object that come with JS and add your own functionality to them. The reason you can do this is because of the prototype variable that is associated with each object class. When a function is called on an object there are two places that JS looks to find the function to invoke. First it looks to see if the object has a variable of type function with the right name; failing that, it looks in the prototype variable for the class that the object belongs to, to see if there is a mapping there. We have already seen the former when creating classes in the second section of this article. In the code examples in that section we created the properties and the functions for an object all within the 'constructor', adding functions by setting properties of the class equal to function literals.
You don't have to do it that way. You can add the functions in separately at any stage using the prototype variable, but this makes things more difficult for inheriting because prototyped functions cannot be inherited simply by calling a constructor function; they have to be explicitly copied to the inheriting class' prototype variable. This is why I don't use prototype when creating classes. Let's now re-visit our sum function from the previous section. Rather than having the function as a standalone function that you then have to pass an array to, it would be much nicer to add the summing functionality straight into JS's Array class. Below is the code to do just that:

[js]
Array.prototype.sum = function(){
  var ans = 0;
  for(var i = 0; i < this.length; i++){
    ans += this[i];
  }
  return ans;
};
[/js]

With that in place, any array can sum itself:

[js]
var myArrayOfInts = [1, 2, 3, 4];
window.alert(myArrayOfInts.sum());

//because JS is loosely typed even the following will work
//(printing out the Array contents concatenated together)
var mixedArray = ["here", " ", "are", " ", "some", " ", "numbers:", " ", 1, 2, 3, 4];
window.alert(mixedArray.sum());
[/js]

If you're already impressed, you're about to become even more impressed. The more observant among you will have noticed that if you add functions to an object within its constructor, each object has its own copy of the function. This has a downside, it takes up more RAM than the prototyping method, but it makes inheritance easier to implement and it also adds another very interesting ability to the language. In JS it is possible to have two objects of the same type with different implementations of the same function. Take a moment to let that sink in. Without having to create a new class you can make different instances of the same class have different implementations of functions. You just can't do that in Java.
You'd have to have a base class with the default implementation of the function in a parent class (or an abstract function if there is no default) and then a separate subclass overriding the function for each different implementation. If you have a couple of functions that need a couple of implementations, things get very out of hand very very quickly. That's great, but why on Earth would you want to do that? Well let me give you an example from my everyday life. You want to write a nice object to encapsulate a generic AJAX request because your site uses a lot of AJAX and you're tired of typing the same code again and again and would like everything to do with a request encapsulated in an object. You start coding this up and it all goes well till you get to your function for handling state changes in the HTTP Request object, in which you have to actually DO something with the response you got from the server. The thing is, you want to do something different for each instance of your class. In Java this is not possible, so you would have to write a base class which implements all the functionality for talking to the server and then make your function to actually do something with the response (lets call it process) abstract. For each different thing you want to do with AJAX you would then have to create a new subclass of your base class and implement process. That is a lot of work for no good reason. In JS you can just create an instance of your AJAX object, plug in a process function of your choice using a function literal and then use your object. In fact, my partner wrote a little JS class to encapsulate AJAX in JS in this way which will be released as GPL soon.
This object gets used something like this:

[js]
var req1 = new AjaxReq("POST", "", "var1=val1&var2=val2");
req1.process = function(req){
  document.getElementById('myDynamicDiv').innerHTML = req.responseText();
}
req1.send();

var req2 = new AjaxReq("POST", "", "var1=val1&var2=val2");
req2.process = function(res){
  window.alert(res.responseText());
}
req2.send();
[/js]

You'll note that this code is much simpler and neater than having to create a new class each time you want to do something different with the result you get from the server.

Conclusions

This is by no means an exhaustive list of the cool things that JS can do that Java has trouble with or does less well, yet it is a substantial list with some very powerful features. Perl can also do just about everything in this article and there are probably other languages that can too, but Java, C and C++ are not among those languages. Something else to note is that the two features that I consider most powerful (multiple inheritance and the ability to edit classes on the fly) are made possible because JS supports function literals, which is what sparked Joel's article in the first place. The simple act of allowing you to create a function in the same way as a variable and then pass it round as if it was a variable gives JS immense power which Java and its ilk just do not have. JS used to be a language confined to just the HTML page, but it is now growing beyond that. JS is used to power Firefox extensions, Mac OS X Dashboard Widgets and Yahoo Desktop Widgets (or whatever Confabulator is now called) as well as other non-web applications. Considering how powerful the language is, it seems a shame that most people don't realise what it can do or that it has now broken free from the HTML page. I have no doubt that we will see JS branch out into even more applications soon and I for one am really looking forward to that.

[tags]Java, JavaScript, Programming[/tags]

Good stuff Bart.
My last comment disappeared so this is what I saved from it:

1) Multiple Inheritance is one of those concepts that seems like a great idea, and was heralded as C++'s killer feature (back when C++ was the King of OO), but I am still stuck for an example of when it's actually useful to use Multiple Inheritance. It always seems to come with incredible complexity, and out of a desire to use Multiple Inheritance, rather than a desire to do the right thing.

2) You've probably read this already, but it's good stuff re: Eval

3) A good guide on the "gotchas" in AJAX, if you'd like one, is here…

Thanks Des,

Those two links are handy ones for AJAX heads and the advice on eval is sound. Just because you can use it does not mean you should. It can be really great (as in the AJAX example) but it should be used with caution.

As for Multiple Inheritance, my experience has been that in general the real world we are trying to model tends to be hierarchical in a simple way, so naturally modeling it will also generally result in simple inheritance trees and hence you don't need multiple inheritance often, but there are good real world applications for it. So far I think I've used it twice in real world code. Both times for doing very complex wizard interfaces in JS. I'm trying to remember the exact details but it basically involved GUI elements that were a combination of two other GUI elements. I think one of them was a submenu inheriting from menu item and from menu, or something like that. Multiple Inheritance is not needed in many situations, but when it is needed it saves you from code duplication, which is an evil I try to avoid.
https://www.bartbusschots.ie/s/2006/08/05/javascript-much-more-than-javas-mini-me/
Mission Briefing To create the Processing sketches for this project, we will need to install the Processing library ttslib. This library is a wrapper around the FreeTTS Java library that helps us to write a sketch that reads out text. We will learn how to change the voice parameters of the kevin16 voice of the FreeTTS package to make our robots' voices distinguishable. We will also create a parser that is able to read the Shakespeare script and which generates text-line objects that allow our script to know which line is read by which robot. A Drama thread will be used to control the text-to-speech objects, and the draw() method of our sketch will print the script on the screen while our robots perform it, just in case one of them forgets a line. Finally, we will use some cardboard boxes and a pair of cheap speakers to create the robots and their stage. The following figure shows how the robots work: Why Is It Awesome? Since the 18th century, inventors have tried to build talking machines (with varying success). Talking toys swamped the market in the 1980s and 90s. In every decent Sci-Fi novel, computers and robots are capable of speaking. So how could building talking robots not be awesome? And what could be more appropriate to put these speaking capabilities to the test than performing a Shakespeare play? So as you see, building actor robots is officially awesome, just in case your non-geek family members should ask. Your Hotshot Objectives We will split this project into four tasks that will guide you through the general construction of the robots from beginning to end. Here is a short overview of what we are going to do: - Making Processing talk - Reading Shakespeare - Adding more actors - Building robots Making Processing talk Since Processing has no speaking capabilities out of the box, our first task is adding an external library using the new Processing Library Manager.
We will use the ttslib package, which is a wrapper library around the FreeTTS library. We will also create a short, speaking Processing sketch to check the installation. Engage Thrusters - Processing can be extended by contributed libraries. Most of these additional libraries can be installed by navigating to Sketch | Import Library… | Add Library…, as shown in the following screenshot: - In the Library Manager dialog, enter ttslib in the search field to filter the list of libraries. - Click on the ttslib entry and then on the Install button, as shown in the following screenshot, to download and install the library: - To use the new library, we need to import it to our sketch. We do this by clicking on the Sketch menu and choosing Import Library… and then ttslib. - We will now add the setup() and draw() methods to our sketch. We will leave the draw() method empty for now and instantiate a TTS object in the setup() method. Your sketch should look like the following code snippet: import guru.ttslib.*; TTS tts; void setup() { tts = new TTS(); } void draw() { } - Now we will add a mousePressed() method to our sketch, which will get called if someone clicks on our sketch window. In this method, we are calling the speak() method of the TTS object we created in the setup() method. void mousePressed() { tts.speak("Hello, I am a Computer"); } - Click on the Run button to start the Processing sketch. A little gray window should appear. - Turn on your speakers or put on your headphones, and click on the gray window. If nothing went wrong, a friendly male computer voice named kevin16 should greet you now. Objective Complete – Mini Debriefing In steps 1 to 3, we installed an additional library to Processing. The ttslib is a wrapper library around the FreeTTS text-to-speech engine. Then we created a simple Processing sketch that imports the installed library and creates an instance of the TTS class. The TTS objects match the speakers we need in our sketches. 
In this case, we created only one speaker and added a mousePressed() method that calls the speak() method of our tts object. Reading Shakespeare In this part of the project, we are going to create a Drama thread and teach Processing how to read a Shakespeare script. This thread runs in the background and is controlling the performance. We focus on reading and executing the play in this task, and add the speakers in the next one. Prepare for Lift Off Our sketch needs to know which line of the script is read by which robot. So we need to convert the Shakespeare script into a more machine-readable format. For every line of text, we need to know which speaker should read the line. So we take the script and add the letter J and a separation character that is used nowhere else in the script, in front of every line our Juliet-Robot should speak, and we add R and the separation letter for every line our Romeo-Robot should speak. After all these steps, our text file looks something like the following: R# Lady, by yonder blessed moon I vow, R# That tips with silver all these fruit-tree tops -- J# O, swear not by the moon, the inconstant moon, J# That monthly changes in her circled orb, J# Lest that thy love prove likewise variable. R# What shall I swear by? J# Do not swear at all. J# Or if thou wilt, swear by thy gracious self, J# Which is the god of my idolatry, J# And I'll believe thee. Engage Thrusters Let’s write our parser: - Let’s start a new sketch by navigating to File | New. - Add a setup() and a draw() method. - Now add the prepared script to the Processing sketch by navigating to Sketch | Add File and selecting the file you just downloaded. - Add the following line to your setup() method: void setup() { String[] rawLines = loadStrings ( "romeo_and_juliet.txt" ); } - If you renamed your text file, change the filename accordingly. - Create a new tab by clicking on the little arrow icon on the right and choosing New Tab. - Name the class Line. 
This class will hold our text lines and the speaker. - Add the following code to the tab we just created: public class Line { String speaker; String text; public Line( String speaker, String text ) { this.speaker = speaker; this.text = text; } } - Switch back to our main tab and add the following highlighted lines of code to the setup() method: void setup() { String[] rawLines = loadStrings( "romeo_and_juliet.txt" ); ArrayList lines = new ArrayList(); for ( int i = 0; i < rawLines.length; i++ ) { String[] parts = split( rawLines[i], "#" ); if ( parts.length == 2 ) { lines.add( new Line( trim( parts[0] ), trim( parts[1] ) )); } } } - We have read our text lines and parsed them into the lines array list, but we still need a class that does something with our text lines. So create another tab by clicking on the arrow icon and choosing New Tab from the menu; name it Drama. - Our Drama class will be a thread that runs in the background and tells each of the speaker objects to read one line of text. Add the following lines of code to your Drama class: public class Drama extends Thread { int current; ArrayList lines; boolean running; public Drama( ArrayList lines ) { this.lines = lines; current = 0; running = false; } public int getCurrent() { return current; } public Line getLine( int num ) { if ( num >=0 && num < lines.size()) { return (Line)lines.get( num ); } else { return null; } } public boolean isRunning() { return running; } } - Now we add a run() method that gets executed in the background if we start our thread. Since we have no speaker objects yet, we will print the lines on the console and include a little pause after each line. public void run() { running = true; for ( int i =0; i < lines.size(); i++) { current = i; Line l = (Line)lines.get(i); System.out.println( l.text ); delay( 1 ); } running = false; } - Switch back to the main sketch tab and add the highlighted code to the setup() method to create a drama thread object, and then feed it the parsed text-lines. Drama drama; void setup() { ... drama = new Drama( lines ); } - So far our sketch parses the text lines and creates a Drama thread object. What we need next is a method to start it. So add a mousePressed() method to start the drama thread.
void mousePressed() { if ( !drama.isRunning()) { drama.start(); } } - Now add a little bit of text to the draw() method to tell the user what to do. Add the following code to the draw() method: void draw() { background(255); textAlign(CENTER); fill(0); text( "Click here for Drama", width/2, height/2 ); } - Currently, our sketch window is way too small to contain the text, and we also want to use a bigger font. To change the window size, we simply add the following line to the setup() method: void setup() { size( 800, 400 ); ... } - To change the font used, we need to tell Processing which font to use. The easiest way to find out the names of the fonts that are currently installed on the computer is to create a new sketch, type the following line, and run the sketch: println(PFont.list()); - Copy one of the font names you like and add the following line to the Romeo and Juliet sketch: void setup() { size( 800, 400 ); textFont( createFont( "Georgia", 24 )); ... - Replace the font name in the code lines with one of the fonts on your computer. Objective Complete – Mini Debriefing In this section, we wrote the code that parses a text file and generates a list of Line objects. These objects are then used by a Drama thread that runs in the background as soon as anyone clicks on the sketch window. Currently, the Drama thread prints out the text line on the console. In steps 6 to 8, we created the Line class. This class is a very simple, so-called Plain Old Java Object (POJO) that holds our text lines, but it doesn't add any functionality. The code that is controlling the performance of our play was created in steps 10 to 12. We created a thread that is able to run in the background, since in the next step we want to be able to use the draw() method and some TTS objects simultaneously. The code block in step 12 defines a Boolean variable named running, which we used in the mousePressed() method to check if the sketch is already running or should be started.
Classified Intel In step 17, we used the list() method of the PFont class to get a list of installed fonts. This is a very common pattern in Processing. You would use the same approach to get a list of installed midi-interfaces, web-cams, serial-ports, and so on.
https://hub.packtpub.com/romeo-and-juliet/
Tech Blog Tensorflow 2.0 What is new in TensorFlow 2.0 Overview Google released the TensorFlow 2.0 alpha version in March 2019. The official 2.0 release is expected in Q2 2019. TensorFlow 2.0 promises simplicity and ease of execution while maintaining TensorFlow’s flexibility and scalability. This post outlines the key differences between TensorFlow 1.X and 2.0, and explains how to upgrade to TensorFlow 2.0. Key changes Eager execution TensorFlow 2.0 uses eager execution by default. The result is more intuitive, object oriented and pythonic. Nothing like a simple example to conceptualize the difference: import tensorflow as tf sum = tf.add(1, 2) If we print the sum result in TensorFlow 1.X, we get the operation definition in the graph and not the actual result. It is only when the operation is executed inside a session that we get the expected result: # Using TensorFlow 1.X print(sum) # Tensor("Add:0", shape=(), dtype=int32) with tf.Session() as sess: print(sess.run(sum)) # 3 If we repeat the operation in TensorFlow 2.0, the result is as follows # Using TensorFlow 2.0 print(sum) # tf.Tensor(3, shape=(), dtype=int32) Eager execution is intuitive and corresponds to the expected behavior when using Python. It also has some practical benefits. For example, it simplifies debugging as the code does not wait for the session to be executed and, potentially, fail. This allows using print() to quickly inspect results for debugging. Applications that require graphs can still use them with TensorFlow 2.0, but graphs are no longer the default behavior. Keras TensorFlow 2.0 consolidates high-level APIs under the Keras header. A great decision, in our opinion. Keras’ focus on ease to learn and use are ideal for a high-level API. Keras’ capabilities have also been greatly expanded. All advanced features from TensorFlow can now be accessed through Keras API. 
Google has explained that the design objective was to minimize the effort to move from experimenting with Keras to production. An example of the new Keras capabilities is tf.distribute that allows distributing model training through multiple GPUs. Google claims that tf.distribute and Keras achieve 90% scaling efficiency across multiple GPUs. In addition to being simple, Keras API is now scalable. As part of its simplification process, TensorFlow 2.0 has removed duplicated functionality throughout its API. In the case of high-level API, layers ( tf.layers) will now only be accessible through Keras ( tf.keras.layers). Losses and metrics that used to be duplicated between Keras and TensorFlow are now grouped into single sets. Remove global namespaces TensorFlow 1.X relied on global namespaces and, indirectly, the assumption that there was a single model per graph. Again, this was not the expected Python behavior. After defining the same variable twice # Using TensorFlow 1.X x = tf.Variable(1, name='x1') x = tf.Variable(2, name='x2') the first definition with namespace “x1” was not garbage collected, even if we lost track of the Python variable pointing to it. The tf.Variable “x1” could be recovered, but only if the name that it had been created with was known. This behavior was the root of many mistakes among new TensorFlow users. For example, issues in TensorFlow 1.X can arise through name conflicts when using the same graph for multiple models, or a graph accumulating multiple unused definitions of the same variable. In addition to being error prone, the TensorFlow 1.X design led to multiple mechanisms and functions to facilitate finding untracked variables (e.g. variable scopes, global collections and tf.get_global_step()). TensorFlow 2.0 removes all of these functionalities in favor of the default Python behavior. A tf.Variable that is not being tracked is now garbage collected. 
Upgrade to TensorFlow 2.0 If you would like to give the new TensorFlow a go, the TensorFlow 2.0 alpha version can be installed through pip install -U --pre tensorflow As with TensorFlow 1.X, the best approach is to install TensorFlow in a virtual environment (e.g. virtualenv). It is always tricky to perform major version upgrades. To facilitate the transition, the TensorFlow 2.0 default installation includes the command tf_upgrade_v2 that automatically updates a script from TensorFlow 1.X to TensorFlow 2.0. We tested the upgrade command in some of our code and it worked as expected. The command is fairly “conservative”. If a function does not have an exact equivalent in TensorFlow 2.0, it avoids any potential incompatibility by relying on the old function definition through the library tf.compat.v1. After the automatic transformation, additional work is required to change the code style into TensorFlow 2.0. We recommend checking out the section Recommendations for idiomatic TensorFlow 2.0 for more details.
https://www.evergreeninnovations.co/blog-tensorflow-2-0/
Create broken axes Project description brokenaxes brokenaxes makes matplotlib plots with breaks in the axes for showing data across a discontinuous range. Features - Break x and y axes. - Supports multiple breaks on a single axis. - Automatically scales axes according to relative ranges. - Plot multiple lines. - Legend with positioning relative to entire broken axes object - x and y label centered to entire plot - Make brokenaxes object a subplot itself with matplotlib.GridSpec.subplot_spec. - xlims and ylims may be datetime.datetime objects - Supports log scales. Installation I recommend the Anaconda python distribution and this package is available via pypi: pip install brokenaxes Usage import matplotlib.pyplot as plt from brokenaxes import brokenaxes import numpy as np fig = plt.figure(figsize=(5, 2)) bax = brokenaxes(xlims=((0, .1), (.4, .7)), ylims=((-1, .7), (.79, 1)), hspace=.05) x = np.linspace(0, 1, 100) bax.plot(x, np.sin(10 * x), label='sin') bax.plot(x, np.cos(10 * x), label='cos') bax.legend(loc=3) bax.set_xlabel('time') bax.set_ylabel('value') Create subplots from brokenaxes import brokenaxes from matplotlib.gridspec import GridSpec import numpy as np sps1, sps2 = GridSpec(2,1) bax = brokenaxes(xlims=((.1, .3), (.7, .8)), subplot_spec=sps1) x = np.linspace(0, 1, 100) bax.plot(x, np.sin(x*30), ls=':', color='m') x = np.random.poisson(3, 1000) bax = brokenaxes(xlims=((0, 2.5), (3, 6)), subplot_spec=sps2) bax.hist(x, histtype='bar') Log scales import matplotlib.pyplot as plt from brokenaxes import brokenaxes import numpy as np fig = plt.figure(figsize=(5, 5)) bax = brokenaxes(xlims=((1, 500), (600, 10000)), ylims=((1, 500), (600, 10000)), hspace=.15, xscale='log', yscale='log') x = np.logspace(0.0, 4, 100) bax.loglog(x, x, label='$y=x=10^{0}$ to $10^{4}$') bax.legend(loc='best') bax.grid(axis='both', which='major', ls='-') bax.grid(axis='both', which='minor', ls='--', alpha=0.4) bax.set_xlabel('x') bax.set_ylabel('y') plt.show() datetime import matplotlib.pyplot as plt from brokenaxes import brokenaxes import numpy as np import datetime fig = plt.figure(figsize=(5, 5)) xx = [datetime.datetime(2020, 1, x) for x in range(1, 20)] yy = np.arange(1, 20) bax = brokenaxes( xlims=( ( datetime.datetime(2020, 1, 1), datetime.datetime(2020, 1, 3), ), ( datetime.datetime(2020, 1, 6), datetime.datetime(2020, 1, 20), ) ) ) bax.plot(xx, yy) fig.autofmt_xdate() [x.remove() for x in bax.diag_handles] bax.draw_diags() import matplotlib.dates as mdates for ax in bax.axs: ax.xaxis.set_major_formatter(mdates.DateFormatter('%Y-%b-%d')) How do I do more? You can customize brokenaxes outside of the supported features listed above. Brokenaxes works by creating a number of smaller axes objects, with the positions and sizes of those axes dictated by the data ranges used in the constructor. Those individual axes are stored as a list in bax.axs. Most customizations will require accessing those inner axes objects. (See the last two lines of the datetime example.) There is also a larger invisible axes object, bax.big_ax, which spans the entire brokenaxes region and is used for things like x and y axis labels which span all of the smaller axes. Gallery If you make a plot with this tool that you are proud of, send me a png and code and I'll add it to the gallery! Life advice Please use this tool wisely. Any data visualization technique can be used to elucidate trends in the data, but can also be used to manipulate and mislead. The latter is particularly true for broken axes plots, so please try to use them responsibly. Other than that, this software is free to use. See the license file for details.
https://pypi.org/project/brokenaxes/
1993 ODESSA L. CRIALES, APPELLANT v. UNITED STATES, APPELLEE Appeal from the Superior Court of the District of Columbia; (Hon. Patricia A. Wynn, Trial Judge) Before Rogers, Chief Judge, and Ferren and Farrell, Associate Judges. The opinion of the court was delivered by: Farrell FARRELL, Associate Judge: Found guilty after a bench trial of solicitation for lewd and immoral purposes (D.C. Code § 22-2701), appellant contends that the trial Judge erred in denying her motion to suppress evidence admitted against her at trial. Specifically, she contends that the search warrant authorizing the search of the premises where she was an employee was defective; that in executing the search the police violated Rule 41 of the Superior Court Rules of Criminal Procedure; and that the police similarly failed to comply with the knock and announce requirements of 18 U.S.C. § 3109. We affirm. I. The evidence relevant to the suppression issue, viewed in the light most favorable to the government, revealed that after an investigation lasting several months and including surveillance and undercover operations, Detective Levasseur of the Metropolitan Police Department prepared an affidavit for a warrant to search the Physical Culture Center, located in Northwest Washington and better known as the Capitol Health Club (CHC). The affidavit was based upon information from police officers that the club was being used for prostitution. *fn1 Levasseur and an Assistant United States Attorney signed and dated the affidavit on November 29, 1990, and the warrant itself was signed by Judge Winfield the same day, though it was never dated. Before the police executed the warrant on December 3, 1990, Detective Gilkey, posing as a customer and wearing a recording and transmitting device, entered the CHC to confirm the presence of prostitution on the premises. Gilkey gave appellant fifty dollars in pre-recorded police money and asked her to have sex with him.
They retired to a private room, appellant told Gilkey to "get comfortable," and appellant returned with a towel, bottles of lotion, and a condom. Gilkey then gave appellant an additional seventy dollars in pre-recorded money and appellant began to undress, whereupon Gilkey signaled the search team to execute the warrant. Two uniformed officers entered the club through an unlocked or open door from the street to execute the search warrant. *fn2 No one attempted to obstruct their entry or the execution of the search. Levasseur followed, announcing that he had a search warrant and holding the warrant in his hand. The search team then walked up the stairs and through the door of the club to the waiting area, where they saw several persons in the office space. They informed these persons that they had a warrant, then located Detective Gilkey and appellant. Seventy dollars of the pre-recorded money was on a bench next to appellant. Another fifty dollars in pre-recorded funds, as well as condoms, were seized from the office. Detective Levasseur let one of the employees telephone "associates of the business" to inform them of the search. Since everyone in the club had been arrested, he left a copy of the warrant and return (or inventory) on the premises. On November 5, 1991, Judge Wynn heard suppression motions on behalf of all the defendants, *fn3 and denied appellant's motion. II. Appellant contends that the evidence admitted against her should have been suppressed as the fruit of an unlawful search, *fn4 because the warrant itself was undated and because the officers failed to comply with the requirement that they knock and announce their authority before entering (18 U.S.C. § 3109; Super. Ct. Crim. R. 41 (e)(3)) as well as the rule requiring them to provide a copy of the warrant and return to an occupant of the premises. Super. Ct. Crim. R. 41 (e)(4). First, relying on the rule that a warrant must be executed within ten days of its date of issuance (Super. Ct. Crim. R. 
41 (e)(1)), appellant contends that the warrant was facially invalid because it was undated. See also Super. Ct. Crim. R. 41 (d) ("A search warrant shall contain . . . the date of issuance"). This argument has no merit. Although the warrant itself was undated, the affidavit was dated, and there is no dispute that the warrant was issued that same day or the next and executed within four days. Judge Wynn found that the warrant was attached to the affidavit and referred to it. Cf. United States v. Moore, 263 A.2d 652, 653 (D.C. 1970) (warrant valid under Fourth Amendment where affidavit's description of place to be searched was incorporated into warrant). Since there was no delay in executing the warrant beyond the time permitted by Rule 41 (e)(1), *fn5 and since appellant has shown no prejudice from any delay in the execution, Johnson v. United States, 255 A.2d 494, 495 (D.C. 1969), the omission of the date from the face of the warrant provided no ground for suppression. Appellant next argues that the police violated the knock and announce requirements of 18 U.S.C. § 3109. See Griffin v. United States, 618 A.2d 114 (D.C. Dec. 18, 1992). She appears to direct this argument chiefly to Detective Gilkey's entry in an undercover capacity, an argument plainly without merit. The knock and announce rule is intended to protect privacy and reduce the possibility of harm flowing from an unannounced entry. United States v. White, 168 U.S. App. D.C. 309, 514 F.2d 205 (1975). Appellant, however, was engaged in a business (though an illicit one) open to the public. "A government agent, in the same manner as a private person, may accept an invitation to do business and may enter upon the premises for the very purposes contemplated by the occupant." Lewis v. United States, 385 U.S. 206, 211, 87 S. Ct. 424, 17 L. Ed. 2d 312 (1966). 
*fn6 The trial Judge rejected any notion that Gilkey's entry -- intended to confirm the presence of prostitution on the premises -- was a subterfuge to relieve the executing officers of any statutory obligation they otherwise had. *fn7 Requiring Gilkey to have announced his identity would have thwarted the very purpose of his entry as an undercover officer. Since appellant welcomed Gilkey as a paying customer, her consent nullifies any claim that he violated § 3109. See United States v. Sheard, 154 U.S. App. D.C. 9, 13, 473 F.2d 139, 143 (1972), cert. denied, 412 U.S. 943, 93 S. Ct. 2784, 37 L. Ed. 2d 404 (1973). Appellant further argues that, despite Gilkey's previous admission to the premises, the entry by the executing officers without knocking amounted to a breaking and entering *fn8 that fell within neither exception to the knock and announce statute. See Williams v. United States, 576 A.2d 700, 703 (D.C. 1990) (describing purposes of statute and exceptions). Judge Wynn's rejection of this argument, however, is well-reasoned and conclusive: The defense argued that a breaking is sort of a term of art and it doesn't require any use of force but instead simply crossing the line onto the private property. I think that, . . . although that might be true in one's home, . . . that would not be true in a commercial establishment which is
http://dc.findacase.com/research/wfrmDocViewer.aspx/xq/fac.19930305_0001.DC.htm/qx
Here's something I didn't know about Silverlight, and I did not find any documentation on it. It's possible to create custom namespaces at the root of a XAML file. I always knew that you could insert namespaces inline in XAML, but apparently when you define a namespace attribute in XAML it also applies to the root. Here's a simple example: <myType:TreeMapPanelItem x:Class="MIXOnline.Descry.Controls.CustomTreeMapItem" xmlns= xmlns:x= xmlns:myType= > <TextBlock Text="Hello" /> </myType:TreeMapPanelItem> This allows replacing custom elements programmatically as was done in the CPP Custom Treemap sample in Descry which needed to inherit from a base TreeMapPanelItem class to pick up some of its default properties and methods. This could not have been done with a standard Silverlight class since they cannot inherit from custom classes. See the CPP TreeMap sample in Descry for an example and look to the CustomTreeMapItem class for an example of an implementation. Note the assignment of the PanelItemType property in the Page.xaml.cs file which lets the TreeMapItem know which class to use for rendering. The MIX Online team has just released their latest work: Glimmer. Glimmer is a WPF tool that generates jQuery so that you can easily create effects for your website! Check it out at Since I write a lot of Silverlight and WPF applications, I make heavy use of Visual Studio, and when I work with large XML files it is very time-saving to know how to use Visual Studio's Regular Expressions search support. Every time I use it though I invariably forget how to write RegEx searches, since I use it so seldom. So as the proverb goes: the weakest ink is better than the best memory. Perhaps you may get some use out of it. Regular Expression search is not on by default. To turn it on you need to expand the options section in the Find and Replace dialog.
RegEx search cannot be on by default since certain RegEx characters have completely different meanings than the literal characters and so would cause conflict if both were enabled. The Complete Character List is conveniently linked off the menu shown when clicking the arrows to the right of the find and replace text boxes. I tend to learn much more quickly by example, and this page does provide *some* examples, so while useful, it doesn't contain as many as I'd like. So here are two more for now: \<xdate\>(.*) Finds and selects the following node to the end of the line: <xdate>11/14/2008</xdate> This is handy if you need to remove or modify an XML node. Note that the angle brackets are escaped with backslashes. The parentheses group the period, which matches any character, and the asterisk, which signifies zero or more of them, up to the carriage return. \n~((.+)\n) Finds and selects carriage returns that have empty lines following them. This is handy if you want to compact empty vertical space. Essentially the logic is: find a carriage return, but only one that precedes a line with a blank carriage return. This search will not find a line if it has spaces contained in the empty line, so you would need to modify it. I only provided two examples, but if you know of any good RegEx resources please provide a link in the comments, or you can tweet me on twitter.
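The two patterns above use Visual Studio's older find-and-replace syntax, where `\<` escapes an angle bracket and `~()` is a "prevent match" (negative lookahead) group. As a rough illustration, here is what they look like in standard regular expression syntax, shown with Python's re module; the translation of `~()` into a lookahead is my reading of the pattern, not something from the original post:

```python
import re

# Equivalent of \<xdate\>(.*): match an <xdate> tag and capture
# everything after it to the end of the line.
xml = "<row><xdate>11/14/2008</xdate></row>"
m = re.search(r"<xdate>(.*)", xml)
print(m.group(1))  # 11/14/2008</xdate></row>

# Rough equivalent of \n~((.+)\n): a newline whose following line is
# empty. Deleting those newlines compacts runs of blank lines.
text = "first\n\n\nsecond\n"
compact = re.sub(r"\n(?=\n)", "", text)
print(repr(compact))  # 'first\nsecond\n'
```

Newer Visual Studio releases switched Find and Replace over to the standard .NET regex syntax, so the lookahead form is what you would use there as well.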
We brainstormed about what we wanted it to do, and settled on creating a somewhat simplified version of Tetris that avoided rotating the blocks. Flotzam Tetris needed to specifically target the 16x9 format used in the MIX09 keynote screens. This translated into 1280px by 720px dimensions, a fairly low resolution by today’s standards, which forced us to experiment with shape sizes. We needed to keep a careful balance between ensuring readability of the text in the back row of the keynote sessions while looking esthetically pleasing; the less blocks on screen, the less compelling it appears. The Tetris logic itself was simple enough; I found several samples on the web to draw from. I chose to borrow some ideas from the CodePlex WPF Tetris that Karsten showed me, but this only got us about halfway. Absent was how to programmatically solve which Tetris blocks should fall based on the pieces that had already fallen. There are probably a few ways to solve this problem. I chose to take a commonly used game approach of using a buffer for the Tetris shapes defined as an array of integers, which I’ll call a table. Zero represents an empty space, and a one represents a space occupied by a block, the fundamental unit that makes up Tetris shapes. It turned out to be convenient to use integers since it very was handy to add up columns and rows for calculating when a row was full or calculating the total height of a stack of shapes for a particular column. The shapes are defined in the TetrisShape class. The individual blocks that compose a shape are represented in a Point array offset from a (0,0) origin. The TetrisShape class also contains a Position property that maintains the absolute position of a TetrisShape shape in the table. There are two other interesting parts to the TetrisShape class. The first is a Weight property that allows us to define how important the shape is, so that we can get a better spread of shape usage. 
The Weight is used in determining the probability that a shape will be chosen as the “next” shape to be dropped. The second is the ShapeSignature() method, which returns a series of numbers, as a string, that define the “signature” of the bottom of a shape. So, for example, a 2 x 2 square would have a signature of “00”, and a 3 x 2 Tee shape would have a signature of “101”. The numbers represent the distance to the bottom block from an imaginary line running under the shape. The reason this signature is returned as a string is so that string operators can be used to match the signature of a shape to the signature of all the other shapes settled in the table. Fig. 1: Example of the signature of a Square shape vs. a Tee shape. The GetShapeNeededInTable() method returns the signature of the entire width of the bottom of the table. This function examines the height of the columns of the settled shapes in the table to obtain the signature. Once we have this, we can see if a shape can fit in the table by simply calling: ShapeNeeded.Contains(ts.ShapeSignature()). If the call returns true it's a suitable shape, and the shape is stored in a temporary list that gets sorted based on the TetrisShape Weight property. If there's more than one shape in the temporary list, it randomly chooses one of the shapes with the two highest weights and returns the chosen shape. The chosen shape then gets bound to the data source, added to the VisualTree, and displayed on screen. When a row is completed it causes all the “settled” shapes to drop down one level. If a shape drops below the bottom threshold it gets removed from the VisualTree and from the table shape list to prevent a memory leak. The main loop is the RunTetrisFlotzam() method, which is called at regular intervals by a timer (0.7 seconds by default).
This method calls a helper that determines whether a shape can drop down or whether it is “settled”, and it also calls a function that animates the shapes from one position to the next in a programmatic manner. We wanted the animation to have easing, but it would take a bit of reorganization of the code to make that happen. You'll find the source code for Flotzam and Tetris Flotzam here. This was a fun project, and perhaps you can make some interesting modifications to it. If you do, be sure to let us know!

When putting together the code for the Descry - Social Timeline visualization, I quickly discovered that when I subscribed to FriendFeed friends that contained large numbers of items, I eventually encountered performance issues. Scobleizer's feed was a great acid test, since he had about 13K+ subscribers. We needed the Social Timeline to display, at a maximum, 3 months of subscriber data. I found that as the total number of FriendFeed items approached 10K, scrolling and animating databound StackPanels became unacceptably slow. My opinion is that quick response time is not only a critical determining factor in user satisfaction, but also contributes to faster user feedback and continuity, so that the user can make better-informed decisions. It can be argued that what I was attempting to do was an architectural design flaw, since we were attempting to use StackPanels to represent one day's worth of stacked FriendFeed items; a total of 93 StackPanels for 3 months of data. However, we wished to retain the resizing and scaling properties of the StackPanels' parent Grid control and its ColumnDefinitions, so that we could avoid writing messy sizing/positioning logic that the Grid control conveniently has built in. As a result we needed to find another method to gain performance. We ended up choosing control virtualization, since the sheer number of FriendFeed items is what was causing the slowness.
So, what do I mean by “virtualized control?” A virtualized control is one that utilizes resources efficiently. You might ask: “Shouldn't all controls utilize resources efficiently?” The answer to that is yes, but there are performance and behavior tradeoffs between a typical control and a virtualized control. Non-virtualized controls have all the databound UI already in memory. Virtualized controls are specialized controls geared for managing large amounts of data that need to bind UI on the fly, and they take a performance hit when doing this. Also, you'll note that when scrolling the content of the virtual StackPanel control in this project, it does not smoothly scroll up a pixel at a time as a normal ListBox does; instead it scrolls up one item at a time. This is not a limitation of a virtualized control, but rather a limitation of this implementation; smooth scrolling would take a bit more thought. In other words, when a typical Silverlight control is databound to a data source that contains tens of thousands of objects, it will literally contain tens of thousands of UI elements in its visual tree, potentially using a vast amount of resources. The visual tree is maintained in memory (in addition to the original data source) whether the visual tree is onscreen or off! Conversely, a virtualized control, while having access to the data source in memory, would only contain the elements that need to be physically displayed within the viewable area of the control, resulting in a dramatically smaller set of items in the visual tree. This sample shows a virtualization technique for creating a scalable and performant ListBox-like control. There are two key classes: the VirtualizedStackPanel and the VirtualizedItemsControl. The source for this simple virtualized control is here.

VirtualizedStackPanel Role

The VirtualizedStackPanel is a control that simply limits the number of children contained in its visual tree to only the items in the viewable area.
This control alone is unable to display the data properly, since a scrollbar is needed to manage the position of the currently viewable area; the VirtualizedItemsControl takes care of that need. The VirtualizedStackPanel behaves as a normal StackPanel until a data generator is attached. The IDataGenerator implemented in the custom VirtualizedItemsControl lets the VirtualizedStackPanel know which children are currently in the active viewable area, based on control size and scrollbar position settings. As soon as the size of the control or the scrollbar position changes, the MeasureOverride method is called, and the control recalculates which children are to be added or removed from the visual tree. A ContentPresenter is acquired from the assigned data generator, so that a common DataTemplate can be used to define item UI. When the ContentPresenter Content property is set, the data is bound to the UI defined in the DataTemplate. The main logic in the VirtualizedStackPanel is in the MeasureOverride method, which creates new containers if they are needed when the control height increases, reuses containers if no new ones are needed, and removes unnecessary containers if the control height decreases. The ArrangeOverride simply positions all the children in the StackPanel vertically.

VirtualizedItemsControl Role

The VirtualizedItemsControl's role is to act as a container for one or more VirtualizedStackPanels. It manages the data source, a common DataTemplate that children use, and the scrollbar positioning logic. The VirtualizedItemsControl implements the IDataGenerator interface that assists in communicating with its VirtualizedStackPanel children. This sample shows a single VirtualizedStackPanel child for brevity, but you may refer to the VirtualizedTimelineItemsControl in the Social Timeline project on CodePlex for one that contains multiple children.
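The core bookkeeping MeasureOverride performs, deciding which item indices fall in the viewable area given the control size and scrollbar position, can be sketched independently of Silverlight. The following is an illustrative Python sketch with hypothetical names, assuming a fixed item height:

```python
def visible_range(first_visible, viewport_height, item_height, item_count):
    """Return the half-open [first, last) range of item indices a
    virtualized panel should realize. first_visible is the scrollbar
    position expressed as the index of the topmost visible item; items
    outside the range are kept out of the visual tree entirely."""
    realized = viewport_height // item_height + 1  # +1 for a partially visible row
    first = max(0, min(first_visible, item_count - 1))
    last = min(item_count, first + realized)
    return first, last

# A 600px panel with 50px rows realizes only 13 of 10,000 bound items.
print(visible_range(120, 600, 50, 10_000))  # (120, 133)
```

Whenever the viewport size or scroll position changes, the panel recomputes this range, recycles containers that dropped out of it, and acquires ContentPresenters for indices that entered it.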
Incidentally, the VirtualizedItemsControl's content is defined in the project's /Themes/Generic.xaml file. It could just as well have been defined procedurally, but it uses an external resource file for better designer / developer workflow. It is critical that this file exists in the project in the Themes folder, and note that it needs to be declared as a resource in the project to avoid errors. I made the Silverlight 2 newbie mistake of not doing this, and I literally spent days trying to figure out why I couldn't get the project to work.

Resources

I found that writing a virtualized control wasn't quite as complex as I had anticipated, after I broke down a much more complex sample. There is a good example of a full implementation of a VirtualListBox in the Silverlight Extensions project on CodePlex. Another related WPF VirtualizedCanvas sample by Chris Lovett that I found useful is located here.

Descry and CodePlex

Visit CodePlex to download a more fully developed version of a virtualized control that supports mouse drag animation with physics, plus a VirtualizedListBox as well. And visit here to get the entire Descry project source! “Ever wonder what goes into building an effective visualization? Look no further. We decided to roll up our sleeves and explore the topic. We're calling it Project Descry.” Descry is the culmination of our “Innovative Web” team's efforts! It's worth a look!

My team members have been working on an entirely new version of the VisitMIX site for the past couple of months. There's going to be some very interesting activity around this site. [more...]

In Building a recipe application using Vista and .NET 3.0 (Part IV: ThumbnailProvider interface) I reference the DBMon.exe tool, and state that it is located in the Windows SDK. However, as of the latest Windows SDK, Microsoft® Windows® Software Development Kit Update for Windows Vista™ (released 3/22/07), DBMon no longer ships in the SDK.
I was unable to find it in any other Microsoft release. In my search I found this tool that very nicely replaces DBMon: DebugView v4.74 (by SysInternals). This tool has the same basic functionality, plus many, many more features, such as the ability to connect to remote machines, persist sessions to disk, filter entries, and capture from many debug sources. It is available here on the Microsoft.com TechNet site: (published 11/27/07)

I had a need to create a string “Linkify” function for a web site that I am working on; a function that takes a string, parses it for URLs, and replaces those URLs with HTML anchors. The end result is that when the string is passed to a web client, the user can browse to the URL. I started out thinking that it could be done with some simple string manipulation, but quickly realized that finding the end of a URL was not trivial. It then occurred to me, from my experience with Perl, that Regular Expressions would solve the problem. Regular expressions are exceedingly powerful, but with their power comes complexity. The .NET Framework has a namespace dedicated to Regular Expressions: System.Text.RegularExpressions. The Regex class has a constructor that takes a pattern and options as parameters. The Regex.Matches(string) method takes a string as input and returns a MatchCollection, which contains a lot of buried information. For my purposes I was able to avoid having to drill down deep into the collections contained within the object, but it's good to know that it has the ability to capture the individual groups that a pattern returns. The trickiest part for me was recalling how to construct a pattern for this type of query. RegEx pattern syntax is a language of its own that takes me time to wrap my head around every time I need to work with Regular Expressions, which is not all too often. So how did I overcome my cerebrally-challenged problem?
In searching on Live.com… incidentally in Windows Vista, by default, hit the Windows key, type your search word(s), press the down arrow and press enter… I came across a free 3rd party tool, Regular Expression Designer by Rad Software (this tool is not endorsed by Microsoft), that helped me to quickly construct and try out the pattern that I needed. It also has a handy quick reference to show the meaning of the esoteric symbols used in RegEx patterns.

Construction of the pattern

Anything within parentheses in a RegEx pattern is called a “group.” In this application we want 3 groups; the first: the protocol, the second: the domain, and the third: the path. For the protocol group we want to find an occurrence of http or ftp. The | operator indicates an Or, followed by the literal ://. The domain group is one or more matches + that do not contain a forward slash, a carriage return, a close paren, or a quote mark [^/\r\n\")]; note that some characters are escaped by preceding backslashes. The path group needs to have zero or more matches * that begin with a forward slash, but not a carriage return, close paren or a quote mark [^\r\n)\"], and lastly the trailing ? makes the path group optional, so a URL without a path still matches. Putting it all together we arrive at this: "(http|ftp)://([^/\r\n\")]+)(/[^\r\n\")]*)?" We could have given the groups friendly names, as in: "(?<protocol>http|ftp)://(?<domain>[^/\r\n\")]+)(?<path>/[^\r\n\")]*)?", if we had wanted to access the individual values by name for some other purpose. Without assigning names, by default, the groups are numbered in order of occurrence within the pattern starting with 1 (group 0 is the entire match). Following is a method that wraps it all up:

using System.Text.RegularExpressions;

/// <summary>
/// This method takes a string and looks for URLs within it.
/// If a URL is found it makes an HTML anchor out of it and embeds it
/// in the output string. This method assumes that we are not passing
/// in HTML. This method would have to be revised to support that.
/// </summary>
/// <param name="input"></param>
/// <returns>Clickified String</returns>
public static string LinkifyString(string input)
{
    Regex regex = new Regex("(http|ftp)://([^/\r\n)\"]+)(/[^\r\n)\"]*)?", RegexOptions.IgnoreCase);
    MatchCollection mc = regex.Matches(input);
    foreach (Match m in mc)
    {
        string url = m.Value;
        string link = Globals.CreateAnchorTargetBlank(url, url);
        input = input.Replace(url, link);
    }
    return input;
}

public static string CreateAnchorTargetBlank(string href, string description)
{
    return string.Format("<a href=\"{0}\" target=\"_blank\">{1}</a>", href, description);
}

MSDN has some RegEx examples that may be worth taking a look at. Be advised that this LinkifyString function will corrupt HTML if you pass an HTML'ified string to it, since we are not checking for anchor elements surrounding the URLs. To do this one would have to add additional groups that ignore matches inside anchors, i.e. <a href=""></a>. -Hans Hugli

Abstract

Becoming familiar with SharePoint 2007 web parts, CAML, and implementing a custom web part that interacts with a SharePoint page's CAML. This article also covers setting up a development environment, building, deploying, installing, and debugging a web part, and includes a project linked at the bottom.

Intro

It is our team's goal this year to create an internal site for our Technical Evangelists to go to as a first-stop resource for demos. We wanted to organize these demos so that they are easy to find, and we wanted the site to be in a blog-like format. It's a demo's nature to become stale with time, and so we thought that a blog format was a good fit for documenting demos. We really like the “tag clouds” concept since it surfaces keywords and their frequency of use and popularity. We took a look at a few blogging solutions.
The first was an ASP.NET sample, the “Blog Starter Kit.” While it's a great example of how to code a blog server from the ground up, it proved to be too much work to get where we wanted in a short time. I next took a look at Community Server 2.0. This is very robust and easy-to-use blogging software and contains a wealth of other community capabilities, but it ended up being overkill for our simple blogging needs. It also didn't have the ability to easily customize the UI layout via its web interface; I wanted to be able to add announcements and contact information without having to manage any HTML. Plus, we needed to be able to add custom properties to the blog entries, such as demo storage location, which Community Server was not designed to enable without code; this meant becoming familiar with their architecture and writing custom code. Having had some familiarity with SharePoint, and knowing its great support for customizing lists, I then examined SharePoint 2007's blog capabilities, and while they are certainly not as developed as Community Server's, they were a good starting point for our proposed solution. Out of the box, SharePoint 2007 has the capability of creating blog sites with ease. It creates a home page that contains a Blog Post List web part (based on ListViewWebPart). These blogs can have multiple categories assigned to them with items from the “Category” List, and they can also have multiple comments attached with items from the “Comments” List. The homepage contains a list of all posts in a summary view layout. A second, but equally important, reason for having chosen SharePoint is that I wanted to have a better understanding of its inner workings. From the Category List, the user can drill down into the category page, which out-of-box only displays items from a single category.
I found this to be too constraining for our needs, and so I set out to modify Category.aspx to allow filtering with multiple categories, plus create a way to navigate the page in a more efficient and meaningful way with a “Tag Filter” control that would look something like this: I started with a “Tag Cloud” web part that I found as a starting point, which was very useful, but I needed to change it dramatically since our requirements were quite different. See the resources section for more info on the “Tag Cloud” web parts.

Definitions

Not the most exciting, but necessary nevertheless to understand later portions of this writing.

CAML – Collaborative Application Markup Language – not to be confused with the Caml programming language, is an XML-based language that contains tags to define and display data. In this post we will focus on CAML's query language and query language schema.

SharePoint Designer 2007 – A tool to edit SharePoint pages along with the CAML embedded in those pages. Since the web part CAML markup is stored in SQL, opening files via UNC (e.g. \\<my share point server name>\blogs\default.aspx) from a SharePoint Server within normal editing tools will not show the CAML markup, so it's necessary to edit pages with SharePoint Designer 2007 to surface and edit the CAML. Incidentally, here's the trial version of SharePoint Designer 2007.

SharePoint Web Part – This is the fundamental unit that can access SharePoint information and display itself on a SharePoint page. With SharePoint 2007, the Office team has made it much easier to develop web parts, since they are now leveraging ASP.NET's web part infrastructure. See the resources section for required reading on web parts that will help you along with development.

Getting Started

The Blog Home page

The default.aspx SharePoint blog home page shows all blog posts that have been created. The two web parts that we will find most interesting on this page are the Category web part and the Posts web part.
The Category part enumerates all categories that have been defined. For example: a category of “MyTag” would link to “/<blogsitename>/lists/categories/category.aspx?Name=MyTag”. Note that “Category” and “Tag” will sometimes be used interchangeably in this writing. The Posts part shows all blog posts ordered by PublishedDate and ID. How do I know how the Posts are ordered? Well, I could edit the page in the browser, go into the View Settings, and look at the settings for its current view, but that won't help us later when we are trying to dissect the category.aspx page. So let's instead open the default.aspx page in SharePoint Designer and take a look at the web part titled “Posts”.

CAML

The web part XML that you will find contains all the default and user-defined settings for the Posts part. A large portion of these settings are exposed through the web management UI, but the ListViewXml property is not. Instead, the web-based View Settings UI manages most of the capabilities of this property. Notice that embedded in the CAML (escaped to avoid collisions in SharePoint Designer) inside the ListViewXml property is a query node that looks like this:

&lt;Query&gt;&lt;OrderBy&gt;&lt;FieldRef Name="PublishedDate" Ascending="FALSE"/&gt;&lt;FieldRef Name="ID" Ascending="FALSE"/&gt;&lt;/OrderBy&gt;&lt;/Query&gt;

Converted to XML, it appears much more readable as:

<Query>
  <OrderBy>
    <FieldRef Name="PublishedDate" Ascending="FALSE"/>
    <FieldRef Name="ID" Ascending="FALSE"/>
  </OrderBy>
</Query>

The Category.aspx Page

So now that we know where the CAML and its queries are serialized, we can move on to taking a look at the Category.aspx page. The Category.aspx page shows only Post items that have been tagged with a name that is passed as a parameter to the page. How does the Category.aspx page capture a request parameter? The answer lies in the CAML of the Posts web part, which is also included on the category.aspx page.
Inspecting the Category.aspx page with SharePoint Designer, this is how the Posts part markup appears inline. Taking a look at ListViewXml reveals the CAML XML. We find the Query node and see that it is now more complex. Looking at just the Query node again, we now see a “Where” clause that compares the “PostCategory” value with the “Name” request variable:

<Where>
  <Eq>
    <FieldRef Name="PostCategory"/>
    <Value Type="">
      <GetVar Scope="Request" Name="Name"/>
    </Value>
  </Eq>
</Where>

If we make some minor changes to this query, we can get some much more interesting results. For example, if I wanted to find all Posts that contain the categories “Demos” and “WPF”, I could pass the following to the Category.aspx page: “Category.aspx?N1=Demos&N2=WPF” and have the following embedded CAML process the request:

<Query>
  <OrderBy>
    <FieldRef Name="PublishedDate" Ascending="FALSE"/>
  </OrderBy>
  <Where>
    <And>
      <Eq>
        <FieldRef Name="PostCategory"/>
        <Value Type="">
          <GetVar Scope="Request" Name="N1"/>
        </Value>
      </Eq>
      <Eq>
        <FieldRef Name="PostCategory"/>
        <Value Type="">
          <GetVar Scope="Request" Name="N2"/>
        </Value>
      </Eq>
    </And>
  </Where>
</Query>

And then, taken to the extreme: we'd like to retain the original “Name” parameter for backwards compatibility, so that pre-existing links don't break, and we'd like to have the ability to filter up to 6 categories deep. (Note that if values are omitted for any of the parameters, no results will be returned.) The drawback to this approach, due to the AND query logic, is that we always need to pass a value for every request variable; otherwise the query returns no results. So, for this sample to work, we need to create a common category “All” and (important for this to work) tag every single blog post with it.
So our final query will be:

<Where>
  <Or>
    <Eq>
      <FieldRef Name="PostCategory"/>
      <Value Type="">
        <GetVar Scope="Request" Name="Name"/>
      </Value>
    </Eq>
    <And>
      <And>
        <And>
          <And>
            <And>
              <Eq>
                <FieldRef Name="PostCategory"/>
                <Value Type="">
                  <GetVar Scope="Request" Name="N1"/>
                </Value>
              </Eq>
              <Eq>
                <FieldRef Name="PostCategory"/>
                <Value Type="">
                  <GetVar Scope="Request" Name="N2"/>
                </Value>
              </Eq>
            </And>
            <Eq>
              <FieldRef Name="PostCategory"/>
              <Value Type="">
                <GetVar Scope="Request" Name="N3"/>
              </Value>
            </Eq>
          </And>
          <Eq>
            <FieldRef Name="PostCategory"/>
            <Value Type="">
              <GetVar Scope="Request" Name="N4"/>
            </Value>
          </Eq>
        </And>
        <Eq>
          <FieldRef Name="PostCategory"/>
          <Value Type="">
            <GetVar Scope="Request" Name="N5"/>
          </Value>
        </Eq>
      </And>
      <Eq>
        <FieldRef Name="PostCategory"/>
        <Value Type="">
          <GetVar Scope="Request" Name="N6"/>
        </Value>
      </Eq>
    </And>
  </Or>
</Where>

So now a request for all Posts that contain “Demos” and “WPF” would be: “Category.aspx?N1=Demos&N2=WPF&N3=All&N4=All&N5=All&N6=All”

Setting up a dev environment

First things first, we need to set up the development environment for building and debugging a SharePoint web part. Optimally, it's best to set up the Visual Studio 2005 development environment on the SharePoint Server itself, so that you can debug the SharePoint web part. The resources section below contains links that will walk you through some of the basics of creating a web part, though I'd like to provide some additional comments. You will need to copy the Microsoft.SharePoint.dll (9Mb) from the SharePoint server into your web part project root to get it to compile properly. I found mine in “C:\Program Files\Common Files\Microsoft Shared\web server extensions\12\ISAPI”. Note that some of the documentation points you to other directories for earlier versions of SharePoint. There was a naming convention change, so for SharePoint 2007 the correct files are located in the \12\ISAPI directory, “12” standing for Office 2007 (or Office “12”). The \60\ISAPI directory is for Office 2003 (or version 6.0).
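Before moving on to deployment, one practical consequence of the all-AND branch in the query above is that every Nx slot must carry a value, which is why every post gets the catch-all “All” category. Here is a sketch of the client-side link construction (illustrative Python with names of my own choosing; the TagFilter web part described later does the equivalent in C#):

```python
def category_url(tags, slots=6, page="Category.aspx", pad="All"):
    """Build a Category.aspx request filtering on up to `slots` categories.
    The AND-chained CAML query returns nothing if any Nx parameter is
    missing, so unused slots are padded with the "All" category that every
    post is tagged with. (URL-encoding of tag names is omitted here.)"""
    if len(tags) > slots:
        raise ValueError("more categories than query slots")
    values = list(tags) + [pad] * (slots - len(tags))
    query = "&".join("N%d=%s" % (i + 1, v) for i, v in enumerate(values))
    return page + "?" + query

print(category_url(["Demos", "WPF"]))
# Category.aspx?N1=Demos&N2=WPF&N3=All&N4=All&N5=All&N6=All
```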
Registering and installing a web part

Manually registering and installing a web part is not a trivial task, but I've taken the following steps to make it easier to version the web part.

1. Make sure that your web part is strong-name signed. If you use the project that I've provided, the build process will take care of this (essentially you just need to add a key file to your solution, and in the project property settings choose it to sign the assembly). If you happen to create a new key, you will have to determine its new PublicKeyToken and adjust this value anywhere it occurs (the SafeControl entry and the DWP file). See article 1 in the resources section below for direction.

2. In a default installation of SharePoint Server, the files for the site itself will be contained in the following directory: “C:\Inetpub\wwwroot\wss\VirtualDirectories\80”. Yours may vary, but the 80 stands for port 80. In your web part project you will want to make sure that the assembly that is created when building the project gets copied to the BIN directory in your SharePoint site. This means adding something like the following to the PostBuildEvent property in your project: xcopy /y $(TargetPath) "C:\Inetpub\wwwroot\wss\VirtualDirectories\80\bin\"

3. Add the following line to the “C:\Inetpub\wwwroot\wss\VirtualDirectories\80\web.config” file in the <SafeControls> section: <SafeControl Assembly="UberdemoWebparts, Version=1.0.0.0, Culture=neutral, PublicKeyToken=d4dfd62c364ed482" Namespace="UberdemoWebparts" TypeName="*" Safe="True"/>

4. In the same web.config file, change the trust level from WSS_Minimal to WSS_Medium in the trust node so that it appears so: <trust level="WSS_Medium" originUrl="" /> (Note that this is a very insecure way to install web parts. Anyone can install them if they have permissions on the machine, but it is also the least painful way to do development.
See article 3 in the resources section below for how to properly deploy a web part in a production environment.)

5. You must create a DWP file for each web part in an assembly, yet one assembly can contain multiple web parts and be declared safe once in the web.config file. See resource article 1 below on how to properly format a DWP file. (Note that the project provided has a TagFilter.dwp file in the root solution folder.)

6. You should now build the web part in Visual Studio 2005, and ensure that the DLL has been copied into the site's BIN directory.

7. Now you need to add a reference to the web part on a page. Visit the page that you would like to install the web part on and click “Site Actions->Edit Page”.

8. Click on “Add a Web Part” on the page, and in the dialog that appears click the “Advanced Web Part Gallery and options” link.

9. Click on the “Browse” sub-heading just below the “Add Web Parts” heading, and click the “Import” menu item in the menu that appears.

10. Click the browse button, browse to the DWP file for the web part that you would like to install, and click upload.

11. This step will determine whether you have the web part installed properly: drag the web part from the ToolBox menu onto the page anywhere. If you get an error at this time it will most likely be something to the effect of “your web part is not registered as safe”. Check steps 1-6 to ensure that all are correct and then repeat steps 7-11.

12. You should now see the added web part displaying on your page.

Debugging a web part

Debugging can be tricky at first, but once you get it set up it becomes much easier. In some of the articles referenced in the Resources section below, it states that you need to attach to the w3wp.exe process, but it doesn't say that there might be 3 running, or which one to choose. I solved this by going into the IIS Manager and stopping the 2 SharePoint management sites.

1. Launch “inetmgr” from the command prompt and open the “Web Sites” node.

2.
Stop all sites except for the main one. In my case the main one is “SharePoint - 80”.

3. You may need to reboot the machine to ensure the other w3wp.exe processes die. “iisreset” did not work. Perhaps “net stop w3svc” and then “net start w3svc” will work, but I haven't tried.

4. Once you have only one w3wp.exe running you should build your project.

5. Load the web part once in the SharePoint page by visiting the page, to get it to load the version that you just built.

6. Set a breakpoint inside the web part constructor or the RenderWebPart method.

7. Now attach to the w3wp.exe process in Visual Studio 2005 by choosing “Debug->Attach to Process”, selecting the w3wp.exe process in the dialog, and clicking Attach. The first time you do this it may take a while to load up all the debug files. In addition, if you see the breakpoint turn hollow with a little yield sign that says something to the effect of “The breakpoint will currently not be hit. No symbols have been loaded”, it can mean that the web page did not properly load your web part. Try refreshing the page and reattaching VS2005.

Briefly examining the TagFilter web part

So now that we understand the inner workings of the SharePoint Post web part, we need to create our own web part that displays tags based on what posts are displayed, and that constructs the requesting URLs for those tags properly. The TagFilter web part inherits from Microsoft.SharePoint.WebPartPages.WebPart, which in turn inherits from System.Web.UI.WebControls.WebParts.WebPart, which makes it very nice for developers that are familiar with the ASP.NET WebPart programming model. It also abstracts away the implementations of the System.Web.UI.INamingContainer, System.Web.UI.IAttributeAccessor and Microsoft.SharePoint.WebPartPages.IConnectionData interfaces.
[ToolboxData("<{0}:TagFilter runat=server></{0}:TagFilter>")]
[XmlRoot(Namespace = "UberdemoWebparts")]
public class TagFilter : WebPart {…}

Each web part must contain a RenderWebPart override, which renders any controls that you want the web part to display. At its simplest, we could send “<div>Hello World</div>” down to the browser with:

protected override void RenderWebPart(HtmlTextWriter output)
{
    HtmlGenericControl container = new HtmlGenericControl("div");
    container.Controls.Add(new LiteralControl("Hello World"));
    container.RenderControl(output);
}

The most important thing to understand about this particular web part is how it accesses the Posts list and the Categories that are contained within that list:

SPListCollection lists = SPControl.GetContextWeb(this.Context).Lists;
foreach (SPList list in lists)
{
    foreach (SPListItem item in list.Items)
    {
        string[] tagArray = item.GetFormattedValue("Category").Trim().Split(';');
    }
}

Custom properties are very useful and only require you to define some special attributes in order to surface those properties in the web-based web part page editor. Properties are persisted in the page CAML when they are modified.

[Bindable(true), Category("TagCloud Filter Settings"), DefaultValue(""),
 Description("Background Color for Tag Cloud"), FriendlyName("Background Color")]
public string TagCloudBackgroundColor
{
    get { return tagFilterBackgroundColor; }
    set { tagFilterBackgroundColor = value; }
}

The rest from here is straightforward business logic. There is a generic Tags List and a Tag class that contains the list of remaining tags for a selection. I've also created a StyleGrade List to manage style settings that can be set in the web part properties. I also created a breadcrumb and added filtering logic. This web part was coded to work only with the Category.aspx page. Here is the project in a zip file (14Kb). Hopefully this has provided some hard-to-find educational value above and beyond the articles listed in the following resources section.
Other SharePoint Resources
MSDN SharePoint Server 2007 Developer Portal
Web Part Development
1) A developer's introduction to web parts
2) Walkthrough: Creating a Basic SharePoint Web Part
3) Microsoft Windows SharePoint Services and Code Access Security
4) Configuring IntelliSense with CAML Files When Developing for Windows SharePoint Services 3.0
Web Part Solutions and Samples; TagCloud 1.0 contains a generic Tag Cloud part and 3 other web parts.

I'll let Hans focus on expanding upon some of the geekier details of some of the demos he's working on, which will give me an opportunity to switch things up a little bit and discuss some of the overall concepts of how to go about designing a great demo.

Perhaps one of the most important aspects of a demo is to make sure that it communicates the right message to the audience. And before you can communicate this message YOU need to know what that message is. There are countless occasions when I see demoers totally lose track of the message, or try to use far too complex a message in their demos and presentations. Often, when you ask somebody to articulate a core message that they are wanting to communicate, they will ramble off a paragraph or two. If you ask them to repeat that message, they will usually ramble off a slightly different paragraph, since they can't exactly remember what it was they said the first time. Makes you wonder how they expect the audience to remember that message if they can't themselves.

I like to try to get people to distill their message down to something that would make a great headline in a newspaper. One short sentence, with few (if any) unnecessary words. Such a message is far easier to consistently repeat, but more importantly it is easier for the audience to understand AND remember... which is the point after all now isn't it? I think it would be useful to spend a few moments taking on an actual example to help illustrate this.
Let’s take a set of messages that perhaps we are all roughly familiar with, Newton’s Laws of Motion. Here they are as Newton himself originally stated them:

Clearly this represents a set of messages that would be awkward to present, and difficult for the audience to remember, much less understand. Ok, so part of the reason for that is because they are in Latin, but that’s only partially to blame. To get past that hurdle, let’s take a look at a translation of these same laws as they would appear in English and see if that makes them much easier to understand and remember.

Nope, still not at a state that would allow them to form a core message that is easy to communicate to an audience. So let us try our hand at modifying these statements to make them fit better as a presentation or demo:

1. An object in motion will stay in motion unless acted upon by an external force.
2. When force is applied to an object, it will proportionally alter its velocity.
3. For every action, there is an equal and opposite reaction.

Some might argue that the exact wording used here doesn’t as precisely match the exact laws that were being laid down by Sir Newton, but frankly that’s not important. One of the worst things you can do when trying to set up your message is to get caught up in the exactitude of it. The messaging you choose to use doesn’t have to be so accurate that it sounds like a lawyer wrote it; it just has to properly communicate the intent, concept, and potential without being misleading. As for Sir Newton, it’s possible that the first “English Translation” I listed above is the first time you’ve ever seen his laws listed with wording like that. I know that it took me a while to track these down.
More than likely what you’ve seen are closer to what I’ve listed in the last, and more sensible, listing. Why’s that? Because people have had a hard time quickly grasping the “original” ones, and so presenters have continuously been tweaking their wording in order to make these laws more memorable. So before you get too far along with designing your next demo, take a little time to try to identify the messages that are at the core of your demo, and work on crafting them in a tight and memorable way. -Robert Hess

Apologies for the delay of this post; I’ve been busy with MIX07. Charles Torre and I organized the MIX07 Sandbox this year, and it was a great success. This year we raised the bar with regard to the types of people a hands-on lab was interesting to: business people and designers, as well as developers. We also increased the usefulness of the Sandbox with Sky Server, to which all MIX07 attendees have access. The experimental OpenMic, the place for any MIX attendee to talk about whatever they wanted, was also popular. Next year, we will raise awareness of the OpenMic and provide a way for people to sign up for OpenMic prior to the event.

Making the ECB file type searchable from Explorer

It’s easy to make the Electronic Cookbook file-type fully searchable in Windows Vista by making changes in the “Indexing Options” applet. You can find this in the control panel, or by simply typing “Index” in the Start Menu Search box and choosing “Indexing Options.” (Open up the Start Menu by pressing the Windows key on your keyboard.)

Fig. 1 – Windows Start Menu

Caution: When changing some indexing settings (such as changing a filter) Windows will re-index the content. This operation could take a few hours depending on the amount of content on your machine. Be aware that you will not be able to fully search your hard-drive until it is finished re-indexing.

Click the “Advanced Options” button and switch to the “File Types” tab.

Fig.
2 – Windows Vista Indexing Options, Advanced Options dialog

By default most file-types are set to only index file properties exposed to the shell; however, for our scenario we want to be able to search the entire file contents. If you’ve installed the ECB ThumbnailProvider you will see an .ecb entry; choose the “Index Properties and File Contents” radio button for the ECB file-type and click OK. This enables searching the text contents of the file, in addition to properties that are exposed to the Windows shell. If we had a PropertyHandler written for this file type, we could alternatively avoid exposing all the text content and only make particular properties searchable. For example, this might be useful if we wanted to search only on the “Tags” that are assigned to a file. Incidentally, if you want to search your CSharp (.cs) files for code, you would make this same change for the .cs file-type.

You’d also need to ensure that the folders your content resides in are marked for indexing. By default everything under the \users\<username> folders is automatically marked for indexing, but if you have a separate location on your hard disk for the content that you want searched, you may have to explicitly set the indexing engine to index those files. This can be set from the same Indexing Options applet, but you would click the Modify button and check the appropriate folders on your drive.

Fig. 3 – Indexing Options Dialog

When Windows has finished indexing, you should be able to type any word that is contained in an Electronic Cookbook (ECB) file, and have it surface in the search results in the Start Search box on the Start Menu. Try typing “Chicken” and, for example, the “Tikka Masala.ecb” recipe should appear under the “Files” group.

Fig. 4 – Start Menu displaying indexed content

Using Windows Search from within an application

So now let’s take a look at doing a Windows Search programmatically.
Windows Search has provided an OleDB provider, which makes programming against the Windows Search engine a cinch.

static string connectionString = @"Provider=Search.CollatorDSO;Extended Properties='Application=Windows'";

Then we manually construct the query. There are APIs that can help to construct a query automatically, which I do not use in this sample (see AQS below).

Query = "SELECT \"System.FileName\", \"System.Rating\", \"System.Keywords\", \"System.ItemAuthors\", \"System.ItemPathDisplay\" FROM SYSTEMINDEX..SCOPE() WHERE \"System.ItemType\" = '.ecb' AND CONTAINS('" + searchstring + "')";

In this sample we take the resulting dataset from an OleDbDataReader and turn it into a collection of string arrays, which our recipe application then consumes. See all the search code that we used for the Electronic Cookbook application here. The tricky part is determining which particular properties you need to care about. A “select *” does not work with Windows Search, so it’s a hit-and-miss exercise to see if any of the possible valid properties return anything of value. Here’s an extensive list of all the Shell properties that are available. The ones that I am working with in this application are located in the “Core” properties: System.FileName, System.Keywords, System.Rating, System.ItemAuthors, and System.ItemType.

Windows Search 3.x and AQS

To read more about Windows Search, see Catherine Heller’s blog in general. Some Windows Search documentation is available on MSDN, and here’s an SDK that shows C++ samples of how to program against Windows Search 3.x, which also includes a useful managed library. Also of potential interest for our recipe application is Advanced Query Syntax (AQS), which you can read about here, and Catherine blogs about here and here. In the next blog post we’ll discuss creating a Windows Property Handler written in C++ that is designed to work with Electronic Cookbook files.
Design of an Interactive Tutorial for Logic and Logical Circuits

by Jeremy Kindy, John Shuping, Patricia Yali Underhill and David John

Introduction

The principles of propositional logic are essential tools for computer science students. An understanding of these principles has immediate applicability to software engineering, computer organization, and algorithm studies. Many beginning students have difficulty associating the boolean value of a logical statement with the input and output values of the corresponding logical circuit. This paper discusses the design and development of an interactive tutorial that assists in the understanding of these concepts. Perspectives of the beginning student and the team constructing the building blocks of the system are presented.

The tutorial is designed for two types of users. First are the introductory students who use the materials as part of their learning process. The tutorials are not intended to take the place of textbooks or instructors. They are simply another learning resource. In order to be effective, this resource must interact in a way that provides additional understanding of the material. The tutorials must run in a timely fashion and be readily accessible to the students. Second are the instructors who construct tutorials as supplements to their teaching. The tutorials must be flexible enough for faculty to make modifications.

Currently, there are several fairly advanced circuit development packages available. The use of these was deemed infeasible for several reasons:
- many, if not all, of these packages require licenses for each computer running the software
- installation and maintenance of the software is nontrivial
- these packages are for circuit design, and are not meant to be a tutorial on understanding the relationship between boolean algebra and logical circuits at an elementary level

Tutorial Overview--User's Perspective

Directing a web browser to the project's URL loads the home page for the project.
The tutorial begins with an introduction to the elementary logical operations and logic gates. This includes statement definition and the negation, conjunction, disjunction, implication, and equivalence operators as shown in Figure 1. The exploration of the subject matter occurs at each individual's pace. Explanations, examples, and truth tables are provided, and ``Quick Quizzes'' can be taken along the way. Introductory computer science and discrete mathematics students benefit from this basic instructional module. Figure 1: Statement definition frame In a following section, the simple gates AND, OR, and NOT are introduced as extensions of the logical operators conjunction, disjunction, and negation. For each gate, students interactively choose inputs and observe the results as a truth table is built dynamically (Figure 2). This is important for the beginning student because all circuits are constructed using these simple gates. There are designated colors for true and false that clearly mark the input and output values. The same color scheme is used in the more advanced sections where circuits are created. Since lengthy explanations are eliminated at this level, the tutorial is well-suited as a supplement to textbooks and lecture notes. A link to Java source code is included at the end of each tutorial page for those who are interested. Figure 2: Browser view of the Tutorial, including navigation menu and interactive AND gate lesson. Another section of the tutorial gives a brief lesson on binary addition. The half adder circuit is introduced to show how simple binary addition is implemented using XOR and AND gates (Figure 3). Again, the student can experiment with sets of inputs, and as before both the circuit output and the truth table values are generated. In a later section, half adders are combined to make a full adder. This illustrates how simple gates and circuit components form the foundation for all digital computer hardware. 
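The half-adder logic described above can be sketched directly in code. The following is a minimal, illustrative Java snippet (it is not part of the tutorial's actual Gate/Connector libraries): the sum bit is the XOR of the two inputs, and the carry bit is their AND, matching the truth table the student builds interactively.

```java
public class HalfAdder {
    // Sum bit: true when exactly one input is true (XOR)
    static boolean sum(boolean a, boolean b) {
        return a ^ b;
    }

    // Carry bit: true only when both inputs are true (AND)
    static boolean carry(boolean a, boolean b) {
        return a && b;
    }

    public static void main(String[] args) {
        // Walk the full truth table, as a student would by clicking check-boxes
        boolean[] values = {false, true};
        for (boolean a : values) {
            for (boolean b : values) {
                System.out.println(a + " + " + b + " -> sum=" + sum(a, b)
                        + " carry=" + carry(a, b));
            }
        }
    }
}
```

Adding the bits 1 and 1 yields sum 0 with carry 1, which is exactly the case the full adder later chains together.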
Figure 3: Browser view of the Half-Adder circuit.

The tutorial covers a wide range of topics. This allows students to study on different levels, depending on their background and past experience with the material. The tutorial is also useful in a variety of courses, as content can easily be added.

Developmental Issues: Software Design

In order to best meet the needs of the different Wake Forest users, there are three primary goals of this project. First, build a set of reusable objects that represent basic logical circuit elements. Second, develop a reasonable mechanism for assembling the objects into tutorials. Third, assemble the tutorials in a coherent framework which facilitates access by students, forces a version control mechanism, and is easily maintained.

Fortunately, most of the issues related to machines, networks, and software are easily resolved at Wake Forest University. The University "Undergraduate Plan" specifies that all undergraduates and faculty are issued a personal laptop computer [1]. Each of these nearly identical computers is configured with a standard software load, which is automatically upgraded by network servers.

Given these goals, Java was chosen as the programming language for the project. The main features guiding this decision were Java's object-oriented structure, relative ease with graphics, and portability. Java allows for individual gates and circuits to be created as either applets or applications. When the circuits and gates are packaged as applets, the tutorial is entirely web-based, with all of the object files stored on the web server. Java is part of the standard Wake Forest configuration, and is also part of the network browser.

The project was completed in two distinct phases. The first phase yielded a working set of libraries for creating gates and circuits, and the second involved refining the libraries and creating tutorials. Different teams worked on the two phases.
Each phase is discussed below in two sections: a high-level overview of the phase followed by a more technical discussion.

Software Development: Phase 1

In the first phase of software design, logic gates were viewed as consisting of two parts: the set of input and output values, and the logical function of the gate that corresponds to the truth table. Using this logical division, two libraries were designed and implemented to serve as the basis for all logic gates and circuits: Gate and Connector.

In an initial prototype, the Gate and Connector classes were subclasses of the Applet class, and the AND, OR and NOT classes were subclasses of the Gate class. Objects in the AND, OR and NOT classes were passed to the methods of the Connector class as Gate objects. This arrangement caused a very awkward situation: the Gate parameters could not be used naturally as AND, OR, or NOT objects. For this reason, the design was changed to avoid subclassing of the Gate class.

The Gate library provides a mechanism for creating, manipulating, and drawing any type of logic gate. The Connector library creates connectors between gates that both show the connectors on-screen and propagate the values from gate to gate within a circuit. Early in the design, the decision was made to have a fairly regular geometry for gates, which required the connectors to possess geometric flexibility in order to connect an output to inputs.

Using these two libraries, a gate is created by writing a simple Java class. This class must create a gate, set its input values, create the connector, set the gate's starting position, and then instruct the gate and its connector to draw themselves. Creating a class using the steps above yields a static gate on the screen with hard-coded input values. By adding functionality from Java's event handling system and a check-box at each input leg, the gate becomes interactive.
When the value of a check-box is changed, an event handler notifies the gate of the change. The gate's input value is then set to the new value of the check-box. Next, the gate is redrawn by re-evaluating the values of each input, recalculating the output values, and propagating this value to any subsequent gates. From the user's perspective, changing the value of a check-box causes the color and value of the corresponding input to change, and subsequently any inputs or outputs with new values are drawn in the proper color. This way, users see how their choices impact the output value of the gate or circuit.

After testing the libraries, interactive versions of each type of logic gate and some basic circuits were created. The circuits are created in exactly the same manner as the single gate, the difference being that more gates are instantiated and more connectors are drawn. These interactive gates and circuits are written as Java applets and then packaged into HTML files that show the truth table of the gate or circuit. These pages then are assembled into a web-based package. Feedback from a campus demonstration resulted in ideas for improvement in the software, which led the project into the second phase.

Technical Details of Phase 1

The Gate and Connector libraries are implemented as Java classes. The Gate class provides constructors and methods applicable to all gates, such as:

Gate(int kind, boolean val1, boolean val2);
void drawGate(Graphics g);
int getnumIN();

The above constructor uses the int kind argument to determine which type of gate (AND, OR, NOT, etc.) is being created, and takes the gate's initial input values. In the Gate class, AND, OR, and NOT are declared as "public static final", allowing them to be used as symbolic constants for this parameter. The method getnumIN() returns the number of inputs of the calling gate, useful when using a combination of NOT and other gates.
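To make the role of the `kind` parameter and the symbolic constants concrete, here is a simplified, hypothetical reconstruction of how such a Gate class might evaluate its output. The real library also handles drawing and connectors, which are omitted here, and its internals may differ.

```java
public class Gate {
    // Symbolic constants for the gate type, as described for the real Gate class
    public static final int AND = 0;
    public static final int OR  = 1;
    public static final int NOT = 2;

    private final int kind;
    private boolean in1, in2;

    public Gate(int kind, boolean val1, boolean val2) {
        this.kind = kind;
        this.in1 = val1;
        this.in2 = val2;
    }

    // NOT gates have one input; AND and OR have two
    public int getnumIN() {
        return (kind == NOT) ? 1 : 2;
    }

    // Recompute the output value from the current inputs
    public boolean output() {
        switch (kind) {
            case AND: return in1 && in2;
            case OR:  return in1 || in2;
            case NOT: return !in1;
            default:  throw new IllegalArgumentException("unknown gate kind");
        }
    }
}
```

In this sketch, a connector propagating a value to a downstream gate amounts to calling output() on the source gate and setting the corresponding input on the target.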
The Gate library also provides gate-specific methods which are usually called by a similar method available to any gate. This simplifies gate and circuit creation. The Connector library has three methods: one for drawing input connectors, another for output connectors, and a third for connectors between the output of one gate and the input of another. Simple and intuitive graphical representations of the circuits are used to focus on simple logical circuits rather than efficient circuit design. As above, gates are written as applets or applications. The following is an example of a simple gate applet:

// Create the Gate and Connector objects
private Gate and = new Gate(Gate.AND, false, false);
private Connector andConnector = new Connector();

public void init() {
    ...
    // Set the gate's starting position on the screen
    and.setSTART(100,100);
}

public void paint(Graphics g) {
    // Draw the And gate
    and.drawGate(g);
    // Draw the 2 input Connectors
    andConnector.inputConn(g, and);
    // Draw the output Connector
    andConnector.outputConn(g, and);
}

Event Handling

A gate becomes interactive by adding check-boxes in front of each gate input and by ``listening'' for the check-boxes to be clicked. This is accomplished using the methods of java.awt.Event. When a check-box value is changed, the gate's drawGate() method determines the type of gate being drawn and then calls a gate-type specific draw() method. This method draws the actual gate. For example, the drawAND() method:

public void drawAND(Graphics g) {
    int width = 3*length/4;
    // set color for outline of the body and then draw it
    g.setColor(this.getGateColor());
    g.drawLine(xpos,ypos,xpos,ypos+length);
    g.drawLine(xpos,ypos,xpos+width,ypos);
    g.drawLine(xpos,ypos+length,xpos+width,ypos+length);
    g.drawArc(xpos+width/2,ypos,width,length,90,-180);
}

After redrawing the gate, the connectors also must be redrawn.
The methods of the Connector class redraw the connectors by first asking the gate for the new color of its legs and its position, and then drawing the connectors.

Coordinates and Gate Placement

The starting position of each gate (measured in pixels from the upper left hand corner of the applet window) must be specified. The default position of each gate is at pixel coordinates (30,30). Failure to set the starting position of gates will cause them to appear on top of one another. To determine the position of a gate, the location and size of the surrounding gates and connectors must be computed. This can become complicated when creating a large, multi-gate circuit.

Software Development: Phase 2

The main focus of the project's second phase was to build tutorials using the classes Gate and Connector. A secondary objective was to determine the effectiveness of the Gate and Connector class libraries. Many refinements in the initial design made the tutorial easier to use and understand, as well as more pleasing to the eye. Before these improvements, the placement of gates and check-boxes on the interface was erratic, and the truth tables were hard to read. The truth table and check-boxes needed to be integrated into the class system, and gate placement required simplification. Java allowed for easy expansion of the existing framework to include the tables and check-boxes, without making significant changes to the Connector or Gate classes.

The first change implemented a grid system for gate layout. This allows the placement of gates on the interface using row and column numbers instead of pixels. The grid was created by adding a small method to the Gate class that takes the desired x and y grid coordinates as arguments. This grid starts at the upper left corner of the applet window. The addition of this method was the only modification to the Gate class.

The second major change aligned the check-boxes with the input legs of the gates.
Initially, this was accomplished with a deprecated Java function named reshape(). Since this method would not be supported in the future, a custom check-box class named cbx was created. A black ``0'' was put in the box for false, and a white ``1'' for true, as those were the values used in the truth tables and they would be easily understood. The check-boxes automatically position themselves when provided with a Gate and an input. The final significant change implemented an automatically updating truth table. Several types of graphical displays (mainly lines and/or boxes) were considered, but difficulty of placement created a problem with these solutions. Eventually, a working truth table was engineered using strings to correctly place values. The table automatically updates whenever an input is changed (when the repaint() method is called), with the new input settings in white, old (``visited'') input settings in black, and other (``unvisited'') input settings as invisible. The invisible strings contain only spaces. As a consequence of these changes, the creation of a gate involves the instantiation of more objects. To create an interactive gate, one must create instances of gate, connector, truth table, and check-box classes. Then, the gate must be initialized into the grid system, mouse events occurring over check-boxes must be handled, and finally the gate and connectors must be drawn. After the new truth table and check-boxes were tested, the previous applets were updated to use the new code, and the static HTML truth tables were removed. The modification of the HTML files necessitated a revision of the web pages. Also, a new color scheme was needed to make the gates contrast against the background. Green was chosen for the background, blue for the gates, black for ``off'' connectors, and white for ``on'' connectors. The table colors remained black and white. 
Technical Details of Phase 2

Java's object-oriented structure simplified creation of the new grid system, check-boxes, and truth tables. The grid system was implemented by adding a small method to the Gate class. The new method StartGate() takes two grid coordinates as arguments.

public void StartGate (int xpos, int ypos) {
    int newx, newy;
    // Translate the grid positions into pixel positions.
    // Each rectangle is 75x65 pixels.
    // (The grid is offset by 25 pixels from the top left of the screen)
    newx = (xpos * 75) + 25;
    newy = ypos * 65;
    this.setSTART(newx,newy);
}

The cbx class contains only draw() and compare() methods in addition to the basic constructs required for any Java class. When a mouse event occurs, the compare() method observes the location of the mouse event. If the mouse is clicked over a check-box, that check-box value and the corresponding gate's input are changed, and the applet is repainted. There were problems related to the appearance of the check-box. Due to differing color settings in users' browsers, the ``x'' in activated check-boxes was sometimes invisible. To remedy this situation, the cbx class was modified to display a black ``0'' and white ``1'' of slightly larger size.

The main problem while designing the truth table was the varying number of inputs. This required the use of a different method for each number of inputs, which was accomplished through method overloading. The overloaded function compares the current settings to every possible combination of inputs, setting a string to each combination as it is reached. If a combination has not been reached, the string contains only spaces. The current combination is output in white, while the other ones, which have been set, are output in black. The strings are kept in an array of the size needed (although no more than two inputs have been implemented to date). Finally, the table is placed using the same grid system as the gates.
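The visited/unvisited bookkeeping described above can be sketched as follows. This is an illustrative reconstruction, not the tutorial's actual Table class: each two-input combination maps to a row index, rows are marked when reached, and rendering picks white for the current row, black for previously visited rows, and blanks for the rest.

```java
public class TruthTableState {
    private final boolean[] visited = new boolean[4]; // 2 inputs -> 4 rows
    private int current = -1;                          // no row reached yet

    // Map an input combination to a row index and mark it visited
    public void reach(boolean a, boolean b) {
        current = (a ? 2 : 0) + (b ? 1 : 0);
        visited[current] = true;
    }

    // "white" = current row, "black" = previously visited, "blank" = not yet seen
    public String styleOf(int row) {
        if (row == current) return "white";
        return visited[row] ? "black" : "blank";
    }
}
```

Toggling a check-box would call reach() with the new input settings; repaint() then asks styleOf() for each row, so unvisited rows stay invisible exactly as described.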
Here is an example of a simple gate written as an applet:

public class or extends Applet {
    private static Gate or1 = new Gate(Gate.OR, false, false);
    private Connector gateConn = new Connector();
    private Table truth = new Table(3, 1, "OR Truth Table", "OR", 4);
    private cbx or1_in0, or1_in1;

    public void init() {
        // Set the gate's position and create the checkboxes
        or1.StartGate(1,1);
        or1_in0 = new cbx(or1,0);
        or1_in1 = new cbx(or1,1);
    }

    // event handler
    public boolean mouseDown(Event evt, int x, int y) {
        ... compare and set input states ...
    }

    public void paint(Graphics g) {
        ... drawing functions ...
    }
}

Conclusion

The tutorial has undergone two major cycles of design and development. At this point, the only introductory tutorials developed for this package are the products of the development team. These materials have been field tested at two university technology expositions. Now the package is ready for use by other faculty with their students. Additional modules will be added to the existing ones, and the system will undergo further testing.

Java has simplified some tasks. It has been an excellent choice for providing straightforward control structures, mechanisms for an event driven system, and graphics support. The network browsers' ability to quickly load the object code and then begin execution is essential. Using compiled Java code with the browsers has been quite successful. Unfortunately, faculty who design lessons with this system must be able to construct very simple Java programs, as well as HTML. This is not a problem for our target faculty in computer science. On the other hand, if the faculty audience is extended, this could be a considerable problem.

As discussed earlier, one large gate class was built, instead of a collection of smaller classes, one for each specific type of gate. Initially, there was concern about the time required to load this class across the network, but transfer time has been negligible.
One point of concern for this project is the evolution of slightly different implementations of Java on different Web browsers. Even though the typical Wake Forest student will use the browser installed in the standard load, the typical computer science major is more likely to experiment with different browsers. This presents an interesting challenge in supporting the tutorials over a long period of time.

As a student programming design project, a couple of important lessons have been learned. First, the design and implementation of the software in two distinct phases worked quite well. In the first phase all attention was focused on gates and interfaces, and in the later phase those objects were used to build other objects. The second phase thoroughly tested the objects built in the first phase. Second, having different students responsible for the two phases was very productive. Only at the end of the project did the two students meet to look at the end result. This encouraged very frank evaluation of the two separate phases.

The end product is a simple interactive tutorial that can be conveniently accessed and used by students. Also, faculty can add and modify lessons in the collection in a relatively simple and direct fashion. This package was not designed to provide the sophistication of professional circuit design systems. The intent is to assist beginning computer science students in their mastery of critically important, yet fairly simple, ideas related to logic and logical circuits.

This must be considered a work in progress. The software design and implementation is complete. Now the product must be used by students in the beginning classes. Some care must be taken to quantitatively and effectively measure its effect. At this point the only evaluation of the interactive tutorial is anecdotal.

References
- 1 - Brown, D. et al. Plan for the Class of 2000: Final Report of the Program Planning Committee. Wake Forest University, Jan. 1995.
- 2 - Cusick, G.
Java Logic Simulator (20 April 99).
- 3 - Gajski, Daniel D. Principles of Digital Design. Prentice Hall, Upper Saddle River, NJ, 1997.
- 4 - Grand, Mark. Java Language Reference. O'Reilly, Cambridge, Mass., 1997.
- 5 - Ross, K.A., and Wright, C.R.B. Discrete Mathematics, 4th ed. Prentice Hall, Upper Saddle River, NJ, 1999.
- 6 - Sloan, M.E. Computer Hardware and Organization, 2nd ed. Science Research Associates Inc., Chicago, 1983.
- 7 - Zukowski, John. Java AWT Reference. O'Reilly, Cambridge, Mass., 1997.

Biography

Jeremy Kindy is a junior Computer Science major at Wake Forest University. His interests and activities include reading and cheerleading. He was the student responsible for Phase II of the project.

John Shuping graduated in May, 1999, as a Computer Science major at Wake Forest University. His interests include networks and computer security. He is now employed by Scient, working in San Francisco. He was responsible for Phase I of the project.

David John is an associate professor of Mathematics and Computer Science. His principal interests lie in the area of genetic algorithms. He is one of the co-advisors of the project.

Patricia Yali Underhill is an instructor of Computer Science. She is interested in the integration of Computer Science and Human Movement Studies. She is one of the co-advisors of the project.

Copyright 2000 Jeremy Kindy, John Shuping, Patricia Yali Underhill and David John
http://www.acm.org/crossroads/xrds6-3/tutorial.html
In this chapter, we will learn how to test multiple URLs concurrently. For that, we will need to edit our application file, app.py, to include two URLs −

from bottle import Bottle, run

app = Bottle()

@app.route('/')
@app.route('/hello1')
def hello1():
    return "Hello World! It is first URL."

@app.route('/hello2')
def hello2():
    return "Hello World! It is second URL."

run(app, server='gunicorn', host='127.0.0.1', port=8080)

You can test both URLs by creating a shell script with multiple ab calls. Create a file test.sh and add the following lines to it −

ab -n 100 -c 10 http://127.0.0.1:8080/hello1
ab -n 100 -c 10 http://127.0.0.1:8080/hello2

When you have added the above lines, save and close the file. Make the file executable −

chmod u+x test.sh

Let us now run the script −

./test.sh

To avoid repetition and for the sake of clarity, we will show only the relevant portions of the ab output, indicating by dots which parts have been omitted, as in the following.

. . .
Document Path:          /hello1
Document Length:        732 bytes

Concurrency Level:      10
Time taken for tests:   0.040 seconds
Complete requests:      100
Failed requests:        0
Non-2xx responses:      100
Total transferred:      90000 bytes
HTML transferred:       73200 bytes
Requests per second:    2496.13 [#/sec] (mean)
Time per request:       4.006 [ms] (mean)
Time per request:       0.401 [ms] (mean, across all concurrent requests)
Transfer rate:          2193.87 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.8      0       3
Processing:     1    3   1.0      4       5
Waiting:        0    3   1.2      4       4
Total:          1    4   0.6      4       5

WARNING: The median and mean for the processing time are not within a normal deviation
         These results are probably not that reliable.
. . .

You can save the Apache Bench output to a file by creating a shell script with multiple ab calls. At the end of each line, place an &; this makes the command run in the background, and lets the next command start its execution. You will also want to redirect the output to a file for each URL using > filename.
For example, our file test.sh will look like the following after modification −

ab -n 100 -c 10 http://127.0.0.1:8080/hello1 > test1.txt &
ab -n 100 -c 10 http://127.0.0.1:8080/hello2 > test2.txt &

Here, test1.txt and test2.txt are the files to save the output data.

You can check that the above script has created two files, test1.txt and test2.txt, which contain the ab output for the respective URLs −

$ ls -l
...
-rw-r--r-- 1 root root 5225 May 30 12:11 out.data
-rwxr--r-- 1 root root  118 Jun 10 12:24 test.sh
-rw-r--r-- 1 root root 1291 Jun 10 12:31 test1.txt
-rwxr--r-- 1 root root   91 Jun 10 13:22 test2.sh
-rw-r--r-- 1 root root 1291 Jun 10 12:31 test2.txt
...

While using ab, you should be alert to failed tests, which are reported without any warning. For example, if you test a wrong URL, you may get something similar to the following (we have deliberately changed the port here).

$ ab -l -r -n 100 -c 10 -k -H "Accept-Encoding: gzip, deflate" http://127.0.0.1:805/
This is ApacheBench, Version 2.3 <$Revision: 1604373 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd,
Licensed to The Apache Software Foundation,

Benchmarking 127.0.0.1 (be patient).....done

Server Software:
Server Hostname:        127.0.0.1
Server Port:            805

Document Path:          /
Document Length:        Variable

Concurrency Level:      10
Time taken for tests:   0.002 seconds
Complete requests:      100
Failed requests:        150
   (Connect: 0, Receive: 100, Length: 0, Exceptions: 50)
Keep-Alive requests:    0
Total transferred:      0 bytes
HTML transferred:       0 bytes
Requests per second:    44984.26 [#/sec] (mean)
Time per request:       0.222 [ms] (mean)
Time per request:       0.022 [ms] (mean, across all concurrent requests)
Transfer rate:          0.00 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       0
Processing:     0    0   0.2      0       0
Waiting:        0    0   0.0      0       0
Total:          0    0   0.2      0       0

Percentage of the requests served within a certain time (ms)
  50%      0
  66%      0
  75%      0
  80%      0
  90%      0
  95%      0
  98%      0
  99%      0
 100%      0 (longest request)
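Because ab buries failures inside its summary text, it is easy to miss them when output is redirected to files. The small helper below is a sketch of our own (not part of the original tutorial): it pulls the count out of the "Failed requests" line of a saved ab report so a script can flag bad runs.

```python
import re

def failed_requests(ab_output: str) -> int:
    """Return the count from the 'Failed requests' line of an ab report."""
    match = re.search(r"^Failed requests:\s+(\d+)", ab_output, re.MULTILINE)
    if match is None:
        # No summary line at all usually means ab aborted before finishing.
        raise ValueError("no 'Failed requests' line found; run may have aborted")
    return int(match.group(1))

# Sample taken from the wrong-port run shown above.
sample = """Complete requests:      100
Failed requests:        150
Keep-Alive requests:    0
"""

print(failed_requests(sample))  # 150
```

Reading test1.txt and test2.txt with this helper and asserting the result is 0 turns the shell script above into a crude regression check.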
https://www.tutorialspoint.com/apache_bench/apache_bench_testing_multiple_urls_concurrently.htm
From Bugzilla Helper:
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.0.2) Gecko/20030208 Netscape/7.02

Description of problem:
Type casting behavior ("unsigned char" to "char") for function/method arguments of reference type depends on whether the program is compiled without or with optimization. Compiled without optimization, it behaves as expected and as gcc-2.96 behaved. Compiled with optimization (-O, -O1, -O2, or -O3) it does not behave as expected. See the example bugtest.cc below:

// This demonstrates a behavior change (bug?) in gcc-3.2-7
// which occurs only when compiled with -O (-O1 ... -O3).
// The expected output "1 1" will occur only if compiled without optimization:
//   gcc -o bugtest bugtest.cc; ./bugtest
// Output will be "0 1" if compiled with gcc-3.2-7 and -O flag:
//   gcc -O -o bugtest bugtest.cc; ./bugtest

#include <iostream>

void proc(char & c)
{
    c = 1;
}

int main(void)
{
    unsigned char uc1 = 0, uc2 = 0;

    // Works as expected (uc1 is set to 1) if compiled without -O.
    // Behaves differently (uc1's value remains 0) if compiled with -O:
    // apparently a temporary variable is created and set instead of uc1.
    proc((char)uc1);

    // Work around: this works as expected with and without optimization.
    proc((char&)uc2);

    std::cout << int(uc1) << " " << int(uc2) << "\n";
    return 0;
}

Version-Release number of selected component (if applicable): gcc-3.2-7

How reproducible: Always

Steps to Reproduce:
1. Copy, paste, and save the example (Description section) as bugtest.cc
2. Compile the example using gcc-3.2-7: g++ -O -o bugtest bugtest.cc
3. Execute the binary: ./bugtest

Actual Results: The output shown is "0 1".

Expected Results: The output should be "1 1".

Additional info: This did not happen with gcc-2.96 (Red Hat 7.x), and it does not happen when compiling with gcc-3.2-7 without an optimization flag.
This gcc-3.2 behavior leads to crashes and unexpected behavior in a huge multi-platform C++ project (Ptolemy) which I maintain for Red Hat Linux (creating updated rpm packages). A workaround for gcc-3.2-7 is to use the reference type in the cast (see the example code in the Description section). However, I did not check whether this works for older gcc compilers, too.

Your testcase is ill-formed. (char) c is an rvalue, and you can't bind a reference to non-const to an rvalue.
https://bugzilla.redhat.com/show_bug.cgi?id=86523
Patent application title: EVENT SERVER USING CACHING
Inventors: Andrew Piper (Amersham, GB); Alexandre De Castro Alves (San Jose, CA, US); Seth White (San Francisco, CA, US)
Assignees: BEA Systems, Inc.
IPC8 Class: AG06F1208FI
USPC Class: 711118
Class name: Storage accessing and control hierarchical memories caching
Publication date: 2009-11-26
Patent application number: 20090292877

Abstract:

An event server adapted to receive events from an input stream and produce an output event stream. The event server uses a processor using code in an event processing language to process the events. The event server obtaining input events from and/or producing output events to a cache.

Claims:

1. An event server adapted to receive events from an input stream and produce an output event stream, the event server using a processor using code in an event processing language to process the events, the event server obtaining input events from and/or producing output events to a cache.

2. The event server of claim 1, wherein the same event processing language code can be used to process input from a cache or from a stream.

3. The event server of claim 1, wherein the cache can be configured to be an event source or event sink.

4. The event server of claim 1, wherein the event server uses an event type repository to indicate at least one source or sink of events.

5. The event server of claim 4, wherein the cache is registered in the event type registry as a source or sink of events.

6. The event server of claim 4, wherein a change in the event type registry changes whether event processing language code uses a stream or cache as the source or sink.

7. The event server of claim 1, wherein the cache is a JCache based cache.

8.
An event server adapted to receive events from an input stream and produce an output event stream, the event server using a processor using code in an event processing language to process the events, the event server obtaining input events from and/or producing output events to a cache; wherein the event processor uses an event type repository to indicate a source or sink of events; and wherein a change of the event type registry changes whether the event processing language code uses a stream or cache as the source or sink.

9. The event server of claim 8, wherein the same event processing language code can be used to process input from a cache or from a stream.

10. The event server of claim 8, wherein the cache can be configured to be an event source or event sink.

11. The event server of claim 8, wherein the cache is registered in the event type registry as a source or sink of events.

12. The event server of claim 8, wherein the cache is a JCache based cache.

13. A computer readable storage medium comprising: code for an event server adapted to receive events from an input stream and produce an output event stream, the event server using a processor using code in an event processing language to process the events, the event server obtaining input events and/or produce output events into a cache.

14. The computer readable storage medium of claim 13, wherein the same event processing language code can be used to process input from a cache or from a stream.

15. The computer readable storage medium of claim 13, wherein the cache can be configured to be an event source or event sink.

16. The computer readable storage medium of claim 13, wherein the event server uses an event type repository to indicate a source or sink of events.

17. The computer readable storage medium of claim 13, wherein the cache is registered in the event type registry as a source or sink of events.

18.
The computer readable storage medium of claim 13, wherein a change in the event type registry changes whether event processing language code uses a stream or cache as the source or sink.

19. The computer readable storage medium of claim 15, wherein the cache is a JCache based cache.

Description:

CLAIM OF PRIORITY

[0001]This application claims priority to U.S. Provisional Application No. 61/054,658 entitled "EVENT SERVER USING CACHING" by Seth White, et al., filed May 20, 2008, which is hereby incorporated by reference [Atty. Docket No.: ORACL-02357US0].

BACKGROUND

[0002]Event servers are used to process streams of data in realtime. Typically, one or more input streams are converted into a much smaller output stream.

BRIEF DESCRIPTION OF THE DRAWINGS

[0003]FIG. 1 shows a caching system of one embodiment.

[0004]FIG. 2 shows an event server of one embodiment.

[0005]FIG. 3 shows a realtime application that can run on an event server.

DETAILED DESCRIPTION

[0006]One embodiment of the present inventions is an event server 102, adapted to receive events from an input stream 110 and produce an output event stream 111. The event server 102 can use a processor 104 using code 106 in an event processing language to process the events. The event server can obtain input events from and/or produce output events to a cache 108 or 109.

[0007]The use of a cache with the event server 102 allows for flexibility in the operation of the event server 102.

[0008]In one embodiment, the same event processing language code 106 can be used to process input from a cache 108 or from a stream 110.

[0009]The cache can be configured to be an event source or event sink.

[0010]The event server can use an event type repository 112 to indicate at least one source or sink of events. A cache can be registered in the event type registry 112 as a source or sink of events.
[0011]In one embodiment, a change in the event type registry 112 can change whether event processing language code 106 uses a stream or cache as the source or sink.

[0012]The cache can be a JCache based cache.

[0013]Caching can provide event driven applications with increased availability and high performance. The following describes an example of how caching functionality can be supported by an Event Server. Integration points can include the JAVA programming model, EPL query language, configuration, and management systems.

[0014]An Event Server application can publish output events to a cache in order to make them highly available or available to other applications running in the server. Publishing events to a cache can also allow for the events to be written asynchronously to secondary storage by the cache implementation.

[0015]In one example, an application publishes events to a cache while the market is open and then processes the data in the cache after the market closes.

[0016]A cache can be configurable as a listener on any Event Server component (Stage) that generates events, including input adapters, streams, pojos, and processors.

[0017]An Event Server application can need to access non-stream data in order to do its work. Caching this data can dramatically increase the performance of the application. The component types that can be allowed direct programmatic access to a cache are adapters (input and output), user-defined Event Processing Language (EPL) functions, and Plain Old JAVA Objects (POJO) components (stage and non-stage).

[0018]A special case is consuming data from a cache during evaluation of an EPL user-defined function. Authors of user-defined functions can be able to have a cache resource injected into the component that implements the function.

[0019]An example of this use case is the "in-play" sample application which uses a user-defined function to join together trade data with company data.
[0020]In one embodiment, the cache can be accessed directly from the EPL. A cache can function essentially as another type of stream data source to a processor, so querying a cache will be very similar to querying a stream. [0021]In one example, at the end of a trading day, the cache will contain orders as well as trades used to execute the order. It can be useful to query the cache in order to find all trades related to an order. [0022]A cache may contain data that needs to be updated or deleted by an Event Server application component. For example, an order object that is held in a cache may need to be updated as individual trades that fulfill the order are executed. An order that is canceled may need to be deleted from the cache. Any component that may be injected with a reference to a cache (adapters, functions, and POJOs) may be able to update and delete from the cache. [0023]Caching support can include support for local, single-JVM caches using a caching implementation. The caching framework can support the addition of distributed caching using the cache. [0024]Caching support can include integration with third-party products that provide both local and distributed caching. Products that can be supported include GemFire, Tangosol, GigaSpaces and OSCache. [0025]A cache can be preloaded with data when an application is deployed. [0026]A cache can periodically refresh or reload its state. This can be done incrementally and not halt the application or cause spikes in latency. [0027]Invalidation of cached data can be scheduled. This can be done incrementally and not halt the application or cause spikes in latency. [0028]Cached data can be periodically flushed. This can be done incrementally and not halt the application or cause spikes in latency. [0029]Support for querying a cache using EPL can be provided. [0030]A cache's configuration can be dynamically updated using JMX and also runtime statistics accessed. 
[0031]Appendix I shows exemplary JAVA programming interfaces.

[0032]Large local cache sizes (4-16 GB) can be supported.

[0033]Event Server applications can use a caching implementation such as the built-in Event Server local cache or third-party caching solutions like GemFire, Tangosol, or GigaSpaces. A configured instance of a caching implementation can be referred to as a "caching system" in this specification. A caching system can define a named set of configured caches as well as configuration for remote communication if any of the caches are distributed across multiple machines.

[0034]A caching system can be configured at the application level. In one embodiment, there is an application that owns the caching system instance in some sense. Other applications may use the caching system, but in order to do so they can be deployed after the application that owns the caching system, so that the caching system is available to them.

[0035]Caching systems can be configured using the dynamic/external configuration feature of the Event Server. Custom Spring tags can also be provided in order to configure caching-related entities that are part of an Event Server event processing network (EPN). These caching related entities may or may not be a Stage in the event processing network. This depends on how they are used in a particular application context.

[0036]The example below shows how a local cache(s) can be configured using the Event Server local cache implementation. The example is not normative. The XML fragment below would be part of the external configuration for the application declaring the caching system.

TABLE-US-00001
<wlevs-caching-system>
  <name>caching-system-id</name>
  <cache>
    <name>cache-id</name>
    <max-size>100000</max-size>
    <eviction-policy>LRU</eviction-policy>
    <time-to-live>3600</time-to-live>
  </cache>
  <cache>
  ...
</cache> </wlevs-caching-system> [0037]In one embodiment, there are three fundamental use cases involved in configuring a caching system: using the Event Server cache, using a third-party cache implementation that is packaged as part of the owning application bundle, and using a third-party cache implementation that is packaged in a separate bundle. [0038]An application may simply specify external configuration when using the built in Event Server caching implementation. Nothing else is required to configure and use the caching system as long as there is only one caching system defined for the application. [0039]The application may also declare an explicit element in its Spring application context to represent the caching system, like so: [0040]<caching-system [0041]In this case, the value of the id attribute can match the name specified for the caching system in the external configuration metadata. If the application wishes to allow other applications to use the caching system, it can specify that the caching system be advertised. [0042]<caching-system [0043]In one embodiment, the following options are also available when using the Event Server caching implementation. They are a special case of the configuration that is more typically used for a third-party caching implementation. TABLE-US-00002 <caching-system <caching-system [0044]Either option may be used without requiring any additional packages to be imported or exported by the application bundle. In one embodiment, it is not necessary for the application bundle to import the com.bea.wlevs.cache package if the class attribute is used (this package is already imported by the Spring bundle). In one embodiment, the id attribute in the Spring caching-system element must match the name attribute specified in the dynamic configuration in both cases above. [0045]An exemplary Third-party caching implementation (included in application bundle) is described below. 
[0046]The idea here is to allow users to keep things simple when using a third-party cache by doing a couple of things:

[0047]1. Package both their application code and caching implementation code in a single bundle.

[0048]2. Avoid using or implementing a factory.

[0049]3. Avoid using the OSGi service registry.

[0050]4. Configure their caching system as much like a simple Spring bean as possible.

[0051]Here is what the caching-system element looks like in this case.

[0052]<caching-system

[0053]The class attribute specifies a JAVA class that must implement the com.bea.wlevs.cache.api.CachingSystem interface. The class attribute can be required when using a third-party caching implementation since the default value for class is com.bea.wlevs.cache.WlevsCachingSystem which specifies that the BEA caching implementation is being used. The external configuration for the caching system can be part of the external configuration for the application. No additional imports or exports of packages related to caching are required of the application bundle since the caching implementation is part of the bundle. The advantage of this approach is to avoid the overhead of creating a separate bundle for the caching implementation and the extra complexity involved with managing the lifecycle of that bundle.

[0054]Exemplary Third-party caching implementation (separate bundle) is described below.

[0055]A simple way to configure a separate caching bundle is to begin by creating a bundle containing just the Spring configuration for the caching system. In other words, a Spring application context that contains only a caching-system element. This caching bundle can then be deployed along with the external configuration for the caching system and the caching system is advertised so that other bundles can use the caching system.
[0056]The caching system tag in the Spring application context can look like this: TABLE-US-00003 <caching-system [0057]The bundle containing the caching system could be provided by the event server company, a caching vendor, or the customer. Alternately, under a factory approach, a bundle wishing to declare a caching system instance can include a tag like the following in its Spring application context. TABLE-US-00004 <caching-system [0058]The caching implementation bundle can contain a tag in its application context that looks like this: [0059]<factory id="factory-id" provider- <class>package.name.class.name</class> [0060]</factory> [0061]The caching-system element can contain static Spring configuration. The caching-system element can be a factory bean for caches in the Spring sense. [0062]A caching system can be configured this way if custom tags are not used. [0063]<bean id="caching-system-id" class="caching-system-implementation-class"/> [0064]An individual cache that is part of a caching system is configured (from the point of view of the application context) using the <cache> element in the Spring configuration file. TABLE-US-00005 <cache id="cache-id" cache- <caching-system </cache> [0065]The caching-system attribute can specify the caching system that contains the cache. The cache-name attribute can be optional. The cache-name attribute can be used to specify the name for the cache in the caching system if the name is different than the value of the id attribute. [0066]The cache bean can implement the java.util.Map interface. [0067]A cache element itself may not require external configuration data. The configuration of individual caches is part of the metadata for the caching system. The caching system can be responsible for creating the cache associated with a particular name and returning a reference to the cache. [0068]A cache element need not be exported as an OSGi service using the advertise attribute. 
[0069]A cache can be configured this way if custom tags are not used: TABLE-US-00006 <bean id="cache-id" factory- <constructor-arg </bean>

[0070]In one embodiment, a cache will only be required to explicitly reference a caching system when the caching system that is to be used is ambiguous. In one example:

[0071]1. A caching system is not ambiguous when a single caching system is declared as part of the same application that declares the cache. (When the Event Server cache is being used, a caching system is declared implicitly in the Spring application context whenever a caching system is configured using the dynamic configuration facility. All a developer needs to do is declare the dynamic configuration and then declare cache elements that reference it.)

[0072]2. No caching system is declared by the application, and a single caching system has been advertised by another application.

[0073]Under these circumstances, the following form can be allowed for the cache element:

[0074]<cache id="cache-id" cache-

[0075]An object that implements user-defined EPL functions can be configured like a regular spring bean:

[0076]<bean id="epl-function-id" class="epl-function-implementation-class"/>

[0077]The class attribute can specify an arbitrary JAVA class or POJO. Public member methods on this class can be invoked by a processor as a user-defined function in an EPL query.
The syntax for referencing a method of this bean as a user-defined function within an EPL query can be:

[0078]<epl-function-id>.<method-name>

[0079]For example, here is an exemplary EPL query that invokes a user-defined function:

[0080]insert into InstitutionalOrder
[0081]select er.orderKey as key,
[0082]er.symbol as symbol,
[0083]er.shares as cumulativeShares
[0084]from ExecutionRequest er retain 8 hours with unique key
[0085]where not orderFunction.existsOrder(er.orderKey)

[0086]In this example, a processor can be injected with the EPL function bean like so TABLE-US-00007 <processor id = "tradeProcessor"> <function ref="orderFunction"/> </processor>

[0087]The Spring bean name of the user-defined function can be used within the EPL query to refer to the bean. It is possible to inject a processor with multiple EPL function components by specifying multiple <function> sub-elements of <processor>.

[0088]If query-name is specified TABLE-US-00008 <processor id = "tradeProcessor"> <function query- </processor> then the syntax for referencing a method is:

[0089]queryName.<method-name>

[0090]If a user-defined function bean needs access to a resource such as a cache it can provide a public method for injecting the resource and declare a reference to the resource as part of its Spring configuration. TABLE-US-00009 <bean id="epl-function-id"> <property name="cache" ref="cache-id"/> </bean>

[0091]User-written POJOs can be configured using the bean element in the Spring configuration file. A cache can be injected into a POJO component using the normal spring mechanism for referencing another bean. TABLE-US-00010 <bean id="bean-id" > <property name="map" ref ="cache-id"/> </bean >

[0092]The POJO class can have a method like this: TABLE-US-00011 import JAVA.util.Map; public class MyComponent { ... public void setMap (Map map) {...} }

[0093]In one example, an adapter that references a cache can be configured. This case can be similar to the POJO case in the previous section.
TABLE-US-00012 <adapter id="input-adapter-id"> <instance-property </adapter >

[0094]The adapter class can contain a public setter method for the Map: TABLE-US-00013 import JAVA.util.Map; public class MyAdapter { ... public void setMap (Map map) {...} }

[0095]A cache can be configured as a listener in order to receive events. For example, TABLE-US-00014 <cache id="cache-id"> <caching-system </cache> <stream id="tradeStream"> <listener ref="cache-id"/> </stream> registers a cache as a listener of the stream. New events that are received by the cache can be inserted into the cache. In one example, Map.put ( ) is called. Remove events that are received by the cache result in Map.remove ( ) being called.

[0096]A couple of options can be provided for keys to index a cache that contains events.

[0097]Option 1 is to allow the application developer to specify a property name for the key property when a cache is declared. TABLE-US-00015 <cache id="cache-id" key- <caching-system </cache>

[0098]Events that are inserted into the cache can be required to have a property of this name at runtime, else an exception is thrown.

[0099]Option 2 is to provide an annotation, such as com.bea.wlevs.ede.api.Key, which can be used to annotate an event property, providing the ability to identify a unique key for an event.

[0100]Option 3 is not to specify a key. In this case the event object itself will serve as both the key and value when an event is inserted into the cache. The event class can provide a valid equals and hashcode method that takes into account the values of the key properties, in this case.

[0101]Option 4 is used to specify a composite key in which multiple properties form the key. An explicit composite key is indicated by specifying a key class. A key class can be a JAVABean whose public fields match fields that are present in the event class.
The matching can be done according to the field name: TABLE-US-00016 <cache id="cache-id" key- <caching-system </cache>

[0102]A cache can be a source of events. For example, a cache can generate an event when a new mapping is inserted into the cache by calling the put method. A listener may be attached to a cache using the listener sub-element. TABLE-US-00017 <cache id="cache-id"> <caching-system <listener ref="cache-listener-id"/> </cache>

[0103]<bean id="cache-listener-id" class="cache-listener-implementation-class"/>

[0104]Cache listeners can implement the com.bea.cache.jcache.CacheListener interface.

[0105]A cache may be configured with a custom loader that is responsible for loading data into the cache when there is a cache miss. A loader may be configured for a cache using the loader sub-element. TABLE-US-00018 <cache id="cache-id"> <caching-system <loader ref="cache-loader-id"/> </cache> <bean id="cache-loader-id" class="cache-loader-implementation-class"/>

[0106]Cache loaders can implement the com.bea.cache.jcache.CacheLoader interface.

[0107]A cache may be configured with a custom store that is responsible for storing data when data is written from the cache to a backing store. A store can be configured for a cache using the store sub-element. TABLE-US-00019 <cache id="cache-id"> <caching-system <store ref="cache-store-id"/> </cache> <bean id="cache-store-id" class="cache-store-implementation-class"/>

[0108]EPL query language can be enhanced so that a cache can be referenced in much the same way that a stream is referenced currently. For example, the query below can perform an inner join between trade events received from a stream and company data held in a cache. This is an example of a common use case in which an event is "enriched" with data from another source.
TABLE-US-00020
INSERT INTO EnrichedTradeEvent
SELECT trade.symbol, trade.price, trade.numberOfShares, company.name
FROM TradeEvent trade RETAIN 8 hours, Company company
WHERE trade.symbol = company.id

[0109]Note that the Company type can be used in the SELECT, FROM, and WHERE clauses. In the query above TradeEvent and Company are types that are registered in the event type repository. The FROM clause can continue to reference only registered types, as opposed to directly referencing a node in the EPN. The Company type can be mapped to a particular node (a cache in this case) that is the source for instances of the type as part of the XML configuration. The mapping can be external to the query, in other words, just as it is for Streams.

[0110]Another interesting aspect to the enrichment example above is that Company data can be pulled from the cache instead of being pushed like it is in the case of a Stream. Pulling data from a cache can be the default behavior while push remains the default behavior for streams. In one example, this means that the query fires only when trade events arrive--are pushed to the query by a stream. In one example, company instances are not events but simply data values that are pulled from the cache.

[0111]A cache can declare a type for the values that it contains in its Spring configuration. TABLE-US-00021 <wlevs:cache <value-type </wlevs:cache>

[0112]In this case, Company is a type that is registered in the event type repository. A cache can be wired to a processor as a source just as a stream is, but as was noted above data will be pulled from the cache by default instead of pushed to the processor. TABLE-US-00022 <wlevs:stream <wlevs:processor <source ref="cacheId"> <source ref="streamId"> </wlevs:processor>

[0113]In the example that we have been fleshing out so far, the query processor examines its sources and detects that the cache is a source for the Company type.
The query processor then will pull data from the cache during query evaluation. When the query processor matches a type in the from clause to a type supplied by a cache, it will essentially assume that instances of that type are to be pulled from the cache. [0114]If a value type is not specified by the cache, then the Company type can be mapped to a cache as part of the dynamic/external configuration for the query. For example, TABLE-US-00023 <processor> <name>processorId</name> <rules> <rule id="ruleId"> <query> INSERT INTO EnrichedTradeEvent SELECT trade.symbol, trade.price, trade.numberOfShares, company.name FROM TradeEvent trade RETAIN 8 hours, Company company WHERE trade.symbol = company.id </query> <source name="Company" ref="cacheId"/> </rule> </rules> </processor> [0115]If the type name is ambiguous then an alias may be used as the value of the name attribute. [0116]If a cache is referenced by an EPL query, the key properties for data in the cache must be specified. This can be done in the following ways: [0117]1. Annotate one or more properties of the value class for the cache with the @Key annotation. [0118]2. List one or more key properties in a comma separated list using the key-properties attribute of the cache. [0119]3. Specify a key class for the cache. [0120]Appendix II shows some sample applications that use caches. Whether the cache is local or distributed is specified in the configuration for the caching system and does not impact the topology of the event processing network. [0121]Appendix III shows an XML Schema for Native BEA Cache. [0122]An exemplary event server is shown in FIG. 2. An Event Server can be a low latency framework, such as a JAVA based middleware framework for event driven applications. The event server can be a lightweight application server which can connect to high volume data feeds and can have a complex event processing engine (CEP) to match events based on user defined rules.
[0123]The Event Server can have the capability of deploying user JAVA code (POJOs) which contain the business logic. Running the business logic within the Event Server can provide a highly tuned framework for time and event driven applications. [0124]An event-driven system can be comprised of several event sources, the real-time event-driven (WebLogic Event Server) applications, and event sinks. The event sources can generate streams of ordinary event data. The Event Server applications can listen to the event streams, process these events, and generate notable events. Event sinks can receive the notable events. [0125]Event sources, event-driven applications, and event sinks can be de-coupled from each other; one can add or remove any of these components without causing changes to the other components. [0126]Event-driven applications can be rule-driven. These rules, or queries, which can be persisted using some data store, are used for processing the inbound stream of events, and generating the outbound stream of events. Generally, the number of outbound events can be much lower than that of the inbound events. [0127]The Event Server can be a middleware for the development of event-driven applications. The Event Server application can be essentially an event-driven application. [0128]An application can be hosted by the Event Server infrastructure, a light-weight container. The application can be as described by the diagram of FIG. 3. [0129]An Event Server application can be comprised of four main component types. Adapters can interface directly to the inbound event sources. Adapters can understand the inbound protocol, and be responsible for converting the event data into a normalized data that can be queried by a processor (i.e. event processing agent, or processor). Adapters can forward the normalized event data into Streams. Streams can be event processing endpoints. 
Among other things, streams can be responsible for queuing event data until the event processing agent can act upon it. The event processing agent can remove the event data from the stream, process it, and may generate new events to an output stream. The user code can register to listen to the output stream, and be triggered by the insertion of a new event in the output stream. The user code can be generally just a plain-old-JAVA-object (POJO). The user application can make use of a set of external services, such as JMS, WS, and file writers, to forward on the generated events to external event sinks. [0130]Adapters, streams, processors, and business logic POJOs can be connected arbitrarily to each other, forming event processing networks (EPN). Examples of topologies of EPN's are: [0131]Adapter>Stream>Business Logic POJO [0132]Scenario: no processing is needed, aside from adaptation from a proprietary protocol to some normalized model. [0133]Adapter>Stream>Processor>Stream>Business Logic POJO [0134]Scenario: straight through processing to user code. [0135]Adapter>Stream>Processor>Stream>Business Logic POJO>Stream>Processor>Stream>Business Logic POJO [0136]Scenario: two layers of event processing, the first processor creates causality between events, and the second processor aggregates events into complex events. [0137]Event Processing Networks can have two important attributes. [0138]First, event processing networks can be used to create a hierarchy of processing agents, and thus achieve very complex processing of events. Each layer of the Event Processing Network can aggregate events of its layer into complex events that become simple events in the layer above it. [0139]A second attribute of event processing networks is that they help with integrability, that is, the quality of having separately developed components work correctly together. For example, one can add user code and reference to external services at several places in the network.
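The Adapter > Stream > Processor > business-logic POJO topologies listed above can be sketched generically. The classes below are invented for illustration only and are not the WebLogic Event Server API:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;
import java.util.function.Consumer;
import java.util.function.Predicate;

// Stream: queues normalized event data until a processing agent consumes it
class EventStream<E> {
    private final Queue<E> queue = new ArrayDeque<>();
    private final List<Consumer<E>> listeners = new ArrayList<>();

    void publish(E event) { queue.add(event); drain(); }
    void subscribe(Consumer<E> listener) { listeners.add(listener); }

    private void drain() {
        E e;
        while ((e = queue.poll()) != null)
            for (Consumer<E> l : listeners) l.accept(e);
    }
}

public class EpnSketch {
    public static void main(String[] args) {
        EventStream<Double> in = new EventStream<>();
        EventStream<Double> out = new EventStream<>();

        // Processor: a trivial "rule" that forwards only notable events
        Predicate<Double> rule = price -> price > 100.0;
        in.subscribe(p -> { if (rule.test(p)) out.publish(p); });

        // Business-logic POJO: listens on the output stream
        out.subscribe(p -> System.out.println("notable: " + p));

        // Adapter: converts raw feed data into normalized events
        for (String raw : new String[]{"99.5", "101.2", "250.0"})
            in.publish(Double.parseDouble(raw));
    }
}
```

The first event is filtered out by the rule; the other two reach the listener. The point of the sketch is the decoupling: the adapter, the rule, and the listener know only about the stream in between, so any of them can be replaced without touching the others.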
[0140]The use cases for the Event Server can span a variety of businesses: [0141]Financial: Algorithmic Trading: [0142]Automate stock trading based on market movement. Sample query: if, within any 20 second window, StockB rises by more than 2% and StockA does not, then automatically buy StockA. [0143]Transportation: Security and Fraud Detection: [0144]Discover fraudulent activity by detecting patterns among events. Sample query: if a single ID card is used twice in less than 5 seconds to gain access to a city's subway system, alert security for piggybacking. [0145]Energy and Telecommunications: Alarm Correlation: [0146]Reduce false positive alarms. Sample query: When 15 alarms are received within any 5 second window, but less than 5 similar alarms detected within 30 seconds, then do nothing. [0147]Health Care: Patient Monitoring: [0148]Monitor the vital signs of a patient and perform some task if a particular event happens. Sample query: When a change in medication is followed by a rise in blood pressure within 20% of maximum allowable for this patient within any 10 second window, alert nearest nurse. [0149]An application server can support deployment of Plain Old JAVA applications (POJOs), or Spring applications, for handling large volumes of streaming data with low latency requirements. [0150]Event Server applications can be developed and deployed as event driven applications, that is, a set of custom Spring tags is used to define the event processing network in the EPN assembly file, which extends the standard Spring context file of your application. [0151]The application server can contain a set of real time services that include a complex event processor (CEP), adapters, and streams. The server can be tuned for high message throughput and low latency and deterministic behavior. [0152]The complex event processor can be a high performance, continuous query engine for processing high volumes of streaming data.
It can have full support for filtering, correlation, and aggregation of streaming data from one or more streams. [0153]An event processing language can be used. The Event Processing Language (EPL) can be a SQL-like language that allows event data from streams to be declaratively filtered, correlated, aggregated, and merged, with the ability to insert results into other streams for further downstream processing. The EPL rules can be defined in an XML file that configures the complex event processor or programmatically using APIs. [0154]An Adapter SDK can provide all the tools you need to create adapters that listen to incoming data feeds. [0155]A load generator utility can simulate a data feed, useful for testing your application without needing to connect to a live data feed. [0156]A monitoring service can include pre-built instrumentation for measuring throughput and latency at the component level. [0157]A static and dynamic configuration framework can be used. Static configuration can be performed using XML files; dynamic configuration can be performed by accessing configuration and runtime MBeans using JMX and with the command-line utility wlevs.Admin. [0158]The Event Server can be built on a microServices Architecture (mSA) which can use an OSGi-based framework to manage services provided by modules or feature sets. An mSA can provide the following services: [0159]Jetty, an HTTP container for running servlets. [0160]javax.sql.DataSource implementation and thin JDBC drivers for accessing a relational database. [0161]Logging and debugging. [0162]Authentication and authorization security. Inventors: Alexandre De Castro Alves, San Jose, CA US; Andrew Piper, Amersham GB; Seth White, San Francisco, CA US. Assignee: BEA Systems, Inc.
http://www.faqs.org/patents/app/20090292877
A friend of a class is a function or class that is given permission to use the private and protected member names from the class. A class specifies its friends, if any, by way of friend declarations. Such declarations give special access rights to the friends, but they do not make the nominated friends members of the befriending class. [ Example: class X { enum { a=100 }; friend class Y; }; class Y { int v[X::a]; // OK, Y is a friend of X }; class Z { int v[X::a]; // error: X::a is private }; — end example ] A class shall not be defined in a friend declaration. [ Example: class A { friend class B { }; // error: cannot define class in friend declaration }; — end example ] A friend declaration that does not declare a function shall have one of the following forms: friend elaborated-type-specifier ; friend simple-type-specifier ; friend typename-specifier ; [ Note: A friend declaration may be the declaration in a template-declaration (Clause [temp], [temp.friend]). — end note ] If the type specifier in a friend declaration designates a (possibly cv-qualified) class type, that class is declared as a friend; otherwise, the friend declaration is ignored. [ Example: class C; typedef C Ct; class X1 { friend C; // OK: class C is a friend }; class X2 { friend Ct; // OK: class C is a friend friend D; // error: no type-name D in scope friend class D; // OK: elaborated-type-specifier declares new class }; template <typename T> class R { friend T; }; R<C> rc; // class C is a friend of R<C> R<int> Ri; // OK: "friend int;" is ignored — end example ] A function first declared in a friend declaration has the linkage of the namespace of which it is a member ([basic.link]). Otherwise, the function retains its previous linkage ([dcl.stc]). When a friend declaration refers to an overloaded name or operator, only the function specified by the parameter types becomes a friend. A member function of a class X can be a friend of a class Y.
[ Example: class Y { friend char* X::foo(int); friend X::X(char); // constructors can be friends friend X::~X(); // destructors can be friends }; — end example ] A function can be defined in a friend declaration of a class if and only if the class is a non-local class ([class.local]), the function name is unqualified, and the function has namespace scope. Such a function is implicitly an inline function. A friend function defined in a class is in the (lexical) scope of the class in which it is defined. A friend function defined outside the class is not ([basic.lookup.unqual]). No storage-class-specifier shall appear in the decl-specifier-seq of a friend declaration. A name nominated by a friend declaration shall be accessible in the scope of the class containing the friend declaration. The meaning of the friend declaration is the same whether the friend declaration appears in the private, protected or public ([class.mem]) portion of the class member-specification. Friendship is neither inherited nor transitive. If a friend declaration appears in a local class ([class.local])
https://timsong-cpp.github.io/cppwp/n4659/class.friend
> On Sept. 22, 2015, 6:55 p.m., Cong Wang wrote:
> > 3rdparty/libprocess/3rdparty/stout/include/stout/os/linux.hpp, line 52
> > <>
> >
> > s/namespaces/flags/, because SIG* can be or'ed with namespaces. Or even
> > better, pass namespaces separately with signals as two different parameters?
>
> haosdent huang wrote:
>     Because I see ::clone only have one param, I just change
>     s/namespaces/flags/ here.
>
> Cong Wang wrote:
>     I meant:
>
>     clone(...int flags, int signal)
>     {
>         ::clone(... flags | signal, ...);
>     }
>
> haosdent huang wrote:
>     Got it, let me change to that.
>
> Jie Yu wrote:
>     I suggest keeping 'flags' for now because 'clone' takes an or'ed flags.

Oh, got it. #_#

- haosdent

-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
-----------------------------------------------------------

On Sept. 23, 2015, 1:26 a.m., haosdent huang wrote:
>
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
>
> -----------------------------------------------------------
>
> (Updated Sept. 23, 2015, 1:26 a.m.)
>
> Review request for mesos, Jie Yu and Cong Wang.
>
> Bugs: MESOS-3474
>
> Repository: mesos
>
> Description
> -------
>
> Add os::clone function in stout.
>
> Diffs
> -----
>
> 3rdparty/libprocess/3rdparty/stout/include/stout/os/linux.hpp b994b13941628947c9d12b8baae155d5da1ec7bd
>
> Diff:
>
> Testing
> -------
>
> Thanks,
>
> haosdent huang
https://www.mail-archive.com/reviews@mesos.apache.org/msg10371.html
Let's say you have the code:

WebDriver driver;

@Before
public void setUp() {
    driver = new FirefoxDriver();
}

and you want to change it so the test cases will generate a screen capture file when an exception is thrown. Here is the first change:

WebDriver driver;

@Before
public void setUp() {
    WebDriverEventListener eventListener = new MyEventListener();
    driver = new EventFiringWebDriver(new FirefoxDriver()).register(eventListener);
}

The second change is to create the MyEventListener class. The MyEventListener class will be:

public class MyEventListener implements WebDriverEventListener {
    // All the methods of the WebDriverEventListener need to
    // be implemented here. You can leave most of them blank.
    // For example...
    public void afterChangeValueOf(WebElement arg0, WebDriver arg1) {
        // does nothing
    }

    // ...

    public void onException(Throwable arg0, WebDriver arg1) {
        String filename = generateRandomFilename(arg0);
        createScreenCaptureJPEG(filename);
    }
}

The MyEventListener class will have 15 methods, including the two examples I have given here. The main method that you must implement if you want screen captures whenever an exception is thrown would be the onException method. The biggest trick for this method is generating a unique filename for each exception. First thought is that the filename could be in the format "YYYY-MM-DD-HH-MM-SS.jpg". Unless you get two exceptions in one second this will work okay. Unfortunately, it will be hard to figure out what the exception was unless you kept some sort of log of the code execution. You'll also have to waste time figuring out which exception goes with which date/time. Personally, I'd use the format "YYYY-MM-DD-HH-MM-SS-message-from-the-throwable-argument.jpg". Selenium tends to throw multiple-line exception messages. So you could take the first line of the message and change characters which are illegal for your file system to underscores.
You could also have something to set the location of the screen capture files and prepend that to the filename. Here is the code I came up with in 2 minutes:

private String generateRandomFilename(Throwable arg0) {
    Calendar c = Calendar.getInstance();
    String filename = arg0.getMessage();
    int i = filename.indexOf('\n');
    if (i >= 0) { // guard: the message might be a single line
        filename = filename.substring(0, i);
    }
    filename = filename.replaceAll("\\s", "_").replaceAll(":", "") + ".jpg";
    filename = "" + c.get(Calendar.YEAR) + "-"
            + (c.get(Calendar.MONTH) + 1) + "-" // Calendar.MONTH is zero-based
            + c.get(Calendar.DAY_OF_MONTH) + "-"
            + c.get(Calendar.HOUR_OF_DAY) + "-"
            + c.get(Calendar.MINUTE) + "-"
            + c.get(Calendar.SECOND) + "-"
            + filename;
    return filename;
}

The final part is the code to actually generate the file. This is standard Robot stuff. Here is the code I whipped together a few projects back:

private void createScreenCaptureJPEG(String filename) {
    try {
        BufferedImage img = getScreenAsBufferedImage();
        File output = new File(filename);
        ImageIO.write(img, "jpg", output);
    } catch (IOException e) {
        e.printStackTrace();
    }
}

private BufferedImage getScreenAsBufferedImage() {
    BufferedImage img = null;
    try {
        Robot r = new Robot();
        Toolkit t = Toolkit.getDefaultToolkit();
        Rectangle rect = new Rectangle(t.getScreenSize());
        img = r.createScreenCapture(rect);
    } catch (AWTException e) {
        e.printStackTrace();
    }
    return img;
}

And that is it. Whenever an exception is thrown a file will be generated.

29 comments:

Nice Article! thanks.

very nice article. It was very helpful to me. Simps

Nice article. But I have one comment and one question. First the comment: you can use AbstractWebDriverEventListener instead of WebDriverEventListener so you do not have to override all the methods you do not use. The Question: I see you are using HtmlUnitDriver. But I never get any visible output from this driver. So how do you make HtmlUnitDriver render HTML in a GUI?

Thanks for the comments Ralph. I am not sure why I used HtmlUnitDriver in my example.
It does not display a GUI and therefore shouldn't be used for this example. I have changed it to FirefoxDriver.

Great article! Thank you for sharing!

I have tried this but this method is not working for me :( Please guide/help. "onException" method is not at all being called. what should i do for this. Please help me, please.

onException function will be called automatically when exception occur. Make some exception using some selenium functions like VerifyXXXXX, AssertXXXXX, etc. If you wonder how to use those functions, refer the description of each function in the selenium package.

Great post, thanks a lot

Can you please help me how i can implement this with the Remote web driver.

To webuser: Bottom line, you can only do this with FirefoxDriver. Maybe you should file a feature request with the Selenium team to have them add TakesScreenshot to RemoteWebDriver, if RemoteWebDriver is using Firefox.

Nice Article! thanks. I have one use case in same please clarify for that. I want to take a screen shot for each and every event (means click on button or tab or link). Then i want to capture the page. I want to capture my application from login to logout. Please give me some solution. Thanks, Ajay

Ajay, have a look at

where are the screenshots stored in the machine? Where are the screenshots stored on the machine? or they are within the project workspace?
Can I get any document about the same on Selenium site? Thanks, Naresh I determine a lot from the source code. The documentation can always be out of date but the source code must be what the tool is currently doing. You can go to to browser the source code. This page also has instructions on checking out the source code. If I look at the Java bindings: trunk/java/client/src/org/openqa/selenium/chrome/ChromeDriver.java trunk/java/client/src/org/openqa/selenium/firefox/FirefoxDriver.java trunk/java/client/src/org/openqa/selenium/ie/InternetExplorerDriver.java trunk/java/client/src/org/openqa/selenium/iphone/IPhoneDriver.java trunk/java/client/src/org/openqa/selenium/safari/SafariDriver.java all now have "implements TakesScreenshot". So all the drivers now support taking a screen shot.? Thanks, Naresg The bindings for RemoteWebDriver in Java does not have "implements TakesScreenshot". So it does not support taking a screen capture. Darrell, just wanted to say thank you for your blog! It's awesome! Very useful and interesting. You're doing great job! Kyryll, Kiev, Ukraine. This really work...Just a small changes in the code to extract the substring of complete message would help to reduce the filename :-) Thanks Darrell! I noticed you state this is for exceptions. Will this also generate a screenshot for failed assertions? Hey Cameron. It will not work if you throw an assertion. Frameworks like junit or TestNG will throw AssertionError which are different from Exception. This will only work for whenever a runtime exception is thrown. If you are explicitly throwing an assert you could write your work assert which checks the assert condition and if false does a screen capture then throws an assert. This code is more for when runtime exceptions occur. These are things you cannot anticipate and therefore need a general catchall structure.. Camer. Rather than have a conversation here you should post a general message on. I monitor it and will see anything you post there. 
i m getting java.io.FileNotFoundException: D:\newJunoWorkspace\UCTest\ScreenShots\22_Nov_2013__12_59_51PM_192.168.110.101.png (The system cannot find the path specified) error once the Assertion fails and Image is getting Captured only when browser dies. what is the error i m getting ?.
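The message-to-filename step discussed in the post can be isolated into a small helper (pure Java, no Selenium required; the added guard handles single-line messages, which the original code did not):

```java
public class ExceptionFilename {
    // Turn the first line of an exception message into a safe filename:
    // whitespace becomes '_', colons (illegal on Windows paths) are dropped.
    static String sanitize(String message) {
        int i = message.indexOf('\n');
        String firstLine = (i >= 0) ? message.substring(0, i) : message;
        return firstLine.replaceAll("\\s", "_").replaceAll(":", "") + ".jpg";
    }

    public static void main(String[] args) {
        System.out.println(sanitize("Element not found: id=submit\nat Foo.bar(Foo.java)"));
        System.out.println(sanitize("timeout"));
    }
}
```

In real use you would prepend the timestamp (and optionally a configurable directory) exactly as generateRandomFilename() does above.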
https://darrellgrainger.blogspot.com/2011/02/generating-screen-capture-on-exception.html?showComment=1359530884808
WCTRANS(3)              NetBSD Library Functions Manual              WCTRANS(3)

NAME
     wctrans -- get character mapping identifier by name

LIBRARY
     Standard C Library (libc, -lc)

SYNOPSIS
     #include <wctype.h>

     wctrans_t
     wctrans(const char *charmap);

DESCRIPTION
     The wctrans() function returns a character mapping identifier
     corresponding to the locale-specific character mapping name charmap.
     This identifier can be used in subsequent calls of towctrans().
     The following names are defined in all locales: tolower toupper

     The behaviour of wctrans() is affected by the LC_CTYPE category of the
     current locale.

RETURN VALUES
     wctrans() returns:

     0         If the string charmap does not correspond to a valid
               character mapping name.

     non-zero  A character mapping identifier corresponding to charmap.

     Note: wctrans_t is a scalar type, e.g., a pointer.

ERRORS
     No errors are defined.

SEE ALSO
     iswctype(3), setlocale(3), wctype(3)

STANDARDS
     The wctrans() function conforms to ISO/IEC 9899/AMD1:1995 (``ISO C90,
     Amendment 1'').

NetBSD 9.99                     March 4, 2003                      NetBSD 9.99
http://man.netbsd.org/wctrans.3
can't find QuasiDenseStereo in opencv_contrib_python Hi everyone, It's very possible I'm overlooking something, but I can't seem to find the Quasi Dense Stereo implementation in opencv_contrib for python. It's part of the stereo module in contrib. Using python3.6 on 64-bit linux, just recompiled the whl file to make sure that quasi_dense_stereo.{cpp,hpp} are getting processed, which it is. But I can't find it in the python module. I've made sure the import cv2 picks the correct version, the one that I just compiled, but there's no QuasiDenseStereo class anywhere in the module. Any thoughts? Thanks! btw, digging through this, i find it weird, that it does not exist in the 3.4 branch, only on master ?!?!
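One quick way to check what actually made it into an installed build is to search the module's attributes. The helper below is generic (cv2 may or may not be installed on a given machine, so it degrades gracefully):

```python
import importlib
import importlib.util

def find_members(module_name, needle):
    """Return attribute names of module_name containing needle,
    or None when the module is not importable at all."""
    if importlib.util.find_spec(module_name) is None:
        return None
    mod = importlib.import_module(module_name)
    return sorted(n for n in dir(mod) if needle.lower() in n.lower())

# e.g. find_members("cv2", "quasi") -- an empty list means the class
# was not generated into the bindings of this particular build.
print(find_members("math", "gcd"))
```

An empty result for "quasi" would confirm the question's observation that the bindings simply are not there in that wheel, as opposed to being hidden under a different name.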
https://answers.opencv.org/question/235540/cant-find-quasidensestereo-in-opencv_contrib_python/
Next: Manipulating Classes, Up: Object Oriented Programming

## -*- texinfo -*-
## @deftypefn  {Function File} {} polynomial ()
## @deftypefnx {Function File} {} polynomial (@var{a})
## @end deftypefn
function p = polynomial (a)
  if (nargin == 1)
    if (strcmp (class (a), "polynomial"))
      p = a;
    elseif (isvector (a) && isreal (a))
      p.poly = a(:).';
      p = class (p, "polynomial");
    else
      error ("polynomial: expecting real vector");
    endif
  else
    print_usage ();
  endif
endfunction

For example, type @polynomial/display will print the code of the display method of the polynomial class to the screen, and dbstop @polynomial/display will set a breakpoint at the first executable line of the display method of the polynomial class. To check whether a variable is a class object, the isobject function can be used. The available methods of a class can be displayed with the methods function.

Return a cell array containing the names of the methods for the object x or the named class.

To inquire whether a particular method is available to a user class, the ismethod function can be used.

Return true if x is a class object and the string method is a method of this class. For example:

p = polynomial ([1, 0, 1]);
ismethod (p, "roots")
⇒ 1
http://www.gnu.org/software/octave/doc/interpreter/Creating-a-Class.html#Creating-a-Class
KEYNOTE(3)                 BSD Programmer's Manual                  KEYNOTE(3)

keynote - a trust-management system library

     #include <sys/types.h>
     #include <regex.h>
     #include <keynote.h>

     struct environment {
             char *env_name;
             char *env_value;
             int env_flags;
             regex_t env_regex;
             struct environment *env_next;
     };

     struct keynote_deckey {
             int dec_algorithm;
             void *dec_key;
     };

     struct keynote_binary {
             int bn_len;
             char *bn_key;
     };

     struct keynote_keylist {
             int key_alg;
             void *key_key;
             char *key_stringkey;
             struct keynote_keylist *key_next;
     };

necessary memory could not be allocated. kn_remove_assertion() removes the assertion identified by assertid from the session identified by sessid. On success, this function returns 0. On failure, it returns -1 and sets keynote_errno to ERROR_NOTFOUND. kn_add_action() inserts the variable name in the action environment of session sessid, with the value value. The same attribute may be added more than once, but only the last instance will be used (memory resources are consumed however). The flags specified are formed by or'ing the following values:, and action authorizers added to session sessid. returnvalues is an ordered array of strings that contain the return values. The lowest-ordered return value is contained in returnvalues[0], and the highest-ordered value is returnvalues[numvalues - 1]. If returnvalues is NULL, the returnvalues from the previous call to kn_do_query the session was not found. kn_read_asserts() parses the string array of length arraylen and returns an array of pointers to strings containing copies of the assertions found in array. Both the array of pointers and the strings are allocated by kn_read_asserts() dynamically, and thus should be freed by the programmer when they are no longer needed. numassertions contains the number of assertions (and thus strings in the returned array) found in array. On failure, this function returns NULL. kn_keycompare() compares key1 and key2 (which must be of the same algorithm) and returns 1 if equal and 0 otherwise.
kn_get_authorizer() returns the authorizer key (in binary format) for assertion assertid in session sessid. It also sets the algorithm argument to the algorithm of the authorizer key. On failure, kn_get_authorizer() returns NULL, and sets keynote_errno to ERROR_NOTFOUND. kn_get_licensees() returns the licensee key(s) for assertion assertid in session sessid. The keys are returned in a linked list of struct keynote_keylist structures. On failure, kn_get_licensees() returns NULL and sets keynote_errno to ERROR_NOTFOUND. kn_query() takes as arguments a list of action attributes in env, a list of return values in returnvalues (the number of returnvalues is indicated by numvalues), a number (numtrusted) of locally-trusted assertions in trusted (the length of each assertion is given by the respective element of trustedlen), a number (numuntrusted) of assertions that need to be cryptographically verified in untrusted (the length of each assertion is given by the respective element of untrustedlen), and a number (numauthorizers) of action authorizers in authorizers. env is a linked list of struct environment structures. The env_name, env_value, and env_flags fields correspond to the name, value, and flags arguments to kn_add_assertion assertion could not be added to the session due to lack of memory resources. Syntax errors in assertions will not be reported by kn_query(). kn_encode_base64() converts the data of length srclen contained in src in Base64 encoding and stores them in dst which is of length dstlen. The actual length of the encoding stored in dst is returned. dst should be long enough to also contain the trailing string terminator. If srclen is not a multiple of 4, or dst is not long enough to contain the encoded data, this function returns -1 and sets keynote_errno to ERROR_SYNTAX. kn_decode_base64() decodes the Base64-encoded data stored in src and stores the result in dst, which is of length dstlen.
The actual length of the decoded data is returned on success. On failure, this function returns -1 and sets keynote_errno to ERROR_SYNTAX, denoting either an invalid Base64 encoding or insufficient space in dst. kn_encode_hex() encodes in ASCII-hexadecimal format the data of length srclen contained in src. This function allocates a chunk of memory to store the result, which is returned in dst. Thus, this function should be used as follows:

     char *dst;

     kn_encode_hex(src, &dst, srclen);

The length of the allocated buffer will be (2 * srclen + 1). On success, this function returns 0. On failure, it returns -1 and sets keynote_errno to ERROR_MEMORY if it failed to allocate enough memory, ERROR_SYNTAX if dst was NULL. kn_decode_hex() decodes the ASCII hex-encoded string in src and stores the result in a memory chunk allocated by the function. A pointer to that memory is stored in dst. The length of the allocated memory will be (strlen(src) / 2). On success, this function returns 0. On failure, it returns -1 and sets keynote_errno to ERROR_MEMORY if it could not allocate enough memory, or ERROR_SYNTAX if dst was NULL, or the length of src is not even. kn_encode_key() ASCII-encodes a cryptographic key. The binary representation of the key is contained in dc. The field dec_key in that structure is a pointer to some cryptographic algorithm dependent information describing the key. In this implementation, this pointer should be a DSA * or RSA * for DSA or RSA keys respectively, as used in the SSL library, or a keynote_binary * for cryptographic keys whose algorithm key contained in key. The type of signature to be produced is described by the string algorithm. Possible values for this string are SIG_RSA_SHA1_PKCS1_HEX, SIG_RSA_SHA1_PKCS1_BASE64, SIG_RSA_MD5_HEX-based keys. No other cryptographic signatures are currently supported by this implementation. If vflag is set to 1, then the generated signature will also be verified.
On success, this function returns a string containing the ASCII-encoded signature, without modifying the assertion. On failure, it returns NULL if the assertion contained a syntactic error, or the cryptographic algorithm was not supported. kn_free_key() frees a cryptographic key. kn_get_string() parses the argument, treating it as a keynote(4) (quoted) string. This is useful for parsing key files. On success, this function returns a pointer to the parsing result. The result is dynamically allocated and should be freed after use. On failure, NULL is returned. keynote.h libkeynote.a keynote(1). The return values of all the functions have been given along with the function description above. None that we know of. If you find any, please report them to <keynote@research.att.com> MirOS BSD #10-current April 29, 1999.
http://mirbsd.mirsolutions.de/htman/sparc/man3/kn_remove_authorizer.htm
Hi,

For some time I was not able to compile a kernel with ramdisk support, but with the following patch it works again!

Rudmer

diff -u arch/i386/kernel/setup.c~ arch/i386/kernel/setup.c
--- arch/i386/kernel/setup.c~	Mon Jan 7 21:47:51 2002
+++ arch/i386/kernel/setup.c	Mon Jan 7 23:38:45 2002
@@ -661,6 +661,9 @@
 	unsigned long bootmap_size, low_mem_size;
 	unsigned long start_pfn, max_low_pfn;
 	int i;
+#ifdef CONFIG_BLK_DEV_RAM
+	extern int rd_size, rd_image_start, rd_prompt, rd_doload;
+#endif
 #ifdef CONFIG_VISWS
 	visws_get_board_type_and_rev();
https://lkml.org/lkml/2002/1/7/317
These are the DM L2 Issues. This document is an editor’s draft and has no normative status. This document identifies the status of Last Call issues on XQuery 1.0 and XPath 2.0 Data Model as of October 29, 2004.

[My apologies that these comments are coming in after the end of the Last Call comment period.] Section 6.2.4 The description of how the children property of the element node is constructed from PSVI indicates that if the [schema normalized value] property exists, and the processor chooses to use that means of setting the property, the order of nodes in the sequence is implementation defined. This should be phrased to indicate that only the text node will be in some implementation-dependent position relative to the PI and comment children, but the order of the PIs and comments themselves is the same as in the [children] property. Thanks, Henry [Speaking on behalf of reviewers from IBM, not just personally.] Decided at BEA face-to-face: fixed

[My apologies that these comments are coming in after the end of the Last Call comment period.] Section 2.5 In the second paragraph, it is unclear how long the implementation-defined anonymous type name must persist. For instance, if the same query or transformation is evaluated twice, with the same static context each time, must the names used to identify the same anonymous types in each evaluation be consistent? Thanks, Henry [Speaking on behalf of reviewers from IBM.] Decided at joint telcon 183: added wording to clarify. Decided at BEA face-to-face: agreed to clarify.

SECTION 6.5.3: Processing instruction information items Under "target" it says that the value of the target property in the data model is the same as the value of the [target] property in the Infoset. However, the Infoset allows an arbitrary Name for this value (see XML 1.0 section 2.6 rule [17]), which means that a target can have arbitrarily many colons. The Namespaces recommendation did not limit this.
On the other hand, the XQuery data model limits targets to NCName (no colons). What happens when a processing instruction containing one or more colons is loaded into the data model? This should be specified. (See also comment re: 6.5.1) Decided at BEA face-to-face: it’s an error to map a PI whose name is not an NCName.

Decided at BEA face-to-face: the Data Model doesn’t have a conformance section; the host language has to describe conformance. This issue is just the preamble for the issues that follow. Closed. We believe this is fixed. Norm to ask Schema to review. Decided at BEA face-to-face: added note about lightweight processing. Decided at BEA face-to-face: DM/FS/F&O editors should review revised Section 7. The particular problem identified here does not occur because we use the SchemaNormalizedValue if it exists. But this section has been rewritten to clarify the points raised by Schema. Decided at BEA face-to-face: DM/FS/F&O editors should review revised Section 7. Decided at BEA face-to-face: DM/FS/F&O editors should review revised Section 7. Decided at BEA face-to-face: status quo solves existing use cases. Announced. Decided at the Aug 2004 f2f: the triples proposal will address this issue.

Data Model Section 6.4.2 (Namespace Nodes--Accessors): The dm:node-name accessor depends on whether the implementation "preserves information about prefixes declared." Is this an optional feature? Should it be on our list of optional features? If prefixes are not "preserved", then what purpose is served by namespace nodes?

3.3.1 Mapping PSVI Additions to Types Technical The rules about inferring type annotations seem to break named typing in the context of union types. The instance type of the element needs to be the name of the union type and not the name of the member type that matches the instance's type. Otherwise $n instance of element(*, uniontype) will fail.
Decided at joint telcon 183: resolved to make the type of nodes always the union type and the type of the atomic values the member type. This was approved at meeting 185.

Section 3.3 Construction from a PSVI Technical We do not see a reason to support "incompletely validated documents". Actually this makes it impossible to support statically typed implementations on top of such data models that guarantee type-safety. It also is unclear if such an element is untyped or partially typed. The latter is a concept that currently does not exist in our type system. Please remove this remark. We’ve adopted a new definition of document order and made the other editorial changes. White space is now significant in all cases except element-only content, where it is not significant. The draft has been clarified to reflect this.

1.1. The term type DM appears to use the term type for several related but different concepts; we believe it would be desirable if you were to clarify the meaning of the term, or at least if you called the reader's attention to its overloading. The Data Model specification appeals to the Formal Semantics specification, which says types are XML Schema types. However, XML Schema tries to avoid the term "type", instead using "type definition". Among the uses of "type" we have noticed are:

1. T1. a name (for example, as used by the dm:type accessor).
2. T2. a set of values (this sense is used by XML Schema's internal work on a formalization, which includes a "Type Lattice").
3. T3. an XML Schema Type Definition component (simple or complex). Defines a set of values and certain properties, such as [name], [baseType], etc.
4. T4. an OO class. Defines a set of values, inheritance info, and operators.

Specifically, we suggest that the dm:type accessor be renamed to dm:type-name and that "type" be explicitly defined. If "type" is just a synonym for "type definition", say so in the definition of "type".
Decided at joint telcon 183: occurrences of the word ‘type’ have been qualified.

1.3. Items and singleton sequences Section 6 Sequences reads in part: An important characteristic of the data model is that there is no distinction between an item (a node or an atomic value) and a singleton sequence containing that item. One consequence of this characteristic is that the types xs:integer and a list of xs:integer with length constrained to 1 have exactly the same value space in the Data Model. That is, each value in the value space is a sequence of a single xs:integer. This is different from the XML Schema value spaces for the two types. Might this cause a problem for functions or other uses of the Data Model? We believe further discussion is needed here. The WG feels comfortable with the current model. Closed with no action.

1.4. The implications of [validity] != valid Section 3.6 para 2 reads in part: "The only information that can be inferred from an invalid or not known validity value is that the information item is well-formed." This is not true in the general case: the values of the properties [validity] and [validation attempted] interact, so that some inferences beyond well-formedness can be made. (If [validity] is 'notKnown', for example, we can infer without examining the PSVI that [validation attempted] is not 'full'. If for some node N [validity] is 'invalid', we can infer that declarations are available for at least some element or attribute information items in the subtree rooted in N.) The data model doesn't have to be interested in those inferences, but it is simply incorrect to say that they don't exist. On the whole, we believe that the data model misses an opportunity by failing to exploit the information contained in the [validity] and [validation attempted] properties more fully.

1.5. Anonymous local types Section 3.6 has an extended list of cases describing how the namespace and local name of a type are found.
This list reads in part:

* If the [validity] property exists and is `valid':
  + ...
  + If the [type definition] property exists and its {name} property is present:
    o the {target namespace} and {name} properties of the [type definition] property.
  + ...
  + If [type definition anonymous] exists:
    o If it is false: the [type definition namespace] and the [type definition name]
    o Otherwise, the namespace and local name of the appropriate anonymous type name.

The above structure does not handle the case of an anonymous type when the schema processor provides the [type definition] property instead of the [type definition name] property and its fellows. We think the [type definition] rule can readily be rephrased so that the result is parallel to the case when the upstream schema processor provides [type definition name] instead of [type definition]:

* If the [validity] property exists and is `valid':
  + ...
  + If the [type definition] property exists [DEL: and its {name} property is present :DEL]:
    o [INS: If the [type definition]'s {name} property exists: :INS] the {target namespace} and {name} properties of the [type definition] property.
    o [INS: Otherwise, the namespace and local name of the appropriate anonymous type name. :INS]
  + ...

The spec has been significantly redrafted in this area and we believe your concern has now been addressed.

3.2. Comments not reviewed by the Working Group When the XML Schema Working Group reviewed the draft comments provided by our task force, we focused on substantive comments; the following editorial comments were not reviewed owing to lack of time. They are transmitted on behalf of the Working Group, but they do not necessarily carry the consensus of the Working Group.

1. Section 3.3 para -1: "inconsistent data models are forbidden". There has not thus far been any definition of consistency for data models; if it's provided elsewhere, a forward reference might be in order. If it's not provided elsewhere, it needs to be.

2. abstract.
For "the data model of at least XPath 2.0 ... and any other specifications that reference it" perhaps read "the data model of XPath 2.0 ... and of any other specifications that reference it". 3. Section 1 Introduction para 2: "... it defines precisely the information contained in the input to an XSLT or XQuery processor." Surely it specifies a minimum, by defining the information which must be contained, rather than specifying both a minimum and a maximum by forbidding any input to contain any other information. If one has concealed a coded message in a document by varying the amount of white space before the '>' characters which close the tags in an XML document, that coded message is certainly (a) information, and (b) present in the input to the processor and (c) not defined by this Data Model. It may make sense to say that this document defines precisely which information present in the input it is that is relevant to XSLT or XQuery processors (although formulating this without falling into traps is also fraught with difficulty), but it seems simply wrong to deny that information other than what is defined here is present in the input. 4. Section 2 Notation. Since this is to be a free-standing document, a short description of what the sample signature means would be useful. As it is, the combination of (a) the sample, clearly intended to help the reader understand the notation, with (b) the absence of any explication, manages to do a rather effective job of sapping the reader's will to continue reading. 5. Section 3.3 para -1. "Validation is described conceptually as a process of ..." -- either insert a pointer to the section or document which provides this description or (if this is the description) read "Validation is a process of ..." 6. Section 3.4 para 2. For "For named types, which includes ..." read "For named types, which include ..." (subject-verb agreement) 7. section 3.4 para 6. "The data model defines ... It returns ... if it ..." 
The noun phrase "data model" is almost certainly not intended as the antecedent of either of the two occurrences of it, but syntactically it has a better claim than any other noun phrase around. For the first, perhaps read "The accessor"; for the second, perhaps "the node" or "the argument". 8. section 3.4 para -1. For "The semantics of such operations, e.g. checking if a particular instance of an element node has a given type is defined in [Formal semantics]" read "... if a particular instance ... has a given type, is defined in ...". We believe these comments are all addressed in the 23 July draft. 2.4. Elements labeled xs:anyType in the PSVI Section 4.3.2 says in part: If the element node's type is xs:anyType, the dm:typed-value accessor returns the node's string value as xs:anySimpleType. This seems to contradict section 4.1.6: If the node is an element node with type xs:anyType, then its typed value is equal to its string value, as an instance of xdt:untypedAtomic. ---- 2.5.2. Prefix property Section 4.3.4 says: ... Please be clear about the meaning of "namespace URI" or the namespace node. Is it the [uri] property of the namespace node or the namespace uri part of the node-name property of the namespace node? ---- 2.5.3. Sequences in sequences Section 2 reads in part: In a sequence, V may be a Node or AtomicValue, or the union (choice) of several categories of Items. It's not immediately clear to all readers what this means. It appears a first glance to say that if V*, V?, or V+ appear in (the description of) a sequence, then V may be or denote a Node or an AtomicValue or a union. But if sequences cannot appear in sequences, and V* and V? and V+ all denote sequences (as specified in the list immediately above), then if V*, V?, or V+ appear in (the description of) a sequence S, then sequence S would appear to violate the rule that sequences cannot contain other sequences. (Unless "In a sequence" means `When appearing as the description of a sequence'.) 
---- 2.5.4. Synthetic data models Section 3.3, para 2 reads:. We agree that it is worthwhile to point out that synthetic instances of the Data Model are possible, and need not derive from some pre-existing XML document or information set. Some members of the XML Schema WG believe, however, that the formulation just quoted does not do full justice to the abstract nature of the infoset as a concept. Any process which can create an instance of the Data Model clearly has access to the set of information defined by the Infoset Rec and can thus be thought to have, or be, an infoset itself. To this line of thinking, the construction of a synthetic Data Model is itself a sufficient demonstration that the necessary information, and thus the necessary infoset, is available. Two possible fixes may be worth suggesting: Although we describe construction of a data model in terms of infoset properties, a [INS: pre-existing :INS] infoset is not an absolutely necessary precondition for building an instance of the Data Model. Purely synthetic data model instances are entirely appropriate as long as they obey all of the constraints described in this document. Or Although we describe construction of a data model in terms of XML infoset properties, a [INS: pre-existing XML document :INS] is not an absolutely necessary precondition for building an instance of the Data Model. Purely synthetic data model instances are entirely appropriate as long as they obey all of the constraints described in this document. /] > | [3.6] The list defining the type of an element information item is not > | complete. What happens in the case where [member type definition] > | or [type definition] is present but hase no name property (because > | the type is anonymous)? Presumably the generated anonymous type > | name is used. > My understanding is that if [member type definition] exists and has no > {name} property then [member type definition anonymous] also exists. > Is that not the case? No. 
Either: you are a fully-featured schema validator, and you have the [type definition] property and (if the type is a union) the [member type definition] property, ... or ... you are a wimpy schema validator, and you have the [type definition type], [type definition namespace], [type definition anonymous] and [type definition name] properties, and (if the type is a union) the [member type definition namespace], [member type definition anonymous] and [member type definition name] properties.

Decided at joint telcon 185: define the term “instance of the data model”. Closed.

SECTION 3: Data Model Construction The paragraph before 3.1 refers to 'well-formed document fragments'; what is this? The words 'well-formed', 'fragments', 'document' are overloaded in XML. Please define them exactly here.

SECTION C: Glossary In the Glossary, it defines fragment as a tree whose root node is some other kind of node. But it does not define what 'some other kind of node' is. This definition is pulled from the '1 Introduction' section, but the context is missing. 'Some other kind of node' really means 'non-root node' here. Accepted editorial suggestions. (Fragment is now referenced.)

SECTION 5: Accessors First sentence of second para says "In order for applications to be able to operate on instances of the data model, the model must expose properties of the items it contains... These are not functions in the usual sense, they are not available for users or applications to call directly." This passage uses the term "application" in two senses, the first a nonstandard sense and the second a standard sense. Usually "application" refers to a software program that a user writes, whereas "implementation" or "processor" refers to the software component implementing a specification and supplied by a vendor as a platform for the user to build his application.
This understanding of "application" works for the second occurrence of "application" but not for the first. Indeed, if we try to use the standard meaning of the term for both occurrences, the passage appears to be contradictory. I think what you mean by the first occurrence of "application" is that you are viewing the data model as being realized by a software layer which is distinguishable from the XQuery or XPath processor. From that perspective the XQuery or XPath processor is an "application" of the data model. But if you view things that way, then the accessors are indeed functions that are callable by the "application". For the sake of readability, if you want to introduce a distinction between the data model layer and the XQuery or XPath processor above it, you should introduce a third term, keeping "application" for the end user. Alternatively, you could rewrite the first sentence as "In order for XQuery or XPath processors to be able to operate..."

The spec has been significantly redrafted in this area and we believe your concern has now been addressed.

Daniela Florescu <danielaf@bea.com> was heard to say: I don't think the phrase "partial function" actually adds any value to the explanation. I've reworded it as follows: <p>There are some functions in the data model that return the empty sequence to indicate that no value was available. We use the occurrence indicators <emph>?</emph> or <emph>*</emph> when specifying the return type of such functions. For example, a node may have one parent node or no parent. If the node argument has a parent, the <function>parent</function> accessor returns a singleton sequence. If the node argument does not have a parent, it returns the empty sequence. The signature of <function>parent</function> specifies that it returns an empty sequence or a sequence containing one node:</p> Please let me know if this satisfies your concern.

4: Typo - "Retreiving" should be "Retrieving".
[ Fixed in DM 17 Feb 2004 ] These problems have been fixed. The spec has been significantly redrafted in this area and we believe your concern has now been addressed.

Data Model Section 6.1.3 (Document Nodes--Construction from an Infoset): This section specifies how three of the four properties of a document node are derived from an infoset. It should also mention the fourth property, "document-uri", and state that it is derived from the absolute URI of the resource from which the document node was constructed, as described under "Accessors".

Data Model Sections 6.5.3 and 6.6.3: Section heading should be "Construction from an Infoset" in each case.

6.7.3 Construction from an Infoset Editorial "W3C normalized" has been renamed to "fully normalized". There are also other normalization forms that may be interesting for an application. Just say that text nodes are not necessarily normalized. Also, the term "programmers" should be defined (implementers of the data model, or somebody that writes XQueries or XSLT stylesheets?)

1.2. Derivation of simple types Section 5 Atomic Values reads in part: An XML Schema simple type [XMLSchema Part 2] may be primitive or derived by restriction, list, or union. We think it will help avoid confusion among users, implementors, and (not least) discussion among Working Groups if you use XML Schema terminology here. Perhaps: An XML Schema simple type definition [XMLSchema Part 2] has a [variety], which may be atomic, list or union. If [variety] is atomic, the type definition may be primitive or derived by restriction. The XML Schema WG wishes to de-emphasize the use of the term "derived by" in XML Schema Part 2 in describing union and list construction. The term "derived by" is used only colloquially there and is unfortunately confused with derivation in the proper sense (i.e. restriction and extension). All non-primitive simple types are derived by restriction.
List types may be restrictions of xs:anySimpleType or other lists. Similarly for union types. Please don't propagate the confusion we created. [We are aware that it would be useful to have a simple term other than derivation to describe the relation between a list type and its item type, or that between a union type and its member types; we need it as much as you do. Suggestions are welcome.] The section on types and atomic values has been extensively redrafted to address this and a number of other related comments. 1.6. Target namespaces Section 3.4 Types reads in part: Since named types in XML Schema are global, an expanded-QName uniquely identifies such a type. The namespace name of the expanded-QName is the target namespace of the schema and its local name is the name of the type. A schema does not have a target namespace; a schema document has a target namespace. One possible repair would be: Since named types in XML Schema are global, an expanded-QName uniquely identifies such a type. The namespace name of the expanded-QName is the {target namespace} property of the type definition, and its local name is the {name} property of the type definition. Another might be: Since named types in XML Schema are global, an expanded-QName uniquely identifies such a type within a schema. We believe this to be relatively important. 1.7. Lexical spaces, reference, containment Section 2 refers to: "the lexical space referring to constructs of the form prefix:local-name". Perhaps substitute "the lexical space containing ..." Lexical forms may, with a certain investment of time and energy, be thought of as `referring to' values, but the lexical space as a whole does not refer. The lexical space of QName does contain, even if it does not refer to, constructs of the form prefix:local-name. 2.1. Atomic values and singleton sequences In section 2 Notation, after indicating how to represent Node and Item in the syntax, DM says "Some accessors can accept or return sequences." 
This may need clarification; elsewhere we had been led to think that everything is a sequence. Please emphasize that Node, Item, and atomic values in the syntax correspond to singleton sequences, and that some accessors accept less-constrained sequences. Some members of the XML Schema WG add that DM seems to conflate the notations of list and sequence, which are distinct and should not be confused. 2.2. Node identity Sections 3.1 and 3.2 raise the question of node identity and stable ordering. Does a node maintain its identity on being modified? on being added to another tree? If so, wouldn't its ordering change? 2.3. Names in namespace nodes Section 4.3 Elements lists, among the constraints that element nodes must satisfy: 7. The namespace nodes of an element must have distinct names. This requirement contradicts the definition of dm:name for namespace nodes, for processors that choose not to preserve prefix information. All their namespace nodes will name [or have] the same name, namely the empty sequence. 2.5.1. Infoset-only processing Section 3.6 says, under the heading "Infoset-only processing": Note that this processing is only performed if no part of the subtree that contains the node was schema validated. In particular, Infoset-only processing does not apply to subtrees that are "skip" validated in a document. Which subtree is "the" subtree? A given node is contained by many subtrees. Perhaps read "if no part of any subtree containing the node was schema validated"? 3. Editorial notes In the course of our work, some editorial points were noted; we list them here for the use of the editors. We do not particularly expect formal responses on these comments. 3.1. Comments reviewed by the Working Group 1. QNames. Section 2 Notation reads in part: ).] Thank you for being specific about value-space vs. lexical space. Please also be specific on whether the namespace URI can be absent or not. 2. 
Section 3.3: The definition [Definition: A Post Schema Validation Infoset, or PSVI, is the augmented infoset produced by an XML Schema validation episode.]. has an extra full stop at the end. 3. Section 3.4 para 6: It returns xs:anyType or xs:anySimpleType if no type information exists, or if it failed W3C XML Schema validity assessment. Are "xs:anyType" and "xs:anySimpleType" expanded-QNames? They don't look like it. 4. Section 4.1.1: We suggest using "[base-uri]" rather than "base-uri" when referring to the infoset property, to avoid confusion with the base-uri accessor. In general, we believe all references to infoset properties should use the brackets. 5. Section 4.1.3: dm:node-name returns the qualified name of the element or attribute. The XML Infoset does not define a [qualified name] for items. For "qualified name" perhaps read "expanded QName". 6. Section 4.1.6, bulleted list: Two of the bullets begin "If the item is" and the rest begin "If the node is". Why are these different? At first we thought the difference reflected a crucial difference in the tests being performed, but the entire list is about nodes; there are no items under discussion which are not nodes. 7. Section 4.3.2, repeated in 4.4.2: the first bullet item says that under certain circumstances the result will be an "atomic value 3.14 of type decimal". Should that be "xs:decimal"?
http://www.w3.org/2004/10/data-model-issues.html
send a signal to a process

Please send bug reports to <procps-feedback@lists.sf.net>

#include <sys/types.h>
#include <signal.h>

int kill(pid_t pid, int sig);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

kill(): _POSIX_C_SOURCE >= 1 || _XOPEN_SOURCE || _POSIX_SOURCE

If pid is positive, then signal sig is sent to the process with the ID specified by pid. If pid is less than -1, then sig is sent to every process in the process group whose ID is -pid. If sig is 0, then no signal is sent, but error checking is still performed; this can be used to check for the existence of a process ID or process group ID.

On success (at least one signal was sent), kill() returns zero. On error, -1 is returned, and errno is set appropriately.
http://linuxreviews.org/man/kill/index.html.en
Example: strstr

Example 1

This example uses strstr to search for the word "dog" in the string, str. After strstr returns the address, position, the code then calculates the word's place in str by subtracting the address of the start of the string from position. This is the offset of the word "dog", in bytes.

#include <string.h>

int offset;
char * position;
char * str = "The quick brown dog jumps over the lazy fox";
char * search_str = "dog";

position = (char *)strstr(str, search_str);
/* strstr has returned the address. Now calculate the offset from the beginning of str */
offset = (int)(position - str + 1);
lr_output_message ("The string \"%s\" was found at position %d", search_str, offset);

Example: Output:
Action.c(14): The string "dog" was found at position 17

Example 2

The next example shows how strstr can be used to parse file name suffixes. The code checks the equality of the following two pointers: The pointer returned by strstr after searching for the substring is either null on failure, or the address of the first character of the file suffix on success. Adding 4 moves the pointer to the end of the string. The start of the string (path) plus its length, len. This is also the end of the string. If these 2 pointers are equal then strstr has found the file suffix. If not, strstr has returned null.

#include <string.h>

char * path = "c:\\tmp\\xxx\\foo.ocx";
int len = strlen(path), filetype;

// Check the suffix of the file name to determine its type
if ((char *)(strstr(path, ".dll") + 4) == (path + len))
    filetype = 1;
else if ((char *)(strstr(path, ".exe") + 4) == (path + len))
    filetype = 2;
else if ((char *)(strstr(path, ".tlb") + 4) == (path + len))
    filetype = 3;
else if ((char *)(strstr(path, ".ocx") + 4) == (path + len))
    filetype = 4;
else
    filetype = 5;

lr_output_message ("type = %d", filetype);

Example: Output:
Action.c(18): type = 4
https://admhelp.microfocus.com/vugen/en/2021/help/function_reference/Content/FuncRef/c_language/etc/lrFuncRef_CLang_strstr_example.htm
After the Server Core installation is complete and the server is configured, you can install one or more optional features. The Server Core installation of Windows Server 2008 R2 supports the following optional features: The following procedure describes how to install these features on a server running a Server Core installation of Windows Server 2008 R2. The following optional features require appropriate hardware: There are no prerequisites for the following optional features: You can install .NET Framework 3.0 and 3.5 functionality, but WPF is not available. The following .NET Framework namespaces are not available in Server Core installations: The standard installation of the Subsystem for UNIX-based applications, Windows PowerShell, and .NET Framework 2.0, 3.0, and 3.5 does not include 32-bit support. If you need 32-bit support, install the WoW64 feature. To install an optional feature on a Server Core installation of Windows Server 2008 R2, perform the following procedure. To discover the available optional features, see the list below. To install one, type: start /w ocsetup <featurename> Where featurename is the name of a feature from the following list:
http://technet.microsoft.com/en-us/library/ee441253(WS.10).aspx
Can someone explain why this prints "goodnight"? (If you remove string = "hello", it prints "goodnight world".)

def a_method(string)
  string = "hello"
  string << " world"
end

bedtime = "goodnight"
a_method(bedtime)
puts bedtime

In Ruby there's a difference between in-place operations and those that return a copy. Sometimes the name provides a hint, like gsub vs gsub!, but other times you just need to know, like <<.

What you're doing here is redefining which object string references, so no permanent modification is made to the original reference. The line string = "hello" does not mean that the original bedtime object reference changes. If you wanted that effect you'd do string.replace("hello"), which is an in-place reassignment of the string's content.

To find out what object you're referencing, call object_id on the object in question. You'll notice here that with your code that value changes, but with replace it does not.

Ruby method arguments are passed by object reference, which in practice is a lot like a pointer. If you're expecting that value to be passed by absolute reference, that's not the case.
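The distinction the answer draws — rebinding the parameter versus mutating the object it references — can be made concrete with object_id. This is an illustrative sketch; the method names rebind and mutate are ours:

```ruby
# Rebinds the local variable only: the caller's object is untouched.
def rebind(string)
  string = "hello"
  string << " world"
end

# Mutates the caller's object in place via String#replace.
def mutate(string)
  string.replace("hello")
  string << " world"
end

bedtime = "goodnight"
before = bedtime.object_id

rebind(bedtime)
puts bedtime                       # goodnight
puts bedtime.object_id == before   # true: still the same object

mutate(bedtime)
puts bedtime                       # hello world
puts bedtime.object_id == before   # true: same object, new contents
```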
https://codedump.io/share/kMATDuEJNe7n/1/mutating-calling-objects-in-ruby
Mike, On Sat, Apr 13, 2013 at 8:38 AM, Mike Kienenberger <mkienenb@gmail.com>wrote: > > Maybe your problem is that you are missing the no-arg constructor for > the converter. > Maybe that was why it worked when it was static. > Interesting. I'm definitely still novice to java, especially use of 'static', even though i used it, primarily, for adding static strings throughout the app. All along, I knew that 'static' means that only one copy will be created/instantiated within/for the parent class. I think the static reference was added (by Netbeans, when it generated the code for me, back in 2011), because of the following: Below, you will see that the managed bean has a reference to a stateless @EJB (DAO), @ManagedBean(name = "customerController") @RequestScoped public class CustomerController implements Serializable { @EJB private jpa.session.CustomerFacade ejbFacade; and, the CustomerControllerConverter.getAsObject() method references the @EJB which is a member of the managed bean, CustomerController. See below. CustomerController controller = (CustomerController) facesContext.getApplication().getELResolver(). getValue(facesContext.getELContext(), null, "customerController"); return controller.ejbFacade.find(getKey(value)); > > Was there additional information in java.lang.InstantiationException, > like a root cause? > > I'll have to take a look at that, but I see MyFaces ApplicationImpl which throws the exception, but haven't made my way to looking at java.lang.InstantiationException, just yet. > > The two things I would try are > 1) move it to a separate class > 2) add a no-arg constructor > > Will try this and let you know. I think it still may be necessary to add the class as a 'public static' class, so there will be only one instance of @FacesConverter(forClass = Customer.class) during runtime. > I haven't really written many converters since the annotation support > was added, but this is what one of the few I did write looks like. 
> This was a quick proof of concept thing, so I didn't take the time to > debug why @FacesConverter(forClass = ) wasn't working for me but > instead manually specified a converter id in the one place I used the > converter. > Yeah, I really prefer not to use converter id, or I would have to go all throughout my xhtml and update, accordingly. Thanks. Will share any/all updates, ASAP. Thanks Mike!
http://mail-archives.apache.org/mod_mbox/myfaces-users/201304.mbox/%3CCAGV1rD+pUXWEt6CkWn54_sPJznCLJVYfM5yVs8ziRBPeLuwYpg@mail.gmail.com%3E
I’m attempting to convert RTF data in a SQL Server table into JPG data and save it in a SQL Server table. The code is working fine, but only for one-page RTF documents. If the RTF data spans more than one page, it only saves off the first page. Any ideas or workarounds? Is this just the behavior of JPG image files?

Convert All RTF Pages to Separate JPEG Image Files using C# .NET | Save JPEG Streams in SQL Database Table

Yes, this is the expected behavior of a JPG image, i.e. it will contain the first page of the RTF document only. However, you can convert all pages in the RTF document to separate JPEG images by using the following code:

Document doc = new Document("E:\\temp\\input.rtf");
ImageSaveOptions options = new ImageSaveOptions(SaveFormat.Jpeg);
options.PageCount = 1;
int pageCountDoc = doc.PageCount;
for (int pageCount = 0; pageCount < pageCountDoc; pageCount++)
{
    options.PageIndex = pageCount;
    doc.Save(MyDir + "out_" + pageCount + ".jpg", options);
}

Or you can try converting the whole RTF file to a TIFF image by using the following code:

Document doc = new Document("E:\\temp\\input.rtf");
doc.Save("E:\\Temp\\20.6.tiff", SaveFormat.Tiff);

In this case, the TIFF image will contain the content of all pages in separate frames.

Thank you for the quick response! While trying to implement your first code example I’m getting a reference-not-found issue with ImageSaveOptions. Do you have any suggestions on what reference I’m missing in my code? Also, TIFF will not work for what I’m trying to do due to limitations with SQL Server Reporting Services.

I figured out what was wrong with my reference. I needed to include more of the namespace. I’m using Aspose.Words.Saving.ImageSaveOptions. Once included, the code you provided worked! I only had to tweak it a little since I’m doing the conversion in a memory stream and then adding it to a database. Thanks again!
https://forum.aspose.com/t/convert-all-rtf-pages-to-separate-jpeg-image-files-using-c-net-save-jpeg-streams-in-sql-database-table/214786
A loop is an important programming concept that allows you to execute a sequence of statements repeatedly. A loop executes a particular code block as long as a specified condition is true, or until all the elements of a collection (array, list, etc.) have been traversed. The most common use of a loop is to perform repetitive tasks. For example, to print the multiplication table of a number we would need to write ten print statements; with a loop, a single print statement is enough. Java provides three kinds of loops, and we explain each one individually below.

The for loop is used for executing a part of the program repeatedly. When the number of iterations is fixed, the for loop is the suggested choice. The for loop can be categorized into two types: the simple for loop and the for-each loop.

For Loop Syntax: Following is the syntax for declaring a for loop in Java.

for(initialization; condition; increment/decrement)
{
    //statement
}

To create a for loop, we need to set the following parameters.

Initialization: the initial part, where we set the initial value for the loop. It is executed only once, at the start of the loop, and is optional if we don't want to set an initial value.

Condition: tested on each iteration. Execution continues until the condition is false. It is also optional; if we don't specify it, the loop is infinite.

Statement: the loop body, executed every time until the condition is false.

Increment/decrement: updates the loop variable after each iteration.

(A flow diagram at this point illustrates the flow of the loop: initialization, condition test, body, then increment/decrement.)

For loop Example: In this example, the initial value of the loop variable is set to 1 and incremented by 1 while the condition is true, so the body executes 10 times.
public class ForDemo1
{
    public static void main(String[] args)
    {
        int n, i;
        n = 2;
        for(i = 1; i <= 10; i++)
        {
            System.out.println(n + "*" + i + "=" + n * i);
        }
    }
}

Example for Nested for loop: Loops can be nested; a loop created inside another loop is called a nested loop. Sometimes, based on the requirement, we have to create nested loops. Generally, nested loops are used to iterate over tabular data.

public class ForDemo2
{
    public static void main(String[] args)
    {
        for(int i = 1; i <= 5; i++)
        {
            for(int j = 1; j <= i; j++)
            {
                System.out.print("* ");
            }
            System.out.println();
        }
    }
}

In Java, the for-each loop is used for traversing array or collection elements. In this loop, there is no need for an increment or decrement operator.

For-each loop syntax: Following is the syntax to declare a for-each loop in Java.

for(Type var : array)
{
    //code for execution
}

Example: In this example, we traverse the array elements using the for-each loop. The for-each loop terminates automatically when no element is left in the array object.

public class ForEachDemo1
{
    public static void main(String[] args)
    {
        int a[] = {20, 21, 22, 23, 24};
        for(int i : a)
        {
            System.out.println(i);
        }
    }
}

Like the for loop, the while loop is also used to execute code repeatedly. It is a control statement used for iterating a part of the program several times. When the number of iterations is not fixed, the while loop is used.

Syntax:

while(condition)
{
    //code for execution
}

Example: In this example, we use a while loop to print the values 1 to 10. First we initialize the loop variable, then we test the condition; if the condition is true we execute the loop body and increment the variable by 1.

public class WhileDemo1
{
    public static void main(String[] args)
    {
        int i = 1;
        while(i <= 10)
        {
            System.out.println(i);
            i++;
        }
    }
}

Example for infinite while loop: A while loop whose conditional expression always returns true is called an infinite while loop. We can also create an infinite loop by passing the true literal as the condition.
Be careful when creating an infinite loop, because it never terminates on its own and can hang your program.

public class WhileDemo2
{
    public static void main(String[] args)
    {
        while(true)
        {
            System.out.println("infinite while loop");
        }
    }
}

In Java, the do-while loop is used to execute statements again and again. This loop executes at least once, because the loop body runs before the condition is checked. In other words, the loop condition is evaluated after executing the loop body. The main difference between the while and do-while loops is that in a do-while loop the condition is evaluated after executing the loop body.

Syntax: Following is the syntax to declare a do-while loop in Java.

do
{
    //code for execution
} while(condition);

Example: In this example, we print the values from 1 to 10 using the do-while loop.

public class DoWhileDemo1
{
    public static void main(String[] args)
    {
        int i = 1;
        do
        {
            System.out.println(i);
            i++;
        } while(i <= 10);
    }
}

Example for infinite do-while loop: Like the infinite while loop, we can create an infinite do-while loop as well. To create one, just pass a condition that always remains true.

public class DoWhileDemo2
{
    public static void main(String[] args)
    {
        do
        {
            System.out.println("infinite do while loop");
        } while(true);
    }
}
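The "executes at least once" difference between while and do-while can be made concrete with a condition that is false from the start; a small sketch (class and method names are illustrative):

```java
public class AtLeastOnce {
    // while: the body is skipped entirely when the condition is false up front
    static int countWhile(int i) {
        int runs = 0;
        while (i < 0) {
            runs++;
            i--;
        }
        return runs;
    }

    // do-while: the body runs once before the condition is ever checked
    static int countDoWhile(int i) {
        int runs = 0;
        do {
            runs++;
            i--;
        } while (i < 0);
        return runs;
    }

    public static void main(String[] args) {
        System.out.println(countWhile(5));   // 0 iterations
        System.out.println(countDoWhile(5)); // 1 iteration
    }
}
```

With a starting value of 5, the condition i < 0 is false immediately, so the while loop never runs its body, while the do-while loop still runs it exactly once.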
https://www.studytonight.com/java/loops-in-java.php
Big Data

Big data, in layman's terms, can be described as a huge volume of data that keeps growing with time. Big data consists of structured, unstructured, and semi-structured data, and it can be used to track and mine information for analysis or research purposes.

What is Big Data?

Big data, in simple terms, is a large amount of structured, unstructured, and semi-structured data that can be used for analysis purposes.

- Volume: The name "big data" itself suggests a large amount of data. The size of the data is very important in determining whether the data is "big data" or not; hence, volume is an important characteristic when dealing with big data.
- Velocity: Velocity is the speed at which data is generated. In big data, velocity is a measure of the efficiency of the data: how quickly the data is generated and processed determines its real potential. The flow of data is huge, and velocity is one of the defining characteristics of big data.
- Variety: Data comes in various forms: structured, unstructured, numeric, etc. Earlier, spreadsheets and databases were considered data; now PDFs, emails, audio, etc., are considered for analysis as well.

Let us know more about big data. Big data has turned out to be really important for businesses that want to maintain their files and huge amounts of data. Companies have moved to big data technologies in order to maintain data for analysis or business-development purposes.

Importance of Big Data: Big data is important not in terms of volume but in terms of what you do with the data and how you utilize it to make analyses that benefit your business and organisation.
Big data helps analyse:

- Time
- Cost
- Product Development
- Decision Making, etc.

Big data, when teamed up with analytics, helps you determine the root causes of failures in businesses and analyse sales trends based on customer buying history. It also helps detect fraudulent behaviour and reduce risks that might affect the organisation.

Big data technology has given us multiple advantages, of which we will now discuss a few.

- Big data has enabled predictive analysis, which can save organisations from operational risks.
- Predictive analysis has helped organisations grow their businesses by analysing customer needs.
- Big data has enabled many multimedia platforms to share data, e.g. YouTube and Instagram.
- Medical and healthcare sectors can keep patients under constant observation.
- Big data has changed the face of customer-based companies and the worldwide market.

Big Data Categories

- Structured
- Unstructured
- Semi-structured

Structured Data: Data which is stored in a fixed format is called structured data. In structured data, the data is formatted so that it is easily accessible and can be used for analysis.

Unstructured Data: Any data whose structure is not classified is known as unstructured data. Unstructured data is very large in size and usually consists of a combination of text, images, files, etc. It does not use conventional database models.

Semi-structured Data: It contains both structured and unstructured data. The data is not organized in a repository but has associated information which makes it accessible.

Characteristics of Big Data

1 - Volume

Volume refers to the unimaginable amounts of information generated every second from social media, cell phones, cars, credit cards, M2M sensors, images, video, and whatnot. We currently use distributed systems to store data in several locations, brought together by a software framework like Hadoop.
Facebook alone generates billions of messages; the "like" button is recorded about 4.5 billion times, and over 350 million new posts are uploaded each day. Such a huge amount of data can only be handled by big data technologies.

2 - Variety

As discussed before, big data is generated in multiple varieties. Compared to traditional data like phone numbers and addresses, the latest trend of data is in the form of photos, videos, audio, and many more, making about 80% of the data completely unstructured. Structured data is just the tip of the iceberg.

3 - Veracity

Veracity basically means the degree of reliability that the data has to offer. Since a major part of the data is unstructured and irrelevant, big data needs a way to filter out the noise or translate the data, as the data is crucial in business development.

4 - Value

Value is the major issue that we need to concentrate on. It is not just the amount of data that we store or process; it is the amount of valuable, reliable, and trustworthy data that needs to be stored, processed, and analyzed to find insights.

5 - Velocity

Last but not least, velocity plays a major role: there is no point in investing so much only to end up waiting for the data. So a major aspect of big data is to provide data on demand and at a faster pace.

How are the Top MNCs Using Big Data Analytics to their Advantage?

Here are some uses of big data and where it is used:

- Health Care
- Detecting Fraud
- Social Media Analysis
- Weather
- Public Sector

Contribution of Big Data in Health Care

The contribution of big data in the healthcare domain has grown largely. With medical advances, there was a need to store large amounts of patient data. Big data is used extensively to store patients' health history, and this data can be used to analyse a patient's health condition and to prevent health failures in the future.
Detect Fraud

Fraud detection and prevention is one of the many uses of big data today. Credit card companies face a lot of fraud, and big data technologies are used to detect and prevent it. Earlier, credit card companies would keep track of all transactions, and if any suspicious transaction was found, they would call the buyer and confirm whether that transaction was made. Now, buying patterns are observed and fraud-affected areas are analysed using big data analytics, which is very useful in preventing and detecting fraud.

Social Media Analysis

The best use case of big data is the data that keeps flowing on social media networks like Facebook, Twitter, etc. The data is collected and observed in the form of comments, images, social statuses, etc. Companies use big data techniques to understand customers' requirements and check what they say on social media. This helps companies analyse and come up with strategies that will be beneficial for their growth.

Weather

Big data technologies are used to predict the weather forecast. A large amount of climate data is fed in, and an average is taken to produce the forecast. This can be useful for predicting natural calamities such as floods.

Examples of how some MNCs are handling Big Data Analytics

5 - Google

Did you know that Google processes about 3.5 billion search queries in a single day, and that each request queries pages numbering about 20 billion? Google derives such search results from its knowledge-graph database, indexed pages, and Google bots crawling over a plethora of web pages. User requests are processed in Google's application servers. The application server searches for results in GFS (the Google File System) and logs the search queries in a logs cluster for quality testing. Google uses Dremel, a query-execution engine, to run near real-time, ad-hoc queries from search engines; this kind of capability is not present in MapReduce.
Google launched BigQuery, which runs aggregation queries over billion-row tables in a matter of seconds. Google is really advanced in its implementation of big data technologies.

6 - Facebook

Did you know that users of Facebook upload 500+ terabytes of data per day? To process such large chunks of data, Facebook uses Hive for parallel map-reduce operations and Hadoop for its data storage. Would you believe me if I said Facebook runs the largest Hadoop cluster in the world? Employees also use Cassandra, a fault-tolerant, distributed storage system aiming to manage large amounts of structured data across a variety of commodity servers. Facebook also uses Scuba to carry out real-time ad-hoc analysis on massive data sets. Hive is used to store large data in the Oracle data warehouse. Prism is used to bring out and manage multiple namespaces instead of the single one managed by Hadoop. Facebook also uses many other big data technologies, such as Corona and Peregrine, among many others.

7 - Oracle

There is explosive growth, with some 12.5 billion devices, which doesn't even include phones, tablets, and PCs. This has helped to increase research and development in the field of the Internet of Things and in storage requirements, which in turn require database management support. Oracle users use Oracle Advanced Analytics, which requires an Oracle database to be loaded with data. Oracle Advanced Analytics provides functionality such as text mining, predictive analytics, statistical analysis, and interactive graphics, among many others. HDFS data can be loaded into an Oracle data warehouse using Oracle Loader for Hadoop. This feature is used to link data and search query results from Hadoop to the Oracle data warehouse. Oracle Exadata Database Machine provides scalable, high-end performance for all database applications. Oracle is leveraging big data mainly to expand its business in database management systems.
https://abhinavshukla1808.medium.com/big-data-59f93af15c81
In the last post, we focused on the device side. We created a graphics-rich web application using the HTML5 canvas and browsed it in IE 9. This post focuses on integration: that is, how to connect client applications to cloud services. You can see the application live on the web. The source code for this project can be downloaded from 1Code.

The UI of this demo is much simpler than the demo in the last post. It simply draws a few rectangles, which look like a 3D bar chart. The rectangles are rendered using SVG. You can refer to the IE 9 Developer's Guide to get started with SVG. SVG is a retained graphics API. It is much easier to use than canvas, and it supports declaratively creating shapes, just like Silverlight's XAML. To embed SVG content in an HTML document, you put an svg tag with the proper namespace:

<svg id="mainSvg" version="1.1" xmlns="http://www.w3.org/2000/svg">
</svg>

Then you can write XML tags to define the shapes. For example:

<rect fill="orange" stroke="black" width="150" height="75" x="50" y="25" />

This sample creates the elements dynamically in code based on the data returned by cloud services. To do so, you can use the familiar DOM API. For example:

var rectangle1 = document.createElementNS(svgNamespace, 'rect');
rectangle1.setAttribute('width', 100);
rectangle1.setAttribute('height', height * 200);
var x1 = 30 + i * 180;
var y1 = 770 - height * 200;
rectangle1.setAttribute('x', x1);
rectangle1.setAttribute('y', y1);
rectangle1.setAttribute('fill', 'rgb(' + red + ',' + green + ',' + blue + ')');
rectangle1.setAttribute('stroke', 'black');
rectangle1.setAttribute('stroke-width', 2);
document.getElementById('mainSvg').appendChild(rectangle1);

As you can see, the code is not as straightforward as typical Silverlight code, because JavaScript does not predefine classes like Rectangle. This sample shows you the raw code. In the real world, when working with SVG in code, it is recommended to use a library like jQuery SVG.
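Because SVG is retained-mode, an element created this way can be modified later or wired up to events. A minimal sketch (the toggle helper and palette are illustrative; rectangle1 is the element created above):

```javascript
// Pure helper: decide the next fill color given the current one
// (hypothetical two-color palette)
function nextFill(current) {
    return current === 'red' ? 'orange' : 'red';
}

// Wiring it to the retained rectangle is browser-only code; the element
// keeps its state, so we can read an attribute back and update it in place:
//
//   rectangle1.addEventListener('click', function () {
//       rectangle1.setAttribute('fill', nextFill(rectangle1.getAttribute('fill')));
//   });
```

With canvas, by contrast, reacting to a click would mean hit-testing coordinates yourself and redrawing the scene.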
The advantage of SVG over canvas is that it remembers the rectangle you just created. While not demonstrated in the sample, you can change the rectangle's properties (such as width and height) at a later time, and you can add mouse events on the rectangle; for example, when the rectangle is clicked, you change its color. With an immediate graphics API (like canvas), it takes much more work to add interactivity.

Now that you understand the basics of SVG, we can move to the cloud. The bars in the demo actually represent how dangerous a city is (in the state of New York). The more dangerous a city is, the higher the bar becomes. We do not store the data in our own database. Instead, we obtain the data from Dallas, a service hosted on Windows Azure. Dallas allows you to consume data provided by professional data providers. For example, our sample uses the "2006 and 2007 Crime in the United States" service provided by DATA.gov. Those professional data providers give you the most authoritative data. The role of Dallas is twofold. First, it serves as a discovery portal for you to discover those services. Second, it exposes the data to you using a single protocol: OData. So with Dallas, you don't need to learn a separate API to consume each service; you can use the familiar WCF Data Services API. Please refer to the Dallas documentation for more information.

Since Dallas exposes the data using the OData protocol, you can use the standard REST API or an OData client library (such as the JavaScript library and the Windows Phone library) to directly access the data in a client application. However, this approach exposes two problems. First, it may work for simple scenarios, but what if you want to write some custom business logic on top of the data? Where do you put the business logic? Duplicate it in all client applications? Second, you have to embed your Dallas account key in the client application, which is very dangerous. So we will take a more SOA approach: we host a WCF REST service in Windows Azure.
Then the client applications work with the WCF service. The service also performs some custom business logic: it calculates how dangerous a city is based on a simple rule (see the computation of int sum below for details).

public List<CityCrimeWeight> GetCrimes()
{
    List<CityCrimeWeight> results = new List<CityCrimeWeight>();
    datagovCrimesContainer svc = new datagovCrimesContainer(new Uri(""));
    svc.Credentials = new NetworkCredential("Account Key", "Your account key");
    var query = from c in svc.CityCrime
                where c.State == "New York"
                select c;
    int count = query.Count();
    int queried = 0;
    while (queried <= count)
    {
        var pagedQuery = query.Skip(queried).Take(100);
        queried += 100;
        foreach (var city in pagedQuery)
        {
            CityCrimeWeight ccw = new CityCrimeWeight()
            {
                City = city.City,
                Population = city.Population
            };
            int sum = city.MurderAndNonEgligentManslaughter * 10 +
                      city.ViolentCrime * 7 +
                      city.ForcibleRape * 7 +
                      city.Arson * 7 +
                      city.Burglary * 3 +
                      city.MotorVehicleTheft * 3 +
                      city.PropertyCrime * 1 +
                      city.Robbery * 5 +
                      city.LarcenyTheft * 1 +
                      city.AggravatedAssault * 3;
            ccw.Weight = (double)sum / city.Population;
            results.Add(ccw);
        }
    }
    return results.OrderByDescending(c => c.Weight).Take(10).ToList();
}

As you can see from the code above, accessing a Dallas service is the same as accessing a normal WCF Data Service. First, add a service reference to generate a client proxy. Then write a LINQ query, which will be translated to a corresponding URI. One thing to note is that Dallas enforces server-side paging: one request returns at most 100 results. To obtain all results for a certain query, you should write paging logic. You do so by invoking the Skip method to skip past the pages already fetched and the Take method to specify the page size. All requests to Dallas must be authenticated; you do so by setting the service context's Credentials property. One final note: our service doesn't return all of the data obtained from Dallas.
We only return the city name, the population, and the weight we calculated using our custom business logic. This also helps reduce the size of the message returned to the client. We host the service in the North Central US data center, the same as Dallas. Data exchanged within the same data center is very fast (faster than disk I/O), and it incurs no charge for network consumption. So if you run the application locally in the Development Fabric, you may find the performance is not as good as running in the cloud, because all the Dallas data has to be downloaded to your local machine. In the cloud, the Dallas data is transferred from the Dallas VM to our own hosting VM within the same data center, which is very fast.

Hosting a WCF REST service in Windows Azure is the same as hosting it locally. First we define a service contract with the WebGet/WebInvoke attributes. Since we want to support a JavaScript client, we naturally return the data in JSON format:

[ServiceContract]
public interface ICrimeService
{
    [OperationContract]
    [WebGet(UriTemplate = "/Crimes", ResponseFormat = WebMessageFormat.Json)]
    List<CityCrimeWeight> GetCrimes();
}

When using .NET 4, the WCF configuration is much simpler. You can use protocolMapping to make webHttpBinding the default binding when the scheme is http. You can define a nameless endpoint behavior that will be applied to all endpoints, and you don't need to explicitly define the services.

<protocolMapping>
  <add binding="webHttpBinding" scheme="http"/>
</protocolMapping>
<behaviors>
  <endpointBehaviors>
    <behavior>
      <webHttp/>
    </behavior>
  </endpointBehaviors>
</behaviors>

One problem in Windows Azure is that certain configuration sections, such as protocolMapping, are missing from machine.config on the cloud machines, so you have to define them in web.config; otherwise the deployment will be stuck in the Busy state.
You can use the standard jQuery ajax method to consume the service:

$.ajax({
    url: '../Services/CrimeService.svc/Crimes',
    success: createChart,
    dataType: 'json'
});

function createChart(data) {
    // Create a simple bar chart.
    $(data).each(function (i) {
        createRectangle(i, this.Weight, this.City);
    });
    $('#placeHolderDiv').hide();
}

Now we have a service hosted in Windows Azure, and we have a client application delivered from Windows Azure to a PC device, where a modern browser renders the chart. The same application should also work fine on a phone that supports SVG. But not all phones support SVG, and even where SVG is supported, we need to modify the UI to make it more phone friendly. To take full advantage of a device, we still need to look into the native application development model. On Windows Phone 7, an application can be created using either Silverlight or XNA. XNA is usually used for games, while Silverlight is used for other applications, so we chose Silverlight to implement our Windows Phone client. We assume most Windows Azure developers are already familiar with Silverlight, so we won't go into the details of how the application is created. In addition, we cannot provide a live demonstration for the Windows Phone client, so instead we will ship its source code in 1Code, together with the source code for the service and the HTML client.

This is the last post in the series on IE 9/Windows Phone integration with Windows Azure. The key takeaway of this series is that the cloud wants smarter devices. Both a Windows 7 PC and Windows Phone 7 are smart devices. To exploit the full power of the devices in your applications, you should choose a modern platform with a modern technology. Then you connect the devices to the cloud and bring the user a better experience.
http://blogs.msdn.com/b/windows-azure-support/archive/2010/09/19/cloud-device-combine-the-power-of-windows-azure-ie-9-and-windows-phone-7-part-3.aspx
So You Think You Can Polymorph? Now you may already think you understand polymorphism—and perhaps you do—but I’ve found that most software developers don’t actually understand exactly what polymorphism is. What is polymorphism? How many times have you been asked this question during a job interview? Do you actually know confidently what the right answer is? Don’t worry, if you are like most developers out there in the world you probably have this feeling that you know what polymorphism is, but are unable to give a clear and concise definition of it. Most developers understand examples of polymorphism or one particular type of polymorphism, but don’t understand the concept itself. Allow me to clarify a bit. What I mean by this is that many times when I ask about polymorphism in an interview, I get a response in the form of an example: Most commonly a developer will describe how a shape base class can have a circle derived class and a square derived class and when you call the draw method on a reference to the shape base class, the correct derived class implementation of draw is called without you specifically having to know the type. While this is technically a correct example of runtime polymorphism, it is not in any way concise, nor is it a definition of the actual term. I myself have described polymorphism in a similar fashion in plenty of job interviews. True understanding The problem with just that example as an explanation is that it lacks true understanding of the concept. It is like being able to read by memorizing words, while not understanding the concepts of phonetics that underlie the true concept of reading. A good test for understanding a concept is the ability to create a good analogy for that concept. Oftentimes if a person cannot come up with an analogy to describe a concept, it is because they lack the true understanding of what the concept is. 
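Before moving on, the interview example recited above can be pinned down in a few lines of code; a minimal Java sketch (the class names are the usual hypothetical ones from the shape example):

```java
// Common interface: every Shape can draw itself
abstract class Shape {
    abstract String draw();
}

// Each derived class supplies its own implementation
class Circle extends Shape {
    String draw() { return "circle"; }
}

class Square extends Shape {
    String draw() { return "square"; }
}

public class ShapeDemo {
    public static void main(String[] args) {
        // The call site is identical for every element; the runtime type
        // of each object picks the implementation that actually runs.
        Shape[] shapes = { new Circle(), new Square() };
        for (Shape s : shapes) {
            System.out.println(s.draw());
        }
    }
}
```

The point of the sections that follow is that this runtime dispatch is only one instance of the broader concept: one interface, many type-dependent implementations.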
Analogies are also an excellent way to teach concepts by relating things to another thing that is already understood. If right now you can’t come up with a real-world analogy of polymorphism, don’t worry, you are not alone.

A basic definition

Now that we understand why most of us don’t truly understand polymorphism, let’s start with a very basic, concise definition. Polymorphism is sharing a common interface for multiple types, but having different implementations for different types. This basically means that in any situation where you have the same interface for something but can have different behavior based on the type, you have polymorphism.

Think about a Blu-ray player. When you put a regular DVD in the player, what happens? How about when you put a Blu-ray disc in the player? The interface of the player is the same for both types of media, but the behavior is different. Internally, there is a different implementation of the action of playing a disc depending on what the type is.

How about a vending machine? Have you ever put change into a vending machine? You probably put coins of various denominations or types in the same slot in the machine, but the behavior of the machine was different depending on the type. If you put a quarter in the machine it registers 25 cents. If you put in a dime it registers 10 cents.

And that is it, you now understand the actual concept of polymorphism. Want to make sure you don’t forget it? Try coming up with a few of your own real-world analogies or examples of polymorphism.

Bringing it back to code

In code, polymorphism can be exhibited in many different ways. Most developers are familiar with the runtime polymorphism that is common in many OO languages like C#, Java and C++, but many other kinds of polymorphism exist. Consider method overloading. If I create two methods with the same name that differ only in parameter type, I have polymorphic behavior.
The interface for calling the method will be the same, but the type will determine which method actually gets called.

Add(int a, int b)
Add(decimal a, decimal b)

You might be shaking your head "no", thinking that this is not polymorphism, but give me the benefit of the doubt for a moment. The most common argument against this example as polymorphism is that when you write this code, the method that is going to be called is known at compile time. While this is indeed true for statically typed and compiled languages, it is not true for all languages.

Consider Add being a message instead of a method. What I mean by this is that if you consider that the actual determination of the method that is called in this situation could be deferred until runtime, we would have a very similar situation to the common shape example. (Late binding.) In many languages this is what happens. In Objective-C or Smalltalk, for example, messages are actually passed between objects and the receiver of the message determines what to do at runtime. The point here is that polymorphism can be done at compile time or during execution; it doesn't really matter.

Other polymorphic examples in code

Since the intent of this post is not to classify and explain each type of polymorphism that exists in code, but rather to provide a simplified understanding of the general concept, I won't go into a detailed explanation of all the kinds of polymorphism we see in code today. Instead I'll give you a list of some common examples that you may not have realized were actually polymorphic.

- Operator overloading (similar to method overloading.)
- Generics and template programming. (Here you are reusing source code, but the actual machine code executed by the computer is different for different types.)
- Preprocessing (macros in C and C++)
- Type conversions

Why understanding polymorphism is important

I may be wrong, but I predict that more and more development will move away from traditional OO as we tend to find other ways of modularizing code that are not so rooted in the concept of class hierarchies. Part of making the transition requires understanding polymorphism as a general purpose and useful computer science concept rather than a very situational OO technique. Regardless, I think you'll agree that it is nice to be able to describe polymorphism itself rather than having to cite the commonly overused example of shapes.

(Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)

Barry Smith replied on Tue, 2013/04/16 - 8:45am

Why not make it even simpler? Polymorphism is just a neat way of doing repeated switch statements on a type variable. E.g., in Java-ish:

public class Animal {
    private int type;
    public void speak() {
        switch (type) {
            case DOG: println("woof"); break;
            case CAT: println("meow"); break;
        }
    }
}
...etc.

Polymorphism is basically just transforming the switches into implementations.

John J. Franey replied on Tue, 2013/04/16 - 10:12am

You didn't really help clear up understanding. You munged together the concepts of overriding and polymorphism. You also attempted to distinguish polymorphism in weakly typed languages, which is difficult because every method call is polymorphic by definition.

Quite simply, 'polymorph' is 'many-form'. I mean, if you go back to the Greek, 'morph' is a noun not a verb, meaning 'form' or 'shape', and 'poly' means 'many'. In OO programming, it means a method can have many (poly) definitions (morphs), where the definition invoked at runtime is determined when the method is called. How confusing is that?

John Sonmez replied on Tue, 2013/04/16 - 2:49pm in response to: John J. Franey

This is a common misconception about polymorphism.
Overriding is a form of polymorphism, as is generic programming. You are talking only about subtype polymorphism. Here is a wikipedia article that explains some of this: Thanks for bringing this up though, it is certainly a point of confusion.

John J. Franey replied on Tue, 2013/04/16 - 3:47pm in response to: John Sonmez

A cat is a form of animal; it is not a misconception to differentiate cats from animals. Overriding is a form of polymorphism; it is not a misconception to differentiate overriding from polymorphism. The differentiation clarifies.

Stephen Lindsey replied on Wed, 2013/04/17 - 5:34am

Just what, exactly, is the point of the large image at the top of this article? I would suggest that most people read these articles at work and it's not good when one's screen is dominated by such an image. It's unnecessary, childish and unprofessional. You're by far not the only one who does this; you're just the one I picked to make the point.

Lund Wolfe replied on Sun, 2013/04/21 - 7:13pm

Polymorphism (in programming) does imply differences in behavior (methods) of derived classes. Otherwise, there is no reason to have derived classes. The simple explanation of writing the code once for the usage of the super class is all that really matters from the developer's point of view. Technically, the derived types are created at run time and dynamically bound (and accessed accordingly). Along with predicting the end of OO, I think the brain is overrated ;-)

Brad Appleton replied on Wed, 2013/04/24 - 6:02pm

I like the way I learned it in Smalltalk better -- "Polymorphism is the ability to send any message to any object capable of understanding it". This covers inclusion polymorphism, parametric polymorphism, and ad-hoc polymorphism. It even covers the seeming exception (tho not really) to subtype polymorphism known as "delegates" in C# (and which was known as "Signatures" in g++ back in the early 90s).
I often like to describe polymorphism (and its different types) based upon what varies and what stays the same for each kind of polymorphism:

1. using the same method name for the same type (interface) to invoke differing implementations ==> subtype polymorphism
2. using the same method name for different types to invoke the same implementation ==> parametric polymorphism (generics/templates)
3. using the same method name for (same or different) types to invoke different method signatures ==> ad-hoc polymorphism
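The second kind in the list above (parametric polymorphism) is easy to see with Java generics; this sketch and its names are mine, not from the thread:

```java
// Parametric polymorphism: one Box implementation is reused, unchanged,
// for every element type the caller supplies.
class Box<T> {
    private final T value;
    Box(T value) { this.value = value; }
    T get() { return value; }
}

class BoxDemo {
    public static void main(String[] args) {
        Box<Integer> number = new Box<>(42);       // same source code...
        Box<String> word = new Box<>("forty-two"); // ...different type
        System.out.println(number.get());
        System.out.println(word.get());
    }
}
```

Here nothing is overridden and nothing dispatches at runtime; the "many forms" come from instantiating the same definition at many types.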
http://java.dzone.com/articles/so-you-think-you-can-polymorph
In the 'Quora Insincere Questions Classification' competition, I want to use simple TF-IDF statistics as a baseline.

import time

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV, StratifiedKFold, train_test_split
from sklearn.svm import LinearSVC

def grid_search(classifier, parameters, X, y, X_train, y_train, X_test, y_test, name='SVC'):
    begin = time.time()
    clf = GridSearchCV(classifier, parameters, cv=StratifiedKFold(n_splits=10), n_jobs=36)
    clf.fit(X, y)
    # benchmark() is my own helper: it refits the best estimator on the
    # train split and returns the f1 score on the test split.
    print('f1 for ' + name + ': ', benchmark(clf.best_estimator_, X_train, y_train, X_test, y_test), clf.best_estimator_)
    print('cost time: ', time.time() - begin)

data = pd.read_csv('train.csv')
data = data.sample(frac=1.0)
corpus = data['question_text']
y = data['target']
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, stratify=y)

C_set = [0.4, 0.6, 0.8]
tol_set = [2, 1.5, 1.4, 1.3, 1.2, 1, 0.8, 0.6, 0.4]
parameters = {
    'penalty': ['l1', 'l2'],
    'C': C_set,
    'tol': tol_set
}
classifier = LinearSVC(dual=False)
grid_search(classifier, parameters, X, y, X_train, y_train, X_test, y_test, 'LinearSVC')

The result is not bad:

f1 for LinearSVC: 0.5255644347893553 LinearSVC(C=0.8, class_weight=None, dual=False, fit_intercept=True, intercept_scaling=1, loss='squared_hinge', max_iter=1000, multi_class='ovr', penalty='l1', random_state=None, tol=0.4, verbose=0)

But after I changed LinearSVC to SVC(kernel='linear'), the program couldn't work out any result even after 12 hours! Was I doing anything wrong?

On the documentation page of sklearn.svm.LinearSVC, there is a note:

Similar to SVC with parameter kernel='linear', but implemented in terms of liblinear rather than libsvm, so it has more flexibility in the choice of penalties and loss functions and should scale better to large numbers of samples.

And on the page of sklearn.svm.SVC, there is another note:

The implementation is based on libsvm. The fit time complexity is more than quadratic with the number of samples which makes it hard to scale to dataset with more than a couple of 10000 samples.
That’s the answer: LinearSVC is the right choice to process a large number of samples.
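The difference is easy to reproduce on synthetic data; this minimal sketch is mine (the dataset and score are illustrative, not from the post):

```python
# LinearSVC (liblinear) fits 20,000 samples in seconds; SVC(kernel='linear')
# (libsvm, worse-than-quadratic fit time) would take dramatically longer here.
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC

X, y = make_classification(n_samples=20000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.1, random_state=0)

clf = LinearSVC(dual=False)   # same setting the post uses for the grid search
clf.fit(X_train, y_train)
score = f1_score(y_test, clf.predict(X_test))
print('f1:', score)
```

Swapping in SVC(kernel='linear') trains the same decision function, but through libsvm's solver, which is exactly the scaling trap the two documentation notes warn about.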
http://donghao.org/tag/scikit-learn/
HTML and CSS Reference In-Depth Information

      element.className = cName.replace(/^\s+|\s+$/g, "");
    }
  }

  function removeClassName(element, cName) {
    var r = new RegExp("(^|\\s)" + cName + "(\\s|$)");
    if (element) {
      cName = element.className.replace(r, " ");
      element.className = cName.replace(/^\s+|\s+$/g, "");
    }
  }

  dom.addClassName = addClassName;
  dom.removeClassName = removeClassName;
}());

These two methods require the tddjs object and its namespace method from Chapter 6, Applied Functions and Closures. To code along with this example, set up a simple JsTestDriver project as described in Chapter 3, Tools of the Trade, and save the tddjs object and its namespace method along with the above helpers in lib/tdd.js. Also save the Object.create implementation from Chapter 7, Objects and Prototypal Inheritance, in lib/object.js.

9.5.2 The tabController Object

Listing 9.5 shows the first test case, which covers the tabController object's create method. It accepts a container element for the tab controller. It tests its markup requirements and throws an exception if the container is not an element (determined by checking for the properties it's going to use). If the element is deemed sufficient, the tabController object is created and a class name is appended to the element, allowing CSS to style the tabs as, well, tabs. Note how each of the tests tests a single behavior. This makes for quick feedback loops and reduces the scope we need to concentrate on at any given time.

The create method is going to add an event handler to the element as well, but we will cheat a little in this example. Event handlers will be discussed in Chapter 10, Feature Detection, and testing them will be covered through the example project in Chapter 15, TDD and DOM Manipulation: The Chat Client.
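The two helpers can be exercised outside the browser with a plain object standing in for a DOM element. This stand-alone sketch is my reconstruction: the opening lines of addClassName fall outside the excerpt, so its body here is a guess at the book's intent, and the tddjs namespace plumbing is omitted.

```javascript
// Stand-alone sketch of the two class-name helpers; any object with a
// className string property can stand in for a DOM element here.
function addClassName(element, cName) {
  var r = new RegExp("(^|\\s)" + cName + "(\\s|$)");
  if (element && !r.test(element.className)) {
    // Append the class, then trim leading/trailing whitespace.
    element.className = (element.className + " " + cName).replace(/^\s+|\s+$/g, "");
  }
}

function removeClassName(element, cName) {
  var r = new RegExp("(^|\\s)" + cName + "(\\s|$)");
  if (element) {
    // Replace the matched class (and one adjacent separator) with a space,
    // then trim.
    cName = element.className.replace(r, " ");
    element.className = cName.replace(/^\s+|\s+$/g, "");
  }
}

var el = { className: "tabbed-panel" };
addClassName(el, "active");
console.log(el.className);   // "tabbed-panel active"
removeClassName(el, "active");
console.log(el.className);   // "tabbed-panel"
```

Driving the helpers through a plain object like this is the same trick the book's unit tests rely on: the functions only touch className, so no real DOM is needed.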
http://what-when-how.com/Tutorial/topic-3775be1i6b/Test-Driven-JavaScript-Development-216.html
KEYCTL_JOIN_SESSION_KEYRING(3)   Key Management Calls   KEYCTL_JOIN_SESSION_KEYRING(3)

NAME
       keyctl_join_session_keyring - join a different session keyring

SYNOPSIS
       #include <keyutils.h>

       key_serial_t keyctl_join_session_keyring(const char *name);

RETURN VALUE
       On success keyctl_join_session_keyring() returns the serial number of
       the key it found or created.  On error, the value -1 will be returned
       and errno will have been set to an appropriate error.

ERRORS
       ENOMEM Insufficient memory to create a key.

       EDQUOT The key quota for this user would be exceeded by creating this
              key or linking it to the keyring.

       EACCES The named keyring exists, but is not searchable by the calling
              process.

SEE ALSO
       session-keyring(7), user-session-keyring(7)

                                  20 Feb 2014      KEYCTL_JOIN_SESSION_KEYRING(3)

Pages that refer to this page: keyctl(2), keyctl(3), session-keyring(7)
http://man7.org/linux/man-pages/man3/keyctl_join_session_keyring.3.html
If you use CArray, and the const keyword, your programs might be running 50% too slow! Interested? Well read on...

I love Object-Oriented programming. And after 15 years of writing C programs, I'd be quite happy to code in C++ forever. Two of the things I love most are: templates and the const keyword.

If you're like me, you use const everywhere. It encapsulates encapsulation. Well, kind of... Anyway, if I can pass a const reference or const pointer, I will. Why? Because it means that my calling routine knows that its data is safe. It reduces complexity.

Templates speak for themselves. Well actually, they don't... and the syntax sucks (I have to race to a text book every time I want to create a template - that either means the syntax sucks or I'm stupid or maybe I just drink too much red wine...). Anyway, Microsoft has written several useful template classes, including CArray. It's a pity they did such a poor job, particularly with the documentation.

I've been burned by CArray several times. My code works fine, but then I discover a whole lot of unnecessary copying going on. CArray is fine for arrays of ints and doubles, but give it a class with more than a few bytes of data, and your program's efficiency gets clobbered.

Here's the kind of thing I like to do:

//
// Declare a useful class.
//
class MyClass
{
protected:
    // data here (maybe lots)

public:
    // etc. etc. etc. etc
};

typedef CArray<MyClass,MyClass&> MyClassArray;

Then, I'll use this array as follows:

void MyFunction(const MyClassArray& array)
{
    for (int ii = 0 ; ii < array.GetSize() ; ii++)
        DoSomething(array[ii]);
}

void DoSomething(const MyClass& my_object)
{
    // do stuff here
}

Pretty simple, right? But with CArray, the call to DoSomething(array[ii]) creates a temporary copy of the array element (in this case, MyClass) before calling DoSomething! Then the temporary copy is destroyed before the next loop iteration.

If my array element is an int, that's fine by me. But if it's a class with 1K of data, then CArray is silently stabbing me in the back. Of course, to be fair, CArray isn't "silent". Its operator[] const and GetAt methods are documented to return a copy.

But WHY? I can't think of any good reason (unless CArray is only designed for arrays of ints etc.) why these methods return a copy. They should return a const reference.

After getting burned for the Nth time, I've done something about it. I've made a simple derivation of the template class CArray, called OCArray (OC stands for Open Concepts - one of my companies). Or, if you like, it can mean "Optimised-CArray".

/*
 * Template Class: OCArray
 * Author:  Russell Robinson
 * Purpose:
 *  To provide a generic array class like CArray without the problems.
 *  OCArray takes one parameter - TYPE. Unlike CArray, OCArray always
 *  returns references and expects references as parameters.
 */
template <class TYPE>
class OCArray : public CArray<TYPE,TYPE&>
{
public:
    /*
     * Method:      OCArray::operator[] const
     * Parameters:  i_index       the array index to access
     * Returns:     const TYPE&   reference to the element at the index
     * Author:      Russell Robinson
     * Purpose:
     *  To return an element of the array for const access.
     */
    inline const TYPE& operator[](int i_index) const
    {
        ASSERT(0 <= i_index && i_index < GetSize());
        return (GetData()[i_index]);
    };

    /*
     * Method:      OCArray::GetAt
     * Parameters:  i_index       the array index to access
     * Returns:     const TYPE&   reference to the element at the index
     * Author:      Russell Robinson
     * Purpose:
     *  To return an element of the array for const access.
     */
    inline const TYPE& GetAt(int i_index) const
    {
        ASSERT(0 <= i_index && i_index < GetSize());
        return (GetData()[i_index]);
    };

    /*
     * Method:      OCArray::operator[]
     * Parameters:  i_index       the array index to access
     * Returns:     TYPE&         reference to the element at the index
     * Author:      Russell Robinson
     * Purpose:
     *  To return an element of the array for possible modification.
     *  This method is needed because the compiler
     *  loses the base class's method.
     */
    inline TYPE& operator[](int i_index)
    {
        ASSERT(0 <= i_index && i_index < GetSize());
        return (GetData()[i_index]);
    };
};

Just use OCArray instead of CArray. It only takes one parameter, because the argument type is implied as being a reference. This also helps remind you that you're not using CArray. The result is that there is no copying when you access the array through a const reference or pointer. The time saving is around 50% in an optimized program, and can be 75% in a debug version!

The above is all you need, but I've provided a demonstration project so that you can see the difference. Now we can think about what we'll do with all those spare CPU cycles...
From the article's discussion:

void test()
{
    // Standard way
    AVirtTreeItem avti;
    CVirtTreeItem avti2;        // fixed declaration
    avti.InsertAt(0, avti2);    // removed &

    // New way
    OAVirtTreeItem oavti;
    CVirtTreeItem oavti2;       // fixed declaration
    oavti.InsertAt(0, oavti2);  // removed &
}

void InsertAt( int nStartIndex, CArray* pNewArray );
    throw( CMemoryException );

OAVirtTreeItem oavti;
AVirtTreeItem oavti2;
for (int i = 0; i < oavti2.GetSize(); i++)
{
    oavti.InsertAt(i, oavti2[i]);
}

void InsertAt( int nStartIndex, const CArray* pNewArray );

class MyClass;
typedef OCArray<MyClass> MyClassArray;          // a straight 1 dimension vector of MyClass
typedef OCArray<MyClassArray> MyClassMatrix;    // a 2 dimensioned matrix

MyClassMatrix MyMatrix;
for (int i_row = 0 ; i_row < MyMatrix.GetSize() ; i_row++)
    for (int i_col = 0 ; i_col < MyMatrix[i_row].GetSize() ; i_col++)
    {
        // stuff you want to do with MyMatrix[i_row][i_col]
    }

typedef OCArray<CPassportInterface> CPassportInterfaceArray;
http://www.codeproject.com/Articles/255/CArray-A-simple-but-highly-efficient-improvement
'CodeSpells' Video Game Teaches Children Java Programming

CyberSlugGump writes "Computer scientists at UC San Diego have developed a 3D first-person video game designed to teach young students Java programming. In CodeSpells, a wizard must help a land of gnomes by writing spells in Java. Simple quests teach main Java components such as conditional and loop statements. Research presented March 8 at the 2013 SIGCSE Technical Symposium indicates that a test group of 40 girls aged 10-12 mastered many programming concepts in just one hour of playing."

The spell book looks INCREDIBLE: (Score:3)

Or like a Windows 3.11 UI for an edutainment product that came with the computer... whatever, as long as it works.

Re:The spell book looks INCREDIBLE: (Score:5, Insightful)

The entire game looks pretty basic - and who the heck cares? Watch a two year old and see what happens when you give 'em a present. They are as likely to play in the big box as with the toy. Graphics might be important for the latest 3D shooter, but a good game doesn't HAVE to have cutting edge graphics. A game with amazing graphics can still be crap.

If the idea is to teach kids how to code, and they enjoy playing the game enough to at least learn a little coding - then it is a GREAT product. If I was ten and wanted to learn java and had a choice of following tutorials/reading books/etc or playing a game that taught me the concepts, then I certainly know how I would have learned java. Sure, all my projects might also include a random "Save the GNOMES!!" routine, but you know..

Re: (Score:3, Insightful)

The entire game looks pretty basic - and who the heck cares? Watch a two year old and see what happens when you give em a present.

Polish can be incredibly important. If first impressions are bad, the student may not get hooked, which defeats the entire point of packaging the thing like a game.

Re: (Score:3)

Because they brought us "The Witcher" and we'd really like to see what else the Polish people are capable of?

Re: (Score:2)

It's silly to complain about such an early version. Don't worry—Java 10 should catch up with the Windows 95 GUI.

not a complete success (Score:4, Funny)

38 of the 40 girls in the test group complained that, once they were written in Java, the spells took forever to execute.

Re:not a complete success (Score:4, Informative)

Your joke was funny 15 years ago when Java was actually slow.

Re: (Score:2)

GP wrote the joke in Java.

Re: (Score:3, Interesting)

I really can't tell if you're trolling. If you're not trolling then you should know that the reason heap allocation is faster in Java is because Java allocates a sizeable chunk of the system's memory on runtime startup to use as a pool allocator. In C++ you're expected to come up with your own allocation strategies because you know more than the compiler about what the high level behavior should be. What you were actually benchmarking was the overhead of the system calls/context switches that come with opera

Re: (Score:2)

Funny thing is.. well, not funny, it's actually kind of sad... most common high end languages today are scripting languages or interpreted with JIT. The traditional "compile it first and use a minimal run time" method is considered something that only ancient dinosaurs did before they died off. So in that sense Java is probably one of the fastest languages in that set.

Re:not a complete success (Score:5, Interesting)

Yeah, that's fairly outdated thinking. Speed isn't derived by the language anymore. It's the execution that counts. Java compiles down to op-code, which is run in the JVM. The JVM has decades worth of run-time optimizations. The majority of large scale web sites are written in Java.
Hey, ever heard of Hadoop? You know, the large scale Map-reduce framework based on Google's technology that sorts terabytes and petabytes of data? Java.

Re: (Score:3)

Yes, I've heard of Hadoop - the framework that fixes java performance by splitting its execution up across a couple hundred machines. :-) The framework language isn't the performance part though - it just acts as a manager to send data to a group of workers and aggregate the results back; it's the workers that are important. If they are slow, the whole thing is slow. So it's best to write these in a native language. It's not anything special to Java either - Google's mapreduce implementation is written in pyth [google.com]

Re: (Score:3)

The bigger problem is that once students exit the game, their memory gets garbage collected and they have no recollection of what they learned.

Re: (Score:3)

38 of the 40 girls in the test group complained that, once they were written in Java, the spells took forever to execute.

Probably ran slow because their machines had been compromised by 5 separate zero-day exploits before they'd finished the lesson.

Re: (Score:2)

And this is why you should have the download from [ninite.com] as a Run Before Anything Else program on a flashkey you use to do computer setups. (And yes, you can have it download/install a number of other programs in the same run.)

Re:not a complete success (Score:4, Interesting)

When Java first took off, and the web was made of Java content executed via plugin, Java was written by idiots who concatenated strings instead of using string builders, and similar abuses of common sense through ignorance and teaching materials that focused on results rather than good practice. Executables outside of plugins suffered the same deficiencies, although they were probably attempting loftier goals, and the performance was... what is the opposite of magnified, because it was slower than a sloth taking a crap?
This lasted a number of years, even as the Java interpreter became stable and work was made to increase its performance. Idiot coders learned or abandoned Java, and the runtime made even the remaining idiots look better, if not "good". If you don't find this comment amusing, you either lack historical perspective, are a Java programmer, or should consult a medical professional to be diagnosed for your deficiency in some manner or other.

Security problems these days seem to be focused on the browser plugin, rather than locally executing native apps, so the security comments mostly don't apply. Visiting a random internet web page and allowing it to execute poorly sand-boxed arbitrary code is a bit like licking random strangers' genitals. In case that interests you, let me state that it should not be done as a general practice, and you should consult a medical professional.

I have read Java for over a decade, and I have coded in Java for 3 years or so. Having experience with x86 ASM (AT&T and MASM), K&R C, ANSI C, GWBasic, Turbo Pascal, C++ (VC 5-2010, gcc 2.x - 3.x, mingw), VB 5-6, C#, VB.NET, Python, Powershell, JavaScript (advanced, not your normal getElementById().Blink() shit) and several other introductions, I can say this: Java examples in the real world and in most printed books are the most incestuous, groupthink-y, overly-architected piles of verbosity I have ever had the displeasure to read.

I completely understand the need for default parameters, dependency injection, constructor and method chaining, and all kinds of modern best practice. But I have never seen another language embrace the overbearance of best practice teachings without implementing some balance of solution soundness. Java examples and implementations (open source of course, because I have read them) seem to abound with overloaded methods under 5 lines of code, which initialize another parameter to call another overload.
Now you have multiple functions to unit test, multiple code paths, multiple exception sources, and unless you are brainwashed in the spirit of Java, comprehension of the complete workings is complicated by scrolling off-screen with essentially purpose-free function declarations, whitespace between functions, and an essentially functional programming paradigm split over several different methods to give the appearance of flexibility, OOP, and conscious design. It reads to me like someone wrote that no method should ever take more than one additional parameter that you were not already given, and coherence be damned. I would much rather see a single method with 5 non-optional parameters than 5 overloads which calculate and pass one new parameter each time. The Java paradigm seems to be that calculating things within the overloaded methods is preferable to factoring these out into unrelated functions. In a truly sane, OOP world, those calculations would be a part of the object, or if sufficiently general would be part of the object's base object.

In fact, the Java approach seems to be the Builder design pattern, which I have not seen adopted as frequently as it should be. Obligatory link here. [stackoverflow.com] As sensible as the Builder pattern seems to be, I think it would still require a number of extra Set/Get property methods, which are function calls. Maybe Java has optimized this, but if you don't adopt it optimization can't be had.

I had a similar idea some time ago. (Score:5, Interesting)

I had a similar idea some time ago, but with an MMORPG setting. One of the issues that has always rankled me hard was the "cookie cutter" nature of the world events in those games, as well as the limiting gameplay options, so I had this idea for "obfuscated and sigilized" programming syntax as the basis for a game's magic system. Rather than presenting a loop as a nested block of instructions, it would depict it as a "container", with subcomponents inside.
Kind of a mix of flowcharting and stylized syntax. The idea was that the layout of the "enchantment" could be moved and teased to make clever images out of the interconnected containers and symbolic representations, to make the programmatical nature of the system less banal, and much more aesthetically attractive, while simultaneously making the kinds of magic and counter magic highly diverse and dynamic.

I never really did much with the idea (ideas aren't worth much, despite what the USPTO and several shell corps may claim. Implementations are far more valuable.), and all the "on paper" mental models I tried kept having non-trivial problems. I like seeing that somebody had a similar idea, and made a working implementation.

Re: (Score:3, Funny)

Battlegrounds in World of Warcraft were kind of awesome back before they (quite correctly, I guess) put all sorts of restrictions on what kinds of things could be scripted. I used to *own* the level-19 battlegrounds with a warlock and an addon I wrote to keep track of enemy targets and optimally distribute my various curses and afflictions. I just ran around mashing the spacebar like crazy, because among the few restrictions was that every action had to be tied to a hardware event. I also earned 10,000 gold

kturtle & minecraft vs codespell (Score:3)

My nearly six-year-old is doing great things (for a kindergartner) with KTurtle -- which is really a pretty cool environment (I was surprised to find). He also spends much time hacking crazy stuff with redstone in Minecraft. The next logical step to a real programming language seems to me, keeping it fun and relevant to his interests, is to introduce some javascript (as much as I dislike it) so he can mess up web pages with little effort. From there it seems python is the friendliest, easiest and most resource-rich multi-purpose playground. Maybe CodeSpell will be something to check out eventually. Though the java example on their blog doesn't look all that fun to me.
I hope it's fun. If it gets to the point where I'm teaching the kid OOP, and all the verbose java syntax requirements, he'll probably only want to make minecraft mods. That's what CodeSpell is up against in this house.

Re: (Score:2)

I would think any language that's fairly simple and can produce instant results would be a good language to introduce a child to. I say this because I had BASIC at the age of 5 and I could type out a few lines of code, hit run and see the results (almost) instantly. Better yet was having a ton of software written in BASIC that I could load up, tinker with and then try. I'm not really in touch with most modern languages, so I don't know what's out there that would have that same kind of feel, but I know

Re: (Score:2)

Logo/Turtle Graphics and Microworlds should be required learning software for the young kids.

Jumping on the bandwagon... (Score:2)

... but it seems they are willfully ignoring Linux as a platform. And teaching about computers. Yeah, cliche to complain about it, I know, but it does seem kind of disingenuous at best.

Neal Stephenson beat them to it (Score:2)

In the Diamond Age, the "Primer" teaches logic, programming and nanotechnology in a similar fashion. While it is good to see the concept taken to practice..... Nothing new to see here, move along. ;-)

Re: (Score:3)

There have been games that have done this before, even well before "The Diamond Age" was an idea in Stephenson's head. The problem is these kinds of games are so few and far between that it's fairly notable when one comes up. While I certainly played my fair share of standard games as a kid, I also had quite a few educational ones as well (granted some of them were below me, my parents bought me a math game based on my age and not my ability). As much as I hate coding now (mostly due to syntax crap in lang

Wizard's Bane by Rick Cook (Score:2)

Jesus, Java? Why not COBOL? (Score:2)

Java is an OK language, but it's kind of bureaucratic and boring.
I can't think of a better way to suck all the magic out of a fantasy game than to have the spells written in Java---except maybe having the kids produce an ER diagram and a set of tables in Boyce-Codd normal form. At the very least, they could do without the pointless punctuation. Does a spell really have to have semicolons and empty parentheses to denote that the spell is imperative?

Re: (Score:2)

While I agree with your sentiment about "why use Java" for something like this, I also really applaud this kind of thing. Yeah, a different language or a language invented specifically for this app probably would have been better, but introducing kids to programming at an early age is a win over all.

MASTERED in an hour? (Score:2)

I think I need an hour of CodeSpells and I can add Java proficiency to my CV; I've only spent a hundred hours coding in it, so I've set my skill as "exposed to" instead.

Lua with Minecraft + Tekkit mod (Score:3, Informative)

it would seem... (Score:2)

Java and standalone ... seriously!? (Score:2)

How about Javascript and run in the browser or on the cloud instead? There's nothing commenting on why Java was chosen, but it seems a very surprising decision to come out of a computer science department ... or maybe not. Are academics really keeping pace with technology or the public interaction with technology?

I feel... (Score:4, Funny)

Re: (Score:2)

I was just visiting with the good folks at the local Python users group. Nice folks, but when I dug into where the actual jobs are, it was clear Python was not a bread winner by a long shot. Most of them were using only Python when they were bidding out the work and the client had no input into the language to use. That tended to be side gigs. The 9-to-5 work was usually Java or .Net. Python certainly has its great points, but so do a dozen or so others: Groovy, Clojure, Ruby and Scala. I know a lot of f

Re:How about Python or something?
(Score:5, Insightful) At the elementary level I don't think the choice of language with reference to the business environment is that important. Teaching kids that they can make their computer/tablet/whatever DO STUFF and presenting it in an easy to digest format is much more valuable that what is big in the industry now. Keep in mind, elementary age... at least 20 years (on average) until they start rolling to the job market. Not all of them will be programmers. The languages we use now may be dying by that point. they may not all be programmers, some may be scientists using more focused languages in the vein of Matlab, some may be homemakers, some may be athletes, some may be artists. But they will all have an appreciation for technology and what it can do. They will all get introduced to logic and algorithmic structure at a much earlier age than is normal right now. Those things easily apply to other aspects of life. Hell, if they keep at programming strictly on a hobby basis, they may even catch on to when the developers at their company/organization/whatever are BS'ing them about what can and cannot be accomplished. Re: (Score:3) While the mods may not agree with you very strongly, I've seen a wealth of evidence that says Java is a bad introductory language. The CS department at my alma mater switched from an all-Java curriculum to one with a Python intro, and the student attrition rate dropped by a significant margin. A friend of mine—the daughter of two CS profs—was dead-set on avoiding programming as a teenager until I introduced her to languages other than Java. While the formalisms and syntax are great for software e Re: (Score:2) Indeed. Seriously fuck java. I decided to try to learn Java after quite a few years of not coding (not that I ever got that advanced with it anyway). There are not enough derogatory words in all languages combined to describe my hatred of Java. It seems simple and straightforward at first, but this is deception! 
(To be fair, some of my issue is with the buzzword like jargon associated with it and OOP in general, fucking insanity). Sure, basic math to advanced math (which is generally what I use code f Re:How about Python or something? (Score:4, Interesting) Irony of ironies, C# is almost exactly like Java at the language level, only with a totally different object hierarchy, which is why it's easier for UI development. The .NET hierarchy is somewhat influenced by classic VB, which was a very well-developed and efficient (if sometimes limiting) format for expressing common UI needs. Java's popularity, sadly, has to do exactly with that OOP evangelism. In the late eighties and early nineties, academic software engineers were absolutely convinced OOP was the silver-bullet software development paradigm for all ills, since encapsulation (hiding methods) made code re-use practical. They also believed it was the end to all programming practices that inhibited re-use, particularly global variables. Unfortunately they made the mistake of conflating these practices with "laziness," and very mistakenly believed in a bizarrely Victorian fashion that all beginners should be forced to use only best practices, as though we should be teaching infants proper manners straight out of the crib. It's stupid enough that I sometimes wonder if it was a massive conspiracy by Sun's marketing department, but to be honest computing has always been full of fads like this. In the early eighties, logic programming was The Way Of The Future; everyone thought that Prolog and constraint-satisfaction-based expert systems (basically, fancy predicate logic expression solvers) would dominate computing for the rest of time. Today, there are only a few niches where new Prolog code is considered desirable. Re: (Score:2) Oh trust me, I get that about C#, it's just that much of a step up above Java (and both have roots in C, so the transition was fairly easy). 
Granted I'm old enough to where when I started programming OOP hadn't even become a fad yet (to be fair, I started at a much earlier age than most of my peers - PC's weren't common in the home at the time - mid 80's). Though I still tend to think procedurally, I like the concepts of OOP, just hate that evangelism and the needless jargon introduced to separate it fro Re: (Score:3) Just like in math, CS is riddled with context-specific names for refinements of the same thing. You wouldn't want to conflate a ring with a field, right? For what it's worth, though, the first OOP language, Simula, just called them "procedures" at the syntax level. And in the case of encapsulation, there really isn't a good, compact term for "writing all of your code properly so that nothing inappropriate is publicly accessible," so that was kind of a new concept that needed a new name. But for what it's wor Re: (Score:2) As a Math major I totally get that. Especially at the more advanced level where you start using perpendicular, orthogonal and normal pretty much interchangeably. In programming, I've used "procedure", "subroutine", "function" and "method" to describe what is conceptually the same thing. I like function best because of the easy tie in with Math, granted given the mathematic definition of a function, I can see where a function with no return value seems bizarre. As far as encapsulation goes, while I get tha Re: (Score:2) In my opinion the worst OOP offences are what sane people call "setters" and "getters", but are what are probably called "mutators" and "accessors". To most people they're a necessary evil when you want to limit the range of values a variable can take on (although C# does this transparently with properties, which Microsoft (confusingly) recommends starting with an upper-case letter... what the hell, Microsoft?) 
but in Java, students are often taught to write them even when they're completely transparent and Re: (Score:2) I still have enough of the old procedural mindset in me to avoid setters, getters, mutators and accessors all at once! What is even better is that /.'s apparently recently implemented spelling/grammar check doesn't like mutator or accessor. +1 for the sane people I suppose? I don't write code to be rolled out into a large package to be reused by someone else elsewhere in an obtuse and poorly implemented fashion. I will set my variables to what I want, how I want, when I want. Oddly enough my desire for d Re: (Score:2) I need to lay off the alcohol tonight. I just realized I responded to two of your posts in the exact same fashion. /sigh Re: (Score:2) Re: (Score:2) Indeed, Thank you for the quality discussion! I will now go investigate the mystery that is sleep. Re: (Score:3) Over half of the population is way too dumb to understand programming in the first place. It's better to start off with a good language. Also, C is easier than Java. C++ maybe not, but then it sucks balls too. Re: (Score:3) Re: (Score:3) Yeah, you interview crappy >. Try talking to liberal arts students or people who have not gone to college. Re: (Score:2) is engineers. Slashdot cut it off as a tag Re:How about Python or something? (Score:4, Insightful) Re: (Score:3) The world could use more elitism, instead of dumbing kids down by teaching them that we should value mediocrity. Everyone gets a trophy, woo-hoo! But a job? Gee, kid - Sorry we didn't prepare you to actually compete when you get to the real world... 90% of the population of the world could easily learn to program and learn to do it proficiently. Aahahahahaha... Oh, man, stop, ya killin' me here! With a lot of effort, you could teach most people to use cookie-cutter VBS sni Re:How about Python or something? (Score:4, Insightful) Perhaps to someone who has been trained in C but not Java. 
The biggest problems with C when compared against Java is the limited extent of its standard library, sorting through the plethora of poorly documented non-standard libraries that are available (vs Java, where if there isn't a standard for it, then the next obvious stop is apache.org) and the fact that you need to understand the hardware architecture of the system you are developing for in a lot of cases, as well as distinctions between stack and heap and a bunch of low level gotchas in the language that are far from obvious to the newbie, or even to experienced developers sometimes. Re: (Score:2) As opposed to the high level gotchas in Java? ;) Java can be just as obtuse as C, if not more so. Re: (Score:2) The only obtuseness I've encountered in Java is the hoops you have to jump through to interoperate with an API from another language that uses unsigned types. C tutorials on the other hand are full of examples where *(x+1) is used interchangeably with x[1], which ends up becoming a habit for years until one day you hit one of the edge cases where those statements are not equivalent and are left scratching your head as to why your program is crashing. Endianness is never hidden behind an API, since the whole Re: (Score:3) In C, you don't have to create a separate namespace with specifically named folders and files, and a separate class just to be able to say "hello world". Kids lose interest very quickly if they don't get results, having to learn about OOP before they write their first program isn't going to work. Re: (Score:2) No, but it might not hurt to try and find a way to introduce those concepts earlier? The same goes for programming, logic is a segment of mathematics that is essential to programming yet doesn't require concepts such as addition and its offshoots (almost every other operation commonly used in Math). Boolean and Binary logic are the foundation of programming and are simple enough that a 5 year old can grasp them. 
Sure, they may not be able to design complex circuits at that age (though you might get the o Re: (Score:2) So we're agreed, using C as the first introduction to programming is a bad idea? Not that I'm advocating Java, or even Python as the best solution, both are only marginally better. Scratch is more like it - as simple to learn as Logo, but less boring (Logo held my interest for about half an hour when I was young, by that time I'd about reached the limitations in what could be done with a pen and 4 directional commands). Re: (Score:2) Re: (Score:2) Re: (Score:2, Insightful) Pointers are not complicated, I'm sorry. Maybe for 8-year-olds, but that's why they should learn Python. It's actually really easy, it's a very popular language, and it teaches good coding practices as well as jack-off object oriented concepts. Re: (Score:3, Insightful) Re: (Score:2) No, pointers are complicated in any large-size software project. Because they point to memory whose allocation and freeing you have to handle yourself, once it's no longer needed or referenced. When you have pointers to pointers to pointers, deciding what code has the responsibility of freeing which pointer's memory (which is not always handled by the same code) is where it becomes easy to make a mistake. Garbage-collected programming handles all this low-level, tedious memory management automatically. Re: (Score:2) Nah, it's more readable to use whitespace to denote codeblocks. It's not like it's easy to match up a bunch of curly braces. And you should be indenting anyway, right? Re: (Score:2) Re: (Score:2) With whitespace being insignificant, each reader of the code can format it however they want. And that's a bad thing. I mean you don't have people choosing what they want to call the commands in a programming language. That would be chaos. Some people have tried...
remember #define BEGIN { #define END } With indenting as with commands, it's far better if the language defines a common standard, with deviations being a warning or an error. That way, when people are working collaboratively, their code is guaranteed to be in the same indentation style. Unfortunately no language I know of does this. Python comes clo Re: (Score:2) Re: (Score:3) This is 2013, we shouldn't have to indent manually still. If you want to cut/paste a few lines of code from one section to another, if the indentation doesn't match it can be seriously annoying in python to get it all correct. Compare this to java, where as long as it is between the curly brackets I know it will be OK. Press the shortcut for auto-indent, and I can tell immediately if it is in the right place. Re: (Score:3) Me too, but they were called PEEK and POKE then. Re: (Score:2) I would argue that it's probably more beneficial to teach children some completely fictional programming language. Let the intelligent ones figure out theoretical improvements. Re: (Score:2, Funny) Also, you said "your" instead of "you're"; if you are confused by this, these words are used correctly here: "you're a moron, your opinion doesn't count." Re: (Score:3) I'll have to ask you to back that up. Python 1.0 is from 1994, and didn't have much by way of specialized facilities for this purpose. Sure you aren't thinking of PHP? Anyhow, if you care about "type systems and formalism", you should be on Haskell; Java is distinctly half-assed. (Type-erasure generics? Really?!) Re: (Score:2) >I've been programming for more than thirty years, and I love Python In my experience, the people who love Python are hack programmers who are sloppy with types and exceptions in the beginning, and python just enables the sloppiness. I think it's true that a lot of hack programmers like Python, but it is not true that everyone who loves Python is a hack programmer. I know you didn't actually say that but it came across that way.
Re: How about Python or something? (Score:5, Informative) The reason that Java isn't as fun to program is the same reason that it's good for businesses. The language is very restrictive and prescriptive of how you should do things. For programmers that want flexibility and power, the constraints and extra typing (dual-meaning intended) chafe. But when you're using it as part of a large group, those same constraints become the things you can depend on. Where is a certain class located? Java requires it to be in a certain directory. What methods are available on a class? Java's static type system was designed to make tooling easy, so your IDE will tell you. And even talented programmers can mess up manual memory management...the less-talented wouldn't stand a chance without Java's memory management. The list of things that Java prevents you from screwing up is quite long. Basically, for my home coding projects and projects where I work with a small team of talented developers, Java is one of my last choices. But for my boring 9-5 job where I'm working with 30 knuckle-draggers who don't understand the purpose of an interface, let alone how to write functional code that's easy to read, I want them writing Java and I'm willing to pay the Java price to get that. Re: How about Python or something? (Score:4) Re: (Score:2) I think Java is fun to program in for exactly those reasons. For me the fun in programming is getting cool results and Java allows one to create complex stuff without having to constantly worry about shooting oneself in the foot. It allows me to use my full brain capacity for the actual algorithms I want to create and doesn't add lots of cognitive load. Especially when using a powerful IDE, like Netbeans. Re: (Score:2) True, but that's also why it's bad as a first language. It only teaches a very narrow view of programming, and does that in a painful way. Re: (Score:2) Fortran? Re: (Score:3) Let's compare the two, shall we? 
CodeSpells actually exists and is backed by a university. Code Hero shows no signs of ever being completed and is backed by a guy who's notorious for scamming people. Re: (Score:2) Seriously? Never heard of something called "Google"? Different people learn in different ways. (Score:2) . Rote memorization could maybe get them to pass a multiple choice exam where they can pick out which is "a conditional statement" out of the choices, or perhaps which is a valid beginning or end of a loop construct, but play-acting and engaging the mind int Re: (Score:2) Actually I think the key is just "when it interests them". I mean we can apply classical conditioning, operant conditioning, and other methods all day long. Providing the material in a way that engages, interests and challenges an individual is always best, but the method for doing that may vary from individual to individual. Re:Let me read that again (Score:5, Insightful) Yeah, you go ahead and explain loops and conditional statements to 40 10-year-olds. They'll learn it in 5, master it in 10, forget all about it in 15. They'll probably be bored, too. Or you can use a software like this which will engage them, encourage them, and help them remember it when they go home that night. It sure would be a shame if they were excited to learn more the next day and had a platform that was there to teach them and give you time to grade their math tests. Re:Let me read that again (Score:4, Interesting) I wish I had mod points right now. One of the best things I ever had as a kid was a TRS-80 (CoCo - and not a true TRS-80 either, even though that was stamped on it) that booted to a BASIC interpreter. The code for any games I loaded directly off disk could be tinkered with easily, no need to compile. This was awesome as a curious 5 year old. Even better about it were the games "Rocky's Boots" and "Robot Odyssey". 
These games taught me the basics of digital electronics, lessons which have actually helped in my current career as a technician (with no formal training in digital logic). Seeing this kind of software being produced in a modern setting is awesome, I wish there was more of it. Re: (Score:2) Sorry that should've been [iolanguage.org] Re: (Score:3, Funny) That's why Java has builtin garbage collection, DUH! Re: (Score:2) I really think algorithm structure and design (from a math perspective) is more important for a beginning coder than things like OOP and memory management, yes those are important, especially with how prevalent OOP has become, however OOP is just a wrapper around the math and the memory management will flow from sound and logical structures. Pseudocode is probably the best first step, aside from it lacking the ability to be executed. Re: (Score:2) As much as I fucking despise Apple (aside from my iPod classic), you do realize that a good portion of hacker culture spawned around the Apple and Macintosh computers right? Re: (Score:2) Yes, hence me despising Apple in their current state (pretty much since Jobs came back, though they had been rocky for a few years before that). I will however give credit where credit is due. Many icons in the code world got their start on an Apple machine. Many of them still have a lot of sentiment for the platform, why try to completely marginalize an entire group? Especially when some of the younger ones may not know any better? Re: (Score:2) (pretty much since Jobs came back Jobs has come back? He truly is the Messiah!
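Earlier in the thread, a commenter complains that Java students are often taught to write getters and setters even when they are completely transparent. A minimal sketch can make that complaint concrete; the class and field names here are invented purely for illustration, not taken from CodeSpells or any real codebase:

```java
// Illustration of "transparent" accessors: the plain getter/setter pair
// adds no logic, so it is pure boilerplate around the field. The checked
// setter shows the case where an accessor actually earns its keep.
public class Spell {
    private int manaCost;

    // Transparent accessor pair: nothing gained over a public field today,
    // except the option to add validation later without changing callers.
    public int getManaCost() {
        return manaCost;
    }

    public void setManaCost(int manaCost) {
        this.manaCost = manaCost;
    }

    // A non-transparent setter, by contrast, enforces an invariant
    // (here: mana cost may not be negative).
    public void setManaCostChecked(int cost) {
        if (cost < 0) {
            throw new IllegalArgumentException("mana cost must be >= 0");
        }
        this.manaCost = cost;
    }

    public static void main(String[] args) {
        Spell s = new Spell();
        s.setManaCost(5);
        System.out.println(s.getManaCost());
    }
}
```

The C# properties mentioned in the same exchange make the transparent case implicit (the compiler generates the trivial accessor pair), which is roughly what the commenters are asking Java to do.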
https://games.slashdot.org/story/13/04/10/2237243/codespells-video-game-teaches-children-java-programming?sbsrc=developers
What is the best way to delete the elements from one numpy array in another? Essentially I'm after np.delete():

import numpy as np
a = np.array([2,1,3])
print a
b = np.array([4,1,2,5,2,3])
b = np.delete(b, a)  # doesn't work as desired
print b  # want [4,5,2]

You can use np.argmax to find the first True element along a set of rows or columns. So, for example, you can do a broadcasted version of this operation this way:

>>> a = np.array([2,1,3])
>>> b = np.array([4,1,2,5,2,3])
>>> np.delete(b, np.argmax(b == a[:, np.newaxis], axis=1))
array([4, 5, 2])

Of course, as with many numpy vectorized operations, the speed comes at the cost of allocating an array of size len(a) * len(b), so depending on your inputs this may not be appropriate.
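The answer above can be run end to end; the second variant below is an alternative sketch (my assumption, not from the answer) using boolean masking, which removes all occurrences of the values in `a` rather than one index per element:

```python
import numpy as np

a = np.array([2, 1, 3])
b = np.array([4, 1, 2, 5, 2, 3])

# Broadcasting approach from the answer: b == a[:, np.newaxis] is a
# (len(a), len(b)) boolean matrix; argmax along axis=1 picks the FIRST
# matching index in b for each element of a.
first_match = np.argmax(b == a[:, np.newaxis], axis=1)
result = np.delete(b, first_match)
print(result)  # [4 5 2]

# Alternative sketch (assumption: you want ALL occurrences removed):
# keep only elements of b that do not appear anywhere in a.
# Note the difference from above: both 2s are dropped here.
masked = b[~np.in1d(b, a)]
print(masked)  # [4 5]
```

One caveat with the argmax approach: if an element of `a` does not occur in `b` at all, `argmax` over an all-False row returns 0, so index 0 of `b` would be deleted by mistake.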
https://codedump.io/share/V2nIBs0DUSbO/1/efficient-way-to-delete-elements-in-one-numpy-array-from-another
Details - Type: Improvement - Status: Closed - Priority: Major - Resolution: Fixed - Affects Version/s: None - Component/s: core/index - Labels: None - Lucene Fields: New Description. Issue Links - blocks LUCENE-2293 IndexWriter has hard limit on max concurrency - Closed LUCENE-2297 IndexWriter should let you optionally enable reader pooling - Closed Yeah I don't like it either (makes my code unnecessarily long). And I always use UNLIMITED, and the LIMITED=10,000 is really just a guess, and so if anyone wants to limit it, he needs to do new MaxFieldLength(otherLimit) which is unnecessarily long as well ... I like it - I'll deprecate on IW and introduce UNLIMITED on IWC. I was wondering if perhaps instead of allowing to pass a create=true/false, we should use an enum with 3 values: CREATE, APPEND, CREATE_OR_APPEND. The current meaning of create is a bit unclear. I.e. if it is true, then overwrite. But if it is false, don't attempt to create, but just open an existing one. However if the directory is empty, it throws an exception. I think an enum would allow someone to pass CREATE_OR_APPEND in case he doesn't know if there is an index there ... but I don't want to complicate things unnecessarily ... what do you think? IndexingChain is one of the things that can be set on IW, however I don't see any implementations of it besides the default, and the class itself is package-private, so no app could actually set it on IW (unless it puts its code under o.a.l.index). Therefore I'm thinking of not introducing it on IWC, or turning it into a public class? Is it really something we expect any application out there to set, or can we simply make DocsWriter impl one for itself internally, and not declare this class as abstract etc.? I'm thinking to make this whole IWC a constructor-only parameter to IW, without the ability to set it afterwards.
I don't see any reason why anyone would change the RAM limit, Similarity etc. while IW is running. What's the advantage vs. say closing the current IW and opening a new one with the different settings? I know the latter is more expensive, and I write it deliberately - I think those settings are really ctor-only settings. Otherwise you might get inconsistent documents (like changing the Similarity or max field length). This will also simplify IWC, because now I need to distinguish between settings that cannot be altered afterwards, like changing IndexDeletionPolicy, create, IndexCommit, Analyzer ... if IWC will be a ctor-only object, I can have only the default ctor (to init to default settings) and provide the setters otherwise. Any objections? +1 – this is great! I am not sure why MaxFieldLength is required in all IW ctors, yet IW declares a DEFAULT (which is an int and not MaxFieldLength). This is because it's a dangerous setting (you silently lose content while indexing), a trap. So we want to force the user to make the choice, up front, so they realize the implications. But, if we change the default to UNLIMITED (which we should do under Version), then I agree you should not have to specify it. In my opinion it should be left to the application to limit the number of tokens if needed, but not silently drop tokens I like that approach – we could make a TokenFilter to do this? Then we don't need MFL at all in IWC (and deprecate in IW). I was wondering if perhaps instead of allowing to pass a create=true/false, we should use an enum with 3 values: CREATE, APPEND, CREATE_OR_APPEND +1 I'm thinking to make this whole IWC a constructor only parameter to IW, without the ability to set it afterwards. +1 in general, though we should go setting by setting to confirm this is OK. I don't know of "real" use cases where apps eg want to change RAM buffer or mergeFactor... but maybe there are some interesting usages out there.
But, if we change the default to UNLIMITED Today there is no DEFAULT ... IW forces you to pass MFL so whoever moves to the new API can define whatever he wants. We'll default to UNLIMITED but there won't be any back-compat issue ...? I can see the value in this - there are a bunch of IW constructors - but personally I still think I prefer them. Creating config classes to init another class is its own pain in the butt. Reminds me of Windows C programming and structs. When I'm just coding away, it's so much easier to just enter the params in the ctor. And it seems like it would be more difficult to know what's required to set on the config class - without the same ctor business ... edit Though I suppose the chaining does make this more swallowable... new IW(new IWConfig(Analyzer).set().set().set()) isn't really so bad ... I wouldn't worry about what's required - Directory will be left out, MFL is useless and a pain anyway, so what's left is Analyzer. I can put Analyzer on IWC's ctor, but I personally think we can default to a simple one (such as Whitespace), encouraging people to set their own. I find it very annoying today when I want to test something about IW and I need to pass all these things to IW ... The way I see it, those who want to rely on Lucene's latest and greatest can just do: IndexWriter writer = new IndexWriter(dir, new IWC()); Well maybe except for the Analyzer, but I really don't think it matters that much. And like you wrote, someone can chain the setters. So win-win? If you don't care about anything, just want to open a writer, index something and that's it, you don't need to specify anything ... otherwise you just chain calls? One thing I should add to IWC so far (I hope to post a patch even today) is a Version parameter. For now it will be ignored, but as a placeholder to change settings in the future. Hmm, yeah, quite a hassle to fix all analyzers.
Actually it ought to be 0 terms wasted, with the filter @ the end – with this StopAfterNTokensFilter, it'll immediately return false w/o asking for the 10001th token. One thing I should add to IWC so far (I hope to post a patch even today) is a Version parameter. For now it will be ignored, but as a placeholder to change settings in the future. +1 There are some settings on IW that go directly to the MergePolicy used (well ... only if it's LogMergePolicy). At first I wanted to move them, together w/ MP, to IWC, but this isn't possible, because MP requires IW to be passed to its ctor, and when IWC is created, there is no IW ... So there are couple of options I can think of: - Add all to IWC, and expose on MP a final setIndexWriter, package-private, as well as an empty ctor. IW will later set MP's IW. It will also allow applications to pass their own MP after IW has been created (like they can do today). IWC will delegate all set/get to MP, like IW does today. The only downside is that the IW member of MP won't be final anymore ... - Keep the get/set on IW, like it is today. It will split the configuration of IW to two places ... I prefer to keep it all in one place. - Remove all the set/get from IW and let applications interface directly with MP. It will remove the bizarre situation where someone could pass a non LogMergePolicy to IW, however when he'll try to set anything on it from IW, it will fail ... If we remove the get/set from IW and let folks interface w/ their MP directly, we won't have this problem: - They'll need to declare the MP of type LogMP, or otherwise the API won't be there. - if they use their own MP, with special API, they'll interface w/ MP's configuration from one place ... Between the three, I like (3) the best as it will get rid of the inconsistency in IW today (expecting only LogMP, but requiring MP). But if those set/get are important to keep together with the other configuration, I prefer (1). What do you think? 
I voted for killing these delegating methods some time ago. It ended in nothing, so I vote again, #3 Yay, we'll be able to remove SolrIndexConfig and use this Ok, then I'll proceed w/ #3. So this will affect these methods in IW, right: get/setMergeFactor(...) get/setUseCompoundFile(...) get/setMaxMergeDocs(...) Ie, we'd deprecate them. Instead you'd have to interact /w the MergePolicy. IW will still default its merge policy, so with this change, instead of: writer.setMergeFactor(7); You'd have to do: ((LogMergePolicy) writer.getMergePolicy()).setMergeFactor(7)); (And take the risk of runtime cast yourself/explicitly, which IW was already doing under the hood anyway). Exactly ! Only I hope that if people will interact w/ their MP to set stuff on it, they would interact with it to query its settings. Calling IW.getMergePolicy will become redundant, or at least will be avoidable. Maybe only when you rely on the default MP, you'll perform the last code example you gave above. Actually, I've already done 98% of the change. I need to finish some stuff on IW and then I'll post the patch. On the MFL/Analyzer - I like the approach of wrapping an Analyzer with a MFLAnalyzer. The beauty is that if I don't want to limit anything, I don't need to do anything and the code won't need to keep track of how many tokens were indexed for that field ... I still think it should be done in a separate issue, as it involves introducing some code in the lower levels to wrap any Analyzer with it, for back-compat. If we do it, let's do it before 3.1 is out, so we can remove it from IWC right away w/o deprecating anything ... I'll open an issue for it. Patch includes: - IndexWriterConfig + Test - Changes to IndexWriter to use IWC - Changes to DocumentsWriter - removed IndexingChain from its ctor and always use default. IndexingChain is package-private and so was the ctor on IndexWriter which allowed to set it. If that's important (e.g. 
used by Solr maybe?), then I can add a package-private setting to IndexWriterConfig which will allow setting this. But since I haven't found any calls in Lucene (tests nor java), and no implementations of this class besides the default that is in DW, I thought this could be removed. I would like to have this patch reviewed, before I go about changing all the code to use the new IWC (tests + java). If we agree on the details and how it looks and is used, I'll do it - should be straightforward. Hmm... I think we should still allow package-private specification of the indexing chain? Ok, I'll add it back. For IWC.setAnalyzer(null) I preferred not to (it's not just setAnalyzer, but also MS, Similarity and IndexDeletionPolicy). I specifically documented on each what happens if one passes null. I think it's better service - you're not expected to pass null, and so if you do, instead of throwing an exception, we revert to default. I thought at some point to add a restoreDefaults() method, but then realized that doing new IWC() is not that expensive ... I think - if someone uses IWC but receives any of those settings from the outside, instead of asking him to always check for null, we tell him "pass null, we revert to default" ... In IWC we call it "scheduler" Woops, fixed that! Why can't MergePolicy also live in IWC? I've commented on it in this issue - MP requires an IW instance to be passed to its ctor (see my comment from 04/Mar/10 09:28 PM). IW.messageState will have a space before each of its entries IWC.toString() includes '\n' between settings. I thought it'd be more readable that way because otherwise it'd be a long line.
The output for me looks like this:

IW 0 [main]: dir=org.apache.lucene.store.RAMDirectory@2ca22ca2 mergePolicy=org.apache.lucene.index.LogByteSizeMergePolicy@6ca06ca index= version=3.1-dev config=matchVersion=LUCENE_31
analyzer=org.apache.lucene.analysis.WhitespaceAnalyzer
delPolicy=org.apache.lucene.index.KeepOnlyLastCommitDeletionPolicy
commit=null
openMode=CREATE_OR_APPEND
maxFieldLength=2147483647
similarity=org.apache.lucene.search.DefaultSimilarity
termIndexInterval=128
mergeScheduler=org.apache.lucene.index.ConcurrentMergeScheduler
default WRITE_LOCK_TIMEOUT=1000
writeLockTimeout=1000
maxBufferedDeleteTerms=-1
ramBufferSizeMB=16.0
maxBufferedDocs=-1

Perhaps I'll print "config=\n" so that all config parameters start on a new line, and not all but the first. Is that acceptable, or do you still prefer all to be on one line, space separated? Need small jdoc for the new IW ctor I'll add that. That one slipped. Should we disallow all setters on IWC after it's been consumed by an IW? I've thought about all the options that you raise, and decided to keep the situation as-is, to let others also comment on that. So I'm glad you commented. - I've documented on IW.getConfig() that setting anything on the returned object has no effect on the IW instance, and if one needs to do it, one should re-instantiate IW. - I also thought to turn off setters (setting IWC to read-only by IW), but then someone won't be able to reuse an IWC - Removing the fields from IW is not good, because then someone could really call getConfig().setMaxBufferedDocs and that will affect IW, unlike what the comment says. If we want to really keep everything in IWC, we need to clone on ctor and getConfig() time. Seems a waste to me. I think I'll just clone the incoming IWC on IW and leave the rest as-is? Patch with applied comments. I'll start moving the code to use the new ctor, class etc. OK let's keep the semantics of "if you pass null that means restore to default".
Should we rename UNLIMITED_FIELD_LENGTH -> DEFAULT_MAX_FIELD_LENGTH? (Though w/ the new issue this should be going away from IWC anyway). IWC.toString() includes '\n' between settings Ahh, I misread it (I thought IW was doing \n followed by single space). I agree multiple lines is much better. How about making IW also do multiple lines for its non-IWC settings that are messaged? And then removing the config= from the output? Ie all I see is a bunch of settings. I don't think [in this debug printout] that we need to message which setting was on IW and which was from the IWC. I've commented on it in this issue - MP requires an IW instance to be passed to its ctor.(see my comment from 04/Mar/10 09:28 PM). Ahh, sorry, right. Annoying we moved IW as required into MP Hrmph. I think I'll just clone the incoming IWC on IW and leave the rest as-is? OK, I think this is fine, for starters. The API semantics are clearly defined, so, if at a later date we harden the enforcement (say throw IllegalStateEx if you try to change anything after IWC has been consumed by IW, that's fair game). Maybe clearly state in the jdocs that presently no settings are version dependent, so it's just a placeholder? (Basically, elevate the code comment that already says this, into the jdocs). Whenever a class in Lucene takes a Version, we should strongly document how Version alters settings, even if it's a no-op placeholder. (And I agree we should have it here as placeholder). Patch looks good Shai, thanks!. Annoying we moved IW as required into MP I think I'm to blame for this . . While I'm converting the tests to use the new IWC, and getting rid of deprecated methods, I think it'll be useful to add a MergePolicyConfig. But since most of the setters used today assume a LogMergePolicy, and since it's also unrelated to IW, I prefer to leave it for a separate issue. Doing that, w/ the methods chaining approach, will simplify usage even more. 
Patch with all tests and code converted to not use the deprecated API, and moved to use IWC. All except CreateIndexTask in benchmark, which looks a bit more complicated to change; I left it out for now. Another thing - I deprecated *QueryParserWrapperTest and did not convert them to use the new IWC API, as those tests will be removed when *QueryParserWrapper is removed. All tests pass.

Patch looks good – thanks Shai! I'll commit in a day or two. Phew!

Thanks Shai... Robert raised a concern on java-dev: maybe we should not have a default analyzer...? Ie force the user to make an explicit choice. Here's my elaboration a bit: I like the concept of good, versioned defaults, but I think this is just a decision point that a user should have to make, like it is now. A practical concern that I have: Whitespace really sucks; we will get lots of questions on java-user because "lucene doesn't search upper/lowercase by default". This is because beginner users will only do what work is necessary for their application to compile.

I disagree, but obviously I'm on the minority side. It is clearly documented what the default Analyzer is, and that you should change it if you want more meaningful analysis. When I created IWC I really wanted to simplify how IW is created. If we force IWC to accept both a Version AND an Analyzer, instantiating IW will look like this: new IndexWriter(dir, new IndexWriterConfig(matchVersion, analyzer)); We don't accomplish anything with it - we took away the MFL argument and replaced it w/ IWC ... Remember - those that used to set all kinds of parameters, using the other ctors, anyway care about how their IW is instantiated. The others that used IW(dir, analyzer, MFL) don't care about all other attributes. MFL is just an annoyance, so we removed it. I just don't feel that a default Analyzer, which is Whitespace, is bad. It's easy to understand what your analysis looks like, and since it's well documented, nobody can say "hey, why didn't you warn me". IW provides defaults for a whole bunch of other settings, so why is Analyzer different? If we say Analyzer is mandatory, what will stop us tomorrow from saying IndexDeletionPolicy is mandatory? And then we'll get into a whole bunch of ctors, only now on IWC? If we're documenting things clearly, and IWC documents clearly all its defaults, I see no reason to require an Analyzer to be specified up front. At least to me, that would make this entire change useless. When I create my IW for serious indexing, I take care of all its settings. Otherwise I just instantiate it to check something completely unrelated to its defaults. If I test those, I define them (otherwise I cannot test them).

If we say Analyzer is mandatory, what will stop us tomorrow from saying IndexDeletionPolicy is mandatory?

Nothing. But I think Analyzer should be mandatory and that IndexDeletionPolicy should not be mandatory, looking at them case by case.

This doesn't help much, Mark. Question - does SOLR require everyone to specify an Analyzer, or does it come w/ a default one? I'll revert the commit while we discuss...

IW provides defaults for a whole bunch of other settings, so why is Analyzer different?

Just my opinion, but what makes it different is that breaking text into tokens and indexing them is what search is all about. Deletion is an optional feature, and if someone got rid of it altogether, I wouldn't even comment on the issue; I could care less.

Question - does SOLR require everyone to specify an Analyzer, or does it come w/ a default one?

Hmm... SOLR doesn't really use Lucene analyzers. It comes with a default Schema.xml that defines FieldTypes. Then field names can be assigned to FieldTypes. So technically speaking, no, Solr does not - but because most people build off the example, you could say that it does have defaults for example FieldTypes and defaults of what field names map to those. But it also only accepts certain example fields with the example Schema - you really have to go in and customize it to your needs - it's set up to basically show off what options are available and work with some demo stuff. Solr comes with almost no defaults in a way - but it does ship with an example setup that is meant to show you how to set things up, and what is available. You could consider those defaults since most will build off it. Example of a Solr analyzer declaration:

<!-- A general unstemmed text field - good if one does not know the language of the field -->
<fieldType name="textgen" ...>
  <analyzer>
    <filter class="solr. ..."/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>

what makes it different is that breaking text into tokens and indexing them is what search is all about.

I agree, and Whitespace is easy to explain and understand. Who said that Standard or Simple is better, for example (I know you don't say it)? I just don't understand why we should force everyone to select an Analyzer even when they don't want to, or care. Remember that all the people out there today already set an Analyzer, so when they port to the new API they'll continue setting it (unless they didn't care but were forced to do it). New users ... well, I hope they read jdocs. It's good that SOLR doesn't force people to specify the Analyzer ... smart decision, for play/example purposes. Mike - regarding getting rid of Analyzer on IW. Today it's very convenient that I can specify my default Analyzer for all ANALYZED fields, for all documents. I don't understand how a Field can introduce an Analyzer - do I now set an Analyzer on every field I add to a Document? Wouldn't that be extremely inconvenient? If you want to get rid of Analyzer on IW I'm fine.
We can throw away the addDocument(Doc) method and always force a user to pass an Analyzer. Much better than requiring him to pass it on every field he adds to a document ... I'm assuming you would set an Analyzer for the document - and then you could override per field - or something along those lines. I opened LUCENE-2309 for decoupling IW from analysis. IW would only need to know about attr source. If we did LUCENE-2309 then IWC wouldn't need an Analyzer, default or otherwise I'm assuming you would set an Analyzer for the document - and then you could override per field - or something along those lines. Isn't that like calling IW.addDocument(Doc, Analyzer) where some of the fields have pre-defined TokenStream? What's the difference then? Why change the current API (besides getting rid of the addDoc(Doc) method)? Isn't that like calling IW.addDocument(Doc, Analyzer) where some of the fields have pre-defined TokenStream? Well, where all of the fields have pre-defined TokenStream, or, can produce it on demand with no other args (ie that field instance knows its analyzer). What's the difference then? Why change the current API (besides getting rid of the addDoc(Doc) method)? Well it'd simplify your work here, for one But it'd also put distance between IW and analysis, which I think is useful for others to use Lucene with their own "means" of producing tokens. Tying IW to less concrete impls also increases our independence which makes cross-version compatibility easier, over time. The more independent the various parts of Lucene are the more we can factor things out... Well it'd simplify your work here, for one I'm afraid that's too late no? I've already completed that work ... it's that late realization that forces me to re-do it again ... I don't mind if IW does not know about Analyzers, I'm perfectly fine w/ it (and even like it). 
Suggestion: since re-doing all that work, meaning adding Whitespace to the instantiation in all the places that really don't care, is very expensive and time consuming on my part, if LUCENE-2309 is resolved before 3.1, can we keep IWC as-is, and as part of 2309 remove setAnalyzer from IWC? It will keep the ctor's signature as it is now, and we won't need to change all of these classes again in 2309. Only those that explicitly call setAnalyzer. Is that acceptable? Patch w/ IWC requiring an Analyzer in its ctor. All tests pass Thanks Shai! I'll look through it... Patch looks good Shai – that was fast adding back in the analyzers A few things: - Should we also move MergedSegmentWarmer to IWC? - This adds more Version.LUCENE_CURRENTs in contrib... but I don't see any way around that (and it's good to add the placeholder to IWC). - There were some tests that previously passed null as analyzers, that you changed to WhitespaceAnalyzer, which seems OK. - I see you cleaned some stuff up in the process (fixed "extends TestCase" -> extends "LuceneTestCase", use TEST_VERSION_CURRENT consistently instead using class member, added Version to ctor for analyzers that now require them, noting that test is deprecated since it only tests deprecated class, fixed places where tests passed null as analyzer) – good! - There are a number of tests where the analyzers were changed, eg Simple -> Whitespace or Standard -> Whitespace – I attached check.py (run it w/ 1 arg which is path to the patch file). Note that not all things it finds are actually changed (ie it has some false pos's, but I think not too many). Tests still pass... but it makes me a bit nervous. Shouldn't we just keep the same analyzer? Should we also move MergedSegmentWarmer to IWC? I guess it could ... wonder how I missed it ... This adds more Version.LUCENE_CURRENTs in contrib... I added it on purpose, so we know to convert these places when we remove LUCENE_CURRENT. We can also do this gradually in LUCENE-2305. 
There were some tests that previously passed null as analyzers

Those were exactly the cases why I wanted to move Analyzer out of the mandatory list. Whatever you pass will work for those tests, because they obviously don't rely on analysis ... otherwise they'd have been hit by an NPE. Hmm ... I wonder if I can still pass 'null' for them. I used the eclipse search/replace tool to add the Analyzer to IWC calls. The null argument disappeared completely because they used the default, so the search/replace tool replaced that w/ Whitespace. I prefer to keep it that way, to avoid re-changing them again.

There are a number of tests where the analyzers were changed ...

I thought that that's ok, because those tests don't rely on the output of analysis, or otherwise they should have tested/asserted it. Since they don't, it shouldn't matter to them. But I'll revert that change. So two things I need to do:
- revert the changes to those tests
- add the warmer to IWC

Patch updated w/:
- Move setMergedSegmentWarmer to IWC.
- Revert the changes reported by check.py

Note, check.py still alerts on some changes, though I don't see any relevant change in the patch file. Should I ignore them?

Thanks Shai, I'll look...

Note, check.py still alerts on some changes, though I don't see any relevant change in the patch file. Should I ignore them?

Yes, if they are indeed false positives...

Note, check.py still alerts on some changes, though I don't see any relevant change in the patch file. Should I ignore them?

Hmm, some of these (at least TestAtomicUpdate was changed from Simple -> Whitespace) were in fact real changes.... I'll fix & post a new patch. Attached new patch, just fixing a couple tests where the analyzer had changed. I think it's ready to commit (take 2)! I'll wait a day or two...

Thanks Mike. I ran the tool once and fixed everything it complained about. Then the 2nd time it found some more (probably some I missed in the 1st pass), only a few more this time. So I fixed them as well. But I didn't run it a 3rd time ...
I can't wait for this to be in ... an exhausting issue . I can't wait for this to be in ... an exhausting issue Shai, thanks for taking the time to redo this massive patch. I'm sorry again I dropped the ball and didn't notice till the commit, forcing you to redo a lot of work. +1 Take 2! Thanks Shai. +1 for the IndexWriterConfig with chaining method calls We had a discussion about this a while ago on the mailinglist:
https://issues.apache.org/jira/browse/LUCENE-2294
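The setter-chaining style being +1'd above — each setter returning `this` so settings can be strung together — can be sketched in plain Java. The `Config` class below is a toy stand-in, not the real Lucene `IndexWriterConfig` API; the setting names and defaults are only illustrative (though -1 and 16.0 mirror the defaults shown in the debug output earlier in this thread):

```java
// Minimal sketch of the method-chaining configuration pattern
// discussed above. Not the actual IndexWriterConfig API.
public class ChainDemo {
    static class Config {
        private int maxBufferedDocs = -1;       // -1 meaning "disabled", as in the thread
        private double ramBufferSizeMB = 16.0;  // default, as in the thread

        // Each setter returns 'this' so calls can be chained.
        Config setMaxBufferedDocs(int n) { this.maxBufferedDocs = n; return this; }
        Config setRamBufferSizeMB(double mb) { this.ramBufferSizeMB = mb; return this; }

        int getMaxBufferedDocs() { return maxBufferedDocs; }
        double getRamBufferSizeMB() { return ramBufferSizeMB; }
    }

    public static void main(String[] args) {
        // All settings configured in a single chained expression.
        Config config = new Config()
                .setMaxBufferedDocs(1000)
                .setRamBufferSizeMB(32.0);
        System.out.println(config.getMaxBufferedDocs());
        System.out.println(config.getRamBufferSizeMB());
    }
}
```

Since each setter hands back the same object, a caller can also keep the reference and "pass null to revert to default" semantics can be layered on top of individual setters without extra constructors.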
WxSmith tutorial: Working with multiple resources
Revision as of 16:09, 24 August 2009

Working with multiple resources

Hello. In this tutorial I'll show you how to create an application with more than one window. I'll also show how to invoke one window from another. At the end we will learn how to change the main window in project settings. So let's start.

Creating an empty project

As usual we will start with an empty project. You can find information on how to do this in the first tutorial. Remember to create a frame-based application and select wxSmith as the main RAD designer. Also, let's name the project "Tutorial 4". If the empty application compiles and runs fine, we can advance to the next step.

Adding a few dialogs

New resources can be added from the wxSmith menu. You can find the following options there:
- Add wxPanel - this will add a new panel resource
- Add wxDialog - this adds a new dialog window
- Add wxFrame - this adds a new frame window.

Ok, let's add a new wxDialog. If you choose this option it will show the following window: Here we can configure the following parameters:
- Class name - the name of the class which will be generated for the resource. It will also be the name of the resource
- Header file - name of the header file with the class declaration
- Source file - name of the source file with the class definition
- Xrc file - if you check this option you can enter the name of an XRC file which will contain the XRC structure (XRC files will be covered in another tutorial)
- Add XRC file to autoload list - this option is available only with an XRC file and notifies wxSmith that it should automatically load the XRC file when the application starts.

Now let's change the name of the resource to something better like "FirstDialog". Note that if you change the class name, the names of the files are also updated (they are updated as long as you don't change them manually). After clicking OK you will be prompted for the list of build targets into which the new resource will be added. Select both debug and release and we can continue. The new dialog is automatically opened in the editor. Let's add some simple content: Now let's add a wxFrame resource (FirstFrame) and add some content into it:

Using a dialog window in modal mode

Dialogs can be shown in so-called modal mode. This means that when such a dialog is shown, all other windows in the application are frozen until the dialog finishes its job. Such dialogs are useful when the user should quickly provide some information - for example, the window used to configure a new resource was modal. Now let's try to invoke our dialog. Generally such a dialog will pop up as a reaction to a user action like choosing a menu option or clicking on a button. We will use the button since this approach is really easy. Ok, let's switch into the main frame - the one that was opened right after creating the new project - and add a sizer and a button: Now we have to create an action for the button-click event. The easiest way is to double-click the button in the editor. wxSmith will automatically add a new event handler function:

void Tutorial_4Frame::OnButton1Click(wxCommandEvent& event)
{
}

If you can't find this code, scroll down to the end of the cpp file. Now let's invoke our dialog:

void Tutorial_4Frame::OnButton1Click(wxCommandEvent& event)
{
    FirstDialog dialog(this);
    dialog.ShowModal();
}

In the first added line we created the dialog object, noting that this window (the main frame) is the parent of the dialog. In the second line we show the dialog. Note that the call to ShowModal() blocks until the dialog is closed. There's one more thing we need to add before our application compiles. Jump to the beginning of the file and add the following code next to the other includes:

#include "FirstDialog.h"

Note that you shouldn't add it inside code which looks like this:

//(*InternalHeaders(Tutorial_4Frame)
#include <wx/intl.h>
#include <wx/string.h>
//*)

This is a block of code that is automatically generated by wxSmith. Every block starts with a //(*BlockName comment and ends with //*). You may find other blocks in both header and source files. If you change their content, all changes will be lost the next time you change something in the editor. To sum things up, after clicking the button on the main window, the dialog will pop up:

Using dialog and frame windows in non-modal mode

Another mode used to show windows is modal-less mode. In this case the new window does not block other windows in the application, and two (or more) windows can cooperate in one application. Before we add new code into our application there's one more thing you should know. Each window may exist only as long as its object (an instance of the resource's C++ class) exists. So we can not use the same approach as in the case of the modal dialog. In modal mode the object was created as a local variable of the event handler. We could do this only because the ShowModal method blocked as long as the dialog was shown. Now we will have to create objects using the new operator, because the objects must exist after leaving the handler function and also because windows not using modal mode will delete such objects automatically when the window is closed. To allow creating the FirstFrame class we should add #include "FirstFrame.h" into the list of includes in the main frame, just like before. Let's add two more buttons:
- Add new dialog
- Add new frame

We can also rename the first button to show what it does: Now let's add a handler to the Add new dialog button:

void Tutorial_4Frame::OnButton2Click(wxCommandEvent& event)
{
    FirstDialog* dlg = new FirstDialog(this);
    dlg->Show();
}

Analogically, we can write the code for FirstFrame:

void Tutorial_4Frame::OnButton3Click(wxCommandEvent& event)
{
    FirstFrame* frm = new FirstFrame(this);
    frm->Show();
}

Now each time we click the Add new dialog or Add new frame button, a new window shows up. This is the end of this tutorial. I hope that you learned some new useful things. In the next tutorial I'll show how to use the last type of resource, wxPanel, inside other resources. See you.
/* Coherent tty locking support. This file was contributed by Bob Hemedinger <bob@dalek.mwc.com> of Mark Williams Corporation and lightly edited by Ian Lance Taylor. */ /* The bottom part of this file is lock.c. * This is a hacked lock.c. A full lock.c can be found in the libmisc sources * under /usr/src/misc.tar.Z. * * These are for checking for the existence of locks: * lockexist(resource) * lockttyexist(ttyname) */ #include "uucp.h" #if HAVE_COHERENT_LOCKFILES /* cohtty.c: Given a serial device name, read /etc/ttys and determine if * the device is already enabled. If it is, disable the * device and return a string so that it can be re-enabled * at the completion of the uucico session as part of the * function that resets the serial device before uucico * terminates. * */ #include "uudefs.h" #include "sysdep.h" #include <ctype.h> #include <access.h> /* fscoherent_disable_tty() is a COHERENT specific function. It takes the name * of a serial device and then scans /etc/ttys for a match. If it finds one, * it checks the first field of the entry. If it is a '1', then it will disable * the port and set a flag. The flag will be checked later when uucico wants to * reset the serial device to see if the device needs to be re-enabled. */ /* May 10, 1993: This function will always return true for the following * reasons: * 1) lock files have already been dealt with * 2) if someone else already has the port open, uucico should fail anyways * 3) Coherent's disable command return can return '0' or '1', but will * succeed in any event. * 4) It doesn't matter if there is a ttys entry for the port in question. * /etc/ttys generally only lists devices that MAY be enabled for logins. * If a device will never be used for logins, then there may not be a * ttys entry, in which case, disable won't be called anyways. 
* ---bob@mwc.com */ boolean fscoherent_disable_tty (zdevice, pzenable) const char *zdevice; char **pzenable; { struct ttyentry{ /* this is an /etc/ttys entry */ char enable_disable[1]; char remote_local[1]; char baud_rate[1]; char tty_device[16]; }; struct ttyentry sought_tty; int x,y,z; /* dummy */ FILE * infp; /* this will point to /etc/ttys */ char disable_command[66]; /* this will be the disable command * passed to the system. */ char enable_device[16]; /* this will hold our device name * to enable. */ *pzenable = NULL; strcpy(enable_device,""); /* initialize our strings */ strcpy(sought_tty.tty_device,""); if( (infp = fopen("/etc/ttys","r")) == NULL){ ulog(LOG_ERROR,"Error: check_disable_tty: failed to open /etc/ttys\n"); return FALSE; } while (NULL !=(fgets(&sought_tty, sizeof (sought_tty), infp ))){ sought_tty.tty_device[strlen(sought_tty.tty_device) -1] = '\0'; strcpy(enable_device,sought_tty.tty_device); /* we must strip away the suffix to the com port name or * we will never find a match. For example, if we are passed * /dev/com4l to call out with and the port is already enabled, * 9/10 the port enabled will be com4r. After we strip away the * suffix of the port found in /etc/ttys, then we can test * if the base port name appears in the device name string * passed to us. 
*/ for(z = strlen(sought_tty.tty_device) ; z > 0 ; z--){ if(isdigit(sought_tty.tty_device[z])){ break; } } y = strlen(sought_tty.tty_device); for(x = z+1 ; x <= y; x++){ sought_tty.tty_device[x] = '\0'; } /* ulog(LOG_NORMAL,"found device {%s}\n",sought_tty.tty_device); */ if(strstr(zdevice, sought_tty.tty_device)){ if(sought_tty.enable_disable[0] == '1'){ ulog(LOG_NORMAL, "coh_tty: Disabling device %s {%s}\n", zdevice, sought_tty.tty_device); sprintf(disable_command, "/etc/disable %s",enable_device); { pid_t ipid; const char *azargs[3]; int aidescs[3]; azargs[0] = "/etc/disable"; azargs[1] = enable_device; azargs[2] = NULL; aidescs[0] = SPAWN_NULL; aidescs[1] = SPAWN_NULL; aidescs[2] = SPAWN_NULL; ipid = ixsspawn (azargs, aidescs, TRUE, FALSE, (const char *) NULL, TRUE, TRUE, (const char *) NULL, (const char *) NULL, (const char *) NULL); if (ipid < 0) x = 1; else x = ixswait ((unsigned long) ipid, (const char *) NULL); } *pzenable = zbufalc (sizeof "/dev/" + strlen (enable_device)); sprintf(*pzenable,"/dev/%s", enable_device); /* ulog(LOG_NORMAL,"Enable string is {%s}",*pzenable); */ return TRUE; }else{ /* device not enabled */ return TRUE; } } } return TRUE; /* no ttys entry found */ } /* The following is COHERENT 4.0 specific. It is used to test for any * existing lockfiles on a port which would have been created by init * when a user logs into a port. */ #define LOCKSIG 9 /* Significant Chars of Lockable Resources. */ #define LOKFLEN 64 /* Max Length of UUCP Lock File Name. */ #define LOCKPRE "LCK.." #define PIDLEN 6 /* Maximum length of string representing a pid. */ #ifndef LOCKDIR #define LOCKDIR SPOOLDIR #endif /* There is a special version of DEVMASK for the PE multiport driver * because of the peculiar way it uses the minor device number. For * all other drivers, the lower 5 bits describe the physical port-- * the upper 3 bits give attributes for the port. */ #define PE_DRIVER 21 /* Major device number for the PE driver. 
*/ #define PE_DEVMASK 0x3f /* PE driver minor device mask. */ #define DEVMASK 0x1f /* Minor device mask. */ /* * Generates a resource name for locking, based on the major number * and the lower 4 bits of the minor number of the tty device. * * Builds the name in buff as two "." separated decimal numbers. * Returns NULL on failure, buff on success. */ static char * gen_res_name(path, buff) char *path; char *buff; { struct stat sbuf; int status; if (0 != (status = stat(path, &sbuf))) { /* Can't stat the file. */ return (NULL); } if (PE_DRIVER == major(sbuf.st_rdev)) { sprintf(buff, "%d.%d", major(sbuf.st_rdev), PE_DEVMASK & minor(sbuf.st_rdev)); } else { sprintf(buff, "%d.%d", major(sbuf.st_rdev), DEVMASK & minor(sbuf.st_rdev)); } return(buff); } /* gen_res_name */ /* * lockexist(resource) char *resource; * * Test for existance of a lock on the given resource. * * Returns: (1) Resource is locked. * (0) Resource is not locked. */ static boolean lockexist(resource) const char *resource; { char lockfn[LOKFLEN]; if ( resource == NULL ) return(0); sprintf(lockfn, "%s/%s%.*s", LOCKDIR, LOCKPRE, LOCKSIG, resource); return (!access(lockfn, AEXISTS)); } /* lockexist() */ /* * lockttyexist(ttyname) char *ttyname; * * Test for existance of a lock on the given tty. * * Returns: (1) Resource is locked. * (0) Resource is not locked. */ boolean lockttyexist(ttyn) const char *ttyn; { char resource[LOKFLEN]; char filename[LOKFLEN]; sprintf(filename, "/dev/%s", ttyn); if (NULL == gen_res_name(filename, resource)){ return(0); /* Non-existent tty can not be locked :-) */ } return(lockexist(resource)); } /* lockttyexist() */ #endif /* HAVE_COHERENT_LOCKFILES */
Setup You must have a C# development environment, such as Visual Studio or Xamarin Studio. Add to your project the LBXamarinSDK.cs file generated by the lb-xm command. You must add the following NuGet packages to your project: You now have the Xamarin SDK customized for your LoopBack server available in your project. Overview The SDK has three entities: Models - Local instances of the LoopBack models with the same name as the Loopback Models; for example, “Car.” They are in the LBXamarinSDKnamespace. Repositories - Classes unique to each type of model. They contain the methods that exist on the server: create, retrieve, update, and delete methods and the custom methods that were added to the Loopback models. Client code uses Repositories to perform server calls. They have the plural name of the Loopback model; for example, “Cars.” They are in the LBXamarinSDK.LBReponamespace. Gateway - A class containing the code that enables the repositories to talk to the server. It is in the LBXamarinSDKnamespace. Models Models in C# have the same name as the Loopback models. Creating a local instance The model is a C# class. Use it and create it as you would any other class. Car myCar = new Car() { wheels = 5, drivers = 3, name = "blarg" } Important: Again, this creation is only local. To perform server calls use the repositories. Repositories Repositories have the plural name of the Loopback models. They contain the methods that exist on the Loopback server. For example, if Car is the name of the model, then Cars is the name of the repository: // Creation of a local instance Car myCar = new Car(); myCar.accidents = 0; // Repository method. Persisting local instance to Loopback server Cars.Create(myCar); The following sections introduce the methods in the repositories. Create, retrieve, update, and delete methods The repositories have the same basic model create, read, update, and delete methods for all models, as is in the Loopback server REST API. 
Important: The method createChangeStream() is still unsupported. Count Task<int> Count(string whereFilter = "") Counts instances of the model that match the specified where filter. Parameter: - Optional filter, counting all instances that this filter passes. Returns the number of matching instances. For example: int numOfCustomers = await Customers.Count(); int numOfSmartCustomers = await Customers.Count("{\"where\":{\"and\": [{\"pizzas\":\"500\"}, {\"cats\":\"500\"}]}}"); Create Task<T> Create(T _theModel_) Creates a new instance of the model and persist it into the data source. T is a generic template (the same for all loopback models in the SDK). Parameter: _theModel_: The model to create. Returns the created model. For example: c = await Customers.Create(c); DeleteById Task DeleteById(String Id) Deletes a model instance by ID from the data source. Parameter: Id- String ID of the model to delete. For example: await Customers.DeleteById("1255"); Exists Task<bool> Exists(string _Id_) Checks whether a model instance exists in the data source. Parameter: _Id_- the String ID of the model of interest. Returns a Boolean value. For example: if(await Customers.Exists("441")) { ... } Find Task<IList<T>> Find(string filter = "") Finds all instances of the model matched by filter from the data source. A filter is applicable here. Parameter: - an optional filter to apply on the search. Returns a list of all found instances. For example: IList<Customer> l1 = await Customers.Find(); IList<Customer> l2 = await Customers.Find("{\"where\":{\"and\": [{\"pizzas\":\"500\"}, {\"cats\":\"500\"}]}}"); FindById Task<T> FindById(String _Id_) Finds a model instance by ID from the data source. Parameter: _Id_- String ID of the model to find. Returns the found model. For example: Customer c = await Customers.FindById("19"); FindOne Task<T> FindOne(string filter = "") Finds first instance of the model matched by filter from the data source. 
Parameter: - Optional filter to apply on the search. Returns the found model For example: Customer c = await Customers.FindOne(); Customer c = await Customers.FindOne("{\"where\":{\"and\": [{\"pizzas\":\"500\"}, {\"cats\":\"500\"}]}}"); UpdateAll Task UpdateAll(T updateModel, string _whereFilter_) Updates instances of the model matched by where from the data source. Parameters: _updateModel_- Update pattern to apply to all models that pass the where filter _whereFilter_- Where filter. For example: Customer updatePattern = new Customer() {cars = 99}; await UpdateAll(updatePattern, "{\"where\":{\"and\": [{\"pizzas\":\"500\"}, {\"cats\":\"500\"}]}}"); this updates all customers who have 500 cats and 500 pizzas to have 99 cars. UpdateById Task<T> UpdateById(String _Id_, T _update_) Updates attributes of a model instance and persists it to the data source. Parameters: _Id_- String ID of the model to update _update_- the update to apply. Returns the updated model. For example: Customer c = new Customer() { hp = 100 }; h = await UpdateById("11", c); There are more methods created automatically if relations between models are added. Their definition and description are in the generated code. Upsert Task<T> Upsert(T _theModel_) Updates an existing model instance or insert a new one into the data source. Parameter: _theModel_, the model to update/insert. Returns the updated/inserted model. For example: c = await Customers.Upsert(c); Relation Methods The SDK will create C# code for methods that are created for models connected by relations, just like in the REST API. Example Suppose we have two Loopback models: Goal and TodoTask, and the relation between them is that TodoTask belongs to Goal. Then the SDK will create, among others, the method to fetch the goal of a given todo task. You can see the generated code in the TodoTasks repository: /* * Fetches belongsTo relation goal */ public static async Task<Goal> getGoal(string id, [optional parameters]) { ... 
}

A usage example is:

var TodoTaskID = "10034";
Goal relatedGoal = await TodoTasks.getGoal(TodoTaskID);

Custom remote methods

The LoopBack Xamarin SDK supports custom remote methods and will create C# code for them. These methods exist in the repository classes.

Example

Suppose there is a LoopBack model called Logic, and there is a custom method in the LoopBack application defined in Logic.js as follows:

module.exports = function(Logic) {
  Logic.determineMeaning = function(str, cb) {
    console.log(str);
    cb(null, 42);
  };
  Logic.remoteMethod('determineMeaning', {
    accepts: { arg: 'str', type: 'string', http: { source: 'body' }, required: true },
    returns: { arg: 'res', type: 'number', root: true },
    http: { path: '/determineMeaning', verb: 'post' },
    description: 'This is the description of the method'
  });
};

Then, once we compile an SDK using lb-xm, a method will be created in the C# code of LBXamarinSDK.cs with the signature:

Task<double> determineMeaning(string str)

If you go into the Logic's repository code, you can see the generated code for the method:

/*
 * This is the description of the method
 */
public static async Task<double> determineMeaning(string str)
{
    ...
}

An example of using it:

double something = await Logics.determineMeaning("blarg");

Gateway

Gateway has several important methods.

isConnected

Task<bool> isConnected(int timeoutMilliseconds = 6000)

Checks whether the server is connected.
Parameter: timeoutMilliseconds - timeout for the check, in milliseconds.
Returns a Boolean value. For example:

connectedStatus = await Gateway.isConnected();

There are additional Gateway methods with descriptions in the generated code.

SetAccessToken

void SetAccessToken(AccessToken accessToken)

All server calls will be performed with the authorization given by the supplied access token.
var list1 = await Users.Find(); // error 401, not authorized
AccessToken token = await Users.login(auth); // Server call: performing login
Gateway.SetAccessToken(token);
var list2 = await Users.Find(); // ok, list is returned

SetDebugMode

void SetDebugMode(bool isDebugMode)

This method sends output (from System.Diagnostics.Debug) to the console when important SDK functionality is executed. Once debug mode is set to true, important information will be sent to the debug console. For example, if we make a server call:

Customer myCustomer = new Customer()
{
    name = "joe",
    geoLocation = new GeoPoint()
    {
        Longitude = 0,
        Latitude = 11
    }
};
myCustomer = await Customers.Create(myCustomer);

Then, when debug mode is on, calling the above code will display the following output in the console:

>> DEBUG: Performing POST request at URL: '', Json: {"geoLocation":{"lat":11.0,"lng":0.0},"name":"joe"}

SetServerBaseURL

void SetServerBaseURL(Uri baseUrl)

Sets the API URL of the LoopBack server.
Parameter: baseUrl - URL of the server API. For example:

SetServerBaseURL(new Uri(""));

SetTimeout

void SetTimeout(int timeoutMilliseconds = 6000)

Sets the timeout of server calls. If no response is received from the server within this period of time, the call will fail. For example:

Gateway.SetTimeout(2000);

Handling exceptions

Perform all access to a repository (calls to methods that communicate with the server) inside a try/catch clause, including a clause to catch a RestException. RestException has a StatusCode field that specifies the HTTP status of the error. For example:

User myUser = new User() { email = "[email protected]", password = "123" };
try
{
    myUser = await Users.Create(myUser); // user created on the server. success.
}
catch (RestException e)
{
    if (e.StatusCode == 401)
    {
        // not authorized to create the user
    }
    else if (e.StatusCode == 422)
    {
        // unprocessable entity. Perhaps the email already exists, perhaps no password
        // is specified inside the model myUser, etc.
    }
}
http://loopback.io/doc/en/lb3/Xamarin-client-API.html
Read more about this book (For more resources on this subject, see here.)

Introduction

WF4 is one part of .NET Framework 4.0, which means a WF4 workflow can be hosted and run in any type of application running on the .NET Framework. We can host a workflow as a WCF service. We can also invoke a workflow service from a workflow, or host a workflow in an ASP.NET application and handle all the business logic behind the page. When we design workflow applications, let workflow be workflow: don't couple the workflow with other logic. For example, in this chapter, hosting a workflow in ASP.NET is for concept demonstration only, not a best practice. In the real world, most of the time a workflow should be implemented as a workflow service hosted in IIS7 or AppFabric. AppFabric is an IIS7 extension that includes many tools to help us host a workflow service. AppFabric is to a workflow service what IIS7 is to an ASP.NET website. However, we can run a workflow service in IIS7 without AppFabric installed. Although AppFabric is powerful, we need to spend some time learning it. For more information about AppFabric, you can check this link:

Hosting a workflow service in IIS7

The process of sending an e-mail can consume some time, maybe a few seconds or even minutes. It would be a waste of time and resources for our applications to stop and wait for an e-mail sending action to complete. Because sending e-mail is time-consuming, a better design is to strip this feature out as an independent WCF workflow service and host that service in IIS7.

Getting ready

We need the SendEmailActivity activity to send an e-mail.

How to do it...

1. Create a WCF workflow service application: Create a WCF workflow service application and name it HostingWorkflowServiceInIIS7.

2. Add SendEmailActivity to the toolbox: In the Toolbox tab, right-click and select Choose Items.
In the opening dialog, click Browse and navigate to ActivityLibrary.dll. Click OK. We will find SendEmailActivity in the toolbox.

3. Create a SendEmail workflow service:

- Delete Service1.xamlx, which is created by default, and add a new WCF workflow service to the project. Name it SendEmailService.xamlx. Drag a TransactedReceiveScope activity to the design panel, click the Variables button, and create a variable named emailMessage.
- Drag a Receive activity to the Request box of TransactedReceiveScope. Set the OperationName to SendEmail. Click the Content Definition link to create a parameter as shown here.
- Assign ISendEmailService to the ServiceContractName property. Check the CanCreateInstance property.
- Next, drag SendEmailActivity to the body of TransactedReceiveScope.
- Assign the following properties to SendEmailActivity. The final workflow will look as shown in the following screenshot.

4. Create a website in IIS7 for this WF service: In the IIS7 Manager Console, create a website and assign the website's physical path to the project folder of HostingWorkflowServiceInIIS7. Assign it a new port number. By default, an ASP.NET application will run under the built-in network service account (or ApplicationPoolIdentity in IIS 7.5). This account has the most limited permissions. For testing, we can shift the application pool's identity to an administrator account.

5. Use WCFTestClient.exe to test the WCF service: Usually, we can find the WCFTestClient.exe tool in C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE. We just need to open our mail. A new mail with subject "Hello WF Service" indicates that we have created and hosted the WF service successfully.

We should be able to find the following module and handlers in IIS7. If we cannot, then we should reinstall .NET Framework 4.0 or repair it.
Here are the repair commands:

Repair command for 32-bit (.NET Framework 4 Full, silent repair):

%windir%\Microsoft.NET\Framework\v4.0.30319\SetupCache\Client\setup.exe /repair /x86 /x64 /ia64 /parameterfolder Client /q /norestart

Repair command for 64-bit (.NET Framework 4 Full, silent repair):

%windir%\Microsoft.NET\Framework64\v4.0.30319\SetupCache\Client\setup.exe /repair /x86 /x64 /ia64 /parameterfolder Client /q /norestart

How it works...

Simply put, once we have set up IIS7, we need to copy all the workflow service project files and folders to the IIS application folder, and the workflow service will just work.

There's more

We can also host a WF4 service in IIS6 once we have installed .NET Framework 4.0. However, running a WF4 service in IIS6 is not recommended.

Hosting workflow in ASP.NET

In this task, we will create an e-mail sending workflow and run it in an ASP.NET site.

Getting ready

We need an e-mail sending workflow service hosted in IIS7.

How to do it...

1. Create an ASP.NET 4 web application: Create an ASP.NET 4 web application and name it HostingWorkflowInASPNET. Because we are going to host a WF4 workflow in this website, we have to make sure it is an ASP.NET 4 website. To check the version, right-click the project name HostingWorkflowInASPNET and select Properties.

2. Author a workflow:

- Add an activity to the website and name it Workflow.xaml. Author the workflow as follows.
- Set the properties for SendEmail1.
- Set the parameters for SendEmail1.
- Set the properties of SendEmail2. The only difference as compared to SendEmail1 is the DisplayName.
- Set the parameters for SendEmail2.

3. Alter the Default.aspx page: Add a Button control to the Default.aspx page:

<%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="HostingWorkflowInASPNET._Default" %>
<asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent">
</asp:Content>
<asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent">
    <p>
        <asp:Button ID="Button1" runat="server" Text="Start a workflow" OnClick="Button1_Click" />
    </p>
</asp:Content>

Add this code to the button event handler in Default.aspx.cs:

using System;
using System.Activities;
using System.Threading;

namespace HostingWorkflowInASPNET
{
    public partial class _Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs e)
        {
        }

        protected void Button1_Click(object sender, EventArgs e)
        {
            AutoResetEvent waitHandler = new AutoResetEvent(false);
            WorkflowApplication wfApp = new WorkflowApplication(new Workflow());
            wfApp.Unloaded = (workflowApplicationEventArgs) =>
            {
                waitHandler.Set();
            };
            wfApp.Run();
            waitHandler.WaitOne();
        }
    }
}

4. Run it: Build the website and browse to the default page. Click the Start a workflow button to start a workflow. Open your e-mail client. Two mails with subject "Hello WF Service" indicate we have finished this task successfully.

How it works...

We can treat a WF4 workflow as a managed .NET object, and it can run in any .NET Framework 4.0 application. If we have experience with WF3/3.5, we may still remember that we had to schedule the workflow instance in an ASP.NET application. In WF4, a WorkflowApplication workflow instance runs in an independent .NET thread; no special workflow scheduler is needed.

There's more

As stated in the introduction of this chapter, usually we don't run a workflow instance in an ASP.NET page directly; instead, we call a WF4 service in page events. For example, in this task, we can call the WF4 service using pure .NET code by following these steps:

1. Use Svcutil.exe to generate the proxy code and configuration code.
Usually, if we have installed the .NET 4.0 Framework, we can find Svcutil.exe in C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin or C:\Program Files\Microsoft SDKs\Windows\v7.0A\Bin.

2. In a command window, navigate to the svcutil.exe folder using the following command:

cd C:\Program Files (x86)\Microsoft SDKs\Windows\v7.0A\Bin

3. Input the following command:

svcutil.exe /language:cs /out:c:\GeneratedProxy.cs /config:c:\app.config

Press the Enter key, and we will find the GeneratedProxy.cs and app.config files in C:\.

4. Add GeneratedProxy.cs to our ASP.NET web application.

5. Open the app.config file and copy the following configuration code into the web.config file of the ASP.NET web application, right below the <configuration> node:

<system.serviceModel>
  <bindings>
    <basicHttpBinding>
      <binding name="BasicHttpBinding_ISendEmailService">
        <security mode="None">
          <transport>
            <extendedProtectionPolicy policyEnforcement="Never" />
          </transport>
          <message clientCredentialType="UserName" algorithmSuite="Default" />
        </security>
      </binding>
    </basicHttpBinding>
  </bindings>
  <client>
    <endpoint address=" SendEmailService.xamlx"
              binding="basicHttpBinding"
              bindingConfiguration="BasicHttpBinding_ISendEmailService"
              contract="ISendEmailService"
              name="BasicHttpBinding_ISendEmailService" />
  </client>
</system.serviceModel>

6. Use the following code to call the workflow service:

SendEmailServiceClient sesc = new SendEmailServiceClient();
sesc.SendEmail("message");

Hosting workflow in WPF

In this task, we will create a workflow running in a WPF application.

How to do it...

1. Create a WPF project: Create a WPF project and name it HostingWorkflowInWPF.

2. Create a workflow: Add a workflow to the project named AdditionWorkflow.xaml and author a workflow like this.

3. Create a WPF window: Open the default created WPF file, MainWindow.xaml.
Alter its contents to:

<Window x:Class="HostingWorkflowInWPF.MainWindow"
        xmlns=" presentation"
        xmlns:
    <Grid Width="180" HorizontalAlignment="Left" VerticalAlignment="Top" Height="124">
        <Label Content="x:" Width="20" HorizontalAlignment="Left" Name="LabelX" Margin="0,2,0,0" VerticalAlignment="Top" />
        <TextBox Name="textBoxX" Width="80" Height="20" VerticalAlignment="Top" HorizontalAlignment="Left" Margin="52,4,0,0" />
        <Label Content="y:" HorizontalAlignment="Left" Margin="0,26,0,0" Name="labelY" Width="20" Height="30" VerticalAlignment="Top" />
        <TextBox Height="20" HorizontalAlignment="Left" Margin="52,28,0,0" Name="textBoxY" VerticalAlignment="Top" Width="80" />
        <Button Content="Adding" Height="23" HorizontalAlignment="Left" Margin="52,54,0,0" Name="buttonAdding" VerticalAlignment="Top" Width="75" Click="buttonAdding_Click" />
        <Label Content="result:" Height="28" HorizontalAlignment="Left" Margin="0,83,0,0" Name="labelResult" VerticalAlignment="Top" />
        <Label Height="28" HorizontalAlignment="Left" Margin="52,83,0,0" Name="labelResultValue" VerticalAlignment="Top" />
    </Grid>
</Window>

We can see this in the WPF window designer. Double-click the Adding button, and add code to the button event handler. The final MainWindow.xaml.cs code will be:

using System.Windows;
using System.Threading;
using System.Activities;
using System;

namespace HostingWorkflowInWPF
{
    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();
        }

        private void buttonAdding_Click(object sender, RoutedEventArgs e)
        {
            AutoResetEvent waitHandler = new AutoResetEvent(false);
            string result = "";
            AdditionWorkflow addwf = new AdditionWorkflow
            {
                x = new InArgument<Int32>(Int32.Parse(textBoxX.Text)),
                y = new InArgument<Int32>(Int32.Parse(textBoxY.Text))
            };
            WorkflowApplication wfApp = new WorkflowApplication(addwf);
            wfApp.Completed = (completedArgs) =>
            {
                // The original listing is garbled here; the workflow's output
                // argument (assumed to be named "result") is read on completion.
                result = completedArgs.Outputs["result"].ToString();
                waitHandler.Set();
            };
            wfApp.Run();
            waitHandler.WaitOne();
            labelResultValue.Content = result;
        }
    }
}

4. Run it: Set this project as the StartUp project. Press Ctrl+F5 to run the workflow without debugging. Now we shall see the following:

How it works...
This task is only for the purpose of concept demonstration. In a real application, it is not a good idea to host a workflow in a WPF application. It would be better to host the workflow in IIS and call it from the WPF application, like we did in the previous ASP.NET web application.

Hosting workflow in a Windows Form

In this task, we will create a workflow running in a Windows Forms application.

How to do it...

1. Create a Windows Forms project: Create a Windows Forms project and name it HostingWorkflowInWinForm.

2. Create a workflow: Add a workflow to the project and call it AdditionWorkflow.xaml. Author the workflow like this.

3. Create a Windows Form: Open the default created Form1.cs file and alter it. Double-click the Adding button and add code to the button event handler. The final code will be:

using System;
using System.Windows.Forms;
using System.Threading;
using System.Activities;

namespace HostingWorkflowInWinForm
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void buttonAdding_Click(object sender, EventArgs e)
        {
            AutoResetEvent waitHandler = new AutoResetEvent(false);
            string result = "";
            AdditionWorkflow addwf = new AdditionWorkflow
            {
                x = new InArgument<Int32>(Int32.Parse(textBoxX.Text)),
                y = new InArgument<Int32>(Int32.Parse(textBoxY.Text))
            };
            WorkflowApplication wfApp = new WorkflowApplication(addwf);
            wfApp.Completed = (completedArgs) =>
            {
                // The original listing is garbled here; the workflow's output
                // argument (assumed to be named "result") is read on completion.
                result = completedArgs.Outputs["result"].ToString();
                waitHandler.Set();
            };
            wfApp.Run();
            waitHandler.WaitOne();
            labelResultValue.Text = result; // assumed name of the result label
        }
    }
}

4. Run it: Set this project as the StartUp project and press Ctrl+F5 to run this project without debugging. We shall see the following:

How it works...

This task is only for the purpose of concept demonstration. In a real application, it is not a good idea to host a workflow in a Windows Forms application. It would be better to host the workflow in IIS and call it from the Windows Forms application, like we did in the previous ASP.NET web application.
Summary

In the above article we have covered:

- Hosting a workflow service in IIS7
- Hosting workflow in ASP.NET
- Hosting workflow in WPF
- Hosting workflow in a Windows Form

Further resources on this subject:

- Working with a Microsoft Windows Workflow Foundation 4.0 (WF) Program
- Starting with Windows Workflow Foundation
https://www.packtpub.com/books/content/hosting-workflow-applications-microsoft-windows-workflow-foundation-40
Pyromancer 0.2

Simple framework for creating IRC bots.

Example

from pyromancer.objects import Pyromancer

HOST = '1.2.3.4'
PORT = 6667
NICK = 'PyromancerBot'

settings = {'host': HOST, 'port': PORT, 'nick': NICK,
            'encoding': 'ISO-8859-1', 'packages': ['.test_examples']}

p = Pyromancer(settings)
p.run()

Custom commands

Writing your own commands is fairly simple. Create a folder which will be the package name, with a folder named "commands" in it and a module to hold the commands. In your module, you can register functions to be a command with the built-in command decorator. After that you need to register it in your settings, and you can use it.

Example

File layout:

test/
    commands/
        __init__.py
        test_commands.py
    __init__.py
init.py

test_commands.py:

from pyromancer.decorators import command

@command(r'bye (.*)')
def bye(match):
    return 'Bye {m[1]}!'

init.py:

settings['packages'] = ['test.test_commands']

On IRC:

<User> bye everyone
<Bot> Bye everyone!

Pyromancer scans the modules in the settings automatically for functions decorated using the command decorator, so all your commands in test_commands.py are used automatically.

To do

- Ability to process raw lines through custom commands
- Figure out how to do translation of messages through the Match.msg function
- Add timers
- Make a module of settings (like Django), with settings for each installed package prioritized based on place in the packages setting (not like Django)
- Add a command module which keeps track of channels joined and users in them which other commands can use
- Redo commands loading so you can use commands.py for custom commands instead of a mandatory commands directory
- Redo package loading so you just have to specify the package name and it loads the commands and any future things like settings

Changelist

Package Index Owner: Gwildor
DOAP record: Pyromancer-0.2.xml
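The command-matching flow described above can be sketched in plain Python. This is an illustration of the mechanism only; the registry and the dispatch function below are hypothetical stand-ins, not Pyromancer's actual internals (only the command decorator idea and the {m[1]} formatting convention come from the examples above):

```python
import re

# Registry of (compiled pattern, handler) pairs, filled by the decorator.
COMMANDS = []

def command(pattern):
    """Register the decorated function as a handler for matching messages."""
    def decorator(func):
        COMMANDS.append((re.compile(pattern), func))
        return func
    return decorator

@command(r'bye (.*)')
def bye(match):
    # The returned template is formatted with the match object bound to `m`,
    # so {m[1]} expands to the first capture group.
    return 'Bye {m[1]}!'.format(m=match)

def dispatch(message):
    """Run the first registered command whose pattern matches the message."""
    for pattern, handler in COMMANDS:
        match = pattern.match(message)
        if match:
            return handler(match)
    return None

print(dispatch('bye everyone'))  # -> Bye everyone!
```

The same registry-and-scan idea is what lets the framework pick up decorated functions from the configured packages automatically.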
https://pypi.python.org/pypi/Pyromancer/0.2
XmFontListAdd - A font list function that creates a new font list

#include <Xm/Xm.h>

XmFontList XmFontListAdd (oldlist, font, charset)
        XmFontList oldlist;
        XFontStruct *font;
        XmStringCharSet charset;

XmFontListAdd creates a new font list consisting of the contents of the oldlist and the new font-list element being added. This function deallocates the oldlist after extracting the required information; therefore, do not reference oldlist thereafter. This function is obsolete and exists for compatibility with previous releases. It has been replaced by XmFontListAppendEntry.

oldlist
        Specifies a pointer to the font list to which an entry will be added.
font
        Specifies a pointer to a font structure for which the new font list is generated. This is the structure returned by the XLib XLoadQueryFont function.

Returns NULL if oldlist is NULL; returns oldlist if font or charset is NULL; otherwise, returns a new font list.

XmFontList(3X), XmFontListAppendEntry(3X)
http://backdrift.org/man/tru64/man3/XmFontListAdd.3X.html
Ranter

rant1ng: thanks devrant for murdering my image... that I spent like an hour preparing and taking... arg. fuck it, I'm leaving this up, I think it looks pretty. I'm proud of how I used contracts and interfaces, patterns, and an abstract class. Properly, I think, because it was absolutely needed to do so. But I'm new to OOP (one year experience), so please .... though... this is devRant... don't be kind.... rip it.

Drillan767: Go upload the screen on imgur or something like that!

C0D4: Hard to read when blind and devrant destroys images 😒 But from what I can see it looks reasonable to me.

Michelle: Even though I can't read it, it's still really pretty. :)

polaroidkidd: .. I couldn't resist..

-ANGRY-CLIENT-: Needs more res

@polaroidkidd you're just chinese.

hell1: PhP is bugly

VS Code doesn't have this layout ability, but Atom does.

Their namespace splitter choice is strange, like in Windows.

@polaroidkidd @Drillan767 and everyone else, in case you all missed it (not sure why devrant doesn't let you edit your own post, or maybe I'm stupid and can't find it) ----> <---- but yeah.. I use the smallest font size mostly lol

my favourite line of code:

$object = (new Adapter())->handle($this);

It just worked out that way as I wrote it, it wasn't planned. Product of using good patterns? or just stupid coincidence, I don't know... but I like it. By the way, that's sugar for:

$adapter = new Adapter();
$object = $adapter->handle($this);

This is not immediately obvious to most, so I'll point it out.

Glinkis: @rant1ng if something is "not immediately obvious to most", it's a good sign you should go with the obvious one instead.

Kandelborg: @Pogromist VSCode got that layout functionality now 😊

Pogromist: @Kandelborg i know)
Is php beautiful, or ugly? Here's my php. The beautiful, and the ugly. *nervously presses submit*

rant php
https://devrant.com/rants/1479624/is-php-beautiful-or-ugly-heres-my-php-the-beautiful-and-the-ugly-nervously-press
order an old version of GatoSelvadego's user page. Note that combinations with level 0 for languages in any tested pair (language not understood) are automatically excluded.

Babel boxes

Your Babel box is for showing the languages that you know, as well as the operating system, keyboard layout, web browser, desktop environment, and text editor that you are most comfortable with. Details of how to use it for its original purpose (showing the languages that you know) are at wikipedia:Wikipedia:Babel.

Template:Babel allows 1 to 100 boxes. Format: {{Babel|<box1>|<box2>|<box3>...}}

Passing parameters to included user boxes

The trick is to use the {{!}} template. See below for an example.

User boxes from the "User:" namespaces

The Babel box prefixes its arguments with "User ", so it will work with the User: namespace. For example, this would include the {{User:Urhixidur/Userboxes/Asteroid}} user box, and pass it a parameter as well:

{{Babel|<box1>|<box2>|:Urhixidur/Userboxes/Asteroid{{!}}<parameter>|<box 4>|...}}

Boxes with parameters can be added at the end with the format "|special-boxes={{box page name|param1|param2}}{{second page name|param1|param2|param3}}". Any number of additional boxes can be added this way and will display below the others.

Example: {{Babel|color=yellow|align=left|de|en|!|fr|ru}} produces:

How to get Babel boxes to work on other Wikipedias

The page you are now reading is the actual Babel template. If you click "edit this page", you will see a bunch of computer code that makes the Babel boxes work. What you're now reading is just comments in that code, inside "<noinclude>" tags so it doesn't interfere with the computer code. In order to have Babel boxes on another Wikimedia project, just copy this page to that project. Click "view source" and use your computer mouse.
https://wiki.openstreetmap.org/wiki/Template:Babel
You can write a custom action for Excel, Word, Image, Text, etc. Here I am going to create a custom action for a text file.

Steps to create a Custom Action Handler

Step 1: Create a new MVC project.

Step 2: We will create a source folder from which we will read a text file. I have created the File folder.

Step 3: Now we will create a new class that inherits from ActionResult. It will look like this:

public class TextResult : ActionResult
{
    public string Path { get; set; }
    public string FileName { get; set; }

    public override void ExecuteResult(ControllerContext context)
    {
        var filepath = context.HttpContext.Server.MapPath(Path);
        var data = File.ReadAllText(filepath + "\\" + FileName);
        context.HttpContext.Response.Write(data);
    }
}

We have to override the ExecuteResult method in the custom Result class.

Step 4: Now we will add a new controller; in this sample I created a TextController class:

public class TextController : Controller
{
    public TextResult Index()
    {
        return new TextResult { FileName = "Mahesh.txt", Path = "File" };
    }
}

Note: I have only added a controller, not a view.

Step 5: Now we will run our application.

I am also attaching sample code for this article. Hope it will help you.

Happy Coding...
http://www.c-sharpcorner.com/UploadFile/amit12345/custom-action-in-mvc/
16.1. Geometry and Linear Algebraic Operations

In Section 2.3, we encountered the basics of linear algebra and saw how it could be used to express common operations for transforming our data. Linear algebra is one of the key mathematical pillars underlying much of the work that we do in deep learning and in machine learning more broadly. While Section 2.3 contained enough machinery to communicate the mechanics of modern deep learning models, there is a lot more to the subject than we could fit there (or even here). In this section, we will go deeper, highlighting some geometric interpretations of linear algebra operations, and introducing a few fundamental concepts, including eigenvalues and eigenvectors.

16.1.1. Geometry of Vectors

First, we need to discuss the two common geometric interpretations of vectors, as either points or directions in space. Fundamentally, a vector is a list of numbers such as the Python list below.

v = [1, 7, 0, 1]

Mathematicians most often write this as either a column or row vector, which is to say either as

\[\mathbf{x} = \begin{bmatrix}1\\7\\0\\1\end{bmatrix},\]

or

\[\mathbf{x} = \begin{bmatrix}1 & 7 & 0 & 1\end{bmatrix}.\]

These often have different interpretations, where data points are column vectors and weights used to form weighted sums are row vectors. However, it can be beneficial to be flexible. Given a vector, the first interpretation that we should give it is as a point in space. In two or three dimensions, we can visualize these points by using the components of the vectors to define the location of the points in space compared to a fixed reference called the origin. This can be seen in Fig. 16.1.1.

Fig. 16.1.1 An illustration of visualizing vectors as points in the plane. The first component of the vector gives the \(x\)-coordinate, the second component gives the \(y\)-coordinate. Higher dimensions are analogous, although much harder to visualize.

This geometric point of view allows us to consider the problem on a more abstract level.
No longer faced with some insurmountable-seeming problem like classifying pictures as either cats or dogs, we can start considering tasks abstractly as collections of points in space, and picturing the task as discovering how to separate two distinct clusters of points.

In parallel, there is a second point of view that people often take of vectors: as directions in space. Not only can we think of the vector \(\mathbf{v} = [2,3]^\top\) as the location \(2\) units to the right and \(3\) units up from the origin, we can also think of it as the direction itself to take \(2\) steps to the right and \(3\) steps up. In this way, we consider all the vectors in figure Fig. 16.1.2 the same.

Fig. 16.1.2 Any vector can be visualized as an arrow in the plane. In this case, every vector drawn is a representation of the vector \((2,3)\).

One of the benefits of this shift is that we can make visual sense of the act of vector addition. In particular, we follow the directions given by one vector, and then follow the directions given by the other, as is seen in Fig. 16.1.3. Vector subtraction has a similar interpretation. By considering the identity that \(\mathbf{u} = \mathbf{v} + (\mathbf{u}-\mathbf{v})\), we see that the vector \(\mathbf{u}-\mathbf{v}\) is the direction that takes us from the point \(\mathbf{v}\) to the point \(\mathbf{u}\).

16.1.2. Dot Products and Angles

As we saw in Section 2.3, if we take two column vectors, say \(\mathbf{u}\) and \(\mathbf{v}\), we can form their dot product by computing:

\[\mathbf{u}^\top\mathbf{v} = \sum_i u_i\cdot v_i.\]

Because this operation is symmetric, we will mirror the notation of classical multiplication and write

\[\mathbf{u}\cdot\mathbf{v} = \mathbf{u}^\top\mathbf{v} = \mathbf{v}^\top\mathbf{u},\]

to highlight the fact that exchanging the order of the vectors will yield the same answer.

The dot product also admits a geometric interpretation: it is closely related to the angle between two vectors. Consider the angle shown in Fig. 16.1.4.

Fig. 16.1.4 Between any two vectors in the plane there is a well defined angle \(\theta\).
We will see this angle is intimately tied to the dot product.

To start, let us consider two specific vectors:

\[\mathbf{v} = (r,0) \quad \text{and} \quad \mathbf{w} = (s\cos(\theta), s\sin(\theta)).\]

The vector \(\mathbf{v}\) is length \(r\) and runs parallel to the \(x\)-axis, and the vector \(\mathbf{w}\) is of length \(s\) and at angle \(\theta\) with the \(x\)-axis. If we compute the dot product of these two vectors, we see that

\[\mathbf{v}\cdot\mathbf{w} = rs\cos(\theta) = \|\mathbf{v}\|\|\mathbf{w}\|\cos(\theta).\]

With some simple algebraic manipulation, we can rearrange terms to obtain

\[\theta = \arccos\left(\frac{\mathbf{v}\cdot\mathbf{w}}{\|\mathbf{v}\|\|\mathbf{w}\|}\right).\]

In short, for these two specific vectors, the dot product tells us the angle between the two vectors. This same fact is true in general. We will not derive the expression here, however, if we consider writing \(\|\mathbf{v} - \mathbf{w}\|^2\) in two ways, one with the dot product and the other geometrically using the law of cosines, we can obtain the full relationship. Indeed, for any two vectors \(\mathbf{v}\) and \(\mathbf{w}\), the angle between the two vectors is

\[\theta = \arccos\left(\frac{\mathbf{v}\cdot\mathbf{w}}{\|\mathbf{v}\|\|\mathbf{w}\|}\right).\]

This is a nice result since nothing in the computation references two dimensions. Indeed, we can use this in three or three million dimensions without issue.

As a simple example, let us see how to compute the angle between a pair of vectors:

%matplotlib inline
import d2l
from IPython import display
from mxnet import gluon, np, npx
npx.set_np()

def angle(v, w):
    return np.arccos(v.dot(w) / (np.linalg.norm(v) * np.linalg.norm(w)))

angle(np.array([0, 1, 2]), np.array([2, 3, 4]))

array(0.41899002)

We will not use it right now, but it is useful to know that we will refer to vectors for which the angle is \(\pi/2\) (or equivalently \(90^{\circ}\)) as being orthogonal. By examining the equation above, we see that this happens when \(\theta = \pi/2\), which is the same thing as \(\cos(\theta) = 0\). The only way this can happen is if the dot product itself is zero, and two vectors are orthogonal if and only if \(\mathbf{v}\cdot\mathbf{w} = 0\). This will prove to be a helpful formula when understanding objects geometrically.

Examples like this are everywhere. In text, we might want the topic being discussed to not change if we write a document twice as long that says the same thing.
For some encoding (such as counting the number of occurrences of words in some vocabulary), this corresponds to a doubling of the vector encoding the document, so again we can use the angle.

16.1.2.1. Cosine Similarity

In ML contexts where the angle is employed to measure the closeness of two vectors, practitioners adopt the term cosine similarity to refer to the portion

\[\cos(\theta) = \frac{\mathbf{v}\cdot\mathbf{w}}{\|\mathbf{v}\|\|\mathbf{w}\|}.\]

The cosine takes a maximum value of \(1\) when the two vectors point in the same direction, a minimum value of \(-1\) when they point in opposite directions, and a value of \(0\) when the two vectors are orthogonal. Note that if the components of high-dimensional vectors are sampled randomly with mean \(0\), their cosine will nearly always be close to \(0\).

16.1.3. Hyperplanes

In addition to working with vectors, another key object that you must understand to go far in linear algebra is the hyperplane, a generalization to higher dimensions of a line (two dimensions) or of a plane (three dimensions). In a \(d\)-dimensional vector space, a hyperplane has \(d-1\) dimensions and divides the space into two half-spaces.

Let us start with an example. Suppose that we have a column vector \(\mathbf{w}=[2,1]^\top\). We want to know, "what are the points \(\mathbf{v}\) with \(\mathbf{w}\cdot\mathbf{v} = 1\)?" By recalling the connection between dot products and angles above, we can see that this is equivalent to

\[\|\mathbf{v}\|\|\mathbf{w}\|\cos(\theta) = 1 \iff \|\mathbf{v}\|\cos(\theta) = \frac{1}{\|\mathbf{w}\|}.\]
If we wanted, we could find the equation for this line and see that it is \(2x + y = 1\) or equivalently \(y = 1 - 2x\). If we now look at what happens when we ask about the set of points with \(\mathbf{w}\cdot\mathbf{v} > 1\) or \(\mathbf{w}\cdot\mathbf{v} < 1\), we can see that these are cases where the projections are longer or shorter than \(1/\|\mathbf{w}\|\), respectively. Thus, those two inequalities define either side of the line. In this way, we have found a way to cut our space into two halves, where all the points on one side have dot product below a threshold, and the other side above as we see in Fig. 16.1.6. Fig. 16.1.6 If we now consider the inequality version of the expression, we see that our hyperplane (in this case: just a line) separates the space into two halves.¶ The story in higher dimension is much the same. If we now take \(\mathbf{w} = [1,2,3]^\top\) and ask about the points in three dimensions with \(\mathbf{w}\cdot\mathbf{v} = 1\), we obtain a plane at right angles to the given vector \(\mathbf{w}\). The two inequalities again define the two sides of the plane as is shown in Fig. 16.1.7. While our ability to visualize runs out at this point, nothing stops us from doing this in tens, hundreds, or billions of dimensions. This occurs often when thinking about machine learned models. For instance, we can understand linear classification models like those from Section 3.4, as methods to find hyperplanes that separate the different target classes. In this context, such hyperplanes are often referred to as decision planes. The majority of deep learned classification models end with a linear layer fed into a softmax, so one can interpret the role of the deep neural network to be to find a non-linear embedding such that the target classes can be separated cleanly by hyperplanes. 
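A half-space decision rule of this kind is just a thresholded dot product. The minimal sketch below (plain NumPy, reusing the \(\mathbf{w} = [2,1]^\top\) example from above) classifies points by which side of the hyperplane \(\mathbf{w}\cdot\mathbf{v} = 1\) they fall on:

```python
import numpy as np

w = np.array([2.0, 1.0])
points = np.array([[0.0, 0.0],    # w.v = 0 -> below the hyperplane
                   [1.0, 1.0],    # w.v = 3 -> above
                   [0.25, 0.5],   # w.v = 1 -> exactly on it
                   [2.0, -1.0]])  # w.v = 3 -> above

# Sign of w.v - 1 tells us which half-space each point lies in.
side = np.sign(points.dot(w) - 1)
print(side)  # [-1.  1.  0.  1.]
```

A linear classifier differs only in that \(\mathbf{w}\) and the threshold are learned from data rather than fixed by hand.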
To give a hand-built example, notice that we can produce a reasonable model to classify tiny images of t-shirts and trousers from the Fashion MNIST dataset (seen in Section 3.5) by just taking the vector between their means to define the decision plane and eyeball a crude threshold. First we will load the data and compute the averages. # Load in the dataset train = gluon.data.vision.FashionMNIST(train=True) test = gluon.data.vision.FashionMNIST(train=False) X_train_0 = np.stack([x[0] for x in train if x[1] == 0]).astype(float) X_train_1 = np.stack([x[0] for x in train if x[1] == 1]).astype(float) X_test = np.stack( [x[0] for x in test if x[1] == 0 or x[1] == 1]).astype(float) y_test = np.stack( [x[1] for x in test if x[1] == 0 or x[1] == 1]).astype(float) # Compute averages ave_0 = np.mean(X_train_0, axis=0) ave_1 = np.mean(X_train_1, axis=0) It can be informative to examine these averages in detail, so let us plot what they look like. In this case, we see that the average indeed resembles a blurry image of a t-shirt. # Plot average t-shirt d2l.set_figsize() d2l.plt.imshow(ave_0.reshape(28, 28).tolist(), cmap='Greys') d2l.plt.show() In the second case, we again see that the average resembles a blurry image of trousers. # Plot average trousers d2l.plt.imshow(ave_1.reshape(28, 28).tolist(), cmap='Greys') d2l.plt.show() In a fully machine learned solution, we would learn the threshold from the dataset. In this case, I simply eyeballed a threshold that looked good on the training data by hand. # Print test set accuracy with eyeballed threshold w = (ave_1 - ave_0).T predictions = X_test.reshape(2000, -1).dot(w.flatten()) > -1500000 np.mean(predictions.astype(y_test.dtype)==y_test, dtype=np.float64) # Accuracy array(0.801, dtype=float64) 16.1.4. Geometry of Linear Transformations¶ Through Section 2.3 and the above discussions, we have a solid understanding of the geometry of vectors, lengths, and angles. 
However, there is one important object we have omitted discussing, and that is a geometric understanding of linear transformations represented by matrices. Fully internalizing what matrices can do to transform data between two potentially different high dimensional spaces takes significant practice, and is beyond the scope of this appendix. However, we can start building up intuition in two dimensions. Suppose that we have some matrix: \(\mathbf{A} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}.\) If we want to apply this to an arbitrary vector \(\mathbf{v} = [x,y]^\top\), we multiply and see that \(\mathbf{A}\mathbf{v} = \begin{bmatrix} a & b \\ c & d \end{bmatrix}\begin{bmatrix}x \\ y\end{bmatrix} = x\begin{bmatrix} a \\ c \end{bmatrix} + y\begin{bmatrix} b \\ d \end{bmatrix} = x\left(\mathbf{A}\begin{bmatrix}1\\0\end{bmatrix}\right) + y\left(\mathbf{A}\begin{bmatrix}0\\1\end{bmatrix}\right).\) This may seem like an odd computation, where something clear became somewhat impenetrable. However, it tells us that we can write the way that a matrix transforms any vector in terms of how it transforms two specific vectors: \([1,0]^\top\) and \([0,1]^\top\). This is worth considering for a moment. We have essentially reduced an infinite problem (what happens to any pair of real numbers) to a finite one (what happens to these specific vectors). These vectors are an example of a basis, where we can write any vector in our space as a weighted sum of these basis vectors. Let us draw what happens when we use the specific matrix \(\mathbf{A} = \begin{bmatrix} 1 & 2 \\ -1 & 3 \end{bmatrix}.\) If we look at the specific vector \(\mathbf{v} = [2,-1]^\top\), we see this is \(2\cdot[1,0]^\top + -1\cdot[0,1]^\top\), and thus we know that the matrix \(\mathbf{A}\) will send this to \(2(\mathbf{A}[1,0]^\top) + -1(\mathbf{A}[0,1]^\top) = 2[1,-1]^\top - [2,3]^\top = [0,-5]^\top\). If we follow this logic through carefully, say by considering the grid of all integer pairs of points, we see that what happens is that the matrix multiplication can skew, rotate, and scale the grid, but the grid structure must remain as you see in Fig. 16.1.8.

Fig. 16.1.8 The matrix \(\mathbf{A}\) acting on the given basis vectors. Notice how the entire grid is transported along with it.¶

This is the most important intuitive point to internalize about linear transformations represented by matrices.
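This computation is easy to verify in plain NumPy. The columns of the matrix below are taken from the images of the basis vectors quoted in the text (\(\mathbf{A}[1,0]^\top = [1,-1]^\top\) and \(\mathbf{A}[0,1]^\top = [2,3]^\top\)):

```python
import numpy as np

# Columns of A are the images of the basis vectors [1,0] and [0,1].
A = np.array([[1.0, 2.0],
              [-1.0, 3.0]])
v = np.array([2.0, -1.0])

# A v = 2 * (A e1) + (-1) * (A e2) = 2*[1,-1] - [2,3] = [0,-5]
assert np.allclose(A.dot(v), 2 * A[:, 0] - 1 * A[:, 1])
print(A.dot(v))  # [ 0. -5.]
```

Any other vector decomposes the same way: the matrix-vector product is always a weighted sum of the columns, with the vector's entries as weights.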
Matrices are incapable of distorting some parts of space differently than others. All they can do is take the original coordinates on our space and skew, rotate, and scale them. Some distortions can be severe. For instance the matrix \(\mathbf{B} = \begin{bmatrix} 2 & -1 \\ 4 & -2 \end{bmatrix}\) compresses the entire two-dimensional plane down to a single line. Identifying and working with such transformations are the topic of a later section, but geometrically we can see that this is fundamentally different from the types of transformations we saw above. For instance, the result from matrix \(\mathbf{A}\) can be “bent back” to the original grid. The results from matrix \(\mathbf{B}\) cannot because we will never know where the vector \([1,2]^\top\) came from—was it \([1,1]^\top\) or \([0,-1]^\top\)? While this picture was for a \(2\times2\) matrix, nothing prevents us from taking the lessons learned into higher dimensions. If we take similar basis vectors like \([1,0,\ldots,0]\) and see where our matrix sends them, we can start to get a feeling for how the matrix multiplication distorts the entire space in whatever dimension space we are dealing with.

16.1.5. Linear Dependence¶
Consider again the matrix \(\mathbf{B} = \begin{bmatrix} 2 & -1 \\ 4 & -2 \end{bmatrix}.\) This compresses the entire plane down to live on the single line \(y = 2x\). The question now arises: is there some way we can detect this just by looking at the matrix itself? The answer is that indeed we can. Let us take \(\mathbf{b}_1 = [2,4]^\top\) and \(\mathbf{b}_2 = [-1,-2]^\top\) to be the two columns of \(\mathbf{B}\). Remember that we can write everything transformed by the matrix \(\mathbf{B}\) as a weighted sum of the columns of the matrix: like \(a_1\mathbf{b}_1 + a_2\mathbf{b}_2\). We call this a linear combination. The fact that \(\mathbf{b}_1 = -2\cdot\mathbf{b}_2\) means that we can write any linear combination of those two columns entirely in terms of say \(\mathbf{b}_2\) since \(a_1\mathbf{b}_1 + a_2\mathbf{b}_2 = -2a_1\mathbf{b}_2 + a_2\mathbf{b}_2 = (a_2 - 2a_1)\mathbf{b}_2.\) This means that one of the columns is, in a sense, redundant because it does not define a unique direction in space.
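The collapse can be seen directly. Using the columns \(\mathbf{b}_1 = [2,4]^\top\) and \(\mathbf{b}_2 = [-1,-2]^\top\) given above, a quick plain-NumPy sketch confirms that every image \(\mathbf{B}\mathbf{v}\) lands on the line \(y = 2x\):

```python
import numpy as np

# B has columns b1 = [2,4] and b2 = [-1,-2], with b1 = -2 * b2.
B = np.array([[2.0, -1.0],
              [4.0, -2.0]])

rng = np.random.default_rng(0)
pts = rng.standard_normal((100, 2))   # random points in the plane
imgs = (B @ pts.T).T                  # apply B to every point

# Every image satisfies y = 2x: the plane is compressed onto a line.
assert np.allclose(imgs[:, 1], 2 * imgs[:, 0])
print(np.linalg.matrix_rank(B))  # 1
```

The `matrix_rank` call anticipates the next section: the rank of 1 is exactly the dimension of the line the plane is compressed onto.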
This should not surprise us too much since we already saw that this matrix collapses the entire plane down into a single line. Moreover, we see that the linear dependence \(\mathbf{b}_1 = -2\cdot\mathbf{b}_2\) captures this. To make this more symmetrical between the two vectors, we will write this as \(\mathbf{b}_1 + 2\cdot\mathbf{b}_2 = \mathbf{0}.\) In general, we will say that a collection of vectors \(\mathbf{v}_1, \ldots, \mathbf{v}_k\) are linearly dependent if there exist coefficients \(a_1, \ldots, a_k\) not all equal to zero so that \(\sum_{i=1}^{k} a_i\mathbf{v}_i = \mathbf{0}.\) In this case, we can solve for one of the vectors in terms of some combination of the others, and effectively render it redundant. Thus, a linear dependence in the columns of a matrix is a witness to the fact that our matrix is compressing the space down to some lower dimension. If there is no linear dependence we say the vectors are linearly independent. If the columns of a matrix are linearly independent, no compression occurs and the operation can be undone.

16.1.6. Rank¶
If we have a general \(n\times m\) matrix, it is reasonable to ask what dimension space the matrix maps into. A concept known as the rank will be our answer. In the previous section, we noted that a linear dependence bears witness to compression of space into a lower dimension and so we will be able to use this to define the notion of rank. In particular, the rank of a matrix \(\mathbf{A}\) is the largest number of linearly independent columns amongst all subsets of columns. For example, the matrix \(\mathbf{B} = \begin{bmatrix} 2 & -1 \\ 4 & -2 \end{bmatrix}\) has \(\mathrm{rank}(\mathbf{B})=1\), since the two columns are linearly dependent, but either column on its own is linearly independent. For a more challenging example, we can consider a matrix \(\mathbf{C}\) and show that \(\mathbf{C}\) has rank two since, for instance, the first two columns are linearly independent, however any of the four collections of three columns are dependent. This procedure, as described, is very inefficient.
It requires looking at every subset of the columns of our given matrix, and thus is potentially exponential in the number of columns. Later we will see a more computationally efficient way to compute the rank of a matrix, but for now, this is sufficient to see that the concept is well defined and to understand its meaning.

16.1.7. Invertibility¶
We have seen above that multiplication by a matrix with linearly dependent columns cannot be undone, i.e., there is no inverse operation that can always recover the input. However, in the case of multiplication by a full-rank matrix (i.e., some \(\mathbf{A}\) that is an \(n \times n\) matrix with rank \(n\)), we should always be able to undo it. Consider the matrix \(\mathbf{I} = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & 1 \end{bmatrix},\) which is the matrix with ones along the diagonal, and zeros elsewhere. We call this the identity matrix. It is the matrix which leaves our data unchanged when applied. To find a matrix which undoes what our matrix \(\mathbf{A}\) has done, we want to find a matrix \(\mathbf{A}^{-1}\) such that \(\mathbf{A}^{-1}\mathbf{A} = \mathbf{A}\mathbf{A}^{-1} = \mathbf{I}.\) If we look at this as a system, we have \(n \times n\) unknowns (the entries of \(\mathbf{A}^{-1}\)) and \(n \times n\) equations (the equality that needs to hold between every entry of the product \(\mathbf{A}^{-1}\mathbf{A}\) and every entry of \(\mathbf{I}\)) so we should generically expect a solution to exist. Indeed, in the next section we will see a quantity called the determinant, which has the property that as long as the determinant is not zero, we can find a solution. We call such a matrix \(\mathbf{A}^{-1}\) the inverse matrix. As an example, if \(\mathbf{A}\) is the general \(2 \times 2\) matrix \(\mathbf{A} = \begin{bmatrix} a & b \\ c & d \end{bmatrix},\) then we can see that the inverse is \(\mathbf{A}^{-1} = \frac{1}{ad-bc}\begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.\) We can test this by checking that multiplying by the inverse given by the formula above works in practice.

M = np.array([[1, 2], [1, 4]])
M_inv = np.array([[2, -1], [-0.5, 0.5]])
M_inv.dot(M)

array([[1., 0.],
       [0., 1.]])

16.1.7.1.
Numerical Issues¶
While the inverse of a matrix is useful in theory, we must say that most of the time we do not wish to use the matrix inverse to solve a problem in practice. In general, there are far more numerically stable algorithms for solving linear equations like \(\mathbf{A}\mathbf{x} = \mathbf{b}\) than computing the inverse and multiplying to get \(\mathbf{x} = \mathbf{A}^{-1}\mathbf{b}.\) Just as division by a small number can lead to numerical instability, so can inversion of a matrix which is close to having low rank. Moreover, it is common that the matrix \(\mathbf{A}\) is sparse, which is to say that it contains only a small number of non-zero values. If we were to explore examples, we would see that this does not mean the inverse is sparse. Even if \(\mathbf{A}\) was a \(1\) million by \(1\) million matrix with only \(5\) million non-zero entries (and thus we need only store those \(5\) million), the inverse will typically have almost every entry non-zero, requiring us to store all \(1\text{M}^2\) entries—that is \(1\) trillion entries! While we do not have time to dive all the way into the thorny numerical issues frequently encountered when working with linear algebra, we want to provide you with some intuition about when to proceed with caution, and generally avoiding inversion in practice is a good rule of thumb.

16.1.8. Determinant¶
The geometric view of linear algebra gives an intuitive way to interpret a fundamental quantity known as the determinant. Consider the grid image from before, but now with a highlighted region (Fig. 16.1.9).

Fig. 16.1.9 The matrix \(\mathbf{A}\) again distorting the grid. This time, I want to draw particular attention to what happens to the highlighted square.¶

Look at the highlighted square. This is a square with edges given by \((0, 1)\) and \((1, 0)\) and thus it has area one. After \(\mathbf{A}\) transforms this square, we see that it becomes a parallelogram.
There is no reason this parallelogram should have the same area that we started with, and indeed in the specific case shown here of \(\mathbf{A} = \begin{bmatrix} 1 & 2 \\ -1 & 3 \end{bmatrix},\) it is an exercise in coordinate geometry to compute the area of this parallelogram and obtain that the area is \(5\). In general, if we have a matrix \(\mathbf{A} = \begin{bmatrix} a & b \\ c & d \end{bmatrix},\) we can see with some computation that the area of the resulting parallelogram is \(ad-bc\). This area is referred to as the determinant. Let us check this quickly with some example code.

import numpy as np
np.linalg.det(np.array([[1, -1], [2, 3]]))

5.000000000000001

The eagle-eyed amongst us will notice that this expression can be zero or even negative. For the negative term, this is a matter of convention taken generally in mathematics: if the matrix flips the figure, we say the area is negated. Let us see now that when the determinant is zero, we learn more. Let us consider \(\mathbf{B} = \begin{bmatrix} 2 & -1 \\ 4 & -2 \end{bmatrix}.\) If we compute the determinant of this matrix, we get \(2\cdot(-2) - 4\cdot(-1) = 0\). Given our understanding above, this makes sense. \(\mathbf{B}\) compresses the square from the original image down to a line segment, which has zero area. And indeed, being compressed into a lower dimensional space is the only way to have zero area after the transformation. Thus we see the following result is true: a matrix \(\mathbf{A}\) is invertible if and only if the determinant is not equal to zero. As a final comment, imagine that we have any figure drawn on the plane. Thinking like computer scientists, we can decompose that figure into a collection of little squares so that the area of the figure is in essence just the number of squares in the decomposition. If we now transform that figure by a matrix, we send each of these squares to parallelograms, each one of which has area given by the determinant. We see that for any figure, the determinant gives the (signed) factor by which a matrix scales the area of any figure. Computing determinants for larger matrices can be laborious, but the intuition is the same.
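The three determinant behaviors just discussed — the \(ad - bc\) formula, the sign flip under reflection, and the collapse to zero area — can all be checked in plain NumPy. The matrices below are the one from the code above and the rank-deficient matrix with columns \([2,4]^\top\) and \([-1,-2]^\top\) from the linear-dependence discussion; the reflection matrix is an extra illustrative example:

```python
import numpy as np

# ad - bc for the matrix in the code above: 1*3 - (-1)*2 = 5.
A = np.array([[1.0, -1.0], [2.0, 3.0]])
assert np.isclose(np.linalg.det(A), 1 * 3 - (-1) * 2)

# A reflection flips the plane, so its determinant is negative.
flip = np.array([[0.0, 1.0], [1.0, 0.0]])
assert np.isclose(np.linalg.det(flip), -1.0)

# The rank-deficient matrix B collapses all area to zero.
B = np.array([[2.0, -1.0], [4.0, -2.0]])
assert np.isclose(np.linalg.det(B), 0.0)
```

A zero determinant here is exactly the invertibility test from the paragraph above: `np.linalg.inv(B)` would raise an error for this singular matrix.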
The determinant remains the factor by which \(n\times n\) matrices scale \(n\)-dimensional volumes.

16.1.9. Tensors and Common Linear Algebra Operations¶
In Section 2.3 the concept of tensors was introduced. In this section, we will dive more deeply into tensor contractions (the tensor equivalent of matrix multiplication), and see how they can provide a unified view on a number of matrix and vector operations. With matrices and vectors we knew how to multiply them to transform data. We need to have a similar definition for tensors if they are to be useful to us. Think about matrix multiplication: \(\mathbf{C} = \mathbf{A}\mathbf{B},\) or equivalently \(c_{ij} = \sum_{k} a_{ik}b_{kj}.\) This pattern is one we can repeat for tensors. For tensors, there is no one case of what to sum over that can be universally chosen, so we need to specify exactly which indices we want to sum over. For instance, we could consider \(y_{il} = \sum_{jk} x_{ijkl}a_{jk}.\) Such a transformation is called a tensor contraction. It can represent a far more flexible family of transformations than matrix multiplication alone. As an often-used notational simplification, we can notice that the sum is over exactly those indices that occur more than once in the expression, thus people often work with Einstein notation, where the summation is implicitly taken over all repeated indices. This gives the compact expression \(y_{il} = x_{ijkl}a_{jk}.\)

16.1.9.1. Common Examples from Linear Algebra¶
Let us see how many of the linear algebraic definitions we have seen before can be expressed in this compressed tensor notation:

\(\mathbf{v} \cdot \mathbf{w} = \sum_i v_iw_i\)
\(\|\mathbf{v}\|_2^{2} = \sum_i v_iv_i\)
\((\mathbf{A}\mathbf{v})_i = \sum_j a_{ij}v_j\)
\((\mathbf{A}\mathbf{B})_{ik} = \sum_j a_{ij}b_{jk}\)
\(\mathrm{tr}(\mathbf{A}) = \sum_i a_{ii}\)

In this way, we can replace a myriad of specialized notations with short tensor expressions.

16.1.9.2. Expressing in Code¶
Tensors may flexibly be operated on in code as well. As seen in Section 2.3, we can create tensors as is shown below.
# Define tensors
B = np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]])
A = np.array([[1, 2], [3, 4]])
v = np.array([1, 2])

# Print out the shapes
A.shape, B.shape, v.shape

((2, 2), (2, 2, 3), (2,))

Einstein summation has been implemented directly via np.einsum. The indices that occur in the Einstein summation can be passed as a string, followed by the tensors that are being acted upon. For instance, to implement matrix multiplication, we can consider the Einstein summation seen above (\(\mathbf{A}\mathbf{v} = a_{ij}v_j\)) and strip out the indices themselves to get the implementation:

# Reimplement matrix multiplication
np.einsum("ij, j -> i", A, v), A.dot(v)

(array([ 5, 11]), array([ 5, 11]))

This is a highly flexible notation. For instance, if we want to compute what would be traditionally written as \(c_{kl} = \sum_{ij} b_{ijk}a_{il}v_j,\) it can be implemented via Einstein summation as:

np.einsum("ijk, il, j -> kl", B, A, v)

array([[ 90, 126],
       [102, 144],
       [114, 162]])

This notation is readable and efficient for humans, but bulky if for whatever reason we need to generate a tensor contraction programmatically. For this reason, einsum provides an alternative notation by providing integer indices for each tensor. For example, the same tensor contraction can also be written as:

np.einsum(B, [0, 1, 2], A, [0, 3], v, [1], [2, 3])

array([[ 90, 126],
       [102, 144],
       [114, 162]])

Either notation allows for concise and efficient representation of tensor contractions in code.

16.1.10. Summary¶
Vectors can be interpreted geometrically as either points or directions in space.
Dot products define the notion of angle to arbitrarily high-dimensional spaces.
Hyperplanes are high-dimensional generalizations of lines and planes. They can be used to define decision planes that are often used as the last step in a classification task.
Matrix multiplication can be geometrically interpreted as uniform distortions of the underlying coordinates.
They represent a very restricted, but mathematically clean, way to transform vectors. Linear dependence is a way to tell when a collection of vectors are in a lower dimensional space than we would expect (say you have \(3\) vectors living in a \(2\)-dimensional space). The rank of a matrix is the size of the largest subset of its columns that are linearly independent. When a matrix’s inverse is defined, matrix inversion allows us to find another matrix that undoes the action of the first. Matrix inversion is useful in theory, but requires care in practice owing to numerical instability. Determinants allow us to measure how much a matrix expands or contracts a space. A nonzero determinant implies an invertible (non-singular) matrix and a zero-valued determinant means that the matrix is non-invertible (singular). Tensor contractions and Einstein summation provide for a neat and clean notation for expressing many of the computations that are seen in machine learning. 16.1.11. Exercises¶ What is the angle between True or false: \(\begin{bmatrix}1 & 2\\0&1\end{bmatrix}\) and \(\begin{bmatrix}1 & -2\\0&1\end{bmatrix}\) are inverses of one another? Suppose that we draw a shape in the plane with area \(100\mathrm{m}^2\). What is the area after transforming the figure by the matrix Which of the following sets of vectors are linearly independent? \(\left\{\begin{pmatrix}1\\0\\-1\end{pmatrix},\begin{pmatrix}2\\1\\-1\end{pmatrix},\begin{pmatrix}3\\1\\1\end{pmatrix}\right\}\) \(\left\{\begin{pmatrix}3\\1\\1\end{pmatrix},\begin{pmatrix}1\\1\\1\end{pmatrix},\begin{pmatrix}0\\0\\0\end{pmatrix}\right\}\) \(\left\{\begin{pmatrix}1\\1\\0\end{pmatrix},\begin{pmatrix}0\\1\\-1\end{pmatrix},\begin{pmatrix}1\\0\\1\end{pmatrix}\right\}\) Suppose that you have a matrix written as \(A = \begin{bmatrix}c\\d\end{bmatrix}\cdot\begin{bmatrix}a & b\end{bmatrix}\) for some choice of values \(a,b,c\), and \(d\). True or false: the determinant of such a matrix is always \(0\)? 
The vectors \(e_1 = \begin{bmatrix}1\\0\end{bmatrix}\) and \(e_2 = \begin{bmatrix}0\\1\end{bmatrix}\) are orthogonal. What is the condition on a matrix \(A\) so that \(Ae_1\) and \(Ae_2\) are orthogonal? How can you write \(\mathrm{tr}(\mathbf{A}^4)\) in Einstein notation for an arbitrary matrix \(A\)?
https://www.d2l.ai/chapter_appendix_math/linear-algebra.html
sonesh

Here is the solution:

ConvertToInt(inputStr)
  length <- inputStr.Length
  if length > k                       // k is the number of digits in the maximum integer
    Overflow; return
  multiplier = 1; output = 0
  maxLeadNum = r                      // r is the leading digit of the maximum integer
  for i = length; i > 1; i--          // accumulate the low-order digits, right to left
    output = output + Int(inputStr[i]) * multiplier
    multiplier = multiplier * 10
  // the leading digit can only overflow when the string has exactly k digits;
  // do the comparison in a wider type (or rearrange it) so it cannot itself overflow
  if length == k AND (Int(inputStr[1]) > maxLeadNum OR Int(inputStr[1]) * multiplier > MaxInt - output)
    Overflow; return
  output = output + Int(inputStr[1]) * multiplier
  return output

Here is the solution:

prev <- null; Current <- Root
while (Current is not null)
  if prev == Current -> Parent            // first visit: try to descend left
    if Current -> Left is not null
      prev = Current; Current = Current -> Left
    else
      Print Current.Value
      prev = Current
      Current = (Current -> Right is not null) ? Current -> Right : Current -> Parent
  else if prev == Current -> Left         // done with the left subtree
    Print Current.Value
    prev = Current
    Current = (Current -> Right is not null) ? Current -> Right : Current -> Parent
  else                                    // done with the right subtree, keep climbing
    prev = Current; Current = Current -> Parent

Here is the complete solution for the problem. Please note that the author has emphasized the size of the data, so we will store the photographs in external memory only. Here are the background and assumptions behind the algorithm: we will use a B+ tree to implement this, where all the internal nodes only store the indexes and all the terminal nodes store the real data (the photographs). The indexes will stay in main memory only, under the assumption that the indexes can fit in (primary) memory; if not, we have to save the indexes to secondary memory as well, which will add a few more I/O operations. We will keep two trees: one to store normal images and another to store favourites.

Here is the algorithm for Search(t):

Search(t)
  Figure out the memory block where the photographs are stored which are inserted after t, from both trees, and return the one having minimum time.

Here is the algorithm for NextFavourite(t):

NextFavourite(t)
  Figure out the memory block where the photographs are stored which are inserted after t, from the favourites tree, and return the next image.

Here is the algorithm for Insert(x,t,m):

Insert(x,t,m)
  Based upon m, decide the tree;
  Figure out the memory block where the photographs can be stored with t time.
Node = T.Root;Directive = Next_Comparison_Directive(null); // return year while(true) If Node.IsDevisionPresent Node = Node.Child_a ? Node.Child_a.Year = t.Year(can use BST to figure out the child, as not Year have the child) if(Node is null or no node exists, add a new child to parent node with directive value, and insert this new data into the block of this new node). Directive = Next_Comparison_Directive(Directive); // return in following order Year > Month > Day > Hour > Minute > Second > MillSecond as so on else read Node.Photograph block from secondary memory(we may have to read two blocks) break; insert the entry at proper location in bst. while(node is full) // means the block size is full set Node.IsDevisionPresent = true; figure out distinct count of Directive time units from the block; Create one child for each, and set there IsDevisionPresent = false; set the list of photographs into each of the child based upon the Directive. If any of the node is full(actually there will be one only) set node = node.child which is full. else save all the nodes block to secondary memory. break; Complexities Search Time : Constant(which is the search time) + 4.Log(M), where M is the a memory block size + 4 I/O NextFavourite Time : Constant(which is the search time) + 2.Log(M), where M is the a memory block size + 2 I/O Insert Time : Best case : Constant(which is the search time) + Log(M), where M is the a memory block size + 2 I/O, which occur most of the time, Node spiting case : Constant(which is the search time) + Log(M), where M is the a memory block size + M.I/O With little modification, we can reduce Node spiting case Complexities to the best case, we can maintain Min and Max insert time of photos which are residing in a block at last internal node(which are just above the terminal node) Space : O(N) : size of the input in secondary memory + constant size of internal memory. 
Here is what I think of a solution for this problem. First we define the structure of a straw.

Class Straw {
  int X_1, Y_1; // the first coordinate of the straw
  int X_2, Y_2; // the second coordinate
}

Now we are given a list of N straws as input. Let us first define whether two straws are connected to each other or not (we can treat this as the problem of figuring out whether two line segments in 2D intersect/touch each other or not). We will use the cross product to figure this out.

bool IsIntersect(Straw a, Straw b)
  Define P1 = (a.X_1, a.Y_1); P2 = (a.X_2, a.Y_2);
         P3 = (b.X_1, b.Y_1); P4 = (b.X_2, b.Y_2);

We can use the following cross products to figure out whether these two straws intersect/touch each other or not. If Cross_Product(P1P2, P1P4) and Cross_Product(P1P2, P1P3) are opposite to each other, and Cross_Product(P3P4, P3P1) and Cross_Product(P3P4, P3P2) are also opposite to each other, then these two straws are intersecting. Otherwise, if any of the cross products is zero — which is the case when the two straws touch each other or lie on a single line — then we just check the vertices: whether any one of them ((P1 or P2) or (P3 or P4)) has x and y coordinates lying between the other line's coordinates.

Now we will start the actual execution. We will keep three arrays of indexes: one which will have the indexes of straws in non-decreasing order of x coordinate, and the other two will have non-decreasing order of the y coordinate of the first and second vertices. Now we will figure out the connected components. Please note that any straw from one connected component will only touch/intersect other straws of the same component.
Here is the algorithm:

A <- input straws
Index_x <- indexes in non-decreasing order of min of x (from both the vertices of a straw)
Index_y_1 <- indexes in non-decreasing order of y_1
Index_y_2 <- indexes in non-decreasing order of y_2

In Index_x, we also keep one flag to tell whether something is left to try or not. Keep a queue Q = empty (a normal queue).

while (something left in Index_x to try)
  take the first untried index, call it j, and set its tried flag to true;
  figure out the list of straws which may intersect/touch straw j:
    we can do this by first finding the list of untried straws using Index_x, where the Index_x straw's min x <= straw j's max x;
    then we also find the list of untried straws using Index_y_1 and Index_y_2, taking any straw whose first or second y coordinate lies between straw j's min and max y coordinates;
  we will put all these straws in Q
  while (Q is not empty)
    straw_k <- Dequeue();
    if straw_k and straw_j are connected, then insert k into an array, and set k as tried
    figure out all the possible untried straws which may intersect with k using the above logic and insert them also into the queue
  insert a separator into the array to differentiate between connected components.

Print the connected components; any two straws from a connected component are touching/intersecting each other directly or indirectly. The complexity is: Time: O(n * log(n)) for sorting + O(n * k * log(n)), where k is the degree of connectivity of the straws; Space: O(n), or more precisely 5N.

I think we don't have to actually rotate the string; we can just use index updates to treat the string as rotated for users. Here is an example:

A <- input string
Current start index = 0, end index = A.length

If the user asks us to rotate the string left by 2, we can update the start_index to 2 and the end_index to 1; the next index calculation can be done with a modulo operation.
This code assumes the string is an array of chars, but it will work fine with a normal string variable as well. Here is the complete set of the code in C#:

private static void PrintWithRotation(string inputStr, int rotationCount, string rotationSide)
{
    int length = inputStr.Length;
    Console.WriteLine("Printing the input string with {0} {1} rotation", rotationCount, rotationSide);
    rotationCount = rotationCount % length;
    int startPoint = rotationSide == "left" ? rotationCount : length - rotationCount;
    for (int i = startPoint; i < length; i++)
    {
        Console.Write(inputStr[i]);
    }
    for (int i = 0; i < startPoint; i++)
    {
        Console.Write(inputStr[i]);
    }
    Console.WriteLine();
}

The optimal single-threaded solution can be described something like this: implement a routine which, given the l elements between indexes i and i+l of an array, sorts this l-size subarray using an in-place l x log(l) time algorithm (options are: heap sort, or quick sort with a proper pivot). Once we have this routine, we can call it n/k times to sort the whole array as explained by @pc. I am presenting the multithreaded algorithm which can solve this problem in n/r x log(k) time, with r as the maximum number of threads the system can allocate to this program.

A <- input array, k is the condition as per this question

Step 1:
do parallel for j = 0; j < n; j = j + 3k
  sort(A, j, j+3k-1)

Step 2:
do parallel for j = 2k; j < n; j = j + 3k
  sort(A, j, j+2k-1)

I haven't considered the edge cases in both the steps for simplicity, and by the way they can only add up to O(k x log(k)) more complexity to the total complexity. One can easily check that given r threads, both the steps will only take O(n/r x log(k)) time.
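As a point of comparison, the standard single-threaded approach to this problem keeps a sliding min-heap of k+1 elements for O(n log k) total time. Here is a quick sketch (in Python rather than the C# used elsewhere in this thread, and a different technique from the parallel block sorting above):

```python
import heapq

def sort_k_sorted(a, k):
    """Sort a list where each element is at most k positions from its
    final place, using a sliding min-heap of size k + 1."""
    heap = a[:k + 1]
    heapq.heapify(heap)
    out = []
    for x in a[k + 1:]:
        # The smallest of any k+1 consecutive candidates must be final.
        out.append(heapq.heappushpop(heap, x))
    out.extend(sorted(heap))  # drain the remaining k + 1 elements
    return out

print(sort_k_sorted([2, 1, 4, 3, 6, 5], k=1))  # [1, 2, 3, 4, 5, 6]
```

The heap never holds more than k+1 elements, which is why each of the n push/pop operations costs only O(log k).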
Here is the implementations in c# We first define the the element class which will have queue element public class Element { public int priority; public int value; } once we have defined the element class, we initialise the queue, and define some common parameters for the queue public static Element[] minQueue { get; set; } public static int queueCurrentSize { get; set; } public static int maxQueueSize { get; set; } Then we ask use to input the max size of the queue, as we have to define some size for it, although there exits some implementations where the element is added to the queue on the fly, but we have to bear the computational burden, which is heigher, so, mostly you will find that the queue implementation need the size. After this, we ask user to take an action which he/she want to perform on the queue Console.WriteLine("Following Action can be taken on the priority queue"); Console.Write("Extract Min(Choose Action : EM){0}Insert(Choose Action : I){0}Get Min(Choose Action : GM){0}Queue Size(Choose Action : QS){0}Print Queue(Choose Action : PQ){0}Clear Queue(Choose Action : CQ){0}Exit(Choose Action : Exit){0}Choose Your Action : ", '\n'); while (true) { string action = Console.ReadLine().Trim(); if (action.Equals("EM", StringComparison.InvariantCultureIgnoreCase) || action.Equals("I", StringComparison.InvariantCultureIgnoreCase) || action.Equals("GM", StringComparison.InvariantCultureIgnoreCase) || action.Equals("QS", StringComparison.InvariantCultureIgnoreCase) || action.Equals("PQ", StringComparison.InvariantCultureIgnoreCase) || action.Equals("CQ", StringComparison.InvariantCultureIgnoreCase) || action.Equals("Exit", StringComparison.InvariantCultureIgnoreCase)) { return action.ToUpperInvariant(); } else { Console.WriteLine("Choose correctly please"); } } Once we have the input action from the user, we can use the switch statement to choose between actions and make correct function call actionTaken = GetUserAction(); switch(actionTaken) { case "EM" : 
        Extract_Min();
        break;
    case "I":
        Insert();
        break;
    case "GM":
        Get_Min();
        break;
    case "QS":
        Console.WriteLine("Queue current size is : {0}", queueCurrentSize);
        break;
    case "PQ":
        Print_Queue();
        break;
    case "CQ":
        Clear_Queue();
        break;
    case "EXIT":
        Console.WriteLine("Exiting");
        return;
    default:
        Console.WriteLine("Exiting due to some problem");
        return;
}

Here are the implementations of each of these functions.

1) Clearing the queue:

private static void Clear_Queue()
{
    Console.WriteLine("Clearing the Queue");
    queueCurrentSize = 0;
}

2) Printing the queue:

private static void Print_Queue()
{
    Console.WriteLine("Queue contains following elements");
    for (int i = 0; i < queueCurrentSize; i++)
    {
        Console.WriteLine("(Value : {0}, Priority : {1})", minQueue[i].value, minQueue[i].priority);
    }
}

3) Getting the minimum-priority element:

private static void Get_Min()
{
    if (queueCurrentSize > 0)
    {
        Console.WriteLine("The min priority element is : (Value : {0}, Priority : {1})", minQueue[0].value, minQueue[0].priority);
    }
    else
    {
        Console.WriteLine("The queue is empty");
    }
}

4) Extracting the minimum (i.e. deleting the minimum-priority element):

private static void Extract_Min()
{
    if (queueCurrentSize > 0)
    {
        Element minElement = minQueue[0];
        minQueue[0] = minQueue[queueCurrentSize - 1];
        queueCurrentSize = queueCurrentSize - 1;
        MaintainTopToBottomQueue();
        Console.WriteLine("The extracted min priority element is : (Value : {0}, Priority : {1})", minElement.value, minElement.priority);
    }
    else
    {
        Console.WriteLine("The queue is empty");
    }
}

private static void MaintainTopToBottomQueue()
{
    int parent, lowest, leftChild, rightChild;
    parent = 0; leftChild = 1; rightChild = 2;
    Element temp = new Element();
    while (leftChild < queueCurrentSize)
    {
        if (minQueue[parent].priority <= minQueue[leftChild].priority)
        {
            lowest = parent;
        }
        else
        {
            lowest = leftChild;
        }
        if (rightChild < queueCurrentSize && (minQueue[lowest].priority > minQueue[rightChild].priority))
        {
            lowest = rightChild;
        }
        if (lowest == parent)
        {
            break;
        }
        else
        {
            temp = minQueue[parent];
            minQueue[parent] = minQueue[lowest];
            minQueue[lowest] = temp;
            parent = lowest;
            leftChild = 2 * parent + 1;
            rightChild = 2 * parent + 2;
        }
    }
}

5) Inserting new elements:

private static void Insert()
{
    if (queueCurrentSize < maxQueueSize)
    {
        int value, priority;
        string strValue, strPriority;
        Console.WriteLine("Please enter the details of the element(write exit to go to main queue options)");
        Console.Write("Value : ");
        while (true)
        {
            strValue = Console.ReadLine().Trim();
            if (strValue.Equals("exit", StringComparison.InvariantCultureIgnoreCase))
            {
                return;
            }
            else if (!int.TryParse(strValue, out value))
            {
                Console.WriteLine("Enter correctly please");
            }
            else
            {
                break;
            }
        }
        Console.Write("Priority : ");
        while (true)
        {
            strPriority = Console.ReadLine().Trim();
            if (strPriority.Equals("exit", StringComparison.InvariantCultureIgnoreCase))
            {
                return;
            }
            else if (!int.TryParse(strPriority, out priority))
            {
                Console.WriteLine("Enter correctly please");
            }
            else
            {
                break;
            }
        }
        Element element = new Element { value = value, priority = priority };
        queueCurrentSize = queueCurrentSize + 1;
        minQueue[queueCurrentSize - 1] = element;
        MaintainBottomToTopQueue();
    }
    else
    {
        Console.WriteLine("Queue is full, please extract some entries first");
    }
}

private static void MaintainBottomToTopQueue()
{
    int parent, child;
    child = queueCurrentSize - 1;
    Element temp = new Element();
    while (child > 0)
    {
        parent = Convert.ToInt32(Math.Floor((1.0 * child - 1) / 2));
        if (minQueue[parent].priority > minQueue[child].priority)
        {
            temp = minQueue[parent];
            minQueue[parent] = minQueue[child];
            minQueue[child] = temp;
            child = parent;
        }
        else
        {
            break;
        }
    }
}

We can add more functionality to the queue, such as increasing/decreasing its size, heap-sorting the contents, and so on.
Here is the concept: start calculating the sum from the leaf nodes and move toward the root node; maintain a minimum-weight variable together with a node variable that stores the minimum-weighted node.

int minweight(node curr)
{
    int left = 0, right = 0;
    if (curr)
    {
        left = minweight(curr->left);
        right = minweight(curr->right);
        if (MIN > left + right + curr->data)
        {
            MIN = left + right + curr->data;
            NODE = curr;
        }
        return left + right + curr->data;
    }
    return 0;
}

output NODE

Complexity: O(n) time + O(1) space, but it requires an O(height of tree) run-time stack.

We create a tree data structure: a general tree with at most 26 children per node, where each node also stores a bool variable (telling whether a number is present at this node or not) and a string (the contact number along with its code), plus a choose-child function (which uses a char to identify the child). Here is the insert method: we start at the root node and move from the root toward the children according to each char. For example, if the user's name is "sonesh", then from the root node we go to the 's' child, from the 's' node we go to the 'o' child, and so on until we reach the 'h' node. If at any point the required child does not exist, we create that child and move to it. At the final node we set the bool variable to true and set the string variable to sonesh's number. We can easily extend this to allow multiple contacts with the same name, or just as easily restrict it; here we can also consider "space" as a char. Get-contact can be implemented the same way, very easily. Both insert and get walk one node per character of the name, so for bounded-length names both are effectively O(1). This question may be asking for a less-than-O(n) space algorithm.
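The contact trie described in the answer above can be sketched in a few lines of Python; a dict of children stands in for the fixed 26-slot array, and the class and method names are my own:

```python
class ContactTrie:
    """One node per character; a non-None number marks the end of a contact."""
    def __init__(self):
        self.children = {}   # char -> ContactTrie
        self.number = None

    def insert(self, name, number):
        node = self
        for ch in name:                     # walk/create one child per char
            node = node.children.setdefault(ch, ContactTrie())
        node.number = number

    def get(self, name):
        node = self
        for ch in name:
            if ch not in node.children:
                return None                 # no such contact
            node = node.children[ch]
        return node.number

book = ContactTrie()
book.insert("sonesh", "+91-12345")
print(book.get("sonesh"))   # +91-12345
```

Both insert and get touch one node per character, so the cost is proportional to the name length, as noted above.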
Here is the algorithm. Let the node definition be:

struct Node
{
    int data;
    struct Node *left;
    struct Node *right;
};

and the tree be struct Node *Root;. Now we do the following:

int ISBST(Node *curr, bool dir)
{
    int max = ISBST(curr->left, 0);
    int min = ISBST(curr->right, 1);
    if (max <= curr->data <= min)   // here we also have to handle one-child nodes and leaf nodes
        if (dir == 0)               // left child
            return min;
        else                        // dir == 1, right child
            return max;
    else
        stop all calls and report "not a BST";
}

We call this function with curr as root->left and root->right, check the root itself, and report. The complexity of this algorithm is O(n) time + O(1) space but, like the quicksort algorithm, it also requires O(height of tree) space on the run-time stack (which is generally not counted). We can build a better implementation of this algorithm.

Run one more for loop over the array and check how many times 7 occurs; if it is more than half, then 7 is the right answer, otherwise not. Note: when I write O(n) + O(1), it means O(n) time complexity and O(1) space complexity.

Can you please read the algorithm once again? M[1] is not that number; it is A, which may be M[1] or may be some other element. Like here: 1st step: C = 1, A = 5; 2nd step: C = 0, A = 6, which makes C = 1, A = 7; 3rd step: C = 2, A = 7; 4th step: C = 3, A = 7; 5th step: C = 4, A = 7. Now check whether A = 7 is that number, and in this example it is. Here is the concept: if a number occurs more than half the time, then if we increment a counter when that number comes and decrement the counter when any other number comes, at the end the counter will have a positive value. Even in the worst case the counter will be +1, when every other number cancels one occurrence of the valid number; in the general case, the other numbers can also cancel each other's effect.
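The counter idea sketched above is the Boyer-Moore majority vote; a compact Python version including the verification pass mentioned in the answer:

```python
def majority_element(arr):
    """Return the element occurring more than len(arr)//2 times, else None.
    One voting pass plus one verification pass: O(n) time, O(1) space."""
    count, candidate = 0, None
    for x in arr:
        if count == 0:
            candidate = x                 # adopt a new candidate
        count += 1 if x == candidate else -1
    # second pass: the surviving candidate must still be verified
    if candidate is not None and arr.count(candidate) > len(arr) // 2:
        return candidate
    return None

print(majority_element([5, 6, 7, 7, 7, 7, 7]))   # 7
print(majority_element([1, 2, 3]))               # None
```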
Use a counter (C = 0) and an integer variable (A); M[1 to n] is the given array.

C = 1; A = M[1];
for i = 2 to n
    if M[i] == A, C++;
    else C--;
    if (C == 0)
        i++;
        if (i > n) output -1;
        C = 1; A = M[i];
if C > 0, check whether A is that number; else output -1.

Complexity: O(n) time + O(1) space.

@showell: if the numbers have any kind of relation between them, then they are not random. The term random is defined this way: we can't predict them, and they must not follow any pattern even when someone draws infinitely many. There is no way to generate perfect random numbers; all the random number generators I mention above generate only pseudo-random numbers, but since our computers are not powerful enough to find any patterns in these pseudo-random numbers, we generally accept them.

A simple solution is:

for i = 2 to n
    for j = i-1 to 0
        sum the elements from j to i and from i to 2*i-j and check whether the sums are equal

This requires O(n*n) time and O(1) space, similar to the maximum-length palindrome problem.

We create the following data structure: a B+-tree-like structure in which we use the date to create children; at the first level we use the year, at the second level the month, at the third level the day, and then the hours. These levels are standard; we then join the hour nodes with a linked list so that we can move easily from one hour to the next. After that we create categories for the URLs (they may be domains); here we actually create a trie-kind of structure to store the URLs in minimum space and to improve search. The first part is used to find the range we have to search over; once we find the interval in the tree, we walk it and print the output.

One simple solution is: generate a pseudo-random number using any method (such as a linear congruential generator or a Fibonacci generator), or use an inbuilt method; call it x. Hash it; if it is already present, try again, otherwise store it in the array and set the hash entry to true. Obviously this requires O(n) space, and the numbers will not follow a uniform distribution.
Here is the algorithm. K is the given number.

- Sort the data A[1 to n]: O(n log n).
- for i = 1 to n, apply the two-number problem, looking for a pair whose sum is K - A[i] in the data set of all numbers except A[i]: O(n) each.

The total time complexity comes to O(n*n).

We create a skip list: we store the data page-wise in a linked list, and above it we create a tree-kind of structure (google "skip list" for more info); here the insertion time is O(log(#pages)) per page. Insertion procedure: each page is kept at least half full. If we find a page with room, we just insert into it; if a page ends up with less than half its capacity filled, we merge it with a nearby page, or, if neither nearby page has space, we take items from one of them so that both end up more than half full.

I solve the problem in the following way. Create a B+ tree using dates as the separating elements: the first level is for the year, the second for the month, and the third for the day; at the day level, store each URL in a doubly linked list, and alongside the linked list use a hash table to find a URL directly. The hash table is used to maintain the constraint that each URL can appear only once. (Actually this constraint does not always hold; in Google Chrome, for example, they don't enforce it.) Now whenever a user clicks on a URL, we first check whether the clicked URL is present. If it is not present, we insert it into the tree and create a new entry in the hash table; it is like having two indices on a database, with the same data under both indices. If the URL is already present, we remove that element from the linked list, create a new node at the front, and join it in. So this requires constant time for insertion, searching, deletion, etc.
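The inner step of the first answer, finding two numbers that sum to K - A[i], is the classic two-pointer scan over the sorted array; a sketch (the function names and the skip parameter are my own):

```python
def two_sum_sorted(arr, target, skip=-1):
    """Indices i < j with arr[i] + arr[j] == target in a sorted array,
    optionally excluding one index; O(n) time."""
    i, j = 0, len(arr) - 1
    while i < j:
        if i == skip:
            i += 1
            continue
        if j == skip:
            j -= 1
            continue
        s = arr[i] + arr[j]
        if s == target:
            return i, j
        if s < target:
            i += 1
        else:
            j -= 1
    return None

def three_sum(arr, k):
    """Three elements summing to k: O(n log n) sort plus an O(n^2) scan."""
    arr = sorted(arr)
    for i, a in enumerate(arr):
        pair = two_sum_sorted(arr, k - a, skip=i)
        if pair:
            return a, arr[pair[0]], arr[pair[1]]
    return None

print(three_sum([1, 2, 3, 4, 5], 12))   # (3, 4, 5)
```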
You are right, @showell. In general we have two different organizations of memory: shared memory and distributed memory. With shared memory we don't need to pass anything, since all cores share the same address space, so the above solution is fine there. But if we have a distributed system, where each core has its own memory address space, then we work via MPI: each core actually has to send its data to another core, so there are two types of cost: 1) computation cost, and 2) communication cost. The computation cost is the same, but the communication cost varies with how the distributed system is implemented; for example, the cores might all be connected by a common bus, or they may be connected over a network, and the software implementation also affects the actual cost. What you are describing in the last paragraph actually comes under vector computation: there we read not a single data item but a vector of data items, and in the Message Passing Interface, too, we optimize in terms of packet size.

OK, loler's solution is essentially saying the same thing; he uses another way to say it, with these 3 states: -1 means the element is among the first chosen k numbers, 1 means it is chosen by the next r numbers, and 0 means it is not chosen. Yes, I didn't write the code here; I just explained the algorithm and the property of the problem.

Simple solution: let A[1 to 26] be an array where all entries are zero except the ones given as input; if some char repeats, the corresponding entry holds the count. For example, here f's entry will be 2 but a's entry will be 1. Read the words one by one:

ans = 0;
for each word (w):
    i = 0; flag = false;
    while (i++ < length of word)
        if (A[w[i]]) A[w[i]]--;
        else { flag = true; break; }
    if flag, go on reading;
    else { ans = max(ans, length of current word); break; }

This requires O(n) internal memory operations + O(n/B) file I/O, where B is the block size (one block is read into memory at a time).
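The letter-count check in the last answer translates directly into Python with a Counter; unlike the sketch above, this version leaves the available-letter table intact between words (the function name is my own):

```python
from collections import Counter

def longest_buildable(letters, words):
    """Longest word whose letters all fit in the available letter multiset."""
    avail = Counter(letters)
    best = ""
    for w in words:
        need = Counter(w)
        # every required character must be available with enough multiplicity
        if all(avail[c] >= n for c, n in need.items()) and len(w) > len(best):
            best = w
    return best

print(longest_buildable("affle", ["fa", "ale", "waffle", "flea"]))   # flea
```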
Now if we do preprocessing, we form a trie in which each node except the leaves contains a single element; but then we need to try every possible search path. For example, at the root node we may need to go to all possible children (at most 7 in the above example), at the second level we choose among at most 6, or 7 in f's case, and so on, so the complexity becomes O(2*7!) for this example. Although this is a constant, in actual computation it is not small.

Explanation of why we need some sort of relation between cores and threads per core: nowadays each core has 2 hardware threads, so we can run 2 threads per core, but more gives us no benefit in terms of performance, because each core can run only two threads in parallel (both mapped to the existing hardware threads). When we create more threads, we increase the overhead of their creation, their synchronization, coherency and many other things, as well as the CPU's switching time, because the same work is now done by more threads. So creating more than 2 threads per core actually reduces performance (in terms of actual time). To increase performance we increase the number of cores; here is the algorithm to sum n numbers with n/2 cores. Let A[1 to n] be the given array; we sum in binary-tree form, for example:

...................23
..........12.............11
...6...........6........11
1...5.....2.....4...5...6

At the i-th thread, i.e. on the ceil(i/2)-th core:

read(A[i], a);
write(a, B[i]);
for h = 1 to log(n)
    if i <= n/2^h
        read(B[2*i-1], x);
        read(B[2*i], y);
        z = x + y;
        write(z, B[i]);
if i == 1, print B[1];

This requires O(log(n)) time and O(1) space per core; overall it requires O(n) time + O(n) space, but here (in a parallel program) we consider the complexity per core. Now, if the number of cores is limited, so that we can only create, say, P threads, what then?
Then we create n/P sets of numbers and sum each set in its own thread, which requires O(n/P) time; then we apply the above procedure, which requires O(log(P)); so the total complexity becomes O(n/P) + O(log(P)).

We go by tree. We consider a tree in which each node can have at most 4 children, and each child covers a range in x and y which decides whether a value lies in this node or another one; we subdivide up to the point where each child holds only a single point. And we know that here the degenerate linear case does not exist, so search, delete and insert complexities always go down to O(log n).

The question requires DP; here is the algorithm. Let me first draw the matrix for the above example:

2 1 0 0 0
1 2 0 0
3 1 1
4 2
2

and the output is the upper-rightmost entry, at index i = 0, j = n. The algorithm is the same as the matrix-chain-multiplication DP program, with one difference: here you apply abs(sub(M[i][k], M[k][j])) instead of addition and multiplication; the min function is used here again. Complexity: O(n*n) time + O(n*n) space. If we also want to know the sign of each number in the minimum-sum case, we need one more matrix of size n*n.

I think the problem is NP-complete and we don't have any polynomial solution; we can only check all combinations. What we need to do is choose k numbers and sum them, choose r numbers from the remaining ones and sum them, and compute the difference. Now k varies from 1 to N and r varies from 1 to N-k, so the complexity goes exponential.

Let A = max(abs(x), abs(y)); Num = (2*A+1)*(2*A+1);
If (x,y) lies in the first quadrant:
    if x > y, Num = (2*(A-1)+1)*(2*(A-1)+1) + x - y;
    else if x < y, Num = Num - (y - x);
else if (x,y) lies in the 2nd quadrant, define it similarly there, and so on for the other quadrants.

This is, in effect, the function: you give it any (x,y) and it returns the number corresponding to it in this matrix. The complexity is O(1). And here there is no backward motion: A moves only forward, and B moves only forward as well.
But here again, if we go via your method, we will need more than 12 steps to complete the solution: n + n/2, where n is the total number of monsters (here I am assuming 10 A monsters and 10 B monsters, so n is 20). For example, 6 (= n) steps take us from here:

AAA*BBB
AA*ABBB
AABA*BB
A*BAABB
ABBAA*B
*BBAAAB
BBBAAA*

and now 3 (= n/2) steps take us from here:

BBBAA*A
BBBA*AA
BBB*AAA

So the total is 3*n/2.

We use a general-tree kind of structure in which the root node does not contain any data but contains between 1 and 9 pointers, depending on the number distribution. For example, suppose India has numbers starting with 9, 8 and 7; then the root node contains 3 pointers to the subtrees, each containing an int data point and at most 10 pointers. Search: O(1); insert: O(1); delete: O(1). If the numbers can also come from different countries, then at the root node we add an additional layer that holds information about the country code; this does not increase the complexity.

Your assumption is right only when the interviewer asks us to make an assumption; otherwise the probability distribution is required. And once we know the probability distribution, there is nothing left in the 2nd case, as I have already written above.

The algorithm is the following. Let A[i = 1 to N] be the daily stock price process, and initially we don't own any stock.

start = 1;
while (start <= N)
    find the first element whose next element is bigger, and set it as the buying price of the stock.
    find the first element whose next element is smaller, and set it as the selling price.
    compute the profit, add it to the sum, and set the new start to the selling element's index + 1.

For example, in this case we will have {3,7}, {4,11} and {4,8} as the buying/selling tuples, and the net profit is 4 + 7 + 4 = 15.

The algorithm is here: take two pointers, P and Q; move P twice as fast as Q, and move Q one step at a time.
start with P = Head = Q;
Q = Q->next; P = P->next->next;   // also check the termination condition (the case when there is no loop)
repeat until P == Q;

Complexity: O(length of linked list). The proof is here: it is easy to see that if the linked list has a loop, then after some time both pointers will be inside the loop. Consider the time when they are both in the loop; let the first pointer be at the I-th node and the second pointer at the J-th node, and assume the pointer at the I-th node is the faster one. The faster pointer requires J-I+1 moves to reach the J-th node, and we know the first pointer is moving 1 node per step relative to the second one, which means it moves one node nearer to the second pointer on every step. So in J-I+1 steps it reaches the second pointer. Hence proved.

#include <omp.h>

omp_set_num_threads(2);
for i = 1 to 6 {
    #pragma omp parallel
    {
        #pragma omp sections
        {
            #pragma omp section
            print "Hello ";
            #pragma omp section
            print "World ";
        }
    }
    #pragma omp barrier
}

Here is the algorithm:

minima(x1, x2)
    mid = (x1+x2)/2;   M = convex(mid);
    lmid = (x1+mid)/2; L = convex(lmid);
    rmid = (mid+x2)/2; R = convex(rmid);
    if M < R && M < L, then x1 = lmid; x2 = rmid;
    else if M < L && M > R, then x1 = lmid;
    else if M < R && M > L, then x2 = rmid;
    if (x2 - x1) < 1E-10, output (x1+x2)/2 and stop;
    otherwise go to line 2;

If the function is non-decreasing or non-increasing, then we also use <=. Complexity: depends upon the accuracy.

@showell: if the tree is

  2
 / \
1   3

and I need to find the nodes between 1 and 3, then the root node, which is 2, is the node from which we start the inorder algorithm. If the tree is

      6
    /   \
   4     8
  / \   / \
 2   5 7   10

then in this question, if the range is [1 5], the root node, 6, does not belong to [1,5] and is greater than 5, so we go left. Now 4 is in [1,5], so we consider 4 as the root and start the inorder traversal. How? Here is how: if we consider 4 as the root, the new tree is

  4
 / \
2   5

so we can see that 2, 4 and 5 are all inside [1,5].
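The two-pointer loop check described above is Floyd's tortoise-and-hare; a runnable Python sketch:

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.next = None

def has_cycle(head):
    """Floyd's cycle detection: O(n) time, O(1) space."""
    slow = fast = head
    while fast and fast.next:
        slow = slow.next           # one step
        fast = fast.next.next      # two steps
        if slow is fast:           # they can only meet inside a loop
            return True
    return False

# build 1 -> 2 -> 3 -> back to 2
a, b, c = Node(1), Node(2), Node(3)
a.next, b.next, c.next = b, c, b
print(has_cycle(a))   # True
c.next = None
print(has_cycle(a))   # False
```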
Here is the algorithm. Let string 1 be A[n x 1] and string 2 be B[m x 1]. Now take M[n+1][m+1]:

for i = 0 to n, M[i][0] = 0;
for j = 0 to m, M[0][j] = 0;
for i = 1 to n
    for j = 1 to m
        if A[i-1] == B[j-1], M[i][j] = 1 + M[i-1][j-1];
        else M[i][j] = max(M[i-1][j], M[i][j-1]);
output M[n][m];

Yes, that's why they ask for a subsequence; if they wanted it contiguous, they would have asked for a substring.

Standard solution: use dynamic programming with

M[i][j] = { 1 + M[i-1][j-1]             if A[i] == B[j]
          { max(M[i][j-1], M[i-1][j])   otherwise

Hint: use BFS here. Here is the algorithm. Let A be the boolean matrix; consider each A[i][j] as a node with the bool as its value, connected to its 2 to 4 neighbouring nodes. Now run BFS on it. Here BFS is just used for visiting nodes with the value true: we give BFS a source node whose value is true. Here is sample code:

for i = 1 to n
    for j = 1 to m
        if A[i][j] is not visited and is true
            count++;
            run BFS with source node A[i][j]

BFS only visits new nodes having the value true. Complexity: O(m*n). The same algorithm can also be used to calculate the largest pond of true values, and for other related questions.

Use two indexes/pointers, say P1 and P2. First advance P1 by n lines; if it already reaches the end of the file, just print the whole file using the 2nd pointer. If not, advance both pointers one line at a time, P1 then P2; when P1 reaches the end of the file, print the lines from P2 onward. O(n) time.

This problem can be of two types.

First: all the integers in the given input lie between 0 and 9. Here is the algorithm:

- In a first pass over the input string, find the sum of all the integers by testing each char with isnum() and adding its integer value into a sum variable.
- Now we keep one pointer at the position given by the sum and another at the end of the input string; we read each integer/char pair and write it out at the first pointer, moving the second pointer by 2 and the first pointer by the value of the integer.
- Keep going with this process.

Second: the integers may have values greater than 9. Here is the algorithm:

- We use the atoi() function along with isalpha() to calculate the output string length:
-- isalpha() tells us whether we have reached the next char after the last integer,
-- atoi() gives us the current integer before the next char.
For example, if the current input is 110f10g, then isalpha() gives false and atoi() gives 110; then, with the help of isalpha(), we move our pointer along the input string.
- We apply the same concept while writing the output.

Time complexity: O(length of the output string).

Here is the solution for this problem - sonesh June 06, 2015
https://careercup.com/user?id=14962663
Colin Paul Adams wrote: > I have used the rng tool to generate a grammar for sitemap.xmap, from > sitemap-v04.dtd (patched version). I did this just to see how > difficult it was. As expected, it did not work automatically - I had > the same problem with dtd2xsd when trying to generate a WXS grammar. I am not sure which tool you refer to. I had no problems at all when using DTDinst. See the URL for Jing and DTDinst in the previous discussion on this topic: Re: DTD v. WXS v. Schematron v. RelaxNG > The problem is the DTD effectively has two namespaces - the map: > namespace and the global namespace. I guess as DTDs do not support > namespaces, the conversion tools do not attempt to cope with this. I presume that the draft sitemap DTD is using a workaround for the namespace lack, by having explicit element names which contain the "map:" prefix, e.g. <!ELEMENT map:components ... This workaround is then reflected in the generated RNG. Is that what you are referring to with your next statement? > Still, I was able to systematically correct the grammar, so I could > automate the process with an emacs macro or some such. > > The generated grammar is equivalent to the DTD grammar, and as such > fairly loose. I could go on to express the rules better now > (e.g. map:components should have at most one child of each type, but > in any order). I gather that you are suggesting to maintain a canonical RNG grammar, which we use for system validation. Then from that canonical RNG we can generate a WXS schema that can be used by any XML editors that cannot use RNG. That sounds like the way to go. We should compare the two RNG grammars that are generated from sitemap-v04.dtd to decide which one to base any further work upon. Or perhaps we should really write it from scratch, just using the generated ones for guidance. > Then perhaps write an ant validation task, or maybe > runtime validation through an coccon.xconf option? 
Did you see the demonstration that I provided in the abovementioned thread? It provides an Ant task for build-time validation. Also, Pete Royal followed up mentioning that validation of Configuration objects via Avalon is coming soon.

--David

---------------------------------------------------------------------
To unsubscribe, e-mail: cocoon-dev-unsubscribe@xml.apache.org
For additional commands, email: cocoon-dev-help@xml.apache.org
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200210.mbox/%3C1035519858.2524.732.camel@ighp%3E
There are different techniques for multithreading in Java. One can parallelize a piece of code either with the synchronized keyword, with locks, or with atomic variables. This post compares the performance of the synchronized keyword, ReentrantLock, getAndIncrement(), and continuous trials of get() and compareAndSet() calls. Several Matrix classes were created for the performance test, including a plain unsynchronized one. For the comparison, all cells are incremented 100 times, for different matrix sizes, synchronization types, thread counts and pool sizes, on a computer with an Intel Core i7 (8 logical cores, 4 of them physical), Ubuntu 14.04 LTS and Java 1.7.0_60.

This is the plain matrix class of the performance test:

/**
 * Plain matrix without synchronization.
 */
public class Matrix {
    private int rows;
    private int cols;
    private int[][] array;

    /**
     * Matrix constructor.
     *
     * @param rows number of rows
     * @param cols number of columns
     */
    public Matrix(int rows, int cols) {
        this.rows = rows;
        this.cols = cols;
        array = new int[rows][cols];
    }

    /**
     * Increments all matrix cells.
     */
    public void increment() {
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                array[i][j]++;
            }
        }
    }

    /**
     * Returns a string representation of the object which shows the row sum of each row.
     *
     * @return a string representation of the object.
     */
    @Override
    public String toString() {
        StringBuffer s = new StringBuffer();
        int rowSum;
        for (int i = 0; i < rows; i++) {
            rowSum = 0;
            for (int j = 0; j < cols; j++) {
                rowSum += array[i][j];
            }
            s.append(rowSum);
            s.append(" ");
        }
        return s.toString();
    }
}

For the other classes, only their increment methods are listed, since the remaining parts are the same for each matrix type.
Synchronized matrix:

public void increment() {
    for (int i = 0; i < rows; i++) {
        for (int j = 0; j < cols; j++) {
            synchronized (this) {
                array[i][j]++;
            }
        }
    }
}

Lock matrix:

public void increment() {
    for (int i = 0; i < rows; i++) {
        for (int j = 0; j < cols; j++) {
            lock.lock();
            try {
                array[i][j]++;
            } finally {
                lock.unlock();
            }
        }
    }
}

Atomic getAndIncrement matrix:

public void increment() {
    for (int i = 0; i < rows; i++) {
        for (int j = 0; j < cols; j++) {
            array[i][j].getAndIncrement();
        }
    }
}

Continuous trials of get() and compareAndSet() matrix:

public void increment() {
    for (int i = 0; i < rows; i++) {
        for (int j = 0; j < cols; j++) {
            for (; ; ) {
                int current = array[i][j].get();
                int next = current + 1;
                if (array[i][j].compareAndSet(current, next)) {
                    break;
                }
            }
        }
    }
}

Worker classes are also created for each matrix. Here is the worker class of the plain one:

/**
 * Worker for plain matrix without synchronization.
 *
 * @author Furkan KAMACI
 * @see Matrix
 */
public class PlainMatrixWorker extends Matrix implements Runnable {
    private AtomicInteger incrementCount = new AtomicInteger(WorkerDefaults.INCREMENT_COUNT);

    /**
     * Worker constructor.
     *
     * @param rows number of rows
     * @param cols number of columns
     */
    public PlainMatrixWorker(int rows, int cols) {
        super(rows, cols);
    }

    /**
     * Increments the matrix up to a maximum count.
     *
     * @see WorkerDefaults
     */
    @Override
    public void run() {
        while (incrementCount.getAndDecrement() > 0) {
            increment();
        }
    }
}

For a fair comparison, every test is repeated 20 times by default, and the average and standard error are calculated for each result. Since the test set has many dimensions (matrix type, matrix size, pool size, thread count and elapsed time), some features are shown aggregated in the charts.
These are the results (charts showing elapsed times for pool size/thread count pairs 2, 4, 6, 8, 10 and 12 were shown here).

Conclusion: the plain version runs fastest but, as expected, it does not produce correct results. The worst performance comes from synchronized blocks (when synchronizing on "this"). Locks are slightly better than synchronized blocks; atomic variables, however, are markedly better than all of them. When atomic getAndIncrement() and continuous trials of get() and compareAndSet() calls are compared, their performance turns out to be the same. The reason is easy to understand when the Java source code is checked:

/**
 * Atomically increments by one the current value.
 *
 * @return the previous value
 */
public final int getAndIncrement() {
    for (;;) {
        int current = get();
        int next = current + 1;
        if (compareAndSet(current, next))
            return current;
    }
}

It can be seen that getAndIncrement() is implemented as a continuous loop of get() and compareAndSet() in the Java (version 1.7) source code. The other results show the effect of pool size: when the pool size is less than the actual thread count, a performance problem occurs. So this performance comparison of multithreading in Java shows that when a piece of code has to be synchronized, performance is an issue, and the threads are used as in this test, one should try atomic variables first; the other choices are locks or synchronized blocks. This also does not mean that one of synchronized blocks and locks is always better than the other, because the JIT compiler, and whether a piece of code runs enough times to be compiled, both affect the outcome.
The source code for this performance comparison of multithreading in Java can be downloaded from here:
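The race the article guards against is language-independent. As an illustration only (the article's measurements are for Java; this sketch is Python), here is the lock-based variant in miniature:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:           # analogous to the ReentrantLock lock()/unlock() pair
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)   # 40000: no lost updates while the lock is held
```

Without the `with lock:` line the read-modify-write is no longer atomic and updates can be lost, which is exactly the incorrect-result behaviour the plain Matrix class exhibits.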
https://dzone.com/articles/performance-comparison
Declares an event.

__event method-declarator;
__event __interface interface-specifier;
__event member-declarator;

The keyword __event can be applied to a method declaration, an interface declaration, or a data member declaration. Depending on whether your event source and receiver are native C++, COM, or managed (.NET Framework), the following constructs can be used as events:

Native C++: method
COM: interface
Managed (.NET Framework): method or data member

A templated class or struct cannot contain events.

// Examples of native C++ events:
__event void OnDblClick();
__event HRESULT OnClick(int* b, char* s);

See Event Handling in Native C++ for sample code.

// Example of a COM event:
__event __interface IEvent1;

See Event Handling in COM for sample code. For information on coding events in the new syntax, see event (Visual C++).

// Examples of managed events:
__event ClickEventHandler* OnClick; // data member as event
__event void OnClick(String* s);    // method as event

When implicitly declaring a managed event, you can specify add and remove accessors that will be called when event handlers are added or removed. You can also define the method that calls (raises) the event from outside the class.

// EventHandling_Native_Event.cpp
// compile with: /c
[event_source(native)]
class CSource {
public:
    __event void MyEvent(int nValue);
};

// EventHandling_Managed_Event.cpp
// compile with: /clr:oldSyntax /c
using namespace System;

[event_source(managed)]
public __gc class CPSource {
public:
    __event void MyEvent(Int16 nValue);
};

When applying an attribute to an event, you can specify that the attribute apply to either the generated methods or to the Invoke method of the generated delegate. The default (event:) is to apply the attribute to the event.
// EventHandling_Managed_Event_2.cpp
// compile with: /clr:oldSyntax /c
using namespace System;

[attribute(All, AllowMultiple=true)]
public __gc class Attr {};

public __delegate void D();

public __gc class X {
public:
   [method:Attr] __event D* E;
   [returnvalue:Attr] __event void noE();
};
http://msdn.microsoft.com/en-us/library/cb1dzt8t.aspx
I'm working on a Python script that will be accessed via the web, so there will be multiple users trying to append to the same file at the same time. My worry is that this might cause a race condition where, if multiple users wrote to the same file at the same time, it just might corrupt the file. For example:

#!/usr/bin/env python
g = open("/somepath/somefile.txt", "a")
new_entry = "foobar"
g.write(new_entry)
g.close()

You can use file locking:

import fcntl

new_entry = "foobar"
with open("/somepath/somefile.txt", "a") as g:
    fcntl.flock(g, fcntl.LOCK_EX)
    g.write(new_entry)
    fcntl.flock(g, fcntl.LOCK_UN)

Note that on some systems, locking is not needed if you're only writing small buffers, because appends on these systems are atomic.
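To illustrate that last point: on POSIX systems, a file opened in append mode has O_APPEND set, so the kernel moves the file offset to the end and writes in one step. A minimal sketch (the path is just a placeholder; whether concurrent small writes can interleave still depends on the OS and filesystem):

```python
import os

# Placeholder path, for illustration only.
path = "/tmp/append_demo.txt"

# O_APPEND: before each write the kernel seeks to end-of-file atomically,
# so two processes appending at once cannot overwrite each other's data.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
try:
    os.write(fd, b"foobar\n")
finally:
    os.close(fd)
```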
https://codedump.io/share/Ov34gtw377UV/1/python-multiple-users-append-to-the-same-file-at-the-same-time
Hello, I'm trying to solve this problem: create a program which generates the Fibonacci series till a number 'n', where 'n' is entered by the user. For example, if the user enters 10, then the output would be: 1 1 2 3 5 8 (beginner). The problem is that the output is 0 1 1 2 3 5 8 13, but it should be 0 1 1 2 3 5 8. Can somebody help me?

#include <cstdlib>
#include <iostream>
#include <conio.h>
using namespace std;

int main()
{
    int a = 10, sum = 0, b = -1, c = 1;
    while (sum < a)
    {
        sum = (b + c);
        cout << sum << " ";
        b = c;
        c = sum;
    }
    getch();
    return 0;
}

P.S. The 'n' number is 'a'.
https://www.daniweb.com/programming/software-development/threads/393859/c-fibonacci-series-problem
DataGrid

/* run this code under .NET 1.1 and the label column 'Bla' is chopped off!
   I've tried everything to fix this! */
using System;
using System.Data;
using System.Windows.Forms;

public class Form1 : System.Windows.Forms.Form
{
    public static void Main()
    {
        Application.Run(new Form1());
    }

    public Form1()
    {
        this.ClientSize = new System.Drawing.Size(292, 266);
        this.Text = "Form1";
        DataGridTableStyle dgts = new DataGridTableStyle();
        DataTable dt = new DataTable();
        dt.Columns.Add(new DataColumn("bla", Type.GetType("System.Int32")));
        DataGrid dg = new DataGrid();
        dg.TableStyles.Add(dgts);
        dg.Parent = this;
        dg.SetDataBinding(dt, "");
        dgts.GridColumnStyles[0].Alignment = HorizontalAlignment.Right;
    }
}

NetFreak
Wednesday, September 29, 2004

I haven't seen any responses yet -- just an FYI -- it's not specific to you. I'm seeing it too. I haven't found a solution to the issue, but it is most definitely a repeatable issue. I've tried it on 3 different computers and they're all displaying the same behavior.

Sgt. Sausage
Thursday, September 30, 2004

I could reproduce the problem on .NET 1.1 SP1, but not resolve it.

Just me (Sir to you)
Friday, October 1, 2004

I figured out that if you add a pipe symbol to the end, it works perfectly (eg. HeaderText = "Blah|"). The pipe is perfectly obscured by the gridline, and the text itself is OK. I got this from a guy who suggested doing the same with a period on windowsforms.net. The period looks crappy, but the pipe is OK.

NetFreak
Friday, October 1, 2004
https://discuss.fogcreek.com/dotnetquestions/default.asp?cmd=show&ixPost=4414&ixReplies=3
Michael Galpin (mike.sr@gmail.com), Software architect, eBay
26 Aug 2008

Ruby on Rails has raised the bar in terms of rapid development of data-driven Web sites. The JRuby project is making Ruby faster and more scalable than ever. One of the great advantages to running Rails on the Java™ Virtual Machine is that you can leverage other Java libraries, like the Apache Derby embedded database. The combination of Derby, JRuby, and Rails allows for rapid prototyping of dynamic Web applications. Learn how to use these technologies together to help you prototype your next great idea.

System requirements

This article makes use of several cutting-edge technologies to allow for rapid Web development. Links for all of these can be found in the Resources section. We use JRuby V1.1.3 and Apache Derby, which are Java-based technologies and require the Java Development Kit (JDK) V1.4 or higher. It is highly recommended that you use a V1.6 JDK because Apache Derby is bundled with JDK V1.6. If you use a V1.4 or 1.5 VM, you can download Derby separately. V10.4.1.3 was used in this article, in combination with a V1.5.0_13 JDK. This article also uses Rails V2.1. Prior knowledge of Ruby and Ruby on Rails is assumed to get the most from this article.

Setup

Many modern Web application frameworks emphasize developer productivity, which is good because developer time is precious, right? However, one of the things often missed is setup time, including how complicated it is to go from a clean machine to a place where a developer can write and run code. This is not just important to a developer sitting at home playing around with a new technology but also to organizations that frequently hire new developers and want to see quick returns on their investments. Rails excels in this area, and with JRuby and Derby, things get even better.

Installing JRuby

We assume that you've installed the V1.6 JDK.
JRuby needs to know where your JDK is, so it follows the common convention of looking for an environment variable called JAVA_HOME that points to your JDK. You should also make sure that $JAVA_HOME/bin is in your path. Now you just need to download JRuby and unzip it wherever you like. It is recommended that you create an environment variable for this location (call it JRUBY_HOME) and put $JRUBY_HOME/bin on your path, as well. That's all you need to do for JRuby, but what about Rails?

Installing Rails

JRuby is a 100-percent implementation of Ruby that simply uses Java for its implementation instead of native code. It has everything that Ruby has, and that includes gems. To install Rails, you simply use gems, as shown in Listing 1.

$ jruby -S gem install rails
JRuby limited openssl loaded. gem install jruby-openssl for full support.
Successfully installed activesupport-2.1.0
Successfully installed activerecord-2.1.0
Successfully installed actionpack-2.1.0
Successfully installed actionmailer-2.1.0
Successfully installed activeresource-2.1.0
Successfully installed rails-2.1.0
6 gems installed
...

Notice the only real difference between using JRuby and native Ruby is that the gem command is prefixed with jruby -S. This simply tells JRuby to first look for the script in the $JRUBY_HOME/bin directory, which makes sure that you get JRuby's gem script. It is possible to have JRuby use an existing gem repository (i.e., share a gem repository with a native Ruby installation); all you need to do is set an environment variable. However, this is not advised. Most gems are written in Ruby and are compatible with JRuby. However, some are written in C++, and these are not compatible with JRuby. Similarly, some JRuby gems are written in the Java language and are not compatible with native Ruby. Running the above command will take a while, largely depending on your network connection speed.
Rails comes with a Web server, WEBrick, that is certainly not a production-quality Web server, but is perfect for rapid prototyping and development. Now we just need a database, namely Apache Derby.

Installing Apache Derby

Rails uses the ActiveRecord library to handle database access and object-relational mapping between database tables and Ruby object models. This is one place where things are a little different because we are using JRuby instead of Ruby. In the Java language, we have the Java Database Connectivity (JDBC) API for communicating with the database, and we want to leverage this in Rails. So we need an additional gem that will allow ActiveRecord to use JDBC. This includes database-specific information, so we need to install the gem specific to Derby.

$ jruby -S gem install activerecord-jdbcderby-adapter
JRuby limited openssl loaded. gem install jruby-openssl for full support.
Successfully installed activerecord-jdbc-adapter-0.8.2
Successfully installed jdbc-derby-10.3.2.1
Successfully installed activerecord-jdbcderby-adapter-0.8.2
3 gems installed
Installing ri documentation for activerecord-jdbc-adapter-0.8.2...
Installing ri documentation for jdbc-derby-10.3.2.1...
Installing ri documentation for activerecord-jdbcderby-adapter-0.8.2...
Installing RDoc documentation for activerecord-jdbc-adapter-0.8.2...
Installing RDoc documentation for jdbc-derby-10.3.2.1...
Installing RDoc documentation for activerecord-jdbcderby-adapter-0.8.2...

If you are a veteran of Rails, you probably realize that the above is not much different from native Ruby. Rails always needs a database-specific adapter. However, Rails comes bundled with the adapters for MySQL, PostgreSQL, and SQLite. If you have used Rails with another type of database, such as Oracle or IBM® DB2®, you have done something similar to the above. As mentioned, with JDK V1.6, Derby is already bundled and you are ready to code (or generate code).
If you are using JDK V1.5 or V1.4, you have a couple more steps. You need to download Derby and unzip it, then find the derby.jar and copy this to $JRUBY_HOME/lib. Now you have caught up to those using JDK V1.6 and may proceed.

Create an application

Ruby on Rails practically invented the term "convention over configuration," and it makes it so easy to create an application. Using JRuby does not affect this equation at all. You get to use the Rails generators to rapidly create your application stub.

>rails deadly -d sqlite3
   exists/unit
   create vendor
   create vendor/plugins
   create tmp/sessions
   create tmp/sockets
   create tmp/cache
   create tmp/pids
   create Rakefile
   create README
   create app/controllers/application.rb
   create app/helpers/application_helper.rb
   ...

Much of the output has been deleted, as this is all typical Rails output. One thing you will notice is that we used -d sqlite3, which tells the generator script to use SQLite as the database. This is an embedded database similar to Derby, and it is included with Rails V2.0 or higher, making it a "safe" option here. We will set up the Derby configuration in the next section. Just like with any Rails application, you can go ahead and start it up.

=> Booting WEBrick...
JRuby limited openssl loaded. gem install jruby-openssl for full support.
=> Rails application started on
=> Ctrl-C to shutdown server; call with --help for options
[2008-07-25 17:18:31] INFO  WEBrick 1.3.1
[2008-07-25 17:18:31] INFO  ruby 1.8.6 (2008-05-30) [java]
[2008-07-25 17:18:31] INFO  WEBrick::HTTPServer#start: pid=363524275 port=3000

And just like any other Rails application, we can bring it up in a browser by going to, as shown below. This is all standard Rails fare. As the welcome screen suggests, it is time to set up a database so we can start adding something to our application. Of course, this is where we specify that we are using Derby. Look at Listing 5 and see what a Derby configuration for Rails looks like.
development:
  adapter: jdbc
  driver: org.apache.derby.jdbc.ClientDriver
  url: jdbc:derby://localhost/deadly_development;create=true
  username: app
  password: app

test:
  adapter: jdbc
  driver: org.apache.derby.jdbc.ClientDriver
  url: jdbc:derby://localhost/deadly_test;create=true
  username: app
  password: app

production:
  adapter: jdbc
  driver: org.apache.derby.jdbc.ClientDriver
  url: jdbc:derby://localhost/deadly_production;create=true
  username: app
  password: app

There are a few things you will want to notice about this configuration. First, the adapter is just JDBC. Typically, you would specify a database-specific adapter here, like MySQL or Derby. Instead, we will go through a JDBC-based adapter. The URL specifies that we are using Derby. Notice the create=true parameters that will tell Derby to create the database if it does not already exist. Finally, notice the username and password. These are simply the defaults for Derby. Now that the database connections are configured, it's time to write some code. Well actually, it's time to let Rails write some code for us.

Scaffolding
If you have used Rails, chances are you have used its scaffolding or at least seen it in action. It is probably the main reason for the "wow" factor people associate with Rails, and why it has been adopted and copied by countless other frameworks. Most Rails experts are quick to tell you that scaffolding is not meant to be a production feature, but its usefulness for prototyping is unquestionable. It has evolved over the years, especially in Rails V2.0. Let's take a look at the generation command we will use in Listing 6.

>script/generate scaffold boat name:string captain:string crew:integer capacity:integer
   exists app/models/
   exists app/controllers/
   exists app/helpers/
   create app/views/boats
   exists app/views/layouts/
   exists test/functional/
   exists test/unit/
   exists public/stylesheets/
   create app/views/boats/index.html.erb
   create app/views/boats/show.html.erb
   create app/views/boats/new.html.erb
   create app/views/boats/edit.html.erb
   create app/views/layouts/boats.html.erb
   create public/stylesheets/scaffold.css
   create app/controllers/boats_controller.rb
   create test/functional/boats_controller_test.rb
   create app/helpers/boats_helper.rb
    route map.resources :boats
dependency model
   exists app/models/
   exists test/unit/
   exists test/fixtures/
   create app/models/boat.rb
   create test/unit/boat_test.rb
   create test/fixtures/boats.yml
   create db/migrate
   create db/migrate/20080727053100_create_boats.rb

This generates an ActiveRecord model, a controller with an appropriate route mapped to it, and several view templates. It also creates an initial database-migration script based on the parameters in the command (the name-value pairs corresponding to attributes of the model and the types of those attributes). To create our database, we simply use Rake to execute the database migration, as shown below.

>rake db:migrate
(in /Users/michael/code/aptana/workspace/deadly)
== 20080727053100 CreateBoats: migrating ======================================
-- create_table(:boats)
   -> 0.0395s
   -> 0 rows
== 20080727053100 CreateBoats: migrated (0.0407s) =============================

The database and the boats table are created at once, just like that. Notice that we did not have to start up a database server. When the JDBC driver tries to connect to Derby, it starts Derby and creates the database, and the script creates the table. This is not just some transient in-memory database; it is persisted to disk. We can fire up the application again and the table will be there. If we go to, we should see something like Figure 2.
Click around and create some boats. Everything will be stored in the Derby database. You can stop and restart the server, reboot your machine, or whatever. When you start the server again, all of your data will be there. Scaffolding can seem mysterious to people, but it's not. It is mostly just generated code you can modify. Next, let's take a look at a simple example of modifying scaffolding code.

Custom scaffolding

Prototyping lends itself well to many Agile development methodologies. You can get something up and running quickly, then get immediate feedback from product managers, users, etc. For example, we could hand off the scaffolding-based prototype to a product manager, who could then explore the application. After they had used it for awhile and added some boats, perhaps it would look like Figure 3. Listing 8 shows the index method of the BoatsController class.

class BoatsController < ApplicationController
  # GET /boats
  # GET /boats.xml
  def index
    @boats = Boat.find(:all, :order => "capacity DESC")

    respond_to do |format|
      format.html # index.html.erb
      format.xml  { render :xml => @boats }
    end
  end
end

There are actually five more methods (show, new, edit, create, update, and destroy), but we did not need to modify those. Actually, all that was changed here is the first line in the index method: the finder query. The original was just Boat.find(:all). Here, we simply added an order clause. You can make this change and refresh the browser to see the changes, as shown below.

Of course, there are many more customizations possible. You could let the user pick what to sort by, change the look and feel, etc. The point is that scaffolding code is not only useful for prototyping by itself, but it can be easily modified to allow for feedback as well. Still, there are limits to what is appropriate for scaffolding.
Sometimes you will need to prototype various models and controllers that would not come from scaffolding. Luckily, Rails has more generators that can help with this, too.

More generators

Let's say we want to create a catch model that represents a catch made by a boat. There is a one-to-many relationship between a boat and catches. We will start with another generator for the new model, as shown below.

>script/generate model catch quantity:integer
   exists app/models/
   exists test/unit/
   exists test/fixtures/
   create app/models/catch.rb
   create test/unit/catch_test.rb
   create test/fixtures/catches.yml
   exists db/migrate
   create db/migrate/20080727060346_create_catches.rb

Notice that we only specified a quantity attribute (column) in the generation command. We have to do some hand editing to add the relationship. First, we modify the migration script that was generated.

class CreateCatches < ActiveRecord::Migration
  def self.up
    create_table :catches do |t|
      t.integer :quantity
      t.integer :boat_id
      t.timestamps
    end
  end

  def self.down
    drop_table :catches
  end
end

Everything here was generated except the boat_id we added. Now we can execute the migration by using Rake, as shown below.

>rake db:migrate
(in /Users/michael/code/aptana/workspace/deadly)
== 20080727060346 CreateCatches: migrating ====================================
-- create_table(:catches)
   -> 0.0291s
   -> 0 rows
== 20080727060346 CreateCatches: migrated (0.0299s) ===========================

We need to modify the models. Open the Boat model and add a single line of code to it, as shown below.

class Boat < ActiveRecord::Base
  has_many :catches
end

The class is empty by default, as Rails infers the attributes of the class based on table metadata from the database. We simply add the meta-programming command to indicate that a single boat has many catches. Similarly, we complete the relationship by modifying the catch model, as shown below.
class Catch < ActiveRecord::Base
  belongs_to :boat
end

Again, we simply add a meta-programming command to indicate that a catch belongs to a boat. We followed the Rails naming convention when we used boat_id to indicate that a catch belongs to a boat. Now we can create a new controller.

class DeliveryController < ApplicationController
  def index
    @boats = Boat.find :all
    @catch = Catch.new
    respond_to do |format|
      format.html
    end
  end

  def record
    @catch = Catch.new(params[:catch])
    respond_to do |format|
      if @catch.save
        flash[:notice] = 'Delivery recorded successfully'
        format.html { redirect_to :action => "list" }
      end
    end
  end

  def list
    catches = Catch.find(:all)
    @totals = {}
    catches.each { |catch|
      if @totals[catch.boat]
        @totals[catch.boat] += catch.quantity
      else
        @totals[catch.boat] = catch.quantity
      end
    }
    @totals = @totals.sort { |a, b| b[1] <=> a[1] }
    respond_to do |format|
      format.html
    end
  end
end

This defines three actions. The first is the index action, the default when /delivery is requested by a user (the Rails routing convention will map /delivery to the DeliveryController class). Here, we do a little extra work to get all the boats first, so we can use them in the view.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "">
<html>
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
  <title>Record a Delivery</title>
</head>
<body>
  <h1>Record a Delivery</h1>
  <% form_for @catch, :url => { :action => "record" } do |f| %>
    <%= f.error_messages %>
    <p>
      <%= f.label :quantity %>
      <%= f.text_field :quantity %>
    </p>
    <p>
      <%= f.label :boat %>
      <%= select("catch", "boat_id",
            @boats.collect{ |b| [b.name, b.id] },
            { :include_blank => true }) %>
    </p>
    <p>
      <%= f.submit "Record" %>
    </p>
  <% end %>
</body>
</html>

This is similar to the generated code we would get from using scaffolding, as we leverage Rails features to allow for rapid prototyping. For example, we will use a FormHelper object by using the form_for API from the ActionView class.
There are a couple things we use that you would not see from generated code. First, we set the action URL for the form to go to the record method, as shown in Listing 14. We will take a look at this method shortly. Next, we use the select helper to create a custom HTML select tag with option values. We use the boats we retrieved in the index method of the DeliveryController class, as shown in Listing 14. We use some idiomatic Ruby and create a collection of arrays, where each array has the name of a boat and its ID. These will become the labels and values of the options in the HTML that will be generated. This code could have been put in the controller, but it demonstrates the expressiveness of Ruby, and that expressiveness is part of what makes Ruby so well suited to rapid prototyping and development.

The form in Listing 15 submits to the record action of the DeliveryController class, back in Listing 14. This method simply creates a new Catch instance and saves it. It then forwards to the list action, also from Listing 14. This action queries the database to retrieve all Catch records. It then aggregates the records to sum up the total catches for each boat in the database. You could perform this calculation using a custom query, as well. This aggregate then gets sorted into an array of two-element arrays, where the first element is a Boat object and the second element is the total catches for that boat. This is then passed to the view shown below.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "">
<html>
<head>
  <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
  <title>List Totals</title>
</head>
<body>
  <h1>Totals</h1>
  <table>
    <tr>
      <th>Boat</th>
      <th>Total</th>
    </tr>
    <% for total in @totals %>
    <tr>
      <td><%=h total[0].name %></td>
      <td><%=h total[1] %></td>
    </tr>
    <% end %>
  </table>
  <br />
  <%= link_to 'Add catch', :action => "index" %>
</body>
</html>

This is all pretty standard Rails templating for creating a table of all of the boats and their totals.
We also use one last helper to create a link back to the index action for the user. Now we have a complete prototype of a mini-application. This concludes the application, but there is one more thing we will add: a brief note about using an IDE with JRuby and Derby.

IDE support

For many developers, rapid prototyping and development means using an IDE. Ruby is not the easiest language to build an IDE for, and this is just as true for JRuby. However, there are several good IDEs available, including the Eclipse-based RadRails. There are other articles covering this great tool (see Resources), but it can be a little tricky to use with JRuby and Derby. Most IDEs, including RadRails, need a Ruby debug gem so the IDE's debugger can hook into the Ruby VM. This gem is a native gem in that it is written in C++. Such gems cannot be used (yet) with JRuby. Instead, a Java version must be written, but fortunately, this has been done for this very important gem. You need to download one Java-based gem directly, then install it and a couple of other standard Ruby-based gems. This process is shown below.

$ jruby -S gem install ruby-debug-base-0.10.1-java.gem
$ jruby -S gem install -v 0.10.1 ruby-debug ruby-debug-ide
Successfully installed ruby-debug-ide-0.2.0
1 gem installed

You can add breakpoints and debug your application easily from any IDE that uses the standard Ruby debugging gems. You can also use the debugger to step through some of the Rails code and get a much better understanding of some of its magic.

Summary

In this article, we saw how the combination of JRuby, Rails, and Apache Derby can produce a perfect environment for rapid prototyping and development. With Rails, we're able to use generators to produce boilerplate code, or we can create more custom applications with little effort, as well. With the help of JRuby and Derby, we have a transactional, persistent database embedded with our application; it runs whenever our application runs.
From here, you can use more Rails features to add more models, controllers, and views. You can add Ajax to the applications easily using the Ajax helpers in Rails. You can also use an IDE, such as the Eclipse-based RadRails, to further speed your prototyping and development.
http://www.ibm.com/developerworks/opensource/library/os-ad-prototype-jruby/index.html
On Tue, Jul 28, 2020 at 10:32:20AM +0200, Peter Krempa wrote:
> On Thu, Jul 16, 2020 at 11:58:22 +0200, Pavel Hrdina wrote:
> > Signed-off-by: Pavel Hrdina <phrdina at redhat.com>
> > ---
> >  tools/Makefile.am | 13 ++-----------
> >  tools/meson.build |  8 ++++++++
> >  2 files changed, 10 insertions(+), 11 deletions(-)
>
> [...]
>
> > diff --git a/tools/meson.build b/tools/meson.build
> > index 446831557e1..b95ced3728b 100644
> > --- a/tools/meson.build
> > +++ b/tools/meson.build
> > @@ -261,3 +261,11 @@ configure_file(
> >    install: true,
> >    install_dir: libexecdir,
> >  )
> > +
> > +if init_script == 'systemd'
> > +  install_data(
> > +    'libvirt-guests.sysconf',
> > +    install_dir: sysconfdir / 'sysconfig',
> > +    rename: 'libvirt-guests',
> > +  )
> > +endif
>
> Arguably it's data at this point as it doesn't need to be modified, but
> shouldn't we use the config_file directive for any config file?

If by 'config_file' you mean 'configure_file' I don't think so. There is
no need to copy it into build directory, it just needs to be installed.

Pavel
https://listman.redhat.com/archives/libvir-list/2020-July/msg01789.html
- The Language
- The Standard Library
- 4. Variable Templates for Traits
- 5. Logical Operation Metafunctions
- 6. std::void_t Transformation Trait
- 7. std::from_chars - Fast, Low-level Conversions
- 8. Splicing for maps and sets
- 9. try_emplace() Function
- 10. insert_or_assign() Member Function for Maps
- 11. Return Type of Emplace Functions
- 12. Sampling Algorithms
- 13. gcd(), lcm() and clamp() + lots of math functions
- 14. Shared Pointers and Arrays
- 15. std::scoped_lock
- Removed Elements
- Extra
- Summary

Last Update: 19th October 2020 (the std::invoke section, plus smaller fixes).

The Language

Let's start with the language changes first. C++17 brought larger features like structured bindings, if constexpr, folding expressions, and updated expression evaluation order; I consider them "significant" elements. Yet, there are also smaller updates to the language that make it clearer and also allow you to write more compact code. Have a look below:

1. Dynamic Memory Allocation for Over-Aligned Data

If you work with SIMD instructions (for example, to improve the performance of some calculations, in graphics engines, or in gamedev), you might often find some C-looking code to allocate memory. For example, aligned_malloc() or _aligned_malloc() and then aligned_free(). Why might you need those functions? It's because if you have some specific types, like a Vec3 that has to be allocated with 128-bit alignment (so it can fit nicely in SIMD registers), you cannot rely on the standard C++ new() functions.

struct alignas(16) Vec3 {
    float x, y, z;
};

auto ptr = new Vec3[10];

To work with SSE, you require ptr to be aligned to a 16-byte boundary, but in C++14 there's no guarantee about this. I've even seen the following guides in CERT: MEM57-CPP. Avoid using default operator new for over-aligned types - SEI CERT C++ Coding Standard - Confluence. Or here: Is there any guarantee of alignment of address return by C++'s new operation? - Stack Overflow.
Fortunately, the C++17 standard fixes this by introducing allocation functions that honour the alignment of the object. For example, we have:

void* operator new[](std::size_t count, std::align_val_t al);

Now, when you allocate an object that has a custom alignment, you can be sure it will be appropriately aligned. There's a nice description on the MSVC pages: /Zc:alignedNew (C++17 over-aligned allocation).

2. Inline Variables

When a class contains static data members, you had to provide their definition in a corresponding source file (in only one source file!). Now, in C++17, that's no longer needed, as you can use inline variables! The compiler will guarantee that a variable has only one definition and is initialised only once across all compilation units. For example, you can now write:

// some header file...
class MyClass {
    static inline std::string startName = "Hello World";
};

The compiler will make sure MyClass::startName is defined (and initialised!) only once for all compilation units that include the MyClass header file. You can also read about global constants in a recent article at Fluent C++: What Every C++ Developer Should Know to (Correctly) Define Global Constants, where inline variables are also discussed.

3. __has_include Preprocessor Expression

C++17 offers a handy preprocessor directive that allows you to check if a header is present or not. For example, GCC 7 supports many C++17 library features, but not std::from_chars. With __has_include, you can check whether the corresponding header is available before you rely on it. If you want to read more about __has_include, then see my recent article: Improve Multiplatform Code With __has_include and Feature Test Macros.

The Standard Library

With each release of C++, its Standard Library grows substantially. The Library is still not as huge as those we can use in the Java or .NET frameworks, but it still covers many useful elements. Not to mention that we have the Boost libraries, which serve as the Standard Library 2.0 :)

In C++17, a lot of new and updated elements were added.
We have big features like the filesystem, parallel algorithms, and vocabulary types (optional, variant, any). Still, there are lots more (many more than 17) that are very handy. Let's have a look:

4. Variable Templates for Traits

In C++11 and C++14, we got many traits that streamlined template code. Now we can make the code even shorter by using variable templates. All the type traits that yield ::value got accompanying _v variable templates. For example:

std::is_integral<T>::value has std::is_integral_v<T>
std::is_class<T>::value has std::is_class_v<T>

This improvement already follows the _t suffix additions in C++14 (template aliases) to type traits that "return" ::type. One example:

template <typename Concrete, typename... Ts>
std::enable_if_t<std::is_constructible<Concrete, Ts...>::value, std::unique_ptr<Concrete>>
constructArgs(Ts&&... params) {
    return std::make_unique<Concrete>(std::forward<Ts>(params)...);
}

Can be shortened (along with using if constexpr) into:

template <typename Concrete, typename... Ts>
unique_ptr<Concrete> constructArgs(Ts&&... params) {
    if constexpr (is_constructible_v<Concrete, Ts...>)
        return make_unique<Concrete>(forward<Ts>(params)...);
    else
        return nullptr;
}

Also, if you want to create your custom trait that returns ::value, then it's a good practice to provide the helper variable template _v as well:

// define is_my_trait<T>...

// variable template:
template< class T >
inline constexpr bool is_my_trait_v = is_my_trait<T>::value;

5. Logical Operation Metafunctions

C++17 adds handy template metafunctions:

template<class... B> struct conjunction; - logical AND
template<class... B> struct disjunction; - logical OR
template<class B> struct negation; - logical negation

Here's an example, based on the code from the proposal (P0006):

#include <iostream>
#include <type_traits>

template<typename... Ts>
std::enable_if_t<std::conjunction_v<std::is_same<int, Ts>...> >
PrintIntegers(Ts ... args) {
    (std::cout << ... << args) << '\n';
}

The above function PrintIntegers works with a variable number of arguments, but they all have to be of type int.

6. std::void_t Transformation Trait

A surprisingly simple metafunction that maps a list of types into void:

template< class... >
using void_t = void;

Extra note: compilers that don't implement a fix for CWG 1558 (for C++14) might need a more complicated version of it. The void_t technique was often used internally in library implementations, so now we have this helper type in the standard library out of the box.

void_t is very handy to SFINAE ill-formed types. For example, it might be used to detect a function overload:

void Compute(int &) { } // example function

template <typename T, typename = void>
struct is_compute_available : std::false_type {};

template <typename T>
struct is_compute_available<T,
    std::void_t<decltype(Compute(std::declval<T>()))>>
    : std::true_type {};

static_assert(is_compute_available<int&>::value);
static_assert(!is_compute_available<double&>::value);

is_compute_available checks if a Compute() overload is available for the given template parameter. If the expression decltype(Compute(std::declval<T>())) is valid, then the compiler will select the template specialisation. Otherwise, it's SFINAEd away, and the primary template is chosen (I described this technique in a separate article: How To Detect Function Overloads in C++17, std::from_chars Example).

7. std::from_chars - Fast, Low-level Conversions

This function was already mentioned in previous items, so let's now see what that's all about. from_chars gives you low-level support for text-to-number conversions! No exceptions (unlike std::stoi), no locale, no extra memory allocations, just a simple raw API to use. Have a look at a simple example:

#include <charconv> // from_chars, to_chars
#include <iostream>
#include <string>

int main() {
    const std::string str { "12345" };
    int value = 0;
    const auto res = std::from_chars(str.data(), str.data() + str.size(), value);
    if (res.ec == std::errc())
        std::cout << "value: " << value << '\n';
}

The API is quite "raw", but it's flexible and gives you a lot of information about the conversion process. Support for floating-point conversion is also possible (at least in MSVC, but still not implemented in GCC/Clang - as of October 2020). And if you need to convert numbers into strings, then there's also a corresponding function, std::to_chars.
See my blog posts on those procedures:

- How to Use The Newest C++ String Conversion Routines - std::from_chars
- How to Convert Numbers into Text with std::to_chars in C++17

8. Splicing for maps and sets

Let's now move to the area of maps and sets. In C++17 there are a few helpful updates that can bring performance improvements and cleaner code. The first example is that you can now move nodes from one tree-based container (maps/sets) into another one, without additional memory overhead/allocation. Previously you needed to copy or move the items from one container to the other. For example:

```cpp
#include <set>
#include <string>
#include <iostream>

struct User {
    std::string name;

    User(std::string s) : name(std::move(s)) {
        std::cout << "User::User(" << name << ")\n";
    }
    ~User() {
        std::cout << "User::~User(" << name << ")\n";
    }
    User(const User& u) : name(u.name) {
        std::cout << "User::User(copy, " << name << ")\n";
    }

    friend bool operator<(const User& u1, const User& u2) {
        return u1.name < u2.name;
    }
};

int main() {
    std::set<User> setNames;
    setNames.emplace("John");
    setNames.emplace("Alex");

    std::set<User> outSet;
    std::cout << "move John...\n";
    // move John to the outSet
    auto handle = setNames.extract(User("John"));
    outSet.insert(std::move(handle));

    for (auto& elem : setNames)
        std::cout << elem.name << '\n';

    std::cout << "cleanup...\n";
}
```

Output:

```
User::User(John)
User::User(Alex)
move John...
User::User(John)
User::~User(John)
Alex
cleanup...
User::~User(John)
User::~User(Alex)
```

In the above example, one element "John" is extracted from setNames into outSet. The extract member function moves the found node out of the set and physically detaches it from the container. Later the extracted node can be inserted into a container of the same type. Let's see another improvement for maps:

9. try_emplace() Function

The behaviour of try_emplace matters when you move elements into the map:

```cpp
int main() {
    std::map<std::string, std::string> m;
    m["Hello"] = "World";

    std::string s = "C++";
    m.emplace(std::make_pair("Hello", std::move(s)));

    // what happens with the string 's'?
    std::cout << s << '\n';
    std::cout << m["Hello"] << '\n';

    s = "C++";
    m.try_emplace("Hello", std::move(s));
    std::cout << s << '\n';
    std::cout << m["Hello"] << '\n';
}
```

The code tries to replace the key/value pair ["Hello", "World"] with ["Hello", "C++"]. If you run the example, the string s is empty after emplace, yet the value "World" was not changed to "C++"! try_emplace, on the other hand, does nothing when the key is already in the container, so the s string is unchanged.

10. insert_or_assign() Member Function for Maps

Another new feature is insert_or_assign() - a new member function for std::map. It inserts a new object into the map or assigns the new value. As opposed to operator[], it also works with non-default-constructible types. Also, the regular insert() member function fails if the element is already in the container, so now we have an easy way to express "force insertion". For example:

```cpp
struct User {
    // from the previous sample...
};

int main() {
    std::map<std::string, User> mapNicks;
    //mapNicks["John"] = User("John Doe"); // error: no default ctor for User()

    auto [iter, inserted] = mapNicks.insert_or_assign("John", User("John Doe"));
    if (inserted)
        std::cout << iter->first << " entry was inserted\n";
    else
        std::cout << iter->first << " entry was updated\n";
}
```

This one finishes the section about ordered containers.

11. Return Type of Emplace Functions

Since C++11 most of the standard containers have had .emplace* member functions. With those, you can create a new object in place, without additional temporary copies. However, most of the .emplace* functions didn't return any value - the return type was void.
Since C++17 this is changed, and they now return the reference type of the inserted object. For example:

```cpp
// since C++11 and until C++17 for std::vector
template< class... Args >
void emplace_back( Args&&... args );

// since C++17 for std::vector
template< class... Args >
reference emplace_back( Args&&... args );
```

This modification shortens code that adds something to a container and then invokes some operation on the newly added object. For example, in C++11/C++14 you had to write:

```cpp
std::vector<std::string> stringVector;
stringVector.emplace_back("Hello");
// emplace doesn't return anything, so back() is needed
stringVector.back().append(" World");
```

one call to emplace_back and then access to the element through back(). Now, in C++17, you can have a one-liner:

```cpp
std::vector<std::string> stringVector;
stringVector.emplace_back("Hello").append(" World");
```

12. Sampling Algorithms

A new algorithm - std::sample - selects n elements from a sequence:

```cpp
#include <iostream>
#include <random>
#include <vector>
#include <iterator>
#include <algorithm>

int main() {
    std::vector<int> v { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
    std::vector<int> out;
    std::sample(v.begin(),               // range start
                v.end(),                 // range end
                std::back_inserter(out), // where to put it
                3,                       // number of elements to sample
                std::mt19937{std::random_device{}()});
    std::cout << "Sampled values: ";
    for (const auto &i : out)
        std::cout << i << ", ";
}
```

Possible output:

```
Sampled values: 1, 4, 9,
```

13. gcd(), lcm() and clamp() + lots of math functions

The C++17 Standard extended the library with a few extra functions.
We have simple functions like clamp, gcd and lcm:

```cpp
#include <iostream>
#include <algorithm> // clamp
#include <numeric>   // gcd, lcm

int main() {
    std::cout << std::clamp(300, 0, 255) << ", ";
    std::cout << std::clamp(-10, 0, 255) << '\n';
    std::cout << std::gcd(24, 60) << ", ";
    std::cout << std::lcm(15, 50) << '\n';
}
```

What's more, C++17 brings even more math functions - the so-called special math functions like riemann_zeta, assoc_laguerre, hermite and others. See the paper N1542 or Mathematical special functions - @cppreference.

14. Shared Pointers and Arrays

Before C++17, only unique_ptr was able to handle arrays out of the box (without the need to define a custom deleter). Now it's also possible with shared_ptr:

```cpp
std::shared_ptr<int[]> ptr(new int[10]);
```

Please note that std::make_shared doesn't support arrays in C++17. But this will be fixed in C++20 (see P0674, which is already merged into C++20).

Another important remark is that raw arrays should be avoided; it's usually better to use standard containers. So is the array support even needed? I asked that question on Stack Overflow some time ago: c++ - Is there any use for unique_ptr with array? - Stack Overflow. And it turned out to be a popular question :) Overall, sometimes you don't have the luxury of using vectors or lists - for example, in an embedded environment, or when you work with a third-party API. In those situations, you might end up with a raw pointer to an array. With C++17, you can wrap those pointers into smart pointers (std::unique_ptr or std::shared_ptr) and be sure the memory is deleted correctly.

15. std::scoped_lock

With C++11 and C++14 we got the threading library and many supporting facilities. For example, with std::lock_guard you can take ownership of a mutex and lock it in RAII style:

```cpp
std::mutex m;
std::lock_guard<std::mutex> lock_one(m);
// unlocked when lock_one goes out of scope...
```

The above code works, however, only for a single mutex.
If you wanted to lock several mutexes, you had to use a different pattern, for example:

```cpp
std::mutex first_mutex;
std::mutex second_mutex;

// ...
std::lock(first_mutex, second_mutex);
std::lock_guard<std::mutex> lock_one(first_mutex, std::adopt_lock);
std::lock_guard<std::mutex> lock_two(second_mutex, std::adopt_lock);
// ...
```

With C++17 things get a bit easier: with std::scoped_lock you can lock several mutexes at the same time.

```cpp
std::scoped_lock lck(first_mutex, second_mutex);
```

Removed Elements

C++17 not only added lots of elements to the language and the Standard Library but also cleaned up several places. I'd claim that such clean-up is also a "feature", as it will "force" you to use a modern code style.

16. Removing auto_ptr

One of the best parts! Since C++11, we have smart pointers that properly support move semantics. auto_ptr was an old attempt to reduce the number of memory-related bugs and leaks... but it was not the best solution. In C++17 this type is removed from the library, and you should really stick to unique_ptr, shared_ptr or weak_ptr. Here's an example where auto_ptr might cause a disc format or a nuclear disaster:

```cpp
void PrepareDisaster(std::auto_ptr<int> myPtr) {
    *myPtr = 11;
}

void NuclearTest() {
    std::auto_ptr<int> pAtom(new int(10));
    PrepareDisaster(pAtom);
    *pAtom = 42; // uups!
}
```

PrepareDisaster() takes auto_ptr by value, but since it's not a shared pointer, it takes unique ownership of the managed object. When the function completes, the copy of the pointer goes out of scope, and the object is deleted. So in NuclearTest(), by the time PrepareDisaster() has finished, the pointer is already cleaned up, and you get undefined behaviour when calling *pAtom = 42.

17. Removing Old functional Stuff

With the addition of lambda expressions and new functional wrappers like std::bind(), we can clean up old functionality from the C++98 era. Functions like bind1st()/bind2nd()/mem_fun() were never updated to handle perfect forwarding, decltype and other techniques from C++11, so it's best not to use them in modern code. Here's the list of functions removed in C++17:

- unary_function() / pointer_to_unary_function()
- binary_function() / pointer_to_binary_function()
- bind1st() / binder1st
- bind2nd() / binder2nd
- ptr_fun()
- mem_fun()
- mem_fun_ref()

For example, to replace bind1st/bind2nd you can use lambdas, std::bind (available since C++11), or std::bind_front (available since C++20).

```cpp
// old:
auto onePlus = std::bind1st(std::plus<int>(), 1);
auto minusOne = std::bind2nd(std::minus<int>(), 1);
std::cout << onePlus(10) << ", " << minusOne(10) << '\n';

// a capture with an initializer:
auto lamOnePlus = [a = 1](int b) { return a + b; };
auto lamMinusOne = [a = 1](int b) { return b - a; };
std::cout << lamOnePlus(10) << ", " << lamMinusOne(10) << '\n';

// with bind:
using namespace std::placeholders;
auto onePlusBind = std::bind(std::plus<int>(), 1, _1);
std::cout << onePlusBind(10) << ',';
auto minusOneBind = std::bind(std::minus<int>(), _1, 1);
std::cout << minusOneBind(10) << '\n';
```

The example above shows the "old" version with bind1st and bind2nd, and then two different approaches: one with a lambda expression and one with std::bind.

Extra

But there's more good stuff!

std::invoke - Uniform Call Helper

This feature connects with the last thing I mentioned - the functional stuff. While C++17 removed some things, it also offered some cool new ones! With std::invoke you get access to the magical INVOKE expression that had been defined in the Standard since C++11 (or even C++0x, TR1) but wasn't exposed outside it. In short, the expression INVOKE(f, t1, t2, ..., tN) can handle the following callables:

- function objects: like func(arguments...)
- pointers to member functions: (obj.*funcPtr)(arguments...)
- pointers to member data: obj.*pdata

See the full definition here: [func.require]. Additionally, those calls can also be made through references to objects, or even pointers (smart pointers as well!), or base classes. As you can see, this expression creates a nice abstraction over the several ways you can "call" something - no matter whether it's a pointer to a member function, a regular callable object, or even a data member. Since C++17 (as proposed in N4169), the INVOKE expression is exposed through std::invoke, which is defined in the <functional> header.

Let's see some examples. The first one is a regular function call:

```cpp
#include <functional>
#include <iostream>

int intFunc(int a, int b) { return a + b; }

int main() {
    // a regular function:
    std::cout << std::invoke(intFunc, 10, 12) << '\n';

    // a lambda:
    std::cout << std::invoke([](double d) { return d * 10.0; }, 4.2) << '\n';
}
```

That was easy. How about member functions:

```cpp
#include <functional>
#include <iostream>

struct Animal {
    int size { 0 };
    void makeSound(double lvl) {
        std::cout << "some sound at level " << lvl << '\n';
    }
};

int main() {
    Animal anim;

    // before C++17:
    void (Animal::*fptr)(double) = &Animal::makeSound;
    (anim.*fptr)(12.1);

    // with std::invoke:
    std::invoke(&Animal::makeSound, anim, 12.2);

    // with a pointer:
    auto* pAnim = &anim;
    std::invoke(&Animal::makeSound, pAnim, 12.3);
}
```

And the last example, invoking a data member - this simply returns the value of that member:

```cpp
#include <functional>
#include <iostream>
#include <memory>

struct Animal {
    int size { 0 };
};

int main() {
    Animal anim { 12 };
    std::cout << "size is: " << std::invoke(&Animal::size, anim) << '\n';

    auto ptr = std::make_unique<Animal>(10);
    std::cout << "size is: " << std::invoke(&Animal::size, ptr) << '\n';
}
```

As you can see, std::invoke makes it easy to get the value of some callable object, or even a data member, using the same syntax.
This is important when you want to create generic code that needs to handle such calls. As it turns out, std::invoke also became an essential part of the Projections in Ranges, introduced in C++20. You can see an example in my other post about Ranges. One additional update: in C++17 std::invoke wasn't defined as constexpr, but it is since C++20! There's an excellent presentation from STL if you want to know more: CppCon 2015: Stephan T. Lavavej "functional: What's New, And Proper Usage" - YouTube.

Summary

That was a lot of reading... and I hope you found something useful to try and explore. The list is not complete, and we could add more and more things; for example, I skipped std::launder, direct initialisation of enum classes, std::byte, aggregate changes, and other elements removed from the library. If you want to see other elements of C++17, you can read my book - C++17 in Detail - or see the list @cppreference.

But how about your preferences? What's your favourite small feature of C++17?
https://www.bfilipek.com/2019/08/17smallercpp17features.html
Automotive Forums .com Car Chat > Cars in General > The most common Suped up car

Page updated on 09-19-2017

The most common Suped up car

Hossain_Trance101 04-26-2005, 12:50 AM
Hey, wat do u rekon is the most common modded cars? I rekon its either the CRX or Lancer Coupe, which is normally driven by ricers.

Jetts 04-26-2005, 12:51 AM
any civic or honda/acura

shawnwilliams 04-26-2005, 01:13 AM
Definetly the civic and the cavalier/

93rollaracer 04-26-2005, 09:12 AM
a) Why the fuck is this in car sightings?
b) That'd have to be either the Chevy Cavalier or Honda Civic

YukiHime 04-26-2005, 04:38 PM
What car is suped up in the Acura line?

illegal_eagle187 04-26-2005, 05:01 PM
around here, cavaliers, civics, tegs, and eclipses

GritMaster 04-26-2005, 07:05 PM
Don't forget mustangs.

aznxthuggie 04-26-2005, 10:05 PM
a) Why the fuck is this in car sightings?
b) That'd have to be either the Chevy Cavalier or Honda Civic
x2 and to yukihime, the integra very common among teenagers n such, i see as many integras fixed up as civics and cavaliers

lamehonda 04-27-2005, 12:01 PM
Civic CivicCivic CivicCivic CivicCivic Civic Nothing touches the civic as far as sheer numbers.

porscheguy9999 04-28-2005, 05:03 PM
Hello? Mitsubish Eclipse anyone? Civic Coupe, Integra (my friend owns one, hes getting 17" chrome's, got blue Street Glow, and a body kit).

YukiHime 05-01-2005, 01:04 PM
x2 and to yukihime, the integra very common among teenagers n such, i see as many integras fixed up as civics and cavaliers
But I thought the Integra is really going fast with a 1.8L engine... :uhoh:

Ridenour 05-01-2005, 05:19 PM
Deffinately civics (at least around here). They're everywhere.

-Davo 05-08-2005, 05:10 AM
nissan pulsars, and honda civics, why is this in the car sightings forum?

97Tsi 05-20-2005, 01:34 AM
Dodge Neons around here, and not the srt-4 either! :banghead:

mustangmann9 06-26-2005, 08:24 PM
civic civic civic integ eclipse and 1 supra TT.
Zachp911 06-26-2005, 08:40 PM
Civics, Integras and Mustangs mostly, but what else is new :rolleyes:

pimprolla112 06-26-2005, 08:45 PM
around here Mustangs and not just v6 ones theres a mach 1 with spinners such a ugly car. Cavaliers/sunfires sorry its a damn economy car not a sports car cant complain my girl has one at least its not chromed out or other shit. Civics not as bad if they have a motor swap but still nothing more than an economy car. Camaroes just as bad as mustangs euros, 20"s and theres even one with import racing graphics on the windshield some stupid blonde girl who thinks shes cool cause she acts like a slut, shes still a virgin. The focus is just as bad and the ranger and s10. I hate stupid people.

slideways... 06-27-2005, 11:11 PM
Integra (my friend owns one, hes getting 17" chrome's, got blue Street Glow, and a body kit).
ewwwww :puke:
CIVICS, integras, accords, cavaliers, eclipses are the 5 most common in st paul there is a few subaru-only groups and a SRT-4 group with like 10 cars, but they dont fuck with the street racers much

jcsaleen 06-28-2005, 12:40 PM
Yea those damn civic's an alot of maxima's too.

Zachp911 07-10-2005, 08:10 AM
Haha I was coming home from Connecticut yesterday and on the highway I saw an older '97-'99 Maxima with a huge aluminum wing and decals and shit, and then on the front grille it had the GTR badge.... stupid ricer :rofl:

Polouser 07-10-2005, 09:28 PM
#1 here is the vw golf ... ^^

knorwj 07-15-2005, 07:16 PM
Well I would have to say a mustang or camaro... only because they have been around quite abit longer than civics/tegs/neons/etc. And people have been modding them since they debuted in the 60's

fastguy18 07-15-2005, 07:44 PM
definately civic's, integra's but those look a lot better than the decked out civic's and they are much more faster

pimprolla112 07-16-2005, 04:29 PM
Ive seen some really fast civic hatchbacks and coupes but stock for stock the integra does have it.
Oh yeah seen the ultimate riced cars today an evo (this is my opinion but a family sedan with a turbo is not a sports car ie neon and lancer) sporting a huge $80 autozone wing with a bodykit. Then there was a riced out 96 corolla.

92pontiacbonny 07-29-2005, 06:49 PM
def hondas and chevy cars

rowitha 08-04-2005, 09:20 AM
theres a diff. between "riced out" and supd up...personally i think the majority of all these cars listed are "riced"...but i have developd a respect for some of these cars because there are some fast/sick civics that arnt wingy and all that ... integras are awesome...they look sweet and come with a pretty decent engine and theres alot of nice ones out there...eclipses..i mean come on they have an awd turbo model thats just sick...cavaliers...welll i cant think about anything good about those cars ha...and crx's are light...you can put countless engines in them and they look awesome...this is just my two cents

RaidenKing 08-13-2005, 08:04 AM
He said suped up, not riced out. Nissan 240sx's, rx7 FD's and DSM's usually have the most "suped up" work done to them, no one likes to keep them stock if they race. The civic hatch is also probably up there in the most commonly modded car because of the engine possibilities that can be thrown in.

pimprolla112 08-13-2005, 04:02 PM
Your also forgetting that there are different types of import "modders" theres the ricers and the tuners. Ricers are generally the all show no go cars civics with the stock d16 with bodykits, chrome wheels, loud annoying exhausts and a system that usually sounds like shit. The tuners go for more speed than looks they usually do motor swaps and turbo and nitrous and some minor body stuff. When he said suped up he didnt specify which he was asking about and when some one says something aout a suped up car you generally think of a civic with an airplane wing and a fart pipe.
I have seen some of the ugliest civics with dented body panels, rust, shit loads of bondo and a cannon for an exhaust come out to the streets and beat up on cobras. Then again ive seen the nicest looking cars with molded bodykits, pearl paint jobs, and tons of engine mods that could get there asses handed to them buy a stock civic. Before you go saying that he said nothing about ricers remember one thing everyone gets different ideas from the same word suped up can meen many different things. Generally when some one says ricer you automatically think its an import but ive seen some riced out domestics and euro cars.

blakscorpion21 09-14-2005, 04:16 PM
1. civic i see a riced civic every couple of minutes but there is alot more you just notice the riced out ones.
2. mustangs i see a stang about every 30 sec. drivin around town.

ZL1power69 09-30-2005, 09:42 PM
around here in MD its civics, accords, integras, 240s, caviliers, impreza/wrx, neons, eclipses its mostly teens. im sure im forgeting some but those are the most common.

BigG123 12-04-2005, 12:29 PM
350z!!!!!!!!!!!

JLad10687 12-05-2005, 12:04 PM
Civics are all too common and done improperly

pimprolla112 12-05-2005, 11:02 PM
Not to be an ass but look at the last post date. Last post was almost 3 months ago. That and the civic is only referred to as the most commonly modified import cars, just like the mustang is referred to as the most modified domestic. Its very common to see some teen driving a v6 stang with dual exhaust just like the teen with massive pipe on a civic. Thats the problem everytime someone sees a jackass in a import they relate it direclty to a civic. There are all kinds of riced cars, domestic, import, euro, they all get done yeah theres the ricers but once again there are the tuners with financial disability, or the kids who put there blood and sweat into a car while these little rich kids get these decked out cars and they cant even turn a wrench.
Im not pointing fingers but its all the same, not everyone can do it all at once so it might look like shit, and then theres the people who think a 8" aluminum wing with euros and a apc exhaust think they have a show car.

thisisit 12-19-2005, 11:06 AM
How bout we turn this thread into the most unexpected modified car. Meaning that it's a car that you would never think about modding and that yet somebody had modded. I'd have to say this 1st gen taurus i saw a while ago. The dude had made it look like the police cars from Robocop. Totally awesome. And by the way everyone is doing some kind of aftermarket modification to their cars these days. Rims, headlights and exhaust seem to be super popular. By the way that 3" exhaust on your stock engine actually hurts performance. GO BLOW

iPoddity 02-07-2006, 02:28 PM
in my town its Civics, Golfs, and Mustangs
unexpected you say? id like to see a souped up crown vic laying down rubber and taking my towns PD Intrepids for all their worth....

pimprolla112 02-07-2006, 11:41 PM
Dude this thread is almost 2 months old not to be a dick but let old threads die, the second time this thread has been brought back to life. Ever hear of marauder a crown vic with a mach1 motor with the shaker.

Automotive Network, Inc., Copyright ©2017
http://www.automotiveforums.com/t400813-the_most_common_suped_up_car.html
Robosapien, I Kinect you... - Posted: Aug 04, 2011 at 6:00 AM - 7,678 Views - 2 Comments

This is a cute project that looks just fun... Leads me to dream about how cool it would be to do something like this for Halloween. Imagine a scary animatronic in your front yard that you control in real time. Or that you code to react to, or interact with, trick-or-treat'ers. hum...:

...

Project Information URL:

Project Download URL:

Kinect Extensions,

```csharp
namespace KinectExtensions
{
    public static class JointExtensions
    {
        public static bool HigherThan(this Joint basejoint, Joint joint)
        {
            return basejoint.Position.Y > joint.Position.Y;
        }

        public static bool LowerThan(this Joint basejoint, Joint joint)
        {
            return basejoint.Position.Y < joint.Position.Y;
        }

        public static bool BetweenVertically(this Joint basejoint, Joint lowerJoint, Joint higherJoint)
        {
            return basejoint.Position.Y > lowerJoint.Position.Y &&
                   basejoint.Position.Y < higherJoint.Position.Y;
        }
    }
}
```

job!

hi nice work here i've been working a little with a kineck and i have my own robosapiens so im wondering if you dont mind to share your code for this proyect, it would be great if you could send to my e-mail or something the code :P tnx
https://channel9.msdn.com/coding4fun/kinect/Robosapien-I-Kinect-you
Python Programming, news on the Voidspace Python Projects and all things techie.

Ironclad 0.1 Released: C Extensions for IronPython

Python 2.5.2 has just been released, but more importantly the work at Resolver Systems on C Extensions for IronPython has finally borne fruit. William Reade (who has done most of the work) posted the following along with the announcement:

It's not very impressive yet, and it's still win32-only, but it will import the bz2.pyd module from CPython 2.5; the compress() and decompress() functions should work, as should the BZ2Compressor and BZ2Decompressor types. Sadly, BZ2File still needs quite a lot of work. Nonetheless, please download it and have a play. Be aware that you can't just reference the DLLs and have everything Just Work -- have a look in 'tests/functionalitytest.py' to see how to make it work; have a look at 'doc/details.txt' if you're interested in what's going on under the hood. Bug reports, complaints, advice and patches are all very welcome; of course, bug reports with integrated patches and test cases will receive the maximum possible brownie points.

The repository includes a description of the requirements and how to build Ironclad, plus a description of how it works. Despite William's protestations it is really very impressive. There is still a lot to do (a lot of the CPython API still to implement!), but it is already well beyond proof of concept.

Other Python news

linuxquestions.org have made Python their programming language of 2007:

Congratulations! It's my pleasure to inform you that Python has been selected as the Programming Language of the Year in the 2007 LinuxQuestions.org Members Choice Awards. For more information, visit: 2007-linuxquestions.org-members-choice-awards.

Python got 21.8%, followed by C++ and then C / PHP (then Java, Perl, Ruby, C#, Lisp and Haskell).
It will distress many die-hards (and perhaps a few new-hards) that C# came in ahead of both Lisp and Haskell, but then it could be a conspiracy...

Like this post? Digg it or Del.icio.us it.

Posted by Fuzzyman on 2008-02-22 21:12:59 | | Categories: Work, IronPython, Python Tags: ironclad, release, resolver

Catchup plus More Mac Musings

Sorry for the lack of posts, I've been playing catchup. I've just finished the first draft of a 4000 word article on ConfigObj for the Python Magazine. I've also done a 500 word article on IronPython for the MSDN Flash UK newsletter. Ironically, the 500 word article (on a whole programming language) is slightly under the word limit, while the 4000 word article (on a module for handling configuration files) is slightly over the word limit... Don't worry, I'll be sure to let you know when they're both available to read.

After the 1.0 release, we've been working hard over at Resolver Systems. We've been working on 1.0.1, which should be released any day now. For this release we've focussed on usability defect fixes and have managed to close out all the ones we scheduled plus a few extra (and one or two new features and performance enhancements along the way). We're still deciding what major features to work on for version 1.1, but whilst the boss is making his mind up we've been focussing on improving the speed of importing spreadsheets from Excel, improving recalc speed and reducing memory use.

Several people responded to my last blog entry with links to Mac OS X clients for Subversion. It looks like there are plenty! None of them are quite as good as TortoiseSVN for Windows (which sets the bar pretty high), but I'm getting used to the command line. One of the major failings of the Mac OS X interface is that application menu bars appear at the top of your first monitor. If you have multiple monitors this puts the menu for applications a long way away.
The best way round this I have found so far is a combination of DejaMenu (freeware), which brings up a context menu with a keyboard shortcut, and Steermouse ($20 shareware), which can assign keyboard shortcuts to mouse buttons! Hardly ideal.

Oh, and one final thing - another Python snippet. This one was discovered by Andrew Dalke as he pokes around in the dusty corners of Python's grammar:
There is a really interesting post on Python garbage collection over on the PyPy blog: PyPy Development: Python Finalizers Semantics, Part 1.

>>> class T(type):
...     def mro(self):
...         return []
...
>>> class C:
...     __metaclass__ = T
...
>>> class D(object):
...     pass
...
>>> d = D()
>>> d.__class__ = C
>>> isinstance(d, object)
False

This is silliness from Michael Hudson. He says that the primary use case is for looking geeky on IRC. Actually, lying to isinstance can be useful when creating proxy classes (but you can't lie to checks that use type(something)).

Back to Mac stuff. SCPlugin is nothing like as good as TortoiseSVN (yet - it is a younger project), but dropping down to the command line is kind of reassuring. I've discovered that Parallels can be made to work with multiple monitors (it basically fakes a single display the combined width of your Mac desktop - but it works fine). I'm looking for a straightforward IRC client for the Mac - so far Colloquy seems fine other than its unhelpful error reporting.

The most useful little application I've found is OpenTerminalHere. It puts an icon on Finder windows that opens a terminal window with the current directory set to the directory you are viewing (kind of like 'command window here' on Windows).

Posted by Fuzzyman on 2008-02-18 20:34:52 | Categories: Python, Hacking, Fun Tags: snippets

More Pyglet and OpenGL

Today I've been switching my desktop environment over to the Mac. I've also moved all my important files into a Subversion repository, which makes switching easier. This is the first blog entry on the Mac, but it doesn't count because wxPython crashes on Leopard and I'm running Firedrop under Parallels.

Anyway, all that has nothing to do with this blog entry. Jonathan Hartley has been furthering his Pyglet exploration.
Coincidentally, Sylvain Hellegouarch has also posted some examples of working with OpenGL, from IronPython, to the IronPython Cookbook:

- OpenGL - Initializing Context With GLFW
- OpenGL - Rendering a spinning triangle with GLFW
- SDL for 2D Graphics with Zoom

These examples use the Tao Framework, which is a classic example of a project whose about page tells you nothing about the project...

Back to the Mac. The Mac OS X binaries for the latest version of Mono (1.2.6) include a prebuilt MonoDevelop. Unfortunately it isn't at all usable on Leopard. On my MacBook Pro it won't compile a trivial project and mangles lines as you edit. On my Mac Pro it wouldn't even launch. Oh well, early days I guess.

I also experimented with VMware Fusion as it supports 64-bit operating systems. Unfortunately it doesn't support 64-bit Boot Camp partitions, and as I have paid for Parallels I'll stick with it for now.

Posted by Fuzzyman on 2008-02-16 23:27:40 | Categories: Python, IronPython, Computers Tags: pyglet, opengl, graphics, apple
http://www.voidspace.org.uk/python/weblog/arch_d7_2008_02_16.shtml
Hello everyone,

I have an idea for this year's Summer of Code, so I figured I should share it early with you guys. I'm hoping for a chance to work on Zenmap this summer. I was doing some thinking/research about graphical network topology mapping (or a lack thereof) and how cool it would be if Zenmap had that functionality, so I came up with a simple algorithm and implemented it in Python. I've attached the source code (disclaimer: it's a quick hack), so I'll continue with a brief explanation of the basic ideas behind it.

First, a user supplies an IP address (or an IP range) which is our "destination", and the prefix length (PLEN), which is used for standard CIDR-style subnet scanning (performed later). All IP addresses are added to the "to be processed" (TBP) list. For every address ADDR in the TBP list, do the following:

* Run a traceroute to ADDR, and add any newly-discovered intermediate hosts to the TBP list. Update the topology graph using traceroute's data to add new nodes and construct edges between nodes.

* Run an nmap scan on ADDR/PLEN (standard CIDR-style designation) and add any discovered hosts to the TBP list.

When there are no addresses left in the TBP list, the construction of the topology graph is "finished". I say "finished" here because, of course, a user can only see the portion of the net she scanned. What I thought would be cool (but didn't implement) is that a user should be able to manually expand the constructed topology once the algorithm finishes its run (that's the TODO on stinkfist.py:85). For example, she knows that there's a live host (or hosts) on the local LAN that wasn't detected because PLEN was too big.

There are also some other ideas that can be implemented; for example, if a node has more than two edges connected to it, then it's probably a router. Also I had some thoughts about the starting node - maybe it doesn't have to be the localhost, so we can simulate being on some other machine and starting the scan from there.
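The TBP worklist idea described above can be sketched abstractly. This is not the attached script, just a minimal illustration: probe_path and scan_subnet are hypothetical stand-ins for the traceroute and nmap steps, driven here by a toy network instead of real probes:

```python
def build_topology(dest, probe_path, scan_subnet):
    """Worklist sketch of the TBP algorithm described above."""
    tbp = [dest]          # "to be processed" list
    processed = set()
    edges = set()
    while tbp:
        addr = tbp.pop()
        if addr in processed:
            continue
        processed.add(addr)
        # traceroute step: consecutive hops become edges, new hops are queued
        hops = probe_path(addr)
        for a, b in zip(hops, hops[1:]):
            edges.add((a, b))
        tbp.extend(h for h in hops if h not in processed)
        # subnet-scan step: any live neighbours found are queued too
        tbp.extend(h for h in scan_subnet(addr) if h not in processed)
    return processed, edges

# Toy data: every path starts at a local gateway "gw" behind router "r1".
paths = {
    "10.0.0.5": ["gw", "r1", "10.0.0.5"],
    "10.0.0.6": ["gw", "r1", "10.0.0.6"],
    "r1": ["gw", "r1"],
    "gw": ["gw"],
}
subnets = {"10.0.0.5": ["10.0.0.6"]}

nodes, edges = build_topology(
    "10.0.0.5",
    lambda a: paths.get(a, [a]),
    lambda a: subnets.get(a, []),
)
```

Running this against the toy data discovers all four nodes and the gateway-router edges without touching the network, which is the same fixed-point behaviour the real script gets from traceroute and nmap.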
The possibilities are endless... ;)

As for the code, I used (py)graphviz to represent the topology graph and plot it to an SVG image. You can use "stinkfist.py dst TARGET PLEN" (note the "dst" there) or "stinkfist.py dst TARGET" (PLEN = 29, default) to generate the topology file for graphviz (in the form TARGET_PLEN.dot). You can then use "draw.py FILENAME" to convert the .dot file into an .svg image (Firefox can open it). Also attached is a sample scan of my ISP's subnet with PLEN = 29, converted to a GIF image. I've blurred out the first two octets of IP addresses for privacy reasons.

So guys, any input is welcome, and please let me know if I'm reinventing the wheel here. I'm looking forward to discussing this with you.

Cheers,
Vladimir

p.s. I apologize if the name "stinkfist" sounds bad, it's just a working title.

#!/usr/bin/env python
# -*- coding: utf-8 -*-

import sys
import os
import re
import pygraphviz

default_plen = 29  # default
topology = pygraphviz.AGraph()
startaddr = "localhost"  # TODO get a real local IP (ifconfig, whatever)

def traceroute(dest):
    addrlist = []
    tr_out = os.popen("traceroute -I -n " + dest + " 2> /dev/null", "r")
    for line in tr_out:
        # find the first IP address in the current line
        addr = re.findall(r'[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+', line)
        if addr != []:
            addrlist.append(addr[0])
    return addrlist

def scan_subnet(subnet):
    addrlist = []
    nmap_out = os.popen("nmap -sP " + subnet + " 2> /dev/null", "r")
    for line in nmap_out:
        addr = re.findall(r'[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+', line)
        if addr != []:
            addrlist.append(addr[0])
    return addrlist

def destination_based(dest, plen):
    ip_list = []      # list of not-yet-processed nodes
    result_list = []  # list of processed nodes

    ### 1. traceroute to the destination address
    print "initial traceroute run:\n " + startaddr,
    for addr in traceroute(dest):
        print "-> " + addr,
        ip_list.append(addr)
    print

    # add these nodes to the topology graph
    if len(ip_list) > 0:
        topology.add_edge(startaddr, ip_list[0])  # cheat
    for i in range(0, len(ip_list)-1):
        topology.add_edge(ip_list[i], ip_list[i+1])

    ### 2.
    for addr in ip_list:
        result_list.append(addr)

        ### 2.1 traceroute to the current node, add any new nodes to ip_list
        print "processing " + addr
        lastaddr = startaddr
        for newaddr in traceroute(addr):
            if newaddr not in result_list and newaddr not in ip_list:
                ip_list.append(newaddr)
            # add the new node to the topology graph
            topology.add_edge(lastaddr, newaddr)
            lastaddr = newaddr

        ### 2.2 scan this host's subnet, using a given host width
        for newaddr in scan_subnet(addr + "/" + str(plen)):
            if newaddr not in result_list and newaddr not in ip_list:
                ip_list.append(newaddr)

        ### 2.3
        ### 2.4
        #ip_list.remove(addr)

    # save the topology into a .dot file
    tfile = dest + "_" + str(plen) + ".dot"
    topology.write(tfile)
    print "written topology into " + tfile

def incremental():
    # TODO implement
    pass

def single_node():
    # TODO implement
    pass

if __name__ == "__main__":
    if sys.argv[1] == "dst":
        if len(sys.argv) == 4:
            destination_based(sys.argv[2], sys.argv[3])
        else:
            destination_based(sys.argv[2], default_plen)
    elif sys.argv[1] == "inc":
        incremental()
    elif sys.argv[1] == "scn":
        scan_subnet(sys.argv[2])
    elif sys.argv[1] == "sgl":
        single_node()

from pygraphviz import *
import re
import sys

format = "svg"
layout = "dot"
#layout = "circo"

A = AGraph()
A.read(sys.argv[1])
A.layout(layout)
A.graph_attr["label"] = sys.argv[1]
A.node_attr["shape"] = "circle"
# put the format extension instead of the original extension and draw
A.draw(re.sub(r'\.\w+$', '', sys.argv[1]) + "." + format)

_______________________________________________
Sent through the nmap-dev mailing list
Archived at
http://seclists.org/nmap-dev/2008/q1/0409.html
Created on 2010-08-18 23:43 by dmalcolm, last changed 2018-02-27 20:48 by dmalcolm.

It's sometimes useful to be able to programmatically inject a breakpoint when debugging CPython. For example, sometimes you want a conditional breakpoint, but the logic involved is too complex to be expressed in the debugger (e.g. runtime complexity of evaluating the conditional in the debugger process, or a deficiency of the debugger itself).

I'm attaching a patch which:

- adds a Py_BREAKPOINT macro to pyport.h. This is available as a quick and dirty way of hardcoding a breakpoint in code (e.g. in extension modules), so that when you need to you can put one of these in (perhaps guarded by C-level conditionals):

      if (complex_conditional()) {
          Py_BREAKPOINT();
      }

- when Py_BREAKPOINT is defined, adds a sys.breakpoint() method. This means that you can add C-level breakpoints to Python scripts, perhaps guarded by Python-level conditionals:

      if foo and bar and not baz:
          sys.breakpoint()

Naturally this is highly system-dependent. Only tested on Linux (Fedora 13 x86_64 with gcc-4.4.4). The Windows implementation was copied from but I don't have a Windows box to test it on.

I note that the GLib library within GNOME has a similar idea with a G_BREAKPOINT macro, which has accumulated a collection of platform-specific logic (unfortunately that's LGPLv2+ licensed).

Doesn't yet have a unit test.

Note that when running on Linux when _not_ under a debugger, the default for SIGTRAP is to get a coredump:

    Trace/breakpoint trap (core dumped)

so people should be strongly discouraged from adding these calls to their code.

> Note that when running on Linux when _not_ under a debugger, the
> default for SIGTRAP is to get a coredump:
>     Trace/breakpoint trap (core dumped)
> so people should be strongly discouraged from adding these calls to
> their code.
Looks like Windows' DebugBreak has similar behavior here; according to: "If the process is not being debugged, the function uses the search logic of a standard exception handler. In most cases, this causes the calling process to terminate because of an unhandled breakpoint exception."

Adding an updated version of the patch, which adds documentation to sys.rst and adds a unit test.

I'm a little wary of this: it seems useful, but also too much like a self-destruct button for my taste. I renamed it from sys.breakpoint to sys._breakpoint, since this is CPython-specific.

I would rename Py_BREAKPOINT to _Py_BREAKPOINT since we don't really want to support this. Also, why do you allow any arguments to sys._breakpoint()?

Note to self: a messy way of forcing gdb to do the equivalent of a breakpoint directly from Python is:

    os.kill(os.getpid(), signal.SIGTRAP)

On Tue, 2010-11-02 at 17:25 +0000, Antoine Pitrou wrote:
> Antoine Pitrou <pitrou@free.fr> added the comment:
>
> I would rename Py_BREAKPOINT to _Py_BREAKPOINT since we don't really want to support this. Also, why do you allow any arguments to sys._breakpoint()?

Agreed about _Py_BREAKPOINT.

The reason for allowing arguments to sys._breakpoint() is so that the developer can pass in arbitrary objects (or collections of objects), which can then be easily inspected from the debugger. Does that seem sane?
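The os.kill note above can be turned into a runnable sketch. This is an illustration, not part of the patch: it assumes a Unix system where signal.SIGTRAP exists, and a Python-level handler is installed so the process survives when no debugger is attached (under gdb, the debugger would stop at the trap instead):

```python
import os
import signal

hits = []

def on_trap(signum, frame):
    # A debugger would normally take control here; we just record the hit.
    hits.append(signum)

# Without a handler (and with no debugger attached), SIGTRAP's default
# action is "Trace/breakpoint trap (core dumped)" - hence the warning
# above about leaving such calls in shipped code.
signal.signal(signal.SIGTRAP, on_trap)

def complex_conditional():
    # Hypothetical stand-in for a predicate too complex for the debugger.
    return True

if complex_conditional():
    os.kill(os.getpid(), signal.SIGTRAP)  # poor man's _Py_BREAKPOINT()
```

The conditional guard mirrors the pattern from the original report: the breakpoint only fires when the Python-level predicate holds.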
Maybe the docs should read:

------
This may be of use when tracking down bugs: the breakpoint can be guarded by Python-level conditionals, and supply Python-generated data::

    if foo and bar and not baz:
        sys._breakpoint(some_func(foo, bar, baz))

In the above example, if the given Python conditional holds (and no exception is raised calling "some_func"), execution will halt under the debugger within Python/sysmodule.c:sys_breakpoint, and the result of some_func() will be inspectable in the debugger as ((PyTupleObject*)args)[0]:

    static PyObject *
    sys_breakpoint(PyObject *self, PyObject *args)
    {
        _Py_BREAKPOINT();
        Py_RETURN_NONE;
    }

It can also be useful to call when debugging the CPython interpreter: if you add a call to this function immediately before the code of interest, you can step out of sys_breakpoint and then step through subsequent execution.
------

I thought about making it METH_O instead (to make it easier to look at a single object), but then you'd be forced to pass an object in when using it, I think (though None should work).

> I thought about making it METH_O instead (to make it easier to
> look at a single object), but then you'd be forced to pass an object
> in when using it, I think (though None should work).

I don't like this API. I really prefer METH_O because it is already very difficult to dump a Python object in a debugger (especially with gdb macros). Why not use two functions? One without an argument, one with an argument.

The sys module is maybe not the right place for such a low-level function. You may create a new module with a name starting with _ (e.g. _python_dbg). I'm quite sure that you will find other nice functions to add if you have a module dedicated to debugging :-)

If you don't want to create a new module, faulthandler might be used to add new debug functions.

It's maybe unrelated, but Brian Curtin also created a module to help debugging with Visual Studio: We may use the same module for all debugging functions?
(I suppose that Brian Curtin wants to integrate his minidumper into Python 3.4.) Example:

* faulthandler._breakpoint()
* faulthandler._inspect(obj)  # breakpoint with an object
* faulthandler.enable_minidumper(...)
* faulthandler.disable_minidumper(...)

--

For PowerPC, the machine code to generate a breakpoint is 0x0cc00000. The instruction looks to be called twllei. Something like: "twllei reg_ZERO,trap_Cerror". Hum, it looks like there is a family of instructions related to traps: all TW* instructions (twle, twlt, twge, ...).

Did PEP 553 make this issue obsolete?

On Fri, 2018-02-23 at 00:16 +0000, Cheryl Sabella wrote:
> Cheryl Sabella <chekat2@gmail.com> added the comment:
>
> Did PEP 553 make this issue obsolete?

I *think* they have slightly different scopes: if I'm reading it right, PEP 553 is about injecting a breakpoint into the Python debugger. This proposal was about injecting a lower-level breakpoint for debugging CPython itself (for e.g. gdb to handle).

The idea was to make it easier to, say, step through a particular CPython construct at the C level by injecting a breakpoint right before it:

    def test_something():
        # lots of setup
        sys.c_level_breakpoint()
        # whatever comes next

so that sys.c_level_breakpoint drops you into, say, gdb, and from there you can step through the following Python code at the C level, without having to express stepping through all the setup at the C/gdb level. Hope that makes sense.

That said, I'm full-time on gcc these days, and unlikely to pursue this from the CPython side.
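For contrast, the Python-level mechanism that PEP 553 eventually added (Python 3.7+) can be shown in a few lines. This sketch demonstrates the debugger-hook path the reply above distinguishes from a C-level stop:

```python
import sys

calls = []

def my_hook(*args, **kwargs):
    # The builtin breakpoint() forwards to sys.breakpointhook(); the
    # default hook runs pdb.set_trace(), but it can be redirected.
    calls.append("hit")

sys.breakpointhook = my_hook

foo = bar = True
baz = False
if foo and bar and not baz:
    breakpoint()  # Python-level, unlike the C-level trap discussed above
```

Because the hook is ordinary Python, it never reaches gdb; that is exactly the gap the original sys._breakpoint() proposal was meant to fill.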
https://bugs.python.org/issue9635
We currently mark with a mark bit in the data itself. That's very cache-inefficient.

Don't touch this until bug 558451 is fixed. Also, how about some measurements? Asserting inefficiency, as you like to point out re: hand-optimizing code, is hard to do in our modern world of super-scalar CPUs, optimizing compilers, and complex C++ sources. Finally, the properties are in a dedicated heap so they can be swept, which must involve property tree operations. It's not at all clear that mucking with this in the GC, or virtualizing it so it can be delegated back to property-tree code, is a win on perf. It's a loss on modularity. /be

Currently, marking JSScopeProperties sets a bit in a 16-byte structure. That eventually results in a cache line flush (16 bytes on most machines). So for N JSScopeProperties you write N * 32 bytes. If we pack mark bits, we would write N / 8 bytes. SunSpider allocates 7695 JSScopeProperty structs, resulting in 123k of bus traffic when marked. With a separate mark bitmap that would be less than 1k.

Then can you at least morph this bug's summary to be less over-specified? It's not obviously a non-fix to simply separate SPROP_MARK out and have a bitmap in js::PropertyTree. I'd buy that for a dollar! /be

I am still not convinced that the current allocation scheme is ideal. We take a lock every time we allocate a scope property. That's 8000-ish lock operations on SS. I think we should look into all aspects of the current scheme, not just the bitmap.

Hello? Are you forgetting that we're moving the property tree into the thread-local compartment for a given allocating cx? Let's stay on target. First it was the mark bit. Now it's the lock. Focus! /be

I meant that nicely :-P. Really, the bitmap idea is good. Can this bug be about it? Set the summary if so. /be

Really, I sounded like a jerk in this bug -- sorry about that.
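The traffic arithmetic in that measurement comment can be illustrated with a toy side-bitmap. Python is used here purely for illustration; the real structures are C++:

```python
# Numbers from the comment: 7695 JSScopeProperty structs of 16 bytes each.
N = 7695
STRUCT_SIZE = 16

# Scheme 1: a mark bit stored inside each structure - marking dirties
# every structure's own cache line, touching roughly N * STRUCT_SIZE bytes.
in_place_traffic = N * STRUCT_SIZE

# Scheme 2: a packed side bitmap - one bit per structure.
bitmap = bytearray((N + 7) // 8)

def mark(i):
    bitmap[i // 8] |= 1 << (i % 8)

def is_marked(i):
    return bool(bitmap[i // 8] & (1 << (i % 8)))

for i in range(N):
    mark(i)
```

With these numbers the in-place scheme touches 123,120 bytes (the "123k bus traffic" quoted above), while the whole bitmap fits in 962 bytes (the "less than 1k").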
There are several good reasons to make Shapes be gc-things, even if the shape heap as implemented today were to be single-threaded and per-compartment. Especially if some inlined allocator wins can be had.

If we get compartment GC done soon, the shorter path would be to avoid reinventing it, or at least refactoring the shape heap to resemble it, and instead just use it. I'll try that this week. /be

I will look into it after bug 558861 has landed.

Removed dependency after discussing this with brendan.

This is still likely a nice speedup. We allocate over 8000 shapes during SunSpider. But we don't have to block 2.0 on this. Still wanted though.

(In reply to comment #9)
> This is still likely a nice speedup.

Unless we move the property tree into the compartment, the shape would need to be allocated using the default compartment. That implies taking a lock on each allocation. So a speedup would not be possible.

I'm not sure why the dependency on bug 609104 was removed. We discussed moving the shape tree (rename property tree freely please) into the compartment, along with rt->shapeGen, rt->protoHazardShape, etc. Seems we need all that anyway to avoid locking on shape tree mutation. /be

Yes, the other bug will do all those things comment #11 lists. It's a pretty straightforward patch I need for 2.0. This bug I don't need. Just trying to reduce risk here. I still would love to see this bug fixed too, if we have the time. But we don't have to block on it.

Created attachment 496711 [details] [diff] [review]
patch

Moves shapes into the GC heap and the property tree into compartments.

    x = this;
    gczeal(1);
    for (w in x) {}

This crashes in the shell with slot > capacity. Can't figure out why.

A couple of comments:

- I removed the shape number optimization that assigns low numbers to empty shapes. I will replace this with compartment-local shape numbers, which will provide low numbers for a wider set of objects once the prop cache is compartment-local.
- The patch removes around 8000 lock/unlock pairs in the shape allocation path.

Created attachment 512314 [details] [diff] [review]
patch

Here's an updated version. It passes jit-tests and jstests, even with gczeal. I can start a browser and browse around. However, I'm still *strongly* opposed to getting this into Firefox 4. Here's why.

There is a lot of code, particularly in jsscope.cpp, that assumes it runs atomically with respect to the GC (i.e., we don't GC in the middle). This patch breaks that assumption, since shape allocations are now GC allocations, which can lead to a GC. Here are two examples of how this kind of bug can manifest itself.

I've tried to audit the code as best I can, but these sorts of errors don't jump out at you. They occur because of complex interactions between native code and the GC. I stared at the dictionary conversion code in #2 for at least an hour before I realized the problem. The only way to catch these bugs is with extensive testing, and we simply don't have time to do that in Firefox 4.

If you aren't comfortable with this patch for 4, we need a spot fix that roots shapes. I am ok with a quick fix like that for 4.

(In reply to comment #17)
> If you aren't comfortable with this patch for 4, we need a spot fix that roots
> shapes. I am ok with a quick fix like that for 4.

As pointed out in person (and over iChat :-P), we do root shapes in js_NativeGet and js_NativeSet. Can anyone point to other places that don't root shapes but do hold them across call-outs that might GC? /be

(In reply to comment #16)
>.

This is a good point.

>.

Crash where? I'm not disagreeing, looking for an exact problem statement. The non-dictionary shape lineage that used to be held by obj->lastProp should be kept alive by the |shape| local (on-stack) variable in Shape::newDictionaryList, via the conservative stack scanner. /be

(In reply to comment #19)
> Crash where? I'm not disagreeing, looking for an exact problem statement. The
> non-dictionary shape lineage that used to be held by obj->lastProp should be
> kept alive by the |shape| local (on stack) variable in
> Shape::newDictionaryList, via the conservative stack scanner.

You're right, these shapes are kept alive through the stack scan. The problem is that the JSObject that's going into dictionary mode is not scanned. Its lastProp field is NULL, and the GC marking code treats objects with a NULL lastProp as newborn and doesn't scan them at all. So anything this object points to will not be scanned either (unless it's reachable some other way). That includes its slots, its emptyShapes, and miscellaneous other things.

However, I thought of a better way to handle shape rooting in the short term. I'll try to post a patch tonight if I can.

Created attachment 517013 [details] [diff] [review]
fixed patch

This is an updated version of the patch. It is basically stable now. It passes a Linux64-only tryserver run. I'm mainly posting so that Gregor can make progress on his background sweeping patch.

I'd like to get this patch in early in the release cycle (i.e., soon), since it's the sort of thing that needs lots of testing. I'm going to try to break it up into pieces before posting it for review. Some of the changes are refactorings, so those can be committed first, separately. I'm worried about the rest of the patch though. It's not the sort of thing that is easy to back out or #ifdef off. Maybe I just need to think harder, though.

If you isolate the allocate and sweep parts, backing out would be patch -R with fuzz as needed. Or hand merging, but small enough scope that it seems ok to rely on this kind of back-out and not #ifdef or over-engineer for the worst case. We need to plan on success here, with failure an option that won't kill us in merge pain. /be

Created attachment 518465 [details] [diff] [review]
adds const to some GC type signatures

I think the patch is close to ready.
I've broken it up into pieces to make it easier to review and to back out. This first piece just changes a bunch of GC functions to take const arguments. I did this because I want the shape type that we GC to be |const Shape *|. I considered using just |Shape *|, but that seemed to require a lot more casts.

Created attachment 518474 [details] [diff] [review]
miscellaneous fixes

This piece makes some changes that make it easier to GC shapes.

- Currently we do a changeProperty during sweeping in the watchpoint code. This caused a lot of trouble for me. Jason said it was okay not to do this in the sweeping case, so I took it out.

- I found some cases where we could do a GC before the map is initialized for a newborn object. This is really bad. To avoid the problem, I NULLed out the map in js_NewGCObject. This adds an extra store to a fast path, but I think it's better than having a lot of elaborate accounting for whether we have done the store yet or not (which is what I end up with otherwise).

- I think there was a pre-existing bug in AssertValidPropertyCacheHit. It only happens in debug builds. I fixed it here.

... and some other stuff that should be explained by comments.

Created attachment 518475 [details] [diff] [review]
patch to remove Shape constructors

GC things can't have constructors, so this makes js::Shape use an init method instead.

Comment on attachment 518465 [details] [diff] [review]
adds const to some GC type signatures

> template <typename T>
> inline T *
>-Arena<T>::getAlignedThing(void *thing)
>+Arena<T>::getAlignedThing(const void *thing)

Is it intentional that the return type is discarding the const qualifier?

While describing the final patch, I noticed a problem. I'll fix it and post soon, hopefully.

(In reply to comment #26)
> Comment on attachment 518465 [details] [diff] [review]
> adds const to some GC type signatures
>
> Is it intentional that the return type is discarding the const qualifier?

Yes.
Comment on attachment 518465 [details] [diff] [review] adds const to some GC type signatures >+++ b/js/src/jsgcinlines.h >@@ -210,17 +210,17 @@ Mark(JSTracer *trc, T *thing) > > JSRuntime *rt = trc->context->runtime; > /* Don't mark things outside a compartment if we are in a per-compartment GC */ > if (rt->gcCurrentCompartment && thing->asCell()->compartment() != rt->gcCurrentCompartment) > goto out; > > if (!IS_GC_MARKING_TRACER(trc)) { > uint32 kind = GetGCThingTraceKind(thing); >- trc->callback(trc, thing, kind); >+ trc->callback(trc, (void *)thing, kind); Looks good, but could reinterpret_cast<void *> be used here or is there a problem with the template type parameter? /be Comment on attachment 518474 [details] [diff] [review] miscellaneous fixes > /* ..12345678901234567890123456789012345678901234567890123456789012345678901234567890 > * NB: DropWatchPointAndUnlock releases cx->runtime->debuggerLock in all cases. >+ * The sweeping parameter is true if the watchpoint and its object are about >+ * to be finalized, in which case we don't need to changeProperty. "to" on previous line fits the wrap-margin. >+DropWatchPointAndUnlock(JSContext *cx, JSWatchPoint *wp, uintN flag, bool sweeping) Good fix, wow. Silly to change a garbage object's shape. >@@ -1171,17 +1169,17 @@ JS_ClearWatchPoint(JSContext *cx, JSObje > for (wp = (JSWatchPoint *)rt->watchPointList.next; > &wp->links != &rt->watchPointList; > wp = (JSWatchPoint *)wp->links.next) { > if (wp->object == obj && SHAPE_USERID(wp->shape) == id) { > if (handlerp) > *handlerp = wp->handler; > if (closurep) > *closurep = wp->closure; >- return DropWatchPointAndUnlock(cx, wp, JSWP_LIVE); >+ return DropWatchPointAndUnlock(cx, wp, JSWP_LIVE, false); > Trailing whitespace alert -- trim at will! >+/* >+ * The AssertValidPropertyCacheHit function can cause a GC, which will >+ * invalidate the property cache. 
Therefore, it's important that we do >+ * ASSERT_VALID_PROPERTY_CACHE_HIT *after* we have already finished using the >+ * property cache entry in question. D'oh. We ran into issues within AssertValidPropertyCacheHit and tried to tolerate nested GC there, but clearly did not consider callers carefully. Another option would be to restore the entry if GC does nest. That seems a bit better in terms of coverage (the GNAME INC/DEC ops at least, from what I see a bit further below). > Shape::newDictionaryList(JSContext *cx, Shape **listp) > { > Shape *shape = *listp; > Shape *list = shape; > >- Shape **childp = listp; >- *childp = NULL; >+ /* ..12345678901234567890123456789012345678901234567890123456789012345678901234567890 >+ * We temporarily create the dictionary shapes using a root located on the stack. >+ * This way, the GC doesn't see any intermediate state until we switch listp >+ * at the end. Rewrap to wm=79 and less ragged right look ftw. >+ *listp = root; >+ root->listp = listp; >+ > list = *listp; > JS_ASSERT(list->inDictionary()); > list->hashify(cx->runtime); > return list; Nit: could leave list as the saved original lastProp and just use root in the unchanged lines above. Seems more pure (functionally), avoids two roles for "list" in this method. r=me with AssertPropertyCacheValid fixing up the clobbered entry, if you can. Thanks, /be Comment on attachment 518475 [details] [diff] [review] patch to remove Shape constructors JSString has inline js::gc::Cell *asCell() { return reinterpret_cast<js::gc::Cell *>(this); } inline js::gc::FreeCell *asFreeCell() { return reinterpret_cast<js::gc::FreeCell *>(this); } Could we not do that and avoid deriving Shape from gc::Cell, as JSString manages to avoid, and thereby keep the constructors? /be (In reply to comment #33) > Could we not do that and avoid deriving Shape from gc::Cell, as JSString > manages to avoid, and thereby keep the constructors? Deriving from Cell isn't the problem. 
The problem is when we instantiate Arena<Shape>. The Arena class has a union that includes elements of type T, which is Shape in this case. And things that go in unions can't have constructors. I'm not really sure how important this union is. Gregor, is there a way we could get rid of the union? Are there other unions that might cause problems? For reference, bug 613457 (in review) makes JSString derive js::gc::Cell, lets us remove asCell() and the calls everywhere. Constructors without virtual are winning and we should try to use them. This may mean some asCell "cast methods" -- lesser evil compared to banning ctors, IMHO. The union sounds avoidable but I'll wait for Gregor to weigh in. /be The union comes from the fact that sometimes we want to treat an Arena as "untyped". This is used in places where we don't care about the type or we don't know the type. The delayed marking stack for example is a linked list of Arenas with different types. There we need some kind of "base" type. I remember implementing this solution with Luke because I couldn't get rid of some GCC warnings. After we introduced the union I could also remove many ugly casts in the GC code. The union is avoidable but I don't think there is a clean way to silence a bunch of GCC warnings and you have to add casts in many different places.. (In reply to comment #38) >. My compiler doesn't like it. I am using gcc-4.2.1 and I get a ton of warnings during opt builds.? (In reply to comment #40) >? Yeah this works. That's what I meant with "no clean solution" yesterday. I thought a bit more about the union-elimination patch. I'm more in favor of it. The fact is, we do type-punning all over the place, in the GC and elsewhere. It's just that GCC doesn't warn about it. The asFreeCell() function is one example. Do we have any belief that GCC is more likely to warn in places where code will be compiled incorrectly than it places where it isn't? If not, then I think the patch is fine. 
Comment on attachment 520050 [details] [diff] [review] fixed GC unions patch I'm an old-school, die-hard, C/Unix nerd. Doing reviews out of order, this looks fine to me but I'm bouncing to Luke. /be Comment on attachment 518546 [details] [diff] [review] patch to move shapes to GC heap Beautiful. Waiting a long time for this one. Thanks for persevering under my "ctors ftw" nagging. Summary nits: MarkIfGCThingWord and markDelayedChildren both need to half-indent their switch case label lines (pre-existing, check and fix any more like this). TypedMarker nits -- blank line before commented line, overbraced if-else toDictionaryMode blank line before commented line. r=me with these picked. /be Comment on attachment 520050 [details] [diff] [review] fixed GC unions patch Nice job with Things. >+ reinterpret_cast<FreeCell *>(thing)->link = reinterpret_cast<FreeCell *>(thing + 1); > ++thing; > } >- last->cell.link = NULL; >+ reinterpret_cast<FreeCell *>(last)->link = NULL; > Cell::cellIndex() const > { >- return reinterpret_cast<const FreeCell *>(this) - reinterpret_cast<FreeCell *>(&arena()->t); >+ return (reinterpret_cast<const char *>(this) - reinterpret_cast<char *>(&arena()->t)) / sizeof(FreeCell); s/reinterpret_cast<FreeCell *>(x)/x->asFreeCell()/ ? 
> static const size_t ThingsPerArena = (ArenaSize - sizeof(AlignedArenaHeader)) / sizeof(T); >+ static const size_t Filler1Size = >+ ((sizeof(ArenaHeader) + sizeof(T) - 1) / sizeof(T)) * sizeof(T) - sizeof(ArenaHeader); >+ static const size_t Filler2Size = >+ ArenaSize - sizeof(AlignedArenaHeader) - sizeof(T) * ThingsPerArena; >+ Things<T, ThingsPerArena, Filler1Size, Filler2Size> t; I think the computation of Filler1Size could be a little more obvious in its purpose by using a piecewise function: static const size_t Filler1Size = tl::If< sizeof(ArenaHeader) % sizeof(T) == 0, 0, sizeof(T) - sizeof(ArenaHeader) % sizeof(T) >::result; Also, it seems you could define ThingsPerArena/Filler2Size so that AlignedArenaHeader could be deleted: static const size_t SpaceAfterHeader = ArenaSize - (sizeof(ArenaHeader) + Filler1Size); static const size_t ThingsPerArena = SpaceAfterHeader / sizeof(T); static const size_t Filler2Size = SpaceAfterHeader % sizeof(T); Lastly, it would be nice to have this immediately after to give a warm fuzzy feeling. static void staticAsserts() { JS_STATIC_ASSERT(offsetof(Arena<T>, t.things) % sizeof(T) == 0); JS_STATIC_ASSERT(sizeof(Arena<T>) == ArenaSize); } (In reply to comment #45) I talked about this in person with Luke. We decided to get rid of the union and deal with any strict aliasing problems as they arise. I'll post an updated patch that addresses the template comments later. Created attachment 521023 [details] [diff] [review] remove GC unions, v3 Created attachment 521030 [details] [diff] [review] remove GC unions, v4 Oops, messed up on the last one. Here it is. Comment on attachment 521030 [details] [diff] [review] remove GC unions, v4 Sweet comment I did a quick fix for the orange, and filed bug 644349 to find a more permanent fix.
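As a quick sanity check on the arithmetic discussed above (plain Python with made-up sizes, not the actual C++ template code): the round-up form of Filler1Size and the piecewise tl::If form agree for every header/thing-size combination, and the suggested SpaceAfterHeader decomposition tiles the arena exactly.

```python
ARENA_SIZE = 4096  # made-up arena size for illustration

def filler1_round_up(header, t):
    # Original form: pad the header up to the next multiple of sizeof(T).
    return ((header + t - 1) // t) * t - header

def filler1_piecewise(header, t):
    # Piecewise form suggested in the review.
    return 0 if header % t == 0 else t - header % t

for header in range(1, 129):
    for t in (4, 8, 16, 24, 32, 48):
        f1a, f1b = filler1_round_up(header, t), filler1_piecewise(header, t)
        assert f1a == f1b  # the two formulations are equivalent
        # Suggested redefinition: everything after the (padded) header.
        space_after_header = ARENA_SIZE - (header + f1b)
        things_per_arena = space_after_header // t
        filler2 = space_after_header % t
        # Header + padding + things + tail filler account for the whole arena.
        assert header + f1b + things_per_arena * t + filler2 == ARENA_SIZE

print("all padding identities hold")
```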
https://bugzilla.mozilla.org/show_bug.cgi?id=569422
Java examples for java.lang:String Strip

If a string exceeds a particular length, truncate it and append a suffix that will hopefully allow the string to retain its uniqueness.

/*
StringUtils.java : A stand alone utility class.
Copyright (C) 2000 Justin P. McC <>. To further contact the author please email jpmccar@gjt.org
Modifications, copyright 2001-2014 Tuma Solutions, LLC; distributed under the LGPL, as described above.
*/
//package com.java2s;
public class Main {
    /**
     * If a string exceeds a particular length, truncate it and append a
     * suffix that will hopefully allow the string to retain its uniqueness.
     * @since 1.15.5
     */
    public static String limitLength(String s, int len) {
        if (s == null || s.length() <= len)
            return s;
        String suffix = "... (truncated, #" + Math.abs(s.hashCode()) + ")";
        return s.substring(0, len - suffix.length()) + suffix;
    }
}
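A quick usage sketch of the method above (the wrapper class name LimitLengthDemo is mine, added so the demo compiles standalone; the numeric hash suffix varies from string to string):

```java
public class LimitLengthDemo {
    // Same logic as the snippet above, repeated so this demo is self-contained.
    public static String limitLength(String s, int len) {
        if (s == null || s.length() <= len)
            return s;
        String suffix = "... (truncated, #" + Math.abs(s.hashCode()) + ")";
        return s.substring(0, len - suffix.length()) + suffix;
    }

    public static void main(String[] args) {
        // Strings at or under the limit come back unchanged.
        System.out.println(limitLength("hello", 40));
        // Longer strings come back at exactly the limit, ending in the hash suffix.
        System.out.println(limitLength("abcdefghijklmnopqrstuvwxyz0123456789xyz", 30));
    }
}
```

Note that the limit must be at least as long as the generated suffix for the truncated form to make sense.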
http://www.java2s.com/example/java/java.lang/if-a-string-exceeds-a-particular-length-truncate-it-and-append-a-suff.html
The CPython interpreter scans the command line and the environment for various settings.

CPython implementation detail: Other implementations’ command line schemes may differ. See Alternate Implementations for further resources.

When invoking Python, you may specify any of these options: python [-bBdEhiIOqsSuvVWx?] [ (while the module file is being located, the first element will be set to "-m"). Changed in version 3.1: Supply the package name to run a __main__ submodule. Changed in version 3.4: namespace packages are also supported.

If no interface option is given, -i is implied, sys.argv[0] is an empty string ("") and the current directory will be added to the start of sys.path. Also, tab-completion and history editing are automatically enabled, if available on your platform (see Readline configuration). Changed in version 3.4: Automatic enabling of tab-completion and history editing...

Run Python in isolated mode. This also implies -E and -s. In isolated mode sys.path contains neither the script’s directory nor the user’s site-packages directory. All PYTHON* environment variables are ignored, too. Further restrictions may be imposed to prevent the user from injecting malicious code. New in version 3.4...

Reserved for various implementation-specific options. CPython currently defines the following possible values: It also allows passing arbitrary values and retrieving them through the sys._xoptions dictionary. New in version 3.3: The -X faulthandler option. New in version 3.4: The -X showrefcount and -X tracemalloc options.

These environment variables influence Python’s behavior; they are processed before the command-line switches other than -E or -I. and the hook sys.__interactivehook__ and OS X. If this is set to a non-empty string,. Both the encodingname and the :errorhandler parts are optional and have the same meaning as in str.encode(). For stderr, the :errorhandler part is ignored; the handler will always be 'backslashreplace'.
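The encodingname:errorhandler syntax can be seen in action with a small sketch (it spawns a child interpreter, since the variable is read at interpreter startup; the encoding choice here is just an example):

```python
import os
import subprocess
import sys

# PYTHONIOENCODING=encodingname:errorhandler, applied to a child interpreter.
env = dict(os.environ, PYTHONIOENCODING="ascii:backslashreplace")
out = subprocess.check_output(
    [sys.executable, "-c", "print('caf\\u00e9')"],  # the child prints 'café'
    env=env,
    text=True,
)
print(out.strip())  # caf\xe9
```

The non-ASCII character survives as a backslash escape instead of raising a UnicodeEncodeError on the child's stdout.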
Changed in version 3.4: The encodingname part is now optional.

If this is set, Python won’t add the user site-packages directory to sys.path.

Defines the user base directory, which is used to compute the path of the user site-packages directory and Distutils installation paths for python setup.py install --user.

-X faulthandler option. New in version 3.3.

See tracemalloc.start() for more information. New in version 3.4.

If this environment variable is set to a non-empty string, enable the debug mode of the asyncio module. New in version 3.4.
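Options passed with -X, like the faulthandler one mentioned above, are exposed to the running program through the sys._xoptions dictionary. A small sketch using a child process (the "demo" option name is made up for illustration):

```python
import subprocess
import sys

# Pass one bare option and one name=value option via -X, then ask the child
# interpreter what landed in sys._xoptions.
out = subprocess.check_output(
    [sys.executable, "-X", "faulthandler", "-X", "demo=1",
     "-c", "import sys; print(sorted(sys._xoptions.items()))"],
    text=True,
)
print(out.strip())  # [('demo', '1'), ('faulthandler', True)]
```

A bare `-X name` maps to the value True, while `-X name=value` keeps the value as a string.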
http://wingware.com/psupport/python-manual/3.4/using/cmdline.html
Web Services allow several operations to be published at a single end-point. However, there may be a scenario where more than one operation must be invoked, in a specific order, and an internal state needs to be maintained. Let’s take a concrete but simple example for demonstration. There is a service called MathService that publishes two operations, Add and Subtract. The Add operation takes two integers as parameters and returns an integer – the result of adding the two parameter values. Subtract takes just one parameter. It returns the difference between the result of the last operation – addition or subtraction – by the same client and the value of the parameter passed. As an example, a first call to Add with values 4 and 5 returns 9. Then, a call to Subtract with a value of 3 would return 6. Next, a call to Subtract with a value of 10 would return -4, and so on. How can this web service be modified to maintain the state? For extreme cases, one must look at the Web Service Transactions Specification. I am providing a quick-heal solution. But the same idea can be used in any web services that require maintaining state – not only by one service but by multiple services. I would modify the Add operation to return two items – the result of the addition and a unique token. I also modify the Subtract operation to take an additional parameter – a token that would be used to internally track the data. This is required since for every operation invocation, a new instance of the class representing the web service is created. Using the unique token, the data would be cached on the server side. The implementation of the cache is what remains in question. I am using an inbuilt implementation of the cache for MathService – using the HttpRuntime Cache. Persistent caching may be provided using databases and other dependencies.
[WebService(Namespace="")]
public class MathService : WebService
{
    [WebMethod]
    [return: XmlElement("result", typeof(int)), XmlElement("token", typeof(string))]
    public object[] Add(int x, int y)
    {
        int z = x + y;
        string token = Guid.NewGuid().ToString();
        Cache cache = HttpRuntime.Cache;
        cache.Insert(token, z, null, Cache.NoAbsoluteExpiration, TimeSpan.FromMinutes(30));
        return new object[] { z, token };
    }

    [WebMethod]
    [return: XmlElement("result", typeof(int))]
    public int Subtract(int y, string token)
    {
        Cache cache = HttpRuntime.Cache;
        object obj = cache[token];
        int x = 0;
        if (obj != null)
        {
            x = (int) obj;
        }
        int z = x - y;
        cache[token] = z;
        return z;
    }
}

The web service is ready to be deployed and tested. You can download the source code and test client to see how all this works! The output may appear as shown below: You can also visit the web-service-related blog at Eduzine© - electronic technology magazine of EduJini, the company that I work with. One can have stateful services without going into the nitty-gritties of Web Service Transactions, or using WSE to modify the Head of the Envelope, or relying on cookies that some enterprises disallow. You may need to update the web reference for the web-service in the client. I used 'File System' for the ASP.NET application and the Web-Server was on port 1073. Update the reference for your case.

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).

mastergaurav wrote: One can use HttpSession as well. However, that would be implemented thru Cookies.
mastergaurav wrote: Sessions are, unless specifically mentioned, always maintained thru Cookies. How else do you think it is done?
mastergaurav wrote: Well... of course. I mean the same.
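The token-plus-cache pattern above can be sketched language-neutrally. This Python approximation (class and method names mirror the C# service; a plain dict stands in for HttpRuntime.Cache and its expiration policy) reproduces the article's 9 / 6 / -4 example:

```python
import uuid

class MathService:
    """Token-based server-side state, mimicking the Add/Subtract contract above."""

    def __init__(self):
        self._cache = {}  # stands in for HttpRuntime.Cache (no expiration here)

    def add(self, x, y):
        result = x + y
        token = str(uuid.uuid4())  # unique handle handed back to the client
        self._cache[token] = result
        return result, token

    def subtract(self, y, token):
        # Missing/expired tokens fall back to 0, like the C# null check above.
        result = self._cache.get(token, 0) - y
        self._cache[token] = result
        return result

svc = MathService()
total, token = svc.add(4, 5)
print(total)                    # 9
print(svc.subtract(3, token))   # 6
print(svc.subtract(10, token))  # -4
```

The essential point is the same as in the C# version: each call creates no per-client instance state, so the token is the only link between calls.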
http://www.codeproject.com/Articles/15689/Stateful-Web-Services?msg=1689323
Good Morning. I am sitting down to rethink my entire procedure of sharing files and folders on the network. I am wondering if it is better to just have one mapped network drive, and in that have all of the different folders the users require. I would then have the top-level folder have the permissions based on their access, and the best part is that users who don't have access don't even see the folder. Currently, and historically, here we have about 10 mapped network drives, with folders in each. This is becoming unmanageable, and I was wondering how everyone else maintains their network files and access. Ultimately I'd like to be able to manage this with PowerShell, and this seems the way to get started. The only caveat I can foresee right now before I start this is that if the drive hosting the share becomes full, I need to either add storage to it, or add in another file server with more storage, but then that will force me to use a different drive letter for other files and secured folders. Thoughts on this? What are other people doing in this regard?

From the small 1-person company to the large Fortune 500 companies - this solution just plain works. You will create a whole lot of groups, for nesting, but the other side is that with the scripts that he provides (Since I can no longer attach zip files to SpiceWorks posts, PM me your email asking for the scripts that he references in the video and I'll send them your way), you just need to run his script and you can see who has what access to what folders, and what type of access they have. (Eg. who has access to this folder? What does this user have access to? Run the script, it will tell you.) It will change the way you assign permissions to everything - and it will make your life easy. It will take a bit of grunt work to switch over to RBAC but if you do it right, it's a 1-time change.
On each file share's NTFS Permissions tab, you will only have 1 security group with Read, 1 with Modify, 1 with List Folder Contents, 1 with Full, and 1 SYSTEM. Nothing else. Share permissions will ALWAYS be Everyone > Full Control (unless there is a specific need that the share must stay read-only). Grant access based on groups, not individuals, and not groups that individuals are a part of (eg Sales Department). Create a nested group Access Control List (ACL) structure. This way, you can find out EVERYTHING by using the included scripts. If you wanted to take it a step further by having a File Admins group (not just Domain Admins), set up a "File Admins" group and use it instead of Domain Admins on all root folder NTFS Permissions for where the rest of your files are located (eg. D:\CompanyFiles). Set it up as a member of every ACL that has Full Control. Don't make Domain Admins a member of that group, but control the individuals as members of that File Admins group. Then you can give Domain Admin to any other admin user (if needed), and then they will not have access to your company's files and folders. Of course they can take ownership of the folder structure as they are a Local Administrator of the file server (by rights of Domain Admin), but that will eradicate all security on the object and they will be caught in the act. I also recommend that you couple RBAC with Access Based Enumeration (ABE). This way users can only see files and folders that they have access to. If they don't have access, they can't see it. It makes it much more secure, and an easier experience for the user. On top of everything, make your life easier by enabling Distributed File System (DFS) with replication (DFSR) if you have multiple servers for files. Using DFS(R) coupled with RBAC, you have a secure, easy-to-manage, least-privilege, best-practice file system in effect at your company.
The beauty of DFS is that if you change servers in the future (buy a new server, etc.) or you need some extra space that is not utilized on a different server, you can adjust DFS to suit your needs, or introduce DFSR and sync across the data automatically, keeping High Availability (HA) and/or faster access at multiple locations, or even replication of data to a replacement server. Don't forget, DFS(R) is a service and needs to be duplicated for HA, just like Domain Controllers (DCs), which is why I suggest using the DCs as your DFS Namespace Servers as they usually are already distributed properly for HA (separate hardware, separate UPSs, etc.) and AD already uses DFSR (Server 2012) to replicate SysVol and other AD folders. The best advice is to do it step by step. Start by mapping out your entire folder structure's permission set using AccessEnum. This will not only give you a backup of the security settings, but also a map as to how to create your security groups. Start with your job titles - create groups for them. Add the appropriate people into the job titles as members. Then go to the other end of the scale - file structure - and create your groups with regard to your department security. E.g., Sales will need 4 groups minimum: ACL_Sales_Read, ACL_Sales_Modify, ACL_Sales_ListFolderContents (in advanced permissions, set this for "This folder only"), and ACL_Sales_Full. Add the sales-oriented job title groups you just created as "member of" the ACL_Sales_Modify group. Add all ACL_Sales_* groups to the Sales folder. LEAVE IT FOR A FEW DAYS - you haven't taken away anyone's access, you've only added groups, and you need to make sure that all users have logged in at least one more time so that the TGT from Kerberos has been updated to include the group memberships you added them to. After a few days (a week is good - it gives those who are remote users who don't connect to the network often enough time to get the updates.
If all are internal, next day is fine) remove direct member permissions on the Sales folder. After this, create another set of groups for the next folders down inside ACL_Sales. 2015 Sales Folder_Modify - make this group a MEMBER OF ACL_Sales_ListFolderContents, and add the member ROLES to this group. Add the group to the folder security. Wait a week, and then remove the direct memberships. At this point you can remove or include inheritance from the folder structure above depending on the nature of the folder. You then finish this for all file shares. Then move on to printers: ACL_Printer1_Print, ACL_Printer1_Manage, ACL_Printer1_Full, etc. You may have some roles that contain multiple ACL groups that replicate across roles - eg, if you have a committee like Health & Safety. Then create a new role for that, add all the ACL memberships in the "member of" tab, and for the members, add the individual users as you're creating a new role. Wait for the TGT to be recreated at next login for each user (wait a few days) and then remove the direct ACL groups from their employee role. Now the user Sally has a role of Sales Representative, and also Health & Safety Committee Representative.

Jeff: For your replies, I'll answer all three. I don't think my users are ready to wholly drop a mapped drive. But I certainly could scale it back to one, and then have all folders populate off of that. I currently have DFS set up, but I did not build it here. Ideally I would like to have a situation where I have two file servers running DFS at my main site, plus one file server with DFS at my remote site, and those three all syncing all of my data. Yes, we are currently running a login script to map drives and determine who gets access to what data. Welcome to 1999. With my projects the past two years, I haven't been able to address this at all, but this spring and summer I have no projects and need to get this in place before we move buildings in January.
I have one mapped drive for everyone's shared data, abstracted thru DFS namespaces; all the folders underneath are controlled by their own NTFS permissions and hidden with ABE. If you are redoing this, now is the time to move to DFS.

This is exactly what I was thinking. I do need to brush up on the DFS namespace stuff a bit, as my understanding of it is little at the moment. I have ABE working already, as my 'public P:' drive has the bulk of our data stored in it. One other thing I should mention: a couple of our software applications do not work with UNC drives, and require a mapped drive to function. But I can certainly just leave one mapped drive.

Hi Bob. I have 100 users between two sites. Still mapping via a .bat login script. This was in place when I started here. There is not a SAN in place. I am a VMware shop, and my storage needs are small I think. We only have about 2TB of live data on our network. I also prefer to manage it via DFS servers rather than one SAN. I am using DFS now. I have two file servers. S06 and S01 sync together with DFS and both are about 750G each. The users' home folders are in DFS but only accessible from S06. I'd like to be able to set up roaming profiles and have their home drive mapped to the DFS namespace, so I can do maintenance on the individual servers monthly with little disruption. Roaming profiles are not working here yet, but I have to get them working. I run Veeam and my backups are rock solid. I have a great understanding of that, and would like to continue to keep this working as it is.

This is my goal, and to be able to manage it with PowerShell. It will be a work in progress but certainly worth it when I get there. My other issue is I am a one-man shop, and I need to make my life easier as I go along. Spending twice the time to get something new working is okay for me because then when I need to maintain it later it only takes two minutes.
One other thing I should mention, a couple of our software applications do not work with UNC drives, and require a mapped drive to function. ... Aside from what I said, I do have a mapped drive for Sage, which like your situation requires a mapped drive. I don't lump that into the main file share. Just the Sage users get that one mapped. I don't even direct that one over DFS. Sage developers are special snowflakes. ... We use folder redirection and not roaming profiles. Right now I replicate the user's personal data to another file server, but I don't namespace it. I ran into some doh! issues when I first had the namespace setup to do that. I'd find that the DFS client would (slow network? i don't know) kick them to the less preferred file server, then fail them back to the preferred. The changes they made to file in the meantime hadn't replicated from the secondary to the preferred server, so it appeared to the users like they lost changes. Frustrating for them and me. So I took the secondary off as a target, and if I need to take the primary down, I change the DFS targets and wait for the DFS referrals to expire. Kind of a pain, but I don't have to do it ever really, my file server is reliable and I do maintenance after hours. Just keep in mind the replication delays and DFS referral timeouts if you plan to bring servers down. I am moving to the same single-drive solution. I am configuring it like this. N: - THE network drive. Points to a DFS share \\domain\namespace\FILES Inside the FILES folder in the namespace, I create all the same top-level directories we have now. FILES FINANCE HR NURSING and then I point those folders to their homes on the physical servers. With ABE, no one sees a folder they don't have access to. Everything appears under N: with one drive map. And, I can move anything I want to a different location just by manipulating the DFS pointers. The only server name issue that remains is the printer services. They can't be abstracted through DFS. 
A company I consult for that has their own out of state IT department put something in place years ago back in the days of 95 that really made sense to me and I've tried my best to emulate it whenever possible. You can tell the age of the practice considering the low drive letter they picked way back before media card readers were common. :) g:\ - each folder has specific permissions put on it. requests for new root folders or changes have to be made through HR via physical form (although now i think they use a ticket system). j:\ - completely open network share. nothing has any specific permissions on it and never will. everyone knows this. s:\ - corporate home office network share - i only mention it because my next statement wouldn't be true... Nothing else exists. Personally I love this setup. I know you could do it with one drive letter and then two folders inside, one open and one with permissions but it's so much easier to say "throw it on the j drive" or "go into our folder on the g drive". We have a similar setup. As everyone else mentioned, DFS/Namespaces for replication and ease of upgrades/changes. Ours is setup on 2 shares; one for public where every authenticated user has access, and another for restricted where access is by group level/department.Edited May 4, 2016 at 18:29 UTC I have used both these approaches at different customers' site. In some sites, where they do not have more than 2 or 3 Shared Folders, I use one drive (letter) per Shared Folder and it has been working fine. GPO to determine who has what mapped drive. In other sites where there are many Shared Folders, and this is actually the method I liked more, I have a unique S Drive (S for Shared) and it contains all the Shared Folders, then I created Security Groups to access folders - So All I have to do is Add/Remove a user To/From a Security Group in order to Grant him the access. 
It takes time to create security groups and manage the security settings on the different folders but once it is setup it only takes few minutes to manage any changes. Good luck ;) we do something similar here. We have an R:\ drive, that is the department drive. If you are in Finance you will see finance folders, lending will see lending folders, etc... if you require access to both we just dump the user in the corresponding security group and now you'll see both Lending and Finance folders in your R: drive. All controlled via security groups/GPO, super easy to keep track of and manage. I just have two mapped drivers one for shared files such as you are talking and one that points to the users' directory (their own directory, not the top level where they can see everyone's folders, although they'd have no permissions to be able to access anything other than their own). I agree though if you're going to have multiple file servers then DFS is the way to go. One man shop here also. Getting users to move away from Drive letters to UNC names not easy. I had success getting people to navigate once to the top level DFS share, showing them they could reach all the mapped fiolders thru that, and then having them add that unc folder location to their explorer favorites. Everyone uses browser favs, but no one was using explorer favs. This was especially useful for my traveling/home users, who would have login script drive mapping fail as their VPN was not up yet. Dan
https://community.spiceworks.com/topic/1593547-shared-network-folders
Opened 7 months ago Closed 6 months ago Last modified 6 months ago

#26779 closed enhancement (fixed) py3: fix graph_input.py and hypergraph_generators.py

Description: doctest errors in hypergraph_generators.py are due to bytes vs str. We add a test in graph_input.py to fix the issue.

Change History (14)

comment:1 Changed 7 months ago by - Branch set to public/26779_graph_input - Cc tscrim chapoton added - Commit set to c58ed7920a65bfde0e651f5393ca165db546bccf - Status changed from new to needs_review

comment:2 Changed 7 months ago by - Cc embray added probably, it would be better to use "bytes_to_str" from sage.cpython.string import bytes_to_str cc-ing Erik, who knows better about the unicode problem

comment:3 Changed 7 months ago by If it's safer I can use it. I was not aware of that method. Let me know if I should do the change.

comment:4 Changed 7 months ago by yes, please use "bytes_to_str".

comment:5 Changed 7 months ago by - Commit changed from c58ed7920a65bfde0e651f5393ca165db546bccf to 99e5b330baf3b8a5c7b16107ced5381cba83a2ea Branch pushed to git repo; I updated commit sha1. New commits:

comment:6 Changed 7 months ago by Thank you. I agree it is much cleaner this way.

comment:7 Changed 6 months ago by - Reviewers set to Frédéric Chapoton - Status changed from needs_review to positive_review ok, let it be

comment:8 Changed 6 months ago by - Status changed from positive_review to needs_work Please try to avoid using inline imports if it isn't strictly necessary; see recent discussion about this at

comment:9 Changed 6 months ago by - Commit changed from 99e5b330baf3b8a5c7b16107ced5381cba83a2ea to 016fa31eb0083267eaccf15c5a508da5aaabbbb1 Branch pushed to git repo; I updated commit sha1. New commits:

comment:10 Changed 6 months ago by - Status changed from needs_work to needs_review I have not touched the existing imports. But it could also be done here if needed.
comment:11 Changed 6 months ago by - Status changed from needs_review to positive_review ok

comment:12 Changed 6 months ago by - Branch changed from public/26779_graph_input to 016fa31eb0083267eaccf15c5a508da5aaabbbb1 - Resolution set to fixed - Status changed from positive_review to closed

comment:13 Changed 6 months ago by - Commit 016fa31eb0083267eaccf15c5a508da5aaabbbb1 deleted It would also be great to squash trivial changes like those together, although at least in this case both versions are at least valid (it's more of a problem when one commit contains trivial errors).

comment:14 Changed 6 months ago by Didn't mean to delete that. I think that's a bug in the trac plugin... I don't know if it's the best way to do it. New commits:
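For context, the helper adopted in this ticket lives in sage.cpython.string; a simplified sketch of what such a bytes-vs-str normalizer does (my approximation for illustration, not Sage's actual implementation) is:

```python
def bytes_to_str(b, encoding="utf-8", errors="strict"):
    """Return b decoded to str if it is bytes; return other values unchanged."""
    if isinstance(b, bytes):
        return b.decode(encoding, errors)
    return b

# Typical py3 fix: data read from a subprocess or a C library arrives as bytes.
print(bytes_to_str(b">>graph6<<"))   # >>graph6<<
print(bytes_to_str("already text"))  # already text
```

On Python 2 the str/bytes distinction collapses, which is why a shared helper beats sprinkling .decode() calls through the doctested code.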
https://trac.sagemath.org/ticket/26779
I'm having a little trouble with Unity's component approach. I've created a Dad class that extends MonoBehaviour and a Family class that also extends MonoBehaviour but also has a reference to Dad. I can make sure that Family always has a Dad reference by using [RequireComponent]; the problem is that every Dad should have a parameter 'name', and this parameter must be set by its Family. As I shouldn't use constructors on MonoBehaviours, I've created a method to set the name:

public void Init(string aName) { this.name = aName; }

But I want to receive an error when I forget to call Init. Is there a way to make sure the Init method will always be called?

Shouldn't it actually be the other way around, that a Dad always has to have a Family? Then every Dad will always get a name. Anyway, I think what you want to do here is make sure that Dad.name is always set when you want to use it. For that purpose you can have a getter function or property which checks that:

public class Dad : MonoBehaviour
{
    private string name;
    public string Name
    {
        get
        {
            if (string.IsNullOrEmpty(name))
            {
                Debug.LogError("Dad " + gameObject.name + " does not have a name!");
            }
            return name;
        }
    }
    ...
}

Answer by xarismax · Nov 09, 2017 at 06:37 PM

At Family.Start() -> call Dad.Init(name); Think of Dad as a "Data Object"; Family is responsible for calling its methods. For this reason don't add Awake or Start on Dad; only the Family class is allowed to initialise Dad. 1) Initialization Responsibility: This makes explicit that Dad is not responsible for initializing itself; the responsibility falls to another class (Family). 2) Explicit Execution Order: This ensures that Family will be called before Dad, as Dad is initialized from Family before it is used.
That's what I was referring to: either you remember to call it, or you reverse the dependency. For example you could create a Family interface/abstract base class FamilyABC, and in Dad.Start() do:

FamilyABC family = GetComponent<FamilyABC>();
if (family != null) { family.Init(this); }

Thanks! That should work.
https://answers.unity.com/questions/1430924/how-to-make-sure-custom-init-methods-are-called.html
Explain facet plots in plotly with the various functions used in them.

Facet plots are plots in which each panel shows a different subset of the data; this approach partitions a plot into a matrix of panels.

import plotly.express as px
import seaborn as sns

Sample_data = sns.load_dataset("tips")
Sample_data.head()

fig = px.bar(Sample_data, x="sex", y="total_bill", color="smoker", barmode="group",
             facet_row="time", facet_col="day",
             category_orders={"day": ["Thur", "Fri", "Sat", "Sun"], "time": ["Lunch", "Dinner"]})
fig.show()

Here in the above plot, various arguments are used:

x - Represents the data to be plotted on the x-axis.

y - Represents the data to be plotted on the y-axis.

color - Represents the color on the plot, taken from a column present in the data.

barmode - Can be "group" or "overlay". In group mode, bars are placed beside each other; in overlay mode, they are drawn on top of one another.

facet_row - Can be a column from the data; its values are used to assign marks to facetted subplots in the vertical direction.

facet_col - Can be a column from the data; its values are used to assign marks to facetted subplots in the horizontal direction.

category_orders - A dict with string keys and lists of strings as values; this parameter is used to force a specific ordering of values per column. The keys of this dict should correspond to column names, and the values should be lists of strings corresponding to the specific display order desired.
https://www.projectpro.io/recipes/explain-facet-plots-plotly-with-various-functions
> Every time I switch scenes in Unity, it will ask if I want to save the changes I've made to a scene, even though I didn't make any. I noticed this started happening after I updated. Could it just be a bug or could there be another reason?

Facing the same problem. Have you got a solution yet?

Answer by eddyspaghetti · Oct 08, 2018 at 03:45 PM

I know this is a little old, but this has been happening to me in 2018.2.0f2. I registered a callback with the Undo system to try to figure out what's happening, using the code below (in one of my editor classes). When I add this, I can see what undo-able changes are happening to the scene. In our case the issue is that Unity is making tiny changes to the anchor positions of the Handle game object for a scrollbar in our UI. It's stuff like:

"Changed value for target Handle:m_AnchorMin.y to 0.0000004172325 from 0.0000011324883"

I haven't figured out how to suppress tiny changes, but at least I've ruled out issues with our own code.

public static bool TrackSceneChanges = false;

public static void EnableSceneDirtyChecker()
{
    if (!TrackSceneChanges)
    {
        Undo.postprocessModifications += OnPostProcessModifications;
        TrackSceneChanges = true;
    }
}

public static void DisableSceneDirtyChecker()
{
    if (TrackSceneChanges)
    {
        // Unsubscribe (the opposite of Enable) and clear the flag.
        Undo.postprocessModifications -= OnPostProcessModifications;
        TrackSceneChanges = false;
    }
}

public static UndoPropertyModification[] OnPostProcessModifications(UndoPropertyModification[] propertyModifications)
{
    Debug.Log("Dirty!");
    foreach (UndoPropertyModification mod in propertyModifications)
    {
        Debug.Log($"Changed value for target {mod.currentValue.target.name}:{mod.currentValue.propertyPath} to {mod.currentValue.value} from {mod.previousValue.value}");
    }
    return propertyModifications;
}

I have VSTS (source control) in my project and I checked: when I open my scene, Unity immediately marks it as unsaved.
If I save and then check in source control what has changed, it is the same thing: some object's anchor changed from 0 to 0.0000324. And every time I open the scene it keeps changing a few decimal values. This generates garbage in my project's source control all the time... I still haven't found the reason Unity is doing/generating these floating-point values. I'm using Unity 2018.2.5

Answer by OneCept-Games · Jan 05, 2018 at 04:10 PM

Could be that some Editor scripts are changing your scene objects. Do you have your own or Asset Store purchased Editor scripts in your project?

I don't think so. I have downloaded a few things from the store but I think I've deleted them all because I don't use them anymore. Could EditorUtility.SaveFilePanelInProject be causing the problem?

No built-in Editor scripts should cause this. Speaking of saving a scene, I use this script to actually remember to always save on Play. It might help you in this case also, because then it saves automatically, at least on Play, and you might be able to change it to save on Close by using EditorSceneManager.SceneClosingCallback.

using UnityEngine;
using UnityEditor;
using UnityEditor.SceneManagement;

namespace OneCept
{
    [InitializeOnLoad]
    public class SaveOnPlay
    {
        static SaveOnPlay()
        {
            EditorApplication.playmodeStateChanged += SaveCurrentScene;
        }

        static void SaveCurrentScene()
        {
            if (!EditorApplication.isPlaying && EditorApplication.isPlayingOrWillChangePlaymode)
            {
                //Debug.Log("Saving scene on play...");
                EditorSceneManager.SaveOpenScenes();
            }
        }
    }
}

Answer by Thaun_ · Jan 05, 2018 at 04:52 PM

No, there isn't any bug with that. Let's say you haven't changed anything; you think you haven't changed anything, but Unity itself might have automatically made tiny, tiny changes that need to be saved. Don't worry, but we suggest that you keep saving it.

I just wish it wouldn't!
It's very annoying when I have to switch between a few scenes and I keep getting the popup to save for each one. Well, the quick way is to just press Enter.

Answer by wlwlz · Nov 07, 2018 at 07:06 PM

@caulfield85 @eddyspaghetti this bug (it is a bug!) happens on Unity 2018.1.5f1 as well. I guess that it has something to do with UI layout. For me the solution was to disable the child (with the layouts) in my root UI game object (I only use one UI canvas and have its children heavily nested). Something in that child is causing this issue I guess; I use layouts a lot inside it (it's a menu). I'll update my answer when I figure it out.

I have the same issue with 2019.1.0a9 with a canvas in the scene. It's annoying to keep discarding changes.
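The unsolved piece in eddyspaghetti's answer, suppressing the sub-micro anchor deltas, boils down to a tolerance comparison. A sketch in plain Python (illustrative, not Unity API; the 1e-5 threshold is an assumption, not a Unity default) of treating a modification as real only when the old and new values differ by more than a tolerance:

```python
# Decide whether a logged property modification is a genuine edit or
# floating-point noise, using an absolute tolerance.
import math

def is_real_change(old, new, abs_tol=1e-5):
    """Treat a modification as real only if |old - new| exceeds abs_tol."""
    return not math.isclose(old, new, rel_tol=0.0, abs_tol=abs_tol)

# The deltas from the logged messages above are pure noise:
print(is_real_change(0.0000011324883, 0.0000004172325))  # False
# A genuine edit is still detected:
print(is_real_change(0.0, 0.5))  # True
```

In the editor callback above, the same test could in principle be applied to each UndoPropertyModification (after parsing its old and new values) before deciding whether to treat the scene as dirty.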
https://answers.unity.com/questions/1450829/unity-keeps-asking-to-save-a-scene.html
- Newbie?
- Creating Layouts and Activities
- Something on Events
- Using an Intent
- Intents and Extras
- Done

Newbie?

To follow this tutorial, a basic understanding of Android, Activities and Layouts is needed. I suggest you read those tutorials by gabehabe.

Creating Layouts and Activities

Now that you basically know how to do things on Android, we're going to create the two Activities that we're working with, their Layouts and the used Strings. I created a new Android project in Eclipse and called it "ActivitysAndIntents".

Main.java

This is the Main Activity, which shows up when the App is started. This code is mostly basic stuff to create the Activity:

package dic.tutorial.Intends;

import android.app.Activity;
import android.os.Bundle;

public class Main extends Activity {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        this.setContentView(R.layout.main);
    }
}

The Main-Layout:

The Layout for the Main Activity, written in XML:

<?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns: <TextView android: <Button android: </LinearLayout>

The "Hit"-Activity

This is the second Activity we're going to "call":

package dic.tutorial.Intends;

import android.app.Activity;
import android.os.Bundle;

public class Hit extends Activity {
    public void onCreate(Bundle bundle) {
        super.onCreate(bundle);
        this.setContentView(R.layout.hit);
    }
}

The Hit-Layout

The Layout for the "Hit"-Activity, written in XML:

<?xml version="1.0" encoding="utf-8"?> <LinearLayout xmlns: <TextView android: </LinearLayout>

The Strings

Last but not least, the XML file containing the used Strings (important if you want to make your App multilingual):

<?xml version="1.0" encoding="utf-8"?> <resources> <string name="hello">Simple Demo on Activities and Intents!</string> <string name="app_name">ActivitysAndIntents</string> <string name="button">Fire the Intent!</string> <string name="hit"> You just fired an
Intent!\n Use the "Back"-Button and do it again! </string> </resources>

Something on Events

So, now that we have our basic components, let's code! The App should start in the Emulator now and you'll see the Main Activity. There should be some text and a Button which does... wait for it... nothing! So now, we're about to change that. In the tutorial "Simple UI with Events", gabehabe used an OnClickListener, which is almost the same as an ActionListener known from Java. Well, in Android, there's another (maybe cooler) way to make a Button do something: you can specify the function that should be called when the Button is clicked in the XML file. To do that, we're going to add something to the Main-Layout:

<Button android: [...] android:

So, that's it? Yep, almost. Now we need to create the method in the Main Activity. A method that is called like this needs to be public and void. Also, a parameter of type View will be passed to the method:

public void fire(View view) {
    Toast.makeText(this, "FIRE!", Toast.LENGTH_SHORT).show();
}

Now, if you start the App in the Emulator and click the Button, a Toast (yes, it's really called Toast!) should show up. This is much easier and maybe even faster than always calling the onClick method and searching for the correct Button... And the code looks way simpler.

Using an Intent

Okay, now that we have our Activities, our Layouts and the Button up and running, it's time to use an Intent! Android is a component-based system. Every Activity represents a component. Of course, you need something to communicate between those components. That's where the Intent comes into play. Intents are used to call/open a component from the same App, or even from another App (for example the system Call App). To open Activities from other Apps, your App needs special rights... But that's another story... Now, how can we use an Intent to open the second component from our App, which is the Hit-Activity?
public void fire(View view) {
    // Create the Intent
    Intent i = new Intent(this, Hit.class);
    // Start the other Activity, using the Intent
    this.startActivity(i);
}

The constructor of the Intent class expects two parameters: the application context and the class which contains the Activity. In this case, we can use this for the application context, because we are in an Activity, and the Activity class extends the Context class. If this is not available, the method getApplicationContext() will do the job. After we created the Intent, we used the startActivity(Intent i) method to open the specified Activity. If you test this, you'll see something like this in your Emulator:

This error is thrown because Android doesn't know the Activity. We need to declare it as an Activity in the AndroidManifest of our Application:

<application [...]> <activity android: </application>

Now that the Activity is declared in the Manifest, the App should work and you should see the other Activity.

Intents and Extras

But what if we wanted to pass the opened Activity some values? Well, Activities can have Extras, which are almost like little boxes where you can put some stuff in and then send it with the Intent. The opened Activity can unpack the box and use the stuff that was inside. A little example of how this could be done:

Sending the Intent

I used just a String; you can also create a new EditText and get the text from it...

public void fire(View view) {
    // Create the Intent
    Intent i = new Intent(this, Hit.class);
    // Put the Extra
    i.putExtra("SOME_EXTRA", "Change ME!");
    // Send the Intent
    this.startActivity(i);
}

Receiving the Intent

Now, we get the Extra from the Intent.
This is done in the onCreate() method:

public void onCreate(Bundle bundle) {
    super.onCreate(bundle);
    this.setContentView(R.layout.hit);
    // Get the Extras from the Intent:
    Bundle extras = this.getIntent().getExtras();
    // Check if there is a package and
    // if the String is in it:
    if (extras != null && extras.containsKey("SOME_EXTRA")) {
        // Get the TextView to display it:
        TextView show = (TextView) this.findViewById(R.id.hit);
        // Get the String from the package:
        String text = extras.getString("SOME_EXTRA");
        // Show it on the TextView
        show.setText(text);
    } else {
        // Make some Toast (show() is needed for the Toast to appear):
        Toast.makeText(getApplicationContext(), "Something went wrong...", Toast.LENGTH_SHORT).show();
    }
}

The Extra boxes are like HashMaps in Java (Link). There are putExtra() methods for many Java data types. However, passing an Object is a little more complicated (see here).

Done

... yay, not really. Intents can do more than just open another Activity. For example, you can call/open a system Activity (like the Dialer or the Contact Book) or an Activity from another App with an Intent. Also, there's another technique called "Sub-Activities", but that's stuff for like one or two more tutorials, just focusing on those things. Let's see what time brings with it... If you have any suggestions about "How to do it better" or "That's not correct", don't hesitate to answer this post. Also, I'd like to have some feedback on my writing style... Greetings and Happy Coding: Luke
http://www.dreamincode.net/forums/topic/223914-activities-intents-and-something-on-events/
Remarks by Paul Flessner
Professional Developers Conference
Oct. 23, 2001

PAUL FLESSNER: Thanks very much. The first thing I want to do is absolutely thank all of you for attending. I know your schedules are very, very busy and we very much appreciate you taking time out of your schedule to come and visit us. We're going to talk about servers now for the next 75 minutes or so and talk about the things we're doing in the .NET Enterprise Servers to make it easier for you to build great applications. Building apps is hard. I spent 13 years doing that before I came to Microsoft, building enterprise IT applications. It's a difficult task. Forget about keeping up with the technology; just keeping up with the business, reorgs, sales, realignments, mergers, acquisitions, connecting to customers, connecting to partners, all that under the auspices of budget cuts is a very difficult thing. I saw an advertisement the other day, two business people talking, and one said, "Congratulations, they passed all the IT projects at the board meeting, but they cut your budget," kind of the world that you live in each and every day, and we know how difficult it is. And our job, believe it or not, is to make it easy for you. We do work very hard, and these conferences are very important for us to get your feedback and to learn the things that you think we're doing right and maybe the things that we're not doing so well. So we do certainly also want to listen. So with that, I want to get started. I've been in the enterprise business and computing for 20 years, and the enterprise customer is incredibly consistent. The abilities have to be assumed. Customers don't want to talk about the feature set of your products unless they know that you've got proven abilities. And that's just something that Microsoft has had to prove in the marketplace. I believe we've come a long way in the last 24 months certainly, and I've got a lot of proof points in my talk.
Hopefully you won't think it's too many, but I thought it was important to take some time today to really talk to you and give you our side of this story on where we are in the abilities. We've certainly worked hard on scalability, but we know it's not just scalability -- availability, reliability, serviceability, manageability, predictability and certainly security, even though it doesn't have the -ability on it, are all very, very important parts of your business and we want to make sure that we address that and help you understand all the things that we've done and the progress we've made. So hopefully you won't think those slides are too sales-like. They're up front and we'll get through them pretty quickly, but I think they're important. Agility: That's really what you want to talk about. I talk to a lot of customers every day and customers are talking Java, J2EE, Windows. They want to talk about what it is that you can do for me to help my business do better, how can I move more quickly as a developer to meet the demands of my customers, how can I as a customer get higher return on my investment. You know, after we get over the initial hurdle of talking about the abilities, they do quickly want to talk about agility, the ability to just move quickly in the marketplace, extend over the firewall, through the firewall when they need to, just to make a more agile enterprise. That's something that customers get very engaged in, and that's really the essence of how we pitch Microsoft .NET. They also told us they want us to think globally, not just about the server, not just about the desktop, but all of the components that live in your world today -- the device, client, whether it's a thin client, a rich client, a wireless device, the server, a rich robust platform for deploying your applications, or a service. It's a critical component of the economy today. You want to be able to latch in and leverage all of that infrastructure. So you wanted us to think very broadly.
You also want to think about interoperability at the beginning of designing your applications. You really do need to extend through the firewall both to customers and to partners and suppliers. Customers help increase revenues, and connecting closely and tightening the supply chain with your suppliers and your partners actually really helps reduce costs to make you more efficient. So those were kind of the essence of the things that you really needed to do. So Web services provides that environment. You've heard a lot about Web services. I'm going to work hard not to repeat the things that Eric and Bill said this morning. But this whole situation does put us in a situation where the best platform wins. Microsoft is excited about that. We have a lot of confidence in our engineering team. We're happy to be able to just play on the playing field as long as everybody is playing in an equal situation. This picture, even though Bill put up one similar to it this morning, is the reality of your world. You could look at this picture and say, "Yeah, that's my world inside my firewall or part of that world is inside my firewall and part of it is on the other side of the firewall." It's a very complicated world. We know that. It has been in the past. It is today. It always will be. Any vendor that preaches a strategy of buy all of my stuff, you know, applications, technology, database, whatever, it's never going to be homogeneous and it's kind of silly to think that. So we know this is the world you live in and it's a world we're excited about being in. SQL, the language, did this to the database industry, right, that helped to neutralize that so that we could all compete to have the best implementation of SQL. SMTP did that in the mail business. HTTP did that in connectivity.
So those are the things that from a customer's perspective I think you would get very excited about, because it puts all of us, Microsoft and other competitors, in a playing field where we can create the best possible products for you and we can compete for your business on the best implementation of those various protocols or APIs. And that's an environment that we're extremely excited about, and that's the environment that I believe Web services puts you in as a customer. So my job is servers. That's what I do. I enjoy it. I work hard at it. You know, I've had the job for the Microsoft servers for about 18 months. You kind of get different opportunities to do things in a job like that. The first few months you kind of get your arms around a job, and then you go deeper and start looking at a strategy and how we can make those servers work better together. An important part of understanding the platform and providing a world-class platform is the fact that you've got a lot of technology to provide a platform. To create a broad base for your applications it takes a lot of technology. You can see on the screen now the products that Microsoft has brought to market to provide that kind of platform. You don't have to buy all Microsoft products, certainly at all, but these are the products that we offer for our platforms so that you can then build great applications for the Windows environment. I'm just going to go through these very quickly to give you kind of a quick perspective. Windows 2000 is the most deployed application environment in the world today. Media Metrix says 62 percent of all secure applications are running on Windows 2000 today. Application Center, a new product, shipped just this last year to help you manage scale-out applications. We're going to do a demo of that product in a few minutes to help you really understand the power that that can bring to manageability and to cost effectiveness.
ISA, Internet Security and Acceleration Server, an integrated and certified firewall and cache, both forward and reverse, very instrumental in providing a good scalable, robust and secure environment. Operations Manager: Our data center management product is in the market now, again to help you provide a much more cost effective management solution. Content Management Server, the ability to build versions and move forward to build very professional Web sites from a content perspective. Host Integration Server, the ability to connect your legacy environments, a lot of SNA, MVS, AS/400. SQL Server: We're very proud of SQL Server. We're going to talk about it a little bit more in a few moments. That's a very important business for us today, with over a million servers deployed of SQL Server today. Mobile Information Server, the ability for you to build applications in our environment and then extend them out to devices of all types with great wireless connectivity. Exchange, the most deployed mail solution inside the firewall today, with 94 million seats sold and over two times as many seats installed as the closest competitor, being Notes from IBM. And then out toward the front where you really have to connect up to your customers, your partners and your employees, Commerce Server is helping you reach through the firewall with your line of business applications and touch the customers directly with a good e-commerce solution. BizTalk Server is a very important product for us that allows you to integrate your business partners and really get out through the firewall, or inside the firewall, to help integrate your business processes, again with your own systems or with the systems of your partners to tighten that supply chain. And then SharePoint Portal Server, the ability to really collaborate, build projects quickly, share documents and work very tightly with the employees within your company.
All of this focus was on being better together in terms of the value proposition that they bring to you, the customer, and we are working very hard to earn your business to be the best platform for XML Web services. So I'm going to do a little timeline. You'll see this slide pop up a couple of times. We are going to talk about the abilities. The slide said delivered. That's always a little strong, because you're never quite done with the abilities. We're going to work hard every year to bring new abilities to the market to really improve the products. I think the thing I want you to understand in 2001 is that XML Web services are real. Sure, we're in the early adopter stage, but there's lots and lots of customers doing it. Bill put up a big slide this morning and you talked to some customers here, and I've talked to several customers over the last day that are building their first Web services and they're quite excited. So they are real. You do have the tools to do it with the products today. I'm not going to read you the laundry list there, but it's important I think for all of you to be sitting there today going, "Yep, this is something that we really should be thinking about in our enterprise." And next year we're going to make it easier. Visual Studio .NET I think will be a huge benefit to developers in terms of building Web services, and the Windows .NET Server, which we're going to talk about in a minute, again is helping you to really get those applications out there and providing the infrastructure to deploy them. The servers are going to pitch in and we'll do our part in this timeframe too, integrating with Visual Studio and really making it easy, we hope, for you to build and deploy your applications. And 2003 I think is when this wave will really start to come. A lot of customers ask me, "You know, should I be doing Web services today? When should I start?" And this is the kind of timeline that I think about.
Maybe I'm a little bit more aggressive than you, and I'm not saying every application will be a Web service by 2003, but I think if you think about the way Web services can solve the problems that you've got in tying your enterprise line of business systems together, you'll get as excited about this as we are. So now I'm going to do some of those proof point slides, and again I promise I won't take too long. But I think they're important. You as developers know that you learn something about your applications each and every time you test them to the very limits, right? Edge testing is an important case. And what you do with a benchmark like TPC-C is not that it represents your workload; it's the fact that it's a standardized workload that all competitors agree to, that it's well monitored, and that each and every result is audited. And when you drive your applications to 100 percent CPU and you're driving those disks absolutely as hard as they can be driven, you learn stuff about your code. We do benchmarks because each and every time we do one we learn things about our engineering process and we learn that we have to drive the hardware in different ways, and we also like to win. Two years ago we didn't have a TPC-C number in the top ten. We set that as a goal to absolutely win in this space, and first we chose scale out. It fit our model better. It fit what customers wanted to do. And today I'm very proud to say that Windows has the top eight spots in terms of scale out TPC-C, and SQL Server has seven of those. So that's a big change in where we've been. In terms of scale up, I'm showing this slide, too. I'm happy to show it. We're not all the way there yet. We're sitting down there at number nine. Our first number to break into the top ten was just a few months ago. But this is absolutely a target that we're going to go after.
A lot of this is the fact that we're still a 32-bit system and a 32-bit operating system and the applications that sit on it, so we don't get as much addressable memory. And when you're doing SMP work, what you've got to do with fast processors is have lots of memory to keep them happy and well fed, and this 64-bit architecture will allow us to do that. So we do pretty well in our own right in the 32-bit space, and I want to show you a little bit more about that. So TPC-C, while it may not represent your workload, we move on into ISVs. These are the applications that you run in your businesses every day. We worked hard on these as well. We really wanted again to stress our database and work closely with the ISV vendors to make sure that we were a good part of their application. And over the last 24 months we've been able to win the top benchmarks in this list. This is a proud list for us, because we've worked very hard. And you'll notice that checkmark says the best on any platform. So you also scan down and you'll see SAP here at the bottom, which is certainly a benchmark that everyone pays attention to, SAP being the market leader in the world today in the ERP space. And right now, or as of last week I should say, we had the best number on Windows, but we had a big Unix number that was beating us. But I'm proud to say that as of late last week we've broken that record as well with 24,000 in terms of, I believe it was dialogue steps, but we'll have to look on the next slide. But we dug in a little bit deeper on this benchmark, and this is a slide that represents a big part of my life, so I can go at it in a lot of detail. But four years ago we were just starting out. I'd go out and we'd do a benchmark and I'd go out and talk about being in the enterprise and I'd get beat up a lot, and customers would laugh at me at times and we'd kind of have that conversation that, "Look, this is something that we're deeply interested in.
We're going to go at it hard and we're going to make good progress over time." And you can look at the progress we have made over the last four years. It's also kind of interesting to look where we were just over a year ago, about 19 months ago, at 7,500, and the competition was sitting up there with 64-bit machines and 64-processor machines. This is on an eight-way at the time. And where we've gone in just a little over a year to take the lead in this position, again with a 32-bit system on 32 procs. Now, this workload is different than TPC, so we scale better in this workload, which is more of a real world workload, than we did in the TPC-C. So we were able to do quite well in this real world application. I guess it proves that when we put our mind to it and we work hard on it and we make a commitment to our customers, we're going to work hard and deliver for them. And this is again a very proud moment for me and the team that did all the engineering. But the real measure is customer proof points. I call it tier one, tier two, tier three applications. Tier two and tier three are important applications to your enterprise, but not the most important. Tier one apps, that's what Microsoft and the .NET Enterprise Servers have to win with customers today, and I'm very proud to say that over the last 24 months we're winning a lot of them. These are the applications without which the business doesn't run at all, okay. If you're a distribution company, this is the distribution system. If you're a manufacturing company, this is the manufacturing system. It's the one where, if you go in and it's running on MVS, everybody in the shop knows the service level and can explain all of it to you. Those are the apps that we had to start winning, and this is just a small list of the ones that we're winning today. Korea.com was an ASP in Korea, 5.5 million users up on Exchange. They did that in a little less than six months. Let's jump down and we can look at Pennzoil.
Not a drop of oil moves at Pennzoil without the SAP system being up. They did a big merger with Quaker State and were worried about the scalability that they might run into. We upgraded them to an 8-proc and we upgraded them to SQL Server 2000, and today they're doing double what they were before the merger and still doing it at the same response time. So good stories all the way down. MyFamily.com ancestry, if you've ever had a chance to get on that site, it's a lot of fun, 3.8 billion rows in a single table. The UK Government Gateway, Bill talked about that this morning, again absolutely mission critical. Verizon has done an unbelievable amount of work on our platform. This big customer service and billing system that they ported down from the mainframe has been a huge success for them and saved them a huge amount of money. And Dollar, that's a site that we talk about a lot. They actually did build a Web service to connect their Unix system with the Unix system of Southwest Airlines, because they were trying to do a joint site, and they were able to put up a Web service intermediary to plug their applications together and increase Dollar's revenue by $10 million in a single year, and they got the app up and running about six months ahead of when they thought they were going to be able to. So this is the proof point that we'll continue to push hard on, and I know these are the proof points that you as the customer are looking for. So with that, I'm going to jump in, and that's the end of the proof points and we're going to talk more about products going forward. XML Web services made easy: This is what, you know, you've seen a lot of talk about this morning with the toolsets, and you can look down. The big push this year is the Windows .NET Server, which we'll ship in the 2002 timeframe with the .NET Framework, to just make it a lot easier for you to build applications.
And then it's my job from the server side to wrap ourselves around that and make sure that our servers are easily extensible and utilize ASP.NET and the runtime and that sort of thing. Certainly we're going to have work in 2002 and 2003, so let's look at some of the opportunities that we'll have in the 2002 timeframe. So the server, Windows .NET Server, there's a lot of focus on the abilities. We're going to talk about that in a second, and I've got a slide on it. The primary focus was around security certainly, and we're going to talk about that deeply as well. The whole thing, a big part of that really is focused on making Web services much, much easier to build. Some of the things that we've done: Commerce Server, you can think of Commerce Server in a lot of ways based on the demo that you're going to see. It's really a super framework to build e-commerce systems and let them extend out to your customers. And the ability to just take Commerce Server and be building your application inside a Visual Studio shell is something that we'll demo in a few minutes. Content Management Server is allowing you to take and build ASP.NET templates and use that for your site. SQL Server, ADO.NET, SQL XML, a notification service that we're going to ship this year: lots and lots of things that make it easy for you to build Web services. And just overall management, this is something that we get back over and over as one of the key abilities. It doesn't do you any good if you can do all this work and create great Web services if the cost of ownership kills you. So BizTalk, a very important part of the feedback from the BizTalk customers is we've got the hub, but we're trying to get out and connect with lots of our partners and getting them hooked up is very tricky and difficult.
So we built this concept of a seed where, if you want a trading partner, you point them to a URL, they click, they'll download the right schema, the right information, the right orchestration so they can plug in and you can begin trading. An application management pack through MOM: This is a very important thing, and we'll ship it for SQL Server and Exchange. This is all the IP that Microsoft has got from working with customers around those products. These kinds of support calls come in, we get these kinds of complaints. These are the things that need to be monitored. A lot of that IP we turn around and put right back into these application management packs, so that you can install them and set up the monitoring on your applications in a way that protects you and lets you learn from all the things that we've had to deal with over the years in building the apps and deploying them for customers. So Windows .NET Server: Better applications faster, lots of things in there, IIS. One of the big changes we made, several big changes, IIS 6 is a big change in terms of the process architecture. We took the HTTP listener and moved it down into the kernel. It's much faster, more resilient. We changed the process architecture. Obviously the runtime, all of these things make IIS 6 a much, much more secure and faster environment. COM+, the ability to take your existing COM+ objects and then turn them into XML Web services through an administration dialog without any coding. Native UDDI support in the directory, so whether you want to publish that outside the firewall or just keep it inside so you can publish your own Web services inside your firewall, UDDI built right in. MSMQ, the ability to get an XML protocol working within our messaging environment. And we saw the real-time communications example this morning. Lots of work in the -ilities space. I'm not going to list a bunch of that: 64-bit is the big one.
SQL Server will come out with a version of 64-bit SQL Server at the same time or soon after Windows ships. Exchange will be the following year. Clustering enhancements to make it easier for you to set that up and do a better job with the clustering. A lot of feedback early on was, yes, sure, it works, but it's fragile and difficult to set up, and we've done a lot of work to make that much easier. And then integrating with partners and federation and that whole concept, which is so key to really getting the full power of Web services. Kerberos 5, a standard implementation, so that you can cooperate with other partners and make sure that you keep a very secure environment. Cross-forest trust, giving you ease of management and a higher level of security, again within a large company or outside your DMZ. And then the Passport network, which I think was also talked about this morning. So security is something I do want to spend a bit of time on. Security is a tough one. Some customers want a huge amount of security, are very sophisticated and are willing to spend the money, and understand what that means. Some customers want no security. They don't understand it. They don't need it. And most customers fall somewhere in the middle, where they want the absolute most security they need but not any more, because it's a hassle to keep it up. So we worked very hard to make it easy for you to administer, but it is a difficult thing to do. We're not making excuses, but we certainly have worked hard in this release to make it easier for you to administer this stuff. Now, there's lots of different ways to set up security, and if you ever do figure out all of them, don't tell anyone, because they'll immediately make you the security administrator, and that's a very, very hard job.
But it is important that as a customer you understand the pros and cons of each different security setup, and that you think about it, and you implement the right level of security for your application given your circumstance. The key things around security are authentication, authorization and then actually securing the Windows environment. And there are a lot of features in this release. I'm just going to kind of run through these again more quickly. Multi-protocol transition: This is the ability to accept lots of different clients for mid-tier connections into the back-end server, HTTP, Digest or Basic, even Passport, and connect it back to your back end. And when you connect, we can transition and give you a Kerberos ticket, and you can lock all your back-end servers down with Kerberos and keep them very secure. Kerberos standard support, as I said, V5. Trusted subsystems: If you've written your own security, which some customers have, and you want to authenticate that and pass it on, you can designate a given subsystem as trusted to NT. And then you can actually map Passport to have PUIDs, Passport User IDs, in the AD, so that if you come in with a Passport ID we can translate that inside the firewall to an AD user ID. Authorization: This is a very important point as well. A lot of sophisticated framework has been set up to allow you to take a lot of the authorization code out of your applications and set it up in a centralized policy environment, so that you don't have to be monkeying with code if you want to change that stuff. It's very, very important and certainly very powerful. Seamless impersonation: That's a good one that really has been enabled by the multi-protocol transition and our support for Kerberos. It will, I think, also allow you again a lot more flexibility as a developer and a lot less overhead.
And then constrained delegation, where you do not delegate broadly but delegate to just the specific back-end servers, based on the amount of control that you want to pass back. The other big change is just when you install Windows .NET Server, we've dramatically reduced the attack surface for people who would want to get in and do malicious things. You know, we don't install IIS by default. We don't install COM+. We don't install search. A lot of these hackers were going after services that people didn't even realize were running on their server. We just eliminated the installation, so you'll purposely have to go in and add that if you want those services to run. Local and network services: The other thing is we'll start those services as network services, not local users, so they won't have the authority that they would have had in the past. Code access security: This is very important. It allows you as an IT manager to set up and only allow certain code to run with certain privileges. So through the CLR you can make a decision on whether or not, and what kind of, code you want to run, so you can say my IT shop's signed DLLs are the only ones that run on this server, and any code that's not signed can only read or only write to this scratch directory. And then software restriction policy is a lockdown of your server. It's kind of a binary situation where this is a locked-down server: If it doesn't have my signature, it doesn't run at all. So lots of improvements on a very necessary part of what we need to do to go forward. But it isn't just technology that's going to get us through this. We're going to have to work with all of you and with the end customer as well to be able to do this. The hackers, I mean, these are criminal acts. They're actually trying to disrupt your enterprise and cause harm and damage out on the Internet. If crime were an easy thing to stop, we wouldn't have so many people in prison. And us just implementing a lot of technology isn't going to stop this.
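The code-access-security idea described here, that code identity (such as a signature) rather than the user determines what the code may do, can be sketched roughly as follows. This is an illustrative model only, not the actual .NET CAS API; the policy table, signer names and paths are all made up for the example.

```python
# Hypothetical sketch of code access security: code identity (here just
# a signer name) maps to a permission set, and unsigned code is
# restricted to a scratch directory, as in the talk's example.

SCRATCH_DIR = "/scratch"

POLICY = {
    "ITShopKey": {"read": ["*"], "write": ["*"]},  # signed by our IT shop: full access
    None: {"read": [SCRATCH_DIR], "write": [SCRATCH_DIR]},  # unsigned code
}

def allowed(signer, action, path):
    """Return True if code signed by `signer` may perform `action` on `path`."""
    perms = POLICY.get(signer, POLICY[None])  # unknown signers get the unsigned policy
    allowed_paths = perms.get(action, [])
    return "*" in allowed_paths or any(path.startswith(p) for p in allowed_paths)
```

The point of the model is that the check keys off the code's evidence, not the logged-on user, so an administrator can lock a server down to "only my signed DLLs run here" without touching the application code.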
It might make it so unpleasant that we'd never be able to get any kind of authorization set up that made any sense. There are a lot of things that we can do outside of the bits to make this better. Microsoft is doing a bunch of things: this secure Windows initiative, all kinds of tools and testing process (we talked about the things we're doing in the platform itself), and this Secure Technology Protection Program, where through the Windows Update facility we can make it available to you to download all security patches first and give you special notifications, so that you can do what you want with those and hopefully roll them out very quickly. Education: The author of Writing Secure Code, I believe it's Mike Howard, is here this week in our sessions on what you can do and what kind of coding practices you should pursue in order to write great Windows applications that are very, very secure. If you're a VS 7 developer of native code, there is a runtime option that you can set that will allow you to disable a process if a buffer overflows, so that you can stop any kind of runaway process. Lots of testing and logo certification: You can participate in the Windows logo certification program. And certainly always, always, always, when you can, run non-admin. Review, sign-off, good processes, good procedures. Just more focus. I know on the dev teams at Microsoft I sit through security reviews now, something I never did in the past, but we really actually in detail go through what are our processes, what are our design points. We have people that do penetration and attack testing, where they go in and try to break things. All those things are things that you should take seriously as developers too. And customers, just education: A lot of the NIMDA thing, and it was a very difficult one, could have been avoided had we applied the patches.
And believe me, we could have got those out, and all those things we're learning as we go, but there is a lot we can do to make this a much, much better environment for all of us. The last thing I want to talk about is an ASP.NET benchmark. The Nile Web benchmark is a benchmark put together by Doculabs and Ziff-Davis. It's actually based on the TPC-W benchmark. It's an e-commerce benchmark. So rather than do a competitive benchmark, which I know all of you don't always believe (you're always sitting there thinking in the background, "Gee, I wonder if they purposely made the other guy look bad"), I decided we'll just do a benchmark that compares ASP.NET against our ASP environment. So the first thing I'll do is click and show you our numbers and the various ways that you can run in the current ASP environment on Windows 2000. So the first is ASP out-of-proc. That's a very secure way to run. There's a performance penalty you get for that extra security, but that's where we are on 2-, 4- and 8-proc environments today. In-proc is a little faster, obviously better locality, better memory, but with applications, if you run multiple applications in the same process and one misbehaves, it can take down the entire process and take down multiple applications. The fastest way is, with the pedal to the metal, to get in there and write ISAPI code using C, and again you get very good performance out of that environment. Then we took the exact same configurations and we ran them in the new ASP.NET environment. Pretty dramatic results in the ASP.NET out-of-proc, which is what a lot of customers want to run in, again much improved scalability up on the 8-way, and overall just pretty dramatic results, you know, four times the throughput on the 2-proc, and you can kind of go through and look at the numbers.
The really impressive one was the in-proc, which has gone up a tremendous amount and is actually just a few pages per second faster than the ISAPI, with a huge productivity gain in terms of the code that you have to write and the amount of effort you have to put in to write that. So we are very proud. This benchmark thing can be a little crazy, but you'll see a lot of benchmarks, and always, always, always ask a lot of questions about benchmarks, because you can learn a lot in the detail, and I always encourage our guys when we do a benchmark to make sure you put all the code out there, make sure the developers can download the thing and try it themselves, so you really understand what we've done, and hopefully that will help you with our technology or possibly even a competitor's if we've shown you something that you didn't know. So now I'm going to have Ori come out and talk to you about Application Center, talk to you a little bit about what it does and also what we can do around Web services to help us guarantee a better level of service. (Applause.) ORI: Thanks, Paul. Hi, everyone. I want to take this opportunity to introduce to you some of the great management technologies we've been building around the .NET Enterprise Servers and Web services. Application Center 2000 is our deployment, management and scale-out tool to power your next generation of Web applications and Web services. I want to show you how our powerful monitoring tools, deployment services and scale-out facilities allow you to create a scalable, manageable, highly available environment for your application. The first thing we're going to do is take a look at a simple VB.NET client, a typical type of application we expect great developers such as yourselves to be building over the next few years. Given a flight number, this client talks to a Web service to obtain flight detail information such as departure airport, gate and time information.
When I click the calculate-fastest-route button, our client talks to another Web service and obtains driving directions, the shortest distance from my office to SeaTac airport, taking into account traffic conditions. Now, let's see how we can instrument that Web service to do things like service level agreements and use Application Center to monitor and enforce those. PAUL FLESSNER: So that site just called out to two different Web services to do those calculations to get you your fastest route and -- ORI: Correct. So this is our first Web service, and what we're going to do is uncomment four lines of code that will fire a WMI event into the system indicating the service delay encountered when it calls the flight status calculation component. So we're going to go ahead and build that. Great. Next we'll switch into the Application Center console, and we're going to define a new type of data collector. Our data collector is going to be a WMI event query. We're going to go ahead and wait for an event in the Continental namespace. Application Center will allow us to dynamically browse for the event, which will show up dynamically. There we go. And the two properties we're interested in. Next we can define a custom action to take when we see this event and an SLA violation is detected. For this particular demo I've chosen to send myself an e-mail, since this is my development workstation, but in production an administrator can choose to run a script, add additional capacity, configure load balancing, send an SMS message to their cell phone, whatever they'd like. Finally, I'm going to paste in a custom message to our administrator that will tell them an SLA violation was detected. Okay, one quick thing we're going to do is add an additional threshold on this monitor. The reason for that is we don't want this event to fire every time we notice our code firing that instrumentation piece.
We're going to tell it that only if it sees the service delay being greater than 50 milliseconds does that constitute an SLA violation. Great. So we'll go back into our client, we'll type in flight 126, and very quickly you'll notice Application Center will turn into a critical state, letting me know an SLA violation was detected, a service delay of 78 milliseconds. We'll switch into our mailbox, click send/receive, and you'll notice we got an e-mail telling us the SLA violation was detected, and we even have a hyperlink in here allowing us to automatically fail over our client to use a backup Web service that can meet our SLA requirement. PAUL FLESSNER: Now, the e-mail is great for a demo, but we don't want to be sending e-mail out to all of our network administrators. ORI: Absolutely not. And let me show you how some of our automated responses can help you keep your Web service running in production. The next thing we're going to do, after we've added that piece of instrumentation, is deploy the new version of our application to our production system. Using the Application Center New Deployment Wizard, I can configure a custom deployment job that will send my application over to my production servers. I'm going to provide some credentials, and I'll be able to give it the machine I want to send my application to. Now, I have two Web servers sitting in production. I'm only going to send this to one of them. Application Center is automatically going to notice the new version of my application and propagate it and all its associated resources to all the servers in my production environment. So once I select my Web service and send it out, we're going to get my IIS site, my security settings for it, my ASP.NET pages, my private assemblies; we even have support for global assembly registration in the GAC. You can deploy complex applications, registry entries, DSNs, anything and everything you want. PAUL FLESSNER: You've provisioned the whole machine for this app.
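The monitoring pattern in the demo, where instrumented code reports a per-call delay and an alert action fires only past a configured threshold, can be sketched in a few lines. This is an illustrative model, not the Application Center or WMI API; the class and service names are invented for the example.

```python
# Sketch of the SLA monitor from the demo: the instrumented service
# reports its delay per request (like the demo's WMI event), and the
# collector fires the configured action only when the delay exceeds
# the threshold (50 ms in the demo).

THRESHOLD_MS = 50

class SlaMonitor:
    def __init__(self, threshold_ms=THRESHOLD_MS, action=None):
        self.threshold_ms = threshold_ms
        self.action = action or (lambda event: None)
        self.violations = []

    def report(self, service, delay_ms):
        """Called by the instrumented service on every request."""
        if delay_ms > self.threshold_ms:
            event = {"service": service, "delay_ms": delay_ms}
            self.violations.append(event)
            self.action(event)  # e.g. send e-mail, run a script, add capacity

alerts = []
monitor = SlaMonitor(action=alerts.append)
monitor.report("FlightStatus", 12)  # under threshold: no alert fires
monitor.report("FlightStatus", 78)  # the demo's 78 ms delay: SLA violation
```

The threshold is the part Ori adds at the end of the segment: without it, every instrumented call would raise an event, so the collector only treats delays above 50 ms as violations.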
ORI: We have, and let me show you that working in production. So what we're going to do next is, okay, we've sent our application into production. Now we need to verify both that it's working and that it's performing up to our levels and we haven't hurt performance with that new instrumentation. What I'm going to do, using Application Center Test, our performance and stress and functional testing tool, is record a session against our Web service. Using the browser and the browser record feature, we're going to hit our cluster directly. We're going to enter in the flight information, flight 126, 10/22 is the date, and we're going to click invoke. As you'll see, we've got the XML representation of my flight details back: SeaTac Airport, gate B14, and served by server AT Demo 32. This is not the server I deployed my content to. This is the second server that the application was automatically propagated to. We're going to call this My Cool Stress Test. One of the nice features of Application Center Test is I can now replay this capture, greatly amplified, against my production system. I'm going to play it at 25 simultaneous browser connections. We'll hit the play button, and what you're seeing on the right in pink is the CPU utilization on AT Demo 32. Very quickly it's going to start inching toward 100 percent. Application Center will dynamically notice that CPU overload. It will notice the SLA violation taking place and quickly bring another machine into the cluster, configure the application for load balancing and have it come help serve that additional load. PAUL FLESSNER: So this is different than what Eric showed. He was doing something over the net in service of a net. We're talking about, in a given cluster behind the firewall, that we're running? ORI: Correct. Either your production Web cluster or your business-logic internal cluster. As you can see, my second machine has come up. We're now serving twice as many requests.
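The replay step, taking one recorded request and driving it at many simultaneous connections, is the core of what Application Center Test does here. A minimal sketch of that idea follows; the `handler` function is a made-up stand-in for the flight-details Web service, since the real tool drives actual HTTP traffic against the cluster.

```python
# Minimal sketch of stress replay: one captured request is replayed at
# N simultaneous workers and the responses are tallied. Application
# Center Test does this over HTTP; here a local function stands in
# for the Web service so the sketch is self-contained.

from concurrent.futures import ThreadPoolExecutor
from collections import Counter

def handler(flight, date):
    # Stand-in for the flight-details Web service from the demo.
    return {"flight": flight, "date": date, "gate": "B14"}

def replay(request, connections=25, iterations=100):
    """Replay one captured request across `connections` simultaneous workers."""
    results = Counter()
    with ThreadPoolExecutor(max_workers=connections) as pool:
        futures = [pool.submit(handler, **request) for _ in range(iterations)]
        for f in futures:
            results["ok" if f.result()["gate"] else "fail"] += 1
    return results

stats = replay({"flight": 126, "date": "10/22"})
```

The 25-connection figure mirrors the demo; in the real tool the amplified load is what pushes CPU toward 100 percent and triggers the automatic scale-out that Ori shows next.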
I didn't have to deploy my application there or configure it. It's all done automatically for me. And so what we've seen today is how Application Center can take advantage of rich instrumentation built using Visual Studio .NET and WMI, how you can take advantage of that instrumentation and take automated responses, how you can easily deploy your application and all its resources into staging and production, as well as how our scale-out features will allow you to achieve a very highly available and scalable set of Web services. PAUL FLESSNER: Excellent. Very impressive. Thanks, Ori. Now we're actually going to do another demo. I said before that Commerce Server 2000 has worked very closely with the Visual Studio .NET team to make sure that Commerce Server 2002 becomes a part of the ASP.NET family and that you can easily and comfortably extend your Commerce Server applications from the comfort of the VS IDE. So now L.J. Germinario is going to come out and give us a demonstration of that integration. L.J.? L.J. GERMINARIO: Thanks, Paul. Hi. How are you? PAUL FLESSNER: Good. L.J. GERMINARIO: I have one slide. Now, today we're going to talk about Commerce Server 2002, and I'm really excited to be here today, because what we're going to do is give you a preview of some of the new functionality in Commerce Server 2002. Commerce Server 2002 is actually a native complement to Visual Studio .NET, and we enable rapid application development and provide a developer portal, giving developers easy access to all the essential Commerce Server tools they need right from the Visual Studio .NET IDE. For the developers, in this demonstration I'm actually going to show you how you can create a VB or C# application right from Visual Studio. Then we're going to show you how we've enabled our sample application for .NET Passport and .NET My Services. So let's go ahead and jump into Visual Studio. Here we're looking at the Visual Studio .NET IDE.
The first thing we're going to talk about is how you can easily create a new Commerce Server application right from Visual Studio. So just like you create a project, you can go File, New Project, and here we have a new section of projects called Commerce Projects. You have the option to create either a VB ASP.NET application or a C# application. Now, this would launch a wizard that would take about two minutes and walk through and create the essential resources: your SQL data stores, your IIS resources, all of the XML and code actually required for the site. Let's fast-forward and look at a site that we actually have up and running. Here we have all of the essential pages in our Visual Studio project environment. You can see we have all the files and all of the resources there for us. Now, I mentioned before, you can invoke all of the different tools right here in Visual Studio. If you were to go and actually look at projects, Commerce Server projects, you'd see here we can actually pull in the different Commerce Server tools available: the MMC administration tool, the resources, Business Desk, profile designer and catalog designer. One of the other Commerce Server features many of you might be familiar with is the pipeline. So if we just go up to our project window and look at our default pipeline components that are installed, we can actually invoke the pipeline editor right here in Visual Studio by opening up one of these PCF files. So it really is an integrated environment for the developer. They can do everything in Visual Studio. Let's go ahead and actually take a look at the sample site. Now, what you're looking at is the Eye-By-Five site, which is probably familiar to most of you. The Eye-By-Five site was originally just a storefront application that provided basic catalog capabilities. Well, we've now upgraded Eye-By-Five.
It's a fully functioning e-commerce site that uses Commerce Server 2002 for its commerce functionality and has integration with the .NET technologies for logon and for alerts. So we have the Eye-By-Five site here. You can see we have our different catalogs. We have our account and our shopping cart capabilities. Let's go ahead and sign in. We've enabled .NET Passport logon. So using my Hotmail account, I can log in, and it's able to identify me on the site. Now, from a developer's perspective, one of the things commonly used on a site is campaigns or advertisements. Because this is very tightly integrated with Visual Studio, it's very simple to invoke the content selection framework of Commerce Server right here on this page. So we're looking at our default page. If we jump back to Visual Studio, we can actually go down to our default ASPX page, right-click and do a View Code. And here you'll see that we've actually created this application using C#. So you're able to actually see the different calls we have going out for the profile, pulling in the profile ID, and then also here we have a section that deals specifically with pulling in the Commerce Server 2002 content selection framework information. So I've written some C# code here, commented out. I'm just going to real quick uncomment this and go ahead and rebuild the solution. It rebuilds the application itself. It takes about ten seconds. And then once this gets done, we'll be able to go back to the Eye-By-Five site and see a dynamic advertisement now placed on our page. Now, all of this is integrated directly into Visual Studio. So all of these tools are available right from the IDE. We'll see here that the rebuild has been successful. We'll go back to our site. We'll do a refresh on this page. And now you'll see at the bottom a banner advertisement actually appears, and that's delivered using the Commerce Server 2002 content selection framework. Let's continue now using some of our .NET technologies.
Let's walk through. We see here we have an item called the Persuasive Pencil. One of the other features that customers commonly ask us for is the ability to maintain multiple currencies on our site. Here we see that the Persuasive Pencil is 1.99 US dollars. I can change my locale to Italy and see that it's 2.23 euro, or I can go ahead and change my locale to Japan and see that it's now 245 yen. So the site is able to present currency information based on the locale. Let's go ahead and add our Persuasive Pencil to our shopping cart. You'll see that our checkout process has the normal final checkout, where you'd actually have to insert your shipping information as well as credit card information, but we've enabled this site for Passport express purchase. You'll see here that my Passport information is already propagated into the form for me. It has my default credit card information, billing address and shipping address, so we'll continue through there. And we'll be presented with a basic invoice. I can verify my information, my shipping total, my address and so forth. Now, when I place this order, Passport is actually going out and authenticating my credit card information, and the site itself will generate the order and an order number. We've integrated this site with .NET My Services, so once this order is completed, we have enabled it so that we get an instant message using .NET Alerts, and here it is. Order number 1016 has been accepted, and we should watch our e-mail for more information. So what I've shown you today, in summary, is that Commerce Server 2002 not only has great advancements for e-commerce but also for the developer, having a developer environment right in Visual Studio and integrating with our .NET technologies. Now, I'm really happy to let everyone know that everything we've seen here today you can actually start building with.
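The multi-currency display from the demo amounts to converting a base price by a per-locale rate and formatting for the locale. A rough sketch, with exchange rates back-solved from the figures quoted on stage ($1.99 becoming 2.23 euro and 245 yen), hard-coded purely for illustration; a real site would pull live rates and use proper currency formatting.

```python
# Hedged sketch of locale-based pricing: base USD price times a
# per-locale rate, with yen rounded to whole units since it has no
# minor unit. Rates are implied by the demo's numbers, not real data.

LOCALE_PRICING = {
    "en-US": ("USD", 1.0),     # base price is in US dollars
    "it-IT": ("EUR", 1.121),   # implied by 1.99 -> 2.23
    "ja-JP": ("JPY", 123.0),   # implied by 1.99 -> ~245
}

def localized_price(usd_price, locale):
    """Return (currency, amount) for `usd_price` shown in `locale`."""
    currency, rate = LOCALE_PRICING[locale]
    converted = usd_price * rate
    amount = round(converted) if currency == "JPY" else round(converted, 2)
    return currency, amount
```

Commerce Server keeps this logic on the site so the same catalog entry can render in whatever currency the shopper's locale calls for.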
In the show bags you will have a copy of the Commerce Server 2002 technical preview bits, and you'll also have a copy of the Eye-By-Five sample application. For more information or to play with the site, stop by the Commerce Server booth. Great, thanks. PAUL FLESSNER: Excellent. Very good. Thanks, L.J. (Applause.) So those are a couple of small samples of the work we've done or will be doing and delivering in 2002 to help make it easier for you to build and deploy ASP.NET and Web services applications. So now I want to talk a little bit about the future, in 2003 and beyond, and talk to you about the things that we're doing, again, to keep making it easier and to enable what we hope to be broad adoption of Web services and the .NET initiative. Deep XML integration: A lot of XML work today already: certainly SQL Server, the ability to query and get results back in XML; BizTalk Server, XML deeply embedded into the product; a lot of additional data work and data adapters around XML sources. In fact, we're going to continue to drive this stuff deeper and deeper into the products. Bill mentioned that this morning. Think higher level about SLAs, Service Level Agreements, rather than managing tactically and responding to errors, and just overall continuing to push harder and harder on what we believe are the key infrastructure enhancements that we need to make to get broad adoption of Web services. As I said when I started out, a big part of my job is making our platform better together. I want each and every customer's experience to be better the more Microsoft server products you have in your environment. Again, they won't be built in any way that's proprietary, that wires them together. You can always use other vendors' technology on Windows. It's just that my job is to make it a better experience with each of the various servers that you buy. So we've really started to focus on the core set of things that we want to push in the 2003 timeframe.
The common language runtime is something that we will embed deeply in all of our products. I think it's a very exciting technology that, from an IT perspective, I know in my days in IT I would have been really excited about. The ability to let the IT infrastructure, messaging, transactions, database, all of that stay the same as you evolve forward with innovation in the languages space is something that I think is very exciting. There always used to be, in my IT shop anyway, a little friction between the developers and the boys managing the data center. The data center guys wanted everything to be left alone, and the developers always wanted to use the most advanced productivity tools going forward. The Java world today is a little fragile around that. Java is tightly bound to J2EE. If you start to move into another environment, you're going to be moving and changing out a lot of technology. But with the common language runtime and what I call language freedom, you'll have this ability to really keep up with language innovation. Java wasn't the first language innovation, and it certainly won't be the last. We've introduced C#, and we're going to see, I think, a lot more language work as we continue to go forward. Process languages will be hot, things like XLANG and that sort of thing. So there's a lot of innovation that can go on in the language space while remaining constant in what you want to be your hardened data center, and I think that's a very exciting thing for customers, and we'll work hard to embed the common language runtime in our environment, and I'm going to talk a bit more about that when I talk about Yukon, the Yukon release of SQL Server. XML and SOAP, again deeply embedded in our products so that we can exist and coexist in your complex heterogeneous environments. We'll continue to push that deep into all the products. Caching: Caching is something that can give an incredible performance boost.
It's 5,000 times more expensive to do an I/O than it is to go to memory. But today the edge server or the IIS server can't dish out, can't cache, pages that have data on them that has come from the back-end database. A lot of you do this today with all kinds of tricks around optimistic concurrency and that sort of thing, but what if we were able to give you a notification system back in the database and you could register: look, this data I know doesn't change very often, I don't change customer numbers, I don't change prices during the day, I want to cache that stuff all the way out on the edge server because I'm not going to be changing it for a few hours, and if, in fact, it does change, send me a notification and I'll update my cache. That's work that we can do in our platform, and it will be available in the Yukon timeframe. Diagnostics end to end: This is something I saw when I worked in a complex IT environment many, many years ago, and we had the common customer complaint: Gee, I hit Enter or I hit the Search button on my Web browser and it took 15 seconds to come back, and it always seems to happen at 2:00 and I don't know why, but that's just the way it is. You know, what do I do? Well, today you have to go through a very difficult process of assembling a lot of experts and tearing apart a lot of code and a lot of different monitoring things on the network. What if it was as simple as saying, "Oh, look, we can trace this thing; it was 3 milliseconds on the client, 5 milliseconds on the wire, 10 milliseconds in the IIS server; whoops, 12 seconds sitting here in SQL Server." Then it's a matter of digging into SQL Server. I see last night they dropped a couple of indices. Bang, we've got our problem. They recreate them and life can be much easier. So that kind of end-to-end monitoring, potentially even as a SOAP header kind of a thing where we can share that information with you as well, is something that we're working very hard on. Security: Again, a very complex world.
We want a unified concept. We know that causes frustrations today, multiple implementations of roles and all the products not using security in the same way. Part of our work has got to be to unify those things so it's not so difficult for you to implement and manage those environments.

And then SLA: Again, the work we've done with Application Center, the work we're going to do with MOM, I really think can put us in a different situation. And managing at the Service Level Agreement is what customers want, and the stakes really go up when you start to live in a Web services world, especially if Web services you're depending on for your applications happen to live outside of your environment, because there's lots and lots of work to do there, and all of these areas are areas that we're really honing in on and focusing on in the 2003 timeframe.

SQL Server Yukon: It's kind of a busy slide. I'd like you to kind of look over to the right side first and we'll look at the key components of the server. We're going to continue to push hard on Analysis Services. We're getting a lot of positive feedback from the customers. You like our OLAP technology. You want more. You want more data mining. You want more in terms of capabilities to publish a great report dynamically. We're going to keep investing in that, and we're going to do all that we can to make it easy for you to develop those kinds of environments, develop your data warehouses, and again integration in the Visual Studio shell to make that much easier. We will have a big change in terms of our administration interfaces during this timeframe, moving from DMO to SMO and AMO -- SQL Management Objects and Analysis Management Objects -- and we're going to continue to really make those strongly available for your environment.

An HTTP stack, native HTTP connectivity into SQL Server: So today we offer this, but it's through an IIS server.
And what we'll be doing in the future is making sure that HTTP connectivity can also run natively on the server. Another advance we're going to make in the Yukon timeframe -- we talked about pushing XML from kind of the mid-tier today. Today, our XML environment, SQLXML, runs on the mid-tier. You send XML to it. We convert that to SQL and we go into the database. You know, we can do better than that, and in the Yukon timeframe what we're going to do is take the XQuery processor and move it in underneath our parser, and we'll actually have an XML data type, so you'll be able to have a relational table definition. One of the columns in that table could be an XML data type. Inside that data type you can have a full XML document with metadata, and you'll be able to do XQuery on that column as well. So again, making that much more native and much more available so that you can get XML directly out of the database or put it directly into the database through a native HTTP connection, if you so desire.

I also mentioned the common language runtime. We're going to embed that in the engine as well, so you'll be able to get language-independent stored procedures, which I believe is a very powerful environment for the SQL developer. (Applause.) Yeah, thanks. We know there's a lot of T-SQL code and we're not giving up on T-SQL. We're even looking at enhancing it somewhat in the next release. But we also want to give SQL a first-class developer environment, and that's the objective with embedding the common language runtime.

I mentioned before this concept of being able to do notifications and invalidate some external cache, or actually any other kind of process that you want to run. The idea with this query notification manager is to allow you to register types of queries so that if, in fact, any data changes that would affect that kind of a query, you would be able to then get a notification and take whatever action you'd like.
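The query-notification idea described above — register interest in data that rarely changes, and evict the cached copy only when the database signals a change — can be sketched in Python. This is a conceptual illustration with invented names, not the actual SQL Server query-notification API.

```python
# Conceptual sketch of notification-driven cache invalidation
# (hypothetical class and method names; not SQL Server's real API).

class NotifyingStore:
    """Pretend back-end database that fires callbacks on change."""
    def __init__(self, data):
        self._data = dict(data)
        self._subscribers = {}

    def subscribe(self, key, callback):
        self._subscribers.setdefault(key, []).append(callback)

    def get(self, key):
        return self._data[key]

    def update(self, key, value):
        self._data[key] = value
        for cb in self._subscribers.get(key, []):
            cb(key)  # notify: cached copies of this key are now stale


class EdgeCache:
    """Edge-server cache that keeps data until the database notifies it."""
    def __init__(self, store):
        self.store = store
        self._cache = {}

    def get(self, key):
        if key not in self._cache:                # miss: one round trip
            self._cache[key] = self.store.get(key)
            self.store.subscribe(key, self._invalidate)
        return self._cache[key]                   # hit: served from memory

    def _invalidate(self, key):
        self._cache.pop(key, None)                # drop the stale entry


store = NotifyingStore({"price:42": 19.99})
cache = EdgeCache(store)
print(cache.get("price:42"))      # fetched from the store, then cached
store.update("price:42", 24.99)   # the notification evicts the cached copy
print(cache.get("price:42"))      # re-fetched, so it sees the new price
```

The point of the design is that the cache never polls: correctness comes from the store pushing invalidations, so cached pages can be served from memory indefinitely while the data is quiet.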
We're also going to continue to do a lot of work in the storage engine -- XML as a native data type stored alongside relational data -- and we're going to continue to push on that for what I'll call a richer, more integrated storage experience going forward. And with that, we'll get away from that death slide, which is actually very cool technology, I promise, and Jeff Ressler is going to come out and give us a demo of SQL Server and some storage futures. Jeff?

JEFF RESSLER: Thanks, Paul.

PAUL FLESSNER: Hey. Well, I hope the demo goes better than those slides.

JEFF RESSLER: So today I want to show three things about Yukon in the context of a demo called Weebotix, which is a toy company that makes robotic toys. The three things we're going to show, Paul, are first how SQL Server Yukon allows us to bring together all sorts of data, not just the structured data that a customer is storing in SQL Server, but also semi-structured and unstructured data, leveraging some of the XML support you just spoke about. We're also going to show the common language runtime executing a C# stored procedure, and I'll show you a new development environment called the SQL Server Workbench, a replacement for the Query Analyzer, something built on Visual Studio that's a very productive environment.

So this Weebotix toy company is having a wee bit of a problem in that one of their popular products, called Steve Leapyear, is not selling very well. I'm a quality assurance manager at this company, and I've just received an e-mail from my boss, Paul Flessner, which I'll open up here, and I note that there's been a meeting called to discuss the problems we've had with the Steve Leapyear product, and some sales declines we've seen. I'm going to go ahead and open up the meeting agenda and scroll through this, and what I see is some Office XP Smart Tags showing up.
And we've actually built a custom Smart Tag recognizer that's looking for certain keywords to appear in this document, and indeed when I hover my mouse over "quality assurance" you'll see that I get a Smart Tag popping up. I'm going to go ahead and click on this Smart Tag and select Send All Weebotix Tags to the Smart Tag Analyzer. What this is actually doing -- the code behind this Smart Tag recognizer is actually writing an XML-based query, sending it to SQL Server Yukon and delivering this result set back. This is a mixed result set. You can see that there is structured data in here but also semi-structured data and even things like e-mail. This gives me a quick view of recent changes in the quality assurance around the Weebotix product line, and one thing that's of particular interest when I look at this display -- and this is all being displayed in Internet Explorer as XML -- I notice that I've got an e-mail message here about dissatisfied customers. So I'm going to click on that and take a look at it, and I note that there's a mention from an account rep out in the field of a decline in the sales of Steve Leapyear, attributed to a company called Super Gears. Well, so I'm going to go ahead -- I get another Smart Tag -- I'm going to click on my recognizer here and send this supplier name back to SQL Server Yukon, again as an XML query, and now my view, my XML view here, is customized to display information specifically on that supplier. Now, in looking at this data and my graph here, I don't see any particularly obvious trends, and so I'm going to change some of the criteria to make this a little more specific to the problem we're investigating, the problem with Steve Leapyear. So I'll click on the criteria button and I can go ahead and turn off Tiny Timbot and Robotic Robby, and I'm going to go ahead and broaden the scope of the search to add appointments and events. I'll click OK and re-query, and now the graph looks quite a bit different. We're focused in on Steve Leapyear.
You can see that something really bad is happening. We're seeing sales declining. We're seeing the defect reports growing. We really want to get a handle on this. So now at this point, as the QA manager, I can choose to send this off, send this page -- again, this XML page -- send it off to my team and have them do some further analysis on this to really figure out what's going on here. But let's take a look at what's going on behind the scenes with SQL Server. Let's switch machines here. So this is the SQL Server Workbench. This is a development tool that will eventually replace the SQL Query Analyzer product that many of you who have developed for SQL Server are familiar with. And I'm actually looking at stored procedure code.

PAUL FLESSNER: It doesn't look like any stored procedure I know.

JEFF RESSLER: It doesn't quite. There are semicolons and all sorts of curly braces and things. It's not very common to find in a stored procedure. And that is indeed because, as you suspect I'm sure, we're looking at C# code. And we've used the Visual Studio shell and extended it to work with SQL Server Yukon in this case. So in this case this is the body of our stored procedure, and you can see that I can navigate through members with the member navigation tool, and you can see that I've got SQL data types actually in my C# code. Well, let's take a look at the actual SQL code that calls this, and I want to show you a neat aspect of what we've got here with the SQL Workbench. You can see IntelliSense popping up, completing my statement for me. As I continue to type, I also get a tooltip outlining what parameters this stored procedure takes. Instead of forcing you through the pain of watching me type all this, though, I'm just going to uncomment this line here and I'm going to hit F5. The Workbench is saying, "Is it okay to connect to SQL Server and pull this data back?" I'm going to say OK. We'll connect to SQL Server and we're pulling back an XML result set.
This XML result set is the same XML result set that we displayed inside Internet Explorer, so that same view of data. If we scroll through it, we can see the different data types represented, including some of the messages that we were seeing in our Web view. So what you see is that we've got the ability to display those different types of data in that singular view -- messages, structured data, graphical data. We've got the ability to write these stored procedures in C# and of course any other language that works on top of the .NET common language runtime. And T-SQL will still be supported. That's still going to be there, also operating inside this development environment with IntelliSense and things like that. And then finally this great environment that does provide so many productivity enhancements for developers.

PAUL FLESSNER: Excellent. Thanks, Jeff.

JEFF RESSLER: Thanks.

PAUL FLESSNER: So this is just another example of things that we are doing to make it much more productive for you, the developer, to build Web services, and specifically XML Web services, in our environment. So BizTalk -- we're going to talk about that for a few moments and we're going to see a demonstration. BizTalk is a product that we're extremely proud of. We believe there is a lot of innovation in the product and we have had trouble getting the story fully out, and I want to spend just a couple of minutes talking about it. BizTalk can do a lot of things. It's primarily focused on enterprise application integration, but we also have some business process automation built in there, so people have a tendency to get confused about what the product is doing. Actually, it can do many things and that's kind of the exciting part about the product. Lots and lots of customers today are using it for EAI and it's great at that, but if you really think about BizTalk, it does three things.
It's this concept of a universal Inbox, allowing us to accept messages in lots of different protocols, in lots of different file formats, and store those up and persist them into a canonical XML format in a reliable messaging engine. It can also allow you to do data transformations against that information, which is also extremely powerful. Lots of customers want to normalize data or change it -- if you've gone through any kind of merger or acquisition, maybe you want to flip the customer numbers behind the scenes, or flip the catalog numbers, or whatever. All those things can be done in a situation like this as well. And if I had to guess, most of the BizTalk customers today are doing those two things. What it can also do, and what we're getting a lot of rapid adoption on now, is orchestration. It's this concept of allowing you to orchestrate business processes. These can be very fine-grained business processes or very large-grained. They can be inside your firewall or outside your firewall. And this concept of orchestration is something that we're extremely excited about. You've probably seen demos of BizTalk Server where we use a graphical user interface -- the domain expert sitting there working out the kind of process flow that they want the application to perform, and then also binding that to the various interfaces that we want to technically implement around. And that's very powerful, but we've gotten a lot of feedback that customers want to take it even further and want to be able to script that kind of information and do it in a development environment. And Scott Woodgate is now going to come out and give us a preview of a future version of BizTalk that implements a new and richer development environment. Scott?

SCOTT WOODGATE: Hi, Paul. Many customers today are already orchestrating Web services with BizTalk Server.
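The first two BizTalk functions described above — accepting messages in different inbound formats, normalizing them into one canonical shape, and applying transformations such as remapping customer numbers after a merger — can be sketched in Python. The message formats and the mapping table here are invented for illustration; this is the concept, not BizTalk's actual pipeline.

```python
# Sketch of "universal inbox" normalization plus a data transformation
# (invented message formats and mapping; not BizTalk's real machinery).

OLD_TO_NEW_CUSTOMER = {"C-100": "ACME-1", "C-200": "ACME-2"}  # post-merger remap

def to_canonical(message):
    """Accept either of two inbound formats and emit one canonical dict."""
    if "cust_no" in message:                    # legacy flat-file style
        return {"customer": message["cust_no"], "amount": message["amt"]}
    if "order" in message:                      # nested, XML-ish style
        order = message["order"]
        return {"customer": order["customer"], "amount": order["total"]}
    raise ValueError("unknown message format")

def transform(canonical):
    """Flip old customer numbers to the new scheme, where a mapping exists."""
    cust = canonical["customer"]
    return {**canonical, "customer": OLD_TO_NEW_CUSTOMER.get(cust, cust)}

inbox = [
    {"cust_no": "C-100", "amt": 250.0},
    {"order": {"customer": "C-200", "total": 99.0}},
]
processed = [transform(to_canonical(m)) for m in inbox]
print(processed)
```

The design point is the same one the talk makes: once everything is in one canonical shape, a single transformation step can serve every inbound format.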
BizTalk Server 2002 enhances these solutions with management, monitoring and deployment enhancements, but today, Paul, let's look further into the future, past 2002. Orchestrating Web services will be even easier with BizTalk XLANG S. XLANG S is the next generation of the XLANG business process orchestration language. XLANG S is built on the .NET Framework. XLANG S gives you the choice: you can visually design a business process or you can script a business process. So to demonstrate XLANG S, I'm going to play a developer, and my boss has just tasked me with creating a solution that accepts purchase orders from customers through a Web service. Those orders are passed to my back-end ERP system. A confirmation number then goes back to the customer. The next step in the business process is sending the order to the inventory system, and finally the shipping system. At a later point the customer can query the shipping date for their particular order. So I'm going to orchestrate this entire process with BizTalk XLANG S. So first let's do that visually. How does it look visually? I create a business process that consists of actions; actions are connected to ports. Of course, I send messages to those ports. And when I've completed the business process, I compile it -- today to XLANG, of course in the future to XLANG S. But the beauty of the XLANG S technology is that I can also script it. So what you'll see in a moment is that XLANG S is deeply integrated into Visual Studio .NET. We get great debugging and other productivity support from that, of course. So let's drop straight into Visual Studio .NET and take a look at a business process project in VS.NET. So here's the Visual Studio .NET environment and this is an XLANG file in VS.NET. Notice that we have the typical color highlighting and full IntelliSense on the syntax. Let's look at that syntax. Whereas previously I could just draw a port to an ERP system, now I've scripted that port to my ERP system. I've also declared messages.
Here's the place order message that I send to my ERP system. Of course, a business process has a sequence of actions. Here is the start of that sequence. I can send messages to my ERP system and receive them from my ERP system. Here is this in XLANG. Now, notice on the right-hand side in the Project Explorer I have Web references to my ERP, inventory and shipping Web services. XLANG S is tightly integrated with Web services. That ERP port you see on the screen is accessed directly in XLANG S. And what's more, I can create a Web services front end to my business process. So what this means is that when I compile the XLANG S business process to a managed .NET assembly, the customer is presented with a Web services front end that looks no different to a typical VB or C# Web service. So let's simulate a customer placing an order. When we fill in the Web service and invoke it, XLANG S will take the message into my business process and pass it to my back-end ERP system, simulated here. I'll approve the customer's order, order number 20, and the customer will receive a confirmation number from their Web service call. But the business process continues. I approve the inventory system and finally I approve the shipping system. The business process is complete. The final step is for the customer to come back and query what their shipping date is. So let's go in and query the shipping date. XLANG S will automatically return them the shipping date, the 26th of this month, for their order, order number 20. So, Paul, what you see here is XLANG S. I've orchestrated five Web services together into a single business process in one manageable file. All of this is built on the .NET Framework. For those of you who want to learn more about XLANG S, your PDC CD has a prototype for you to play with. You can do all the things I did just now in VS.NET, and check out Tony Andrews' session. So that's XLANG S.

PAUL FLESSNER: Okay, excellent. Thanks, Scott. So we do understand how important interop is for your business.
XML Web services obviously are an important part of that, but we want you to know that we know that all applications won't immediately go to XML Web services, and there's all kinds of interop that will be needed going forward regardless. We've just kind of put up a laundry list of all the things that we're doing today. You know, these things are from SQL Server, they're from BizTalk Server, they're certainly from Host Integration Server, and we're going to continue to work hard to make sure that you can get and stay connected to all the heterogeneity in your world from the Windows platform. And again, of course, we believe that long term, as XML Web services become more prevalent, you'll be migrating in that direction, but again there is a lot of interop between now and then and we're certainly going to continue to commit to that.

So in terms of the product roadmap, you've seen a couple of these slides this morning. You know, all the stuff in 2001 we're feeling great about -- shipped, and getting good feedback in the market, and it does make XML Web services real. In 2002, again, I think we're really focusing on making XML Web services easier, and we'll continue to work on that and we'll continue to deeply embed them. It won't be a big year for SQL Server or Exchange -- no major releases, but there will be important updates to both products. SQL Server you've seen, the SQLXML, which was announced today. Exchange has got lots of enhancements coming in terms of the abilities. We're going to continue to push on that, and also a very important update for OA in the SD2 timeframe. BizTalk, Commerce Server, Content Management Server and ISA will all have releases, again focused on the Visual Studio .NET release to make sure that we get good integration there. And then App Center and MOM will be working even closer together to give you a more robust and complete solution for managing your data center. MIS and SharePoint also with releases in the 2002 timeframe.
2003 is a big year for us: Longhorn, more technology in the Windows environment, and we talked a little bit about that earlier this morning -- Bill did -- and then SQL Server and Exchange, a major release that year, again a lot of joint work with those teams to hopefully bring together, I believe, a very robust Exchange offering in that timeframe, built on new storage technology, which we talked about to our Exchange customers at MEC and I continue to talk about around the world today. And again, all of the servers really sneaking up and making a big, big push in the 2003 timeframe around XML Web services.

So to wrap it up, I think we've made it clear that we're going to continue to work hard on the abilities. That's something that I know you live and die on in your enterprises. Scale up and scale out are both important. We don't pick one. We have to support both because we know your business needs both. Major focus on TCO: Total cost of ownership is absolutely paramount in your business, not only in this economy but in every economy. I mean, we won't give up. Quality: I believe quality can be a competitive advantage, and we're going to continue to push hard. You've only just begun to see the advances that we'll make in quality. It's something that we invest in every day. And agility, which is really, I think, the most exciting thing that we've talked about today -- we're going to work hard to make you as a developer more agile and to provide a better experience for your customers and a higher rate of return for your employer. Overall, we're in it to win and we can't do that without your support, so again I thank you for coming here today, and I thank you for the opportunity to present to you, and I thank you for your business. That's it for me. Have a great day.
http://www.microsoft.com/presspass/exec/flessner/10-23flessnerpdc.mspx
One problem I experienced trying to import a GEDCOM was that, after creating a new family project, the File | Import/Export menu remained grey; I had to restart the application to be able to choose it. The second time I created a new file to try and import a GEDCOM, the File | Import/Export menu was immediately available.

A general issue with Embla Family Treasures is that it rarely uses proper dialog boxes, but uses the main window as dialog box instead, just like Brother's Keeper does. Other than the main-window-as-dialog box, GEDCOM import starts fairly normally, but soon wants you to choose between Easy Gedcom and Comprehensive Gedcom. The mere fact that a vendor does not bother to capitalise GEDCOM properly should already make you wonder how seriously they take their GEDCOM support at all. Having to choose between Easy and Comprehensive import? That suggests that the Easy import is not comprehensive and the Comprehensive import is not easy. So what do I do if, like most users, I prefer an import that is both comprehensive and easy?

The Comprehensive option lets you pick a file, and then starts importing it. However, after doing a first pass to analyse the file, it pauses the import, and then asks you to choose between GEDCOM definitions and Embla Family Treasures standard. Never mind that the choice should read GEDCOM standard and Embla Family Treasures definitions; the real issues are more fundamental: it is not at all clear what the difference between the two options is, how a user is supposed to choose between them, or even why a user should have to choose anything at all; surely the application can simply import the file to the best of its ability, without bothering the user. Moreover, if there really is something to choose at all, why not let the user choose these options before starting the import?
The application defaults to Embla Family Treasures standard. Whichever of the two options you pick, the next step asks you to map GEDCOM tags to their corresponding events. Users should not have to do that. After confirming that you want the next step twice, the import continues. During import, the main-window-as-dialog-box displays the text Import in Progress below an edit box that shows log messages being added to it. Family Treasures does display a progress indicator, it just isn't in the main-window-as-dialog-box; it is in the status bar instead. When the import appears finished, the demo puts up a messagebox to inform you that you entered more than 50 people into the demo version. What Embla does not tell you is that the import is not finished yet, but only quasi-finished.

Embla Family Treasures writes an import log file. Family Treasures seems to create a log by writing messages to an edit box, and finishes off with a final message about writing it to a file. So you might think that it makes the same mistake as MyHeritage Family Tree Builder, that it only writes the log file once the import is done, that it only writes a post-import log instead of an import log. Embla Family Treasures does write a real import log; it does write the log messages to the file as the import progresses. If Embla Family Treasures crashes during import, you are able to see how far it got and what issues it encountered. Family Treasures simply writes its messages to both the import log file and the edit box. The log messages lack line numbers and do not distinguish between error, warning and info messages, but the log file is still pretty good. Messages about individuals include both the id and the name of the individual as well as the lines that the messages apply to. For messages that apply to families, Family Treasures actually takes the trouble to list the names of both partners.
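The distinction drawn above — a real import log written as the import progresses versus a post-import log dumped at the end — comes down to writing and flushing each message immediately, so that a crash mid-import still leaves a usable log. A minimal Python sketch of the progressive approach (the file name and messages are invented):

```python
# Progressive import log: every message hits the disk immediately, so a crash
# mid-import still leaves a usable log (file name and messages are invented).
import os
import tempfile

class ImportLog:
    def __init__(self, path):
        self.f = open(path, "a", encoding="utf-8")

    def write(self, message):
        self.f.write(message + "\n")
        self.f.flush()               # push to the OS now...
        os.fsync(self.f.fileno())    # ...and onto disk, not just into a buffer

path = os.path.join(tempfile.gettempdir(), "import.log")
open(path, "w").close()              # start with an empty log file
log = ImportLog(path)
log.write("Importing individual @I1@ (John Smith)")
log.write("Warning: unknown tag _CUSTOM at line 42")

# Even without closing the log, the messages are already in the file:
with open(path, encoding="utf-8") as f:
    print(f.read())
```

A post-import log, by contrast, would buffer everything in memory and write it only at the end — which is exactly what is lost when the application crashes partway through.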
The log starts with a message stating which file is being imported and when, and ends with a claimed import time, which is not accurate. The biggest complaint about the log file is that it only covers the first few phases of the import, and does not contain any messages for the later phases.

I remarked that the import is quasi-finished. It surely has every appearance of being finished: the title of this main-window-as-dialog box is Step 4 of 4 - Import Options and this step just completed, one of the last log messages in the edit box is GEDCOM Import complete, the application claims a particular import time, there is a Finished button to click, and when you click it, the main window displays your data in a tree view. All this is seriously misleading. The GEDCOM import is not over yet. As soon as the main window displays the tree view, Family Treasures starts the next two phases of the GEDCOM import process: the place names phase and the database audit phase. What Family Treasures claims as its GEDCOM import time is in fact the time elapsed from the start of phase 2 (which starts after reading and analysing the header) till the moment it quasi-finishes. The actual import time is significantly longer.

Structured place names are new in version 8. Family Treasures version 7 and earlier used a single line for place names, on which you used commas between the various parts, just like you do in PAF and many other genealogy applications. Version 8 has separate fields like TMG, and demands that you tell it what part goes into which field. After quasi-finishing the import, Embla Family Treasures throws up a dialog box (and a real dialog box this time) asking how to map a particular place name to Embla's Place/Farm/Parish/Municipality/County (Province)/Country structure.
It then asks the same question for the next place name, and so on, for each place name in the database… Obviously, this way of splitting each place name gets tedious very fast, so I checked the Import all place automatically option to be done with it. I then saw it map all places one by one. That automatic mapping is a relatively fast phase, but only because it contains no smarts at all. I am not impressed by how Family Treasures maps structured place name strings to its separate fields; it does not seem any smarter than TMG. However, in its defence, Family Treasures does offer a Continue later option, and that is probably the option you should take if you are seriously considering switching to this application. During one attempt to import place names, the application crashed.

Once the place name splitting is done, Family Treasures once again pops up a message about the 50-people limit in the demo edition, but that only means you cannot edit your data. The Family Treasures demo does not artificially restrict the size of the GEDCOM you can import.

Once the place name import phase is over, the audit phase starts. A messagebox appears stating that You need to run the audit to complete the upgrade process. The audit screen will be opened next; press the Start button to begin. That is a confused message; this is an import process, not an upgrade from one version of Embla to another - or does Embla's GEDCOM import for version 8 import data into a version 7 database and then upgrade it? That is what this message seems to imply. After working with Family Treasures for a while it is also what I believe to be the case. Anyway, the last step of the import procedure is to perform an Audit Structure command on the new project database.
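Splitting a PAF-style comma-separated place line into separate fields, the job Family Treasures pushes onto the user, can be sketched in Python. The right-to-left field assignment here is an assumption made for illustration, not Embla's actual mapping logic, and the field list is a simplified version of its Place/Municipality/County/Country structure:

```python
# Naive comma-splitting of PAF-style place names into named fields.
# The right-to-left assignment order is an assumption, not Embla's logic.
FIELDS = ["place", "municipality", "county", "country"]  # smallest to largest

def split_place(place_line):
    parts = [p.strip() for p in place_line.split(",")]
    # Assign from the right: the last part is the country, and so on inward.
    result = dict.fromkeys(FIELDS, "")
    for field, value in zip(FIELDS[::-1], parts[::-1]):
        result[field] = value
    return result

print(split_place("Sandvika, Akershus, Norway"))
print(split_place("Norway"))  # shorter strings fill only the larger fields
```

Even this trivial heuristic handles short and full strings consistently without asking the user anything, which is the review's point: an importer can make a reasonable automatic guess and let the user correct exceptions later.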
That may sound like overkill; surely the application knows how to import data into its own database and there should be no need to check the integrity of the database now? Maybe, but then again, if you trust all database operations to never introduce any errors, no application would ever need to check its own database. Database integrity checks are a good thing, and performing such a check as the last step of a GEDCOM import is arguably something all applications should be doing, just to make sure everything went well.

During the audit, Family Treasures removed unlinked events, blank families, bad links to events and empty events. Do not think that this shows how useful the audit is; what it really shows is how bad the GEDCOM import is. Embla did not add the database audit as the final phase of the GEDCOM import process to make sure everything went fine, but because everything is not fine. Embla knows the Family Treasures GEDCOM import creates database errors, and includes the audit phase to fix these errors. The particular GEDCOM file it imported is not perfect, but all the issues the audit found should be addressed by the GEDCOM import phases that write the database, not by tacking on an audit command. Moreover, when the actual import code is so poor that the audit phase is mandatory, the user should not be able to choose to cancel that audit…

By the way, once you choose to run the audit, a messagebox appears that says The audit tool could take a long time to run and it should not be interrupted [sic] during running. That is not the entire message. It goes on to say that If error messages appear, the audit should be re-run until no more error messages are displayed.
That often means that Embla is actually demanding that you run the audit phase twice: the first time to clean up the mess the earlier import phases made, the second time to confirm that everything is okay now.

Family Treasures does not write anything to the import log during either the place name splitting or the database audit phase of the GEDCOM import. All the messages it produces during these phases are lost as soon as you move on. After the audit command, Family Treasures showed the Import/Export main-window-as-dialog again. Despite the fact that the place name import and audit had taken considerable time, the log text in the edit box had not been updated; it did not even mention either the place name or the audit phase, and was still claiming the same import time. The GEDCOM import is really finished when you have clicked the Finish button on that main-window-as-dialog.

The Comprehensive Gedcom import is a comprehensive mess and the Easy Gedcom option is not as easy to deal with as its name suggests. The Easy GEDCOM import logic is seriously messed up; it actually asks you to specify the GEDCOM's character encoding, something it should really not ask at all, before asking you to select a GEDCOM file… As if asking users to specify the character encoding, even doing so before they have selected a GEDCOM file to import, is not confusing enough, the same dialog asks the user to select a GEDCOM source; the first and default choice is GEDCOM, but all the database formats it can read directly are the other choices. It is not clear whether Family Treasures expects you to opt for direct import instead of GEDCOM, or expects you to specify the GEDCOM dialect of the GEDCOM file you have not chosen yet. Either way, this main-window-as-dialog is messed up.
Because the Character set option defaults to From GEDCOM as it should, and the GEDCOM source option defaults to From GEDCOM too, you can simply ignore this particular main-window-as-dialog, and immediately move on to choosing a file to import. The one thing that is easy about the Easy Gedcom option is that, apart from the one messed up main-window-as-dialog just discussed, there are no more options; you can just choose Start, and Family Treasures will display the dialog with import messages scrolling by. Once the import is quasi-done, you are prompted to perform the place name and the database audit phase. I’ve done all import measurements using the Easy import option, opting to split place names automatically. I do not recommend that option, but did want to measure import times for a complete import that includes all import phases. According to Family Treasures itself, the audit process should be run until there are no errors anymore. After considering the issue for a while, I decided to always run this phase of the import process just once. Embla hurts Family Treasures usability and performance by pausing and demanding user input during import. I’ve tried to respond quickly. Embla can avoid the inclusion of these minor delays by improving the import experience. On my 2.4 GHz quad-core Vista machine, Family Treasures claims to be done in 10m58s, but the actual import time is 12m23s. That is an import speed of barely 6,5 individuals per second. On my 2.7 GHz single-core Windows XP machine, Family Treasures claims to be done in 21m58s, but actually takes 23m31s. That is an import speed of less than 3,5 individuals per second. When I tried to import the 100k INDI GEDCOM on the quad-core Vista machine, import of GEDCOM records aborted after about 2 hours and 20 minutes, at which point only 9.350 individuals had been imported.
The last message in the import log file is Birth: The import terminated because data was found that could not be handled. The line with the birth date is 2 DATE 1947, so it seems hard to believe that Family Treasures really had any problem processing that. All other lines in the vicinity of that one look just fine too. Although this phase failed with a fatal error, the import continued with the place name and audit phase as if nothing was wrong, but Family Treasures crashed when I tried to start the audit phase. By the way, 9.350 individuals in 2h20m is 9.350 individuals per 140 minutes, that is 1,113 individuals per second - on a 2.4 GHz quad-core with 4GB RAM. So, if all phases had completed properly, the import speed would probably be less than one individual per second; a truly stunning unaccomplishment. When I created a smaller set of data including the person born in 1947 and surrounding family, just some 500 individuals in total, Family Treasures complained Not a valid GEDCOM file - operation cancelled. That complaint is in error. The GEDCOM file I tried to import was just fine; the problem I encountered is that Family Treasures does not support UTF-8 encoded GEDCOM files; if the file starts with a Byte Order Mark (as it should, and PAF always writes), it even fails to recognise the file as GEDCOM. That is a rather basic fail that I simply do not expect from an application that is already at version 8. When I removed the BOM from the GEDCOM file to get past this Family Treasures limitation, it seemed to import just fine. So, despite the log file message, the failure to import seems unrelated to any birth date. That import was not really fine. Family Treasures does not support UTF-8, yet it imported the file without even an error or warning, which implies that it bluntly imported the UTF-8 file using another character encoding, thus mangling all text upon import. I tried the import on my old Windows XP machine.
It took about 3h02m to fail in exactly the same way; it produced the same fatal error on individual number 9350, once again happily continued with the place name phase, and once again crashed when the audit phase was started. Family Treasures is one of those applications that reopen the project you last worked on, and after a crash it tries to continue with whatever it thinks is useful or necessary, which does not make any sense after a fatally failed import. This behaviour is rather annoying. I found that the easiest way to prevent it is to delete the project directory. The import speed until the fatal failure is 9.350 individuals in 182 minutes, and that works out to 51,374 individuals per minute, just 0,856 individuals per second. That would make this the slowest desktop GEDCOM import measured so far. However, this is not a slow import, this is a failed import. The import of the 5K INDI GEDCOM is slow, and indeed one of the slowest desktop GEDCOM imports I ever measured. It is slower than Family Tree Maker 2008, slower than Hereditree 2008 and slower than WinFamily 7. Of all the applications measured so far, only Genea 1.4.1 managed an even more epic waste of CPU cycles on this basic task. I do not know why the import fails. The error message does not make sense, so I am guessing that Family Treasures has somehow confused itself already, and this is merely how it shows. I decided to try a medium-sized file, one with 34.311 individuals, which MudCreek GENViewer loads in about half a second, just to see what would happen. After seven hours, the progress indicator claimed the import was 33% done. After almost exactly twelve hours, the Windows XP system crashed. I am not positive that the crash is related to some error in Family Treasures.
I am sure that the Family Treasures log file showed that it had imported 17.818 individuals in those twelve hours. Family Treasures still needed to perform several more import phases, but let’s keep the calculation simple; an import speed of 17.818 individuals per 12 hours works out to 17.818 / 43.200 individuals per second. That is 0,412 individuals per second, surely better expressed as 2,424 seconds per individual. The Embla Family Treasures GEDCOM import is so slow, that its tardiness is better expressed in seconds per individual than individuals per second. This additional result really drives home that even if Embla Family Treasures were able to import a large file at all, it would take prohibitively long to do so. The import speed is so embarrassingly slow that this aspect alone is a strong indicator that Embla employees never use their own product. It certainly proves that, if they ever bother to test this at all, they never tested with anything but minuscule GEDCOM files. The bottom line remains that Embla Family Treasures failed to import the large file. The GEDCOM import speed is a serious issue, but not everything about the GEDCOM import is so bad. Embla Family Treasures supports three of the four legal GEDCOM character encodings; ASCII, ANSEL and 16-bit UNICODE (properly known as UTF-16). Weirdly, although it supports UTF-16, support for UTF-8 is missing. Family Treasures does additionally support the GEDCOM-illegal but still often seen IBMPC and Windows ANSI character sets. The Embla Family Treasures GEDCOM export is easier than the GEDCOM import. The GEDCOM export supports the same character encodings as the GEDCOM import.
Alas, Embla Family Treasures fails my superficial GEDCOM export examination as hard as Family Tree Maker 200x; like Family Tree Maker 200x, Embla Family Treasures does not even get the GEDCOM header right. The header contains illegal GEDCOM tags Date and Time instead of the legal tags DATE and TIME. It also uses the CORP (corporation) tag where they clearly intended to use the COPR (copyright) tag. A simple transposition of characters, but one that you do not expect in an application that is at version 8 already. You expect their testing to have weeded such errors out many versions ago already.

0 HEAD
1 SOUR Embla_Familie_og_Slekt
2 VERS 8.0.30
2 NAME Embla Family Treasures
2 CORP Embla Norsk Familiehistorie AS
3 ADR1 Bj²rn²ygeilen 21
3 CITY Hundvêag
3 POST 4085
3 CTRY Norway
3 PHON +47-5189 1307
2 Data Test
3 CORP Copyright 2009 of FirstName LastName
3 Date 10 SEP 2009
2 Time 16:53:15
1 SUBM @SUB1@
1 FILE FileName.GED
1 CORP The content of this Gedcom file is copyright of FirstName LastName
1 GEDC
2 VERS 5.5
2 FORM LINEAGE-LINKED
1 CHAR ANSEL
1 LANG English (British)
0 @SUB1@ SUBM
1 NAME FirstName LastName

Inclusion of illegal tags in the GEDCOM header is a serious error. Every other genealogy application can and should reject Embla Family Treasures GEDCOM files as invalid as soon as it has processed this header. Notice the ADR1 and CITY lines in the Embla Family Treasures GEDCOM header. Those lines are not in error. This is how ANSEL-encoded lines show up in an editor such as NotePad that does not support ANSEL. If NotePad supported ANSEL, you would see that it says:

3 ADR1 Bjørnøygeilen
3 CITY Hundvåg

Family Treasures’ encoding of these two lines is correct. When you choose another encoding for Embla’s GEDCOM export, the header uses that other encoding too, and that is not correct. The GEDCOM specification states that whatever encoding the file uses, the heading should be encoded in ANSEL.
Most, if not all, vendors seem to agree that this is an error of the GEDCOM specification, and that the entire file should use just one encoding, the one specified in the header. What truly convinced me that Embla never bothered to test the GEDCOM export at all is its ostensible Unicode (UTF-16) output. NotePad supports Unicode but somehow did not show the GEDCOM file as it should; it seemed to think the file is encoded in Windows ANSI, and displays spaces between all consecutive letters. Now, NotePad relies on a Windows function that is pretty good at recognising text encodings in the absence of a Byte Order Mark, so this is unusual. I soon discovered Embla’s mistake, one that even the simplest test (such as trying to import into PAF) would have revealed: all the lines are in Unicode, but the CR/LF linefeed is not! The result is that the file is not in any valid encoding at all. Embla Family Treasures’ ostensible Unicode GEDCOM export actually exports gibberish. If you do try to import this into PAF, which supports Unicode GEDCOMs, PAF reports ERROR 1: ExportTestUnicode.GED, line 1: Missing delimiter, and then repeats that error for every other line. That is pretty clear error reporting. That Embla Family Treasures contains this mistake regardless is practically proof that they never performed this test. Family Treasures’ GEDCOM import is weird, and needlessly complex. It seems to import data into a version 7 database and then run an upgrade process to turn it into a version 8 database. The quality of the import process is so poor that Family Treasures includes a database audit to clean up the errors earlier phases made. Because of the low quality of the earlier phases, that final phase is necessary, yet Embla allows the user to cancel that phase… The awful GEDCOM import experience is hardly acceptable in proof-of-concept code, yet Embla ships it as part of a commercial application.
Embla Family Treasures writes an import log file and what it contains is pretty good, but it only contains messages for the first few import phases, and lacks messages for the final phases. It also claims an import time that is considerably lower than the actual import time, because it is actually the total time for just some of the import phases. Embla Family Treasures failed to import the 100k INDI GEDCOM. Perhaps that is just as well, as the Embla Family Treasures GEDCOM import is so stunningly slow that it actually makes the already remarkably slow GEDCOM import of applications such as Family Tree Maker 200x, Hereditree 2008 and WinFamily 7 look good in comparison. Its GEDCOM import is actually so slow that it is better expressed in seconds per individual than individuals per second. Embla Family Treasures supports ASCII, ANSEL and UNICODE (UTF-16), but lacks support for UTF-8. It fails to recognise properly encoded UTF-8 GEDCOM as GEDCOM, and does not warn you about its limitation, but imports UTF-8 GEDCOM files using some other encoding, thus mangling your data. It does support the GEDCOM-illegal IBMPC and ANSI encodings. The GEDCOM output contains multiple errors so basic, that it is clear that Embla never bothered to test the GEDCOM output at all. The UNICODE output is not UTF-16, but gibberish. The ANSEL output is ANSEL, but its GEDCOM header contains both illegal and erroneous tags. The GEDCOM support in Embla Family Treasures is unreliable, slow and defective. The GEDCOM import crashes easily, mangles UTF-8 files and cannot handle a large file. The GEDCOM export produces invalid GEDCOM and its Unicode export does not seem to have been tested at all. The overall quality of the GEDCOM support is so far below par that Embla should consider changing its name to Embarrassing.
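The header problems described in the review are mechanically detectable. Here is a hedged Python sketch (not from the review; the helper name and regex are my own) that flags mixed-case tags such as Date and Time in a GEDCOM header, using lines from the Embla header quoted above:

```python
import re

def bad_header_tags(gedcom_lines):
    """Return tags that are not written in the all-uppercase form
    GEDCOM requires (e.g. 'Date', 'Time' in the Embla header)."""
    bad = []
    for line in gedcom_lines:
        # level number, optional @xref@, then the tag
        m = re.match(r"\s*\d+\s+(@[^@]+@\s+)?([A-Za-z0-9_]+)", line)
        if m:
            tag = m.group(2)
            if tag != tag.upper():
                bad.append(tag)
    return bad

header = [
    "0 HEAD",
    "1 SOUR Embla_Familie_og_Slekt",
    "3 Date 10 SEP 2009",
    "2 Time 16:53:15",
    "1 CHAR ANSEL",
]
print(bad_header_tags(header))  # → ['Date', 'Time']
```

A check this trivial would have caught the Date/Time errors long before version 8; the CORP-for-COPR transposition would need a table of legal tags, but is just as mechanical.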
http://www.tamurajones.net/EmblaFamilyTreasures8GEDCOM.xhtml
Plane poiseuelle flow solved by finite difference Posted February 14, 2013 at 09:00 AM | categories: bvp | tags: fluids | Updated March 06, 2013 at 06:32 PM

We want to solve a linear boundary value problem of the form: y'' = p(x)y' + q(x)y + r(x) with boundary conditions y(x1) = alpha and y(x2) = beta. For this example, we solve the plane Poiseuille flow problem using a finite difference approach. An advantage of the approach we use here is we do not have to rewrite the second order ODE as a set of coupled first order ODEs, nor do we have to provide guesses for the solution. We do, however, have to discretize the derivatives and formulate a linear algebra problem. We want to solve u'' = 1/mu*DPDX with u(0)=0 and u(0.1)=0. For this problem we let the plate separation be d=0.1, the viscosity \(\mu = 1\), and \(\frac{\Delta P}{\Delta x} = -100\). The idea behind the finite difference method is to approximate the derivatives by finite differences on a grid. See here for details. By discretizing the ODE, we arrive at a set of linear algebra equations of the form \(A y = b\), where \(A\) and \(b\) are defined as follows.
\[A = \left [ \begin{array}{ccccc} 2 + h^2 q_1 & -1 + \frac{h}{2} p_1 & 0 & 0 & 0 \\ -1 - \frac{h}{2} p_2 & 2 + h^2 q_2 & -1 + \frac{h}{2} p_2 & 0 & 0 \\ 0 & \ddots & \ddots & \ddots & 0 \\ 0 & 0 & -1 - \frac{h}{2} p_{N-1} & 2 + h^2 q_{N-1} & -1 + \frac{h}{2} p_{N-1} \\ 0 & 0 & 0 & -1 - \frac{h}{2} p_N & 2 + h^2 q_N \end{array} \right ] \]

\[ y = \left [ \begin{array}{c} y_1 \\ \vdots \\ y_N \end{array} \right ] \]

\[ b = \left [ \begin{array}{c} -h^2 r_1 + ( 1 + \frac{h}{2} p_1) \alpha \\ -h^2 r_2 \\ \vdots \\ -h^2 r_{N-1} \\ -h^2 r_N + (1 - \frac{h}{2} p_N) \beta \end{array} \right] \]

import numpy as np

# we use the notation for y'' = p(x)y' + q(x)y + r(x)
def p(x):
    return 0

def q(x):
    return 0

def r(x):
    return -100

# we use the notation y(x1) = alpha and y(x2) = beta
x1 = 0; alpha = 0.0
x2 = 0.1; beta = 0.0

npoints = 100

# compute interval width
h = (x2 - x1) / npoints

# preallocate and shape the b vector and A-matrix
b = np.zeros((npoints - 1, 1))
A = np.zeros((npoints - 1, npoints - 1))
X = np.zeros((npoints - 1, 1))

# now we populate the A-matrix and b vector elements
for i in range(npoints - 1):
    X[i, 0] = x1 + (i + 1) * h

    # get the value of the BVP ODEs at this x
    pi = p(X[i])
    qi = q(X[i])
    ri = r(X[i])

    if i == 0:
        # first boundary condition
        b[i] = -h**2 * ri + (1 + h / 2 * pi) * alpha
    elif i == npoints - 2:
        # second boundary condition (the last interior point)
        b[i] = -h**2 * ri + (1 - h / 2 * pi) * beta
    else:
        b[i] = -h**2 * ri  # intermediate points

    for j in range(npoints - 1):
        if j == i:
            # the diagonal
            A[i, j] = 2 + h**2 * qi
        elif j == i - 1:
            # left of the diagonal
            A[i, j] = -1 - h / 2 * pi
        elif j == i + 1:
            # right of the diagonal
            A[i, j] = -1 + h / 2 * pi
        else:
            A[i, j] = 0  # off the tri-diagonal

# solve the equations A*y = b for Y
Y = np.linalg.solve(A, b)

x = np.hstack([x1, X[:, 0], x2])
y = np.hstack([alpha, Y[:, 0], beta])

import matplotlib.pyplot as plt
plt.plot(x, y)

mu = 1
d = 0.1
x = np.linspace(0, 0.1)
Pdrop = -100  # this is DeltaP/Deltax
u = -(Pdrop) * d**2 / 2.0 / mu * (x / d - (x / d)**2)
plt.plot(x, u, 'r--')
plt.xlabel('distance between plates')
plt.ylabel('fluid velocity')
plt.legend(('finite difference', 'analytical soln'))
plt.savefig('images/pp-bvp-fd.png')
plt.show()

You can see excellent agreement here between the numerical and analytical solution. Copyright (C) 2013 by John Kitchin. See the License for information about copying.
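As a quick numerical check (not in the original post), the same problem can be assembled compactly and compared against the analytical profile. With p = q = 0 the scheme above reduces to a plain second-difference tridiagonal matrix, and because the exact solution is quadratic, the central difference reproduces it to rounding error:

```python
import numpy as np

# Sketch: verify the finite-difference Poiseuille solution against the
# analytical profile u(x) = -(dP/dx) d^2/(2 mu) (x/d - (x/d)^2).
mu, d, dpdx = 1.0, 0.1, -100.0
n = 100                      # number of intervals
h = d / n
x = np.linspace(0.0, d, n + 1)

# Interior unknowns u_1..u_{n-1}: (u_{i-1} - 2 u_i + u_{i+1})/h^2 = r,
# with u_0 = u_n = 0 and r = (1/mu) dP/dx.
r = dpdx / mu
A = (np.diag(-2.0 * np.ones(n - 1))
     + np.diag(np.ones(n - 2), 1)
     + np.diag(np.ones(n - 2), -1))
b = np.full(n - 1, h * h * r)

u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(A, b)

u_exact = -dpdx * d * d / (2.0 * mu) * (x / d - (x / d) ** 2)
err = np.max(np.abs(u - u_exact))
print(err)  # at machine-precision level for this quadratic solution
```

This kind of check also makes a convenient regression test when you change the discretization.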
http://kitchingroup.cheme.cmu.edu/blog/2013/02/14/Plane-poiseuelle-flow-solved-by-finite-difference/
Generate Class Constructors in Eclipse Based on Fields or Superclass Constructors

You’ll often need to add a constructor to a class based on some/all of its fields or even based on constructors of its superclass. Take the following code:

public class Contact {
    private String name, surname;
    private int age;

    public Contact(String name, String surname, int age) {
        this.name = name;
        this.surname = surname;
        this.age = age;
    }
}

That’s 5 lines of code (lines 5-9) just to have a constructor. You could write them all by hand, but writing a constructor that accepts and initialises each field takes a lot of time and becomes irritating after a while. And creating constructors from a superclass can take even longer because the superclass can define multiple constructors that you need to reimplement. That is why Eclipse has two features to help you generate these constructors instantly: Generate Constructor using Fields and Generate Constructor from Superclass. Both features will generate a constructor in seconds, freeing you up to get to the exciting code. You’ll also see how to add/remove/reorder arguments of an existing constructor based on fields defined in the class.

Generate a constructor from fields

The fastest way to generate a constructor based on fields is to press Alt+Shift+S, O (alternatively select Source > Generate Constructor using Fields… from the application menu). This pops up a dialog where you can select the fields you want to include in the constructor arguments. Once you’ve selected the fields you want, just click Ok and you’re done. BTW, Alt+Shift+S is the shortcut to display a shortened Source menu, allowing Java source editing commands. The following video shows an example of how much time this feature can save you. We’ll create a constructor for the class Message.
Notes:
- You can additionally call a superclass constructor with a subset of the fields by changing the dropdown Select super constructor to invoke at the top of the dialog. This creates a super(…) call with the relevant arguments and initialising code for the rest of the arguments on your subclass’s constructor.
- If you don’t want the JavaDoc for the constructor, disable the checkbox Generate constructor comments on the dialog.
- You can include/exclude the calls to super() using the checkbox Omit call to default constructor super(), on the dialog.
- You have to be positioned in a class to invoke this command.

If you frequently use this command, you can remap its keyboard shortcut by changing the key binding for the command Generate Getters and Setters.

Generate constructor(s) from a superclass

Sometimes you’ll want to reimplement some/all of a superclass’s constructors, especially as part of the contract. To generate constructor(s) from a superclass, just press Alt+Shift+S, C (or alternatively select Source > Generate Constructor from Superclass… from the application menu). A dialog pops up allowing you to select the constructor(s) you’d like to create. Once you click Ok, Eclipse generates the constructor, together with a super() call. Here’s an example of how to create a constructor in SecretMessage, which inherits from the class Message. Message has three constructors: a default one, one that accepts one String (content) and another that accepts three Strings (content, fromAddress and toAddress). SecretMessage should only expose the last two constructors.

Note: You have to be positioned in a class to invoke this command. If you frequently use this command, you can remap its keyboard shortcut by changing the key binding for the command Generate constructors from superclass.
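For reference, here is a hedged sketch of what the generated result for the Message/SecretMessage example might look like. The class and field names follow the article’s example; the field declarations and the exact layout Eclipse produces are assumptions and may differ by Eclipse version:

```java
// Sketch of what Eclipse's "Generate Constructor from Superclass" would
// produce for SecretMessage. Message's field declarations are assumed here;
// the article only names the constructor parameters.
class Message {
    protected String content, fromAddress, toAddress;

    public Message() {
    }

    public Message(String content) {
        this.content = content;
    }

    public Message(String content, String fromAddress, String toAddress) {
        this.content = content;
        this.fromAddress = fromAddress;
        this.toAddress = toAddress;
    }
}

class SecretMessage extends Message {
    // Eclipse generates these super(...) delegations; the default
    // constructor is deliberately not re-exposed.
    public SecretMessage(String content) {
        super(content);
    }

    public SecretMessage(String content, String fromAddress, String toAddress) {
        super(content, fromAddress, toAddress);
    }
}

public class ConstructorDemo {
    public static void main(String[] args) {
        SecretMessage m = new SecretMessage("hello", "alice@example.com", "bob@example.com");
        System.out.println(m.content + " from " + m.fromAddress + " to " + m.toAddress);
    }
}
```

Since SecretMessage defines at least one constructor, the compiler does not add an implicit no-argument constructor, which is exactly how the default constructor ends up hidden.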
Add, reorder and remove fields on existing constructors

If you have an existing constructor and want to reorder its arguments or remove some of them, have a look at the Change Method Signature refactoring that does that in a jiffy. If you want to add a single field to an existing constructor, have a look at the next video that uses Eclipse’s Quick Fix (Ctrl+1) to do that easily. I’ll add a field createdDate to an existing constructor in Message by choosing Assign parameter to field from the Quick Fix menu while positioned on the field.
https://dzone.com/articles/generate-class-constructors
The QueryOptionalDelayLoadedAPI function lets you ask whether a function marked as delay-loaded was in fact found. Here's a tiny demonstration:

#include <windows.h>
#include <commdlg.h>
#include <libloaderapi2.h>
#include <stdio.h>

EXTERN_C IMAGE_DOS_HEADER __ImageBase;
#define HMODULE_THISCOMPONENT reinterpret_cast<HMODULE>(&__ImageBase)

int __cdecl main(int argc, char** argv)
{
  if (QueryOptionalDelayLoadedAPI(HMODULE_THISCOMPONENT,
                                  "comdlg32.dll",
                                  "GetOpenFileNameW",
                                  0)) {
    printf("GetOpenFileNameW can be called!\n");
  }
  return 0;
}

This gives you function-by-function granularity on checking whether a delay-loaded function was successfully loaded, which is an improvement over being told whether all the imports for a DLL were loaded. Note also that the original problem with the Win16 model for weak linking wasn't that developers built but never ran their programs. Developers built their programs, and they ran fine on all the systems they tested because the function was present on all the systems they tested. It never occurred to them that the function might not exist in the first place. I mean, suppose you wrote a 16-bit program that called GetOpenFileName. It runs great on all your systems! But oh no, you get a report from a customer that it crashes on their system. The reason: COMMDLG.DLL was not a mandatory OS component. Users had the option of installing Windows without it, at which point all the programs that called GetOpenFileName would start crashing. Win32's response to this was "If you want weak linking, you know where to find it." Namely, GetProcAddress. The fact that you called a function to get an address will hopefully remind you to check whether the function actually succeeded. The introduction of the QueryOptionalDelayLoadedAPI function is to allow Store apps (which are not allowed by policy to call LoadLibrary) to detect whether their delay-loaded function actually got loaded.
The fact that the requested functions are in the delay-loaded function table means that a static analysis can still find all the functions that the program could potentially call. Odd they added a new ANSI API. I had thought adding new ANSI APIs was explicitly banned in WINDIV. When I checked it appears there is no Unicode version of this. The reason, I imagine, is because there is no GetProcAddressW function. Function exports are in ANSI (or possibly even ASCII). I do find it mildly surprising that the filename parameter isn’t Unicode though. I wonder what you would do if the DLL file were inexpressable in the current character set. so apparently the method name parameter is safe because “Error C3872 ‘0x2019’: this character is not allowed in an identifier”. But I was able to make a dll with the name 💩.dll, although it did give a linker warning that it might not load on other systems, so in theory the module name parameter should be Unicode. If you name a DLL with non-ASCII letters, you won’t be able to hard-link programs to the import library and have them run properly. This is because the DLL name in the PE import table format is ASCII – the OS won’t be able to load the DLL. However, if you’re willing to always use LoadLibrary[Ex]W+GetProcAddress with the DLL, Unicode DLL names are fine. I tested this this morning, it seems that the linker is mixed in some ways. It will generate a UTF-8 exports table. But the Module itself can’t be linked because the .lib parser does require CP1252 for some reason. In theory the fix should be upgrading the lib parser to support a flag saying that it’s UTF-8 and then make all future libraries use that. Poop dot dll. Now that’s honest naming! I correct myself identifiers can be unicode, just not emoji Note that a platform ABI may be more strict than the C++ language allows for. The PE file format may restrict identifiers more than the C++ language allows for. 
And the Windows platform may further restrict identifiers more than its file formats allow. Conversely, a platform ABI may be less strict than the C++ language allows. The C++ language disallows identifiers that are also C++ keywords, e.g. “class”. But the Windows platform may allow an identifier called “class” – you just can’t use it natively from C++. You may have to use LoadLibrary()/GetProcAddress() to bind it to a different identifier in your program. The filename parameter isn’t UTF-16 because this function is reading the PE headers and delay-load import table of hParentModule. The DLL names are in ASCII in the import table, so you might as well use the same format for the API. Same for the function name. The DLL names and export names in the PE file format are byte strings, not UTF-16 strings, so it wouldn’t serve a purpose. It’s more like they are ASCII, rather than ANSI. AFAIK COMMDLG was not optional in Win3.1, but it was not part of Win3.0 and programs was supposed to redistribute it onto Win3.0 systems. Incidentally, you can always call LoadLibrary. Procedure: 1) Take the address of ReadFileW. It will be in your own image area (trampoline placed by the linker to reduce the number of fixups). 2) Follow the jmp instruction at the address of ReadFileW to get the real address. 3) Ascend memory until you find MZ at the start of a page with the PE header pointer pointing to a valid PE header. 4) Interpret the executable image to find the exports section. Walk the exports table for LoadLibrary. 5) Hope that the bytes ‘MZ’ doesn’t appear anywhere else. :) Non-issue. You keep going until you decode a valid header. Or just use VirtualQuery instead of scanning memory manually. Way less error prone. But since it’s a Windows-10-only function, you still need to access it dynamically (through delay-loading if you’re on a Metro app?) 
if you want to run on earlier platforms, so there’s a bit of a chicken-and-egg scenario here… It exists in 8/8.1 as well, never trust MSDN version numbers! Actually even on Win16 your program wouldn’t even start if the DLL hadn’t been installed. Where things went wrong is when the installed version wasn’t new enough and was missing an export that the program needed. Or both programs had installed the DLL locally, which didn’t work when you started the older program first. Or someone else’s broken installer had downgraded the shared copy of the DLL to an earlier version.
https://blogs.msdn.microsoft.com/oldnewthing/20170322-00/?p=95796
#include "DOP_API.h"
#include <UT/UT_Array.h>
#include <UT/UT_String.h>
#include <OP/OP_Node.h>
#include <GU/GU_DetailHandle.h>
#include "DOP_Engine.h"

Go to the source code of this file.

Version of DOPfindAllDataFromPath() that uses the provided SIM_Engine instead of searching for a DOP node at the beginning of the path. Version of DOPfindDataFromPath() that uses the provided SIM_Engine instead of searching for a DOP node at the beginning of the path. Thread-safe method to find the owner node of a DOP data path. The following functions are the only ones which are thread-safe for accessing DOP data. Thread-safe method to get the world transform of a DOP data path, optionally returning the geometry if there is any (and gdh is non-NULL). If given an interested_node, we will add an extra input on it to the path. If the dopparent is currently simulating, it is not possible to reset its time. Similarly, if the desired time is within the last timestep, we can't interpolate since the actual 'current' value of the object is stored at the end time. Returned from this is the new time to use for accesses. In the case of unsimulated networks, it is the same as dopparent->setDOPTime(time); return time;
https://www.sidefx.com/docs/hdk/_d_o_p___full_path_data_8h.html
SP_fclose()

#include <sicstus/sicstus.h>
spio_t_error_code
SP_fclose(
    SP_stream *stream,
    spio_t_bits close_options);

Close the stream.

stream
The stream to close, unless SP_FCLOSE_OPTION_USER_STREAMS is set; see below.

close_options
The following bits can be set:

SP_FCLOSE_OPTION_READ
SP_FCLOSE_OPTION_WRITE
Close the specified directions. If neither of these options is specified, the stream is closed in all opened directions, i.e. as if both options were specified. If the stream is not opened in a direction specified by an option, that option is ignored. Note that it is possible to close only one direction of a bidirectional stream. The return value will tell whether the stream is still open; see below.

SP_FCLOSE_OPTION_FORCE
Close the specified direction forcibly, i.e. without flushing buffers etc. This also ensures that the close finishes quickly, i.e. does not block.

SP_FCLOSE_OPTION_NONBLOCKING
You should avoid using this option. Pass the non-blocking option to lower level routines, including the call to SP_flush_output() that is issued when non-forcibly closing the write direction. One possible use for this option is to perform a best effort close, which falls back to using SP_FCLOSE_OPTION_FORCE only if an ordinary close would block.

SP_FCLOSE_OPTION_USER_STREAMS
In this case the stream argument should not be a stream but instead the user_class of a user defined stream. When this option is passed, all currently opened streams of that class are closed, using the remaining option flags. E.g. to close all user defined streams of class my_class in the read direction only do: SP_fclose((SP_stream*)my_class, SP_FCLOSE_OPTION_USER_STREAMS|SP_FCLOSE_OPTION_READ).

On success, all specified directions have been closed. Since some direction may still be open, there are two possible return values on success:

SPIO_S_NOERR
The stream is still valid; some direction is still not closed.

SPIO_S_DEALLOCATED
The stream has been deallocated and cannot be used further. All directions have been closed.
On failure, returns a SPIO error code. Error codes with special meaning for SP_fclose() are the same as for SP_flush_output(), which see. Other error codes may also be returned.

See also: SP_flush_output(), Prolog Streams.
https://sicstus.sics.se/sicstus/docs/latest/html/sicstus.html/cpg_002dref_002dSP_005ffclose.html
Has anyone seen an error in unit tests that says there is no indented block inside of the test code? I have written many unit tests for my course and they have all worked until today. Now suddenly none of them are working (even ones that previously did), and I am getting this error: I haven’t changed anything in previously working files of unit tests and they are all now getting this error. It could be something in my code, but the code I have in the test is pretty straightforward and just calls a test function I have written:

def test_Q4_Test_Grass_Type(self):
    self.q4.test_grass()

It’s frustrating this is occurring now because my students are taking an exam tomorrow, and I like to give them unit tests to use on the programming portion of the exam and now it seems I won’t have that. Please let me know if there is a fix or if anyone else has gotten this error.
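For anyone hitting the same message: "expected an indented block" is what Python raises when a def header is not followed by an indented body, e.g. when indentation gets stripped from a file. Here is a minimal self-contained sketch of the pattern in the post (Q4 is a stand-in I made up for the course's question class) that runs cleanly when the indentation is intact:

```python
import unittest

# Stand-in for the course's question class (hypothetical).
class Q4:
    def test_grass(self):
        return "grass"

class TestExam(unittest.TestCase):
    def setUp(self):
        self.q4 = Q4()

    # If the body of this method loses its indentation, Python raises
    # "IndentationError: expected an indented block" before any test runs,
    # which matches the error described above.
    def test_Q4_Test_Grass_Type(self):
        self.assertEqual(self.q4.test_grass(), "grass")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestExam)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # → True
```

If previously working files suddenly produce this error, it suggests the files are being rewritten or re-indented somewhere between editing and execution rather than a bug in the tests themselves.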
https://ask.replit.com/t/new-repl-error-in-unit-testing/264
05 February 2013 23:00 [Source: ICIS news]

HOUSTON (ICIS)--Here is Tuesday's end of day summary:

CRUDE: Mar WTI: $96.64/bbl, up 47 cents; Mar Brent: $116.52/bbl, up 92 cents
NYMEX WTI crude futures recouped some of Monday's lost ground, as they tracked a rally in the stock market in response to data showing a gradual economic rebound in various global economies. Geopolitical risks of wider supply disruptions also provided underlying support.

RBOB: Mar: $3.0374/gal, up 2.59 cents/gal
Reformulated blendstock for oxygen blending (RBOB) gasoline futures prices reversed Monday's losses during morning trading as crude futures continued to influence RBOB.

NATURAL GAS: Mar: $3.399/MMBtu, up 8.4 cents
NYMEX natural gas futures continued to rise for a second straight day, boosted by an outlook of colder-than-average temperatures across the

ETHANE: higher at 26.25-26.50 cents/gal
Ethane spot prices rose despite downward movement in the energy complex.

AROMATICS: toluene wider at $4.28-4.35/gal, mixed xylene tighter at $4.50-4.55/gal
Prompt n-grade toluene spot prices were discussed within a wider range during the day, compared with $4.29-4.31/gal FOB (free on board) the previous session. MX spot prices were within a tighter range from $4.50-4.65/gal FOB the previous day.

OLEFINS: ethylene at 63.5 cents/lb, RGP done steady at 73 cents/lb
US ethylene for February was heard done at 63.500 cents/lb, lower than previous deals done at 65.000-65.125 cents/lb. February refinery-grade propylene (RGP) was done at 73 cents/lb, flat with a trade
http://www.icis.com/Articles/2013/02/05/9638306/evening-snapshot-americas-markets-summary.html
Identifying Individual Terms

5 Nature Keys

In RDDL 1.0, the nature of a resource is specified using a machine-readable label (i.e., a URI), which identifies "what kind of thing" the resource is. For example, its nature might be "HTML documentation" or "XML Schema" or "CSS Stylesheet". In RDDL 1.0 the purpose of an ancillary resource is also specified using a machine-readable label. If a URI identifies a nature, is it coherent to say that it also identifies a namespace or an HTML document, a media type or a namespace? We can address this problem by observing that we don't really need to model what a nature is; we simply need to model the fact that we use a nature-related URI as a key to distinguish between different resources that could satisfy the same purpose. The nature key is the label which allows us to distinguish between the different targets that could be used for the purpose.

nature key "RELAX NG". This model also includes two examples of HTML documentation: defguide.html, which has the purpose "reference", and docbook.html, which has the purpose "normative reference". Although it is often the case that purpose and nature are closely coupled, as this example shows it is not always possible to determine one given the other. A RELAX NG validator could find all the resources that serve the purpose "validation", identify the one (or one of the ones) with the nature key "RELAX NG", and proceed with a validation task. Similarly, a human being could find the resource with the purpose "normative reference" to locate the specification in a convenient format. Here's an example of the DocBook model above, expressed in RDF using [N3].

# RDDL Model for DocBook
@prefix purpose: <> .
@prefix nature: <> .

<> purpose:validation [ a nature:Object; nature:key <>; nature:target <> ];
   purpose:validation [ a nature:Object; nature:key <>; nature:target <> ];
   purpose:reference [ a nature:Object; nature:key <>; nature:target <> ];
   purpose:normative-reference [ a nature:Object; nature:key <>; nature:target <> ] .

If we can construct this model from a namespace document, then we know we have all the information we need to locate related resources. This can be done with an XSLT transformation into RDF. If this transformation is applied, the resulting resource description includes the following model fragment:

<> purpose:normative-reference <>, <>

identifies a key for the nature of a resource encoded in that vocabulary. For other resources, the URI of the normative specification is appropriate. Here are the nature keys corresponding to the natures listed in [RDDL 1.0]:

The key for a defined term.
The key for CSS.
The key for an XML DTD.
The key for a mailbox.
The key for generic HTML.
The key for HTML 4.
The key for HTML 4 Strict.
The key for HTML 4 Transitional.
The key for HTML 4 Frameset.
The key for XHTML 1.0.
The key for XHTML 1.0 Strict.
The key for XHTML 1.0 Transitional.
The key for an RDF Schema.
The key for RELAX (not RELAX NG) Core.
The key for RELAX (not RELAX NG) Namespace.
The key for Schematron.
The key for an OASIS Open Catalog.
The key for a W3C XML Schema.
The key for XML character data.
The key for escaped XML.
The key for an XML unparsed entity.
The key for an IETF RFC.
The key for an ISO Standard.

Purpose is a relationship between a namespace name and another resource. Broadly, it describes the action one might take with the related resource or the reason one might look at it. For example, with respect to an XML document, the purpose of a W3C XML Schema might be "validation". For an XHTML 1.0 document, the purpose of the XHTML Recommendation might be "normative reference".

@prefix xs: <> .
@prefix assoc: <> .
@prefix rdf: <> .
@prefix rdfs: <> .
@prefix dc: <> .
@prefix : <> .
@prefix p: <> .
@prefix n: <> .
@prefix r2: <> .

# Purpose is a relationship between the resource identified with
# a namespace URI and another resource (that is presumably useful
# for the specified purpose).
#
# With respect to example.xml, the purpose of example.xsd is schema validation.

n:Key a owl:Class;
    :comment """A resource whose IRI is used by RDDL to identify the nature of a resource""";
    :label "NatureKey" .

n:Object a owl:Class;
    :comment """An association of a resource and a nature key: the object of an rddl:purpose""";
    :label "Object" .

# Nature is the relationship between a resource and the resource
# which defines its inherent character.
#
# The nature of example.xsd is that it is a W3C XML Schema document.

n:target a owl:ObjectProperty;
    :comment "The resource of an Object";
    :domain n:Object;
    :label "target" .

n:key a owl:ObjectProperty;
    :comment "The Nature Key of an Object";
    :domain n:Object;
    :label "nature key";
    :range n:Key .

p:Purpose a owl:ObjectProperty;
    :comment "The super-type of all RDDL purposes";
    :range n:Object;
    :label "Purpose" .

# ============================================================
# RDDL 1.0 Purposes

p:validation a owl:ObjectProperty;
    :subPropertyOf p:Purpose;
    :comment "Serves the purpose of SGML or XML DTD validation." .

p:schema-validation a owl:ObjectProperty;
    :subPropertyOf p:Purpose;
    :comment "Serves the purpose of W3C XML Schema validation." .

p:module a owl
http://www.w3.org/2001/tag/doc/nsDocuments-2007-10-05/diff_20070919.html
Stricter API filtering

Django Rest Framework and Tastypie both provide REST queries and allow looking up lists of objects based on query parameters. In the former case it allows multiple back-ends, but defaults to using django-filter. That's a fine approach with one problem. Consider this:

You've got a reasonably good idea of what that will return. How about this one?

What does that return? Everything. Because category got typo'd, it was excluded from the filter and you didn't filter on anything at all. Ouch.

Out of the box, Django Rest Framework only allows GETs on lists. Even a GET that goes awry like that is bad. But you can use the bulk library to do PATCH, PUT and DELETE. A DELETE that matches everything is really going to ruin your day. Kumar rightly complained about this, so I threw in a simple filter to stop that.

from rest_framework.exceptions import ParseError
from rest_framework.filters import DjangoFilterBackend

class StrictQueryFilter(DjangoFilterBackend):
    def filter_queryset(self, request, queryset, view):
        requested = set(request.QUERY_PARAMS.keys())
        allowed = set(getattr(view, 'filter_fields', []))
        difference = requested.difference(allowed)
        if difference:
            raise ParseError(
                detail='Incorrect query parameters: ' + ','.join(difference))
        return (super(StrictQueryFilter, self)
                .filter_queryset(request, queryset, view))

Once it's done, alter DEFAULT_FILTER_BACKENDS to point to that and make sure your filter_fields are correct. Now a typo'd query string will return you a 422 error.
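The core of the fix is just a set difference between requested and allowed parameters. Here is a framework-free sketch of that check (the name `check_params` and the sample field names are illustrative, not part of DRF's API):

```python
def check_params(query_params, filter_fields):
    """Reject any query parameter that is not an allowed filter field,
    instead of silently ignoring it the way django-filter does."""
    requested = set(query_params)
    allowed = set(filter_fields)
    difference = requested - allowed
    if difference:
        # DRF's ParseError plays the role of this ValueError.
        raise ValueError(
            'Incorrect query parameters: ' + ','.join(sorted(difference)))
    return True
```

So `check_params({'category': 'shoes'}, ['category'])` passes, while a typo'd `'catgory'` raises instead of matching everything.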
http://agmweb.ca/2015-05-12-stricter-filtering/
By Alvin Alexander. Last updated: May 14 2018

Scala FAQ: How do I get the current year as an integer ( Int value) in Scala?

Solution: Use the Java 8 Year or LocalDate classes, or the older Java Calendar class. The solutions are shown below.

Java 8 Year

import java.time.Year
val year = Year.now.getValue

Java 8 LocalDate

import java.time.LocalDate
val year = LocalDate.now.getYear

Java Calendar

If for some reason you’re not using Java 8 (or newer), here’s the older Calendar class solution:

import java.util.Calendar
val year = Calendar.getInstance.get(Calendar.YEAR)

REPL examples

Skipping over the import statements, here’s what the solutions look like in the Scala REPL:

scala> val year = Year.now.getValue
year: Int = 2018

scala> val year = LocalDate.now.getYear
year: Int = 2018

scala> val year = Calendar.getInstance.get(Calendar.YEAR)
year: Int = 2018

The java.time API

For more details, here are some statements from the java.time API page:

- The (java.time API is the) main API for dates, times, instants, and durations.
- The classes defined here represent the principal date-time concepts. The calendar-neutral API should be reserved for interactions with users.

More information

For more reading, here are links to the Javadoc:
https://alvinalexander.com/scala/how-get-current-year-as-integer-in-scala
WebDeveloper.com > Site Management > Business Matters > Charge for blog

brolin112 (10-20-2011, 04:48 PM)
I have a blog and I would like to charge $4 monthly (by paypal) from users. Is it possible?

seobd (10-24-2011, 02:23 PM)
need to see you blog site :)

brolin112 (10-26-2011, 02:03 PM)

seobd (10-26-2011, 02:35 PM)
Actually it will be a restriction for users. Next time they will not want to use your site, and as a result you lose your users and your site traffic. Hope you understand. :)

brolin112 (10-30-2011, 06:12 AM)
Yes I know that, however do you know websites with this kind of option?

regans (11-18-2011, 03:15 AM)
Many blogs and niched websites have charges for their newsfeeds; it's not an uncommon practice. However, keep in mind the benefit of the customer when you ask for payment. Always try to give him some goodies in return (downloadable stuff, access to private events or others) to make the payment psychologically worth it. Another point: your blog already has high quality content and good rankings, why don't you try to take more advantage of your SE position? I mean hosting advertisements, advertorials and paid text links.

Ecom-host-guru (11-28-2011, 12:10 PM)
Of course it's possible, but do you really want to do it? Most users do not want to pay for things that they can get for free elsewhere. I would recommend that you look into alternate ways to finance it, i.e. affiliate links, Google AdSense, paid ads, selling ebooks or special access/emails perhaps. You will probably need to at least give away some of it for free to get new customers.
http://www.webdeveloper.com/forum/archive/index.php/t-252527.html
- Function Component
- Class Component

#Nested

Behind the scenes, React Native converts this to a flat NSAttributedString or SpannableString that contains the following information:

"I am bold and red"
0-9: bold
9-17: bold, red

#Containers

<Text>
  <Text>First part and </Text>
  <Text>second part</Text>
</Text>
// Text container: the text will be inline if the space allowed it
// |First part and second part|

// otherwise, the text will flow as if it was one
// |First part |
// |and second |
// |part       |

<View>
  <Text>First part and </Text>
  <Text>second part</Text>
</View>
// View container: each text is its own block
// |First part and|
// |second part   |

// otherwise, the text will flow in its own block
// |First part |
// |and        |
// |second part|

#Limited

Meanwhile, fontFamily only accepts a single font name, which is different from font-family in CSS.

Reference

#Props

accessibilityHint#
An accessibility hint helps users understand what will happen when they perform an action on the accessibility element when that result is not clear from the accessibility label.

accessibilityLabel#
Overrides the text that's read by the screen reader when the user interacts with the element. By default, the label is constructed by traversing all the children and accumulating all the Text nodes separated by space.

accessibilityRole#
Tells the screen reader to treat the currently focused on element as having a specific role. On iOS, these roles map to corresponding Accessibility Traits. Image button has the same functionality as if the trait was set to both 'image' and 'button'. See the Accessibility guide for more information. On Android, these roles have similar functionality on TalkBack as adding Accessibility Traits does on VoiceOver in iOS.

accessibilityState#
Tells the screen reader to treat the currently focused on element as being in a specific state. You can provide one state, no state, or multiple states. The states must be passed in through an object. Ex: {selected: true, disabled: true}.

accessible#

minimumFontScale#
Specifies the smallest possible scale a font can reach when adjustsFontSizeToFit is enabled. (values 0.01-1.0).

nativeID#
Used to locate this view from native code.

numberOfLines#
Used to truncate the text with an ellipsis after computing the text layout, including line wrapping, such that the total number of lines does not exceed this number.

selectable#
Lets the user select text, to use the native copy and paste functionality.

selectionColor#
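The flattening described under #Nested can be illustrated outside React Native. This is a hypothetical sketch (`flatten` is not a React Native API) of turning nested styled spans into one string plus attribute ranges, the way "I am bold and red" becomes 0-9 bold and 9-17 bold+red:

```javascript
// Flatten a tree of styled spans into { text, ranges }, where each range
// records the start/end offsets and the accumulated (inherited) styles.
function flatten(node, inherited = [], out = { text: "", ranges: [] }) {
  const styles = [...inherited, ...(node.styles || [])];
  for (const child of node.children || []) {
    if (typeof child === "string") {
      const start = out.text.length;
      out.text += child;
      out.ranges.push({ start, end: out.text.length, styles });
    } else {
      // Nested spans inherit the parent's styles, like nested <Text>.
      flatten(child, styles, out);
    }
  }
  return out;
}
```

Calling it on `{ styles: ["bold"], children: ["I am bold", { styles: ["red"], children: [" and red"] }] }` produces the flat string with the two ranges described above.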
https://reactnative.dev/docs/text
David: It's fine as is then
Peter: Is that all you want to talk about?
<anne> Anne: removed dependency section
... Fixed syntax section to build on CSS 2.1 the same way the namespace module does
... There is a reference in the current CR in a note that never made it to that
... Made HTML and XML informative rather than normative
... Relative units are based on initial value and not initial value of root element
... Updated grids
Anne: Is it okay if I move the draft to public space?
Elika: As long as Hakon is okay - do it
Resolved: move media queries to public space
Daniel: Thank you HP for hosting!
<glazou> ==== ADJOURN ====
<tantek> note microformats dinner meetup tonight, everyone is welcome
http://www.w3.org/2008/03/29-css-minutes.html
NAME
getdents - get directory entries

SYNOPSIS
#include <unistd.h>
#include <linux/types.h>
#include <linux/dirent.h>
#include <linux/unistd.h>
#include <errno.h>

int getdents(unsigned int fd, struct dirent *dirp, unsigned int count);

DESCRIPTION
This is not the function you are interested in. Look at readdir(3) for the POSIX conforming C library interface. This page documents the bare kernel system call interface.

The system call getdents() reads several dirent structures from the directory referred to by the open file descriptor fd into the buffer pointed to by dirp. The count parameter is the size of that buffer.

RETURN VALUE
On success, the number of bytes read is returned. On end of directory, 0 is returned. On error, -1 is returned, and errno is set appropriately.

NOTES
This call supersedes readdir(2).

SEE ALSO
readdir(2), readdir(3)

COLOPHON
This page is part of release 3.01 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
http://manpages.ubuntu.com/manpages/intrepid/man2/getdents64.2.html
Can someone help me decode this comment from this post on Scala type inference: Is that a theoretical result published somewhere? And when they say constraint typing, they mean "constraint-based typing" right?

I went to look at François Pottier's PhD thesis, which handles structural subtyping. In fact it seems to handle type inference in a modular way, in the sense that inferred constraints can be simplified separately for each type-checked program fragment (provided you have a type scheme for the polymorphic identifiers you depend on (module interfaces), as is usual in ML-based languages). There is a remark about the relation with control-flow analysis in the introduction:

So far, we have discussed typing and type inference only. Yet, it is important to realize that the same algorithms could be described in terms of abstract interpretation. Indeed, since each function application gives birth to a constraint, the constraint graph is in fact an approximate representation of the program's data flow. For instance, in the absence of polymorphism, our system would be equivalent to a 0-CFA, as pointed out by Palsberg, O'Keefe and Smith [39, 40]. One can also compare our type inference system to a modular abstract analysis, such as SBA [25, 19]. In both cases, the analysis yields a description of the program's behavior, as a set of constraints. Typing is traditionally distinguished from abstract interpretation, because of its greater simplicity and lesser precision; here, the type systems at hand are precise enough that the distinction blurs, and both points of view are possible.
Besides, note that this parallel with abstract interpretation allows understanding some of our simplification algorithms in a more graphic way: for instance, garbage collection simply consists in simulating a flow of data through the constraint graph, and in destroying all unused edges.

The discussion between Spiewak and Odersky in the link you cite would seem to indicate that nominal subtyping is harder in this case than structural subtyping. I'm much less familiar with nominal subtyping and don't have references for that.

This smells to me like a statement that involves some hidden assumptions. There is a sentence later in that comment (end of the same paragraph) that strikes me as very curious:

...the entire program must be available to the type checker in order to definitively derive the union type at a particular declaration site

This seems to suggest that the union has to be effectively closed (at least de practico). The only reason I can see to require that is that inequality type constraints are being replaced with enumerated constraints. I can see a bunch of reasons from an HM-style inference perspective why that might be a tempting approach, but I really don't see that it's a necessary approach. There has been some clever work on inference in the presence of nominal subtyping lately.

There has been some clever work on inference in the presence of nominal subtyping lately.

Cool, any pointers? I'm a bit frustrated now, it doesn't seem that hard for some result much better than Scala's.

Getting F-Bounded Polymorphism into Shape, 2014
Ben Greenman, Fabian Muehlboeck, and Ross Tate

Ah, this looks kind of like Jiazzi! Shapes are like classes imported into units. It also matches my experience with using open classes (type families) in Scala: the shape traits were never to be used directly as types, but as bounds for type parameters that were!
This design was made illegal in a later version of Scala for reasons that I didn't really understand at the time, but now make more sense. I hope Martin Odersky reads this paper.

Looks like something useful to understand. This example looks like a pretty clumsy encoding of a type class:

interface Graph<G extends Graph<G,E,V>, E extends Edge<G,E,V>, V extends Vertex<G,E,V>> {
    List<V> getVertices();
}

interface Edge<G extends Graph<G,E,V>, E extends Edge<G,E,V>, V extends Vertex<G,E,V>> {
    G getGraph();
    V getSource();
    V getTarget();
}

interface Vertex<G extends Graph<G,E,V>, E extends Edge<G,E,V>, V extends Vertex<G,E,V>> {
    G getGraph();
    List<E> getIncoming();
    List<E> getOutgoing();
}

class Map implements Graph<Map,Road,City> {...}
class Road implements Edge<Map,Road,City> {...}
class City implements Vertex<Map,Road,City> {...}

I don't think type classes can deal with family polymorphism, correct? How do you relate Map, Road, and City so that they are bound to Graph, Edge, and Vertex in the Map, Road, and City types? See gBeta as a language that deals with such designs natively.

I meant something like:

typeclass Graph g v e where
  getVerts :: g -> [v]
  getEdges :: g -> [e]
  getIncoming :: v -> [e]
  getOutgoing :: v -> [e]
  getGraph :: v -> g
  ...

instance Graph Map City Road where
  ...

Note: And I edited the parent comment to fix formatting of less-than.

Your type class version seems just equal to:

class Graph[G,V,E] { ... }

The point is not in the instantiation of the class, but in the implementation of it. You can define your base graph, and then evolve the implementations in an extension of the graph.

All you are doing here is tying the knot. You're right that OO languages offer something that Haskell doesn't here (though the type class version doesn't look equivalent to one class Graph[G,V,E]). Namely, you can define an instance for (Map, City, Road) and then refine those classes (BigCity extends City, etc.) while keeping the same implementation of the Graph interfaces.
Haskell isn't an OOP language, after all. But I think my point stands that the snippet I posted is a quite cumbersome way of specifying a theory of three related classes.

How does Haskell handle the Comparable interface? Can a type class refer to itself? Edit: ok, this is a dumb question. Type classes are just constructors...

Haskell uses the Eq type class for comparables. Type classes are a mechanism for constraining types to have certain associated functions. So you write (Eq a => ...) to indicate that the type 'a' should have some associated function of type a -> a -> Bool. So it would look like this:

interface Comparable[c extends Comparable[c]]
    Bool operator == (c rhs)

class Foo implements Comparable[Foo]
    operator ==(Foo rhs) { ... }

vs

typeclass Eq a
    (==) :: a -> a -> Bool

class Foo  -- If Haskell had classes ...
    ...

instance Eq Foo
    (x == y) = ...

Edit: It occurs to me that by comparable, you probably meant the Ord type class, not Eq, but that shouldn't change much...

Ya, I realized this is super easy in Haskell, and doesn't require anything like recursion as it does in OOP.

Redundant post, I just made the same point as Matt above.

Intuitively this may have something to do with constraint propagation. If you have inequality constraints, you cannot tell if unification fails without access to all constraints imposed top down. If you only have access to part of a program you cannot determine if a unification should fail. If in fragment 1 a disequality is established between A and B, and in fragment 2 A and B are unified, only a single type is presented at the module interface; hence the incompatibility of fragments 1 and 2 cannot be expressed at link time. It doesn't have to be unification; here unification could be replaced by local resolution of an equality constraint. In effect, in the presence of equalities and inequalities all constraints need to be resolved at the top level to find any possible contradiction.
I think equality-only constraint resolution is the only kind that is locally resolvable.

Why can't you just resolve the constraints as you receive them? Whenever you get a constraint, just update your lattice right there and then.

Because inequality requires the type variables to be grounded in order to discharge it. Equality is the only constraint you can immediately discharge with non-ground type variables (by unification). My comment above is that after local unification, how do you express an inequality on something already unified? I don't get what you mean by updating the lattice.

Edit: I am sure it can be done for some set of restrictions, but I don't know what restrictions. One thing you obviously can't allow is to declare some types in a header that get used in one module but get a subtyping constraint defined in a different module.

It seems really possible from my experience. I'm not unifying at all, and also I'm not allowing subtyping between type parameters yet; they are only f-bounded (e.g. T <: S is not possible if T and S are type parameters, but T <: Foo[S] is fine).

You definitely can't allow subtyping after the fact between two previously visible type variables (don't violate truth in subtyping), but it might be doable for a fresh type variable.

Do you consider that Foo[S] <: Foo[T] simplifies to S <: T, and hence it's still possible to introduce constraints on ungrounded type variables? Then we find we need to unify S and T, but of course we cannot violate the subtyping constraint when unifying; hence we don't know if unification should succeed until we know the concrete types of S and T.

I don't allow Foo[S] <: Foo[T] if S and T are both fresh type variables (instances are a different story). So currently, S and T are not allowed to have a relationship. Also, for Foo[TypeA] <: Foo[TypeB], the constituents aren't just related by TypeA <: TypeB or even TypeA = TypeB (well, if you want to be conservative), but something a bit more complex than that!
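The asymmetry discussed here, equalities dischargeable locally versus inequalities needing ground types, can be sketched concretely. This toy solver is purely illustrative (it is not from any real compiler, and the two-level Int/Float <: Num hierarchy is an assumption for the example):

```python
class Var:
    """A type variable that may later be bound to a ground type name."""
    def __init__(self, name):
        self.name = name
        self.value = None

SUBTYPE_EDGES = {("Int", "Num"), ("Float", "Num")}  # assumed toy hierarchy

def is_subtype(a, b):
    return a == b or (a, b) in SUBTYPE_EDGES

def ground(t):
    return t.value if isinstance(t, Var) else t

def solve(equalities, subtypings):
    # Equality constraints are discharged immediately: bind the variable.
    for var, ty in equalities:
        var.value = ty
    # Subtype constraints can only be *checked* once both sides are ground;
    # until then they must be carried forward, which is why they resist
    # purely local, per-fragment resolution.
    deferred = []
    for a, b in subtypings:
        lhs, rhs = ground(a), ground(b)
        if lhs is None or rhs is None:
            deferred.append((a, b))          # cannot decide yet
        elif not is_subtype(lhs, rhs):
            raise TypeError("%s is not a subtype of %s" % (lhs, rhs))
    return deferred
```

Fragment 1 can only record S <: T as deferred; when fragment 2 later grounds S = Float and T = Int, the contradiction finally surfaces, mirroring why the constraints must survive to the top level.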
Surely if I have a generic sort procedure that expects an array of numbers (number being the super type of int, float, int64, int32 etc), then:

Array Int <: Array Num

And:

sort :: (a <: Array Num) => a -> a

should admit any container that has at least the same interface as Array, and contains something that at least has the same interface as Num.

Yes, you at least get that. You can get more than that also, unfortunately. It might depend on whether your language is mutable or not, but even procedure arguments are contravariant, so I'm not sure.

What more can you get (substituting a generic container for an array)? Surely:

a <: Container Num

means 'a' must be something substitutable where I expect a Container of Num(s).

It means exactly that and nothing more. Where's the problem?

f :: (a <: Container Num) => a -> IO ()

is a function that takes an argument that provides at least the same interface as a container of nums, and we don't mind if it provides extra, as we never use it.

f :: (a <: Container Num) => IO a

is a function that returns something that is at least a Container Num, i.e. something that can be used anywhere a container of nums is expected. Again we don't care if it provides extra functionality, because we will never use it. Note I am using the IO monad because I don't want to deal with the details of what exactly is mutable.

Haskell doesn't have subtyping, so there isn't a problem. Ad hoc polymorphism is quite different from the subtype variety; it isn't about substitutability. But if you have say b <: a, how are a.T and b.T related? Knowing nothing about variance, it just isn't b.T <: a.T, since we need to derive w <: b.T if w <: a.T.

I'm not talking about Haskell, I am talking about a Haskell-like language (with type classes, parametric polymorphism, similar type system) with the addition of subtyping. Type-classes behave exactly like Java interfaces, except for the static vs runtime difference.
What is wrong with the statement that a function that accepts a container of numbers should accept any subtype of container of any subtype of numbers? Isn't that clearly what we mean by:

a <: Container Number

Type classes don't behave like Java interfaces; even the designs they lead to are mostly very different.

It is not good enough; it doesn't deal with catcalls. Unless you mandate covariance, which in an immutable language like Haskell might be workable.

A type-class is just a record plus an implicit parameter. A record is an interface just like in Java, for example:

data R a = R { f :: a -> a }

interface R { R f(R r); }

And you could apply subtyping constraints as required:

data R a = R { f :: (b <: a) => b -> b }

data Animal = Animal
data Cat = Cat  -- Cat <: Animal

x = R Animal { f = id }
x.f Cat

You pass it an animal, it returns an animal. Where's the difficulty?

This is an old question/debate that I don't feel is relevant to me right now. The debate is: do we need first-class subtype polymorphism, or can it just be effectively simulated with type classes? Just adding a subsumption function doesn't really get you subtypes, though it gives you the subsumption aspect of subtyping (obviously). The problem/expressiveness with subtyping beyond subsumption involves its interaction with type parameters, and that's about it.
One of the interesting results is that you don't need to worry about co-variaiance or contra-variance unless performing an explicit narrowing operation. This is because the type checker looks at the individual method type signatures. You basically get structural subtyping (duck-subtyping) but you can do it nominally too. Really type-classes are the super type of all subtyping methods, and you could construct many object-oriented language behaviours within the sandbox of type-classes. The paper shows how to construct many OO language features, so that you can quickly build a DSL OO language prototype with sound type-checking. Again, this is a different more ideological argument to have some other time. There are good reasons Haskell hasn't taken over the world yet even with type classes, and maybe not so good reason also. I now realise that the comment the original post refers to is talking about a method of type inference, and nothing to do with constraint types. It is referring to the technique of bidirectional inference where a set of constraints is gathered for a whole program and solved hence avoiding the left-to-right bias in something like Scala. I think this is actually a flawed argument though as my compositional typing algorithm solves this problem, at the cost of making the type signatures inferred in the 'interface' files more complex. See: It is also flawed in a simpler perspective, if we simply require explicit type signatures at module boundaries, which is a good thing so we maintain stable interfaces between modules, then it becomes simply whole-module inference not whole-program inference, so I don't think its true on a much simpler level too. Everything below continues the previous (misdirected) discussion, but I think its interesting so I am going to leave it here. Trying to backtrack to where we diverged from the topic, Explicit subtype constrains seem equivalent to implicit subtyping, and clearly express co- and contra-variance. 
Also we should expect them to obey the relevant algebraic rules. I think the original subject assumes we are dealing with an algebraic type system, and it is under these circumstances that the introduction of subtyping breaks separate compilation. If you don't allow all the axioms, the conclusion may not apply. However, as soon as you introduce generics, you will find you need all the axioms to allow generics to be used in all the ways people want to use them. This leads to trying to combine a type system with parametric polymorphism and function types with OO-style subtyping, and that apparently leads to problems with separate compilation. I'm not so sure now, as with type-classes we don't have such problems, and if they subsume subtypes then subtypes should not have such problems either?

Subtype constraints in an algebraic type system are the equivalent of 'subtypes' in a 'C++'-like type system that lacks function types. I think it is actually a category error to try and combine the two. Subtyping in an algebraic type system _is_ subtype constraints. It seems much better to use subtype constraints (and type-classes) to achieve the same restrictions on the value-level code, and keep the type system simpler and easier to understand, whilst avoiding these problems.

I find it really hard to see the difference between:

```
f :: (a <: Shape) => a -> a
g :: (a -> b <: Square -> Shape) => (a -> b) -> b -> b
h :: Shape -> Shape
```

and

```
Shape f(Shape)
Shape g(UnaryFunction<Square, Shape>, Shape)
// cannot express 'h'.
```

Apart from that, I can ask for exactly a Shape in the former, which seems more powerful, and you cannot clearly see the variance of a generic 'Function': you would have to look at the definition to see which parameters are "in" or "out" in C#, for example. These are two different notations for the same concepts, except information is clearly missing in the C++-style signatures.
The problems stem from the lack of function types: in an algebraic type system 'a -> b' has clearly different subtyping rules than datatype 'X a b', whereas C++-style generics do not distinguish between 'UnaryFunction<a, b>' and 'Pair<a, b>'.

Thinking about the invariance of arrays, of course an array interface can have the desired variance, considering:

```
set :: Array x -> Int -> x -> IO ()
get :: Array x -> Int -> IO x
```

This would have the correct variance on the element written to or read from the array. It is the implicit array accessor a[i] that causes problems, because the result is a reference that can be both input and output.

Here's a reference to a paper I contributed to that covers the topic in depth.

Good of you to edit your comment. Sorry about that original comment, I am going through a phase of disliking casting at the moment. I plan on only providing type-based dispatch (maybe including a type-case statement) in the language I am working on. In fact there are two different issues here, which are static subtyping and runtime (dynamic) subtyping. I was trying to keep only to static subtyping. In dynamic subtyping, it looks like you need a cast, but in something like C++ it's preferable to use the visitor pattern rather than casting. This is implemented only using vTable dispatch, and you could use this mechanism too for a type-case. Now there is no casting, and no need for runtime safety checks (as it's simple vTable dispatch on two objects).

If we allow open data types and extensible records, and unify type classes and records using implicits (and modules too for good measure), then a single declaration will give us an interface that can be used in static subtype polymorphism (when used implicitly) or dynamic subtype polymorphism (when used explicitly). I think that I was basically agreeing with you.
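The function-type point above can be made concrete in a language that does have first-class function types with distinct variance rules. This TypeScript sketch is my own illustration (a different language and mechanism than the thread's Haskell encoding): a parameter of type `Square -> Shape` accepts a `Shape -> Shape` function, because function arguments are contravariant and results covariant.

```typescript
class Shape { area(): number { return 0; } }
class Square extends Shape { side = 1; }

// h :: Shape -> Shape
const h = (s: Shape): Shape => s;

// g expects a (Square -> Shape) function. h is a subtype of that function
// type: it accepts *more* inputs (any Shape, so certainly any Square) and
// returns nothing *less* specific than a Shape.
const g = (fn: (sq: Square) => Shape, sq: Square): Shape => fn(sq);

console.log(g(h, new Square()) instanceof Shape); // true
```

Compare this with an invariant generic like `Pair<a, b>`, where no such substitution would be allowed: the variance lives in the function arrow itself, which is exactly the information the C++-style signatures above leave implicit.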
The initial argument about casting was to show that it's a special case of a general constructive definition of subtyping (but for some reason, many people seem to define subtyping strictly in terms of castability). A general witness to subtyping may require non-trivial work (as in C++ with multiple or virtual inheritance; not talking about visitors or unsound "up-casting" -- that's a different thing entirely). But I cut out the argument since it seemed like an intolerable tangent to you.

I think I said you missed my point, but that doesn't mean it wasn't right or useful to someone else. I did feel it was a tangent, but I did not find it intolerable. I thought it was interesting, but I didn't really have the time to compare it to what I have been working on; some parts of the OOHaskell paper on subtyping and variance dealt with something similar. In a way you were introducing (completely valid) details that I was trying to gloss over.

Covariance is only sound for read-only arrays. So I suppose this particular `sort` function would take the covariant ReadOnlyArray generic subclass of the Array generic class.

I think I've come up with an encoding that avoids unification as well as a covariance distinction. We divide the type into multiple channels and "tunnel" parts of the type into latent channels that become active only on explicit assignment.

But that is what I meant: there is something more to a

Seems related to use-case variance (which was judged too complex in the form that was integrated into Java; you can possibly do better with a new design). I guess we will have to wait for you to release your system.

Not wild cards, really transmitting parts of the type in covert channels. It is quite weird and I wonder if it really works. The math seems to work out, but why it works out I have no clue! I'll release videos and an initial draft of a paper pretty soon.
I'm a bit far away from a release (mostly related to the live programming features I'm also working on :) ).

Mutable arrays are invariant, so (assuming that Array is a mutable type) a <: Array Num is the same as a == Array Num. The correct type for the sort function would then be (since it uses the argument in a read-only way):

```
sort :: (a <: Num) => Array a -> Array a
```

You are right, and this is not a good example to demonstrate container variance. A bound (a <: Ord[a]) would be even better.

The simple goal: a function that can sort a (mutable) container of numbers, whatever the type of the container (providing it conforms to at least some interface) or of the elements (providing they conform to at least some interface).

This is no problem... we can be polymorphic on the array contents. We can read an element from the array and write it back. There is no problem with mutable arrays. (Warning: I am probably making some different assumptions to you.) The generic function will of course use associated types, so the value read from the array has type ValueType(I), where I is the iterator on the array (we can get the type of the iterator from IteratorType(C), where C is the container). Consider:

```
typeclass Container c where
    data IteratorType c :: *
    begin :: c -> IteratorType c
    end   :: c -> IteratorType c

typeclass Eq i => Iterator i where
    data ValueType i :: *
    succ :: i -> i
    pred :: i -> i

typeclass Iterator i => Readable i where
    source :: i -> IO (ValueType i)

typeclass Iterator i => Writeable i where
    sink :: i -> ValueType i -> IO ()

reverse :: (Container c, Readable (IteratorType c), Writeable (IteratorType c))
    => c -> IO ()
reverse x = reverse' (begin x) (end x) where
    reverse' f l | (f == l || succ f == l) = return ()
    reverse' f l | f /= l = do
        a <- source f
        b <- source l
        sink f b
        sink l a
        reverse' (succ f) (pred l)
```

Actually I see the point: the above assumes all elements in the array are the same type.
It should still be possible, in that we can use a polymorphic copy/clone operation, and of course in the underlying implementation the array would really be an array of object handles where the actual objects are on the heap. The implementation is going to be pretty complex in a language like Haskell.
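As a footnote to the array-variance points above ("covariance is only sound for read-only arrays"), TypeScript's `ReadonlyArray` illustrates the safe direction. The classes here are my own illustration, and note that TypeScript itself unsoundly permits covariant *mutable* array assignment, so the unsound case appears only as a comment:

```typescript
class Shape { kind = "shape"; }
class Square extends Shape { kind = "square"; }

const squares: Square[] = [new Square()];

// Reading Squares through a read-only Shape view is always sound:
const view: ReadonlyArray<Shape> = squares;
console.log(view[0].kind); // "square"

// view.push(new Shape()); // rejected at compile time: ReadonlyArray has no
// mutators, which is exactly what makes the covariant assignment above safe.
```

The `a[i]` point from the discussion shows up here too: once an accessor can be used for both reading and writing, the read-only view (and its covariance) is no longer available.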
http://lambda-the-ultimate.org/node/5119
React's new Context API and Actions

Steven Washington · Mar 31 · Updated on Apr 02, 2018

Photo: Daniel Watson

Edit: 4/2/2018 - It was pointed out to me that the example in this post had a performance issue, where render was called on Consumers unnecessarily. I've updated the article, examples, and the CodeSandbox to rectify this.

The new React Context API (coming soon — now here, in React 16.3!) is a massive update of the old concept of context in React, which allowed components to share data outside of the parent > child relationship. There are many examples and tutorials out there that show how to read from the state provided by context, but you can also pass functions that modify that state, so consumers can respond to user interactions with state updates!

Why Context?

The context API is a solution to help with a number of problems that come with a complex state that is meant to be shared with many components in an app:

- It provides a single source of truth for data that can be directly accessed by components that are interested, which means:
- It avoids the "prop-drilling" problem, where components receive data only to pass it on to their children, making it hard to reason about where changes to state are (or aren't) happening.

B-but Redux!

Redux is a fantastic tool that solves these problems as well. However, Redux also brings a lot of other features to the table (mostly around enforcement of the purity of state and reducers) along with required boilerplate that may be cumbersome depending on what is needed. For perspective, Redux uses the (old) context API. Check out this article by Dan the Man himself: You Might Not Need Redux

What's Context do?

There are plenty of articles on this (I particularly like this one), so I don't want to go into too many details about how this works. You've seen the examples so far, and they're mostly missing something: how to update the state in the provider.
That state is sitting there, and everyone can read it, but how do we *write* to it?

Simple Context Example

In many of these examples we make a custom provider to wrap around React's, which has its own state that is passed in as the value. Like so:

context.js

```jsx
import React from "react";

const Context = React.createContext();

export class DuckifyProvider extends React.Component {
  state = { isADuck: false };

  render() {
    const { children } = this.props;
    return (
      <Context.Provider value={this.state}>{children}</Context.Provider>
    );
  }
}

export const DuckifyConsumer = Context.Consumer;
```

Seems simple enough. Now we can use the DuckifyConsumer to read that state:

DuckDeterminer.js

```jsx
import React from "react";
import { DuckifyConsumer } from "./context";

class DuckDeterminer extends React.Component {
  render() {
    return (
      <DuckifyConsumer>
        {({ isADuck }) => (
          <div>
            <div>{isADuck ? "quack" : "...silence..."}</div>
          </div>
        )}
      </DuckifyConsumer>
    );
  }
}

export default DuckDeterminer;
```

Passing Functions

Now, what if we wanted to emulate a witch turning something into a duck (stay with me here)? We need to set isADuck to true, but how? We pass a function. In Javascript, functions are known as "first-class", meaning we can treat them as objects and pass them around, even in state and in the Provider's value prop. It wouldn't surprise me if the reason the maintainers chose value and not state for that prop is to allow this separation of concepts. value can be anything, though likely based on state.

In this case, we can add a dispatch function to the DuckifyProvider state. dispatch will take in an action (defined as a simple object), and call a reducer function (see below) to update the Provider's state. (I saw this method of implementing a redux-like reducer without redux somewhere, but I'm not sure where. If you know where, let me know so I can properly credit the source!)
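The dispatch-calls-a-reducer loop just described can be modelled without React at all. This sketch is my own stand-in (`createStore` is not part of the post's code): dispatch feeds the current state and an action through the reducer to produce the next state, just as the Provider's setState updater will.

```typescript
type State = { isADuck: boolean };
type Action = { type: string };

// The reducer computes the next state from the current state and an action.
const reducer = (state: State, action: Action): State => {
  if (action.type === "TOGGLE") {
    return { ...state, isADuck: !state.isADuck };
  }
  return state; // unknown actions leave state untouched
};

// A tiny stand-in for the Provider's setState-driven dispatch.
const createStore = (initial: State) => {
  let state = initial;
  return {
    getState: () => state,
    dispatch: (action: Action) => { state = reducer(state, action); },
  };
};

const store = createStore({ isADuck: false });
store.dispatch({ type: "TOGGLE" });
console.log(store.getState().isADuck); // true
```

In the React version below, `setState(state => reducer(state, action))` plays the role of the `state = reducer(state, action)` line.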
We pass the state into the value for the Provider, so the consumer will have access to that dispatch function as well. Here's how that can look:

context.js

```jsx
import React from "react";

const Context = React.createContext();

const reducer = (state, action) => {
  if (action.type === "TOGGLE") {
    return { ...state, isADuck: !state.isADuck };
  }
  return state; // fall through for unknown action types
};

export class DuckifyProvider extends React.Component {
  state = {
    isADuck: false,
    dispatch: action => {
      this.setState(state => reducer(state, action));
    }
  };

  render() {
    const { state, props: { children } } = this;
    return <Context.Provider value={state}>{children}</Context.Provider>;
  }
}

export const DuckifyConsumer = Context.Consumer;
```

Note that we have dispatch in our state, which we pass into value. This is due to a caveat in how the need to re-render a consumer is determined (thanks, Dan, for pointing that out!). As long as the reference to this.state stays pointed at the same object, any updates that make the Provider re-render, but don't actually change the Provider's state, won't trigger re-renders in the consumers.

Now, in DuckDeterminer, we can create an action ( { type: "TOGGLE" }) that is dispatched in the button's onClick. (We can also enforce certain action types with an enum object that we export from the DuckifyContext file. You'll see this when you check out the CodeSandbox for this.)

DuckDeterminer.js

```jsx
import React from "react";
import { DuckifyConsumer } from "./DuckContext";

class DuckDeterminer extends React.Component {
  render() {
    return (
      <DuckifyConsumer>
        {({ isADuck, dispatch }) => {
          return (
            <div>
              <div>{isADuck ? "🦆 quack" : "...silence..."}</div>
              <button onClick={e => dispatch({ type: "TOGGLE" })}>
                Change!
              </button>
            </div>
          );
        }}
      </DuckifyConsumer>
    );
  }
}

export default DuckDeterminer;
```

The secret sauce here is the dispatch function. Since we can pass it around like any other object, we can pass it into our render prop function and call it there!
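The re-render caveat boils down to reference identity: a Consumer re-renders whenever the `value` it receives is a different object than last time. A framework-free illustration (my own) of why passing `this.state` behaves differently from building a fresh object on every render:

```typescript
type Value = { isADuck: boolean; dispatch: (action: { type: string }) => void };

const state: Value = { isADuck: false, dispatch: () => {} };

// Like value={this.state}: the same object reference on every render.
const stableValue = (): Value => state;

// Like value={{ ...this.state }}: a brand-new object on every render.
const freshValue = (): Value => ({ ...state });

console.log(Object.is(stableValue(), stableValue())); // true  -> no spurious re-renders
console.log(Object.is(freshValue(), freshValue()));   // false -> consumers re-render each time
```

This is why dispatch lives inside state here: bundling it into a new object literal at render time would defeat the stable reference.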
At that point the state of our Context store is updated, and the view inside the Consumer updates, toggling on and off whether the duck truly exists.

Extra credit

You can (read: I like to) also add a helpers field alongside state and dispatch, as a set of functions that "help" you sift through the data. If state is a massive array, perhaps you can write a getLargest or getSmallest or getById function to help you traverse the list without having to scatter the implementation details of accessing various items in a list across your consumer components.

Conclusion

Used responsibly, the new Context API can be very powerful, and will only grow as more and more awesome patterns are discovered. But every new pattern (including this one, even) should be used with care and knowledge of the tradeoffs/benefits, else you're dipping your toes into dreaded antipattern territory.

React's new context API is incredibly flexible in what you can pass through it. Typically you will want to pass your state into the value prop to be available to consumers, but passing along functions to modify state is possible as well, and can make interacting with the new API a breeze.

Try it out

The DuckDeterminer component is available to play with on CodeSandbox, right now!

I'm not too familiar with Redux, so I have a few questions about this method.

1) How would I fire off multiple actions at once? For instance, if my component needs to change 2-3 state values? Can I just pass multiple actions? What does that look like?

2) What if my state/action doesn't take a true/false value but instead changes a string? For example:

```jsx
state = {
  msg: 'This message can change',
  dispatch: action => {
    this.setState(state => reducer(state, action));
  }
};
```

How would I pass in a new msg to the action at this point to change the msg? Thanks so much for the guidance!
1) I would say that you could fire multiple calls to dispatch for each of your actions, or (if possible) create a new action type (a so-called "super-action") that's actually a collection of other actions.

2) Keep in mind that your action can be any sort of object. So in addition to type (which exists so you know what action you are working with), you can add in other data. An action can look like:

And then the reducer would update the state by using action.newString

Note your example will re-render consumers more often than necessary. reactjs.org/docs/context.html#caveats

Thanks for pointing that out! I made some updates to that example that should address this. Always important to RTFM; I messed up in this case. 😳 Always learning!

Worth noting that I've got an open proof-of-concept PR that updates React-Redux to use the new context API.

So React now is more than just a view library?

React has always had state management built in. This is just another way of updating the front-end state; no assumptions are made about any back-end architecture. In theory, instead of your action directly modifying the context state, the action could kick off a request to your backend to update its data, which then responds with updated (and server-validated) data that you can use to update your front-end store. In that sense, React is still just a view layer, showing the data that you pass into its state and responding to user events. It's up to you to decide what sort of data updates that triggers, and React will come along for the ride. 😀

Hey, thx for your post. I was wondering how to do something similar to this; right now I'm just thinking about how to manage multiple reducers and import them at once.
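The author's reply above says an action can carry extra data that the reducer reads (it mentions `action.newString`); the original snippet did not survive, so this is a hedged reconstruction of that idea, with a made-up action type `SET_MSG`:

```typescript
type State = { msg: string };
type Action = { type: string; newString?: string };

// The reducer replaces state.msg with the string carried on the action.
const reducer = (state: State, action: Action): State => {
  if (action.type === "SET_MSG" && action.newString !== undefined) {
    return { ...state, msg: action.newString };
  }
  return state;
};

let state: State = { msg: "This message can change" };
state = reducer(state, { type: "SET_MSG", newString: "Quack!" });
console.log(state.msg); // "Quack!"
```

In the Provider from the post, the same action would flow through `dispatch({ type: "SET_MSG", newString: "Quack!" })` and land in the reducer via the setState updater.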
https://dev.to/washingtonsteven/reacts-new-context-api-and-actions-446o