Playing Video Files in ASP.NET MVC

Step 1: Open VS 2012 and create an ASP.NET MVC Empty application. Name it 'MVC40_CustomActionResult'.

Step 6: Run the application in a browser (I have used Chrome), and the result will be as below:

10 comments:

Awesome, you made my day!! I needed to stream a video from a file that is somewhere on the hard disk and not below inetpub. Your solution made it possible, THANKS!

This method won't work with non-HTML5-compatible videos (HTML5 only supports the Ogg, MP4 and WebM formats; see this link). Do you know how to play videos in other formats, like WMV or AVI? I'm trying to get a behavior similar to accessing the video URL directly (). That opens the VLC media player plugin to play the video, instead of downloading it. To rectify it, I tried using the correct MIME type for each format when serving the video; the VLC plugin launches, but the video doesn't play. You can see the correct MIME types here: I keep searching for the mistake.

Javier - Does this help?

I've found the solution! The VLC plugin wasn't playing the video because it didn't have authorization to execute the method (the web browser can access it because the user is logged in, but the VLC player has no credentials). The trick is to mark the method to allow anonymous access via "[AllowAnonymousAttribute]".

The above solution doesn't provide full support for HTML5 video (for example dynamic seeking or prefetching), as that requires support for Range Requests on the server side. You can read more on how to provide such support in an ActionResult here:

@Thomasz Thanks for that link!

Hi, can you help me? It works perfectly on IE, Safari and Firefox, but not on an iPad...

Hi there. You have set a static path for the file. But how do I set a dynamic path?
https://www.devcurry.com/2013/04/play-videos-in-aspnet-mvc-html5-using.html?showComment=1371645310556
gav^ I am quite disappointed by many of my fellow monks after reading this node. One of the things that makes the Perl community (and especially the Monastery) so great is that it is very lighthearted, especially compared to the "competition", such as Microsoft products, Java, etc. I handed out many -- to the responses here because I was disgusted that something as simple as an April Fools joke turned into an embarrassment to the Monastery (how the hell did this get frontpaged, anyway?). This isn't something that's happened just at this node. I've noticed it happening more and more lately; I often see a proposal mocked and questions receiving very sarcastic responses with very little substance attached. Is that the kind of image that we want to project to new monks and the rest of the world? I certainly would not have stuck around here if my first question had gotten such a response. Here is an idea: if you don't have much positive & useful material to add, or simply think that a node is frivolous, how about shutting the F*** up instead of acting like an asshole? It would be much appreciated.

Here is an idea: if you don't have much positive & useful material to add, or simply think that a node is frivolous

Yeah, I guess no-one actually cares about CPAN pollution. It's just us CPAN purists who think bogus modules should not be in the XML:: namespace. We should not complain.

jryan, CPAN is an important Perl community service. Bad beta modules are welcome, humor is welcome. But a joke module should not be under a serious namespace, and the joke should be made much more obvious to those who don't know the phenomenon called "April Fools' Day". If this module had been called Acme::XML::Simpler, I wouldn't have complained.

My gripe, Juerd, is that when I see childish responses such as these: I too am excited, I can't believe grantm's selfishness (CPAN is not a crap factory).
Man oh man am I excited, so much so that I am going to write to grantm, and tell everybody I know to avoid his CPAN directory.

can any fool add his own favourite Foo::Bar::Broken?

Have you seen my latest module? **mocking code**

...that I very much wanted to never bother coming back here, as this kind of thing happens all the time. It's not like grantm is a stupid beginner; his XML::Simple module seems to be quite popular. Even if he were, he most certainly did not deserve to be treated as such. I never said that you weren't allowed to voice complaints about his actions, but perhaps they could have been voiced a little better?

# $Id: Simpler.pm,v 1.0 2002/04/01 00:00:00 grantm Exp $
package XML::Simpler;

use strict;
use warnings;
require Exporter;
use File::Slurp;
use vars qw(@ISA @EXPORT $VERSION);

@ISA     = qw(Exporter);
@EXPORT  = qw( XMLin XMLout );
$VERSION = '1.00';

sub XMLin  { read_file ( $_[0] ) }
sub XMLout { write_file( $_[1], $_[0] ) }

1;
__END__

It does not parse XML in any way, shape or form, nor does it do anything remotely useful or entertaining.

I too am excited, I can't believe grantm's selfishness (CPAN is not a crap factory).

Why are you excited gav^?

package XML::Simpler;

use strict;
use warnings;
require Exporter;
use vars qw(@ISA @EXPORT $VERSION);

@ISA     = qw(Exporter);
@EXPORT  = qw( XMLin XMLout );
$VERSION = '1.00';

sub XMLin {
    my $file = shift;
    my @lines;
    open(FILE, $file) or die "couldn't open $file $!";
    while (<FILE>) {
        my $line = $_;
        push @lines, \$line;
    }
    close(FILE);
    return @lines;
}

1;
__END__

sub write_file {
    my ($f, @data) = @_;
    local(*F);
    open(F, ">$f")  || croak "open >$f: $!";
    (print F @data) || croak "write $f: $!";
    close(F)        || croak "close $f: $!";
    return 1;
}

Man oh man am I excited, so much so that I am going to write to grantm, and tell everybody I know to avoid his CPAN directory.
update (posted 4 minutes after this node was created, before dws/shotgunefx replied): the fact that "Version 1.00 of XML::Simpler was released on April 1st, 2002" further excites me (is nothing sacred anymore, must everything be tainted in such a disgusting manner). Joke or no joke, this should NOT be in CPAN, this should NOT be workload for testers. I wonder why this module (the namespace for it) was not rejected - can any fool add his own favourite Foo::Bar::Broken? Now, beginners will think this module is real, and try to use it. That means a waste of development time and it's a very bad name for Perl and the CPAN. Maybe if it was named Acme::XML::Simpler, or XML::Simpleminded, I could laugh.

As far as I can tell, the key difference between previous ones (e.g. Sex, DNA, Bleach, Buffy) and mine was that XML::Simpler did not contain obfuscated Perl code.

That's not the point. "Sex" is obviously not serious, DNA is usable, and the synopsis makes clear it's just for fun. Both Bleach and Buffy are in the Acme:: namespace, home of useless but fun modules. XML::Simpler is in the XML:: namespace, and will be found when anyone searches for XML or XML::Simple. A lot of people want ease of use, and will install XML::Simpler before trying XML::Simple. They'll find out it's useless and might give up on trying other modules, because even the simplest doesn't work. Had you named it Acme::XML::Simpler, there would be no problem. As modules can't be deleted easily, I urge you to add a very verbose and clear statement at the top of your POD, add a warning to the code itself, and release version 1.01. Then, request it to be deleted, and if you like, re-submit it under the Acme:: namespace.

I would argue that mine is no more or less useful than the other packages. It does work exactly as the documentation describes. Have you seen my latest module?
package Null;

use v5.6.0;
use strict;

our $VERSION = '1.00';

__END__

=head1 NAME

Null - Does absolutely nothing

=head1 SYNOPSIS

use Null;

=head1 DESCRIPTION

This module only sets $Null::VERSION, but does nothing else.

=head1 COPYRIGHT

None is claimed. This module is released into the public domain.

=cut

update: Shoot the messenger? I wanted to shoot dws there for a second, who seemed more approving than he should have been (it hurts me to see good monks with impaired judgment), but I by no means wanted gav^ shot. Just, due to the lack of the smiley (;) which you later filled in as your interpretation, I wanted to know what he really thought. Anyway, I'm glad we all understand each other now (and I'd have openly said --shoot-- him if that's what I meant ~ very rarely am I subtle ~ some people like not having to wonder)

The joke, such as it is, is in the POD. Try re-reading the module minus code. But you're right. This is a waste of CPAN cycles.

Try re-reading the module minus code.

The POD doesn't make clear it's bogus code; people will be bitten by this module, not only today or this week. A lot of cultures don't have an April Fools' Day. But yeah, XML::Simpler doesn't offer either. It's a gag, and should have been limited to discussion groups like here or perlxml, or even something like an article at ORA (with an appropriate post-fix to clue in those not already on the joke).

-----------------------------------------------------

01-04 is a very bad excuse for

I agree that the joke went a little too far; the XML namespace is already cluttered enough, but Grant is the author of XML::Simple, so maybe he is allowed to show a little disrespect ;--)

What's wrong with stringified.
http://www.perlmonks.org/?node_id=155922
Hi, I just updated myrmidon to remove a lot of extra cruft that was included. Now it has fewer classes and should be easier to maintain. Essentially the changes involved modifying Avalon so that it was not necessary to define *Loader, *Factory etc. for Tasklets/Converters. I also changed it so that task definitions are loaded from an XML file rather than 2 property files.

I haven't received much feedback, but feel free to have a hack at it and see if you can spot any issues. I plan to hack at it again tomorrow, so if you have any requests feel free to make them. If no one has any requests I may add a few tasks.

As far as I am aware, James is the only one at the moment who is opposing a unified namespace for different values in builds. If that is the case I may implement that - if that is not the case, someone else -1 the idea ;)

Also, about evaluating properties in targets/project tags: should we do it? Personally I don't like it, but if everyone else does then ... ;)

Also, James, do you still have any issues regarding Avalon integration? The only one that I don't think I have addressed is the size of the jar file. If that is the only issue, I will write a shrinker that optimises it by removing unused .class files and stripping debug data. That will easily bring it under 100kb (possibly as low as 40kb) - which should be acceptable, no? Remember that all the features except 1 that Avalon provides have been requested on ant-dev before. The one abstraction that hasn't been asked for is the ComponentManager abstraction. I don't think this is too much bloat, if that is what you are worried about. If you have any other issues then feel free to comment.

Cheers,

Pete

*-----------------------------------------------------*
| "Faced with the choice between changing one's mind, |
| and proving that there is no need to do so - almost |
| everyone gets busy on the proof."                   |
|              - John Kenneth Galbraith               |
*-----------------------------------------------------*
http://mail-archives.apache.org/mod_mbox/ant-dev/200012.mbox/%3C3.0.6.32.20001211121925.00a15170@latcs2.cs.latrobe.edu.au%3E
atif abbas - Greenhorn - member since May 21, 2002

Recent posts by atif abbas

Binary Files?
Here is my code so far. Please, I need help with the updating of current-balance. Thanks, kind regards, Atif.

import java.io.*;
import java.awt.*;
import java.awt.event.*;

public class TransactionFile extends Frame implements ActionListener
{
    // TextFields where user enters customer id, firstname,
    // lastname and current balance.
    private TextField cust_idField, firstNameField,
                      lastNameField, current_balField;
    private Button enter, // enter and send record to file
                   done;  // quit program

    // Application
    private DataOutputStream output;

    public TransactionFile()
    {
        super( "Customer Transaction File" );

        // Open the file cust.txt
        try {
            output = new DataOutputStream(
                        new FileOutputStream( "cust.txt" ) );
        }
        catch ( IOException e ) {
            System.err.println( "File not opened properly\n" + e.toString() );
            System.exit( 1 );
        }

        setSize( 300, 150 );
        setLayout( new GridLayout( 5, 2 ) );

        // create the components of the Frame
        add( new Label( "Customer ID" ) );
        cust_idField = new TextField();
        add( cust_idField );
        add( new Label( "First Name" ) );
        firstNameField = new TextField( 20 );
        add( firstNameField );
        add( new Label( "Last Name" ) );
        lastNameField = new TextField( 20 );
        add( lastNameField );
        add( new Label( "Current Balance" ) );
        current_balField = new TextField( 20 );
        add( current_balField );
        enter = new Button( "Add" );
        enter.addActionListener( this );
        add( enter );
        done = new Button( "Done" );
        done.addActionListener( this );
        add( done );
        setVisible( true );
    }

    public void addRecord()
    {
        int cust_id = 0;
        Double balance;

        if ( ! cust_idField.getText().equals( "" ) ) {
            // output the values to the file
            try {
                cust_id = Integer.parseInt( cust_idField.getText() );
                if ( cust_id > 0 ) {
                    output.writeInt( cust_id );
                    output.writeUTF( firstNameField.getText() );
                    output.writeUTF( lastNameField.getText() );
                    balance = new Double( current_balField.getText() );
                    output.writeDouble( balance.doubleValue() );
                }
                // clear the TextFields
                cust_idField.setText( "" );
                firstNameField.setText( "" );
                lastNameField.setText( "" );
                current_balField.setText( "" );
            }
            catch ( NumberFormatException nfe ) {
                System.err.println( "You must enter an integer account number" );
            }
            catch ( IOException io ) {
                System.err.println( "Error writing to file\n" + io.toString() );
                System.exit( 1 );
            }
        }
    }

    public void actionPerformed( ActionEvent e )
    {
        addRecord();
        if ( e.getSource() == done ) {
            try {
                output.close();
            }
            catch ( IOException io ) {
                System.err.println( "File not closed properly\n" + io.toString() );
            }
            System.exit( 0 );
        }
    }

    public static void main( String args[] )
    {
        new TransactionFile();
    }
}

16 years ago, in I/O and Streams

Binary Files?
Thanks for your tips. One other thing: how can I update records once I have stored them? Like if I want to add other records, how will I be able to match the records based on the customer id? Kind regards, Atif

16 years ago, in I/O and Streams

Binary Files?
Hi, I need some ideas with binary files. I have an assignment to do based on binary files. Basically, a binary file stores customer id, name and current balance in ascending order of customer id. A program accepts transactions from a user to update customer records. The user enters a customer id, and the program reads the file for a record matching the customer id entered. Then it asks for the type of transaction - p (payment) or s (sales) - and for the amount to apply to the current balance. The updated record is then written to a new file. The program continues until the end of the file, and then displays each record from the new file.
All transactions are entered in the order of customer id. And the assignment only asks us to create 5 records, with the user entering three transactions, two of payment type and one of sales type. I have some ideas on what input and output streams to use, like FileOutputStream and FileInputStream. But how to update the file I don't know. Kind regards, Atif

16 years ago, in I/O and Streams

I/O filestreams
Hi. I have an exercise where I have to write a program that copies a text file onto another file. The program accepts a command line: java copy original-file destination-file. The program checks that both file names exist, or throws an exception. It then copies the file as per the command line arguments. Here is my source code so far; I would like someone to check it for me and correct me please. The file compiles.

// copy file
import java.io.*;

public class Copy
{
    public Copy()
    {
        try {
            FileInputStream in = new FileInputStream( "sri.txt" );
            FileOutputStream out = new FileOutputStream( "file.txt" );
            byte buffer[] = new byte[ 4096 ];
            int length;
            while ( ( length = in.read( buffer ) ) != -1 )
                out.write( buffer, 0, length );
            in.close();
            out.close();
        }
        catch ( IOException e ) {}
    }

    public static void main( String args[] )
    {
        Copy copy = new Copy();
    }
}

[ May 23, 2002: Message edited by: Dirk Schreckmann ]

16 years ago, in I/O and Streams

Validation problem, please help??
Hey guys, many thanks for your help, but I still get errors when I compile. I have tried what you guys said but it's not working. Also, how am I able to retrieve the password once I have got the firstname, lastname and id? If you read my question from the project, it mentions that the password is created by joining the firstname, id and lastname. Can you guys help me here again please? Here is my code so far.
import java.io.*;
import java.lang.*;

public class UserValidation
{
    // declare variables
    private int id = 0;
    private String firstname = "";
    private String lastname = "";
    private String email = "";
    private String password = "";
    private BufferedReader input =
        new BufferedReader( new InputStreamReader( System.in ) ); // handles console input

    public UserValidation( int id, String firstname, String lastname,
                           String email, String password )
    {
        this.id = id;
        this.firstname = firstname;
        this.lastname = lastname;
        this.email = email;
        this.password = password;
    }

    // you can throw NumberFormatException to verify if the user entered an integer
    // parseInt throws a NumberFormatException if the argument is not valid
    public int getId() throws NumberFormatException
    {
        System.out.println( "Input UserID:" );
        id = Integer.parseInt( input.readLine() );
        if ( id < 4 || id > 4 ) {
            System.out.println( "Id must be a number! please try again." );
        }
        getId();
        return id;
    }

    public String getfName( String firstname ) throws IOException
    {
        int index;
        int length = firstname.length();
        System.out.println( "Input firstname:" );
        firstname = input.readLine();
        for ( index = 0; index < firstname.length; index++ ) {
            if ( firstname.isLetter( charAt( index ) ) ) {
                System.out.println( "Invalid firstname:" );
            }
        }
        getfName();
        return firstname;
    }

    public String getlName( String lastname )
    {
        int index;
        int length = lastname.length();
        System.out.println( "Input lastname:" );
        lastname = input.readLine();
        for ( index = 0; index < lastname.length; index++ ) {
            if ( lastname.isLetter( CharAt( index ) ) ) {
                System.out.println( "Invalid lastname:" );
            }
        }
        getlName();
        return lastname;
    }

    public String getEmail( String email ) throws IOException
    {
        int index;
        int length = email.length();
        System.out.println( "Enter your EmailAddress please:" );
        email = input.readLine();
        for ( index = 0; index < length; index++ ) {
            if ( email.CharAt( index ) == '@' ) {
                return ( email.substring( index + 1 ) );
            }
            else
                System.out.println( "Invalid Email! please try again." );
            getEmail( email );
        }
        return email;
    }

    /*
    public getPassword() {
    }
    */

    public static void main( String args[] ) throws IOException
    {
        int id;
        String email, firstname, lastname, password;
        UserValidation user = new UserValidation();
        id = user.getId();
        email = user.getEmail( email );
        firstname = user.getfName();
        lastname = user.getlName();
    }
        password = user.getPassword( id, firstname, lastname );
    }

[ May 22, 2002: Message edited by: Dirk Schreckmann ]

16 years ago, in Beginning Java

Validation problem, please help??
Sorry, Dirk, I was using my old login name for the previous reply.

16 years ago, in Beginning Java

Validation problem, please help??
Hi. I have a problem in Java and I'm not quite sure how to solve it; I was wondering if anyone would be able to help me out here. I have started something but am not sure what to do next. Here is the problem:

Problem 1
A program accepts the identification number, first name, last name, and email address of a user. It validates the data as per the following specifications:
- identification number - no blanks, four numerical characters;
- first name - no blanks, all alphabetical characters;
- last name - same as for first name;
- email address - no blanks; only one '@' character; there is at least one word before '@', and at least two words after '@', both words following '@' joined by '.', with no space between the words before and after the '.' character. Assume that there is a maximum of two words following the '@' character, and a maximum of 25 characters in the email address.

If any of the data items is invalid, the program displays an appropriate message to the user and asks for re-entry of that item. When all data items entered are valid, the program creates a password for the user by joining:
- the first two characters of the first name;
- the 2nd and 3rd characters of the identification number;
- the last three characters of the last name;
in the above order. The password is then relayed back to the user.
The user later enters this password, and the program then checks that the user has entered the correct password (as created for him/her), and displays a message to the user accordingly. You do not need to create any special user interface, but you must utilize string and array processing functions. And here is my source code so far; if anyone is helpful enough to fix the program for me, that would be much appreciated. Thanks, kind regards, Atif.

/****** beginning of code ****************/

public class UserValidation
{
    // declare variables
    private int id;
    private String firstname;
    private String lastname;
    private String email;
    private BufferedReader input = new BufferedReader // Handles console input
        ( new InputStreamReader( System.in ) );

    public int getId() throws IOException
    {
        System.out.println( "Input UserID:" );
        id = Integer.parseInt( input.readLine() );
        getId();   // I suspect that these two lines don't belong here,
        }          // the code won't compile this way. -Dirk
        return id;
    }

    public String getfName() {
        return firstname;
    }

    public String getlName() {
        return lastname;
    }

    public String getEmail() {
        return id;
    }

    public void display() {
        System.out.println( " " + getId() + getfName() + getlName() + getEmail() );
    }

    public static void main( String args[] ) {
        UserValidation user = new UserValidation();
        user.display();
    }
}

/***************** end of code *******************/

[I've edited your post, surrounding the code with the proper ubb tags in order to preserve formatting and make the code easier to understand. -Dirk]

[ May 21, 2002: Message edited by: Dirk Schreckmann ]

16 years ago, in Beginning Java
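The password rule in the assignment above (first two characters of the first name, 2nd and 3rd characters of the id, last three characters of the last name) can be sketched with plain String methods. This is a hypothetical helper of my own, not the poster's code, and it assumes the inputs are already validated (firstname at least 2 chars, id at least 3, lastname at least 3):

```java
public class PasswordBuilder {
    // Builds the password described in the assignment:
    // first two chars of first name + 2nd and 3rd chars of the id
    // + last three chars of the last name, joined in that order.
    static String makePassword(String firstname, String id, String lastname) {
        return firstname.substring(0, 2)              // first two characters
             + id.substring(1, 3)                     // 2nd and 3rd characters
             + lastname.substring(lastname.length() - 3); // last three characters
    }

    public static void main(String[] args) {
        // "At" + "23" + "bas"
        System.out.println(makePassword("Atif", "1234", "Abbas")); // At23bas
    }
}
```

Checking the user's later entry is then a plain `equals` comparison against the generated string.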
https://coderanch.com/u/30133/atif-abbas
fflush(3p) [posix man page]

FFLUSH(3P)                 POSIX Programmer's Manual                FFLUSH(3P)

PROLOG
       This manual page is part of the POSIX Programmer's Manual. The Linux
       implementation of this interface may differ (consult the corresponding
       Linux manual page for details of Linux behavior), or the interface may
       not be implemented on Linux.

NAME
       fflush -- flush a stream

SYNOPSIS
       #include <stdio.h>

       int fflush(FILE *stream);

DESCRIPTION
       The functionality described on this reference page is aligned with the
       ISO C standard. Any conflict between the requirements described here
       and the ISO C standard is unintentional. This volume of POSIX.1-2008
       defers to the ISO C standard.

       If stream points to an output stream or an update stream in which the
       most recent operation was not input, fflush() shall cause any
       unwritten data for that stream to be written to the file.

RETURN VALUE
       Upon successful completion, fflush() shall return 0; otherwise, it
       shall set the error indicator for the stream, return EOF, and set
       errno to indicate the error.

ERRORS
       The fflush() function shall fail if:

       EAGAIN The O_NONBLOCK flag is set for the file descriptor underlying
              stream and the thread would be delayed in the write operation.

       EBADF  The file descriptor underlying stream is not valid.

       EFBIG  An attempt was made to write a file that exceeds the maximum
              file size.

       EFBIG  An attempt was made to write a file that exceeds the file size
              limit of the process.

       The fflush() function may fail if:

       ENXIO  A request was made of a nonexistent device, or the request was
              outside the capabilities of the device.

       The following sections are informative.

EXAMPLES
   Sending Prompts to Standard Output

APPLICATION USAGE
       None.

RATIONALE
       Data buffered by the system may make determining the validity of the
       position of the current file descriptor impractical. Thus, enforcing
       the repositioning of the file descriptor after fflush() on streams
       open for read() is not mandated by POSIX.1-2008.

FUTURE DIRECTIONS
       None.

SEE ALSO
       Section 2.5, Standard I/O Streams, fmemopen(), getrlimit(),
       open_memstream(), ulimit()

       The Base Definitions volume of POSIX.1-2008, <stdio.h>
https://www.unix.com/man-page/posix/3P/fflush/
Definitely sounds like a good plan for goplay thumbnails to me :) Thanks for sharing this command :)

On Tue, May 13, 2008 at 10:35 AM, Miriam Ruiz <little.miry@gmail.com> wrote:
> Hi,
>
> Many games use .png files for their graphics. Usually these files are
> not very optimized, and they can be reduced in size through optipng
> without losing any quality at all. Calling optipng, especially with -o
> 7, can be quite costly; at least it takes more time on my computer
> than compiling the game does, but it is only done once, for the
> arch-independent packages, and will usually be done just on the
> uploader's computer, as all the pbuilders will just compile the
> arch-dependent packages, so no extra cost for pbuilders will probably
> be added. The result is smaller arch-independent files that, even
> though not extremely smaller (about 8% for me), might be worth the
> effort. Any thoughts about it?
>
> Figures:
>
> Original:
>
> 5584 themes/
> 364 menuimg/
>
> After optipng -o 7:
>
> 5180 debian/chapping-data/usr/share/games/chapping/themes/
> 328 debian/chapping-data/usr/share/games/chapping/menuimg/
>
> As a side question, I wonder if that optimization should be disabled
> if noopt is used:
>
> ifeq (,$(findstring noopt,$(DEB_BUILD_OPTIONS)))
>     for i in `find $(CURDIR)/debian/chapping-data -name "*.png"`; \
>         do echo "Optimizing image $$i"; optipng -o 7 -q "$$i"; done
> endif
>
> Any thoughts about it?
>
> Miry

--
Ivan Vučica -- Croatia --
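The `noopt` guard in the quoted rules fragment can be mimicked in plain POSIX shell. This is a sketch of the same check (the `should_optimize` helper is a name of my own, not part of any Debian tooling), mirroring the Makefile's `$(findstring noopt,$(DEB_BUILD_OPTIONS))` test:

```shell
#!/bin/sh
# Succeed unless 'noopt' appears in the build options string,
# mirroring: ifeq (,$(findstring noopt,$(DEB_BUILD_OPTIONS)))
should_optimize() {
    case "$1" in
        *noopt*) return 1 ;;
        *)       return 0 ;;
    esac
}

should_optimize "parallel=4"       && echo "would run optipng -o 7"
should_optimize "noopt parallel=4" || echo "skipping png optimization"
```

Running it prints one line for each case, confirming that only builds without `noopt` would invoke optipng.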
https://lists.debian.org/debian-devel-games/2008/05/msg00079.html
/*	$NetBSD: SYS.h,v 1.14.10.1 2014/08/20 00:02:12 tls Exp $	*/

/*
 *	from: SYS.h	8.1 (Berkeley) 6/4/93
 *	from: Header: SYS.h,v 1.2 92/07/03 18:57:00 torek Exp
 */

#include <machine/asm.h>
#include <sys/syscall.h>
#include <machine/trap.h>

#ifdef __STDC__
#define	_CAT(x,y)	x##y
#else
#define	_CAT(x,y)	x/**/y
#endif

/*
 * ERROR branches to cerror. This is done with a macro so that I can
 * change it to be position independent later, if need be.
 */
#if __PIC__ - 0 >= 2
#define	JUMP(name) \
	PIC_PROLOGUE(%g1,%g5); \
	sethi %hi(_C_LABEL(name)),%g5; \
	or %g5,%lo(_C_LABEL(name)),%g5; \
	ldx [%g1+%g5],%g5; \
	jmp %g5; \
	nop
#elif __PIC__ - 0 >= 1
#define	JUMP(name) \
	PIC_PROLOGUE(%g1,%g5); \
	ldx [%g1+_C_LABEL(name)],%g5; jmp %g5; nop
#else
#define	JUMP(name)	set _C_LABEL(name),%g1; jmp %g1; nop
#endif

#define	ERROR()	JUMP(__cerror)

/*
 * SYSCALL is used when further action must be taken before returning.
 * Note that it adds a `nop' over what we could do, if we only knew what
 * came at label 1....
 */
#define	_SYSCALL(x,y) \
	ENTRY(x); mov _CAT(SYS_,y),%g1; t ST_SYSCALL; bcc 1f; nop; ERROR(); 1:

#define	SYSCALL(x) \
	_SYSCALL(x,x)

/*
 * RSYSCALL is used when the system call should just return. Here
 * we use the SYSCALL_G5RFLAG to put the `success' return address in %g5
 * and avoid a branch.
 */
#define	RSYSCALL(x) \
	ENTRY(x); mov (_CAT(SYS_,x))|SYSCALL_G5RFLAG,%g1; add %o7,8,%g5; \
	t ST_SYSCALL; ERROR()

/*
 * PSEUDO(x,y) is like RSYSCALL(y) except that the name is x.
 */
#define	PSEUDO(x,y) \
	ENTRY(x); mov (_CAT(SYS_,y))|SYSCALL_G5RFLAG,%g1; add %o7,8,%g5; \
	t ST_SYSCALL; ERROR()

/*
 * WSYSCALL(weak,strong) is like RSYSCALL(weak), except that weak is
 * a weak internal alias for the strong symbol.
 */
#define	WSYSCALL(weak,strong) \
	WEAK_ALIAS(weak,strong); \
	PSEUDO(strong,weak)

/*
 * SYSCALL_NOERROR is like SYSCALL, except it's used for syscalls
 * that never fail.
 *
 * XXX - This should be optimized.
 */
#define	SYSCALL_NOERROR(x) \
	ENTRY(x); mov _CAT(SYS_,x),%g1; t ST_SYSCALL

/*
 * RSYSCALL_NOERROR is like RSYSCALL, except it's used for syscalls
 * that never fail.
 *
 * XXX - This should be optimized.
 */
#define	RSYSCALL_NOERROR(x) \
	ENTRY(x); mov (_CAT(SYS_,x))|SYSCALL_G5RFLAG,%g1; add %o7,8,%g5; \
	t ST_SYSCALL

/*
 * PSEUDO_NOERROR(x,y) is like RSYSCALL_NOERROR(y) except that the name is x.
 */
#define	PSEUDO_NOERROR(x,y) \
	ENTRY(x); mov (_CAT(SYS_,y))|SYSCALL_G5RFLAG,%g1; add %o7,8,%g5; \
	t ST_SYSCALL

	.globl	_C_LABEL(__cerror)
http://cvsweb.netbsd.org/bsdweb.cgi/src/lib/libc/arch/sparc64/SYS.h?rev=1.14.10.1&content-type=text/x-cvsweb-markup&sortby=log&only_with_tag=tls-maxphys
BBC micro:bit Buzzer With MicroPython Introduction You will need a buzzer or speaker to use the examples on this page. aligator clips. Connect the pin that is labelled - to GND. Programming - Pitch You can set the pitch of the buzzer sound with the music.pitch() method. This example uses that statement two ways. In the first example, the frequency 440hz is played for 1000 milliseconds. After waiting a second, the same tone is played with the length set to -1. This plays until it is stopped. You can't tell the difference when listening to the output from this program. from microbit import * import music while True: music.pitch(440,1000) sleep(1000) music.pitch(440,-1) sleep(1000) music.stop() sleep(1000) You can take advantage of the continuous tone to make a not-so musical instrument using the accelerometer readings to determine the pitch of the note being played. Pressing the A button controls whether or not a note is played. from microbit import * import music while True: if button_a.is_pressed(): music.pitch(accelerometer.get_x(),-1) else: music.stop() sleep(10) Programming - Built-In Tunes MicroPython has some built-in constants that can be played whenever you need. This program plays them all for you, one at a time, so that you can enjoy them at your leisure.) Study the music.play() statement in the following program. The argument that is set to false makes the tune play in the background. Playing the tune does not block anything else that we want to do at the same time. The last argument, set to true, makes the tune play in a loop. from microbit import * import music] music.play(music.PYTHON, pin0, False, True) display.show(built_in_images, delay=1000,loop=True) The display statement has a delay set that doesn't match the tempo of this tune. In the next section, you will come across some statements that might help you work out how to match synchronise the two things better than you can by trial and improvement. 
Programming - Playing Melodies

In order to make your own music with MicroPython, you first need to get your head around its musical notation, the way it represents musical information.

Tempo

The length of individual notes is calculated from the tempo you set. The tempo is a number of beats per minute (bpm). In order to divide a beat into notes, we use a value called ticks. By default, it takes 4 of these to make a beat.

You define a note with a letter and an optional sharp symbol, followed by a number to indicate its octave. Middle C is octave 4. Then you write a colon, followed by the number of ticks the note should be played for. You can use an R to indicate a rest. For example,

C4:4

This means the note C from octave 4, or middle C, played for 4 ticks.

The following program plays you a tune, specified using this notation. After the first note, you can leave out the octave or duration until you need to change it. This tune was converted from code for a different microcontroller, so it was easier to leave the durations in.

There is also a music.get_tempo() function that will return for you a tuple consisting of the bpm and ticks.

Challenges

- Improve the way that the accelerometer and buzzer combination works. Try to work out how to make the negative values result in sounds too. You might also want to map the values on to a smaller range of frequencies, preferably actual notes.
- Use the LED Matrix to make an animation with sound effects. Use the built-in tunes to give you ideas. Start by trying to create images to illustrate one of the tunes of your choice. Then connect a few of them together.
- Use some of the built-in tunes or your own compositions as background or other sounds for a game you have made.
- Make your micro:bit play scales to you according to your pre-programmed or in-program choices.
- Use sheet music or your own composition to play the tune of your choice.
- Turn your micro:bit into a metronome. Allow the user to choose a value for the bpm and then think about a ticking animation to go along with what you are doing. Think carefully and you can time your animation to go with the tempo of the beats.
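For the first challenge, a starting point for mapping accelerometer readings onto actual notes is the standard equal-temperament formula. This sketch is plain Python so it can be tried off the micro:bit; the note range chosen and the helper names are illustrative, not part of the micro:bit API:

```python
# Equal-temperament frequency of a MIDI-style note number (A4 = note 69 = 440Hz).
def note_freq(note):
    return 440 * 2 ** ((note - 69) / 12)

# Map a reading in the accelerometer's rough range [-1024, 1024]
# onto a note between C4 (60) and C6 (84), then return its frequency.
def reading_to_freq(reading, lo_note=60, hi_note=84):
    reading = max(-1024, min(1024, reading))
    span = hi_note - lo_note
    note = lo_note + round((reading + 1024) * span / 2048)
    return round(note_freq(note))

print(reading_to_freq(0))  # 523 (C5, the middle of the range)
```

On the micro:bit you would feed accelerometer.get_x() into reading_to_freq and pass the result to music.pitch.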
http://www.multiwingspan.co.uk/micro.php?page=pybuzz
perltutorial InfiniteSilence This is a tutorial for calling a .NET assembly via Win32::OLE. I was looking for such a tutorial on the web and previously at PerlMonks but didn't find anything, so I elected to research the issue, get a quick example working, and submit it. <P> Basically what you want to be able to do is: <code> perl -e "use Win32::OLE; $hl = new Win32::OLE('HelloDot.HelloWorld') or die $!; $hl->SayHello();" </code> Where <b>HelloDot.HelloWorld</b> is a .NET assembly registered in the system to look/act/smell like a regular COM component. The following are preliminary setup steps that do not require explanation: <P> <li>Step 1: You will have to install both the .NET Framework and the .NET SDK on your machine (type .net framework/sdk download in Google) <li>Step 2: The installer does not set your path, so you will have to add the following to be able to reach gacutil.exe, regasm.exe, csc.exe (my cheezy example was written in C#), etc.: C:\Program Files\Microsoft.NET\SDK\v1.1\Bin\;C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322\ <P> After mucking with your PATH, you should create a key file like this: <code> sn -k personality.snk </code> This creates a 'key pair' file which contains a public <i>and</i> a private key for entering the .dll we are about to create into the GAC, the Global Assembly Cache. Think of it as a Windows registry for .NET assemblies. There is a way to separate the public key part from the private one (using sn -p) in case you are interested. (Note: I'm not sure why this thing didn't offer to allow me a choice of ciphers or allow me to choose my own entropy engine. I remember reading that it is using SHA1 or something, but I don't know.) 
<P>
Next, create a file called hello.cs containing the following code:
<code>
using System;
using System.Reflection;

[assembly:AssemblyVersion("1.0.0.0")]
[assembly:AssemblyKeyFile("personality.snk")]

namespace HelloDot {
    public class HelloWorld {
        public HelloWorld(){
            //no param constructor for COM
        }
        public void SayHello(){
            Console.WriteLine("Hello from DOT NET");
        }
    }
}
</code>
You need a constructor with no params for the .NET assembly to be called like a regular COM component (as with Win32::OLE). There is some kind of wrapper that M$soft creates to allow COM components to interact with .NET assemblies and the constructor is necessary to make that happen. If you are interested in the why, more information on that is <a href="">here</a>. The only problem with this is that a third party .dll is probably not going to have this constructor, so at that point you will have to create a wrapper class that <i>does</i> have it before you try this.
<P>
Anyway, you need to compile this into a .dll like so:
<code>
csc /t:library /out:hellodot.dll hello.cs
</code>
If you go back to the hello.cs code, you will notice the funny lines before the namespace that stand out like a sore thumb: <b>assembly:AssemblyVersion...yadda,yadda</b>. These instructions tell the compiler that you are creating an assembly you are intending to share. If you remove the lines and recompile you will notice that the .dll becomes a little smaller. Remember to put them back in and recompile before you continue this tutorial.
<P>
If the .dll compiled okay you now have to register it:
<code>
gacutil /i hellodot.dll
</code>
There is one more thing to do. Your .NET assembly still needs to look and act like a regular COM component. Normal COM components (e.g. Excel.Application, Microsoft.XML, etc.) are registered with regsvr32. This puts their GUIDs in the Windows Registry (see HKEY_LOCAL_MACHINE/Software/Classes for the ones you've got). Not so in this case.
You need to use regasm.exe in order to 'register' your .NET .dll to be usable by everybody outside of the .NET camp:
<code>
regasm hellodot.dll
</code>
With all of this being done, you should be able to call the following:
<code>
perl -e "use Win32::OLE; $hl = new Win32::OLE('HelloDot.HelloWorld') or die $!; $hl->SayHello();"
</code>
<P>
If everything worked, you should see the greeting from the C# code: Hello from DOT NET.
<P>
<B>Summary:</B>
<br>
<BR>
Are you asking yourself, "why is any of this important?" Well, it seems that the only other way to get Perl on Windows to interact with .NET is to pony up cash for Visual Studio.NET and <a href="">Visual Perl</a>. The above tutorial offers a (relatively) quick and (depending on your threshold) painless way to interoperate with .NET components.
http://www.perlmonks.org/?displaytype=xml;node_id=392275
Hi guys I need help creating an ordered list of objects. I am supposed to create a file of name and phone number pairs, such that each name is on a separate line and each phone number is on a separate line. Then I am to edit the PhoneTest and Phone files to
-create the phone list and print it
-print the ordered list of phone numbers.

Here is the class Phone:

public class Phone implements Comparable<Phone> {

    private String name;
    private String phone;

    public Phone() {
        name = "";
        phone = "";
    }

    public Phone(String name, String phone) {
        this.name = name;
        this.phone = phone;
    }

    public String getName() {
        return name;
    }

    public String getPhone() {
        return phone;
    }

    public void setName(String name) {
        this.name = name;
    }

    public void setPhone(String phone) {
        this.phone = phone;
    }

    public String toString() {
        return (name + " " + phone);
    }

    public boolean equals(Phone other) {
        return (name == other.name) && (phone == other.phone);
    }

    @Override
    public int compareTo(Phone other) {
        int compare = other.getPhone().compareTo(name);
        if (compare != 0)
            return compare;
        else {
            if (other.getName().compareTo(name) < 0)
                return -1;
            if (other.getName().compareTo(name) > 0)
                return 1;
        }
        return 0;
    }
}

Here is the class PhoneTest:

public class PhoneTest {

    public static void main(String[] args) throws Exception {
        // get the filename from the user
        BufferedReader keyboard = new BufferedReader(new InputStreamReader(System.in), 1);
        System.out.println("Enter name of the input file: ");
        String fileName = keyboard.readLine();

        // create object that controls file reading and opens the file
        InStringFile reader = new InStringFile(fileName);
        System.out.println("\nReading from file " + fileName + "\n");

        // your code to create (empty) ordered list here
        ArrayOrderedList<Phone> list = new ArrayOrderedList<Phone>();

        // read data from file two lines at a time (name and phone number)
        String name, phone;
        do {
            name = (reader.read());
            phone = (reader.read());
            // your code to add the entry to your ordered list here
            Phone phone2 = new Phone(name, phone);
            list.add(phone2);
        } while (!reader.endOfFile());

        System.out.println("Here is my phone book:");
        // your code to print the ordered list here
        System.out.println(list);

        // close file
        reader.close();
        System.out.println("\nFile " + fileName + " is closed.");
    }
}

So how can I change the print portion so I am getting the list in alphabetical order, top down? My file looks like this:

Claire hfjg 555-222-1111
Bob abc 222-900-1818
Don cbcd 122-564-9653

I get this when I run my code:

Don cbcd 122-564-9653
Claire hfjg 555-222-1111
Bob abc 222-900-1818

How can I get the list in alphabetical order so it looks more like this:

Bob abc 222-900-1818
Claire hfjg 555-222-1111
Don cbcd 122-564-9653
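The compareTo above compares the other entry's phone number against this entry's name, which is why the order comes out scrambled. A sketch of a compareTo that orders entries alphabetically by name (falling back to the number when names tie) might look like this — the class is trimmed to the relevant parts:

```java
public class Phone implements Comparable<Phone> {

    private final String name;
    private final String phone;

    public Phone(String name, String phone) {
        this.name = name;
        this.phone = phone;
    }

    public String getName()  { return name;  }
    public String getPhone() { return phone; }

    @Override
    public String toString() { return name + " " + phone; }

    // Order alphabetically by name; fall back to the number when names match.
    @Override
    public int compareTo(Phone other) {
        int byName = name.compareTo(other.name);
        if (byName != 0) {
            return byName;
        }
        return phone.compareTo(other.phone);
    }
}
```

With this ordering, an ordered list built from your file prints Bob, Claire, Don. Note also that the original equals compares Strings with ==; name.equals(other.name) is what you want there.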
https://www.daniweb.com/programming/software-development/threads/493754/compare
Scala match tuples helps cut down on unwieldy nested ifs

Perhaps you have seen professionals and amateurs alike write code like this:

if (myVar == constA) {
    actionA();
} else if (myVar == constB) {
    actionB();
} else if (myVar == constC) {
    actionC();
} else if ... // and so on and so forth
}

Reasonably you expect the professionals to know better and use a switch-case statement, like so:

switch (myVar) {
    case constA:
        actionA();
        break;
    case constB:
        actionB();
        break;
    case constC:
        actionC();
        break;
    case ... // and so on and so forth
        break;
    default:
        throw someError;
}

You might find it more plausible that professionals forget about the “fall-through” behavior of the switch-case statement and neglect to put in a break where it is needed.

Scala doesn’t have switch-case, but it has something that can look very similar, the match statement. For example:

myVar match {
    case constA => actionA
    case constB => actionB
    case constC => actionC
    ... // etc.
    case _ => throw new Exception("Unexpected case")
}

No break command needed to prevent case constA from falling through to case constB. The default case is labeled case _ rather than default, but that’s a superficial difference, as is the use of => instead of :, or the lack of semicolons. Though you can use semicolons if you want. Don’t use break though, that has another purpose in Scala.

Here is a more concrete example, using the infamous FizzBuzz. This example still doesn’t even begin to hint at the power of Scala match expressions:

def fizzBuzz(n: Int): Any = (n % 15) match {
  case 0 => "FizzBuzz"
  case 3 | 6 | 9 | 12 => "Fizz"
  case 5 | 10 => "Buzz"
  case _ => n
}

(1 to 100).map(fizzBuzz(_))

If you don’t feel like typing this into a file, compiling it and executing it, you can just run it in Scastie, the online Scala REPL. This is already much cleaner and neater than how you’d do it in Java. And it looks more deliberate, too, than a Java switch-case statement, where code reviewers might wonder if you left out a break on purpose.
This probably won’t compile to tableswitch or lookupswitch in the Java Virtual Machine, but I sincerely doubt you will notice any performance penalty even if a performance penalty does exist. It is also cleaner than in JavaScript, which is not really based on Java but does have switch-case statements pretty much like in Java. And in Scala, Any gives you the flexibility that you get in JavaScript without the nightmares of unexpected type inference.

But in Scala we can do FizzBuzz even better, by matching to a pair of moduli rather than a single modulus. In Scala, you can create a tuple any time you want with very little fuss.

def fizzBuzz(n: Int): Any = (n % 3, n % 5) match {
  case (0, 0) => "FizzBuzz"
  case (0, _) => "Fizz"
  case (_, 0) => "Buzz"
  case _ => n
}

(1 to 100).map(fizzBuzz(_))

I’m also providing a Scastie link for this one. There are fifteen possibilities for the tuple (n % 3, n % 5), just as there are fifteen possibilities for the single n % 15. But with the tuple, we can express the possibilities more concisely: instead of, say, 3 or 6 or 9 or 12 but not 15, we want (0, m), where m may be 1, 2, 3 or 4 but not 0, and we don’t care which.

You do have to take care to put the narrower cases before the more general cases. If, for example, you put (0, m) before (0, 0), the function will fail to match 15, 30, 45, 60, … to “FizzBuzz” and will match those to just “Fizz”.

The elements of a tuple don’t all have to be the same type, and this is what can really help you cut down on potentially confusing nested if statements. For example, if a flag is true, a number is one of two or three specific cases, and a String converted to lowercase is one of four specific cases, the program should take such and such action. But if the number is one of a couple other specific cases, regardless of the String, the program should take a slightly different action.
And if the flag is false but the number is a particular value and the String in lowercase also meets a certain criterion, the program should also take the slightly different action. Any other combination of values should trigger one of two specific exceptions.

How would you program that in Java? You would probably write nested if statements, at least three levels deep. Or you could create some kind of hash code and then sort through the relevant cases in a switch-case statement. In Scala, you could just create a triple and then work out the possible cases, taking care to place the narrowest cases before the more general cases.

I know this example sounds rather contrived and unlikely to occur in an actual programming situation (as opposed to a programming exercise). So I will now present a concrete example from a program I’ve been working on for months. To understand this example, you don’t really need to know about domains of algebraic integers. Though if you don’t know but want to know, you can check out my Medium article on the topic.

Is 7 divisible by 3 + √2? In a domain that contains both of those numbers, yes, 7 is divisible by 3 + √2. Is 7 divisible by √2? No, it’s not, because 7√2/2 is not an algebraic integer in any domain. In my Java program, both numbers would have to be represented as instances of the RealQuadraticInteger class. My program would give you the answer that 7 divided by 3 + √2 is 3 − √2, with the understanding that 7 is a shorthand for 7 + 0√2. But 7 could also be 7 + 0√3, 7 + 0√5, etc. It could also be something like 7 + 0√−19, represented by an instance of the ImaginaryQuadraticInteger class. But I would expect my program to give the right result without troubling you too much about how exactly 7 is represented. I won’t quote the divides() function here, it would really swell up this article’s word count.
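To make the contrived flag/number/String scenario above concrete, here is a sketch of the triple match. This is a hedged illustration only — the specific values, strings and action names are invented, not taken from any real program:

```scala
def dispatch(flag: Boolean, n: Int, s: String): String =
  (flag, n, s.toLowerCase) match {
    case (true, 2 | 3 | 5, "spam" | "ham" | "eggs" | "toast") => "suchAndSuchAction"
    case (true, 7 | 11, _)                                    => "slightlyDifferentAction"
    case (false, 13, t) if t.startsWith("q")                  => "slightlyDifferentAction"
    case (_, m, _) if m < 0 => throw new IllegalArgumentException("negative number")
    case _                  => throw new RuntimeException("unexpected combination")
  }
```

One match expression, no nesting: each line reads off directly as one of the rules in the prose description above.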
Go to the GitHub source listing, scroll down to the “ public QuadraticInteger divides(QuadraticInteger divisor) throws NotDivisibleException” line. That’s line 893 in the current version. It goes on to line 996. Some of the length is due to my not yet taking full advantage of the Fraction class in that version, but some of the length is due to all the if statements, some nested four levels deep. I will continue to develop the Java version of my program, but I will also work on it in Scala. I could dump Java source into Scala files in IntelliJ through cut and paste from a plaintext editor, and let IntelliJ translate it to Scala for me. But I imagine that might not always produce the most idiomatic Scala. And also, through a more rigorous application of test-driven development, I intend to produce a version of the program that is more elegant and perhaps at times more efficient. Now follows a quotation of what I’ve got so far for the equivalent to QuadraticInteger.divides() in my Scala program. It is not a full quotation, but it is close to full. 
@throws(classOf[NotDivisibleException])
def /(divisor: QuadInt): QuadInt = if (this.ring == divisor.ring) {
  if (divisor.norm == 0) {
    throw new IllegalArgumentException("Division by zero")
  }
  val dividendRegFract = new Fraction(this.regPart, this.denominator)
  val dividendSurdFract = new Fraction(this.surdPart, this.denominator)
  val divisorRegFract = new Fraction(divisor.regPart, divisor.denominator)
  val divisorSurdFract = new Fraction(divisor.surdPart, divisor.denominator)
  val quotientRegFract = (dividendRegFract * divisorRegFract
    - this.ring.radicand * dividendSurdFract * divisorSurdFract)/divisor.norm
  val quotientSurdFract = (dividendSurdFract * divisorRegFract
    - dividendRegFract * divisorSurdFract)/divisor.norm
  val notDivisibleFlag = (this.ring.hasHalfIntegers,
                          quotientRegFract.denominator,
                          quotientSurdFract.denominator) match {
    case (_, 1, 1) => false
    case (true, 2, 2) => false
    case _ => true
  }
  if (notDivisibleFlag) {
    throw new NotDivisibleException("Not divisible", this, divisor, fractArray, this.ring)
  }
  QuadInt(quotientRegFract.numerator.toInt, quotientSurdFract.numerator.toInt,
          this.ring, quotientRegFract.denominator.toInt)
} else {
  throw new AlgebraicDegreeOverflowException
}
scala> val ring = new algebraics.quadratics.RealQuadRing(2)
ring: algebraics.quadratics.RealQuadRing = Z[sqrt(2)]

scala> val ramifier = new algebraics.quadratics.RealQuadInt(0, 1, ring)
ramifier: algebraics.quadratics.RealQuadInt = sqrt(2)

scala> var numberA = new algebraics.quadratics.RealQuadInt(7, 0, ring)
numberA: algebraics.quadratics.RealQuadInt = 7

scala> var numberB = new algebraics.quadratics.RealQuadInt(3, 1, ring)
numberB: algebraics.quadratics.RealQuadInt = 3 + sqrt(2)

scala> numberA / numberB
res5: algebraics.quadratics.QuadInt = 3 - sqrt(2)

scala> numberA / ramifier
algebraics.NotDivisibleException: 7 divided by sqrt(2) is 0 + 7/2 * sqrt(2), which is not an algebraic integer
  at algebraics.quadratics.QuadInt.$div(QuadInt.scala:193)
  ... 28 elided

scala>

In the numberA / numberB example that produced res5, the match triple was (false, 1, 1), so it matched case (_, 1, 1). By contrast, numberA / ramifier triggered NotDivisibleException because (false, 1, 2) matched neither case (_, 1, 1) nor case (true, 2, 2) (note that Fraction represents 0 as 0/1 internally, though 0/2 would be valid from a mathematical point of view). Hence notDivisibleFlag was set to true and so $div() decided to throw NotDivisibleException.

None of this is on GitHub yet; QuadInt.scala is still very incomplete and not passing all the tests. When I get around to it, it’ll be in the alg-int-calc-scala/algebraics/ folder.

Scala matches can also be used to match types, but that’s a topic that will have to wait for another day, if it hasn’t been covered by others here on Medium (though outside of Medium, Alvin Alexander probably has some very good coverage of it). It is of course possible to overdo tuple matching to the point of ridiculousness. Functional programming can be used to clarify just as it can be used to obfuscate, and it’s up to the programmer to choose wisely.
And it remains important to test your program (preferably through a non-dogmatic application of test-driven development) to make sure it gives the right results. Nonetheless, the ability to match tuples in Scala is definitely something you should keep in mind if you program in Scala.
https://alonso-delarte.medium.com/scala-match-tuples-helps-cut-down-on-unwieldy-nested-ifs-49108eadc589
Hashtables are used to store key-value pairs. Java Hashtable is now integrated into the collection framework in Oracle Java. It is a member of the java.util package. A hash table uses a hashing technique to store these key-value pairs: it computes an index from each key whenever an element is inserted, looked up or deleted. It uses an array data structure to store the data retrieved from the user. Each list in a hash table is called a 'bucket'. The position of any bucket can be retrieved by using the hashCode() method. The Hashtable class in Java contains only unique keys.

Using Java Hashtable

The following statement can be used to import Hashtable in a program:

import java.util.Hashtable;

import java.util.*; can also be used, to import all the classes of the java.util package.

To store and retrieve objects successfully from a Hashtable, the objects used as keys must implement the hashCode method correctly. The Hashtable stores data just like an associative array. The following snippet shows the general declaration of the Hashtable class:

public class Hashtable<K,V>
    extends Dictionary<K,V>
    implements Map<K,V>, Cloneable, Serializable

Here, in the above format, K stands for the key of the Hashtable and V is the value associated with that key that will be mapped. The following line shows how to declare a new Hashtable:

Hashtable<String, Integer> numbers = new Hashtable<String, Integer>();

Here is an example showing the basic use of a Hashtable in Java to input and get data. Iterators from the collections classes can also be used to retrieve data.

import java.util.*;

class HashExample {
    public static void main(String args[]) {
        Hashtable<String, Integer> data = new Hashtable<String, Integer>();
        data.put("CAT", 101);
        data.put("DOG", 102);
        data.put("MAT", 103);
        data.put("FAT", 104);
        // we can use the get() method to retrieve data from the Hashtable,
        // passing the key to the get method
        Integer n = data.get("FAT");
        System.out.println(n);
    }
}

The data added to a Hashtable can be removed by calling the remove method on the Hashtable object and passing the key of the element. In the example below, the htable.remove(20) call removes a record:

import java.util.*;

class HashExample {
    public static void main(String args[]) {
        Hashtable<Integer, String> htable = new Hashtable<Integer, String>();
        htable.put(10, "Samtam");
        htable.put(20, "KimKoo");
        htable.put(30, "IroDov");
        htable.put(40, "AlePlex");
        htable.put(50, "OmiVat");

        System.out.println("Before Deletion");
        for (Map.Entry h : htable.entrySet()) {
            System.out.println(h.getKey() + " " + h.getValue());
        }

        htable.remove(20);

        System.out.println("\nAfter Deletion");
        for (Map.Entry h : htable.entrySet()) {
            System.out.println(h.getKey() + " " + h.getValue());
        }
    }
}

Java Hashtable and HashMap are both used to store data in key-value pair format, but they differ on some structural parameters. Documentation of the various methods of the Java Hashtable class is available at this link.
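One difference between Hashtable and HashMap is easy to demonstrate: Hashtable rejects null keys and values with a NullPointerException, while HashMap permits a null key. A small sketch (class name is just for illustration):

```java
import java.util.HashMap;
import java.util.Hashtable;
import java.util.Map;

public class NullKeyDemo {
    public static void main(String[] args) {
        Map<String, Integer> hashMap = new HashMap<>();
        hashMap.put(null, 1);                    // fine: HashMap allows one null key
        System.out.println(hashMap.get(null));   // prints 1

        Map<String, Integer> table = new Hashtable<>();
        try {
            table.put(null, 1);                  // Hashtable forbids null keys...
        } catch (NullPointerException e) {       // ...so this branch runs
            System.out.println("Hashtable rejected the null key");
        }
    }
}
```

Hashtable methods are also synchronized, which HashMap's are not; that is why HashMap is usually preferred in single-threaded code.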
https://csveda.com/java-hashtable/
Hi, I want to control a servo motor with a Sharp IR sensor. Now, the main issue is that I don't want to pilot the servo with the raw data from the sensor because I need it to be stable; so what I did is create an array and calculate the average. What I also want to do is that if the average value is within ±3 of the previous one, the servo just won't update itself (so that a small change in the values won't affect the servo position). Another thing that I did is that every too big/small value coming in from the sensor will be approximated to the max/min value inside the array before being added to it (to avoid glitches or potential wrong values being considered). Finally, only values in a specific range from the sensor (6 to 80 cm, assuming that the sensor won't work between 0 and 5 cm) should be considered. Everything above 80 should just be ignored, and 80 should be the default incoming signal when nothing interferes with the sensor beam. I'm not sure everything is working properly and I think I might have done too much stuff to accomplish my task, so I can't find the problem.
Any hint appreciated!

#include <Servo.h>
#include <SharpIR.h>

#define SERVO 9
#define SHARP A0

Servo myServo;                       // servo obj
SharpIR sharp(SHARP, 25, 93, 1080);  // using IR library so the distance coming in is in cm

int pos = 0;       // servo position
int val = 0;       // sharp input val
int servoUpd = 0;
float avg = 0;     // average
int myValues[10];  // array of values from the IR

void setup() {
  myServo.attach(SERVO);  // attaches the servo on pin 9 to the servo object
  myServo.write(0);       // init servo
  pinMode(SHARP, INPUT);
  Serial.begin(9600);
  // array init
  int i;
  for (i = 0; i < 10; i++)
    myValues[i] = 0;
}

void loop() {
  int int_avg;
  int i = 0;
  int dis = sharp.distance();  // distance (cm)
  if (dis > 80)                // 80 is the sensor max limit
    dis = 80;

  // calculate average
  for (; i < 10; i++)
    avg += myValues[i] / 10;
  int_avg = (int)avg + 10;  // add avg dist top_head-ear needed for the project,
                            // just adding some more distance to the average value from the IR

  if (dis > int_avg + 3 || dis < int_avg - 3) {  // servo internal limit - update the servo
                                                 // only if dis is in range -+3 from the average
    servoUpd = map(int_avg, 10, 80, 0, 90);
    myServo.write(servoUpd);
    delay(15);
  }

  // get rid of glitches: if the value read from the sensor is too big/small,
  // it will be reduced to the max/min of the array
  if (dis > int_avg + 20) {  // max
    int massimo = 0;
    for (i = 0; i < 10; i++) {
      if (myValues[i] > massimo)
        massimo = myValues[i];
    }
    dis = massimo;
  } else if (dis < int_avg - 20) {  // min
    int minimo = 9999;
    for (i = 0; i < 10; i++) {
      if (myValues[i] < minimo)
        minimo = myValues[i];
    }
    dis = minimo;
  }

  // update the array by shifting its values and adding the last read value from the sensor
  pointer_shift(myValues, 10);
  myValues[9] = dis;
  avg = 0;  // reset the average
  delay(100);
}

// function to shift the values into the array
void pointer_shift(int *a, int n) {
  int i;
  for (i = 0; i != n - 1; i++) {
    *(a + i) = *(a + i + 1);
  }
}

Particularly, I'm a bit lost about where I should put all the 'limits'… thank you!
EDIT: I found that if I eliminate the ±3 range, the servo actually works fine when there are objects in the range.
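Not a full Arduino answer, but the smoothing logic on its own boils down to two small pieces: a running average over the last N readings, and a deadband so the servo only moves when the average shifts by more than a threshold. Here is a hedged C++ sketch of that idea, separate from the servo code — the names are invented, and clamping to the 6-80 cm range happens before a sample enters the buffer:

```cpp
#include <array>
#include <cstdlib>

const int N = 10;        // number of samples to average
const int DEADBAND = 3;  // ignore average changes smaller than this (cm)

std::array<int, N> samples{};  // ring buffer of recent readings (starts at 0)
int head = 0;                  // next slot to overwrite
int lastOutput = 0;            // last average actually sent to the servo

// Clamp a raw reading to the sensor's usable range, then store it.
void addSample(int dis) {
    if (dis < 6)  dis = 6;
    if (dis > 80) dis = 80;
    samples[head] = dis;
    head = (head + 1) % N;
}

// Integer average of the buffer contents.
int average() {
    int sum = 0;
    for (int v : samples) sum += v;
    return sum / N;
}

// Returns true (and updates lastOutput) only when the average has moved
// outside the deadband -- i.e. only when the servo should be rewritten.
bool shouldMoveServo() {
    int avg = average();
    if (std::abs(avg - lastOutput) > DEADBAND) {
        lastOutput = avg;
        return true;
    }
    return false;
}
```

In loop() you would then call addSample(sharp.distance()) and, when shouldMoveServo() returns true, map lastOutput onto the servo angle the way the original code does. Keeping the clamp, the average and the deadband in separate functions makes it much easier to see where each 'limit' belongs.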
https://forum.arduino.cc/t/help-with-simple-algorithm-ir-sensor-and-servo/346037
Time Routines in CSPICE

   February 2, 2004
   18 November 1997 --- Ed Wright

This document describes the routines available in the CSPICE Toolkit for manipulating various representations of time. It is your main source for general information about calendar based and continuous time systems in CSPICE. The Toolkit also supports conversion between spacecraft clock (SCLK) and Barycentric Dynamical Time (TDB). However, spacecraft clock conversion is mentioned only in the context of background information in Appendix A. CSPICE routines dealing with spacecraft clock are discussed in SCLK Required Reading (sclk.req).

This document is intended for all CSPICE users. We touch on only the most important routines in the remainder of this overview.
The principal user-level time routines in CSPICE are:

   furnsh_c ( filename );
   str2et_c ( string, et );
   utc2et_c ( utcstr, et );
   tparse_c ( string, lenout, sp2000, errmsg );
   timout_c ( et, pictur, lenout, output );
   et2utc_c ( et, format, prec, lenout, utcstr );
   etcal_c  ( et, lenout, string );
   unitim_c ( dptime, insys, outsys );
   timdef_c ( action, item, lenout, value );
   tsetyr_c ( year );
   tparch_  ( yesno, yesno_len );
   tpictr_c ( sample, lenout, lenerr, pictur, ok, error );
   et2lst_c ( et, body, lon, type, timlen, ampmlen,
              hr, mn, sc, time, ampm );

The foundation routines and utilities include:

   ttrans_  ( from, to, tvec, from_len, to_len );
   tpartv_  ( string, tvec, ntvec, typ, modify, mods, yabbrv, succes,
              pictur, error, string_len, typ_len, modify_len,
              pictur_len, error_len );
   deltet_  ( epoch, eptype, delta, eptype_len );
   texpyr_  ( year );
   tchckd_  ( yesno, yesno_len );
   jul2gr_  ( year, month, day, doy );
   gr2jul_  ( year, month, day, doy );
   tcheck_  ( tvec, typ, mods, modify, ok, error,
              typ_len, modify_len, error_len );
   b1900_c ();
   b1950_c ();
   j1900_c ();
   j1950_c ();
   j2000_c ();
   j2100_c ();
   jyear_c ();
   spd_c   ();
   tyear_c ();

A leapseconds kernel is loaded with furnsh_c:

   furnsh_c ( "<name of leapseconds kernel>" );

The principal string-to-ET conversion routine is str2et_c (``String to ET''):

   str2et_c ( string, et );

The variety of ways people have developed for representing times is enormous. It is unlikely that any single subroutine can accommodate all of the custom time formats that have arisen in various computing contexts. However, we believe that str2et_c correctly interprets most time formats used throughout the planetary science community. For example, str2et_c supports ISO time formats, UNIX `date` output formats, VMS time formats, MS-DOS formats, epochs in both the A.D. and B.C. eras, time zones, etc.

If you've been using the Toolkit for a while you are probably familiar with the routine utc2et_c:

   utc2et_c ( utcstr, et );

If you are writing new code, we recommend that you use the routine str2et_c. There is no need to upgrade any of your existing code that calls utc2et_c. However, you may want to replace calls to utc2et_c with calls to str2et_c due to the greater flexibility of str2et_c.

For converting an ephemeris time to a formatted output string, the most flexible routine is timout_c:

   timout_c ( et, pictur, lenout, output );

Less flexible, but slightly easier to use, et2utc_c has been the standard CSPICE time formatting routine for many years:

   et2utc_c ( et, format, prec, lenout, utcstr );

You may need to convert between different numeric representations of time such as TDT, TDB, Julian Date, TAI seconds past J2000, etc. The routine unitim_c is available for such conversions:

   unitim_c ( epoch, insys, outsys );

Most CSPICE time routines make use of the information contained in a leapseconds kernel, loaded via:

   furnsh_c ( kernel );

The leapseconds kernel needs to be loaded just once per program run; normally, the leapseconds kernel is loaded in a program's initialization section. The precise contents of the leapseconds kernel are discussed in the section ``Computing Delta ET'' below. Text kernels and the routine furnsh_c are discussed in more detail in KERNEL Required Reading, kernel.req.

The routine et2lst_c converts ephemeris time (ET) to the local solar time for a point at a user specified longitude on the surface of a body. This computation is performed using the bodyfixed location of the sun. Consequently, to use et2lst_c you must first load SPK and PCK files that contain sufficient position and orientation data for the computation of the bodyfixed location of the sun. SPK and binary PCK files are loaded using the routine furnsh_c:

   furnsh_c ( "<spk file name>" );
   furnsh_c ( "<binary pck file name>" );

We normally represent epochs as a combination of a date and time of day.
The simplest means of specifying an epoch as a date and time is to create a string such as:

   SpiceChar  * string = "Oct 1, 1996 09:12:32";

CSPICE contains three routines for converting strings directly to ``seconds past 2000'': str2et_c, utc2et_c, and tparse_c. All three use the ``foundation'' routine tpartv_ to parse the input string; each then interprets the results of tpartv_ to assign meaning to the string. Below are a number of examples of strings and the interpretation assigned to the various components.

ISO (T) Formats.

   String                       Year Mon DOY DOM HR Min Sec
   ---------------------------- ---- --- --- --- -- --- ------
   1996-12-18T12:28:28          1996 Dec na  18  12 28  28
   1986-01-18T12                1986 Jan na  18  12 00  00
   1986-01-18T12:19             1986 Jan na  18  12 19  00
   1986-01-18T12:19:52.18       1986 Jan na  18  12 19  52.18
   1995-08T18:28:12             1995 na  008 na  18 28  12
   1995-18T                     1995 na  018 na  00 00  00

Calendar Formats.

   String                       Year Mon DOM HR Min Sec
   ---------------------------- ---- --- --- -- --- ------
   Tue Aug 6 11:10:57 1996      1996 Aug 06  11 10  57
   1 DEC 1997 12:28:29.192      1997 Dec 01  12 28  29.192
   2/3/1996 17:18:12.002        1996 Feb 03  17 18  12.002
   Mar 2 12:18:17.287 1993      1993 Mar 02  12 18  17.287
   1992 11:18:28 3 Jul          1992 Jul 03  11 18  28
   June 12, 1989 01:21          1989 Jun 12  01 21  00
   1978/3/12 23:28:59.29        1978 Mar 12  23 28  59.29
   17JUN1982 18:28:28           1982 Jun 17  18 28  28
   13:28:28.128 1992 27 Jun     1992 Jun 27  13 28  28.128
   1972 27 jun 12:29            1972 Jun 27  12 29  00
   '93 Jan 23 12:29:47.289      1993* Jan 23 12 29  47.289
   27 Jan 3, 19:12:28.182       2027* Jan 03 19 12  28.182
   23 A.D. APR 4, 18:28:29.29   0023 Apr 04  18 28  29.29
   18 B.C. Jun 3, 12:29:28.291  -017 Jun 03  12 29  28.291
   29 Jun 30 12:29:29.298       2029+ Jun 30 12 29  29.298
   29 Jun '30 12:29:29.298      2030* Jun 29 12 29  29.298

Day of Year Formats.

   String                       Year DOY HR Min Sec
   ---------------------------- ---- --- -- --- ------
   1997-162::12:18:28.827       1997 162 12 18  28.827
   162-1996/12:28:28.287        1996 162 12 28  28.287
   1993-321/12:28:28.287        1993 321 12 28  28.287
   1992 183// 12 18 19          1992 183 12 18  19
   17:28:01.287 1992-272//      1992 272 17 28  01.287
   17:28:01.282 272-1994//      1994 272 17 28  01.282
   '92-271/ 12:28:30.291        1992* 271 12 28  30.291
   92-182/ 18:28:28.281         1992* 182 18 28  28.281
   182-92/ 12:29:29.192         0182+ 092 12 29  29.192
   182-'92/ 12:28:29.182        1992 182 12 28  29.182

Strings containing the marker ``jd'' are interpreted as Julian Dates.

str2et_c is the most flexible of the time transformation routines; it accepts the widest variety of time strings. To illustrate the various features of str2et_c we begin by considering the string:

   1988 June 13, 3:29:48

In the absence of any indicators to the contrary, the default interpretation of this string is to regard the time of day as a time on a 24-hour clock in the UTC time system. The date is a date on the Gregorian Calendar (this is the calendar used in nearly all western societies).

If you add more information to the string, str2et_c can then make a more informed interpretation of the time string. For example:

   1988 June 13, 3:29:48 P.M.
   1988 June 13, 12:29:48 A.M.   ( = 1988 June 13, 00:29:48 )

12:00 A.M. corresponds to Midnight (00:00 on the 24-hour clock). 12:00 P.M. corresponds to Noon (12:00 on the 24-hour clock).

You may add still further indicators to the string. For example:

   1988 June 13, 3:29:48 P.M. PST
   1988 June 13, 23:29:48 UTC

The recognized time zone abbreviations are:

   EST --- Eastern Standard Time  ( UTC-5:00 )
   CST --- Central Standard Time  ( UTC-6:00 )
   MST --- Mountain Standard Time ( UTC-7:00 )
   PST --- Pacific Standard Time  ( UTC-8:00 )
   EDT --- Eastern Daylight Time  ( UTC-4:00 )
   CDT --- Central Daylight Time  ( UTC-5:00 )
   MDT --- Mountain Daylight Time ( UTC-6:00 )
   PDT --- Pacific Daylight Time  ( UTC-7:00 )

To specify some other offset from UTC you need to create an offset label. The label starts with the letters ``UTC'' followed by a ``+'' for time zones east of Greenwich and ``-'' for time zones west of Greenwich. This is followed by the number of hours to add to or subtract from UTC, optionally followed by a colon ``:'' and the number of minutes to add or subtract to get the local time zone. Thus to specify the time zone of Calcutta (which is 5 and 1/2 hours ahead of UTC) you would specify the time zone to be UTC+5:30. To specify the time zone of Newfoundland (which is 3 and 1/2 hours behind UTC) use the offset notation UTC-3:30.

Leapseconds occur at the same time in all time zones. In other words, the seconds component of a time string is the same for any time zone as is the seconds component of UTC. The following are all legitimate ways to represent an epoch of some event that occurred during a leapsecond:

   1995 December 31 23:59:60.5   (UTC)
   1996 January 1,  05:29:60.5   (UTC+5:30 --- Calcutta Time)
   1995 December 31, 20:29:60.5  (UTC-3:30 --- Newfoundland)
   1995 December 31 18:59:60.5   (EST)
   1995 December 31 17:59:60.5   (CST)
   1995 December 31 16:59:60.5   (MST)
   1995 December 31 15:59:60.5   (PST)

In addition to specifying time zones, you may specify that the string be interpreted as a formal calendar representation in either the Barycentric Dynamical Time system (TDB) or the Terrestrial Dynamical Time system (TDT). In these systems there are no leapseconds.

utc2et_c can be thought of as a version of str2et_c that allows a narrower range of inputs. It converts strings in the UTC system to TDB seconds past the J2000 epoch.
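As an aside, the arithmetic behind the ``UTC+h:m'' offset labels described above is easy to sketch outside of CSPICE. The snippet below is plain Python for illustration only; it is not CSPICE code, and the function name is made up:

```python
import re

def parse_utc_offset(label):
    """Parse a label such as 'UTC+5:30' or 'UTC-3:30' into a signed
    offset in minutes from UTC (illustrative only; not a CSPICE API)."""
    m = re.fullmatch(r"UTC([+-])(\d+)(?::(\d+))?", label)
    if m is None:
        raise ValueError("not a UTC offset label: %r" % label)
    sign = 1 if m.group(1) == "+" else -1
    hours = int(m.group(2))
    minutes = int(m.group(3) or 0)
    return sign * (hours * 60 + minutes)

print(parse_utc_offset("UTC+5:30"))   # Calcutta:      330
print(parse_utc_offset("UTC-3:30"))   # Newfoundland: -210
print(parse_utc_offset("UTC-8"))      # PST:          -480
```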
It does not support other time systems or time zones. In addition, utc2et_c does not recognize times on a 12-hour clock: strings such as ``1983 June 13, 9:00:00 A.M.'' are not accepted.

The routine tparse_c can be thought of as a narrow version of str2et_c that allows only TDB as input. tparse_c converts strings on a formal time scale to seconds past the J2000 epoch. tparse_c doesn't ``know'' anything about leapseconds; since it does not make use of leapseconds, it can be used without first loading a leapseconds kernel. Like utc2et_c, tparse_c does not recognize other time systems or time zones, nor does it recognize times on a 12-hour clock.

Unlike str2et_c and utc2et_c, tparse_c does not make use of the CSPICE exception handling subsystem. Erroneous strings are diagnosed via a string argument---`error'. If the string `error' is returned empty (blank), no problems were detected in the input string; if `error' is returned non-blank by tparse_c, it contains a description of the problem with the input string.

When a year is entered with only two digits (as in some of the examples above), it is mapped into a default 100-year range. This range may be adjusted with the routine tsetyr_c. For example, if you would like to set the default range to be from 1972 to 2071, issue the following subroutine call:

   tsetyr_c ( 1972 );

The routines tparse_c and utc2et_c accept time strings whose numeric components are outside of the normal range of values used in time and calendar representations. For example, strings such as

   1985 FEB 43 27:65:25   (equivalent to 1985 MAR 16 04:05:25)

are accepted. If you would rather have such components treated as errors, checking can be enabled by calling tparch_:

   tparch_ ( "YES", 3 );

where 3 is the string length of ``YES''. str2et_c, on the other hand, does not accept time strings whose components are outside the normal range used in conversation; you cannot alter this behavior without re-coding str2et_c.

When a string is presented without a time system or time zone label, str2et_c interprets it using its current defaults (initially, UTC on a 24-hour clock). Hence, the defaults used by str2et_c might be a hindrance rather than a convenience. With this possibility in mind, str2et_c has been designed so that you may alter its default behavior with regard to the default time system or time zone. To change the default time system or time zone use the routine timdef_c.
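The two-digit-year mapping just described can be sketched outside of CSPICE. The snippet below is plain Python, not the CSPICE implementation; the function name is made up for illustration:

```python
def expand_year(two_digit_year, range_start=1972):
    """Map a 2-digit year into the 100-year window beginning at
    range_start (1972 gives the window 1972-2071). Illustrative only."""
    century = range_start - range_start % 100   # e.g. 1900 for range_start=1972
    year = century + two_digit_year
    if year < range_start:
        year += 100                             # wrap into the next century
    return year

print(expand_year(93))   # 1993  (matches '93 in the tables above)
print(expand_year(27))   # 2027  (matches '27)
print(expand_year(71))   # 2071  (last year of the default window)
print(expand_year(72))   # 1972  (first year of the default window)
```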
For example:

   timdef_c ( "SET", "SYSTEM", lenout, "UTC" );
   timdef_c ( "SET", "SYSTEM", lenout, "TDB" );
   timdef_c ( "SET", "SYSTEM", lenout, "TDT" );

These calls set the default time system for unlabeled strings to UTC, TDB, or TDT respectively.

All time zones are supported by str2et_c. The default time zone is simply Greenwich Mean Time (UTC+00:00). To change the default behavior of str2et_c so that unlabeled strings are assumed to be referenced to a particular time zone (for example Pacific Standard Time), issue the subroutine call below:

   timdef_c ( "SET", "ZONE", lenout, "PST" );

The default behavior of str2et_c is to use the Gregorian calendar for all epochs. However, using timdef_c you can set the default calendar to one of three: GREGORIAN, JULIAN, or MIXED.

   timdef_c ( "SET", "CALENDAR", lenout, "GREGORIAN" );
   timdef_c ( "SET", "CALENDAR", lenout, "JULIAN" );
   timdef_c ( "SET", "CALENDAR", lenout, "MIXED" );

Times need to be printed out as well as read in. CSPICE contains three routines for accomplishing this task: timout_c, et2utc_c, and etcal_c. All three convert a number of ephemeris seconds past J2000 to a time string.

The routine timout_c provides a mechanism for formatting output time strings in almost any form you desire. Suppose, for example, that you would like an epoch printed in the format shown here:

   04:29:29.292 Jan 13, 1996

To do this you supply timout_c with a ``format picture'' that matches the desired output:

   SpiceChar * pictur = "HR:MN:SC.### Mon DD, YYYY ::RND";

(the marker ``::RND'' requests that the output be rounded to the precision of the picture). The rules for constructing pictur are spelled out in the header of timout_c. However, you may very well never need to learn these rules: CSPICE contains the routine tpictr_c that can construct a time format picture for you from a sample time string. Returning to the example above, if the following block of code is executed, pictur will contain a format picture that will yield output strings similar to our example string:

   SpiceChar * exampl = "04:29:29.292 Jan 13, 1996";

   tpictr_c ( exampl, lenout, lenerr, pictur, &ok, error );

Another example of a sample string:

   SpiceChar * exampl = "Jan 12, 02:28:29.### A.M. (PDT)";

The routine et2utc_c is an older time formatting routine. It is not as flexible as timout_c.
All outputs are UTC outputs, and only a limited set of formats is supported. On the other hand, it is easier to learn how to use et2utc_c. et2utc_c is an inverse to utc2et_c: that is, following the calls

   utc2et_c ( utcin, &et );
   et2utc_c ( et, "C", 3, lenout, utcout );

the string utcout represents the same epoch as the input string utcin.

The routine etcal_c converts an ephemeris epoch to a calendar string without any reference to the leapseconds kernel. This makes it well suited for producing diagnostic messages. Indeed, it was created so that more user-friendly diagnostic messages could be produced by those SPICE routines that require ET as an input.

We use the term uniform time scale to refer to those representations of time that are numeric (each epoch is represented by a number) and additive. A numeric time system is additive if, given the representations E1 and E2 of any pair of successive epochs, the time elapsed between the epochs is given by the difference E2 - E1. Conversion between uniform time scales can be carried out via the double precision function unitim_c.

At the heart of the CSPICE time software subsystem are the ``foundation'' routines tpartv_ and ttrans_. tpartv_ is used to take apart a time string and convert it to a vector of numeric components. ttrans_ serves the role of converting between the various numeric vector representations of time. If you need to build your own time conversion routines, these routines are a good place to begin. In addition to the foundation routines, you may find helpful utility routines such as et2lst_c (ET to Local Solar Time).

The following program demonstrates use of the time conversion routines str2et_c, tpictr_c, timout_c and et2utc_c. Note that the data necessary to convert between UTC and ET are loaded into the kernel pool just once---typically during program initialization---after which the conversion may be performed at any level within the program.

   /*
      Convert between UTC and ET interactively, and convert ET back to
      UTC in calendar format, DOY format, and as a Julian date.
      Requires a leapseconds kernel.
   */
   #include <stdio.h>
   #include "SpiceUsr.h"

   #define UTCLEN 34

   int main ( void )
   {
      SpiceDouble     et;
      SpiceChar       utcstr  [ UTCLEN ];
      SpiceChar       dutcstr [ UTCLEN ];
      SpiceChar       jutcstr [ UTCLEN ];
      SpiceChar     * leap;
      SpiceChar     * response;

      /* Get the name of the kernel file. */
      leap = prompt_c ( "Name of leapsecond kernel? " );

      /* Load the kernel pool. */
      furnsh_c ( leap );

      /* Compute the result for each new time epoch. */
      do
      {
         response = prompt_c ( "Enter time string: " );

         str2et_c ( response, &et );
         printf   ( "Time converts to ET (sec past J2000) %f\n\n", et );

         /* Convert from et to string. */
         et2utc_c ( et, "C", 3, UTCLEN, utcstr  );
         et2utc_c ( et, "D", 3, UTCLEN, dutcstr );
         et2utc_c ( et, "J", 3, UTCLEN, jutcstr );

         printf ( "ET converts to %s, %s, %s\n\n", utcstr, dutcstr, jutcstr );

         response = prompt_c ( "Continue? (Y/N) " );
      }
      while ( *response == 'Y' || *response == 'y' );

      return ( 0 );
   }

The leapseconds kernel is loaded with the CSPICE routine furnsh_c. The Julian Ephemeris Date corresponding to an ET epoch may be computed as

   jed = j2000_c() + et/spd_c();

The CSPICE routine utc2et_c treats a string such as ``2451821.1928 JD'' as a Julian Date UTC; the CSPICE routine tparse_c, on the other hand, interprets it on the formal, leapsecond-free time scale. The function tpartv_ parses time strings; tpartv_ is the ``foundation'' function relied upon by str2et_c, utc2et_c, tparse_c and tpictr_c. Several of the routines named with a trailing underscore are private routines; these routines are not considered part of the official CSPICE interface. References to ldpool_c and other lower level loader routines have been replaced with furnsh_c throughout the document. A spell-check was performed on the text.

This edition of the TIME required reading is cast for the C version of the SPICELIB library, CSPICE. The CSPICE library is an implementation of the FORTRAN SPICELIB library in C. CSPICE is composed of C routines translated from FORTRAN by f2c, and a set of wrapper functions which allow a more C-native interface to the f2c'd routines.

This edition of TIME Required Reading documents the routine et2lst_c.
This routine allows users to easily convert Ephemeris Time (Barycentric Dynamical Time) to the local solar time at a user-specified longitude on the surface of an object. In addition to the new routine et2lst_c, we document a slight extension of the set of time strings that are recognized by the CSPICE time software. This extension is documented in Appendix B.

This edition of TIME Required Reading is a substantial revision of the previous edition; this reflects a major enhancement of the CSPICE time software. This version describes the new time related software that was included in version N0046 of CSPICE. We also draw distinctions between the various levels of time conversion software that are available to Toolkit users.

The following routines are new as of version N0046 of SPICELIB:

   str2et_c    tsetyr_c    ttrans_    jul2gr_
   timout_c    timdef_c    tpartv_    gr2jul_
   tpictr_c    tchckd_     tcheck_    texpyr_

and unitim_c, for converting between additive numeric time systems.
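The additive time scales handled by unitim_c are related by simple arithmetic. For example, the Julian Ephemeris Date corresponding to an epoch given as TDB seconds past J2000 is jed = j2000_c() + et/spd_c(). The same computation, sketched in plain Python outside of CSPICE (2451545.0 and 86400.0 are the constant values returned by j2000_c and spd_c):

```python
J2000_JD = 2451545.0        # Julian Date of the J2000 epoch (j2000_c)
SECONDS_PER_DAY = 86400.0   # seconds per Julian day (spd_c)

def et_to_jed(et):
    """Julian Ephemeris Date for an epoch given as TDB seconds past J2000."""
    return J2000_JD + et / SECONDS_PER_DAY

print(et_to_jed(0.0))       # 2451545.0  (the J2000 epoch itself)
print(et_to_jed(86400.0))   # 2451546.0  (exactly one day later)
```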
http://naif.jpl.nasa.gov/pub/naif/toolkit_docs/C/req/time.html
I'm writing an ASP.NET Core MVC web app that is a tool for handling a parts database. What I want is for the user to select a Part, and then that will do some action, like delete that part from its DB. However, I want this to be a generic action used by all the parts. I have a class hierarchy which is:

What I need is some method that I can call that will get the DbSet that my part belongs to. This is an example of what I'm looking to do:

Models

   public class Part
   {
       public Nullable<int> ID { get; set; }
       public string Brand { get; set; }
   }

   public class PartA : Part
   {
       public int Length { get; set; }
       public List<Image> Images { get; set; }
   }

   public class PartB : Part
   {
       public int Durability { get; set; }
   }

   public class Image
   {
       public Nullable<int> ID { get; set; }
       public string ImagePath { get; set; }
   }

PartsDbContext

   public class PartsDbContext : DbContext
   {
       public DbSet<PartA> PartAs { get; set; }
       public DbSet<PartB> PartBs { get; set; }
   }

PartsController

   public IActionResult DeletePart (string partType, int id)
   {
       var partSet = GetDbSet(partType);
       var part = partSet.FirstOrDefault(e => e.ID == id);

       if (part != null)
       {
           partSet.Remove(part);
           _context.SaveChanges();
       }

       return Ok(); // the original snippet omitted a return statement; Ok() is just a placeholder
   }

   // function to find and return the DbSet of the selected type
   private DbSet<Part> GetDbSet (string partType)
   {
       switch (partType)
       {
           case "PartA":
               return _context.PartAs;
           case "PartB":
               return _context.PartBs;
       }

       return null;
   }

Now obviously this doesn't work, because the compiler will complain that:

   You can't convert type DbSet<PartA> to type DbSet<Part>

Anyone know how I might go about doing this?

This is really hacky, but sort of works:

   public IActionResult DeletePart (string partType, int id)
   {
       Type type = GetTypeOfPart(partType);
       var part = _context.Find(type, id);
       var entry = _context.Entry(part);

       entry.State = EntityState.Deleted;
       _context.SaveChanges();

       return Ok(); // placeholder return, as above
   }

However, you really should just use polymorphism and generic abstract Controllers.

EDIT: You can also use Explicit Loading for this.
private void LoadRelatedImages(IPart part) { _context.Entry(part) .Collection(p => p.Images) .Load(); }
https://entityframeworkcore.com/knowledge-base/43884689/return-dbset-based-on-selection
> Hello, I'm working on a foliage shader and I would like it to react to the player as they walk by. HZD and Uncharted have both made really great use of this and I've seen it in Unity before, but I'm not sure how to do it without collision (collision is too expensive for every piece of vegetation). Here is what I have so far:

Thanks in advance!

Here it is in Unreal, just converting it over now. Also requires player position, as listed here:

Maybe some wind zone inclusion:

Answer by Matt-Murch · Apr 08, 2018 at 05:02 PM

Here's how I do it. I'm using ShaderForge, Unity, and Blender. There are two components to making it work: a script on each foliage asset that gets the player position and sends it to the shader, and a vector in the shader that receives the player. From there it's pretty easy (that's relative). End result:

Here is the code I use:

   using System.Collections;
   using System.Collections.Generic;
   using UnityEngine;

   public class PlayerToShaderScript : MonoBehaviour
   {
       public GameObject player;
       private Material material;

       // Use this for initialization
       void Start ()
       {
           if (player == null)
               player = GameObject.FindWithTag("Player");

           // Cache the material once; looking it up every frame is wasteful.
           material = GetComponent<Renderer>().materials[0];
       }

       // Update is called once per frame
       void Update ()
       {
           material.SetVector("_PlayerPos", player.transform.position);
       }
   }

This gets the player object (tagged) and passes its position to the material every frame. And here are the shader nodes:

The effect has two major node groups: Player reaction (top), and wind animation (bottom). The Top Group takes the player vector (from the script), compares it to world space to find its distance, and applies a vertex offset based on that. The Bottom Group takes time and uses COS to add some wind animation. I have two wind effects, vertical offset and horizontal offset. I use vertex colours in my model to control all this; R is horizontal (top of tree) and G is vertical (ends of branches). This gives nice movement fairly cheap.
You can also use the alpha value to blend between bark and leaves in your textures (needs the nightly build of Blender for vertex alpha support). I hope this helps someone; good luck and happy developing!

@Matt-Murch, I did everything like you said in this comment, and yet the mesh doesn't move when my player brushes by it. Thing is, I only painted the mesh black and red (red is supposed to be horizontal, black is no movement). Yes, the player is tagged, the script is attached to the mesh; the only difference is my mesh doesn't have branches, it's a simple crossplane with its top vertices painted red in Blender. So what am I missing? Please help.

Hi, you should be getting some 'wind' animation on the top. Only vertices that have some green value in them will react to the player. If you add the green (you can add green to the red vertex with the add brush) and you still don't have movement, check that your object is in positive world space (X, Y and Z are all greater than 1); I found a weird glitch in the math for negative space. Other than that, tweak the values a lot; depending on your scene scale it can be a bit of a hassle, but it's worth it when you find the right ones.
https://answers.unity.com/questions/1490372/shader-forge-vegetation-collision.html?sort=oldest
Created on 2019-11-04 07:12 by Samuel Tatasurya, last changed 2019-11-09 05:52 by python-dev.

Transformations performed by certain fixers (e.g. future, itertools_imports) that cause a statement to be replaced by a blank line will generate a Python file that contains a syntax error. For example, assume a Python file (foo.py) containing the lines below:

   try:
       from itertools import imap
   except ImportError:
       pass

If we run the "itertools_imports" fixer against it:

   2to3 -f itertools_imports foo.py

it will result in the following:

   try:
   except ImportError:
       pass

which is syntactically incorrect.

Suggestion: instead of always replacing such a case with BlankLine(), a check should be performed beforehand to see whether the statement to be replaced has any siblings. If no sibling is found, then replace that statement with a "pass" statement instead. By doing this, Python source files generated by 2to3 are more readily runnable right after the transformation.
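The broken and suggested outputs can be checked with nothing more than the built-in compile(). This snippet is an illustration of the report, not code from the fixer itself:

```python
# Fixer output today: the import statement became a blank line,
# leaving the try-suite empty.
broken = (
    "try:\n"
    "\n"
    "except ImportError:\n"
    "    pass\n"
)

# Suggested behavior: a statement with no siblings is replaced by `pass`.
fixed = (
    "try:\n"
    "    pass\n"
    "except ImportError:\n"
    "    pass\n"
)

def compiles(source):
    """Return True when `source` is syntactically valid Python."""
    try:
        compile(source, "foo.py", "exec")
        return True
    except SyntaxError:
        return False

print(compiles(broken))  # False
print(compiles(fixed))   # True
```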
https://bugs.python.org/issue38681
Does the C++ standard library have ifdef or ifndef preprocessor instructions?

I'm building my own terminal app project in C++ and I'm wondering whether the standard library headers have ifdef or ifndef preprocessor instructions. I want to know because I need to create different header files which need some standard library headers such as "string" and some others; I don't want to include the same library 3 or more times because it makes the program heavier. For example, I wrote something like this in my header files to prevent the .h file from being included more than once:

   #ifndef myheader_h
   #define myheader_h

   // my file code here

   #endif

I tried compiling, but the compiler reports no errors or warnings. I also tried to read the standard-library source code () and I haven't found any preprocessor rule like ifdef or ifndef. Should I include standard library headers like this?

   #ifndef string_h
   #define string_h
   #include <string>
   #endif

I hope my question isn't already asked; I haven't found it while searching.

Updates

To some who said "you're not in the position where you need to worry about" and who said "it costs very little if it has proper include guards", I meant: the program's heaviness is important, and I want to make it slighter, so I don't want to entirely include the same file multiple times. Do the standard library files have proper include guards? (My header files have them; I didn't know whether the standard library files do.)

Those preprocessor directives you're talking about are called "header guards", and the standard library headers definitely have them (or some other mechanism that does the same thing), like all other proper header files. Including them multiple times shouldn't cause any problems, and you only need to worry about these when you're writing your own header files. The "source code" that you're reading is just the documentation, which says how the header files should work but doesn't provide the actual code. To see the code, you can look in the header files provided by your compiler.
For example, the <iostream> header in Visual Studio has both #pragma once and header guards:

   #pragma once
   #ifndef _IOSTREAM_
   #define _IOSTREAM_
   //...
   #endif /* _IOSTREAM_ */

The headers provided by the GCC compiler also have header guards:

   #ifndef _GLIBCXX_IOSTREAM
   #define _GLIBCXX_IOSTREAM 1
   //...
   #endif /* _GLIBCXX_IOSTREAM */

The #if directive, with the #elif, #else, and #endif directives, controls compilation of portions of a source file. If the expression you write (after the #if) has a nonzero value, the line group immediately following the #if directive is kept in the translation unit. Each #if directive in a source file must be matched by a closing #endif.

There is no requirement for the standard header files to #define any specific pre-processor symbols to make sure they can be #included multiple times. Having said that, any sane implementation would make sure that they can be #included multiple times without adversely affecting application code. It turns out that is a requirement by the standard for most headers (thanks, @Rakete1111). From the C++ standard:

   A translation unit may include library headers in any order ([lex]). Each may be included more than once, with no effect different from being included exactly once, except that the effect of including either <cassert> or <assert.h> depends each time on the lexically current definition of NDEBUG.

Not only that, they are very likely to be using the #pragma once directive. Hence, even if you use #include multiple times for the same header, it is going to be read only once. In summary, don't worry about standard header files.
If your header files are implemented correctly, your application will be just fine.

The standard solution is to wrap the entire library in the following construct:

   #ifndef _EXAMPLE_LIBRARY_H
   #define _EXAMPLE_LIBRARY_H

   //This is an example library
   int a = 0;

   //End of example library
   #endif

Now, when the library is included for the first time, the preprocessor checks whether there is something defined with the name _EXAMPLE_LIBRARY_H.

> I'm asking myself [sic] if standard library has ifdef or ifndef preprocessors instructions

The standard doesn't specify whether there are ifdef-style header guards, although it does require that multiple inclusion is protected in some manner. I took a look at a random header of the libstdc++ standard library implementation. It does have header guards.

> i don't want to include the same library 3 or more times because it makes the program heavier

Including a header file multiple times does not make a program "heavier".

> Should i include standard library headers like this? #ifndef string_h #define string_h #include <string> #endif

That is not necessary, or particularly useful.

The standard does not define behavior for other directives: they might be ignored, have some useful meaning, or make the program ill-formed. Even if otherwise ignored, they are removed from the source code when the preprocessor is done. A common non-standard extension is the directive #warning, which emits a user-defined message during compilation.
http://thetopsites.net/article/51828147.shtml
Edge renders simple web page differently than IE 11 or Chrome

Not reproducible Issue #15652384

Steps to reproduce

This very simple Startup.cs renders like this in Edge:

   Hello,From My World!

Like this in IE 11:

   Hello, From My World!

And this in Chrome:

   Hello, From My World!

Notice the missing space between "Hello," and "From" in Edge. Here is the code:

   using Microsoft.AspNetCore.Builder;
   using Microsoft.AspNetCore.Hosting;
   using Microsoft.AspNetCore.Http;
   using Microsoft.Extensions.DependencyInjection;
   using System;
   using System.Linq;

   namespace AspNetCoreVideo
   {, From My World!"); }); } } }

Microsoft Edge Team

Changed Assigned To to "James M."

Hello, Thank you for providing this information about the issue. We are unable to reproduce this problem in Edge. Please test this behavior in our latest public stable build 17134.

Best Wishes, The MS Edge Team

Microsoft Edge Team

Changed Status to "Not reproducible"
https://developer.microsoft.com/en-us/microsoft-edge/platform/issues/15652384/
ldns_rr_set_owner, ldns_rr_set_ttl, ldns_rr_set_type, ldns_rr_set_rd_count, ldns_rr_set_class, ldns_rr_set_rdf

SYNOPSIS

   #include <stdint.h>
   #include <stdbool.h>
   #include <ldns/ldns.h>

   void ldns_rr_set_owner(ldns_rr *rr, ldns_rdf *owner);
   void ldns_rr_set_ttl(ldns_rr *rr, uint32_t ttl);
   void ldns_rr_set_type(ldns_rr *rr, ldns_rr_type rr_type);
   void ldns_rr_set_rd_count(ldns_rr *rr, size_t count);
   void ldns_rr_set_class(ldns_rr *rr, ldns_rr_class rr_class);
   ldns_rdf* ldns_rr_set_rdf(ldns_rr *rr, const ldns_rdf *f, size_t position);

DESCRIPTION

ldns_rr_set_owner() sets the owner in the rr structure.
   *rr: rr to operate on
   *owner: set to this owner
   Returns void.

ldns_rr_set_ttl() sets the ttl in the rr structure.
   *rr: rr to operate on
   ttl: set to this ttl
   Returns void.

ldns_rr_set_type() sets the type in the rr.
   *rr: rr to operate on
   rr_type: set to this type
   Returns void.

ldns_rr_set_rd_count() sets the rd_count in the rr.
   *rr: rr to operate on
   count: set to this count
   Returns void.

ldns_rr_set_class() sets the class in the rr.
   *rr: rr to operate on
   rr_class: set to this class
   Returns void.

ldns_rr_set_rdf() sets an rdf member; it will be set at the given position. The old value is returned, like pop.
   *rr: the rr to operate on
   *f: the rdf to set
   position: the position at which to set the rdf
   Returns the old value in the rr, NULL on failure.

SEE ALSO

   ldns_rr_list
http://huge-man-linux.net/man3/ldns_rr_set_owner.html
Projecting Content Into The Root Application Component Using Slots In Vue.js 2.6.6 In almost every Vue.js demo that I've looked at, the root Vue() instance is created with an "el" property and a render() function that does nothing but render the "App" component. This approach instantiates the App component in a context in which it has no children (though you could, theoretically, define children in the root render() function). But, what if I wanted to define children in the top-level HTML page? It turns out, if you omit both the render() function and the "template" property in the root Vue() definition, Vue.js will use the in-DOM HTML as the root Vue() template. This allows the App component to project HTML from the top-level page using the normal Slot-based architecture. Run this demo in my JavaScript Demos project on GitHub. View this code in my JavaScript Demos project on GitHub. When you pre-compile your Vue.js application, all of your HTML templates get transformed into createElement() calls (or h() calls, if you favor brevity over readability). As such, Vue.js doesn't have to perform any runtime compilation when you open your application. However, if you allow HTML to be projected from the top-level page, Vue.js can't know about it ahead-of-time. As such, Vue.js will have to compile the top-level HTML at runtime. This may impact the time-to-first-interaction of the application. And, it will also require the JavaScript bundle to include the Vue.js compiler, not just the Vue.js runtime. As such, this approach is probably not recommended. That said, I wanted to try it anyway as I can think of one or two use-cases in which the top-level HTML page may serve as the input for some sort of Domain Specific Language (DSL). 
So, to set up this thought-experiment, let's look at the top-level HTML that I use for all of my demos:

   <!doctype html>
   <html lang="en">
   <head>
       <meta charset="utf-8" />
       <title>
           Projecting Content Into The Root Application Component Using Slots In Vue.js 2.6.6
       </title>
       <link href="build/main.bdcc718aeab47815b2cb.css" rel="stylesheet">
   </head>
   <body>

       <h1>
           Projecting Content Into The Root Application Component Using Slots In Vue.js 2.6.6
       </h1>

       <my-app>
           <p>
               <em>Loading files...</em>
           </p>
           <p>
               npm Run Scripts:
           </p>
           <ul>
               <li>
                   <strong>npm run build</strong> — Compiles the .vue file into bundles.
               </li>
               <li>
                   <strong>npm run watch</strong> — Compiles the .vue file into bundles and then watches files for changes.
               </li>
           </ul>
       </my-app>

       <script type="text/javascript" src="build/main.77fde5cf302a90a08010.js"></script>
       <script type="text/javascript" src="build/runtime.ba170225256633bfa10a.js"></script>
       <script type="text/javascript" src="build/vendors~main.222974fc2903e0d3c178.js"></script>

   </body>
   </html>

Normally, the content in between my "my-app" elements would get wholly replaced with the render-content of my App component. But, in this case, I'm going to omit both the "template" and "render()" properties in my root Vue() definition. This will cause the above outerHTML of the "my-app" element to be used as the root Vue() template:

   import "./main.polyfill";

   // ----------------------------------------------------------------------------------- //
   // ----------------------------------------------------------------------------------- //

   // Import core classes.
   import Vue from "vue";

   // Import application classes.
   import AppComponent from "./app.component.vue";

   // ----------------------------------------------------------------------------------- //
   // ----------------------------------------------------------------------------------- //

   new Vue({
       el: "my-app",

       // When we omit both the render() function and the "template" property, the in-DOM
       // markup of <my-app> will serve as the component's template. As such, we have to
       // tell Vue.js how to map custom elements (like <my-app>) onto Vue Components.
       // --
       // NOTE: While I am not using other custom elements in this demo, this approach
       // allows other custom elements to be consumed in the top-level HTML page. Though,
       // they have to be identified using lowercase Kebab-Case.
       components: {
           "my-app": AppComponent
       }

       // Instead of using a render() function, which is the most common example, we're
       // going to allow the runtime HTML to act as the component template. This allows the
       // runtime HTML to be projected into the AppComponent using standard slotting.
       // --
       // WARNING: This requires RUNTIME COMPILATION of the in-DOM HTML, which may have a
       // negative affect on performance and time-to-first-interaction.
       // --
       // I render the root component of the application into the DOM.
       // render: ( createElement ) => {
       //
       //     return( createElement( AppComponent ) );
       //
       // }
   });

And, now that the "my-app" HTML element is being mapped to my Vue.js AppComponent, I can use Slot-based content projection in my AppComponent:

   <style scoped

   <template>
       <div class="app">
           <p>
               I am the AppComponent. And, this is my
               <strong class="label">Projected Content</strong>:
           </p>
           <div class="projected">
               <slot></slot>
           </div>
           <p>
               Fascinating, right?
</p> </div> </template> <script> export default { // ... }; </script> Here, the content from my top-level HTML page is being projected right in the middle of my AppComponent. And, when we run this page, we get the following browser output: As you can see, the HTML markup from the top-level page has been successfully projected into the AppComponent using the standard Slot syntax. I can't imagine that there are too many uses-cases for this kind of approach; and, I will remind you that I am a total beginner with Vue.js 2.6.6; however, I can think of one or two uses-cases in which I might want to enable a non-developer to be able to define consumable HTML mark-up in the index file of a project. If nothing else, this thought-experiment is just helping me understand the Vue.js application life-cycle a bit better. Reader Comments This is great! But, what about named slots? @Hendrik, I assume it would work similarly; but, to be honest, I've only dabbled with Vue.js. I am not sure that I've even ever tried a named-slot in Vue before. I spend most of my time in Angular, which does not allow content to be projected into the root-element.
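Following up on the named-slot question above: I have not verified this against Vue 2.6 in-DOM templates myself, so treat it as a sketch. Because the browser parses the in-DOM markup before Vue ever sees it, the older `slot` attribute form (still supported, though deprecated, in Vue 2.x) is the syntax most likely to survive that round trip:

```html
<!-- index.html (hypothetical) -->
<my-app>
	<template slot="header">
		<em>Loading files...</em>
	</template>
	<p>Default slot content goes here.</p>
</my-app>
```

And the component template would declare a matching named outlet alongside the default one:

```html
<template>
	<div class="app">
		<slot name="header"></slot>
		<slot></slot>
	</div>
</template>
```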
https://www.bennadel.com/blog/3573-projecting-content-into-the-root-application-component-using-slots-in-vue-js-2-6-6.htm
Opened 9 years ago
Closed 9 years ago
Last modified 9 years ago

#2632 closed defect (wontfix)

Tries to retrieve a non-existent object for a changeset

Description

Hi there: When trying to retrieve this page:

GET: /changeset/65e5795d0a77b341ecfc9d9df7a7e08fe02dc3d5

I get the following error:

Trac detected an internal error:
GitErrorSha: object '131ee778fd5757495d9807b24231111f801ed839' not found

I can't see why it is looking for that SHA... the SHA it should be looking for is specified in the URL. I've had a quick look through the plugin but I don't have the time to find the issue myself right now. Sometimes things work right, with many changesets, but some produce an incorrect SHA lookup. I'm on the 0.11 branch using the timeline to look at my changesets.

Attachments (0)

Change History (10)

comment:1 Changed 9 years ago by

comment:2 follow-up: 4 Changed 9 years ago by

hi there: Sorry, I didn't see this in the log before (I had turned on debugging to see what I could find)...

{{{
2008-02-25 07:29:09,705 Trac[main] DEBUG: Dispatching <Request "GET u'/changeset/65e5795d0a77b341ecfc9d9df7a7e08fe02dc3d5'">
2008-02-25 07:29:09,711 Trac[svn_fs] INFO: Failed to load Subversion bindings
Traceback (most recent call last):
  File "/users/home/thekirk/python/lib/python2.5/site-packages/Trac-0.11dev_r6603-py2.5.egg/trac/versioncontrol/svn_fs.py", line 251, in init
    _import_svn()
  File "/users/home/thekirk/python/lib/python2.5/site-packages/Trac-0.11dev_r6603-py2.5.egg/trac/versioncontrol/svn_fs.py", line 69, in _import_svn
    from svn import fs, repos, core, delta
ImportError: No module named svn
2008-02-25 07:29:09,722 Trac[git_fs] INFO: detected GIT version 1.5.3.6
2008-02-25 07:29:09,723 Trac[PyGIT] DEBUG: PyGIT.Storage instance 142814732 constructed
2008-02-25 07:29:09,723 Trac[PyGIT] DEBUG: requested weak PyGIT.Storage instance 142814732 for '/users/home/thekirk/git/prestige.git/.git'
2008-02-25 07:29:09,723 Trac[git_fs] INFO: disabled CachedRepository for '/users/home/thekirk/git/prestige.git/.git'
2008-02-25 07:29:09,737 Trac[PyGIT] DEBUG: triggered rebuild of commit tree db for 142814732
2008-02-25 07:29:10,063 Trac[PyGIT] DEBUG: rebuilt commit tree db for 142814732 with 8655 entries
2008-02-25 07:29:10,064 Trac[session] DEBUG: Retrieving session for ID '0eeb43608fc5a5da27c2b118'
2008-02-25 07:29:10,123 Trac[chrome] DEBUG: Prepare chrome data for request
2008-02-25 07:29:10,124 Trac[perm] DEBUG: No policy allowed anonymous performing TICKET_CREATE on None
2008-02-25 07:29:10,125 Trac[perm] DEBUG: No policy allowed anonymous performing TRAC_ADMIN on None
2008-02-25 07:29:10,126 Trac[perm] DEBUG: No policy allowed anonymous performing PERMISSION_GRANT on None
2008-02-25 07:29:10,126 Trac[perm] DEBUG: No policy allowed anonymous performing PERMISSION_REVOKE on None
2008-02-25 07:29:10,126 Trac[perm] DEBUG: No policy allowed anonymous performing TICKET_ADMIN on None
2008-02-25 07:29:10,598 Trac[main] ERROR: object '131ee778fd5757495d9807b24231111f801ed839' not found
Traceback (most recent call last):
  File "/users/home/thekirk/python/lib/python2.5/site-packages/Trac-0.11dev_r6603-py2.5.egg/trac/web/main.py", line 419, in _dispatch_request
    dispatcher.dispatch(req)
  File "/users/home/thekirk/python/lib/python2.5/site-packages/Trac-0.11dev_r6603-py2.5.egg/trac/web/main.py", line 196, in dispatch
    resp = chosen_handler.process_request(req)
  File "/users/home/thekirk/python/lib/python2.5/site-packages/Trac-0.11dev_r6603-py2.5.egg/trac/versioncontrol/web_ui/changeset.py", line 323, in process_request
    self._render_html(req, repos, chgset, restricted, xhr, data)
  File "/users/home/thekirk/python/lib/python2.5/site-packages/Trac-0.11dev_r6603-py2.5.egg/trac/versioncontrol/web_ui/changeset.py", line 544, in _render_html
    diff_bytes += _estimate_changes(old_node, new_node)
  File "/users/home/thekirk/python/lib/python2.5/site-packages/Trac-0.11dev_r6603-py2.5.egg/trac/versioncontrol/web_ui/changeset.py", line 498, in _estimate_changes
    old_size = old_node.get_content_length()
  File "build/bdist.solaris-2.11-i86pc/egg/tracext/git/git_fs.py", line 349, in get_content_length
    self.fs_size = self.git.get_obj_size(self.fs_sha)
}}}

And no, that changeset doesn't even exist, I don't think. Here is the info for the changeset I got from git:

{{{
KidA% git cat-file commit 65e5795d0a77b341ecfc9d9df7a7e08fe02dc3d5
tree 0a718ab762b20fa6cf6a9cc37e90a7c09063136c
parent 747323490880b3dd0c384c72fbfece6c19eff406
author Jamie Kirkpatrick <jkp@…> 1203876188 +0100
committer Jamie Kirkpatrick <jkp@…> 1203876542 +0100
}}}

Hope that helps. PS: GREAT work. I didn't mean to file two complaints in one go... just want to help you make it better :)

comment:3 Changed 9 years ago by

thanks for the traceback :-) well, this exception is not about a revision sha, but about a blob sha, which is most likely associated with that commit... can we rule out, that the git repository was garbage collected/pruned while GitPlugin's Instance was alive? i.e. can this be reproduced after restarting Trac with the very same changeset? what would be git diff-tree -r -m 65e5795d0a77b341ecfc9d9df7a7e08fe02dc3d5's output?

comment:4 Changed 9 years ago by

Replying to jkp@kirkconsulting.co.uk:

> hi there: Sorry, I didn't see this in the log before (I had turned on debugging to see what I could find)...

ps: next time, please enclose log output within {{{ ...logoutput... }}} brackets, so it doesn't get re-wrapped :-)

comment:5 follow-up: 6 Changed 9 years ago by

Yeah I just did that in the comment I posted, but the item was locked coz you were editing it to tell me to do that! Doh!
{{{
KidA% git diff-tree -r -m 65e5795d0a77b341ecfc9d9df7a7e08fe02dc3d5
65e5795d0a77b341ecfc9d9df7a7e08fe02dc3d5
:100644 100644 f4d5f7d7f976204b05a6b0d2e92b2ec21b44e375 2f4534bdcb965d715f745eca748357242d379eff M Applications/PrManager/PrCrypt.cpp
:100644 100644 488e3aff3cc1c798679b97763075fdf230cb1cc2 7153c1c69d2a007671b9e333bbf7b4e77a4dc160 M Applications/PrManager/PrDatabase.cpp
:100644 100644 0a57e338a393d7ef4129ba1599a939f6d89d1607 585f02475f23232343ce25f0010240e2bdff4d6a M Applications/PrManager/PrDatabase.h
:100644 100644 c41a6726ca72730fe295386605db37a6d6c08d15 ec33feb12ab868d1ff1a2486d3b9d99f344dea59 M Applications/PrManager/PrItem.cpp
:100644 100644 08084628cdc58f92dbec7ca17f65fef13c768c6a 9599eee114d0a6dab52494acec6f45ec26dc4c7b M Applications/PrManager/PrItemData.cpp
:100644 100644 9dac2b3533369b8d7b430c1f55cceae856019bec 8359ca43d744cb4580c71f121646e1166b52db42 M Applications/PrManager/PrItemQuery.cpp
:100644 100644 838a9d6798222b108fa766b4551f7ae63cf099d1 7d393f4c79b17eac1adf8c471067b28e86c7afc9 M Applications/PrManager/PrManager.h
:100644 100644 449762901df2271bea0f5a81c4503eb86181d3d2 958c8844534f355026e1746241eefa3d5fca4258 M Applications/PrManager/PrMulticastSender.cpp
:100644 100644 357b5726e242eaff1d2818e778c996f28930cbd4 c882b14e887a9784712c7a6aa8fd77ff4b9d2aa7 M Applications/PrManager/PrODBCDatabase.cpp
:100644 100644 90a304f336eecf5527bce943160eef01a586284c 61ae889ca54341b9dee93cac7c53c5b9bf94a4ab M Applications/PrManager/PrScriptJobScheduler.cpp
:100644 100644 891185405582c9aa94002cd4b279e8607e2bbd6d 069e38fc33d0be89400d318eb333339948325c93 M Applications/PrManager/PrServer.cpp
:100644 100644 8ed1f5a83683284f0f28e7f4bdaf9fc1072db7a7 3f6bccf47015447b367c329522a2520423b65a70 M Applications/PrManager/PrStory.cpp
:100644 100644 e377919b3616a2d36c54cdc10cf715db5201043e d28be78c8182e4f98d1f2d68daf20ea138ad5ca2 M Applications/PrManager/PrStoryText.cpp
:100644 100644 23b590ede91519fba978399329afbb30be677b95 513ab0f62b5620ab116a0a9a9480ba75c8843c0c M Include/PrCommon.h
:100644 100644 9cf2acbe1cd26e5203d54ecdbe0736452d7fd285 78ed5c6233a17ae4834630629f7db7562fd0ee1e M Include/PrSignal.h
:100644 100644 5d2446b94b63f6402becf022640ab965b5048c98 80d177f6ca2c3f90bff81136cb118e08f0b0975d M Include/PrSocket.h
:100644 100644 6b8e77fa151750d76086a6f6a01a4864e31b8761 b716c10b479653e914d2f70e2a37268c683871f3 M Include/PrThread.h
:000000 100644 0000000000000000000000000000000000000000 ca6d1ee81da1dc9bc167b40dc7e4cfd216f7a991 A Mac/PRManager/PRManager.xcodeproj/jkp.mode2v3
:100644 100644 b7e1d0d5273bece0787d20e41bf96b68a33d08cd 8f9eba10b4e5b9ead6ea13ea2647eeb1ce32c78a M Mac/PRManager/PRManager.xcodeproj/jkp.pbxuser
:100644 100644 8f93c37ab28c6052406187da410849ff7172d66d 42602a12c935c792a8632a34c60460ed0af6a751 M Mac/PRManager/PRManager.xcodeproj/jkp.perspectivev3
:100644 100644 8bad519633e1a3f4aa7737e0ac851c030689b561 1c49bc2e8dbc3847963c8c4ddc9d4f2a82493b7f M Mac/PRManager/PRManager.xcodeproj/project.pbxproj
:100644 100644 2edb6aeb91b946ac6db30deb6a7bd61f182bcfab fd559b972894c19b444f5b2274205ce23c53e419 M Mac/PRManager/main.cpp
:000000 100644 0000000000000000000000000000000000000000 cf4c5f11bf2b457df293a3f8e7449ec347f622e5 A Mac/PRUtility/PRUtility.xcodeproj/jkp.mode2v3
:100644 100644 7464ebd8fb44d78b5818757bb10e66f1b7cfb153 186c8f75dbf3764501dcadf6fb9743edf745c661 M Mac/PRUtility/PRUtility.xcodeproj/jkp.pbxuser
:100644 100644 082403bf78e22d1303e6f0b7201240238216f03c 514d9c323c8f87987699c61e9e1c063020974e2a M Mac/PRUtility/PRUtility.xcodeproj/jkp.perspectivev3
:000000 100755 0000000000000000000000000000000000000000 888d0adbef3ac255fe3ba57ecb471e6bb3456666 A SQL Scripts/CreatePostgresDatabase.sh
:000000 100755 0000000000000000000000000000000000000000 77589fcee3997297218c110a17cda79b09484a5a A SQL Scripts/Prestige5_MySQL.sql
:000000 100755 0000000000000000000000000000000000000000 4492d9edf2943d6d4f9a1230b7b4d90104122488 A SQL Scripts/Prestige5_MySQL_Data.sql
:000000 100644 0000000000000000000000000000000000000000 e57a3553cd280442903d34a9a5540a0248ad33b0 A SQL Scripts/Prestige5_MySQL_GlobalParams.sql
:100644 100755 d23ff2f8b3157a2565acb207aae388d398ca8876 305d5f3ab16d42cb026c44838cce4f82c69fdd9c M SQL Scripts/Prestige5_Oracle_Data.sql
:000000 100755 0000000000000000000000000000000000000000 457fbde3e12d2df5b26f14cd8cfdd6cb14ce2a57 A SQL Scripts/Prestige5_Postgres.sql
:000000 100755 0000000000000000000000000000000000000000 516ee21cccc7003d3380b2f555dec28aae64e9eb A SQL Scripts/Prestige5_Postgres_Data.sql
:000000 100644 0000000000000000000000000000000000000000 369e5d9ae27568ec666d6233b56afd1b8dcb8803 A SQL Scripts/Prestige5_Postgres_GlobalParams.sql
:100644 100644 7cf6d0dd963c74a80f0433838a665397719d1fda e5c9cf4b3d946b11863b69b7a9e524973d9ee9f9 M Shared Libraries/PrUtilityLib/PrSocket.cpp
:100644 100644 27afe0b6ddc85e93d58b713c064a9c24195ae6c9 14863c55328c86d81582a5eb97b232fade638a38 M Shared Libraries/PrUtilityLib/PrThread.cpp
}}}

comment:6 Changed 9 years ago by

Replying to jkp@kirkconsulting.co.uk:

> Yeah I just did that in the comment I posted, but the item was locked coz you were editing it to tell me to do that! Doh!

:-) well, the following one seems to be our problem... doesn't that 131ee... object exist (i.e. if you try to git cat-file blob 131ee77 or git cat-file -s 131ee77)?

comment:7 follow-up: 8 Changed 9 years ago by

Hrm. OK, so this could be the issue. If I do it on my local machine I have no issues, however on the shared-hosting I get an out of memory error. I wonder if their limits are too low for git. Seems it's not an issue with your code at all! Sorry for the wild goose chase.

comment:8 Changed 9 years ago by

Replying to jkp@kirkconsulting.co.uk:

> Hrm. OK, so this could be the issue. If I do it on my local machine I have no issues, however on the shared-hosting I get an out of memory error. I wonder if their limits are too low for git.

well, Trac0.11+GitPlugin do use up quite a bit of memory at the moment...
how much memory do you have at your disposal on that hosting host?

> Seems it's not an issue with your code at all! Sorry for the wild goose chase.

np

comment:9 Changed 9 years ago by

comment:10 Changed 9 years ago by

Well, I just read that the limit is a 100MB... which seems like it should be big enough to run this command. I've filed a support request with them so I'll have to see what they come back with. Otherwise maybe I'm going to be looking at new hosting :( (I use Textdrive's Shared Accelerator)

could you provide some more info please? :-) maybe a back-trace? is the 131eee... sha somehow related to that changeset (e.g. a parent/child/next/prev?)
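For readers hitting the same GitErrorSha: the diagnostic suggested in comment:6 can be scripted. The sketch below builds a throwaway repository purely to demonstrate the git cat-file checks; the paths are temporary and the bogus SHA is just the one from this ticket:

```shell
#!/bin/sh
# Demonstrate the object-existence checks from comment:6 in a scratch repo.
set -e

repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
echo hello > file.txt
git add file.txt
git commit -q -m initial

blob=$(git rev-parse HEAD:file.txt)

# 'git cat-file -e' exits 0 only if the object exists;
# '-t' and '-s' print its type and size (what get_obj_size() needs).
git cat-file -e "$blob" && echo "blob exists, size $(git cat-file -s "$blob")"

# An unknown sha reproduces the 'object ... not found' failure mode:
if ! git cat-file -e 131ee778fd5757495d9807b24231111f801ed839 2>/dev/null; then
    echo "object not found"
fi
```

If `git cat-file -s` itself dies with an out-of-memory error, as in comment:7, the problem is the hosting environment's memory limit rather than the plugin.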
https://trac-hacks.org/ticket/2632
On Tue, 2007-10-16 at 16:03 -0400, Jesse Keating wrote:
> On Tue, 16 Oct 2007 15:58:29 -0400
> Warren Togami <wtogami redhat com> wrote:
>
> > First ever build of libgssglue which seems to replace libgssapi
> > happened today, one day before the freeze of F8. In addition to
> > replacing libgssapi it seems to bump soname.
>
> Hrm, when I was contacted about this I was told it was just a package
> rename, not a soname rename too. This is a much bigger change than I
> expected. With the freeze tomorrow and the sluggishness of koji lately
> I'm reluctant to let this go through.

The problem is that in both namespaces, libgssapi is a confusing name for what is really just glue between the kernel and userspace, and not a gssapi implementation. Particularly when (in Fedora) libgssapi_krb5 is the mechglue library provided by the MIT krb5 package, and elsewhere libgssapi is a name much more traditionally associated (such as in Heimdal Kerberos) with real GSSAPI implementations.

Andrew Bartlett
--
Andrew Bartlett
Authentication Developer, Samba Team
Samba Developer, Red Hat Inc.
http://www.redhat.com/archives/fedora-devel-list/2007-October/msg01272.html
Dump trucks import, Commercial Chinese manufactured - Karachi, Pakistan

Rs. 7,000,000
- Ad Id: 2020003
- Ad Posted: 16 Jun 2017
- Seller Type: A Professional / Business
- Condition: New
- Warranty: Yes
- Model Year: 2016
- Mileage: 50

We want a good investor for investment in imported, Chinese-manufactured commercial dump trucks, with good profit and strong demand for commercial Chinese vehicles in Pakistan and Afghanistan. Dear sir, we are selling Chinese dump trucks with hydraulic loading: the largest, leading, best and top available imported commercial hydraulic dumper trucks, in 10, 12, 18 and 22 wheeler configurations, for road-transport-based companies and businesses.
Specialists in heavy and abnormally sized loading prime movers and dump truck transport throughout the world. We are in the business of hydraulic dumper transport services, with vehicles designed to carry cargo loads of 30 m/ton and 50 m/ton: sand, heavy rock, stone, coal, etc.

- Immediate delivery ex-Karachi, unit price 30 m/ton: Rs. 70,00000/- inclusive of all taxes (approximate).
- Immediate delivery ex-Karachi, unit price 50 m/ton: Rs. 90,00000/- inclusive of all taxes (approximate).
http://karachi.bolee.com/detail/dump-trucks-import-commercial-chines-manufactured-2020003
of Windows Explorer's directory change notification mechanism (Change Notify), and how that mechanism can lead to performance issues, before moving on to monitoring your environment for performance issues.

Change Notify and its impact on DFSN servers with ABE

Let's say you are viewing the contents of a network share while a file or folder is added to the share remotely by someone else. Your view of this share will be updated automatically with the new contents of the share, without you having to manually refresh (press F5) your view. Change Notify is the mechanism that makes this work in all SMB protocols (1, 2 and 3). The way it works is quite simple:

- The client sends a CHANGE_NOTIFY request to the server indicating the directory or file it is interested in. Windows Explorer (as an application on the client) does this by default for the directory that is currently in focus.
- Once there is a change to the file or directory in question, the server responds with a CHANGE_NOTIFY response, indicating that a change happened.
- This causes the client to send a QUERY_DIRECTORY request (in case it was a directory or DFS namespace) to the server to find out what has changed.

QUERY_DIRECTORY is the request we discussed in the first post that causes ABE filter calculations. Recall that it's these filter calculations that result in CPU load and client-side delays.

Let's look at a common scenario: During login, your users get a mapped drive pointing at a share in a DFS namespace. This mapped drive causes the clients to connect to your DFSN servers. The client sends a change notification for the DFS root (even if the user hasn't tried to open the mapped drive in Windows Explorer yet). Nothing more happens until there is a change on the server side. Administrative work, such as adding and removing links, typically happens during business hours, whenever the administrators find the time, or whenever the script that does it runs.

Back to our scenario.
Let's have a server-side change to illustrate what happens next:

- We add a link to the DFS namespace.
- Once the DFSN server picks up the new link in the namespace from Active Directory, it creates the corresponding reparse point in its local file system.
- If you do not use Root Scalability Mode (RSM), this will happen almost at the same time on all of the DFS servers in that namespace. With RSM, the changes will usually be applied by the different DFS servers over the next hour (or whatever your SyncInterval is set to).
- These changes trigger CHANGE_NOTIFY responses to be sent out to any client that indicated interest in changes to the DFS root on that server. This usually applies to hundreds of clients per DFS server.
- This causes hundreds of clients to send QUERY_DIRECTORY requests simultaneously.

What happens next strongly depends on the size of your namespace (larger namespaces lead to a longer duration per ABE calculation) and the number of clients (i.e. requests) per CPU of the DFSN server (remember the calculation from the first part?). As your server does not have hundreds of CPUs, there will definitely be some backlog. The numbers above decide how big this backlog will be and how long it takes for the server to work its way back to normal. Keep in mind that while pedaling out of the backlog situation, your server still has to answer other, ongoing requests that are unrelated to our Change Notify event.

Suffice it to say, this backlog and the CPU demand associated with it can also have a negative impact on other jobs. For example, if you use this DFSN server to make a bunch of changes to your namespace, these changes will appear to take forever, simply because the executing server is starved of CPU cycles. The same holds true if you run other workloads on the same server or want to RDP into the box.

So! What can you do about it?
As is common with an overloaded server, there are a few different approaches you could take:

- Distribute the load across more servers (and CPU cores)
- Make changes outside of business hours
- Disable Change Notify in Windows Explorer

Monitoring ABE

As you may have realized by now, ABE is not a fire-and-forget technology; it needs constant oversight and occasional tuning. We've mainly discussed the design and "tuning" aspect so far. Let's look into the monitoring aspect.

Using Task Manager / Process Explorer

This is a bit tricky, unfortunately, as any load caused by ABE shows up in Task Manager inside the System process (as do many other things on the server). In order to correlate high CPU utilization in the System process to ABE load, you need to use a tool such as Process Explorer and configure it to use public symbols. With this configured properly, you can drill deeper inside the System process and see the different threads and the component names. We need to note that ABE and the file server both use functions in srv.sys and srv2.sys, so strictly speaking it's not possible to differentiate between them just by the component names. However, if you are troubleshooting a performance problem on an ABE-enabled server where most of the threads in the System process are sitting in functions from srv.sys and srv2.sys, then it's very likely due to expensive ABE filter calculations. This is, aside from disabling ABE, the best approach to reliably prove your problem to be caused by ABE.

Using Network trace analysis

Looking at CPU utilization shows us the server-side problem. We must use other measures to determine what the client-side impact is; one approach is to take a network trace and analyze the SMB/SMB2 service response times. You may, however, end up having to capture the trace on a mirrored switch port. To make analysis of this a bit easier, Message Analyzer has an SMB Service Performance chart you can use. You get there by using a New Viewer, like below.
Wireshark also has a feature that provides you with statistics, under Statistics -> Service Response Times -> SMB2. Ignore the values for ChangeNotify (it's normal that they are several seconds or even minutes). All other response times translate into delays for the clients. If you see values over a second, you can consider your file service not only slow but outright broken.

While you have that trace in front of you, you can also look for SMB/TCP connections that are terminated abnormally by the client because the server failed to respond to the SMB requests in time. If you have any of those, then you have clients unable to connect to your file service, likely throwing error messages.

Using Performance Monitor

If your server is running Windows Server 2012 or newer, the following performance counters are available. Most noticeable here is the Avg. sec/Request counter, as this contains the response time to the QUERY_DIRECTORY requests (Wireshark displays them as Find requests). The other values will suffer from a lack of CPU cycles in varying ways, but all indicate delays for the clients. As mentioned in the first part: we expect single-digit millisecond response times from non-ABE file servers that are performing well. For ABE-enabled servers (more precisely, shares) the values for QUERY_DIRECTORY / Find requests will always be higher, due to the inevitable length of the ABE calculation. When you reach a state where all the SMB requests aside from QUERY_DIRECTORY are constantly responded to in less than 10ms, and QUERY_DIRECTORY constantly in less than 50ms, you have a very well performing server with ABE.

Other Symptoms

There are other symptoms of ABE problems that you may observe; however, none of them on their own is very telling without the information from the points above. At first glance, high CPU utilization and a high Processor Queue Length are indicators of an ABE problem, but they are also indicators of other CPU-related performance issues.
Not to mention there are cases where you encounter ABE performance problems without saturating all your CPUs. The Server Work Queues\Active Threads (NonBlocking) counter will usually rise to its maximum allowed limit (MaxThreadsPerQueue), and the Server Work Queues\Queue Length will increase as well. Both indicate that the file server is busy, but on their own they don't tell you how bad the situation is. However, there are scenarios where the file server will not use up all the worker threads allowed, due to a bottleneck somewhere else, such as in the disk subsystem or in the CPU cycles available to it.

Track the following values should you choose to set up long-term monitoring (which you should) in order to get some trends:

- Number of Objects per Directory or Number of DFS Links
- Number of Peak User Requests (performance counter: Requests / sec.)
- Peak server response time to Find requests, or performance counter: Avg. sec/Request
- Peak CPU Utilization and Peak Processor Queue Length

If you collect those values every day (or at a shorter interval), you get a pretty good picture of how much headroom your servers have left at the moment, and whether there are trends that you need to react to. Feel free to add more information to your monitoring to get a better picture of the situation. For example: gather information on how many DFS servers were active on any given day for a certain site, so you can explain whether unusually high numbers of user requests on the other servers come from a server downtime.

ABELevel

Some of you might have heard about the registry key ABELevel. The ABELevel value specifies the maximum level of the folders on which the ABE feature is enabled. While the title of the KB sounds very promising, and the hotfix is presented as a "Resolution", the hotfix and registry value have very little practical application. Here's why: ABELevel is a system-wide setting and does not differentiate between different shares on the same server.
If you host several shares, you are unable to filter to different depths, as the setting forces you to go for the deepest folder hierarchy. This results in unnecessary filter calculations for shares. Usually the widest directories are on the upper levels, those levels that you need to filter. Disabling the filtering for the lower-level directories doesn't yield much of a performance gain, as those small directories don't have much impact on server performance, while the big top-level directories do. Furthermore, the registry value doesn't make any sense for DFS namespaces, as you have only one folder level there, and you should avoid filtering on your file servers anyway.

While we are talking about updates, here is one that you should definitely install:

- High CPU usage and performance issues occur when access-based enumeration is enabled in Windows 8.1 or Windows 7

Furthermore, you should review the lists of recommended updates for your server components:

- DFS (2008 / 2008 R2) (2012 / 2012 R2)
- File Services (2008 / 2008 R2) (2012 / 2012 R2)

Well then, this concludes this small (my first) blog series. I hope you found reading it worthwhile and got some input for your infrastructures out there.

With best regards,
Hubert
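As a closing footnote, the back-of-the-envelope arithmetic from the backlog discussion earlier in the post (clients per server, duration per ABE calculation, CPU cores) can be sketched in a few lines. All numbers below are hypothetical examples, not measurements, and the thresholds are the 10ms/50ms figures quoted in the monitoring section:

```python
# Back-of-the-envelope model of the CHANGE_NOTIFY storm described above.
# All inputs are hypothetical examples; plug in your own measurements.

def drain_time_seconds(clients, calc_ms, cores):
    """Time to work through a burst of simultaneous QUERY_DIRECTORY
    requests, if every request costs one ABE filter calculation."""
    total_cpu_ms = clients * calc_ms
    return total_cpu_ms / cores / 1000.0

# 400 clients per server, 30 ms per ABE calculation, 8 cores:
burst = drain_time_seconds(clients=400, calc_ms=30, cores=8)

# Thresholds from the monitoring section: non-QUERY_DIRECTORY SMB
# requests under 10 ms, QUERY_DIRECTORY (Find) requests under 50 ms.
def healthy(avg_request_ms, avg_find_ms):
    return avg_request_ms < 10 and avg_find_ms < 50

print(round(burst, 2))   # 1.5 -> seconds of backlog from one notify storm
print(healthy(4, 35))    # True -> well-performing ABE server
print(healthy(4, 180))   # False -> clients will notice delays
```

Doubling the namespace size (and with it calc_ms) or halving the core count doubles the drain time, which is exactly the trade-off the sizing advice in this series is about.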
https://docs.microsoft.com/en-us/archive/blogs/askds/access-based-enumeration-abe-troubleshooting-part-2-of-2
OpenTK First Person Camera
Posted Saturday, 2 January, 2010 - 05:12 by flopoloco

```csharp
using System;
using System.Drawing;
using OpenTK;
using OpenTK.Graphics.OpenGL;
using OpenTK.Input;

namespace OpenTKCameraPort
{
    class Program : GameWindow
    {
        private float cameraSpeed = 5f;
        private Matrix4 cameraMatrix;
        private float[] mouseSpeed = new float[2];

        public Program() : base(1024, 768)
        {
            GL.Enable(EnableCap.DepthTest);
        }

        protected override void OnLoad(EventArgs e)
        {
            base.OnLoad(e);
            cameraMatrix = Matrix4.Translation(0f, -10f, 0f);
        }

        protected override void OnRenderFrame(FrameEventArgs e)
        {
            base.OnRenderFrame(e);
            GL.MatrixMode(MatrixMode.Modelview);
            GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
            GL.LoadMatrix(ref cameraMatrix);

            // Display some planes
            for (int x = -10; x <= 10; x++)
            {
                for (int z = -10; z <= 10; z++)
                {
                    GL.PushMatrix();
                    GL.Translate((float)x * 5f, 0f, (float)z * 5f);
                    GL.Begin(BeginMode.Quads);
                    GL.Color3(Color.Red);    GL.Vertex3( 1f, 4f, 0f);
                    GL.Color3(Color.Orange); GL.Vertex3(-1f, 4f, 0f);
                    GL.Color3(Color.Brown);  GL.Vertex3(-1f, 0f, 0f);
                    GL.Color3(Color.Maroon); GL.Vertex3( 1f, 0f, 0f);
                    GL.End();
                    GL.PopMatrix();
                }
            }

            SwapBuffers();
        }

        protected override void OnUpdateFrame(FrameEventArgs e)
        {
            if (Keyboard[Key.W])
                cameraMatrix = Matrix4.Mult(cameraMatrix, Matrix4.Translation(0f, 0f, 10f * (float)e.Time));
            if (Keyboard[Key.S])
                cameraMatrix = Matrix4.Mult(cameraMatrix, Matrix4.Translation(0f, 0f, -10f * (float)e.Time));
            if (Keyboard[Key.A])
                cameraMatrix = Matrix4.Mult(cameraMatrix, Matrix4.Translation(10f * (float)e.Time, 0f, 0f));
            if (Keyboard[Key.D])
                cameraMatrix = Matrix4.Mult(cameraMatrix, Matrix4.Translation(-10f * (float)e.Time, 0f, 0f));

            mouseSpeed[0] *= 0.9f;
            mouseSpeed[1] *= 0.9f;
            mouseSpeed[0] += Mouse.XDelta / 100f;
            mouseSpeed[1] += Mouse.YDelta / 100f;
            cameraMatrix = Matrix4.Mult(cameraMatrix, Matrix4.RotateY(mouseSpeed[0] * (float)e.Time));
            cameraMatrix = Matrix4.Mult(cameraMatrix, Matrix4.RotateX(mouseSpeed[1] * (float)e.Time));

            if (Keyboard[Key.Escape])
                Exit();
        }

        public static void Main(string[] args)
        {
            using (Program p = new Program())
            {
                p.Run();
            }
        }
    }
}
```

Hello, I would like some help, please. I have made this first person camera example, but I have a few questions:

1. How can I set the cursor position to a fixed position? I tried centering the cursor to the middle of the screen:

   System.Windows.Forms.Cursor.Position = new Point(Bounds.Left + (Bounds.Width / 2), Bounds.Top + (Bounds.Height / 2));

   But after that, OpenTK can not get the mouse delta.

2. On `cameraMatrix = Matrix4.Translation(0f, -10f, 0f);` I set -10 for setting the camera above the planes (I guess it would be better to set it to positive for up and negative for down).

3. How can I prevent Z rotation rolling? (I want to make it a turntable rotation.)

Re: OpenTK First Person Camera

Rather pretty test program. :-) To do turntable rotation (aka cylindrical coordinates), you're going to need a few more numbers to keep track of. As you're doing it now, you're doing all your rotation in the coordinate system of the camera; you have to do your rotation in the world coordinate system, and then tell your camera where that is (hope that makes sense). As it is now, you have no absolute coordinates... every translation and rotation you make on your camera is going to be relative to the camera itself. So, if you're going to define a coordinate system, you can do it however you want... but yes, people usually use a positive number to mean "up".

Try out the following code:

After all my fancy words I got frustrated with the camera wobbling around when I tried to Do It Right, so I cheat and just make a point representing where I want the camera to face and aim the view at it with Matrix4.LookAt(). ;-) It works though, so I'm happy with it. I can't help you with the input issues I'm afraid; I'm still learning that part of OpenTK myself.
I have to keep the mouse right up near the top of the window to keep it facing in a reasonable direction, but other than that the rotation seems to work. MonoDevelop tells me that OpenTK.Input.MouseDevice.XDelta and YDelta are obsolete, and that I should be using the OpenTK.Input.MouseMoveEventArgs.Delta property with the OpenTK.Input.MouseDevice.Move event; you might have better luck with that.

Re: OpenTK First Person Camera

Using System.Windows.Forms.Cursor.Position to move the pointer registers as a real mouse movement that will raise a MouseDevice.Move event and will update the X, Y properties and corresponding deltas. Since you wish to ignore this mouse movement, you'll have to generate the deltas yourself:

Re: OpenTK First Person Camera

I found that the solution provided by IceFox was incomplete. A few fixes/adjustments that helped me: it didn't seem like a good idea to shift to using an event model for the key presses as well, although obviously in a real application it would simplify other behaviors. It also seemed like the OpenTK GameWindow should probably have OnMouseMove and OnKeyPress event handlers built in; not sure. And here's my code:

Re: OpenTK First Person Camera

I expanded on xandemon's adjustments to IceFox's solution... As an avid gamer I personally don't use inverted mouse controls; I find it's too confusing when in serious gaming mode. To do that, this is what I changed.
From:
To:
I also bumped up the WASD movement... The code was sound there, but I felt the original speed was too slow, so I bumped it from 0.1f to 0.5f. I also incorporated Fiddler's portion of code to "lock" the mouse pointer to the center of the screen. It runs great; however, I keep getting the following warning:

(73,6): warning CS0414: The private field `OpenTKCameraPort.Program.pointer_delta' is assigned but its value is never used

I commented some of the changes and the line the warning is being thrown at. So here's the code...
Re: OpenTK First Person Camera

Hey guys, I've been playing around with camera movement, and so I thought I'd grab and use this code. I've noticed something QUITE odd - every time I compile this, without any changes to the original code, rotation seems to lock up on ALL axes randomly. It happens like this: compile, move the mouse to cause rotation, everything is fine. Recompile (no changes to code), move the mouse, and suddenly horizontal rotation is very limited and vertical rotation nearly non-existent. Recompile (no changes to code) - flip a coin. It could be fixed or it could still be locking. At first I thought this was gimbal lock, but I thought that only occurred on ONE axis, not two. Have you guys seen this with the code above?

Re: OpenTK First Person Camera

Well, I don't seem to have the exact symptoms you describe, but it certainly isn't working correctly for me. I start it up and any rotations I try to make with the mouse immediately slew back to a centered view. This behavior stops when I comment out UpdateMousePosition() on line 88, but then the mouse isn't bounded to the window, and the program gets confused for a second when I move the mouse pointer out of one side of the window and back in from the other side. Alas, I still haven't dug into OpenTK's mouse handling code. I suspect that assigning to System.Windows.Forms.Cursor.Position creates a MouseMoveEvent that triggers OnMouseMove(). The input code is a little wonky anyway; pointer_delta and such, assigned to by UpdateMousePosition(), are never used. I'm going to see if I have time to rip the guts out and make sense of it.

Re: OpenTK First Person Camera

It turns out the System mouse stuff works in screen coordinates, while the OpenTK Mouse.X and Mouse.Y work in window coordinates. Getting the mouse coordinate offset is easy, but figuring out the correct coordinate to give to System.Windows.Forms.Cursor.Position is a bit more tricky, since the OpenTK and System coordinates don't seem to quite line up correctly.
In the end I sort of hack it into place just by undoing whatever movement I detect, but it seems to work. Does anyone have problems making it work?

Re: OpenTK First Person Camera

You can use PointToScreen and PointToClient to convert between screen/client coordinates.

Re: OpenTK First Person Camera

This still doesn't work for me - here's why: this code takes the mouse's current cursor position, no matter where it is in the window, and uses it to figure out what the change has been. Then we set the cursor back to the center of the window, which technically makes a new delta, so the next time we come through UpdateFrame it will grab the new position, which is at the center of the window, thus forcing the camera back the way it was just moved. To solve this, we need to find a way to move the mouse cursor to the center of the screen without changing our deltas. Since we are programmatically updating the cursor position here, forcing a change that is not made by physical mouse movement, it would make sense to use the OnMouseMove event to solve this problem - except it seems that the framework counts setting the cursor position as an OnMouseMove event, even though it was not a physical movement that fired it. I'm not sure how to proceed here - there has to be a way to tell if the mouse was PHYSICALLY moved. Back before managed code, I would write ASM to capture the hardware interrupt of the mouse, and that would for SURE tell me that the physical mouse had been moved. There has to be something similar in the managed world, but I am at a loss.

Re: OpenTK First Person Camera

Yeah, Windows posts WM_MOUSEMOVE even for artificial movement. The solution right now is to not use the MouseMove event at all, but rather read the mouse position directly every frame. This is a simple window vs screen coordinates mismatch: mouse events are reported in window coordinates, whereas System.Windows.Forms.Cursor.Position works with screen coordinates.
The solution is simple: pick one coordinate system and convert everything to it:

In any case, I'd recommend against using MouseMove events for camera movement. MouseMove events are useful for GUIs but little else (which is why e.g. XNA doesn't even provide them); camera movement should read the mouse position directly and calculate deltas as necessary. Below is how I am moving the camera:
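For reference, the per-frame approach Fiddler describes might look roughly like the sketch below, adapted to the camera code at the top of the thread. This is only an illustration, not the poster's actual code (which did not survive in this copy of the thread): the yaw, pitch, cameraPos, centreX and centreY fields are assumed, and rebuilding the view matrix from absolute angles each frame is also what prevents the roll asked about in question 3.

```csharp
// Sketch only (assumed fields: float yaw, pitch; Vector3 cameraPos;
// int centreX, centreY set to the window centre in screen coordinates).
protected override void OnUpdateFrame(FrameEventArgs e)
{
    // Read the absolute cursor position and derive the delta ourselves,
    // instead of relying on MouseMove events.
    System.Drawing.Point cursor = System.Windows.Forms.Cursor.Position;
    int deltaX = cursor.X - centreX;
    int deltaY = cursor.Y - centreY;

    // Accumulate absolute angles and clamp pitch so the view can't flip.
    yaw   += deltaX * 0.005f;
    pitch += deltaY * 0.005f;
    pitch  = Math.Max(-1.5f, Math.Min(1.5f, pitch));

    // Rebuild the view matrix from scratch each frame; because we never
    // multiply incremental rotations together, no roll can creep in.
    cameraMatrix = Matrix4.Mult(
        Matrix4.Translation(-cameraPos.X, -cameraPos.Y, -cameraPos.Z),
        Matrix4.Mult(Matrix4.RotateY(yaw), Matrix4.RotateX(pitch)));

    // Warp the pointer back to the centre; since the deltas are computed
    // against centreX/centreY ourselves, the artificial move is
    // automatically ignored next frame.
    System.Windows.Forms.Cursor.Position =
        new System.Drawing.Point(centreX, centreY);
}
```

In a full program, centreX/centreY would be recomputed from the window bounds (e.g. in OnResize), and the base.OnUpdateFrame(e) call plus the WASD translation handling from the original listing would still be needed.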
http://www.opentk.com/node/1492
Every class exported by the ns3 library is enclosed in the ns3 namespace.

This class represents a record (handled by the SnrToBlockErrorRate manager) that keeps a mapping between an SNR value and its corresponding (1) Bit Error Rate, (2) Block Error Rate, (3) Standard deviation, (4 and 5) confidence interval.

This class handles the SNR to BlcER traces. Trace files should be formatted as follows:

SNR_value(1) BER(1) Blc_ER(1) STANDARD_DEVIATION(1) CONFIDENCE_INTERVAL1(1) CONFIDENCE_INTERVAL2(1)
SNR_value(2) BER(2) Blc_ER(2) STANDARD_DEVIATION(2) CONFIDENCE_INTERVAL1(2) CONFIDENCE_INTERVAL2(2)
...
SNR_value(n) BER(n) Blc_ER(n) STANDARD_DEVIATION(n) CONFIDENCE_INTERVAL1(n) CONFIDENCE_INTERVAL2(n)

Implementation of the Minstrel Rate Control Algorithm.

Information elements typically come in groups, and the WifiInformationElementVector class provides a representation of a series of IEs, and the facility for serialisation to and deserialisation from the over-the-air format.

The IEEE 802.11 standard includes the notion of Information Elements, which are encodings of management information to be communicated between STAs in the payload of various frames of type Management. Information Elements (IEs) have a common format, each starting with a single octet - the Element ID, which indicates the specific type of IE (a type to represent the options here is defined as WifiInformationElementId). The next octet is a length field and encodes the number of octets in the third and final field, which is the IE Information field.

The class ns3::WifiInformationElement provides a base for classes which represent specific Information Elements.
This class defines pure virtual methods for serialisation (ns3::WifiInformationElement::SerializeInformationField) and deserialisation (ns3::WifiInformationElement::DeserializeInformationField) of IEs, from or to data members or other objects that simulation objects use to maintain the relevant state. This class also provides an implementation of the equality operator, which operates by comparing the serialised versions of the two WifiInformationElement objects concerned.

The implementation of the public static-based API, which calls into the private implementation through the simulation singleton.

This file contains an implementation of the HighPrecision class. Each instance of the Time class also contains an instance of this class, which is used to perform all the arithmetic operations of the Time class. This code is a bit ugly, with a lot of inline methods for speed: profiling this code on anything but the simplest scenarios shows that it is a big bottleneck if great care in its implementation is not taken. My observations are that what dominates are Division operations (they are really, really costly) and Comparison operations (because there are typically a lot of these in any complex timekeeping code).

Map of Ipv4Address to Ipv4Route.
Map of Ipv4Address to NixVector.

This method is used by the PHY to notify the MAC that a previously started TX attempt has terminated without success.
This method is used by the PHY to notify the MAC that a previously started TX attempt has been successfully completed.
This method is used by the PHY to notify the MAC that an RX attempt is being started, i.e., a valid signal has been recognized by the PHY.
This method is invoked by the PHY to notify the MAC that the transmission of a given packet has been completed.
This method allows the MAC to instruct the PHY to start a transmission of a given packet.

Data structure for a Sample Rate table.
A vector of a vector of uint32_t.

This type is used to represent an Information Element ID. An enumeration would be tidier, but doesn't provide for the inheritance that is currently preferable to cleanly support pre-standard modules such as mesh. Maybe there is a nice way of doing this with a class. Until such time as a better way of implementing this is dreamt up and applied, developers will need to be careful to avoid duplication of IE IDs in the defines below (and in files which declare "subclasses" of WifiInformationElement). Sorry.

In various parts of the code, folk are interested in maintaining a list of transmission modes. The vector class provides a good basis for this, but we here add some syntactic sugar by defining a WifiModeList type, and a corresponding iterator.

Headers for Block Ack request and response. The 802.11n standard includes three types of Block Ack:

Types of ethernet packets. Indicates the type of the current header.

Used in Messages to determine whether it contains IPv4 or IPv6 addresses.

As per IEEE Std. 802.11-2007, Section 6.1.1.1.1, when EDCA is used the Traffic ID (TID) value corresponds to one of the User Priority (UP) values defined by the IEEE Std. 802.1D-2004, Annex G, table G-2. Note that this correspondence does not hold for HCCA, since in that case the mapping between UPs and TIDs should be determined by a TSPEC element as per IEEE Std. 802.11-2007, Section 7.3.2.30.

This enumeration defines the various convolutional coding rates used for the OFDM transmission modes in the IEEE 802.11 standard. DSSS (for example) rates which do not have an explicit coding stage in their generation should have this parameter set to WIFI_CODE_RATE_UNDEFINED.

This enumeration defines the modulation classes per IEEE 802.11-2007, Section 9.6.1, Table 9-2.
Current state of the channel.

Fallback breakpoint function. This function is used by the NS_BREAKPOINT() macro as a fallback for when breakpoint assembly instructions are not available. It attempts to halt program execution either by raising SIGTRAP on unix systems, or by dereferencing a null pointer. Normally you should not call this function directly.

This function includes the name of the object, pointer, vector or vector item in the first column.
This function writes the attribute or typeid name in column 0.
This function includes the name of the attribute or the editable value in the second column.
This function writes data in the second column; this data is going to be editable if it is a NODE_ATTRIBUTE.

This is the callback called when the value of an attribute is changed. This function is called whenever there is a change in the value of an attribute. If the input value is ok, it will be updated in the default value and in the GUI; otherwise, it won't be updated in either.

This function displays the tooltip for an object, pointer, vector item or an attribute. It is used to display a tooltip whenever the user puts the mouse over a type ID or an attribute. It will give the type and the possible values of an attribute value, and the type of the object for an attribute object or a typeID object.

Delete the tree model contents.

This method invokes the copy constructor of the input object and returns the new instance.

This is the main view: it opens the widget, gets tooltips and draws the tree of attributes.

This allocates an object on the heap and initializes it with a set of attributes.

Exit the application. Exit the window when the exit button is pressed.

Create ns3::Time instances in units of femtoseconds.
For example:
Time t = FemtoSeconds (2);
Simulator::Schedule (FemtoSeconds (5), ...);

This function gets the column number (0 or 1) from the mouse click.

If the user presses the load button, it will load the config file into memory.

Create a TraceSourceAccessor which will control access to the underlying trace source. This helper template method assumes that the underlying type implements a statically-polymorphic set of Connect and Disconnect methods, and creates a dynamic-polymorphic class to wrap the underlying static-polymorphic class.

Create ns3::Time instances in units of microseconds. For example:
Time t = MicroSeconds (2);
Simulator::Schedule (MicroSeconds (5), ...);

Create ns3::Time instances in units of milliseconds. For example:
Time t = MilliSeconds (2);
Simulator::Schedule (MilliSeconds (5), ...);

Create ns3::Time instances in units of nanoseconds. For example:
Time t = NanoSeconds (2);
Simulator::Schedule (NanoSeconds (5), ...);

Create an ns3::Time instance which contains the current simulation time. This is really a shortcut for the ns3::Simulator::Now method. It is typically used as shown below to schedule an event which expires at the absolute time "2 seconds":
Simulator::Schedule (Seconds (2.0) - Now (), &my_function);

Defined for use in UanMacRcGw.

Create ns3::Time instances in units of picoseconds. For example:
Time t = PicoSeconds (2);
Simulator::Schedule (PicoSeconds (5), ...);

This is the action done when the user presses the save button: it will save the config to a file.

Create ns3::Time instances in units of seconds. For example:
Time t = Seconds (2.0);
Simulator::Schedule (Seconds (5.0), ...);

Compare two double precision floating point numbers and declare them equal if they are within some epsilon of each other.
Approximate comparison of floating point numbers near equality is trickier than one may expect and is well-discussed in the literature. Basic strategies revolve around a suggestion by Knuth to compare the floating point numbers as binary integers, supplying a maximum difference between them. This max difference is specified in Units in the Last Place (ulps) or a floating point epsilon. This routine is based on the GNU Scientific Library function gsl_fcmp.

Maximum MPI message size, for easy buffer creation.
https://www.nsnam.org/docs/release/3.9/doxygen/namespacens3.html
Note: This post contains code that cannot be run using an online compiler. Please make sure that you have Python 2.7 and the cv2 module installed before trying to run the programs on your system.

Hi everyone! I read a brilliant work by Aditya Prakash - OpenCV C++ Program to blur an image, so I decided to come up with something similar, but this time in Python. So, here is a very simple program with basically the same result.

# Python Program to blur image
import cv2  # This will give an error if you don't have the cv2 module
img = cv2.imread('bat.jpg')  # bat.jpg is the batman image.
# Make sure that you have saved it in the same folder
blurImg = cv2.blur(img, (10, 10))  # You can change the kernel size as you want
cv2.imshow('blurred image', blurImg)
cv2.waitKey(0)
cv2.destroyAllWindows()

Now, the program above is using an image blurring technique called Averaging. There are some other options available as well - Gaussian Blurring, Median Blurring, Bilateral Filtering. Let's make a couple of additions to our program and compare the results.

import cv2  # This will give an error if you don't have the cv2 module
img = cv2.imread('bat.jpg')  # bat.jpg is the batman image.
# Make sure that you have saved it in the same folder

# Averaging
avging = cv2.blur(img, (10, 10))  # You can change the kernel size as you want
cv2.imshow('Averaging', avging)
cv2.waitKey(0)

# Gaussian Blurring
gausBlur = cv2.GaussianBlur(img, (5, 5), 0)  # Again, you can change the kernel size
cv2.imshow('Gaussian Blurring', gausBlur)
cv2.waitKey(0)

# Median blurring
medBlur = cv2.medianBlur(img, 5)
cv2.imshow('Median Blurring', medBlur)
cv2.waitKey(0)

# Bilateral Filtering
bilFilter = cv2.bilateralFilter(img, 9, 75, 75)
cv2.imshow('Bilateral Filtering', bilFilter)
cv2.waitKey(0)

cv2.destroyAllWindows()

Hope you enjoyed the post! Auf Wiedersehen!

About the author: Vishwesh Shrimali on GeeksforGeeks.
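A quick addendum (mine, not part of the original post): what cv2.blur does is plain averaging - each output pixel becomes the mean of the pixels inside the kernel window. The pure-Python (Python 3) sketch below shows that idea on a tiny hypothetical 3x3 grayscale "image", with no OpenCV needed; border pixels simply average over the part of the window that fits (OpenCV's border modes differ, so treat this as an illustration only).

```python
def box_blur(img, k):
    """Average each pixel over a k x k window, shrinking the window at
    the borders. Mirrors the idea behind cv2.blur(img, (k, k))."""
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0, 0
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

# A tiny made-up "image": one bright pixel on a dark background.
img = [[0, 0, 0],
       [0, 90, 0],
       [0, 0, 0]]
blurred = box_blur(img, 3)
print(blurred[1][1])  # the spike is spread out: 90 / 9 = 10.0
```

On a real image, cv2.blur(img, (3, 3)) computes this same kind of windowed mean for every pixel and channel, just far faster.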
http://126kr.com/article/6kao0jvkrxm
Introduction

XML, or eXtensible Markup Language, is a useful tool for describing all sorts of data. Take a look at this example:

<?xml version="1.0" encoding="UTF-8"?>
<devshed>
 <categories>
  <python>
   <article>
    <title>MySQL Connectivity With Python</title>
    <author>Icarus</author>
   </article>
   <article>
    <title>Python UnZipped</title>
    <author>Mark Lee Smith</author>
   </article>
  </python>
  <php>
   <article>
    <title>Writing Clean and Efficient PHP Code</title>
    <author>David Fells</author>
   </article>
   <article>
    <title>Using the PHP Crypt Function</title>
    <author>Chris Root</author>
   </article>
  </php>
 </categories>
</devshed>

We describe four articles in the DevShed backend. We start with a tag named devshed. Inside it, the tag categories contains two names, python and php. Inside each of these are article tags, each containing a title tag and an author tag. With this system, we are able to describe the data contained very easily and with no database. The data can be modified later with little effort and by someone with little experience. However, we are faced with the problem of interpreting the data embedded within the XML. We need something to parse it with. Python includes a few tools that can aid us in parsing the data, and we'll take a look at them in this article, learning about them through example.

Organizing a Book Collection

Let's say we want to organize a book collection using XML to describe it all. We don't need anything fancy. We only need to store the title, author, and genre of the book. Let's go ahead and create the markup for a few books:

<?xml version="1.0" encoding="UTF-8"?>
<collection>
 <book>
  <title>The Once and Future King</title>
  <author>T.H. White</author>
  <genre>Fantasy</genre>
 </book>
 <book>
  <title>The Curse of Chalion</title>
  <author>Lois McMaster Bujold</author>
  <genre>Fantasy</genre>
 </book>
 <book>
  <title>Paladin of Souls</title>
  <author>Lois McMaster Bujold</author>
  <genre>Fantasy</genre>
 </book>
 <book>
  <title>Alas, Babylon</title>
  <author>Pat Frank</author>
  <genre>Fiction</genre>
 </book>
 <book>
  <title>Rifles for Wattie</title>
  <author>Harold Keith</author>
  <genre>Fiction</genre>
 </book>
</collection>

Now we're left with parsing the data and turning it into something presentable. If you examine the way the data is stored, you will notice that it is similar to a dictionary in Python. Therefore, a dictionary would be an ideal type to store the data in. We'll create a chunk of code that does just this. SAX, the Simple API for XML, will be used for this project. It is contained in xml.sax:

import xml.sax

# Create a collection list
collection = []

# This handles the parsing of the content
class HandleCollection ( xml.sax.ContentHandler ):

   def __init__ ( self ):
      self.book = {}
      self.title = False
      self.author = False
      self.genre = False

   # Called at the start of an element
   def startElement ( self, name, attributes ):
      if name == 'title':
         self.title = True
      elif name == 'author':
         self.author = True
      elif name == 'genre':
         self.genre = True

   # Called at the end of an element
   def endElement ( self, name ):
      if name == 'book':
         collection.append ( self.book )
         self.book = {}
      elif name == 'title':
         self.title = False
      elif name == 'author':
         self.author = False
      elif name == 'genre':
         self.genre = False

   # Called to handle content besides elements
   def characters ( self, content ):
      if self.title:
         self.book [ 'title' ] = content
      elif self.author:
         self.book [ 'author' ] = content
      elif self.genre:
         self.book [ 'genre' ] = content

# Parse the collection
parser = xml.sax.make_parser()
parser.setContentHandler ( HandleCollection() )
parser.parse ( 'collection.xml' )

As you can see, there's really not much work involved.
All we have to do is write the instructions that organize each book into a dictionary and put all the dictionaries into the collection list. We start by subclassing xml.sax.ContentHandler. The class we create is charged with handling the content of the document we parse. In our class's __init__ method, we define a few variables. The book dictionary will, of course, house the book's information. The title variable will be used by the characters method to determine whether we are dealing with the title tag's content. The same goes for the author variable and the genre variable. These are set to True in startElement if we're dealing with that particular element. They are then set to False when we have finished using them in endElement. Finally, we instruct Python to parse the file in the last three lines.

We are now free to present this information to the user in whichever way we see fit. For example, if we wanted to just output the book information without dressing it up too much, we could simply append some code to the above script that sorts through the list of dictionaries that the script creates:

for book in collection:
   print
   print 'Title: ', book [ 'title' ]
   print 'Author: ', book [ 'author' ]
   print 'Genre: ', book [ 'genre' ]

Describing a Music Library

While the above XML structure is fine for many things, compacting things into attributes is often very helpful and a much better idea. Let's consider a music library (consisting of individual songs rather than whole albums). Instead of creating individual tags for the album the song comes from, the artist of the song, the name of the song and the length of the song, which would get tiresome to type, we could simply create attributes for each of these properties.
Here’s an XML file that does this: <?xml version=”1.0″ encoding=”UTF-8″?> <library> <track artist=’Boston’ album=’Greatest Hits’ time=’5:04’>Peace of Mind</track> <track artist=’Dire Straits’ album=’Sultans Of Swing-Best of Dire Straits’ time=’5:50’>Sultans of Swing</track> <track artist=’Dire Straits’ album=’Sultans Of Swing-Best of Dire Straits’ time=’4:12’>Walk of Life</track> <track artist=’The Eagles’ album=’The Very Best of The Eagles’ time=’3:33’>Take It Easy</track> <track artist=’Gary Allan’ album=’Best I Ever Had’ time=’4:18’>Best I Ever Had</track> <track artist=’Goo Goo Dolls’ album=’Dizzy Up the Girl’ time=’4:50’>Iris</track> <track artist=’Kansas’ album=’The Ultimate Kansas’ time=’3:24’>Dust in the Wind</track> <track artist=’Toby Keith’ album=’Greatest Hits 2′ time=’3:27’>How Do You Like Me Now?!</track> <track artist=’Toby Keith’ album=’Greatest Hits 2′ time=’3:15’>Courtest of the Red, White and Blue</track> <track artist=’ZZ Top’ album=’ZZ Top-Greatest Hits’ time=’4:03’>Got Me Under Pressure</track> </library> If we had given every property of the song its own tag, then the file would have been much longer than it is now. However, we are able to reduce the length of the file by putting properties in attributes. Now, however, we are left with the task of parsing the data. The process is similar to what we did above, but there are some differences since we are dealing with attributes this time around. 
Here’s how it all works: import xml.sax # Create a class to handle the contents of the XML file class HandleLibrary ( xml.sax.ContentHandler ): def __init__ ( self ): self.artist = ” self.album = ” self.time = ” # Handle the start of an element def startElement ( self, name, attributes ): # Check to see if it is a “track” element # If so, store the attributes if name == ‘track': self.artist = attributes.getValue ( ‘artist’ ) self.album = attributes.getValue ( ‘album’ ) self.time = attributes.getValue ( ‘time’ ) # Handle content def characters ( self, content ): # If the content isn’t a newline or blank space, then we have the track name # Print out all the track info if ( content != ‘n’ ) and ( content.replace ( ‘ ‘, ” ) != ” ): print print ‘Track: ‘ + content print ‘Artist: ‘ + self.artist print ‘Album: ‘ + self.album print ‘Length: ‘ + self.time # Parse the file parser = xml.sax.make_parser() parser.setContentHandler ( HandleLibrary() ) parser.parse ( ‘music.xml’ ) It’s not a very complex script, and it’s not very lengthy. It strongly resembles the previous script, but note that we choose not to store anything in a dictionary. Rather, we just dump it all out to the user as we receive it. The attributes variable in the startElement method is an object representing all the attributes of that tag. We then access the attributes by name with the getValue method, saving the values in variables that we print out in the characters method. That’s all there is to parsing attributes. What if, however, we do not know the names of the attributes? 
It isn’t too much of a problem, since we can loop through all the attributes and get their values: import xml.sax # Create a class to handle the XML class HandleLibrary ( xml.sax.ContentHandler ): def __init__ ( self ): self.attributes = None # Handle the beginning part of each tag def startElement ( self, name, attributes ): # Check to see if we’re dealing with a track tag # If we are, store the attributes if name == ‘track': self.attributes = attributes # Handle content def characters ( self, content ): if ( content != ‘n’ ) and ( content.replace ( ‘ ‘, ” ) != ” ): # Loop through each attribute and print the name and value print print ‘Track: ‘ + content for attribute in self.attributes.getNames(): print attribute [ 0 ].upper() + attribute [ 1: ] + ‘: ‘ + self.attributes.getValue ( attribute ) print # Parse it all parser = xml.sax.make_parser() parser.setContentHandler ( HandleLibrary() ) parser.parse ( ‘music.xml’ ) In the above script, we use the getNames method to retrieve a list of attribute names. We loop through the list and print each attribute’s name (with the first letter capitalized) and value. {mospagebreak title=The Document Object Model} SAX is not the only way to process XML data in Python. The Document Object Model exists, and it gives us an object-oriented interface to XML data. Python contains a library called minidom that provides a simple interface to DOM without a whole lot of bells and whistles. 
Let’s recreate our music library parser, making use of DOM this time: import xml.dom.minidom # Load the music library library = xml.dom.minidom.parse ( ‘music.xml’ ) # Get a list of the tracks tracks = library.documentElement.getElementsByTagName ( ‘track’ ) # Go through each track for track in tracks: # Print each track’s information print print ‘Track: ‘ + track.childNodes [ 0 ].nodeValue print ‘Artist: ‘ + track.attributes [ ‘artist’ ].nodeValue print ‘Album: ‘ + track.attributes [ ‘album’ ].nodeValue print ‘Length: ‘ + track.attributes [ ‘time’ ].nodeValue We end up with a very short script in the above example. We start by pointing Python to the file we wish to parse. Then we get all tags by the name of “track” that belong to the main tag. We loop through the list provided, printing out the information contained within. To access the text inside the element, we access the track’s list of childNodes. The text is stored in a node, and we print out the value of it. The attributes are stored in attributes, and we reference them by name, printing out nodeValue. Again, though, what if we don’t know everything we are going to parse? Let’s recreate the second example of the previous section using DOM: import xml.dom.minidom # Load the XML file library = xml.dom.minidom.parse ( ‘music.xml’ ) # Get a list of tracks tracks = library.documentElement.getElementsByTagName ( ‘track’ ) # Loop through the tracks for track in tracks: # Print the track name print print ‘Track: ‘ + track.childNodes [ 0 ].nodeValue # Loop through the attributes for attribute in track.attributes.keys(): print attribute [ 0 ].upper() + attribute [ 1: ] + ‘: ‘ + track.attributes [ attribute ].nodeValue We loop through the names of the attributes returned in the keys method, printing the name of the attribute out with the first letter capitalized. We then print the value of the attribute out by referencing the attribute by its key and then accessing nodeValue. 
Let’s say we have tags nested in each other. Consider our very first script that parsed a book collection. Let’s rebuild it with DOM: import xml.dom.minidom # Load the book collection collection = xml.dom.minidom.parse ( ‘collection.xml’ ) # Get a list of books books = collection.documentElement.getElementsByTagName ( ‘book’ ) # Loop through the books for book in books: # Print out the book’s information print print ‘Title: ‘ + book.getElementsByTagName ( ‘title’ ) [ 0 ].childNodes [ 0 ].nodeValue print ‘Author: ‘ + book.getElementsByTagName ( ‘author’ ) [ 0 ].childNodes [ 0 ].nodeValue print ‘Genre: ‘ + book.getElementsByTagName ( ‘genre’ ) [ 0 ].childNodes [ 0 ].nodeValue We load the collection file and then get a list of books. Then, we loop through the list of books and print out what’s wrapped inside of each tag, which is in the form of a child node that we must get the value of. It’s all very simple. DOM includes many more features, but all you need to simply read a document is contained within the minidom module. Conclusion XML is a useful tool for describing data. It can be used to describe just about anything –- from book collections and music libraries to user settings for an application. Python contains a few utilities that can be used to read and process XML data, namely SAX and DOM. SAX allows you to create handler classes that can process individual items within an XML document, and DOM abstracts the entire document, allowing you to easily navigate through a tree of XML data. Both tools are extremely simple to use and contribute to the phrase “batteries included” that is used to describe the Python language.
http://www.devshed.com/c/a/python/working-with-xml-documents-and-python/1/
Last updated on FEBRUARY 08, 2017

Applies to: Oracle Database - Enterprise Edition - Version 11.2.0.4 to 12.1.0.1 [Release 11.2 to 12.1]
Information in this document applies to any platform.

Symptoms

When an XMLTYPE contains namespaces not declared in the root element, as in the following example, a query against it may spin on CPU, and it is necessary to kill the session because the query does not complete, even though the execution plan suggests it should execute in fractions of a second.

Cause
https://support.oracle.com/knowledge/Oracle%20Database%20Products/1907124_1.html
I'm attempting to wrap my javascript/html application using Qt, and so far everything works fine; however, scrolling is unbearable. On both Windows and OS X, when scrolling it appears as if it takes several seconds for the scrolling to carry out. It is especially sluggish when I'm attempting to scroll through a DIV whose contents were computed dynamically with javascript (sometimes I scroll, count to ten, and it's still trying to compute what to do). Even if I run something like this, though:

@// import QtQuick 1.0 // to target S60 5th Edition or Maemo 5
import QtQuick 1.1
import QtWebKit 1.0

Rectangle {
    id: application
    width: 1014
    height: 500
    WebView {
        id: page
        html: "<iframe src=''></iframe>"
        preferredWidth: parent.width
        preferredHeight: parent.height
        scale: 1.0
        settings.localContentCanAccessRemoteUrls: true
    }
}
@

I'm hoping there's a fix for this; my Qt experience other than with this has been wonderful!
http://forum.qt.io/topic/16506/very-very-poor-scrolling-using-qtwebkit
Sets a processor affinity mask for the threads of the specified process.

BOOL WINAPI SetProcessAffinityMask(
  __in HANDLE hProcess,
  __in DWORD_PTR dwProcessAffinityMask
);

hProcess: A handle to the process whose affinity mask is to be set. This handle must have the PROCESS_SET_INFORMATION access right. For more information, see Process Security and Access Rights.

dwProcessAffinityMask: The affinity mask for the threads of the process. On a system with more than 64 processors, the affinity mask must specify processors in a single processor group.

Build date: 7/2/2009

Note from jkriegshauser: This code sample does not work as expected on multi-core processors (i.e. Core2 Duo/Quad, etc.). The reason is that the newer multi-core processors define the HTT flag as "hardware multithreading", and the logical processor count will include the cores even though Core2 Duo processors do not have HTT.
See the following Intel publications: Intel Processor Identification and the CPUID Instruction; Intel 64 and IA-32 Architectures Software Developer's Manual Vol 3A (section 7.10.2). Also, the original MSDN article mentioned in this comment can be found here:

The following code sample comes from "Juice Up Your App with the Power of Hyper-Threading", an MSDN article:

public void SetProcessAffinityToPhysicalCPUForHyperthreadOnly(int processid)
{
    int res;
    int hProcess;
    int ProcAffinityMask = 0, SysAffinityMask = 0;
    hProcess = OpenProcess(PROCESS_ALL_ACCESS, 0, processid);
    res = GetProcessAffinityMask(hProcess, ref ProcAffinityMask, ref SysAffinityMask);
    if (SysAffinityMask == 3)        // 1 proc, 2 logical CPUs
        res = SetProcessAffinityMask(hProcess, 1);
    else if (SysAffinityMask == 15)  // dual proc, 4 virtual CPUs
        res = SetProcessAffinityMask(hProcess, 3);
    res = CloseHandle(hProcess);
}

From the sample above, we see that the affinity mask is such that all physical processors come first in the mask, then the logical (Hyper-Threaded) processors. If your process is a heavy user of floating point instructions, setting the affinity mask to (number of processors/2) - 1 will make sure your threads give preference to the physical processors, which have an FPU. For a sample of how to detect whether Hyper-Threading is on in C/C++, you could use this:

__inline BOOL hyperThreadingOn()
{
    DWORD rEbx, rEdx;
    __asm {
        push eax       // save registers used
        push ebx
        push ecx
        push edx
        xor  eax, eax  // cpuid(1)
        add  al, 0x01
        _emit 0x0F
        _emit 0xA2
        mov  rEdx, edx // Features flags: bit 28 indicates if HTT (Hyper-Threading
                       // Technology) is available, but not if it is on; if on,
                       // count of logical processors > 1.
        mov  rEbx, ebx // Bits 23-16: count of logical processors.
                       // Valid only if the Hyper-Threading Technology flag is set.
        pop  edx       // restore registers used
        pop  ecx
        pop  ebx
        pop  eax
    }
    return (rEdx & (1 << 28)) && (((rEbx & 0x00FF0000) >> 16) > 1);
}

The above information should not be used under any circumstances.
There are 4 problems with the implementation itself, as well as a few problems with the concept as a whole.

Implementation problems:

1. With Hyper-Threading, each physical processor contains two logical processors. There is no distinction between a "physical" and "logical" processor. There are no special bits that correspond to a "physical" processor. There are only logical processors. It is not possible to "prefer a physical processor over a logical one" because that simply makes no sense. Each logical processor on the same physical chip shares the same FPU and provides access to the same resources. Even the title makes no sense... every logical processor is part of a real processor; you can't "prefer your real processors" because they are all your real processors. The example above fundamentally does not make sense. It will gain you nothing more than limiting a process to run on two completely arbitrary logical CPU's (which may or may not even be on the same physical CPU). It just doesn't work. Don't use it. This is the biggest problem.

2. Which CPUs correspond to which bits in the affinity mask is not guaranteed by the Win32 API at all, and should not be relied on (e.g. it is never correct to assume that a certain processor type comes first in the bit mask).

3. The example logic fails on machines without 2 or 4 logical CPU's (e.g., an 8-core Xeon server).

4. The Hyper-Threading detection "algorithm" is not "C or C++". It's MSVC-specific inline assembler. Another compiler, for example GCC, would not recognize it as-is. And the _emit's are highly eyebrow-raising.

Conceptual problems:

1. Unless you have complete control over the system, and over the other software on the system, and over how the user is running your particular piece of software, you are far more likely to hurt performance by setting the affinity mask yourself. The above poster is incorrect in stating that a thread "prefers" a certain logical processor. In fact, this function forces the threads to run on those processors, no matter what. While the threads will be able to run on any of the CPU's in the process affinity mask, this is still inviting problems. For example, if another piece of software has also unwisely set its affinity mask to the same logical CPU(s) as your software does, you're S.O.L., as you've bypassed the kernel's ability to make wise judgments about which CPU to run your application on, and forced your application to share busy CPU's with another process (e.g. a quad core system where two applications are forced to run on the same two cores... not too great). Also, what if you set the affinity mask, and the user runs your application twice? Again, bypassing the kernel's CPU scheduler forces your application to share busy CPU's with other processes while other CPU's sit idle.

2. Again, unless you have complete control over the system, making the assumption that the system you are on has a certain number of cores is never a good idea. At best, if you must set affinity masks, you can get information about which logical processors reside on the same physical processor using NUMA functions like GetNumaNodeProcessorMask and make judgments from there (rather than using unreliable hacks like assuming certain bits correspond to certain processors). At the very least, you can get the system affinity mask and count the bits if you want to count the number of logical processors and make decisions based on that. Ignoring the fact that the above technique fundamentally makes no sense (as mentioned above), even if it did make sense, it only "works" (I use the term loosely) in 2 very specific situations: an HT CPU with 2 logical processors, or an HT CPU with 4 logical processors. In all other cases, if HT is not present, or if there are not 2 or 4 logical processors, the affinity mask is not set. Inconsistent logic like that is never a recipe for optimization.

3. For the vast majority of applications, the kernel will always do a better job than you at making judgments about which logical CPU's to schedule a process and its threads on. By not setting the affinity masks at all, you immediately make the best use of all available CPU's on any system, regardless of system configuration or other running applications. On the other hand, by setting the affinity masks, you lock yourself into very specific CPU configurations, and risk forcing yourself into far-less-than-optimal situations on machines with other processes running on them or with moderately complex layouts (e.g. Windows automatically schedules to logical CPU's on separate physical CPU's first in layouts where that makes sense, and in general is assumed to schedule threads as optimally as possible given the system layout).

While there are certainly some valid reasons for setting the process affinity mask, the "technique" given above is fundamentally flawed in both concept and implementation, and should be disregarded. For those of you interested in actually learning more about the concept of physical and logical processors, multi-threading in general, and a bit about Hyper-Threading before attempting such "optimizations", here is a decent article about what goes on under the hood:

I also suggest actually reading the MSDN article linked to in the above post. It's a great article. Note how it correctly describes the difference between physical and logical processors. Also note that the kernel takes the system configuration into account and makes optimal decisions for you (e.g. by attempting to schedule processes on logical CPU's that reside on different physical CPU's first -- and doing that only in HT environments where it is appropriate).
JC

<DllImport("kernel32.dll", CharSet:=CharSet.Auto, SetLastError:=True)> _
Public Shared Function SetProcessAffinityMask(ByVal handle As SafeProcessHandle, ByVal mask As IntPtr) As Boolean
End Function

[DllImport("kernel32.dll", CharSet=CharSet.Auto, SetLastError=true)]
public static extern bool SetProcessAffinityMask(SafeProcessHandle handle, IntPtr mask);
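JC's closing advice -- get the system affinity mask and count its set bits rather than assume a topology -- is trivial to sketch. The example below is Python rather than Win32 C, purely to show the bit arithmetic; the mask values are made-up examples (on Windows you would obtain the real one from GetProcessAffinityMask):

```python
def count_logical_cpus(system_affinity_mask):
    # Each set bit in the mask corresponds to one logical processor;
    # make no assumption about which bit maps to which physical core.
    return bin(system_affinity_mask).count("1")

# Hypothetical mask: 8 logical processors (bits 0-7 set)
print(count_logical_cpus(0xFF))    # -> 8
# Hypothetical sparse mask: only bits 1 and 3 set
print(count_logical_cpus(0b1010))  # -> 2
```

This is the safe counterpart to the hard-coded `== 3` / `== 15` tests criticized above: it works for any mask, not just two specific layouts.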
http://msdn.microsoft.com/en-us/library/ms686223(VS.85).aspx
Opened 5 years ago

Closed 5 years ago

#5532 closed bug (fixed): Variants of ticket #1200 still cropping up

Description

I see that ticket #1200 from several years back is currently closed. However, I was in the midst of writing some simple tests to help myself understand ReaderT/ContT interaction and again ran into these frustrating undefined errors coming from printf:

import Control.Monad.Cont as C
import qualified Control.Monad.Reader as R
import Data.IORef
import Text.Printf

test ref = do
  x <- R.ask
  liftIO$ printf "Observed value %d before callCC\n" x
  callCC$ \cont -> do
    y <- R.ask
    liftIO$ writeIORef ref cont
    liftIO$ printf "Observed value %d inside callCC\n" y
  z <- R.ask
  liftIO$ printf "Observed value %d in invoked continuation\n" z

main = do
  ref <- newIORef (error "unused")
  let test' = do
        test ref
        -- Uncommenting the following will fix it:
        -- return ()
      m1 = R.runReaderT test' (100::Int)
      m2 = C.runContT m1 (\ () -> return ())
  m2
  putStrLn "Done with main."

I see only two occurrences of undefined in Printf.hs. Is there any disadvantage to changing them to appropriate errors? In fact, is there any reason that undefined shouldn't be banned from the standard libraries in favor of error?

Attachments (1)

Change History (4)

comment:1 Changed 5 years ago

Changed 5 years ago

comment:2 (follow-up: 3) Changed 5 years ago

Trivial patch uploaded. It looks like I botched the naming convention. By the way, the wiki section on the patch naming convention seems to be missing. It's referred to from here: And should live here:

comment:3 Changed 5 years ago

Trivial patch uploaded.

Applied, thanks!

By the way, the wiki section on the patch naming convention seems to be missing. It's referred to from here: And should live here:

Fixed, ta.

Absolutely. Please send patch! S
https://ghc.haskell.org/trac/ghc/ticket/5532
java.lang.Object
  org.zkoss.json.JSONs

public class JSONs

Utilities to json-ize objects that JSON is not aware of, such as Date. Notice that implementing JSONAware is another way to make an object able to be json-ized.

public JSONs()

public static final java.lang.String d2j(java.util.Date d)

Json-izes the given date into a string that can be stored in a JSONArray or JSONObject. It is used with j2d(java.lang.String): d2j(java.util.Date) is used to json-ize a Date object, while j2d(java.lang.String) is used to unmarshall it back to a Date object. Notice it assumes TimeZones.getCurrent() (and is Locale-independent). However, the result string has no time zone information. Thus, if the client is in a different time zone, the date object will be different. However, since the object will be marshalled back in the same way, the value sent back from the client will be the same (regardless of the time zone difference).

public static final java.util.Date j2d(java.lang.String s) throws java.text.ParseException

Unmarshalls the given string back to a Date object; the opposite of d2j(java.util.Date). d2j(java.util.Date) is used to json-ize a Date object, while j2d(java.lang.String) is used to unmarshall it back to a Date object. Notice it assumes TimeZones.getCurrent() (and is Locale-independent).

Throws: java.text.ParseException
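The round-trip property described above -- a local timestamp serialized without zone information parses back to the same wall-clock value as long as both directions use the same rules -- can be illustrated in Python. This is an analogy to d2j/j2d, not ZK's actual wire format; the format string is invented for illustration:

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S"  # no time-zone field, like the string d2j produces

def d2j(dt):
    # Marshal a naive (zone-free) datetime to a string
    return dt.strftime(FMT)

def j2d(s):
    # Unmarshal it back; the opposite of d2j
    return datetime.strptime(s, FMT)

stamp = datetime(2023, 5, 17, 14, 30, 0)
# Marshalling and unmarshalling the same way preserves the wall-clock value
assert j2d(d2j(stamp)) == stamp
```

The caveat in the javadoc follows directly: because the string carries no zone, a reader in a different time zone will interpret the same wall-clock value as a different instant.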
http://www.zkoss.org/javadoc/latest/zk/org/zkoss/json/JSONs.html
I want to get a better understanding of what is actually happening in this code when I convert from a Java List to a generic Scala Seq:

import scala.collection.JavaConverters._

def foo(javaList: java.util.List[String]): Seq[String] = {
  val scalaMutableBuffer: mutable.Buffer[String] = javaList.asScala
  scalaMutableBuffer
}

...
val bar = foo(someJavaList)

While bar is typed as a Seq[String], it's using a mutable buffer on an underlying level.

You're right: the run-time value of bar is a mutable.ArrayBuffer[String] (as Buffer is a trait itself), and as Seq[+A] is a trait, you as a caller only get to see the "sequency" parts of the ArrayBuffer, although you can always cast it to a buffer via buffer.asInstanceOf[mutable.ArrayBuffer[String]] and then see the actual "internals" of the buffer.

...potentially affecting the performance of Seq operations

When one exposes a Seq[A], you're exposing the "contract" that the underlying collection adheres to. At run-time, it will always be a concrete implementation, i.e., when we create a Seq via apply:

scala> Seq(1,2,3)
res0: Seq[Int] = List(1, 2, 3)

The concrete implementation is actually a List[Int].

Would it be best to think of the value bar contains as mutable or immutable?

The underlying implementation is mutable, but exposed via an immutable contract. This means that when you're operating on the buffer via the abstraction of the Seq trait, there are no mutable operations available to you as a caller. For example, when we do:

scala> val bufferAsSeq: Seq[Int] = scala.collection.mutable.Buffer(1,2,3)
bufferAsSeq: Seq[Int] = ArrayBuffer(1, 2, 3)

scala> bufferAsSeq += 4
<console>:12: error: value += is not a member of Seq[Int]
       bufferAsSeq += 4

The abstraction guards us from letting the users invoke operations we don't desire them to do on the run-time type.
For example, would there be any cases in which it would be preferable for me to convert scalaMutableBuffer via toList before returning from foo?

I think this is primarily opinion-based. If you don't trust exposing the Seq trait, you can always have the concrete type be immutable. But remember, in order for someone to mutate the copy of the Seq, he has to check the run-time type and explicitly cast to it, and when you cast, all bets are off.
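The core idea here -- a mutable implementation exposed behind an immutable contract -- is not Scala-specific. A rough Python analogy, using a hand-rolled read-only wrapper (the class is invented purely for illustration):

```python
from collections.abc import Sequence

class ReadOnlyView(Sequence):
    """Immutable 'contract' over a mutable list: no append/__setitem__ exposed."""

    def __init__(self, backing):
        self._backing = backing          # the mutable implementation

    def __getitem__(self, i):
        return self._backing[i]

    def __len__(self):
        return len(self._backing)

data = [1, 2, 3]
view = ReadOnlyView(data)
print(list(view))        # -> [1, 2, 3]
# The view exposes no mutating API, but the underlying list is still mutable,
# and mutations show through the view -- just like the ArrayBuffer behind a Seq:
data.append(4)
print(len(view))         # -> 4
```

As in the Scala answer, the protection is only as strong as the contract: a caller who reaches into `view._backing` (the analogue of `asInstanceOf`) can mutate freely, and all bets are off.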
https://codedump.io/share/RZW7QKzBIWqe/1/scala-returning-a-mutable-buffer-from-a-function-that-returns-a-seq
in reply to Re^3: Powerset short-circuit optimization in thread Powerset short-circuit optimization

jimt, I too couldn't leave good enough alone. After realizing an embarrassing oversight (pointed out by ambrus), I decided to try my hand at a perl + Inline::C version. It finishes the 3_477 records in 1 second flat with constant memory consumption (less than 65MB). Furthermore, here is how it stacks up against the 67_108_863 lines of output from 9_448_847 lines of input (see How many words does it take? for more details):

#!/usr/bin/perl
use strict;
use warnings;
use Inline C =>;

for my $file (@ARGV) {
    open(my $fh, '<', $file) or die "Unable to open '$file' for reading: $!";
    while (<$fh>) {
        my ($set) = $_ =~ /^(\w+)/;
        powerset($set, length($set));
    }
}

__END__
__C__
#include <stdio.h>
#include <stdlib.h>

void powerset (char *set, int len) {
    static char seen[67108864];
    static int init_flag = 0;
    static long val[128];
    int i, j, k, new_len;
    long bit = 0;
    char *oldset, *newset;

    if (! init_flag) {
        memset(seen, '0', sizeof(seen));
        val['a'] = 1;        val['b'] = 2;        val['c'] = 4;        val['d'] = 8;
        val['e'] = 16;       val['f'] = 32;       val['g'] = 64;       val['h'] = 128;
        val['i'] = 256;      val['j'] = 512;      val['k'] = 1024;     val['l'] = 2048;
        val['m'] = 4096;     val['n'] = 8192;     val['o'] = 16384;    val['p'] = 32768;
        val['q'] = 65536;    val['r'] = 131072;   val['s'] = 262144;   val['t'] = 524288;
        val['u'] = 1048576;  val['v'] = 2097152;  val['w'] = 4194304;  val['x'] = 8388608;
        val['y'] = 16777216; val['z'] = 33554432;
        init_flag = 1;
        oldset = malloc((len + 1) * sizeof(char));
        for (i = 0; i < len; ++i) { oldset[i] = set[i]; }
        oldset[len] = '\0';
    }
    else {
        oldset = set;
    }

    for (i = 0; i < len; ++i) { bit += val[ oldset[i] ]; }
    if (seen[bit] == '1') { return; }
    seen[bit] = '1';
    printf("%s\n", oldset);
    if (len == 1) { return; }

    new_len = len - 1;
    newset = malloc((len + 1) * sizeof(char));
    for (i = 0; i < len; ++i) {
        k = 0;
        for (j = 0; j < i; ++j)       { newset[k++] = oldset[j]; }
        for (j = i + 1; j < len; ++j) { newset[k++] = oldset[j]; }
        newset[k] = '\0';
        powerset(newset, new_len);
    }
    free(newset);
}

Update: I had solicited feedback on my poor C in the CB and was told that, despite it being a translation of the Perl above, documentation would go a long way toward getting the desired comments. For each $set in a file, the code generates the powerset (minus the empty set). It is designed to skip any subset that has previously been produced. The code to generate the powerset is intentionally inefficient from the perspective of a single $set, but more than makes up for it when the "skip previously seen subsets" rule is applied across all sets in the file. The algorithm to generate the powerset (ABCD) is as follows:

Because over 67 million (2 ** 26) sets will be generated, keeping track of what has previously been seen can take up a lot of space. There is a pretty straightforward way to translate a set into an integer that fits into a contiguous range of 1 .. 2 ** 26. For those interested, the algorithm is the sum of 1 << ord(x) - ord('a'), where x is all lowercase chars in the subset. I have created a static lookup table val[] rather than generate the values each time. In Perl, I use a bitstring with vec, but in C I have chosen to use a char array seen[] which takes up (2 ** 26) / 1024 * 1024 or 64MB. Since some of these arrays need to retain their values between invocations, they have been made static. It is basically then just a matter of determining if the passed-in string has been seen and, if not, allocating a new string of size N - 1, generating each string of size N - 1, and calling self recursively.

Cheers - L~R
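The set-to-integer encoding and the seen-set short circuit translate naturally into a few lines of Python (a sketch of the same idea, not a port of the C -- it keeps seen bits in a Python set instead of a 64MB char array):

```python
def subset_bit(s):
    # Map a lowercase subset to a unique integer: sum of 1 << (ord(c) - ord('a'))
    return sum(1 << (ord(c) - ord('a')) for c in s)

def powerset(s, seen, out):
    bit = subset_bit(s)
    if bit in seen:              # short-circuit: this subset was already emitted
        return
    seen.add(bit)
    out.append(s)
    if len(s) == 1:
        return
    for i in range(len(s)):      # recurse on every (N-1)-sized subset
        powerset(s[:i] + s[i + 1:], seen, out)

out = []
powerset("abc", set(), out)
print(len(out))  # -> 7: the non-empty subsets of {a, b, c}
```

As in the C version, the per-set recursion is redundant on its own, but the shared seen set makes the work across many input sets collapse.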
http://www.perlmonks.org/?node_id=581697
Maybe you have already remarked that a tree is a two-dimensional data structure that can't be accessed linearly, as you would do with a list. To traverse a tree, you have to visit all its nodes, with respect to their order and hierarchy. This can be done by using recursion or loops. An object-oriented approach to tree traversal is to build as generic a class as possible that encapsulates the traversal algorithm. A modern and good way to do that is building a tree iterator. This article talks about tree iterators and the different choices that can be taken while coding them, taking into account performance, reusability and comprehension.

First, we assume that you are familiar with tree data structures. In this case, you know that a tree is a hierarchical structure, organized in linked nodes. It can also be considered as a particular case of graphs (an acyclic connected graph). Each node has a unique parent and zero or several children. Nodes at the bottommost level of the tree are called leaf nodes.

An iterator is an object that can iterate through a container data structure and display, step by step in an incremental manner, its elements. In object-oriented programming, the Iterator pattern is a design pattern that encapsulates the internal structure of the iteration process. Widely used in the C++ STL, it is implemented there as a pointer that can be increased with the ++ operator. This can also be done with C#, but the Framework provides for us another and convenient approach: the foreach keyword. In this case, the iterator must implement the IEnumerator interface.

// Define the data container: a list of strings
StringList mylist = new StringList();

// C++ STL syntax:
ListIterator iterator = new ListIterator(mylist);
while(iterator++)
    ShowValue(iterator.Value);

// C# enumerator syntax:
foreach(string value in mylist)
    ShowValue(value);

As you can see, the C++ syntax is very natural.
You first define the iterator, and then use it in a while loop. But what about the C# syntax? Mmmh, it's not straightforward, but let's see the basic mechanism. In fact, the class StringList implements the IEnumerable interface and provides a method called GetEnumerator() that returns an iterator on the list. Thus, before enumerating, the IEnumerable instance calls the GetEnumerator() method, which provides the iterator, and increments it in each loop.

In the Visual Studio 2005 version, Microsoft came with a very useful new keyword: yield. If you don't know this magic keyword, I suggest you go urgently and read this article. Otherwise, you might know that it can save much time, avoiding the need to create the IEnumerator class. In this case, the compiler will generate it for you. Read this excellent article on the yield keyword.

Trees can be represented in many ways. Because trees are organized sets of nodes, it is natural to consider nodes as a basis. A node can have zero, one or more children. The first idea is to give each node a list of child nodes. This is not the approach chosen in this article, because the list implementation is not optimized in the case of nodes with few children (binary trees, for example). For a more generic and flexible implementation, I prefer a linked list, like the one drawn in the picture below:

public interface INode<T>
{
    INode<T> Parent { get; set; }
    INode<T> Child { get; set; }
    INode<T> Right { get; set; }
}

As you can see, each node has a parent, a right neighbour, and a child. You can also remark that some of these links may be null (for example, a leaf node has no child and the root has no parent).

The code defining the generic process of iteration is easy and straightforward to write. Note that because it's generic, this algorithm really does nothing by itself and must be implemented in an inherited class. This is why the following class is declared as abstract:

public abstract class TreeEnumerator<T> : IEnumerator<INode<T>>
{
    // Contains the original parent tree.
    protected INode<T> _Tree = null;

    // Contains the current node.
    protected INode<T> _Current = null;

    // Constructor. The parameter tree is the main tree.
    public TreeEnumerator(INode<T> tree)
    {
        _Tree = tree;
    }

    // Get the explicit current node.
    public INode<T> Current
    {
        get { return _Current; }
    }

    // Get the implicit current node.
    object System.Collections.IEnumerator.Current
    {
        get { return _Current; }
    }

    // Increment the iterator and move the current node to the next one.
    public abstract bool MoveNext();

    // Dispose the object.
    public void Dispose() { }

    // Reset the iterator.
    public void Reset()
    {
        _Current = null;
    }

    // Get the underlying enumerator.
    public virtual TreeEnumerator<T> GetEnumerator()
    {
        return this;
    }
}

There are two families of tree traversal: the in-depth and the in-breadth algorithms. The first can easily be written using recursion, whereas the latter needs a FIFO structure for memorizing already-visited nodes. See more information on tree traversal. For iterator purposes, it was not conceivable to use recursive methods; the traditional while loops have been preferred. I expose here only the depth traversal algorithm. The breadth algorithm can be downloaded in the files section (with the entire Visual Studio solution).
static void Main(string[] args) { // Build the node. Node tree = new Node(1); tree.Child = new Node(2); tree.Child.Right = new Node(5); tree.Child.Child = new Node(3); tree.Child.Child.Right = new Node(4); tree.Child.Right.Child = new Node(6); tree.Child.Right.Child.Right = new Node(7); int imax = 100; double[] ratios = new double[imax]; ulong iter = 10000; for (int i = 0; i < imax; i++) ratios[i] = TestPerformance(tree, iter); StringBuilder sb = new StringBuilder(); foreach(double value in ratios) sb.Append(value.ToString()+";"); string copy = sb.ToString(); } // Define a yield return method iteration. public static IEnumerable<Node> MoveNext(Node root) { yield return root; if (root.Child != null) foreach (Node node in MoveNext((Node)root.Child)) yield return node; if (root.Right != null) foreach (Node node in MoveNext((Node)root.Right)) yield return node; } public static double TestPerformance(Node tree, ulong iter) { DateTime start; // Classic method test start = DateTime.Now; for (ulong i = 0; i < iter; i++) { DepthTreeEnumerator<int> enumerator; enumerator = new DepthTreeEnumerator<int>(tree); foreach (Node node in enumerator) {} long ticks1 = ((TimeSpan)(DateTime.Now - start)).Ticks; } // Yield return method test start = DateTime.Now; for (ulong i = 0; i < iter; i++) { foreach (Node node in MoveNext(tree)) { } } long ticks2 = ((TimeSpan)(DateTime.Now - start)).Ticks; return (double)ticks2 / (double)ticks1; } } The result is that the classic method is up to 5 times faster than the yield return method! So, it's clear that with the addition of simple tests, it can be shown that the yield keyword is not adapted in this kind of work. Theory discussions are not understandable without a concrete application. That's why I add to this article an example application of tree iterators, useful in the "true life" of a C# developer. This application exposes a TreeView with many nodes, each one displaying a French surname. 
TreeView To test the application, download the file above, unzip it and build the solution. Run the TreeIterator.Example project (selected by default). TreeIterator.Example Then, in the dialog box that appears, select a surname among the ones suggested by the combo box. Click OK. Finally, three things happen at the same time: TextBox TextBox The application runs fine. But to appreciate the application of the iterator, we need to examine the code. In this project, there are two main files: Well, how does it work ? It's not really difficult. Remember, we already have defined a tree iterator that does all the traversal for us. But this iterator has been thought of for working on any INode structure. So, to achieve the task, we need to create a new class (we'll name it SurnameNode) that derives from INode. This class will make the link between the TreeView nodes and the iterator. The only thing we have to do is define the properties of a SurnameNode object : INode SurnameNode INode SurnameNode // Get or set the parent of the Node. public INode<TreeNode> Parent { get { return _Node == null ? null : new SurnameTreeNode(_Node.Parent); } set { throw new NotImplementedException(); } // Get or set the leftmost child of the Node. public INode<TreeNode> Child { get { if (_Node == null || _Node.Nodes == null) return null; if(_Node.Nodes.Count <= 0) return null; return new SurnameTreeNode(_Node.Nodes[0]); } set { throw new NotImplementedException(); } } } // Get or set the right neighbour of the Node. public INode<TreeNode> Right { get { if (_Node == null || _Node.Parent == null) return null; if(_Node.Parent.Nodes.Count <= _Node.Index + 1) return null; var node = _Node.Parent.Nodes[_Node.Index + 1]; return new SurnameTreeNode(node ); } set { throw new NotImplementedException(); } } Ok. Now, if we want to use our iterator in a foreach loop, we have to defer its traversal with a yield return loop : // Returns an enumeration of the tree nodes, with a depth traversal. 
private IEnumerable<TreeNode> Enumerate() { // Instantiate the iterator DepthTreeEnumerator<TreeNode> enumerator = null; enumerator = new DepthTreeEnumerator<TreeNode>( new SurnameTreeNode(this.treeView1.Nodes[0]) ); // Defer the traversal of the tree foreach (SurnameTreeNode node in enumerator) yield return node.Node; } This method must be used like this: IEnumerable<treenode /> nodes = Enumerate(); The last thing to do is writing search methods. That's where the power of iterators come in. We can now handle nodes linearly, like a list, and then apply LINQ method extensions. It's very easy! The first use consists in getting all the surnames of the tree, ordered by surname. Here's the code to do so: // Fill the combo box with all the surnames the tree contains. IQueryable<string> items; items = Enumerate().OrderBy(node=>node.Text).Select(node=>node.Text); this.comboBox1.Items.AddRange(items.ToArray()); Then, we can search for a node matching a specified name : string name = "Boris" TreeNode resultnode = Enumerate().Single(node => node.Text == name); Or we can find all the names that start with a specified letter : string letter = "B" var nodes = Enumerate().Where(node=>node.Text.StartsWith(letter))); List<string> names = nodes.Select(node=>node.Text).ToList(); In this article, we saw an simple, generic and efficient way to traverse trees, with two different algorithms. We also saw that implementing yield return based-algorithms gave bad performances. Then, we could apply our library on daily development problems, and prove that iterators, combined with LINQ, can become powerful when searching elements in non-ordered.
https://www.codeproject.com/Articles/34352/Tree-Iterators?fid=1537693&df=90&mpp=10&sort=Position&spc=None&tid=3095696
CC-MAIN-2017-34
refinedweb
1,798
58.28
In this article, you will learn how to add Custom Controls to your own application, and how Custom Controls can have complete designer support. You will also learn how to make use of GPS hardware inside your own application, and how to update User Interface Controls inside a multithreaded application. If you are developing applications for Windows Mobile devices, you might at some time need specific User Interface Controls that are not available in the collection of Common Controls that are part of the .NET Compact Framework. In that case, you have a number of choices: In this article, we identify three different types of controls that you can create, either from scratch or by re-using and combining existing controls. These controls typically are used to combine several existing controls to create a new control. In Visual Studio 2008, the creation of User Controls is fully supported by the provided Visual Studio 2008 project type. Once you create a new User Control, you get a designer surface that acts as a container to host additional controls. With a User Control, you can, for instance, create a Compound Control that contains a number of existing controls and behaves as a single control when you use it in an application. In figure 1, you can see a User Control inside Visual Studio 2008. The User Control contains two Labels and two TextBoxes, and can be used to enter user credentials, for instance, to log on to a database. When the project is built, this User Control is stored in its own assembly that can be reused by any Smart Device Project. In this particular User Control sample, there is hardly any need for additional code after dragging and dropping existing controls to it. The only additional code that is available inside this User Control is the declaration of two public properties to retrieve both the password and the user name inside an application and to set a user name.
The Password property is read-only to force users of the application to at least provide a valid password. Password validation will be done inside an application, not inside the User Control, to make it as flexible and ready for re-use as possible.

To use the User Credentials control inside an application, it can simply be dragged and dropped from the Toolbox. The User Credentials control does not automatically appear on the Toolbox. It is, however, possible to add controls to the Toolbox by right-clicking in the General area of the Toolbox, selecting Choose Items from the displayed popup menu, and navigating to the assembly that stores the User Credentials control. In figure 2, you can see how the User Credentials control appears inside an application. If you look at the Toolbox, you can see the UserCredentials control in the General area, from which you can drag and drop it to a form. You can also see in the Properties window that there is a UserName property defined that can be set, either from the designer, or programmatically from inside the application. You can also see that the Password property is grey in the Properties window, meaning that it is a read-only property. It cannot be set, but the application can retrieve the property once the user has entered a password.

Sometimes you might want to use a control, let's say, one of the Common Controls that are available inside the .NET Compact Framework, but have it behave slightly differently. For instance, you might want to make use of a TextBox, but have a requirement that the TextBox should only accept numbers. Of course, you can have your user enter anything in the TextBox and validate the entered data inside your application. But you can also create your own control, either inside your application or as a separate control, that derives from a TextBox but limits input to numeric characters only.
In that way, you have less coding to do in your application, because validation is done by the inherited TextBox. Even though the functionality of such a TextBox is limited, compared to the Common Control, the derived TextBox still has great designer support and exposes all Properties and Events that are defined for the original TextBox. Also, if you create a separate control for it, you can re-use the control in other applications as well, which increases your productivity.

In order to create a numeric TextBox as a separate control, you can simply create a new Smart Device Project and select a Class Library. The amount of code necessary to convert a TextBox into a Numeric TextBox is limited, especially in this sample, because it omits the use of a decimal point, only allowing numeric data and the backspace to be entered.

    public class NumericTextBox : TextBox
    {
        protected override void OnKeyPress(KeyPressEventArgs e)
        {
            if ((e.KeyChar < '0' || e.KeyChar > '9') && e.KeyChar != (char)Keys.Back)
            {
                SystemSounds.Beep.Play();
                e.Handled = true;
            }
            base.OnKeyPress(e);
        }
    }

When you are writing override code, it is important to determine what the base class method does in order to decide whether to remove it or to call it, and in the latter case, when to call it. Just look in the documentation, which in this case reads, "Notes to Inheritors: When overriding OnKeyPress in a derived class, be sure to call the base class' OnKeyPress method so that registered delegates receive the event." Since listeners to the TextChanged event are also expecting numeric data only, you call the base class method after having validated the entered characters.

Using the Numeric TextBox inside an application is as simple as using the UserCredentials control that you saw in the previous section of this article. If you don't plan to re-use existing User Interface Controls, but need entirely new controls, you can create your own Custom Controls.
Typically, these types of controls are written from scratch, even though you can, for instance, inherit them from the System.Windows.Forms.Control class to at least make use of the shared functionality that controls have in common. They are also frequently referred to as Owner Drawn Controls. There is no Visual Studio 2008 project template available for creating Custom Controls. The simplest thing to do is create a new User Control inside Visual Studio. Once the project is created, you can delete the User Control and manually add a new Custom Control from Visual Studio 2008.

    public partial class CustomControl1 : Control
    {
        public CustomControl1()
        {
            InitializeComponent();
        }

        protected override void OnPaint(PaintEventArgs pe)
        {
            // TODO: Add custom paint code here

            // Calling the base class OnPaint
            base.OnPaint(pe);
        }
    }

To create a Custom Control from scratch means you have the responsibility to paint the control yourself. You can override the OnPaint method for that, which is already added to your source file when you let Visual Studio 2008 create a new Custom Control. Since painting of your control might happen frequently, it is important to make your implementation efficient, both in its performance and in its use of resources. Since you are typically making use of Graphics classes inside the OnPaint method, and with Graphics classes often relying on underlying native code, it is important to clean up resources once you are done painting. In C#, it is not a bad idea to use Graphics classes that have a Dispose method implemented in combination with a using statement. The using statement ensures that Dispose is called even if an exception occurs while you are calling methods on the object, meaning you don't have to worry about exception handling.
For instance, if you want to display some text, you could do so as shown in the following code snippet, and call it from inside the OnPaint method, passing the Graphics object that is part of the PaintEventArgs parameter:

    private void DisplayLabelInfo(Graphics g, string labelText, int xPos, int yPos)
    {
        using (Font labelFont = new Font(FontFamily.GenericSerif, 10.0F, FontStyle.Regular))
        {
            using (SolidBrush labelBrush = new SolidBrush(LabelForeground))
            {
                g.DrawString(labelText, labelFont, labelBrush, xPos, yPos);
            }
        }
    }

Even though this is just a simple method to display text inside a control, you can already see that two different Graphics objects are needed, a Font and a SolidBrush, which will both be created each time the method DisplayLabelInfo is called. Since you embed these objects in a using statement, cleaning up these objects will be efficient because the Dispose method is called when the objects go out of scope, resulting in less activity during Garbage Collections because no Finalize method has to be executed. This is especially valuable when dealing with Graphics classes, since they rely on a limited number of native resources that should be released as soon as possible.

If you draw immediately on the screen using an instance of type Graphics, it might happen that the screen update causes flickering, especially when you are executing a lot of code in your OnPaint method. In those situations, it might help to make use of buffering. This means that you first draw everything on a separate instance of type Graphics that works on memory, and when you are done drawing, transfer the entire contents of the Graphics object that lives in memory to the Graphics object that draws on the screen. Suppose you want to display an image, and on top of that, write some text.
To make use of buffering, you would do the following:

    protected override void OnPaint(PaintEventArgs pe)
    {
        // Initialize a Graphics object with the bitmap we just created
        // and make sure that the bitmap is empty.
        Graphics memoryGraphics = Graphics.FromImage(memoryBitmap);
        memoryGraphics.Clear(this.BackColor);

        // Draw the control image in memory
        memoryGraphics.DrawImage(someImage, destRect, imageRect, GraphicsUnit.Pixel);

        // Display some text
        DisplayLabelInfo(memoryGraphics, "Some text", 0, 0);

        // Draw the memory bitmap on the display
        pe.Graphics.DrawImage(memoryBitmap, 0, 0);

        // Calling the base class OnPaint
        base.OnPaint(pe);
    }

In the above code snippet, it is assumed that you already created an instance of type Bitmap called memoryBitmap and that you already created and initialized an instance of type Image called someImage, as well as a destination rectangle to draw the image upon. Calling the base.OnPaint method at the end of your OnPaint method means that subscribers to the Paint event have the chance to paint on top of the contents you already displayed inside your own OnPaint method.

To add designer support to your Custom Control, you have to create a .xmta file, which is a special type of XML file. Visual Studio 2008 supports you in creating this file by providing IntelliSense and the possibility to add an initial Design-Time Attribute File to your project from inside Solution Explorer. The following code snippet shows a Design-Time Attribute File sample which puts a property of your Custom Control in a particular category in the Properties window and displays some help about the property.
    <?xml version="1.0" encoding="utf-16"?>
    <classes xmlns="">
      <class name="Compass.Compass">
        <property name="Heading">
          <category>CompassSpecific</category>
          <description>Sets the travelling speed in mph</description>
        </property>
      </class>
    </classes>

Once you have created a Custom Control, you, of course, want to use it inside an application. If you have properly added Designer Support to your Custom Control, and if you have added the Custom Control to the Visual Studio 2008 Toolbox, adding your Custom Control will be as easy as adding a Common Control to your user interface.

If you look at figure 4, you can see the User Interface for an electronic compass that makes use of a Custom Control, the Compass control. You can see that the Compass control has full designer support and that you can set a number of properties on the control. Also, you can see that all properties have default values, and contain help about the usage of the property. To create this particular User Interface, a Compass control has been dragged to the MainForm, and a Menu has been added to enable / disable retrieving data from a (built-in) GPS receiver. Once the application is fed with the GPS location information, the user's direction and travel speed are displayed continuously on the Compass control. To update this information, the Compass control exposes a number of properties that can be set programmatically, that will be assigned to values that are read from the GPS receiver.

Thanks to the fact that both Windows Mobile 5.0 and Windows Mobile 6 devices contain the GPS Intermediate Driver (GPSID), it is very easy for you, as a developer, to retrieve location information through GPS and to share the GPS hardware between different applications. With more and more devices having a built-in GPS receiver, it makes sense to start thinking about making your applications location aware. Both the Windows Mobile 5.0 SDKs and the Windows Mobile 6 SDKs contain a lot of sample code.
One of the samples not only shows you how to make use of GPSID, but it also contains a managed wrapper around the GPSID functionality. You can find these samples in the following folders: .

If you are using Visual Studio 2008 to build the GPS solution, you will be asked to convert the project before you can build it. The reason for this is that the sample code's solution and project files were created with Visual Studio 2005, and cannot be used without conversion in Visual Studio 2008. After conversion, you can build the GPS solution. Once you have built it, you can make use of the functionality in the Microsoft.WindowsMobile.Samples.Location assembly. In order to do so, you need to import this assembly into your own solution. When you have added a reference to the managed wrapper around GPSID, you can make use of the following classes, from which the relevant properties and methods for this article are displayed in figure 5.

The Gps class is your entry point to the GPS hardware. In this class, you find methods to synchronously retrieve GPS position information, and an event that is fired when position information is changed. You also find methods to Open and Close the GPS hardware.

Before being able to retrieve location information from the GPS receiver, you need to create a new object of type Gps and call the Open method on that object. This operation activates the GPS hardware if your application is the first application that makes use of the GPS hardware. Once your application is done using the GPS hardware, you need to call the Close method on the Gps object in order to switch off the GPS hardware if your application is the last one using it. In this way, you make sure to preserve battery power when GPS functionality is no longer needed on the Windows Mobile device.
The following code snippet shows you how to open and close the connection to the GPS hardware, and is executed inside separate event handlers:

    private void menuEnableGPS_Click(object sender, EventArgs e)
    {
        gps.Open();
        gps.LocationChanged += new LocationChangedEventHandler(gps_LocationChanged);
        menuDisableGPS.Enabled = true;
        menuEnableGPS.Enabled = false;
    }

    private void menuDisableGPS_Click(object sender, EventArgs e)
    {
        gps.LocationChanged -= gps_LocationChanged;
        gps.Close();
        menuEnableGPS.Enabled = true;
        menuDisableGPS.Enabled = false;
    }

If the user decides to terminate the application, you also have to make sure to close the GPS connection, so it makes sense to verify in the Closing event handler of the application if the user still has GPS enabled, and unsubscribe from the LocationChanged event and call the Close method on the Gps object.

The 'real' work of the Electronic Compass application will be executed in the gps_LocationChanged event handler. Any application can subscribe to this event by providing an event handler. Each time the GPS location information changes, the event handler of the application will be called to allow the application to act on location changes. Subscribing to the LocationChanged event assures that your application always has up-to-date location information available, assuming the GPS hardware can read valid satellite data. The following code snippet shows the gps_LocationChanged event handler in its original version:

    private void gps_LocationChanged(object sender, LocationChangedEventArgs args)
    {
        GpsPosition pos = args.Position;

        // ... read the Heading and Speed properties here ...

        if (pos.HeadingValid | pos.SpeedValid)
        {
            compass1.Invalidate();
        }
    }

In this event handler, you only take a look at the Heading and Speed properties, even though there are a large number of additional properties. Since the application is just an electronic compass, you are even omitting location information like Latitude and Longitude. Since satellite readings might not be valid, you also need to check if a property you are interested in contains 'real' data.
Finally, you call the Invalidate method to force an update of the Compass control if at least one reading has changed.

If you compile and run the application with the gps_LocationChanged event handler implemented as shown in the above code snippet, you will run into an exception. It looks like you are not able to update information on the Compass control. The description of the exception will point you in a certain direction. The exception message contains the following text: "Control.Invoke must be used to interact with controls created on a separate thread". The stack trace shows the following information:

    StackTrace:
      at Microsoft.AGL.Common.MISC.HandleAr(PAL_ERROR ar)
      at System.Windows.Forms.Control.Invalidate()
      at ElectronicCompass.MainForm.gps_LocationChanged(Object sender, LocationChangedEventArgs args)
      at Microsoft.WindowsMobile.Samples.Location.Gps.WaitForGpsEvents()

The exception message contains the most important information to identify this problem. It seems that there is an issue with multiple threads that are active inside your application, even though you did not create additional threads. Looking at the stack trace, you can conclude that the Invalidate method, where the exception occurred, was called from inside your event handler, which in turn was called by a method inside the managed wrapper around the GPS Intermediate Driver. Apparently, GPSID or the managed wrapper around it makes use of multiple threads, something you have to be aware of inside your own application.

A common mistake that many developers make is trying to update or access User Interface Controls directly from within worker threads. This action results in unexpected behavior; in version 1.0 of the .NET Compact Framework, the application frequently stops responding.
In version 2.0 and higher of the .NET Compact Framework, the behavior is better, since a NotSupportedException is thrown when you try to update a User Interface Control from inside another thread than its creator. In figure 6, you saw that particular behavior inside the Electronic Compass application.

To solve this problem, commit yourself to the following rule: Only the thread that creates a UI control can safely update that control. If you need to update a control inside a worker thread, you should always use the Control.Invoke method. This method executes a specified delegate on the thread that owns the control's underlying window handle, in other words, the thread that created the control.

With this knowledge, you now can modify the gps_LocationChanged event handler. In order to do so, you first have to declare a delegate to be able to pass a method as argument to the Control.Invoke method. A delegate is simply a type that defines a method signature that can be associated with any method with a compatible signature.

    private delegate void UpdateDelegate();

The check on the position readings now looks like this before displaying them:

    if (pos.HeadingValid | pos.SpeedValid)
    {
        compass1.Invoke((UpdateDelegate)delegate()
        {
            compass1.Invalidate();
        });
    }

Inside the gps_LocationChanged event handler, you can see that compass1.Invalidate is not called directly, but it is called through another method, compass1.Invoke. The syntax used might be a bit cryptic if you are not familiar with anonymous delegates. In the gps_LocationChanged event handler, this C# 2.0 feature is used to call compass1.Invalidate through compass1.Invoke. It would also have been possible to define a separate method that just calls compass1.Invalidate and call that method through the UpdateDelegate delegate. Because the Compass control update logic is only called at one location, making use of an anonymous delegate makes the code more compact and, although a matter of taste, better readable.
With these modifications, it is now possible to run the Electronic Compass application without exceptions being thrown, and receiving GPS satellite information through the GPS Intermediate Driver. But ... wait a moment, I hear you saying. The application is running inside Device Emulator, and yet it is receiving GPS satellite information. How does that work?

If you write applications that make use of GPS hardware, testing those applications is a challenge. Many GPS receivers don't function well indoors, typically the location where you develop your application. Even if the GPS receiver would receive GPS information, the information will be static, since it is not likely that you are moving around much when developing an application. In order to overcome this problem, the Windows Mobile 6 SDK ships with a utility called FakeGPS that uses text files containing GPS information to simulate the functionality of a GPS receiver. Applications that use the GPS Intermediate Driver can make use of FakeGPS, will function exactly as they would if a GPS receiver was present, and do not need to be modified in any way. Since FakeGPS also runs on Device Emulator, you can even test your entire application using Device Emulator.

To make use of FakeGPS, you first have to install it on your target device or on Device Emulator. FakeGPS is available as a CAB file. To install FakeGPS on the Device Emulator, you can share the folder where FakeGPS is located, through the Device Emulator properties. Inside Device Emulator, you can now use File Explorer to navigate to the Storage Card. On the Storage Card, which points to the shared folder, select the FakeGPS CAB file to install it on Device Emulator. FakeGPS comes with a number of sample text files containing GPS data. You can add your own text files containing GPS data to create test files for specific test scenarios.
The contents of a FakeGPS text file look like this:

To activate the FakeGPS data feed, start the FakeGPS utility on the Device Emulator, enable it, and select the desired GPS data file. Finally, you click on the Done softkey to start feeding GPS data to GPSID. That is all you need to do to start testing GPS enabled applications, either on Device Emulator or on a physical Windows Mobile device. After installing and setting up FakeGPS, you can start testing the Electronic Compass.
Fig 1 – ASP using Ajax ToolKit and LinqDataSource

It has been a while since I used one of the server side web frameworks. Microsoft .NET 3.5 ASP has some attraction, especially since the announcement of SQL Server 2008 geospatial functionality along with Silverlight, WPF, and DeepZoom. There are even hints of a Virtual Earth map element for use in Silverlight. I decided to take a learning detour into ASP Land.

JSP is similar to ASP but never seemed compelling to an individual developer without the division of skill sets found in larger teams. By coding my own servlets instead of deferring to a JSP engine, I found I had more control that translated well into the ping-pong dynamic rendering now called AJAX. My prior experience with server generated html using JSF was not especially fruitful either. Admittedly it was a number of years ago when JSF was quite new, but at that time it was, you might charitably say, "clunky." The round trips to the server for every little user interaction were not conducive to a good user experience. I guess I didn't find the complexity of a server side JSF layer too compelling, since I could manipulate SVG myself with javascript and send Msxml.XMLHTTP for any parts of my SVG that needed dynamic re-rendering. Now that higher bandwidth is more common, round trips are a bit more responsive but refresh blinks are still noticeable.

With this history in mind I started into ASP land with some reservation. Microsoft has a very extensive set of tools available for the web developer. The basic ASP controls are all there, as well as Linq to SQL and the Ajax Control Toolkit, so ASP has come a lot further than the early days of JSF. The price is a steep learning curve with a wide range of technologies to pull together. Visual Studio 2008 makes a lot of these technologies a matter of drag and drop, which means that things are easier to create but more difficult to understand or tweak.
C# is a nice language for pulling all the disparate pieces together, especially since it is so close to Java in syntax and semantics. Stephen Walther's book, ASP.NET 3.5 Unleashed, was an invaluable reference for my detour through ASP.

Oversimplified, ASP, like JSP, adds special namespace controls that can be interspersed into normal html. The resulting web form files, named with an .aspx extension, are dynamically compiled on the server. This server side code in turn is used to create html sent back to the client browser. In addition to Javascript, ASP lets you have a lot of server side control with a partial class code-behind created for each aspx page. One nice benefit is that many of the idiosyncrasies between browsers are handled by the compiler instead of the developer.

I was given an opportunity to volunteer for an energy calculator website that required a fairly complex MS SQL Server schema. This was ideal for learning ASP. I was able to adapt the SQL Server Membership capabilities to suit the website requirements by adding some additional fields to the default aspnet_Users table of ASPNETDB.mdf. Once I had basic registration and login, complete with email notification and role based security, I could move on to the basic web application.

The first iteration involved using the new .NET 3.5 ListView control to create a basic CRUD interface (Create, Read, Update, Delete) to the required tables. ListView is a powerful control with a great deal of flexibility. It took a little while to learn all the ins and outs of events attached to the control, but I eventually had a basic approach worked out. I soon learned that Master pages simplify life considerably. Paying attention to the hierarchy of a site's page layout lets a developer abstract the unchanged into Master pages. As you go deeper into the web application, a hierarchy of nested Master pages seemed to work best.
I started building my ListView controls referenced to a SqlDataSource like this (attribute values omitted here):

    <asp:SqlDataSource ...>
      <SelectParameters>
        <asp:QueryStringParameter ... />
      </SelectParameters>
      <DeleteParameters>
        <asp:Parameter ... />
      </DeleteParameters>
      <InsertParameters>
        <asp:Parameter ... />
        <asp:Parameter ... />
        <asp:Parameter ... />
        <asp:Parameter ... />
        <asp:Parameter ... />
      </InsertParameters>
      <UpdateParameters>
        <asp:Parameter ... />
        <asp:Parameter ... />
        <asp:Parameter ... />
      </UpdateParameters>
    </asp:SqlDataSource>

Having all this sql inside a web page, with additional code behind glue, didn't seem very concise. Fortunately, on further study I realized that ASP .NET 3.5 also has a new technology called LINQ to SQL. Now I could replace the SqlDataSource with a LinqDataSource and make use of all the new ORM tools in VS2008. This is basically just dragging tables from the VS2008 Server Explorer onto a dbml page and then taking advantage of all the automatically created classes that map to the SQL table rows. It meant learning a bit more of Linq query syntax, but it was well worth the effort.

    <asp:LinqDataSource ...>
      <WhereParameters>
        <asp:QueryStringParameter ... />
        <asp:ControlParameter ... />
      </WhereParameters>
    </asp:LinqDataSource>

Next on the agenda was delving a bit into AJAX. One requirement of this project was a set of cascading DropDownLists. The selection from the first DropDownList affects the content of the second, etc. The content of each DropDownList is populated out of the database. There is a project called the ASP.NET Ajax Control Toolkit. Included in the toolkit is a control, ajaxToolkit:CascadingDropDown, which exactly met the requirements. The ajaxToolkit:CalendarExtender is also much more useable than the simple asp:calendar. This is quite a bit of capability for actually very little effort.

So far I am quite pleased with my foray into ASP .NET land. I did, however, run into one area that caused some delay. ListView controls are best wrapped in an asp:Panel that can have a ScrollBars="Auto" attribute.
This allows larger ListView table layouts to have their own scrollbars as needed by the client browser. This all works pretty well. The user can scroll around the table, sort columns, add new records, etc. But when he comes to edit a row that required a scroll, the page refreshes, and the next moment the edit row is out of sight with the panel scroll reset to the top of the panel. This is quite confusing to anyone. The best solution was to add an asp:UpdatePanel with default partial rendering. The UpdatePanel is a quick and dirty approach to AJAX. Any control inside the UpdatePanel <ContentTemplate> becomes an AJAX control, with partial rendering happening behind the scenes. In the case of ListView this meant that the Edit command event reset the row layout for editing but did not lose the scroll position. This is nice.

Unfortunately there is a further twist. Scrolling is good to have, but really what most would like to see is a fixed header at the top of a scrolling table. Asp controls do not have a nice property for fixed headers of tables inside a Panel control. One solution is setting style="position:relative;" on the <tr> surrounding the table <th> elements. The header cells are then positioned relative to the top of the table regardless of the scroll position. This fixes the column headers, but the ajax refresh for editing rows will keep the edit row scrolled correctly while throwing the relative head position the scroll amount above its correct location. This is definitely not desirable.
I spent a few days searching for some solution and finally came across this approach: Maintain Scroll Position after Asynchronous Postback.

    <asp:ScriptManager ... />
    <script type="text/javascript">
    var xPos, yPos;
    Sys.WebForms.PageRequestManager.getInstance().add_beginRequest(BeginRequestHandler);
    Sys.WebForms.PageRequestManager.getInstance().add_endRequest(EndRequestHandler);

    function BeginRequestHandler(sender, args) {
        if ($get('<%= DataTableView.ClientID %>') != null) {
            xPos = $get('<%= DataTableView.ClientID %>').scrollLeft;
            yPos = $get('<%= DataTableView.ClientID %>').scrollTop;
        } else {
            xPos = 0;
            yPos = 0;
        }
        //alert(xPos+","+yPos);
    }

    function EndRequestHandler(sender, args) {
        if ($get('<%= DataTableView.ClientID %>') != null) {
            $get('<%= DataTableView.ClientID %>').scrollLeft = xPos;
            $get('<%= DataTableView.ClientID %>').scrollTop = yPos;
        }
    }
    </script>

The BeginRequestHandler is fired prior to the partial render and EndRequestHandler after the rendering. So the workaround involves capturing the current x,y scroll position on the panel before the render is fired, and then restoring these after the rendering is completed, in order to put things back where they should have been left. Too bad there is no attribute available for panels like the page attribute MaintainScrollPositionOnPostback="true", which only works for the enclosing Page scrollbar and will not affect any interior Panel controls.

Once this bit of workaround was applied, my "fixed header Ajax partial rendering ASP ListView Scrolling Table" control with full CRUD was finally working the way I needed. The effort involved was not very substantial, and now that I'm further along the learning curve I can replicate this functionality across the entire website. ASP at the very least is a good approach to rapid prototyping. Next on the agenda is exploring the Mono project to see how much of the ASP world can be forced into a Linux OS.
Ultimately it would be very useful to deploy ASP websites in an Amazon EC2 AMI instance for inexpensive scaling. It remains to be seen what problems arise in a Mono based ASP platform.
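As a closing note on the scroll workaround: stripped of the ASP.NET plumbing, the Begin/EndRequestHandler pair is just "save the panel's scroll offsets before the partial render, put them back after." A framework-agnostic sketch of that pattern, using a plain object in place of the panel element so it runs outside a browser (in a real page you would grab the element with document.getElementById or ASP.NET's $get):

```javascript
// Stand-in for the scrollable panel div.
const panel = { scrollLeft: 40, scrollTop: 120 };

let xPos = 0, yPos = 0;

function beginRequestHandler() {
  // Capture the scroll offsets before the partial postback.
  xPos = panel.scrollLeft;
  yPos = panel.scrollTop;
}

function endRequestHandler() {
  // The re-rendered panel comes back scrolled to the top; restore it.
  panel.scrollLeft = xPos;
  panel.scrollTop = yPos;
}

beginRequestHandler();
panel.scrollLeft = 0;  // simulate the refresh resetting the scroll
panel.scrollTop = 0;
endRequestHandler();

console.log(panel.scrollLeft, panel.scrollTop); // 40 120
```

The same save/restore bracket works for any widget state that a partial refresh throws away, not just scroll offsets.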
I need to write a function which takes a df with data and returns the string with the country whose GDP is maximum among countries with area (sq km) less than 200 OR population less than 1000. How do I write this code correctly?

    def find_country(df):
        df.loc[((df.Area < 200).Max(df.GDP))|(df.Population < 1000)]

First of all you should make your first column be your Index. This could be done using the following command:

    df.set_index('Country', inplace = True)

This assumes you want to replace your dataframe with the reworked version. To find your desired country, you simply look for the row which has the maximum GDP, for instance, and return its index. The subscript of the index is needed to get the actual value of the index.

    def find_Country(df):
        return df[df['GDP'] == max(df['GDP'])].index[0]

I hope this will help,
Fabian
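Putting the filter from the question together with the answer's max-GDP lookup, here is a self-contained sketch; the country names and numbers below are invented purely for illustration:

```python
import pandas as pd

# Invented sample data; column names follow the question.
df = pd.DataFrame({
    "Country": ["A", "B", "C"],
    "GDP": [50, 300, 120],
    "Area": [150, 5000, 80],
    "Population": [900, 90000, 2000],
})
df.set_index("Country", inplace=True)

def find_country(df):
    # Keep only countries with area < 200 OR population < 1000,
    # then return the index label of the row with the largest GDP.
    small = df[(df["Area"] < 200) | (df["Population"] < 1000)]
    return small[small["GDP"] == max(small["GDP"])].index[0]

print(find_country(df))  # prints C
```

Only "A" and "C" pass the filter, and "C" has the larger GDP of the two, so "C" is returned even though "B" has the largest GDP overall. Without the filter step, the answer's version would incorrectly pick "B".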
http://www.devsplanet.com/question/35274963
Plant-quarantine import restrictions of the Union of South Africa

Material Information
Title: Plant-quarantine import restrictions of the Union of South Africa
Series Title: B.E.P.Q.
Physical Description: 11 p. ; 27 cm.
Creator: Strong, Lee A.; United States Bureau of Entomology and Plant Quarantine
Publisher: U.S. Dept. of Agriculture, Bureau of Entomology and Plant Quarantine
Place of Publication: Washington, D.C.
Publication Date: 1938
Subjects / Keywords: Plant quarantine -- Law and legislation -- South Africa (lcsh)
Genre: federal government publication (marcgt); non-fiction (marcgt)
Notes: Cover title. "February 5, 1938." "(Superseding P.Q.C.A. -- 297)." "Lee A. Strong, Chief, Bureau of Entomology and Plant Quarantine."--Prelim.
Source Institution: University of Florida

Full Text

UNITED STATES DEPARTMENT OF AGRICULTURE
Bureau of Entomology and Plant Quarantine
Washington, D. C.

B. E. P. Q.--471
(Superseding P. Q. C. A.--297)
February 5, 1938.

PLANT-QUARANTINE IMPORT RESTRICTIONS OF THE UNION OF SOUTH AFRICA
This digest of the rules and regulations promulgated under the Agricultural Pests Act of 1911, and subsequent amendments thereof, has been prepared for the information of nurserymen, plant-quarantine officials, and others interested in the exportation of plants and plant products to South Africa.

The digest was prepared by Harry B. Shaw, Plant-Quarantine Inspector in Charge of Foreign Information Service, Division of Foreign Plant Quarantines, from the original texts and reviewed by the Chief Inspector, Plant Regulatory Service, Department of Agriculture and Forestry of the Union of South Africa.

The information presented in this circular is believed to be correct and complete up to the time of preparation, but it is not intended to be used independently of, nor as a substitute for, the original texts, and it is not to be interpreted as legally authoritative. The proclamations and Government notices themselves should be consulted for the exact texts.

Basic Legislation

Agricultural Pests Act (Act No. 11 of 1911 as amended)

Definitions

In this Act and the regulations made thereunder, unless inconsistent with the context:

"Insect pest" shall mean any insect or other invertebrate animal that is injurious to plants.

"Plant" shall mean any tree, shrub, or vegetation, and the fruit, leaves, cuttings, or bark thereof, and shall include any live portion of a plant, whether severed or attached, and any dead portion or any product of a plant which by proclamation under this Act or any amendment thereof has been included in this definition, but shall not include any seed unless the seed has been specially mentioned in this Act or has been included in the definition of plant by proclamation under this Act.
"Plant disease" shall mean any bacterial or fungous or other disease that is injurious to plants. "Exotic animal" shall mean any animal (other than man) and any bird, reptile, insect, or other member of the animal kingdom, in- cluding the e,-gs thereof, that is not indigenous or native to South Africa. Species of the following classes are included in this definition: Arphibia, Arachnida, Ayes, Crustacea, Insecta, Mam.malia, Mollusc.,,Myriapoda, Nematoda, and Reptilia. -2- Authorized Ports of Entry Section 5 of the Act, as supplemented by Proclamation No. 283 of 1936, prescribes that no person shall introduce or cause to be introduced from overseas into the Union any plant otherwise than by mail or through the authorized customs ports of entry: Cape Town, Durban, East London, Johannesburg, Nelspruit, Port Elizabeth, and Pretoria. Fruits, potatoes end onions may enter also through Mossel Bay, Port Nolloth, and Simonstown. Restrictions on Importation of Plants Section 9 prohibits the introduction of certain plants, subjects other plants to re strictions, and requires an imiport permit from the De- partment of Agricalture and Forestry for all plants except fruits, most seeds, bIulbs, tubers, and vegetables. Provision for Inspection of Imported Plants Section 10 provides for the inspection of all plants offered for entry into the Union and their subsequent disposal. Treatment of Infected Plants Sections 11 and 12 provide for the disinfection, cleansing, or destruction of infected plants when deemed necessary, and the is- suance, upon request, of certificates for shipments that have complied with the nrovisions of the Act and the regulations. 
Power of Governor-General to Extend Application of Certain Provisions of Act

Section 14 empowers the Governor-General, by proclamation in the Gazette:

(a) To include in the definition of plant, the seed of any plant or any dead portion or product of a plant;
(b) To vary, by addition or withdrawal, the list of plants the introduction whereof into the Union is under Section 9 prohibited, supervised, or restricted;
(c) To prohibit or restrict the introduction into the Union from anywhere, or from any specified country or place, of any plant, insect, or germ of any plant disease.

Section 21 prohibits the importation from oversea of live bees, honey, and apiary appliances, and empowers the Governor-General to apply the provisions to other African territories. Live bees may be imported by the Government.

Section 22 enables the Governor-General, by proclamation, to prohibit or restrict the importation from anywhere or from any specified country or place of any particular class of exotic animals.

Section 28 empowers the Governor-General to make regulations not inconsistent with the Act prescribing:

(a) The manner and place in which any registration, inspection, disinfecting, cleansing, or destruction authorized under this Act shall be carried out.
(b) The conditions and restrictions governing the importation and keeping of plants, bees, articles, exotic animals, and anything whatsoever dealt with under this Act.

SUMMARY

Applicable to countries oversea, and to Portuguese East Africa, the Mandated Territory of South-West Africa or any State or Territory in Africa north of the Zambesi, except Northern Rhodesia, Nyasaland, or, in the case of plants other than maize and barley, the Belgian Congo.

Importation Prohibited

ACACIA spp., wattle trees but not the seeds (Act No. 11 of 1911).

ALFALFA OR LUCERNE (Medicago sativa L.) hay, fresh or dried, to prevent the introduction of clover canker, crowngall, or crown wart (Urophlyctis alfalfae (v. Lagerh.) Magn.).
(Proc. No. 151 of 1937.)

ARCTIUM spp., burdock, seeds and flowering seed-heads. (Proc. No. 151 of 1937.)

BROOMCORN (Sorghum vulgare var. technicum (Koern.) Jav.), or articles made thereof containing unshredded broomcorn stalk, to prevent the introduction of the European corn borer (Pyrausta nubilalis Hbn.). (Proc. No. 286 of 1936.)

CHESTNUT (Castanea spp.) plants and seeds of any species from North America or any other country where the chestnut blight disease (Endothia parasitica (Murr.) Ander. and Ander.) exists. (Proc. No. 286 of 1936.)

CITRUS TREES, except by the Department of Agriculture and Forestry, to prevent the introduction of citrus canker. (Proc. 286 of 1936.)

CONIFEROUS PLANTS but not the seeds. (Act No. 11 of 1911.)

ELM (Ulmus spp.), plants and seeds of any species from the continent of Europe and any other country where the Dutch elm disease (Graphium ulmi Schwarz) exists. (Proc. No. 286 of 1936.)

EUCALYPTUS spp., gum trees, but not the seeds. (Act No. 11 of 1911.)

FRUITS: Apples, pears, quinces, and loquats (Malus, Pyrus, Cydonia, Eriobotrya), from China, Chosen (Korea), East Siberia, Japan, and Manchukuo (Manchuria), to prevent the introduction of such fruit pests as (Cydia) Grapholitha molesta Busck, the oriental fruit moth, (Carposina sasakii Mats.), a fruit moth, etc. (Proc. No. 286 of 1936.)

Citrus fruits (Citrus spp.) and the peel thereof, whether fresh or dried, but not candied peel, to prevent the introduction of citrus canker (Bacterium citri (Hasse) Doidge). (Proc. No. 151 of 1937.) Citrus fruits from southwest Africa are admitted without restriction, and from a portion of the territory administered by the Companhia de Mocambique under certain conditions. (Proc. 201 of 1937 and 202 of 1937.)

Stone fruits, fresh.--Apricot (Prunus armeniaca L.), cherry (Prunus spp.), nectarine (Amygdalus persica var. nucipersica), peach (Amygdalus persica), plum (Prunus spp.). (Proc. No. 285 of 1936.)
HONEY, jam, syrup, or malt mixed with honey, medicines containing honey, fly tapes or fly papers containing honey, live bees, second-hand hives, and any container used for honey, bees, or beeswax. Medicines containing not more than 25 percent of honey may be passed. Precaution against the introduction of American foulbrood and other bee diseases. (Act No. 11 of 1911, Govt. Notices Nos. 1337 of 1925 and 2032 of 1930.)

OPUNTIA spp. (Proc. No. 151 of 1937.)

PEACH STONES (Amygdalus persica L.). (Act No. 11 of 1911.)

PLANTS PACKED IN SOIL other than special rooting compost, to prevent the introduction of injurious insect pests and plant diseases that occur in soil. (Proc. No. 286 of 1936.)

SUGARCANE PLANTS, ROOTED (Saccharum officinarum L.), to prevent the introduction of injurious pests and diseases of the sugarcane. (Govt. Notice 1793 of 1936.)

TREES AND PLANTS ORDINARILY RAISED FROM SEED, if the seed be easily procurable in the Union or can be readily introduced in a viable condition, to prevent the introduction of injurious insect pests and plant diseases. (Govt. Notice No. 1793 of 1936, as amended by Govt. Notice No. 677 of 1937.)

TREES AND FRUIT-BEARING PLANTS LISTED BY NURSERYMEN within the Union and procurable from them at or below the ordinary price for recent novelties of their class, unless the Department is satisfied that the strain of the variety procurable in the Union is an inferior one or untrue to type. (Govt. Notice No. 1793 of 1936, as amended by Govt. Notice No. 677 of 1937.)

Importation Restricted

BACKHOUSIA CITRIODORA F. Muell.: No limitation on number admissible, but must be grown in quarantine. (Govt. Notice No. 1793 of 1936, as amended by Govt. Notice No. 677 of 1937.)

BAGS, SECOND-HAND: No import permit required; subject to inspection on arrival to ascertain whether any had contained cottonseed, and, if any cottonseed is present, may be refused entry or treated by heat at the expense of the owner.
BEESWAX, and foundation comb: Import permit and inspection on arrival; consignor's sworn declaration that the beeswax has been heated to 212° F. for 30 minutes. No declaration is required for white beeswax. In the absence of the declaration for unmanufactured yellow beeswax, the wax may be heated at the expense of the owner under official supervision, or the Department may agree to the keeping and manufacture of the beeswax under conditions deemed to make special heating unnecessary. Precaution against the introduction of American foulbrood and other bee diseases. (Govt. Notice No. 1793 of 1936.)

BROOMCORN (Sorghum vulgare var. technicum (Koern.) Jav.) and brooms, brushes, and other articles made from broomcorn (except as prohibited) (see item under "Importation Prohibited"): Import permit and inspection on arrival, to prevent the introduction of the European corn borer (Pyrausta nubilalis Hbn.) and other stalk borers. (Proc. No. 282 of 1936.)

CORK, UNMANUFACTURED, derived from the cork oak tree (Quercus suber L.): Import permit and inspection on arrival, to prevent the introduction of the gypsy moth (Porthetria dispar L.). (Proc. No. 282 of 1936.)
11 of 1911, and Proc. N'o. 282 of 1936.) EXOTIC Aii.i LS of the classes: Amphibia, Arachnida, Ayes, Crustacea, Insect, .I.'alia, Mollusca, Myriapoda, 1kiratoda, andd Reptilia: Importation subject to permit and such conditions as may 1e prescribed therein. (Proc. 115 of 1937.) GRA2:1.-/ May not be introduced into the District of Graaff Reinet, nor into the area in tie COpe Province defined in paragraph 3 (1) of the schedule to Proc. :To. 267 of 1936, but grapes may be landed at Cape Town, Simonsto-vn, and Mossel Bay and be con- signed to destinftiois beyond. (Proc. S87 of 1936.) PTA,.S, LiVU. ]-' (s-u definition of plants, p. 1) of all admissible kinds, e;xcept those specifically mentioned, and except fruits, most seuds, b ulbs, and tubers: Importation subject to a permit is- sued by the Union Department of Agriculture -.nd Forestry and in- sopection on arrival. (See rules governing the issuance of permits 9. 7 et seq.) POE-FF.UIT TRLS1-' and all plants of the genern. Malus, Pyrus, and Cydoxiag: Import ,r.'it 'nd inspection on arrival; must be accompanied by an official certificate fro-m the Department of Agriculture or other recognized official institution of the country of origin af- firming that firblight (Bacillus amylovorus (Burr.) Trev.) is not kno-'n to occur on the premises where the plants were grown. Entr:.- is conditional also on thilplants being cut back severely and subjected, without expense to the Government, at Cape Town, Durban, or Pretoria, or other approved place for special inspec- tion and disinfection. (Proc. No. 286 of 1936.) 
POTATOES (Solanum tuberosum L.): No import permit required; subject to inspection on arrival; must be accompanied by a shipper's sworn declaration of the country, of origin and of the locality where grovmwn, together ;zith sufficient data clearly to esta.blish the identity of the conoiinment; also an official certificate dated not more than 30 d7ys before the dispatch of the consignment affirming that potato v-art (Synchytrium endobioticum (Schilb.) Pere.) has not been known to exist within 5 miles of the place or places where I/ See Note, pp. 10 and 11. -7- the potatoes are declared to have been grown, or an official certificate, dated nor more than 9 months prior to the date of arrival of the potatoes, affirming that the s.,id disease has not been known to exist within the shire, county, or other such territorial division comprising the declared place or places of origin. The certificate is not required with potatoes from British East Africa and Western Australia. A certificate will be accepted from the United Kingdom declar- ing that no c.as.s of potato wrt are known to have occurred at the place or places where the potatoes are declared to have been grown, that the only outbreaks of the disease within 5 miles of such places are trivial and without menace to land where potatoes are grown for sale, ?nd that, tin official inspection, the potatoes concerned were found to be aLpparently free from serious disease nd insect pests. (Proc. :o. 286 of 1936.) ROSES (Rosa spp.) from Australia and North America and any other coun- try in which a virus disease of roses in known.to occur: An official certificate affirming that no virus diseases are pres- ent in the premises where they were grown. (Proc. No. 286, 1936.) SEEDS: I.i;ort permits and inspection on arrival. This applies only to seeds of the plants named below, vhich have been included in the definition of "plant."1 (Act I1o. 11 of 1911, Proc. No. 282 of 1936, and Govt. Notice !Io. 1793 of 1936.) 
Alfalfa or lucerne (Medicago sativa L.): Permits issued only to the Department of Agriculture and Forestry. (Proc. No. 282 of 1936, and Proc. 286 of 1936.) Grown in quarantine and produce released if no disease discovered.

Chestnut (Castanea spp.) (except from North America and any other country in which the chestnut blight occurs). (Proc. No. 282 of 1936.)

Cotton: See also item "Cottonseed." (Proc. No. 282 of 1936.)

Elm (Ulmus spp.): (Proc. No. 282 of 1936.)

Maize (Zea mays L.) and barley (Hordeum vulgare L.) (except from Territory administered by the Companhia de Mocambique): Importation limited to 10 pounds of any variety. However, in times of shortage the Department may authorize the importation of maize in bulk under prescribed conditions. (Govt. Notice No. 1793 of 1936, as amended by Govt. Notice No. 677 of 1937.) Maize imported for planting is disinfected in a solution of mercuric bichloride.

Oak (Quercus spp.): (Proc. No. 282 of 1936.)

Tea (except from countries in which Exobasidium vexans Mass. occurs). (Proc. 282 of 1936.) See also item "Tea plants and tea seeds."

Tomato (for importation from countries in which Aplanobacter michiganense E. F. Sm. occurs). (Proc. No. 282 of 1936.) Importation from such countries is not exempt from permit. See item "Tomato seed from Germany, etc."

SUGARCANE CUTTINGS: Import permit; fumigation with hydrocyanic acid gas on arrival and disinfection with solution of copper sulphate. Permits issued only to the South African Sugar Association; canes grown in quarantine greenhouse and then in open ground.

TEA PLANTS AND TEA SEEDS (Camellia thea = Thea sinensis L.) from India, Japan, Chosen, Formosa, and other countries where blister blight (Exobasidium vexans Mass.)
occurs: Import permit and inspection on arrival; must be accompanied by an official certificate from the Department of Agriculture, the Indian Tea Association, or other recognized institution of the country of origin, affirming that the disease is not known to occur within 10 miles of the place where the plants or seeds were produced. (Proc. No. 286 of 1936.)

TOMATO SEEDS (Lycopersicum esculentum Mill.) from Germany, Italy, North America, or any country where bacterial canker of tomato (Aplanobacter michiganense E. F. Sm.) occurs: Import permit required; must be accompanied by an official certificate stating that the seed was produced by plants officially inspected in the field and found free from that disease. (Proc. 286 of 1936.)

TOBACCO (Nicotiana tabacum L.), unmanufactured or leaf tobacco: Import permit and inspection on arrival. Must be accompanied by an official certificate affirming that the tobacco has been inspected and found free from Ephestia elutella Hbn. At the discretion of the Union Department of Agriculture and Forestry the certification requirement may be waived. (Proc. No. 286 of 1936.)

Importation Unrestricted

FRUITS, SEEDS (EXCEPT THOSE SPECIALLY RESTRICTED OR PROHIBITED), BULBS, TUBERS, AND VEGETABLES. However, admissible fruits are inspected and may be rejected if any serious pest is found on them. Consignments of apples are refused entry if more than 5 percent are infested by codling moth or infected by more than one Fusicladium spot over 1/8 inch in diameter to 10 fruits. Affected fruits may be picked out and clean ones passed. Fruit will be fumigated if more than one San Jose scale or oystershell scale is found per fruit.

RULES GOVERNING THE ISSUANCE OF IMPORT PERMITS

(Government Notice No. 1793 of 1936, as amended by Government Notice No. 677, April 30, 1937)

Number of Plants Limited

I.
No permit shall be issued to any one person to introduce into the Union during any one calendar year from oversea or from Portuguese East Africa, the Mandated Territory of South West Africa or any State or Territory in Africa north of the Zambesi, except Northern Rhodesia, Nyasaland or, in the case of plants other than maize and barley, the Belgian Congo:

(a) More than 10 plants of any one variety of:
(1) Rooted forest trees, ornamental trees, nut trees, rose trees, fruit trees and fruit bearing plants (not including strawberries).
(2) Ornamental shrubs, 2/ including azaleas, rhododendrons, camellias, hydrangeas, spireas, lilacs, and oleanders.
(3) Climbing plants, including clematis, begonias, passifloras, wistarias, honeysuckles, jasminums, and solanums, or

(b) More than 100 plants of any one variety of:
(1) Strawberry plants.
(2) Scions or unrooted cuttings of any tree, woody shrub or sugarcane, or

(c) More than 10 pounds of any one variety of maize or barley.

II. Nothing contained in the above regulation shall prevent the Department from:

(a) Introducing stocks, which it may consider of exceptional or special value, into the Union in excess of the number above stipulated for budding or grafting, or issuing a permit to any person for special reasons and subject to such conditions as it may determine, to introduce into the Union any stocks in excess of the number provided in this regulation;

2/ Permits are not issued for species of Berberis that are intermediate hosts of Puccinia graminis Pers., black stem rust of wheat.
(b) Issuing permits to any person to introduce into the Union Backhousia citriodora plants in excess of the maximum provided in this regulation, on condition that such plants be kept in quarantine, at a place approved by the Department, for a period of 2 years or such lesser period as the Department may direct; Provided, That the Department, if it deems expedient, may destroy without compensation to the owner all the plants so introduced, together with the progeny thereof.

(c) Issuing permits for the introduction of maize in bulk in times of shortage, and subject to such conditions as the Department may determine.

III. No permit shall be issued to any person to introduce into the Union:

(a) Any kind of tree or plant ordinarily raised from seed, if the seed be easily procurable in the Union or can be readily introduced in a viable condition.

(b) Any variety of tree or fruit-bearing plant or rose plant listed by nurserymen within the Union, and procurable from them at or below the ordinary price for recent novelties of its class, unless the Department is satisfied that the strain of the variety procurable in the Union is an inferior one or untrue to type.

(c) Any rooted sugarcane plants.

However, the Department may issue a permit for the introduction of any tree or plant specified in either subparagraph (a) or (b) of paragraph (III) in any case where the Department is satisfied that for special reasons such introduction should be exempted from the prohibition of that paragraph.

Plants Not Limited in Number

IV. Ornamental palms and florists' plants, such as violets, carnations, chrysanthemums, geraniums, pelargoniums, fuchsias, orchids, and ferns, shall not be subject to any limitation in regard to the number of such plants that may be introduced into the Union.

Note.--All trees and other hardwood plants and fruit-bearing plants are fumigated with hydrocyanic acid gas before importation is permitted. Herbaceous plants and ornamental palms are fumigated only when insect pests are present for which such treatment is deemed necessary. Grapevines are also disinfected in a solution of copper sulphate. All species of Ribes, Castanea, and Juglans are cut back and disinfected in a 2-percent solution of copper sulphate. Treatments are to be effected without expense to the Government at Cape Town, Durban, Pretoria, or other approved center.
http://ufdc.ufl.edu/AA00026137/00001
.\" .\" Extended attributes system calls manual pages .\" .\" (C) Andreas Gruenbacher, February 2001 .\" (C) Silicon Graphics Inc, September 2001 .\" .\". .\" .TH REMOVEXATTR 2 "Extended Attributes" "Dec 2001" "Linux Programmer's Manual" .SH NAME removexattr, lremovexattr, fremovexattr \- remove an extended attribute .SH SYNOPSIS .fam C .nf .B #include .B #include .sp .BI "int removexattr (const char\ *" path ", const char\ *" name ); .BI "int lremovexattr (const char\ *" path ", const char\ *" name ); .BI "int fremovexattr (int " filedes ", const char\ *" name ); .fi .fam T .SH DESCRIPTION Extended attributes are .IR name :\c value pairs associated with inodes (files, directories, symlinks, etc). They are extensions to the normal attributes which are associated with all inodes in the system (i.e. the .BR stat (2) data). A complete overview of extended attributes concepts can be found in .BR attr (5). .PP .B removexattr removes the extended attribute identified by .I name and associated with the given .I path in the filesystem. .PP .B lremovexattr is identical to .BR removexattr , except in the case of a symbolic link, where the extended attribute is removed from the link itself, not the file that it refers to. .PP .B fremovexattr is identical to .BR removexattr , only the extended attribute is removed from the open file pointed to by .I filedes (as returned by .BR open (2)) in place of .IR path . .PP An extended attribute name is a simple NULL-terminated string. The .I name includes a namespace prefix \- there may be several, disjoint namespaces associated with an individual inode. .SH RETURN VALUE On success, zero is returned. On failure, \-1 is returned and .I errno is set appropriately. .PP If the named attribute does not exist, .I errno is set to ENOATTR. .PP If extended attributes are not supported by the filesystem, or are disabled, .I errno is set to ENOTSUP. .PP The errors documented for the .BR stat (2) system call are also applicable here. 
.SH AUTHORS
Andreas Gruenbacher,
.RI < a.gruenbacher@computer.org >
and the SGI XFS development team,
.RI < linux-xfs@oss.sgi.com >.
Please send any bug reports or comments to these addresses.
.SH SEE ALSO
.BR getfattr (1),
.BR setfattr (1),
.BR getxattr (2),
.BR listxattr (2),
.BR open (2),
.BR setxattr (2),
.BR stat (2),
.BR attr (5)
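On Linux, Python's os module wraps these calls (os.setxattr, os.removexattr), which makes the ENOTSUP behaviour described above easy to demonstrate. A minimal sketch, assuming a Linux system whose filesystem may or may not support user.* attributes:

```python
import errno
import os
import tempfile

def set_and_remove(path, name, value):
    """Attach an extended attribute, then remove it via removexattr.

    Returns True on success, False when the platform or filesystem
    does not support extended attributes (ENOTSUP).
    """
    if not hasattr(os, "setxattr"):       # the os.*xattr wrappers are Linux-only
        return False
    try:
        os.setxattr(path, name, value)    # wraps setxattr(2)
        os.removexattr(path, name)        # wraps removexattr(2)
    except OSError as e:
        # ENOTSUP: extended attributes unsupported or disabled, per the man page
        if e.errno in (errno.ENOTSUP, errno.EOPNOTSUPP):
            return False
        raise
    return True

with tempfile.NamedTemporaryFile(dir=".") as f:
    print(set_and_remove(f.name, "user.comment", b"demo"))
```

Removing an attribute that was never set would instead fail with ENOATTR (reported as ENODATA on Linux), matching the RETURN VALUE section above.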
http://www.fiveanddime.net/ss/man-unformatted/man2/removexattr.2
Gift cards are a very pleasant gift, especially during the holiday season, but they can also represent a good opportunity for crooks. Gift card fraud covers a wide range of fraudulent schemes: fraudsters can exploit merchandise returns, clone or steal the card, or hack the merchant's site by exploiting a security flaw. Gift card frauds are considered minor crimes due to the small amount of money involved in any single transaction, so these crimes are rarely prosecuted. Another important factor in gift card fraud is that a criminal scheme can be arranged without specific skills or specialized tools.

Gift card frauds are very common but rarely disclosed, usually because no customer personally identifiable information (PII) is compromised. A few months ago, Starbucks suffered two incidents involving gift cards. In one case, a security researcher discovered a race condition in the gift card value-transfer protocol that could be exploited to transfer balances between cards without drawing down the source card's credit. In the second incident, attackers exploited the auto-load feature on the gift cards to quickly drain attached bank accounts.

As usually happens, crooks rely on users' bad habits, such as sharing the same login credentials across multiple websites and web services. Crooks harvest login credentials with malware-based attacks, through phishing campaigns, or by buying them in the various black markets of the criminal underground. Once cybercriminals collect login credentials, they try to use them across multiple websites.
The most common gift card fraud schemes can be grouped into a few categories:

- Hacking accounts
- Stealing numbers and gift card cloning
- Acquiring cards in bulk
- Hacking gift cards
- Return of merchandise

Hacking Accounts

The hacking of gift card accounts to steal the associated money is quite common in the criminal underground; this kind of attack can cause greater losses when victims have turned on the auto-load feature. Gift cards are excellent for cashing out money from illegal activities: typically they are used to convert value into another commodity that is easy to monetize, such as credit card rewards programs or travel and hotel points. A cybercriminal who obtains the login credentials to a person's credit card reward program (e.g., through phishing, malware-based attacks, or a set of compromised credentials reused by the victim on numerous websites) can get an e-gift card number to spend immediately. In this way the crooks rapidly convert the money: the fraudster uses the gift card number immediately online and in stores. At this point they can use a service that converts gift cards into cash, such as cardcash.com or cardhub.com; these services usually pay around 60% of the card's face value. In the US it is also possible to find physical kiosks in many malls that offer the same service.

Figure 1 – Card Hub Service

By exploiting this process, the fraudster can convert points or rewards on a hacked account into real cash.

Gift card cloning and card theft

Another common gift card fraud relies on numbers stolen off physical gift cards. Gift cards have a number printed on the card that can be used for manual key entry; the same number is also encoded on a magnetic stripe on the back of the card, exactly like a credit card.
The level of security protecting this information is very poor: the mag stripe stores the data in plain text, and it is very easy to read its contents with a common mag stripe reader. In some cases gift cards are protected by a PIN covered with a coating. Similar to a lottery ticket, the coating needs to be scratched off, but this supplementary layer of protection can also be easily bypassed: a fraudster can remove the coating, read the PIN, print a new coating on the gift card using a specific printer, or replace it with a new sticker easily purchased on the Internet.

Figure 2 – Scratch-off stickers for sale on EBay

Another factor favoring this type of fraud is that many merchants don't require users to provide the PIN. Magnetic stripe readers can be bought on EBay or Alibaba for a few dozen dollars, while a coating printer ranges from a few hundred dollars up to several thousand.

Figure 3 – Mag stripe reader

Thefts of gift cards from stores are very frequent; in this case the fraud is riskier because fraudsters have to steal the gift cards physically. Typically gift cards are not usable until they are activated at the cash register, so fraudsters steal the cards, clone them with a mag stripe reader, then return the gift cards to the store and wait for their activation. The fraudsters repeatedly check balances on the merchant's website until the cards in their possession are activated by a legitimate purchase. Once a gift card is activated, they can transfer its balance to another card or convert the associated money into cash. Clearly the exposure of the fraudsters is greater, but the support of insiders can make this kind of fraud very easy to carry out. Another fraud scheme consists in the use of malware to compromise a POS system in order to grab gift card numbers.
In this case, poorly configured POS systems, or their careless use by store operators, can open the door to hackers who compromise them and steal this precious commodity.

ACQUIRING NUMBERS IN BULK

Another opportunity for crooks is the criminal underground, where it is possible to buy gift cards in bulk. This method is much more rewarding, but requires a significant effort to execute the fraud. Gift card numbers can be harvested in a multitude of ways, including POS hacking, phishing campaigns, hacking the merchant's database, social engineering, physical theft, and accidental disclosure.

Hacking Gift Cards

To better understand how it is possible to hack gift cards, I want to share an interesting analysis conducted by the expert Will Caput in collaboration with the hacker group dc530.org, who, starting from a stack of cards, explained how they can be abused.

Caput and his colleagues illustrated how to exploit weaknesses in gift cards and balance checking, and how hackers can enumerate gift cards without even knowing the card holder. It is important to note that the technique can be applied to any gift card that does not use a CAPTCHA or a PIN, whatever the business that issues it (e.g., retail stores, shops, restaurants).

The team analyzed a batch of gift cards used by a prominent restaurant. The cards had not been purchased, so they were not loaded or activated; that is, they carried no balance.

Figure 4 – Gift cards used in the test

The experts started looking for the generation sequence of the card numbers by analyzing the numbers printed on the cards they had, Caput states. Because only 4 digits vary in the generation process, the number of requests necessary to find a valid card is at most 10^4 = 10,000.
The hackers operated with a stock of gift cards starting from a specific number, so they restricted the space of analysis to the numbers belonging to earlier cards in the stack, which were most likely already sold to customers.

Once the attackers discover the pattern, they can use the online card balance checker; in the case of the restaurant, by visiting its website and looking for "check gift card balance." In order to analyze every single request, they used the Burp Proxy tool. Below is the request to the card balance checker that was intercepted for one of the gift cards:

POST /Payment/GetGiftCardBalance HTTP/1.1
Host: order.xxxxxxx
6523"}

Figure 5 – HTTP Request
Figure 6 – HTTP Response

Then the hackers tried to discover the response for invalid or inactive cards by entering a random card number:

POST /Payment/GetGiftCardBalance HTTP/1.1
Host: order.xxxxxxxxx
2222"}

Figure 7 – HTTP Request

The hackers tried to figure out the different responses depending on the card status (i.e., invalid or nonexistent, or an account balance equal to zero). At this point, the last step is trying the possible combinations of gift card numbers with the Burp Intruder tool.

Figure 8 – Burp Tool invalid gift card numbers

In some cases, restaurants allow users to use gift cards knowing only the number, even without the card it was printed on, but there are also a number of exceptions. Then the attacker needs to clone the gift card using a magnetic stripe writer like the one used by Caput and his collaborators.

Figure 9 – Magnetic strip writer

The attacker takes an empty card and writes the data of a legitimate one onto it. Writing is very easy: the attacker prepares the strings to write on the tracks present on the magnetic stripe, as shown in the following image.
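To make the scale of that last step concrete, here is a rough sketch, in Python, of how a 4-digit search space is enumerated before being fed to a tool like Burp Intruder. The card-number prefix below is invented, and nothing is sent over the network; this is purely to show why 10^4 candidates is a trivially small space.

```python
# Illustrative only: the prefix is made up and no requests are made.
# With 4 variable digits, the whole space is 10^4 = 10,000 candidates,
# small enough to exhaust in minutes if the endpoint has no rate limiting.
PREFIX = "6006490000000"  # hypothetical fixed portion of the card number

def candidate_numbers(prefix):
    """Yield every card number obtained by varying the last four digits."""
    for suffix in range(10_000):
        yield f"{prefix}{suffix:04d}"

candidates = list(candidate_numbers(PREFIX))
print(len(candidates))  # 10000
```

This is exactly why the article stresses CAPTCHAs, PINs, and rate limiting on balance-check endpoints: any one of them pushes the cost of 10,000 probes from trivial to impractical.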
Figure 10 – Magnetic strip writer

The return merchandise fraud scheme

Every day, dozens of gift cards from top retailers are offered for sale online. Some of these are legitimate gift cards sold through third-party sites that resell used or unwanted cards, but a good portion results from illegal activities. Some discounted gift cards are in fact the product of merchandise return fraud.

As explained by the security expert Brian Krebs, this kind of scam mainly impacts retailers that issue gift cards when clients return merchandise at a store without presenting a receipt.

Krebs reported the case of one of his readers, who learned that crooks steal merchandise from a physical store in a retail chain, return it to another store of the same chain without a receipt, and then offer the resulting gift cards for sale to websites like raise.com and cardpool.com at a discounted price. At many stores, for returns made more than 60 days after purchase, or when the receipt is unavailable, the value of the returned goods is refunded to a merchandise card.

Krebs' reader confirmed she was not aware the card was a merchandise return card, a fact that was printed on the front of the card she received.

Krebs, searching for available gift cards for sale online, discovered that such cards are routinely sold for at least 25 percent off their value. "Clothier H&M's cards average about 30 percent off."

The popular investigator made other interesting discoveries by analyzing discounts in industries that don't take customer returns (e.g., fuel stations, restaurants). Discounts on cards from merchants that don't take returns tend to be much lower, between 3 and 15 percent (e.g., gift cards from Starbucks and Chevron). Twenty-five percent off is really high, and experts urge customers to be wary of such offers.
"Normally, it is around 5 percent to 15 percent," said Damon McCoy, an assistant professor at New York University and an expert on fraud involving stored-value cards.

This means we are facing a consolidated illegal activity that, according to the National Retail Federation, will cost U.S. retailers nearly $11 billion this year.

"Investigators say the crimes very often are tied to identity theft rings and drug addicts. Last month, authorities in Michigan convicted a 46-year-old father of four for running a large-scale fencing operation that used teams of prostitutes, heroin users, parolees and panhandlers to steal high-priced items from local Home Depot stores and then return the goods to a different Home Depot location in exchange for store debit cards," wrote Krebs in a blog post.

Clearly gift cards are also a privileged cash-out method for criminals specialized in the sale of stolen credit cards. Crooks use stolen card data to buy gift cards from a range of retailers and offer them for sale online at 20-30 percent discounts.

Conclusion

Gift cards are an easy target for cyber criminals: enumeration of the card numbers is very easy, and the absence of authentication systems makes cloning the cards easy as well.

The Solutionary company provided a few suggestions to protect gift cards.
http://resources.infosecinstitute.com/gift-card-frauds-a-profitable-business/
The link to download is: The due day is midnight MST tonight. What I have uploaded is the inventory program part two. I cannot figure out how to write the sub-class. "For LogicPro only"

Are you there? I need it by midnight tonight. I posted my code from the inventory program part two, but I cannot seem to grasp the third part. The link to my code is: Also it needs to be in basic Java, as this is an introduction to Java class. If you could just modify my code to meet the requirements. Thank you in advance for your help.

Ok, and thank you. Also note that the sub-class will inherit from the parent class.

Are you still working on it?

Yes it is ok, but please format it as a student's code (comparable to mine) so there will be no flags popped with my instructor.

It is ok, but please keep it in the "student format" just so it will not raise flags with my instructor. I would be pleased if it were written so that I could submit it without modifications. Thank you for your help!!! I will keep you in mind when I need help again. God bless!

This program does not have a sub-class and it is the same code that I submitted. This is not what I wanted to complete this assignment. Please review the assignment and resubmit.

Please resubmit. This code does not contain a sub-class and it runs exactly the way my code ran. Please review the requirements for the assignment and re-submit the results. Please note the code you received from me was inventory 2. The assignment is for inventory 3 and you are required to modify the code to meet the requirements for inventory 3.

Yes, but it does not display the restocking fee.

Now I get a compile error that is:
C:\Users\Bill\Desktop\Inventory2.java:9: error: class Inventory is public, should be declared in a file named Inventory.java
public class Inventory {

Now I get an error consisting of:
C:\Users\Bill\Desktop\Inventory2.java:9: error: class Inventory is public, should be declared in a file named Inventory.java
public class Inventory {
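For what it's worth, the structure the asker keeps describing, a parent inventory class plus a sub-class that inherits from it and adds a restocking fee, looks roughly like this. It is sketched in Python rather than Java for brevity, and the class names, field names, and 5% rate are all invented, since the actual assignment code isn't shown.

```python
class InventoryItem:
    """Parent class: a plain inventory item."""
    def __init__(self, name, units, price):
        self.name = name
        self.units = units
        self.price = price

    def value(self):
        """Total value of the stock held for this item."""
        return self.units * self.price


class RestockableItem(InventoryItem):
    """Sub-class: inherits everything above and adds a 5% restocking fee."""
    RESTOCK_RATE = 0.05

    def restocking_fee(self):
        return self.value() * self.RESTOCK_RATE


item = RestockableItem("widget", units=10, price=2.50)
print(item.value())           # 25.0
print(item.restocking_fee())  # 1.25
```

In Java the same shape would be a sub-class declared with `extends`, placed in its own file named after the public class, which is what the compile error at the end of the thread is complaining about.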
http://www.justanswer.com/homework/7r8f8-2-checkpoint-inventory-program-part-resource-java.html
A Countdown-style Numbers Game in Python

This post is largely because repl.it have an embeddable Python environment and I want to try it out. And here (hopefully) is an embedded version:

It works! Not every time, and not flawlessly (for example, waiting for user input just causes my browser to hang), but it's still useful. There is also an iframe version that's far more powerful, in that not only does user input work, but the script is entirely editable:

Same code, but hosted locally rather than embedded from a third party:

import random

big = [100, 75, 50, 25]          # the four "big" numbers
small = list(range(1, 11)) * 2   # two each of the small numbers 1-10

nummax = 6
numbig = int(input('big?'))      # how many big numbers to include (0-4)

selected = random.sample(big, numbig) + random.sample(small, nummax - numbig)
target = random.randint(100, 999)

print('{:>14d}{:>30s}'.format(target, ', '.join(str(x) for x in selected)))
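Since the embedded REPL hangs on `input()`, one workaround is to wrap the selection in a function that takes the big-number count as an argument and an optional seed. The function name here is mine, not part of the original script.

```python
import random

def deal(numbig, seed=None):
    """Pick six numbers (numbig of them 'big') and a three-digit target."""
    rng = random.Random(seed)        # private RNG so runs are repeatable
    big = [100, 75, 50, 25]
    small = list(range(1, 11)) * 2
    selected = rng.sample(big, numbig) + rng.sample(small, 6 - numbig)
    target = rng.randint(100, 999)
    return selected, target

selected, target = deal(2, seed=42)
print(selected, target)
```

Seeding also makes it possible to share a puzzle: the same seed always deals the same six numbers and target.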
https://www.zoril.co.uk/2016/12/22/a-countdown-style-numbers-game-in-python/
Working With SOAP Calls

Hey, SOAP is one of the popular ways of working with web services, the other major one being REST (covered in another of my blogs). While working with any SOAP-based API, you will get a WSDL (Web Services Description Language) document describing all the methods supported by the web service. It's a rather complicated XML document.

A few days back I was working on consuming SOAP-based web services. I could not use GroovyWS, as the SOAP calls I needed to make were a little complicated, so I used the old httpclient libraries to make the calls. Here is a very simple snippet of the code I was using:

import org.apache.commons.httpclient.*
import org.apache.commons.httpclient.methods.*

class SoapcallController {
    def index = {
        def url = ""                 // the web service endpoint (stripped here)
        def payload = ' ACHACH '     // the SOAP envelope XML (depends on your WSDL)
        def method = new PostMethod(url)
        def client = new HttpClient()
        payload = payload.trim()
        method.addRequestHeader("Content-Type", "text/xml")
        method.addRequestHeader("Accept", "text/xml,application/xml;q=0.9")
        method.setRequestEntity(new StringRequestEntity(payload))
        def statusCode = client.executeMethod(method)
        println "STATUS CODE : ${statusCode}"
        def resultsString = method.getResponseBodyAsString()
        method.releaseConnection()
        //println new XmlSlurper().parseText(resultsString).toString()
        println resultsString
    }
}

In this code we are hitting the url with the payload and getting the response back in resultsString, which will itself be XML. The payload will change depending upon the WSDL provided. To generate the payload XML from your own WSDL you can use. It's a rather nice tool to use and play with before using the web service in your application.

And of course, rather than working with raw XML it is always better to work with objects and method calls; search for wsdl2java, or better still Axis2, on Google for generating classes from a WSDL. I will put up a blog on generating classes from WSDL using the Axis2 API in IntelliJ IDEA shortly.
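For comparison, the same round trip minus the HTTP call (building a SOAP 1.1 envelope and then slurping the XML response) can be sketched in Python. The method and tag names below are invented, since in practice the real ones come from the WSDL.

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def make_envelope(body_xml):
    """Wrap an XML fragment in a minimal SOAP 1.1 envelope."""
    return (
        f'<soap:Envelope xmlns:soap="{SOAP_NS}">'
        f"<soap:Body>{body_xml}</soap:Body>"
        "</soap:Envelope>"
    )

# Stand-in for method.getResponseBodyAsString() in the Groovy snippet
response = make_envelope("<GetStatusResponse><status>ACH</status></GetStatusResponse>")

# Rough equivalent of Groovy's new XmlSlurper().parseText(resultsString)
root = ET.fromstring(response)
print(root.find(".//status").text)  # ACH
```

The point is the same as in the Groovy version: the transport is just an HTTP POST of text/xml, and everything SOAP-specific lives in the envelope you build and the response you parse.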
I tried using external webservice thru groovyx.net.ws.WSClient but it fails to connect. I am trying to create the xml for my WSDL Soapclient. But it gives some error. wsdl is- Will appreciate your input. thx This was extremely helpful. I have been trying the easy way such as WSclient plugin etc. but none works. The problem is when you’re passing complex objects, it can be difficult to match the WSDL and your passed objects becomes null. I finally gave up and went to use this method, seems like a brute force solution, BUT IT WORKS. Another this, it gives a better understanding of SOAP. Hey guys! GroovyWS can support complex scenarios! It’s a bit tricky and works well. And code became much more readable than “raw SOAP”. =) Check it out: @ lukas Thanks for the pointing it out. I have updated the blog. Hope it will be more useful now. I see that this is a SSL SOAP service; however, when I try to execute your code against a SSL service, I get the following exceptions: Exception thrown:11):1035) at com.sun.net.ssl.internal.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:124):1112) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:623) at com.sun.net.ssl.internal.ssl.AppOutputStream.write(AppOutputStream.java:59) at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65) at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123) at org.apache.commons.httpclient.methods.StringRequestEntity.writeRequest(StringRequestEntity.java:146) prueba.run(prueba.groovy:35) Caused by: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target:1014) … 18 more Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:238) … 22 more And the owner of the WS keeps 
telling me that there is no certificate. Any clue? Thanks. Sachin: If this is for somebody new to SOAP then it’s even worse of an example of SOAP because the first thing a newbie would ask is “how did you know what namespaces to put in the XML?”. I think the title of your post is misleading in this respect. It has nothing to do with SOAP – it’s a pure http get operation. @ Lukas Very true, I completely agree with you. I mentioned about Axis2 and wsdl2java in post. The whole purpose of this blog was to give an idea to a new person who has never implemented a soap call about how things are working behind the scene. This code is never meant to be a part of a production system, at best it can be used at making an isolated call to a server in a POC. Yes, that’s how we did things back in the 90’s. The code posted can only be really called a hack and ust a caveat to whoever reads this post: code like the above construction of SOAP requests should never ever make it out into a production or even a testing environment. You’ll get into a world of pain debugging namespace and other issues the moment your XSD becomes just slightly more complex. There are many good webservice libraries out there Axis, Apache CXF etc which will abstract all of this away easily. I have not worked with PHP at all, so I don’t know what kind of support it provides, but in java you do need to select one of the various ways provided which fit your bill. Axis2 is a very sophisticated api for achieving most of what you want to do with soap whether its consuming external web services or exposing your own service. Once you are sure of what you want to do, you will find a tool to achieve it in world of java. lol looks like good old PHP days in 2006 for me ; ) but pragmatism is good some times. Just reading a book about web services for pure java and im really disappointed how poor and akward is the soap support in Java …. million ways to do it and not single one is really nice …. its just pain. Art
http://www.tothenew.com/blog/working-with-soap-calls/
HOPKINSVILLE KENTUCKIAN

Watch the date after your name; renew promptly, and not miss a number. The Post Office regulations require subscriptions to be paid in advance.

THE WEATHER. FOR KENTUCKY - Cold.

Vol. XXXV. HOPKINSVILLE, KENTUCKY, TUESDAY, FEBRUARY 4, 1913. NO. 15

Editorial Comment

Jim Thorpe, the Indian athlete, educated at Carlisle, has gone into professional baseball and has signed with the New York Giants at $7,500 a season. Thorpe has confessed to having played as a professional before and has returned the amateur prizes won in the Marathon contests last year.

John McMeloan, of Paris, Tenn., has just gotten out of a Nashville sanitarium, where he was treated for blood poison in his arm. He was defeated by a "scratch" for assistant clerk of the House, but the scratch that caused his trouble was a slight one on his hand.

Secretary Wilson, of the Agricultural department, who has served 16 years, wants to hold on for a while under President Wilson, to break all cabinet records by serving under four Presidents.

The oldest wild goose in captivity, caught 45 years ago and domesticated, has just died at Murfreesboro, Tenn., supposed to be 47 years old. The old bird, a gander, had been blind 5 years.

Dr. H. Littlefield, a homeopathic physician and scientist of Seattle, claims to have created life artificially. His creation is an insect organism of the octopus type.

A man playfully swinging a piece of wire as he walked through an electrical plant at Anderson, S. C., struck a wire overhead and was dead when picked up.

Mrs. D. P. Dilworth, of New York, formerly Miss Grace Carpenter, was robbed of $3,000 worth of jewelry by burglars who entered her apartment.

The fusionists elected an entire list of officers to be chosen by the legislature in Tennessee, with the single exception of the Senator for the long term.
The Idaho editors sent to jail and fined $500 each for contempt of court paid their fines with pennies collected for them while they were in jail.

Judge South at Frankfort has decided that it is mandatory upon the State Treasurer to stamp as interest-bearing all state warrants not paid.

The annual banquet to Kentuckians resident in New York will be given Feb. 12, and Gen. Basil W. Duke will be one of the speakers.

Some of the Northwestern States are urging J. A. Everitt, President of the Farmers Union, of Indianapolis, for Secretary of Agriculture.

W. McAffrey was murdered and his dead body thrown into the city reservoir at Cookeville, Tenn. It was found a few hours later.

After all, Castro got in Friday on a writ of habeas corpus, which will be heard for permanent decision next Friday. He is very mad.

Frank M. Ryan, one of the prominent dynamiters in the Leavenworth federal prison, has been released on a $70,000 bond.

John J. Simms, certainly 106 and believed to be 115 years old, died in Wilson County, Tenn., a few days ago.

Fritzi Scheff, the actress, has just secured a divorce from her latest husband, John Fox, Jr., the author.

A child rescued from the flood at Smith's Mill, Ky., was so hungry that he ate a raw potato.

Judge T. J. Nunn, whose health is very much impaired, is resting up at Petersburg, Fla.

H. M. Stanley has been elected Councilman in Henderson, to fill a vacancy.

Councilman Louis Metzner, of Henderson, died last week.

END COMES TO ESQ. H. B. CLARK

Venerable Ex-Justice Succumbs Sunday to Ills Incident to Old Age.

WAS A PROMINENT CITIZEN.

Interment in Riverside Cemetery After Funeral Services Yesterday.

Hosea Ballou Clark, the venerable former justice of the peace, better known as Esquire "Joe" Clark, died at his home near Gracey Sunday morning, aged 79 years. He had been in a critical condition for several weeks from ills incident to old age.

Esq. Clark was a son of Rev.
Joab Clark and his wife.E izabeth Brasher Clark, and was born in this county March 16, 1834. His father was three times married and wa3 the father of 14 children. Eq. Clark waSthe fourth child by his first, wife. He is survived by three of iiis half-sisters, Mrs. Victoria Fruit, Mrs. Ellen Bowles and Mrs. Mollie Nichols, and one half-brother, Joab Clark. He was twice married. His first wife was Miss Mildred Pyle. His second wife, to' whom he was married in 1861, was Miss Elizabeth S. Cox, of Gracey, who survives him, together wjth eight children Geo. M , Claude K.. Harry C, Clifford A. and Albert H. Clark, and Mrs. Mollie A Potter and Mrs. Ellen Rich, of Dawson, and Mrs. Ada Baker, wife of George Baker, of Elkton. E-q. Clark lived on a farm in the western part of the county, and was one of most prominent citizens in the county. He was a justice of the peace al most continuously for nine terms, beginning in 1871 and covering a pe ri 3d of 36 years. He was a Republi can in politics, but seldom had oppo sition, being so popular with his neighbors that people of all parties voted for him terra after term. He also served one term in the legisla ture. He was a leader of affairs in his community and enjoyed the re spect of all who knew him. His father was a preacher of the Univer- salist church and he held to the same faith. His funeral was held yesterday at 1 p. m. at the Universalist church in this city by Rev. J. B. Fosher. the pastor, who paid a high tribute to his virtues; Interment was at Riv erside Cemetery. , A-very large crowd attended from both county and city, and there were many beautiful floral offerings. The pall bearers were J. W. Wood, Buckner Campbell, Charles Smith and John M. Renshaw.of the county, and T. E. Bartley and A. W. Wood, of this city." TURKS ARE LAST TO QUIT Moslem Army Also Instructed Not to Shoot Until Attacked London, Feb. 3. 
The porte has ordered the Turkish plenipotentia ries not to leave London untU hoetili ties aio resumed and has instructed the army to await the attack be fore firing a shot. Thus the Ottoman who, with the exception of the Montenegrins, are the only delegates left in London, remarked today that nobody could accuse them of not having done all that was humanly possible to come to terms. Money For Teachers, Frankfort, Ky., Feb, 1, Warrants for $410,410,55 for the rural school teachers and $96,120.21 for tho city school teachers wore made out today by the Department of Education and sent to the State Treusurer for payment. SUDDEN DEATH OF J. W. P'POOL Lived Only a Few Minutes' After Physicians Were " Called. POPULAR TOBACCO BUYER The Deceased Is Survived By His Wife and Five' Sons. Mr. John W. P'Pool died at his home suddenly last Sunday after noon, aged 56 years. The cause of his death was neuralgia of the heart. He was ill but a few min utes when Dr. Southall was sum moned. When the doctor arrived he saw at first glance that the end was near, and in a very shor time Mr. P'Pool breathed his last. The deceased is survived by his wife. and five sons, Corbett, Hers- chel, Norman, Harry and Golay. For several years he has been a buyer for the Imperial Tobacco company, and was one, of the beat known buy ers in this district. Funeral Bervice4 will bo held at the residence this afternoon at 2:30 o'clock. Interment in Riverside cemetery. . TOBACCO BURNED Incendiary Fire at Cobb Recalls Old Troubles. A freight car on the Illinois Cen trahrailroad at Cobb, jCy,,,in. which had been loaded seven hogsheads of hand-packed tobacco consigned to a buyer at Clarksville, Tenn., was burned Thursday night by incendia ries, as believed, giving rise to some uneasiness that a revival of "night riders" is imminent. One report from Cobb is to the effect that the car and tobacco were burned by a band of men. 
Another report is that it was undoubtedly set afire, but by whom and just in what way is not known. In either event there is said to be no clew whatever as to who the firebugs were or their mo tive?. There was about 7,000 pounds of tobacco, and it is a total loss. While the burning of the tobacco bears all the ear-marks of the "night riding" of live or six years ago no serious import is attached to it. BOARD OF TRADE At Paducah Arranged for Com petition With Paducah. The trade in building sand in Hop kinsville is considered by Paducah as worth reaching out for, and the Pa ducah board of trade, realizing this, got busy with the I. C. railroad. The result as.given by the Paducah Sun, was as follows: Previous to January 18 the freight rate on sand from Paducah to Hop kinsville, a distance of 77 miles, was 90 cents a ton, while the rate from Henderson to Hopkinsville, a dis tance or uu miles, was ou cents a a ton from both Paducah and Hen derson. It is recognized that there Is a bet ter grade of sand at Paducah but the Paducah dealers have been un able to compete with the Henderson dealers for the business at Hopkins viile because of the. higher freight rates. This is only one of the many discriminations that the board has taken up. Bought Printing Outfit. County Attorney John C. Duffy has closed a deal with Mrs. L. Yonts by which ho becomes the owner of tho plant of the Hopkinsville Mos senger. that shut down about two years ago, car. uuity traded some town lots for the plant. THE WEED POURING IN ! ' T 1 . Receipts Continue to Be Beyond t Auiiiiy 10 iianaie "Them. ' ALL BUSINESS NOW -BOOMING Loose Floors Crowded and The .. Prices are Highly Satisfactory. Last week was a record one in both receipts and prices for the year. The weed had full risht of way into the city and the rush to get it into town and return home for reloading wan unabated. 
Owing to the late ness of the season for stripping and the nearness of the time for making preparations for the 1913 crop, farm ers or.i having a strenuous time get ting the old crop out of the way. This must cjntinue for some time yet. The delivery of the crops and cash ing of checks at the banks has given a new impluse in all fall lines of trade, as the farmers are now sup plying themselves with many needs necessarily delayed by . their having to wait so long to realize on their tobacco, a product that brings to the farmer money that can come from no .other source at this time of the year. rhe bank pfficiala are kept so con stantly at work they go home at night with a reeling otrenet mat the da,y's. work Is oyer. But the merchants are not complaining not that they do not feel tired, too, afethe close, of the day but they are realizing swhat they have been waiting for, a general quickening of their business, which always comes when the delivery season is at its flood. The local hogshead business is re ported to have been quiet but firm for the good leaf, suitable for cigar wrappers, binders and the lug fillers. Low lugs, $5 50 to $6; common, $6 to $6 50; medium, $6.50 to $7.25; good $7.50 to $8; fine, $8 to $9. Low leaf, $8.50 to $9; common, $9 to $10; medium, '$11 to $11 50; good, $11 to $12 50; fine, $12 to $13-50. Receipts for the week, as reported by Inspector Abernathy, were 29 hhds; sales for the week, 154 hhds. receipts, for the year, 106 hhds; salej for the year, 488 hhds. He reported . .the loose floor sales for the week at 722.595 lbs. Sales for the season 2,313,030 lbs. The receipts at the warehouses of the Imperial, American Snuff Co., Regie and Geo. W. Helme Co. were so heavy that they, like the loose floor men, were compelled at time? to put on a night force of handlers to unload the wagons, many of them having to wait something like twelve hours to get their turn at the re ceiving doors. 
The prices at the loose floors may fairly be quoted at the following figures: Trash. $3 to $1; lugp, $4 to $6, leaf, $6 to. $11, but few crop3 reached these figures. ' T. B. Waggoner, of North Christ ian, who turned down a private of fer of $2 for trash, $4 for lugs and $8 for leaf, put it on the loose floor and realized $5.25 for trqsh, $6 for lugs and $9.20 for leaf. Burley Tobacco. AT LEXINGTON There seem to be an increase in the quantity of common grades and a decrease of the good. Friday 700,000 lbs. were sold and $25 was the top price for tho day, there being no high grades on the market. AT MT. STERLING. The market was off on colory grades but higher on red tobaccos General prices ranging from $4 to $23. AT M AYSVlLLB. Total sales Friday approximated 840.0QO lbs , at prices ranging from $1.60 to $36. AT CARLISLE -The sales Friday reached 220 000 lbs., at prices rang ing from $2 70 to $39. ONE TERM AMENDMENT Passes The Senate With Just Enough Votes To Get Through. NOW GOES TO THE HOUSE. Bradley As Usual Voted With The Losing Side. A constitutional amendment, which would restrict the President of the United States to a single term of six years and would bar Woodrow Wilson, Theodore Roosevelt and William H. Taft from again seeking election, was approved by the Senate Saturday by one over the two-thirds majority. After a three-days fight, in which the Progre3sives joined with many Republicans in opposing the restricted presidential term, the Senate adopted the original Works resolution by a vote of 47 to 23. It now goes to the House. 32D ANNVERSARY Of C. E. in U. S. Observed at First Presbyterian Church. The 32d anniversary of the Chris tian Endeavor Society was appro priately celebrated at the First Pres byterian church Sunday evening, February 2, at 7 o'clock. A special program was carried out, consisting or short talks along U. rJ. lines, in terspersed by music by the choir and a violin solo by Jame3 West, Jr., which was highly enjoyed. 
This society is the oldest organiza tlon of its kind in the city and, al though its membership is small, it is justly proud of its record of be nevolences during the past year The treasurer's report showed much activity along missionary and chari table lines, $25 having been sent to missions, $10 for state work, $10 for charity in the city, besides smaller amounts for flowers for the sick and prison wors. A delegate s expense to Synod, at Princeton, and to the C. E. Convention at O ensboro, was also met by the society which! amounted to $12 In round num bers this society has given $70.00 during the past ten months. WEATHER FOR WEEK Rain and Snow and Average Temperatures Everywhere. Washington, Feb. 2. Indications are that during the coming week temperatures will be nearly the sea sonal average in all parts of the country, with well distributed pre cipitation, according to the weather bureau bulletin. "A disturbance that now covers the southwest," says the bulletin, "will move northeastward, crossing the great central valleys Monday or Monday night and the eastern states Tuesday or Wednesday. This dis turbance will cause general ruins and snows Monday in the southwest, and Monday and Tuesday through out the region between the Missis sippi and the Atlantic coast. "Another disturbance will appear in the far west about Weduesday, move eastward over tho middle west Thursday or Friday and the eastern states near the close of the week, This disturbance will bo attended by general precipitation, and will in all probability terminate the prolonged period of dry weather in the Pacific states. A change to considerably coldor weather will overspread the northwestern states about Thurs day." Davo Morgan, with the Row borough Co., went to Morganfield Sunday to be with his futhar, who is very ill. T. S. WINFREE GAME WARDEN Commission and Instructions Were Received Last Sunday. . LAW WILL BE Citizens Are Asked to Co operate With Him. Constable Thomas S. 
Winfree re ceived his commission as Fish and Game Warden from the Secretary of State last Saturday. His badee. which is numbered J.18, and his sup plies have also been received. The commission expires January lst,1914. Mr. Winfree's commission gives him State-wide jurisdiction. When- ever he catches violators in the act he can arrest without a warrant. Ho wants it distinctly understood that no favors will be Bhown and asks that all citizens co-operate with him in his efforts to arrest and have every violator punished to the full extent of the law. Under the game and fish law no body has a right to hunt out of sea son on his own premises without icense. Seining for minnows is also forbidden by law, but may be taken with a dip-net. Seining is positively forbidden, but spearing is permissi ble. The penality for killing song and insectiverous birds is very heavy and caging song birds is a violation of the law also. BASKET BALL Two Good Games to Be Seen this Evening. Tonight is basket ball league night and the contests to be staged should be as good as any offered for the season. The crowds at the league games so far have been much below the standard hoped for by the direc tors, but it is hoped that they will show an increase from now on. The teams in the league are all composed of experienced players, and all of them are just now hittincr their stride. McLean Colleee. al though it lhas always had the rec ord of having good basket ball teams, has one of the best in the history of the school. In Captain Burnett they have a forward whose equal would be hard to find. He plays like a de mon, so to speak, and scarcely ever misses a shot at the basket. It is well worth the price of admission to see him play. Co. D has just com menced to hit their stride and on last Tuesday came near slipping one over on the college boys Co. D has some mighty good material and they are lust now finding themselves. Be sides the main contest, another game between the second teams of Co. 
D and McLean will take place. This bids fair to be a spirited contest in every respect. These games will be played at the Cook building and will start promptly at 8 o'clock.

M. C. NORTHINGTON

Former Mayor of Clarksville Dies Suddenly.

Clarksville, Tenn., Feb. 3. Ex-Mayor Michael C. Northington died at his home Saturday morning, death resulting from a stroke of apoplexy, which he suffered earlier in the night. Friday he was in his usual health at his office and on the streets. At night he retired early, but reposing in bed talked to members of his family who were in the room. Suddenly he ceased talking and was seemingly suffering from a convulsion. A physician was summoned, but he never regained consciousness, dying six hours later.
http://chroniclingamerica.loc.gov/lccn/sn86069395/1913-02-04/ed-1/seq-1/ocr/
Opened 7 years ago
Closed 7 years ago
Last modified 3 years ago

#11765 closed Uncategorized (invalid): models.DateField(null=True, blank=True) accepts None, but not the empty string ''

Description (last modified by )

For some reason I cannot create a record with

class MyModel(models.Model):
    my_date = models.DateField(null=True, blank=True)

MyModel(my_date='')

It must be either None or a valid date.

Change History (8)

comment:1 Changed 7 years ago by

comment:2 follow-up: 4 Changed 7 years ago by

comment:3 Changed 7 years ago by

blank=True tells the user that they don't have to fill this field; it doesn't mean the field holds an empty string. Django assigns None if the field is empty.

comment:4 Changed 7 years ago by

The problem is that DateField.to_python() (or DateTimeField's) doesn't treat '' specially. Neither does IntegerField nor FloatField, e.g. It's not clear to me why date fields would/should be singled out for special treatment here. Form fields consistently handle treating empty string input as None, but if you are operating at the model level then you need to use None when you mean None, not an empty string.

comment:5 follow-up: 7 Changed 7 years ago by

I've got the ValidationError exception in Django's own create_update generic view. A ModelForm instance validates fine, but the save() method raises exactly this exception. There's a simple workaround, but IMHO it should be performed by the field instance.

def clean(self):
    cleaned_data = self.cleaned_data
    end_time = cleaned_data.get('end_time')  # this is not None if user left the <input/> empty
    # ... do stuff with data
    if not end_time:
        cleaned_data['end_time'] = None
    return cleaned_data

Not sure if it warrants a different ticket.

comment:6 Changed 7 years ago by

In case it helps anyone else tearing their hair out, here's one other thing to check (see Stack Overflow ticket): if you have any default or helper text in the field, that will blow up on validation before you get to the clean() method.
comment:7 Changed 4 years ago by

I agree. I just fixed my problem by adding

def clean(self):
    cleaned_data = super(MyForm, self).clean()
    if not cleaned_data['my_date']:
        cleaned_data['my_date'] = None
    return cleaned_data

to my form. Without this I received the error

ValidationError: [u"'' value has an invalid date format. It must be in YYYY-MM-DD format."]

even though I had blank=True, null=True set in my model and required=False set in my form.

comment:8 Changed 3 years ago by

I got this error using crispy-forms and not specifying my Date field in Meta.fields. Adding a clean() method didn't help, but adding the Date field back to Meta.fields did.

The problem is that DateField.to_python() (or DateTimeField's) doesn't treat '' specially, see source:/django/trunk/django/db/models/fields/__init__.py@10545#L464
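The behaviour discussed in the thread, that '' is not treated like None at the model level, can be sketched without Django at all. The helper below is illustrative only (its name and the fixed ISO format are assumptions, not Django's actual implementation): it maps empty strings to None before attempting to parse.

```python
import datetime

def coerce_date(value):
    # Hypothetical helper, not Django code: treat '' like None,
    # otherwise parse an ISO YYYY-MM-DD string into a date.
    if value in (None, ''):
        return None
    if isinstance(value, datetime.date):
        return value
    return datetime.datetime.strptime(value, '%Y-%m-%d').date()

print(coerce_date(''))            # None
print(coerce_date('2009-08-25'))  # 2009-08-25
```

Applying such a coercion in a form's clean() method, as in comments 5 and 7, sidesteps the ValidationError.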
https://code.djangoproject.com/ticket/11765
In a neural network, we use various kinds of layers which are designed for different predefined functions. These functions perform mathematical operations on the data to reach the goal of the network. We see various examples of layers like input, output, dense, flatten, etc. Similarly, the lambda layer is a layer that helps with data transformation between the layers of a neural network. In this article, we are going to discuss an overview of the lambda layer, its importance, and how it can be applied to a network. The major points that we will discuss here are listed below.

Table of Contents

- What is a Lambda Layer?
- Building a Keras Model
- Keras Model with Lambda Layer
- Defining Function
- Lambda Layer in Network
- Difference Between the Models

What is a Lambda Layer?

In neural networks, we see many kinds of layers like convolutional layers, padding layers, LSTM layers, etc., and all these layers have their own predefined functions. Similarly, the lambda layer has its own function: to perform editing of the input data. Using the lambda layer in a neural network we can transform the input data, applying the expressions and functions of the lambda layer to it. Keras provides a module for the lambda layer that can be used as follows:

keras.layers.Lambda(function, output_shape = None, mask = None, arguments = None)

More simply, we can say that using the lambda layer we can transform the data before applying it to any of the existing layers. For example, to square each value inside the input data we can apply lambda with the following expression:

lambda x: x ** 2

The function of the lambda layer can be considered data preprocessing, but traditionally we perform data preprocessing before modelling. Using the lambda layer we can instead process the data in between the layers of the model.
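Before wiring this into a network, the transformation itself can be understood in plain Python. A Lambda layer simply applies a function element-wise to whatever tensor flows through it; the sketch below uses a plain list instead of a tensor, so no TensorFlow is required:

```python
# The function a Lambda layer would wrap, applied element-wise.
square = lambda x: x ** 2

batch = [1.0, 2.0, 3.0]
print([square(v) for v in batch])  # [1.0, 4.0, 9.0]
```

Inside Keras the same function would be applied to every element of the incoming tensor in one vectorized operation.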
In this article, we will see an overview of the lambda layer and how it can be applied to a neural network built using Keras.

Building a Keras Model

When we talk about approaches to building networks using Keras, we can use three types of APIs:

- Functional API
- Sequential API
- Model Subclassing API

In this article, we are using the Functional API. We will try to build a model that can recognize images using the MNIST dataset. Before building a model we should know that in a neural network we stack layers on top of one another; the layers are available from the layers module of Keras, and we can use TensorFlow as a backend for Keras. Let's start the modelling by importing the required modules.

import numpy as np
import matplotlib.pyplot as plt
from tensorflow.keras import layers, models, datasets, utils, optimizers

Before the modelling procedure, we are required to load and preprocess the data. As discussed before, we are using the MNIST data here, which can be called from the datasets module of Keras.

(X_train, Y_train), (x_test, y_test) = datasets.mnist.load_data()

The images in the data can be visualized using matplotlib.

plt.figure(figsize=(20, 4))
print("Train images")
for i in range(10, 20, 1):
    plt.subplot(2, 10, i + 1)
    plt.imshow(X_train[i, :, :], cmap='gray')
plt.show()

Output:

Here we can see that the images in the data are images of handwritten numbers. Each image is 28 x 28 pixels, i.e. 784 values when flattened. Before putting the data into the model we are required to preprocess it so that it behaves properly in the model. It is suggested to reshape each MNIST image into a vector of 784 values and change the data type to float64 so that it will be easier for the model to process all the values in the data set.
x_train = X_train.astype(np.float64) / 255.0
x_test = x_test.astype(np.float64) / 255.0
x_train = x_train.reshape((x_train.shape[0], np.prod(x_train.shape[1:])))
x_test = x_test.reshape((x_test.shape[0], np.prod(x_test.shape[1:])))
# one-hot encode the labels, as required by categorical_crossentropy
y_train = utils.to_categorical(Y_train, 10)
y_test = utils.to_categorical(y_test, 10)

Now we start building the network using the layers given under the layers module of Keras. In this section of the article, we build a model using some dense layers and activation layers, together with input and output layers. The input layer requires a shape argument which defines the shape of the input, while each dense layer specifies its number of neurons using the units argument. A dense layer can be followed by an activation layer. We can simply stack these layers using the following code:

input = layers.Input(shape=(784), name="input")
dense_1 = layers.Dense(units=500, name="dense_1")(input)
activ_1 = layers.ReLU(name="activ_1")(dense_1)

After this we can stack some more layers on the network:

dense_2 = layers.Dense(units=250, name="dense_2")(activ_1)
activ_2 = layers.ReLU(name="relu_2")(dense_2)
dense_3 = layers.Dense(units=20, name="dense_3")(activ_2)
activ_3 = layers.ReLU(name="relu_3")(dense_3)

A dense layer is required in the second-to-last place of the network, with one neuron per class in the data set. Since we have 10 classes in the data set we can set the dense layer like the following:

dense_4 = layers.Dense(units=10, name="dense_4")(activ_3)

After stacking all the layers we can add an output layer to the network; the output layer is a softmax over the classes:

output = layers.Softmax(name="output")(dense_4)

After building the layers we are ready to connect them all: using the Model class from Keras we can define the input and output layers of the model, and all other layers of the model will sit between the input and output layers.
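As a sanity check on the architecture above, the parameter count of each Dense layer can be computed by hand as units * (inputs + 1), where the +1 accounts for the bias term. A quick sketch, pure Python with no Keras needed:

```python
def dense_params(inputs, units):
    # weights (inputs * units) plus one bias per unit
    return units * (inputs + 1)

# Layer sizes used in the model: 784 -> 500 -> 250 -> 20 -> 10
sizes = [784, 500, 250, 20, 10]
for n_in, n_out in zip(sizes, sizes[1:]):
    print(n_in, '->', n_out, ':', dense_params(n_in, n_out))
# 784 -> 500 : 392500
# 500 -> 250 : 125250
# 250 -> 20 : 5020
# 20 -> 10 : 210
```

These totals should match the Param # column printed by model.summary().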
We can then compile the model using the compile function and check its summary:

model = models.Model(input, output, name="model")
model.compile(optimizer=optimizers.Adam(lr=0.0005), loss="categorical_crossentropy")
model.summary()

Output:

Here in the summary, we can check the architecture of the model. Now we can fit the model on the MNIST data using the fit function:

model.fit(x_train, y_train, epochs=5, batch_size=256, validation_data=(x_test, y_test))

Output:

Here we can see the behaviour of the model built using the existing layers from Keras. In the next section of the article, we will see how we can perform custom operations using the lambda layer.

Keras Model with Lambda Layer

In the above section, we have seen that the model architecture has 4 dense layers. Let's say we are required to perform some operation on the data where we add 3 to each element. The layers we used in the previous architecture are not capable of performing this, so now we need a lambda layer, because this layer is specifically designed for such a task.

Defining Function

To build a model with a lambda layer we are required to define a function that adds 3 to each value of the tensor.

def Function(tensor):
    return tensor + 3

We can use this function as the function argument of the lambda layer module of Keras.
Like the following:

lambda_l = layers.Lambda(Function, name="lambda_l")(dense_3)

Lambda Layer in Network

So now we can put a lambda layer in the network, and the code for the network will look like the following:

input = layers.Input(shape=(784), name="input")
dense_1 = layers.Dense(units=500, name="dense_1")(input)
activ_1 = layers.ReLU(name="activ_1")(dense_1)
dense_2 = layers.Dense(units=250, name="dense_2")(activ_1)
activ_2 = layers.ReLU(name="relu_2")(dense_2)
# lambda layer
dense_3 = layers.Dense(units=20, name="dense_3")(activ_2)

def Function(tensor):
    return tensor + 3

lambda_l = layers.Lambda(Function, name="lambda_l")(dense_3)
activ_3 = layers.ReLU(name="relu_3")(lambda_l)
dense_4 = layers.Dense(units=10, name="dense_4")(activ_3)
output = layers.Softmax(name="output")(dense_4)

After compiling the model, the summary will look like the following:

Here we can see that there is a lambda layer after the third dense layer.

Difference Between the Models

We can also find the difference between the data before and after applying the lambda layer by creating two new models, where the first model uses the layers from the input up to the third dense layer and the second model uses the layers from the input up to the lambda layer, like the following:

before_lambda = models.Model(input, dense_3, name="before_lambda")
after_lambda = models.Model(input, lambda_l, name="after_lambda")

After this, we can fit the model on the data and predict the values using the test data.

model.fit(x_train, y_train, epochs=5, batch_size=256, validation_data=(x_test, y_test))
pred = model.predict(x_train)
before = before_lambda.predict(x_train)
after = after_lambda.predict(x_train)

Let's print the data before and after the lambda layer.

print("before lambda layer", before[0, :])
print("after lambda layer", after[0, :])

Output:

Here in the predictions we can see the difference between the data at the two points in the network.
Comparing the before and after results, we can see a difference of exactly 3 in every element. In many networks, reshaping and the changing number of neurons between layers can cause values to fade as they pass to the next layer, which affects how the neurons of that layer learn; because of this, the performance of the model can decrease. Using lambda layers we can transform the data between layers for better modelling and extract better results from the network.

Final Words

Here in the article, we have seen the function of the lambda layer and how we can apply it to a network built using Keras. Lastly, we saw how it internally transforms the data according to our requirement. By just defining a function and passing it as an argument we can change the data in between the layers of the network. I encourage readers to use lambda layers to customize operations that depend on more than two layers.
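The observed difference can be sketched without TensorFlow: since the lambda adds a constant to every element, the element-wise gap between the "after" and "before" outputs is exactly that constant. The values below are made-up stand-ins for the dense_3 activations, not real model output:

```python
def add_three(values):
    # stand-in for the Lambda layer's function, applied element-wise
    return [v + 3 for v in values]

before = [0.1, 2.5, -1.0]   # pretend activations of dense_3
after = add_three(before)
print([round(a - b, 6) for a, b in zip(after, before)])  # [3.0, 3.0, 3.0]
```

The same subtraction on the real before_lambda and after_lambda predictions yields a constant 3 everywhere.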
https://analyticsindiamag.com/how-to-use-lambda-layer-in-a-neural-network/
Hi Guys, I'm trying to have the user enter an integer, then an outline of a pyramid would be printed on the console. I just can't get the last line to print out with "*" all across. So say a user enters 5, I need the output to be:

    *
   * *
  *   *
 *     *
*********

Any hints or tips would be greatly appreciated

import java.util.Scanner;

public class Pyramid {

    public static void main(String[] args) {
        int width;
        System.out.print("How many rows :");
        Scanner kb = new Scanner(System.in);
        width = kb.nextInt();

        for (int i = 1; i <= width; i++) {
            for (int j = 1; j <= width - i; j++) {
                System.out.print(" ");
            }
            for (int j = 1; j <= 2 * width - 1; j++) {
                if (j == i || j == 1) {
                    System.out.print("*" + " ");
                } else {
                    System.out.print(" " + " ");
                }
            }
            System.out.println();
        }
    }
}
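Since the thread asks for hints rather than a finished Java fix, here is the row structure sketched in Python (translate it back to nested loops in Java): every outline row needs width - i leading spaces and stars only at its two ends, while the final row alone is solid with 2 * width - 1 stars. This is one possible reading of the desired shape, not the thread's accepted answer:

```python
def pyramid_outline(width):
    lines = []
    for i in range(1, width):
        row = [' '] * (2 * i - 1)
        row[0] = '*'
        row[-1] = '*'            # same cell as row[0] when i == 1
        lines.append(' ' * (width - i) + ''.join(row))
    lines.append('*' * (2 * width - 1))  # solid base row
    return '\n'.join(lines)

print(pyramid_outline(5))
```

The key hint: treat the last row as a special case (all stars) instead of reusing the outline condition.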
https://www.daniweb.com/programming/software-development/threads/323120/pyramid-outline
Mehendran,

This is a priority. As mentioned earlier, the newcompiler project - which fully supports Python 2.5 from a compilation perspective - currently depends on having CPython bytecode, specifically what's emitted in *.pyc files. Tobias wrote a very clever piece of code - and short, it's just over a page! - that makes it much easier to use via import filters; you can do this:

import pycimport

and import any CPython 2.5 modules you like into Jython. (Look in newcompiler/PyASM.) Most importantly, this filter, like any other import filter, supports any module dependencies. The logic is that it will use a pyc if it's there, and compile via the newcompiler; otherwise, the module import goes through the standard ("oldcompiler") in Jython. This lets you run generator expressions, decorators, etc., to whatever degree you like, as long as you satisfy two caveats:

Hi,

It is probably the right time to ask, I think. May I know when the new compiler will be integrated into Jython? The question comes because Jython lacks features like generator expressions and function/method decorators, according to the link.

-Mehendran

_______________________________________________
Jython-dev mailing list
Jython-dev@lists.sourceforge.net
http://sourceforge.net/p/jython/mailman/attachment/d03bb4010711272212x58891e5by751cdd7fd910441c%40mail.gmail.com/1/
CSC151.01 2009F Functional Problem Solving : Readings

Names are bound to values in Scheme. Some names, such as + and quotient, are predefined. When the Scheme interpreter starts up, these names are already bound to the procedures they denote. As you develop more and longer procedures, you will find that there are many times you want to create local names for values that are not parameters. We will consider such names in this reading.

You may have already noted that there are formulae that get repeated in your code. Are the repeated computations the same? If not, why not? Second, it's hard to correct the code. If we change our formula, we'll need to change it in three places. If we instead name the computation of the components, we only need to change one piece of code.

(define rgb-brightness255
  (lambda (color)
    (/ (+ (rgb-red color) (rgb-green color) (rgb-blue color))
       3)))

Similarly, if we want to ensure that the value produced by rgb-brightness255 is an exact integer (the values we expect for the components of RGB colors), we need only make one change, rather than three.

(define rgb-brightness255
  (lambda (color)
    (inexact->exact
     (round (+ (* .30 (rgb-red color))
               (* .59 (rgb-green color))
               (* .11 (rgb-blue color)))))))

Although we have solved the first two deficiencies of the original code, we are still repeating work. Can we avoid the repetition of work? Certainly, we can write a procedure that makes a shade of grey from a single brightness, but getting to this version took a lot of extra work. We had to write (and document!) two procedures that we may never use again.

let Expressions

With a let expression, we can name the brightness component locally, without helper procedures.

(define rgb-greyscale
  (lambda (color)
    (let ((component (+ (* 0.30 (rgb-red color))
                        (* 0.59 (rgb-green color))
                        (* 0.11 (rgb-blue color)))))
      (rgb-new component component component))))

Again, if we want to make the change to the computation of the brightness component (e.g., by rounding and making it exact), we need change our code only once.
(define rgb-greyscale
  (lambda (color)
    (let ((component (inexact->exact
                      (round (+ (* 0.30 (rgb-red color))
                                (* 0.59 (rgb-green color))
                                (* 0.11 (rgb-blue color)))))))
      (rgb-new component component component))))

Here's another example of a binding list, taken from a let-expression in a real Scheme program:

(let ((black (rgb-new 0 0 0))
      (complement (rgb-complement rgb)))
  ...)

This binding list contains two binding specifications -- one in which the value of the expression (rgb-new 0 0 0) is bound to the name black, and the other in which the complement of rgb (presumably, an RGB color) is bound to the name complement. Note that even though binding lists and binding specifications start with parentheses, they are not procedure calls; their role in a let-expression is simply to give names to certain values while the body of the expression is being evaluated. The outer parentheses in a binding list are "structural". In some cases, naming a value with let avoids recomputation of values. In other cases, there is little difference in speed, but the code may be a little clearer.

let and define

You may have missed it, but there are a few subtle and important issues here. If you use define for local names, those names are accessible (and changeable) everywhere. Hence, you enforce this separation by policy or obscurity. In contrast, if you use let to define your local names, these names are completely inaccessible to other pieces of code. We return to this issue in our discussion of the ordering of let and lambda below.

Suppose we want a procedure that converts years to seconds. (SamR says, "When you have sons between the ages of 5 and 12, you'll understand.") You might begin with

(define years-to-seconds
  (lambda (years)
    (* years 365.24 24 60 60)))

This procedure does correctly compute the desired result. However, it is a bit hard to read. For clarity, you might want to name some of the values.
(define days-per-year 365.24)
(define hours-per-day 24)
(define minutes-per-hour 60)
(define seconds-per-minute 60)
(define seconds-per-year
  (* days-per-year hours-per-day minutes-per-hour seconds-per-minute))

As you will see in the lab, it is possible to empirically verify that the bindings occur only once in this case, and each time the procedure is called in the prior case. So, one moral of this story is: whenever possible, move your bindings outside the lambda.

For the first example, in which we bound the name black, one might write

(let ((black (rgb-new 0 0 0)))
  (lambda (rgb ...)
    ...

However, it is not always possible to move the bindings outside of the lambda. In particular, if your let-bindings use parameters, then you need to keep them within the body of the lambda. Consider the other half of the initial example. In that case, we were computing the complement of rgb, which we assumed was a parameter. In that case, we cannot write

(let ((black (rgb-new 0 0 0))
      (complement (rgb-complement rgb)))
  (lambda (rgb ...)
    ...

but must instead write

(let ((black (rgb-new 0 0 0)))
  (lambda (rgb ...)
    (let ((complement (rgb-complement rgb)))
      ....

define Statements
http://www.math.grin.edu/~rebelsky/Courses/CS151/2009F/Readings/let-reading.html
Foreign Asset Sales: Are they worth it?

1 September 2014

With the pending election, the current sale of Lochinver Station before the OIO, and the wider debate around foreign investment in land assets (both rural and residential), the issue of direct foreign investment in land has jumped to the top of the queue of hot issues. Much of the discussion has been emotive, and that is entirely legitimate, but it's the economic debate that seems to have been lost. We pose the question, "What is the economic return to NZ Inc. from allowing foreign direct investment in land?"

Many of the public offerings in the NZ market place that target an investment in farming offer returns of around 3-4% cash return and an 8-9% capital return. Importantly, the capital return (if treated appropriately) is tax free. Given that much of the NZ Inc. Agribusiness value is tied up in the value of our rural land, the first major test appears to fail. Profits from a gain in the value of land are not taxed (if managed appropriately), so there is no return for NZ Inc., particularly if those funds are repatriated and not invested into better (i.e. greater than 3-4%) cash-returning assets. Of course, there is the counter argument that if the capital is released and applied to a better returning asset, then there is some value.

Using the Lochinver Station example, it can be argued that NZ Inc. is releasing capital from one lower performing asset to invest in a higher performing asset, which has a better outcome for NZ Inc. But in the Crafar farms example, that capital simply repaid debt. What is done with the Lochinver purchase from an economic perspective in the future will be the final determinant of whether NZ Inc. benefits. As yet, we don't have that information. The Crafar example also showed another factor to consider when it comes to foreign investment.
The offer was significantly larger than the next closest offer, which raises the question of how a foreign entity with no experience in NZ farming systems could justify a value well in excess of the largest and most experienced operators in NZ, relative to the cash returns from the asset. We actually don’t know the answer to that as we are not privy to the wider strategy being employed, but it needs clarity because it appears to be putting a value on land that is outside what can be economically justified. There are lots of other examples of this. Our iconic high country stations are often sold for values well in excess of any production values. (Although that’s not just to foreign investors). The economic value to NZ Inc. is once again negative, because the outcome will create greater barriers to entry for operators focused on productive returns, from which NZ Inc. benefits through increased exports and profitable taxable earnings. But there is the alternative view that the land is being put into a higher value use. Tourism in Otago is valued at 2.17B, so the creation of 53,000Ha of covenants on land covering Motatapu, Mount Soho, Glencoe and Coronet Peak stations, has to be seen as a big endorsement for foreign investment, as the land was placed into the covenants by the owners of Soho Property Limited and its overseas owner Robert “Mutt” Lange. Underlying all of this of course is our free market. It has served us well. No one should be told who they can and cannot sell their private assets to. We also need to acknowledge that we need foreign investment. NZ Inc. is relatively poor when it comes to the amount of capital we hold. As a country we do tend to hold the value of our Agribusiness Industry in the value of our land, but this has been a hard fought process. Successive governments and economic polices have created an environment that makes us very attractive. So NZ Inc. 
needs to ensure it gets a return for its investment, without interfering in the mechanism that has created it. The economic debate needs to move toward what measures can be put in place to ensure NZ Inc. gets its share of its investment, while those in the market can still trade and make the best decisions for their businesses. We think the answer starts by measuring the overall outcomes from recent OIO-approved decisions, and whether they have achieved what was expected economically. Each OIO application is detailed in itself, and the applicant has to achieve the agreed outcomes. But we think an overall review of OIO approvals and outcomes over the last few years would provide a more informed debate, not only on the benefits, but also on the mechanisms and processes that may need to be implemented to ensure that, if there are any misalignments, NZ Inc. still gets its return.

Hayden Dillon is the Managing Principal for Waikato and leads the Corporate Agribusiness and Capital Advisory team.
http://www.scoop.co.nz/stories/BU1409/S00017/foreign-asset-sales-are-they-worth-it.htm
Strange memory usage and eventual termination of StreamInsight instance - Thursday, 9 February 2012 12:21

I'm experiencing memory usage problems when running a query. Memory consumption is normal for a good few hours after query start, with garbage collection clearly visible, but after a while it begins to climb without any apparent garbage collection as before. Eventually memory usage exceeds that available and the StreamInsight service is killed.

The purpose of the query is to receive events, categorise them by joining on a reference stream, and output the count of each category over a window. If a received event doesn't match an event in the known event reference stream, it is categorised as unknown by the query. The reference stream is updated every minute and enqueues around 28,000 events each time. The point events from this stream are being stretched out until they receive a corresponding event at a later date in time. The event input stream usually handles anything from a couple of hundred to several thousand events an hour and enqueues a CTI event every 2 seconds.

The query still runs out of memory even if there are no input events received, which points the finger at the reference stream. I'm pretty sure the problem lies with memory management of the reference stream, as memory increases consistently upon enqueuing of the updated events. This is as expected, and a short time later the memory for the expired events is released. At least to begin with...

The first image below is the normal memory usage experienced, with the clear garbage collection of expired events in the reference stream. It has been running for around 8 hours at this point. The blue is memory usage, yellow is CTI count for the reference stream, green is CTI count for the event input stream, red is CPU, purple is events in the reference stream input queue, turquoise is events in the input stream queue.
Below is the usage around five and a half hours after the first, with a considerable rise in memory visible and no apparent garbage collection. The query survived another 8 hours before StreamInsight was terminated. Memory usually rises to about 1.5gb before failing. The query is running on a virtualised development machine with 1 core and 2gb of ram, which should be sufficient for this low throughput.

Below is the code. If needed I can lift the code from the bigger solution and upload it as a buildable solution.

Query

const string eventTraceStreamName = "EventTraceStream";
CepStream<TraceEvent> eventTraceStream = CepStream<TraceEvent>.Create(
    eventTraceStreamName,
    typeof(InputAdapterFactory),
    new InputAdapterConfig()
    {
        adapterType = InputAdapterType.TraceEventPayload,
        stopPollingPeriod = 1000,
        ctiPeriod = 2000
    },
    EventShape.Point);

var timeImportSettings = new AdvanceTimeSettings(
    null,
    new AdvanceTimeImportSettings(eventTraceStreamName),
    AdvanceTimePolicy.Adjust);

// Create a reference stream using the data stream as a time reference
var knownEventStream = CepStream<Viagogo.Cep.Adapters.Database.GetKnownEventsResult>.Create(
    "knownEventStream",
    typeof(InputAdapterFactory),
    new InputAdapterConfig()
    {
        adapterType = InputAdapterType.TraceEventCategory,
    },
    EventShape.Point,
    timeImportSettings);

var knownEvents =
    from e in knownEventStream
        .AlterEventDuration(e => TimeSpan.MaxValue)
        .ClipEventDuration(knownEventStream,
            (e1, e2) => (e1.EventID == e2.EventID && e1.ApplicationID == e2.ApplicationID)) // this line appears to overwrite the line above
    select e;

var unknownEventStream =
    from e in eventTraceStream
    where (from ke in knownEvents
           where ke.EventID == e.id && ke.ApplicationID == e.applicationID
           select ke).IsEmpty()
    select new CategorisedTraceEvent()
    {
        id = e.id,
        applicationId = e.applicationID,
        eventType = e.eventType,
        source = e.source,
        message = e.message,
        categoryName = "Unknown",
        categoryID = 1,
        timestamp = e.timestamp
    };

var categorisedEventStream =
    from e in eventTraceStream
    join ke in knownEvents
        on new KnownTraceEvent() { EventID = e.id, ApplicationID = e.applicationID, CategoryID = -1 } equals
           // have to stub out CategoryID as we're doing the join in the first place to determine that
           new KnownTraceEvent() { EventID = ke.EventID, ApplicationID = ke.ApplicationID, CategoryID = -1 }
    select new CategorisedTraceEvent()
    {
        id = e.id,
        applicationId = e.applicationID,
        eventType = e.eventType,
        source = e.source,
        message = e.message,
        categoryName = ke.CategoryName,
        categoryID = ke.EventLogCategoryID,
        timestamp = e.timestamp
    };

var unionedEventStream = unknownEventStream.Union(categorisedEventStream);
") };

var eventCategoryCountQuery = eventCategoryCount.ToQuery(
    application,
    "EventCategoryCount",
    "Count the number of events received in a rolling window",
    typeof(OutputAdapterFactory),
    new OutputAdapterConfig() { adapterType = OutputAdapterType.TraceEventCategoryCount },
    EventShape.Point,
    StreamEventOrder.FullyOrdered);

Reference stream input adapter

public class KnownTraceEventsPointInputAdapter : TypedPointInputAdapter<GetKnownEventsResult>
{
    private readonly Timer resetCache;

    public KnownTraceEventsPointInputAdapter(InputAdapterConfig config)
    {
        // Poll the known event table to insert updated events into the stream
        resetCache = new Timer(60000); // every 1 minute
        resetCache.Elapsed += new ElapsedEventHandler(ResetCache);
    }

    public void ResetCache(object sender, ElapsedEventArgs e)
    {
        SyncKnownEvents();
    }

    public void SyncKnownEvents()
    {
        var knownEvents = default(List<GetKnownEventsResult>);
        using (var eventLog = new EventLogDataContext())
        {
            knownEvents = eventLog.GetKnownEvents(false).ToList();
        }

        foreach (var knownEvent in knownEvents)
        {
            PointEvent<GetKnownEventsResult> pointEvent = CreateInsertEvent();
            if (null == pointEvent)
            {
                Ready();
                return;
            }
            if (AdapterState.Stopping == AdapterState)
            {
                Stopped();
                return;
            }
            try
            {
                pointEvent.StartTime = DateTime.UtcNow;
                pointEvent.Payload = knownEvent;
                if (Enqueue(ref pointEvent) == EnqueueOperationResult.Full)
                {
                    Log(DateTime.UtcNow + ": " + GetType().ToString() + " input queue full");
                    Ready();
                    return;
                }
            }
            finally
            {
                if (null != pointEvent)
                {
                    ReleaseEvent(ref pointEvent);
                }
            }
        }
        EnqueueCtiEvent(DateTime.UtcNow);
    }

    public override void Start()
    {
        Log(DateTime.UtcNow + ": " + GetType().ToString() + " started");
        SyncKnownEvents();
        resetCache.Start();
    }

    public override void Resume()
    {
        resetCache.Start();
    }

    public override void Stop()
    {
        Log(DateTime.UtcNow + ": " + GetType().ToString() + " stopped");
        resetCache.Stop();
        base.Stop();
        Stopped();
    }

    protected static void Log(string message)
    {
        using (var file = new System.IO.StreamWriter(@"C:\Cep\Logs\InputAdapters.txt", true))
        {
            file.WriteLine(message);
        }
    }
}

Input stream input adapter (receives its events over a WCF duplex connection)

[CallbackBehavior(UseSynchronizationContext = false, ConcurrencyMode = ConcurrencyMode.Single, IncludeExceptionDetailInFaults = true)]
public class EventTracePointInputAdapter : ViagogoPointInputAdapter<TraceEvent>, IEventTraceSourceSubscriptionManagerCallback
{
    private EventTraceSourceSubscriptionManagerClient client;

    public EventTracePointInputAdapter(InputAdapterConfig config) : base(config)
    {
        Listen();
    }

    public void Listen()
    {
        client = new EventTraceSourceSubscriptionManagerClient(new InstanceContext(this), "NetTcpBinding_IEventTraceSourceSubscriptionManager");
        client.Open();
        client.SubscribeToSource();
        ((ICommunicationObject)client).Faulted += new EventHandler(HandleClientFault);
    }

    public void HandleClientFault(object sender, EventArgs e)
    {
        Log(DateTime.UtcNow + " " + this.GetType() + " callback channel faulted, adapter is renewing connection");
        client.Abort();
        Listen();
    }

    public void ObserveSource(TraceEvent payload)
    {
        if (null == payload)
        {
            throw new ArgumentNullException("payload");
        }
        PointEvent<TraceEvent> pointEvent = base.CreateInsertEvent();
        if (null == pointEvent)
        {
            Ready();
            return;
        }
        try
        {
            var eventTime = payload.timestamp; // This is actually the server's Utc
timestamp latestEvent = eventTime; pointEvent.StartTime = eventTime; pointEvent.Payload = payload; pointEvent.Payload.message = "removed to prevent message size exceeding streaminsight max event size"; if (Enqueue(ref pointEvent) == EnqueueOperationResult.Full) { Log(DateTime.UtcNow + ": " + GetType().ToString() + " input queue full"); Ready(); return; } } catch (Exception e) { // probably tried to enqueue an event before the last cti Log(DateTime.UtcNow + ": " + GetType() + " threw an exception\n" + e.ToString() + "\n"); } finally { if (null != pointEvent) { ReleaseEvent(ref pointEvent); } } } public override void Stop() { ((ICommunicationObject)client).Faulted -= new EventHandler(HandleClientFault); try { client.UnsubscribeFromSource(); client.Close(); } catch (Exception e) { client.Abort(); } base.Stop(); Stopped(); } protected override void Dispose(bool disposing) { if (disposing) { ((IDisposable)client).Dispose(); } base.Dispose(disposing); } } Input adapter base class which enqueue's cti events (may generate cti's using advance time settings at a later date) public abstract class ViagogoPointInputAdapter<T> : TypedPointInputAdapter<T> { private readonly Timer stopTimer; private readonly Timer ctiTimer; private readonly object sync = new object(); private InputAdapterConfig config; protected DateTime latestEvent = DateTime.MinValue; protected DateTime lastCti = DateTime.MinValue; protected ViagogoPointInputAdapter(InputAdapterConfig config) { this.config = config; // Poll the adapter to determine when it is time to stop. 
stopTimer = new Timer(config.stopPollingPeriod);//CheckStopping, new object(), config.stopPollingPeriod, config.stopPollingPeriod); stopTimer.Elapsed += new ElapsedEventHandler(CheckStopping); // Poll the adapter to inject cti events into the stream ctiTimer = new Timer(config.ctiPeriod); ctiTimer.Elapsed += new ElapsedEventHandler(InjectCtiEvent); } public override void Start() { ctiTimer.Start(); stopTimer.Start(); Log(DateTime.UtcNow + ": " + GetType().ToString() + " started"); } public override void Resume() { ctiTimer.Start(); stopTimer.Start(); Log(DateTime.UtcNow + ": " + GetType().ToString() + " resumed"); } public override void Stop() { ctiTimer.Stop(); stopTimer.Stop(); Log(DateTime.UtcNow + ": " + GetType().ToString() + " stopped"); base.Stop(); } private void InjectCtiEvent(object source, ElapsedEventArgs e) { // Allow for a delay of 3 seconds var ctiTime = DateTime.UtcNow.Subtract(TimeSpan.FromSeconds(3)); EnqueueCtiEvent(ctiTime); lastCti = ctiTime; } private void CheckStopping(object source, ElapsedEventArgs e) { lock (this.sync) { if (AdapterState == AdapterState.Stopping) { Stop(); } } } protected static void Log(string message) { using (var file = new System.IO.StreamWriter(@"C:\Viagogo.Cep\Logs\InputAdapters.txt", true)) { file.WriteLine(message); } } } Thanks in advance for taking time to look over this. Marcus Alle Antworten - Donnerstag, 9. Februar 2012 15:15 I agree. It does seem to point to the reference stream as being the culprit. It seems reasonable to assume code in the input adapter is where the leak is happening although we can't rule out (yet) the leak occuring in the query engine I guess. 
Taking a quick look, the only thing I see in the input adapter that appears to be different from the input stream input adapter is this:

    using (var eventLog = new EventLogDataContext())
    {
        knownEvents = eventLog.GetKnownEvents(false).ToList();
    }

Finding memory leaks is always very difficult, but you have to start somewhere, so I would focus initially on this part. What happens, for example, if you stub this out and just generate a fixed event (or a fixed few hundred events)? I would be interested to see if this changes the behaviour. If it doesn't, you look somewhere else. If it does, bingo.

- Thursday, 9 February 2012 20:34

Question for you - is it necessary to use TimeSpan.MaxValue to extend the events before clipping? Is it reasonable to have a timeout - no event in x amount of time and it goes away? Even if not, try setting the event duration to something less than TimeSpan.MaxValue ... say, TimeSpan.FromHours(2), and see if that changes the behavior. Unfortunately, it may take longer to verify the solution. Finally, have you recorded any of the events using the debugger or trace.cmd? I'm thinking that you may have an issue with a join that isn't apparent to us here, and you should be able to identify that by using EFD.

DevBiker (aka J Sawyer) My Blog My Bike - Concours 14 If I answered your question, please mark as answer. If my post was helpful, please mark as helpful.

- Friday, 10 February 2012 04:16

Something unrelated to your issue, but I did notice in your input adapters that you aren't synchronizing your enqueue and your stop. This can cause you issues with shutdown ... from my initial glance, most likely an ObjectDisposedException. One of your adapters would catch it (but it wouldn't be a CTI violation exception ...). Here's what could happen ... your input adapter is happily enqueuing events and StreamInsight calls stop. Now, you are in the middle of an enqueue operation; you've already checked for AdapterState == Stopping ... and you are about to enqueue. In your Stop(), you call Stopped(). This will dispose the input adapter. The enqueue thread then proceeds to call Enqueue and ... boom! Depending on the volume of events that you have, you may not see it very often and it'll be one of those Heisenbugs that's really, really tough to reproduce - the timing has to be exactly wrong.

Also, StreamInsight will call Stop() for you ... you don't need to check the state and call Stop() yourself. From what I've been able to determine in Reflector, the setting of the adapter state to Stopping and the actual call to Stop() occur on different threads, so it seems possible that your AdapterState may not yet be Stopping when Stop() is called (though I've not been able to actually reproduce this). So your CheckStopping() would call Stop() and the engine would call Stop(). Both of these calls would be on separate threads and they aren't being synched ... so another possible ObjectDisposedException. Question ... have you seen any issues with shutdown? Hangs? Exceptions?

DevBiker (aka J Sawyer)

- Friday, 10 February 2012 11:07

A leak on the result from the database call seems a bit strange; the initial drop in memory usage after each increase appears to be garbage collection of the initial list. I tried calling Clear on the list before it goes out of scope, as a thread on Stack Overflow suggested, but not surprisingly it didn't make any difference. With regards to extending the event duration before the clipping, I just followed this tutorial. I've toyed with this previously, because the first duration extension seems redundant after the second, and concluded that the clipping overwrites the previous infinite extension, but I may be wrong...
I've still kept the initial extension but, as you suggested, changed events to expire after 2 minutes instead of never. Ran the query again overnight and, as I thought, it didn't make any difference - memory usage still begins to surge. However, I've spotted a pattern, which I'll explain below. I've used the event flow debugger to record events and check the query is functioning correctly, which it does; can that or trace.cmd help identify where a leak is occurring?

You're spot on, DevBiker, about the ObjectDisposedException. I have seen that before when shutting the service down and assumed it was to do with the WCF connection, but even with all the exception catching it was still throwing the error. I'll try what you suggest and get back to you.

Back to the issue at hand... After another night of monitoring the query, I've spotted a pattern which causes the build-up and then a sudden release of memory, which is quite strange. While the input stream is receiving events which need categorising, the query releases memory on cue; however, during periods of inactivity the memory begins to creep up. I spotted this last night while witnessing almost exponential growth in memory consumption and noticing that there were no input events in the build-up to this rapid consumption of memory. This lack of events isn't a problem with the input adapter; it's just that our source didn't produce any interesting events during the night. I manually triggered some events to see if the query was still alive even with the high memory usage and, after a slightly delayed period of time, sure as hell the output event popped out the other side and in the process flushed the memory, bringing it back down to normal levels!

So the question is: why is StreamInsight not releasing memory from the reference stream unless it's receiving events on the input stream which need categorising? The input stream is enqueuing CTIs every 2 seconds even if there are no incoming events, and the reference stream is tied to this fast-moving stream. The reference stream is still successfully enqueuing an updated copy of the known events, and the query is expiring the old events (a quick update of the known events to switch the categorisation of some events does indeed change the query output on the fly). Have I missed something, or is this a problem with the StreamInsight engine? Thanks for the help so far!

Marcus

- Edited by Marcu5 Williams, Friday, 10 February 2012 11:09
- Edited by Marcu5 Williams, Friday, 10 February 2012 13:53

- Friday, 10 February 2012 14:08

How about trying this: instead of having the input adapter inject CTIs whether you have events or not, use the AdvanceTimeSettings when you create the stream. While implementing IDeclareAdvanceTimeProperties would do the same thing, I think it'd be easier to simply remove the CTI injection and add the AdvanceTimeSettings where you create the stream. Take a look at this blog post - while it's about controlling your CTIs with LinqPad, it does go into how CTIs are enqueued with the AdvanceTimeSettings. Now ... some things to keep in mind ... AdvanceTimeSettings won't enqueue CTIs like clockwork (the way yours does) ... it will only enqueue CTIs when you actively have events being enqueued. So ... in that period of inactivity, you wouldn't have any CTIs. I'm wondering if all of the empty CTI spans are at the root of what you are seeing.

DevBiker (aka J Sawyer)

- Friday, 10 February 2012 17:00

I've implemented your suggestions with regards to the stopping situation. I'll keep an eye on that...
I've also added AdvanceTimeSettings.StrictlyIncreasingStartTime to my query, and CTIs with a timestamp of one tick in the future get enqueued only when there are input events. I think I actually tried something like this a while back, but the output events are not quite as you'd expect, so I went down the manual route, which makes everything happen like clockwork. This unpredictable behaviour is still the case: when you enqueue an event, it's not included in the output until another event arrives some time in the future (even with the CTI being one tick in the future).

Anyway, even with this change in CTI tactics, memory usage continues to rise during inactive periods and is flushed when the first event comes along again. As can be seen, the memory (in blue) rises continually while no CTIs are being produced. Upon production of CTIs by the engine (green), the memory is freed! So the build-up of CTI events doesn't appear to be the problem...

- Edited by Marcu5 Williams, Friday, 10 February 2012 17:01

- Saturday, 11 February 2012 01:31

Let me first say that my initial suggestion ... to look at using AdvanceTimeSettings ... was silly and, as you can see, just plain wrong. But it didn't hit me until I saw your follow-up. The CTI events don't get imported into the stream until you join with the source stream. Here's what's happening. You have a data stream and a reference stream. You are importing the CTIs from the data stream into the reference stream. This is exactly how you should do it. But those CTIs don't get imported into the stream ... and, therefore, the reference data isn't actually pushed through the AlterEventDuration().ClipEventDuration() until the join. And if there is no data in the data stream, there is nothing to join. So the reference data doesn't get pushed through. Because it doesn't get pushed through, all of the reference events are queued and your memory increases and increases and increases ....

I remember running into this ... though in a different way ... early in my StreamInsight life ... and Roman set me straight. While I'm sure that there is another solution, the only thing that I can think of right now is to have a timeout in your data input adapter that enqueues a "junk" event that won't make it past the join, to keep things moving.

DevBiker (aka J Sawyer)

- Saturday, 11 February 2012 01:42

I'd speculate that, at least during the quiet time, CTIs from your regular input (not the reference input) do not reach the join and therefore the reference events never expire. You could use query diagnostics to get a handle on which operator in your query is consuming the memory (assuming the culprit is the query operator). You can get to the detailed diagnostics in the event flow debugger. Those diagnostics also report the last CTI timestamp processed by each operator. This will help to verify whether CTIs are actually flowing through the query.

Keep in mind that there is a slight twist with point events and the advance time policy "adjust". From the documentation on CTI violation policies: the Adjust policy only adjusts the lifetime of an event that violates a CTI if the event's lifetime overlaps the CTI timestamp; otherwise the event is dropped. Since you are dealing with point events, all events with start time less than the CTI do not overlap the CTI and are therefore dropped. I'm not sure if this is relevant here (I couldn't find how you actually generate timestamps for the input events), but it is worth checking up on.

Peter Zabback

- Proposed as answer by Peja Tao (Moderator), Monday, 13 February 2012 04:46
- Marked as answer by Peja Tao (Moderator), Tuesday, 21 February 2012 07:02

- Saturday, 11 February 2012 03:35

You may also be able to use a Left Anti Semi-Join between your reference stream and your data stream ... that should force it to push through. You would still have to publish it as a query but don't hook it up to an output adapter ...
or hook it up to a NullOutputAdapter that simply dequeues and drops (that's what I do).

DevBiker (aka J Sawyer)

- Wednesday, 15 February 2012 10:54

What we've actually done for the moment is switch to a push approach rather than a pull. Instead of polling the known events table and pulling them in regardless every 60 seconds, we're now only updating the reference stream if those categories have changed (this is invoked via WCF). Our production servers also appear to provide a pretty continual flow of input events, making sure there is always data going through the join. Even though this appears to work for us, we're still relying on these conditions holding to avoid eating up all the memory. Shouldn't the enqueuing of a CTI in the reference stream after every update cause the old events to expire, even if no CTIs are coming from the fast stream during quiet periods?

Marcus

- Wednesday, 15 February 2012 12:39

> Shouldn't the enqueuing of a CTI in the reference stream after every update cause the old events to expire, even if no CTIs are coming from the fast stream during quiet periods?

No, the enqueuing of a CTI in the reference stream won't do that. The various stream operators are processed with the query ... the entire query has to move forward. If you aren't getting CTIs or events from the data stream to keep it moving forward, the intermediary stream operators won't be processed. Changing your reference stream to enqueue updates only, though, is a good thing to do, regardless of the memory issues. :-)

Looking over your queries again, I see that a tumbling window is the last operator in the stream, and I think that's why you need events ... rather than just CTIs ... to push the events through. Windows don't process without events. Try using a query before creating the window to make sure that it moves forward. An example is below:

    var unionedEvents = unknownEventStream.Union(categorisedEventStream);
    var unionedEventQuery = unionedEvents.ToQuery(
        application,
        "UnionedEvents",
        "Categorized and unknown events",
        EventShape.Point,
        StreamEventOrder.FullyOrdered);
    var unionedEventStream = unionedEventQuery.ToStream<CategorisedTraceEvent>();
    // Start the unionedEventQuery right before the event Category Count Query

This takes your unioned and categorized events and will push the associated stream operators forward independently of the window. You can also then reuse the unionedEventStream in other queries without worrying about creating multiple instances of your input adapters (which is usually a Very Bad Thing). See this blog post for more details on the how's and why's of DQC.

DevBiker (aka J Sawyer)

- Marked as answer by Peja Tao (Moderator), Tuesday, 21 February 2012 07:02
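The failure mode diagnosed in this thread — buffered state that only drains when the consuming stream advances — can be illustrated outside StreamInsight. The following is a hypothetical Python sketch (illustrative names, not StreamInsight code): a join buffers reference events, and only a watermark (CTI) arriving from the data stream releases expired ones, so with a quiet data stream the buffer grows without bound.

```python
# Hypothetical sketch of the "reference events never expire without data-stream
# CTIs" behaviour. Names are illustrative; this is not the StreamInsight API.
class BufferedJoin:
    def __init__(self):
        self.reference_buffer = []   # reference events awaiting expiry
        self.watermark = 0           # last CTI seen from the data stream

    def enqueue_reference(self, event_time, payload):
        # Inserts on the reference side alone never shrink the buffer,
        # no matter how many CTIs the reference stream itself enqueues.
        self.reference_buffer.append((event_time, payload))

    def advance_watermark(self, cti_time):
        # Only progress on the data-stream side moves the join forward
        # and lets expired reference state be released.
        self.watermark = cti_time
        self.reference_buffer = [
            (t, p) for (t, p) in self.reference_buffer if t >= cti_time]
```

With no `advance_watermark` calls the buffer mirrors the rising memory in the thread; the first data-stream event (and its CTI) flushes it, matching what Marcus observed.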
http://social.msdn.microsoft.com/Forums/de-DE/streaminsight/thread/fabcf2b6-f438-4858-b208-c633c01c3e70
she is interested in. Information displayed on the main gadget view is automatically and silently refreshed, depending on the gadget type, to make sure that the user sees the most recent data.

Mobile support for CMO gadgets

Most CMO gadgets have a mobile version and are available on iPhone, iPad and iPod Touch where the MobileCenter app is installed. The EPiServer MobileCenter module must be deployed on the site where mobile support is required, and the CMO gadgets should be selected in the iPhone Activator gadget. For this blog post, the gadget mobile view screenshots were made on an iPad.

KPI Summary gadget

By default you can measure KPIs in points, pounds, euro, dollars and kronor. Since the KPI entity is the same for all KPIs in a campaign, it is possible to aggregate KPI data and calculate summary values by page, by KPI type or for all KPIs in the campaign. After selecting a campaign in the gadget settings, the user can decide whether he or she wants to see a specific KPI value or one of the calculated summaries.

The selected KPI value or summary is displayed similarly to KPI reports in the CMO UI. The information is automatically refreshed every 30 seconds. In the mobile version, the KPI value is displayed as a bar chart with estimated and achieved value scores and colour indication.

Live Monitor gadget

The Live Monitor gadget shows current visitors and their navigation for the pages included in the selected campaign. It is possible to define the gadget's height. This gadget is not supported on mobile devices.

LPO Report gadget

The LPO Report gadget is new in EPiServer CMO 2.1. After selecting an LPO test, the user can decide whether the report is presented in full or compact mode.

The gadget in normal mode is a full analogue of the LPO test report in the CMO UI, with the ability to start, stop and finalize a test. It contains a page thumbnail, estimated conversion rate with range and colour indication, and detailed statistics for each variation. By clicking on any variation name the user can move to that page version in CMS edit mode. Information on the LPO Report gadget is automatically refreshed every 30 seconds. The compact gadget view contains only conversion rate gauges. In the mobile version, estimated conversion rates are visualized as a bar chart with scores and ranges.

It is possible to finalize a stopped LPO test and set one of the variations as the winner. The selected variation is published and replaces the original page. The finalization function is available both in detailed and compact modes, as well as in the mobile version.

Campaign Statistics gadget

The Campaign Statistics gadget provides a general statistics report for the selected campaign from its start to the current date. The report consists of 3 parts: campaign summary, visits and page views report, and browser statistics. Each part can be enabled or disabled on the gadget depending on user needs. A period scale can be defined for the visits and page views report if this part is enabled. The scale option defines how many values are shown on the report chart. For better usability, the available scale depends on the campaign duration: a chart would be pretty overloaded displaying a very detailed "by hour" report for a 2-month campaign, but that scale option would be great for a campaign that is active only 2 days. Line and pie charts are replaced by bars in the mobile version of the gadget.

Campaign data must be processed and aggregated before going into the report. That's why the campaign statistics report is not updated very often; the displayed information is refreshed every 30 minutes.

Feedback

Your feedback is highly appreciated. Please let us know what you'd like to see in CMO gadgets.

More information

EPiServer CMO documentation. Download EPiServer CMO 2.1 for CMS 6 R2. EPiServer MobileCenter for iOS.

Hi, I am a beginner to mobile apps in EPiServer CMS 6. I installed the MobilePack but it is giving errors regarding namespaces. How can I figure it out? Any suggestions, please?

As I can see on the project page, Mobile Center was built and published for EPiServer CMS 6. You may therefore have some version-related issues using it with CMS 6 R2. Try to get the source code of Mobile Center and rebuild it for your EPiServer CMS / Framework version.

An old one, but... Did you make any changes in web.config (or elsewhere) in order to get the gadgets working with MobileCenter for iOS? I get an error when I try to access the gadget through the MobileCenter app saying "You are not authorized to work with this gadget". I have other gadgets working.
https://world.optimizely.com/blogs/Dmytro-Duk/Dates/2011/6/EPiServer-CMO-gadgets/
Building a complex resource, like a booted instance, requires touching not only a number of Nova processes but also other services such as Neutron, Glance, and possibly Cinder. When we make those service jumps we currently generate a new request-id, which makes tracing those flows quite manual.

When a user creates a resource, such as a server, they are given a request-id back. This is generated very early in the paste pipeline of most services. It is eventually embedded into the context, which is then used implicitly for logging all activities related to the request. This works well for tracing requests inside of a single service as it passes through its workers, but breaks down when an operation spans multiple services. A common example of this is a server build, which requires Nova to call out multiple times to Neutron and Glance (and possibly other services) to create a server on the network.

It is extremely common for clouds to have an ELK (Elasticsearch, Logstash, Kibana) infrastructure that is consuming their logs. The only way to query these flows is if there is a common identifier across all relevant messages. A global request-id immediately makes existing deployed tooling better for managing OpenStack.

The high level solution is as follows (details on specific points later). The processing of incoming requests happens piecemeal through the set of paste pipelines. These are mostly common between projects, but there is enough local variation to warrant highlighting what this looks like for the base IaaS services, which will be the initial targets of this spec.
    [composite:neutronapi_v2_0]
    use = call:neutron.auth:pipeline_factory
    noauth = cors http_proxy_to_wsgi request_id catch_errors extensions neutronapiapp_v2_0
    keystone = cors http_proxy_to_wsgi request_id catch_errors authtoken keystonecontext extensions neutronapiapp_v2_0
    # request_id generated by the "request_id" middleware;
    # context built by the "keystonecontext" middleware

    # Use this pipeline for keystone auth
    [pipeline:glance-api-keystone]
    pipeline = cors healthcheck http_proxy_to_wsgi versionnegotiation osprofiler authtoken context rootapp
    # request_id & context both built by the "context" middleware

    [composite:openstack_volume_api_v3]
    use = call:cinder.api.middleware.auth:pipeline_factory
    noauth = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler noauth apiv3
    keystone = cors http_proxy_to_wsgi request_id faultwrap sizelimit osprofiler authtoken keystonecontext apiv3
    # request_id generated by the "request_id" middleware;
    # context built by the "keystonecontext" middleware

    [composite:openstack_compute_api_v21]
    use = call:nova.api.auth:pipeline_factory_v21
    noauth2 = cors http_proxy_to_wsgi compute_req_id faultwrap sizelimit osprofiler noauth2 osapi_compute_app_v21
    keystone = cors http_proxy_to_wsgi compute_req_id faultwrap sizelimit osprofiler authtoken keystonecontext osapi_compute_app_v21
    # request_id generated by the "compute_req_id" middleware;
    # context built by the "keystonecontext" middleware

In nearly all services the request_id generation happens very early, well before any local logic. The middleware sets an X-OpenStack-Request-ID response header, as well as variables in the environment that are later consumed by oslo.context. We would accept an inbound X-OpenStack-Request-ID, and validate that it looks like req-$UUID before accepting it as the global_request_id. The returned X-OpenStack-Request-ID would be the existing request_id.
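The accept-and-validate behaviour described above can be sketched as follows. This is a hypothetical illustration with made-up helper names; the real change would land in oslo.middleware's request-id middleware.

```python
# Sketch of the proposed middleware behaviour (hypothetical names).
import re
import uuid

# Strict validation: only accept inbound ids that look like req-$UUID.
_REQ_ID_RE = re.compile(
    r"^req-[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$")


def generate_request_id():
    return "req-%s" % uuid.uuid4()


def process_request(headers):
    """Return (local_request_id, global_request_id) for one inbound request.

    The local id is always freshly generated; the inbound
    X-OpenStack-Request-ID header is kept as the global id only if it
    passes strict validation, otherwise it is ignored.
    """
    local_id = generate_request_id()
    inbound = headers.get("X-OpenStack-Request-ID")
    global_id = inbound if inbound and _REQ_ID_RE.match(inbound) else None
    return local_id, global_id
```

The response header sent back to the caller would carry the freshly generated local id, which matches the fork() analogy below: the parent learns the child's id.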
This is like the parent process getting the child process id on a fork() call.

Fortunately for us, most projects now use the oslo.context from_environ constructor. This means that we can add content to the context, or adjust the context, without needing to change every project. For instance, in Glance the context constructor looks like [5]:

    kwargs = {
        'owner_is_tenant': CONF.owner_is_tenant,
        'service_catalog': service_catalog,
        'policy_enforcer': self.policy_enforcer,
        'request_id': request_id,
    }
    ctxt = glance.context.RequestContext.from_environ(req.environ, **kwargs)

As all logging happens after the context is built, all required parts of the context will be there before logging starts. oslo.log defaults should include global_request_id during context logging. This is something which can be done late, as users can always override their context logging string format.

With the infrastructure above implemented, it will be a small change to python clients to save and emit the global_request_id when created. For instance, when Nova calls Neutron, context.request_id would be stored in the client during the get_client call [6]:

    def _get_available_networks(self, context, project_id, net_ids=None,
                                neutron=None, auto_allocate=False):
        """Return a network list available for the tenant.

        The list contains networks owned by the tenant and public networks.
        If net_ids specified, it searches networks with requested IDs only.
        """
        if not neutron:
            neutron = get_client(context)

        if net_ids:
            # If user has specified to attach instance only to specific
            # networks then only add these to **search_opts. This search will
            # also include 'shared' networks.
            search_opts = {'id': net_ids}
            nets = neutron.list_networks(**search_opts).get('networks', [])
        else:
            # (1) Retrieve non-public network list owned by the tenant.
            search_opts = {'tenant_id': project_id, 'shared': False}
            if auto_allocate:
                # The auto-allocated-topology extension may create complex
                # network topologies and it does so in a non-transactional
                # fashion. Therefore API users may be exposed to resources that
                # are transient or partially built. A client should use
                # resources that are meant to be ready and this can be done by
                # checking their admin_state_up flag.
                search_opts['admin_state_up'] = True
            nets = neutron.list_networks(**search_opts).get('networks', [])

            # (2) Retrieve public network list.
            search_opts = {'shared': True}
            nets += neutron.list_networks(**search_opts).get('networks', [])

        _ensure_requested_network_ordering(
            lambda x: x['id'],
            nets,
            net_ids)

        return nets

Note: There are some usage patterns where a client is built and kept for long-running operations. In these cases we'd want to change the model to assume that clients are ephemeral, and should be discarded at the end of their flows. This will also help with tracking non-user-initiated tasks, such as periodic jobs that touch other services for information refresh.

There was a previous OpenStack cross-project spec to completely handle this in the caller. That was merged over 2 years ago, but has yet to gain traction. It had a number of disadvantages. It turns out the client code is far less standardized here, so fixing every client was substantial work. It also requires some standard convention for writing these things out to logs on the caller side that is consistent between all services. It also does not allow people to use Elastic Search to trace their logs (which all large sites have running); a custom piece of analysis tooling would need to be built.

A long time ago, in a galaxy far far away, in a summit room I was not in, I was told there was a concern about clients flooding this field. There has been no documented attack that seems feasible here if we strictly validate the inbound data.
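Putting the client-side change and the strict inbound validation together, the following is a hypothetical sketch. The class and method names are illustrative stand-ins (this is not the real neutronclient API): a short-lived client captures the caller's request id at construction and emits it as X-OpenStack-Request-ID on every call it makes.

```python
# Hypothetical sketch of the client-side half of the proposal.
class Context:
    """Minimal stand-in for an oslo.context RequestContext."""

    def __init__(self, request_id):
        self.request_id = request_id


class ServiceClient:
    """Stand-in for a python client such as neutronclient."""

    def __init__(self, context):
        # Captured once at construction; this is why clients should be
        # treated as ephemeral and rebuilt for each flow.
        self.global_request_id = context.request_id

    def request_headers(self):
        # The receiving service's middleware would strictly validate this
        # header before trusting it as the global_request_id.
        headers = {}
        if self.global_request_id:
            headers["X-OpenStack-Request-ID"] = self.global_request_id
        return headers


def get_client(context):
    # Mirrors the get_client(context) call in the Nova code above.
    return ServiceClient(context)
```

Because the id is captured at construction, rebuilding the client per flow keeps periodic tasks and user requests from sharing a stale id.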
There is a way we could use service roles to validate trust here, but without a compelling case for why that is needed, we should do the simpler thing. For reference, Glance already accepts a user-provided request-id of 64 characters or less. This has existed there for a long time, with no reports of abuse as yet. We could consider dropping the last constraint and not doing role validation.

Swift has a related approach: their transaction id is a multipart id that includes a piece generated by the server on the inbound request, a timestamp piece, a fixed server piece (for tracking multiple clusters), and a user-provided piece. Swift is not currently using any of the above oslo infrastructure, and targets syslog as its primary logging mechanism. While there are interesting bits in this approach, it's a less straightforward chunk of work to transition to, given the oslo components. Also, oslo.log has many structured log back ends (like json stream, fluentd, and systemd journal) where we really would want the global and local ids as separate fields so there is no heuristic parsing required.

The oslo.middleware request_id contract will change so that it accepts an inbound header and sets a second env variable. Both are backwards compatible.

oslo.context will accept a new local_request_id. This requires plumbing local_request_id into all calls that take request_id. This looks fully backwards compatible.

oslo.log will need to be adjusted to support logging both request_ids. It should probably be enabled to do that by default, though the log_context string is a user-configured variable, so operators can set whatever site-local format works for them. An upgrade release note would be appropriate when this happens.

There previously was a concern about trusting request ids from the user. It is an inbound piece of user data, so care should be taken. These items can be handled with strict validation of the content, checking that it looks like a valid uuid.

Minimal.
This is a few extra lines of instruction in existing code paths. No expensive activity is done in this new code. Developers will now have much easier tracing of build requests in their devstack environments!

Note: could definitely use help to get this through the gauntlet; there are lots of little patches here to get right.

TBD - but presumably some updates to the operators' guide on tracing across services.

Note: This work is licensed under a Creative Commons Attribution 3.0 Unported License.
http://specs.openstack.org/openstack/oslo-specs/specs/pike/global-req-id.html
I'd like to send a local REST request in a flask app, like this:

from flask import Flask, url_for, request
import requests

app = Flask(__name__)

@app.route("/<name>/hi", methods=["POST"])
def hi_person(name):
    form = {"name": name}
    return requests.post(url_for("hi", _external=True), data=form)

@app.route("/hi", methods=["POST"])
def hi():
    return 'Hi, %s!' % request.form["name"]

curl -X POST

Run your flask app under a proper WSGI server capable of handling concurrent requests (perhaps gunicorn or uWSGI) and it'll work. While developing, enable threads in the Flask-supplied server with:

app.run(threaded=True)

but note that the Flask server is not recommended for production use. What happens is that using requests you are making a second request to your flask app, but since it is still busy processing the first, it won't respond to this second request until it is done with that first request. Incidentally, under Python 3 the socketserver implementation handles the disconnect more gracefully and continues to serve rather than crash.
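The deadlock and its threaded fix can be demonstrated with nothing but the standard library. This is a stand-in for the Flask situation, not Flask itself: with the plain single-threaded HTTPServer the nested request below would hang, while the threading server handles it fine because each request gets its own thread.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/hi":
            body = b"Hi!"
        else:
            # Nested request back into this same server, like the
            # requests.post() call inside the Flask view above.
            port = self.server.server_address[1]
            with urllib.request.urlopen(f"http://127.0.0.1:{port}/hi") as r:
                body = r.read()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/outer") as r:
    result = r.read().decode()
server.shutdown()
print(result)  # Hi!
```

Swapping `ThreadingHTTPServer` for `HTTPServer` reproduces the hang, which is exactly what `app.run()` does without `threaded=True`.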
https://codedump.io/share/lSUW4CzBrgIF/1/flask-broken-pipe-with-requests
TAP is the Test Anything Protocol, a protocol for communication between test cases and a test harness. This is the protocol used by Perl for its internal test suite and for nearly all Perl modules, since it's the format used by the build tools for Perl modules to run tests and report their results.

A TAP-based test suite works with a somewhat different set of assumptions than an xUnit test suite. In TAP, each test case is a separate program. That program, when run, must produce output in the following format:

1..4
ok 1 - the first test
ok 2
# a diagnostic, ignored by the harness
not ok 3 - a failing test
ok 4 # skip a skipped test

The output should all go to standard output. The first line specifies the number of tests to be run, and then each test produces output that looks like either "ok <n>" or "not ok <n>" depending on whether the test succeeded or failed. Additional information about the test can be provided after the "ok <n>" or "not ok <n>", but is optional. Additional diagnostics and information can be provided in lines beginning with a "#".

Processing directives are supported after the "ok <n>" or "not ok <n>" and start with a "#". The main one of interest is "# skip" which says that the test was skipped rather than successful and optionally gives the reason. Also supported is "# todo", which normally annotates a failing test and indicates that test is expected to fail, optionally providing a reason for why.

There are three more special cases. First, the initial line stating the number of tests to run, called the plan, may appear at the end of the output instead of the beginning. This can be useful if the number of tests to run is not known in advance. Second, a plan in the form:

1..0 # skip entire test case skipped

can be given instead, which indicates that this entire test case has been skipped (generally because it depends on facilities or optional configuration which is not present).
Finally, if the test case encounters a fatal error, it should print the text:

Bail out!

on standard output, optionally followed by an error message, and then exit. This tells the harness that the test aborted unexpectedly.

The exit status of a successful test case should always be 0. The harness will report the test as "dubious" if all the tests appeared to succeed but it exited with a non-zero status.

Writing TAP Tests

Environment

One of the special features of C TAP Harness is the environment that it sets up for your test cases. If your test program is called under the runtests driver, the environment variables C_TAP_SOURCE and C_TAP_BUILD will be set to the top of the test directory in the source tree and the top of the build tree, respectively. You can use those environment variables to locate additional test data, programs and libraries built as part of your software build, and other supporting information needed by tests.

The C and shell TAP libraries support a test_file_path() function, which looks for a file under the build tree and then under the source tree, using the C_TAP_BUILD and C_TAP_SOURCE environment variables, and returns the full path to the file. This can be used to locate supporting data files. They also support a test_tmpdir() function that returns a directory that can be used for temporary files during tests.

Perl

Since TAP is the native test framework for Perl, writing TAP tests in Perl is very easy and extremely well-supported. If you've never written tests in Perl before, start by reading the documentation for Test::Tutorial and Test::Simple, which walks you through the basics, including the TAP output syntax. Then, the best Perl module to use for serious testing is Test::More, which provides a lot of additional functions over Test::Simple including support for skipping tests, bailing out, and not planning tests in advance. See the documentation of Test::More for all the details and lots of examples.
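Because the protocol is just lines on standard output, it is trivial to emit from any language. As a quick illustration (unrelated to C TAP Harness itself), here is a minimal TAP emitter in Python producing the plan and result lines described above:

```python
import io

def run_tap(tests, out):
    # Emit a TAP plan followed by one result line per test.
    # `tests` is a list of (description, passed) pairs.
    print(f"1..{len(tests)}", file=out)
    for i, (desc, passed) in enumerate(tests, start=1):
        status = "ok" if passed else "not ok"
        print(f"{status} {i} - {desc}", file=out)

buf = io.StringIO()
run_tap([("the first test", True), ("a failing test", False)], buf)
print(buf.getvalue(), end="")
```

This prints a valid two-test TAP stream that any TAP harness, including runtests, could consume.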
C TAP Harness can run Perl test scripts directly and interpret the results correctly, and similarly the Perl Test::Harness module and prove command can run TAP tests written in other languages using, for example, the TAP library that comes with C TAP Harness. You can, if you wish, use the library that comes with C TAP Harness but use prove instead of runtests for running the test suite.

C

C TAP Harness provides a basic TAP library that takes away most of the pain of writing TAP test cases in C. A C test case should start with a call to plan(), passing in the number of tests to run. Then, each test should use is_int(), is_string(), is_double(), or is_hex() as appropriate to compare expected and seen values, or ok() to do a simpler boolean test. The is_*() functions take expected and seen values and then a printf-style format string explaining the test (which may be NULL). ok() takes a boolean and then the printf-style string.

Here's a complete example test program that uses the C TAP library:

#include <stddef.h>
#include <tap/basic.h>

int main(void)
{
    plan(4);
    ok(1, "the first test");
    is_int(42, 42, NULL);
    diag("a diagnostic, ignored by the harness");
    ok(0, "a failing test");
    skip("a skipped test");
    return 0;
}

This test program produces the output shown above in the section on TAP and demonstrates most of the functions. The other functions of interest are sysdiag() (like diag() but adds strerror() results), bail() and sysbail() for fatal errors, skip_block() to skip a whole block of tests, and skip_all() which is called instead of plan() to skip an entire test case.

The C TAP library also provides plan_lazy(), which can be called instead of plan(). If plan_lazy() is called, the library will keep track of how many test results are reported and will print out the plan at the end of execution of the program.
This should normally be avoided since the test may appear to be successful even if it exits prematurely, but it can make writing tests easier in some circumstances.

Complete API documentation for the basic C TAP library that comes with C TAP Harness is available at: <>

It's common to need additional test functions and utility functions for your C tests, particularly if you have to set up and tear down a test environment for your test programs, and it's useful to have them all in the libtap library so that you only have to link your test programs with one library. Rather than editing tap/basic.c and tap/basic.h to add those additional functions, add additional *.c and *.h files into the tap directory with the function implementations and prototypes, and then add those additional objects to the library. That way, you can update tap/basic.c and tap/basic.h from subsequent releases of C TAP Harness without having to merge changes with your own code.

Libraries of additional useful TAP test functions are available in rra-c-util at: <>

Some of the code there is particularly useful when testing programs that require Kerberos keys.

If you implement new test functions that compare an expected and seen value, it's best to name them is_<something> and take the expected value, the seen value, and then a printf-style format string and possible arguments to match the calling convention of the functions provided by C TAP Harness.

Shell

C TAP Harness provides a library of shell functions to make it easier to write TAP tests in shell. That library includes much of the same functionality as the C TAP library, but takes its parameters in a somewhat different order to make better use of shell features.

The libtap.sh file should be installed in a directory named tap in your test suite area. It can then be loaded by tests written in shell using the environment set up by runtests with:

. "$C_TAP_SOURCE"/tap/libtap.sh

Here is a complete test case written in shell which produces the same output as the TAP sample above:

#!/bin/sh

. "$C_TAP_SOURCE"/tap/libtap.sh
cd "$C_TAP_BUILD"

plan 4
ok 'the first test' true
ok '' [ 42 -eq 42 ]
diag a diagnostic, ignored by the harness
ok '' false
skip 'a skipped test'

The shell framework doesn't provide the is_* functions, so you'll use the ok function more. It takes a string describing the text and then treats all of its remaining arguments as a condition, evaluated the same way as the arguments to the "if" statement. If that condition evaluates to true, the test passes; otherwise, the test fails.

The plan, plan_lazy, diag, and bail functions work the same as with the C library. skip takes a string and skips the next test with that explanation. skip_block takes a count and a string and skips that many tests with that explanation. skip_all takes an optional reason and skips the entire test case.

Since it's common for shell programs to want to test the output of commands, there's an additional function ok_program provided by the shell test library. It takes the test description string, the expected exit status, the expected program output, and then treats the rest of its arguments as the program to run. That program is run with standard error and standard output combined, and then its exit status and output are tested against the provided values.

A utility function, strip_colon_error, is provided that runs the command given as its arguments and strips text following a colon and a space from the output (unless there is no whitespace on the line before the colon and the space, normally indicating a prefix of the program name). This function can be used to wrap commands that are expected to fail with output that has a system- or locale-specific error message appended, such as the output of strerror().

License

This file is part of the documentation of C TAP Harness, which can be found at <>.
Copying and distribution of this file, with or without modification, are permitted in any medium without royalty provided the copyright notice and this notice are preserved. This file is offered as-is, without any warranty. SPDX-License-Identifier: FSFAP
https://sources.debian.org/src/inn2/2.6.3-1/tests/README/
#include <List.h>

List of all members.

- Constructor for the list.
- Destructor of the list.
- Adds an element at the end of the list.
- Adds an element to the list. If needed the array is enlarged.
- Clears and empties the list.
- Returns whether the element is in the list.
- Returns the reference of the nth element.
- Returns the position of an element in the list.
- Returns the nth element out of the dynamic array.
- Removes an element at the given position.
- Removes the given element from the list.
- Sets a value at the given position. The old value is overwritten.
- Returns the size of the list.
- Trims the list so that capacity equals size.
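The interface reads like a standard dynamic array. A rough Python analogue of the operations listed (method names are inferred from the descriptions, not taken from the actual C++ header):

```python
class List:
    """Dynamic-array list mirroring the member descriptions above."""

    def __init__(self):
        self._items = []

    def append(self, element):      # adds an element at the end
        self._items.append(element)

    def contains(self, element):    # is the element in the list?
        return element in self._items

    def index_of(self, element):    # position of an element
        return self._items.index(element)

    def get(self, n):               # nth element
        return self._items[n]

    def remove_at(self, n):         # remove at the given position
        del self._items[n]

    def set(self, n, value):        # overwrite value at a position
        self._items[n] = value

    def size(self):
        return len(self._items)

lst = List()
lst.append("a")
lst.append("b")
lst.set(1, "c")
print(lst.contains("c"), lst.size())  # True 2
```

In Python the list type already provides all of this; in C++ the class wraps a manually resized array, which is why operations like "trim capacity to size" exist at all.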
http://es3d.sourceforge.net/doxygen/class_e_s3_d_1_1_list.html
Dist Git Project

From FedoraProject

What

This is a project to convert our current package source control (CVS) into git. See Dist_Git_Proposal.

Who

- Jesse Keating
- Toshio Kuratomi
- Clint Savage

Road Map

There may be multiple proofs of concept as we tweak various components and layouts, but generally they will follow these phases:

Phase 1

This phase is all about the import. This is where we convert the CVS content into git, and then provide access to the content via git:// anon clones. Content won't be modified other than imported and branches created for the various releases.

Phase 2

This phase is all about providing write access to the repos via ssh. It is during this phase that we will figure out how to generate an ACL set and apply these ACLs to the repos, including branch level ACLs. Once this phase completes we will be able to offer write access to the git repos for maintainers.

Phase 3

This is the tooling phase, where we develop fedpkg, the utility to replace the Make system. Once this phase is complete we will have a package that maintainers can download and use to interact with their repos in various ways.

Phase 4

This is the building phase where we allow packages to be built in a test koji instance from our dist-git repos.

Implementation Plans

Currently we are evaluating a number of tools to perform the tasks needed.

CVS import

parsecvs is the tool we're currently using to convert CVS history into git format. It has been used to convert a number of projects, including xorg and the Gnome projects, so it has a good track record. It is quite fast, is able to translate CVS commit names into full git-like name+addresses, and seems to handle our packages well. A script has been written that processes the CVS ,v files via parsecvs and creates the proper branches for release subdirs. The last full run took roughly 900 minutes to complete. A trial import is available to the public via a public test system.
Modules are exported via the git:// or ssh:// protocol, and the url format is git://pkgs.stg.fedoraproject.org/<module> or ssh://[fedoraccount@]pkgs.stg.fedoraproject.org/<module>

For example, in order to clone the yum module anonymously, one would enter:

git clone git://pkgs.stg.fedoraproject.org/yum

To clone the yum module with write access, one would enter:

git clone ssh://pkgs.stg.fedoraproject.org/yum

after which git push should just work, provided you have pkgdb rights to commit to the yum module.

Git ACLs

Unlike CVS, where we used subdirectories for "branches", and thus would be able to apply filesystem ACLs on the subdirs, git does not provide an easy way to do filesystem ACLs at a branch level. Therefore we will need to use an extra layer in order to accomplish our needs. This is not unlike our current use of CVS, where we rely on file system group ID to provide write access, and then use the cvs Avail system to restrict that down.

Currently we are evaluating gitolite to provide the ACLs. It has the ability to provide users write access only to specific branches. gitolite does have a few problems for our use that we are working out with upstream:

- It defines user groups internally rather than using getent
- It is designed around every user logging in through a single system user via ssh keys
- The config file system does not quite scale to our size

gitolite upstream has created a branch of the code for multi-user huge config file use, namely the Fedora case, and is committed to making it work for us. A preliminary script has been written that takes data from pkgdb and getent in order to draft a gitolite config file which can then be "compiled" into what gitolite uses internally to check ACLs.

gitolite works by running a gl-auth-command when an ssh connection is initiated. This is controlled by .ssh/authorized_keys, much in the way that our current CVS server setup is done.
gl-auth-command will then check ACLs against a pre-compiled hash and, if allowed, will pass the rest of your ssh command on to the local git command. The update hook in each repo will also check permissions to see if you have rights to do whatever it is you are doing on whichever ref you are trying to do it on (master, F-12, a tag, etc.).

This does bring up the wrinkle of admins with shell access to the git server, for whom we can't force gl-auth-command via authorized_keys. These few people will bypass the first auth check, and we'll have to rely upon file system permissions + the repo update hook to deny them access to things which they shouldn't access. A symlink on the filesystem will need to be provided so that people interacting directly with git use the same paths as people who interact via gl-auth-command (which defines its own git root path).

Another problem is that when using the update hook, and not using gl-auth-command (eg people who have full shell access), the update hook will not allow writing due to missing data in the environment. There are a couple different ways this could be fixed:

- An early check in the update hook that exits 0 if gl-auth-command wasn't used, bypassing branch level ACLs
- Running a secondary ssh server for admin shells, forcing all git traffic through gl-auth-command
- Forcing gl-auth-command for every ssh user, with a check in gl-auth-command that detects non-git actions and drops to the shell if the user is an admin user

All have their pros and cons, but the last option seems the best currently, due to not having to run more daemons, having every git action go through gl-auth-command and subsequently the update hook, and by being able to define a git base path to keep the urls short. This solution has one small wrinkle. Currently we have 'no-pty' as one of the options when using a forced command in authorized_keys.
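For illustration, a forced-command entry in authorized_keys of the kind described looks roughly like this (path, options, and key truncated; the exact line Fedora would generate may differ):

```
command="/usr/bin/gl-auth-command jkeating",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-rsa AAAAB3Nza... jkeating
```

sshd ignores whatever command the client asked for and runs the forced command instead, passing the original request in SSH_ORIGINAL_COMMAND, which is how gl-auth-command knows which git operation to authorize.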
This means that admins would not be able to get a shell due to lack of a pty, so for certain users we'd have to not use that option. This may require some changes within FAS itself to make ssh auth commands and options more generic and defined by each group. This is the route we're currently taking to provide ssh write access to the repos.

Branch ACLs

There is ongoing discussion as to how to manage user created branches in our repos. The ACL system can limit these branches to a particular name space (or 3..) which is almost necessary for the ACL system to work. There are some open questions though:

- What namespace to use (private- is what CVS used)
- Who should have rights to create/commit to such branches (currently anybody with filesystem write access is allowed)
- Should builds be allowed to happen from branches (I'd prefer all official builds come from origin/master or official release branches)

fedpkg

See Dist_Git_Proposal#fedpkg for a feature list. Ideally fedpkg would operate somewhat like the koji client or python-bugzilla client, or even how git works, where fedpkg takes a series of global options, and then each command takes further options. This helps in isolating development on the tool and makes adding features to specific commands quite easy (look into python-argparse for this).

Initial code drop with some functioning targets has been done. You can find the code in the fedora-packager package. An anonymous checkout can be done with:

git clone git://git.fedorahosted.org/fedora-packager.git

There are currently two files being worked on: fedpkg.py, which is the command line tool to gather options/arguments, and fedpkg/__init__.py, which is a python module housing the code to actually do stuff.

Work Needed

Here are some high level items we should get done before going live with fedpkg.
- commit target
- new-sources target
- clone --branches
- import (replace cvs-import.sh)
- init (create a local repo to work with)

koji

Koji upstream already has some code to deal with git repos, however our proposed layout will be different enough to require modification. No modification has been done yet. Recently we attempted to hook dist-git into koji and discovered some areas in koji that need modification. Namely the use of "make" needs to be optional and configurable as we don't use make, we use fedpkg. Other than that the existing code for dealing with git repos works, as long as fedpkg crafts the url sent to koji correctly. An instance of koji is running in the stg environment and it will be modified so that we can build things from dist-git with it.

When

Current target for conversion is shortly after the Fedora 13 release.

How can I help?

Find Jesse Keating (Oxf13) on freenode IRC
http://fedoraproject.org/wiki/Dist_Git_Project
To: s-y-n-e-r-g-y

> I have an old Palm M100 that I have never really used. I'm thinking of
> dusting it off to be used for reading Ebooks. Before I dig up the
> hotsync cable and buy some batteries:

Your biggest problem is going to be memory, the m100 only has 2 megs.

> Can anyone remind what kinds of files I can read on this unit? IE: PDF
> files, TXT files, DOC files.. LIT files, etc... ? Do i need some 3rd
> party software?

Depends on your software; with stock software, only text files that you import into memo pad. If you're going to go for ebooks in general then I'd recommend you find out what your source is going to be, as there are many formats out there that convert txt or doc files to palm readable. LIT files require MS Reader, which I think is only for PocketPC. Perhaps look into Documents to Go if you have an older copy and already have books in text or Word doc format. If you are getting books, eReader in conjunction with the Palm Store would be a good idea. iSilo is another widely used program that I use myself. Most things require conversion in some way to work with Palms.

--- Synchronet 3.13a-Win32 NewsLink 1.83 Shadow River BBS
http://fixunix.com/palmtop/107876-palm-m100.html
Find an integer n for which (1+2cos(pi/9))^n rounded to the nearest integer is divisible by 1 billion. I heard this problem could be solved by linear algebra. Please help me solve this!

Since no one's chiming in, I'll mention that perhaps some form of fast exponentiation could lead to a solution. (I do not have a solution, and not much time to devote to finding one.)

Not sure whether this would be of use: Trigonometry Angles--Pi/9 -- from Wolfram MathWorld. Seems to me an interesting and tough problem.

There seems to be something special about the number 1+2cos(pi/9). According to observation as opposed to rigorous proof, I find that if we let a(n) = [(1+2cos(pi/9))^n], where [k] means the nearest integer to k, the sequence {a(n)} approaches an integer sequence whose successive ratios a(n)/a(n-1) of course approach 1+2cos(pi/9), and from experimentation, a(n) = 2*a(n-1) + 2*a(n-2) + sum(i=0, n-3, a(i)). The relation seems to hold starting from n=5. Testing on PARI/GP, I'm not sure what happens around n=60 because precision loss seems to kick in; I would have to reset precision or use a different CAS or something. (But my guess right now is that it holds for all n > 4.)

After some manipulation, this just becomes a(n) = 3*a(n-1) - a(n-3), holding for n > 5. Assuming this is true, calculating a(n) mod 10^9 for n having 10 digits could run in a "reasonable" amount of time just by brute force, although this seems pretty inelegant. Perhaps matrix multiplication can be used in the manner of this, which could provide a tie-in with linear algebra and fast exponentiation. (Just thinking out loud.)

Do you have any way to check answers?
I got

Spoiler: n = 169531249

Java code:

import java.util.ArrayList;

public class NonagonRecursion {
    static ArrayList<Long> nums = new ArrayList<Long>();
    static long n = 6;
    static final long M = 1000000000;

    public static void main(String[] args) {
        long t = time();
        nums.add(24L);
        nums.add(69L);
        nums.add(198L);
        for (;; n++) {
            long next = (3*nums.get(2) - nums.get(0)) % M;
            if (next < 0) next += M;
            if (next == 0) {
                System.out.println(n);
                break;
            }
            nums.remove(0);
            nums.add(next);
        }
        System.out.println("Elapsed: " + (time()-t)/1000.0 + " seconds");
    }

    public static long time() {
        return System.currentTimeMillis();
    }
}

Output:

169531249
Elapsed: 5.543 seconds

Also, I'd like to know where you got this problem.

This is very similar to a hint my friend gave me. I will ask my friend when I get a chance.

From my friend ^^

Could you explain a bit on what you did in the program?

Characteristic polynomial.. I have experience with these in the context of solving problems, but my theoretical knowledge could be better. But as to why it is a root.. I remember seeing solutions to cubics that looked similar, but I can't say for sure it was the same situation, it was so long ago, and I should definitely brush up. Project Euler problems to solve..

The other part is that we only care about the last 9 digits at any given time. So we use the % or "mod" operator, which gives the remainder when divided by (in this case) 10^9. Sometimes though we get a negative number and have to add 10^9 in order to get it in the desired range, 0 to 10^9 - 1. If you study modular arithmetic/congruence it will be pretty apparent; you might need to work with a few examples before it sinks in. (I don't know your experience/knowledge level.)
We are done when the last 9 digits are 0, that is, our term is congruent to 0 mod 10^9. So for example, if we want to know the last three digits of 7^6 but don't want to calculate 7^6 directly, we can take mod 10^3 at each operation and come up with the desired result:

7^3 = 343
343 * 7 = 2401, discard all but last three digits
401 * 7 = 2807, discard all but last three digits
807 * 7 = 5649

so the answer to that example is 649. The rest is just knowing how to read Java syntax, like the loop structure.

A post by Opalg in the following thread is relevant:
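The recurrence claimed earlier in the thread can be sanity-checked in a few lines by comparing it against direct rounding of (1+2cos(pi/9))^n for small n, before floating-point precision becomes an issue:

```python
from math import cos, pi

x = 1 + 2 * cos(pi / 9)
direct = [round(x ** n) for n in range(20)]   # a(n) by direct rounding

rec = direct[:6]                              # seed with a(0)..a(5)
for n in range(6, 20):
    rec.append(3 * rec[-1] - rec[-3])         # a(n) = 3*a(n-1) - a(n-3)

print(direct == rec)  # True
```

This only confirms the recurrence on a small range; the full search mod 10^9 is what the Java program above does.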
http://mathhelpforum.com/advanced-algebra/151955-find-integer-n-1-2cos-9-n-rounded-nearest-integer-divisib.html
Guideline When Adding New Executor¶

New deep learning model? New indexing algorithm? When the existing executors/drivers do not fit your requirement, and you can not find a useful one from Jina Hub, you can simply extend Jina to what you need without even touching the Jina codebase. In this chapter, we will show you the guideline of making an extension for a jina.executors.BaseExecutor. Generally speaking, the steps are the following:

1. Decide which Executor class to inherit from;
2. Override __init__() and post_init();
3. Override the core method of the base class;
4. (Optional) implement the save logic.

Decide which Executor class to inherit from¶

The list of executors supported by the current Jina can be found here. As one can see, all executors are inherited from jina.executors.BaseExecutor. So do you want to inherit directly from BaseExecutor for your extension as well? In general you don't. Rule of thumb, you always pick the executor that shares the similar logic to inherit. If your algorithm is so unique and does not fit any of the categories below, you may want to submit an issue for discussion before you start.

Note: inherit from class X when …

- jina.executors.encoders.BaseEncoder: you want to represent the chunks as vector embeddings.
- jina.executors.encoders.BaseNumericEncoder: you want to represent numpy array objects (e.g. image, video, audio) as vector embeddings.
- jina.executors.encoders.BaseTextEncoder: you want to represent string objects as vector embeddings.
- jina.executors.indexers.BaseIndexer: you want to save and retrieve vectors and key-value information from storage.
- jina.executors.indexers.BaseVectorIndexer: you want to save and retrieve vectors from storage.
- jina.executors.indexers.NumpyIndexer: your vector indexer uses a simple numpy array for storage, and you only want to specify the query logic.
- jina.executors.indexers.BaseKVIndexer: you want to save and retrieve key-value pairs from storage.
- jina.executors.crafters.BaseCrafter: you want to segment/transform the documents and chunks.
- jina.executors.crafters.BaseDocCrafter: you want to transform the documents by modifying some fields.
- jina.executors.crafters.BaseChunkCrafter: you want to transform the chunks by modifying some fields.
- jina.executors.crafters.BaseSegmenter: you want to segment the documents into chunks.
- jina.executors.Chunk2DocRanker: you want to rank the documents based on the scores of their chunks.
- jina.executors.CompoundExecutor: you want to combine multiple executors in one.
- jina.executors.BaseClassifier: you want to enrich the documents and chunks with a classifier.

Override __init__() and post_init()¶

Override __init__()¶

You can put simple type attributes that define the behavior of your Executor into __init__(). Simple types represent all pickle-able types, including: integer, bool, string, tuple of simple types, list of simple types, map of simple types. For example,

from jina.executors.crafters import BaseSegmenter

class GifPreprocessor(BaseSegmenter):

    def __init__(self,
                 img_shape: int = 96,
                 every_k_frame: int = 1,
                 max_frame: int = None,
                 from_bytes: bool = False,
                 *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.img_shape = img_shape
        self.every_k_frame = every_k_frame
        self.max_frame = max_frame
        self.from_bytes = from_bytes

Remember to add super().__init__(*args, **kwargs) to your __init__(). Only in this way you can enjoy many magic features, e.g. YAML support, persistence from the base class (and BaseExecutor).

Note: all attributes declared in __init__() will be persisted during save() and load().

Override post_init()¶

So what if the data you need to load is not a simple type? For example, a deep learning graph, a big pretrained model, a gRPC stub, a tensorflow session, a thread? Then you can put them into post_init().

Another scenario is when you know there is a better persistence method other than pickle. For example, your hyperparameter matrix in a numpy ndarray is certainly picklable.
However, one can simply read and write it via standard file IO, and it is likely more efficient than pickle. In this case, you do the data loading in post_init(). Here is a good example.

from jina.executors.encoders import BaseTextEncoder

class TextPaddlehubEncoder(BaseTextEncoder):

    def __init__(self,
                 model_name: str = 'ernie_tiny',
                 max_length: int = 128,
                 *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.model_name = model_name
        self.max_length = max_length

    def post_init(self):
        import paddlehub as hub
        self.model = hub.Module(name=self.model_name)
        self.model.MAX_SEQ_LEN = self.max_length

Note: post_init() is also a good place to introduce package dependencies, e.g. import x or from x import y. Naively, one can always put all imports upfront at the top of the file. However, this will throw a ModuleNotFound exception when the package is not installed locally. Sometimes it may break the whole system because of this one missing dependency. Rule of thumb, only import packages where you really need them. Often these dependencies are only required in post_init() and the core method, which we shall see later.

Override the core method of the base class¶

Each Executor has a core method, which defines the algorithmic behavior of the Executor. For making your own extension, you have to override the core method. The following table lists the core method you may want to override. Note some executors may have multiple core methods. Feel free to override other methods/properties as you need. But frankly, most of the extension can be done by simply overriding the core methods listed above. Nothing more. You can read the source code of our executors for details.

Implement the persistence logic¶

If you don't override post_init(), then you don't need to implement persistence logic. You get YAML and persistency support off-the-shelf because of BaseExecutor. Simple crafters and rankers fall into this category.
If you override post_init() but you don't care about persisting its state in the next run (when the executor process is restarted), or the state is simply unchanged during the run, then you don't need to implement persistence logic. Loading a fixed pretrained deep learning model falls into this category.

Persistence logic is only required when you implement customized loading logic in post_init() and the state is changed during the run. Then you need to override __getstate__(). Many of the indexers fall into this category. In the example below, the tokenizer is loaded in post_init() and saved in __getstate__(), which completes the persistence cycle.

class CustomizedEncoder(BaseEncoder):
    def post_init(self):
        self.tokenizer = tokenizer_dict[self.model_name].from_pretrained(self._tmp_model_path)
        self.tokenizer.padding_side = 'right'

    def __getstate__(self):
        self.tokenizer.save_pretrained(self.model_abspath)
        return super().__getstate__()

How Can I Use My Extension

You can use the extension by specifying py_modules in the YAML file. For example, your extension Python file is called my_encoder.py, which describes MyEncoder. Then you can define a YAML file (say my.yml) as follows:

!MyEncoder
with:
  greetings: hello im external encoder
metas:
  py_modules: my_encoder.py

Note: You can also assign a list of files to metas.py_modules if your Python logic is split over multiple files. This YAML file and all Python extension files should be put under the same directory. Then simply use it in the Jina CLI by specifying jina pod --uses=my.yml, or Flow().add(uses='my.yml') in the Flow API.

Warning: If you use a customized executor inside a jina.executors.CompoundExecutor, then you only need to set metas.py_modules at the root level, not at the sub-component level.

I Want to Contribute it to Jina

We are really glad to hear that! We have put quite some effort into helping you contribute and share your extensions with others.
You can easily pack your extension and share it with others via a Docker image. For more information, please check out Jina Hub. Just make a pull request there and our CI/CD system will take care of building, testing and uploading.
https://docs.jina.ai/v1.0.0/chapters/extend/executor.html
Connecting a Meteor back-end to a React Native application

In the last few weeks, I was very intrigued by React Native. I kept seeing more and more articles like this one, so I decided to take a deeper dive into React Native and actually use it, for real. Meteor is the framework I use at work, and I now have some experience with it. I thought about connecting a React Native application with a Meteor back-end. This article will show you how to get things started.

Creating the Meteor app

First things first, we will create a Meteor application.

meteor create serverMeteor

For now, that is all we need. We'll come back to that.

Creating our React Native app

I'll use the very useful create-react-native-app tool. To get more info on this, check this link. It will also explain how to use the Expo client app to see your work, very useful! So, we run a command like this:

create-react-native-app reactFront

Now, you'll have two folders. One named serverMeteor that contains your Meteor application, and another named reactFront where you will find your react-native application.

React-Native: Creating a simple PhoneBook

For the sake of brevity, we will create something simple. The user will have a form with two inputs. The first will take a name, the second a phone number. After modifications, here is how App.js looks:

import React from 'react';
import { StyleSheet, Text, View, TextInput, Button } from 'react-native';

export default class App extends React.Component {
  constructor(){
    super()
    this.state = {
      name: '',
      number: ''
    }
  }

  addPhoneNumber = () => {
    console.log(this.state)
    this.setState({ number: '', name: '' })
  }

'/>
      </View>
    );
  }
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#fff',
    marginTop: 20
  },
  input: {
    borderWidth: 2,
    borderColor: 'gray',
    height: 50,
    margin: 10
  }
});

I added two TextInput elements and a Button element. I also added some styles for the input. In React Native, we use StyleSheet.create({}) to control styles.
Or you could style using inline objects as in React. On my iOS simulator it looks like this:

Ok, for now, when we click (or tap) on the button, nothing happens: it logs the values to the console and resets them. Let's move on to the back-end.

Meteor: Preparing the method and publication

Go to the folder where your Meteor application is located. Mine was called serverMeteor. Let's create an /imports folder, and inside this /imports, we'll add an /api folder, just to follow the proper Meteor conventions.

Here is the plan: we will create a Meteor method that our React Native app will call when we click on the Save Phone Number button. This method will save the name and the number to the Meteor mongo database. Then, we will create a publication that our React Native application will subscribe to. It will simply return all the entries we have. Let's go!

In /imports/api/, let's create a PhoneNumbers.js file that will hold our small back-end logic.

export const PhoneNumbers = new Mongo.Collection('phoneNumbers')

Meteor.methods({
  addPhoneNumber( data ){
    PhoneNumbers.insert({
      name: data.name,
      number: data.number
    }, err => {
      if (err){
        return err
      } else {
        return null
      }
    })
  }
})

Meteor.publish('getAllNumbers', () => {
  return PhoneNumbers.find({})
})

Nothing fancy here. We create our collection, our method addPhoneNumber and our publication getAllNumbers. And that's it for Meteor. Let's make the two applications talk to one another.
If you've worked with Meteor/React before, this will look very familiar:

// In our App component
addPhoneNumber = () => {
  const data = {
    number: this.state.number,
    name: this.state.name
  }
  Meteor.call('addPhoneNumber', data, err => {
    if( err ){
      console.log( err )
    } else {
      this.setState({ number: '', name: '' })
    }
  })
}

Next, let's subscribe to our publication. For this, we will wrap our App component in createContainer provided by react-native-meteor. Let's import it at the top of our file:

import Meteor, { createContainer } from 'react-native-meteor'

Good, now we will NOT export our App component, but the createContainer wrapper. Like so:

// The App Component will be defined above like so:
// class App extends React.Component{ ... }

export default createContainer( () => {
  Meteor.subscribe('getAllNumbers')
  return {
    phoneNumbers: Meteor.collection('phoneNumbers').find({})
  }
}, App) // Need to specify which component we are wrapping

Ok, that's done. So we will get the phone numbers in a nice array. We will display them in a list. Nothing fancy, we will use the FlatList component. Don't forget to import FlatList from react-native. Our render function will look like so:

// Still in the App component my friend
'/>
      <FlatList
        data={this.props.phoneNumbers}
        keyExtractor={(item, index) => item._id}
        renderItem={({item}) => (
          <View>
            <Text>{item.name} || {item.number}</Text>
          </View>
        )}
      />
    </View>
  );
}

FlatList takes the array of data and loops through it in the renderItem function. I'm just displaying the name and the phone number. keyExtractor is used to create keys for each element we render in this list, just like React needs on the web. Each key will be the ObjectID returned from MongoDB.

Finally, let's make sure our React Native application knows where to get that information:

//I have only one component anyway...
componentWillMount(){
  Meteor.connect('ws://localhost:3000/websocket')
}

We use the connect method from react-native-meteor.
Note: Because I am only using the iOS simulator here, I can use localhost. If you use the Android simulator, you will need to use the IP address of your machine (192.168.xx.xx:3000/websocket for example).

Clicking on the Save Phone Number button will populate the database in our Meteor application. Our subscription to the publication will retrieve the information and display it! Just a final picture to show you how it looks on my iOS simulator:

Well, there you have it. You can now connect a React Native application to a Meteor application without a problem. Have fun!

Warning: It seems that using npm5 with create-react-native-app is buggy and doesn't work properly. You should probably use npm4 or yarn to make sure you don't encounter any problems for now.
http://brianyang.com/connecting-a-meteor-back-end-to-a-react-native-application/
NAME
fgetc, fgets, getc, getchar, gets, ungetc - input of characters and strings

SYNOPSIS
#include <stdio.h>

int fgetc(FILE *stream);
char *fgets(char *s, int size, FILE *stream);
int getc(FILE *stream);
int getchar(void);
char *gets(char *s);
int ungetc(int c, FILE *stream);

CONFORMING TO
C89, C99, POSIX.1-2001. LSB deprecates gets(). POSIX.1-2008 removes the specification of gets().

BUGS
It is not advisable to mix calls to input functions from the stdio library with low-level calls to read(2) for the file descriptor associated with the input stream; the results will be undefined and very probably not what you want.

SEE ALSO
read(2), write(2), ferror(3), fgetwc(3), fgetws(3), fopen(3), fread(3), fseek(3), getline(3), getwchar(3), puts(3), scanf(3), ungetwc(3), unlocked_stdio(3)

COLOPHON
This page is part of release 3.24 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at.
http://huge-man-linux.net/man3/fgets.html
Qt Quick with the power of OpenCL on Embedded Linux devices

Monday April 20, 2015 by Laszlo Agocs

We have previously discovered how easy it is to get started with CUDA development in Qt applications using OpenGL. When it comes to OpenCL, developers are not left out in the cold either, thanks to Hardkernel's ODROID-XU3, where the ARM Mali-T628 graphics processor provides full OpenCL 1.1 support with CL-GL interop in addition to OpenGL ES 3.0. In this post we will take a look at a simple and powerful approach to integrating Qt Quick applications and OpenCL. We will focus on use cases that involve sharing OpenGL resources like textures or buffers between OpenGL and OpenCL. The examples demonstrate three standard compute use cases and we will see them running on an actual ODROID board.

Why OpenCL and Qt?

The ability to perform complex, highly parallel computations on embedded devices while keeping as much data on the GPU as possible and to visualize the results with Qt Quick and touch-friendly Qt Quick Controls open the door for easily creating embedded systems performing advanced tasks in the domain of computer vision, robotics, image and signal processing, bioinformatics, and all sorts of heavyweight data crunching. As an example, think of gesture recognition: with high resolution webcams, Qt Multimedia, Qt Quick, Qt Quick Controls, and the little framework presented below, applications can focus on the things that matter: the algorithms (OpenCL kernels) performing the core of the work and the C++ counterpart that enqueues these kernels. The rest is taken care of by Qt.

Looking back: Qt OpenCL

OpenCL is not unknown to Qt - once upon a time, back in the Qt 4 days, there used to be a Qt OpenCL module, a research project developed in Brisbane. It used to contain a full 1:1 API wrapper for OpenCL 1.0 and 1.1, and some very helpful classes to get started with CL-GL interop.
Today, with the rapid evolution of the OpenCL API, the availability of an official C++ wrapper, and the upcoming tighter C++ integration approaches like SYCL, we believe there is little need for straightforward Qt-ish wrappers. Applications are encouraged to use the OpenCL C or C++ APIs as they see fit. However, when it comes to the helpers that simplify common tasks like choosing an OpenCL platform and device so that we get interoperability with OpenGL, they turn out to be really handy, especially when writing cross-platform applications. Case in point: Qt Multimedia 5.5 ships with an OpenCL-based example as presented in the video filters introduction post. The OpenCL initialization boilerplate code in that example is unexpectedly huge. This shows that the need for modern, Qt 5 based equivalents of the old Qt OpenCL classes like QCLContextGL has not gone away. In fact, with the ubiquity of OpenCL and OpenGL on all kinds of devices and platforms, they are more desirable than ever.

Qt 5.5 on the ODROID-XU3

Qt 5.5 introduces support for the board in the device makespec linux-odroid-xu3-g++. Just pass -device odroid-xu3 to configure. For example, to build release mode binaries with a toolchain borrowed from the Raspberry Pi, assuming a sysroot at ~/odroid/sysroot:

./configure -release -prefix /usr/local -extprefix ~/odroid/sysroot/usr/local -hostprefix ~/odroid/qt5-build -device odroid-xu3 -device-option CROSS_COMPILE=~/odroid/toolchain/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian-x64/bin/arm-linux-gnueabihf- -sysroot ~/odroid/sysroot -nomake examples -nomake tests -opengl es2

This will configure the Qt libraries and target tools like qmlscene to be deployed under /usr/local in the sysroot, while the host tools - like the x86 build of qmake that is to be used when building applications afterwards - get installed into ~/odroid/qt5-build.
When it comes to the platform plugins, both xcb and eglfs are usable, but only one at a time: the Mali graphics driver binary is different for X11 and fbdev, and has to be switched accordingly. The Ubuntu image from Hardkernel comes with X11 in place. While OpenGL is usable under X too, the usage of eglfs and the fbdev drivers is recommended, as usual. For more information on the intricacies and a step by step guide to deploying Qt on top of the Hardkernel image, see this wiki page. If you have a Mali-based ARM Chromebook featuring a similar CPU-GPU combo, see here. It is worth noting that thanks to Qt's Android port, running a full Android system with Qt apps on top is also feasible on this board.

Time for some action

Now to the fun part. Below are three examples running on the framebuffer in full HD resolution with the fbdev Mali driver variant, Qt 5.5 and the eglfs platform plugin. All of them utilize OpenCL 1.1, CL-GL interop, and are regular Qt Quick 2 applications. They all utilize the little example framework which we call Qt Quick CL for now.

OpenGL texture to OpenGL texture via OpenCL

First, let's take a look at a standard image processing use case: we will execute one or more OpenCL kernels on our input, which can be a Qt Quick Image element, a (potentially invisible) sub-tree of the scene, or any texture provider, and generate a new texture. With CL-GL interop the data never leaves the GPU: no pixel data is copied between the CPU and the GPU. Those familiar with Qt Quick have likely realized already that this is in fact an OpenCL-based alternative to the built-in, GLSL-based ShaderEffect items.
By using the easy-to-use base classes to automatically and transparently manage OpenCL and CL-GL initialization, and to hide the struggles and gotchas of Qt Quick's dedicated render thread and OpenGL contexts, the meat of the above application gets reduced to something like the following:

class CLRunnable : public QQuickCLImageRunnable
{
public:
    CLRunnable(QQuickCLItem *item)
        : QQuickCLImageRunnable(item)
    {
        m_clProgram = item->buildProgramFromFile(":/kernels.cl");
        m_clKernel = clCreateKernel(m_clProgram, "Emboss", 0);
    }

    ~CLRunnable() {
        clReleaseKernel(m_clKernel);
        clReleaseProgram(m_clProgram);
    }

    void runKernel(cl_mem inImage, cl_mem outImage, const QSize &size) Q_DECL_OVERRIDE {
        clSetKernelArg(m_clKernel, 0, sizeof(cl_mem), &inImage);
        clSetKernelArg(m_clKernel, 1, sizeof(cl_mem), &outImage);
        const size_t workSize[] = { size_t(size.width()), size_t(size.height()) };
        clEnqueueNDRangeKernel(commandQueue(), m_clKernel, 2, 0, workSize, 0, 0, 0, 0);
    }

private:
    cl_program m_clProgram;
    cl_kernel m_clKernel;
};

class CLItem : public QQuickCLItem
{
    Q_OBJECT
    Q_PROPERTY(QQuickItem *source READ source WRITE setSource)

public:
    CLItem() : m_source(0) { }

    QQuickCLRunnable *createCL() Q_DECL_OVERRIDE { return new CLRunnable(this); }
    QQuickItem *source() const { return m_source; }
    void setSource(QQuickItem *source) { m_source = source; update(); }

private:
    QQuickItem *m_source;
};

...
qmlRegisterType<CLItem>("quickcl.qt.io", 1, 0, "CLItem")
...

import quickcl.qt.io 1.0

Item {
    Item {
        id: src
        layer.enabled: true
        ...
    }
    CLItem {
        id: clItem
        source: src
        ...
    }
}

Needless to say, the application works on a wide variety of platforms. Windows, OS X, Android, and Linux are all good as long as OpenGL (ES) 2.0, OpenCL 1.1 and CL-GL interop are available. Getting started with OpenCL in Qt Quick applications won't get simpler than this.

OpenGL texture to arbitrary data via OpenCL

And now something more complex: an image histogram.
Histograms are popular with Qt, and the recent improvements in Qt Multimedia introduce the possibility of efficiently calculating live video frame histograms on the GPU. In this example we take it to the next level: the input is an arbitrary live sub-tree of the Qt Quick scene, while the results of the calculation are visualized with a little Javascript and regular OpenGL-based Qt Quick elements. Those 256 bars on the right are nothing else but standard Rectangle elements. The input image never leaves the GPU, naturally. All this with a few lines of C++ and QML code.

OpenGL vertex buffer generation with OpenCL

Last but not least, something other than GL textures and CL image objects: buffers! The positions of the vertices, which get visualized with GL by drawing points, are written to the vertex buffer using OpenCL. The data is then used from GL as-is; no readbacks and copies are necessary, unlike with Qt Quick's own GL-based particle systems. To make it all more exciting, the drawing happens inside a custom QQuickItem that functions similarly to QQuickFramebufferObject. This allows us to mix our CL-generated drawing with the rest of the scene, including Qt Quick Controls when necessary.

Looking forward: Qt Quick CL

QtQuickCL is a small research and demo framework for Qt 5 that enables easily creating Qt Quick items that execute OpenCL kernels and use OpenGL resources as their input or output. The functionality is intentionally minimal but powerful. All the CL-GL interop, including the selection of the correct CL platform and device, is taken care of by the module. The QQuickCLItem - QQuickCLRunnable split in the API ensures easy and safe CL and GL resource management even when Qt Quick's threaded render loop is in use. Additional convenience is provided for the cases when the input, output or both are OpenGL textures, like for instance the first two of the three examples shown above.
The code, including the three examples shown above, is all available on Gerrit and code.qt.io as a qt-labs repository. The goal is not to provide a full-blown OpenCL framework or wrapper, but rather to serve as a useful example and reference for integrating Qt Quick and OpenCL, and to help getting started with OpenCL development. Happy hacking!
https://www.qt.io/blog/2015/04/20/qt-quick-with-the-power-of-opencl-on-embedded-linux-devices
Here is a simple example:

def get_sidebar(user):
    identifier = 'sidebar_for/user%d' % user.id
    value = cache.get(identifier)
    if value is not None:
        return value
    value = generate_sidebar_for(user=user)
    cache.set(identifier, value, timeout=60 * 5)
    return value

class werkzeug.contrib.cache.BaseCache
Baseclass for the cache systems. All the cache systems implement this API or a superset of it.

add(key, value, timeout=None)
Works like set() but does not overwrite the values of already existing keys.

clear()
Clears the cache. Keep in mind that not all caches support completely clearing the cache.

dec(key, delta=1)
Decrements the value of a key by delta. If the key does not yet exist it is initialized with -delta. For supporting caches this is an atomic operation.

delete(key)
Deletes key from the cache. If it does not exist in the cache nothing happens.

delete_many(*keys)
Deletes multiple keys at once.

get(key)
Looks up key in the cache and returns the value for it. If the key does not exist None is returned instead.

get_dict(*keys)
Works like get_many() but returns a dict:

d = cache.get_dict("foo", "bar")
foo = d["foo"]
bar = d["bar"]

get_many(*keys)
Returns a list of values for the given keys. For each key an item in the list is created. Example:

foo, bar = cache.get_many("foo", "bar")

If a key can't be looked up, None is returned for that key instead.

inc(key, delta=1)
Increments the value of a key by delta. If the key does not yet exist it is initialized with delta. For supporting caches this is an atomic operation.

set(key, value, timeout=None)
Adds a new key/value to the cache (overwrites the value if the key already exists in the cache).

set_many(mapping, timeout=None)
Sets multiple keys and values from a mapping.

class werkzeug.contrib.cache.NullCache
A cache that doesn't cache. This can be useful for unit testing.

class werkzeug.contrib.cache.SimpleCache
Simple memory cache for single process environments. This class exists mainly for the development server and is not 100% thread safe. It tries to use as many atomic operations as possible and no locks for simplicity, but it could happen under heavy load that keys are added multiple times.

class werkzeug.contrib.cache.MemcachedCache
A cache that uses memcached as backend. The first argument can either be an object that resembles the API of a memcache.Client or a tuple/list of server addresses.
In the event that a tuple/list is passed, Werkzeug tries to import the best available memcache library.

class werkzeug.contrib.cache.GAEMemcachedCache
This class is deprecated in favour of MemcachedCache, which now supports Google Appengine as well.
Changed in version 0.8: Deprecated in favour of MemcachedCache.

class werkzeug.contrib.cache.FileSystemCache
A cache that stores the items on the file system. This cache depends on being the only user of the cache_dir. Make absolutely sure that nobody but this cache stores files there or otherwise the cache will randomly delete files therein.
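To make the semantics listed above concrete, here is a toy, dictionary-backed sketch of the same interface. It ignores timeouts entirely, and ToyCache is an illustration of the documented behavior, not Werkzeug's actual code:

```python
class ToyCache:
    """Dictionary-backed sketch of the BaseCache interface (no timeouts)."""

    def __init__(self):
        self._store = {}

    def get(self, key):
        # Returns None when the key is missing, like get() above
        return self._store.get(key)

    def set(self, key, value, timeout=None):
        self._store[key] = value  # overwrites existing keys

    def add(self, key, value, timeout=None):
        # Unlike set(), never overwrites an existing key
        self._store.setdefault(key, value)

    def delete(self, key):
        self._store.pop(key, None)  # missing keys are ignored

    def get_many(self, *keys):
        return [self._store.get(k) for k in keys]

    def get_dict(self, *keys):
        return dict(zip(keys, self.get_many(*keys)))

    def inc(self, key, delta=1):
        # Initialized with delta when the key does not yet exist
        self._store[key] = self._store.get(key, 0) + delta

    def dec(self, key, delta=1):
        # Initialized with -delta when the key does not yet exist
        self._store[key] = self._store.get(key, 0) - delta

cache = ToyCache()
cache.set("foo", "bar")
cache.add("foo", "ignored")   # add() does not overwrite
cache.inc("hits")             # initialized to 1
print(cache.get("foo"), cache.get_dict("foo", "hits"))
# → bar {'foo': 'bar', 'hits': 1}
```

The real backends differ only in where the data lives (memory, memcached, files); the calling code stays identical, which is the point of the shared BaseCache API.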
http://werkzeug.pocoo.org/docs/0.9/contrib/cache/
Layout Failure for Collapsed Fieldsets

Hi,

We have upgraded to 4.1 RC2 and are testing our current website. We have found a minor issue with window display in a border layout where the window width and height configs are not explicitly set (if they are set, the issue doesn't occur). The window is displayed from a button click event and is rendered (renderTo) to the centre panel. At this stage the issue only occurs when there is a nested fieldset within the window being displayed. When the window displays, the title bar and title bar controls are missing. They only appear after a resize of the window or after expanding the collapsed fieldset within the window.

Would you like a bug report? (I appreciate you're very busy)

windowDisplayIssue.jpg

RC2 has not yet been released. I assume you're referring to a nightly build? If so, it'd be very helpful if you could include the date-stamp of the build when you're reporting problems. If you're testing with nightly builds then you should expect some problems; those builds have not yet cleared the QA process and could still contain serious bugs. That said, if the problem persists for a few days then it's a good idea to file a bug report. If you post some code here to reproduce your issue then it should be pretty easy to confirm whether you've found a bug or whether there's a problem with your code. If it's a bug I'll move this thread to the bugs forum to save you having to report it again.

My first attempt at a debug post so apologies if it's messy or not correct.

REQUIRED INFORMATION

Ext version tested:
- Ext 4.1.0 RC2 nightly build 23 March 2012

Browser versions tested against:
- Chrome 18.0.1025.137 beta-m
- IE9
- FF 10.0.2

Description:
- Display a window in a container fails to render correctly. Window title bar and controls missing. Re-sizing the window causes the bar to appear. Issue only occurs if the window width and height config are not set. Appears to be related to the window control contents where a field set is nested within a field set. See code sample

Steps to reproduce the problem:
- Use code reproducer below. Click button to add the window

The result that was expected:
- Window should display centered in the container and auto sized

The result that occurs instead:
- Window displays top left with title bar missing and not sized

Code:
See code sample - Use code reproducer below. Click button to add the window - Window should display centered in the container and auto sized - Window displays top left with title bar missing and not sized Code: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" ""> <html><head><meta http- <link rel="stylesheet" type="text/css" href="extjs/resources/css/ext-all.css"/> <script type="text/javascript" src="extjs/ext-all-debug.js" charset="UTF-8"></script> <script language="javascript" type="text/javascript" charset="UTF-8"> Ext.namespace("Greentree.Utilities"); Greentree.Utilities.initialLoad = function(){var viewport = new Ext.Viewport({ border: false, layout: "border", items: [{ xtype: "container", id: "centerContainer", layout: "anchor", region: "center"}, { xtype: "panel", region: "east", title: "EAST", width: 300, items: [{ xtype: "button", margin: "10 0 0 10", text: "Display Window", listeners: { click: function() { Ext.create('Ext.window.Window', { autoScroll : true, collapsible : true, resizable : true, constrain : true, layout : "anchor", renderTo : "centerContainer", title : "Test Window", xtype : "window", items : [{ xtype : "fieldset", margin : 10, padding : 10, items : [{ xtype : "textfield", anchor : "30%", fieldLabel : "Code" }, { xtype : "textfield", fieldLabel : "Name", minWidth : 40 } , { xtype : "fieldset", collapsed : true, collapsible : true, margin : 10, padding : 5, title : "Details", items : [{ border : false, layout : "anchor", xtype : "container", items : [{ xtype : "textfield", anchor : "70%", fieldLabel : "", hideEmptyLabel : false, margin : 10 }] }] }] }] }).show();} } }] } ]}); viewport.show(); }; Ext.onReady(Greentree.Utilities.initialLoad); </script> </head><body></body></html> HELPFUL INFORMATION Debugging already done: - none - not provided - only default ext-all.css - Win 7 Last edited by suzuki1100nz; 26 Mar 2012 at 4:19 PM. Reason: Put test case code in A couple of forum tips. 
The forum allows you to edit your post after you've posted it. I've gone through and de-mangled your post for you, if you have problems like this again please edit it yourself so that others can read it easily. I highly recommend clicking the A/A button in the top-left corner of the editor so that you can see your posts as markup rather than the half-baked attempt at a WYSIWYG editor that the forum shows you by default. Please don't attach ZIPs to threads. Post the relevant code inline in the thread, making sure it is nicely formatted and wrapped in [CODE] tags. If you attach a ZIP then your thread will almost always be ignored. Ok thanks for the tips. When can you legitimately use the file attach feature? or is it simply a no no for bug reporting There's no official rules saying you can't use attachments, just in practice you'll find that threads with file attachments get ignored. In the specific case of bug reports, you're ideally looking to provide a complete, minimal test case. I've never yet come across a test case that was genuinely minimal and yet was complicated enough to justify attaching it as a ZIP. If you post your code for this window issue to the thread I'll take a look at it for you, see whether it's a bug. ok. Test case provided inline zip file removed. Thanks I'm going to move this thread to the bugs forum. I'm also going to change the title so that it doesn't mention RC2 any more, as that could cause confusion. I've reduced your test case down a little: Code: Ext.widget({ autoShow: true, title: 'Test Window', xtype: 'window', items: [{ xtype: 'fieldset', //collapsed: true, collapsible: true, title: 'Details', items: [{ height: 100, width: 100 }] }] }); - Collapsing the fieldset causes a layout failure. - Specifying layouts (e.g. fit) on the window/fieldset doesn't help. - Switching the fieldset to a panel works fine. Success! Looks like we've fixed this one. According to our records the fix was applied for EXTJSIV-5724 in a recent build.
http://www.sencha.com/forum/showthread.php?190840-Layout-Failure-for-Collapsed-Fieldsets&s=5b98832292170761dc00ba799a56e696&p=765567
This was already called out in the errata, but I think it's a major omission so I wanted to mention it here too, in case anyone gets stuck on this step. In the paragraph above Listing 20.16 it instructs us:

Create a new Swift file called Photo and declare the Photo class with properties for the photoID, the title, and the remoteURL. Finally, add a designated initializer that sets up the instance.

However, the code in Listing 20.16 has a fourth property for dateTaken which the text above does not mention:

import Foundation

class Photo {
    let title: String
    let remoteURL: URL
    let photoID: String
    let dateTaken: Date
}

And Listing 20.16 does not include the designated initializer which the instructions above tell us to create. The provided solutions files for this chapter have also omitted the initializer. Here is the code I wrote for this step in the instructions, which includes an initializer:

import Foundation

class Photo {
    let title: String
    let remoteURL: URL
    let photoID: String
    let dateTaken: Date

    init(title: String, remoteURL: URL, photoID: String, dateTaken: Date) {
        self.title = title
        self.remoteURL = remoteURL
        self.photoID = photoID
        self.dateTaken = dateTaken
    }
}

You need an initializer at this point in the project to avoid a compiler error. Later in this chapter, in Listing 20.17, you will make this class conform to the Codable type alias, which apparently satisfies this compiler error. Still, it may be confusing to some people that the initializer we were instructed to create here is missing from the code in the book.
https://forums.bignerdranch.com/t/missing-initializer/18989
NAME
__tfork_thread, __tfork - create a new kernel thread

SYNOPSIS
#include <unistd.h>

struct __tfork {
    void *tf_tcb;     /* TCB address for new thread */
    pid_t *tf_tid;    /* where to write child's thread ID */
    void *tf_stack;   /* stack address for new thread */
};

pid_t
__tfork_thread(const struct __tfork *params, size_t psize, void (*startfunc)(void *), void *startarg);

pid_t
__tfork(const struct __tfork *params, size_t psize);

DESCRIPTION
The __tfork_thread() function creates a new kernel thread in the current process. The new thread starts by calling startfunc, passing startarg as the only argument. If startfunc returns, the thread will exit.

The params argument provides parameters used by the kernel during thread creation. The new thread's thread control block (TCB) address is set to tf_tcb. If tf_tid is not NULL, the new thread's thread ID is returned to the user at that address, with the guarantee that this is done before returning to userspace in either the calling thread or the new thread. If tf_stack is not NULL, the new thread's stack is initialized to start at that address. On hppa that is the lowest address used; on other architectures that is the address after the highest address used.

The psize argument provides the size of the struct __tfork passed via the params argument.

The underlying system call used to create the thread is __tfork(). Because the new thread returns without a stack frame, the syscall cannot be directly used from C and is therefore not provided as a function. However, the syscall may show up in the output of kdump(1).

RETURN VALUES
__tfork_thread() returns in the calling thread the thread ID of the new thread. The __tfork() syscall itself, on success, returns a value of 0 in the new thread and returns the thread ID of the new thread to the calling thread. Otherwise, a value of -1 is returned, no thread is created, and the global variable errno is set to indicate an error.
ERRORS
__tfork_thread() and __tfork() will fail and no thread will be created if:

[ENOMEM]

[EINVAL]

[EAGAIN]

[EAGAIN]
The limit MAXUPRC on the total number of threads under execution by a single user would be exceeded. MAXUPRC is currently defined in <sys/param.h> as CHILD_MAX, which is currently defined as 80 in <sys/syslimits.h>.

STANDARDS
The __tfork_thread() function and __tfork() syscall are specific to OpenBSD and should not be used in portable applications.

HISTORY
The __tfork_thread() function and __tfork() syscall appeared in OpenBSD 5.1.
https://man.openbsd.org/__tfork.3
ImportError: cannot import name find_spec when starting a Django project

I'm learning Python in tandem with Django. I initially installed Python 3 on my machine, but read about possible conflicts and removed it with some difficulty. Now I'm using virtualenv, installed Python 3 within the env, and installed Django using pip. Django and Python seem to have installed correctly:

# python -c "import django; print(django.get_version())"
1.9.1
# python -V
Python 3.2.3

but when I try to start a new Django project, I get the following:

# django-admin.py startproject mysite
Traceback (most recent call last):
  File "/home/niroj/Projects/webenv/bin/django-admin.py", line 2, in <module>
    from django.core import management
  File "/home/niroj/Projects/webenv/lib/python3.2/site-packages/django/core/management/__init__.py", line 10, in <module>
    from django.apps import apps
  File "/home/niroj/Projects/webenv/lib/python3.2/site-packages/django/apps/__init__.py", line 1, in <module>
    from .config import AppConfig
  File "/home/niroj/Projects/webenv/lib/python3.2/site-packages/django/apps/config.py", line 6, in <module>
    from django.utils.module_loading import module_has_submodule
  File "/home/niroj/Projects/webenv/lib/python3.2/site-packages/django/utils/module_loading.py", line 67, in <module>
    from importlib.util import find_spec as importlib_find
ImportError: cannot import name find_spec

What should I do?

Hello @kartik,

find_spec isn't available in Python 3.2.3; it was added to importlib.util in Python 3.4, and Django 1.9 supports Python 2.7, 3.4, and 3.5. Try upgrading to Python 3.4 or later.

Hope it works!!

Thank you!!
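The diagnosis is easy to verify on a working interpreter. The sketch below (standard library only; module names chosen for illustration) performs the same lookup that Django's `module_loading.py` attempts — on Python 3.4+ it succeeds, while on 3.2/3.3 the `from importlib.util import find_spec` line itself raises the ImportError shown in the traceback:

```python
from importlib import util

# find_spec looks up a module's import spec without actually importing it.
# It was added to importlib.util in Python 3.4.
spec = util.find_spec("json")
print(spec.name)                 # json

# A nonexistent top-level module yields None rather than an exception.
missing = util.find_spec("no_such_module_xyz")
print(missing)                   # None
```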
https://www.edureka.co/community/80054/importerror-cannot-import-name-spec-when-start-django-project
How I Built a Custom Framework and App with Rack: Part 1

I was introduced to Rack while working with Sinatra and was only vaguely knowledgeable about what it meant for Sinatra to be "built on top of Rack," not to mention how to work with Rack itself. While learning how to work with Sinatra, I continually encountered scenarios that made me stop and ask myself: "Well, how does Sinatra work 'under the hood,' anyway?" Whenever I tried to take a peek behind the curtain, I would come upon another world entirely: the world of Rack. It was evident to me that if I wanted to understand Sinatra, I ought to understand Rack, too!

The Rack Specification and the Problem it Solves

Rack is, at its heart, a specification. It puts forth rules that both Ruby web servers (aka "application servers") and Ruby applications should follow to send and receive information between each other. In the same way that HTTP introduces one protocol, thereby allowing a myriad of machines and software to communicate, Rack is a protocol for web apps and web servers. It acts as a standard interface between web servers and Ruby applications. Therefore, the Rack specification solves the many-frameworks/many-web-servers problem. First of all, rather than expecting web-server developers to take sole responsibility for developing framework-specific APIs (or vice versa), Rack splits the responsibility between developers on both ends. Everyone just follows the rules of Rack, sharing the task of maintaining a single interface.

Getting More Technical About the Rack Specification

As I mentioned above, the Rack specification solves the "many frameworks/many web servers" problem by detailing a generalized interface that applications and Ruby web servers can use to communicate. A Ruby web server needs to send information to the application in a particular format: one that the application is written to handle.
Similarly, the application's logic must return information to the ruby web server in a form that the server's code is written to handle. Otherwise, errors will result. Whatever a ruby web server does after it receives an HTTP request doesn't matter regarding Rack compliance; it's the last step the server takes (starting the ruby application itself and passing in the right values) that makes it Rack-compliant. As long as it completes this final step, it will work. A "Rack-compliant" ruby web server is simply a Ruby web server that parses the client's HTTP request into a hash with specified key-value pairs (as described in the Rack specification) and then calls the Rack application via example_app.call(env), where env is the hash containing the parsed HTTP-request information. If this discussion of web servers, ruby web servers, and applications has you a bit confused, consider reading my article on the differences between them and how they interact.

From the above description of ruby web servers, it is clear that the ruby web server will be loading an instance of the application and calling it via app.call(env). Therefore, it expects the application to respond to app.call, because that is what the Rack specification requires of Rack applications. The ruby web server also expects that the call method will take the env hash as an argument. There are a few more requirements of Rack applications. First, the application code not only has to respond to call(env), but the call method must return an array with three elements. The first element is an HTTP status code, the second is a set of key-value pairs which will be used to create the HTTP headers, and the third is an array of Strings:

['200', { 'Content-Type' => 'text/html' }, ['Hello World!']]

See this great blog post for a more in-depth discussion of Rack-based Ruby web servers. With the preliminary information about Rack out of the way, it's time to dive into Rack-app development.
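To make the triplet requirement concrete, here is a minimal sketch of a Rack-compliant application. The class name and the hand-built env hash are my own inventions for illustration; in a real deployment the server constructs env from the incoming HTTP request:

```ruby
# A minimal Rack-compliant application: any object that responds to
# #call(env) and returns [status, headers, array-of-body-strings].
class HelloApp
  def call(env)
    body = "Hello from #{env['REQUEST_PATH']}!"
    ['200', { 'Content-Type' => 'text/html' }, [body]]
  end
end

# Fake the env hash a Rack server would normally build for us.
env = { 'REQUEST_METHOD' => 'GET', 'REQUEST_PATH' => '/hello' }
status, headers, body = HelloApp.new.call(env)
puts status        # 200
puts body.first    # Hello from /hello!
```

Any server that knows the Rack convention can drive this object, which is exactly the interoperability the specification is after.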
What should the app do, and how should it do it? What features should the app have, and how should those features all fit together into one coherent experience for the user? These are questions to ask oneself at the beginning of development. I wanted to build an application that could:

• Serve custom content based on the client's requested route
• Redirect clients to a different route at the developer's discretion
• Output dynamic content
• Render templates and layouts
• Maintain state between each request/response cycle

I, therefore, needed the following:

• An easy way to render templates with a templating engine (I chose ERB due to familiarity)
• A way to store routes with associated executable code
• A way to integrate sessions
• A way to add redirects to stored routes

These are all non-specific features that any framework would need to offer for web application development. It made sense, then, to group these into a single class that I would later use as a framework. Whichever application I decided to build would then simply inherit from this framework.

My General Workflow

After developing a high-level list of features from the perspective of the hypothetical user, I zoomed in on one feature at a time, breaking it down into a sequence of steps. Each step required its own logic and implementation. So, I worked on four levels of abstraction:

Highest-level: The list of features for the application, described from the user perspective, void of implementation details. E.g., routing.

Mid-level: The sequence of steps that detail the subprocesses that comprise a feature from start to finish (represented by a flow chart). E.g., the routing steps: store routes -> get client's request -> match client's request to a stored route -> execute code associated with the stored route.

Low-level: The logic each of the steps developed above should take (represented by pseudocode). E.g., writing the pseudocode for a method that stores routes in a certain data structure.
Really-low-level: Implementation of the logic for each of those steps. E.g., implementing the pseudocode developed in the preceding level in the code itself.

It is better to test your pseudocode as you develop it by combining the "low-level" and "really-low-level" steps into one alternating step: develop some logic for the subfunction, test it, develop more logic, test it, and so on. That is the general outline of the workflow I followed.

Designing the Routing Functionality

I had my list of features; now I just had to pick one to implement first. The first feature I worked on was routing. Routing in an application is a combination of many sub-features. I chose to represent the sequence of steps that routing should take as a flow chart. Since the application was going to be subclassing from the framework's class, I thought it appropriate to store all of the routes, and their associated executable code, in an instance variable that the child class could access. The instance variable and the data it referenced (i.e., the routes) would persist as long as the application object persisted. So, to integrate routing functionality into the framework, the framework needed:

• An instance variable that stored route information, coupled with executable code
• A mechanism that added custom routes to the route storage
• A mechanism that read the route requested by the client and matched it with a route stored in the routes instance variable
• A mechanism that executed the code associated with the stored route information if, and only if, the requested route matched the stored route

The above requirements contain further implicit requirements:

• The routes need to be added before the client's request is read and compared to the route information stored in the instance variable
• The route-relevant information has to be extracted from the client's request
A Few Crucial Points About Web Applications

There are some implicit constraints in web-application development (without a database) by nature of the statelessness of HTTP. First, when the ruby web server starts up, it initializes an instance of the Ruby application and keeps that instance in its process. Then, every time the Ruby web server receives a request, it will invoke the call method on that instance. However, a lot of information is still lost: once call executes and returns its response, the information in the env hash passed to that object is lost. Therefore, each invocation of the call method is passed a completely new env hash. This exemplifies the statelessness of HTTP. To maintain state, the application will have to store information about the client with the client in the form of a session, and then extract the session information out of the subsequent request sent from the client to the server.

Second, this means that the application code executes every single time a request is "sent" from the ruby web server to the application. The application will, therefore, have to set each route over again every time a new request is received.*

*Edit on December 4, 2017: The routes would not have needed to be set each time a request was received from the client if the routes were set up within an initialize method rather than within the call method. In fact, logic that only needs to execute once per process should not be placed within the call method of a Rack application.

Implementing the Routing Functionality

I decided to initialize an empty hash to an instance variable @routes every time the ruby web server spawned a new instance of the application (which, again, would happen in response to every HTTP request).
class Framework
  def initialize
    @routes = {}
  end
end

The goal was to build an easy-to-use interface to add routes to the @routes hash:

add_route("get", "/hello") do
  "Hello World"
end

From this example, I knew that I needed an add_route method that took an HTTP method, a request path, and a block as arguments. I also needed the block's code to be stored with the route, and only be executed if the client's request matched the stored route. The easiest way to accomplish this was to store the block in the @routes hash as a Proc object. The @routes hash would not only store the HTTP request methods and request paths: it would store the code to execute when a client's request matched the request method (e.g. 'GET') and the request path (e.g. '/hello'). Here is my add_route method:

def add_route(http_method, path, redirects = {}, &block)
  response = Rack::Response.new
  response.body = block

  if redirects[:location]
    response.headers["Location"] = redirects[:location]
    response.status = '302'
  end

  @routes[[http_method, path]] = [response.status, response.headers, [response.body]]
end

Remember that a Rack-compliant application responds to call(env) and that the env hash contains a parsed HTTP request with lots of handy key-value pairs, like env['REQUEST_PATH']. Further, the application's call method must return an array with three elements: an HTTP status, response headers, and an HTTP response body (which needs to be an array of strings). Keep that in mind as we go through the explanation of the add_route method...

As you can see, the add_route method takes four arguments: an HTTP method, a request path, a hash called redirects, and a block. The redirects hash is optional and is initialized to an empty hash if nothing is passed in. The first line of the add_route method simply wraps the response array in a convenience object (a Rack::Response). Now, all I need to do to add headers, a status, or a body to the response array is to type response.body = something. It makes things much easier.
And that is exactly what happens in the next line: response.body is set equal to a Proc object.

Response array:

['200', { 'Content-Type' => 'text/html' }, ["This is HTTP body"]]

Response object:

response.body = ["This is HTTP body"]
response.status = '200'
response.headers['Content-Type'] = 'text/html'

After that, the method checks whether redirects[:location] contains a value. If it does, then 'Location' => redirects[:location] is inserted into the response headers, which will tell the client which path to automatically send a new request to when the response is received. However, on its own, the Location header will not make the browser redirect. For the redirect to work, the application's response should contain a redirect status code in conjunction with the Location header. Therefore, the method sets response.status = '302'.

The last part of the add_route method inserts the route information into the @routes hash. As you can see, keys in the @routes hash are arrays containing an HTTP method (e.g. 'get' or 'post') and a request path (e.g. '/home'). The values in the @routes hash are arrays with three elements, where the first element is a status, the second is a hash with headers, and the third is an array containing a Proc object (more on Procs later).

Rationale of the #add_route Method

Since this method stores triplet arrays as values in the @routes hash, it is reasonable to assume that these values are what the Rack app will return to the ruby web server and, ultimately, the client. There is a caveat here: the third element in the triplet array is an array that contains a Proc object, according to the add_route method. The Rack specification requires that applications return a triplet array whose third element is an array that contains String objects. But I can't just store String objects in place of the Proc objects, because then I won't be able to store executable code: I will merely be storing Strings.
Further, I can't just store the return value of the_proc_object.call, even though the return value will be a String object, because the executable code would then run at route-store time rather than response time.

A Few Notes on Procs

Proc objects are useful because they store executable code. To execute the code stored in a Proc object, you simply type name_of_proc_object.call. The Proc object will then return whatever the last line of code in the Proc returns. All blocks passed into our add_route method ought to return a String object on their last line, but the application ought not to execute the code in a Proc unless the client's request matches the key in @routes associated with the response array that contains that Proc. The reason for this is clearer with an example. Imagine if the app had a route for creating an account and a route for deleting a user's account:

add_route('post', '/account/create') do
  # create account
  # return string
end

add_route('post', '/account/delete') do
  # delete user's account
  # return string
end

The example blocks above contain code that should obviously not be executed at the start of the application, when the add_route method calls execute. Rather, the executable code passed to the method via the do..end blocks needs to be stored and only executed if the user sends a POST request to '/account/create' or '/account/delete'.

With adding routes out of the way, the next step in implementing the routing functionality is to code the logic required for accessing the client's requested route and then comparing it to the routes stored in the hash referenced by the @routes instance variable. After that, there are plenty more features to implement.

This article covered a lot of material, but much of it was preliminary and related to Rack-app development in general rather than the construction of a particular framework or application. The next article in this series will be more coding-oriented than concept-oriented.
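The store-now/run-later behavior of Procs is easy to see in isolation. A tiny standalone demo (variable names are my own):

```ruby
# A Proc stores code without running it; #call executes it later.
side_effects = []

deferred = Proc.new do
  side_effects << :ran
  'I only run when called'
end

puts side_effects.inspect   # [] -- defining the Proc executed nothing
result = deferred.call
puts result                 # I only run when called
puts side_effects.inspect   # [:ran]
```

This is precisely why storing the Proc, rather than the Proc's return value, keeps the "delete account" code from firing while routes are being registered.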
It will detail a large part of the framework implementation, as well as the reasoning process behind it.
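Putting the pieces described above together, here is one possible sketch of the whole flow: routes stored as Procs and a lookup on the [method, path] pair inside call(env). This is a simplified stand-in of my own (it skips Rack::Response and redirect handling), not the final implementation covered in Part 2:

```ruby
# Sketch: store handlers as Procs, dispatch on [method, path] in #call(env).
class MiniFramework
  def initialize
    @routes = {}
  end

  def add_route(http_method, path, &block)
    @routes[[http_method, path]] = block   # stored, not executed
  end

  def call(env)
    key = [env['REQUEST_METHOD'].downcase, env['REQUEST_PATH']]
    handler = @routes[key]
    if handler
      ['200', { 'Content-Type' => 'text/html' }, [handler.call]]
    else
      ['404', { 'Content-Type' => 'text/html' }, ['Not Found']]
    end
  end
end

app = MiniFramework.new
app.add_route('get', '/hello') { 'Hello World' }

status, _headers, body = app.call(
  'REQUEST_METHOD' => 'GET', 'REQUEST_PATH' => '/hello'
)
puts "#{status} #{body.first}"   # 200 Hello World
```

The stored block only runs on a match, and an unmatched request falls through to a 404 triplet, so the return value always satisfies the Rack contract.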
https://medium.com/launch-school/how-i-built-a-custom-framework-and-app-with-rack-part-1-d6a790250c6e
The author selected Creative Commons to receive a donation as part of the Write for DOnations program. React is a popular JavaScript framework for creating front-end applications. Originally created by Facebook, it has gained popularity by allowing developers to create fast applications using an intuitive programming paradigm that ties JavaScript with an HTML-like syntax known as JSX. Starting a new React project used to be a complicated multi-step process that involved setting up a build system, a code transpiler to convert modern syntax to code that is readable by all browsers, and a base directory structure. But now, Create React App includes all the JavaScript packages you need to run a React project, including code transpiling, basic linting, testing, and build systems. It also includes a server with hot reloading that will refresh your page as you make code changes. Finally, it will create a structure for your directories and components so you can jump in and start coding in just a few minutes. In other words, you don’t have to worry about configuring a build system like Webpack. You don’t need to set up Babel to transpile you code to be cross-browser usable. You don’t have to worry about most of the complicated systems of modern front-end development. You can start writing React code with minimal preparation. By the end of this tutorial, you’ll have a running React application that you can use as a foundation for any future applications. You’ll make your first changes to React code, update styles, and run a build to create a fully minified version of your application. You’ll also use a server with hot reloading to give you instant feedback and will explore the parts of a React project in depth. Finally, you will begin writing custom components and creating a structure that can grow and adapt with your project. To follow this tutorial, you’ll need the following: Node.js version 10.16.0 installed on your computer. 
To install this on macOS or Ubuntu 18.04, follow the steps in How to Install Node.js and Create a Local Development Environment on macOS or the Installing Using a PPA section of How To Install Node.js on Ubuntu 18.04. It will also help to have a basic understanding of JavaScript, which you can find in the How To Code in JavaScript series, along with a basic knowledge of HTML and CSS. In this step, you'll create a new application using the npm package manager to run a remote script. The script will copy the necessary files into a new directory and install all dependencies. When you installed Node, you also installed a package managing application called npm. npm will install JavaScript packages in your project and also keep track of details about the project. If you'd like to learn more about npm, take a look at our How To Use Node.js Modules with npm and package.json tutorial. npm also includes a tool called npx, which will run executable packages. What that means is you will run the Create React App code without first downloading the project. The executable package will run the installation of create-react-app into the directory that you specify. It will start by making a new project in a directory, which in this tutorial will be called digital-ocean-tutorial. Again, this directory does not need to exist beforehand; the executable package will create it for you. The script will also run npm install inside the project directory, which will download any additional dependencies. To install the base project, run the following command:

- npx create-react-app digital-ocean-tutorial

This command will kick off a build process that will download the base code along with a number of dependencies. When the script finishes you will see a success message that says:

Output
...
Success! Created digital-ocean-tutorial at your_file_path/digital-ocean-tutorial

  cd digital-ocean-tutorial
  npm start

Happy hacking!

your_file_path will be your current path.
If you are a macOS user, it will be something like /Users/your_username; if you are on an Ubuntu server, it will say something like /home/your_username. You will also see a list of npm commands that will allow you to run, build, start, and test your application. You'll explore these more in the next section. Note: There is another package manager for JavaScript called yarn. It's supported by Facebook and does many of the same things as npm. Originally, yarn provided new functionality such as lock files, but now these are implemented in npm as well. yarn also includes a few other features such as offline caching. Further differences can be found on the yarn documentation. If you have previously installed yarn on your system, you will see a list of yarn commands such as yarn start that work the same as npm commands. You can run npm commands even if you have yarn installed. If you prefer yarn, just replace npm with yarn in any future commands. The results will be the same. Now your project is set up in a new directory. Change into the new directory:

- cd digital-ocean-tutorial

You are now inside the root of your project. At this point, you've created a new project and added all of the dependencies. But you haven't taken any actions to run the project. In the next section, you'll run custom scripts to build and test the project.

Using react-scripts

In this step, you will learn about the different react-scripts that are installed with the repo. You will first run the build script to create a minified version. Then you will run the test script to execute the test code. Finally, you'll look at how the eject script can give you complete control over customization. Now that you are inside the project directory, take a look around. You can either open the whole directory in your text editor, or if you are on the terminal you can list the files out with the following command:

- ls -a

The -a flag ensures that the output also includes hidden files.
Either way, you will see a structure like this:

Output
node_modules/  public/  src/  .gitignore  README.md  package-lock.json  package.json

Let's explain these one by one: node_modules/ contains all of the external JavaScript libraries used by the application. You will rarely need to open it. The public/ directory contains some base HTML, JSON, and image files. These are the roots of your project. You'll have an opportunity to explore them more in Step 4. The src/ directory contains the React JavaScript code for your project. Most of the work you do will be in that directory. You'll explore this directory in detail in Step 5. The .gitignore file contains some default directories and files that git—your source control—will ignore, such as the node_modules directory. The ignored items tend to be larger directories or log files that you would not need in source control. It also will include some directories that you'll create with some of the React scripts. README.md is a markdown file that contains a lot of useful information about Create React App, such as a summary of commands and links to advanced configuration. For now, it's best to leave the README.md file as you see it. As your project progresses, you will replace the default information with more detailed information about your project. The last two files are used by your package manager. When you ran the initial npx command, you created the base project, but you also installed the additional dependencies. When you installed the dependencies, you created a package-lock.json file. This file is used by npm to ensure that the packages match exact versions. This way if someone else installs your project, you can ensure they have identical dependencies. Since this file is created automatically, you will rarely edit this file directly. The last file is a package.json. This contains metadata about your project, such as the title, version number, and dependencies.
It also contains scripts that you can use to run your project. Open the package.json file in your favorite text editor:

- nano package.json

When you open the file, you will see a JSON object containing all the metadata. If you look at the scripts object, you'll find four different scripts: start, build, test, and eject. These scripts are listed in order of importance. The first script starts the local development environment; you'll get to that in the next step. The second script will build your project. You'll explore this in detail in Step 4, but it's worth running now to see what happens.

The build Script

To run any npm script, you just need to type npm run script_name in your terminal. There are a few special scripts where you can omit the run part of the command, but it's always fine to run the full command. To run the build script, type the following in your terminal:

- npm run build

You will immediately see the following message:

Output
> digital-ocean-tutorial@0.1.0 build your_file_path/digital-ocean-tutorial
> react-scripts build

Creating an optimized production build...
...

This tells you that Create React App is compiling your code into a usable bundle. When it's finished, you'll see the following output:

Output
...
Compiled successfully.

File sizes after gzip:

  39.85 KB  build/static/js/9999.chunk.js
  780 B     build/static/js/runtime-main.99999.js
  616 B     build/static/js/main.9999.chunk.js
  556 B     build/static/css/main.9999.chunk.css

You may serve it with a static server:

  serve -s build

Find out more about deployment here: bit.ly/CRA-deploy

List out the project contents and you will see some new directories:

- ls -a

Output
build/  node_modules/  public/  src/  .gitignore  README.md  package-lock.json  package.json

You now have a build directory. If you opened the .gitignore file, you may have noticed that the build directory is ignored by git. That's because the build directory is just a minified and optimized version of the other files. There's no need to use version control since you can always run the build command.
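For reference, the scripts you have been running are plain entries in package.json's scripts object. In a freshly generated Create React App project it looks roughly like this (the exact react-scripts version pinned in dependencies varies by release):

```json
{
  "scripts": {
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test",
    "eject": "react-scripts eject"
  }
}
```

Each npm script simply delegates to the react-scripts package, which is why ejecting (copying react-scripts' internal configuration into your project) is what it takes to customize the toolchain.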
You'll explore the output more later; for now, it's time to move on to the test script.

The test Script

The test script is one of those special scripts that doesn't require the run keyword, but works even if you include it. This script will start up a test runner called Jest. The test runner looks through your project for any files with a .spec.js or .test.js extension, then runs those files. To run the test script, type the following command:

- npm test

After running this script your terminal will have the output of the test suite and the terminal prompt will disappear. It will look something like this:

Output
PASS src/App.test.js
  ✓ renders learn react link (67ms)

Test Suites: 1 passed, 1 total
Tests:       1 passed, 1 total
Snapshots:   0 total
Time:        4.204s

There are a few things to notice here. First, as noted before, it automatically detects any files with test extensions including .test.js and .spec.js. In this case, there is only one test suite—that is, only one file with a .test.js extension—and that test suite contains only one test. Jest can detect tests in your code hierarchy, so you can nest tests in a directory and Jest will find them. Second, Jest doesn't run your test suite once and then exit. Rather, it continues running in the terminal. If you make any changes in the source code, it will rerun the tests again. You can also limit which tests you run by using one of the keyboard options. If you type o, for example, you will only run the tests on files that have changed. This can save you lots of time as your test suites grow. Finally, you can exit the test runner by typing q. Do this now to regain your command prompt.

The eject Script

The final script is npm eject. This script copies your dependencies and configuration files into your project, giving you full control over your code but ejecting the project from the Create React App integrated toolchain.
You will not run this now because, once you run this script, you can’t undo this action and you will lose any future Create React App updates. The value in Create React App is that you don’t have to worry about a significant amount of configuration. Building modern JavaScript applications requires a lot of tooling from build systems, such as Webpack, to compilation tools, such as Babel. Create React App handles all the configuration for you, so ejecting means dealing with this complexity yourself. The downside of Create React App is that you won’t be able to fully customize the project. For most projects that’s not a problem, but if you ever want to take control of all aspects of the build process, you’ll need to eject the code. However, as mentioned before, once you eject the code you will not be able to update to new versions of Create React App, and you’ll have to manually add any enhancements on your own. At this point, you’ve executed scripts to build and test your code. In the next step, you’ll start the project on a live server. In this step, you will initialize a local server and run the project in your browser. You start your project with another npm script. Like npm test, this script does not need the run command. When you run the script you will start a local server, execute the project code, start a watcher that listens for code changes, and open the project in a web browser. Start the project by typing the following command in the root of your project. For this tutorial, the root of your project is the digital-ocean-tutorial directory. Be sure to open this in a separate terminal or tab, because this script will continue running as long as you allow it: - npm start You’ll see some placeholder text for a brief moment before the server starts up, giving this output: OutputCompiled successfully! You can now view digital-ocean-tutorial in the browser. Note that the development build is not optimized. To create a production build, use npm run build. 
If you are running the script locally, it will open the project in your browser window and shift the focus from the terminal to the browser. If that doesn’t happen, you can visit to see the site in action. If you already happen to have another server running on port 3000, that’s fine. Create React App will detect the next available port and run the server with that. In other words, if you already have one project running on port 3000, this new project will start on port 3001. If you are running this from a remote server you can still see your site without any additional configuration. The address will be. If you have a firewall configured, you’ll need to open up the port on your remote server. In the browser, you will see the following React template project: As long as the script is running, you will have an active local server. To stop the script, either close the terminal window or tab or type CTRL+C or ⌘-+c in the terminal window or tab that is running your script. At this point, you have started the server and are running your first React code. But before you make any changes to the React JavaScript code, you will see how React renders to the page in the first place. In this step, you will modify code in the public/ directory. The public directory contains your base HTML page. This is the page that will serve as the root to your project. You will rarely edit this directory in the future, but it is the base from which the project starts and a crucial part of a React project. If you cancelled your server, go ahead and restart it with npm start, then open public/ in your favorite text editor in a new terminal window: - nano public/ Alternatively, you can list the files with the ls command: - ls public/ You will see a list of files such as this: Outputfavicon.ico logo192.png manifest.json index.html logo512.png robots.txt favicon.ico, logo192.png, and logo512.png are icons that a user would see either in the tab of their browser or on their phone. 
The browser will select the proper-sized icons. Eventually, you’ll want to replace these with icons that are more suited to your project. For now, you can leave them alone. The manifest.json is a structured set of metadata that describes your project. Among other things, it lists which icon will be used for different size options. The robots.txt file is information for web crawlers. It tells crawlers which pages they are or are not allowed to index. You will not need to change either file unless there is a compelling reason to do so. For instance, if you wanted to give some users a URL to special content that you do not want easily accessible, you can add it to robots.txt and it will still be publicly available, but not indexed by search engines. The index.html file is the root of your application. This is the file the server reads, and it is the file that your browser will display. Open it up in your text editor and take a look. If you are working from the command line, you can open it with the following command:

- nano public/index.html

Here’s what you will see:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <link rel="icon" href="%PUBLIC_URL%/favicon.ico" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <meta name="theme-color" content="#000000" />
    <meta name="description" content="Web site created using create-react-app" />
    <link rel="apple-touch-icon" href="%PUBLIC_URL%/logo192.png" />
    <!--
      manifest.json provides metadata used when your web app is installed on a
      user's mobile device or desktop. See
    -->
    <link rel="manifest" href="%PUBLIC_URL%/manifest.json" />
    <!--
      Notice the use of %PUBLIC_URL% in the tags above. It will be replaced with
      the URL of the `public` folder during the build. Only files inside the
      `public` folder can be referenced from the HTML. Unlike "/favicon.ico" or
      "favicon.ico", "%PUBLIC_URL%/favicon.ico" will work correctly both with
      client-side routing and a non-root public URL. Learn how to configure a
      non-root public URL by running `npm run build`.
    -->
    <title>React App</title>
  </head>
  <body>
    <noscript>You need to enable JavaScript to run this app.</noscript>
    <div id="root"></div>
  </body>
</html>

The file is pretty short. There are no images or words in the <body>. That’s because React builds the entire HTML structure itself and injects it with JavaScript. But React needs to know where to inject the code, and that’s the role of index.html.
In your text editor, change the <title> tag from React App to Sandbox:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    <link rel="icon" href="%PUBLIC_URL%/favicon.ico" />
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <meta name="theme-color" content="#000000" />
    ...
    <title>Sandbox</title>
  </head>
  ...

Save and close your text editor. Check your browser. The title is the name located on the browser tab. It will update automatically. If not, refresh the page and notice the change. Now go back to your text editor. Every React project starts from a root element. There can be multiple root elements on a page, but there needs to be at least one. This is how React knows where to put the generated HTML code. Find the element <div id="root">. This is the div that React will use for all future updates. Change the id from root to base:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    ...
  <body>
    <noscript>You need to enable JavaScript to run this app.</noscript>
    <div id="base"></div>
  ...

Save the changes. You will see an error in your browser: React was looking for an element with an id of root. Now that it is gone, React can’t start the project. Change the name back from base to root:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8" />
    ...
    <div id="root"></div>
  ...

Save and close index.html. At this point, you’ve started the server and made a small change to the root HTML page. You haven’t yet changed any JavaScript code. In the next section, you will update the React JavaScript code.
You can either open the full directory in your favorite text editor, or you can list out the project in a terminal with the following command: - ls src/ You will see the following files in your terminal or text editor. OutputApp.css App.js App.test.js index.css index.js logo.svg serviceWorker.js setupTests.js Let’s go through these files one at a time. You will not spend much time with the serviceWorker.js file at first, but it can be important as you start to make progressive web applications. The service worker can do many things including push notifications and offline caching, but for now it’s best to leave it alone. The next files to look at are setupTests.js and App.test.js. These are used for test files. In fact, when you ran npm test in Step 2, the script ran these files. The setupTests.js file is short; all it includes is a few custom expect methods. You’ll learn more about these in future tutorials in this series. Open App.test.js: - nano src/App.test.js When you open it, you’ll see a basic test: import React from 'react'; import { render } from '@testing-library/react'; import App from './App'; test('renders learn react link', () => { const { getByText } = render(<App />); const linkElement = getByText(/learn react/i); expect(linkElement).toBeInTheDocument(); }); The test is looking for the phrase learn react to be in the document. If you go back to the browser running your project, you’ll see the phrase on the page. React testing is different from most unit tests. Since components can include visual information, such as markup, along with logic for manipulating data, traditional unit tests do not work as easily. React testing is closer to a form of functional or integration testing. Next, you’ll see some styling files: App.css, index.css, and logo.svg. There are multiple ways of working with styling in React, but the easiest is to write plain CSS since that requires no additional configuration. 
There are multiple CSS files because you can import the styles into a component just like they were another JavaScript file. Since you have the power to import CSS directly into a component, you might as well split the CSS to only apply to an individual component. What you are doing is separating concerns. You are not keeping all the CSS separate from the JavaScript. Instead you are keeping all the related CSS, JavaScript, markup, and images grouped together. Open App.css in your text editor. If you are working from the command line, you can open it with the following command: - nano src/App.css This is the code you’ll see: .App { text-align: center; } .App-logo { height: 40vmin; pointer-events: none; } @media (prefers-reduced-motion: no-preference) { .App-logo { animation: App-logo-spin infinite 20s linear; } } .App-header { background-color: #282c34; min-height: 100vh; display: flex; flex-direction: column; align-items: center; justify-content: center; font-size: calc(10px + 2vmin); color: white; } .App-link { color: #61dafb; } @keyframes App-logo-spin { from { transform: rotate(0deg); } to { transform: rotate(360deg); } } This is a standard CSS file with no special CSS preprocessors. You can add them later if you want, but at first, you only have plain CSS. Create React App tries to be unopinionated while still giving an out-of-the-box environment. Back to App.css, one of the benefits of using Create React App is that it watches all files, so if you make a change, you’ll see it in your browser without reloading. To see this in action make a small change to the background-color in App.css. Change it from #282c34 to blue then save the file. The final style will look like this: .App { text-align: center; } ... .App-header { background-color: blue min-height: 100vh; display: flex; flex-direction: column; align-items: center; justify-content: center; font-size: calc(10px + 2vmin); color: white; } ... 
@keyframes App-logo-spin {
  from { transform: rotate(0deg); }
  to { transform: rotate(360deg); }
}

Here’s how it will look after the change: Go ahead and change background-color back to #282c34.

.App {
  text-align: center;
  ...

.App-header {
  background-color: #282c34;
  min-height: 100vh;
  display: flex;
  flex-direction: column;
  align-items: center;
  justify-content: center;
  font-size: calc(10px + 2vmin);
  color: white;
}

...

@keyframes App-logo-spin {
  from { transform: rotate(0deg); }
  to { transform: rotate(360deg); }
}

Save and exit the file. You’ve made a small CSS change. Now it’s time to make changes to the React JavaScript code. Start by opening index.js.

- nano src/index.js

Here’s what you’ll see:

import React from 'react';
import ReactDOM from 'react-dom';
import './index.css';
import App from './App';
import * as serviceWorker from './serviceWorker';

ReactDOM.render(<App />, document.getElementById('root'));

serviceWorker.unregister();

At the top, you are importing React, ReactDOM, index.css, App, and serviceWorker. By importing React, you are actually pulling in code to convert JSX to JavaScript. JSX are the HTML-like elements. For example, notice how when you use App, you treat it like an HTML element <App />. You’ll explore this more in future tutorials in this series. ReactDOM is the code that connects your React code to the base elements, like the index.html page you saw in public/. Look at the following highlighted line:

...
import * as serviceWorker from './serviceWorker';

ReactDOM.render(<App />, document.getElementById('root'));

...
serviceWorker.unregister();

This code instructs React to find an element with an id of root and inject the React code there. <App/> is your root element, and everything will branch from there. This is the beginning point for all future React code. At the top of the file, you’ll see a few imports. You import index.css, but don’t actually do anything with it. By importing it, you are telling Webpack via the React scripts to include that CSS code in the final compiled bundle. If you don’t import it, it won’t show up. Exit from src/index.js. At this point, you still haven’t seen anything that you are viewing in your browser.
To see this, open up App.js: - nano src/App.js The code in this file will look like a series of regular HTML elements. Here’s what you’ll see:; Change the contents of the <p> tag from Edit <code>src/App.js</code> and save to reload. to Hello, world and save your changes. ... function App() { return ( <div className="App"> <header className="App-header"> <img src={logo} <p> Hello, world </p> <a className="App-link" href="" target="_blank" rel="noopener noreferrer" > Learn React </a> </header> </div> ); } ... Head over to your browser and you’ll see the change: You’ve now made your first update to a React component. Before you go, notice a few more things. In this component, you import the logo.svg file and assign it to a variable. Then in the <img> element, you add that code as the src. There are a few things going on here. Look at the img element: ... function App() { return ( <div className="App"> <header className="App-header"> <img src={logo} <p> Hello, world </p> ... Notice how you pass the logo into curly braces. Anytime you are passing attributes that are not strings or numbers, you need to use the curly braces. React will treat those as JavaScript instead of strings. In this case, you are not actually importing the image; instead you are referencing the image. When Webpack builds the project it will handle the image and set the source to the appropriate place. Exit the text editor. If you look at the DOM elements in your browser, you’ll see it adds a path. If you are using Chrome, you can inspect the element by right-clicking the element and selecting Inspect. Here’s how it would look in the browser: The DOM has this line: <img src="/static/media/logo.5d5d9eef.svg" class="App-logo" alt="logo"> Your code will be slightly different since the logo will have a different name. Webpack wants to make sure the image path is unique. So even if you import images with the same name, they will be saved with different paths. 
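Webpack’s renaming scheme described above, where a content hash is embedded in the emitted filename so that two different images both named logo.svg can never collide, can be sketched in a few lines of Python. This is only an illustration of the idea; the hashed_name helper and the use of MD5 here are my own assumptions, not Webpack’s actual implementation:

```python
import hashlib

def hashed_name(filename, content, digest_len=8):
    # Derive a short hash from the file's bytes and splice it into the name,
    # the way Webpack emits logo.5d5d9eef.svg for logo.svg.
    h = hashlib.md5(content).hexdigest()[:digest_len]
    stem, _, ext = filename.rpartition(".")
    return f"{stem}.{h}.{ext}"

a = hashed_name("logo.svg", b"<svg>A</svg>")
b = hashed_name("logo.svg", b"<svg>B</svg>")
assert a != b  # same name, different content, different emitted path
```

Because the emitted name depends on the bytes, an unchanged file keeps its URL (and stays cached), while any edit produces a fresh one.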
At this point, you’ve made a small change to the React JavaScript code. In the next step, you’ll use the build command to minify the code into a small file that can be deployed to a server. In this step, you will build the code into a bundle that can be deployed to external servers. Head back to your terminal and build the project. You ran this command before, but as a reminder, this command will execute the build script. It will create a new directory with the combined and minified files. To execute the build, run the following command from the root of your project: - npm run build There will be a delay as the code compiles and when it’s finished, you’ll have a new directory called build/. Open up build/index.html in a text editor. - nano build/index.html You will see something like this: <!doctype html><html lang="en"><head><meta charset="utf-8"/><link rel="icon" href="/favicon.ico"/><meta name="viewport" content="width=device-width,initial-scale=1"/><meta name="theme-color" content="#000000"/><meta name="description" content="Web site created using create-react-app"/><link rel="apple-touch-icon" href="/logo192.png"/><link rel="manifest" href="/manifest.json"/><title>React App</title><link href="/static/css/main.d1b05096.chunk.css" rel="stylesheet"></head><body><noscript>You need to enable JavaScript to run this app.</noscript><div id="root"></div><script>!function(e){function r(r){for(var n,a,p=r[0],l=r[1],c=r[2],i=0,s=[];i<p.length;i++)a=p[i],Object.prototype.hasOwnProperty.call(o,a)&&o[a]&&s.push(o[a][0]),o[a]=0;for(n in l)Object.prototype.hasOwnProperty.call(l,n)&&(e[n]=l[n]);for(f&&f(r);s.length;)s.shift()();return u.push.apply(u,c||[]),t()}function t(){for(var e,r=0;r<u.length;r++){for(var t=u[r],n=!0,p=1;p<t.length;p++){var l=t[p];0!==o[l]&&(n=!1)}n&&(u.splice(r--,1),e=a(a.s=t[0]))}return e}var n={},o={1:0},u=[];function a(r){if(n[r])return n[r].exports;var t=n[r]={i:r,l:!1,exports:{}};return 
e[r].call(t.exports,t,t.exports,a),t.l=!0,t.exports}a.m=e,a.c=n,a.d=function(e,r,t){a.o(e,r)||Object.defineProperty(e,r,{enumerable:!0,get:t})},a.r=function(e){"undefined"!=typeof Symbol&&Symbol.toStringTag&&Object.defineProperty(e,Symbol.toStringTag,{value:"Module"}),Object.defineProperty(e,"__esModule",{value:!0})},a.t=function(e,r){if(1&r&&(e=a(e)),8&r)return e;if(4&r&&"object"==typeof e&&e&&e.__esModule)return e;var t=Object.create(null);if(a.r(t),Object.defineProperty(t,"default",{enumerable:!0,value:e}),2&r&&"string"!=typeof e)for(var n in e)a.d(t,n,function(r){return e[r]}.bind(null,n));return t},a.n=function(e){var r=e&&e.__esModule?function(){return e.default}:function(){return e};return a.d(r,"a",r),r},a.o=function(e,r){return Object.prototype.hasOwnProperty.call(e,r)},a.</script><script src="/static/js/main.bac2dbd2.chunk.js"></script></body></html> The build directory takes all of your code and compiles and minifies it into the smallest usable state. It doesn’t matter if a human can read it, since this is not a public-facing piece of code. Minifying like this will make the code take up less space while still allowing it to work. Unlike some languages like Python, the whitespace doesn’t change how the computer interprets the code. In this tutorial, you have created your first React application, configuring your project using JavaScript build tools without needing to go into the technical details. That’s the value in Create React App: you don’t need to know everything to get started. It allows you to ignore the complicated build steps so you can focus exclusively on the React code. You’ve learned the commands to start, test, and build a project. You’ll use these commands regularly, so take note for future tutorials. Most importantly, you updated your first React component. If you would like to see React in action, try our How To Display Data from the DigitalOcean API with React! It’s a good intro! 
Maybe I’m nitpicking here but the quote “Anytime you are passing attributes that are not strings or numbers” seems incorrect, since numbers should also be wrapped in braces, e.g. someAttr={42} instead of someAttr=42.

As of Jan 2021, some of the idiosyncrasies of create-react-app have changed. It no longer creates a service worker, and there are a few other small changes.

How can I contribute a zh-CN translation?

Nice article, but the important part of running the build folder has been omitted:

npm install -g serve
serve -s build

(from the root of the project, after you did npm run build)
https://www.digitalocean.com/community/tutorials/how-to-set-up-a-react-project-with-create-react-app
On Feb 1, 1:28 pm, [...]

I have slightly modified overlaps() in order to make the analysis easier (I have made overlaps() consider all intervals open). Here is the new version:

def open_interval(i, (a, b)):
    return (a, 1, i), (b, 0, i)

def overlaps(lst, interval=open_interval):
    bounds = chain(*starmap(interval, enumerate(lst)))   # 1
    inside = {}                                          # 2
    for x, _, i in sorted(bounds):                       # 3
        if inside.pop(i, None) is None:                  # 4
            for j, y in inside.iteritems(): yield i, j   # 5
            inside[i] = x                                # 6

'Detailed' analysis of running overlaps(lst) follows, where
* lst is of length n
* the total number of overlaps is m
(assuming dict.__getitem__ and dict.__setitem__ are done in constant time [1])

1. is a simple loop over lst, takes An
2. is constant (K)
3. The sorted(...) function will take Bnlogn
3,4. This loop is iterated 2n times, with the if condition it will take Cn
5. This loop is iterated once for each overlap exactly. So it will take Dm
6. This statement is executed n times, will take En

Altogether the time taken is: K + (A+C+E)n + Bnlogn + Dm

So if m is dominated by nlogn, the algorithm is O(nlogn). Otherwise one cannot hope for better than O(m)!

-- Arnaud

[1] Otherwise one could use a combination of an array and a doubly linked list to achieve constant time access, deletion and updating of the 'inside' structure. Anyway even if it was log(n) the overall complexity would be the same!
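For anyone who wants to run the algorithm above on a modern interpreter, here is a Python 3 port (the original uses Python 2 tuple parameters and dict.iteritems), together with a small example; the sample intervals are my own:

```python
from itertools import chain, starmap

def open_interval(i, pair):
    a, b = pair
    # flag 1 on left endpoints, 0 on right: at equal values a closing endpoint
    # sorts first, so intervals that merely touch do not count as overlapping
    return (a, 1, i), (b, 0, i)

def overlaps(lst, interval=open_interval):
    bounds = chain(*starmap(interval, enumerate(lst)))
    inside = {}                              # index -> left endpoint of open intervals
    for x, _, i in sorted(bounds):
        if inside.pop(i, None) is None:      # left endpoint of interval i
            for j in inside:
                yield i, j                   # i overlaps everything already open
            inside[i] = x
        # otherwise this was i's right endpoint and pop() just closed it

found = sorted(tuple(sorted(p)) for p in overlaps([(0, 4), (3, 6), (5, 9), (10, 12)]))
print(found)  # [(0, 1), (1, 2)]
```

Intervals 0 and 1 overlap on (3, 4), intervals 1 and 2 on (5, 6); the isolated interval (10, 12) overlaps nothing, and touching endpoints never count because the intervals are open.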
https://mail.python.org/pipermail/python-list/2008-February/469048.html
The goal of this code is to type in the file name and if it's found then open the file. The file type will be images, png or jpg. I know the code isn't the prettiest but it's just practice. Here is the code. Thanks in advance.

- Code: Select all

import os
import Image  # PIL; on newer installs this is: from PIL import Image

while True:
    print "What photo would you like to see?"
    photo = raw_input("> ")
    if not os.path.isfile(photo):
        print "That file does not exist"
    else:
        print "I found your file!"
        Image.open(photo).show()  # .show() actually displays the image
    print "would you like to continue?"
    cont = raw_input("(y/n) ")
    if cont == "n":
        break
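Since this thread dates from the Python 2 era, here is how the existence check might look on Python 3; the find_file helper is hypothetical, and the actual display call (Image.open(photo).show() with Pillow) is left out so the logic can run without a GUI:

```python
import os
import tempfile

def find_file(name):
    # Mirrors the loop above: report and return whether `name` is an existing file.
    if not os.path.isfile(name):
        print("That file does not exist")
        return False
    print("I found your file!")
    return True

# quick self-check against a throwaway file
with tempfile.NamedTemporaryFile(suffix=".png") as f:
    assert find_file(f.name)
assert not find_file("no-such-photo.png")
```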
http://python-forum.org/viewtopic.php?f=6&t=869
Setting Minimum DCOM Permissions To connect to a remote computer by using WMI, you must ensure that the correct DCOM settings and WMI namespace security settings are enabled for the connection. To grant DCOM remote launch and activation permissions for a user or group, do these steps. - Click Start >Run, type DCOMCNFG, and then click OK. - In the Component Services dialog box, expand Component Services, expand Computers, and then right-click My Computer and click Properties. - In the My Computer Properties dialog box, click the COM Security tab. - Under Launch and Activation Permissions, click Edit Limits. - In the Launch Permission dialog box, if your name or your group does not appear in the Groups or user names list, follow these steps: select Remote Activation, and then click OK.
https://www.juniper.net/documentation/en_US/jsa7.4.0/jsa-managing-vulnerability-user-guide/topics/task/operational/jsa-vuln-setting-minimum-dcom-permissions.html
User talk:Lysander89 Welcome[edit source] I appreciate your stated intentions about improving School:Computer Science and Portal:Computer Science and advancing Wikiversity. All such efforts are needed and welcome. If you wish to collaborate with others, be sure to study SB_Johnny's welcome card above thoroughly. Take full advantage of the associated talk pages and join appropriate content development groups usually found in Topic: or School: namespaces. You're right about Wikiversity's lack of structure, but it's coming along. I'm sometimes in #wikiversity-en on IRC as yeoman or tractor if you'ld like to try real-time collaboration. Let me know if I can help! Welcome! CQ 03:50, 23 August 2009 (UTC) - Sorry I keep missing you on IRC... today makes it twice now that I've sat down at the keyboard a few minutes after you logged off. :-/. - Structure has long been an issue for us. You might want to open a thread on WV:C to get a bit more about the history of it and the options. --SB_Johnny talk 10:40, 23 August 2009 (UTC) Some help required[edit source] Hi! I am setting up a new discussion forum on wikiversity for highschool students at HHF. I require some assistance in setting up pages and doing some writeups. The part on asking questions and tracking them in the order of latest reply is complete, thankfully! I was asking CQ for help and he directed me to you. Please reply on my talk page if you are willing to help! Thanks! --Dharav talk 04:55, 26 August 2009 (UTC) - Hi there. Thanks for the reply. I know all the problems with this wiki acting as a forum. It'll be very lousy. And thanks for suggesting all the stuff that you did, especially liquidthreads. Did you have a look at ? It is a liquid thread implementation. - Yes indeed the present forum system has it's problems. It's clumsy and patchy. I don't think anyone will resist any imporovements. If we can get an actual forum working, well and good. My suggestion is, let's run that in the background. 
Let's just make HHF operational the way it is, and then add liquidthreads when it becomes a suitable substitution for phpBB3. - I have a question - can phpBB3 be installed on mediawiki? Or can the source code be modified and included in mediawiki software? I am up for anything that makes sense. Can you give me more details on the possibilities with the two software working on wikiversity? The liquidthreads extension and phpBB3? - What's the chance that we can run a software on sandboxserver.org [Let's say phpBB3] and make it edit/create pages on wikiversity? - My suggestion - we let HHF stay as it is right now, except changing the forum "Looks" and keeping the tracking system as-is. Side by side, we work on the liquidthreads extension. When it's finally done, we implement it on HHF and elsewhere. - is it worth the effort? Will liquid threads allow forums, sub forums, tracking and displaying of threads in chornological order according to last edits, flag some posts and all that jazz? Or is it just the discussion that'll be jazzified? Is it only meant for talk pages? - I know there are many complications on HHF right now and the templates/categories system I've used is pretty primitive, but it can be used AT THE MOMENT to make the forum work and changed later on! I would not be able to collaborate on liquid threads with full commitment, but no way in hell I won't help anyone trying to help HHF. If we go with this, then HHF requires a makeover rightnow. We'll keep the overhaul of the system in the pipeline, working on both of them simultaneously. - Thanks for the help, and let's continue the discussion on Talk:HHF - --Dharav talk 06:16, 27 August 2009 (UTC) Network Computers, Clusters and Clouds[edit source] Do you think you will have room in your Computer Networks topic to include Clusters and Clouds as a sub-topic? 
I am currently trying to develop a Charity Souper Computer (Heterogeneous Cluster of Donated Components) and I am sure that considerations of which cluster operating system to use, might be of interest to others. --Graeme E. Smith 13:20, 31 August 2009 (UTC) Miss You![edit source] We all appreciate your contributions to the School of Computer Science and we'd love to see you back. Hope to hear from you. --AFriedman 16:10, 1 September 2009 (UTC) - Best of luck with your assignments! Thanks for letting us know about your absence, and looking forward to seeing you back. Yes, Computer Science is very time consuming :). My own courses start on the 8th of September and will probably keep me up at night :(. --AFriedman 04:06, 3 September 2009 (UTC) Keep up the good work[edit source] Your contributions to Introduction to Computers are great. Hereby receive a personal award from me as a sign of appreciation. --Gbaor 17:33, 29 October 2009 (UTC)
https://en.wikiversity.org/wiki/User_talk:Lysander89
Simple StateT use From HaskellWiki Revision as of 01:51, 13 November 2006 by DonStewart (Talk | contribs)

A small example showing how to combine a State monad (in this case a unique supply) with the IO monad, via a monad transformer. No need to resort to nasty mutable variables or globals!

import Control.Monad.State

main = runStateT code [1..] >> return ()

--
-- layer an infinite list of uniques over the IO monad
--
code = do
    x <- pop
    io $ print x
    y <- pop
    io $ print y
    return ()

--
-- pop the next unique off the stack
--
pop = do
    (x:xs) <- get
    put xs
    return x

io = liftIO
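For comparison, the same idea, threading a supply of uniques through effectful code without mutable globals, can be imitated in Python with an iterator as the state (the names are mine, chosen to mirror the Haskell):

```python
from itertools import count

def pop(supply):
    # pop the next unique off the (infinite) supply
    return next(supply)

def code(supply):
    x = pop(supply)
    print(x)
    y = pop(supply)
    print(y)
    return x, y

supply = count(1)        # plays the role of the [1..] handed to runStateT
result = code(supply)    # prints 1 then 2
```

The iterator carries the "state" between calls the way StateT carries the list of uniques, and nothing needs to live in a global.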
https://wiki.haskell.org/index.php?title=Simple_StateT_use&oldid=8211
On 2012-03-01 18:21, Philippe Sigaud wrote: > > > But how to associate actions with a rule, in that case? I mean, some > > > rules will have actions, some not. > > > > You could maybe just put D code in the grammar string, which gets > > compiled as a string mixin by CTFE? > > I could, but one of my driving goals while beginning this project a > month ao was to use the shiny new lambda syntax directly :) > > "rule1", o => o > "rule2", o => o > > etc. > Maybe we can take some ideas from the CoffeeScript compiler (written in CoffeeScript) : Grammar: Lexer: -- /Jacob Carlborg (2012/03/01 1:19), Andrei Alexandrescu wrote: >? P.S. prefix operator "!" is same as postfix operator "%" which you said. The types which parsers on both sides of "/" operator have should be compatible. The type of "(" addExp ")" is Tuple!(string, int, string) and the type of intLit_p is int. They are not compatible. The type of !"(" addExp !")" is int. So the types of !"(" addExp !")" and intLit_p are compatible. Thanks, Hisayuki Mima On 3/4/12 8:34 AM, Hisayuki Mima wrote: >? It's your project so you define what's fun to work on. Let me just say that I worked on a lot of parsing-related stuff, and most often projects that start as "this grammar is simple enough, let's just do processing during parsing" ultimately required a rewrite to do generate AST, process it, and then use it for computation.. Andrei Hi Nick, On Wednesday, 29 February 2012 at 17:16:44 UTC, Nick Sabalausky wrote: > One thing I've been wanting to see in a lexer's regex would be a way to > match things like D's nested comments or: I have already implemented a lexer generator that can handle recursion in the token definitions (using multiple lexer states). See The library is C++ and generates lexers at runtime, but the concepts should be easily transferable. 
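Nested constructs like D's /+ ... +/ comments are exactly what a single regular expression cannot match, which is why the lexer needs a depth counter or a stack of states. A minimal Python sketch of the depth-counting idea (the helper name is my own):

```python
def match_nested_comment(s, i=0):
    """Return the index just past a D-style /+ ... +/ comment starting at i, else None."""
    if not s.startswith("/+", i):
        return None
    depth, i = 1, i + 2
    while i < len(s) and depth:
        if s.startswith("/+", i):      # nested open: push
            depth += 1
            i += 2
        elif s.startswith("+/", i):    # close: pop
            depth -= 1
            i += 2
        else:
            i += 1
    return i if depth == 0 else None

assert match_nested_comment("/+ a /+ b +/ c +/x") == 17
```

The counter plays the role of the push/pop DFA stack described below: each /+ pushes, each +/ pops, and the token ends only when the stack is empty again.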
Basically I have a boolean for pop in the end state and a separate column for push_dfa_: if (end_state_) { // Return longest match if (pop_) { start_state_ = results_.stack.top ().first; results_.stack.pop (); } else if (push_dfa_ != results_.npos ()) { results_.stack.push (typename results::id_type_pair (push_dfa_, id_)); } . . . I'm happy to answer any questions etc. Regards, Ben On Sunday, 4 March 2012 at 21:00:00 UTC, Andrei Alexandrescu wrote: >. This chimes nicely with the comments made at the Going Native conference with regard to clang generating an AST and MSVC and GCC not. Herb Sutter referred to it as "AST envy" (as in he wished MSVC took the AST route). I only mention it as maybe not everyone here watched that vid. Andrei himself was at the conference of course! Regards, Ben "Nick Sabalausky" <a@a.a> wrote in message news:jikfpi$25cc$1@digitalmars.com... > "Nick Sabalausky" <a@a.a> wrote in message news:jikcit$201o) Filed: > > >. > Hello Youkei I am trying to use CTPG for compile time parsing for a DSL I am working on. I have tried the examples you created in the examples directory. I would like the parser to effect some side effects. For this purpose, I tried including the parser mixin into a class, but I got a strange error saying: Error: need 'this' to access member parse I have copied the code I am trying to compile at the end of the email. Let me know what I could be doing wrong here. Regards - Puneet import ctpg; import std.array: join; import std.conv: to; class Foo { int result; mixin(generateParsers(q{ int root = mulExp $; int mulExp = primary !"*" mulExp >> (lhs, rhs){ return lhs * rhs; } / primary; int primary = !"(" mulExp !")" / [0-9]+ >> join >> to!int; })); void frop() { result = parse!root("5*8"); } } void main(){ Foo foo = new Foo(); foo.frop(); } > > I would like the parser to effect some side effects. 
For this purpose, I tried including the parser mixin into a class, but I got a strange error saying: > > Error: need 'this' to access member parse > > Ok. I see my folly. At compile time, there would not be any "this" since the class has not been instantiated yet. I will have to think of other solutions. Basically, I am trying to use the parser to create functions that I can use at run time. But I wanted to create two functions from the same parser. I have succeeded with creating one function. I do not want to create two separate parsers because each of these parsers would add to memory footprint of the compiler. Any ideas? Maybe I could use tuples here? Regards - Puneet I'm not sure I follow all the details of what Andrei's suggesting and what's being talked about here, this parser/lexer stuff is still very new to me, so this may be a bit off-topic. However, I thought I'd weigh in on something I was very impressed with about the Nimrod language's direct AST access/manipulation. Nim has a "template" which is very much like D's mixin templates, example: # Nim template foo(b:string) = var bar = b block main: foo("test") assert(bar == "test") and the equivalent in... // D mixin template foo(string b) { auto bar = b; } void main() { mixin foo("test"); assert(bar == "test"); } which is all very straight forward stuff, the cool part comes with Nim's macro's. Nim has a two unique types: expr & stmt (expression & statement). They're direct AST structures which can be passed to template/macro procedures and arbitrarily mutated. Example: macro foo(s:stmt): stmt = result = newNimNode(nnkStmtList) for i in 1 .. 
s.len-1: var str = s[i].toStrLit() result.add(newCall("echo", str)) block main: foo: bar baz the above code prints: " bar baz " **Some notes: result is what's returned, and the reason you can use "foo" with a statement body is because any macro/template who's last parameter is type 'stmt' can be called with block semantics; similar to how UFCS works with the first parameter.** The above *might* look like the following in D: macro foo(ASTNode[] stmts...) { ASTNode[] result; foreach (s; stmts) { auto str = s.toASTString(); result ~= new ASTCall!"writeln"(str); } return result; } void main() { foo { bar; baz; } } This kind of direct AST manipulation + body semantics opens the doors for a lot of cool things. If you read through Nim's lib documentation you'll notice many of the "language" features are actually just Library procedures in the defaultly included system module. Which is great because contributions to the *language* can be made very easily. Also, the infrastructure to read/write AST is no-doubt immensely useful for IDE support and other such dev tools. I'm not a huge fan of everything in Nimrod, but this is something they definitely got right, and I think D could gain from their experience. On Thursday, 1 March 2012 at 15:10:36 UTC,. > > Yes, using one string is indeed better. That won't be too difficult to code. I'm wondering if people have seen LPeg. It's a Lua library, but the design is interesting in that patterns are first class objects which can be composed with operator overloading.
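The LPeg design mentioned above, patterns as first-class objects composed with operator overloading, ports readily to other languages. Here is a toy Python sketch; the class name and operator choices are mine (real LPeg uses * for sequence and + for ordered choice), and since / binds tighter than + in Python, sequences must be parenthesized:

```python
class P:
    """A tiny pattern: calling it with (s, i) returns the new index or None."""
    def __init__(self, fn):
        self.fn = fn
    def __call__(self, s, i=0):
        return self.fn(s, i)
    def __add__(self, other):          # sequence: self then other
        def seq(s, i):
            j = self(s, i)
            return None if j is None else other(s, j)
        return P(seq)
    def __truediv__(self, other):      # ordered choice: self, else other
        def alt(s, i):
            j = self(s, i)
            return j if j is not None else other(s, i)
        return P(alt)

def lit(t):
    # match the literal string t at position i
    return P(lambda s, i: i + len(t) if s.startswith(t, i) else None)

expr = (lit("(") + lit("x") + lit(")")) / lit("x")
assert expr("(x)") == 3 and expr("x") == 1 and expr("(y)") is None
```

Because patterns are ordinary objects, grammars can be built up, stored, and passed around just like any other value, which is the property the poster admires in LPeg.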
http://forum.dlang.org/thread/jii1gk$76s$1@digitalmars.com?page=8
Another release of RFuzz to announce:

DESCRIPTION

Fuzzing is where you try to give a web application lots of randomly generated unexpected inputs in an attempt to break it and find new areas to write unit tests. It complements other testing methods. See for additional information. This is the library I'll be using in my RubyConf talk.

CHANGES

The 0.7 release fixes a bad bug in the request headers, fixes a rare chunked encoding error, and adds a new example that uses the fresh RFuzz::Browser class. Look at examples/cl_watcher.rb for a simple script that I'm using to watch apartment listings on craigslist. This release *also supports win32 precompiled binaries*.

INSTALL

Everyone should be able to install it with:

sudo gem install rfuzz

Or on Windows just "gem install rfuzz". Windows people pick the win32 version as it's the one that's precompiled.

EXAMPLE REST CLIENT

This example is from the samples page:

class RESTClientError < Exception; end

class RESTClient
  def initialize(host, port, base="")
    @host, @port = host, port
    @base = base
    @client = RFuzz::HttpClient.new(host, port)
  end

  def target_uri(symbol)
    @base + "/" + symbol.to_s.tr("_", "/")
  end

  def method_missing(symbol, *args)
    res = @client.get(target_uri(symbol), :query => (args[0] || {}))
    raise_error_if(res.http_status != "200",
                   "Invalid Status #{res.http_status} from server #{@host}:#{@port}")
    return REXML::Document.new(res.http_body).root
  end

  def raise_error_if(test, msg)
    raise RESTClientError.new(msg) if test
  end
end

This example just takes a simple call:

client.users_find :name => "joe"

And translates it to a GET /users/find?name=joe request on the fly, then returns a REXML docroot.

Enjoy!

-- Zed A. Shaw -- Need Mongrel support?

> Another release of RFuzz to announce:
> > Or on Windows just "gem install rfuzz". Windows people pick the win32
> version as it's the one that's precompiled.

Select which gem to install for your platform (i386-mswin32)
1. rfuzz 0.7 (mswin32)
2. rfuzz 0.7 (ruby)
3.
rfuzz 0.6 (ruby) 4. Cancel installation > 1 ERROR: While executing gem ... (OpenURI::HTTPError) 404 Not Found I checked the rubyforge.org rfuzz download page, which lists the mswin32 gem, but clicking the link for a manual download also gives a 404 is broken. -- James Britt "Blanket statements are over-rated" > I just ran 'gem install rfuzz' on my WinXP box, and was given the choice > of installs. Selecting the first one returned a 404: [...] > I checked the rubyforge.org rfuzz download page, which lists the mswin32 > gem, but clicking the link for a manual download also gives a 404 > is broken. -- Mauricio Fernandez - - singular Ruby In case anybody's wondering... alex@pandora:~/Desktop/Noodling/ruby$ ruby -v; cat rfuzz_test.rb; ruby rfuzz_test.rb ruby 1.8.4 (2005-12-24) [i486-linux] require 'mongrel' require 'rfuzz/client' require 'net/http' require 'benchmark' h = Mongrel::HttpServer.new('0.0.0.0', '8080') h.register('/', Mongrel::DirHandler.new('.')) h.run def do_download(h, path) h.get(path) # quack! end n = 1000 Benchmark.bmbm(20) do |b| u = URI.parse('') busted = Net::HTTP.new(u.host, u.port) b.report('net/http'){for i in 0..n; do_download(busted, u.path); end} hotness = RFuzz::HttpClient.new(u.host, u.port) b.report('rfuzz'){for i in 0..n; do_download(hotness, u.path); end} end) That doesn't include those speedups for net/http that went through the list a couple of weeks back, though. -- Alex > user system total real > net/http 7.930000 1.370000 9.300000 ( 9.298725) > rfuzz 2.950000 1.130000 4.080000 ( 4.073578) Otherwise net/http has a bunch more features, but I'll slowly add them as I need them. I think SSL is next on the list. - Hide quoted text -- Show quoted text - >>Zed Shaw wrote: >>In case anybody's wondering... > <snip> >) > Yeah it should be faster just by virtue of having very little to it and > using a C parser. 
Keep in mind that rfuzz--unlike mongrel--isn't really > designed for speed but more for letting you easily hit web servers with > any kind of request. Another thing some folks don't notice is that all > of the HttpClient requests are "data driven". That means you can power > RFuzz blindly from a YAML file full of strings and hashes. > Otherwise net/http has a bunch more features, but I'll slowly add them > as I need them. I think SSL is next on the list. >>That doesn't include those speedups for net/http that went through the >>list a couple of weeks back, though. > I'd also be curious for the same test run with the RFuzz call being done > inside a Timeout block. RFuzz doesn't do any of that (bare metal), and > I know Timeout blocks fire up an extra thread and eat up a bit more CPU. > I'm betting that putting RFuzz inside Timeout makes it about the same > speed at net/http. alex@pandora:~/Desktop/Noodling/ruby$ cat rfuzz_test.rb; ruby rfuzz_test.rb require 'mongrel' require 'rfuzz/client' require 'net/http' require 'benchmark' require 'timeout' m = Mongrel::HttpServer.new('0.0.0.0', '8080') m.register('/', Mongrel::DirHandler.new('.')) m.run def do_download(h, path) h.get(path) end def do_timeout_download(h, path) timeout(30){ h.get(path) } end Benchmark.bmbm(20) do |b| u = URI.parse('') h = Net::HTTP.new(u.host, u.port) b.report('net/http'){ for i in 0..n; do_download(h, u.path); end } r = RFuzz::HttpClient.new(u.host, u.port) b.report('rfuzz'){ for i in 0..n; do_download(r, u.path); end } b.report('rfuzz+timeout'){ for i in 0..n; do_timeout_download(r, u.path); end } end Rehearsal ------------------------------------------------------- net/http 8.000000 1.380000 9.380000 ( 9.609907) rfuzz 3.110000 1.090000 4.200000 ( 4.213084) rfuzz+timeout 3.660000 1.270000 4.930000 ( 4.933268) --------------------------------------------- total: 18.510000sec user system total real net/http 8.060000 1.480000 9.540000 ( 9.620967) rfuzz 3.090000 1.090000 4.180000 ( 5.725728) 
rfuzz+timeout 3.680000 1.230000 4.910000 ( 4.991043). That shouldn't be a problem on this test, though - the file it's downloading is only 715 bytes. I'll try the test with a bigger file and see what I can see. > Not quite: <snip> >. user system total real net/http 7.880000 1.470000 9.350000 ( 9.454987) net/http big 1056.710000 113.930000 1170.640000 (1173.337412) rfuzz 2.850000 0.980000 3.830000 ( 3.831175) rfuzz+timeout 3.420000 1.180000 4.600000 ( 4.609152) rfuzz big 175.550000 45.900000 221.450000 (221.584886) rfuzz+timeout big 205.930000 55.530000 261.460000 (262.673411) The .*big results are from downloading the results of dd if=/dev/zero of=big_file bs=1M count=1... that is, 1MB of zeros. Interesting.
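The underscore-to-slash translation that powers the RESTClient's method_missing trick can be exercised on its own; here is a minimal sketch with no gem or network dependency (the base path and the standalone method name are mine, not from the original client):

```ruby
# Stand-alone version of RESTClient#target_uri: turn a method name
# like :users_find into the URI path "/users/find".
def target_uri(base, symbol)
  base + "/" + symbol.to_s.tr("_", "/")
end

puts target_uri("/api", :users_find)   # prints "/api/users/find"
```

This is the whole trick behind `client.users_find` becoming `GET /users/find`: every underscore in the missing method's name is rewritten to a path separator before the request is issued.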
http://groups.google.com/group/ruby-talk-google/browse_thread/thread/4e5a3b64edf54002
SYNOPSIS

    #include <nng/nng.h>

    nng_pipe nng_msg_get_pipe(nng_msg *msg);

DESCRIPTION

The nng_msg_get_pipe() function returns the nng_pipe object associated with message msg. On receive, this is the pipe from which the message was received. On transmit, this would be the pipe that the message should be delivered to, if a specific peer is required.

The most usual use case for this is to obtain information about the peer from which the message was received. This can be used to provide different behaviors for different peers, such as a higher level of authentication for peers located on an untrusted network. The nng_pipe_getopt() function is useful in this situation.

RETURN VALUES

This function returns the pipe associated with this message, which will be a positive value. If the pipe is non-positive, then that indicates that no specific pipe is associated with the message.

ERRORS

None.
https://nng.nanomsg.org/man/v1.2.2/nng_msg_get_pipe.3.html
The linkage of a variable or function defines the area of the program in which it is accessible. There are three types of linkage for variables and functions:

1. EXTERNAL
2. INTERNAL
3. NONE

External linkage means that the variable or function can be accessed throughout the program — across multiple source files, if the program comprises several source files — with the use of the keyword extern.

    #include <stdio.h>

    int counter;        /* external linkage: counter is an integer accessible across the entire program */
    extern float marks; /* external linkage: marks, of float type, is defined in some other source file & used here */

    int main(void)
    {
        printf("the marks are %f\n", marks = 55.9);
        return 0;
    }

The static keyword changes linkage from external to internal, meaning that any variable or function declared globally with the static keyword can only be accessed within the source file, from the point of declaration until the end of the file in which it is declared.

    #include <stdio.h>

    /*
     * Internal linkage: counter can only be accessed within this source file,
     * from this point until the end of the file.
     * Internal linkage: the function hello can only be accessed within this
     * source file.
     */
    static int counter;
    static void hello(void);

    int main(void)
    {
        hello();
        return 0;
    }

    static void hello(void)
    {
        printf("hello, counter is %d\n", counter);
    }

Automatic and register variables have neither external nor internal linkage. They are accessible only within the blocks in which they are declared. For example:

    #include <stdio.h>

    void hello(void);

    int main(void)
    {
        hello();
        return 0;
    }

    void hello(void)
    {
        int counter = 1; /* counter is an automatic variable, i.e. ONLY accessible within this block! */
        printf("the value of counter is %d\n", counter);
    }

Sanfoundry Global Education & Learning Series – 1000 C Tutorials.
https://www.sanfoundry.com/c-question-linkage-scope-variables-functions-their-types/
-----Original Message-----
From: Andi Kleen <ak@muc.de>

> tleete@access.mountain.net (Tom Leete) writes:
>> Hi,
>>
>> Here's a first patch. The only thing it does besides replacing "inline"
>> with "__inline__" is to fill in the missing statements which set alight
>> the "linux headers and C++" flamefest.
>
> This looks useless. The moving out of ISO C namespace doesn't help, because

It is for user-space developers who include headers for the ksyms
prototypes. It works to permit them to use gcc -ansi and -pedantic if they
wish. I recommend that for producing maintainable and portable code.
See `$ info gcc` for reasons to use __inline__. I didn't invent that macro,
you know. The majority of kernel headers used it to start with.

> it would need to be reserved for C++ compatibility anyways.
> Near all interesting (=optimizing) C compilers know about inline, and if
> not it can be easily #define'd out. Also the next revision of the C
> standard has inline.

Who's going to put the revisions into gcc-2.7.2?

> What they usually don't deal with is gcc's "extern inline", so
> migrating from "extern inline" to "static inline". The reasoning behind

Yes, I think that may need to be handled, perhaps by something like:

    #ifdef __GNUC__
    extern
    #endif

I'm open to suggestions on that.

> the extern thing seems to be to let ld catch non inlined copies. A much
> better way to achieve this IMHO is -Winline (if you can stand the few
> warnings for the non-fixable cases - which should be few because with
> extern inline they didn't even compile)
>
> I think the kernel header ANSIfication is a doomed project though, because
> how do you do an efficient spinlock without inline assembly (and don't say
> now "move it out of line")? In 2.2+ you cannot do much without spinlocks.
> In 2.0 the same applies e.g. to cli/sti

Who said anything about altering kernel mechanisms for this? I think you
haven't followed the discussion that produced these patches. I would have
turned off the religious war too, if I hadn't got involved as an innocent
agnostic.

> -Andi

Tom

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at
http://lkml.org/lkml/1999/7/8/1
KWayland::Client::Keyboard

#include <keyboard.h>

Detailed Description

Wrapper for the wl_keyboard interface.

This class is a convenient wrapper for the wl_keyboard interface. To create an instance use Seat::createKeyboard.

Definition at line 31 of file keyboard.h.

Member Function Documentation

Destroys the data held by this Keyboard, so that the instance can be set up with a new wl_keyboard interface once there is a new connection available. This method is automatically invoked when the Seat which created this Keyboard gets destroyed.

Definition at line 83 of file keyboard.cpp.

Definition at line 169 of file keyboard.cpp.

This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.

Definition at line 164 of file keyboard.cpp.

Returns: whether key repeat is enabled on this keyboard.
Since: 5.5

Definition at line 179 of file keyboard.cpp.

Returns: true if managing a wl_keyboard.

Definition at line 174 of file keyboard.cpp.

A key was pressed or released. The time argument is a timestamp with millisecond granularity, with an undefined base.

This signal provides a file descriptor to the client which can be memory-mapped to provide a keyboard mapping description. The signal is only emitted if the keymap format is libxkbcommon compatible.

Emitted whenever information on key repeat changed.
Since: 5.5

Returns: the delay in milliseconds for key repeat after a press.
Since: 5.5

Definition at line 184 of file keyboard.cpp.

Returns: the key repeat rate in characters per second.
Since: 5.5

Definition at line 189 of file keyboard.cpp.

Notifies clients that the modifier and/or group state has changed, and it should update its local state.

Releases the wl_keyboard interface. After the interface has been released the Keyboard instance is no longer valid and can be set up with another wl_keyboard interface. This method is automatically invoked when the Seat which created this Keyboard gets released.

Definition at line 78 of file keyboard.cpp.

Setup this Keyboard to manage the keyboard. When using Seat::createKeyboard there is no need to call this method.

Definition at line 88 of file keyboard.cpp.
https://api.kde.org/frameworks/kwayland/html/classKWayland_1_1Client_1_1Keyboard.html
Faster builds with PCH suggestions from C++ Build Insights

Kevin

The creation of a precompiled header (PCH) is a proven strategy for improving build times. A PCH eliminates the need to repeatedly parse a frequently included header by processing it only once at the beginning of a build. The selection of headers to precompile has traditionally been viewed as a guessing game, but not anymore! In this article, we will show you how to use the vcperf analysis tool and the C++ Build Insights SDK to pinpoint the headers you should precompile for your project. We'll walk you through building a PCH for the open source Irrlicht project, yielding a 40% build time improvement.

Viewing header parsing information in WPA

C++ Build Insights provides a WPA view called Files that allows you to see the aggregated parsing time of all headers in your program. After opening your trace in WPA, you can open this view by dragging it from the Graph Explorer pane to the Analysis window, as shown below. The most important columns in this view are the ones named Inclusive Duration and Count, which show the total aggregated parsing time of the corresponding header and the number of times it was included, respectively.

Case study: using vcperf and WPA to create a PCH for the Irrlicht 3D engine

In this case study, we show how to use vcperf and WPA to create a PCH for the Irrlicht open source project, making it build 40% faster. Use these steps if you would like to follow along:

- Clone the Irrlicht repository from GitHub and check out commit 97472da9c22ae4a.
- Open an elevated x64 Native Tools Command Prompt for VS 2019 Preview command prompt and go to the location where you cloned the Irrlicht project.
- Type the following command: devenv /upgrade .\source\Irrlicht\Irrlicht15.0.sln. This will update the solution to use the latest MSVC.
- Download and install the DirectX Software Development Kit. This SDK is required to build the Irrlicht project.
- To avoid an error, you may need to uninstall the Microsoft Visual C++ 2010 x86 Redistributable and Microsoft Visual C++ 2010 x64 Redistributable components from your computer before installing the DirectX SDK. You can do so from the Add and remove programs settings page in Windows 10. They will be reinstalled by the DirectX SDK installer.
- Obtain a trace for a full rebuild of Irrlicht. From the repository's root, run the following commands:
  1. vcperf /start Irrlicht. This command will start the collection of a trace.
  2. msbuild /m /p:Platform=x64 /p:Configuration=Release .\source\Irrlicht\Irrlicht15.0.sln /t:Rebuild /p:BuildInParallel=true. This command will rebuild the Irrlicht project.
  3. vcperf /stop Irrlicht irrlicht.etl. This command will save a trace of the build in irrlicht.etl.
- Open the trace in WPA.

We open the Build Explorer and Files views one on top of the other, as shown below. The Build Explorer view indicates that the build lasted around 57 seconds. This can be seen by looking at the time axis at the bottom of the view (labeled A). The Files view shows that the headers with the highest aggregated parsing time were Windows.h and irrAllocator.h (labeled B). They were parsed 45 and 217 times, respectively.

We can see where these headers were included from by rearranging the columns of the Files view to group by the IncludedBy field. This action is shown below.

Creating a PCH

We first add a new pch.h file at the root of the solution. This header contains the files we want to precompile, and will be included by all C and C++ files in the Irrlicht solution. We only add the irrAllocator.h header when compiling C++ because it's not compatible with C.

PCH files must be compiled before they can be used. Because the Irrlicht solution contains both C and C++ files, we need to create 2 versions of the PCH. We do so by adding the pch-cpp.cpp and pch-c.c files at the root of the solution.
These files contain nothing more than an include directive for the pch.h header we created in the previous step.

We modify the Precompiled Headers properties of the pch-cpp.cpp and pch-c.c files as shown below. This will tell Visual Studio to create our 2 PCH files.

We modify the Precompiled Headers properties for the Irrlicht project as shown below. This will tell Visual Studio to use our C++ PCH when compiling the solution.

We modify the Precompiled Headers properties for all C files in the solution as follows. This tells Visual Studio to use the C version of the PCH when compiling these files.

In order for our PCH to be used, we need to include the pch.h header in all our C and C++ files. For simplicity, we do this by modifying the Advanced C/C++ properties for the Irrlicht project to use the /FI compiler option. This change results in pch.h being included at the beginning of every file in the solution, even if we don't explicitly add an include directive.

A couple of code fixes need to be applied for the project to build correctly following the creation of our PCH:

- Add a preprocessor definition for HAVE_BOOLEAN for the entire Irrlicht project.
- Undefine the far preprocessor definition in 2 files.

For the full list of changes, see our fork on GitHub.

Evaluating the final result

After creating the PCH, we collect a new vcperf trace of a full rebuild of Irrlicht by following the steps in the case study section above. We notice that the build time has gone from 57 seconds to 35 seconds, an improvement of around 40%. We also notice that Windows.h and irrAllocator.h no longer show up in the Files view as top contributors to parsing time.

Getting PCH suggestions using the C++ Build Insights SDK

Most analysis tasks performed manually with vcperf and WPA can also be performed programmatically using the C++ Build Insights SDK. As a companion to this article, we've prepared the TopHeaders SDK sample.
It prints out the header files that have the highest aggregated parsing times, along with their percentage weight in relation to total compiler front-end time. It also prints out the total number of translation units each header is included in.

Let's repeat the Irrlicht case study from the previous section, but this time by using the TopHeaders sample, built from its TopHeaders folder at the root of the samples repository.

- Follow the steps from the case study section above to collect a trace of the Irrlicht solution rebuild. Use the vcperf /stopnoanalyze Irrlicht irrlicht-raw.etl command instead of the /stop command when stopping your trace. This will produce an unprocessed trace file that is suitable to be used by the SDK.
- Pass the irrlicht-raw.etl trace as the first argument to the TopHeaders executable.

As shown below, TopHeaders correctly identifies both Windows.h and irrAllocator.h as top contributors to parsing time. We can see that they were included in 45 and 217 translation units, respectively, as we had already seen in WPA.

Rerunning TopHeaders on our fixed codebase shows that the Windows.h and irrAllocator.h headers are no longer a concern. We see that several other headers have also disappeared from the list. These headers are referenced by irrAllocator.h, and were included in the PCH by proxy of irrAllocator.h.

Understanding the sample code

We first filter all stop activity events and only keep front-end file and front-end pass events. We ask the C++ Build Insights SDK to unwind the event stack for us in the case of front-end file events. This is done by calling MatchEventStackInMemberFunction, which will grab the events from the stack that match the signature of TopHeaders::OnStopFile. When we have a front-end pass event, we simply keep track of total front-end time directly.
    AnalysisControl OnStopActivity(const EventStack& eventStack) override
    {
        switch (eventStack.Back().EventId())
        {
        case EVENT_ID_FRONT_END_FILE:
            MatchEventStackInMemberFunction(eventStack, this,
                &TopHeaders::OnStopFile);
            break;

        case EVENT_ID_FRONT_END_PASS:
            // Keep track of the overall front-end aggregated duration.
            // We use this value when determining how significant is
            // a header's total parsing time when compared to the total
            // front-end time.
            frontEndAggregatedDuration_ += eventStack.Back().Duration();
            break;

        default:
            break;
        }

        return AnalysisControl::CONTINUE;
    }

We use the OnStopFile function to aggregate parsing time for all headers into our std::unordered_map fileInfo_ structure. We also keep track of the total number of translation units that include the file, as well as the path of the header.

    AnalysisControl OnStopFile(FrontEndPass fe, FrontEndFile file)
    {
        // Make the path lowercase for comparing
        std::string path = file.Path();

        std::transform(path.begin(), path.end(), path.begin(),
            [](unsigned char c) { return std::tolower(c); });

        auto result = fileInfo_.try_emplace(std::move(path), FileInfo{});

        auto it = result.first;
        bool wasInserted = result.second;

        FileInfo& fi = it->second;

        fi.PassIds.insert(fe.EventInstanceId());
        fi.TotalParsingTime += file.Duration();

        if (result.second) {
            fi.Path = file.Path();
        }

        return AnalysisControl::CONTINUE;
    }

At the end of the analysis, we print out the information that we have collected for the headers that have the highest aggregated parsing time.
    AnalysisControl OnEndAnalysis() override
    {
        using namespace std::chrono;

        auto topHeaders = GetTopHeaders();

        if (headerCountToDump_ == 1) {
            std::cout << "Top header file:";
        }
        else {
            std::cout << "Top " << headerCountToDump_ << " header files:";
        }

        std::cout << std::endl << std::endl;

        for (auto& info : topHeaders)
        {
            double frontEndPercentage =
                static_cast<double>(info.TotalParsingTime.count()) /
                frontEndAggregatedDuration_.count() * 100.;

            std::cout << "Aggregated Parsing Duration: "
                << duration_cast<milliseconds>(info.TotalParsingTime).count()
                << " ms" << std::endl;
            std::cout << "Front-End Time Percentage: "
                << std::setprecision(2) << frontEndPercentage << "% "
                << std::endl;
            std::cout << "Inclusion Count: " << info.PassIds.size()
                << std::endl;
            std::cout << "Path: " << info.Path << std::endl << std::endl;
        }

        return AnalysisControl::CONTINUE;
    }

Tell us what you think!

We hope the information in this article has helped you understand how to use C++ Build Insights to create new precompiled headers, or to optimize existing ones. Give vcperf a try today by downloading the latest version of Visual Studio 2019, or by cloning the tool directly from the vcperf GitHub repository. Try out the TopHeaders sample from this article by cloning the C++ Build Insights samples repository from GitHub, or refer to the official C++ Build Insights SDK documentation to build your own analysis tools.

Have you been able to improve your build times with the header file information provided by vcperf or the C++ Build Insights SDK? Let us know in the comments below, on Twitter (@VisualC), or via email at visualcpp@microsoft.com.

Great article! I was able to reduce build time 30% in a project which already has precompiled headers. Before this article, I used to choose the most repeated headers. Now I know how to choose the best headers in order to reduce build time. Great tool! Thanks!

You're welcome! Thanks for letting us know about your success with the tool!
Very great tool, I reduced my build duration from 25 minutes to 10 minutes. With a precompiled header I improved the perf by ~40%; we were spending 6 minutes inside msvc/xxatomic.h. With the timeline tool I targeted specific modules to compile using unity build, and gained another 2 minutes. Thanks!

That's awesome! Thanks for letting us know.

Nice tool, good article! But why isn't PCH optimization already an integral part of Visual Studio?

PCH optimization requires build time metrics to be done accurately, and before Build Insights we didn't have easy access to those metrics for a full rebuild. There is a suggestion on Developer Community to integrate C++ Build Insights in the IDE. Feel free to upvote it if that's something you are interested in.

Too large PCH files also make the build slow. What's the best way to start analyzing in legacy projects when a PCH file already exists? Is there a recommendation for this?

I would suggest capturing a trace with the PCH disabled and building again from scratch. However, if you just disable the PCH some headers will be included everywhere even when that's not required. This might skew your results. The most accurate method would be to temporarily revert back to including individual headers only where they are needed, if it's not too much work. If it's too much work, you could keep including the headers everywhere but remove from the PCH the ones that don't show up high in WPA. This way you can at least avoid triggering a full rebuild when modifying these headers. You can also add new ones when it's worth it based on what shows up in WPA. I hope this helps.

This tool is very impressive, great job! Can you explain the difference between inclusive and exclusive duration? I have some files where Count * exclusive < inclusive. What could be a logical explanation for such behaviour?

Hi Paltoquet,

If A includes B and C, the inclusive duration of A is the time it takes to parse A and its inclusions B and C (i.e. the entire inclusion hierarchy rooted at A). The exclusive duration would be the time that was spent parsing only A, excluding the children B and C. As such, exclusive will always be smaller than inclusive. Does this answer your question?

Great article! Thanks for sharing this! I'm super excited to try this out, but when I open the .etl file with WPA, I do not see the "Diagnostics" section. First I installed Visual Studio 2019 Version 16.6.1, and WPA Version 10.0.19041.1 (WinBuild.160101.0800). I tried capturing a trace and analyzing without placing perf_msvcbuildinsights.dll in the Performance Toolkit folder because a dll of that name was already present with the WPA that I installed. Opening the resulting .etl file, I only see "System Activity" and "Computation". Then I went back and placed my VS2019 perf_msvcbuildinsights.dll in C:\Program Files (x86)\Windows Kits\10\Windows Performance Toolkit, and since the perfcore.ini file already had an entry for perf_msvcbuildinsights.dll, I did not modify it further. I re-captured the trace, and when opening the .etl file, I still only see "System Activity" and "Computation". Is there anything else I could try to get the "Diagnostics" view? Or is there anywhere I can look for causes of why the Diagnostics view is not loading? Thanks!

I obtained the same version of WPA and VS as you are using and am able to collect and view a trace. The most common reasons why no C++ Build Insights views are present under Diagnostics are:

1. Tracing an unsupported toolset. You said that you downloaded VS 2019 16.6.1, but are you also building your C++ project with this version? Sometimes people download the latest version of VS to get vcperf but their project is actually built using an older toolset. vcperf will only work for projects that are built with VS 2017 15.9 and above.
2. Not installing the WPA add-in (perf_msvcbuildinsights.dll) correctly. To validate your installation, open WPA and go to Window -> Select Tables in the top menu. Scrolling down to the Diagnostics section, you should see 4 entries that start with C++ Build Insights. These entries should be checked. Not seeing them means the add-in was not installed correctly. Make sure you copied it to the right location. If the entries are there but unchecked, try clicking the Reset to default button at the top and restart WPA.
3. Collecting the trace incorrectly. Use vcperf /start MySession and vcperf /stop MySession traceFile.etl, and open traceFile.etl in WPA. Some people mistakenly use /stopnoanalyze instead of /stop, but this does not produce a trace that can be viewed in WPA.

I hope this helps. Please let me know if you are still unable to see the views after verifying the items mentioned above.

Thanks for the great tips Kevin! Our projects are built with the VS 2015 toolset, and I confirmed that building with the VS 2017 toolset allowed the Diagnostics view to show up! It is also great to know how to verify whether the WPA add-in was installed correctly. Thanks again!

You're welcome! Consider upgrading to the latest toolset for building your projects. Not only does it come with improved linker performance, but it also has the new C++ Build Insights template instantiation events.
https://devblogs.microsoft.com/cppblog/faster-builds-with-pch-suggestions-from-c-build-insights/
XIST 2.10
=========

* The content of the processing instruction ll.xist.ns.code.pyexec will no longer be executed at construction time, but at conversion time. The code in ll.xist.ns.code.pyexec or ll.xist.ns.code.pyeval will no longer be executed in the ll.xist.sandbox module (which has been removed), but in a sandbox dictionary in the converter context of the ll.xist.ns.code namespace.

* The tests have been ported to py.test.

* The method mapped is now callable without arguments. In this case a converter will be created on the fly. You can pass constructor arguments for this converter to mapped as keyword arguments.

* The publishing API has changed again: ll.xist.publishers.Publisher.publish no longer accepts an argument stream to which the byte strings are written, but is a generator now. The publisher methods write and writetext have been renamed to encode and encodetext and return the encoded byte string, instead of writing it directly to the stream. There's a new generator method bytes for nodes now, which can be passed the same arguments as asBytes. These changes should help when using XIST in WSGI applications.

* The iterator returned from Element.__getitem__, Frag.__getitem__ and the walk method now supports __getitem__ itself, so you can write table[html.tr][0] to get the first row from a table, or page.walk(xsc.FindTypeAll(html.td))[-1] to get the last table cell from a complete HTML page.

* Several bugs in the namespaces ll.xist.ns.meta, ll.xist.ns.form and ll.xist.ns.specials have been fixed.

* The namespace modules ll.xist.ns.css and ll.xist.ns.cssspecials have been removed.

For changes in older versions see:

Where can I get it?
===================

XIST can be downloaded from or

Web pages are at

ViewCVS access is available at

For information about the mailing lists go to

Bye,
Walter Dörwald
https://mail.python.org/pipermail/python-announce-list/2005-May/004033.html
Constrained Layout Guide

constrained_layout needs to be activated before any axes are added to a figure. You can do so via the keyword argument to subplots() or figure(), e.g.:

    plt.subplots(constrained_layout=True)

or activate it via rcParams:

    plt.rcParams['figure.constrained_layout.use'] = True

    import matplotlib.pyplot as plt
    import matplotlib.colors as mcolors
    import matplotlib.gridspec as gridspec
    import numpy as np

    plt.rcParams['savefig.facecolor'] = "0.8"
    plt.rcParams['figure.figsize'] = 4.5, 4.
    plt.rcParams['figure.max_open_warning'] = 50


    def example_plot(ax, fontsize=12, hide_labels=False):
        ax.plot([1, 2])
        ax.locator_params(nbins=3)
        if hide_labels:
            ax.set_xticklabels([])
            ax.set_yticklabels([])
        else:
            ax.set_xlabel('x-label', fontsize=fontsize)
            ax.set_ylabel('y-label', fontsize=fontsize)
            ax.set_title('Title', fontsize=fontsize)

    fig, ax = plt.subplots(constrained_layout=False)
    example_plot(ax, fontsize=24)

Without constrained layout, the axis labels can extend outside the figure region. To prevent this, the location of axes needs to be adjusted. For subplots, this can be done manually by adjusting the subplot parameters using Figure.subplots_adjust. However, specifying your figure with the constrained_layout=True keyword argument will do the adjusting automatically.

If you create a colorbar with Figure.colorbar, you need to make room for it. constrained_layout does this automatically. Note that if you specify use_gridspec=True it will be ignored because this option is made for improving the layout via tight_layout.

Note: for the pcolormesh keyword arguments (pc_kwargs) we use a dictionary. Below we will assign one colorbar to a number of axes, each containing a ScalarMappable.

Suptitle

constrained_layout can also make room for suptitle.

Legends

Legends can be placed outside of their parent axis. Constrained-layout is designed to handle this for Axes.legend(). However, constrained-layout does not handle legends being created via Figure.legend() (yet).
fig, ax = plt.subplots(constrained_layout=True)
ax.plot(np.arange(10), label='This is a plot')
ax.legend(loc='center left', bbox_to_anchor=(0.8, 0.5))

Out: <matplotlib.legend.Legend object at 0x7fd1ed022640>

However, this will steal space from a subplot layout:

Out: <matplotlib.legend.Legend object at 0x7fd1df035610>

One workaround is to save the figure with a tight bounding box:

fig.savefig('../../doc/_static/constrained_layout_1b.png',
            bbox_inches='tight', dpi=100)

The saved file looks like:

A better way to get around this awkwardness is to simply use the legend method provided by Figure.legend:

fig.savefig('../../doc/_static/constrained_layout_2b.png',
            bbox_inches='tight', dpi=100)

The saved file looks like:

Padding and Spacing

Padding between axes is controlled in the horizontal by w_pad and wspace, and in the vertical by h_pad and hspace. These can be edited via set_constrained_layout_pads. w/h_pad are the minimum space around the axes in units of inches:

fig, axs = plt.subplots(2, 2, constrained_layout=True)
for ax in axs.flat:
    example_plot(ax, hide_labels=True)
fig.set_constrained_layout_pads(w_pad=4 / 72, h_pad=4 / 72,
                                hspace=0, wspace=0)

Spacing between subplots is further set by wspace and hspace. These are specified as a fraction of the size of the subplot group as a whole. If these values are smaller than w_pad or h_pad, then the fixed pads are used instead. Note in the below how the space at the edges doesn't change from the above, but the space between subplots does.
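The interaction between the fixed pads (inches) and the fractional spacing can be sketched in plain Python. This is a conceptual toy model of the rules stated in the text — the larger of the two specifications wins, and with more than two columns the fractional wspace is shared between the gaps — not Matplotlib's actual layout code:

```python
def effective_gap(w_pad, wspace, group_width, ncols):
    """Toy model of constrained_layout horizontal spacing (not real code).

    w_pad       -- minimum pad around each axes, in inches
    wspace      -- spacing as a fraction of the subplot group's width
    group_width -- width of the subplot group, in inches
    ncols       -- number of columns; the fractional wspace is divided
                   among the ncols - 1 gaps
    """
    # Each gap between columns receives an equal share of wspace.
    fractional = wspace * group_width / max(ncols - 1, 1)
    # Two w_pad margins meet at every gap; the larger specification wins.
    return max(2 * w_pad, fractional)


# wspace=0 is smaller than the fixed pads, so the pads are used instead:
print(effective_gap(w_pad=4 / 72, wspace=0.0, group_width=9.0, ncols=2))
# A larger wspace takes over:
print(effective_gap(w_pad=4 / 72, wspace=0.2, group_width=9.0, ncols=2))
```

With three columns the same wspace=0.2 yields half the gap per column pair, matching the "wspace is divided in 2" behavior described below.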
fig, axs = plt.subplots(2, 2, constrained_layout=True)
for ax in axs.flat:
    example_plot(ax, hide_labels=True)
fig.set_constrained_layout_pads(w_pad=4 / 72, h_pad=4 / 72,
                                hspace=0.2, wspace=0.2)

If there are more than two columns, the wspace is shared between them, so here the wspace is divided in 2, with a wspace of 0.1 between each column:

fig, axs = plt.subplots(2, 3, constrained_layout=True)
for ax in axs.flat:
    example_plot(ax, hide_labels=True)
fig.set_constrained_layout_pads(w_pad=4 / 72, h_pad=4 / 72,
                                hspace=0.2, wspace=0.2)

GridSpecs also have optional hspace and wspace keyword arguments, which will be used instead of the pads set by constrained_layout:

fig, axs = plt.subplots(2, 2, constrained_layout=True,
                        gridspec_kw={'wspace': 0.3, 'hspace': 0.2})
for ax in axs.flat:
    example_plot(ax, hide_labels=True)
# this has no effect because the space set in the gridspec trumps the
# space set in constrained_layout.
fig.set_constrained_layout_pads(w_pad=4 / 72, h_pad=4 / 72,
                                hspace=0.0, wspace=0.0)
plt.show()

Spacing with colorbars

Colorbars are placed a distance pad from their parent, where pad is a fraction of the width of the parent(s). The spacing to the next subplot is then given by w/hspace.

fig, axs = plt.subplots(2, 2, constrained_layout=True)
pads = [0, 0.05, 0.1, 0.2]
for pad, ax in zip(pads, axs.flat):
    pc = ax.pcolormesh(arr, **pc_kwargs)
    fig.colorbar(pc, ax=ax, shrink=0.6, pad=pad)
    ax.set_xticklabels([])
    ax.set_yticklabels([])
    ax.set_title(f'pad: {pad}')
fig.set_constrained_layout_pads(w_pad=2 / 72, h_pad=2 / 72,
                                hspace=0.2, wspace=0.2)

rcParams

There are five rcParams that can be set, either in a script or in the matplotlibrc file; they all carry the prefix figure.constrained_layout.

Use with GridSpec

constrained_layout is meant to be used with subplots() or GridSpec() and add_subplot().

Out: Text(0.5, 41.33399999999999, 'x-label')

Note that in the above the left and right columns don't have the same vertical extent. If we want the top and bottom of the two grids to line up then they need to be in the same gridspec.
We need to make this figure larger as well in order for the axes not to collapse to zero height:

fig = plt.figure(figsize=(4, 6))
gs0 = fig.add_gridspec(6, 2)
ax = fig.add_subplot(gs0[:2, 1])
example_plot(ax, hide_labels=True)
ax = fig.add_subplot(gs0[2:4, 1])
example_plot(ax, hide_labels=True)
ax = fig.add_subplot(gs0[4:, 1])
example_plot(ax, hide_labels=True)
fig.suptitle('Overlapping Gridspecs')

Out: Text(0.5, 0.993055, 'Overlapping Gridspecs')

Manually setting axes positions

There can be good reasons to manually set an axes position. A manual call to set_position will set the axes so constrained_layout has no effect on it anymore.

Manually turning off constrained_layout

Incompatible functions

constrained_layout will work with pyplot.subplot, but only if the number of rows and columns is the same for each call. The reason is that each call to pyplot.subplot will create a new GridSpec instance if the geometry is not the same, and constrained_layout cannot reconcile different GridSpec geometries. So the following works fine:

fig = plt.figure()
ax1 = plt.subplot(2, 2, 1)
ax2 = plt.subplot(2, 2, 3)
# third axes that spans both rows in second column:
ax3 = plt.subplot(2, 2, (2, 4))
example_plot(ax1)
example_plot(ax2)
example_plot(ax3)
plt.suptitle('Homogenous nrows, ncols')

Out: Text(0.5, 0.9895825, 'Homogenous nrows, ncols')

but the following leads to a poor layout:

fig = plt.figure()
ax1 = plt.subplot(2, 2, 1)
ax2 = plt.subplot(2, 2, 3)
ax3 = plt.subplot(1, 2, 2)
example_plot(ax1)
example_plot(ax2)
example_plot(ax3)
plt.suptitle('Mixed nrows, ncols')

Out: Text(0.5, 0.9895825, 'Mixed nrows, ncols')

Similarly, subplot2grid works with the same limitation that nrows and ncols cannot change for the layout to look good.

fig.suptitle('subplot2grid')
plt.show()

Other Caveats

An artist using axes coordinates that extend beyond the axes boundary will result in unusual layouts when added to an axes. This can be avoided by adding the artist directly to the Figure using add_artist(). See ConnectionPatch for an example.

Debugging

If there is a bug, please report with a self-contained example that does not require outside data or dependencies (other than numpy).
Notes on the algorithm

The algorithm for the constraint is relatively straightforward, but has some complexity due to the complex ways we can lay out a figure.

Layout in Matplotlib is carried out with gridspecs via the GridSpec class. A gridspec is a logical division of the figure into rows and columns, with the relative width of the Axes in those rows and columns set by width_ratios and height_ratios.

In constrained_layout, each gridspec gets a layoutgrid associated with it. The layoutgrid has a series of left and right variables for each column, and bottom and top variables for each row; furthermore it has a margin for each of left, right, bottom and top. In each row, the bottom/top margins are widened until all the decorators in that row are accommodated. Similarly for columns and the left/right margins.

Simple case: one Axes

For a single Axes the layout is straightforward. There is one parent layoutgrid for the figure consisting of one column and row, and a child layoutgrid for the gridspec that contains the axes, again consisting of one row and column. Space is made for the "decorations" on each side of the axes. In the code, this is accomplished by the entries in do_constrained_layout() like:

gridspec._layoutgrid[0, 0].edit_margin_min('left', -bbox.x0 + pos.x0 + w_pad)

where bbox is the tight bounding box of the axes, and pos its position. Note how the four margins encompass the axes decorations.

from matplotlib._layoutgrid import plot_children

fig, ax = plt.subplots(constrained_layout=True)
example_plot(ax, fontsize=24)
plot_children(fig)

Simple case: two Axes

When there are multiple axes they have their layouts bound in simple ways. In this example the left axes has much larger decorations than the right, but they share a bottom margin, which is made large enough to accommodate the larger xlabel. The same holds for the shared top margin. The left and right margins are not shared, and hence are allowed to be different.
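The margin-widening step can be sketched abstractly. In this toy model (not the real _layoutgrid code), every axes in a grid row proposes the margin its decorations need, and the row adopts the maximum, mirroring the repeated edit_margin_min calls:

```python
def solve_row_margins(decoration_heights):
    """Toy model of constrained_layout's shared-margin rule.

    decoration_heights -- for each axes in a grid row, the vertical space
    (e.g. in inches) its xlabel/ticklabels need below the axes.  The whole
    row shares one bottom margin, so the largest request wins.
    """
    return max(decoration_heights)


# The left axes has a large xlabel (fontsize=32), the right a small one:
row = [0.8, 0.2]
print(solve_row_margins(row))  # 0.8 -- both axes get the larger margin
```

This is exactly why, in the two-axes example, the small-font axes ends up with the same (oversized) bottom margin as its neighbor.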
fig, ax = plt.subplots(1, 2, constrained_layout=True)
example_plot(ax[0], fontsize=32)
example_plot(ax[1], fontsize=8)
plot_children(fig, printit=False)

Two Axes and colorbar

A colorbar is simply another item that expands the margin of the parent layoutgrid cell.

Colorbar associated with a Gridspec

If a colorbar belongs to more than one cell of the grid, then it makes a larger margin for each:

fig, axs = plt.subplots(2, 2, constrained_layout=True)
for ax in axs.flat:
    im = ax.pcolormesh(arr, **pc_kwargs)
fig.colorbar(im, ax=axs, shrink=0.6)
plot_children(fig, printit=False)

Uneven sized Axes

There are two ways to make axes have an uneven size in a Gridspec layout: either by specifying them to cross Gridspec rows or columns, or by specifying width and height ratios. The first method is used here. Note that the middle top and bottom margins are not affected by the left-hand column. This is a conscious decision of the algorithm, and leads to the case where the two right-hand axes have the same height, but it is not 1/2 the height of the left-hand axes. This is consistent with how gridspec works without constrained layout.

fig = plt.figure(constrained_layout=True)
gs = gridspec.GridSpec(2, 2, figure=fig)
ax = fig.add_subplot(gs[:, 0])
im = ax.pcolormesh(arr, **pc_kwargs)
ax = fig.add_subplot(gs[0, 1])
im = ax.pcolormesh(arr, **pc_kwargs)
ax = fig.add_subplot(gs[1, 1])
im = ax.pcolormesh(arr, **pc_kwargs)
plot_children(fig, printit=False)

One case that requires finessing is when margins do not have any artists constraining their width. In the case below, the right margin for column 0 and the left margin for column 3 have no margin artists to set their width, so we take the maximum width of the margin widths that do have artists.
This makes all the axes have the same size:

fig = plt.figure(constrained_layout=True)
gs = fig.add_gridspec(2, 4)
ax00 = fig.add_subplot(gs[0, 0:2])
ax01 = fig.add_subplot(gs[0, 2:])
ax10 = fig.add_subplot(gs[1, 1:3])
example_plot(ax10, fontsize=14)
plot_children(fig)
plt.show()

Total running time of the script: (0 minutes 17.239 seconds)
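The "take the maximum of the margins that do have artists" rule from the finessing case can also be sketched in plain Python. This is a toy model of the behavior described in the text, not the real layoutgrid code; None stands for a margin with no constraining artists:

```python
def fill_empty_margins(margins):
    """Toy model: margins with no artists constraining them (None) adopt
    the widest margin among those that do have artists."""
    widest = max(m for m in margins if m is not None)
    return [widest if m is None else m for m in margins]


# e.g. the right margin of column 0 and the left margin of column 3
# have no artists, so they take the widest constrained margin:
print(fill_empty_margins([0.3, None, None, 0.5]))  # [0.3, 0.5, 0.5, 0.5]
```

Because the unconstrained margins all receive the same (maximum) width, the axes they separate end up the same size, as in the 2x4 gridspec example above.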
https://matplotlib.org/3.5.0/tutorials/intermediate/constrainedlayout_guide.html
Review: World of Warcraft

- Title: World of Warcraft
- Developer: Blizzard Entertainment
- Publisher: Vivendi Universal Games
- Reviewer: Zonk
- Score: 10/10

That said, in the interests of disclosure I should state that I've been playing the game since the first round of Beta invitations in March of this year. I've seen the good and the bad as the game's final form took shape, and I've ridden all of them out with a high degree of satisfaction. Before I was snagged to be an editor here, I wrote for a site dedicated to Massively Multiplayer games. I've played over a dozen of them, and I follow Massive gaming news with an intense personal interest. As you read my review, keep my level of commitment to the game and the genre in mind.

Character creation is a straightforward process. Once your account is created and you're into the game proper, your first choice is going to be what server to play on. Currently the game has been released to North America, South Korea, and Australia. The rest of the world is officially on hold as the European launch of the game moves forward. If you have associates in the Old World who you plan on playing with, be aware that Blizzard's current plan is to enforce continental segregation.

Apart from what continent you're on, Blizzard has recognized that the "flyover states" are more than just places you see in movies when a plot has to reference a train accident. Servers are available in the four time zones represented on the North American continent. They have also taken the step of classifying servers into different rules-sets. The normal rules-set only allows Player vs. Player (PvP) combat on a voluntary basis. PvP servers also exist which allow any player to attack any other player, a no-holds-barred environment between the two major factions. Finally, there are roleplaying (RP) servers, essentially "normal" servers with extra GM support to provide an atmosphere conducive to roleplaying.
There are only a few RP servers, but there are more than enough Normal and PvP servers to go around. Deciding between the two is literally this simple: Do you plan on participating in Player Vs. Player combat on a regular basis? If the answer is yes, you know where to go. Once you're on a server, you have a number of choices to make. There are currently eight races available to choose from, and each race has between three and five character classes open to them. On one side you have the members of the Alliance. Brought together by the Humans, the Alliance represents the forces of the Human nation of Stormwind, the Dwarven nation of Ironforge, the Night Elf nation of Darnassus, and the remains of the Gnomish civilization. Primarily based on the continent of Azeroth, the forces of "good" face down their enemy among the Horde across a vast sea. The races of the Horde, primarily based on the continent of Kalimdor, represent the tenuous group brought together under the leadership of the Orcs. The Horde represents the Orcs of Orgrimmar, the Tauren of Thunder Bluff, the Undead followers of Sylvanas Windrunner located in the Undercity below Lordaeron, and the jungle Trolls who have allied themselves with the Orcish chieftain Thrall. Character classes are broken down to fit with established racial history (Night Elves can't be mages because their history is littered with magical disasters) and fantasy tropes (Dwarves can't be mages because they can't). The actual character classes presented in the game cover all of the fantasy basics, with each class actually having a useful role to play in a group. There are only nine available, but the lack of extreme diversification means that each class can really live into the role they have to play within the game. The standards are all available: The combat machine is the Warrior, the long distance spellcaster is the Mage, the stealthy high damage character is the Rogue, and the healer is the Priest. 
There are a few multipurpose classes you'll likely recognize from other games. The Paladin (an Alliance-only class) combines combat abilities with healing and backup resurrection duties. The Warlock is a dark caster that has spells but primarily relies on summoned entities to fight and interact with his enemies. Because it's Blizzard, there are also a few classes that may have titles you're familiar with, but have a very different flavour to them. The Druid is the "nature" version of the Paladin, with spellcasting and combat abilities, but their primary role is to become group glue. Druids have the ability to take on various animal forms, enabling them to take on the roles of combat-intensive classes if needed. Their bear form is a nice fill-in for a Warrior, while the jungle cat slashes and claws like a rogue. Shaman (a Horde-only class) are elemental based spellcasters, tapping into the four aspects of the wilderness to produce unique effects within range of their totems. Finally, the Hunter is a crack shot with bows, thrown axes, and guns (yes, guns). Hunters have the ability to train animals from the wilds to be their companions, with everything from bears and wolves to crocodiles and velociraptors being available as pets. Once you've gotten your race, class, and name picked out, you're introduced to your race's struggle within the World of Warcraft through a brief panning shot inside the game engine. The camera pans over most of the starting area you'll be exploring and a voiceover intones a brief backstory of the problems facing your race. When it comes to advanced graphics technology, World of Warcraft is not the top dog. If you want to give your graphics card a workout, the normal settings on World of Warcraft aren't going to fulfill your needs. The upshot of this is that the game scales amazingly well. 256 megs of ram and a GeForce 2 really will run this game well enough to have an excellent gameplay experience. 
The visual presentation of the game actually takes advantage of this. As you can see from the screenshots, World of Warcraft is a stunning place to explore. Instead of aiming for a hyper-realistic approach, Blizzard has actually accentuated the unreality of the gameworld, endowing the Night Elves with long pointed ears, the Gnomes with large, limpid eyes, and the Undead with horrible clawlike manipulators. Characters have an almost anime quality, while beasts and monsters wear new interpretations that accentuate their most vivid characters. Moving through the landscape is more like walking through a painting than playing a game. Particularly picturesque landscapes such as the snowy Dwarven home of Dun Morogh or the sweltering jungle of Stranglethorn Vale require real pauses to stop and drink them in.

The visual quality of the world and the introductory voiceover at your character's creation begin the process of drawing you into the game world, a task which World of Warcraft does more meticulously than any other Massive game I've had the opportunity to play. Each race faces specific challenges, borne out by the quests you receive immediately upon entering the game world. Non-Player Characters (NPCs) with quests for you appear with a yellow exclamation point above their heads, and speaking with them prompts a short vocal interaction and the possibility to add a quest to your log. Each quest is a miniature story unto itself, just waiting for you to carry it through to completion. Quest goals are clearly marked, as are the rewards you will receive from completing the quest. All quests have an experience reward (making questing an integral part of level advancement), but the rewards displayed include the amount of coin you'll receive and any items. Many quests give you the option of choosing your reward from among a few different items, allowing you to customize your character's loot set from NPC quests.
Beyond simply providing you an impetus for getting out into the world, these quests are the hook that allows you to stop being just some person wandering around killing monsters and allows you to actually become a hero. From the start, you're participating in events that are keeping your fellow countrymen safe and secure. Beyond just simple "go here and kill the thingie" quests, there are endless opportunities to become involved in the lives of your people. Here, you take a note to an important official notifying him of how a pest eradication campaign goes, while there you collect the pieces necessary for a powerful potion. Your actions have consequences as well, as the NPCs begin to treat you with greater and greater respect (and remember you when you return to them), allowing you deeper into their lives and into the story of the world around you. In some places, questing even pays off in lucrative gains as vendors offer you discounts because of your service to their cause.

Beyond the ways that you interact directly with the world, Azeroth does its own thing quite well without you. Guardsmen patrol the streets of the major cities, keeping the populace safe (and answering any questions that wayward adventurers might have). Children are at play in houses or gardens, and hilarious conversations play out between the folks wandering through the avenues of the racial strongholds. Far from a static world on which you leave your mark, the World of Warcraft is a place littered with its own history and peopled by individuals with motivations and stories.

This inclusive experience extends beyond just the visuals and the storyline. World of Warcraft has the richest sound environment I've yet experienced in a MMOG. Music, often the most frustrating aspect of a Massive game's soundtrack, is incredibly well produced and judiciously used. There is no "combat music". When you enter combat the only sounds you'll experience are the harsh clash of weaponry and armor.
Musical scores are cued based on location, with each city and wilderness area having their own themes. The music isn't constantly on at a consistent volume. Swelling music announces your arrival at a new area, and then fades back into the background to allow you to enjoy the music without overwhelming you with it. In the spare manner in which it's used, the musical score completes the atmosphere that World of Warcraft attempts to create.

Sound effects are also well tended to. Weapon noises and spell effects are very satisfying, with grunts and clashes making combatants incredibly aware of the danger they're in. Tiny audio clues also keep a player aware of his surroundings. Tiny "clinks" announce personal messages from fellow players, and a small explosion of sound announces your arrival at a higher level. Beyond the normal text and animated emotes common to many games, Blizzard has also included voice emotes. The emotes, which are combinations of animations and voices that get across a particular emotion, are very similar to the clicky-conversations you can have with your units in Warcraft III. I especially like the male Dwarf's flirt emotes.

Beyond the game's excellent presentation, Blizzard's reputation for making intuitive game interfaces is upheld. A simple quick-launch bar is available at the foot of the screen, with numerous other bars available with a combination of the shift and middle mouse buttons. Right clicking is the default "do stuff" button, and the action taken changes in context to what you're clicking on. Items are easy to examine, as each features a small portrait next to its name. This portrait, when moused over, displays a popup detailing the statistics associated with the item. Simple color coding indicates the rarity of the item (green for magic, purple for rares, etc.), and the display lists a level requirement. Every item has a level requirement, which a character has to meet or exceed in order to equip or use the item.
Items which are not usable by your race or class have portraits tinged with red. This intuitive interface extends to quests and tradeskills as well. The quest log displays all the information given out by the originating NPC and color codes quests based on the difficulty of the quest in relation to your character's level.

Tradeskills are often the red-headed stepchildren of a Massive game because of poor documentation and a high barrier to entry. WoW's approach to tradeskilling allows even the most casual player to get involved, and ensures that every crafter knows where they stand as regards possible crafted items. Each character is allowed to train in two tradeskills, which are called professions. Some options, such as Tailoring and Enchanting, are viable thanks to specialized equipment or scavenged goods. Others, such as Herbalism and Blacksmithing, have a counterpart "gathering" Profession that allows materials to be collected from the environment. Mining allows a character to obtain ore, which can be melted down via Blacksmithing for use in Arms and Armor.

Training in a Profession is as simple as finding a trainer and saying "sign me up". You are then presented with a list of recipes that you currently have access to. Each recipe has a materials requirement for completion. To create an item, you have to have the required materials present in your inventory, and then hit the "create" button while a recipe is selected. There is no margin for error here. Every attempt to create an item using a recipe is successful. As you create items your skill in your chosen Profession goes up. Recipes are color coded (like items and quests), and as your skill goes up recipes begin to become relatively "easier". Once you've created your hundredth tunic, you've got it cold. As such, new recipes become available for purchase from the trainer, allowing you access to better and more challenging items.
Skills without gathering requirements are extremely easy to get into, and even Blacksmithing only requires that you keep an eye out once in a while for a mineral deposit. The Mining Profession even provides you with an ability that makes mineral deposits show up on your local mini-map.

This is, of course, a Roleplaying Game, and RPGs are nothing if not fighting-intensive. Combat has been as carefully considered as all other elements of the game. The most striking thing about the combat is the interactivity. Combat is a very fluid experience in World of Warcraft. Every class has abilities and spells that allow it to contribute to a fight, with the typical Massive Gaming roles (such as the Tank and the Healer) being filled by overlapping classes. Grouping casually is not a cause for worry, and almost any combination of classes can form a valid hunting party.

The actual act of combat follows many other games' patterns. You activate an "autoattack" mode, where your character swings his or her weapon or weapons as often as they can every few seconds. The difference is that, unless you utilize the abilities at your disposal, you're likely to lose in a fight between yourself and an enemy of equal level. Constant use of spells and abilities to keep your opponent on their toes is required to ensure that a fight goes your way, and finding the rhythm to your class's combat style is one of the most engaging parts of the game.

And if you die? You don't lose experience. I'm going to say that again, because it's so important. You don't lose experience when you die. There's no debt, there are no recriminations, nothing. You reappear as a ghost in the nearest graveyard to the point where you died, with the world outlined in white and a spooky soundscape playing around you. You just jog back to your body and click the button that says "Resurrect". You reappear with about 75% of your health and mana intact, and go on from there. Many characters can just hop right back into combat.
If you're in a group, a friendly Priest or Paladin can raise you on the spot. If you don't want to jog back to your body or don't have a Priest in your pocket, you can speak to an NPC located in each graveyard and resurrect in the graveyard. You're penalized for taking this option by reducing the durability of your items by 25%. Items with reduced durability eventually stop working and must be repaired, so taking the easy way out costs you money but no experience. You will never be penalized experience for your death.

With a good group at your back and a level head, you can tear through levels at a brisk pace. Character advancement in World of Warcraft is anything but a grind. And if you die, who cares? A minor annoyance, and you're back into the thick of things. Leveling up is anything but a chore with the combination of enjoyable combat and risk-free death. In fact, combine the experience you get from combat with the XP received from questing, and you'll regularly find yourself honestly surprised when you gain a level.

And leveling up is definitely enjoyable. In addition to improving your basic attributes, at even levels you're given access to new abilities or spells. These are trained up by speaking to a class trainer. At the trainer you will be given a list of the abilities available for you to learn, with two or three new abilities opening up every other level. Every ability has a monetary cost associated with it, but once you have a new ability or spell in your hands it's incredibly satisfying to try it out.

Once you reach level ten you'll begin working on your Talents, as well. Talents are how you take your character and really make him your own. As opposed to being just another Mage or Warrior, you're given three "trees" in which to allocate Talent points. The three trees each correspond to a facet of your character class. Each new level starting at ten allows you access to a Talent point.
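The death-penalty trade-off the review describes can be modeled in a few lines. This is a toy sketch using only the figures the review gives (75% health restored, 25% durability lost for a graveyard resurrection); the function and field names are made up for illustration and are not WoW's actual mechanics code:

```python
def graveyard_resurrect(items, max_health):
    """Toy model of the 'easy way out': resurrecting at the graveyard NPC.

    Each item loses 25% of its maximum durability; an item that hits zero
    durability stops working until repaired.  Health comes back at about
    75% of maximum, per the review.  Illustrative only.
    """
    for item in items:
        item["durability"] = max(
            0.0, item["durability"] - 0.25 * item["max_durability"])
        item["working"] = item["durability"] > 0
    return 0.75 * max_health, items


gear = [{"durability": 40.0, "max_durability": 80.0},
        {"durability": 15.0, "max_durability": 80.0}]
health, gear = graveyard_resurrect(gear, max_health=1000)
print(health)              # 750.0
print(gear[1]["working"])  # False -- worn item breaks and needs repair
```

The point of the design, as the review notes, is that repeated deaths cost gold (repairs) rather than experience.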
As opposed to the instant gratification of Abilities, Talents allow you to specialize your character over time. Mages, for example, can choose to specialize in Fire or Frost spells, and their talents allow them to reduce casting time, improve damage, and generally tweak their relationship with a chosen field of abilities. Warriors, in turn, can focus on defensive, offensive, or weapon skills.

Combat, questing, graphics, backstory, and game design are what bring a player to a Massive game. What keeps him there is the community. While the actual community you find yourself in is highly variable (there's a reason the ESRB sticker says "Game Experience May Change During Online Play"), the tools Blizzard has provided for getting into the community around you are very robust. The game has a very versatile "/who" command, allowing you to see the level, name, class, and group status of everyone around you. Finding folks who might be interested in grouping is a snap, and contacting them is as well. There is a flexible chat system that allows players to congregate as they desire based on their interests.

Guilds, always an important aspect of an online game, get a great deal of respect from the Blizzard developers. A charter is required to begin a Guild, ensuring that one-person Guilds don't clutter up the Guild namespace. Once the Guild has been formed, a permanent chat channel is created that connects every member of the group. Guild members that want to show their pride can purchase a tabard, which goes into an equipment slot that isn't used for anything else. The Guild leader decides on the tabard design, and every tabard bears the same color and design. Guild pride is something these designers understood.

Beyond simple communication, mercantile exchange is promoted through Auction Houses. These locations (one per continent) allow players to put items up for sale and reap monetary rewards through the in-game mail system.
Filling an equipment hole that quests haven't taken care of yet is easy and convenient.

World of Warcraft, then, is a remarkable achievement. It has both depth and breadth, allowing old hands at online games to feel right at home while inviting new players into the genre. The game's backstory is easily accessible via the questing system, and the interactive combat system ensures that you're never bored while exploring the vast world you inhabit. Beautifully done graphics combine with a carefully constructed soundscape to transport you to another place. From a game design standpoint, World of Warcraft is an accomplishment to be proud of.

In my mind, though, what pushes this game from a nine to a ten are the little things. The Blizzard polish that resulted in the endlessly clickable strategy game units has expressed itself as a world that always has something new to reveal to the curious player. Books lie on desks, waiting to be opened and their stories read. Crystal balls allow you to peer beyond a Wizard's tower across half a continent. A woman in a shop asks you to deliver a sewing kit to her son. Someone else needs your help convincing a tavern-keep to carry his brew. Blizzard has somehow found the happy medium between an online world and an online game, and the results are satisfying beyond measure.

Every gamer who is tired of shooting zombies or killing rats deserves to try this game. I highly recommend it to every gamer, every MMOG player, and everyone who's ever picked up a fantasy book and gone "I wonder what I would do in their shoes?" World of Warcraft is your chance to find out.

MMORPG's (Score:4, Insightful)

I know I'll probably get laughed at for this, but FPS's have built a very nice way of controlling your players.. and usually it's rather smooth movement. Games I've played, like Lineage 2, FFXI, these games make me use my mouse to move my character around.. and I don't like it.. Aim, swing, I could see that for my mouse..
But moving, I would far rather use fingers. Just my two cents.

Re:MMORPG's (Score:2)

You move your character with WASD and control the camera and "steer" your character's facing with the mouse. I agree, it's far better. Took some getting used to compared to Lineage II, but I like it much more now.

You use a mouse for FFXI? (Score:2)

As an avid FFXI player, I will tell you that since the game was designed with a PS2 pad in mind, the mouse is completely optional. In fact, I believe you can control the game better without one. All my playing on that game is done entirely on keyboard. (numpad for movement, arrows for menu selection, SHIFT+arrows for camera control, F keys for targeting)

Re:MMORPG's (Score:4, Informative)

(I use ASDX to move straight left, forward, right, backward. WASD works fine, too, of course. Hold right mouse button to mouselook, aim up or down, turn, etc, with mouse. Number keys to use special abilities, spells, weapons, etc. Space to jump, etc.) The control scheme is, in short, nice.

Damn you WoW! (Score:5, Funny)

As Daniel Foesch from PearPC put it, "I've hit a nearly impenetrable roadblock in development at this time. It is called, World of Warcraft Open Beta." I suppose even developers are human after all!

How many more games like this? (Score:5, Insightful)

I played Lineage 2 for a while and it ran out of steam for me. Same with Star Wars Galaxies. So what are the delineating factors for a game that I'd be interested in NOW? My personal opinion is that snazzy graphics, while interesting, can only go so far.
If you've played a game of a particular genre for so long (oh, let's say fantasy - Lineage, EQ, WoW), and there comes along a new game which has --- ooo -- better graphics, does this REALLY keep you in the game very long? Sure, buy the $50 game, snag a few months of subscriptions, then... My opinion is that playability outlasts graphics. Graphics are an immediately gratifying factor, but in the long term, I think people are sick of the fantasy and/or sci-fi genre. So what's next? I dunno Re:How many more games like this? (Score:4, Interesting) Lineage 2 is not a good MMO to judge others by, and I'm fairly tempted to say the same of Galaxies, though maybe it's just a matter of taste. My friend and I spent a good few months playing AC2, but stopped when we pretty much hit a level cap and had other things to do anyway. However, we both have several characters on WoW, as it is actually fun to try out all the variations. They are all, well, variant. Basically, out of all the MMO's I've played (AC2, FFXI, Lineage 2, DAoC, Eve Online, Neocron and Anarchy) WoW is the one I would choose to play. It has all the best features of the others, done better. If you have a friend who bought the CE, see if you can get the 10-day free trial from them and check it out. Re:How many more games like this? (Score:2) Yeah. As for the comment about how graphics and gameplay draw players in, it's the community that makes them stay... Today's UserFriendly is a perfect illustration of why, for me, it's the community [userfriendly.org] that drives me away from MMORPGs. I had more fun in three m Re:How many more games like this? (Score:3, Informative) Re:How many more games like this? (Score:3, Interesting) I don't find it fun to run up to someone and hand them the 10 wood they need, while 14 other people do the same thing, and get the same next quest. That's a single-player RPG Re:How many more games like this? (Score:2) And to read stuff like... "u need wud?"
- gwbush, Lvl 50 Politician Re:How many more games like this? (Score:5, Interesting) Re:How many more games like this? (Score:3, Informative) Yes, but you can always find a guild that suits your preferences by going through the forums. I prefer mature guilds w/ players 25 years old or older; there are plenty of such guilds advertised on the forums, just take a few hours and apply to a couple and more than likely you'll find a play communi Re:How many more games like this? (Score:5, Informative) A level cap at level 60. A lot of content for people who are level 60 that requires teamwork and strategy instead of more power. If they can make those two things work together concurrently, it should continue to be fine. I heard numbers at one point that at launch almost 50% of the real content work is dedicated to players who are at their peak level, and more was going to be added all the time. If you play the game, you'll notice LOTS of unimplemented things, such as Portals, etc. I'd imagine 'Portals' will be the path of creating more content while also running out of land area. Re:How many more games like this? (Score:5, Insightful) In the first month, do I feel like the game was worth the original cost of buying it? Given most of the non-MMOGs I get don't last even a month the first go around, it isn't hard to justify WoW. From the descriptions it'd definitely pass muster here. Hell, Deus Ex 2 made it past the bar; you'd have to really try hard to lose. After the first month, do I feel like I'm still getting enough out of the game to warrant the cost of the subscription? If I do, I pay and play. If I don't, I drop it. It really isn't any more complicated than that to me; an MMOG isn't a career. You shouldn't be looking for something that you'll still be playing in the nursing home. Yeah, sometimes you'll get into situations where you'll have friends in the game that you don't want to lose and feel like you can't quit the game because of that.
But you know, that's what email and IM are for. Get a free message board hosted off one of the big sites and keep in touch with them there. Re:How many more games like this? (Score:4, Interesting) Take one of the other replies to your post: So...WoW has a level cap and some variation but, honestly, not THAT much difference to keep people sick of the level grind coming back. Same as happened in AC2... So what - in the LONG run - will keep people playing? Probably nothing. The lasting community part of the MMO genre is not needed, as long as there is fresh blood (subscribers) to keep the revenue stream going. I believe your lament - beyond the "this is just the same as the other games but newer" part - is that the lasting aspect of the game is missing. Something that keeps people playing a game - beyond the level grind - will require a shift in paradigm from the current genre. EQ popularized the level grind and was quite successful, so that is what will be emulated. UO, with the quick "level up" but no "endgame", is the opposite - it became a graphical chatroom. Some game that can remove the tedious nature of the EQ-like games, while providing substantial character variation in a meaningful and beneficial manner, that has fun and rewarding content for a player at any stage of advancement is probably the holy grail of MMO designs. North North America? (Score:2, Funny) Two languages, two releases eh? Cool! (Score:3, Funny) World's fastest Java GUI. iMessage [kicks-ass.net] (Java Webstart Required). Re:Cool! (Score:2) So ultimately the social structure isn't hurt by a game having 80,000,000 players versus 25,000. Great game (Score:2, Informative) I came from playing DAoC religiously for 3 years, and after having not played it for a few months, when I started to play WoW again I started slipping into my long nights of no sleep. It's a great game, but it's still far too time consuming for the casual gamer. Is that a challenge? (Score:3, Funny) "We're working on that..."
- Everyone on the pro- and anti-Steam sides of the HL2/Steam debate. > and every aspect of the game was poked, prodded, and analyzed by the legions of would-be players. Once the Beta began, a line was thrown up between the lucky gamers who had the opportunity to participate and those who didn't. There was much wailing and gnashing of teeth in the [developer's] forums, and expectations ran even higher for those on the outside looking in. Hmm, maybe MMORPGs and FPSes aren't so different after all. *rimshot* Solved my problems! (Score:5, Funny) I dunno -- my wife has been appearing late at night in my office begging me to come to bed, usually dressed in something rather scandalous. So, indeed, World of Warcraft satisfies more than just my gaming addiction! Re:Solved my problems! (Score:3, Funny) "my wife has been appearing late at night in my office begging me to come to bed, usually dressed in something rather scandalous." She drapes herself in loose-leaf copies of the federal tax code? Re:Solved my problems! (Score:2) She drapes herself in loose-leaf copies of the federal tax code? More likely transcripts of speeches from the presidential campaign. The tax code is really more like a fantasy novel. How's the play on an iBook...anyone? (Score:2, Interesting) Re:How's the play on an iBook...anyone? (Score:3, Informative) FPS can't be vid card related... (Score:3, Informative) I'm going to try it on my Powerbook 12" with the Go 5200 chipset and see what it looks like. Blizzard made a fan for life with me on this one. This is the first MMORPG that I know of that has simultaneous Mac/PC users on the same server. EQ and others have the 'short bus' Re:FPS can't be vid card related... (Score:2) Re:How's the play on an iBook...anyone? (Score:2) On my PowerBook 15"/1GHz with a 64MB ATI card and 768MB DDR RAM, I could increase the clipping levels to "medium", but I did notice some frame rate degradation in areas with a lot of players or objects.
I think the low bus speed on the laptops is probably the limiting factor. works great on my PowerBook (Score:2) Re:How's the play on an iBook...anyone? (Score:2) Leveling up (Score:5, Informative) WoW manages to break up this monotony admirably in several ways. First: Quests. Questing is the way to level up, not killing things endlessly. The quests take you all around the world and give you some interesting insight into what's going on in the game world at the time. Second: Loot. I expect to be jumped on by a horde of RP'ers admonishing me for liking my treasure but---well, it's exciting knowing that the creatures you're killing may possibly drop something you can use or sell for big bucks. In Lineage getting a useful drop was an extreme rarity--hell, getting anything besides a handful of gold was an oddity enough. In short, having critters drop items more often, especially craft items and "trophies", makes the game more interesting. Third: WoW runs beautifully on my machine (oldish, GF3 and an Athlon XP) compared to Lineage. Granted LII might have had spiffier, more realistic graphics but towns turned into slideshows...this is apparent in WoW in bigger towns but not as severe. PvP servers... (Score:3, Interesting) No wonder there's so much world hunger. (Score:3, Funny) Is that necessary? It's just another meal, albeit a Blizzard-sponsored one. WoW Fanboy wouldn't give it 10/10 (Score:3, Insightful) But it's not *flawless* - and by rating something 10/10, you're basically saying that there is *no* room for improvement, and that *nothing* could be done better. So far, the release has been a little shaky. Yeah, it has only just now been a week, but there have been significant problems for four of the servers, some lag issues, and some unexpected down times. Nothing really serious - it has been a pretty good launch - but nothing worthy of a *PERFECT* score. It's definitely 9/10 material, 9.5 even, and I would highly recommend it to fans of Warcraft and the MMOG genre.
-lw Re:WoW Fanboy wouldn't give it 10/10 (Score:5, Informative) It basically came down to this: I think half scores are copping out. Gamespot gave the game a 9.5 but didn't have a single complaint in the review, as far as I could tell, that would merit half a point being taken off. I was planning on giving it a 9 until, as I say at the end of the review, I considered the inordinate amount of polish this game has. The polish really brings the game above and beyond basically every other MMOG out there. Don't take a 10/10 as "perfection". There is no perfect game. I gave it a 10 because I simply couldn't think of anything to complain about, and I know it's just going to get better as they add more content. I don't think that 10s should be used regularly, but if any game warrants it, it's this one. Re:WoW Fanboy wouldn't give it 10/10 (Score:3, Informative) No, he's saying that on whatever scale he's using, the game is good enough to merit a "10". There's no rule saying that "10" has to mean "perfection". Re:WoW Fanboy wouldn't give it 10/10 (Score:5, Funny) And let's not forget... (Score:2, Informative) My obligatory gripe... (Score:3, Insightful) There's just no way I'm going to pay $50 for a game that I can't even play unless I keep forking out more money. If they want $13 to $15 per month to play the game, then they should give the game away for free. steve Re:My obligatory gripe... (Score:3, Interesting) Never liked MMORPGS... (Score:2) I dunno, maybe WoW is different, but I'm not really inclined to spend $50 to play it for thirty days and find out I hate it. Re:Never liked MMORPGS... (Score:2, Interesting) This review is 100% right (I have a few sound issues with dialogue being very quiet, but apart from that it gets a 10). I would say if you liked Diablo 2, with the quests and the semi-speedy leve Immersiveness (Score:2, Interesting) I'm a level 14 priest right now, and there has been no grinding so far.
I'm also amazed at the extent of solo'ing you can do, even as a priest. I thought I'd just be a lowly healer, but I can open a pretty good can of whoop-ass myself. An interesting note is my fiance got into the game before I did, and she was instantl Re:Immersiveness (Score:2) lvl 20 undead warlock so far. Thanksgiving weekend I became one with my iMac. 12 - 16 hour days playing that game. When Sunday came around I looked up, blinked, and remembered I had a family to talk to and tried to get back to some semblance of a normal lif Spot on (Score:4, Interesting) The only thing I'd add is that, based on my experience with the original Diablo (admittedly not an MMORPG per se), Ultima Online, and EverQuest, this is by far the most fun game to play. For instance, level advancement doesn't feel like a root canal gone horribly wrong (like it did in Everquest). One of the really clever things Blizzard did with the UI was make the "XP bar" take up about 90% of the width of your screen, so no matter how little XP you receive for an action, you can see it advance. That one little trick alone goes a long way toward easing the frustration in other games, such as UO where you would practice a skill forever to get it to move 0.1 points, or in EQ where you could fight for hours without your XP bar moving by a single pixel (in EQ the XP bar was maybe 5% of the width of your screen). The UI is more intuitive than others I've used, but I still found myself lost on a few occasions and that caused extreme frustration. If you turn off the tutorial pop-ups (which can be annoying), you'll have to hunt around to get the right screen for things like trade skills (professions). I certainly didn't expect to find them in my spellbook! The quest system in this game is OUTSTANDING!!! I cannot believe the sheer volume of quests, and the thought that was put into them. None of the quests feel like afterthoughts and they all seem very natural to the flow of the game.
Just when you start wondering how long until your next level up, you return to town and complete a few quests and BAM, next level! The pace of the game is quite fast in other areas, too. Combat is very fast and furious, perhaps a bit too fast for my taste. I tend to like being deliberate in my actions, and since I don't have the nano-second twitch abilities of a console gamer, it takes me a little time to deliver the right sequence of skill uses (especially on a laptop keyboard). My wife also has trouble keeping up in combat because she's not used to fast-paced computer games. I will point out that this is the first MMORPG that she's ever been remotely interested in. She detested EQ and refused to play it, but she's been drawn right into WoW, even going so far as to pursue her in-game professions with great gusto. So fellow geeks, there is hope yet that your SO might join you in your addiction. "unique spin on the genre"? (Score:5, Insightful) I played it extensively in pre-release, but ultimately decided I am not interested in a rerun of the experiences of the past. Unfortunately, the major MMORPGs all seem to be converging on a set of features, which involve structuring the player's experience to maximize the little mini-rewards such as experience and loot. This takes away from the original appeal of the virtual world, with degrees of freedom allowing the player to seek his own goals and write his own story. Some of the things I've heard about Vanguard [vanguardsoh.com] have raised my hopes that this game on the horizon, designed by the original creators of Everquest, will both push the envelope in gameplay, and return some of the virtual adventure to the genre. Re:"unique spin on the genre"? (Score:3, Interesting) Many people have tried to expand the frontiers of gaming with titles like The Sims Online, 2nd Li Announced when? (Score:2) Uhm, just to nitpick... maybe you're thinking 2001, and not 2002. I played an alpha version of World of Warcraft at the 2002 E3.
(And even back then I knew it was going to be a LOT better than EverQuest...) multi-platform distribution (Score:3, Interesting) It would be interesting to see client statistics to see how the host OS breaks out... whether it falls along market lines or has more or less penetration into a particular host market. Re:multi-platform distribution (Score:2) Announce Date (Score:2) PvP (Score:4, Informative) Yes, I know: go to a different game, like Guild Wars. Not World of Warcraft. Before subscribing, know that WoW is NOT a PvP game, and the PvP is not fun at all. (At least, not yet -- but I don't expect them to make it fun, because their approach in developing WoW is to appeal to people who just want to advance in the game, and who don't tend to like PvP). WoW is good if you're one of the advancement-oriented RPGers. However, if you're like me and are interested in strategic, skill-intensive PvP, pick a different game. I've been playing Guild Wars, and THAT is much more along the lines of what I'm looking for. WoW's PvP servers simply allow random ganking by the opposing faction (alliance or horde) in certain areas. Your faction is determined upon character creation (based on race), and you cannot even TALK to the other faction. This to me is boring and meaningless. Especially since there's no penalty to dying in PvP -- if you are being attacked, simply die; you lose nothing. In Shadowbane I had fond memories of frantically calling for a summon when three people from an enemy guild showed up while I was carrying a valuable rune or something. There's no such rush of adrenaline here. Re:PvP (Score:4, Interesting) I posted this in an earlier story; I'm gonna cut and paste it because it's even more apropos here. There is NO reason to give WoW 10/10 unless you don't care a whit about PvP, which to my mind is the only real draw MMORPGs have over IRL. Folks are talking a lot about WoW's upcoming release, and rightly so.
It really is going to be the big beasty game it's been promised to be. But to counter the WoW love I'm seeing, I'd like to offer some thoughts on the game's quality for a particular segment of the market: the PvPer. Caveats: I'm an ex-Clan Lord, ex-Shadowbane player. I enjoy questing and other PvE-type activities for the social aspects, but to put it simply I find playing against the computer damned unsatisfying, fairly quickly. Of course I know serious PvP isn't for everyone, but for those of us who aren't willing to bother unless there's a human on the other side to challenge us with player (NOT character) skill, strategy and quick thinking, there's no substitute. Executive Summary of my thoughts on WoW PvP: Not Ready for Prime Time, but Lots of Potential. The game is so clearly built around a PvE/questing model that its deficiencies in PvP really stand out. However, the engine is robust and looks great, even on sub-standard hardware like mine. I think the problems for PvP posed by a largely PvE game are overcomable, but it's going to take some very significant ruleset/mechanics changes before it's worth it. Specific problems: - PvP is Meaningless. It's basically a multi-person duel with no stakes involved. You don't lose anything but a couple of minutes - or less - spent respawning; no loot, little damage to your gear, no money, no experience, NOTHING. There's no recognition for a PvP kill, no death list, no guild/race/area messages about who's kicking whose ass. You can't loot a corpse; you can only sit and stare at it until the player decides to respawn, when you can kill them again with no consequences for either of you. Yay. - Any sort of operational tactics are pointless, because you can't hold territory or ground unless you round up all your enemies, they kindly allow you to kill them all in the same place, and you corpse-camp them. Otherwise they can just rez for free, rapidly, regroup and attack.
Everything's a running battle with no center, no topographical advantage, no flanks, no nothing, just a mess. - All classes are the same speed unmounted. This is ridiculous, and it makes finishing kills a joke if your main damage route is melee. You need potions/gear to move faster, which brings me to my next point. - The game is dependent on items. This is so Everquest-y it's not even funny. Whether you're skilled or not matters far less than the items you have equipped and, therefore, the time you spent farming to get them (or for the money to buy them at auction). This SUCKS. Some augmentation by items is fine, but this game is ALL about items. And level. Which also sucks. Skill and strategic anticipation are a very distant consideration. In the brief time I was playing I had several Horde players complain (this is through their Alliance alts and buddies) that I was exploiting because they for some reason couldn't cast when they wanted to. I was just using the rogue attack Kick; when they looked it up (if they bothered) they complained that it's too powerful to use against players. Come oooonnnnn. In that same vein people were screaming bloody murder and shouting to GMs anytime Horde players mounted a raid. This was on a "PVP SERVER." Brief aside on player skill v. character skill, 'cause that's a differentiation that I know a lot of PvE gamers don't make. In general I'm talking about knowledge of your opponent's capabilities, knowledge of your own, and the facility to advantageously match yours against his. Facing your enemy with strength in the areas of his weakness, to paraphrase Sun Tzu. - The group mechanics are rudimentary. Max players in a group is not high enough for PvP and there's no way to mo If you can't use a credit card/PayPal... (Score:2) Amazon [amazon.com] = $26.99 EB [ebgames.com] = $29.99 Walmart [walmart.com] = $29.82 Amazon [amazon.com] has a photograph sample. I am not sure if they are out yet in retail stores (e.g., Walmart and EB).
Does anyone know? Also, are there any other stores selling them? This game is addicting. So what is North America exactly? (Score:4, Funny) OK, I know that "Americans" doesn't include Canadians. However Canada *is* part of North America. Linux (Score:2) I'm holding out... (Score:2) WoW great, except clicking the mouse all the time (Score:2) In SWG you could use the ALT key to toggle your mouse between cursor mode and character control mode. So long, twisted runs meant only turning on auto-run, then moving the mouse to make turns. In WoW it's turn on auto-run then get a cramp in your finger as you hold down the right mou WoW is good but (Score:4, Interesting) My biggest complaint, not just with WoW but most new MMORPGS, is that it requires minimal skill to play. You either fight at your level or you get slaughtered. You can't run into a mob 5 levels above you (solo) and survive easily (by running away).. The reason why this doesn't bother me as much in WoW is that you get most of your experience from quests.. There is so much content for a game just released that you don't notice some of these issues. I also find that it's difficult to powerlevel in WoW because of the way mobs spawn and there doesn't seem to be a bonus with grouping. I.e., I can probably gain more XP running around the wild solo than in a group. On that note, you can solo a lot of this game besides the scattered quest. And it appears to be relatively easy to find a group working on the same one. Back to skill tho, the thing I liked about AC that I haven't seen in MMORPGS since was that you could put yourself into seemingly impossible situations and survive. Sometimes you'd gamble and die repeatedly but there would be times that you would survive by the skin of your teeth. I can't count how many times I've recalled out with 1hp left keeping the mobs busy so my friends could escape. Unfortunately I'm a geek and can't properly describe the feelings you experience but I haven't felt the same in other mmorpgs.
They are too dumbed down. In AC you knew the levels and some info on the mob (depending on your skills) but that didn't mean much. You could go against a mob 20 levels above you and easily wipe the floor with it, while another mob around your level would kick your butt. None of this color-coded mob crap. You had to know the mob's weaknesses to attacks.. Some were nearly invulnerable to certain types. You had to prep for your adventures.. If you knew you were going to run into mob x, y and z you'd prep differently than for mob a, b and c. While in other mmorpgs it doesn't seem to matter. Maybe AC was too arcade like... You could play it safe but if you wanted to push the edge the fun factor was amazing. Maybe this style of game didn't do well because it required a slight twitch factor. Maybe it was bad timing or MS's lack of marketing (did they have any?) But back to WoW, it is the first MMORPG I've played in 3 years that I have really enjoyed. It doesn't feel like a level grind like most of them. Even tho the quests are the same type, collection, delivery, they do toss in the scattered twist which keeps you on your toes. I personally give it a 9/10 and foresee myself playing this for a while. Please sign the petition to run on Linux (Score:3, Informative) WTF is the point? (Score:4, Insightful) WTF is the point in that? If there is no penalty for death, you can play pretty damn recklessly knowing you can just hack your way thru, eventually. I would much prefer not having that ghost-jog-resurrect bit. If you can't make friends (or financial arrangements) with a decent level priest, dead you stay and over you start. I've played too many MMORPGs where you can wipe out monsters 10x your level just by getting in a whack or two each time; getting raised; coming back; repeat until monster is dead, since (other than Trolls) they never heal. The LEAST they could do is limit that. Like having 9 lives, or something. Auto resurrect/respawn is for pussies.
-Charles Why WoW is better than anything else I've played (Score:4, Interesting) WoW has a major advantage over all of the above games because of one thing. Blizzard understands math and statistics. Blizzard has mastered the RTS genre, which requires a very high competency in math. You cannot balance 3+ diverse teams in an RTS game without a solid understanding of statistics, non-transitive statistics [jimloy.com], and strong skills in mathematics. In WoW, every aspect of character abilities and spells has the Blizzard touch. They just feel right. They don't feel overpowered or underpowered. The enemies that you face are perfectly balanced for their level. Unlike EQ or SWG where you could easily run into a lower-level enemy that would completely wipe the floor with you. Other game makers seem to play darts with their games. Try something random, see if it works and tweak it if needed. Blizzard obviously has a very strong mathematical foundation for their game. There's nothing complex. The formulas just work and the game just feels right. This alone is why I think WoW has a very bright future. Canada? (Score:3, Funny) Currently the game has been released to North America, Canada, South Korea, and Australia Boy, I'm glad Canada finally got out of that hell-hole North America. We've been fighting our way out for years! Heroism Problem and a Question (Score:4, Interesting) #2 complaint is heroism...and that's the big one. In Morrowind, you are the hero. In an MMORPG, you are Examine: TableTopRPG (TTRPG) - the heroes are made by doing amazing things in the midst of greater problems. This is hardest to do in MMORPGS. A GM in a TTRPG has to worry about a group of, say, 5-8 players. A GM in an MMORPG has to worry about a group of maybe 2500 players, probably more. Daunting. GM-run events will either have to be many (uniqueness problem) or huge (management issues). Kudos to whoever figures out how.
Now, in real life, a hero is someone who is admired by many for a great deed -- and usually the great deed benefits many. This is missing in single-player RPGs (sure you're saving the world, but why should you care about them? they aren't real!) and in MMORPGS (how can the owners let so much ride on a player, when people are paying?) But if they could figure out a way to have random people at random times save groups of other people, they would be heroes. Asheron's Call with the Ponzi-scheme system tried to make the people at the top into world-altering heroes. It didn't work, since it was also a hybridized guild system, among other things. The only way I can imagine people feeling like heroes, and being recognized as such, is a "quest" that is some sort of battle between the (WoW example) Horde and Alliance, where the winner gains some sort of advantage over the loser (permanent, perhaps control of a zone or city? not sure exactly). There would be (GM-run) "turning points" in the battle, where players would have the opportunity to influence the outcome one way or the other, through success or failure. Perhaps rescuing a captured NPC instantly zonewarps all high-level PCs (of the opposing force) from the zone. I dunno. But that's all I can come up with. My question...how do you pronounce "MMORPG"???! About death in the game... (Score:3, Interesting) But such an idea works. Darktide for Asheron's Call was a server set up much the same way. So was the original UO. The entire roleplaying aspect is about watching your back, making friends that will protect you to the death, and working for yourself. I hope Blizzard seriously considers a server of this nature in the future. The simple response to people that say the server will draw players who sit back and feast on newbies is that, "You're right, they will, and if that scares you, then this server is NOT for you."
There are at least a thousand players out of 200+ thousand that WOULD support such a server though, and they would like to see it. The discussions on the PvP forums for such a server seemed to justify at least one if not one per time zone. Also, about death specifically. Death in the game is not as light as the poster seems to indicate. You do not lose experience, but if you cannot get back to your corpse, your armor and weapons are degraded. They are also degraded considerably upon death, so don't die continuously, or you'll regret it. It'd just be great if on death on one of these new PvP servers people could loot a corpse and get 5% of the coin and one of the most expensive "equipped" pieces. It'd give more of an incentive to PvP, and coupled with a "kill anywhere" rule, would be a GREAT server for those of us that want it. As such, I do not intend to purchase the game right now. I realized after Asheron's Call Darktide that there is no RPG with the level of roleplay of a player-driven storyline, and you can only truly accomplish that by giving the power of life or death to the players themselves. Slamming (Score:3, Insightful) I tried City of Heroes because it was unique. It got old fast just like -insert any MMORPG here-. I tried WoW for one reason and I am surprised it is not mentioned more (if at all) in the review: the game rewards you for being a casual player. When you log out, you build up XP bonuses which can add up to 2x XP per kill. If you rest/logout in an Inn, that bonus per kill can rise to 4x per kill. Mostly, when you log back in, you notice your XP bar is blue with a marker set past your current amount of XP. The more you've rested, the more to the right that marker will be. When you gain experience, it will add bonuses (the most I've gotten was double) to those kills and get your XP closer to the mark. Once you hit the mark, "you feel normal" i.e. not rested.
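The rested-bonus mechanic described above (a pool of bonus XP built up while logged out, then consumed at a multiplier as you kill things) can be sketched in a few lines. This is a minimal illustration only: the 2x multiplier, the pool size, and the function name are assumptions made up for demonstration, not Blizzard's actual tuning values or API.

```python
# Sketch of a "rested XP" bonus like the one described in the comment above.
# Assumption: the rested pool is measured in base-XP units, and each unit
# consumed yields `rest_multiplier` XP instead of 1. All numbers here are
# illustrative, not taken from the game.

def apply_kill_xp(current_xp, rested_pool, kill_xp, rest_multiplier=2.0):
    """Award XP for one kill, boosting the portion covered by the rested pool.

    Returns (new_total_xp, remaining_rested_pool).
    """
    rested_part = min(kill_xp, rested_pool)   # portion of the kill that gets the bonus
    normal_part = kill_xp - rested_part       # remainder earns plain XP
    gained = rested_part * rest_multiplier + normal_part
    return current_xp + gained, rested_pool - rested_part
```

With a 100-point pool and 60-XP kills, the first kill grants 120 XP, the second grants 100 (the pool runs dry partway through), and the third a plain 60 — the point where "you feel normal," as the poster puts it.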
Blizzard's thought on this is that if you take a lot of rest, you are going to go back out hunting with more energy and zeal... thus more XP. WoW and EQ2 are incredibly similar in a lot of ways. WoW's clear superiority over any other MMORPG, including EQ2, lies in rewarding casual/rping players. We pay the same amount so why should we get penalized for not logging on as many times as the 13yoa punk next door on his daddy's DSL with the fresh GED. The practice of "Extra XP" is not a lot but it helps. Re:Some Review (Score:4, Interesting) Re:Some Review (Score:2) If Blizz wants to charge for a "Premium World", so be it... But they should also allow for free, user hosted shards. THAT would pique my interest. Re:Some Review (Score:3, Insightful) You could rent a few movies (or insert other form of entertainment) per month for a limited/set # of hours of entertainment, or.... you could pay $15 or less a month and spend as much time playing it/being entertained as you wish to. When I get bored of it I'll unsubscribe, but for now it's worth the money to me Re:Some Review (Score:3, Insightful) You do, it's called property taxes. Re:Some Review (Score:3, Interesting) If you have to pay a monthly subscription, then the game should be free, unless they have a mechanism to allow the creation of user hosted worlds. You shouldn't have to lay down $50, and then not be able to use the game. You make a good point, however there are some things to consider: First, distribution. The game is around 3 gigs in size, and they're using BitTorrent for the patches. Yeah, it's big. With a publisher and full rollout they can make that available in stores. $50 means a publisher will mak Re:Some Review (Score:2) On the other hand, I have little problem paying one-time purchase fees to waste my time. Bizarre. Re:Some Review (Score:3, Interesting) Sheeet... That game was $3 per HOUR to play. I've had several $300 bills from that game, ouch!! 
Eventually it migrated to a flat fee structure on another service. Some of the most complex team play I've ever done was in that game. Plus - that game really DID penalize you for dying. You had to be careful. You lost stats that could only be regained by c

Re:Some Review (Score:2)
How much do you spend each year going to the theatre? How much do you spend each year going to sporting events? In the grand scheme of things, $150 per year isn't that much, especially considering that these games cost less, per month, than a single music CD or two movie tickets.

Re:Farking AT&T (Score:4, Informative)
I skimmed the first page of that thread and it's terrible that they're doing that, so I suggest you do as someone mentioned in the thread of the release of WoW:
1) Get the
2) Use a real client (Az, ABC, whatever) to connect via a different port. If I recall properly, their server is run on a port in the 3000's, so you should be able to bypass the "BitTorrent doesn't work" problem, yet it'll still probably be hella slow.
As another suggestion, why not complain to Blizzard that you can't download their patches because of this problem? I believe they could drop it on a hidden location somewhere for "special customers", or you could get a friend to host it/send it to you.

Parent is correct (Instructions to do this inside) (Score:5, Informative)
2.) Download the patch executable from Blizzard.
3.) Launch PE Explorer and open the patch file.
4.) Choose View > Resources from the toolbar.
5.) Expand the "TORRENT" resource section.
6.) Look for binary resources in the TORRENT section. Right-click them and choose "Save As". Save them to disk as
7.) Fire up your favorite BT client using non-blocked ports and open the
8.) Play and have fun.

Re:Farking AT&T (Score:2)

Re:from the WTF? dept. (Score:5, Informative)
One of the crafting skills is "Engineer" that allows you to create gadgets. One of these gadgets is a mechanical squirrel.
It's a "pet" - you can "summon" it and it will follow you around. It serves no other purpose than that. It just follows you, it's always level 1, it can't attack or anything. Only for show.

Re:Grind (Score:2)

Re:Grind (Score:5, Insightful)
Blizzard has discussed this in the past and said they have many plans for world events based around seasonal things and other game world related subjects.
"No property ownership and kingdom duels like warcraft."
Blizzard has also said that houses will be implemented.
"No end game."
Far from it! What about instanced raid dungeons, or raiding in general? What about PvP?
"No adventures or campaigns."
Seeing as you only got to level 20, I really think that you missed out on a lot of the content, which would lend itself to a comment like this. For just one example... when you're playing as Horde, at around level 18-20, the stories of many of the quests in the Barrens and the surrounding areas start to grow closer and closer together, finally threading together into one big ball of trouble - the Wailing Caverns. No less than 6 independent quests lead you into this instanced dungeon, and playing through it is an extremely satisfying experience. This dungeon alone is a four-hour adventure.

Re:Grind (Score:2)
(I've just given up FFXI because the 'gill grind' got stupid. Even my wife, who plays it 18 hours a day solid, has maxed out and is thinking of quitting because of this (albeit at a much higher level).)

Re:Grind (Score:5, Informative)

Re:Grind (Score:5, Informative)
I never thought quests would make that big of a difference. The last MMO I played was EQ, and it's been years since I've played it. But quests have made a huge difference. Don't underestimate what those smaller goals can do for the game. That grind feeling is all but gone for me. Regards, -JD-

Re:If only it were available on consoles (Score:2)

Re:hmm (Score:2)

Re:Great review of an excellent game... (Score:2)

Re:Perfect score agenda?
(Score:2)

Re:WoW doesn't deserve the score given (Score:3, Insightful)
Why I am replying to this, I don't know.

Re:Quick Newbie Question (Score:3, Informative)
Yes, you can buy the skill to use a gun. But you won't get the auto-fire skill, as only hunters get that.
http://slashdot.org/story/04/12/01/1753224/review-world-of-warcraft
UVA-679 - Dropping Balls

A number of K balls are dropped one by one from the root of a fully binary tree structure FBT. Each time, the ball being dropped first visits a non-terminal node. It then keeps moving down, either following the path of the left subtree or the path of the right subtree, until it stops at one of the leaf nodes of FBT. To determine a ball's moving direction, a flag is set up in every non-terminal node with two values, either false or true. Initially, all of the flags are false. When visiting a non-terminal node, if the flag's current value at this node is false, then the ball will first switch this flag's value, i.e., from false to true, and then follow the left subtree of this node to keep moving down. Otherwise, it will also switch this flag's value, i.e., from true to false, but will follow the right subtree of this node to keep moving down. Since all of the flags are initially set to be false, the first ball being dropped will switch the flags' values at node 1, node 2, and node 4 before it finally stops at position 8. The second ball being dropped will switch the flags' values at node 1, node 3, and node 6, and stop at position 12. Obviously, the third ball being dropped will switch the flags' values at node 1, node 2, and node 5 before it stops at position 10.

Now consider a number of test cases where two values will be given for each test. The first value is D, the maximum depth of FBT, and the second one is I, the I-th ball being dropped. You may assume the value of I will not exceed the total number of leaf nodes for the given FBT. Please write a program to determine the stop position P for each test case. For each test case, the range of the two parameters D and I is as below:

Input: contains l + 2 lines. 2 ≤ D ≤ 20, and 1 ≤ I ≤ 524288.

my code:

// UVA 679 Dropping Balls: simulate each ball falling through the flag tree
#include <iostream>
#include <cstring>
using namespace std;

int bstack[1 << 20];                      // node flags (kept off the call stack)

int main() {
    int n, b, c;
    cin >> n;
    for (int i = 1; i <= n; i++) {
        cin >> b >> c;                    // b = depth D, c = ball number I
        int deep = (1 << b) - 1;          // number of nodes in the full tree
        memset(bstack, 0, sizeof(int) * (deep + 1));
        int k = 1;
        for (int j = 0; j < c; j++) {     // drop c balls one by one
            k = 1;
            for (;;) {
                bstack[k] = !bstack[k];   // flip this node's flag
                k = bstack[k] ? k * 2 : k * 2 + 1;
                if (k > deep) break;      // fell past the last level
            }
        }
        cout << k / 2 << endl;            // leaf where the last ball stopped
    }
    return 0;
}
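The per-ball simulation above does O(I·D) work for each test case. A standard shortcut for this problem avoids the flag array entirely: at every internal node the balls alternate left/right, so the I-th ball turns left exactly when I is odd, and within the chosen subtree it behaves like ball (I+1)/2 (left) or I/2 (right). This gives an O(D) answer per query. A sketch:

```cpp
// Stop position of the I-th ball dropped into a full binary tree of depth D,
// computed in O(D) from the parity of I at each level (no flag array needed).
long long dropStop(int D, long long I) {
    long long k = 1;                 // start at the root
    for (int level = 1; level < D; level++) {
        if (I % 2 == 1) {            // odd balls turn left at this node
            k = k * 2;
            I = (I + 1) / 2;         // rank among balls entering the left child
        } else {                     // even balls turn right at this node
            k = k * 2 + 1;
            I = I / 2;               // rank among balls entering the right child
        }
    }
    return k;                        // the leaf node where the ball stops
}
// For the examples in the statement (D = 4):
// dropStop(4, 1) == 8, dropStop(4, 2) == 12, dropStop(4, 3) == 10.
```

This reproduces the three example balls from the problem statement without simulating any earlier balls, which matters when I is large.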
http://blog.csdn.net/danlang_csdn/article/details/51553307
Nonparametric statistics is a branch of statistics concerned with analyzing data without assuming that the data are generated from any particular probability distribution. Since they make fewer assumptions about the process that generated the data, nonparametric tests are often more generally applicable than their parametric counterparts. Nonparametric methods are also generally more robust than their parametric counterparts. These advantages are balanced by the fact that when the data are generated by a known probability distribution, the appropriate parametric tests are more powerful. In this post, we’ll explore a Bayesian approach to nonparametric regression, which allows us to model complex functions with relatively weak assumptions. We will study the model \(y = f(x) + \varepsilon\), where \(f\) is the true regression function we wish to model, and \(\varepsilon\) is Gaussian noise. Given observations \((x_1, y_1), \ldots, (x_n, y_n)\) from this model, we wish to predict the value of \(f\) at new \(x\)-values. Let \(X = (x_1, \ldots, x_n)^\top\) be the column vector of the observed \(x\)-values, and similarly let \(Y = (y_1, \ldots, y_n)^\top\). Our goal is to use these observations to predict the value of \(f\) at any given point with as much accuracy as possible. The Bayesian approach to this problem is to calculate the posterior predictive distribution, \(p(f_* | X, Y, x_*)\) of the value of \(f\) at a point \(x_*\). With this distribution in hand, we, if necessary, employ decision-theoretic machinery to produce point estimates for the value of \(f\) at \(x_*\). As the nonparametric approach makes as few assumptions about the regression function as possible, a natural first step would seem to be to place a prior distribution \(\pi(f)\) on all possible regression functions. 
In this case, the posterior predictive distribution is \[\begin{align*} p(f_* | X, Y, x_*) & = \int_f p(f_* | f, x_*)\ p(f | X, Y)\ df \\ & \propto \int_f p(f_* | f, x_*)\ p(Y | f, X)\ \pi(f) df. \end{align*}\] Observant readers have certainly noticed that above we integrate with respect to an unspecified measure \(df\). If we make the often-reasonable assumption that \(f\) is continuous on a compact interval \([0, T]\) (for \(T < \infty\)), we may choose \(df\) to be the Wiener measure, which is closely related to Brownian motion. Unfortunately, this approach has a few drawbacks. First, it may be difficult to choose a decent prior on the continuous functions on the interval \([0, T]\) (even the uniform prior will be improper). Even if we manage to choose a good prior, the integral with respect to the Wiener measure may not be easy (or even possible) to calculate or approximate. The key realization about this situation that allows us to sidestep the difficulty in calculating the posterior predictive distribution in the manner detailed above is to consider the joint distribution of the observed and predicted values of \(f\). Suppose that given our observations, we want to predict the value of \(f\) at the points \(x_{*, 1}, \ldots, x_{*, m}\). Again, let \(X_* = (x_{*, 1}, \ldots, x_{*, m})^\top\) and \(Y_* = (f(x_{*, 1}), \ldots, f(x_{*, m}))^\top\). With this notation, we want to model the joint distribution of the random variables \(Y\) and \(Y_*\). After choosing a joint distribution, we can condition on the observed value of \(Y\) (and \(X\) and \(X_*\)) to obtain a posterior predictive distribution for \(Y_*\). In this approach, specifying the joint distribution of \(Y\) and \(Y_*\) has taken the place of specifying a prior distribution on regression functions. We obtain a flexible model for the joint distribution of \(Y\) and \(Y_*\) by considering the set \(\{f(x) | x \in \mathbb{R}\}\) to be a Gaussian Process.
That is, for any finite number of points \(x_1, \ldots, x_n \in \mathbb{R}\), the random variables \(f(x_1), \ldots, f(x_n)\) have a multivariate normal distribution. While at first it may seem that modelling \(f\) with a Gaussian process is a parametric approach to the problem, the mean vector and covariance matrix of the multivariate normal (generally) depend explicitly on the points \(x_1, \ldots, x_n\), making this approach quite flexible and nonparametric. The Gaussian process for \(f\) is specified by a mean function \(m(x)\) and a covariance function \(k(x, x')\). In this context, we write that \(f \sim GP(m, k)\). The advantage of this approach is that the Gaussian process model is quite flexible, due to the ability to specify different mean and covariance functions, while still allowing for exact calculation of the predictions. For the purposes of this post, we will assume that the mean of our Gaussian process is zero; we will see later that this is not a terribly restrictive assumption. Much more important than the choice of a mean function is the choice of a covariance function, \(k\). In general, \(k\) must be positive-definite in the sense that for any \(X = (x_1, \ldots, x_n)\), \(k(X, X) = (k(x_i, x_j))_{i, j}\) is a positive-definite matrix. This restriction is necessary in order for the covariance matrices of the multivariate normal distributions obtained from the Gaussian process to be sensible. There are far too many choices of covariance function to list in detail here. In this post, we will use the common squared-exponential covariance function, \[k(x, x') = \exp\left(-\frac{(x - x')^2}{2 \ell^2}\right).\] The parameter \(\ell\) controls how quickly the function \(f\) may fluctuate. Small values of \(\ell\) allow \(f\) to fluctuate rapidly, while large values of \(\ell\) allow \(f\) to fluctuate slowly. The following diagram shows samples from Gaussian processes with different values of \(\ell\).
import numpy as np
import scipy as sp
import scipy.stats

def k(x1, x2, l=1.0):
    return np.exp(-np.subtract.outer(x1, x2)**2 / (2 * l**2))

def sample_gp(xs, m=None, k=k, l=1.0, size=1):
    if m is None:
        mean = np.zeros_like(xs)
    else:
        mean = m(xs)
    cov = k(xs, xs, l)
    samples = np.random.multivariate_normal(mean, cov, size=size)
    if size == 1:
        return samples[0]
    else:
        return samples

Now that we have introduced the basics of Gaussian processes, it is time to use them for regression. Using the notation from above, the joint distribution of \(Y\), the values of \(f\) at the observed points, and \(Y_*\), the values of \(f\) at the points to be predicted, is normal with mean zero and covariance matrix \[\Sigma = \begin{pmatrix} k(X, X) + \sigma^2 I & k(X, X_*) \\ k(X_*, X) & k(X_*, X_*) \end{pmatrix}.\] In \(\Sigma\), only the top left block contains a noise term. The top right and bottom left blocks have no noise term because the entries of \(Y\) and \(Y_*\) are uncorrelated (since the noise is i.i.d.). The bottom right block has no noise term because we want to predict the actual value of \(f\) at each point \(x_{*, i}\), not its value plus noise. Conditioning on the observations \(Y = y\), we get that \(Y_* | Y = y\) is normal with mean \(\mu_y = k(X_*, X) (k(X, X) + \sigma^2 I)^{-1} y\) and covariance \(\Sigma_y = k(X_*, X_*) - k(X_*, X) (k(X, X) + \sigma^2 I)^{-1} k(X, X_*)\).

As a toy example, we will use Gaussian process regression to model the function \(f(x) = 5 \sin x + \sin 5 x\) on the interval \([0, \pi]\), with i.i.d. Gaussian noise with standard deviation \(\sigma = 0.2\), using twenty samples.

def f(x):
    return 5 * np.sin(x) + np.sin(5 * x)

n = 20
sigma = 0.2
sample_xs = sp.stats.uniform.rvs(scale=np.pi, size=n)
sample_ys = f(sample_xs) + sp.stats.norm.rvs(scale=sigma, size=n)
xs = np.linspace(0, np.pi, 100)

The following class implements this approach to Gaussian process regression as a subclass of scikit-learn's sklearn.base.BaseEstimator.
from sklearn.base import BaseEstimator

class GPRegression(BaseEstimator):
    def __init__(self, l=1.0, sigma=0.0):
        self.l = l
        self.sigma = sigma

    def confidence_band(self, X, alpha=0.05):
        """Calculates pointwise confidence bands with coverage 1 - alpha"""
        mean = self.mean(X)
        cov = self.cov(X)
        std = np.sqrt(np.diag(cov))
        upper = mean + std * sp.stats.norm.isf(alpha / 2)
        lower = mean - std * sp.stats.norm.isf(alpha / 2)
        return lower, upper

    def cov(self, X):
        return k(X, X, self.l) - np.dot(
            k(X, self.X, self.l),
            np.dot(np.linalg.inv(k(self.X, self.X, self.l)
                                 + self.sigma**2 * np.eye(self.X.size)),
                   k(self.X, X, self.l)))

    def fit(self, X, y):
        self.X = X
        self.y = y

    def mean(self, X):
        return np.dot(
            k(X, self.X, self.l),
            np.dot(np.linalg.inv(k(self.X, self.X, self.l)
                                 + self.sigma**2 * np.eye(self.X.size)),
                   self.y))

    def predict(self, X):
        return self.mean(X)

Subclassing BaseEstimator allows us to use scikit-learn's cross-validation tools to choose the values of the hyperparameters \(\ell\) and \(\sigma\).

from sklearn.model_selection import GridSearchCV  # sklearn.grid_search in older releases

gp = GPRegression()
param_grid = {
    'l': np.linspace(0.01, 0.6, 50),
    'sigma': np.linspace(0, 0.5, 50)
}
cv = GridSearchCV(gp, param_grid, scoring='mean_squared_error')
cv.fit(sample_xs, sample_ys);

Cross-validation yields the following parameters, which produce a relatively small mean squared error.

model = GPRegression(**cv.best_params_)
model.fit(sample_xs, sample_ys)
cv.best_params_

{'l': 0.40734693877551015, 'sigma': 0.1020408163265306}

The figure below shows the true regression function, \(f(x) = 5 \sin x + \sin 5 x\), along with our Gaussian process regression estimate. It also shows a 95% pointwise confidence band around the estimated regression function. Note how near observed points the confidence band shrinks drastically, while away from observed points it expands rapidly. As our toy example shows, Gaussian processes are powerful and flexible tools for regression.
Although we have only just scratched the surface of this vast field, some remarks about the general properties and utility of Gaussian process regression are in order. For a much more extensive introduction to Gaussian processes, consult the excellent book Gaussian Processes for Machine Learning, which is freely available online.

In general, Gaussian process regression is only effective when the covariance function and its parameters are appropriately chosen. Although we have used cross-validation to choose the parameter values in our toy example, for more serious problems it is often much quicker to maximize the marginal likelihood directly by calculating its gradient.

Our implementation of Gaussian process regression is numerically unstable, as it directly inverts the matrix \(k(X, X) + \sigma^2 I\). A more stable approach is to calculate the Cholesky decomposition of this matrix. Unfortunately, calculating the Cholesky decomposition of an \(n\)-dimensional matrix takes \(O(n^3)\) time. In the context of Gaussian process regression, \(n\) is the number of observations. If the number of training points is significant, the cubic complexity may be prohibitive. A number of approaches have been developed to approximate the results of Gaussian process regression with larger training sets. For more details on these approximations, consult Chapter 8 of Gaussian Processes for Machine Learning.

This post is available as an IPython notebook.
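The Cholesky-based computation mentioned above can be sketched as follows. This is an illustrative reimplementation (not part of the original post) that reuses the squared-exponential kernel defined earlier and replaces the explicit matrix inverse with triangular solves:

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def k(x1, x2, l=1.0):
    # Squared-exponential covariance, as defined earlier in the post
    return np.exp(-np.subtract.outer(x1, x2)**2 / (2 * l**2))

def gp_predict_chol(X, y, X_star, l=1.0, sigma=0.1):
    """Posterior mean and covariance at X_star via a Cholesky factorization,
    avoiding the explicit inverse of k(X, X) + sigma^2 I."""
    K = k(X, X, l) + sigma**2 * np.eye(X.size)
    c_and_low = cho_factor(K)                  # K = L L^T
    alpha = cho_solve(c_and_low, y)            # solves K alpha = y
    K_star = k(X_star, X, l)
    mean = K_star.dot(alpha)
    v = cho_solve(c_and_low, K_star.T)         # solves K v = k(X, X_star)
    cov = k(X_star, X_star, l) - K_star.dot(v)
    return mean, cov
```

With a small noise level, the posterior mean at the training points nearly interpolates the observations, matching the matrix-inverse implementation while being better conditioned numerically.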
http://austinrochford.com/posts/2014-03-23-bayesian-nonparamtric-regression-gp.html
OpenCV is a powerful tool to process images. In this tutorial, we will introduce how to read an image into a numpy ndarray. The Python Pillow library can also read an image into a numpy ndarray: Python Pillow Read Image to NumPy Array: A Step Guide

Preliminary

We will prepare an image which contains an alpha channel and read it using python opencv. This image is (width, height) = (180, 220), and its background is transparent.

Read image using python opencv

Import library

import cv2
import numpy as np

Read image

We can use cv2.imread() to read an image. This function is defined as:

cv2.imread(path, flag)

path: the path of the image
flag: determines the mode of reading the image

These are some values that are often used in flag. We will use an example to show you how to do it.

flag = [cv2.IMREAD_COLOR, cv2.IMREAD_UNCHANGED, cv2.IMREAD_GRAYSCALE]
for f in flag:
    img = cv2.imread('opencv.png', f)
    print(type(img))
    print(img.shape)
    print(img)
    cv2.imshow('image window', img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

Run this code, and we can find: img is <class 'numpy.ndarray'>

Notice: If you read an image which does not contain an alpha channel (it is not transparent) using cv2.IMREAD_UNCHANGED, you may get an array with shape (height, width, 3).

For example: As to this image, it is not transparent. We will read it using cv2.IMREAD_UNCHANGED. You will get:

<class 'numpy.ndarray'>
(367, 384, 3)
[[[255 255 255]
  [255 255 255]
  [255 255 255]
  ...
  [255 255 255]
  [255 255 255]
  [255 255 255]]]

For an image without an alpha channel, the effect of cv2.IMREAD_UNCHANGED is the same as cv2.IMREAD_COLOR.
https://www.tutorialexample.com/python-opencv-read-an-image-to-numpy-ndarray-a-beginner-guide-opencv-tutorial/
Chatbots are increasingly becoming an important channel for companies to interact with their customers, employees, and partners. Amazon Lex allows you to build conversational interfaces into any application using voice and text. The Amazon Lex V2 console and APIs make it easier to build, deploy, and manage bots so that you can expedite building virtual agents, conversational IVR systems, self-service chatbots, or informational bots.

Designing a bot and deploying it in production is only the beginning of the journey. You want to analyze the bot's performance over time to gather insights that can help you adapt the bot to your customers' needs. A deeper understanding of key metrics such as trending topics, top utterances, missed utterances, conversation flow patterns, and customer sentiment helps you enhance your bot to better engage with customers and improve their overall satisfaction. It then becomes crucial to have a conversational analytics dashboard to gain these insights from a single place.

In this post, we look at deploying an analytics dashboard solution for your Amazon Lex bot. The solution uses your Amazon Lex bot conversation logs to automatically generate metrics and visualizations. It creates an Amazon CloudWatch dashboard where you can track your chatbot performance, trends, and engagement insights.

Solution overview

The Amazon Lex V2 Analytics Dashboard Solution helps you monitor and visualize the performance and operational metrics of your Amazon Lex chatbot. You can use it to continuously analyze and improve the experience of end users interacting with your chatbot.
The solution includes the following features:

- A common view of valuable chatbot insights, such as:
  - User and session activity (sentiment analysis, top-N sessions, text/speech modality)
  - Conversation statistics and aggregations (average session duration, messages per session, session heatmaps)
  - Conversation flow, trends, and history (intent path chart, intent per hour heatmaps)
  - Utterance history and performance (missed utterances, top-N utterances)
  - Slot and session attributes most frequently used values
- Rich visualizations and widgets such as metrics charts, top-N lists, heatmaps, and utterance management
- Serverless architecture using pay-per-use managed services that scale transparently
- CloudWatch metrics that you can use to configure CloudWatch alarms

Architecture

The solution uses the following AWS services and features:

- Amazon Lex V2 conversation logs, which generate metrics and visualizations from interactions with your chatbot
- Amazon CloudWatch Logs to store your chatbot conversation logs in JSON format
- CloudWatch metric filters to create custom metrics from conversation logs
- CloudWatch Logs Insights to query the conversation logs and create powerful aggregations from the log data
- CloudWatch Contributor Insights to identify top contributors and outliers in highly variable data such as sessions and utterances
- A CloudWatch dashboard to put together a set of charts and visualizations representing the metrics and data insights from your chatbot conversations
- CloudWatch custom widgets to create custom visualizations like heatmaps and conversation flows using AWS Lambda functions

The following diagram illustrates the solution architecture. The source code of this solution is available in the GitHub repository.
Additional resources

There are several blog posts for Amazon Lex that also explore monitoring and analytics dashboards:

- Building a business intelligence dashboard for your Amazon Lex bots
- Analyzing and optimizing Amazon Lex conversations using Dashbot
- Analyzing Amazon Lex conversation log data with Amazon CloudWatch Insights
- Building a real-time conversational analytics platform for Amazon Lex bots
- Analyzing Amazon Lex conversation log data with Grafana

This post was inspired by the concepts in those previous posts, but the current solution has been updated to work with Amazon Lex bots created from the V2 APIs. It also adds new capabilities such as CloudWatch custom widgets.

Enable conversation logs

Before you deploy the solution for your existing Amazon Lex bot (created using the V2 APIs), you should enable conversation logs. If your bot already has conversation logs enabled, you can skip this step.

We also provide the option to deploy the solution with an accompanying bot that has conversation logs enabled and a scheduled Lambda function to generate conversation logs. This is an alternative if you just want to test drive this solution without using an existing bot or configuring conversation logs yourself.

We first create a log group.

- On the CloudWatch console, in the navigation pane, choose Log groups.
- Choose Actions, then choose Create log group.
- Enter a name for the log group, then choose Create log group.

Now we can enable the conversation logs.

- On the Amazon Lex V2 console, from the list, choose your bot.
- On the left menu, choose Aliases.
- In the list of aliases, choose the alias for which you want to configure conversation logs.
- In the Conversation logs section, choose Manage conversation logs.
- For text logs, choose Enable.
- Enter the CloudWatch log group name that you created.
- Choose Save to start logging conversations.

If necessary, Amazon Lex updates your service role with permissions to access the log group.
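Conversation logs can also be enabled programmatically instead of through the console. The following AWS CLI sketch is illustrative only: the IDs, alias name, and log group ARN are placeholders, and you should check the `aws lexv2-models update-bot-alias` reference for the exact shape of the conversation-log settings in your CLI version.

```
aws lexv2-models update-bot-alias \
  --bot-id ABCDEFGHIJ \
  --bot-alias-id TSTALIASID \
  --bot-alias-name prod \
  --conversation-log-settings '{
    "textLogSettings": [{
      "enabled": true,
      "destination": {
        "cloudWatch": {
          "cloudWatchLogGroupArn": "arn:aws:logs:us-east-1:123456789012:log-group:my-lex-conversation-logs",
          "logPrefix": "prod/"
        }
      }
    }]
  }'
```

As in the console flow, the service role attached to the alias needs permission to write to the target log group.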
The following screenshot shows the resulting conversation log configuration on the Amazon Lex console.

Deploy the solution

You can easily install this solution in your AWS accounts by launching it from the AWS Serverless Application Repository. As a minimum, you provide your bot ID, bot locale ID, and the conversation log group name when you deploy the dashboard. To deploy the solution, complete the following steps:

You're redirected to the create application page on the Lambda console (this is a Serverless solution!).

- Scroll down to the Application Settings section and enter the parameters to point the dashboard to your existing bot:
  - BotId – The ID of an existing Amazon Lex V2 bot that is going to be used with this dashboard. To get the ID of your bot, find your bot on the Amazon Lex console and look for the ID in the Bot details section.
  - BotLocaleId – The bot locale ID associated to the bot ID with this dashboard, which defaults to en_US. To get the locales configured for your bot, choose View languages on the same page where you found the bot ID. Each dashboard creates metrics for a specific locale ID of a Lex bot. For more details on supported languages, see Supported languages and locales.
  - LexConversationLogGroupName – The name of an existing CloudWatch log group containing the Amazon Lex conversation logs. The bot ID and locale must be configured to use this log group for its conversation logs.

Alternatively, if you just want to test drive the dashboard, this solution can deploy a fully functional sample bot. The sample bot comes with a Lambda function that is invoked every 2 minutes to generate conversation traffic. If you want to deploy the dashboard with the sample bot instead of using an existing bot, set the ShouldDeploySampleBots parameter to true. This is a quick and easy way to test the solution.
- After you set the desired values in the Application settings section, scroll down to the bottom of the page and select I acknowledge that this app creates custom IAM roles, resource policies and deploys nested applications.
- Choose Deploy to create the dashboard.

You're redirected to the application overview page (it may take a moment).

- Choose the Deployments tab to watch the deployment status.
- Choose View stack events to go to the AWS CloudFormation console to see the deployment details.

The stack may take around 5 minutes to create. Wait until the stack status is CREATE_COMPLETE.

- When the stack creation is complete, you can look for a direct link to your dashboard on the Outputs tab of the stack (the DashboardConsoleLink output variable). You may need to wait a few minutes for data to be reflected in the dashboard.

Use the solution

The dashboard provides a single pane of glass that allows you to monitor the performance of your Amazon Lex bot. The solution is built using CloudWatch features that are intended for monitoring and operational management purposes. The dashboard displays widgets showing bot activity over a selectable time range. You can use the widgets to visualize trends, confirm that your bot is performing as expected, and optimize your bot configuration. For general information about using CloudWatch dashboards, see Using Amazon CloudWatch dashboards.

The dashboard contains widgets with metrics about end user interactions with your bot, covering statistics of sessions, messages, sentiment, and intents. These statistics are useful to monitor activity and identify engagement trends. Your bot must have sentiment analysis enabled if you want to see sentiment metrics. Additionally, the dashboard contains metrics for missed utterances (phrases that didn't match the configured intents or slot values).
You can expand entries in the Missed Utterance History widget to look for details of the state of the bot at the point of the missed utterance so that you can fine-tune your bot configuration. For example, you can look at the session attributes, context, and session ID of a missed utterance to better understand the related application state.

You can use the dashboard to monitor session duration, messages per session, and top session contributors.

You can track conversations with a widget listing the top messages (utterances) sent to your bot and a table containing a history of messages. You can expand each message in the Message History section to look at conversation state details when the message was sent.

You can visualize utilization with heatmap widgets that aggregate sessions and intents by day or time. You can hover your pointer over blocks to see the aggregation values.

You can look at a chart containing conversation paths aggregated by sessions. The thickness of the connecting path lines is proportional to the usage. Grey path lines show forward flows and red path lines show flows returning to a previously hit intent in the same session. You can hover your pointer over the end blocks to see the aggregated counts. The conversation path chart is useful to visualize the most common paths taken by your end users and to uncover unexpected flows.

The dashboard shows tables that aggregate the top slot and session attribute values. The session attributes and slots are dynamically extracted from the conversation logs. These widgets can be configured to exclude specific session attributes and slots by modifying the parameters of the widget. These tables are useful to identify the top values provided to the bot in slots (data inputs) and to track the top custom application information kept in session attributes.

You can add missed utterances to intents of the Draft version of your bot with the Add Missed Utterances widget.
For more information about bot versions, see Creating versions. This widget is optionally added to the dashboard if you set the ShouldAddWriteWidgets parameter to true when you deploy the solution.

CloudWatch features

This section describes the CloudWatch features used to create the dashboard widgets.

Custom metrics

The dashboard includes custom CloudWatch metrics that are created using metric filters that extract data from the bot conversation logs. These custom metrics track your bot activity, including number of messages and missed utterances. The metrics are collected under a custom namespace based on the bot ID and locale ID. The namespace is named Lex/Activity/<BotID>/<LocaleID>, where <BotId> and <LocaleId> are the bot and locale IDs that you passed when creating the stack. To see these metrics on the CloudWatch console, navigate to the Metrics section and look for the namespace under Custom Namespaces.

Additionally, the metrics are categorized using dimensions based on bot characteristics, such as bot alias, bot version, and intent names. These dimensions are dynamically extracted from conversation logs, so they automatically create metrics subcategories as your bot configuration changes over time. You can use various CloudWatch capabilities with these custom metrics, including alarms and anomaly detection.

Contributor Insights

Similar to the custom metrics, the solution creates CloudWatch Contributor Insights rules to track the unique contributors of highly variable data such as utterances and session IDs. The widgets in the dashboard using Contributor Insights rules include Top 10 Messages and Top 10 Sessions. The Contributor Insights rules are used to create top-N metrics and dynamically create aggregation metrics from this highly variable data. You can use these metrics to identify outliers in the number of messages sent in a session and see which utterances are the most commonly used.
You can download the top-N items from these widgets as a CSV file by choosing the widget menu and choosing Export contributors.

Logs Insights

The dashboard uses the CloudWatch Logs Insights feature to query conversation logs. Various widgets in the dashboard, including the Missed Utterance History and the Message History, use CloudWatch Logs Insights queries to generate the tables. The CloudWatch Logs Insights widgets allow you to inspect the details of the items returned by the queries by choosing the arrow next to the item. Additionally, the Logs Insights widgets have a link that can take you to the CloudWatch Logs Insights console to edit and run the query used to generate the results. You can access this link by choosing the widget menu and choosing View in CloudWatch Logs Insights. The CloudWatch Logs Insights console also allows you to export the result of the query as a CSV file by choosing Export results.

Custom widgets

The dashboard includes widgets that are rendered using the custom widgets feature. These widgets are powered by Lambda functions using Python or JavaScript code. The functions use the D3.js (JavaScript) or pandas (Python) libraries to render rich visualizations and perform complex data aggregations. The Lambda functions query your bot conversation logs using the CloudWatch Logs Insights API, aggregate the data, and output the HTML that is displayed in the dashboard. They obtain bot configuration details (such as intents and utterances) using the Amazon Lex V2 APIs, or extract details such as slots and session attributes dynamically from the query results. The dashboard uses custom widgets for the following: heatmaps, the conversation path chart, the add-utterances management form, and the top-N slot/session attribute tables.

Cost

The Amazon Lex V2 Analytics Dashboard Solution is based on CloudWatch features. See Amazon CloudWatch pricing for cost details.
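Queries like the ones behind the Missed Utterance History widget can also be run directly against the conversation log group. The sketch below builds a query string and, when boto3 and credentials are available, starts it. The field names (missedUtterance, inputTranscript, sessionId) follow the Lex conversation-log format, but the exact filter expression and the log group name here are illustrative assumptions, not copied from the solution's source.

```python
import time

try:
    import boto3  # only needed to actually start the query
except ImportError:
    boto3 = None


def missed_utterance_query(limit=20):
    # CloudWatch Logs Insights query over Lex conversation logs.
    return (
        "fields @timestamp, sessionId, inputTranscript "
        "| filter missedUtterance = 1 "
        "| sort @timestamp desc "
        "| limit {}".format(limit)
    )


def run_query(log_group, hours=24, region="us-east-1"):
    """Start the query over the last `hours` hours; returns the query ID."""
    if boto3 is None:
        raise RuntimeError("boto3 is required to start the query")
    logs = boto3.client("logs", region_name=region)
    now = int(time.time())
    start = logs.start_query(
        logGroupName=log_group,  # placeholder: your bot's conversation log group
        startTime=now - hours * 3600,
        endTime=now,
        queryString=missed_utterance_query(),
    )
    return start["queryId"]


print(missed_utterance_query(limit=5))
```

Results would then be fetched with the matching get_query_results call, which is the same API the dashboard's Lambda-backed widgets use.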
Clean up

To clean up your resources, you can delete the CloudFormation stack. This permanently removes the dashboard and metrics from your account. For more information, see Deleting a stack on the AWS CloudFormation console.

Summary

In this post, we showed you a solution that provides insights on how your users interact with your Amazon Lex chatbot. The solution uses CloudWatch features to create metrics and visualizations that you can use to improve the user experience of your bot users. The Amazon Lex V2 Analytics Dashboard Solution is provided as open source; use it as a starting point for your own solution, and help us make it better by contributing back fixes and features. For expert assistance, AWS Professional Services and other AWS Partners are here to help. We'd love to hear from you. Let us know what you think in the comments section, or use the issues forum in the GitHub repository.

About the Author

Oliver Atoa is a Principal Solutions Architect in the AWS Language AI Services team.
https://awsfeed.com/whats-new/machine-learning/monitor-operational-metrics-for-your-amazon-lex-chatbot
Help Required - Swing AWT

JFrame frame = new JFrame("password example in java");
frame.setDefaultCloseOperation(...);

Read the linked tutorial for more information. Follow-up question: how do I put a password field in a JDialog?

Java AWT Package Example

In this section you will learn about the AWT package of Java. Many running examples are provided that will help you master the AWT package.

java - Swing AWT

I want a program that accepts a string from the user in textfield1 and prints the same string in textfield2.

awt

How to display data using JDBC in an AWT applet?

Java - Swing AWT

Hello friends, I am developing a desktop application in Java and I want to change the default Java icon shown in the window's title bar and use my own icon there. Can anyone help me?

java - Swing AWT

Hello sir, I want to start a chat server project in Java. Please help me work out how to start it. Hi friend, to solve this problem, visit this link.
http://roseindia.net/tutorialhelp/allcomments/3872
Repository storage types (CORE ONLY)

- Introduced in GitLab 10.0.
- Hashed storage became the default for new installations in GitLab 12.0.
- Hashed storage is enabled by default for new and renamed projects in GitLab 13.0.

GitLab can be configured to use one or multiple repository storage paths/shard locations, which can be local directories or network-mounted volumes. In GitLab, this is configured in /etc/gitlab/gitlab.rb by the git_data_dirs({}) configuration hash. The storage layouts discussed here apply to any shard defined in it. The default repository shard that is available in any installation that hasn't customized it points to the local folder /var/opt/gitlab/git-data. Anything discussed below is expected to be part of that folder.

Hashed storage

NOTE: In GitLab 13.0, hashed storage is enabled by default and legacy storage is deprecated. Support for legacy storage is scheduled to be removed in GitLab 14.0. If you haven't migrated yet, check the migration instructions. The option to choose between hashed and legacy storage in the admin area has been disabled.

Hashed storage is the storage behavior we rolled out with 10.0. Instead of coupling the project URL and the folder structure where the repository is stored on disk, we couple a hash based on the project's ID. This makes the folder structure immutable, and therefore eliminates any requirement to synchronize state from URLs to disk structure. This means that renaming a group, user, or project costs only the database transaction, and takes effect immediately. The hash also helps to spread the repositories more evenly on the disk, so the top-level directory contains fewer folders than the total number of top-level namespaces.

Translating hashed storage paths

Troubleshooting problems with the Git repositories, adding hooks, and other tasks require you to translate between the human-readable project name and the hashed storage path.
From project name to hashed path

The hashed path is shown on the project's page in the admin area. To access the Projects page, go to Admin Area > Overview > Projects and then open up the page for the project. The "Gitaly relative path" is shown there; it is the path under /var/opt/gitlab/git-data/repositories/ on a default Omnibus installation.

In a Rails console, get this information using either the numeric project ID or the full path:

Project.find(16).disk_path
Project.find_by_full_path('group/project').disk_path

From hashed path to project name

To translate from a hashed storage path to a project name:

- Start a Rails console.
- Look up the project by its hashed disk path; the output looks like:

=> #<Project id:16 it/supportteam/ticketsystem>

Hashed object pools

Introduced in GitLab 12.1.

WARNING: Do not run git prune or git gc in pool repositories! This can cause data loss in the "real" repositories that depend on the pool in question.

Hashed storage coverage migration

Files stored in an S3-compatible endpoint do not have the downsides mentioned earlier, if they are not prefixed with #{namespace}/#{project_name}, which is true for CI Cache and LFS Objects.

Legacy storage

WARNING: In GitLab 13.0, hashed storage is enabled by default and legacy storage is deprecated. If you haven't migrated yet, check the migration instructions. Support for legacy storage is scheduled to be removed in GitLab 14.0. If you're on GitLab 13.0 and later, switching new projects to legacy storage is not possible. The option to choose between hashed and legacy storage in the admin area has been disabled.

Legacy storage couples the on-disk folder structure to the project URL and concentrates a huge number of top-level namespaces in one directory, and every rename of a group, user, or project needs to be reflected on disk. This can add a lot of load in big installations, especially if using any type of network-based filesystem.
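As an illustrative sketch (not one of the documentation's own examples): GitLab's hashed storage derives the on-disk path from the SHA-256 of the project ID, laid out as @hashed/<first 2 hash chars>/<next 2 hash chars>/<full hash>.git. Treat this helper as an illustration of the scheme rather than a supported API; the Rails console methods above are the supported way to translate paths.

```ruby
require 'digest'

# Sketch of the hashed-storage layout: the repository folder is derived
# from SHA-256 of the project ID (as a string), so the location can be
# computed from the ID alone, independent of the project's URL.
def hashed_disk_path(project_id)
  hash = Digest::SHA256.hexdigest(project_id.to_s)
  "@hashed/#{hash[0, 2]}/#{hash[2, 2]}/#{hash}.git"
end

puts hashed_disk_path(16)
```

Because the hash never changes for a given project ID, renaming the project leaves this path untouched, which is exactly the immutability property described above.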
https://cloud.cees.ornl.gov/gitlab/help/administration/repository_storage_types.md
Difference between revisions of "Muzzle Flash Lighting"

Revision as of 10:33, 20 July 2013

This tutorial will show you how to create a muzzleflash that dynamically appears and lights up the world around you, like in Counter-Strike: Source. These types of lights are called "dlights". This code was designed for any Orange Box based mod, singleplayer or multiplayer. It also works with NPCs, assuming that you've gotten them to work properly.

Warning: Some dynamic props will not be lit when using dlights.
Warning: Too many dlights may result in severe performance losses on low-end systems. When coding for lighting, dlights should be used sparingly.

Implementing Dlight-based Muzzleflashes

The coding begins! Only a few minor changes are needed here to make your muzzleflashes look better than ever. First off, open up c_baseanimating.cpp and search for the function void C_BaseAnimating::ProcessMuzzleFlashEvent(). Inside it you will find:

Vector vAttachment;
QAngle dummyAngles;
GetAttachment( attachment, vAttachment, dummyAngles );

As you may have noticed, Valve went a little crazy here and the result was a seemingly ugly muzzleflash. This tutorial is all about making things pretty, so we're going to change that. Replace the entries above with this:

Vector vAttachment, vAng;
QAngle angles;
#ifdef HL2_EPISODIC
	GetAttachment( 1, vAttachment, angles ); // set 1 instead of "attachment"
#else
	GetAttachment( attachment, vAttachment, angles );
#endif
AngleVectors( angles, &vAng );
vAttachment += vAng * 2;

dlight_t *dl = effects->CL_AllocDlight( index );
dl->origin = vAttachment;
dl->color.r = 231;
dl->color.g = 219;
dl->color.b = 14;
dl->die = gpGlobals->curtime + 0.05f;
dl->radius = random->RandomFloat( 245.0f, 256.0f );
dl->decay = 512.0f;

Edit: Changed the code for the colors; by default it was set to a magenta-ish color. A good muzzle flash color is closer to white. You can try 252, 238, 128 for a realistic washed-out yellow.
Notes

I am currently working further on this idea; the updates will add more visual effects to create a more realistic and cleaner look for the muzzle flash. The update will only work on the Orange Box engine. A link will be provided HERE when the new muzzle flash is done.

A NOTE FROM GAMERMAN12: I have updated this code so that the lights decay after firing, giving a flashing effect when shooting machine guns. It also fades so that it's a little more realistic. Thanks.
https://developer.valvesoftware.com/w/index.php?title=Muzzle_Flash_Lighting&diff=prev&oldid=176500
29 April 2010 11:42 [Source: ICIS news]

LONDON (ICIS news)--Most players in the European benzene market expected the May contract price to settle down by around 7%, between $1,100-1,120/tonne (€836-851/tonne), they said on Thursday morning. The April contract had been confirmed at $1,190/tonne FOB (free on board) NWE (northwest Europe).

Not all players were ready for predictions though. "My crystal ball is blind," one trader commented, adding that the market was difficult to peg, with a premium paid for delivery in early May compared to delivery at any time in May, as the market was reportedly tight in the front month.

For most of the week, prices for delivery in the first half of May were higher by approximately $20-30/tonne compared to any other May values. With little business actually done, this meant that the May range was at times more than $60/tonne wide, to incorporate all bids and offers that were heard for the month. On Thursday morning, however, it appeared that the premium for early May had been reduced to around $10-15/tonne.

May deals for delivery at any time were reported at $1,095-1,105/tonne CIF (cost, insurance, freight) ARA (Amsterdam-Rotterdam-Antwerp).

"Any May has been talked at $1,100-1,105/tonne for the last few days, and early May was mostly at $1,110-1,125/tonne," another trader said. "So my guess is that the contract will settle anywhere between $1,100-1,120/tonne, but closer to the $1,100/tonne mark."

There were also reports of more than 25,000 tonnes of imports arriving. Finally, with the euro weakening against the dollar since the April contract had been agreed, players thought that the May euro level could see a drop of around €45/tonne, to settle at around €838/tonne.

The May European benzene contract price was expected to settle on 30 April.

($1 = €0.76)
http://www.icis.com/Articles/2010/04/29/9354850/europe-may-benzene-contract-price-expected-to-fall-by-around-7.html
Full code from this post can be found in my deep learning repo

A classic example of a function that doesn't respond well to linear machine learning algorithms is XOR. Because the meaning of each bit in a sequence depends on other bits, there's no assignment of weights that we can give to each bit that captures the XOR function. That is to say, there's no way to come up with weights \[ w_1 \] and \[ w_2 \] satisfying these equations, which make up XOR's truth table:

\[ \begin{bmatrix} w_1 \\ w_2 \end{bmatrix}^\intercal \begin{bmatrix} 0 \\ 0 \end{bmatrix} = 0 , \begin{bmatrix} w_1 \\ w_2 \end{bmatrix}^\intercal \begin{bmatrix} 0 \\ 1 \end{bmatrix} = 1 , \begin{bmatrix} w_1 \\ w_2 \end{bmatrix}^\intercal \begin{bmatrix} 1 \\ 0 \end{bmatrix} = 1 , \begin{bmatrix} w_1 \\ w_2 \end{bmatrix}^\intercal \begin{bmatrix} 1 \\ 1 \end{bmatrix} = 0 \]

If we want to learn XOR, we have to use some nonlinearity to capture its full behavior. One way to do this is with recurrent neural networks, or specifically long short-term memories, LSTMs. The structure of an LSTM allows it to remember disconnected pieces of an input. Think about it as reading in each bit in our XORed sequence one at a time. The LSTM should be able to remember the bits it's seen, helping us keep track of the overall parity of the sequence. Luckily, this strategy also scales up fairly nicely, allowing us to perform the XOR task on fairly long sequences of bits. In fact, we can write all the code needed to show off how this works in the span of about thirty lines. We'll use python, tensorflow, and a bit of numpy to make this happen. First, let's import a bunch of things we'll need to use later.

from tensorflow.keras import optimizers
from tensorflow.keras.layers import Dense, Input, LSTM
from tensorflow.keras.models import Sequential
import numpy as np
import random

So far, so good. Just a bunch of machine learny stuff.
The most important parts to note are that we're using a sequential model with some mixture of LSTM and Dense layers. Let's set up some constants. We'll operate on sequences of bits with length 50, and let's create 100,000 of them for our training set.

SEQ_LEN = 50
COUNT = 100000

Our classification will work by feeding in both a sequence and its inverse, and deciding which one has an odd number of 1s. For example, if we see sequences [1 0 1 1] and its inverse, [0 1 0 0], we'll want the model to predict that the first sequence is the one that has an XOR of 1. Let's create a simple lambda bin_pair to help us create inverses of any sequence, by pairing off each bit with its opposite. Our training set consists of these pairs of random bits, in a list of length SEQ_LEN, and we'll want COUNT training examples overall. To calculate our target set, we'll create binary pairs based on the cumulative sum of a sequence, modulo 2. For example, if we have a sequence [1 0 1 1 0 1] the cumulative sum is [1 1 2 3 3 4] and that cumulative sum modulo 2 is [1 1 0 1 1 0]. This sum captures the "current" parity of the sequence as we read it in from left to right. We mostly need the binary pairs so that our target set has the same dimensions as the training set.

bin_pair = lambda x: [x, not(x)]
training = np.array([[bin_pair(random.choice([0, 1])) for _ in range(SEQ_LEN)]
                     for _ in range(COUNT)])
target = np.array([[bin_pair(x) for x in np.cumsum(example[:,0]) % 2]
                   for example in training])

Let's do a quick check to make sure the dimensions of the two datasets match up. They should both be \[ 100000 \times 50 \times 2 \], since we have 100,000 examples of length 50, and each bit is paired off with its inverse. Now it's time to build the model! Sequential means that our network will run each of its layers as a series of steps. We start with taking in the input. Notice that we drop the 100000 in the input shape: this is because we want the dimensions of each individual example here.
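That dimension check can be sketched as a runnable snippet. A smaller COUNT is used here so it finishes instantly; the construction is otherwise the same as in the post.

```python
import random

import numpy as np

SEQ_LEN = 50
COUNT = 100  # smaller than the post's 100,000, just for a fast check

bin_pair = lambda x: [x, not x]
training = np.array([[bin_pair(random.choice([0, 1])) for _ in range(SEQ_LEN)]
                     for _ in range(COUNT)])
target = np.array([[bin_pair(x) for x in np.cumsum(example[:, 0]) % 2]
                   for example in training])

# Both datasets share the same shape: (COUNT, SEQ_LEN, 2).
assert training.shape == (COUNT, SEQ_LEN, 2)
assert target.shape == training.shape

# The last step of the cumulative sum mod 2 is the parity of the whole sequence,
# i.e. the XOR of all its bits.
assert (target[:, -1, 0] == np.sum(training[:, :, 0], axis=1) % 2).all()

print("shapes:", training.shape, target.shape)
```

The final assertion is the whole point of the cumulative-sum trick: the target at each timestep is the running parity, so the last timestep carries the full XOR.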
The next layer is our single-unit LSTM, which should read in sequences for us and act as the network's "memory". Finally, we use a dense layer with 2 possibilities (one for each parity) and a softmax activation which turns our answers into probabilities describing the parity of the sequence, given how well the network has been trained.

model = Sequential()
model.add(Input(shape=(SEQ_LEN, 2), dtype='float32'))
model.add(LSTM(1, return_sequences=True))
model.add(Dense(2, activation='softmax'))

Using a binary crossentropy loss helps us make a decision: the XOR's parity is either 0 or it's 1. An Adam optimizer is fairly typical and helps adjust the learning rate so that our model can hopefully learn much faster without skipping over too many possible minima. We'll track how well our model is doing by looking at its accuracy. Model fitting is the step that takes the longest time. 10 epochs should be enough for our model to gain reasonable amounts of prediction power without spending too long training. A batch size of 128 is pretty usual, and we usually like powers of two there because GPUs handle them better.

model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(training, target, epochs=10, batch_size=128)
model.summary()

Now that our model has been trained, we can see how it might work on an arbitrary sequence of bits. Let's select one at random, and print out what our model predicts, what it thinks the probability is of being correct, and what the actual answer ended up being.
predictions = model.predict(training)
i = random.randint(0, COUNT - 1)  # randint is inclusive on both ends; COUNT itself would be out of range
chance = predictions[i, -1, 0]
print('randomly selected sequence:', training[i, :, 0])
print('prediction:', int(chance > 0.5))
print('confidence: {:0.2f}%'.format((chance if chance > 0.5 else 1 - chance) * 100))
print('actual:', np.sum(training[i, :, 0]) % 2)
https://vitez.me/lstm-xor
CC-MAIN-2020-16
refinedweb
1,143
61.97
rotate_x() Contents rotate_x()# Rotates around the x-axis the amount specified by the angle parameter. Examples# def setup(): py5.size(100, 100, py5.P3D) py5.translate(py5.width//2, py5.height//2) py5.rotate_x(py5.PI/3.0) py5.rect(-26, -26, 52, 52) def setup(): py5.size(100, 100, py5.P3D) py5.translate(py5.width//2, py5.height//2) py5.rotate_x(py5.radians(60)) py5.rect(-26, -26, 52, 52) Description# Rotates around the x_x(PI/2) and then rotate_x(PI/2) is the same as rotate_x(PI). If rotate_x() is run within the draw(), the transformation is reset when the loop begins again. This function requires using P3D as a third parameter to size() as shown in the example. Underlying Processing method: rotateX Signatures# rotate_x( angle: float, # angle of rotation specified in radians /, ) -> None Updated on September 01, 2022 16:36:02pm UTC
https://py5.ixora.io/reference/sketch_rotate_x.html
CC-MAIN-2022-40
refinedweb
145
51.04
Use the Xrm.Page object model Applies To: Dynamics 365 (online), Dynamics 365 (on-premises), Dynamics CRM 2016, Dynamics CRM Online When you write form scripts you will interact with objects in the Xrm.Page namespace to perform the following actions: Get or set attribute values. Show and hide user interface elements. Reference multiple controls per attribute. Access multiple forms per entity. Manipulate form navigation items. Interact with the business process flow control. For more examples, see Form scripting quick reference. In This Topic Xrm.Page object hierarchy Execution context Collections Object descriptions attribute context control entity formSelector navigation process section stage step tab Xrm.Page object hierarchy As shown in the following diagram, Xrm.Page provides a namespace container for three objects described in the following table: .jpeg) Execution context When you register a function for an event handler you have the option to pass an execution context object as the first parameter to the function. This object contains methods that allows you to manage variables you wish to share with other event handlers and the save event. For more information, see Execution context (client-side reference) and Save event arguments (client-side reference). Collections The following table describes the Xrm.Page object model collections. See Collections (client-side reference) for information about the methods available for collections. Business process flow collections Collections for stages and steps within Xrm.Page.data.process are based on the same collection structure but also allow for adding or removing items from the collections. Use the process.getStages method to access the collection of stages. Use the stage.getSteps method to access the collection of steps. 
Object descriptions Each object possesses several methods to retrieve data, get or set object properties, or perform actions: attribute Each attribute corresponds to an entity attribute that has been added to the form as a field. Generally only those entity attributes that have been added to the form as a field are available. Each instance of a field is a control. A field can be added to a form more than one time, which creates multiple controls that refer to the same attribute. Note Composite attributes have special behaviors. More information: Write scripts for composite attributes Attributes are categorized by type. You can determine the type of an attribute by using the getAttributeType method. While all attributes share some common methods, certain methods are only available for specific attribute types. For more information, see Xrm.Page.data.entity attribute (client-side reference). Note Attribute type information represents the behavior of the attribute in the form. It does not necessarily correspond to the field type defined in the application or the AttributeMetadata types. Attributes of a particular field type may behave differently depending on how they are formatted. The following table lists the attribute type string values to expect for each type of attribute schema type and format option. context Xrm.Page.context provides methods to retrieve information specific to an organization, a user, or parameters that were passed to the form in a query string. For more information, see Client-side context (client-side reference). control Represents an HTML element present on the form. Some controls are bound to a specific attribute, whereas others may represent unbound controls such as an IFRAME, Web resource, or a sub grid that has been added to the form. Use specific control names in your code for IFrame, Web Resource, and Subgrid controls. These controls are not bound to an attribute. 
Avoid including specific control names in your code when the control is bound to an attribute. When multiple controls are bound to an attribute, the control names are determined at runtime and can vary depending on where the control is located in the form. For most tasks related to attribute bound controls, you will access the controls by using the attribute controls collection or through the controls collection of a section. Rather than refer to a control by name, you will gain a reference to it based on the context of the collection. In this case, the name is not important. See the example found for the attribute controls for a way to create functions to perform actions across all the controls that are bound to a specific attribute. Note Composite attributes have special behaviors. More information: Write scripts for composite attributes. Note For most script development work outside of Microsoft Dynamics 365, developers may be accustomed to referring to page elements by using the document.getElementById method. For Microsoft Dynamics 365 form scripts this method is not supported. It is important to recognize that the attribute stores the data and the control is simply the presentation of the attribute in the form. For controls bound to attributes you may need to adjust the way you are accustomed to accessing data in the form. Controls are categorized by type. You can determine the type of a control by using the getControlType method. Certain control methods are only available for specific types of controls. For example, the addOption method is only available for controls that are presented as option sets. For more information, see Xrm.Page.ui control (client-side reference). entity Xrm.Page.data.entity provides methods to retrieve information specific to the record displayed on the page, the save method, and a collection of all the attributes included in the form. See Xrm.Page.data.entity (client-side reference) for more information. 
formSelector The Xrm.Page.ui.formSelector contains an items collection that provides capabilities to query the forms available for the current user. Use the navigate method to close the current form and open a different one. For more information, see Xrm.Page.ui.formSelector item (client-side reference). navigation Does not contain any methods. Provides access to navigation items through the items collection. process Contains methods to retrieve properties of a business process flow. More information: Process methods section A section contains methods to manage how it appears in addition to accessing the tab that contains the section. A section also provides access to the controls within it through a controls collection. More information: Xrm.Page.ui section (client-side reference) stage Each process has a collection of stages that can be access using the process getStages method. One stage is the active stage. More information: Structure of business process flows step Steps represent individual items of data to be collected during a stage. Each stage has a collection of steps that can be accessed using the stage getSteps method. More information: Structure of business process flows You can access a step control within the active stage of a business process flow control by referencing the control name with the special prefix “header_process_<control name>”. For example, to hide the step that represents the purchaseprocess attribute, use the following: Xrm.Page.getControl("header_process_purchaseprocess").setVisible(false); tab A tab is a group of sections on a page. It contains methods to change the presentation of the tab. You will access sections in the tab through the sections collection. For more information, see Xrm.Page.ui tab (client-side reference). 
See Also Form scripting quick reference Write and debug scripts for Dynamics 365 for phones and tablets Write code for Microsoft Dynamics 365 forms Write scripts for composite attributes Write scripts for business process flows Use JavaScript with Microsoft Dynamics 365 Client-side programming reference Client-side programming reference JavaScript libraries for Microsoft Dynamics 365 Customize entity forms Microsoft Dynamics 365
https://docs.microsoft.com/en-us/previous-versions/dynamicscrm-2016/developers-guide/gg328474%28v%3Dcrm.8%29
CC-MAIN-2020-34
refinedweb
1,242
57.77
A rational number is a number that can be expressed as a fraction whose numerator and denominator are integers. Examples of rational numbers are 0.75 (which is �) and 1.125 (which is 9/8). The value ? is not a rational number; it cannot be expressed as a ratio of two integers. Working with rational numbers on the computer is often a problem. Inaccuracies in floating point representation can yield imprecise results. For example the result of a C# expression 1.0/3.0*3.0 is likely to be a value like 0.999999 rather than 1.0. Design and build a rational number calculator program. This calculator should read the mathematical expression as an user input and add (+), subtract (-), multiply (*) or divide (/) the rational numbers. The program should do the calculation using c# operator precedence rule; * and / are perform before + and -. It also search for the maximum common divider to simplify the result. The following is a sample run of the program Input the mathematical expression: 1/3 * 2/4 + 5/9 Output: 1/3 * 2/4 + 5/9 = 78/108 = 13/18 or Input the mathematical expression 1/2 + 1/4 * 5/3 � 3/8 Output: 1/2 + 1/4 * 5/3 � 3/8 = 104/192 = 52/96 = 13/24 Reminder: The program should validate the input values and show the appropriate message when an error occurs. Hint: The split method in a String class finds all the substrings in a string that are separated by one or more characters, returning a string array. Example: public class SplitTest { public static void Main() { string words = "this is a list of words, with: a bit of punctuation."; string [] split = words.Split(new Char [] {' ', ',', '.', ':'}); foreach (string s in split) { if (s.Trim() != "") Console.WriteLine(s); } } }
http://www.chegg.com/homework-help/questions-and-answers/rational-number-number-expressed-fraction-whose-numerator-denominator-integers-examples-ra-q5083108
CC-MAIN-2016-36
refinedweb
295
57.57