text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
I’m currently playing with ORMlite to make a model with tables and relationships. One relationship is a many-to-many relationship. What’s the best way to implement that? To be more concrete: Let’s say I’ve got these two tables Product id brand Purchase id A purchase can have several products and one products can be in several purchases. Using ORMLite I could have a @ForeignCollectionField in each model but I don’t think it would work. The only valid solution I see is to make a third table Product_Purchase to link Product and Purchase with many-to-one relationships. What do you folks think? @Romain’s self answer is correct but here’s some more information for posterity. As he mentions there is an example many-to-many ORMLite project that demonstrates the best way to do this: The example uses a join table with the id’s of both of the objects to store a relationship. In @Romain’s question, the join object would have both the Product and the Purchase object. Something like: public class ProductPurchase { @DatabaseField(generatedId = true) private int id; @DatabaseField(foreign = true) private Product product; @DatabaseField(foreign = true) private Purchase purchase; ... } The id fields get extracted from the objects which creates a table like: CREATE TABLE `userpost` (`id` INTEGER AUTO_INCREMENT , `user_id` INTEGER , `post_id` INTEGER , PRIMARY KEY (`id`) ) You then use inner queries to find the Product objects associated with each Purchase and vice versa. See lookupPostsForUser() method in the example project for the details. There has been some thought and design work around doing this automatically but right now ORMLite only handles one-to-many relationships internally. ### Ok I guess the only way to go is to create a third table Product_Purchase. It is indicated in a sample project. ### You have to create a ProductPurchase class and manage it like it was another object that has to go into your Database. You can (but you don’t have to) have a Collection of Products inside Purchases (and vice-versa) but they will have to be manually updated/created when you load the relations between Products and Purchases from the ProductPurchase linker table. Having those collections mean nothing for the ORM (you won’t and shouldn’t annotate them). If anyone is looking for and Android App with the Many-to-Many relationship I worked on an example:
https://throwexceptions.com/android-what-is-the-best-way-to-implement-many-to-many-relationships-using-ormlite-throwexceptions.html
CC-MAIN-2020-29
refinedweb
396
52.7
Hi everyone in this program down below in the while loop it works the first time, but when it goes back to start of the loop again, it skips the first question of the loop all the time, and it wont let me enter it again, it just skips it can you please help me why this wont let me do it or fix it up for me =) thanx heaps Rob #include <stdio.h> #include <string.h> #define MAX 100 /* size of structure array */ typedef struct { char name[ 20 ]; char phone [ 15 ]; char studentid [ 15 ]; } customerData; void bubblesort(customerData[]); int main() { customerData customer[MAX]; customerData hold; int i, pass, element; char targetName[ 20 ]; int s; printf("Enter students records (names and phone no.).\n"); printf("\nThey will then be sorted by names.\n\n"); while ( s != 0) { printf( "\nEnter customer lastname Then First name (E.g. Jones John): "); gets ( customer[i].name ); printf( "\nEnter phone number: " ); gets( customer[i].phone ); printf( "\nEnter Student ID: "); gets( customer[i].studentid); printf( "With to enter anymore?(1 for YES, 0 for NO)\n "); scanf("%d", &s); } }
http://cboard.cprogramming.com/cplusplus-programming/24741-troubling-me-please-help.html
CC-MAIN-2014-15
refinedweb
183
73.27
The QAsciiDictIterator class provides an iterator for QAsciiDict collections. More... #include <qasciidict.h> Inherits QGDictIterator. List of all member functions. QAsciiDictIterator is implemented as a template class. Define a template instance QAsciiDictIterator<X> to create a dictionary iterator that operates on QAsciiDict<X> (dictionary of X*). Example: #include <qasciidict.h> #include <stdio.h> void main() { // Creates a dictionary that maps char* ==> char* (case insensitive) QAsciiDict<char> dict( 17, FALSE ); dict.insert( "France", "Paris" ); dict.insert( "Russia", "Moscow" ); dict.insert( "Norway", "Oslo" ); QAsciiDictIterator<char> it( dict ); // iterator for dict while ( it.current() ) { printf( "%s -> %s\n", it.currentKey(), it.current() ); ++it; } } Program output: Russia -> Moscow Norway -> Oslo France -> Paris Note that the traversal order is arbitrary, you are not guaranteed the order above. Multiple iterators may independently traverse the same dictionary. A QAsciiDict knows about all iterators that are operating on the dictionary. When an item is removed from the dictionary, QAsciiDict update all iterators that are referring the removed item to point to the next item in the traversing order. See also QAsciiDict and Collection Classes Constructs an iterator for dict. The current iterator item is set to point on the first item in the dict. Destroys the iterator. Cast operator. Returns a pointer to the current iterator item. Same as current(). Returns the number of items in the dictionary this iterator operates on. See also isEmpty(). Returns a pointer to the current iterator item. Returns a pointer to the key for the current iterator item. Returns TRUE if the dictionary is empty, i.e. count() == 0, otherwise FALSE. See also count(). Makes the succeeding item current and returns the original current item. If the current iterator item was the last item in the dictionary or if it was null, null is returned. Prefix ++ makes the succeeding item current and returns the new current item. If the current iterator item was the last item in the dictionary or if it was null, null is returned. Sets the current item to the item jump positions after the current item, and returns a pointer to that item. If that item is beyond the last item or if the dictionary is empty, it sets the current item to null and returns null. Sets the current iterator item to point to the first item in the dictionary and returns a pointer to the item. If the dictionary is empty it sets the current item to null and returns null. Search the documentation, FAQ, qt-interest archive and more (uses): This file is part of the Qt toolkit, copyright © 1995-2005 Trolltech, all rights reserved.
https://doc.qt.io/archives/2.3/qasciidictiterator.html
CC-MAIN-2021-49
refinedweb
430
60.11
Quote Here's the code I wrote: #include <iostream> #include <conio.h> using namespace std; int main() { int a, b; cout << "Enter a number: "; cin >> a; cout << "Enter another number: "; cin >> b; if(a>>B)/>{ while(b<=a){ b=b++; } cout << b <<endl; } if(b>>a){ while(a<=B)/>{ a=a++; } cout <<a<<endl; } if(a==B)/>{ cout <<"Both Numbers are same"<< endl; } _getch(); } But it doesn't work ! Where am I doing wrong? Please help ! Note: In my program all the b are in small. But whenever I try to do the b small in the above code, it gets capitalized. May be a bug in the forum? This post has been edited by Jeet.in: 12 July 2011 - 12:51 AM
http://www.dreamincode.net/forums/topic/239095-range-of-numbers-c-primer-problem/
CC-MAIN-2017-39
refinedweb
123
92.02
RDF::Query::Compiler::SQL - Compile a SPARQL query directly to SQL. This document describes RDF::Query::Compiler::SQL version 2.917. This module's API and functionality should be considered deprecated. If you need functionality that this module provides, please get in touch. new ( $parse_tree ) Returns a new compiler object. compile () Returns a SQL query string for the specified parse tree. emit_select Returns a SQL query string representing the query. limit_clause Returns a SQL LIMIT clause, or an empty string if the query does not need limiting. order_by_clause Returns a SQL ORDER BY clause, or an empty string if the query does not use ordering. variable_columns ( $var ) Given a variable name, returns the set of column aliases that store the values for the column (values for Literals, URIs, and Blank Nodes). add_variable_values_joins Modifies the query by adding LEFT JOINs to the tables in the database that contain the node values (for literals, resources, and blank nodes). patterns2sql ( \@triples, \$level, %args ) Builds the SQL query in instance data from the supplied @triples. $level is used as a unique identifier for recursive calls. %args may contain callback closures for the following keys: 'where_hook' 'from_hook' When present, these closures are used to add SQL FROM and WHERE clauses to the query instead of adding them directly to the object's instance data. expr2sql ( $expression, \$level, %args ) Returns a SQL expression for the supplied query $expression. $level is used as a unique identifier for recursive calls. %args may contain callback closures for the following keys: 'where_hook' 'from_hook' When present, these closures are used to add necessary SQL FROM and WHERE clauses to the query. _mysql_hash ( $data ) Returns a hash value for the supplied $data string. This value is computed using the same algorithm that Redland's mysql storage backend uses. _mysql_node_hash ( $node ) Returns a hash value (computed by _mysql_hash for the supplied $node. The hash value is based on the string value of the node and the node type. qualify_uri ( $uri ) Returns a fully qualified URI from the supplied $uri. $uri may already be a qualified URI, or a parse tree for a qualified URI or QName. If $uri is a QName, the namespaces defined in the query parse tree are used to fully qualify. add_function ( $uri, $function ) Associates the custom function $function (a CODE reference) with the specified URI, allowing the function to be called by query FILTERs. get_function ( $uri ) If $uri is associated with a query function, returns a CODE reference to the function. Otherwise returns undef. Gregory Williams <gwilliams@cpan.org>
http://search.cpan.org/~gwilliams/RDF-Query-2.917/lib/RDF/Query/Compiler/SQL.pm
CC-MAIN-2017-26
refinedweb
420
65.12
XML Schema is the W3C evolution of the DTD. It is complex but powerful, in wide use but not always popular. This hack will help you start writing schema in this format. XML Schema is a recommendation of the W3C, written in three parts. Part 0 is a nice little primer () that gets you started with the language. Part 1 describes the structures of XML Schema (); it is a long spec?about 200 pages long when printed?and is rather complex. Part 2 defines datatypes () and has been more gladly received than Part 1, though it is considered by some to be ad hoc and not without anomalies. XML mensch James Clark () has said of Part 1 that "it is without doubt the hardest to understand specification that I have ever read" (). Many others who have read the spec, or have attempted to read it, heartily agree with James. This is unfortunate, as it has placed many schema writers and companies in the uncomfortable position of using and supporting a difficult spec from the W3C, a widely accepted (though not always highly regarded) source. Happily, there are alternatives, such as RELAX NG [Hack #72] and tools such as Trang (), that can conveniently convert RELAX NG to XML Schema [Hack #76] . We'll start out by taking a look at the schema time.xsd, which was introduced but not explored in depth in [Hack #14] . It is displayed in Example 5-6. <?xml version="1.0" encoding="UTF-8"?> <xs:schema xmlns: <xs:element <xs:complexType> <xs:sequence> <xs:element <xs:element <xs:element <xs:element <xs:element <xs:complexType> <xs:attribute </xs:complexType> </xs:element> </xs:sequence> <xs:attribute </xs:complexType> </xs:element> </xs:schema> The document element of an instance of XML Schema is always schema (line 2). The namespace name is and the common prefix for the namespace is xs: (also line 2). Starting on line 4, the element time is declared. This is called a global element declaration (); because it is the only global declaration in the schema, the schema will anticipate that time will be the top-level or document element in an instance. The complexType element (line 5) indicates that its children may have complex content; that is, they can have attributes and element child content (). Contrariwise, elements with simple types cannot have attributes or element children. I think this terminology makes things harder to grasp than is necessary, but that's the way it is in XML Schema. On lines 6 through 16, the sequence element specifies the order in which elements must appear in an instance. So the element declarations (lines 7 through 15) for the elements hour, minute, second, meridiem, and atomic, must appear in that order. The element names are given in the name attributes of the element, and all but the atomic element will have a string datatype (), as indicated by the type attribute. Starting on line 11 is the declaration for the atomic element, which is different from the others. It is considered an anonymous type definition () because it is a complex type declaration without a name (that is, there is no name attribute on the complexType element start tag). The definition for time (starting on line 4) also is an anonymous type definition. atomic has a signal attribute (declared in the attribute element on line 13) whose type is string, and is required (hence the use attribute with a value of required). Finally, on line 17, the required timezone attribute is declared. This declaration, way down near the bottom of the schema, applies to the time element. Its type is string, and it is also required. Next, you need to become acquainted with the named complex type structure in XML Schema, as well as simple types. These structures can be named and reused. Example 5-7 shows a new version of our previous schema, complex.xsd, using these complex types and two derived simple types. <?xml version="1.0" encoding="UTF-8"?> <xs:schema xmlns: :attribute </xs:complexType> <xs:simpleType <xs:restriction <xs:pattern </xs:restriction> </xs:simpleType> <xs:simpleType <xs:restriction <xs:enumeration <xs:enumeration </xs:restriction> </xs:simpleType> </xs:schema> When the time element is declared on line 4, rather than using a built-in type, its type is set to be the complex type named Time, which starts on line 6 (you could use time instead of Time as the name and it would not conflict with the name time used in an element declaration). Note the ref attributes on lines 8 through 11, which refer to element declarations on lines 17 through 20 (this is superfluous, but serves to illustrate how ref works). On line 12, the element atomic is of type Atomic, a complex type that contains only an attribute declaration (line 22). The element declarations on lines 17, 18, and 19 are of type Digits, a simple type (line 26) that is a restriction of a string. The pattern facet element (line 28) restricts the content to two digits with the regular expression \d\d (). The meridiem element is of type AmPm, an enumeration () that can contain either of the values a.m. or p.m. (see line 33). Now let's validate time.xml against time.xsd or complex.xsd. There are a number of tools readily available to do this. We'll use three here: an online XSD Schema Validator, available from Got Dot Net (), and the command-line validators xmllint () and xsv (). In a web browser, go to (Figure 5-1). Click the Browse button next to the first text box, and the File Upload dialog box appears. Select time.xsd or complex.xsd from the working directory where the file archive was extracted, then click Open. Again, click the Browse button next to the third text box. Select time.xml in the File Upload dialog, and then click Open. Having selected both files, click the Submit button. Upon success, the browser will display the message "Validated OK!" and display the validated file. By selecting one or the other file alone, you can also use this service to check only an instance of an XML Schema for validity or only an XML document for well-formedness. The command-line tool xmllint was discussed and demonstrated in [Hack #9]. To use this tool to validate against XML Schema, all you need to do is use the --schema option. With xmllint installed and in the path, enter the command: xmllint --schema time.xsd time.xml or: xmllint --schema complex.xsd time.xml When successful, the validated instance is displayed, without reporting any errors. You can submit one or more XML instances at the end of the command line for validation xsv is an XML Schema validator that is available both online and as a command-line tool (). It was developed by Henry S. Thompson and Richard Tobin of the University of Edinburgh. It is available for the Windows platform (), in Python as an RPM (RPM Package Manager) package (), or as a tar ball (). Once xsv is installed and in your path, you can use it to validate time.xml with time.xsd by typing: xsv time.xml time.xsd or: xsv time.xml complex.xsd By default, xsv reports its validation results with an XML document, as shown here (for time.xsd): <?xml version='1.0'?> <xsv xmlns="" docElt="{None}time" instanceAssessed="true" instanceErrors="0" rootType="[Anonymous]" schemaDocs="time.xsd" schemaErrors="0" target="" validation="strict" version="XSV 2.6-2 of 2004/02/04 11:33:42"> <schemaDocAttempt URI="" outcome="success" source= "command line"/> </xsv> In the file archive there is a stylesheet that transforms this result into HTML; it's called xsv.xsl. To put it to work, use xsv with the -o switch for the output file and the -s switch for the XSLT stylesheet: xsv -o xsvresult.xml -s xsv.xsl time.xml time.xsd The -s switch places an XML stylesheet PI in the resulting file. You can then display the file in a browser that supports client-side XSLT, and it will be transformed as shown in Figure 5-2. Here are some additional interesting features from XML Schema. These three elements help you to construct content models. choice allows one of its children to appear in an instance, literally a choice of two or more options. group collects declarations into a single unit. all allows all children elements to appear once or not at all, in any order (). The annotation element, with its children appInfo and documentation, can annotate and document a schema or provide information about an application (). With the include element, you can include other schemas as part of a schema definition. The import element allows you to borrow definitions from other namespaces (see and). With the restriction and extension elements, it is possible to create new types by deriving from existing types. You can, for example, add additional elements to an existing complex type or restrict or change facets in a simple type (see and). You can match any element name with any and any attribute name with anyAttribute wildcards (). XML 1.0 gave us fixed and default values for attributes, and XML Schema extends that capability to elements by using the fixed and default attributes on the element declaration (). A list type in XML Schema allows you to define a whitespace-separated list of values in an attribute or element. Union types allow values that are of any one of a number of simple types, such as integer and string ( and). You can substitute one element for another. You can also create abstract elements. An abstract element can be the head of a substitution group, so that other elements can take its place, but it cannot be used within an XML document itself (). With the redefine element, you can redefine simple types, complex types, attribute groups, etc., from external schema files (). You can use ID, IDREF, and IDREFS to constrain the identity of elements and attributes in XML Schema. You can also constrain values within a scope so that they are unique (unique), unique and present (key), or refer to a unique or key constraint (keyref) ( and). This feature (xsi:nil in an instance and nillable="true" on an element declaration) allows you to give an element meaning to a nil value (). DecisionSoft's online schema validator: XML Schema, by Eric van der Vlist (O'Reilly) Definitive XML Schema, by Priscilla Walmsley (Prentice Hall PTR) xframe schema-based programming project: xframe xsddoc documentation toolkit for XML Schema:
https://etutorials.org/XML/xml+hacks/Chapter+5.+Defining+XML+Vocabularies+with+Schema+Languages/Hack+69+Validate+an+XML+Document+with+XML+Schema/
CC-MAIN-2021-31
refinedweb
1,756
63.7
You can subscribe to this list here. Showing 1 results of 1 Maybe you should use something like " i' <- treeModelSortConvertIterToChildIter sortList i " when using rows in a sorted model for retrieving data in an unsorted store. Works fine for me. /J -----Ursprungligt meddelande----- Från: Axel Simon [mailto:Axel.Simon@...] Skickat: den 10 september 2009 21:34 Till: Marcus Pedersén Kopia: Gtk2hs List Ämne: Re: [Gtk2hs-users] Reorder TreeView and model not updated On Sep 10, 2009, at 21:26, Marcus Pedersén wrote: > Yes, I am with you so far... > > using (as before): > selection <- treeViewGetSelection treeview > listOfInt <- treeSelectionGetSelectedRows selection > > I get my list of selected rows and I use my ListStore to get the > value: > value <- listStoreGetValue model row > > It works fine so far. > But when I start to sort my view by pressing the column titles the > order > in the view and the ListStore isn't the same and there is where I am > failing. > If I do like above I get the wrong row. > How do I keep control of the order between view and ListStore? You can't. The View can sort the rows according to your own sorting criteria. The Store always stays the same. Why would you want the user to sort the ListStore? Does treeSelectionGetSelectedRows return the row numbers as they occur in the TreeView? In that case, we have a bug to fix... Axel. > Thanks again! > Marcus > > tor 2009-09-10 klockan 20:22 +0200 skrev Axel Simon: >> Hi Marcus, >> >> On Sep 10, 2009, at 19:04, Marcus Pedersén wrote: >> >>> Sorry for stupid questions, but I do not seems to get it all right! >>> As I understand it I have to get the selected object from the >>> TreeView >>> and then search for it in the model. >>> But I can not figure out how to get the selected object out of the >>> TreeView. I can get the selected row number by using: >>> >>> selection <- treeViewGetSelection treeview >>> listOfInt <- treeSelectionGetSelectedRows selection >>> >>> But how do I get the object so I can compare and find it in my >>> model? >>> I realy must be missing somthing here!! >> >> First of all, make sure you use only modules form >> Graphics.UI.Gtk.ModelView.* not those under TreeList.* since those >> are deprecated. Your view must somehow be backed by a store in which >> the data reside. You can either use ListStore or TreeStore which are >> two models that handle a list of rows and a hierarchical arrangement >> of rows, respectively. If you use a ListStore, the selection will >> return a TreePath (a list of indices) which will have one element >> [i], indicating that the user selected the ith row in the ListStore. >> >> Is that what you're asking? >> >> Cheers, >> Axel. >> >>> Many, many thanks! >>> >>> Marcus >>> >>> >>> fre 2009-09-04 klockan 17:37 +0200 skrev Axel Simon: >>>> On Fri, 2009-09-04 at 16:11 +0200, Marcus Pedersén wrote: >>>>> Hi all! >>>>> I have started to find my way around the tables in gtk2hs, but one >>>>> problem I can not figure out is: >>>>> See code below for details. >>>>> When I reorder my table by clicking column title the model and >>>>> the view >>>>> does not get the same ordering. >>>> >>>> The model never changes! You can have several TreeViews using the >>>> same >>>> model. In each view, the user can reorder the columns in a >>>> different >>>> way. Note that you can insert several CellRenderer into the same >>>> column >>>> of a TreeView, thus there generally is no one-to-one correspondance >>>> between columns in the model and those in the view. >>>> >>>> Note that you should not use the modules under TreeList.* but only >>>> those >>>> under ModelView.*. The former are deprecated. Using the latter, you >>>> usually don't need column numbers in the model anymore since each >>>> in the >>>> ListStore is a simply Haskell value. >>>> >>>> Cheers, >>>> Axel. >>>> >>>>> What am I doing wrong?? >>>>> >>>>> Many thanks! >>>>> Marcus >>>>> >>>>> >>>>> I have written a test list store: >>>>> model <- listStoreNew >>>>> [ User (Just "192.168.1.1") (Just "192.168.1.0") (Just >>>>> "255.255.255.0") Nothing "Per" "Pelle" (Just "Chef") (Just >>>>> "Uppsala") >>>>> Available Male, >>>>> User (Just "192.168.1.2") (Just "192.168.1.0") (Just >>>>> "255.255.255.0") (Just "Debian2") "Arne" "Banan" (Just "Chef") >>>>> (Just >>>>> "Uppsala") Offline Male, >>>>> User (Just "192.168.1.3") (Just "192.168.1.0") (Just >>>>> "255.255.255.0") (Just "Debian3") "Jenny" "Li" (Just "Worker") >>>>> (Just >>>>> "Uppsala") Busy Female ] >>>>> >>>>> Made a sort model: >>>>> >>>>> sortModel <- treeModelSortNewWithModel model >>>>> treeSortableSetSortFunc sortModel 1 $ \iter1 iter2 -> do >>>>> a <- treeModelGetRow model iter1 >>>>> b <- treeModelGetRow model iter2 >>>>> return (compare (username a) (username b)) >>>>> treeSortableSetSortFunc sortModel 2 $ \iter1 iter2 -> do >>>>> a <- treeModelGetRow model iter1 >>>>> b <- treeModelGetRow model iter2 >>>>> return (compare (nickname a) (nickname b)) >>>>> >>>>> Made a treView with my sortmodel: >>>>> >>>>> view <- treeViewNewWithModel sortModel >>>>> >>>>> >>>>> ------------------------------------------------------------------ >>>>> -- >>>>> ---------- >>>>>@...
http://sourceforge.net/p/gtk2hs/mailman/gtk2hs-users/?viewmonth=200909&viewday=15
CC-MAIN-2015-48
refinedweb
776
64.91
How many local maximus an multidimensional array has. An element of the array is maximally local, if it is strictly larger than all its neighbors. I know that each number should be compared with i-1 , i+1 for left and right neighbours and j-1 , j+1 for upper and lower neighbours but is there a .. Category : multidimensional-array int a[2][2] = {{1, 2}, {3, 4}}; cout << a << endl; cout << *a << endl; cout << &a[0][0] << endl; The output of this code is: 0x7fff3da96f40 0x7fff3da96f40 0x7fff3da96f40 However, if a is 0x7fff3da96f40 then *a should be the value in the address 0x7fff3da96f40 which is 1. But, we get the same address .. I have created a program to get and display two 2d arrays, and it is working properly but at the end of the program I’m getting an error in my program. Error:- Process finished with exit code -11. Can anyone tell why this error occurs? My code:- #include <iostream> using namespace std; int ro1,co1,ro2,co2; int .. My main code is .ino file for STM32F103C8T6 with STM32 official core. I have also included my library files of .h file and .cpp file. I want store a values in 2d array called uint32_t Page[15][14]; in .h file of my library How to store a value in 2d array variable during runtime. I have .. this is my first time ever using MPI (I am also quite new to C++ as well) and I am trying to do this challenge for a course and I am very confused. I think I have to be overthinking it, but basically I have to broadcast the given array to each node, have each .. I’ creating a program (just for didactic use) that solves the 8 queens puzzle, where we should set 8 queens on a chessboard avoiding conflicts. After applying some OOD I decided to create 3 classes : Chessboard, Square and Queen. Obviously Chessboard is a composition of 64 (8×8) Square, we could argue on Queenclass utility .. I have been doing some CP practice and I’ve came over some small problems. When I want to make a dynamic two dimensional vector (in this case a 3 rowed two dimensional vector). I have trouble finding the length of a specific row (and the last element). int Pweightboost = 0; int Tempweightboost = 0; .. I’m new to C++ and I want to create a function that takes an empty array of a certain size as default value. Basically the array should act like a storage for the iterative function calls and the function will return some value in that array at the end. I need an empty array because .. My goal is to make simple sand simulation using c++ and OpenGL. Right now my plan is to have a 2d array of pixel colors and a texture the same size. To simulate sand I will update the array accordingly to the sand coordinates and where it has to travel. I’m thinking of sending the .. I am currently trying to convert my small regular expression engine from C to C++. To discard syntactically incorrect regexes in a compact way, I use a 2D array to define what kinds of tokens are allowed after one another: #define NUMBER_OF_TOKEN_KINDS 15 typedef enum { Literal, Alternator, … } TokenKind; bool grammar_allowed[NUMBER_OF_TOKEN_KINDS][NUMBER_OF_TOKEN_KINDS] = { .. Recent Comments
https://windowsquestions.com/category/multidimensional-array/
CC-MAIN-2021-43
refinedweb
558
63.39
Lesson 1 - Show the Print Preview for a Link - 2 minutes to read This lesson illustrates how to create a basic printing link that creates an empty document, and how to create a Print Preview in a Windows Forms Application to show this document. TIP A complete sample project is available in the DevExpress Code Examples database at. To create a Print Preview and show a document in it, do the following. - Create a new Windows Forms Application in Visual Studio 2012, 2013, 2015 or 2017. To add a print preview to your application, switch to the application's main form in Visual Studio, and press CTRL+ALT+X to run the Toolbox. Expand the DX.18.2: Reporting category and drop the DocumentViewer control onto the form. Click the smart tag of the control, and in the invoked actions list, select a toolbar that matches the user interface style of the rest of your application. In this tutorial, the ribbon toolbar is preferred over the minimal bar interface. To assign a document source to the control, click its smart tag again. In the drop-down menu of the DocumentViewer.DocumentSource property, expand the Standard Sources category and select PrintingSystem. Press F7 to switch to the code view, and declare a new public class (called Lesson1), inherited from the Link class. using DevExpress.XtraPrinting; // ... public class Lesson1 : Link { public Lesson1(PrintingSystem ps) { CreateDocument(ps); } } Each subsequent lesson will create a class derived from the class created in the preceding lesson. Calling the Link.CreateDocument method will generate a document after making changes to it in code. Handle the main form's Load event and add the following code to the event handler. using System; using System.Windows.Forms; using DevExpress.XtraPrinting; // ... private void Form1_Load(object sender, System.EventArgs e) { Lesson1 lesson = new Lesson1(printingSystem1); } In this code, the Printing System of the Document Viewer is passed, and assigned to the created printing link. Launch the application and view the result.
https://docs.devexpress.com/WindowsForms/90/controls-and-libraries/printing-exporting/getting-started/lesson-1-show-the-print-preview-for-a-link?v=18.2
CC-MAIN-2021-10
refinedweb
330
57.06
So I’m learning the latest Rails release by converting an existing app to it. This app used namespaced controllers for its admin section. New routes features have really cleaned this up. Before map.resources :users, :controller => 'admin/users', :name_prefix => 'admin_', :path_prefix => 'admin' After map.namespace(:admin) do |admin| admin.resources :users end Much cleaner. Then I began to take advantage of Rails 2.0’s smarter #form_for courtesy of simple_helpful in the non-admin portion of the app. <% form_for @user do |form| %> <% end %> If that user object is a #new_record? that #form_for is going to generate the following html: <form action="/users" method="post" class="new_user" id="new_user"> </form> If that user object is not a #new_record? that #form_for is going to generate the following html (assuming its id is 1): <form action="/users/1" method="post" class="edit_user" id="edit_user_1"> <div style="margin:0;padding:0"><input name="_method" type="hidden" value="put" /></div> </form> Now let’s look at the #new and #edit forms for the admin section of the site. <% form_for [:admin, @user] do |form| %> <% end %> This will generate the same html as above except with ’/admin’ being prefixed onto each action attribute value. [:admin, @user] Ooo that is ugly. Some more examples. redirect_to [:admin, @user] # redirect_to admin_user_url link_to 'show', [:admin, @user] # link_to 'show', admin_user_url This anonymous Array syntax seems like such a quick hack, like they forgot about namespaced controllers. With all the nice interfaces in Rails, this one really sticks out. I’m willing to bet this becomes deprecated in the near future. In the meantime I’m going to stick with the much better looking old style. <% form_for @user, :url => admin_user_url do |form| %> <% end %> However, this sucks because I’d really like to take advantage of these smarter #form_for, #redirect_to and #link_to versions but I consider consistency in my code more important. I’d rather see 1 consistent way of using named routes instead of using them solely in the namespaced controller sections of an app in order to avoid this new ugly syntax. This would probably be a lot cleaner in php.
http://robots.thoughtbot.com/tagged/namespaced
crawl-003
refinedweb
350
56.05
- NAME - SYNOPSIS - CLEARTOOL ENHANCEMENTS - GENERAL FEATURES - CONFIGURABILITY - DIAGNOSTICS - INSTALLATION NAME ClearCase::Wrapper - General-purpose wrapper for cleartool SYNOPSIS This perl module functions as a wrapper for cleartool, allowing its command-line interface to be extended or modified. It allows defaults to be changed, new flags to be added to existing cleartool commands, or entirely new commands to be synthesized. CLEARTOOL ENHANCEMENTS EXTENSIONS A pseudo-command which lists the currently-defined extensions. Use with -long to see which overlay module defines each extension. Note that both extensions and their aliases (e.g. checkin and ci) are shown. CI/CHECKIN Extended to handle the -dir/-rec/-all/-avobs flags. These are fairly self-explanatory but for the record -dir checks in all checkouts in the current directory, -rec does the same but recursively down from the current directory, -all operates on all checkouts in the current VOB, and -avobs on all checkouts in any VOB. Extended to allow symbolic links to be checked in (by operating on the target of the link instead). Extended to implement a -diff flag, which runs a diff -pred command before each checkin so the user can review his/her changes before typing the comment. Implements a new -revert flag. This causes identical (unchanged) elements to be unchecked-out instead of being checked in. Implements a new -mkhlink flag. This works in the context of the -revert flag and causes any inbound merge hyperlinks to an unchanged checked-out element to be copied to its predecessor before the unchanged element is unchecked-out. Since checkin is such a common operation a special feature is supported to save typing: an unadorned ci cmd is promotedto ci -dir -me -diff -revert. In other words typing ct ci will step through each file checked out by you in the current directory and view, automatically undoing the checkout if no changes have been made and showing diffs followed by a checkin-comment prompt otherwise. CO/CHECKOUT Extended to handle the -dir/-rec flags. NOTE: the -all/-avobs flags are disallowed for checkout. Also, directories are not checked out automatically with -dir/-rec. DIFF Extended to handle the -dir/-rec/-all/-avobs flags. Improved default: if given just one element and no flags, assume -pred. Extended to implement -n, where n is an integer requesting that the diff take place against the n'th predecessor. DIFFCR Extended to add the -data flag, which compares the contents of differing elements and removes them from the output if the contents do not differ. EDIT/VI Convenience command. Same as 'checkout' but execs your favorite editor afterwards. Takes all the same flags as checkout, plus -ci to check the element back in afterwards. When -ci is used in conjunction with -diff the file will be either checked in or un-checked out depending on whether it was modified. The aggregation flags -dir/-rec/-all/-avo may be used, with the effect being to run the editor on all checked-out files in the named scope. Example: "ct edit -all". LSPRIVATE Extended to recognize -dir/-rec/-all/-avobs. Also allows a directory to be specified such that 'ct lsprivate .' restricts output to the cwd. This directory arg may be used in combination with -dir etc. The -eclipsed flag restricts output to eclipsed elements. The flag -type d|f is also supported with the usual semantics (see cleartool find). The flag -visible flag ignores files not currently visible in the view. Output is relative to the current or specified directory if the -rel/ative flag is used. The -ext flag sorts the output by extension. LSVIEW Extended to recognize the general -me flag, which restricts the searched namespace to <username>_*. MKELEM Extended to handle the -dir/-rec flags, enabling automated mkelems with otherwise the same syntax as original. Directories are also automatically checked out as required in this mode. Note that this automatic directory checkout is only enabled when the candidate list is derived via the -dir/-rec flags. If the -ci flag is present, any directories automatically checked out are checked back in too. By default, only regular (-other) view-private files are considered by -dir|-rec. The -do flag causes derived objects to be made into elements as well. If -ok is specified, the user will be prompted to continue after the list of eligible files is determined. When invoked in a view-private directory, mkelem -dir/-recwill traverse up the directory structure towards the vob root until it finds a versioned dir to work from. Directories traversed during this walk are added to the list of new elements. UNCO Extended to accept (and ignore) the standard comment flags for consistency with other cleartool cmds. Extended to handle the -dir/-rec/-all/-avobs flags. Extended to operate on ClearCase symbolic links. GENERAL FEATURES symlink expansion Before processing a checkin or checkout command, any symbolic links on the command line are replaced with the file they point to. This allows developers to operate directly on symlinks for ci/co. -M flag As a convenience feature, the -M flag runs all output through your pager. Of course "ct lsh -M foo"saves only a few keystrokes over "ct lsh foo | more" but for heavy users of shell history the more important feature is that it preserves the value of ESC-_ ( ksh -o vi) or !$ (csh). The CLEARCASE_WRAPPER_PAGER EV has the same effect. This may not work on Windows, though it's possible that a sufficiently modern Perl build and a smarter pager than more.com will do the trick. -P flag The special -P flag will cause ctto pause before finishing. On Windows this means running the built in pausecommand. This flag is useful for plugging ClearCase::Wrapper scripts into the CC GUI. -me -tag Introduces a global convenience/standardization feature: the flag -me in the context of a command which takes a -tag view-tag causes "$LOGNAME" to be prefixed to the tag name with an underscore. This relies on the fact that even though -me is a native cleartool flag, at least through CC 7.0 no command which takes -tag also takes -me natively. For example: % <wrapper-context> mkview -me -tag myview ... The commands setview, startview, endview, and lsview also take -me, such that the following commands are equivalent: % <wrapper-context> setview dboyce_myview % <wrapper-context> setview -me myview CONFIGURABILITY Various degrees of configurability are supported: Global Enhancements and Extensions To add a global override called 'cleartool xxx', you could just write a subroutine 'xxx', place it after the __END__ token in Wrapper.pm, and re-run 'make install'. However, these changes wcould be lost when a new version of ClearCase::Wrapper is released, and you'd have to take responsibility for merging your changes with mine. Therefore, the preferred way to make site-wide customizations or additions is to make an overlay module. ClearCase::Wrapper will automatically include ('require') all modules in the ClearCase::Wrapper::* subclass. Thus, if you work for TLA Corporationyou should put your enhancement subroutines in a module called ClearCase::Wrapper::TLA and they'll automatically become available. A sample overlay module is provided in the ./examplessubdir. To make your own you need only take this sample, change all uses of the word 'MySite' to a string of your choice, replace the sample subroutine mysite()with your own, and install. It's a good idea to document your extension in POD format right above the sub and make the appropriate addition to the "Usage Message Extensions" section. Also, if the command has an abbreviation (e.g. checkout/co) you should add that to the "Command Aliases" section. See ClearCase::Wrapper::DSB for examples. Two separate namespaces are recognized for overlays: ClearCase::Wrapper::* and ClearCase::Wrapper::Site::*. The intent is that if your extension is site-specific it should go in the latter area, if of general use in the former. These may be combined. For instance, imagine TLA Corporation is a giant international company with many sites using ClearCase, and your site is known as R85G. There could be a ClearCase::Wrapper::TLA overlay with enhancements that apply anywhere within TLA and/or a ClearCase::Wrapper::Site::R85G for your people only. Note that since overlay modules in the Site namespace are not expected to be published on CPAN the naming rules can be less strict, which is why TLAwas left out of the latter module name. Overlays in the general ClearCase::Wrapper::* namespace are traversed before ClearCase::Wrapper::Site::*. This allows site-specific configuration to override more general code. Within each namespace modules are read in standard ASCII sorted alphabetical order. All override subroutines are called with @ARGV as their parameter list (and @ARGV is also available directly of course). The function can do whatever it likes but it's recommended that ClearCase::Argv be used to run any cleartool subcommands, and its base class Argv be used to run other programs. These modules help with UNIX/Windows portability and debugging, and aid in parsing flags into different categories where required. See their PODs for full documentation, and see the supplied extensions for lots of examples. Personal Preference Setting As well as allowing for site-wide enhancements to be made in Wrapper.pm, a hook is also provided for individual users to set their own defaults. If the file ~/.clearcase_profile.plexists it will be read before launching any of the sitewide enhancements. Note that this file is passed to the Perl interpreter and thus has access to the full array of Perl syntax. This mechanism is powerful but the corollary is that users must be experienced with both ClearCase and Perl, and to some degree with the ClearCase::Wrapper module, to use it. Here's an example: % cat ~/.clearcase_profile.pl require ClearCase::Argv; Argv->dbglevel(1); ClearCase::Argv->ipc(2); The purpose of the above is to turn on ClearCase::Argv "IPC mode" for all commands. The verbosity (Argv->dbglevel) is only set to demonstrate that the setting works. The require statement is used to ensure that the module is loaded before we attempt to configure it. Sitewide ClearCase Comment Defaults This distribution comes with a file called clearcase_profile which is installed as part of the module. If the user has no clearcase_profile file in his/her home directory and if CLEARCASE_PROFILE isn't already set, CLEARCASE_PROFILE will automatically be pointed at this supplied file. This allows the administrator to set sitewide defaults of checkin/checkout comment handling using the syntax supported by ClearCase natively but without each user needing to maintain their own config file or set their own EV. CLEARCASE_WRAPPER_NATIVE This environment variable may be set to suppress all extensions, causing the wrapper to behave just like an alias to cleartool, though somewhat slower. DIAGNOSTICS The flag -/dbg=1 prints all cleartool operations executed by the wrapper to stderr as long as the extension in use was coded with ClearCase::Argv, which is the case for all supplied extensions. INSTALLATION I recommend you install the cleartool.plx file to some global dir (e.g. /usr/local/bin), then symlink it to ct or whatever short name you prefer. For Windows the strategy is similar but requires a "ct.bat" redirector instead of a symlink. See "examples/ct.bat" in the distribution. Unfortunately, there's no equivalent mechanism for wrapping GUI access to clearcase. To install or update a global enhancement you must run "make pure_all install" - at least that's what I've found to work. Also, don't forget to check that the contents of lib/ClearCase/Wrapper/clearcase_profile are what you want your users to have by default. Copyright (c) 1997-2006 David Boyce (dsbperl AT boyski.com). All rights reserved. This Perl program is free software; you may redistribute it and/or modify it under the same terms as Perl itself.
https://metacpan.org/pod/distribution/ClearCase-Wrapper/Wrapper.pm
CC-MAIN-2018-39
refinedweb
1,968
64
Intro (0:00) What is snapshot testing? In snapshot testing, a view is rendered and saved into a repository. When the test is run, the tool re-creates and re-renders that view and does a comparison. What can you test? You can test states of your views, such as when it’s highlighted, or perhaps with more text. You can also test your view against different screen sizes. Demo (3:19) I’ve created a simple example with a view controller. The view has a title that animates, a body, and a button. This project is on GitHub We’ll use SnapshotEngine as a wrapper, and Nimble-Snapshots to help with writing our tests. Get more development news like this Create a class that inherits from QuickSpec. import UIKit import Quick import Nimble import Cartography @testable import cmduconf class ST_View: QuickSpec { // Note: Return 'true' to regenerate the snapshots of this class override var recordingMode: Bool { return false } let delegate = ViewController() override func spec() { describe("A View") { context("on a 4' screen") { let view = View(delegate: self.delegate) constrain(view) { view in view.width == 320 view.height == 568 } it("has a valid snapshot") { expect(view).to(self.validateSnapshot()) } } context("on a 5.5' screen") { let view = View(delegate: self.delegate) constrain(view) { view in view.width == 414 view.height == 736 } it("has a valid snapshot") { expect(view).to(self.validateSnapshot()) } } } } } This test will fail because we don’t have a snapshot saved. Change the following line to true to save a snapshot to the repository. // Note: Return 'true' to regenerate the snapshots of this class override var recordingMode: Bool { return true } By changing the boolean back to false, the test will pass. What happens under the hood? It creates a snapshot by capturing the view and saving it to the repository. If the color of the view is changed, the test will fail. These tests are created by a Facebook library, called FB Snapshot Test Case. The library creates a CGContextRef, and then creates another based on the saved snapshot. By rendering the view again, it compares both images on a memory level with a C function. Concerns (10:45) - Architecture - You can test your view controllers, but if you isolate your views and separate them from the view controllers, that will be ideal. - Asynchronicity - If your views need network connectivity to work properly, it will be hard to test. To combat this, try to have methods that load a view without requiring a network connection. - Autolayout - Use autolayout. Views without autolayout requires the frame to be set. - Repo Space - Testing this way takes up repo space which can get clogged, and can take longer if its a large project. Questions (16:35) How can snapshot testing deal with animations? You cannot test if something will change. You can only test the view in a fixed moment in time. What is the difference between using snapshot testing versus UI testing for Xcode? Firstly, snapshot testing is much faster. It runs with your unit tests so it’s not as if it runs the unit test and then runs the UI test. Secondly it doesn’t have to run the app so it’s faster that way. About the content This talk was delivered live in July 2016 at CMD+U Conference. The video was transcribed by Realm and is published here with the permission of the conference organizers.
https://academy.realm.io/posts/cmdu-conf-luis-ascorbe-ui-and-snapshottesting/?utm_source=Swift_Developments&utm_medium=email&utm_campaign=Swift_Developments_Issue_79
CC-MAIN-2021-10
refinedweb
568
67.45
Hi Guys When ever i am changing anything in the XHTML .. always i need to clear the Cache ... is any there solution to avoid this... please try to solve this problem asap . its will be very useful for me Regards Chinna Which project stage is your JSF application in? If you don't have it set, it's automatically set to PRODUCTION, which doesn't check to for changes to the XHTML files. Use this if you need to set it to developement: <context-param> <param-name>javax.faces.PROJECT_STAGE</param-name> <param-value>Development</param-value> </context-param> Hi sir ... thanx for Reply i am getting this warning while login of my page can u let me know whats the problem ... and i pasted the above content in web.xml Warning: This page calls for XML namespace declared with prefix link but no taglibrary exists for that namespace. Your image didn't come through, I don't see any paste.
https://developer.jboss.org/thread/199479
CC-MAIN-2017-43
refinedweb
161
76.32
TCO Semifinals 2 Editorial The competitors of this round are _aid, jcvb, Kankuro, kcm1700, krijgertje, ksun48, Petr, and qwerty787788. The easy is a tricky greedy problem that is easy to get wrong because of cases that need to be carefully considered. The medium is a constructive problem that is hard to analyze at first but has some solutions that are both easy to implement and verify. The last is a very tricky constructive problem that has a lot of small incremental steps that contestants can make progress on, but getting a full solution that covers all cases is hard. Congratulations to the advancers: - qwerty787788 - ksun48 - Petr - _aid NextLuckyNumber (misof): In this problem, we are given integers N,K,d, and we want to find the smallest integer M such that M > N and M has exactly K occurences of the digit d. There are two main cases to be aware of, if d is zero, or d is nonzero. Let’s first construct the smallest number that consists of exactly k occurrences of the digit d. This is just the number ddd…d (k times), except for when d is zero, in which case we can prepend the number with a 1. We can check if this smallest number is strictly bigger than N, and if so, we can return it. Now, let’s try to see if we can get an answer that is the same length as N. We can try all possibilities of the length of the common prefix (i.e., number of most significant digits they share), then all possibilities for the next digit in the new number. Once those are fixed, we have the guarantee that the new number is bigger and we can greedily fill in the missing digits (i.e with 0s and the number of ds needed if d is nonzero, otherwise, with 1s and the number of 0s we need). If we iterate from longest common prefix to shortest, and increment the next number from smallest to largest, we guarantee we are iterating through candidates in increasing order so we can greedily return the first one that we can feasibly fill. If none of these options work, then M must have more digits than N. Since we already handled the first case earlier, we know that M must be longer than N by exactly one digit. We can start M with 1 and greedily fill in the rest (using a similar strategy to the second case). An implementation is shown below: class NextLuckyNumber: def getTicket(self, lastTicket, age, digit): lo = str(digit) * age if digit == 0: lo = "1" + lo if int(lo) > lastTicket: return lo s = str(lastTicket) for prefix in range(len(s)-1, -1, -1): have = s[:prefix].count(str(digit)) for bdig in range(int(s[prefix])+1, 10): nhave = have + int(bdig == digit) if nhave <= age and nhave + len(s)-1-prefix >= age: need = max(0, age - nhave) rem = len(s) - 1 - prefix if digit == 0: return s[:prefix] + str(bdig) + "0" * need + "1" * (rem - need) return s[:prefix] + str(bdig) + "0" * (rem - need) + str(digit) * need if digit == 0: return int("1" + "0" * age + "1" * (len(s) - age)) return int("1" + "0" * (len(s) - age + int(digit == 1)) + str(digit) * (age - int(digit == 1))) VennDiagrams (misof): In this problem, we want to construct a Venn diagram with n different categories: a drawing in which each intersection of categories corresponds to some nice connected area. Drawing the smallest Venn diagram is surprisingly hard, but in this problem we have a lot of freedom, so we can come up with a construction that will be easy to implement. If you could do any dimension rectangle, one easy 2-approximation algorithm is to construct a 2 x (2^n) bitmap where the first row is {0,1,2,…,2^n – 2,2^n – 1} and the second row is full of (2^n – 1). All exact intersections of sets correspond to pixels in the first row (and to the entire bottom row), while each superset of sets contains the entire second row and some subset of the first row so it looks like a comb. When we have to fit our solution into a 50×50 matrix, all we need is to be a bit more creative with the shape of the “backbone”. In the solution below, we have the following pattern for the (2^n – 1) cells: *.......... *********** *.......... *.......... *.......... *********** *.......... *.......... *.......... *********** *.......... (full left column + full rows that are 4 apart) Remember that this pattern will be put on an infinite 2d grid with all zeros, so the zero cells are still connected. This leaves plenty of room to attach all the other 1-pixel sets and to make sure that they can’t form holes. There are better constructions in terms of guaranteed approximation ratio but the constraints were loose enough so those were not needed. class VennDiagrams: def construct(self,N): ret = [[0 for __ in xrange(50)] for ___ in xrange(50)] tot = (1 << N) - 1 for i in xrange(50): ret[i][0] = tot for i in xrange(0,50,4): for j in xrange(0,50): ret[i][j] = tot cnt = 0 for i in xrange(1,50,2): for j in xrange(1,50): ret[i][j] = 0 if cnt >= (1 << N) else cnt cnt += 1 ans = [50,50] for i in xrange(50): ans += ret[i] return ans RearrangingBoxes (monsoon): You originally have A x B x H cubes arranged in a cuboid with A rows, B columns, and H height. You would like to remove K of them so that the resulting arrangement is still connected and has the same surface area as the original cuboid (i.e. S = 2(AB + AH + BH)). Each cube’s bottom face must also touch the ground or another cube directly. The idea is that we start from the A x B x H cuboid and we iteratively remove cubes from it, at all times maintaining the constant surface area S = 2(AB + AH + BH). Assume wlog that A ≤ B. First of all observe that in theory the minimal volume we can achieve is (in particular if S – 2 is divisible by 4 it is achieved by a cuboid of size ). It will turn out that we can obtain this minimum satisfying the task constraints (that is the solid must be formed from towers in a limited space), except for special case when A = 1 and B is even (it can be shown that then ). The rough idea is simple: if we treat the solid as a graph (cubes are vertices, and two vertices are connected by an edge if corresponding cubes share a face), then we achieve minimal volume not only for a line graph of size , but also for any tree of size . Thus if then there is no solution. Otherwise, we will proceed removing cubes. Observe that removing one cube containing exactly one vertex of the cuboid reduces volume by 1 and does not change surface area. In fact, in the same way we can remove from the cuboid a cuboid of size (A – 1) x (B – 1) x (H – 1), removing cubes one by one in layers. This leaves us “floor” of size A x B and two “walls” of total size (A + B – 1) x (H – 1). Next observe that we can remove a wall cube that has three neighbors (again this does not change surface area). Thus we can totally remove towers of size H – 1 from the walls (we just remove every second tower). So if A + B – 1 is odd, every tower is connected only to the floor. (We deal with A + B – 1 even later). Next we can do the same idea with (A – 1) x (B – 1) part of the floor, by removing segments of length B – 1. If A – 1 is even we are done: the remaining cubes form a tree graph, so the volume is in fact minimal (equal to ). If A – 1 is even but B – 1 is odd we can do symmetric thing (see left picture below with A = 5, B = 7, where we observe the warehouse from the top, light gray cells denote towers of height 1, dark gray cells denote towers of height H). If A – 1 and B – 1 are odd, we can create an almost tree like in the right picture below (almost tree is the best we can get here, since A and B are even, thus S is divisible by 4): In this picture, the dark squares are towers of height H, the gray squares are towers of height 1, and the white squares are towers of height 0. So the only case left is what to do with two towers of size H next to each other when A + B – 1 is even. If A = B (and thus B is even) we cannot do anything (thus this is the special case of greater we have mentioned before). Otherwise wlog let’s assume that B is even and we have a following picture (with A = B, B = 4) where two towers of height H are next to each other: Call these two towers TL (left) and TR (right). If A ≥ 5 or B & ge; 4 we have at least one tower (call it T) of height 1 which does not have a neighboring tower of height H. If we remove two cubes from TR and add one cube to T, the volume reduces by 1, and the surface area doesn’t change. If this leaves TR of size 2, then it means that was even, thus S divisible by 4 and we are done. This needs special treatment for A = 3 and B = 2, but it also can be done. Time complexity of the algorithm is O(AB). Sample code: public class RearrangingBoxes { public int[] rearrange(int A, int B, int H, long K) { long V = (long)A*B*H; boolean swapped = false; if (A > B) { int temp = A; A = B; B = temp; swapped = true; } long Area = 2*((long)A*B + (long)A*H + (long)B*H); long minV = ((Area-2) + 2)/4; if (A == 1 && B%2 == 0) { minV += (H-1)/2; } if (V-K < minV) { return new int[0]; } int[][] sol = new int[A][B]; for (int a=0; a<A; ++a) { for (int b=0; b<B; ++b) { sol[a][b] = H; } } if (A > 1) { long w = Math.min(H-1, K / ((A-1)*(B-1))); for (int a=0; a<A-1; ++a) { for (int b=0; b<B-1; ++b) { sol[a+1][b+1] -= w; } } K -= (A-1)*(B-1)*w; if (w < H-1) { for (int a=A-2; a>=0 && K > 0; --a) { for (int b=B-2; b>=0 && K > 0; --b) { sol[a+1][b+1]--; K--; } } } } int cols = (A+B-2)/2; for (int i=0; i<cols && K > 0; ++i) { long ile = Math.min(H-1, K); K -= ile; int a = i<A/2 ? A-2-2*i : 0; int b = i<A/2 ? 0 : 2*(i-A/2)+1+(A+1)%2; sol[a][b] -= ile; } cols = (B-1)/2; for (int i=0; i<cols && K > 0; ++i) { long ile = Math.min(A-1, K); K -= ile; for (int j=0; j<ile; ++j) { sol[A-1-j][1+2*i]--; } } if (B%2 == 0) { int rows = (A-1)/2; for (int i=0; i<rows && K > 0; ++i) { sol[1+2*i][B-1]--; K--; } } if (A == 2 && (A+B-1)%2 == 0) { if (K > 0) { assert(sol[0][B-1] == H); if ((H-1)%2 == 1) { if (B >= 5) { sol[1][B-3]--; sol[1][B-4]++; sol[0][B-1]--; sol[1][B-1]++; } else { assert(B == 3); sol[0][0]--; sol[0][B-1]--; sol[1][1]++; sol[1][B-1]++; } } int ile = (sol[0][B-1]-1) / 2; sol[0][B-1] -= ile; sol[1][B-1] += ile; for (int i=0; i < ile && K > 0; ++i) { sol[0][B-1]--; K--; } } } else if (A >= 3 && (A+B-1)%2 == 0) { int ile = (sol[0][B-1]-1) / 2; for (int i=0; i < ile && K > 0; ++i) { sol[0][B-1] -= 2; sol[A-1][B-1]++; K--; } } else { assert(K==0); } int[] answer = new int[A*B]; for (int i=0; i<A*B; ++i) { answer[i] = swapped ? sol[i%A][i/A] : sol[i/B][i%B]; } return answer; } };
https://www.topcoder.com/blog/tco-semifinals-2-editorial/
CC-MAIN-2019-39
refinedweb
2,082
68.54
(This article was first published on mages' blog, and kindly contributed to R-bloggers) Earlier this week we released googleVis 0.5.5 on CRAN. The package provides an interface between R and Google Charts, allowing you to create interactive web charts from R. This is mainly a maintenance release, updating documentation and minor issues. New to googleVis? Review the examples of all googleVis charts on CRAN. Perhaps the best known example of the Google Chart API is the motion chart, popularised by Hans Rosling in his 2006 TED talk. R Code.5 WDI_2.4 RJSONIO_1.3-0 loaded via a namespace (and not attached): [1] tools_3.1.1...
https://www.r-bloggers.com/googlevis-0-5-5-released/
CC-MAIN-2017-43
refinedweb
109
67.25
Cool React scripts and functions Let's write and explore some scripts and things that will help you in working with React. This post will be updated frequently, and some new things will be added in the future. Let's write and explore some scripts and things that will help you in working with React. This post will be updated frequently, and some new things will be added in the future. useEffect We are all using useEffect in our React application. What I find annoying is that sometimes I don't need useEffect called on the initial render, but only on an update. This hook does just that. It won't trigger on the initial render, e.g when the app first loads. You can just call this function instead of calling useEffect directly. The syntax will be the same as with useEffect. use-effect-update.js import React, { useEffect, useRef } from "react" const useEffectUpdate = (func, deps) => { const didMount = useRef(false) useEffect(() => { if (didMount.current) func() else didMount.current = true }, deps) } export default useEffectUpdate withAuth Let's say that your app has some kind of authentication. Naturally, users that are not logged in won't be able to access every page on your website. HOC (Higher Order Component) is a perfect solution for that. withAuth.js const withAuth = (Component, options = {}) => ({ ...initialProps }) => { const history = useHistory(); // here you can check if the current user is logged-in and return true or false to this variable const loggedIn = false // you can send anything to the options parameter, and make your own custom logic for different things const { forbidden } = options // if user is not authenticated, and that page is forbidden then send him to "/login" page if (!loggedIn && forbidden) { history.push('/login'); return <Fragment /> } return <Component {...initialProps} user={loggedIn} /> } export default withAuth When you have a page that you want to protect you just do this import React from "react" import withAuth from "./withAuth" const ProtectedPage = () => { return (...) } export default withAuth(ProtectedPage, { forbidden: true }) routes This one probably goes without saying, but as someone who made this mistake when starting, I want to share it anyway. You should always have one file with all API routes defined in there. routes.js export const API = { BASE: "", USERS: "users/", POSTS: "posts/" } After registering all routes you can use them through your app like this const data = fetch(API.BASE.USERS) You can also destructure items from API and use them const { BASE, USERS } = API const data = fetch(BASE.USERS) This is useful for 2 situations. - You always have all routes in one file, which is much easier to maintain than to have strings all over the project. - If the route changes you have to edit it only in one place, and then the whole project will adopt a new endpoint. useApi This hook will make it easier for you to handle errors, loading state, and data when fetching from a REST API endpoint. You can also pass custom triggers so that useEffect will only be called when that trigger changes, but this should do the job for start at least. import { useEffect, useState } from 'react'; function useAPI({ url }) { const [data, setData] = useState(null); const [loading, setLoading] = useState(false); const [error, setError] = useState(null); const fetchData = async () => { try { setLoading(true); setData(await fetch(url)); } catch (e) { setError(e); } finally { setLoading(false); } }; useEffect(() => { fetchData(); }); return [{ data, loading, error }, fetchData]; } export default useAPI; To use this hook you do the following const { data, error, loading } = useAPI('') if (loading) { return ( // here you can display a loader while data is getting fetched ) } if (error) { return ( // here you can display an error message ) } return ( // display data that was requested ) This is pretty useful because we know every state of our API calls without using some complicated logic or installing other clients. Easier imports Don't you hate when you have a relative path import that looks like this import Button from "../../../../src/components/button" To make this more beautiful, easier to maintain, and read, you have to create a file called jsconfig.json in the project's root folder. After creating the file paste this configuration inside. { "compilerOptions": { "baseUrl": ".", "paths": { "@/components/*": ["src/components/*"], "@/styles/*": ["src/styles/*"] } }, "exclude": ["node_modules"] } If you are using create-react-app to create your project then your configuration will look like this { "compilerOptions": { "baseUrl": "src" } "include": ["src"] } Note that create-react-app doesn't support paths. Since paths are not working, if you want to import anything outside src you'll have to import it using a relative path. // When importing components you now do this import Button from "@/components/button" // instead of import Button from "../../../../src/components/button" or with create-react-app // When importing components you now do this import Button from "components/button" // instead of import Button from "../../../../src/components/button" There is no need to remember the depth of where you currently are in the project structure. You can also make this for other things, like styles, hooks, basically anything you create in the project. You can change the syntax of importing to whatever you like, but I find "@/components" work best for me because it helps me to visually distinct components that I made from those that are imported from external packages. The configuration is pretty easy to read, so you can add new items effortlessly, and if you are using the same core principles and rules in your projects you can just copy-paste this file to another project, and it will work out of the box. Get window size In your project, you probably came to a point where you need to get window size. If you tried to use window.innerWidth you probably got an error that says ReferenceError: window is not defined. To fix this you could make a check that looks like this if (typeof window !== "undefined") { // browser code } but copying this on every place is a really messy job. This can be fixed by creating a hook that looks like this import { useEffect, useState } from "react" const useWindowSize = () => { const [windowSize, setWindowSize] = useState({ width: undefined, height: undefined, }) useEffect(() => { if (typeof window !== "undefined") { function handleResize() { setWindowSize({ width: window.innerWidth, height: window.innerHeight, }) } window.addEventListener("resize", handleResize) handleResize() return () => window.removeEventListener("resize", handleResize) } }, []) return windowSize } export default useWindowSize Using this hook is pretty easy. const size = useWindowsSize() // accessing width and height props size.width size.height
https://micko.dev/post/cool-react-scripts-and-functions
CC-MAIN-2021-39
refinedweb
1,055
53.21
::rect -> :user/rect (derive ::rect ::shape) (derive ::square ::rect) (parents ::rect) -> #{:user/shape} (ancestors ::square) -> #{:user/rect :user/shape} (descendants ::shape) -> #{:user/rect :user/square} (isa? 42 42) -> true (isa? ::square ::shape) -> true (derive java.util.Map ::collection) (derive java.util.Collection ::collection) (isa? java.util.HashMap ::collection) -> true (isa? String Object) -> true (isa? [::square ::rect] [::shape ::shape]) -> true (ancestors java.util.ArrayList) -> #{java.lang.Cloneable java.lang.Object java.util.List java.util.Collection java.io.Serializable java.util.AbstractCollection java.util.RandomAccess java.util.AbstractList} (defmulti foo class) (defmethod foo ::collection [c] :a-collection) (defmethod foo String [s] :a-string) (foo []) :a-collection (foo (java.util.HashMap.)) :a-collection (foo "bar") :a-string You need to enable Javascript in your browser to edit pages. help on how to format text Multimethods and Hierarchies or a value from which the dispatching value is derived.. create these relationships, and the isa? function tests for their existence. Note that isa? is not instance?. You can define hierarchical relationships with (derive child parent). Child and parent can be either symbols or keywords, and must be namespace-qualified: Note the :: reader syntax, ::keywords resolve namespaces derive is the fundamental relationship-maker parents/ancestors/descendants and isa? let you query the hierarchy (= x y) implies (isa? x y) isa? uses the hierarchy system You can also use a class as the child (but not the parent, the only way to make something the child of a class is via Java inheritance). This allows you to superimpose new taxonomies on the existing Java class hierarchy: isa? also tests for class relationships: isa? works with vectors by calling isa? on their corresponding elements: as do parents/ancestors (but not descendants, since class descendants are an open set) isa? based dispatch Multimethods use isa? rather than = when testing for dispatch value matches. Note that the first test of isa? is =, so exact matches work. prefer-method is used for disambiguating in case of multiple matches where neither dominates the other. You can just declare, per multimethod, that one dispatch value is preferred over another: All of the examples above use the global hierarchy used by the multimethod system, but entire independent hierarchies can also be created with make-hierarchy, and all of the above functions can take an optional hierarchy as a first argument. This simple system is extremely powerful.. Note: In this example, the keyword :Shape is being used as the dispatch function, as keywords are functions of maps, as described in the Data Structures section.
http://clojure.org/multimethods?responseToken=6babc33160341b1723eae03a4fc1dd02
CC-MAIN-2014-15
refinedweb
420
51.04
Here is a listing of C interview questions on “Register Variables” along with answers, explanations and/or solutions: 1. When compiler accepts the request to use the variable as a register? a) It is stored in CPU b) It is stored in cache memory c) It is stored in main memory d) It is stored in secondary memory View Answer 2. Which data type can be stored in register? a) int b) long c) float d) All of the mentioned View Answer 3. Which of the following operation is not possible in a register variable? a) Reading the value into a register variable b) Copy the value from a memory variable c) Global declaration of register variable d) All of the mentioned View Answer 4. Which among the following is the correct syntax to declare a static variable register? a) static register a; b) register static a; c) Both (a) and (b) d) We cannot use static and register together. View Answer 5. Register variables reside in a) stack b) registers c) heap d) main memory View Answer 6. What is the output of this C code? #include <stdio.h> void main() { register int x = 0; if (x < 2) { x++; main(); } } a) Segmentation fault b) main is called twice c) main is called once d) main is called thrice View Answer 7. What is the output of this C code? #include <stdio.h> void main() { register int x; printf("%d", x); } a) 0 b) Junk value c) Compile time error d) Noting View Answer 8. What is the output of this C code? #include <stdio.h> register int x; void main() { printf("%d", x); } a) varies b) 0 c) Junk value d) Compile time error View Answer Sanfoundry Global Education & Learning Series – C Programming Language. Here’s the list of Best Reference Books in C Programming Language. To practice all features of C programming language, here is complete set of 1000+ Multiple Choice Questions and Answers on C.
http://www.sanfoundry.com/c-interview-questions-register-variables/
CC-MAIN-2017-39
refinedweb
328
71.24
This is a community collaborative translation article. The translation has been completed. For more information, please click Introduction to collaborative translation . Although most PHP developers know how to use Composer, not everyone is using it effectively or in the best way. So I decided to summarize something important to my daily workflow. The idea of most techniques is_ 「 Play it safe 」_, This means that if there are more ways to deal with something, I will use the least error prone method. Tip 1: reading documents i mean it. Official documents Well written. It only takes a few hours to read now, which will save you a lot of time in the future. You'll be surprised how much Composer can. Tip 2: recognize the difference between "project" and "library" It is important to realize whether you are creating a "project" or a "library". There are great differences between the two in the process of use. A library is a reusable package that needs to be added as a dependency - such as symfony/symfony, doctrine/orm, or elasticsearch/elasticsearch. A typical project is an application that depends on multiple libraries. It is usually not reusable (other projects do not need it to be a dependency). Applications such as e-commerce websites and customer service systems are typical examples. In the following Tip, I will explain the difference between libraries and projects in more detail. Tip 3: use the specified dependent version for the application When creating an application, you should define dependencies with the clearest version number. If you need to parse YAML files, you should specify dependencies in the form of "symfony/yaml": "4.0.2". Even if the dependent library follows Semantic version Specifications will also destroy backward compatibility due to different version numbers and revision numbers. For example, using the form "symfony/symfony": "^3.1", there may be something discarded in version 3.2, which will destroy your application to pass the test in that version. Or maybe in PHP_ There is a fixed bug in codesniffer, and the code will detect new format problems, which will lead to wrong construction again. Rely on the upgrade to be cautious, can not hit the luck. The following Tip will explain this in more detail. It sounds alarmist, but paying attention to this point will prevent your partner from accidentally updating all dependencies when adding new libraries to the project (this may be ignored during code review). Tip 4: use version range for library dependencies When creating a library, you should define the maximum range of available versions as much as possible. For example, if you create a library and want to use symfony/yaml library for YAML parsing, you should write as follows:``` "symfony/yaml": "^3.0 || ^4.0" This means that the library can Symfony 3.x Or 4.x Used in any version of `symfony/yaml` . This is important because this version constraint is passed to applications that use the library. In case there is a conflict between the requests of two libraries, for example, one needs to `~3.1.0` ,Another need `~3.2.0` ,The installation will fail. []( "")bigqiang translated 1 week ago [0]( " ") [retranslation]( " ") # Tip 5: when developing an application, submit the 'composer.lock' file to the git version library ---------------------------------------------- Created _A project_,Be sure to `composer.lock` File submitted to git This will ensure that everyone - you, your partners, your friends CI Server and your product server - the application running has the same dependent version. At first glance, it seems to add to the snake Tip #It has been mentioned in 3 to use explicit version number constraints. This is not redundant. You should know that the dependencies of the dependencies you use are not bound by these constraints (for example, 'symfony/console' also depends on 'symfony / Polyfill mbstring'). If you do not submit the 'composer.lock' file, you will not get the dependency collection of the same version. [! [image]() ]( "")bigqiang translated 1 week ago [0]( " ") [retranslation]( " ") Tip #6: The development library should add the 'composer.lock' file to the '. gitignore' file ---------------------------------------------------- establish _A library_ (For example, call `acme/my-library`), This shouldn't be `composer.lock` File submitted to git The file is in the library. The file is not valid for projects that use the library It [There will be no impact]( "") Suppose 'acme / my library' uses' monolog/monolog 'as a dependency. You have submitted' composer.lock 'in the version library, and everyone developing' acme / my library 'may be using the old version of Monolog. After the library is developed and used in the actual project, it may be that the installed Monolog is a new version, and it will be connected with the inventory at this time Incompatible. But you never noticed the compatibility problem before, because of this' composer.lock '! Therefore, the best way to deal with it is to `composer.lock` Add to `.gitignore` File, which avoids the problems caused by accidentally submitting it to the version library. If you also want to ensure that the library is compatible with different versions of its dependencies, continue to the next section Tip ! []( "") # Tip 7: different versions of Travis CI build dependencies ---------------------------- > current Tip For libraries only (specify the specific version number for applications). If you're building an open source library, chances are you'll use it Travis CI To run the build process. By default, in `composer.json` Where document constraints permit, composer The installation will install the latest possible version of the dependency. This means that for `^3.0 || ^4.0` With such dependency constraints, build installations always use the latest v4 Version distribution package. 3.0 The version will not be tested at all, and the built library may be incompatible with the version, and your users will cry. Fortunately, composer A switch is provided for installing low version dependencies `--prefer-lowest` (Should be used `--prefer-stable` ,Can prevent installation of unstable versions). Uploaded `.travis.yml` The configuration is similar to the following format:``` language: php php: - 7.1 - 7.2 env: matrix: - PREFER_LOWEST="--prefer-lowest --prefer-stable" - PREFER_LOWEST="" before_script: - composer update $PREFER_LOWEST script: - composer ci See code for details my mhujer/fio-api-php library and the build matrix on Travis CI Although this solves most incompatibilities, it is important to remember that there are too many combinations between the minimum and maximum versions of dependencies. They may still be incompatible. Tip 8: sort packages in require and require dev by name Sorting packages in require and require dev by name is a good practice. This avoids unnecessary merge conflicts when spawning a branch. If you add a package to the end of the list in two branch files, you may encounter a conflict every time you merge. Sorting packages manually can be tedious, so the best way is in composer.json Configure it You can:``` { ... "config": { "sort-packages": true }, ... } Later `require` A new package will be automatically added to a correct location (not to the end). []( "")bigqiang translated 1 week ago [0]( " ") [retranslation]( " ") # Tip 9: do not merge ` composer.lock during version derivation or merging` --------------------------------------- If you are `composer.json` (and `composer.lock`)A new dependency is added to the main branch, and another dependency is added to the main branch before the branch is merged. At this time, you need to derivative your branch. Then `composer.lock` The file will get a merge conflict. Never try to resolve the conflict manually because `composer.lock` The file contains definitions `composer.json` So even if you resolve the conflict, the final merge result lock The file is still wrong. The best solution is to create one in the project root directory with the following line of code `.gitattributes` File, it will tell git Don't try to be right `composer.lock` To merge files:``` /composer.lock -merge recommend Trunk Based Development Use temporary feature branches to correct this problem. When you have a temporary branch that needs to be merged immediately, the risk of merging the composer.lock file is minimal. You can even create a branch just to add a dependency and merge immediately. What if the composer.lock encounters a merge conflict during the derivation process? Use the main branch version to solve it. In this way, only modify the composer.json file (add a new package) . then run composer update --lock to update the modification of composer.json file to composer.lock file. Now submit the updated composer.lock file to the version staging area, and then continue the derivation operation. Tip 10: understand the difference between require and require dev It is very important to be aware of the difference between require and require dev modules. Packages that need to run in applications or libraries should be defined in require (e.g. Symfony, Doctrine, Twig, Guzzle,...) If you are creating a library, pay attention to what is defined as require, because each dependency of this part is also the dependency of the application using the library. The packages required to develop applications (or libraries) should be defined in require dev (for example: PHPUnit, PHP_CodeSniffer, PHPStan). Tip 11: safely upgrade dependencies I think everyone has a consensus on the following facts: dependencies should be upgraded regularly. What I want to discuss here is that dependency upgrading should be done in the open and with caution, not because of the needs of other tasks. If the library is upgraded while reconstructing the application, it is difficult to distinguish whether the cause of application crash is caused by reconstruction or upgrading. You can use the composer updated command to see which dependencies need to be upgraded. It is smart to append a -- direct (or - D) parameter switch, which will only view the dependencies specified by composer.json. There is also a - m parameter switch, which will only view the upgrade list of minor version numbers. The following steps should be followed for upgrading the dependencies of each old version: - Create new branch - Update the dependency version to the latest version number in the composer.json file - Run composer update phpunit/phpunit -- with dependencies (replace phpunit/phpunit with the upgraded Library) - Check the CHANGELOG file in the version library on Github for major changes. If so, upgrade the application - Test the application locally (you can also see a deprecation warning in the debug bar if you use Symfony) - Submit the modifications (including the necessary modifications for the normal operation of composer.json, composer.lock and other new versions) - Wait until the CI build is completed - Merge and deploy Sometimes you need to upgrade multiple dependencies at one time, such as upgrading Doctrine or Symfony. In this case, you need to list them all in the upgrade command: composer update symfony/symfony symfony/monolog-bundle --with-dependencies Or use wildcards to upgrade all dependencies of the specified namespace: composer update symfony/* --with-dependencies It's all tedious work, but it provides additional protection against inadvertently upgrading dependencies. An acceptable shortcut is to upgrade all dependencies in require dev at once (if the program code has not been modified, otherwise it is recommended to create a separate branch for code review). Tip 12: define other types of dependencies in composer.json In addition to defining the library as a dependency, you can also define other things here. You can define the PHP versions supported by applications and libraries: "require": { "php": "7.1.* || 7.2.*", }, It can also define the extensions required by applications and libraries. This is very useful when trying to dock your own applications, or when your partner sets up the application environment for the first time. "require": { "ext-mbstring": "*", "ext-pdo_mysql": "*", }, (when Inconsistent extended version When, the version number should be *). Tip 13: validate composer.json during CI build Composer.json and composer.lock should always be synchronized. Therefore, it is a good idea to keep automatic checking for them all the time. Adding this as part of your build script will ensure that composer.lock and composer.json are synchronized: composer validate --no-check-all --strict Tip 14: using the Composer plug-in in PHPStorm Here's one composer.json plugin for PHPStorm When manually modifying composer.json, the plug-in will automatically complete and perform some verification If you are using another IDE (or just an editor), you can use its JSON schema Set up validation Tip 15: specify the PHP version number of the production environment in composer.json If you're like me, sometimes Run the latest pre release version of PHP in the local environment , there is a risk that the version of the upgrade dependency cannot run in the production environment. Now I'm using PHP 7.2.0, which means that the library I installed may not run in version 7.1. If the production environment is running version 7.1, the installation will fail. However, don't worry. There is a very simple solution. Just indicate the PHP version number of the production environment in the config section of the composer.json file: "config": { "platform": { "php": "7.1" } } Don't confuse it with the setting of the require part. Its function is different. Your application can run under version 7.1 or 7.2, and the platform version is specified as 7.1 (which means that the upgraded version of the dependency should be compatible with platform version 7.1): "require": { "php": "7.1.* || 7.2.*" }, "config": { "platform": { "php": "7.1" } } Tip 16: use private packages on own managed Gitlab vcs is recommended as the version library type, and Composer decides the appropriate method to obtain the package. For example, add a fork from Github and use its API to download the. zip file of the entire version library without cloning. However, it will be more complex for a private Gitlab installation. If vcs is used as the version library type, Composer will detect that it is a Gitlab installation and will try to use the API download package (this requires an API key. I don't want to set it, so I only use SSH clone to install): first, indicate that the version library type is git: "repositories": [ { "type": "git", "url": "git@gitlab.mycompany.cz:package-namespace/package-name.git" } ] Then indicate the commonly used packages: "require": { "package-namespace/package-name": "1.0.0" } [ ]( "" Tip 17: how to temporarily use the bug repair branch under fork If you find a bug in a public library and fix it in your own fork on Github, you need to install the library from your own version library instead of the official version Library (until the composite is repaired and the repaired version is released). use Embedded alias Easy to handle: { "repositories": [ { "type": "vcs", "url": "" } ], "require": { "symfony/monolog-bundle": "2.0", "monolog/monolog": "dev-bugfix as 1.0.x-dev" } } Can pass Set path as the version library type Test the repair locally, and then push to update the version library. Tip 18: use prestissimo to speed up your package installation Composer has a hirak/prestissimo Plug in, which can be downloaded in parallel, so as to improve the installation speed of dependent packages. So, what should you do now for such a good thing? You just need to install the plug-in globally immediately, and then it can be automatically used in all projects. composer global require hirak/prestissimo Tip 19: when you are not sure, test your version constraints Even when reading the documentation After that, writing the correct version constraints is also tricky at some times. Fortunately, there are Packagist Semver Checker It can be used to check which part matches a specific constraint. It is not just analyzing version constraints. It shows the actual release version since downloading data from packgist. Check the result for symfony/symfony:^3.1. Tip 20: using authoritative class mapping files in a production environment Should be in a production environment Generate authoritative class mapping file . this allows all classes contained in the class mapping file to load quickly without any checking to the disk file system. You can run the following commands when the production environment is built: composer dump-autoload --classmap-authoritative Tip 21: configure autoload dev for testing You also don't want to load test files in the production environment (considering the size of test files and memory usage). This can be solved by configuring autoload dev (similar to autoload): "autoload": { "psr-4": { "Acme\\": "src/" } }, "autoload-dev": { "psr-4": { "Acme\\": "tests/" } }, Tip 22: try Composer script Composer script is a lightweight tool for creating build scripts. I have some questions about this It is also mentioned. summary If you disagree with some points and explain why you disagree (don't forget to mark the number of tip), I will be happy. All translations in this article are for learning and communication purposes only. Please indicate the translator, source and link of this article Our translation work follows CC protocol , if our work infringes upon your rights and interests, please contact us in time. Article transferred from If there is infringement, please contact to delete.
https://programmer.group/17-composer-best-practices-you-must-know-updated-to-22.html
CC-MAIN-2022-27
refinedweb
2,840
56.15
CodeRunner and Patterns Quick Review I just started using CodeRunner and Patterns for script writing.[1] They are both available in the MAS. CodeRunner supports AppleScript, Python, Ruby, Shell and several more. It provides syntax highlighting and code completion. Importantly it also provides a console window to display the output as well as a mode for accepting input. Sure, it’s not BBEdit, but it’s lightweight and single minded. There are not many frills but it works great. Patterns is also by Nikolai Krill. It’s also for writing code but directed specifically at RegEx. The interface is simple but effective. Place some example search text in one box and start writing some RegEx in another box. There is a handy pop-up panel cheat-sheet for quick reminders. The killer feature is that after writing the expression, I can select a language and then hit the “Copy Code” button. I get the language specific version of the RegEx on my clipboard. The example above gives this for Python import re re.search(“%252”, searchText, re.M) … and this for Javascript searchText.match(/%252/m) Patterns also offers warnings when a language does not support a specific RegEx option. I have not been using it very long but I’m already thrilled with the results. I avoid RegEx like the plague and as a result, I’m not very good at it. Patterns should help me use more RegEx while also remaining pathetically bad at it.
http://macdrifter.com/2011/12/coderunner-and-patterns-quick-review.html
CC-MAIN-2016-50
refinedweb
247
77.13
Hi!: Thanks again for this valuable feedback. It's disappointing that searching for "GetMessage" returns the expected Win32 page as the #1 hit in both Bing and Google, yet the F1 help always seems to come back with the least relevant version of the topic. In this case it's Azure, but in the past it's had a peculiar love for the Windows CE and FoxPro documentation. In the scenarios where the context isn't known and there are multiple possibilities, could you bias the results towards selecting the Platform SDK, or some predefined order rather than just picking one at random? This won't be as accurate as the full context search, but it will be better than what we've got now, and could be delivered a lot sooner than the C++/CLI Intellisense fix. I'm testing with VS2010 RTM. In case it was an issue with the project, I created a new project from scratch using the Win32 project wizard. I did not edit the code at all, so everything is supplied by the IDE. I did a rebuild-all, then also waited for the status bar to indicate that the intellisense stuff was done updating. I scrolled down to the GetMessage call in the wizard-generated code and pushed F1, and this is what happened (a page on Windows Azure Platform, CloudQueue.GetMessage Method appears): leo.kelbv.com/wc132_firefox.png Let me know if there's anything I can do to help debug this. I would love to see it fixed. I can try installing VS2010 SP1 if desired, but I'll leave my machine on the RTM for now in case you want me to check anything against the current state. (I'm still using VS2008 as my main IDE so I don't mind which version the parallel VS2010 install is on.) Another thing that makes this problem more and more frustrating over the years is that there keep being added new technologies which re-use the names of old ones. On top of the old annoyances of MFC, ATL, Windows CE and .Net pages coming up when Win32 devs push F1, we get new annoyances like STL/CLR. Not that the frameworks themselves are an annoyance -- STL/CLR seems like a good thing -- but they make it even more important for the IDE to understand the language, platform and framework context of the project when F1 is pushed. This is the full URL which the F1 push seems to take me to. It does have "DevLang-C++" in the query, FWIW, but that doesn't seem to do any good (and even if the web server respected that language hint, the language alone wouldn't be enough to ensure the Win32 API was found rather than another that might be called from other types of C++ projects): msdn.microsoft.com/.../dev10.query > (we plan to correct that in a future release) Ulzii, I'm getting a bit frustrated with the lack of information about plans for C++/CLI Intellisense. Despite repeated requests for more detail, the only information we've heard back from Microsoft are vague promises that Intellisense will be re-introduced in some unspecified future release. There are a number of customers like me who pay thousands and thousands of dollars to Microsoft every year for their MSDN subscriptions, only to hit a wall when we ask for more information. Your plans affect our plans, so more insight into the roadmap would be appreciated. I agree, unfortunately I've found F1 help to be so broken as to be completely worthless. I'd like to offer constructive criticism, but when something doesn't work at all there's nothing to really do. If it could ever return a relevant result that'd be a good start. (Though to be fair I wouldn't know if this got fixed, I always just Google it. Works every time). One problem I'm having all the time is that getting F1 help for a message (like WM_DESTROY) brings up the help for CWnd::OnDestroy. No, sorry, I'm not using MFC. This is a big issue for non-MFC programmers (Win32, WTL). Often the MFC help is incomplete and refers to the Win32 page "for more details". That's actually the better case, because I can click on the link and go where I want. Often though the Win32 topic is not linked in the MFC topic, so I have to go to the Index/Search and type what I want. I fail to see how this can be helpful even for MFC programmers. If I'm writing an MFC application I would never have to deal with WM_DESTROY directly in my code. As it stands, F1 is mostly useful just to bring up the help browser. After that I have to type my text again in the index box. I'm getting some pretty bizarre results on this myself, at least with VC++ Express 2010 SP1beta and online help. If I just create a standalone C++ file with this code: #include <windows.h> int main() { ::GetMessage(); } ...and hit F1 on GetMessage(), I get Microsoft.Practices.EnterpriseLibrary.Validation.Validators.GetMessage(). When I create a new Win32 Console Project, create a new .cpp file in the project with the code and hit F1, I get help on the Code and Text Editor at first. Not very helpful (but that brings back memories of VC6!). If I wait about 10-20 seconds for Intellisense to update, then I finally get Win32 GetMessage(). So it looks like you can get the correct answer, but there are multiple cases in which the heuristics could still be improved. I frequently hit the same problem as Ivo with getting MFC/ATL help when looking for Win32 info, and that's been a long standing problem in VC Help even back to the VC6 days. It was made worse by the MFC help getting bundled with other C++ help such that you couldn't avoid installing it. Generally, I think I would prefer just having better options to exclude help, as heuristics are too hard to get right and there are different times where I want different help collections to have higher priority. I agree with Leo and Phaeron. With VS 2010 SP1 beta creating a skeleton Win32 GUI app, GetMessage's F1 help gives help on: ManagementPackExceptionMessages.GetMessage Method And this is in VS 2010 SP1, which I thought was supposed to be better! Mike Diack Like Phaeron, with VS2010 SP1, I also get through to that page: Microsoft.Practices.EnterpriseLibrary.Validation.Validators First of all the new Help viewer appears, says "Can’t find requested content on your computer", and gives me a link to view the content online - and it's that that takes me to the wrong topic. I'm a tad worried that things are requiring Intellisense to function in order to improve the searches. Almost inevitably when you want help, you're editing code, so it's in a non-compilable state anyway! Hi, My name is Sunny Gupta and I work for VC++ team. First of all thanks everyone for the constructive feedback. We would really like to make things work the way everyone would expect and therefore we truly value all these comments. @Leo and @Phaeron: Can you please confirm if you are selecting the entire word and then pressing F1? In that case we query for only the word highlighted and as a result the context is not proper. If you place the cursor on the word (without selecting the word) and then press F1, you will get what you want. Having said that, I will admit that we are depending on intellisense to provide correct context and that can sometime back fire if the intellisense does not work. But under common scenario where if you have a project with files on which intellisense works and you place your cursor on the Word and press F1, you will go the right topic. Under scenarios where the intellisense is working and the word is not highlighted, if we don't get the correct F1 help, then it’s an issue that we would like to fix in our future release. You can help us out by giving us the entire URL that is being passed to the browser. That way we will know what is the supplied data by the IDE to the Help system. And that would help us narrow down the problem and fix it for you. If you have any questions about F1 help you can also email me directly at (sugupta at microsoft dot com) and I will try my best to get that thing fixed for you in our next release. Thanks a ton for using VS and helping us with your valuable feedback. -Sunny Gupta @Sunny: For me, F1 help for GetMessage() fails exactly the same way regardless whether I highlight the function or place the cursor in the word. Like the others, I have become accustomed to just press F1 to bring up the help viewer, then type in or copy/paste to get what I need. I did just now try F1 on LoadAccelerators() and was surprised to see it open the correct topic in the Platform SDK documentation (installed locally, FWIW). Interesting, I just assumed that native programmers had been left out to dry as usual. I haven't been selecting any text when hitting F1. Oddly, however, I'm now getting slightly different results than last time. When I create a loose C++ file (no project) and hit F1, this is what is opened in the browser: msdn.microsoft.com/.../dev10.query(MAIN());k(DevLang-"C%2B%2B")&rd=true This then redirects to this page: msdn.microsoft.com/.../microsoft.windowsazure.storageclient.cloudqueue.getmessage.aspx ...which is the Azure doc that Leo hit earlier. If I then use a Win32 Console Application project to hold the file, I get this query: Which then returns the expected Win32 function. I think I have figured out one way that queries degrade, however. If you have your help set to local help and then jump from there to online help because you don't have the help locally, some parts of the query are dropped. For instance, doing that with the no project case produces this query: msdn.microsoft.com/.../dev10.query(MAIN())&rd=true Which is missing the C++ language tag, and leads to this topic from the Windows CE 1.0 documentation instead: msdn.microsoft.com/.../aa453135.aspx Bizarrely, this is actually a better fit than the Azure documentation returned with the DevLang-"C++" tag added. With all respect to all your involvelemnt into solving so simple problem. Don't you all think all those failures are caused by too complex design of this system? I understand that intellisense is great and improving from release to release but you must know the simple thing is usually better that overcomplicated. Why not just show "dialog" box" with list of topics found like: GetMessage (MFC 4.1 doc ...) GetMessage (Windows SDK blabla) GetMessage (Windows CE SDK blabla) This list would be shown of course when system isn't sure what is the best choice (i bet you've got score for each entry somewhere in algorithm) It's only two clicks/presses more from standard F1 and enables to find reasonable source fast. (i bet that this system could learn by those choices better than rely of inlellisence which can befouled by complicated C++ constructs) d. PS. I'm not using VC since 10 years or more i'm just astonished that it does not work now as it worked quite well in last century. KISS :) I too find the VC2010 environment a step back from previous versions. I described the look and feel as 'Dark', it must be WPF. I really dont't like it, I have been using VS since 1994. To make my like easier I run up the help from VS 2008 and enter the keyword in the index and adjust the collection to C++/C# to reduce the noise. Perhaps Microsoft are trying to be too clever. And whose bright idea was it to use WPF when VS2005/2008 was not that bad. @Sunny and everyone With VS2010 SP1 beta: In fairness to sunny, I've just retried the GetMessage thing, having just put the cursor on the word and hit F1 (rather than selecting the whole word), and it appears he's right (for me at least), the help then correctly showed the right information about ::GetMessage. Mike
http://blogs.msdn.com/b/vcblog/archive/2010/12/15/issues-with-f1-help-in-c-projects.aspx
CC-MAIN-2014-15
refinedweb
2,110
68.81
Stacks and Queues are often used as programmer's tools during the operation of a program and are usually deleted after the task is completed. Stack A stack is an abstract data structure that follows the "last-in-first-out" or LIFO model. Some real world examples are the "click to go back" to the previous web page, and text editor's undo feature. There are 3 basic operations on a stack: 1. Push: Insert a data item on the stack. 2. Pop: Remove an item from the top of the stack. 3. Peek: Read the value of an item from the top of the stack WITHOUT removing it. In programming, you can implement a stack using an array or a linked list. (below is an example of the implementation of a stack with java) class StackX { private int maxSize; private long[] stackArray; private int top; public StackX(int s) { // constructor maxSize = s; // set the array size stackArray = new long[maxSize]; // create an array top = -1; // no items yet } public void push(long j) { // put items on top of the stack stackArray[++top] = j; // increment top when item inserted } public long pop() { // take item from the top of the stack return stackArray[top--]; // access item, then decrement top } public long peek() { // peek at the top of the stack return stackArray[top]; } public boolean isEmpty() { // true if the stack is empty return (top == -1); } public boolean isFull() { // true if the stack is full return (top == maxSize -1); } Stack Applications - reversing data; e.g. reversing a string parsing: breaks the data into independent pieces for further provessing; e.g. Check the delimiter matching [,{,(,),},]. how it works: - read characters from the string one at a time - if you encounter an opening delimiter [,{,(, place it on a stack - if you encounter a closing delimiter, pop the item from the top of the stack - in case they don't match (the opening and closing delimiter), then an error occurs. for example: a{b(c[d]e)f}h postponing: When the use of the data must be postponed for a while. For example, parsing an arithmetic expression; 3+(5-9)*4 Notations we're going to use: - prefix -> + a b - infix -> a + b - postfix a b + +, -, *, /, these are operators. While we call the numbers (1,2,3,...) the operands. Precedence (priority, rank) relationships: - +, - have the same precedence - *, / have the same precedence, and are higher than +, - I figured that using a video to explain how it works should be easier, so here we go... The reason we have to do this is because computers can only go either forward or backward through the expression. Let's look at how to convert the infix expression to the postfix one.. And this is the video of how to evaluate the postfix expression.. For more detailed explanation, click here. - backtracking: used in many search applications, eight-queen problem, etc. Example of finding a path: Example of n-queen problem: Queue A queue is an abstract data structure that follows "first-in-first-out" or FIFO model. Some real world examples include printing a file (and there's a file in queue), process scheduler, a waiting line. Basic operations on a queue: 1. Enque/Add/Put: Insert a data item on the back or rear of the queue. 2. Deque/Delete/Get: Remove an item from the front of the queue. 3. Peek: Read the value of an item from the front of the queue without removing it. Now, let's take a look at the implementation of a queue in java below! class Queue { private int maxSize; private long[] queArray; private int front; private int back; private int nItems; public Queue(int s) { // Constructor maxSize = s; quaArray = new long[maxSize]; front = 0; back = -1; nItems = 0; } public void insert(long j) { // Put an item at the back of the queue if(back == maxSize -1) back = -1; // Deal with wraparound queArray[++back] = j; // Increment back and insert the item nItems++; // One more item } public long remove() { // Take item from the front of the queue long temp = queArray[front++]; // Get the item and increment front if(front == maxSize) front = 0; // Deal with wraparound nItems--; // One less item return temp; } public long peekFront() { return queArray[front]; } Queue in OS When we have one processor, but there are many processes to be executed. To appear like the processes run simultaneously, we have to slice the CPU time into slots and create a queue that contains jobs to be executed. Priority Queue Items in the queue will be ordered (prioritized) by key value. There are 2 types of priority queue. - Ascending-priority queue - the item with the smallest key has the highest priority - Descending-priority queue - the item with the biggest key item has the highest priority Inserting an item in priority queue : O(n) Instead of inserting at the back or the rear of the queue, we insert the item based on its value compared to the others. # Removing / Deleting the item : O(n) Just like a normal queue, we remove the front one in the queue. That's the end of this post :) see you next time~ Discussion Thank you very much ,I was waiting for these articles on the DEV. Hope I will see more.. awesome-article Helped alot Thank you ^ gotcha!
https://dev.to/rinsama77/data-structure-stack-and-queue-4ecd
CC-MAIN-2020-45
refinedweb
881
65.66
Expedia xml integration jobs jquery,xjquery,xml expert needed .., efficient code by using best development. .. una demo del file xml eventualmenteı modülüme uygun duruma getirmek için php dosyası ile değiştirirken bazı sorunlarla karşılaştım. Aşağıda linkten örneklerini verdiğim xml ve php dosyalarına ve açıklamasını yazdığım txt dosyasına ulaşabilirsiniz. İnceleyip fiyat sunabilirseniz sevinir.... am going to use any of the theme listed in this search above. What I need is a wordpress based plugin to instantly list p.... We need someone with advanced XML programming, PHP and databases experience that could integrate the XML data provided by our vendors. Integrate travel products from other website to show on my website.] Need to create an ASP net module for working windows application, to get an XML file as an outputs. Upload XML demo content file for demo content wordpress I need an Android app. I would like it designed and built. Rss, xml reader app for offers. Someone to convert xls to xml and update it to floreant pos database. I want to upload metadata xml files to crossref server. Need to integrate Synnex Corp. XML Product Feed with WooCommerce. Budget is $30. Please do not bid more than that. Only those who knows how to do this should bid. Need to be done in 1 day.! .. .. experience android developer around 20 screens to be design in xml import xml data to tally and tally developer Need to convert 2 PSD/SVG to XML
https://www.freelancer.com/work/expedia-xml-integration/
CC-MAIN-2018-22
refinedweb
241
68.97
stalkd 1.1.3 Library for interacting with the Beanstalk message queue. To use this package, put the following dependency into your project's dependencies section: Stalkd This library provides an interface to the Beanstalk message queue. The sections below outline details of it's usage. License The stalkd library is licensed under the terms of the MIT license. Details of this license can be found in the license.txt file in the root of the project source directory. Building The Library The stalkd library use the dub package manager application. If you clone the source repository and install dub you can build a production version of the library using a command such as the following issued in the root directory of the repository... $> dub build --build=release The output from this command should be written into the bin subdirectory of the repository and will consist of two files. On Linux systems these will be called libstalkd.a and stalkd.di. The first is a static library the you can compile into your application. The second is a header file that can be used as an alternative to providing the source file for direct compilation (it's needed by D to determine imports). Alternatively you can build a debugging version of the library with the command... $> dub build --build=debug Using The Library All of the components provided within the stalkd library are contained in the stalkd module so you first have to import this to make use of any of the libraries facilities. You can do this be adding a line such as the following to your code... import stalkd; Once you've imported the library the simplest thing to do is to obtain yourself a Tube. To do this you'll need to know the host/IP address for a Beanstalkd server and possibly it's port number (if it isn't using the standard one). Once you have these details you can obtain yourself a Tube instance as follows... auto tube1 = new Tube(Server("hostname")), tube2 = new Tube(Server("192.168.0.1", 5678)); You'd replace the host name and port number shown in these examples with the relevant host and port for your server. A Tube object is the main class for interacting with Beanstalk jobs. In Beanstalk there are two concepts associated with tubes. Tubes can be used and they can be watched. A used tube is one to which submitted jobs will be added. You can only be using a single tube per Beanstalk connection. On the other hand you can be watching multiple tubes simultaneously. Watched tubes are ones that you're interested in knowing when jobs are available on them. Note that if a named tube does not exist on the server when you specify that you want to watch it then it is auto-created by the server itself. You can change the tube you're using in one of two ways... tube1.use("blah"); tube2.using = "ningy"; On the first line we just call the use function of the Tube object and specify the name of the tube we want to start using. The second line just shows an alternative approach by setting the using property but these two are effectively the same behind the scenes. Similarly there is a function for altering the tubes that a Tube object is currently watching... tube1.watch("first", "second", "third"); This call adds three tubes to the list of tubes being watched by the Tube object referred to as tube1. You can pass one or more tube names to a call to the watch() function. Note, that calling watch() implies addition and not the replacement of tubes being watched. To stop watching a tube then there use a call like the following... tube1.ignore("default"); Again this function will accept one or more tube names. Once you have configured your Tube object to watch the appropriate tubes you can fetch a job from it by calling the reserve() function... Job job = tube1.reserve(); Note that in the example above the call to reserve will block until such time as a job becomes available. If you want to use a non-blocking request then pass a uint to the call to reserve() that specifies the maximum number of seconds that the server will wait for a job to become available before giving up. In the case of a job not being available a call to reserve() returns null. The jobs returned from a call to reserve() are of type Job. Beanstalk considers all jobs to essentially be a collection of bytes. The Job class provides some convenience methods for converting these collection of bytes to and from strings. For example... string body = job.bodyAsString(); Note that use of these functions is contingent on the fact that the job was originally written in the same encoding as you're trying to extract it into. Reserving a job informs Beanstalk that you are interested in having sole ownership of it and Beanstalk guarantees that the same job will not be handed out to separate reservation requests. Reserving a job does not take it out of the queue, to do that you must destroy it... job.destroy(); Destroying a job deletes it from Beanstalk. You should do this only when you are satisfied that you have finished with the job. Note that when a job is created in Beanstalk it has a time to run (TTR) value associated with it. This is used by Beanstalk as a timer on the job. Beanstalk assumes that if you reserve a job and then fail to destroy it within its TTR then it is free to return it to the ready queue. If you do require extra time to process a job you can extend the TTR by calling the touch() function of the Job class like this... job.touch(); This resets the TTR timer for the job on the Beanstalk server. If while processing the job you decide that you cannot continue working with the job you've reserved you can return it to Beanstalks control by calling the release() function... job.release(); The release() function accepts some additional parameters that are not shown in this example, consult the code for details. Alternatively, if you decide that the job cannot be processed but don't want to lose it you can bury it instead. To bury a job make a call such as... job.bury(); Again the bury() function has a defaulted parameter so consult the code for additional information. Finally, in relation to looking for jobs, if you simply want to check that a job is available from the queue the you are currently using you can call the peek() function on the tube, such as... Job job = tube1.peek(); This will return a job if there is one available or null if there isn't. Note that you haven't reserved the job returned so you can't destroy it or bury it as you haven't obtained exclusive access to it. This function is simply a means of checking if any jobs are available. Note that there are other peek functions on the Tube class, consult the code for more details. Adding a job to Beanstalk involves creating a new Job object, populating it with data and then submitting it to the server. This might looks like... auto job = new Job; job.append("This is the textual content of my job's body."); tube1.put(job); This submits your job with a default priority and time to run and with no delay (i.e. it's ready to be processed immediately). Here are some examples of adding jobs that vary these parameters... // Add a job with a five minute delay. tube1.put(job, 300); // Add a job with no delay and a lower priority. tube1.put(job, 0, 1000); // Add a job with a 1 minutes delay, highest priority and a 10 minute TTR. tube1.put(job, 60, 0, 600); Thread Safety There are no access control mechanisms on any of the classes or entities within the library. Having said that the Server class is essentially immutable once created and each Tube fetched from a Server gets it's own connection to the Beanstalk server so you could share a Server instance between threads. You certainly should not share Tubes between threads however and you definitely should not share a Connection between Tubes. Testing To build the unit test application for the library issue the following command in the root directory of the repository... $> dub test This should place a unit test executable into the bin directory upon completion. Note, to run the test you must have a working instance of the Beanstalk server that you can reference. By default the test application assumes it's running on port 11300 or localhost. If this is not the case then you can specify -h and -p flags when calling the executable to specify the host and port for the test Beanstalk server. Note that testing without connecting to an actual Beanstalkd instance is fairly limited. The test can run in 'advanced' mode if you have Beanstalkd instance that you can let them use. In this case you simply set the host name for the instance as the BEANSTALKDTESTHOST environment variable. On a Unix system you could do this with a command such as... $> BEANSTALKD_TEST_HOST="127.0.0.1" dub test The system will also recognise the BEANSTALKDTESTPORT environment setting as the port number for the Beanstalkd test instance if its set. If this is not set then the default port is assumed. Note that the Beanstalkd instance that you use for testing should not be used for anything else as the test code will add, query and destroy entries on the default tube, which is not the kind of activity that you'd want on an instance being used for other purposes. - Registered by Peter Wood - 1.1.3 released 11 months ago - free-beer/stalkd - github.com/free-beer/stalkd - MIT - Authors: - - Dependencies: - none - Versions: - Show all 6 versions - Download Stats: 0 downloads today 0 downloads this week 0 downloads this month 4484 downloads total - Score: - 1.1 - Short URL: - stalkd.dub.pm
http://code.dlang.org/packages/stalkd?tab=info
CC-MAIN-2019-26
refinedweb
1,708
71.34
LineServices One of the key technologies behind the high-quality display of mathematical text in OfficeMath applications like Word, PowerPoint, and OneNote is a special component called LineServices along with its sibling Page/TableServices (PTS). In addition to handling math display, various versions of LineServices are responsible for line layout in Word, PowerPoint, Publisher, OneNote, RichEdit, WordPad, and the Windows 10 Calculator. LineServices was developed by one of the most amazing teams at Microsoft. Because LineServices is used by components like RichEdit and the XAML text edit controls, it’s indirectly available to developers outside Microsoft. The low-level interfaces to run it directly can be tricky to use and aren’t documented publicly. This post is an update of an earlier post that Nicolas Wirth (author of Pascal, among other things). Eliyezer had led the two-man teams (the other person was Dean Ballard) that developed the Microsoft TrueType rasterizer as well as had a native American working with him named Lennox Brassel. Then he hired his first St. Petersburg mathematician, Sergey Genkin. Sergey’s first job after arriving in the USA in 1990 was back East working on a TeX compatible system. The team developed LineServices 1.0, which shipped first with a little program call Greeting Cards. Eliyezer needed more developer cycles, so he asked Sergey if knew any more smart software engineers back in St. Petersburg. Sure enough Igor Zverev could come and RichEdit was fortunate enough to have Igor’s services for a while in developing RichEdit 2.0. (RichEdit also had another St. Petersburg mathematician Andrei Burago for some of that time; more about Andrei in a bit…). Not long after, yet another St. Petersburg mathematician Victor Kozyrev joined the team. LineServices 2.0 was developed and shipped with Word 2000, Internet Explorer 4.0, RichEdit 3.0, PowerPoint 2000, and Publisher 2000. In addition to Western text, LineServices supported several special layout objects: the reverse object for BiDi text, the ruby object for East Asian phonetic annotations, Taten. The team had the strange habit of seriously designing a product before ever writing one line of code. What’s even stranger is that when they finally wrote the code, it had very few bugs in it. My own approach is to design partially and then dive into writing code using the well-known physicist approach to evaluating things called successive approximations. I can’t figure out everything in advance, so I try something and then gradually improve on it. Those guys figured out most of the design without writing any code. After Office 2000, Eliyezer & Co. embarked on Page/TableServices, which was natural. He had started with characters in implementing TrueType, then progressed to lines with LineServices, and so pages and table were the next items on the layout hierarchy. To pull that off he needed the help of another St. Petersburg mathematician, Anton Sukanov, who had been a whiz kid in Russia winning various computer-science puzzle competitions. So, the team developed PTS as it’s called and revised LineServices to work well with it. About that time, I simply couldn’t stand not having some native math layout in our products, so in February 2001 I wrote a math handler for LineServices patterned after the ruby handler. While I was at it, I installed the ruby object in a recursive way, so that you could have ruby objects nested like continued fractions. This upset the authors of the HTML ruby specification, since they said ruby was not supposed to be nested. Nevertheless, the easiest way to display complex ruby is using one level of nesting. My simple math handler showed that LineServices could do mathematics, although my spacing was mediocre. More precisely, spacing was delegated to the user, who unfortunately seldom knows what correct math spacing is. A valuable thing about my old LineServices math handler was that it convinced people that we had the infrastructure to layout math. Fortunately, I didn’t appreciate at the time how hard laying out TeX-quality math would prove to be. Else I might not have been able to persuade people to work on it. It seems that most things that are and he (see OfficeMath UI) using my old math handler, since they didn’t have any code written. Since TeX was so valuable in the design process, Eliyezer wanted to talk with Donald Knuth, who happened to be an old friend of Eliyezer’s PhD advisor, Nicolas Wirth. A visit was arranged in November 2003 and the four of us had the good fortune to spend an extraordinary afternoon with what evolved into the OpenType math tables and associated code, such as “cut-ins” to kern superscripts and subscripts with their bases. Eliyezer’s health gradually declined, and he decided to retire after the initial math-handler design. Sergey Genkin took over leadership of the math project. One day in the summer of 2004 they came into my office all excited and announced that they had been able to display the mathematical expression 𝑎 + 𝑏! It was a real achievement, since the spacing around the + was the desired 4/18th em and a lot of code had checked out correctly. One of the things they soon discovered was that LineServices alone was not adequate to layout high quality mathematics: you need PTS too! The problem is that on computer screens, unlike the printed page, the layout width varies substantially from one invocation to another. Hence you must be able to break equations to accommodate different window widths. TeX delegates most equation breaking to the user, but that’s not a good option for browsers, slide shows and other screen usages. Also, you need PTS to handle placement of equation numbers. Yet another brilliant St. Petersburg mathematician had joined the PTS team, namely Alexander Vaschillo. So, he and Anton implemented equation breaking and numbering. At this point one can understand better how we came to use OMML (Office MathML) as a file format for mathematics rather than MathML. OMML is a close representation of the LineServices/PTS math objects. These math objects were created after extensive study of mathematical typography, rather than by study of MathML. It’s natural to have a file format that mirrors the internal format. In addition, we needed to be able to put any inline text and objects inside math zones. MathML cannot embed other XML namespaces except indirectly via parallel markup of some kind. “I was working on RichEdit 2.0 next door to Eliyezer back then” NT 4 shipped with 2.0 in mid-96 so this would be 1995-1996? Yes. Good old days 🙂
https://devblogs.microsoft.com/math-in-office/lineservices/
CC-MAIN-2022-21
refinedweb
1,108
55.24
? But of course, interpreted and byte-compiled languages do require the original language, or a version of it, in order to run. True, Java programs are compiled, but they're compiled into bytecodes then executed by the JVM. Similarly, .NET programs cannot run unless the CLR is present. Even so, many of the students in my Python courses are surprised to discover that if you want to run a Python program, you need to have the Python language installed. If you're running Linux, this isn't a problem. Python has come with every distribution I've used since 1995. Sometimes the Python version isn't as modern as I'd like, but the notion of "this computer can't run Python programs" isn't something I've had to deal with very often. However, not everyone runs Linux, and not everyone's computer has Python on it. What can you do about that? More specifically, what can you do when your clients don't have Python and aren't interested in installing it? Or what if you just want to write and distribute an application in Python, without bothering your users with additional installation requirements? In this article, I discuss PyInstaller, a cross-platform tool that lets you take a Python program and distribute it to your users, such that they can treat it as a standalone app. I also discuss what it doesn't do, because many people who think about using PyInstaller don't fully understand what it does and doesn't do. Running Python Code Like Java and .NET, Python programs are compiled into bytecodes, high-level commands that don't correspond to the instructions of any actual computer, but that reference something known as a "virtual machine". There are a number of substantial differences between Java and Python though. Python doesn't have an explicit compilation phase; its bytecodes are pretty high level and connected to the Python language itself, and the compiler doesn't do that much in terms of optimization. The correspondence between Python source code and the resulting bytecodes is basically one-to-one; you won't find the bytecode compiler doing fancy things like inlining code or optimizing loops. However, there's no doubt that Python runs bytecode, rather than your source code. You can see this in a number of different ways, the easiest of which is to create a Python module and then import that module. The module is translated into Python bytecodes and then saved to a file with a .pyc suffix. (In Python 3, this is under a directory called __pycache__, with separate byte-compiled versions for different Python versions and architectures.) What does this all have to do with PyInstaller? Well, if you want to distribute a Python program, it's not enough to provide the byte-compiled output. You also need to provide a copy of Python, and that turns out to be a pain under certain circumstances, as I mentioned previously. PyInstaller takes your Python code and byte-compiles it. But, then it also creates an executable application that basically loads Python and runs your program. In other words, each application you distribute with PyInstaller has a complete copy of Python within it, including the libraries needed to run your program. Normally, Python includes the entire standard library, but PyInstaller is smart enough to include only those modules it really needs, thus keeping the distribution size within reason. Note that the copy of Python you have when using PyInstaller is used to create the distributable package. This means if you are running Python 3.4 on Linux, it's that copy of Python 3.4 for Linux that'll be included in your package. In other words, PyInstaller works across platforms, in that you can run it on Linux, Windows, macOS and other systems, but the resulting package is specifically for one architecture. It also means you need to be a bit careful when using PyInstaller on a computer that has multiple Python versions installed. Installing PyInstaller PyInstaller is most easily installed on a computer running Python with the standard pip command: pip install -U --user pyinstaller The -U flag indicates that you would like to upgrade PyInstaller, in case you already have installed it and the version on PyPI is newer. The --user flag indicates that you don't want to install it in the system's directories, but rather under your own home directory. Recently, I've become a fan of installing things with --user, largely because it avoids the need to think about permissions. However, it does mean that you need to add the "bin" directory from the --user location to your PATH. If you're on a computer that has more than one Python version installed, it sometimes can be hard to know just which version is connected to pip. (Although pip --version will tell you which version of Python it's using.) For this reason, I sometimes do things the long way, as follows: python3.6 -m pip install -U --user pyinstaller The -m flag is sort of like the import statement in Python; running things in this way ensures that you're using the version you want. Now that you've installed PyInstaller, let's use it to create a distributable Python application. I've created a new program called (very creatively) myapp.py. Here's the source code: #!/usr/bin/env python3.6 import sys print("Hello, and welcome to my app!") print(f"We're running Python {sys.version}") for i in range(10): print(f"{i} ** 2 = {i**2}, {i} ** 3 = {i**3}") As you can see, this program imports the sys module, which provides access to the Python environment, as well as its variables and settings. I do this so that I can grab sys.version and ensure that the correct version is really running. Next, I execute a "for" loop, for no reason other than that it gives me some output that I can see on the screen when the program runs. In both cases, I use one of my favorite features from Python 3.6, f-strings, which allows me to interpolate expressions inside curly braces. This is, in my mind, far better than the previous ways this was done in Python, using the "%" operator on strings or (more recently) the str.format method. So, let's assume you want to run this program on a colleague's machine. (Remember that your colleague needs to run the same operating system as you do, because the output from PyInstaller is going to be a binary based on the Python version you've installed.) You can type: pyinstaller myapp.py And, you'll get a lot of output. I'm not going to review all of it, but here are some highlights: 468 INFO: PyInstaller: 3.3.1 468 INFO: Python: 3.6.3 470 INFO: Platform: ↪Linux-4.4.0-119-generic-x86_64-with-Ubuntu-16.04-xenial 475 INFO: wrote /root/myapp1/myapp.spec The file "myapp.spec" describes the application you're creating with PyInstaller. You'll find that this file is created automatically when you run PyInstaller. Normally, PyInstaller is smart enough to figure out what files must be included in the resulting distribution, but in some cases, such as data files and shared libraries, you might have to edit the specfile and add them yourself: 491 INFO: Extending PYTHONPATH with paths ['/root/myapp1', '/root/myapp1'] When you say import xyz in Python, the language looks (for starters) in the current directory for "xyz.py". If it doesn't find that (or a bytecoded variation), it looks through the elements of sys.path, one by one, looking for "xyz.py". If you want to tell Python to look in some additional directories, you can set the PYTHONPATH environment variable. Here, PyInstaller is saying that it's modifying PYTHONPATH so that the program can find modules and packages defined in the current directory: 491 INFO: checking Analysis PyInstaller analyzes your code in order to figure out which modules and packages you want to use. Used modules are included in the final distribution, while unused ones are ignored. The "dist" Directory There's a lot more to the output, but after running PyInstaller, you'll find that there's a "dist" directory, and that in that directory is another subdirectory with the name of your new application. This directory contains your new Python application. Now, you can't just run it like that; it's still a bit more complex than your average executable. The idea is that you'll turn the directory into a zipfile, distribute the software to wherever you need it, unzip it on the destination machine, and then run the top-level program. But what if you use a module that has not only a Python component, but also a compiled C component? PyInstaller handles that automatically. For example, say you're going to use NumPy in your program, how does PyInstaller handle the C portion, which is compiled? In this case, PyInstaller noticed that you were using a module with a C component. And if you look in the "dist" directory, you'll now see a bunch of additional shared libraries (*.so files). PyInstaller can't promise to work with all complex packages, but the authors have tried hard to provide a large degree of compatibility. For example, if you use the Cython package (for implementing Python modules in C or providing type hints), you'll find that PyInstaller handles it fine, including the appropriate files in the "dist" directory. Conclusion For years, many of my students and consulting clients have wanted to distribute Python code without needing to run the language itself. That's not possible, but PyInstaller does the next best thing, letting you distribute software in a fairly straightforward way.
https://www.linuxjournal.com/content/introducing-pyinstaller
CC-MAIN-2020-29
refinedweb
1,646
61.56
Approaching the problem: While working with dates we have to keep in mind a variety of cases as months have different number of days. Below is a list of possible cases we have to take care of: # When day=28 and it’s a February In this we would have to check if it’s a leap year or not and then set the next date accordingly. # Month ends of various months For January, March, May, July, August, October, and December last day is 31. For February it is 28 or 29 depending on it’s a leap year or not. And for rest, it’s 30. So we need to check a combination of month and day before incrementing the month. # Last day of the year If its 31st December i.e. last day of the year then the month will be set to 1 and date to 1 and year will be incremented by 1. Also while printing the date we would have to check if the day and month to be printed are less than 10 as then they will be followed by a zero. For leap year we will follow the conditions of Georgian Calendar, which states a year is a leap year if: – It is divisible by 400 – It is divisible by 4 and not divisible by 100 Algorithm: - Since the date can be inputted in a variety of formats, like 1 Dec 2020 or 1/12/2020 or 1/12/20 or 12/1/2020(MMDDYYYY) we will output a statement specifying the acceptable input format for the program. - In the program below, I have taken input in such a way so that I can separate out day and month and year in separate variables in order to work on them easily. - Next, I will check my first condition, if the day is less than 27 as till then irrespective of month and year we just have to increment the day by 1 and the month and year remain as they were. a. Next, I will check for day=28: If the month is Feb I will further check if it’s a leap year or not and accordingly set the date as 29 Feb or 1 march of the respective year. If it’s not Feb then I will simply increment the day by 1. b. Next I will check for day=29: If its Feb then the month will be incremented by 1 and day will be set to 1 otherwise simply increment the day by 1. c. Next I will check for day=30: For January, march, may, July, August, October, and December, I will simply increment the day by 1 otherwise I will increment the month by 1 and set the date to 1 d. Lastly, I will check for day=31: If this condition is true then we will set the day to 1. Further, we will check if month is December as then we will set the month to 1 and increment the year by 1 otherwise we will just increment the month by 1. - After setting the date, I will print it and before printing day and month I will check if they need to be preceded by a 0 or not. Code: #include <iostream> using namespace std; int main() { int d, m, y; cout << "Enter today's date in the format:DD MM YYYY\n"; cin >> d >> m >> y; if (d > 0 && d < 28) //checking for day from 0 to 27 d += 1; if (d == 28) { if (m == 2) //checking for february { if ((y % 400 == 0) || (y % 100 != 0 || y % 4 == 0)) //leap year check in case of feb { d = 29; } else { d = 1; m = 3; } } else //when its not feb d += 1; } if (d == 29) //last day check for feb { if (m == 2) { d = 1; m = 3; } else d += 1; } if (d == 30) //last day check for april,june,September,November { if (m == 1 || m == 3 || m == 5 || m == 7 || m == 8 || m == 10 || m == 12) d += 1; else { d = 1; m += 1; } } if (d == 31) //last day of the month { d = 1; if (m == 12) //checking for last day of the year { y += 1; m = 1; } else m += 1; } cout << "Tomorrow's date:\n"; if (d < 10) //checking if day needs to be preceded by 0 { cout << "0" << d << " "; } else cout << d << " "; if (m < 10) //checking if month needs to be preceded by 0 { cout << "0" << m << " "; } else cout << m << " "; cout << y; return 0; } Output: Enter today's date in the format:DD MM YYYY 28 02 2020 Tomorrow's date: 01 03 2020 Report Error/ Suggestion
https://www.studymite.com/cpp/examples/program-to-print-the-next-days-date-month-year-cpp/?utm_source=related_posts&utm_medium=related_posts
CC-MAIN-2020-50
refinedweb
773
60.92
This is a Java Program to Find 2 Elements in the Array such that Difference between them is Largest. Enter size of array and then enter all the elements of that array. Now we check for all possible difference between two elements and then select the elements whose difference is largest one. Here is the source code of the Java Program to Find 2 Elements in the Array such that Difference between them is Largest. The Java program is successfully compiled and run on a Windows system. The program output is also shown below. import java.util.Scanner; public class Largest_Difference { public static void main(String[] args) { int n, x, count = 0, i = 0, temp = 0; Scanner s = new Scanner(System.in); System.out.print("Enter no. of elements you want in array:"); n = s.nextInt(); int a[] = new int[n]; System.out.println("Enter all the elements:"); for(i = 0; i < n; i++) { a[i] = s.nextInt(); } int diff, greatest_diff; greatest_diff = 0; int a1 = 0, a2 = 0; for(i = 0; i < n; i++) { for(int j = i + 1; j < n; j++) { diff = Math.abs(a[i] - a[j]); if(diff > greatest_diff) { greatest_diff = diff; a1 = i; a2 = j; } } } System.out.println("Greatest Difference:"+greatest_diff); System.out.println("Two elements with largest difference:"+a[a1]+" and "+a[a2]); } } Output: $ javac Largest_Difference.java $ java Largest_Difference Enter no. of elements you want in array:7 Enter all the elements: -2 4 5 6 2 7 -3 Greatest Difference:10 Two elements with largest difference:7 and -3 Sanfoundry Global Education & Learning Series – 1000 Java Programs. Here’s the list of Best Reference Books in Java Programming, Data Structures and Algorithms.
https://www.sanfoundry.com/java-program-find-2-elements-array-such-difference-between-them-largest/
CC-MAIN-2018-13
refinedweb
277
57.16
. Getting Started First of all, you will need to install Faker. If you have pip (and why wouldn’t you?), all you need to do is this: pip install fake-factory Now that you have the package installed, we can start using it! Creating Fake Data Creating fake data with Faker is really easy to do. Let’s look at a few examples. We will start with a couple of examples that create fake names: from faker import Factory #---------------------------------------------------------------------- def create_names(fake): """""" for i in range(10): print fake.name() if __name__ == "__main__": fake = Factory.create() create_names(fake) If you run the code above, you will see 10 different names printed to stdout. This is what I got when I ran it: Mrs. Terese Walter MD Jess Mayert Ms. Katerina Fisher PhD Mrs. Senora Purdy PhD Gretchen Tromp Winnie Goodwin Yuridia McGlynn MD Betty Kub Nolen Koelpin Adilene Jerde You will likely receive something different. Every time I’ve run the script, the results were never the same. Most of the time, I don’t want the name to have a prefix or a suffix, so I created another script that only produces a first and last name: from faker import Factory #---------------------------------------------------------------------- def create_names2(fake): """""" for i in range(10): name = "%s %s" % (fake.first_name(), fake.last_name()) print name if __name__ == "__main__": fake = Factory.create() create_names2(fake) If you run this second script, the names you see should not contain a prefix (i.e. Ms., Mr., etc) or a suffix (i.e. PhD, Jr., etc). Let’s take a look at some of the other types of fake data that we can generate with this package. Creating Other Fake Stuff Now we’ll spend a few moments learning about some of the other fake data that Faker can generate. The following piece of code will create six pieces of fake data. Let’s take a look: from faker import Factory #---------------------------------------------------------------------- def create_fake_stuff(fake): """""" stuff = ["email", "bs", "address", "city", "state", "paragraph"] for item in stuff: print "%s = %s" % (item, getattr(fake, item)()) if __name__ == "__main__": fake = Factory.create() create_fake_stuff(fake) Here we use Python’s built-in getattr function to call some of Faker’s methods. When I ran this script, I received the following for output: email = pacocha.aria@kris.com bs = reinvent collaborative systems address = 57188 Leuschke Mission Lake Jaceystad, KY 46291 city = West Luvinialand state = Oregon paragraph = Possimus nostrum exercitationem harum eum in. Dicta aut officiis qui deserunt voluptas ullam ut. Laborum molestias voluptatem consequatur laboriosam. Omnis est cumque culpa quo illum. Wasn’t that fun? Wrapping Up The Faker package has many other methods that are not covered here. You should check out their full documentation to see what else you can do with this package. With a little work, you can use this package to populate a database or a report quite easily. - Tim Shaffer - Mike Driscoll - kutpaste123 - choosen one
http://www.blog.pythonlibrary.org/2014/06/18/python-create-fake-data-with-faker/
CC-MAIN-2015-11
refinedweb
484
65.62
2. Installation Procedures Oracle Java ME Embedded Client SDK and NetBeans Projects Adding the SDK as a Java Platform Create and Run a New Project As mentioned in Development Environment, the recommended NetBeans version is 6.9.1. You can find this version at: Choose the “Java” download bundle. This chapter details the steps to configure NetBeans to use the Oracle Java ME Embedded Client SDK as a Java platform, and presents a sample project and to run in your configured environment. This section details how to add the SDK as a Java Platform in NetBeans and how to create an Oracle Java ME Embedded Client project. Oracle Java ME Embedded Client provides the Java ME platform for embedded devices, such as TV Set Top boxes and smart electric meters. These devices run a virtual machine based on Java ME CDC. To emulate this environment, the NetBeans IDE must be configured to use the Oracle Java ME Embedded Client platform. Follow these steps to install this platform into your NetBeans IDE. This procedure was recorded on a Linux machine. If you are a PC user, use the Windows paths discussed in SDK Installation Structure. If Java ME is checked, continue to Step 9 or, if Java ME is not enabled, perform the following actions: The NetBeans IDE Installer window opens. Click the Java box as shown, and click the Update button. This step enables the plugin for CDC Java Embedded Client Platform Implementation. If no errors are displayed on the Platform Name page, click Finish and the Java Platform Manager opens. You are ready to develop applications. For example: public class Main { public static void main(String args[]) { System.out.println("Hello, world!"); } } The Browse Main Classes window opens with helloworld.Main selected. The option Run using main(String[] args) method execution should be selected. The message, “Hello, world!” prints in the NetBeans Output window. To open the output window, select Window > Output > Output. To edit project properties, right click on the project and select the option Properties. To change the display resolution, select the Platform category to switch to another emulator platform (if available). Open the Build sub-options to set desired values. For example, you can add any JAR file to the build system by selecting Build sub-option Libraries & Resources. Modify the Running option to pass Arguments or VM options for the Java runtime.
http://docs.oracle.com/javame/config/cdc/cdc-opt-impl/ojmeec/1.0/install/html/z400009a1006487.html
CC-MAIN-2014-42
refinedweb
397
56.35
#include <hallo.h> * Andrew Donnellan [Mon, Jun 05 2006, 07:13:29AM]: > . > > What is wrong with not being a DD? I'm not one, I'm not in NM, I don't > maintain any packages, I just care about free software and Debian in > particular. Phrased after a famous german comedian: Democracy means, you are allowed to have an opinion on everything. You do not have to. Especially some people should learn a simple fact: if you do not have anything new to say, just STFU. > Debian is supposed to be *open* and *transparent*. Telling off users > because their opinion doesn't matter is just stupid. What Mike said is > completely relevant, and IMHO correct. Yes. Should 100 people appear now and say the same things again, and again, and again? WE GOT IT. WE DO NOT NEED TO READ IT AGAIN. We are not through with this issue, and it will be solved in the near future. Just stop chewing the same arguments, let the people do their work. And do not try to polarize the discussion with another "summary of facts, yeah, I could contribute to this discussion somehow so I rock". Eduard.
http://lists.debian.org/debian-devel/2006/06/msg00190.html
CC-MAIN-2013-48
refinedweb
195
75.71
Tunneling PyZMQ Connections with SSH¶ New in version 2.1.9. You may want to connect ØMQ sockets across machines, or untrusted networks. One common way to do this is to tunnel the connection via SSH. IPython introduced some tools for tunneling ØMQ connections over ssh in simple cases. These functions have been brought into pyzmq as zmq.ssh under IPython’s BSD license. PyZMQ will use the shell ssh command via pexpect by default, but it also supports using paramiko for tunnels, so it should work on Windows. An SSH tunnel has five basic components: - server : the SSH server through which the tunnel will be created - remote ip : the IP of the remote machine as seen from the server (remote ip may be, but is not not generally the same machine as server). - remote port : the port on the remote machine that you want to connect to. - local ip : the interface on your local machine you want to use (default: 127.0.0.1) - local port : the local port you want to forward to the remote port (default: high random) So once you have established the tunnel, connections to localip:localport will actually be connections to remoteip:remoteport. In most cases, you have a zeromq url for a remote machine, but you need to tunnel the connection through an ssh server. This is So if you would use this command from the same LAN as the remote machine: sock.connect("tcp://10.0.1.2:5555") to make the same connection from another machine that is outside the network, but you have ssh access to a machine server on the same LAN, you would simply do: from zmq import ssh ssh.tunnel_connection(sock, "tcp://10.0.1.2:5555", "server") Note that "server" can actually be a fully specified "user@server:port" ssh url. Since this really just launches a shell command, all your ssh configuration of usernames, aliases, keys, etc. will be respected. If necessary, tunnel_connection() does take arguments for specific passwords, private keys (the ssh -i option), and non-default choice of whether to use paramiko. If you are on the same network as the machine, but it is only listening on localhost, you can still connect by making the machine itself the server, and using loopback as the remote ip: from zmq import ssh ssh.tunnel_connection(sock, "tcp://127.0.0.1:5555", "10.0.1.2") The tunnel_connection() function is a simple utility that forwards a random localhost port to the real destination, and connects a socket to the new local url, rather than the remote one that wouldn’t actually work. See also A short discussion of ssh tunnels:
http://pyzmq.readthedocs.io/en/latest/ssh.html
CC-MAIN-2017-47
refinedweb
445
60.24
In this post we’ll have a look at a few ways to write asynchronous code in F#, and a very brief example of parallelism as well. As noted in the previous post, F# can directly use all the usual .NET suspects, such as Thread AutoResetEvent, BackgroundWorker and IAsyncResult. Let’s see a simple example where we wait for a timer event to go off: open System let userTimerWithCallback = // create an event to wait on let event = new System.Threading.AutoResetEvent(false) // create a timer and add an event handler that will signal the event let timer = new System.Timers.Timer(2000.0) timer.Elapsed.Add (fun _ -> event.Set() |> ignore ) //start printfn "Waiting for timer at %O" DateTime.Now.TimeOfDay timer.Start() // keep working printfn "Doing something useful while waiting for event" // block on the timer via the AutoResetEvent event.WaitOne() |> ignore //done printfn "Timer ticked at %O" DateTime.Now.TimeOfDay This shows the use of AutoResetEvent as a synchronization mechanism. Timer.Elapsedevent, and when the event is triggered, the AutoResetEvent is signalled. The code above is reasonably straightforward, but does require you to instantiate an AutoResetEvent, and could be buggy if the lambda is defined incorrectly. F# has a built-in construct called “asynchronous workflows” which makes async code much easier to write. These workflows are objects that encapsulate a background task, and provide a number of useful operations to manage them. Here’s the previous example rewritten to use one: open System //open Microsoft.FSharp.Control // Async.* is in this module. let userTimerWithAsync = // create a timer and associated async event let timer = new System.Timers.Timer(2000.0) let timerEvent = Async.AwaitEvent (timer.Elapsed) |> Async.Ignore // start printfn "Waiting for timer at %O" DateTime.Now.TimeOfDay timer.Start() // keep working printfn "Doing something useful while waiting for event" // block on the timer event now by waiting for the async to complete Async.RunSynchronously timerEvent // done printfn "Timer ticked at %O" DateTime.Now.TimeOfDay Here are the changes: AutoResetEventand lambda have disappeared, and are replaced by let timerEvent = Control.Async.AwaitEvent (timer.Elapsed), which creates an asyncobject directly from the event, without needing a lambda. The ignoreis added to ignore the result. event.WaitOne()has been replaced by Async.RunSynchronously timerEventwhich blocks on the async object until it has completed. That’s it. Both simpler and easier to understand. The async workflows can also be used with IAsyncResult, begin/end pairs, and other standard .NET methods. For example, here’s how you might do an async file write by wrapping the IAsyncResult generated from BeginWrite. let fileWriteWithAsync = // create a stream to write to use stream = new System.IO.FileStream("test.txt",System.IO.FileMode.Create) // start printfn "Starting async write" let asyncResult = stream.BeginWrite(Array.empty,0,0,null,null) // create an async wrapper around an IAsyncResult let async = Async.AwaitIAsyncResult(asyncResult) |> Async.Ignore // keep working printfn "Doing something useful while waiting for write to complete" // block on the timer now by waiting for the async to complete Async.RunSynchronously async // done printfn "Async write completed" Asynchronous workflows can also be created manually. A new workflow is created using the async keyword and curly braces. The braces contain a set of expressions to be executed in the background. This simple workflow just sleeps for 2 seconds. let sleepWorkflow = async{ printfn "Starting sleep workflow at %O" DateTime.Now.TimeOfDay do! Async.Sleep 2000 printfn "Finished sleep workflow at %O" DateTime.Now.TimeOfDay } Async.RunSynchronously sleepWorkflow Note: the code do! Async.Sleep 2000 is similar to Thread.Sleep but designed to work with asynchronous workflows. Workflows can contain other async workflows nested inside them. Within the braces, the nested workflows can be blocked on by using the let! syntax. let nestedWorkflow = async{ printfn "Starting parent" let! childWorkflow = Async.StartChild sleepWorkflow // give the child a chance and then keep working do! Async.Sleep 100 printfn "Doing something useful while waiting " // block on the child let! result = childWorkflow // done printfn "Finished parent" } // run the whole workflow Async.RunSynchronously nestedWorkflow One very convenient thing about async workflows is that they support a built-in cancellation mechanism. No special code is needed. Consider a simple task that prints numbers from 1 to 100: let testLoop = async { for i in [1..100] do // do something printf "%i before.." i // sleep a bit do! Async.Sleep 10 printfn "..after" } We can test it in the usual way: Async.RunSynchronously testLoop Now let’s say we want to cancel this task half way through. What would be the best way of doing it? In C#, we would have to create flags to pass in and then check them frequently, but in F# this technique is built in, using the CancellationToken class. Here an example of how we might cancel the task: open System open System.Threading // create a cancellation source let cancellationSource = new CancellationTokenSource() // start the task, but this time pass in a cancellation token Async.Start (testLoop,cancellationSource.Token) // wait a bit Thread.Sleep(200) // cancel after 200ms cancellationSource.Cancel() In F#, any nested async call will check the cancellation token automatically! In this case it was the line: do! Async.Sleep(10) As you can see from the output, this line is where the cancellation happened. Another useful thing about async workflows is that they can be easily combined in various ways: both in series and in parallel. Let’s again create a simple workflow that just sleeps for a given time: // create a workflow to sleep for a time let sleepWorkflowMs ms = async { printfn "%i ms workflow started" ms do! Async.Sleep ms printfn "%i ms workflow finished" ms } Here’s a version that combines two of these in series: let workflowInSeries = async { let! sleep1 = sleepWorkflowMs 1000 printfn "Finished one" let! sleep2 = sleepWorkflowMs 2000 printfn "Finished two" } #time Async.RunSynchronously workflowInSeries #time And here’s a version that combines two of these in parallel: // Create them let sleep1 = sleepWorkflowMs 1000 let sleep2 = sleepWorkflowMs 2000 // run them in parallel #time [sleep1; sleep2] |> Async.Parallel |> Async.RunSynchronously #time We’re using the #time option to show the total elapsed time, which, because they run in parallel, is 2 secs. If they ran in series instead, it would take 3 seconds. Also you might see that the output is garbled sometimes because both tasks are writing to the console at the same time! This last sample is a classic example of a “fork/join” approach, where a number of a child tasks are spawned and then the parent waits for them all to finish. As you can see, F# makes this very easy! In this more realistic example, we’ll see how easy it is to convert some existing code from a non-asynchronous style to an asynchronous style, and the corresponding performance increase that can be achieved. So here is a simple URL downloader, very similar to the one we saw at the start of the series: open System.Net open System open System.IO let fetchUrl url = let req = WebRequest.Create(Uri(url)) use resp = req.GetResponse() use stream = resp.GetResponseStream() use reader = new IO.StreamReader(stream) let html = reader.ReadToEnd() printfn "finished downloading %s" url And here is some code to time it: // a list of sites to fetch let sites = [""; ""; ""; ""; ""] #time // turn interactive timer on sites // start with the list of sites |> List.map fetchUrl // loop through each site and download #time // turn timer off Make a note of the time taken, and let’s if we can improve on it! Obviously the example above is inefficient – only one web site at a time is visited. The program would be faster if we could visit them all at the same time. So how would we convert this to a concurrent algorithm? The logic would be something like: Unfortunately, this is quite hard to do in a standard C-like language. In C# for example, you have to create a callback for when an async task completes. Managing these callbacks is painful and creates a lot of extra support code that gets in the way of understanding the logic. There are some elegant solutions to this, but in general, the signal to noise ratio for concurrent programming in C# is very high*. * As of the time of this writing. Future versions of C# will have the await keyword, which is similar to what F# has now. But as you can guess, F# makes this easy. Here is the concurrent F# version of the downloader code: open Microsoft.FSharp.Control.CommonExtensions // adds AsyncGetResponse // Fetch the contents of a web page asynchronously let fetchUrlAsync url = async { let req = WebRequest.Create(Uri(url)) use! resp = req.AsyncGetResponse() // new keyword "use!" use stream = resp.GetResponseStream() use reader = new IO.StreamReader(stream) let html = reader.ReadToEnd() printfn "finished downloading %s" url } Note that the new code looks almost exactly the same as the original. There are only a few minor changes. use resp =” to “ use! resp =” is exactly the change that we talked about above – while the async operation is going on, let other tasks have a turn. AsyncGetResponsedefined in the CommonExtensionsnamespace. This returns an async workflow that we can nest inside the main workflow. async {...}” wrapper which turns it into a block that can be run asynchronously. And here is a timed download using the async version. // a list of sites to fetch let sites = [""; ""; ""; ""; ""] #time // turn interactive timer on sites |> List.map fetchUrlAsync // make a list of async tasks |> Async.Parallel // set up the tasks to run in parallel |> Async.RunSynchronously // start them off #time // turn timer off The way this works is: fetchUrlAsyncis applied to each site. It does not immediately start the download, but returns an async workflow for running later. Async.Parallelfunction Async.RunSynchronouslyto start all the tasks, and wait for them all to stop. If you try out this code yourself, you will see that the async version is much faster than the sync version. Not bad for a few minor code changes! Most importantly, the underlying logic is still very clear and is not cluttered up with noise. To finish up, let’s have another quick look at a parallel computation again. Before we start, I should warn you that the example code below is just to demonstrate the basic principles. Benchmarks from “toy” versions of parallelization like this are not meaningful, because any kind of real concurrent code has so many dependencies. And also be aware that parallelization is rarely the best way to speed up your code. Your time is almost always better spent on improving your algorithms. I’ll bet my serial version of quicksort against your parallel version of bubblesort any day! (For more details on how to improve performance, see the optimization series) Anyway, with that caveat, let’s create a little task that chews up some CPU. We’ll test this serially and in parallel. let childTask() = // chew up some CPU. for i in [1..1000] do for i in [1..1000] do do "Hello".Contains("H") |> ignore // we don't care about the answer! // Test the child task on its own. // Adjust the upper bounds as needed // to make this run in about 0.2 sec #time childTask() #time Adjust the upper bounds of the loops as needed to make this run in about 0.2 seconds. Now let’s combine a bunch of these into a single serial task (using composition), and test it with the timer: let parentTask = childTask |> List.replicate 20 |> List.reduce (>>) //test #time parentTask() #time This should take about 4 seconds. Now in order to make the childTask parallelizable, we have to wrap it inside an async: let asyncChildTask = async { return childTask() } And to combine a bunch of asyncs into a single parallel task, we use Async.Parallel. Let’s test this and compare the timings: let asyncParentTask = asyncChildTask |> List.replicate 20 |> Async.Parallel //test #time asyncParentTask |> Async.RunSynchronously #time On a dual-core machine, the parallel version is about 50% faster. It will get faster in proportion to the number of cores or CPUs, of course, but sublinearly. Four cores will be faster than one core, but not four times faster. On the other hand, as with the async web download example, a few minor code changes can make a big difference, while still leaving the code easy to read and understand. So in cases where parallelism will genuinely help, it is nice to know that it is easy to arrange.
https://fsharpforfunandprofit.com/posts/concurrency-async-and-parallel/
CC-MAIN-2018-13
refinedweb
2,072
66.64
Dirichlet BC problems... Asked by Arun Jaganathan on 2013-11-25 Hello, This is a beginner question. I have the following error setting Dirichlet BC . Error: Unable to create Dirichlet boundary condition. *** Reason: Illegal value rank (1), expecting (2). *** Where: This error was encountered inside DirichletBC.cpp. Here is part of my code: mesh = UnitSquareMesh(300, 300) V = VectorFunctionS Q = VectorFunctionS W = V * Q class DirichletBounda def inside( return on_boundary zero = Constant((0.0,0.0)) bc1 = DirichletBC( bc2 = DirichletBC( bcs =[bc1, bc2] Any help ? Question information - Language: - English Edit question - Status: - Answered - For: - DOLFIN Edit question - Assignee: - No assignee Edit question - Last query: - 2013-11-25 - Last reply: - 2013-11-27 FEniCS no longer uses Launchpad for Questions & Answers. Please consult the documentation on the FEniCS web page for where and how to (re)post your question: http:// fenicsproject. org/support/
https://answers.launchpad.net/dolfin/+question/239758
CC-MAIN-2021-21
refinedweb
142
50.02
I am trying to write 16 bit data to i2c device . As there are no registers specified to write to the slave device, I am not able to use the built-in wiringPiI2CWriteReg16 Api. Can some one guide me how to write 16 bit data to device address. I tried using wiringPiI2CWrite() API consecutively for 2 times to write 16 bit data but no luck! Answer (1) I never use Wiring Pi. So I don’t understand your WiringPi problem. (2) I only know how to use smBus to communicate with I2C devices, with or without registers. (3) Referring the Appendix A below, I usually (a) First import smbus, (b) Define one function to write one byte to the I2C device, (c) Define another function to write two bytes to the I2C device. (4) I very seldom use the first function – writing only one byte to the I2C device. (5) I heavily use the second function – writing two bytes to the I2C device. (6) I use the i2cBus.write_byte method to write the two bytes. (7) i2cBus.write does not care what the two write bytes are. It just writes out blindly. (8) But if the first byte is the device’s register address, say config register at address 0x00, and second byte is a data byte 0x55, then 0x55 would be written into the config register. (9) Now coming back to your problem. What you want is just writing two bytes out, say first byte is 0x77, second byte is 0x88, then i2cBus just again do its job blindly, sending out 0x77, 0x88, I guess this is what you want. Let me know otherwise, (10) I have a scope to display waveforms, I usually repeated indefinitely write two bytes, and pause 10mS every writing two bytes. I am happy to display the waveforms fro you. Appendices Appendix A – Import and definitions to write one byte and two bytes to I2C device import smbus i2cBus1 = smbus.SMBus(1) def quickWriteDevOneByte(i2cBus, devAddr, writeByte): i2cBus.write_byte(devAddr, writeByte) return def writeDevTwoBytes(i2cBus, devAddr, writeByte1, writeByte2): i2cBus.write_byte_data(devAddr, writeByte1, writeByte2) return Appendix B – I2C Sending two bytes What is the max i2c speed of the raspberry pi 4? Listing of program to write two bytes to device (actually at the same time read the ADXL345’s ID register) References (1) System Management Bus – Wikipedia (2) SMBus Quick Start Guide, App Note AN4471 – NXP 2010 (3) SMBus Protocol Summary – Linux Kernel documentation v5.4.0 (4) What is the max i2c speed of the raspberry pi 4? Categories: Uncategorized
https://tlfong01.blog/2020/04/28/i2c-write-two-bytes/
CC-MAIN-2020-29
refinedweb
427
71.75
Guides & Tutorials Building a custom React media query hook for more responsive apps Welcome to Blogvent, day 5! Chances are if you've written any CSS before, you've written media queries. And honestly, media queries overall are solid! But, they were made for an earlier time in the browser. They were not designed for some of the rendering logic that we have on the front-end now. You can still use media queries, of course, and should, but there are some cases where JavaScript will be a smarter option. For example, what if you’re on your phone, and browsing a website, and there is a sidebar or element that is hidden by CSS, that is making network requests? For the user, that is a waste of resources! There has to be a better way. And there is! Media queries... in JavaScript! So, to solve this problem, what you need to do here is conditionally render things based on the browser size, rather than render something and hide it with CSS. If you'll recall in yesterday's Blogvent post, you can use React's useEffect to access the window object in the browser. That window object has a function called matchMedia that returns a boolean based on if the window matches a certain media query passed in! So, if we combine these with a little bit of state, you can make a custom hook that you can use to conditionally render components in your applications: import { useState, useEffect } from 'react'; export function useMediaQuery(query) { const [matches, setMatches] = useState(false); useEffect(() => { const media = window.matchMedia(query); if (media.matches !== matches) { setMatches(media.matches); } const listener = () => { setMatches(media.matches); }; media.addListener(listener); return () => media.removeListener(listener); }, [matches, query]); return matches; } Let's walk through this. In this custom hook, you have a matches state variable, and we take in a query. In the effect, we check if the query that is passed in matches the window. If it does, we set matches to true. We set an event listener in there as well, to keep that variable in sync with the window changing sizes. The event listener is removed when query changes, when the component using it unmount, or when matches changes. Whoa. How can I see this in action? Feel free to use this hook in your projects! You can call it inside your components, for example: function Page() { let isPageWide = useMediaQuery('(min-width: 800px)') return <> {isPageWide && <UnnecessarySidebar />} <ImportantContent /> </> } If you'd like to see it in action in a real project, check out the Jamstack Explorers repo and how we render our Navigation component. And, if you'd like to learn more about Next.js, check out the course (with more to come) on Jamstack Explorers!
https://www.netlify.com/blog/2020/12/05/building-a-custom-react-media-query-hook-for-more-responsive-apps/
CC-MAIN-2022-21
refinedweb
458
64.3
Hello. I am starting using Julia. I like its sintax, simplicity and its claimed performances. I have been using Matlab/Octave a lot at university and now at work we are using Octave for licence reason. However Octave is really slow and here I have read about Julia. I have tried some simple matrix multiplication (A*B, not element wise one) but it is really slow. Octave is faster and Python numpy too. I am using Julia version 0.6.2 on Ubuntu notebook for the test but the same problem occurs in Windows 10. Python code below takes 0.006 seconds: import numpy import cProfile n = 1000; x=numpy.random.random((n,n)) y=numpy.random.random((n,n)) cProfile.run("x*y") Julia code takes 0.1 seconds. n = 1000; a = rand(n,n); @time a*a; I don’t understand why is so slow. Thank you
https://discourse.julialang.org/t/slow-matrix-multiplication-in-julia-compared-to-python-numpy/11015
CC-MAIN-2019-09
refinedweb
149
69.28
A Closer Look at Large-cap European Tech in Public Markets 71 tech companies with >€1bn market cap. Full list available on request. There is a consensus that Europe has lagged behind the US and China in tech innovation due to a variety of reasons. Europe has been considered terrible at nurturing successful tech companies with only a few reaching significant scale (>€1bn value) compared to the US. While the debate has focused on private markets, much of the same negative sentiment / consensus is echoed in the European public markets with the tech sector to be “small” and overshadowed by the old economy. In this post, I take a second look at the listed European tech sector to better understand its composition and assess its attractiveness from a long term investor’s perspective. I summarize my key initial findings in this post. Please reach out if you would like access to the data or exchange thoughts on this topic. There are tech 71 companies with >€1bn Market Cap in Europe Based on my screening, there are 71 European tech companies with a market cap above €1 billion, representing an aggregate market cap of €1.1 trillion (as a reference, Apple’s current market cap is €1.9 trillion). It should be noted that there is a long tail of 250+ smaller companies (<€1 billion market cap) with an aggregate market value of €90+ billion. Sector Distribution - Semis / hardware represent the largest segment. I was slightly surprised as it is not a sector I know well and hence was not aware of its scale. There are several large companies such as ASML, Infineon, STMElectronics in Europe. I suspect that the global nature of the semis, coupled with its manufacturing capabilities, enabled European companies to reach such scale. - Consumer Internet is the second largest segment and comprises dominant regional players in classifieds / platforms, food delivery and e-commerce. Whilst there are several new-age players, particularly in food delivery and ecommerce, low-tech classifieds (Adevinta, Scout24, Schibsted) still represent a large portion of this segment. - Software segment is the third largest segment with an aggregate market value of c. €235 billion (SAP represents c.60% of the aggregate market value). - Majority of the European software companies are “mature-tech” with revenue growth <10%. However, several promising, high-growth companies such as TeamViewer have recently IPOed and reached significantly scale. - It should be noted that successful high-growth tech companies tend to prefer a US listing (e.g. Spotify, Elastic), limiting the listing of break-out software companies in Europe. - Gaming is the third largest segment with 7 out of 10 companies listed in two geographies (Sweden and the UK). Selected companies include Ubisoft, Embracer Group, CD Projekt, Keywords Studios. - Others including (a) Edtech (Kahoot!, Learning Technologies), (b) HoldCo (Prosus), (c) Payments (Adyen) and (d) Ocado Geographic Distribution As expected, the majority (61%) of the tech companies are HQed / listed in the UK, Germany and Netherlands, which are the largest and mature financial exchanges in Europe. I was quite surprised to see 13 companies listed on the Nordic exchanges, containing some hidden gems in the European tech universe. The 20 largest tech companies in Europe The table above shows the top 20 largest listed tech companies in Europe, representing an aggregate market cap of €300 billion (c.27% of total universe). Some observations - Lack of European “FAANG” — There is a clear lack of tech majors in Europe. While the US have their FAANGs and China have their BAT stocks, Europe doesn’t have any tech companies of such scale. In fact, most investors won’t even be familiar with the largest European tech company — ASML, a leading manufacturers of chip-making equipment. - “Old-school” Semis / hardware are overweight on the list, similar to the overall sector distribution — As per the aggregate sector distribution, semis / hardware dominate the top 20. - Several new-age, consumer internet players dominating — One common theme within the consumer internet is the emergence of dominant regional platforms in various verticals (classifieds, ecommerce, food delivery). Companies such as DeliveryHero, Zalando and Allegro dominate multiple geographic markets, enabling them to reach significant scale. - Where are the software companies? There is a clear lack of software companies in the top 20. SAP, Dassault and AVEVA are the only software companies on the list. In contrast to the US, Europe is a tough place to scale a listed software business due to a variety of structural reasons (geographic fragmentation of end markets and smaller IT budgets relative to the US, investor focus on profitability etc). Highest Return over a 5 Year Period The table above shows the highest total shareholder return (assuming dividends are reinvested if applicable) over the last 5 years. It is difficult to derive a common theme. While +10 baggers exist in every market, the relative small cap nature of these companies and geographic dispersion makes it more difficult for retail or institutional investors to identify these attractive opportunities. Investing in European tech requires access to and knowledge of several local markets (UK, Germany, Nordics) and a holistic sector knowledge (semis, gaming, software) as opportunities are spread across sectors and geographies. Absent the ZIRP state of the world, an optimist would argue that the complexity would perhaps makes it a compelling playground for sophisticated European retail and institutional investors, wanting to generate alpha on home turf. However, as long as the US tech keeps hitting ATHs, investors don’t need to do the hard work in European tech.
https://cmodi.medium.com/a-closer-look-at-large-cap-european-tech-in-public-markets-f5f89768cd67?source=user_profile---------3-------------------------------
CC-MAIN-2022-33
refinedweb
917
52.09
CNC machine v2.1 - aka "Valkyrie Reloaded" - Login or register to post comments - by TinHead - Collected by 43 users - the front - the back - left - right - the power unit - and some closeups here and there Description: This is the second third. :) I've put up the code ... Great Buid. Looking around the web, it seems like there isn't much too helpful information with DIY CNCs (maybe I'm not looking hard enough). RepRap proved off-putting when their pages and updates seemed unkept. All in all, Your build was quite inspiring, and I'm following your lead. You said you used a python script for a pass on the g-code to the arduino through the serial port. How exactly did you do this? Looking into it, I assume you used python and pyserial. Is there any way you can post the source code up for the python-end of your build? Hello and thanks :) If you need any help with the build, do not hesitate to ask. You are right, I'm using pyserial. Basically I'm reading the gcode file, then I'm the lines one by one trough the serial while waiting for the OK from the Arduino. The script is as simple as it get's I will post it tonight when I get home, right now I'm at work. hi i am glad to read you found better steppers for your proyect.......... i was worried for you to struggle with microstepping. when i measured the 1.8 degree steppers i have i noticed i wont need microstepping at all..... then readed your info working on pwm and stuff i double check my settings and confirmed my findings,,,,,,,,, everything ok.... what torque your new steppers have..... did you test them? if you what to drive 1 amp motors you could use 2 hbridges one on top of the other.... parallel pit to pin... you must notice that you can also rise the voltage that feeds the motors, better perfomance.... are they 4 wire motors? my new ones are unipolar 6 wire, 12 volts 0.4 amp vexta steppers.... wired as bipolar half coiled... i think that in order to get allways the best results in proyects, you must keep things as simple as they can be..... really dont know why to use i2c when the driver circuits are so simple to manage..... in the reprap gcode interpreter there is a delay value i was playing with to get better results.... it is on the stepper control tag..... return ((distance * 60000000.0) / feedrate) / master_steps; i changed the value to 1000000000 (9 cero). also have limit switches on all axis...... i cant see them on your machine.... very good feature..... Hello Microstepping is cool but very hard to implement. I wanted microstepping because besides a better resolution with those cheap steppers, it would create a much smoother movement and reduce overall resonance. Anyway I can live without it though :) The new steppers are bigger and have very good torque. They are bipolar, so they have 4 wires. I did some tests with them, they work very nicely. I agree with the KISS idea in my projects too, I chose to use I2C because it seemed more elegant then the pulse/direction solution,I can always change that. Not sure why you needed to modify the value there, I had no problems with that... Limit switches will be implemented next, I'm tired of homing the machine manually everytime :P Mechanical resonance Very nice and clean build you have there, I'm using some similar solutions on my own mini-cnc that's been in the making for years now, it seems like. How is the mechanical resonance after adding the new motors? It will probably still scream a bit during operation, yes? What I did was to add a small flywheel to the leadscrews which dramatically cut down on noise, and also increased my max speed to at least double of what it was before, although acceleration may have to be tuned down if the rotating mass is large enough (too large?),Mine are about 5cm in diameter cut with a hole-saw from 8mm plywood. They are probably a bit larger than they need to be, but they do give a mean max-speed. ;) (did do - my machine is currently in pieces, awaiting some redesign) So basically what you'll have is: [Motor shaft] -> [Rubbery connector] -> {Lead screw] -> [Small flywheel] Feels right to have the flywheel right at the shaft connector, but that's just based on my instinct :) This will be the mechanical analog of a second order low-pass filter, with the rubber connector playing the role of the capacitor (Being kind of springy, it stores potential energy.) And the fly-wheel takes the role of an inductor (Storing kinetic energy). I believe the reason that this allows a higher max speed is because the resonant frequency of the mechanical system is increased so as to avoid oscillations. But the simplest and best way to see what happens is probably just to realize that the flywheel will tend to keep the lead-screw spinning for the fraction of a second that the motor shaft does not. With the semi-rigid shaft connector further enchancing this smoothing action. Edit: A soft connector between the motor and lead-screw is probably very important for this to work properly. I can't imagine the fly-wheel doing much good if the lead-screw can't move slightly independent from the shaft. The rubber acting as a highly damped spring probably doesn't hurt to kill of oscillations either. Hi I haven't got to mount the new motors yet, I need to create new mounts for them ... But with the old ones it wasn't screaming too much. But thanks for the tip, I will give it a try if I think it's needed. Right now I'm rebuilding the Z-Axis, should enhance the Y axis work area by around 100 mm yey ! the CNCurse That sounds familiar - And if you ever think you're finished, it's probably just mental preparations for the next build. :) Btw, how do you power the your motors? Overvoltaged and current limited? I'm using some inline power-resistors to limit stall-current. Primitive, but all my dirt cheap controller cards could accomodate. That's a tough one ... You've got such a nice You've got such a nice controller with all the possibilities in the world - I say stick with it. Sooner or later you'll figure it out :) Haven't got the software to view your schematics, but I'm guessing you're using the sense voltages per the l298 datasheet. And feed them to two ADC channels on the uC. And based on that reading you vary the duty cycle to get closed loop control of the current? edit: I see Attiny2313 doesn't have an ADC, so no-go on that one. Apparently it has an analog comparator, perhaps that could be used somehow. I'm thinking something along the lines of 100% duty cycle until the comparator trips, (Which is assumed to happen when the motor winding is at it's rated current.) and then drop the duty cycle accordingly. E.g. if your motor is rated for 12V and you supply it 36V drop duty cycle to about 33%. This is just off the top of my head, and I'm not quite sure how it'll play out in meatspace. At the end of the day this is probably not doable without some more external circuitry, as the Attiny datasheet suggests it only has one 2-input comparator. I'm sorry to say I seem to be running out of ideas here...
http://letsmakerobots.com/node/9006?page=7
CC-MAIN-2014-35
refinedweb
1,301
72.66
dear friends , I found a code which calculates pi with an interesting algorithm the programme code is below: from sys import stdout def f((q,r,t,k)): n = (3*q+r) / t if (4*q+r) / t == n: return (10*q,10*(r-n*t),t,k,n) else: return (q*k, q*(4*k+2)+r*(2*k+1),t*(2*k+1),k+1) # Call pi(20) for first 20 digits, or pi() for all digits def pi(n=-1): printed_decimal = False r = f((1,0,1,1)) while n != 0: if len(r) == 5: stdout.write(str(r[4])) if not printed_decimal: stdout.write('.') printed_decimal = True n -= 1 r = f(r[:4]) #stdout.write('\n') if __name__ == '__main__': from sys import argv try: digit_count = long(argv[1]) except: digit_count=int(raw_input('How many digits? :')) pi(digit_count) This code gives the number in an unusual format like "3.1415'None'" it has a number part and a string part . I want to seperate these from easc other but I couldn't manage. I mean when I try to turn it into string format then try to use things like [:4] or like that they don't work.Any idea how to seperate this 'None' from the number and make it a real normal number on which I can do operations like +1 -1 or like that :) Regards __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around -------------- next part -------------- An HTML attachment was scrubbed... URL:
https://mail.python.org/pipermail/python-win32/2005-January/002870.html
CC-MAIN-2017-17
refinedweb
253
71.85
CLaSH.Tutorial Contents Description Synopsis Introduction. The CλaSH compiler transforms these high-level descriptions to low-level synthesizable VHDL. 0 .8.* and up. - Install GHC (version 7.8.* or higher) - Download and install GHC for your platform. Unix user can use ./configure prefix=<LOCATION>to set the installation location. - Make sure that the bindirectory of GHC is in your PATH. - Install Cabal - Windows and OS X Mavericks: - Download the binary for cabal-install - Put the binary in a location mentioned in your PATH - Other Unix systems: - Download the sources for cabal-install - Unpack ( tar xf) the archive and cdto the directory - Run sh bootstrap.sh - Follow the instructions to add cabalto your PATH - Run cabal update - Install CλaSH - Run cabal install clash-ghc - one added command ( :vhdl). register function: register :: a -> Signal a -> Signal a regiser: (<^>) :: (Pack i, Pack o) => (s -> i -> (s,o)) -> s -> (SignalP i -> SignalP o) f <^> initS = ... The complete sequential MAC circuit can now be specified as: mac = macT <^> 0 Where the LHS of <^> is our macT function, and the RHS is the initial state, in this case 0. We can see it is functioning correctly in our interpreter: >>> take 4 $ simulateP. The observant reader already saw that the <^> operator does not create a function that works on Signals, but on on SignalPs. Indeed, when we look at the type of our mac circuit: >>> :t macmac :: (Pack o, Num o) => (Signal o, Signal o) -> SignalP o We see that our mac function work on a two-tuple of Signals and not on a Signal of a two-tuple. Indeed, the CλaSH prelude library defines that: type instance SignalP (a,b) = (Signal a, Signal b) SignalP is an associated type family belonging to the Pack type class, which, together with pack and unpack defines the isomorphism between a product type of Signals and a Signal of a product type. That is, while (Signal a, Signal b) and Signal (a,b) are not equal, they are isomorphic and can be converted from on to the other using pack and unpack. Instances of this Pack type-class are defined as isomorphisms for: But they are defined as identities for: That is: instance Pack Bool where type SignalP Bool = Signal Bool pack :: SignalP Bool -> Signal Bool pack = idunpack :: Signal Bool -> SignalP Bool unpack = id We will see later why this Pack type class is so convenient, for now, you just have to remember that it exists. And more importantly, that you understand that a product type of Signals is not equal to a Signal of a product type, but that the functions of the Pack type class allow easy conversion between the two. Creating 9),Signal (Signed 9)) -> Signal (Signed 9) = macT <^> 0 topEntity :: (Signal (Signed 9),Signal (Signed 9)) -> Signal (Signed 9) (except testbench.vhdl) :: SignalP a -> SignalP b Where a and b are placeholders for monomorphic types: the topEntity is not allowed to be polymorphic. So given the above type for the topEntity, the type of testInput should be: testInput :: Signal a And the type of expectedOutput should be: expectedOutput :: Signal b -> Signal Bool stimuliGenerator and outputVerifier: testInput :: Signal (Signed 9,Signed 9) testInput = stimuliGenerator $(v [(1,1) :: (Signed 9,Signed 9),(2,2),(3,3),(4,4)]) expectedOutput :: Signal (Signed 9) -> Signal Bool expectedOutput = outputVerifier $(v [0 :: Signed 9 $ unpack that has support for VHDL-2008. VHDL-2008 support is required because the output verifier will use the VHDL-2008-only to_string function.. This concludes the main part of this section on "Your first circuit", read on for alternative specifications for the same mac circuit, or just skip to the next section where we will describe another DSP classic: an FIR filter structure. Alternative specifications Numinstance for Signal: 0 0 acc' acc' = ma <$> acc <*> pack (x,y) - State Monad We can also implement the original macTfunction as a Statemonadic computation. First we must an extra import statement, right after the import of CLaSH.Prelude: import Control.Monad.State We can then implement macT as follows: macTS (x,y) = do acc <- get put (acc + x * y) return acc We can use the <^>operator again, although we will have to change position of the arguments and result: asStateM :: (Pack o, Pack i) => (i -> State s o) -> s -> (SignalP i -> SignalP o) asStateM f i = g <^> = vfoldl (+) 0 (vzipWith (*) as bs) fir coeffs x_t = y_t where y_t = dotp coeffs xs xs = window x_t topEntity :: Signal (Signed 16) -> Signal (Signed 16) topEntity = fir $(v [0::Signal (Signed First we define some types: module CalculatorTypes where import CLaSH.Prelude type Word = Signed 4 data OPC a = ADD | MUL | Imm a | Pop | Push deriveLift ''OPC Now we define the actual calculator: module Calculator where import CLaSH.Prelude import CalculatorTypes (.:) :: (c -> d) -> (a -> b -> c) -> a -> b -> d (f .: g) a b = f (g a b) infixr 9 .: alu :: Num a => OPC a -> a -> a -> Maybe a alu ADD = Just .: (+) alu MUL = Just .: (*) alu (Imm i) = const . const (Just i) alu _ = const . const Nothing pu :: (Num a, Num b) => (OPC a -> a -> a -> Maybe a) -> (a, a, b) -- Current state -> (a, OPC a) -- Input -> ( (a, a, b) -- New state , (b, Maybe a) -- Output ) pu alu (op1,op2,cnt) (dmem,Pop) = ((dmem,op1,cnt-1),(cnt,Nothing)) pu alu (op1,op2,cnt) (dmem,Push) = ((op1,op2,cnt+1) ,(cnt,Nothing)) pu alu (op1,op2,cnt) (dmem,opc) = ((op1,op2,cnt) ,(cnt,alu opc op1 op2)) datamem :: (KnownNat n, Integral i) => Vec n a -- Current state -> (i, Maybe a) -- Input -> (Vec n a, a) -- (New state, Output) datamem mem (addr,Nothing) = (mem ,mem ! addr) datamem mem (addr,Just val) = (vreplace mem addr val,mem ! addr) topEntity :: Signal (OPC Word) -> Signal (Maybe Word) topEntity i = val where (addr,val) = (pu alu <^> (0,0,0 :: Unsigned 3)) (mem,i) mem = (datamem <^> initMem) (addr,val) initMem = vcopy d8 0 Here we can finally see the advantage of having the <^> return a function of type: ( (instead of: SignalP i -> SignalP o) (): Signal i -> Signal o) - We can use normal pattern matching to get parts of the result, and, - We can use normal tuple-constructors to build the input values for the circuits. Advanced: VHDL.Signed module specifies multiplication as follows: {-# NOINLINE timesS #-} timesS :: KnownNat n => Signed n -> Signed n -> Signed n timesS (S a) (S b) = fromIntegerS_inlineable (a * b) For which the expression primitive is: { "BlackBox" : { "name" : "CLaSH.Sized.Signed.timesS" , blockRam as an example, for which the Haskell/CλaSH code is: {-# NOINLINE blockRam #-} -- | Create a blockRAM with space for @n@ elements -- -- NB: Read value is delayed by 1 cycle -- -- > bram40 :: Signal (Unsigned 6) -> Signal (Unsigned 6) -> Signal Bool -> Signal a -> Signal a -- > bram40 = blockRam d40 blockRam :: forall n m a . (KnownNat n, KnownNat m, Pack a, Default a) => SNat n -- ^ Size @n@ of the blockram -> Signal (Unsigned m) -- ^ Write address @w@ -> Signal (Unsigned m) -- ^ Read address @r@ -> Signal Bool -- ^ Write enable -> Signal a -- ^ Value to write (at address @w@) -> Signal a -- ^ Value of the blockRAMat address @r@ from the previous clock cycle blockRam n wr rd en din = pack $ (bram' <^> binit) (wr,rd,en,din) where binit :: (Vec n a,a) binit = (vcopy n def,def) bram' :: (Vec n a,a) -> (Unsigned m, Unsigned m, Bool, a) -> (((Vec n a),a),a) bram' (ram,o) (w,r,e,d) = ((ram',o'),o) where ram' | e = vreplace ram w d | otherwise = ram o' = ram ! r And for which the definition primitive is: { "BlackBox" : { "name" : "CLaSH.Prelude.blockRam" , "templateD" : "~SYM[0]_block : block type ram_array is array (natural range <>) of ~TYP[8]; signal ~SYM[1] : ram_array((~ARG[0]-1) downto 0) := (others => ~ARG[3]); -- ram signal ~SYM[2] : ~TYP[8]; -- inp signal ~SYM[3] : ~TYP[8] := ~ARG[3]; -- outp begin ~SYM[2] <= ~ARG[8]; process(~CLKO) begin if rising_edge(~CLKO) then if ~ARG[7] then ~SYM[1](to_integer(~ARG[5])) <= ~SYM[2]; end if; ~SYM[3] <= ~SYM[1](to_integer(~ARG. ~DEF[N]: Default value for the VHDL type of the (N+1)'th argument. NB: Does not correspond per se to the value of defof the Defaulttype class for the Haskell type. ~DEFO: Default value for the VHDL type of the result. NB: Does not correspond per se to the value of the defof the Defaulttype class for the Haskell type. ~SYM[N]: Randomly generated, but unique, symbol. Multiple occurrences of ~SYM[N]in the same primitive definition all refer to the same random, but unique, symbol. ‘Signal (a,b)’ with actual type ‘(Signal a, Signal b)’: packfunction like so: ... = f a b (pack (c,d)) Product types supported by packare: NB: Use cpackwhen you are using explicitly clocked CSignals - Type error: Couldn't match expected type ‘(Signal a, Signal b)’ with actual type ‘Signal unpackfunction like so: (c,d) = unpack (f a b) Product types supported by unpackare: NB: Use cunpackwhen you are using explicitly clocked CSignals - 3 (acc + x * y) The above function, works for any number-like type. This means that accis a recursively defined polymorphic value. Adding a monomorphic type annotation makes the error go away: topEntity :: Signal (Signed 8) -> Signal (Signed 8) -> Signal (Signed 8) topEntity x y = acc where acc = register 3 (acc + x * y) Or, alternatively: topEntity x y = acc where acc = register (3 :: Signed 8) = vmap fst sorted <: (snd (vlast sorted)) where lefts = vhead xs :> vmap snd (vinit sorted) rights = vtail xs sorted = vzipWith compareSwapL lefts rights -- Compare and swap compareSwapL a b = if a < b then (a,b) else (b,a) Will not terminate because vzipWithis too strict in its second argument: >>> sortV (4 :> 1 :> 2 :> 3 :> Nil)<*** Exception: <<loop>> In this case, adding lazyVon vzipWiths second argument: sortVL xs = vmap fst sorted <: (snd (vlast sorted)) where lefts = vhead xs :> vmap snd (vinit sorted) rights = vtail xs sorted = vzipWith compareSwapL (lazyV lefts) rights Results in a successful computation: >>> sortVL (4 :> 1 :> 2 :> 3 :> Nil)<1,2,3,4> Unsupported Haskell features Here is a list of Haskell features which the CλaSH compiler cannot synthesize to VHDL VL xs = vmap fst sorted <: (snd (vlast sorted)) where lefts = vhead xs :> vmap snd (vinit sorted) rights = vtail xs sorted = vzipWith compareSwapL (lazyV.
http://hackage.haskell.org/package/clash-prelude-0.5/docs/CLaSH-Tutorial.html
CC-MAIN-2018-05
refinedweb
1,678
50.4
List as a table cellJGagnon Sep 27, 2013 3:58 PM I have a UI screen that needs to represent a "dashboard" of sorts that will display various collections of related information. Part of the requirements is that all relevant information be visible. For example, there will be a table of all users of a system and for each user the table must also show a listing of all "roles" assigned to that user. The optimal solution would be to have a table, where each row represents a user, with "sub rows" or a list of roles. The list of roles, of course, will vary by user. To provide a visual example (although the inner list below should not have nor does it need a header): None of the information on these dashboard screens needs to be directly editable in the table views, it is intended to be read-only on these screens. Editing of the information displayed is handled elsewhere. The first and simplest idea that occurs is: why not make 2 tables (sort of master/detail), but that breaks the "everything must be visible" rule. I've tried to make a TableCell extension that implements a list (ListView) as the "widget". There is no current implementation for list view table cells in JavaFX 2.2, although there are for combo boxes and choice boxes. I've looked at the source code for the ComboBoxTableCell class to try to get an idea how it does what it does. I've come up with a version of a "ListViewTableCell" and have tried to use it. It's not quite working, but I can't figure out what I'm doing wrong. I'll include what source I can below. public class ListViewTableCell<S, T> extends TableCell<S, T> { public static <S, T> Callback<TableColumn<S, T>, TableCell<S, T>> forTableColumn(final T... items) { return forTableColumn(null, items); } public static <S, T> Callback<TableColumn<S, T>, TableCell<S, T>> forTableColumn(final StringConverter<T> converter, final T... items) { return forTableColumn(converter, FXCollections.observableArrayList(items)); } public static <S, T> Callback<TableColumn<S, T>, TableCell<S, T>> forTableColumn(final ObservableList<T> items) { return forTableColumn(null, items); } public static <S, T> Callback<TableColumn<S, T>, TableCell<S, T>> forTableColumn(final StringConverter<T> converter, final ObservableList<T> items) { return new Callback<TableColumn<S, T>, TableCell<S, T>>() { public TableCell<S, T> call(TableColumn<S, T> list) { return new ListViewTable<S, T>(converter, items); } } } private ObservableList<T> items; private ListView<T> listView; public ListViewTableCell() { this(FXCollections.<T> observableArrayList()); } public ListViewTableCell(T... items) { this(FXCollections.observableArrayList(items)); } public ListViewTableCell(ObservableList<T> items) { this.items = items; listView = new ListView<T>(); listView.setItems(items); setGraphic(listView); } public ObservableList<T> getItems() { return items; } public void updateItem(T item, boolean empty) { super.updateItem(item, empty); if (!empty) { setContentDisplay(ContentDisplay.GRAPHIC_ONLY); } else { setContentDisplay(ContentDisplay.TEXT_ONLY); } if (isEmpty()) { setText(null); setGraphic(null); } else { setText(null); setGraphic(listView); } } } In the UI code: TableView<User> userTable = new TableView<User>(); TableColumn<User, String> userCol = new TableColumn<User, String>("Name"); userCol.setCellValueFactory(new PropertyValueFactory<User, String>("userName")); userTable.getColumns().add(userCol); TableColumn<User, ObservableList<Role>> rolesCol = new TableColumn<User, ObservableList<Role>>("Roles"); rolesCol.setCellValueFactory(new PropertyValueFactory<User, ObservableList<Role>>("roleList")); // Do I need this? userTable.getColumns().add(rolesCol); // The code below is definitely a mystery to me. I'm sure that I'm butchering it. rolesCol.setCellFactory(new Callback<TableColumn<User, ObservableList<Role>>, TableCell<User, ObservableList<Role>>>() { public TableCell<User, ObservableList<Role>> call(TableColumn<User, ObservableList<Role>> col) { final ListViewTableCell<User, ObservableList<Role>> cell = new ListViewTableCell<User, ObservableList<Role>>(); ListBinding<Role> binding = new ListBinding<Role>() { { super.bind(cell.tableRowProperty()); } protected ObservableList<Role> computeValue() { return FXCollections.observableArrayList(); } }; cell.itemProperty().bind(binding); return cell; } }); The "user" class: public class User { private StringProperty userName = new SimpleStringProperty(); private ObjectProperty<ObservableList<Role>> roleList = new SimpleObjectProperty<ObservableList<Role>>(); public StringProperty userNameProperty() { return userName; } public ObjectProperty<ObservableList<Role>> roleListProperty() { return roleList; } // other non-relevant code omitted } I've extended the TableCell once or twice, but it has been for simpler situations. This is the first time I tried to make a cell that would be represented as a list of items that are intended to be bound to a list property of the object type associated with the table view. The code as written above throws exceptions left and right complaining that a bound value cannot be set. Obviously I'm doing something wrong. A prior change displayed the table and did show a list view in the column, however, the view was empty. I feel I'm close, but I don't know enough about how the guts of it works to figure out how to make it work. Any ideas and suggestions on how to make this work would be appreciated. 1. Re: List as a table cellJames_D Sep 27, 2013 4:50 PM (in response to JGagnon) First thing that occurs to me is that you might not need the full complexity of a ListView for these Table cells. E.g. I think you probably don't need to be able to select values, etc. I wonder if you can simply get away with this: rolesCol.setCellFactory(new Callback<TableColumn<User, ObservableList<Role>>, TableCell<User, ObservableList<Role>>>() { @Override public TableCell<User, ObservableList<Role>> call(TableColumn<User, ObservableList<Role>> col) { return new TableCell<User, ObservableList<Role>>() { @Override public void updateItem(ObservableList<Role> roles, boolean empty) { if (empty) { setText(null); } else { StringBuilder sb = new StringBuilder(); for (Role role : roles) { sb.append(role).append("\n"); } setText(sb.toString()); } } }; } }); This will basically just use a default table cell implementation, but set the text of the cell to the concatenation of all the rows, with new lines between them. If you want more control over the appearance, you could create a VBox, add a Label for each row to the VBox, and then set the graphic as a VBox. Call setContentDisplay(GRAPHIC_ONLY) if you use this approach. This way you could set some styles on the individual labels (give them borders or alternating background colors, or some such). If you need the ListView for the cell, it looks like you are on the right track. The problems are caused by cell.itemProperty().bind(binding); The TableView rendering mechanism will call setItem(...) on the cell to tell it what data to display. Calling set on a bound property will throw a runtime exception. The cell value factory will actually cause the calls to updateItem(...) in your cell implementation, with the list of roles being passed in as the item. The updateItem method should take care of updating the items in the underlying list view. Fixing this will take a bit of thought. One thing to notice is that your generic types don't seem to be quite right. Your ListViewTableCell<S,T> extends TableCell<S,T>: you instantiate this as new ListViewTableCell<User, ObservableList<Role>>(). So T resolves to ObservableList<Role>. But your items property (and indeed the items property for the ListView itself) is of type ObservableList<T>, which is now ObservableList<ObservableList<Role>>. So I think you want something like public class ListViewTableCell<S,T> extends TableCell<S, ObservableList<T>> { ... } and that you want to instantiate it as new ListViewTableCell<User, Role>(). If the first option doesn't work, experiment a bit more and post back if you need more help. 2. Re: List as a table cellJames_D Sep 27, 2013 4:59 PM (in response to JGagnon) I guess one other comment here. The ComboBoxTableCell on which you're basing this uses the same list of possible values for all rows in the table. The value passed to the updateItem(...) method (which is of the same type as the type of the column) is one of these values (and becomes the selected item in the combo box). Your case is slightly different: it's the list of values that varies row to row (and I think the selected item is basically irrelevant). 3. Re: List as a table cellJGagnon Sep 27, 2013 5:13 PM (in response to James_D) You are entirely correct. Yes, I was using the ComboBoxTableCell as my basis for comparison. 4. Re: List as a table cellJGagnon Sep 27, 2013 5:16 PM (in response to James_D) I think your simple idea (concatenated text with newlines or maybe the VBox) is just what I need. As I had mentioned, none of the data needs to be editable in this view and no, it doesn't need to have the ListView "look". I think this is what I've been searching for for the last couple weeks. I will let you know how it works out. Thanks. 5. Re: List as a table cellJGagnon Sep 27, 2013 5:33 PM (in response to James_D) This worked beautifully. I adopted the VBox instead of just concatenated text. So simple when all is said and done. Now, I've got to do the same for a list of checkboxes. (Remember the problem you helped me out with yesterday? The checkboxes that needed to be enabled/disabled based upon the "checked" state of another checkbox in the same table row). Another one of the views on this dashboard lists roles and a collection of "groups" and associated CRUD permissions. Each role row may have an arbitrary collection of groups and associated permissions. In this case, all of the columns are read-only. I'm guessing I can just expand on the solution you've provided and place checkboxes in a VBox and set the "checked" state of each box according to the particular permission setting. 6. Re: List as a table cellJames_D Sep 27, 2013 5:53 PM (in response to JGagnon) Yup, that should work. 7. Re: List as a table cellJGagnon Sep 27, 2013 7:14 PM (in response to James_D) OK this is all working very nicely (for both text and checkboxes). Thank you very much. I have now a different problem though. Everything on that dashboard is read-only, but the user has the ability to effect changes to the lists of information displayed on the dashboard via "editing" dialogs that allow the user to make changes once a given row has been selected on the dashboard table. A given editor only operates on one row selected in the dashboard table. For example, using the user/roles described earlier, the user selects a user on the dashboard and clicks "Edit" to open the editor. The editor allows the user to add/remove roles for the selected user row and then click OK to save changes or Cancel to discard changes. The save logic will make the appropriate calls to save the pertinent information to a database and when that completes, the user row back on the dashboard needs to be refreshed to reflect any changes in roles that were made. This is where I'm having the problem. The editor screen correctly displays the collection of roles for a given user and I can make changes and click OK to "save" them. However, once the editor dialog closes, the dashboard is not updated. I suspect that it does not "know" about the changes that were made to the roles list for that user. I know that the list change has been correctly updated to the underlying object, because if I open the editor again for that user, the changes that I had made are correctly represented. The editors make a copy of the selected object (i.e. user) and that is what is modified in the editor. Once the user clicks OK (committing their changes), I update the model object. I do this so that I can ensure that nothing is changed outside of the editor until the user clicks OK. I'm guessing there's a technique that can be used to keep the dashboard items updated when edits are committed. Keep in mind this is in the context of the concept of dashboard table rows that have some columns with multiple pieces of information. And it happens to be that collection is what's being changed. Any ideas? 8. Re: List as a table cellJGagnon Sep 27, 2013 7:40 PM (in response to JGagnon) In my refresh logic is removed the edited row and re-inserted it. The dashboard now shows the changes. For some reason it doesn't seem like the right way to do it. 9. Re: List as a table cellJGagnon Sep 30, 2013 2:21 PM (in response to James_D) Using the idea presented in earlier comments (which work nicely, except when the list content changes dynamically), is there a way to "bind" a collection of items used by the customized TableCell so that it will be aware of changes made to the list externally? For example, I have taken your suggestions above and have attached an anonymous Callback to a table column that builds a VBox with labels (or checkboxes), one for each item in the "role" list for a given user. This works great - except that if I make changes to the role list (via an editor dialog), the changes are not reflected back on the dashboard. I think that I need some way to create a binding between the backing list (the collection of roles for a selected user) and the customized TableCell. I just don't know how to do it. I've been trying different things, but nothing works so far. Does this idea make sense? Any suggestions would be appreciated. 10. Re: List as a table cellJGagnon Sep 30, 2013 2:50 PM (in response to JGagnon) Update to the "delete and add" strategy: that works only sometimes (most often not). 11. Re: List as a table cellJames_D Sep 30, 2013 3:18 PM (in response to JGagnon) (Just a quick response here; don't really have time to test the code but hopefully you will get the idea.) You can't use a binding directly: you need to update the children of the vbox, which are not exposed as a property. So you have to implement this with a listener. So I think I'd implement the cell as an inner class, and do something like this: class RoleListTableCell extends TableCell<User, ObservableList<Role>> { private final ListChangeListener<Role> changeListener ; private final VBox vbox ; RoleListTableCell() { this.vbox = new VBox(); // configure spacing, style etc if required... changeListener = new ListChangeListener<Role>() { @Override public void onChanged(Change<? extends Role> change) { rebuildList(); } }; itemProperty().addListener(new ChangeListener<ObservableList<Role>>() { @Override public void changed(Observable<? extends ObservableList<Role>> observable, ObservableList<Role> oldRoleList, ObservableList<Role> newRoleList) { if (oldRoleList != null) { oldRoleList.removeListener(changeListener); } if (newRoleList != null) { newRoleList.addListener(changeListener); } rebuildList(); } }); setContentDisplay(ContentDisplay.GRAPHIC_ONLY); setGraphic(vbox); } private void rebuildList() { ObservableList<Role> roles = getItem(); if (roles == null) { vbox.getChildren().clear(); } else { List<Label> labels = new ArrayList<Label>(); for (Role role : getItem()) { labels.add(new Label(role.toString())); } vbox.getChildren().setAll(labels); } } } Then your callback just returns a new instance of that class. As I said, that's just off the top of my head, and typed in here without testing, so there are likely typos that need fixing. The basic idea is that the list listener responds to changes in the list. The listener to the itemProperty makes sure the list listener is observing the correct list of roles for changes. I'm assuming here your Role class itself is immutable. It gets a whole lot more fun otherwise . 12. Re: List as a table cellJGagnon Sep 30, 2013 6:31 PM (in response to James_D) Any particular reason that you would make it an inner class? Just because that's the only place it needs to be used? Not sure I understand what you mean as far as the Role class being immutable? That it is final and therefore there is no chance for subclassing? Regarding your suggestion, so far so good. Several of my columns use checkboxes instead of just a label, but as I mentioned sometime earlier, the dashboard screen is read-only so having to deal with inline editing is not a concern. The biggest gripe is that I need to create a separate class for each instance where I need to do this (a total of 8 table columns spread amongst 3 tables) - and they're nearly all identical (for a given table data type), save for the "get" call to get the model object state of interest and rendering that state (as a text label or a checkbox setting) in the column. I wish I could figure out a way to genericize that in some way. 13. Re: List as a table cellJames_D Sep 30, 2013 6:56 PM (in response to JGagnon) Any particular reason that you would make it an inner class? Just because that's the only place it needs to be used? Yes. It could just as easily be a top-level class. Not sure I understand what you mean as far as the Role class being immutable? That the state of a Role object can't be changed once the Role object has been created. (Specifically, the state that's displayed in the table.) So public class Role { private final String name ; public Role(String name) { this.name = name ; } public String getName() { return name ; } @Override public String toString() { return name ; } } is fine, but public class Role { private String name ; public Role(String name) { this.name = name ; } public String getName() { return name ; } public void setName(String name) { this.name = name ; } @Override public String toString() { return name ; } } might cause problems (if the name of a role were changed via someRole.setName(...), the UI would not be notified). I wish I could figure out a way to genericize that in some way. Should be possible. Again, not tested, but something like public class ListTableCell<T extends Object> extends TableCell<User, ObservableList<T>> { private final Callback<T, String> formatter ; private final ListChangeListener<Object> changeListener ; private final VBox vbox ; public ListTableCell(Callback<T, String> formatter) { this.formatter = formatter ; // constructor as before but Role replaced by Object } private void rebuildList() { ObservableList<T> items = getItem(); if (items==null) { vbox.clear(); } else { List<Label> labels = new ArrayList<>(); for (T item : items) { labels.add(new Label(formatter.call(item)); } vbox.getChildren().setAll(labels); } } } You might have to play with the types a little to get that to work, but I think it's close to correct. Now you could do something like roleCol.setCellFactory(new Callback<TableColumn<User,ObservableList<Role>>, TableColumn<User, ObservableList<Role>>>() { @Override public TableCell<User, ObservableList<Role>> call(TableColumn<User, ObservableList<Role>> col) { return new ListTableCell<Role>(new Callback<Role, String>() { @Override public String call(Role role) { return role.toString(); } }); }); }); All that will look so much nicer in Java8 with lambda expressions... 14. Re: List as a table cellJames_D Sep 30, 2013 7:01 PM (in response to James_D) You could make the formatter a Callback<T, Node> instead, if you wanted. That way you could be more general and provide a CheckBox, or other control. rebuildList() would create a List<Node> and set them in the VBox; for your role cell instantiation you'd return new Label(role.toString()) in the inner callback.
https://community.oracle.com/message/11208622
CC-MAIN-2017-30
refinedweb
3,181
62.68
It's not the same without you Join the community to find out what other Atlassian users are discussing, debating and creating. Hi Everyone, We're on JIRA cloud and are trying out the ScriptRunner addon. Is it possible to set a custom date picker field with the current date/time in a workflow transition's post function? If so, how would I do so? By playing around with a test workflow, I think I found what you need: I have not tested this, but I'm trusting what I read on screen on Step 6 as JIRA shows: Please make sure that the value you enter is valid for custom field its datatype and context configuration for the project using this workflow. Otherwise, the transition may fail at execution time. Additionally to enter text or numbers as value, you may use the following: - You may use macro '%%CURRENT_USER%%' to insert the function caller. - You may use macro '%%CURRENT_DATETIME%%' to insert the current date and time. - You may use macro '%%ADD_CURRENT_USER%%' to append the function caller. Obsolete please use append option instead with user macro above. - For Cascading Select fields, you may either use the value of the option to be selected, no need to add the parent for childs. Or simply enter the ID of the option to be selected. Here is code for custom script post-function: import com.atlassian.jira.component.ComponentAccessor import com.atlassian.jira.event.type.EventDispatchOption import com.atlassian.jira.issue.MutableIssue import com.atlassian.jira.issue.UpdateIssueRequest import java.sql.Timestamp issue.setCustomFieldValue( ComponentAccessor.getCustomFieldManager().getCustomFieldObjectByName("customFieldname") , new Timestamp() ) //only for create transition /* ComponentAccessor.getIssueManager().updateIssue( ComponentAccessor.getJiraAuthenticationContext().getUser() , issue , UpdateIssueRequest.builder().sendMail(false).eventDispatchOption(EventDispatchOption.ISSUE_UPDATED).build() ) */ This answer is for server, not cloud But for server, you should not need the commented out bit even on create. Edit your workflow, add new post-function on the action of type "Script Post-function". Choose "Modify Issue" function. Enter the following code: issueInput.fields.customfield_10132 = new Date().format("yyyy-MM-dd") where 10132 is the ID of the custom field you want to update. I'm getting this error now. Is there any way to set this field without having to surface it on my screens? I'd rather not allow users to change this value. 2016-11-21 19:09:28,388 INFO - Serializing object into 'interface java.util.Map' 2016-11-21 19:09:28,390 INFO - PUT /rest/api/2/issue/80258 asObject Request Duration: 326ms 2016-11-21 19:09:28,403 ERROR - assert resp.status == 204 | | | | 400 false status: 400 -` Bad Request body: [errorMessages:[], errors:[customfield_15500:Field 'customfield_15500' cannot be set. It is not on the appropriate screen, or unknown.]] I use version 3.0.6 of ScriptRunner. The above mentioned script fragment issueInput.fields.customfield_11100 = new Date().format("yyyy-MM-dd") didn't throw an error but it did not have any visible result. Does it work for my companies old version of ScriptRunner? Marco - that code snippet will only work for Jira Cloud. If you have version 3.0.6 of ScriptRunner then you are using Jira Server. Randy - apologies for the extremely slow reply but yes you can. The Jira Cloud REST API docs state you can use the overrideScreenSecurity query parameter as long as your script is being executed by the ScriptRunner Add-on user. Thank you Jon! As I am still new to ScriptRunner: What would be the correct way to update a custom field in a scripted post-function when using a Jira.
https://community.atlassian.com/t5/Jira-questions/How-do-I-set-a-custom-field-with-the-current-date-time-in-a-post/qaq-p/143415
CC-MAIN-2018-09
refinedweb
591
51.65
B range from k1 and k2. 7) B-Tree grows and shrinks. . // C++ implemntation of search() and traverse() methods #include<iostream> using namespace std; // A BTree node class BTreeNode { int *keys; // An array of keys int t; // Minimum degree (defines the range for number of keys) BTreeNode **C; // An array of child pointers int n; // Current number of keys bool leaf; // Is true when node is leaf. Otherwise false public: BTreeNode(int _t, bool _leaf); // Constructor // A function to traverse all nodes in a subtree rooted with this node void traverse(); // A function to search a key in subtree rooted with this node. BTreeNode *search(int k); // returns NULL if k is not present. // Make BTree friend of this so that we can access private members of this // class in BTree functions friend class BTree; }; // A BTree class BTree { BTreeNode *root; // Pointer to root node int t; // Minimum degree public: // Constructor (Initializes tree as empty) BTree(int _t) { root = NULL; t = _t; } // function to traverse the tree void traverse() { if (root != NULL) root->traverse(); } // function to search a key in this tree BTreeNode* search(int k) { return (root == NULL)? NULL : root->search(k); } }; // Constructor for BTreeNode class BTreeNode::BTreeNode(int _t, bool _leaf) { // Copy the given minimum degree and leaf property t = _t; leaf = _leaf; // Allocate memory for maximum number of possible keys // and child pointers keys = new int[2*t-1]; C = new BTreeNode *[2*t]; // Initialize the number of keys as 0 n = 0; } // Function to traverse all nodes in a subtree rooted with this node void BTreeNode::traverse() { // There are n keys and n+1 children, travers through n keys // and first n children int i; for (i = 0; i < n; i++) { // If this is not leaf, then before printing key[i], // traverse the subtree rooted with child C[i]. if (leaf == false) C[i]->traverse(); cout << " " << keys[i]; } // Print the subtree rooted with last child if (leaf == false) C[i]->traverse(); } // Function to search key k in subtree rooted with this node BTreeNode *BTreeNode::search(int k) { // Find the first key greater than or equal to k int i = 0; while (i < n && k > keys[i]) i++; // If the found key is equal to k, return this node if (keys[i] == k) return this; // If key is not found here and this is a leaf node if (leaf == true) return NULL; // Go to the appropriate child return C[i]->search(k); }.
http://www.geeksforgeeks.org/b-tree-set-1-introduction-2/
CC-MAIN-2016-36
refinedweb
405
54.63
FrontPage List of FAQ pages List of Certification pages Content categories RecentChanges Upload files Access list Java Forums This involves the java.lang.Runtime and java.lang.Process classes. All the details can be found in this article. As of Java 6, there's a new class for handling this: JavaDoc:java.lang.ProcessBuilder The Bean Scripting Framework (BSF) library from Apache does this. It facilitates two-way integration between Java and a growing number of scripting languages. Java 6 introduces an API that does something comparable; see here for an introduction. Some discussion and an example can be found here. Marker interfaces are a mechanism of asserting a fact about a class, without adding any functionality to it. As such, they represent metadata about that class. Some examples are: Marker interfaces are a misuse of interfaces, and should be avoided. Note that all the above example are rather old, and that no new ones have been added since. Ken Arnold, who was/is behind several Java APIs at Sun, sounds off on marker interfaces here, noting that they should rarely be used. With the advent of annotations in Java 5 -which are a generic mechanism of adding metadata to a class-, marker interfaces have become obsolete, and no new ones should be defined. There are a number of libraries that can take an expression and either compile or interpret it. JEP does interpretation; a newer commercial version is also available. For even more flexibility, check out Javassist, which creates actual Java classes. This Javaranch Journal article demonstrates how to use Javassist to create classes that evaluate mathematical expressions. Starting with Java 6, there's now an official API for working with the compiler from within Java code. Starting with Java 6, the Desktop.browse(...) method can be used: JavaDoc:java.awt.Desktop If the objective is to display a web page within a Swing application, the Lobo web browser component can be used. For very simple pages (HTML 3.2, no CSS, no JavaScript? etc.) Swing contains a web browsing component. JavaFX contains a much improved web view component for HTML 5: JavaDoc:javafx.scene.web.WebView Several commercial options are available; check out TrueLicense, license4j and JLicense. This will involve using the JNI API (specification, introduction). A number of libraries exist that take some of the pain of using JNI out of it, like JACOB (outdated), Jawin (outdated) and j-Interop. Several commercial tools (like EZ Jcom and JNBridge) are also available. Since JNI only works with C/C++, but not the .Net languages, a .Net wrapper in C++ needs to be created as well if the COM/DLL object was written in one of those languages. An RFE (Request for Enhancement) has been filed years ago for letting JNI code access .Net code directly (see this entry in Sun's Java Bug Database), but it doesn't seem to go anywhere. JNA implements something similar to JNI, but without the need to create or use C headers and files (it's all Java from the developer's perspective); article Integer i = 127; Integer j = 127; System.out.println(i == j); System.out.println(i.equals(j)); Integer i1 = 128; Integer j1 = 128; System.out.println(i1 == j1); System.out.println(i1.equals(j1)); Integer i = 127; Integer j = 127; System.out.println(i == j); System.out.println(i.equals(j)); Integer i1 = 128; Integer j1 = 128; System.out.println(i1 == j1); System.out.println(i1.equals(j1)); The key to understanding this is that the JVM uses a process called "boxing" (or "auto-boxing") when converting an int (like 127) to an Integer object. This involves calling the Integer.valueOf(127) method. The JavaDoc:java.lang.Integer#valueOf(int) says: "Returns a Integer instance representing the specified int value. If a new Integer instance is not required, this method should generally be used in preference to the constructor Integer(int), as this method is likely to yield significantly better space and time performance by caching frequently requested values." What that means is that the valueOf() method has a cache of Integer objects, and if the primitive being boxed is in that range, the cached object is returned. It just so happens that 127 is in that range, but 128 is not. So i and j are the same object, while i1 and j1 are not. (Also note that you can't rely on the boundary to fall between 127 and 128 - since this is not documented, JRE implementors are free to shrink or enlarge the range of values so cached.) import java.security.ProtectionDomain; ProtectionDomain protectionDomain = getClass().getProtectionDomain(); File codeLoc = new File(protectionDomain.getCodeSource().getLocation().getFile()); import java.security.ProtectionDomain; ProtectionDomain protectionDomain = getClass().getProtectionDomain(); File codeLoc = new File(protectionDomain.getCodeSource().getLocation().getFile()); Here:
http://www.coderanch.com/how-to/java/Java-FAQ
CC-MAIN-2015-32
refinedweb
796
50.12
In find all common elements in given three sorted arrays problem, we have given three sorted arrays and you have to find common numbers that is present in all three arrays. Example Input: ar1[] = {1, 5, 10, 20, 40, 80} ar2[] = {6, 7, 20, 80, 100} ar3[] = {3, 4, 15, 20, 30, 70, 80, 120} Output: 20 80 Explanation Here we have given sorted array and we need to find the common elements in all arrays. We take three-pointers that denote the beginning index of the arrays. Now check the condition that if any element is less than with respect to others then increase that pointer value and not that pointer denotes to the next element. Moving like that till the end where we reach at the end on any array. If we hit the end of an array then stop the iteration and print the result. Now for solid understanding check the below algorithm. Algorithm for find Common Elements 1. Take three-pointers like variables i, j, and k pointing to the starting index of the three arrays respectively. 2. Check if the three numbers pointed by the variables are same or not 3. If same then print the value and increment all the three variables hence moving forward in the respective arrays. 4. Else increment the variable which is pointing to the smaller ones. 5. If an array gets traversed completely then we check for the remaining two arrays plainly. 6. If two arrays are traversed then simply print the untraversed part of the third array because we know that it is already sorted. C++ Program for find Common Elements #include <bits/stdc++.h> using namespace std; int main() { int arr1[] = {2, 8, 15 , 20, 35, 45, 100}; int arr2[] = {5, 9, 20, 45, 110}; int arr3[] = {3, 4, 15, 20, 30, 45, 80, 120}; int n1 = sizeof(arr1)/sizeof(arr1[0]); int n2 = sizeof(arr2)/sizeof(arr2[0]); int n3 = sizeof(arr3)/sizeof(arr3[0]); int i = 0 , j = 0 , k = 0; // i ,j and k are pointing at the start of 1st , 2nd and 3rd array resepectively. while(i < n1 and j < n2 and k < n3) { if(arr1[i] == arr2[j] and arr3[k] == arr1[i]) //if all elements are same then { cout << arr1[i] <<" "; i++; j++; k++; } //increase the array index variable of those which are small else if(arr1[i] < arr2[j]) i++; else if(arr2[j] < arr3[k]) j++; else k++; } return 0; } 20 45 Complexity Analysis Time Complexity – O(max(M, N, P)) where M, N, and P are sizes of the three arrays Space Complexity – O(1) because here we use some variables only.
https://www.tutorialcup.com/interview/array/find-all-common-elements-in-given-three-sorted-arrays.htm
CC-MAIN-2021-04
refinedweb
444
63.83
Complete Roguelike Tutorial, using python3+libtcod, part 1 Graphics Setting it up Download a font You will also need to download a font file. libtcod does not transform standard .ttf file fonts into something usable. Instead, we must use what's called a bitmap format. As a consequence, the font displayed is static - you will not be able to increase or decrease the size of the font on the screen without picking a new font file. You may find these font files in a variety of locations, including the bitbucket repo we downloaded within step 0. For this tutorial, it is recommended that you download arial10x10.png. You will want to make sure that this font file is found within the same folder as the script we're creating within this tutorial. For those on a posix system, please do not use `curl` or `wget` as this has caused problems in the past. Instead, browse to the font and then save-as to the proper folder. For more information on choosing a font, the author of Cogmind has done a great in-depth explanation here. tcod, which is easier to type. import libtcodpy as t. font_path = 'arial10x10.png' # this will look in the same folder as this script font_flags = tcod.FONT_TYPE_GREYSCALE | tcod.FONT_LAYOUT_TCOD # the layout may need to change with a different font file tcod.console_set_custom_font(font_path, font_flags) This is probably the most important call, initializing the window. We're specifying its size, the title (change it now if you want to), and the last parameter tells it if it should be fullscreen or not. window_title = 'Python 3 libtcod tutorial' fullscreen = False tcod.console_init_root(SCREEN_WIDTH, SCREEN_HEIGHT, window_title, fullscreen) For a real-time roguelike, you wanna limit the speed of the game (frames-per-second or FPS). If you want it to be turn-based, ignore this line. (This line will simply have no effect if your game is turn-based.) tcod.sys_set_fps(LIMIT_FPS) Now the main loop. It will keep running the logic of your game as long as the window is not closed. while not t. tcod.console_set_default_foreground(0, t character to the coordinates (1,1). Once more the first zero specifies the console, which is the screen in this case. Can you guess what that character is? No, it doesn't move yet! tcod.console_put_char(0, 1, 1, '@', tcod.BKGND_NONE) At the end of the main loop you'll always need to present the changes to the screen. This is called flushing the console and is done with the following line. tcod.console_flush() Ta-da! You're done. Run that code and give yourself a pat on the back! Common Bugs_x = SCREEN_WIDTH // 2 player_y = SCREEN_HEIGHT // 2 - The screen coordinates start at the top left corner (0, 0) and end at the bottom right corner (SCREEN_WIDTH, SCREEN_HEIGHT) - Python 3 has two types of division: "/" and "//". The former will produce a floating point number (e.g. 1.0, 2.0, 3.0) while the latter will produce an integer (e.g. 1, 2, 3). The libtcod library is explicitly expecting an integer, so we will make sure that Python produces an integer for the player's (x, y) coordinates. There are functions to check for pressed keys. When that happens, just change the coordinates accordingly. Then, print the @ at those coordinates. We'll make a separate function to handle the keys. def handle_keys(): global player_x, player_y # movement keys if tcod.console_is_key_pressed(tcod.KEY_UP): player_y = player_y - 1 elif tcod.console_is_key_pressed(tcod.KEY_DOWN): player_y = player_y + 1 elif tcod.console_is_key_pressed(tcod.KEY_LEFT): player_x = player_x - 1 elif tcod.console_is_key_pressed(tcod.KEY_RIGHT): player_x = player_x + = tcod.console_check_for_keypress() if key.vk == tcod.KEY_ENTER and key.lalt: #Alt+Enter: toggle fullscreen tcod.console_set_fullscreen(not tcod.console_is_fullscreen()) elif key.vk == t = t: tcod.console_set_default_foreground(0, tcod.white) tcod.console_put_char(0, player_x, player_y, '@', tcod.BKGND_NONE) t(). tcod.console_put_char(0, player_x, player_y, ' ', tcod.BKGND_NONE) Finishing Touches Here's a rundown of the whole code so far.
http://www.roguebasin.com/index.php?title=Complete_Roguelike_Tutorial,_using_python3%2Blibtcod,_part_1&oldid=46877
CC-MAIN-2019-22
refinedweb
655
68.97
I have a simple question...I think.. I need to go through my current directory, finding files who have their file name IN the actual file somewhere. And the list of files who do, need to be stored in another file given as the scripts argument. I just need to know how to save output to a file (that file specified on the command line). Here is what I have, but the #!/usr/bin/python import os,sys,string file_lst = os.listdir(".") print file_lst infile=open(sys.argv[1]) if infile is None: print 'Error' for file in file_lst: if os.path.isfile(file): fd=open(file,'r') str=fd.read() if string.find(str,file)>-1 print file infile=file.save(file) the last line is where it doesn't work. I can print these to screen but not save them to a file
http://forums.devshed.com/python-programming/73823-saving-files-last-post.html
CC-MAIN-2014-15
refinedweb
145
87.52
How can you make your app more visible in the Windows Store and boost your downloads? As we describe in the App Promotion Checklist, collecting user feedback can be accelerator for driving app downloads. In this blogpost I’m going to cover how to integrate collecting user feedback in your Windows Store app. Often users tend to forget about reviewing and rating the apps their using. The approach we’re taking in this post is to remind the user about this after he/she has been using the app for a while. First of all, the user will have experience with the app and will be able to provide more valuable and realistic feedback. Secondly, a user that has been using the app is likely to be a happy user; hence, the feedback will likely be more positive which can ultimately raise your visibility in the Store . A word of warning, make sure not to be too pushy in prompting the user for reviewing your app: stick to a limited number of reminders or your initially happy user could give you negative app review. The overall flow of the process is as follows, we’ll go in more detail in the code sample: - Keep track of the app usage in a counter - Increment the counter as the user uses the app - When we hit a threshold, prompt the user for feedback - Direct the user to the Windows Store to review the app - Otherwise, continue to step 1 RatingNotifier class To limit the impact on existing source code and to allow for easily reusing it, we’re encapsulating the user rating code in a separate class RatingNotifier: 1: public class RatingNotifier 2: { 3: /// <summary> 4: /// Triggers the rating reminder logic. Checks if we have surpassed the usage threshold and prompts the user for feedback if appropriate. 5: /// </summary> 6: public async static Task TriggerNotificationAsync(string title, string message, string yes, string no, string later, int interval, int maxRetry) 7: { 8: // Notification logic comes here 9: } 10: } Tracking the usage To manage the user feedback, we rely on three settings: - counter: tracks the app usage – in this example, we’re tracking the number of visits to the app’s main page. Alternatively you could track the number of times the user has launched the app - rated: boolean value that indicates if the user has already rated the app - retryCount: tracks how many times we’ve prompted the user already – after a predefined number of reminder we back off. These settings are stored in the roaming app storage via the RoamingSettings class. By doing so, the underlying sync engine ensures that the settings roam across each of the user’s devices automatically. If a user rates the app on one device, and then logs onto another device, you don’t want to remind him again. You could also use the roaming app storage for keeping track of other app settings, for example to store the user’s preferences or for keeping track of a high score, etc. Note that there is a limit to how much data you can store in roaming storage for the syncing to work – this is defined in RoamingStorageQuota. 1: // Initialize settings 2: var counter = 0; // usage counter 3: var rated = false; // indicate if rating has happened 4: var retryCount = 0; // number of times we've reminded the user 5: 6: // Use roaming app storage to sync across all devices 7: var settingsContainer = ApplicationData.Current.RoamingSettings; 8: 9: // Retrieve the current values if available 10: if (settingsContainer.Values.ContainsKey(IsRatedKey)) 11: rated = Convert.ToBoolean(settingsContainer.Values[IsRatedKey]); 12: if (settingsContainer.Values.ContainsKey(RatingRetryKey)) 13: retryCount = Convert.ToInt32(settingsContainer.Values[RatingRetryKey]); 14: if (settingsContainer.Values.ContainsKey(RatingCounterKey)) 15: counter = Convert.ToInt32(settingsContainer.Values[RatingCounterKey]); 16: 17: // Increment the usage counter 18: counter = counter + 1; 19: 20: // Store the current values in roaming app storage 21: SaveSettings(rated, counter, retryCount); As you can see, upon every call to TriggerNotificationAsync, we increment the usage counter and persist it to roaming storage. Should we ask for feedback Now that we’ve incremented the usage counter, we need to determine if we will prompt the user for reviewing the app. This is where the three settings come into play. First of all, we check if the app has already been rated, in which case we don’t prompt. Secondly, we verify if we have not yet exceeded the maximum number of reminders. Finally, we check the usage counter to check if we have passed a given interval – for example, remind the user after every 15 times he/she visited the app’s main page. 1: // Do we need to ask the user for feedback 2: if (!rated && // app was not rated 3: retryCount < maxRetry && // not yet exceeded the max number reminders (e.g. max 3 times) 4: counter >= interval * (retryCount + 1)) // surpassed the usage threshold for asking the user (e.g. every 15 times) 5: { 6: // Prompt the user 7: } Ask for feedback All conditions are met to ask the user for rating the app now. To do this, we’ll display a MessageDialog dialog asking if the user wants to review the app. If agreed, we redirect to the Windows Store app to show the rate and review page of our app. We achieve this by navigating to a ‘special’ URL, a so-called Windows Store protocol link (source code line 12). If the user decides not to rate the app we just update the reminder counter (code not shown here). 1: // Create a dialog window 2: MessageDialog md = new MessageDialog(message, title); 3: 4: // User wants to rate the app 5: md.Commands.Add(new UICommand(yes, async (s) => 6: { 7: // Store the current values in roaming app storage 8: SaveSettings(true, 0, 0); 9: 10: // Launch the app's review page in the Windows Store using protocol link 11: await Launcher.LaunchUriAsync(new Uri( 12: String.Format("ms-windows-store:REVIEW?PFN={0}", Windows.ApplicationModel.Package.Current.Id.FamilyName))); 13: })); 14: 15: 16: // Prompt the user 17: await md.ShowAsync(); Triggering the counter Now that we have all the logic to track the settings and to decide when to ask the user for feedback, all that’s left is to invoke this logic from the actual app logic. In this post we’ll be tracking the number the number of times the user visits the app’s main page. In order to do so, we invoke the TriggerNotificationAsync method from the Page_Loaded event handler of the app’s main page (e.g. GroupedItemsPage.xaml.cs). 1: private async void Page_Loaded(object sender, RoutedEventArgs e) 2: { 3: // 4: // trigger rating notification 5: // 6: await RatingNotifier.TriggerNotificationAsync( 7: Convert.ToString(Application.Current.Resources["RatingTitle"]), 8: Convert.ToString(Application.Current.Resources["RatingMessage"]), 9: Convert.ToString(Application.Current.Resources["RatingYes"]), 10: Convert.ToString(Application.Current.Resources["RatingNo"]), 11: Convert.ToString(Application.Current.Resources["RatingLater"]), 12: Convert.ToInt32(Application.Current.Resources["RatingInterval"]), 13: Convert.ToInt32(Application.Current.Resources["RatingMaximumRetries"])); 14: } Alternatively, you could invoke this logic when launching the app. In that case, you would add the call to TriggerNotificationAsync in the OnLaunched event handler of the App.xaml.cs file. Conclusion You can find the full RatingNotifier class online:. Hopefully with this code you can collect lots of valuable user ratings and feedback, which may in turn boost your visibility and downloads in the Windows Store. For more tips on promoting your app in the Store, check our ultimate checklist. Join the conversationAdd Comment
https://blogs.msdn.microsoft.com/belux/2013/08/30/getting-more-downloads-for-your-windows-store-app/
CC-MAIN-2016-40
refinedweb
1,249
51.89
Building SignalR. Introduced in ASP.NET Core 3, the architecture of Blazor leverages shareable C# code, which can run on the server and client. Developers can also expect to ship self-contained, fully running in a web browser, Blazor apps with WebAssembly. In this beginner tutorial, we’ll be building a Farm Animal Soundboard. A soundboard is an app that lets the user push a button and play the associated sound. We’ll walk through some major elements of building a Blazor experience: Razor pages, Components, and JavaScript interoperability. ⚠️ Prerequisites We recommend installing the latest ReSharper EAP or Rider EAP to get C# 9 support, as we’ll be using some C# 9 features, such as record types. Note that I’ll be using Rider for this tutorial, but this all works with ReSharper as well. To follow this blog post, try using the latest .NET 5 SDK. The .NET team has been hard at work, enhancing the Blazor experience in .NET 5, and we should take full advantage of that. This sample should work on previous versions of Blazor found in .NET Core 3.1, but I haven’t tested that. We will also need images and audio of our favorite farm animals. Luckily, the sample project already has those assets ready to utilize. We have nine animals, along with their accompanying sounds. Adventurous folks can also choose to change the theme of this demo to whatever they would like. 🐄 What We’re Building Let’s take a quick look at what we’re building and breakdown our application before we get started. We can see that we are using cards to display an animal’s image and allow our users to play an audio sound. We can think of the elements in our UI in three major parts: - C# Classes and Data - Razor View and Components - JavaScript Interoperability We’ll start from the beginning of our list, where most C# devs will be comfortable, and then work our way to the “hardest” part. 🚦 Getting Started In Visual Studio and ReSharper, use the Blazor App template, and then pick Blazor Server App. When using Rider, create a new Blazor Server App (under ASP.NET Core Web App). We can call the solution Farm. Once we have our solution, we can run the project to see that everything is working. Let’s start modifying the Blazor template. 📚 The C# Classes And Data We can store static data in C# classes. Taking this additional step will ensure that our Razor views stay compact and readable. Since we’ll be dealing with Animals, let’s create a static class that will store each new addition to our farm. Under the Data folder, create a C# file named Animals. Add the following C# code: public static class Animals { public static IEnumerable<AnimalInfo> All => new[] { new AnimalInfo("Cat", "The barn yard cat is a staple of many farms."), new AnimalInfo("Chicken", "Providing fresh eggs and constant clucking."), new AnimalInfo("Cow", "Cow's are the source of milk and beef."), new AnimalInfo("Dog", "Every farmer needs a trusty dog to keep watch."), new AnimalInfo("Donkey", "The trusty animal can make hard labor easier."), new AnimalInfo("Horse", "Help farmers cover long distances faster. YeeHaw!"), new AnimalInfo("Pig", "These messy animals are fun to have around."), new AnimalInfo("Rooster", "Helping farmers wake up early everywhere."), new AnimalInfo("Sheep", "A great source of wool for those cold winters.") }; public sealed record AnimalInfo(string Name, string Description) { public string ImageUrl => $"/img/{Name.ToLowerInvariant()}.png"; public string WavUrl => $"/audio/{Name.ToLowerInvariant()}.wav"; } } The code uses the new C# 9 record type to store information about our farm animals. We also have two helper properties that will produce an ImageUrl and WavUrl. Our images and audio paths are stored conventionally and use the Name property to resolve each resource’s complete location. Let’s move onto something more interesting, the Razor implementation. 🪒 Razor View and Components Blazor utilizes Razor as its rendering engine. For .NET developers coming from ASP.NET MVC or Razor pages, this syntax will be familiar. In our Index.razor file, we’ll be building our animal grid. We’ll preemptively design our Index page, thinking about how we may want to instantiate each card. @page "/" @using Farm.Data <h1> <i class="oi oi-home" aria-</i> Old McKhalid's Farm Animals </h1> <div class="container-fluid"> <div class="row equal"> @foreach (var animal in Animals.All) { <Animal Name="@animal.Name" ImageUrl="@animal.ImageUrl" WavUrl="@animal.WavUrl"> @animal.Description </Animal> } </div> </div> We first notice the conciseness of our Razor view. There are 21 lines in total, and six of those lines are a formatting choice around the Animal component. We are also referencing the namespace containing our C# data records along with the static class and its collection Animals.All. The core of the soundboard lies in our reuseable Animal component. We can pass our properties into parameter placeholders for Name, ImageUrl, WavUrl. Along with Parameters, we also allow for child content within the Animal tag. To create the Animal component, let’s add a new Blazor Component named Animal.razor under the Shared directory. Let’s look at the entire implementation of our Animal component, then break down the essential parts. @inject IJSRuntime Js @implements IDisposable <div class="col-3 d-flex pb-3"> <div class="card" style="width: 18rem;"> <img class="card-img-top" src="@ImageUrl" alt="Card image cap"> <div class="card-body"> <h5 class="card-title">@Name</h5> <p class="card-text"> @ChildContent </p> > } <audio @ <source src="@WavUrl" type="audio/wav"> Your browser does not support the audio element. </audio> </div> </div> </div> @code { bool IsPlaying { get; set; } [Parameter] public string Name { get; set; } [Parameter] public string ImageUrl { get; set; } [Parameter] public string WavUrl { get; set; } [Parameter] public RenderFragment ChildContent { get; set; } private DotNetObjectReference<Animal> animal; private ElementReference Audio { get; set; } private async Task PlayAudio() { await Js.InvokeVoidAsync("playAudio", Audio); IsPlaying = true; } private async Task StopAudio() { await Js.InvokeVoidAsync("stopAudio", Audio); IsPlaying = false; } [JSInvokable] public async Task OnEnd() { IsPlaying = false; StateHasChanged(); } protected override async Task OnAfterRenderAsync(bool firstRender) { if (firstRender) { animal = DotNetObjectReference.Create(this); await Js.InvokeVoidAsync("initAudio", Audio, animal); } await base.OnAfterRenderAsync(firstRender); } public void Dispose() { animal?.Dispose(); } } Razor components are a hybrid of HTML, Razor, and C#. For folks coming from the front-end development world, this should be reminiscent of Vue and React development. At the top of our Animal file, we are injecting an IJSRuntime dependency, which will allow us to interact with client-side JavaScript and DOM elements. We’ll see the Js variable used later in our @code block. Moving through the HTML, we can see the use of the @ symbol. Throughout the markup, we are placing our parameters, allowing Blazor to render their values. In the middle of our HTML block, we see a Razor if/else block. Blazor will perform state management as our IsPlaying property changes values, switching which HTML element our client renders accordingly. Finally, we have our audio HTML tag, which we decorate with the @ref attribute. The @ref keyword allows Blazor to hold a reference to any DOM element and pass it to our JavaScript implementations. The @code block is likely the most unfamiliar part of the Razor file to new Blazor developers. We can think of the @code section as our class definition, which is a reminder that each .razor file is also a C# class. We define private members in the code block, the parameters we saw earlier in our Index.razor file, and interactivity methods. The ParameterAttribute allows the properties they decorate to be assigned values by the component consumer. This attribute is critical for anyone building reusable components. Other notable “Blazorisms” include the classes RenderFragment and ElementReference. The RenderFragment type allows us to accept child content when using our component. The markup located within the Animal tag is considered the child content. <Animal Name="@animal.Name" ImageUrl="@animal.ImageUrl" WavUrl="@animal.WavUrl"> THIS IS CHILD CONTENT! </Animal> We can use the @ref attribute with the ElementReference type. The type allows us to hold a DOM reference to an HTML element; we’ll use it to pass our audio tag to our JavaScript to play and stop our animals’ sound. <audio @ <source src="@WavUrl" type="audio/wav"> Your browser does not support the audio element. </audio> In our case, the @ref attribute maps directly to our private Audio property. ElementReference Audio { get; set; } Let’s look at our PlayAudio and StopAudio methods. We bind the methods to our component’s button elements utilizing the @onclick binding. The Razor binding should not be confused with HTML’s onclick attribute. > } Let’s take a look at the methods themselves and the utilization of IJSRuntime. These methods interact with our audio DOM element, so we utilize the Js property to invoke our JavaScript functions. These methods are also in charge of managing the state of IsPlaying, toggling the value from true and false. private async Task PlayAudio() { await Js.InvokeVoidAsync("playAudio", Audio); IsPlaying = true; } private async Task StopAudio() { await Js.InvokeVoidAsync("stopAudio", Audio); IsPlaying = false; } Another essential attribute when dealing with JavaScript interoperability is the JsInvokeAttribute. The feature allows our client-side JavaScript to call a .NET method using SignalR. We create this bridge using the DotNetObjectReference class. We need to Create the reference in our OnAfterRenderAsync method because our component is accessible. protected override async Task OnAfterRenderAsync(bool firstRender) { if (firstRender) { animal = DotNetObjectReference.Create(this); await Js.InvokeVoidAsync("initAudio", Audio, animal); } await base.OnAfterRenderAsync(firstRender); } Looking at this component, we can see the significant elements of what it takes to build a reusable Razor component. From injected services, Blazor specific types, and JavaScript interoperability calls. Let’s look at the JavaScript that our component will be calling. 😱 Did Someone Say JavaScript?! Choosing Blazor as a front-end framework means writing less JavaScript, but it doesn’t mean that we’ll be writing no JavaScript. Blazor’s interoperability with JavaScript is a strength, and we should embrace the fact that we’ll be writing some script to make our UI experience’s function. For folks looking to avoid JavaScript altogether, I’m sorry to say that it’s likely not possible. Luckily, in the context of this demo, the JavaScript is very minimal. Let’s create a new JavaScript file at /wwwroot/js/site.js and paste the following functions into the file. function initAudio(element, reference){ element.addEventListener("ended", async e => { await reference.invokeMethodAsync("OnEnd"); }); } function playAudio(element) { stopAudio(element); element.play(); } function stopAudio(element) { element.pause(); element.currentTime = 0; } Phew! That was painless. As we can see in the JavaScript, we are interacting with our reference elements found in our component. This is the real magic of Blazor, allowing for seamless server and client interactions with very little code. Next, we’ll need to reference this script in our _Host.cshtml file, located in the Pages directory. Right about the reference to the blazor.server.js, we can add our new script file. <script src="/js/site.js"></script> <script src="_framework/blazor.server.js"></script> We must reference our file before the Blazor script, as our script needs to be loaded to have correctly functioning Animal components. Running Our App All the pieces are now in place, and we should be able to run our farm soundboard. Let me be the first to say the obvious, “the cow says moo”. We’ll notice the play button switching state as we play audio and when the audio reaches its end. We did it! We have a functioning Blazor farm soundboard. Conclusion Blazor delivers on the promise of interactive web experiences, helping folks working with ASP.NET and C# bridge the front-end gap. It’s not as scary for folks new to Blazor as it first looks, and I hope this post convinces you to try it out. The Blazor documentation does a great job explaining the fundamentals. I also found Ed Charbenau’s Blazor, A Beginners Guide a great starting point for anyone interested in the topic. Understanding the boundaries between .NET and the front-end will be the most unclear for folks. Understanding the framework provided by Blazor helps breakdown the elements of a Blazor app and helps keep the project from turning into an overwhelming task. Anyone interested in seeing the final version of this project can fork it on GitHub. I hope you enjoyed this post, and please leave a comment below if you enjoyed it.
https://blog.jetbrains.com/dotnet/2020/10/22/building-a-blazor-farm-animal-soundboard/
CC-MAIN-2022-05
refinedweb
2,081
50.23
navigate to NEAREST goal pos (if goal pos is blocked) Hello all, I am working on a project where my robot needs to navigate to the goal position (and it is easy and working fine). But sometimes the goal position may be blocked and there is a possibilty that the goal planners fail hence resulting in no movement at all. In this case i want my robot to move to the nearest point of the goal position (this can be done by taking a radius around the goal position and assign it a new position and then navigate again but it might take alot of time). Please help me that how can i make my algorithms more robust to implement these changes. (using amcl) Below is my code for navigation to the goal position. Please share your thoughts: import rospy from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal import actionlib from actionlib_msgs.msg import * from geometry_msgs.msg import Pose, Point, Quaternion class GoToPose(): def __init__(self): self.goal_sent = False # What to do if shut down (e.g. Ctrl-C or failure) rospy.on_shutdown(self.shutdown) # Tell the action client that we want to spin a thread by default self.move_base = actionlib.SimpleActionClient("move_base", MoveBaseAction) rospy.loginfo("Wait for the action server to come up") # Allow up to 5 seconds for the action server to come up self.move_base.wait_for_server(rospy.Duration(5)) def goto(self, pos, quat): # Send a goal self.goal_sent = True goal = MoveBaseGoal() goal.target_pose.header.frame_id = 'map' goal.target_pose.header.stamp = rospy.Time.now() goal.target_pose.pose = Pose(Point(pos['x'], pos['y'], 0.000), Quaternion(quat['r1'], quat['r2'], quat['r3'], quat['r4'])) # Start moving self.move_base.send_goal(goal) # Allow TurtleBot up to 60 seconds to complete task success = self.move_base.wait_for_result(rospy.Duration(60)) state = self.move_base.get_state() result = False if success and state == GoalStatus.SUCCEEDED: # We made it! result = True else: self.move_base.cancel_goal() self.goal_sent = False return result def shutdown(self): if self.goal_sent: self.move_base.cancel_goal() rospy.loginfo("Stop") rospy.sleep(1) if __name__ == '__main__': try: rospy.init_node('nav_test', anonymous=False) navigator = GoToPose() # Customize the following values so they are appropriate for your location position = {'x': 13.3, 'y' : 3.9} quaternion = {'r1' : 0.000, 'r2' : 0.000, 'r3' : 0.000, 'r4' : 1.000} rospy.loginfo("Go to (%s, %s) pose", position['x'], position['y']) success = navigator.goto(position, quaternion) if success: rospy.loginfo("Hooray, reached the desired pose") else: rospy.loginfo("The base failed to reach the desired pose") # Sleep to give the last log messages time to be sent rospy.sleep(1) except rospy.ROSInterruptException: rospy.loginfo("Ctrl-C caught. Quitting") #### code by mark sulliman Which global planner are you using ? The carrot_planner does it but your robot only move straight forward. There is also the navfn which accept a tolerance on a goal point. You can set your tolerence param to be high, that should solve your problem. Thanks for your help. Can you post it as an answer so i can accept it and close it. @Delb Thanks for your help too @Choco93 You can answer yourself to tell exactly what solved your problem/which solution you used, and then accept it (and don't close the question, only accept it). Glad your problem is solved !
https://answers.ros.org/question/304224/navigate-to-nearest-goal-pos-if-goal-pos-is-blocked/
CC-MAIN-2019-26
refinedweb
546
61.22
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project. Hi --- Paolo Carlini <pcarlini@suse.de> wrote: > Ben wrote: > > >As far as I know gcc's STL default > >allocator uses memory pool to boost > >its performance. > > > Actually, this is the case only of GCC 3.3.x, not > GCC 3.4.x and 4.0.x. > >From what I see this *is* the case with 3.4.x ( try the sample code below), I didn't try it with 4.0.x though > > Is there some way to force shrinking > >of STL's memory pool and returning memory to the > >system ? Sometimes your program needs a large > amount > >of STL containers only for short period and > bloating > >the process forewer is very annoying. > > > > > A good question. I don't have a clear cut answer. I > can tell you that > there are very good reasons *not* to return memory, > i.e., you don't > really know when it's *really* safe, until the end > of the process. We > battled with that quite a few times. Also, assuming > your OS uses virtual > memory, bloating the *virtual* memory usage doesn't > seem such a *big* > problem, frankly: basically, after a while, the > unused memory pages get > swapped out and never return to physical memory. > Which kind of > difficulties are you experiencing, exactly? First of all when other processes need the memory, OS (it is Linux in my case) should swap unused pages of bloated process and this takes time, delaying other processes. Then again when this process with big STL pool needs to allocate ( for short time ) large amount of containers, OS will spend long time swapping in pages of pool, because it has no idea that most of them contain irrelevant garbage. So it is more than desirable to give process some control over STL pool. By the way regular malloc in glibc also has similar caching policy and there is undocumented function malloc_trim to force it to return memory to the system. > > Paolo. > regards, Ben ========================================== You can wath memory usage of this test at each stage and then continue typing Ctrl-C #include <iostream> #include <cstdlib> #include <signal.h> #include <vector> #include <unistd.h> using namespace std ; void sh(int) { return ; } int main(int argc, char **argv) { signal(SIGINT, sh); if( argc != 3 ) { cerr << "Usage: " << argv[0] << " num_of_vectors vector_size_(in ints)\n"; return 1; } int count = atoi(argv[1]); int sz = atoi(argv[2]); cout << "started\n"; pause(); vector<vector<int > > vp(count); for( unsigned i = 0; i < vp.size() ; ++i) { vp[i].resize(sz); } cout << "allocated\n"; pause(); vp.clear(); cout << "freed\n"; pause(); return 0; } __________________________________ Yahoo! Mail Stay connected, organized, and protected. Take the tour:
http://gcc.gnu.org/ml/libstdc++/2005-05/msg00435.html
crawl-003
refinedweb
454
66.64
Asked by: 'ExtensionAttribute' is ambiguous in the namespace 'System.Runtime.CompilerServices' Question hi, Im using .net framework 2.0. recently i added the third party dll LINQBRIDGE.DLL in my project for implementing linq operations. now when i build the project im getting the below error: error bc30560 : 'ExtensionAttribute' is ambiguous in the namespace 'System.Runtime.CompilerServices'. i think the other projects are not compatible with this third party dll. please let me know how can i resolve this error when i add linqbridge.dll - Moved by Zhanglong WuMicrosoft contingent staff Monday, July 31, 2017 2:50 AM 3rd-party dll related. All replies Does the error message say which assemblies define ExtensionAttribute? If one of those assembles is built from a project of yours, then delete the definition of class ExtensionAttribute from that project and instead add a reference to LINQBRIDGE.DLL. Perhaps add a TypeForwardedToAttribute, as well. (In C#, one can sometimes solve ambiguity errors by giving one of the assemblies an extern alias. In the command-line C# compiler, that would be the /reference:alias=filename syntax. However, you seem to be using Visual Basic, and it looks like the /reference option of the Visual Basic compiler does not support anything similar.) Hi umamaheshwaran ganesan, Based on your description and related error message, it seems that 3rd-party dll file cause the issue, I would suggest that you could connect the 3rd-party DLL author for suitable support. Thanks for your understanding. Best regards, Co.
https://social.microsoft.com/Forums/en-US/bee68ebd-569b-4791-941e-f60336bc5e14/extensionattribute-is-ambiguous-in-the-namespace-systemruntimecompilerservices?forum=Offtopic
CC-MAIN-2020-45
refinedweb
248
58.38
I'm a bit new to C, and I supposed to code a program that prompts for a character, and then will echo the same character, ascii and hex equiv. of that character. It is supposed to repeat the process, prompting again, and will keep going until the user inputs a ! character. I can't figure out why the ! isant killing the program. all help is much appreciated, this is my first post here, and look forward to being an active, responseable member. Thanks! my code: Code:#include <stdio.h> #define FLAG '!' int instruct(); int getval(); int displayval(); int repeat(); int main() { char car; instruct(); car = getval(); displayval(); while (car != FLAG) { repeat(); } return (0); } int instruct() { printf("Please enter a (character) value: \n"); return (0); } int getval() { char car; car = getc(stdin); fpurge(stdin); return (0); } int displayval() { char car; printf("Character: %c\t ascii: %u\t hex: %#X\n", car, car, car); return (0); } int repeat() { instruct(); getval(); displayval(); return (0); }
https://cboard.cprogramming.com/c-programming/71398-help-trying-kill-repeat-function.html
CC-MAIN-2017-34
refinedweb
162
73.07
Comment on Tutorial - Update cell data in an Excel file using OLEDB in VB.net By Issac Comment Added by : nim Comment Added at : 2011-03-23 00:28:10 Comment on Tutorial : Update cell data in an Excel file using OLEDB in VB.net By Issac how to append data to existing excel sheets without loosing any data using jx code of sending sms from java code... View Tutorial By: Shekh at 2014-11-27 14:18:25 2. i found this tutorial very useful... i need to kno View Tutorial By: Anonymous at 2009-04-14 21:09:20 3. i know this example this is in mg.hill but i want View Tutorial By: srikanta at 2009-11-18 00:06:57 4. thanksssssssssssss View Tutorial By: gajender sharma at 2015-02-02 09:15:05 5. public class zcxzcStack { //push fun View Tutorial By: eissen at 2014-09-22 13:32:20 6. how to display images in j2me?? thank you :) View Tutorial By: bill at 2011-06-23 23:59:41 7. Hi Chriz, Yes it should work on any site. B View Tutorial By: William at 2010-07-10 20:54:24 8. Sir , I am doing BCS . I want to know more about View Tutorial By: Sachin deshmukh at 2010-08-04 02:24:59 9. Thanks! Please point me to a site which di View Tutorial By: TERRY DEGLOW at 2012-01-29 16:04:41 10. Design program with given algorithm--- View Tutorial By: siddhant at 2014-10-11 13:19:25
https://java-samples.com/showcomment.php?commentid=35975
CC-MAIN-2022-21
refinedweb
261
76.32
If you’ve ever built a web app, there’s a good chance you’ve built a tabbed document interface at one point or another. Tabs allow you to break up complex interfaces into manageable subsections that a user can quickly switch between. Tabs are a common UI component and are important to understand how to implement. In this article, you will learn how to create a reusable tab container component that you can use by itself or with your existing components. Before you begin this guide, you’ll need the following: This tutorial was tested on Node.js version 10.20.1 and npm version 6.14.4. In this step, you’ll create a new project using Create React App. You will then delete the sample project and related files that are installed when you bootstrap the project. To start, make a new project. In your terminal, run the following script to install a fresh project using create-react-app: - npx create-react-app react-tabs-component After the project is finished, change into the directory: - cd react-tabs-component In a new terminal tab or window, start the project using the Create React App start script. The browser will auto-refresh on changes, so leave this script running while you work: - npm start This will start a locally running server. If the project did not open in a browser window, you can open it by visiting. If you are running this from a remote server, the address will be. Your browser will load with a template React application included as part of Create React. You will see a file div tags and an h1. This will give you a valid page that returns an h1 that displays Tabs Demo. The final code will look like this: import React from 'react'; import './App.css'; function App() { return ( <div> <h1>Tabs Demo</h1> </div> ); } export default App; Save and exit the text editor. Finally, delete the logo. You won’t be using it in your application, and you should remove unused files as you work. It will save you from confusion in the long run. In the terminal window type the following command to delete the logo: - rm src/logo.svg Now that the project is set up, you can create your first component. TabsComponent In this step, you will create a new folder and the Tabs component that will render each Tab. First, create a folder in the src directory called components: - mkdir src/components Inside the components folder, create a new file called Tabs.js: - nano src/components/Tabs.js Add the following code to the new Tabs.js file: import React, { Component } from 'react'; import PropTypes from 'prop-types'; import Tab from './Tab'; These are the imports you need to create this component. This component will keep track of which tab is active, display a list of tabs, and the content for the active tab. Next, add the following code that will be used to keep track of state and display the active tab below the imports in Tabs.js: ... class Tabs extends Component { static propTypes = { children: PropTypes.instanceOf(Array).isRequired, } constructor(props) { super(props); this.state = { activeTab: this.props.children[0].props.label, }; } onClickTabItem = (tab) => { this.setState({ activeTab: tab }); } ... The initial state is added for the active tab and will start at 0 in the array of tabs you will be creating. onClickTabItem will update the app state to the current tab that is clicked by the user. Now you can add your render function to the same file: ... render() { const { onClickTabItem, props: { children, }, state: { activeTab, } } = this; return ( <div className="tabs"> <ol className="tab-list"> {children.map((child) => { const { label } = child.props; return ( <Tab activeTab={activeTab} key={label} label={label} onClick={onClickTabItem} /> ); })} </ol> <div className="tab-content"> {children.map((child) => { if (child.props.label !== activeTab) return undefined; return child.props.children; })} </div> </div> ); } } export default Tabs; This component keeps track of which tab is active, displays a list of tabs, and the content for the active tab. The Tabscomponent uses the next component you will create called Tab. TabComponent In this step, you will create the Tab component that you will use to create individual tabs. Create a new file called Tab.js inside the components folder: - nano src/components/Tab.js Add the following code to the Tab.js file: import React, { Component } from 'react'; import PropTypes from 'prop-types'; Once again, you import React from react and import PropTypes. PropTypes is a special propTypes property used to run type-checking on props in a component. Next, add the following code below the import statements: ... class Tab extends Component { static propTypes = { activeTab: PropTypes.string.isRequired, label: PropTypes.string.isRequired, onClick: PropTypes.func.isRequired, }; onClick = () => { const { label, onClick } = this.props; onClick(label); } render() { const { onClick, props: { activeTab, label, }, } = this; let className = 'tab-list-item'; if (activeTab === label) { className += ' tab-list-active'; } return ( <li className={className} onClick={onClick} > {label} </li> ); } } export default Tab; The PropTypes in this component are used to ensure that activeTab and label are a string and required. onClick is set to be a function that is also required. The Tab component displays the name of the tab and adds an additional class if the tab is active. When clicked, the component will fire a handler, onClick, that will let the Tabs component know which tab should be active. In addition to creating components, you will add CSS to give the components the appearance of tabs. Inside the App.css file, remove all the default CSS and add this code: [label react-tabs-component/src/App.css] .tab-list { border-bottom: 1px solid #ccc; padding-left: 0; } .tab-list-item { display: inline-block; list-style: none; margin-bottom: -1px; padding: 0.5rem 0.75rem; } .tab-list-active { background-color: white; border: solid #ccc; border-width: 1px 1px 0 1px; } This will make the tabs in-line and give the active tab a border to make it stand out when clicked. App.js Now that the components and associated styles are in place, update the App component to use them. First, update the imports to include the Tabs component: import React from 'react'; import Tabs from "./components/Tabs"; import "./App.css"; Next, update the code in the return statement to include the imported Tabs component: ... function App() { return ( <div> <h1>Tabs Demo</h1> <Tabs> <div label="Gator"> See ya later, <em>Alligator</em>! </div> <div label="Croc"> After 'while, <em>Crocodile</em>! </div> <div label="Sarcosuchus"> Nothing to see here, this tab is <em>extinct</em>! </div> </Tabs> </div> ); } export default App; The divs with associated labels give the tabs their content. With Tabs added to the App component, you will now have a working tabbed interface that allows you to toggle between sections: You can view this Github Repository to see the completed code. In this tutorial, you built a tab component using React to manage and update your application’s state. From here, you can learn other ways to style React components to create an even more attractive UI. You can also follow the full How To Code in React.js series on DigitalOcean to learn even more about developing with React. have any kind of esLint running it’s going to complain that this is not keyboard accessible. It will be best make them tab-able and selectable for users without a mouse if you’re going to put them out on the open web hi joshtronic, thank you for this tutorial! Question, what if the data that I will be displaying in tabs are from my API. How can I filter and dynamically pull the data? I’m new to React. Thank you joshtronic, this is a great articles and sample of code! One question about tab.js ‘Anonymous Object’ (sorry I don’t know how to name it) from tabs.js Assigning this to an javascript object without name? It is new to me and want to know more about it. Thank you very much!
https://www.digitalocean.com/community/tutorials/react-tabs-component
CC-MAIN-2022-33
refinedweb
1,336
66.44
Lots of people asked me to write an introductory article about DirectDraw programming and Spriting so that people can understand the basic concepts and start discovering the other things about DirectX from samples (MSDN and others available here). For all those that asked me the introductory article, here it is. Since we are working with a DirectX application, there is no need to use the MFC library in our program. Not that the use of MFC in a DirectX application is prohibited, but MFC has a lot of code aimed to desktop apps and not graphic intensive ones, so its better to stick on plain Windows API and STL. We will start our basic DirectDraw program by selecting the "Windows Application" option in the Visual C++ interface. At the first screen we will select the option "Simple Win32 Application" to allow Visual C++ to create a WinMain function for us. The code generated by the wizard will look like this: #include "stdafx.h" int APIENTRY WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nCmdShow) { // TODO: Place code here. return 0; } Now that we have the main function of our program, we need to create a main window for the program so that we can allow Windows OS to send messages to our application. Even if you work with a full screen DirectX application you'll still need a main window in the background, so that your program can receive the messages that the system sends to it. We will put the window initialization routine in another function of our program, this function will be called InitWindow. HWND InitWindow(int iCmdShow) { HWND hWnd; WNDCLASS wc; wc.style = CS_HREDRAW | CS_VREDRAW; wc.lpfnWndProc = WndProc; wc.cbClsExtra = 0; wc.cbWndExtra = 0; wc.hInstance = g_hInst; wc.hIcon = LoadIcon(g_hInst, IDI_APPLICATION); wc.hCursor = LoadCursor(NULL, IDC_ARROW); wc.hbrBackground = (HBRUSH )GetStockObject(BLACK_BRUSH); wc.lpszMenuName = TEXT(""); wc.lpszClassName = TEXT("Basic DD"); RegisterClass(&wc); hWnd = CreateWindowEx( WS_EX_TOPMOST, TEXT("Basic DD"), TEXT("Basic DD"), WS_POPUP, 0, 0, GetSystemMetrics(SM_CXSCREEN), GetSystemMetrics(SM_CYSCREEN), NULL, NULL, g_hInst, NULL); ShowWindow(hWnd, iCmdShow); UpdateWindow(hWnd); SetFocus(hWnd); return hWnd; } The first thing that this function does is register a window class in windows environment (this is needed for the window creation process). In the window class we need to pass some information about the window to the RegisterClass function. All this parameters are contained in WNDCLASS structure. Notice that in many places I use the variables g_hInst. This variable will have global scope, and will hold the instance handle of our application. We will need another global variable to hold the handle of our main window (that we are about to create). To create this global variables, simply declare them above the winmain definition, like this: HWND g_hMainWnd; HINSTANCE g_hInst; Don't forget that you need to fill the content of this variables at the very begging of your program so, at our winmain function, we'll add this code: g_hInst = hInstance; g_hMainWnd = InitWindow(nCmdShow); if(!g_hMainWnd) return -1; Notice that we are assigning the result of our InitWindow function to the main window global variable, because this function will return a handle to our newly created window. There's an extra information at the window creating function that we haven't discussed yet, the lpfnWndProc. In this parameter we need to assign a reference to a function that will be our main window procedure. This procedure is responsible for receiving the messages that Windows sends to our application. This function will be called by the system (not by you) every time your application receives a message (like a key pressed, a painting message, a mouse move and so on...). Here is the basic definition of our WndProc function: LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam) { switch (message) { case WM_DESTROY: PostQuitMessage(0); return 0; } // switch return DefWindowProc(hWnd, message, wParam, lParam); } // Ok, our windows application is almost set, we are only missing an important code: the message loop. In order to allow windows to send messages to our program, we need to call a function to check if our program has received any messages. If we receive this messages we need to call a function so that our WndProc can process the message. If we didn't receive any system message, we can use this "spare time" of our application to do some background processing and even do some DirectX stuff. This process is called Idle Processing. We need to insert our message loop right after the initialization of our global variables. while( TRUE ) { MSG msg; if( PeekMessage( &msg, NULL, 0, 0, PM_REMOVE ) ) { // Check for a quit message if( msg.message == WM_QUIT ) break; TranslateMessage( &msg ); DispatchMessage( &msg ); } else { ProcessIdle(); } } In our message loop, the first thing we do is check the message queue for messages to our application. This is accomplished by calling the PeekMessage function. If the function returns true, we call TranslateMessage and DispatchMessage so that the messages received by our program are processed. If we have no message, we'll call another function called ProcessIdle. This function will be created in our program and we'll use it to update the graphics of our screen. Here is a simple definition of the function: void ProcessIdle() { } Ok, our basic windows application is set. If you compile and run the application you will see an entirely black window that covers all your desktop. Now we are going to work on the initialization of the DirectDraw in our application. Before your start to modify the code, I need to present you some concepts (surfaces and page flipping). All the drawing created by DirectDraw are based on structures called surfaces. Surfaces are memory regions that contains graphics that can be used in your application. Everything we need to drawn on the screen needs to be created on a surface first. Let's assume that we are creating a space invaders game ( like the one I wrote). For this you'll probably need a graphic buffer that will hold the space ships, the UFOs, the shots. All this graphics will be stored in memory in this structures that we'll call surfaces. In fact, for DirectDraw applications, the area that displays what we are seeing on the screen is considered a surface too, and it's called the FrontBuffer. Attached to this FrontBuffer surface, we have another surface called the BackBuffer. This surface stores the information of what will be showed to the user in the next frame of our application. Lets say that the user is currently seeing an UFO on the screen at position (10,10) and the user's ship is at position (100,100). Since the objects are moving, we need to move our UFO to the position (12,10) and our ship to position (102, 10). If we draw this to the front buffer directly we can have some kind of synchronization problems (ie. the user can see the UFO move first and them the ship, but they need to move both at the same time). To solve this we draw everything we need to show to the user in the next frame in the backbuffer. When we finish it, we move all the information contained in the backbuffer to the frontbuffer. This process is called page flipping and is very similar to the process of creating cartoons (where we use lots of paper sheets to animated a drawing). What really happens in the background is that DirectDraw changes the pointer of backbuffer with the pointer of frontbuffer, so that next time the video card send the video data to the monitor it uses the backbuffered content and not the old frontbuffer. When we do a page flip, the content of the backbuffer becomes the content of the previously showed frontbuffer, and not the same content of the drawn backbuffer as you might think. Now that you have some idea of the concepts of DirectDraw, we will start coding the DirectX part of the program. The first thing you need to do is insert the #include of DirectDraw in your main source file. Just insert the line below in the top of your file. #include <ddraw.h> You need to inform the library files related to DirectDraw too. Go to the Project Menu, submenu Settings. Select the link tab and put the following lib files in the "Object/Library Modules" kernel32.lib user32.lib ddraw.lib dxguid.lib gdi32.lib Now we are going to create a new function in our program. This function will be called InitDirectDraw and it will be used to start the main DirectDraw object and create the main surfaces that we are going to use (the front and back buffer surfaces). int InitDirectDraw() { DDSURFACEDESC2 ddsd; DDSCAPS2 ddscaps; HRESULT hRet; // Create the main DirectDraw object. hRet = DirectDrawCreateEx(NULL, (VOID**)&g_pDD, IID_IDirectDraw7, NULL); if( hRet != DD_OK ) return -1; // Get exclusive mode. hRet = g_pDD->SetCooperativeLevel(g_hMainWnd, DDSCL_EXCLUSIVE | DDSCL_FULLSCREEN); if( hRet != DD_OK ) return -2; // Set the video mode to 640x480x16. hRet = g_pDD->SetDisplayMode(640, 480, 16, 0, 0); if( hRet != DD_OK ) return -3; // Prepare to create the primary surface by initializing // the fields of a DDSURFACEDESC2 structure. ZeroMemory(&ddsd, sizeof(ddsd)); ddsd.dwSize = sizeof(ddsd); ddsd.dwFlags = DDSD_CAPS | DDSD_BACKBUFFERCOUNT; ddsd.ddsCaps.dwCaps = DDSCAPS_PRIMARYSURFACE | DDSCAPS_FLIP | DDSCAPS_COMPLEX; ddsd.dwBackBufferCount = 1; // Create the primary surface. hRet = g_pDD->CreateSurface(&ddsd, &g_pDDSFront, NULL); if( hRet != DD_OK ) return -1; // Get a pointer to the back buffer. ZeroMemory(&ddscaps, sizeof(ddscaps)); ddscaps.dwCaps = DDSCAPS_BACKBUFFER; hRet = g_pDDSFront->GetAttachedSurface(&ddscaps, &g_pDDSBack); if( hRet != DD_OK ) return -1; return 0; } Notice that in this function we are using some other variables with the " g_" (global) prefix. Since we are going to use the backbuffer and front buffer reference through all our code, we are goinf to store this two surface handles in global variables. The other variable that we are storing as global is the main DirectDraw object ( g_pDD). This object will be use to create all the DirectDraw related objects. So, at the top of our code, add the following global variables. LPDIRECTDRAW7 g_pDD = NULL; // DirectDraw object LPDIRECTDRAWSURFACE7 g_pDDSFront = NULL; // DirectDraw fronbuffer surface LPDIRECTDRAWSURFACE7 g_pDDSBack = NULL; // DirectDraw backbuffer surface Now lets get back to our InitDirectDraw function. The first thing we are doing at the function is the creation of the DirectDraw object. To create this object we use the DirectDrawCreate function that is defined in the ddraw.h header. There are two important parameters in this function, the second and the third one. The second parameter passes a reference to the variable where we want to store the DirectDraw object variable (in our case the g_pDD variable). In third parameter we need to pass the version of DirectDraw object we are trying to get. This allow you to work with older version of DirectDraw even if you install the new version of the SDK. In my case I�m using the objects of DirectX 7, but using DX SDK 8.1. Notice that I�m testing the result of the function for DD_OK, that is the Ok result for all DirectDraw functions. It�s important to test every return code of the DirectDraw function calls. If we receive a value different from DD_OK we return a negative value to the function. If you have some error at this point of the program you can assume that the user probably doesn't have the correct version of DirectX installed on his computer, so you can give him some friendly message (we will see this later). The second function call is the SetCooperativeLevel. This function is used to tell DirectX how we are going to work with the display, if we are going to use full-screen mode or windowed mode and some other options. You can check the available options at the DirectX documentation. We test the result of this function as we have done in the first function call. The third function called is the SetDisplayMode. This function is responsible for selecting the resolution we are going to use with our application. In this case we are creating an 640x480 full screen. The third parameter represent the color depth that we are using. That will depend on the number of colors you want to use with your app. After starting the display, we need to create the two surfaces that we will use to draw our graphics on the screen. First we need to initialize the front buffer (the one that the user is seeing). When we want to create a surface with DirectDraw we need to initialize the DDSURFACEDESC2 structure that have some parameter for the creation of the surface. Its important to cleanup the structure first with a ZeroMemory or memset (or you can have some problems in some calls). Since we are creating the front buffer we need to fill the dwflags parameter with the value DDSD_BACKBUFFERCOUNT, so that the creation function recognizes that out frontbuffer will have an associated backbuffer. At the ddsCaps.dwCaps parameter we need to inform that we are creating the front buffer surface (or primary surface) with the DDSCAPS_PRIMARYSURFACE parameter. Since we are going to work with a flipping surface, we need to inform the DDSCAPS_FLIP and the DDSCAPS_COMPLEX parameters. After setting up the DDSURFACEDESC2 structure we need to call the CreateSurface function from our DirectDraw global object, passing as parameter the surface description structure and the global object that will hold the DirectDraw frontbuffer surface. After creating the frontbuffer surface we need to get the backbuffer associated with this frontbuffer. We can do that by calling the GetAttachedSurface of the frontbuffer surface. As a parameter we need to pass a DDSCAPS2 structure, so that the function know that we are trying to get the backbuffer. Now that our function is created, we need to call it from the main function. Here is how we are going to call it: if(InitDirectDraw() < 0) { CleanUp(); MessageBox(g_hMainWnd, "Could start DirectX engine in your computer." "Make sure you have at least version 7 of " "DirectX installed.", "Error", MB_OK | MB_ICONEXCLAMATION); return 0; } Notice that we are testing for a negative result. If we receive a negative result we tell the user that he probably didn�t installed the correct version of DirectX. We have an extra function call here, the Cleanup function. The Cleanup function will be responsible for deleting all the objects created by DirectX. All the objects are destroyed by calling the Release method of each instance. Here is the function definition: void CleanUp() { if(g_pDDSBack) g_pDDSBack->Release(); if(g_pDDSFront) g_pDDSFront->Release(); if(g_pDD) g_pDD->Release(); } Before we compile and run the code again, insert the following code to the WndProc function, at the switch statement that handles the messages. case WM_KEYDOWN: if(wParam == VK_ESCAPE) { PostQuitMessage(0); return 0; } break; With this code you'll be able to get out of the application by pressing the ESCAPE key. Now, compile and run the application and notice that you'll enter in the 640x480 fullscreen mode. Now we are going to draw some things in our backbuffer so that we can flip the surfaces and produce some animation. We are going to use a bitmap with some tiles of a race car that produce an animation. To create an sprite in DirectDraw we need to store this bitmap in another surface (that we will call tile or offscreen surface) so that we can blit (print) this surface in the backbuffer and produce the animation. We are going to create a class called cSurface to help us to manage our tile surfaces. Right click in the ClassView of Visual C++ and select the Create New Class option. As class type, select Generic Class and for the name use cSurface. Let�s start by creating the member variables of our class. The main variable will be of LPDIRECTDRAWSURFACE7 type and will hold a reference to the DirectDraw Surface object associated with our class. We are going to store the width and height of our surface too. We will have another member called m_ColorKey that I�ll explain later. Here is the definition of our member variables. protected: COLORREF m_ColorKey; UINT m_Height; UINT m_Width; LPDIRECTDRAWSURFACE7 m_pSurface; The first function we are going to insert in our class is the Create function. This function will be used to create the DirectX surface object for our bitmap. Here is the Create function code: BOOL cSurface::Create(LPDIRECTDRAW7 hDD, int nWidth, int nHeight, COLORREF dwColorKey) { DDSURFACEDESC2 ddsd; HRESULT hRet; DDCOLORKEY ddck; ZeroMemory( &ddsd, sizeof( ddsd ) ); ddsd.dwSize = sizeof( ddsd ); ddsd.dwFlags = DDSD_CAPS | DDSD_WIDTH | DDSD_HEIGHT; ddsd.ddsCaps.dwCaps = DDSCAPS_OFFSCREENPLAIN | DDSCAPS_VIDEOMEMORY; ddsd.dwWidth = nWidth; ddsd.dwHeight = nHeight; hRet = hDD->CreateSurface(&ddsd, &m_pSurface, NULL ); if( hRet != DD_OK ) { if(hRet == DDERR_OUTOFVIDEOMEMORY) { ddsd.ddsCaps.dwCaps = DDSCAPS_OFFSCREENPLAIN | DDSCAPS_SYSTEMMEMORY; hRet = hDD->CreateSurface(&ddsd, &m_pSurface, NULL ); } if( hRet != DD_OK ) { return FALSE; } } if((int)dwColorKey != -1) { ddck.dwColorSpaceLowValue = dwColorKey; ddck.dwColorSpaceHighValue = 0; m_pSurface->SetColorKey(DDCKEY_SRCBLT, &ddck); } m_ColorKey = dwColorKey; m_Width = nWidth; m_Height = nHeight; return TRUE; } Notice that the creation process used to create the tile surface is very similar to the creation process of the front buffer surface. The different is at the information assigned to the DDSURFACE2 structure. At the dwFlags parameter we inform that the dwCaps, dwWidth and dwHeight will have information that needs to be used to create the surface. In the dwCaps parameter we inform that this surface is and offscreen surface (tile surface) by using the DDSCAPS_OFFSCREENPLAIN flag. We combine with this value with the DDSCAPS_VIDEOMEMORY value, that tells the function that we are trying to create this function in video memory. At the error test we are testing if the return value of the function is DDERR_OUTOFVIDEOMEMORY so that if the user has an old video card if just a few MB of memory, we can change the DDSURFACEDESC2 parameter to DDSCAPS_SYSTEMMEMORY and try to create the surface on RAM instead of VIDEO memory. The process of blitting surfaces from SYSTEM_MEMORY to VIDEO_MEMORY is much slower then the VIDEO MEM to VIDEO MEM process but is needed in case of the user doesn�t have enough memory. At the last portion of the function we have the dwColoKey parameter test. This is used if we are working with a colorkeyed surface. A colorkeyed surface is a surface where we don't want to display a certain color. Let's say that I want to blit a spaceship in a starfield background. When I blit the spaceship I don't want to display the black background of the bitmap just the ship itself so I can associate a color key to the ship to display just the ship picture and not the background. You need to take care when you create your tile bitmaps and make sure to not use antialised backgrounds in the sprite bitmaps (lots of application allows you to removed the antialiased background so that you can have high quality sprites). Now we will create another function to load a bitmap file into the DirectX surface object. For this we are going to use some basic GDI functions. Since we are going to load this just once, this will probably not impact much on the performance of the drawing process. Here is the LoadBitmap function: BOOL cSurface::LoadBitmap(HINSTANCE hInst, UINT nRes, int nX, int nY, int nWidth, int nHeight) { HDC hdcImage; HDC hdc; BITMAP bm; DDSURFACEDESC2 ddsd; HRESULT hr; HBITMAP hbm; hbm = (HBITMAP) LoadImage(hInst, MAKEINTRESOURCE(nRes), IMAGE_BITMAP, nWidth, nHeight, 0L); if (hbm == NULL || m_pSurface == NULL) return FALSE; // Make sure this surface is restored. m_pSurface->Restore(); // Select bitmap into a memoryDC so we can use it. hdcImage = CreateCompatibleDC(NULL); if (!hdcImage) return FALSE; SelectObject(hdcImage, hbm); // Get size of the bitmap GetObject(hbm, sizeof(bm), &bm); if(nWidth == 0) nWidth = bm.bmWidth; if(nHeight == 0) nHeight = bm.bmHeight; // Get size of surface. ddsd.dwSize = sizeof(ddsd); ddsd.dwFlags = DDSD_HEIGHT | DDSD_WIDTH; m_pSurface->GetSurfaceDesc(&ddsd); if ((hr = m_pSurface->GetDC(&hdc)) == DD_OK) { StretchBlt(hdc, 0, 0, ddsd.dwWidth, ddsd.dwHeight, hdcImage, nX, nY, nWidth, nHeight, SRCCOPY); m_pSurface->ReleaseDC(hdc); } DeleteDC(hdcImage); m_srcInfo.m_hInstance = hInst; m_srcInfo.m_nResource = nRes; m_srcInfo.m_iX = nX; m_srcInfo.m_iY = nY; m_srcInfo.m_iWidth = nWidth; m_srcInfo.m_iHeight = nHeight; return TRUE; } This function is very easy to understand if you know a little bit of GDI programming, anyway I'll explain all the code. The first thing we need to do is to call the restore method of our m_Surface internal variable. This will restore the memory allocated to the surface object of DirectDraw, in case DirectDraw deallocates the memory (if this situation happens, any function call referring the m_Surface object will return DERR_SURFACELOST). After restoring the memory we create a GDI dc and load the bitmap passed as a parameter from the resource. The bitmap is them selected in the DC and blittled in the surface using StretchBlt function. Notice that I'm saving the bitmap information in a m_srcInfo structure. This structure is used when we have a surface lost problem, this way we can restore the surface with its original data. The last function we are going to present here is the Draw function that is used to Draw a portion of the surface in to another surface. In most of the cases you�ll draw the surface in the backbuffer but you can use this Draw method with any other kind of surface. BOOL cSurface::Draw(LPDIRECTDRAWSURFACE7 lpDest, int iDestX, int iDestY, int iSrcX, int iSrcY, int nWidth, int nHeight) { RECT rcRect; HRESULT hRet; if(nWidth == 0) nWidth = m_Width; if(nHeight == 0) nHeight = m_Height; rcRect.left = iSrcX; rcRect.top = iSrcY; rcRect.right = nWidth + iSrcX; rcRect.bottom = nHeight + iSrcY; while(1) { if((int)m_ColorKey < 0) { hRet = lpDest->BltFast(iDestX, iDestY, m_pSurface, &rcRect, DDBLTFAST_NOCOLORKEY); } else { hRet = lpDest->BltFast(iDestX, iDestY, m_pSurface, &rcRect, DDBLTFAST_SRCCOLORKEY); } if(hRet == DD_OK) break; if(hRet == DDERR_SURFACELOST) { Restore(); } else { if(hRet != DDERR_WASSTILLDRAWING) return FALSE; } } return TRUE; This function is extremely simple. The first thing we do is create a rect variable and fill it with the source bitmap position and size that we want to blit in the destination surface. After that, we call the BltFast method of the surface to blit the content in the destination surface. Notice that we're making a test to see if the surface have a color key or not. Blitting surfaces without colorkey is much faster than surfaces that have colorkey, so create colorkey only when needed. You can see that the drawing code is inside and infinite loop. This is created because the drawing function can return a surface lost error. If this error is returned we need to restore the surface and try to blit it again until we got the surface restored. Another important function is the Destroy function that is responsible for releasing the DirectDraw resources related to this objects. Its basically a call to the Release method of the m_Surface variable. void cSurface::Destroy() { if(m_pSurface != NULL) { m_pSurface->Release(); m_pSurface = NULL; } } In the source code you�ll find some other methods in this class but basically, for this article, you'll only need this four. Compile the code to see if you have no errors. The next step is the creation of an instance of our cSurface class so that we can blit this information on the backbuffer. To do this, we need to insert an include statement in the file that contains out WinMain function. #include "csurface.h" After including the header of our class, create a new global variable that will hold our instance. You can create it below the declaration of the other global variables. cSurface g_surfCar; After proceeding with the coding, add the bitmap resource to the object so that we can use it to blit the surface in the backbuffer. The resource is a bitmap file called bmp_bigcar_green.bmp . This bitmap is used in my new game (RaceX) that will be posted here in CP pretty soon. You can create a resource ID for the bitmap with the " IDB_GREENCAR" name. Now that we have the surface class instance declared, we need to call the create and loadbitmap method to create the DirectXobject inside the class. This code can be inserted after the call of InitDirectDraw. g_surfCar.Create(g_pDD, 1500, 280); g_surfCar.LoadBitmap(g_hInst, IDB_GREENCAR, 0, 0, 1500, 280); Before we proced, remember that you need to destroy this object in the case you create it during the code execution. For this you need a call to the Destroy method. You can put this in the CleanUp function. void CleanUp() { g_surfCar.Destroy(); if(g_pDDSBack) g_pDDSBack->Release(); if(g_pDDSFront) g_pDDSFront->Release(); if(g_pDD) g_pDD->Release(); } Now that we have created, initialized and added the destruction code of our surface class we just need to draw the picture in the backbuffer and flip the surface in the ProcessIdle function. void ProcessIdle() { HRESULT hRet; g_surfCar.Draw(g_pDDSBack, 245, 170, 0, 0, 150, 140); while( 1 ) { hRet = g_pDDSFront->Flip(NULL, 0 ); if( hRet == DD_OK ) { break; } if( hRet == DDERR_SURFACELOST ) { g_pDDSFront->Restore(); } if( hRet != DDERR_WASSTILLDRAWING ) { break; } } } This code draws the first picture of the car in the middle of the backbuffer and flips the backbuffer with the front one every time we have an idle processing call. Let�s change the code a little bit so that we can blit the animation of the car. void ProcessIdle() { HRESULT hRet; static int iX = 0, iY = 0; static iLastBlit; if(GetTickCount() - iLastBlit < 50) { return; } g_surfCar.Draw(g_pDDSBack, 245, 170, iX, iY, 150, 140); while( 1 ) { hRet = g_pDDSFront->Flip(NULL, 0 ); if( hRet == DD_OK ) { break; } if( hRet == DDERR_SURFACELOST ) { g_pDDSFront->Restore(); } if( hRet != DDERR_WASSTILLDRAWING ) { break; } } iX += 150; if(iX >= 1500) { iX = 0; iY += 140; if(iY >= 280) { iY = 0; } } iLastBlit = GetTickCount(); } We create 3 static variables. The first 2 will be used to change the position of the blitted portion of the source bitmap. This way we can create the animation of the car by going from frame 1 to frame 20. Notice that we have an extra variable called iLastBlit that holds the result of a GetTickCount function call. This is used to allow that each frame stays on the screen at least 50 miliseconds, this way the animation will go pretty smooth. You can remove this code to see what happens (on my machine the car spins too fast). This was a brief introduction on how to create a basic C++ program that uses the DirectX DirectDraw library. If you have any questions or comments just post them in! General News Question Answer Joke Rant Admin
http://www.codeproject.com/KB/directx/basicdd.aspx
crawl-002
refinedweb
4,389
62.98
Hi guys, In India our government provide some very useful mapping system known as "Bhuvan". It's so hight quality and provide 3D high quality satellite data. It's free and available for everyone. But we are little concern about OSM policies. Can anyone tell us that can it be possible to use government data to edit OSM.? Here is the Government official website.. link text asked 12 Oct '20, 19:02 Rick198 11●1●2 accept rate: 0% edited 12 Oct '20, 19:05 When you look at the terms on that site you will find they explicitly forbid usage in any other way than viewing. Thus you must not use the data for OSM without prior permission from the mentioned authorities. Bhuvan Terms of Service By downloading, installing, accessing or using the Bhuvan plug-in/website or using the Bhuvan service or accessing or using any of the content available within the Bhuvan website, you agree to be bound by the following Terms of service: Content in the Bhuvan Website Bhuvan website allows you to access and view a variety of content, including but not limited to IRS imagery, map and terrain data, geospatial vector information like administrative boundaries, soils, census data and other related information provided by Bhuvan, its licensors, and its users (the "Content"). You understand and agree to the following: Before you continue, you should read this 'Terms of Service' as they form a binding agreement between you and DOS/ISRO/NRSC regarding your use of the website and its services. answered 12 Oct '20, 19:29 TZorn 9.5k●4●45●179 accept rate: 14% Once you sign in you will be able to subscribe for any updates here Answers Answers and Comments Markdown Basics learn more about Markdown This is the support site for OpenStreetMap. Question tags: osm ×634 import ×185 data_import ×9 question asked: 12 Oct '20, 19:02 question was seen: 274 times last updated: 12 Oct '20, 19:29 how to use data in "the ESRI format" with potlatch? Issue with Importing osm-file in Nominatim Importing only items with certain Tags into QGIS How to import OSM transport layer into Oracle Locator IS it possible to import a kml file into OSM? import OSM data to PostGIS database via imposm3 How to check Nominatim planet import execution is running in background or terminated? [closed] How do i can put data on OSM? like electrical poles or electrical transformers for a whole country. Mapping French Buildings cadestre First time here? Check out the FAQ!
https://help.openstreetmap.org/questions/77050/indian-government-mapping-system-bhuvan?sort=oldest
CC-MAIN-2021-25
refinedweb
424
58.21
Welcome to Reach UI Development ♿️ Thanks for getting involved with Reach UI development! Looking for the documentation?Looking for the documentation? Getting StartedGetting Started Reach UI is built and tested with Yarn. Please follow their install instructions to get Yarn installed on your system. Then, run these commands: git clone git@github.com:reach/reach-ui.git cd reach-ui yarn install yarn build Root Repo Scripts:Root Repo Scripts: yarn build # builds all packages yarn start # starts storybook server yarn test # runs tests in all packages Running / Writing ExamplesRunning / Writing Examples First do the steps in "Getting started", then start the Storybook server: yarn start Next, put a file in packages/<component-dir>/examples/<name>.example.js and make it look like this: import React from "react"; // The name of the example, you must export it as `name` export let name = "Basic"; // The example to render, you must name it `Example` export let Example = () => <div>Cool cool cool</div>; Now you can edit the files in packages/* and storybook will automatically reload your changes. Note: If you change an internal dependency you will need to run yarn build again. For example, if working on MenuButton requires a change to Rect (an internal dependency of MenuButton), you will need to run yarn build for the changes to Rect to show up in your MenuButton example. Running / Writing TestsRunning / Writing Tests First do the steps in "Getting Started", then: yarn test Or if you want to run the tests as you edit files: yarn test --watch Often you'll want to just test the component you're working on: cd packages/<component-path> yarn test --watch Development PlansDevelopment Plans The components to be built come from the the Aria Practices Design Patterns and Widgets. Here is a table of the components and their status. 🧪 - Beta Released ReleasesReleases This is our current release process. It's not perfect, but it has almost the right balance of manual + automation for me. We might be able to put some of this in a script... # First, run the build locally and make sure there are no problems # and that all the tests pass: $ yarn build $ yarn test # Generate the changelog and copy it somewhere for later. We'll # automate this part eventually, but for now you can get the changelog # with: $ yarn changes # Then create a new version and git tag locally. Don't push yet! $ yarn ver [version] # Take a look around and make sure everything is as you'd expect. # You can inspect everything from the commit that lerna made with: $ git log -p # If something needs to be changed, you can undo the commit and # delete the tag that lerna created and try again. # If everything looks good, push to GitHub along with the new tag: $ git push origin master --follow-tags # Open up travis-ci.com/reach/reach-ui and watch the build. There will # be 2 builds, one for the push to the master branch and one for the # new tag. The tag build will run the build and all the tests and then # automatically publish to npm if everything passes. If there's a # problem, we have to figure out how to fix manually. # Paste the changelog into the release on GitHub. The release is # complete … huzzah! You need to be careful when publishing a new package because the lerna publish on Travis CI will fail for new packages. To get around this, you should publish a 0.0.0 version of the package manually ahead of time. Then the release from CI will be ok. This is really janky but AFAICT the only workaround. Stuff I'd like to improve: - Automate changelog generation and GitHub release from CI - Document how we're using GitHub PRs to generate the changelog somewhere WebsiteWebsite The website is a Gatsby app in the website directory. It automatically deploys to when the website branch is updated. ContributorsContributors This project exists thanks to our contributors and financial backers.
https://react.ctolib.com/reach-reach-ui.html
CC-MAIN-2020-05
refinedweb
664
61.77
Introduction to Insertion Sort in C Insertion sort is a sorting algorithm that helps in sorting objects of an array one by one. Insertion sort works by picking one element at a time and places it accordingly in the array. It will keep working on single elements and eventually put them in the right position, eventually ending with a sorted array. It is similar to sorting cards in hand, where we sort the cards one at a time. When the first card is sorted, we move to the next one and place it in a way where it appears sorted. First, let us have a look at the syntax and few examples. In this topic, we are going to learn about Insertion Sort in C. Syntax There is no particular syntax for writing the insertion sort but an algorithm. This algorithm can be as below to sort an array in ascending order. - Traverse from array position 0 to array position 1 in the array. - Now compare the current element of the array with its predecessor. - If a current element of the array has a lesser value than the predecessor, then you can compare the previous number and then move the elements a position ahead of the previous number. This is similar to swapping the numbers and bringing the number to the expected position. How to perform Insertion sort in C? Insertion sort functions in the following way. The figure below explains the working of the insertion sort. We have an array of 6 numbers which is not in a sorted manner. We need to sort this array using insertion sort. We first consider 85 and assume that it is sorted. We compare it with 12. 12 is smaller than 85; it will be swapped with 85 and placed in the first position. The second comparison will now be done using 85 again. 85 will be compared with 59. Again 59 is smaller than 85. These two numbers will be swapped again, and at the second position in the array, we will have 59 moving 85 to the third position. The iteration will check between the numbers 12 and 59. 12 is less than 59 and is already in the first position. Hence there will be no change in these two numbers. The next two numbers for comparison are 85 and 45. 45 is smaller than 85, and hence it will be swapped with 85. Next, it will be checked with 59. 45 is smaller than 59 as well; hence it will be swapped with 59 as well. Now 12 is smaller than 45; hence its position remains unchanged. Again the next iteration takes into consideration 85 with 72. 72 being smaller will be swapped with 85. 59 is smaller than 72; hence its position remains unchanged. Now 85 will be compared with 51. 51 will be swapped and will be compared with 72. Since 72 is bigger, it will be swapped again. 51 is smaller than 59 as well, so it will be getting swapped again. Now 51 is not smaller than 45; hence it will stay at its original position. You can now observe that the array is now sorted. All numbers are in ascending order. Example: Let us check this example now using the C program #include <math.h> #include <stdio.h> /*C function to sort an array*/ void Sort_Insertion(int array[], int n) { int m, k, p; for (m = 1; m < n; m++) { k = array[m]; p = m - 1; while (p >= 0 && array[p] > k) { array[p + 1] = array[p]; p = p - 1; } array[p + 1] = k; } } void print(int array[], int n) { int i; for (i = 0; i < n; i++) printf("%d ", array[i]); printf("\n"); } int main() { int array[] = { 17, 78, 56,32 , 46 }; int n = sizeof(array) / sizeof(array[0]); Sort_Insertion(array, n); print(array, n); return 0; } The C program above has a main function which is called at the very beginning of any program. The main() program has an array containing an array of 5 elements which are in jumbled format. It then takes the size of the array by using the sizeof() function and size of the element at the 0th position. It will be then sent to a function sort_insertion which has arguments of the array and n elements. The control then moves to this function. This function takes three variables m, k, and p. The array is traversed till the second last element in a loop. The while loop moves the pointer from 0 to p-1 position. Here the numbers are greater than k and moved to a position that is ahead of their current position. Whenever the numbers are smaller, they are swapped, and k has the value of the new number. This function runs until the array is in a sorted manner. The for loop here performs this activity. While loop is comparing and swapping the numbers. After this, the print function is called, where every element of the sorted array is getting printed. A for loop is used here, starting from the 0th position of the array till the end of the array. All elements of the array will be printed post the sort function. The output of this function will be as below. The above array is in a sorted form now. Previously all numbers were randomly placed. Now using C language, the array is sorted. Conclusion There are many sorting techniques, out of which insertion sort is considered to be one of the simplest ones. Insertion sort compares two numbers and swaps the numbers when they are not in order. It will traverse the entire array for all the numbers until all of them are placed in proper order. This algorithm considers one element at a time and works accordingly. If the element is in the correct position, it will not swap the element and move to the next element. Using C language, this logic can be applied easily by using for and while loops. Thus insertion sort is one of the simplest sorting methods which sorts all elements of an array. Recommended Articles This is a guide to Insertion Sort in C. Here we discuss How to perform Insertion sort in C along with the example and output. You may also have a look at the following articles to learn more –
https://www.educba.com/insertion-sort-in-c/?source=leftnav
CC-MAIN-2022-40
refinedweb
1,059
74.29
Tigerstripe Simple Plugin Tutorial < To: Tigerstripe_Tutorials This tutorial will walk you through creating your own Tigerstripe Plugin plug-in for your code. By writing your own plugin, you can define your own rules and conventions for the code that is generated from a Tigerstripe Model Project. In many cases, you may want to write your own plugin so that you can create code according to your own organizations standards, or you may need some HTML to document your project model. Furthermore, by creating your own plugin you can distribute it to other users within your organization to ensure that all users are using the same coding rules and conventions when generating code. As you work through this tutorial, you will generate a simple java class, based on a single entity. You will see how easy it is to modify and enhance the generated code in templates.(Other types of rule that do not use templates are covered in the next tutorial, however the basic steps below apply for all rule types.). Contents Create a new Tigerstripe Plug-in Project Tigerstripe Plugin Projects share some common behavior with standard Tigerstripe Model Projects except that a plugin project contains details of a plug-in that you can use to generate output based on a model. A Tigerstripe Plugin Project includes the following: - Properties – Used to allow specific information for use in a given Tigerstripe Project. For example, the name of a directory where certain file types should be created. - Rules – Define the actual behavior of the plug-in. - Runtime references – Specify additional files required by the plug-in at run-time (eg .jar files). - Velocity templates: Describes how output will be generated based on the artifacts contained within a particular project against which the plug-in is run. - Java source files for enhanced behavior. - Additional .jar files To create a new Tigerstripe Plugin Project: - From the File menu, select New and click Project. Select the Tigerstripe Plugin Project under the Tigerstripe heading. The New Tigerstripe Plug-in Project dialog box opens. - Enter a name for the plug-in. - For this tutorial, name it SimplePluginProject. - Click Finish to create the new plug-in. An editor displays that appears similar to the standard Tigerstripe Project Editor. - Enter a description and version number for your new plug-in in the appropriate text boxes and click Save. Note: Click Help and select Help Contents for more information about the other options available in the New Tigerstripe Plug-in Project dialog box. Create Global Plug-in Properties In many situations, you may want the user of your plug-in to be able to specify the value for a particular property when they run the plug-in. Such as: - Setting the name of a Directory - Flagging the adoption of certain optional behavior These tasks can be accomplished by creating Global Properties. To create Global Properties: - Click the Properties tab in the Plug-in Project Editor. - Select Global Properties and click Add to create a new property. - Enter a name for your property and click OK. - For this tutorial, the name is directoryName. Leave the type as String property. - Complete step 1 through step 3 to create a second property named flag. - Change the flag property type to Boolean. - Review the Global Properties for the plug-in and add any required Tool tip text or Default values. - Click Save. Create Velocity Templates A plug-in contains one or more Velocity Templates, which describe how output will be generated based on the artifacts contained within a particular project against which the plug-in is run. Tigerstripe uses an apache utility called Velocity to create the output. For more information about Velocity, refer to the [[1]]. In this step, you will create a template. - Select the templates directory on your plugin project in the Tigerstripe Explorer. - From the File menu, click New, and select Other. - You can also accomplish this step by right-clicking on the templates directory. From the shortcut menu, click New and select Other. - Select General and click File from then specify a filename for the file you wish to create. - For this tutorial, name the file simpleTemplate.vm. - Your template must be saved in the templates directory of your SimplePluginProject, and the .vm file extension is mandatory. A text editor will open. Within a template you will need to use Velocity syntax. Some brief points about Velocity are as follows: - Use ## to indicate a Velocity comment. - Use # to call Velocity directives. For example, #set. - Use $ to indicate variables. - All other text entered will appear in the output file. We will now create a template that will output a basic Java class file based on a Session Facade Artifact. Enter the following text into your text editor: ## This is simpleTemplate.vm // This file was generated using $templateName public class $artifact.Name { private boolean myBoolean = $pluginConfig.getProperty("flag") ; } Save your changes. The first line is a comment internal to the template and this text will not appear in the final output. The following line (starting //) will appear in the final output. The reference $templateName will be replaced by the name of the template when the template is rendered - in this case it will be replaced with templates/simpleTemplate.vm. (It is good practice to enter this information so you can easily determine which template your generated files utilized.) The $artifact.Name in the public class .... line will be replaced with the name of the Artifact against which this template is run. For example, in the Simple Tutorial Project this would be either Order or OrderFacade. Finally the reference $pluginConfig.getProperty("flag") is replaced with the value of the property flag from the Tigerstripe Model Project. The next step is to set when this template is executed. Define a Plug-in Rule In this step we will create an Artifact Rule. Artifact Rules are run once per artifact in the model (in fact some control of the artifacts is possible, but that is covered in a later tutorial). Now that you have a template, you are going to need to configure the rule to populate your template. To configure your rule: - Select the Rules tab in the editor for the plug-in project. - Select the Artifact Rules section, and click Add to add a new rule. - Select the optionArtifact Template Rule - For this tutorial, name your rule SimpleRule. - Click OK. - Define your rule. A set of options for defining your rule will display. Some options are self-explanantory, while others are beyond the scope of this tutorial. The ones you need to look at are: - Template - Click Browse button and select the template you created. - Output File - Select the directory to where you want the output file created. The file name must be unique for each artifact that you process. The following file format will achieve this: ${ppProp.directoryName}/${artifact.Name}.out Where ${xxx} signals the plug-in to replace text, ppProp signals the plug-in that a plug-in property should be used, and similarly for the artifact. (Details of this syntax can be found by searching for Expander in the on-line help). - ArtifactType - Select the type of artifact that the rule applies to from the drop-down list. An Artifact Rule normally only processes one type of artifact (although there is an option for all artifact types). For this tutorial, use Session Facade. - Click Save to save your plug-in. - You can have many rules in each plugin. - You can have several rules that use the same template, eg that are run for different artifact types.. Test the Plug-in Before distributing your plugin, you should perform some testing to make sure you obtain the output you want. To test your plugin in your local Environment: - Open the Plugin Project Descriptor (ts-plugin.xml) by double clicking on it in the Tigerstripe Explorer. - Click on Package and deploy this plugin within the Testing section on the Overview tab of the editor. Your plug-in is now available to all projects in your workspace! - Run your plug-in from a Tigerstripe project. - Select a Tigerstripe Project' and open the descriptor editor for that project by double clicking on tigerstripe.xml file. - Open the Plugin Settings tab and a section for the plug-in you just deployed will be present. At this stage, your plug-in is disabled. - Open your section for your plug-in and click Enable to enable your plug-in. Enabling your plug-in will also enable all of the other controls for that plug-in. You can override the defaults specified during definition of your plug-in however, don't change these values for now. - Click Save to save the Tigerstripe project. - Note: You may wish to disable all other plug-ins when testing your new plug-in. - Make sure you have the project in scope and click Generate . - Review your generated code. If you used the settings as described above, you should find a new directory in the target directory of your project. This new directory contains a file called Order.out. Note: Your default directory may contain additional files if you have more than one entity in your project. Further Testing Note: Tigerstripe Workbench does not delete files in your target directory by default; therefore you may want to delete the default directory before re-testing your plug-in. Alternatively visit the "Advanced Settings" tab of your project and enable the feature "Clear target directory before generate". Add more entities to your project, and change the value of the directoryName property. Changes to your model (artifacts) and property values are specific to the Tigerstripe Project so they will automatically be picked up when you run generate, however if you change the associated template, add new rules, or change property definitions, don't forget to re-deploy your plugin before running it again. You are now ready to package your plug-in for others to use. Distribute your Plug-in To package and distribute your plug-in to other users.: - Navigate to your plug-in editor and click on the Package up this plugin link in the Packaging section on the Overview tab. - Navigate to where you want to save your plugin and click Open. - A message box appears upon successfully packaging your plug-in. Click OK to close the message box. You can then distribute the resultant .zip file to other Tigerstripe Workbench users who can deploy your plugin by placing the .zip file in their Tigerstripe plug-ins directory. (ECLIPSE_HOME\tigerstripe\plugins) Everyone in your organization will now be able to apply the same rules to their model and you can generate consistent results for all projects. Un-deploying a plugin You can undeploy plugins in one of two ways. The first option only works if you have the original Plugin Project in your workspace - the second option is always available. If you have the Plugin Project : - Open the Plugin Project Descriptor (ts-plugin.xml) by double clicking on it in the Tigerstripe Explorer. - Click on Un-deploy this plugin within the Testing section on the Overview tab of the editor. Alternatively to un-deploy any previously deployed plug-in: - Click the Tigerstripe menu and click Deployed Generators... to view a list of deployed plug-ins in your workspace,. The Deployed Tigerstripe Plugins dialog box opens. - Right-click on the plug-in and select Un-deploy.
http://wiki.eclipse.org/index.php?title=Tigerstripe_Simple_Plugin_Tutorial&oldid=340072
CC-MAIN-2014-52
refinedweb
1,894
57.37
Java Collection, TreeMap Exercises: Get a key-value mapping associated with the greatest key and the least key in a map Java Collection, TreeMap Exercises: Exercise-8 with Solution Write a Java program to get a key-value mapping associated with the greatest key and the least key in a map. Sample Solution:- Java Code: import java.util.*; import java.util.Map.Entry; public class Example8 { public static void main(String args[]) { // Create a tree map TreeMap <String,String> tree_map1 = new TreeMap <String,String> (); // Put elements to the map tree_map1.put("C1", "Red"); tree_map1.put("C2", "Green"); tree_map1.put("C3", "Black"); tree_map1.put("C4", "White"); System.out.println("Orginal TreeMap content: " + tree_map1); System.out.println("Greatest key: " + tree_map1.firstEntry()); System.out.println("Least key: " + tree_map1.lastEntry()); } } Sample Output: Orginal TreeMap content: {C1=Red, C2=Green, C3=Black, C4=White} Greatest key: C1=Red Least key: C4=White Java Code Editor: Contribute your code and comments through Disqus.
https://www.w3resource.com/java-exercises/collection/java-collection-tree-map-exercise-8.php
CC-MAIN-2019-47
refinedweb
157
51.75
2018-04-16 13:03:25 8 Comments Hi I have one built created with one script in c# but now I want to create a button from other script in the same Form (the same window) that in the first script. Would be great to use a PlayButton_Click void from different script too. Here is an example of what i'm trying: Script 1: using System; using System.Windows.Forms; using System.Drawing; using System.IO; using System.Collections.Generic; public class HelloWorld : Form { public static Button PlayButton; public void Main() { Application.Run (); } public Form1() { PlayButton = new Button(); Controls.Add(PlayButton); } public void PlayButton_Click(object sender, EventArgs e) { MessageBox.Show ("All ok"); } } Script 2: using System; using System.Windows.Forms; using System.Drawing; using System.IO; using System.Collections.Generic; public class HelloWorld2 : Form { public static Button PlayButton2; public Helloworld2() { PlayButton2 = new Button(); Controls.Add(PlayButton2); } public void PlayButton2_Click(object sender, EventArgs e) { MessageBox.Show ("I'm in the first Form :D!"); } } Related Questions Sponsored Content 57 Answered Questions 24 Answered Questions [SOLVED] What is the best way to iterate over a dictionary? - 2008-09-26 18:20:06 - Jake Stewart - 1146585 View - 1925 Score - 24 Answer - Tags: c# dictionary loops 9 Answered Questions [SOLVED] What are the correct version numbers for C#? - 2008-10-29 17:09:40 - Jon Skeet - 285845 View - 2046 Score - 9 Answer - Tags: c# .net visual-studio .net-framework-version compiler-version 64 Answered Questions [SOLVED] How do I calculate someone's age in C#? 15 Answered Questions [SOLVED] <button> vs. <input type="button" />. Which to use? - 2009-01-22 13:14:11 - Aron Rotteveel - 546105 View - 1421 Score - 15 Answer - Tags: html button compatibility html-input 24 Answered Questions [SOLVED] Get int value from enum in C# 27 Answered Questions [SOLVED] How to create an HTML button that acts like a link? - 2010-05-25 16:39:47 - Andrew - 4360953 View - 1351 Score - 27 Answer - Tags: html button hyperlink anchor htmlbutton 21 Answered Questions [SOLVED] Cast int to enum in C# 40 Answered Questions [SOLVED] Create Excel (.XLS and .XLSX) file from C# 296 Answered Questions [SOLVED] Hidden Features of C#? - 2008-08-12 16:32:24 - Serhat Ozgel - 611144 View - 1476 Score - 296 Answer - Tags: c# hidden-features
https://tutel.me/c/programming/questions/49858038/create+a+button+from+other+script+in+c
CC-MAIN-2018-30
refinedweb
376
58.99
Red Hat Bugzilla – Bug 41431 Serious bug when comparing doubles in gcc Last modified: 2007-04-18 12:33:20 EDT From Bugzilla Helper: User-Agent: Mozilla/4.72 [en] (X11; U; Linux 2.2.12-20 i686; Nav) Description of problem: When I compile the simple C-program below under RedHat 6.2 (egcs-2.91.66) or RedHat 7.0 (gcc-2.96) I get the following problem: > gcc -O3 testc.c -lm -o testc > ./testc 1: x1!=x2 1: x1!=x2 -0.227202 -0.227202 0.000000e+00 If a comparison of variables is made, as the first operation, the values are not loaded into the varibles before comparison. If I compile without the -O3 flag, the problem disappears. On an alpha-architecture (gcc 2.8.1) the problem is not present. test program: #include <stdio.h> #include <math.h> #include <stdlib.h> int main(){ double x1; double x2; x1 = cos(1.7); x2 = cos(1.7); if(x1!=x2) printf("1: x1!=x2\n"); if(x1!=x2) printf("2: x1!=x2\n"); if(x1!=x2) printf("3: x1!=x2\n"); x1 = cos(1.8); x2 = cos(1.8); if(x1!=x2) printf("1: x1!=x2 %f %f %e\n", x1, x2, x1-x2); if(x1!=x2) printf("2: x1!=x2 %f %f %e\n", x1, x2, x1-x2); if(x1!=x2) printf("3: x1!=x2 %f %f %e\n", x1, x2, x1-x2); exit(0); } How reproducible: Always Steps to Reproduce: 1. gcc -O3 testc.c -lm -o testc 2. ./testc 3. Actual Results: When compiled with optimization O1, O2, O3, ... the test program fails to compare two double the first time they are used. When compiled with O0, the program works without problem. Additional info: Created attachment 19070 [details] test program for serious bug in comparing doubles This is not a bug. The Intel IA32 FPU architecture is so braindamaged that if floating point code generated for this architecture is to run sufficiently fast, there is no other way. You can use the -ffloat-store switch to force all floating point variables into memory, where at the cost of slowing things down these will compare equal. The issue is that IA32 computes all floating point stuff in IEEE 854 extended double precision (IA32 long double) and the values are rounded on storing into memory only. On sane architectures, there are separate instructions to do single precision, double precision and on some arches either extended double precision or quad precision arithmetics, so rounding is done to the right precision after every single operation.
https://bugzilla.redhat.com/show_bug.cgi?id=41431
CC-MAIN-2016-40
refinedweb
427
70.5
Project ideasQR for this page From Fiji This page contains a loose list of ideas for cool/useful projects that have some relation to Fiji Contents - 1 Visualization - 2 Image processing plugins - 3 Scripting - 3.1 Add JMathLib (Matlab clone) support - 3.2 A Javascript Recorder - 3.3 Code templates in the Script Editor - 3.4 Make Script Editor rename a Java class automatically on Save As... - 3.5 Add a Bookmark function to the Script Editor - 3.6 Add support for Haskell (via Jaskell) and Tcl (via Jacl) - 3.7 Add Edit>Find in files... - 3.8 Add a "REPL" (Read-Eval-Print-Loop) to the Script Editor - 3.9 Detect loops after macro recording - 4 Fiji development environment/infrastructure - 5 User interface improvements - 6 Miscellaneous - 7 Other resources Visualization Plugin for Mixed-File-Format MultiVirtualHyperStack viewing window The idea is to be able to display multiple virtual hyperstack-type data sets in a single multi-color composite window. I've already arranged this using multiple QuickTime Movies or AVIs. But the ideal will be to allow overlay from mixed data of any of the many BioFormats-supported file types. The user would be able to overlay, realign and fit the separate channels in 4 dimensions, and then synchronously browse the composite view in ZT dimensions. There should be some demand for this functionality, especially with the very large data repositories being made by many labs in different file formats. Enabling automatic spatial calibration from metadata would allow measurements analysis of all channels with regions of interest addressing all of the overlaid data. The project would consist of - Writing a class extending VirtualStack.java class for each type of input data file series. Currently, these exist for FileSeriesFromList and QTVirtualStack. - Create a MultiVirtualHyperStack.java class that can organize multiple VirtualStack types into a single VirtualHyperStack displayed via a single ImagePlus and StackWindow. The getProcessor() method in this class must be able to sort out the file coordinates of any channel/slice/frame requested from the mixed-format virtual stack and call getProcessor() from each of the specialized VirtualStack classes for each format. - Create a control panel that allows adjustments of XYZT position for any single VirtualHyperStack that is a component of the mixed overlay window. Goal: Plugin for Mixed-File-Format MultiVirtualHyperStack viewing window. Language: Java. Contact: Bill Mohler (wmohler@neuron.uchc.edu) Plugin: python script for multi-stack composite image Interactively adjustable intensity/LUT curves In Fiji, you can adjust the dynamic range of an image by calling Image>Adjust>Brightness & Contrast. However, this only lets you choose a linear mapping between pixel intensity and lookup table. This project aims to provide non-linear controls, such as piecewise linear functions, gamma curves, splines, etc Image processing plugins. Goal: Implement a number of segmentation algorithms based on machine learning. Language: Java. Mentor: Ignacio Arganda-Carreras, Albert Cardona Colorizing algorithms There are a number of publications about turning greyscale images into color images. This project is about implementing as many of them as possible. Note: this is an ill-posed problem, as there is not enough information in the greyscale images to identify the original color. However, under certain circumstances, it is possible to estimate a best guess for the color for most or all pixels. Goal: implement a colorizing plugin. Language: Java. Mentor: J. Schindelin (johannes.schindelin AT gmx.de) Image selector/sorter Implement an algorithm that sorts a number of images by features, such as color. Inspired by Kai-Uwe Barthel's pixolu project. A set of more powerful painting brushes and image editing tools Even if Fiji aims at scientific image processing rather than beautifying photographs, it might be fun to take your holiday pictures and post-process them with the image processing software you are familiar with. Possible tools to do so would be airbrushes (allowing for transparent colors) or brushes with a certain inertia to allow calligraphic effects, etc. Goal: implement a plugin for interactive calligraphic effects. Language: Java. Mentor: J. Schindelin (johannes.schindelin AT gmx.de) Wavelet inpainting. Goal: implement a plugin for interactive calligraphic effects. Language: Java. Mentor: [ Albert Cardona]) Scripting Add JMathLib (Matlab clone) support Quite a few algorithms are available as proof-of-concept Matlab scripts. While it is wrong to think of pixels as little squares, and literally all Matlab scripts to perform image processing are suffering from that assumption, it would be very nice nevertheless to be able to run the scripts without having to buy Matlab licenses just for that purpose. Matlab bundles a Java runtime (and in fact, all of Matlab's GUI is implemented in Java!) and allows the user to instantiate Java classes and call methods on them: import java.io.File; f = File('/usr/local/Fiji.app/'); f.exists(). So far, we have a branch which adds rudimentary JMathLib bindings to Fiji's scripting interface and a Git-SVN mirror of the JMathLib source code repository with a special fiji branch. The idea is to work on this branch to adjust JMathLib in certain ways to support this project, and once that is done, contribute the changes back to the JMathLib project. The proof-of-concept version of the Refresh_JMathLib_Scripts class illustrates how the JMathLib Interpreter needs to be instantiated and called. The following issues need to be tackled in the JMathLib source code: - In embedded (non-standalone) mode, the FunctionManager installs a WebFunctionLoader which disallows new functions to be added. Either the WebFunctionLoader needs to be taught to allow (maybe optionally) new functions to be added, or there needs to be another mode in which JMathLib is embedded, but still allows new functions to be defined. - In embedded mode, JMathLib must not call System.exit(). - Only in standalone mode are the available functions discovered at runtime rather than read from an embedded file. This file should not be necessary. JMathLib should detect whether it is bundled in a .jar or not, and use a JarInputStream or traverse the directory hierarchy otherwise. Probably the best place to do this is to teach the FileFunctionLoader to accept a URL instead of a File, too. - JMathLib supports Java via a non-standard mechanism based on DynamicJava. This is incompatible with Matlab, so there needs to be native support using reflection to support the method to instantiate Java objects mentioned above. This issue needs to be tackled in Fiji's source code: - JMathLib's image toolbox does not contain much. Even the most basic functions are missing. And even if there were functions, we would have to override them, because the functions need to be done in a way so that they can use and interact with ImageJ. The best approach may be to start by implementing the functions mentioned in Matlab's image processing toolbox' Getting Started section, by implementing .m files that call directly into ImagePlus (using the above-mentioned technique). Goal: Integrate JMathLib as a new scripting language. Language: Java. Mentor: Johannes Schindelin (johannes.schindelin@gmx.de) A Javascript Recorder Similar to the Macro Recorder but producing Javascript instead. There is a Javascript recorder in ImageJ right now, but it is in no way integrated into the Fiji Script Editor. It also appears that the Javascript recorder is not as robust as the Macro recorder yet. Code templates in the Script Editor The Script Editor provides a fine way to script small plugins that do some simple tasks. If you know how. Provide a good number of templates so that the user does not have to start from scratch. A good template will also include rather more documentation than less, so that ideally the user does not have to look up the appropriate API calls, but just modifies the well-documented code. Make Script Editor rename a Java class automatically on Save As... A public Java class must be compiled from a source file reflecting the class name, so it makes sense to rename the Java class when the file is saved under a new name. Teach the Script Editor to do that. Add a Bookmark function to the Script Editor Often, it would be very convenient to remember the current cursor position to come back to, after looking around in other parts of the file. Maybe ^ Ctrl+B (together with a menu entry), or ^ Ctrl+<digit> are good ways to implement the user interface. (The code should be similar to the Goto Line... function. Add support for Haskell (via Jaskell) and Tcl (via Jacl) We already have Jacl in Fiji, as it is a dependency of Batik. There is also a pure-Java implementation of the Haskell language, and both should be relatively easy to integrate into Fiji as scripting languages. For Tcl, the Script Editor would need minimal adjustments, as RSyntaxTextArea already has support for Tcl, but for Haskell, a new TokenMaker would have to be implemented. Add Edit>Find in files... We already have a mechanism to jump between compile errors and locations of a stack trace. The same mechanism could be used to present results from a search through multiple files. Add a "REPL" (Read-Eval-Print-Loop) to the Script Editor Detect loops after macro recording A special form of an autocorrelation (on text) should be pretty good an indication where the user repeated things that might want to be done in a loop instead. This would help users with little background in programming to write powerful plugins through the macro recorder. Fiji development environment/infrastructure GUI Testing framework We have some rudimentary GUI testing in the tests branch but it may be better to use an established GUI framework such as Jemmy or Marathon. The idea is, in any case, to record mouse moves and keyboard presses, optionally waiting for some GUI element (such as a window) to appear, and error out if something unexpected happens -- which most likely means that something broke and needs fixing. Goal: Provide an easy way to record and run GUI regression tests. Language: Mainly Java Mentor: Johannes Schindelin (johannes.schindelin@gmx.de) Interface between R and ImageJ/Fiji It would be nice to have a set of implemented procedures so IJ/Fiji can run statistical procedures directly from Results tables, etc). RImageJ is an R package written by Romain Francois which uses the Java/R interface included in rJava to start an ImageJ instance inside R and send commands to that instance. It would be nice to have the opposite direction working, to call R from Fiji. As the Java/R interface included in rJava does allow to start R from within Java, it is quite feasible. To overcome the typical problem of loading native libraries via System.loadLibrary() needing special platform-dependent settings, we should do something like this: // prohibit JRI to call System.exit(1); System.setProperty("jri.ignore.ule", "yes"); if (!Rengine.jriLoaded) { // not found on the library path System.load("/absolute/path/to/the/library"); Rengine.jriLoaded = true; } Rengine re = new Rengine(); Teach the Fiji Updater to accept other sites in addition to fiji.sc The Fiji Updater always looks for a static file containing an XML database of Fiji plugins (both current and past versions) on our website. To put new versions or new plugins there (to upload into the updater), you have to be a Fiji developer with write permission for that particular directory on our server. In some cases, there are plugins that are either too sensitive, or too specific for a certain application, or not ready for public consumption yet, but still somebody might want to install Fiji in such a way that it automatically updates those plugins, too. Of course, there must be a different location for those plugins than the official Fiji update site, lest the general audience receive those plugins automatically. The project is not without complications, though: - The XML database is saved as a file in the local Fiji directory, and it is always checked at startup whether the timestamp is newer than the timestamp of the XML database on the server. If you have multiple update sites, it should be handled in a way, where the local XML database reflects the sources of the metadata, and for uploading, a temporary XML database must be constructed for one particular upload site. - There may be conflicts between plugins that are official Fiji plugins, but also available from a secondary site. This has to be coped with (it is not clear what the best strategy should be: take the official Fiji version over the secondary site? let the user choose?) - With a new site, you need to be able to upload plugins to that site, too. There needs to be a very good way to prevent confusion, lest the plugin is uploaded to the wrong site. - To determine whether a developer can upload new plugins (because there are new versions), the Fiji Updater scans the complete plugins directory, along with a few other places where macros, 3rd party libraries, or the Fiji launcher might hide. The Fiji Updater needs to learn not to offer these plugins for upload to a secondary site, but only the non-Fiji ones. - It is unlikely that our current Fiji Updater can start a database from scratch. This has to be verified, and if there is no code for that yet, it has to be implemented. - Cross-site dependencies should be handled by having hints in the XML database as to what other site is supposed to have the newest dependency. Integrate JGit into Fiji An important part of Fiji's success is the ease with which developers can collaborate through the use of Git. There exists a pure Java implementation of Git called JGit, which already provides a large part of Git's functionality. It would be nice to have it integrated into Fiji so that the Script Editor can give the developers an even smoother developing experience. Make the Fiji Updater more intelligent about restarting Only when there are updates outside plugins/ is it necessary to do a full restart; otherwise, a simple "Update Menus" will do the trick. Further, after the message "You need to restart Fiji" (or the Update Menus), there is no reason for the Updater to stay open. And finally, if a restart is required, the user could be asked whether a restart should be attempted, and a JNI-provided function could be called with a list of open images (if there are unsaved images, they should be saved temporarily into temporary files) and result tables, which then re-executes Fiji appropriately. Make the Object Inspector more useful There is a tool in Fiji that lets you sift through all open frames and inspect the corresponding objects (and their fields, in a recursive fashion). However, there is no connection to the script editor yet (where you could open the corresponding sources for a given class). This might be pretty handy. Additionally, there could be a mode where you open the hierarchy of objects starting with the object the cursor hovers over, updated dynamically. This could be even more useful if there was a mode to show only the listeners of the objects, so that you can easily determine what code is responsible, say, to handle the click on a specific OK button. User interface improvements Add a meta-plugin to run other plugins with ranges of parameters Many plugins take parameters, and it might not be obvious what the optimal values are. So it would be nice to have a plugin that can call another plugin with a range of values. The idea is to let the meta-plugin run the plugin which asks for the user input in the normal way. But then, the meta-plugin will analyze what parameters were specified, and ask for increments and stop criteria of the numeric ones. With these data, it will run the plugin in a loop. The result could be presented as a stack of the result images. Another (more hacky) possibility of getting the range parameters is to intercept the dialog before it is shown, and extend the numeric input fields by increment and stop entries. Suggested by Quentin de Robillard. Integrate ImageFlow into Fiji ImageFlow provides a graphical way to construct macros. Every action is represented by a node which the user can connect with lines to define a workflow. ImageFlow has its own Git repository (our mirror). The following issues need to be resolved: - At the moment, it is not a true plugin, but wants to start its own ImageJ instance - it only targets the macro language, while we want to target all the scripting languages supported by Fiji - it searches for its .xml files outside of the .jar file, which makes it cumbersome to ship with the Fiji updater. Many commands available from menu items are actually plugins, so they are documented nicely on the Fiji Wiki. There is a beginning of a plugin that changes the cursor to an arrow with a question mark, and (temporarily) the way the menu items are handled: instead of running the corresponding command, the corresponding documentation on the Fiji Wiki is opened in a web browser. The code needs to be improved, though, to handle special menu items correctly, such as the recent files or the windows menu. Some menu items are actually provided by the core of ImageJ, and it may be the best to open the appropriate place on the ImageJ website instead of the Fiji Wiki. In the alternative, the ImageJ documentation should be replicated on the Fiji Wiki under the appropriate page titles. The user should also be informed that hitting the ⎋ Esc key gets her out of this mode. And finally, the Fiji Wiki needs some love to reflect the exact titles of the menu items, most probably by adding appropriate redirects. Add a clever Save As plugin For now, File>Save As always saves the result as a .tiff file, even if the user specified a file name ending in, say, .png. Stephan Preibisch suggests: Add a plugin that determines from a set of extensions which plugin to call to the respective writer plugin. For extra brownie points, do not hardcode the extension/plugin mapping (like HandleExtraFileTypes), but make it configurable via one or more file. Miscellaneous Alpha shapes / concave hull / other Graph Theory algorithms Fiji already contains a Delaunay_Voronoi plugin. The purpose of this project is to implement more graph algorithms. Most likely, this will involve designing a common framework for graph theory as applied to two- or higher-dimensional graphs. Support for storing ROIs in TIFF tag fields Fiji can save images as TIFF files and ROIs into custom .roi files. Provide a way to store the ROIs inside custom tags in the TIFF file so ROIs and images can be saved together. Cross platform webcam support Supporting image recording from webcams might provide a cheap way to make videomicroscope/telescope units (possibly using the Distortion Correction plugin to overcome low-quality CCD chips and lenses). One way to achieve that would be by using the Free Java Media Framework. A unique/common segmentation interface I have collected near 15 new histogram segmentation methods that would be better put under a single interface together with others already available. Note: this is more or less implemented in the Auto_Threshold and Auto_Local_Threshold plugins.--Gabriel 14:47, 29 November 2009 (CET) Virtual microscope-like image viewer HSB/Lab painting modes, C++, shell Mentor: Johannes Schindelin (johannes.schindelin@gmx.de) Other resources There is a wish list on the ImageJ Documentation Wiki.
http://fiji.sc/Project_ideas
CC-MAIN-2014-52
refinedweb
3,255
52.09
Coming from C# background, I was trying to understand whether Python allows to pass obj.method where ordinary function is expected, and if yes, how could it possibly work. Consider this code fragment: def call_twice(f): f() f() class Foo: def __init__(self, data): self.data = data def print_value(self): print(f"Value is {self.data}") foo = Foo(42) call_twice(foo.print_value) # can it work? how? The short answer is: yes, it works, foo.print_value returns an equivalent of C# delegate that knows about foo variable. Longer answer Unlike C++ and the languages influenced by it, methods definitions in Python accept “this” parameter explicitly. It is usually called “ self” by convention, but in fact any name would work: class Foo: def print_value(this): print(f"Value is {this.data}") It looks suspiciously like C, and in C passing a one-argument function to something that expects a zero-argument function would not work. So, how can we pass foo.print_value to call_twice() and then invoke it with no arguments? The answer is that while Foo.print_value is indeed an ordinary function that accepts one argument, foo.print_value is a callable object that accepts no arguments. When invoked, it calls Foo.print_value(foo). This is very similar to how C# delegates work, and somewhat similar to C++ functional objects that define operator (). In other words, foo.print_value() is not simply syntactic sugar for Foo.print_value(foo) as I originally thought. The expression foo.print_value returns an object of type method. This object can then be called immediately, or passed to another function to be called later. More details here: Some Interesting Implications Since methods are actually regular functions, they can be applied to arbitrary objects. Consider this code: class Bar: def run(self): self.go("fast") def go(self, msg): print(f"Bar.go({msg})") class Bus: def __init__(self, name): self.name = name def go(self, msg): print(f"{self.name} is going {msg}!") bus = Bus("Yellow bus") Bar.run(bus) # prints "Yellow bus is going fast!" We apply Bar.run() to an object of type Bus, which is completely unrelated to Bar, and yet it invokes Bus.go() and prints the expected message. Of course, this feature should be used with care, but it does occasionally become useful, e.g. when we want to be 100% certain method of what class we are calling on an object in presence of multiple inheritance. 2 Comments Permalink Lambdas are the analogs of delegates in Python, and now in C++ too. Permalink This is only partially true. In C# a delegate is a pointer to function, plus optional target object. A delegate may point to a lambda, or to a regular function/method. So, delegates and Python method and function objects are quite similar. In C++ the equivalent of delegate would be a function pointer or a struct containing a pointer to member, a pointer to a class object, and operator ().
https://ikriv.com/blog/?p=4790
CC-MAIN-2021-49
refinedweb
489
61.02
I'm looking at string manipulation in C and I don't understand why the statement s1[i] = s1[++i]; H e #include <stdio.h> main() { char s1[] = "Hello world !"; for(int i = 0; s1[i] != '\0'; ++i) s1[i] = s1[++i]; printf("%s", s1); } Hello world ! el r Your program has undefined behaviour because in this statement s1[i] = s1[++i]; i is modified twice between sequence points (The assignment operator = doesn't introduce a sequence point). gcc ( gcc -Wall -Wextra) warns with: warning: operation on ‘i’ may be undefined [-Wsequence-point] similarly clang warns: warning: unsequenced modification and access to 'i' [-Wunsequenced]
https://codedump.io/share/ngB5IQS0StxQ/1/why-this-piece-of-code-doesn39t-change-the-string
CC-MAIN-2017-39
refinedweb
104
56.25
Practical Artificial Intelligence Programming With Java Third Edition Mark Watson This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works Version 3.0 United States License. November 11,2008 Contents Preface xi 1 Introduction 1 1.1 Other JVMLanguages........................1 1.2 Why is a PDF Version of this Book Available Free on the Web?...1 1.3 Book Software............................2 1.4 Use of Java Generics and Native Types................2 1.5 Notes on Java Coding Styles Used in this Book...........3 1.6 Book Summary............................4 2 Search 5 2.1 Representation of Search State Space and Search Operators.....5 2.2 Finding Paths in Mazes........................6 2.3 Finding Paths in Graphs........................13 2.4 Adding Heuristics to Breadth First Search..............22 2.5 Search and Game Playing.......................22 2.5.1 Alpha-Beta Search......................22 2.5.2 A Java Framework for Search and Game Playing......24 2.5.3 Tic-Tac-Toe Using the Alpha-Beta Search Algorithm....29 2.5.4 Chess Using the Alpha-Beta Search Algorithm.......34 3 Reasoning 45 3.1 Logic.................................46 3.1.1 History of Logic.......................47 3.1.2 Examples of Different Logic Types.............47 3.2 PowerLoomOverview........................48 3.3 Running PowerLoomInteractively..................49 3.4 Using the PowerLoomAPIs in Java Programs............52 3.5 Suggestions for Further Study....................54 4 Semantic Web 57 4.1 Relational Database Model Has Problems Dealing with Rapidly Chang- ing Data Requirements........................58 4.2 RDF:The Universal Data Format...................59 4.3 Extending RDF with RDF Schema..................62 4.4 The SPARQL Query Language....................63 4.5 Using Sesame.............................67 iii Contents 4.6 OWL:The Web Ontology Language.................69 4.7 Knowledge Representation and REST................71 4.8 Material for Further Study......................72 5 Expert Systems 73 5.1 Production Systems..........................75 5.2 The Drools Rules Language.....................75 5.3 Using Drools in Java Applications..................77 5.4 Example Drools Expert System:Blocks World...........81 5.4.1 POJO Object Models for Blocks World Example......82 5.4.2 Drools Rules for Blocks World Example...........85 5.4.3 Java Code for Blocks World Example............88 5.5 Example Drools Expert System:Help Desk System.........90 5.5.1 Object Models for an Example Help Desk..........91 5.5.2 Drools Rules for an Example Help Desk...........93 5.5.3 Java Code for an Example Help Desk............95 5.6 Notes on the Craft of Building Expert Systems............97 6 Genetic Algorithms 99 6.1 Theory.................................99 6.2 Java Library for Genetic Algorithms.................101 6.3 Finding the MaximumValue of a Function..............105 7 Neural Networks 109 7.1 Hopfield Neural Networks......................110 7.2 Java Classes for Hopfield Neural Networks.............111 7.3 Testing the Hopfield Neural Network Class.............114 7.4 Back Propagation Neural Networks.................116 7.5 A Java Class Library for Back Propagation..............119 7.6 Adding Momentumto Speed Up Back-Prop Training........127 8 Machine Learning with Weka 129 8.1 Using Weka’s Interactive GUI Application..............130 8.2 Interactive Command Line Use of Weka...............132 8.3 Embedding Weka in a Java Application...............134 8.4 Suggestions for Further Study....................136 9 Statistical Natural Language Processing 137 9.1 Tokenizing,Stemming,and Part of Speech Tagging Text......137 9.2 Named Entity Extraction FromText.................141 9.3 Using the WordNet Linguistic Database...............144 9.3.1 Tutorial on WordNet.....................144 9.3.2 Example Use of the JAWS WordNet Library........145 9.3.3 Suggested Project:Using a Part of Speech Tagger to Use the Correct WordNet Synonyms...............149 iv Contents 9.3.4 Suggested Project:Using WordNet Synonyms to Improve Document Clustering.....................150 9.4 Automatically Assigning Tags to Text................150 9.5 Text Clustering............................152 9.6 Spelling Correction..........................156 9.6.1 GNU ASpell Library and Jazzy...............157 9.6.2 Peter Norvig’s Spelling Algorithm..............158 9.6.3 Extending the Norvig Algorithmby Using Word Pair Statistics162 9.7 Hidden Markov Models........................166 9.7.1 Training Hidden Markov Models...............168 9.7.2 Using the Trained Markov Model to Tag Text........173 10 Information Gathering 177 10.1 Open Calais..............................177 10.2 Information Discovery in Relational Databases...........181 10.2.1 Creating a Test Derby Database Using the CIA World Fact- Book and Data on US States.................182 10.2.2 Using the JDBC Meta Data APIs...............183 10.2.3 Using the Meta Data APIs to Discern Entity Relationships.187 10.3 Down to the Bare Metal:In-Memory Index and Search.......187 10.4 Indexing and Search Using Embedded Lucene............193 10.5 Indexing and Search with Nutch Clients...............197 10.5.1 Nutch Server Fast Start Setup................198 10.5.2 Using the Nutch OpenSearch Web APIs...........201 11 Conclusions 207 v Contents vi List of Figures 2.1 A directed graph representation is shown on the left and a two- dimensional grid (or maze) representation is shown on the right.In both representations,the letter R is used to represent the current po- sition (or reference point) and the arrowheads indicate legal moves generated by a search operator.In the maze representation,the two grid cells marked with an X indicate that a search operator cannot generate this grid location.......................7 2.2 UML class diagramfor the maze search Java classes........8 2.3 Using depth first search to find a path in a maze finds a non-optimal solution................................10 2.4 Using breadth first search in a maze to find an optimal solution...14 2.5 UML class diagramfor the graph search classes...........15 2.6 Using depth first search in a sample graph..............21 2.7 Using breadth first search in a sample graph.............21 2.8 Alpha-beta algorithmapplied to part of a game of tic-tac-toe....23 2.9 UML class diagrams for game search engine and tic-tac-toe.....30 2.10 UML class diagrams for game search engine and chess.......35...........................36 2.12 Continuing the first sample game:the computer is looking ahead two moves and no opening book is used................37 2.13 Second game with a 2 1/2 move lookahead..............41 2.14 Continuing the second game with a two and a half move lookahead. We will add more heuristics to the static evaluation method to reduce the value of moving the queen early in the game...........42 3.1 Overview of how we will use PowerLoom for development and de- ployment...............................46 4.1 Layers of data models used in implementing Semantic Web applica- tions..................................58 4.2 Java utility classes and interface for using Sesame..........68 vii List of Figures 5.1 Using Drools for developing rule-based systems and then deploying them..................................74 5.2 Initial state of a blocks world problem with three blocks stacked on top of each other.The goal is to move the blocks so that block C is on top of block A............................82 5.3 Block C has been removed fromblock B and placed on the table...82 5.4 Block B has been removed fromblock A and placed on the table...84 5.5 The goal is solved by placing block C on top of block A.......85 6.1 The test function evaluated over the interval [0.0,10.0].The maxi- mumvalue of 0.56 occurs at x=3.8..................100 6.2 Crossover operation..........................101 7.1 Physical structure of a neuron.....................110 7.2 Two views of the same two-layer neural network;the view on the right shows the connection weights between the input and output layers as a two-dimensional array...................117 7.3 Sigmoid and derivative of the Sigmoid (SigmoidP) functions.This plot was produced by the file src-neural-networks/Graph.java....118 7.4 Capabilities of zero,one,and two hidden neuron layer neural net- works.The grayed areas depict one of two possible output values based on two input neuron activation values.Note that this is a two-dimensional case for visualization purposes;if a network had ten input neurons instead of two,then these plots would have to be ten-dimensional instead of two-dimensional..............119 7.5 Example backpropagation neural network with one hidden layer...120 7.6 Example backpropagation neural network with two hidden layers..120 8.1 Running the Weka Data Explorer...................131 8.2 Running the Weka Data Explorer...................131 viii List of Tables 2.1 Runtimes by Method for Chess Program...............44 6.1 Randomchromosomes and the floating point numbers that they encode106 9.1 Most commonly used part of speech tags...............139 9.2 Sample part of speech tags......................167 9.3 Transition counts fromthe first tag (shown in row) to the second tag (shown in column).We see that the transition from NNP to VB is common................................169 9.4 Normalize data in Table 9.3 to get probability of one tag (seen in row) transitioning to another tag (seen in column)..........171 9.5 Probabilities of words having specific tags.Only a few tags are shown in this table...........................172 ix List of Tables x Preface I wrote this book for both professional programmers and home hobbyists who al- ready know how to program in Java and who want to learn practical Artificial In- telligence (AI) programming and information processing techniques.I have tried to make this an enjoyable book to work through.In the style of a “cook book,” the chapters can be studied in any order.Each chapter follows the same pattern:a mo- tivation for learning a technique,some theory for the technique,and a Java example programthat you can experiment with. I have been interested in AI since reading Bertram Raphael’s excellent book Think- ing Computer:Mind Inside Matter in the early 1980s.I have also had the good fortune to work on many interesting AI projects including the development of com- mercial expert systemtools. I enjoy AI programming,and hopefully this enthusiasmwill also infect the reader. Software Licenses for example programs in this book My example programs for chapters using Open Source Libraries are released under the same licenses as the libraries: Drools Expert SystemDemos:Apache style license PowerLoomReasoning:LGPL Sesame Semantic Web:LGPL The licenses for the rest of my example programs are in the directory licenses-for- book-code: License for commercial use:if you purchase a print version of this book or the for-fee PDF version fromLulu.comthen you can use any of my code and data used in the book examples under a non-restrictive license.This book can be purchaed at Free for non-commercial and academic use:if you use the free PDF version xi Preface of this book you can use the code and data used in the book examples free for activities that do not generate revenue. Acknowledgements I would like to thank Kevin Knight for writing a flexible framework for game search algorithms in Common LISP (Rich,Knight 1991) and for giving me permission to reuse his framework,rewritten in Java for some of the examples in Chapter 2.I have a library full of books on AI and I would like to thank the authors of all of these books for their influence on my professional life.I frequently reference books in the text that have been especially useful to me and that I recommend to my readers. In particular,I would like to thank the authors of the following two books that have had the most influence on me: Stuart Russell and Peter Norvig’s Artificial Intelligence:A Modern Approach which I consider to be the best single reference book for AI theory John Sowa’s book Knowledge Representation is a resource that I frequently turn to for a holistic treatment of logic,philosophy,and knowledge represen- tation in general Book Editor: Carol Watson Thanks to the following people who found typos: Carol Watson,James Fysh,Joshua Cranmer,Jack Marsh,Jeremy Burt,Jean-Marc Vanel xii 1 Introduction There are many fine books on Artificial Intelligence (AI) and good tutorials and software on the web.This book is intended for professional programmers who either already have an interest in AI or need to use specific AI technologies at work. The material is not intended as a complete reference for AI theory.Instead,I provide enough theoretical background to understand the example programs and to provide a launching point if you want or need to delve deeper into any of the topics covered. 1.1 Other JVMLanguages The Java language and JVM platform are very widely used so that techniques that you learn can be broadly useful.There are other JVMlanguages like JRuby,Clojure, Jython,and Scala that can use existing Java classes.While the examples in this book are written in Java you should have little trouble using my Java example classes and the open source libraries with these alternative JVMlanguages. 1.2 Why is a PDF Version of this Book Available Free on the Web? I have written 14 books that have been published by the traditional publishers Springer- Verlag,McGraw-Hill,J.Wiley,Morgan Kaufman,Hungry Minds,MCP,and Sybex. This is my first book that I have produced and published on my own and my moti- vation for this change is the ability to write for smaller niche markets on topics that most interest me. As an author I want to both earn a living writing and have many people read and enjoy my books.By offering for sale both a print version and a for-fee PDF version for purchase at I can earn some money for my efforts and also allow readers who can not afford to buy many books or may only be interested in a few chapters of this book to read the free PDF version that is available frommy web site. 1 1 Introduction Please note that I do not give permission to post the free PDF version of this book on other people’s web sites:I consider this to be commercial exploitation in violation of the Creative Commons License that I have chosen for this book.Having my free web books only available on my web site brings viewers to my site and helps attract customers for my consulting business.I do encourage you to copy the PDF for this book onto your own computer for local reading and it is fine to email copies of the free PDF to friends. If you enjoy reading the no-cost PDF version of this book I would also appreciate it if you would purchase a print copy using the purchase link: I thank you for your support. 1.3 Book Software You can download a large ZIP file containing all code and test data used in this book fromthe URL: All the example code that I have written is covered by the licenses discussed in the Preface. The code examples usually consist of reusable (non GUI) libraries and throwaway text-based test programs to solve a specific application problem;in some cases,the test code will contain a test or demonstration GUI. 1.4 Use of Java Generics and Native Types In general I usually use Java generics and the new collection classes for almost all of my Java programming.That is also the case for the examples in this book except when using native types and arrays provides a real performance advantage (for example,in the search examples). Since arrays must contain reifiable types they play poorly with generics so I prefer not to mix coding styles in the same code base.There are some obvious cases where not using primitive types leads to excessive object creation and boxing/unboxing. That said,I expect Java compilers,Hotspot,and the JVMin general to keep getting better and this may be a non-issue in the future. 2 1.5 Notes on Java Coding Styles Used in this Book 1.5 Notes on Java Coding Styles Used in this Book Many of the example programs do not strictly follow common Java programming idioms – this is usually done for brevity.For example,when a short example is all in one Java package I will save lines of code and programing listing space by not declaring class data private with public getters and setters;instead,I will sometimes simply use package visibility as in this example: public static class Problem { //constants for appliance types: enum Appliance {REFRIGERATOR,MICROWAVE,TV,DVD}; //constants for problem types: enum ProblemType {NOT_RUNNING,SMOKING,ON_FIRE, MAKES_NOISE}; //constants for environmental data: enum EnvironmentalDescription {CIRCUIT_BREAKER_OFF, LIGHTS_OFF_IN_ROOM}; Appliance applianceType; List<ProblemType> problemTypes = new ArrayList<ProblemType>(); List<EnvironmentalDescription> environmentalData = new ArrayList<EnvironmentalDescription>(); //etc. } Please understand that I do not advocate this style of programming in large projects but one challenge in writing about software development is the requirement to make the examples short and easily read and understood.Many of the examples started as large code bases for my own projects that I “whittled down” to a small size to show one or two specific techniques.Forgoing the use of “getters and setters” in many of the examples is just another way to shorten the examples. Authors of programming books are faced with a problem in formatting program snippets:limited page width.You will frequently see what would be a single line in a Java source file split over two or three lines to accommodate limited page width as seen in this example: private static void createTestFacts(WorkingMemory workingMemory) throws Exception { ... } 3 1 Introduction 1.6 Book Summary Chapter 1 is the introduction for this book. Chapter 2 deals with heuristic search in two domains:two-dimensional grids (for example mazes) and graphs (defined by nodes and edges connecting nodes). Chapter 3 covers logic,knowledge representation,and reasoning using the Power- Loomsystem. Chapter 4 covers the Semantic Web.You will learn how to use RDF and RDFS data for knowledge representation and how to use the popular Sesame open source Semantic Web system. Chapter 5 introduces you to rule-based or production systems.We will use the open source Drools systemto implement simple expert systems for solving “blocks world” problems and to simulate a help desk system. Chapter 6 gives an overview of Genetic Algorithms,provides a Java library,and solves a test problem.The chapter ends with suggestions for projects you might want to try. Chapter 7 introduces Hopfield and Back Propagation Neural Networks.In addition to Java libraries you can use in your own projects,we will use two Swing-based Java applications to visualize how neural networks are trained. Chapter 8 introduces you to the GPLed Weka project.Weka is a best of breed toolkit for solving a wide range of machine learning problems. Chapter 9 covers several Statistical Natural Language Processing (NLP) techniques that I often use in my own work:processing text (tokenizing,stemming,and de- termining part of speech),named entity extraction from text,using the WordNet lexical database,automatically assigning tags to text,text clustering,three different approaches to spelling correction,and a short tutorial on Markov Models. Chapter 10 provides useful techniques for gathering and using information:using the Open Calais web services for extracting semantic information from text,infor- mation discovery in relational databases,and three different approaches to indexing and searching text. 4 2 Search Early AI research emphasized the optimization of search algorithms.This approach made a lot of sense because many AI tasks can be solved effectively by defining state spaces and using search algorithms to define efficient. What are the limitations of search?Early on,search applied to problems like check- ers and chess misled early researchers into underestimating the extreme difficulty of writing software that performs tasks in domains that require general world knowl- edge or deal with complex and changing environments.These types of problems usually require the understanding and then the implementation of domain specific knowledge. In this chapter,we will use three search problem domains for studying search algo- rithms:path finding in a maze,path finding in a graph,and alpha-beta search in the games tic-tac-toe and chess. 2.1 Representation of Search State Space and Search Operators We will use a single search tree representation in graph search and maze search examples in this chapter.Search trees consist of nodes that define locations in state space and links to other nodes.For some small problems,the search tree can be easily specified statically;for example,when performing search in game mazes,we can compute and save a search tree for the entire state space of the maze.For many problems,it is impossible to completely enumerate a search tree for a state space so we must define successor node search operators that for a given node produce all nodes that can be reached from the current node in one step;for example,in the 5 2 Search game of chess we can not possibly enumerate the search tree for all possible games of chess,so we define a successor node search operator that given a board position (represented by a node in the search tree) calculates all possible moves for either the white or black pieces.The possible chess moves are calculated by a successor node search operator and are represented by newly calculated nodes that are linked to the previous node.Note that even when it is simple to fully enumerate a search tree,as in the game maze example,we still might want to generate the search tree dynamically as we will do in this chapter). For calculating a search tree we use a graph.We will represent graphs as node with links between some of the nodes.For solving puzzles and for game related search, we will represent positions in the search space with Java objects called nodes.Nodes contain arrays of references to both child and parent nodes.A search space using this node representation can be viewed as a directed graph or a tree.The node that has no parent nodes is the root node and all nodes that have no child nodes a called leaf nodes. Search operators are used to move from one point in the search space to another. We deal with quantized search spaces in this chapter,but search spaces can also be continuous in some applications.Often search spaces are either very large or are infinite.In these cases,we implicitly define a search space using some algorithm for extending the space from our reference position in the space.Figure 2.1 shows representations of search space as both connected nodes in a graph and as a two- dimensional grid with arrows indicating possible movement from a reference point denoted by R. When we specify a search space as a two-dimensional array,search operators will move the point of reference in the search space from a specific grid location to an adjoining grid location.For some applications,search operators are limited to moving up/down/left/right and in other applications operators can additionally move the reference location diagonally. When we specify a search space using node representation,search operators can move the reference point down to any child node or up to the parent node.For search spaces that are represented implicitly,search operators are also responsible for determining legal child nodes,if any,fromthe reference point. Note that I use different libraries for the maze and graph search examples. 2.2 Finding Paths in Mazes The example program used in this section is MazeSearch.java in the directory sr- c/search/maze and I assume that the reader has downloaded the entire example ZIP file for this book and placed the source files for the examples in a convenient place. 6 2.2 Finding Paths in Mazes R R Figure 2.1:A directed graph representation is shown on the left and a two- dimensional grid (or maze) representation is shown on the right.In both representations,the letter R is used to represent the current posi- tion (or reference point) and the arrowheads indicate legal moves gener- ated by a search operator.In the maze representation,the two grid cells marked with an X indicate that a search operator cannot generate this grid location. Figure 2.2 shows the UML class diagram for the maze search classes:depth first and breadth first search.The abstract base class AbstractSearchEngine contains common code and data that is required by both the classes DepthFirstSearch and BreadthFirstSearch.The class Maze is used to record the data for a two- dimensional maze,including which grid locations contain walls or obstacles.The class Maze defines three static short integer values used to indicate obstacles,the starting location,and the ending location. The Java class Maze defines the search space.This class allocates a two-dimensional array of short integers to represent the state of any grid location in the maze.When- ever we need to store a pair of integers,we will use an instance of the standard Java class java:awt:Dimension,which has two integer data components:width and height.Whenever we need to store an x-y grid location,we create a newDimension object (if required),and store the x coordinate in Dimension.width and the y coor- dinate in Dimension.height.As in the right-hand side of Figure 2.1,the operator for moving through the search space from given x-y coordinates allows a transition to any adjacent grid location that is empty.The Maze class also contains the x-y location for the starting location (startLoc) and goal location (goalLoc).Note that for these examples,the class Maze sets the starting location to grid coordinates 0-0 (upper left corner of the maze in the figures to follow) and the goal node in (width - 1)-(height - 1) (lower right corner in the following figures). 7 2 Search Maze getV alue: short setV alue: void Maze AbstractSearchEngine getPath: Dimension [] #searchPath #initSearch AbstractSearchEngine 1 DepthFirstSearchEngine iterateSearch DepthFirstSearchEngine BreadthFirstSearchEngine BreadthFirstSearchEngine MazeDepthFirstSearch paint main (static) MazeDepthFirstSearch MazeBreadthFirstSearch paint main (static) MazeBreadthFirstSearch Java main test programs using JFC 1 1 Figure 2.2:UML class diagramfor the maze search Java classes The abstract class AbstractSearchEngine is the base class for both the depth first (uses a stack to store moves) search class DepthFirstSearchEngine and the breadth first (uses a queue to store moves) search class BreadthFirstSearchEngine. We will start by looking at the common data and behavior defined in AbstractSearchEngine. The class constructor has two required arguments:the width and height of the maze, measured in grid cells.The constructor defines an instance of the Maze class of the desired size and then calls the utility method initSearch to allocate an array searchPath of Dimension objects,which will be used to record the path traversed through the maze.The abstract base class also defines other utility methods: equals(Dimensiond1;Dimensiond2) – checks to see if two arguments of type Dimension are the same. getPossibleMoves(Dimensionlocation) – returns an array of Dimension objects that can be moved to fromthe specified location.This implements the movement operator. Now,we will look at the depth first search procedure.The constructor for the derived class DepthFirstSearchEngine calls the base class constructor and then solves the search problem by calling the method iterateSearch.We will look at this method in some detail.The arguments to iterateSearch specify the current location and the current search depth: 8 2.2 Finding Paths in Mazes private void iterateSearch(Dimension loc,int depth) The class variable isSearching is used to halt search,avoiding more solutions,once one path to the goal is found. if (isSearching == false) return; We set the maze value to the depth for display purposes only: maze.setValue(loc.width,loc.height,(short)depth); Here,we use the super class getPossibleMoves method to get an array of possible neighboring squares that we could move to;we then loop over the four possible moves (a null value in the array indicates an illegal move): Dimension [] moves = getPossibleMoves(loc); for (int i=0;i<4;i++) { if (moves[i] == null) break;//out of possible moves //from this location Record the next move in the search path array and check to see if we are done: searchPath[depth] = moves[i]; if (equals(moves[i],goalLoc)) { System.out.println("Found the goal at"+ moves[i].width + ‘‘,"+ moves[i].height); isSearching = false; maxDepth = depth; return; } else { If the next possible move is not the goal move,we recursively call the iterateSearch method again,but starting from this new location and increasing the depth counter by one: iterateSearch(moves[i],depth + 1); if (isSearching == false) return; } 9 2 Search Figure 2.3:Using depth first search to find a path in a maze finds a non-optimal solution Figure 2.3 shows how poor a path a depth first search can find between the start and goal locations in the maze.The maze is a 10-by-10 grid.The letter S marks the starting location in the upper left corner and the goal position is marked with a G in the lower right corner of the grid.Blocked grid cells are painted light gray.The basic problem with the depth first search is that the search engine will often start searching in a bad direction,but still find a path eventually,even given a poor start. The advantage of a depth first search over a breadth first search is that the depth first search requires much less memory.We will see that possible moves for depth first search are stored on a stack (last in,first out data structure) and possible moves for a breadth first search are stored in a queue (first in,first out data structure). The derived class BreadthFirstSearch is similar to the DepthFirstSearch pro- cedure with one major difference:from a specified search location we calculate all possible moves,and make one possible trial move at a time.We use a queue data structure for storing possible moves,placing possible moves on the back of the queue as they are calculated,and pulling test moves fromthe front of the queue.The 10 2.2 Finding Paths in Mazes effect of a breadth first search is that it “fans out” uniformly from the starting node until the goal node is found. The class constructor for BreadthFirstSearch calls the super class constructor to initialize the maze,and then uses the auxiliary method doSearchOn2Dgrid for per- forming a breadth first search for the goal.We will look at the class BreadthFirstSearch in some detail.Breadth first search uses a queue instead of a stack (depth first search) to store possible moves.The utility class DimensionQueue implements a standard queue data structure that handles instances of the class Dimension. The method doSearchOn2Dgrid is not recursive,it uses a loop to add new search positions to the end of an instance of class DimensionQueue and to remove and test newlocations fromthe front of the queue.The two-dimensional array allReadyV isited keeps us fromsearching the same location twice.To calculate the shortest path after the goal is found,we use the predecessor array: private void doSearchOn2DGrid() { int width = maze.getWidth(); int height = maze.getHeight(); boolean alReadyVisitedFlag[][] = new boolean[width][height]; Dimension predecessor[][] = new Dimension[width][height]; DimensionQueue queue = new DimensionQueue(); for (int i=0;i<width;i++) { for (int j=0;j<height;j++) { alReadyVisitedFlag[i][j] = false; predecessor[i][j] = null; } } We start the search by setting the already visited flag for the starting location to true value and adding the starting location to the back of the queue: alReadyVisitedFlag[startLoc.width][startLoc.height] = true; queue.addToBackOfQueue(startLoc); boolean success = false; This outer loop runs until either the queue is empty or the goal is found: outer: while (queue.isEmpty() == false) { 11 2 Search We peek at the Dimension object at the front of the queue (but do not remove it) and get the adjacent locations to the current position in the maze: Dimension head = queue.peekAtFrontOfQueue(); Dimension [] connected = getPossibleMoves(head); We loop over each possible move;if the possible move is valid (i.e.,not null) and if we have not already visited the possible move location,then we add the possible move to the back of the queue and set the predecessor array for the new location to the last square visited (head is the value from the front of the queue).If we find the goal,break out of the loop: for (int i=0;i<4;i++) { if (connected[i] == null) break; int w = connected[i].width; int h = connected[i].height; if (alReadyVisitedFlag[w][h] == false) { alReadyVisitedFlag[w][h] = true; predecessor[w][h] = head; queue.addToBackOfQueue(connected[i]); if (equals(connected[i],goalLoc)) { success = true; break outer;//we are done } } } We have processed the location at the front of the queue (in the variable head),so remove it: queue.removeFromFrontOfQueue(); } Now that we are out of the main loop,we need to use the predecessor array to get the shortest path.Note that we fill in the searchPath array in reverse order,starting with the goal location: maxDepth = 0; if (success) { searchPath[maxDepth++] = goalLoc; 12 2.3 Finding Paths in Graphs for (int i=0;i<100;i++) { searchPath[maxDepth] = predecessor[searchPath[maxDepth - 1]. width][searchPath[maxDepth - 1]. height]; maxDepth++; if (equals(searchPath[maxDepth - 1], startLoc)) break;//back to starting node } } } Figure 2.4 shows a good path solution between starting and goal nodes.Starting from the initial position,the breadth first search engine adds all possible moves to the back of a queue data structure.For each possible move added to this queue in one search cycle,all possible moves are added to the queue for each new move recorded.Visually,think of possible moves added to the queue as “fanning out” like a wave from the starting location.The breadth first search engine stops when this “wave” reaches the goal location.In general,I prefer breadth first search techniques to depth first search techniques when memory storage for the queue used in the search process is not an issue.In general,the memory requirements for performing depth first search is much less than breadth first search. To run the two example programs fromthis section,change directory to src/search/- maze and type: javac * .java java MazeDepthFirstSearch java MazeBreadthFirstSearch Note that the classes MazeDepthFirstSearch and MazeBreadthFirstSearch are simple Java JFC applications that produced Figures 2.3 and 2.4.The interested reader can read through the source code for the GUI test programs,but we will only cover the core AI code in this book.If you are interested in the GUI test programs and you are not familiar with the Java JFC (or Swing) classes,there are several good tutorials on JFC programming at java.sun.com. 2.3 Finding Paths in Graphs In the last section,we used both depth first and breadth first search techniques to find a path between a starting location and a goal location in a maze.Another common 13 2 Search Figure 2.4:Using breadth first search in a maze to find an optimal solution type of search space is represented by a graph.A graph is a set of nodes and links. We characterize nodes as containing the following data: A name and/or other data Zero or more links to other nodes A position in space (this is optional,usually for display or visualization pur- poses) Links between nodes are often called edges.The algorithms used for finding paths in graphs are very similar to finding paths in a two-dimensional maze.The primary difference is the operators that allowus to move fromone node to another.In the last section we saw that in a maze,an agent can move from one grid space to another if the target space is empty.For graph search,a movement operator allows movement to another node if there is a link to the target node. Figure 2.5 shows the UML class diagram for the graph search Java classes that we will use in this section.The abstract class AbstractGraphSearch class is the base class for both DepthFirstSearch and BreadthFirstSearch.The classes GraphDepthFirstSearch and GraphBreadthFirstSearch and test programs also provide a Java Foundation Class (JFC) or Swing based user interface.These two test programs produced Figures 2.6 and 2.7. 14 2.3 Finding Paths in Graphs #getNodeIndex(String name): int getNodeName( int index): String addNode (String name, int x, int y) : void getNodeName ( int index) : String getNodeX ( int index) : int getNodeY ( int index) : int getLink1 ( int index) : int getLink2 ( int index) : int addLink ( int node1, int node2) k: void fi ndPath: int[] AbstractGraphSearch findPath( int start_node, int goal_node): int[] DepthFIrstSearch findPath( int start_node, int goal_node): int[] BreadthFIrstSearch main(String[] args): void paintNode(Graphics g, String name, int x, int y): void paint(Graphics g): void GraphDepthFirstSearch main(String[] args): void paintNode(Graphics g, String name, int x, int y): void paint(Graphics g): void GraphDepthFirstSearch Java main test programs using JFC 1 1 Figure 2.5:UML class diagramfor the graph search classes 15 2 Search As seen in Figure 2.5,most of the data for the search operations (i.e.,nodes,links, etc.) is defined in the abstract class AbstractGraphSearch.This abstract class is customized through inheritance to use a stack for storing possible moves (i.e.,the array path) for depth first search and a queue for breadth first search. The abstract class AbstractGraphSearch allocates data required by both derived classes: final public static int MAX = 50; protected int [] path = new int[AbstractGraphSearch.MAX]; protected int num_path = 0; //for nodes: protected String [] nodeNames = new String[MAX]; protected int [] node_x = new int[MAX]; protected int [] node_y = new int[MAX]; //for links between nodes: protected int [] link_1 = new int[MAX]; protected int [] link_2 = new int[MAX]; protected int [] lengths = new int[MAX]; protected int numNodes = 0; protected int numLinks = 0; protected int goalNodeIndex = -1, startNodeIndex = -1; The abstract base class also provides several common utility methods: addNode(String name,int x,int y) – adds a new node addLink(int n1,int n2) – adds a bidirectional link between nodes indexed by n1 and n2.Node indexes start at zero and are in the order of calling addNode. addLink(String n1,String n2) – adds a bidirectional link between nodes spec- ified by their names getNumNodes() – returns the number of nodes getNumLinks() – returns the number of links getNodeName(int index) – returns a node’s name getNodeX(),getNodeY() – return the coordinates of a node getNodeIndex(String name) – gets the index of a node,given its name 16 2.3 Finding Paths in Graphs The abstract base class defines an abstract method findPath that must be overrid- den.We will start with the derived class DepthFirstSearch,looking at its im- plementation of findPath.The findPath method returns an array of node indices indicating the calculated path: public int [] findPath(int start_node, int goal_node) { The class variable path is an array that is used for temporary storage;we set the first element to the starting node index,and call the utility method findPathHelper: path[0] = start_node;//the starting node return findPathHelper(path,1,goal_node); } The method findPathHelper is the interesting method in this class that actually per- forms the depth first search;we will look at it in some detail: The path array is used as a stack to keep track of which nodes are being visited during the search.The argument num path is the number of locations in the path, which is also the search depth: public int [] findPathHelper(int [] path, int num_path, int goal_node) { First,re-check to see if we have reached the goal node;if we have,make a newarray of the current size and copy the path into it.This new array is returned as the value of the method: if (goal_node == path[num_path - 1]) { int [] ret = new int[num_path]; for (int i=0;i<num_path;i++) { ret[i] = path[i]; } return ret;//we are done! } We have not found the goal node,so call the method connected nodes to find all nodes connected to the current node that are not already on the search path (see the source code for the implementation of connected nodes): 17 2 Search int [] new_nodes = connected_nodes(path, num_path); If there are still connected nodes to search,add the next possible “node to visit” to the top of the stack (variable path in the program) and recursively call the method findPathHelper again: if (new_nodes!= null) { for (int j=0;j<new_nodes.length;j++) { path[num_path] = new_nodes[j]; int [] test = findPathHelper(new_path, num_path + 1, goal_node); if (test!= null) { if (test[test.length-1] == goal_node) { return test; } } } } If we have not found the goal node,return null,instead of an array of node indices: return null; } Derived class BreadthFirstSearch also must define abstract method findPath. This method is very similar to the breadth first search method used for finding a path in a maze:a queue is used to store possible moves.For a maze,we used a queue class that stored instances of the class Dimension,so for this problem,the queue only needs to store integer node indices.The return value of findPath is an array of node indices that make up the path fromthe starting node to the goal. public int [] findPath(int start_node, int goal_node) { We start by setting up a flag array alreadyV isited to prevent visiting the same node twice,and allocating a predecessors array that we will use to find the shortest path once the goal is reached: //data structures for depth first search: 18 2.3 Finding Paths in Graphs boolean [] alreadyVisitedFlag = new boolean[numNodes]; int [] predecessor = new int[numNodes]; The class IntQueue is a private class defined in the file BreadthFirstSearch.java;it implements a standard queue: IntQueue queue = new IntQueue(numNodes + 2); Before the main loop,we need to initialize the already visited and predecessor ar- rays,set the visited flag for the starting node to true,and add the starting node index to the back of the queue: for (int i=0;i<numNodes;i++) { alreadyVisitedFlag[i] = false; predecessor[i] = -1; } alreadyVisitedFlag[start_node] = true; queue.addToBackOfQueue(start_node); The main loop runs until we find the goal node or the search queue is empty: outer:while (queue.isEmpty() == false) { We will read (without removing) the node index at the front of the queue and calcu- late the nodes that are connected to the current node (but not already on the visited list) using the connected nodes method (the interested reader can see the imple- mentation in the source code for this class): int head = queue.peekAtFrontOfQueue(); int [] connected = connected_nodes(head); if (connected!= null) { If each node connected by a link to the current node has not already been visited,set the predecessor array and add the new node index to the back of the search queue; we stop if the goal is found: for (int i=0;i<connected.length;i++) { if (alreadyVisitedFlag[connected[i]] == false) { predecessor[connected[i]] = head; 19 2 Search queue.addToBackOfQueue(connected[i]); if (connected[i] == goal_node) break outer; } } alreadyVisitedFlag[head] = true; queue.removeFromQueue();//ignore return value } } Now that the goal node has been found,we can build a new array of returned node indices for the calculated path using the predecessor array: int [] ret = new int[numNodes + 1]; int count = 0; ret[count++] = goal_node; for (int i=0;i<numNodes;i++) { ret[count] = predecessor[ret[count - 1]]; count++; if (ret[count - 1] == start_node) break; } int [] ret2 = new int[count]; for (int i=0;i<count;i++) { ret2[i] = ret[count - 1 - i]; } return ret2; } In order to run both the depth first and breadth first graph search examples,change directory to src-search-maze and type the following commands: javac * .java java GraphDepthFirstSearch java GraphBeadthFirstSearch Figure 2.6 shows the results of finding a route fromnode 1 to node 9 in the small test graph.Like the depth first results seen in the maze search,this path is not optimal. Figure 2.7 shows an optimal path found using a breadth first search.As we saw in the maze search example,we find optimal solutions using breadth first search at the cost of extra memory required for the breadth first search. 20 2.3 Finding Paths in Graphs Figure 2.6:Using depth first search in a sample graph Figure 2.7:Using breadth first search in a sample graph 21 2 Search 2.4 Adding Heuristics to Breadth First Search We can usually make breadth first search more efficient by ordering the search order for all branches froma given position in the search space.For example,when adding new nodes from a specified reference point in the search space,we might want to add nodes to the search queue first that are “in the direction” of the goal location:in a two-dimensional search like our maze search,we might want to search connected grid cells first that were closest to the goal grid space.In this case,pre-sorting nodes (in order of closest distance to the goal) added to the breadth first search queue could have a dramatic effect on search efficiency.In the next chapter we will build a simple real-time planning system around our breadth first maze search program;this new programwill use heuristics.The alpha-beta additions to breadth first search are seen in in the next section. 2.5 Search and Game Playing Now that a computer programhas won a match against the human world champion, perhaps people’s expectations of AI systems will be prematurely optimistic.Game search techniques are not real AI,but rather,standard programming techniques.A better platformfor doing AI research is the game of Go.There are so many possible moves in the game of Go that brute force look ahead (as is used in Chess playing programs) simply does not work. That said,min-max type search algorithms with alpha-beta cutoff optimizations are an important programming technique and will be covered in some detail in the re- mainder of this chapter.We will design an abstract Java class library for imple- menting alpha-beta enhanced min-max search,and then use this framework to write programs to play tic-tac-toe and chess. 2.5.1 Alpha-Beta Search The first game that we will implement will be tic-tac-toe,so we will use this simple game to explain how the min-max search (with alpha-beta cutoffs) works. Figure 2.8 shows the possible moves generated from a tic-tac-toe position where X has made three moves and O has made two moves;it is O’s turn to move.This is “level 0” in Figure 2.8.At level 0,O has four possible moves.How do we assign a fitness value to each of O’s possible moves at level 0?The basic min-max search algorithm provides a simple solution to this problem:for each possible move by O in level 1,make the move and store the resulting 4 board positions.Now,at level 1, it is X’s turn to move.Howdo we assign values to each of X’s possible three moves 22 2.5 Search and Game Playing X X X X X X X X X X X X X X X X X X X X X X X X X X O O O O O O O O O O O O O O O O O O O O O O O Level 0: O to move Level 1: X to move Level 2: O to move Figure 2.8:Alpha-beta algorithmapplied to part of a game of tic-tac-toe in Figure 2.8?Simple,we continue to search by making each of X’s possible moves and storing each possible board position for level 2.We keep recursively applying this algorithmuntil we either reach a maximumsearch depth,or there is a win,loss, or draw detected in a generated move.We assume that there is a fitness function available that rates a given board position relative to either side.Note that the value of any board position for X is the negative of the value for O. To make the search more efficient,we maintain values for alpha and beta for each search level.Alpha and beta determine the best possible/worst possible move avail- able at a given level.If we reach a situation like the second position in level 2 where X has won,then we can immediately determine that O’s last move in level 1 that produced this position (of allowing X an instant win) is a low valued move for O (but a high valued move for X).This allows us to immediately “prune” the search tree by ignoring all other possible positions arising from the first O move in level 1.This alpha-beta cutoff (or tree pruning) procedure can save a large percentage of search time,especially if we can set the search order at each level with “probably best” moves considered first. While tree diagrams as seen in Figure 2.8 quickly get complicated,it is easy for a computer program to generate possible moves,calculate new possible board posi- tions and temporarily store them,and recursively apply the same procedure to the next search level (but switching min-max “sides” in the board evaluation).We will see in the next section that it only requires about 100 lines of Java code to implement an abstract class framework for handling the details of performing an alpha-beta en- hanced search.The additional game specific classes for tic-tac-toe require about an additional 150 lines of code to implement;chess requires an additional 450 lines of code. 23 2 Search 2.5.2 A Java Framework for Search and Game Playing The general interface for the Java classes that we will develop in this section was inspired by the Common LISP game-playing framework written by Kevin Knight and described in (Rich,Knight 1991).The abstract class GameSearch contains the code for running a two-player game and performing an alpha-beta search.This class needs to be sub-classed to provide the eight methods: public abstract boolean drawnPosition(Position p) public abstract boolean wonPosition(Position p, boolean player) method drawnPosition should return a Boolean true value if the given po- sition evaluates to a draw situation.The method wonPosition should return a true value if the input position is won for the indicated player.By convention,I use a Boolean true value to represent the computer and a Boolean false value to represent the human opponent.The method positionEvaluation returns a posi- tion evaluation for a specified board position and player.Note that if we call po- sitionEvaluation switching the player for the same board position,then the value returned is the negative of the value calculated for the opposing player.The method possibleMoves returns an array of objects belonging to the class Position.In an actual game like chess,the position objects will actually belong to a chess-specific refinement of the Position class (e.g.,for the chess program developed later in this chapter,the method possibleMoves will return an array of ChessPosition ob- jects).The method makeMove will return a new position object for a specified board position,side to move,and move.The method reachedMaxDepth returns a Boolean true value if the search process has reached a satisfactory depth.For the tic-tac-toe program,the method reachedMaxDepth does not return true unless ei- ther side has won the game or the board is full;for the chess program,the method reachedMaxDepth returns true if the search has reached a depth of 4 half moves deep (this is not the best strategy,but it has the advantage of making the example 24 2.5 Search and Game Playing programshort and easy to understand).The method getMove returns an object of a class derived fromthe class Move (e.g.,TicTacToeMove or ChessMove). The GameSearch class implements the following methods to performgame search: protected Vector alphaBeta(int depth,Position p, boolean player) protected Vector alphaBetaHelper(int depth, Position p, boolean player, float alpha, float beta) public void playGame(Position startingPosition, boolean humanPlayFirst) The method alphaBeta is simple;it calls the helper method alphaBetaHelper with initial search conditions;the method alphaBetaHelper then calls itself recur- sively.The code for alphaBeta is: protected Vector alphaBeta(int depth, Position p, boolean player) { Vector v = alphaBetaHelper(depth,p,player, 1000000.0f, -1000000.0f); return v; } It is important to understand what is in the vector returned by the methods alphaBeta and alphaBetaHelper.The first element is a floating point position evaluation for the point of viewof the player whose turn it is to move;the remaining values are the “best move” for each side to the last search depth.As an example,if I let the tic-tac- toe program play first,it places a marker at square index 0,then I place my marker in the center of the board an index 4.At this point,to calculate the next computer move,alphaBeta is called and returns the following elements in a vector: next element:0.0 next element:[-1,0,0,0,1,0,0,0,0,] next element:[-1,1,0,0,1,0,0,0,0,] next element:[-1,1,0,0,1,0,0,-1,0,] next element:[-1,1,0,1,1,0,0,-1,0,] next element:[-1,1,0,1,1,-1,0,-1,0,] next element:[-1,1,1,1,1,-1,0,-1,0,] 25 2 Search next element:[-1,1,1,1,1,-1,-1,-1,0,] next element:[-1,1,1,1,1,-1,-1,-1,1,] Here,the alpha-beta enhanced min-max search looked all the way to the end of the game and these board positions represent what the search procedure calculated as the best moves for each side.Note that the class TicTacToePosition (derived fromthe abstract class Position) has a toString method to print the board values to a string. The same printout of the returned vector fromalphaBeta for the chess programis: next element:5.4 next element: [4,2,3,5,9,3,2,4,7,7,1,1,1, -1,0,0,0,0,0,0,0,7,7,0,0,0,0,-1,-5,0,0,7,7, 0,-1,-1,-1,0,-1,-1,-1,7,7,-4,-2,-3,0,-9, -3,-2,-4,] 26 2.5 Search and Game Playing Here,the search procedure assigned the side to move (the computer) a position evaluation score of 5.4;this is an artifact of searching to a fixed depth.Notice that the board representation is different for chess,but because the GameSearch class manipulates objects derived fromthe classes Position and Move,the GameSearch class does not need to have any knowledge of the rules for a specific game.We will discuss the format of the chess position class ChessPosition in more detail when we develop the chess program. The classes Move and Position contain no data and methods at all.The classes Move and Position are used as placeholders for derived classes for specific games. The search methods in the abstract GameSearch class manipulate objects derived fromthe classes Move and Position. Nowthat we have seen the debug printout of the contents of the vector returned from the methods alphaBeta and alphaBetaHelper,it will be easier to understand how the method alphaBetaHelper works.The following text shows code fragments fromthe alphaBetaHelper method interspersed with book text: protected Vector alphaBetaHelper(int depth, Position p, boolean player, float alpha, float beta) { Here,we notice that the method signature is the same as for alphaBeta,except that we pass floating point alpha and beta values.The important point in understanding min-max search is that most of the evaluation work is done while “backing up” the search tree;that is,the search proceeds to a leaf node (a node is a leaf if the method reachedMaxDepth return a Boolean true value),and then a return vector for the leaf node is created by making a newvector and setting its first element to the position evaluation of the position at the leaf node and setting the second element of the return vector to the board position at the leaf node: if (reachedMaxDepth(p,depth)) { Vector v = new Vector(2); float value = positionEvaluation(p,player); v.addElement(new Float(value)); v.addElement(p); return v; } If we have not reached the maximumsearch depth (i.e.,we are not yet at a leaf node in the search tree),then we enumerate all possible moves from the current position using the method possibleMoves and recursively call alphaBetaHelper for each 27 2 Search new generated board position.In terms of Figure 2.8,at this point we are moving down to another search level (e.g.,from level 1 to level 2;the level in Figure 2.8 corresponds to depth argument in alphaBetaHelper): Vector best = new Vector(); Position [] moves = possibleMoves(p,player); for (int i=0;i<moves.length;i++) { Vector v2 = alphaBetaHelper(depth + 1,moves[i], !player, -beta,-alpha); float value = -((Float)v2.elementAt(0)).floatValue(); if (value > beta) { if(GameSearch.DEBUG) System.out.println("!!!value="+ value+ ",beta="+beta); beta = value; best = new Vector(); best.addElement(moves[i]); Enumeration enum = v2.elements(); enum.nextElement();//skip previous value while (enum.hasMoreElements()) { Object o = enum.nextElement(); if (o!= null) best.addElement(o); } } / ** * Use the alpha-beta cutoff test to abort * search if we found a move that proves that * the previous move in the move chain was dubious * / if (beta >= alpha) { break; } } Notice that when we recursively call alphaBetaHelper,we are “flipping” the player argument to the opposite Boolean value.After calculating the best move at this depth (or level),we add it to the end of the return vector: Vector v3 = new Vector(); v3.addElement(new Float(beta)); Enumeration enum = best.elements(); 28 2.5 Search and Game Playing while (enum.hasMoreElements()) { v3.addElement(enum.nextElement()); } return v3; When the recursive calls back up and the first call to alphaBetaHelper returns a vector to the method alphaBeta,all of the “best” moves for each side are stored in the return vector,along with the evaluation of the board position for the side to move. The class GameSearch method playGame is fairly simple;the following code fragment is a partial listing of playGame showing howto call alphaBeta,getMove, and makeMove: public void playGame(Position startingPosition, boolean humanPlayFirst) { System.out.println("Your move:"); Move move = getMove(); startingPosition = makeMove(startingPosition, HUMAN,move); printPosition(startingPosition); Vector v = alphaBeta(0,startingPosition,PROGRAM); startingPosition = (Position)v.elementAt(1); } } The debug printout of the vector returned from the method alphaBeta seen earlier in this section was printed using the following code immediately after the call to the method alphaBeta: Enumeration enum = v.elements(); while (enum.hasMoreElements()) { System.out.println("next element:"+ enum.nextElement()); } In the next few sections,we will implement a tic-tac-toe program and a chess- playing programusing this Java class framework. 2.5.3 Tic-Tac-Toe Using the Alpha-Beta Search Algorithm Using the Java class framework of GameSearch,Position,and Move,it is simple to write a basic tic-tac-toe programby writing three new derived classes (see Figure 29 2 Search Move Position +moveIndex: int T icT acT oeMove +toString: String +BLANK: int +HUMAN: int +PROGRAM: int board: int[] T icT acT oePosition +move: Move T icT acT oe Figure 2.9:UML class diagrams for game search engine and tic-tac-toe 2.9) TicTacToe (derived from GameSearch),TicTacToeMove (derived from Move),and TicTacToePosition (derived fromPosition). I assume that the reader has the book example code installed and available for view- ing.In this section,I will only discuss the most interesting details of the tic-tac- toe class refinements;I assume that the reader can look at the source code.We will start by looking at the refinements for the position and move classes.The TicTacToeMove class is trivial,adding a single integer value to record the square index for the new move: public class TicTacToeMove extends Move { public int moveIndex; } The board position indices are in the range of [0..8] and can be considered to be in the following order: 0 1 2 3 4 5 6 7 8 The class TicTacToePosition is also simple: 30 2.5 Search and Game Playing public class TicTacToePosition extends Position { final static public int BLANK = 0; final static public int HUMAN = 1; final static public int PROGRAM = -1; int [] board = new int[9]; public String toString() { StringBuffer sb = new StringBuffer("["); for (int i=0;i<9;i++) sb.append(""+board[i]+","); sb.append("]"); return sb.toString(); } } This class allocates an array of nine integers to represent the board,defines constant values for blank,human,and computer squares,and defines a toString method to print out the board representation to a string. The TicTacToe class must define the following abstract methods from the base class GameSearch: public abstract boolean drawnPosition(Position p) public abstract boolean wonPosition(Position p, boolean player) public abstract float implementation of these methods uses the refined classes TicTacToeMove and TicTacToePosition.For example,consider the method drawnPosition that is responsible for selecting a drawn (or tied) position: public boolean drawnPosition(Position p) { boolean ret = true; TicTacToePosition pos = (TicTacToePosition)p; 31 2 Search for (int i=0;i<9;i++) { if (pos.board[i] == TicTacToePosition.BLANK){ ret = false; break; } } return ret; } The overridden methods from the GameSearch base class must always cast argu- ments of type Position and Move to TicTacToePosition and TicTacToeMove. Note that in the method drawnPosition,the argument of class Position is cast to the class TicTacToePosition.A position is considered to be a draw if all of the squares are full.We will see that checks for a won position are always made be- fore checks for a drawn position,so that the method drawnPosition does not need to make a redundant check for a won position.The method wonPosition is also simple;it uses a private helper method winCheck to test for all possible winning patterns in tic-tac-toe.The method positionEvaluation uses the following board features to assign a fitness value fromthe point of view of either player: The number of blank squares on the board If the position is won by either side If the center square is taken The method positionEvaluation is simple,and is a good place for the interested reader to start modifying both the tic-tac-toe and chess programs: public float positionEvaluation(Position p, boolean player) { int count = 0; TicTacToePosition pos = (TicTacToePosition)p; for (int i=0;i<9;i++) { if (pos.board[i] == 0) count++; } count = 10 - count; //prefer the center square: float base = 1.0f; if (pos.board[4] == TicTacToePosition.HUMAN && player) { base += 0.4f; } if (pos.board[4] == TicTacToePosition.PROGRAM && !player) { 32 2.5 Search and Game Playing base -= 0.4f; } float ret = (base - 1.0f); if (wonPosition(p,player)) { return base + (1.0f/count); } if (wonPosition(p,!player)) { return -(base + (1.0f/count)); } return ret; } The only other method that we will look at here is possibleMoves;the interested reader can look at the implementation of the other (very simple) methods in the source code.The method possibleMoves is called with a current position,and the side to move (i.e.,programor human): public Position [] possibleMoves(Position p, boolean player) { TicTacToePosition pos = (TicTacToePosition)p; int count = 0; for (int i=0;i<9;i++) { if (pos.board[i] == 0) count++; } if (count == 0) return null; Position [] ret = new Position[count]; count = 0; for (int i=0;i<9;i++) { if (pos.board[i] == 0) { TicTacToePosition pos2 = new TicTacToePosition(); for (int j=0;j<9;j++) pos2.board[j] = pos.board[j]; if (player) pos2.board[i] = 1; else pos2.board[i] = -1; ret[count++] = pos2; } } return ret; } It is very simple to generate possible moves:every blank square is a legal move. (This method will not be as straightforward in the example chess program!) 33 2 Search It is simple to compile and run the example tic-tac-toe program:change directory to src-search-game and type: javac * .java java TicTacToe When asked to enter moves,enter an integer between 0 and 8 for a square that is currently blank (i.e.,has a zero value).The following shows this labeling of squares on the tic-tac-toe board: 0 1 2 3 4 5 6 7 8 2.5.4 Chess Using the Alpha-Beta Search Algorithm Using the Java class framework of GameSearch,Position,and Move,it is rea- sonably easy to write a simple chess program by writing three new derived classes (see Figure 2.10) Chess (derived fromGameSearch),ChessMove (derived from Move),and ChessPosition (derived from Position).The chess program devel- oped in this section is intended to be an easy to understand example of using alpha- beta min-max search;as such,it ignores several details that a fully implemented chess programwould implement: Allow the computer to play either side (computer always plays black in this example). Allow en-passant pawn captures. Allow the player to take back a move after making a mistake. The reader is assumed to have read the last section on implementing the tic-tac- toe game;details of refining the GameSearch,Move,and Position classes are not repeated in this section. Figure 2.10 shows the UML class diagramfor both the general purpose GameSearch framework and the classes derived to implement chess specific data and behavior. The class ChessMove contains data for recording fromand to square indices: public class ChessMove extends Move { public int from; public int to; } 34 2.5 Search and Game Playing Move Position +from: int +to: int ChessMove +toString: String +BLANK: int +HUMAN: int +PROGRAM: int +P A WN: int +KNIGHT : int +BISHOP: int +ROOK: int +QUEEN: int +KING: int board: int[] ChessPosition -setControlData: void -calcPossibleMoves: int -calcPieceMoves: int +move: Move -computerControl: fl oat[] -humanControl: fl oat[] -possibleMoveList: Move[] -piece_moves: int[] -initialBoard: int[] -index: int[] -pieceMovementT able: int[] -value: int[] Chess Figure 2.10:UML class diagrams for game search engine and chess 35 2 Search 1 c4 b6 2 d4 B b7 Black increases the mobility of its pieces by fi anchettoing the queenside bishop: 8 rm0lkans 7 obopopop 6 0o0Z0Z0Z 5 Z0Z0Z0Z0 4 0ZPO0Z0Z 3 Z0Z0Z0Z0 2 PO0ZPOPO 1 SNAQJBMR a b c d e f g h Figure. The board is represented as an integer array with 120 elements.A chessboard only has 64 squares;the remaining board values are set to a special value of 7,which indicates an “off board” square.The initial board setup is defined statically in the Chess class and the off-board squares have a value of “7”: private static int [] initialBoard = { 7,7,7,7,7,7,7,7,7,7,7, 7,7,7,7,7,7,7,7,7,7,7, 4,2,3,5,9,3,2,4,7,7,//white pieces 1,1,1,1,1,1,1,1,7,7,//white pawns 0,0,0,0,0,0,0,0,7,7,//8 blank squares 0,0,0,0,0,0,0,0,7,7,//8 blank squares 0,0,0,0,0,0,0,0,7,7,//8 blank squares 0,0,0,0,0,0,0,0,7,7,//8 blank squares -1,-1,-1,-1,-1,-1,-1,-1,7,7,//black pawns -4,-2,-3,-5,-9,-3,-2,-4,7,7,//black pieces 7,7,7,7,7,7,7,7,7,7,7, 7,7,7,7,7,7,7,7,7,7,7 }; It is difficult to see fromthis listing of the board square values but in effect a regular chess board if padded on all sides with two rows and columns of “7” values. 36 2.5 Search and Game Playing 3 N f3 g6 4 B f4 B g7 5 N c3 Black (the computer) continues to increase piece mobility and control the center squares: 8 rm0lkZns 7 obopopap 6 0o0Z0ZpZ 5 Z0Z0Z0Z0 4 0ZPO0A0Z 3 Z0M0ZNZ0 2 PO0ZPOPO 1 S0ZQJBZR a b c d e f g h Figure 2.12:Continuing the first sample game:the computer is looking ahead two moves and no opening book is used. We see the start of a sample chess game in Figure 2.11 and the continuation of this same game in Figure 2.12.The lookahead is limited to 2 moves (4 ply). The class ChessPosition contains data for this representation and defines constant values for playing sides and piece types: public class ChessPosition extends Position { final static public int BLANK = 0; final static public int HUMAN = 1; final static public int PROGRAM = -1; final static public int PAWN = 1; final static public int KNIGHT = 2; final static public int BISHOP = 3; final static public int ROOK = 4; final static public int QUEEN = 5; final static public int KING = 6; int [] board = new int[120]; public String toString() { StringBuffer sb = new StringBuffer("["); for (int i=22;i<100;i++) { sb.append(""+board[i]+","); } sb.append("]"); return sb.toString(); } 37 2 Search } The class Chess also defines other static data.The following array is used to encode the values assigned to each piece type (e.g.,pawns are worth one point,knights and bishops are worth 3 points,etc.): private static int [] value = { 0,1,3,3,5,9,0,0,0,12 }; The following array is used to codify the possible incremental moves for pieces: private static int [] pieceMovementTable = { 0,-1,1,10,-10,0,-1,1,10,-10,-9,-11,9, 11,0,8,-8,12,-12,19,-19,21,-21,0,10,20, 0,0,0,0,0,0,0,0 }; The starting index into the pieceMovementTable array is calculated by indexing the following array with the piece type index (e.g.,pawns are piece type 1,knights are piece type 2,bishops are piece type 3,rooks are piece type 4,etc.: private static int [] index = { 0,12,15,10,1,6,0,0,0,6 }; When we implement the method possibleMoves for the class Chess,we will see that except for pawn moves,all other possible piece type moves are very easy to calculate using this static data.The method possibleMoves is simple because it uses a private helper method calcPieceMoves to do the real work.The method possibleMoves calculates all possible moves for a given board position and side to move by calling calcPieceMove for each square index that references a piece for the side to move. We need to perform similar actions for calculating possible moves and squares that are controlled by each side.In the first version of the class Chess that I wrote,I used a single method for calculating both possible move squares and controlled squares. However,the code was difficult to read,so I split this initial move generating method out into three methods: possibleMoves – required because this was an abstract method in Game- Search.This method calls calcPieceMoves for all squares containing pieces for the side to move,and collects all possible moves. 38 2.5 Search and Game Playing calcPieceMoves – responsible to calculating pawn moves and other piece type moves for a specified square index. setControlData – sets the global array computerControl and humanControl. This method is similar to a combination of possibleMoves and calcPiece- Moves,but takes into effect “moves” onto squares that belong to the same side for calculating the effect of one piece guarding another.This control data is used in the board position evaluation method positionEvaluation. We will discuss calcPieceMoves here,and leave it as an exercise to carefully read the similar method setControlData in the source code.This method places the cal- culated piece movement data in static storage (the array piece moves) to avoid creat- ing a new Java object whenever this method is called;method calcPieceMoves re- turns an integer count of the number of items placed in the static array piece moves. The method calcPieceMoves is called with a position and a square index;first,the piece type and side are determined for the square index: private int calcPieceMoves(ChessPosition pos, int square_index) { int [] b = pos.board; int piece = b[square_index]; int piece_type = piece; if (piece_type < 0) piece_type = -piece_type; int piece_index = index[piece_type]; int move_index = pieceMovementTable[piece_index]; if (piece < 0) side_index = -1; else side_index = 1; Then,a switch statement controls move generation for each type of chess piece (movement generation code is not shown – see the file Chess.java): switch (piece_type) { case ChessPosition.PAWN: break; case ChessPosition.KNIGHT: case ChessPosition.BISHOP: case ChessPosition.ROOK: case ChessPosition.KING: case ChessPosition.QUEEN: break; } The logic for pawn moves is a little complex but the implementation is simple.We start by checking for pawn captures of pieces of the opposite color.Then check for 39 2 Search initial pawn moves of two squares forward,and finally,normal pawn moves of one square forward.Generated possible moves are placed in the static array piece moves and a possible move count is incremented.The move logic for knights,bishops, rooks,queens,and kings is very simple since it is all table driven.First,we use the piece type as an index into the static array index;this value is then used as an index into the static array pieceMovementTable.There are two loops:an outer loop fetches the next piece movement delta from the pieceMovementTable array and the inner loop applies the piece movement delta set in the outer loop until the new square index is off the board or “runs into” a piece on the same side.Note that for kings and knights,the inner loop is only executed one time per iteration through the outer loop: move_index = piece; if (move_index < 0) move_index = -move_index; move_index = index[move_index]; //System.out.println("move_index="+move_index); next_square = square_index + pieceMovementTable[move_index]; outer: while (true) { inner: while (true) { if (next_square > 99) break inner; if (next_square < 22) break inner; if (b[next_square] == 7) break inner; //check for piece on the same side: if (side_index < 0 && b[next_square] < 0) break inner; if (side_index >0 && b[next_square] > 0) break inner; piece_moves[count++] = next_square; if (b[next_square]!= 0) break inner; if (piece_type == ChessPosition.KNIGHT) break inner; if (piece_type == ChessPosition.KING) break inner; next_square += pieceMovementTable[move_index]; } move_index += 1; if (pieceMovementTable[move_index] == 0) break outer; next_square = square_index + 40 2.5 Search and Game Playing 1 d4 e6 2 e4 Q h4 Black (the computer) increases the mobility of its pieces by bringing out the queen early but we will see that this soon gets black in trouble. 8 rmbZkans 7 opopZpop 6 0Z0ZpZ0Z 5 Z0Z0Z0Z0 4 0Z0OPZ0l 3 Z0Z0Z0Z0 2 POPZ0OPO 1 SNAQJBMR a b c d e f g h Figure 2.13:Second game with a 2 1/2 move lookahead. pieceMovementTable[move_index]; } Figure 2.13 shows the start of a second example game.The computer was making too many trivial mistakes in the first game so here I increased the lookahead to 2 1/2 moves.Now the computer takes one to two seconds per move and plays a better game.Increasing the lookahead to 3 full moves yields a better game but then the programcan take up to about ten seconds per move. The method setControlData is very similar to this method;I leave it as an exercise to the reader to read through the source code.Method setControlData differs in also considering moves that protect pieces of the same color;calculated square control data is stored in the static arrays computerControl and humanControl. This square control data is used in the method positionEvaluation that assigns a numerical rating to a specified chessboard position on either the computer or human side.The following aspects of a chessboard position are used for the evaluation: material count (pawns count 1 point,knights and bishops 3 points,etc.) count of which squares are controlled by each side extra credit for control of the center of the board credit for attacked enemy pieces Notice that the evaluation is calculated initially assuming the computer’s side to move.If the position if evaluated from the human player’s perspective,the evalua- 41 2 Search 3 N c3 N f6 4 B d3 B b4 5 N f3 Q h5 Black continues to develop pieces and puts pressure on the pawn on E4 but the vulnerable queen makes this a weak position for black: 8 rmbZkZ0s 7 opopZpop 6 0Z0Zpm0Z 5 Z0Z0Z0Zq 4 0a0OPZ0Z 3 Z0MBZNZ0 2 POPZ0OPO 1 S0AQJ0ZR a b c d e f g h Figure 2.14:Continuing the second game with a two and a half move lookahead. We will add more heuristics to the static evaluation method to reduce the value of moving the queen early in the game. tion value is multiplied by minus one.The implementation of positionEvaluation is: public float positionEvaluation(Position p, boolean player) { ChessPosition pos = (ChessPosition)p; int [] b = pos.board; float ret = 0.0f; //adjust for material: for (int i=22;i<100;i++) { if (b[i]!= 0 && b[i]!= 7) ret += b[i]; } //adjust for positional advantages: setControlData(pos); int control = 0; for (int i=22;i<100;i++) { control += humanControl[i]; control -= computerControl[i]; } //Count center squares extra: control += humanControl[55] - computerControl[55]; control += humanControl[56] - computerControl[56]; control += humanControl[65] - computerControl[65]; 42 2.5 Search and Game Playing control += humanControl[66] - computerControl[66]; control/= 10.0f; ret += control; //credit for attacked pieces: for (int i=22;i<100;i++) { if (b[i] == 0 || b[i] == 7) continue; if (b[i] < 0) { if (humanControl[i] > computerControl[i]) { ret += 0.9f * value[-b[i]]; } } if (b[i] > 0) { if (humanControl[i] < computerControl[i]) { ret -= 0.9f * value[b[i]]; } } } //adjust if computer side to move: if (!player) ret = -ret; return ret; } It is simple to compile and run the example chess programby changing directory to src-search-game and typing: javac * .java java Chess When asked to enter moves,enter string like “d2d4” to enter a move in chess alge- braic notation.Here is sample output fromthe program: Board position: BR BN BB.BK BB BN BR BP BP BP BP.BP BP BP ..BP BQ. .... .WP.. ...WN. WP WP WP.WP WP WP WP WR WN WB WQ WK WB.WR 43 2 Search Your move: c2c4 Class.method name %of total runtime %in this method Chess.main 97.7 0.0 GameSearch.playGame 96.5 0.0 GameSearch.alphaBeta 82.6 0.0 GameSearch.alphaBetaHelper 82.6 0.0 Chess.positionEvaluate 42.9 13.9 Chess.setControlData 29.1 29.1 Chess.possibleMoves 23.2 11.3 Chess.calcPossibleMoves 1.7 0.8 Chess.calcPieceMoves 1.7 0.8 Table 2.1:Runtimes by Method for Chess Program The example chess program plays in general good moves,but its play could be greatly enhanced with an “opening book” of common chess opening move sequences. If you run the example chess program,depending on the speed of your computer and your Java runtime system,the program takes a while to move (about 5 seconds per move on my PC).Where is the time spent in the chess program?Table 2.1 shows the total runtime (i.e.,time for a method and recursively all called methods) and method-only time for the most time consuming methods.Methods that show zero percent method only time used less than 0.1 percent of the time so they print as zero values. The interested reader is encouraged to choose a simple two-player game,and using the game search class framework,implement your own game-playing program. 44 3 Reasoning Reasoning is a broad topic.In this chapter we will concentrate on the use of the PowerLoom descriptive logic reasoning system.PowerLoom is available with a Java runtime and Java API – this is what I will use for the examples in this chapter. PowerLoomcan also be used with JRuby.PowerLoomis available in Common Lisp and C++ versions. Additionally,we will look briefly at different kinds of reasoning systems in Chapter 4 on the Semantic Web. While the material in this chapter will get you started with development using a powerful reasoning system and embedding this reasoning system in Java applic
https://www.techylib.com/el/view/periodicdolls/practical_arti%EF%AC%81cial_intelligence_programming_with_java
CC-MAIN-2018-34
refinedweb
13,435
52.49
Member Since 10 Months Ago Edit Only 1 Row On Laravel For Each Loop I am using livewire to inline edit the table data. @foreach($tags as $index => $tag) <form> <input type="hidden" name="id" wire: <tr class="row d-flex flex-row justify-content-between"> <td class="border-0 namePart"> @if($editable !== true) {{$tag['name']}} @else <input type="text" wire:model. @error('editName') <span class="invalid-feedback d-block">{{ $message }}</span> @enderror @endif </td> Everything works fine. The only problem is when editing every row looks the same but after i hit save button it works. Does anyone have some suggestion
https://laracasts.com/@Sadon
CC-MAIN-2021-25
refinedweb
102
57.27
In this tutorial, we will learn about a new sorting algorithm known as merge sort that has a far better performance than the sorting algorithms we have seen so far. Previously, we have looked upon simpler sorting algorithms including selection, bubble and insertion sort which were slower as compared to merge sort. This sorting algorithm uses the basic logic of recursion and uses it to effectively sort a list in an incredibly faster manner. Understanding Merge Sort Let’s see how to sort a list of integers given to us in a form of an array. Suppose we have an array of 7 integers named X with unordered numbers. Our aim is to sort the following numbers in increasing order of values by using a merge sort algorithm. Now the first thing we will do is to divide this list in two halves. If the list has a total number of elements equal to an even number then the list can will be split into exact two halves. However, if the list has a total number of elements equal to an odd number then you can choose which side to include the middle number in. This will result in one side having more elements than the other. For our array we have 7 elements hence we will not have two equal sides. We can either split the array by placing the middle number which is 2 in our case, in the first half or in the second half. In either cases, one part of the array will have higher number of elements than the other. Supposing we are splitting our array as shown below: We have split the array ‘X’ into two arrays: ‘L’ denoting the left array and ‘R’ denoting the right array. Our approach will be to first sort these individual arrays “L’ and ‘R’ and then merge them together in the original array ‘X’ in a sorted order. Notice that all the elements in X are either present in L or R. Therefore we can start overwriting X from right to left. We will start at from index 0 in X. At any point, the smallest element will be either the smallest unpicked in L or the smallest unpicked in R. The figure below shows the smallest unpicked in both the sub arrays L and R in white. In ‘L’ the smallest element is 1. In ‘R’ the smallest element is 3. Now out of these two, the smallest is 1. This suggests 1 belongs to index 0 of X. Therefore, we will write 1 at index 0. Now we will look for the second smallest number in both the subarrays and so on and will write their values at the appropriate positions in X. Pseudocode for Merge logic Lets write a pseudocode to merge the two sorted arrays L and R into X. We will create a function called Merge() which will take in three arguments. The arguments will be L denoting the left subarray, R denoting the right subarray and X denoting our array in which the other two arrays will be merged. Merge(L,R,X){ } Then we will create two variables one that will store the elements in L and another that will store the elements in R. These could also be used as arguments for our Merge() function. Merge(L,R,X) { nL=length(L) nR=length(R) Next we will define three more variables called i, j, and k and set them to zero initially. Here ‘i’ denotes the index of the smallest unpicked in L, ‘j’ denotes the index of the smallest unpicked in R, and ‘k’ denotes the index of the position that needs to be filled in X. Merge(L,R,X) { nL=length(L) nR=length(R) i=j=k=0 Lets show the stage where we had filled the smallest element at index 0 in X and were looking for the element to be placed at index 1, the ‘i’, ‘j’ and ‘k’ will be as follows: i =1, j=0 and k=1. This is because the first element has been filled at this stage. The green coloured cell denotes the picked element that was the smallest of the two un picked elements now placed at index 0 in X. For i and j to be valid, they should be less than the number of elements in their respective arrays. Hence we will add the following while statement to our code: Merge(L,R,X) { nL=length(L) nR=length(R) i=j=k=0 while(i<nL && j<nR) Now inside the while statement, if L[i] is less than or equal to R[j], then for the array X at its k-th position we will overwrite it with L[i]. Here what we did was we compared the smallest unpicked element in the sub array L with that of the sub array R. If it was indeed smaller then that element will be placed at the k-th position in X. Additionally, we will increment both k and i to move to the next element. Merge(L,R,X) { nL=length(L) nR=length(R) i=j=k=0 while(i<nL && j<nR) { if (L[i]<=R[j]) { X[k]=L[i] k=k+1 i=i+1 } If this condition is not true in the case that R[j] is smaller than L[i] then we will overwrite the k-th position of array X with R[j]. Additionally, we will increment both k and j to move to the next element. Merge(L,R,X) { nL=length(L) nR=length(R) i=j=k=0 while(i<nL && j<nR) { if (L[i]<=R[j]) { X[k]=L[i] k=k+1 i=i+1 } else { X[k]=R[j] k=k+1 j=j+1 } Testing the Merge Logic Let us demonstrate how the following pseudocode will work using our example array X. Look at this stage shown below: Both i and j are valid numbers hence we will enter the while loop and check whether L[i] <= R[j] or not. L[i] in this case is 2 and R[j] is 3. 2 is lesser than 3 hence X[k] will now be 2. Moreover both k and i will get incremented by 1 as shown below. k moves to index 2 and i moves to index 2 as well. Now again we will check if L[i] <= R[j] or not. In this case, L[i]=4 and R[j] =3. As R[j] < L[i] we will move in the else statements and thus X[k] will be overwritten by 3. Increment both k and j to move ahead. Again we will check if L[i] <= R[j] or not. In this case, L[i] = 4 and R[j] =6. As L[i] < then R[j], hence X[k] will be overwritten by L[i]. Now in array X at index 3 we have the placed the element 4. Increment both i and k. Again we will check if L[i] <= R[j] or not. In this case, L[i] = 3 and R[j] =6. As L[i] < then R[j], hence X[k] will be overwritten by L[i]. Now in array X at index 4 we have the placed the element 5. Increment both i and k. Modifying the Merge pseudocode As you may notice now at this stage i=3, j=1 and k=5. We have used all the elements from the sub array L and now it is at index 3 which is not valid inside our while condition. The condition where i<nL is not valid any more hence we will now pick elements from the other array to fill the remaining positions of X. Therefore we will have to include two additional while statements in our program to cater to this scenario where one of the subarrays gets exhausted. This is shown below: Merge(L,R,X) { nL=length(L) nR=length(R) i=j=k=0 while(i<nL && j<nR) { if (L[i]<=R[j]) { X[k]=L[i] k=k+1 i=i+1 } else { X[k]=R[j] k=k+1 j=j+1 } while(i<nl) { X[k]=L[i] i=i+1 k=k+1 } while(j<nR) {X[k]=R[j] j=j+1 k=k+1 } Once we are out of the first while loop then out of the two other while loops only one will execute. This is because out of the two sub arrays only one will have elements left. Now for our particular example the third while loop where j<nR will execute. Thus the remaining positions in X will be filled appropriately. Now we have all elements in the array X in a sorted arrangement. Sorting the Sub arrays As you may remember we split the array ‘X’ into two sub arrays: ‘L’ denoting the left array and ‘R’ denoting the right array. Then we said that we will first sort these individual arrays “L’ and ‘R’ and then merge them together in the original array ‘X’ in a sorted order. Lets see how to sort the subarrays L and R. We will further divide the subarrays L and R. Array L consists of 4 elements hence we will divide it into two halves. Whereas array R consists of 3 elements so we will also divide it into two parts where one will have more elements than the other. Now we will try to sort the individual sub arrays of L and merge them back in L. Likewise, we will sort the individual sub arrays of R and merge them back in R. Now these sub arrays can also be further divided into arrays consisting of single elements. This way we are creating a recursive pattern to solve the problem. Once we sort the individual sub arrays we will be able to merge them back to the original array. This reduction of a sub array will be continued till we are left with a single element in the sub array. The recursion will end at this point. A list with only one element is always sorted. Thus, now at this stage we will start merging the last sub arrays. The sub array 4,1 after merging will be 1,4 The subarray 5,2 after merging will be 2,5. At this stage, we can merge 1,4 and 2,5 as 1,2,4,5 in array L as shown below. Array L is now sorted. Lets move to the next single element subarrays and merge them back. The subarray 7,3 after merging will be 3,7 whereas 6 will simply be merged as it is. At this stage, we can merge 3,7 and 6 as 3,6,7 in array R as shown below. Array R is now sorted. Now these two sorted sub arrays L and R can be merged back to the original array X. Array X is now sorted. Pseudocode for Merge Sort Let us show you the pseudocode for how we sorted array X using merge sort algorithm. MergeSort(X) { n=length(X) if (n<2) return middle=n/2 L= array of size(middle) R= array of size(n-middle) for i=0 to (middle-1) L[i]=X[i] for i=middle to (n-1) R[i-middle]=X[i] MergeSort(L) MergeSort(R) Merge(L,R,X) } The MergeSort() function takes in a single argument that will be the array X. Inside the function, we will define a variable called ‘n’ that will store the number of elements in the array X. Then we will create a condition to divide the array into subarrays if ‘n’ is greater than 1. However, if ‘n’ is less than 2 then we will return. This is because this suggests that the array has only one element and is therefore sorted. Otherwise, we will find the middle position in the array and divide it accordingly. The array will therefore be divided into two parts. The first sub array will be of size middle and the second sub array will be of size (n-middle). middle=n/2 L= array of size(middle) R= array of size(n-middle) Next, we will run a for loop from i=0 till (mid-1) to fill the sub array L with elements till middle. for i=0 to (middle-1) L[i]=X[i] Likewise, we will run another for loop from i=middle to (n-1) to fill the remaining elements in the array X to the sub array R. for i=middle to (n-1) R[i-middle]=X[i] At this stage the left and right subarrays have been created and filled with elements from array X. Now, we will make a recursive call to sort the left sub array and the right sub array. MergeSort(L) MergeSort(R) After both of the sub arrays L and R have been sorted, we will call Merge() function which we had previously looked upon in detail. This function will merge the sorted left and right sub arrays back to the original array X. Merge(L,R,X) Demonstrating Merge Sort Pseudocode using example array Let us show you what happens to the our array of unsorted integers when this MergeSort() function is called. We will use the same example array that we have been using previously to understand the different concepts. We have passed this array X to the MergeSort() function. Now first the number of elements in the array will be calculated. In our case ‘n’ is 7. As it is greater than 2 hence we will proceed further. We will be creating two sub arrays L and R. These will be filled up with elements from array X. Now we will make a recursive call where the function is calling itself. First the merge sort function will execute on the left sub array of 4 elements: 4,1,5,2. We will again go to the beginning of the function and find the number of elements for this particular array. As they are greater than 2 hence we will move forward. We will again create further left and right sub arrays. There will be another recursive call. In that case the second MergeSort() function will get paused. The recursive calls will go on until we are left with a single element in the sub array. Now the array with elements 4,1 will be further split into sub arrays 4 and 1. The MergeSort() function will return and the Merge() function will be called. The control will return back to 4,1,5,2. This will call the second MergeSort() function. It will first make a call for this sub array with just one element and once this is done, it will make another recursive call. Then we will call the Merge() function for 5,2. Now control will go back to 4,1,5,2 and Merge will be called. After 1,2,4,5 will finish, the control will return to the original array. Now another recursive call will be made. This time to the second MergeSort() with R as an argument inside it. For the sub array R consisting of 7,3,6 we will have a recursive call passing 7,3 that will again cause a recursive call with just one element 7. Then we will have a call to 3 which will return. Once 3,7 returns, we will have a call for 6. As it is a single element hence it will return. At this stage 3,7 and 6 will merge and form 3,6,7 in the sub array R. Now both the sub arrays L and R are sorted. When execution for 3,6,7 will finish we will call Merge on the original array X. This will merge the two sorted sub arrays to X. This is how we used Merge Sort Algorithm to sort the list of integers in increasing order of values. Merge Sort Example Code in C #include <stdio.h> void merge(int arr[], int begin, int mid, int end) { int lenght_1 = mid - begin + 1; int length_2 = end - mid; int left_arr[lenght_1], right_arr[length_2]; for (int i = 0; i < lenght_1; i++) left_arr[i] = arr[begin + i]; for (int j = 0; j < length_2; j++) right_arr[j] = arr[mid + 1 + j]; int i, j, k; i = 0; j = 0; k = begin; while (i < lenght_1 && j < length_2) { if (left_arr[i] <= right_arr[j]) { arr[k] = left_arr[i]; i++; } else { arr[k] = right_arr[j]; j++; } k++; } while (i < lenght_1) { arr[k] = left_arr[i]; i++; k++; } while (j < length_2) { arr[k] = right_arr[j]; j++; k++; } } void mergeSort(int arr[], int begin, int end) { if (begin < end) { int mid = begin + (end - begin) / 2; mergeSort(arr, begin, mid); mergeSort(arr, mid + 1, end); merge(arr, begin, mid, end); } } void print_arr(int arr[], int size) { for (int i = 0; i < size; i++) printf("%d ", arr[i]); printf("\n"); } int main() { int arr[] = {11, 2, 5, 7, 9, 1}; int size = sizeof(arr) / sizeof(arr[0]); printf("Original array\n"); print_arr(arr, size); mergeSort(arr, 0, size - 1); printf("Sorted array\n"); print_arr(arr, size); } Original array 11 2 5 7 9 1 Sorted array 1 2 5 7 9 11 Other sorting Algorithms:
https://csgeekshub.com/c-programming/merge-sort-algorithm/
CC-MAIN-2021-49
refinedweb
2,909
70.53
Summary: shows a dictionary by the name rank that stores the names of individuals as the keys while their corresponding ranks represent the values. We shall be using this example as a reference while discussing the solutions. rank = { 'Bob': 2, 'Alice': 4, 'Sharon': 5, 'Dwyane': 1, 'John': 3 } # Some Procedure to Sort the Dictionary by its Values Output: {'Dwyane': 1, 'Bob': 2, 'John': 3, 'Alice': 4, 'Sharon': 5} Before we dive into the solutions here are a few Points To Remember about dictionaries: - From Python 3.7 onwards, Python dictionaries are ordered (Insertion is also ordered). This means the ordered in which the keys are inserted into dictionaries are preserved. - In normal scenario sorting dictionaries, results in a sorted dictionary with respect to its keys. To get a better insight into Python dictionaries, please follow our Blog Tutorial on dictionaries here. However, the purpose of this article is purely to guide you through the numerous methods to sort a dictionary based on the values instead of the keys. So without further delay let us dive into the solutions. ➤ Here’s a quick overview of all the methods used in this article. Please follow the slide show give below: Method 1: Using The sorted(dict1, key=dict1.get) Method The sorted() method is an in-built method in Python which is used to sort the elements of an iterable in a specific order (ascending or descending). After sorting the elements it returns the sorted sequence, in the form of a sorted list. Syntax: In order to sort the dictionary using the values, we can leverage the power of the sorted() function. To sort the dictionary based on the value, we can use the get() method and pass it to the key argument of the sorted() function. Let us have a look at the following code to understand the usage of sorted function in order to solve our problem: rank = { 'Bob': 2, 'Alice': 4, 'Sharon': 5, 'Dwyane': 1, 'John': 3 } for w in sorted(rank, key=rank.get): print(w, rank[w]) Output: Dwyane 1 Bob 2 John 3 Alice 4 Sharon 5 In order to sort the dictionary using its values in reverse order, we need to specify the reverse argument as true, i.e., rank = { 'Bob': 2, 'Alice': 4, 'Sharon': 5, 'Dwyane': 1, 'John': 3 } for w in sorted(rank, key=rank.get, reverse=True): print(w, rank[w]) Output: Sharon 5 Alice 4 John 3 Bob 2 Dwyane 1 Method 2: Using Dictionary Comprehension And Lambda With sorted() Method If you are using Python 3.6 and above then our problem can be solved in a single line using a dictionary comprehension and a lambda function within the sorted method. This is an efficient and concise solution to sort dictionaries based on their values. ⦿. To learn more about dictionary comprehensions in python have a look at our blog tutorial here. Now, let us have a look at the following code given below that explains the usage of dictionary comprehensions to solve our problem in a single lie of code. rank = { 'Bob': 2, 'Alice': 4, 'Sharon': 5, 'Dwyane': 1, 'John': 3 } print({k: v for k, v in sorted(rank.items(), key=lambda item: item[1])}) Output: {'Dwyane': 1, 'Bob': 2, 'John': 3, 'Alice': 4, 'Sharon': 5} Method 3: Using OrderedDict (For Older Versions Of Python) Dictionaries are generally unordered for versions prior to Python 3.7, so it is not possible to sort a dictionary directly. Therefore, to overcome this constraint we need to use the OrderedDict subclass. ⦿ An OrderedDict is a dictionary subclass that preserves the order in which key-values are inserted in a dictionary. It is included in the collections module in Python. Let us have a look at how we can use OrderedDict to order dictionaries in earlier versions of Python and sort them. from collections import OrderedDict rank = { 'Bob': 2, 'Alice': 4, 'Sharon': 5, 'Dwyane': 1, 'John': 3 } a = OrderedDict(sorted(rank.items(), key=lambda x: x[1])) for key,value in a.items(): print(key, value) Output: ('Dwyane', 1) ('Bob', 2) ('John', 3) ('Alice', 4) ('Sharon', 5) Method 4: Using itemgetter() With The sorted() Method itemgetter() is a built-in function of the operator module that constructs a callable that accepts an iterable like a list, tuple, set, etc as input and fetches the nth element out of it. Example: from operator import itemgetter rank = { 'Bob': 2, 'Alice': 4, 'Sharon': 5, 'Dwyane': 1, 'John': 3 } a = sorted(rank.items(), key=itemgetter(1)) print(dict(a)) Output: {'Dwyane': 1, 'Bob': 2, 'John': 3, 'Alice': 4, 'Sharon': 5} In the above example a stores a list of tuples. Therefore we converted it to a dictionary explicitly, while printing. Method 5: Using Counter Counter is a dictionary subclass that is used for counting hashable objects. Since the values we are using are integers, we can use the Counter class to sort them. The Counter class has to be imported from the collections module. Disclaimer: This is a work-around for the problem at hand and might not fit in every situation. Since, values of the dictionary in our case are integers, the Counter subclass fits as a solution to our problem. Consider it as a bonus trick!? from collections import Counter rank = { 'Bob': 2, 'Alice': 4, 'Sharon': 5, 'Dwyane': 1, 'John': 3 } count = dict(Counter(rank).most_common()) print(count) Output: {'Dwyane': 1, 'Bob': 2, 'John': 3, 'Alice': 4, 'Sharon': 5} ⦿ In the above program the method most_common() has been used to return all elements in the Counter. Conclusion In this we learned about the following methods to sort a dictionary by values in Python: - Using The sorted(dict1, key=dict1.get) Method. - Using Dictionary Comprehension And Lambda With sorted() Method. - Using OrderedDict (For Older Versions Of Python). - Using itemgetter() with sorted() Method. - Using Counter subclass. I hope after reading this article you can sort dictionaries by their values with ease. Please subscribe and stay tuned for more interesting articles!
https://blog.finxter.com/how-to-sort-a-dictionary-by-value-in-python/
CC-MAIN-2022-21
refinedweb
993
61.16
Have you ever experienced this error in SQL Server 2005 and under the same circumstances, but using SQL Server 2000 instead, you never "suffered" this problem? This issue was recently raised to my attention. And its resolution I found it of general interest, so I've just decided to publicly explain it in this forum. I'll try to explain what has changed in SQL Server 2005 when it comes to column binding, and will also try to explain why the changes were introduced. First of all, let's prepare for the laboratory for our experiments: And these, the different queries he tried to run and the results of every execution: So, I began by attaching a debugger to my test instance of SQL Server and enabling the debugger to stop the execution of the debuggee when an exception was thrown (sxe eh). I ran one of the queries that produced the exception, and since I had the private symbols for sqlservr.exe, from the stack I could get the name of the function from where we were throwing the 1013. (sqlservr!CNameSpace::CheckDuplicateTables). With that, I went to read the source code of that function and found that it took into account so many details and handled so many subtle variations (like behaving differently depending on the database compatibility level for the current context) that it wasn't possible so many subtle differences would have been the result of an accident. It definitively, couldn't be a regression. Therefore, I decided it was a good idea to read through the functional specifications of SQL Server 2005's algebrizer, especially the part where it describes how column binding should work. And, as already expected it was working the way it had been designed to work, as it had been written in the functional specs. SQL 92 describes column binding algorithm in 6.4 , "syntax rules", and it shows subtle differences between prefixed and non-prefixed column names. Non-prefixed column names are searched first in the nearest scope and, if not found, in the next enclosing scope, etc. Until (1) the column name is found, or (2) the scopes are exhausted, or (3) an ambiguity is found (i.e. in the scope currently being searched, there are two or more columns with the name indicated - typically, they would come from different tables). (1) means the binding succeeded, (2) and (3) indicate a failure. Note that scopes are usually created by nested subqueries. If a prefix is specified (say "t.c"), the standard says that we should first search for the table indicated by the prefix ("t"). The search is performed in the same bottom-up manner described earlier. If the table is found, we proceed searching for the column ("c") in that table. Only if both the table and the column are found, the binding is successful. Usually, ambiguity is impossible in this scenario, because FROM clause issues an error if the user specifies two tables with the same exposed name, and CREATE TABLE statement does not allow columns with duplicate names. Example: t1 has columns a, b t2 has columns a, c SELECT * FROM t1 as t WHERE EXISTS (SELECT * FROM t2 as t WHERE t.b = c) t.b will not bind successfully. Table “t” will be found in the nearest scope and bound to t2. t2 does not have a column named "b". We will not proceed to the next scope to search for a "better" t, because the table was found in the nearest scope. SQL Server 2000's notion of a "column prefix" is different from the standard. SQL Server 2000 allows "multi-part" prefix (such as dbo.t.c1). SQL Server 2000 also differs from the standard in the way prefix is matched with a table in the FROM clause. Standard allows multi-part table names in the FROM clause (not in a column prefix). Each table, according to the standard, has an "exposed name" which is the last part of the multi-part table name (if there is no alias), or alias name when present. Column prefix matching with a table is performed as a textual match with the exposed name. SQL Server 2000 treatment of the column prefix is different. A prefix "p" is matched with a table as follows: In many cases, the result of the above algorithm is actually the same as the standard behavior. For example, in the following statement: SELECT t.x FROM dbo.t prefix "t" will be successfully bound to the table dbo.t (provided that the current user is dbo), thus creating the illusion that we only look at the "exposed name" as the standard specifies. However, some differences are visible: (1) SELECT t.x FROM db1.dbo.t In this case, the binding will only be successful if "db1" is the current database. (2) SELECT * FROM t WHERE EXISTS (SELECT * FROM dbo.t WHERE t.x = 5) Here, t.x will bind to the outer t (textual match in step 2), which is in stark contrast with the standard that states that we should bind to the nearest scope based on the exposed name match. The standard specifies that the FROM clause cannot have two tables with the same exposed name. This ensures that the table binding for a prefixed column is never ambiguous. In fact, the no-duplicates condition is even stronger than is necessary just to ensure no ambiguity. Indeed, we can even allow duplicate exposed names as long as these names are never actually referenced (say, all columns from these tables are referenced in a non-prefixed form). However, the standard (and SQL Server 2000) decided to enforce this stronger condition (probably because it makes SQL cleaner and more maintainable). Since SQL Server 2000 definition of an "exposed name" is different from the standard, the definition of a "duplicate table" is correspondingly different. Essentially, a table is a duplicate of another if its full name (or an alias, if present), used as a column prefix in the above algorithm, would match the other table. (3) SELECT t.x FROM db1.dbo.t, db2.dbo.t The two tables in the FROM clause are not considered duplicates, because neither their full names nor dbid/objid pairs match. To what table the column "t.x" binds actually depends on the current database. So this new function, only introduced in SQL Server 2005, which I mentioned earlier, is implemented as part of SQL Server 2005's algebrizer and it checks that a new table added to a namespace is not a duplicate of some existing table. The definition of what exactly a "duplicate" is depends on the particular prefix-matching algorithm that we are using. For the standard (exposed-name-based) prefix matching, the definition of a duplicate is clear: it's a table with the same exposed name. However, for the SQL Server 2000-style two-pass prefix matching, the definition of a duplicate is more complicated. The goal of the duplicate checking is to prevent ambiguities, so a strict definition that tables T and P are duplicates is: There exists a column prefix X that matches both tables T and P. This definition is not very practical because it does not specify how to find X, so we have to come up with a particular algorithm that would be equivalent to that definition. And this is what this method implements. Hope you find it useful understanding how column binding works. PingBack from What if the 2 databases with 100 and 90 compatibility mode have same table on inner joining them gives an error: Msg 1013, Level 16, State 1, Line 3 The objects "c***.dbo.p***" and "S***.dbo.P***" in the FROM clause have the same exposed names. Use correlation names to distinguish them. We cannot change the query as its coming from the users and the joins are already defined. Note: we have inner joins and no derived tables/sub queries as such. Any help will be greatly appreciated. MAK
http://blogs.msdn.com/b/ialonso/archive/2007/12/21/msg-1013-the-object-s-and-s-in-the-from-clause-have-the-same-exposed-names-use-correlation-names-to-distinguish-them.aspx
CC-MAIN-2014-35
refinedweb
1,340
62.78
PyQT custom widget fixed as square I'm developing a custom widget (inheriting from QWidget) to use as a control. How can I fix the aspect-ratio of the widget to be square, but still allow it to be resized by the layout manager when both vertical and horizontal space allows? I know that I can set the viewport of the QPainter so that it only draws in a central square area, but that still allows the user to click either side of the drawn area. Answers Apparently, my old approach had many flaws and some of the things in it are meaningless: - QSizePolicy.setHeightForWidth and QSizePolicy.setWidthForHeight are exclusive... and setWidthForHeight doesn't even work in most cases... - There is no such thing as QWidget.widthForHeight, so defining it doesn't really override anything. It seems like there is no universal way to keep a widget square under all circumstances. You must choose one: - Make its height depend on its width: class MyWidget(QWidget): def __init__(self, parent=None): QWidget.__init__(self, parent) policy = QSizePolicy(QSizePolicy.Preferred, QSizePolicy.Preferred) policy.setHeightForWidth(True) self.setSizePolicy(policy) ... def heightForWidth(self, width): return width ... - Make its minimal width depend on its height: class MyWidget(QWidget): def __init__(self, parent=None): QWidget.__init__(self, parent) self.setSizePolicy(QSizePolicy.Preferred, QSizePolicy.Preferred) ... def resizeEvent(self, e): setMinimumWidth(height()) ... Such a widget will be kept square as long as there is such a possibility. For other cases you should indeed consider changing the viewport, as you mentioned. Mouse events shouldn't be that much of a problem, just find the center of the widget (divide dimensions by 2), find min(width, height) and go from there. You should be able to validate the mouse events by coordinate. It is nice to call QMouseEvent.accept, only if the event passed the validation and you used the event. I'd go with BlaXpirit's method, but here's an alternative that I've used before. If you subclass the custom widget's resiseEvent() you can adjust the requested size to make it a square and then set the widget's size manually. import sys from PyQt4 import QtCore, QtGui class CustomWidget(QtGui.QFrame): def __init__(self, parent=None): QtGui.QFrame.__init__(self, parent) # Give the frame a border so that we can see it. self.setFrameStyle(1) layout = QtGui.QVBoxLayout() self.label = QtGui.QLabel('Test') layout.addWidget(self.label) self.setLayout(layout) def resizeEvent(self, event): # Create a square base size of 10x10 and scale it to the new size # maintaining aspect ratio. new_size = QtCore.QSize(10, 10) new_size.scale(event.size(), QtCore.Qt.KeepAspectRatio) self.resize(new_size) class MainWidget(QtGui.QWidget): def __init__(self, parent=None): QtGui.QWidget.__init__(self, parent) layout = QtGui.QVBoxLayout() self.custom_widget = CustomWidget() layout.addWidget(self.custom_widget) self.setLayout(layout) app = QtGui.QApplication(sys.argv) window = MainWidget() window.show() sys.exit(app.exec_()) Need Your Help Is there any way to control where the output file of a T4 is generated to? c# silverlight code-generation t4I would like to generate C# code for silverlight but I dont have access to some dll's that would make my T4 code more powerful. Is there anyway to have my T4 template in a C# Class Library and have...
http://www.brokencontrollers.com/faq/11008140.shtml
CC-MAIN-2019-26
refinedweb
545
51.24
Code. Collaborate. Organize. No Limits. Try it Today. In Microsoft's vision, the next generation of distributed systems will communicate with WebServices. WebServices are great when it comes to integrate heterogeneous loosely coupled systems, but they have their limitations too: they have no support for remote object references. In practice, they are stateless and closer to a remote method call than to a distributed object system. Furthermore, SOAP and XML are by no means a compressed format and tend to be quite verbose. .NET and J2EE are two similar but disjointed worlds: they currently can interact together only using WebServices. Both platforms offer great mechanisms for building tightly coupled distributed object systems: .NET's Remoting and Java's RMI, but sadly these rely on incompatible standards. Luckily, .NET's remoting is highly configurable: a different formatter for the serialization and deserialization of the objects together with a different transport channel can be easily provided. This article shows how the .NET and J2EE platforms can tightly interoperate together, as it is often needed when developing distributed enterprise applications. For this purpose, we use an open-source custom remoting channel called IIOP.NET. IIOP.NET is a .NET remoting channel based on the IIOP protocol. IIOP is the protocol defined by the CORBA standard, the same used by Java's RMI/IIOP. IIOP.NET acts as an ORB (a CORBA object request broker); it converts .NET's type system to CORBA's type system and back, making the objects defined in your application accessible to other ORBs. RMI/IIOP implements a subset of the ORB functionalities (due to some limitations in Java's type system) and provides the same features as IIOP.NET for the J2EE platform. Using IIOP.NET is almost as simple as using the built-in remoting. The following example will show you how to access a .NET service from Java using IIOP.NET. IIOP.NET is an open-source project hosted on sourceforge (). It was developed by Dominic Ullmann as part of his diploma thesis at ETH-Z; further work is now sponsored by his current employer (). Not surprisingly, IIOP.NET is not the only software you can use for this purpose. First, the open-source project Remoting.Corba is quite similar in its goals, but has no generator for creating the IDL from a dll and currently does not support CORBA's valuetypes; second, Janeva from Borland promises to do the same, but is neither free nor available yet (should be released in summer '03). We choose IIOP.NET because it is free, currently available, and has a tool to generate the IDL automatically. Your problem: you just implemented a great service using .NET, but your customer insists on using a Java client; You cannot use WebServices, because the client software needs to keep references to the single objects on the server: this is just not possible using WebServices unless you implement your own naming service, lease manager, and distributed garbage collection. A more appropriate approach is to use RMI/IIOP on the Java side and IIOP.NET on the .NET side. This article is constructed around a non-trivial example (the GenericCollection tutorial in the IIOP.NET release: a .NET server provides access to a set of collections consisting of key / value pairs. A client can grab a collection, modify it by adding more pairs, or querying it about the pairs it contains. This requires the client to hold references to the objects on the server. For the sake of simplicity, we will concentrate on the object distribution, and skip all the concurrency-related problems. You will find a few directories: IIOPChannel CLSToIDLGenerator Examples GenericCollections To be able to use IIOP.NET, you need Microsoft's .NET Frameword (1.0 or 1.1) and the C# compiler. The Java part of the demo requires any Java system supporting RMI/IIOP (e.g. Sun's Java SDK 1.4). To install IIOP.NET, first unpack it; them copy the ir.idl and orb.idl files from your Java SDK lib directory into IIOP.NET's IDL directory. Compile by executing nmake in the main directory. ir.idl orb.idl lib IDL nmake When you define a .NET service, you have the choice between objects marshalled by reference, which subclass MarshalByRefObject, and objects marshalled by value, which implement ISerializable or are decorated with SerializableAttribute. In the GenericCollections example, the objects (without implementation) are defined as: MarshalByRefObject ISerializable SerializableAttribute namespace Ch.Elca.Iiop.Demo.StorageSystem { [Serializable] public struct Entry { public string key; public string value; } public class Container: MarshalByRefObject { public Container() {...} public Entry[] Enumerate() {...} public void SetValue(string key, string value) {...} public void SetEntry(Entry e) {...} public String GetValue(string key) {...} } public class Manager: MarshalByRefObject { public Manager() {...} public Container CreateContainer() {...} public Container[] FilterContainers(Entry[] filter) {...} public void DeleteContainer(Container c) {...} } } In practice, Manager and Container objects stay on the server. The client merely receives a remote reference to them and works with a proxy object that serializes (i.e. encodes) the method calls and forwards them to the server. On the other hand, Entry structures are entirely copied to the client, which works with its own Entry clones. You can now make the managed object available to the rest of the (IIOP) world: public class Server { [STAThread] public static void Main(string[] args) { // register the channel int port = 8087; IiopChannel chan = new IiopChannel(port); ChannelServices.RegisterChannel(chan); // publish the adder Manager manager = new Manager(); string objectURI = "manager"; RemotingServices.Marshal(manager, objectURI); Console.WriteLine("server running"); Console.ReadLine(); } } The above code installs an ORB listening on the URI iiop://localhost:8087/, and registers a manager instance under the name "manager" to all channels. The manager object will handle all requests (in fact, this is a server-activated singleton object). iiop://localhost:8087/ To be able to access these objects from Java, their definition must be made available. Because Java does not understand .NET's metadata, we create a description of the objects using the IDL format using the IIOP.NET's CLSToIDLGenerator tool; this tool takes as input one type and one assembly, and emits the corresponding IDL definition file. It also recursively emits the definitions for all other types used. Calling CLSToIDLGenerator Ch.Elca.Iiop.Demo.StorageSystem.Manager Service.dll generates the description for the Manager type (the full type name is required) defined in the Service.dll assembly and for all other types used by Manager. Manager.idl, Container.idl, and Entry.idl are created. The Java SDK provides the idlj compiler to generate the java stubs for the IDL files. Note that you will need two more IDL files present in your Java SDK: orb.idl and ir.idl, which contain all the predefined CORBA objects for your Java platform. idlj You can now implement a client, which accesses the remote objects defined previously. import javax.naming.InitialContext; import javax.rmi.PortableRemoteObject; import Ch.Elca.Iiop.GenericUserException; import Ch.Elca.Iiop.Demo.StorageSystem.*; Manager m = null; try { InitialContext ic = new InitialContext(); Object obj = ic.lookup("manager"); m = (Manager) PortableRemoteObject.narrow(obj, Manager.class); ... use m ... } catch (Exception e) { System.out.println("Exception: " + e.getMessage()); } This code retrieves a reference to the remote object. Now you can call the methods defined in the remote object just like normal methods: Container c = m.CreateContainer(); c.SetValue("name","Patrik"); There is still one catch: you must write and implement the class EntryImpl . The Manager and Container types are accessed by reference, i.e. idlj generates a proxy that forwards all method calls to the server. The Entry structure instead is copied to the client (this corresponds to the classes extending marked as serializable in .NET, and to the valuetypes in CORBA): thus, you need to provide a local implementation for all its methods (idlj just provides an abstract class): EntryImpl Manager Container Entry package Ch.Elca.Iiop.Demo.StorageSystem; public class EntryImpl extends Entry { } Because Entry has no methods to be implemented(only fields), its implementation is simple and consists of an empty class definition. As a last step, run the distributed application. First, start the server: D:\Demo\net:> Server server running and then the client: java -Djava.naming.factory.initial=com.sun.jndi.cosnaming.CNCtxFactory -Djava.naming.provider.url=iiop://localhost:8087 Client The internet address part of the URL is passed to the JVM in order to tell RMI/IIOP where to find the naming service. The java client will prompt you for an operation to perform. First input "1" to create a new collection, and then insert a few keys and values, terminate with an empty key. Keep in mind that every command you issue, is executed on the server; in fact, by starting two clients, you will access exactly the same data. Here's a small session log: Main Menu: 0. Terminate 1. Create Container 2. Select Container 1 Container Menu: 0. Return to previous menu 1. Set Entry 2. Show Entries 1 Enter an key / value pair: key: site value: CodeProject Container Menu: 0. Return to previous menu 1. Set Entry 2. Show Entries 1 Enter an key / value pair: key: URL value: Container Menu: 0. Return to previous menu 1. Set Entry 2. Show Entries 2 List Entries Entry[site] = CodeProject Entry[URL] = Container Menu: 0. Return to previous menu 1. Set Entry 2. Show Entries 0 Main Menu: 0. Terminate 1. Create Container 2. Select Container 2 Select Containers: enter a list of key / values pairs; terminate with an empty key key: site value: CodeProject key: Matches: Container 1 List Entries Entry[site] = CodeProject Entry[URL] = Select container number or 0 to return to previous menu 0 Main Menu: 0. Terminate 1. Create Container 2. Select Container 0 This article has shown how to access objects created and hosted under .NET from J2EE using IIOP.NET. The implementation of a distributed application is as simple as using only Java's RMI/IIOP. It is obviously possible to work in the opposite way hosting the objects on J2EE and remotely accessing them from .NET: IIOP.NET includes also a generator to create the proxies for .NET given the IDL of the objects. IIOP.NET allows to transparently access remote objects using the CORBA standard under .NET. It is well integrated in .NET's remoting framework, and as such, that it can be used together with the other remoting channels already available without any code change. Remoting and RMI are the technologies available in .NET and J2EE to create tightly coupled interaction in a distributed object system. WebServices are a more appropriated for loosely coupled systems. Projects and products: IIOP.NET Janeva Remoting.Corba Technologies: .NET Remoting RMI / IIOP CORBA This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here <configuration> <system.runtime.remoting> <application> <channels> <channel type="Ch.Elca.Iiop.IiopChannel,IiopChannel" port="8087"/> </channels> </application> </system.runtime.remoting> </configuration> // register the channel IiopChannel chan = new IiopChannel(8087); ChannelServices.RegisterChannel(chan); RemotingConfiguration.Configure(configFile); General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/4450/Building-a-Distributed-Object-System-with-NET-and?fid=16076&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Expanded&spc=None&fr=11
CC-MAIN-2014-23
refinedweb
1,886
51.34
CREATE AN ARRAY IN NUMPY In this tutorial, we are going to learn about how to create an array in numpy using different methods and techniques followed by handy examples. Now that we have learned about ndarray object in numpy, it’s time to learn about how to create an array in numpy using different methods. We will use arrays to perform different functions such as logical, arithmetic, statistical etc. So, let’s get started with creating arrays. There are numerous ways to create arrays in numpy such as: Creating Lists with Different Dimensions Numpy has a built-in function which is known as arange, it is used to generate numbers within a range if the shape of an array is predefined. Creating a Single Dimensional Array Let’s create a single dimension array having no columns but just one row. This type of array is usually called a rank 1 array because it has only one axis (one dimensional array). Similarly, an array with rank 2 will be a 2D array because it will have 2 axis (rows x columns). Let’s pass on a value of 10 in the arange function which will generate values from 0 to 9 (index wise). import numpy as np #Single dimensional array one_d = np.arange(10) one_d Output: array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]) To verify the dimensions of the shape of an array, you can use the shape function: one_d.shape Output: (10,) You can see that there is no value after 10. This means that it is a single dimensional array with having 10 indices. You can update different values in your array by assigning new ones. For example, if you want to change the value at index 5, then you can simply do as follows: one_d[5] = 30 one_d Output: array([ 0, 1, 2, 3, 4, 30, 6, 7, 8, 9]) But you have to be careful when updating values in your numpy, for example you cannot update strings in your numpy array which has a datatype int. Since, arrays should of the same type at a time so if you update a string value to an integer datatype, then there is going to be an error. one_d[6] = "Numbers" one_d Output: ValueError Traceback (most recent call last) in ----> 1 one_d[6] = "Numbers" 2 one_d ValueError: invalid literal for int() with base 10: 'Numbers' Creating a 2D Array Let’s create a 2D array now. We have learnt about using the arange function as it outputs a single dimensional array. To create a 2D array we will link the reshape function with the arange function. import numpy as np two_d = np.arange(30).reshape(5,6) two_d Output: array([[ 0, 1, 2, 3, 4, 5], [ 6, 7, 8, 9, 10, 11], [12, 13, 14, 15, 16, 17], [18, 19, 20, 21, 22, 23], [24, 25, 26, 27, 28, 29]]) You can always check the shape of the array by using the shape function: two_d.shape Output: (5, 6) You can access the values in a two dimensional array by specifying the index value of the row as the column: two_d[3][1] Output: 19 Make sure that the product of the reshape numbers should be equal to the number defined in arange function. Creating a 3D Array For creating a 3D array, we can specify 3 axises to the reshape function like we did in 2D array. To create a three-dimensional array, specify 3 parameters to the reshape function. three_d = np.arange(8).reshape(2,2,2) three_d Output: array([[[0, 1], [2, 3]], [[4, 5], [6, 7]]]) Make sure that the product of the reshape numbers should be equal to the number defined in arange function. Through Numpy Functions We have used the arange function now we will use the zeroes, ones along with other built in functions that we can use for numpy array creation. Using Zeros Function We can use the zeros function for creating an array representing only zeroes. We can identify the number of rows and columns as parameters while declaring the zeros function: zeros = np.zeros((3,4)) zeros Output: array([[0., 0., 0., 0.], [0., 0., 0., 0.], [0., 0., 0., 0.]]) Using Ones Function We can use the ones function for creating an array representing only ones. We can identify the number of rows and columns as parameters while declaring the ones function: ones = np.ones((3,4)) ones Output: array([[1., 1., 1., 1.], [1., 1., 1., 1.], [1., 1., 1., 1.]]) Note: The default datatype for zeros and ones is ‘float’ Using Empty Function Empty function in numpy creates an array which is based on random numbers and depends on the state of the memory. The empty function creates an array. Its initial content is random and depends on the state of the memory. empty = np.empty((2,4)) empty Output: array([[-1.72723371e-077, 2.00389442e+000, 9.76118064e-313, 2.20687562e-312], [ 2.37663529e-312, 4.99006302e-322, 0.00000000e+000, 5.56268465e-309]]) Using Full Function Returns a new array with a shape of rows and columns followed by the only value that you want to see in your array. full = np.full((3,4),3) full Output: array([[3, 3, 3, 3], [3, 3, 3, 3], [3, 3, 3, 3]]) Using the Eye Function The Eye function creates an array in numpy that is positioned with a diagonal design of 1s and 0s. eye = np.eye(4,4) eye Output: array([[1., 0., 0., 0.], [0., 1., 0., 0.], [0., 0., 1., 0.], [0., 0., 0., 1.]]) Using Linspace Function The linspace function creates an array in which numbers are represented at a defined interval. For example, if I want to represent the numbers between 1 and 10 for 20 times, I can write it as: linspace = np.linspace(1, 10, 20) linspace Output:. ]) Extracting Lists as Numpy Arrays We can also create an array in numpy by using lists, we can separately create lists and add them to an array list = [1,2,3,4,5] list Output: [1, 2, 3, 4, 5] Now let’s create an array and insert the existing list in it. list_array = np.array(list) list_array Output: array([1, 2, 3, 4, 5]) Using the Random Function We can special functions in numpy as well to create arrays, one of them is the random function. Let’s learn about how to create an array holding random values: rand = np.random.rand(3,5) rand Output: array([[0.09361231, 0.79701563], [0.9774606 , 0.87040235], [0.79645207, 0.34890012]]) Creating arrays in numpy is the most crucial part of working in numpy. This phase trains us to create arrays before starting any sort of computations on them. Hence it is the most important one as well.
https://python-tricks.com/create-an-array-in-numpy/
CC-MAIN-2021-39
refinedweb
1,142
66.57
To run Script: 1. Place file to maya scripts folder 2. Arnold plugin must be installed 3. Run this code: - for main window: import VK_arnoldTools VK_arnoldTools.VK_arnoldTools().buildUImain() - for subdivision window only: import VK_arnoldTools VK_arnoldTools.VK_arnoldTools().buildUIsmooth() USE Masking Tools: Create Object Masks - works with group, object and shape selection. Create Shader Masks - works with group, object, shape, shader and shading group selection. Will skip “initialShadingGroup” and “initialParticleSE” shading groups. RESET - removes setups and corresponding nodes. May not properly work for complex scenes with referenced objects. Subdivision: First Row - set/display subdivision settings for selected nodes Second Row - display amount/select nodes with corresponding subdivision settings HIGH - nodes with more than 4 subdivisions General info: tool uses conflict resolving algorithms and works in non destructive way. New objects/shading groups can be masked over an existing setup. Note: Tool uses "mO_<NUMBER>" and "mS_<NUMBER>" name patterns for mask naming Masking tool currently supports maya native nodes('mesh', 'nurbsCurve', 'nurbsSurface') and Yeti plugin nodes. Please use the Feature Requests to give me ideas. Please use the Support Forum if you have any questions or problems. Please rate and review in the Review section.
https://www.highend3d.com/maya/script/arnold-tools-for-maya
CC-MAIN-2020-24
refinedweb
193
51.44
java - without - kth smallest element in an array in c Java, Finding Kth largest value from the array (4) Edit: Check this answer for O(n) solution. You can probably make use of PriorityQueue as well to solve this problem: public int findKthLargest(int[]; } Implementation note: this implementation provides O(log(n)) time for the enqueing and dequeing methods ( offer, poll, remove()and add); linear time for the remove(Object)and contains(Object)methods; and constant time for the retrieval methods ( peek, element, and size). The for loop runs n times and the complexity of the above algorithm is O(nlogn). This question already has an answer here: I had an interview with Facebook and they asked me this question. Suppose you have an unordered array with N distinct values $input = [3,6,2,8,9,4,5] Implement a function that finds the Kth largest value. EG: If K = 0, return 9. If K = 1, return 8. What I did was this method. private static int getMax(Integer[] input, int k) { List<Integer> list = Arrays.asList(input); Set<Integer> set = new TreeSet<Integer>(list); list = new ArrayList<Integer>(set); int value = (list.size() - 1) - k; return list.get(value); } I just tested and the method works fine based on the question. However, interviewee said, in order to make your life complex! lets assume that your array contains millions of numbers then your listing becomes too slow. What you do in this case? As hint, he suggested to use min heap. Based on my knowledge each child value of heap should not be more than root value. So, in this case if we assume that 3 is root then 6 is its child and its value is grater than root's value. I'm probably wrong but what you think and what is its implementation based on min heap? He has actually given you the whole answer. Not just a hint. And your understanding is based on max heap. Not min heap. And it's workings are self-explanatory. In a min heap, the root has the minimum (less than it's children) value. So, what you need is, iterate over the array and populate K elements in min heap. Once, it's done, the heap automatically contains the lowest at the root. Now, for each (next) element you read from the array, -> check if the value is greater than root of min heap. -> If yes, remove root from min heap, and add the value to it. After you traverse your whole array, the root of min heap will automtically contain the kth largest element. And all other elements (k-1 elements to be precise) in the heap will be larger than k. Here is the implementation of the Min Heap using PriorityQueue in java. Complexity: n * log k. import java.util.PriorityQueue; public class LargestK { private static Integer largestK(Integer array[], int k) { PriorityQueue<Integer> queue = new PriorityQueue<Integer>(k+1); int i = 0; while (i<=k) { queue.add(array[i]); i++; } for (; i<array.length; i++) { Integer value = queue.peek(); if (array[i] > value) { queue.poll(); queue.add(array[i]); } } return queue.peek(); } public static void main(String[] args) { Integer array[] = new Integer[] {3,6,2,8,9,4,5}; System.out.println(largestK(array, 3)); } } Output: 5 The code loop over the array which is O(n). Size of the PriorityQueue (Min Heap) is k, so any operation would be log k. In the worst case scenario, in which all the number are sorted ASC, complexity is n*log k, because for each element you need to remove top of the heap and insert new element. One approach for constant values of k is to use a partial insertion sort. (This assumes distinct values, but can easily be altered to work with duplicates as well) last_min = -inf output = [] for i in (0..k) min = +inf for value in input_array if value < min and value > last_min min = value output[i] = min print output[k-1] (That's pseudo code, but should be easy enough to implement in Java). The overall complexity is O(n*k), which means it works pretty well if and only if k is constant or known to be less that log(n). On the plus side, it is a really simple solution. On the minus side, it is not as efficient as the heap solution
https://code.i-harness.com/en/q/1ed39d0
CC-MAIN-2021-49
refinedweb
731
66.13
Quickstart: Adding search to an app (HTML) Most users rely on search to find what they're looking for. For example, if your app plays media files, users will expect to be able to search for a specific song or video; if your app is a cooking app, users will expect to search for specific recipes or ingredients. With a little planning, it's not that difficult to add search to your app. Here's what you need: - A data source to search. You need some sort of catalog or inventory of items that users might want to search for. The more descriptive you can make this inventory, the better your search results will be. - A control for entering search queries. Windows provides a SearchBox control that your app can use. The SearchBox provides an input area for entering queries, a search button for executing the search, and events for handling search queries. It even provides some search suggestions automatically. - A page for displaying search results. Microsoft Visual Studio provides the Search Results Page template that creates a lot of the code you need to handle search queries and display results. This quickstart tells you how to use these items to add search functionality to your app. See this feature in action as part of our App features, start to finish series: Windows Store app UI, start to finish Prerequisites - We assume that you can add controls to a basic Windows Store app using JavaScript. For instructions on adding controls, see Quickstart: Adding controls and handling events and Quickstart: Adding WinJS controls and styles. - You should be familiar with working with data sources and data binding. For instructions, see How to customize Visual Studio template data. Set up your data When the user enters a search query, your app searches for items that user might be looking for. The data your app searches could take several forms: it might be an XML file, JavaScript Object Notation (JSON) data, a database, a web service, or files in the file system. The examples in this quickstart use the sample data that Microsoft Visual Studio generates when you create a new project in Visual Studio. When you use Visual Studio to create a new Grid app, Hub app, or Split app, it creates a file named data.js in your app's js folder. This file includes static data that you can replace with your own data. For example, if your app makes a single xhr request to obtain RSS or JSON data, you might want to add your code to data.js. Including the code there enables you to easily use your own data without changing the data model used by the Search Results Page. Here's an example of what the sample data looks like: function generateSampleData() { // . . . var sampleGroups = [ { key: "group1", title: "Group Title: 1", // . . . // . . . ]; var sampleItems = [ { group: sampleGroups[0], title: "Item Title: 1", // . . . // . . . ]; return sampleItems; } To make this data accessible to your files, the data.js file defines a Data namespace that exposes these members: items: A WinJS.Binding.List that contains the data items. This is a grouped List. groups: A WinJS.Binding.List that contains the groups to which the data items belong. (You can also obtain the groups by calling items.groups.) - getItemReference: Retrieves an object that contains the group key and the title of the specified item. - getItemsFromGroup: Retrieves a FilteredListProjection that contains the items that belong to the group with the specified key. - resolveGroupReference: Retrieves an object that represents the group that has the specified key. - resolveItemReference: This method takes an array that contains two strings, a group key and title. This method retrieves the item that has the specified group key and title. You don't have to use this namespace or these members to contain your data, but doing so will make it easier to use the Search Results Page template. (For more info about working with the template-generated data, see How to customize Visual Studio template data.) Add a Search Results page The Search Results page processes search queries and displays the result. Let's add one to your project. (These instructions assume that your project was created from the Hub, Grid, or Split template. ) Add the Search Results Page item In the pages project folder of Solution Explorer, add a new folder named search. Open the shortcut menu for the search folder, and then choose Add > New Item. In the center pane of the Add New Item dialog box, choose Search Results Page. For this example, keep the default name, searchResults.html, that appears in the Name box. Choose Add. Visual Studio adds searchResults.html, searchResults.css, and searchResults.js to the project in the new search folder. Add a SearchBox We still have work to do on the search results page, but first, let's add a SearchBox to our app. Having a SearchBox will make it easier for us to test our search results page as we implement it. A SearchBox lets the user enter queries. It can also display suggestions. To add a SearchBox to your app, just add this markup to an HTML page: <div class="searchBox" data- </div> (You also need to register for the onquerysubmitted event; we'll do that in a later step.) Where should you place your search box? We recommend putting a search box on each page of your app so users can easily search whenever they want to. If space is an issue, you can put the search box in a top app bar. Add a SearchBox to your page Let's add a SearchBox to one of your app's pages. These instructions will work for any page based on a Page control. Usually, the best location to put your SearchBox is in the upper-right corner of the page. Most pages that you create from a Visual Studio template (such as the Page Control template) have a header element that contains the page title and a back button: <header aria- <button data-</button> <h1 class="titlearea win-type-ellipsis"> <span class="pagetitle"></span> </h1> </header> Just add your SearchBox after the h1 element: <header aria- <button data-</button> <h1 class="titlearea win-type-ellipsis"> <span class="pagetitle">Welcome to basicPage</span> </h1> <div class="searchBox" data- </div> </header> (Recommended) You should give your users the ability to search for content in your app by simply beginning to type with their keyboard. Many people will use a keyboard to interact with Windows 8. Letting users search by typing makes efficient use of keyboard interaction and makes your app's search experience consistent with the Start screen. Set the SearchBox control's focusOnKeyboardInput property to true so that the search box receives input when a user types. <div class="searchBox" data- </div> The default.css style sheet that Visual Studio creates for you gives header elements an -ms-grid layout. To place your SearchBox in the upper-right corner of the page, just add this style to the Cascading Style Sheets (CSS) file for your page: .searchBox { -ms-grid-column: 4; margin-top: 57px; margin-right: 29px; } Handle the onquerysubmitted event It's likely that your app will have multiple SearchBox controls. Let's define a single onquerysubmitted event handler that they can all use. Open your app's default.js file. Create an onquerysubmitted event handler named "querySubmittedHandler" that takes a single argument named "args". (You can put this method definition anywhere inside the anonymous function that wraps the existing default.js code.) function querySubmittedHandler(args) { } Use the event handler to navigate to your new search results page by calling WinJS.Navigation.navigate. The args.detailsproperty contains an object that provides info about the event that our search results page will need, so pass this object when you call WinJS.Navigation.navigate. function querySubmittedHandler(args) { WinJS.Navigation.navigate('/pages/search/searchResults.html', args.detail); } Warning If you created your app using the Blank App template, you need to add navigation support to your app for search to work. You can support navigation the same way that the Grid, Split, and Navigation App templates do by adding a custom control called PageControlNavigatorto your app. You can see how this custom control supports navigation in Quickstart: Using single-page navigation. If you'd rather support navigation without using a custom control, you have to write your own code that listens for and responds to navigation events like WinJS.Navigation.navigated. You can see an example of how to support navigation without using a custom control like PageControlNavigatorin the Navigation and navigation history sample. Now we need to publicly expose this event handler by defining a namespace and making the handler a member. Let's call the namespace "SearchUtils". We also need to use the WinJS.UI.eventHandler method so we can set the event handler declaratively (for more info on how this works, see How to set event handlers declaratively). WinJS.Namespace.define("SearchUtils", { querySubmittedHandler: WinJS.UI.eventHandler(querySubmittedHandler) } ); Open the HTML page that contains your SearchBox. Use the data-win-options property to set the onquerysubmitted event to SampleUtils.querySubmittedHandler. <div class="searchBox" data- </div> Let's try it out. Run the app and type a test query into the SearchBox and press Enter. If you're using the sample data provided by Visual Studio, try using "1" as your test query. The onquerysubmitted event handler that you wrote navigates to the search results page, passing the query you entered. If you used the sample data, you should see matches for your test query. If you're using your own data, you might not get any results yet; we'll need to update the search results page first. We'll get to that in a later step. Search your data It's time to go back to our search results page. When your app navigates to the search results page, one of the first methods it calls is the _handleQuery method. The _handleQuery calls several methods that we should modify: _generateFilters Generates the list of filters that the user can click to filter results. _searchData Searches your data for matching items and stores them in a List named originalResults. _populateFilterBar Displays the filters in our filter list. Let's update these methods to customize them for your data. Update the filters The _generateFilters method generates the list of filters that the user can click to filter results. The template-generated method creates three filters: an "All" filter for showing all results, a filter for showing the items in group 1, and a filter for showing everything else. Let's replace the template-generated code with code that generates the filter list dynamically. That way, if you change the sample data, your new filters will show up on the page. We'll update the _generateFilters code and create two helper methods. But first, we need to update our data.js file so that we can access the list of groups; we use these groups to define our filters. Update the _generateFilters method In searchResults.js, find the _generateFiltersmethod and delete the code it contains. Initialize the _filtersarray. (The _filtersarray is a member variable defined by the search results page.) _generateFilters: function () { this._filters = []; Now create a filter. A filter is an object that has three properties: results: A List of the items to display. We'll set this to null for now. text: The display text for the filter. predicate: A function that takes an item. If the item meets the filter criteria (if it should be displayed when this filter is selected), this function returns true; otherwise, it returns false. First, let's create the "All" filter. The all filter always displays items, so its predicatealways returns true. this._filters.push({ results: null, text: "All", predicate: function (item) { return true; } }); Now lets create a filter for each group in our data. Our groups are stored as a List named Data.groups. Use the forEach method to iterate through each group in the List. The forEach method takes a function as its parameter; this function is called for each item in the list. Let's pass it a member function named _createFiltersForGroups; we'll create the function in the next step. if (window.Data) { Data.groups.forEach(this._createFiltersForGroups.bind(this)); } }, Now let's create the _createFiltersForGroupsfunction. Create a member function named _createFiltersForGroupsthat takes three parameters: element, index, and array. _createFiltersForGroups: function (element, index, array){ The element parameter contains our group object. Create a new filter object and use the push method to add it to the _filtersarray. Set the filter's resultsproperty to null, its textproperty to element. title, and its predicateproperty to a function named _filterPredicate. You'll define the _filterPredicatemethod in the next step. this._filters.push( { results: null, text: element.title, predicate: this._filterPredicate.bind(element)} ); }, Create a member function named _filterPredicatethat takes a single parameter named item. Return true if the item parameter's groupproperty is equal to the current group object. _filterPredicate: function (item) { return item.group === this; }, Here's the complete code for the three methods we just created: _generateFilters: function () { this._filters = []; this._filters.push({ results: null, text: "All", predicate: function (item) { return true; } }); if (window.Data) { Data.groups.forEach(this._createFiltersForGroups.bind(this)); } }, _createFiltersForGroups: function (element, index, array){ this._filters.push( { results: null, text: element.title, predicate: this._filterPredicate.bind(element)} ); }, _filterPredicate: function (item) { return item.group === this; }, Run the app and perform a search; you should see your new filters in the filter bar. If you're using the template-generated sample data, you might notice that some of the groups are clipped. You can fix the issue by making a few adjustments to the CSS file for your search results page. Update the CSS for the search results page Open searchResults.css. Find the .searchResults section[role=main]style and change the value of the -ms-grid-rows property to "auto 1fr". .searchResults section[role=main] { /* Define a grid with rows for the filters and results */ -ms-grid-columns: 1fr; -ms-grid-rows: auto 1fr; -ms-grid-row: 1; -ms-grid-row-span: 2; display: -ms-grid; } Find the .searchResults section[role=main] .filterbarstyle and change the value of the word-wrap property to "normal" and set margin-bottom to "20px". .searchResults section[role=main] .filterbar { -ms-font-feature-settings: "case" 1; -ms-grid-row: 1; list-style-type: none; margin-left: 60px; margin-right: 60px; margin-top: 133px; max-width: calc(100% - 120px); position: relative; white-space: normal; z-index: 1; margin-bottom: 20px; } Find the .searchResults section[role=main] .filterbar listyle and change the value of the display property to "inline-block". .searchResults section[role=main] .filterbar li { display: inline-block; margin-left: 20px; margin-right: 20px; margin-top: 5px; opacity: 0.6; } Find the .searchResults section[role=main] .resultsliststyle and change the value of the -ms-grid-row property to "2" and set -ms-grid-row-span to "1". .searchResults section[role=main] .resultslist { -ms-grid-row: 2; -ms-grid-row-span: 1; height: 100%; position: relative; width: 100%; z-index: 0; } Run the app and perform another search. You should see all of the filters now. Update the search algorithm The _searchData method searches or data for items that match the search query. The template-generated code searches the title, subtitle, and description of each item. Let's write our own search code that ranks the results by relevancy. Update the _searchData method Open searchResults.js, find the _searchDatamethod, and delete the code it contains. Create a variable named originalResults; this will be our return value. // This function populates a WinJS.Binding.List with search results for the // provided query. _searchData: function (queryText) { // Create a variable for the results list. var originalResults; Let's make our search case-insensitive by converting both the query text and the text we're looking at to lowercase. Let's start by converting the query to lowercase and storing it as a variable named lowercaseQueryText. // Convert the query to lowercase. var lowercaseQueryText = queryText.toLocaleLowerCase(); Before we attempt to access our data, let's make sure the data exists. if (window.Data) { If you're using the sample data provided in data.js, then our items our stored in Data.items, a WinJS.Binding.List object. Use the createFiltered method to filter out items that don't satisfy the search query. The createFiltered method takes a filtering function as its parameter. This filtering function takes a single parameter, item. The List calls this function on each item in the list to determine whether it should be in the filtered list. The function returns true if the item should be included and false if it should be omitted. originalResults = Data.items.createFiltered( function (item) { In JavaScript, you can attach new properties to existing objects. Add a rankingproperty to item and set its value to "-1". // A ranking < 0 means that a match wasn't found. item.ranking = -1; First, let's check to see whether the item's title contains the query text. If it does, give the item 10 points. if (item.title.toLocaleLowerCase().indexOf(lowercaseQueryText) >= 0) { item.ranking += 10; } Next, let's check for hits in the subtitle field. If we find a match, give the item 5 points. if (item.subtitle.toLocaleLowerCase().indexOf(lowercaseQueryText) >= 0) { item.ranking += 5; } Finally, let's check the description field. If we get a match, give the item 1 point. if (item.description.toLocaleLowerCase().indexOf(lowercaseQueryText) >= 0) { item.ranking += 1; } If the item has a ranking of -1, that means it didn't match our search query. For our return value, return true if the item has a ranking of 0 or greater. return (item.ranking >= 0); } ); So far, we've filtered the list down to only the items that match the search query and we've added ranking info. Now let's use the createSorted method to sort our results list so that the items with the most points appear first. // Sort the results by the ranking info we added. originalResults = originalResults.createSorted(function (firstItem, secondItem){ if (firstItem.ranking == secondItem.ranking) { return 0; } else if (firstItem.ranking < secondItem.ranking) return 1; else return -1; }); } If our data is missing, create an empty list. else { // For some reason, the Data namespace is null, so we // create an empty list to return. originalResults = new WinJS.Binding.List(); } Finally, return the results. return originalResults; } Here's the complete code for the updated _searchData method. _searchData: function (queryText) { // Create a variable for the results list. var originalResults; // Convert the query to lowercase. var lowercaseQueryText = queryText.toLocaleLowerCase(); if (window.Data) { originalResults = Data.items.createFiltered( function (item) { // A ranking < 0 means that a match wasn't found. item.ranking = -1; if (item.title.toLocaleLowerCase().indexOf(lowercaseQueryText) >= 0) { item.ranking += 10; } if (item.subtitle.toLocaleLowerCase().indexOf(lowercaseQueryText) >= 0) { item.ranking += 5; } if (item.description.toLocaleLowerCase().indexOf(lowercaseQueryText) >= 0) { item.ranking += 1; } return (item.ranking >= 0); } ); // Sort the results by the ranking info we added. originalResults = originalResults.createSorted(function (firstItem, secondItem){ if (firstItem.ranking == secondItem.ranking) { return 0; } else if (firstItem.ranking < secondItem.ranking) return 1; else return -1; }); } else { // For some reason, the Data namespace is null, so we // create an empty list to return. originalResults = new WinJS.Binding.List(); } return originalResults; } Provide navigation to the items returned by search When you run your app and perform a search, the search results page displays the results in a ListView control. Right now, clicking on one of these search result items doesn't do anything. Let's add some code to display the item when the user clicks it. When the user clicks an item in a ListView, the ListView fires the oniteminvoked event. The template-generated code for our search results page defines an oniteminvoked event handler named _itemInvoked. Let's update the code to navigate to the invoked item. To add navigation to items Open searchResults.js and add code to the _itemInvokedfunction to navigate to the correct page. Caution The URI shown here is for the Hub template. For the Grid template, the URI must be: /pages/itemDetail/itemDetail.html. For the Split template, the URL must be: /pages/items/items.html. _itemInvoked: function (args) { args.detail.itemPromise.done(function itemInvoked(item) { // TODO: Navigate to the item that was invoked. var itemData = [item.groupKey, item.data.title]; WinJS.Navigation.navigate("/pages/item/item.html", { item: itemData }); }); }, (Optional) Update the ListView control's itemTemplate The template-generated search results page defines an itemTemplate that is designed to work with the sample data source that Visual Studio creates for you; it expects the following fields in each data item: "image", "title", "subtitle", and "description". If your data items have different fields, you need to modify the itemTemplate. For instructions, see Quickstart: Adding a ListView. (Optional) Add search suggestions Search suggestions are displayed under the search box in the search pane. Suggestions are important because they save users' time and give valuable hints about the kinds of things users can search for in your app. You can get suggestions from several sources: - You can define them yourself. For example, you could create a list of car manufacturers. - You can get them from Windows if your app searches local files. - You can get them from a web service or server. For user experience guidelines for displaying suggestions, see Guidelines and checklist for search. You can use LocalContentSuggestionSettings to add suggestions, based on local files from Windows, in only a few lines of code. Alternatively, you can register for the search box control's onsuggestionsrequested event and build your own list of suggestions that is made up of suggestions you retrieved from another source (like a locally-defined list or a web service). This quickstart shows you how to handle the onsuggestionsrequested event. For additional code examples that show how to add search suggestions, download the SearchBox control sample. The sample demonstrates how to add search suggestions by using all three possible sources, and how to add suggestions for East Asian languages by using alternate forms of the query text generated by an Input Method Editor (IME). (We recommend using query text alternatives if your app will be used by Japanese or Chinese users.) Handle the SuggestionsRequested event It's likely that your app will have multiple SearchBox controls; let's define a single event handler in your default.js file that they can all use. Add this code after the querySubmittedHandlermethod that you created in an earlier step. function suggestionsRequestedHandler(args) { Convert the SearchBox query text to lowercase. var query = args.detail.queryText.toLocaleLowerCase(); The system automatically provides some search suggestions, such as previous searches the user performed. Let's add our search suggestions to whatever the system provides. // Retrieve the system-supplied suggestions. var suggestionCollection = args.detail.searchSuggestionCollection; Verify that the query contains at least one character and that we have access to our data. if (query.length > 0 && window.Data) { Iterate through each item in your data and check for matches. When we find a match, append the matching item's title to the search suggestions collection. Data.items.forEach( function (element, index, array) { if (element.title.substr(0, query.length).toLocaleLowerCase() === query) { suggestionCollection.appendQuerySuggestion(element.title); } }); The args.detail.linguisticDetails.queryTextAlternativesproperty provides additional suggestions for users entering text in an IME. Using these suggestions improves the search experience for users of East Asian languages. Let's check the query text alternatives for strings that contain the original query and add them to our search suggestion list. args.detail.linguisticDetails.queryTextAlternatives.forEach( function (element, index, array) { if (element.substr(0, query.length).toLocaleLowerCase() === query) { suggestionCollection.appendQuerySuggestion(element); } }); } } That's all the code we need for our search suggestion event handler. Here's the complete suggestionsRequestedHandlermethod: function suggestionsRequestedHandler(args) { var query = args.detail.queryText.toLocaleLowerCase(); // Retrieve the system-supplied suggestions. var suggestionCollection = args.detail.searchSuggestionCollection; if (query.length > 0 && window.Data) {); } }); } } Note If your data source is asynchronous, you must wrap updates to the search suggestion collection in a Promise. The sample code uses a List, which is a synchronous data source, but here's what the method would look like if the List were an asynchronous data source. function suggestionsRequestedHandler(args) { var query = args.detail.queryText.toLocaleLowerCase(); // Retrieve the system-supplied suggestions. var suggestionCollection = args.detail.searchSuggestionCollection; if (query.length > 0 && window.Data) { args.detail.setPromise(WinJS.Promise.then(null, function () {); } }); }) ); } } That's all the code we need for our search suggestion event handler. Let's make it publicly accessible by exposing it through the SearchUtilsnamespace we defined in an earlier step: WinJS.Namespace.define("SearchUtils", { querySubmittedHandler: WinJS.UI.eventHandler(querySubmittedHandler), suggestionsRequestedHandler: WinJS.UI.eventHandler(suggestionsRequestedHandler) } ); Now let's register the event with our SearchBox. Open the HTML page that contains your SearchBox and set the onsuggestionsrequested event to SearchUtils.suggestionsRequestedHandler. <div class="searchBox" data- </div> Implementing the Search contract (for previous versions of Windows) Prior to Windows 8.1, apps used the Search charm to provide in-app search. Developers implemented the Search contract and used the SearchPane API to handle queries and obtain suggestions and results. Although we continue to fully support the Windows 8 Search contract and the SearchPane API, as of Windows 8.1, we recommend using the SearchBox control instead of the SearchPane. Apps that use the SearchBox don't need to implement the Search contract. Should an app ever use the SearchPane and Search contract? If you don't expect users to search your app very much, you can use the SearchPane and Search contract. We recommend that you use a button with the Search glyph (Segoe UI Symbol 0xE0094 at 15pt) in your app that users can click to activate the search pane. To see code that implements the SearchPane and the Search contract, see the Search contract sample. Summary and next steps You used the SearchBox control and the Search Results Page to add search to your app. For guidelines to help you design and create a good search experience for your users, see Guidelines and checklist for search. Related topics Guidelines and checklist for search
https://docs.microsoft.com/en-us/previous-versions/windows/apps/hh465238(v%3Dwin.10)
CC-MAIN-2019-35
refinedweb
4,344
58.48
The Characteristics of a Time Value The Components of a Time By now, we have seen that a time value is made of the hour, the minute, the second, and the millisecond parts. These are values you can specify when creating a time object using one of the appropriate constructors of the DateTime structure. If you request a time value from the user or if the application itself will provide it, you can retrieve its components. To get the hour portion of an existing DateTime object, you can access its Hour property. To retrieve the minute side of a time value, access its Minute property. If you want to know the second value of a DateTime variable, you can call its Second property. In the same way, you can get the millisecond value of a time by accessing its Millisecond property. The Time of Day of a DateTime Value As seen so far, a DateTime variable always holds both a date and a time portions. In your program, you may want to get only the time of the variable. To support this, the DateTime structure is equipped with a property named TimeOfDay. This property produces the time value of an existing DateTime object. Here is an example of using it: using System; namespace DateAndTime { class Program { static int Main() { DateTime time = new DateTime(2002, 4, 22, 16, 8, 44); Console.WriteLine("Date and Time: {0}\n", time); Console.WriteLine("Time of Day: {0}\n", time.TimeOfDay); return 0; } } } This would produce: Date and Time: 4/22/2002 4:08:44 PM Time of Day: 16:08:44 Press any key to continue . . .
http://www.functionx.com/csharp2/structures/time2.htm
CC-MAIN-2013-48
refinedweb
272
70.94
Something short while I work on the next mega-article. :) I recently had to write an algorithm that populates a tree from a collection of paths. The reason I ended up writing this algorithm is actually because I was mentoring someone who needed to implement this, but who is not quite proficient yet (but is a very quick learner) in C#, .NET, LINQ, etc., hence my mentoring. The algorithm is complicated enough that I wanted to work through the problem myself first and not look like a bumbling fool -- prep work is important! As it turns out, it makes for a good case study of code refactoring. First, by "tree", I don't mean a TreeView necessarily, but something as simple as this model: TreeView public class Node { public string Text { get; set; } public List<Node> Nodes { get; protected set; } public Node() { Nodes = new List<Node>(); } } With regards to a TreeView, there are some existing implementations out there. Ironically, I didn't even think of searching for an existing implementation first, and quite frankly, I'm glad I didn't because each of the examples from that SO page has the interesting feature that they walk the tree from its root every time when figuring out how to create the path. This is done either using the Nodes.Find method or, for each path, iterating from the root node until a missing node is found. As one reader commented: "I took your code, and it works very well, but I made just a little modification for improving the load speed when it is used with a large list of files it seems like find operation, and string operations generally are very slow." Nodes.Find Frankly, the idea of traversing from the root for each path that needs to be added simply didn't occur to me. Instead, it was clear to me from the beginning that this was a recursive algorithm. Granted, the "search from root" algorithms are all iterative -- no recursion required -- but they have the significant penalty of always having to walk the tree from the root and text for the existence of a child in the node collection at each level. Yuck! So, this article is hopefully a fun case study of how I iterated my implementation from a prototype to an over-the-top "production" piece. I think we can all pretty much agree that a prototype is like the first draft of a book -- it's the first cut at the code and typically has one or more of the following features: try catch Ironically, the code presented here is "evolutionary", but the end result looks nothing like the first cut, so the initial prototype was disposed! Production code implies a certain quality of code, and this is where there will be lots of disagreement. A few guidelines can be stated as "maybe it does this": Realistically, prototype code often ends up in a product without qualifying as "production" code. The concept of "production code" is often entangled with "maintainable code", and the two need to be separated. As the list above shows, my concept of "production code" includes things that should fall under "maintainable code": Wait a minute! The only thing that is left in the original list is "edge cases handled!" Yup. And even that is debatable. This, in my opinion, is the more important question, as it prevents dealing with the ambiguity and disagreements of what "quality" means. Simply stated: The "intended job" of course has to be defined, but usually this means it passes some QA process. Not unit tests, not style consistency, not language idiomatic correctness, not documentation. The QA process is not the same as unit testing (that's a whole other subject!) and as long as the code passes QA (does what it's supposed to do, including one or more of: correct results, performance, and exception handling) then guess what, it's ready for production! Here, your QA "functional" process should be a clear and separate process from internal code reviews which look at things QA doesn't. Anything else about the code falls under the category of maintainability and the programmer's desire to be language idiomatic ("cute", in other words.) So given that, let's begin. The algorithm looks like this: Given a list of strings, where each string represents a path delimited by a forward slash ('/'): string So in other words, if I have these paths: a/b/c a/b/d c/d/e I should get back a tree like this: My first attempt looked like this: static void ParsePaths1(Node node, List<string> paths) { // Convert the list of strings into a list of path component strings: List<string[]> splitPaths = new List<string[]>(); foreach (string str in paths) { splitPaths.Add(str.Split('/')); } // Get the distinct path components of the first component in each of the paths: var distinctItems = splitPaths.Select(p => p.First()).Distinct(); // Iterate each of the distinct components: foreach (string p in distinctItems) { // Create the child node. Node subNode = new Node() { Text = p }; node.Nodes.Add(subNode); // Initialize our collection of paths that match this distinct component // as the first component in the path. List<string> matchingFullPaths = new List<string>(); // Populate the paths whose first component matches the distinct component, // and that have additional components. foreach (var match in splitPaths.Where(p2 => p2.First() == p && p2.Length > 1)) { // Get the remaining components of those paths and // join them back up into a single string. matchingFullPaths.Add(String.Join("/", match.Skip(1))); } // Recurse the remainder for the distinct subnode we just added. ParsePaths1(subNode, matchingFullPaths); } } The idea was not to start with too much LINQ, which can complicate debugging of the basic algorithm. There's a few problems, one of which is glaring, with this code though: Problem #3, where the component paths are re-joined is where this code really falls into the category of prototype -- it was a shortcut that I took so I could test the algorithm, as that was my focus. In the second version, I decided that, instead of fixing the most glaring problems, I actually wanted to tighten up the code with a better use of LINQ. It was more a "what do I want to work on first" decision rather than anything else. So version 2 looked like this: static void ParsePaths2(Node node, List<string> paths) { var splitPaths = paths.Select(p => p.Split('/')); foreach(var p2 in splitPaths.Select(p=>p.First()).Distinct()) { Node subNode = new Node() { Text = p2 }; node.Nodes.Add(subNode); ParsePaths2(subNode, splitPaths.Where(p3 => p3.First() == p2 && p3.Length > 1). Select(p3 => String.Join("/", p3.Skip(1))).ToList()); } } There's less physical code, but there's now a new problem: List<string> List p p2 p3 None-the-less, version 2 works just fine. Version 3 fixes all the remaining problems: static void ParsePaths3(Node node, IEnumerable<IEnumerable<string>> splitPaths) { foreach (var distinctComponent in splitPaths.Select(path => path.First()).Distinct()) { Node subNode = new Node() { Text = distinctComponent }; node.Nodes.Add(subNode); ParsePaths3(subNode, splitPaths.Where (pathComponents => pathComponents.First() == distinctComponent && pathComponents.Count() > 1).Select(pathComponents => pathComponents.Skip(1))); } } Notice how I've changed the signature of the parser to IEnumerable<IEnumerable<string>> This eliminates the nasty ToList() call, and by working with a "list of lists", the re-joining of the string has been eliminated as well. A helper method lets use both styles: IEnumerable<IEnumerable<string>> ToList() static void ParsePaths3(Node node, List<string> paths) { ParsePaths3(node, paths.Select(p => p.Split('/'))); } One thing that bothered me about version 3 is that it does one thing -- populates a tree of Node instances. That's great, but then something else has to take that model and do other things with it, like dump it to the console or populate an actual TreeView. Node This is where that "going the last mile" argument with regards to code quality / maintainability most often arises. The implementation in Version 3 is probably just great for the requirements, and everyone that uses it totally gets that it creates a "model" and we now can do things with that model for whatever our "view" wants. That is after all the concept behind the somewhat defunct Model-View-Controller (MVC) pattern and the more alive and kicking Model-View-ViewModel (MVVM) pattern. And for cases where there really is a physical model (like some database data) that is being represented, that is a fine approach, but quite frankly, this parser is not really creating a "model" in the same sense that MVC or MVVM thinks of a model. The parser is really just that -- in fact, it shouldn't even know or care about what it's constructing! Enter Inversion of Control. In version 4, we pass in a Func that performs the desired operation, defined externally, and returns something (the parser doesn't care what) that is passed in during recursion. Func public static void ParsePaths4<T>( T node, IEnumerable<IEnumerable<string>> splitPaths, Func<T, string, int, T> action, int depth = 0) { ++depth; foreach (var p2 in splitPaths.Select(p => p.First()).Distinct()) { T ret = action(node, p2, depth); ParsePaths4(ret, splitPaths.Where(p3 => p3.First() == p2 && p3.Count() > 1).Select(p4 => p4.Skip(1)), action, depth); } } For demo reasons, I also snuck in a "depth" counter in this code. Notice how the method has become a generic method, where T has replaced representing the type of current node, and the action that we're passing in is expected to return something of type T as well, which typically would represent the child node. T In the three previous versions, I was calling the parser like this: List<string> paths = new List<string>() { "a/b/c", "a/b/d", "c/d/e" }; Node root = new Node(); ParsePaths3(root, paths); Now we have to pass in the function that implements the specific behavior that we want the parser to implement. This reproduces what versions 1-3 were doing: ParsePaths4(root, paths, (node, text, depth) => { Node subNode = new Node() { Text = text }; node.Nodes.Add(subNode); return subNode; }); But because we now have a general purpose parser that is decoupled from the implementation of the what we do with the path components, we can write an implementation that outputs the results to the Console window: ParsePaths4<object>(null, paths, (node, text, depth) => { Console.WriteLine(new String(' ', depth * 2) + text); return null; }); Notice something interesting here--since we're not really doing anything other than printing the string, we're passing in null for the "root node" and returning null, as we don't care. As a result, the type of T has to be explicitly specified, which in this case is object, as the type cannot be inferred from null. null object And here is an implementation that populates a TreeView control: Program.ParsePaths4(tvDemo.Nodes, paths, (nodes, text, depth) => { TreeNode childNode = new TreeNode(text); nodes.Add(childNode); return childNode.Nodes; }); Now this gets a bit more complicated for the average programmer that may not be familiar or comfortable Action, Func, anonymous methods, etc. (though they should be somewhat comfortable if they're already using all that LINQ.) But I bring this up because there comes a point in the coding process where you can write code with elegance and maintainability in mind, but it requires a skilled developer to actually maintain the implementation, whereas a simpler implementation can be more easily handled by a more junior programmer, who will probably just copy & paste the code to change the behavior. And there, we have the tension between elegance, maintainability, re-usability, and skill. Action Whilt this has a been a somewhat lightweight article, but I think it covers some issues that we should all be conscious of when reworking prototype code (and deciding whether to rework prototype code!) Hopefully, this case study has provided some food for thought, or at least was a fun read. One thing I didn't think of, which someone in one of the SO implementations provided, is a default delimiter parameter that can be changed by the caller. Code improvement is a never ending process! This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) Paulo Zemek wrote:this is the kind of sentence that can be largely misunderstood. Sarah Jane Snow wrote:Really cracking, I wish you had been my mentor public static void ParsePaths4(Node nodes, List<string> paths) // overload helper function { var splitPaths = paths.Select(p => p.Split(cdelimiter)); Func<Node, string, int, Node> myAction = (node, text, depth) => { Node subNode = new Node() { Text = text }; node.Nodes.Add(subNode); return subNode; }; ParsePaths4(nodes, splitPaths, myAction); // This is the way the author calls the parser //ParsePaths4(nodes, splitPaths, (node, text, depth) => //{ // Node subNode = new Node() { Text = text }; // node.Nodes.Add(subNode); // return subNode; //}); } asiwel wrote:why, you went right back in V4 to p2s and p3s Shaun Stewart wrote:A very well thought out and described article Nelek wrote:and you call this "short"? General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
https://codeproject.freetls.fastly.net/Articles/1180636/A-Case-Study-of-Taking-a-Simple-Algorithm-from-Pro?msg=5394134
CC-MAIN-2021-39
refinedweb
2,199
58.32
In this project, you will give GeekOS the ability to load executable files from disk into memory. In a future project, you will add the ability to run user programs in a safe way, but for this project we have supplied code that will execute programs as part of a kernel process. Your job will be to parse an executable file and fill in appropriate structures so that our code can execute the program. You will know that you have successfully loaded the program when it produces the specified output. When a program is linked, the linker specifies that the text and data sections of a program should be laid out in a certain pattern in memory. This allows the linker to set specific memory addresses for code and data references. We will call this pattern the Executable Image. The work for this project will be to determine what the executable image should be for the loaded program (by using the ELF headers) and to fill in some structures expected by the loader with this information. If you pass the loader wrong information, it can not load and run the executable correctly. The ELF file format is described in the ELF Specification. The most relevant sections for this project are 1.1 to 1.4 and 2.1 to 2.7. The steps involved in identifying the sections of the ELF file are: 1) Read the ELF Header. The ELF header will always be at the very beginning of an ELF file. The ELF header contains information about how the rest of the file is laid out. You are interested only in the program headers. 2) Find the Program Headers, which specify where in the file to find the text and data sections and where they should end up in the executable image. There are a few simplifying assumptions you can make about the types and location of program headers. In the files you will be working with, there will always be one text header and one data header. The text header will be the first program header and the data header will be the second program header. This is not generally true of ELF files, but it will be true of the programs you will be responsible for. The file geekos/include/geekos/elf.h provides data types for structures which match the format of the ELF and program headers. See A trick in C: casting a pointer to a structure below for tips on how to parse the headers. You should start this project with the new GeekOS distribution. This distribution contains the same code as the proj0 distribution, with the addition of several files that add new functionality and a file that provides much of the structure for the code you will write for this project. In addition, it contains newer versions of the Cyclone runtime system, so be sure you are using the most recent version of Cyclone (version 0.8.2a) if you are going to use Cyclone (more on using Cyclone below). Note that until September 11, this version of Cyclone is installed in /afs/csic/projects/cmsc412/cyclone-0.8.2a/bin, so as not to influence those working on project 0. We added a simple filesystem for GeekOS called PFAT. PFAT provides basic routines for reading files from and writing files to disk. The "disks" that bochs reads from are just files in the LINUX filesystem. The disks are configured in the .bochsrc file. The .bochsrc file provided in the distribution includes an extra line that specifies how the disk should be interpreted, so do not simply overwrite it with the .bochsrc you used in project 0. If you look in the geekos/src/user directory, you'll see a file called a.c which contains the source code for the ELF program you will need to load. When you gmake the project, a.c will also be compiled and the resulting ELF file, called a.exe will be written to the disk image hd.img which is the file for the C: drive on bochs. The path name for a.exe will be /c/a.exe. Code has been added to geekos/src/geekos/main.c to start a new thread that will run a function called Spawner that loads /c/a.exe into memory, calls your Parse_ELF_Executable() then executes the program as you have set it up. If you have not properly built the disk or used the correct .bochsrc file, the Spawner will not be able to load /c/a.exe. Your code to load the ELF file will go into geekos/src/geekos/elf.c, where you must complete the Parse_ELF_Executable(char *exeFileData, ulong_t exeFileLength, struct Exe_Format *exeFormat) function. The executable file is read into memory and passed to you as the exeFileData argument, which is of course exeFileLength long. You will need to parse the ELF headers and fill out the Exe_Format structure.The body of this function is the only piece of code that needs be written for this project ! This is a rough guideline for what Parse_ELF_Executable() has to do: This diagram shows the relationship between the ELF File Image and the Executable Image in memory. You will know you have loaded the program correctly if you see the following output when you run bochs: Hi ! This is the first string Hi ! This is the second string Hi ! This is the third (and last) string If you see this you're happy If your program prints these lines, you'll know that you've done it correctly. If things go wrong, try setting the lprogdebug flag in geekos/src/geekos/lprog.c to 1, to print some debug statements on the glorious way towards loading and running the executable. Part of this project involves parsing the ELF header structures that were read from the file. There is a specification of exactly how the elements of the header will be laid out on disk. There's a simple way in C to access the different fields of the header as the fields of a C structure. In the file geekos/include/geekos/elf.h, there are structures defined that correspond to the ELF header (called elfHeader) and the ELF program header(called programHeader). typedef struct { unsigned char ident[16]; unsigned short type; unsigned short machine; unsigned int version; unsigned int entry; unsigned int phoff; unsigned int sphoff; unsigned int flags; unsigned short ehsize; unsigned short phentsize; unsigned short phnum; unsigned short shentsize; unsigned short shnum; unsigned short shstrndx; } elfHeader; typedef struct { unsigned int type; unsigned int offset; unsigned int vaddr; unsigned int paddr; unsigned int fileSize; unsigned int memSize; unsigned int flags; unsigned int alignment; } programHeader; The data at the beginning of the ELF file is laid out in exactly the same pattern as the elfHeader structure: there are 16 characters, followed by 2 short ints, followed by 5 ints, and so on. When you read in the ELF file, there will be a big chunk of memory containing the file contents and you will have a pointer-to-char that points to it. When you define a structure in C, the compiler will arrange things so that the memory for an instance of that structure will look exactly as you defined the structure. All the fields will be in the order you specified them, with no extra space in between. So the memory image that your char* points to is exactly the same as the memory image would be created if you created an elfHeader structure. So, here's the important part. If you create a pointer-to-elfHeader, and you point it at the memory you read in, the code that knows how to pull fields out of an elfHeader structure will be able to pull fields out of your memory. You will tell the pointer that the memory it's pointing at is an elfHeader structure, it will access the memory as if it were an elfHeader structure, and everything will work because the memory really is exactly the same as an elfHeader structure. Here's an example. Say we have a blah structure defined as: typedef struct {and a big chunk of memory pointed to by int number; char name[10]; int age; } blah; char * exeFileData We can create a pointer-to-blah and point it at our data: blah *myBlah = (blah *) exeFileData; We cast the pointer to make myBlah (well, the compiler, really...) think that exeFileData is a pointer-to-blah, rather than a pointer-to-char. Now we can access the fields of myBlah in the usual fashion: printf("My blah's name is: %s", myBlah->name); extern "C include" {The code inside the first set of braces is regular C code, and the export statement indicates that it should be callable from Cyclone. Note that all typedefs, struct definitions, #defines, etc. are exported by default; you do not need to put them in the export list. static elfHeader *getHeader(char *buf, ulong_t buflen) { // your code here } // perhaps other functions here } export { getHeader; } extern "C include" {For every variable or function appearing in one of these headers that you wish to use, you will need to add it to the export list. In general, it may turn out that the C type in the geekos file you are including does not correspond to the Cyclone type that you need. For example, C does not specify zero-termination or other qualifiers that Cyclone does. There is a facility for defining Cyclone types to override the C codes, called cyclone_override, that is described in the manual. You should not need that for this project. However, beware that you should not #include geekos/string.h or geekos/malloc.h, since these could result in problematic types (but your mileage may vary). #include <geekos/screen.h> #include <geekos/elf.h> // perhaps other includes here static elfHeader *getHeader(char *buf, ulong_t buflen) { // your code here } // perhaps other functions here } export { getHeader, ...; }
http://www.cs.umd.edu/class/fall2004/cmsc412/proj1/
crawl-002
refinedweb
1,666
70.94
![if gte IE 9]><![endif]><![if gte IE 9]><![endif]><![if gte IE 9]><![endif]><![if gte IE 9]><![endif]> I am interfacing Lcd to msp430f2132.Lcd display facing problem with garbage value.what changes should i do in this code.please help me. #include "msp430f2132.h"unsigned int i;#define LCM_DIR P1DIR#define LCM_OUT P1OUT//// Define symbolic LCM - MCU pin mappings// We've set DATA PIN TO 4,5,6,7 for easy translation//#define LCM_PIN_RS BIT5 // p2.5#define LCM_PIN_EN BIT6 // P2.6#define LCM_PIN_D7 BIT7 // P1.7#define LCM_PIN_D6 BIT6 // P1.6#define LCM_PIN_D5 BIT5 // P1.5#define LCM_PIN_D4 BIT4 // P1.4#define LCM_PIN_MASK ((LCM_PIN_RS | LCM_PIN_EN | LCM_PIN_D7 | LCM_PIN_D6 | LCM_PIN_D5 | LCM_PIN_D4))#define FALSE 0#define TRUE 1//// Routine Desc:// // This is the function that must be called // whenever the LCM needs to be told to // scan it's data bus.//// Parameters://// void.//// Return//// void.//void PulseLcm(){ // // pull EN bit low // LCM_OUT &= ~LCM_PIN_EN; __delay_cycles(200); // // pull EN bit high // LCM_OUT |= LCM_PIN_EN; __delay_cycles(200); // // pull EN bit low again // LCM_OUT &= (~LCM_PIN_EN); __delay_cycles(200);}//// Routine Desc:// // Send a byte on the data bus in the 4 bit mode// This requires sending the data in two chunks.// The high nibble first and then the low nible//// Parameters://// ByteToSend - the single byte to send//// IsData - set to TRUE if the byte is character data// FALSE if its a command//// Return//// void.//void SendByte(char ByteToSend, int IsData){ // // clear out all pins // LCM_OUT &= (~LCM_PIN_MASK); // // set High Nibble (HN) - // usefulness of the identity mapping // apparent here. We can set the // DB7 - DB4 just by setting P1.7 - P1.4 // using a simple assignment // LCM_OUT |= (ByteToSend & 0xF0); if (IsData == TRUE) { LCM_OUT |= LCM_PIN_RS; } else { LCM_OUT &= ~LCM_PIN_RS; } // // we've set up the input voltages to the LCM. // Now tell it to read them. // PulseLcm(); // // set Low Nibble (LN) - // usefulness of the identity mapping // apparent here. We can set the // DB7 - DB4 just by setting P1.7 - P1.4 // using a simple assignment // LCM_OUT &= (~LCM_PIN_MASK); LCM_OUT |= ((ByteToSend & 0x0F) << 4); if (IsData == TRUE) { LCM_OUT |= LCM_PIN_RS; } else { LCM_OUT &= ~LCM_PIN_RS; } // // we've set up the input voltages to the LCM. // Now tell it to read them. // PulseLcm();}//// Routine Desc:// // Set the position of the cursor on the screen// // Parameters://// Row - zero based row number//// Col - zero based col number// // Return//// void.//void LcmSetCursorPosition(char Row, char Col){ char address; // // construct address from (Row, Col) pair // if (Row == 0) { address = 0; } else { address = 0x40; } address |= Col; SendByte(0x80 | address, FALSE);}//// Routine Desc:// // Clear the screen data and return the// cursor to home position// // Parameters://// void.// // Return//// void.//void ClearLcmScreen(){ // // Clear display, return home // SendByte(0x01, FALSE); SendByte(0x02, FALSE);}//// Routine Desc:// // Initialize the LCM after power-up.// // Note: This routine must not be called twice on the// LCM. This is not so uncommon when the power// for the MCU and LCM are separate.// // Parameters://// void.// // Return//// void.//void InitializeLcm(void){ // // set the MSP pin configurations // and bring them to low // LCM_DIR |= LCM_PIN_MASK; LCM_OUT &= ~(LCM_PIN_MASK); // // wait for the LCM to warm up and reach // active regions. Remember MSPs can power // up much faster than the LCM. // //__delay_cycles(1000000); // // initialize the LCM module // // 1. Set 4-bit input // LCM_OUT &= ~LCM_PIN_RS; __delay_cycles(5000); LCM_OUT &= ~LCM_PIN_EN; __delay_cycles(5000); LCM_OUT = 0x20; __delay_cycles(5000); PulseLcm(); __delay_cycles(5000); // // set 4-bit input - second time. // (as reqd by the spec.) // SendByte(0x28, FALSE); __delay_cycles(5000); // // 2. Display on, cursor on, blink cursor // SendByte(0x0E, FALSE); __delay_cycles(5000); // // 3. Cursor move auto-increment // SendByte(0x06, FALSE); __delay_cycles(5000);}//// Routine Desc//// Print a string of characters to the screen//// Parameters:// // Text - null terminated string of chars//// Returns// // void.//void PrintStr(char *Text){ char *c; c = Text; while ((c != 0) && (*c != 0)) { SendByte(*c, TRUE); __delay_cycles(5000); c++; }}//// Routine Desc//// main entry point to the sketch//// Parameters// // void.//// Returns// // void.//void main(void){ WDTCTL = WDTPW + WDTHOLD; // Stop watchdog timer DCOCTL=CAL_DCO_1MHZ ; BCSCTL1=CAL_BC1_1MHZ ; P2DIR |=0xA0; P2OUT|=0xA0; for(;;) { InitializeLcm(); ClearLcmScreen(); PrintStr("H!"); i = 50000; // Delay do (i--); while (i != 0);}} haridini belanfacing problem with garbage value.what changes should i do in this code.please help me. Before you jump in and start changing stuff, you need to work out why you are getting "garbage". This is called debugging - and is an essential part of any form of development. Once you know out why you are getting "garbage", it should be clear what you need to change in order to correct the problem(s)! If you just make random changes whithout knowing why you are getting "garbage", you will not know if you have actually fixed the underlying problem(s) - or just masked the symptoms... So, the first thing you need to check is that your code actually produces signals to the LCD that are fully in conformance with the LCD's specifications. You should pay particular attention to timing... Here's some debugging tips to get you started: In reply to Andy Ne.
http://e2e.ti.com/support/microcontrollers/msp430/f/166/p/181067/652411
CC-MAIN-2015-14
refinedweb
810
65.83
Disclaimer: This post is a post about my opinions and experiences with the pipeline portal application, Cocoon. This disclaimer is included to disclaim any possibility that I may be wrong about this. Period. :-P What is Apache Cocoon? On paper its an impressive framework to construct a web application around practically any source medium, dynamic or otherwise. Cocoon is largely based on the XML philosophy (that everything has to be extremely complex and difficult to use to store simple information (citation)). Cocoon also can generate a user interface for a component application from the component applications XML output. Apache Cocoon is a modular framework where each module is broken into a "Block". Each block enhances the Cocoon functionality in some respect. For the purposes of this post I will say that most of my experience is with the JSR-168 Portal block. This block is supposed to be a JSR-168 compliant portal application which can process and render portlets. What is a Portlet? I'm not going to go into much detail here but as an overview a Portlet is a Java application which renders into a HTML container within a Portal application. From the users perspective a Portlet is a small box on a HTML page, and that a Portal can contain many portlets and organise how they are rendered, positioned, etc. Portlets can be a little long winded to design complex applications with but they've never caused my hair to fall out. What is my opinion? Ah the important question! Well let's start at the beginning. I am currently working on a project which involves employing a Portlet application on a currently running copy of Cocoon at a client organisation. Our software was originally designed for Pluto (another Apache project - a very nice stand alone Portal application) as it was a JSR-168 portal and we were assured that if it ran in Pluto it would also run on Cocoon. Our first problems came into effect when we tried to install our Portlet. The copy of Cocoon we downloaded most of the example configurations did not work. We had a time trying to determine how to install the Portlet, we eventually found out we have to edit several XML files and configure the Portal Block to recognise our web application. I am not a fan of the XML philosophy but I can recognise that XML has its uses. This took a while to figure out and complete. Our portlet application was still not being recognised, we were confused. After several hours of looking for fixes we found out that we had to delete certain .jars located around the package we downloaded from the Cocoon project website. I am actually very confused why the Cocoon installation we had required so many broken components. The Apache project are smart people so there is probably an explaination for it, but I've yet to find it. Further more, we had serious problems replicating our configuration. We ended up just copying our entire tomcat directory between workstations for other developers to work with it. Even just suppling our entire web application directory, including Cocoon and all the other installed applications to another work station yielded no positive results. My scalp becoming a mess of torn off skin after two days I was beginning to dislike Cocoon. Now we're at a point where Cocoon recognises and runs our portlet. However we've hit a wall. Our application uses several Servlets and Java Server Page components to modularise computation and keep coupling down to a minimum. Inter portlet/servlet communication is arrived out by the use of the session to hand other components the data they require for processing. To quote from the JavaDoc of the JSR-168 PortletSession interface specification: All objects stored in the session using the APPLICATION_SCOPE must be available to all the portlets, servlets and JSPs. Attributes stored in the PORTLET_SCOPE are not protected from other web components of the portlet application. They are just conveniently namespaced. The crux of the problem is as follows. We diagnosed the problem and determined that the session ID sent to our Servlet components was entirely different to that of the Portlet components. Interestingly we found that there are two IDs sent, one of which is identical in the requests of both the Servlet and Portlet. The other is completely different - and that is the one Cocoon and our Servlets are using. I read a conversation from one of the Portal block developers on the Cocoon project where he was arguing that Cocoon should support the specification but doesn't. The argument for why it doesn't is largely based on the "session=bad" argument. This has made it very hard for us to develop for and has cost me, and my company significant effort. A further issue we encountered is less of a Cocoon problem (as we weren't playing nice ourselves) but more of an example of very curious behaviour. Cocoon uses precision buffering system where it calculates the exact output length from the Portlet and only prints that exact length. Our client uses international characters that will be required to be printed in the output. The behaviour it exhibits when encountering these characters is very curious. Each one of these characters takes up two bytes in its output buffer (which admittedly all of these should be made into HTML entities). The interesting point is that Cocoon doesn't count the byte length of the buffer but instead just the number of characters that are supposed to be printed and it takes this number as being the byte length of the output. In conclusion I am hating Cocoon right now. It was a pain to set up, a pain to maintain, and a pain to develop for. If you are looking at Portal applications I would recommend Pluto. Links Apache Cocoon's project page
http://neverfear.org/blog/view/4/My_beef_with_Apache_Cocoon
CC-MAIN-2017-22
refinedweb
986
54.22
Working With Tree Data Structures¶ Contents - Working With Tree Data Structures - Trees - Reading and Writing Newick Trees - Understanding ETE Trees - Basic tree attributes - Browsing trees (traversing) - Getting Leaves, Descendants and Node’s Relatives - Traversing (browsing) trees - Advanced traversing (stopping criteria) - Iterating instead of Getting - Finding nodes by their attributes - Checking the monophyly of attributes within a tree - Caching tree content for faster lookup operations - Node annotation - Comparing Trees - Modifying Tree Topology - Pruning trees - Concatenating trees - Copying (duplicating) trees - Solving multifurcations - Tree Rooting - Working with branch distances Trees¶ Trees are a widely-used type of data structure that emulates a tree design with a set of linked nodes. Formally, a tree is considered an acyclic and connected graph. Each node in a tree has zero or more child nodes, which are below it in the tree (by convention, trees grow down, not up as they do in nature). A node that has a child is called the child’s parent node (or ancestor node, or superior). A node has at most one parent. The height of a node is the length of the longest downward path to a leaf from that node. The height of the root is the height of the tree. The depth of a node is the length of the path to its root (i.e., its root path). - The topmost node in a tree is called the root node. Being the topmost node, the root node will not have parents. It is the node at which operations on the tree commonly begin (although some algorithms begin with the leaf nodes and work up ending at the root). All other nodes can be reached from it by following edges or links. Every node in a tree can be seen as the root node of the subtree rooted at that node. - Nodes at the bottommost level of the tree are called leaf nodes. Since they are at the bottommost level, they do not have any children. - An internal node or inner node is any node of a tree that has child nodes and is thus not a leaf node. - A subtree is a portion of a tree data structure that can be viewed as a complete tree in itself. Any node in a tree T, together with all the nodes below it, comprise a subtree of T. The subtree corresponding to the root node is the entire tree; the subtree corresponding to any other node is called a proper subtree (in analogy to the term proper subset). In bioinformatics, trees are the result of many analyses, such as phylogenetics or clustering. Although each case entails specific considerations, many properties remains constant among them. In this respect, ETE is a python toolkit that assists in the automated manipulation, analysis and visualization of any type of hierarchical trees. It provides general methods to handle and visualize tree topologies, as well as specific modules to deal with phylogenetic and clustering trees. Reading and Writing Newick Trees¶ The Newick format is one of the most widely used standard representation of trees in bioinformatics. It uses nested parentheses to represent hierarchical data structures as text strings. The original newick standard is able to encode information about the tree topology, branch distances and node names. Nevertheless, it is not uncommon to find slightly different formats using the newick standard. ETE can read and write many of them: Formats labeled as flexible allow for missing information. For instance, format 0 will be able to load a newick tree even if it does not contain branch support information (it will be initialized with the default value). However, format 2 would raise an exception. In other words, if you want to control that your newick files strictly follow a given pattern you should use strict format definitions. Reading newick trees¶ In order to load a tree from a newick text string you can use the constructor TreeNode or its Tree alias, provided by the main module ete2. You will only need to pass a text string containing the newick structure and the format that should be used to parse it (0 by default). Alternatively, you can pass the path to a text file containing the newick string. from ete2 import Tree # Loads a tree structure from a newick string. The returned variable ’t’ is the root node for the tree. t = Tree("(A:1,(B:1,(E:1,D:1):0.5):0.5);" ) # Load a tree structure from a newick file. t = Tree("genes_tree.nh") # You can also specify the newick format. For instance, for named internal nodes we will use format 1. t = Tree("(A:1,(B:1,(E:1,D:1)Internal_1:0.5)Internal_2:0.5)Root;", format=1) Writing newick trees¶ Any ETE tree instance can be exported using newick notation using the Tree.write() method, which is available in any tree node instance. It also allows for format selection (Reading and Writing Newick Trees), so you can use the same function to convert between newick formats. from ete2 import Tree # Loads a tree with internal node names t = Tree("(A:1,(B:1,(E:1,D:1)Internal_1:0.5)Internal_2:0.5)Root;", format=1) # And prints its newick using the default format print t.write() # (A:1.000000,(B:1.000000,(E:1.000000,D:1.000000)1.000000:0.500000)1.000000:0.500000); # To print the internal node names you need to change the format: print t.write(format=1) # (A:1.000000,(B:1.000000,(E:1.000000,D:1.000000)Internal_1:0.500000)Internal_2:0.500000); # We can also write into a file t.write(format=1, outfile="new_tree.nw") Understanding ETE Trees¶ Any tree topology can be represented as a succession of nodes connected in a hierarchical way. Thus, for practical reasons, ETE makes no distinction between tree and node concepts, as any tree can be represented by its root node. This allows to use any internal node within a tree as another sub-tree instance. Once trees are loaded, they can be manipulated as normal python objects. Given that a tree is actually a collection of nodes connected in a hierarchical way, what you usually see as a tree will be the root node instance from which the tree structure is hanging. However, every node within a ETE’s tree structure can be also considered a subtree. This means, for example, that all the operational methods that we will review in the following sections are available at any possible level within a tree. Moreover, this feature will allow you to separate large trees into smaller partitions, or concatenate several trees into a single structure. For this reason, you will find that the TreeNode and Tree classes are synonymous. Basic tree attributes¶ Each tree node has two basic attributes used to establish its position in the tree: TreeNode.up and TreeNode.children. The first is a pointer to parent’s node, while the later is a list of children nodes. Although it is possible to modify the structure of a tree by changing these attributes, it is strongly recommend not to do it. Several methods are provided to manipulate each node’s connections in a safe way (see Comparing Trees). In addition, three other basic attributes are always present in any tree node instance: In addition, several methods are provided to perform basic operations on tree node instances: This is an example on how to access such attributes: from ete2 import Tree t = Tree() # We create a random tree topology t.populate(15) print t print t.children print t.get_children() print t.up print t.name print t.dist print t.is_leaf() print t.get_tree_root() print t.children[0].get_tree_root() print t.children[0].children[0].get_tree_root() # You can also iterate over tree leaves using a simple syntax for leaf in t: print leaf.name Root node on unrooted trees?¶ When a tree is loaded from external sources, a pointer to the top-most node is returned. This is called the tree root, and it will exist even if the tree is conceptually considered as unrooted. This is, the root node can be considered as the master node, since it represents the whole tree structure. Unrooted trees can be identified as trees in which master root node has more than two children. from ete2 import Tree unrooted_tree = Tree( "(A,B,(C,D));" ) print unrooted_tree # # /-A # | #----|--B # | # | /-C # \---| # \-D rooted_tree = Tree( "((A,B).(C,D));" ) print rooted_tree # # /-A # /---| # | \-B #----| # | /-C # \---| # \-D Browsing trees (traversing)¶ One of the most basic operations for tree analysis is tree browsing. This is, essentially, visiting nodes within a tree. ETE provides a number of methods to search for specific nodes or to navigate over the hierarchical structure of a tree. Getting Leaves, Descendants and Node’s Relatives¶ TreeNode instances contain several functions to access their descendants. Available methods are self explanatory: Traversing (browsing) trees¶ Often, when processing trees, all nodes need to be visited. This is called tree traversing. There are different ways to traverse a tree structure depending on the order in which children nodes are visited. ETE implements the three most common strategies: preorder, levelorder and postorder. The following scheme shows the differences in the strategy for visiting nodes (note that in both cases the whole tree is browsed): - preorder: 1)Visit the root, 2) Traverse the left subtree , 3) Traverse the right subtree. - postorder: 1) Traverse the left subtree , 2) Traverse the right subtree, 3) Visit the root - levelorder (default): every node on a level before is visited going to a lower level Note - Preorder traversal sequence: F, B, A, D, C, E, G, I, H (root, left, right) - Inorder traversal sequence: A, B, C, D, E, F, G, H, I (left, root, right); note how this produces a sorted sequence - Postorder traversal sequence: A, C, E, D, B, H, I, G, F (left, right, root) - Level-order traversal sequence: F, B, G, A, D, I, C, E, H Every node in a tree includes a TreeNode.traverse() method, which can be used to visit, one by one, every node node under the current partition. In addition, the TreeNode.iter_descendants() method can be set to use either a post- or a preorder strategy. The only different between TreeNode.traverse() and TreeNode.iter_descendants() is that the first will include the root node in the iteration. strategy can take one of the following values: "postorder", "preorder" or "levelorder" # we load a tree t = Tree('((((H,K)D,(F,I)G)B,E)A,((L,(N,Q)O)J,(P,S)M)C);', format=1) for node in t.traverse("postorder"): # Do some analysis on node print node.name # If we want to iterate over a tree excluding the root node, we can # use the iter_descendant method for node in t.iter_descendants("postorder"): # Do some analysis on node print node.name Additionally, you can implement your own traversing function using the structural attributes of nodes. In the following example, only nodes between a given leaf and the tree root are visited. from ete2 import Tree tree = Tree( "(A:1,(B:1,(C:1,D:1):0.5):0.5);" ) # Browse the tree from a specific leaf to the root node = t.search_nodes(name="C")[0] while node: print node node = node.up Advanced traversing (stopping criteria)¶ Collapsing nodes while traversing (custom is_leaf definition)¶ From version 2.2, ETE supports the use of the is_leaf_fn argument in most of its traversing functions. The value of is_leaf_fn is expected to be a pointer to any python function that accepts a node instance as its first argument and returns a boolean value (True if node should be considered a leaf node). By doing so, all traversing methods will use such a custom function to decide if a node is a leaf. This becomes specially useful when dynamic collapsing of nodes is needed, thus avoiding to prune the same tree in many different ways. For instance, given a large tree structure, the following code will export the newick of the pruned version of the topology, where nodes grouping the same tip labels are collapsed. from ete2 import Tree def collapsed_leaf(node): if len(node2labels[node]) == 1: return True else: return False t = Tree("((((a,a,a)a,a)aa, (b,b)b)ab, (c, (d,d)d)cd);", format=1) print t # We create a cache with every node content node2labels = t.get_cached_content(store_attr="name") print t.write(is_leaf_fn=collapsed_leaf) # /-a # | # /-|--a # | | # /-| \-a # | | # /-| \-a # | | # | | /-b # --| \-| # | \-b # | # | /-c # \-| # | /-d # \-| # \-d # We can even load the collapsed version as a new tree t2 = Tree( t.write(is_leaf_fn=collapsed_leaf) ) print t2 # /-aa # /-| # | \-b # --| # | /-c # \-| # \-d Another interesting use of this approach is to find the first matching nodes in a given tree that match a custom set of criteria, without browsing the whole tree structure. Let’s say we want get all deepest nodes in a tree whose branch length is larger than one: from ete2 import Tree t = Tree("(((a,b)ab:2, (c, d)cd:2)abcd:2, ((e, f):2, g)efg:2);", format=1) def processable_node(node): if node.dist > 1: return True else: return False for leaf in t.iter_leaves(is_leaf_fn=processable_node): print leaf # /-a # /-| # | \-b # --| # | /-c # \-| # \-d # # /-e # /-| # --| \-f # | # \-g Iterating instead of Getting¶ As commented previously, methods starting with get_ are all prepared to return results as a closed list of items. This means, for instance, that if you want to process all tree leaves and you ask for them using the TreeNode.get_leaves() method, the whole tree structure will be browsed before returning the final list of terminal nodes. This is not a problem in most of the cases, but in large trees, you can speed up the browsing process by using iterators. Most get_ methods have their homologous iterator functions. Thus, TreeNode.get_leaves() could be substituted by TreeNode.iter_leaves(). The same occurs with TreeNode.iter_descendants() and TreeNode.iter_search_nodes(). When iterators are used (note that is only applicable for looping), only one step is processed at a time. For instance, TreeNode.iter_search_nodes() will return one match in each iteration. In practice, this makes no differences in the final result, but it may increase the performance of loop functions (i.e. in case of finding a match which interrupts the loop). Finding nodes by their attributes¶ Both terminal and internal nodes can be located by searching along the tree structure. Several methods are available: Search_all nodes matching a given criteria¶ A custom list of nodes matching a given name can be easily obtain through the TreeNode.search_node() function. from ete2 import Tree t = Tree( '((H:1,I:1):0.5, A:1, (B:1,(C:1,D:1):0.5):0.5);' ) print t # /-H # /--------| # | \-I # | #---------|--A # | # | /-B # \--------| # | /-C # \--------| # \-D # I get D D = t.search_nodes(name="D")[0] # I get all nodes with distance=0.5 nodes = t.search_nodes(dist=0.5) print len(nodes), "nodes have distance=0.5" # We can limit the search to leaves and node names (faster method). D = t.get_leaves_by_name(name="D") print D Search nodes matching a given criteria (iteration)¶ A limitation of the TreeNode.search_nodes() method is that you cannot use complex conditional statements to find specific nodes. When search criteria is too complex, you may need to create your own search function. from ete2 import Tree def search_by_size(node, size): "Finds nodes with a given number of leaves" matches = [] for n in node.traverse(): if len(n) == size: matches.append(n) return matches t = Tree() t.populate(40) # returns nodes containing 6 leaves search_by_size(t, size=6) Find the first common ancestor¶ Searching for the first common ancestor of a given set of nodes it is a handy way of finding internal nodes. from ete2 import Tree t = Tree( "((H:0.3,I:0.1):0.5, A:1, (B:0.4,(C:0.5,(J:1.3, (F:1.2, D:0.1):0.5):0.5):0.5):0.5);" ) print t ancestor = t.get_common_ancestor("C", "J", "B") Custom searching functions¶ A limitation of the previous methods is that you cannot use complex conditional statements to find specific nodes. However you can user traversing methods to meet your custom filters. A possible general strategy would look like this: from ete2 import Tree t = Tree("((H:0.3,I:0.1):0.5, A:1, (B:0.4,(C:1,D:1):0.5):0.5);") # Create a small function to filter your nodes def conditional_function(node): if node.dist > 0.3: return True else: return False # Use previous function to find matches. Note that we use the traverse # method in the filter function. This will iterate over all nodes to # assess if they meet our custom conditions and will return a list of # matches. matches = filter(conditional_function, t.traverse()) print len(matches), "nodes have ditance >0.3" # depending on the complexity of your conditions you can do the same # in just one line with the help of lambda functions: matches = filter(lambda n: n.dist>0.3 and n.is_leaf(), t.traverse() ) print len(matches), "nodes have ditance >0.3 and are leaves" Shortcuts¶ Finally, ETE implements a built-in method to find the first node matching a given name, which is one of the most common tasks needed for tree analysis. This can be done through the operator & (AND). Thus, TreeNode&”A” will always return the first node whose name is “A” and that is under the tree “MyTree”. The syntaxis may seem confusing, but it can be very useful in some situations. from ete2 import Tree t = Tree("((H:0.3,I:0.1):0.5, A:1, (B:0.4,(C:1,(J:1, (F:1, D:1):0.5):0.5):0.5):0.5);") # Get the node D in a very simple way D = t&"D" # Get the path from B to the root node = D path = [] while node.up: path.append(node) node = node.up print t # I substract D node from the total number of visited nodes print "There are", len(path)-1, "nodes between D and the root" # Using parentheses you can use by-operand search syntax as a node # instance itself Dsparent= (t&"C").up Bsparent= (t&"B").up Jsparent= (t&"J").up # I check if nodes belong to certain partitions print "It is", Dsparent in Bsparent, "that C's parent is under B's ancestor" print "It is", Dsparent in Jsparent, "that C's parent is under J's ancestor" Checking the monophyly of attributes within a tree¶ Although monophyly is actually a phylogenetic concept used to refer to a set of species that group exclusively together within a tree partition, the idea can be easily exported to any type of trees. Therefore, we could consider that a set of values for a given node attribute present in our tree is monophyletic, if such values group exclusively together as a single tree partition. If not, the corresponding relationship connecting such values (para or poly-phyletic) could be also be inferred. The TreeNode.check_monophyly() method will do so when a given tree is queried for any custom attribute. from ete2 import Tree t = Tree("((((((a, e), i), o),h), u), ((f, g), j));") print t # /-a # /-| # /-| \-e # | | # /-| \-i # | | # /-| \-o # | | # /-| \-h # | | # | \-u # --| # | /-f # | /-| # \-| \-g # | # \-j # We can check how, indeed, all vowels are not monophyletic in the # previous tree, but polyphyletic (a foreign label breaks its monophyly) print t.check_monophyly(values=["a", "e", "i", "o", "u"], target_attr="name") # however, the following set of vowels are monophyletic print t.check_monophyly(values=["a", "e", "i", "o"], target_attr="name") # A special case of polyphyly, called paraphyly, is also used to # define certain type of grouping. See this wikipedia article for # disambiguation: print t.check_monophyly(values=["i", "o"], target_attr="name") Finally, the TreeNode.get_monophyletic() method is also provided, which allows to return a list of nodes within a tree where a given set of attribute values are monophyletic. Note that, although a set of values are not monophyletic regarding the whole tree, several independent monophyletic partitions could be found within the same topology. For instance, in the following example, all clusters within the same tree exclusively grouping a custom set of annotations are obtained. from ete2 import Tree t = Tree("((((((4, e), i), o),h), u), ((3, 4), (i, june)));") # we annotate the tree using external data colors = {"a":"red", "e":"green", "i":"yellow", "o":"black", "u":"purple", "4":"green", "3":"yellow", "1":"white", "5":"red", "june":"yellow"} for leaf in t: leaf.add_features(color=colors.get(leaf.name, "none")) print t.get_ascii(attributes=["name", "color"], show_internal=False) # /-4, green # /-| # /-| \-e, green # | | # /-| \-i, yellow # | | # /-| \-o, black # | | # /-| \-h, none # | | # | \-u, purple # --| # | /-3, yellow # | /-| # | | \-4, green # \-| # | /-i, yellow # \-| # \-june, yellow print "Green-yellow clusters:" # And obtain clusters exclusively green and yellow for node in t.get_monophyletic(values=["green", "yellow"], target_attr="color"): print node.get_ascii(attributes=["color", "name"], show_internal=False) # Green-yellow clusters: # # /-green, 4 # /-| # --| \-green, e # | # \-yellow, i # # /-yellow, 3 # /-| # | \-green, 4 # --| # | /-yellow, i # \-| # \-yellow, june Caching tree content for faster lookup operations¶ If your program needs to access to the content of different nodes very frequently, traversing the tree to get the leaves of each node over and over will produce significant slowdowns in your algorithm. From version 2.2 ETE provides a convenient methods to cache frequent data. The method TreeNode.get_cached_content() returns a dictionary in which keys are node instances and values represent the content of such nodes. By default, content is understood as a list of leave nodes, so looking up size or tip names under a given node will be instant. However, specific attributes can be cached by setting a custom store_attr value. from ete2 import Tree t = Tree() t.populate(50) node2leaves = t.get_cached_content() # lets now print the size of each node without the need of # recursively traverse for n in t.traverse(): print "node %s contains %s tips" %(n.name, len(node2leaves[n])) Node annotation¶ Every node contains three basic attributes: name ( TreeNode.name), branch length ( TreeNode.dist) and branch support ( TreeNode.support). These three values are encoded in the newick format. However, any extra data could be linked to trees. This is called tree annotation. The TreeNode.add_feature() and TreeNode.add_features() methods allow to add extra attributes (features) to any node. The first allows to add one one feature at a time, while the second can be used to add many features with the same call. Once extra features are added, you can access their values at any time during the analysis of a tree. To do so, you only need to access to the TreeNode.feature_name attributes. Similarly, TreeNode.del_feature() can be used to delete an attribute. import random from ete2 import Tree # Creates a tree t = Tree( '((H:0.3,I:0.1):0.5, A:1, (B:0.4,(C:0.5,(J:1.3, (F:1.2, D:0.1):0.5):0.5):0.5):0.5);' ) # Let's locate some nodes using the get common ancestor method ancestor=t.get_common_ancestor("J", "F", "C") # the search_nodes method (I take only the first match ) A = t.search_nodes(name="A")[0] # and using the shorcut to finding nodes by name C= t&"C" H= t&"H" I= t&"I" # Let's now add some custom features to our nodes. add_features can be # used to add many features at the same time. C.add_features(vowel=False, confidence=1.0) A.add_features(vowel=True, confidence=0.5) ancestor.add_features(nodetype="internal") # Or, using the oneliner notation (t&"H").add_features(vowel=False, confidence=0.2) # But we can automatize this. (note that i will overwrite the previous # values) for leaf in t.traverse(): if leaf.name in "AEIOU": leaf.add_features(vowel=True, confidence=random.random()) else: leaf.add_features(vowel=False, confidence=random.random()) # Now we use these information to analyze the tree. print "This tree has", len(t.search_nodes(vowel=True)), "vowel nodes" print "Which are", [leaf.name for leaf in t.iter_leaves() if leaf.vowel==True] # But features may refer to any kind of data, not only simple # values. For example, we can calculate some values and store them # within nodes. # #) # Prints the precomputed nodes print "These are nodes under ancestor with long branches", \ [n.name for n in ancestor.long_branch_nodes] # We can also use the add_feature() method to dynamically add new features. label = raw_input("custom label:") value = raw_input("custom label value:") ancestor.add_feature(label, value) print "Ancestor has now the [", label, "] attribute with value [", value, "]" Unfortunately, newick format does not support adding extra features to a tree. Because of this drawback, several improved formats haven been (or are being) developed to read and write tree based information. Some of these new formats are based in a completely new standard (Phylogenetic XML standards), while others are extensions of the original newick formar (NHX). Currently, ETE includes support for the New Hampshire eXtended format (NHX), which uses the original newick standard and adds the possibility of saving additional date related to each tree node. Here is an example of a extended newick representation in which extra information is added to an internal node: (A:0.35,(B:0.72,(D:0.60,G:0.12):0.64[&&NHX:conf=0.01:name=INTERNAL]):0.56); As you can notice, extra node features in the NHX format are enclosed between brackets. ETE is able to read and write features using such format, however, the encoded information is expected to be exportable as plain text. The NHX format is automatically detected when reading a newick file, and the detected node features are added using the TreeNode.add_feature() method. Consequently, you can access the information by using the normal ETE’s feature notation: node.feature_name. Similarly, features added to a tree can be included within the normal newick representation using the NHX notation. For this, you can call the TreeNode.write() method using the features argument, which is expected to be a list with the features names that you want to include in the newick string. Note that all nodes containing the suplied features will be exposed into the newick string. Use an empty features list ( features=[]) to include all node’s data into the newick string. import random from ete2 import Tree # Creates a normal tree t = Tree('((H:0.3,I:0.1):0.5, A:1,(B:0.4,(C:0.5,(J:1.3,(F:1.2, D:0.1):0.5):0.5):0.5):0.5);') print t # Let's locate some nodes using the get common ancestor method ancestor=t.get_common_ancestor("J", "F", "C") # Let's label leaf nodes for leaf in t.traverse(): if leaf.name in "AEIOU": leaf.add_features(vowel=True, confidence=random.random()) else: leaf.add_features(vowel=False, confidence=random.random()) #) print print "NHX notation including vowel and confidence attributes" print print t.write(features=["vowel", "confidence"]) print print "NHX notation including all node's data" print # Note that when all features are requested, only those with values # equal to text-strings or numbers are considered. "long_branch_nodes" # is not included into the newick string. print t.write(features=[]) print print "basic newick formats are still available" print print t.write(format=9, features=["vowel"]) # You don't need to do anything speciall to read NHX notation. Just # specify the newick format and the NHX tags will be automatically # detected. nw = """ (((ADH2:0.1[&&NHX:S=human:E=1.1.1.1], ADH1:0.11[&&NHX:S=human:E=1.1.1.1]) :0.05[&&NHX:S=Primates:E=1.1.1.1:D=Y:B=100], ADHY:0.1[&&NHX:S=nematode: E=1.1.1.1],ADHX:0.12[&&NHX:S=insect:E=1.1.1.1]):0.1[&&NHX:S=Metazoa: E=1.1.1.1:D=N], (ADH4:0.09[&&NHX:S=yeast:E=1.1.1.1],ADH3:0.13[&&NHX:S=yeast: E=1.1.1.1], ADH2:0.12[&&NHX:S=yeast:E=1.1.1.1],ADH1:0.11[&&NHX:S=yeast:E=1.1.1.1]):0.1 [&&NHX:S=Fungi])[&&NHX:E=1.1.1.1:D=N]; """ # Loads the NHX example found at t = Tree(nw) # And access node's attributes. for n in t.traverse(): if hasattr(n,"S"): print n.name, n.S Comparing Trees¶ Calculate distances between trees¶ The :Tree:`compare` function allows to calculate distances between two trees based on any node feature (i.e. name, species, other tags) using robinson-foulds and edge compatibility distances. It automatically handles differences in tree sizes, shared nodes and duplicated feature names. - result[“rf”] = robinson-foulds distance between the two trees. (average of robinson-foulds distances if target tree contained duplication and was split in several subtrees) - result[“max_rf”] = Maximum robinson-foulds distance expected for this comparison - result[“norm_rf”] = normalized robinson-foulds distance (from 0 to 1) - result[“effective_tree_size”] = the size of the compared trees, which are pruned to the common shared nodes. - result[“ref_edges_in_source”] = compatibility score of the target tree with respect to the source tree (how many edges in reference are found in the source) - result[“source_edges_in_ref”] = compatibility score of the source tree with respect to the reference tree (how many edges in source are found in the reference) - result[“source_subtrees”] = number of subtrees in the source tree (1 if do not contain duplications) - result[“common_edges”] = a set of common edges between source tree and reference - result[“source_edges”] = the set of edges found in the source tree - result[“ref_edges”] = the set of edges found in the reference tree - result[“treeko_dist”] = TreeKO speciation distance for comparisons including duplication nodes. Robinson-foulds distance¶ Two tree topologies can be compared using ETE and the Robinson-Foulds (RF) metric. The method TreeNode.robinson_foulds() available for any ETE tree node allows to: - compare two tree topologies by their name labels (default) or any other annotated feature in the tree. - compare topologies of different size and content. When two trees contain a different set of labels, only shared leaves will be used. - examine size and content of matching and missing partitions. Since the method return the list of partitions found in both trees, details about matching partitions can be obtained easily. In the following example, several of above mentioned features are shown: from ete2 import Tree t1 = Tree('(((a,b),c), ((e, f), g));') t2 = Tree('(((a,c),b), ((e, f), g));') rf, max_rf, common_leaves, parts_t1, parts_t2 = t1.robinson_foulds(t2) print t1, t2 # We can also compare trees sharing only part of their labels t1 = Tree('(((a,b),c), ((e, f), g));') t2 = Tree('(((a,c),b), (g, H));') rf, max_rf, common_leaves, parts_t1, parts_t2 = t1.robinson_foulds(t2) print t1, t2 print "Same distance holds even for partially overlapping trees" Modifying Tree Topology¶ Creating Trees from Scratch¶ If no arguments are passed to the TreeNode class constructor, an empty tree node will be returned. Such an orphan node can be used to populate a tree from scratch. For this, the TreeNode.up, and TreeNode.children attributes should never be used (unless it is strictly necessary). Instead, several methods exist to manipulate the topology of a tree: from ete2 import Tree t = Tree() # Creates an empty tree A = t.add_child(name="A") # Adds a new child to the current tree root # and returns it B = t.add_child(name="B") # Adds a second child to the current tree # root and returns it C = A.add_child(name="C") # Adds a new child to one of the branches D = C.add_sister(name="D") # Adds a second child to same branch as # before, but using a sister as the starting # point R = A.add_child(name="R") # Adds a third child to the # branch. Multifurcations are supported # Next, I add 6 random leaves to the R branch names_library is an # optional argument. If no names are provided, they will be generated # randomly. R.populate(6, names_library=["r1","r2","r3","r4","r5","r6"]) # Prints the tree topology print t # /-C # | # |--D # | # /--------| /-r4 # | | /--------| # | | /--------| \-r3 # | | | | # | | | \-r5 # | \--------| # ---------| | /-r6 # | | /--------| # | \--------| \-r2 # | | # | \-r1 # | # \-B # a common use of the populate method is to quickly create example # trees from scratch. Here we create a random tree with 100 leaves. t = Tree() t.populate(100) Deleting (eliminating) and Removing (detaching) nodes¶ As currently implemented, there is a difference between detaching and deleting a node. The former disconnects a complete partition from the tree structure, so all its descendants are also disconnected from the tree. There are two methods to perform this action: TreeNode.remove_child() and TreeNode.detach(). In contrast, deleting a node means eliminating such node without affecting its descendants. Children from the deleted node are automatically connected to the next possible parent. This is better understood with the following example: from ete2 import Tree # Loads a tree. Note that we use format 1 to read internal node names t = Tree('((((H,K)D,(F,I)G)B,E)A,((L,(N,Q)O)J,(P,S)M)C);', format=1) print "original tree looks like this:" # This is an alternative way of using "print t". Thus we have a bit # more of control on how tree is printed. Here i print the tree # showing internal node names print t.get_ascii(show_internal=True) # # /-H # /D-------| # | \-K # /B-------| # | | /-F # /A-------| \G-------| # | | \-I # | | # | \-E #-NoName--| # | /-L # | /J-------| # | | | /-N # | | \O-------| # \C-------| \-Q # | # | /-P # \M-------| # \-S # Get pointers to specific nodes G = t.search_nodes(name="G")[0] J = t.search_nodes(name="J")[0] C = t.search_nodes(name="C")[0] # If we remove J from the tree, the whole partition under J node will # be detached from the tree and it will be considered an independent # tree. We can do the same thing using two approaches: J.detach() or # C.remove_child(J) removed_node = J.detach() # = C.remove_child(J) # if we know print the original tree, we will see how J partition is # no longer there. print "Tree after REMOVING the node J" print t.get_ascii(show_internal=True) # /-H # /D-------| # | \-K # /B-------| # | | /-F # /A-------| \G-------| # | | \-I # | | #-NoName--| \-E # | # | /-P # \C------- /M-------| # \-S # however, if we DELETE the node G, only G will be eliminated from the # tree, and all its descendants will then hang from the next upper # node. G.delete() print "Tree after DELETING the node G" print t.get_ascii(show_internal=True) # /-H # /D-------| # | \-K # /B-------| # | |--F # /A-------| | # | | \-I # | | #-NoName--| \-E # | # | /-P # \C------- /M-------| # \-S Pruning trees¶ Pruning a tree means to obtain the topology that connects a certain group of items by removing the unnecessary edges. To facilitate this task, ETE implements the TreeNode.prune() method, which can be used by providing the list of terminal and/or internal nodes that must be kept in the tree. From version 2.2, this function includes also the preserve_branch_length flag, which allows to remove nodes from a tree while keeping original distances among remaining nodes. from ete2 import Tree # Let's create simple tree t = Tree('((((H,K),(F,I)G),E),((L,(N,Q)O),(P,S)));') print "Original tree looks like this:" print t # # /-H # /--------| # | \-K # /--------| # | | /-F # /--------| \--------| # | | \-I # | | # | \-E #---------| # | /-L # | /--------| # | | | /-N # | | \--------| # \--------| \-Q # | # | /-P # \--------| # \-S # Prune the tree in order to keep only some leaf nodes. t.prune(["H","F","E","Q", "P"]) print "Pruned tree" print t # # /-F # /--------| # /--------| \-H # | | #---------| \-E # | # | /-Q # \--------| # \-P # Let's re-create the same tree again Concatenating trees¶ Given that all tree nodes share the same basic properties, they can be connected freely. In fact, any node can add a whole subtree as a child, so we can actually cut & paste partitions. To do so, you only need to call the TreeNode.add_child() method using another tree node as a first argument. If such a node is the root node of a different tree, you will concatenate two structures. But caution!!, this kind of operations may result into circular tree structures if add an node’s ancestor as a new node’s child. Some basic checks are internally performed by the ETE topology related methods, however, a fully qualified check of this issue would affect seriously the performance of the method. For this reason, users themselves should take care about not creating circular structures by mistake. from ete2 import Tree # Loads 3 independent trees t1 = Tree('(A,(B,C));') t2 = Tree('((D,E), (F,G));') t3 = Tree('(H, ((I,J), (K,L)));') print "Tree1:", t1 # /-A # ---------| # | /-B # \--------| # \-C print "Tree2:", t2 # /-D # /--------| # | \-E # ---------| # | /-F # \--------| # \-G print "Tree3:", t3 # /-H # | # ---------| /-I # | /--------| # | | \-J # \--------| # | /-K # \--------| # \-L # Locates a terminal node in the first tree A = t1.search_nodes(name='A')[0] # and adds the two other trees as children. A.add_child(t2) A.add_child(t3) print "Resulting concatenated tree:", t1 # /-D # /--------| # | \-E # /--------| # | | /-F # | \--------| # /--------| \-G # | | # | | /-H # | | | # | \--------| /-I # | | /--------| # ---------| | | \-J # | \--------| # | | /-K # | \--------| # | \-L # | # | /-B # \--------| # \-C Copying (duplicating) trees¶ ETE provides several strategies to clone tree structures. The method TreeNode.copy() can be used to produce a new independent tree object with the exact topology and features as the original. However, as trees may involve many intricate levels of branches and nested features, 4 different methods are available to create a tree copy: - “newick”: Tree topology, node names, branch lengths and branch support values will be copied as represented in the newick string This method is based on newick format serialization works very fast even for large trees. - “newick-extended”: Tree topology and all node features will be copied based on the extended newick format representation. Only node features will be copied, thus excluding other node attributes. As this method is also based on newick serialisation, features will be converted into text strings when making the copy. Performance will depend on the tree size and the number and type of features being copied. - “cpickle”: This is the default method. The whole node structure and its content will be cloned based on the cPickle object serialization python approach. This method is slower, but recommended for full tree copying. - “deepcopy”: The whole node structure and its content is copied based on the standard “copy” Python functionality. This is the slowest method, but it allows to copy very complex objects even when attributes point to lambda functions. from ete2 import Tree t = Tree("((A, B)Internal_1:0.7, (C, D)Internal_2:0.5)root:1.3;", format=1) # we add a custom annotation to the node named A (t & "A").add_features(label="custom Value") # we add a complex feature to the A node, consisting of a list of lists (t & "A").add_features(complex=[[0,1], [2,3], [1,11], [1,0]]) print t.get_ascii(attributes=["name", "dist", "label", "complex"]) # /-A, 0.0, custom Value, [[0, 1], [2, 3], [1, 11], [1, 0]] # /Internal_1, 0.7 # | \-B, 0.0 # -root, 1.3 # | /-C, 0.0 # \Internal_2, 0.5 # \-D, 0.0 # Newick copy will loose custom node annotations, complex features, # but not names and branch values print t.copy("newick").get_ascii(attributes=["name", "dist", "label", "complex"]) # /-A, 0.0 # /Internal_1, 0.7 # | \-B, 0.0 # -NoName, 0.0 # | /-C, 0.0 # \Internal_2, 0.5 # \-D, 0.0 # Extended newick copy will transfer custom annotations as text # strings, so complex features are lost. print t.copy("newick-extended").get_ascii(attributes=["name", "dist", "label", "complex"]) # /-A, 0.0, custom Value, __0_ 1__ _2_ 3__ _1_ 11__ _1_ 0__ # /Internal_1, 0.7 # | \-B, 0.0 # -NoName, 0.0 # | /-C, 0.0 # \Internal_2, 0.5 # \-D, 0.0 # The default pickle method will produce a exact clone of the # original tree, where features are duplicated keeping their # python data type. print t.copy().get_ascii(attributes=["name", "dist", "label", "complex"]) print "first element in complex feature:", (t & "A").complex[0] # /-A, 0.0, custom Value, [[0, 1], [2, 3], [1, 11], [1, 0]] # /Internal_1, 0.7 # | \-B, 0.0 # -root, 1.3 # | /-C, 0.0 # \Internal_2, 0.5 # \-D, 0.0 # first element in complex feature: [0, 1] Solving multifurcations¶ When a tree contains a polytomy (a node with more than 2 children), the method resolve_polytomy() can be used to convert the node into a randomly bifurcated structure in which branch lengths are set to 0. This is really not a solution for the polytomy but it allows to export the tree as a strictly bifurcated newick structure, which is a requirement for some external software. The method can be used on a very specific node while keeping the rest of the tree intact by disabling the recursive flag. from ete2 import Tree t = Tree("(( (a, b, c), (d, e, f, g)), (f, i, h));") print t # /-a # | # /--|--b # | | # | \-c # /--| # | | /-d # | | | y # | | |--e # | \--| # ---| |--f # | | # | \-g # | # | /-f # | | # \--|--i # | # \-h polynode = t.get_common_ancestor("a", "b") polynode.resolve_polytomy(recursive=False) print t # /-b # /--| # /--| \-c # | | # | \-a # /--| # | | /-d # | | | # | | |--e # | \--| # ---| |--f # | | # | \-g # | # | /-f # | | # \--|--i # | # \-h t.resolve_polytomy(recursive=True) print t # # /-b # /--| # /--| \-c # | | # | \-a # | # /--| /-f # | | /--| # | | /--| \-g # | | | | # | \--| \-e # ---| | # | \-d # | # | /-i # | /--| # \--| \-h # | # \-f Tree Rooting¶ Tree rooting is understood as the technique by with a given tree is conceptually polarized from more basal to more terminal nodes. In phylogenetics, for instance, this a crucial step prior to the interpretation of trees, since it will determine the evolutionary relationships among the species involved. The concept of rooted trees is different than just having a root node, which is always necessary to handle a tree data structure. Usually, the way in which a tree is differentiated between rooted and unrooted, is by counting the number of branches of the current root node. Thus, if the root node has more than two child branches, the tree is considered unrooted. By contrast, when only two main branches exist under the root node, the tree is considered rooted. Having an unrooted tree means that any internal branch within the tree could be regarded as the root node, and there is no conceptual reason to place the root node where it is placed at the moment. Therefore, in an unrooted tree, there is no information about which internal nodes are more basal than others. By setting the root node between a given edge/branch of the tree structure the tree is polarized, meaning that the two branches under the root node are the most basal nodes. In practice, this is usually done by setting an outgroup node, which would represent one of these main root branches. The second one will be, obviously, the brother node. When you set an outgroup on unrooted trees, the multifurcations at the current root node are solved. In order to root an unrooted tree or re-root a tree structure, ETE implements the TreeNode.set_outgroup() method, which is present in any tree node instance. Similarly, the TreeNode.unroot() method can be used to perform the opposite action. from ete2 import Tree # Load an unrooted tree. Note that three branches hang from the root # node. This usually means that no information is available about # which of nodes is more basal. t = Tree('(A,(H,F)(B,(E,D)));') print "Unrooted tree" print t # /-A # | # | /-H #---------|---------| # | \-F # | # | /-B # \--------| # | /-E # \--------| # \-D # # Let's define that the ancestor of E and D as the tree outgroup. Of # course, the definition of an outgroup will depend on user criteria. ancestor = t.get_common_ancestor("E","D") t.set_outgroup(ancestor) print "Tree rooteda at E and D's ancestor is more basal that the others." print t # # /-B # /--------| # | | /-A # | \--------| # | | /-H #---------| \--------| # | \-F # | # | /-E # \--------| # \-D # # Note that setting a different outgroup, a different interpretation # of the tree is possible t.set_outgroup( t&"A" ) print "Tree rooted at a terminal node" print t # /-H # /--------| # | \-F # /--------| # | | /-B # | \--------| #---------| | /-E # | \--------| # | \-D # | # \-A Note that although rooting is usually regarded as a whole-tree operation, ETE allows to root subparts of the tree without affecting to its parent tree structure. from ete2 import Tree t = Tree('(((A,C),((H,F),(L,M))),((B,(J,K))(E,D)));') print "Original tree:" print t # /-A # /--------| # | \-C # | # /--------| /-H # | | /--------| # | | | \-F # | \--------| # | | /-L # | \--------| #---------| \-M # | # | /-B # | /--------| # | | | /-J # | | \--------| # \--------| \-K # | # | /-E # \--------| # \-D # # Each main branch of the tree is independently rooted. node1 = t.get_common_ancestor("A","H") node2 = t.get_common_ancestor("B","D") node1.set_outgroup("H") node2.set_outgroup("E") print "Tree after rooting each node independently:" print t # # /-F # | # /--------| /-L # | | /--------| # | | | \-M # | \--------| # /--------| | /-A # | | \--------| # | | \-C # | | # | \-H #---------| # | /-D # | /--------| # | | | /-B # | | \--------| # \--------| | /-J # | \--------| # | \-K # | # \-E Working with branch distances¶ The branch length between one node an its parent is encoded as the TreeNode.dist attribute. Together with tree topology, branch lengths define the relationships among nodes. Getting distances between nodes¶ The TreeNode.get_distance() method can be used to calculate the distance between two connected nodes. There are two ways of using this method: a) by querying the distance between two descendant nodes (two nodes are passed as arguments) b) by querying the distance between the current node and any other relative node (parental or descendant). from ete2 import Tree # Loads a tree with branch lenght information. Note that if no # distance info is provided in the newick, it will be initialized with # the default dist value = 1.0 nw = """(((A:0.1, B:0.01):0.001, C:0.0001):1.0, (((((D:0.00001:0,I:0):0,F:0):0,G:0):0,H:0):0, E:0.000001):0.0000001):2.0;""" t = Tree(nw) print t # /-A # /--------| # /--------| \-B # | | # | \-C # | # | /-D # | /--------| #---------| /--------| \-I # | | | # | /--------| \-F # | | | # | /--------| \-G # | | | # \--------| \-H # | # \-E # # Locate some nodes A = t&"A" C = t&"C" # Calculate distance from current node print "The distance between A and C is", A.get_distance("C") # Calculate distance between two descendants of current node print "The distance between A and C is", t.get_distance("A","C") # Calculate the toplogical distance (number of nodes in between) print "The number of nodes between A and D is ", \ t.get_distance("A","D", topology_only=True) Additionally to this, ETE incorporates two more methods to calculate the most distant node from a given point in a tree. You can use the TreeNode.get_farthest_node() method to retrieve the most distant point from a node within the whole tree structure. Alternatively, TreeNode.get_farthest_leaf() will return the most distant descendant (always a leaf). If more than one node matches the farthest distance, the first occurrence is returned. Distance between nodes can also be computed as the number of nodes between them (considering all branch lengths equal to 1.0). To do so, the topology_only argument must be set to True for all the above mentioned methods. # Calculate the farthest node from E within the whole structure farthest, dist = (t&"E").get_farthest_node() print "The farthest node from E is", farthest.name, "with dist=", dist # Calculate the farthest node from E within the whole structure, # regarding the number of nodes in between as distance value # Note that the result is differnt. farthest, dist = (t&"E").get_farthest_node(topology_only=True) print "The farthest (topologically) node from E is", \ farthest.name, "with", dist, "nodes in between" # Calculate farthest node from an internal node farthest, dist = t.get_farthest_node() print "The farthest node from root is is", farthest.name, "with dist=", dist # # The program results in the following information: # # The distance between A and C is 0.1011 # The distance between A and C is 0.1011 # The number of nodes between A and D is 8.0 # The farthest node from E is A with dist= 1.1010011 # The farthest (topologically) node from E is I with 5.0 nodes in between # The farthest node from root is is A with dist= 1.101 getting midpoint outgroup¶ In order to obtain a balanced rooting of the tree, you can set as the tree outgroup that partition which splits the tree in two equally distant clusters (using branch lengths). This is called the midpoint outgroup. The TreeNode.get_midpoint_outgroup() method will return the outgroup partition that splits current node into two balanced branches in terms of node distances. from ete2 import Tree # generates a random tree t = Tree(); t.populate(15); print t # # # /-qogjl # /--------| # | \-vxbgp # | # | /-xyewk #---------| | # | | /-opben # | | | # | | /--------| /-xoryn # \--------| | | /--------| # | | | | | /-wdima # | | \--------| \--------| # | | | \-qxovz # | | | # | | \-isngq # \--------| # | /-neqsc # | | # | | /-waxkv # | /--------| /--------| # | | | /--------| \-djeoh # | | | | | # | | \--------| \-exmsn # \--------| | # | | /-udspq # | \--------| # | \-buxpw # | # \-rkzwd # Calculate the midpoint node R = t.get_midpoint_outgroup() # and set it as tree outgroup t.set_outgroup(R) print t # /-opben # | # /--------| /-xoryn # | | /--------| # | | | | /-wdima # | \--------| \--------| # /--------| | \-qxovz # | | | # | | \-isngq # | | # | | /-xyewk # | \--------| # | | /-qogjl # | \--------| #---------| \-vxbgp # | # | /-neqsc # | | # | | /-waxkv # | /--------| /--------| # | | | /--------| \-djeoh # | | | | | # | | \--------| \-exmsn # \--------| | # | | /-udspq # | \--------| # | \-buxpw # | # \-rkzwd
http://pythonhosted.org/ete2/tutorial/tutorial_trees.html
CC-MAIN-2016-50
refinedweb
7,963
64.41
>>>>> "Henning" == Henning P Schmiedehausen <mailgate@mail.hometree.net> writes:Henning> "Eric S. Raymond" <esr@thyrsus.com> writes:>> Here is an example map block for my kxref.py tool:>> # %Map # T: CONFIG_ namespace cross-reference generator/analyzer #>> P: Eric S. Raymond <esr@thyrsus.com> # M: esr@thyrsus.com # L:>> kbuild-devel@kbuild.sourceforge.net # W:>> # D: Sat Apr 21 11:41:52 EDT 2001 #>> S: Maintained>> Comments are solicited.Henning> Hi Eric,Henning> please not. If you really want to redo this, please use aHenning> simple XML markup. Let's not introduce another kind ofHenning> markup if there is already a well distributed and working.Henning> What's wrong with:DON'T! go there, please!A) This sucks to write and maintain, B) it sucks for people bringingup Linux on a minimum system or new architecture because they don'twant to have to install 217 XML and other tools to just be able toconfigure and build a basic kernel.Jes-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at
http://lkml.org/lkml/2001/4/21/176
CC-MAIN-2018-09
refinedweb
187
52.66
On Wed, 11 Jun 2008 16:04:59 . That's no reason. %if 0%{?fedora} > 8 # something %endif Effectively, you can create a spec file in "devel" which you can copy unmodified to older branches. If, however, you really need to modify'n'bump an older branch only, increase the "Release" value in the least-significant position at the very right, 4%{?dist} => 4%{?dist}.1 => %{?dist}.2 => and so on but if it's just minor modifications, you better use the %fedora macro as above. > I wasn't aware that there had to be a strict increase in package > numbering between branches. (In fact, I wasn't aware that Fedora even > allowed updating between Fedora releases). What do you think why does Anaconda support distribution upgrades? It has been the official upgrade method for many years (as with old Red Hat Linux), and our users do also Yum/Apt-based dist-upgrades.
http://www.redhat.com/archives/fedora-devel-list/2008-June/msg00643.html
CC-MAIN-2014-41
refinedweb
153
56.15
Unity 5.4 Additional downloads Select the runtime platforms of your choice from the list below (the desktop runtime is included as standard) or, to install the full complement of runtime platforms, use the download assistant installer above. Windows Component Installers Unity Editor 64-bit (Win) Unity Editor 32-bit (Win) 发行说明 5.4.0f3 Release Notes (FULL) Features - Editor: Optional "strict mode" when building projects and AssetBundles, which will fail the build if any errors (even non-fatal ones) are reported during the build process. - GI: Added de-noising filter to baked final gather. - GI: Light Probe Proxy Volumes - This component allows using more than one light probe sample for large dynamic objects (think large particle systems or important characters). It will sample probes into a 3D texture and use that in the shader. - Requires shader model 4+ platform (DX11/DX12 on Windows, GLCore 4.1+ on Mac/Linux, PS4, XboxOne). - Graphics: GPU Instancing Support - Use GPU instancing to draw a large amount of identical geometries with very few draw calls. - Works with MeshRenderers that use the same material and the same mesh. - Only needs a few changes to your shader to enable it for instancing. Supports custom vertex/fragment shader and surface shaders. - Set per-instance shader properties from script via MaterialPropertyBlock. - Supports Graphics.DrawMesh command. - Requires shader model 4+ platform (DX11/DX12 on Windows, GLCore 4.1+ on Mac/Linux, PS4, XboxOne). - Graphics: Improved multithreaded rendering: - Compared to current dual-thread rendering (main thread + rendering thread), this splits up rendering logic into concurrent "graphics jobs" that run on all available CPU cores. - See "Graphics Jobs" option in player settings (off by default, still considered experimental). - Currently implemented on PC (Win/Mac/Linux/WindowsStore), PS4, XboxOne. Best results with modern graphics APIs like DX12. - In addition to multithreaded rendering, overall CPU graphics performance should be better in 5.4. - Graphics: Motion vector rendering support - Motion vectors track the screen space position of an object from one frame to the next, and can be used for post process effects. - See the API docs for Renderer.motionVectors, Camera.depthTextureMode, SkinnedMeshRenderer.skinnedMotionVectors, PassType.MotionVectors, and DepthTextureMode.MotionVector. - Requires RGHalf render texture format support. - Utilized in the current beta of Unity Cinematic Image Effects - See Keijiro Takahashi's example of vector field visualization KinoVision - Graphics: Texture Array support - SeeTexture2DArray class. - Requires shader model 3.5+ platform (DX11/DX12 on Windows, GLCore Mac/Linux, GLES3+, Metal, PS4, XboxOne). - Here they are used in the Adam demo for terrain shading: - IAP: Cloud catalog - A 'useCloudCatalog' boolean has been added to UnityEngine.Purchasing.ConfigurationBuilder. When set, Unity IAP will fetch your catalog of products for sale from the Unity cloud. Catalog is configured via the Unity Analytics dashboard. - IL2CPP: Android support for IL2CPP is now official (previously 'experimental'). - iOS: Added support for ODR (On Demand Resources) initial install tags. - Kernel: The Transform component has been rewritten using SIMD and a cache-friendly data layout, so the code is now faster for many use cases. - OSX: Unity Editor supports Mac Retina displays now (mostly for improved text and icon rendering). Windows HiDPI support in development. - Particles: New Trigger Module, including: - A script callback when particles touch a predefined list of collision shapes. - Ability to modify/kill particles that are intersecting the collision shapes. Editor UI: Example: - Particles: Particle width, height and depth (for Mesh particles) can now be defined independently from each other. Editor UI: Example: - Physics: Various physics improvements: - Overlap recovery. Used to de-penetrate CharacterControllers from static objects when an overlap is detected. When activated, the CharacterController will automatically try to resolve the penetration, and move to a safe place where it does not overlap other objects. - Added ContactPoint.separation API. - Added Physics.OverlapCapsule and OverlapCapsuleNonAlloc functions. - Added Rigidbody.solverVelocityIterations and Physics.defaultSolverVelocityIterations, to help stabilize bounce behavior on impacts. - Services: GamePerf service integration. You can now track your exceptions from the wild by enabling this in the Services window. - Shaders: ComputeShader improvements: - Added DispatchIndirect function. Similar to DrawProceduralIndirect; dispatches ComputeShader with parameters sourced from ComputeBuffer. - API of counters on ComputeBuffers can now be optionally reset when bound, and can be explicitly set via SetCounterValue. - Exposed ComputeShader.GetKernelThreadGroupSizes to query Compute thread group sizes. - Improved error handling for ComputeShaders. - Debugging via #pragma enable_d3d11_debug_symbols, just like for regular shaders. - Shaders: Uniform array support - Uniform arrays can be set by new array APIs on MaterialPropertyBlock, Shader and Material. - Supports array sizes up to 1023. - The old way of setting array elements by using number-suffixed names is removed. - Substance: ProceduralMaterials are now supported at runtime on Windows Store/Phone platforms. - VR: Multi-device support - PlayerSettings: When the Virtual Reality Supported checkbox is checked, a prioritized list is shown allowing devs to choose which VR SDKs their game supports. (Similar to the Graphics API selection dialog) - VR SDK list is per build-target. - Dependencies (such as DLLs) will be copied to the build for every SDK in the list. - At startup, Unity will go down the list and try to initialize each device. If any fail to initialize (for example, if the headset is not connected), Unity will move on to the next. If all fail, Unity won’t enter VR mode. - PlayerSettings: Deprecated PlayerSettings stereoscopic 3D checkbox. This goes through the same subsystem as the VR devices, so a non-headmounted stereoscopic driver is one of the possible devices on supporting platforms. - API: Deprecated VRDeviceType enum and VRSettings.loadedDevice. This is replaced with VRSettings.loadedDeviceName and VRSettings.LoadDeviceByName(). - API: Added the ability to get a list of supported SDKs. Readonly: string[] VRSettings.supportedDevices. - VR: Native OpenVR support - Note that native OpenVR support renders with an off-center asymmetric projection matrix. This means that any shaders which relied on fov / aspect may not work correctly. - VR: Native Spatializer Plugins for VR - Oculus Spatializer included with the support. - VR: Oculus Support for DirectX 12. - VR: Optimized Single-Pass Stereo Rendering - Instead of rendering each eye separately, this uses a wider render target and alternating draw calls to render both eyes with a single scene traversal. - Option in Player Settings. - Note that some image effects or screenspace shaders might need to be updated to work with it. - Windows: Added speech recognition APIs under UnityEngine.Windows.Speech. These APIs are supported on all Windows platforms as long as they're running on Windows 10 (editor, standalone, store apps). - Windows: Added support for G-Sync and FreeSync on Windows 10 on DirectX 11 (for the Windows Store player only) and DirectX 12 (for both the standalone player and the Windows Store player). - Windows Store: Realtime global illumination now works when using Windows 10 SDK. Backwards Compatibility Breaking Changes - Android: WebCam no longer works on Gingerbread devices. - DX12: Introduced new native plugin interface IUnityGraphicsD3D12v2 . The old interface will not function anymore due to differences in internal graphics job submission. - Editor: Deprecated UnityEditor.ShaderUtil.ShaderPropertyTexDim; users should now use Texture.dimension. - GI: Deprecated Light.actuallyLightmapped; users should now use Light.isBaked and Light.bakedIndex instead. Baked Light now has unique index, instead of the flag "actuallyLightmapped" - Graphics: Deprecated Material(String) constructor further. This will now always create a material with the error shader and print an error, in both Editor and player. It will be completely removed in a future Unity version. - Physics: Made changes to avoid Physics transform drift by not sending redundant Transform updates. - Physics: Physics Meshes are now rejected if they contain invalid (non-finite) vertices. - Playables: Refactored API so that Playables are structs instead of classes, making the API allocation-less in C#. - Scripting: Added two new script errors in the editor for catching calls to the Unity API during serialization. See "Scripting Serialization" page in the manual for more details. - Scripting: Promoted WebRequest interface from UnityEngine.Experimental.Networking to UnityEngine.Networking. Unity 5.2 and 5.3 projects that use UnityWebRequest will need to be updated. - Shaders: Changed default shader compilation target to "#pragma target 2.5" (SM3.0 on DX9, DX11 9.3 feature level on WinPhone). Can still target DX9 SM2.0 and DX11 9.1 feature level with "#pragma target 2.0". The majority of built-in shaders target now 2.5. Notable exceptions are Unlit, VertexLit and fixed function shaders. Changes - Android: Assets - Disabled texture streaming for Android. - Android: Deprecated UnityPlayerNativeActivity and UnityPlayerProxyActivity; these will now print warnings to the logcat if in use. - Android: Removed native activity implementation. An activity with the same name based on a regular activity is still in place for backwards compatibility reasons. - Android: Screen.dpi now always returns densityDpi. - Audio: Updated FMOD to 4.44.56. - DX12: Disabled client/worker mode as a preparation step for pure threading (-force-gfx-mt now does nothing for DX12). - DX12: Enabled GPU profiler in single-threaded mode (-force-gfx-direct). - Graphics: Default Camera's background clear color now has 0 alpha, instead of 5/255 alpha. - Graphics: Unity splash screen replacement now uniform across platforms, featuring a light and pro-only dark style. - Installer: With Webplayer removal, desktop players are now part of their respective Editor installations, so the option to separately install them is removed. - iOS: Upgraded the minimum supported iOS version to 7.0. iOS 6 is no longer supported. - Physics: Exposed Cloth.enableTethers API. Renamed Cloth.useContinuousCollision to enableContinuousCollision, and Cloth.solverFrequency to clothSolverFrequency. - Physics: Fixed Character Controller Physics causing capsule to be thrown in the air when exiting another collider. - Physics: Renamed Physics.solverIterationCount to Physics.defaultSolverIterations, and Rigidbody.solverIterationCount to Rigidbody.solverIterations. - Scripting: Renamed onSceneLoaded to sceneLoaded, onSceneUnloaded to sceneUnloaded, and onActiveSceneChanged to activeSceneChanged, to be compliant with naming conventions. - Scripting: Using GameObject.AddComponent<MonoBehaviour> is no longer allowed and will throw an exception. Derive a class from MonoBehaviour and add it instead. - Shaders: Moved internal shader for computing screenspace cascaded shadows into Graphics Settings. If you were overriding it before by just dropping it into the project, you now need the custom one via Graphics Settings. - Shaders: Removed support for EXT_shadow_samplers on non-iOS OpenGL ES 2.0 platform. - Terrain: Terrain objects created in the Scene will now be properly renamed (in the same way as GameObjects) to avoid using the same name. - Terrain: When different TerrainData are used for Terrain and TerrainCollider components on the same GameObject, a warning message will be shown with a button to fix the situation. - UI: Switched component menu name for RectMask2D to match class name. - UI: UI no longer interacts with the cursor when the cursor is locked. - WebGL: Removed .htaccess file generation. - Windows Store: Deprecated PlayerSettings.WSA.enableLowLatencyPresentationAPI. It is now always enabled. Improvements - Android: Added template for ProGuard obfuscation on exported project. - Android: Application name now supports non-alphanumeric characters and spaces. - Android: Converted some fatal error messages to be presented on-screen rather than printed to the logcat. - Android: Enhanced robustness of Location input. - Animation: Improved Animation event performance for repeat calls to the same events on components. - Asset Import: Unity now supports import of model files (such as FBX) containing more than 100,000 objects. - Cache Server: Improved the cache server so that it can properly handle scenarios when assets with missing references are being read. - Core: Improved multithreaded job execution. Spawn worker threads are now based on the number of logical processors instead of physical cores. - Core: Object.Instantiate now takes a optional Transform parent parameter. - DX12: Added support for multi-display rendering. - DX12: Introduced -force-d3d12-stablepowerstate command line parameter. Use it when profiling the GPU. - DX12: Optimized texture/mesh loading times by using GPU copy queue. - Editor: "Discard changes" in Scene context menu now reloads selected modified scenes. - Editor: Added an editor warning whenever a Shader with many variants (for example, Standard shader) is added to the 'always included' list in graphics settings. - Editor: Added API to toggle preventing cross-scene references on/off. - Editor: Added EditorSceneManager.DetectCrossSceneReferences API. - Editor: ENABLE_PROFILER now works correctly in Editor for runtime script compilation. - Editor: In Play Mode the DontDestroyOnLoad Scene will now only be shown if it has GameObjects. - Editor: Scene headers are now always shown in the Hierarchy to prevent confusion when loading and unloading Scenes in Play Mode. This also allows user to see which Scene is loaded in OSX fullscreen mode. - GI: Added ability to hide the tetrahedron wireframe while editing light probe group. - GI: Added edit mode for light probe group to avoid accidental selection changes. - GI: Added Lightmapping.realtimeGI and Lightmapping.bakedGI editor APIs. - GI: Ambient Occlusion now has separate sliders for direct and indirect light. The default value is Ambient Occlusion on indirect light only. - GI: Atlassing will now correctly generate atlases without wasting space when scaling down objects. - GI: BakeEnlightenProbeSetJob results now stored in hashed file to speed up rebaking of light probes. - GI: Final Gather no longer recomputes if the result is in the cache. - GI: HDR color picker is now used for ambient color, instead of color plus ambient intensity. - GI: Improved light update performance. - GI: Improved mixing of realtime and baked shadows: removes shadow from the back-facing geometry, preserves bounce and contribution of other baked lights. - GI: Occlusion of the strongest mixed mode Light is now stored per Light Probe. - GI: Reflection probe convolution has been sped up (about 2x), and is now less noisy, particularly for HDR environments. - Graphics: A slice of 3D/2DArray can now be set as a render target (Graphics.SetRenderTarget depthSlice argument). - Graphics: Added a property to allow skipping the bounding box recalculation when setting the list of indices or triangles of a Mesh. This is useful for LODs that use a sliding window. - Graphics: Added GL.Flush API. - Graphics: Added ImageEffectAllowedInSceneView attribute for Image Effects. This will copy the Image Effect from the main camera onto the Scene View camera. This can be enabled / disabled in the Scene View Effects menu. - Graphics: Added Light.customShadowResolution and QualitySetting.shadowResolution to scripting API to make it possible to adjust the shadow mapping quality in code at run time on a per-light basis. - Graphics: Added makeNoLongerReadable argument to Texture3D.Apply and Texture2DArray.Apply APIs, to allow for the release of system memory. - Graphics: Added MaterialPropertyBlock.SetBuffer. - Graphics: Added mechanism to tweak some Unity shader defines per-platform per-shader-hardware-tier. Currently it is exposed only to scripts (see UnityEditor.Rendering namespace, specifically UnityEditor.Rendering.PlatformShaderSettings for tweakable settings and UnityEditor.Rendering.EditorGraphicsSettings, for methods to get/set shader settings). Please note that if settings are different for some tiers, shader variants for ALL tiers will be compiled, but duplicates will be still stripped from final build. - Graphics: Added RenderTexture.GetNativeDepthBufferPtr for native code plugins. - Graphics: Added TextureDimension enum and Texture.dimension property. - Graphics: Added useLightProbes argument to Graphics.DrawMesh (defaults to true). - Graphics: DX11; rendering annotations now correctly appear on Windows Store platforms when using GPU debuggers. - Graphics: Implemented fast texture copies via Graphics.CopyTexture. - Graphics: Reduced render batch breaking overhead due to LOD fading. - Graphics: Support multithreaded (client/worker) rendering on iOS and OSX Metal devices. - IAP: Added support for fetching IAP products incrementally in batches. FetchAdditionalProducts method added to IStoreController. - Installer: DownloadAssistant will now warn users if they try to install components which require Unity without selecting UnityEditor component. - Installer: Mac Download Assistant will now write additional logs to ~/Library/Logs/Unity/DownloadAssistant.log. - Installer: WindowsEditor Installer will install Release Notes online shortcut to the Windows start menu. - iOS: Added support for new native rendering plugin interface. - iOS: Option for custom URL schemes added to Player Settings. - iOS/tvOS: Change to use relative symlinks for plugins when building to a related folder. - Multiplayer: Made matchName and matchSize serializable attributes so they can save on the network manager. - OpenGL: Optimized shader translation for matrix array accessing. This improves instancing performance. - OpenGL: Ported existing multidisplay support (Mac/Linux) to OpenGL core. - Particles: Added implicit conversion operators when setting MinMaxCurve with constants. This allows "myModule.myCurve = 5.0f;" syntax. Added the same support for MinMaxGradient when using one color. - Particles: Added option to select exactly which UV channels the Texture Animation Module is applied to. - Particles: Added particle radius parameter for world collisions. Editor UI: - Particles: Added Undo support when auto re-parenting sub-emitters. - Particles: Choosing a random start frame in the Texture Animation Module is now supported. - Particles: It is now possible to read MinMaxCurve/MinMaxGradient in script, regardless of what mode it is set to. Previously it would give an error message in some modes. - Physics: Added a warning when using a staticially combined mesh on a BoxCollider. - Physics: Running the PhysX simulation step can now be skipped if not required by Rigidbodies or WheelColliders. - Physics2D: Added 'OneWayGrouping' property to PlatformEffector2D for group contacts. - Physics2D: Point editing is now allowed in Inspector for Edge/PolygonCollider2D. - Profiler: Added more profiling information for loading operations. - Profiler: Added toggle to exclude reference traversal in memory profile. - Scene Management: Added events sceneLoaded, sceneUnloaded and activeSceneChanged to SceneManager. - Scripting: Added cancel button to "Opening Visual Studio" progress dialog. - Scripting: Added new yield instruction: WaitForSecondsRealtime. - Scripting: Added UnityEngine.Diagnostics.PlayerConnection. This allows user to send files from player to Editor when profiler is connected. - Scripting: COM no longer used to launch VisualStudio, resulting in better immediate feedback experience. - Scripting: Deprecated Application.stackTraceLogType; users should now use Application.SetStackTraceLogType/GetStackTraceLogType instead. - Scripting: For StacktraceLogtype.None only the message will now be printed (without file name or line number). - Scripting: Improved Object.Instantiate() performance. - Scripting: Improved SendMessage performance for repeat calls to the same message on components. - Scripting: ScriptUpdater now asks whether to automatically update once per project session (i.e if a different project is opened or Unity is restarted). - Scripting: Serialization depth limit warning now prints the serialization hierarchy that triggered the warning. - Scripting: Stacktrace log type can now be set in PlayerSettings for various log types. - Shaders: #pragma targets 3.5, 4.5, 4.6 are accepted. - 3.5 - minimum version for texture arrays (DX11 SM4.0+, GL3+, GLES3+, Metal) - 4.5 - minimum version for compute shaders (DX11 SM5.0+, GL4.3+, GLES3.1+) - 4.6 - minimum version for tessellation (DX11 SM5.0+, GL4.1+, GLES3.1AEP+) - Shaders: Added ability to exclude shaders from automatic upgrade by having "UNITY_SHADER_NO_UPGRADE" anywhere in shader source file. - Shaders: Added PassFlags=OnlyDirectional pass tag. When used in ForwardBase pass, it makes sure that only ambient, light probe and main directional light information is passed. Non-important lights are not being passed as vertex light constants, nor are put into SH data. - Shaders: Added shader #pragma to allow easy/cheap variants of shaders across different tiers of hardware in the same renderer without needing keywords (e.g. iPhone 4 and iPhone 6, within OpenGL ES). - Shaders: Added UNITY_SAMPLE_TEX3D_LOD macro, for consistency with other LOD sampling macros. - Shaders: Engine and built-in shaders use five fewer shader keywords now, leaving more keywords for users. The following keywords are thus removed: SOFTPARTICLES_OFF, HDR_LIGHT_PREPASS_OFF, HDR_LIGHT_PREPASS_ON, SHADOWS_OFF, DIRLIGHTMAP_OFF. - Shaders: Extended Standard Shader UI and added new options to disable specular highlights and reflections, and to pack Smoothness into the alpha channel of the Albedo texture. - Shaders: Implemented alpha-to-coverage ("AlphaToMask On" in shaders) on OpenGL/ES, DX9, and Metal (previously only on DX11/12). - Shaders: Improved game data build times with many complex shaders, especially when they were already compiled before. - Shaders: Improved shader translation performance when compiling shaders into OpenGL ES 2.0 & Metal. - Substance: Warning is shown when an input of a BakeAndDiscard ProceduralMaterial is being set at runtime. - tvOS: Added support for Analytics. - UI: Added new property AscentCalculationMode to TrueTypeFont importer to control how font ascent value is determined. - UI: Added rootCanvas property to Canvas. - UI: Align By Geometry now supports vertical alignment. This can be useful for cases where the font ascent/descent info has large uneven spacing. - UI: Created an empty RectMask2D editor and modified the selectable one to hide script fields. - UI: ETC1+alpha support for UIImage on mobile platforms. - UI: Improved performance of MaskUtility functions. - UI: Improved the way that line spacing affects leading in text generation, to provide more predictable leading when line spacing is less than 1. - UI: Made more functions virtual inside Graphic class. - UI: UI now sets the texelSize for use in custom shaders. - VR: Added support for the Oculus Rift Remote. It now presents itself to the input system as a joystick named "Oculus Remote". - VR: Focus and ShouldQuit Support: -Application Focus is now controlled by respective VR SDK when Virtual Reality Support is enabled. -Application will quit if the respective VR SDK tells the app to quit when Virtual Reality Support is enabled - VR: The Oculus OVRPlugin signature check now happens only for non-development, release builds. - WebGL: Incremental builds of generated C++ code are now supported. - Windows: Added "Copy PDB files" option in the Build Settings window. This way, you can control whether or not to copy debugging files. - Windows: Standalone player now can be run in Low Integrity Mode. - Windows Store: Added Bluetooth capability to Player Settings. - Windows Store: Added PlayerSettings.WSA.Declarations API for setting declarations for Package.appxmanifest. - Windows Store: Added support for UnityEngine.Ping class. - Windows Store: Command line argument -dontConnectAcceleratorEvent can now be added to disable accelerator event-based input. This disables support for some keys in Unity (like F10, Shift), but fixes issue with duplicate characters in some XAML controls. - Windows Store: Improved deserialization performance when using .NET scripting backend. - Windows Store: Improved Visual Studio project generation. The solution shouldn't rebuild needlessly anymore; however, users may need to delete the old generated project so it can be regenerated. See upgrade guide. - Windows Store: In Player Settings, visual asset images are now edited using object fields. - Windows Store: New implementation for TouchScreenKeyboard on UWP now supports both XAML and D3D apps as well as IME input. Older implementation can be turned on by passing command line argument -forceTextBoxBasedKeyboard. - Windows Store: PDBs will now be included in the installers for "Release" players as well as debug and master players. - Windows Store: System.operatingSystem will add '64bit' postfix if target device has 64bit CPU (see more information in Unity Documentation). - Windows Store: UnityWebRequest now supported for all SDKs. Fixes - [755263] 2D: Add tooltips for Size, Full Tile or Threshold on the 9-slice section of the Sprite Renderer. - [745882] 2D: Fixed a crash when packing a crunched 24-bit texture. - [759462, 761416] 2D: Fixed error log 'GetLocalizedString is not allowed...' - [754385] 2D: Fixed the clipped text in the Unity Preferences > 2D pane. - [727785] AI: Prevent rare access of garbage memory of last node in navmesh BV tree. - [689362] Android: Audio is now muted when audio focus is lost. - [554244] Android: Editor now only detects Android devices that are online. - Android: Fixed an issue where SystemInfo.deviceUniqueIdentifier would return an empty string on some x86 devices. - [766776] Android: Fixed freeze in new splash screen when using threaded GfxDevice. - [757111] Android: LocationService - Fixed crash bug - [764422, 762733] Android: PlayerPrefs - Fixed an issue where upgrading a lot of keys from a previous version of unity would cause an out of memory error - [789557] Animation: A warning that was erroneously displayed in AnimationClip is now displayed in ModelImporter. - [769704] Animation: Added an error when an AnimatorOverrideController can't find the animations to override in the base AnimatorController - [760796] Animation: Added AnimationClipPlayable.applyFootIK. - [742973] Animation: Added better error messaging and handling for for AnimationCurves with invalid data. - [561601] Animation: Disabled multi-file editing of model scale because it wasn't working properly. - [743181] Animation: Disabled play/record/key/... buttons on Animation window when viewing objects with optimized hierarchy - [582315] Animation: Disabled recording and playback ui in animation window when in game mode - [766821] Animation: Disabled reset menu item in component when animation mode is active. - [757982] Animation: Dragging Sprite Assets into the Hierarchy window and then pressing Cancel no longer deletes the parent GameObject. - [757982] Animation: Fix for deleted GameObject when cancelling AnimationClip creation on Sprite drop. - [767096] Animation: Fix RootMotion import for generic animation with parent with specific default values - [705558] Animation: Fix to allow deletion of the last keyframe in the curve editor. - [758274] Animation: Fix to prevent Animation Event from being created with negative time. - [710887] Animation: Fixed a bug causing an offset between Set and Get of Animator.bodyPosition. - [715009] Animation: Fixed a bug spewing errors when the animation mode was reset from saving the scene. - [768767] Animation: Fixed a bug where animations created using the "Create" menu would contain an empty Sprite track - [723395] Animation: Fixed a bug where auto keys at time 0 for rotation curves were slightly off. - [749332] Animation: Fixed a bug where Rotation property would still be shown to be added even when it was animated. - [754268] Animation: Fixed a bug where the Animation window would try to access a deleted Animator component and cause a crash. - [788452] Animation: Fixed a case where adding an IsActive property to a legacy animation would cause a crash when sampling. - [742258] Animation: Fixed a case where animation events queue up when fireEvents is set to false. - [736468] Animation: Fixed a case where copying transitions between state machines without copying destination would crash. - [783143] Animation: Fixed a crash triggered when playing an AnimatorControllerPlayable with an invalid asset. - [784839] Animation: Fixed a crash when interrupting a transition on a synchronized layer. - [742124] Animation: Fixed a crash when trying to enumerate a list of 0 animations on the Animation component. - [771744] Animation: Fixed adding an Animator via the AnimationWindow not dirtying the scene - [766978] Animation: Fixed an issue where an assert would fail when importing animations on a model where the skinned mesh was not on the root joint. - [742069, 699102] Animation: Fixed an issue where animation events would be significantly slower when an object has a lot of components. - [755714] Animation: Fixed an issue where AnimationPreview objects would get grabbed by FindGameObjectWithTag. - [769861] Animation: Fixed an issue where changing the selected game object would leave the animated properties modified. - Animation: Fixed an issue where events and additional curves in 0-length animations were popping errors - [769505] Animation: Fixed an issue where having animations with a mixed number of bones in a controller, and having Write Defaults to false would throw errors. - [764019] Animation: Fixed an issue where imported keyframes would overlap and get sorted in the wrong order - [785852] Animation: Fixed an issue where interrupted transitions would cause empty states to continue to output animation. - [742367] Animation: Fixed an issue where rotation keys created through the inspector defaulted to quaternion curves instead of euler curves. - [754595] Animation: Fixed an issue where Rotation values would stay applied to objects after exiting Animation Mode. - Animation: Fixed an issue where the ModelImporterClipAnimation inspector would not show properly when he Avatar Mask was empty - [789784] Animation: Fixed an issue where the transition preview wouldn't reappear when valid parameters were set (after it having disappeared when invalid parameters were set). - [754595] Animation: Fixed an issue with rotations staying applied after animating - [784131] Animation: Fixed animation event firing even though layer weight is set to zero. - [745131] Animation: Fixed animation in Scene View not updating when deleting key in dopesheet editor. - [769029] Animation: Fixed Animation previewer not detecting properly target object to preview - [762274] Animation: Fixed Animation recording being broken in some cases. - [759029] Animation: Fixed Animation window not updating to selection when out of focus or when just opened. - Animation: Fixed AnimationClipImporter inspector for Generic clips. - [762709] Animation: Fixed AnimationClipPlayable.duration so that it returns the length of its AnimationClip. - [748164] Animation: Fixed Animator Blend Tree layout issues with long motion names. - Animation: Fixed Animator with statemachine behaviour runtime compile error not firing callback on the right SMB - [757904] Animation: Fixed Animator.UpdateMode not being saved. - [732776] Animation: Fixed AnimatorController vs AnimatorControllerPlayable not being reset the same way when modified. - [743494] Animation: Fixed AnimatorControllerPlayable.GetParameter crash. - [785841] Animation: Fixed applying rotation on RigidBody2D. - [753204] Animation: Fixed beeping sounds when pressing 'k' and 'c' hotkeys in the Animation window. - [748211] Animation: Fixed blending not smooth when entering or leaving empty state. - [573482] Animation: Fixed broken Avatar Configure Tool when changing tabs with invalid Avatar. - [756989] Animation: Fixed broken import of RootMotion transforms for Humanoid. - [803584] Animation: Fixed case of animation being glitchy when 'Optimize Game Objects' option was selected. - [770184] Animation: Fixed case of blend tree inspector not updating animator values in game mode. - [796729] Animation: Fixed case of CullingMode not getting properties applied when changed during Play Mode in Inspector. - [788132] Animation: Fixed case of GetHumanPose crashing when using it with a humanoid avatar with unsupported hierarchy. - [731510] Animation: Fixed case of incorrect transition shown in Inspector when entering Play Mode. - [740173] Animation: Fixed case of missing transitions when undoing layer deletion in Animator window. - [778658] Animation: Fixed case of playback not stopping when changing frame in the Animator window. - [774265] Animation: Fixed case of sample rate not being taken into account when moving key frame in curve editor. - [776673] Animation: Fixed case of Scene View not updating when changing clip in the Animation window. - [579556] Animation: Fixed case of slider in curve editor and dope sheet editor resetting for clips of short duration. - [775732] Animation: Fixed case of state machine undo moving focus back to base layer. - [784470] Animation: Fixed case of transition not evaluating when exit time is close to 1.0. - [667639] Animation: Fixed clipped text for transition preview warning message. - [715969] Animation: Fixed contextual menu operation on multiple selection in Animation window hierarchy. - [781321] Animation: Fixed copy/paste to a clip where associated properties don't exist. - [802327] Animation: Fixed crash when calling Animator.GetCurrentAnimatorStateInfo during an interrupted transition. - [778887] Animation: Fixed crash when changing playable controller in animator in Game Mode. - [748219] Animation: Fixed crash when duplicating transition. - [738767] Animation: Fixed crash when trying to copy Entry Transition. - [773437] Animation: Fixed crashes in AudioSource.GetCustomCurve. - [768490] Animation: Fixed Culled Animator still calling PrepareFrame - [718615] Animation: Fixed current frame not set properly in Animation window at certain sample rates. - [688412] Animation: Fixed current timeline breaking in the AnimationWindow during resize of the window. - [759023] Animation: Fixed curve editor range not updated when moving animation event. - [785686] Animation: Fixed curve selection not clearing in dopesheet editor when clicking on editor background. - [793808] Animation: Fixed cut letter in the 'Dopesheet' button, in the Animation window. - [727806] Animation: Fixed documentation for AvatarBuilder. - [761674] Animation: Fixed dopesheet keyframe manipulation not registering. - [747222] Animation: Fixed errors with unsupported functions and enum events. - [719392] Animation: Fixed event window not appearing when pressing Add Animation Event button. - [729176] Animation: Fixed focus on search field when opening the add StateMachineBehaviour windows. - [753273] Animation: Fixed frame clipping not performed when changing clips in curve editor. - [723883] Animation: Fixed frame number not updating during play or when pressing next/previous key frame button while it's being edited. - [780631] Animation: Fixed GameObject animated data being duplicated when copy-and-pasting in Scene View hierarchy. - Animation: Fixed Generic MatchTarget. - [740584] Animation: Fixed ghost rename text field in Animation window when changing selection. - [746454] Animation: Fixed highlight at wrong index after drag&dropping layer in Animator. - Animation: Fixed import of humanoids when root object rotation is not identity. Mostly single root 3DSMAX models - [756422] Animation: Fixed inconsistencies in dopesheet editor and curve editor framing. - [769233] Animation: Fixed inconsistency in tangent mode when a key frame is created between two with different tangent modes - [776653] Animation: Fixed issue where it was not possible to change clip when Animation window is locked. - [778610] Animation: Fixed issue whereby animation events could be added to read-only animation clips. - [775918] Animation: Fixed issue whereby property or keyframe deletion in read-only clip was not disabled in Animation window. - [789053] Animation: Fixed issue whereby the Z position of RectTransform couldn't be animated. - [775841] Animation: Fixed issues when dragging and dropping a sprite into a clip with no existing sprites. - [683514, 692934] Animation: Fixed issues with 2D elements creating unwanted keyframes in the animation window when enabled/disabled. - [722129] Animation: Fixed issues with dragging and dropping sprites in a GameObject with multiple bindings available. - [753249] Animation: Fixed key editing over duplicates issues in curve editor. - [732776] Animation: Fixed LayersAffectMassCenter with ControllerPlayable. - Animation: Fixed leaking scriptable objects in AnimationWindow. - [748981] Animation: Fixed live link to OverrideController. - [754813] Animation: Fixed long Animator.Update not playing all Events/ExitTimes. - [762706] Animation: Fixed loss of Animation window selection when selecting child GameObjects. - [707863] Animation: Fixed loss of curve selection in the curve editor when keys are moved in the dopesheet editor. - Animation: Fixed memory leak in AnimatorOverrideController. - [781950] Animation: Fixed missing keys when pasting on curves with multiple matching properties. - [741653] Animation: Fixed missing operation when dragging dope key outside of viewport. - [740590] Animation: Fixed missing undo when renaming binding in animation window. - [752791] Animation: Fixed missing update to Inspector when adding or removing property while recording animation. - [777630] Animation: Fixed missing warning when deleting a parameter that is used by a transition from "Any State". - [721991] Animation: Fixed NullReferenceException in Animation window when deleting the GO that is played. - [745089] Animation: Fixed NullReferenceExceptions when deleting a state after opening a project from 4.x. - [743873] Animation: Fixed numerical issues in next/previous frame scrubbing of animation window. - [677972] Animation: Fixed OpenGL errors showing in console when closing "Add Property" pop-up window. - [740187] Animation: Fixed Play/Pause/Step buttons not turning red when entering AnimationWindow recording - [765280] Animation: Fixed property added in recording mode to read-only clip. - [556392] Animation: Fixed property value not being unselected on mouse down in Animation window. - [746322] Animation: Fixed scene not being dirtied when AnimatorController field changes on Animator component. - [747816] Animation: Fixed StateMachineBehaviour not updating when changing AnimatorController at runtime. - [768879] Animation: Fixed StateMachineTransition being displayed as EntryTransition - [620551] Animation: Fixed text editing overlay remaining active in Animation window when losing focus. - [575983] Animation: Fixed unremovable property in the Animation window. - [759022] Animation: Fixed unresponsive animation window when zoomed out beyond a certain level. - [727806] Animation: Generating an avatar with AvatarBuilder should no longer return an error if the HumanDescription doesn't include the whole hierarchy up to the topmost GameObject. - [778870] Animation: If scaling GO using Vector3 with zero z or Vector2, then its child Rigidbody will have its position multiplied by that vector. - [664046] Animation: Implemented API for tangentMode in AnimationUtility. - Animation: Memory usage improvements. - [746020] Animation: ModelImporter.defaultClipAnimation should return the default mask. - Animation: Optimized AvatarMask inspector. - [624764] Animation: Overridden virtual methods are now listed as potential Animation Event targets in the animation window. - [749764] Animation: Removed modal dialog showing when removing states or transitions in the animator window. - [743853] Animation: Renamed interpolation Euler Angles (Quaternion Approximation) to Euler Angles (Quaternion) for simplicity. - [765649] Animation: Root motion not applied on single object - [771510] Animation: Rotate tool no longer rotates wrong for object with negatively scaled parent. - [754265] Animation: Unity no longer hard-crashes when importing Blender Rigify model. - Asset Bundles: BuildAssetBundles will now switch back to the original Active Build Target when finished. - [763293] Asset Bundles: Fixed AssetBundle.LoadFromFile usage with Application.streamingAssetsPath on Android. - [800939] Asset Bundles: Fixed crash when building AssetBundles. - [722725] Asset Bundles: Fixed issue whereby particle materials would lose reference to textures if loaded from AssetBundles. - [758260] Asset Bundles: Fixed thread hang after filesystem error when decompressing AssetBundle data to the cache. - [774223] Asset Bundles: Fixed up-to-date check when a script is only renamed, which previously could result in Asset Bundle build failures. - [726464] Asset Bundles: Loading AssetBundles via WWW outside of Play Mode now works correctly. - [778562] Asset Import: Added support for Blender 2.77 and later. - [771372] Asset Import: Fixed issue with fileScale value always being set to 1 on first import. - [785775] Asset Management: Fixed crash at UndoBase::DetermineUndoType when deleting a large number of objects or objects with large sizes. - Audio: Audio profiler: Added separator lines between columns, adapted initial column widths to fit, and added support for horizontal scrolling. - [782175] Audio: Fixed issue where an AudioSource created from MovieTexture.audioClip always returned 'time' property as 'Infinity'. - Audio: Fixed issue where AudioClip.LoadAudioData had no effect when called after AudioSettings.Reset. - Audio: Fixed issue where AudioSource.time was returning NaN values for user-created clips. - Audio: Fixed issue where non-persisted audio clips (i.e. clips created through AudioClip.Create) tried to reload after AudioSettings.Reset, which caused error messages. - [775982] Cache Server: Fixed issue with wrong CacheServer IP address used when "Check Connectivity" button clicked in Preferences window. - [775644] Cache Server: Implemented delay connecting to the cache server, until the user has finshed entering the cache server IP. - [678001] Compute: Add more specific error messages when creating compute buffers, to help pinpoint incorrect usage. - [708438] Compute: Compute shader programs that use >8 UAVs on platforms (e.g. D3D11 before 11.1) that don't support that many UAVs are no longer dispatched, and when importing such a shader a warning is reported. - Compute: Compute shaders from the same folder as a modified .cginc file are now reimported, just like regular shaders are. - [783093] Compute: Documented the restriction whereby ComputeBuffer.CopyCount is only available to IndirectArgs or Raw typed destination buffers. - [780340] Compute: Fixed a regression where ComputeShader.SetFloats wouldn't set all values in some constants (like arrays of matrices). - Compute: Fixed UTF8 BOM in compute shader include files not being understood properly. - [738117] Compute: Improved support for bool parameters for compute shaders. - [781700] Connect: Opening "last loaded project" upon Unity start up will no longer unlink project from its organization. - [681950] Core: Error messages are now returned for invalid locationPathName parameter values for BuildPipeline.BuildPlayer. - [674553, 727331] Core: Fix for prefabs not updating the root order property modification under certain circumstances. - [725043] Core: Fix for silent asset overwrite when importing a package via "openfile" (aka double-click). - [792497] Core: Fixed a crash when more than 65535 identically named objects were created. - [716926] Core: Fixed deletion order of depending components. - Core: Fixed possible crash when loading multiple asset bundles simultaneously. - [793567] Deployment Management: Exceptions from PostProcessBuild callbacks now correctly cause a build to fail. Previously builds with this issue would exit with return code 0. - Deployment Management: Fix to ensure to include streaming asset files in the editor log build report correctly. - [778565] Deployment Management: Fixed incorrect size calculation in the Editor log build report. - [761859] Deployment Management: Fixed issue whereby building Windows Standalone would fail with Config Dialog Banner set and "Install in builds folder" checked (relevant for source code customers only). - [369773] Deployment Management: When building from the GUI, Unity now uses a relative project path if the build location is under the project folder. - Documentation: Restored lost documentation for RenderTargetSetup. - [778324] DX11: Fixed errors when trying to create floating point textures in linear color space with mismatching flags. - [800247] DX11: Fixed rendering into 3D/2DArray render texture mip levels. - DX12: Fixed case of DX11 9.x feature level shaders erroneously being treated as supported on DX12. - [669717] Editor: Added a warning in the Camera Inspector when the rendering path is set to deferred but the perspective is set to orthographic, as orthographic is unsupported in the deferred path. - [804676] Editor: Building VR projects when running on case-sensitive file systems will now correctly find the target plugin folders. - [748499] Editor: Copying a directory onto itself will no longer incorrectly recurse. - Editor: Ctrl/Cmd + marquee select now correctly subtracts from selection in light probe group editor. - [776559] Editor: Custom cursor texture is now validated when setting it, fixing issue where custom cursors could look corrupted. - [779935] Editor: Dragging objects between different Editor processes should no longer cause unintended behaviors. - [787114] Editor: EDITOR ONLY: When multiple Scenes are open when entering Play Mode, the active Scene is now loaded and activated first, no matter where in the list it is. When exiting Play Mode the previous active Scene is now correctly made the active Scene again. - [755238] Editor: Editor will now show compiler errors in English while building to Windows Store Apps, even if Windows locale is not set to English. - Editor: EditorGUIUtility.ObjectContent now adds type information to the text string as shown for ObjectFields: E.g: "Concrete (Texture2D)" instead of "Concrete". - Editor: EditorUtility.SetSelectedWireframeHidden state is now saved into Scenes. - [777750] Editor: Fix to prevent the following error: "GetEditorTargetName is not allowed to be called from a MonoBehaviour constructor, call it in Awake or Start instead". - [440883] Editor: Fix to show correct platform title instead of platform ID. - Editor: Fixed a bug in Editor DelayedTextField where it was losing edit progress when moving focus to a checkbox or dropdown. - [712973] Editor: Fixed a Mac-only crash when importing some textures. - [709369] Editor: Fixed an issue that could cause Scenes containing prefab instances with driven transforms to immediately become dirty. - [762946] Editor: Fixed an issue where, when dragging a second Scene to the Hierarchy, the first Scene would auto-expand. - [703222] Editor: Fixed Camera preview sometimes not taking manually overriden projection matrix into account. - [774466] Editor: Fixed case of potential extra parenthesis when updating from instance method to instance property. - [795182] Editor: Fixed case of Scene.buildIndex being always -1 when in Edit mode. - [752218] Editor: Fixed certain UI elements not responding after changing the graphics API. - [777411] Editor: Fixed crash when registering Transform using Undo.RegisterCreatedObjectUndo. - [763920] Editor: Fixed curves in Particle System inspector not showing negative values initially. - [811990] Editor: Fixed deployment of native plugins when building Linux Universal player. Existing projects will need to reapply plugin importer settings. - [734284] Editor: Fixed different color gradients & pickers being seen between Gamma and Linear space in player settings. - [810330] Editor: Fixed empty Analytics Terms of Services link. - [792560] Editor: Fixed graphics settings when importing from an old Unity package. - [795707] Editor: Fixed issue where in some cases the Editor window title was not reflecting the current graphics emulation setting immediately. - [805547] Editor: Fixed issue where two (or more) Cef browser windows were instantiated. - [676201] Editor: Fixed issue whereby exiting Play Mode via script from Start did not work. - [715448] Editor: Fixed loss of all keyboard shortcuts when focus is on a Unity Connect window or Asset Store window. - [745085] Editor: Fixed null ref configuring Avatar. - [775986] Editor: Fixed NullReferenceExceptions that could be triggered when multi-selecting in the Inspector. - [729048] Editor: Fixed OSX native web view that would continue ticking its timer after being closed. - [757729] Editor: Fixed Scene View crashing if internal Scene View Camera is disabled. - [757212] Editor: Fixed squashed vector fields in Material editor. - [705226] Editor: Fixed toggling of Asset Store window in fullscreen mode. - [743688] Editor: Fixed warning when deleting an open Scene in the assets folder and then saving the scene from the Hierarchy window. - [778277 763319] Editor: Launcher cosmetic changes: adjusted header element separation and project list UI. - [788602] Editor: Launcher cosmetic changes: adjusted title font weight. - [799627] Editor: Reduced error messages when using IMGUI scope helpers. - [789883] Editor: The OnGeneratedCSProjectFiles callback is now triggered as expected when using Visual Studio. - [796682] Editor: Updated default license and activation host URL for staging and dev environments. - [739892] Editor - Other: Fixed an Editor crash when closing with a detached Asset Store window. - [650493] GI: "Edit" button for non-editable lightmap parameter assets now says "View". - [663512] GI: Added a message to appear in the Lighting window when a reflection source is not set. - [754308] GI: Fixed an issue on iOS and some Android devices where Materials with high Emission would produce banding artifacts when real-time GI was used. - [684983, 672944] GI: Fixed baked transparency not being applied to AO. - [629690] GI: Fixed baked transparency textures not using tiling and offset values. - GI: Fixed black (NaN) artifacts produced in rendering by invalid directional lightmaps. - [743095] GI: Fixed fallback to non-directional lightmaps on SM2.0 hardware. - GI: Fixed issue of changing lightmap directionality mode corrupting the scene rendering. - [791437] GI: Fixed Light/Reflection probe settings in Renderer sometimes being lost after a project upgrade. - [728021] GI: Fixed scene view GI visualizations sometimes not working properly. - [793304] GI: Fixed some cases of null reference exceptions related to baked object preview coming out of the Lighting window. - [714102] GI: Improved Lighting window preview of very small lightmaps. - [649006] GI: Improved mixing of directional specular lightmaps with realtime shadows. - [650495] GI: Improved name of Default lightmapping parameters asset. - [632894] GI: Refresh light probe connections when editing probe position via Inspector. - [534658] GI: Substances now work with baked transparency. - [776004] GI: The importance value for reflection probes can no longer be a negative value. - [775942] Global Illumination: A warning will now appear if user is trying to update a disabled reflection probe. - [736077] Global Illumination: Fix in Editor to avoid some of the automatic GI overhead when GI is turned off. - [777505] Graphics: Apply material keywords when drawing from command buffers. - [793506] Graphics: Color space switch done via PlayerSettings.colorSpace API now takes effect immediately. - Graphics: DX12: Implemented support for ShadowSamplingMode. - [685154] Graphics: Filtered out duplicate graphics APIs in PlayerSettings.SetGraphicsAPIs. - Graphics: Fix to avoid crash in CommandBuffer.Draw commands if null renderer/mesh is passed. - [778188] Graphics: Fix to ensure that Unity doesn't set unsupported texture filter or wrap modes. - Graphics: Fix to prevent spam of D3D11 debug layer warning messages when setting resource names. - [814300] Graphics: Fixed a mipmapping bug causing mipmaps to not update in certain scenarios. - [768232] Graphics: Fixed an issue where dynamic batching could produce corrupted geometry when vertex components are compressed. - [735101] Graphics: Fixed an issue with Texture2D.LoadImageIntoTexture() -> Texture2D.Apply() not generating mipmaps if the Texture2D object didn't have actual mipmaps previously. - [728925] Graphics: Fixed case of crash after SetTargetBuffers and GameObject.SetActive(false). - [803086] Graphics: Fixed crash in CommandBuffer.DrawMesh when material is null. - Graphics: Fixed Crunch texture compression artifacts caused by integer overflow. - [804750] Graphics: Fixed DepthNormals pass to no longer use very low precision (16bit) depth buffer. It now uses the same depth buffer format as regular rendering. - [778659] Graphics: Fixed incorrect normal/tangent generation when dynamic batching is used on a rotated object without a normal/tangent stream i.e. a rotated TextMesh object. - [808304] Graphics: Fixed issue where SyncAsyncResourceUpload would do a busy wait loop, causing a CPU to get pegged unnecessarily. - [678975] Graphics: Fixed issue whereby building a player with -nographics would cause some image effects to become disabled in the build. - [715712] Graphics: Fixed material file content being non-deterministic in Editor, sometimes order of properties in a '.mat' file changed. - [752757] Graphics: Fixed meshes with large scale and blend shapes sometimes not being lit correctly. - [758050] Graphics: Fixed null reference exception being thrown when setting null to mesh triangles/indices. - Graphics: Fixed potential problem capturing frames in RenderDoc. - [807174] Graphics: Frame Debugger; fixed assert 'PPtr cast failed when dereferencing! Casting from Mesh to Renderer!' - [775652] Graphics: Improved feedback on Recalculate Bounds in LODGroup inspector. - Graphics: Newly created line renderers and trail renderers will now have light probes and reflection probes disabled by default. - [687726] Graphics: RenderTextures that have IsCreated==false will now consistently render black if bound, instead of undefined results. - [739115] Graphics: Repeated loading of PNG/JPG images into a Texture2D no longer fails. - [768296] Graphics: Static batching can batch more objects now: previously had a limit of 32k indices, now 64k. 32k limit stays on Mac due to driver issues. - [727467] Graphics - General: DX12: CPU profiler timeline view is no longer broken. - IAP: Fixed case of failed IAP purchase events costing Unity Analytics analysis points. - IAP: Fixed case of IAP not sending Transaction events to Unity Analytics. - [742005] IL2CPP: Fixed NavMesh stripping issue. - IMGUI: BeginVertical now behaves like BeginHorizontal. - [768042] IMGUI: Fixed CurveEditor's OnGUI Repaint early-out not respecting control IDs. - [744171] IMGUI: GenericMenu shortcut keys are now displayed. - [598054] IMGUI: Material Property Drawer now displays a slider and drop-down menu. - [744136] IMGUI: NullReferenceException in plugin no longer crashes Unity in GUIStyle drawing. - [525606] iOS: Fix to use remote notifications API only if they are used. This fixes a warning when submitting to iTunes Connect. - [791387] iOS: Fixed case of some Japanese-Kana keyboard buttons being ignored. - [777964] iOS: Fixed case of Unity splash screen on standalone player appearing brighter in linear color space on Metal. - [801573] iOS: Fixed crash in FMOD_RESULT DSP::release() when AudioManager is deinitialized several times. - [732658] iOS: Fixed Korean, Indian and Hebrew font fallbacks. - iOS: Incremented the minimum supported iOS version from 6.0 to 7.0 (edit: moved to Changes section) - [753299] iOS: viewWillTransitionToSize will now be passed to super even if we disallow orientations. This helps plugins that use presentation controllers. - [764995] Materials: Fixed a crash if GetMaterialProperties is called with a null in the list of materials. - [776940] Metal: Fixed case of native RenderBuffer query failing in case of multi-threaded rendering. - [731111] MonoDevelop: Added a hint in the breakpoints dialog warning that the list of available exceptions is generated only from the currently selected project. - [759138] MonoDevelop: Disabled Git, Subversion and NUnit add-ins by default. This fixes an issue with being unable to write to newly created scripts. - [754609] MonoDevelop: Fixed issue with MonoDevelop showing "��u" symbols in document view after using "Save As". - [485138] MonoDevelop: Fixed issue with MonoDevelop sometimes giving focus to the wrong script when opened from Unity. - [729201] MonoDevelop: OSX: Fixed issue with MonoDevelop not working when copied to case-sensitive partition. - Multiplayer: Fix to clean up MatchInfo UI in the NetworkManager Play Mode inspector. - [732687, 696591] Multiplayer: Fixed bug where connecting to a non-https:// MatchMaker after joining one match would fail in all cases. - Multiplayer: Fixed issue where an error response from the server could lead to undesirable console output in non-error cases when setting the match auth token. - Multiplayer: Fixed issue where default matchmaker port was 80 instead of 443 in one code path. - Multiplayer: Fixed issue where wrong initialization connection amount could lead to system crash. - Multiplayer: Fixed issue whereby cleaning up a connection containing a StateUpdate channel could cause a crash. - [788537] Multiplayer: Fixed issue with initial state in SyncListStructs not being handled correctly. - [731045] Multiplayer: Fixed issue with matchSize being incorrectly used from a 'join match' response. - Multiplayer: Fixed MatchMaker URI to be correct with http:// prefix as default. - [738501] Multiplayer: Fixed UI panel on NetworkManager for match max size and name, and added tooltip info for both. - [727797] Multiplayer: Networking: Fixed lobby issue where it would reject new players when there was still one slot left. - Multiplayer: Removed warnings "no free events" and "Attempt to send to not connected connection"; Bugfix: acks now reset when connection resets. - Multiplayer: The WWW object in MatchMaker callback handler is now explicitly disposed when the handler is done with it. - [795897] Networking: Fixed issue with SendToAll sending duplicate messages to the local client when hosting. - [776137] Occlusion Culling: Fixed broken portal visualization in some cases. - [775691] Occlusion Culling: Fixed visualization when changing scenes. - OpenGL: Fixed cases of SystemInfo.graphicsMemorySize, graphicsDeviceID and graphicsDeviceVendorID being incorrect on desktop platforms. These now match legacy GL behavior. - OpenGL: Fixed crash and reflection probes corruption on iOS A9 devices when using GLES 3.0. - OpenGL: Fixed issue whereby 3D textures did not have mipmaps in some cases. - OpenGL: Fixes for Windows fullscreen mode. - [794384] OpenGL: Work-around for Mac Intel driver crash when trying to use tessellation shaders on a GPU that can't do it. - [799348] OSX: Fixes for Cinematic Effects (Depth Of Field, Screen Space Reflections, SMAA) on Metal. - [702914] OSX: Menu bar in standalone player no longer blocks the main player loop from updating. - [807378] OSX: Significantly improved Editor exit times. - [760072] Particles: Applied fix to ensure no garbage is generated when using certain script commands. - [782535] Particles: Burst counts are no longer limited to 64K. - [772263] Particles: Collider visualization should now scale with transform. - [763929] Particles: Disabled size/rotation properly when toggling 3D in the Editor. - [764568] Particles: Disabled unused UI options. - [696305] Particles: Faster particle mesh data caching and memory usage optimization. - [775739] Particles: Fix to disable "Speed Range" UI when it is not relevant. - [767786] Particles: Fix to preserve stopEmitting parameter when becoming visible (culling fix). - [761003] Particles: Fix to support radius in Trigger module. - [782648] Particles: Fixed an edge case where 3D size wasn't behaving correctly. - [790186] Particles: Fixed case of "Invalid AABB" error messages. - [761790] Particles: Fixed case of scale not being applied correctly to AABB in the TrailRenderer. - [781570] Particles: Fixed collision bug where NaN could be generated for the contact normal. - [755330] Particles: Fixed collision events to ensure correct events are sent to correct GameObjects. - [757061] Particles: Fixed crash when mesh is missing inside player (for example, when a default mesh is used). - [763664] Particles: Fixed crashes with null curves and gradients. - [765346] Particles: Fixed debug plane visualization. - [805565] Particles: Fixed issue where 3D start rotation affects Rotation Over Lifetime module. - [795404] Particles: Fixed issue where assigning a different mesh to Shape Module could cause a crash. - [773317, 793614 ] Particles: Fixed issue where automatic culling icon did not work with multiple Particle Systems. - [775304] Particles: Fixed issue where collision messages did not work for plane collisions. - [774931] Particles: Fixed issue where crashes would occur when material is missing and mesh colors are requested. - [806920] Particles: Fixed issue where GetTriggerParticles returns incorrect value for first few frames running in Editor. - [798671] Particles: Fixed issue where isVisible was not always correct or not updated by ParticleSystem.Play, depending on camera position. - [791082] Particles: Fixed issue where Particle System twitches when being moved in the Editor. - [803866] Particles: Fixed issue where Particle.GetCurrentSize3D apply curves only to X if separate axis is not set. - [784875] Particles: Fixed issue where prewarm was ignored when start delay was greater than start lifetime. - [771887] Particles: Fixed issue where size over lifetime using separate axes wasn't always working. - [786561] Particles: Fixed issue where Start Frame range is one frame longer than available frames (i.e. TilesX*TilesY). - [769838] Particles: Fixed issue where Texture Sheet Animations could incorrectly loop back to the first frame at the end of their animation. - [762708] Particles: Fixed issue where Trigger Module crashed when assigning the wrong list type via script. - [762702] Particles: Fixed issue where Trigger Module wasn't spawning death sub emitters. - [805843] Particles: Fixed issue where Trigger particle with 0 speed fails to know whether it is inside or outside Collider. - [805903] Particles: Fixed issue where Trigger radius scale property is not exposed to public API. - [769656] Particles: Fixed issue where using large gravity values could cause erroneous error messages. - [765533] Particles: Fixed issue where Visualize Bounds did not scale the debug rendering correctly. - [778193] Particles: Fixed mesh color usage and improved messaging. - [756108] Particles: Fixed mesh double scaling issues. - [755095] Particles: Fixed Particle Shader preview to display a flat plane instead of a sphere. - [753940] Particles: Fixed Particle System Simulate issues. - [521391] Particles: Fixed Particle System's inconsistent behavior with Time.timeScale. - [759756] Particles: Fixed stretch particle bounds to avoid culling errors. - [754042] Particles: Fixed to give sub-emitters better default names based on their context (birth, death, etc). - [770176] Particles: Fixed to hide "Angle" option in Shape Module when it is not relevant. - [790506] Particles: Improved RotationBySpeed module UI and made tooltips more descriptive. - [765472] Particles: Improved the Mesh Particle error message that appears when buffers are full. - [761689] Particles: LimitVelocityOverLifetime module no longer allows negative values. - [754041] Particles: Max Collision Shapes option can no longer be negative. - [767827] Particles: Renamed Particle.lifetime property to Particle.remainingLifetime for clarity. - [790789] Particles: Scene view Particle count now shows SubEmitter information. - [769598] Particles / VR: Fixed issue where stretched particles did not work in VR. - [766261] Physics: Fix to ensure that 2D Overlap/Cast checks use consistent 'skin' radius for all shapes. - [716264] Physics: Fixed HingeJoint setup issue when changing isKinematic property on attached Rigidbody. - [751979] Physics: Fixed inertia tensor being broken after a new Collider was added to the Rigidbody. - [777966] Physics: Fixed the source of a number of crashes that could occur when setting properties (such as 'connectedBody') of broken Joints during the OnJointBreak callback. - [753924] Physics: Fixed to ensure Unity doesn't pass infinite radius into PhysX overlap. - [798901] Physics2D: Animating the Transform position/rotation when using 'Animate Physics' now correctly uses Rigidbody2D MovePosition/MoveRotation. - Physics2D: Ensure that buoyancy forces take into account the Rigidbody2D gravity-scale. - Physics2D: Fix ensures that animating a Joint2D property always updates the joint immediately. - Physics2D: Fix ensures that when recalculating contacts for an Effector2D, all relevant Rigidbody2D are woken. - [747934] Physics2D: Fix to stop TargetJoint2D crashing when using a Kinematic Rigidbody2D. - Physics2D: Fixed crash when setting a frequency of zero on the TargetJoint2D. - [798879] Physics2D: SurfaceEffector2D now correctly calculates tangent velocities for objects with forces opposing the desired surface speed. - [772977] Plugins: Fixed Editor crash when trying to set plugin settings for deprecated platforms. - Plugins: Fixed issue with Android plugins where some of them were treated as folder plugins (for example, Assets/Plugins/Android/SimpleJarPlugin.java). - Plugins: Native plugins will now correctly initialize their settings on import and this will be reflected in meta file. - [760993] Profiler: Fixed incorrect time displaying in CPU profiler if time exceeded 5 seconds. - [704398] Profiler: The Record setting is now saved so that it is restored when re-opening the Profiler window during an Editor session. - [790626] PS4/PS Vita: Fixed issue whereby script-only builds did not copy native plugins. - Samsung TV: Fixed case where OnMouseDown() was not working on Samsung TV. - [765357] Scripting: An exception is now always thrown when calling AssetDatabase methods from constructors and during serialization. - [765509] Scripting: Deactivation order during built application shutdown is now consistent with normal scene unloading. Previously closing a built aplication could result in a NullReference exception. - [746490] Scripting: Debug.Log no longer consumes memory when StackTraceLogType is None. - [766939] Scripting: Fixed crash after calling DestroyImmediate(gameObject) in MonoBehaviour.Awake. - [780093] Scripting: Fixed crash in APIUpdater when updating some Boo/UnityScript scripts. - [746877] Scripting: Fixed crash on FieldInfo.SetValue and FieldInfo.GetValue when field is not defined on target object specified. - [770003] Scripting: Fixed crash triggered by a double cleanup inside coroutines. - [723650] Scripting: Fixed crash when accessing SerializedProperty.tooltip in some cases. - [784556] Scripting: Fixed InvalidExceptionCast exceptions when updating 'safeCollisionEventSize'. - [750066] Scripting: Fixed issue where GetComponent() throws an exception if called from constructors or deserialization. - [750066] Scripting: Fixed issue where GetTransform<>() invoked from a constructor or field initializer could crash Unity. - [697550] Scripting: Fixed issue with being unable to use ScriptableObject.CreateInstance with a nested ScriptableObject class. - [657306] Scripting: Fixed thread safety of TimeZone.CurrentTimeZone and DateTime.Now. - [723842] Scripting: GetHashCode() on UnityEngine.Object derived classes no longer changes when the object is destroyed. - [737455] Shaders: A depth buffer for temporary textures created by GrabPass is no longer created. - [736102] Shaders: Alpha is no longer overwritten if an opaque surface shader writes to occlusion in the deferred pass. - [728200] Shaders: Diffuse alpha channel in opaque shaders is now overwritten before finalGBufferModifier is applied, to allow full control of diffuse alpha to the user. - [778700] Shaders: Fixed a case where a material using a standard shader with transparency could be sorted incorrectly if the shader was reselected. - [782654] Shaders: Fixed a potential crash if an internal error is encountered while compiling a D3D11 shader. - [644807] Shaders: Fixed case of animated Standard Shader emission being wrongly overriden by Material inspector. - [793886] Shaders: Fixed case of function shaders containing CustomEditors not displaying the correct editor. - Shaders: Fixed cases where unsupported LightMode=Meta pass type was also making LightMode=ShadowCaster passes unsupported. - Shaders: Fixed DX11 shader disassembly not showing correctly in the editor UI for 'show compiled code'. - [792541] Shaders: Fixed issue whereby importing a package containing Player Settings with graphics API settings that differ from the current ones would cause shaders to not work properly. - [793168] Shaders: Fixed issue whereby in some cases the [Header] attribute on the shader property UI could cause the next shader property to have the same name as the header text. - Shaders: Fixed PBS surface shader compilation when Occlusion output is not present. - [785928, 786539] Shaders: Fixed UNITY_SAMPLE_TEXCUBE_LOD to only force texCubeBias when we are sure that there is no texCubeLod available. - Shaders: Fixed very small or very large default shader property values not being serialized correctly. - [784141] Shaders: Improved error messages emitted if a syntax error is found early in surface analysis. - Shaders: Made some optimizations for surface shader importing time. - [786534] Shaders: Removed a spurious error about _Emission if a legacy Self-Illumin shader was selected in a material. - [774822] Shaders: Surface shaders can now directly include "Lighting.cginc" to control where it is included. - [669396] Shaders: _CameraDepthTexture shader property is now preserved across calls to RenderWithShader(). - [751764] Shadows: Added animation support of light shadow near plane. - [766179] Shadows: Fix to prevent setting out-of-range shadow strength. - [745720] Shadows: Fixed shadow error messages happening in some cases. - [782556] SpeedTree: Fixed an issue where billboards didn't scale correctly when batched. - [745080] Standalone: Checking/unchecking the 'Windowed' checkbox will no longer reset the screen resolution. - [755097] Standalone: Fixed Application.persistentDataPath when Product Name contains invalid path character. - Substance: Fixed case of IsProcessing sometimes staying true in the player when changing $randomseed or $outputsize. - [794364] Substance: Fixed case of outputs not being generated correctly when changing values too fast. - Substance: Fixed corner cases of outputs not being impacted by any input not being generated. - [779560] Substance: Fixed crash in VisibleIf expression evaluation, caused by looking up an input by label instead of by identifier. - Templates: Fixed whitespaces in Editor Tests template. - [784423] Terrain: Fixed an issue where deleting in-use terrainData while GI is being baked for the terrain would crash the editor. - Terrain: Fixed an issue where the legacy Soft Occlusion tree shaders didn't work well with images effects that utilize the depth texture. - Terrain: Fixed incorrect SpeedTree rendering when a tree is in dithering on some mobile devices such as iPhone 6. - [730899] Terrain: TreeEditor, baked tree textures are no longer tinted to current fog color. - [800401] Text Rendering: Fixed a memory leak and behavior leading to crash in Font::CacheFontForText. This also reduced the amount of unnecessary growth that could occur in the Font texture atlas. - [784847] Text Rendering: Fixed a potential crash NativeTextGenerator::InsertCharacter when processing badly formed rich text. - [792648] Text Rendering: Fixed bug preventing best fit from working when font size 0 specified. - [803344] Text Rendering: Fixed case of text occasionally sampling beyond the spacing between adjacent glyphs by adding a default 1 texel padding around all glyphs in dynamic font textures. - [790016] Tizen: Fixed case of TapCount not functioning in Development mode. - [790311] Tizen: Fixed issue where projects would hang on the splash screen if a custom splash image was used. - [787494] Tizen: Fixed operation of Application.OpenURL when used with mailto: URLs. - Tizen: Fixed problem whereby an object would sometimes move to the center of the screen when the device was rotated. - tvOS: Fixed issues with Xcode plugin installation when it is done by tvOS platform support. - [739883] tvOS: Fixed SetScriptingDefineSymbolsForGroup for tvOS. - [747537] tvOS: Menu button will now be disabled when playing video if allowExitToHome is disabled. - tvOS: Movie playing is now enabled on tvOS. - tvOS: tvOS platform support installation is now independent from iOS platform support. - [749573] UI: After device reset, an application repaint is now forced to prevent the canvas from disappearing after a session lock/unlock. - [775637] UI: Applied change to re-emit during the repaint loop due to world space canvasses not using cameras to render. This fixes a case of UI Elements becoming invisible after scrolling "Graphics" option for the element. - [605596] UI: Assigned default font to text/button text created in the Editor when playing. Previously labels were empty in such cases. - [797902] UI: Black bars that are only visible when game and display resolution aspect ratios don't match are now cleared every frame. - [662320] UI: Change to hide unused bump map tiling editor in UI/Lit/Bumped Shader inspector. - [688005] UI: Fix to add unsupported fonts in '<font name> only supported in dynamic fonts.' warning. - [790246] UI: Fix to force canvas.position.z to 0 when updating the RectTransform of a screen space canvas. - [787872] UI: Fix to prevent Canvas emitting to a preview camera. - [780185] UI: Fixed buffers not being copied to threads causing the incorrect shader properties to be used and UI flickering to occur. - [787195] UI: Fixed case of Canvas Editor bypassing overrideSorting and sortingOrder setters, missing needed events. - [794711] UI: Fixed case of shader UV properties being discarded in graphics material. - [786986] UI: Fixed case where RectMask2D was clipping nested canvases with overrideSorting. - [742140] UI: Fixed first color crossfade tween cancelling subsequent tweens while running. - [772003] UI: Fixed inadvertent interaction with UI when cursor is locked. - UI: Fixed incorrect alpha threshold implementation. - UI: Fixed incorrect content validation when assigning text. - [768754] UI: Fixed incorrect line height with embedded quads in rich text. - [794038] UI: Fixed infinite ScrollRect elastic movement. - UI: Fixed issue that was causing the first line of text to include the ascent offset scaled by line spacing. - UI: Fixed issue were LayoutComplete() was called instead of GraphicUpdateComplete() after the GraphicUpdate loop. - [764925] UI: Fixed issue where MovieTextures did not have the texel size set. - [798018] UI: Fixed issue where multiple webviews were created in UnityConnectEditorWindow. - [772953] UI: Fixed issue where out-of-memory crash would occur if borders are larger than drawable rect. - [772943] UI: Fixed issue where pressing up in an empty text field would index the first character, causing an out of range exception. - [785501] UI: Fixed issue where the last UI element added to a layout group via the Create... menu would be mis-positioned. - [771816] UI: Fixed issue where vertical alignment was set to TOP if the line height was less than the text extents. - [783853] UI: Fixed issue with blurry text when centering text and pivot set to 0.5 and width/height non-power-of-2. - [778121] UI: Fixed issue with crash due to dirty Renderer being in the dirty list after being destroyed. - [803901] UI: Fixed issue with double rendering of world canvas in Editor. - UI: Fixed issue with null reference error in Graphic on reload. - [793385] UI: Fixed issue with small UI gittering due to local position-rounding issues. - UI: Fixed NaNs introduced in font calculations when the font reports its size as 0. - [780112] UI: Fixed nested RectMask2D not updating when enabling/disabling them. - [793119] UI: Fixed off-by-1 error in InputField.characterLimit when ContentType != standard. - [769981] UI: Fixed the issue with the font importer inspector getting dirtied each time it was enabled. - UI: Fixed to only process mouse input if there is a mouse present within the StandAloneInputModule. - UI: Fixed toggle group assignment use to re-register the toggle during onDisable(). - [775987] UI: Fonts now report a proper descent value, which means we can the calculate bottom extent independent of internal leading. Previously, this was causing some characters to render in the wrong place. - [765232] UI: If the InverseMatrix of the Canvas change, Unity now dirties the batch to redraw at correct location. - [786986] UI: MaskUtilities fixes: - Fix to skip inactive RectMask2D - Fixed nested clipping. - [766769] UI: Reduced overhead incurred by SetSiblingIndex calling OnTransformParentChanged. - [778280] UI: Removed functions from onValidate() that cause incorrect behaviour: - Set does nothing as serialized property has already overwritten the value by this point. - PlayEffect can cause other items to go invisible. - [730100] UI: Special characters \n and \t are now stripped from InputField text when in single-line mode to prevent multi-line text in single-line mode. - UI: World space canvas now emits to each camera in the Scene instead of directly to the world. - VCS: Moved the assignment of an identifier from the constructor to the Get method. This stops a C++ method being called during C# constructor and serialization (which is a scripting warning as it is unsafe to do this). - [483760] Video: Fixed case of MovieTexture hang due to incorrect frame rate discovery. - [787993] Video: Fixed case of MovieTexture hang with multi-channel audio. - [730528] Video: Fixed case of MovieTexture Incorrect colors. - [402608] Video: Fixed case of MovieTexture spurious sound when regaining focus. - [715732] Video: Fixed MovieTexture crash when importing large files. - [596738] Video: Fixed MovieTexture Unicode support in .mp4/.mov filenames on Windows. - [761981] VR: Exposed Camera.targetEye to script for virtual reality applications. - [704978] VR: Fixed case of application still updating while Editor is paused (in VR mode). - [783787] VR: Fixed case of VRSettings.supportedDevices not populating until a VR device is loaded. - VR: Fixed Culling in left eye for OpenVR applications. - VR: Fixed file access error when building with Oculus or OpenVR plugins included in the Plugins folder. - [713551] VR: Fixed issue where using [Deferred Rendering + MSAA + Image Effect - blur] would render a black screen. - VR: Fixed issue with acquiring Correct node values during the first frame for Oculus. - [807558] VR: Fixed Particle System rendering in stereoscopic rendering modes. - [765997] VR: Fixed Screen Orientation on GearVR applications. - [766124] VR: Fixed splash screen aspect ratio issues for VR applications. - VR: VR splash screen images now follow the gaze of the user so that they are always seen. - [799367] WebGL: Correct WebGL build in the Windows editor. - [770266] WebGL: Fixed 'Uncaught incorrect header check' error. - [752150] WebGL: Fixed a default button/axis mapping problem with the Xbox 360 controller on some platforms. - [778027] WebGL: Fixed anisotropic filtering in Safari. - [759492] WebGL: Fixed browser lock-up when profiling. - [781565] WebGL: Fixed case of blendEquationSeparate shader errors being thrown. - [790931, 789543] WebGL: Fixed Command key input issues on OS X. - [744374] WebGL: Fixed cursor hotspot coordinates when hotspot is outside the cursor area. - [792856] WebGL: Fixed error message about loading WebGL content from file:// URLs in browsers that don't support this. - WebGL: Fixed float RenderTextures in WebGL 2.0 in Chrome. - [794353] WebGL: Fixed floating point precision errors, which could result in a variety of errors during play. - [771984] WebGL: Fixed issues launching WebGL content in iframes when third-party cookies are disabled. - [782587] WebGL: Fixed point-light rendering on Windows when deferred rendering is enabled. - [759286] WebGL: Fixed spot-light rendering on Windows when deferred rendering is enabled. - WebGL: Fixed WebGL 2.0 support to run on browsers implementing the final WebGL 2.0 specifications. - [749303] WebGL: Implemented AudioSource.PlayScheduled and AudioSource.SetScheduledEndTime. - WebGL: Improved performance of floating point math in Chrome. - WebGL: WebGL: Fixed input Y coordinates being off by one. - [691571] Window Management: LockReloadAssemblies now works correctly. - [608901] Window Management: Minimizing Unity no longer changes the size of maximized Unity windows. - [788011] Windows: Fixed case of custom cursor not working after ~3300 updates to different texture, which could also lead to a crash on opening some standardWindows dialog in the Editor. - [797575] Windows: Fixed default behaviour for the native resolution setting: the player will now default to native resolution on the first run when this is enabled. - Windows: Fixed occassional stutter when running with VSync turned off. - Windows: Unity now properly handles non-ASCII command line arguments in Editor and standalone. - [784933] Windows: Windowed Direct3D 11 applications will no longer run with their framerate uncapped when minimized. - [742250] Windows: Windows Standalone: Input.touchSupported will return a correct value. - Windows Phone 8: Added obsolete attribute for BuildTargetGroup.WP8 and BuildTarget.WP8Player. - [801951] Windows Standalone: Custom images will now be shown correctly in the screen selector dialog when building to a directory with non-Western/Latin alphanumeric characters. - [765773] Windows Standalone: Game execution will no longer freeze while dragging window via title bar. - Windows Store: Added docs for PlayerSettings.WSA in Scripting Reference. - Windows Store: Application will now exit without crashing when calling Application.Quit(). - Windows Store: Fix to correctly generate RootNamespace inside Visual Studio project if product name contains whitespaces while building with IL2CPP scripting backend. - [774815] Windows Store: Fix to exclude abstract classes from serialization (as is the case for all other platforms). - [793456] Windows Store: Fixed a crash that could occur when building a UWP app that references Windows.winmd from a non-standard location. - Windows Store: Fixed a rare error message "unknown type xxx detected. Using reflection to gather its type information." when yielding IEnumerators in Coroutines on .NET scripting backend. - Windows Store: Fixed antialising when calling Screen.SetResolution on Universal Windows 10 Apps. - [754312] Windows Store: Fixed build & run when using IL2CPP scripting backend. - [776483] Windows Store: Fixed build and run when Unicode characters with diacritics are used in product name. - [779572] Windows Store: Fixed case of mouse scroll not generating GUI events and being less sensitive than on other platforms. - [750001] Windows Store: Fixed compilation error when using .NET 4 plugin without placeholder and compilation override None. - Windows Store: Fixed DirectoryInfo.FullName on 8.1 - Windows Store: Fixed rare error "Incorrect hashcode" which sometimes could cause a crash. - Windows Store: Fixed Screen.SetResolution with fullscreen true on SDK 8.1; previously when antialising was set to 0, you would get a black screen. - [761936] Windows Store: Fixed serialization error. Unity will no longer serialize fields in Windows Store Apps which are of the same type as the class where this field is located (for example, if you have a class Test and you have a field inside public Test MyField;). This behavior was already present in Editor, but not in Windows Store Apps. - [759313] Windows Store: SystemInfo.deviceType will now correctly report Handheld when running on Windows Phone 10. - [730023] Windows Store: Unity will no longer report XboxOne controllers as Xbox 360 controllers. - Windows Store: Unity will no longer steal key events when another XAML element (e.g. TextBox) is in focus. - [741458] Windows Store: WebCamera will now correctly continue working after minimizing/maximizing application. - [752581] Windows Store: When building to Universal Windows Platforms, project.json will not be overwritten if the contents inside were modified. 5.4.0f3 Release Notes (Delta since F2/RC2) Fixes (Delta since f2/rc2) - [815233] Animation: Fixed a crash when previewing animations on models with no avatars. - [814501] Graphics: Fix anisotropic filtering setting sometimes being ignored for textures - [801677] Graphics: Fixed crashes on very large texture imports (coming to 5.3.x too) - [807961] Graphics: Frame Debugger; fixed render target game view visualization regression - [817291, 813987] Graphics: GPU Profiler; fixed events like draw calls not showing up or their cost being attributed to wrong sections - [815127] Graphics: LightProbes.GetInterpolatedProbe will return the ambient probe if there are no light probes in the scene. - [769774] Image Effects: Fixed HistogramCompute implicit vector type truncation warnings - [813091] Metal: Fixed crash when iOS device is rotated continously and autorotation is enabled - [817041] Rendering: Fixed issue with re-ordering of OnPreRender event and culling data creation. - [806751] Shaders: Fixed transparent shader writing into DepthNormals texture, because material sometimes cache RenderType setting (coming to 5.3.x too) - Trails/Lines: Fixed crash when using Lines and Trails at the same time Known Issues - [814953] Audio: Memory tracking of AudioClip Assets is broken. Clips referenced by the Scene or loaded in background will be tracked under AudioManager instead of the respective clips. This is the temporary trade-off to fix a crash from a threading race condition. A fix is in progress. - [817337] Editor: The GPU profiler does not associate draw calls with objects, and displays "N/A" for most of them. A fix is being tested and is due in patch release. - [802273] Editor: GPU Profiler does not work with 'Graphics Jobs' enabled. A fix is in progress. - [804333] Editor: Reverting prefab and pressing undo can result in a crash. - [808187] Editor: Undoing hierarchy leads to m_TransformData.hierarchy == NULL assert followed by a crash. - [813805] Graphics: GPU skinning crashes some DX11 drivers when shaders with tessellation were rendering just before it. A fix is being tested and is due in a patch release. - [809364] UI: 9-Sliced images whose sprite has mipmapping can blur if size is too small. Disable mipmapping as workaround. - [817835] VR: Changing the renderViewport scale results in render texture not scaling correctly on HMD when using single-pass stereo. - [817945] VR: In some circumstances player crashes upon headset removal when using single-pass stereo and deferred rendering. - [817943] VR: Player crashes on exit when using single-pass stereo and deferred rendering. - [811571] VR: Single-pass rendering: The following Standard Asset Image Effects do not currently work reliably: CameraMotionBlur, ScreenSpaceAmbientObscurance. A fix is being developed. To be addressed separately from 5.4.x - [688985] GI: Copying a Scene loses the baked lighting. Workaround is to manually use "Build Lighting.". This will not be addressed in 5.4. - Graphics: In deferred rendering, lightmapped objects affected by mixed-mode lights fallback to forward rendering. - [762371] Scene Management: SceneManager.UnloadScene hangs if called from a physics trigger. Workaround is to defer unload. This will not be addressed in 5.4. - [779516] UI: Editing Text with Best Fit enabled causes artifacts to appear. This will not be addressed in 5.4. - VR: In-development cinematic Image Effects (Bitbucket project) do not currently work with single-pass rendering. This will be addressed independently of the 5.4 release schedule. - [799533] VR: SteamVR does yet not support DX12. This will be coordinated between SteamVR's schedule and Unity's, but it is currently unclear which cycle this will be part of. - [807031] VR: SteamVR Unity Native: When you have two "eye" Cameras under head, Unity crashes. Working with Valve to address in the plugin. This will be addressed on SteamVR's schedule. - [776787] Windows Store: Unity APIs which take multi-dimensional arrays as parameters (e.g. TerrainData.SetHeights) do not work on UWP in configurations (e.g. Master config) in which .NET Native compilation is enabled. The bug has been logged with Microsoft. Revision: a6d8d714de6f Unity 5.4
https://unity3d.com/cn/unity/whats-new/unity-5.4.0
CC-MAIN-2018-47
refinedweb
12,920
50.43
celClientEventData Class Reference The data about a client event. More... #include <physicallayer/nettypes.h> Detailed Description The data about a client event. Definition at line 337 of file nettypes.h. Member Data Documentation The persistent data of the event. Definition at line 353 of file nettypes.h. The time at which the event occured. Definition at line 348 of file nettypes.h. The type of the event. Definition at line 343 of file nettypes.h. True if we need to be sure that the message has been received by the server. Definition at line 358 of file nettypes.h. The documentation for this class was generated from the following file: - physicallayer/nettypes.h Generated for CEL: Crystal Entity Layer 2.0 by doxygen 1.6.1
http://crystalspace3d.org/cel/docs/online/api-2.0/classcelClientEventData.html
CC-MAIN-2015-35
refinedweb
126
54.79
So far in this series we've looked in detail at StringBuilder, and how it works under-the-hood. In this post I look at a different type, the internal StringBuilderCache type. This type is used internally in .NET Core and .NET Framework to reduce the cost of creating a StringBuilder. In this post I describe why it's useful, run a small benchmark to see its impact, and walk through the code to show how it works. Reducing allocations to improve performance In the first post in this series, I discussed how .NET has focused on performance recently, with a particular focus on reducing allocations. This isn't a new problem for .NET, so in .NET 1.1 the StringBuilder class was introduced. This lets you efficiently concatenate strings, characters, and ToString()ed objects without creating a lot of intermediate strings. However, StringBuilder itself is a class that is allocated on the heap. As we've seen throughout this series, internally, the StringBuilder uses a char[] and a linked list of StringBuilders to store the intermediate values. All of these are allocated on the heap. In cases where you're doing a lot of string concatenation, the instances of the StringBuilder class (including the internal linked values) and the internal char[] buffer can put some pressure on the GC. That's where StringBuilderCache comes in. Using StringBuilderCache to reduce StringBuilder allocations StringBuilderCache is an internal class that has been present in .NET Framework and .NET Core for a looong time (I couldn't figure out exactly when, but it's since at least 2014, so .NET 4.5-ish). Being internal it's not directly usable by user code, but it's used by various classes in the heart of .NET. The observation behind StringBuilderCache is that most cases where we need to build up a string, the size of the string will be relatively small. For example when formatting dates and times, you expect the final string to be relatively small. There are many other examples of cases like this, where you know the final string is going to be relatively small, but that you know the function will be called relatively frequently. StringBuilderCache works (perhaps unsurprisingly) by caching a StringBuilder instance, and "loaning" it out whenever a StringBuilder is required. Calling code can request a StringBuilder instance and return it to the cache when it's finished with it. That means only a single instance of StringBuilder needs to be created by the app, as it can keep being re-used, reducing GC pressure on the app. If your first thought is "that doesn't sound thread-safe", don't worry. As you'll see later, there's a single StringBuilderper thread, so that isn't a problem. Let's take this toy sample which concatenates a user's name using the StringBuilderCache. var user = new User { FirstName = "Andrew", LastName = "Lock", Nickname = "Sock", }; int requiredCapacity = user.FirstName.Length + user.LastName.Length + user.Nickname.Length + 3; // Fetch a StringBuilder of the required capacity. Instead of // var sb = new StringBuilder(requiredCapacity); StringBuilder sb = StringBuilderCache.Acquire(requiredCapacity); sb.Append(user.FirstName); sb.Append(user.LastName); sb.Append(" ("); sb.Append(user.Nickname); sb.Append(')'); // return the StringBuilder to the cache and retrieve the string. Instead of // string fullName = sb.ToString(); string fullName = StringBuilderCache.GetStringAndRelease(sb); As you can see, using StringBuilderCache is pretty simple, and mostly analogous to using a StringBuilder directly. The question is, does it improve performance? Benchmarking StringBuilderCache To see the impact of using StringBuilderCache over StringBuilder directly for a simple snippet like the above, I turned to BenchmarkDotNet. I copied the .NET 5 implementation of StringBuilderCache into my project (we'll look at the implementation shortly), and created the following simple benchmark, directly analogous to the above example: [MemoryDiagnoser] public class StringBuilderBenchmark { private const string FirstName = "Andrew"; private const string LastName = "Lock"; private const string Nickname = "Sock"; [Benchmark(Baseline = true)] public string UsingStringBuilder() { var sb = new StringBuilder(); sb.Append(FirstName); sb.Append(LastName); sb.Append(" ("); sb.Append(Nickname); sb.Append(')'); return sb.ToString(); } [Benchmark] public string UsingStringBuilderCache() { var sb = StringBuilderCache.Acquire(); sb.Append(FirstName); sb.Append(LastName); sb.Append(" ("); sb.Append(Nickname); sb.Append(')'); return StringBuilderCache.GetStringAndRelease(sb); } } The results, running on my relatively old home laptop are as follows: BenchmarkDotNet=v0.13.0, OS=Windows 10.0.19042.1052 (20H2/October2020Update) Intel Core i7-7500U CPU 2.70GHz (Kaby Lake), 1 CPU, 4 logical and 2 physical cores .NET SDK=5.0.104 [Host] : .NET 5.0.7 (5.0.721.25508), X64 RyuJIT DefaultJob : .NET 5.0.7 (5.0.721.25508), X64 RyuJIT As you can see, using the StringBuilderCache gives a relative speed boost of 30% and allocates a fraction as much (56 vs 264 bytes). Obviously, these are small speedups, but on a hot path, these sorts of micro-optimisations can be worthwhile. We've looked at the benefit StringBuilderCache can bring. The next question is: how does it do it? Looking at the implementation of StringBuilderCache You can find the latest implementation of StringBuilderCache for .NET on GitHub, which is the implementation I show below. I'll give the whole implementation, and then discuss it below. This version uses nullable reference types. You can also find an implementation for .NET Framework on. namespace System.Text { /// <summary>Provide a cached reusable instance of stringbuilder per thread.</summary> internal static class StringBuilderCache { // The value 360 was chosen in discussion with performance experts as a compromise between using // as little memory per thread as possible and still covering a large part of short-lived // StringBuilder creations on the startup path of VS designers. internal const int MaxBuilderSize = 360; private const int DefaultCapacity = 16; // == StringBuilder.DefaultCapacity [ThreadStatic] private static StringBuilder? t_cachedInstance; /// <summary>Get a StringBuilder for the specified capacity.</summary> /// <remarks>If a StringBuilder of an appropriate size is cached, it will be returned and the cache emptied.</remarks>); } /// <summary>Place the specified builder in the cache if it is not too big.</summary> public static void Release(StringBuilder sb) { if (sb.Capacity <= MaxBuilderSize) { t_cachedInstance = sb; } } /// <summary>ToString() the stringbuilder, Release it to the cache, and return the resulting string.</summary> public static string GetStringAndRelease(StringBuilder sb) { string result = sb.ToString(); Release(sb); return result; } } } The code is helpfully heavily commented, but lets walk through the code anyway. I'm actually going to start at the end first, and look at the GetStringAndRelease and Release messages first. internal const int MaxBuilderSize = 360; [ThreadStatic] private static StringBuilder? t_cachedInstance; public static string GetStringAndRelease(StringBuilder sb) { string result = sb.ToString(); Release(sb); return result; } public static void Release(StringBuilder sb) { if (sb.Capacity <= MaxBuilderSize) { t_cachedInstance = sb; } } The GetStringAndRelease() method is very simple, it just calls ToString() on the provided StringBuilder, calls Release() on the builder, and then returns the string. The Release method is where the "caching" happens. The method checks to see if the provided StringBuilder's current capacity is less than the MaxBuilderSize constant ( 360), and if it is, it stores the StringBuilder in the ThreadStatic t_cachedInstance. As mentioned in the code comments, the value of 360 is chosen to be large enough to be useful, but not too large that a lot of memory is used per thread. If this check wasn't here, and you released a StringBuilderwith a large capacity, then you'd forever be using up that memory without releasing it, essentially causing a memory leak. Marking the t_cachedInstance as [ThreadStatic] means that each separate thread in your application will see a different StringBuilder instance in t_cachedInstance. This avoids any chance of concurrency issues due to multiple threads accessing the field. That covers the release part of the cache, lets look at the acquire part now: internal const int MaxBuilderSize = 360; private const int DefaultCapacity = 16; // == StringBuilder.DefaultCapacity [ThreadStatic] private static StringBuilder? t_cachedInstance;); } When you call Acquire, you request a capacity for the StringBuilder. If the capacity is bigger than the cache's maximum capacity, then we bypass the cached value entirely, and just return a new StringBuilder. Similarly, if we haven't cached a StringBuilder yet, you just get a new one. For these cases, the StringBuilderCache doesn't add any value. We also check whether the capacity requested is less than the cached StringBuilder's capacity. As mentioned in the comment, if we return a StringBuilder with a capacity that's smaller than the requested capacity, we can be pretty much certain we're going to have to grow the StringBuilder. That's fine, but it has a performance impact, so it's better in these cases to just return a new StringBuilder. If you're in the sweet-spot—requesting a capacity less than MaxBuilderSize and less than the cached StringBuilder.Capacity—then you can reuse the cached instance. The cached instance is cleared (so if you call Acquire again before Release then you don't re-use the builder), and the StringBuilder is "reset" by calling Clear(). You can then use the StringBuilder as normal, finally calling GetStringAndRelease() to retrieve your built value, and to (potentially) add the builder to the cache. That's all there is to it, a simple, single-value cache for StringBuilders. In the worse case it's no worse than using new StringBuilder(), and in the best case you can avoid a few allocations. Using StringBuilderCache in your own projects The only downside to StringBuilderCache is that you can't easily use it in your own projects! StringBuilderCache is internal, so there's no way to use it directly outside the core .NET libraries. Luckily, the code is simple enough (and the license permissive-enough) that you can generally copy-paste the implementation into your own code. As an example, we use a similar implementation in the Datadog .NET Tracer library. Another possibility, if you're trying to reduce the impact of StringBuilders on a hot-patg, it to look at another internal type, ValueStringBuilder. I'll look at this type in another post. Summary In this post I discussed the need to reduce allocations for performance reasons, and the role of StringBuilder in helping with that. However, the StringBuilder class itself must be allocated. StringBuilderCache provides a way to reduce the impact of allocating a StringBuilder by reusing a single StringBuilder instance per thread. I showed in a micro-benchmark that this can reduce allocation and improve performance. I then walked through the code to show how it was achieved.
https://andrewlock.net/a-deep-dive-on-stringbuilder-part-5-reducing-allocations-by-caching-stringbuilders-with-stringbuildercache/
CC-MAIN-2022-05
refinedweb
1,738
56.55
? A wait can be “woken up” by another thread calling notify on the monitor which is being waited on whereas a sleep cannot. Also a wait (and notify) must happen in a block synchronized on the monitor object whereas sleep does not: Object mon = ...; synchronized (mon) { mon.wait(); } At this point the currently executing thread waits and releases the monitor. Another thread may do synchronized (mon) { mon.notify(); } (On the same mon object) and the first thread (assuming it is the only thread waiting on the monitor) will wake up. You can also call notifyAll if more than one thread is waiting on the monitor – this will wake all of them up. However, only one of the threads will be able to grab the monitor (remember that the wait is in a synchronized block) and carry on – the others will then be blocked until they can acquire the monitor’s lock. Another point is that you call wait on Object itself (i.e. you wait on an object’s monitor) whereas you call sleep on Thread. Yet another point is that you can get spurious wakeups from wait (i.e. the thread which is waiting resumes for no apparent reason). You should always wait whilst spinning on some condition as follows: synchronized { while (!condition) { mon.wait(); } } One key difference not yet mentioned is that while sleeping a Thread does not release the locks it holds, while waiting releases the lock on the object that wait() is called on. synchronized(LOCK) { Thread.sleep(1000); // LOCK is held } synchronized(LOCK) { LOCK.wait(); // LOCK is not held } I found this link helpful (which references this post). It puts the difference between sleep(), wait(), and yield() in human terms. (in case the links ever go dead I’ve included the post below with additional markup). There are a lot of answers here but I couldn’t find the semantic distinction mentioned on any. It’s not about the thread itself; both methods are required as they support very different use-cases. sleep() sends the Thread to sleep as it was before, it just packs the context and stops executing for a predefined time. So in order to wake it up before the due time, you need to know the Thread reference. This is not a common situation in a multi-threaded environment. It’s mostly used for time-synchronization (e.g. wake in exactly 3.5 seconds) and/or hard-coded fairness (just sleep for a while and let others threads work). wait(), on the contrary, is a thread (or message) synchronization mechanism that allows you to notify a Thread of which you have no stored reference (nor care). You can think of it as a publish-subscribe pattern ( wait == subscribe and notify() == publish). Basically using notify() you are sending a message (that might even not be received at all and normally you don’t care). To sum up, you normally use sleep() for time-syncronization and wait() for multi-thread-synchronization. They could be implemented in the same manner in the underlying OS, or not at all (as previous versions of Java had no real multithreading; probably some small VMs doesn’t do that either). Don’t forget Java runs on a VM, so your code will be transformed in something different according to the VM/OS/HW it runs on. There are some difference key notes i conclude after working on wait and sleep, first take a look on sample using wait() and sleep(): Example1: using wait() and sleep(): synchronized(HandObject) { while(isHandFree() == false) { /* Hand is still busy on happy coding or something else, please wait */ HandObject.wait(); } } /* Get lock ^^, It is my turn, take a cup beer now */ while (beerIsAvailable() == false) { /* Beer is still coming, not available, Hand still hold glass to get beer, don't release hand to perform other task */ Thread.sleep(5000); } /* Enjoy my beer now ^^ */ drinkBeers(); /* I have drink enough, now hand can continue with other task: continue coding */ setHandFreeState(true); synchronized(HandObject) { HandObject.notifyAll(); } Let clarity some key notes: - Call on: - wait(): Call on current thread that hold HandObject Object - sleep(): Call on Thread execute task get beer (is class method so affect on current running thread) - Synchronized: - wait(): when synchronized multi thread access same Object (HandObject) (When need communication between more than one thread (thread execute coding, thread execute get beer) access on same object HandObject ) - sleep(): when waiting condition to continue execute (Waiting beer available) - Hold lock: - wait(): release the lock for other object have chance to execute (HandObject is free, you can do other job) - sleep(): keep lock for at least t times (or until interrupt) (My job still not finish, i’m continue hold lock and waiting some condition to continue) - Wake-up condition: - wait(): until call notify(), notifyAll() from object - sleep(): until at least time expire or call interrupt - And the last point is use when as estani indicate: you normally use sleep() for time-syncronization and wait() for multi-thread-synchronization. Please correct me if i’m wrong. Here are few important differences between wait() and sleep() methods. wait() wait()method releases the lock. wait()is the method of Objectclass. wait()is the non-static method – public final void wait() throws InterruptedException { //...} wait()should be notified by notify()or notifyAll()methods. wait()method needs to be called from a loop in order to deal with false alarm. wait()method must be called from synchronized context (i.e. synchronized method or block), otherwise it will throw IllegalMonitorStateException sleep() sleep()method doesn’t release the lock. sleep()is the method of java.lang.Threadclass. sleep()is the static method – public static void sleep(long millis, int nanos) throws InterruptedException { //... } - after the specified amount of time, sleep()is completed. sleep()better not to call from loop(i.e. see code below). sleep()may be called from anywhere. there is no specific requirement. Ref: Difference between Wait and Sleep Code snippet for calling wait and sleep method synchronized(monitor){ while(condition == true){ monitor.wait() //releases monitor lock } Thread.sleep(100); //puts current thread on Sleep } Difference between wait() and sleep() The fundamental difference is wait()is from Objectand sleep()is static method of Thread. The major difference is that wait()releases the lock while sleep()doesn’t releas any lock while waiting. The wait()is used for inter-thread communication while sleep()is used to introduce pause on execution, generally. The wait()should call from inside synchronise or else we get IllegalMonitorStateExceptionwhile sleep()can call anywhere. - To start thread again from wait(), you have to call notify()or notifyAll(). While in sleep(),thread gets start after specified ms/sec interval. Similarities which helps understand - Both makes the current thread goes into the Not Runnable state. - Both are nativemethods. This is a very simple question, because both these methods have a totally different use. The major difference is to wait to release the lock or monitor while sleep doesn’t release any lock or monitor while waiting. Wait is used for inter-thread communication while sleep is used to introduce pause on execution. This was just a clear and basic explanation, if you want more than that then continue reading. In case of wait() method thread goes in waiting state and it won’t come back automatically until we call the notify() method (or notifyAll() if you have more then one thread in waiting state and you want to wake all of those thread). And you need synchronized or object lock or class lock to access the wait() or notify() or notifyAll() methods. And one more thing, the wait() method is used for inter-thread communication because if a thread goes in waiting state you’ll need another thread to wake that thread. But in case of sleep() this is a method which is used to hold the process for few seconds or the time you wanted. Because you don’t need to provoke any notify() or notifyAll() method to get that thread back. Or you don’t need any other thread to call back that thread. Like if you want something should happen after few seconds like in a game after user’s turn you want the user to wait until the computer plays then you can mention the sleep() method. And one more important difference which is asked often in interviews: sleep() belongs to Thread class and wait() belongs to Object class. These are all the differences between sleep() and wait(). And there is a similarity between both methods: they both are checked statement so you need try catch or throws to access these methods. I hope this will help you. source :. Wait and sleep are two different things: - In sleep()the thread stops working for the specified duration. - In wait()the thread stops working until the object being waited-on is notified, generally by other threads. sleep is a method of Thread, wait is a method of Object, so wait/notify is a technique of synchronizing shared data in Java (using monitor), but sleep is a simple method of thread to pause itself. sleep(). wait and sleep methods are very different: sleephas no way of “waking-up”, - whereas waithas a way of “waking-up” during the wait period, by another thread calling notifyor notifyAll. Come to think about it, the names are confusing in that respect; however sleep is a standard name and wait is like the WaitForSingleObject or WaitForMultipleObjects in the Win API. In simple words, wait is wait Until some other thread invokes you whereas sleep is “dont execute next statement” for some specified period of time. Moreover sleep is static method in Thread class and it operates on thread, whereas wait() is in Object class and called on an object. Another point, when you call wait on some object, the thread involved synchronize the object and then waits. 🙂 From this post : wait() Method. 1) The thread which calls wait() method releases the lock it holds. 2) The thread regains the lock after other threads call either notify() or notifyAll() methods on the same lock. 3) wait() method must be called within the synchronized block. 4) wait() method is always called on objects. 5) Waiting threads can be woken up by other threads by calling notify() or notifyAll() methods. 6) To call wait() method, thread must have object lock. sleep() Method 1) The thread which calls sleep() method doesn’t release the lock it holds. 2) sleep() method can be called within or outside the synchronized block. 3) sleep() method is always called on threads. 4) Sleeping threads can not be woken up by other threads. If done so, thread will throw InterruptedException. 5) To call sleep() method, thread need not to have object lock. sleep - It causes current executing thread to sleep for specific amount of time. - Its accuracy depends on system timers and schedulers. - It keeps the monitors. wait()is a method of Objectclass. sleep()is a method of Threadclass. sleep()allows the thread to go to sleepstate for x milliseconds. When a thread goes into sleep state it doesn’t release the lock. wait()allows thread to release the lock and goes to suspended state. This thread will be active when a notify()or notifAll()method is called for the same object. One potential big difference between sleep/interrupt and wait/notify is that - calling interrupt()during sleep()always throws an exception (e.g. InterruptedException), whereas - calling notify()during wait()does not. Generating an exception when not needed is inefficient. If you have threads communicating with each other at a high rate, then it would be generating a lot of exceptions if you were calling interrupt all the time, which is a total waste of CPU. You are correct – Sleep() causes that thread to “sleep” and the CPU will go off and process other threads (otherwise known as context switching) wheras I believe Wait keeps the CPU processing the current thread. We have both because although it may seem sensible to let other people use the CPU while you’re not using it, actualy there is an overhead to context switching – depending on how long the sleep is for, it can be more expensive in CPU cycles to switch threads than it is to simply have your thread doing nothing for a few ms. Also note that sleep forces a context switch. Also – in general it’s not possible to control context switching – during the Wait the OS may (and will for longer waits) choose to process other threads. The methods are used for different things. Thread.sleep(5000); // Wait until the time has passed. Object.wait(); // Wait until some other thread tells me to wake up. Thread.sleep(n) can be interrupted, but Object.wait() must be notified. It’s possible to specify the maximum time to wait: Object.wait(5000) so it would be possible to use wait to, er, sleep but then you have to bother with locks. Neither of the methods uses the cpu while sleeping/waiting. The methods are implemented using native code, using similar constructs but not in the same way. Look for yourself: Is the source code of native methods available? The file /src/share/vm/prims/jvm.cpp is the starting point… Here wait() will be in the waiting state till it notify by another Thread but where as sleep() will be having some time..after that it will automatically transfer to the Ready state… Wait() and sleep() Differences? Thread.sleep() Once its work completed then only its release the lock to everyone. until its never release the lock to anyone. Sleep() take the key, its never release the key to anyone, when its work completed then only its release then only take the key waiting stage threads. Object.wait() When its going to waiting stage, its will be release the key and its waiting for some of the seconds based on the parameter. For Example: you are take the coffee in yours right hand, you can take another anyone of the same hand, when will your put down then only take another object same type here. also. this is sleep() you sleep time you didn’t any work, you are doing only sleeping.. same here also. wait(). when you are put down and take another one mean while you are waiting , that’s wait you are play movie or anything in yours system same as player you can’t play more than one at a time right, thats its here, when you close and choose another anyone movie or song mean while is called wait In my opinion, the main difference between both mechanisms is that sleep/interrupt is the most basic way of handling threads, whereas wait/notify is an abstraction aimed to do thread inter-communication easier. This means that sleep/interrupt can do anything, but that this specific task is harder to do. Why is wait/notify more suitable? Here are some personal considerations: It enforces centralization. It allows to coordinate the communication between a group of threads with a single shared object. This simplifies the work a lot. It enforces synchronization. Because it makes the programmer wrap the call to wait/notify in a synchronized block. It’s independent of the thread origin and number. With this approach you can add more threads arbitrarily without editing the other threads or keeping a track of the existing ones. If you used sleep/interrupt, first you would need to keep the references to the sleeping threads, and then interrupt them one by one, by hand. An example from the real life that is good to explain this is a classic restaurant and the method that the personnel use to communicate among them: The waiters leave the customer requests in a central place (a cork board, a table, etc.), ring a bell, and the workers from the kitchen come to take such requests. Once that there is any course ready, the kitchen personnel ring the bell again so that the waiters are aware and take them to the customers. Example about sleep doesn’t release lock and wait does Here there are two classes : - Main : Contains main method and two threads. Singleton : This is singleton class with two static methods getInstance() and getInstance(boolean isWait). public class Main { private static Singleton singletonA = null; private static Singleton singletonB = null; public static void main(String[] args) throws InterruptedException { Thread threadA = new Thread() { @Override public void run() { singletonA = Singleton.getInstance(true); } }; Thread threadB = new Thread() { @Override public void run() { singletonB = Singleton.getInstance(); while (singletonA == null) { System.out.println("SingletonA still null"); } if (singletonA == singletonB) { System.out.println("Both singleton are same"); } else { System.out.println("Both singleton are not same"); } } }; threadA.start(); threadB.start(); } } and public class Singleton { private static Singleton _instance; public static Singleton getInstance() { if (_instance == null) { synchronized (Singleton.class) { if (_instance == null) _instance = new Singleton(); } } return _instance; } public static Singleton getInstance(boolean isWait) { if (_instance == null) { synchronized (Singleton.class) { if (_instance == null) { if (isWait) { try { // Singleton.class.wait(500);//Using wait Thread.sleep(500);// Using Sleep System.out.println("_instance :" + String.valueOf(_instance)); } catch (InterruptedException e) { e.printStackTrace(); } } _instance = new Singleton(); } } } return _instance; } } Now run this example you will get below output : _instance :null Both singleton are same Here Singleton instances created by threadA and threadB are same. It means threadB is waiting outside until threadA release it’s lock. Now change the Singleton.java by commenting Thread.sleep(500); method and uncommenting Singleton.class.wait(500); . Here because of Singleton.class.wait(500); method threadA will release all acquire locks and moves into the “Non Runnable” state, threadB will get change to enter in synchronized block. Now run again : SingletonA still null SingletonA still null SingletonA still null _instance :[email protected] SingletonA still null SingletonA still null SingletonA still null Both singleton are not same Here Singleton instances created by threadA and threadB are NOT same because of threadB got change to enter in synchronised block and after 500 milliseconds threadA started from it’s last position and created one more Singleton object. Should be called from synchronized block : wait() method is always called from synchronized block i.e. wait() method needs to lock object monitor before object on which it is called. But sleep() method can be called from outside synchronized block i.e. sleep() method doesn’t need any object monitor. IllegalMonitorStateException : if wait() method is called without acquiring object lock than IllegalMonitorStateException is thrown at runtime, but sleep() method never throws such exception. Belongs to which class : wait() method belongs to java.lang.Object class but sleep() method belongs to java.lang.Thread class. Called on object or thread : wait() method is called on objects but sleep() method is called on Threads not objects. Thread state : when wait() method is called on object, thread that holded object’s monitor goes from running to waiting state and can return to runnable state only when notify() or notifyAll() method is called on that object. And later thread scheduler schedules that thread to go from from runnable to running state. when sleep() is called on thread it goes from running to waiting state and can return to runnable state when sleep time is up. When called from synchronized block : when wait() method is called thread leaves the object lock. But sleep() method when called from synchronized block or method thread doesn’t leaves object lock. From oracle documentation page on wait() method of Object: - interrupts and spurious wakeups are possible - This method should only be called by a thread that is the owner of this object’s monitor This method throws IllegalMonitorStateException– if the current thread is not the owner of the object’s monitor. InterruptedException– if any thread interrupted the current thread before or while the current thread was waiting for a notification. The interrupted status of the current thread is cleared when this exception is thrown. From oracle documentation page on sleep() method of Thread class: public static void sleep(long millis) - Causes the currently executing thread to sleep (temporarily cease execution) for the specified number of milliseconds, subject to the precision and accuracy of system timers and schedulers. - The thread does not lose ownership of any monitors. This method throws: IllegalArgumentException– if the value of millis is negative InterruptedException– if any thread has interrupted the current thread. The interrupted status of the current thread is cleared when this exception is thrown. Other key difference: wait() is a non-static method (instance method) unlike static method sleep() (class method). wait releases the lock and sleep doesn’t. A thread in waiting state is eligible for waking up as soon as notify or notifyAll is called. But in case of sleep the thread keeps the lock and it’ll only be eligible once the sleep time is over. Lets assume you are hearing songs. As long as the current song is running, the next song wont play, i.e Sleep() called by next song If you finish the song it will stop and until you select play button(notify()) it wont play, i.e wait() called by current song. In this both cases songs going to Wait states. wait() is given inside a synchronized method whereas sleep() is given inside a non-synchronized method because wait() method release the lock on the object but sleep() or yield() does release the lock(). sleep() method causes the current thread to move from running state to block state for a specified time. If the current thread has the lock of any object then it keeps holding it, which means that other threads cannot execute any synchronized method in that class object. wait() method causes the current thread to go into block state either for a specified time or until notify, but in this case the thread releases the lock of the object (which means that other threads can execute any synchronized methods of the calling object.
https://exceptionshub.com/difference-between-wait-and-sleep.html
CC-MAIN-2022-05
refinedweb
3,648
62.78
Investors in Nielsen Holdings PLC (Symbol: NLSN) saw new options begin trading this week, for the October 16th expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the NLSN options chain for the new Oct NLSN, that could represent an attractive alternative to paying $15.55/share today. Because the $14.03% annualized — at Stock Options Channel we call this the YieldBoost. Below is a chart showing the trailing twelve month trading history for Nielsen Holdings PLC, and highlighting in green where the $14.00 strike is located relative to that history: Turning to the calls side of the option chain, the call contract at the $16.00 strike price has a current bid of 70 cents. If an investor was to purchase shares of NLSN stock at the current price level of $15.55/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $16.00. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 7.40% if the stock gets called away at the October 16th expiration (before broker commissions). Of course, a lot of upside could potentially be left on the table if NLSN shares really soar, which is why looking at the trailing twelve month trading history for Nielsen Holdings PLC, as well as studying the business fundamentals becomes important. Below is a chart showing NLSN's trailing twelve month trading history, with the $16.00 strike highlighted in red: Considering the fact that the 31.60% annualized, which we refer to as the YieldBoost. The implied volatility in the put contract example is 63%, while the implied volatility in the call contract example is 65%. Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 252 trading day closing values as well as today's price of $15.55) to be 54%. For more put and call options contract ideas worth looking at, visit StockOptionsChannel.com. The views and opinions expressed herein are the views and opinions of the author and do not necessarily reflect those of Nasdaq, Inc.
https://www.nasdaq.com/articles/first-week-of-october-16th-options-trading-for-nielsen-holdings-nlsn-2020-08-25
CC-MAIN-2022-05
refinedweb
362
62.38
assigned to a bucket, which is assigned a name and can be addressed by{bucket}/{key}. Each file is assigned a unique key, which can be used later on to retrieve the file. There are plenty of other options to assign to buckets and files (encryption, ACLs, etc.), but we won't get in to it much here. Just notice the references to 'public-read', which allows the file to be downloaded by anyone. The Code The code below shows, in Python using boto, how to upload a file to S3. import os import boto from boto.s3.key import Key def upload_to_s3(aws_access_key_id, aws_secret_access_key, file, bucket, key, callback=None, md5=None, reduced_redundancy=False, content_type=None): """ Uploads the given file to the AWS S3 bucket and key specified. callback is a function of the form: def callback(complete, total) The callback should accept two integer parameters, the first representing the number of bytes that have been successfully transmitted to S3 and the second representing the size of the to be transmitted object. Returns boolean indicating success/failure of upload. """ try: size = os.fstat(file.fileno()).st_size except: # Not all file objects implement fileno(), # so we fall back on this file.seek(0, os.SEEK_END) size = file.tell() conn = boto.connect_s3(aws_access_key_id, aws_secret_access_key) bucket = conn.get_bucket(bucket, validate=True) k = Key(bucket) k.key = key if content_type: k.set_metadata('Content-Type', content_type) sent = k.set_contents_from_file(file, cb=callback, md5=md5, reduced_redundancy=reduced_redundancy, rewind=True) # Rewind for later use file.seek(0) if sent == size: return True return False Using the Code And here is how you'd use the code: AWS_ACCESS_KEY = 'your_access_key' AWS_ACCESS_SECRET_KEY = 'your_secret_key' file = open('someFile.txt', 'r+') key = file.name bucket = 'your-bucket' if upload_to_s3(AWS_ACCESS_KEY, AWS_ACCESS_SECRET_KEY, file, bucket, key): print 'It worked!' else: print 'The upload failed...' boto works with much more than just S3, you can also access EC2, SES, SQS, and just about every other AWS service. The boto docs are great, so reading them should give you a good idea as to how to use the other services. But if not, we'll be posting more boto examples, like how to retrieve the files from S3.
http://stackabuse.com/example-upload-a-file-to-aws-s3/
CC-MAIN-2018-26
refinedweb
361
59.19
Investors in Viavi Solutions Inc (Symbol: VIAV) saw new options become available this week, for the November 15th expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the VIAV options chain for the new November 15th contracts and identified one put and one call contract of particular interest. The put contract at the $14.00 strike price has a current bid of 51 cents. If an investor was to sell-to-open that put contract, they are committing to purchase the stock at $14.00, but will also collect the premium, putting the cost basis of the shares at $13.49 (before broker commissions). To an investor already interested in purchasing shares of VIAV, that could represent an attractive alternative to paying $14.64.64% return on the cash commitment, or 25.55% annualized — at Stock Options Channel we call this the YieldBoost. Below is a chart showing the trailing twelve month trading history for Viavi Solutions Inc, and highlighting in green where the $14.00 strike is located relative to that history: Turning to the calls side of the option chain, the call contract at the $15.00 strike price has a current bid of 67 cents. If an investor was to purchase shares of VIAV stock at the current price level of $14.64/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $15.00. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 7.04% if the stock gets called away at the November 15th expiration (before broker commissions). Of course, a lot of upside could potentially be left on the table if VIAV shares really soar, which is why looking at the trailing twelve month trading history for Viavi Solutions Inc, as well as studying the business fundamentals becomes important. Below is a chart showing VIAV's trailing twelve month trading history, with the $15.00 strike highlighted in red: Considering the fact that the $15.58% boost of extra return to the investor, or 32.10% annualized, which we refer to as the YieldBoost. The implied volatility in the put contract example is 40%, while the implied volatility in the call contract example is 39%. Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 251 trading day closing values as well as today's price of $14.64) to be 32%..
https://www.nasdaq.com/articles/first-week-of-viav-november-15th-options-trading-2019-09-24
CC-MAIN-2019-47
refinedweb
416
63.9
How do I keep my script running? for more information. 2.4.() 2.9.) from __future__ import division # required for python.() 2.14...() 2.17. Full color LED¶ Making colours with an RGBLED: from gpiozero import RGBLED from time import sleep from __future__ import division # required for python 2) 2.18. Motion sensor¶ Light an LED when a MotionSensor detects motion: from gpiozero import MotionSensor, LED from signal import pause pir = MotionSensor(4) led = LED(16) pir.when_motion = led.on pir.when_no_motion = led.off pause() 2.19..() 2.26. PRESSED curses.halfdelay(3) action = actions.get(key) if action is not None: action() next_key = key while next_key == key: next_key = window.getch() # KEY RELEASED. gpiozero.tools import zip_values from signal import pause robot = Robot(left=(4, 14), right=(17, 18)) pir = MotionSensor(5) robot.source = zip_values(pir, pir) pause() 2.28. pause() 2.29.) 2.30.:
https://gpiozero.readthedocs.io/en/stable/recipes.html?highlight=distance%20sensor
CC-MAIN-2021-25
refinedweb
145
52.26
Why widgets? One of the best things about Umbraco is that there are so many ways to build websites, and one of the worst things about Umbraco… is that there are so many ways to build websites. It can be overwhelming to learn the seemingly endless approaches. For that reason, I will just be covering one technique that I think the vast majority of Umbraco developers will find particularly useful. That is, building Umbraco websites with widgets using Archetype (a tool to create complex content structures) and Ditto (a tool to convert complex content structures in C# classes). Here are some reasons why you should be building your Umbraco websites with widgets: - New Pages Quickly. Creating new pages becomes trivially easy (just compose the widgets). - Flexible Pages. Content editors can experiment by adding or subtracting content (just add or remove the widgets). - Isolate Styles. Styling becomes very easy to manage (just style the widget and gaps between widgets). - Avoid Redundancy. You no longer need to worry about two developers rebuilding the same component on two different pages (you are building components, not pages). What are widgets? In short, widgets are the individual features you see on any given page within a website. A slideshow is a widget, a text block is a widget, an image gallery is a widget, a contact form is a widget, and so on. People use different terminology for this concept (e.g., components, features, macros, etc.), and I use the term widget. I recommend against using the term "macro" in particular, as it would be easy to confuse with the Umbraco feature called macros (i.e., rich text macros and grid macros). Some Example Widgets Before I get too deep into the details, here are some examples of widgets. Slideshow Widget Callout Widget FAQ Card Widget Those should give you some sense of what a widget is. This article is about building those types of widgets with Archetype. Archetype Fieldsets as Widgets An example Archetype with 3 fieldsets (aka, widgets). Archetype is a natural choice to build widgets. It offers a few key features that make this possible: - Lists of heterogeneous content. - Ability to nest content within content. - Adding, removing, disabling, and reordering content. - Entire interface is inline on the content node. The lists of heterogeneous content allow you to combine data from various elements (i.e., widgets). For example, this means you can add a slideshow followed by a text block followed by an image gallery. Archetype refers to these blobs of content as "fieldsets", and these fieldsets are what I use to create widgets. The ability to nest content within content allows for complex widgets (e.g., a slideshow). In the case of a slideshow, you'd have a slideshow widget, and within it would be a number of slides. This also allows for the possibility to create layouts with Archetype as well. For example, you could have a "main content with sidebar" layout, or a "two column" layout, or a default "full width" layout. Each of these layouts would be created as an Archetype fieldset, and each of them would have a property or properties that also contain Archetype fieldsets. The ability to add, remove, disable, and reorder content make creating and editing content a stress-free experience, as you can experiment along the way and adjust as you go. For example, you might want to add an image gallery to the page, but you may not be sure where it would look best. Archetype allows you to create the image gallery, then reorder it relative to the other widgets by dragging it. If you need to temporarily remove some of the content to make some edits, you can easily do so by disabling it and then later enabling it (i.e., you don't have to delete it, which would require you to enter the content again). Finally, the fact that the entire interface is inline makes editing the content an extremely quick process (as opposed to an older strategy of nested content nodes as a means of creating widgets). There is no wait to for things to load, as would be the case when navigating from page to page in the nested content node approach. Use Archetype to Control Layout In the past, some people have recommended that you find alternatives to Archetype when creating a layout. One reason was that layouts were hard to change after creating them with Archetype. Another reason was that the extra indentation caused by nested Archetypes made the editing experience clumsy on smaller screens. Luckily, there are solution to both of these problems now. To avoid the indentation that Archetype creates, you can inject your own styles into the back office to reduce the indentation (e.g., by positioning the property editors below the property labels rather than to the right of the labels). Archetype generates classes based on the names of your Archetype fieldsets and properties, which allows you to target those elements with CSS selectors easily. You can read about how to inject CSS into the back office here: In order to change a layout created with Archetype, you can use a new feature called cross-Archetype dragging. This allows you to drag Archetype fieldsets from one Archetype to another Archetype. The other Archetype can even be a nested Archetype (i.e., when an Archetype property exists inside of an Archetype fieldset). This means that if you create a "main content with sidebar" widget that has two properties ("main content" and "sidebar content"), you can drag that content outside of that widget and into another widget (e.g., into a "two column" widget). An example of this cross-Archetype dragging feature is below, and a description of it can be found here: Dragging Archetype fieldsets to other Archetypes (some nested, some not nested). Mapping Archetype Widget Data with Ditto With the major upgrade from Ditto 0.8 to Ditto 0.9, the Ditto Archetype Resolvers project was broken in a major way and it seemed that mapping Archetype content with Ditto would no longer be possible. However, with a few changes that were included in Ditto 0.10, it is now fully possible to map Archetype widgets with Ditto. For those unfamiliar with Ditto, it creates instances of C# classes based on your Umbraco content. This makes working with the structure of the content much easier (i.e., you get intellisense and your content structure is fully specified by your classes). Ditto does this with a concept called a "Ditto processor". A processor is essentially a C# attribute that you decorate your classes or properties with that tells Ditto how to convert content into your classes. That looks like this: // @{ // Use Ditto's "As" extension method to map the content to a C# class. var pageModel = Model.Content.As(); // Get the widgets from the page model. var widgets = pageModel.Widgets; } @foreach (var widget in widgets) { // Render widgets here. } // public class WidgetPage { // IWidget is just an empty interface. // Many widget classes implement the IWidget interface. // DittoMixedArchetype comes from here: [DittoMixedArchetype] public IEnumerable MainContent { get; set; } } Not All Data Resides in Widgets For a lot of widgets, it makes sense to store the data for that widget in the Archetype fieldset. For example, a slideshow or an image gallery would likely store the images and text within their respective Archetype fieldsets. However, you may want to consider storing more reusable data outside of the widgets. One example would be a banner widget. Imagine a banner that appears at the top of each page. This banner might include a breadcrumb of ancestor pages and header text. If you were to store the header text (essentially the title of the page) inside of the banner widget, any page wanting to display that header text would have to parse the Archetype widgets just to extract the header text. This would include pages like the HTML sitemap, which typically automatically generate a tree of all (or most) pages in a site. To make it easier on the HTML sitemap page, it'd probably be best to include a "Header" field on each page rather than forcing the HTML sitemap page to parse out the "Banner" widget on each page to extract the header field from there. Continuing on with the banner example, one might choose to implement the breadcrumb as a list of links on a property within the breadcrumb Archetype fieldset. However, that causes a couple problems. For one, it adds unnecessary extra work on the content editor (i.e., they have to manually enter every breadcrumb). Instead, the code for the banner widget can just generate the breadcrumb from the ancestor pages. Secondly, this manual approach is prone to causing stale data. That is, an ancestor page could be renamed, and every descendant page would then have an inaccurate breadcrumb. There are many reasons to store data outside of an Archetype fieldset widget. Rather than go over all of them, I'll leave it to the discretion of the reader. However, here are a few reasons to consider storing some of your data outside of widgets: - Galleries of items (e.g., an article gallery) can more easily extract data from pages rather than from widgets on pages. - It is more apparent to content editors that they should fill in a field than it is that they should add a particular widget. - It is easier to mark a field as mandatory than it is to mark a widget as mandatory. - Some data is used in multiple places, and it is easier to extract that data from a property on a page rather than from a property on a widget on a page. Prepopulate Widgets by Document Type Some people rightly have a concern that a content editor may not know the appropriate widgets to use when creating a given page. If you have 50 widgets, it may not be all that intuitive that 5 in particular should be used for a given type of page. To help manage that, I recommend initializing the widgets in a page based on the document type. You could do that yourself by using the events in the content service, but a colleague of mine has already done that work for you: The documentation is a bit light, but the basic idea is that you can create a part of the content tree that indicates default widgets. Essentially, you create a "content template" (not to be confused with Umbraco templates, which is an entirely different concept) as a content node with the default widgets, and you select the document types this content template applies to. When you create a content node with one of the selected document types, it will get created with the widgets specified on the content template node. Styling Widgets Styling widget-based websites can become a challenge when you consider that any widget can appear next to any other widget. However, you don't need to create styles for each of those combinations. All you need to do is create a default margin so that widgets get a sensible spacing between them. If you need a specific margin between two widgets, you can use a CSS sibling selector to override the default margin. This is also the reason I recommend you use the top margin as your default (otherwise, changing the margin between widgets becomes more difficult to achieve with CSS alone). There are a couple more situations you'll need to consider when styling widgets. One is the margin at the top and bottom of the page (the gap separating the widgets from the header and footer). Rather than setting the margin on the container of the widgets, you can set the margin on the first and last widget using the first-child and last-child CSS selectors. If particular widgets need a different margin (e.g., if there should be no gap between them and the header/footer), you can use the first-child and last-child CSS selectors in combination with the class name attached to those particular widgets. Another situation I've come across is when showing widgets in a sidebar. Sometimes the widgets in the sidebar need to be displayed a little differently than they would in the main content area. For example, you might want to reduce the spacing between widgets. Rather than styling each of the widgets that happens to appear in the sidebar with a particular margin, you can set a default margin for all widgets that appear in the sidebar. This is again a default you can override using a CSS sibling selector for two particular widgets that appear adjacent to one another. Here are a few examples that show how to implement some of the above mentioned styles: // /* Default widget gap. */ .widget { margin-top: 30px; } /* Override widget gap between slideshow and callout. */ .slideshow-widget + .callout-widget { margin-top: 0px; } /* Gap at top and bottom of page. */ .widget:first-child { margin-top: 50px; } .widget:last-child { margin-bottom: 60px; } /* Default sidebar widget gap. */ .sidebar .widget { margin-top: 10px; } /* Override sidebar widget gap between video and rich text. */ .sidebar .video-widget + .rich-text-widget { margin-top: 0px; } Putting it All Together Now that you've read about building widget-based websites with Archetype, here's how you can put it all together: - Fieldsets. Create an Archetype data type with a fieldset for each widget you want to use, then create document types with a property based on that data type. - Single View. Avoid creating a Razor view for each type of page in your website. Instead, create a content template with the default widgets for each document type, and use a single Razor view to render all of them. - Map to Classes. Map your Archetype widgets to C# classes, then loop over each of them to render them to markup. - Style. Create a few default styles to ensure your widgets look good next to each other, then refine those styles when you need to override the defaults. - Reuse. When you nee to create a new type of page, reuse your existing widgets to save a ton of time. That's pretty much all there is to widgets. There are lots of ways of building them, but this approach should be a good start. If you have any questions, you can let me know in the comments or contact me on Twitter. Resources Here are a few resources that will help you create widget-based websites with Archetype. - Ditto - Ditto Labs - Ditto Documentation - Ditto - The Friendly Poco Mapper for Umbraco - Archetype - Archetype Documentation - Content Templates Also in Issue No 20 Creating A UI Layer For Custom Content With UI-O-Matic 2 by Matt Brailsford
https://skrift.io/issues/building-umbraco-websites-with-archetype-widgets-and-ditto/
CC-MAIN-2020-40
refinedweb
2,454
61.87
Simplex optimization is one of the simplest algorithms available to train a neural network. Understanding how simplex optimization works, and how it compares to the more commonly used back-propagation algorithm, can be a valuable addition to your machine learning skill set. 10 Just because you're using ADO.NET to update data doesn't mean you can't also grasp the opportunity to retrieve some data and save yourself a trip to the database. 10/10/2014 Andrew does integration for a living. As a result, weird client data comes with the territory, but one client's data in particular stands out as being truly unique. 10/09/2014 One file equals one class? Not really. 10/07/2014 Peter upgrades his Backbone/Typescript to respond to the event raised when the user selects an item in a dropdown list by retrieving related data from a Web API service. 10/06/2014 If you need to manipulate a text file of data outside of Visual Studio to convert it into something else (code, for instance, or just a better data file), use NimbleText. 10/02/2014 Lots of decisions go into creating cross-platform apps. Without Xamarin.Forms, the decision process is almost too unwieldy. Here's how it can simplify your mobile development. 09/30/2014 DI containers all serve a similar purpose, but with some differences in syntax and functionality. Ondrej Balas explains the differences between Ninject, Castle Windsor, Unity, StructureMap and Autofac. 09/25/2014 Peter returns to improve performance by splitting a single table into multiple entities, but this time, he implements his solution using the Entity Framework 6 designer. 09/22/2014 There are some occasions when using Entity Framework can really hurt you: When you have tables with hundreds of columns or tables with large payloads. Here's how to get EF6 to do the right thing. 09/17/2014 A look at some of the tools available to automate the creation of documentation for your Web API. 09/16/2014 Nick Randolph discussed how Windows Phone applications can be deployed within a company using enterprise distribution. 09/11/2014 You can set a temporary breakpoint and start debugging with one mouse click. 09/09/2014 Context is king, and your app can easily create hyper-local experiences with iBeacons! 09/04/2014 Peter turns the management of his single-page Backbone application over to Backbone itself by integrating Backbone Routers and Events. Plus: How to simplify your TypeScript code with longer namespaces. 09/02/2014 Here's an article about managing transactions that you don't need to read because, with one exception, Entity Framework will do the right thing by default. But, in the .NET Framework 4 and later, you can do more (if you ever need to). 08/25/2014 There are two different techniques for training a neural network: batch and online. Understanding their similarities and differences is important in order to be able to create accurate prediction systems. 08/18/2014 You want the responsiveness that asynchronous programming in the Microsoft .NET Framework 4 provides, but also need your asynchronous methods to work with other code in your application. Here's how the Task object answers all of your problems. 08/13/2014 Ondrej Balas continues his series on refactoring code for dependency injection, looking at patterns and strategies for changing application behavior after it has already been compiled. 08/11/2014 Extension methods provide a great way for extending a class functionality -- but it's interfaces that let you use those methods anywhere you want. 08/08/2014 How to use Xamarin.Auth, Xamarin.Social, and the Facebook SDK for Android to interact with Facebook within a Xamarin.Android application. 07/29/2014 Weight Ondrej Balas continues his series on refactoring code for dependency injection, focusing on techniques that make it easier to refactor complex applications. 07/16/2014 I agree to this site's Privacy Policy.
http://visualstudiomagazine.com/pages/topic-pages/c-sharp-vb-tutorials.aspx
CC-MAIN-2014-42
refinedweb
660
55.13
Opened 6 years ago Closed 4 years ago #14186 closed New feature (wontfix) Adding GDirections wrapper to overlays.py Description Today I needed to add support to GoogleMap GeoDjango abstraction so it could draw directions in a map. So I coded GDirections class, modified gmap.py to add a directions parameter and modified the template. Directions are drawn in a path within the map, adding a directions div and getting direction steps would be trivial with this. Take in mind only one GDirections object is passed to the GoogleMap object, as only one route will be drawn in the map. You can find an example of use in the Doc strings of the class: from django.shortcuts import render_to_response from django.contrib.gis.maps.google.overlays import GDirections def sample_request(request): route = GDirections(LineString(POINT(40.44 -3.77), POINT(42.33 -3.66)) return render_to_response('mytemplate.html', {'google' : GoogleMap(directions=route)}) You can still pass other overlays and they will get drawn in the map. Just be careful with zoom, because GDirections will set automatically the zoom to show the whole path. I hope somebody finds this useful, Best regards Miguel Araujo Attachments (1) Change History (11) comment:1 Changed 6 years ago by Changed 6 years ago by comment:2 Changed 6 years ago by comment:3 Changed 6 years ago by comment:4 Changed 6 years ago by comment:5 Changed 6 years ago by As GeoDjango is moving or has moved to the new Google's API v3. This is probably not necessary, as GDirections objects don't exist anymore and directions handling is done completely different. Changed 4 years ago by Closing per comment 5. I'm not sure why but the patch diff is not showing correctly in the Trac interface, sorry.
https://code.djangoproject.com/ticket/14186
CC-MAIN-2017-09
refinedweb
298
62.88
iPad 2 crashes, restarts on its own 49519 Views 43 Replies Latest reply: May 19, 2013 9:54 PM by tommyomega - Currently Being ModeratedMar 13, 2011 7:34 PM (in response to Shovelflh)I am having this exact problem. Seems to be happening the most after I use Facetime or Photobooth. Right now, when I try to open ANY app, it crashes, locks or freezes. I have tried many things including a hard restart. I am very frustrated!! I am trying to restore it now, thru iTunes. Waiting for the giant "iPad software update" to download, then I'm going to restore it. I hope this works, or I'll be at the genius bar tomorrow. Message was edited by: armyxraysiPad 2, iOS 4 - Philly_Phan If a man says he will fix it, he will. There is no need to remind him every 6 months about it.Currently Being ModeratedMar 13, 2011 7:38 PM (in response to Shovelflh)Try a System Reset:, a less thorough version of the Guide is included as a Safari bookmark and a more thorough version of the Guide can be downloaded at no charge via iBooks.iMac, Mac OS X (10.6.6), iPad-1 IOS 4.3 - Currently Being ModeratedMar 13, 2011 7:46 PM (in response to armyxrays)I updated iTunes to 10.2.1.1 before connecting the iPad 2. I hope you have better luck than me! I did the system reset several times, the problem recurs with use. Message was edited by: ShovelflhiPad 2, iOS 4 - Currently Being ModeratedMar 13, 2011 8:09 PM (in response to Shovelflh)Shovelflh, have you done a complete restore via iTunes yet? I'm getting ready to try it now. - Currently Being ModeratedMar 13, 2011 8:31 PM (in response to armyxrays)I haven't but I guess I should. The restore doesn't take long but reloading my music library requires down sizing the files to 128k bit rate and takes about 12 hours. Here goes....iPad 2, iOS 4 - Currently Being ModeratedMar 13, 2011 8:37 PM (in response to Shovelflh)Well, I just restored. I will post my findings after some heavy use. - Currently Being ModeratedMar 14, 2011 8:14 AM (in response to Shovelflh)Restored last night and it still crashes. I'm not very computer knowledgeable but it seems programs which are video intensive crash it or degrade its capabilities to the point that simple actions, like going to the settings screen, do not work. Facetime, Midnight HD, and Plants vs. Zombies HD will crash the device. There may well be others but I haven't had the device long enough to figure out more and haven't been able to keep it running in any case. Now to the Genius bar or just return it and wait for one out of a later production run?iPad 2, iOS 4 - Currently Being ModeratedMar 14, 2011 1:51 PM (in response to Shovelflh)Just wanted to give an update. After a full restore thru iTunes, the iPad 2 continued to crash. I visited the Apple store and the Genius tried to restore at the Genius bar. It failed to restore! He deemed it defective and swapped me for a new one. He said I was the first return (to that store), and that Apple would want to study it. Incidentally, the Apple employees were telling everyone they were sold out. My replacement came from "a private stock" for defects. I'm glad they were able to help. Great customer service (thanks Luke "Apple Genius")!iPad 2 32gb wifi, iOS 4 - Currently Being ModeratedMar 15, 2011 2:11 PM (in response to armyxrays)We had the same problem. It would sometime freeze up, sometimes the apps would just open for a second then close, sometimes it would re-start itself. After a few phone calls to apple and restoring it three times, they arranged to replace it at the local apple store even though I did not buy it there. I guess they hold some back for exchanges. I'm unhappy that this one had a problem, but the customer service was great and I am going to exchage it tomorrow.Mac OS X (10.6.6) - Currently Being ModeratedMar 16, 2011 1:53 AM (in response to Shovelflh)Recap and update--Upgraded to latest iTunes before initially connecting it; installed backed up settings from previous iPad; crashed/reset frequently. Did a complete system restore, installed backed up settings from previous iPad; crashed/reset LESS frequently but still crashed/reset in a repeatable manner. Box it up, let it sit, did not reconnect it to a computer, then can't get it to crash at Apple store. Genius does confirm the shut down events so I'm not loco (cool diagnostic software btw). Take it home and, you guessed it, it reset in the middle of an app. Should I wait a little while for iOS and app updates?iPad 2, iOS 4 - Currently Being ModeratedMar 17, 2011 9:45 PM (in response to Shovelflh)The latest - complete reset/ erase; setup as new iPad; installed apps; first app used SiriusXM, played, quit unexpectedly, went from app initializing to home screen, then started again; later system froze on home screen, had to sys reset to start; later froze again, stuck on silver apple, sys reset. Off to Genius Bar. Genius Bar offered a replacement but refused to replace a white model with a black model. Like for like is the policy. Which generally makes sense but I was a little put off by it given this product hasn't been on the market a week and I went out of my way to buy it on day one. So it goes. Ended up returning the iPad 2 to the military PX where I purchased it and might later purchase another. Definitely won't go out of my way to do so as this has been a lot of effort for no reward.iPad 2, iOS 4 - Currently Being ModeratedMar 18, 2011 3:02 PM (in response to Shovelflh)I am having these same problems. Soft resets where the apple logo shows on a black screen while using an app and then comes back after about 30 seconds. Also, problem where opening any app closes immediately. Also, Safari closes on it's own quite often. Do these problems occur on the original iPad with iOS 4.3? This is my first Apple product and I am really surprised by all of the issues.IPad 2, iOS 4 - Currently Being ModeratedMar 20, 2011 4:00 PM (in response to Brrmax)My suggestion would be to just bring it into the Apple store if you can. They will swap it out. Don't try to figure it out on your own, it's not worth the trouble, and no, it isn't working as intended. I'm much happier now that I exchanged. - Currently Being ModeratedMar 22, 2011 6:29 AM (in response to armyxrays)Have a 64GB Wifi White, and I'm having this same issue... I did not restore from backup, and it's a new software install. Apps Crash, the thing randomly restarts, and it's a general mess. I've gotten defective Apple products for the last 4 things I've bought...MBA 11.6, iPad 2 64, White iPhone 4, Mac OS X (10.6.7)
https://discussions.apple.com/message/13230026
CC-MAIN-2014-15
refinedweb
1,237
72.56
wmemcpy() Copy wide characters from one buffer to another Synopsis: #include <wchar.h> wchar_t * wmemcpy( wchar_t * ws1, const wchar_t * ws2, size_t n ); Arguments: - ws1 - A pointer to the buffer that you want to copy the wide characters into. - ws2 - A pointer to the buffer that you want to copy the wide characters from. - n - The number of wide characters to copy. Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description:. Copying overlapping buffers isn't guaranteed to work; use wmemmove() to copy buffers that overlap. Returns: A pointer to the destination buffer (i.e., the same pointer as ws1). Classification: Last modified: 2013-12-23
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/w/wmemcpy.html
CC-MAIN-2014-15
refinedweb
117
60.11
Red Hat Bugzilla – Bug 427583 Solaris 10 clients cannot mount NFS exports from RHEL (NFSv4 specific) Last modified: 2009-06-29 09:35:36 EDT Description of problem: When a Solaris 10 NFS client tries to mount an export on a RHEL4 (and RHEL5) server, "Permission denied" is reported and the mount fails. Version-Release number of selected component (if applicable): nfs-utils-1.0.6-84.EL4 How reproducible: Always Steps to Reproduce: 1. Add a normal entry to /etc/exports 2. Attempt to mount export on a Solaris 10 machine 3. Actual results: Permission denied reported. Expected results: Successful mount. Additional info: If NFSv4 is disabled on either side, the mount works fine again. I note that this bug () was filed, but never really resolved. The fsid fix mentioned there does not work for me. The only solution is to either tell the Solaris 10 client to use NFSv3 at most or tell rpc.nfsd on the Linux side to do the same. This happens with RHEL5 and Fedora NFS servers as well. I am not clear if this is an issue on the RH side or the Solaris side. Will file an SR for this issue as well. Created attachment 290864 [details] tcpdump output on RHEL4 server tcpdump -i eth0 -n -vv -s 0 -w /tmp/nfs.dmp host barn Where 'barn' is the Solaris 10 NFS client. In the file, 10.49.6.46 is the RHEL4 server and 10.27.6.19 is the Solaris 10 client. These bugs from upstream may be interesting: As is this page from Sun: The thing is, even when I use the following in my /etc/exports: /install *(rw,insecure,async) (Which is an NFSv3 style export), Solaris 10 clients still can't seem to mount it. Who's right and who's wrong here? :-) This may also be of interest -- and worthwhile to backport? Oh, d'oh, that patch was by Steve himself. :) My bad. Will this find its way into RHEL4 eventually? Likely not, though it might make it into RHEL5. Can you post your /etc/exports file and the command you're using on the solaris side to mount the filesystem? /etc/exports: /install *(rw,insecure,async) /yum3 *(rw,insecure,async) /var/www/mrepo *(ro,async) Pretty basic NFSv3 style exports file. On the Solaris side, we're using the automounter to access the directories. ie: solaris10% cd /net/linuxhostname/install This fails as long as NFSv4 is enabled either on the Solaris10 client side or the RH server side. Note that per this thread: I was able to use Steve's patch to make this work correctly. Specifically here: I will try backporting the patch into RHEL4's nfs-utils on my own. Let me know if you need any additional information. Ok, I think I understand the problem. Your exports don't have any entries with the option "fsid=0". That's how the root of the NFSv4 namespace is designated. Solaris implicitly declares the root directory as the root of the NFSv4 namespace. Linux does not -- this has to be done manually. This page describes this in a bit more detail: This is really a configuration issue, so I'm going to close this as NOTABUG. My understanding is that we will not be backporting the dynamic pseudo root patches to RHEL4, but RHEL5 is still a possibility. The approach you link to above is still being debated upstream, and likely won't have enough soak time there before RHEL4 goes into maintenance mode. Closing this as NOTABUG, please reopen if you consider this to be in error. Even with fsid=0 in my export line, the automounter from Solaris still will not work. It _will_ work with fsid=0 if I manually do a mount from Solaris. The problem is that the namespace exposed is incorrect. RH seems to expect an NFSv4 mount to request / as the mount for the export tagged fsid=0. Solaris sees "/install" and tries to mount that instead. If you have access to a Solaris 10 machine this should be reproduceable. Remember this is with the automounter, _not_ with a manual mount. I did reopen this and can give you a detailed example if that would help clarify my issue. > Even with fsid=0 in my export line, the automounter from Solaris still will > not work. That's because Solaris is querying the NFSv3 mount daemon and assumes that it can mount the same set of directories using NFSv4. This isn't necessarily correct because the NFSv4 namespace doesn't map directly to the NFSv3 namespace unless you set it up that way. > It _will_ work with fsid=0 if I manually do a mount from Solaris. The problem > is that the namespace exposed is incorrect. The namespace is not incorrect -- it's simply different from the NFSv3 namespace. > RH seems to expect an NFSv4 mount to request / as the mount for the export > tagged fsid=0. Solaris sees "/install" and tries to mount that instead. Right. That's because fsid=0 denotes the _root_ of the NFSv4 namespace. You're free to map the root to anything you like in Linux. Solaris is constrained to mapping the root of the NFSv4 namespace to /. To work around this, you might want to do something like this in /etc/exports: /install *(rw,insecure,async) /yum3 *(rw,insecure,async) /var/www/mrepo *(ro,async) /export *(ro) /export/install *(rw,insecure,async,fsid=0) /export/yum3 *(rw,insecure,async,fsid=0) /export/var/www/mrepo *(ro,async,fsid=0) Bind mount /install, /yum3, and /var/www/mrepo under /export. Then run exportfs -a. This is essentially what Steve's patches do, just in a less automatic fashion... As a side note, "async" can leave you with data corruption. See several notes in the NFS FAQ for more info: Again closing this again as NOTABUG since this is just the NFSv4 server on RHEL4 working as designed. Thanks Jeff. Will consider these workarounds. Should I reopen another bug for RHEL5 and also a new SR for RHEL5 in the hopes of getting the pFS patch backported to RHEL5 at some point? Opening a SR might be the best thing. There are already a couple of BZ's open related to this: 237108 247759 ...so you might want to mention those when you open the SR. My gut feeling on this is that the changes will be too much for RHEL4, but a possibility for RHEL5. Sorry, I posted the exports example in haste yesterday. It's wrong. This would be correct: /install *(rw,insecure,async) /yum3 *(rw,insecure,async) /var/www/mrepo *(ro,async) /export *(ro,fsid=0) /export/install *(rw,insecure,async) /export/yum3 *(rw,insecure,async) /export/var/www/mrepo *(ro,async) ...also to be safe, you'll probably want to make /export a real filesystem. Otherwise, a client could spoof filehandles and get to stuff in your root filesystem. FWIW, I also posted a "rebuttal" to Tom's blog post, so that anyone else who runs across it will have a bit more info to go on:
https://bugzilla.redhat.com/show_bug.cgi?id=427583
CC-MAIN-2018-09
refinedweb
1,189
74.19