SAP HANA Studio: Build a Calculation View and View the Data in MS Excel

DEPRECATED: SAP HANA XS Classic is deprecated as of SPS02. Please use XS Advanced, and learn how to get started with the new mission Get Started with XS Advanced Development.

First, you will need to download and install the SAP HANA Client to enable the MDX Connector. You can find it in the Marketplace, in the SAP Software Download Center. Follow the instructions from the installer to complete the installation.

In SAP HANA Studio, from the SAP HANA Development perspective, create an XS Project by selecting File->New->XS Project. If you do not see the XS Project option, go into Other and look for it under the Application Development menu inside SAP HANA. Click on Next, choose a repository, add a name for a package and untick Add project folder as subpackage. Click Next, add a name for a schema and for the Data Definition (.hdbdd) file.

You can now declare a context in a Core Data Services (CDS) artifact. This context will serve as a group for the two design-time entities (tables) that will hold the data you will import and the data types you will create. Enter the following code into the .hdbdd file:

namespace mdx_parks;

@Schema : 'PARK'
context parks {

    /** `Type` is used to declare reusable field or structure declarations. **/
    Type tyObjectID : String(8);
    Type tyParkID : String(7);
    Type LString : String(200);
    Type MString : String(80);
    Type SString : String(20);
    Type TString : String(2);

    /** An entity becomes a table **/
    entity Park {
        key OBJECTID : tyObjectID;
        key ParkID : tyParkID;
        ParkName : MString;
        ParkType : SString;
        ParkStatus : SString;
        DevelopmentStatus : SString;
        ParkPlanningArea : SString;
        MgtPriorty : SString;
        MgtAgencyName : MString;
        MgtAgencyType : SString;
        Landowner : MString;
        OwnerType : SString;
        Acreage : Decimal(10,2);
        AcreSource : SString;
        Address : MString;
        CityMuni : SString;
        County : SString;
        State : TString;
        StreetNum : String(6);
        StreetDirection : TString;
        StreetName : MString;
        StreetType : String(4);
        Zip : String(5);
    };

    /** The following fields in the file have been left out: StatusCmnt, Comments, Classic **/
    entity Amenities {
        key ID : String(32);
        key ParkID : tyParkID;
        park : Association[1] to Park on park.ParkID = ParkID;
        AmenityType : MString;
        CourtCount : Integer;
        DescriptionText : LargeString;
        HasSwingset : String(5);
    };

    /** Fields ParkName, FieldCount, FieldSurface, AgeGroup, Address, Latitude, Longitude, CreatedDate, UpdatedDate, FID not included **/
};

The structures defined for these entities match those of the files we will use later, except for some fields that have been intentionally left out. Right-click on the project, go down to Team and click on Activate All. From the Systems tab, open the Security folder and look for your user ID. Double-click on it to open your User Administration page. Click on the Object Privileges tab and then on the + sign to add the schema you need access to. Select the schema you have just created. Note: you may need this done by a user with administration permissions. Flag the Delete, Execute, Insert, Select and Update permissions on your new schema. Once finished, press F8 or click on the green arrow in the top right corner to deploy the changes.
You can now go into the Systems tab and look for the schema and tables. This example uses data from the Open Data City of Boise site. If you choose another dataset, you need to adapt the entities and types created in the previous step to match the structure of that dataset. The datasets in this example are the Boise Parks Facilities Open Data and the Boise Parks and Reserves Open Data CSV files. Go into File and then Import. In the SAP HANA Content folder, choose Data from Local File and then click on Next. Choose your target system (where your schema and tables are). Click on Browse to select one of the two files you will import. This example starts with Boise_Parks_and_Reserves_Open_Data.csv, which contains the data for the table Park. You could also start with Boise_Parks_Amenities_Open_Data.csv, which contains the data for the table Amenities. Once you select the file, click on Open. Note: you could also use the .xlsx or .xls file formats. Tick Header Row Exists to indicate that the first row contains the names of the columns. Click on Select Table to search for the table into which you will import the file. This example uses the table Park. Click on Next. If you are importing a file with a header row and the column names match the names in the table, you can choose Map By Name, as in this example. If you do not have a header row or the names do not match, you can perform the mapping manually by dragging the fields from the source file on the left to the target table on the right. Perform the same import procedure for the Amenities table using the Boise_Parks_Amenities_Open_Data.csv file. Right-click on the tables and choose Open Data Preview to see the files you loaded. You can quickly explore the data in the Distinct Values and Analysis tabs. Hint: if you click on the Show Log button in the top right corner of the data preview in SAP HANA Studio, you can see the SQL query and the server processing time. You will now create a Calculation View.
A calculation view can perform complex calculations, joining different tables, standard views and even other calculation views as sources. From the Project Explorer tab in SAP HANA Studio, right-click on the project, then on New, and look for the Calculation View under SAP HANA / Database Development. Enter the name for your calculation view and click on Finish. Add two projection nodes by dragging and dropping Projection nodes from the Nodes lateral bar. These nodes will be used to join the parks to their amenities and to add a filter later. Hover over a node and a + sign will appear on the right. Click on it to select one of the tables (either Park or Amenities) to be projected. Repeat the same for the other table so you can select all of the fields for output. You can either click on the circle on the left of each individual field or right-click on the top of the table node and select Add all to Output. For the Amenities, leave the DescriptionText field out of the output, as it is not supported by this type of view. Drag and drop a Join node from the Nodes bar on the left. To connect the tables, drag and drop the circles on the top of each Projection node into the Join node. Click on the Join node to connect the key fields by dragging the ParkID from the left into the ParkID field on the right. Add all the fields to output (except for the ParkID field from Amenities, to prevent it from being duplicated) and set the Join Type to Left Outer so that parks that do not have any amenities are also included in the results. Finally, drag and drop the circle on the Join node into the Aggregation node. Add all the fields in the Aggregation to Output and click on the Semantics node. Use the Auto Assign button to automatically set the type (measure or attribute) for each field. The Auto Layout button at the bottom of the graphic model makes the diagram prettier.
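The effect of choosing Left Outer for the join can be illustrated outside SAP HANA. The following plain-Python sketch (with made-up sample rows, not the actual Boise data) mimics what the calculation view's join does: every Park row is kept, even when no Amenities row matches its ParkID.

```python
# Plain-Python sketch of the calculation view's Left Outer join of
# Park to Amenities on ParkID. The rows below are invented examples.
parks = [
    {"ParkID": "P1", "ParkName": "Sample Park A"},
    {"ParkID": "P2", "ParkName": "Sample Park B"},  # has no amenities
]
amenities = [
    {"ParkID": "P1", "AmenityType": "Playground"},
]

joined = []
for park in parks:
    matches = [a for a in amenities if a["ParkID"] == park["ParkID"]]
    if matches:
        for a in matches:
            joined.append({**park, "AmenityType": a["AmenityType"]})
    else:
        # Left outer: unmatched parks still appear, with a NULL amenity
        joined.append({**park, "AmenityType": None})

for row in joined:
    print(row)
```

With an inner join instead, Sample Park B would disappear from the results, which is exactly what the Left Outer setting prevents.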
Activate your view using the green arrow on the top, and click on Raw Data to see your results. Open a new spreadsheet in MS Excel. In the Data ribbon, choose From Data Connection Wizard in the Other Sources menu. Choose Other/Advanced and click Next. If you completed Step 1 successfully, you will see HANA MDX Provider at the bottom. Select it and click on Next. Enter the connection details for your SAP HANA database and test the connection before clicking OK. Hint: you can right-click on the system and open Properties to see your IP or host name, followed by port 3xx15, where xx is your instance number. After the connection is established, choose the package and the calculation view (also known as a Cube), then click Finish. Review the Import Data options, keep the Pivot Table and click OK. You can now use Excel to execute OLAP queries against SAP HANA and create Pivot Tables and Pivot Charts.
- Step 1: Download and Install SAP HANA Client
- Step 2: Create the XS Project and Core Data Services artifacts
- Step 3: Grant permissions to the newly created schema
- Step 4: Import data into your tables
- Step 5: Create a Calculation View
- Step 6: Connect to your instance using the MDX connector
- Back to Top
https://developers.sap.com/suisse/tutorials/studio-view-data-calculation-mdx.html
Similar Content

- By Dequality: Can anyone tell me if it's possible to make AutoIt search for text? (I'm trying to find a button on a pop-up; its colour changes every time, so I can't use image search, so I thought: why not try a text search, if that's possible.) The text is one of the following: 'OK', 'Okay', 'Tak', 'Fedt'. I've searched around; most possibilities I found were image search, which doesn't work because of the colour change. Any help is HIGHLY appreciated. -Dequality.

- By galan2015: Yo, I need some help. I'm training with _StringBetween on a big site. A lot of fun! My script ran 8 times and created a text file with a weight of 1 GB. I saw that the file contained a lot of repetition. Is it possible to somehow set the script to save each URL to the file only once, without repetition?

#include <ButtonConstants.au3>
#include <EditConstants.au3>
#include <GUIConstantsEx.au3>
#include <WindowsConstants.au3>
#include <INet.au3>
#include <StringConstants.au3>
#include <File.au3>

;I'm coming for blood, no code of conduct, no law.
#Region ### START Koda GUI section ### Form=
$Form1 = GUICreate("Form1", 1211, 812, 43, 110)
$Button1 = GUICtrlCreateButton("Button1", 0, 8, 249, 81)
$Button2 = GUICtrlCreateButton("Button2", 350, 8, 249, 81)
$Edit1 = GUICtrlCreateEdit("0", 8, 128, 609, 257, BitOR($ES_CENTER, $ES_AUTOHSCROLL, $ES_READONLY, $ES_WANTRETURN))
$Edit2 = GUICtrlCreateEdit("2", 632, 0, 577, 809)
GUISetState(@SW_SHOW)
#EndRegion ### END Koda

Local $iFileSize = FileGetSize('')

Func VisitFrontPage()
    Local $Liczba = _FileCountLines(@ScriptDir & '\data\links.txt')
    Local $liczba2 = GUICtrlSetData($Edit1, Random(1, $Liczba, 1))
    Local $Liczba3 = GUICtrlRead($Edit1)
    Local $Liczba4 = FileReadLine(@ScriptDir & '\data\links.txt', $Liczba3)
    $data = _INetGetSource(FileReadLine(@ScriptDir & '\data\links.txt', Random(1, $Liczba, 1)))

    $linki = StringRegExp($data, '<a href="(.*?)/" title=""', 3)
    For $q = 0 To UBound($linki) - 1
        FileWrite(@ScriptDir & '\data\links.txt', '' & $linki[$q] & @CRLF)
    Next

    $linki = StringRegExp($data, 'href="(.*?)/">', 3) ; collect the users who added finds
    For $w = 0 To UBound($linki) - 1
        FileWrite(@ScriptDir & '\data\links.txt', '' & $linki[$w] & @CRLF)
    Next

    $linki = StringRegExp($data, '<a href="(.*?)/" title="', 3) ; collect the users on the microblog page
    For $e = 0 To UBound($linki) - 1
        FileWrite(@ScriptDir & '\data\links.txt', '' & $linki[$e] & @CRLF)
    Next

    $linki = StringRegExp($data, '<a class="tag create" href="(.*?)/"><em>', 3) ; collect the users on the microblog page
    For $r = 0 To UBound($linki) - 1
        FileWrite(@ScriptDir & '\data\links.txt', '' & $linki[$r] & @CRLF)
    Next

    $linki = StringRegExp($data, '<', 3) ; collect the users on the microblog page
    For $t = 0 To UBound($linki) - 1
        FileWrite(@ScriptDir & '\data\links.txt', '' & $linki[$t] & @CRLF)
    Next

    $linki = StringRegExp($data, 'href="(.*?)/" title=""', 3) ; collect the users from a find's comments
    For $a = 0 To UBound($linki) - 1
        FileWrite(@ScriptDir & '\data\links.txt', '' & $linki[$a] & @CRLF)
    Next

    $linki = StringRegExp($data, 'href="(.*?)/"', 3) ; collect the tags from a find
    For $s = 0 To UBound($linki) - 1
        FileWrite(@ScriptDir & '\data\links.txt', '' & $linki[$s] & @CRLF)
    Next

    $linki = StringRegExp($data, '<a class="clearfix" href="(.*?)/?utm_source', 3) ; collect the finds from the right-hand menu
    For $d = 0 To UBound($linki) - 1
        FileWrite(@ScriptDir & '\data\links.txt', '' & $linki[$d] & @CRLF)
    Next

    Sleep(5000)
    GUICtrlSetData($Edit2, GUICtrlRead($Edit2) + 1)
    _FileWriteToLine(@ScriptDir & '\data\links.txt', GUICtrlRead($Edit1), '', 1)
    VisitFrontPage()
EndFunc

While 1
    $nMsg = GUIGetMsg()
    Switch $nMsg
        Case $Button1
        Case $Button2
            VisitFrontPage()
        Case $GUI_EVENT_CLOSE
            Exit
    EndSwitch
WEnd
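One way to answer the deduplication question above is to post-process the links file with the standard Array.au3 UDF. The sketch below is untested and only an assumption about how it could be wired up; the file path is taken from the script above.

```autoit
#include <Array.au3>
#include <File.au3>

; Untested sketch: read every collected link, keep only unique entries,
; and rewrite the file. $iBase = 1 treats element [0] as the line count.
Local $aLinks
If _FileReadToArray(@ScriptDir & '\data\links.txt', $aLinks) Then
    Local $aUnique = _ArrayUnique($aLinks, 0, 1)
    _FileWriteFromArray(@ScriptDir & '\data\links.txt', $aUnique, 1)
EndIf
```

Running something like this between crawl passes would keep each URL only once instead of letting the file grow to 1 GB of repeats.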
https://www.autoitscript.com/forum/topic/143854-i-cache-some-idea-from-another-topic/
#include <reporter.h>

single_job_report class reference. List of all members. Definition at line 159 of file reporter.h.

- Constructor: definition at line 631 of file reporter.cc.
- Destructor: definition at line 637 of file reporter.cc.
- clear(): clear all values. Definition at line 642 of file reporter.cc. References m_id and m_reports. Referenced by job_archiver::clear() and single_job_report().
- add_report(): add a path report for this job. Definition at line 649 of file reporter.cc. References m_reports and TRY_nomem. Referenced by job_archiver::mf_process_report().
- reports(): return a const vector of all path reports. Definition at line 655 of file reporter.cc.
- id() (setter): set a descriptive ID for this job report. Definition at line 661 of file reporter.cc. References m_id and TRY_nomem. Referenced by job_archiver::start().
- id() (getter): return the descriptive id for this job report. Definition at line 667 of file reporter.cc.
- status(): if all path reports say that rsync was successful, return true, else return false. Definition at line 674 of file reporter.cc.
- Member data at line 176 of file reporter.h. Referenced by add_report(), clear(), reports(), and status().
- Member data at line 177 of file reporter.h. Referenced by clear() and id().
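The member list above can be condensed into a hypothetical reconstruction of the class's interface. Only the member and method names come from the listing; the path_report type, its success flag, and all member types are assumptions for illustration.

```cpp
#include <string>
#include <vector>

// Hypothetical sketch of single_job_report based on the member list above.
// The real class lives in reporter.h/reporter.cc; this is not its code.
struct path_report {
    bool rsync_ok;  // assumed: whether rsync succeeded for this path
};

class single_job_report {
public:
    // Clear all values (the real clear() references m_id and m_reports).
    void clear() { m_id.clear(); m_reports.clear(); }

    // Add a path report for this job.
    void add_report(const path_report& r) { m_reports.push_back(r); }

    // Return a const vector of all path reports.
    const std::vector<path_report>& reports() const { return m_reports; }

    // Set / return the descriptive ID for this job report.
    void id(const std::string& s) { m_id = s; }
    const std::string& id() const { return m_id; }

    // True only if every path report says rsync was successful
    // (vacuously true when there are no reports yet).
    bool status() const {
        for (const path_report& r : m_reports)
            if (!r.rsync_ok) return false;
        return true;
    }

private:
    std::string m_id;
    std::vector<path_report> m_reports;
};
```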
http://rvm.sourceforge.net/doxygen/1.02/html/classsingle__job__report.html
Dear Microsoft fanatics, I have the most annoying issue at a customer's setup. The customer has 3 servers: 2 of them are running Server 2008 R2 and the third one (a VPS) is installed with Server 2012 R2. There are several DFS namespaces set up with replication for their user profiles, home folders and shared data. The issue now is with one of the shares (a simple data share). This share is set up through the root folder domain.local\public; in this root folder there are in total 4 namespaces set up. 3 out of the 4 namespaces work fine, but there is 1 namespace that we simply cannot get to work. We have checked on all 3 servers and the permissions on all sides are full control. The root folder on all 3 servers is shared with full permissions for the Domain Users group. The NTFS permissions on all 3 servers are also full control for Domain Users. I have no idea where I can check or what I am doing wrong. The share permissions on the namespace itself are set up with full control on all 3 servers. Is there anything that I am not seeing? Please let me know.

11 Replies

Jan 31, 2014 at 4:17 UTC
Try unchecking "Include Inheritable permissions from this object's parent". Even though you may have full control, it may be getting a deny from further up the chain.

Jan 31, 2014 at 4:20 UTC
I'm somewhat new to DFS but I did run across something like this. When I set up a new DFS namespace I accidentally set it up so that one of the servers got a read-only copy. I ended up redoing the DFS namespace, so I didn't test this, but I assume you could do something like this (again, please research more as I have not tried it):

Dfsradmin membership set /RGName:<replication_group> /RFName:<replicated_folder> /MemName:<DOMAIN\Server> /RO:false

Jan 31, 2014 at 4:29 UTC (Constant IT is an IT service provider.)
Hi Agmacguy, the DFS setup was first set up around 2 years ago with just 2 servers, without any issues.
In the original setup they always had 1 out of the 2 servers set up as read-only, so we did this. I have tested turning it off, but that did not make any change. Now we have 3 servers: 2x Server 2008 R2 (the ones that have already been set up for 2 years) and the new one, a VPS set up with Server 2012 R2. I will try unchecking "Include Inheritable permissions from this object's parent".

Jan 31, 2014 at 4:35 UTC
Is the 2012 R2 box running on ESXi? If so, you might check out VMware KB 1012225. I had issues like yours and this fixed it right up.

Jan 31, 2014 at 4:35 UTC (Constant IT)
@

Jan 31, 2014 at 4:36 UTC (Constant IT)
Hi Cody, the server is running as a Hyper-V machine.

Jan 31, 2014 at 4:38 UTC
If it's already unchecked, just leave it as is. It'll be something else that's the problem.

Feb 1, 2014 at 12:00 UTC (Brand Representative for Microsoft)
Follow this, it should help you: ...

Feb 3, 2014 at 6:33 UTC (Constant IT)
I have changed it. For now it works, but because the issue only comes up sometimes I need to investigate this further. I will also change IP addresses over the network so I do not have any routing issues.

Feb 4, 2014 at 8:02 UTC (Constant IT)
Unfortunately the problem occurred again! Turning IPv6 completely off did not do the trick :(

Feb 12, 2014 at 2:14 UTC (Constant IT)
I have finally found the solution. For some reason one of the servers that was set read-only was giving the issues. Problem is solved.
https://community.spiceworks.com/topic/439139-dfs-permissions-problem-all-files-open-up-read-only
Wiki: django-piston / FAQ

How do I... ...

Why does Piston use fields from previous handlers?

When you create a handler which is tied to a model, Piston will automatically register it (via a metaclass). Later on, if a handler returns an object of the model's type, and no fields is defined for it, Piston will resort to the fields defined by the model's handler. For example, this handler:

class FooHandler(BaseHandler):
    model = Foo
    fields = ('id', 'title')

... is tied to 'Foo'. If we later do this:

def read(...):
    f = Foo.objects.get(pk=1)
    return { 'foo': f }

... Piston will return the 'id' and 'title' fields of 'f'. If this is not what you want, you can define which fields you do want:

class OtherHandler(BaseHandler):
    fields = ('something', ('foo', ('title', 'content')))

    def read(...):
        f = Foo.objects.get(pk=1)
        return { 'foo': f }

Now it will return the 'title' and 'content' properties of 'f' instead of 'id' and 'title'. This nesting can be as deep as you want it to. If you want to reset both the metaclass register and the nested fields, just use ('foo', ()), which means "take everything."

Why does Piston leave out the 'id' even if I specify it?

NB: As per f34e64f08b3e, specifying fields in fields will override their existence in exclude.

Piston has a "sane default" of excluding the ID from models. This is usually internal and shouldn't be exposed to the user. There are of course cases where you want to include the 'id' attribute, and you can do that by resetting 'exclude':

class SomeHandler(BaseHandler):
    exclude = ()

If you want to overwrite this default globally, you can do:

from piston.handler import BaseHandler, AnonymousBaseHandler
BaseHandler.fields = AnonymousBaseHandler.fields = ()

What is a URI Template?

URI Templates define how URIs for accessing a Web resource should be made up.
Given the following URI Template:

    {id}/

And the following variable value:

    id := 1

The expansion of the URI Template is the template with {id} replaced by 1.

Tips & Tricks

As noted by Stephan Preeker in this thread on django-piston, it's possible to invoke the anonymous handler directly from an authenticated handler, like so:

class handler( .. ):
    def read(self, request, id):
        self.anonymous.read(id=id)

This works because self.anonymous points to the anonymous handler, and you can then invoke methods on it directly.

Reporting Bugs

Use the issue tracker.

Who is using Piston?

Piston is written internally at Bitbucket, and was later open sourced for you to use. We keep our API versioned (by URL). If you are using Piston for anything interesting, feel free to add your site here!

- bachigua.com uses Piston to provide an API for our "iGoogle-like" gadget portal (written in Django)
- Urban Airship loves using Piston for their APIs, which initially provide easy-to-use functionality for mobile applications (like iPhone push notifications and in-app purchases).

Updated
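Returning to the nested fields specification described in the FAQ above, its selection behaviour can be sketched in plain Python. This is an illustration of the spec format only, not Piston's actual implementation.

```python
# Apply a Piston-style nested fields spec such as
# ('something', ('foo', ('title', 'content'))) to a plain dict.
def select_fields(obj, fields):
    out = {}
    for f in fields:
        if isinstance(f, tuple):
            name, sub = f
            if name in obj:
                # An empty sub-spec means "take everything", as the FAQ notes.
                out[name] = select_fields(obj[name], sub) if sub else obj[name]
        elif f in obj:
            out[f] = obj[f]
    return out

data = {"something": 1, "foo": {"id": 9, "title": "t", "content": "c"}}
print(select_fields(data, ("something", ("foo", ("title", "content")))))
# → {'something': 1, 'foo': {'title': 't', 'content': 'c'}}
```

Note how ('foo', ()) falls through to the whole nested object, matching the FAQ's "take everything" reset.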
https://bitbucket.org/jespern/django-piston/wiki/FAQ?_escaped_fragment_=who-is-using-piston
ptsname: get the name of a slave pseudo-terminal device

#include <stdlib.h>

char *ptsname( int fildes );

Library: libc. Use the -l c option to qcc to link against this library. This library is usually included automatically.

The ptsname() function gets the name of the slave pseudo-terminal device associated with a master pseudo-terminal device. The ptsname_r() function is a QNX Neutrino function that's a reentrant version of ptsname().

Returns: a pointer to a string containing the pathname of the corresponding slave device, or NULL if an error occurred (e.g. fildes is an invalid file descriptor, or the slave device name doesn't exist in the filesystem).
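A short POSIX usage sketch may help: open a master pseudo-terminal, prepare its slave, then ask ptsname() for the slave's pathname. The helper name is invented for the example, and the exact device names returned differ between systems (QNX does not use Linux's /dev/pts/N naming).

```c
#define _XOPEN_SOURCE 600
#include <stdlib.h>
#include <fcntl.h>
#include <stddef.h>

/* Illustrative helper (not from the QNX docs): open a master pty and
 * return the pathname of its slave via ptsname(), or NULL on failure. */
char *open_slave_name(void) {
    int master = posix_openpt(O_RDWR | O_NOCTTY);
    if (master == -1)
        return NULL;
    /* The slave must be granted and unlocked before it can be opened. */
    if (grantpt(master) == -1 || unlockpt(master) == -1)
        return NULL;
    /* Per the reference above, ptsname() returns NULL if fildes is
     * invalid or the slave name doesn't exist in the filesystem. */
    return ptsname(master);
}
```

The returned string typically points to static storage, which is why the reentrant ptsname_r() variant mentioned above exists.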
https://www.qnx.com/developers/docs/6.6.0.update/com.qnx.doc.neutrino.lib_ref/topic/p/ptsname.html
15 February 2012 16:21 [Source: ICIS news]

LONDON (ICIS)--European maleic anhydride (MA) producer Sasol-Huntsman has declared force majeure at its 105,000 tonne/year plant in Moers, Germany. Sasol-Huntsman was forced to declare force majeure last week, although an exact date for the outage could not be confirmed. The force majeure has been caused by a raw-material supply disruption resulting from severe weather conditions. Sasol-Huntsman is the largest supplier of MA in the region. Last week, spot material was trading at €1,350-1,500/tonne FD NWE. If the €1,800/tonne FD NWE price becomes more widely confirmed, this would be the largest week-on-week spot price change, with spot prices having risen €300-450/tonne. This would also be the highest week-on-week percentage change in spot prices, at a rise of 20-33%, since ICIS price records for MA began on 17 November 1999. The current record week-on-week spot price change is €200/tonne, which was seen at the top end of the price range on 5 November 2004. The current record week-on-week percentage change in price was 10-14%, seen on 22 September 2000. The record high spot price for MA
http://www.icis.com/Articles/2012/02/15/9532615/sasol-huntsman-declares-force-majeure-on-ma-at-moers-germany.html
I keep getting "double cannot be dereferenced" in Java

Question: I have to write a program that converts minutes to hours and compares values, but I keep getting "double cannot be dereferenced" (and, in another place, "int cannot be dereferenced") compile errors. So I changed a few things and made it 'double' instead of int, but the errors remained. For example, this simple program won't compile:

public class test {
    public static void main (String [] args) {
        int x = 5;
        int y = 1;
        System.out.println(x.compareTo(y)); // error: int cannot be dereferenced
    }
}

Answers:

int, long and double are primitive types in Java, not objects, so you can't call methods on them; that is exactly what "cannot be dereferenced" means. Right now you have two ints, which are primitive. To compare two ints, use the standard comparison operators < > ==, or wrap the value in an Integer object and use the compare method as you intended.

The same applies to double. This code fails because price is a primitive and has no toString() method:

public double price = clientObj.processedObj.price;
inputLine = new JTextField(this.price.toString(), 20); // error: double cannot be dereferenced
inputLine.setBounds(145, 210, 130, 25);
inputPanel.add(inputLine);

Use Double.toString(price) or String.valueOf(price) instead.

Three further points: 1) calling a method on a primitive is what gives you the error you asked about; 2) you don't say what type hoursminfield is; maybe you haven't even declared it yet; 3) for the division, you need to cast some part of it to a double in order to avoid integer arithmetic. The lines of code that do what you seem to want are:

String hoursminfield;                    // you had better declare any variable you are using
double hours = mins / ((double) 60);     // cast part of the division to double
hoursminfield = Double.toString(hours);  // wrap hours via Double to get a String

If you're using Java SE 7 you can use something like this:

public class DVDComparator implements Comparator

Just make sure to follow the contract of compare: return an int less than 0, equal to 0, or greater than 0, if d1 is "less than", equal to, or "greater than" d2. To compare two doubles, just compare them directly:

if (this.cost < temp.cost) return -1;
else if (this.cost > temp.cost) return 1;
else return 0;

If the pizza read from the file is cheaper, change the cheapest pizza. And while the operator approach is easier here, there is no harm in looking at the Comparable system for the future, as it is often a very useful feature: learning and using the compare and compareTo methods opens up the possibility of implementing a more generic binary search that acts upon objects that the primitive operators cannot compare.
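Both fixes discussed above can be shown in one minimal, self-contained program; the variable names are made up for the example.

```java
// Minimal illustration of the two fixes: cast before dividing, and use
// wrapper-class helpers instead of calling methods on primitives.
public class CompareDoubles {
    public static void main(String[] args) {
        int mins = 90;
        // Cast one operand so the division is done in floating point;
        // otherwise 90 / 60 is integer division and yields 1.
        double hours = mins / ((double) 60);
        System.out.println(hours);                    // 1.5

        double a = 1.5, b = 2.5;
        // Primitives have no methods, so a.compareTo(b) will not compile;
        // use operators or the wrapper's static helper instead.
        System.out.println(Double.compare(a, b) < 0); // true
        System.out.println(Double.toString(a));       // 1.5
    }
}
```

Double.compare follows the same contract as compareTo: negative, zero, or positive depending on the ordering of its arguments.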
http://geekster.org/cannot-be/double-cannot-be-dereferenced-java-compares.html
JSF Basics

This is a brief tutorial that takes a quick look at some of the very basics of JSF: how we define pages and hook them up to server-side objects. Rather than cover the fundamentals of starting a new JSF application, I'm going to start from one of the Knappsack archetypes, which can provide you with a JEE 6 application ready to roll. In this case, we are going to start with a servlet-based example so you can run it using the embedded servlet containers. To create the new project, we are going to use the following archetype GAV values. You can also read up on creating a new Knappsack application.

groupId = org.fluttercode.knappsack
artifactId = jee6-servlet-basic
version = 1.0.5

Once you have your project, just run mvn jetty:run on the command line and navigate to the application in your browser. OK, so now we are up and running, let's look at a JSF page and what it contains.

JSF uses a templating language called Facelets. JSF 1 originally used JSP as its view language, but Facelets became the de facto standard and was adopted as the standard view definition language for JSF 2.0. In our application, we have a template file called WEB-INF/templates/template.xhtml. The template just has a lot of boilerplate view code, but there are some ui:insert tags that mark places in the template where we should insert content. For example, the main content is inserted in a Facelets area called content:

<ui:insert name="content">Main Content</ui:insert>

These are insertion points which are used by pages that base themselves on this template. One of the great features of Facelets is that the template defines the content points, while the page itself is used to define the template used and then push the content into the content points.
This is much better than having to include page fragments in a JSP page, using a page decorator or using a template to include the content page. Our main page, home.xhtml, is one such page that uses this template.

home.xhtml:

<?xml version="1.0" encoding="UTF-8"?>
<ui:composition xmlns="http://www.w3.org/1999/xhtml"
                xmlns:ui="http://java.sun.com/jsf/facelets"
                xmlns:f="http://java.sun.com/jsf/core"
                xmlns:h="http://java.sun.com/jsf/html"
                template="/WEB-INF/templates/template.xhtml">
    <ui:define name="content">
        <h1><h:outputText value="Welcome"/></h1>
    </ui:define>
</ui:composition>

At the very top, in the composition tag, we tell Facelets that we want to use the file template.xhtml as our template page. Next we have a ui:define tag that defines the content that is to be used in the template. Facelets works by having the page pull the template into the page and push its content into the slots provided by the template. This is much better than pulling content into the main page using includes, or specifying a template used everywhere and decorating the content with it. This is the best of both worlds: each page defines which template it uses and pushes the content into it.

Our first component

In here you can see we have used our first JSF component, h:outputText. This is a standard JSF component that is used to output text. We could have just written the text right in the page, but the goal of this page is to test that the JSF configuration is working, and to do that we need to see whether the JSF component is rendered correctly. You can edit the text in the value attribute and, obviously, it will change the text displayed on the page.

Our first backing bean

Displaying static text isn't what web frameworks were built for; what we really want is to display something that comes from Java code. To do this, we will create a backing bean, which is a Java object that is part of the web application and that JSF is aware of. To create the backing bean, create a new class and call it PageBean.
    import javax.enterprise.context.RequestScoped;
    import javax.inject.Named;

    @Named
    @RequestScoped
    public class PageBean {

        private String message = "Mighty apps from little java beans grow";

        public String getMessage() {
            return message;
        }

        public void setMessage(String message) {
            this.message = message;
        }
    }

The code itself is pretty simple: one field with a default value, and getters and setters. However, we have a couple of new annotations at the class level. First is the @Named annotation, which tells CDI that this bean has a name that can be used to refer to it in EL expressions. Since we didn't supply a name, the default name of pageBean is used. The @RequestScoped annotation tells CDI that this bean is request scoped, so when it is created it should last until the end of the current request and then be destroyed. This means that the next time we call this page, the bean will be re-created and a fresh version used, which again will be destroyed at the end of the request. Now we are going to change our application to display this message instead. From now on, we will just be showing the code within the ui:define statements:

    <ui:define name="content">
      <h:outputText value="#{pageBean.message}"/>
    </ui:define>

If we run the application now, you should see the message displayed on the main page. Now let's take a look at a producer method that can be used to display the date and time the page was rendered. First, open up the template file, look for the footer panel at the bottom, and change it to:

    <h:panelGroup>This page was rendered at #{currentSysDate}</h:panelGroup>

What is the currentSysDate value? Well, we need to go back to our PageBean and define it. We will add a new method that returns a new Date object, and we'll annotate it with @Named and @Produces:

    @Produces
    @Named("currentSysDate")

The @Produces and @Named annotations tell CDI that this method can be used to produce a value with the name currentSysDate.
When our application is deployed, CDI makes a note that this value comes from this method, so when our page is rendered and JSF is looking for the value currentSysDate, CDI calls the method and returns the value obtained from it.

    public Date produceDate() {
        return new Date();
    }

If we refresh the page, we can see that our new timestamp function is on there. This will come in handy later on when we start using AJAX and want to check that the full page has not updated (the timestamp will stay the same because that portion of the page won't change). We've shown how we can pull data from the server onto the page, so let's take a look at how we send data back to the server. In our home.xhtml page, we'll add a text input to edit the message and a button to post it back.

    <h:form id="messageForm">
      <h:outputText value="#{pageBean.message}"/><br/>
      <br/>
      New Message : <h:inputText value="#{pageBean.message}"/>
      <h:commandButton value="Change Message"/>
    </h:form>

First, we added the h:form tag, which gives us an HTML form in which to put input controls. Any data entry in a JSF page needs to be enclosed in a JSF form tag in order to be passed back to the server. We defined an input text box that is bound to the same value as the message display, and a command button that is used to post the values back. If we refresh the page, enter a new message, and click the button, our message changes. However, this is the age of Web 2.0, and just posting back a form and displaying the results isn't enough; we need to do it with AJAX. AJAX is a mechanism by which the browser makes an asynchronous request to the server and gets a response back. When the response is returned, the browser calls a JavaScript function that updates part of the page instead of all of it. Sounds complicated, but the nice folks that wrote JSF wrapped all that functionality up in one little tag called f:ajax. What we want to do is make our command button an AJAX button, which we can do by placing the f:ajax tag as a child tag of the button.
All we need to supply the AJAX tag with is the execute and render attributes. These indicate which parts of the page we want to post back to the server, and which parts we want to re-render when the response comes back. We want to execute the form and re-render the form, so we could use the component id (messageForm) for the attribute values. JSF also provides a couple of shortcuts that we can use: the value @form references the form the button is in, so rather than hardcode the form id, the @form value lets us reference the form without doing so by name.

    <h:commandButton value="Change Message">
      <f:ajax execute="@form" render="@form"/>
    </h:commandButton>

Notice that now you have an AJAX button, the timestamp in the footer doesn't change when you post the value back.

Performing Actions

So far we have covered displaying information from the server-side bean and writing values back, but often we want some user action to execute some code on the server. Start by adding an int attribute to our PageBean class and two methods, one to increase it and another to decrease it, as well as getters and setters for the value.

    private int value = 0;

    public void increase() {
        value++;
    }

    public void decrease() {
        value--;
    }

In the view, we want to display the value and have a couple of buttons to increase and decrease it. Add the following view code in home.xhtml somewhere between the h:form tags:

    <h:panelGroup>
      <h:commandButton value="+" action="#{pageBean.increase}"/>
      #{pageBean.value}
      <h:commandButton value="-" action="#{pageBean.decrease}"/>
    </h:panelGroup>

Refresh the page and click away, and you'll notice something is wrong. Press the increase button and the value goes to 1; click it again and it... goes to 1. Click the decrease button and it will always go to -1. The problem is an issue of state. Each time we click the button, we post back to the server, and the server creates a new instance of the pageBean and calls the increase or decrease method. The problem is that each time we create the bean, the value starts at zero and so is only ever increased to 1 or decreased to -1.
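You can see why with a few lines of plain Java, no container needed. The sketch below is only a simulation of what @RequestScoped implies (the PageBean and RequestScopeDemo classes here are trimmed stand-ins for illustration, not the real CDI wiring): each postback constructs a fresh bean, so the action always starts from zero.

```java
// Trimmed stand-in for the request-scoped backing bean.
class PageBean {
    private int value = 0;          // re-initialized on every instantiation

    public void increase() { value++; }
    public int getValue() { return value; }
}

public class RequestScopeDemo {
    public static void main(String[] args) {
        // Simulate three postbacks: CDI builds a new bean for each request.
        for (int request = 1; request <= 3; request++) {
            PageBean pageBean = new PageBean();  // created at request start
            pageBean.increase();                 // the button's action method
            System.out.println("request " + request
                    + " -> value = " + pageBean.getValue());
        }                                        // bean destroyed at request end
    }
}
```

No matter how many times the simulated request repeats, the printed value is always 1, which is exactly the behavior seen in the browser.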
Now you may be wondering: since you are displaying the value on the page, isn't that enough to post it back to the server? After all, we displayed the message in an input box and it got passed back to the server. The difference is that an input box is what JSF calls a value holder, which means it actually holds the value it is bound to on the client and posts it back to the server when the form is posted back. We can see this demonstrated by changing the value text display component to an input text box component:

    <h:panelGroup>
      <h:commandButton value="+" action="#{pageBean.increase}"/>
      <h:inputText value="#{pageBean.value}"/>
      <h:commandButton value="-" action="#{pageBean.decrease}"/>
    </h:panelGroup>

Now when you click the +/- buttons, the value changes beyond -1 and 1. This is because when the page is rendered, the value is stored on the client. When the button is clicked and the form is posted back, the client-side value is sent back to the server-side attribute, and then the increase/decrease method is called with that value already set. This is fairly common with web forms; it is one way of handling state, by keeping it in client-side forms, even as hidden field values. We can demonstrate this further by manually entering a value into the text input and then clicking a button. Enter 1000 in the input text box and click the increase button. The value should now be 1001. When we manually enter a value, we are changing the value held on the client in the value holder represented by the input text box. When we click the increase button, we send that value back to the server. On postback, the server creates an instance of our PageBean class, sets the value attribute to the value in our text box (1000), and then calls the increase method, which increases the value to 1001. Once the method has finished, JSF must then render a response, which includes taking the value from the pageBean.value attribute and putting it in our text box. That is how the text box shows 1001 after clicking the button.
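The round trip just described can be simulated the same way. This is only a sketch of the lifecycle order (fresh bean, posted form value applied, then the action invoked); the PostbackDemo class is illustrative, not real JSF plumbing:

```java
// Trimmed stand-in for the backing bean with a settable value.
class PageBean {
    private int value = 0;

    public void setValue(int value) { this.value = value; }
    public int getValue() { return value; }
    public void increase() { value++; }
}

public class PostbackDemo {
    public static void main(String[] args) {
        int textBoxValue = 1000;        // what the client-side value holder has

        // Postback: a new request-scoped bean is created...
        PageBean pageBean = new PageBean();
        // ...JSF applies the posted form value to the bound attribute...
        pageBean.setValue(textBoxValue);
        // ...and only then invokes the button's action method.
        pageBean.increase();

        // Render response: the bean's value is written back into the text box.
        textBoxValue = pageBean.getValue();
        System.out.println("text box now shows " + textBoxValue);
    }
}
```

Because the posted value is applied before the action runs, the counter keeps its state even though the bean itself is rebuilt on every request.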
At this point, once our request is complete, the page bean is destroyed, as it is only request scoped.

Validating the value

If we only want the value to be between 0 and 10, we can add validation annotations onto the value field so it is checked for correctness when we post back the values. In our PageBean class, we'll add the annotations as follows (@Min and @Max come from the javax.validation.constraints package):

    @Min(value=0)
    @Max(value=10)
    private int value = 0;

We also need some way to display error messages if the user enters an invalid value, so we'll add an h:message tag. The message tag is used to associate a JSF message with a component and display it. We give the input text box an id and add the message for that component:

    <h:panelGroup>
      <h:commandButton value="+" action="#{pageBean.increase}"/>
      <h:inputText id="value" value="#{pageBean.value}"/>
      <h:commandButton value="-" action="#{pageBean.decrease}"/>
      <h:message for="value"/>
    </h:panelGroup>

Now refresh the page, enter 1000 into the value text input, and click the increase button. You should see an error message next to the text editor because the value is above 10. Try entering the value of -1000 and clicking a button.
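Under a Bean Validation provider those annotations are enforced for you during the process-validations phase, and when validation fails JSF skips the action method entirely, which is why the counter never moves for an out-of-range value. The equivalent check can be sketched by hand in plain Java (the RangeCheckDemo class and its messages are illustrative; this is only the logic the constraints express, not the real validator):

```java
public class RangeCheckDemo {
    // Mirrors @Min(value=0) and @Max(value=10) on the value field.
    static final int MIN = 0;
    static final int MAX = 10;

    // Returns an error message (as h:message would display one),
    // or null if the value is valid.
    static String validate(int value) {
        if (value < MIN) return "value must be >= " + MIN;
        if (value > MAX) return "value must be <= " + MAX;
        return null;                 // valid: the action method may run
    }

    public static void main(String[] args) {
        System.out.println(validate(5));       // within range
        System.out.println(validate(1000));    // rejected: above max
        System.out.println(validate(-1000));   // rejected: below min
    }
}
```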
https://dzone.com/articles/jsf-basics
On Web Development
My daily take on web development: PHP, MySQL, JavaScript, AJAX, xHTML, HTML5, CSS and maybe some Flash ;). By Gabe LG.

Wikipedia blackout: back online via a hosts file workaround

Wikipedia is on blackout for 24 hours. The blackout code is delivered via JavaScript from the domain meta.wikimedia.org.

To get Wikipedia back online during the blackout, append "127.0.0.1 meta.wikimedia.org" to your hosts file. The line:

    127.0.0.1 meta.wikimedia.org

will make the domain meta.wikimedia.org resolve to your loopback interface for IPv4 (your local machine). It will thus NOT render the HTTP response delivering the JavaScript code that blacks out the Wikipedia page, so you can use Wikipedia as normal. If you're not familiar with what a hosts file is, look up a short overview, or look up how to edit your hosts file. Those instructions are for Windows; if you're on Linux it's a lot simpler, like most things Linux:

    echo "127.0.0.1 meta.wikimedia.org" >> /etc/hosts

You need to be root to do this, so use "sudo", "su root", "sudo su", etc.

Other workarounds to the Wikipedia blackout

If you have FoxyProxy, you can also use that to block meta.wikimedia.org specifically, by specifying that it proxy to some blackhole. If you use a proxy configuration script, that would also work: modify your script to proxy meta.wikimedia.org to your favorite blackhole. With a proxy script you can match just the JavaScript file URL, so it is more specific. There are a number of other ways to block meta.wikimedia.org, and they should all deliver the same results.
If you're interested in the JS file delivering the blackout code, then with Firebug or Chrome inspect the source and go to the "Network" tab. You should see two HTTP requests for meta.wikimedia.org; the second one delivers the JavaScript file. You can also use Wireshark to inspect the network traffic and create a Wireshark filter for meta.wikimedia.org.

httpd.conf, the Apache configuration file

Are you tired of searching for the Apache configuration file, httpd.conf? To find the location of httpd.conf, run the following shell command:

    httpd -V

Note the -V option is capitalized. This prints Apache's compile settings, including SERVER_CONFIG_FILE, the path to httpd.conf.

SOCKS proxy using an SSH tunnel

If you're familiar with using a SOCKS proxy while browsing the internet, you may have found that Chrome will not work with a SOCKS5 proxy such as the one created when you open an SSH tunnel. This is because Chrome assumes the SOCKS proxy is a SOCKS4 proxy when it is SOCKS5. A workaround is to have Chrome read the proxy configuration from a script. The configuration script is a JavaScript file with a function called FindProxyForURL() that will be called for every HTTP URL to proxy.
Here is an example:

    /**
     * .pac files are for automated proxy configuration
     * This is a fix for chrome to use SOCKS5 as it assumes SOCKS4
     */
    function FindProxyForURL(url, host)
    {
        // no proxy for localhost
        if (host.match('localhost')) return "DIRECT";
        // proxy for other hosts
        return "SOCKS5 localhost:8111";
    }

Make sure your port number is the port that the SOCKS proxy is bound to. Save this script as ssh-tunnel.pac or similar, and in Chrome go to: Options -> Under the Hood -> Network -> Change Proxy Settings. Chrome uses the same network settings as IE, so you will see the system window open; choose: LAN settings -> Use Automatic Configuration Script. Enter the path to the ssh-tunnel.pac file you created, then reload the webpage in Chrome.

Creating an SSH tunnel: if you're not familiar with creating an SSH SOCKS5 proxy, look up "Windows SSH as Proxy" or "Linux SSH as Proxy".

PHP command line telnet client

A while ago I wrote a single-line PHP command line client:

    while (1) { fputs(STDOUT, "\n\-PHP$ "); eval(trim(fgets(STDIN))); }

The telnet client isn't one line like the command line client above. Also, if you are on Windows with a version of PHP lower than 5.3, you will not have the getopt() function and will need a substitute for it. Save the PHP code to a file; I call it telnet.php. Then open the shell, navigate to the directory containing telnet.php, and type:

    php telnet.php -h [hostname] -p [port]

For example:

    php telnet.php -h google.com -p 80

This will open a connection to google.com on the HTTP port.
Then you can type in your HTTP headers:

    GET / HTTP/1.1
    HOST: google.com

Then press Enter twice, because HTTP requires that you send a blank line to terminate the HTTP headers. Google.com should respond with the headers and HTML of the Google website. You can telnet into any listening TCP port, so for instance you can test XMPP servers:

    php telnet.php -h talk.google.com -p 5222

Then send your XMPP stanzas. Or even telnet into an email server and send or retrieve emails.

Notes: the client works best with sockets support compiled into PHP. The current implementation should work without that support, but does use up a lot of CPU with the while(1) loop.

Sending email from PHP on Windows

I'm assuming you've installed XAMPP or WAMP on your local Windows machine; if not, please visit the XAMPP website. WAMP and XAMPP install the full Windows equivalent of the LAMP (Linux Apache MySQL PHP/Perl/Python) stack on your Windows machine, and offer a GUI to manage it. Here I'll talk about sending email from Windows, specifically on XAMPP, but you can follow this for WAMP or your own PHP setup. I'm also assuming you're trying to send email through the PHP function mail(). Unlike Linux, Windows is not distributed with a default mail transfer agent (MTA); Linux flavors usually come with sendmail, which acts as a local MTA. In order to have PHP send email, your Windows machine should be able to send email, and PHP must be configured to send email through it. So first you need to get your machine to send email. You can either install an MTA locally (not recommended, since sending email is complicated by the spam detection mechanisms built into the protocol), or use a free MTA such as hotmail.com or gmail.com. You need an account.
<br /> <br /> To use a free MTA, you need to specify the MTA host, port, user and password. Luckily there are also sendmail like programs for windows and one is included with Xampp that makes this easy. <br /> <br /> You will just need to configure PHP to use the sendmail binary provided by Xampp, to send emails. To do this edit the PHP configuration file, php.ini. You can find this in xampp by opening the control panel, clicking "explore" and going to the folder "php". The full path should be something like:<br /> <br /> c:/xampp/php/php.ini <br /> <br /> Look for: <br /> ;Gabe LG XMLHttpRequest Wrapper<p> Here is my version of the <a href="">XMLHttpRequest Wrapper</a>, Open Source, and <a href="">available </a>on Google Code. Enjoy. </p> <p> Here is a simple example of a http request with the lib: <pre name="code" class="js"> new fiji.xhr('get', 'echo.php?param=value', function(xhr) { if (this.readyState == 4 && this.status == 200) { alert(this.responseText); } }).send(); </pre> </p> <p> The readystate callback handler executes in the scope of the XHR Instance. So you can use the <code>this</code> keyword to reference the XHR object instance. This behavior is the same as the specifications for the callback scope in the <a href="">W3C XMLHttpRequest Specs</a>. You can also receive a reference to the XHR library instance which is passed as the first parameter to the callback. In the case above it would be <code>xhr</code>. This allows you to attach further references or objects to the XHR library instance that would be persisted for the duration of the XHR call. For example, a request ID. </p><img src="" height="1" width="1"/>Gabe LG Strings in SQLite with PHP<p> Unlike MySQL, SQLite follows the quoting standards in SQL strictly and does not understand the backslash <code>\</code> as an escape character. SQLite only understands escaping a single quote with another single quote. 
</p> <p> For example, if you receive the input data <code>'cheeky ' string'</code> and use the PHP function addslahes() to escape literal characters in the string then you will get <code>'cheeky \' string'</code> which according to SQLite is not escaped properly. You need to escape the string so that it looks like <code>'cheeky '' string'</code>. </p> <p> If you have <code>magic_quotes</code> turned on then you are in even more trouble. This PHP setting escapes all HTTP variables received by PHP with an equivalent of <code>addslshes()</code>. So the correct way to escape strings in SQLite would be: <pre name="code" class="php"> function sqlite_quote_string($str) { if (get_magic_quotes_gpc()) { $str = stripslashes($str); } return sqlite_escape_string($str); } </pre> This will remove the escape characters added by the <code>magic_quotes</code> setting, and escape strings with SQLites <code>sqlite_escape_string()</code> function which correctly escapes the string with <code>'</code>. </p><img src="" height="1" width="1"/>Gabe LG a custom SQLite Function in PHP<p> SQLite is available in PHP5 either by compiling PHP5 with SQLite support or enabling the SQLite extension dynamically from the PHP configuration (PHP.ini). A distinct feature of SQLite is that it is an embedded database, and thus offers some features a Server/Hosted database such as the popular MySQL database doesn't. </p> <h3>Creating Custom Functions in SQLite</h3> <p> One of the really cool features of SQLite in PHP is that you can create custom PHP functions, that will be called by SQLite in your queries. Thus you can extend the SQLite functions using PHP. 
</p> <h3>A custom Regexp function for SQLite in PHP</h3> <p> <pre name="code" class="php"> // create a regex match function for sqlite sqlite_create_function($db, 'REGEX_MATCH', 'sqlite_regex_match', 2); function sqlite_regex_match($str, $regex) { if (preg_match($regex, $str, $matches)) { return $matches[0]; } return false; } </pre> The above PHP code will create a custom function called <code>REGEX_MATCH</code> for the SQLite connection referenced by <code>$db</code>. The <code>REGEX_MATCH</code> SQLite function is implemented by the <code>sqlite_regex_match</code> user function we define in PHP. </p> <p> Here is an example query that makes use of the custom function we created. Notice that in the SQLite query, we call our custom function <code>REGEX_MATCH</code>: <pre name="code" class="php"> $query = 'SELECT REGEX_MATCH(link, \'|http://[^/]+/|i\') AS domain, link, COUNT(link) AS total' .' FROM links WHERE domain != 0' .' GROUP BY domain' .' LIMIT 10'; $result = sqlite_query($db, $query); </pre> This will make SQLite call the PHP function <code>sqlite_regex_match</code> for each database table row that is goes over when performing the select query, sending it the link field value as the first parameter, and the regular expression string as the second parameter. PHP will then process the function and return its results to SQLite, which continues to the next table row. </p> <p></p> <h3>Custom Functions in SQLite compared to MySQL</h3> <p> In comparison with MySQL, you cannot create a custom function in PHP that mysql will use. MySQL allows creation of custom functions, but they have to be written in MySQL. Thus you cannot extend MySQL's query functionality with PHP. </p> <p> I believe the reason for this is simply because having a callback function called on the client, by the database, over a Client-Server model for each row that has to be processed would be just inefficient. 
Imaging processing 100,000 rows in a MySQL database and having MySQL make a callback to PHP over a TCP connection, the overhead of sending the data back and forth for the callback would be way too much. <br /> With an embedded database like SQLite, this isn't the case since making the actual communication between the language and the embedded database does not pose such a high overhead. </p><img src="" height="1" width="1"/>Gabe LG Email Address validation through SMTP<p> Here is a PHP class written for PHP4 and PHP5 that will validate email addresses by querying the SMTP (Simple Mail Transfer Protocol) server. This is meant to complement <a href="">validation of the syntax of the email address</a>, which should be used before validating the email via SMTP, which is more resource and time consuming. </p> <p> <blockquote style="border: 1px dotted #c0c0c0;"> <h3>Update: Sept 8, 2008</h3> The class has been updated to work with Windows MTA's such as Hotmail and many other fixes have been made. <a href="">See changes</a>. The class will no longer get you blacklisted by Hotmail due to improper HELO procedure. <h3>Update: Sept 10, 2008</h3> Window Support Added through Net_DNS (pear DNS class). Added support for validating multiple emails on the same domain through a single Socket. Improved the Email Parsing to support literal @ signs. <h3>Update: Sept 29, 2008</h3> The code for this project has been moved to <a href="">Google Code</a>. The <a href="">latest source</a> can be grabbed from SVN. <h3>Update: Nov 22, 2008</h3> SMTP Email Validation Class has been added to the Yii PHP Framework. <a href="" target="_blank"></a>. Yii is a high-performance component-based PHP framework for developing large-scale Web applications. </blockquote> </p> <p> <pre name="code" class="php"> <?php /** * Validate Email Addresses Via SMTP * This queries the SMTP server to see if the email address is accepted. 
* @copyright - Please keep this comment intact * @author gabe@fijiwebdesign.com * @contributers adnan@barakatdesigns.net * @version 0.1a */ class SMTP_validateEmail { /** * PHP Socket resource to remote MTA * @var resource $sock */ var $sock; /** * Current User being validated */ var $user; /** * Current domain where user is being validated */ var $domain; /** * List of domains to validate users on */ var $domains; /** * SMTP Port */ var $port = 25; /** * Maximum Connection Time to an MTA */ var $max_conn_time = 30; /** * Maximum time to read from socket */ var $max_read_time = 5; /** * username of sender */ var $from_user = 'user'; /** * Host Name of sender */ var $from_domain = 'localhost'; /** * Nameservers to use when make DNS query for MX entries * @var Array $nameservers */ var $nameservers = array( '192.168.0.1' ); var $debug = false; /** * Initializes the Class * @return SMTP_validateEmail Instance * @param $email Array[optional] List of Emails to Validate * @param $sender String[optional] Email of validator */ function SMTP_validateEmail($emails = false, $sender = false) { if ($emails) { $this->setEmails($emails); } if ($sender) { $this->setSenderEmail($sender); } } function _parseEmail($email) { $parts = explode('@', $email); $domain = array_pop($parts); $user= implode('@', $parts); return array($user, $domain); } /** * Set the Emails to validate * @param $emails Array List of Emails */ function setEmails($emails) { foreach($emails as $email) { list($user, $domain) = $this->_parseEmail($email); if (!isset($this->domains[$domain])) { $this->domains[$domain] = array(); } $this->domains[$domain][] = $user; } } /** * Set the Email of the sender/validator * @param $email String */ function setSenderEmail($email) { $parts = $this->_parseEmail($email); $this->from_user = $parts[0]; $this->from_domain = $parts[1]; } /** * Validate Email Addresses * @param String $emails Emails to validate (recipient emails) * @param String $sender Sender's Email * @return Array 
Associative List of Emails and their validation results */ function validate($emails = false, $sender = false) { $results = array(); if ($emails) { $this->setEmails($emails); } if ($sender) { $this->setSenderEmail($sender); } // query the MTAs on each Domain foreach($this->domains as $domain=>$users) { $mxs = array(); // retrieve SMTP Server via MX query on domain list($hosts, $mxweights) = $this->queryMX($domain); // retrieve MX priorities for($n=0; $n < count($hosts); $n++){ $mxs[$hosts[$n]] = $mxweights[$n]; } asort($mxs); // last fallback is the original domain array_push($mxs, $this->domain); $this->debug(print_r($mxs, 1)); $timeout = $this->max_conn_time/count($hosts); // try each host while(list($host) = each($mxs)) { // connect to SMTP server $this->debug("try $host:$this->port\n"); if ($this->sock = fsockopen($host, $this->port, $errno, $errstr, (float) $timeout)) { stream_set_timeout($this->sock, $this->max_read_time); break; } } // did we get a TCP socket if ($this->sock) { $reply = fread($this->sock, 2082); $this->debug("<<<\n$reply"); preg_match('/^([0-9]{3}) /ims', $reply, $matches); $code = isset($matches[1]) ? $matches[1] : ''; if($code != '220') { // MTA gave an error... foreach($users as $user) { $results[$user.'@'.$domain] = false; } continue; } // say helo $this->send("HELO ".$this->from_domain); // tell of sender $this->send("MAIL FROM: <".$this->from_user.'@'.$this->from_domain.">"); // ask for each recepient on this domain foreach($users as $user) { // ask of recepient $reply = $this->send("RCPT TO: <".$user.'@'.$domain.">"); // get code and msg from response preg_match('/^([0-9]{3}) /ims', $reply, $matches); $code = isset($matches[1]) ? 
$matches[1] : ''; if ($code == '250') { // you received 250 so the email address was accepted $results[$user.'@'.$domain] = true; } elseif ($code == '451' || $code == '452') { // you received 451 so the email address was greylisted (or some temporary error occured on the MTA) - so assume is ok $results[$user.'@'.$domain] = true; } else { $results[$user.'@'.$domain] = false; } } // quit $this->send("quit"); // close socket fclose($this->sock); } } return $results; } function send($msg) { fwrite($this->sock, $msg."\r\n"); $reply = fread($this->sock, 2082); $this->debug(">>>\n$msg\n"); $this->debug("<<<\n$reply"); return $reply; } /** * Query DNS server for MX entries * @return */ function queryMX($domain) { $hosts = array(); $mxweights = array(); if (function_exists('getmxrr')) { getmxrr($domain, $hosts, $mxweights); } else { // windows, we need Net_DNS require_once 'Net/DNS.php'; $resolver = new Net_DNS_Resolver(); $resolver->debug = $this->debug; // nameservers to query $resolver->nameservers = $this->nameservers; $resp = $resolver->query($domain, 'MX'); if ($resp) { foreach($resp->answer as $answer) { $hosts[] = $answer->exchange; $mxweights[] = $answer->preference; } } } return array($hosts, $mxweights); } /** * Simple function to replicate PHP 5 behaviour. */ function microtime_float() { list($usec, $sec) = explode(" ", microtime()); return ((float)$usec + (float)$sec); } function debug($str) { if ($this->debug) { echo htmlentities($str); } } } ?> </pre> </p> <h3>Using the PHP SMTP Email Address Validation Class</h3> <p> Example Usage: <pre name="code" class="php"> // the email to validate $email = 'joe@gmail.com'; // an optional sender $sender = 'user@example.com'; // instantiate the class $SMTP_Valid = new SMTP_validateEmail(); // do the validation $result = $SMTP_Valid->validate($email, $sender); // view results var_dump($result); echo $email.' is '.($result ? 'valid' : 'invalid')."\n"; // send email? 
if ($result) { //mail(...); } </pre> </p> <h3>Code Status</h3> <p> This is a very basic, and alpha version of this php class. I just wrote it to demonstrate an example. There are a few limitations. One, it is not optimized. Each email you verify will create a new MX DNS query and a new TCP connection to the SMTP server. The DNS query and TCP socket is not cached for the next query at all, even if they are to the same host or the same SMTP server.<br /> Second, this will only work on Linux. Windwos does not have the DNS function needed. You could replace the DNS queries with the <a href="">Pear Net_DNS Library</a> if you need it on Windows.<br /> </p> <h3>Limitations of verifying via SMTP</h3> <p> Not all SMTP servers are configured to let you know that an email address does not exist on the server. If the SMTP server does respond with an "OK", it does not mean that the email address exists. It just means that the SMTP server will accept the email address and not bounce it. What it does with the actual email is different. It may deliver it to the recipient, or it may just send it to a <a href="">blackhole</a>.<br /> If you get an invalid response from the SMTP server however, you can be pretty sure your email will bounce if you actually send it. <br /> You should also NOT use this class to try and guess emails, for spamming purposes. You will quickly get blacklisted on Spamhaus or a similar list. </p> <h3>Good uses of verifying via SMTP</h3> <p> If you have forms such as registration forms, where users enter their email addresses. It may be a good idea to first check the syntax of the email address, to see if it is valid as per the <a href="">SMTP protocol specifications</a>. Then if it is valid, you may want to verify that the email will be accepted (will not bounce). This can allow you to notify the user of a problem with their email address, in case they made a typo, knowingly entered an invalid email. This could increase the number of successful registrations. 
</p> <h3>How it works</h3> <p> If you're interested in how it works, it is quite simple. The class will first take an email, and separate it to the user and host portions. The host portion, tells us which domain to send the email to. However, a domain may have an SMTP server on a different domain so we retrieve a list of SMTP servers that are available for the domain by doing a DNS query of type MX on that domain. We receive a list of SMTP servers, so we iterate through each trying to make a connection. Once connected, we send <a href="">SMTP commands</a> to the SMTP server, first saying "HELO", then setting our sender, then our recipient. If the recipient is rejected, we know an actual sending of an email will fail. Thus, we close the TCP connection to the SMTP server and quit. </p><img src="" height="1" width="1"/>Gabe LG (Cross Site Scripting) and stealing passwords<p> <a href="">XSS</a> . </p> <p>. </p> <h3>Hello World in XSS</h3> <p> You have a page that has an XSS vulnerability. Let say a website has a PHP page, <code>mypage.php</code> with the code: <pre name="code" class="php"> <?php // the variable is returned raw to the browser echo $_GET['name']; ?> </pre> Because the variable <code>$_GET['name']</code> is not encoded into HTML entities, or stripped of HTML, it has an XSS vulnerability. Now all an attacker has to do is create a URL that a victim will click, that exploits the vulnerability. <pre> mypage.php?name=%3Cscript%3Ealert(document.cookie);%3C/script%3E </pre> This basically will make PHP write <code><script>alert(document.cookie);</script></code> onto the page, which displays a modal dialog with the value of the saved cookies for that domain. </p> <h3>How Does stealing passwords with XSS work?</h3> <p> The example above displays the cookies on the domain the webpage is on. Now imagine the same page has a login form, and the user chose to have their passwords remembered by the browser. 
Let's say the PHP page looks like this: <pre name="code" class="php"> <?php // the variable is returned raw to the browser echo $_GET['name']; ?> <form action="login.php"> <input type="text" name="username" /> <input type="password" name="password" /> <input type="submit" value="Login" /> </form> </pre> Now an attacker just needs to craft a URL that retrieves the username and password. Here is an example that retrieves the password: <pre> mypage.php?name=%3Cscript%3Ewindow.onload=function(){alert(document.forms[0].password.value);}%3C/script%3E </pre> </p> <p> As you can see, it is just a normal XSS exploit, except it is applied to the username and password populated by the browser after the <code>window.onload</code> event. </p> <h3>Password stealing XSS vs Session Cookie stealing XSS</h3> <p> Well, they both suck from a developer's perspective. According to Wikipedia, 70% or so of websites are vulnerable to XSS attacks. </p> <p>. </p> <p> Due to its ability to be executed without having the user logged into a website, this exploit should be regarded as worse than session-based XSS. </p> <h3>Proof of Concept</h3> <p> Fill in the form below with dummy values and click the "Login" button. <fieldset> <legend>Login Form</legend> <form name="login_form" action=""> Username: <input name="username"><br /> Password: <input type="password" name="password" /><br /> <input type="submit" value="Login" /> </form> </fieldset> </p> <p> Now <a href="">return to the same page</a>, to simulate logging out. Now click the <a href="javascript:alert('password:'+document.forms['login_form'].password.value);">Exploit</a>. This will simulate an XSS exploit on this page, and alert the saved password. </p> <p> I've set up a proof of concept based on an actual XSS exploit here: <a href=""></a>. </p> <h3>Preventing Stealing Passwords via XSS</h3> <p> The only way I can think of right now is to give your username and password fields unique names so that the browser does not remember their values.
In PHP you can do this with the time() function. eg: <pre name="code" class="php"> <input type="password" name="pass[<?php echo sha1(time().rand().'secret'); ?>]" /> </pre> The unique name prevents the browser from remembering the password field. This should work universally in all browsers. </p><img src="" height="1" width="1"/>Gabe LG Modal Dialog "JRE Required"<p> Was trying to edit a chart in <a href="">OpenOffice</a> Writer on <a href="">Ubuntu</a> today when it popped up a modal dialog box saying that it needed Java Runtime Environment, JRE, to complete the task (not in those words). <img src="" border="0" /> </p> <p>: <code>pkill soffice.bin</code>. First annoyance I've had with OpenOffice, which apart from this 'lil quirk is the best Document Publisher/Editor - even better than the Microsoft Office Suite. </p> <p> It appears this bug is already addressed <a href=""></a>. </p><img src="" height="1" width="1"/>Gabe LG Trends<p> <a href="">BlogPulse</a> offers a <a href="">trend search</a> very similar to <a href="">Google Trends</a> but specifically targeted at blogs. </p> <p> Here is the blog trend for the words Joomla, Drupal and Wordpress over the last 6 months. <a href=""><img src="" border="0" /></a> <br /> This shows the percentage of the mentions of each word, Joomla, Drupal and Wordpress, in blogs. </p><img src="" height="1" width="1"/>Gabe LG - Google's 3D Virtual World<p> Just came across <a href="">Lively.com</a> which, developed by <a href="">Google</a>, creates a virtual 3D world similar to <a href="">SecondLife</a>. </p> <p> I haven't tested out Lively yet, since I'm running Ubuntu and it only supports Windows at the moment. In <a href="">one of their blog posts</a> however we can gather that Lively is well integrated with the Web. They have widgets that allow visitors - with Lively software installed - to jump into a Lively room embedded in a webpage.
</p> <p> By contrast I believe <a href="">SecondLife offers APIs</a> that provide a REST interface as well as other network level interfacing. Here is a <a href="">SecondLife Facebook</a> Application. </p> <p> I'm wondering if Google will offer the same level of integration with their Lively 3D world. Would make fun <a href="">Mashups</a>. </p><img src="" height="1" width="1"/>Gabe LG Chaining Functions in JavaScript<p> A lil snippet I borrowed from the Google AJAX libraries API: <pre name="code" class="js"> chain = function(args) { return function() { for(var i = 0; i < args.length; i++) { args[i](); } } }; </pre> You're probably familiar with the <a href="">multiple window onload method</a> written by Simon Willison which creates a closure of functions within functions in order to chain them.<br /> Here is the same functionality using the <code>chain</code> function defined above. <pre name="code" class="js"> function addLoad(fn) { window.onload = typeof(window.onload) != 'function' ? fn : chain([window.onload, fn]); }; </pre> Chain can also be namespaced to the <code>Function.prototype</code> if that's how you like your JS. <pre name="code" class="js"> Function.prototype.chain = function(args) { args.push(this); return function() { for(var i = 0; i < args.length; i++) { args[i](); } } }; </pre> So the multiple window onload function would be: <pre name="code" class="js"> window.addLoad = function(fn) { window.onload = typeof(window.onload) != 'function' ? fn : window.onload.chain([fn]); }; </pre> </p><img src="" height="1" width="1"/>Gabe LG Google Loader and the AJAX Libraries API<p> Google recently released the <a href="">AJAX Libraries API</a> which allows you to load the popular JavaScript libraries from Google Servers. The benefits of this are outlined in the description of the API.
</p> <blockquote style="border:1px solid gray;background-color:#c0c0c0;"> The AJAX Libraries API is a content distribution network and loading architecture for the most popular open source JavaScript libraries. By using the google.load() method, your application has high speed, globally available access to a growing list of the most popular JavaScript open source libraries. </blockquote> <p> I was thinking of using it for a current project that would use JS heavily, however, since the project used a CMS (Joomla) the main concern for me was really how many times MooTools would be loaded. Joomla uses a PHP based plugin system (which registers observers of events triggered during Joomla code execution) and the loading of JavaScript by multiple plugins can be redundant as there is no central way of knowing which JavaScript library has already been loaded, nor is there a central repository for JavaScript libraries within Joomla. </p> <p> MooTools is the preferred library for Joomla and in some cases it is loaded 2 or even 3 times redundantly. I did not want our extension to add to that mess. To solve the problem I would test for the existence of MooTools, <code>if (typeof(MooTools) == 'undefined')</code> and load it from Google only if it wasn't available. Now this would have worked well, however, I would have to add the JavaScript for <a href="">AJAX Libraries API</a> and it would only be loading 1 script, "MooTools", when I also had about 3-4 other custom libraries that I wanted loaded. </p> <p> Now I thought, why don't I develop a JavaScript loader just like the Google AJAX Libraries API Loader. Should be just a simple function to append a Script element to the document head. 
So I started with: </p> <p> <pre name="code" class="js"> function loadJS(src) { var script = document.createElement('script'); script.src = src; script.type = 'text/javascript'; var timer = setInterval(closure(this, function(script) { if (document.getElementsByTagName('head')[0]) { clearInterval(timer); document.getElementsByTagName('head')[0].appendChild(script); } }, [script]), 50); } function closure(obj, fn, params) { return function() { fn.apply(obj, params); }; } </pre> The function loadJS would try to attach a <code>script</code> element to the document head, every 50 milliseconds until it succeeded. </p> <p> This works but there is no way of knowing when the JavaScript file was fully loaded. Normally, the way to figure out if a JS file has finished loading from the remote server is to have the JS file invoke a callback function on the Client JavaScript (aka: JavaScript Remoting). This however means you have to build a callback function into each JavaScript file, which is not what I wanted. </p> <p> So to fix this problem I thought I'd add another Interval with <code>setInterval()</code> to detect when the remote JS file had finished loading by testing a condition that exists when the file has completed. eg: for MooTools it would mean that the Object <code>window.MooTools</code> existed. </p> <p> So I went about writing a JavaScript library for this, with a somewhat elaborate API, with JS libraries registering their "load condition test" and allowing their remote loading, about 1 wasted hour, (well not wasted if you learn something) only to realize that this wouldn't work for the purpose either. The reason is that it broke the <code>window.onload</code> functionality. Some remote files would load before the window.onload event (cached ones) and others after. This made the JavaScript already written to rely on window.onload fail. </p> <p> Last resort: how did Google do it?
I had noted earlier that if you load a JavaScript file with Google's API the file would always load before the <code>window.onload</code> method fired. Here is the simple test: (In the debug output, the google callback always fired first). <pre name="code" class="js"> google.load("prototype", "1"); google.load("jquery", "1"); google.load("mootools", "1"); google.setOnLoadCallback(function() { addLoad(function() { debug('google.setOnLoadCallback - window.onload'); }); debug('google.setOnLoadCallback') }); addLoad(function() { debug('window.onload'); }); debug('end scripts'); </pre> I had to take a look at the source code for Google's AJAX Libraries API which is: <a href=""></a> to see how they achieved this. </p> <p> It never occurred to me that you could force the browser to load your JavaScript before the <code>window.onload</code> event so I was a bit baffled. Browsing through their source code I came upon what I was looking for: <pre name="code" class="js"> function q(a,b,c){if(c){var d;if(a=="script"){d=document.createElement("script");d.<\/script>')}else if(a=="css"){document.write('<link href="'+b+'" type="text/css" rel="stylesheet"></link>' )}}} </pre> The code has been minified, so it's a bit hard to read. Basically it's the same as any javascript remoting code you'd find on the net, but the part that jumps out is: <pre name="code" class="js"> var e=document.getElementsByTagName("head")[0]; if(!e){e=document.body.parentNode.appendChild(document.createElement("head"))} e.appendChild(d) </pre> Notice how it will create a <code>head</code> Node and append it to the parentNode of the document body if the document <code>head</code> does not exist yet. </p> <p> Now that forces the browser to load the JavaScript right then, no matter what. Now following that method you can load remote JavaScript files dynamically and just use the regular old <code>window.onload</code> event or "domready" event and the files will be available.
</p> <p> Apparently this won't work on all browsers, since Google's code also has the alternative: <pre name="code" class="js"> document.write('<script src="'+b+'" type="text/javascript"><\/script>') </pre> With a bit of testing, you could discern which browsers worked with which and use that. I'd imagine that the latest browsers would accept the dom method and older ones would need the <code>document.write</code>. </p> <p> So my JavaScript file loading function became: <pre name="code" class="js"> function loadJS(src) { var script = document.createElement('script'); script.src = src; script.type = 'text/javascript'; var head = document.getElementsByTagName('head')[0]; if (!head) { head = document.body.parentNode.appendChild(document.createElement('head')); } head.appendChild(script); } </pre> </p> <p> Anyways, I finally got my JavaScript library loader working just as I liked, thanks to the good work done by Google with the AJAX Libraries API. </p><img src="" height="1" width="1"/>Gabe LG HTTP over SSH proxy with Linux<p> In a previous post I detailed how to secure your browser's HTTP communications by tunneling the <a href="">HTTP session over an SSH proxy using Putty</a>. </p> <p> <a href="">Putty</a> is what you would use if you use a Windows desktop. If you're on a Linux Desktop you do not need Putty since you should have OpenSSH with the distribution you use. </p> <p> Doing a <code>man ssh</code> on your Linux Desktop should give you the manual on how to use your SSH client: <pre> SSH(1) BSD General Commands Manual SSH(1) NAME ssh - OpenSSH SSH client (remote login program) SYNOPSIS ssh [-1246AaCfgKk local_tun[:remote_tun]] [user@]hostname [command] ... etc ... </pre> The option we need is <code>-D</code>, with the syntax <code>[bind_address:]port</code>, described in the manual as: <pre> -D [bind_address:]port Specifies a local "dynamic" application-level port forwarding. This works by allocating a socket to listen to port on the local side, optionally bound to the specified bind_address. The bind_address of "localhost" indicates that the listen‐ ing port be bound for local use only, while an empty address or ‘*’ indicates that the port should be available from all interfaces.
</pre> Basically it means that you can start an SSH session using the OpenSSH client with a command such as: <pre> ssh -D localhost:8000 user@example.com </pre> and it will create a SOCKS proxy on port <code>8000</code> that will tunnel your HTTP connection over SSH to the server at <code>example.com</code> under the username <code>user</code>. </p> <p>. </p> <h3>Configuring Firefox to use the Socks Proxy</h3> <p> <ul> <li>Tools -> Options -> Advanced -> Network</li> <li>Under Connection click on the Settings button</li> <li>Choose Manual Proxy configuration, and SOCKS v5</li> <li>Fill in localhost for the host, and 8000 (or the port number you used) for the port</li> <li>Click OK and reload the page</li> </ul> </p> <p>. <br /> Example: <pre> #!/bin/sh ssh -D localhost:8000 user@example.com </pre> That should start up the ssh connection and create the socks proxy when you log in. The other alternative is to create a launcher and use <code>ssh -D localhost:8000 user@example.com</code> as the command, allowing you to launch the proxy whenever you need. </p> <p> You can also set up an ssh key for authentication instead of having to log in. This is detailed in other posts: <a href=""></a> and <a href=""></a>. This allows you to use the proxy transparently in the background without having to start it and log in. </p> <p> For Firefox you can switch between proxy and direct connection using the <a href="">switchproxy extension</a>. </p> <p>. </p><img src="" height="1" width="1"/>Gabe LG Trends for websites<p> The <a href="">Google trends for websites</a>, which was released by Google <a href="">3 days ago</a>, is really something to check out if you're interested in comparing website metrics between different websites and across geographical locations. </p> <p>. 
</p> <h3>Joomla vs Wordpress on Google Trends for websites</h3> <p> <a href=""><img src="" border="0" /></a> </p> <h3>Joomla vs Wordpress on Alexa Website Analytics</h3> <p> <a href=""><img src="" border="0" /></a> </p> <h3>Joomla vs Wordpress on Compete Website Analytics</h3> <p> <a href=""><img src="" border="0" /></a> </p> <p> <a href="">Search Engine Land </a>and <a href="">Matt Cutts</a> also blogged about the new google trends for websites. </p><img src="" height="1" width="1"/>Gabe LG Application Development Platforms and Services<p> Everyone is offering Application development as a service! </p> <h3>Appjet</h3> <p> <blockquote> AppJet is the easiest way to create a web app. Just type some code into a box, and we'll host your app on our servers. </blockquote> <a href=""></a> </p> <h3>AppPad</h3> <p> <blockquote> AppPad provides a place to create web applications completely in HTML and Javascript. AppPad gives you: </blockquote> <a href=""></a> </p> <h3>Bungee Connect</h3> <p> <blockquote> Bungee Connect is the most comprehensive Platform-as-a-Service (PaaS) — significantly reducing the complexities, time and costs required to build and deliver web applications. </blockquote> <a href=""></a> </p> <h3>CogHead</h3> <p> <blockquote> Coghead is a 100% web-based system that allows knowledge workers to create their own custom business applications. There’s never any software to install or servers to maintain. Just think it, build it and share it! </blockquote> <a href=""></a> </p> <h3>Google App Engine</h3> <p> <blockquote> Google App Engine enables you to build web applications on the same scalable systems that power Google applications. </blockquote> <a href=""></a> </p> <p> I was going to add more but got bored after reaching G. Now I'm just waiting for that meta web application development environment that integrates all the above with a RESTful API. lol... 
</p><img src="" height="1" width="1"/>Gabe LG Cross Window Communication via Cookies<p> Using JavaScript we can communicate between browser windows given that we have a reference to each window. When a new window is created with the JavaScript method <code>window.open()</code>, it returns a reference to the new window. The child window also has a reference to the parent window that created it via the <code>window.opener</code> window object. These references allow the two windows to communicate with and manipulate each other. </p> <p>. </p> <p>. </p> <p> To demonstrate how communicating between windows with cookies would work, let's assume we want to open a window, and then close it a few seconds later. <pre name="code" class="js"> var win = window.open('child.html'); setTimeout(function() { win.close(); }, 5000); </pre> The code will open a child window, and close it after 5 seconds using the reference to the child window and the method <code>close()</code>. However if we didn't have a reference for some reason, we would not be able to invoke the close method. So let's see how it could be done with cookies: <pre name="code" class="js"> window.open('child.html'); setTimeout(function() { setCookie('child', 'close'); }, 5000); </pre>. <pre name="code" class="js"> // child.html setInterval(function() { getCookie('child') == 'close' ? window.close() : ''; }, 500); </pre> This would check the cookie every half second and close the window if the cookie read 'close'. </p> <p> Using this method we can send commands to any open window and have it execute them without having a reference to that window. </p><img src="" height="1" width="1"/>Gabe LG Blocking Advertisements with a Hosts file, Apache and PHP<p> The Hosts file is located at <code>/etc/hosts</code> on Linux and <code>%SystemRoot%\system32\drivers\etc\</code> on Windows XP and Vista. It maps host names to IP addresses and takes precedence over the DNS server.
So if you add an entry in your hosts file: <pre> 207.68.172.246 google.com </pre> Then every time you type google.com you will be taken to msn.com instead, since <code>207.68.172.246</code> is the IP address of msn.com. </p> <p> Knowing this, you can point any domain to an IP address of choice using the Hosts file. Therefore, we can use it to block any domains that host unwanted advertising or malware. </p> <h3>Modifying your Hosts File to block Advertisements and Malware</h3> <p> There are many sites offering host files which block advertisements and malware. I use the one on <a href=""></a>. <br /> Here is the txt version of the hosts file: <a href=""></a> </p> <p> Here is an example of what the entries look like (illustrative only - the full list contains a lot more, about 18,000 entries at this time): <pre> 127.0.0.1 ads.example.com 127.0.0.1 banners.example.net </pre> You will need to download the txt file and append the entries to your hosts file. </p> <p> Now once the hosts file is in effect, when you browse any website in firefox or IE or any other browser, 99% of the advertisements will not be displayed. </p> <h3>Setting up Apache to display a custom page or message for blocked Advertisements and Malware</h3> <p>. </p> <p>. </p> <p>: <pre> <VirtualHost 127.0.0.1> ServerAdmin webmaster@adblock DocumentRoot /var/www/adblock/ ErrorDocument 404 /404.html ErrorLog /etc/log/adblock/error.log TransferLog /etc/log/adblock/access.log </VirtualHost> </pre> <code>ErrorDocument 404 /404.html</code>. This can have the simple line, "ad or malware blocked" or something along those lines. </p> <p> Now localhost also resolves to 127.0.0.1 so you will need to make sure you have a virtual host for the host localhost. </p> <p> The other thing you could do instead of setting up a virtual host, and it may be simpler, is create a custom 404 document for your current setup. You can do this via a directive directly in httpd.conf like: <code>ErrorDocument 404 /404.php</code>.
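The 404 script's job is then to check the requested hostname against the blocked entries from the hosts file. A minimal sketch of that matching step in JavaScript (the article does it in PHP using $_SERVER['SERVER_NAME']; the hosts entries below are hypothetical samples):

```javascript
// Parse hosts-file text and collect the hostnames mapped to 127.0.0.1.
function parseBlockedHosts(hostsText) {
  var blocked = {};
  var lines = hostsText.split('\n');
  for (var i = 0; i < lines.length; i++) {
    var line = lines[i].replace(/#.*$/, '').trim(); // drop comments
    var parts = line.split(/\s+/);
    if (parts[0] === '127.0.0.1' && parts[1]) {
      blocked[parts[1]] = true;
    }
  }
  return blocked;
}

// True if the requested host is one of the blocked entries.
function isBlockedHost(blocked, host) {
  return blocked[host] === true;
}
```

With this, the 404 page can show the "ad or malware blocked" line only for requests to blocked hosts, and a normal not-found message otherwise.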
</p> <p> How you detect if the request is from one of the blocked hosts is by comparing the requested host with the list of hosts in your hosts file that are blocked. The host requested is in the <code>$_SERVER['SERVER_NAME']</code>: <pre name="code" class="php"> // } </pre> Now, when you visit those websites with pesky advertisements and popups, you get a neat little line saying "ad or malware blocked" in the place of those ads. </p><img src="" height="1" width="1"/>Gabe LG Date and Time in different Timezones with PHP, MySQL and JavaScript<p> How do you show the correct date and time (timestamp) to users in different time zones? In theory it should be simple: <ul> <li><a href="#save_timestamp">Save your date and time with the correct timezone</a></li> <li><a href="#retrieve_timezone">Retrieve the timezone of the user to whom you will display the date and time</a></li> <li><a href="#calc_tz_diff">Calculate the difference in hours between the saved date and time and the user's date and time</a></li> <li><a href="#add_tz_diff">Add the difference in hours to the saved date and time and display</a></li> </ul> </p> <h3><a name="save_timestamp">Save your date and time with the correct timezone</a></h3> <p> The date and time with timezone is a <a href="">timestamp</a>. Though implementations differ, timestamps basically contain the same information (date/time and timezone). The timezone can be explicitly recorded in the timestamp (eg: 2005-10-30 T 10:45 UTC) or implicitly taken from the context in which the timestamp was generated or recorded (eg: unix timestamp is dependent on timezone of server generating the timestamp). </p> <p> Something as simple as saving a timestamp in mysql with PHP can be not so simple due to the difference in the timestamp representation in the two languages.
</p> <p> The <a href="">PHP timestamp</a> is defined as the number of seconds since the Unix Epoch (January 1 1970 00:00:00 GMT) while the <a href="">mysql timestamp</a> is a representation of the present date and time in the format YYYY-MM-DD HH:MM:SS. </p> <p> If you save the timestamp as a mysql timestamp field, then the timestamp is saved as UTC, however, when you access the timestamp it is converted to the timezone set in the mysql server, so basically you don't get to see the stored UTC version of the timestamp. If you save it as a PHP timestamp in a varchar or unsigned int field then it would be subject to the PHP server's timezone. So in essence both the MySQL and PHP timestamps are dependent on the timezone of their respective servers. </p> <p> Whichever format you save it in, just remember that both the PHP and MySQL timestamps reference the timezone on the server they are saved on, PHP during generation of the timestamp, and mysql during retrieval. </p> <h3><a name="retrieve_timezone">Retrieve the timezone of the user to whom you will display the date and time</a></h3> <p> The easy way to do this is to ask the user what their timezone is. You see this used in many registration forms on websites as well as many open source forums, CMSs, blog software etc. However, not every user will even bother giving you the correct timezone. </p> <p> You can use JavaScript if it is available on the browser: <pre name="code" class="javascript"> var tz = (new Date().getTimezoneOffset()/60)*(-1); </pre> This depends on the user's computer clock, so if it is set wrong, then you will get a wrong result. Time is relative however, so if a user wants to be a few hours behind, let them be. </p> <p> You can use the user's IP address to assume their geographic location and thus their timezone. This can be done server side and thus is not dependent on the user's browser having JavaScript.
Using the IP to determine timezone is dependent on the accuracy of the IP geocoding service you use. Here is an example using the hostip.info API for geocoding and earthtools.org for lat/long conversion to timezone. <pre name="code" class="php"> <?php /** * Retrieves the Timezone by the IP address * @param String $IP (optional) IP address or remote client IP address is used */ function getTimeZoneByIP($IP = false) { // timezone $timezone = false; // users IP $IP = $IP ? $IP : $_SERVER['REMOTE_ADDR']; // retrieve geocoded data from API in plain text if ($geodata = file(''.$IP.'&position=true')) { // create an associative array from the data $geoarr = array(); foreach($geodata as $line) { list($name, $value) = explode(': ', $line); $geoarr[$name] = $value; } // retrieve lat and lon values $lat = trim($geoarr['Latitude']); $lon = trim($geoarr['Longitude']); if (strlen($lat) > 0 && strlen($lon) > 0) { // pass this lat and long to API to get Timezone Offset in xml $tz_xml = file_get_contents(''.$lat.'/'.$lon); // lets parse out the timezone offset from the xml using regex if (preg_match("/<offset>([^<]+)<\/offset>/i", $tz_xml, $match)) { $timezone = $match[1]; } } } return $timezone; } ?> </pre> </p> <p> You can also use a combination of the three in order to correlate the data and get a better guess of the timezone. </p> <h3><a name="calc_tz_diff">Calculate the difference in hours between the saved date and time and the user's date and time</a></h3> <p> Now that we have the timestamp and the user's timezone, we just need to adjust the timestamp to their timezone. First we need to calculate the difference between the timezone the timestamp is saved in, and the user's timezone. <pre> $user_tz_offset = $tz_user - $tz_timestamp; </pre> where $user_tz_offset is how far ahead or behind in hours the user timezone is from the timestamp's timezone.
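As a concrete worked example of this arithmetic in JavaScript (the offsets here are hypothetical: a timestamp saved on a UTC-5 server, displayed to a user at UTC+1):

```javascript
// Shift a Unix timestamp (in seconds) from the timezone it was saved
// in to the user's timezone. Both offsets are hours from UTC.
function adjustTimestamp(timestampSeconds, savedTzHours, userTzHours) {
  var userTzOffset = userTzHours - savedTzHours; // hours ahead of the saved zone
  return timestampSeconds + userTzOffset * 3600; // apply as seconds
}

// Saved at UTC-5, viewed at UTC+1: the user is 6 hours ahead,
// so 6 * 3600 = 21600 seconds are added before display.
var adjusted = adjustTimestamp(1215000000, -5, 1);
```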
</p> <h3><a name="add_tz_diff">Add the difference in hours to the saved date and time and display</a></h3> <p> Now we have all we need to show the correct time to the user based on their timezone. Example in pseudo code: <pre> $user_tz_offset = $tz_user - $tz_timestamp; $users_timestamp = $timestamp + $user_tz_offset; </pre> </p><img src="" height="1" width="1"/>Gabe LG PHP Code Performance Profiling on Ubuntu<p> A few options for code profiling in PHP: <ul> <li><a href="">XDebug</a></li> <li><a href="">Benchmark</a></li> <li><a href="">DBG</a></li> <li><a href="">Advanced PHP Debugger</a></li> </ul> </p> <h3>XDebug</h3> <p>. </p> <p> Installing XDebug on Ubuntu is as simple as the command: <pre> sudo apt-get install php5-xdebug </pre> This assumes you installed Apache2 and PHP5 using aptitude. Otherwise you'd probably want to follow one of these instructions on installing XDebug:<br /> <a href=""></a><br /> <a href=""></a><br /> <a href=""></a><br /> </p> <p> Once you've installed xdebug you will need to enable php code profiling by setting the <a href="">xdebug.profiler_enable setting to 1 in php.ini</a>. You can view the php.ini settings using the command: <pre> php -r 'phpinfo();' </pre> To narrow down to just the xdebug settings use: <pre> php -r 'phpinfo();' | grep xdebug </pre> Note: <code>php --info | grep xdebug</code> will work also. </p> <p> If you used the simple <code>sudo apt-get install php5-xdebug</code> to install xdebug, then it should have automatically created an ini file: <code> /etc/php5/conf.d/xdebug.ini</code> which is included with the php.ini. </p> <p> Edit the php.ini file or <code> /etc/php5/conf.d/xdebug.ini</code> and add the line: <pre> xdebug.profiler_enable=1 </pre> Example: <code>sudo gedit /etc/php5/conf.d/xdebug.ini</code> </p> <p> After saving this you will need to restart Apache in order to reload the php settings.
<pre>sudo /etc/init.d/apache2 restart</pre> </p> <p> You will then need an xdebug client to display the profiling information. <a href="">Kcachegrind</a> is installed by default in Ubuntu. Open Kcachegrind (<code>kcachegrind &</code>) and use it to open a file generated by xdebug. This will be found in the directory specified in php.ini for the directive <code>xdebug.profiler_output_dir</code> which defaults to <code>/tmp</code>. The files generated by Xdebug are prefixed with <code>cachegrind.out.</code>. So you can view a list of these files using the command <code>ls /tmp | grep cachegrind.out.</code> </p> <p> How to interpret the Xdebug profiling information displayed in Kcachegrind is described at: <a href=""></a> under Analysing Profiles. </p> <h3>Benchmark</h3> <p> Benchmark is a Pear package. Documentation is found at: <a href=""></a> </p> <p> A simple usage would be: <pre name="code" class="php"> // load class and instantiate an instance require_once 'Benchmark/Profiler.php'; $profiler = new Benchmark_Profiler(); // start profiling $profiler->start(); // do some stuff myFunction(); // stop $profiler->stop(); // display directly in PHP output $profiler->display(); </pre> myFunction could look something like: <pre name="code" class="php"> function myFunction() { global $profiler; $profiler->enterSection('myFunction'); //do something $profiler->leaveSection('myFunction'); return; } </pre> </p> <p> Note that you cannot turn profiling on at runtime by setting <code>xdebug.profiler_enable</code> directly in PHP with ini_set(). </p> <p> <a href="">simulate the live site and data in a development environment</a>.
</p> <p> For example you could have Benchmark only display profiling data if requested via the URL: <pre name="code" class="php"> // load class and instantiate an instance require_once 'Benchmark/Profiler.php'; $profiler = new Benchmark_Profiler(); // start profiling $profiler->start(); // do some stuff myFunction(); // stop $profiler->stop(); // only display if requested by me if (isset($_GET['debug'])) $profiler->display(); </pre> </p> <p> I haven't got around to the other two alternatives for PHP code profiling. I will update this entry if I do. </p><img src="" height="1" width="1"/>Gabe LG Ringo Ends Service<p> It looks like <a href="" target="_blank">Ringo</a>, which is a popular social network, has decided to end its service. Here is part of the email I received 2 hours ago. </p> <p> <pre> Dear Ringo member, After much consideration we have decided to end the Ringo service. As of June 30, 2008 the Ringo service is ending and you will no longer have access to your Ringo account. </pre> </p> <p> Twitter is faster at getting updated with news like this. A search through Google won't reveal any news of this since it is only 2 or so hours old, however, a search through Twitter got quite a lot of results. <a href="" target="_blank"></a> </p> <p> The originating server of the email seems to check out. It originates from tickle.com, which is authoritative for ringo's mail transfer. (I consider all email guilty until proven innocent) </p> <p> Is this the <a href="">first death of a social network</a>? I believe so, at least a major one. </p> <p> It appears ringo.com hasn't been growing at all for the past year, if not slowly dying. <img src="" border="0" /> </p> <p> Farewell Ringo. </p><img src="" height="1" width="1"/>Gabe LG XML to JSON<p> Why would I want to convert XML to JSON? Mainly because JSON is a subset of JavaScript (JavaScript Object Notation) and XML isn't. It is much easier to manipulate JavaScript Objects than it is to manipulate XML.
This is because Objects are native to JavaScript, whereas XML requires an API, the DOM, which is harder to use. DOM implementations in browsers are not consistent, while you will find Objects and their methods more or less the same across browsers. </p> <p> Since most of the content/data available on the web is in XML format and not JSON, converting XML to JSON is necessary. </p> <p> The main problem is that there is no standard way of converting XML to JSON. So when converting, we have to develop our own rules, or base them on the most widely used conversion rules. Let's see how the big boys do it. </p> <h3>Rules Google GData Uses to convert XML to JSON</h3> <blockquote> <p>A GData service creates a JSON-format feed by converting the XML feed, using the following rules:</p> <p><strong>Basic</strong></p> <ul> <li>Attributes are converted to String properties.</li> <li>Child elements are converted to Object properties.</li> <li>Elements that may appear more than once are converted to Array properties.</li> <li>Text values of tags are converted to <code>$t</code> properties.</li> </ul> <p><strong>Namespace</strong></p> <ul> <li>If an element has a namespace alias, the alias and element are concatenated using "$". For example, <code>ns:element</code> becomes <code>ns$element</code>.</li> </ul> <p><strong>XML</strong></p> <ul> <li> XML version and encoding attributes are converted to attribute version and encoding of the root element, respectively.</li> </ul> </blockquote> <h3>Google GData XML to JSON example</h3> <p>This is a hypothetical example; Google GData only deals with RSS and ATOM feeds.</p> <pre rel="xml" name="code" class="xml"> <?xml version="1.0" encoding="UTF-8"?> <example:user domain="example.com"> <name>Joe</name> <status online="true">Away</status> <idle /> </example:user> </pre> <pre rel="json" name="code" class="javascript"> { "version": "1.0", "encoding": "UTF-8", "example$user" : { "domain" : "example.com", "name" : { "$t" : "Joe" }, "status" : { "online" : "true", "$t" : "Away" }, "idle" : null } } </pre> <p> How Google converts XML to JSON is well documented.
The main points being that XML node attributes become string properties, the node data or text becomes <code>$t</code> properties, and namespaces are concatenated with <code>$</code>.<br /> <a href="" target="_blank"></a> </p> <h3>Rules Yahoo Uses to convert XML to JSON</h3> <p> I could not find any documentation on the rules Yahoo uses to convert its XML to JSON in Yahoo Pipes; however, by looking at the output of a pipe in RSS format and the corresponding JSON format you can get an idea of the rules used. </p> <p><strong>Basic</strong></p> <ul> <li>The feed is represented as a JSON object; each nested element or attribute is represented as a name/value property of the object.</li> <li>Attributes are converted to string properties.</li> <li>Child elements are converted to Object properties.</li> <li>Elements that may appear more than once are converted to Array properties.</li> <li>Text values of tags are converted to string properties of the parent node, if the node has no attributes.</li> <li> Text values of tags are converted to <code>content</code> properties, if the node has attributes.</li> </ul> <p><strong>Namespace</strong></p> <ul> <li>Unknown.</li> </ul> <p><strong>XML</strong></p> <ul> <li> XML version and encoding attributes are removed/ignored - at least in the RSS sample I looked at.</li> </ul> <p> The only problem I see with the rules Yahoo Pipes uses is that if an XML node has an attribute named "content", then it will conflict with the Text value of the node/element, giving the programmer an unexpected result. </p> <h3>Yahoo Pipes XML to JSON example</h3> <pre rel="xml" name="code" class="xml"> <?xml version="1.0" encoding="UTF-8"?> <example:user domain="example.com"> <name>Joe</name> <status online="true">Away</status> <idle /> </example:user> </pre> <pre rel="json" name="code" class="javascript"> { "example??user" : { "domain" : "example.com", "name" : "Joe", "status" : { "online" : "true", "content" : "Away" }, "idle" : ??
} } </pre> <h3>XML.com on rules to convert XML to JSON</h3> <p> The article on XML.com by Stefan Goessner gives a list of possible XML element structures and the corresponding JSON Objects.<br /> <a href="" target="_blank"></a> </p> <blockquote> <table> <tbody><tr> <td bgcolor="#cccccc"><strong>Pattern</strong> </td> <td bgcolor="#f0f0f0"><strong>XML</strong> </td> <td bgcolor="#cccccc"><strong>JSON</strong> </td> <td bgcolor="#f0f0f0"><strong>Access</strong> </td> </tr> <tr> <td bgcolor="#cccccc">1</td> <td bgcolor="#f0f0f0"> <code><e/></code> </td> <td bgcolor="#cccccc"> <code>"e": null</code> </td> <td bgcolor="#f0f0f0"> <code>o.e</code> </td> </tr> <tr> <td bgcolor="#cccccc">2</td> <td bgcolor="#f0f0f0"> <code><e>text</e></code> </td> <td bgcolor="#cccccc"> <code>"e": "text"</code> </td> <td bgcolor="#f0f0f0"> <code>o.e</code> </td> </tr> <tr> <td bgcolor="#cccccc">3</td> <td bgcolor="#f0f0f0"> <code><e name="value" /></code> </td> <td bgcolor="#cccccc"> <code>"e":{"@name": "value"}</code> </td> <td bgcolor="#f0f0f0"> <code>o.e["@name"]</code> </td> </tr> <tr> <td bgcolor="#cccccc">4</td> <td bgcolor="#f0f0f0"> <code><e name="value">text</e></code> </td> <td bgcolor="#cccccc"> <code>"e": { "@name": "value", "#text": "text" }</code> </td> <td bgcolor="#f0f0f0"> <code>o.e["@name"] o.e["#text"]</code> </td> </tr> <tr> <td bgcolor="#cccccc">5</td> <td bgcolor="#f0f0f0"> <code><e> <a>text</a> <b>text</b> </e></code> </td> <td bgcolor="#cccccc"> <code>"e": { "a": "text", "b": "text" }</code> </td> <td bgcolor="#f0f0f0"> <code>o.e.a o.e.b</code> </td> </tr> <tr> <td bgcolor="#cccccc">6</td> <td bgcolor="#f0f0f0"> <code><e> <a>text</a> <a>text</a> </e></code> </td> <td bgcolor="#cccccc"> <code>"e": { "a": ["text", "text"] }</code> </td> <td bgcolor="#f0f0f0"> <code>o.e.a[0] o.e.a[1]</code> </td> </tr> <tr> <td bgcolor="#cccccc">7</td> <td bgcolor="#f0f0f0"> <code><e> text <a>text</a> </e></code> </td> <td bgcolor="#cccccc"> <code>"e": { "#text": "text", "a": "text" 
}</code> </td> <td bgcolor="#f0f0f0"> <code>o.e["#text"] o.e.a</code> </td> </tr> </tbody> </table> </blockquote> <p> If we translate this to the rules format given by Google, it would look something like: </p> <p><strong>Basic</strong></p> <ul> <li> The feed is represented as a JSON object; each nested element or attribute is represented as a name/value property of the object.</li> <li>Attributes are converted to <code>@attribute</code> properties. (attribute name preceded by @)</li> <li>Child elements are converted to Object properties, if the node has attributes or child nodes.</li> <li>Elements that may appear more than once are converted to Array properties.</li> <li>Text values of tags are converted to string properties of the parent node, if the node has no attributes or child nodes.</li> <li>Text values of tags are converted to <code>#text</code> properties, if the node has attributes or child nodes.</li> </ul> <p><strong>Namespace</strong></p> <ul> <li>If an element has a namespace alias, the alias and element are concatenated using ":". For example, <code>ns:element</code> becomes <code>ns:element</code>. (ie: namespaced elements are treated as any other element)</li> </ul> <p><strong>XML</strong></p> <ul> <li>XML version and encoding attributes are not converted.</li> </ul> <h3>XML.com XML to JSON example</h3> <pre rel="xml" name="code" class="xml"> <?xml version="1.0" encoding="UTF-8"?> <example:user domain="example.com"> <name>Joe</name> <status online="true">Away</status> <idle /> </example:user> </pre> <pre rel="json" name="code" class="javascript"> { "example:user" : { "@domain" : "example.com", "name" : "Joe", "status" : { "@online" : "true", "#text" : "Away" }, "idle" : null } } </pre> <h3>Other rules being used to convert XML to JSON</h3> <p> Here is a blog on the topic of an XML to JSON standard. <a href=""></a>. <br /> A good discussion on the differences between XML and JSON.
<a href=""></a> </p> <h3>We need a standard way of converting XML to JSON</h3> <p> I'm tired of hearing the "XML vs JSON" debate. Why not just make them compatible? Now that we see just how many different rules are being used, we can definitely see another reason why a standard would come in handy. But till then, I think I'll add to the confusion and come up with my own ruleset. </p> <h3>My rules of converting XML to JSON</h3> <p> My rules are simple and based on the XML DOM. The DOM represents XML as DOM Objects and Methods. We will use the DOM objects only, since JSON does not use methods. So each Element would be an Object, each text node a <code>#text</code> property, and attributes an <code>@attributes</code> object with string properties of the attribute names. The only difference from the DOM Objects representation in JavaScript is the <code>@</code> sign in front of the attributes Object name - this is to avoid conflicts with elements named "attributes". The DOM gets around this by having public methods to select child nodes, and not public properties (the actual properties are private, and thus not available in an object notation). </p> <p><strong>Basic</strong></p> <ul> <li> The feed is represented as a JSON object; each nested element or attribute is represented as a name/value property of the object.</li> <li>Attributes are converted to String properties of the <code>@attributes</code> property.</li> <li> Child elements are converted to Object properties.</li> <li>Elements that may appear more than once are converted to Array properties.</li> <li>Text values of tags are converted to <code>#text</code> properties.</li> </ul> <p><strong>Namespace</strong></p> <ul> <li>Treat as any other element.</li> </ul> <p><strong>XML</strong></p> <ul> <li> XML version and encoding attributes are not converted. </li> </ul> <p> In order to convert XML to JSON with JavaScript, you first have to convert the XML to a DOM Document (to make things simpler).
Any major browser will do this automatically, either in the case of the XML/XHTML Document you are viewing, or for an XML document retrieved via XMLHttpRequest. But if all you have is an XML string, something like this will do: </p> <pre rel="javascript" name="code" class="javascript"> function TextToXML(strXML) { var xmlDoc = null; try { if (document.all) { xmlDoc = new ActiveXObject("Microsoft.XMLDOM"); xmlDoc.async = false; } else { xmlDoc = new DOMParser(); } } catch(e) {throw new Error("XML Parser could not be instantiated");} var out; try { if(document.all) { out = (xmlDoc.loadXML(strXML))?xmlDoc:false; } else { out = xmlDoc.parseFromString(strXML, "text/xml"); } } catch(e) { throw new Error("Error parsing XML string"); } return out; } </pre> <p> This will give you the XML represented as a DOM Document, which you can traverse using the <a href="" target="_blank">DOM methods</a>. </p> <p> Now all you'll have to do to convert the DOM Document to JSON is traverse it, and for every Element create an Object, for its attributes create an <code>@attributes</code> Object, and a <code>#text</code> attribute for text nodes, and repeat the process for any child elements.
</p> <pre name="code" class="javascript"> /** * Convert XML to JSON Object * @param {Object} XML DOM Document */ xml2Json = function(xml) { var obj = {}; if (xml.nodeType == 1) { // element // do attributes if (xml.attributes.length > 0) { obj['@attributes'] = {}; for (var j = 0; j < xml.attributes.length; j++) { obj['@attributes'][xml.attributes[j].nodeName] = xml.attributes[j].nodeValue; } } } else if (xml.nodeType == 3) { // text obj = xml.nodeValue; } // do children if (xml.hasChildNodes()) { for(var i = 0; i < xml.childNodes.length; i++) { if (typeof(obj[xml.childNodes[i].nodeName]) == 'undefined') { obj[xml.childNodes[i].nodeName] = xml2Json(xml.childNodes[i]); } else { if (typeof(obj[xml.childNodes[i].nodeName].length) == 'undefined') { var old = obj[xml.childNodes[i].nodeName]; obj[xml.childNodes[i].nodeName] = []; obj[xml.childNodes[i].nodeName].push(old); } obj[xml.childNodes[i].nodeName].push(xml2Json(xml.childNodes[i])); } } } return obj; }; </pre> <h3>Converting XML to Lean JSON?</h3> <p> We could make the JSON encoding of the XML lean by using just "@" for attributes and "#" for text in place of "@attributes" and "#text": </p> <pre rel="json" name="code" class="javascript"> { "example:user" : { "@" : { "domain" : "example.com" }, "name" : { "#" : "Joe" }, "status" : { "@" : {"online" : "true"}, "#" : "Away" }, "idle" : null } } </pre> <p> You may notice that "@" and "#" are valid as javascript property names, but not as XML attribute names. This allows us to encompass the DOM representation in object notation, since we are swapping DOM functions for Object properties that are not allowed as XML attributes and thus will not get any collisions. We could go further and use "!" for comments for example, and "%" for CDATA. I'm leaving these two out for simplicity. </p> <h3>What about converting JSON to XML?</h3> <p> If we follow the rules used to convert XML to JSON, it should be easy to convert JSON back to XML. 
We'd just need to recurse through our JSON Object, and create the necessary XML objects using the DOM methods. </p> <pre name="code" class="javascript"> /** * JSON to XML * @param {Object} JSON */ json2Xml = function(json, node) { var root = false; if (!node) { node = document.createElement('root'); root = true; } for (var x in json) { if (x == '@attributes') { for (var a in json[x]) { node.setAttribute(a, json[x][a]); } } else if (x == '#text') { node.appendChild(document.createTextNode(json[x])); } else if (json[x] instanceof Array) { for (var i = 0; i < json[x].length; i++) { node.appendChild(json2Xml(json[x][i], document.createElement(x))); } } else { node.appendChild(json2Xml(json[x], document.createElement(x))); } } if (root == true) { return TextToXML(node.innerHTML); } else { return node; } }; </pre> <p> This really isn't a good example, as I couldn't find out how to create Elements using the XML DOM with browser JavaScript. Instead I had to create Elements using document.createElement() and text nodes with document.createTextNode(), and use the non-standard innerHTML property in the end. The main point demonstrated is how straightforward the conversion is. </p> <h3>What is the use of converting JSON to XML</h3> <p> If you are familiar with creating xHTML via the DOM methods, you'll know how verbose it can be. By using a simple data structure to represent XML, we can remove the repetitive code needed to create the xHTML. Here is a function that creates HTML Elements out of a JSON Object. </p> <p> <pre name="code" class="javascript"> /** * JSON to HTML Elements * @param {String} Root Element TagName * @param {Object} JSON */ json2HTML = function(tag, json, node) { if (!node) { node = document.createElement(tag); } for (var x in json) { if (x == '@attributes') { for (var a in json[x]) { node.setAttribute(a, json[x][a]); } } else if (x == '#text') { node.appendChild(document.createTextNode(json[x])); } else if (json[x] instanceof Array) { for (var i = 0; i < json[x].length; i++) { node.appendChild(json2HTML(x, json[x][i])); } } else { node.appendChild(json2HTML(x, json[x])); } } return node; }; </pre> </p> <p> Let's say you wanted a link <code><a title="Example" href="">example.com</a></code>.
With the regular browser DOM methods you'd do: </p> <pre name="code" class="javascript"> var a = document.createElement('a'); a.setAttribute('href', ''); a.setAttribute('title', 'Example'); a.appendChild(document.createTextNode('example.com')); </pre> This is procedural and thus not very pleasing to the eye (unstructured), as well as verbose. With JSON to XHTML you would just be dealing with the data in native JavaScript Object notation. <pre name="code" class="javascript"> var a = json2HTML('a', { '@attributes': { href: '', title: 'Example' }, '#text': 'example.com' }); </pre> <p> That does look a lot better. This is because JSON separates the data into a single Object, which can be manipulated as we see fit, in this case with json2HTML(). </p> <p> If you want nested elements: </p> <pre name="code" class="javascript"> var div = json2HTML('div', { a : { '@attributes': { href: '', title: 'Example' }, '#text': 'example.com' } }); </pre> <p>Which gives you</p> <pre name="code" class="html"> <div><a title="Example" href="">example.com</a></div> </pre> <p> The uses of converting JSON to XML are many. Another example: let's say you want to syndicate an RSS feed. Just create the JSON Object with the rules given for conversion between XML and JSON, run it through your json2Xml() function, and you should have a quick and easy RSS feed. Normally you'd be using a server-side language other than JavaScript to generate your RSS (however <a href="" target="_blank">Server Side JavaScript</a> is a good choice also), but since the rules are language independent, it doesn't make a difference which language is used, as long as it can support the DOM and JSON. </p>Gabe LG - a live view of twitter<p> A preview of <a href="">Twittier.com</a> is available. </p> <p> What it does is retrieve updates from the <a href="">twitter public timeline</a>, or specific topics using <a href="">summize.com</a>, and present them in a simple view.
</p> <p> On receiving a message it extracts "relevant" keywords and displays a "live cloud view" of keywords. This cloud view updates in real time, and thus gives you an idea of the trending topics on twitter at any given time. </p> <p> At the moment it doesn't work on IE6, though I haven't tested IE7. I'm developing this on Firefox 3 beta, but Firefox 2.0 should display it fine also. </p> <p> The mashup is fully client side, using the <a href="">MooTools JavaScript library</a> and <a href="">summize.com</a> for data. </p>Gabe LG
http://feeds.feedburner.com/blogspot/hXgxP
Enumeration Interface in Java Hello Learners, today we are going to learn about the use of the Enumeration Interface with collections in Java. The Enumeration interface was introduced in JDK 1.0 in the java.util package. Enumeration is a legacy interface, so it can be used only with legacy collection classes such as Stack, Vector and Hashtable.
Methods of Enumeration Interface in Java
The Enumeration interface offers two methods that can be used to select elements from a collection one at a time:
- hasMoreElements(): this method returns a boolean. As the name suggests, if the collection has more elements it returns true, or else it returns false. When all the elements have been selected or enumerated, it returns false.
- nextElement(): it returns the next element in the collection, as an object reference.
See the code below. Observe every statement and try to solve it on your own…
import java.util.Stack;
import java.util.ArrayList;
import java.util.Enumeration;
public class Enumerate {
static Stack<Integer> addToStack(ArrayList<Integer> a) {
Stack<Integer> s = new Stack<Integer>();
s.addAll(a); //all elements of the arraylist are now added to the stack
return s;
}
public static void main(String[] args) {
ArrayList<Integer> a = new ArrayList<Integer>(10);
a.add(101);
a.add(102);
a.add(103);
a.add(114);
a.add(305);
Stack<Integer> sta = new Stack<Integer>();
sta = addToStack(a);
//create an instance of Enumeration
Enumeration<Integer> en = sta.elements();
int i = 1;
while(en.hasMoreElements()) {
System.out.printf("%dth element of stack: ",i);
System.out.println(en.nextElement()+100);
i++;
}
}
}
OUTPUT:
1th element of stack: 201
2th element of stack: 202
3th element of stack: 203
4th element of stack: 214
5th element of stack: 405
- On line 7, we have created a method named addToStack which takes an ArrayList of Integer type as an argument. This method adds all the elements of the ArrayList into a stack and returns it.
- On line 21, we defined a stack and called the addToStack method, passing the ArrayList to it. On line 22 we saved the result into that same stack variable.
- On line 24, we created an Enumeration instance for the stack.
- From lines 26-28, we have a while loop that checks whether the stack has more elements to pick or not, using the hasMoreElements() method.
- If the stack has more elements, the condition becomes true and that element is picked using the nextElement() method. Inside the loop, the element is printed increased by 100.
OTHER POINTS TO REMEMBER:
- Enumerations are not fail-fast, which means no error will be thrown if you modify the collection while traversing it.
- Enumerations are not thread-safe, because other threads are allowed to modify the collection while traversing.
You can learn more about Stack in Java by clicking on the links. So, that’s all for now about the Enumeration Interface in Java. Till then, Keep Learning, Keep Practicing, Keep Reading!
“THINK TWICE, CODE ONCE!”
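Since Enumeration also works with the other legacy classes, here is a minimal self-contained sketch (the class name EnumDemo is mine) that uses the same two methods to join the elements of a Vector into one string:

```java
import java.util.Enumeration;
import java.util.Vector;

public class EnumDemo {
    // walk the enumeration and join every element with a comma
    static String joinElements(Vector<String> v) {
        StringBuilder sb = new StringBuilder();
        Enumeration<String> en = v.elements();
        while (en.hasMoreElements()) {
            if (sb.length() > 0) sb.append(",");
            sb.append(en.nextElement());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Vector<String> v = new Vector<String>();
        v.add("alpha");
        v.add("beta");
        v.add("gamma");
        System.out.println(joinElements(v)); // alpha,beta,gamma
    }
}
```

The same elements() call is available on Stack (as used in the article), since Stack extends Vector.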
https://www.codespeedy.com/enumeration-interface-in-java/
16 October 2012 04:15 [Source: ICIS news] SINGAPORE (ICIS)--The Chinese producer restarted the 25,000 tonne/year NBR line to produce 3355 NBR, a major rubber grade, the source added. The 50,000 tonne/year NBR plant was shut on 20 July for regular maintenance. However, the plant’s restart date was delayed, despite the completion of the turnaround, because of weak demand in the market, the source said. The producer offered the first batch of on-spec NBR from its 50,000 tonne/year NBR plant in June 2011, when it was newly started up. However, the plant’s operating rate has been kept largely at around 50% of capacity since then. Ningbo Shunze Rubber has increased its NBR offers by CNY1,200/tonne to CNY22,000-22,200/tonne. Demand in the Chinese market has been improving, according to
http://www.icis.com/Articles/2012/10/16/9604175/chinas-ningbo-shunze-rubber-restarts-one-nbr-line-on-15.html
[Date Index] [Thread Index] [Author Index] Re: Another chapter in the NeXT Tanh bug saga! In an attempt to clear things up a little bit here, this is the status of the tanh() bug on the NeXT: 1) There is a real bug, only on the 68040, in FTANH which is done in software on the 68040 (it was in hardware in the 68030/68882). Wolfram's code snippet shows this quite clearly. With the necessary #include <math.h>, of course. 2) The so called discontinuity caused by not using the #include for that same statement, is not a bug--it's a mistake in coding. And the result is different than the real discontinuity on the 68040. 3) NeXT has new floating point code from Motorola that fixes this and several other problems with the floating point operations on the 68040, and this will be distributed with version 2.1 of the NeXT operating system. 2.1 should be out Real Soon Now, and will be distributed as about four or so floppies to upgrade 2.0. It is an optional upgrade (which I disagree with, but who listens to me anyway?). Mark Adler madler at pooh.caltech.edu
http://forums.wolfram.com/mathgroup/archive/1991/Mar/msg00010.html
Hi, I'm doing a matrix of arrays and I have to assign each column a letter or number, so that when I shuffle the letters/numbers it will print out accordingly. This is like the transposition cipher. I have 2 problems.
1. My matrix is not printing out like I want it to, which is 5x6. Instead it's printing out line by line. I've checked my code with everyone and it's the same. Help me figure out what I'm missing.
2. How do you assign a letter or number to a column? I've tried using strings and assigning the column, but it doesn't work if I need to code the column using numbers. This is what I tried:
String 1 = array[][1];
here is my code:
public class Secret {
public static void main(String[]args) {
String str = "this is my very secret message";
int len = str.length();
char array [][] = new char[5][6];
int x = 0;
for (int i = 0; i<5; i++){
for (int j = 0; j<6; j++){
while (x<len) {
char ch = str.charAt(x);
x++;
array[i][j] = ch;
System.out.println(array[i][j] +" ");
}
}
}
}
}
the outcome that I'm looking for is:
4 5 3 2 9 7
t h i s   i
s   m y   v
e r y   s e
c r e t   m
e s s a g e
(OK, the matrix is not typing out as I would have liked, but it's taking the spaces into consideration as well.)
When I type 794532 it will come out according to the column. If you could help me that would be great. Thank you kindly.
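For comparison, here is a reworked sketch of the fill-and-print logic (my own code, not the poster's): the inner `while` loop is replaced by a single bounds check so each cell gets exactly one character, and `print` is used within a row with one `println` per row, which produces the 5x6 grid shape.

```java
public class GridDemo {
    // fill a rows-by-cols grid from the string, padding with spaces
    static char[][] fill(String str, int rows, int cols) {
        char[][] grid = new char[rows][cols];
        int x = 0;
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                grid[i][j] = (x < str.length()) ? str.charAt(x++) : ' ';
            }
        }
        return grid;
    }

    public static void main(String[] args) {
        char[][] grid = fill("this is my very secret message", 5, 6);
        for (int i = 0; i < 5; i++) {
            for (int j = 0; j < 6; j++) {
                System.out.print(grid[i][j] + " "); // stay on the same line
            }
            System.out.println(); // newline only after a full row
        }
    }
}
```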
https://www.daniweb.com/programming/software-development/threads/233100/need-help-in-matrix-of-array
One of the things I’ve been working on recently involves using XML columns in SQL Server. Starting out, it was simple and I was just doing vanilla ADO.NET (wrapped in a simple Query API) combined with XML serialization/deserialization, which worked pretty well for a while. But as the complexity has grown, it seemed like too much time was being spent enhancing the persistence infrastructure in this particular area of the application. In comes NHibernate, which is already integrated and available to me on this particular project. In fact, the only reason I didn’t use NHibernate for this particular feature from day one is because I didn’t see a lot of information available regarding NHibernate and XML columns. I did find one old blog post by Ayende and a seemingly outdated article on the NHibernate site. But I admit I’ve never jumped into creating custom user types in NHibernate and wasn’t yet comfortable moving forward with that approach. Since what I needed at the time was pretty simple, I went forward without NHibernate for the time being. Without going into too much detail, I came to a point where I wanted to spike with NHibernate to see how it handles columns with an XML data type. I’ve only tried one of a couple approaches so far, but wanted to get some feedback on it so far. First, a couple goals: - Store a set of data as XML in a SQL Server XML column - Ability to deserialize the XML into strongly typed objects for use in the rest of the code base Warning: contrived example ahead. The real implementation is basically for lightweight messages. 
public class Person
{
    private readonly string contactInformationXml;

    public Person(ContactInformation contactInformation)
    {
        // NOTE: SerializeToXmlStream extension method not shown
        contactInformationXml =
            new StreamReader(contactInformation.SerializeToXmlStream()).ReadToEnd();
    }

    public ContactInformation GetContactInformation()
    {
        var xmlDocument = new XmlDocument();
        xmlDocument.LoadXml(contactInformationXml);

        // NOTE: DeserializeInto extension method not shown
        return xmlDocument.DeserializeInto<ContactInformation>();
    }
}
And an excerpt from an example NHibernate mapping for this:
<property name="ContactInformationXml" column="ContactInformation"
So in the Person class above, the constructor accepts a strongly typed component for the contact information, serializes it to XML and stores it in a private field as a string. Then the NHibernate mapping takes care of persisting the serialized string to the XML column named ContactInformation in the database. To see how it would get used when NHibernate loads a Person, the GetContactInformation() method is an example of deserializing the string into the strongly typed ContactInformation object which it then returns. Now, putting aside the reasons for or against using XML in this way, using XML columns in general or the fact that serialization concerns shouldn’t be placed inside a class like this… I’m looking for feedback on this approach and if anyone else has any better ways of doing this. I’ve yet to go down the path of creating a custom NHibernate user type, even though I think doing it that way would be a bit cleaner and flexible in the future. Anyone have any other good examples of using NHibernate for persisting data to XML columns? You might want to check out this thread. There is even a user type there you can use. Interesting.
I’ve just written a generic IUserType that persists an object as XML to an NVARCHAR(MAX) column, which I’m using with Fluent NHibernate. I thought it was a bit useless as I couldn’t index on the properties, but maybe I will be able to if I use an XML column. Up on my blog in the next couple of days. An IUserType would be the way to go. The domain shouldn’t care that ContactInfo is actually stored as XML, so the entity shouldn’t know about that. The two important members of IUserType are SafeGet and SafeSet, which is where you would place the (de)serialization logic. The mapping would then get updated to use the user type, and then there is no need for field access. @Jason, Cool, thanks Jason! I would agree that the domain entity shouldn’t know about this type of information. In the real example, it’s not necessarily domain entities doing this; it’s really just messages. But yeah, I think I will have to take a further look into solving it with an IUserType. XML column use is of general interest as one plausible approach to the data extensibility requirements associated with shared-database SaaS instances and other MEBA apps. Good to see a rundown of the use with NHibernate. Here’s one I’m using at the moment, it lets you map XmlDocument properties on classes directly to database columns, using a UserType. Feel free to post improvements and updates.
https://lostechies.com/joeybeninghove/2009/01/14/nhibernate-xml-columns/
Access notebook filename from jupyter with sagemath kernel Say I am running sage in a jupyter notebook, e.g. invoked with sage -n jupyter my_nootebook_file.ipynb. After some computations I want to call a function from a custom library that stores all data in a file named my_nootebook_file.data. I can do all the above manually, but I want the function to know that it is being run inside the my_nootebook_file.ipynb notebook, and hence that the canonical output filename should be my_nootebook_file.*. How can I access this information? Shouldn't there be a "process information object" that holds all this information, like the current directory? I cannot seem to find it. I did try def outp(L): print [str(l) for l in L] outp(locals()) and searched the output for my notebook name, to no avail. Any hints are more than welcome.
https://ask.sagemath.org/question/36873/access-notebook-filename-from-jupyter-with-sagemath-kernel/
Well first let me tell you that your problems go much deeper than your template parameter woes. The logic in your program is a little messed up. However, assuming you'll be able to fix that on your own once you overcome your template problems, try this:
Code: #ifndef SSCR_H_INCLUDED #define SSCR_H_INCLUDED #include <iostream> #include <vector> template <class T> void sscr(std::vector<std::vector<T> > v) { typename std::vector<std::vector<T> >::iterator i; typename std::vector<T>::iterator j; for (i = v.begin(); i != v.end(); i++) { for (j = i->begin(); j != i->end(); j++) std::cout << *j << " "; std::cout << std::endl; } } #endif // SSCR_H_INCLUDED
It limits you to passing in a vector of vectors, but I think that should be a pretty reasonable expectation. Oh, and there's no point having this function return 0. Just return 0 from your main function after you call this.
http://cboard.cprogramming.com/cplusplus-programming/106239-problem-generic-function-2-print.html
Red Hat Bugzilla – Bug 117227 gnome-python2-bonobo missing depencies Last modified: 2008-08-02 19:40:33 EDT From Bugzilla Helper: User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.6) Gecko/20040209 Firefox/0.8 Description of problem: $ python Python 2.2.3 (#1, Oct 15 2003, 23:33:35) [GCC 3.3.1 20030930 (Red Hat Linux 3.3.1-6)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import gnome.ui >>> quit 'Use Ctrl-D (i.e. EOF) to exit.' >>> $ sudo rpm -e gnome-python2-canvas [Note that there's no warning or error on missing dependencies.] $ python Python 2.2.3 (#1, Oct 15 2003, 23:33:35) [GCC 3.3.1 20030930 (Red Hat Linux 3.3.1-6)] on linux2 Type "help", "copyright", "credits" or "license" for more information. >>> import gnome.ui Traceback (most recent call last): File "<stdin>", line 1, in ? ImportError: could not import bonobo.ui >>> Version-Release number of selected component (if applicable): gnome-python2-bonobo-2.0.0-2 How reproducible: Always Steps to Reproduce: 1. Install some software which uses gnome.ui with yum. 2. Yum installs some depencies. 3. Start software -> ImportError: could not import bonobo.ui Actual Results: Traceback (most recent call last): File "test.py", line 2, in ? import gnome.ui ImportError: could not import bonobo.ui Expected Results: gnome-python2-bonobo should depend on gnome-python2-canvas. Additional info: *** Bug 119454 has been marked as a duplicate of this bug. ***.
https://bugzilla.redhat.com/show_bug.cgi?id=117227
Rails Ranger is a library I wrote that’s focused on leveraging the defaults of Ruby on Rails APIs to make your life easier when writing JavaScript clients for them. It’s essentially a thin layer wrapping the powerful Axios library, while still exposing its full power to you. Installation $ yarn add rails-ranger # or $ npm install --save rails-ranger Basic Setup The most basic setup would be something like this: api-client.js import RailsRanger from 'rails-ranger' const config = { axios: { baseURL: '' } } export default new RailsRanger(config) One important note here is that anything you send inside the axios option will be handed down to Axios as it is, so you can configure it as you want. Usage Then how do we start making requests? Like this: some-front-end-component.js import api from 'api-client' api.list('users').then((response) => { // your code }) So let’s break down what’s happening here: - We import the client we’ve set up in the previous file seen in the configuration section. - We call the list function from it, which is just an alias for index. This will trigger a GET request to the /users path of the API. - The JSON we receive inside response.data will have all its keys converted to camel case automatically for you! Also, you can make use of nested resources with something like this: api.resource('users', 1) .list('blogPosts', { hideDrafts: true }) .then((response) => { // your code })
😄 Bonus: Using Rails Ranger as a Path Builder

You can also use Rails Ranger as just a path builder and handle the requests yourself with your favorite client:

```javascript
import { RouteBuilder } from 'rails-ranger'

const routes = new RouteBuilder()
const route = routes.create('users', { name: 'John' })
// => { path: '/users', params: { name: 'John' }, method: 'post' }
```

Making AJAX requests to a Ruby on Rails API can be fun if we leverage the well-established standards of the framework. This way we can free ourselves from handling repetitive tasks like converting between camel case and snake case and focus on accessing endpoints in a semantic way. 🤠
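As a final aside, the case conversion the library automates is easy to picture. Here's an illustrative sketch in plain Python (not Rails Ranger's actual implementation) of the two directions:

```python
import re

def camel_to_snake(name):
    # hideDrafts -> hide_drafts: insert "_" before each interior capital
    return re.sub(r'(?<!^)(?=[A-Z])', '_', name).lower()

def snake_to_camel(name):
    # blog_posts -> blogPosts: capitalize every word after the first
    head, *rest = name.split('_')
    return head + ''.join(word.capitalize() for word in rest)

print(camel_to_snake('hideDrafts'))  # hide_drafts
print(snake_to_camel('blog_posts'))  # blogPosts
```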
Python 3 in 1 Hour

This is a Python 3 tutorial. This page is a summary of the basics for beginners. Examples on this page are based on Python 3.7. For Python 2, see: Python 2 Basics.

A Python 3 source file must be saved in UTF-8 encoding. Make sure your editor saves it in UTF-8 (usually there's a preference setting). [see Python: Unicode Tutorial 🐍] [see Unicode Basics: Character Set, Encoding, UTF-8, Codepoint]

Strings

Use single quotes or double quotes to quote a string.

```python
# python 3
# single and double quotes are the same in Python
a = "tiger ♥"
b = 'rabbit ♥'
print(a, b)
# tiger ♥ rabbit ♥
```

You can use \n for a linebreak, and \t for a tab, etc.

Quoting Raw String 「r"…"」

You can add r in front of the quote symbol. This way, backslash characters will NOT be interpreted as escapes.

```python
# python 3
c = r"this\n and that"
print(c)  # prints a single line
```

[see Python: Quote String]

Triple Quotes for Multi-Line String

To quote a string of multiple lines, use triple quotes.

```python
# python 3
d = """this
will be printed
in 3 lines"""
print(d)
```

Substring, Length

Substring extraction is done by appending a bracket: str[begin_index:end_index]. The index can be negative, which counts from the end.

```python
# python 3
b = "01234567"
print(b[1:4])  # prints “123”
```

The length of a string is len().

```python
a = "this"
print(len(a))  # 4
```

Strings can be joined by a plus sign +.

```python
print("this" + " that")
```

A string can be repeated using *.

```python
print("this" * 2)
```

String Methods

[see Python: String Methods]

Arithmetic

```python
# python 3
print(3 + 4)    # 7
print(3 - 4)    # -1
print(3 + - 4)  # -1
print(3 * 4)    # 12
print(2 ** 3)   # 8 power
print(11 / 5)   # 2.2 (in python 2, this would be 2)
print(11 // 5)  # 2 (quotient)
print(11 % 5)   # 1 remainder (modulo)
print(divmod(11, 5))  # (2, 1) quotient and remainder
```

Convert to {int, float, string}

Python doesn't automatically convert between {int, float, string}.

- Convert to int: int(3.2)
- Convert to float: float(3).
- Convert to string: use the “format” method.
[see Python: Format String]

- You can write a float by adding a dot after the number, like this: 3.

True and False

False-like things, such as False, 0, empty string, empty array, …, all evaluate to False. The following evaluate to False:

- False. A builtin Boolean type.
- None. A builtin type.
- 0. Zero.
- 0.0. Zero, float.
- "". Empty string.
- []. Empty list.
- (). Empty tuple.
- {}. Empty dictionary.
- set([]). Empty set.
- frozenset([]). Empty frozen set.

```python
# python 3
my_thing = []
if my_thing:
    print("yes")
else:
    print("no")
```

Conditional: if then else

```python
# python 3
x = -1
if x < 0:
    print('neg')
elif x == 0:
    print('zero')
elif x == 1:
    print('one')
else:
    print('other')
# the elif can be omitted
```

Loop, Iteration

Example of a “for” loop.

```python
# python 3
a = list(range(1, 5))  # creates a list from 1 to 4 (does NOT include the end)
for x in a:
    if x == 3:
        print(x)  # prints 3
```

The range(m, n) function gives a list from m to n-1.

Python also supports break and continue to control a loop.

- break → exit the loop.
- continue → skip the rest of the body and start the next iteration.

```python
# python 3
for x in range(1, 9):
    print(x)
    if x == 4:
        break
# 1
# 2
# 3
# 4
```

Example of a “while” loop.

```python
# python 3
x = 1
while x <= 5:
    print(x)
    x += 1
```

List

Example of a list and extracting a slice of elements:

```python
a = ["zero", "one", "two", "three", "four", "five", "six"]
print(a[2:4])  # prints ["two", "three"]
```

WARNING: The extraction is not inclusive. For example, mylist[2:4] returns only 2 elements, not 3.

Modify an element: list[index] = new_value

```python
xx = ["a", "b", "c"]
xx[2] = "two"
print(xx)  # → ['a', 'b', 'two']
```

A slice (continuous sequence) of elements can be changed by assigning to a list directly. The length of the slice need not match the length of the new list.

```python
# python 3
xx = ["b0", "b1", "b2", "b3", "b4", "b5", "b6"]
xx[0:6] = ["two", "three"]
print(xx)  # ['two', 'three', 'b6']
```

Nested Lists. Lists can be nested arbitrarily. Append an extra bracket to get an element of a nested list.

```python
a = [3, 4, [7, 8]]
print(a[2][1])  # returns 8
```

List Join. Lists can be joined with a plus sign.
```python
b = ["a", "b"] + [7, 6]
print(b)  # prints ['a', 'b', 7, 6]
```

[see Python: List Basics]

Tuple

Python has a “tuple” type. It's like a list, except it's immutable (that is, the elements cannot be changed, nor added/deleted). The syntax for a tuple uses round brackets () instead of square brackets. The brackets are optional when not ambiguous, but it's best to always use them.

```python
# python 3
# tuple
t1 = (3, 4, 5)  # a tuple of 3 elements. paren optional when not ambiguous
print(t1)     # (3, 4, 5)
print(t1[0])  # 3
```

```python
# python 3
# nested tuple
t2 = ((3, 8), (4, 9), ("a", 5, 5))
print(t2[0])     # (3, 8)
print(t2[0][0])  # 3
```

```python
# python 3
# a list of tuples
t3 = [(3, 8), (4, 9), (2, 1)]
print(t3[0])     # (3, 8)
print(t3[0][0])  # 3
```

[see Python: Tuple]

Python Sequence Types

In Python, {string, list, tuple} are called “sequence types”. They all have the same methods. Here's an example of operations that can be used on sequence types.

```python
# python 3
# operations on sequence types

# a list
ss = [0, 1, 2, 3]

# length
print(len(ss))  # 4

# ith item
print(ss[0])  # 0

# slice of items
print(ss[0:3])  # [0, 1, 2]

# slice of items with jump step
print(ss[0:10:2])  # [0, 2]

# check if an element exists
print(3 in ss)  # True. (or False)

# check if an element does NOT exist
print(3 not in ss)  # False

# concatenation
print(ss + ss)  # [0, 1, 2, 3, 0, 1, 2, 3]

# repeat
print(ss * 2)  # [0, 1, 2, 3, 0, 1, 2, 3]

# smallest item
print(min(ss))  # 0

# largest item
print(max(ss))  # 3

# index of the first occurrence
print(ss.index(3))  # 3

# total number of occurrences
print(ss.count(3))  # 1
```

Dictionary: Key/Value Pairs

A keyed list in Python is called a “dictionary” (known as Hash Table or Associative List in other languages). It is an unordered list of pairs; each pair is a key and a value.
```python
# python 3

# define a keyed list
aa = {"john": 3, "mary": 4, "joe": 5, "vicky": 7}

# getting value from a key
print(aa["mary"])  # 4

# add an entry
aa["pretty"] = 99

# delete an entry
del aa["vicky"]

print(aa)  # {'john': 3, 'mary': 4, 'joe': 5, 'pretty': 99}

# get keys
print(list(aa.keys()))  # ['john', 'mary', 'joe', 'pretty']

# get values
print(list(aa.values()))  # [3, 4, 5, 99]

# check if a key exists
print("is mary there:", "mary" in aa)  # is mary there: True
```

Loop Thru List/Dictionary

Here is an example of going thru a list by element.

```python
# python 3
myList = ['one', 'two', 'three', '∞']
for x in myList:
    print(x)
```

You can loop thru a list and get both {index, value} of an element.

```python
# python 3
myList = ['one', 'two', 'three', '∞']
for i, v in enumerate(myList):
    print(i, v)
# 0 one
# 1 two
# 2 three
# 3 ∞
```

Loop thru Dictionary

```python
# python 3
myDict = {"john": 3, 'mary': 4, 'joe': 5, 'vicky': 7}
for k, v in list(myDict.items()):
    print(k, v)
# output
# joe 5
# john 3
# mary 4
# vicky 7
```

[see Python: Map Function to List]

Use Module

A library in Python is called a module.

```python
# python 3

# import the standard module named os
import os

# example of using a function
print('current dir is:', os.getcwd())
```

[see Python: List Modules, Search Path, Loaded Modules]

Function

The following is an example of defining a function.

```python
# python 3
def myFun(x, y):
    """myFun returns x+y."""
    result = x + y
    return result

print(myFun(3, 4))  # prints 7
```

[see Python: Function]

Class and Object

[see Python: Class and Object]

Writing a Module

Here's a basic example. Save the following line in a file and name it mymodule.py.

```python
# python 3
def f1(n):
    return n + 1
```

To load the file, use import module_name, then to call the function, use module_name.function_name.

```python
# python 3
import mymodule  # import the module

print(mymodule.f1(5))  # calling its function. prints 6
print(mymodule.__name__)  # prints the module's name: mymodule
```
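The Class and Object section above is just a pointer to the full page. As a quick taste, here is a minimal sketch of defining a class and creating an object (the Cat class is illustrative, not taken from the linked page):

```python
# python 3
class Cat:
    """A minimal example class."""

    def __init__(self, name):
        self.name = name  # instance variable

    def speak(self):
        return self.name + " says meow"

c = Cat("kitty")  # create an object (instance)
print(c.speak())  # kitty says meow
```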
I love thinking about data structures, and how to organize them most efficiently for a specific task. In the normal course of programming in Python, we don't have to think about it very much - the choice between list and dict is obvious, and that's usually as far as things go. When things get more complex, though, the Collections Abstract Base Classes can be extremely useful. In my experience, they aren't universally known about, so in this post I'll show a couple of interesting uses for them.

List-Based Set

Using a set requires that the items held within are all hashable (that is, they implement the __hash__ method). This isn't always the case, though. For example, Django models that don't have a PK yet are unhashable, as are dicts. In these situations, it can be useful to have a data structure which acts like a set, but which is backed by a list to sidestep that requirement. Performance will be worse, but in some cases this is acceptable.

```python
>>> s = ListBasedSet([
...     {
...         'id': 1,
...     },
...     {
...         'id': 2,
...     },
... ])
>>> len(s)
2
```

This can be easily achieved using the MutableSet Abstract Base Class:

```python
# Note: this was collections.MutableSet when the article was written;
# the alias was removed in Python 3.10 in favor of collections.abc.
import collections.abc


class ListBasedSet(collections.abc.MutableSet):
    store = None

    def __init__(self, items):
        self.store = list(items) or []

    def __contains__(self, item):
        return item in self.store

    def __iter__(self):
        return iter(self.store)

    def __len__(self):
        return len(self.store)

    def add(self, item):
        if item not in self.store:
            self.store.append(item)

    def discard(self, item):
        try:
            self.store.remove(item)
        except ValueError:
            pass
```

This exposes the exact same API as a built-in set:

```python
>>> s.add({
...     'id': 3,
... })
>>> len(s)
3
>>> s.clear()
>>> len(s)
0
```

Lazy-Loading and Pagination

If you have an API that paginates results, but you'd like to expose it as a simple list that can be iterated over, the Collections Abstract Base Classes are a good way to do that.
As an example, APIs often return a response with a list of objects and the total number of objects available:

```json
{
    "objects": [
        {
            "id": 1
        },
        {
            "id": 2
        }
    ],
    "total": 2
}
```

In such a case, a class like the following could be used to load the data lazily, when an item in the list is accessed:

```python
import collections.abc

import requests


class LazyLoadedList(collections.abc.Sequence):
    def __init__(self, url):
        self.url = url
        self.page = 0
        self.num_items = 0
        self.store = []

    def load_data(self):
        data = requests.get(self.url, params={
            'page': self.page,
        }).json()
        self.num_items = data['total']
        objects = data.get('objects', [])
        self.store += objects
        return len(objects)

    def __getitem__(self, index):
        # Compare against the number of items actually fetched so far,
        # so that pages beyond the first are loaded on demand.
        while index >= len(self.store):
            self.page += 1
            if not self.load_data():
                break
        return self.store[index]

    def __len__(self):
        return self.num_items
```

With this implementation, you can simply iterate over the list as normal and have the paginated data loaded automatically:

```python
>>> l = LazyLoadedList('')
>>> for item in l:
...     process_item(item)
```

At Zapier, we use something very similar to this to wrap ElasticSearch responses.

I hope these examples show some of the things that can be achieved with Python's Collections Abstract Base Classes!
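The List-Based Set section above hinges on dicts being unhashable. That claim is easy to verify in plain Python, independent of any of the classes defined in the article:

```python
# A built-in set refuses unhashable items such as dicts.
try:
    s = {{'id': 1}}  # a set literal holding a dict
except TypeError as err:
    print(err)  # unhashable type: 'dict'
```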
# MVCC in PostgreSQL-6. Vacuum

We started with problems related to [isolation](https://habr.com/ru/company/postgrespro/blog/467437/), made a digression about [low-level data structure](https://habr.com/ru/company/postgrespro/blog/469087/), then discussed [row versions](https://habr.com/ru/company/postgrespro/blog/477648/) and observed how [data snapshots](https://habr.com/ru/company/postgrespro/blog/479512/) are obtained from row versions.

[Last time](https://habr.com/ru/company/postgrespro/blog/483768/) we talked about HOT updates and in-page vacuuming, and today we'll proceed to a well-known *vacuum vulgaris*. Really, so much has already been written about it that I can hardly add anything new, but the beauty of a full picture requires sacrifice. So bear with me.

Vacuum
======

What does vacuum do?
--------------------

In-page vacuum works fast, but frees only part of the space. It works within one table page and does not touch indexes.

The basic, «normal» vacuum is done using the VACUUM command, and we will call it just «vacuum» (leaving «autovacuum» for a separate discussion).

So, vacuum processes the entire table. It vacuums away not only dead tuples, but also references to them from all indexes.

Vacuuming is concurrent with other activities in the system. The table and indexes can be used in a regular way both for reads and updates (however, concurrent execution of commands such as CREATE INDEX, ALTER TABLE and some others is impossible).

Only those table pages where some activity took place are looked through. To detect them, the *visibility map* is used (as a reminder, the map tracks pages that contain only pretty old tuples, which are certain to be visible in all data snapshots). Only the pages not tracked by the visibility map are processed, and the map itself gets updated.

The *free space map* also gets updated in the process to reflect the extra free space in the pages.
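To picture which pages the scan touches, here is a toy model in Python (pure illustration with made-up page numbers, nothing like the real implementation):

```python
# Toy model: vacuum scans only the pages NOT marked all-visible
# in the visibility map; marked pages are skipped entirely.
pages = range(8)              # pretend the table has 8 pages
all_visible = {0, 1, 2, 5}    # pretend visibility-map bits

to_scan = [p for p in pages if p not in all_visible]
print(to_scan)  # [3, 4, 6, 7]
```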
As usual, let's create a table: ``` => CREATE TABLE vac( id serial, s char(100) ) WITH (autovacuum_enabled = off); => CREATE INDEX vac_s ON vac(s); => INSERT INTO vac(s) VALUES ('A'); => UPDATE vac SET s = 'B'; => UPDATE vac SET s = 'C'; ``` We use the *autovacuum\_enabled* parameter to turn the autovacuum process off. We will discuss it next time, and now it is critical for our experiments that we manually control vacuuming. The table now has three tuples, each of which are referenced from the index: ``` => SELECT * FROM heap_page('vac',0); ``` ``` ctid | state | xmin | xmax | hhu | hot | t_ctid -------+--------+----------+----------+-----+-----+-------- (0,1) | normal | 4000 (c) | 4001 (c) | | | (0,2) (0,2) | normal | 4001 (c) | 4002 | | | (0,3) (0,3) | normal | 4002 | 0 (a) | | | (0,3) (3 rows) ``` ``` => SELECT * FROM index_page('vac_s',1); ``` ``` itemoffset | ctid ------------+------- 1 | (0,1) 2 | (0,2) 3 | (0,3) (3 rows) ``` After vacuuming, dead tuples get vacuumed away, and only one, live, tuple remains. And only one reference remains in the index: ``` => VACUUM vac; => SELECT * FROM heap_page('vac',0); ``` ``` ctid | state | xmin | xmax | hhu | hot | t_ctid -------+--------+----------+-------+-----+-----+-------- (0,1) | unused | | | | | (0,2) | unused | | | | | (0,3) | normal | 4002 (c) | 0 (a) | | | (0,3) (3 rows) ``` ``` => SELECT * FROM index_page('vac_s',1); ``` ``` itemoffset | ctid ------------+------- 1 | (0,3) (1 row) ``` Note that the first two pointers acquired the status «unused» instead of «dead», which they would acquire with in-page vacuum. About the transaction horizon once again ---------------------------------------- How does PostgreSQL make out which tuples can be considered dead? We already touched upon the concept of transaction horizon when discussing [data snapshots](https://habr.com/ru/company/postgrespro/blog/479512/), but it won't hurt to reiterate such an important matter. Let's start the previous experiment again. 
``` => TRUNCATE vac; => INSERT INTO vac(s) VALUES ('A'); => UPDATE vac SET s = 'B'; ``` But before updating the row once again, let one more transaction start (but not end). In this example, it will use the Read Committed level, but it must get a true (not virtual) transaction number. For example, the transaction can change and even lock certain rows in any table, not obligatory `vac`: ``` | => BEGIN; | => SELECT s FROM t FOR UPDATE; ``` ``` | s | ----- | FOO | BAR | (2 rows) ``` ``` => UPDATE vac SET s = 'C'; ``` There are three rows in the table and three references in the index now. What will happen after vacuuming? ``` => VACUUM vac; => SELECT * FROM heap_page('vac',0); ``` ``` ctid | state | xmin | xmax | hhu | hot | t_ctid -------+--------+----------+----------+-----+-----+-------- (0,1) | unused | | | | | (0,2) | normal | 4005 (c) | 4007 (c) | | | (0,3) (0,3) | normal | 4007 (c) | 0 (a) | | | (0,3) (3 rows) ``` ``` => SELECT * FROM index_page('vac_s',1); ``` ``` itemoffset | ctid ------------+------- 1 | (0,2) 2 | (0,3) (2 rows) ``` Two tuples remain in the table: VACUUM decided that the (0,2) tuple cannot be vacuumed yet. The reason is certainly in the transaction horizon of the database, which in this example is determined by the non-completed transaction: ``` | => SELECT backend_xmin FROM pg_stat_activity WHERE pid = pg_backend_pid(); ``` ``` | backend_xmin | -------------- | 4006 | (1 row) ``` We can ask VACUUM to report what is happening: ``` => VACUUM VERBOSE vac; ``` ``` INFO: vacuuming "public.vac" INFO: index "vac_s" now contains 2 row versions in 2 pages DETAIL: 0 index row versions were removed. 0 index pages have been deleted, 0 are currently reusable. CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s. INFO: "vac": found 0 removable, 2 nonremovable row versions in 1 out of 1 pages DETAIL: 1 dead row versions cannot be removed yet, oldest xmin: 4006 There were 1 unused item pointers. Skipped 0 pages due to buffer pins, 0 frozen pages. 
0 pages are entirely empty. CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s. VACUUM ``` Note that: * `2 nonremovable row versions` — two tuples that cannot be deleted are found in the table. * `1 dead row versions cannot be removed yet` — one of them is dead. * `oldest xmin` shows the current horizon. Let's reiterate the conclusion: if a database has long-lived transactions (not completed or being performed really long), this can entail table bloat regardless of how often vacuuming happens. Therefore, OLTP- and OLAP-type workloads poorly coexist in one PostgreSQL database: reports running for hours will not let updated tables be duly vacuumed. Creation of a separate replica for reporting purposes may be a possible solution to this. After completion of an open transaction, the horizon moves, and the situation gets fixed: ``` | => COMMIT; ``` ``` => VACUUM VERBOSE vac; ``` ``` INFO: vacuuming "public.vac" INFO: scanned index "vac_s" to remove 1 row versions DETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s INFO: "vac": removed 1 row versions in 1 pages DETAIL: CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s INFO: index "vac_s" now contains 1 row versions in 2 pages DETAIL: 1 index row versions were removed. 0 index pages have been deleted, 0 are currently reusable. CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s. INFO: "vac": found 1 removable, 1 nonremovable row versions in 1 out of 1 pages DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 4008 There were 1 unused item pointers. Skipped 0 pages due to buffer pins, 0 frozen pages. 0 pages are entirely empty. CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s. 
VACUUM
```

Now only the latest, live version of the row is left in the page:

```
=> SELECT * FROM heap_page('vac',0);
```

```
 ctid  | state  |   xmin   | xmax  | hhu | hot | t_ctid
-------+--------+----------+-------+-----+-----+--------
 (0,1) | unused |          |       |     |     |
 (0,2) | unused |          |       |     |     |
 (0,3) | normal | 4007 (c) | 0 (a) |     |     | (0,3)
(3 rows)
```

The index also has only one row:

```
=> SELECT * FROM index_page('vac_s',1);
```

```
 itemoffset | ctid
------------+-------
          1 | (0,3)
(1 row)
```

What happens inside?
--------------------

Vacuuming must process the table and indexes at the same time and do this so as not to lock the other processes. How can it do so?

It all starts with the **scanning heap** phase (the visibility map taken into account, as already mentioned). In the pages read, dead tuples are detected, and their `tid`s are written down to a specialized array. The array is stored in the local memory of the vacuum process, where *maintenance\_work\_mem* bytes of memory are allocated for it. The default value of this parameter is 64 MB. Note that the full amount of memory is allocated at once, rather than as the need arises. However, if the table is not large, a smaller amount of memory is allocated.

Then we either reach the end of the table or the memory allocated for the array runs out. In either case, the **vacuuming indexes** phase starts. To this end, *each* index created on the table *is fully scanned* in search of the rows that reference the remembered tuples. The rows found are vacuumed away from index pages.

Here we arrive at the following situation: the indexes no longer have references to dead tuples, while the table still has them. And there is no contradiction here: when executing a query, we either don't hit dead tuples (with index access) or reject them at the visibility check (when scanning the table). After that, the **vacuuming heap** phase starts.
The table is scanned again to read the appropriate pages, vacuum them of the remembered tuples and release the pointers. We can do this since there are no references from the indexes anymore.

If the table was not entirely read during the first cycle, the array is cleared and the process repeats from the point it reached.

In summary:

* The table is always scanned twice.
* If vacuuming deletes so many tuples that they do not all fit in memory of size *maintenance\_work\_mem*, all the indexes will be scanned as many times as needed. For large tables, this can require a lot of time and add considerable system workload. Of course, queries will not be locked, but extra input/output is definitely undesirable.

To speed up the process, it makes sense to either call VACUUM more often (so that not too many tuples are vacuumed away each time) or allocate more memory.

As an aside, starting with version 11, PostgreSQL [can skip index scans](https://git.postgresql.org/gitweb/?p=postgresql.git;a=commit;h=857f9c36cda520030381bd8c2af20adf0ce0e1d4) unless a compelling need arises. This should make life easier for owners of large tables where rows are only added (but not changed).

Monitoring
----------

How can we figure out that VACUUM cannot do its job in one cycle? We've already seen the first way: to call the VACUUM command with the VERBOSE option. In this case, information about the phases of the process will be output to the console. Second, starting with version 9.6, the `pg_stat_progress_vacuum` view is available, which also provides all the necessary information. (A third way is also available: to output the information to the message log, but this works only for autovacuum, which will be discussed next time.)

Let's insert quite a few rows into the table, so that the vacuum process lasts long enough, and let's update all of them, so that VACUUM has work to do.
``` => TRUNCATE vac; => INSERT INTO vac(s) SELECT 'A' FROM generate_series(1,500000); => UPDATE vac SET s = 'B'; ``` Let's reduce the memory size allocated for the array of identifiers: ``` => ALTER SYSTEM SET maintenance_work_mem = '1MB'; => SELECT pg_reload_conf(); ``` Let's start VACUUM and while it is working, let's access the `pg_stat_progress_vacuum` view several times: ``` => VACUUM VERBOSE vac; ``` ``` | => SELECT * FROM pg_stat_progress_vacuum \gx ``` ``` | -[ RECORD 1 ]------+------------------ | pid | 6715 | datid | 41493 | datname | test | relid | 57383 | phase | vacuuming indexes | heap_blks_total | 16667 | heap_blks_scanned | 2908 | heap_blks_vacuumed | 0 | index_vacuum_count | 0 | max_dead_tuples | 174762 | num_dead_tuples | 174480 ``` ``` | => SELECT * FROM pg_stat_progress_vacuum \gx ``` ``` | -[ RECORD 1 ]------+------------------ | pid | 6715 | datid | 41493 | datname | test | relid | 57383 | phase | vacuuming indexes | heap_blks_total | 16667 | heap_blks_scanned | 5816 | heap_blks_vacuumed | 2907 | index_vacuum_count | 1 | max_dead_tuples | 174762 | num_dead_tuples | 174480 ``` Here we can see, in particular: * The name of the current phase — we discussed three main phases, but there are [more](https://postgrespro.com/docs/postgresql/11/progress-reporting#VACUUM-PHASES) of them in general. * The total number of table pages (`heap_blks_total`). * The number of scanned pages (`heap_blks_scanned`). * The number of already vacuumed pages (`heap_blks_vacuumed`). * The number of index vacuum cycles (`index_vacuum_count`). The general progress is determined by the ratio of `heap_blks_vacuumed` to `heap_blks_total`, but we should take into account that this value changes in large increments rather than smoothly because of scanning the indexes. The main attention, however, should be given to the number of vacuum cycles: the number greater than 1 means that the memory allocated was not enough to complete vacuuming in one cycle. 
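Plugging the two snapshots above into the ratio just described gives a rough progress figure. This is plain arithmetic, not a PostgreSQL API:

```python
# Rough vacuum progress from the pg_stat_progress_vacuum snapshots above:
# the ratio of vacuumed heap blocks to total heap blocks.
heap_blks_total = 16667

for heap_blks_vacuumed in (0, 2907):
    print(f"{heap_blks_vacuumed / heap_blks_total:.1%}")
# 0.0%
# 17.4%
```

Remember that, as noted above, this figure advances in large increments because index scans happen between heap passes.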
The output of the VACUUM VERBOSE command, already completed by that time, will show the general picture: ``` INFO: vacuuming "public.vac" ``` ``` INFO: scanned index "vac_s" to remove 174480 row versions DETAIL: CPU: user: 0.50 s, system: 0.07 s, elapsed: 1.36 s INFO: "vac": removed 174480 row versions in 2908 pages DETAIL: CPU: user: 0.02 s, system: 0.02 s, elapsed: 0.13 s ``` ``` INFO: scanned index "vac_s" to remove 174480 row versions DETAIL: CPU: user: 0.26 s, system: 0.07 s, elapsed: 0.81 s INFO: "vac": removed 174480 row versions in 2908 pages DETAIL: CPU: user: 0.01 s, system: 0.02 s, elapsed: 0.10 s ``` ``` INFO: scanned index "vac_s" to remove 151040 row versions DETAIL: CPU: user: 0.13 s, system: 0.04 s, elapsed: 0.47 s INFO: "vac": removed 151040 row versions in 2518 pages DETAIL: CPU: user: 0.01 s, system: 0.02 s, elapsed: 0.08 s ``` ``` INFO: index "vac_s" now contains 500000 row versions in 17821 pages DETAIL: 500000 index row versions were removed. 8778 index pages have been deleted, 0 are currently reusable. CPU: user: 0.00 s, system: 0.00 s, elapsed: 0.00 s. INFO: "vac": found 500000 removable, 500000 nonremovable row versions in 16667 out of 16667 pages DETAIL: 0 dead row versions cannot be removed yet, oldest xmin: 4011 There were 0 unused item pointers. 0 pages are entirely empty. CPU: user: 1.10 s, system: 0.37 s, elapsed: 3.71 s. VACUUM ``` We can see here that three cycles over the indexes were done, and in each cycle, 174480 pointers to dead tuples were vacuumed away. Why exactly this number? One `tid` occupies 6 bytes, and 1024\*1024/6 = 174762, which is the number that we see in `pg_stat_progress_vacuum.max_dead_tuples`. In reality, slightly less may be used: this ensures that when a next page is read, all pointers to dead tuples will fit in memory for sure. Analysis -------- Analysis, or, in other words, collecting statistics for the query planner, is formally unrelated to vacuuming at all. 
Nevertheless, we can perform the analysis not only using the ANALYZE command, but combine vacuuming and analysis in VACUUM ANALYZE. Here the vacuum is done first and then the analysis, so this gives no gains. But as we will see later, autovacuum and automatic analysis are done in one process and are controlled in a similar way.

VACUUM FULL
===========

As noted above, vacuum frees more space than in-page vacuum, but still it does not entirely solve the problem. If for some reason the size of a table or an index has increased a lot, VACUUM will free space inside the existing pages: «holes» will occur there, which will then be used for insertion of new tuples. But the number of pages won't change, and therefore, from the viewpoint of the operating system, the files will occupy exactly the same space as before the vacuum. And this is no good because:

* Full scan of the table (or index) slows down.
* A larger buffer cache may be required (since it is the pages that are stored there and the density of useful information decreases).
* An extra level can appear in the index tree, which will slow down index access.
* The files occupy extra space on disk and in backup copies.

(The only exception is fully vacuumed pages, located at the end of the file. These pages are trimmed from the file and returned to the operating system.)

If the share of useful information in the files falls below some reasonable limit, the administrator can do VACUUM FULL of the table. In this case, the table and all its indexes are rebuilt from scratch and the data are packed in the most compact way (of course, the `fillfactor` parameter taken into account). During the rebuild, PostgreSQL first rebuilds the table and then each of its indexes one-by-one. For each object, new files are created, and old files are removed at the end of rebuilding. We should take into account that extra disk space will be needed in the process.
To illustrate this, let's again insert a certain number of rows into the table: ``` => TRUNCATE vac; => INSERT INTO vac(s) SELECT 'A' FROM generate_series(1,500000); ``` How can we estimate the information density? To do this, it's convenient to use a specialized extension: ``` => CREATE EXTENSION pgstattuple; => SELECT * FROM pgstattuple('vac') \gx ``` ``` -[ RECORD 1 ]------+--------- table_len | 68272128 tuple_count | 500000 tuple_len | 64500000 tuple_percent | 94.47 dead_tuple_count | 0 dead_tuple_len | 0 dead_tuple_percent | 0 free_space | 38776 free_percent | 0.06 ``` The function reads the entire table and shows statistics: which data occupies how much space in the files. The main information of our interest now is the `tuple_percent` field: the percentage of useful data. It is less than 100 because of the inevitable information overhead inside a page, but is still pretty high. For the index, different information is output, but the `avg_leaf_density` field has the same meaning: the percentage of useful information (in leaf pages). ``` => SELECT * FROM pgstatindex('vac_s') \gx ``` ``` -[ RECORD 1 ]------+--------- version | 3 tree_level | 3 index_size | 72802304 root_block_no | 2722 internal_pages | 241 leaf_pages | 8645 empty_pages | 0 deleted_pages | 0 avg_leaf_density | 83.77 leaf_fragmentation | 64.25 ``` And these are the sizes of the table and indexes: ``` => SELECT pg_size_pretty(pg_table_size('vac')) table_size, pg_size_pretty(pg_indexes_size('vac')) index_size; ``` ``` table_size | index_size ------------+------------ 65 MB | 69 MB (1 row) ``` Now let's delete 90% of all rows. We do a random choice of rows to delete, so that at least one row is highly likely to remain in each page: ``` => DELETE FROM vac WHERE random() < 0.9; ``` ``` DELETE 450189 ``` What size will the objects have after VACUUM? 
``` => VACUUM vac; => SELECT pg_size_pretty(pg_table_size('vac')) table_size, pg_size_pretty(pg_indexes_size('vac')) index_size; ``` ``` table_size | index_size ------------+------------ 65 MB | 69 MB (1 row) ``` We can see that the size did not change: VACUUM no way can reduce the size of files. And this is although the information density decreased by approximately 10 times: ``` => SELECT vac.tuple_percent, vac_s.avg_leaf_density FROM pgstattuple('vac') vac, pgstatindex('vac_s') vac_s; ``` ``` tuple_percent | avg_leaf_density ---------------+------------------ 9.41 | 9.73 (1 row) ``` Now let's check what we get after VACUUM FULL. Now the table and indexes use the following files: ``` => SELECT pg_relation_filepath('vac'), pg_relation_filepath('vac_s'); ``` ``` pg_relation_filepath | pg_relation_filepath ----------------------+---------------------- base/41493/57392 | base/41493/57393 (1 row) ``` ``` => VACUUM FULL vac; => SELECT pg_relation_filepath('vac'), pg_relation_filepath('vac_s'); ``` ``` pg_relation_filepath | pg_relation_filepath ----------------------+---------------------- base/41493/57404 | base/41493/57407 (1 row) ``` The files are replaced with new ones now. The sizes of the table and indexes considerably decreased, while the information density increased accordingly: ``` => SELECT pg_size_pretty(pg_table_size('vac')) table_size, pg_size_pretty(pg_indexes_size('vac')) index_size; ``` ``` table_size | index_size ------------+------------ 6648 kB | 6480 kB (1 row) ``` ``` => SELECT vac.tuple_percent, vac_s.avg_leaf_density FROM pgstattuple('vac') vac, pgstatindex('vac_s') vac_s; ``` ``` tuple_percent | avg_leaf_density ---------------+------------------ 94.39 | 91.08 (1 row) ``` Note that the information density in the index is even greater than the original one. It is more advantageous to rebuild an index (B-tree) from the data available than insert the data in an existing index row by row. 
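The density figures reported by pgstattuple above can be cross-checked with simple arithmetic: 500,000 tuples occupying 64,500,000 bytes in a 68,272,128-byte file average about 129 bytes each, so after deleting 450,189 rows the survivors should fill roughly 9.4% of the unchanged file. A back-of-the-envelope sketch, not an exact on-disk accounting:

```python
# Cross-check the tuple_percent figures from the pgstattuple output above.
table_len = 68_272_128            # file size in bytes; plain VACUUM keeps it
tuple_len = 64_500_000            # total bytes of the 500,000 live tuples
per_tuple = tuple_len / 500_000   # ~129 bytes per tuple

print(round(100 * tuple_len / table_len, 2))  # 94.47 (density before DELETE)

live_after = 500_000 - 450_189    # rows surviving DELETE ... random() < 0.9
print(round(100 * live_after * per_tuple / table_len, 2))  # 9.41 (after VACUUM)
```

Both numbers match the reported `tuple_percent` values, which confirms that plain VACUUM left the file size untouched.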
The functions of the [pgstattuple](https://postgrespro.com/docs/postgresql/11/pgstattuple) extension that we used read the entire table. This is inconvenient for large tables, so the extension also provides the `pgstattuple_approx` function, which skips the pages marked in the visibility map and returns approximate figures.

Another, even less accurate, way is to roughly estimate the ratio of the data size to the file size using the system catalog. You can find examples of such queries [in the wiki](https://wiki.postgresql.org/wiki/Show_database_bloat).

VACUUM FULL is not intended for regular use, since it blocks all work with the table (queries included) for the entire duration of the process. For a heavily used system, this is clearly likely to be unacceptable. Locks will be discussed separately; for now we'll only mention the [pg\_repack](https://github.com/reorg/pg_repack) extension, which locks the table for only a short period of time at the end of its work.

Similar commands
----------------

A few other commands also fully rebuild tables and indexes and therefore resemble VACUUM FULL. All of them completely block any work with the table, and all of them remove the old data files and create new ones.

The CLUSTER command is similar to VACUUM FULL in every respect, but it additionally orders the tuples physically according to one of the available indexes. In some cases this enables the planner to use index access more efficiently. Bear in mind, however, that clustering is not maintained: the physical order of the tuples degrades with subsequent changes to the table.

The REINDEX command rebuilds an individual index on the table. VACUUM FULL and CLUSTER actually use this command to rebuild indexes.

The logic of the TRUNCATE command is similar to that of DELETE: it deletes all table rows. But DELETE, as was already mentioned, only marks tuples as deleted, which requires further vacuuming, whereas TRUNCATE simply creates a new, clean file instead.
As a rule, this works faster, but bear in mind that TRUNCATE blocks any work with the table until the end of the transaction. [Read on](https://habr.com/ru/company/postgrespro/blog/486104/).
Another XForms Example

As you will probably have appreciated from the descriptions of the components of XForms throughout this chapter, XForms is a substantial and potentially complex XML application language, and many new concepts have been introduced. This section shows some of the functionality of a fairly long piece of working XForms code.

Caution: Because the code shown relates to a namespace that was available to the XForms Working Group editors, note that the namespace URI is not one you should use in your own code. The XForms Working Group will likely define a new namespace URI in association with a final version of the XForms specification.

The following code is used with the permission of Mikko Honkala and demonstrates some ...
What is react-vis?

React-vis is a React visualization library created by Uber. With it you can easily create common charts, such as line, area, and bar charts, pie and donut charts, treemaps, and many more.

React-vis is a good option because it is:
- Simple
- Flexible
- Integrated with React.

In this article I want to show how to build a simple line chart using react-vis.

Installation

First of all, you need to install react-vis in your project. For demo purposes I used an empty project created with create-react-app. Installing react-vis is as easy as

```
npm install react-vis --save
```

Examples

Of course, it is assumed that you have some data you want to visualize. For my example I will use a dataset from GitHub Language Statistics with the number of pull requests per programming language.

Nothing new here: I fetch data in componentDidMount, then set the state of my app and pass it as props to the child component. As I am interested in JavaScript statistics, I also filter the results.

```
import React, { Component } from 'react';
import './App.css';
import Chart from './components/chart';

const API_URL = "";

class App extends Component {
  constructor(props) {
    super(props);
    this.state = {
      results: [],
    };
  }

  componentDidMount() {
    fetch(API_URL)
      .then(response => {
        if (response.ok) {
          return response.json();
        } else {
          throw new Error('something went wrong');
        }
      })
      .then(response => this.setState({
        results: response.results.filter((r) => {
          return r.name === 'JavaScript';
        })
      }));
  }

  render() {
    const { results } = this.state;
    return (
      <div className="App">
        <Chart data={results} />
      </div>
    );
  }
}

export default App;
```

Now let's move on to our Chart component. The Chart component is a functional component because it has no state. On my chart I want to display the number of pull requests over specific periods of time. That is why I will go for a simple LineSeries diagram.
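Before wiring LineSeries into the component, the point format it consumes can be sketched in plain JavaScript (the rows below are hypothetical, shaped like the GitHub dataset used in this article):

```javascript
// Hypothetical rows shaped like the GitHub language-statistics dataset.
const rows = [
  { year: 2016, quarter: 1, count: 152000 },
  { year: 2016, quarter: 2, count: 168500 },
];

// LineSeries consumes an array of {x, y} points; here x is a "year/quarter"
// label and y is the pull-request count scaled down to thousands.
const points = rows.map((d) => ({
  x: d.year + '/' + d.quarter,
  y: d.count / 1000,
}));

console.log(points);
// [ { x: '2016/1', y: 152 }, { x: '2016/2', y: 168.5 } ]
```

The same transform appears inside the Chart component below, applied to the fetched props data instead of a literal array.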
To be able to use it, I import the necessary components from the library:

```
import {XYPlot, XAxis, YAxis, VerticalGridLines, LineSeries} from 'react-vis';
```

XYPlot is a wrapper for the rest of the elements, XAxis and YAxis show the X and Y axes, VerticalGridLines creates a grid, and LineSeries is the chart type itself.

Simple use-case

Now let's create the chart component with some random data first, just to get an idea of how it works:

```
import React from 'react';
import {XYPlot, XAxis, YAxis, VerticalGridLines, HorizontalGridLines, LineSeries} from 'react-vis';

const Chart = (props) => {
  return (
    <XYPlot width={300} height={300}>
      <VerticalGridLines />
      <HorizontalGridLines />
      <XAxis />
      <YAxis />
      <LineSeries
        data={[
          {x: 1, y: 4},
          {x: 5, y: 2},
          {x: 15, y: 6}
        ]} />
    </XYPlot>
  );
}

export default Chart;
```

As you can see, I pass an array of objects containing the x and y values I want to show on the diagram to the LineSeries component. And here comes some magic! My chart component at this point looks like this:

Applying real data

Now let's pass actual data to our component. I want to show the number of pull requests for specific periods of time: that is "count", together with "year" and "quarter" from my dataset. So I will create an array of x and y values from this data:

```
const dataArr = props.data.map((d) => {
  return {x: d.year + '/' + d.quarter, y: parseFloat(d.count / 1000)};
});
```

Let's see what happens when I pass my array to the LineSeries component:

```
<LineSeries data={dataArr} />
```

Because I want to show quarters on the x axis, I need to specify the type of the axis as follows:

```
xType="ordinal"
```

Not bad, but I still want to modify the look of my chart a little bit.
So I will add some styles as well:

```
<LineSeries data={dataArr} style={{stroke: 'violet', strokeWidth: 3}} />
```

Here is the full code for the chart component:

```
import React from 'react';
import {XYPlot, XAxis, YAxis, VerticalGridLines, HorizontalGridLines, LineSeries} from 'react-vis';

const Chart = (props) => {
  const dataArr = props.data.map((d) => {
    return {x: d.year + '/' + d.quarter, y: parseFloat(d.count / 1000)};
  });
  return (
    <XYPlot xType="ordinal" width={1000} height={500}>
      <VerticalGridLines />
      <HorizontalGridLines />
      <XAxis title="Period of time (year and quarter)" />
      <YAxis title="Number of pull requests (thousands)" />
      <LineSeries data={dataArr} style={{stroke: 'violet', strokeWidth: 3}} />
    </XYPlot>
  );
}

export default Chart;
```

And here we go:

Conclusion

I hope that you are now convinced that react-vis is an easy-to-use, powerful tool. It is a good choice for presenting any type of data. For further information and experiments, check the react-vis documentation and examples. Enjoy your data visualization!
Nullable Operators Nullable operators are binary arithmetic or comparison operators that work with nullable arithmetic types on one or both sides. Nullable types arise frequently when you work with data from sources such as databases that allow nulls in place of actual values. Nullable operators are used frequently in query expressions. In addition to nullable operators for arithmetic and comparison, conversion operators can be used to convert between nullable types. There are also nullable versions of certain query operators. Table of Nullable Operators The following table lists nullable operators supported in the F# language. Remarks The nullable operators are included in the NullableOperators module in the namespace Microsoft.FSharp.Linq. The type for nullable data is System.Nullable<'T>. In query expressions, nullable types arise when selecting data from a data source that allows nulls instead of values. In a SQL Server database, each data column in a table has an attribute that indicates whether nulls are allowed. If nulls are allowed, the data returned from the database can contain nulls that cannot be represented by a primitive data type such as int, float, and so on. Therefore, the data is returned as a System.Nullable<int> instead of int, and System.Nullable<float> instead of float. The actual value can be obtained from a System.Nullable<'T> object by using the Value property, and you can determine if a System.Nullable<'T> object has a value by calling the HasValue method. Another useful method is the System.Nullable<'T>.GetValueOrDefault method, which allows you to get the value or a default value of the appropriate type. The default value is some form of "zero" value, such as 0, 0.0, or false. Nullable types may be converted to non-nullable primitive types using the usual conversion operators such as int or float. It is also possible to convert from one nullable type to another nullable type by using the conversion operators for nullable types. 
The appropriate conversion operators have the same name as the standard ones, but they are in a separate module, the Nullable module in the Microsoft.FSharp.Linq namespace. Typically, you open this namespace when working with query expressions. In that case, you can use the nullable conversion operators by adding the prefix Nullable. to the appropriate conversion operator, as shown in the following code. open Microsoft.FSharp.Linq let nullableInt = new System.Nullable<int>(10) // Use the Nullable.float conversion operator to convert from one nullable type to another nullable type. let nullableFloat = Nullable.float nullableInt // Use the regular non-nullable float operator to convert to a non-nullable float. printfn "%f" (float nullableFloat) The output is 10.000000. Query operators on nullable data fields, such as sumByNullable, also exist for use in query expressions. The query operators for non-nullable types are not type-compatible with nullable types, so you must use the nullable version of the appropriate query operator when you are working with nullable data values. For more information, see Query Expressions. The following example shows the use of nullable operators in an F# query expression. The first query shows how you would write a query without a nullable operator; the second query shows an equivalent query that uses a nullable operator. For the full context, including how to set up the database to use this sample code, see Walkthrough: Accessing a SQL Database by Using Type Providers. 
open System open System.Data open System.Data.Linq open Microsoft.FSharp.Data.TypeProviders open Microsoft.FSharp.Linq [<Generate>] type dbSchema = SqlDataConnection<"Data Source=MYSERVER\INSTANCE;Initial Catalog=MyDatabase;Integrated Security=SSPI;"> let db = dbSchema.GetDataContext() query { for row in db.Table2 do where (row.TestData1.HasValue && row.TestData1.Value > 2) select row } |> Seq.iter (fun row -> printfn "%d %s" row.TestData1.Value row.Name) query { for row in db.Table2 do // Use a nullable operator ?> where (row.TestData1 ?> 2) select row } |> Seq.iter (fun row -> printfn "%d %s" (row.TestData1.GetValueOrDefault()) row.Name)
InnoMedia ESBC Enterprise Session Border Controller Administration Guide
SW December, 2014
Copyright 2014 InnoMedia Inc. All rights reserved.

Table of Contents

1 Safety Check
  1.1 Important Safety Instructions
  1.2 Safety Guidelines (General Precautions; Protecting Against Electrostatic Discharge)
2 Getting Started with the ESBC
  2.1 ESBC Model Differentiations and Key Features (TDM PRI with PRI ESBC (ESBC 9x80 series); SIP Trunking Using ESBCs with B2BUA (ESBC 8xxx and 9xxx series); Hosted Service Using ESBCs with SIP ALG (ESBC 8xxx and 9xxx series); High Capacity B2BUA and Transcoding Integrated Model: ESBC 10K MDX series)
  2.2 Capacity and License
  2.3 Installing the ESBC 9xxx and 8xxx Series to an Enterprise Network
  2.4 Installing the ESBC 10K Series to an Enterprise Network
  2.5 Web Based Management (HTTP, HTTPS) (The Console Home Page: System Overview; Real Time Activity Monitor: Network Status, Port Mapping Table, Routing Table, Telephony Activities, SIP Server Redundancy, Line Status, Active Calls)
  2.6 CLI Based Management
  2.7 SNMP Based Management (Trap host configurations; SNMP v3 setup)
  2.8 (SMTP) Based Management
  2.9 XML Config-File Based Management
  2.10 Auto-Provisioning Based Management (Basic Provisioning Mechanism Configurations; DHCP Provisioning Method; HTTP/HTTPS/TFTP/SecHTTP Provisioning Methods; Server Initiated Provisioning: SIP NOTIFY)
  2.11 EMS Based Management
3 Network Requirements and Configurations
  3.1 Determining the Network Requirements for Voice Services (network factors which affect quality of service: Bandwidth Requirement, Latency, Jitter, Packet Loss)
  3.2 Internet Interface Configurations
  3.3 LAN Interface Configurations (LAN interface configurations for voice services: LAN side topology, RTP default gateway for SIP trunk (B2BUA) voice services, static routing table configurations, DHCP Server, Client List, MAC Binding; Advanced Configurations for Voice and Data Featured Services: enabling the Management Port, Bridge Port, and Router Port, Ethernet advanced configurations for LAN interfaces; Remote access to the ESBC LAN interfaces and LAN hosts: through VPN, through Port Forwarding; Enabling Data Service Access for the ESBC LAN hosts: DNS Proxy, Access Control, UPnP, DMZ (De-militarized Zone), Miscellaneous)
  3.4 Using an NTP Server to Offer Time Information to ESBC LAN Devices
  3.5 QoS Control (Ethernet models: ESBC 93xx and 83xx; Cable Modem embedded models: ESBC 95xx and 85xx; DQoS service flow settings)
  3.6 VLAN for Multi-Service Capabilities and Traffic Shaping
4 SIP Trunk Voice Service Configuration
  4.1 Routing Calls Between the ESBC and SIP Trunk (Service Provider) (Trunk Settings: SIP Server; SIP server redundancy: dynamic query and static input for redundant SIP servers; Codec Filter)
  4.2 Adding and Configuring User Accounts on the ESBC (SIP UA settings: public identity batch add, batch modify/delete, individual settings and authentication, implicit registration with the registration agent, bulk assigning; SIP trunk parameter configuration: SIP parameters, interoperability, security, features; analog interface: FXS configuration for FAX and modem calls, media parameters, call features for analog ports)
  4.3 Verifying Calls Between the ESBC and SIP Trunk: Test Agent (settings and usage)
  4.4 Routing Calls: ESBC with a SIP-PBX (SIP PBX profile; basic SIP parameters; interoperability; ESBC-PBX security configuration; ESBC-PBX call feature configuration; SIP-PBX and SIP-client authentication)
  4.5 ESBC System Global SIP Settings (SIP parameters; system Music on Hold (MOH); Filter SIP Method; customized SIP response code settings)
  4.6 Numbering Plan (configuring numbers and formulating digit translation rules)
  4.7 Emergency Call Configuration (adding or deleting emergency call numbers; connection settings for emergency call numbers)
  4.8 Media Transcoding (introduction; enabling, editing, or adding transcoding profiles; DTMF, FAX, and voice codec transcoding; typical deployment example)
  4.9 Routing Calls: ESBC with a PRI-PBX (PRI spans and channels; span statistics; span connection settings; ISDN interoperability; ringback tone (RBT) or early media for calls; B-channel maintenance; user account assignment to PRI span groups; B-channel hunting schemes; PRI media profile settings; PRI diagnostics: bit error rate testing (BERT) and span loopback; mapping between SIP response codes and PRI cause codes)
5 Hosted Voice Service
  5.1 ESBC SIP-ALG Module Features and Benefits
  5.2 Configuring SIP Phones for Hosted Services via the ESBC (SIP phones on the ESBC LAN (NAT and Voice port(s)); ESBC SIP ALG for hosted voice service)
  5.3 FQDN to IP: Static Mapping
  5.4 List of Active Devices for Hosted Service
6 OAM&P, Security and Fraud Protection
  6.1 User Account Configurations
  6.2 System Time
  6.3 Management Control
  6.4 Maintenance (reboot; restore factory default; restore WAN MAC address; firmware update; rollback software; import/export XML or binary config)
  6.5 Auto Backup System Configuration Periodically
  6.6 Battery Status
  6.7 Call History and Logs (settings; records; VQM (Voice Quality Measurement))
  6.8 Voice Quality Measurement and SLA Assurance (basic configuration; voice quality statistics line chart; SLA (Service Level Agreement) parameters; advanced settings)
  6.9 Alert Notification: SNMP Trap Alarms
  6.10 Security (system access control; IP layer protection: access control list; SIP layer protection: SIP firewall rules and logs; SIP message domain/IP examination to prevent attack or fraud from the LAN or WAN interface; audit logs)
  6.11 System Information
7 Diagnosis
  7.1 Test Calls
  7.2 Syslog (debugging syslog; operational syslog)
  7.3 Call Trace (ladder diagram; packet capture)
  7.4 Network Diagnostic Utilities (ping test; traceroute; nslookup)
8 Installers and Operators
  8.1 Installation via Technician Web Console (ESBC-9x78, 9x28, and 10K series models: Technician-Trunk interface, telephony and network diagnostics, connecting/registering a SIP PBX to the ESBC, LAN setting, monitor; ESBC-9x80 series models: switching between T1/E1 and transcoding)
  8.2 Operator Management via the Operator Web Console
9 SIP Firewall and Header Manipulation Rules (SHMR)
  9.1 SIP Header Manipulation and Firewall Scripts
  9.2 SIP Firewall

1 Safety Check

1.1 Important Safety Instructions

This section contains important safety information you should know before working with the ESBC. Use the following guidelines to ensure your own personal safety and to help protect your ESBC from potential damage.

The warning symbol means danger: you are in a situation that could cause bodily injury. Before you work on any equipment, be aware of the hazards involved with electrical circuitry and be familiar with standard practices for preventing accidents.

Only trained and qualified personnel should be allowed to install, replace, or service this equipment.

Before working on a system that has an on/off switch, turn OFF the power and unplug the power cord.

This unit is intended for installation in restricted access areas. A restricted access area is one where access can be gained only by service personnel through the use of a special tool, lock and key, or other means of security, and is controlled by the authority responsible for the location.
This product relies on the building's installation for short-circuit (overcurrent) protection. Ensure that a fuse or circuit breaker no larger than 120 VAC, 15A U.S. (240 VAC, 10A international) is used on the phase conductors (all current-carrying conductors).

This equipment must be grounded. Never operate the equipment in the absence of a suitably installed ground conductor. Contact the appropriate electrical inspection authority or an electrician if you are uncertain that suitable grounding is available.

Do not work on the system or connect or disconnect cables during periods of lightning activity.

Before working on equipment that is connected to power lines, remove jewelry (including rings, necklaces, and watches). Metal objects heat up when connected to power and ground and can cause serious burns or weld the metal object to the terminals.

The safety cover is an integral part of the product. Do not operate the unit without the safety cover installed. Operating the unit without the cover in place will invalidate the safety approvals and pose a risk of fire and electrical hazards.

Enclosure covers serve three important functions: they prevent exposure to hazardous voltages and currents inside the chassis; they contain electromagnetic interference (EMI) that might disrupt other equipment; and they direct the flow of cooling air through the chassis. Do not operate the system unless all covers are in place.

Ultimate disposal of this product should be handled according to all national laws and regulations.

1.2 Safety Guidelines

To reduce the risk of bodily injury, electrical shock, fire, and damage to the equipment, observe the following precautions.

General Precautions

Observe the following general precautions for using and working with your system:

Opening or removing covers might expose you to electrical shock. Components inside these compartments should be serviced only by an authorized service technician.
If any of the following conditions occur, unplug the product from the electrical outlet and replace the part or contact your authorized service provider:

- The power cable, extension cord, or plug is damaged.
- An object has fallen into the product.
- The product has been exposed to water.
- The product has been dropped or damaged.
- The product does not operate correctly when you follow the operating instructions.

Keep your system components away from radiators and heat sources. Allow the product to cool before removing covers or touching internal components.

Use the correct external power source. Operate the product only from the type of power source indicated on the electrical ratings label. If you are not sure of the type of power source required, consult your service representative or local power company.

Use only approved power cables. If you have not been provided with a power cable for your ESBC or for any AC-powered option intended for your system, purchase a power cable that is approved for use in your country. The power cable must be rated for the product and for the voltage and current marked on the product's electrical ratings label. The voltage and current rating of the cable should be greater than the ratings marked on the product.

To help prevent electric shock, plug the system components and peripheral power cables into properly grounded electrical outlets. These cables are equipped with three-prong plugs to help ensure proper grounding. Do not use adapter plugs or remove the grounding prong from a cable. If you must use an extension cord, use a three-wire cord with properly grounded plugs.

Observe extension cord and power strip ratings. Make sure that the total ampere rating of all products plugged into the extension cord or power strip does not exceed 80 percent of the extension cord or power strip ampere ratings limit.
To help protect your system components from sudden, transient increases and decreases in electrical power, use a surge suppressor, line conditioner, or uninterruptible power supply (UPS).

Position cables and power cords carefully; route cables and the power cord and plug so that they cannot be stepped on or tripped over. Be sure that nothing rests on your system components' cables or power cord.

Do not modify power cables or plugs. Consult a licensed electrician or your power company for site modifications. Always follow your local or national wiring rules.

Protecting Against Electrostatic Discharge

Static electricity can harm delicate components inside the equipment. To prevent static damage, discharge static electricity from your body before you touch any of your system's electronic components. You can do so by touching an unpainted metal surface on the chassis.

You can also take the following steps to prevent damage from electrostatic discharge (ESD):

- When unpacking a static-sensitive component from its shipping carton, do not remove the component from the antistatic packing material until you are ready to install the component in your system. Just before unwrapping the antistatic packaging, be sure to discharge static electricity from your body.
- When transporting a sensitive component, first place it in an antistatic container or packaging.
- Handle all sensitive components in a static-safe area. If possible, use antistatic floor pads and workbench pads.

2 Getting Started with the ESBC

2.1 ESBC Model Differentiations and Key Features

The InnoMedia ESBC product family seamlessly migrates your enterprise telephony system to state-of-the-art IP-based SIP trunking or hosted voice services.

TDM PRI with PRI ESBC (ESBC 9x80 series)

The InnoMedia Enterprise Session Border Controllers are capable of both B2BUA and SIP ALG operation as well as having TDM PRI interfaces.
These features allow broadband service providers to offer services to TDM-PBX customers today, with an easy migration path to SIP trunking or hosted services later, when the customers transition from TDM to IP by adopting IP-PBX or IP Centrex services.

Figure 1. TDM-PRI with the PRI ESBC

SIP Trunking Using ESBCs with B2BUA (ESBC 8xxx and 9xxx series)

As part of InnoMedia's comprehensive business voice service solutions, InnoMedia's highly manageable Enterprise Session Border Controller (ESBC) product family provides complete B2BUA functionality for comprehensive signaling normalization/header manipulation, transcoding for codec/fax/DTMF media translation, NAT traversal, topology hiding, SHMR for in-field header manipulation, QoS management, and many other features that allow a service provider to deliver a scalable and reliable SIP trunking offering. These IMS-ready and SIPconnect-compliant ESBCs are ideal for service providers looking for seamless network migration.

Figure 2. SIP Trunking Using ESBCs with B2BUA

Hosted Service Using ESBCs with SIP ALG (ESBC 8xxx and 9xxx series)

As part of InnoMedia's complete and comprehensive business voice service solutions for service providers, InnoMedia's highly manageable ESBC product family provides a SIP ALG function for topology hiding, NAT traversal, SIP Header Manipulation Rules (SHMR) for in-field header manipulation, QoS management, and many other features that allow a service provider to deliver hosted voice services. Being highly integrated, InnoMedia's ESBC family is ideal for service providers looking to offer reliable and scalable hosted services.

Figure 3. Hosted Service Using ESBCs with SIP ALG

High Capacity B2BUA and Transcoding Integrated Model: ESBC 10K MDX series

The ESBC10K-MDX, a carrier-grade, high-capacity, high-performance, and cost-effective ESBC solution with an optimum level of B2BUA and media transcoding integration, enables service providers to offer highly scalable SIP trunking and hosted voice services to mid- and large-size enterprise customers.

Figure 4. High Capacity B2BUA and Transcoding Integrated Model: ESBC 10K MDX series

Table 1. The ESBC Product Summary

Model Name   | WAN          | B2BUA (SIP Trunk) | SIP ALG (Hosted Service) | Transcoding | T1/E1 | QoS
ESBC8528-4B  | DOCSIS 2.0   | Yes | Yes | -   | -   | Smart-DQoS
ESBC9528-4B  | DOCSIS 3.0   | Yes | Yes | -   | -   | Smart-DQoS
ESBC9578-4B  | DOCSIS 3.0   | Yes | Yes | Yes | -   | Smart-DQoS
ESBC9580-4B  | DOCSIS 3.0   | Yes | Yes | -   | Yes | Smart-DQoS
ESBC8328-4B  | 10/100BT     | Yes | Yes | -   | -   | ToS/DSCP
ESBC9328-4B  | Gigabit      | Yes | Yes | -   | -   | ToS/DSCP
ESBC9378-4B  | Gigabit      | Yes | Yes | Yes | -   | ToS/DSCP
ESBC9380-4B  | Gigabit      | Yes | Yes | -   | Yes | ToS/DSCP
ESBC-10K-MDX | Dual Gigabit | Yes | Yes | Yes |     | ToS/DSCP

2.2 Capacity and License

The ESBC series platforms support software licenses, so a platform can be upgraded or downgraded in the field. The ESBC license number is essentially the number of concurrent calls allowed on a system; hence, adding licenses increases the maximum number of concurrent calls handled by the device. There is no need to purchase any other license type for registered SIP UAs.

Check your ESBC system capacities from the following page: log in to the ESBC administrative web console and navigate to System > License (see section 2.5 for a description of logging in to the console).

Figure 5. Managing the ESBC licenses

License Control fields:
- Licensed Date: The date when the license string (or file) was input to the system.
- B2BUA Calls: The number of concurrent calls for SIP trunk voice service.
- SIP ALG Calls: The number of concurrent calls for hosted voice service.

Note: The ESBC system maximum capacities are model dependent. See section 6.11 for the maximum capacities of your ESBC system.
2.3 Installing the ESBC 9xxx and 8xxx Series to an Enterprise Network

Getting started: please refer to the document "InnoMedia ESBC Deployment Checklist for Voice Service Deployment".

Figure 6. Hardware interface: the ESBC9580 back panel

Note 1. The ESBC93xx series WAN interface is Gigabit Ethernet; the ESBC95xx WAN interface is a DOCSIS 3.0 cable modem.
Note 2. The T1/E1 interfaces are applicable to ESBC9x80 models only.

Step 1. Connect the panel ports.

1. Connect the active RF coaxial cable to the CABLE connector (for ESBC 8528/9528/9580) or the RJ-45 cable to the WAN connector (for ESBC 8328/9328/9380/9580/9378).
2. Connect the administrator PC to LAN port 1.
3. Connect LAN ports 2, 3, or 4 to the corporate LAN which resides in the same network as the IP PBX or IP phones. Skip this step for a TDM PBX with E1/T1 connections.
4. Optionally, connect the T1/E1 port(s) to a corporate TDM PBX. Please ensure the cable between the interface port and the PBX is connected correctly. Do not connect to T1/E1 port 2 unless T1/E1 port 1 is also connected to the same TDM PBX.
5. Optionally, connect any standard analog phone or fax machine to the PHONE connectors.
6. Open the battery compartment and insert the optional battery.
7. Connect the included AC power cable to the electrical outlet and its cable to the ESBC's 12V DC connector.

Step 2. Configure the administrator PC to access the ESBC.

The default LAN IP address of the ESBC is with a subnet mask of . The ESBC LAN should be placed on the same LAN network where your IP PBX and IP phones reside. For other network placements, please refer to section 3.3 for detailed descriptions.

1. Configure your PC with an appropriate IP address (e.g., ) within the same network as the ESBC LAN.
2. Start your web browser, and enter in the address field to connect to the ESBC. The login page will appear. The default user name is admin and the password is 123. Click the login button to enter the ESBC main page.

Figure 7.
The ESBC login page (web console)

2.4 Installing the ESBC 10K Series to an Enterprise Network

Figure 8. The ESBC 10K back panel

Note 1. The ESBC 10K series supports dual WAN interfaces, which support layer 2 redundancy features.
Note 2. LAN1 by default is configured as a management port whose default IP address is / . This logical port is designed for an administrator PC.
Note 3. LAN2 is the Voice-NAT port whose default IP address is / . This logical port is designed for telephony services.
Note 4. When the management port is enabled, the administrator PC can access the ESBC 10K console only via LAN1. When the management port is disabled from the web console, LAN1 is disabled; the administrator PC and the telephony devices and equipment then connect to LAN2 for management and telephony services.

Step 1. Connect the panel ports.

1. Connect the RJ-45 cable to either the WAN 1 or WAN 2 interface, or connect two cables to WAN1 and WAN2 respectively. Note that when both WAN ports are used, connect them to two different Ethernet switches.
2. Connect the administrator PC to LAN1.
3. Connect LAN2 to the corporate LAN which resides in the same network as the IP PBX and/or IP phones.
4. Connect the included AC power cable to the electrical outlet.

Step 2. Configure the administrator PC to access the ESBC.

1. Configure your PC with an appropriate IP address (i.e., ) within the same network as the ESBC management port.
2. Start your web browser, and enter in the address field to connect to the ESBC. The login page will appear. The default user name is admin and the password is 123. Click the login button to enter the ESBC main page.

If the management port is disabled, connect your PC NIC to ESBC LAN2; the login procedure is then the same as that of the ESBC 9xxx series. See section 2.3 for further details.
Click the login button to enter the ESBC main page. See Figure 21 2.5 WEB based management (HTTP, HTTPs) This administrative guide is based on the operation of WEB based management. There are other supporting management interfaces: CLI, XML, SNMP, Provisioning and EMS, which will be described briefly in following sections. Access the ESBC WEB management console through one the following URLs from web browser. See section 3.2 for detailed descriptions of configuring the ESBC Ethernet interfaces. LAN access: For ESBC 8xxx/9xxx series, enter (or The default IP address is /16 For the EBSC 10K series, enter The default IP address is /24. WAN access: or. (The default configuration of the ESBC WAN interface is DHCP client.) The default credentials to logon to the WEB management console are: User ID: admin; Password: 123. When the ESBC management port is configured and enabled, web console access from the NAT_Voice interface will be disabled. See section for details The Console Home Page: System Overview Once logged on to the ESBC WEB management console successfully, the dashboard page displays system configurations and status. 21 22 2.5.2 Real Time Activity Monitor Figure 9. The ESBC dash board: system information display The ESBC provides a real time system activity monitor screen, including Network and Telephony activities Network Status This Monitor page displays overall IP connection status. Navigate to Network > Settings > Monitor. 22 23 Figure 10. Network Connection Status Monitor Page Internet Connection Current Connection Type Log Status Link Status MAC address Description The mechanism of IP addressing, either DHCP client, or Static IP. Displays the DHCP client connection event history. The layer 3 IP connection status of the WAN interface. The layer 2 (data link) connection status of the WAN interface. The MAC address of the ESBC Internet Ethernet interface (WAN). 23 24 LAN Connection Port 1 ~ 4 Description Up or Down. 
Link speed (10, 100, or 1000Mbps), duplex mode (full or half). Displays data link connection status for all four LAN ports, respectively. Each ESBC LAN port can be assigned a different subscribed service, i.e., NAT-Voice, Bridge, Router, and Management. See section for details. NAT and Voice Current Connection Type Status MAC Address IP Address Netmask Description Static IP or DHCP Client Up or Down The MAC Address of the LAN NIC interface card The IPv4 address assigned to the NAT and Voice interface The netmask for the enterprise NAT and Voice network Router IP Address Netmask Description The IPv4 address assigned to the Router interface, if Router port is enabled on the ESBC. The netmask for the enterprise data network Port Mapping Table The port mapping table assigned for remote access to hosts residing on the ESBC NAT-Voice network via the WAN interface (see section ). Figure 11. The Port Mapping Table Routing Table Click the <Routing Table > button to view the ESBC network routing information for both Internet and LAN connections. 24 25 Figure 12. Network Routing Table See section 3.3 for suggestions on LAN side network topology design Telephony Activities Navigate to Telephony > TOOLS > Monitor. Click the associated tab to display the real-time states of SIP Servers, Lines, and Active calls. The ESBC admin web GUI page refreshes at configurable interval (default 3 seconds), see section If necessary, click the <Refresh> button to get the latest status of the server status information SIP Server Redundancy This page displays the enquiry results and status of redundant sip servers, see section Figure 13. SIP Redundant Server List Line Status This page displays the current state of all user accounts configured on the ESBC, as busy or idle states. If in a busy state, call duration, call type, and peer telephone number are displayed as well. The upper right corner shows the number of active calls at any particular moment. 25 26 Figure 14. 
The current status of all user accounts (lines) Active Calls Click the Active Calls tab to display current active calls. Figure 15. Displaying active calls Active Calls Busy Override Description The selected calls will be disconnected and associated parties will hear busy tones. 26 27 2.6 CLI Based Management The ESBC supports a CLI (command line interface) based console interface to configure system parameters. Please refer to the ESBC CLI command document for detailed descriptions. This section provides a brief description of the CLI-based configuration method. The CLI can be accessed by using an SSH connection through the Ethernet interface. The serial port connection is applicable to the ESBC-10K model only. The login ID and password are identical to those for WEB console. Once you login to the ESBC CLI console, the ESBC s current running version and the LAN IP address are displayed. You can check configurations or update settings from entering one of the two modes: System: Configure the ESBC system parameters. Net: Configure the ESBC network connections. Type? to get help. Serial port connection settings (applicable to ESBC-10K only). Connect the serial port (port#1) on the ESBC-10K back panel to your PC, with speed (baud rate) , data bits 8, stop bits 1, Parity None and Flow control XON/XOFF. 27 28 2.7 SNMP based management The ESBC s embedded SNMP agent works with a standard SNMP Manager to operate, maintain and provision (OAM&P) the system. It supports standard and proprietary MIBs (Management Information Base) which allow the operator to collect information and hence enable a deeper probe into the device. The ESBC can also send unsolicited events (traps) towards the SNMP manager. All supported MIB files are supplied for each new ESBC firmware release. Please refer to section for all traps for alert notifications Trap host configurations The SNMP Basic Setting page allows you to configure the SNMP trap host based on IP address. 
The ESBC SNMP agent accepts GET and SET requests from the configured IP address with correct community strings. SNMPv1 and SNMPv2 use the notion of communities to establish trust between managers and agents. An agent is configured with three community strings: read-only, read-write, and trap. The SNMP community string is like a user id or password that allows access to the ESBC parameters. If the community string is correct, the ESBC responds with the requested information; if it is incorrect, the ESBC simply discards the request and does not respond. Note that SNMP community strings are used in the SNMPv1 and SNMPv2 protocols. SNMPv3 uses username/password authentication along with an encryption key. Navigate to System > SNMP.

Figure 16. Configuring the SNMP Trap Host Information

SNMP Host settings:
System Name, System Location, System Contact: Enter the designated values for this deployed ESBC unit, i.e., the name of the ESBC system, the location of the deployed premises, and the contact info.
Read Only Community: Sets the SNMP read-only community string, enabling a remote device to retrieve read-only information from the ESBC. The default string is set to "public". It is suggested that the network administrator change all the community strings so that outsiders cannot see information about the internal network.
Read Write Community: Sets the SNMP read-write community string, enabling a remote device to read information from the ESBC and to modify settings. The default string is set to "private". It is likewise suggested that the network administrator change this string.
Trap Host IP: The SNMP trap host destination, an IPv4 address. See section for trap alarm descriptions.
Trap Community, Trap Version: SNMP trap community string, used when sending SNMP traps to the SNMP trap host.
The best practice is to use a hard-to-guess string which is different from your polling (read and read-write) community strings. The notion of communities is applicable to SNMPv1 and SNMPv2.

SNMPv3 setup

SNMPv3 provides secure access to devices by a combination of authenticating and encrypting packets over the network. It includes three important services: authentication, privacy, and access control. You can create users, determine the protocol used for message authentication, and determine whether data transmitted between two SNMP entities is encrypted. In addition, you can restrict user privileges by defining which portions of the Management Information Bases (MIBs) a user can view. In this way, you restrict which MIBs a user can display and modify. You can also restrict the types of messages, or traps, the user can send.

Security levels in SNMPv3. The ESBC SNMPv3 agent supports the following set of security levels as defined in the USM (user security module) MIB RFC:
NoAuth/NoPriv: Communication without authentication and privacy.
Auth/NoPriv: Communication with authentication and without privacy. The protocols used for authentication are MD5 and SHA (Secure Hash Algorithm).
AuthPriv: Communication with authentication and privacy.

Configure authentication and privacy for SNMPv3 users as follows.

Figure 17. SNMPv3 Setup page

SNMPv3 Setup settings:
User Name: The user id.
User Access: MIB views, read-only or read-write.
Security Level: The SNMPv3 security level. Options available: No Authorization/No Privacy, Authorization/No Privacy, or Authorization/Privacy.
Auth. Protocol: The SNMPv3 USM (user-based security module) authorization type to use. Options available: MD5 or SHA.
Auth. Password: The SNMPv3 USM passphrase. Minimum string length: 8 characters.
Priv. Protocol: Privacy protocols currently supported are DES or AES.
Priv. Password: The SNMPv3 passphrase for encrypting data between two entities. The string must be at least 8 characters long.
If you choose not to assign a privacy value, then SNMPv3 messages are sent in plain text format.
Action: Add, Edit, and Delete.

2.8 E-mail (SMTP) Based Management

The ESBC supports sending alert notifications via E-mails with SMTP (Simple Mail Transfer Protocol). See section 0 for alarm descriptions. Navigate to System > SMTP for the server settings.

Figure 18. E-mail based management configuration

E-mail settings:
Your Name: Enter the name which you would like to appear on the E-mails sent out.
E-mail Address: An E-mail notification will be sent to this account.
SMTP server: Enter the SMTP server IP, or FQDN, for outgoing E-mails.
SMTP server port: Enter the SMTP server port. If no SSL connection is required, the default communication port is 25. Check with your E-mail administrator for SSL configuration requirements for outgoing E-mails.
Logon Information: Enter the user name and password which are associated with the E-mail address specified above.
Test Account Settings: Click this button and the ESBC will send out a test mail to the E-mail account specified above.

2.9 XML config-file based management

Please refer to the ESBC provisioning tag document for detailed descriptions of all tags and sample configuration files. The XML configuration file is a text-based file (which can be edited with, for example, Notepad) that contains any number of provisioning tags (parameters). The XML configuration file can be imported to the ESBC via the following methods:
Auto-provisioning. (See section 2.10 for a detailed description.)
XML config import from the WEB administrative console. (See section for a detailed description.)
See section 6.4 for the ESBC configuration backup.

2.10 Auto-Provisioning based management

Basic Provisioning Mechanism Configurations. The ESBC supports auto-provisioning based management features which allow the provisioning of user accounts, service features, system capacity, and upgrading system firmware through auto-provisioning servers.
Provisioning Methods:
DHCP
TFTP
HTTP
HTTPS
SecHTTP

The DHCP provisioning method is enabled by default. SecHTTP is a proprietary provisioning protocol which is used for communicating with the InnoMedia EMS server.

Supported configuration file formats:
XML
INI

Please refer to the ESBC provisioning tag document for detailed descriptions of all tags and sample configuration files. Navigate to System > Provisioning.

Figure 19. Auto Provisioning Management

The use of the supported provisioning methods is described in the following sections.

DHCP Provisioning Method

The ESBC supports DHCP Options 66 and 67. If the DHCP provisioning method is selected, the ESBC Internet Connection has to be configured as DHCP client.
DHCP Option 66: obtain the provisioning server host name and IP address.
DHCP Option 67: obtain the complete URL for the configuration file.
Note that it is possible to specify the method used for provisioning by entering a complete URL, e.g., protocol://host:port/path/prov_file_name. MACROs can be used in the URL.

HTTP / HTTPS / TFTP / SecHTTP Provisioning Methods

Provisioning methods: HTTP / HTTPS / TFTP.
Server Name: Provisioning server IP address or FQDN.
Port: The port number used for the selected provisioning method. Default ports: 80 (HTTP), 443 (HTTPS), 69 (TFTP).
Configuration File Path: Enter the path and file name for the configuration file location on the server. MACRO commands (such as $MAC) can be used. This item is not applicable to the SecHTTP method.

User Name and Password for HTTP/HTTPS/SecHTTP Methods. Configuring the user name and password requires logging in to the Command Line Interface (CLI) console. See section 2.6 and the CLI command reference document for details.

Note. If the SecHTTP provisioning method is selected, it is necessary to use the rc4-102 encryption utility (InnoMedia proprietary) to encrypt the config file.
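To illustrate how a MACRO such as $MAC expands inside a configuration-file path, here is a minimal sketch. The server name, path, and the exact MAC formatting (upper-case hex with no separators) are assumptions for illustration, not documented ESBC behavior:

```python
# Hypothetical sketch of $MAC macro expansion in a provisioning URL.
# The output MAC format (upper-case hex, no separators) is an assumption.

def expand_prov_url(template: str, mac: str) -> str:
    """Replace the $MAC macro with the device's MAC address."""
    normalized = mac.replace(":", "").replace("-", "").upper()
    return template.replace("$MAC", normalized)

url = expand_prov_url(
    "https://prov.example.com:443/configs/$MAC.xml",  # hypothetical server/path
    "00:11:aa:bb:cc:dd",
)
print(url)  # https://prov.example.com:443/configs/0011AABBCCDD.xml
```

A provisioning server would typically serve one such per-device file, so each unit fetches a configuration named after its own MAC address.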
Please refer to the document on the use of the rc4-102 utility.

Server Initiated Provisioning: SIP NOTIFY

The ESBC supports an unsolicited SIP NOTIFY to perform requested operations from the SIP server.
Event: reboot, resync, report.
reboot: the ESBC reboots itself and re-fetches the config file from the provisioning server.
resync: the ESBC re-fetches the config file from the provisioning server without rebooting.
report: the ESBC sends its profile to the specified FTP server as configured in Figure 20.

Figure 20. The SIP Notify configuration for server initiated provisioning

2.11 EMS based management

InnoMedia EMS (element management system) is a scalable and fully redundant solution covering OAM&P features such as device auto-provisioning and device management functions (via SNMP). Auto-provisioning protocols supported: HTTP, SecHTTP, TFTP. EMS generates device-dependent configuration files on the fly, providing maximum flexibility for the provisioned device with per-device parameters. Device management: the EMS allows a service provider's customer service and maintenance personnel to provide effective device management, trouble-shooting, and statistics collection, all from an easy-to-use and secure browser interface. The system is able to manage and communicate with devices even if they are behind a NAT router. Navigate to System > EMS.

Figure 21. InnoMedia EMS server configuration

EMS Server settings:
Enabled: Check this box to enable management via an EMS server.
Device Type: The device type ID defined in the EMS server to categorize connected devices by model. Check with the EMS administrator to input the desired value for your deployed CPE units.
EMS Server 1, EMS Server 2: The ESBC supports geographically redundant EMS servers. Server 1 is the active server, and server 2 is the backup. Enter their IP/FQDN and port information. The communication port for EMS is 5200 by default.
If the active EMS server is down, the ESBC automatically switches to the backup server. 36 37 Local EMS port Region ID Heartbeat Type The communication port for EMS is 5200 by default. Check with your service provider for any different configurations. The deployed region ID defined in the EMS server to categorize connected devices by regions. Check with the EMS administrator to input the desired value for your deployed CPE units. The InnoMedia proprietary keep-alive protocol communicating between CPEs and the EMS. 37 38 3 Network requirements and configurations 3.1 Determining the network requirements for voice services Understand the network factors which affect quality of service Identifying the network connectivity requirements is the key to the success of voice service deployment. It is necessary for the IP WAN and LAN to provide networks that meet the requirements for toll-quality service. It is important to identify the following factors which affect the quality of services and service level agreements. 
Bandwidth
Latency
Jitter
Packet Loss

Bandwidth Requirement

The amount of bandwidth for voice calls depends on these factors:
Number of concurrent calls
The codec used for voice communications
Signaling overheads

These protocol header assumptions are used for calculations:
Headers: 40 bytes overall, consisting of IPv4 (20 bytes) / UDP (8 bytes) / RTP (12 bytes)
38 bytes for fixed Ethernet headers

Voice codecs:
                       G.711    G.729
Sampling rate          8 kHz    8 kHz
Effective sample size  8 bits   1 bit
Data rate              64 kbps  8 kbps

Bandwidth consumption for one-way voice:
Codec  Bit rate  Packetization period (ptime)  Payload size  Ethernet bandwidth
G.711  64 kbps   20 ms                         1280 bits     95.2 kbps
G.729  8 kbps    20 ms                         160 bits      39.2 kbps

PPS (packets per sec) = 1 / (packetization period)
Voice payload size = Data rate / PPS
Total packet size = (IP/UDP/RTP header) + (voice payload size) + (fixed Ethernet overhead)
Bandwidth consumption for one-way voice = (Total packet size) * PPS

Latency

Latency is one-way delay from mouth to ear. It comprises the following processes:
Time required to sample, digitize (or encode), and packetize the sender's voice
Time required to send the packet over the IP network
De-packetization, decoding, and relaying the speech to the receiving party

One-way delay must not exceed:
Toll quality        100 ms
Acceptable quality  150 ms

Jitter

Jitter is the variation of latency across the network and the variation in the timing of packet processing inside the devices. To compensate for jitter, modern devices usually utilize an adaptive jitter buffer. However, high levels of jitter can cause packets to be discarded by the jitter buffer at the receiver, and also increase latency as the jitter buffer adapts.

Packet Loss

The human ear is very good at handling the short gaps that are typical of packet loss, so it may take a significant amount of packet loss before the user is affected enough to report it.
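The bandwidth formulas above can be checked numerically; this short sketch reproduces the per-call Ethernet bandwidth figures from the codec table (95.2 kbps for G.711 and 39.2 kbps for G.729), using the manual's header assumptions:

```python
# Reproduce the one-way voice bandwidth figures from the codec table.
# Header sizes follow the assumptions stated above: 40 bytes of
# IPv4/UDP/RTP headers plus 38 bytes of fixed Ethernet overhead.

IP_UDP_RTP_BITS = 40 * 8   # IPv4 (20 B) + UDP (8 B) + RTP (12 B)
ETHERNET_BITS = 38 * 8     # fixed Ethernet overhead

def one_way_bandwidth_kbps(data_rate_bps: int, ptime_ms: int) -> float:
    pps = 1000 / ptime_ms                       # packets per second
    payload_bits = data_rate_bps / pps          # voice payload per packet
    packet_bits = IP_UDP_RTP_BITS + payload_bits + ETHERNET_BITS
    return packet_bits * pps / 1000             # kbps on the wire, one way

print(one_way_bandwidth_kbps(64_000, 20))  # G.711 -> 95.2
print(one_way_bandwidth_kbps(8_000, 20))   # G.729 -> 39.2
```

Multiplying the per-call figure by the number of concurrent calls (and adding signaling overhead) gives the total bandwidth to budget for.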
On the other hand, fax and modem calls are particularly sensitive to packet loss, almost demanding 0% packet loss to avoid problems with fax/modem transmission. There are two types of packet loss in a VoIP system: received packet loss, and received packet discard. Received packet loss is where a packet is never delivered to the receiving system; while receiving packet discard is where a packet is received after a time when it is not useable for generating audio playback. Packets could be dropped somewhere in the network causing received packet loss, or packets could be delayed somewhere in the network causing received packet discard. Network issues could include: A poor link causing packet errors which may vary by time of day or load. Network congestion causing the router or switch buffer to overflow or produce high jitter. A transient network problem. Packets will get dropped if a valid alternate path is not immediately available. 39 40 3.2 Internet Interface Configurations Login to the ESBC web console via the LAN VoIP-NAT port interface (see section 2.5 ) to configure the ESBC WAN interface. This is required to access the operator s network (and vice versa). Note that updating the WAN IP address can only be performed via LAN interface. Navigate to Network > Settings > Internet Connection. Figure 22. Configuring the ESBC IP Address for Internet Connection (Static IP mode) Internet Connection Internet Connection Type Description DHCP Client (the default setting): use DHCP service to setup internet connection. Static IP: Use the static IP provided by your ISP to setup internet connection. None: do not use Internet connection WAN Interface s MAC Address Use an appropriate MAC address to communicate with the ISP or SIP Trunk service provider. The default value is the MAC address of the WAN NIC. Cloning MAC address is supported. 
IP Address, Netmask Default Gateway DNS1, DNS2 Status When Static IP is selected as the Connection Type, enter the IP address and its associated netmask value. Enter the default gateway address provided by your ISP Enter the domain name server IP address provided by your ISP. Displays link layer connection status. Two states: Up or Down. 40 41 Auto-Negotiation Link Status Ethernet connection configurations. Checked > Auto-negotiation mode; unchecked > Manual mode. Speed: 10M/100M/1000M Duplex: Full/Half (1000M is not applicable to Manual mode) See section for detailed descriptions The connection status of the data link layer. 41 42 3.3 LAN interface configurations The LAN interface configurations for voice services The default configurations of the LAN interfaces provide NAT-and-Voice services (applicable to ESBC- 9xxx/8xxx series models). The four switch ports connect to the enterprise telephony network for SIP PBX, IP phones, and PCs to access the ESBC administrative consoles via the LAN interfaces, including WEB GUI and SSH CLI. Figure 23. Default configuration: LAN ports serve enterprise telephony services Navigate to Network > Settings > LAN to configure the IP address. By default, all the LAN ports serve NATand-Voice services which are switch ports and share one IP address. Figure 24. Configuring the IP address of the ESBC LAN for NAT-and-Voice services LAN Settings Connection Type Description Static IP, or DHCP Client. 42 43 When DHCP client is selected for the ESBC LAN interface, it is recommended that the ESBC-LAN MAC address is bound with an IP address from the DHCP server. IP Address, Netmask Host Name Domain (optional) VLAN Tag (1-1023) VLAN priority (0-7) Status RTP Default Gateway for B2BUA When Static IP is selected as the Connection Type, enter the IP address and its associated netmask value. The host name designated for the ESBC. 
The network domain name, if the network administrator defines a domain name and nominates servers to control security and permissions for the ESBC telephony network. When the ESBC LAN interface needs to connect to a VLAN switch of trunk links, appropriate tagging information is needed. If this field is entered with a VLAN-ID value, traffic from devices connecting to Voice-NAT port(s) must be tagged with the same VLAN-ID , QoS VLAN priority. The higher the value, the higher the priority (higher importance) of the outbound packets. The connection status of the data link layer. The IP address of the router which connects two corporate VoIP networks. See section LAN side Topology: RTP Default Gateway for SIP Trunk (B2BUA) voice services The use of RTP default gateway. In a typical deployment scenario, the SIP PBX registers to the ESBC LAN interface, and the SIP IP Phones (SIP PBX clients) register to the SIP PBX. Some SIP PBXs control SIP signaling from the SIP Phones but do not route RTP packets. The RTP packets travelling through the corporate data network between the ESBC and the SIP Phones do not necessarily go through the SIP PBX. When SIP Phones and SIP PBX are not located in the same network, one-way communication or no voice is possible if no static routes nor RTP Default Gateway are configured on the ESBC. The ESBC RTP Default Gateway feature has been designed to make the communication between SIP user agents in different corporate networks possible, allowing scalable distributed SIP VoIP networks. When direct end-to-end media communication is not possible, the media (RTP) streams have to be relayed through another host (i.e., the corporate router acting as the RTP default gateway) to play the role of proxying RTP streams for SIP phones. As the corporate network grows, the ESBC is adaptive to new corporate network configurations without the need to manually update its static routing rules. 
The RTP Default Gateway feature can also be used in combination with configuring static routing rules (see section ) to the ESBC to build complex VoIP networks. 43 44 Figure 25. The enterprise Router acting as the RTP default gateway for SIP devices With the example illustrated in Figure 25, the SIP Phones in Corporate Network 2 register to the IP PBX in Network 1. With the RTP default gateway (the corporate router s IP address) configured on the ESBC, the RTP (media) streams can be handled by the ESBC and communicate with the service provider network. Note that when RTP default gateway is configured, the static routing rules may not be necessary. The IP PBX has to be located in the same network as that of the ESBC LAN interface if there is no static route rule configured (see section ) LAN side Topology Design: Static Routing Table Configurations For enterprises with multiple voice networks, static routing rules are needed if one of the following conditions is true. SIP Trunk voice service (B2BUA mode): IP PBX (or any SIP User Agents) which register to the ESBC are located in a network other than that of the ESBC NAT-Voice interface. Hosted voice service (SIP-ALG mode): the IP phones are not located in the ESBC NAT-Voice network. Note that when static routing rules are configured, the RTP default gateway may or may not be necessary. To configure Static Routing rules, navigate to Network > Advanced > Static Routing. Figure 26. Configuring network static routing rules for the ESBC LAN interface 44 45 Static Routing Number Interface Destination IP Netmask Gateway Description Record number. The interface to which static routing rules apply (LAN only). The destination network address which this route reaches. The destination network netmask IP address of the router to which the ESBC can reach to route packets for this particular network. Metrics Metrics count. If not configured, the default value is 1. (Note: Metric is the network administrative distance. 
The default value for connected interface is 0, and static route is 1. Lower numbers take priority over higher numbers.) Click <Routing Table> button to display the ESBC network routing information DHCP Server The ESBC DHCP server allows clients which connects to the Voice-and-NAT ports to obtain dynamic IP addresses. The administrator can also view DHCP client list and bind MAC address. Figure 27. The DHCP Server for devices in the Voice-and-NAT network Static Routing Enabled Starting IP Address Ending IP Address Description Check this box to enable DHCP server service on the ESBC. The default configuration is disabled. The range of IP address is assigned to the LAN clients. Note that this range should not include the broadcast address, e.g., /16 has the broadcast address of the network 45 46 Lease Time Primary DNS (optional) Secondary DNS (optional) WINS (optional) The valid time period of the IP address assigned to DHCP clients. The DNS server(s) specified provides name service for DHCP clients. Windows Internet Naming Service, for NetBIOS names. Default Routing (optional) Option 66 (optional) Obtain the host name or IP address of provisioning server for DHCP clients, i.e., allowing the connected SIP devices to obtain provisioning server address Client List This client list table include IP address, MAC address, obtained time and expired time of all DHCP clients. Figure 28. The ESBC DHCP server client list MAC Binding The DHCP MAC Binding feature allows the system administrator to bind an IP address to a client's MAC address, so the ESBC will always assign a fixed IP address to this client. Click the <MAC Binding> button. Figure 29. The ESBC DHCP server MAC binding. 46 47 3.3.3 Advanced Configurations for Voice and Data Featured Services The ESBC 9xxx and 8xxx series are equipped with four Ethernet switch ports which can be configured individually to fulfill various featured services for field deployments. 
These featured services can be all or partially activated on one ESBC unit. They are NAT-and-Voice port Management port Router port Bridge port Figure 30. The ESBC 9xxx interface for network access (conceptual view) To configure the ESBC featured service, navigate to Network > Advanced > LAN Interfaces > Port Function. Figure 31. The ESBC 9xxx LAN interfaces. Default Settings (configuration view) 47 48 Enabling the Management Port Figure 32. Enabling the ESBC management port The management port is used by the PC to access the ESBC administrative console (WEB or CLI) through the LAN interface. When the management port feature is enabled, the LAN NAT-and-Voice ports only serve voice traffic, and do not allow PCs to access the administrative console. Access via the ESBC WAN interface remains unchanged. Navigate to Network > Advanced > Management Port. The ESBC Management port can serve a subnet. If the associated DHCP function is enabled, its netmask and IP address range have to be properly managed. Figure 33. Configuring the ESBC Management Port Note: If the management port is enabled, it is possible to access the administrative console through. 48 49 Enabling the Bridge Port Figure 34. Enabling the ESBC Bridge Port for Enterprise Internet Service Hosts The bridge port is transparent to the ESBC WAN interface. Typical applications of the bridge port are as follows: To serve enterprise Internet service hosts ESBC installers or technicians may make use of it to access an OSS in the service provider s core network, especially for cable modem embedded ESBC models Enabling the Router Port for data services Figure 35. Enabling the ESBC Router Port for Data Service 49 50 When the router port is enabled, the ESBC performs data network router functionality, which allows service providers to offer data services to enterprise customers. 
As Figure 35 illustrates, hosts behind the ESBC router port may use public IP addresses which are routed by the ESBC to the service provider network. The ESBC dynamically updates its routing table with the Network Router in the service provider s core network. Navigate to Network > Advanced > Router Mode to configure router and required protocol security parameters. Router Mode Router Port Router- IP address Netmask Figure 36. Configuring the Router Mode Port Description Select the appropriate port number as the router port. Enter the public IP address and netmask that the service provider designated for this enterprise Router which serves as the default gateway for hosts on the enterprise data network. RIP Setting Version V2: standard routing protocol with associated authentication features. V1: standard routing protocol. Authentication MD5 Text None MD5 is the default mode if RIPV2 is selected. Plain text authentication should not be used when security is an issue. Select appropriate method which interoperates with the core router in the service provider s network. Key ID Key String Broadcast Interval Applicable to MD5. Give the RIPV2 key chain name. This need not to be identical with that of the remote router. Applicable to MD5. The actual password. It needs to be identical to the key string on the remote router. Update. The interval after which the EBSC sends an unsolicited response message of the complete routing table to all neighboring 50 51 RIP routers. Timeout. (Upon this timeout threshold being reached, the particular route is no longer valid. Garbage. Upon this timer expiring, this particular route is removed from the routing table. Note: RIP route authentication is configured on a per-interface basis. 
All RIP neighbors on interfaces configured for RIP message authentication must be configured with the same authentication mode and key for adjacencies to be established Ethernet Advanced Configurations for LAN Interfaces See Application note: ESBC Application Notes-Ethernet Control.doc Ethernet interfaces. Navigate to Network > Advanced > LAN Interfaces > Advanced. The ESBC s Ethernet connection can be configured for one of the following modes: Manual Speed Duplex 10M, 100M Full, Half Auto Negotiation Speed/Duplex Automatically choose common transmission parameters. Speed: 10M/100M/1000M Duplex: Full/Half Figure 37. Ethernet Interface Controls for the LAN Interfaces of ESBC 8xxx and 9xxx series models 51 52 Figure 38. Ethernet Interface Controls for the LAN Interfaces of ESBC 10K series models Note: If the management port is enabled, it is possible to access the administrative console through the. If the connected devices support auto-negotiation, it is highly recommended that auto-negotiation is used. The remote side port must also operate in auto-negotiation mode. When configuring the device port running in manual mode, the same mode (i.e., duplex and speed) must be configured on the remote port manually. WARNING: An Ethernet port that does not match the settings of the connected device can lose connectivity. Link status of Auto-Negotiation. The correct behavior of link status with auto-negotiation in accordance with IEEE Std 802.3z-1998 should be as follows: If A is enabled and B is enabled, then the. Flow Control. ESBC Ethernet flow control is used to regulate the amount of traffic sent out of the interface. There is a PAUSE functionality built into the ESBC Ethernet LAN interfaces to avoid dropping of packets. Flow control must be negotiated, and hence both devices must be configured for full duplex operation to send PAUSE frames. Threshold of transmit on (48 buffers by default, ranges from ). 
Threshold of transmit off (64 buffers by default; must be greater than the threshold of transmit on).

Storm Control. ESBC storm control protects against network broadcast and multicast packet storms. It prevents network outages on the LAN interfaces by rate-limiting broadcast and multicast traffic at a specified level and dropping packets when that level is exceeded. When broadcast and multicast storm control is enabled, all broadcast and multicast packets beyond the thresholds are discarded. The threshold values are as follows: Threshold of Storm Rate: in frames, Rn, where n ranges from 1 to 11 (1M fps by default).

3.3.5 Remote access to the ESBC LAN Interfaces and LAN hosts

Through VPN

The ESBC allows remote computers (Windows or any host supporting PPTP) to access the ESBC LAN interfaces for administrative tasks, or other LAN hosts, via a PPTP VPN secured tunnel. Note that when the VPN is used to reach other LAN hosts, an additional LAN router needs to be configured to route traffic from the LAN hosts to the ESBC VPN clients. Navigate to Network > VPN > PPTP Server.

Figure 39. Configuring the VPN client IP range

PPTP Server (General)
Enabled: Enable the PPTP VPN server. The VPN feature is disabled by default.
Server IP Address: Enter the IP address of the PPTP VPN server.
Client IP Address range: The VPN server acts as a DHCP server and grants an IP address to clients when they pass the security checks.
Connections: The number of concurrent VPN connections.

Figure 40. Configuring the VPN client ID and password for connection

Use the configured IDs and passwords for the remote VPN clients to access the VPN server, the ESBC LAN, and the LAN hosts.

Through Port Forwarding

Software ports are numbered connections that a computer uses to sort different types of network traffic. The ESBC supports port forwarding, which allows remote computers to access services offered by LAN hosts.
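The rule lookup behind port forwarding can be sketched as follows (a conceptual sketch only, not the ESBC's implementation; the rule values and LAN address are hypothetical examples):

```python
from dataclasses import dataclass
from ipaddress import IPv4Address

@dataclass
class ForwardRule:
    protocol: str          # "tcp", "udp", or "both"
    start_port: int        # starting port of the forwarded range
    end_port: int          # ending port of the forwarded range
    lan_host: IPv4Address  # LAN host offering the service

def match_rule(rules, protocol: str, dst_port: int):
    """Return the LAN host an inbound packet should be forwarded to,
    or None: unconfigured ports stay closed to the Internet."""
    for r in rules:
        if r.protocol in (protocol, "both") and r.start_port <= dst_port <= r.end_port:
            return r.lan_host
    return None
```

For example, with a single rule forwarding TCP 8000-8010 to a hypothetical host 192.168.1.20, a TCP packet to port 8005 matches, while UDP 8005 and TCP 80 are dropped.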
A few standard services are listed in the Comments section of the ESBC port forwarding page. By default, the ESBC closes all software ports to the Internet except those configured in the port forwarding list and the SIP/RTP ports, which are dynamically opened for communication purposes. Navigate to Network > Advanced > Port Forwarding.

Figure 41. The port forwarding feature for remote access to LAN services

Port Forwarding
Description: Enter the purpose of forwarding the specified port number range to access software services provided by LAN hosts.
Protocol: Enter the transport protocol(s) used by the software service.
Starting port / Ending port: Enter the port range used by the software service.
IP Address: Enter the IP address of the LAN host that offers the software service.
Schedule: All the time / Working time. Click the <Schedule Setting> button to configure the working time slots.

Enabling Data Service Access for the ESBC LAN hosts

DNS Proxy

The DNS Proxy feature is applicable to the ESBC-93xx and 83xx series models. Because DNS is used by virtually every device connected to the Internet, it is a common target for hacker attacks. For security and performance reasons, the ESBC DNS proxy feature includes rule sets to control outgoing DNS requests from its trusted hosts. A typical DNS proxy processes DNS queries by issuing a new DNS resolution query to each name server in the list until the hostname is resolved. A DNS proxy also improves domain lookup performance by caching previous lookups: when a DNS query is resolved by the proxy, the result is stored in the device's DNS cache, which helps resolve subsequent queries for the same domain and avoids network latency. Navigate to Network > Advanced > DNS Proxy.

Figure 42. Configuring the DNS Proxy service for trusted LAN hosts

DNS Proxy
Domain Name (root): Domain name suffixes; e.g., the domain name root of is abc.com.
DNS Server IP: Enter the IP address of the DNS server used to query this particular domain name.

Access Control

The Access Control feature provides basic traffic filtering capabilities that enable the ESBC LAN hosts (i.e., clients connecting to the ESBC Voice-and-NAT ports) to access Internet data services other than voice services (SIP trunk or hosted voice services). Navigate to Network > Advanced > Access Control.

Access Control - LAN

ESBC LAN hosts on the legitimate list are enabled to access the Internet for data services (rather than voice services) by specifying:
hosts within particular IP address ranges
hosts within particular subnets
hosts with specified MAC addresses
ports employed by applications

Figure 43. Enabling Internet data access for the ESBC LAN hosts or applications

Note that when the ESBC LAN interface connection type is configured as DHCP client (see section 3.3 for details), the ESBC does not check the validity of the access control configurations. Click <Schedule Setting> to define the time periods during which the legitimate hosts/ports are allowed to access Internet data services.

Figure 44. Schedule setting for LAN hosts and applications to access Internet services

Access Control - Internet

The ESBC may restrict the Internet resources that LAN clients can access by specifying:
particular IP address ranges
particular subnets
ports employed by applications
domains

Figure 45. Legitimate Internet resources

Click <Schedule Setting> to define the time periods during which the legitimate Internet resources are accessible to the ESBC LAN hosts.

UPnP

The UPnP feature is applicable to the ESBC-93xx and 83xx series models. The ESBC supports UPnP and hence can auto-discover and control Internet connections anywhere in a small office environment, provided its north-bound switch/router supports UPnP. Navigate to Network > Advanced > UPnP.

Figure 46.
Enabling the UPnP network feature

Enable the UPnP feature by checking the box if the office network switch/router is equipped with this capability.

DMZ (De-militarized Zone)

The DMZ feature is applicable to the ESBC-93xx and 83xx series models. In the ESBC setup, most devices on the LAN run with ESBC firewall protection to communicate with the Internet/service provider network. The DMZ host is a device placed in the neutral zone between the ESBC private network and the outside public network. It prevents outside users from getting direct access to an ESBC LAN host. A DMZ is optional, and it also acts as a proxy server. Navigate to Network > Advanced > DMZ.

Figure 47. Enabling the DMZ host on the ESBC LAN network

Enter the IP address assigned to the DMZ host.

Miscellaneous

Navigate to Network > Advanced > Miscellaneous.

Figure 48. Miscellaneous configurations of network attributes

Miscellaneous
Enable Ping to WAN Interface: For security purposes, disable Ping to the WAN interface (i.e., do not respond to ICMP messages) by unchecking this box.
MTU (maximum transmission unit): The default value is 1500 bytes per packet, the largest packet size allowed by Ethernet. A larger MTU brings greater efficiency. Reduce the MTU value if any network device cannot support the specified MTU size.

3.4 Using an NTP server to offer time information to ESBC LAN devices

The ESBC can be configured to act as an NTP server offering time information to ESBC LAN devices. Navigate to Network > Settings > NTP Server.

Figure 49. Enabling or disabling the ESBC NTP feature

The process to configure the time zone and the Internet server with which the ESBC synchronizes is described in section 6.2. When ESBC LAN devices need to use the ESBC to synchronize system time, they must be SNTP-client compliant and point their SNTP server to the ESBC LAN IP address (NAT-Voice port).
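One detail worth noting when working with NTP/SNTP clients is the epoch conversion: NTP timestamps count seconds from 1900-01-01, while Unix systems count from 1970-01-01. A minimal sketch of the standard conversion (general NTP background, not ESBC-specific):

```python
# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_UNIX_OFFSET = 2_208_988_800

def ntp_to_unix(ntp_seconds: int) -> int:
    """Convert an NTP timestamp (seconds field) to Unix time."""
    return ntp_seconds - NTP_UNIX_OFFSET

def unix_to_ntp(unix_seconds: int) -> int:
    """Convert Unix time to an NTP timestamp (seconds field)."""
    return unix_seconds + NTP_UNIX_OFFSET
```

An SNTP client applies this offset to the timestamp fields it receives from the server.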
3.5 QoS Control

3.5.1 Ethernet models: ESBC 93xx and 83xx

The ESBC supports QoS for voice service with the following two features when the network is shared with other data services:

Data bandwidth control. Controlling both the uplink and downlink bandwidth used by the ESBC WAN interface offers the flexibility of sharing Internet access resources with other enterprise network equipment.

ToS (type of service) controls for LAN and WAN. By distinguishing signaling and media traffic, the ESBC allows the network administrator to prioritize voice vs. data services if voice and data share the same network.

Navigate to Network > QoS Control > Voice QoS.

Figure 50. ToS/DSCP configuration for voice services

Voice QoS Settings
Enable Data bandwidth control: If the ESBC shares north-bound bandwidth with other network equipment, control of the voice bandwidth may be needed. The default configuration is unchecked.
Max WAN Uplink/Downlink speed: Enter the bit rates allocated to voice traffic for both uplink and downlink.
WAN ToS Configuration / LAN ToS Configuration: ToS and DSCP: Type of Service / Differentiated Services Code Point. The IP header contains an 8-bit field for QoS control (RFC 2474). Values range from 00 to FF (in hex).

3.5.2 Cable Modem Embedded Models: ESBC 95xx and 85xx

The ESBC embedded cable modem models, ESBC95xx and ESBC85xx, support industry-leading patent-pending Smart DQoS technology. It relies on an intelligent edge device (the ESBC) with an embedded DOCSIS cable modem initiating DOCSIS UGS service flow requests based on user or signaling events. With Smart DQoS, the addition of policy servers and complex interactions among various network server components can be avoided.

DQoS service flow settings

Navigate to Telephony > SIP TRUNKS > DQoS Settings.
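The layout of the 8-bit ToS field mentioned in the Voice QoS settings above can be sketched as follows (standard RFC 2474 bit arithmetic, independent of the ESBC): the upper 6 bits carry the DSCP and the lower 2 bits carry ECN.

```python
def split_tos(tos: int):
    """Split the 8-bit IPv4 ToS byte into its DSCP (upper 6 bits,
    RFC 2474) and ECN (lower 2 bits) components."""
    return tos >> 2, tos & 0x3

def dscp_to_tos(dscp: int, ecn: int = 0) -> int:
    """Pack a DSCP value (and optional ECN bits) back into a ToS byte."""
    return (dscp << 2) | ecn
```

For example, the common Expedited Forwarding class for voice, DSCP 46, corresponds to a ToS byte of 0xB8.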
Figure 51. DQoS Call Control QoS Settings on DOCSIS networks

DQoS Settings
Enable the Call Control DQoS: Enabling this option allows a guaranteed service flow mechanism between the CMTS and the ECMM/ESBC for voice connections. Disabling this feature results in all voice calls being processed with best effort.
Number to limit the Active Dynamic Service Flows (DSF): Enter the number of service flows reserved for the ESBC. The maximum guaranteed value on a DOCSIS 3.0 network is 24.
Allow the SIP Trunking Calls beyond the limit of Active Dynamic Service Flows: When the number of calls exceeds the DSF limit above, additional calls are connected with best effort. This configuration applies to SIP trunking voice calls (B2BUA mode), not to hosted voice services (SIP ALG mode).
Use Time of Day received from the Cable Modem: The ESBC may use Time of Day information from the cable modem (and CMTS) for system time. When this feature is enabled, the ESBC SNTP client feature, which retrieves time information from the SNTP server, is disabled (see section 6.2 for details).
Reserved Destination: The IP and port reserved for intra-component communications between the ESBC and the embedded ECMM. The network address used here should not conflict with other networks configured or routed by the ESBC. Specify an IPv4 address and communication port. Default is :9.

3.6 VLAN for Multi-Service Capabilities and Traffic Shaping

See: ESBC-Application Notes-A case study in using VLAN as QoS tool for fiber-mpls deployment.

The ESBC supports VLAN tagging, which allows seamless deployment in the path of VLAN-tagged traffic and provides application-level control over VLAN connections. For example, tagged VLAN frames on FDDI or other infrastructure are common in large-scale networks.
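An 802.1Q VLAN tag combines a 12-bit VLAN ID with a 3-bit 802.1p priority (PCP) and a 1-bit DEI flag in its Tag Control Information (TCI) field. A minimal sketch of that packing (standard 802.1Q layout, not ESBC-specific code):

```python
def vlan_tci(vid: int, pcp: int = 0, dei: int = 0) -> int:
    """Pack an 802.1Q Tag Control Information field:
    3-bit priority (PCP) | 1-bit DEI | 12-bit VLAN ID."""
    if not (0 <= pcp <= 7 and dei in (0, 1) and 0 <= vid <= 4095):
        raise ValueError("PCP is 0-7, DEI is 0-1, VLAN ID is 0-4095")
    return (pcp << 13) | (dei << 12) | vid
```

This is the field in which the VLAN IDs and 802.1p priorities configured on the VLAN page (described below) are carried on the wire.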
Service providers usually deploy VLANs over their backbone infrastructure for advantages such as (1) traffic engineering, (2) multi-service networks, and (3) network resiliency with fast reroute. With the ESBC VLAN traffic segregation function, service providers do not require enterprises to invest in additional VLAN switches in order to deploy multi-service capabilities to customers.

Figure 52. ESBC VLAN support for prioritizing services

The ESBC LAN ports can be configured for different types of services as described in section 3.3.2, including router port, bridge port, and NAT-Voice port.

Physical Interface (WAN): The ESBC tags traffic from these different types of ports (e.g., Router, Bridge, and Voice-NAT) with associated VLAN IDs and sends it to the WAN/service provider network, mainly for data services. (The management port is designed for console access from the LAN interface, and hence no VLAN tagging is provided for it toward the WAN interface.) Traffic generated by the ESBC itself, e.g., SIP signaling, voice data, and other management traffic (such as HTTP, SNMP, DNS, etc.), is tagged with associated VLAN IDs to communicate with the WAN/service provider network, mainly for telephony services.

The ESBC LAN interfaces: The Voice-NAT port(s) support VLAN tagging, which is described in section . The bridge, router, and management ports do not support VLAN features. For security purposes, when accessing the ESBC web and CLI consoles through the WAN interface, hosts should be configured with the same VLAN ID as that of the NATed traffic.

Navigate to Network > VLAN.

Figure 53. VLAN Tag configuration

Physical Interface (WAN) - VLAN ID for NATed/Routed/Bridged Traffic: Tag traffic from the Voice-NAT/Router/Bridge ports with the appropriate VLAN ID and send it to the WAN.
Host Interface - VLAN ID for Voice Signal/Voice Data/Other Traffic: Tag traffic generated by the ESBC with the appropriate VLAN ID and send it to the WAN. This includes voice signaling, voice data, and other traffic. (Other traffic types are management protocols such as HTTP, SNMP, DNS, NTP, etc.)

PRIORITY Settings - 802.1p PRIORITY for traffic of the Physical Interface (WAN) and Host Interface: Value range 0-7 (0 is the lowest):
0: (BK) Background
1: (BE) Best Effort
2: (EE) Excellent Effort
3: (CA) Critical Application
4: (VI) Video, <100 ms latency and jitter
5: (VO) Voice, <10 ms latency and jitter
6: (IC) Internetwork Control
7: (NC) Network Control

4 SIP Trunk Voice Service Configuration

4.1 Routing Calls between the ESBC and SIP Trunk (service provider)

In order for the ESBC to route calls between corporate PBX users and the SIP trunk service for PSTN calls, the items below need to be configured properly:

Trunk Settings: configure the SIP trunk server profile(s)
User Account Configurations: configure SIP UAs on the ESBC
SIP Trunk Signaling Tuning for Interworking: fine-tune the trunk SIP profile

Trunk Settings: SIP Server

A trunk denotes a subscribed SIP trunk service. Trunk Setting is used to configure the required parameters for the SIP server. The ESBC supports multiple trunk SIP proxy profiles. This flexibility allows the system administrator to configure up to 8 profiles for different subscribed services. To configure a trunk profile, navigate to Telephony > SIP TRUNKS > Trunks Setting.

Figure 54. ESBC Trunk Settings

SIP Server
Default Profile: Check the option box if you want to set this profile as the default profile. Any SIP UAs will be associated with this profile automatically.
Profile ID: Enter a unique profile ID for this profile. Enter any name that can be recognized easily.
SIP Domain: The SIP domain name provided by your service provider for service query purposes.
SIP Proxy: The IP address or FQDN of the target SIP server (or SBC) that processes SIP requests/messages sent from the ESBC.
Port: The default SIP communication port is 5060 (the standard SIP port). Change this to the port number that the SIP trunk service provider specifies.
SIP Outbound Proxy: The IP address or FQDN of the target SBC (or SIP server) to which the ESBC sends SIP messages/requests. The ESBC supports DHCP Option 15 to obtain a connection-specific DNS domain suffix. (DHCP Option 15 is applicable only when the ESBC's Internet connection is configured as a DHCP client.)
INVITE Request-URI Domain: Enter the SIP domain name for the Request-URI of INVITE messages (if it is different from the SIP domain name of SIP REGISTER messages).
Min/Max Retry Interval: Registration failure retry timers. When the first REGISTER attempt fails, the ESBC attempts the next REGISTER with a back-off mechanism bounded by the Min and Max timers.
Transport: Specify the transport protocol for SIP signaling between the ESBC and the SIP server.
Auto: the default configuration. The ESBC selects the protocol after negotiating with the SIP server.
TCP: use TCP to transport SIP signaling only.
UDP: use UDP to transport SIP signaling only.
TLS: use a secured TLS tunnel to transport SIP signaling. Be sure to set the SIP port number accordingly (5061 is the standard SIP-over-TLS port).
Security - CA Root Certificate: When a TLS secured connection is used, it is possible to load the CA root certificate issued by your service provider; the ESBC then authenticates the server to ensure secured and trusted connections.

Trunk Setting: SIP server redundancy

When the service provider offers a SIP server redundancy feature to ensure high availability of voice services, the ESBC supports real-time switchover to a redundant SIP server and fail-back. The ESBC obtains the list of redundant SIP servers by dynamic or static methods, and continuously monitors the availability of all SIP servers on its list. When the ESBC detects that the primary server has gone down, it switches to the next reachable server. If none of the SIP servers are reachable, the ESBC processes all calls as internal calls.
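The monitor-and-switch behavior described above (fail over after consecutive missed OPTIONS pings, fail back when the primary answers again) can be sketched as follows. This is a conceptual sketch of the policy, not the ESBC's implementation; the server names are hypothetical.

```python
class ServerMonitor:
    """Fail over after `threshold` consecutive unanswered OPTIONS pings;
    fail back to a higher-priority server as soon as it answers again."""

    def __init__(self, servers, threshold=3):
        self.servers = servers                     # ordered by priority; index 0 = primary
        self.threshold = threshold
        self.active = 0
        self.failures = {s: 0 for s in servers}    # consecutive missed pings per server

    def report(self, server, answered: bool) -> str:
        """Record one OPTIONS ping result and return the active server."""
        self.failures[server] = 0 if answered else self.failures[server] + 1
        # Fail over if the active server crossed the threshold.
        if self.failures[self.servers[self.active]] >= self.threshold:
            for i, s in enumerate(self.servers):
                if self.failures[s] < self.threshold:
                    self.active = i
                    break
        # Fail back when a higher-priority server is healthy again.
        for i, s in enumerate(self.servers[: self.active]):
            if self.failures[s] == 0:
                self.active = i
                break
        return self.servers[self.active]
```

With a threshold of 2, two missed pings on the primary move traffic to the backup, and one successful ping on the primary moves it back.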
When the primary server comes back into service after the ESBC has switched to the backup server, the ESBC automatically switches back to the primary server with no interruption of ongoing calls. See the ESBC SIP Server Redundancy Application Notes for details. To configure SIP redundancy, navigate to Telephony > SIP TRUNKS > Trunks Setting.

Figure 55. Configuring SIP server redundancy features

SIP Server Redundancy
Method for obtaining IP address: Choose DNS lookup for the dynamic method, or Input Backup Outbound Proxy for the static method.
Backup SIP Outbound Proxy: Applicable to the static method only.
Time Between SIP OPTIONS: The interval at which the ESBC sends SIP OPTIONS ping messages to all SIP servers in the server list.
Number of consecutively received SIP OPTIONS: The threshold value the ESBC uses to determine the availability of SIP servers.
Treat return error codes as successful SIP OPTIONS responses: SIP servers may reply with a SIP error response code when receiving SIP OPTIONS messages from the ESBC. Configure the codes that are to be treated as successful SIP OPTIONS responses.
Advance to alternate SIP Server when receiving specified error codes: When the ESBC receives the configured SIP error response code(s) from the server, it treats the currently connected server as unavailable for service and moves to the next reachable SIP server.
Send RE-REGISTER after switching to the alternate server: Enable this option if the databases of the redundant SIP servers are not synchronized. If the SIP servers implement HA (high availability) features, this feature can be disabled to save processing bandwidth and improve switching efficiency.

Dynamic Query for Redundant SIP Servers

When DNS Lookup is chosen from the Method for obtaining IP address menu (see Figure 55), the ESBC obtains the list of redundant SIP servers by performing a DNS SRV/NAPTR/A record lookup on the configured SIP Outbound Proxy (see Figure 54).
If neither the SIP Outbound Proxy nor the SIP Proxy is configured, the SIP Domain setting is used. Optionally, the domain may be learned from a DHCP Option 15 response. To view the queried servers, navigate to Telephony > TOOLS > Monitor > SIP Server Redundancy.

Figure 56. Redundant SIP servers: queried results

Static Input for Redundant SIP Servers

When Input Backup Outbound Proxy is chosen from the Method for obtaining IP address menu (see Figure 55), the ESBC obtains the redundant SIP servers from user input records. The <Arrow> keys are used to adjust their priorities.

Trunk Setting: Codec Filter

Figure 57. Codec Filter Table

Codec Filter
Codec: Enable this to filter and use only selected codecs in SIP/SDP messages. When the ESBC composes SIP messages to the SIP proxy or to the SIP PBX, it uses only codecs from the list. To change the priority level of a codec, select the desired codec name and click the up or down arrow. To remove a codec, click the <Delete> button. To restore the defaults, click the <Restore Default> button.
Supported Packetization Time: Specify the supported ptime values in SDP media attribute descriptions.

Note that the codec filter selection takes higher priority than the extended codecs in the transcoding profile (see section 4.8).

4.2 Adding and Configuring User Accounts on the ESBC

SIP UA Setting

To configure user accounts (UAs), navigate to Telephony > SIP UA Setting > SIP User Accounts. Click the <Batch Config> button to configure SIP user accounts in bulk mode, or click the <Setting> icon to configure user accounts individually. To search user accounts, select the target search criteria from the Search drop-down menu, and then click the <Search> button. The public identities are user accounts that the ESBC uses to register to the SIP server.
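The codec-filter behavior just described (keep only allow-listed codecs, re-ordered by the list's priority) can be sketched as follows. This is a conceptual sketch, not the ESBC's SDP engine; the codec names are ordinary examples.

```python
def filter_codecs(offered, allowed):
    """Keep only codecs on the allow-list and re-order them by the
    list's priority (first entry = highest priority)."""
    allowed_lower = [c.lower() for c in allowed]
    kept = [c for c in offered if c.lower() in allowed_lower]
    return sorted(kept, key=lambda c: allowed_lower.index(c.lower()))
```

For example, an offer of G729, PCMU, and Opus against an allow-list of [PCMU, G729] yields [PCMU, G729]: Opus is dropped, and PCMU is promoted to first per the list's priority.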
Public identities of a subscribed SIP trunk service include:
One main public identity
One or more alternate public identities

Nominating a main public identity is not mandatory for an ESBC configuration. However, if implicit registration is required by the service provider, it is necessary to nominate the main public identity and configure it as the registration agent.

Figure 58. SIP UA Registration Status

SIP User Accounts
Main Public Identity: Also known as the pilot number or main trunk number. This user account is selected/configured as a registration agent for implicit registration. The main public identity registers to the SIP server on behalf of all other alternate ESBC user accounts (of the same subscribed SIP trunk service).
Default Route: When multiple user accounts (DIDs) have the same destination (e.g., the same connected TDM-PBX or SIP-PBX), the default route account can route calls on behalf of all other user accounts associated with the same destination. The ESBC directs all inbound calls whose called number is not configured in the ESBC database to the destination of the default route.
Status: the registration state of the account:
Connected - successful REGISTER.
Not Connected.
Authentication Failure - credential information is not correct.
Registration Error.
Account disabled on the ESBC.
Static Registration (static operation mode).
No.: The nth SIP trunk user account. This number is used to identify the user account for provisioning tags.
User ID: SIP account user name, usually the DID telephone number.
Registration Agent: The assigned registration agent for this trunk (used with IMS implicit registration settings).
Trunk Proxy Profile: Associates SIP UAs with a SIP trunk proxy profile described in section .
Type: The target type of the selected UA. There are three types of targets: PRI, FXS, and SIP.
Enabled: Status of this SIP UA.
Action: Setting - configure the parameters of this SIP account.
Action: Delete - delete this SIP account.
Batch Config: Add/configure SIP account(s) in bulk mode.
Refresh: Pages refresh automatically at a time interval that can be set on the "Auto Refresh" page under "System". Click "Refresh" to refresh the page immediately.
Register / De-Register: Click to register or de-register SIP UAs with the service provider SIP server.
Register All / De-Register All: Click to perform registration/de-registration for all SIP UAs configured in the ESBC database.

Public identity: Batch Add

Click the <Batch Config> button to add/configure user accounts in bulk mode.

Figure 59. SIP UA Batch Configuration: Add

Enter the User ID (usually the DID number). In the Repeat Count field, enter the number of UAs you would like to add to the system. The User IDs are generated as consecutive numbers.

Add User Accounts
Type: Select the appropriate attribute for the user agents to be configured. The available options are:
SIP: applicable to SIP devices or a SIP-PBX connecting to the ESBC NAT-and-Voice LAN interface.
PRI (Group n): applicable to a TDM-PBX.
FXS 1~4: applicable to one of the ESBC's four FXS ports.
User ID: A SIP user account to REGISTER with the SIP server. It can be a DID (TN) number or a user name.
Display Name: The caller name shown on the callee's phone device for outbound calls to the PSTN.
Auth ID: SIP ID for authentication purposes.
Shared: When the Auth ID is shared among all SIP UAs in the same batch config operation, click this check box. Otherwise, the Auth ID increments by 1 with every new account created in this batch operation.
Auth Password: SIP authentication password for the UAs to register to the service provider network. The Auth Password is applied to all SIP UAs created in this batch operation.
Enable: Enable or disable the SIP UAs created in this batch operation.
Trunk Proxy Profile: The trunk proxy profile used for SIP UAs created in this batch operation.
Trunk SIP Profile: The trunk SIP profile used for SIP UAs created in this batch operation. (See section 4.2.3.)
Registration Agent: The registration agent of this trunk, usually the main identity number of the trunk. (See section .)
Transcoding Profile: The profile name for the transcoding parameters applied to SIP UAs created in this batch operation. (See section 4.8.)
PBX SIP Profile: This feature is applicable to the ESBC9378 series models. Choose the SIP PBX profile applied to SIP UAs created in this batch operation. If the target SIP PBX name is not in the list, choose Generic. (See section 4.4.)
SIP Contact (for PBX Static Registration): The IP:port information of the SIP UAs to which the ESBC relays SIP messages. This parameter is used when the SIP client uses static registration mode to connect to the ESBC. It is not applicable to the REGISTER operation mode.
Repeat Count: Enter the number of SIP UAs to be created in this batch operation. Note that when the User ID is in TN format, the User ID increments by 1 with every new account created in this batch operation. The repeat count is not applicable to a text User ID.

Public identity: Batch Modify/Delete

Click the <Modify> or <Delete> tab, select the target user accounts and the desired operations, and click the <Apply> button to complete. Please refer to section for descriptions of the parameters.

Figure 60. SIP UA Batch Configuration: Modify

Public identity: Individual Settings and Authentication

To configure a selected public identity (SIP UA) individually, click the <Setting> icon (see Figure 58).

Figure 61. Individual SIP UA Settings

Most of the parameters will already have been entered in a Batch-Add operation (see section ). This page allows you to configure or update parameters for an individual account. Three options are available for the PBX Authentication Mode: None, Local, and RADIUS. These configurations apply to SIP PBXs, SIP clients, and the analog FXS ports. Please see section for details.
If Local mode is chosen, the Auth Password configuration is as follows.

Local Authentication
Same as Auth Password: The ESBC applies the same Auth Password as the SIP trunk to authenticate SIP request attempts from the SIP PBX (or SIP clients). The Auth Password has to be configured on the SIP PBX accordingly. If a different Auth Password is needed, choose the other option and enter the password accordingly.

4.2.2 Implicit registration: Registration Agent

Implicit registration is completed by the registration agent (RA), which is usually the pilot number. The RA registers to the SIP server on behalf of all user accounts (alternate public identities or DIDs) of the subscribed service. To configure an RA, navigate to Telephony > SIP ACCOUNTS > SIP UA Setting > Registration Agent. Click the <Add> button to add an RA.

Figure 62. Add Registration Agent

Add Registration Agent
Agent Name: Name of the registration agent.
None: Select None if not using implicit registration.
New SIP UA: If the RA account is not configured on the ESBC, choose this option to add a new user account. Please refer to section for parameter descriptions.
Select from current SIP UA: For implicit registration, select an existing SIP UA from the drop-down menu.

Bulk Assigning

The ESBC's web console provides the ability to assign SIP UAs in bulk to the FXS ports, a SIP PBX, or a TDM PBX. The Bulk Assigning feature is convenient when numbers need to be assigned to different types of clients. Navigate to Telephony > SIP ACCOUNTS > Bulk Assigning.

Figure 63. Bulk Assigning VoIP numbers to the ESBC clients

4.2.3 SIP Trunk Parameter Configuration

The ESBC normalizes SIP signaling from the enterprise network to interwork with servers in the service provider network. To configure SIP signaling toward the server side, navigate to Telephony > SIP TRUNKS > Trunk SIP Profile.
Two SIP profiles are provided by the ESBC for general SIP interworking purposes: SIPConnect 1.1 and SIP-Trunk General. To create a new profile, click the <Add> button; to edit an existing profile, click the <Setting> icon.

Figure 64. Trunk SIP Profile Settings

Figure 65. Editing a Trunk SIP Profile

Parameters
Default Profile: Enable this option to assign all SIP user accounts to this profile if they are not configured otherwise.
Profile ID: Enter a unique name for this profile.

SIP Profile Configuration: SIP Parameters

Click the <Setting> icon on an existing profile.

Figure 66. Configuring a Trunk SIP Profile: SIP Parameters

SIP Parameters
Static Registration: If selected, the service provider network treats the ESBC as a peer network rather than a registering device.
GIN Registration: Globally Identifiable Number (GIN) registration (RFC 6140) is widely used for implicit registrations. The ESBC supports GIN to construct and distribute a URI that can be used universally. This mechanism requires the RA to perform implicit registration to the service provider's network (see section 4.2.2).
Enable Session Timer: The SIP session timer specifies a keep-alive mechanism for SIP sessions, which limits the time period over which a stateful proxy must maintain state information without a refresh re-INVITE. (The Session Timer parameter on the SIP Parameter page (see section 4.5) needs to be enabled for this setting to take effect.)
Timer C, Timer 1xx Retransmission, Timer Register Expires, Min Registration-Retry Time, Max Registration-Retry Time: Standard SIP timers defined in RFC 3261.
Keep-alive Interval: Specifies the interval for sending keep-alive messages for active SIP sessions.

SIP Profile Configuration: Interoperability

See the application note: The ESBC Caller ID (TN) screening mechanism.

Figure 67.
Trunk SIP Profile Configuration: Interoperability

SIP Parameters
Set URI format of SIP headers (To, From, REGISTER, Refer-To, forward): Depending on the SIP server configuration, the ESBC provides four combinations for generating SIP headers:
not E.164, without user=phone
not E.164, with user=phone
E.164 (prefixed with +), without user=phone
E.164 (prefixed with +), with user=phone
Anonymous call - set Privacy header to the value id: Depending on the SIP server configuration, the ESBC supports privacy for calls by generating the From header in four different formats. If outbound calls from the PBX specify blocking of the CID (From header with anonymous information), the ESBC generates outbound SIP messages to the SIP server with "Privacy: id" header fields and the selected format for the From header (RFC 3323 and RFC 3325).
Set From header for outgoing calls: The From header can be used to transport caller information, such as the caller ID and name displayed at the called party's side. Three options are available:
Use Alternate Identity: use the individual AOR configured in the ESBC database.
Use Main Public Identity: use the pilot number (registration agent account) configured in the ESBC database.
Use the original caller: use the information obtained from the PBX user.
Set Identity header for outgoing calls: Depending on the SIP server privacy configuration, the ESBC may add one of the following three headers as the caller identity header (or none).
P-Asserted-Identity: defined in RFC 3325. This SIP header is used among trusted SIP entities to carry the identification of the user sending a SIP message, as verified by authentication.
P-Preferred-Identity: defined in RFC 3325. This SIP header is used from a user agent to a trusted proxy to carry the identification the user sending the SIP message wishes to be used for the P-Asserted-Identity header field value that the trusted
P-Preferred-Identity: defined in RFC 3325, this SIP header is used from a user agent to a trusted proxy to carry the identification the user sending the SIP message wishes to be used for the P-Asserted-Identity header field value that the trusted entity will insert. Remote-Party-ID: defined in a SIP draft, this SIP header provides information about the remote party. Get Caller ID from SIP Header if exists — choose from the following options as the source of Caller ID information transported to the SIP PBX: P-Asserted-Identity, Remote-Party-ID. Add "Allow-event: vq-rtcpxr" into REGISTER — supports the VQM (voice quality measurement) requirements of RFC 6035. Forward DTMF in SIP INFO to SIP server — when this feature is enabled, the ESBC forwards SIP INFO messages if the registered SIP UAs send DTMF tones with the SIP INFO method. Strip ICE Attribute — ICE attributes are used for NAT traversal purposes. Enable this feature to allow the ESBC to strip all ICE-related parameters from SDP in messages sent to the SIP server. ICE-related attributes in SDP include a=candidate(.*) and a=ice(.*). Use RFC2543 Hold — RFC 2543 is obsoleted by RFC 3261. For backward compatibility, the ESBC can allow the use of c= destination addresses set to all zeroes for call hold operations. Remove Contact and Record-Route Headers in 180 responses — for SIP server interoperability purposes, check this item when necessary. Enable rinstance — the rinstance parameter (for the Contact header in REGISTER) is used when SIP devices support multiple lines. It is not defined in an RFC but is an opaque URI parameter used to differentiate lines. Checking this item allows the ESBC to add the rinstance parameter to support remote SIP devices with multiple-line features that support this parameter.
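The Strip ICE Attribute behavior described above amounts to dropping the a=candidate and a=ice attribute lines from the SDP body while leaving the rest intact. A minimal sketch (the sample SDP below is illustrative, not taken from a real capture):

```python
import re

# Matches the ICE-related attribute lines named in the manual:
# a=candidate(.*) and a=ice(.*)
ICE_ATTR = re.compile(r"^a=(candidate|ice)")

def strip_ice(sdp: str) -> str:
    # Drop ICE-related attribute lines; keep every other SDP line untouched.
    kept = [ln for ln in sdp.splitlines() if not ICE_ATTR.match(ln)]
    return "\r\n".join(kept) + "\r\n"

sample = "\r\n".join([
    "v=0",
    "m=audio 49170 RTP/AVP 0",
    "a=candidate:1 1 UDP 2130706431 10.0.0.1 8998 typ host",
    "a=ice-ufrag:8hhY",
    "a=sendrecv",
])
print(strip_ice(sample))
```

The media (m=) and direction (a=sendrecv) lines survive; only the NAT-traversal attributes are removed before the message is forwarded.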
Reuse TLS connection — enable this feature to allow the ESBC to use the actual source port of a TLS connection in addition to the default port. Use lr=true for loose routing — depending on the SIP server configuration, the ESBC adds the lr=true parameter for loose routing. In loose routing, as specified in RFC 3261, the Request-URI always contains the URI of the destination user agent, as opposed to strict routing, where the Request-URI always contains the URI of the next hop. Reject all received REFER — when enabled, the ESBC rejects all REFER messages for call transfer operations. Force send REFER even if the peer does not add REFER in the Allow header — the ESBC adds REFER to the Allow header. Remove other media types when sending T.38 offer — when this parameter is selected, "m=" lines other than t38 are all removed from SDP messages. Allow T.38 on WAN side — if this item is disabled, the ESBC rejects all T.38 transmission attempts from remote devices. Order of sending Re-INVITEs; Method of processing INVITE without SDP; Method of processing re-INVITE without SDP. Accept RTP/AVP with sdescriptions offer — sdescription is short for security descriptions for media streams. Some clients choose to code them as "RTP/AVP" to make clients accept the SDP as an offer. Select here if the ESBC should accept incoming offers where sdescriptions are presented as "RTP/AVP" offers. SDP with Secure Description — depends on the server's capability to process secured RTP streams. Leave the default selection unchanged unless necessary: Transmit sdescription transparent; Transmit all sdescription in SAVP; Transmit all sdescription in AVP. Use Main Public Identity in Contact Header — check this box when the Contact header should include the Main Public Identity, i.e., the pilot number. By default it is unchecked, and the ESBC uses the Alternate Identity in the Contact header. Trunk Group Identifier —
Defined in RFC 4904, trunk groups are identified by two parameters: tgrp and trunk-context; both parameters must be present in a tel URI to identify a trunk group. The trunk-context parameter imposes a namespace on the trunk group by specifying a global number, any number of leading digits (e.g., +33), or a domain name. For example, a trunk group in a local number, with a phone-context parameter (line breaks added for readability): tel: ;phone-context=+1-630;tgrp=tg-1;trunk-context=example.com. P-Access-Network-Info Header — this header is useful in SIP-based networks that also provide layer 2/layer 3 connectivity through different access technologies. SIP user agents may use this header to relay information about the access technology to proxies that are providing services. Forward Call Audit messages to PBX (OPTIONS and UPDATE) — this Trunk SIP profile setting should be consistent with the similar parameter in the section for the target SIP-PBX profile. SIP Profile Configuration: Security. Figure 68. Configuring SIP security features. Check the domain/host part of the To header in incoming requests — if the domain/host part of the To header in incoming requests differs from the ESBC configuration for the SIP domain field, the ESBC rejects these incoming SIP messages. Check the source IP address of incoming SIP messages — if the source IP address of incoming SIP messages differs from that used when registering SIP UA accounts, the ESBC rejects these incoming SIP messages. SIP Profile Configuration: Features. Figure 69. Trunk SIP Profile Features. Require Register event (3GPP) — the ESBC subscribes to the registration event package for an AOR, resulting in notifications of registration state changes.
For example, when the administrator shortens the registration (e.g., when fraud is suspected), the registration server sends a notification to the ESBC, which can re-register and re-authenticate. Not Retry Registration on 403 Responses — upon receiving a 403 response code from the registration server, the ESBC will not perform any further REGISTER attempts. Send SUBSCRIBE For Message Waiting (Interval) — the ESBC sends SUBSCRIBE messages to subscribe to voice mail services (i.e., VMWI) from the server. Process Call Transfer and Call Forwarding Locally — when this feature is enabled, the ESBC performs requests such as the following: process call transfer with the re-INVITE method instead of sending REFER to the SIP server; process call forwarding with the re-INVITE method instead of sending SIP response code 302 (Moved Temporarily) to the SIP server. Support 100rel for outgoing calls — when 100rel is enabled, the ESBC includes the 100rel tag in the Supported header and the PRACK method in the Allow header of outgoing INVITE messages. When the called party sends reliable provisional responses, the ESBC sends a PRACK request to acknowledge the response. Always respond PRACK for 183 message — the ESBC always responds with PRACK (100rel tag) when receiving 183 responses for outbound calls. Play ringback tone until receive 18X from SIP Server — when this feature is enabled, the ESBC starts to play ringback tone for outgoing calls when it receives a 18x response from the SIP server. When disabled, the ESBC plays ringback tone immediately after the INVITE messages are sent out to the LAN SIP devices, and updates the ringback tone after receiving 18x (or other responses) from the SIP server. Loop Detection — enable this function to prevent SIP signaling loops. A loop is a situation where a request that arrives at the ESBC is forwarded and later arrives back. The ESBC handles loops with the Max-Forwards header, which limits the number of hops a request can transit on the way to its destination. The default initial value for Max-Forwards is 70.
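The Max-Forwards mechanism used by Loop Detection can be sketched as follows. This is a simplified illustration of the RFC 3261 rule, assuming a plain dict of header values (not real ESBC code):

```python
def proxy_decision(headers: dict) -> tuple[int, str]:
    # RFC 3261: decrement Max-Forwards at each hop; when it reaches 0,
    # reject the request with 483 (Too Many Hops) instead of forwarding it.
    max_forwards = int(headers.get("Max-Forwards", "70"))  # default initial value
    if max_forwards <= 0:
        return 483, "Too Many Hops"
    headers["Max-Forwards"] = str(max_forwards - 1)
    return 0, "forward"

hdrs = {"Max-Forwards": "1"}
print(proxy_decision(hdrs))  # forwarded once with Max-Forwards decremented
print(proxy_decision(hdrs))  # then rejected with 483
```

A looping request keeps revisiting hops, so its Max-Forwards value is eventually exhausted and the loop is broken with the 483 response.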
If the Max-Forwards value reaches 0 before the request reaches its destination, the request is rejected with a 483 (Too Many Hops) error response. 4.2.4 Analog Interface. FXS Configuration: FAX and Modem Calls. Once the user accounts have been configured on the ESBC database (see section ), you may select accounts (numbers) and assign them to the ESBC FXS ports to which analog devices are connected, such as FAX machines, point-of-sale station modems, or POTS phones. Navigate to Telephony > FXS > FXS Port Setting. Figure 70. FXS port settings. Click the drop-down menu of the Number column, choose a number for this particular FXS port, and enter the User name and Auth Password if needed (see section ). Status — displays the connection status of analog ports: Normal, H/W Fault, Authentication Failed, Disabled. Port Number — the connected port ID, as on the label on the back panel for each port interface. User — the selected SIP Trunk user account; the user name assigned to this port. Auth Password — enter the password if needed (see section ). Line Profile — the media parameters defined through the <Profile Config> operation (see section ). Action — configure call features for this particular analog port (see section ). Media Parameter Configuration for Analog Ports. Click the <Profile Config> button to configure media parameters for analog ports. Figure 71. Configuring media parameters for analog ports. Profile ID — name of this profile. Signaling — Loop Start and Ground Start are supported. Choose the correct signaling type according to the analog equipment to which the FXS port is connected. DTMF mode — RFC 2833 out-of-band and in-band methods are both supported. If RFC 2833 is chosen, the ESBC supports auto-negotiation with the SIP servers (or SIP devices) at the SIP Trunk side. Choose the correct mode according to the specification of the service provider, or leave the default setting (2833) unchanged.
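For reference, the out-of-band DTMF method selected by the RFC 2833 mode carries each keypress as a small binary telephone-event payload inside RTP. A sketch of that 4-byte payload format (illustrative only; the volume and duration values below are arbitrary examples):

```python
import struct

def rfc2833_payload(event: int, end: bool, volume: int, duration: int) -> bytes:
    # RFC 2833 telephone-event payload, 4 bytes:
    #   event (8 bits), E bit + reserved + volume (8 bits),
    #   duration in RTP timestamp units (16 bits, network byte order).
    flags = (0x80 if end else 0x00) | (volume & 0x3F)
    return struct.pack("!BBH", event, flags, duration)

# DTMF digit '5' is event 5; end-of-event bit set, volume 10, duration 800 units
packet = rfc2833_payload(5, end=True, volume=10, duration=800)
print(packet.hex())
```

Because the digit is signaled in this structured payload rather than as an audible tone in the voice stream, it survives low-bitrate codecs that would distort in-band tones.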
Gain — controls telephone speaker and earpiece volume. CODEC — controls SIP media negotiation. To change the priority level of a CODEC, highlight it and click the arrow keys to adjust its position. To remove a CODEC, click the <Delete> button under the Action column. Fax — the ESBC supports both T.38 Relay and Pass-Through modes for fax transmission over an IP network. Parameters for Pass-Through: FAX calls transmitted in Pass-Through mode are treated as voice calls and transmitted using the G.711 codec only. Depending on the region where the ESBC is deployed, choose either G.711 u-Law or G.711 A-Law for transmitting FAX calls. Parameters for T.38 Relay (note: if the peer gateway does not support T.38 relay, FAX calls fall back to Pass-Through mode): High Speed Redundancy — number of redundant T.38 fax packets to be sent for high-speed V.17, V.27, and V.29 fax machine image data. The default value is 1; do not change it unless necessary. NOTE: increasing the High Speed Redundancy parameter may significantly increase the network bandwidth consumed by fax calls. Bit Rate — choose a fax transmission speed to be attempted: 2400, 4800, or 9600. By choosing 14400, the ESBC can automatically adjust/lower the speed during the transmission training process. The ESBC supports G3 Fax. Max Buffer Size — this option indicates the maximum number of octets that can be stored on the remote device. Call Feature Configuration for Analog Ports. The ESBC analog ports can be used to provide voice communications to enterprise users for backup purposes. Click the <Action> icon in Figure 70 and configure the parameters on this page to enable or disable certain call features for the selected port. Choose the <Feature Setting> tab. Figure 72.
Call features for analog ports. Call Waiting — if the line is busy, inform the user of an incoming call and display the caller ID of the incoming caller. 3-way Calls — the 3-way calls feature allows this analog port to mix media streams for two different parties. Anonymous Call Rejection — block calls from parties who have their caller ID blocked. Waiting Message Indication — select one of various types of VMWI signals to use as notification of new voicemails. Hot Line — with the hotline number entered, the telephone automatically connects to this destination number as soon as the user lifts the handset. No Answer Timer — no-answer timeout for incoming calls. 4.3 Verifying Calls between the ESBC and SIP Trunk: Test Agent. The Test Agent Setting. Click the <Setting> tab to configure the Test Agent parameters. Figure 73. Test Agent Parameter Settings. Enabled — check the option box to enable the feature; uncheck to disable it. User ID, Display Name, Auth ID, Auth Password, Trunk SIP Profile — see section for detailed descriptions. Registration State — shows the registration status of the test agent (i.e., connected or disabled). You can click the Register button to register your test agent or click the De-register button to disable it. CODEC — select the voice CODEC to be used for voice quality tests. Audio File — choose to use the default audio file, or upload your own. This is the audio that plays when the test call is established. Auto Disconnect Call — configure the test agent to automatically disconnect the call when it finishes playing the audio file once, or to disconnect after the defined Duration. Schedule Test: Enabled — check the option box to enable the Scheduled Test feature. Destination Number — enter the destination number that the test agent calls for testing.
Define the test frequency in the fields provided. The Usage of the Test Agent. The ESBC includes a built-in SIP device, the Test Agent (TA), to verify connectivity and voice quality with the service provider network. In addition, automatic scheduled voice quality testing gives service providers the ability to view variations in call quality over time. (The TA also supports both media loopback and RTP loopback tests with the InnoMedia EMS server.) Navigate to Telephony > TOOLS > Test Agent. Enter the target auto-answer phone number in the Destination Number field. Click <Dial>; the TA automatically connects the call and hangs up after 90 seconds. You may press the <Show> button to display the latest test call result. Please see section for detailed explanations of voice quality parameters. Figure 74. Test Agent Call Control. 4.4 Routing Calls: ESBC with a SIP-PBX. Please refer to Section to ensure the appropriate arrangement of the enterprise voice network for the SIP PBX connecting to the ESBC Voice and NAT interface. This section addresses the configuration of SIP telephony signaling and media interconnecting with the SIP PBX. SIP PBX Profile. SIP PBX models used in deployments, though conforming generally to SIP requirements, may also be designed with some deviations for specific needs. The ESBC normalizes the SIP signals for the target SIP PBX to allow interconnection with the SIP server. Choose the target SIP PBX model from the profile list; otherwise choose the Generic profile and make the associated configuration changes according to the specific requirements of the PBX. Navigate to Telephony > SIP-PBX > PBX SIP Profile. Choose the target SIP PBX model from the profile list as the Default Profile. Click the <Apply> button. The ESBC supports multiple SIP PBX models (profiles) to connect with. The configurations have to match the assigned SIP Accounts. Please also see section for configuring SIP Accounts.
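The profile-driven normalization described above can be illustrated with a small sketch that rewrites a To header for a target PBX. Everything here is hypothetical: the profile keys, the helper name, and the addresses are invented for illustration and do not reflect real ESBC internals or configuration fields.

```python
# Hypothetical profile flags modeled on options described in this section
# (host rewriting and country-code translation).
PROFILE = {"use_pbx_ip_as_domain": True, "country_code": "1"}

def normalize_to_header(to_uri: str, pbx_ip: str, profile: dict) -> str:
    # Split "sip:user@host" into its user and host parts.
    user, _, host = to_uri.removeprefix("sip:").partition("@")
    if profile.get("use_pbx_ip_as_domain"):
        host = pbx_ip  # compose the host part from the PBX IP address
    cc = profile.get("country_code", "")
    if cc and not user.lstrip("+").startswith(cc):
        user = cc + user  # PBX expects numbers as country code + DID
    return f"sip:{user}@{host}"

print(normalize_to_header("sip:4085551234@provider.example.com",
                          "192.168.1.20", PROFILE))
```

The real ESBC applies such rewrites per selected PBX profile; a Generic profile would expose the individual toggles instead of a model-specific preset.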
When there is a need to adjust the parameters of any target PBX profile, click the <Setting> icon. The configurations can be exported (click <Export>) for backup purpose, and restored to the ESBC system (click <Import>). The definitions and usage for most of the parameters are the same as those of the Trunk SIP Profile. They are briefly described in this section Basic SIP Parameters Figure 75 Basic sip parameters in connecting to the SIP PBX 96 97 SIP PBX Profile Profile ID Enable Static Registration Use TCP Transport for SIP Messages Timer C, Timer 1xx Retransmission Description Enter a unique name for this profile. Usually enter the SIP PBX model name. Check this option when the target SIP PBX uses static registration mode to connect to its north bound sip server, i.e., the ESBC. Please refer to the SIP-PBX requirements for configuring this option. Check this option when the target SIP PBX uses only the TCP transport protocol for SIP messages (but not UDP). Please refer to the SIP-PBX requirements for configuring this option. Standard SIP timers defined in RFC Interoperability See the application note: The ESBC Caller ID screening mechanism. Figure 76 SIP-PBX Interoperability I SIP PBX Interoperability Country Code Description The DIDs or user accounts configured on the ESBC and SIP server usually do not include country code information. When the PBX configured numbers are composed of country code + DID, input the country code value in this field, and hence the ESBC will translate the calling/called numbers with or without country code, and connect calls. 97 98 Set URI format of Header: From, To. Depending on the SIP PBX configuration, the ESBC generates the URI format and sends to the SIP PBX accordingly. 1. not E.164, without user=phone 2. not E.164, with user=phone 3. E.164 (prefixed with + ), without user=phone Set Identity header for calls to SIP terminal 4. 
E.164 (prefixed with + ), with user=phone Depending on the SIP PBX privacy configuration, the ESBC may add one of the following headers as the caller identity header for privacy purposes and forward to the SIP PBX. P-Asserted-Identity: defined in RFC This sip header is used among trusted sip entities to carry the identification of the user sending a sip message as it was verified by authentication. Remote-party-id: is defined in a sip draft. This sip header provides information about the remote party. Anonymous Call Anonymous call- Set privacy header to the value id Depending on the SIP PBX configuration, the ESBC generates the sip From header in four different formats. Select one of these from the drop down menu. The ESBC is able to assert an identity and forward to a trusted SIP PBX by inserting P-Asserted-Identity and a Privacy: id header (RFC3325) Set Caller ID if it does not exist When the inbound call to the SIP PBX does not include a Caller ID, the ESBC may insert a specified caller ID. Get Caller ID from SIP Header if exists Choose desired caller ID source(s) among the following three options to transport the SIP PBX P-Asserted-Identity Remote-Party-ID P-Preferred-Identity Forward SIP Header to SIP terminal Check these items to allow the ESBC to forward the following sip headers from the service provider network to the SIP PBX. The ESBC itself does not generate the following headers, it simply forwards. Alert-Info, History-Info, Diversion. 98 99 Figure 77. SIP-PBX Interoperability II Please refer to the application notes: ESBC Application Notes- Configurations for SIP PBX Call Transfer- REFER. Please refer to the application notes: The ESBC Caller ID screening mechanism. SIP Interoperability Description Forward DTMF in SIP INFO to SIP Server Strip ICE Attribute When this feature is enabled, the ESBC forwards SIP INFO if the registered SIP UAs send DTMF tones using SIP INFO method. ICE attributes are used for NAT traversal purposes. 
Enable this feature to allow the ESBC to strip all ICE-related parameters from SDP in messages sent to the SIP PBX. ICE-related attributes in SDP include a=candidate(.*) and a=ice(.*). Remove Contact and Record-Route Headers in 180 responses — enable this feature so that the ESBC removes network routing information (Contact headers and Record-Route headers) from SIP messages before forwarding to the SIP PBX. Add expires header in the 200 response of registration — the Expires header value is in seconds, e.g., Expires: 3600. Inside a REGISTER request, an Expires header designates the lifespan of the registration. Use the SIP Terminal's IP address as the domain — enable this feature so that the ESBC composes the host part of the SIP URI for request messages using the SIP-PBX IP address and forwards to the SIP-PBX. Use lr=true for loose routing — depending on the SIP PBX configuration, the ESBC adds lr=true to enable the loose routing feature. In loose routing, as specified in RFC 3261, the Request-URI always contains the URI of the destination user agent, as opposed to strict routing, where the Request-URI always contains the URI of the next hop. Use entire SIP address as the authentication name — use the AOR as the authentication ID. Use RFC2543 Hold — RFC 2543 is obsoleted by RFC 3261. For backward compatibility, the ESBC can allow the use of c= destination addresses set to all zeroes for call hold operations. Prefer Route by Identities — if this item is checked, the ESBC validates Caller ID in the order of precedence P-Preferred-Identity > P-Asserted-Identity > Remote-Party-Identity > From in INVITE messages from the SIP PBX; otherwise the ESBC validates the From header only. (Please refer to the application notes: ESBC Application Notes - ANI TN Screen.) Remove other media types when sending T.38 offer — when SIP/SDP messages include multiple m= lines in SIP offers, the ESBC removes those m= lines which are not related to T.38 media types. Ignore domain in Refer-To Header — this option is designed for processing call transfer features using the SIP REFER method with the SIP-PBX. Please refer to the application notes: ESBC Application Notes - Configurations for SIP PBX Call Transfer - REFER. Figure 78. SIP-PBX Interoperability III. Order of sending Re-INVITEs; Method of processing INVITE without SDP; Method of processing re-INVITE without SDP. Accept RTP/AVP with sdescriptions offer — sdescription is short for security descriptions for media streams. Enabling this option allows the ESBC to accept SDP media lines with the value m=RTP/AVP. Leave the default selection unchanged unless necessary. The RTP/AVP profile is defined for the use of RTP v2 and the associated control protocol (RTCP) within audio and video conferences. SDP with Secure Descriptions — depends on the SIP-PBX's capability to process secured RTP streams. Leave the default selection unchanged unless necessary: Transmit sdescription transparent; Transmit all sdescription in SAVP; Transmit all sdescription in AVP. Remove opaque parameter in the From and To header — by default, the ESBC transmits any opaque parameter to the SIP PBX. If the SIP-PBX has issues when receiving this extra parameter in a SIP message, remove it. The opaque parameter is a URI parameter, e.g., opaque=xxxxxx, used to attach extra pieces of information onto an AOR for routing or other purposes while keeping the AOR intact in the URI. Get Called Numbers from the Request-URI — by default, the ESBC gets called numbers from the To header. Enable this item to allow the ESBC to get (and route) the called numbers from the Request-URI. Forward Call Audit messages (OPTIONS and UPDATE) to PBX — this SIP PBX profile setting should be consistent with the similar parameter in the section for the target Trunk SIP profile. ESBC - PBX Security Configuration. Figure 79.
Security configuration for the SIP-PBX Security Check the source IP address of outbound INVITE Check the contact domain of outbound INVITE Description When this item is enabled, the ESBC replies with 404 not found and rejects outbound INVITE requests from a LAN source IP address which is not configured/registered on the ESBC. When this item is enabled, the ESBC replies with 404 not found and rejects outbound INVITE requests from the LAN if the contact domain is not in the registered client list ESBC - PBX Call Feature Configuration Figure 80. Service or call features designed for the SIP-PBX 102 103 Features Description Play Music-On-Hold when Hold By default, the ESBC streams the MOH of the SIP server to the peer party when the PBX user is put on-hold. Enable this feature to have the ESBC play its built-in MOH to the PBX user on behalf of the SIP server. Send NOTIFY of Message- Waiting without a subscribe Enable SIP Forking By default, the ESBC always sends NOTIFY messages for MWI to the SIP PBX. Uncheck this item so the ESBC will only send NOTIFY when the SIP PBX subscribes (SUBSCRIBE) to the MWI package. Enable this feature to allow the ESBC to fork a single SIP call to multiple SIP end points. A single call can ring many end points at the same time SIP-PBX and SIP-Client Authentication Navigate to Telephony > SIP-PBX > Authentication. Figure 81. Authenticating the ESBC SIP clients Three authentication modes are provided for SIP PBX authentication requirements. Note that the authentication mode selected is applied to all SIP clients which connect to the ESBC Voice- NAT interfaces (for SIP Trunk Service). Authentication Mode Description None Local RADIUS The ESBC trusts the SIP request attempts from SIP clients. No authentication required. The ESBC authenticates SIP request attempts from SIP clients by SIP authentication defined in RFC3261, which is a stateless, challenge-based mechanism. 
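The Local mode's challenge-based mechanism is SIP digest authentication. A minimal sketch of the response computation (MD5, without the optional qop/cnonce fields; all credential values below are made-up examples):

```python
import hashlib

def digest_response(user: str, realm: str, password: str,
                    method: str, uri: str, nonce: str) -> str:
    # RFC 3261 digest (RFC 2617 style): the client proves knowledge of the
    # password without sending it; the server issued `nonce` in its challenge.
    h = lambda s: hashlib.md5(s.encode()).hexdigest()
    ha1 = h(f"{user}:{realm}:{password}")  # credentials hash
    ha2 = h(f"{method}:{uri}")             # method + request-URI hash
    return h(f"{ha1}:{nonce}:{ha2}")       # value sent in the Authorization header

resp = digest_response("4085551234", "sbc.example.com", "secret",
                       "REGISTER", "sip:sbc.example.com", "abc123")
print(resp)
```

The mechanism is stateless on the server side: the ESBC can recompute the same hash from the stored credentials and the nonce it issued, and compare it with the client's response.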
The ESBC supports RADIUS, access server authentication for sip 103 104 accounts. Enter the target RADIUS server IP or FQDN, shared secret for the ESBC to connect, and threshold values. Note that all sip accounts have to be configured on the RADIUS server when this option is chosen. 104 105 4.5 ESBC System Global SIP Settings SIP Parameters To configure the ESBC s global SIP settings (in addition to those on the Trunk SIP Profile and PBX SIP Profile ), navigate to Telephony > SIP-PBX > SIP Parameters SIP Parameters Figure 82 SIP parameter global settings SIP Parameters SIP Session Timer Routing incoming calls by Request-URI T1, T2, T4, Timer B, Timer F, Timer H, Timer D T100 Max Forwards Description The SIP session timer specifies a keep-alive mechanism for SIP sessions, which limits the time period over which a stateful proxy must maintain state information without a refresh re-invite. The ESBC, by default, routes incoming calls according to the context specified in the TO header. If this box is checked, the ESBC routes incoming calls according to Request-URI header. Standard SIP timers, defined in RFC 3261 Waiting time for sending 100 Trying for an INVITE message. Defined in RFC3261, the Max-Forwards header is used to limit the 105 106 number of proxies or gateways that can forward the request to avoid looping errors. Default number is 70. SIP Port Place call among SIP UAs as internal calls Send De-Register if reboot Reject B2BUA incoming TCP connection via WAN Port used for SIP signaling communicating with north bound SIP servers. Default 5060 for B2BUA (SIP trunk service) and 5080 for SIP-ALG (hosted voice service). When this feature is enabled, the ESBC routes calls locally when the called numbers are configured on the ESBC database and does not route to the service provider network. By default, this feature is disabled. 
When the ESBC reboots, it sends a De-Register message to the service provider network, and initiates the Register and service subscription processes. By default, the ESBC rejects TCP connection attempts to reduce the possibility of large TCP packets being dropped by routers along the communication path which may result in call attempt failures. Store nonce for authentication Depending on the SIP (or IMS) server configuration, the ESBC may store nonce for authentication. Media Inactivity Timer Single-direction or Bi-direction time-out timer in second. If checked, the ESBC disconnects calls when no media traffic is detected according to the selection single-direction or bothdirection after the xxx seconds entered. Send Dummy packets to open WAN side NAT connections Keep alive message to keep rtp port open if there is NAT equipment deployed in the service provider network to which the ESBC transmits media packets System Music on Hold (MOH) Figure 83. Configuring the ESBC system MOH MOH Audio File Description The ESBC plays Music On Hold for calls on the FXS ports and SIP UAs during special conditions to prevent the remote side experiencing dead-silence if they are on hold. The ESBC supports a customized MOH audio file of G.711-uLaw of 60 second in length. 106 107 Filter SIP Method The ESBC filters SIP messages specified in the list during the call setup process. The configurations affect both inbound and outbound directions to the SIP PBX and the SIP server in the Service Provider network. Figure 84 Filter SIP Method 107 108 4.5.2 Customized SIP response code settings Navigate to Telephony > SIP-PBX > SIP Response to configure the SIP Response code translation between the enterprise SIP PBX (thru the NAT and Voice interfaces) and the SIP server in the service provider network (thru the WAN interface). The ESBC default mapping tables should meet most SIP requirements. 
There is no need to input new records into these two tables unless the PBX or SIP server specifies response codes for different processes. If your network is live, make sure that you understand the potential impact of any configuration changes. Figure 85. SIP Response Code mapping. The mapping rules are used to match incoming (incoming to the ESBC) SIP error response codes in order of 'closeness' to the Received Response codes in the list. In this context, closeness is judged by exact matches first, followed by the least wild-carded entries. Entries with the same degree of wildcarding are chosen by the rule that matches the most digits before the wild-carded characters. 4.6 Numbering Plan. When the ESBC SIP user accounts (usually DID numbers) and the PBX numbers do not have the same patterns, the Digit Translation plan and configuration can be used. See the ESBC Application Notes - Configurations for digit translations. Configuring numbers and formulating digit translation rules. The Digit Translation feature enables the ESBC to prepend or strip certain digits from calling and called numbers in both the outbound and inbound directions by formulating match-pattern and digit-map rules. Navigate to Telephony > ADVANCED > Digit Translation. Figure 86. Digit Translation for Called or Calling Parties. Direction — direction of the call: Inbound or Outbound. Call Party - Match — call party to be used to match the digit string rule: Calling Party or Called Party. Note: for inbound calls, only the called party can be modified. Type — types of calls: All, SIP, or PRI. Match Pattern — the digit map string that must match before the number can be translated. Call Party - Translate — choose whether to translate the number of the calling or called party. Strip Digits — the number of digits to strip, starting from the left side. Add prefix — the digit string to prepend to the number. Description — add a description of this translation rule.
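A digit translation rule of this kind (match pattern, strip leading digits, prepend a prefix) can be sketched as follows; the regular-expression pattern syntax here is a stand-in for the ESBC's digit-map notation:

```python
import re

def translate(number: str, pattern: str, strip: int, prefix: str) -> str:
    # Apply one rule: on a full match, strip leading digits, then add the prefix.
    if re.fullmatch(pattern, number):
        return prefix + number[strip:]
    return number

# Match 1408xxxxxxx, strip 4 digits, add prefix 3510:
print(translate("14085551234", r"1408\d{7}", 4, "3510"))  # -> 35105551234
print(translate("19195551234", r"1408\d{7}", 4, "3510"))  # no match, unchanged
```

Numbers that do not match the pattern pass through untouched, which is why one table can hold several non-overlapping rules.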
The following example describes the rule displayed in Figure 86: 1. Strip the first four digits from any telephone number that matches the pattern (1408xxxxxxx). 2. Add Prefix 3510 prepends the digits 3510 to the string after step 1 is processed. The result of this translation will be 3510xxxxxxx (the called numbers sent from the ESBC to the SIP server). NOTE: To translate the Caller ID from the PBX for outbound calls, it may be necessary to configure Set From header for outgoing calls on the Trunk SIP Profile. Select the item Use the original caller (PBX); the SIP server must accept the calling number after translation (see section ). 4.7 Emergency Call Configuration. The ESBC emergency call features permit the service to provide: proper priority for emergency calls; line preemption, so that emergency calls are always allowed regardless of session limits (all 911 emergency calls are established regardless of any limit in the number of sessions, i.e., the system capacity); and overriding of the caller ID and caller name information (the emergency CID and Display Name override all other caller ID and display name settings when dialing out on an Outbound Route flagged as Emergency). Adding or deleting emergency call numbers. To configure emergency call numbers, navigate to Telephony > ADVANCED > Emergency Call. Figure 87. Adding emergency call numbers. Enter an emergency call number in the Number field and its description in the Description field. Carefully check your local emergency call list. Click the <Add and Apply> button to add the new entry to the ESBC database. Connection settings for emergency call numbers. Click the <Setting> tab. Figure 88.
Connection settings for emergency call numbers

Connection Settings -- Description
Override Caller Information -- If enabled, the Emergency CID and Display Name will override all other caller ID and display name settings when dialing out.
Caller ID -- Emergency caller ID.
Display Name -- Emergency caller Display Name.
Set SIP Priority Header to "emergency" -- If enabled, the Outbound Route is flagged as Emergency by setting priority: emergency in SIP messages.
Override Trunk Group Identifier -- If enabled, the emergency tgrp will override the regular Trunk Group Identifier.
tgrp -- Enter a unique Trunk Group Identifier for emergency calls.
Trunk-context -- Enter the Trunk-context for emergency calls.
DSCP for RTP Packets -- Upstream Internet Protocol (IP) packets are marked with a configurable DSCP to indicate that the IP packet content contains emergency media.
Value -- DSCP value for emergency call media packets.
Send SNMP Trap -- Check this box to send an SNMP trap to the EMS or SNMP server when there is an emergency call.

4.8 Media Transcoding

Introduction

The media transcoding feature offered in the InnoMedia ESBC 9378 and ESBC-10K series provides a solution to the problem where the Service Provider supports media capabilities different from those of the end device located at the enterprise. The ESBC provides the ability to transcode between the following media capabilities: Fax (T.38 and G.711), voice CODECs (G.711, G.729, G.726), and DTMF (RFC2833 and in-band). By configuring the ESBC, it is possible to allow different forms of SIP signaling negotiation between the Service Provider side and the Enterprise side before media transcoding takes place. Please refer to the ESBC Application Notes--Media Transcoding Features for call flows and signal descriptions.

Enabling Transcoding Profiles

The Transcoding Profile screen allows the profile list and the default profile to be managed.
It also provides access to the Profile Configuration screen, which allows the system administrator to configure Fax, CODEC, or DTMF transcoding settings between the WAN and LAN sides of the ESBC. To enable the Transcoding Profile, navigate to the Telephony > ADVANCED > Transcoding Profile page, check Enable, and hit the <Apply> button.

Figure 89. Enabling the transcoding profile

Editing or Adding a Transcoding Profile

To add a Transcoding Profile, click the <Add> button and then click the <Setting> button to edit the transcoding parameters. Individual profiles can be created with different configurations for a specific SIP UA or a group of SIP UAs to use. In the Profile Configuration screen, modify the Profile ID and select one Transcoding Mode option from the drop-down list that a SIP UA group can use:

Figure 90. Transcoding mode selections

Transcoding Mode -- Description
No Transcoding -- Transcoding is disabled on UAs assigned to this transcoding profile.
CODEC Transcoding -- This setting only allows CODEC transcoding. CODEC transcoding only happens if there are no common CODECs between the caller and called UAs. Fax and DTMF transcoding will not be performed.
CODEC, FAX, and DTMF Transcoding -- Fax, DTMF, and CODEC transcoding for all calls.
CODEC and FAX Transcoding -- Fax and CODEC transcoding for all calls, but no DTMF transcoding.

Transcoding Options -- Description
Allow calls when no supported CODEC in SDP offer -- The ESBC will allow the SDP offer to pass through even if the codec is not in the Extended CODEC list (see section 4.8.4). In this case, no transcoding will take place, but this feature allows unsupported transcoding codecs (e.g., G.723.1) to be negotiated end-to-end between the Enterprise and Service Provider SIP UAs.
Allow calls even when transcoding resources are exhausted -- When selected, the ESBC will allow calls to be processed even if there may not be enough channels to process transcoding. The calls will go through, but the media may not be transcoded.
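As a sketch of the mode table above, codec transcoding is attempted only when the selected mode permits it and the two sides share no common codec. This is illustrative pseudologic with assumed codec names, not the ESBC implementation:

```python
def needs_codec_transcoding(mode: str, caller_codecs, called_codecs) -> bool:
    """Per the transcoding-mode table: 'No Transcoding' disables everything;
    the other modes trigger codec transcoding only if the caller and called
    UAs have no codec in common."""
    if mode == "No Transcoding":
        return False
    return not set(caller_codecs) & set(called_codecs)
```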
Figure 91. The ESBC Transcoding Profile Configuration

4.8.2 DTMF Transcoding

The ESBC is able to perform DTMF transcoding between the SIP UAs on both the WAN and LAN sides. The DTMF mode can be either In-band or RFC2833.

DTMF Transcoding -- Description
Offer In-Band and RFC2833 -- The ESBC offers both RFC2833 and In-band in the SDP messages of SIP offers.
Offer In-Band -- The ESBC offers only In-band in the SDP messages of SIP offers.

Note 1. The ESBC automatically handles negotiation when it answers SIP offers from the peer.

In the DTMF configurations displayed in Figure 91, the ESBC WAN interface (item 2) offers both RFC2833 and in-band DTMF, and the ESBC LAN interface (item 3) offers in-band DTMF. These configurations result in the call flows shown in Figure 92:
For outbound calls, the ESBC transcodes in-band DTMF to out-of-band RFC2833 DTMF towards the service provider network.
For inbound calls, the ESBC transcodes out-of-band RFC2833 DTMF to in-band DTMF towards the PBX.

Figure 92. DTMF transcoding: G.711 in-band and RFC2833 processes

4.8.3 FAX Transcoding

FAX over IP communications require a high-quality IP network for proper operation. Please refer to section for network connectivity assessment. The ESBC performs FAX transcoding based on an SDP Offer-Answer negotiation. As shown in Figure 91, items 4 and 5 denote the configuration of the Egress FAX Setting. The ESBC can change its egress SDP signaling using the following three options:

Egress FAX Setting -- Description
Passthrough -- The ESBC allows both the LAN UA and WAN UA endpoints to negotiate SDP themselves.
Offer G.711 Only -- The ESBC removes the T.38 codec from the egress SDP offer.
Offer G.711 and T.38 -- The ESBC sends two m= lines in its egress SDP, including both T.38 and G.711.
Packetization Time -- Ptime. This parameter is applicable to T.38 packets. Three options are available: 10, 20, and 30 ms.

Note 2. When offered two m= lines in SDP, some FAX adaptors will choose the first m= line, regardless of the content of the two lines.
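The three Egress FAX Setting options above can be sketched as the SDP media ("m=") lines the ESBC might emit; the port numbers and the ordering of the two m= lines are assumptions for illustration only:

```python
def egress_fax_sdp(setting: str):
    """Return illustrative egress SDP media lines for each Egress FAX
    Setting; None means Passthrough (the ESBC leaves the offer untouched)."""
    audio = "m=audio 49170 RTP/AVP 0"   # G.711 u-law
    t38 = "m=image 49172 udptl t38"     # T.38 fax relay
    if setting == "Offer G.711 Only":
        return [audio]                  # T.38 removed from the egress offer
    if setting == "Offer G.711 and T.38":
        return [t38, audio]             # two m= lines; per Note 2, some adaptors just take the first
    return None                         # Passthrough
```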
To allow FAX Transcoding, the Egress FAX Setting must be selected for both WAN and LAN sides as shown in Figure 91. This will allow the following steps to take place: 1. The ESBC starts processing FAX transcoding flows only when FAX transmission is detected. 2. When FAX transmission is detected by the FAX adaptor on ONE SIDE of the fax transmission (usually the receiving side), this side initiates T.38 FAX requests by sending a reinvite with T.38 information to the ESBC. The ESBC will attempt to honor the T.38 FAX offer from this FAX adaptor. 3. When the condition of item 2 occurs, the ESBC forwards the FAX transmission request to the fax adaptor on the OTHER SIDE, according to the ESBC Egress FAX Setting Rules given below: If Offer G.711 and T.38 is selected, then the ESBC offers two transmission modes (T.38 and G.711) and allows the FAX adaptor to choose which FAX mode to use. If Offer G.711 Only is selected, then the ESBC offers only G.711 transmission mode to the FAX adaptor. If Passthrough is selected on either side, then the ESBC will let the FAX Adaptors determine the final fax transmission mode by allowing both endpoints to negotiate SDP between them. Note 3. T.38 does not indicate which party (calling party or called party) is responsible for detecting that a FAX transmission is being attempted and initiating the switch from audio mode to T.38 mode by sending reinvite requests carrying T.38 information. That is, either calling party or called party may initiate T.38 FAX requests. 116 117 4.8.4 Voice Codec Transcoding The ESBC can be configured to add certain codec capabilities (extended codec list, items 6 and 7 of Figure 91) to transcoding profiles, and perform transcoding in cases where the selected codec in the answer SDP is not available in the original offer Typical example of voice codec transcoding in deployment When the Service Provider network only supports G.711 u-law, then only that CODEC is configured for the ESBC WAN extended codec list. 
When the PBX on the ESBC LAN network only supports G.729, then only that CODEC is configured for the ESBC LAN extended codec list. Referring to Figure 91, items 6 and 7 are used to configure the following codec transcoding example. When an outgoing call from the PBX advertises only G.729 in the SDP offer, then the ESBC will add G.711 u-law in the SDP offer when sending out SIP messages to the Service Provider. When an Incoming call from the Service Provider advertises only G.711 u-law in the SDP offer, then the ESBC will add G.729 in the SDP offer when sending out SIP messages to the PBX in the LAN direction. Once the call is established, the ESBC will transcode the media in both directions with the supported CODECs. 117 118 4.9 Routing Calls: ESBC with a PRI-PBX PRI Spans and Channels The ESBC T1/E1 module supports ISDN PRI digital lines, and provides up to two T1/E1 ports for connection to the enterprise s TDM PBXs. The T1/E1 TDM voice traffic is converted to VoIP and processed by the ESBC to connect to service provider s SIP trunks and vice versa. To configure digital lines (PRI T1/E1), navigate to Telephony > T1/E1> Digital Line. The T1/E1 Status page displays the system configuration and status of all PRI span(s), including D and B channels. Figure 93. Status display of ESBC digital lines: 2 Span T1 model Figure 94. Status display of ESBC digital line: 1 Span E1 model 118 119 PRI Span Status Description Refer to section Span Status Disabled: Span is disabled Clear, OK: Span is ready to use D-channel Down: PRI span signaling error or wire unplugged Alarm: Alarm signals are received locally or by the remote PBX No. PRI span 1, and span 2 Protocol Line Coding T1, J1 or E1 T1 / J1: AMI or B8ZS E1: HDB3 or HDB3 with CRC4 Line Framing T1 / J1: D4 or ESF E1: CCS Profile Span Group The media transmission profile defined in the Profile Config section for PRI spans. 
It defines codecs and media related parameters used for PRI spans communicating with the SIP Trunk servers. See detailed description in section If there are more than one PRI spans (ports), each span can be assigned to a different PRI Span Group to which certain UAs belong. Each span group defines its own PRI span settings and hence a span group may indicate a connection to a different TDM PBX. See detailed description in section Enabled Channel Status Span enabled or disabled Idle: channel is ready to use Active: an active call is ongoing on the specified channel Unavailable: channel is disabled ISDN signaling channel (D-Channel) B-Channel Restarting Remote out of service 119 120 4.9.2 PRI Span Statistics The status view of span statistics for digital lines shows Framing Errors, CRC Errors, Code Violations, Errored Blocks, and Slips. Statistics of Span Health Description No. Framing Errors CRC Errors Code Violations Errored Block Slips Framing and Coding Status Reset Counter PRI span (port) number Framing errors are counted during synchronous state only. The number increments when incorrect FT and FS-bits in F4, F12 and F72 format or incorrect FAS-bits in ESF format are received. No function if CRC6 procedure or ESF format are disabled. In ESF mode, this counter is incremented when a multi-frame has been received with a CRC error. CRC errors are not counted during asynchronous state. No function if NRZ or CMI code has been enabled. If the B8ZS code is selected, this counter is incremented by detecting violations which are not due to zero substitution. If simple AMI coding is enabled, all bipolar violations are counted. In ESF format this counter is incremented if a multi-frame has been received with a CRC error or an errored frame alignment has been detected. CRC and framing errors are not counted during asynchronous state. In F4/12/72 format an errored block contain 4/12 or 72 frames. Incrementing is done once per multi-frame if framing errors have been detected. 
The number of frame slips due to clock synchronization between the ESBC and the TDM- PBX The Line Framing and Line Coding configuration and the status of the D-channel. Resets the counters of all Span Statistics fields 120 121 4.9.3 PRI Span Connection Settings Basic Settings Always ensure that PRI Span parameters are consistent or matched with those settings on the peer TDM- PBX. Always start from T1/E1 port 1. Ensure the cable between the interface port and the PBX is connected correctly. ESBC PRI port 1 synchronizes with the clock source, and port 2 follows the timing of port 1. PRI cable(s) must be connected to the port 1 before the port 2 can be used. Figure 95. PRI Span Basic Settings PRI Span: Basic PRI Span Group PRI Profile Enabled Clock Source Description Configure Span 1 (and/or 2) to the target PRI span group which assigns the ESBC user accounts to PRI spans. (PRI Span Groups are described in section 4.9.4) The media transmission profile defined in the Profile Config for PRI spans. It defines codecs and media related parameters used for PRI spans communicating with the SIP Trunk servers. Check this box to enable the particular span. Span 1 should always be enabled. Internal. The ESBC default clock source is Internal, where voice transmission timings follow the ESBC internal clocking scheme. The connected TDM-PBX clock should be configured to follow the ESBC clock. Line. The ESBC follows the TDM-PBX clocking scheme. If there are two spans, span2 always follows the clock of span1. The span2 does not have its own clocking scheme.. If there is only one PRI line, always connect to the SPAN1. Do not 121 122 change the ESBC clock mode default settings unless necessary. Line Framing/Coding T1 / J1: ESF/B8ZS or D4/AMI E1: CCS/HDB3 or CCS/HDB3 with CRC4 The configuration of the ESBC should be consistent with that of the TDM-PBX. Line Build Out Default mode: 0 db (CSU) / feet (DSX-1). 
The configuration of the ESBC should be based on T1 line length and match that of the TDM-PBX. Channel Total Channel Numbers: T1-24 channels; E1-31 channels B-Channel: configure the enabled B-channel numbers. D-Channel: T1 fixed at CH 24; E1 fixed at CH 16 Switch Type T1: National ISDN 2 (default), Nortel DMS100, AT&T 4ESS, Lucent 5ESS, Old National ISDN 1, Q.SIG E1: EuroISDN (common in Europe) The configuration of the ESBC should match that of the TDM-PBX. Signaling Method PRI NET/PRI CPE. Always configure the ESBC with PRI NET mode to perform switch side functionality. (The TDM-PBX should be in PRI-CPE mode). Do not change this default setting unless necessary ISDN Interoperability Configuration The parameters listed here are available for PRI interoperability purposes. They are settings for provisioning Network specific facilities, and ISDN Timers for Q,921/Q.931. Normally, the default settings meet most usage requirements. You will only need to adjust these parameters if the default settings need to be changed to deal with special conditions. Figure 96. PRI Span Interoperability Settings 122 123 PRI Span: Basic Network Specific Facility Description The ISDN protocol allows telephone service providers to add their own custom protocol extensions. These custom protocol extensions provide various localized services that are not defined in the general ISDN specifications. The ESBC supports the following types of Network Specific Facility and acts as a PRI NET role (Central Office switch side). Configure none if the type is unknown or not configured on the TDM-PBX. Available options are: none sdn megacom tollfreemegacom accunet. PRI Exclusive Default mode: checked. Unconditionally picking B channels exclusively. This parameter should be enabled when the ESBC is configured as PRI NET. Do not change this default setting unless necessary. Discard Remote Hold Retrieval To ignore remote side (PRI side) indications and use MOH that is supplied over the B channel. 
Default mode: checked. Do not change this default setting unless necessary. ISDN Timers Do not change these default settings unless necessary. Q921 Timers: K: the maximum value of outstanding I frames N200: Maximum number of retransmission attempts T200: Transmission Timer T203: Maximum time allowed without Frame Exchange Q931 Timers: T305: Timer sets how long to wait to get a response such as RELEASE to a DISCONNECT message. T308: sets how long to wait to get a response such as RELEASE COMPLETE to a RELEASE message. T313: sets how long to wait to get a response such as CONNECT ACK to a CONNECT message. Transmission of Facility Based ISDN Supplementary Services Enable Transfer Default mode: checked. Do not change this default setting unless necessary. Default mode: unchecked. Enable this feature when the client subscribes to this supplementary service. Supports business ISDN supplementary services: TBCT/RLT/ECT (National ISDN II/Nortel DMS/Euro ISDN). When enabled, the ESBC uses the SIP REFER method to the SIP Server side and releases B channels when call transfer/forwarding 123 124 operations are successful. Send Display Name Default mode: checked. Send display names to called parties for inbound calls. Some TDM- PBX s do not support Display Name IE settings. Uncheck this setting when necessary Process Ringback Tone (RBT) or Early Media for calls This section addresses RBT call progress related configurations when interworking ISDN and SIP/SDP signaling between the TDM-PBX and the VoIP network. The common problem scenarios/symptoms are: A PBX user (internal to ESBC) places a call through the ESBC to an external number and does not hear a RBT before the call is answered, even though the receiving phone rings and the call is answered. A PSTN user (external to ESBC) places a call to a PBX user through the ESBC and does not hear a RBT before the call is answered, even though the receiving phone rings and the call is answered. 
The ESBC provides the flexibility to support various forms of RBT generation, either in-band RBT media or out-of-band RBT signals. The default configuration should meet most deployment requirements. There is no need to change these settings unless necessary. If your network is live, make sure that you understand the potential impact of any configuration changes.

Figure 97. Process RBT or early media for calls

Outbound Calls: Whether the ESBC processes in-band RBT media or out-of-band RBT signals is controlled by the following settings:
1. Drop-down list "Play Ringback Tone for outbound calls": Always, Never, or As-Needed. With As-Needed, the ESBC decides the RBT action toward the PBX according to the messages it receives (SIP response codes 180/183 from the network, and the Q.931 Progress Indicator from the PBX).
2. Check box "Ignore 183/early media for outbound calls": When unchecked, the ESBC honors 183 messages received from the network side; otherwise the ESBC will not process in-band RBT media relayed from the network. (Default setting: unchecked.)

For outbound calls originating from the PBX, the settings combine as follows:

Play Ringback Tone: As Needed; Ignore 183/Early Media: unchecked --
When the SIP Trunk side responds with in-band RBT media (183 response code), the ESBC relays the media toward the PBX; the SIP Trunk side (the service provider network) provides the RBT. When the SIP Trunk side responds with out-of-band RBT signals (180 response code) to the ESBC, and the SETUP message from the caller indicates out-of-band RBT, the ESBC will let the PBX play RBT; the PBX provides the out-of-band RBT.

Play Ringback Tone: As Needed; Ignore 183/Early Media: checked --
When the SETUP message from the caller expects in-band RBT media, the ESBC generates in-band RBT media locally and sends it to the PBX; the ESBC provides the in-band RBT media. When the SETUP message from the caller indicates out-of-band RBT, the ESBC will let the PBX play RBT; the PBX provides the out-of-band RBT.

Play Ringback Tone: Always; Ignore 183/Early Media: unchecked --
When the SIP Trunk side responds with in-band RBT media, the ESBC relays it to the PBX; the SIP Trunk side (the service provider network) provides the RBT.

Play Ringback Tone: Always; Ignore 183/Early Media: checked --
Regardless of what the SIP Trunk side responds with regarding RBT, the ESBC always generates in-band RBT media locally and sends it to the PBX; the ESBC provides the in-band RBT media.

Play Ringback Tone: Never; Ignore 183/Early Media: unchecked --
When the SIP Trunk side responds with in-band RBT media (183 response code) to the ESBC, the ESBC relays the in-band media to the PBX; the SIP Trunk side (the service provider network) provides the in-band RBT media.

Play Ringback Tone: Never; Ignore 183/Early Media: checked --
When the SIP Trunk side responds with out-of-band RBT signals (180 response code) to the ESBC, the ESBC will let the PBX play RBT. Regardless of what the SIP Trunk side responds with regarding RBT, the ESBC will let the PBX play RBT; the PBX or a device behind the PBX provides the out-of-band RBT media.

Inbound Calls: In-band RBT media may be provided by the PBX when the "Enable early media for inbound calls" box is checked, which enables the ESBC to send out 183 and forward in-band media to the network when the ESBC receives in-band signals (Progress Indicator) from the PBX. Default setting: checked. Please refer to the ESBC Application Notes for PRI RBT Processing for the detailed feature description and the ESBC usage of the PRI Progress Indicator.

B-Channel Maintenance

B-channel RESTART and Status Enquiry mechanisms are used to synchronize the B-channels of the ESBC and PBX so as to ensure that telephony services are operational.

Figure 98. Span Setting Screen: B-Channel maintenance

B-Channel RESTART
The RESTART message requests a restart (set to idle) of a specified B-channel on the peer device. The response to a successful request is the RESTART ACKNOWLEDGE message. By default, the ESBC triggers B-channel restarts every 60 minutes. The restart requests are performed on idle B-channels only. In addition, when an incoming call is placed and the ISDN cause code returned from the PBX is "channel unavailable (44)" or "circuit congestion (34)", the ESBC immediately directs the
In addition, when an incoming call is placed and the ISDN cause code returned from the PBX is "channel unavailable (44)" or "circuit congestion (34)", the ESBC immediately directs the 127 128 incoming call to the next available channel, and triggers a Restart Message to the PBX in order to reinitialize this B-channel to an idle state. If there are no acknowledgement messages (RESTART ACKNOWLEDGE) received from the PBX, then the ESBC marks these channels with not available for service. STATUS Enquiry The ESBC attempts to continuously monitor the status of active calls by sending STATUS ENQUIRY messages to the PBX periodically, and indicates whether calls are still active from the STATUS messages returned from the PBX. The returned STATUS message contains the call state Information element (IE). On the occurrence of certain procedural errors, both sides of the connection will attempt to re-align call states User Account Assignment to PRI Span Groups Click the <PRI Span Group> tab to assign user accounts to each Span Group. If there are more than one PRI spans (ports), each span can be assigned to a different PRI Span Group to which certain UAs belong. Each span group defines its own PRI span settings and hence a span group can allow a connection to a different TDM PBX. Figure 99. The PRI Span Groups The above example shows span1 is assigned to span group1, and span2 is assigned to span group2. Span group1 contains the default route user account ( ), and hence all numbers that are not configured in the ESBC SIP UA database will be forwarded to span1. (It is also possible that both span1 and span2 are assigned to the same PRI Span group. Please refer to section ) PRI Span Group ID PRI Spans Channel Hunting Scheme Assigned SIP UAs Description ID of PRI Span Group. One Span group may be comprised of multiple (two) PRI spans. A PRI span (port) can only belong to one PRI Span Group. 
Ascending or Descending Displays UAs of each PRI Span Group 128 129 Assigning UAs to a PRI Span Group Click the <Setting> icon of the PRI Span Group under the Action column of Figure 99 to complete the UA assignment task for the specified PRI Span Group. Click Arrow keys ( ) to assign or remove user accounts to/from the particular span group. Refer to Figure Selecting an appropriate B-Channel hunting scheme Choose the appropriate channel hunting scheme (ascending or descending). The use of a different hunting scheme from that of the PBX is suggested. If the PBX uses an ascending channel hunting scheme, then configure the ESBC with descending so as to distribute loading evenly on entire PRI spans. Refer to Figure 100. Figure 100. PRI Span Group Settings 129 130 4.9.5 PRI Media Profile Settings To configure media transmission settings for digital lines, click <Profile Config> button on the PRI Span main page, Figure 94. FAX over IP communications require a high-quality IP network for proper operation. Please refer to section for network connectivity assessment. Figure 101 Media profile configuration for digital lines 130 131 PRI Media Profile Setting Profile ID DTMF mode Description Name of this profile RFC2833 and In-band are supported. Choose the correct mode according to what the sip trunk (service provider) indicates. G.726 packing order There are two types of byte order for G.726, namely RFC3551 and AAL2. With this setting you can choose the byte order in order to use the same order as the remote entity. Gain Control telephone speaker and listen volume. Tx: transmission gain to digital lines (toward the PBX) Rx: receiving gain from the digital lines and sending toward the SIP Trunk side. CODEC Fax To change the priority level of the CODECs, select the CODEC and click the up and down arrows at the bottom-right hand corner. To remove a CODEC, click the Delete icon in the Action column. 
The ESBC supports both T.38 Relay and Pass_Through modes for fax transmission over an IP network.

Parameters for Pass_Through: Fax signals are transmitted in the same way as voice media. Codec used: G.711 (u-law or A-law).

Parameters for T.38 Relay:
Low Speed Redundancy. Default value is 4. Do not change the default value unless necessary.
High Speed Redundancy. Number of redundant T.38 fax packets to be sent for high-speed fax machine image data. Default value is 2. Do not change the default value unless necessary.
Bit Rate. Choose a fax transmission speed to be attempted: 2400, 4800, 9600, or 14400. By choosing 14400, the ESBC can automatically adjust/lower the speed during the transmission training process. The ESBC supports G3 fax.
Max Buffer Size. This option indicates the maximum number of octets that can be stored on the remote device before an overflow occurs.
Packetization Time (p-time). For fax, the period (in ms) after which a UDPTL packet is sent. The default value is 20 ms.

4.9.6 PRI diagnostics

To diagnose the status of the ESBC PRI spans, navigate to Telephony > T1/E1 > Digital Line, and click the <Diagnostics> button. The ESBC supports both Bit Error Rate Test (BERT) and Loop Back Test for PRI trunk lines. Note that the PRI trunk lines are put out of service when entering diagnostic mode.

Bit error rate testing (BERT)

The BERT module tests PRI cables and diagnoses signal problems in the field. BERT generates a specific pattern on the egress data stream of a T1/E1 controller and analyzes the ingress data stream for the same pattern. The bits that do not match the expected pattern are counted as bit errors. Error statistics are displayed in real time during the testing process.

Environmental factors may affect BERT results. In a communication system, the bit error rate of the receiver side may be affected by transmission channel noise, interference, distortion, bit synchronization problems, attenuation due to cable length, etc.
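As an illustration of how BERT generates and checks a pattern, the sketch below produces a maximal-length 2^15-1 pseudo-random sequence with a 15-bit LFSR and counts mismatching bits; the generator polynomial (x^15 + x^14 + 1) and seed are assumptions, not necessarily what the ESBC uses:

```python
def prbs15_bits(n: int, seed: int = 0x7FFF):
    """Generate n bits of a 2^15-1 maximal-length pseudo-random sequence
    (15-bit LFSR, feedback taken from stages 15 and 14)."""
    state, out = seed, []
    for _ in range(n):
        out.append(state & 1)                       # emit the newest bit
        feedback = ((state >> 14) ^ (state >> 13)) & 1
        state = ((state << 1) | feedback) & 0x7FFF  # shift, keep 15 bits
    return out

def bit_errors(expected, received):
    """Count bits that do not match the expected pattern, as the BERT analyzer does."""
    return sum(e != r for e, r in zip(expected, received))
```

The generated sequence repeats every 32,767 bits; the analyzer side regenerates the same sequence and counts mismatches.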
The BERT checks communications between the local and the remote ports.

Figure 102. BERT diagnostics page

Choose which span to test, the target bit-stream pattern, the loopback mode, and the test duration, then press the <Start> button to perform the BERT diagnostics.

Pattern Type
Depending on the particular sequence of bits (i.e., the data pattern) transmitted through a system, different numbers of bit errors may occur. Patterns that contain long strings of consecutive identical digits (CIDs) can stress a system differently from other patterns, so when the BER is tested using dissimilar data patterns, it is possible to get different results. A detailed analysis of pattern-dependent effects is beyond the scope of this document, but it is important to associate a specific data pattern with BER specifications and test results.

Pattern Type -- Description
2^15, 2^20 -- Pseudo-random repeating test pattern that consists of 32,767 (2^15) or 1,048,575 (2^20) bits.
Unframed 2^15, Unframed 2^20 -- Pseudo-random repeating pattern that is 32,767 (2^15) or 1,048,575 (2^20) bits long. The DS-3/E3 framing bits in the DS-3/E3 frame are overwritten when the pattern is inserted into the frame.

Two modes are supported for BERT: Auto and Manual.

BERT Loop Back Mode -- Description
Auto -- The ESBC sends the loopup code to the remote port before starting the BERT and sends the loopdown code after the BERT finishes.
Manual -- The ESBC does not send loopup or loopdown codes to the far-end port. When this mode is selected, you must manually enable loopback at the remote port before you start the BERT.

The ESBC displays the total number of error bits and statistics in real time during the testing process. The number of Bit Errors and the Sync Count may increase over time until the duration timer is reached.

BERT Test Results -- Description
Status -- Synchronized, or Not Synchronized when no signal is received.
Sync Count -- 0: not synced up at all. 1 or more: the number of sync-up events.
Bit Errors -- The count of the total number of bit errors detected after the Status is "Synchronized."

PRI Span LoopBack Diagnostics

This section describes a troubleshooting method known as loopback testing. The ESBC supports three types of loopback testing methods to diagnose clocking and/or line health states. Note that loopback tests are intrusive and impact services.

Figure 103. LoopBack diagnostics page

A common issue in VoIP networks with a digital interface connection to a TDM-PBX is that the ISDN circuit does not come up or stay up. Such issues can be complex because:
Faulty components might reside in several places--for example, within the ESBC or in the TDM-PBX domain.
Multiple components impact the status of the ISDN PRI. The problem could be a mismatched configuration across the PRI lines (which leads to clock slips and line/path violations), a damaged cable, a bad card, or other issues.

Choose the span to test, the loopback testing mode, and the test duration, then press the <Start> button to perform the loopback diagnostics.

LoopBack Testing Mode -- Description
Local -- Tests the inward loopback such that the interface on the ESBC can synchronize on the signal it is sending.
Network Line -- Loops the data back towards the network before the framer chip entering the ESBC.
Network Payload -- Loops the data back towards the network from the T1/E1 framer chip in the ESBC.

Note that the wire used for loopback testing has to be made with special cross-over pin wiring, as illustrated in the following picture.

Figure 104. T1/E1 loopback wiring map

Please refer to the ESBC Application Notes--T1E1 PRI Troubleshooting Guide for further detailed information.

SIP response code / PRI cause code mapping

Navigate to Telephony > T1/E1 > SIP Response Mapping to configure the mapping of SIP response codes to PRI cause codes, and vice versa. Cause codes identify possible reasons for call failures.
The ESBC default mapping tables should already meet most deployment requirements. There is no need to input new records to these two tables unless the PBX or SIP Server specifies proprietary codes. If your network is live, make sure that you understand the potential impact of any configuration changes.

Mapping of a received SIP 4xx-6xx response to an outbound INVITE request

The ESBC follows the guidelines below to disconnect calls.
On receipt of a SIP failure response (4xx-6xx) to an outbound SIP INVITE request, unless the ESBC is able to retry the INVITE request to avoid the problem (e.g., by supplying authentication in the case of a 401 or 407 response), the ESBC transmits a Q.931 DISCONNECT message with the Cause Code value in accordance with the SIP 4xx-6xx response.
On receipt of a SIP BYE request from the IP domain, the ESBC sends a Q.931 DISCONNECT message with cause value 16 (normal call clearing).
On receipt of a SIP CANCEL request to clear a call for which the ESBC has not sent a SIP final response to the received SIP INVITE request, the ESBC sends a Q.931 DISCONNECT message with cause value 16 (normal call clearing).
Refer to ESBC Application Notes: Interworking of SIP Response Codes and ISDN Q.931 Cause Codes.

Figure 105. Configuring special SIP response to PRI cause code mapping records

SIP Response to PRI Cause Code -- Description
No -- Record number.
Received SIP Response Code -- Must specify a code within the range that denotes SIP trunk side errors.
Transmitted PRI Cause Code -- Must specify a code within the range that denotes PRI Q.931 errors.

Mapping of a received PRI cause code to a SIP response

The ESBC follows the guidelines below to disconnect calls. If the ESBC has received a SIP INVITE request but has not sent a SIP final response, the ESBC sends a SIP response according to the cause code value in the received Q.931 DISCONNECT message from the PBX. Refer to ESBC Application Notes: Interworking of SIP Response Codes and ISDN Q.931 Cause Codes.
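A minimal sketch of the lookup order implied above for inbound Q.931 cause codes: configured mapping records take precedence, then a built-in default table, then a generic failure response. The sample default entries follow common RFC 3398-style interworking and are illustrative, not the ESBC's actual table:

```python
# Illustrative defaults (RFC 3398-style); the ESBC's built-in table may differ.
DEFAULT_CAUSE_TO_SIP = {
    1: "404 Not Found",                 # unallocated (unassigned) number
    17: "486 Busy Here",                # user busy
    31: "480 Temporarily Unavailable",  # normal, unspecified
}

def sip_response_for(cause: int, configured=None) -> str:
    """Configured mapping records win; unmapped cause values fall back to
    '500 Server internal error', as stated for unlisted Q.931 causes."""
    configured = configured or {}
    if cause in configured:
        return configured[cause]
    return DEFAULT_CAUSE_TO_SIP.get(cause, "500 Server internal error")
```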
If a Q.931 cause value is listed neither in the default mapping nor in the configurable mapping, the default response "500 Server internal error" is used.

Figure 106. The PRI cause code mapping to SIP response codes

PRI Cause Code to SIP Response: Description
- No: Record number.
- Received PRI Cause Code: Must specify digits within the range that denotes PRI Q.931 errors.
- Transmitted SIP Response Code: Must specify digits within the range that denotes SIP trunk side errors.

5 Hosted Voice Service

5.1 ESBC SIP-ALG Module Features and Benefits

The ESBC SIP ALG functionality supports hosted voice services, allowing enterprises to obtain full-featured IP PBX solutions without the cost of purchasing a PBX or a key system. While provisioning and delivering scalable voice features to enterprise SIP phones, the ESBC SIP ALG allows voice traffic to flow both from the enterprise to service provider networks and vice versa, which enables the service provider to deploy hosted voice services to enterprises seamlessly.

Figure 107. Hosted Voice Service delivered by the ESBC SIP-ALG module

Serving as a proxy, the ESBC ALG operations include (but are not limited to):

- Solving the VoIP routing issues caused by the introduction of a NAT in the enterprise network. If a reply to a SIP message originating from the service provider network uses an IP address that is local to the enterprise network, it cannot be routed properly. The ESBC corrects this by inspecting traffic and rewriting information within SIP messages (SIP headers and SDP body) to ensure that the signaling and media traffic communicate correctly, and it can hold an address:host binding until the session terminates.
- Allowing the SIP phones and SIP Server (the host PBX) to use dynamic UDP ports to communicate with the known ports used by the service provider and SIP phones.
Without the ESBC, the ports would either get blocked, or the network administrator would need to open a large number of pinholes in the firewall, leaving the network vulnerable to attacks.
- Solving the interoperability issues which may appear between the enterprise SIP phones and the hosted voice service provider. The ESBC corrects these compatibility issues by normalizing SIP messages.
- Providing security to the enterprise voice network. The ESBC protects against toll fraud and provides the needed security and privacy for the connection, using IP layer protection, ACLs, and the SIP firewall.
- Constantly monitoring voice quality and providing statistics to help diagnose problems.
- Monitoring dynamic SIP phone registration status for accounting and usage status management.

5.2 Configuring SIP Phones for Hosted Services via the ESBC

Follow the steps below to allow SIP devices (phones or gateways) on the enterprise network to register and obtain voice service from the service provider via the ESBC.

Configuring the SIP phones on the ESBC LAN (NAT and Voice Port(s))

Arrange for the SIP devices to be located in the same network as the ESBC VoIP port(s). If the SIP devices are configured as DHCP clients and the ESBC LAN port is configured to offer DHCP server functionality, the SIP devices may obtain an IP address from the ESBC. If the SIP devices are configured with fixed IPs, it is necessary to have the default gateway of the SIP devices point to the ESBC LAN IP address. Apart from the IP addresses configured on the SIP devices being on the same network as the ESBC LAN, the registering SIP server and other service configurations of the SIP devices should point to the service provider network.

Configuring the ESBC SIP ALG for Hosted Voice Service

Enabling the ESBC SIP ALG service is straightforward. Navigate to the Telephony > SIP ALG > Setting page.

Figure 108.
Configuring the ESBC SIP-ALG module

Field Name: Description
- Enable: Check this item to enable the SIP ALG service.
- RTP Timeout: A media inactivity timer. When there are no RTP packets associated with a particular connection for longer than this timer, the ESBC drops the connection.
- SIP Expired Time: If the registration status of a particular SIP UA becomes stale and exceeds this configured timer, the ESBC removes it from the registration list. (See Figure 110.)
- Modify the User part of the Contact header in outgoing messages: The User part of Contact headers usually refers to the SIP accounts. Do not change this default setting unless necessary.
- Modify the host and port part of the Contact header to the ESBC's IP and port in outgoing messages: The ESBC SIP ALG module by default inspects SIP messages and rewrites them (SIP headers and SDP body) for NAT traversal.

5.3 FQDN to IP: Static Mapping

When there is a need to configure unresolved domain names for SIP devices, or occasionally when the FQDNs configured on the SIP devices are not resolved by the DNS servers configured on the ESBC, the ESBC may be configured to statically map SIP domain names to IP addresses and route calls to the designated service provider networks. The FQDNs may be included in the Request-URI, Via header, Contact header, Route header, etc.

When the ESBC resolves a name, it first checks the static records, then the system DNS cache, and finally, if the name is still unresolved, the ESBC performs a DNS query. The ESBC caches DNS results for 10 minutes. The ESBC resolves and routes SIP messages to the service provider network according to the SIP URI precedence: outbound proxy > Route header > Request-URI.

Navigate to the Telephony > SIP ALG > Setting page.

Figure 109. The static FQDN IP mapping table

Outbound proxy mapping:
- SIP Domain: The SIP domain for which the ESBC shall query when sending SIP request messages.
- IP address[:port]:
The IP address (and port number) associated with the SIP Domain. If the port number is not specified, the ESBC uses 5080 as the default SIP ALG communication port.

DNS Static Records:
- Name: The FQDNs included in the SIP headers.
- IP Address: The IP address associated with the FQDN of the same record.

5.4 List of Active Devices for Hosted Service

The ESBC SIP ALG module records all active SIP devices which register to the service provider network. When the registration of a particular device becomes stale (see Figure 108), the ESBC removes it from the list.

Navigate to the Telephony > SIP ALG > Status page.

Figure 110. The registration status table of ESBC LAN SIP devices

SIP ALG Client Status: Description
- AOR: The Address of Record is usually thought of as the public address of the user. It is composed of a user-part (e.g., ) and a host-part (e.g., sip-kama.net).
- Contact: The Contact header of the SIP device. The ESBC uses the ip:port to reach this SIP device.
- From: The ip:port of this SIP device.
- Expires Time: The registration expiration date and time.

6 OAM&P, Security and Fraud Protection

6.1 User Account Configurations

To add or modify user privileges for access to the ESBC console, navigate to System > Administrator. To modify the attributes of existing users, click the <Setting> icon under the Action column. To add a user, click the <Add> button. Note that the user ID admin is the default administrator ID and cannot be deleted from the system.

Figure 111. User account administrative page

Figure 112. Adding or modifying a user account's attributes

Account Setting: Description
- Grant Level: Three levels of user accounts. Admin: access full configurations of the system. Technician: access configurations for installing the ESBC at the enterprise. Operator: access configurations for connecting the ESBC to the PBX. See Chapter 8, Installers and Operators, for detailed descriptions.
- Allow Access from:
Access the management console, including WEB and CLI, via one of the following three interfaces: WAN&LAN, LAN, or WAN.
- Read Only: View configurations only. Applicable to Operator and Technician account types.

6.2 System Time

To deploy voice services in the field, it is often necessary to have all related devices synchronize with a precise timing mechanism. The ESBC can be configured to synchronize its current time with specified network time servers, or to obtain time information from the connected administrative console, i.e., the computer accessing the admin WEB console. If there is no synchronization source, the ESBC uses the Linux native time, which is Jan 01,

Navigate to System > System Time.

Figure 113. Configuring the ESBC system time

Item: Description
- Local Time: Local time information is based on the Time Zone specified on this page. The ESBC sends standard UTC time information to OAM&P servers, e.g., the SIP server or SNMP server.
- Time Zone: Select the time zone where the ESBC is physically deployed. Enable or disable the Daylight Saving Time (DST) option. If DST is enabled, choose Moving Date or Fixed Date for the starting and ending dates of DST. In North America, the start day is usually the second Sunday in March and the end day is the first Sunday in November, hence Moving Date should be selected. Offset: the offset between DST and normal time.
- SNTP Client: Configure the ESBC to synchronize time with network time servers (SNTP servers). Enable the SNTP Client to synchronize time with the SNTP server. Input the FQDN or IP addresses of the target SNTP servers (primary and secondary). Synchronization Interval (default is 2 hours). The ESBC displays the synchronization status.
- Synchronization with your computer's time: When the SNTP Client is unchecked (disabled), the ESBC may synchronize time information with the management computer (the device running the web console). Click this button to trigger time sync immediately.
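For orientation, the sketch below shows what an SNTP exchange of the kind the ESBC's SNTP client performs looks like at the packet level. The server name is an example, and the parsing helper only extracts the transmit-timestamp seconds field; this is a minimal illustration, not the ESBC's implementation.

```python
import socket
import struct

NTP_UNIX_OFFSET = 2208988800  # seconds between the NTP epoch (1900) and Unix epoch (1970)

def parse_sntp_reply(data: bytes) -> int:
    """Extract the transmit timestamp (bytes 40-43) and convert to Unix time."""
    secs = struct.unpack("!I", data[40:44])[0]
    return secs - NTP_UNIX_OFFSET

def query_sntp(server: str = "pool.ntp.org", timeout: float = 2.0) -> int:
    """Send a minimal SNTP client request (mode 3) over UDP port 123."""
    request = b"\x1b" + 47 * b"\x00"  # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, (server, 123))
        reply, _ = sock.recvfrom(48)
    return parse_sntp_reply(reply)

# Offline demonstration with a canned 48-byte reply (transmit timestamp only):
fake_reply = bytes(40) + struct.pack("!I", NTP_UNIX_OFFSET + 1_700_000_000) + bytes(4)
print(parse_sntp_reply(fake_reply))  # -> 1700000000
```

A periodic scheduler around `query_sntp` (every 2 hours by default, per the table above) would mirror the Synchronization Interval behavior.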
6.3 Management Control

The management control function allows the service provider to shut down voice services temporarily for maintenance purposes, and then restart these services automatically at a prescheduled time or under certain conditions.

Navigate to the System > Management Control page.

Figure 114. Administrative and Operational State Management

Host: Description
- Current Time Display: The current system time is displayed at the upper right corner.
- Operational State of Host: Displays the current state: Operational or Not Operational. If the ESBC is in the Not Operational state, some condition exists which prevents the ESBC from providing Business Voice service. For example, it could be administratively out-of-service, missing some critical configuration data, or experiencing other physical issues which block its ability to provide service.
- Schedule: Transition to the out-of-service state at a predefined time. There are options available to shut down the service only under configured conditions. Transition to in-service at a predefined time.
- Actions: Out-of-Service immediately: have the ESBC enter the maintenance state NOW, with no conditions. Out-of-Service when idle: have the ESBC enter the maintenance state automatically when there are no active calls. In-service immediately: have the ESBC enter service mode NOW.

Enterprise SIP Entity: Description
- Operational State of Enterprise SIP Entity: Displays the current state of the connected IP PBX (or SIP Entity): Operational or Not Operational. If the IP PBX is in the Not Operational state, some condition exists which prevents the IP PBX from providing Business Voice service. For example, it could be administratively out-of-service, missing some critical configuration data, or experiencing other physical issues which block its ability to provide service.
- Determine its operational state:
Select the desired method for the ESBC to determine whether the SIP Entity is operating properly. Based on registration state: applicable if the SIP Entity uses SIP REGISTER to connect to the ESBC (as opposed to the Static operational mode). By using SIP OPTIONS ping: this SIP method is used to send keepalive messages, and is applicable when the SIP Entity supports SIP OPTIONS pings.

6.4 Maintenance

The ESBC maintenance features are used to change the system status. Navigate to the System > Maintenance page.

Figure 115. System Maintenance: Reboot, Restore Factory Default, Restore WAN MAC Address

The comments for each function displayed on this page are self-explanatory.

Item: Description
- Reboot: Performs a soft-reboot process.
- Restore Factory Default: Clears all updates and restores the unit to its default values. It is recommended that the config be backed up (Export XML or Binary) before performing this task.
- Restore WAN MAC Address: Restores the ESBC WAN MAC address to its factory value. Use this command only when the WAN MAC has been cloned.

6.4.2 Firmware Update / Rollback Software

Item: Description
- Firmware Update: Either upgrade or downgrade the ESBC's currently running firmware to the target version. Note: 1. The ESBC always updates the firmware on the backup partition and migrates the backup database to the current database. Once the ESBC updates successfully, the system reboots and the partitions swap. 2. When auto-provisioning is enabled, the firmware update button is greyed out on the WEB GUI.
- Rollback Software: The rollback function allows the backup partition image and database to become active. With the rollback function, the database is not migrated; the ESBC simply swaps partitions.

Import XML or Binary Config / Export XML or Binary Config

Item: Description
- Export XML Config: Manually backs up the ESBC database to an external file in XML format. Files can be edited after export.
Use the ESBC provisioning tags to assign appropriate values. XML files are firmware version independent: if the currently running firmware version does not recognize a provisioning tag, the ESBC simply ignores it.
- Export Binary Config: Binary format. Files include the complete database of the ESBC for the current partition. Binary files are firmware dependent; they can be imported only to a system running the same firmware version as was used during the export. The file is read-only and ensures data integrity with the system. Note that manual backup can be used together with the auto backup function; see section 6.5 for a detailed description.
- Import XML Config / Import Binary Config: Manually restore an ESBC configuration file to the current system. Note: When auto-provisioning is enabled, the Import buttons are greyed out. Importing a Binary Config requires a matching firmware version. During import of an XML Config, since it is version dependent, some parameters may be ignored if the target firmware version does not support them.

6.5 Auto Backup of the System Configuration

The ESBC system configuration can be backed up to an external FTP server automatically and periodically on a schedule. If Auto Backup fails, the ESBC logs the event to the Audit Log.

Navigate to System > Auto Backup.

Figure 116. Auto backup the ESBC Configuration to an FTP Server

Item: Description
- Enabled: Check this box to enable the FTP auto backup.
- FTP Server: The FTP server IP address or FQDN.
- Port: Communication port for the FTP protocol. The default port number is 21.
- Username: Enter the FTP username provided by the FTP server administrator.
- Password: Enter the FTP password provided by the FTP server administrator.
- File Path: Enter the FTP server path to which the ESBC uploads the config file.
- File Name: Enter a file name designated for the ESBC config file.
- Retry Times: FTP server connection retry times.
- Backup Frequency: Frequency: Every Day, Every Week, or Every Month. Time Range:
The auto backup procedure is activated at any time within the specified clock hour.
- Test Backup: Click to back up NOW.

Note that the ESBC does not perform the auto backup procedure if one of the following conditions occurs:

- Every Day is selected, and the current hour is behind the scheduled hour.
- Every Month is selected, and the current date is behind the scheduled date.
- The previous backup event happened less than one hour (3,600 seconds) before the current time.
- The ESBC boot-up time is within the scheduled auto backup hour.

6.6 Battery Status

The ESBC unit comes with a built-in smart battery to continue voice service in case a power outage event happens. Navigate to System > UPS.

Figure 117. The UPS-Battery status page

6.7 Call History and Logs

Call History Settings

In order for the ESBC to record each call detail record, navigate to Telephony > TOOLS > Call History > Setting. Check all desired call types to enable call history for those calls.

Figure 118. Call History Setting Options

Call History Setting: Description
- Log Call History: B2BUA: check this item to enable Call History Records for SIP Trunk Service calls. SIPALG: check this item to enable Call History Records for Hosted Service calls.
- VQM: check this item to enable voice quality measurement (R-Factor, MOS calculation) for the selected call types (B2BUA and/or SIP ALG).

Note: Enabling Log Call History is needed for the Voice Quality calculation and display. See section 6.8 for details.

Call History Record

The ESBC records all calls through the system, if configured to do so. The Call History page displays calls with various filtering criteria: Call Type, Caller ID, Callee ID, and Dates. The CDRs can be exported to an external CSV file for accounting use.

Navigate to Telephony > TOOLS > Call History.

Note: It is necessary to enable Log Call History in order for the ESBC to calculate and display Voice Quality Measurement information (see section 6.7.1).

Figure 119.
Call history records (list view)

As the mouse points to any MOS score area, the associated voice metrics statistics are displayed.

Call History: Description
- Caller MOS / Callee MOS: Based on IP network factors, the ESBC calculates R-factor and MOS scores for each call. Hence, only IP connections can generate VQM results, and MOS scores are displayed for both parties of IP connections. No MOS scores are available for PRI connections.
- Search: Filter and display records according to the various inquiry criteria.
- Export: Export call history records to a text-based CSV file.

Figure 120. Call History (Chart View)

6.7.3 VQM (Voice Quality Measurement)

Call statistics are gathered at the end of each call based on packets received and sent by the ESBC during the call.

Figure 121. Call History (Voice Quality details)

VQM Factors: Description
- Time: The time and date of the end-of-call reporting.
- PRI, SETA, SIP-ALG, B2BUA; trace-id: Category of call and internal trace number (SETA: SIP End Point Test Agent).
- Start-time; end-time: Internal time stamps of a session: start and end.
- Call-side: The calling party, either from the WAN or LAN side.
- Direction: OUTBOUND or INBOUND call.
- qos-type: BE: best effort; UGS: cable modem guaranteed service flow.
- codec-type: Audio codec types.
- NLR: Network packet Loss Rate.
- RTD: Round-trip delay.
- IAJ: Inter-arrival jitter.
- amos: Average MOS.
- mmos: Minimum MOS.
- afactor: Average R-factor.
- mfactor: Minimum R-factor.
- CALLID: Call ID in the SIP message.
- LocalID: The ESBC user account in SIP URI format.
- RemoteID: The remote user account in SIP URI format.
- OrigID: The caller user account in SIP URI format.
- LocalAddr:PORT: The ESBC WAN interface IP address and RTP port.
- RemoteAddr:PORT: The remote SIP entity IP address and RTP port.
- SSRC: The synchronization source identifier uniquely identifies the source of a stream. The synchronization sources within the same RTP session will be unique.
- Remote Group/Local Group:
The user accounts of the remote party and of the ESBC.

6.8 Voice Quality Measurement and SLA Assurance

Rating Factor (R-Factor) and Mean Opinion Score (MOS) are two commonly used measurements of overall VoIP call quality. The ESBC employs the R-factor to evaluate and rate the quality of telephony voice traffic and translates it to MOS values. The voice quality performance of each call over time is calculated from the RTP traffic to/from the service provider side on the ESBC WAN interface, and on the ESBC LAN interface with connected SIP devices, for both SIP Trunk (B2BUA) and Hosted (SIP-ALG) services.

R-Factor: The R-Factor provides a powerful and repeatable way to assess whether a data network is capable of carrying VoIP calls with high quality. A value is derived from network factors such as latency, jitter, and packet loss per ITU-T Recommendations. Typical scores range from 50 (poor) to over 90 (best quality).

MOS: Subjective MOS scores are gathered through exhaustive tests with large groups of human listeners who listen to audio and give their opinion of the call quality. The ITU-T P.800 series of Recommendations describes how these tests are conducted and the types of scores that are produced.

There are strong correlations between the R-Factor and MOS, and the mapping between R-factor and MOS-CQE scores is described in ITU-T standard G. The ESBC calculates the voice quality metric R-factor and uses this mapping to translate it to a MOS value.

To configure the Voice Quality parameters and view statistics, navigate to Telephony > TOOLS > Voice Quality.

Voice Quality Parameter Basic Configuration

Figure 122.
Voice Quality Parameter Basic Configuration

Note: In order to display the Voice Quality Chart and SLA information, it is necessary to enable the Call History feature, which is described in section 6.7.1.

R-Factor and MOS: Description
- Enable R-Factor and MOS scoring: Check the option box to enable R-factor and MOS scoring calculations for calls. Calls in both SIP Trunk services and SIP Hosted (ALG) modes are supported.
- Send Voice Quality Information to Syslog Server: Enter the syslog server IP address to which the ESBC sends VQM (voice quality measurement) statistics for each call. A redundant syslog server feature is supported. If the InnoMedia EMS is deployed with the ESBC, enter the IP address of the EMS server here.
- SIP PUBLISH: Enable this feature and enter the Telemetry Collector URI to allow statistics to be carried in SIP PUBLISH messages.
- Measuring and calculating interval: R-factor calculation interval in seconds. The range is from 5 to 120 seconds.
- Traps threshold: Enable the ESBC to send SNMP traps to an SNMP server if the voice quality is considered poor. Please note that the Send SNMP Trap Alarm and Alarm features must be enabled (see section 2.7, and section ).

Voice quality statistics line chart

Figure 123. Voice Quality Statistics Line Chart

Point the mouse at any spot on the chart to display the related VQM parameters. Figure 123 shows that the WAN-side Packet Loss Ratio of 45.5% is most likely the factor which results in a low MOS value for this call leg.

R-Factor and MOS: Description
- WAN to ESBC: The RTP (media) packets coming from the WAN side to the ESBC.
- LAN to ESBC: The RTP (media) packets coming from the LAN side to the ESBC.

SLA (Service Level Agreement) Parameters

The SLA page provides a high-level view of overall performance for quality of experience.

Figure 124. Overall quality of experience display

Advanced Settings

The values in this table are used as some of the input parameters to the R-factor calculation.
Do not change these values unless instructed to do so.

Figure 125. R-Factor Parameter Setting

R-Factor and MOS: Description
- G.711 with PLC: If this option is checked, the ESBC calculates the R-factor as if the remote endpoint supports PLC with G.711.
- Jitter Buffer Nominal Delay / Jitter Buffer Maximum Delay: A jitter buffer holds datagrams at the receiving side. If a nominal delay setting of x ms is used, the first voice sample received is held for x ms before it is played out. The maximum delay is used as an input parameter to the R-factor calculation.
- RTT (Round Trip Time): If the RTT cannot be collected by the ESBC, this configuration value can be used as a default RTT to calculate the R-factor.
- End-To-End Delay Alarm Threshold: If the endpoint-to-endpoint delay is greater than the configured value, the ESBC considers the delay excessive and sends out an SNMP trap alarm.
- Specify how many occurrences of poor MOS values will trigger the ESBC to send out an SNMP trap alarm.

6.9 Alert Notification

The ESBC sends SNMP traps and/or alerts to the specified destination(s) described in sections 2.7 and 2.8 when any of the following alerting events occur.

SNMP Trap Alarms

Navigate to System > Alert Notification.

Figure 126. SNMP Trap Alarm Configuration

SNMP Traps: Description
- Enabled: Check this box to process any of the selected traps.
- Poor Voice Quality: When one of the voice quality levels (MOS, one-way delay, or packet loss) exceeds the specified threshold (see Section 6.8).
- SIP Registration Failure: When one or more SIP user accounts fail to register to the proxy server (see Section 4.2).
- Failed Login Attempts: Number of failed login attempts to the ESBC WEB console or SSH connections. (The ESBC Audit Log logs all login attempt information, including date-time, user name, and source IP; see Section .)
- Battery Status: When the battery is low, missing, or its status changes.
- PRI Alarm: When a PRI alarm happens (e.g., D channel down, or any red/yellow alarms).
Please refer to the ESBC 9x80 PRI SNMP and MIBs document for details.
- Emergency Call: When there is an emergency call from a PBX subscriber (see Section 4.7).
- Provisioning Failure or Success: When there are provisioning events (see Section 2.10).
- Number of concurrent calls reaches its maximum: When the number of calls reaches the licensed number, the ESBC sends out a trap (see Section 2.2).
- Operational State: When the operational state of the ESBC changes (see Section 6.3).
- LAN Interface Down: When the LAN interface is not accessible for management access, e.g., the data link layer is down, the connection to the attached switch is lost, or the cable is unplugged (see Section ).
- Failed Time Server Synchronization: When the ESBC loses connection with the SNTP server and fails to synchronize the system time (see Section 6.1).

Alarms

Figure 127. Configuring Notification Items

Please refer to Section for descriptions of the associated alerts.

6.10 Security

System Access Control: Basic

Navigate to System > Access Control > Basic.

Figure 128. System Access Control -- Basic

System Access Control - Basic: Description
- SSH Session Timeout: Default: 10 minutes. If there is no action on the SSH console, the ESBC automatically closes the connection.
- Enable SSH to WAN Interface: The ESBC, by default, does not allow SSH access via the WAN interface for security purposes.
- Web Admin Records per Page: The number of records displayed per page on the Audit Log, SIP Firewall Log, or other logging tables. Default: 12 records per page.
- Auto Refresh Interval: The interval at which the ESBC WEB GUI refreshes the current system status. Default: 3 seconds.
- Auto Logout Duration: If there is no action on the WEB GUI, the ESBC automatically logs the user out of the WEB console.
- Enable access via WAN interface: Access the WEB GUI via the WAN. The ESBC, by default, disables access via the WAN interface for security purposes. Change the access port when necessary.
Default port: ()
- Only HTTPS for access via WAN Interface: When this item is enabled, the ESBC always switches the access protocol to HTTPS. (e.g., and/or)

IP Layer Protection: Access Control List

The use of an ACL (Access Control List) is recommended to protect the ESBC on the specified interfaces from undesired access attempts, scanning, etc. The ACL rules for the WAN and LAN interfaces are processed independently; that is, if a rule is configured for the WAN, it applies to traffic on the WAN interface only. Traffic that comes into the ESBC is compared to the ACL rules based on their order in the list. The ESBC continues to match the packet against the rules until it finds a match. If no match is found, the traffic is dropped. In other words, if the ACL feature is enabled, there is an implicit drop rule that blocks packets that do not match any rule for that interface. For detailed configuration guidelines, please refer to the ESBC Application Notes: ACL Configurations.

Navigate to System > Access Control > ACL.

Figure 129. The ESBC Access Control List (IP Layer Protection)

ACL: Description
- Enable: Special note: When the ACL feature is enabled, there is an implicit drop rule that blocks packets that do not match any rule for that interface.
- No.: Sequential number of the rule.
- Interface: Apply ACLs to the WAN or LAN.
- Protocol: TCP, UDP, or TCP+UDP.
- Source/Mask: Source IP or Network/Mask (e.g., / , / , / , or /24).
- Starting port / Ending port: Service port, indicating the TCP or UDP port numbers. A service port range is supported.
- Action: Permit, Deny, or Drop. Deny means reject a request; Drop means no response to a request.

SIP Layer Protection: SIP Firewall Rules

The ESBC SIP firewall rules (SFW) enable the operator to design and select predefined rule-sets that define all messages to be examined by the ESBC. SFW is script-based, and follows the same structure and syntax as SIP Header Manipulation Rules (SHMR).
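Both the IP-layer ACL above and the script-based SIP firewall evaluate rules in order and fall back to a default disposition. The sketch below models that first-match logic with invented rule values; it is not the ESBC's rule syntax.

```python
import ipaddress

# Invented rules mirroring the ACL columns: interface, protocol,
# source network, service port range, action.
RULES = [
    ("WAN", "UDP", "203.0.113.0/24", (5060, 5061), "Permit"),  # example trusted SIP peer
    ("WAN", "TCP", "0.0.0.0/0",      (22, 22),     "Deny"),    # reject SSH probes
]

def evaluate(interface: str, proto: str, src_ip: str, port: int) -> str:
    """First matching rule wins; no match means the implicit drop."""
    addr = ipaddress.ip_address(src_ip)
    for rule_if, rule_proto, net, (lo, hi), action in RULES:
        if (rule_if == interface and rule_proto == proto
                and addr in ipaddress.ip_network(net) and lo <= port <= hi):
            return action
    return "Drop"  # implicit drop when the ACL feature is enabled

print(evaluate("WAN", "UDP", "203.0.113.9", 5060))   # -> Permit
print(evaluate("WAN", "UDP", "198.51.100.1", 5060))  # -> Drop (implicit)
```

The SIP firewall applies the same ordered-evaluation idea one layer up, matching on SIP-level attributes such as method and direction before choosing a disposition.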
SFW first filters all traffic according to the firewall rules (if a firewall rule script is applied) before handling the resulting traffic that needs to be processed. Firewall rules can be applied independently on the following interfaces:

- The ESBC WAN interface
- The ESBC LAN interface (only for NAT-Voice ports)

SIP firewall rules are presented in a structured manner within a script file which can be imported into the ESBC for the applicable LAN or WAN interface.

For SIP Trunk telephony services: Navigate to Telephony > ADVANCED > Firewall.
For Hosted telephony services: Navigate to Telephony > SIP ALG > Firewall.

Please refer to ESBC Firewall Rules for instructions on composing firewall rules.

Figure 130. ESBC SIP firewall rules: importing scripts

SIP Firewall Rules allow administrators to define the following categories:

- IP, IP subnet, and port
- From and To directions
- SIP method
- Black or white lists

For any access attempt which matches the SIP firewall rules, the ESBC processes the attempt according to the defined actions (e.g., disposition events):

- fw-accept: allow the messages to pass through the ESBC.
- fw-drop: discard the messages.
- fw-reject: reject incoming SIP messages and reply with a SIP error response code.
- sip-manip: firewall rule-set manipulation.

All actions taken by the ESBC SIP firewall rules are recorded in the Firewall Log (see section ).

SIP Firewall Logs

All access attempts which match the ESBC SIP firewall rules are logged.

For SIP Trunk telephony services: Navigate to Telephony > ADVANCED > Firewall > Log.
For Hosted telephony services: Navigate to Telephony > SIP ALG > Firewall > Log.

The administrator can search for sources of attack according to the following recorded items for each access attempt:
Date & Time, Protocol, SIP Identity, Source IP, Destination IP, Source Port, Destination Port, Message Type, Disposition Event, Reason

SIP Message Domain/IP Examination to Prevent Attack or Fraud

The ESBC can be configured to block incoming SIP messages on both the LAN and WAN interfaces (or either one) by examining their originating IPs or domains.

Fraud from the LAN interface

An INVITE from an unregistered LAN-side rogue CPE can be examined and/or blocked from initiating an outbound call. Navigate to Telephony > SIP-PBX > PBX SIP Profile. Choose the relevant target PBX profile for the LAN SIP PBX or SIP user agents. See section for a detailed description.

Fraud or attack from the WAN interface

The IP or domain of incoming SIP messages will be examined and/or blocked to prevent spoofed source IPs, SIP attacks, or fraud from the WAN. The ESBC blocks all REGISTER attempts from the WAN interface. Navigate to Telephony > SIP TRUNKS > Trunk SIP Profile. Choose the relevant target trunk SIP profile for the SIP server in the service provider network. See section for detailed descriptions.

Audit Logs

The ESBC logs major operations issued by the system administrator and events triggered by the system. Operations include:

- Login/Logout
- Importing/Exporting Configurations
- WAN/LAN interface settings
- Provisioning settings
- Firmware updates
- DMS/EMS settings
- Maintenance commands
- T1/E1 D-Channel up/down
- etc.

Navigate to System > Audit Log to view or export audit log records.

Figure 131. The ESBC Audit Log page

6.11 System Information

Navigate to the System > Information page to view system information for the current ESBC unit.

Figure 132. The ESBC system information page

7 Diagnosis

7.1 Test Calls

Navigate to Telephony > TOOLS > Test Agent. Test calls are used to verify successful registration from the SIP Trunk interface to the service provider network.
Enter the telephone number to be called (for example, the technician's cell phone number) to complete the test. The called number will be sent a .wav file and a series of tones for approximately 60 seconds. See Test Agent (section 4.3) for detailed configuration.
7.2 Syslog
Debugging syslog
The debugging syslog is used for debugging system issues or for interoperating with the network or other equipment, e.g., a SIP server, a PBX, or any other device. The debugging syslog is disabled by default. It is designed for debugging purposes only; uncheck all options by choosing None during normal operation. Navigate to the System > Syslog > Debugging page.
Figure 133. Debugging Syslog Configuration
The debugging syslog messages can be output to:
- Local: the local flash memory. If the number of records exceeds 5000, new records overwrite the earliest records. Click the Message tab to view and/or export the syslog messages saved at the local storage location.
- An external syslog server: enter its IP address.
Debugging syslog messages are categorized as follows: Kernel, System and Network, B2BUA, SIP ALG, and PRI. Check or uncheck the related features for sending out debugging syslog messages.
Operational syslog
Operators can determine system operational states and/or special incidents happening on deployed ESBC units by making use of the operational syslog service. Navigate to the System > Syslog > Operation page.
Figure 134. Enabling the operational syslog server
When a valid operational syslog server IP address is entered, the ESBC sends syslog messages to the server once any of the following events occurs.
Operational Syslog Events:
- Network Interface Link Status: Internet and LAN network up or network down.
- DNS query: Success or Failure.
- Send REGISTER: Success/Failure (Received 480 Temporarily Unavailable, Timer F expired).
- Send DEREGISTER.
- Receiving network-initiated deregistration (REGISTER UA).
- Send SUBSCRIBE: Success/Failure (including initial SUBSCRIBE and refresh).
- Update UA configure: Success/failure.
- LAN side PBX REGISTER / LAN side PBX DEREGISTER: Success/failure.
- Bootup NTP server sync: ESBC NTP synced.
- Resolve NTP server DNS name: Failure.
- Proxy discovery: Failure.
- INVITE to the Server: Failure (with Timer B expiring, or other causes).
- Check AOR of Notify: Warning. All AORs in the NOTIFY message of reg-event do not match the UA.
- Sending version information.
- Receive INVITE from other Proxy or address.
7.3 Call Trace
The built-in ESBC call trace capability can be used to capture and store SIP and PRI signaling traces. The <Tracing> utility displays call signal traces in a ladder diagram format, and the <Capture> utility captures packets of all calls during the recording period, including SIP, RTP, and ISDN Q.921 and Q.931 packets.
Tracing - Ladder diagram
Navigate to Telephony > TOOLS > Call Trace.
Figure 135. The ESBC Account List
Select target User IDs and track SIP and PRI (Q.931) signals for all connections.
- Log Register Msg: display SIP flows for SIP UA REGISTER exchanges with the SIP Server.
- Log NonRegister Msg: display SIP and PRI flows for all call attempts.
Click the <Show Call Trace> button to display information for the selected User ID account.
Figure 136. SIP and PRI signal trace of the selected account and call
Packet capture
The ESBC can capture packets, including signaling and voice packets, for live calls. Recorded files can be opened by Wireshark or other utilities. The ESBC built-in capture tool is capable of capturing packets for both LAN and WAN, in ingress and egress directions, and outputs them to a single file. This feature greatly aids the investigation of telephony-related interoperability issues. Packet types include the following.
- Signaling: SIP signaling (for both WAN and LAN)
- Signaling: ISDN (Q.931, Q.921)
- Media: RTP packets
Captured files can be uploaded directly to a remote FTP server, local storage, or external USB storage. Navigate to Telephony > TOOLS > Call Trace > Capture.
Figure 137. Packet Capture
Capture settings:
- Trace Filter: Voice (both SIP and RTP packets), Signaling (SIP packets), or Media (RTP).
- Capture PRI: Enable this item when it is necessary to capture ISDN Q.931 and Q.921 signals in call flows (note that you must use q931 or q921 in the filtering box within Wireshark).
- FTP Server: Enter the IP address or FQDN of the target FTP server.
- Username/Password: The username and password to access the FTP server.
- File Path: The path for the ESBC to upload captured files.
- Interval: In seconds (0 = No Limit). A new pcap file will be created after this specified duration, which must be more than 30 seconds.
- Size: In Kbytes. A new pcap file will be created after the current file size exceeds this specified size.
- Time Limit: The capturing duration in seconds (0 = No Limit).
Storage:
- Internal: Store captured files to the ESBC internal flash memory.
- External: If a USB flash disk is inserted, an external storage space can be specified for the captured files.
7.4 Network diagnostic utilities
The network administrator can use the test tools on the ESBC WEB console to verify the connectivity of the ESBC system and trace the path of data through the network. Navigate to Network > Advanced > Diagnostics.
Figure 138. The network diagnosis utilities
Ping Test
Ping is the most common test used to verify basic connectivity to a networking device. Successful ping test results indicate that both physical and virtual path connections exist between the system and the test IP address.
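When scripting a similar connectivity check from an external management host, the same ideas can be sketched with the Python standard library. This is an illustration only, not an ESBC interface; because ICMP ping needs raw-socket privileges, a TCP connect probe stands in for it, and the helper names are invented for the sketch.

```python
import socket

def resolve(host):
    """Return the sorted IPv4 addresses for a host name (an nslookup-style
    forward lookup using the system's standard resolver)."""
    infos = socket.getaddrinfo(host, None, family=socket.AF_INET)
    return sorted({info[4][0] for info in infos})

def tcp_reachable(host, port, timeout=2.0):
    """Return True when a TCP connection to host:port succeeds within the
    timeout. Unlike ICMP ping this needs no raw-socket privileges, but it
    only proves that one port is reachable, not general connectivity."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A probe like this verifies reachability of a specific service port (for example, a syslog or FTP target configured elsewhere in this guide), which is often what actually matters when a ping succeeds but an application still fails.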
Successful ping tests do not guarantee that all data messages are allowed between the system and the test IP address.
Traceroute
Traceroute is used to track the progress of a packet through the network. This test can be used to verify that data destined for a WAN device reaches the remote IP address via the desired path. Similarly, network paths internal to a company can be traced over the LAN to verify the local network topology. Selecting LAN or WAN determines the path (direction) of the traceroute test.
Nslookup
Nslookup is a network administrator tool for querying the Domain Name System (DNS) to obtain the corresponding IP address of a domain (for example, "abc.com"). It will also do a reverse name lookup and find the host name for an IP address you specify.
8 Installers and Operators
In addition to the administrator web console, the ESBC supports simplified web pages for technicians and operators to configure the ESBC's features. Technicians are technical staff who install the ESBC at customer sites. Operators are enterprise administrative staff who facilitate daily routine jobs for the company's telephony or data network.
8.1 Installation via the Technician WEB console
1. Log in to the technician web console: the technician user name is tech and the password is 123. Click the login button to enter the ESBC main page.
The ESBC-9x78, 9x28, 10K series models
Once logged in, the main page displays as follows. The telephony configurations either have been provisioned or are pre-configured on the target ESBC system. The technician may have to perform the following tasks in order to install the ESBC in the enterprise's network:
- Network configuration and/or diagnosis
- Have the IP-PBX connect/register to the ESBC (Technician - Trunk Interface)
Figure 139. The Technician main page
Trunk Interface settings:
- Trunk Interface: IP. The LAN side of the ESBC should have the SIP PBX or SIP Phone connected or registered.
- FXS Signaling: Choose between the Loop Start or Ground Start options.
- Trunk Status:
The ESBC registration/connection status with the northbound SIP server.
- Pilot Number: The registration agent (RA) configured via the administrative web GUI or provisioned. Note that if there is no RA configured on the ESBC, the DIDs column is blank.
- DIDs: The DIDs (SIP user agents) configured with the RA. DIDs with no RA configured will not be displayed.
Telephony and Network Diagnostics
Navigate to Technician > Diagnostics, or Customer > Diagnostics.
- Call Test: Use the ESBC built-in SIP device, the Test Agent (TA), to verify telephony connectivity and voice quality with the service provider network. See section 4.3 for detailed descriptions. On the WEB GUI, the technician needs to enter the destination phone number, which could be a designated test call TN or the technician's mobile phone TN, and press Dial.
- Network Test: Use the Network Test utilities to verify the network connectivity status from the ESBC to both LAN and WAN interfaces. See section 7.4 for detailed descriptions.
Connect/Register SIP PBX to the ESBC
To have the SIP PBX connect/register to the ESBC LAN Voice-NAT interface, navigate to Customer > SIP Trunk Configuration.
Figure 140. The SIP PBX Connecting/Registering Page
PBX Settings:
- Select Your PBX: Choose the SIP PBX type that connects to the ESBC. The selection items are the PBX profiles configured in the administrator web console; see the PBX SIP Profile section for detailed descriptions.
- Choose among Static or REGISTER operation mode:
  Static mode: the PBX will be addressed statically by the Adapter. The IP address of the target SIP PBX (the interface toward the ESBC) is needed for this mode.
  REGISTER mode: the PBX will register to the Adapter. The User ID and Password (of the main pilot number) for the SIP PBX is needed to
Enable the DHCP Server option when the ESBC is to offer IP addresses to SIP UAs or hosts in this network. Note that when the SIP PBX connects to the ESBC by the static operation mode, it is recommended that this SIP PBX is configured with a static IP which should be out of the range of the DHCP server IP address range. Navigate to Customer > LAN Setting. Figure 141. The Installer page: ESBC LAN settings Monitor The monitor page displays the SIP UAs registration/connection status for both the North bound and South bound interfaces of the ESBC. Navigate to Technician > Monitor, or Customer > Monitor. 183 184 Figure 142. SIP UA Connection/Register status ESBC-9x80 series models (switch between T1/E1 and transcoding) Figure 143. T1/E1 configuration page for installers The ESBC9x80 (E1/T1) models offer service providers with the flexibility of selecting either a PRI interface (connecting to a TDM PBX) or an IP interface (connecting to a SIP PBX) at the customer site. The IP or PRI options can be chosen through the technician account login. Trunk Interface: When choosing IP mode, all configurations are the same as those described in section 185 When choosing PRI mode, the technician will need to configure the PRI interface to connect to the TDM-PBX. See section 4.9 for a detailed description of PRI configurations. For all PRI configurations, please refer to section for a detailed description. PRI Description Span Status Switch Type To display the D Channel connection status, which reflects the q921 signaling for Up or Down. To choose the switch type, which needs to be exactly the same as that of the TDM-PBX to which the ESBC connects. D-Channel Display the fixed channel number for the D-Channel. CH 24 for T1; CH 16 for E1. Channel Hunting Scheme Clock Source Choose the appropriate channel hunting scheme (ascending or descending). It is suggested that a different hunting scheme is used from that of the PBX. 
E.g., if the PBX uses an ascending channel hunting scheme, then configure the ESBC with a descending scheme so as to distribute loading evenly on entire PRI spans.
- Clock Source: The ESBC default clock mode is Internal, with which voice transmissions follow the ESBC's internal clocking scheme. The connected TDM-PBX clock should be configured to follow the ESBC clock. If there are two spans, span 2 always follows the clock of span 1; span 2 does not have its own clocking scheme. Do not change the ESBC clock mode default settings unless necessary.
- Send Display Name: Checked by default. Send display names to called parties for inbound calls. Some TDM-PBXs do not support Display Name IE settings; uncheck this setting when necessary.
- Play Ringback Tone for outbound call: out-of-band RBT signals (default setting). AS-NEEDED: the ESBC decides the RBT action toward the PBX according to the messages it receives (SIP response codes 180/183 from the network, and the Q.931 Progress Indicator from the PBX).
8.2 Operator Management via the Operator WEB Console
1. Log in to the operator web console: the operator user name is oper and the password is 123. Click the login button to enter the ESBC main page.
The operator console is designed for the end customer to configure the PBX (TDM or SIP) information on the ESBC when there are network setting updates. Please refer to section 8.1 for descriptions of related features.
Figure 144. The operator (end customer) login page
9 SIP Firewall and Header Manipulation Rules (SHMR)
To provide finer control of SIP messages traversing between the SIP PBX and the SIP server, the ESBC allows service providers to create SIP Header Manipulation Rules (SHMR).
The SHMR function consists of a sophisticated scripting language that can be used to create scripts that modify SIP message contents at both LAN/WAN ingress and egress, in the following directions:
- ESBC WAN interface, inbound
- ESBC WAN interface, outbound
- ESBC LAN interface (NAT-Voice port), inbound
- ESBC LAN interface (NAT-Voice port), outbound
SHMR can be used to modify SIP headers and parameters as well as SDP contents. Regular expressions also allow complex matching rules to be constructed. Another feature of the SHMR function is multi-level programmability, which enables rules to reference each other and pass parameters between them.
To import, verify, and activate firewall rules and SHMR rules:
- SIP trunk voice services (SIP B2BUA mode, including SIP PBX and TDM PBX): navigate to Telephony > ADVANCED > Firewall and Telephony > ADVANCED > SHMR.
- SIP hosted voice services (SIP ALG mode): navigate to Telephony > SIP ALG > Firewall and Telephony > SIP ALG > SHMR.
9.1 SIP Header Manipulation and Firewall Scripts
The ESBC SHMR and firewall features are driven by scripting rules; refer to the following documents for detailed descriptions of the script language: SHMR Usage App Note and SIP Firewall Rules. The SHMR rules are composed of the following constructs:
- Objects: headers and header elements. (Headers are SIP headers, and header elements include all subparts of a header, such as header values, header parameters, and URI parameters.)
- Rules: header rules and element rules.
- Processes.
- Regular expressions for matching and giving a new value to an object.
Figure 145. The ESBC SHMR script configuration screen
ESBC SHMR settings:
- Enable: Check this box to activate an SHM rule (or rule set) for the target interface-direction.
- Import: Import an SHMR script from a file in text format.
- Export: Export the SHMR script from the ESBC system to a text file.
- Delete: Delete the current SHMR rule file from the ESBC system.
- Verify: Verify the SHMR with a sample SIP message.
Figure 146.
The SHMR SIP message verification screen (changing 404 to 480)
9.2 SIP Firewall
Figure 147. The ESBC SIP firewall configuration screen
ESBC SIP Firewall settings:
- Enable: Check this box to activate SIP firewall rules for the target interface.
- Import: Import firewall rules from a file in text format.
- Export: Export the firewall script from the ESBC system to a text file.
- Delete: Delete the current firewall rule file set from the ESBC system.
- Log: View the SIP firewall log.
Figure 148. The ESBC SIP firewall log
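The SHMR verification example above changes a 404 response to 480. The ESBC's own rule syntax is documented in the SHMR Usage App Note; purely as an illustration of the kind of status-line rewrite such a rule performs, here is a hedged Python sketch (the function name and the sample message are invented for the example, and this is not the SHMR language itself):

```python
import re

def rewrite_status(sip_message, old_code, new_code, new_reason):
    """Rewrite the status line of a SIP response, e.g. turning
    404 Not Found into 480 Temporarily Unavailable, mimicking the
    kind of edit shown in the SHMR verification example."""
    # Match "SIP/2.0 <old_code> <reason>" at the start of the message,
    # without consuming the CRLF line terminator.
    pattern = r"^(SIP/2\.0) %d [^\r\n]*" % old_code
    replacement = r"\1 %d %s" % (new_code, new_reason)
    return re.sub(pattern, replacement, sip_message, count=1)

# A minimal, hypothetical SIP response for demonstration.
response = (
    "SIP/2.0 404 Not Found\r\n"
    "Via: SIP/2.0/UDP 192.0.2.10:5060\r\n"
    "CSeq: 1 INVITE\r\n"
    "\r\n"
)
rewritten = rewrite_status(response, 404, 480, "Temporarily Unavailable")
```

In a real SHMR rule the match and replacement would be expressed in the ESBC's scripting constructs (objects, header rules, element rules), but the regular-expression idea is the same.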
I’ve been talking with my colleague Jakub Skoczen. He’s insisting that functional implementations of quicksort, which make and return new arrays at each level of recursion, must surely be much slower than imperative implementations that modify an array in place. And I can see his point.

We all know that tail-call optimisation takes care of the grosser inefficiencies when recursion is used to faux-iterate along a list. For example, the following recursive Lisp function returns a list whose elements are double those of the list passed in:

(defun double (x)
  (if (null x)
      x
      (cons (* 2 (car x)) (double (cdr x)))))

(For those of you unfamiliar with Lisp syntax, this says: if the argument to double is an empty list, then the result is empty; otherwise it’s twice the head of the list concatenated with the result of running the double function on the tail of the list.)

It’s easy to see how a clever compiler can recognise that the last thing the function does is re-invoke itself on a lightly modified version of the same argument, and that this can be optimised by changing the argument in place and jumping back up to the top of the function. And indeed this is what many compilers will do. (IIRC, Scheme guarantees this; other Lisp dialects may not.)

But consider a functional implementation of the quicksort algorithm. Quicksort sorts an array by picking a pivot element at random, partitioning the array into two subarrays containing the elements less than and greater than the pivot, and returning the concatenation of three things: the result of quicksorting the less-than subarray, the pivot itself, and the result of quicksorting the greater-than subarray.
As an example, here is the Ruby version that I wrote for the JRuby post:

def qsort(a)
  return a if a.length <= 1
  pivot = a.delete_at rand(a.length)
  return (qsort(a.select { |x| x <= pivot }) +
          [ pivot ] +
          qsort(a.select { |x| x > pivot }))
end

This is doubly recursive: the qsort function calls itself in two places, and only one of those can be optimised as tail-recursion (right?) Which surely means that the first of the two recursive qsort invocations has to be done properly, with a real recursive call, a new stack frame, and lots of constructing and copying. Which means that Jakub has to be right, doesn’t it?

And so to my question: are we missing something? Can a Sufficiently Smart Compiler somehow optimise this recursive quicksort into one that is as efficient as an imperative quicksort that swaps elements in place? Do such optimisations exist even in theory? Better still, are there any functional-language compilers that actually implement them? Inquiring minds want to know!

It can be done using a uniqueness type with Linear Lisp. See

The recursive calls in qsort aren’t tail calls.

// tail call
return f(differentarg);

// non-tail call
return 1 + f(differentarg);

The first one can be optimized by changing the original parameter to the function and branching. The second one cannot be so optimized.

Again with the fucking sushi?

I stopped reading after the first pic.

First, your version of qsort is not stable; you should do three selects, one for == pivot too (if you picked the last item as pivot, this is not needed). Second, you are iterating the array two times (three times with my suggestion); instead use a split function that results in two (three) arrays in one iteration.

In a lazy language, if you use selective parts of the sorted list, it can skip other thunks and thus not have to run the full qsort. But likely this same property will prevent any in-place optimization.
In a strict language (where the compiler can prove no side effects), I still doubt it can optimize this into an in-place algorithm, though it can likely turn it into a loop. But that prevents it from stack-allocating the (smaller) arrays, which could reduce memory accounting (trading it for stack space). Indeed, if anyone knows of compilers or papers that turn a functional qsort into an in-place algorithm, or do other smart tricks, I would love to know about it too. (So good question!)

Be careful to distinguish a functional-looking solution in a language with destructive updates from a solution in a functional language, especially a _pure_ functional language. Okasaki’s “Purely Functional Data Structures” goes into this sort of issue in great depth. I’m no expert, but this is what I think I recall… If the language does not support destructive updates, then at runtime, on each recursion, a “new” data structure can be created which shares structure with the “old” version. So there is no need to copy the values from “old” to “new”; just point into the “old” structure where the values you want are. This can be very efficient.

> This is doubly recursive: the qsort function calls itself in two places, and only one of those can be optimised as tail-recursion (right?)

I believe this is wrong. Neither is in a tail call position, from what I remember from Scheme class, because you are doing something with the result of both of them: concatenating them with the pivot element.

I have the impression there are no compile-side optimizations. Some time ago I learned OCaml for fun, and the OCaml manual has a few paragraphs about implementing recursive algorithms that manage arrays. The manual points out that the more efficient way to solve the performance issue (of stack growth) is to use tail recursion. This is a functional way to implement imperative-like behaviour. The OCaml manual points out the stack issues, but I don’t know about other languages/compilers.
But please consider that tail recursion is a good functional programming practice which should be considered part of the basics. A long time has passed, so please forgive my vagueness.

I don’t know about all functional implementations. In Clojure (based on Lisp), “all collections are immutable and persistent. In particular, the Clojure collections support efficient creation of ‘modified’ versions, by utilizing structural sharing, and make all of their performance bound guarantees for persistent use.” See also. I don’t have benchmarks handy to back it up, but at least they are not naively recreated every time.

In functional languages you are rarely dealing with arrays. Quicksort on a linked list, for example, can be rearranged by simply creating new nodes that point to other parts of the list, which is vastly cheaper than creating a new copy of the whole array (which would be very expensive). If you want to know more about how it is possible to create performant functional data structures and algorithms that maintain referential transparency and persistence, you should check out “Purely Functional Data Structures” by Chris Okasaki. His ~20 line red/black tree example is really amazing and illustrates that the power and expressiveness of these systems does not imply a performance penalty. As for sufficiently smart compilers, if that is your interest I recommend learning a little bit about what GHC does internally. There are some really amazing optimizations like deforestation and stream fusion which are made possible by functional languages.

Your first Scheme example is not a tail call. The last thing the function does is call cons, not recurse. You need something more like this:

(defun double (x so-far)
  (if (null? x)
      (reverse so-far)
      (double (cdr x) (cons (* 2 (car x)) so-far))))

First, you actually haven’t implemented quicksort. Second, you’re looking at the wrong performance problem.
The introduction of On Iteration by Andrei Alexandrescu () has a really good deconstruction of the functional “quicksort” algorithm. In short, “Quicksort, as defined by Hoare in his seminal paper [8], is an in-place algorithm.” Since functional languages can’t mutate state, you can never define a quicksort routine. So let’s consider what a Sufficiently Smart Compiler would need to know to do functional quicksort with only 0 or 1 data copies (i.e. internally in-place, with one possible copy if multiple references to the array or list exist). It has to know that the two filter/select routines create two mutually exclusive sets. It can then transform filter/select into an internal partition function. As for recursion itself, most quicksort algorithms I’ve seen in Java, C++, Wikipedia, etc. are all recursive.

P.S. Performing a quicksort on floating point numbers is not bug free. Specifically, the quicksort algorithm has the invariant that the elements have a total ordering, which floating point numbers don’t have.

Okay, each step of a quicksort addresses the whole array once through: the first time in one pass, and subsequently broken up into 2^k parts, but adding up to one whole array for each level of depth. The levels of depth average log2(n) (worst case is n, but that is a pathological condition with good implementations). So quicksort is O(n log n). Now consider a pure functional variant. Each level of depth requires that we copy the entire array once. A bad implementation might copy the whole array for each comparison and swap, but functional style dictates that you keep a list of swaps and apply them all at once. Copying an array is O(n) and you do it log n times, so the functional version is O(n log n). So no, the functional version is not much slower. But it probably is slower than a good C implementation. Almost certainly at least three times as slow, and probably slower. But so is an imperative version in Java or C-octothorpe.
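The copying arithmetic in the comment above can be sanity-checked: instrument a functional-style quicksort so it counts every list cell it allocates, and on random input the total stays within a small multiple of n·log2(n). This is my own Python sketch, not code from the thread:

```python
import random

def qsort_counting(a, counter):
    """Functional-style quicksort that tallies allocated list cells.

    counter is a one-element list used as a mutable tally; each level
    allocates roughly 2*len(a) cells (the selects plus the concatenation).
    """
    if len(a) <= 1:
        return list(a)
    pivot = a[random.randrange(len(a))]
    less    = [x for x in a if x < pivot]
    equal   = [x for x in a if x == pivot]
    greater = [x for x in a if x > pivot]
    counter[0] += len(a)  # cells allocated by the three selects
    result = (qsort_counting(less, counter)
              + equal
              + qsort_counting(greater, counter))
    counter[0] += len(result)  # cells allocated by the concatenation
    return result
```

With 1024 random values the tally lands near 2·1024·log2(1024) ≈ 20,000 cells, consistent with the O(n log n) copying claim.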
Clojure (and Scala and Haskell) has advanced data types that minimize the cost of copying arrays when elements are updated, mostly by using trees connecting subarrays of 32 or 64 elements and only copying the subarrays that change. All of that happens cheaply and transparently under the hood. It doesn’t help much with the granularity of quicksort, which will require a full copy. Clojure includes a mechanism for pulling back the immutability of arrays inside inner loops when performance is critical. Keeping transients inside one function preserves the immutability property for the rest of the program and speeds up some rare cases considerably. One nice thing about Clojure is that edge cases like this can be addressed without losing the abstractions that make the language powerful.

I think another question worth asking is why you would want a compiler to change the actual function of the code so thoroughly. Debugging will become more difficult if the compiled code doesn’t look like the source code at all, surely? And a compiler that clever is more likely to cock up in tricky-to-pin-down ways than a simpler compiler. Hidden compiler errors are the worst thing that can happen to a coder. That said, I may be the last person to comment: I’ve never written a single line of recursive code in a released project. Have I missed out? Well, clearly I’ve missed out on opaque code, inconsistent frame rates and hours of bug tracking, but what else?

Canonical quicksort in Haskell:

qsort [] = []
qsort (p:xs) = qsort (filter (< p) xs) ++ [p] ++ qsort (filter (>= p) xs)

The ‘filter’ command is iterative (tail recursive).

@Brian This can be optimised away, as there is such a thing as tail recursion modulo cons: if you have a function like that, it can be optimised to be tail recursive. You could generalise this from CONS to primitive operators (+, -, etc.) and end up with quite aggressive TRMC->TR optimisation.

As Brian says, Clojure allows you to do some mutable operations in an isolated fashion on things like arrays.
Haskell has a similar thing called ST which allows mutable state like arrays or references to things or whatever. You run the computation inside a monad, and the runST function is completely pure. `runST foo' will always return the same result. Example:

Since pure functions can be black boxes, a sufficiently smart implementation of functional quicksort would be to copy the array and then use in-place quicksort on the copy. Alternatively, instead of using an imperative in-place quicksort on the copy of the array, one could write a version that uses uniqueness types to allow the compiler to use mutation under the hood.

The double function is not tail recursive. If you are used to C-like languages, an easy way to determine if a call is in the tail position is to ask yourself if inserting a “return” before the call would break the function. Your example gives:

function double(x) {
  return isNull(x) ? x : cons(2 * car(x), return double(cdr(x)));
}

The first return is there to reproduce the implicit Lisp return. The second return is the important one and will bypass the call to cons. This new function is broken and will always return null.

In the next paragraph, you imply that a function must call itself in the tail position to be subject to tail-call elimination (“jumping back to the top of the function”). This is not the case. Any target will do. In your example, the call to cons is in the tail position and will be subject to tail-call elimination. This is important if you want to implement state machines or write your code in continuation passing style. TCE applies if the tail position contains a call. It does not care, and must not care, about the target. You can also handle self-tail-calls in a special way, but this buys you nothing but speed, whereas TCE allows functional languages to replace jump-based flow-control with function calls.

Yes, only one branch of your Ruby quicksort can be in the tail position. The other will use stack space.
Qsort always requires log(2, n) space to remember the boundaries. You could modify your implementation to reduce the constant factor by putting a recursion in the tail position, but the log(2, n) has to be there in both functional and imperative implementations. Function call overhead (stack frames) will not be the dominant factor in a functional qsort implementation. If you want the function to be more efficient, you should reduce the number of intermediate allocations, to reduce GC stress. When sorting a Lisp list, you could build it as you go. When sorting an array, you could return a rope, building it as you go. In an impure language, the fastest qsort is probably something like:

function qsort(x) { return qsortInPlace(copy(x)); }

Just for fun, you could move both calls to qsort into tail positions by rewriting the code in continuation passing style. It would probably be slower and definitely uglier. You would still need the log(2, n) space.

A SSC might notice that all your calls to the array constructor end up in calls to append. It might also notice that they can’t end up anywhere else. It might also notice that they add up to the length of the original array. With that information, it might decide to skip the call to append and hand you a slice of a single n-long array. Don’t count on this kind of smarts.

BTW, I don’t know Ruby, but I’m pretty sure that your implementation will modify the source array. Since the first iteration will work on the outside-visible array, that’s bad. Either that, or you will have multiple instances of the pivot in the result. That is also bad.

This performance question is difficult to answer because you will run into other language design differences, such as Clojure’s immutable collections or Lisp’s linked lists. None of these will ever compete with a mutable array. Even if the functional compiler is smart enough to produce the best possible linked-list code, it won’t be as fast as code that sorts a simple array with in-place updates.
Even if the compiler is able to use a raw array and in-place updates as a temporary representation, there’s still the cost of converting the original data structure to that optimized form, and, after sorting, converting it back to the higher-level (and inefficient) form. These are just a constant number of extra passes, but they are quite significant, remarkably on large data sets: when you have millions of elements to sort, so they don’t fit in the L1 cache, each extra pass over the data will make your whole sort noticeably slower, and there’s nothing you can do about that (compiler magic, parallelism, etc.).

A couple of points to consider before declaring a “win” for imperative programming on this issue (not that that’s what you’re doing, just in case anyone interprets your post that way):

1. A recursive function operating on a mutable array could still be considered functional (though not pure) if the visibility of the side-effects is limited. (See Clojure’s transients ().)

2. A well designed purely functional/immutable language is far less likely to use an array for storing a logical vector. Internally it would probably use some sort of tree-like structure, which would utilize shared structure, so there would be much less need to create and copy arrays.

I think a compiler can do a real recursion for one call and tail-recurse the other call.

@Luke V.: Yes, if you have a functional language that supports arrays, it can be just as fast. By the way, such a language can even be functionally pure: you write code that in theory replaces an array’s content with a new array, but under the covers a Sufficiently Smart Compiler produces in-place updates of that array because it sees that the old value was not aliased and won’t be reachable after the sort.
JavaFX Script’s sequences are a very good example of that: they are immutable, but internally backed by arrays, and the compiler is capable of Clojure-like usage of a mutable representation; the immutable sequence is just a thin wrapper over the mutable data, so these transitions have O(1) cost when the compiler is “smart enough”. The language doesn’t offer Lisp-like manipulation syntax that sees the sequence as a car:cdr pair; programmers are instead presented with an array-like model, with operations that are efficient over arrays, including indexed access, initialization with generators, and bulk concatenation. The compiler is not yet very smart; it does some optimizations, but there are many more that are possible and should follow.

A tree-like structure is indeed much better than a linked list with one node per element, but in practice it’s only an improvement when you need few updates. Sorting a random collection will still create a hideous amount of extra work.

Is there any real benefit to doing a so-called “functional quicksort” vs a functional merge sort? I think you may be discarding the benefits of the quicksort vs the merge sort in your functional formulation. The whole point of quicksort is that it has lower memory overhead and is more optimizable in languages like C.

Mergesort actually has a stronger complexity guarantee. Its worst case runtime is O(n*log(n)), whereas for quicksort the worst case time is O(n^2). It’s only quicksort’s *average* runtime that is O(n*log(n)).

I like functional programming, and I think there are many places where a functional implementation of an algorithm is clearer. However, often that comes at the expense of runtime complexity. But, in many cases, that just doesn’t matter! Honestly, one of my favorite languages is Python, and it is dog slow. But so what? In most cases a slow language is still plenty fast.
For the cases where it isn’t, I’m still comfortable programming in C or C++…

First, as others have mentioned, the “last” thing the double function does is the cons, not the recursive call, and similarly, the last thing the qsort function does is the list construction. There are common techniques for writing “proper” tail-recursive functions, but that translation may be comparable to recognising and replacing a selection sort with a quicksort; “obvious” on some level of abstraction, but not something to expect the compiler to do.

On the other hand, I get the impression that you consider the function calls expensive, and lament the “fact” that only one of the two recursive calls in a quicksort will be tail-call-optimised, unlike in imperative languages like C, where you get the “benefit” of being able/forced to handle the one side with a normal loop, and the other with either a recursive call (stack frame and all that) or by managing your very own stack implementation… There *will* be “extra” objects constructed, but the “expensive” part should be in the construction of the new arrays/lists for the partitions and the pivot, not in passing those to recursive calls… Still, allocating and releasing memory with an optimised GC is a lot cheaper than malloc/free in C (unless available memory is low), placing elements in “new” arrays isn’t much worse than swapping them in existing arrays, and calls/stack frames don’t need to be as expensive as those specified for the platform C calling convention; so it doesn’t need to be *much* slower. (Then again, it depends on your definition of “much”. I guess I need to test/compare; just not today…)

1) As others have pointed out, your example of a tail-recursive function in Scheme is not in fact tail-recursive, since it can’t reuse the same stack frame.

2) This problem seems to me to be deeper than just an issue of recursion versus iteration; you’re really talking about mutable versus immutable data structures.
If I recall, Clojure does some clever things to make copying its immutable sequence types less expensive, which I think maybe addresses the real issue of creating all those copies of the array. Can somebody who knows Clojure better (or knows another language that does this: Haskell? OCaml?) confirm or deny?

Just for laughs I created a naive Haskell program to sort an array of integers and compared it with a naive C program on the same box. The Haskell version used 3.2G and 100% of one CPU core and took 25.3 seconds on 10k values. The C version did it in 0.002 sec. The C program sorted 1 million values in 0.115 sec. The figures speak for themselves. On the other hand, if you actually needed to do this in your Haskell program, it would be trivial to use the C FFI to call a C function to handle that bit of work.

Many have pointed out that the function double is not tail-recursive. Chris Done commented that there is such a thing as “tail recursion modulo cons”, with which double can be implemented in a tail recursive manner. My two cents:

1. I am suspicious of Chris’s claim that this optimisation can be generalised to other primitive operators like + or - (as an automatic transformation). Notice that to transform

fact n = n * fact (n - 1)

to

fact n = fact' n 1
fact' n m = fact' (n - 1) (n * m)

(some cases omitted), we need to know that (*) is associative: the orders of multiplication are different in the two programs. To verify such a transformation, the compiler would need more knowledge about the domain. One may say that fact and fact' are different algorithms that happen to compute the same function.

2. Similarly, for a compiler to “optimise” a copying quicksort to one that uses in-place update, it has to know a vast amount of domain-specific knowledge. The two quicksorts should be considered different algorithms. As one commenter said, it’s like “optimising” a selection sort to quicksort.
This is not usually what we expect of a compiler, and it is in fact dangerous for a compiler to attempt to do so without manually inserted heuristics or compiler pragmatics.

@Doug: indeed, I think the problem is not iteration vs. recursion but the space requirements. Both quicksorts run in O(n^2) worst-case and O(n log n) average time, but the functionally pure variant uses quadratic space in the worst case, including all the copying that goes into that (and also it filters the array twice, instead of partitioning it, but that is only a constant factor). I would be quite surprised if Clojure could optimise this away. I’d guess that its persistent data structures will do a good job with a small number of changes to a list, keeping track of what has changed and falling back to the original structure for the rest, for example in the concat case (left + pivot + right). But these filter operations create a completely different array. You’d have the choice of either evaluating the predicate for each operation on it again, or building a new array. And in each recursion, we build up more and more filter predicates that need to be applied after each other. I’d expect Clojure to rather build a new array. In any case, you are either looking at quadratic space, or at an additional n comparison operations for each recursion step, which would turn the algorithm into O(n^3) worst case and O(n^2 log n) average. Both kinds of suck ;-)

What the hell. Don’t you guys read? “double is not tail recursive”

Is there a different sort algorithm that is more of a natural match for the pure functional paradigm? Would it make more sense to compare ‘quicksort in a mutable arrays environment’ with this instead?

> In functional languages you are rarely dealing with arrays.

Only if you are focused on the Miranda/Haskell model of functional languages. There’s an entire other branch of functional languages which are array based: fp, J, K, and others.
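On the merge-sort question above: merge sort is often cited as the natural fit for immutable linked lists, because splitting and merging only ever cons onto the front of a list and can share the untouched tail. The sketch below is mine, not from the thread; it models cons cells as nested Python tuples `(head, tail)`:

```python
def to_cons(xs):
    """Build a cons list (head, tail) from a Python sequence."""
    out = None
    for x in reversed(xs):
        out = (x, out)
    return out

def to_py(cell):
    """Flatten a cons list back into a Python list."""
    out = []
    while cell is not None:
        out.append(cell[0])
        cell = cell[1]
    return out

def split(cell):
    """Split a cons list into two halves by alternating elements."""
    a = b = None
    while cell is not None:
        a, b = (cell[0], b), a
        cell = cell[1]
    return a, b

def merge(a, b):
    """Merge two sorted cons lists; no cell is ever mutated."""
    if a is None:
        return b
    if b is None:
        return a
    if a[0] <= b[0]:
        return (a[0], merge(a[1], b))
    return (b[0], merge(b[1], a))

def msort(cell):
    """Purely functional merge sort over cons lists."""
    if cell is None or cell[1] is None:
        return cell
    a, b = split(cell)
    return merge(msort(a), msort(b))
```

Unlike the copying quicksort, this never needs an array at all, and its O(n log n) bound holds in the worst case too.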
Also, there’s an excellent paper (which I can’t find online) which shows that this favorite Haskell example isn’t a quicksort at all, but a deforested tree sort.

I have a question (I’m not a “functional” guru). Mike’s post asks about a functional implementation of quicksort, but I have understood that quicksort is an algorithm which was designed with an imperative style in mind and optimized for imperative languages. My question is: are there functional-specific sorting algorithms (specifically developed and optimised for functional languages)?

@Jonathan Hartley My post poses a similar question; I’m sorry I missed your post.

Realistically, it seems unlikely that any algorithm that builds fresh lists on each recursion can ever be as fast as an in-place algorithm on typical hardware today. Even if allocating and releasing the extra memory is relatively fast in your language of choice, there is always some overhead. In any case, using extra copies probably means poor data locality, which means poor use of caches, which dominates performance on many modern architectures. To get effective performance out of poor man’s quicksort, you therefore need your optimiser to be smart enough to convert it into an in-place algorithm. The very first comment here, by Maxime Caron, is probably the most relevant on this point: typical type systems in today’s mainstream languages aren’t powerful enough to make the necessary guarantees, but there are more powerful type systems that would recognise the single-movement nature of quicksort’s partitioning algorithm. Given such a system, it would be theoretically possible for an optimiser to safely convert from poor man’s quicksort into real, in-place quicksort.

Mike, your post is wrong on so many levels. I lost count at lg(n).

Steve Witham wrote:
But occasionally I see a comment like this one, which is merely abuse and has no actual point to make, and I wonder whether I should filter a bit more strictly. I’m interested in others’ thoughts on this. (To be clear, I have no problem whatsoever with a comment that begins with “your post is wrong on so many levels” and then goes on to explain why: comments of that kind have been some of the best on this blog, and I’ve learned a lot from them. The ones I’m not sure about are where the comment is just the abuse and nothing more.) Mike, I was going to say the same about Steve’s post lacking content, but thought complaining about it might be equally uninteresting. I’m with you, though. I’d filter that comment out, not because it’s a bit mean but because it’s not worth anything. Mike, it’s your blog so you get to choose. But I certainly see no point in posts that don’t even try to make an interesting point and would suggest that Steve’s post could be deleted without damaging the blog at all. Sorry, the previous comment got a nick instead of my name from a borked WordPress login. I’m kind of disappointed with this post. It talks a lot about recursion when the real topic is immutability. There are impure functional languages, e.g. ocaml, which have mutable arrays. An integer array can be used to represent a data sort and yields results that are most often faster than quicksort. See. Most functional algorithms can be implemented with tail-recursion, but the hoops you need to jump through often eliminates any benefit for readability and debugging that functional programming normally provides. Here’s the quicksort written with tail-recursion (except within the less-greater function). It’s a long mess and at this point, a procedural program may be much more readable. @Chris: Sorry, missed that post. That’s interesting, and it definitely makes intuitive sense that something like it would exist.
https://reprog.wordpress.com/2010/05/25/how-slow-are-functional-implementations-of-quicksort/
Methods

definedType() → NodeType

shaderType() → hou.shaderType enum value or None

vexContext() → VexContext

node(node_path) → hou.Node or None

Return the node at the given path, or None if no such node exists.

If you pass in a relative path (i.e. the path does not start with /), searches are performed relative to this node. For example, to get the parent node of a node in the variable n, use n.node(".."). To get a child node named geo5, use n.node("geo5"). To get a sibling node named light3, use n.node("../light3").

Note that the return value may be an instance of a subclass of Node. For example, if the node being found is an object node, the return value will be a hou.ObjNode instance.

If the path is an absolute path (i.e. it starts with /), this method is a shortcut for hou.node(node_path). Otherwise, it is a shortcut for hou.node(self.path() + "/" + node_path). See also hou.node().

nodes(node_path_tuple) → tuple of hou.Node or None

This is like node() but takes multiple paths and returns multiple Node objects. This is the equivalent of:

nodes = [self.node(path) for path in paths]

item(item_path) → hou.NetworkMovableItem or None

Return the network item at the given path, or None if no such item exists.

If you pass in a relative path (i.e. the path does not start with /), searches are performed relative to this node.

If the path is an absolute path (i.e. it starts with /), this method is a shortcut for hou.item(node_path). Otherwise, it is a shortcut for hou.item(self.path() + "/" + item_path). See also hou.item().

Note that the return value may be an instance of a subclass of NetworkMovableItem. For example, if the item being found is an object node, the return value will be a hou.ObjNode instance. If the item is a network box, the return value will be a hou.NetworkBox instance.

items(item_path_tuple) → tuple of hou.NetworkMovableItem or None

This is like item() but takes multiple paths and returns multiple NetworkMovableItem objects.
This is the equivalent of:

items = [self.item(path) for path in paths]

isNetwork() → bool

Return True if this node is a network, in other words a node that may contain child nodes. Otherwise return False, in which case several other methods, such as hou.Node.createNode(), will raise hou.OperationFailed if they are called.

children() → tuple of hou.Node

Return a list of nodes that are children of this node. Using the file system analogy, a node’s children are like the contents of a folder/directory. To find the number of child nodes, use len(node.children()).

The order of the children in the result is the same as the user-defined ordering in Houdini. To see this order, switch the network view pane into list mode, and ensure that the list order is set to user defined. To reorder nodes, drag and drop them in the list.

def pc(node):
    '''Print the names of the children of a particular node.

    This function can be handy when working interactively in the
    Python shell.'''
    for child in node.children():
        print child.name()

def ls():
    '''Print the names of the nodes under the current node.'''
    pc(hou.pwd())

The following expression evaluates to a list of children of a particular node type:

[c for c in node.children() if c.type() == node_type]

allItems() → tuple of hou.NetworkMovableItem

Return a tuple containing all the children of this node. Unlike children(), this method will also return hou.NetworkBox, hou.SubnetIndirectInput, hou.StickyNote, and hou.NetworkDot objects.

allSubChildren(top_down=True, recurse_in_locked_nodes=True) → tuple of hou.Node

Recursively return all sub children of this node. For example, hou.node("/").allSubChildren() will return all the nodes in the hip file.

top_down
    If True, this function will do a top-down traversal, placing a node in the returned tuple before its children. If False, it will do a bottom-up traversal, placing children before their parents.
recurse_in_locked_nodes
    If True, the function will recurse inside locked child nodes (child nodes for which the isEditable() method returns False) and include children of the locked child nodes in the returned tuple. If False, the function will not recurse inside locked child nodes, and children of the locked child nodes will not be included in the returned tuple. (The locked child nodes themselves, however, will be included.)

For example, if recurse_in_locked_nodes is True and hou.node("/obj") contains a Simple Female node (a locked node), then the tuple returned by hou.node("/obj").allSubChildren() will include the Simple Female node and its child nodes. If recurse_in_locked_nodes is False, the returned tuple will contain the Simple Female node, but not its child nodes.

Note that a tuple is returned, not a generator. This means that it is safe to delete or create nodes while looping through the return value.

The following function deletes all children of a particular type that appear anywhere inside a given node:

def removeSubChildrenOfType(node, node_type):
    '''Recursively delete all children of a particular type.'''
    for child in node.allSubChildren():
        if child.type() == node_type:
            child.destroy()

This code, for example, removes all the visibility SOPs anywhere under /obj:

>>> removeSubChildrenOfType(hou.node("/obj"),
...     hou.sopNodeTypeCategory().nodeTypes()['visibility'])

allNodes() → generator of hou.Node

Recursively return a sequence of all nodes contained in this node, including this node. This method differs from hou.Node.allSubChildren() in the following ways:

It includes this node in the returned sequence.

It does not guarantee a top-down or bottom-up traversal order.

The method is a generator and does not return a tuple, so it is not safe to create or delete nodes while looping through the return value.
Here is an example of printing out the paths of all nodes under /obj:

root_node = hou.node("/obj")
for node in root_node.allNodes():
    print node.path()

glob(pattern, ignore_case=False) → tuple of hou.Node

Return a tuple of child nodes whose names match the pattern.

The pattern may contain multiple pieces, separated by spaces. An asterisk (*) in a pattern piece will match any sequence of characters, and a piece prefixed with a caret (^) removes matching nodes from the result. If ignore_case is True, case differences are ignored when matching node names; this does not apply when matching group, network box or bundle names.

This method returns an empty tuple if you pass in an empty pattern.

>>> obj = hou.node("/obj")
>>> obj.createNode("geo", "geo1")
<hou.ObjNode of type geo at /obj/geo1>
>>> obj.createNode("geo", "geo2")
<hou.ObjNode of type geo at /obj/geo2>
>>> obj.createNode("geo", "grid")
<hou.ObjNode of type geo at /obj/grid>
>>> obj.createNode("geo", "garbage")
<hou.ObjNode of type geo at /obj/garbage>
>>> obj.createNode("geo", "box")
<hou.ObjNode of type geo at /obj/box>
>>> def names(nodes):
...     return [node.name() for node in nodes]
>>> names(obj.glob("g*"))
['geo1', 'geo2', 'grid', 'garbage']
>>> names(obj.glob("ge* ga*"))
['geo1', 'geo2', 'garbage']
>>> names(obj.glob("g* ^ga*"))
['geo1', 'geo2', 'grid']

See also hou.Node.recursiveGlob().

recursiveGlob(pattern, filter=hou.nodeTypeFilter.NoFilter) → tuple of hou.Node

Like hou.Node.glob(), return a tuple of child nodes whose names match the pattern. However, any matching child will have all its children added, recursively. As well, the result may be filtered by node type. Houdini first matches child nodes against the pattern, then recursively adds the subchildren of matching children, and then applies the filter.

pattern
    Child node names will be matched against this string pattern. See hou.Node.glob() and hou.NodeBundle for information about the pattern syntax. Note that if a child node matches the pattern, all of its subchildren will be added to the result (subject to filtering), regardless of the pattern.
filter

A hou.nodeTypeFilter enumeration value to limit matched nodes to a particular type (e.g. object nodes, geometry object nodes, surface shader SHOPs, etc.). The pattern and filter behavior is very similar to that used by node bundles in Houdini. See hou.NodeBundle for more information.

Raises hou.OperationFailed if the pattern is invalid.

createNode(node_type_name, node_name=None, run_init_scripts=True, load_contents=True, exact_type_name=False) → hou.Node

Create a new node of type node_type_name as a child of this node.

node_name

The name of the new node. If not specified, Houdini appends a number to the node type name, incrementing that number until a unique node name is found. If you specify a name and a node already exists with that name, Houdini will append a number to create a unique name.

run_init_scripts

If True, the initialization script associated with the node type will be run on the new node.

load_contents

If True, any subnet contents will be loaded for custom subnet operators.

exact_type_name

If True, the node's type name will be exactly as specified in node_type_name. Otherwise, a preferred operator type that matches the given node_type_name may be used. For example, the given "hda" may match a newer version "hda::2.0", or if there are two available operators "namespaceA::hda" and "namespaceB::hda", and "namespaceB" has precedence, then the created node will be of type "namespaceB::hda".

Raises hou.OperationFailed if this node cannot contain children. Raises hou.PermissionError if this node is inside a locked asset.

>>> obj = hou.node("/obj")

# Let Houdini choose a name based on the node type name.
>>> obj.createNode("geo")
<hou.ObjNode of type geo at /obj/geo1>

# Let Houdini choose a unique name.
>>> obj.createNode("geo")
<hou.ObjNode of type geo at /obj/geo2>

# Give the node a specific name.
>>> obj.createNode("geo", "foo")
<hou.ObjNode of type geo at /obj/foo>

# Let Houdini create a unique name from our suggested name.
# Also, don't run the geometry object init scripts so the contents are empty.
>>> obj.createNode("geo", "geo1", run_init_scripts=False)
<hou.ObjNode of type geo at /obj/geo3>
>>> obj.node("geo1").children()
(<hou.SopNode of type file at /obj/geo1/file1>,)
>>> obj.node("geo3").children()
()

createOrMoveVisualizer(output_index)

Creates a node for visualizing the data from a particular output of this node. If a visualizer node already exists in the current network, it is moved and connected to the specified output_index. This method is only implemented for SOP and VOP nodes. Other node types do nothing when this method is called.

destroy()

Delete this node. If you call methods on a Node instance after it has been destroyed, Houdini will raise hou.ObjectWasDeleted. Raises hou.OperationFailed if you try to delete a node inside a locked asset.

copyTo(destination_node) → hou.Node

Copy this node to a new place in the node hierarchy. The new node is placed inside the given destination node. This method returns the new node. Raises hou.OperationFailed if the destination node cannot contain the new node. Raises hou.PermissionError if the destination node is inside a locked asset.

copyItems(items, channel_reference_originals=False, relative_references=True, connect_outputs_to_multi_inputs=True) → tuple of hou.NetworkMovableItem

Create copies of all specified items in this network. The items do not need to be children of this network, but all items must be contained in the same parent network. If channel_reference_originals is True, the parameters of all new nodes are set to channel reference the original nodes. If a copied node is a sub-network, only the top-level node establishes channel references to the original. Child nodes inside the sub-network will be simple copies of the original child nodes. The relative_references parameter controls whether the channel references use relative or absolute paths to the source nodes.
If connect_outputs_to_multi_inputs is True, and any items being copied have outputs connected to a multi-input node (like a Merge), then the new item copies will also be connected to the multi-input node. Normally copied nodes do not have any outputs to nodes outside the copied set. Returns a tuple of all the new network items. Raises hou.OperationFailed if this node cannot contain children. Raises hou.PermissionError if this node is inside a locked asset.

deleteItems(items)

Destroys all the items in the provided tuple of hou.NetworkMovableItem objects. This is significantly more efficient than looping over the items and calling destroy() on each one. It also safely handles cases where one object may not be allowed to be deleted unless another object is also deleted. Raises hou.OperationFailed if one or more of the provided items is not a child of this node. Raises hou.PermissionError if this node is or is inside a locked digital asset.

isCurrent() → bool

Return a boolean indicating whether this node is the last selected node in its network. Each network (i.e. node containing children) stores its own list of selected nodes, and the last selected node has special meaning. For example, it is the node displayed in unpinned parameter panes. See also hou.selectedNodes() to get a tuple of all the selected nodes in all networks in Houdini. The last node in this list also has special meaning in Houdini, and corresponds to the global current node.

setCurrent(on, clear_all_selected=False)

Set or unset this node as the last selected one. Each network (i.e. node containing children) stores its own list of selected nodes, and the last selected node has special meaning. For example, it is the node displayed in unpinned parameter panes. If on is True, this node will become the last selected node. If it is False and this node was the last selected one, it will be unselected and the second-last selected node will become the last selected node.
If clear_all_selected is True, Houdini will unselect every node in this network before performing the operation. See also hou.Node.setSelected and hou.selectedNodes().

selectedChildren(include_hidden=False, include_hidden_support_nodes=False) → tuple of hou.Node

Return a tuple containing the children of this node that are selected. Note that the last selected node has special meaning; you can test whether a node is the last selected one with hou.Node.isCurrent().

include_hidden

If False, hidden nodes are not included in the result, even if they are selected.

include_hidden_support_nodes

If True, include in the returned tuple any hidden nodes that exist solely to support nodes that are actually selected. This specifically refers to VOP Parameter nodes, but may include other support nodes as well.

The following example will print the names of all selected objects in /obj:

for n in hou.node("/obj").selectedChildren():
    print n.name()

To find the total number of selected child nodes, use len(node.selectedChildren()).

selectedItems(include_hidden=False, include_hidden_support_nodes=False) → tuple of hou.NetworkMovableItem

Return a tuple containing the children of this node that are selected. Unlike selectedChildren, this method will also return any selected hou.NetworkBox, hou.SubnetIndirectInput, hou.StickyNote, and hou.NetworkDot objects.

include_hidden

If False, hidden nodes are not included in the result, even if they are selected. Other network item types cannot be hidden, and so are unaffected by the value of this parameter.

include_hidden_support_nodes

If True, include in the returned tuple any hidden nodes that exist solely to support nodes that are actually selected. This specifically refers to VOP Parameter nodes, but may include other support nodes as well.
The following example will print the positions of all selected items in /obj:

for n in hou.node("/obj").selectedItems():
    print n.position()

numItems(item_type=None, selected_only=False, include_hidden=False) → int

Return the number of network items that are children of this node, optionally filtered by item type, selection state, and visibility.

item_type

If None, items of any type are counted. If a hou.networkItemType value is provided, only items of that type are counted.

selected_only

If True, only selected items are counted.

include_hidden

If False, hidden nodes are not included in the count, even if they are selected. Other network item types cannot be hidden, and so are unaffected by the value of this parameter.

type() → hou.NodeType

Return the hou.NodeType object for this node. For example, all camera node instances share the same node type.

changeNodeType(new_node_type, keep_name=True, keep_parms=True, keep_network_contents=True, force_change_on_node_type_match=False) → hou.Node

Changes the node to a new type (within the same context). new_node_type is the internal string name of the type you want to change to. keep_name, keep_parms, and keep_network_contents indicate that the node should keep the same name, parameter values, and contents, respectively, after its type has changed. force_change_on_node_type_match indicates whether to perform the change even when the node is already of the specified type.

childTypeCategory() → hou.NodeTypeCategory

Return the hou.NodeTypeCategory corresponding to the children of this node. For example, if this node is a geometry object, the children are SOPs. If it is an object subnet, the children are objects.

parm(parm_path) → hou.Parm or None

Return the parameter at the given path, or None if the parameter doesn't exist.

globParms(pattern, ignore_case=False, search_label=False, single_pattern=False) → tuple of hou.Parm

Return a tuple of parameters matching the pattern.
The pattern may contain multiple pieces, separated by spaces. An asterisk (*) in a pattern piece will match any sequence of characters. If ignore_case is True, matching is case-insensitive. By default, only parameters with names matching the pattern are returned. Set search_label to True to also return parameters with labels matching the pattern. If single_pattern is True, the pattern will be treated as one pattern even if there are spaces in it. This method returns an empty tuple if you pass in an empty pattern.

evalParm(parm_path) → int, float, or str

Evaluates the specified parameter and returns the result.

parms() → tuple of hou.Parm

Return a list of the parameters on this node.

parmsReferencingThis() → tuple of hou.Parm

Return a list of the parameters that reference this node.

allParms() → generator of hou.Parm

Recursively return a sequence of all the parameters on all of the nodes contained in this node, including this node. This method is a generator and does not return a tuple. Here is an example of printing out the parameter paths for all nodes under /obj:

root_node = hou.node("/obj")
for parm in root_node.allParms():
    print parm.path()

setParms(parm_dict)

Given a dictionary mapping parm names to values, set each of the corresponding parms on this node to the given value in the dictionary. The following example sets the tx and sy parameters at once:

>>> node = hou.node("/obj").createNode("geo")
>>> node.setParms({"tx": 1, "sy": 3})

Raises hou.OperationFailed if any of the parameter names are not valid. See also the setParmExpressions method.

setParmsPending(parm_dict)

Given a dictionary mapping parm names to values, sets the pending value of each of the corresponding parms on this node. Raises hou.OperationFailed if any of the parameter names are not valid. See also the setPending method.
setParmExpressions(parm_dict, language=None, replace_expressions=True)

Given a dictionary mapping parm names to expression strings, set each of the corresponding parms on this node to the given expression string in the dictionary. See hou.Parm.setExpression() for a description of the language and replace_expressions parms. The following example sets expressions on the tx and sy parameters at once:

>>> node = hou.node("/obj").createNode("geo")
>>> node.setParmExpressions({"tx": 'ch("ty")', "sy": "sin($F)"})

Raises hou.OperationFailed if any of the parameter names are not valid. See also the setParms method.

parmTuple(parm_path) → hou.ParmTuple or None

Return the parm tuple at the given path, or None if it doesn't exist. This method is similar to parm(), except it returns a hou.ParmTuple instead of a hou.Parm.

evalParmTuple(parm_path) → tuple of int, float, or str

Evaluates the specified parameter tuple and returns the result.

parmTuples() → tuple of hou.ParmTuple

Return a list of all parameter tuples on this node. This method is similar to parms(), except it returns a list of hou.ParmTuple instead of hou.Parm.

parmsInFolder(folder_names) → tuple of hou.Parm

Return a list of parameters in a folder on this node. Returns all parameters in the folder and its subfolders (if any).

folder_names

A sequence of folder name strings. For example, to get a list of the parameters in the Shading folder of the Render folder, use ("Render", "Shading"). Note that by folder name, we mean the label used in the parameter dialog, not the internal parameter name. If this sequence is empty, the method returns all parameters on the node, the same as if you called parms().

Raises hou.OperationFailed if the folder specified by folder_names does not exist. For example, suppose a node had a Render folder that contained a Shading subfolder.
Then this line of code would return the parameters in the Render folder:

# Note the trailing comma after "Render" to tell Python that "Render" is
# contained in a tuple/sequence, as opposed to just a single string with
# parentheses around it.
>>> node.parmsInFolder(("Render", ))

And this line of code would return the parameters in the Shading subfolder:

>>> node.parmsInFolder(("Render", "Shading"))

See also hou.Parm.containingFolders() and hou.Parm.containingFolderSetParmTuples().

parmTuplesInFolder(folder_names) → tuple of hou.ParmTuple

Return a list of the parameter tuples in a folder on this node. This method is similar to parmsInFolder(), except it returns a list of hou.ParmTuple instead of hou.Parm. See parmsInFolder() above for information about the arguments. See also hou.Parm.containingFolders() and hou.Parm.containingFolderSetParmTuples().

expressionLanguage() → hou.exprLanguage enum value

Return the node's default expression language. When you enter an expression in a parameter that does not already contain an expression, the node's expression language is used to determine how that expression should be evaluated. You can change a node's expression language in the parameter dialog in the GUI. Changing the node's expression language will not change the language in parameters already containing expressions (i.e. parameters with keyframes). Note that if a parameter already contains an expression and you change that expression in the GUI, the expression language will not change, regardless of the value of the node's expression language. To change the language of an existing expression in a parameter from Python, use hou.Parm.setExpression(), as in parm.setExpression(parm.expression(), language).

setExpressionLanguage(language)

Set the node's default expression language. See expressionLanguage() for more information.

parmAliases(recurse=False) → dict of hou.Parm to str

Return a dictionary of parameter aliases on the node's parameters.
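The trailing-comma note is plain Python, not anything Houdini-specific; a one-character difference changes the argument's type entirely:

```python
# ("Render") is just a parenthesized string; ("Render",) is a 1-tuple.
without_comma = ("Render")
with_comma = ("Render",)
print(type(without_comma).__name__)   # str
print(type(with_comma).__name__)      # tuple
# Since a string is itself a sequence (of characters), passing the bare
# string would not mean "one folder named Render" -- always keep the comma.
```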
The keys in the dictionary are the parameters that have aliases and the values are the alias names.

recurse

Return the parameter aliases for this node and its children.

clearParmAliases()

Removes all alias names from parameters on the node.

spareParms() → tuple of hou.Parm

Return a list of the spare (user-defined) parameters on this node.

removeSpareParms()

Removes all spare parameters from this node.

parmTemplateGroup() → hou.ParmTemplateGroup

Return the group of parm templates corresponding to the current parameter layout for this node. You can edit the parameter layout for this node (add or remove spare parameters, reorder or hide built-in parameters, etc.) by getting the current parameter group, modifying it, and calling hou.Node.setParmTemplateGroup() with it. The following example creates a geometry object, adds a My Parms folder to it, and adds a My Parm float parameter in that folder. The parameters are added only to the geometry object created; other geometry objects are unaffected.

>>> node = hou.node("/obj").createNode("geo")
>>> group = node.parmTemplateGroup()
>>> folder = hou.FolderParmTemplate("folder", "My Parms")
>>> folder.addParmTemplate(hou.FloatParmTemplate("myparm", "My Parm", 1))
>>> group.append(folder)
>>> node.setParmTemplateGroup(group)

See hou.ParmTemplateGroup and the setParmTemplateGroup method for more information and examples.

setParmTemplateGroup(parm_template_group, rename_conflicting_parms=False)

Change the spare parameters for this node.

parm_template_group

A hou.ParmTemplateGroup object containing the new parameter layout.

rename_conflicting_parms

If True, parameters in the group with the same parm tuple names will be automatically renamed. If False and there are parms with the same name, this method raises hou.OperationFailed.

Note that each node type has a set of parameters which must exist and must be of certain types.
If your parm template group does not contain the required parameters for the node type, they will be added at the bottom and will be made invisible. Similarly, if your parm template group attempts to modify the type, range, label, or other property of a required parameter, all changes to that parameter other than visibility settings will be ignored. This method is preferred over the other parameter-related methods in this class (addSpareParmTuple, removeSpareParmTuple, replaceSpareParmTuple, addSpareParmFolder, removeSpareParmFolder) because it lets you manipulate parameters more easily. See hou.HDADefinition.setParmTemplateGroup() to change the parameter interface of a digital asset.

addSpareParmTuple(parm_template, in_folder=(), create_missing_folders=False) → hou.ParmTuple

Add a spare parameter tuple to the end of the parameters on the node. If in_folder is not an empty sequence, this method adds the parameters to the end of the parameters in a particular folder.

parm_template

A hou.ParmTemplate subclass instance that specifies the type of parameter tuple, the default value, range, etc.

in_folder

A sequence of folder names specifying which folder will hold the parameter. If this parameter is an empty sequence (e.g. ()), Houdini will not put the parameter inside a folder. If it is, for example, ("Misc", "Controls"), Houdini puts it inside the "Controls" folder that's inside the "Misc" folder. If it is, for example, ("Misc",), Houdini puts it inside the "Misc" folder.

create_missing_folders

If True, and the folder location specified by in_folder does not exist, this method creates the missing containing folders.

Note that this method can add a single folder by passing a hou.FolderParmTemplate for parm_template. See also the removeSpareParmTuple() and addSpareParmFolder() methods. This method is deprecated in favor of setParmTemplateGroup.

removeSpareParmTuple(parm_tuple)

Removes the specified spare parameter tuple. See also addSpareParmTuple().
This method is deprecated in favor of setParmTemplateGroup. addControlParmFolder(folder_name=None, parm_name=None) Adds a control parameter folder as the front-most folder at the top-level. This is used to increase visibility of customized control parameters. If a folder of the same name already exists, no new folder will be created. If folder_name is None, it will be set as 'Controls'. If parm_name is None, it will be set as 'folder'. If there are no current folders present, the existing parameters will be grouped together and stored into a new folder named 'Parameters' and placed after the new control parameter folder. addSpareParmFolder(folder_name, in_folder=(), parm_name=None, create_missing_folders=False) Adds a folder to the spare parameters. Note that all the folders in a set correspond to one parameter. If this is the first folder to go in the set, parm_name will be used as the parameter name. Otherwise, parm_name will be ignored and the parameter name of the first folder in the set is used. If this is the first folder in the set and parm_name is None, it will default to 'sparefolder0'. If parm_name is already in use, a unique name will be automatically generated. If create_missing_folders is True, this method will create the folders in in_folder that don’t exist. So, this method can be used to add spare folders and a spare parameter at the same time. Note that you can add folders by passing a hou.FolderParmTemplate to the addSpareParmTuple method, so this method is deprecated. Note also that addSpareParmTuple is deprecated in favor of setParmTemplateGroup. See also the removeSpareParmFolder and addSpareParmTuple methods. This method is deprecated in favor of setParmTemplateGroup. removeSpareParmFolder(folder) Removes an empty folder from the spare parameters. folder is a sequence of folder names. So, to remove the Output folder, use ("Output",) instead of "Output". 
See also addSpareParmFolder(), hou.ParmTemplateGroup.remove(), and hou.ParmTemplateGroup.findFolder(). replaceSpareParmTuple(parm_tuple_name, parm_template) Replace an existing spare parameter tuple with a new one. The old parameter tuple is removed and the new one is added in its place. parm_tuple_name The name of the spare parameter tuple to replace. Raises hou.OperationFailed if no parameter tuple exists with this name, or if it is the name of a non-spare parameter. parm_template A hou.ParmTemplate describing the new parameter tuple. The new parameter tuple may or may not have the same name as the old one. By providing a parameter tuple with the same name, you can modify an existing spare parameter tuple. Note that you cannot replace non-spare parameter tuples. However, you can change the visibility of non-spare parameters using hou.ParmTuple.hide(). To change a parameter for all instances of digital asset, use hou.HDADefinition.replaceParmTuple(). This method is deprecated in favor of setParmTemplateGroup. localVariables() Return a list of local variables that can be referenced in parameter expressions on this node. saveParmClip(file_name, start=None, end=None, sample_rate=0, scoped_only=False) Saves the animation associated with the parameters of this node to the clip file specified by file_name. The extension of file_name determines the format of the saved file. You can use one of the following extensions: .clip: save animation as plain text (ASCII) clip file. .bclip: save animation as a bclip (binary clip) file. .bclip.sc: save animation as a bclip file using Blosc compression. Set sample_rate to a non-zero, non-negative value to specify the sample_rate to be used for the clip file. For example, if the current frame rate is 24 (hou.fps()), and sample_rate is set to 12, the animation will be sampled every second frame since sample_rate is half of the current frame rate. If start is not None, start saving the animation from the specified frame (inclusive). 
Otherwise, the animation will be saved from the global start frame (inclusive). Similarly, if end is not None, stop saving the animation at the specified frame (inclusive). Otherwise, the animation will be saved until the global end frame (inclusive). The global start and end frame are specified in the Global Animation Options window. If scoped_only is True, only the animation associated with scoped parameters will be saved. If there are no scoped parameters, the animation associated with auto-scoped parameters will be saved. If scoped_only is False, animation associated with any of the parameters of this node will be saved. Raises a hou.OperationFailed exception if none of the parameters of this node have animation. If scoped_only is True, this exception can be raised if none of the scoped parameters have animation, or if none of the auto-scoped parameters have animation (if the node has no scoped parameters). Raises a hou.OperationFailed exception if there is an error saving the animation to file. Raises a hou.InvalidInput exception if start >= end. If specifying only start, ensure that the specified value is less than the global end frame. Likewise, if specifying only end, ensure it is larger than the global start frame. loadParmClip(file_name, sample_rate=0, start=None) Load animation for the parameters in this node from the clip file specified by file_name. See hou.Node.saveParmClip() for the list of supported clip file formats. Any tracks in the clip file that do not match the name of the parameters of this node will be ignored. If sample_rate is set to a non-zero, non-negative value, the specified value will be used when loading the animation. For example, if the current frame rate is 24 (hou.fps()) and sample_rate is set to 12, the animation will be loaded with a keyframe at every second frame since sample_rate is half of the current frame rate. start specifies the frame the loaded animation should start from. 
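The sample_rate arithmetic described above (half the frame rate means every second frame) can be sketched in plain Python; hou.fps() is stubbed as a constant here and the frame range is made up:

```python
fps = 24.0          # stand-in for hou.fps()
sample_rate = 12.0  # half the frame rate -> sample every 2nd frame
step = fps / sample_rate

start, end = 1, 9   # hypothetical clip range, inclusive on both ends
frames = []
f = float(start)
while f <= end:
    frames.append(f)
    f += step
print(frames)       # [1.0, 3.0, 5.0, 7.0, 9.0]
```

With sample_rate equal to the frame rate, step would be 1.0 and every frame would be sampled.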
By default the animation starts at the frame specified in the clip file. Warning Any existing keyframes for the parameters of this node that are within the range of the loaded animation will be overwritten with the loaded data. This function will raise a hou.OperationFailed exception if there is an error reading animation data from the file. parmClipData(start=None, end=None, binary=True, use_blosc_compression=True, sample_rate=0, scoped_only=False) → str Returns the clip data for the parameters of this node. This method is similar to hou.Node.saveParmClip(), except that it returns the clip data (file contents) instead of saving the animation to a clip file. start, end, sample_rate, and scoped_only behave the same as in hou.Node.saveParmClip(). If binary is True, return binary clip data, otherwise return plain text (ASCII) clip data. If use_blosc_compression is True, blosc compress the binary clip data. This cannot be used for plain text (ASCII) clip data. Raises a hou.OperationFailed exception if none of the parameters of this tuple have animation. Raises a hou.InvalidInput exception if start >= end. If specifying only start, ensure that the specified value is less than the global end frame. Likewise, if specifying only end, ensure it is larger than the global start frame. Raises a hou.InvalidInput exception if binary = False and use_blosc_compression = True. setParmClipData(data, binary=True, blosc_compressed=True, sample_rate=0, start=1) Load animation for the parameters in this node from the given clip data. This method is similar to hou.Node.loadParmClip(), except that it loads animation from the given clip data instead of a clip file. sample_rate and start behave the same as in hou.Node.loadParmClip(). binary and blosc_compressed specify the type of input data. If binary is True, the given data is binary clip data, otherwise it is plain text (ASCII) clip data. If blosc_compressed is True, the given data is blosc compressed binary data. 
This cannot be used for plain text (ASCII) clip data. Raises a hou.OperationFailed exception if the given data is invalid. Raises a hou.InvalidInput exception if binary = False and blosc_compressed = True.

inputs() → tuple of hou.Node

Return a tuple of the nodes connected to this node's inputs. If an input is connected to a hou.SubnetIndirectInput, the node connected to the corresponding input on the parent subnet is returned. In other words, the presence of the indirect input is hidden. This means the resulting nodes may not all be siblings of the calling node. If a particular input is not connected (or is connected to an indirect input and the corresponding subnet parent input is not connected), a None value is placed in the tuple at that location.

outputs() → tuple of hou.Node

Return a tuple of the nodes connected to this node's outputs. This method is a shortcut for [connection.inputNode() for connection in self.outputConnections()].

inputConnections() → tuple of hou.NodeConnection

Returns a tuple of hou.NodeConnection objects for the connections coming into the top of this node. The tuple will have a length equal to the number of connections coming into the node. Returns an empty tuple if nothing is connected to this node. To get a list of the connected nodes themselves, use hou.Node.inputs(). To get a list of all possible connection sites (whether or not anything is connected to them), use hou.Node.inputConnectors().

>>> cookie = hou.node("/obj").createNode("geo").createNode("cookie")
>>> cookie.setInput(1, cookie.parent().createNode("box"))
>>> cookie.inputConnections()
(<hou.NodeConnection from box1 output 0 to cookie input 1>,)
>>> cookie.inputConnectors()
((), (<hou.NodeConnection from box1 output 0 to cookie input 1>,))

See also hou.Node.inputConnectors().

outputConnections() → tuple of hou.NodeConnection

Return a tuple of NodeConnection objects for the connections going out of the bottom of this node.
If nothing is wired into the output of this node, return an empty tuple. To get a list of the connected nodes themselves, use hou.Node.outputs(). Note that this method is a shortcut for reduce(lambda a, b: a+b, self.outputConnectors(), ()). Since most nodes have only one output connector, though, this method is usually equivalent to self.outputConnectors()[0].

>>> box = hou.node("/obj").createNode("geo").createNode("box")
>>> box.parent().createNode("xform").setFirstInput(box)
>>> box.parent().createNode("subdivide").setFirstInput(box)
>>> box.outputConnections()
(<hou.NodeConnection from box1 output 0 to xform1 input 0>, <hou.NodeConnection from box1 output 0 to subdivide1 input 0>)

See also hou.Node.outputConnectors().

inputConnectors() → tuple of tuple of hou.NodeConnection

Return a tuple of tuples of hou.NodeConnection objects. The length of the result tuple is equal to the maximum number of inputs that can be connected to this node. Each subtuple contains exactly one node connection if something is wired into the connector; otherwise it is the empty tuple. See also hou.NodeConnection and hou.Node.inputConnections().

outputConnectors() → tuple of tuple of hou.NodeConnection

Return a tuple of tuples of hou.NodeConnection objects. The length of the result tuple is equal to the number of output connectors on this node. Each subtuple contains all the connections going out of that connector, and is empty if nothing is wired to that connector.
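The reduce shortcut quoted above simply flattens the per-connector tuples into one tuple. With stand-in connection objects in place of hou.NodeConnection instances:

```python
from functools import reduce  # reduce is a builtin in Python 2

# Stand-ins for hou.NodeConnection objects on four output connectors,
# two of which have nothing wired to them.
output_connectors = (("conn_a", "conn_b"), ("conn_c",), (), ())
flattened = reduce(lambda a, b: a + b, output_connectors, ())
print(flattened)   # ('conn_a', 'conn_b', 'conn_c')
```

The empty subtuples contribute nothing, so the result is the flat list of all outgoing connections.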
>>> split = hou.node("/obj").createNode("dopnet").createNode("split")
>>> split.parent().createNode("rbdsolver").setFirstInput(split)
>>> split.parent().createNode("gravity").setFirstInput(split, 1)
>>> split.parent().createNode("merge").setFirstInput(split, 1)
>>> split.outputConnectors()
((<hou.NodeConnection from split1 output 0 to rbdsolver1 input 0>,), (<hou.NodeConnection from split1 output 1 to gravity2 input 0>, <hou.NodeConnection from split1 output 1 to merge1 input 0>), (), ())

See also hou.NodeConnection and hou.Node.outputConnections().

indirectInputs() → tuple of hou.SubnetIndirectInput

Return the hou.SubnetIndirectInput objects of a subnet. Raises hou.InvalidNodeType if this node is not a subnetwork.

subnetOutputs() → tuple of hou.Node

Return the hou.Node objects that produce the subnet's outputs. Raises hou.InvalidNodeType if this node is not a subnetwork.

inputAncestors(include_ref_inputs=True, follow_subnets=False) → tuple of hou.Node

Return a tuple of all input ancestors of this node. If include_ref_inputs is False, then reference inputs are not traversed. If follow_subnets is True, then instead of treating subnetwork nodes as a single node, we also traverse their children, starting with the display node. See also the inputs() method.

inputIndex(input_name)

Obtains the index of the node input that has the given name. For the node categories that use input names, it returns the index of the input with the given name. For VOP nodes, the name may also be a node parameter name that has a corresponding input.

outputIndex(output_name)

Obtains the index of the node output that has the given name. For the node categories that use output names, it returns the index of the output with the given name.

setInput(input_index, item_to_become_input, output_index=0)

If item_to_become_input is not None, connect the output connector of another node to an input connector of this node. Otherwise, disconnect anything connected to the input connector.
input_index The index of this node's input connector. item_to_become_input If None this method disconnects everything from the input connector. If a hou.Node or a hou.SubnetIndirectInput, this method connects its output to this node's input connector. output_index The index of the other node's output connector. Raises hou.InvalidInput if output_index is invalid. Raises hou.OperationFailed if item_to_become_input is not in the same network as this node. Raises hou.PermissionError if the node is inside a locked asset. setNamedInput(input_name, item_to_become_input, output_name_or_index) Connects an output of item_to_become_input (specified by either an output name or an output index) to the input of this node specified by input_name. setFirstInput(item_to_become_input, output_index=0) A shortcut for self.setInput(0, item_to_become_input, output_index). See hou.Node.setInput() for more information. setNextInput(item_to_become_input, output_index=0, unordered_only=False) Connect the output connector from another node into the first unconnected input connector or a multi-input connector of this node. If a node has some ordered inputs followed by a multi-input connector, the unordered_only parameter can be used to force the input to connect to the unordered multi-input connection instead of any of the ordered inputs, which may not all be connected. This method is roughly equivalent to: for input_index, connectors in enumerate(self.inputConnectors()): if len(connectors) == 0: self.setInput(input_index, item_to_become_input, output_index) return raise hou.InvalidInput("All inputs are connected") Raises hou.InvalidInput if all inputs are connected. See hou.Node.setInput() for more information. insertInput(input_index, item_to_become_input, output_index=0) Insert an input wire. In other words, for each input connector after input_index, shift the contents of that input connector to the next one, and then call hou.Node.setInput(). See hou.Node.setInput() for the meanings of the parameters.
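The "first unconnected input" logic that setNextInput() implements can be fleshed out as a runnable sketch. MockNode and its attributes are illustrative stand-ins, not part of hou — the point is the search-and-connect loop from the pseudocode above:

```python
class MockNode:
    """Minimal stand-in for a node with a fixed number of input connectors."""
    def __init__(self, num_inputs):
        # Each connector holds a tuple of connections, mirroring the shape
        # of inputConnectors(): empty tuple means nothing is wired in.
        self._inputs = [()] * num_inputs

    def inputConnectors(self):
        return tuple(self._inputs)

    def setInput(self, input_index, item, output_index=0):
        self._inputs[input_index] = ((item, output_index),)

    def setNextInput(self, item, output_index=0):
        # Connect to the first input connector with nothing wired in.
        for input_index, connectors in enumerate(self.inputConnectors()):
            if len(connectors) == 0:
                self.setInput(input_index, item, output_index)
                return
        raise ValueError("All inputs are connected")

node = MockNode(num_inputs=2)
node.setNextInput("box1")     # nothing is wired yet, so this lands in input 0
node.setNextInput("sphere1")  # input 0 is now taken, so this lands in input 1
print(node.inputConnectors())
```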
numOrderedInputs() → int Some nodes can have a small number of dedicated inputs with specific meanings, followed by an arbitrary number of additional inputs, where gaps are not permitted between the inputs (these are referred to as unordered inputs). This is common in DOP nodes such as the Multiple Solver DOP. This function returns the number of dedicated (or ordered) inputs that occur before the unordered inputs begin. This function will only return non-zero values if the hou.NodeType.hasUnorderedInputs() function for this node's hou.Node.type() object returns True. createInputNode(input_index, node_type_name, node_name=None, run_init_scripts=True, load_contents=True, bool exact_type_name=False) Create a new node and connect it to one of this node's inputs. Return the new node. input_index The index of this node's input connector. node_type_name The name of the type of node to create. See the createNode method for more information. node_name run_init_scripts load_contents exact_type_name See also the createOutputNode method. createOutputNode(node_type_name, node_name=None, run_init_scripts=True, load_contents=True, bool exact_type_name=False) Create a new node and connect its first input to this node's (first) output. Return the new node. See the createNode method for more information on the parameters. See also the createInputNode method. inputNames() → tuple of str Returns a tuple of all input names for this node. Names for input connectors that are hidden are also included. inputLabels() → tuple of str Returns a tuple of all input labels for this node. Labels for input connectors that are hidden are also included. outputNames() → tuple of str Returns a tuple of all output names for this node. outputLabels() → tuple of str Returns a tuple of all output labels for this node. editableInputString(input_index, key) → str Return the string value stored under the given key on the editable input at input_index. setEditableInputString(input_index, key, value) Set the string value stored under the given key on the editable input at input_index.
references(include_children = True) → tuple of hou.Node Return a tuple of nodes that are referenced by this node, either through parameter expressions, referring to the node by name, or using expressions which rely on the data generated by another node. These reflect all the other ways (besides connecting to an input) in which one node may affect another. Note that the result can differ depending on the last cook of the nodes, so it is recommended that you call cook() on the node first. dependents(include_children = True) → tuple of hou.Node Return a tuple of nodes that reference this node, either through parameter expressions, referring to the node by name, or using expressions which rely on the data generated by this node. These reflect all the other ways (besides connecting to an input) in which one node may affect another. Note that the result can differ depending on the last cook of the nodes. isSubNetwork() → bool Return True if the node is a sub-network and False otherwise. collapseIntoSubnet(child_nodes, subnet_name=None, subnet_type=None) → hou.Node Given a sequence of child nodes of this node, collapse them into a subnetwork. In other words, create a subnet inside this node's network and move the specified children of this network inside that subnet. child_nodes The child nodes of this node that will go in the new subnet. subnet_name The name for the new subnet node, or None if you want Houdini to automatically choose a name. subnet_type The type for the new subnet node, or None if you want Houdini to automatically choose a primary subnetwork type, which is recommended. Raises hou.OperationFailed if a node inside child_nodes is not a child of this network, or if child_nodes is an empty sequence. This example function takes a single node and replaces it with a subnet, moving the node into the subnet.
def collapseSingleNodeIntoSubnet(node, subnet_name=None): node.parent().collapseIntoSubnet((node,), subnet_name=subnet_name) extractAndDelete() → tuple of hou.NetworkMovableItem Move the children of this subnet node to become siblings of this node, and then delete this node. The method is the opposite of collapseIntoSubnet(). Returns a tuple containing all extracted items. Raises hou.InvalidNodeType if this node is not a subnetwork. canCreateDigitalAsset() → bool Return True if hou.Node.createDigitalAsset() can succeed. createDigitalAsset(name=None, hda_file_name=None, description=None, min_num_inputs=None, max_num_inputs=None, compress_contents=False, comment=None, version=None, save_as_embedded=False, ignore_external_references=False, change_node_type=True, create_backup=True) → Node Create a digital asset from this node. You would typically call this method on subnet nodes. min_num_inputs The minimum number of inputs that need to be wired into instances of the digital asset. See hou.HDADefinition.minNumInputs() for more information. max_num_inputs The number of input connectors available on instances of the digital asset for input connections. See hou.HDADefinition.maxNumInputs() for more information. compress_contents Whether or not the contents of this digital asset are compressed inside the hda file. See hou.HDAOptions.compressContents() for more information. comment A user-defined comment string. See hou.HDADefinition.comment() for more information. version A user-defined version string. See hou.HDADefinition.version() for more information. save_as_embedded Whether or not the digital asset's definition will be saved with the hip file instead of an hda file. When this parameter is True, Houdini ignores the hda_file_name parameter. Setting this parameter to True is equivalent to setting this parameter to False and setting the hda_file_name parameter to "Embedded".
ignore_external_references If True, Houdini will not generate warnings if the contents of this digital asset reference nodes outside the asset. change_node_type Normally, Houdini will change the node creating the digital asset into the new digital asset type. Setting this flag to False will cause the node to remain unchanged. create_backup Create a backup before modifying an existing hda file. createCompiledDigitalAsset(name=None, hda_file_name=None, description=None) Create a compiled digital asset from this node. You would typically call this method on vop network nodes, such as Material Shader Builder SHOP, Surface Shader Builder SHOP, or VEX Surface SHOP Type VOPNET. The digital asset does not have a contents section, which means it does not have a vop network inside, but instead relies on the saved VEX code sections to provide the shader code. After the creation of a compiled HDA, if its VEX code section is ever changed manually, the corresponding vex object code section can be recompiled using hou.HDADefinition.compileCodeSection(). allowEditingOfContents(propagate=False) Unlocks a digital asset so its contents can be edited. To use this function, you must have permission to modify the HDA. matchCurrentDefinition() If this node is an unlocked digital asset, change its contents to match what is stored in the definition and lock it. The parameter values are unchanged. If this node is locked or is not a digital asset, this method has no effect. See also hou.Node.matchesCurrentDefinition() and hou.Node.isLocked. matchesCurrentDefinition() → bool Return whether the contents of the node are locked to its type definition. isLockedHDA() → bool If this node is an instance of a digital asset, return whether or not it is locked. Otherwise, return False.
To differentiate between unlocked digital assets and nodes that are not instances of digital assets, check if the node's type has a definition: def isUnlockedAsset(node): return not node.isLockedHDA() and node.type().definition() is not None See hou.HDADefinition.updateFromNode() for an example of how to save and lock all unlocked digital asset instances. isInsideLockedHDA() → bool Return whether this node is inside a locked digital asset. If this node is not inside a locked HDA, the node may deviate from the HDA definition. isEditableInsideLockedHDA() → bool Return True if the node is an editable node contained inside a locked HDA node and False otherwise. In particular this function will return False for a node that is not inside a locked HDA. isEditable() → bool Return True if the node is editable. This is similar to the hou.Node.isEditableInsideLockedHDA() method except that it will return True for nodes that are not inside a locked HDA. This function is the simplest way to determine if most node modifications (changing inputs, changing parameters, changing flags) will be allowed on the node. hdaModule() → hou.HDAModule This method is a shortcut for self.type().hdaModule() to reduce the length of expressions in Python parameters and button callbacks. See hou.NodeType.hdaModule() for more information. See also the hm method and hou.phm(). hm() → hou.HDAModule An alias for hou.Node.hdaModule(). syncNodeVersionIfNeeded(from_version) Synchronize the node from the specified version to the current version of its HDA definition. See also hou.HDADefinition.version(). comment() → str Return the node's comment string. setComment(comment) Sets the comment associated with this node. See also appendComment(). appendComment(comment) Appends the given text to the comment associated with this node. isDisplayDescriptiveNameFlagSet() → bool Return a boolean to indicate whether the node should display its descriptive name in the network editor.
setDisplayDescriptiveNameFlag(on) Set or unset whether this node should display its descriptive name in the network editor. outputForViewFlag() → int Return an integer to indicate which output of the node should be used for display purposes. setOutputForViewFlag(output) Sets which output should be used for display purposes on this node. creationTime() → datetime.datetime Return the date and time when the node was created. modificationTime() → datetime.datetime Return the date and time when the node was last modified. creator() → Node creatorState() → str Return the name of the viewport tool that was used to create this node. This name is not set by default and is usually the empty string. setCreatorState(state) This sets the name of the tool that created this node. If you call this with a name that differs from the node type name, you should also call setBuiltExplicitly(False). isBuiltExplicitly() → bool Return whether this node was built explicitly (defaults to True). Most nodes are built explicitly, but some are implicitly created by Houdini. For example, if you select geometry from multiple SOPs and then perform an operation, Houdini will put down an implicit merge SOP before performing that operation. When reselecting geometry in SOPs, Houdini will automatically delete any SOPs that were created implicitly. setBuiltExplicitly(built_explicitly) Set whether this node was built explicitly (default value is True). If set to False, this node will not show up in various menus and in the Network View pane's list mode. This flag is typically used for intermediate utility nodes whose parameters one is unlikely to want to change. isTimeDependent() → bool Return whether the node is time dependent. A time dependent node is re-evaluated every time the frame changes.
moveToGoodPosition(relative_to_inputs=True, move_inputs=True, move_outputs=True, move_unconnected=True) → hou.Vector2 Moves a node to a well-spaced position near its inputs or outputs and returns the new position of the node. layoutChildren(items=(), horizontal_spacing=-1.0, vertical_spacing=-1.0) Automatically position all or some children of this node in the network editor. items A sequence of child hou.NetworkMovableItem objects to position. This may include nodes, dots, and/or subnet inputs. If this sequence is empty, this method will reposition all child items of this node. horizontal_spacing A fraction of the width and height of a tile that affects the space between nodes with common inputs. If this parameter is -1, Houdini uses the default spacing. vertical_spacing A fraction of the width and height of a tile that affects the space between a node and its output nodes. If this parameter is -1, Houdini uses the default spacing. isHidden() Return whether the node is hidden in the network editor. Note that Houdini also uses the term "exposed" to refer to nodes that are not hidden. If a visible node is connected to a hidden node, the network editor will display dashed lines for the wire going from the visible node to the hidden node. See also hou.Node.hide(). hide(on) Hide or show a node in the network editor. See hou.Node.isHidden() for more information about hidden nodes. cook(force=False, frame_range=()) Asks or forces the node to re-cook. frame_range The frames at which to cook the object. This should be a tuple of 2 or 3 ints giving the start frame, end frame, and optionally a frame increment, in that order. If you supply a two-tuple (start, end), the increment is 1. needsToCook(time=hou.time()) → bool Asks if the node needs to re-cook. cookCount() → int Returns the number of times this node has cooked in the current session. updateParmStates() Update the UI states, such as hidden and disabled, for each parameter in the node. 
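The frame_range convention used by cook() — a two-tuple (start, end) with an implied increment of 1, or a three-tuple (start, end, increment) — can be sketched as a small helper function. The helper name is hypothetical, for illustration only; it is not part of hou:

```python
def expand_frame_range(frame_range):
    """Expand a cook()-style frame_range tuple into the frames it covers."""
    if not frame_range:
        return ()
    if len(frame_range) == 2:
        start, end = frame_range
        inc = 1  # a two-tuple (start, end) implies an increment of 1
    else:
        start, end, inc = frame_range
    # The range is inclusive of the end frame.
    return tuple(range(start, end + 1, inc))

print(expand_frame_range((1, 5)))      # two-tuple: every frame from 1 to 5
print(expand_frame_range((1, 10, 3)))  # three-tuple: every third frame
```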
UI states can be expressed as conditionals (e.g. Disable When) which require evaluation. Typically, in graphical Houdini, the Parameter Pane performs the evaluation when the node is selected in order to determine how the node parameters should look in the pane. However, in non-graphical Houdini, or if the Parameter Pane has not yet loaded the node, the evaluation does not occur and the UI states remain at their defaults, causing methods such as hou.Parm.isDisabled() and hou.Parm.isHidden() to return incorrect values. In these cases, it is recommended that you call hou.Node.updateParmStates(). errors() → tuple of str Return the text of any errors from the last cook of this node, or an empty tuple if there were no errors. warnings() → tuple of str Return the text of any warnings from the last cook of this node, or an empty tuple if there were no warnings. messages() → tuple of str Return the text of any messages from the last cook of this node, or an empty tuple if there were no messages. infoTree(verbose=False, debug=False, output_index=0) → hou.NodeInfoTree Returns a tree structure containing information about the node and its most recently cooked data. The contents of the tree vary widely depending on the node type, and the nature of its cooked data. This tree of data is used to generate the node information window contents. verbose Setting verbose to True will cause some additional information to be generated. In particular, data that is expensive to calculate, or which will generate a large amount of information, tends to be generated only if this option is turned on. debug Setting debug to True will, in a few cases, cause additional information to be displayed which generally will be most useful when debugging the internal operation of Houdini. For example, geometry attributes will display their "data ids", which can be helpful when tracking down errors in SOPs written with the HDK. output_index Specifies which of the node's outputs to return information for.
canGenerateCookCode(check_parent=False) → bool Return True if the node can generate compiled cook code and False otherwise. If check_parent is True, the parents in the ancestor hierarchy are tested if any of them can generate code. cookCodeGeneratorNode(check_parent=False) → hou.Node Return the node itself or a network node that contains this node and can generate compiled cook code. For example, the generator node for a VOP node could be the SHOP node or SOP node that contains it. Return None if this node cannot generate code and is not contained in a code-generating node either. cookCodeLanguage() → str Return the language of the generated cook code (e.g. VEX, RSL). Raises hou.OperationFailed if this node cannot generate compiled code. supportsMultiCookCodeContexts() → bool Return True if this node can generate compiled cook code for multiple contexts (e.g. surface context, displacement context, etc.) and False otherwise. Raises hou.OperationFailed if this node cannot generate compiled code. saveCompiledCookCodeToFile(file_name, context_name=None) Saves compiled VEX code to a disk file (for nodes that support this). See hou.Node.saveCookCodeToFile() for a description of the arguments. saveCookCodeToFile(file_name, skip_header=False, context_name=None) Saves VEX/RSL source code to a disk file (on nodes that support this). file_name The file path in which to save the generated code. skip_header If True, the method does not write a header comment at the beginning of the file containing the file name and node path from which the code was generated and a time stamp. context_name A string containing the name of the shader context for the code. This option applies to nodes such as the Material Shader Builder which can generate code for multiple context types.
For example, a Material network might contain both surface and displacement shaders, so you must specify which type of shader code to generate: node("/shop/vopmaterial1").saveCookCodeToFile("myfile.vfl", context_name="surface") On single-context nodes this argument is ignored. For VEX materials, possible values are surface, displacement, light, shadow, fog, image3d, photon, or cvex. For RSL materials, possible values are surface, displacement, light, volume, or imager. networkBoxes() → tuple of hou.NetworkBox Return a list of the network boxes inside this node. iterNetworkBoxes() → generator of hou.NetworkBox Return a generator that iterates through all the network boxes inside this node. findNetworkBox(name) → hou.NetworkBox Return a network box with the given name inside this node, or None if no network box with the given name exists. findNetworkBoxes(pattern) → tuple of hou.NetworkBox Return a list of network boxes inside this node whose names match a pattern. createNetworkBox(name=None) → hou.NetworkBox Creates a network box inside this network. Raises hou.OperationFailed if this node is not a network. If you don’t specify a name, Houdini gives the box a default name. Network box names are not displayed in the network editor pane. Instead, a "comment" can be specified with the hou.NetworkBox.setComment() method, and this comment will appear in the title bar of the network box. copyNetworkBox(network_box_to_copy, new_name=None, channel_reference_original=False) → hou.NetworkBox Copies a network box and returns the copy. If new_name is given, the network box will be copied to a new network box named new_name (a different name will be generated if there is already a network box with that name). If channel_reference_original is True, all operators created by the copy will have their animatable parameters set to reference the original operators. 
Raises hou.OperationFailed if this node is not a network or if the node child type does not match the network box’s node type. stickyNotes() → tuple of hou.StickyNote Return a list of the sticky notes inside this node. iterStickyNotes() → generator of hou.StickyNote Return a generator that iterates through all the sticky notes inside this node. findStickyNote(name) → hou.StickyNote Return a sticky note with the given name inside this node, or None if no sticky note with the given name exists. findStickyNotes(pattern) → tuple of hou.StickyNote Return a list of sticky notes inside this node whose names match a pattern. createStickyNote(name=None) → hou.StickyNote Creates a sticky note inside this network. Raises hou.OperationFailed if this node is not a network. If you don’t specify a name, Houdini gives the note a default name. copyStickyNote(network_box_to_copy, new_name=None) → hou.StickyNote Copies a sticky note and returns the copy. If new_name is given, the sticky note will be copied to a new sticky note named new_name (a different name will be generated if there is already a sticky note with that name). Raises hou.OperationFailed if this node is not a network or if the node child type does not match the sticky note’s node type. createNetworkDot() → hou.NetworkDot Creates a network dot inside this network. Raises hou.OperationFailed if this node is not a network. networkDots() → tuple of hou.NetworkDot Returns a tuple of all dots in this network. addNodeGroup(name=None) → hou.NodeGroup Add a node group to the node and return the new group. If a group of the given name already exists then this function simply returns the existing group without adding a new one. If the name of the group is None or an empty string, then a unique default name is automatically chosen. This function can only be called on nodes that are networks. If it is called on a node that is not a network, then it raises hou.OperationFailed. To remove a node group, use hou.NodeGroup.destroy(). 
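The get-or-create behaviour described for addNodeGroup() above — return the existing group rather than failing or duplicating when the name is already taken, and choose a unique default name when none is given — is a common pattern. A minimal sketch with a plain dict standing in for the network's group table (the class and names here are illustrative, not the hou implementation):

```python
import itertools

class GroupTable:
    """Illustrative stand-in for a network's node-group table."""
    def __init__(self):
        self._groups = {}
        self._counter = itertools.count(1)

    def addNodeGroup(self, name=None):
        if not name:
            # Choose a unique default name when none (or "") is given.
            name = "group%d" % next(self._counter)
            while name in self._groups:
                name = "group%d" % next(self._counter)
        # setdefault returns the existing group if the name is taken,
        # instead of creating a duplicate.
        return self._groups.setdefault(name, {"name": name, "nodes": set()})

table = GroupTable()
a = table.addNodeGroup("lights")
b = table.addNodeGroup("lights")   # same name: returns the existing group
print(a is b)
print(table.addNodeGroup()["name"])  # no name given: auto-named
```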
nodeGroup(name) → hou.NodeGroup Return a node group contained by the node with the given name, or None if the group does not exist. nodeGroups() → tuple of hou.NodeGroup Return the list of node groups in this node. runInitScripts() Runs the initialization script associated with this node's type. deleteScript() → str Return the script that will run when this node is deleted. setDeleteScript(script_text, language=hou.scriptLanguage.Python) Sets the script that will run when this node is deleted. motionEffectsNetworkPath() → str Return a node path representing the location for storing clips. This location may or may not exist. To find or create such a network, use hou.Node.findOrCreateMotionEffectsNetwork(). findOrCreateMotionEffectsNetwork(create=True) → hou.ChopNetNode Return a CHOP network node suitable for storing Motion Effects. By default, if the node doesn't exist, it will be created. See also hou.Parm.storeAsClip and hou.Node.motionEffectsNetworkPath(). stampValue(parm_name, default_value) Return a copy-stamping floating point or string value. This node must be a downstream stamping operator, such as a Copy SOP, Cache SOP, LSystem SOP, or Copy CHOP. parm_name The name of the stamping variable. default_value The value that this function returns if Houdini is not currently performing stamping, or if parm_name is not a valid variable name. This value may be a float or a string. You might put the following expression in a Python parameter: node("../copy1").stampValue("sides", 5) copyItemsToClipboard(items) Given a sequence of child items (nodes, network boxes, sticky notes, etc), save them to the clipboard so they can be pasted into this or another network. Raises hou.OperationFailed if any of the nodes or network boxes are not children of this node. Raises hou.PermissionError if you do not have permission to read the contents of this node.
saveItemsToFile(items, file_name, save_hda_fallbacks = False) Given a sequence of child items (nodes, network boxes, sticky notes, etc), save a file containing those items. You can load this file using hou.Node.loadItemsFromFile(). file_name The name of the file to write the contents to. You can use any extension for this file name. save_hda_fallbacks Set to True to save simplified definitions for HDAs into the file along with the child nodes. Doing this allows the generated file to be safely loaded into any Houdini session, even if the assets used in the file are not already loaded into the Houdini session. Depending on the use of the generated file, this information is often not required and makes the files unnecessarily large. Raises hou.OperationFailed if any of the nodes or network boxes are not children of this node, or if the file could not be written to. Raises hou.PermissionError if you do not have permission to read the contents of this node. saveChildrenToFile(nodes, network_boxes, file_name) Combines separate lists of nodes and network boxes into a single sequence, and calls hou.Node.saveItemsToFile(). This method is provided for backward compatibility. New code should call saveItemsToFile directly. network_boxes A sequence of hou.NetworkBoxes that are contained in this node. Note that the contents of the network boxes are not automatically saved, so it is up to you to put them in the list of nodes. loadItemsFromFile(file_name, ignore_load_warnings=False) Load the contents of a file (saved with hou.Node.saveItemsToFile()) into the contents of this node. Raises hou.OperationFailed if the file does not exist or it is not the correct type of file. Raises hou.PermissionError if this node is a locked instance of a digital asset. Raises hou.LoadWarning if the load succeeds but with warnings and ignore_load_warnings is False. loadChildrenFromFile(file_name, ignore_load_warnings=False) Calls hou.Node.loadItemsFromFile(). Provided for backward compatibility.
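The backward-compatibility pattern behind saveChildrenToFile() — merge the two separate sequences into one and delegate to the newer single-sequence method — can be sketched in plain Python. These stand-in functions are illustrative only; they record what would be saved instead of writing Houdini files:

```python
def save_items_to_file(items, file_name):
    # Stand-in for hou.Node.saveItemsToFile(): just record what would
    # be saved, rather than writing an actual file.
    return {"file": file_name, "items": tuple(items)}

def save_children_to_file(nodes, network_boxes, file_name):
    # Backward-compatible wrapper: combine the node and network-box
    # sequences into a single sequence and delegate to the newer API.
    return save_items_to_file(tuple(nodes) + tuple(network_boxes), file_name)

result = save_children_to_file(("box1", "xform1"), ("netbox1",), "clip.cpio")
print(result["items"])
```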
New code should call loadItemsFromFile directly. pasteItemsFromClipboard(position = None) Load the contents of a file saved with hou.Node.copyItemsToClipboard() into the contents of this node. If the position parameter is given as a tuple of two float values (or equivalent, like a hou.Vector2), the pasted items are moved such that they are centered around the provided position. Raises hou.OperationFailed if this node is not a network, or if there are errors loading the items from the clipboard. Raises hou.PermissionError if this node is a locked instance of a digital asset. asCode(brief=False, recurse=False, save_channels_only=False, save_creation_commands=True, save_keys_in_frames=False, save_outgoing_wires=False, save_parm_values_only=False, save_spare_parms=True, function_name=None) → str Return the Python code necessary to recreate a node. brief Do not set values if they are the parameter's default. Applies to the contents of the node if either recurse or save_box_contents is True. recurse Recursively apply to the entire operator hierarchy. save_box_contents Script the contents of the node. save_channels_only Only output channels. Applies to the contents of the node if either recurse or save_box_contents is True. save_creation_commands Generate a creation script for the node. If set to False, the generated script assumes that the node already exists. When set to True, the script will begin by creating the node. save_keys_in_frames Output channel and key times in samples (frames) instead of seconds. Applies to the contents of the node if either recurse or save_box_contents is True. save_parm_values_only Evaluate parameters, saving their values instead of the expressions. Applies to the contents of the node if either recurse or save_box_contents is True. save_spare_parms Save spare parameters as well. When save_creation_commands is True, commands for creating spare parameters will also be output.
Applies to the contents of the node if either recurse or save_box_contents is True. function_name If a function_name is specified, the output will be wrapped in a Python function. __eq__(node) → bool Implements == between Node objects. For example, hou.root() == hou.node("/") will return True. There can be multiple Python Node objects for the same Houdini node. Two identical calls to hou.node() will return different Python Node objects, with each representing the same Houdini node. Comparing these nodes using == (which calls __eq__) will return True, while comparing them using is (the object identity test) will return False. __ne__(node) → bool Implements != between Node objects. See __eq__(). addEventCallback(event_types, callback) Registers a Python callback that Houdini will call whenever a particular action, or event, occurs on this particular node instance. Callbacks only persist for the current session. For example, they are not saved to the .hip file. If you want persistent callbacks in every session, you can add them in code in 456.py (runs when the user opens a .hip file). See where to add Python scripting for more information. event_types A sequence of hou.nodeEventType enumerated values specifying the events that will trigger the callback. callback Houdini will pass additional keyword arguments depending on the event type. For example, in a callback for the ParmTupleChanged event, Houdini will pass a parm_tuple keyword argument containing a hou.ParmTuple reference to the parameter that changed. See hou.nodeEventType for the extra arguments (if any) passed for each event type. For example, the following code prints a message when a certain node's name changes: def name_changed(node, event_type, **kwargs): print("The geometry object is now named", node.name()) hou.node("/obj/geo1").addEventCallback((hou.nodeEventType.NameChanged,), name_changed) See also hou.Node.removeEventCallback() and hou.Node.removeAllEventCallbacks().
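The callback-registration mechanics described above — a callback registered for a set of event types and invoked with event-specific keyword arguments — can be illustrated with a small pure-Python event dispatcher. The class and event names here are illustrative stand-ins, not the hou internals:

```python
class EventDispatcher:
    """Illustrative sketch of per-node event callbacks."""
    def __init__(self):
        self._callbacks = []  # list of (event_types, callback) pairs

    def addEventCallback(self, event_types, callback):
        # A callback is registered against a sequence of event types.
        self._callbacks.append((tuple(event_types), callback))

    def fire(self, event_type, **kwargs):
        # Invoke every callback registered for this event type, passing
        # event-specific data as keyword arguments.
        for types, callback in self._callbacks:
            if event_type in types:
                callback(event_type=event_type, **kwargs)

log = []
def name_changed(event_type, **kwargs):
    log.append((event_type, kwargs.get("new_name")))

dispatcher = EventDispatcher()
dispatcher.addEventCallback(("NameChanged",), name_changed)
dispatcher.fire("NameChanged", new_name="geo2")
dispatcher.fire("ParmTupleChanged", parm_tuple="tx")  # no matching callback
print(log)
```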
removeEventCallback(event_types, callback) Given a callback that was previously added on this node and a sequence of hou.nodeEventType enumerated values, remove those event types from the set of event types for the callback. If the remaining set of event types is empty, the callback will be removed entirely from this node. Raises hou.OperationFailed if the callback had not been previously added. See hou.Node.addEventCallback() for more information. removeAllEventCallbacks() Remove all event callbacks for all event types from this node. See hou.Node.addEventCallback() for more information. eventCallbacks() → tuple of ( tuple of hou.nodeEventType, callback) Return a tuple of all the Python callbacks that have been registered with this node with calls to hou.Node.addEventCallback(). setUserData(name, value) Add/set a named string on this node instance. name A unique name (key) for the user-defined data. By using different names, you can attach multiple pieces of user-defined data to a node. value The string to store. This name/value pair is stored with the hip file and is included in the output from opscript and hou.Node.asCode(). The following example illustrates how to set, access, and delete user-defined data: >>> n = hou.node("/obj").createNode("geo") >>> n.setUserData("my data", "my data value") >>> n.userData("my data") 'my data value' >>> n.userDataDict() {'my data': 'my data value'} >>> n.destroyUserData("my data") >>> n.userDataDict() {} >>> print(n.userData("my data")) None See per-node user-defined data for more information and examples. Tip If you prefix a user data key with nodeinfo_, the key (without the prefix) and the value will be shown as a custom field in the node info popup window. userDataDict() → dict of str to str Return a dictionary containing all the user-defined name/string pairs for this node. See hou.Node.setUserData() for more information.
userData(name) → str or None Return the user-defined data with this name, or None if no data with this name exists. See hou.Node.setUserData() for more information. This method can be implemented as follows: def userData(self, name): return self.userDataDict().get(name) destroyUserData(name) Remove the user-defined data with this name. See hou.Node.setUserData() for more information. Raises hou.OperationFailed if no user data with this name exists. setCachedUserData(name, value) Add/set a named value on this node instance. Unlike setUserData, values set using this method are not saved with the hip file. name: A unique name (key) for the user-defined data. By using different names, you can attach multiple pieces of user-defined data to a node. value: The value to store. Unlike setUserData, this value may be any Python object. This name/value pair is not stored with the hip file. It is useful for nodes implemented in Python that want to save temporary values between cooks, to avoid recomputing them on subsequent cooks. The following example illustrates how to set, access, and delete cached user-defined data: >>> n = hou.node("/obj").createNode("geo") >>> n.setCachedUserData("my data", [1, 2, {"a": "b", "c": "d"}]) >>> n.cachedUserData("my data") [1, 2, {'a': 'b', 'c': 'd'}] >>> n.cachedUserDataDict() {'my data': [1, 2, {'a': 'b', 'c': 'd'}]} >>> n.destroyCachedUserData("my data") >>> n.cachedUserDataDict() {} >>> print n.cachedUserData("my data") None See per-node user-defined data for more information and examples. cachedUserDataDict() → dict of str to object Return a dictionary containing all the user-defined name/value pairs for this node. See hou.Node.setCachedUserData() for more information. cachedUserData(name) → object or None Return the user-defined cached data with this name, or None if no data with this name exists. See hou.Node.setCachedUserData() for more information.
This method can be implemented as follows: def cachedUserData(self, name): return self.cachedUserDataDict().get(name) Note that None is a valid value for a key, so the most reliable way to check if a key is valid is to check if it is in the result of cachedUserDataDict: >>> n = hou.node("/obj").createNode("geo") >>> n.cachedUserDataDict() {} >>> print n.cachedUserData("foo") None >>> "foo" in n.cachedUserDataDict() False >>> n.setCachedUserData("foo", None) >>> n.cachedUserDataDict() {'foo': None} >>> print n.cachedUserData("foo") None >>> "foo" in n.cachedUserDataDict() True destroyCachedUserData(name) Remove the user-defined cached data with this name. See hou.Node.setCachedUserData() for more information. Raises hou.OperationFailed if no user data with this name exists. dataBlockKeys(blocktype) → tuple of str Return the names of all data blocks stored on this node that are of the data type specified by the blocktype parameter. Data blocks are similar to user data in that they can contain any extra data that may be useful to attach to a specific node. They differ from user data in that data blocks are designed to more efficiently handle large blocks of data. Data blocks can also contain binary data, and have a data type associated with each block. dataBlockType(key) → str Return the data type of the block specified by the key parameter. Raises hou.ValueError if the provided key is not associated with any data block on this node. dataBlock(key) → str Returns the data block stored under the given key. This method will only work if the specified data block has a type that can be represented by a Python object. Otherwise None is returned. Raises hou.ValueError if the provided key is not associated with any data block on this node. setDataBlock(key, block, blocktype) Stores the provided data block on the node under the provided key name, marking it with the provided data type. Passing a block value of None will remove any data block with the specified key.
simulation() → hou.DopSimulation Return the simulation defined by this DOP network node. This raises an exception if this is not a DOP network. findNodesThatProcessedObject(dop_object) → tuple of hou.DopNode Given a hou.DopObject, return a tuple of DOP nodes that processed that object. This raises an exception if this is not a DOP network. isFlagReadable(flag) → bool Return True if the specified flag is readable and False otherwise. flag must be a hou.nodeFlag value. isFlagWritable(flag) → bool Return True if the specified flag is writable and False otherwise. flag must be a hou.nodeFlag value. isGenericFlagSet(flag) → bool Returns the value of the specified flag. flag must be a hou.nodeFlag value. setGenericFlag(flag, value) Sets the value of the specified flag based on the bool value argument. flag must be a hou.nodeFlag value. selectNextVisibleWorkItem() If a work item is selected, selects the next visible work item. selectPreviousVisibleWorkItem() If a work item is selected, selects the previous visible work item.
http://www.sidefx.com/docs/houdini/hom/hou/VopNetNode.html
Hi Experts, I am getting an error while trying to run the pkgorder script (part of the anaconda-runtime rpm) that tries to populate the list of RPMs required for installation. The pkgorder script imports a lot of packages like os, shutils, rpm, sys etc and all of these are successful, including the import of the yum package. But when pkgorder tries to run "from yuminstall import YumSorter", it gives the following error. Stack Trace Snippet :: /dev/mapper/control: open failed: Permission denied Failure to communicate with kernel device-mapper driver. dm.c: 1565 Traceback (most recent call last): File "/usr/lib/anaconda-runtime/pkgorder", line 32, in ? from yuminstall import YumSorter File "/usr/lib/anaconda/yuminstall.py", line 31, in ? from packages import recreateInitrd File "/usr/lib/anaconda/packages.py", line 19, in ? import iutil File "/usr/lib/anaconda/iutil.py", line 16, in ? import os, isys, string, stat File "/usr/lib/anaconda/isys.py", line 32, in ? import block File "/usr/lib64/python2.4/site-packages/block/__init__.py", line 6, in ? File "/usr/lib64/python2.4/site-packages/block/device.py", line 190, in ? File "/usr/lib64/python2.4/site-packages/block/device.py", line 195, in MPNameCache MemoryError Please let me know what is going wrong and the reason why we get such errors, so that I can follow your leads and pursue the resolution for the issue. Thanks, K
https://www.daniweb.com/hardware-and-software/linux-and-unix/threads/305197/redhat-5-5-error-while-trying-to-import-yumsorter-class-from-yuminstall-py
The code for this chapter is in cumulative.py. For information about downloading and working with this code, see Section 0.2. totalwgt_lb records weight at birth in pounds. Figure 4.1 shows the PMF of these values for first babies and others. Figure 4.1: PMF of birth weights. This figure shows a limitation of PMFs: they are hard to compare visually. Overall, these distributions resemble the bell shape of a normal distribution, with many values near the mean and a few values much higher and lower. But parts of this figure are hard to interpret. There are many spikes and valleys, and some apparent differences between the distributions. It is hard to tell which of these features are meaningful. Also, it is hard to see overall patterns; for example, which distribution do you think has the higher mean? These problems can be mitigated by binning the data; that is, dividing the range of values into non-overlapping intervals and counting the number of values in each bin. Binning can be useful, but it is tricky to get the size of the bins right. An alternative that avoids these problems is the cumulative distribution function (CDF), which is the subject of this chapter. But before I can explain CDFs, I have to explain percentiles and percentile ranks. The percentile rank of a value is the fraction of values in a distribution that are less than or equal to it, expressed as a percentage. Here is a function that computes the percentile rank of your_score relative to the values in the sequence scores: def PercentileRank(scores, your_score): count = 0 for score in scores: if score <= your_score: count += 1 percentile_rank = 100.0 * count / len(scores) return percentile_rank As an example, if the scores were 55, 66, 77, 88 and 99, and you got the 88, your percentile rank would be 100 * 4 / 5, which is 80. Going the other way, given a percentile rank, one option is to sort the values and search for the first one whose percentile rank meets or exceeds the target; that implementation of Percentile is not efficient. A better approach is to use the percentile rank to compute the index of the corresponding percentile: def Percentile2(scores, percentile_rank): scores.sort() index = percentile_rank * (len(scores)-1) // 100 return scores[index] The difference between “percentile” and “percentile rank” can be confusing, and people do not always use the terms precisely. To summarize, PercentileRank takes a value and computes its percentile rank in a set of values; Percentile takes a percentile rank and computes the corresponding value. Now that we understand percentiles and percentile ranks, we are ready to tackle the cumulative distribution function (CDF). The CDF is the function that maps from a value to its percentile rank.
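The two functions above can be checked with a quick round trip. The 55-99 scores are illustrative, not from the NSFG data:

```python
# PercentileRank and Percentile2 exactly as defined above, plus a small check.

def PercentileRank(scores, your_score):
    count = 0
    for score in scores:
        if score <= your_score:
            count += 1
    percentile_rank = 100.0 * count / len(scores)
    return percentile_rank

def Percentile2(scores, percentile_rank):
    scores.sort()
    index = percentile_rank * (len(scores) - 1) // 100
    return scores[index]

scores = [55, 66, 77, 88, 99]
rank = PercentileRank(scores, 88)     # 4 of the 5 scores are <= 88
value = Percentile2(scores, 80)       # map the rank back to a value

print(rank)   # → 80.0
print(value)  # → 88
```

A score of 88 has percentile rank 80, and percentile rank 80 maps back to 88, confirming that the two functions are inverses here.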
The CDF is a function of x, where x is any value that might appear in the distribution. To evaluate CDF(x) for a particular value of x, we compute the fraction of values in the distribution less than or equal to x. Here’s what that looks like as a function that takes a sequence, sample, and a value, x: def EvalCdf(sample, x): count = 0.0 for value in sample: if value <= x: count += 1 prob = count / len(sample) return prob This function is almost identical to PercentileRank, except that the result is a probability in the range 0–1 rather than a percentile rank in the range 0–100. As an example, suppose we collect a sample with the values [1, 2, 2, 3, 5]. Here are some values from its CDF: CDF(0) = 0, CDF(1) = 0.2, CDF(2) = 0.6, CDF(3) = 0.8, CDF(4) = 0.8, CDF(5) = 1. We can evaluate the CDF for any value of x, not just values that appear in the sample. If x is less than the smallest value in the sample, CDF(x) is 0. If x is greater than the largest value, CDF(x) is 1. Figure 4.2: Example of a CDF. Figure 4.2 is a graphical representation of this CDF. The CDF of a sample is a step function. thinkstats2 provides a class named Cdf that represents CDFs. The fundamental methods Cdf provides are: Prob(x), which, given a value x, computes the probability p = CDF(x); and Value(p), which, given a probability p, computes the corresponding value x; that is, the inverse of the CDF. Figure 4.3: CDF of pregnancy length. The Cdf constructor can take as an argument a list of values, a pandas Series, a Hist, Pmf, or another Cdf. The following code makes a Cdf for the distribution of pregnancy lengths in the NSFG: live, firsts, others = first.MakeFrames() cdf = thinkstats2.Cdf(live.prglngth, label='prglngth') thinkplot provides a function named Cdf that plots Cdfs as lines: thinkplot.Cdf(cdf) thinkplot.Show(xlabel='weeks', ylabel='CDF') Figure 4.3 shows the result. One way to read a CDF is to look up percentiles. For example, it looks like about 10% of pregnancies are shorter than 36 weeks, and about 90% are shorter than 41 weeks. The CDF also provides a visual representation of the shape of the distribution. Common values appear as steep or vertical sections of the CDF; in this example, the mode at 39 weeks is apparent.
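Running EvalCdf on the example sample reproduces the CDF values listed above:

```python
# EvalCdf as defined above, applied to the sample [1, 2, 2, 3, 5].

def EvalCdf(sample, x):
    count = 0.0
    for value in sample:
        if value <= x:
            count += 1
    prob = count / len(sample)
    return prob

sample = [1, 2, 2, 3, 5]
for x in range(6):
    # prints 0 0.0, 1 0.2, 2 0.6, 3 0.8, 4 0.8, 5 1.0
    print(x, EvalCdf(sample, x))
```

Note the step-function behavior: CDF(3) and CDF(4) are both 0.8, because no value in the sample lies between 3 and 5.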
There are few values below 30 weeks, so the CDF in this range is flat. It takes some time to get used to CDFs, but once you do, I think you will find that they show more information, more clearly, than PMFs. CDFs are especially useful for comparing distributions. For example, here is the code that plots the CDF of birth weight for first babies and others. first_cdf = thinkstats2.Cdf(firsts.totalwgt_lb, label='first') other_cdf = thinkstats2.Cdf(others.totalwgt_lb, label='other') thinkplot.PrePlot(2) thinkplot.Cdfs([first_cdf, other_cdf]) thinkplot.Show(xlabel='weight (pounds)', ylabel='CDF') Figure 4.4: CDF of birth weights for first babies and others. Figure 4.4 shows the result. Compared to Figure 4.1, this figure makes the shape of the distributions, and the differences between them, much clearer. We can see that first babies are slightly lighter throughout the distribution, with a larger discrepancy above the mean. Once you have computed a CDF, it is easy to compute percentiles and percentile ranks. The Cdf class provides these two methods: PercentileRank(x), which, given a value x, computes its percentile rank, 100 · CDF(x); and Percentile(p), which, given a percentile rank p, computes the corresponding value. Percentile can be used to compute percentile-based summary statistics. For example, the 50th percentile is the value that divides the distribution in half, also known as the median. Like the mean, the median is a measure of the central tendency of a distribution. Actually, there are several definitions of “median,” each with different properties. But Percentile(50) is simple and efficient to compute. Another percentile-based statistic is the interquartile range (IQR), which is a measure of the spread of a distribution. The IQR is the difference between the 75th and 25th percentiles. More generally, percentiles are often used to summarize the shape of a distribution. For example, the distribution of income is often reported in “quintiles”; that is, it is split at the 20th, 40th, 60th and 80th percentiles. Other distributions are divided into ten “deciles”.
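The median and IQR described here can be computed with a minimal percentile function. This is a sketch in the spirit of Percentile, not the thinkstats2 implementation, and the birth weights are made up for illustration:

```python
def percentile(values, rank):
    """Return the smallest value whose percentile rank is >= rank."""
    values = sorted(values)
    for i, v in enumerate(values):
        # percentile rank of v is 100 * (number of values <= v) / n
        if 100.0 * (i + 1) / len(values) >= rank:
            return v
    return values[-1]

weights = [6.2, 7.0, 7.4, 7.9, 8.1, 8.8, 9.5]  # hypothetical weights in pounds

median = percentile(weights, 50)                          # central tendency
iqr = percentile(weights, 75) - percentile(weights, 25)   # spread

print(median)  # → 7.9
```

With seven sorted values, the 50th percentile lands on the middle element, matching the usual definition of the median for an odd-length list.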
Statistics like these that represent equally-spaced points in a CDF are called quantiles. For more, see the Wikipedia page on quantiles. Suppose we choose a random sample from the population of live births and look up the percentile rank of their birth weights. Now suppose we compute the CDF of the percentile ranks. What do you think the distribution will look like? Here’s how we can compute it. First, we make the Cdf of birth weights: weights = live.totalwgt_lb cdf = thinkstats2.Cdf(weights, label='totalwgt_lb') Then we generate a sample and compute the percentile rank of each value in the sample. sample = np.random.choice(weights, 100, replace=True) ranks = [cdf.PercentileRank(x) for x in sample] sample is a random sample of 100 birth weights, chosen with replacement; that is, the same value could be chosen more than once. ranks is a list of percentile ranks. Finally we make and plot the Cdf of the percentile ranks. rank_cdf = thinkstats2.Cdf(ranks) thinkplot.Cdf(rank_cdf) thinkplot.Show(xlabel='percentile rank', ylabel='CDF') Figure 4.5: CDF of percentile ranks for a random sample of birth weights. Figure 4.5 shows the result. The CDF is approximately a straight line, which means that the distribution is uniform. That outcome might be non-obvious, but it is a consequence of the way the CDF is defined. What this figure shows is that 10% of the sample is below the 10th percentile, 20% is below the 20th percentile, and so on, exactly as we should expect. So, regardless of the shape of the CDF, the distribution of percentile ranks is uniform. This property is useful, because it is the basis of a simple and efficient algorithm for generating random numbers with a given CDF. Here’s how: choose a percentile rank uniformly from the range 0–100, then use Percentile to find the value in the distribution that corresponds to that percentile rank. Cdf provides an implementation of this algorithm, called Random: # class Cdf: def Random(self): return self.Percentile(random.uniform(0, 100)) Cdf also provides Sample, which takes an integer, n, and returns a list of n values chosen at random from the Cdf.
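The two-step algorithm, choosing a percentile rank uniformly and mapping it through the inverse CDF, can be sketched without the Cdf class. This standalone version uses the index-based Percentile2 lookup from earlier in the chapter; it is an illustration, not the thinkstats2 code:

```python
import random

def inverse_cdf_sample(sorted_values, n, rng=random):
    """Draw n values with approximately the same distribution as sorted_values."""
    out = []
    for _ in range(n):
        p = rng.uniform(0, 100)                           # step 1: uniform rank
        index = int(p * (len(sorted_values) - 1) // 100)  # step 2: Percentile lookup
        out.append(sorted_values[index])
    return out

data = sorted([1, 2, 2, 3, 5])
draws = inverse_cdf_sample(data, 1000)
```

Every draw is one of the original sample values, and values that occupy more of the CDF (like 2 here) are drawn proportionally more often.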
Percentile ranks are useful for comparing measurements across different groups. For example, people who compete in foot races are usually grouped by age and gender. To compare people in different age groups, you can convert race times to percentile ranks. A few years ago I ran the James Joyce Ramble 10K in Dedham MA; I finished in 42:44, which was 97th in a field of 1633. I beat or tied 1537 runners out of 1633, so my percentile rank in the field is 94%. More generally, given position and field size, we can compute percentile rank: def PositionToPercentile(position, field_size): beat = field_size - position + 1 percentile = 100.0 * beat / field_size return percentile In my age group, denoted M4049 for “male between 40 and 49 years of age”, I came in 26th out of 256. So my percentile rank in my age group was 90%. If I am still running in 10 years (and I hope I am), I will be in the M5059 division. Assuming that my percentile rank in my division is the same, how much slower should I expect to be? I can answer that question by converting my percentile rank in M4049 to a position in M5059. Here’s the code: def PercentileToPosition(percentile, field_size): beat = percentile * field_size / 100.0 position = field_size - beat + 1 return position There were 171 people in M5059, so I would have to come in between 17th and 18th place to have the same percentile rank. The finishing time of the 17th runner in M5059 was 46:05, so that’s the time I will have to beat to maintain my percentile rank. For the following exercises, you can start with chap04ex.ipynb. My solution is in chap04soln.ipynb. Generate 1000 numbers from random.random and plot their PMF and CDF. Is the distribution uniform?
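Plugging the numbers from the running example into these functions confirms the figures in the text:

```python
# PositionToPercentile and PercentileToPosition as defined above.

def PositionToPercentile(position, field_size):
    beat = field_size - position + 1
    return 100.0 * beat / field_size

def PercentileToPosition(percentile, field_size):
    beat = percentile * field_size / 100.0
    return field_size - beat + 1

overall = PositionToPercentile(97, 1633)       # ~94.1% in the whole field
age_group = PositionToPercentile(26, 256)      # ~90.2% in M4049
target = PercentileToPosition(age_group, 171)  # position needed in M5059

print(overall, age_group, target)
```

The target position comes out between 17 and 18, which is why the text says the author would need to finish between 17th and 18th place in M5059.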
http://greenteapress.com/thinkstats2/html/thinkstats2005.html
described features. Each quick-reference entry begins with a four-part title that specifies the name, namespace (followed by the assembly in parentheses), and type category of the type, and may also specify various additional flags that describe the type. The type name appears in bold at the upper-left side of the title. The namespace and assembly appear in smaller print in the lower-left side, below the type name. The upper-right portion of the title indicates the type category of the type (class, delegate, enum, interface, or struct). The "class" category may include modifiers such as sealed or abstract. In the lower-right corner of the title, you may find a list of flags that describe the type. The possible flags and their meanings are as follows: Specifies that the type is part of the ECMA CLI specification. Specifies that the type, or a base class, implements System.Runtime.Serialization.ISerializable or has been flagged with the System.Serializable attribute. This class, or a superclass, derives from System.MarshalByRefObject. This class, or a superclass, derives from System.ContextBoundObject. Specifies that the type implements the System.IDisposable interface. Specifies that the enumeration be marked with the System.FlagsAttribute attribute. The title is followed by a synopsis of the type, which looks much like its source code, except that the member bodies are omitted and some additional annotations are added. If you know C# syntax, you know how to read the synopsis. The first line of the synopsis shows the type's name, along with any base class or interfaces that the type implements. The type definition line is followed by a list of the members that the type defines. This list includes only members that are explicitly declared in the type, are overridden from a base class, or are implementations of an interface member. Members that are simply inherited from a base class are not shown, so the synopsis should not be used literally. The member listings are printed on alternating gray and white backgrounds to keep them visually separate. Each member listing is a single line that defines the syntax for that member.
These listings use C# syntax, so their meaning is immediately clear to any C# programmer. Some auxiliary information associated with each member synopsis, however, requires explanation. The area to the right of the member synopsis displays a variety of flags that provide additional information about the member. Some flags indicate additional specification details that do not appear in the member syntax itself. The following flags may be displayed to the right of a member synopsis: Indicates that a method overrides a method in one of its base classes. The flag is followed by the name of the base class that the method overrides. Indicates that a method implements a method in an interface. The flag is followed by the name of the implemented interface. For enumeration fields and constant fields, this flag is followed by the constant value of the field. Only constants of primitive and String types and constants with the value null are displayed. Some constant values are specification details, while others are implementation details. Some constants, such as System.BitConverter.IsLittleEndian, are platform dependent. Platform-dependent values shown in this book conform to the System.PlatformID.Win32NT platform (32-bit Windows NT, 2000, or XP). The reason why symbolic constants are defined, however, is so you can write code that does not rely directly upon the constant value. Use this flag to help you understand the type, but do not rely upon the constant values in your own programs. Within a type synopsis, the members are not listed in strict alphabetical order. Instead, they are broken down into functional groups and listed alphabetically within each group. Constructors, events, fields, methods, and properties are all listed separately. 
Instance methods are kept separate from shared (static) methods, and each functional group is introduced by a comment, such as: // Public Constructors or: // Protected Instance Properties or: // Events The various functional categories follow below (in the order in which they appear in a type synopsis): Displays the constructors for the type. Public and protected constructors are displayed separately in subgroupings. If a type defines no constructor at all, the compiler adds a default parameterless constructor that is displayed here. If a type defines only private constructors, it cannot be instantiated. Lists the properties of the type, broken down into subgroups for shared properties and public and protected instance properties. After the property name, its accessors (get or set) are shown. Lists the static methods (class methods) of the type, broken down into subgroups for public shared methods and protected shared methods. Contains all public instance methods. Contains all protected instance methods. For any type that has a nontrivial inheritance hierarchy, the synopsis is followed by a "Hierarchy" section. This section lists all of the base classes of the type, as well as any interfaces implemented by those base classes. It also lists any interfaces implemented by an interface. In the hierarchy listing, arrows indicate base class to derived class relationships, while the interfaces implemented by a type follow the type name in parentheses. For example, the following hierarchy indicates that System.IO.Stream implements IDisposable and extends MarshalByRefObject, which itself extends Object: System.Object System.MarshalByRefObject System.IO.Stream (System.IDisposable) Finally, a quick-reference entry may conclude with optional cross-reference sections that indicate other related types and methods that may be of interest. These sections include: This section lists all members (from other types) that are passed an object of this type as an argument, including properties whose values can be set to this type. It is useful when you have an object of a given type and want to know where it can be used.
This section lists all members that return an object of this type, including properties whose values can take on this type. It is useful when you know that you want to work with an object of this type, but don't know how to obtain one. For attributes, this section lists the attribute targets that the attribute can be applied to. For delegates, this section lists the events it can handle. Throughout the quick reference, you'll notice that types are sometimes referred to by type name alone, and at other times by their fully qualified names. The rules that determine which form is used are complex, but they can be summarized as follows: If the type name alone is ambiguous, the namespace name is always used. If the type is part of the System namespace or is a commonly used type like.
http://etutorials.org/Programming/Asp.net/Part+III+Namespace+Reference/Chapter+21.+Namespace+Reference/21.1+Reading+a+Quick-Reference+Entry/
Ever wanted to use GLSL with Pyglet, but became lost in the ctypes circumlocutions required to compile and link shaders? If so, then here is the class for you – a basic, simple GLSL wrapper class, which takes the leg work out of using shaders. This provides most of what you will need for simple GLSL programs, allowing you to compile and link shaders, bind and unbind them, and set uniforms (integer, float and matrix). It doesn’t support custom attributes, but that would be a trivial addition. # # # Distributed under the Boost Software License, Version 1.0 # (see) # from pyglet.gl import * class Shader: # vert, frag and geom take arrays of source strings # the arrays will be concatenated into one string by OpenGL def __init__(self, vert = [], frag = [], geom = []): # create the program handle self.handle = glCreateProgram() # we are not linked yet self.linked = False # create the vertex shader self.createShader(vert, GL_VERTEX_SHADER) # create the fragment shader self.createShader(frag, GL_FRAGMENT_SHADER) # the geometry shader will be the same, once pyglet supports the extension # self.createShader(frag, GL_GEOMETRY_SHADER_EXT) # attempt to link the program self.link() def createShader(self, strings, type): count = len(strings) # if we have no source code, ignore this shader if count < 1: return # create the shader handle shader = glCreateShader(type) # convert the source strings into a ctypes pointer-to-char array, and upload them # this is deep, dark, dangerous black magick - don't try stuff like this at home!
src = (c_char_p * count)(*strings) glShaderSource(shader, count, cast(pointer(src), POINTER(POINTER(c_char))), None) # compile the shader glCompileShader(shader) temp = c_int(0) # retrieve the compile status glGetShaderiv(shader, GL_COMPILE_STATUS, byref(temp)) # if compilation failed, print the log if not temp: # retrieve the log length glGetShaderiv(shader, GL_INFO_LOG_LENGTH, byref(temp)) # create a buffer for the log buffer = create_string_buffer(temp.value) # retrieve the log text glGetShaderInfoLog(shader, temp, None, buffer) # print the log to the console print buffer.value else: # all is well, so attach the shader to the program glAttachShader(self.handle, shader); def link(self): # link the program glLinkProgram(self.handle) temp = c_int(0) # retrieve the link status glGetProgramiv(self.handle, GL_LINK_STATUS, byref(temp)) # if linking failed, print the log if not temp: # retrieve the log length glGetProgramiv(self.handle, GL_INFO_LOG_LENGTH, byref(temp)) # create a buffer for the log buffer = create_string_buffer(temp.value) # retrieve the log text glGetProgramInfoLog(self.handle, temp, None, buffer) # print the log to the console print buffer.value else: # all is well, so we are linked self.linked = True def bind(self): # bind the program glUseProgram(self.handle) def unbind(self): # unbind whatever program is currently bound - not necessarily this program, # so this should probably be a class method instead glUseProgram(0) # upload a floating point uniform # this program must be currently bound def uniformf(self, name, *vals): # check there are 1-4 values if len(vals) in range(1, 5): # select the correct function { 1 : glUniform1f, 2 : glUniform2f, 3 : glUniform3f, 4 : glUniform4f # retrieve the uniform location, and set }[len(vals)](glGetUniformLocation(self.handle, name), *vals) # upload an integer uniform # this program must be currently bound def uniformi(self, name, *vals): # check there are 1-4 values if len(vals) in range(1, 5): # select the 
correct function { 1 : glUniform1i, 2 : glUniform2i, 3 : glUniform3i, 4 : glUniform4i # retrieve the uniform location, and set }[len(vals)](glGetUniformLocation(self.handle, name), *vals) # upload a uniform matrix # works with matrices stored as lists, # as well as euclid matrices def uniform_matrixf(self, name, mat): # obtain the uniform location loc = glGetUniformLocation(self.handle, name) # upload the 4x4 floating point matrix glUniformMatrix4fv(loc, 1, False, (c_float * 16)(*mat)) Note that there isn't anything restricting this class to use with Pyglet - a simple change to the import statement should allow it to be used with any setup based on OpenGL/ctypes. I hope someone finds this useful, and I would love to hear if you use it to make anything cool/interesting. This is the cleanest implementation I have seen so far! After adding custom attributes, this would make a good replacement for pyglet/experimental/shader.py. Code that illustrates general GPU computation, a-la, would make for a good demo. Thanks! I have a sample program in the works, check back in a couple of days. Pingback: Conway’s Game of Life in GLSL/Pyglet « swiftcoding Yo! Great code! I’ve just started using python (and pyglet) and did a google to see if using shaders with python would be hard, luckily for me you made it really simple with this example code. Thanks for saving a python-n00b from hours of trying and failing :> I finally put up a website with GLSL Stuff using your great library. Feedback appreciated 🙂 Send comments on GLSL/Pyglet on to pythonian_at_inode_dot_at ! There’s a missing import in the shader module.
File "/shader.py", line 40, in createShader src = (c_char_p * count)(*strings) NameError: global name 'c_char_p' is not defined Resolved by adding from ctypes import * Cheers, Adam Adam Griffiths, File "C:\Python33\lib\site-packages\shader.py", line 41, in createShader src = (c_char_p * count)(*strings) TypeError: bytes or integer address expected instead of str instance PS I try to use python33 I would be really very surprised if the code works as-is under python 3. You’ll probably have to convert all the strings to bytearrays. I tried to make this work under Python 3 but couldn’t get it done. There is an invalid value GLException with a standard vertex and fragment shader. Can you help me by updating your code to be compatible with Python 3? Thanks in advance! I still couldn’t figure out what to change in your code to make it python 3 compatible, the following lines won’t work under python3: src = (c_char_p * count)(*strings) glShaderSource(shader, count, cast(pointer(src), POINTER(POINTER(c_char))), None) I struggled a lot with the exceptions, but I actually solved this problem; it arose at another part of the Shader class. The uniformi and uniformf methods are also using strings to locate the uniform variable, so these have to be also encoded to ascii. These lines have to be added or modified to make this useful program work under python3: - before line 40 add the following: strings = [s.encode('ascii') for s in strings] - modify line 106 and 119 to: }[len(vals)](glGetUniformLocation(self.handle, name.encode('ascii')), *vals) So is there any way to make this work with Python 3…? Yes, but it will take a little effort. There are trivial items to fix (i.e. print statement syntax), but the more complicated part is the handling of ctypes strings in python 3 - I think you need to adapt those to the buffer type instead.
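The encoding fix described in these comments can be verified without any OpenGL context at all, because the failure happens while building the ctypes argument array, before any GL call. The shader source strings below are made up for illustration:

```python
from ctypes import c_char_p, POINTER, c_char, cast, pointer

# hypothetical shader source, just to have something to encode
strings = ["#version 120\n", "void main() { gl_FragColor = vec4(1.0); }\n"]

# Python 3: c_char_p wants bytes, so encode each source string first
encoded = [s.encode("ascii") for s in strings]
src = (c_char_p * len(encoded))(*encoded)

# the same cast the Shader class performs before calling glShaderSource
src_ptr = cast(pointer(src), POINTER(POINTER(c_char)))

print(src[0])  # → b'#version 120\n'
```

Passing an unencoded str into the same array constructor raises exactly the "bytes or integer address expected instead of str instance" TypeError quoted above, which is why the one-line encode fix works.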
https://swiftcoder.wordpress.com/2008/12/19/simple-glsl-wrapper-for-pyglet/
From: nee Spangenberg (dsp_at_[hidden]) Date: 2003-12-03 08:22:52 AlisdairM schrieb: > The following seems such an obvious utility, I was surprised I could not > find it in the boost distribution. Could someone point me to the > equivalent if I am missing it. Alternatively, is this worth cleaning up > for a submission? I remember that it was part of boost, I think in Version 1.28 or so. I also wondered loudly (but obviously not loud enough) about its missing in newer versions some month ago. The functions named sizer with signature template <typename T, int sz> inline char (&sizer(T (&)[sz]))[sz]; was inside array_traits.hpp and allowed compile-time evaluation of fixed array sizes, while the traits-class array_traits provided the static size function which provided run-time evaluation via partial specialization of the traits class. The implementation of template <typename T, std::size_t sz> struct array_traits<T[sz]> ; and template <typename T, std::size_t sz> struct array_traits<T const[sz]>; was: ... size_type size(T (&)[sz]) { return sz; } ... similar to your proposal. Interestingly today's boost version has moved array_traits.hpp into the type_traits subdirectory (that is ok), but replaced its implementation by #include "boost/type_traits/is_array.hpp" This is also not so bad in the first place, but the fact, that is_array.hpp does not **additionally** contain the original array_traits stuff, is somewhat nasty. Greetings from Bremen, Daniel Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2003/12/57093.php
Contacting a Server Problem You need to contact a server using TCP/IP. Solution Just create a Socket, passing the hostname and port number into the constructor. Discussion There isn’t much to this in Java, in fact. When creating a socket, you pass in the hostname and the port number. The java.net.Socket constructor does the gethostbyname( ) and the socket( ) system call, sets up the server’s sockaddr_in structure, and executes the connect( ) call. All you have to do is catch the errors, which are subclassed from the familiar IOException. Example 15-2 sets up a Java network client, using IOException to catch errors. Example 15-2. Connect.java (simple client connection) import java.net.*; /* * A simple demonstration of setting up a Java network client. */ public class Connect { public static void main(String[] argv) { String server_name = "localhost"; try { Socket sock = new Socket(server_name, 80); /* Finally, we can read and write on the socket. */ System.out.println(" *** Connected to " + server_name + " ***"); /* ... do the I/O here ... */ sock.close( ); } catch (java.io.IOException e) { System.err.println("error connecting to " + server_name + ": " + e); return; } } } See Also Java supports other ways of using network applications. You can also open a URL and read from it (see Section 17.7). You can write code so that it will run from a URL, when opened in a web browser, or from an application (see Section 17.10).
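For comparison, here is the same connect-and-close flow sketched in Python. The throwaway local listener stands in for the web server on port 80 so the snippet is self-contained; it is an analogue of the recipe, not part of the Java Cookbook:

```python
import socket

# stand-in server: listen on an ephemeral localhost port
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
host, port = server.getsockname()

# the client side, equivalent to `new Socket(server_name, 80)` in the recipe;
# create_connection resolves the host and connects, raising OSError on failure
sock = socket.create_connection((host, port))
print("*** Connected to %s:%d ***" % (host, port))
# ... do the I/O here ...
sock.close()
server.close()
```

As in the Java version, name resolution, socket creation, and connect are all folded into a single constructor-style call, and failures surface as one exception type to catch.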
https://www.oreilly.com/library/view/java-cookbook/0596001703/ch15s02.html
I'm trying to speed up a few regexes in a script that gets called a few dozen times a day. Each invocation basically loops through a ton of source code and builds a sort of searchable index. The problem is one single run of this script is now taking more than a day to run. There's some parallelization that can be done, but I'm hopeful there's something to be gained within each script as well. The script in question is MXR's "genxref": here

Here's a relevant NYTProf run (one of the dozens that gets run daily, across different source repos): here. You can see some lines are getting hit a million times or more. Here's a good example fragment:

879  # Remove nested parentheses.
880  while ($contents =~ s/\(([^\)]*)\(/\($1\05/g ||
881         $contents =~ s/\05([^\(\)]*)\)/ $1 /g) {}

This is one problematic snippet, but hardly the only one... the script is littered with complicated regexes. Most of them are quick enough as-is, but some (like the above) have become a significant performance bottleneck as our code base has grown. How might I improve upon this situation? Specific improvements and general ideas both welcome... I know the basics from a theoretical perspective (don't capture if you don't have to, try not to backtrack, etc.), but not how to spot/fix problems. I don't have enough real-world experience with this. Thanks!

There are a few places in the code where small savings can be achieved essentially for free. But even if you could make this level of savings on every single line in the program, you'd still save maybe an hour at most. Looking at a few of the individual REs, nothing leaps off the page as being particularly extravagant. You should heed the annotation at the top of the profiling and try to remove usage of $&. This has been known to effect a substantial time saving.
The only place affected is this sub:

sub java_clean {
    my $contents = $_[0];
    while ($contents =~ s/(\{[^\{]*)\{([^\{\}]*)\}/ $1."\05".&wash($2)/ges) {}
    $contents =~ s/\05/\{\}/gs;

    # Remove imports
    ##$contents =~ s/^\s*import.*;/&wash($&)/gem;
    $contents =~ s/(^\s*import.*;)/&wash($1)/gem;

    # Remove packages
    ##$contents =~ s/^\s*package.*;/&wash($&)/gem;
    $contents =~ s/(^\s*package.*;)/&wash($1)/gem;
    return $contents;
}

The uncommented replacements should have the same effect (untested), and the changes could have a substantial effect on the overall performance of a script dominated by regex manipulations. While you're at it, you can also add a few micro-optimisations where they are called millions of times, like:

sub wash {
    ##### my $towash = $_[0];
    return ( "\n" x ( $_[0] =~ tr/\n// ) );
}

which will save the 7 seconds spent copying the input parameter. But given that the overall runtime is 7 minutes, that's not going to have a big effect. The only way you're likely to get substantial savings from within the script is to try optimising the algorithms used -- which amounts to tuning all of the individual regexes, and the heuristics they represent -- and that comes with enormous risk of breaking the logic completely and would require extensive and detailed testing.

All of that said, if you split the workload across two processors, you're likely to achieve close to a 50% saving. Across 4, a 75% saving is theoretically possible. It really doesn't make much sense to spend time looking for savings within the script when, with a little restructuring, it lends itself so readily to being parallelised.
On the $& fix, all I can say is *thank you*. I had investigated this previously, but I was completely misreading the warning in NYTProf as complaining that line 32 itself was doing this, which I couldn't figure out at all. You actually found the offending lines and even offered a fix! After making your change, the warning does indeed go away.

From the docs this seems like a safe change to make. Sadly, it seems to have a very small effect in this particular environment / workload. A 38-second run was virtually unaffected... +/- one second. On a 15-minute repo, it fell by about 4-5 seconds. There are bigger repos that might show a bigger benefit, but I suspect we'll only shave at most a few minutes off over a full day's work. Still, every little bit helps, and fixing the warning is nice in and of itself. :)

I'm working on some parallelization for this, which should help. I'm slightly concerned this might overload the system it runs on, but that can be fixed with the proper application of a wallet. For example, we have a set of repos that have this script run on them every 4 hours. A better implementation (say, using GNU parallel) may get it down to an hour flat. At the same time there's a much bigger set of repos that get processed daily. I think it takes over 24 hours to run (separate issue: it stopped reporting its status regularly). This runs concurrently, and obviously can overlap the 4-hour jobs. Still, it's clearly a very good approach to reducing wall-clock time. I was primarily hoping someone would spot something egregiously wrong with the regexes that I could fix, but that seems not to be the case. Oh well... guess we'll do it the hard way. :)

You're only going to get some limited benefit by tweaking the regex patterns. Shaving 20% off of 60 seconds is still 48 seconds. One thing I see is that you're scanning the file from top to bottom for each s/// operator, and often more than once per pattern. Your efforts might be better spent avoiding that. Some examples are self-contained:

while ($contents =~ s/\05([^\n\05]+)\05/$1\05\05/gs) {}

can become

$contents =~ s/\05([^\n\05]+)(?=\05)/$1\05/gs;

But that's not going to help you remove this waste in general. Hmm... going to have to get help here.
I'm not familiar enough with this code or its expected output to really tell when it's working or when I might introduce a subtle bug somewhere. This seems more like the latter scenario. Thanks for pointing this out!

On this particular example, I'm not understanding how the two things are identical. The substitution pattern in your version has one less \05. Is that intentional? If so, how does that work? I don't really grok the look-around assertion, and/or how that's relevant to not needing the extra \05 in the substitution pattern.

For that matter, I'm not sure what \05 even is. Most places seem to say that an octal character code requires exactly 3 digits, but some say you can get away with less, as long as the leading digit is zero. But \05 interpreted as octal is non-printing... I don't know what it is, or why it would be in these files. Any thoughts on that?
http://www.perlmonks.org/?node_id=935021
When you add a new volume to an existing VxVM disk device group, perform the procedure from the primary node of the online disk device group. After adding the volume, you need to register the configuration change by using the procedure SPARC: How to Register Disk Group Configuration Changes (VERITAS Volume Manager).

1. Become superuser on any node of the cluster.

2. Determine the primary node for the disk device group to which you are adding the new volume.

3. If the disk device group is offline, bring the device group online. The switch operation takes the device group to switch and the name of the node to switch the disk device group to; that node becomes the new primary.

4. From the primary node (the node currently mastering the disk device group), create the VxVM volume in the disk group. Refer to your VERITAS Volume Manager documentation for the procedure used to create the VxVM volume.

5. Register the VxVM disk group changes to update the global namespace. See SPARC: How to Register Disk Group Configuration Changes (VERITAS Volume Manager).
http://docs.oracle.com/cd/E19528-01/819-0580/cihdggdc/index.html
Inko 0.2.5 released

Inko 0.2.5 has been released. Noteworthy changes in 0.2.5:

- Sending unsupported messages is no longer silently ignored
- Boolean assertions are now easier to define

The full list of changes can be found in the CHANGELOG.

Sending unsupported messages is no longer silently ignored

In 0.2.4, a bug was introduced that would prevent the compiler from producing a compile-time error when sending an unsupported message using an explicit receiver. This would lead to code such as the following producing a runtime error, instead of a compile-time error:

'hello'.foo

Since this particular bug is rather serious, we decided to release a fix in 0.2.5 instead of waiting for 0.3.0.

Boolean assertions are now easier to define

The addition of std::test::assert.true and std::test::assert.false will make it easier to write boolean assertions. For example, instead of this:

import std::test::assert

assert.equal(10 == 10, True)

You can now write the following:

import std::test::assert

assert.true(10 == 10)
https://inko-lang.org/news/inko-0-2-5-released/
It seems that I riled some people up with my blog post yesterday. After some thought, I think the primary reason there was some backlash is because some people feel that I violated one of the sacred principles of FP: lists are *the* data structure. Well, let me set the matter straight. I love lists. Especially the singly linked immutable kind that are found in many functional languages. Furthermore, Microsoft is not trying to launch some massive marketing campaign to sell IEnumerable<T> as the *new* list.

So to be clear, I talked about IEnumerable<T> yesterday because people who have used functional languages will wonder where the lists are in C# 3.0. IEnumerable<T> is not the same as a list, but there are many similarities between IEnumerable<T> and lazy lists, which I pointed out yesterday when I showed they are isomorphic by describing the bijective mapping between them.

Furthermore, it is the case that some data structures are generally better than others. But there are tradeoffs between using the various data structures. Also, *not* all of the tradeoffs are captured by looking exclusively at their time complexity for some operation. There is of course the space complexity, and there is the complexity of implementation itself. And don't forget that often they have different characteristics for different operations. The key point here is that a given problem will have a number of constraints, and each of the competing designs has a number of trade-offs.

As an interviewer, when I notice a candidate is not aware of the trade-offs that he is making, then I start to worry that they were not considered at all. Furthermore, when I notice that a candidate seems to favor one data structure to the exclusion of others, I start to wonder how many tools are in the toolbox. But after observing this behavior many times, I am convinced that it often is not the coding problem that is driving usage of some data structure but the mode of thinking that the candidate employs.
One more thing before I continue on. At least one reader wondered why I said the following: "Where most programmers who are accustomed to imperative style would naturally use an array, a variable, or a mutable object, a functional programmer will often use a list."

Yes, I meant to say exactly what I said. In fact, when I wrote it, I paused before deciding to include it because I thought it might be misunderstood. When I first read SICP, the most mind bending and rewarding topic was at the end of chapter 3: the section on streams. One of the motivations given for using a stream (infinite list) was that variables that change their value over time cause problems. One way to address this was to have streams represent the state of the variable over time, where each element of the stream represents a state of the variable at some point in time.

So without further ado, let's take a look at streams...

What we want is an infinite list. The problem is that an infinite list can never actually be fully realized because of its infinite nature. So if we attempt to realize an infinite list, either the computer will enter an infinite loop or it will run out of resources (stack overflow, out of memory, etc.). We can overcome this problem by having lazy lists: lists where the next element in the list is not realized until it is needed. Yesterday, I presented one such lazy list which uses an enumerator to realize the next element. Today, I present another which has a more intriguing definition.

class Stream<T> : IList<T>
{
  Func<IList<T>> next;
  T value;

  public Stream(T value, Func<IList<T>> next)
  {
    this.value = value;
    this.next = next;
  }

  public IList<T> Next { get { return next(); } }
  public T Value { get { return value; } }
}

This lazy list is very similar to a normal list. The only difference is that instead of taking a list as the value of the next node in the list, it takes a function which will evaluate to the next node in the list. But this difference is critical.
The first difference can easily be seen by imagining a list type, ErrorList, that throws an exception when constructed.

class ErrorList<T> : IList<T>
{
  public ErrorList() { throw new Exception(); }

  public IList<T> Next { get { throw new Exception(); } }
  public T Value { get { throw new Exception(); } }
}

Now consider the following code:

var list = new List<int>(1, new ErrorList<int>());           // error, exception thrown
var stream = new Stream<int>(1, () => new ErrorList<int>()); // no error

An exception is thrown when the list is constructed, but when the stream is constructed no exception is thrown. Unless the Next property is evaluated on the stream, there will never be an exception.

The second difference can be seen in the following code:

IList<BigInteger> onesList = null;
onesList = new List<BigInteger>(1, onesList);

IList<BigInteger> onesStream = null;
onesStream = new Stream<BigInteger>(1, () => onesStream);

If you try to list all of the values in onesList, then you will notice that it only contains one value, whereas onesStream contains an infinite number of values. The reason is that when we constructed onesList, we passed in onesList, but onesList had the value null at the time, so the next node was set to null. In the stream case, we passed in a function that will be evaluated sometime in the future. By the time that we evaluate it, it will return the proper value of onesStream.

A third difference is found in the performance of the two lists. With lazy lists, parts of the list that are never realized are never paid for. So it is a pay-as-you-go model, as opposed to paying everything up front. Furthermore, less space can be required, since the whole list is not necessarily held in memory at once, but only a process description that can compute each element as it is required.

So now we have this infinite stream of ones, but can we do anything more interesting? Sure. First, let's define a zip function between lists.
public static IList<V> Zip<T,U,V>(Func<T,U,V> f, IList<T> list1, IList<U> list2)
{
  if (list1 == null || list2 == null)
    return null;
  return new Stream<V>(f(list1.Value, list2.Value), () => Zip<T,U,V>(f, list1.Next, list2.Next));
}

Yes, that is somewhat of a mouthful. Here is what it does. It takes two lists, where the lists may differ from each other in what kind of elements they contain. It also takes a function from the type of the first list's elements and the type of the second list's elements to possibly a new type. If either of the lists is empty, we return an empty list (null). But if both lists are non-empty, then we return a stream where the first element is the application of the given function to the first elements of both lists, and the rest of the list is the evaluation of Zip on the rests of the two lists. It is important that Zip uses a stream, because these lists may be infinite, and if we try to immediately evaluate the entire list we will run out of resources.

Now that we have Zip, let's put it to use to define the natural numbers.

IList<BigInteger> ones = null;
ones = new Stream<BigInteger>(1, () => ones);

IList<BigInteger> natural = null;
natural = new Stream<BigInteger>(0, () => Zip((x, y) => x + y, natural, ones));

So what we are saying is that the natural numbers begin with zero and then are followed by the sum of the first natural number with the first element of ones (0 + 1). The second natural number is the sum of the second element of natural with the second element of ones (1 + 1), and so on. This works because each element of natural is defined only in terms of the elements of natural that occur previous to it.

So now we can easily define the odd numbers (2k + 1) and even numbers (2k). But first we need a map function for lists.
public static IList<U> Map<T, U>(Func<T, U> f, IList<T> list)
{
  if (list == null)
    return null;
  return new Stream<U>(f(list.Value), () => Map(f, list.Next));
}

Now here are the definitions of odd and even numbers.

IList<BigInteger> odds = Map(x => 2 * x + 1, natural);
IList<BigInteger> evens = Map(x => 2 * x, natural);

We can also define the fibonacci sequence as a stream.

IList<BigInteger> fibs = null;
fibs = new List<BigInteger>(0, new Stream<BigInteger>(1, () => Zip(add, fibs, fibs.Next)));

What we are saying here is that the first fibonacci number is zero and the second is one, but then the next one is the sum of the first number and the second number, and so on. This is similar to the natural numbers definition, which used itself to compute itself. If you try it out, you will also notice that it isn't very efficient. This is because we are back to our exponential time complexity. But this can easily be remedied by memoizing the next function in the constructor of the stream:

public Stream(T value, Func<IList<T>> next)
{
  this.value = value;
  this.next = next.Memoize();
}

Now it has linear time complexity, by trading off some space (linear as well).

Let's finish by solving the famous problem of producing the Hamming numbers. The problem is to list all of the positive integers which have no prime factors other than 2, 3, or 5, in ascending order. The first ten Hamming numbers are:

1, 2, 3, 4, 5, 6, 8, 9, 10, 12

This problem is notoriously difficult without lazy evaluation and is used to demonstrate the power of laziness. To solve this problem, first note that the first Hamming number is 1. Then if h is a Hamming number, so are 2h, 3h, and 5h. So we can define three streams which map the Hamming numbers to 2h, 3h, and 5h respectively. The only remaining requirement is that they must be in order.
We can maintain this invariant by defining a function named Merge:

public static IList<T> Merge<T>(IList<T> list1, IList<T> list2) where T : IComparable<T>
{
  if (list1 == null)
    return list2;
  else if (list2 == null)
    return list1;
  int c = list1.Value.CompareTo(list2.Value);
  if (c < 0)
    return new Stream<T>(list1.Value, () => Merge(list1.Next, list2));
  else if (c > 0)
    return new Stream<T>(list2.Value, () => Merge(list1, list2.Next));
  else
    return new Stream<T>(list1.Value, () => Merge(list1.Next, list2.Next));
}

Now we are ready to define the Hamming numbers. Notice how close the definition in the code is to our description:

IList<BigInteger> hamming = null;
hamming = new Stream<BigInteger>(1, () =>
  Merge(Map(x => x * 2, hamming),
    Merge(Map(x => x * 3, hamming),
          Map(x => x * 5, hamming))));

Now for fun, try to think of how to do it without lazy evaluation.

Excellent stuff, I'm reminded of the "streams are just delayed lists" section in SICP. If you keep up posting like this Wes, then I think the .NET community is going to have to award you a 'Brummie' (as in Chris Brumme). Brummies are awarded for regularly posting uber long, can't-find-this-stuff-anywhere-else, technical blog entries. :) Keep it up. This is excellent stuff.

Thanks Tom. Yes, much of the latter half of this post was directly inspired by that section of SICP. I think that was my favorite section, even more so than the metacircular evaluator.

Excellent post. That's a great solution for Hamming numbers. It'd be a shame not to show the analogous Haskell code for this: Haskell is a language which is lazily evaluated throughout (well, the standard just says non-strict semantics, but lazy evaluation is the most common implementation of that). The types of hamming and (#) (that code's name for the ordered merge operation) are inferred by the compiler, and pattern matching makes the breaking down of the list in the merge algorithm very clean.
For those unfamiliar with the language, comprehension might be helped by knowing that (x:xs) means a list which starts with x, and the rest is a list called xs, and xxs@(x:xs) is a pattern that matches a nonempty list, binding the first element to x, the rest to xs, and the whole list to xxs. The little bits after the | symbol are called guards, and are conditions which are tried in turn to see which right hand side of the equation applies.

Cale: Thank you for posting the solution to the hamming problem in Haskell. Haskell is a very elegant language that I love and use often. The C/C++ Users Journal had an article giving an elegant solution to this problem in both Haskell and C++ (using the STL). The code, and a discussion, can be found at LtU:

Here is a Ruby solution:

hammings = [1]
ms = {2 => 0, 3 => 0, 5 => 0}
20.times do
  n,m = ms.map{|m,i| [hammings[i]*m,m]}.min
  hammings << n if hammings.last != n
  ms[m] += 1
end
hammings # => [1, 2, 3, 4, 5, 6, 8, 9, 10, 12, 15, 16, 18, 20, 24]

Bradley: Very nice. I especially liked the discussion on LtU. The Haskell solution is basically what I implemented, but I did not implement a scale function and instead just used map.

Jules: I like your iterative solution. Can you reformulate it to be an infinite sequence?

Fascinating stuff. But, oh no! I thought I understood closures until I saw the following:

IList<BigInteger> onesStream = null;
onesStream = new Stream<BigInteger>(1, () => onesStream);

My (wrong) understanding was that when you define a lambda that references variables in the surrounding scope, a special class is created behind the scenes whose member variables match the names and types of the variables being captured. So, I would have expected the lambda to still return null the first time that it was actually executed. How does the lambda "know" that the null got overwritten, since that appears to happen right *after* the closure is set up?

Patrick: Don't worry, your understanding wasn't too far amiss.
All I need do is quote you and make a slight modification: "When you define a lambda that references variables in the surrounding scope, a special class is created behind the scenes whose member variables *are the* variables being captured. So, I would have expected the lambda to return *the current value of the variable* the first time that it was actually exectued." For another example that will clear things up see this post: Cheers I have been visiting this blog for a month or so after stumbling on it from Google. I am behind the curve and I usually have to read each paragraph more than once to try and comprehend 50% of it. Any links to resources like "Yet another language Novice" or what SICP stands for and some links to working on my learning curve of C# and Functional Language would be greatly appreciated. The paradigm shift for me is difficult to wrap my brain around. SICP stands for "Structure and Interpretation of Computer Programs" by Harold Abelson, Gerald Jay Sussman and Julie Sussman (MIT Press). I ignored this book 15 years ago when I was at Uni (I was young and foolish and besotted with C at the time, where as now I’m old[er] and foolish). I’m reading it now, and wow – I can literally feel my mind expanding. I can’t recommend it enough. James: It is very commendable that you are working hard to educate yourself. I am sure you will get there. I agree whole heartedly with Tom and discussed the process learning to think functionally in the following post: And if you don’t understand something then just ask. Don’t ever feel embarrassed or like you are taking our time or whatever. Someone will either answer your question or I may make a post revolving around the answer to the question. If you have a question then it is likely that many other people have the same question. We are all fellow travelers on the road to understanding. Welcome to the twenty-first Community Convergence. 
I’m Charlie Calvert, the C# Community PM, and this Welcome to the twenty-first Community Convergence. I’m Charlie Calvert, the C# Community PM, and this I understand IEnumerable<T> and lazy lists. Can other data structures, like a binary tree, be lazy? Grant: Yes, other structures can be lazily built as well. Here is an example with binary trees: static void Test() { var array = new[] { 1, 2, 3, 4, 5, 6, 7 }; var tree = array.CreateTree(); Console.WriteLine(tree); } static IBinaryTree<T> CreateTree<T>(this T[] array) { return CreateTree(array, 0, array.Length – 1); } static IBinaryTree<T> CreateTree<T>(T[] array, int start, int end) { if (end < start) return null; var mid = (end – start) / 2 + start; return new LazyBinaryTree<T>(array[mid], () => CreateTree(array, start, mid – 1), () => CreateTree(array, mid + 1, end)); } Where LazyBinaryTree is very similar in definition to LazyList. A more practical example is perhaps a parse tree where sections of the parse tree are not realized (parsed) until they are needed. Design patterns have been all of the rage for a number of years now. We have design patterns for concurrency, It’s black magic for me – but not all must new all. Best regards. I realize I’m commenting very late on this post, but I only ran across it just now. Is the Hamming problem so hard without lazy streams? Here’s my solution to print all Hamming numbers up to 1000, in sorted order: import java.util.TreeSet; public class Ham { public static void main(String[] argv) { TreeSet<Integer> s = new TreeSet<Integer>(); s.add(1); while (true) { int i = s.pollFirst(); if (i>1000) break; System.out.println(i); s.add(2*i); s.add(3*i); s.add(5*i); } } } Yes, that works quite nicely. But doesn’t the solution just act like a lazily built stream?
https://blogs.msdn.microsoft.com/wesdyer/2007/02/13/the-virtues-of-laziness/
Insights Into Awareness Book I A Collection of Articles By Bentinho Massaro Free AwarenessŠ | Insights Into Awareness: Book I | 1 Copyright 2010 All Rights Reserved ALL RIGHTS RESERVED. No part of this report may be reproduced or transmitted in any form whatsoever, electronic, or mechanical, including photocopying, recording, or by any informational storage or retrieval system without express written and dated permission from the author. If you wish to receive such permissions, you may send an email with your request details to: bentinho@free-awareness.com However, feel free to print out this document for personal use and share it with friends, family and/or those who you feel might be interested, without forcing it on them and without claiming to speak on behalf of the Free Awareness© organization, its author‟s or its community. If you wish to send the book electronically to friends/family/interested and you have their permission to do so, please send them the link at which this book is originally found and from which they themselves can choose to save the book directly to their computer. Free Awareness© | Insights Into Awareness: Book I | 2 Table of Contents Author’s Note: Short Introduction to Free Awareness 4 6 Chapter: 1: 2: 3: 4: 5: 6: 7: 8: 9: 10: 11: 12: 13: 14: 15: 16: Every Experience Starts With awareness How to Recognize Awareness? – Analogy Free From Believing in Appearances Free From Believing in an External World Awareness is Naturally Present Being Beyond a Personal Identity You are Always Already Rested in Awareness Beyond „me‟ Lies True Stability Let Life Happen! What is Ego Really? Free From the Interpreter Stop Dividing and Be Free Now Awareness Cannot Be Defined Don‟t Try To Become What Already Is Free From Needing Sensation Making “The Choice” What is Love? 
8 11 14 19 24 27 24 38 42 47 55 58 62 65 68 73 Free Awareness© | Insights Into Awareness: Book I | 3 Authorâ€&#x;s Note: Dear Reader, The following texts are articles written by myself over a few months time. They can also be found on: 1) 2) Free Awareness Forum These Articles are quite random in subject and tone. They came about simply because I wrote whenever I felt like writing something related to Awareness. I wrote these articles spontaneously and often without premeditated purpose or structure. This is therefore not a highly structured e-book or a coherent instruction manual. Structured and specifically designed Teaching Programs will be made available soon. Estimated arrival of first Teaching Program: November 2010. You will be able to view all currently available teachings if you go to the Academy section of this website: This e-book is simply a collection of valuable articles on the subject of Awareness that will help you to further recognize Free Awareness in your own experience and help you to loosen up some of the ideas you might be clinging to which seem to obstruct your recognition of Awareness. Note: Because these articles are taken from our forum, they will occasionally contain forum-specific guidelines or requests. You do not have to pay attention to them. Free AwarenessŠ | Insights Into Awareness: Book I | 4 One forum-specific term you will see more than once is the acronym “ATP.” ATP stands for Awareness Training Program. ATP‟s were themespecific sharing programs in which I wrote a couple of articles for a given period of time while a group of people commits to recognizing Awareness for that same given period of time. Some of these articles have made it to this book. Finally I wish to state that this book might be repetitive on many occasions. We try our best to spice things up and keep things fresh as to offer more angles of „approach‟, but eventually it‟s all about that single recognition of Awareness. 
In order to bring about the best results, repetition and consistency is actually very valuable. With books that are simply about gaining knowledge, it‟s enough to hear a fact once or twice. However, books like this one are not about gaining knowledge per se. In the first place, books like this only have one single purpose in mind: to help you experience/recognize what is being hinted at. So hearing something about awareness once or twice usually does not suffice due to our habitual way of thinking and seeing. We actually need to be reminded of that same „experience‟ (the recognition of awareness) again and again in order for that experience to become fully apparent and life-transforming. So just relax and enjoy what you are reading moment by moment. Struggle less, relax more. Love & Wisdom, Bentinho Free Awareness© | Insights Into Awareness: Book I | 5 Short Introduction to Free Awareness For those familiar with Free Awareness© won‟t need this explanation, but if you are new to us you might want to read this quick introduction to our vision and practice. Free Awareness© teaches and assists people in recognizing what is always already here: Natural Awareness. Awareness is the open basis in which every perception we know of appears. Awareness is always here regardless of how we feel or what we may be thinking. The fact that Awareness is always here at the very root of all our experiences, makes it worth knowing more about. The fact that it is always present in every single one of our experiences, indicates that there is a stability within us that we might have been missing in the chaos of our everyday lives and in the habit of constant story-telling. In order to become familiar with this peaceful stability that‟s at the root of all our experiences, we can commit ourselves to acknowledging awareness to indeed be present in all sorts of experiences and situations. We consistently confirm to ourselves what is always already here. 
Relaxed attention reveals the natural presence of awareness.

How to Recognize Awareness?

In short: Simply notice that something is reading this text right now. This generally becomes a little easier at first when we relax our stories about everything. When we relax, we can usually notice the presence of awareness quite naturally. We notice how there is something still present. In fact, we notice that life is still living. Or you could say: that we are still present. Not necessarily as a story, idea or personality, but a natural presence is always here.

From this simple initial recognition we build onwards. More and more we start to recognize the fact that we are present as awareness. In other terms: we get used to confirming to ourselves that this presence is constantly here, regardless of the 'look & feel' of any given moment. The more we do this, the easier and more obvious it tends to become. Even in situations when we are not feeling relaxed, we can start to notice how we are present as awareness as well!

This Awareness itself will be discovered to be unaffected by the thoughts and emotions that rule our experience of life. Awareness is always here as the open presence which is aware of whatever is happening within its own presence. We may even start to notice how, even when we feel depressed or extremely happy, 'that which knows' we are having this sensation of depression or happiness is in itself unaffected! Experiences change, but the knowing of them is ever free, clear and stable. Awareness is the great stability that underlies every moment. We are here to help each other discover this in full.

With this information at hand, you should be able to understand most of what is said in the following chapters, even if you are new to this kind of material. Enjoy!
1 Every Experience Starts with Already Perfect Awareness

Whenever we have a thought, there is something which knows the thought. This something could be described as openness, spaciousness, presence, cognizance, or awareness. Awareness is open, spacious and free. It is unaffected by whatever is expressed within its scope.

We often have thoughts that project some achievement or effort for us, which say: "We need to be more aware, we need to meditate more in order to reach Free Awareness." Whenever we feel that such thoughts and frustrations bother us because we feel no relief or connectivity with who we are at this moment, we can start seeing how the very thought that claims "we are not yet complete and we need to do something in order to see it again" is simply a thought arising within that already present awareness. Because who or what is it that is already seeing the whole notion that says "you must first do something in order to see it again"? IT does! Naturally so, effortlessly so.

Thoughts are often deluding us. They tell us to find something somewhere or sometime, but that which we are ultimately looking for (Peace, Love and Well-being, Fulfillment) is already here as the very open fabric of these thoughts. Awareness is already here and includes that notion of us having to become more aware. That's just a thought within already pure awareness.

So that which we are seeking knows when we are seeking. That which we are trying to reach is that which includes the idea of 'us' trying to reach something. Like planets exist only within space, so do thoughts and ideas about who we are or who we should try to become exist purely in the space of aware cognizance. When we acknowledge this spaciousness in which all appears, again and again, we will see that whatever we - through our thinking - are motivated to achieve, starts in awareness.
It is awareness which sees that very thought. This way, we can gradually, or in some cases quite suddenly, let go of our belief in the story of these thoughts. We start trusting more in awareness, and less in stories and ideas. Whenever we are not interested in what the story has to tell us, it has no power over us.

How do we become uninterested in the stories of our ideas? By realizing consistently that that very story is entirely appearing in the spaciousness of already present awareness. In this way we come to see that relaxing into the nature of life as it is, is much more empowering and beneficial to the situation and our sense of well-being than being tied up in ideas.

So every single decision we make to start something is made within Free Awareness. Every single choice we make to walk some sort of path or road towards some goal - whether we seek awareness, enlightenment, or just that ice cream around the corner - that choice starts in the presence of Free Awareness. Recognize this, notice this, know this.

When we consistently see this for a few weeks or months, we will see how more and more we will be naturally at peace as we are. We will be clear on things automatically. Our perception opens up and allows for everything to be as it is. We come to experience how everything exists only right now, and that the space which contains and includes every single experience and appearance within the universe is an appearance of awareness. We realize everything is simply coming and going in - and as - the space in which it occurs. That space happens to be cognizant and sees everything within itself.

This way our recognition of conventional ideas, and of us seemingly being lost in some projected time-line of achievement and goal, starts to shift to the recognition that that entire process is happening within us; awareness. So what is there left to find and seek for?
Knowing this, is it still worth being so tense all the time in order to reach something? Is it still worth it to keep on running after fictitious desires, hoping that once fulfilled they will make us happy, when we can simply start to see the freedom in which it all appears? The freedom that is already here by the very nature of things. We can simply relax and notice there is already an awareness in which everything appears, without believing in the stories of our thoughts and ideas about attainment. Attainment too is an idea, appearing and disappearing, without any effort or trouble, within natural knowing.

2 How to Recognize Awareness

When introducing people to get a 'taste' of this natural presence which we can call awareness, for the first time, I often say: Just stop thinking for a second and see what remains...

In fact, why don't you do that right now: for 5 seconds, stop thinking about everything obsessively, and just relax. Just see what remains... notice that there is a natural presence which is here whether thoughts arise or not. A natural cognizance, you could say. It's then that you notice that there is something that remains when you are not thinking. Like a background that's just present.

The goal with this is not to motivate you to stop all thinking or to try and retain a non-thought state of mind. Instead, that initial moment is just to help you recognize awareness. It's just an introduction to make you aware of the fact that there is a constant, natural presence which is unaffected by the thoughts, ideas and experiences that are experienced within this natural presence. Don't ever believe that awareness, or your recognition of awareness, is dependent on your being thoughtless. It's simply not true.
If you notice that you are in fact believing you need to get rid of thoughts and emotions in order to be free, then let this very moment be the perfect moment to bring that belief to a complete stop :).

In fact, it is crucial in this world of pace and chaos to be able to recognize awareness even while thinking and feeling many things at once. It is crucial that we all come to know ourselves as that openness which can maintain its openness in the face of great chaos. Awareness is always aware, always simply present.

A Simple Analogy

Why do I tell you initially to stop thinking then? It's because we have grown so accustomed to being interested only in our thought-forms that we miss the very basis of all our experiences. So when we stop thinking for a moment, we have nothing in our sight to distract us, and so we naturally notice that subtle presence that underlies all thoughts. We naturally notice that we are in fact that awareness that remains! Surprise: we are not our thoughts!

It's like this: Have you ever watched television, and suddenly the image turns black? Or the black/white jitter takes over? There is no content on the screen, and suddenly you are reminded that you were actually staring at a television set... When the screen was filled with stories that interested you, kept you distracted, you never even realized you were watching a television screen. All you recognized were the changing forms that were displayed and the stories they were telling you. You were missing the obvious fact that the television set is the basis of every single image shown.

And just like when something that pulls your interest is reflected in a mirror you are looking at - for example your face - all you see is your face and the story it seems to evoke; you notice all your facial imperfections maybe, or your beautiful characteristics.
Because that is what you are interested in. But when a mirror reflects nothing that pulls your interest - like, for example, an empty space of your room - you naturally notice that there's a mirror in your room instead of being distracted by its reflection. For the first time you see the mirror itself, for what it really is, instead of being distracted by the story of what it reflects.

Similarly, Awareness is the basis of all your changing perceptions and is most easily noticed to be present when we stop thinking about everything for a moment. There can be thoughts, but just stop thinking so excessively for a moment and notice that there is some natural cognizance that's there 'in addition to' the thoughts that may arise.

But as I said, it's important not to dwell on this initial instruction. Now that you have realized that all images were actually just pure television screen, when the images and the stories they tell you start to fill up your screen again, you can start to remind yourself of the fact that every image is nothing more than pure screen. You can now actually see the television screen as a screen! No matter how elaborate, individual or authoritative the story of these images might seem, they have no individual basis or power and have never been anything more than pure screen. All images are now realized to be equal, even if their labels and stories tell you otherwise!

So where you might have needed that initial moment of blankness to recognize what was really true about the stories, now that you know this basic ground of the perceptions in your direct experience, you can start recognizing awareness in every perception, in every thought. For that alertness that is naturally seeing the moment of no-thought is still that exact same alertness that is witnessing the maelstrom of thoughts and emotions.
So instead of recognizing only the images of life, commit yourself from this moment on to recognize the fact that you are aware of all these images. Not as a separate entity, or a separate observer, but just as a natural seeing in which all experiences are perceived. YOU are always that same peaceful, spacious, open awareness and YOU can never not be present. Thoughts come and go like the wind, but YOU are the space for them to either be in, or not be in. Completely unaffected, completely free already!

3 Free From Believing in Appearances

Note: this article was originally written as a basic instruction article for the Awareness Training Program (ATP) #2: Free From Belief in Appearances.

What are appearances?

An appearance is everything which appears within your perception. Everything that belongs to your perception. Easy enough, right? Everything that we know and experience is an appearance within our perception. We could also say that every single experience is an appearance within Awareness. So there is always an awareness in which and by which something is perceived. So simply put: every perception is an appearance.

What is there to free?

Every appearance has a story. It's not so much that a tree or a stone itself has a story, but our thoughts about that tree and that stone hold many stories. More specifically, we will focus on our psyche. Appearances that belong to our psyche are thoughts, emotions, feelings, intuitions, realizations, concepts, ideas, belief systems, philosophies, internal chatter, etc. All these are our mental and emotional appearances. All these appearances tend to contain stories (about the situation), labels, descriptions, definitions, conclusions, etc. about ourselves and everything we experience every single day. We have descriptions and stories about pretty much every appearance/experience that exists within our perception.
According to Free Awareness, awareness itself is already free from being limited by any appearance. Our awareness, the perception itself - or we could also say: the seeing - is free from every condition that is displayed within its own perception. So even as we experience intense fear, for example, that which perceives the experience of fear is free, clear and unaffected already.

Free Awareness is not a human creation. It is not some result we create with our thoughts and actions. Free Awareness is already the natural condition of all human beings - of all appearances, really - and it is the primary essence of every perception/experience.

Why is it that we continue to fail to recognize this complete and unifying freedom, then? What if all we need to do is 'loosen' our beliefs in everything that keeps appearing within our awareness? It is my experience that this is exactly all that's needed.

Instructions

In this ATP we will support ourselves and each other in noticing this habit we have learned of automatically believing in whatever our minds create for us to see.
If our freedom is dependent on a cultivated state of mind in which no disturbing thoughts arise, it is not true freedom. True freedom is free just as much in the appearance/experience of depression as it is in the appearance/experience of meditation. Free Awareness is all about true freedom; the kind that is always already here as our true being, our basic condition, which we just need to learn to acknowledge again.

All we have to do is gradually free ourselves of this habit of believing in everything that appears. We can then start to see how we, when a thought or emotion arises, automatically jump onto it with the speed of light. This happens so quickly and automatically that we rarely even know that there are two aspects at play here: 1) An appearance (let's say a thought), and 2) The jumping in, or: the act of believing in the thought-form.

We are generally unaware that it is by our own interest and belief in these appearances that these appearances have power over us. We tend to overlook the fact that the appearance itself, no matter how gruesome or beautiful, does not really have an independent identity, power, value, implication or meaning of its own. The appearance is just there as a creative display of (and within) the perceiving/awareness. Appearances will continue to appear for an eternity; we have no conscious say in that at this point. Therefore we should not strive to control or change the contents of the appearances; rather, we can start to see how our belief in them gives them imaginary power and value. Hence we can start seeing how these appearances are not the issue.
The Result

If we can start to notice this process of jumping onto our own thoughts, emotions and concepts by believing in what they propose or have to say, and combine that noticing with our commitment to being free from believing in these stories, then naturally, as we become more aware of this process, our belief in the appearances will 'loosen up'. Gradually we will have much greater clarity in every situation, and we will start to truly know ourselves as the free awareness that's right there, constituting and knowing every single appearance.

What happens when we suddenly stop believing in everything that appears within our mind/perception? We will start to notice how there is an already present 'field' of peace, wisdom and pure presence that is underlying every appearance. There is a formless, untouchable space of Free Awareness which is innate to us and in which all arises, endures and dissolves naturally without leaving any footprints whatsoever.

We will come to see how in reality we are that openness of free awareness in which all perceptions are created. We are that stable basic ground of all phenomena. We can either identify with the fleeting, imaginary forms and their stories, or we can identify with, or relax into, this formless ground in which all appearances arise and are effortlessly free by nature. No appearance is a problem.

When we start to identify with this open freedom - which is awareness - more and more, we will automatically invest less and less belief in the stories, concepts and sensations of our minds. Thus we will open our perception to the true nature of reality, which is already present in every perception. We will come to see how even when anger, suffering, doubt or any other appearance we might have called impure or negative before arises, we are still present as the stable ground of free open awareness.
4 Freedom From Believing in an External World

Note: The article below was originally posted as a support article for the participants of Awareness Training Program (ATP) #2: "Free From Belief in Appearances."

The Ultimate Appearance: The World/Maya

Dear members and participants,

Today I would like to share with you the importance of realizing your freedom from the ultimate appearance: the World as a whole. You have now been observing your beliefs in the appearances for approximately 10 to 14 days. This was good practice and probably gave you more insights in regard to your own personality, your thoughts and emotions, and how you tend to deal with certain situations.

While this is a great start, there is something to realize beyond the scope of our personal drama. If we only relieve ourselves from believing in personal stories like "who is to blame" - which, again, is a great start - we will miss the fact that we believe in the appearances as a whole (the world) being real or existential. If we continue to miss the fact that we believe in this world being 'real' (in the sense that we are trapped within a body that's positioned somewhere within that big universe that's 'out there'), or that it exists independently from us as awareness, we miss something crucial: All appearances are perceptions.

Where are perceptions occurring? What knows all perceptions? It is within awareness that these perceptions exist.

You are not in the World, the World appears in You

All appearances together are what we may call the world, or sometimes it is called Maya. It is much like a dream-world which simply appears/exists within mind/consciousness. There cannot truly be proven to be a world 'out there' that's separate from awareness. 'Separate from Awareness' means that it exists as something other than pure perception; as an individual object, an autonomous reality.
Even if we could find solid indications of there being an actual world out there, separate from awareness, then where would that perception/proof occur? By what is it known? That very piece of evidence, and the process of investigation, all appears within awareness. Can we prove something, can we state something is here, when we are not aware of that perception? As an analogy, Einstein once suggested: "Does the moon really exist if we are not looking at it?"

The entire world exists within awareness. Like a dream, it exists within the mind alone and is never separate from mind. Dream = Mind, World = Awareness. There is no dream that does not consist of mind. There is no dream that exists of some individual substance or power/nature. All appearances within a dream, no matter how real they look and feel to the sensory perceptions, consist of Mind/Awareness and appear in Mind/Awareness alone.

So too does the 'biggest' appearance of all - the world in its totality, the universe - appear within awareness. In this sense we can use the proverb that says: "You are not within the world, the world is within you."

We can try to pick apart our thoughts and emotions for an eternity, as if they are objects perceived by a perceiver, and not ever realize awareness fully. We have to start seeing that all appearances exist as one single appearance/perception, and that all 'individual aspects' are nothing more than one, total, pure perception within awareness. Only if we start to recognize that awareness is free from, yet fully pervades/permeates, the appearances as a whole at all times, will we truly make a shift in experiential consciousness - one which starts to recognize the natural presence of what's never touched by any appearance, yet as which all appearances exist.
From Elaboration to Realization

In this way we can see that we need not differentiate all kinds of individual aspects of the dream-world and further elaborate/examine/define/use any of them in order to know ourselves as the knower of all of these. All aspects are completely interconnected with all 'other' appearances of the dream-world; all is one single perception, consisting of nothing individual nor existing as distinct from awareness/pure perception/the perceiver.

Nothing 'out there' can define what we are. There is nothing to find 'out there' in the first place. It's only so that everything 'out there' confirms us as already being here as what we are... You see? Every single perception simply proves that you know them to exist... All appearances/experiences, no matter what they are, simply confirm that we are free awareness. Over and over again.

Life is like walking into mirror after mirror, with there being nothing to find beyond these mirrors. Every experience, no matter what it seems to indicate or mean, only reflects what is forever one, perfect and complete. Acknowledge every moment, every experience, every occurrence, to be a direct reflection proving the presence of awareness.

Know that believing in 'this world having a reality of its own' is purely our personal idea about this world, which arises and dissolves as ideas within awareness. We can either believe in that story or we can relax the need to latch onto any story. That which believes, or does not believe, and that which has the apparent choice between these two, is the play of conscious and subconscious attention, happening within ever open, unaffected, free awareness. All options, all choices, all experiences are made in the present openness of awareness in which all things arise and eventually dissolve. All assumptions and definitions we have learned about this world are flawed by nature, since no concept can ever define truth.
All concepts see the world as something 'out there' in which something can and has to be reached, found or achieved.

Be free from actively believing; period! Just for a moment, be free from believing in anything whatsoever. This moment of alert openness will reveal a natural presence to be the case, which is always present, even when we do believe in ideas.

Be free from appearances as they arise, endure and dissolve; period! Let everything exist as it pleases to exist and identify with nothing in particular. Just rest undistracted, as you are already, at peace with all appearances, no matter the intensity of their stories. Soon you will notice how this spaciousness of being, this openness that knows all experiences, is ever stable and still. It is the sole source of every perception/appearance.

To abide as that stable ground of being is to allow for all to appear as it pleases, to let go of trying to control, manage and modify. This, again, reveals the nature of life to be forever beyond experiences, in a very simple and direct way. From this arises a great sense of peace, wisdom, freedom and bliss. Over and above this, one becomes increasingly more efficient in responding to the situations and challenges of life.

5 Awareness is Naturally Present

Awareness is Always Present Already - No Effort Adds to That

Right now a peaceful 'space' is present as the space in which your thoughts and emotions arise and dissolve back into... As you are reading this, it is that which is aware of the words you read and hear mentally. It is that which is aware of whether you understand it or not, and of whatever other thoughts might appear. If you just stop actively thinking about everything for just a moment, you will notice that you, as a presence, are still there... When we stop thinking for a moment we can immediately notice this presence as being naturally here.
We might notice it as being a clear seeing, or as a cognizant presence, or as a spaciousness - or, simply put, we may discover it to be an awareness that's already there. The next thought that comes can be realized to appear in this exact same 'aware space.' The space itself never goes away, even when intense thinking is present. It is there equally in intellectual chaos as it is in meditative thoughtlessness. All experiences happen in THAT.

Paying more attention to this spaciousness in which life appears and disappears as our perception, instead of constantly focusing on the definitions and descriptions of whatever appears within this space, will effectively open up our perception and experience to the benevolent presence of what's always here without any effort at all.

We will start to see, as we recognize and acknowledge this presence more and more, that nothing that is known ever leaves this space. Awareness is inescapable, since everything that you know, you know only as a perception within awareness.

Seeing clearly, in your own encounter with acknowledging awareness, that this space which underlies every thought and perception is always there without interruption, you will have a choice as to how you wish to experience life: either as a story-driven character, or as the openness in which all this appears to happen effortlessly. The choice is simply this: either to recognize/emphasize/trust the presence of awareness in which thoughts and situations appear, or to trust/emphasize/follow the seemingly individual appearances and descriptions about what is perceived.

More and more we will come to a complete, self-satisfied, joyful peace through simply recognizing the obvious and evidently present openness of awareness. The more we do this - gradually more often than suddenly - the more we will start to naturally recognize this presence throughout all states of mind and experiences.
The statement "Awareness is Always Already Present" becomes meaningful once we start to realize this in our direct experience: that awareness is actually always there as the free knowing space of whatever appears in its perception.

This cognizant knower, this pure presence which is self-aware by nature, is the basic space in which all is enabled to exist. We will come to see how everything that appears is in fact a direct expression of this space itself. See for yourself how this peaceful presence, which is there as a sense of alertness when the mind is open and attentive, is also there when thoughts come and go actively within perception, and you will find a most beneficial and fundamental basis which is right there with you, always and already.

The choice to trust in this space by relaxing as this self-cognizant openness is yours to make at any time. To make this choice more natural and obvious, simply continue to recognize this presence of awareness which remains unaffected, beyond, yet within, all forms and appearances.

6 Being Beyond a Personal Identity

Note to reader: The article below was originally written for the Awareness Training Program (ATP) #3 participants. So you will encounter a sentence or two that hints towards their participation specifically; other than that, you can still very much benefit from this article by just reading it.

Welcome to the start of this ATP. I thank you all in advance for your commitment towards yourself and the world. Discovering your innate freedom is the greatest gift you can give anybody, including yourself. From fully discovering your natural condition of awareness/love you will become a vessel for unconditional love, wisdom and benefit for everyone that enters 'your' experience. Today you can start by reading this basic instruction article.
It will give you a decent general idea/sense of what our intention will be for the next 2 weeks. If you have any questions, feel free to click [POSTREPLY] in the top left area of this page, just above my post. Then simply submit your query.

What is a Personal Identity?

A personal identity, to me, is actually a real challenge to describe, since I find it to be non-existent when I look for it. A real personal identity, as something that exists in and of itself, as something that has a core or a soul of its own, is simply nowhere to be found... There is only the seeing and being of what I describe as: free awareness.

So then, what is a personal identity? A personal identity is only a thought. It's only an idea about yourself. If you go looking for the actual identity itself, the actual person or core of the identity, you will see that you can never find it. Surely you can find thoughts and emotions arising continually, but these are just arbitrary thought-forms that revolve around some sort of mysterious identity called "me." These are the referentials, the thoughts that point toward an identity. But where is the very thing that these thoughts are pointing at? Can the identity itself be found?

Truly, all we ever do is talk about our personal identity as if it's there, but no one has ever seen a personal identity yet. Simply put: this is humankind's biggest joke. 99% of our lives revolve around something that no one has ever found to actually exist. HA! What a play! Better to recognize this and laugh compassionately than to indulge in this game of mere fantasy and make ourselves suffer.

We often believe ourselves to be something... Conventionally we start out believing we are our thoughts and emotions. Then, as some of us become more spiritually educated, we tend to believe in a soul of some kind, or we believe in a subtle energy body with which we will someday leave the physical body.
So then we believe that that is who we are. But are we really? What has changed from believing ourselves to be our thoughts and emotions, and now believing ourselves to be a soul? Where is the soul? Can a soul suddenly be found? Even when we think we experience our soul, in, let's say, a meditation, is that not just another, albeit subtle, sense of self? Just a more sophisticated sensation arising within self-knowing space? What has changed really, except your belief about who you are? It's all just a bunch of ideas pointing and hinting at a reality that does not truly exist apart from these referentials... Therefore, I say that the personal identity really is pure fiction. If you look with clarity, without believing in any of your ideas, you will see that there is no identity existing in and of itself anywhere. There is no 'me' that has its own autonomous nature or source. There is only pure seeing/being... Just the peaceful presence of what's looking: Awareness. These thoughts about something called 'me' are, I think you will agree, completely unreliable when it comes to truly wanting to know ourselves. Since one day these thoughts declare us to be such and such, and then the next day that changes to a so-called more refined belief about who we are. But beliefs won't set you free. Ideas on which we base our actions and thoughts will not make us see first-hand the true nature of our being. The sense of 'I AM' or the sense of 'being someone' is also an appearance/expression within awareness. We could say that beyond our thoughts and ideas lies a more subtle feeling, a more primary feeling: that of 'I AM'. We can all know this sensation... it's the simple, basic sensation of being here right now...
If you just remind yourself of the fact that you are here right now, and you 'feel' your body and presence, and simply acknowledge the fact that you are here, you will see that there is a sense of being (an individual). But even that sense will be transcended when we relax our focus/attention and allow it to rest in its natural, wide-open nature. It's often this sense of being 'a someone' that attaches itself to the arising ideas and feelings. It is this sense of individuality that we believe in; it is the belief that we are this individual we have come to identify with, that finds a connection with thoughts and emotions and ideas about itself to further elaborate and refine its existence. Whenever forms arise in consciousness, it is out of this fictitious sense of being someone that we attach to these descriptions and feelings about ourselves. We personalize every impression that we are interested in. We generally leave aside those impressions that don't interest us. In a way you could say that our discovery of being who we truly are results in realizing that even the idea of 'me' is simply a thought-form arising and dissolving within our perception. And when we realize there is no actual 'me,' then we gradually (or suddenly) lose interest in all thought-forms as a means to live by! Because we see that they have nothing to tell us about who we are or what the world is all about... Surely we can still think, and many thoughts may even arise about many things, but our interest in believing in their suggested ideas and stories simply fades as we start to take more delight in relaxing in our peaceful, clear, wise, loving and naturally responsive (not reactive!) essence. We can now come to our intimate, most direct conclusions without the use of thought. Thoughts can only describe so much, and nothing of it is truly fresh, truly alive, truly direct, truly insightful.
Thoughts and ideas are really just flat concepts coming and going. We need not control them nor interfere with them in any way. We can leave them aside altogether. We can simply be, without being pulled by any thought into believing in it. From this, we discover a stable, genuine, open presence, in which all that we know is seen to arise, endure and dissolve without any help or effort from our side. Thoughts, ideas and emotions simply resolve all by themselves if we just let them come and go without having interest in what they have to say. They are their own undoing. We can understand many things from this freedom immediately, without needing concepts in order to understand. All that we see will be seen in the light of true understanding. Then, when a thought arises that says, "I am not good enough for this, I don't deserve it," we aren't moved in any sort of way, because we know there is no 'me' to be either worthy or unworthy; we simply are this wide-open awareness and limitless love, in which a thought arises that refers to a 'me' that never existed as separate from that referential thought. All forms are simply expressions of, and within, awareness. We, as awareness, are all-inclusive, yet remain unaffected and unaltered by any form or happening. All that appears is pure awareness. Even the idea or thought or sense of 'me.' That too is a sensation, a perception known by that which we are already, without effort. Who are we beyond our ideas and sensations of being an individual or personal identity? That will be our million-dollar question in this ATP...

Some quick basic instructions: What to do?

The words 'what to do' are actually highly inappropriate, but it's the only way to start. What we are about to discover is not something that we create or do. It is not the result of our guided thoughts or actions.
Instead, it is that which is always here, even when you don't recognize it your entire life. Our nature is always That. It has always already been That and it is never not That. You have never not been That, for you and That are the same! Here are some suggestions to directly experience the confirmation that you are not your ideas about yourself or even your sense of being (someone or some presence):

1) Relax! - The most important thing is to just relax your focus into the natural openness of peaceful attention. We cannot understand this by continuing to follow our thoughts to wherever they project their descriptions. It is only in the complete relaxation from focusing on any particular object, form or definition, that we immediately discover a presence that's naturally here, as openness itself. If you simply relax your attention and your need to seek and define, the presence that shows itself to itself is there.

2) Remember whenever you remember - Whenever you remember this practice/article or your commitment to awareness, see that at any given time there may be a sense of self, or an idea about who you are. If you simply allow that thought to come and go, if you allow it to be, to happen, you will see from a peaceful openness of attention that all arises in natural cognizance. So from this relaxed and open perception, without the need to define, divide, categorize, describe, understand, explain or seek anything, simply be that relaxed openness that you are, and all thoughts or ideas about a 'you', and all thoughts and emotions that might attach to that idea of 'you' at any given moment, will be seen to simply be another display within your awareness. It's not what you are, for you are that which is openly and unaffectedly aware of them. They are simply appearances of and within your openness, but their descriptions have nothing to tell you about who you are.
3) At the end of each day, write a little update on your experiences - How did it go, what confused you, what have you discovered? Etc. Feel free to ask. The community, including myself, is here to support you in this. You will see that writing down your experiences and observations will help you to see with greater clarity.

7 You are Always Already Rested in Awareness

Whatever appears in/as your life, awareness is effortlessly present:

The quote below pretty much sums up a lot of different, yet very similar expressions I have heard my own mind tell me, and which I have heard others express a lot as well. It is a good example of something which is a very common occurrence for all of us, and I would like to share some additional instructions/tips with all of you as a response to it. Listen closely, because this can really set you free in a very practical way. It worked wonders for me when these thoughts were bothering me, and I am sure it will work wonders for you if you take the suggestion to heart:

"I need to get centered and rest in awareness before I go back tomorrow. I do find it hard to remain as awareness while feeling physically taxed, which I am right now."

The basis for this instruction/tip/clarification is the part about finding it hard to remain as awareness. The part about feeling physically taxed is the part we can replace with any personal reason, challenge, idea, excuse, etc. of our own situation. Yes, I would suggest someone in this case to do so; to take some moments or time to reconnect to whatever makes you feel stable or centered. Or just take a deep breath; that usually helps. Then rest as you are and let everything that appears just be alone for a moment. However, this is what happens to many 'practitioners' of awareness: we feel we are going from being centered, to being not centered, to being centered again, etc.
And yes, we could say, in a way, that this is the case, but ultimately this is not truly what's happening. In reality you are never not centered, nor is there truly a center to go back to. Whatever it is you do/are/experience, you are awareness right there and nowhere else; you are the center in which everything is. Whether it feels uncentered or centered, rested or not rested, awareness is the ultimate center/space in which both these seemingly contradictory experiences appear, equally. When we find it hard to remain in relaxed cognition of awareness, instead of beating ourselves up about it and striving to be more aware constantly, we can simply acknowledge that even the statement "I find it hard to remain in awareness" has to appear in something... That this very thought or sensation is already seen by awareness... We know that this statement, this idea about our situation, arose. Somehow we just know this thought arose... So what is it that knows this statement? How could we know our thought that says "I find it hard to remain in awareness in such and such situation" if awareness was not already maintained all the way? I would suggest anyone to incorporate the following in their practice of awareness, while still feeling free to simply take the time to 'center' or reside in awareness as one is used to doing, but to do something else as well. So additionally, I would suggest everyone to again and again make this acknowledgment: "Awareness is even deeper and more inclusive than the senses of 'I am being centered' and 'I am not being centered.' Awareness is that in which both these sensations/ideas appear and disappear. Awareness knows, and is equally present in, my hectic and 'caught up' moments, as it is present in my 'resting in awareness' moments. What knows both states, and what knows the shift in going from one to the other?
That stable awareness which is truly always there in every state of mind is not only there when I feel centered and aware. It is also there when I don't feel centered and aware. No matter what I do or where I go, I will always be me, Free Awareness. This is a fact I can simply start to acknowledge more and more, regardless of my doing or non-doing." So yes, I would advise us all to regularly spend time 'in' our presence consciously, if we feel it gives us greater peace on a physical and mental level, just to relax... Additionally, I would suggest us all to again and again acknowledge that awareness is beyond even our experience of its presence/peace. It knows that peace just like it knows disturbance. This acknowledgment might not make you feel fully alive and free right away, but if we make this acknowledgment again and again (that awareness is always here already, even when we feel unaware and taxed in any way), and really trust and recognize that this is the case, and combine this recognition/acknowledgment with our regular 'practice' of resting as awareness, we will very effectively free our perception from believing in any kind of limitation/appearance/state of mind, and from false ideas about awareness or non-awareness. These are all just ideas and sensations that arise within perfect you.
Hence a great relief will take place along with a more constant and natural recognition of Free Awareness, which will only increase in clarity over „time‟. No striving, no trying, we simply see that awareness is unaffected by anything that we 'do' to experience it. Thus it is ever-present. This recognition of awareness being here even when we are unaware of its presence/nature and even when we belief in all kinds of appearances, will make us recognize this the next time we are engaged in similar actions/situations. It undermines all distinctions and dualities about awareness versus non-awareness. If we just take a moment to recognize that in that period/situation/action in which we considered ourselves to be unaware and caught up in beliefs, awareness was nevertheless present as the flawless knower of the experience of „being unaware and caught-up in belief.‟ Awareness was unaffected by the experiences and appearances. This recognition will come naturally more often, and will expand to being mind-blowingly obvious. This recognition, whether combined with or without practices that involve centering, will release all your ideas about free awareness into the free Free Awareness© | Insights Into Awareness: Book I | 38 open space that they are. All things are already fully rested in and as what you are looking for: Awareness. There is no appearance we can ever encounter that is not fully rested right there and then in and as awareness. Just to acknowledge this again and again and to trust on this recognition again and again is enough. Free AwarenessŠ | Insights Into Awareness: Book I | 39 8 Beyond „me‟ Lies True Stability Who is 'I' and who is 'me'? And who asks this question? My sense of self is as fleeting and subject to change as every other phenomenon in this world of perceptions. One day I might feel I am this, then the next day I might feel totally different. 
That which is there as a background/space for all senses to appear in and dissolve back into, is stable and completely unchangeable. "My sense of self arises... I feel I am here, I am someone, I am present, I am cognizant..." But if 'I' can notice that this sense of self is arising, then that implies two very important things:

1) That there is some subtler 'space' for this sense to arise in, just like everything else needs some sort of 'space' wider, subtler and more open than itself to exist in, and;

2) That this deeper space is awareness itself: the awareness through which I can say that 'I am aware of this sense of self'.

There is something beyond 'me'. Something that is free from, unaffected by, yet present in the experience of my most basic sense of being here. Because who is the I that can notice the sense of 'me,' the sense of 'being someone' and the sense of 'being present'? Both points refer to the same space/openness of awareness: that in which all sensations arise and dissolve is the same as that which is aware of these senses. All that arises in this space of self-knowing awareness is ultimately one with that aware space. Just like reflections in a mirror are nothing but mirror. We are perfect stability already! So if we wish to live stable, carefree and wise lives, we only have to recognize that we are already leading completely stable and carefree lives, even when instability and concerns arise as a sense of being! It all happens within perfect stable freedom! Therefore, we only 'need' to recognize that we are in fact not the sense of being someone, or even the sense of being present, for that too is a sensation, arising in You!
When you are being present, you are not 'more awareness' than when you're not present, because both the sensations of 'forgetting' and 'being lost', as well as the sensation of 'I am present', are just sensations that arise in awareness, and they are known by awareness effortlessly, already, always. Or else we would not know about these states of forgetting and 'not being present'. Awareness knows at all times. So there is nothing to worry about, or even to achieve. Just relax, just rest, and recognize the peaceful presence that is here without you doing anything for it to be here. Awareness is utterly stable, and the fact that the sensation of us being someone and the sensation of being present both fade away again and again implies that awareness is not dependent on them either. It is evidence that awareness is beyond even those sensations, as the ever-stable ground for all movement and change, including 'us', to happen in. You are that ground already! Nothing you ever think or do about yourself will add or subtract from that. All we 'need' to do in order to free our personal perception is to recognize that all that arises does so within this stable awareness which lies beyond the 'me.' 'Beyond' simply means that it is there even when the sense of 'me' is not there. Like space is there whether there are planets or not.

Nothing ever disconnects from You/Awareness

There is always this space in which your present set of experiences and feelings manifests. If you feel a certain way today, for example you feel like shit, then this sensation should not confuse you with thoughts saying: "Oh, I am feeling shitty, so I am now not so aware, I am not connected to myself. Darn it, I have to be more present, I have to meditate more." This is the most subtle nonsense that we are conditioned to believe in. But we can oh so easily free ourselves from this belief by simply seeing that it is utterly false.
So instead, let us ask: "If that were really the case, if I were really less awareness when I feel like shit, then what is it that knows this entire display of feeling shitty from beginning to end and beyond? And is that which knows my shit when it occurs not the exact same knower that's right there knowing the sensation that tells us: 'Oh, now I am connected, I am centered, I am present, I am spiritual!'?" Awareness is the ground for the sensation of me feeling shitty to arise in as an appearance, as an expression of awareness. The sensation itself does not consist of something else besides awareness, and it does not exist outside of, or as disconnected from, free awareness. For how else could you know that your sense of being someone feels like shit right now? You are That in which ALL arises, endures and dissolves in and as itself. Desires, fears, anxiety, emotions, bliss, hate, love, realization, non-realization, ignorance, enlightenment... All are appearances of, in and as awareness. Like dreams come and go within, and only as, the free and ungraspable ground of mind. You can always trust and relax in the openness which allows all to be as it is already. You can always lean back and recognize awareness to be the ground of all experience, even in the most immediate or challenging of situations. This 'space' of awareness never leaves your side, and all your thoughts, emotions, sensations and realizations happen within this already present, perfect openness. This is the original condition of all perceptions. Including the perception of the universe. The universe is also a perception arising in and as the pure original condition/state that equals Awareness. Relax and recognize the obvious. There is nothing to it. Then repeat it again and again, so that it becomes irrefutably clear to your experience that awareness is always already present.
9 Let Life Happen!

Life is happening - so let it! Quite. It appears. It disappears. Even when we believe ourselves to be the doer of something, even when we are in the very act of doing something using our thoughts/intentions and body, when we become aware of it in that moment, we will see how even that which we call 'doing' is simply happening as an effortlessly appearing phenomenal process within ever-clear awareness. There is no effort in any appearance of Life. Life does not know effort. Life simply happens naturally and effortlessly, moment by moment. Effort is that which we mentally 'feel' when we separate our sense of self from the actual experience and start thinking about the experience from a distance. It's only when we believe ourselves to be the doer of our life that we feel the effort, stress, and fear that come with being a manager of everything.

The Relationship between Doership and the Personal Identity

Whenever we identify with a thought about who we are, we automatically become the doer of the process at hand. Or so we believe/sense. The personal identity and the sense of being the doer/manager are essentially one and the same illusion; a mere thought-projection. I am not saying free will exists or that it doesn't, but the belief that we are limited to our sense of doership is fundamentally false. Both the sense of being a somebody and the sense of being the doer will naturally resolve in self-rested Awareness. 'Resolve in Awareness' simply means that both senses will be seen to be essentially non-existent in their own right; we will see how each is merely a sense derived from our belief/thought-process, and that that thought-process itself is nothing but relaxed perceptions appearing and disappearing within awareness.
So in actuality, there isn't even something real that needs to resolve; it's pure fiction in the first place, and that is simply how we see it from awareness/being. What we might call the resolve of something is simply the clear seeing of its already pure and free nature. Nothing has to be done about anything. Seeing that effort does not exist, is freedom. All that arises is immediately free and resolved as being pure awareness already!

Awareness Has no Obstacles

There has never been any obstacle to free awareness/You. The personal identity as well as the sense of doership are no obstacles either, because both can be known to arise in free awareness right now... And all that we can be aware of, including our thoughts about obstacles or our sensations of being or doing, cannot be an obstacle for awareness, for what is aware of them? And is that awareness which is aware of them in any way imprisoned by these mere ideas/sensations that appear within awareness's perception? Relax and find out for yourself. You see... Awareness is forever beyond anything that ever appears within its perception. It's always present as the incorruptible knower of all perceptions; and perceptions themselves are nothing other than awareness. So there is only the One Knower, expressing itself within its own Unity as seeming forms. All is free from the get-go! No need to alter, change, suppress or get rid of anything that you feel you are, nor is there any need to acquire, achieve, create or hope for anything, in order to realize that you are That One. Simply know all these sensations of 'being someone, doing something, going somewhere' to arise as already free ideas within changeless, motionless awareness!

Let Life Happen Effortlessly by Itself

We can just relax. Really, friends, all we could ever really do is relax! We were just taught, raised and self-educated to believe ourselves to be the doer, manager and arranger of everything.
While I am certainly not suggesting that responsibility is a myth to be disposed of, or that we should never 'do' anything, we can just relax while life, including doing and taking responsibility, happens. Simply know, throughout all sensations of doership or non-doership, the free, open, spacious nature of all that arises to be effortlessly present and supreme. Even when we are performing a certain task with our minds and bodies, that is all perfectly happening as one with life itself, without effort. It is only our thoughts about that which we are doing that may tell us effort exists, or that tell us that 'we' are actually 'doing' it. But these thoughts too arise and dissolve effortlessly, as sky-like appearances within sky-like awareness. Allow all 'doing' to happen by itself, even while actively engaged and focused. While in a state of doing actively, simply notice how that state of mind which is focused on performing or creating something is, too, an effortless happening within awareness. It all just happens by itself, without intervention or obstruction! Even seeming obstruction appears, evolves and disappears effortlessly, with no meaning or implication attached to it whatsoever!

The Effortless Result

Repeatedly, throughout your day, know your sense of self, with all its thoughts and emotions as they arise continuously, to arise effortlessly without your help and attachment. Again and again, take a moment to let life happen all by itself. Simply become aware of whatever situation is ongoing in your perception at that particular time, and know this situation to happen without any doing or effort in it. It just happens: freely, spontaneously, timelessly, without meaning or further implication. Life takes care of itself just fine; it doesn't need your belief in the sense of you being a doer. It has taken care of itself perfectly fine for timeless eternity and will continue to do so for timeless eternity.
You can just relax with, and as, whatever unfolds all by itself. Don't feel responsible for everything that happens by reflecting everything back upon a fictitious sense of 'me', because that is all just a guilt-play of thoughts and sensations of 'me.' See that even this sense of being someone who is responsible for whatever he does/thinks/happens, etc. arises/happens effortlessly as well! Everything, no matter how imprisoned it 'feels', can be acknowledged and recognized to be arising in, and therefore as, freedom itself. There is no rigidity or 'doing' anywhere; there is no solid individual nature to be found in anyone or anything. All is the One Life taking care of itself. From knowingly being beyond, yet intrinsically one as, whatever effortlessly appears as Life itself, arises an unconditional Freedom, Wisdom and Compassion to go with the flow, and it provides you with the best tools to be the best doer you can be! Sounds like a paradox, right? Well, the more you become consciously rested in your relaxed essence, the more you will encounter seemingly insurmountable paradoxes when explained in words, that you will know to be of one harmonious wisdom, beyond explanation. Realize you are not the doer; let life happen by itself while not abandoning any doing as an act or responsibility, and your actions will free themselves and become direct expressions of your realized freedom. All is simply happening. See for yourself that no doing is ever 'done' as such. Knowing this, let it happen again and again, until all moments are naturally seen to be effortless appearances.

10 What is 'Ego' Really? - Free From the Interpreter

What is Ego?

"What is Ego?" is often one of the key questions around which spiritual paths and practices revolve. In my opinion, there have been many misleading teachings about the ego.
Though most of them are well-intended, they create unnecessary hardship for those who are listening with diligence and dedication to what such a teaching has to say. Even many of the masters who could be considered truly free in their recognition of awareness have expressed the ego in terms that make the listener miss something obvious. By doing so, they (unknowingly?) deny the practitioner a much more open, loving and free atmosphere to start 'practicing' with. Ego is often described as some sub-entity within yourself that is the root of all misery. While I do not disagree with the fact that what they refer to with 'ego' is the source of all misery, I must say that I very much disagree with the part that claims the ego to be an entity, or even a sub-entity, within oneself. To me the ego is no such thing. In fact, the ego, as a thing, substance or entity, simply does not exist. For what would it imply if the ego were an entity, substance, thing? It would imply several things:

1) That it is always there as something solid.
2) That it is there even when not active and when consciousness is not aware of it.
3) That it truly exists as something that has characteristics and qualities of its own.
4) That it exists as having an independent nature or substance.

Allow what follows next to take you on a contemplative discovery, to see directly for yourself right here and now, through and with clarity: "What is ego truly?" And be free from ill-conceived notions.

Ego is simply a Position - The Position of Interpreting

Ego, in my experience, is not an actual thing or entity that we need to expel or conquer, but more like a position we can take on (and leave again) to misperceive things from. Imagine endless, pure space. Within that pure space, there are many places one can choose to look from. From each angle things look different.
Once we take on such a position by believing in an idea, thought, feeling or emotion, etc., we experience life from the position of being an interpreter of Life. We literally interpret (explain, translate, describe, label, define, separate, distinguish) whatever we perceive, pretty much every moment of our lives. Every moment, which is by nature naturally free and open, is then perceived through a particular reality-tunnel. This is misperception, and the root of all suffering. However, nowhere in this process is there an actual separate individual entity or power at work. It's simply a matter of either assuming a position to view and judge from, or not doing that. It is like we separate ourselves, as a sense of self, from the unity of life, and turn that sense of self into a layer to go on top of life itself, as a constant observer and judge of what is perceived: the interpreter. Just take a look at your own life as I use some examples to induce direct realization of what I am talking about... Let's take an example most of us will know, in which we are more conscious of interpreting and describing what we perceive: holidays! Whenever we go on a holiday, or to any area we have never been before (especially when the area is considered to be awesome or beautiful), we describe everything we see: "Wow, look at these mountains, they are huge!" and "Ohh, how beautiful that waterfall, look at that!" What happens in these instances is that life as we perceive it is not left as it is; we do not take life as it is, but we interpret the perceptions that we have by translating them, by describing them, by using ideas to define the perception. We don't understand the mountain as the natural perception that it is; we see only our interpretation of it. Could it be that this is what we do all the time? Could it be that living life through the interpreter is all that we have learned to do?
In a more challenging, let's say emotional, situation, we do the exact same thing: whenever we feel sad, for example, we will not be at oneness with that experience, we will not be the perception, we will not relax, be innocent and without judgment; we will instead induce a sense of subtle separation between the emotion and the judge of that emotion. As we take on that role of being the interpreter that is looking at our life from a distance, or from a certain point of view, there is suffering. If you take a closer look at your life, you can feel that there has always been this separation between the observer and the objects of life that are observed. Even as we speak right now, you can be aware of the fact that you are aware of the situation as if there is a separation between you as the witness, and life as the phenomenal witnessed. It's like there are always two things that make up your experience:

1) You (the observer)
2) Life (the observed)

It is my experience that the seeming separation we feel between these two can dissolve immediately, at least for a moment at first, into the object-less sense of unified Beingness. Here is how: Be innocent, be with the situation as it is, drop the position of the interpreter. It is merely a role you can take on. Therefore, you can decide not to slip into that jacket of interpretation just as easily. Simply stop objectifying your experience by describing what you perceive. Don't explain anything for about 5 seconds, starting right now!!! Drop ALL explanations for just 5 seconds and you will see what freedom truly is! It is not attained through defining and objectifying experiences, for that only perpetuates the illusory sense of separation. Is freedom then attained through applying your best knowledge to every perception? No. Enlightenment means to be completely foolish, at one with the moment, without choice or reason.
Just to be without opinion as life arises spontaneously. Then, if you develop a taste of this free-beingness, you can start to see how, even when opinions and judgments do arise, the freedom is still equally present, and pervades the game of interpretation. A newly born or even a very young child is not yet a judge of his own experience. He does not yet take on the position of interpreting what he sees. He is foolish, we might say. "They still have so much to learn!" - we think. Ha! If we only knew... Babies and very young children simply are whatever they experience. They have no separation, no intervention, no interpreter, no timeframe, no interception between Life and themselves as the observer. They simply are, immediately and spontaneously free, as Life takes shape in that very moment. They don't position themselves anywhere outside of the experience. They don't try to maintain a witnessing perception either. However they might feel, that is what is the case, without further thinking about it. There is no analyzing, no judgment whether or not it is right or wrong to feel how they feel; it is simply so, without knowledge. They have not yet eaten the apple of the tree of knowledge. Be innocent like children and fools. Innocence simply means that you have no demands; you have no opinion about however your present moment comes to light; you have no opinions about what you should be or how you should appear to be or how life should treat you or how spiritual awakening is achieved, or... No demands and opinions at all! Pure, self-content awareness with whatever appears as 'your' experience. Be more like a baby: then there is no insecurity about life, simply courageous foolishness to be living as one with life however it takes shape.
You are foolish, unknowing, without clinging to your knowledge to describe or categorize and objectify (separate) your experience/perceptions. Unity is already the case! It is only through the eyes of interpreting that you experience duality/separation, because the very act of thinking about something, the very act of judging something and seeking for something better, or more enlightened, is in itself creating and perpetuating the belief in the sense of separation! There is only one expression and it is Now. Be that expression with all-inclusive allowance, without any intervention of time/thought/explanation/judgment. Witness the interpreter as it interprets your life and be one with that too. Embrace it, collapse with it, stop dividing it by thinking in terms of time and space. Through being your experience, there is the immediate resolve of all experience into knowledge-less, yet clear and wise innocence. Drop this pretending of having to figure yourself out, and even the trying to practice awareness. Forget about all meditation techniques and simply drop the interpreter and your insecurity, through which you believe you have to analyze whatever you perceive in order to judge whether things are right or wrong. Be a complete fool and you will drop the interpreter naturally! This is foolish-wisdom and it is a very direct way of coming to a complete taste of life as it is. Suffering is the belief in the intervention of knowledge in any given moment. Liberation is to be spontaneously free as you are, at one with life, without any discrimination. Clear seeing will still discriminate in wise and accurate ways, in order to be of benefit, but in order to get a taste for simple aware-being, simply drop all trying to discriminate and arrange. Ego, in my seeing, is a myth. It simply does not exist.
It is simply a position we seem to take on, but the position itself is nothing but open freedom either. Even when you are interpreting and explaining your reality through the intervention of knowledge, that too happens as a completely hollow, empty, open expression of nothing but pure Beingness/Awareness. Be simple, foolish, pure and innocent as you are, without any demands and opinions about your present perception/experience.

Conclusion

Ego is described by many to be something solid, something that actually exists in and of itself as an independent entity having a will of its own. The connotation that comes with such an explanation of the term Ego can be highly misleading and make someone believe that he has this solid 'thing' inside himself that needs to be banished or conquered in some way, and that true freedom depends on this act of exorcising the ego. The general belief is that as long as we have an ego, we are not free, and that it is only once the ego is completely destroyed that one achieves liberation, like some sort of attainment; a result of your efforts as an ego-exorcist. But the biggest joke in spiritual history is also the biggest and most potent secret in spiritual history: by trying to achieve spiritual enlightenment and by trying to analyze and resolve the ego, we continue to wear the jacket of interpretation; we continue to believe in experiences from a certain perspective, rather than letting life be the expression that it is. By believing in the existence of an ego, we - as the believer of this idea - take on the role of ego. How can this ever lead to the complete resolution of moment-by-moment experience into Oneness/Beingness/Awareness? The very act of believing in the concepts you think about is a perpetuation of interpreting your reality, instead of allowing it to be the experience that it is. Ego exists only in your thinking about an ego.
Outside of the projections of our thoughts, there is nothing but pure perfection, naturally, moment-by-moment, effortlessly. If you try to analyze, explain and understand your experience, then there is 'you' intervening with Life, and thus creating an illusory sense of separation between awareness and its apparent contents. Be the world, drop the search, drop all knowledge and don't explain your present moment right now... What is left? Cognizance... natural alert presence... Enlightenment is nothing we can ever understand or think our way into, so drop the idea that you need to figure out anything and be simple, as you are, without further seeking. Relax the tendency to understand, and naturally recognize what is seeing... already... always... Light shines through my window on my typing hands right now, without any effort and demands at all; it is utterly simple, yet free and beautiful as it is, without me having to explain that it is beautiful. It already knows it is utterly beautiful without any intervention or need for concepts. It does not need anything in order to shine. It shines already. You shine already too! By the very fact that you exist, you are perfect freedom. This is the only condition of the universe. Drop being the interpreter of your experience, be without demands, at one with the experience as it is and feels, without explaining any feelings, thoughts and situations... This will give you a taste of natural simplicity, which will then be discovered to still be present when all the conditioning comes back in. Because that which sees the innocence also sees the disturbance, as a natural expression and perception within that same seeing. From befriending this simple, demand-less being, foolish-wisdom arises spontaneously.
There are no limits now to how you may respond, but you will find that as you respond to situations from foolish-wisdom, from being without believing in the interpretation that might appear as your mind, you always have the best answers available, without fear or stress ruling your experience. Even when fear does get its grip on you and your body is trembling with insecurity... even there and then, this is allowed to unfold itself in the all-inclusive space of your peacefully present seeing; which equals Love and Compassion. Simple Being. It is the direct doorway to freedom and it is your choice to make right now. Once the choice is made, forget about having to choose as well; just be what you experience, without demanding enlightenment or freedom, and see what IS, all by itself.

11 Stop Dividing and Be Free Now

Truth is right here already... Simply stop dividing your experience for a moment, whatever the situation and your descriptions about the situation may be... What remains? What's right there in your face when you don't divide any perception at all? That which is here when I do not make any distinction and do not create any separation between anything, itself has no name. We could call it Pure Presence, or Free Awareness, or even Truth. Since it is undistorted seeing/being, and as they say: Truth sets you free. Befriend yourself with truth, by recognizing natural awareness in any given moment. To befriend yourself with that non-dividing Truth more and more, and to trust in that alone, is a definitive way to real freedom and a one-way ticket to establishing wisdom. Do you want to know truth and real freedom directly? (And you will want to know it directly, in your own experience, for that is the only way it can have its liberating effect.) You can know Truth and Freedom right now.
Not in 20 lives from now, not in 20 years from now, not in 20 minutes from now, not in 20 seconds from now, but right this very moment: be still, motionless, make no mental division about anything that you perceive or experience, just BE undistracted seeing... at one with the moment however it may come to you. Right now, and in this very moment, you will know in your direct experience what Truth is. Not as a label, concept or idea, but as that essence in which all ideas come and go. It's as simple as that. Stop dividing. What does it mean: stop dividing? Then what does it mean to be dividing? Every description and every analysis you may have about yourself or the world in any given moment is an act of dividing; an act of making a distinction between some aspects of this world and the rest of it. Whenever you define something or describe something that you experience, you are creating a distinction in how you value one aspect over the other, when the descriptions are believed in. Descriptions alone are perfect and spontaneous occurrences within awareness. They do not block or obstruct any well-being or clarity, but when the ideas that these descriptions project are believed in, effectively a tunnel-reality is created through which life is experienced in a fragmented way. Not only does our constant analysis divide the different forms of our perception, it also separates us from the moment, from Life, because we position ourselves as the analyzer, the interpreter, of Life. We are then believing ourselves to be an observer of something that can be observed. You are there as the analyzer of your experience. It is this perpetuation of separating yourself out from your experiences, and dividing whatever it is you experience and perceive, that is making you suffer and miss the magnificence of the totality of Life in its indivisibility.
To get a taste of this natural free awareness, simply stop trying to understand, figure out, compare, think about, analyze, describe, etc. A natural presence is then automatically recognized. Subtly at first, but more apparently so with each recognition and acknowledgement of its presence. There is no real distinction between the observer and the observed. Just like both the glass of the mirror and the image that can be seen in the glass are entirely one, so too is awareness completely one with all the forms that it is dreaming into existence within itself. Whatever is observed is the observer. Just be as you are, without making any division, for just a few seconds... Dare to flow into unity with life, dare to lose yourself as being the interpreter, have the courage, if even for a moment, to just drop the act of analyzing and trying to understand. In that moment of total non-dividing presence, you know Truth as Living itself. So just stop seeing in pluses and minuses, in good and bad, spiritual and non-spiritual, pure and impure. Forget about all of that. Forget about all that I said in this article as well, for all that you know in your mind is irrelevant... Instead, return (regularly and with commitment) to taking that taste of freedom. Just that spark-like moment of recognition, in which you consciously stop all dividing instantly, right there and then. By doing so, both immediately and gradually, truth will reveal itself to you as being the all-inclusive presence that you are already. After some getting used to acknowledging presence like this, you will start to know that Truth is openly present and available for being noticed at all times. After all, it will be the direct experience in that moment of non-dual being that will confirm to you the forever present nature of Truth. It is as simple as that.
Stop making any division whatsoever, and there it is, staring you undeniably in the face: TRUTH! It is all that is. Now Be...

12 Awareness Cannot be Defined

Awareness cannot be pointed at

We cannot define or grasp awareness by saying: "Ah! So there it is, that is awareness!" Everything that we can point at, define, grasp, is yet another concept appearing within awareness, known by awareness, existing as awareness. Often when we hear the masters speak about emptiness, stillness, awareness, pure space, openness, etc., our minds can create a copy of it, a concept. Our thoughts will probably say at some point: "Ah yes, emptiness, pure awareness, stillness, it is that experience right here!" But this experience that our thoughts refer to is never the real thing, for what do we point at? It is some thing, right? An experience, a perception, a phenomenon, etc. All of these belong to the display of forms/things/phenomena. Which are nothing other than empty, open perception itself. At best, the experience which our minds point at or refer to is a 'non-conceptual state of mind', which in itself is still a sensation, an experience, a state of mind, and thus, even this stillness of mind as an experience is not what awareness is limited by. All that was not here already before cannot be it now. All that comes and goes or is newly created is just more perfect fiction arising in that awareness which has already forever been the case. So ask yourself: "What am I missing? What am I overlooking that is already here and has always been here in every single one of my experiences?" So whenever our thoughts think they understand what is said by those who speak from and towards truth directly, we can notice how our mind tends to appoint that which we search for (emptiness, awareness, freedom) as being some thing somewhere that we can attain or get to.
We turn it into an idea in our minds, or we identify it with an experience of peace and stillness. When we notice that we are doing that, immediately know and see that whatever our thoughts are pointing at cannot be it, for who is seeing the process of appointing, and the sensation or phenomenon that is being pointed at?... Surely there is an effortless awareness over all of this, which is already self-maintaining? This Awareness is itself uncatchable and never definable. Forever beyond phenomena, yet present in and as every single experience within perception. For each time we think we have found IT, the true IT is right there as the witness of even the process of appointing/identifying IT. So this process of appointing awareness, or saying: "Ah yes, that is awareness!", too belongs to the effortlessly arising display of dynamic appearances appearing and disappearing within already perfectly present awareness. There is nothing that can change awareness being untouchably here as the knower of all that is. There is nothing you can do to be more awareness and nothing you can screw up to be less awareness. There can only be a 'relaxing into', or an abiding in knowing/recognizing this awareness to already be the case. All happens in and as That. There is no searching that will lead you anywhere. We do not have to sort out Awareness from the world. We do not need to find our true selves by sorting out all forms of this world either. We can simply acknowledge that all is a form of awareness and rest right there in that subtle recognition of what is already seeing.

How to Know Awareness

So the only true way to know awareness, to know the unknowable, is by simply being as you are and seeing that all phenomena appear within this knower that you already are.
Just see that whatever the thought, belief, concept, sensation or experience is, whether mundane and materialistic, or 'special' and spiritual, both are equally appearing and disappearing within THAT which was already there to know both experiences and the change from one to the other. All that we can identify and appoint belongs to the changeable, passing display of perceptions, which too are nothing other than substance-less awareness. Like dreams at night are nothing more than Mind, no matter what we encounter. To try and grab any one of these forms is like trying to capture space or draw a painting in the sky. That which we are looking for is that which can decide to appoint or not to appoint, and which knows the choice that is made without being affected by any of this. Everything, and I repeat, Everything that you will ever experience will not be IT. Not ever! To accept this now may save you a great deal of struggle and seeking. For if you acknowledge deeply, right now, that no experience will ever be more IT than this experience right now, the seeking drops away, and the ease and clarity will shine through naturally, without doing anything to achieve it. You simply cannot experience awareness. It's already your experience!! Everything that is happening right Now is an appearance of awareness, happening within awareness. Your current experience is the best experience you could have for you to simply relax as that which is looking. Simply rest as you are, peacefully seeing and being, without any divisions needing to be made. Really contemplate this thought: "Nothing you will ever experience or know in your life is that which you are looking for." Instead, That which is looking for something is what you are looking for. So relax into the seeing that is already uninterruptedly happening, whether we are recognizing it or not.
Acknowledge the ever-present nature of this seeing, to be beyond even recognition and non-recognition. It's simply always there as that which sees both the recognition of itself, as well as the ignorance of itself. We are looking for the eyes with which we are looking; we are looking for the mind with which we create and see our dreams. We are looking for the awareness that's perfectly present right now as the knower and openness in which even our most primary sense of 'being someone' occurs as a sensation/experience/phenomenon. All that we may ever experience will come and go. There is nothing outside your current state of being to be found. There is nothing you will ever find 'out there'. You will never be liberated by some experience or phenomenological happening that you should look for or work towards. That's like trying to wake up or realize you are dreaming by digging yourself deeper into your dream world and identifying more with one dream form than with another. It just does not work. Instead, ask yourself this: "How could that perfect awareness for which I am searching and struggling already be the case? How could that fit in with my current perception, regardless of what I think and feel now? How could my current experience show me that Awareness is already here? Where is it? Who is it? Am I it? Who am I? Who is the knower of everything I think, believe and perceive? What is right here as 'me' or 'my presence' that I can never escape, add onto or subtract from, no matter what I do or try, or what happens? What is this presence in which all comes and goes, and can I pay more attention to that presence during my everyday activities?"

13 Don't Try To Become What Already Is

Most of us are taught that in order to be Self-Realized, we need to be this and that, such and so.
For example, we believe we have to concentrate, do asanas, use techniques, change our thoughts, silence our mind, or be an entirely different person altogether. We may even believe that we need to rearrange our entire bodily structure and prepare for everything. We might believe we have to meditate or awaken the kundalini energy. Many of us are even taught to believe that we will need many lifetimes to know who we are. We may believe many things... None of these are direct and none of these are necessities. I promise you. You see, that which we are trying to become already is! Everything we do, including all changes and preparations we put our bodies or thought-patterns through, all of that has to happen within something... And we know all of these changes, do we not? By what basic ground of cognizance are we able to know all the changes that appear in our life, as our life? You know yourself before you started meditating and you know yourself after you have meditated. You know yourself during the meditation as well. My question is: what is it that knows all states of mind prior to, during and after any sort of action has been undertaken? Everything we try and do in order to hopefully come closer to THAT is already happening in THAT! Awareness is the space in which your entire life-experience unfolds. You are the crystal clear knower of all experiences. Even when you think you are not, that thought and clouded feeling is a sensation known by YOU, or THAT, or simply: Awareness! You are that which knows both ignorance and enlightenment too! So I ask you to take a look at your life right now and see what traditional or non-traditional ideas you might have picked up along the way and started to hang onto; maybe even built your life around. What do you really believe in right now? What do you think you need in order to reach THAT?
Find out, and imagine yourself after you have gone through that process of doing whatever you believe is needed. What would be different? How would you feel? Then the million-dollar question: what really changed that doesn't have any form or definition? Sure, your experience may have changed, you may feel better, you may have a more flexible body, you may have a quieter mind... but what will be right there knowing the new you? Is that not the very same formless presence that knows you are reading this right now? What is right here as the spacious awareness in which your life has changed? Can you relax the mind and acknowledge this presence? Surely that which knows the change must itself be changeless, for how else could it track all changes if it were that which changed? Only something that does not move can detect movement. Similarly, only something changeless can detect change. So you see, you don't need anything in order to reach THAT. You don't need to be a better person or have altered life-experiences and sensations in order to reach THAT. Simply recognize that subtle awareness which is always there... Don't try to be super-aware; simply recognize that which already knows you are here right now. There is not a single moment in your life that is not known automatically, by something indestructible, changeless. Recognize that something is always aware; even when you are confused about a particular point of view or idea, and believe you are unaware... something knows that too. So instead of trying to become this super-aware body/mind complex, simply relax and recognize that which is already changelessly present as the knowing force of all experiences. Whenever you catch yourself punishing yourself for not being good, spiritual, meditative or even aware enough, simply recognize that subtle, underlying presence which in that very moment is already aware of every thought, feeling and state of mind.
14 Free From Needing Sensation

Observe yourself this very moment... Is there any desire, however subtle, towards a new sensation? A good feeling? Why are you reading this article? Are you anticipating what I am going to say next? Are you hoping for moments of clarity, of peace, good feelings? Do you want to know the amazing nature of your being? Do you want to know freedom and limitless love? Do you want to be able to be present moment to moment and have great compassion and understanding for everything? Do you want to be relieved of your unhappiness and be completely happy? Then there is only one thing that you 'need': stop wanting and wishing for that good feeling that you believe to be somewhere around the next corner. You can 'do' this right now as you are reading this, or rather: you can let go of that subtle doing, that subtle searching, Right Now... Just relax and notice awareness... Do you see? There is a peaceful seeing that is there without your effort. It is not depending on the 'you' you think you are, it is not depending in any way on the doer or what is done; it is just naturally there as the soul of all that is. You can never not be seeing. If you only relax your need for being satisfied through sensations and good feelings, and notice that you are naturally aware, you have discovered what you are at the deepest level of life itself. You are that formless, indestructible seeing, that perfect awareness, that is here whether you are aware of it or not, as the background of all of life's happenings. It is the changeless essence of all changing experiences and it does not need any sort of feeling or sensation in order to be. It's free from any need. You have just had a taste of your true nature. It's as simple as that:

1) Notice that you are subtly urging towards relief, and that you are hoping to find that relief in a sensation, an experience.
2) Then, seeing this, let it be, leave it alone however it wants, just be there.
3) Then just notice that you are aware without any effort, and that even as thoughts arise, you are still effortless awareness.

You cannot change the fact that there is awareness. So just rest as you are, free from believing in the subconscious need for relief through sensation, and realize that the only true relief you will ever find, accessible at all times, is your natural presence... So instead of chasing mindlessly after people, places and things under the belief that the sensations that come from associating with them will relieve that discontent that drives your constant wheel of desires, simply notice the background of that desire, in which the desire is even known to exist: Awareness. Just be. Stop chasing after relief and notice that you are relief. Then stay with this peaceful seeing, or repeat the letting go/be and noticing awareness part, so that you get used to actually knowing, deeply and totally, that awareness/relief is always already present. Just commit to seeing the peace that is already present and repeat that recognition until it is second nature to you to know, under any circumstance, the indestructibility of life itself. Your first nature is Awareness - Being - which you are already. Now make the simple act of acknowledging your first nature your second nature. Relax into this recognition of awareness more and more, throughout all daily appearances.

15 Making "The Choice"

We have a choice in every single moment. Again and again. It's a very important choice, as it determines the outcome of our well-being and eventually that of the world. It may even determine the downfall or survival of our race as a human species. It's that important. You are that important!
Free From The Spell of Living in Assumption

In every single moment, you can choose to either believe in the perceptions you are having at that very moment, or to be free as awareness itself. Our day-to-day life is basically nothing more than a composition of perceptions chained together. We tend to believe the perceptions. What does it mean to believe in a perception? It means that you are unconsciously assuming that the world you are perceiving is real. If you believe in the perceptions you are having, it means you are assuming that whatever is perceived is really there as something in and of itself; as having an independent nature; a solid existence. While I am not declaring that what we perceive is either real or unreal, it is important to realize that all we can really say about what we perceive is that we perceive... We cannot prove that there really is a world out there, that there really are objects that we perceive. Even the calculations that scientists are working on, which supposedly explain our universe, are nothing more than perceptions within awareness. We read them in a science magazine or on the internet, but that's just another creative perception arising in what is always stable, clear and present. It is important to realize this. Again: it is not necessarily beneficial to state that the world we perceive either truly exists or not; all that is beneficial is to realize that whether it does or does not exist as a world out there, it is still perception within awareness. That is all we have truly got here. We simply cannot go any further than that without entering the world of basing our entire life on a big assumption.
When we realize this, we gradually stop believing in having to seek something in that big bold universe that we generally define as "being out there." We stop solidifying and objectifying our experience as being a single entity in a world full of other individual objects and people. We realize more and more that Truth is not found in any one perception/form, for who knows whether or not the perception we are having really is something? We can never know for sure whether it really exists. Just like we can never really know if our dreams at night are really objects or if they are just perceptions within awareness/mind. All we can know is that indeed they are perceptions occurring within awareness. Truth is not found in something out there, or even in any perception we might have. Make no mistake, there is no difference whatsoever between external and internal perceptions. I urge you not to dwell on the possible meaning or implication of any internal perceptions either. What I mean by internal perceptions are, for example, those that are bound to arise when one starts to meditate. Free Awareness is not about rejecting any perceptions, nor is it about accepting any perceptions. But it is also not about making a distinction between external perceptions and internal perceptions. All are just that: perceptions. Whether you have your eyes open and believe you perceive 'external stimuli' or whether you have your eyes closed and experience your thoughts, emotions, or even meditational states and sensations. All is just that same perception of awareness. Just like the movie screen will always be the movie screen and the projection will always be the projection, regardless of what is shown and what part of the world or universe is displayed.
So I find it best to just approach every single moment as being a perception, without making any further differentiation between internal, external, mundane or spiritual, good or bad. For even that very basic differentiation means you are already entering that world of assuming (read: believing) that what you perceive is really something in and of itself that can be, or should be, differentiated. When making a distinction between internal or external, you are already forgetting to realize that all that is truly happening, is that you are perceiving.

The Choice:

Back to making the choice. As I said before in this article, the choice we have in any given moment is to either believe in any or all of the forms we seem to perceive, or to simply relax as the seeing, as the perceiving itself. If we choose to assume that what we perceive is really something that exists also outside of perception/awareness, then we will live our life as we have always done: based on a huge, primordial assumption. I don't know about you, but to me, that was never really fully gratifying. No matter how great or positive or even spiritual the perception, it was always going to end; it is as simple a truth as can be. True happiness or meaning cannot be found by relying on perceptions. If, on the other hand, one chooses at any given moment to understand one's experience as pure perception itself, and therefore chooses to rest as that recognition of life being one, seamless, magnificent perception within awareness, things will become clear in such a natural way that doubt is eradicated and all the fully positive qualities inherent in awareness come about as they express themselves through your experience of life. I know, at least for myself, that this is the only truly gratifying and wholly beneficial way to live.
Please don't be shocked by what may seem like a rather extreme way of expressing awareness in this article, almost like I am giving you an ultimatum; it only serves to give you that extra nudge that you might be needing to wake up. The morning alarm has already been ringing your entire life in the form of every bit of experience you have had, but since you have gotten so used to the ringing, you don't recognize it as a wake-up call anymore. That's why you may need someone 'from the outside' to give you that extra nudge that reminds you to actually wake up, for it's time for all of us, as a human species, to experience the rest of the day and leave the sleepy morning behind. Once you have really looked at yourself and your perceptions in the way I explained above, and maybe realized that indeed you base your entire life on assuming that your perceptions are actually things out there, instead of existing solely as perceptions within awareness (like a dream exists solely as perceptions in your mind, even though when dreaming, it seems like there are real objects that are perceived), then you will see after a while that there is no need for any extremes nor for taking on any standpoints. There is no need to state that the world we perceive is either real or unreal. Truth is not bound by one definition over the other. Truth encompasses all perceptions as equally valid as well as invalid. Truth is not any one way. And this is why we will never be able to comprehend this kind of truth with our intellect alone: because the intellect knows only decisions, definitions, statements, declarations. Truth is much more versatile and comprehensive than our need for a stable truth. Truth is all. This is also why I rarely go into making statements like God is real, or God is unreal, the world is real or the world is unreal, karma exists, karma does not exist. It is all much more simple and complex than that.
You will have to start to wake up to the presence of awareness and experience it for yourself. How? Make the choice not to be governed by your perceptions, but to recognize your entire spectrum of experience, as it may be in any given moment, to be pure perception arising effortlessly within pure seeing. When you recognize life as perception, you naturally recognize awareness as well. So for those of you who believe they are having trouble recognizing pure awareness: simply take your entire set of feelings, thoughts and sensory perceptions, and see it as one single perception. Then maintain that awareness of life as being pure perception gently, peacefully, subtly. Don't go into any story or description about any 'thing' you may be perceiving. Simply allow perception to be recognized as being perception alone. You will discover Peace immediately. When doing so, naturally you are relaxing in the recognition of that pure awareness in and as which the perceptions exist. Make that choice, again and again.

16 What is Love?

What is Love? Love is here. Love is forever. Love is endless, always present. There can never be a lack of love, nor is love ever missing. If we ever feel that that is the case, then it is simply us believing in our stories about life that blind us from Love's obvious presence. But never is it not here. Love is like the background of everything. Like the canvas is to a painting or like the sky is to the clouds. Love is the very basis of everything; without it, we are not; without Love, nothing is. All is that basis; all is that Love. It is the nature of everything and it cannot be surpassed. Whenever you stop thinking for a moment or stop believing in your ideas, you will notice that there is a present background for all these thoughts and emotions to appear in. This background will be seen to be present right now.
Much like a space that is there in which thoughts either arise, or stay absent from. Once this space is recognized directly, this background from which nothing ever leaves, we may also realize that this space is the very substratum of everything that is known within it. This space is most easily noticed when there is no clinging to the passing thoughts. Like the blue open sky is more noticeable when there are no clouds, or when we at least have the courage to not be distracted by the clouds and take a moment to realize and recognize the sky in which they exist. After having gained trust and experiential confirmation in this recognition of spacious Love, it will become apparent to us that this Love is also here whenever we are thinking and clinging to thoughts and emotions. All arises, endures and dissolves spontaneously within Love, and these appearances are free in nature from the very moment they arise.

The act of Love and Loving yourself

Whenever you feel down, then that is how you feel. Whenever you feel great, then that is how you feel... The very act of acknowledging that you feel a certain way and allowing that feeling to be, without trying to come up with, or derive from it, any form of meaning, direction or implication, is Love loving itself unconditionally. To believe that feelings and thoughts need to be reacted upon or acted out in order to resolve, is the great misconception. All is already resolved since nothing ever isn't. Love is in, through, with and as all that arises. There is no such thing as distinction or value in the eyes of Love. Love sees only itself in and as everything. Love is completely drunk with its own nectar and expresses itself in seemingly foolish, yet amazingly effective ways. Love is Foolish because there is no filtering, no intervention and there is no interception in any given moment.
There simply isn't anyone to review the expression before it is allowed to be expressed. When resting as the background of all phenomena, we rest as Love and see through its limitless eyes. From this restful seeing there is no one to intercept, intervene, think about, judge, categorize, analyze or solidify any appearance whatsoever. There is only Love expressing itself directly, spontaneously, without limit, hesitation or control. So how can Love not seem foolish at times when seen from the eyes of an interpreter?

The Commitment to Love

Instead of reacting upon our thoughts and feelings, we can love them as they are. We can learn to be patient with ourselves and, through that patience, realize in our direct experience the background of all that we consider our talents, flaws or human expressions. By committing to something beyond the scope of our transient movements of energy, even if we are not yet sure as to what we are committing ourselves to exactly, we will be much more patient and stable whenever these movements of energy occur. We will have a gaze that's set on infinity, and nothing that passes our gaze will distract us, for we are looking directly into what lies beyond. Like the constant movement of a windscreen wiper cannot distract our focus from the road. Through this patience, dedication and ability to encompass and allow all that can arise to be as it will, we will see the background of all movement. It is then and there that we will know what Love is, directly. Not conceptually, not intellectually, but decisively, as being what we are as the all-inclusive basis in which and as which everything exists.

Love Forgetting Itself and then Remembering Again

We tend to forget and overlook the simplicity of what's already here. That's because we analyze and give value to things.
By giving value to something, you make a distinction between that one thing and everything else. Love is total, distinction-less equanimity with all things of all sorts. When seen from Love, the World is new and fresh in every moment. Love is not defined by - and therefore not confined to - feelings of love alone. Love is there as the Self-Aware space in which these feelings of love are experienced. Likewise, Love also includes hatred as being a perfect and flawless perception within itself. It sees everything as itself; hence it's a peaceful equanimity that's in perfect balance with itself. Nothing is ever denied existence; all is allowed exactly as it comes. Love is completely in Love with what we call war, hate, and all such things. To see everything as being an expression of that one Love, is to remember Love through its naturally inherent Self Awareness. To Relax means to stop all seeking and all defining (which equals dividing). So do just that: Relax. Again and again, throughout all your personal challenges, relax your ideas and stop looking for a strategy or a one-way ticket out of the situation you're in, and be that gracious compassion for whatever you may experience. Be all-inclusive; allow everything to come and go as it pleases; release yourself from all controlling tendencies and avoiding attitudes. Face your thoughts and emotions and have that stable, compassionate gaze which allows all to arise and dissolve and is never distracted by any specific movement. Remember this love to be the substratum, the true nature, of all appearances equally, again and again, whenever you remember to do so, and more and more Love will reveal itself to be the basis of all that exists. To the point where recognizing love, even in the face of confusion, is inescapable. Love is simply here to stay and nothing ever escapes its scope.
Loving yourself means to allow the full spectrum of your experiences to be experienced without any fleeing or manipulation. Love is Free, Love is Total, Love is Now. Already Perfect. Love is all you are. Just Relax... Love, Bentinho.

Support us: This teaching and community-based movement is solely dependent on donations provided by the community. If you feel moved to contribute after reading this book, we would gratefully accept your donation, for it does make a difference: Make a donation here
Guys, please help!! Part of my course work for university: I have a three-floor admin building which houses 300 staff, with the possibility of another 90 over the next three years. The top floor houses the admin team; the remaining two floors house the customer service team. At present the LAN has three network segments connecting to a single router in the building, with either a 10Mbps or 100Mbps switch on each floor. Cabling is copper throughout, and the wiring plan has evolved in an ad hoc way over a number of years. There appears to be no redundancy within the wiring. The servers are within the same network segment as some of the staff computers. I need to design a LAN upgrade that will be able to support the above, but am finding this a little difficult. Any help would be greatly appreciated. Thanks, Scott

Hi Scott, Have a look at the Solution Reference Network Design Guides, namely LAN Baseline Architecture Overview--Branch Office Network and LAN Baseline Architecture Branch Office Network Reference Design Guide. They will give you best practice recommendations for LAN designs, which looks like what you need. Hope this helps! Please use the rating system. Regards, Martin P.S.: Further guides can be found at

As posted already, the SRND is a great resource, but may not be practical for all applications. You have some unknown factors that play into your issue. Do you have any budget to work with? Are you supposed to use what you have, or can you add to it or replace it? These are important things to know moving forward. What do you have in place today? Switch type, router type, category of cable, any fiber at all, need for redundancy?

Hi, The aim is to produce a cost-effective solution. I am not required to give specific costings.
My upgrade solution should be sufficient for an increase of 150% in traffic, should optimise performance and use of resources, and any additions to the network must be justified. The document does not state any equipment being used at the moment. To let you understand, this is one of the main headquarters, the other being Birmingham. The analysis and findings from the current network are below:

LANs
Glasgow's LAN has three network segments connecting to a single router in the larger building, with either a 10Mbps or 100Mbps switch on each floor. Cabling is copper throughout, and the wiring plan has evolved in an ad hoc way over a number of years. There appears to be no redundancy within the wiring. There is currently no network infrastructure in the warehouse building. The Glasgow servers are within the same network segment as some of the staff computers. Birmingham's LAN has a single network segment. LAN performance within the main sites is unsatisfactory due to excessive traffic within many parts of the network, and reliability is poor.

Glasgow LAN requirements
Glasgow (Main Site)
• At present 300 staff (50 Warehouse, 150 Customer services, 100 staff finance/admin)
• Site is split across two buildings (Warehouse & 3-floor Admin building)
• Finance/Admin staff located on the top floor of the admin building
• Finance/Admin staff authorised to access payroll & personal data
• Customer service staff located on the two remaining floors of the admin building
• Networked PCs for all staff
• Warehouse staff to share desktop PCs
• Copy of full database stored at this location
• Web server cluster located on site
• Internet access
• Warehouse staff not permitted Internet access
• Use public class C network 195.168.0.0 for all internal addressing
• To minimise wasted addressing space, VLSM to be used
• Expect 150% growth of current IP requirements
• Expect 30% growth of workforce

Does this help any?

Quick and dirty recommendation would be to replace all IDF switches with 4500 series switches.
All are capable of doing layer 3. This will be your access layer, all feeding back to a data center (MDF) location, to another 4500 used as your distribution layer, which is then connected to your router, your core layer. In the distribution layer you will connect servers in the DC and dual-feed all the IDFs to either two 4500 distribution switches or dual supervisor engines in the one chassis. For all servers you will dual-home them into 2 different chassis or cards. Replace copper uplinks with dual fiber runs. You could get away with running 3750G stacks in the IDFs. This allows for single control of the entire switch stack rather than separate devices, which will save some of the cost of going for the big guy of the 4500s. You can build VLANs to allow access to only specific networks and restrict Internet access to only those allowed to use it. This will give you high bandwidth and high availability in your design. This is very high level but should help point you in the right direction.
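The VLSM requirement in the brief can be made concrete with a few lines of Python. This is only an illustrative sketch: the host counts come from the requirements list above, except the 25 shared warehouse desktops, which is an assumed figure (the post says warehouse staff share PCs but gives no count).

```python
import math
import ipaddress

def smallest_prefix(hosts: int) -> int:
    """Smallest subnet prefix with room for `hosts` usable addresses
    plus the network and broadcast addresses."""
    return 32 - math.ceil(math.log2(hosts + 2))

# Host counts from the requirements (warehouse desktop count is assumed):
teams = [("Customer services", 150), ("Finance/Admin", 100), ("Warehouse", 25)]

for name, hosts in teams:
    prefix = smallest_prefix(hosts)
    usable = ipaddress.ip_network(f"195.168.0.0/{prefix}").num_addresses - 2
    print(f"{name}: /{prefix} ({usable} usable addresses)")
```

Note what the arithmetic reveals: 150 customer-service hosts already need a full /24 (254 usable addresses), so the single class C in the brief leaves no room for the other subnets once the 150% growth requirement is applied — worth flagging in the design justification.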
Regular expressions are hard to write, hard to read, and hard to maintain. Plus, they are often wrong, matching unexpected text and missing valid text. The problem stems from the power and expressiveness of regular expressions. Each metacharacter packs power and nuance, making code impossible to decipher without resorting to mental gymnastics. Most implementations include features that make reading and writing regular expressions easier. Unfortunately, they are hardly ever used. For many programmers, writing regular expressions is a black art. They stick to the features they know and hope for the best. If you adopt the five habits discussed in this article, you will take most of the trial and error out of your regular expression development. This article uses Perl, PHP, and Python in the code examples, but the advice here is applicable to nearly any regex implementation. Most programmers have no problem adding whitespace and indentation to the code surrounding a regular expression. They would be laughed at or yelled at if they didn't (hopefully, yelled at). Nearly everyone knows that code is harder to read, write, and maintain if it is crammed into one line. Why would that be any different with regular expressions? The extended whitespace feature of most regex implementations allows programmers to extend their regular expressions over several lines, with comments at the end of each. Why do so few programmers use this feature? Perl 6 regular expressions, for example, will be in extended whitespace mode by default. Until your language makes extended whitespace the default, turn it on yourself. The only trick to remember with extended whitespace is that the regex engine ignores whitespace. So if you are hoping to match whitespace, you have to say so explicitly, often with \s. 
In Perl, add an x to the end of the regex, so m/foo|bar/ becomes:

m/
  foo
  |
  bar
/x

In PHP, add an x to the end of the regex, so "/foo|bar/" becomes:

"/
  foo
  |
  bar
/x"

In Python, pass the mode modifier, re.VERBOSE, to the compile function:

pattern = r'''
  foo
  |
  bar
'''
regex = re.compile(pattern, re.VERBOSE)

The value of whitespace and comments becomes more important when working with more complex regular expressions. Consider the following regular expression to match a U.S. phone number:

\(?\d{3}\)? ?\d{3}[-.]\d{4}

This regex matches phone numbers like "(314)555-4000". Ask yourself if the regex would match "314-555-4000" or "555-4000". The answer is no in both cases. Writing this pattern on one line conceals both flaws and design decisions. The area code is required and the regex fails to account for a separator between the area code and prefix. Spreading the pattern out over several lines makes the flaws more visible and the necessary modifications easier. In Perl this would look like:

/
  \(?      # optional parentheses
  \d{3}    # area code required
  \)?      # optional parentheses
  [-\s.]?  # separator is either a dash, a space, or a period
  \d{3}    # 3-digit prefix
  [-.]     # another separator
  \d{4}    # 4-digit line number
/x

The rewritten regex now has an optional separator after the area code so that it matches "314-555-4000." The area code is still required. However, a new programmer who wants to make the area code optional can quickly see that it is not optional now, and that a small change will fix that.

There are three levels of testing, each adding a higher level of reliability to your code. First, you need to think hard about what you want to match and whether you can deal with false matches. Second, you need to test the regex on example data. Third, you need to formalize the tests into a test suite. Deciding what to match is a trade-off between making false matches and missing valid matches. If your regex is too strict, it will miss valid matches.
If it is too loose, it will generate false matches. Once the regex is released into live code, you probably will not notice either way. Consider the phone regex example above; it would match the text "800-555-4000 = -5355". False matches are hard to catch, so it's important to plan ahead and test. Sticking with the phone number example, if you are validating a phone number on a web form, you may settle for ten digits in any format. However, if you are trying to extract phone numbers from a large amount of text, you might want to be more exact to avoid an unacceptable number of false matches. When thinking about what you want to match, write down example cases. Then write some code that tests your regular expression against the example cases. Any complicated regular expression is best written in a small test program, as the examples below demonstrate:

In Perl:

#!/usr/bin/perl

my @tests = (
    "314-555-4000",
    "800-555-4400",
    "(314)555-4000",
    "314.555.4000",
    "555-4000",
    "aasdklfjklas",
    "1234-123-12345",
);

foreach my $test (@tests) {
    if ( $test =~ m/
        \(?      # optional parentheses
        \d{3}    # area code required
        \)?      # optional parentheses
        [-\s.]?  # separator is either a dash, a space, or a period
        \d{3}    # 3-digit prefix
        [-\s.]   # another separator
        \d{4}    # 4-digit line number
        /x ) {
        print "Matched on $test\n";
    } else {
        print "Failed match on $test\n";
    }
}

In PHP:

<?php
$tests = array(
    "314-555-4000",
    "800-555-4400",
    "(314)555-4000",
    "314.555.4000",
    "555-4000",
    "aasdklfjklas",
    "1234-123-12345"
);

$regex = "/
    \(?      # optional parentheses
    \d{3}    # area code
    \)?      # optional parentheses
    [-\s.]?  # separator is either a dash, a space, or a period
    \d{3}    # 3-digit prefix
    [-\s.]   # another separator
    \d{4}    # 4-digit line number
    /x";

foreach ($tests as $test) {
    if (preg_match($regex, $test)) {
        echo "Matched on $test<br />";
    } else {
        echo "Failed match on $test<br />";
    }
}
?>

In Python:

import re

tests = ["314-555-4000",
         "800-555-4400",
         "(314)555-4000",
         "314.555.4000",
         "555-4000",
         "aasdklfjklas",
         "1234-123-12345"]

pattern = r'''
    \(?      # optional parentheses
    \d{3}    # area code
    \)?      # optional parentheses
    [-\s.]?  # separator is either a dash, a space, or a period
    \d{3}    # 3-digit prefix
    [-\s.]   # another separator
    \d{4}    # 4-digit line number
    '''

regex = re.compile(pattern, re.VERBOSE)

for test in tests:
    if regex.match(test):
        print "Matched on", test, "\n"
    else:
        print "Failed match on", test, "\n"

Running the test script exposes yet another problem in the phone number regex: it matched "1234-123-12345". Include tests that you expect to fail as well as those you expect to match. Ideally, you would incorporate these tests into the test suite for your entire program. Even if you do not have a test suite already, your regular expression tests are a good foundation for a suite, and now is the perfect opportunity to start on one. Even if now is not the right time (really, it is!), you should make a habit to run your regex tests after every modification. A little extra time here could save you many headaches.

The alternation operator (|) has a low precedence. This means that it often alternates over more than the programmer intended: anchors and neighboring atoms bind to only the first or last alternative rather than to the whole list, so a regex extracting addresses from a mail file can easily match far more than the header lines the programmer had in mind. Grouping the alternation with parentheses confines it to exactly the part of the pattern you mean. Finally, plain positional groups can be made self-documenting: a pattern like "(\\w+)(\\d+)" could be rewritten with named groups — for example (?P<word>\w+)(?P<number>\d+) in Python — so that later code refers to the captures by name instead of by position.
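The alternation-precedence point is easy to demonstrate in Python (the header names here are just illustrative examples, not from any particular mail format):

```python
import re

# ^From|Subject: matches "From" at the start of the string OR
# "Subject:" anywhere -- the ^ anchor binds only to the first branch.
loose = re.compile(r"^From|Subject:")

# Parentheses confine the alternation, so the anchor applies to both.
tight = re.compile(r"^(From|Subject):")

print(bool(loose.search("Re: Subject: lunch")))  # false match mid-string
print(bool(tight.search("Subject: lunch")))      # anchored match
print(bool(tight.search("Re: Subject: lunch")))  # grouping prevents it
```

The grouped form is almost always what was intended; the ungrouped form is the kind of false match that the testing habit above is designed to catch.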
29 June 2012 23:07 [Source: ICIS news]

HOUSTON (ICIS)--Here is Friday's end-of-day summary of the Americas markets:

CRUDE: Aug WTI: $84.96/bbl, up $7.27; Aug Brent: $97.80/bbl, up $6.44
NYMEX WTI crude oil futures rose sharply on Friday as European leaders agreed on a deal to cut Spanish and Italian borrowing costs and maintain a single currency, boosting sentiment on the last trading day of the quarter in one of the worst years for oil. The increase represents the largest one-day percentage gain for the front month in 2012. For the week, WTI was up by 6.5%.

RBOB: Jul: $2.7272/gal, up 11.30 cents/gal
Reformulated gasoline blendstock for oxygen blending (RBOB) prices gained as the prompt-month contract expires; the August contract will trade as the prompt month on Monday.

NATURAL GAS: Aug: $2.824/MMBtu, up 10.2 cents
Natural gas futures on the NYMEX rose 4% by closing, supported by well-above-normal temperatures sustained in the weather outlooks for the US. Increased cooling demand has raised trading expectations, despite the glut in storage supply.

ETHANE: higher at 28.25-29.00 cents/gal
Mont Belvieu ethane prices ended the week higher after reaching a record low price on Thursday. Trading was thin ahead of the weekend, when many Americans will be celebrating and preparing for the Fourth of July holiday. Record high ethane inventory has continued to pressure prices.

AROMATICS: toluene up at $3.75-3.90/gal
Spot prices for n-grade toluene moved higher to $3.75-3.90/gal, compared with $3.55-3.65/gal, on the back of tight supply and stronger crude values. Meanwhile, US MX spot prices moved up to $3.50-3.70/gal, compared with $3.45-3.55/gal a day earlier, following a hike in crude values and higher prices in other aromatics markets.
HashMap vs HashSet in Java

We are going to discuss the differences between HashSet and HashMap in this article. These are two of the most important Collection classes in Java. They are present in the java.util package, and we use them for various data-structure purposes. This topic is really important and is frequently asked about in interviews, so you must know both classes and the differences between them thoroughly. Before discussing the differences, we will first look at each of them separately.

What is a HashSet in Java?

HashSet is a collection class that implements the Set interface and does not allow any duplicate values. All the objects stored in a HashSet should override the equals() and hashCode() methods so that duplicates can be detected. The HashSet is not thread-safe and is not synchronized. The add() method adds elements to a HashSet. The syntax of this method is:

public boolean add(E e)

This method returns a boolean value: true if the element is unique and was added successfully, and false if a duplicate value was passed to the HashSet. For example:

HashSet<String> vehicleSet = new HashSet<>();
vehicleSet.add("Car");
vehicleSet.add("Motorcycle");
vehicleSet.add("Bus");

What is a HashMap in Java?

A HashMap is a hash table that implements the Map interface and maps keys to values. HashMap does not allow duplicate keys, but it does allow duplicate values. Two common implementation classes of the Map interface are TreeMap and HashMap. The difference between them is that TreeMap keeps its keys in sorted order, while HashMap does not maintain any order. The HashMap is not thread-safe and is not synchronized. It does not allow duplicate keys, but it allows null values (and a single null key).
To add elements to a HashMap, we use the put() method, which accepts a key and a value. Its syntax is:

public V put(K key, V value)

For example:

HashMap<Integer, String> vehicleHashMap = new HashMap<Integer, String>();
vehicleHashMap.put(1, "Car");
vehicleHashMap.put(2, "Motorcycle");
vehicleHashMap.put(3, "Bus");

So, this was a brief introduction to both HashSet and HashMap. We came to know that both are not thread-safe and not synchronized. Now, let's discuss the differences between them.

Java HashSet Vs HashMap

We will discuss each difference with a specific parameter:

1. Implementation Hierarchy

The HashSet implements the Set interface of Java while the HashMap implements the Map interface. The Set interface extends the Collection interface, which is the top-level interface of the Java Collection framework, while the Map interface does not extend any interface.

2. Data Storage

The HashSet stores data in the form of objects, while the HashMap stores data in the form of key-value pairs. In a HashMap, we can retrieve each value using its key. For example:

HashSet<String> hs = new HashSet<String>();
hs.add("Java");

HashMap<Integer, String> hm = new HashMap<Integer, String>();
hm.put(1, "Java");

3. Duplicate Values

HashSet does not allow you to add duplicate values. HashMap stores key-value pairs and allows duplicate values but not duplicate keys; if we put an existing key, the new value replaces the old value for that key.

4. Null Values

HashSet allows a single null value; after one null has been added, HashSet will not add another. On the other hand, HashMap allows multiple null values but only a single null key.

5. Internal Implementation

HashSet internally uses a HashMap, while HashMap does not use a HashSet or any other Set.

6. Methods to Insert Elements

There are predefined methods for both HashSet and HashMap to store or add elements.
The add() method adds elements to a HashSet, while the put() method adds or stores elements in a HashMap. When using the add() method, we directly pass the value in the form of an object. But when using the put() method, we need to pass both the key and the value to add the element to a HashMap.

7. Mechanism to Add Elements

HashMap internally uses hashing on its keys to store elements. The HashSet, on the other hand, uses a HashMap object internally to store its elements.

8. Performance

HashSet is slower than HashMap. The reason HashMap is faster is that it uses unique keys to access its values: each value is stored under a corresponding key, and values can be retrieved quickly through those keys. HashSet is based entirely on objects, so retrieval is slower.

9. Dummy Values

As we know, HashSet uses a HashMap to store its elements. The argument passed to the add(Object) method acts as a key in the backing HashMap, and Java internally supplies a shared dummy object as the value for every key.

10. Example

- HashSet: {1, 2, 3, 4} or {"Java", "C++", "Python"}
- HashMap: {a->1, b->2, c->3, d->4} or {1->"Java", 2->"Python", 3->"C++"}

HashSet vs HashMap in Java in Tabular Form

After discussing the differences, we will compare them in tabular form.
The differences are given below:

Examples to understand the Difference Between HashSet and HashMap in Java

Now, we will understand HashSet and HashMap with the help of Java programs:

Code to understand Java HashSet:

package com.techvidvan.hahssetvshashmap;

import java.util.HashSet;

public class HashSetDemo {
  public static void main(String[] args) {
    // Create a HashSet
    HashSet<String> hset = new HashSet<String>();

    // Add elements to the HashSet using the add() method
    hset.add("Java");
    hset.add("Python");
    hset.add("Ruby");
    hset.add("C++");

    // Displaying HashSet elements
    System.out.println("HashSet contains:\n" + hset);

    // Adding duplicate values
    hset.add("Java");
    hset.add("Ruby");
    System.out.println("After adding duplicate values, HashSet contains:\n" + hset);

    // Adding null values to the HashSet
    hset.add(null);
    System.out.println("After adding a null value for the first time, HashSet contains:\n" + hset);
    hset.add(null);
    System.out.println("After adding a null value for the second time, HashSet contains:\n" + hset);
  }
}

Output

HashSet contains:
[Java, C++, Ruby, Python]
After adding duplicate values, HashSet contains:
[Java, C++, Ruby, Python]
After adding a null value for the first time, HashSet contains:
[null, Java, C++, Ruby, Python]
After adding a null value for the second time, HashSet contains:
[null, Java, C++, Ruby, Python]

Code to understand Java HashMap:

package com.techvidvan.hahssetvshashmap;

import java.util.HashMap;

public class HashMapDemo {
  public static void main(String[] args) {
    // Creating a HashMap
    HashMap<Integer, String> hmap = new HashMap<Integer, String>();

    // Add elements to the HashMap using the put() method
    hmap.put(1, "Java");
    hmap.put(2, "Python");
    hmap.put(3, "Ruby");
    hmap.put(4, "C++");

    // Displaying HashMap elements
    System.out.println("HashMap contains:\n" + hmap);

    // Putting a duplicate key into the HashMap replaces its value
    hmap.put(4, "JavaScript");
    System.out.println("After adding a duplicate key, HashMap contains:\n" + hmap);
  }
}

Output

HashMap contains:
{1=Java, 2=Python, 3=Ruby, 4=C++}
After adding a duplicate key, HashMap contains:
{1=Java, 2=Python, 3=Ruby, 4=JavaScript}

When to use HashSet and HashMap in Java?

We should prefer HashSet over HashMap when we want to maintain uniqueness in the Collection object. In all other cases we should use HashMap, as its performance is better than HashSet's.

Conclusion

In this article, we discussed every difference between HashSet and HashMap. We use both of them as Collection classes in Java. HashSet implements the Set interface and internally works like a HashMap, while HashMap implements the Map interface. HashMap should always be preferred unless there is a need to maintain the uniqueness of elements in the Collection. Hope HashSet vs HashMap is clear to you. Do share your feedback in the comment section.
https://techvidvan.com/tutorials/hashmap-vs-hashset-in-java/
Say that I have two classes, A and B, that are in the package Sample. In class B, I have generated a random int b that is either a 0 or 1. I want to print int b in class A. What code should I use to do this?

Here is class B:

package Sample;

import java.util.Random;

public class B {
    Random random = new Random();
    int b = random.nextInt(2); // b is either 0 or 1
}

And I need code to go in class A here:

package Sample;

public class A {
    // How do I print out the int b here?
}
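One minimal way to do this, assuming b keeps its default (package-private) access so it is readable from any class in package Sample, is to create a B instance and print its field. The sketch below folds B into the same file so it runs standalone; the variable name myB is ours:

```java
import java.util.Random;

// Same content as class B from the question, made non-public
// so both classes can live in one file for this sketch.
class B {
    Random random = new Random();
    int b = random.nextInt(2); // b is either 0 or 1
}

public class A {
    public static void main(String[] args) {
        // b is an instance field, so A needs a B object before it can read it.
        B myB = new B();
        System.out.println(myB.b); // prints 0 or 1
    }
}
```

If b needed to be read from outside the package, it would have to be made public or exposed through a getter instead.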
https://proxieslive.com/tag/java/
Hello, your base is DCOracle2, 1.3 beta. But my base and fix was for DCOracle2, 1.2.

The successor (?) of DCOracle2 was reported/initiated by Maciej Wisniowski in

The base of this development was not version 1.3, and so we can understand the difference. When you use the version DCOracle2, 1.3 beta, then you must merge the differences or the essentials of my bugfix, which I describe in:

Klaus Happle

-----Original Message-----
From: Maan M. Hamze [mailto:[EMAIL PROTECTED]]
Sent: Sunday, 2 December 2007 20:29
To: zope-db@zope.org
Cc: Happle Dr., Klaus Martin
Subject: AW: [Zope-DB] [ANN] Modified version of DCOracle2 is available - RE Dr. Klaus Happle fixes

Hello - I was checking your fixes (Dr. Klaus Happle) for the two functions in dco2.c: (RE: )

static PyObject *Cursor_ResultSet(Cursor *self, int count)

and

static PyObject *Cursor_fetch(Cursor *self, PyObject *args)

I noticed that in the **original** dco2.c we have:

1. In static PyObject *Cursor_ResultSet(Cursor *self, int count):

#ifndef ORACLE8i
    rs->fetchResultCode = OCI_SUCCESS;
#endif

2. In static PyObject *Cursor_fetch(Cursor *self, PyObject *args):

#ifndef ORACLE8i
    if (status == OCI_SUCCESS_WITH_INFO) {
        for (i = 0; i < PyList_Size(self->results); i++) {
            rs = (ResultSet *) PyList_GetItem(self->results, i);
            rs->fetchResultCode = status;
        }
    }
#endif

In both cases, you got rid of #ifndef ORACLE8i in your fixes of the two functions. I am just curious as to why these ifndef's were removed. Can someone please clarify.

Note: in dco2.c we have:

#ifdef ORACLE9i
# ifndef ORACLE8i
#  define ORACLE8i
# endif
#endif

so with the #ifndef ORACLE8i, I imagine the original code did not compile the statements above when Oracle 8 or Oracle 9 was detected. I am curious how removing the #ifndef ORACLE8i condition in the fixes to the two functions would affect things at large.

Thanks,
Maan
_______________________________________________
Zope-DB mailing list
Zope-DB@zope.org
https://www.mail-archive.com/zope-db@zope.org/msg01034.html
XNamespace Class

Represents an XML namespace. This class cannot be inherited.

Assembly: System.Xml.Linq (in System.Xml.Linq.dll)

System.Xml.Linq.XNamespace

This class represents the XML construct of namespaces. Every XName contains an XNamespace. Even if an element is not in a namespace, the element's XName still contains a namespace, XNamespace.None. The XName.Namespace property is guaranteed to not be null.

The most common way to create an XNamespace object is to simply assign a string to it. You can then combine the namespace with a local name by using the override of the addition operator. In Visual Basic, however, you would typically declare a global default namespace instead.

Assigning a string to an XNamespace uses the implicit conversion from String. See How to: Create a Document with Namespaces (C#) (LINQ to XML) for more information and examples. See Namespaces in Visual Basic (LINQ to XML) for more information on using namespaces in Visual Basic.

If you create an attribute that declares a namespace, the prefix specified in the attribute will be persisted in the serialized XML. To create an attribute that declares a namespace with a prefix, you create an attribute where the namespace of the name of the attribute is Xmlns, and the name of the attribute is the namespace prefix. The value of the attribute is the URI of the namespace. In Visual Basic, instead of creating a namespace node to control namespace prefixes, you would typically use a global namespace declaration. For more information, see How to: Control Namespace Prefixes (C#) (LINQ to XML).

When constructing an attribute that will be a namespace, if the attribute name has the special value of "xmlns", then when the XML tree is serialized, the namespace will be declared as the default namespace.
The special attribute with the name of "xmlns" itself is not in any namespace. The value of the attribute is the namespace URI. When such an attribute is present, the namespace becomes the default namespace when the XML tree is serialized. In Visual Basic, instead of creating a namespace node to create a default namespace, you would typically use a global default namespace declaration.

XNamespace objects are guaranteed to be atomized; that is, if two XNamespace objects have exactly the same URI, they will share the same instance. The equality and comparison operators are provided explicitly for this purpose.

Another way to specify a namespace and a local name is to use an expanded name in the form {namespace}name.

This approach has performance implications. Each time that you pass a string that contains an expanded name to LINQ to XML, it must parse the name, find the atomized namespace, and find the atomized name. This process takes CPU time. If performance is important, you may want to use a different approach. With Visual Basic, the recommended approach is to use XML literals, which do not involve the use of expanded names.
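The idioms described above can be collected into one small C# program. This is our sketch, not the original page's sample, and the adventure-works URI is just a placeholder namespace:

```csharp
using System;
using System.Xml.Linq;

class XNamespaceDemo
{
    static void Main()
    {
        // Assigning a string uses the implicit conversion to XNamespace.
        XNamespace aw = "http://www.adventure-works.com";

        // The overridden + operator combines namespace and local name into an XName.
        XElement root = new XElement(aw + "Root", "content");
        Console.WriteLine(root.Name.Namespace == aw);   // True

        // The expanded-name form "{namespace}name" resolves to the same XName.
        XElement same = new XElement("{http://www.adventure-works.com}Root", "content");
        Console.WriteLine(root.Name == same.Name);      // True

        // XNamespace objects are atomized: equal URIs share one instance.
        XNamespace again = "http://www.adventure-works.com";
        Console.WriteLine(ReferenceEquals(aw, again));  // True
    }
}
```

The last check is exactly the atomization guarantee stated above; it is also why the expanded-name string form costs extra CPU, since the string must be parsed and looked up against the atomized tables each time.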
https://msdn.microsoft.com/EN-US/library/system.xml.linq.xnamespace
I2C accelerometer model MMA7455

Hi all! This time I'm trying to make the MMA7455 accelerometer work.

The MMA7455 schematics are here: Download PDF .pdf

And the specs of the sensor (ADXL345) that the MMA7455 board carries are here:

I've been trying all kinds of things to make this little chip work, but no success at all. I'm almost always getting OSError: I2C bus error.

I'd like to start with a very basic test I'm doing, reading from the DEVID register of the ADXL345 sensor. What I get after doing that is:

Based on the sensor documentation I think that my previous code is fine:

"An alternate I2C address of 0x53 (followed by the R/W bit) can be chosen by grounding the SDO/ALT ADDRESS pin (Pin 12). This translates to 0xA6 for a write and 0xA7 for a read"

If someone understands what's happening, please share that precious knowledge. I still don't understand what [59, 60, 61] means, by the way...

@jmarcelino @livius @robert-hh Hey all, thanks a lot for helping me these last days with this driver. This is the code I have until now... it's a shame that I didn't make it, but I think that I could have broken the sensor when I plugged CS into 5V instead of 3V3. I think I will buy another one; that will take some weeks until it gets to Uruguay :P

Anyway, this is the code we've made until now, maybe it's useful for someone else. It's not throwing the ugly I2C bus error anymore, but the device ID register is always returning zero, and that's bad... not sure if the device is broken or what, but comparing this code with the implementation @livius has referenced, and with other implementations I found on Google for other languages, it looks like reading the device ID is the first action almost everyone does, so... I don't know.
from machine import I2C
from machine import Pin
from machine import Timer
import time

def DelayUs(us):
    Timer.sleep_us(us)

# i2c slave address
SLAVE_ADDRESS = 0x1D

# registers
REG_POWER_CTL = 0x2D
REG_BW_RATE = 0x2C

# connected to SDO and CS of MMA7455
CS = Pin("P23", mode=Pin.OUT)
CS.value(1)
CS.hold()
DelayUs(1000)

# main
i2c = I2C(0, I2C.MASTER, baudrate=100000)
print(i2c.scan())

def setPowerCtrl(measure, wake_up=0, sleep=0, auto_sleep=0, link=0):
    power_ctl = wake_up & 0x03
    if sleep:
        power_ctl |= 0x04
    if measure:
        power_ctl |= 0x08
    if auto_sleep:
        power_ctl |= 0x10
    if link:
        power_ctl |= 0x20
    i2c.writeto_mem(SLAVE_ADDRESS, REG_POWER_CTL, bytes([power_ctl]))

# *****************************
# ******* main logic **********
# *****************************
data = i2c.readfrom_mem(SLAVE_ADDRESS, 0x00, 1)
print("DEVID: " + str(bin(data[0])))

# set power characteristics
setPowerCtrl(1, wake_up=0, sleep=0, auto_sleep=0, link=0)

@livius said in I2C accelerometer model MMA7455:

"{0:b}".format(5)

Uhhh good, looks much better than my awful function :D

jajaja, sorry for breaking your eyes but I didn't know that

@pablocaviglia for bin formatting use:

>>> bin(5)
'0b101'
>>> "{0:b}".format(5)
'101'
>>>

to see more in memory:

data = i2c.readfrom_mem(SlaveAddress, 0x00, 5)  # increase value to 5 bytes from memory and look what you get

@livius said in I2C accelerometer model MMA7455:

data = i2c.readfrom_mem(SlaveAddress, 0x00, 1)

Well... at least something changed! The 0000000 at the end is the way I found to show the byte in binary string. Btw is this way correct to do that?
def byteToBitArrayString(rcv):
    bitArrayStr = "1" if rcv&0 != 0 else "0"
    bitArrayStr += "1" if rcv&1 != 0 else "0"
    bitArrayStr += "1" if rcv&2 != 0 else "0"
    bitArrayStr += "1" if rcv&3 != 0 else "0"
    bitArrayStr += "1" if rcv&4 != 0 else "0"
    bitArrayStr += "1" if rcv&5 != 0 else "0"
    bitArrayStr += "1" if rcv&6 != 0 else "0"
    bitArrayStr += "1" if rcv&7 != 0 else "0"
    return bitArrayStr

@pablocaviglia try this instead:

data = i2c.readfrom_mem(SlaveAddress, 0x00, 1)

@livius I've checked that code, really nice! I'm not receiving the I2C bus error anymore, but based on the docs, when querying the DEVID register I should get this:

My code does this:

And I got this:

PD: By the way, how do I earn reputation points in this forum? I can only send 1 message each 10 minutes... boring!
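As an aside on the formatting discussion above: "{0:b}".format drops leading zeros, so a fixed-width format is handier when eyeballing register contents. A short sketch; per the ADXL345 datasheet, a healthy DEVID read should return 0xE5:

```python
# Format a byte as a fixed-width binary string.
# '{:08b}' pads with leading zeros to 8 bits, which is what you
# want when inspecting register contents such as DEVID.
def byte_to_bits(value):
    return "{:08b}".format(value)

print(byte_to_bits(5))     # 00000101
print(byte_to_bits(0xE5))  # 11100101  (the ADXL345 DEVID value)
```

So a DEVID readback of all zeros, as reported above, is a sign the chip never answered, not just a display problem.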
There's something that I don't understand from that document section. It says that 0x1D followed by the R/W bit translates to 0x3A for write and 0x3B for read... what does that mean?

- jmarcelino last edited by jmarcelino

If you look at the schematic, the CS pin on the board shouldn't be connected to 5V. It comes from 3.3V. The chip pin is not made for that, and you're also connecting the 5V supply to a regulated 3.3V via the 4.7K pull-up...

@robert-hh said in I2C accelerometer model MMA7455:

RT9161

Ah - I see, I missed this on the first link with the schematic. Then two things:

@pablocaviglia Where do you connect your SDA, SCL? Which pins on the LoPy? You use the default initialization of I2C, so I suppose P9 - SDA, P10 - SCL? I see on the schematic that there are already pull-up resistors on the I2C lines. This is a better picture of it.

@livius said in I2C accelerometer model MMA7455:

Why 5V?

The module has an internal 3.3V regulator, the RT9161.

@pablocaviglia Why 5V? In the documentation the supply voltage range is 2.0 V to 3.6 V, and all Pycom boards are 3V3. 5V can damage your LoPy. And about the other connections: Where do you connect your SDA, SCL? Which pins on the LoPy? You use the default initialization, so I suppose P9 - SDA, P10 - SCL?

@jmarcelino yes, but I have SDO pulled down

Thx all for your interest. I haven't gotten home yet; I'll do some tests based on what you said once there. Anyway, before leaving I took a picture of the current connections of the chip. Let me show you how it looks now:

UP
GND -> GND
VCC -> 5V

DOWN
CS -> 5V
SDO -> GND
SDA, SCL -> both to the corresponding pins on the Pycom

Hope it helps to deduce something!
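The 0x3A/0x3B question from the top of this exchange is plain bit arithmetic: the 7-bit slave address occupies bits 1..7 of the byte on the wire, and bit 0 is the R/W flag. A two-line check:

```python
# 7-bit I2C address vs. the 8-bit byte that goes on the wire:
# the address sits in bits 1..7, bit 0 is the R/W flag.
ADDR = 0x1D                    # 7-bit slave address (decimal 29)
write_byte = (ADDR << 1) | 0   # R/W = 0 -> write
read_byte = (ADDR << 1) | 1    # R/W = 1 -> read
print(hex(write_byte), hex(read_byte))  # 0x3a 0x3b
```

MicroPython's i2c.readfrom_mem/writeto_mem take the 7-bit address (0x1D) and set the R/W bit themselves, so no manual shifting is needed in the driver code.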
https://forum.pycom.io/topic/1184/i2c-accelerometer-model-mma7455/10
Bracket Order closing at wrong prices.

- Jeffrey C F Wong last edited by

Hi Everyone,

I am backtesting bracket orders with the following tick targets for SI futures.

def next(self):
    close = self.data.close[0]
    long_tp = close + 0.025    ##### REWARD
    long_stop = close - 0.1    ##### RISK
    short_tp = close - 0.025   ##### REWARD
    short_stop = close + 0.1   ##### RISK

    if not self.position:
        if self.crossover:
            self.buy_bracket(stopprice=long_stop, limitprice=long_tp,
                             exectype=bt.Order.Market, size=1)
        if self.crossunder:
            self.sell_bracket(stopprice=short_stop, limitprice=short_tp,
                              exectype=bt.Order.Market, size=1)

However, after backtesting, the results look weird, as the P&L is not consistently 0.025 profit or -0.1 stop loss:

Open Time            Close Time           Entry   Candles  PnL
2020-11-16 04:28:00  2020-11-16 05:04:00  25.11   36       0.02
2020-11-16 05:44:00  2020-11-16 06:09:00  25.025  25       -0.095
2020-11-16 06:17:00  2020-11-16 06:31:00  24.95   14       0.025
2020-11-16 06:42:00  2020-11-16 06:45:00  24.99   3        0.02
2020-11-16 07:40:00  2020-11-16 08:31:00  24.97   51       -0.1
2020-11-16 09:31:00  2020-11-16 09:42:00  24.945  11       0.03
2020-11-16 10:12:00  2020-11-16 10:35:00  24.935  23       -0.095
2020-11-16 10:40:00  2020-11-16 11:10:00  24.875  30       0.03
2020-11-16 11:27:00  2020-11-16 11:30:00  24.905  3        0.025
2020-11-17 02:55:00  2020-11-17 02:58:00  24.845  3        0.02
2020-11-17 04:24:00  2020-11-17 04:38:00  24.795  14       0.025
2020-11-17 05:41:00  2020-11-17 06:13:00  24.76   32       -0.105
2020-11-17 06:16:00  2020-11-17 06:23:00  24.69   7        0.02
2020-11-17 07:14:00  2020-11-17 07:21:00  24.73   7        0.025
2020-11-17 07:27:00  2020-11-17 07:32:00  24.71   5        0.02

I painstakingly went through the CSV for each trade and I just can't seem to find why it fills at such random prices. For example, the first winning short trade: with an entry price of 25.11, it would only make sense to take profit at 25.085 (25.11 - 0.025). However, you can see that 5:04 doesn't have 25.085 in its OHLC. So why did it close the trade at 25.09?
Am I missing some important parameters in the bracket definition?

Could you run the backtest printing data for order creation and execution, along with OHLCV? This will help us to see what's going on.

- Jeffrey C F Wong last edited by

@run-out Thank you for getting back. I reviewed the CSV again and noticed these trades are correct, because it calculated the crossover based on the previous Close price and then entered at the next Open price. If I use Cheat-on-close, all the P&L are perfect. However, now my question is: how can I specify the Take Profit and Stop Loss ticks based on the parent's executed price instead of the previous Close?

Well, the three orders are issued simultaneously, so you can't create the take profit and stop loss on the executed price of the entry order, as it has not happened yet. Someone else may know a built-in way, but I've issued orders manually in the past after the fill of the entry order, at which point you can use the executed price.

- Jayden Clark last edited by

To your second question: I actually believe that the kind of tick trading you're talking of, using bid/ask, is not supported in backtrader.

- Jayden Clark last edited by

My bad, I didn't read the question properly. I think what you're asking lends itself to creating a separate stop/limit order once in the market.
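Following run-out's suggestion, the exits would be sent manually once the parent order reports Completed (in backtrader, that happens inside notify_order). The price arithmetic for that step is just this, sketched here as a hypothetical helper (bracket_levels is our name, not a backtrader API), with the tick offsets from the strategy above and a round() to keep floating-point noise off the order prices:

```python
# Sketch: derive bracket levels from the *executed* entry price,
# instead of the previous bar's close used by buy_bracket/sell_bracket.
TP_OFFSET = 0.025   # reward
SL_OFFSET = 0.1     # risk

def bracket_levels(executed_price, is_long):
    """Return (take_profit, stop_loss) for a filled entry."""
    if is_long:
        tp = executed_price + TP_OFFSET
        sl = executed_price - SL_OFFSET
    else:
        tp = executed_price - TP_OFFSET
        sl = executed_price + SL_OFFSET
    # Round to tick precision so float artifacts don't leak into orders.
    return round(tp, 3), round(sl, 3)

# Example: the first short in the table filled at 25.11, so the exits
# should sit at 25.085 (limit) and 25.21 (stop).
tp, sl = bracket_levels(25.11, is_long=False)
print(tp, sl)  # 25.085 25.21
```

Inside notify_order you would check order.status == order.Completed for the parent, call something like this on order.executed.price, and then issue the limit and stop child orders yourself.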
https://community.backtrader.com/topic/3210/bracket-order-closing-at-wrong-prices
Pyramid CRUD, admin web interface.

Project description

pyramid_sacrud

Documentation

Overview

The list of standard backends:

- ps_alchemy - provides SQLAlchemy models
- ps_mongo - provides MongoDB
- etc.

Look how easy it is to use with Pyramid and SQLAlchemy:

from .models import (Model1, Model2, Model3,)

# add SQLAlchemy backend
config.include('ps_alchemy')

# add sacrud and project models
config.include('pyramid_sacrud')
settings = config.registry.settings
settings['pyramid_sacrud.models'] = (('Group1', [Model1, Model2]),
                                     ('Group2', [Model3]))

go to And see…

An example can be found here.

Installing

pip install pyramid_sacrud

0.3.2 (2016-02-07)

- rename CONFIG_MODELS to CONFIG_RESOURCES
- make GroupResource the default parent
- add example with custom resource

0.3.1 (2016-01-08)

- add paginate>=0.5.0 version to requirements.txt (see #117)

0.3.0 (2016-01-07)

- New resources architecture
- move SQLAlchemy handler to separate module ps_alchemy
- migrate tests to py.test (#102 issue)
https://pypi.org/project/pyramid_sacrud/
Date posted: 2017-01-04

One part of our application had its frontend made with React, taking advantage of its reactivity to state changes, which is very helpful when you are building modern and responsive applications. However, we underestimated the complexity of this system, and maintaining it with React alone became very complicated and tiresome; this is when we decided to adopt one more paradigm: Redux. This will not be a tutorial; instead, I only want to present a general idea of how all these tools work.

I will first make a quick introduction to how React works: the easiest way to understand it, for me, is by imagining it as a way to make custom HTML elements. For example, say you have the following pattern:

<div>
  <h1>Header</h1>
  <p>Body text goes here</p>
</div>

Wouldn't it be nice if instead of typing all these divs, h1s and ps, you were able to make a custom element with that format (maybe call it Section)? With React, it would be easy:

class Section extends React.Component {
  render() {
    return (
      <div>
        <h1>{this.props.title}</h1>
        <p>{this.props.children}</p>
      </div>
    );
  }
}

Props are parameters passed to the component (like an HTML attribute, or children); they are recovered from the this.props object. Now to render this element with React:

<Section title="Header">
  Body text goes here
</Section>

React also has the concept of State, which refers to the mutable state of a component. For example: a lightbulb of 60W would have "60W" as a prop, but whether it is on or off will depend on its state. States are very easy to work with: we set the initial state in the constructor, and every time we need to modify it, we use the method this.setState to pass the new state. The component will update itself automatically.
class Lightbulb extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      isOn: false
    };
  }

  toggle = () => {
    this.setState({
      isOn: !this.state.isOn,
    });
  }

  render() {
    let message;
    if (this.state.isOn) {
      message = 'On!';
    } else {
      message = 'Off!';
    }
    return (
      <div>
        {message}
        <button onClick={this.toggle}>Click me!</button>
      </div>
    );
  }
}

But things start to get complicated when our application grows: sometimes we need to access the state of a component from another component, and sometimes state needs to be shared. For this, we have to remove the state from the component and pass it to its parent, so the component only receives its values as props. The tendency, therefore, is that all the state will end up in the root component, and all the child components will only receive props: all the state lives in the root component and is passed down the tree as props; similarly, whenever an event happens at the bottom of the tree, it will bubble up to the top.

This is when better paradigms start to appear: the most popular used to be Flux, and now it is Redux. Redux is more a paradigm than a library - you don't need to use the library, but it does provide you some boilerplate code. It also respects this tendency of all the state living in a single root, which is called the store: the store is an object that contains the state of the whole application. And this is an important detail: you do not modify the state that lives in the store, you create a new "version" of this state - the old states get archived - and this makes logging and debugging extremely easy. When you use the store provided by the Redux library, it will take care of recording the old states for you.
Myself, I would abstract the data flow of React + Redux into 5 simple steps:

1. A component triggers an action (example: a button is clicked)
2. The action is sent to the reducer (example: turn on the light)
3. The reducer creates a new version of the state, based on the action (example: { lightsOn: true })
4. The store gets updated with the new state
5. The component gets re-rendered based on the new state

1 A component triggers an action

To make the component trigger an action, we simply pass the function (action) as a prop - the component will then call it whenever the right event happens:

class Lightbulb extends React.Component {
  render() {
    let message;
    if (this.props.isOn) {
      message = 'On!';
    } else {
      message = 'Off!';
    }
    return (
      <div>
        {message}
        <button onClick={this.props.toggle}>Click me!</button>
      </div>
    );
  }
}

// In the lines below, we are binding the state from the store,
// as well as a function that dispatches the action to toggle
// the lights on/off. The "dispatch" function is provided
// by the Redux library - we only need to make the "toggleLight"
// action ourselves
const mapStateAsProps = (state) => ({
  isOn: state.isOn,
});

const mapDispatchAsProps = (dispatch) => ({
  toggle: () => {
    dispatch(actions.toggleLight());
  }
});

// The 'connect' function is also provided by the Redux
// library, it binds the props and methods to the React
// component
const LightbulbElement = connect(
  mapStateAsProps,
  mapDispatchAsProps,
)(Lightbulb);

And to render this element:

<LightbulbElement />

2 The action is sent to the reducer

An action is sent to all reducers automatically every time we use the dispatch method I described above. But what does that toggleLight look like? Like this:

function toggleLight() {
  return {
    type: 'TOGGLE_LIGHT',
  };
}

Actions usually return objects with 1 or 2 parameters: type and payload. The type parameter refers to what kind of action you are performing: every action should have a distinct type.
The payload parameter contains additional information that you need to pass to the reducer.

3 The reducer creates a new version of the state, based on the action

Reducers are responsible for replacing the current state of the application with a new one. For every attribute in the state (for example, say our state object contains the attributes "isOn" and "colour"), we should have a distinct reducer - this will ensure that one reducer will not modify an attribute that does not belong to it. In our case, since we only have one attribute (isOn), we would create only one reducer; it checks the action type to make sure that piece of the state should be changed, and if it should, it creates a new version of the state and returns it:

// This function receives "state", which is the previous state in our store,
// and "action", which is the action dispatched
function isOnReducer(state = false, action) {
  switch (action.type) {
    case 'TOGGLE_LIGHT':
      return !state;
    default:
      return state;
  }
}

In another scenario, say we are receiving a payload and we are going to modify a state that is an object:

function myOtherReducer(state = { colour: 'black', opacity: 1.0 }, action) {
  switch (action.type) {
    case 'CHANGE_COLOUR':
      // Notice that I am using the spread operator (...) to create a new object
      // and recover the values of the previous state; then overriding the colour
      // with what I received from the payload
      return { ...state, colour: action.payload };
    case 'CHANGE_OPACITY':
      return { ...state, opacity: action.payload };
    default:
      return state;
  }
}

4 The store gets updated with the new state

This part is done automatically by Redux; we only need to give it our reducer:

import { createStore } from 'redux';
import { isOnReducer } from './reducers';

const store = createStore(isOnReducer);

export default store;

5 The component gets re-rendered based on the new state

This is also done automatically.
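Redux also ships a combineReducers helper for exactly this one-reducer-per-attribute split. A hand-rolled equivalent (ours, for illustration - not the real implementation) shows what it does: each slice reducer only ever sees, and replaces, its own slice of the state.

```javascript
// Hand-rolled equivalent of Redux's combineReducers, for illustration.
function combineReducers(reducers) {
  return (state = {}, action) => {
    const next = {};
    for (const key of Object.keys(reducers)) {
      // Each slice reducer receives only its own slice plus the action.
      next[key] = reducers[key](state[key], action);
    }
    return next;
  };
}

function isOnReducer(state = false, action) {
  return action.type === 'TOGGLE_LIGHT' ? !state : state;
}

function colourReducer(state = 'black', action) {
  return action.type === 'CHANGE_COLOUR' ? action.payload : state;
}

const rootReducer = combineReducers({ isOn: isOnReducer, colour: colourReducer });

let state = rootReducer(undefined, { type: '@@INIT' }); // defaults kick in
state = rootReducer(state, { type: 'TOGGLE_LIGHT' });
state = rootReducer(state, { type: 'CHANGE_COLOUR', payload: 'red' });
console.log(state); // { isOn: true, colour: 'red' }
```

Note how a TOGGLE_LIGHT action passes through colourReducer unchanged; that is the isolation the article asks for.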
Redux will detect whether the parts of the store that a component uses have changed - if they did, the component will get re-rendered.
https://hcoelho.com/blog/21/Organizing_data_flow_with_React__Redux
The previous two chapters hid the inner-mechanics of PyMC3, and more generally Markov Chain Monte Carlo (MCMC), from the reader. The reason for including this chapter is three-fold. The first is that any book on Bayesian inference must discuss MCMC. I cannot fight this. Blame the statisticians. Secondly, knowing the process of MCMC gives you insight into whether your algorithm has converged. (Converged to what? We will get to that.) Thirdly, we'll understand why we are returned thousands of samples from the posterior as a solution, which at first thought can be odd.

When we setup a Bayesian inference problem with $N$ unknowns, we are implicitly creating an $N$ dimensional space for the prior distributions to exist in. Associated with the space is an additional dimension, which we can describe as the surface, or curve, that sits on top of the space and reflects the prior probability of a particular point. The surface on the space is defined by our prior distributions. For example, if we have two unknowns $p_1$ and $p_2$, and priors for both are $\text{Uniform}(0,5)$, the space created is a square of length 5 and the surface is a flat plane that sits on top of the square (representing that every point is equally likely).
%matplotlib inline
import scipy.stats as stats
from IPython.core.pylabtools import figsize
import numpy as np
figsize(12.5, 4)

import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
jet = plt.cm.jet

fig = plt.figure()
x = y = np.linspace(0, 5, 100)
X, Y = np.meshgrid(x, y)

plt.subplot(121)
uni_x = stats.uniform.pdf(x, loc=0, scale=5)
uni_y = stats.uniform.pdf(y, loc=0, scale=5)
M = np.dot(uni_x[:, None], uni_y[None, :])
im = plt.imshow(M, interpolation='none', origin='lower',
                cmap=jet, vmax=1, vmin=-.15, extent=(0, 5, 0, 5))
plt.xlim(0, 5)
plt.ylim(0, 5)
plt.title("Landscape formed by Uniform priors.")

ax = fig.add_subplot(122, projection='3d')
ax.plot_surface(X, Y, M, cmap=plt.cm.jet, vmax=1, vmin=-.15)
ax.view_init(azim=390)
plt.title("Uniform prior landscape; alternate view");

Alternatively, if the two priors are $\text{Exp}(3)$ and $\text{Exp}(10)$, then the space is all positive numbers on the 2-D plane, and the surface induced by the priors looks like a waterfall that starts at the point (0,0) and flows over the positive numbers. The plots below visualize this. The darker red the color, the more prior probability is assigned to that location. Conversely, areas with darker blue represent that our priors assign very low probability to that location.
In practice, spaces and surfaces generated by our priors can be much higher dimensional. If these surfaces describe our prior distributions on the unknowns, what happens to our space after we incorporate our observed data $X$? The data $X$ does not change the space, but it changes the surface of the space by pulling and stretching the fabric of the prior surface to reflect where the true parameters likely live. More data means more pulling and stretching, and our original shape becomes mangled or insignificant compared to the newly formed shape. Less data, and our original shape is more present. Regardless, the resulting surface describes the posterior distribution. Again I must stress that it is, unfortunately, impossible to visualize this in large dimensions. For two dimensions, the data essentially pushes up the original surface to make tall mountains. The tendency of the observed data to push up the posterior probability in certain areas is checked by the prior probability distribution, so that less prior probability means more resistance. Thus in the double-exponential prior case above, a mountain (or multiple mountains) that might erupt near the (0,0) corner would be much higher than mountains that erupt closer to (5,5), since there is more resistance (low prior probability) near (5,5). The peak reflects the posterior probability of where the true parameters are likely to be found. Importantly, if the prior has assigned a probability of 0, then no posterior probability will be assigned there. Suppose the priors mentioned above represent different parameters $\lambda$ of two Poisson distributions. We observe a few data points and visualize the new landscape: # create the observed data # sample size of data we observe, trying varying this (keep it less than 100 ;) N = 1 # the true parameters, but of course we do not see these values... lambda_1_true = 1 lambda_2_true = 3 #...we see the data generated, dependent on the above two values. 
data = np.concatenate([ stats.poisson.rvs(lambda_1_true, size=(N, 1)), stats.poisson.rvs(lambda_2_true, size=(N, 1)) ], axis=1) print("observed (2-dimensional,sample size = %d):" % N, data) # plotting details. x = y = np.linspace(.01, 5, 100) likelihood_x = np.array([stats.poisson.pmf(data[:, 0], _x) for _x in x]).prod(axis=1) likelihood_y = np.array([stats.poisson.pmf(data[:, 1], _y) for _y in y]).prod(axis=1) L = np.dot(likelihood_x[:, None], likelihood_y[None, :]) observed (2-dimensional,sample size = 1): [[0 2]] figsize(12.5, 12) # matplotlib heavy lifting below, beware! plt.subplot(221) uni_x = stats.uniform.pdf(x, loc=0, scale=5) uni_y = stats.uniform.pdf(x, loc=0, scale=5) M = np.dot(uni_x[:, None], uni_y[None, :]) im = plt.imshow(M, interpolation='none', origin='lower', cmap=jet, vmax=1, vmin=-.15, extent=(0, 5, 0, 5)) plt.scatter(lambda_2_true, lambda_1_true, c="k", s=50, edgecolor="none") plt.xlim(0, 5) plt.ylim(0, 5) plt.title("Landscape formed by Uniform priors on $p_1, p_2$.") plt.subplot(223) plt.contour(x, y, M * L) im = plt.imshow(M * L, interpolation='none', origin='lower', cmap=jet, extent=(0, 5, 0, 5)) plt.title("Landscape warped by %d data observation;\n Uniform priors on $p_1, p_2$." % N) plt.scatter(lambda_2_true, lambda_1_true, c="k", s=50, edgecolor="none") plt.xlim(0, 5) plt.ylim(0, 5) plt.subplot(222) exp_x = stats.expon.pdf(x, loc=0, scale=3) exp_y = stats.expon.pdf(x, loc=0, scale=10) M = np.dot(exp_x[:, None], exp_y[None, :]) plt.contour(x, y, M) im = plt.imshow(M, interpolation='none', origin='lower', cmap=jet, extent=(0, 5, 0, 5)) plt.scatter(lambda_2_true, lambda_1_true, c="k", s=50, edgecolor="none") plt.xlim(0, 5) plt.ylim(0, 5) plt.title("Landscape formed by Exponential priors on $p_1, p_2$.") plt.subplot(224) # This is the likelihood times prior, that results in the posterior. 
plt.contour(x, y, M * L) im = plt.imshow(M * L, interpolation='none', origin='lower', cmap=jet, extent=(0, 5, 0, 5)) plt.scatter(lambda_2_true, lambda_1_true, c="k", s=50, edgecolor="none") plt.title("Landscape warped by %d data observation;\n Exponential priors on \ $p_1, p_2$." % N) plt.xlim(0, 5) plt.ylim(0, 5); The plot on the left is the deformed landscape with the $\text{Uniform}(0,5)$ priors, and the plot on the right is the deformed landscape with the exponential priors. Notice that the posterior landscapes look different from one another, though the data observed is identical in both cases. The reason is as follows. Notice the exponential-prior landscape, bottom right figure, puts very little posterior weight on values in the upper right corner of the figure: this is because the prior does not put much weight there. On the other hand, the uniform-prior landscape is happy to put posterior weight in the upper-right corner, as the prior puts more weight there. Notice also that the highest point, corresponding to the darkest red, is biased towards (0,0) in the exponential case, which results from the exponential prior putting more prior weight in the (0,0) corner. The black dot represents the true parameters. Even with 1 sample point, the mountain attempts to contain the true parameters. Of course, inference with a sample size of 1 is incredibly naive, and choosing such a small sample size was only illustrative. It's a great exercise to try changing the sample size to other values (try 2, 5, 10, 100?...) and observing how our "mountain" posterior changes. We should explore the deformed posterior space generated by our prior surface and observed data to find the posterior mountain. However, we cannot naively search the space: any computer scientist will tell you that traversing $N$-dimensional space is exponentially difficult in $N$: the size of the space quickly blows up as we increase $N$ (see the curse of dimensionality).
What hope do we have to find these hidden mountains? The idea behind MCMC is to perform an intelligent search of the space. To say "search" implies we are looking for a particular point, which is perhaps not accurate, as we are really looking for a broad mountain. Recall that MCMC returns samples from the posterior distribution, not the distribution itself. Stretching our mountainous analogy to its limit, MCMC performs a task similar to repeatedly asking "How likely is this pebble I found to be from the mountain I am searching for?", and completes its task by returning thousands of accepted pebbles in hopes of reconstructing the original mountain. In MCMC and PyMC3 lingo, the returned sequence of "pebbles" are the samples, cumulatively called the traces. When I say MCMC intelligently searches, I really am saying MCMC will hopefully converge towards the areas of high posterior probability. MCMC does this by exploring nearby positions and moving into areas with higher probability. Again, perhaps "converge" is not an accurate term to describe MCMC's progression. Converging usually implies moving towards a point in space, but MCMC moves towards a broader area in the space and randomly walks in that area, picking up samples from that area. At first, returning thousands of samples to the user might sound like an inefficient way to describe the posterior distributions. I would argue that this is extremely efficient. Consider the alternative possibilities: (1) returning a mathematical formula for the "mountain range" would involve describing an N-dimensional surface with arbitrary peaks and valleys; (2) returning only the "peak" of the landscape, while mathematically possible and a sensible thing to do (the highest point corresponds to the most probable estimate of the unknowns), ignores the shape of the landscape, which we have previously argued is very important for determining posterior confidence in the unknowns. Besides computational reasons, likely the strongest reason for returning samples is that we can easily use The Law of Large Numbers to solve otherwise intractable problems. I postpone this discussion for the next chapter. With the thousands of samples, we can reconstruct the posterior surface by organizing them in a histogram. There is a large family of algorithms that perform MCMC. Most of these algorithms can be expressed at a high level as follows: (1) start at the current position; (2) propose moving to a new position (investigate a pebble near you); (3) accept or reject the new position, based on the position's adherence to the data and prior distributions (ask whether the pebble likely came from the mountain); (4) if you accept, move to the new position and return to step 1; else stay put and return to step 1; (5) after a large number of iterations, return all accepted positions. (Mathematical details can be found in the appendix.)
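The high-level recipe just described can be sketched as a bare-bones Metropolis sampler. This is only a toy illustration, not PyMC3's implementation; the 1-D unnormalized posterior used here (an Exp(3) prior with a single hypothetical Poisson observation of 2) is made up for the example:

```python
import numpy as np

def unnorm_posterior(lam):
    # Hypothetical target: Exp(3) prior times a Poisson likelihood
    # for one observed count of 2. Zero outside the support.
    if lam <= 0:
        return 0.0
    prior = 3.0 * np.exp(-3.0 * lam)
    likelihood = lam**2 * np.exp(-lam) / 2.0
    return prior * likelihood

rng = np.random.default_rng(0)
current = 1.0                                # arbitrary starting position
trace = np.empty(5000)
for i in range(5000):
    proposal = current + rng.normal(0, 0.5)  # investigate a nearby position
    # accept with probability min(1, ratio of posterior heights)
    if rng.random() < unnorm_posterior(proposal) / unnorm_posterior(current):
        current = proposal                   # accept the "pebble"
    trace[i] = current                       # record the current position

print(trace[1000:].mean())
```

For this made-up target the posterior is analytically a Gamma(3, 4) with mean 0.75, so the post-burn-in sample mean should land near that value; PyMC3's samplers follow the same accept/reject skeleton but with far smarter proposal mechanisms.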
This way we move in the general direction towards the regions where the posterior distributions exist, and collect samples sparingly on the journey. Once we reach the posterior distribution, we can easily collect samples as they likely all belong to the posterior distribution. If the current position of the MCMC algorithm is in an area of extremely low probability, which is often the case when the algorithm begins (typically at a random location in the space), the algorithm will move to positions that are likely not from the posterior but better than everything else nearby. Thus the first moves of the algorithm are not reflective of the posterior. In the above algorithm's pseudocode, notice that only the current position matters (new positions are investigated only near the current position). We can describe this property as memorylessness, i.e. the algorithm does not care how it arrived at its current position, only that it is there. Besides MCMC, there are other procedures available for determining the posterior distributions. A Laplace approximation is an approximation of the posterior using simple functions. A more advanced method is Variational Bayes. All three methods, Laplace Approximations, Variational Bayes, and classical MCMC, have their pros and cons. We will only focus on MCMC in this book. That being said, my friend Imri Sofar likes to classify MCMC algorithms as either "they suck", or "they really suck". He classifies the particular flavour of MCMC used by PyMC3 as just sucks ;) figsize(12.5, 4) data = np.loadtxt("data/mixture_data.csv", delimiter=",") plt.hist(data, bins=20, color="k", histtype="stepfilled", alpha=0.8) plt.title("Histogram of the dataset") plt.ylim([0, None]); print(data[:10], "...") [ 115.85679142 152.26153716 178.87449059 162.93500815 107.02820697 105.19141146 118.38288501 125.3769803 102.88054011 206.71326136] ... What does the data suggest?
It appears the data has a bimodal form, that is, it appears to have two peaks, one near 120 and the other near 200. Perhaps there are two clusters within this dataset. This dataset is a good example of the data-generation modeling technique from the last chapter. We can propose how the data might have been created. I suggest the following data generation algorithm: (1) for each data point, choose cluster 1 with probability $p$, else choose cluster 2; (2) draw a random variate from a Normal distribution with parameters $\mu_i$ and $\sigma_i$, where $i$ is the cluster chosen in step 1; (3) repeat. This algorithm would create a similar effect as the observed dataset, so we choose this as our model. Of course, we do not know $p$ or the parameters of the Normal distributions. Hence we must infer, or learn, these unknowns. Denote the Normal distributions $\text{N}_0$ and $\text{N}_1$ (having variables' index start at 0 is just Pythonic). Both currently have unknown mean and standard deviation, denoted $\mu_i$ and $\sigma_i, \; i =0,1$ respectively. A specific data point can be from either $\text{N}_0$ or $\text{N}_1$, and we assume that the data point is assigned to $\text{N}_0$ with probability $p$. An appropriate way to assign data points to clusters is to use a PyMC3 Categorical stochastic variable. Its parameter is a $k$-length array of probabilities that must sum to one, and its value attribute is an integer between 0 and $k-1$ randomly chosen according to the crafted array of probabilities (in our case $k=2$). A priori, we do not know what the probability of assignment to cluster 1 is, so we form a uniform variable on $(0, 1)$. We will call this $p_1$, so the probability of belonging to cluster 2 is therefore $p_2 = 1 - p_1$. Unfortunately, we can't just give [p1, p2] to our Categorical variable. PyMC3 uses Theano under the hood to construct the models so we need to use theano.tensor.stack() to combine $p_1$ and $p_2$ into a vector that it can understand. We pass this vector into the Categorical variable as well as the testval parameter to give our variable an idea of where to start from.
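The generative story proposed above can be simulated directly with NumPy. The parameter values below (mixing probability, centers, spreads) are made-up stand-ins chosen to resemble the histogram, not values taken from the dataset:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 300
p_true = 0.4                             # assumed probability of cluster 0
centers_true = np.array([120.0, 200.0])  # assumed cluster centers
sds_true = np.array([10.0, 20.0])        # assumed cluster spreads

# Step 1: for each point, pick a cluster (0 with probability p_true).
labels = (rng.random(N) > p_true).astype(int)

# Step 2: draw each point from the Normal of its assigned cluster.
simulated = rng.normal(centers_true[labels], sds_true[labels])

print(simulated[:5], labels[:5])
```

Plotting a histogram of `simulated` should reproduce a bimodal shape like the real dataset; the inference below runs this process in reverse, recovering $p$, the centers, and the spreads from the data alone.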
import pymc3 as pm import theano.tensor as T with pm.Model() as model: p1 = pm.Uniform('p', 0, 1) p2 = 1 - p1 p = T.stack([p1, p2]) assignment = pm.Categorical("assignment", p, shape=data.shape[0], testval=np.random.randint(0, 2, data.shape[0])) print("prior assignment, with p = %.2f:" % p1.tag.test_value) print(assignment.tag.test_value[:10]) Applied interval-transform to p and added transformed p_interval_ to model. prior assignment, with p = 0.50: [0 0 0 0 1 1 1 0 0 1] Looking at the above dataset, I would guess that the standard deviations of the two Normals are different. To maintain ignorance of what the standard deviations might be, we will initially model them as uniform on 0 to 100. We will include both standard deviations in our model using a single line of PyMC3 code: sds = pm.Uniform("sds", 0, 100, shape=2) Notice that we specified shape=2: we are modeling both $\sigma$s as a single PyMC3 variable. Note that this does not induce a necessary relationship between the two $\sigma$s, it is simply for succinctness. We also need to specify priors on the centers of the clusters. The centers are really the $\mu$ parameters in these Normal distributions. Their priors can be modeled by a Normal distribution. Looking at the data, I have an idea where the two centers might be — I would guess somewhere around 120 and 190 respectively, though I am not very confident in these eyeballed estimates. Hence I will set $\mu_0 = 120, \mu_1 = 190$ and $\sigma_0 = \sigma_1 = 10$. 
with model: sds = pm.Uniform("sds", 0, 100, shape=2) centers = pm.Normal("centers", mu=np.array([120, 190]), sd=np.array([10, 10]), shape=2) center_i = pm.Deterministic('center_i', centers[assignment]) sd_i = pm.Deterministic('sd_i', sds[assignment]) # and to combine it with the observations: observations = pm.Normal("obs", mu=center_i, sd=sd_i, observed=data) print("Random assignments: ", assignment.tag.test_value[:4], "...") print("Assigned center: ", center_i.tag.test_value[:4], "...") print("Assigned standard deviation: ", sd_i.tag.test_value[:4]) Applied interval-transform to sds and added transformed sds_interval_ to model. Random assignments: [0 0 0 0] ... Assigned center: [ 120. 120. 120. 120.] ... Assigned standard deviation: [ 50. 50. 50. 50.] Notice how we continue to build the model within the context of Model(). This automatically adds the variables that we create to our model. As long as we work within this context we will be working with the same variables that we have already defined. Similarly, any sampling that we do within the context of Model() will be done only on the model within whose context we are working. We will tell our model to explore the space that we have so far defined by defining the sampling methods, in this case Metropolis() for our continuous variables and ElemwiseCategorical() for our categorical variable. We will use these sampling methods together to explore the space by using sample( iterations, step ), where iterations is the number of steps you wish the algorithm to perform and step is the way in which you want to handle those steps. We use our combination of Metropolis() and ElemwiseCategorical() for the step and sample 25000 iterations below. with model: step1 = pm.Metropolis(vars=[p, sds, centers]) step2 = pm.ElemwiseCategorical(vars=[assignment]) trace = pm.sample(25000, step=[step1, step2]) [-------100%-------] 25000 of 25000 in 130.7 sec.
| SPS: 191.3 | ETA: 0.0 We have stored the paths of all our variables, or "traces", in the trace variable. These paths are the routes the unknown parameters (centers, standard deviations, and $p$) have taken thus far. The individual path of each variable is indexed by the PyMC3 variable name that we gave that variable when defining it within our model. For example, trace["sds"] will return a numpy array object that we can then index and slice as we would any other numpy array object. figsize(12.5, 9) plt.subplot(311) lw = 1 center_trace = trace["centers"] # for pretty colors later in the book. colors = ["#348ABD", "#A60628"] if center_trace[-1, 0] > center_trace[-1, 1] \ else ["#A60628", "#348ABD"] plt.plot(center_trace[:, 0], label="trace of center 0", c=colors[0], lw=lw) plt.plot(center_trace[:, 1], label="trace of center 1", c=colors[1], lw=lw) plt.title("Traces of unknown parameters") leg = plt.legend(loc="upper right") leg.get_frame().set_alpha(0.7) plt.subplot(312) std_trace = trace["sds"] plt.plot(std_trace[:, 0], label="trace of standard deviation of cluster 0", c=colors[0], lw=lw) plt.plot(std_trace[:, 1], label="trace of standard deviation of cluster 1", c=colors[1], lw=lw) plt.legend(loc="upper left") plt.subplot(313) p_trace = trace["p"] plt.plot(p_trace, label="$p$: frequency of assignment to cluster 0", color=colors[0], lw=lw) plt.xlabel("Steps") plt.ylim(0, 1) plt.legend(); Notice the following characteristics: (1) the traces converge, not to a single point, but to a distribution of possible points — this is convergence in an MCMC algorithm; (2) inference using the first few thousand points is a bad idea, as they are unrelated to the final distribution we are interested in, so those samples should be discarded before performing inference (we call this period before convergence the burn-in period); (3) the traces appear as a random "walk" around the space, correlated with their previous positions. To achieve further convergence, we will perform more MCMC steps. In the pseudo-code algorithm of MCMC above, the only position that matters is the current position (new positions are investigated near the current position), implicitly stored as part of the trace object. To continue where we left off, we pass the trace that we have already stored into the sample() function with the same step value. The values that we have already calculated will not be overwritten.
This ensures that our sampling continues where it left off, in the same manner as before. We will sample the MCMC fifty thousand more times and visualize the progress below: with model: trace = pm.sample(50000, step=[step1, step2], trace=trace) [-------100%-------] 50000 of 50000 in 215.4 sec. | SPS: 232.2 | ETA: 0.0 figsize(12.5, 4) center_trace = trace["centers"][25000:] prev_center_trace = trace["centers"][:25000] x = np.arange(25000) plt.plot(x, prev_center_trace[:, 0], label="previous trace of center 0", lw=lw, alpha=0.4, c=colors[1]) plt.plot(x, prev_center_trace[:, 1], label="previous trace of center 1", lw=lw, alpha=0.4, c=colors[0]) x = np.arange(25000, 75000) plt.plot(x, center_trace[:, 0], label="new trace of center 0", lw=lw, c="#348ABD") plt.plot(x, center_trace[:, 1], label="new trace of center 1", lw=lw, c="#A60628") plt.title("Traces of unknown center parameters") leg = plt.legend(loc="upper right") leg.get_frame().set_alpha(0.8) plt.xlabel("Steps"); figsize(11.0, 4) std_trace = trace["sds"][25000:] prev_std_trace = trace["sds"][:25000] _i = [1, 2, 3, 4] for i in range(2): plt.subplot(2, 2, _i[2 * i]) plt.title("Posterior of center of cluster %d" % i) plt.hist(center_trace[:, i], color=colors[i], bins=30, histtype="stepfilled") plt.subplot(2, 2, _i[2 * i + 1]) plt.title("Posterior of standard deviation of cluster %d" % i) plt.hist(std_trace[:, i], color=colors[i], bins=30, histtype="stepfilled") # plt.autoscale(tight=True) plt.tight_layout() The MCMC algorithm has proposed that the most likely centers of the two clusters are near 120 and 200 respectively. Similar inference can be applied to the standard deviation. We are also given the posterior distributions for the labels of the data points, which are present in trace["assignment"]. Below is a visualization of this. The y-axis represents a subsample of the posterior labels for each data point. The x-axis shows the sorted values of the data points.
A red square is an assignment to cluster 1, and a blue square is an assignment to cluster 0. import matplotlib as mpl figsize(12.5, 4.5) plt.cmap = mpl.colors.ListedColormap(colors) plt.imshow(trace["assignment"][::400, np.argsort(data)], cmap=plt.cmap, aspect=.4, alpha=.9) plt.xticks(np.arange(0, data.shape[0], 40), ["%.2f" % s for s in np.sort(data)[::40]]) plt.ylabel("posterior sample") plt.xlabel("value of $i$th data point") plt.title("Posterior labels of data points"); Looking at the above plot, it appears that the most uncertainty is between 150 and 170. The above plot slightly misrepresents things, as the x-axis is not a true scale (it displays the value of the $i$th sorted data point.) A clearer diagram is below, where we have estimated the frequency of each data point belonging to the labels 0 and 1. cmap = mpl.colors.LinearSegmentedColormap.from_list("BMH", colors) assign_trace = trace["assignment"] plt.scatter(data, 1 - assign_trace.mean(axis=0), cmap=cmap, c=assign_trace.mean(axis=0), s=50) plt.ylim(-0.05, 1.05) plt.xlim(35, 300) plt.title("Probability of data point belonging to cluster 0") plt.ylabel("probability") plt.xlabel("value of data point"); Even though we modeled the clusters using Normal distributions, we didn't get just a single Normal distribution that best fits the data (whatever our definition of best is), but a distribution of values for the Normal's parameters. How can we choose just a single pair of values for the mean and variance and determine a sorta-best-fit gaussian? One quick and dirty way (which has nice theoretical properties, as we will see in Chapter 5) is to use the mean of the posterior distributions.
Below we overlay the Normal density functions, using the mean of the posterior distributions as the chosen parameters, with our observed data: norm = stats.norm x = np.linspace(20, 300, 500) posterior_center_means = center_trace.mean(axis=0) posterior_std_means = std_trace.mean(axis=0) posterior_p_mean = trace["p"].mean() plt.hist(data, bins=20, histtype="step", normed=True, color="k", lw=2, label="histogram of data") y = posterior_p_mean * norm.pdf(x, loc=posterior_center_means[0], scale=posterior_std_means[0]) plt.plot(x, y, label="Cluster 0 (using posterior-mean parameters)", lw=3) plt.fill_between(x, y, color=colors[1], alpha=0.3) y = (1 - posterior_p_mean) * norm.pdf(x, loc=posterior_center_means[1], scale=posterior_std_means[1]) plt.plot(x, y, label="Cluster 1 (using posterior-mean parameters)", lw=3) plt.fill_between(x, y, color=colors[0], alpha=0.3) plt.legend(loc="upper left") plt.title("Visualizing Clusters using posterior-mean parameters"); In the above example, a possible (though less likely) scenario is that cluster 0 has a very large standard deviation, and cluster 1 has a small standard deviation. This would still satisfy the evidence, albeit less so than our original inference. Alternatively, it would be incredibly unlikely for both distributions to have a small standard deviation, as the data does not support this hypothesis at all. Thus the two standard deviations are dependent on each other: if one is small, the other must be large. In fact, all the unknowns are related in a similar manner. For example, if a standard deviation is large, the mean has a wider possible space of realizations. Conversely, a small standard deviation restricts the mean to a small area. During MCMC, we are returned vectors representing samples from the unknown posteriors. 
Elements of different vectors cannot be used together, as this would break the above logic: perhaps a sample has returned that cluster 1 has a small standard deviation, hence all the other variables in that sample would incorporate that and be adjusted accordingly. It is easy to avoid this problem, though: just make sure you are indexing traces correctly. Here is another small example to illustrate the point. Suppose two variables, $x$ and $y$, are related by $x+y=10$. We model $x$ as a Normal random variable with mean 4 and draw 10,000 samples. import pymc3 as pm with pm.Model() as model: x = pm.Normal("x", mu=4, tau=10) y = pm.Deterministic("y", 10 - x) trace_2 = pm.sample(10000, pm.Metropolis()) plt.plot(trace_2["x"]) plt.plot(trace_2["y"]) plt.title("Displaying (extreme) case of dependence between unknowns"); [-------100%-------] 10000 of 10000 in 0.9 sec. | SPS: 11550.9 | ETA: 0.0 As you can see, the two variables are not unrelated, and it would be wrong to add the $i$th sample of $x$ to the $j$th sample of $y$, unless $i = j$. The above clustering can be generalized to $k$ clusters. Choosing $k=2$ allowed us to visualize the MCMC better, and examine some very interesting plots. What about prediction? Suppose we observe a new data point, say $x = 175$, and we wish to label it to a cluster. It is foolish to simply assign it to the closer cluster center, as this ignores the standard deviation of the clusters, and we have seen from the plots above that this consideration is very important. More formally: we are interested in the probability (as we cannot be certain about labels) of assigning $x=175$ to cluster 1. Denote the assignment of $x$ as $L_x$, which is equal to 0 or 1, and we are interested in $P(L_x = 1 \;|\; x = 175 )$. A naive method to compute this is to re-run the above MCMC with the additional data point appended. The disadvantage of this method is that it will be slow to infer for each novel data point.
Alternatively, we can try a less precise, but much quicker method. We will use Bayes' Theorem for this. If you recall, Bayes' Theorem looks like: $$ P( A | X ) = \frac{ P( X | A )P(A) }{P(X) }$$ In our case, $A$ represents $L_x = 1$ and $X$ is the evidence we have: we observe that $x = 175$. For a particular sample set of parameters for our posterior distribution, $( \mu_0, \sigma_0, \mu_1, \sigma_1, p)$, we are interested in asking "Is the probability that $x$ is in cluster 1 greater than the probability it is in cluster 0?", where the probability is dependent on the chosen parameters. \begin{align} & P(L_x = 1| x = 175 ) \gt P(L_x = 0| x = 175 ) \\\\[5pt] & \frac{ P( x=175 | L_x = 1 )P( L_x = 1 ) }{P(x = 175) } \gt \frac{ P( x=175 | L_x = 0 )P( L_x = 0 )}{P(x = 175) } \end{align} As the denominators are equal, they can be ignored (and good riddance, because computing the quantity $P(x = 175)$ can be difficult). $$ P( x=175 | L_x = 1 )P( L_x = 1 ) \gt P( x=175 | L_x = 0 )P( L_x = 0 ) $$ norm_pdf = stats.norm.pdf p_trace = trace["p"][25000:] prev_p_trace = trace["p"][:25000] x = 175 v = p_trace * norm_pdf(x, loc=center_trace[:, 0], scale=std_trace[:, 0]) > \ (1 - p_trace) * norm_pdf(x, loc=center_trace[:, 1], scale=std_trace[:, 1]) print("Probability of belonging to cluster 1:", v.mean()) Probability of belonging to cluster 1: 0.01062 Giving us a probability instead of a label is a very useful thing. Instead of the naive L = 1 if prob > 0.5 else 0 we can optimize our guesses using a loss function, which the entire fifth chapter is devoted to. Using the MAP to improve convergence If you ran the above example yourself, you may have noticed that our results were not consistent: perhaps your cluster division was more scattered, or perhaps less scattered. The problem is that our traces are a function of the starting values of the MCMC algorithm.
It can be mathematically shown that letting the MCMC run long enough, by performing many steps, the algorithm should forget its initial position. In fact, this is what it means to say the MCMC converged (in practice though we can never achieve total convergence). Hence if we observe different posterior analyses, it is likely because our MCMC has not fully converged yet, and we should not use samples from it yet (we should use a larger burn-in period). In fact, poor starting values can prevent any convergence, or significantly slow it down. Ideally, we would like to have the chain start at the peak of our landscape, as this is exactly where the posterior distributions exist. Hence, if we started at the "peak", we could avoid a lengthy burn-in period and incorrect inference. Generally, we call this "peak" the maximum a posteriori or, more simply, the MAP. Of course, we do not know where the MAP is. PyMC3 provides a function that will approximate, if not find, the MAP location. In the PyMC3 main namespace is the find_MAP function. If you call this function within the context of Model(), it will calculate the MAP which you can then pass to pm.sample() as a start parameter. start = pm.find_MAP() trace = pm.sample(2000, step=pm.Metropolis(), start=start) The find_MAP() function has the flexibility of allowing the user to choose which optimization algorithm to use (after all, this is an optimization problem: we are looking for the values that maximize our landscape), as not all optimization algorithms are created equal. The default optimization algorithm in the function call is the Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm to find the maximum of the log-posterior. As an alternative, you can use other optimization algorithms from the scipy.optimize module. For example, you can use Powell's Method, a favourite of PyMC blogger Abraham Flaxman [1], by calling find_MAP(fmin=scipy.optimize.fmin_powell).
The default works well enough, but if convergence is slow or not guaranteed, feel free to experiment with Powell's method or the other algorithms available. The MAP can also be used as a solution to the inference problem, as mathematically it is the most likely value for the unknowns. But as mentioned earlier in this chapter, this location ignores the uncertainty and doesn't return a distribution. It is still a good idea to decide on a burn-in period, even if we are using find_MAP() prior to sampling, just to be safe. We can no longer automatically discard samples with a burn parameter in the sample() function as we could in PyMC2, but it is easy enough to simply discard the beginning section of the trace just through array slicing. As one does not know when the chain has fully converged, a good rule of thumb is to discard the first half of your samples, sometimes up to 90% of the samples for longer runs. To continue the clustering example from above, the new code would look something like: with pm.Model() as model: start = pm.find_MAP() step = pm.Metropolis() trace = pm.sample(100000, step=step, start=start) burned_trace = trace[50000:] Autocorrelation is a measure of how related a series of numbers is with itself. A measurement of 1.0 is perfect positive autocorrelation, 0 is no autocorrelation, and -1 is perfect negative autocorrelation. If you are familiar with standard correlation, then autocorrelation is just how correlated a series, $x_t$, at time $t$ is with the series at time $t-k$: $$R(k) = Corr( x_t, x_{t-k} ) $$ For example, consider the two series: $$x_t \sim \text{Normal}(0,1), \;\; x_0 = 0$$ $$y_t \sim \text{Normal}(y_{t-1}, 1 ), \;\; y_0 = 0$$ which have example paths like: figsize(12.5, 4) import pymc3 as pm x_t = np.random.normal(0, 1, 200) x_t[0] = 0 y_t = np.zeros(200) for i in range(1, 200): y_t[i] = np.random.normal(y_t[i - 1], 1) plt.plot(y_t, label="$y_t$", lw=3) plt.plot(x_t, label="$x_t$", lw=3) plt.xlabel("time, $t$") plt.legend();
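The quantity $R(k)$ can be estimated from a finite series with np.corrcoef. A short sketch (the random-walk series is built with np.cumsum, which is equivalent to the recursive definition of $y_t$ above):

```python
import numpy as np

def autocorr(series, k):
    # Sample autocorrelation at lag k: correlate the series with
    # a copy of itself shifted by k steps.
    s = np.asarray(series, dtype=float)
    return np.corrcoef(s[:-k], s[k:])[0, 1]

rng = np.random.default_rng(1)
x_t = rng.normal(0, 1, 200)             # independent draws
y_t = np.cumsum(rng.normal(0, 1, 200))  # random walk

print("lag-1 of x_t:", autocorr(x_t, 1))  # should be near 0
print("lag-1 of y_t:", autocorr(y_t, 1))  # should be near 1
```

The independent series forgets its past immediately, so its lag-1 autocorrelation is near zero, while each step of the random walk depends on the previous one, giving autocorrelation near one — exactly the behaviour MCMC traces exhibit to varying degrees.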
http://nbviewer.jupyter.org/github/CamDavidsonPilon/Probabilistic-Programming-and-Bayesian-Methods-for-Hackers/blob/master/Chapter3_MCMC/Ch3_IntroMCMC_PyMC3.ipynb
import json import matplotlib s = json.load( open("styles/qutip_matplotlibrc.json") ) #edit path to json file matplotlib.rcParams.update(s) %pylab inline Populating the interactive namespace from numpy and matplotlib from IPython.display import Image, HTML Being able to plot high-quality, informative figures is one of the necessary tools for working in the sciences today. If your figures don't look good, then you don't look good. Good visuals not only help to convey scientific information, but also help to draw attention to your work. Oftentimes good quality figures and plots play an important role in determining the overall scientific impact of your work. Therefore we will spend some time learning how to create high-quality, publication-ready plots in Python using a Python module called Matplotlib. Image(filename='images/mpl.png',width=700,embed=True) from pylab import * Let us also import numpy so we can use arrays and mathematical functions from numpy import * x=linspace(-pi,pi) y=sin(x) plot(x,y) show() Here, the plot command generates the figure, but it is not displayed until you run show(). If we want, we can also add some labels to the axes and a title to the plot. While we are at it, let's change the color of the line to red, and make it a dashed line. x=linspace(-pi,pi) y=sin(x) plot(x,y,'r--') #make line red 'r' and dashed '--' xlabel('x') ylabel('y') title('sin(x)') show() Here the 'r' stands for red, but we could have used any of the built-in colors: We can also specify the color of a line using the color keyword argument x=linspace(-pi,pi) y=sin(x) plot(x,y,'--',color='0.75') # Here a string from 0->1 specifies a gray value. show() x=linspace(-pi,pi) y=sin(x) plot(x,y,'-',color='#FD8808') # We can also use hex colors if we want. show() The style of the line can be changed from solid: '' or '-', to dashed: '--', dotted '.', dash-dotted: '-.', dots+solid: '.-', or little dots: ':'.
One can also use the 'linestyle' or 'ls' keyword argument to change this style. We can display all of these variations using the subplot function that displays multiple plots in a grid specified by the number of rows, columns, and the number of the current plot. We only need one show() command for viewing all of the plots. To make the plot look good, we can also control the width and height of the figure by calling the figure function using the keyword argument 'figsize' that specifies the width and height of the figure in inches. x=linspace(-pi,pi) y=sin(x) figure(figsize=(12,3)) #This controls the size of the figure subplot(2,3,1) #This is the first plot in a 2x3 grid of plots plot(x,y) subplot(2,3,2)#this is the second plot plot(x,y,linestyle='--') # Demo using 'linestyle' keyword argument subplot(2,3,3) plot(x,y,'.') subplot(2,3,4) plot(x,y,'-.') subplot(2,3,5) plot(x,y,'.-') subplot(2,3,6) plot(x,y,ls=':') # Demo using 'ls' keyword argument. show() If we want to change the width of a line then we can use the 'linewidth' or 'lw' keyword arguments with a float number specifying the linewidth. x=linspace(0,10) y=sqrt(x) figure(figsize=(12,3)) subplot(1,3,1) plot(x,y) subplot(1,3,2) plot(x,y,linewidth=2) subplot(1,3,3) plot(x,y,lw=7.75) show() If we want to plot multiple lines on a single plot, we can call the plot command several times, or we can use just a single plot command by entering the data for multiple lines simultaneously.
x=linspace(0,10) s=sin(x) c=cos(x) sx=x*sin(x) figure(figsize=(12,3)) subplot(1,2,1) plot(x,s,'b') #call three different plot functions plot(x,c,'r') plot(x,sx,'g') subplot(1,2,2) plot(x,s,'b',x,c,'r',x,sx,'g') #combine multiple lines in one call to plot show() x=linspace(-pi,pi,100) figure(figsize=(12,3)) subplot(1,3,1) plot(x,sin(x),lw=2) subplot(1,3,2) plot(x,sin(x),lw=2,color='#740007') xlim([-pi,pi]) #change bounds on x-axis to [-pi,pi] subplot(1,3,3) plot(x,sin(x),'^',ms=8,color='0.8') xlim([-1,1]) #change bounds on x-axis to [-1,1] ylim([-0.75,0.75]) #change bounds on y-axis to [-0.75,0.75] show() Now that we know how to make good-looking plots, it would certainly be nice if we knew how to save these figures for use in a paper/report, or perhaps for posting on a webpage. Fortunately, this is very easy to do. If we want to save our previous figure then we need to call the savefig function. x=linspace(-pi,pi,100) figure(figsize=(12,3)) subplot(1,3,1) plot(x,sin(x),lw=2) subplot(1,3,2) plot(x,sin(x),lw=2,color='#740007') xlim([-pi,pi]) subplot(1,3,3) plot(x,sin(x),'^',ms=8,color='0.8') xlim([-1,1]) ylim([-0.75,0.75]) savefig('axes_example.png') #Save the figure in PNG format in same directory as script savefig saves the figure with the name and extension that is given in the string. The name can be whatever you like, but the extension, .png in this case, must be a format that Matplotlib recognizes. In this class we will only use the Portable Network Graphics (.png) and PDF (.pdf) formats. In addition to lines and dots, Matplotlib allows you to use many different shapes to represent the points on a graph. In Matplotlib, these shapes are called markers and, just like lines, their color and size can be controlled. There are many basic types of markers, so here we will demonstrate just a few important ones: '*': star, 'o': circle, 's': square, and '+': plus, by evaluating the Airy functions.
from scipy.special import airy  # Airy functions are in the SciPy special module

x = linspace(-1,1)
Ai, Aip, Bi, Bip = airy(x)
plot(x,Ai,'b*', x,Aip,'ro', x,Bi,'gs', x,Bip,'k+')
show()

We can also change the size of the markers using the 'markersize' or 'ms' keyword argument, and their face color using 'markerfacecolor' or 'mfc'.

So far we have made use of only the plot function for generating 2D figures. However, there are several other functions for generating different kinds of 2D plots. A collection of the many different types can be found in the Matplotlib gallery; here we will highlight only the more useful functions.

x = np.linspace(-1, 1., 100)
figure(figsize=(6,6))
subplot(2,2,1)
y = x + 0.25*randn(len(x))
scatter(x,y, color='r')  # plot a collection of (x,y) points
title('A scatter Plot')
subplot(2,2,2)
n = array([0,1,2,3,4,5])
bar(n, n**2, align="center", width=1)  # align the bars over the x-numbers, with width=dx
title('A bar Plot')
subplot(2,2,3)
fill_between(x, x**2, x**3, color="green")  # fill between x**2 and x**3 with green
title('A fill_between Plot')
subplot(2,2,4)
title('A hist Plot')
r = random.randn(50)  # generate some random numbers
hist(r, color='y')    # create a histogram of the random values
show()

The color and size of the elements can all be controlled in the same way as with the usual plot function.

MatplotLib Gallery: a gallery of figures showing what Matplotlib can do.
Matplotlib Examples: a long list of examples demonstrating how to use Matplotlib for a variety of plotting tasks.
Guide to 2D & 3D Plotting: a guide for plotting in Matplotlib by Robert Johansson.
https://nbviewer.ipython.org/github/qutip/qutip-notebooks/blob/master/python/Matplotlib_Plotting.ipynb
Easy-to-use game AI algorithms (Negamax etc.)

Project description

EasyAI (full documentation here) is a pure-Python artificial intelligence framework for two-player abstract games such as Tic Tac Toe, Connect 4, Reversi, etc. It makes it easy to define the mechanisms of a game, and to play against the computer or solve the game. Under the hood, the AI is a Negamax algorithm with alpha-beta pruning and transposition tables, as described on Wikipedia.

Installation

If you have pip installed, type this in a terminal:

sudo pip install easyAI

Otherwise, download the source code (for instance on Github), unzip everything into one folder, and in this folder, in a terminal, type:

sudo python setup.py install

Additionally, you will need to install Numpy to be able to run some of the examples.

A quick example

Let us define the rules of a game and start a match against the AI:

from easyAI import TwoPlayersGame, Human_Player, AI_Player, Negamax

class GameOfBones( TwoPlayersGame ):
    """ In turn, the players remove one, two or three bones from a
    pile of bones. The player who removes the last bone loses. """

    def __init__(self, players):
        self.players = players
        self.pile = 20    # start with 20 bones in the pile
        self.nplayer = 1  # player 1 starts

    def possible_moves(self): return ['1','2','3']
    def make_move(self, move): self.pile -= int(move)  # remove bones
    def win(self): return self.pile <= 0  # opponent took the last bone?
    def is_over(self): return self.win()  # game stops when someone wins
    def show(self): print "%d bones left in the pile" % self.pile
    def scoring(self): return 100 if game.win() else 0  # for the AI

# Start a match (and store the history of moves when it ends)
ai = Negamax(13)  # the AI will think 13 moves in advance
game = GameOfBones( [ Human_Player(), AI_Player(ai) ] )
history = game.play()

Result:

20 bones left in the pile
Player 1 what do you play ?
3

Move #1: player 1 plays 3 :
17 bones left in the pile

Move #2: player 2 plays 1 :
16 bones left in the pile

Player 1 what do you play ?

Solving the game

Let us now solve the game:

from easyAI import id_solve
r, d, m = id_solve(GameOfBones, ai_depths=range(2,20), win_score=100)

We obtain r=1, meaning that if both players play perfectly, the first player to play can always win (-1 would have meant always lose); d=10, which means that the win will come in ten moves (i.e. five moves per player) or less; and m='3', which indicates that the first player's first move should be '3'.

These computations can be sped up using a transposition table, which will store the situations encountered and the best move for each:

tt = TT()
GameOfBones.ttentry = lambda game: game.pile  # key for the table
r, d, m = id_solve(GameOfBones, range(2,20), win_score=100, tt=tt)

After these lines are run, the variable tt contains a transposition table storing the possible situations (here, the possible sizes of the pile) and the optimal move to perform in each. With tt you can play perfectly without thinking:

game = GameOfBones( [ AI_Player( tt ), Human_Player() ] )
game.play()  # you will always lose this game :)

Contribute!

EasyAI is open source software originally written by Zulko and released under the MIT licence. It could do with some improvements, so if you are a Python/AI guru maybe you can contribute through Github. Some ideas: AI algorithms for incomplete-information games, better game-solving strategies, (efficient) use of databases to store moves, AI algorithms using parallelisation. For troubleshooting and bug reports, the best for now is to ask on Github.
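The Negamax-with-alpha-beta idea that EasyAI uses under the hood can be illustrated in a few lines. This is not EasyAI's implementation — just a minimal sketch applied to the same pile-of-bones game, with helper names made up for the example:

```python
# Minimal negamax with alpha-beta pruning for the pile-of-bones game.
# The player forced to take the last bone loses.

def negamax(pile, alpha=-1, beta=1):
    """Return the game value from the point of view of the player to move:
    +1 if that player can force a win, -1 otherwise."""
    if pile <= 0:
        # The previous player took the last bone, so the player
        # to move now has already won.
        return 1
    best = -1
    for move in (1, 2, 3):
        score = -negamax(pile - move, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:  # alpha-beta cutoff
            break
    return best

def best_move(pile):
    """Pick the move with the highest negamax value."""
    return max((1, 2, 3), key=lambda m: -negamax(pile - m))
```

Consistent with the solved game above, `best_move(20)` is 3: taking three bones leaves 17, which is a losing pile size for the opponent.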
https://pypi.org/project/easyAI/
Python is one of the most easy-to-read and easy-to-write programming languages of all time. Over the years, the popularity of Python has only increased, and it is widely used in web application development, scripting, creating games, scientific computing, etc. Flask is a Python web application framework which is gaining popularity due to its ease of use for Python beginners. In this tutorial, we'll have a look at Eve, a REST API building framework based on Flask, MongoDB and Redis.

From the official docs:

Powered by Flask, MongoDB, Redis and good intentions, Eve allows you to effortlessly build and deploy highly customizable, fully featured RESTful Web Services.

What We'll Be Creating

In this tutorial, we'll see how to build REST APIs using the Eve framework. Next, using AngularJS we'll design the front end for a simple app and make it functional by consuming the REST APIs built using Eve. In this tutorial, we'll implement the following functionality:

- Create User API
- Validate User API
- Add Items API
- Delete Items API
- Update Items API

Getting Started

Installation

We'll be using pip to install Eve:

pip install eve

We'll be using MongoDB as the database. Have a look at the official docs for installation instructions for your system.

Creating the Basic API

Create a project folder called PythonAPI. Navigate to PythonAPI and create a file called api.py. Inside api.py, import Eve and create an Eve object:

from eve import Eve
app = Eve()

Next, run the app when the program is executed as a main program:

from eve import Eve
app = Eve()

if __name__ == '__main__':
    app.run()

Run your MongoDB using the following command:

mongod --dbpath=<PATH-TO>/data/db/

As you can see, we need to specify a path for the db files. Simply create data/db in your file system and run the above command. Along with an instance of MongoDB running, Eve requires a configuration file with info about the API resources.
So in the PythonAPI folder, create another file called settings.py and add the following code:

DOMAIN = {'user': {}}

The above code informs Eve that a resource called user is available. Save all the files and run api.py:

python api.py

The API should be online as shown:

We'll be using the Postman REST Client to send requests to the APIs. It's free and can be installed with a simple click. Once done with the installation, launch the app, enter the API URL and click Send. You should see the response as shown:

Since we haven't called any specific API resource, it shows all the available resources. Now, try calling the user resource and you should get the response specific to the user.

Create and Validate User API

Create User API

We'll start by building an API to create or register a user for our application. The user will have certain fields like Username, Password and Phone Number. So first we need to define a schema for a user. A schema defines the fields and the data types of the key fields. Open up settings.py and modify the DOMAIN by defining a schema as shown:

DOMAIN = {
    'user': {
        'schema': {
            'firstname': {
                'type': 'string'
            },
            'lastname': {
                'type': 'string'
            },
            'username': {
                'type': 'string',
                'unique': True
            },
            'password': {
                'type': 'string'
            },
            'phone': {
                'type': 'string'
            }
        }
    }
}

As you can see in the above code, we have defined the key fields needed to create a user, with each field's data type defined in the schema. Save the changes and execute api.py. From the Postman client, try to do a POST request with the required parameters to the user endpoint as shown:

On a POST request to user, it threw a 405 Method Not Allowed error. By default, Eve accepts only GET requests. If we want to use any other method, we need to define it explicitly. Open settings.py and define the resource methods as shown:

RESOURCE_METHODS = ['GET', 'POST']

Save the changes and execute api.py.
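Eve delegates the actual schema checking to a validation library (Cerberus), so you never write it yourself. Purely as an illustration of what the schema above promises, here is a hand-rolled check in plain Python — this is not Eve's validation code, and the helper name is made up:

```python
# Hand-rolled illustration of what the user schema enforces.
# Eve itself performs this validation internally; this just shows the idea.

USER_SCHEMA = {
    'firstname': {'type': 'string'},
    'lastname':  {'type': 'string'},
    'username':  {'type': 'string', 'unique': True},
    'password':  {'type': 'string'},
    'phone':     {'type': 'string'},
}

def validate(document, schema, existing_usernames=()):
    """Return a list of validation errors (an empty list means valid)."""
    errors = []
    for field, value in document.items():
        if field not in schema:
            errors.append(f"unknown field: {field}")
        elif schema[field]['type'] == 'string' and not isinstance(value, str):
            errors.append(f"{field} must be a string")
    # 'unique': True means no two users may share a username.
    if document.get('username') in existing_usernames:
        errors.append("username must be unique")
    return errors

print(validate({'username': 'johndoe', 'phone': '1234567890'}, USER_SCHEMA))
```

A document with an unknown field, a non-string value, or a duplicate username would come back with a non-empty error list, which is roughly what produces Eve's 422 validation responses.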
Now try again to POST to user and you should get the response below:

As you can see, the POST request was successful. We haven't defined the database configurations in settings.py, so Eve completed the request using the running instance of MongoDB. Let's log in to the MongoDB shell and see the newly created record. With the MongoDB instance running, start the mongo shell:

mongo

Once inside the mongo shell, list all the available databases:

show databases;

There should be an eve database. Switch to the eve database:

use eve;

Execute the show command to list the tables inside the eve database:

show tables;

The listed tables should include a table called user. List the records from the user table using the following command:

db.user.find()

Here are the selected records from the user table:

Validate User API

Next we'll create an API to validate an existing user. Normally, a GET request to the user endpoint gives out the details of all registered users from the database. We need to implement two things here. First, we need to authenticate a user using username and password, and second, we need to return the user's details from the database on successful authentication.

In order to get details based on the username, we need to add an additional lookup field to the user DOMAIN in settings.py:

'additional_lookup': {
    'url': 'regex("[\w]+")',
    'field': 'username',
}

As seen in the above code, we have added a lookup field for username. Now when a GET request is sent to the user endpoint with a <username>, it will return the details of the user with that particular username. When making a request for a particular user, we'll also send the username and password for authentication. We'll be doing basic authentication to verify a particular user based on username and password. First, we need to import the BasicAuth class from Eve.
Create a class called Authenticate to implement the authentication, as shown:

from eve.auth import BasicAuth

class Authenticate(BasicAuth):
    def check_auth(self, username, password, allowed_roles, resource, method):

Now, when the resource is user and the request method is GET, we'll authenticate the user. On successful authentication, the details of the user named in the API endpoint will be returned. We'll also restrict user creation by requiring a particular username and password. So, if the method is POST and the API endpoint is user, we'll check and validate the username and password. Here is the complete Authenticate class:

class Authenticate(BasicAuth):
    def check_auth(self, username, password, allowed_roles, resource, method):
        if resource == 'user' and method == 'GET':
            user = app.data.driver.db['user']
            user = user.find_one({'username': username, 'password': password})
            if user:
                return True
            else:
                return False
        elif resource == 'user' and method == 'POST':
            return username == 'admin' and password == 'admin'
        else:
            return True

We need to pass the Authenticate class name when initiating the API, so modify the API initiation code as shown:

if __name__ == '__main__':
    app = Eve(auth=Authenticate)
    app.run()

Save all the changes and execute api.py. Try to send a basic auth request with a username and password from Postman to the user endpoint (replace username with any other existing username). On successful authentication, you should get the user details in the response as shown:
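On the wire, the credentials Postman sends travel in an Authorization header carrying base64(username:password); Eve's BasicAuth class decodes that header before calling check_auth with the username and password. A minimal standard-library sketch of building and decoding such a header — the credentials here are placeholders, not real accounts:

```python
import base64

def basic_auth_header(username, password):
    """Build the Authorization header value a client such as Postman
    sends for HTTP basic authentication."""
    token = base64.b64encode(f"{username}:{password}".encode("ascii"))
    return "Basic " + token.decode("ascii")

def decode_basic_auth(header_value):
    """Recover (username, password) from a basic auth header value."""
    assert header_value.startswith("Basic ")
    decoded = base64.b64decode(header_value[len("Basic "):]).decode("ascii")
    username, _, password = decoded.partition(":")
    return username, password

header = basic_auth_header("johndoe", "secret")
print(header)                      # Basic am9obmRvZTpzZWNyZXQ=
print(decode_basic_auth(header))   # ('johndoe', 'secret')
```

Note that base64 is an encoding, not encryption — which is why basic auth should only ever be used over HTTPS.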
Save the changes and try to do a POST request to as shown: Delete Item API For deleting an item created by an user, all we need to do is call the item endpoint /item_id. But simply calling a DELETE request won't delete the item. In order to delete an item, we also need to provide an _etag related to a particular item. Once item id and _etag match, the item is deleted from the database. Here is how the DELETE method is called in item endpoint. Update Item API The Update API is similar to the Delete API. All we have to do is send a PATCH request with the item id and _etag and the form fields which need to be updated. Here is how the item details are updated: Conclusion In this tutorial, we saw how to get started with creating APIs using the Python EVE framework. We created some basic APIs for CRUD operations which we'll use in the next part of the series while creating an AngularJS App. Source code from this tutorial is available on GitHub. Do let us know your thoughts in the comments below! Envato Tuts+ tutorials are translated into other languages by our community members—you can be involved too!Translate this post
https://code.tutsplus.com/tutorials/building-rest-apis-using-eve--cms-22961
What's New In Pylint 1.9

Release: 1.9
Date: 2018-05-15

Summary -- Release highlights

None so far.

New checkers

A new Python 3 checker was added to warn about the removed operator.div function.

A new Python 3 checker was added to warn about accessing functions that have been moved from the urllib module into corresponding subpackages, such as urllib.request:

from urllib import urlencode

Instead, the previous code should use urllib.parse or six.moves to import the module in a Python 2 and 3 compatible fashion:

from six.moves.urllib.parse import urlencode

To have this working on Python 3 as well, please use the six library:

six.reraise(Exception, "value", tb)

A new check was added to warn about using unicode raw string literals. This is a syntax error in Python 3:

a = ur'...'

Added a new deprecated-sys-function check, emitted when accessing removed sys members.

Added an xreadlines-attribute check, emitted when the xreadlines() attribute is accessed on a file object.

Added two new Python 3 porting checks, exception-escape and comprehension-escape. These two are emitted whenever pylint detects that a variable defined in one of the said blocks is used outside of the given block; on Python 3 these values are deleted:

try:
    1/0
except ZeroDivisionError as exc:
    ...
print(exc)  # This will raise a NameError on Python 3

[i for i in some_iterator if some_condition(i)]
print(i)  # This will raise a NameError on Python 3

Other Changes

defaultdict and subclasses of dict are now handled by the dict-iter-* checks. That means that the following code will now emit warnings when iteritems and friends are accessed:

some_dict = defaultdict(list)
...
some_dict.iterkeys()

Enum classes no longer trigger too-few-public-methods. Special methods now count towards too-few-public-methods, and are considered part of the public API. They are still not counted towards the number of methods for too-many-public-methods.
docparams allows abstract methods to provide returns documentation even if the default implementation does not return anything. They also no longer need to document raising a NotImplementedError.
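The dict-iter-* checks above flag Python-2-only dict methods; a quick standard-library illustration (not taken from the Pylint docs) of why they matter when porting:

```python
from collections import defaultdict

some_dict = defaultdict(list)
some_dict["a"].append(1)

# Python 2 code would call some_dict.iterkeys() / .iteritems();
# those methods no longer exist on Python 3:
assert not hasattr(some_dict, "iterkeys")

# On Python 3, keys()/items() already return lazy view objects,
# so they are the portable replacement:
keys = list(some_dict.keys())
items = list(some_dict.items())
print(keys)    # ['a']
print(items)   # [('a', [1])]
```

Code written against keys()/items() runs unchanged on both major versions, which is exactly the migration these warnings push you towards.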
https://pylint.pycqa.org/en/latest/whatsnew/1.9.html
Hi All,

I've got this great script which brings my text on through random text, but because I've bought the ActionScript I'm not sure how to go to the next frame after the ActionScript has played out. If anyone knows, that would be great.

Thanks,
Kate

I don't know exactly what you want to do, but if I understood correctly you could:

1. Create a new layer (name it: actions)
2. Create a new layer (name it: labels)
3. Create a new layer (for the "next thing" to appear on)
4. Insert a keyframe on the timeline where you want to continue after the script has played, on each of these layers
5. Open your actions, go to the keyframe on the actions layer and type: stop();
   Go to the keyframe on the labels layer and give it a label name (in the Properties window).
   Go to the keyframe on the "next thing" layer and put anything you like.

Now on frame 1 of your actions layer put in the following code:

"your movie clip instance name".gotoAndPlay("your labels name");

To the community: correct me if this isn't right.

Thank you for answering me so quickly. Not sure if I did it right, but it didn't work. Maybe I haven't explained it correctly, but I just need something that will fit onto this script:

#include "wmRandomCharacters.as"
effect = new RandomCharactersEffect();
effect.setupEffect(txtfield);
effect.setText("Welcome to the Canary Wharf Crossrail Station induction Please switch mobile phones to OFF/Silent")

so once it's finished rolling it goes onto the next frame.

Kate
https://forums.adobe.com/thread/464908
I have a .csv file:

item_name,item_cost,item_priority,item_required,item_completed
item 1,11.21,2,r
item 2,411.21,3,r
item 3,40.0,1,r,c

and I want to mark the chosen required item with an x, writing it back to the .csv:

item_name,item_cost,item_priority,item_required,item_completed
item 1,11.21,2,x
item 2,411.21,3,r
item 3,40.0,1,r,c

This is what I have so far:

print("Enter the item number:")
line_count = 0
marked_item = int(input())

with open("items.csv", 'r') as f:
    reader = csv.DictReader(f, delimiter=',')
    for line in reader:
        if line["item_required"] == 'r':
            line_count += 1
            if marked_item == line_count:
                new_list = line
                print(new_list)
                for key, value in new_list.items():
                    if value == "r":
                        new_list['item_required'] = "x"
                print(new_list)

with open("items.csv", 'a') as f:
    writer = csv.writer(f)
    writer.writerow(new_list.values())

There are several problems here:

- You use DictReader, which is good for reading data, but not as good for reading and then writing data back to the original file, since dictionaries do not ensure column order (unless you don't care, but most of the time people don't want columns to be swapped). I just read the title row, find the index of the target column, and use this index in the rest of the code (no dicts = faster).
- Open the output file with newline='' or you get a lot of blank lines (Python 3), or with "wb" (Python 2).
- You append to the file instead of rewriting it (changing r to x at a given row).

Here's the fixed code taking all the aforementioned remarks into account.

EDIT: added the feature you requested afterwards: add a c after the x if not already there, extending the row if needed.

import csv

line_count = 0
marked_item = int(input())

with open("items.csv", 'r') as f:
    reader = csv.reader(f, delimiter=',')
    title = next(reader)                 # title row
    idx = title.index("item_required")   # index of the column we target
    lines = []
    for line in reader:
        if line[idx] == 'r':
            line_count += 1
            if marked_item == line_count:
                line[idx] = 'x'
                # add 'c' after x (or replace if the column exists)
                if len(line) > idx + 1:  # check len
                    line[idx+1] = 'c'
                else:
                    line.append('c')
        lines.append(line)

with open("items.csv", 'w', newline='') as f:
    writer = csv.writer(f, delimiter=',')
    writer.writerow(title)
    writer.writerows(lines)
CC-MAIN-2018-17
refinedweb
344
57.67
#include <testsoon.hpp>
#include <iostream>

TEST(compiler_check) {
  Equals(1, 1); // let's hope it works!!
}

TEST_REGISTRY;

int main() {
  testsoon::default_reporter rep(std::cout);
  testsoon::tests().run(rep);
}

In order to compile this ... important test, you first need to make sure that a recent testsoon.hpp is in your include path. It can be found in the include/ directory of the distribution. You may just copy it into your project folder. No other installation is required.

If you compile and run this program, you should see something like this on your console:

"simple.cpp" : .

1 tests, 1 succeeded, 0 failed.

I guess this means that we can trust our compiler a little bit. Or so it seems. Seriously, this is our first successful test. Let me explain what the code above actually means. I shall do this by thoroughly commenting the code.

// You really can guess why we do this.
#include <testsoon.hpp>
#include <iostream>

// Declare a simple test with name "compiler_check". Note that no quotes are
// required here.
TEST(compiler_check) {
  // Check whether the two numbers are equal.
  Equals(1, 1);
}

// This line is required _once_ per executable. It ensures that if the code
// compiles, everything works smoothly. The principle here: no surprises.
TEST_REGISTRY;

int main() {
  // Declare a reporter. The default_reporter should be a sensible setting.
  // That's why it's the default.
  // We need to pass it a reference to a std::ostream object to print to,
  // usually just std::cout.
  testsoon::default_reporter rep(std::cout);

  // Run all tests.
  testsoon::tests().run(rep);
}

So now let's play around and test something different: are 1 and 2 equal? Change the check as follows:

Equals(1, 2);

Now, the output should look something like this:

"simple.cpp" : [F=3.4]

Error occured in test "compiler_check" in "simple.cpp" on line 3 in check on line 4.
Problem: not equal: 1 and 2
Data:
  1
  2

1 tests, 0 succeeded, 1 failed.

Obviously, both numbers differ. Let's look at the first strange thing: "[F=3.4]".
This little thing means that there was a failure in the test on line 3 (simple.cpp); to be exact, the check on line 4 failed. (I used the version without comments.) The same information is represented below it, with additional detail. "Data" are the two parameter values to Equals. This is necessary because in other situations the "problem" might not be "not equal: 1 and 2" but "not equal: a and b", where a and b are variables. In this case, "data" would contain the values of both variables in (readable) string representation.

In most circumstances you will structure a test executable around a single file containing TEST_REGISTRY; and main().

TEST_GROUP(group_one) {
  TEST() {
    Check(!"unnamed test in group_one");
  }
  TEST(named) {
    Check(!"named test in group_one");
  }
}

TEST_GROUP(group_two) {
  TEST() {
    Check(!"only test in group_two");
  }
  TEST_GROUP(nested) {
    TEST() {
      Check(!"except if you count this test in a nested group");
    }
  }
}

The Checks will all fail because they are passed a false value: ! applied to a non-null pointer value is always false.

XTEST() {}                      // unnamed test
XTEST((name, "my test")) {}     // named test
XTEST((n, "my other test")) {}  // a named test, too
XTEST((parameter1_name, parameter1_value) (parameter2_name, parameter2_value))
  // illegal but demonstrates how to use multiple parameters

XTEST enables you to use named parameters. Some parameters have short and long names, like name, alias n. This syntax does not make much sense so far, but you will see how it is useful later.

XTEST((name, "my test") (fixture, my_fixture_class)) {
  fixture.do_something();  // the fixture is passed through the variable with the same name
}

XTEST((n, "my test") (f, my_fixture_class)) {
  fixture.do_something();
}

FTEST(my test, my_fixture_class) {
  fixture.do_something();
}

We recommend you to choose a style and stick to it (mostly). But keep in mind that XTEST is the most powerful.

Group fixtures are declared with a typedef named group_fixture_t inside the test group.
Here come the examples:

TEST_GROUP(group1) {
  typedef fixture_for_group1 group_fixture_t;

  XTEST((group_fixture, 1)) {
    group_fixture.do_something_better_than_i_can_imagine();
  }

  XTEST((n, "differently named") (gf, 1)) {
    group_fixture.hey_hey_hey();
  }

  GFTEST(and yet another name) {
    group_fixture.call_this_method();
  }
}

Generators. Parameter syntax: (generator, (class)(p1)(p2)...(pN)), where class is the name of the generator class and p1 to pN are the parameters to be passed to the constructor. The generated value is passed through the value variable (const). Example:

XTEST((generator, (testsoon::range_generator)(0)(9))) {
  Check(0 <= value);
  Check(value < 9);
}

range. Parameter syntax: (range, (type, begin, end)).

values. Works like array or testsoon::array_generator but is way more convenient. It currently depends on Boost.Assign though. Parameter syntax: (values, (type)(value1)(value2)...(valueN)). Example:

XTEST((values, (std::string)("x")("y")("z"))) {
  Check(value == "x" || value == "y" || value == "z");
}

array. Parameter syntax: (array, (element_type, array_name)). Example:

char const * const array[] = {  // not problematic to name the array like this!
  "abc",
  "xyz",
  "ghlxyfxt"
};

XTEST((array, (char const *, array))) {
  Not_equals(value[0], '?');
}

2tuples. Parameter syntax: (2tuples, (element0_type, element1_type)(element0_1, element1_1)(element0_2, element1_2)...(element0_N, element1_N)). The type of value will be boost::tuple<element0_type, element1_type>. You should also include <boost/tuple/tuple_io.hpp>. Example:

#include <boost/tuple/tuple_io.hpp>

// Check if strlen works properly ;-)
XTEST((2tuples, (char const *, std::size_t)("hey", 3)("hallo", 5))) {
  Equals(std::strlen(value.get<0>()), value.get<1>());
}
http://testsoon.sourceforge.net/tutorial.html
Graham Dumpleton wrote:

> The macros supposedly try and make it type safe, so haven't quite worked
> out how you are meant to use them yet. In part it looks like compile time
> binding is required, which would be an issue with Python. What you might
> be able to do though is write a little C based Python module which did
> the lookup and calling of "ssl_var_lookup()" for you by going direct to
> the Apache runtime library calls.

May work. :-)

See if you can get the attached code (2 files) to work for you. You will need to modify the setup.py to point to the appropriate include and library directories and the correct name for the APR libraries on your platform.

The code all compiles, but since I don't have mod_ssl set up I get back nothing. The first argument to each method must be the request object. It will crash if it isn't, as I haven't been able to add a check that it is in fact a request object, because that is undefined while not linking against the mod_python .so.

The handler I was using was as follows, but you should be able to adapt it:

import _mp_mod_ssl
import vampire

class _Object:
    def is_https(self, req):
        return _mp_mod_ssl.is_https(req)

    def var_lookup(self, req, name):
        return _mp_mod_ssl.var_lookup(req, name)

handler = vampire.Publisher(_Object())

Let me know how you go. Is an interesting problem, which is why I decided to play with it when I should have been doing real work. :-)

Graham

-------------- next part --------------
A non-text attachment was scrubbed...
Name: _mp_mod_ssl.c
Type: application/octet-stream
Size: 2075 bytes
Desc: not available

-------------- next part --------------
A non-text attachment was scrubbed...
Name: setup.py
Type: application/octet-stream
Size: 606 bytes
Desc: not available
http://modpython.org/pipermail/mod_python/2005-May/018164.html
Problem description

I have an executable which repeatedly posts data to an HTTPS endpoint, using libcurl with c-ares. This is getting occasional DNS resolution timeouts on some clients (not all). If I run the equivalent command in command-line curl, I never see any timeouts. What makes this even more confusing is that the host is explicitly specified in /etc/hosts, so there shouldn't be any DNS resolution required.

The error from libcurl (with verbose mode) is:

* Adding handle: conn: 0xcbca20
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 88 (0xcbca20) send_pipe: 1, recv_pipe: 0
* Resolving timed out after 2002 milliseconds
* Closing connection 88

My libcurl executable is sending 2-3 queries a second, and I see this error about once every 300 requests. Using command-line curl, I have run 10000 queries without a single timeout. Can anyone suggest anything I can try to resolve these errors from libcurl? Are there any settings which I need to add to my libcurl setup, or system configuration I might be missing? I wasn't sure whether to put this in Stack Overflow, Server Fault, or Ask Ubuntu; apologies if it is in the wrong place. Thanks for your time!

More detailed information

The client is Ubuntu 12.04, 64-bit. The same problem has been observed on several clients, all with the same OS. Usernames/passwords/urls have been obfuscated in the following snippets.
Command-line curl tester (using v 7.22.0):

while true; do curl -v -u username:password "" -X POST --data "a=x&b=y" >> /tmp/commandLine.log 2>&1; sleep 0.1; done &

Libcurl source code (using curl 7.30.0, with c-ares 1.10.0):

#include <curl/curl.h>
#include <unistd.h>
#include <string>
#include <iostream>

using namespace std;

int main(int argc, char** argv)
{
    while (1) {
        // Initialise curl
        CURL *curl = curl_easy_init();

        // Set endpoint
        string urlWithEndpoint = "";
        curl_easy_setopt(curl, CURLOPT_URL, urlWithEndpoint.c_str());

        // Set up username and password for request
        curl_easy_setopt(curl, CURLOPT_USERPWD, "username:password");

        // Append POST data specific stuff
        string postData = "a=x&b=y";
        long postSize = postData.length();
        curl_easy_setopt(curl, CURLOPT_POSTFIELDSIZE, postSize);
        curl_easy_setopt(curl, CURLOPT_COPYPOSTFIELDS, postData.c_str());

        // Set timeouts
        curl_easy_setopt(curl, CURLOPT_TIMEOUT, 10);
        curl_easy_setopt(curl, CURLOPT_CONNECTTIMEOUT, 2);
        curl_easy_setopt(curl, CURLOPT_DNS_CACHE_TIMEOUT, 60);

        cout << endl << endl << "=========================================================" << endl << endl;
        cout << "Making curl request to " << urlWithEndpoint << " (POST size " << postSize << "B)" << endl;

        // Set curl to log verbose information
        curl_easy_setopt(curl, CURLOPT_VERBOSE, 1);

        // Perform the request
        CURLcode curlRes = curl_easy_perform(curl);

        // Handle response
        bool success = false;
        if (curlRes == CURLE_OK) {
            long httpCode;
            curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &httpCode);
            success = (httpCode == 200);
            cout << "Received response " << httpCode << endl;
        } else if (curlRes == CURLE_OPERATION_TIMEDOUT) {
            cout << "Received timeout" << endl;
        } else {
            cout << "CURL error" << endl;
        }
        curl_easy_cleanup(curl);

        if (success) {
            cout << "SUCCESS! (" << time(0) << ")" << endl;
            usleep(0.1 * 1e6);
        } else {
            cout << "FAILURE!!!! (" << time(0) << ")" << endl;
            usleep(10 * 1e6);
        }
    }
}
https://serverfault.com/questions/864811/dns-errors-in-libcurl-c-ares-but-not-in-command-line-curl/864983
Need to add this code to my website to hide my email from spam bots

Discussion in 'Javascript' started by fb3003@g
http://www.thecodingforums.com/threads/need-to-add-this-code-to-my-website-to-hide-my-email-from-spam-bots.925484/
How to Make a Horror Movie With Low Budget Special Effects

Everyone wants to make a movie, and why not? It's an exciting process, and the very idea that you might someday be discovered and become rich and famous makes lots of people work their butts off to become the next big filmmaker. While the chances of fame and fortune are actually slim to none, making a low-budget horror film can still be a lot of fun.

Instructions

1. Come up with a simple but unique idea. You want something familiar, but with a twist. For a low-budget film to stand out, it has to have something special and different. Develop your idea further and write the actual script. This will take some time, but if you stay focused on it and commit to the project, you'll find it's actually a fun process. If you're really terrible at writing, you can hire someone to write your idea for you, or simply look for already-written scripts and pick one that you'd like to produce (after you pay the writer for his work). Break down the script, then budget it out. Figure out how this movie is getting paid for. Do you have a large savings account you plan on using? Do you have investors lined up? If you need investors, put together a film-proposal package. Include a synopsis of the film, the budget breakdown and a letter of intent expressing why you want to make this movie and what you plan on doing with it, as well as a proposed investor's return (what your benefactors can expect in return for their investment). If the investors don't pan out, there are always loans, product-placement sponsorships and other ways of securing funding.

2. Set up auditions and have actors read for the roles in the script. Conduct callbacks with your favorites and then select your cast. Have each cast member sign a contract stating his agreed-upon compensation and approximate work days. Then secure locations for your movie. Once these issues are taken care of, it's time to plan the shooting schedule.
Use scheduling software to help you get the most shots out of one location in one day, to avoid paying more than you have to. Hire a full crew. Since yours is a low-budget picture, you might consider recruiting from a local film school. Students often want experience, and will work for a title credit and a copy of the film.

3. Design all your special effects. Decide what you want to see, then figure out how you're going to make that happen. Draw everything out on paper or put it in storyboard form. This will ensure that you don't do more work than necessary or spend too much on a simple effect or shot.

4. Make sugar glass for breaking-glass effects. Mix 1 part liquid glucose with 2 parts water and 3-1/2 parts sugar. Stir over medium heat until thoroughly mixed together. Pour into a mold to create the window, vase, cup or other glass item that will be breaking.

5. Make fake blood. Stir together 1 part water with 3 parts corn syrup, then add in red food coloring. To create oozing, "wet" blood, add some chocolate syrup into the mixture; this will create a realistic brown tone in the blood. To thicken the mixture to resemble dried blood, add in 1 or 2 tbsp. of corn starch. This will create blood clots or clumps.

6. Use eyeliner, eye shadow and lipstick to create gouges and cuts on the skin. Start with the outline of the wound, penciling in the lines with a brown eyeliner. Then fill in the center with a red lipstick. Cover with finishing powder. After that, add discoloration to the edges with brown, green and light purple eye shadow. To create a "just-wounded" look, apply a thin layer of clear lip balm or Vaseline.

7. Hire specialists for other special effects as needed. Again, film students may be your best bet, as they are willing to work for free and many can create some pretty amazing effects.

8. Shoot your film. When all the footage you need is in the can, begin postproduction and add in any computer-animated special effects you may have planned.
Edit the film completely, adding music and credits. Show it to your friends, family and complete strangers. Get their feedback. If needed, make changes, re-edit the film or re-shoot scenes. Create your final copy and start promoting it. Enter it into film festivals and try to get local theaters to screen it. Good luck!

Tips & Warnings

Don't attempt special effects that are out of your budget or beyond your skill level.
http://www.ehow.com/how_4464450_make-horror-movie-low-budget.html
Introduction

In this article I will be kicking off a series of articles describing the often forgotten about methods of the Java language's base Object class. Below are the methods of the base Java Object, which are present in all Java objects due to the implicit inheritance of Object. Links to each article of this series are included for each method as the articles are published. In the sections that follow I will be describing what these methods are, their base implementations, and how to override them when needed. The focus of this first article is the toString() method, which is used to give a string representation that identifies an object instance and conveys its content and/or meaning in human readable form.

The toString() Method

At first glance the toString() method may seem like a fairly useless method and, to be honest, its default implementation is not very helpful. By default the toString() method will return a string that lists the name of the class followed by an @ sign and then a hexadecimal representation of the memory location the instantiated object has been assigned to.

To help aid in my discussion of the ubiquitous Java Object methods I will work with a simple Person class, defined like so:

package com.adammcquistan.object;

import java.time.LocalDate;

public class Person {
    private String firstName;
    private String lastName;
    private LocalDate dob;

    public Person() {}

    public Person(String firstName, String lastName, LocalDate dob) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.dob = dob;
    }

    public String getFirstName() { return firstName; }
    public String getLastName() { return lastName; }
    public LocalDate getDob() { return dob; }
}

Along with this class I have a rudimentary Main class to run the examples shown below to introduce the features of the base implementation of toString().

package com.adammcquistan.object;

import java.time.LocalDate;

public class Main {
    public static void main(String[] args) {
        Person me = new Person("Adam", "McQuistan", LocalDate.parse("1987-09-23"));
        Person me2 = new Person("Adam", "McQuistan", LocalDate.parse("1987-09-23"));
        Person you = new Person("Jane", "Doe", LocalDate.parse("2000-12-25"));
        System.out.println("1. " + me.toString());
        System.out.println("2. " + me);
        System.out.println("3. " + me + ", " + you);
        System.out.println("4. " + me + ", " + me2);
    }
}

The output looks like this (your hexadecimal values will differ):

1. com.adammcquistan.object.Person@15db9742
2. com.adammcquistan.object.Person@15db9742
3. com.adammcquistan.object.Person@15db9742, com.adammcquistan.object.Person@6d06d69c
4. com.adammcquistan.object.Person@15db9742, com.adammcquistan.object.Person@7852e922

The first thing to mention is that the outputs for lines one and two are identical, which shows that when you pass an object instance to methods like println, printf, as well as loggers, the toString() method is implicitly called. Additionally, this implicit call to toString() also occurs during concatenation as shown in line 3's output.

Ok, now it's time for me to interject my own personal opinion when it comes to Java programming best practices. What stands out to you as potentially worrisome about line 4 (actually any of the output for that matter)? Hopefully you are answering with a question along these lines, "well Adam, it's nice that the output tells me the class name, but what the heck am I to do with that gobbly-gook memory address?". And I would respond with, "Nothing!". It's 99.99% useless to us as programmers. A much better idea would be for us to override this default implementation and provide something that is actually meaningful, like this:

public class Person {
    // omitting everything else, which remains the same

    @Override
    public String toString() {
        return "<Person: firstName=" + firstName
            + ", lastName=" + lastName
            + ", dob=" + dob + ">";
    }
}

Now if I rerun the earlier Main class I get the following greatly improved output:

1. <Person: firstName=Adam, lastName=McQuistan, dob=1987-09-23>
2. <Person: firstName=Adam, lastName=McQuistan, dob=1987-09-23>
3. <Person: firstName=Adam, lastName=McQuistan, dob=1987-09-23>, <Person: firstName=Jane, lastName=Doe, dob=2000-12-25>
4. <Person: firstName=Adam, lastName=McQuistan, dob=1987-09-23>, <Person: firstName=Adam, lastName=McQuistan, dob=1987-09-23>

OMG! Something that I can read! With this implementation I now stand a fighting chance of actually being able to comprehend what is going on in a log file.
This is especially helpful when tech support is screaming about erratic behavior relating to Person instances in the program I'm on the hook for.

Caveats for Implementing and Using toString()

As shown in the previous section, implementing an informative toString() method in your classes is a rather good idea as it provides a way to meaningfully convey the content and identity of an object. However, there are times when you will want to take a slightly different approach to implementing them. For example, say you have an object that simply contains too much state to pack into the output of a toString() method, or when the object mostly contains a collection of utility methods. In these cases it is often advisable to output a simple description of the class and its intentions. Consider the following senseless utility class which finds and returns the oldest person of a list of Person objects.

public class OldestPersonFinder {
    public List<Person> family;

    public OldestPersonFinder(List<Person> family) {
        this.family = family;
    }

    public Person oldest() {
        if (family.isEmpty()) {
            return null;
        }
        Person currentOldest = null;
        for (Person p : family) {
            // the oldest person is the one with the earliest date of birth
            if (currentOldest == null || p.getDob().isBefore(currentOldest.getDob())) {
                currentOldest = p;
            }
        }
        return currentOldest;
    }

    @Override
    public String toString() {
        return "Class that finds the oldest Person in a List";
    }
}

In this case it would not be very helpful to loop over the entire collection of Person objects in the family List instance member and build some ridiculously large string to return representing each Person. Instead, it is much more meaningful to return a string describing the intentions of the class, which in this case is to find the Person who is the oldest. Another thing that I would like to strongly suggest is to make sure you provide access to all information specific to your class's data that you include in the output of your toString() method.
Say, for example, I had not provided a getter method for my Person class's dob member in a vain attempt to keep the person's age a secret. Unfortunately, the users of my Person class are eventually going to realize that they can simply parse the output of the toString() method and acquire the data they seek that way. Now if I ever change the implementation of toString() I am almost certain to break their code. On the other side, let me say that it's generally a bad idea to go parsing an object's toString() output for this very reason.

Conclusion

This article described the uses and value in the often forgotten about toString() method of the Java base Object class. I have explained the default behavior and given my opinion as to why I think it is a best practice to implement your own class specific behavior. As always, thanks for reading and don't be shy about commenting or critiquing below.
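To recap the article's main points in one runnable snippet, here is a compact, standalone sketch. Note that this is an illustrative example, not code from the article itself; the class and field names simply mirror the article's Person class for familiarity.

```java
// Illustrative sketch: default Object.toString() vs an overridden one.
import java.time.LocalDate;

public class ToStringDemo {

    static class Person {
        private final String firstName;
        private final String lastName;
        private final LocalDate dob;

        Person(String firstName, String lastName, LocalDate dob) {
            this.firstName = firstName;
            this.lastName = lastName;
            this.dob = dob;
        }

        // Overriding toString() to produce human-readable output.
        @Override
        public String toString() {
            return "<Person: firstName=" + firstName
                + ", lastName=" + lastName
                + ", dob=" + dob + ">";
        }
    }

    public static void main(String[] args) {
        // Default implementation: class name, '@', then a hex identity hash.
        System.out.println(new Object().toString());

        // println (and string concatenation) call toString() implicitly.
        Person p = new Person("Jane", "Doe", LocalDate.parse("2000-12-25"));
        System.out.println(p);
    }
}
```

Running it prints the unhelpful `java.lang.Object@...` form first, then the readable `<Person: ...>` form, which is exactly the contrast the article draws.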
https://stackabuse.com/javas-object-methods-tostring/
The objective of this post is to explain how to configure timer interrupts for MicroPython running on the ESP32. The tests were performed using a DFRobot's ESP-WROOM-32 device integrated in a ESP32 FireBeetle board.

Introduction

The objective of this post is to explain how to configure timer interrupts for MicroPython running on the ESP32. For more information on the hardware timers of the ESP32, please consult the second section of this previous post. The tests were performed using a DFRobot's ESP-WROOM-32 device integrated in a ESP32 FireBeetle board. The MicroPython IDE used was uPyCraft.

The code

We will start by declaring a counter that the interrupt handling function will increment each time the interrupt fires. We will use this approach since an interrupt should run as fast as possible and thus we should not call functions such as print inside it. Thus, when the interrupt occurs, the handling function will simply increment a counter and then we will have a loop, outside the interrupt function, that will check for that value and act accordingly.

interruptCounter = 0

We will also declare a counter that will store all the interrupts that have occurred since the program started, so we can print this value for each new one.

totalInterruptsCounter = 0

Next we will create an object of class Timer, which is available in the machine module. We will use this object to configure the timer interrupts. The constructor for this class receives as input a numeric value from 0 to 3, indicating the hardware timer to be used (the ESP32 has 4 hardware timers). For this example, we will use timer 0.

timer = machine.Timer(0)

Now we need to declare our handling function, which we will call handleInterrupt. This function receives an input argument to which an object of class Timer will be passed when the interrupt is triggered, although we are not going to use it on our code. As for the function logic, it will be as simple as incrementing the interruptCounter variable.
Since we are going to access and modify this global variable, we need to first declare it with the global keyword and only then use it.

def handleInterrupt(timer):
  global interruptCounter
  interruptCounter = interruptCounter+1

Now that we have finished the declaration of the handling function, we will initialize the timer with a call to the init method of the previously created Timer object. The inputs of this function are the period in which the interrupt will occur (specified in milliseconds), the mode of the timer (one shot or periodic) and the callback function that will handle the interrupt. For our simple example, we will set the timer to fire periodically each second. Thus, for the period argument we pass the value 1000 and for the mode argument we pass the PERIODIC constant of the Timer class. For a one shot timer, we can use the ONE_SHOT constant of the same class instead. Finally, in the callback argument, we pass our previously declared handling function.

timer.init(period=1000, mode=machine.Timer.PERIODIC, callback=handleInterrupt)

Now that we have started the timer, we will continue our code. As said before, we will handle the interrupt in the main code when the ISR signals its occurrence. Since our example program is very simple, we will just implement it with an infinite loop which will poll the interruptCounter variable to check if it is greater than 0. If it is, it means we have an interrupt to handle. Naturally, in a real case application, we would most likely have other computation to perform instead of just polling this variable.

So, if we detect the interrupt, we will need to decrement the interruptCounter variable to signal that we will handle it. Since this variable is shared with the ISR and to avoid racing conditions, this decrement needs to be performed in a critical section, which we will implement by simply disabling the interrupts. Naturally, this critical section should be as short as possible for us to re-enable the interrupts.
Thus, only the decrement of the variable is done here and all the remaining handling is done outside, with the interrupts re-enabled. So, we disable interrupts with a call to the disable_irq function of the machine module. This function will return the previous IRQ state, which we will store in a variable. To re-enable the interrupts, we simply call the enable_irq function, also from the machine module, and pass as input the previously stored state. Between these two calls, we access the shared variable and decrement it.

state = machine.disable_irq()
interruptCounter = interruptCounter-1
machine.enable_irq(state)

After that, we finish the handling of the interrupt by incrementing the total interrupts counter and printing it. The final code for the script can be seen below. It already includes these prints and the loop where we will be checking for interrupts.

import machine

interruptCounter = 0
totalInterruptsCounter = 0

timer = machine.Timer(0)

def handleInterrupt(timer):
  global interruptCounter
  interruptCounter = interruptCounter+1

timer.init(period=1000, mode=machine.Timer.PERIODIC, callback=handleInterrupt)

while True:
  if interruptCounter > 0:

    state = machine.disable_irq()
    interruptCounter = interruptCounter-1
    machine.enable_irq(state)

    totalInterruptsCounter = totalInterruptsCounter+1
    print(totalInterruptsCounter)

To test the code, simply upload it to your board and run it. You should get an output similar to figure 1, with the messages being printed in a periodic interval of 1 second.

Figure 1 – Output of the timer interrupts program for MicroPython running on the ESP32.
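The machine module only exists on MicroPython boards, but the pattern itself (a tiny handler that increments a shared counter, and a main loop that drains it inside a short critical section) can be sketched on a desktop interpreter. The sketch below is an analogy, not the ESP32 API: a threading.Lock stands in for machine.disable_irq()/enable_irq(), and the handler names are illustrative.

```python
# Host-Python sketch (assumption: standard CPython, not MicroPython).
# A lock plays the role of disabling/re-enabling interrupts around the
# shared counter; the handler does as little work as possible.
import threading

interrupt_counter = 0
total_interrupts_counter = 0
counter_lock = threading.Lock()

def handle_interrupt():
    """Plays the role of the timer ISR: just bump the counter."""
    global interrupt_counter
    with counter_lock:
        interrupt_counter += 1

# Simulate three periodic timer firings.
for _ in range(3):
    handle_interrupt()

# Main-loop side: decrement inside the critical section, then do the
# slower work (printing, bookkeeping) with the "interrupts" re-enabled.
while True:
    with counter_lock:
        if interrupt_counter == 0:
            break
        interrupt_counter -= 1
    total_interrupts_counter += 1
    print("Interrupt handled, total: " + str(total_interrupts_counter))
```

The key property mirrored here is that the critical section covers only the counter update, so the handler side is never blocked for long.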
https://techtutorialsx.com/2017/10/07/esp32-micropython-timer-interrupts/
For Noobs: How to Copy a List in Python 3

This article is by a noob and for noobs. It's also for anyone teaching noobs, of course. Copying lists using the equals sign alone is, for a noob, a recipe for disaster. There's a great danger that you'll get a bug in your code that takes you a very long time to fix. This is how to copy a list, at least at first, until some of the outrageous subtleties of how lists work are mastered.

Suppose that the list of the best coders of each year is always only slightly different, so instead of making a new list from scratch, you just copy the old list, renaming to the new year and then modify it accordingly. Suppose that James Senior gets a lot of help from James Junior so you put them together as if they were a sort of team. The result is a nested list. By 2020 James Junior has quit coding and his sister Patricia has taken over his role in helping dad. No other change to the list. So the only modification needed is to replace James Junior with Patricia. First you import the copy module and use the deepcopy function to copy the old list. Then you modify the list, replacing 'James Junior' with 'Patricia'.

best_coders_of_2019 = [['James Junior', 'James Senior'], 'Mary', 'Pete']

import copy
best_coders_of_2020 = copy.deepcopy(best_coders_of_2019)
best_coders_of_2020[0][0] = 'Patricia'

That's all there is to it. If you print out best_coders_of_2020 you'll see:

[['Patricia', 'James Senior'], 'Mary', 'Pete']

and if you print out best_coders_of_2019 you'll see:

[['James Junior', 'James Senior'], 'Mary', 'Pete']

Hunky dory, and pretty simple, right? If you just wanted to know how to copy a list safely as a noob, you can stop reading here. The rest of this article is of academic interest only to you. It probably seems strange that such a straightforward, intuitive process requires importing a module. Why not use the equals sign alone? It would look like this:

#Noobs should not code like this.
best_coders_of_2019 = [['James Junior', 'James Senior'], 'Mary', 'Pete']
best_coders_of_2020 = best_coders_of_2019
best_coders_of_2020[0][0] = 'Patricia'

It's very natural to think of that, since it works well with strings and numbers to use the equals sign alone. If you print out best_coders_of_2020 you'll see:

[['Patricia', 'James Senior'], 'Mary', 'Pete']

and if you print out best_coders_of_2019 you'll see:

[['Patricia', 'James Senior'], 'Mary', 'Pete']

See the problem? Copying the 2019 list using the equals sign alone and then modifying the copy also changed the 2019 list. Oh my goodness. What noob would have expected that? Python 3 is designed to work this way. As a noob, I don't quite understand the ins and outs of it (it's really difficult to understand) but it's something to do with copying lists in less time and using less memory to store lists. Noobs don't need to know why. There are a bunch of ways to copy lists but only the complicated-looking deepcopy function does it in what seems like a straightforward way.

A word of warning: I think copying dictionaries using the equals sign alone has the same surprising (to the noob) behavior. The good news is that deepcopy works about the same way with dictionaries as it does with lists.

@bartshmatthew on Twitter if you don't want to comment here.
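The article mentions that there are "a bunch of ways to copy lists". A short sketch, reusing the article's own list, shows how three of them differ; the shallow copy via list.copy() is the variant not covered above, and it explains why only deepcopy is safe for nested lists:

```python
import copy

best_coders_of_2019 = [['James Junior', 'James Senior'], 'Mary', 'Pete']

# Assignment: both names point at the very same list object.
alias = best_coders_of_2019
assert alias is best_coders_of_2019

# Shallow copy: a new outer list, but the inner list is still shared.
shallow = best_coders_of_2019.copy()
assert shallow is not best_coders_of_2019
assert shallow[0] is best_coders_of_2019[0]

# Deep copy: a new outer list AND a new inner list, fully independent.
deep = copy.deepcopy(best_coders_of_2019)
deep[0][0] = 'Patricia'
assert best_coders_of_2019[0][0] == 'James Junior'
```

So the "great danger" described above comes from sharing: assignment shares everything, a shallow copy still shares the nested lists, and only deepcopy shares nothing.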
https://medium.com/nerd-for-tech/for-noobs-how-to-copy-a-list-in-python-3-703522d62ef9?source=post_internal_links---------4----------------------------
Revision history for MooseX-Types-DateTime

0.13      2015-10-04 23:38:13Z
    - make all tests pass with both the current DateTime::Locale and the
      upcoming new version (currently still in trial releases).

0.12      2015-09-27 05:01:39Z
    - fix new test that may fail with older Moose

0.11      2015-08-16 01:05:36Z
    - update some distribution tooling

0.10      2014-02-03 17:17:57Z
    - temporarily revert change that cleaned namespaces (0.09), until the
      logic is cleaned up in MooseX::Types itself

0.09      2014-02-03 02:16:30Z
    - Require perl 5.8.3, as Moose does
    - canonical repository moved to github moose organization

0.07      2011-12-12 12:58:19Z
    - Provide optimize_as for pre-2.0100 Moose versions
    - Bump MooseX::Types version requirement (RT#73188)
    - Add missing dependencies
    - Enforce version dependencies at runtime (RT#73189)

0.06      2011-11-22
    - Use inline_as instead of the deprecated optimize_as

0.05      2009-08-24
    - Merged the two 0.04 releases

0.04      2009-08-24 (NUFFIN)
    - Remove DateTimeX::Easy support, this is in its own distribution now

0.04      2008-08-18 (FLORA)
    - Depend on DateTime::TimeZone 0.95 to avoid test failures due to
      broken, older versions.

0.03      2008-07-11
    - more explicit versions for dependencies
    - removed a test that doesn't seem to cleanly pass in all timezones

0.02      2008-06-16
    - Use namespace::clean in some places
    - Try to skip out of the test suite gracefully when bad crap happens
      (too much DWIM--)

0.01      2008-06-14
    - Initial version
https://metacpan.org/changes/distribution/MooseX-Types-DateTime
#include <qlcdnumber.h> It can display a number in just about any size. It can display decimal, hexadecimal, octal or binary numbers. It is easy to connect to data sources using the display() slot, which is overloaded to take any of five argument types. There are also slots to change the base with setMode() and the decimal point with setSmallDecimalPoint(). QLCDNumber emits the overflow() signal when it is asked to display something beyond its range. The range is set by setNumDigits(), but setSmallDecimalPoint() also influences it. If the display is set to hexadecimal, octal or binary, the integer equivalent of the value is displayed. These digits and other symbols can be shown: 0/O, 1, 2, 3, 4, 5/S, 6, 7, 8, 9/g, minus, decimal point, A, B, C, D, E, F, h, H, L, o, P, r, u, U, Y, colon, degree sign (which is specified as single quote in the string) and space. QLCDNumber substitutes spaces for illegal characters. It is not possible to retrieve the contents of a QLCDNumber object, although you can retrieve the numeric value with value(). If you really need the text, we recommend that you connect the signals that feed the display() slot to another slot as well and store the value there. Incidentally, QLCDNumber is the very oldest part of Qt, tracing back to a BASIC program on the Sinclair Spectrum. Definition at line 51 of file qlcdnumber.h.
http://qt-x11-free.sourcearchive.com/documentation/3.3.4/classQLCDNumber.html
The React Game: Aliens, Go Home!

The game that you will develop in this series is called Aliens, Go Home! The idea of this game is simple, you will have a cannon and will have to kill flying objects that are trying to invade the Earth. To kill these flying objects you will have to point and click on an SVG canvas to make your cannon shoot. If you are curious, you can find the final game up and running here. But don't play too much, you have work to do!

Previously, on Part 1

In the first part of this series, you have used create-react-app to bootstrap your React application and you have installed and configured Redux to manage the game state. After that, you have learned how to use SVG with React components while creating game elements like Sky, Ground, the CannonBase, and the CannonPipe. Finally, you added the aiming capability to your cannon by using an event listener and a JavaScript interval to trigger a Redux action that updates the CannonPipe angle. These actions paved the way to understand how you can create your game (and other animations) with React, Redux, and SVG.

Note: If, for whatever reason, you don't have the code created in the first part of the series, you can simply clone it from this GitHub repository. After cloning it, you will be able to follow the instructions in the sections that follow.

Creating More SVG React Components

The subsections that follow will show you how to create the rest of your game elements. Although they might look lengthy, they are quite simple and similar. You may even be able to follow the instructions in a matter of minutes. After this section, you will find the most interesting topics of this part of the series. These topics are entitled Making Flying Objects Appear Randomly and Using CSS Animation to Move Flying Objects.

Creating the Cannonball React Component

The next element that you will create is the CannonBall.
Note that, for now, you will keep this element inanimate. But don't worry! Soon (after creating all other elements), you will make your cannon shoot multiple cannonballs and kill some aliens. To create this component, add a new file called CannonBall.jsx inside the ./src/components directory with the following code:

import React from 'react';
import PropTypes from 'prop-types';

const CannonBall = (props) => {
  const ballStyle = {
    fill: '#777',
    stroke: '#444',
    strokeWidth: '2px',
  };

  return (
    <ellipse
      style={ballStyle}
      cx={props.position.x}
      cy={props.position.y}
      rx="5"
      ry="5"
    />
  );
};

CannonBall.propTypes = {
  position: PropTypes.shape({
    x: PropTypes.number.isRequired,
    y: PropTypes.number.isRequired
  }).isRequired,
};

export default CannonBall;

As you can see, to make a cannonball appear in your canvas, you will have to pass to it an object that contains the x and y properties. If you don't have that much experience with prop-types, this might have been the first time that you have used PropTypes.shape. Luckily, this feature is self-explanatory.

After creating this component, you might want to see it on your canvas. To do that, simply add the following tag inside the svg element of the Canvas component (you will also need to add import CannonBall from './CannonBall';):

<CannonBall position={{x: 0, y: -100}}/>

Just keep in mind that, if you add it before an element that occupies the same position, you will not see it. So, to play safe, just add it as the last element (right after <CannonBase />). Then, you can open your game in a web browser to see your new component. If you don't remember how to do that, you just have to run npm start in the project root and then open it in your preferred browser. Also, don't forget to commit this code to your repository before moving on.

Creating the Current Score React Component

Another React component that you will have to create is the CurrentScore. As the name states, you will use this component to show users what their current scores are.
That is, whenever they kill a flying object, your game will increase the value in this component by one and show it to them. Before creating this component, you might want to add some neat font to use on it. Actually, you might want to configure and use a font on the whole game, so it won't look like a monotonous game. You can browse and choose a font from whatever place you want, but if you are not interested in investing time on this, you can simply add the following line at the top of the ./src/index.css file:

@import url('https://fonts.googleapis.com/css?family=Joti+One');

/* other rules ... */

This will make your game load the Joti One font from Google. After that, you can create the CurrentScore.jsx file inside the ./src/components directory with the following code:

import React from 'react';
import PropTypes from 'prop-types';

const CurrentScore = (props) => {
  const scoreStyle = {
    fontFamily: '"Joti One", cursive',
    fontSize: 80,
    fill: '#d6d33e',
  };

  return (
    <g filter="url(#shadow)">
      <text style={scoreStyle} x="-300" y="80">
        {props.score}
      </text>
    </g>
  );
};

CurrentScore.propTypes = {
  score: PropTypes.number.isRequired,
};

export default CurrentScore;

Note: If you haven't configured Joti One (or if you configured some other font), you will have to change this code accordingly. Besides that, this font is used by other components that you will create, so keep in mind that you might have to update these components as well.

As you can see, the CurrentScore component requires a single property: score. As your game is not currently counting the score, to see this component right now, you will have to add a hard-coded value. So, inside the Canvas component, add <CurrentScore score={15} /> as the last element inside the svg element. Also, add the import statement to fetch this component (import CurrentScore from './CurrentScore';). If you try to see your new component now, you won't be able to. This is because your component is using a filter called shadow.
Although this shadow filter is not necessary, it will make your game look nicer. Besides that, adding a shadow to SVG elements is easy. To do that, simply add the following element at the top of your svg:

<defs>
  <filter id="shadow">
    <feDropShadow dx="1" dy="1" stdDeviation="2" />
  </filter>
</defs>

In the end, your Canvas component will look like this:

import React from 'react';
import PropTypes from 'prop-types';

import Sky from './Sky';
import Ground from './Ground';
import CannonBase from './CannonBase';
import CannonPipe from './CannonPipe';
import CannonBall from './CannonBall';
import CurrentScore from './CurrentScore';

const Canvas = (props) => {
  const viewBox = [window.innerWidth / -2, 100 - window.innerHeight, window.innerWidth, window.innerHeight];

  return (
    <svg
      onMouseMove={event => props.trackMouse(event)}
      viewBox={viewBox}
    >
      <defs>
        <filter id="shadow">
          <feDropShadow dx="1" dy="1" stdDeviation="2" />
        </filter>
      </defs>
      <Sky />
      <Ground />
      <CannonPipe rotation={props.angle} />
      <CannonBase />
      <CannonBall position={{x: 0, y: -100}}/>
      <CurrentScore score={15} />
    </svg>
  );
};

Canvas.propTypes = {
  angle: PropTypes.number.isRequired,
  trackMouse: PropTypes.func.isRequired,
};

export default Canvas;

And your game will look like this:

Not bad, huh?!

Creating the Flying Object React Component

What about creating React components to represent your flying objects now? Flying objects are not circles, nor rectangles. They usually have two parts (the top and the base) and these parts are usually rounded. That's why you are going to use two React components to create your flying objects: the FlyingObjectBase and the FlyingObjectTop. One of these components is going to use a Bezier Cubic curve to define its shapes. The other one is going to be an ellipse.

You can start by creating the first one, the FlyingObjectBase, in a new file called FlyingObjectBase.jsx inside the ./src/components directory.
This is the code to define this component:

import React from 'react';
import PropTypes from 'prop-types';

const FlyingObjectBase = (props) => {
  const style = {
    fill: '#979797',
    stroke: '#5c5c5c',
  };

  return (
    <ellipse
      cx={props.position.x}
      cy={props.position.y}
      rx="40"
      ry="10"
      style={style}
    />
  );
};

FlyingObjectBase.propTypes = {
  position: PropTypes.shape({
    x: PropTypes.number.isRequired,
    y: PropTypes.number.isRequired
  }).isRequired,
};

export default FlyingObjectBase;

After that, you can define the top part of the flying object. To do that, create a file called FlyingObjectTop.jsx inside the ./src/components directory and add the following code to it:

import React from 'react';
import PropTypes from 'prop-types';

import { pathFromBezierCurve } from '../utils/formulas';

const FlyingObjectTop = (props) => {
  const style = {
    fill: '#b6b6b6',
    stroke: '#7d7d7d',
  };

  const baseWith = 40;
  const halfBase = 20;
  const height = 25;

  const cubicBezierCurve = {
    initialAxis: {
      x: props.position.x - halfBase,
      y: props.position.y,
    },
    initialControlPoint: {
      x: 10,
      y: -height,
    },
    endingControlPoint: {
      x: 30,
      y: -height,
    },
    endingAxis: {
      x: baseWith,
      y: 0,
    },
  };

  return (
    <path
      style={style}
      d={pathFromBezierCurve(cubicBezierCurve)}
    />
  );
};

FlyingObjectTop.propTypes = {
  position: PropTypes.shape({
    x: PropTypes.number.isRequired,
    y: PropTypes.number.isRequired
  }).isRequired,
};

export default FlyingObjectTop;

If you don't know how the Bezier Cubic curve works, take a look at the previous article. This is enough to show some flying objects but, as you are going to make them randomly appear in your game, it will be easier to treat these components as a single element.
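Both shapes above rely on the pathFromBezierCurve helper imported from ../utils/formulas, which is not shown in this part of the series. A plausible implementation is sketched below; this is an assumption about the helper (the series' actual version may differ), built on the idea that the control points are relative coordinates, hence the lowercase c command in the SVG path string:

```javascript
// Sketch of the helper imported from '../utils/formulas' (assumption:
// control points are relative, so the path uses the lowercase 'c' command).
const pathFromBezierCurve = (cubicBezierCurve) => {
  const {
    initialAxis, initialControlPoint, endingControlPoint, endingAxis,
  } = cubicBezierCurve;
  return `
    M${initialAxis.x} ${initialAxis.y}
    c ${initialControlPoint.x} ${initialControlPoint.y}
    ${endingControlPoint.x} ${endingControlPoint.y}
    ${endingAxis.x} ${endingAxis.y}
  `;
};

// Example: the top of a flying object positioned at (0, -300),
// using the same values FlyingObjectTop computes from its props.
const d = pathFromBezierCurve({
  initialAxis: { x: -20, y: -300 },
  initialControlPoint: { x: 10, y: -25 },
  endingControlPoint: { x: 30, y: -25 },
  endingAxis: { x: 40, y: 0 },
});
console.log(d.trim());
```

The resulting string starts with an absolute moveto (M) to the left edge of the shape, then draws the rounded top as a single relative cubic Bezier segment back to the right edge.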
To do that, simply create a new file called FlyingObject.jsx beside the other two and add the following code to it:

```jsx
import React from 'react';
import PropTypes from 'prop-types';
import FlyingObjectBase from './FlyingObjectBase';
import FlyingObjectTop from './FlyingObjectTop';

const FlyingObject = props => (
  <g>
    <FlyingObjectBase position={props.position} />
    <FlyingObjectTop position={props.position} />
  </g>
);

FlyingObject.propTypes = {
  position: PropTypes.shape({
    x: PropTypes.number.isRequired,
    y: PropTypes.number.isRequired
  }).isRequired,
};

export default FlyingObject;
```

Now, to add flying objects to your game, you can use a single React component. To see this in action, update your Canvas component as follows:

```jsx
// ... other imports
import FlyingObject from './FlyingObject';

const Canvas = (props) => {
  // ...
  return (
    <svg ...>
      // ...
      <FlyingObject position={{x: -150, y: -300}}/>
      <FlyingObject position={{x: 150, y: -300}}/>
    </svg>
  );
};

// ... propTypes and export
```

Creating the Heart React Component

The next component that you will need to create is the component that represents gamers' lives. There is nothing better to represent a life than a heart.
So, create a new file called Heart.jsx inside the ./src/components directory and add the following code to it:

```jsx
import React from 'react';
import PropTypes from 'prop-types';
import { pathFromBezierCurve } from '../utils/formulas';

const Heart = (props) => {
  const heartStyle = {
    fill: '#da0d15',
    stroke: '#a51708',
    strokeWidth: '2px',
  };
  const leftSide = {
    initialAxis: {
      x: props.position.x,
      y: props.position.y,
    },
    initialControlPoint: {
      x: -20,
      y: -20,
    },
    endingControlPoint: {
      x: -40,
      y: 10,
    },
    endingAxis: {
      x: 0,
      y: 40,
    },
  };
  const rightSide = {
    initialAxis: {
      x: props.position.x,
      y: props.position.y,
    },
    initialControlPoint: {
      x: 20,
      y: -20,
    },
    endingControlPoint: {
      x: 40,
      y: 10,
    },
    endingAxis: {
      x: 0,
      y: 40,
    },
  };
  return (
    <g filter="url(#shadow)">
      <path
        style={heartStyle}
        d={pathFromBezierCurve(leftSide)}
      />
      <path
        style={heartStyle}
        d={pathFromBezierCurve(rightSide)}
      />
    </g>
  );
};

Heart.propTypes = {
  position: PropTypes.shape({
    x: PropTypes.number.isRequired,
    y: PropTypes.number.isRequired
  }).isRequired,
};

export default Heart;
```

As you can see, to create the shape of a heart with SVG, you need two cubic Bezier curves: one for each side of the heart. You also had to add a position property to this component. You need this because your game will give users more than one life, so you will need to show each of these hearts in a different position. For now, you can simply add one heart to your canvas so you can confirm that everything is working properly. To do this, open the Canvas component and add:

```jsx
<Heart position={{x: -300, y: 35}} />
```

This must be the last element inside the svg element. Also, don't forget to add the import statement ( import Heart from './Heart';).

Creating the Start Game Button React Component

Every game needs a start button.
So, to create one for your game, add a file called StartGame.jsx beside the other components and add the following code to it:

```jsx
import React from 'react';
import PropTypes from 'prop-types';
import { gameWidth } from '../utils/constants';

const StartGame = (props) => {
  const button = {
    x: gameWidth / -2, // half width
    y: -280, // minus means up (above 0)
    width: gameWidth,
    height: 200,
    rx: 10, // border radius
    ry: 10, // border radius
    style: {
      fill: 'transparent',
      cursor: 'pointer',
    },
    onClick: props.onClick,
  };
  const text = {
    textAnchor: 'middle', // center
    x: 0, // center relative to X axis
    y: -150, // 150 up
    style: {
      fontFamily: '"Joti One", cursive',
      fontSize: 60,
      fill: '#e3e3e3',
      cursor: 'pointer',
    },
    onClick: props.onClick,
  };
  return (
    <g filter="url(#shadow)">
      <rect {...button} />
      <text {...text}>
        Tap To Start!
      </text>
    </g>
  );
};

StartGame.propTypes = {
  onClick: PropTypes.func.isRequired,
};

export default StartGame;
```

As you don't need to show more than one StartGame button at a time, you have defined that this component is statically positioned in your game ( x: 0 and y: -150). There are two other differences between this component and the ones that you have defined before:

- First, this component expects a function called onClick. This function listens for clicks on this button and will trigger a Redux action to inform your app that it must start a new game.
- Second, this component uses a constant called gameWidth that you haven't defined yet. This constant represents the usable area of the game. Any area beyond that has no purpose besides making your app fill the whole screen.

To define the gameWidth constant, open the ./src/utils/constants.js file and add the following line to it:

```js
export const gameWidth = 800;
```

After that, you can add the StartGame component to your Canvas by appending <StartGame onClick={() => console.log('Aliens, Go Home!')} /> as the last element inside the svg element.
As always, don't forget to add the import statement ( import StartGame from './StartGame';).

Creating the Title React Component

The last component that you will create in this part of the series is the Title component. You already have a name for your game: Aliens, Go Home!. So, adding the title to it is as easy as creating a new file called Title.jsx (inside the ./src/components directory) with the following code:

```jsx
import React from 'react';
import { pathFromBezierCurve } from '../utils/formulas';

const Title = () => {
  const textStyle = {
    fontFamily: '"Joti One", cursive',
    fontSize: 120,
    fill: '#cbca62',
  };
  const aliensLineCurve = {
    initialAxis: {
      x: -190,
      y: -950,
    },
    initialControlPoint: {
      x: 95,
      y: -50,
    },
    endingControlPoint: {
      x: 285,
      y: -50,
    },
    endingAxis: {
      x: 380,
      y: 0,
    },
  };
  const goHomeLineCurve = {
    ...aliensLineCurve,
    initialAxis: {
      x: -250,
      y: -780,
    },
    initialControlPoint: {
      x: 125,
      y: -90,
    },
    endingControlPoint: {
      x: 375,
      y: -90,
    },
    endingAxis: {
      x: 500,
      y: 0,
    },
  };
  return (
    <g filter="url(#shadow)">
      <defs>
        <path
          id="AliensPath"
          d={pathFromBezierCurve(aliensLineCurve)}
        />
        <path
          id="GoHomePath"
          d={pathFromBezierCurve(goHomeLineCurve)}
        />
      </defs>
      <text {...textStyle}>
        <textPath xlinkHref="#AliensPath">
          Aliens,
        </textPath>
      </text>
      <text {...textStyle}>
        <textPath xlinkHref="#GoHomePath">
          Go Home!
        </textPath>
      </text>
    </g>
  );
};

export default Title;
```

To make your title curved, you have used a combination of path and textPath elements with cubic Bezier curves. Besides that, you have made your title statically positioned, just like the StartGame button. Now, to add this component to your canvas, you can simply add <Title /> to your svg element and add the import statement ( import Title from './Title';) at the top of the Canvas.jsx file. However, if you run your application now, you will notice that your new component does not appear on your screen. This happens because your app does not show enough vertical space yet.
Making Your React Game Responsive

To change your game dimensions and to make it responsive, you will need to do two things. First, you will need to attach an onresize event listener to the global window object. Doing this is quite simple: open the ./src/App.js file and append the following code to the componentDidMount() method:

```js
window.onresize = () => {
  const cnv = document.getElementById('aliens-go-home-canvas');
  cnv.style.width = `${window.innerWidth}px`;
  cnv.style.height = `${window.innerHeight}px`;
};
window.onresize();
```

This will make your app keep the dimensions of your canvas equal to the dimensions of the window that your users see, even if they resize their browsers. It will also force the execution of the window.onresize function when the app is rendered for the first time. Second, you will need to change the viewBox property of your canvas. Now, instead of defining that the uppermost point in the Y-axis is 100 - window.innerHeight (if you don't remember why you have used this formula, take a look at the first part of the series) and that the viewBox height is equal to the innerHeight of the window object, you will use the following values:

```js
const gameHeight = 1200;
const viewBox = [window.innerWidth / -2, 100 - gameHeight, window.innerWidth, gameHeight];
```

In this new version, you are using the 1200 value so your app can properly show the new title component. Besides that, this new vertical space will give your users enough time to see, shoot, and kill the flying objects.

Enabling Users to Start the Game

With all these new components in place and with these new dimensions, you can start thinking about enabling your users to start the game. That is, you can refactor your game to make its state switch to started whenever a user clicks on the Start Game button. This must trigger a lot of changes in your game's state.
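To make the viewBox arithmetic concrete, here is a small sketch (the getViewBox helper is hypothetical, introduced only for illustration) that computes the four viewBox values for a given window width:

```javascript
const gameHeight = 1200;

// Hypothetical helper: computes the viewBox values used by the Canvas
// component. The X axis is centered (half the width sits to the left of 0)
// and the Y axis keeps 100 units below 0, with the rest of gameHeight above.
const getViewBox = windowInnerWidth => ([
  windowInnerWidth / -2, // leftmost X
  100 - gameHeight,      // uppermost Y
  windowInnerWidth,      // viewBox width
  gameHeight,            // viewBox height
]);

console.log(getViewBox(1000)); // [ -500, -1100, 1000, 1200 ]
```

Because the height is now a fixed 1200 units instead of window.innerHeight, the visible game area stays the same regardless of screen size; only the horizontal extent follows the browser window.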
However, to make things easier to grasp, you can start by simply removing the Title and the StartGame components from the screen when users click on this button. To do that, you will need to create a new Redux action that will be processed by a Redux reducer to change a flag in your game. To create this new action, open the ./src/actions/index.js file and add the following code to it (leave the previous code in it unaltered):

```js
// ... MOVE_OBJECTS
export const START_GAME = 'START_GAME';

// ... moveObjects
export const startGame = () => ({
  type: START_GAME,
});
```

Then, you can refactor the ./src/reducers/index.js to handle this new action. The new version of this file will look like this:

```js
import { MOVE_OBJECTS, START_GAME } from '../actions';
import moveObjects from './moveObjects';
import startGame from './startGame';

const initialGameState = {
  started: false,
  kills: 0,
  lives: 3,
};

const initialState = {
  angle: 45,
  gameState: initialGameState,
};

function reducer(state = initialState, action) {
  switch (action.type) {
    case MOVE_OBJECTS:
      return moveObjects(state, action);
    case START_GAME:
      return startGame(state, initialGameState);
    default:
      return state;
  }
}

export default reducer;
```

As you can see, now you have a child object inside initialState that contains three properties about your game:

- started: a flag that indicates whether the game is running;
- kills: a property that holds how many flying objects the user has killed;
- lives: a property that holds how many lives the user has.

Besides that, you have added a new case to your switch statement. This new case (which is triggered when an action of type START_GAME arrives at the reducer) calls the startGame function. The goal of this function is to turn on the started flag inside the gameState property. Also, whenever a user starts a new game, this function has to zero the kills counter and give users three lives again.
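Before implementing it, here is a plain JavaScript sketch of the contract the startGame reducer has to fulfill (the function below mirrors the reducer described above; the sample state values are made up for illustration):

```javascript
// The startGame reducer must reset gameState from initialGameState
// and flip the started flag, leaving the rest of the state untouched.
const startGame = (state, initialGameState) => ({
  ...state,
  gameState: {
    ...initialGameState,
    started: true,
  },
});

// A state mid-game, after some kills and lost lives:
const state = { angle: 45, gameState: { started: false, kills: 7, lives: 1 } };
const next = startGame(state, { started: false, kills: 0, lives: 3 });

console.log(next.gameState); // { started: true, kills: 0, lives: 3 }
console.log(next.angle);     // 45 (untouched)
```

Note how the spread order matters: spreading initialGameState first and then setting started: true is what resets kills and lives while still marking the game as running.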
To implement the startGame function, create a new file called startGame.js inside the ./src/reducers directory with the following code:

```js
export default (state, initialGameState) => {
  return {
    ...state,
    gameState: {
      ...initialGameState,
      started: true,
    }
  }
};
```

As you can see, the code in this new file is quite simple. It just returns a new state object to the Redux store where the started flag is set to true and resets everything else inside the gameState property. This gives users three lives again and zeros their kills counter. After implementing this function, you have to pass it to your game. You also have to pass the new gameState property to it. So, to achieve that, you will have to change the ./src/containers/Game.js file as follows:

```js
import { connect } from 'react-redux';
import App from '../App';
import { moveObjects, startGame } from '../actions/index';

const mapStateToProps = state => ({
  angle: state.angle,
  gameState: state.gameState,
});

const mapDispatchToProps = dispatch => ({
  moveObjects: (mousePosition) => {
    dispatch(moveObjects(mousePosition));
  },
  startGame: () => {
    dispatch(startGame());
  },
});

const Game = connect(
  mapStateToProps,
  mapDispatchToProps,
)(App);

export default Game;
```

To summarize, the changes that you have made in this file are:

- mapStateToProps: Now, you have told Redux that the App component cares about the gameState property.
- mapDispatchToProps: You have also told Redux to pass the startGame function to the App component, so it can trigger this new action.

Neither of these new App properties ( gameState and startGame) will be directly used by the App component itself. Actually, the component that will use them is the Canvas component, so you have to pass them to it. To do that, open the ./src/App.js file and refactor it as follows:

```jsx
// ... import statements ...

class App extends Component {
  // ... constructor(props) ...
  // ... componentDidMount() ...
  // ... trackMouse(event) ...

  render() {
    return (
      <Canvas
        angle={this.props.angle}
        gameState={this.props.gameState}
        startGame={this.props.startGame}
        trackMouse={event => (this.trackMouse(event))}
      />
    );
  }
}

App.propTypes = {
  angle: PropTypes.number.isRequired,
  gameState: PropTypes.shape({
    started: PropTypes.bool.isRequired,
    kills: PropTypes.number.isRequired,
    lives: PropTypes.number.isRequired,
  }).isRequired,
  moveObjects: PropTypes.func.isRequired,
  startGame: PropTypes.func.isRequired,
};

export default App;
```

Then, you can open the ./src/components/Canvas.jsx file and replace the code inside it with this:

```jsx
import React from 'react';
import PropTypes from 'prop-types';
import Sky from './Sky';
import Ground from './Ground';
import CannonBase from './CannonBase';
import CannonPipe from './CannonPipe';
import CurrentScore from './CurrentScore';
import FlyingObject from './FlyingObject';
import StartGame from './StartGame';
import Title from './Title';

const Canvas = (props) => {
  const gameHeight = 1200;
  const viewBox = [window.innerWidth / -2, 100 - gameHeight, window.innerWidth, gameHeight];
  return (
    <svg id="aliens-go-home-canvas" onMouseMove={props.trackMouse} viewBox={viewBox}>
      {/* ... defs (shadow filter), Sky, Ground, and cannon components ... */}
      <CurrentScore score={15} />
      {!props.gameState.started &&
        <g>
          <StartGame onClick={() => props.startGame()} />
          <Title />
        </g>
      }
      {props.gameState.started &&
        <g>
          <FlyingObject position={{x: -150, y: -300}}/>
          <FlyingObject position={{x: 150, y: -300}}/>
        </g>
      }
    </svg>
  );
};

Canvas.propTypes = {
  angle: PropTypes.number.isRequired,
  gameState: PropTypes.shape({
    started: PropTypes.bool.isRequired,
    kills: PropTypes.number.isRequired,
    lives: PropTypes.number.isRequired,
  }).isRequired,
  trackMouse: PropTypes.func.isRequired,
  startGame: PropTypes.func.isRequired,
};

export default Canvas;
```

As you can see, in this new version, you have made the StartGame and the Title components appear only when the gameState.started property is set to false. Also, you have hidden the FlyingObject components until the user clicks on the Start Game button.
If you run your app now (issue npm start in a terminal if it is not running yet), you will see these new changes in action. They are not enough to enable your users to play your game, but you are getting there.

Making Flying Objects Appear Randomly

Now that you have implemented the Start Game feature, you can refactor your game to show some randomly positioned flying objects. These are the flying objects that your users will have to kill, so you will also need to make them fly (i.e. move down the screen). But first, you have to focus on making them appear somehow. To do that, the first thing you will have to do is to define where these objects will appear. You will also have to set some interval and some maximum number of flying objects. To keep things organized, you can define constants to hold these rules. So, open the ./src/utils/constants.js file and add the following code:

```js
// ... keep skyAndGroundWidth and gameWidth untouched

export const createInterval = 1000;
export const maxFlyingObjects = 4;
export const flyingObjectsStarterYAxis = -1000;
export const flyingObjectsStarterPositions = [
  -300,
  -150,
  150,
  300,
];
```

The rules above state that your game will show new flying objects every second ( 1000 milliseconds) and that there will be no more than four flying objects at the same time ( maxFlyingObjects). They also define that new objects will appear at the magnitude of -1000 on the Y axis ( flyingObjectsStarterYAxis). The last constant that you have added to this file ( flyingObjectsStarterPositions) defines four magnitudes on the X axis where objects can spring to life. You will randomly pick one of them while creating flying objects. To implement the function that will use these constants, create a file called createFlyingObjects.js in the ./src/reducers directory with the following code:

```js
import {
  createInterval,
  flyingObjectsStarterYAxis,
  maxFlyingObjects,
  flyingObjectsStarterPositions
} from '../utils/constants';

export default (state) => {
  if (!state.gameState.started) return state; // game not running

  const now = (new Date()).getTime();
  const { lastObjectCreatedAt, flyingObjects } = state.gameState;
  const createNewObject = (
    now - (lastObjectCreatedAt).getTime() > createInterval &&
    flyingObjects.length < maxFlyingObjects
  );
  if (!createNewObject) return state; // no need to create objects now

  const id = (new Date()).getTime();
  const predefinedPosition = Math.floor(Math.random() * maxFlyingObjects);
  const flyingObjectPosition = flyingObjectsStarterPositions[predefinedPosition];
  const newFlyingObject = {
    position: {
      x: flyingObjectPosition,
      y: flyingObjectsStarterYAxis,
    },
    createdAt: (new Date()).getTime(),
    id,
  };

  return {
    ...state,
    gameState: {
      ...state.gameState,
      flyingObjects: [
        ...state.gameState.flyingObjects,
        newFlyingObject
      ],
      lastObjectCreatedAt: new Date(),
    }
  }
}
```

At first, this code might look complex. However, it's quite the opposite. This list summarizes how it works:

- If the game is not running (i.e. !state.gameState.started), this code simply returns the current state unaltered.
- If the game is running, this function uses the createInterval and the maxFlyingObjects constants to decide if it should create new flying objects or not. This logic populates the createNewObject constant.
- If the createNewObject constant is set to true, this function uses Math.floor to fetch a random number between 0 and 3 ( Math.random() * maxFlyingObjects) so it can decide where this new flying object will appear.
- With this information, this function creates a new object called newFlyingObject with its position.
- In the end, this function returns a new state object with the new flying object and it updates the lastObjectCreatedAt value.

As you may have noticed, the function that you have just created is a reducer. As such, you might expect that you will create an action to trigger this reducer but, actually, you won't need one.
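You can check the throttling logic above in isolation with a small plain JavaScript sketch (constants inlined; this mirrors the decision the reducer makes, not the article's file layout):

```javascript
// Inlined constants from utils/constants.js.
const createInterval = 1000;
const maxFlyingObjects = 4;

// Mirrors the reducer's decision: create a new object only if the last one
// is old enough AND we are still below the on-screen limit.
const shouldCreateObject = (now, lastObjectCreatedAt, flyingObjects) => (
  now - lastObjectCreatedAt.getTime() > createInterval &&
  flyingObjects.length < maxFlyingObjects
);

const now = new Date(10000).getTime(); // a fixed "current" timestamp

console.log(shouldCreateObject(now, new Date(8500), []));           // true: 1.5s elapsed, 0 objects
console.log(shouldCreateObject(now, new Date(9500), []));           // false: only 0.5s elapsed
console.log(shouldCreateObject(now, new Date(8500), [1, 2, 3, 4])); // false: already 4 objects
```

Both conditions must hold at once, which is why the game never floods the screen even though the reducer runs every 10 milliseconds.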
Since your game issues a MOVE_OBJECTS action every 10 ms, you can take advantage of this action and trigger your new reducer. To do that, you will have to reimplement the moveObjects reducer ( ./src/reducers/moveObjects.js) as follows:

```js
import { calculateAngle } from '../utils/formulas';
import createFlyingObjects from './createFlyingObjects';

function moveObjects(state, action) {
  const mousePosition = action.mousePosition || {
    x: 0,
    y: 0,
  };
  const newState = createFlyingObjects(state);
  const { x, y } = mousePosition;
  const angle = calculateAngle(0, 0, x, y);
  return {
    ...newState,
    angle,
  };
}

export default moveObjects;
```

The new version of the moveObjects reducer changes the previous one as follows:

- First, it forces the creation of the mousePosition constant if one is not passed in the action object. You need that because the previous version would make the execution of the reducer halt if no mousePosition was passed to it.
- Second, it fetches a newState object from the createFlyingObjects reducer, so new flying objects are created if needed.
- Lastly, it returns a new object based on the newState object retrieved in the last step.

Before refactoring the App and the Canvas components to show the flying objects created by this new code, you will need to update the ./src/reducers/index.js file to add two new properties to the initialState object:

```js
// ... import statements ...

const initialGameState = {
  // ... other initial properties ...
  flyingObjects: [],
  lastObjectCreatedAt: new Date(),
};

// ... everything else ...
```

With that in place, all you need to do is to add flyingObjects to the propTypes object of the App component:

```js
// ... import statements ...

// ... App component class ...

App.propTypes = {
  // ... other propTypes definitions ...
  gameState: PropTypes.shape({
    // ... other gameState propTypes definitions ...
    flyingObjects: PropTypes.arrayOf(PropTypes.object).isRequired,
  }).isRequired,
  // ... other propTypes definitions ...
};

export default App;
```

And then make the Canvas component iterate over this property to show the flying objects.
Make sure to replace the statically positioned instances of the FlyingObject component with this:

```jsx
// ... import statements ...

const Canvas = (props) => {
  // ... const definitions ...
  return (
    <svg ... >
      // ... other SVG elements and React Components ...
      {props.gameState.flyingObjects.map(flyingObject => (
        <FlyingObject
          key={flyingObject.id}
          position={flyingObject.position}
        />
      ))}
    </svg>
  );
};

Canvas.propTypes = {
  // ... other propTypes definitions ...
  gameState: PropTypes.shape({
    // ... other gameState propTypes definitions ...
    flyingObjects: PropTypes.arrayOf(PropTypes.object).isRequired,
  }).isRequired,
  // ... other propTypes definitions ...
};

export default Canvas;
```

That's it! Now, your app will create and show randomly positioned flying objects when users start the game.

Note: If you run your app now and hit the Start Game button, you might end up seeing just one flying object. This might happen because there is nothing preventing flying objects from appearing at the same magnitude on the X-axis. In the next section, you will make your flying objects move along the Y-axis. This will ensure that you and your users are able to see all flying objects.

Using CSS Animation to Move Flying Objects

There are two paths you can follow to make your flying objects move. The first and most obvious one is to use JavaScript code to change their position. Although this approach might seem easy to implement, it would degrade the performance of your game to a level that makes it unfeasible. The second and preferred approach is to use CSS animations. The advantage of this approach is that it uses the GPU to animate elements, which increases the performance of your app. You might think that this approach is harder to implement but, as you will see, it is not. The trickiest part of it is that you will need the help of another NPM package to integrate CSS animations and React properly. That is, you will need to install the styled-components package. To install this package, you will have to stop your React app (i.e.
if it is up and running) and issue the following command:

```
npm i styled-components
```

After installing it, you can replace the code of the FlyingObject component ( ./src/components/FlyingObject.jsx) with this:

```jsx
import React from 'react';
import PropTypes from 'prop-types';
import styled, { keyframes } from 'styled-components';
import FlyingObjectBase from './FlyingObjectBase';
import FlyingObjectTop from './FlyingObjectTop';
import { gameHeight } from '../utils/constants';

const moveVertically = keyframes`
  0% {
    transform: translateY(0);
  }
  100% {
    transform: translateY(${gameHeight}px);
  }
`;

const Move = styled.g`
  animation: ${moveVertically} 4s linear;
`;

const FlyingObject = props => (
  <Move>
    <FlyingObjectBase position={props.position} />
    <FlyingObjectTop position={props.position} />
  </Move>
);

FlyingObject.propTypes = {
  position: PropTypes.shape({
    x: PropTypes.number.isRequired,
    y: PropTypes.number.isRequired
  }).isRequired,
};

export default FlyingObject;
```

In this new version, you have wrapped both the FlyingObjectBase and the FlyingObjectTop components inside a new component called Move. This component is simply a g SVG element styled to use the moveVertically transformation. To learn more about transformations and how to use styled-components, you can check the official styled-components documentation and the Using CSS Animations document on the MDN website. In the end, what this means is that instead of adding pure/static flying objects, you are adding elements that carry a transformation (a CSS rule) to move them from their starter position ( transform: translateY(0);) to the very bottom of the game ( transform: translateY(${gameHeight}px);). Of course, you will have to add the gameHeight constant to the ./src/utils/constants.js file. Also, since you will need to update this file, you can replace the flyingObjectsStarterYAxis value to make objects start in a position that users don't see.
The current value makes flying objects appear right in the middle of the visible area, which might seem odd to end users. To make these changes, open the constants.js file and change it as follows:

```js
// keep other constants untouched ...
export const flyingObjectsStarterYAxis = -1100;
// keep flyingObjectsStarterPositions untouched ...
export const gameHeight = 1200;
```

Lastly, you will need to destroy flying objects after 4 seconds, so new ones can appear and move through the canvas. You can achieve that by replacing the code inside the ./src/reducers/moveObjects.js file with this:

```js
import { calculateAngle } from '../utils/formulas';
import createFlyingObjects from './createFlyingObjects';

function moveObjects(state, action) {
  const mousePosition = action.mousePosition || {
    x: 0,
    y: 0,
  };
  const newState = createFlyingObjects(state);
  const now = (new Date()).getTime();
  const flyingObjects = newState.gameState.flyingObjects.filter(object => (
    (now - object.createdAt) < 4000
  ));
  const { x, y } = mousePosition;
  const angle = calculateAngle(0, 0, x, y);
  return {
    ...newState,
    gameState: {
      ...newState.gameState,
      flyingObjects,
    },
    angle,
  };
}

export default moveObjects;
```

As you can see, this new code filters the flyingObjects property of the gameState to remove objects that have an age equal to or greater than 4000 milliseconds (4 seconds). If you restart your app now ( npm start) and hit the Start Game button, you will see flying objects moving from top to bottom in the SVG canvas. Also, you will notice that your game creates new flying objects after the existing ones reach the bottom of this canvas.

Conclusion and Next Steps

In the second part of this series, you have created most of the elements that you need to make a complete game with React, Redux, and SVG.
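The age-based filter above is easy to verify in isolation. The sketch below (plain JavaScript, with the 4-second limit inlined) shows how objects older than four seconds are dropped:

```javascript
// Objects older than this many milliseconds are destroyed.
const maxAge = 4000;

// Mirrors the filter used in the moveObjects reducer.
const removeOldObjects = (flyingObjects, now) => (
  flyingObjects.filter(object => (now - object.createdAt) < maxAge)
);

const now = 10000; // a fixed "current" timestamp in milliseconds
const flyingObjects = [
  { id: 1, createdAt: 5000 }, // 5s old: destroyed
  { id: 2, createdAt: 6500 }, // 3.5s old: kept
  { id: 3, createdAt: 9000 }, // 1s old: kept
];

console.log(removeOldObjects(flyingObjects, now).map(o => o.id)); // [ 2, 3 ]
```

The 4000 ms limit matches the 4s duration of the CSS animation, so objects are removed from the state right as they finish crossing the canvas.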
In the end, you also have made flying objects appear at random positions and have taken advantage of CSS animations to make them fly around smoothly. In the next and last article of this series, you will implement the missing features of your game. That is, you will:

- make your cannon shoot to kill flying objects;
- make your game control your users' lives;
- control how many kills your users have.

You will also use Auth0 and Pusher to implement a real-time leaderboard. Stay tuned!
https://auth0.com/blog/developing-games-with-react-redux-and-svg-part-2/?utm_campaign=React%2BNewsletter&utm_medium=web&utm_source=React_Newsletter_105
Can You Survive Squid Game?

A FiveThirtyEight Riddler puzzle. By Vamshi Jandhyala in Riddler mathematics, October 29, 2021

Riddler Express

I have a spherical pumpkin. I carefully calculate its volume in cubic inches, as well as its surface area in square inches. To my surprise, the numerical values are the same! What if, instead, I have an $n$-hyperspherical pumpkin, with volume (with units $in^n$) and surface area (with units $in^{n-1}$)? Miraculously, the numerical values are once again the same! What is the radius of my $n$-hyperspherical pumpkin?

Solution

Let $r$ be the radius of the spherical pumpkin. We have

$$ \frac{4}{3}\pi r^3 = 4\pi r^2 \implies r = 3 \text{ in} $$

The relation between the surface area and volume of an $n$-ball of radius $R$ is given by

$$ A_n(R) = \frac{d}{dR}V_{n}(R) = \frac{n}{R}V_{n}(R) $$

If $A_n(R) = V_n(R)$, we have $R = n$.

Riddler Classic

In Squid Game's glass-bridge round, 16 competitors must cross a bridge of 18 pairs of separated glass squares; in each pair, one square is tempered and safe to step on, while the other shatters. Competitors cross in order, so everyone behind learns from each shattered square. On average, how many competitors survive?

Computational Solution

Let $S(n, m)$ be the expected number of survivors when there are $n$ competitors and $m$ pairs of glass squares remaining. We have the recurrence relation

$$ S(n, m) = \frac{1}{2}S(n, m-1) + \frac{1}{2}S(n-1, m-1), \qquad S(n, 0) = n, \qquad S(0, m) = 0 $$

The Python code to compute $S(n, m)$ is given below:

```python
def S(n, m):
    if n == 0:
        return 0
    if m == 0:
        return n
    return 0.5 * S(n, m - 1) + 0.5 * S(n - 1, m - 1)
```

On average, if there are 16 competitors and 18 pairs of glass squares, we will have 7 survivors.
https://vamshij.com/blog/riddler/squid-game/
variables affect the caller?s original variables. Java never uses call core java - Java Interview Questions core java why sun soft people introduced wrapper classes? do we... to the methods.Hence, it can improve the performance. For more information, visit the following links: Thanks core java core java how to display characters stored in array in core java Core java - Java Interview Questions Core java Hai this is jagadhish.Iam learning core java.In java1.5 I...); } } } ------------------------------------------ Read for more information. Thanks Core Java - Java Interview Questions for the application For read more information : Java Why we will write public static void main(), instead... in a Java application core java core java basic java interview question core java - Java Interview Questions core java What are transient variables in java? Give some examples Hi friend, The transient is a keyword defined in the java... relevant to a compiler in java programming language likewise the transient CORE JAVA CORE JAVA CORE JAVA PPT NEED WITH SOURCE CODE EXPLANATION CAN U ?? Core Java Tutorials core java core java i need core java material Hello Friend, Please visit the following link: Core Java Thanks Core Java Core Java what is a class Java and jvm related question Java and jvm related question What is difference between java data types and jvm data types Core Java Doubts - Java Beginners Core Java Doubts 1)How to swap two numbers suppose a=5 b=10; without... instances. For more information, visit the following links: core java core java Hi, can any one expain me serialization,Deseralization and exterenalization in core java core java core java Hi, can any one exain me the concept of static and dynamic loading in core java core java - Java Interview Questions ; } ----------------------------------------- Read for more information. java What is the purpose of the System class? 
what are the methods in this class Hi friend, The purpose of the System class Java Related Question Java Related Question hi, Why java doesn't has primitive type as an object,whats an eligibility to have a primitive type as an object by the languages CORE JAVA CORE JAVA What is called Aggregation and Composition core java core java surch the word in the given file Core JAva Core JAva how to swap 2 variables without temp in java core java core java how can we justify java technology is robust java related - Java Beginners java related Hello sir, I want to learn java. But I don't know where to start from. I have purchased one java related book. But I am..., shruthi Hi friend, Java related question java related question How can we make a program in which we make mcqs question file and then make its corresponding answer sheet....like if we make 15 mcqs then java should generate it answer sheet of 15 mcqs with a,b,c d java question related to objects java question related to objects what is the output of the following code? public class objComp { Public static void main(String args[]) { Int result = 0; objComp oc= new objComp(); object o = oc; if( o==oc) result =1; if(o Advertisements If you enjoyed this post then why not add us on Google+? Add us to your Circles
http://roseindia.net/tutorialhelp/comment/21326
CC-MAIN-2015-48
refinedweb
2,053
56.86
Build an email news digest app with Nix, Python and Celery In this tutorial, we'll build an application that sends regular emails to its users. Users will be able to subscribe to RSS and Atom feeds, and will receive a daily email with links to the newest stories in each one, at a specified time. As this application will require a number of different components, we're going to build it using the power of Nix repls. By the end of this tutorial, you'll be able to: - Use Nix on Replit to set up a database, webserver, message broker and background task handlers. - Use Python Celery to schedule and run tasks in the background. - Use Mailgun to send automated emails. - Build a dynamic Python application with multiple discrete parts. Getting started To get started, sign in to Replit or create an account if you haven't already. Once logged in, create a Nix repl. Installing dependencies We'll start by using Nix to install the packages and libraries we'll need to build our application. These are: - Python 3.9, the programming language we'll write our application in. - Flask, Python's most popular micro web framework, which we'll use to power our web application. - MongoDB, the NoSQL database we'll use to store persistent data for our application. - PyMongo, a library for working with MongoDB in Python. - Celery, a Python task queuing system. We'll use this to send regular emails to users. - Redis, a data store and message broker used by Celery to track tasks. - Python's Redis library. - Python's Requests library, which we'll use to interact with an external API to send emails. - Python's feedparser library, which we'll use to parse news feeds. - Python's dateutil library, which we'll use to parse timestamps in news feeds. 
To install these dependencies, open replit.nix and edit it to include the following: { pkgs }: { deps = [ pkgs.cowsay pkgs.python39 pkgs.python39Packages.flask pkgs.mongodb pkgs.python39Packages.pymongo pkgs.python39Packages.celery pkgs.redis pkgs.python39Packages.redis pkgs.python39Packages.requests pkgs.python39Packages.feedparser pkgs.python39Packages.dateutil ]; } Run your repl now to install all the packages. Once the Nix environment is finished loading, you should see a welcome message from cowsay. Now edit your repl's .replit file to run a script called start.sh: run = "sh start.sh" Next we need to create start.sh in the repl's files tab: And add the following bash code to start.sh: #!/bin/sh # Clean up pkill mongo pkill redis pkill python pkill start.sh rm data/mongod.lock mongod --dbpath data --repair # Run Mongo with local paths mongod --fork --bind_ip="127.0.0.1" --dbpath=./data --logpath=./log/mongod.log # Run redis redis-server --daemonize yes --bind 127.0.0.1 The first section of this script will kill all the running processes so they can be restarted. While it may not be strictly necessary to stop and restart MongoDB or Redis every time you run your repl, doing so means we can reconfigure them should we need to, and prevents us from having to check whether they're stopped or started, independent of our other code. The second section of the script starts MongoDB with the following configuration options: --fork: This runs MongoDB in a background process, allowing the script to continue executing without shutting it down. --bind_ip="127.0.0.1": Listen on the local loopback address only, preventing external access to our database. --dbpath=./dataand --logpath=./log/mongod.log: Use local directories for storage. This is important for getting programs to work in Nix repls, as we discussed in our previous tutorial on building with Nix. The third section starts Redis. 
We use the --bind flag to listen on the local loopback address only, similar to how we used it for MongoDB, and --daemonize yes runs it as a background process (similar to MongoDB's --fork). Before we run our repl, we'll need to create our MongoDB data and logging directories, data and log. Create these directories now in your repl's filepane. Once that's done, you can run your repl, and it will start MongoDB and Redis. You can interact with MongoDB by running mongo in your repl's shell, and with Redis by running redis-cli. If you're interested, you can find an introduction to these clients at the links below: These datastores will be empty for now. Important note: Sometimes, when stopping and starting your repl, you may see the following error message: ERROR: child process failed, exited with error number 100 This means that MongoDB has failed to start. If you see this, restart your repl, and MongoDB should start up successfully. Scraping RSS and Atom feeds We're going to build the feed scraper first. If you've completed any of our previous web-scraping tutorials, you might expect to do this by parsing raw XML with Beautiful Soup. While this would be possible, we would need to account for a large number of differences in feed formats and other gotchas specific to parsing RSS and Atom feeds. Instead, we'll use the feedparser library, which has already solved most of these problems. Create a directory named lib, and inside that directory, a Python file named scraper.py. Add the following code to it: import feedparser, pytz, time from datetime import datetime, timedelta from dateutil import parser def get_title(feed_url): pass def get_items(feed_url, since=timedelta(days=1)): pass Here we import the libraries we'll need for web scraping, XML parsing, and time handling. We also define two functions: get_title: This will return the name of the website for a given feed URL (e.g. "Hacker News").
get_items: This will return the feed's items – depending on the feed, these can be articles, videos, podcast episodes, or other content. The sinceparameter will allow us to only fetch recent content, and we'll use one day as the default cutoff. Edit the get_title function with the following: def get_title(feed_url): feed = feedparser.parse(feed_url) return feed["feed"]["title"] Add the following line to the bottom of scraper.py to test it out: print(get_title("")) Instead of rewriting our start.sh script to run this Python file, we can just run python lib/scraper.py in our repl's shell tab, as shown below. If it's working correctly, we should see "Hacker News" as the script's output. Now we need to write the second function. Add the following code to the get_items function definition: def get_items(feed_url, since=timedelta(days=1)): feed = feedparser.parse(feed_url) items = [] for entry in feed.entries: title = entry.title link = entry.link if "published" in entry: published = parser.parse(entry.published) elif "pubDate" in entry: published = parser.parse(entry.pubDate) Here we extract each item's title, link, and publishing timestamp. Atom feeds use the published element and RSS feeds use the pubDate element, so we look for both. We use parser to convert the timestamp from a string to a datetime object. The parse function is able to convert a large number of different formats, which saves us from writing a lot of extra code. We need to evaluate the age of the content and package it in a dictionary so we can return it from our function. Add the following code to the bottom of the get_items function: # evaluating content age if (since and published > (pytz.utc.localize(datetime.today()) - since)) or not since: item = { "title": title, "link": link, "published": published } items.append(item) return items We get the current time with datetime.today(), convert it to the UTC timezone, and then subtract our since timedelta object. 
Because of the way we've constructed this if statement, if we pass in since=None when calling get_items, we'll get all feed items irrespective of their publish date. Finally, we construct a dictionary of our item's data and add it to the items list, which we return at the bottom of the function, outside the for loop. Add the following lines to the bottom of scraper.py and run the script in your repl's shell again. We use time.sleep to avoid being rate-limited for fetching the same file twice in quick succession. time.sleep(1) print(get_items("")) You should see a large number of results in your terminal. Play around with values of since and see what difference it makes. Once you're done, remove these test lines from the bottom of scraper.py. Setting up Mailgun Now that we can retrieve content for our email digests, we need a way of sending emails. To avoid having to set up our own email server, we'll use the Mailgun API to actually send emails. Sign up for a free account now, and verify your email and phone number. Once your account is created and verified, you'll need an API key and domain from Mailgun. To find your domain, navigate to Sending → Domains. You should see a single domain name, starting with "sandbox". Click on that and copy the full domain name (it looks like: sandboxlongstringoflettersandnumbers.mailgun.org). To find your API key, navigate to Settings → API Keys. Click on the view icon next to Private API key and copy the revealed string somewhere safe. Back in your repl, create two environment variables, MAILGUN_DOMAIN and MAILGUN_APIKEY, and provide the strings you copied from Mailgun as values for each. Run your repl now to set these environment variables. Then create a file named lib/tasks.py, and populate it with the code below.
import requests, os # Mailgun config MAILGUN_APIKEY = os.environ["MAILGUN_APIKEY"] MAILGUN_DOMAIN = os.environ["MAILGUN_DOMAIN"] def send_test_email(to_address): res = requests.post( f"https://api.mailgun.net/v3/{MAILGUN_DOMAIN}/messages", auth=("api", MAILGUN_APIKEY), data={"from": f"News Digest <digest@{MAILGUN_DOMAIN}>", "to": [to_address], "subject": "Testing Mailgun", "text": "Hello world!"}) print(res) send_test_email("YOUR-EMAIL-ADDRESS-HERE") Here we use Python Requests to interact with the Mailgun API. Note the inclusion of our domain and API key. To test that Mailgun is working, replace YOUR-EMAIL-ADDRESS-HERE with your email address, and then run python lib/tasks.py in your repl's shell. You should receive a test mail within a few minutes, but as we're using a free sandbox domain, it may end up in your spam folder. Without further verification on Mailgun, we can only send up to 100 emails per hour, and a free account limits us to 5,000 emails per month. Additionally, Mailgun's sandbox domains can only be used to send emails to specific, whitelisted addresses. The address you created your account with will work, but if you want to send emails to other addresses, you'll have to add them to the domain's authorized recipients, which can be done from the page you got the full domain name from. Keep these limitations in mind as you build and test this application. After you've received your test email, you can delete or comment out the function call in the final line of lib/tasks.py. Interacting with MongoDB As we will have two different components of our application interacting with our Mongo database – our email-sending code in lib/tasks.py and the web application code we will put in main.py – we're going to put our database connection code in another file, which can be imported by both.
Create lib/db.py now and add the following code to it: import pymongo def connect_to_db(): client = pymongo.MongoClient() return client.digest We will call connect_to_db() whenever we need to interact with the database. Because of how MongoDB works, a new database called "digest" will be created the first time we connect. Much of the benefit MongoDB provides over traditional SQL databases is that you don't have to define schemas before storing data. Mongo databases are made up of collections, which contain documents. You can think of the collections as lists and the documents as dictionaries. When we read and write data to and from MongoDB, we will be working with lists of dictionaries. Creating the web application Now that we've got a working webscraper, email sender and database interface, it's time to start building our web application. Create a file named main.py in your repl's filepane and add the following import code to it: from flask import Flask, request, render_template, session, flash, redirect, url_for from functools import wraps import os, pymongo, time import lib.scraper as scraper import lib.tasks as tasks from lib.db import connect_to_db We've imported everything we'll need from Flask and other Python modules, as well as our three local files from lib: scraper.py, tasks.py and db.py. Next, add the following code to initialize the application and connect to the database: app = Flask(__name__) app.config['SECRET_KEY'] = os.environ['SECRET_KEY'] db = connect_to_db() Our secret key will be a long, random string, stored in an environment variable. You can generate one in your repl's Python console with the following two lines of code: import random, string ''.join(random.SystemRandom().choice(string.ascii_uppercase + string.digits) for _ in range(20)) In your repl's "Secrets" tab, add a new key named SECRET_KEY and enter the random string you just generated as its value. Next, we will create the context helper function. 
This function will provide the current user's data from our database to our application frontend. Add the following code to the bottom of main.py: def context(): email = session.get("email") cursor = db.subscriptions.find({ "email": email }) subscriptions = [subscription for subscription in cursor] return { "user_email": email, "user_subscriptions": subscriptions } When we build our user login, we will store the current user's email address in Flask's session object, which corresponds to a cookie that will be cryptographically signed with the secret key we defined above. Without this, users would be able to impersonate each other by changing their cookie data. Note that session.get("email") returns None when no user is logged in, which is what our template checks for. We query our MongoDB database by calling db.<name of collection>.find(). If we call find() without any arguments, all items in our collection will be returned. If we call find() with an argument, as we've done above, it will return results with keys and values that match our argument. The find() method returns a Cursor object, which we can extract the results of our query from. Next, we need to create an authentication function decorator, which will restrict parts of our application to logged-in users. Add the following code below the definition of the context function: # Authentication decorator def authenticated(f): @wraps(f) def decorated_function(*args, **kwargs): if "email" not in session: flash("Permission denied.", "warning") return redirect(url_for("index")) return f(*args, **kwargs) return decorated_function The code in the second function may look a bit strange if you haven't written your own decorators before. Here's how it works: authenticated is the name of our decorator. You can think of decorators as functions that take other functions as arguments. (The two code snippets below are for illustration and not part of our program.) Therefore, if we write the following: @authenticated def authenticated_function(): return f"Hello logged-in user!"
authenticated_function() It will be roughly equivalent to: def authenticated_function(): return f"Hello logged-in user!" authenticated(authenticated_function) So whenever authenticated_function gets called, the code we've defined in decorated_function will execute before anything we define in authenticated_function. This means we don't have to include the same authentication checking code in every piece of authenticated functionality. As per the code, if a non-logged-in user attempts to access restricted functionality, our app will flash a warning message and redirect them to the home page. Next, we'll add code to serve our home page and start our application: # Routes @app.route("/") def index(): return render_template("index.html", **context()) app.run(host='0.0.0.0', port=8080) This code will serve a Jinja template, which we will create now in a separate file. In your repl's filepane, create a directory named templates, and inside that directory, a file named index.html. Add the following code to index.html: <!DOCTYPE html> <html> <head> <title>News Digest</title> </head> <body> {% with messages = get_flashed_messages() %} {% if messages %} <ul class=flashes> {% for message in messages %} <li>{{ message }}</li> {% endfor %} </ul> {% endif %} {% endwith %} {% if user_email == None %} <p>Please enter your email to sign up/log in:</p> <form action="/login" method="post"> <input type="text" name="email"> <input type="submit" value="Login"> </form> {% else %} <p>Logged in as {{ user_email }}.</p> <h1>Subscriptions</h1> <ul> {% for subscription in user_subscriptions %} <li> <a href="{{ subscription.url }}">{{ subscription.title }}</a> <form action="/unsubscribe" method="post" style="display: inline"> <input type="hidden" name="feed_url" value="{{subscription.url}}"> <input type="submit" value="Unsubscribe"> </form> </li> {% endfor %} </ul> <p>Add a new subscription:</p> <form action="/subscribe" method="post"> <input type="text" name="feed_url"> <input type="submit" 
value="Subscribe"> </form> <p>Send digest to your email now:</p> <form action="/send-digest" method="post"> <input type="submit" value="Send digest"> </form> <p>Choose a time to send your daily digest (must be UTC):</p> <form action="/schedule-digest" method="post"> <input type="time" name="digest_time"> <input type="submit" value="Schedule digest"> </form> {% endif %} </body> </html> As this will be our application's only page, it contains a lot of functionality. From top to bottom: - We've included code to display flashed messages at the top of the page. This allows us to show users the results of their actions without creating additional pages. - If the current user is not logged in, we display a login form. - If the current user is logged in, we display: - A list of their current subscriptions, with an unsubscribe button next to each one. - A form for adding new subscriptions. - A button to send an email digest immediately. - A form for sending email digests at a specific time each day. To start our application when our repl runs, we must add an additional line to the bottom of start.sh: # Run Flask app python main.py Once that's done, run your repl. You should see a login form. Adding user login We will implement user login by sending a single-use login link to the email address provided in the login form. This provides a number of benefits: - We can use the code we've already written for sending emails. - We don't need to implement user registration separately. - We can avoid worrying about user passwords. To send login emails asynchronously, we'll set up a Celery task. In main.py, add the following code for the /login route below the definition of index: @app.route("/login", methods=['POST']) def login(): email = request.form["email"] tasks.send_login_email.delay(email) flash("Check your email for a magic login link!") return redirect(url_for("index")) In this function, we get the user's email from the submitted form, and pass it to a function we will define in lib/tasks.py.
As this function will be a Celery task rather than a conventional function, we must call it with .delay(), a function in Celery's task-calling API. Let's implement this task now. Open lib/tasks.py and modify it as follows: import requests, os import random, string # NEW IMPORTS from celery import Celery # NEW IMPORT from celery.schedules import crontab # NEW IMPORT from datetime import datetime # NEW IMPORT import lib.scraper as scraper # NEW IMPORT from lib.db import connect_to_db # NEW IMPORT # NEW LINE BELOW REPL_URL = f"https://{os.environ['REPL_SLUG']}.{os.environ['REPL_OWNER']}.repl.co" # NEW LINES BELOW # Celery configuration CELERY_BROKER_URL = "redis://127.0.0.1:6379/0" CELERY_BACKEND_URL = "redis://127.0.0.1:6379/0" celery = Celery("tasks", broker=CELERY_BROKER_URL, backend=CELERY_BACKEND_URL) celery.conf.enable_utc = True # Mailgun config MAILGUN_APIKEY = os.environ["MAILGUN_APIKEY"] MAILGUN_DOMAIN = os.environ["MAILGUN_DOMAIN"] # NEW FUNCTION DECORATOR @celery.task def send_test_email(to_address): res = requests.post( f"https://api.mailgun.net/v3/{MAILGUN_DOMAIN}/messages", auth=("api", MAILGUN_APIKEY), data={"from": f"News Digest <digest@{MAILGUN_DOMAIN}>", "to": [to_address], "subject": "Testing Mailgun", "text": "Hello world!"}) print(res) # COMMENT OUT THE TESTING LINE # send_test_email("YOUR-EMAIL-ADDRESS-HERE") We've added the following: - Additional imports for Celery and our other local files. - A REPL_URL variable containing our repl's URL, which we construct using environment variables defined in every repl. - Instantiation of a Celery object, configured to use Redis as a message broker and data backend, and the UTC timezone. - A function decorator which converts our send_test_email function into a Celery task. Next, we'll define a function to generate unique IDs for our login links.
Add the following code below the send_test_email function definition: def generate_login_id(): return ''.join(random.SystemRandom().choice(string.ascii_uppercase + string.digits) for _ in range(30)) This code is largely similar to the code we used to generate our secret key. Next, we'll create the task we called in main.py: send_login_email. Add the following code below the definition of generate_login_id: @celery.task def send_login_email(to_address): # Generate ID login_id = generate_login_id() # Set up email login_url = f"{REPL_URL}/confirm-login/{login_id}" text = f""" Click this link to log in: {login_url} """ html = f""" <p>Click this link to log in:</p> <p><a href={login_url}>{login_url}</a></p> """ # Send email res = requests.post( f"https://api.mailgun.net/v3/{MAILGUN_DOMAIN}/messages", auth=("api", MAILGUN_APIKEY), data={"from": f"News Digest <digest@{MAILGUN_DOMAIN}>", "to": [to_address], "subject": "News Digest Login Link", "text": text, "html": html }) # Add to user_sessions collection if email sent successfully if res.ok: db = connect_to_db() db.user_sessions.insert_one({"login_id": login_id, "email": to_address}) print(f"Sent login email to {to_address}") else: print("Failed to send login email.") This code will generate a login ID, construct an email containing a /confirm-login link containing that ID, and then send the email. If the email is sent successfully, it will add a document to our MongoDB containing the email address and login ID. Now we can return to main.py and create the /confirm-login route. Add the following code below the login function definition: @app.route("/confirm-login/<login_id>") def confirm_login(login_id): login = db.user_sessions.find_one({"login_id": login_id}) if login: session["email"] = login["email"] db.user_sessions.delete_one({"login_id": login_id}) # prevent reuse else: flash("Invalid or expired login link.") return redirect(url_for("index")) When a user clicks the login link in their email, they will be directed to this route.
If a matching login ID is found in the database, they will be logged in, and the login ID will be deleted so it can't be reused. We've implemented all of the code we need for user login. The last thing we need to do to get it working is to configure our repl to start a Celery worker. When we invoke a task with .delay(), this worker will execute the task. In start.sh, add the following between the line that starts Redis and the line that starts our web application: # Run Celery worker celery -A lib.tasks.celery worker -P processes --loglevel=info & This will start a Celery worker, configured with the following flags: -A lib.tasks.celery: This tells Celery to run tasks associated with the celeryobject in tasks.py. -P processes: This tells Celery to start new processes for individual tasks. --loglevel=info: This ensures we'll have detailed Celery logs to help us debug problems. We use & to run the worker in the background – this is a part of Bash's syntax rather than a program-specific backgrounding flag like we used for MongoDB and Redis. Run your repl now, and you should see the worker start up with the rest of our application's components. Once the web application is started, open it in a new tab. Then try logging in with your email address – remember to check your spam box for your login email. If everything's working correctly, you should see a page like this after clicking your login link: Adding and removing subscriptions Now that we can log in, let's add the routes that handle subscribing to and unsubscribing from news feeds. These routes will only be available to logged-in users, so we'll use our authenticated decorator on them. 
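Since these views will stack two decorators, it's worth being precise about how stacked decorators compose: they apply bottom-up, with the decorator nearest the function wrapping first and each decorator above it wrapping the previous result. A standalone sketch (illustrative only, not part of the app's code):

```python
# Decorators apply bottom-up: the one closest to the function wraps first,
# and each decorator above it wraps the result of the one below.
def shout(f):
    def wrapper():
        return f().upper()
    return wrapper

def exclaim(f):
    def wrapper():
        return f() + "!"
    return wrapper

@exclaim
@shout
def greet():
    return "hello"

# Equivalent to: greet = exclaim(shout(greet))
print(greet())  # HELLO!
```

This ordering matters when combining a Flask route with a custom decorator: @app.route registers whatever function it receives, so the conventional pattern is to put @app.route on top, with the authentication wrapper already applied beneath it.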
Add the following code below the confirm_login function definition in main.py: # Subscriptions @app.route("/subscribe", methods=['POST']) @authenticated def subscribe(): # new feed feed_url = request.form["feed_url"] # Test feed try: items = scraper.get_items(feed_url, None) except Exception as e: print(e) flash("Invalid feed URL.") return redirect(url_for("index")) if items == []: flash("Invalid feed URL.") return redirect(url_for("index")) # Get feed title time.sleep(1) feed_title = scraper.get_title(feed_url) This code will validate feed URLs by attempting to fetch their contents. Note that we are passing None as the argument for since in scraper.get_items – this will fetch the whole feed, not just the last day's content. If it fails for any reason, or returns an empty list, an error message will be shown to the user and the subscription will not be added. Once we're sure that the feed is valid, we sleep for one second and then fetch the title. The sleep is necessary to prevent rate-limiting by some websites. Now that we've validated the feed and have its title, we can add it to our MongoDB. Add the following code to the bottom of the function:
As `create_index` will only create an index that doesn't already exist, we can safely call it on every invocation of this function.

Next, we'll create the code for unsubscribing from feeds. Add the following function definition below the one above:

```python
@app.route("/unsubscribe", methods=['POST'])
@authenticated
def unsubscribe():
    # remove feed
    feed_url = request.form["feed_url"]

    deleted = db.subscriptions.delete_one({"email": session["email"], "url": feed_url})

    flash("Unsubscribed!")
    return redirect(url_for("index"))
```

Run your repl, and try subscribing and unsubscribing from some feeds. You can use the following URLs to test:

- Hacker News feed:
- /r/replit on Reddit feed:

## Sending digests

Once you've added some subscriptions, we can implement the `/send-digest` route. Add the following code below the definition of `unsubscribe` in `main.py`:

```python
# Digest
@app.route("/send-digest", methods=['POST'])
@authenticated
def send_digest():
    tasks.send_digest_email.delay(session["email"])

    flash("Digest email sent! Check your inbox.")
    return redirect(url_for("index"))
```

Then, in `tasks.py`, add the following new Celery task:

```python
@celery.task
def send_digest_email(to_address):
    # Get subscriptions from MongoDB
    db = connect_to_db()
    cursor = db.subscriptions.find({"email": to_address})
    subscriptions = [subscription for subscription in cursor]

    # Scrape RSS feeds
    items = {}
    for subscription in subscriptions:
        items[subscription["title"]] = scraper.get_items(subscription["url"])
```

First, we connect to MongoDB and find all subscriptions created by the user we're sending to. We then construct a dictionary of scraped items for each feed. Once that's done, it's time to create the email content.
Add the following code to the bottom of the `send_digest_email` function:

```python
    # Build email digest
    today_date = datetime.today().strftime("%d %B %Y")

    html = f"<h1>Daily Digest for {today_date}</h1>"

    for site_title, feed_items in items.items():
        if not feed_items:  # empty list
            continue

        section = f"<h2>{site_title}</h2>"
        section += "<ul>"
        for item in feed_items:
            section += f"<li><a href={item['link']}>{item['title']}</a></li>"
        section += "</ul>"

        html += section
```

In this code, we construct an HTML email with a heading and a bulleted list of linked items for each feed. If any of our feeds have no items for the last day, we leave them out of the digest. We use `strftime` to format today's date in a human-readable manner.

After that, we can send the email. Add the following code to the bottom of the function:

```python
    # Send email
    res = requests.post(
        f"{MAILGUN_DOMAIN}/messages",
        auth=("api", MAILGUN_APIKEY),
        data={"from": f"News Digest <[email protected]{MAILGUN_DOMAIN}>",
              "to": [to_address],
              "subject": f"News Digest for {today_date}",
              "text": html,
              "html": html})

    if res.ok:
        print(f"Sent digest email to {to_address}")
    else:
        print("Failed to send digest email.")
```

Run your repl, and click on the Send digest button. You should receive an email digest with today's items from each of your subscriptions within a few minutes. Remember to check your spam!

## Scheduling digests

The last thing we need to implement is scheduled digests, to allow our application to send users a digest every day at a specified time. In `main.py`, add the following code below the `send_digest` function definition:

```python
@app.route("/schedule-digest", methods=['POST'])
@authenticated
def schedule_digest():
    # Get time from form
    hour, minute = request.form["digest_time"].split(":")

    tasks.schedule_digest(session["email"], int(hour), int(minute))

    flash(f"Your digest will be sent daily at {hour}:{minute} UTC")
    return redirect(url_for("index"))
```

This function retrieves the requested digest time from the user and calls `tasks.schedule_digest`.
As `schedule_digest` will be a regular function that schedules a task, rather than a task itself, we can call it directly.

Celery supports scheduling tasks through its beat functionality. This will require us to run an additional Celery process, which will be a beat rather than a worker. By default, Celery does not support dynamic addition and alteration of scheduled tasks, which we need in order to allow users to set and change their digest schedules arbitrarily. So we'll need a custom scheduler that supports this.

Many custom Celery scheduler packages are available on PyPI, but as of October 2021, none of these packages have been added to Nixpkgs. Therefore, we'll need to create a custom derivation for the scheduler we choose. Let's do that in `replit.nix` now. Open the file, and add the `let ... in` block below:

```nix
{ pkgs }:
let
  redisbeat = pkgs.python39Packages.buildPythonPackage rec {
    pname = "redisbeat";
    version = "1.2.4";

    src = pkgs.python39Packages.fetchPypi {
      inherit pname version;
      sha256 = "0b800c6c20168780442b575d583d82d83d7e9326831ffe35f763289ebcd8b4f6";
    };

    propagatedBuildInputs = with pkgs.python39Packages; [
      jsonpickle
      celery
      redis
    ];

    postPatch = ''
      sed -i "s/jsonpickle==1.2/jsonpickle/" setup.py
    '';
  };
in
{
  deps = [
    pkgs.python39
    pkgs.python39Packages.flask
    pkgs.python39Packages.celery
    pkgs.python39Packages.pymongo
    pkgs.python39Packages.requests
    pkgs.python39Packages.redis
    pkgs.python39Packages.feedparser
    pkgs.python39Packages.dateutil
    pkgs.mongodb
    pkgs.redis
    redisbeat # <-- ALSO ADD THIS LINE
  ];
}
```

We've chosen to use redisbeat, as it is small, simple, and uses Redis as a backend. We construct a custom derivation for it using the `buildPythonPackage` function, to which we pass the following information:

- The package's name and version.
- `src`: Where to find the package's source code (in this case, on PyPI, but we could also use GitHub, or a generic URL).
- `propagatedBuildInputs`: The package's dependencies (all of which are available from Nixpkgs).
- `postPatch`: Actions to take before installing the package. For this package, we remove the version specification for the dependency `jsonpickle` in `setup.py`. This will force redisbeat to use the latest version of jsonpickle, which is available from Nixpkgs and, as a bonus, does not contain this critical vulnerability.

You can learn more about using Python with Nixpkgs in this section of the official documentation.

To actually install redisbeat, we must also add it to our `deps` list. Once you've done that, run your repl. Building custom Nix derivations like this one often takes some time, so you may have to wait a while before your repl finishes loading the Nix environment.

While we wait, let's import redisbeat in `lib/tasks.py` and create our `schedule_digest` function. Add the following code to the bottom of `lib/tasks.py`:

```python
from redisbeat.scheduler import RedisScheduler

scheduler = RedisScheduler(app=celery)

def schedule_digest(email, hour, minute):
    scheduler.add(**{
        "name": "digest-" + email,
        "task": "lib.tasks.send_digest_email",
        "kwargs": {"to_address": email},
        "schedule": crontab(minute=minute, hour=hour)
    })
```

This code uses redisbeat's `RedisScheduler` to schedule the execution of our `send_digest_email` task. Note that we've used the task's full path, with `lib` included: this is necessary when scheduling. We've used Celery's `crontab` schedule type, which is well suited to managing tasks that run at a certain time each day. If a task with the same name already exists in the schedule, `scheduler.add` will update it rather than adding a new task. This means our users can change their digest time at will.

Now that our code is in place, we can add a new Celery beat process to `start.sh`. Add the following line just after the line that starts the Celery worker:

```bash
celery -A lib.tasks.celery beat -S redisbeat.RedisScheduler --loglevel=debug &
```

Now run your repl. You can test this functionality out now by scheduling your digest about ten minutes in the future.
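A `crontab(minute=m, hour=h)` schedule asks the beat process to fire once per day at a fixed UTC time. The "when does it fire next?" computation can be sketched in plain Python (an illustration of the schedule semantics, not Celery's internals):

```python
from datetime import datetime, timedelta

def next_run(now, hour, minute):
    """Next daily occurrence of hour:minute (UTC) strictly after `now`."""
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if candidate <= now:
        # Today's slot has already passed, so fire tomorrow instead.
        candidate += timedelta(days=1)
    return candidate
```

For example, a digest scheduled for 13:30 fires later the same day if it is still morning, and tomorrow if 13:30 has already passed.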
If you want to receive regular digests, you will need to enable Always-on in your repl. Also, remember that all times must be specified in the UTC timezone.

## Where next?

We've built a useful multi-component application, but its functionality is fairly rudimentary. If you'd like to keep working on this project, here are some ideas for next steps:

- Set up a custom domain with Mailgun to help keep your digest emails out of spam.
- Feed scraper optimization. Currently, we fetch the whole feed twice when adding a new subscription and have to sleep to avoid rate-limiting. The scraper could be optimized to fetch feed contents only once.
- Intelligent digest caching. If multiple users subscribe to the same feed and schedule their digests for the same time, we will unnecessarily fetch the same content for each one.
- Multiple digests per user. Users could configure different digests with different contents at different times.
- Allow users to schedule digests in their local timezones.
- Styling of both website and email content with CSS.
- A production WSGI and web server to improve the web application's performance, like we used in our previous tutorial on building with Nix.

You can find our repl below:
https://docs.replit.com/tutorials/build-news-digest-app-with-nix
Decision::ACL - Manage and Build Access Control Lists

    use Decision::ACL;
    use Decision::ACL::Rule;
    use Decision::ACL::Constants qw(:rule);

    my $Acl = Decision::ACL->new();

    my $rule = Decision::ACL::Rule->new({
        action => 'allow',
        now    => 0,
        fields => {
            field1 => 'field1val',
            field2 => 'field2val',
            ...
        }
    });
    ...
    $Acl->PushRule($rule);

    my $return_status = $Acl->RunACL({
        field1 => 'testfield1value',
        field2 => 'testfield2value',
        ...
    });

    if ($return_status == ACL_RULE_ALLOW)
    {
        print "testfield1value, testfield2value allowed!\n";
    }

$Acl->Rules() returns an arrayref of the rule objects in this rule list. $Acl->RunACL({ args }) runs the list and returns ACL_RULE_ALLOW or ACL_RULE_DENY.

This module's purpose is to provide an already-implemented ACL logic for programmers. Most of the time, writing access control list scripts is long and boring. This set of modules has all the convenient logic behind access control lists and provides an easy interface to it. It allows you to build custom ACLs, and provides the mechanisms to run an ACL against data.

    perl Makefile.PL
    make
    make test
    make install

The Decision::ACL set of modules is implemented in pure Perl, with very simple behaviour. The main idea behind it is that an ACL object tests each rule in its list against the target data. Each rule, if concerned, applies its action.

This class is simply a list of Decision::ACL::Rule objects, in a particular order. Once the rules have been "pushed" onto the ACL, the RunACL() method will take the arguments defined in the rule objects, execute each rule in order, and collect their return values. The final return value of the RunACL() method is ALLOW only if there is no denying rule and an explicit allowing rule has been encountered. If no rules are concerned by the data, the ACL will deny automatically.

Decision::ACL::Rule implements a basic rule. It holds a list of fields (which will be propagated up to the ACL) and a list of values for each field. It also contains an "action" (ALLOW | DENY).
The logic behind the rule is that when the Control() method is called with arguments, the rule first checks whether it is CONCERNED by the data. If it is, it applies its action (ALLOW | DENY). If not, it returns UNCONCERNED, and the ACL's RunACL() method continues to the next rule.

Here is a description of a basic use of this module set. To create an initial empty rule list, simply use the new() constructor:

    my $Acl = Decision::ACL->new();

This builds an empty Decision::ACL rule list, ready to receive rule objects. You can also directly pass an anonymous array of rule objects to new(), like this:

    my $Acl = Decision::ACL->new([ $rule1, $rule2, $rule3 ]);

The list now needs to be populated with rules so that it can become useful. To do this, you can write yourself a small parser, or get your rules from a database, a DBM file, or any way you want. The only thing you need to know is how to put these in the rule list and manage that rule list. The Decision::ACL module provides multiple methods to deal with the list of rules.

The last step is to run the ACL and ask each rule if it is concerned by the data you pass to it:

    my $return_status = $Acl->RunACL({
        field1 => 'testf1val',
        field2 => 'testd2val',
    });

The arguments passed to RunACL() are checked to see if they are consistent with the fields defined in the rules themselves. When the first rule is "pushed" onto the rule list, Decision::ACL will scan its Fields() and keep a list of them internally. When RunACL() is run, each parameter has to be present, and no unknown parameters can be passed. This is true only if DIE_ON_BAD_PARAMETERS is set to 1 in the class. The same applies to PushRule(): the Fields() of rules pushed after the first rule are checked for inconsistencies. This behaviour is controlled by DIE_ON_MALFORMED_RULES, set either to 1 or 0 in the class.

The return statuses are defined in the Decision::ACL::Constants package under the keyword "rule".
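The RunACL() semantics described above can be summarized in a few lines of pseudo-Perl-in-Python (this is a paraphrase of the documented behaviour, not the module's actual implementation; the "ALL" wildcard handling is borrowed from the CVS example later in this document):

```python
ACL_RULE_ALLOW, ACL_RULE_DENY = "allow", "deny"

def run_acl(rules, data):
    """Evaluate rules in order against `data`, a dict of field values.

    A rule is "concerned" when every one of its fields matches the data
    (ALL acts as a wildcard, as in the CVS example). A "now" rule
    short-circuits with its own action; otherwise the run allows only if
    some rule allowed and none denied, and denies when no rule was
    concerned at all.
    """
    allowed = False
    for rule in rules:
        concerned = all(v == "ALL" or data.get(k) == v
                        for k, v in rule["fields"].items())
        if not concerned:
            continue
        if rule.get("now"):
            return ACL_RULE_ALLOW if rule["action"] == "allow" else ACL_RULE_DENY
        if rule["action"] == "deny":
            return ACL_RULE_DENY
        allowed = True
    return ACL_RULE_ALLOW if allowed else ACL_RULE_DENY
```

Note the default-deny posture: an empty rule list, or data that matches no rule, always denies.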
They are ACL_RULE_ALLOW and ACL_RULE_DENY, so you can test the return value of the run and deal with it accordingly in your application. I suggest you import the constants into your namespace when dealing with this suite:

    use Decision::ACL::Constants qw(:rule);

Rule objects are created using the new() constructor:

    my $rule = Decision::ACL::Rule->new({
        action => 'deny',
        now    => 1,
        fields => {
            fieldname  => 'fieldvalue',
            fieldname2 => 'field2value',
        },
    });

The arguments passed to the new() constructor are as follows: Once the rule object has been created, it can be pushed onto an ACL for execution. This object has mainly four methods to access and modify the object at runtime.

The Fields() method is used to store and retrieve the fields specified for this rule and their associated values. The Now() method returns or sets the value of the "now" parameter of the rule. When a rule is set to act "now", the ACL will stop RunACL() at this rule and directly return the Action() of the rule. Concerned({}) tells whether the rule is concerned by the data that is passed to it. It tests each field's data against the values of the same fields in Fields() and returns 1 or 0/undef. The Control({}) method is what is called by the ACL in RunACL() to get the status of the rule on a certain set of the fields passed. Control() first calls Concerned({}) to see if the rule is concerned by the dataset; if not, Control() exits with a status of ACL_RULE_UNCONCERNED. If it is concerned, the Control() method returns the status matching the Action() of the rule.

OK, this all sounds nice and fun, but let's see how we can really use this in real life. This example is a script I wrote to use as a "precommit" script for CVS. It implements an advanced rule system for CVS repository access. This script gets called every time someone commits to CVS, and gets the username, repository, and file in which the person wants to commit.
The script simply parses a rule file that contains a very simple rule language, creates the rule objects, pushes them onto the ACL, and runs the ACL with the values passed by CVS. It then permits or denies the commit.

    #!/usr/bin/perl
    #
    # commitcheck using Decision::ACL.
    #
    # Copyright (c) 2001 Benoit Beausejour <bbeausej@pobox.com>
    # All rights reserved. This program is free software; you can
    # redistribute it and/or modify it under the same terms as Perl itself.

    use strict;
    use Data::Dumper;

    use Decision::ACL;
    use Decision::ACL::Rule;
    use Decision::ACL::Constants qw(:rule);

    # The user's CVSROOT.
    my $cvsroot = $ENV{'CVSROOT'};
    $cvsroot .= "/";

    # The username of the current user.
    my $username = `id -un`;
    chomp $username;

    # The repository being committed into.
    my $repository = shift;
    $repository =~ s/$cvsroot//g;

    # The file being committed.
    my $module = shift;

    my $rule_file = $cvsroot."CVSROOT/commit_acl";
    open(RULES, $rule_file) || &failed("Can't open $rule_file: $!\n");

    # Create the ACL list object.
    my $ACL = new Decision::ACL();

    my @rules = <RULES>;
    foreach (reverse @rules)
    {
        next if substr($_,0,1) eq '#' || !$_;
        chomp $_;

        my ($rule_base, $rule_spec) = split(/to/i, $_);
        my ($action, $target);
        my $nowflag = 0;
        if($rule_base =~ /now/i)
        {
            ($action, $target) = split(/now/i, $rule_base);
            $nowflag++;
        }
        else
        {
            ($action, $target) = split(/ /i, $rule_base);
        }

        my ($repository, $module) = split(/in/i, $rule_spec);
        $module = '' if not defined $module;

        $action = uc $action;
        $action =~ s/ //g;
        $repository =~ s/ //g;
        $module =~ s/ //g if defined $module;
        $target =~ s/ //g;

        $repository = uc $repository if $repository =~ /^all$/i;
        $module     = uc $module     if $module     =~ /^all$/i;
        $target     = uc $target     if $target     =~ /^all$/i;

        # Create a Decision::ACL::Rule object from the data parsed
        # in the rule file.
        my $rule = new Decision::ACL::Rule({
            now    => $nowflag,
            action => $action,
            fields => {
                repository => $repository,
                component  => $module,
                username   => $target,
            }
        });

        # Push that rule onto the ACL.
        $ACL->PushRule($rule);
    }

    # Run the ACL, get the return value, and give it back to CVS.
    my $return_status = $ACL->RunACL({
        repository => $repository,
        component  => $module,
        username   => $username,
    });

    if($return_status == ACL_RULE_ALLOW)
    {
        exit(0);
    }
    else
    {
        print STDERR "Commit to $repository in module $module DENIED for user $username\n";
        exit(1);
    }
    exit(1);

    sub failed
    {
        my $message = shift;
        print STDERR $message;
        exit(1);
    }

A sample commit_acl rule file:

    deny all to all in all
    allow root to CVSROOT in all
    allow bbeausej to CVSROOT in commit_acl
    allow fred to Decision-ACL in LICENSE

This script is usable; if you find it useful, don't be afraid to use it in your daily CVS work. To use it, simply put this line in your $CVSROOT/CVSROOT/commitinfo file:

    DEFAULT /path/to/commitcheck

Then create your rule file, named $CVSROOT/CVSROOT/commit_acl. I hope this example shows you how simple it is to use Decision::ACL in real-life situations.

This module is evolving rapidly. I am already writing version 1.0 of it, which will contain a RecDescent parser for ACL files with a generic dynamic grammar. So be prepared for the next versions, as a module called Decision::ACL::Parser will be released along with the package. I can see many other things going into this module: generic parsers for specific types of ACL, passing them through the two main classes; many ideas are on the table. If you have an idea that you want to implement, please do, and let me know about it. I'll be glad to integrate your work here if needed.

Benoit "SaKa" Beausejour, <bbeausej@pobox.com>

This module was made possible by the help of individuals:

- #perl (ers) for their help and support.

Copyright (c) 2001 Benoit Beausejour <bbeausej@pobox.com> All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.

perl(1), Solaris::ACL(1),
http://search.cpan.org/dist/Decision-ACL/ACL.pod
best practice - I need use movieclips stored in model in my view

Hello, I have SWFLibsModel, which is a model that contains various movie clips. I also need to use those movie clips in my main view. What is best practice? I tried to put this method on my view mediator, but I don't like this solution. Thank you for advice.

    private function onSWFLoaded(e:DataEvent):void
    {
        view.render(swfLibsModel);
    }

Comments are currently closed for this discussion. You can start a new one.

1. Posted by Ondina D.F. (Support Staff) on 12 Feb, 2015 02:59 PM

Hi Egid,

You are right, passing the model to the view is not nice. Your Model could dispatch a custom event on the shared eventDispatcher, with the movie clips as a payload. The Mediator listens to the event, and passes the payload to the view. So, if DataEvent is the event dispatched by your SWFLibsModel, you just need to pass the payload to the view:

    view.render(e.payload);

where payload (or a name of your choice) is your movieclip or a collection (array) of movieclips.

Depending on the nature of your movie clips, there could be other solutions as well. Are they modules, or just some assets (images, sounds)? How are you loading the movie clips, and where? Take a look at Matan's AssetLoader; it might be what you need.

Ondina

2. Posted by Egid on 17 Feb, 2015 10:08 AM

Hi,

Thank you for your answer. I apologise for my inactivity; I was off for a few days. I use my custom AssetLoader, which loads a *.swf file that contains the various MovieClips I am using. My model looks like this:

    package assets
    {
        import controllers.DataEvent;
        import flash.display.MovieClip;
        import org.robotlegs.mvcs.Actor;
        ...
    }

I am passing this model to the view and calling its method getMovieClip to get any movie clip in the library:

    var mc : MovieClip = swfLibsModel.getMovieClip("LibTiles", "Bush");

It's really handy for me to have my assets in SWF libraries. I was looking at Matan's AssetLoader and am not sure if I should prefer it over my current asset loading.

Egid

3. Posted by Ondina D.F. (Support Staff) on 18 Feb, 2015 12:07 PM

No problem :)

You don't have to use Matan's or anyone else's libraries. It was just a suggestion. As I said in my previous post, if you don't like to access your model in your mediator, you can dispatch a custom event from your model with the movieclip as a payload. For example:

I don't know when you are loading the movieclips, and I also don't know how many mediators need to pass assets to their views. Depending on your use case, there might be many solutions. You could dispatch the event the moment the movieclips are loaded, or you could let the mediator request the assets within its onRegister() method by dispatching an event to trigger a command. DataEvent.SWF_REQUESTED could be mapped to a command:

    commandMap.mapEvent(DataEvent.SWF_REQUESTED, LoadSWFCommand, DataEvent);

When triggered, the Command can access SWFLibsModel, which is injected into the Command. The model can dispatch the event with the requested assets as a payload to the mediator.

Let me know if you need more help with this.

Ondina

Ondina D.F. closed this discussion on 11 May, 2015 11:33 AM.
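The pattern Ondina describes is plain observer plumbing: the model dispatches an event carrying the movie clips as a payload, a mediator listens on the shared dispatcher and forwards the payload to its view, so the view never touches the model. A language-neutral sketch (in Python rather than ActionScript; all class and event names here are illustrative, not Robotlegs API):

```python
class EventDispatcher:
    """Minimal stand-in for Robotlegs' shared event dispatcher."""
    def __init__(self):
        self._listeners = {}

    def add_listener(self, event_type, listener):
        self._listeners.setdefault(event_type, []).append(listener)

    def dispatch(self, event_type, payload=None):
        for listener in self._listeners.get(event_type, []):
            listener(payload)

class View:
    def __init__(self):
        self.rendered = None

    def render(self, clips):
        self.rendered = clips

class Mediator:
    def __init__(self, dispatcher, view):
        self.view = view
        dispatcher.add_listener("SWF_LOADED", self.on_swf_loaded)

    def on_swf_loaded(self, payload):
        self.view.render(payload)  # only the payload reaches the view
```

The model's only job is `dispatcher.dispatch("SWF_LOADED", clips)` once loading finishes; the mediator and view stay decoupled from it.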
https://robotlegs.tenderapp.com/discussions/questions/9384-best-practice-i-need-use-movieclips-stored-in-model-in-my-view
One of the more subtle aspects of converting (n)varchar or (n)text data to XML is the fact that XML is choosy about which characters are permitted, and (n)varchar/(n)text is not. Any T-SQL programmer who runs conversions of this type is likely to run into this issue. Here's a code block that resolves it.

The characters in question are what are commonly called "lower-order ASCII" characters: those below CHAR(32). Of these, only TAB (CHAR(9)), LF (CHAR(10)), and CR (CHAR(13)) are valid within XML. This solution uses trigger code to call a user-defined function to scrub the nvarchar columns, and a loop within the trigger for an ntext column.

Here's the UDF code:

    CREATE

Here's the trigger code:

The trigger code is built to maximize performance, in that the NULLIF tests in the UPDATE statement will only run the (relatively expensive) UDF if the inserted and deleted images of a particular column differ (if they don't differ, we can guarantee that the value has already been scrubbed). The UDF and the loop in the trigger for the ntext SupplementDescription column employ the same basic strategy: looping through the source value, looking for any invalid character and replacing it with a new character (NCHAR(164)), until the last invalid character is found.

This code was developed for a SQL Server 2000 environment. It would function in a SQL Server 2005 environment, but better performance would likely be had with a CLR-based UDF. Note also that if you're interested in translating the reserved characters <>& etc., you can do that with a series of nested REPLACE statements.

Comments:

Thanks! Worked for me when receiving char #x0001.

Instead, you can widen the valid characters: add this to the header of your XML document!
    <?xml version="1.0" encoding="ISO-8859-1"?>

Also remove the following characters with this Python script...

    def clean():
        file = open('C:/where your file is located', 'r')
        myfile = file.readlines()
        file.close()
        file = open('C:\\root\\where you want to save your file\\location', 'w')
        for r in myfile:
            r = r.replace("\r\n", "")
            r = r.replace("\r", "")
            r = r.replace("\n", "")
            r = r.replace("\\r\\n", "")
            r = r.replace("\\r", "")
            r = r.replace("\\n", "")
            r = r.replace("\u0085", "")
            r = r.replace("\u000A", "")
            r = r.replace("\u000B", "")
            r = r.replace("\u000C", "")
            r = r.replace("\u000D", "")
            r = r.replace("\u2028", "")
            r = r.replace("\u2029", "")
            r = r.replace("\\\"", "\\\\\\\"")
            file.write(r)

    # Python 3.2

Allows for a wider range of valid characters.
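For comparison with the T-SQL and Python snippets above, the same scrub (drop the control characters XML 1.0 forbids, keep TAB/LF/CR) can be written as a single regex substitution. This is a sketch, not the blog's original code; the choice of NCHAR(164), the currency sign, as the replacement follows the T-SQL UDF described in the post:

```python
import re

# XML 1.0 forbids control characters below U+0020 except TAB (9),
# LF (10), and CR (13).
INVALID_XML_CHARS = re.compile(r"[\x00-\x08\x0B\x0C\x0E-\x1F]")

def scrub_for_xml(text, replacement="\u00a4"):
    """Replace XML-illegal control characters with a placeholder."""
    return INVALID_XML_CHARS.sub(replacement, text)
```

A regex pass avoids the character-by-character loop the 2000-era T-SQL had to use.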
http://blogs.technet.com/b/wardpond/archive/2005/07/06/a-solution-for-stripping-invalid-xml-characters-from-varchar-text-data-structures.aspx
If you are looking to configure .NET Core custom controls in your Visual Studio toolbox, this article is right for you. I'll guide you through configuring a .NET Core 3.0 custom control in Visual Studio 2019, available as a NuGet package.

Following are the prerequisites for the toolbox configuration:

- Visual Studio 2019 (update 16.3.1 and later)
- .NET Core SDK 3.0

NuGet structure to support the .NET Core 3.0 toolbox

The custom control will be populated in the toolbox only if the NuGet package has the VisualStudioToolsManifest file inside the tools folder. The tools folder will be present parallel to the lib folder inside the NuGet package. Refer to this link for more information about the structure of the VisualStudioToolsManifest file.

For the normal working of the NuGet package, this VisualStudioToolsManifest file isn't necessary. Its only purpose is to configure a custom control in the Visual Studio toolbox. Add this file in the right location during NuGet packaging.

You can find the structure of the VisualStudioToolsManifest file in the following image:

- The Reference attribute points to the name of the assembly.
- The ContainsControls attribute specifies whether there are any controls to be listed in the Visual Studio toolbox for the given assembly.
- VSCategory/BlendCategory is the name for the toolbox category to be displayed in Visual Studio.
- Type is the fully qualified name for the control, including the namespace.
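Putting the attributes described above together, a VisualStudioToolsManifest file might look something like the following. All names here are placeholders for your own assembly and control, and the exact schema is described in the documentation the author links to; this fragment only illustrates where each attribute sits:

```xml
<FileList>
  <File Reference="MyCompany.Controls.dll" ContainsControls="true">
    <ToolboxItems VSCategory="My Controls" BlendCategory="My Controls">
      <Item Type="MyCompany.Controls.MyCustomControl" />
    </ToolboxItems>
  </File>
</FileList>
```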
The Output window will display the NuGet installation details followed by a completion message. - Now, open Visual Studio toolbox to check if the control is displayed in it. The control’s name will be displayed along with its Namespace. Control name (Namespace) Conclusion I hope this blog was helpful in configuring your NuGet package to list custom controls in the Visual Studio toolbox for .NET Core 3.0.!
https://www.syncfusion.com/blogs/post/add-net-core-3-0-custom-control-to-your-visual-studio-toolbox.aspx
BACKREST.WS4
------------

BackRest Hard-disk Backup and Restore Program

(Edited by Emmanuel ROCHE.)

***************************************************************

Stok Software is the author of the BackRest program, which has been specially configured to fulfill the backup requirements for the file system of this Concurrent operating system. This version of BackRest has been licensed for distribution by Digital Research (R) by written agreement with Stok Software, Inc.

CP/M and Digital Research and its logo are registered trademarks of Digital Research Inc. Concurrent is a trademark of Digital Research Inc. BackRest is a trademark of Stok Software, Inc.

***************************************************************

This file describes the operation and use of the Concurrent (TM) hard-disk backup and restore program, BackRest (TM).

WHAT IS BACKREST?
-----------------

BackRest is a file maintenance program that gives you a safe, convenient method for selectively copying your files from Concurrent hard-disk partitions to floppy disks. BackRest's restore facility allows you to specify which files you want restored from backup disks to your hard disk. BackRest identifies each of your backup disks with a unique volume number, and maintains report and directory files that indicate the location and date of backup for each of the files it copies.

You should back up your hard-disk files on a regular basis. Doing so ensures that you have fairly up-to-date copies in case your hard disk is accidentally erased or damaged. BackRest makes it easy to protect your files, because it backs up only those files which have been created or modified since the last backup.

BACKREST FUNCTIONS
------------------

BackRest lets you select which files you want backed up and restored. Files may be selected according to subdirectory, user number, filename, file extension, and hard-disk partition. You can also tell BackRest to delete certain files after they have been backed up (file migration).
See "How to Tell BackRest What You Want."

BackRest accepts either CP/M (R) or DOS media in your source (hard-disk) and destination (floppy-disk) drives. The source drive is the drive from which a file is copied; the destination drive contains the disk on which the copy is placed. BackRest can determine what type of file it is backing up, and requests that you place the appropriately formatted disk in the destination drive.

If a hard-disk file is too large to fit on one backup disk, BackRest can split the file and copy it to two or more disks. BackRest then merges these file parts when asked to restore the original file.

You can restore a file by the date it was backed up. You can restore a particular group of files by giving BackRest an ambiguous file specification. You can also restore bad files automatically. BackRest considers a file "bad" if it is unable to copy the file (due to a source media sector error, for example) to a backup disk. When you select this option, BackRest locates the previously backed-up copy of the file and restores it to your hard disk. See "REST" in "BackRest Commands" for more information on file restoration.

BackRest generates reports of its backup and restore operations. The reports are divided into four categories: Backup, Restore, Hard-disk Statistics, and Errors. "BackRest Reports" describes the form and content of each report section.

Note that you can access BackRest functions through the Backup File(s) Menu in Concurrent's File Manager.

BACKREST FILES
--------------

To perform its backup and restore operations, BackRest needs three files: BACK.CMD, REST.CMD, and CONTROL.BR. These files must be on the disk in the current drive, or on the drive from which you enter a BackRest command.

BACK.CMD and REST.CMD are BackRest command files. They correspond to the BACK and REST commands described in "BackRest Commands."

CONTROL.BR is BackRest's control file. It contains records that control the operation of BACK and REST. CONTROL.BR has already been set up for use on most personal computers that run Concurrent. You can edit the records in CONTROL.BR to suit your particular requirements and computer system. Be sure to make a copy of CONTROL.BR before you modify it. Do not modify the original file from your Concurrent distribution disk.
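The split-and-merge behaviour described above is just fixed-size chunking of a file's bytes across disks. A Python illustration of the idea (obviously not BackRest's own CP/M-era code):

```python
def split_for_backup(data, disk_capacity):
    """Split a file's bytes into backup-disk-sized parts.

    BackRest writes each part to a separate floppy; the parts are
    merged again, in order, on restore.
    """
    return [data[i:i + disk_capacity]
            for i in range(0, len(data), disk_capacity)]

def merge_from_backup(parts):
    """Reassemble the original file from its parts, in order."""
    return b"".join(parts)
```

The last part is simply whatever remains, so it is usually shorter than the disk capacity.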
If CONTROL.BR is not on the disk in the current drive, neither BACK nor REST will perform.

As BackRest processes your commands to back up files, it creates permanent work files on the hard disk. All work files created by BackRest have a .BR file extension. BackRest creates three kinds of work files: a directory file, a path file, and a report file.

The directory file appears on your disk as DIR.BR. BackRest uses DIR.BR to keep track of the files it has copied to specific backup disks. The path file, which appears as PATHS.BR, is used by BackRest to record all the path names encountered during a backup session. The report files, REPORT.BR and RESTRPT.BR, contain information on backup and restore operations.

BackRest also creates files with an extension of BR@. These files are temporary work files. Do not erase, rename, or set any "BR" file to Read-Only, and do not use the BR extension when naming your own files.

HOW BACKREST ORGANIZES YOUR FILES
---------------------------------

Factors that affect a particular session can include: date of backup, source and destination drives, filenames and extensions, user numbers, subdirectories, and file media type. It is important that you understand how BackRest handles such "variables" during its backup and restore operations. This section describes the methods BackRest uses to organize and locate the files it copies to your backup disks.

Disk Volume Numbers
-------------------

BackRest assigns a unique volume number to each disk it uses as a backup disk. Volume numbers are assigned sequentially, beginning with the first time you use BACK.
Before your files are copied from a Concurrent hard-disk partition, BackRest writes a volume number file on the backup disk. Backup disk volume number files have filenames and extensions similar to the example shown below: -C-00001.VOL The second character of the example indicates which hard disk (C) was used as the source drive. The remaining portion of the filename is the backup disk volume number; the example shows the number (00001) for the first backup disk containing files copied from hard-disk drive C. All volume number files have the VOL extension. BackRest keeps track of your backup disks by the volume number it assigns to each one. BackRest asks you to label each of your backup disks with the volume number it displays. When you need to restore a specific file to your hard disk, BackRest requests a specific backup disk by its volume number. BackRest's directory files are its key in matching volume numbers to backed up files. When you request the restoration of a specific file, BackRest searches for that file specification in DIR.BR. DIR.BR indicates the names of all the backed up files and the location of each by volume number. BackRest Directory Files ------------------------ BackRest keeps a record of all the files it has backed up in a file named DIR.BR. DIR.BR contains the name of each file backed up, the date it was backed up, and its backup disk volume number. As BackRest performs subsequent backups, it adds the current date, new filenames, and backup disk volume numbers to DIR.BR. When you request the restoration of a specific file, BackRest searches for its filespec in DIR.BR. If you request a file by backup date, BackRest reads DIR.BR until it finds a date that matches the one you have specified. Using the date as an index, BackRest reads the list of files in DIR.BR. 
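The volume-number naming convention described above is mechanical enough to sketch. The following Python fragment is a modern illustration only (BackRest itself is not written in Python, and the accepted drive-letter range is an assumption); it builds and parses names like -C-00001.VOL:

```python
import re

def volume_filename(source_drive: str, volume: int) -> str:
    # The second character names the hard-disk source drive; the five
    # digits are the sequential backup-disk volume number.
    return "-%s-%05d.VOL" % (source_drive.upper(), volume)

def parse_volume_filename(name: str):
    # Recover (source drive, volume number) from a .VOL filename.
    m = re.fullmatch(r"-([A-Z])-(\d{5})\.VOL", name.upper())
    if m is None:
        raise ValueError("not a BackRest volume file: " + name)
    return m.group(1), int(m.group(2))

# The example from the text: the first backup disk for hard-disk drive C.
assert volume_filename("c", 1) == "-C-00001.VOL"
assert parse_volume_filename("-C-00001.VOL") == ("C", 1)
```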
When BackRest finds the matching filename in DIR.BR, it reads the volume number associated with the filename and asks that you place the backup disk with that volume number in the correct drive. BackRest writes DIR.BR to the disk in the control drive and automatically copies it to the last backup disk used in a backup session. The control drive is the drive you tell BackRest to use as its system work drive. See the CONTROL DRIVE: record in "Control Records." If DIR.BR is removed from the control drive, backup disk volume numbering begins again, starting from 00001. To avoid volume number repetition, never erase DIR.BR from your hard disk. If DIR.BR is accidentally erased, use the COPY command to replace it with a copy of DIR.BR from your last backup disk. Never use any other copy of DIR.BR for this purpose. CP/M Files: User Numbers and Passwords -------------------------------------- BackRest can operate on the CP/M files you have organized by user numbers and protected with passwords. The CONTROL.BR file contains a record you can modify to tell BackRest which user numbers you want backed up. BackRest can then copy files from your CP/M hard-disk partition into the corresponding user numbers on the backup disk. BackRest records every user number it has backed up in the REPORT.BR file. Printed reports indicate CP/M files by drive, user number, and backup disk volume number. The CP/M files you protect with passwords are backed up and restored when you specify the passwords in the CONTROL.BR Exception Records. It is important that you also password-protect CONTROL.BR if it contains passwords for other files from your hard-disk drives. BackRest uses a default password for this purpose. See "Exception Records." Note: BackRest does not password-protect the files it writes to backup disks. If you use BackRest with password-protected files, store your backup disks in a secure area.
DOS Files: Subdirectories and Paths ----------------------------------- DOS subdirectories provide a useful way to separate and organize your files. If you use subdirectories on your DOS hard-disk partition, BackRest can store your files under the same subdirectories on your DOS media backup disks. When your DOS files are restored to your hard disk, BackRest returns them to the same subdirectories from which they were originally copied. The REPORT.BR file records every subdirectory backed up. Printed reports indicate DOS files by drive, backup disk volume number, and path. Subdirectory paths to your files can be passed to BackRest through the File Manager (see the Backup File(s) Menu in Section 2, "File Manager" of the Concurrent PC DOS User's Guide). You can also indicate the subdirectories you want backed up and restored with the PATH: record in "Control Records." SETTING UP BACKREST ------------------- Before you can use BackRest, you must copy three files to your system drive. These files are: BACK.CMD, CONTROL.BR, and REST.CMD. BACK.CMD and REST.CMD are the command files used to back up and restore your hard-disk files; CONTROL.BR is described later in this section. Use the PIP command to copy BACK.CMD, REST.CMD, and CONTROL.BR from your distribution disk to your system disk. See "Drives" in Section 1 of the Concurrent PC DOS User's Guide to determine the system drive for your computer. The following example is for a Personal Computer XT with one floppy-disk drive (A:) and one hard disk divided into two partitions, DOS (C:) and CP/M (D:); in this case, drive D is the system drive. A>PIP D:=BACK.CMD[OV] A>PIP D:=REST.CMD[OV] A>PIP D:=CONTROL.BR[OV] The BackRest control file, CONTROL.BR, contains records that dictate how BackRest performs its backup and restore operations.
It controls how BackRest displays its messages on your screen; whether reports are printed; which user numbers and subdirectories are used; source and destination drive assignments; and which files are backed up, deleted, restored, or ignored. What BackRest Needs to Know --------------------------- The records in CONTROL.BR provide BackRest with information it needs to run its backup and restore operations. If a record that contains information vital to BackRest is missing from the control file, or if a record's information is incorrectly formatted, BackRest displays an error message indicating the missing or incorrect record. Control file records are divided into five categories: 1) Screen Control Records 2) Printer Information Records 3) Report Records 4) Control Records 5) Exception Records Screen Control Records contain codes that enable BackRest to clear your computer's screen and display messages in different colors. Printer Information Records tell BackRest about your printer. Report Records control report content, report headings, and whether reports are printed automatically. Control Records provide important parameters of BackRest's operation, such as source and destination drives. Exception Records tell BackRest which files to treat as exceptions to the general backup procedures stated in the control records. All control file records consist of a descriptor and one or more fields. A record descriptor is a one- or two-word label, ending in a colon, that describes the purpose of the record. For example, DEST DRIVE: is the descriptor for the Control Record that defines the backup disk destination drive. The descriptor is followed by one or more fields. A field defines a value for the record. Screen Control and Printer Information record fields are made up of decimal numbers. The first number for these fields is called a length field. The length field tells BackRest how many numbers make up the second field, called the code field. 
The following example shows the descriptor and fields for the CLEAR: Screen Control Record: CLEAR: 2,27,69 The length field has a value of 2 because the code field is composed of two numbers (27,69). Notice that the numbers for both the length and code fields are delimited by a comma and that spaces may be used freely in CONTROL.BR. BackRest sends the code 27,69 to your computer so that it clears the screen. Fields for other control file records can consist of a "true" or "false" value, a drive specifier, or an actual number to be used as follows: REPORT PRINT: true DEST DRIVE: a USERS: 0,1,2 "How to Tell BackRest What You Want" describes the purpose and specific form of each control file record by category. The CONTROL.BR file is shown in Listing B-1.

Listing B-1. BackRest CONTROL.BR File

* This is the CONTROL.BR file that tells BackRest how to operate
* on your CP/M and DOS files under Concurrent.
* This control file is for a personal computer with a color screen
* and standard (80 column) printer.
* See "Setting Up BackRest" in BACKREST.DOC of the Concurrent
* distribution disk.
* SCREEN CONTROL RECORDS
* The CLEAR: record contains the code used to erase the screen.
* The seven ATTRIBUTE: records determine the colors BackRest will
* use to display its messages.
CLEAR: 2,27,69
ATTRIBUTE: start screen: 9,27,99,0,27,97,3,27,98,2 <-- Green
ATTRIBUTE: leave screen: 3,27,98,7 <-- Grey
ATTRIBUTE: general: 3,27,98,2 <-- Green
ATTRIBUTE: errors: 3,27,98,15 <-- White
ATTRIBUTE: message: 3,27,98,14 <-- Yellow
ATTRIBUTE: data: 3,27,98,3 <-- Cyan
ATTRIBUTE: input: 3,27,98,12 <-- Red
* PRINTER INFORMATION RECORDS
PRINTER INIT: 1,13 <-- Start with a carriage return.
FORMFEED: 1,12 <-- Printer code for a form feed.
LENGTH: 60 <-- Number of lines per page.
WIDTH: 80 <-- Number of columns per page.
* REPORT RECORDS
* Change the "ID:" record field to the heading you want BackRest
* to print on its reports.
REPORT PRINT: true <-- Print report when finished.
SHOW SKIPS: true <-- Report on files not backed up.
ID: Concurrent PC DOS Hard Disk Backup
* CONTROL RECORDS
* The following records control backup and restore operations.
SPLIT: true <-- Divide backup files if required.
BELL REPEAT: false <-- Set to "true" for repeating bell prompt.
DEST DRIVE: a <-- Backup disk drive.
SOURCE: c,d <-- Hard disk drives to be backed up.
CONTROL DRIVE: c <-- BackRest system work drive.
VERIFY: true <-- Verify each file by read-after-write.
REUSE: false <-- Do not reuse backup disks.
ERASE: true <-- Always erase destination disk first.
* Backup and restore the following user numbers on CP/M media:
USERS: 0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15
* EXCEPTION RECORDS
* Least ambiguous exceptions must appear first.
* DO NOT REMOVE THE FIRST TWO EXCEPTIONS.
* Only the DOS files that reside in the subdirectories declared
* by a preceding PATH: record will be affected by an exception
* with "D" in the first field.
* Five fields are mandatory for exception records. These are:
*
* 1   ,2    ,3           ,4                 ,5
* user,drive,process flag,d(elete) or k(eep),filename.extension
*
* Use "D" in the first field for DOS files.
* A sixth field, password, may be added for CP/M files.
PATH: c:\ <-- Subdirectories to backup and restore.
EXC: ?,?,c,k,control?.br <-- Backup control files if modified.
EXC: ?,?,n,k,*.br? <-- Do not backup .BR files.
EXC: ?,?,n,d,*.bak <-- Delete .BAK files.
EXC: ?,?,n,d,*.$$$ <-- Delete .$$$ files.
* End of CONTROL.BR

How to Tell BackRest What You Want ---------------------------------- The control records have already been configured for most personal computers that run Concurrent. However, you might want to change them so that BackRest performs according to your particular requirements. If you do, make a working copy of CONTROL.BR and modify it using DR EDIX or any other text editor or word-processing program. Please read the descriptions of the control file records carefully before you change anything in CONTROL.BR.
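The descriptor-and-fields format of CONTROL.BR records is simple to parse mechanically. The following is a minimal Python sketch, not BackRest's actual code; the comment handling, field stripping, and treatment of the ATTRIBUTE: sub-label are assumptions made for illustration:

```python
def parse_control_record(line: str):
    # Split one CONTROL.BR record into (descriptor, fields).
    # A descriptor is a label ending in a colon; comma-separated
    # fields follow.  Spaces may be used freely, so each field is
    # stripped.  '*' lines are comments; '<--' annotations (as in
    # the listing above) are ignored.  Note: ATTRIBUTE: records
    # carry a second "type:" label that this sketch leaves inside
    # the first field rather than separating out.
    line = line.split("<--")[0].strip()
    if not line or line.startswith("*"):
        return None                       # comment or blank line
    descriptor, _, rest = line.partition(":")
    fields = [f.strip() for f in rest.split(",") if f.strip()]
    return descriptor.strip() + ":", fields

assert parse_control_record("CLEAR: 2,27,69") == ("CLEAR:", ["2", "27", "69"])
assert parse_control_record("DEST DRIVE: a <-- Backup disk drive.") == ("DEST DRIVE:", ["a"])
assert parse_control_record("* CONTROL RECORDS") is None
```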
The description of each record includes what the record does, its form, an explanation of the form, whether the record's presence in the control file is mandatory, and an example. Note that BackRest operations can be accessed through Concurrent's File Manager. See "Backup File(s) Menu" in Section 2 of the Concurrent PC DOS User's Guide. Screen Control Records ---------------------- Two records, CLEAR: and ATTRIBUTE:, control how BackRest uses the screen of your computer. CLEAR: ------ BackRest reads the CLEAR: record to obtain the code for clearing the screen of your computer. Form CLEAR: length,code Explanation The CLEAR: record must contain two fields, the length field and the code field. The length field specifies the number of characters in the code required to clear the screen. The code field contains the decimal values that your computer uses to erase all characters currently being displayed on the screen. The CLEAR: record's presence in CONTROL.BR is optional. If you remove CLEAR: from the control file, BackRest clears the screen by sending it 24 blank lines. Example CLEAR: 2,27,69 This example shows that two decimal numbers (2,) are required for the code (27,69) that clears the screen. This is the correct code for personal computers that run BackRest under Concurrent. ATTRIBUTE: ---------- If your computer has a color monitor, you can use the ATTRIBUTE: record to specify the colors BackRest should use for displaying its messages on your screen. Form ATTRIBUTE: type: length,code Explanation The ATTRIBUTE: record uses three fields, type:, length, and code. There are seven type: fields for ATTRIBUTE: records: 1) start screen: Initialize the screen at the beginning of a backup or restore session. 2) leave screen: Set screen state at the end of a backup or restore session. 3) general: Set BackRest sign-on color. 4) errors: Set color for error messages. 5) message: Set color for BackRest messages. 6) data: Set color for display information. 
7) input: Set color for your input to BackRest. The length field specifies the number of characters required for the color code. The code field is the code for the color your computer uses on the screen. This field consists of a series of decimal numbers, each separated by a comma. Table B-1 shows the ATTRIBUTE: length field and code field values for several colors.

Table B-1. ATTRIBUTE: Color Codes

Color        Length   Code
             Field    Field
----------   ------   --------
Blue         3,       27,98,1
Brown        3,       27,98,6
Cyan         3,       27,98,3
Red          3,       27,98,4
Green        3,       27,98,2
Grey         3,       27,98,7
Light Cyan   3,       27,98,11
Magenta      3,       27,98,13
White        3,       27,98,15
Yellow       3,       27,98,14

ATTRIBUTE: records are optional. When ATTRIBUTE: records are not included in the control file, BackRest uses monochrome screen displays. Example ATTRIBUTE: errors: 3,27,98,4 This ATTRIBUTE: record causes all BackRest error messages to be displayed in red. Printer Information Records --------------------------- Four control file records provide BackRest with information about your printer: PRINTER INIT:, FORMFEED:, LENGTH:, and WIDTH:. PRINTER INIT: ------------- The PRINTER INIT: record sends a start-up command to your printer. Form PRINTER INIT: length,command code Explanation Before BackRest sends its reports to your printer, it reads this record to determine which command should precede printer output. PRINTER INIT: has two fields, length and command code. The length field specifies the number of decimal characters that make up the command code. The command code is a series of numbers that your computer sends to your printer for functions such as compressed print mode, carriage return, or line feed. PRINTER INIT: is an optional record. If you remove PRINTER INIT: from the control file, BackRest uses a default command that causes your printer to perform one carriage return. Example PRINTER INIT: 2,12,13 This PRINTER INIT: record causes your printer to perform a form feed (12) and carriage return (13) before BackRest's reports are printed.
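Concretely, a length,code record spells out an escape sequence in decimal: the code numbers are the raw bytes sent to the screen or printer. The decoding can be sketched in Python (an illustration only, not BackRest's implementation):

```python
def record_to_bytes(code_text: str) -> bytes:
    # Convert a "length,code" field list such as "2,27,69" into the
    # byte string sent to the device.  The first number (the length
    # field) says how many code bytes follow.
    numbers = [int(n) for n in code_text.split(",")]
    length, code = numbers[0], numbers[1:]
    if len(code) != length:
        raise ValueError("length field does not match code field")
    return bytes(code)

# CLEAR: 2,27,69 sends ESC E, the clear-screen sequence.
assert record_to_bytes("2,27,69") == b"\x1bE"
# ATTRIBUTE: errors: 3,27,98,4 sends ESC b 4 to select red text.
assert record_to_bytes("3,27,98,4") == b"\x1bb\x04"
```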
FORMFEED: --------- BackRest uses the FORMFEED: record to begin printing a section of its reports on a new page. Form FORMFEED: length,command code Explanation The FORMFEED: record consists of two fields, length and command code. Both fields are made up of decimal values. The FORMFEED: record is optional. BackRest uses three linefeed commands if this record is not included in CONTROL.BR. Example FORMFEED: 1,12 The length field is 1 because the command code consists of one number. The command code for a printer form feed on most computers that run Concurrent is 12. LENGTH: ------- The LENGTH: record specifies the number of lines to be printed per BackRest report page. Form LENGTH: number of lines Explanation The LENGTH: record has only one field, a decimal value for the number of lines to be printed per report page. LENGTH: is an optional record. If you remove this record from CONTROL.BR, BackRest uses a default value of 60 lines per page. Example LENGTH: 65 This LENGTH: record tells BackRest to print 65 lines per report page. WIDTH: ------ The WIDTH: record tells BackRest the number of columns per page that your printer accommodates. Form WIDTH: number of columns Explanation The only field the WIDTH: record uses is a decimal value for the number of columns your printer allows. The WIDTH: record must be included in the control file. If WIDTH: is missing from CONTROL.BR, BackRest displays the following error message and returns control to Concurrent: Fatal Control File (CONTROL.BR) Error: Incomplete control file Example WIDTH: 80 This record tells your printer to print 80 columns per page, the standard for most personal computer printers. If your printer can print 120 columns per page, you might want to change the WIDTH: field to 120. Report Records -------------- BackRest uses three control file records for its reports. These are: REPORT PRINT:, SHOW SKIPS:, and ID:. 
REPORT PRINT: ------------- Use the REPORT PRINT: record to control whether BackRest automatically prints its reports at the end of backup or restore operations. Form REPORT PRINT: true/false Explanation The REPORT PRINT: field can have either a "true" or "false" value. If the field is "true," BackRest automatically sends its reports to your printer. If you set the REPORT PRINT: field to "false," BackRest does not send its reports to your printer. If there is no printer attached to your computer, set this field to "false." You can tell BackRest to send its reports to your printer at any time by issuing one of the following commands: A>BACK REPORT for the backup report, or A>REST REPORT for the restore report. Please see "BackRest Commands" for more information about these commands. REPORT PRINT: is an optional record. If you remove this record from the control file, BackRest uses "true" for a default field value. Example REPORT PRINT: true This example REPORT PRINT: record causes BackRest to send its reports to the printer automatically. SHOW SKIPS: ----------- The SHOW SKIPS: record determines whether BackRest includes a report of the files excluded from a hard-disk backup. Form SHOW SKIPS: true/false Explanation The SHOW SKIPS: record uses a one-word field of either "true" or "false." If you set the field to "true," BackRest includes the "Files Skipped" section in its backup report (see "Backup Reports"). Because this section of the report can be quite long, you might want to set the SHOW SKIPS: field to "false." BackRest always includes the "Files Skipped" section in a backup report if you specify a complete hard-disk backup as described in "BackRest Commands." SHOW SKIPS: is an optional record. If you remove SHOW SKIPS: from the control file, BackRest uses "true" for a default field value. Example SHOW SKIPS: false The report of files excluded from a hard-disk backup session is not printed unless you specify a complete backup.
ID: --- Use the ID: record to specify the heading BackRest prints on all pages of its reports. Form ID: heading Explanation The field for the ID: record is the heading you want printed on each page of BackRest reports. If you are using an 80-column printer, use a maximum of 30 characters. This limits the heading to a single line. In any case, do not use more than 60 characters. The ID: record must be included in the control file. If ID: is missing from CONTROL.BR, BackRest displays the following error message and then returns control to Concurrent: Fatal Control File (CONTROL.BR) Error: Incomplete control file Example ID: Pacific Grove Dental Clinic This ID: record causes BackRest to print the following heading on each page of its reports: BackRest 2.x Report for Pacific Grove Dental Clinic done on 06/05/84 Control Records --------------- The Control Records contain information that governs important aspects of BackRest's operation. The control records are: SPLIT:, BELL REPEAT:, DEST DRIVE:, SOURCE:, CONTROL DRIVE:, VERIFY:, REUSE:, ERASE:, USERS:, and PATH:. SPLIT: ------ The SPLIT: record controls whether a file too large to fit in the space remaining on a disk is to be divided onto two or more disks during backup. Form SPLIT: true/false Explanation The SPLIT: record field can have either a "true" or "false" value. If the field is "true," BackRest copies as much of the file as will fit in the disk's remaining space; the rest of the file is then copied to the next backup disk. BackRest joins the two files during restoration. A field value of "false" causes BackRest to request a new backup disk if the current disk's remaining space cannot accommodate the file. Note that a value of "true" uses a disk's space completely and requires fewer disks per backup session. SPLIT: is an optional record. BackRest uses "false" as the default field value if you remove SPLIT: from the control file.
BackRest always divides single files that are larger than the capacity of a single blank backup disk despite the SPLIT: field setting. Example SPLIT: true This SPLIT: record causes BackRest to divide files between disks. BELL REPEAT: ------------ The BELL REPEAT: record controls the tone prompt BackRest sounds when a message is displayed and your attention is required. Form BELL REPEAT: true/false Explanation The field for BELL REPEAT: can have either a "true" or "false" value. If the field is "true," BackRest sounds the tone repeatedly. A "false" value causes BackRest to sound the tone just once. The BELL REPEAT: record is optional. If you remove BELL REPEAT: from the control file, BackRest uses "false" for its default field value. Example BELL REPEAT: false BackRest sounds the tone just once when your attention is required. DEST DRIVE: ----------- Use DEST DRIVE: to specify the backup disk destination drive. Form DEST DRIVE: d Explanation The field for DEST DRIVE: is a one-letter designation specifying the backup disk drive. BackRest writes copies of the files to be backed up on the disk contained in this drive. BackRest also reads files from the backup disk in this drive during hard-disk file restoration. The DEST DRIVE: record must be included in the control file. If you remove it, BackRest displays the following error message and then returns control to Concurrent: Fatal Control File (CONTROL.BR) Error: Incomplete control file If you use an invalid drive designation in the DEST DRIVE: field, BackRest displays the following error message before returning control to Concurrent: Fatal Control File (CONTROL.BR) Error: Illegal Disk Drive Specified DEST DRIVE: d <-- Backup disk drive Example DEST DRIVE: a This example shows that BackRest writes to and reads files from floppy disks placed in drive A. The drive you designate in the DEST DRIVE: field depends upon the configuration of your computer system. 
See "Drive Designation" in Section 1 of the Concurrent PC DOS User's Guide. SOURCE: ------- Use the SOURCE: record to specify the hard-disk drives you want BackRest to back up and restore. Form SOURCE: d,d,d,d Explanation The SOURCE: record field is made up of one-letter hard-disk partition designations. The hard-disk partitions specified in the SOURCE: field are backed up to the disk drive specified in the DEST DRIVE: record. The SOURCE: record must be included in the control file. If you remove it, BackRest displays the following error message and then returns control to Concurrent: Fatal Control File (CONTROL.BR) Error: Incomplete control file BackRest ignores a SOURCE: drive designation if it is also used in the DEST DRIVE: record. In that case, BackRest continues processing any valid drive designations remaining in the SOURCE: field. When BackRest finishes its backup operations, it prints the following message in its Error Report: Warning! Source and destination drive d: were the same, skipped it as source. When you specify invalid drive designations in the SOURCE: record field, BackRest displays the following error message before returning control to Concurrent: Fatal Control File (CONTROL.BR) Error: Illegal Disk Drive Specified SOURCE: d,d,d <-- Backup disk drive If you change the SOURCE: field, BackRest performs a full hard-disk backup during the next backup session. Example SOURCE: c,d This example shows that files will be read from hard-disk partitions C and D for backup. The drives you specify in the SOURCE: record depend on the configuration of your computer system. See "Drive Designation" in Section 1 of the Concurrent PC DOS User's Guide. CONTROL DRIVE: -------------- Use the CONTROL DRIVE: record to tell BackRest which drive it should use as its work drive.
Form CONTROL DRIVE: d Explanation The field for the CONTROL DRIVE: record is a one-letter designation specifying the BackRest system work drive; this is the drive on which BackRest stores its DIR.BR, PATHS.BR, REPORT.BR, RESTRPT.BR, and temporary work files. The CONTROL DRIVE: record must be included in the control file. If it is removed, BackRest displays the following error message and then returns control to Concurrent: Fatal Control File (CONTROL.BR) Error: Incomplete control file When an invalid drive designation is used in the CONTROL DRIVE: record field, BackRest displays the following error message before returning control to Concurrent: Fatal Control File (CONTROL.BR) Error: Illegal Disk Drive Specified CONTROL: d <-- Backup system work drive. You cannot specify the same drive in the CONTROL DRIVE: and DEST DRIVE: fields. If you do, BackRest displays the following error message and then returns control to Concurrent: Fatal Error!! Control drive matches Destination for Drive d:, Aborted backup. Example CONTROL DRIVE: c This example Control Record tells BackRest to use drive C as its system work drive. The drive you assign as BackRest's system work drive depends on the configuration of your computer. See "Drive Designation" in Section 1 of the Concurrent PC DOS User's Guide. VERIFY: ------- Use the VERIFY: record to control whether BackRest checks the backup files against the original source files for an exact match. Form VERIFY: true/false Explanation The VERIFY: record field can be either "true" or "false." If the field is "true," BackRest performs a read-after-write operation to compare the backup file against the original source file. If you set the VERIFY: field to "false," BackRest will not verify its backup files. The VERIFY: record is optional. When VERIFY: is not included in the control file, BackRest uses "true" for its default field value.
Example VERIFY: true This VERIFY: record tells BackRest to check every backup file against its source file for an exact match. REUSE: ------ The REUSE: record specifies whether you want BackRest to reuse backup disks. Form REUSE: true/false[,volume number] Explanation REUSE: has two fields. The first field can be set to "true" or "false." If you set the field to "true," BackRest accepts a backup disk that contains previously backed up files. If the field is "false," BackRest does not accept a disk that contains previously backed up files. The second field for REUSE: is the decimal value you want BackRest to use as the volume number for the first reused backup disk. This field is optional; BackRest will only use it if the first field is "true." REUSE: is an optional record. If REUSE: is not contained in the control file, BackRest uses "false" for its default field value. Example REUSE: true,99 BackRest accepts a disk that contains previously backed up files in response to its prompt for you to insert a new disk. BackRest labels the disks used in the current backup session beginning with volume number 99. BackRest assigns the same volume number (99) to the first reused disk in subsequent backup sessions. Note: If you set REUSE: to "true," set the ERASE: record to "false." This will prevent BackRest from erasing existing backup files on the disk to be reused. See the description of ERASE: that follows. ERASE: ------ Use the ERASE: record to tell BackRest if it is to erase destination disks before writing backup files to them. Form ERASE: true/false Explanation The ERASE: record field can be either "true" or "false." If the field is "true," BackRest erases all the files on the destination disks that reside in the user numbers and subdirectory paths declared in the USERS: and PATH: records. Note, however, that BackRest does not erase any previously backed up files from these user numbers or subdirectories if the REUSE: record field is "false."
In that case, BackRest prompts you for a new destination disk. If you want BackRest to use the remaining space on a backup disk without erasing any existing backup files, set ERASE: to "false" and REUSE: to "true." Note: If you set ERASE: to "true" and REUSE: to "true," BackRest erases disks before reusing them. Example ERASE: true This record causes BackRest to erase any files in the user numbers or subdirectories specified in the USERS: and PATH: records from backup destination disks before it copies any files. If a disk contains previously backed up files and the REUSE: record is "false," BackRest prompts you for another backup disk. USERS: ------ The USERS: record tells BackRest which CP/M user numbers to back up and restore. Form USERS: n,n,n,. . . Explanation The field for the USERS: record consists of the user numbers, separated by commas, to be backed up and restored. User numbers 0 - 15 may be specified in the USERS: field. Your backup sessions take less time if you limit this field to only those user numbers storing your hard-disk files. USERS: is an optional record. If you remove it from the control file, BackRest uses user number 0 as its default. If you change the USERS: field, BackRest performs a full hard-disk backup during the next backup session. Example USERS: 0,1,4 This USERS: record causes BackRest to search through user numbers 0, 1, and 4 for files to be backed up and restored. PATH: ----- Use PATH: to specify the DOS subdirectories that contain files you want BackRest to back up and restore. Form PATH: d:\1st directory\2nd directory\3rd directory\. . . Explanation The PATH: record field consists of a source drive designation (d:), the DOS root directory, and the hierarchical sequence of directories that leads to the subdirectory containing the files to be backed up and restored. Note the required colon after the drive designation and the use of backslashes.
Note: If a subdirectory specified in a PATH: record does not exist, BackRest ignores the record. When the same subdirectory path exists on two or more drives and you want the files in each to be backed up, the control file must include separate PATH: records for each drive. PATH: records affect only DOS files. DOS files are indicated by EXC: records that contain "D" in the first field. Each PATH: record specifies only the paths to the files declared in the EXC: records that follow it. The PATH: and EXC: records must follow all other control file records. The PATH: record is optional. If it is removed from the control file, BackRest backs up only the files in the current directory. If you are using BackRest from Concurrent's File Manager, you are not required to specify subdirectories in the PATH: record. BackRest can accept paths to your DOS files from the File Manager (see the Backup File(s) Menu in Section 2, "File Manager" of the Concurrent PC DOS User's Guide). Example PATH: C:\terry\kit\sales This PATH: record tells BackRest to back up the DOS files contained in subdirectory "SALES" on source drive C as specified in any EXC: records that may immediately follow. This includes only those EXC: records which use "D" in the first field. The subdirectory path declared in this record will not be used for EXC: records that come after another PATH: record or do not contain "D" in the first field. Exception Records ----------------- Exception Records tell BackRest which files it should treat as "exceptions" to the more general procedures governed by the control records. Without Exception Records, BackRest backs up every hard-disk file as declared in the SOURCE:, USERS:, and PATH: records. An Exception Record's descriptor is EXC:. EXC: ---- Use EXC: to tell BackRest the backup process and disposition you require for specific files. Form EXC: user/D,d,process,disposition,filename.ext,[password] Explanation Exception Records have six fields separated by commas.
The first field can be a user number or question mark (?) for CP/M files, or "D" for DOS files. A question mark indicates all user numbers are included. If you want your DOS files backed up according to a preceding PATH: record, the first field must contain a "D." The second field is the source drive containing the files to be backed up. You can use a question mark (?) in this field to indicate any valid source drive specified in the SOURCE: record. If the source drive field of an EXC: record specifying DOS files does not correspond to the source drive designation for the root directory in the preceding PATH: record, BackRest displays the following error message before returning control to Concurrent: Fatal Control File (CONTROL.BR) Error: Drive differs from previous "PATH:" EXC: D,c,a,k,*.TXT <-- Always Backup DOS .TXT files An EXC: record's third field is a single letter representing the backup process you want BackRest to perform on the files specified in the EXC: record. This field can be set to one of the following: o "A" for Always backup the specified files o "C" for Conditionally backup the specified files o "N" for Never backup the specified files Use "C" when you want BackRest to back up the files specified in the EXC: record only if they have been modified since the last backup session. The fourth field is a single letter representing the disposition of the specified files. Use one of the following letters: o "D" to Delete the files from the source drive. If "A" or "C" is in the third (process) field, BackRest deletes the files after they have been successfully backed up on a destination disk. o "K" to Keep the files on the source drive after backup. The fifth field specifies the files the EXC: record acts upon. If you use wildcard characters (? and * ) for the file specification in this field, BackRest searches the files on the source drive for matching characters in the filenames and extensions. 
Note: You must place EXC: records that use wildcard characters in sequence, least ambiguous records first. EXC: records that contain explicit file specifications must appear in CONTROL.BR before those that use any form of ambiguous file reference. The sixth field, the password, is optional. Enter the passwords for any password-protected CP/M files you want BackRest to back up. If you use this field, the control file will contain the passwords for your CP/M files. To protect the control file itself, you can set "TSERKCAB" as a password for CONTROL.BR as shown below: A>SET D:CONTROL.BR[PASSWORD=TSERKCAB] where "D:" is the drive containing the CONTROL.BR file. If you use another password for CONTROL.BR, you must set the default to the same password so that BackRest can access CONTROL.BR. For example, if your password is "xyz," do the following: A>SET D:[DEFAULT=XYZ] Note: BackRest does not password-protect the backup files it creates. If you use BackRest to back up password-protected files, store your backup disks in a secure area. Examples EXC: ?,?,n,d,*.$$$ This example affects all user numbers (first "?") on all source drives (second "?"). It tells BackRest never to back up (n) temporary files (*.$$$) and to delete (d) these files from the source drive. EXC: 0,c,c,k,*.tex,eliot This example marks all the files in user number 0 on source drive C with an extension of TEX and a password of "ELIOT" as exceptions. It tells BackRest to back up these files only if they have been modified since the last backup and to keep the original versions on source drive C. PATH: C:\Eliot EXC: D,c,a,k,SALES.* This example specifies all DOS files (D) on drive C (c) with a filename of SALES. It tells BackRest to always back up these files (a) and to keep (k) their original versions on drive C. Use this EXC: record format for DOS files you want BackRest to back up from a subdirectory declared by a preceding PATH: record (in this example, Eliot).
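Putting the record types from this section together, a small CONTROL.BR might look like the sketch below. Only records whose forms appear in this section are shown, and the values are illustrative rather than a recommended configuration; note that the EXC: records come last, with the DOS EXC: record placed after the PATH: record it depends on.

```
ERASE: true
REUSE: true
USERS: 0,1,4
EXC: ?,?,n,d,*.$$$
PATH: C:\terry\kit\sales
EXC: D,c,a,k,*.TXT
```

This sketch tells BackRest to erase and reuse backup disks, scan CP/M user numbers 0, 1, and 4, never back up (and delete) temporary *.$$$ files in any user number, and always back up DOS .TXT files from the SALES subdirectory while keeping the originals.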
BACKREST COMMANDS ----------------- This section describes the use of the BACK and REST commands. BACK Command ------------ The BACK command allows you to specify how your hard-disk files are copied to backup disks. The forms of the BACK command are: BACK BACK FULL BACK CONTROL=n BACK REPORT If you have previously copied BACK.CMD to your system drive, you can enter any of the BACK commands from any drive. BackRest does require, however, that the current drive contain the CONTROL.BR file. Each of the BACK commands is described below. BACK -- Routine Hard-disk Backup -------------------------------- Use this form of the BACK command the first time you back up your hard-disk files and afterwards for routine hard-disk backup sessions. Subsequent use of this command will cause BackRest to back up only new files or those which have been modified since the last backup session. This command causes hard-disk backup to occur according to the default control file records in the CONTROL.BR file. Unless you have previously set the system date (see DATE in Section 8 of the Concurrent PC DOS User's Guide), BACK asks you to enter the correct date: Please enter today's date correctly as MM/DD/YY ==> You can optionally enter the correct date as part of the BACK command line: A>BACK MM/DD/YY Here, BACK accepts the date you type at the command line and does not prompt you for it. Once BACK has the correct date, BackRest signs on and indicates which drive and user number or subdirectory path will be backed up according to the SOURCE: record in CONTROL.BR: BackRest (tm) for Concurrent (tm) PC-DOS Hard-disk Backup Facility - Version 2.01 Backing up drive x:, user n After indicating which drive and user number or path is to be backed up, BACK prompts you to insert a backup disk in the destination drive as follows: Please insert a new CP/M Media disk in drive A: Touch RETURN key when ready ==>_ If the source drive is a DOS partition, the above prompt shows "DOS" in place of "CP/M."
When you have inserted a properly formatted disk of the indicated media and pressed the Enter key, BACK begins reading the files to be backed up from the source drive and writing them to the backup disk in the destination drive. This is indicated by the messages: Backing up drive x:, user n Reading filename ext and Backing up drive x:, user n Writing filename ext When BACK has finished writing a file to the backup disk, it verifies that the backup file is the same as the source file. BACK reports that it is checking a file by displaying the following message: Backing up drive x:, user n Verifying filename ext When a backup disk is full, BACK displays the disk's unique volume number so that you can label the disk appropriately. BACK asks for more backup disks when it requires them as follows: Please remove the disk in drive A: and place a label on it indicating volume:x then insert a new CP/M Media disk. Touch RETURN key when ready ==>_ When BackRest has finished its backup operations, it displays the following message before sending its report to your printer: Printing report, please standby... End of BackRest processing If the REPORT PRINT: record field is "false," BackRest displays the following message at the end of its backup operations: End of BackRest processing If you have modified the SOURCE: or USERS: record fields in the CONTROL.BR file after BACK has used it to back up your hard-disk files, BackRest displays the following message on your screen: I will obey the new parameters you placed in the CONTROL.BR file with a FULL backup this time. Is that all right? (Y or N) ===> This message means that BackRest will back up every file (as defined in CONTROL.BR) and not just those which have been modified or created since BACK was last used. If you enter "N" in response to this message, BackRest returns control to Concurrent. You can return control to Concurrent and stop BackRest from processing a BACK command by typing "C" while holding down the Ctrl key.
When you cancel BACK processing, the program responds by displaying: Do you want to print the report (Y/N) ==>_ If you type "Y," BackRest sends its report to your printer. If you enter "N," BackRest displays: User abort of backup process and returns control to Concurrent. If you abort backup processing, you must do a full backup the next time. BACK FULL -- Complete Hard-disk Backup -------------------------------------- Use this form of the BACK command to cause BackRest to perform a complete hard-disk backup. BACK FULL forces BackRest to ignore its references to previously backed up files. All hard-disk files that satisfy the parameters in CONTROL.BR are backed up. If BackRest cannot locate its DIR.BR file on the control drive, backup disk numbering begins again from 00001. BACK FULL is equivalent to using the BACK command form for a first backup. You can include the date in the BACK FULL command line in one of two ways: A>BACK FULL MM/DD/YY or A>BACK MM/DD/YY FULL The BACK FULL command causes BackRest to over-write its DIR.BR file. This means that BackRest loses any reference to backup operations it has performed prior to a complete hard-disk backup initiated by BACK FULL. In all other aspects, BACK FULL operates just like BACK. BACK CONTROL=n -- Specialized Hard-disk Backup ---------------------------------------------- The BACK CONTROL=n command tells BackRest to take its instructions from the records in a special control file. "n" indicates any character "A" through "Z" or "0" (zero) through "9." Special control files are actually modified copies of CONTROL.BR. You can create a special control file by making a copy of CONTROL.BR (use COPY), giving it a new name, and modifying its control records. Special control files are designated by an extra character in the filename. For example, you can use CONTROLI.BR to identify a control file whose records you have changed to control the backup and restoration of inventory files.
The following command would cause BackRest to back up hard-disk files according to the records in CONTROLI.BR: BACK CONTROL=I -------------- All files and backup disks used with this form of the BACK command are identified with the character used to designate the special control file. For example, "BACK CONTROL=I" would cause the first backup disk volume number to be "I-1." As with the other BACK commands, you can include the date in the BACK CONTROL=n command line as follows: A>BACK MM/DD/YY CONTROL=n or A>BACK CONTROL=n MM/DD/YY The BACK CONTROL=n command displays the same messages as the BACK command to request the date and backup disks and to notify you of its current operation. BACK REPORT -- Print Backup Report ---------------------------------- The BACK REPORT command sends a report of backup operations to your printer. BackRest prints its backup reports automatically unless the REPORT PRINT: record field in CONTROL.BR is "false." Use this command to print a record of the previous backup operation. See "BackRest Reports." REST Commands ------------- The REST commands allow you to restore files you previously backed up from your hard-disk partitions with the BACK commands. There are three forms of the REST command: REST REST CONTROL=n REST REPORT If you have copied REST.CMD to your system drive as described in "Setting Up BackRest," you can enter any of the REST commands from any drive. BackRest does require, however, that the current drive (the drive from which you enter the command) contain the CONTROL.BR file. Each of the REST commands is described below. REST -- Routine Hard-disk File Restoration ------------------------------------------ Use this form of the REST command to restore your hard-disk files according to the records in CONTROL.BR. When you type: A>REST and press the Enter key, BackRest responds by displaying its Restore Facility Menu.
By offering you options and prompting you for information, the Restore Facility Menu allows you to specify which files you want restored to your hard disk. It gives you the following options: Restore. . 1 - Bad Files Automatically 2 - Other Files [RETURN KEY] - End If you type "1," REST reads the REPORT.BR file created by BACK to determine which files were unusable at the time of the last backup. REST then looks for the backup disk volume that contains the most recently backed up version of the file. When REST has located the last copy of the file that was known to be usable, it prompts you to insert the disk with that specific backup disk volume number in the proper drive: Please insert in drive d: disk volume: xx: Touch RETURN key when ready ==>_ If you insert the wrong backup disk in response to this prompt, REST displays Wrong volume. . . and requests the correct volume number again. If BACK did not encounter any "bad" files during the last backup operation, REST displays the following message in response to your selection of the first option: There are no bad files to restore If you select the second option from the Restore Facility Menu by typing "2," REST prompts you for the source drive to be restored: Enter drive to restore [A-P] or [RETURN] for any ==>_ Reply to this prompt by typing one of the single-letter drive designations used in the SOURCE: control file record. Press the Enter key to indicate that you want REST to use any drive from the SOURCE: record to restore your files by user number or path, filename, and date of backup. REST prompts you next for the user number or subdirectory path that you want restored: Enter user to restore [0-15] or [RETURN] for any ==>_ Enter path to restore or [RETURN] for any ==>_ Type any number in the range of 0 to 15 to specify the user number that contains the files you want restored. Specify the DOS subdirectory according to the requirements discussed in the description of the PATH: record. 
If you press the Enter key, your files are restored by filename and/or date of backup. REST asks you which files you want restored by displaying: Enter file name to restore (may be wildcard) ==>_ You can enter the name of a single file or use wildcard characters (? and *) to specify a group of files. For example, "*.txt" would tell REST to restore all files with an extension of TXT. Enter "*.*" to indicate every file previously backed up (see "Wildcards" in Lesson 2 of Getting Started with Concurrent PC DOS). REST's last prompt asks you to specify the date of backup for the files you want restored: Enter restore date as MM/DD/YY or [RETURN] for latest ==>_ You can request a specific version of a file according to the date of backup or press the Enter key to tell REST that you want the most recently backed up copies. The date of backup is shown in the heading of the backup reports. When REST finds the volume number of the backup disk that contains the files you have indicated, it asks you to insert that disk in the proper drive: Please insert in drive d: disk volume: xx Touch RETURN key when ready ==>_ If REST cannot locate any files that correspond to your specifications, it displays the following message and returns you to the Restore Facility Menu: Your request did not match any files backed up After REST has restored all the files that match your specifications, it sends the restore report to your printer. REST notifies you of this with the message: Printing report, please standby . . . When REST has finished the report, it displays End of BackRest processing and returns control to Concurrent. Note that REST does not print its report if the REPORT PRINT: record field in the control file is set to "false." You can cancel restore processing at any time by holding down the Ctrl key and typing "C". This causes REST to display the following message: Do you want to print the report (Y/N)? 
==>_ If you type "Y," REST sends its report to your printer and then returns control to Concurrent. Entering a response of "N" causes REST to display this message immediately before returning control to Concurrent: User Abort of restore process REST CONTROL=n -- Specialized File Restoration ---------------------------------------------- This form of the REST command tells BackRest to use the records in a special control file. "n" indicates any character "A" through "Z" or "0" (zero) through "9." Special control files are described under the "BACK CONTROL=n" command above. REST REPORT -- Print Restore Report ----------------------------------- Use this command to print a report of the previous restore operation by typing: A>REST REPORT and then pressing the Enter key. BackRest displays the following message before sending the report to your printer: Printing report, please standby . . . When the report has been printed, REST displays End of BackRest processing and returns control to Concurrent. If the REPORT PRINT: record field in the control file is set to "false," REST does not print its report in response to this command. If you have not previously run REST, the report consists of only one line: File RESTRPT.BR cannot be found. Restoring Password-protected Files ---------------------------------- REST will not over-write password-protected files. You must therefore remove password protection from CP/M files on your hard disk before restoring other versions of the files with REST. Reset the password protection after the new versions of the files have been restored. REST restores a file with a temporary extension (.$$$) when an existing version of the file is still password-protected. If this occurs, erase the original file and rename the newly restored, temporary version. Be sure to reinstate password protection for the renamed file. BACKREST REPORTS ---------------- BackRest sends its reports to your printer when its backup and restore operations are completed. 
BackRest reports provide you with a permanent reference of previous backup and restore operations, hard-disk statistics, and any errors the program encountered during its processing. All reports are formatted according to the Report Records in BackRest's control file. Backup Report ------------- Backup reports are made up of four sections: files backed up, files skipped, files deleted, and bad files. BackRest prints the heading contained in the field of the ID: record on each page of a backup report. The first section of a backup report lists every file copied during the previous backup session. Backed up files are presented according to source drive, backup disk volume number, and user number (CP/M files) or subdirectory path (DOS files). BackRest also indicates the disposition of every file by printing "D" or "K" after each filename according to the fourth field of the control file EXC: records. Remember that "D" shows that BackRest deleted the file after it created a backup copy; "K" tells you that the source file was kept on the source drive after BackRest copied it to a backup disk. BackRest uses the letter "S" to indicate a split file. BackRest splits any single file too large to fit on one blank backup disk. If the SPLIT: record field is "true," BackRest also splits files so that a disk's space is used completely. The format of this section of a backup report is shown below. BackRest 2.x Report for Pacific Grove Dental Clinic done on 06/05/84 Files Backed Up Drive D User 0 Backup Volume 00003 CONSOLE .BAS K DELTA .BAT K DISPLAY .CMD K DONE .BAT K Drive D User 1 Backup Volume 00003 AUDIT .DAT D BATCH .CMD K CCPM .SYS S Figure 1. CP/M Backup Report DOS files listed in a backup report are shown according to subdirectory path: Drive C Backup Volume 00004 Path:\ELIOT APRIL .GL K AUGUST .GL K FEBR .GL K JAN .GL K Figure 2. DOS Backup Report The "Files Skipped" section of a backup report is included only if the SHOW SKIPS: record field is "true." 
This section of a backup report lists all the files not backed up according to the EXC: records in the control file. This section of the report also uses the "D" and "K" disposition indicators. The third section of a backup report, "Files Deleted," provides a breakdown of all the files deleted during the last backup session. Only the "D" disposition indicator appears in this section of the report. The last section of a backup report lists all the files that BackRest could not read during a backup session. These "bad files" are not deleted. BackRest does not obey an EXC: record if it would cause the program to erase an unreadable file. See "The REST Command" for information on using BackRest to restore "bad files." Restore Report -------------- BackRest prints a report of the files it has restored as part of REST processing. An example restore report is shown below. (Note that for CP/M files user number is indicated instead of path.) BackRest 2.x Report for Pacific Grove Dental Clinic done on 06/05/84 Files Restored Drive C Backup Volume 00001 Path:\ CTYPE .H CURRENT .TXT SAMPLE .C Figure 3. DOS Restore Report Hard-disk Statistics Report --------------------------- The hard-disk statistics report shows the amount of disk space and number of files in each user number (CP/M) or path (DOS) as specified in the control file Exception Records or through Concurrent's File Manager. This report also shows the number of files backed up in each user number or subdirectory, the total number of files backed up on a disk, the amount of hard-disk space available, and the total number of directory areas used for CP/M files. BackRest 2.x Report for Pacific Grove Dental Clinic done on 06/05/84 Statistics Report Date of last backup: 06/04/84 Drive D: User 0: 1043K in 102 files. 4 files backed up. Drive D: User 1: 159K in 8 files. 4 files backed up. Drive D: Total of 8 files backed up out of 110. Drive D: 3293K available. 324 directory areas used. Figure 4.
CP/M Hard-disk Statistics Report BackRest 2.x Report for Pacific Grove Dental Clinic done on 06/05/84 Statistics Report Date of last backup: 06/04/84 Path:\ Drive C: 1254K in 108 files. 10 files backed up. Path:\ELIOT Drive C: 253K in 39 files. 4 files backed up. Drive C: Total of 14 files backed up out of 147. Drive C: 3909K available. Figure 5. DOS Hard-disk Statistics Report Error Report ------------ This report lists any errors BackRest encountered during backup and restore operations. BackRest 2.x Report for Pacific Grove Dental Clinic done on 06/05/84 Error Report Unresolved Error Source APRIL.AP on Drive D: User 0, backup retried on new disk. 1 wrong volumes inserted by operator. Figure 6. CP/M Error Report This example shows that BACK encountered an error it could not resolve while backing up the file APRIL.AP from user number 0, drive D. The error message indicates that BACK copied the file to another backup disk. The second error occurred when the user inserted the incorrect backup disk during a restore operation. In this case, REST would have prompted the user for the proper disk. Example Error Report Messages ----------------------------- Warning ! Source and destination drive X: were the same, skipped it as source. This message means the same drive was specified as both a source and destination drive in the SOURCE: and DEST DRIVE: control file records. Verify error on file xxxxxxxx.xxx on Drive X: backup retried on new disk. This means the backup copy of the indicated file did not exactly match the source file. BackRest attempted another copy on a different backup disk. Destination Backup Drive X: has a Sector Error backup retried on new disk. A media error occurred while BackRest was writing a backup file. BackRest attempted another copy on a different backup disk. Source Disk Sector Error in File xxxxxxxx.xxx on Drive X: skipped file. A media error occurred while BackRest was reading a source file. The file was not copied to a backup disk.
write(2) write(2) NAME write, writev, pwrite - write on a file SYNOPSIS #include <unistd.h> ssize_t write(int fildes, const void *buf, size_t nbyte); ssize_t pwrite(int fildes, const void *buf, size_t nbyte, off_t offset); #include <sys/uio.h> ssize_t writev(int fildes, const struct iovec *iov, int iovcnt); DESCRIPTION For ordinary files, if the O_DSYNC file status flag is set, the write does not return until both the file data and the file attributes required to retrieve the data are physically updated. If the O_SYNC flag is set, the behavior is identical to that of O_DSYNC, with the addition that all file attributes changed by the write operation, including access time, modification time and status change time, are also physically updated before returning to the calling process. For block special files, if the O_DSYNC or the O_SYNC: + If O_NDELAY or O_NONBLOCK is set, the write returns -1 and sets errno to [EAGAIN]. Hewlett-Packard Company - 1 - HP-UX 11i Version 2: August 2003 + If O_NDELAY and O_NONBLOCK are clear, the write does not complete until the blocking record lock is removed. + The system-dependent maximum number of bytes that a pipe or FIFO can store is PIPSIZ as defined in <sys/inode.h>. + The minimum value of PIPSIZ on any HP-UX system is 8192. When writing a pipe with the O_NDELAY or O_NONBLOCK file status flag set, the following apply: + If nbyte is less than or equal to PIPSIZ and sufficient room exists in the pipe or FIFO, the write() succeeds and returns the number of bytes written.
+ If nbyte is greater than PIPSIZ, and some room exists in the pipe or FIFO, as much data as fits in the pipe or FIFO is written, and write() returns the number of bytes actually written, an amount less than the number of bytes requested. When writing a pipe and the O_NDELAY and O_NONBLOCK file status flags are clear, the write() always executes correctly (blocking as necessary), and returns the number of bytes written. When attempting to write to a file descriptor (other than a pipe or FIFO) that supports non-blocking writes and cannot accept the data immediately, the following apply: + If the O_NONBLOCK flag is clear, write() will block until the data can be accepted. For character special devices, if the stopio() call was used on the same device after it was opened, write() returns -1, sets errno to [EBADF], and issues the SIGHUP signal to the process. write() also clears the potential and granted privilege vectors on the file. For regular files, the following apply: + If the write is performed by any user other than the owner or a user who has appropriate privileges, write() clears the set-user-ID, set-group-ID, and sticky bits. If the write is performed by the owner or a user who has appropriate privileges, write() does not clear the set-user-ID, set-group-ID, and sticky bits. For writev(), the iovcnt argument may not exceed {IOV_MAX}, as defined in <limits.h>. Each iovec entry specifies the base address and length of an area in memory from which data should be written. The writev() function will always write a complete area before proceeding to the next. The iovec structure is defined in /usr/include/sys/uio.h. RETURN VALUE A write to a STREAMS file may fail if an error message has been received at the STREAM head. In this case, errno is set to the value included in the error message.
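The non-blocking pipe rules above can be observed directly. The sketch below (the function name is illustrative, not from this manual, and it assumes a POSIX system) fills a pipe and then shows that a further write with O_NONBLOCK set fails with [EAGAIN] instead of blocking; with O_NDELAY the manual says 0 would be returned instead.

```c
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Fill a pipe, then confirm that a non-blocking write fails with
   EAGAIN rather than blocking. Returns 1 if EAGAIN was observed. */
int nonblock_pipe_demo(void)
{
    int fds[2];
    char byte = 'x';

    if (pipe(fds) != 0)
        return 0;

    /* Set O_NONBLOCK on the write end, preserving existing flags. */
    int fl = fcntl(fds[1], F_GETFL);
    fcntl(fds[1], F_SETFL, fl | O_NONBLOCK);

    /* Write one byte at a time until the pipe's buffer is full. */
    while (write(fds[1], &byte, 1) == 1)
        ;
    int saw_eagain = (errno == EAGAIN);

    close(fds[0]);
    close(fds[1]);
    return saw_eagain;
}
```

The loop's final write() returns -1 with errno set to [EAGAIN], matching the behavior documented for a full pipe or FIFO.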
ERRORS Under the following conditions, write(), pwrite() and writev() fail and set errno to: [EAGAIN] The O_NONBLOCK flag was set for the file descriptor and the process was delayed in the write() operation. [EAGAIN] Enforcement-mode file and record locking was set, O_NDELAY was set, and there was a blocking record lock. [EBADF] The fildes argument was not a valid file descriptor open for writing. [EDEADLK] A resource deadlock would occur as a result of this operation (see lockf(2) and fcntl(2)). [EDQUOT] User's disk quota block limit has been reached for this file system. [EFBIG] An attempt was made to write a file that exceeds the implementation-dependent maximum file size or the process' file size limit. [EFBIG] The file is a regular file and nbyte is greater than zero and the starting position is greater than or equal to the offset maximum established in the open file description associated with fildes. [EINTR] The write operation was terminated due to the receipt of a signal, and no data was transferred. [ENOLCK] The system record lock table is full, preventing the write from sleeping until the blocking record lock is removed. [ENOSPC] Not enough space on the file system. The process does not possess the limit effective privilege to override this restriction. [ENXIO] A request was made of a non-existent device, or the request was outside the capabilities of the device. [ENXIO] A hangup occurred on the STREAM being written to. [EPIPE] An attempt is made to write to a pipe or FIFO that is not open for reading by any process, or that only has one end open. A SIGPIPE signal will also be sent to the process. [ERANGE] The transfer request size was outside the range supported by the STREAMS file associated with fildes. Under the following conditions, writev() fails and sets errno to: [EFAULT] iov_base or iov points outside of the allocated address space. The reliable detection of this error is implementation dependent.
[EINVAL] One of the iov_len values in the iov array is negative. [EINVAL] The sum of the iov_len values in the iov array would overflow an ssize_t. Under the following conditions, the writev() function may fail and set errno to: [EINVAL] The iovcnt argument was less than or equal to 0, or greater than {IOV_MAX}. Under the following conditions, the pwrite() function fails, the file pointer remains unchanged and errno is set to: [EINVAL] The offset argument is invalid, and the value is negative. [ESPIPE] The fildes argument is associated with a pipe or FIFO. Under the following conditions, write() or writev() fails, the file offset is updated to reflect the amount of data transferred and errno is set to: [EFAULT] buf points outside the process's allocated address space. The reliable detection of this error is implementation dependent. EXAMPLES Assuming a process opened a file for writing, the following call to write() attempts to write mybufsize bytes to the file from the buffer to which mybuf points. #include <string.h> int fildes; size_t mybufsize; ssize_t nbytes; char *mybuf = "aeiou and sometimes y"; mybufsize = (size_t)strlen (mybuf); nbytes = write (fildes, (void *)mybuf, mybufsize); WARNINGS Check signal(5) for the appropriateness of signal references on systems that support sigvector(). See the sigvector(2) manpage. sigvector() can affect the behavior of the write(), writev() and pwrite() functions described here. Character special devices, and raw disks in particular, apply constraints on how write() can be used. See specific Section 7 manual entries for details on particular devices. AUTHOR write() was developed by HP, AT&T, and the University of California, Berkeley.
SEE ALSO mkfs(1M), chmod(2), creat(2), dup(2), fcntl(2), getrlimit(2), lockf(2), lseek(2), open(2), pipe(2), sigvector(2), ulimit(2), ustat(2), signal(5), <limits.h>, <stropts.h>, <sys/uio.h>, <unistd.h>. STANDARDS CONFORMANCE write(): AES, SVID2, SVID3, XPG2, XPG3, XPG4, FIPS 151-2, POSIX.1, POSIX.4
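As a companion to the write() call shown in the EXAMPLES section above, here is a sketch of gathering output with writev(). The helper name is illustrative, not part of the manual page; only POSIX calls documented above are used.

```c
#include <string.h>
#include <sys/uio.h>
#include <unistd.h>

/* Write two separate buffers with a single writev() call.
   writev() writes iov[0] completely before proceeding to iov[1].
   Returns the total number of bytes written, or -1 on error. */
ssize_t write_two(int fd, const char *a, const char *b)
{
    struct iovec iov[2];
    iov[0].iov_base = (void *)a;
    iov[0].iov_len  = strlen(a);
    iov[1].iov_base = (void *)b;
    iov[1].iov_len  = strlen(b);
    return writev(fd, iov, 2);
}
```

Because writev() gathers both areas in one system call, the two buffers appear contiguously in the output without an intermediate copy into a single buffer.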
A Little C Primer/C Preprocessor Directives We've already seen the "#include" and "#define" preprocessor directives. The C preprocessor supports several other directives as well. All such directives start with a "#" to allow them to be distinguished from C language commands. As explained in the first chapter, the "#include" directive allows the contents of other files to be included in C source code: #include <stdio.h> Notice that the standard header file "stdio.h" is specified in angle brackets. This tells the C preprocessor that the file can be found in the standard directories designated by the C compiler for header files. To include a file from a nonstandard directory, use double quotes: #include "\home\mydefs.h" Include files can be nested. They can call other include files. Also as explained in the first chapter, the "#define" directive can be used to define symbols that the preprocessor replaces with specified text: #define PI 3.141592654 ... a = PI * b; In this case, the preprocessor does a simple text substitution on PI throughout the source listing. The C compiler proper not only does not know what PI is, it never even sees it. The "#define" directive can be used to create function-like macros that allow parameter substitution. For example: #define ABS(value) ( (value) >=0 ? (value) : -(value) ) This macro could then be used in an expression as follows: printf( "Absolute value of x = %d\n", ABS(x) ); Beware that such function-like macros don't behave exactly like true functions. For example, suppose "x++" is used as an argument for the macro above: val = ABS(x++); This would result in "x" being incremented twice because "x++" is substituted in the expression twice: val = ( (x++) >=0 ?
(x++) : -(x++) ) Along with the "#define" directive, there is also an "#undef" directive that undefines a constant that has been previously defined: #undef PI Another feature supported by the C preprocessor is conditional compilation, using the following directives: #if #else #elif #endif These directives can test the values of defined constants to define which blocks of code are passed on to the C compiler proper: #if WIN == 1 #include "WIN.H" #elif MAC == 1 #include "MAC.H" #else #include "LINUX.H" #endif These directives can be nested if needed. The "#if" and "#elif" can also test to see if a constant has been defined at all, using the "defined" operator: #if defined( DEBUG ) printf( "Debug mode!\n"); #endif -- or test to see if a constant has not been defined: #if !defined( DEBUG ) printf( "Not debug mode!\n"); #endif Finally, there is a "#pragma" directive, which by definition is a catch-all used to implement machine-unique commands that are not part of the C language. Pragmas vary from compiler to compiler, since they are by definition nonstandard.
https://en.wikibooks.org/wiki/A_Little_C_Primer/C_Preprocessor_Directives
Microsoft's plans for the F# "functional first" language include an upgrade later this year that adds capabilities ranging from struct tuples to improved error messages. Backing for .Net Core, a multiplatform, open source version of the .Net programming model, also is in the works. F# 4.1 focuses on flexibility and incremental improvements, the Microsoft Visual FSharp team said. It features struct tuples and interoperability with Visual C# 7 and Visual Basic tuples. Tuples are a data structure that can store a finite sequence of data of fixed sizes and can return multiple values from a method. Struct tuples improve performance when there are many tuples allocated in a short period of time. "The tuple type in F# is a key way to bundle values together in a number of ways at the language level," the team said. "The benefits this brings, such as grouping values together as an ad-hoc convenience, or bundling information with the result of an operation, are also surfacing in the form of struct tuples in C# and Visual Basic." Version 4.1 will also feature a struct records capability. "In F# 4.1, a record type can be represented as a struct with the [<Struct>] attribute. This allows records to now share the same performance characteristics as structs, without any other required changes to the type definition." Single-case struct unions, meanwhile, also are enabled. "Single-case union types are often used to wrap a primitive type for domain modeling," the team said. "This allows you to continue to do so, but without the overhead of allocating a new type on the heap." Error messages will be enhanced in F# 4.1, featuring improvements in suggested fixes with information already contained in the compiler, and a fixed keyword capability is planned as well. The .Net Intermediate Language enables a developer to pin a pointer-type local on the stack; C# supports this with the "fixed" statement preventing garbage collection within the scope of that statement. 
"This support is coming to F# 4.1 in the form of the 'fixed' keyword used in conjunction with a 'use' binding," said the team. Underscores in numeric literals, another F# 4.1 feature, will enable grouping of digits into logical units for easier reading. F# 4.1 will enable a collection of types and modules within a single scope in a single file to be mutually referential, and it will include an implicit "Module" suffix on modules sharing the same name as a type. "With this feature, if a module shares the same name as a type within the same declaration group -- that is, they are within the same namespace, or in the same group of declarations making up a module -- it will have the suffix 'Module' appended to it at compile-time." Visual F# Tools for F# 4.1 will support editing and compiling .Net Core and .Net Framework projects. "Our compiler and scripting tools for F# 4.1 will be the first version to offer support for .Net Core," the team said. Planned tooling includes a cross-platform, open source compiler tool chain for .Net Framework and .Net Core for use with Linux, MacOS X, and Windows. Visual F# IDE tools will be upgraded for use with the next version of Visual Studio, and F# 4.1 support will be included in Microsoft's Xamarin Studio and Visual Studio Code tools. The upgrade will be supported in the Fable F#-to-ECMAScript transpiler and in Roslyn Workspaces, for code analysis and refactoring in the Roslyn compiler platform.
https://www.infoworld.com/article/3100744/microsoft-maps-out-f-language-upgrade.html
Use Input Static node to pass python script to Python Interface Node

Hello, I saw a demo of Dataverse and it appeared to use a Static Node to pass a python script into the Python Node.

1) Is this currently possible to do?
2) Could someone share a very simple job that shows how to do this using a simple python script?
3) Is it possible to do this in python 3?

Thanks, Rob

Our software uses Python 2.7.3. Python and the StaticData node aren't going to mesh well. But, depending upon what you are wishing to do, you can simply put your python script directly into the python node. Or, if the following conditions are met:

- your script does not need to interact with DV in any way (neither input nor output)
- for some reason you want it in a static data node
- or, if you need python 3

Then you could put your script into the static data node, and use OutputRaw to save the data to a text file. Then use this code fragment in a python node (put it in the "initialize" subroutine):

    import os
    os.system("/path/to/python3 myscript.py")

But, huge caveat... the StaticData node is going to be very problematic over commas and question marks. StaticData is not going to play well with various delimiters.

Thanks Stony. Forgetting Python 3 for the time being -- is there another example available demonstrating the best way to use Python in a Dataverse job? I'm trying to work with the "Example Python Node" job that's included in the install, but I'm not making any progress. If there is a different or better example, that's great, could you share? If there is not, could you help me understand how I could view log messages? Any basic change I make causes the job to fail. I've tried adding a new line with a custom message, and have adjusted the LogLevel, but I get strange errors about indentation. I read and have changed the Node LogLevel to 0.
But when I enter:

    self.logInfo("Testing, testing.")

or

    self.logMedium("Testing, testing.")

I get:

    unindent does not match any outer indentation level (temp.L107ATM19M003.4544.129.1493665004886.64e180058116dc9e5c3911f38da0ad16.prop, line 17)

Any help greatly appreciated. Thanks, Rob

This is the one thing that I hate about python the most. I love it for many many things, but this "feature" just kills me. In the Python2Implementation input field, the spacing is done with TABS. However, you will find that your code, even though it is aligned properly, is filled with spaces instead of tabs. That's why the "indentation" is incorrect.
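For reference, the "save the script, then shell out to it" approach from the answer above can be sketched end to end. Here sys.executable stands in for the installation-specific /path/to/python3, and a temp file stands in for the script that the StaticData node's OutputRaw step would save; both names are assumptions for the sketch, not Dataverse API:

```python
import os
import subprocess
import sys
import tempfile

# Hypothetical stand-in for the script body that OutputRaw would save to disk.
script_body = 'print("hello from an external interpreter")'

with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write(script_body)
    script_path = f.name

try:
    # The answer uses os.system("/path/to/python3 myscript.py");
    # subprocess.run does the same job while also capturing stdout.
    result = subprocess.run([sys.executable, script_path],
                            capture_output=True, text=True)
    output = result.stdout.strip()
    print(output)  # hello from an external interpreter
finally:
    os.unlink(script_path)
```

subprocess.run is generally preferred over os.system in modern Python because it avoids shell quoting issues (the commas and question marks the caveat above warns about) and lets you inspect the child process's output and exit code.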
https://support.infogix.com/hc/en-us/community/posts/360028736554-Use-Input-Static-node-to-pass-python-script-to-Python-Interface-Node
This new version is compatible with Maya 2011, 2012 and 2013.

IMPORTANT: Please read carefully the documentation before using poseLib!!! Documentation and installation instructions can be found on my website. Also that's where you'll find regularly updated versions.

PoseLib is based on a donation system. It means that if it is useful to you or your studio then you can make a donation to reflect your satisfaction. Thanks for using poseLib!

Updated: 26 July 2012
- Fixed right-click menu not displaying properly in Maya 2013.
- Now only shows poses whose file actually exists (no more empty red icons).

Description:
PoseLib is being used in production throughout the world by feature animation studios and video-game companies. It allows you to record a pose for any object in Maya (more precisely anything that has keyable channel attributes). Whether it is the hyper complex character rig you just created or a simple Nurbs sphere or a basic Lambert shader: you just have to select the object(s), hit the "Create New Pose" button, and a pose file is created in the relevant directory, along with the corresponding icon (a .bmp file).

Features:
- Works on anything at anytime (no "character map" or specific rig layout or particular naming required).
- You can blend between the current pose and the clicked one by holding down the ALT or CTRL key and clicking on a pose.
- Lets you choose the icon size you like (from 32x32 to any custom size/ratio up to 512x512).
- Works with referenced/unreferenced characters (you can even edit the namespace).
- Lets you organize your poses by characters (e.g.: Babar, Tintin...) and categories (e.g.: Face, Body...).
- You can rename, delete, replace or move any pose from any character or category.
- You can quickly select the objects/controls that are part of a specific pose.
- You can also remove, add or replace specific objects/controls in a pose.
- You can apply a pose specifically to the channels selected in the channel box.

Enjoy!
Please use the Feature Requests to give me ideas. Please use the Support Forum if you have any questions or problems. Please rate and review in the Review section.
https://www.highend3d.com/maya/script/poselib-for-maya
In the previous Mastering article (Mastering ASP.NET DataBinding), we took a detailed look at databinding - one of the most asked about topics in the newsgroups. Today, we continue the series by answering another very common question: how to maximize the communication between a page and its user controls. The actual questions asked typically don't include the words "maximize the communication" in them, but are more along the lines of:

The goal of this tutorial isn't only to answer these questions, but more importantly to build a foundation of understanding around these answers to truly make you a master of page-user control communication.

Before we can answer the above questions, two basic concepts should be understood. As always, these basic concepts not only go beyond the scope of this tutorial, but by really understanding them, you'll be on your way to mastering ASP.NET. Both these concepts, and therefore the answers to the above questions, deal with object oriented principles. The use of solid object oriented methodologies is a recurring theme in developing solutions in ASP.NET, but we must be conscious that this can unfortunately be intimidating for some programmers. If you've read this far however, you're willing to do more than simply copy and paste an answer.

This is probably something you already know, but each code-behind file you create is actually compiled into a class. There's a good chance however that you haven't really been taking advantage of that knowledge. Before we get ahead of ourselves, let's look at the shell of a code-behind file:

1: //C#
2: public class SamplePage : System.Web.UI.Page {
3:     private string title;
4:     public string Title{
5:         get { return title; }
6:     }
7:     ...
8: }

1: 'VB.NET
2: Public Class SamplePage
3:     Inherits System.Web.UI.Page
4:     Private _title As String
5:     Public ReadOnly Property Title() As String
6:         Get
7:             Return _title
8:         End Get
9:     End Property
10:     ...
11: End Class

As you can see, it's a class like any other - except that an ASP.NET page always inherits from System.Web.UI.Page. In reality though, there's nothing special about this class, it's just like any other. It's true that ASP.NET pages behave slightly differently from normal classes, for example Visual Studio .NET automatically generates some code for you called Web Form Designer generated code, and you typically use the OnInit or Page_Load events to place your initializing code - instead of a constructor. But these are differences for the ASP.NET framework; from your own point of view, you should treat pages like any other classes.

So what does that really mean? Well, as we'll see, when we start to look at specific answers, the System.Web.UI.Control class, which System.Web.UI.Page and System.Web.UI.UserControl both inherit from, exposes a Page property. This Page property is a reference to the instance of the current page the user is accessing. The reference is pretty useless to the actual page (since it's a reference to itself), but for a user control, it can be quite useful when properly used.

I originally wrote quite a bit about what inheritance was. However, from the start, it felt like the thousands of tutorials that try to explain core OO principles with a couple of basic examples and simplified explanations. While inheritance isn't a complicated topic, there's something about trying to teach it so it doesn't seem cheap, which my writing skills just haven't reached yet. Ask Google about C# inheritance if you're really new to the topic. Instead of talking in depth about inheritance, we'll briefly touch on what we need to know.

We can clearly see in the above class shell that our SamplePage class inherits from System.Web.UI.Page (we can especially see this in the more verbose VB.NET example). This essentially means that our SamplePage class provides (at the very least) all the functionality provided by the System.Web.UI.Page class.
This guarantees that an instance of SamplePage can always safely be treated as an instance of System.Web.UI.Page (or any classes it might inherit from). Of course, the opposite isn't always true; an instance of System.Web.UI.Page isn't necessarily an instance of SamplePage. The truly important thing to understand is that our SamplePage extends the functionality of the System.Web.UI.Page by providing a read-only property named Title. The Title property however is only accessible from an instance of SamplePage and not System.Web.UI.Page. Since this is really the key concept, let's look at some examples:

1: //C#
2: public static void SampleFunction(System.Web.UI.Page page, SamplePage samplePage) {
3:     // IsPostBack property is a member of the Page class,
4:     // which all instances of SamplePage inherit
5:     bool pb1 = page.IsPostBack; //valid
6:     bool pb2 = samplePage.IsPostBack; //valid
7:
8:     // The ToString() method is a member of the Object class, which instances
9:     // of both the Page and SamplePage classes inherit
10:     string name1 = page.ToString(); //valid
11:     string name2 = samplePage.ToString(); //valid
12:
13:     //Title is specific to the SamplePage class, only it or classes
14:     //which inherit from SamplePage have the Title property
15:     string title1 = page.Title; //invalid, won't compile
16:     string title2 = samplePage.Title; //valid
17:     string title3 = ((SamplePage)page).Title; //valid, but might give a run-time error
18:     string title4 = null;
19:     if (page is SamplePage){
20:         title4 = ((SamplePage)page).Title;
21:     }else{
22:         title4 = "unknown";
23:     }
24: }

1: 'VB.NET
2: Public Shared Sub SampleFunction(ByVal page As System.Web.UI.Page, _
       ByVal samplePage As SamplePage)
3:     'IsPostBack property is a member of the Page class, which all instances
4:     'of SamplePage inherit
5:     Dim pb1 As Boolean = page.IsPostBack 'valid
6:     Dim pb2 As Boolean = samplePage.IsPostBack 'valid
7:
8:     'The ToString() method is a member of the Object class, which instances
9:     'of both the Page and SamplePage classes inherit
10:     Dim name1 As String = page.ToString() 'valid
11:     Dim name2 As String = samplePage.ToString() 'valid
12:
13:     'Title is specific to the SamplePage class, only it or classes
14:     'which inherit from SamplePage have the Title property
15:     Dim title1 As String = page.Title 'invalid, won't compile
16:     Dim title2 As String = samplePage.Title 'valid
17:     Dim title3 As String = CType(page, SamplePage).Title 'valid, but might give a run-time error
18:     Dim title4 As String = Nothing
19:     If TypeOf page Is SamplePage Then
20:         title4 = CType(page, SamplePage).Title
21:     Else
22:         title4 = "unknown"
23:     End If
24: End Sub

The first couple of cases are straightforward. First, we see how our SamplePage class inherits the IsPostBack property from System.Web.UI.Page [5,6]. We then see how both SamplePage and System.Web.UI.Page inherit the ToString() function from System.Object - which all objects in .NET inherit from.

Things get more interesting when we play with the Title property. First, since the System.Web.UI.Page class doesn't have a Title property, the first example is totally invalid and thankfully won't even compile [15]. Of course, since our SamplePage class does define it, the second example is perfectly sane [16]. The third and fourth examples are really interesting. In order to get our code to compile, we can simply cast the page instance to the type of SamplePage which then allows us to access the Title property [17]. Of course, if page isn't actually an instance of SamplePage, this will generate an exception. The fourth example illustrates a much safer way to do this: by checking to see if page is an instance of SamplePage [19] and only if it is casting it [20].

To wrap up this [painful] section, the key point to understand is that when you create a new ASPX page, the page itself is a class, which inherits from System.Web.UI.Page.
If you have access to an instance of System.Web.UI.Page and you know the actual type (for example, SamplePage), you can cast it to this type and then access its functionality - much like we were able to do with page and get the Title.

We'll first discuss basic communication strategies between a page and its user controls in all directions. While this section alone will likely answer your questions, the important stuff comes in the following section where we discuss more advanced strategies. For the basic communication, we'll use a single page with two user controls and keep everything fairly simple. We'll use our sample page from above, and these two user controls:

1: 'VB.NET - Results user control
2: Public Class Results
3:     Inherits System.Web.UI.UserControl
4:     Protected results As Repeater
5:     Private _info As DataTable
6:
7:     Public Property Info() As DataTable
8:         Get
9:             Return _info
10:         End Get
11:         Set(ByVal Value As DataTable)
12:             _info = Value
13:         End Set
14:     End Property
15:
16:     Private Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs)
17:         If Not Page.IsPostBack AndAlso Not (_info Is Nothing) Then
18:             results.DataSource = _info
19:             results.DataBind()
20:         End If
21:     End Sub
22: End Class

1: 'VB.NET - ResultsHeader user control
2: Public Class ResultHeader
3:     Inherits System.Web.UI.UserControl
4:     Private Const headerTemplate As String = "Page {1} of {2}"
5:     Protected header As Literal
6:     Private _currentPage As Integer
7:     Private _recordsPerPage As Integer
8:
9:     Public Property CurrentPage() As Integer
10:         Get
11:             Return _currentPage
12:         End Get
13:         Set(ByVal Value As Integer)
14:             _currentPage = Value
15:         End Set
16:     End Property
17:
18:     Public Property RecordsPerPage() As Integer
19:         Get
20:             Return _recordsPerPage
21:         End Get
22:         Set(ByVal Value As Integer)
23:             _recordsPerPage = Value
24:         End Set
25:     End Property
26:
27:     Private Sub Page_Load(ByVal sender As Object, ByVal e As EventArgs)
28:         header.Text = headerTemplate
29:         header.Text = header.Text.Replace("{1}", _currentPage.ToString())
30:         header.Text = header.Text.Replace("{2}", _recordsPerPage.ToString())
31:     End Sub
32: End Class

While communicating from a page to a user control isn't something frequently asked (because most people know how to do it), it nevertheless seems like the right place to start. When placing a user control on a page (i.e., via the @Register directive), passing values is pretty straightforward for simple types:

1: <%@ Register
6: <Result:Header
7: <Result:Results
8: </form>
9: </body>
10: </HTML>

We can see that the CurrentPage and RecordsPerPage properties of our ResultHeader user control are assigned a value like any other HTML property [5]. However, since the Results user control's Info property is a more complex type, it must be set via code:

1: protected Results rr;
2: private void Page_Load(object sender, EventArgs e) {
3:     if (!Page.IsPostBack){
4:         rr.Info = SomeBusinessLayer.GetAllResults();
5:     }
6: }

When loading a control dynamically, via Page.LoadControl, it's important to realize that an instance of System.Web.UI.Control is returned - not the actual class of the control loaded. Since we know the exact type, we simply need to cast it first:

1: //C#
2: Control c = Page.LoadControl("Results.ascx");
3: c.Info = SomeBusinessLayer.GetAllResults(); //not valid, Info isn't a member of Control
4:
5: Results r = (Results)Page.LoadControl("Results.ascx");
6: r.Info = SomeBusinessLayer.GetAllResults(); //valid

1: 'VB.NET
2: dim c as Control = Page.LoadControl("Results.ascx")
3: c.Info = SomeBusinessLayer.GetAllResults() 'not valid, Info isn't a member of Control
4:
5: dim r as Results = ctype(Page.LoadControl("Results.ascx"), Results)
6: r.Info = SomeBusinessLayer.GetAllResults() 'valid

Communicating information from a user control to its containing page is not something you'll need to do often. There are timing issues associated with doing this, which tends to make an event-driven model more useful (I'll cover timing issues and using events to communicate later in this tutorial).
Since this provides a nice segue into the far more frequently asked user control to user control question, we'll throw timing issues to the wind and quickly examine it. As I've already mentioned, pages and user controls eventually inherit from the System.Web.UI.Control class which exposes the Page property - a reference to the page being run. The Page property can be used by user controls to achieve most of the questions asked in this tutorial. For example, if our ResultHeader user control wanted to access our SamplePage's Title property, we simply need to:

1: //C#
2: string pageTitle = null;
3: if (Page is SamplePage){
4:     pageTitle = ((SamplePage)Page).Title;
5: }else{
6:     pageTitle = "unknown";
7: }

1: 'VB.NET
2: Dim pageTitle As String = Nothing
3: If TypeOf (Page) Is SamplePage Then
4:     pageTitle = CType(Page, SamplePage).Title
5: Else
6:     pageTitle = "unknown"
7: End If

It's important to check that Page is actually of type SamplePage before trying to cast it [3], otherwise we'd risk having a System.InvalidCastException thrown.

User control to user control communication is an extension of what we've seen so far. Too often have I seen people trying to find ways to directly link the two user controls, as opposed to relying on common ground - the page. Here's the code-behind for SamplePage containing the Results and ResultHeader user controls:

1: Public Class SamplePage
2:     Inherits System.Web.UI.Page
3:     Private rr As Results
4:     Private rh As ResultHeader
5:     Private _title As String
6:     Public ReadOnly Property Title() As String
7:         Get
8:             Return _title
9:         End Get
10:     End Property
11:     Public ReadOnly Property Results() As Results
12:         Get
13:             Return rr
14:         End Get
15:     End Property
16:     Public ReadOnly Property Header() As ResultHeader
17:         Get
18:             Return rh
19:         End Get
20:     End Property
21:     ...
22: End Class

The code-behind looks like any other page, except a ReadOnly property for each of our two user controls has been added [11-15,16-20]. This allows a user control to access any other via the appropriate property. For example, if our ResultHeader wanted to make use of the Results control's Info property, it could easily access it via:

1: //C#
2: private void Page_Load(object sender, EventArgs e) {
3:     DataTable info;
4:     if (Page is SamplePage){
5:         info = ((SamplePage)Page).Results.Info;
6:     }
7: }

1: 'VB.NET
2: Private Sub Page_Load(ByVal sender As System.Object, _
       ByVal e As System.EventArgs) Handles MyBase.Load
3:     Dim info As DataTable
4:     If TypeOf (Page) Is SamplePage Then
5:         info = CType(Page, SamplePage).Results.Info
6:     End If
7: End Sub

This is identical to the code example above - where a user control accessed a page value. In reality, this is exactly what's happening: the ResultHeader is accessing the Results property of SamplePage and then going a level deeper and accessing its Info property. There's no magic. We are using public properties in classes to achieve our goals. A page sets a user control's value via a property, or vice versa, which can be done to any depth. Simply be aware that pages and user controls are actual classes you can program against; create the right public interface (properties and methods) and basic communication becomes rather bland (this isn't always a bad thing). Methods are accessed the same way we've done properties. As long as they are marked Public, a page can easily access one of its user control's methods, or a user control can use the page as a broker to access another user control's method.

While the above section aimed at giving you the knowledge to implement a solution to [most of] the questions related to this tutorial, here we'll concentrate on more advanced topics with a strong focus on good design strategies. While the code and methods discussed in the above sections will work, and are even at times the right approach, consider if they are truly the right approach for your situation. Why? you ask.
Because if they aren't bad design as-is, they will lead to it unless you are vigilant. Take, for example, the last little blurb about accessing methods. If these are utility/common/static/shared methods, consider moving the function to your business layer instead. Another example of bad design is the dependency such communication creates between specific pages and user controls. All of our example user controls above would either work very differently or cease to work entirely if they were used on a page other than SamplePage. User controls are meant to be reused, and for the most part (this isn't a 100% rule), shouldn't require other user controls or a specific page to work. The next two sections look at ways of improving this.

We can leverage interfaces to reduce the dependency created by such communication. In the last example, the ResultHeader user control accessed the Info property of the Results user control. This is actually a pretty valid thing to do as it avoids having to re-hit the database in order to access the total number of records (although there are certainly alternatives to this approach). The problem with the above approach is that ResultHeader would only work with SamplePage and Results. Making good use of interfaces can actually make ResultHeader work for any page which displays a result (whatever that might be).

What is an interface? An interface is a contract which a class must fulfill. When you create a class and say that it implements a certain interface, you must (otherwise your code won't compile) create all the functions/properties/events/indexers defined in the interface. Much like you are guaranteed that a class which inherits from another will have all of the parent class' functionality, so too are you guaranteed that a class which implements an interface will have all of the interface's members defined. You can read Microsoft's definition, or this tutorial, but I think the couple of examples below will give you the exposure you need.
To get the most flexibility, we'll create two interfaces. The first will be used by pages which display results and will force them to expose a ReadOnly property which in turn exposes our other interface:

1: //C#
2: public interface IResultContainer{
3:     IResult Result { get; }
4: }

1: 'VB.NET
2: Public Interface IResultContainer
3:     ReadOnly Property Result() As IResult
4: End Interface

The second interface, IResult, exposes a DataTable - the actual results:

1: //C#
2: public interface IResult {
3:     DataTable Info { get; }
4: }

1: 'VB.Net
2: Public Interface IResult
3:     ReadOnly Property Info() As DataTable
4: End Interface

If you are new to interfaces, notice how no implementation (no code) is actually provided. That's because classes which implement these interfaces must provide the code (as we'll soon see). Next, we make SamplePage implement IResultContainer and implement the necessary code:

1: Public Class SamplePage
2:     Inherits System.Web.UI.Page
3:     Implements IResultContainer
4:
5:     Private rr As Results
6:     Public ReadOnly Property Result() As IResult _
           Implements IResultContainer.Result
7:         Get
8:             Return rr
9:         End Get
10:     End Property
11:     ...
12: End Class

The last step before we can make use of this is to make Results implement IResult:

1: public class Results : UserControl, IResult {
2:     private DataTable info;
3:     public DataTable Info { //Implements IResult.Info
4:         get { return info; }
5:     }
6:     ...
7: }

With these changes in place, ResultHeader can now decouple itself from SamplePage and instead tie itself to the broader IResultContainer interface:

1: Dim info As DataTable
2: If TypeOf (Page) Is IResultContainer Then
3:     info = CType(Page, IResultContainer).Result.Info
4: Else
5:     Throw New Exception("ResultHeader user control must be used" & _
           " on a page which implements IResultContainer")
6: End If

There's no denying that the code looks a lot like it did before. But instead of having to be placed on SamplePage, it can now be used with any page which implements IResultContainer. The use of IResult also decouples the page from the actual Results user control and instead allows it to make use of any user control which implements IResult.

All of this might seem like a lot of work in the name of good design. And if you have a simple site which will only display a single result, it might be overkill. But the minute you start to add different results, interfaces will pay off both in lower development time and, more importantly, by making your code easily readable and maintainable. And if you don't use interfaces to decouple your communication links, keep an open mind for where else you might be able to use them because you'll probably find a ton.

One of the questions I haven't answered yet is how to make a page (or another user control) aware of an event which occurred in a user control. While it's possible to use the communication methods described above, creating your own events totally decouples the user control from the page. In other words, the user control raises the event and doesn't care who (if anyone) is listening. Besides, it's fun to do! For our example, we'll create a third user control, ResultPaging, which displays paging information for our results.
Whenever one of the page numbers is clicked, our user control simply raises an event which the page, or other user controls, can catch and do what they will with it:

1: //C#
2: public class ResultPaging : UserControl {
3:     private Repeater pager;
4:     public event CommandEventHandler PageClick;
5:
6:     private void Page_Load(object sender, EventArgs e) {
7:         //use the other communication methods to figure out how many pages
8:         //there are and bind the result to our pager repeater
9:     }
10:
11:     private void pager_ItemCommand(object source, RepeaterCommandEventArgs e) {
12:         if (PageClick != null){
13:             string pageNumber = (string)e.CommandArgument;
14:             CommandEventArgs args = new CommandEventArgs("PageClicked", pageNumber);
15:             PageClick(this, args);
16:         }
17:     }
18: }

1: 'VB.NET
2: Public Class ResultPaging
3:     Inherits System.Web.UI.UserControl
4:     Private pager As Repeater
5:     Public Event PageClick As CommandEventHandler
6:     Private Sub Page_Load(ByVal sender As System.Object, _
           ByVal e As System.EventArgs) Handles MyBase.Load
7:         'use the other communication methods to figure out how many pages
8:         'there are and bind the result to our pager repeater
9:     End Sub
10:
11:     Private Sub pager_ItemCommand(ByVal source As Object, _
           ByVal e As RepeaterCommandEventArgs)
12:
13:         Dim pageNumber As String = CStr(e.CommandArgument)
14:         Dim args As New CommandEventArgs("PageClicked", pageNumber)
15:         RaiseEvent PageClick(Me, args)
16:
17:     End Sub
18: End Class

With our PageClick event declared of type CommandEventHandler [5], we are able to notify anyone who's interested when a page number is clicked. The general idea behind the control is to load a Repeater with the page numbers, and to raise our PageClick event when an event fires within this Repeater. As such, the user control handles the Repeater's ItemCommand [11], retrieves the CommandArgument [13], repackages it into a CommandEventArgs [14], and finally raises the PageClick event [15].
The C# code must do a little extra work by making sure that PageClick isn't null [12] before trying to raise it, whereas VB.NET's RaiseEvent takes care of this (the event will be null/Nothing if no one is listening). SamplePage can then take advantage of this by hooking into the PageClick event like any other:

1: //C#
2: protected ResultPaging rp;
3: private void Page_Load(object sender, EventArgs e) {
4:    rp.PageClick += new System.Web.UI.WebControls.CommandEventHandler(rp_PageClick);
5: }
6: private void rp_PageClick(object sender, System.Web.UI.WebControls.CommandEventArgs e) {
7:    //do something
8: }

1: 'VB.Net WithEvents solution
2: Protected WithEvents rp As ResultPaging
3: Private Sub rp_PageClick(ByVal sender As Object, _
                            ByVal e As CommandEventArgs) Handles rp.PageClick
4:    'do something
5: End Sub

1: 'VB.Net AddHandler solution
2: Private rp As ResultPaging
3: Private Sub Page_Load(ByVal sender As System.Object, _
                         ByVal e As System.EventArgs) Handles MyBase.Load
4:    AddHandler rp.PageClick, AddressOf rp_PageClick
5: End Sub
6: Private Sub rp_PageClick(ByVal sender As Object, _
                            ByVal e As CommandEventArgs)
7:    'do something
8: End Sub

More likely though, the Results user control would take advantage of this event through SamplePage, or better yet by expanding the IResultContainer interface.

One of the difficulties which arises from communicating between page and user control has to do with when events happen. For example, if Results were to try and access ResultHeader's RecordsPerPage property before it was set, you would get unexpected behavior. The best weapon against such difficulties is knowledge. When loading controls declaratively (via the @Register directive), the Load event of the page will fire first, followed by those of the user controls, in the order in which they are placed on the page.
Similarly, controls loaded programmatically (via Page.LoadControl) will have their Load event fired in the order that they are added to the control tree (not when the call to LoadControl is actually made). For example, given the following code:

1: Control c1 = Page.LoadControl("Results.ascx");
2: Control c2 = Page.LoadControl("ResultHeader.ascx");
3: Control c3 = Page.LoadControl("ResultPaging.ascx");
4: Page.Controls.Add(c2);
5: Page.Controls.Add(c1);

c2's Load event will fire first, followed by c1's. c3's Load event will never fire because it isn't added to the control tree. When both types of controls exist (declarative and programmatic), the same rules apply, except that all declarative controls are loaded first, then the programmatic ones. This is even true if controls are programmatically loaded in Init instead of Load.

The same holds true for custom events as for built-in ones. In our event example above, the following is the order of execution when a page number is clicked (assuming no control is on the page except ResultPaging):

1. SamplePage's OnLoad event.
2. ResultPaging's OnLoad event.
3. ResultPaging's pager_ItemCommand event handler.
4. SamplePage's rp_PageClick event handler.

The real difficulties arise when dealing with programmatically created controls within events - such as adding a user control to the page when a button is clicked. The problem is that such things happen after the page loads the viewstate, which, depending on what you are doing, might cause you to miss events within your user controls or cause seemingly odd behavior. As always, there are workarounds to such things, but they are well outside the scope of this tutorial. One solution might be Denis Bauer's DynamicControlsPlaceholder control (I haven't tried it yet, but it looks very promising).

It seems like good practice to conclude by visiting the key points, but really, the key points are to use what you can and try and understand as much as possible.
Try to keep your designs clean, your pages flexible, and above all your code readable. Pages are classes and should be treated as such, namely by understanding how inheritance works with respect to casting (CType in VB.NET) and public properties and methods.
Graham Dumpleton wrote:

>Having a bit more of a think about this, it wouldn't be too hard for me to
>implement a relatively clean mechanism which would allow access to the
>request object when loading a module, in the Vampire package I provide.

First, thanks for all the input ... I am not aware of what the "Vampire" module is - one of your projects?

>If the import function took an optional "req" object, I could place this
>into the empty module prior to running execfile() and then remove
>it afterwards. Ie.,
>
>    module = imp.new_module(label)
>    module.__file__ = file
>    module.__req__ = req
>    execfile(file,module.__dict__)
>    del module.__dict__["__req__"]

So this is performed in the "python module loader", I guess? I am using the PythonHandler directly myself, and I guess this is more "low level", so control at this level is not possible?

>That way the "req" object could be available just for the period of the
>initialisation phase of an import. You probably wouldn't want to
>cache the req object as it applies to a specific request as the cached
>module would outlive it. You also wouldn't want to be relying on
>information specific to a request. You could access the PythonOption
>values, although you may want to avoid values set in .htaccess files
>and go for ones you know are set in the file.

Hmm, I know what you mean, but on the other hand ... I don't use one python script for more than one URL, and if I am, the script still remains in the same physical path in relation to the web server, anyway. Or are we talking about two different things?

>Anyway, in the end, this would allow you to do something like the
>following in a content handler.
>
>    from mod_python import apache
>
>    if __req__ != None:
>        options = __req__.get_options()
>        if options.has_key("debug") and options["debug"]:
>            apache.log_error(...)

Only ... I dislike the "if" :-) I hoped for something like:

    from mod_python import apache
    pram = apache.get_option( 'custom_param' )
    ...
This is without the "if", as the only problem left will be that "custom_param" doesn't exist. I think the "__req__" object is a bad idea, as this is not a request situation, but module initialization. We know where the script is (and the .htaccess file), but no request has been sent (well, it has, but we need not know about this in the "global" context!). Anyway, this is how I dream about it :-)

>Is this the sort of thing you were wanting, or have I misunderstood?

Yeps, with some corrections :-)

>Overall I am not sure that this is a good idea or not. It has both good
>points and bad points.

I think I know what you mean. It is a "nice to have" thing and I already have a workaround for this. But in my search for perfection :-)

>Maybe it shouldn't use an actual req object, but a new object which
>incorporates some of what req provides, dropping stuff that may be
>more specific to a particular request.

That's my point :-)

>Thus you might provide some
>information about the server and python options, although not sure
>how you deal with the issue of .htaccess level options being different
>based on URL used for original request.

Well, there are two kinds of URLs: physical and abstract (as I understand it). A physical URL ends up mapped onto a physical path on the disk (by Apache), which therefore knows where to load a .htaccess file (and the document), but an abstract URL ends up in a PythonHandler, which takes care of the rest. The abstract URL handler will only be able to load the .htaccess file in the dir where it lives, but no config change depends on the URL, as long as it ends up in our handler. Hmm, hope this makes sense. Anyway, I don't see any problems regarding configuration and different URLs, but I may be missing something.

>Anyway, worth thinking about some more.

Nice. Anyway, python is a fantastic web scripting language if you like your code to be readable :-)

/BL
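The loader idea quoted in the thread above can be sketched in a small, self-contained way. This is a Python 3 translation of the Python 2 snippet (types.ModuleType and exec stand in for imp.new_module and execfile); load_module_with_req and MockReq are hypothetical names used only for illustration, not part of mod_python:

```python
import types

def load_module_with_req(label, filename, source, req):
    """Expose `req` as __req__ only while the module's top-level
    code runs, then remove it again (the idea from the thread)."""
    module = types.ModuleType(label)
    module.__file__ = filename
    module.__req__ = req            # visible during initialization only
    exec(source, module.__dict__)
    del module.__dict__["__req__"]  # gone once the module is cached
    return module

# A stand-in for mod_python's request object, offering get_options().
class MockReq:
    def get_options(self):
        return {"debug": "1"}

# Module source that reads a PythonOption-style value at import time.
source = """
options = __req__.get_options()
DEBUG = options.get("debug") == "1"
"""

mod = load_module_with_req("handler", "handler.py", source, MockReq())
print(mod.DEBUG)                # True
print(hasattr(mod, "__req__"))  # False
```

As the thread notes, the req object itself must not be cached: the module outlives the request, so only values derived from it (like DEBUG here) should persist.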
Lather, rinse and repeat

Posted February 02, 2013 at 09:00 AM | categories: recursive, math

Updated February 27, 2013 at 02:45 PM.

def recursive_factorial(n):
    '''compute the factorial recursively. Note if you put a negative
    number in, this function will never end. We also do not check if
    n is an integer.'''
    if n == 0:
        return 1
    else:
        return n * recursive_factorial(n - 1)

print recursive_factorial(5)

120

from scipy.misc import factorial
print factorial(5)

120.0

0.1 Compare to a loop solution

This example can also be solved by a loop. The loop is easier to read and understand than the recursive function. Note the recursive nature of defining the variable as itself times a number.

n = 5
factorial_loop = 1
for i in range(1, n + 1):
    factorial_loop *= i

print factorial_loop

120

There are some significant differences in this example from Matlab:

- the syntax of the for loop is quite different, with the use of the in operator.
- python has the nice *= operator to replace a = a * i
- We have to loop from 1 to n+1 because the last number in the range is not returned.

1 Conclusions

Recursive functions have a special niche in mathematical programming. There is often another way to accomplish the same goal. That is not always true though, and in a future post we will examine cases where recursion is the only way to solve a problem.

Copyright (C) 2013 by John Kitchin. See the License for information about copying.
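The docstring above warns that a negative or non-integer n will recurse forever. As a sketch (in Python 3 syntax, with the hypothetical name safe_factorial), those checks can be made explicit:

```python
def safe_factorial(n):
    """Recursive factorial that rejects the inputs the original
    docstring warns about (negative or non-integer n)."""
    if not isinstance(n, int):
        raise TypeError("n must be an integer")
    if n < 0:
        raise ValueError("n must be non-negative")
    # Same recursion as before, just written as a conditional expression.
    return 1 if n == 0 else n * safe_factorial(n - 1)

print(safe_factorial(5))  # 120
```

Because the checks run on every call, a production version would typically validate once in a wrapper and recurse on a trusted inner function; for a didactic example this form keeps the guard and the recursion in one place.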
The Mirror of Literature, Amusement, and Instruction by Various Produced by Jonathan Ingram, Allen Siddle, David Garcia and the Online Distributed Proofreading Team. THE MIRROR OF LITERATURE, AMUSEMENT, AND INSTRUCTION. VOL. XII, NO. 336.] SATURDAY, OCTOBER 18, 1828. [PRICE 2d. Richmond Palace [Illustration: Richmond Palace] Richmond has comparatively but few antiquarian or poetical visiters, notwithstanding all its associations with the ancient splendour of the English court, and the hallowed names of Pope and Thomson. Maurice sings, To thy sequester'd bow'rs and wooded height, That ever yield my soul renew'd delight, Richmond, I fly! with all thy beauties fir'd, By raptur'd poets sung, by kings admir'd! but ninety-nine out of a hundred who visit Richmond, thank the gods they are not poetical, fly off to the _Star and Garter_ hill, and content themselves with the inspirations of its well-stored cellars. All this corresponds with the turtle-feasting celebrity of the modern _Sheen_; but it ill accords with the antiquarian importance and resplendent scenery of this delightful country. Our engraving is from a very old drawing, representing the palace at Richmond, as built by Henry VII. The manor-house at Sheen, a little east of the bridge, and close by the river side, became a _royal palace_ in the time of Edward I., for he and his successor resided here. Edward III. died here in 1377. Queen Anne, the consort of his successor, died here in 1394. Deeply affected at her death, he, according to Holinshed, "caused it to be thrown down and defaced; whereas the former kings of this land, being wearie of the citie, used customarily thither to resort as to a place of pleasure, and serving highly to their recreation." Henry V., however, restored the palace to its former magnificence; and Henry VII. held, in 1492, a grand tournament here. In 1499, it was almost consumed by fire, when Henry rebuilt the palace, and gave it the name of RICHMOND. 
Cardinal Wolsey frequently resided here; and Hall, in his Chronicles, says, that !'"[1] Queen Elizabeth was prisoner at Richmond during the reign of her sister Mary; after she came to the throne, the palace was her favourite residence; and here she died in 1608. Charles I. formed a large collection of pictures here; and Charles II. was educated at Richmond. On the restoration, the palace was in a very dismantled state, and having, during the commonwealth, been plundered and defaced, it never recovered its pristine splendour. The survey taken by order of parliament in 1649, affords a minute description of the palace. The great hall was one hundred feet in length, and forty one hundred and twenty-four steps. The chapel was ninety-six feet long and forty broad, with cathedral-seats and pews. Adjoining the prince's garden was an open gallery, two hundred. In 1650, it was sold for 10,000_l_. to private persons. All the accounts which have come down to us describe the furniture and decorations of the ANCIENT PALACE as very superb, exhibiting in gorgeous tapestry the deeds of kings and of heroes who had signalized themselves by their conquests throughout France in behalf of their country. The site of Richmond Palace is now occupied by noble mansions; but AN OLD ARCHWAY, seen from _the Green_, still remains as a melancholy memorial of its regal splendour. [1] Mrs. A.T. Thomson, in her _Memoirs of the Court of Henry the Eighth_, says, !" * * * * * EPITOME OF COMETS. (_For the Mirror_.) "Hast thou ne'er seen the Comet's flaming flight?" YOUNG. Comets, according to Sir Isaac Newton, are compact, solid, fixed, and durable bodies: in one word, a kind of planets, which move in very oblique orbits, every way, with the greatest freedom, persevering in their motions even against the course and direction of the planets; and their tail is a very thin, slender vapour, emitted by the head, or nucleus of the comet, ignited or heated by the sun. 
There are _bearded_, _tailed_, and _hairy_ comets; thus, when the comet is eastward of the sun, and moves from it, it is said to be _bearded_, because the light precedes it in the manner of a beard. When the comet is westward of the sun, and sets after it, it is said to be _tailed_, because the train follows it in the manner of a tail. Lastly, when the comet and the sun are diametrically opposite (the earth being between them) the train is hid behind the body of the comet, excepting a little that appears around it in the form of a border of hair, or _coma_, it is called _hairy_, and whence the name of comet is derived. For the conservation of the water and moisture of the planets, comets (says Sir Isaac Newton) seem absolutely requisite; from whose condensed vapours and exhalations all that moisture which is spent on vegetations and putrefactions, and turned into dry earth, may be resupplied and recruited; for all vegetables increase wholly from fluids, and turn by putrefaction into earth. Hence the quantity of dry earth must continually increase, and the moisture of the globe decrease, and at last be quite evaporated, if it have not a continual supply. And I suspect (adds Sir Isaac) that the spirit which makes the finest, subtilest, and best part of our air, and which is absolutely requisite for the life and being of all things, comes principally from the comets. Another use which he conjectures comets may." THOMSON. Newton has computed that the sun's heat in the comet of 1680,[2] was, to his heat with us at Midsummer, as twenty-eight thousand to one; and that the heat of the body of the comet was near two thousand times as great as that of red-hot iron. The same great author also calculates, that a globe of red-hot iron, of the dimensions of our earth, would scarce be cool in fifty thousand years. 
If then the comet be supposed to cool a hundred times as fast as red-hot iron, yet, since its heat was two thousand times greater, supposing it of the bigness of the earth, it would not be cool in a million of years. An elegant writer in the Guardian, says, "I cannot forbear reflecting on the insignificance of human art, when set in comparison with the designs of Providence. In pursuit of this thought, I considered a comet, or in the language of the vulgar, a blazing star, as a sky-rocket discharged by a hand that is Almighty. Many of my readers saw that in the year 1680, and if they were not mathematicians, will be amazed to hear, that it travelled had objects wandering through those immeasurable depths of ether, and running their appointed courses! Our eyes may hereafter be strong enough to command the magnificent prospect, and our understandings able to find out the several uses of these great parts of the universe. In the meantime, they are very proper objects for our imagination to contemplate, that we may form more extensive notions of infinite wisdom and power, and learn to think humbly of ourselves, and of all the little works of human invention." Seneca saw three comets, and says, "I am not of the common opinion, nor do I take a comet to be a sudden fire; but esteem it among the eternal works of nature." P.T.W. [2] The Comet which appeared in 1759, and which (says Lambert) returned the quickest of any that we have an account of, had a winter of seventy years. Its heat surpassed imagination. * * * * * SONNETS. BY LEIGH CLIFFE, AUTHOR OF "PARGA," "THE KNIGHTS OF RITZBERG," &c. (_For the Mirror_.) TO THE SUN. Hail to thee, fountain of eternal light, Streaming with dewy radiance in the sky! Rising like some huge giant from the night, While the dark shadows from thy presence fly. 
Enshrin'd in mantle of a varied dye, Thou hast been chambering in the topmost clouds, List'ning to peeping, glist'ning stars on high, Pillow'd upon their thin, aerial shrouds; But when the breeze of dawn refreshfully Swept the rude waters of the ocean flood, And the dark pines breath'd from each leaf a sigh, To wake the sylvan genius of the wood, Thou burst in glory on our dazzled sight, In thy resplendent charms, a flood of golden light! TO THE MOON. Spirit of heaven! shadow-mantled queen, In mildest beauty peering in the sky, Radiant with light! 'Tis sweet to see thee lean, As if to listen, from cloud-worlds on high, Whilst murmuring nightingales voluptuously Breathe their soft melody, and dew-drops lie Upon the myrtle blooms and oaken leaves, And the winds sleep in sullen peacefulness! Oh! it is then that gentle Fancy weaves The vivid visions of the soul, which bless The poet's mind, and with sweet phantasies, Like grateful odours shed refreshfully From angels' wings of glistening beauty, tries To waken pleasure, and to stifle sighs! * * * * * EMBLEM OF WALES. (_For the Mirror_.) It is supposed by some of the Welsh, and in some notes to a poem the author (Mr. P. Lewellyn) says he has been confidently assured, that the leek, as is generally supposed to be, is not the original emblem of Wales, but the sive, or chive, which is common to almost every peasant's garden. It partakes of the smell and taste of the onion and leek, but is not so noxious, and is much handsomer than the latter. It grows in a wild state on the banks of the Wye, infinitely larger than when planted in gardens. 
According to the above-mentioned author, the manner in which it became the national emblem of Cambria was as follows:--As a prince of Wales was returning victorious from battle, he wished to have some leaf or flower to commemorate the event; but it being winter, no plant or shrub was seen until they came to the Wye, when they beheld the sive, which the prince commanded to be worn as a memorial of the victory. _Tipton, Staffordshire._ W.H. * * * * * HISTORY OF FAIRS. (_For the Mirror._) Fairs, among the old Romans, were holidays, on which there was an intermission of labour and pleadings. Among the Christians, upon any extraordinary solemnity, particularly the anniversary dedication of a church, tradesmen were wont to bring and sell their wares even in the churchyards, which continued especially upon the festivals of the dedication. This custom was kept up till the reign of Henry VI. Thus we find a great many fairs kept at these festivals of dedications, as at Westminster on St. Peter's day, at London on St. Bartholomew's, Durham on St. Cuthbert's day. But the great numbers of people being often the occasion of riots and disturbances, the privilege of holding a fair was granted by royal charter. At first they were only allowed in towns and places of strength, or where there was some bishop or governor of condition to keep them in order. In process of time there were several circumstances of favour added, people having the protection of a holiday, and being allowed freedom from arrests, upon the score of any difference not arising upon the spot. They had likewise a jurisdiction allowed them to do justice to those that came thither; and therefore the most inconsiderable fair with us has, or had, a court belonging to it, which takes cognizance of all manner of causes and disorders growing and committed upon the place, called _pye powder_, or _pedes pulverizati_. Some fairs are free, others charged with tolls and impositions. 
At free fairs, traders, whether natives or foreigners, are allowed to enter the kingdom, and are under the royal protection in coming and returning. They and their agents, with their goods, also their persons and goods, are exempt from all duties and impositions, tolls and servitudes; and such merchants going to or coming from the fair cannot be arrested, or their goods stopped. The prince only has the power to establish fairs of any kind. These fairs make a considerable article in the commerce of Europe, especially those of the Mediterranean, or inland parts, as Germany. The most famous are those of Frankfort and Leipsic; the fairs of Novi, in the Milanese; that of Riga, Arch-angel of St. Germain, at Paris; of Lyons; of Guibray, in Normandy; and of Beauclaire, in Languedoc: those of Porto-Bello, Vera Cruz, and the Havannah, are the most considerable in America. HALBERT. * * * * * THE VIRGINAL. (_For the Mirror_.) A rare and beautiful relic of the olden time was lately presented to the museum of the Northern Institution, by William Mackintosh, Esq. of Milbank--an ancient virginal, which was in use among our ancestors prior to the invention of the spinnet and harpsichord. Mary, Queen of Scots, who delighted in music, in her moments of "joyeusitie" as John Knox phrases it, used to play finely on the virginal; and her more fortunate rival, Queen Elizabeth, was so exquisite a performer on the same instrument, that Melville says, on hearing her once play in her chamber, he was irresistibly drawn into the room. The virginal now deposited in the museum formerly belonged to a noble family in Inverness, and is considered to be the only one remaining in Scotland. It is made of oak, inlaid with cedar, and richly ornamented with gold. The cover and sides are beautifully painted with figures of birds, flowers, and leaves, the colours of which are still comparatively fresh and undecayed. 
On one part of the lid is a grand procession of warriors, whom a bevy of fair dames are propitiating by presents or offerings of wine and fruits. Altogether, the virginal may be regarded as a fine specimen of art, and is doubly interesting as a memorial of times long gone by. W.G.C. * * * * * HERSCHEL'S TELESCOPE. (_To the Editor of the Mirror_.) Your correspondent, a _Constant Reader_, in No. 330 of the MIRROR, is informed that the identical telescope which he mentions is now in the possession of Mr. J. Davies, optician, 101, High-street, Mary-le-bone, where it may be seen in a finished and perfect state. It is reckoned the best and most complete of its size in Europe. It was ordered to be made for his late majesty George III. as a challenge against the late Dr. Herschel's; but was prevented from being completed till some time after. The metals, 9-1/4 inches in diameter, having a diagonal eye-piece, four eye tubes of different magnifying powers, and three small specula of various radii, were made by Mr. Watson. J.D. * * * * * ANCIENT ROMAN FESTIVALS. * * * * * OCTOBER. (_For the Mirror_.) The _Augustalia_ was a festival at Rome, in commemoration of the day on which Augustus returned to Rome, after he had established peace over the different parts of the empire. It was first established in the year of Rome 735. The _Fontinalia_, or _Fontanalia_, was a religious feast, held among the Romans in honour of the deities who presided over fountains or springs. Varro observes, that it was the custom to visit the wells on those days, and to cast crowns into fountains. This festival was observed on the 13th of October. The _Armilustrum_ was a feast held on the 19th of October, wherein they sacrificed, armed at all points, and with the sound of trumpets. The sacrifice was intended for the expiation of the armies, and the prosperity of the arms of the people of Rome. This feast may be considered as a kind of benediction of arms. It was first observed among the Athenians. 
P.T.W. * * * * * THE ANECDOTE GALLERY. LORD BYRON AT MISSOLONGHI. [The _Foreign Quarterly Review_ gives the following sketch as a "_pendant_ to Mr. Pouqueville's picture of the poet, given in a preceding page," and requoted by us in the last No. of the MIRROR. It is from a History of Greece, by Rizo, a Wallachian sentimentalist of the first order, and in enthusiasm and exuberance of style, it will perhaps compare with any previous sketches of the late Lord Byron: but the romantic interest which Rizo has thrown about these "more last words" will doubtless render them acceptable to our readers.] For several years a man, a poet, excited the admiration of civilized people. His sublime genius towered above the atmosphere, and penetrated, with a searching look, even into the deepest abysses of the human heart. Envy, which could not reach the poet, attacked the man, and wounded him cruelly; but, too great to defend, and too generous to revenge himself, he only sought for elevated impressions, and "_vivoit de grand sensations_," (which we cannot translate), capable of the most noble devotedness, and, persuaded that excellence is comprised in justice, he embraced the cause of the Greeks. Still young, Byron had traversed Greece, _properly so called_, and described the moral picture of its inhabitants. He quitted these countries, pitying in his verses the misery of the Greeks, blaming their lethargy, and despising their stupid submission; so difficult is it to know a nation by a rapid glance. What was the astonishment of the poet, when some years later he saw these people, whom he had thought unworthy to bear the name of Greeks, rise up with simultaneous eagerness, and declare, in the face of the world, that "they _would_ again become a nation." Byron hesitated at first; ancient prepossessions made him attribute this rupture to a partial convulsion, the ultimate effort of a being ready to breathe the last sigh. 
Soon new prodigies, brilliant exploits, and heroic constancy, which sustained itself in spite of every opposition, proved to him that he had ill-judged this people, and excited him to repair his error by the sacrifice of his fortune and life; he wished to concur in the work of regeneration. From the shores of the beautiful Etruria he set sail for Greece, in the month of August, 1823. He visited at first the seven Ionian Isles, where he sojourned some time, busied in concluding the first Greek loan. The death of Marco Botzaris redoubled the enthusiasm of Byron, and perhaps determined him to prefer the town of Missolonghi, which already showed for its glory the tombs of Normann, Kyriakoulis, and Botzaris. Alas! that town was destined, four months later, to reckon another mausoleum! Towards the month of November a Hydriote brig of war, commanded by the nephew of the brave Criezy, sailed to Cephalonia to take him on board, and bring him to Missolonghi; but the Septinsular government, not permitting ships bearing a Greek flag to come into its harbours, Byron was obliged to pass to Zante in a small vessel, and to join the Greek brig afterwards, which was waiting for him near Zante. Hardly was Byron on board when he kissed the mainmast, calling it "_sacred wood_." The ship's crew astonished at this whimsical behaviour, regarded him in silence; suddenly Byron turned towards the captain and the sailors, whom he embraced with tears, and said to them, "It is by this wood that you will consolidate your independence." At these words the sailors, moved with enthusiasm, regarded him with admiration. Byron soon reached Missolonghi: the members of the Administrative Council received him at the head of two thousand soldiers drawn up in order. The artillery of the place, and the discharge of musquetry announced the happy arrival of this great man. All the inhabitants ran to the shore, and welcomed him with acclamations. 
As soon as he had entered the town, he went to the hotel of the Administrative Council, where he was complimented by Porphyrios, Archbishop of Arta, Lepanto and Etolia, accompanied by all his clergy. The first words of Byron were, "Where is the brother of the modern Leonidas?" Constantine Botzaris, a young man, tall and well made, immediately stepped forward, and Byron thus accosted him:--"Happy mortal! Thou art the brother of a hero, whose name will never be effaced in the lapse of ages!" Then perceiving a great crowd assembled under the windows of the hotel, he advanced towards the casement, and said, "Hellenes! you see amongst you an Englishman who has never ceased to study Greece in her antiquity, and to think of her in her modern state; an Englishman who has always invoked by his vows that liberty, for which you are now making so many heroic efforts. I am grateful for the sentiments which you testify towards me; in a short time you will see me in the middle of your phalanxes, to conquer or perish with you." A month afterwards the government sent him a deputation, charged to offer him a sword and the patent of Greek citizenship; at the same time the town of Missolonghi inscribed him in its archives. For this public act they prepared a solemn ceremony for him; they fixed beforehand the day--they invited there by circular letters the inhabitants of the neighbouring districts--and more than twenty thousand persons arrived at Missolonghi. Byron in a Greek costume, preceded and followed by all the military, who loved him, proceeded to the church, where the Archbishop Porphyrios and the bishop of Rogon, Joseph, that martyr of religion and his country, received him in the vestibule of the church, clothed in their sacerdotal habits; and, after having celebrated mass, they offered him the sword and the patent of citizenship. 
Byron demanded that the sword should be first dedicated on the tomb of Marco Botzaris; and immediately the whole retinue, and an immense crowd, went out of the church to the tomb of that warrior, which had been ornamented with beautiful marble at the expense of the poet. The archbishop placed the sword upon this tomb, and then Byron, to inspire the Greeks with enthusiasm, advanced with a religious silence, and stopping all on a sudden, he pronounced this discourse in the Greek tongue:--"What man reposes buried under this stone? What hollow voice issues from this tomb? What is this sepulchre, from whence will spring the happiness of Greece? But what am I saying? Is it not the tomb of Marco Botzaris, who has been dead some months, and who, with a handful of brave men, precipitated himself upon the numerous ranks of the most formidable enemies of Greece? How dare I approach the sacred place where he reposes--I, who neither possess his heroism nor his virtues? However, in touching this tomb, I hope that its emanations will always inflame my heart with patriotism." So saying, and advancing towards the sepulchre, he kissed it while shedding tears. Every spectator exclaimed, "Lord Byron for ever!" "I see," added his lordship, "the sword and the letter of citizenship, which the government offers me; from this day I am the fellow-citizen of this hero, and of all the brave people who surround me. Hellenes! I hope to live with you, to fight the enemy with you, and to die with you if it be necessary." Byron, superior to vulgar prejudice, saw in the manners of the _pallikares_ an ingenuous simplicity, a manly frankness and rustic procedure, but full of honour; he observed in the people a docility and constancy capable of the greatest efforts, when it shall be conducted by skilful and virtuous men; he observed amongst the Greek women natural gaiety, unstudied gentleness, and religious resignation to misfortunes. 
Byron did not pretend to bend a whole people to his tastes and European habits. He came not to censure with a stern look their costumes, their dances, and their music; on the contrary, he entered into their national dances, he learned their warlike songs, he dressed himself like them, he spoke their language; in a word, he soon became a true _Roumeliote_. Consequently, he was adored by all Western Greece; every captain acknowledged him with pleasure as his chief; the proud Souliots gloried in being under his immediate command. The funds of the first loan being addressed to him, and submitted to his inspection, gave him influence, not only over continental Greece, but even over the Peloponnesus; so that he was in a situation, if not sufficient to stifle discord, at least to keep it within bounds. Not having yet fathomed the character of all the chief people, as well civil as military, he was sometimes deceived in the beginning of his sojourn, which a little hurt his popularity; but being completely above trifling passions, being able to strengthen by his union with it the party which appeared to him the most patriotic, he might without any doubt, with time and experience, have played a part the most magnificent and salutary to Greece. At first he had constructed, at his own expense, a fort in the little isle of Xeclamisma, the capture of which would have given great facilities to the enemies to attack by sea Missolonghi or Anatoliko. Missolonghi gave to this important fort the name of "Fort Byron." This nobleman conceived afterwards, studied and prepared an expedition against the strong place of Lepanto, the capture of which would have produced consequences singularly favourable. Once in possession of the means of regularly paying the soldiers, he would have been able to form a choice body, and take the town, which did not present any difficulty of attack, either on account of the few troops shut up there, or the weakness of its fortifications. 
Byron only waited the arrival of the loan, to begin his march. Thus he led an agreeable life in the midst of a nation which he aimed at saving. Enchanted with the bravery of the Souliots, and their manners, which recalled to him the simplicity of Homeric times, he assisted at their banquets, extended upon the turf; he learnt their pyrrhic dance, and he sang in unison the airs of Riga, harmonizing his steps to the sound of their national mandolin. Alas! he carried too far his benevolent condescension. Towards the beginning of April he went to hunt in the marshes of Missolonghi. He entered on foot in the shallows; he came out quite wet, and, following the example of the _pallikares_ accustomed to the _malaria_, he would not change his clothes, and persisted in having them dried upon his body. Attacked with an inflammation upon the lungs, he refused to let himself be bled, notwithstanding the intreaties of his physician, of Maurocordato and all his friends. His malady quickly grew worse; on the fourth day Byron became delirious; by means of bleeding he recovered from his drowsiness, but without being able to speak; then, feeling his end approaching, he gave his attendants to understand that he wished to take leave of the captains and all the Souliots. As each approached, Byron made a sign to them to kiss him. At last he expired in the arms of Maurocordato, whilst pronouncing the names of his daughter and of Greece. His death was fatal to the nation, which it plunged in mourning and tears. * * * * * MANNERS & CUSTOMS OF ALL NATIONS. * * * * * CEREMONIES RELATING TO THE HAIR. (_For the Mirror_.) Among the ancient Greeks, all dead persons were thought to be under the jurisdiction of the infernal deities, and therefore no man (says Potter) could resign his life, till some of his hairs were cut to consecrate to them. 
During the ceremony of laying out, clothing the dead, and sometimes the interment itself, the hair of the deceased person was hung upon the door, to signify the family was in mourning. It was sometimes laid upon the dead body, sometimes cast into the funeral pile, and sometimes placed upon the grave. Electra in Sophocles says, that Agamemnon had commanded her and Chrysothemis to pay him this honour:-- "With drink-off'rings and _locks_ of _hair_ we must, According to his will, his _tomb_ adorn." Candace in Ovid bewails her calamity, in that she was not permitted to adorn her lover's tomb with her locks. At Patroclus's funeral, the Grecians, to show their affection and respect to him, covered his body with their hair; Achilles cast it into the funeral pile. The custom of nourishing the hair on religious accounts seems to have prevailed in most nations. Osiris, the Egyptian, consecrated his hair to the gods, as we learn from Diodorus; and in Arian's account of India, it appears it was a custom there to preserve their hair for some god, which they first learnt (as that author reports) from Bacchus. The Greeks and Romans wore false hair. It was esteemed a peculiar honour among the ancient Gauls to have long hair. For this reason Julius Caesar, upon subduing the Gauls, made them cut off their hair, as a token of submission. In the royal family of France, it was a long time the peculiar mark and privilege of kings and princes of the blood to wear long hair, artfully dressed and curled; every body else being obliged to be polled, or cut round, in sign of inferiority and obedience. In the eighth century, it was the custom of people of quality to have their children's hair cut the first time by persons they had a particular honour and esteem for, who, in virtue of this ceremony, were reputed a sort of spiritual parents or godfathers to them. 
In the year 1096, there was a canon, importing, that such as wore long hair should be excluded coming into church when living, and not be prayed for when dead. Charlemagne wore his hair very short, his son shorter; Charles the _Bald_ had none at all. Under Hugh Capet it began to appear again; this the ecclesiastics were displeased with, and excommunicated all who let their hair grow. Peter Lombard expostulated the matter so warmly with Charles the Young, that he cut off his own hair; and his successors, for some generations, wore it very short. A professor of Utrecht, in 1650, wrote expressly on the question, Whether it be lawful for men to wear long hair? and concluded for the negative. Another divine, named Reeves, who had written for the affirmative, replied to him. In _New_ England a declaration was inscribed in the register of the colony against the practice of wearing long hair, which was principally levelled at the Quakers, with unjust severity. P.T.W. * * * * * Pagoda in Kew Gardens. [Illustration: Pagoda in Kew Gardens.] In one of the wildernesses of Kew Gardens stands the _Great Pagoda_, erected in the year 1762, from a design in imitation of the Chinese Taa. The base is a regular octagon, 49 feet in diameter; and the superstructure is likewise a regular octagon on its plan, and in its elevation composed of 10 prisms, which form the 10 different stories of the building. The lowest of these is 26 feet in diameter, exclusive of the portico which surrounds it, and 18 feet high; the second is 25 feet in diameter, and 17 feet high; and all the rest diminish in diameter and height, in the same arithmetical proportion, to the ninth story, which is 18 feet in diameter and 10 feet high. The tenth story is 17 feet in diameter, and, with the covering, 20 feet high, and the finishing on the top is 17 feet high; so that the whole structure, from the base to the top of the fleuron, is 163 feet. 
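The storey-by-storey figures may be checked against the stated total. The following sketch (assuming, as the text's "same arithmetical proportion" implies, that storeys one to nine each lose exactly one foot of height and one foot of diameter) confirms the 163 feet:

```python
# A sketch verifying the Pagoda's stated total height of 163 feet.
# Assumption: storeys 1-9 each diminish by exactly 1 foot in height
# and in diameter, per the text's "same arithmetical proportion".
storey_heights = list(range(18, 9, -1))   # storeys 1-9: 18 ft down to 10 ft
storey_heights.append(20)                 # storey 10, its covering included
finial = 17                               # "the finishing on the top"
total_height = sum(storey_heights) + finial
print(total_height)                       # 163, as the text states

diameters = list(range(26, 17, -1))       # storeys 1-9: 26 ft down to 18 ft
print(diameters[8])                       # 18 ft for the ninth storey
```

The arithmetic is internally consistent: nine lower storeys of 18 down to 10 feet sum to 126, and with the 20-foot tenth storey and 17-foot finial the whole reaches 163 feet.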
Each story finishes with a projecting roof, after the Chinese manner, covered with plates of varnished iron of different colours, and round each of them is a gallery enclosed with a rail. All the angles of the roof are adorned with large dragons, eighty in number, covered with a kind of thin glass of various colours, which produces a most dazzling reflection; and the whole ornament at the top is double gilt. The walls of the building are composed of very hard bricks; the outside of well-coloured and well-matched greystocks, (bricks,) neatly laid. The staircase is in the centre of the building. The prospect opens as you advance in height; and from the top you command a very extensive view on all sides, and, in some directions, upwards of forty miles distant, over a rich and variegated country. * * * * * FINE ARTS * * * * * MR. HAYDON'S PICTURE OF "CHAIRING THE MEMBERS." In our last volume we were induced to appropriate nearly six of our columns to a description of Mr. Haydon's Picture of the Mock Election in the King's Bench Prison--or rather _the first_ of a series of pictures to illustrate the Election, the subject of the present notice being the Second, or the Chairing of the Members, which was intended for the concluding scene of the burlesque. It will, therefore, be unnecessary for us here to give any additional explanation of the real life of these paintings, except so far as may be necessary to the explanation of the present picture. The "_Chairing_" was acted on a water butt one evening, but was to have been again performed in more magnificent costume the next day; just, however, as all the actors in this eccentric masquerade, High Sheriff, Lord Mayor, Head Constable, Assessor, Poll Clerks, and Members, were ready dressed, and preparing to start, the marshal interfered, stopped the procession, and, after some parley, was advised to send for the guards. "About the middle of a sunny day," says Mr. 
Haydon, "when all was quiet, save the occasional cracking of a racket ball, while some were reading, some smoking, some lounging, some talking, some occupied with their own sorrows, and some with the sorrows of their friends, in rushed six fine grenadiers with a noble fellow of a sergeant at their head, with bayonets fixed, and several rounds of ball in their cartouches, expecting to meet (by their looks) with the most desperate resistance." "The materials thus afforded me by the entrance of the guards, I have combined in one moment;" or "I have combined in one moment what happened at different moments; the _characters_ and _soldiers are all portraits_. I have only used the poets and painters' license, to make out the second part of the story, a part that happens in all elections, viz. the chairing of the successful candidates." "In the corner of the picture, on the left of the spectator, are three of the guards, drawn up across the door, standing at ease, with all the self-command of soldiers in such situations, hardly suppressing a laugh at the ridiculous attempts made to oppose them; in front of the guards, is the commander of the enemy's forces; viz.--a little boy with a tin sword, on regular guard position, ready to receive and oppose them, with a banner of 'Freedom of Election,' hanging on his sabre; behind him stands the Lord High Sheriff, affecting to charge the soldiers with his mopstick and pottle. He is dressed in a magnificent suit of decayed splendour, with an old court sword, loose silk stockings, white shoes, and unbuckled knee-bands; his shoulders are adorned with white bows, and curtain rings for a chain, hung by a blue ribbon from his neck. Next to him, adorned with a blanket, is a character of voluptuous gaiety, helmeted by a saucepan, holding up the cover for a shield, and a bottle for a weapon. 
Then comes the Fool, making grimaces with his painted cheeks, and bending his fists at the military; while the Lord Mayor with his white wand, is placing his hand on his heart with mock gravity and wounded indignation at this violation of _Magna Charta_ and civil rights. Behind him are different characters, with a porter pot for a standard, and a watchman's rattle; while in the extreme distance, behind the rattle, and under the wall, is a ragged Orator addressing the burgesses on this violation of the privileges of Election. "Right over the figure with a saucepan, is a Turnkey, holding up a key and pulling down the celebrated Meredith; who, quite serious, and believing he will really sit in the House, is endeavouring to strike the turnkey with a champagne glass. The gallant member is on the shoulders of two men, who are peeping out and quizzing. "Close to Meredith is his fellow Member, dressed in a Spanish hat and feather, addressing the Sergeant opposite him, with an arch look, on the illegality of his entrance at elections, while a turnkey has taken hold of the member's robe, and is pulling him off the water butt with violence. "The sergeant, a fine soldier, one of the heroes of Waterloo, is smiling and amused, while a grenadier, one of the other three under arms, is looking at his sergeant for orders. "In the corner, directly under the sergeant, is a dissipated young man, addicted to hunting and sports, without adequate means for the enjoyment, attended by his distressed family. He, half intoxicated, has just drawn a cork, and is addressing the bottle, his only comfort, while his daughter is delicately putting it aside and looking with entreaty at her father. 
"The harassed wife is putting back the daughter, unwilling to deprive the man she loves, of what, though a baneful consolation, is still one; while the little, shoeless boy with his hoop, is regarding his father with that strange wonder, with which children look at the unaccountable alteration in features and expression, that takes place under the effects of intoxication. "Three pawnbroker's duplicates, one for the child's shoes, 1_s_. 6_d_., one for the wedding ring, 5_s_., and one for the wife's necklace, 7_l_., lie at the feet of the father, with the Sporting Magazine; for drunkards generally part with the ornaments or even necessaries of their wives and children before they trespass on their own. "At the opposite corner lies curled up the Head Constable, hid away under his bed-curtain, which he had for a robe, and slyly looking, as if he hoped nobody would betray him. By his side is placed a table, with the relics of a luxurious enjoyment, while a washing tub as a wine cooler, contains, under the table, Hock, Champagne, Burgundy, and a Pine. "Directly over the sergeant, on the wall, are written, 'The _Majesti_ of the _Peepel_ for ever--huzza!'--'No military at Elections!' and 'No Marshal!'--on the standards to the left, are '_Confusion to Credit, and no fraudulent Creditors_.' In the window are a party with a lady smoking a hookah; on the ledge of the window, "Success to the detaining Creditor!" --At the opposite window is a portrait of the Painter, looking down on the extraordinary scene with great interest--underneath him is, 'Sperat infestis.' "On a board under the lady smoking, is written the order of the Lord Mayor, enjoining _Peace_, as follows:-- "Banco Regis, Court House, July 16, In the Sixth year of the Reign of GEORGE IV. 
"Peremptorily ordered-- "That the Special Constables and Headboroughs of this ancient Bailwick do take into custody all Persons found in any way committing a breach of the Peace, during the Procession of Chairing the Members returned to represent this Borough. "SIR ROBERT BIRCH, (Collegian) Lord Mayor. "'A New Way to pay Old Debts,'--is written over the first turnkey; and below it, 'N.B. A very old way, discovered 3394 years B.C.;' and in the extreme distance, over a shop, is--'Dealer in every thing genuine.' "While the man beating the long drum, at the opposite end, another the cymbals, and the third blowing a trumpet, with the windows all crowded with spectators, complete the composition, with the exception of the melancholy victim behind the High Sheriff. "I recommend the contemplation of this miserable creature, once a gentleman, to all advocates of imprisonment for debt. First rendered reckless by imprisonment--then hopeless--then sottish--and, last of all, from utter despair of freedom, insane! Round his withered temples is a blue ribbon, with 'Dulce est pro Patria mori,' (it is sweet to die for one's country); for he is baring his breast to rush on the bayonets of the guards, a willing sacrifice, as he believes, poor fellow, for a great public principle. In his pocket he has three pamphlets, 'On Water Drinking, or The Blessings of Imprisonment for Debt,'--and Adam Smith's 'Moral Essays.'--Ruffles hang from his wrists, the relics of former days, rags cover his feeble legs, one foot is naked, and his appearance is that of a decaying being, mind and body." Such is Mr. Haydon's "Explanation" of his own Picture; and it only remains for us to give the reader some idea of its most prominent beauties. As a whole, it is very superior to the "Election," highly as we were disposed to rate the merits of that performance. The style is masterly throughout, and every shade of the colouring has all the depth and richness which characterize works of real genius. 
There is a spirit in every touch which differs as much from the softened and soulless compositions of certain modern artists, as does the florid architecture of the ancients from the starved proportions of these days, or the rich and graceful style of the Essayists from the fabrications of little, self-conceited biographers. In short, the whole scene is dashed off in the first style of art; the subject and humour are all over English--true to nature, and so forcible as to seize on the attention of the most listless beholder. We must notice a few of the details. The three guards are foremost in the picture, and in merit; the struggle in their countenances between discipline and a sense of the ludicrous scene before them is admirably represented; as well as the little urchin with his tin sword. The centre figure of the High Sheriff, with his tattered and faded finery of office, is equally clever; but the skill with which the artist has contrived to express his forced mirth, and mopstick bravado, is still more forcible. The troubled countenance of the Lord Mayor is an excellent portrait of the indignation of little authority when perturbed by men of greater place. The faces of the turnkey and the sergeant are likewise admirable; and that of the soldier looking towards the latter for orders, is like an excellent piece of byplay in the farce. The drunken patriot, behind the High Sheriff, is well entitled to the attention which the artist, in his explanation, suggests; but the spectator must not dwell too long on this sorrowful wreck of fallen nature. The group in the foreground of the right hand corner, is an episode which must not be omitted, for it corresponds with the fine portrait in the same situation in the "Election" picture. 
The reckless dissipation of the fine, young fox-hunter, the half intoxicated chuckle with which he holds the bottle, the grief of his daughter and wife, and the little shoeless boy with his hoop, are finely contrasted with the rich humour and extravagant burlesque of all around them. The slyness of the Head Constable, in the left hand corner, half smothered in his mock robes, is expressively told; and the painter is a capital likeness. From the success of Mr. Haydon in the particular line of art requisite for scenes of real humour, it is not unlikely that his execution of the first picture, the "Election" may prove one of the most fortunate events in his professional career, and turn out to be one of the "sweet uses of adversity," by eliciting talent which he probably did not believe himself to possess. Much as we admire this style of art, we can but deplore that purchasers cannot be found for such pictures as his _Entry into Jerusalem_, and _Judgment of Solomon_, both which, with two others, are exhibited in the room with the Chairing of the Members. Out of the scores of new churches which are yearly completed, surely some altar-pieces might be introduced with propriety; and when we consider the peculiar influence which such scenes as those chosen by Mr. Haydon are known to possess over the human heart, we do not think their entire exclusion from modern churches contributes to their devotional character. Such pictures are intended for better purposes than mere seclusion in large galleries and mansions, of which there are but comparatively few in England; and it is always with regret that we see these noble efforts of art in such profitless situations. Occasionally a nobleman, or parochial taste, introduces a valuable painted window, and sometimes an altarpiece into a church; but we wish the practice were more general. * * * * * RETROSPECTIVE GLEANINGS * * * * * ENGLAND IN THE DAYS OF GOOD "QUEEN BESS." 
The misery and mendicity which prevailed in this country before the provisions of the poor laws in the time of Elizabeth became duly enforced, might be proved by the following extract from a curious old pamphlet, which describes, in very forcible language, the poverty and idleness which prevailed in one of the fairest and most fertile districts of the kingdom, viz.--. There bee, says this author, within a mile and a halfe from my house every waye, five hundred poore habitations; whose greatest meanes consist in spinning flaxe, hemp, and hurdes. They dispose the seasons of the yeare in this manner; I will begin with May, June, and July, (three of the merriest months for beggers,) which yield the best increase for their purpose, to raise multitudes: whey, curdes, butter-milk, and such belly provision, abounding in the neighbourhood, serves their turne. As wountes or moles hunt after wormes, the ground being dewable, so these idelers live intolerablie by other meanes, and neglect their painfull labours by oppressing the neighbourhood. August, September, and October, with that permission which the Lord hath allowed the poorer sorte to gather the eares of corne, they do much harme. I have seen three hundred leazers or gleaners in one gentleman's corn-field at once; his servants gathering and stouking the bound sheaves, the sheaves lying on the ground like dead carcases in an overthrown battell, they following the spoyle, not like souldiers (which scorne to rifle) but like theeves desirous to steale; so this army holdes pillaging, wheate, rye, barly, pease, and oates; oates, a graine which never grew in Canaan, nor AEgypt, and altogether out of the allowance of leazing. 
Under colour of the last graine, oates, it being the latest harvest, they doe (without mercy in hotte bloud) steale, robbe orchards, gardens, hop-yards, and crab trees; so what with leazing and stealing, they doe poorly maintaine themselves November, December, and almost all January, with some healpes from the neighbourhood. The last three moneths, February, March, and Aprill, little labour serves their turne, they hope by the heat of the sunne, (seasoning themselves, like snakes, under headges,) to recover the month of May with much poverty, long fasting, and little praying; and so make an end of their yeares travel in the Easter holy days.

* * * * *

BEGGARS.

In the earlier periods of their history, both in England and Scotland, beggars were generally of such a description as to entitle them to the epithet of _sturdy_; accordingly they appear to have been regarded often as impostors and always as nuisances and pests. "Sornares," so violently denounced in those acts, were what are here called "masterful beggars," who, when they could not obtain what they asked for by fair means, seldom hesitated to take it by violence. The term is said to be Gaelic, and to import a soldier. The life of such a beggar is well described in the "Belman of London," printed in 1608--"The life of a beggar is the life of a souldier. He suffers hunger and cold in winter, and heate and thirste in summer; he goes lowsie, he goes lame; he is not regarded; he is not rewarded; here only shines his glorie. The whole kingdome is but his walk; a whole cittie is but his parish. In every man's kitchen is his meate dressed; in every man's sellar lyes his beere; and the best men's purses keepe a penny for him to spend."

* * * * *

CURIOUS MANORIAL CUSTOM.

At King's Hill, about half a mile north-east of Rochford Church, Essex, is held what is called the _Lawless Court_, a whimsical custom, the origin of which is not known.
On the Wednesday morning next after Michaelmas day, the tenants are bound to attend upon the first cock-crowing, and to kneel and do their homage, without any kind of light, but such as heaven will afford. The steward of the court calls all such as are bound to appear, with as low a voice as possible, giving no notice when he goes to execute his office; however, he that does not give an answer is deeply amerced. They are all to whisper to each other, nor have they any pen and ink, but supply that deficiency with a coal; and he that owes suit and service, and appears not, forfeits to the lord of the manor double his rent every hour he is absent. A tenant, some years ago, forfeited his land for non-attendance, but was restored to it, the lord taking only a fine.

HALBERT H.

* * * * *

SPIRIT OF THE PUBLIC JOURNALS

* * * * *

THE PET DOG.

... footed and open-hearted lasses at the Rose; now standing on his hind-legs to extort, by sheer beggary, a scanty morsel from some pair of "drowthy ... Brow ... whisp of dry straw, on which to repose his sorry carcass, some comfort in his disconsolate condition ... every body's opposition, by the activity of her protection, and the pertinacity of her self-will; made him sharer of her bed ... like Dash. His master has found out that Dash ... every where, and are going with us now to the Shaw, or rather to the cottage by the Shaw, to bespeak milk and butter of our little dairy-woman, Hannah Bint--a housewifely occupation, to which we owe some of our pleasantest rambles.--_Miss Mitford_.--_Month. Mag_.

* * * * *

FROM THE ROMAIC.

When we were last, my gentle Maid,
In love's embraces twining,
'Twas Night, who saw, and then betray'd!
"Who saw?" Yon Moon was shining.
A gossip Star shot down, and he
First told our secret to the Sea.

The Sea, who never secret kept,
The peevish, blustering railer!
Told it the Oar, as on he swept;
The Oar informed the Sailor.
The Sailor whisper'd it to his fair,
And she--she told it every where!

_New Monthly Magazine_.
* * * * * NOTES OF A READER. * * * * * EELS. The problem of the generation of eels is one of the most abstruse and curious in natural history; but we have been much pleased, and not a little enlightened, by some observations on the subject in Sir Humphrey Davy's delightful little volume, _Salmonia_, of which the following is the substance:-- Although the generation of eels occupied the attention of Aristotle, and has been taken up by the most distinguished naturalists since his time, it is still unsolved. Lacepede, the French naturalist, asserts, in the most unqualified way, that they are _viviparous_; but we do not remember any facts brought forward on the subject. Sir Humphrey then goes on to say--This is certain, that there are two migrations of eels--one up and one down rivers, one _from_ and the other _to_ the sea; the first in spring and summer, the second in autumn or early winter. The first of very small eels, which are sometimes not more than two or two and a half inches long; the second of large eels, which sometimes are three or four feet long, and which weigh from 10 to 15, or even 20 lbs. There is great reason to believe that all eels found in fresh water are the results of the first migration; they appear in millions in April and May, and sometimes continue to rise as late even as July and the beginning of August. I remember this was the case in Ireland in 1823. It had been a cold, backward summer; and when I was at Ballyshannon, about the end of July, the mouth of the river, which had been in flood all this month, under the fall, was blackened by millions of little eels, about as long as the finger, which were constantly urging their way up the moist rocks by the side of the fall. Thousands died, but their bodies remaining moist, served as the ladder for others to make their way; and I saw some ascending even perpendicular stones, making their road through wet moss, or adhering to some eels that had died in the attempt. 
Such is the energy of these little animals, that they continue to find their way, in immense numbers, to Loch Erne. The same thing happens at the fall of the Bann, and Loch Neagh is thus peopled by them; even the mighty Fall of Shaffausen does not prevent them from making their way to the Lake of Constance, where I have seen many very large eels. There are eels in the Lake of Neufchatel, which communicates by a stream with the Rhine; but there are none in the Lake of Geneva, because the Rhone makes a subterraneous fall below Geneva; and though small eels can pass by moss or mount rocks, they cannot penetrate limestone rocks, or move against a rapid descending current of water, passing, as it were, through a pipe. Again: no eels mount the Danube from the Black Sea; and there are none found in the great extent of lakes, swamps, and rivers communicating with the Danube--though some of these lakes and morasses are wonderfully fitted for them, and though they are found abundantly in the same countries, in lakes and rivers connected with the ocean and the Mediterranean. Yet, when brought into confined water in the Danube, they fatten and thrive there. As to the instinct which leads young eels to seek fresh water, it is difficult to reason; probably they prefer warmth, and, swimming at the surface in the early summer, find the lighter water warmer, and likewise containing more insects, and so pursue the courses of fresh water, as the waters from the land, at this season, become warmer than those from the sea. Mr. J. Couch, in the Linnaean Transactions, says the little eels, according to his observation, are produced within reach of the tide, and climb round falls to reach fresh water from the sea. I have sometimes seen them in spring, swimming in immense shoals in the Atlantic, in Mount Bay, making their way to the mouths of small brooks and rivers. 
When the cold water from the autumnal flood begins to swell the rivers, this fish tries to return to the sea; but numbers of the smaller ones hide themselves during the winter in the mud, and many of them form, as it were, masses together. Various authors have recorded the migration of eels in a singular way; such as Dr. Plot, who, in his History of Staffordshire, says they pass in the night across meadows from one pond to another; and Mr. Arderon, in the Philosophical Transactions, gives a distinct account of small eels rising up the flood-gates and posts of the water-works of the city of Norwich; and they made their way to the water above, though the boards were smooth planed, and five or six feet perpendicular. He says, when they first rose out of the water upon the dry board, they rested a little--which seemed to be till their slime was thrown out, and sufficiently glutinous--and then they rose up the perpendicular ascent with the same facility as if they had been moving on a plane surface.--There can, I think, be no doubt that they are assisted by their small scales, which, placed like those of serpents, must facilitate their progressive motion; these scales have been microscopically observed by Lewenhoeck. Eels migrate from the salt water of different sizes, but I believe never when they are above a foot long--and the great mass of them are only from two and a half to four inches. They feed, grow, and fatten in fresh water. In small rivers they seldom become very large; but in large, deep lakes they become as thick as a man's arm, or even leg; and all those of a considerable size attempt to return to the sea in October or November, probably when they experience the cold of the first autumnal rains. Those that are not of the largest size, as I said before, pass the winter in the deepest parts of the mud of rivers and lakes, and do not seem to eat much, and remain, I believe, almost torpid. 
Their increase is not certainly known in any given time, but must depend upon the quantity of their food; but it is probable they do not become of the largest size from the smallest in one or even two seasons; but this, as well as many other particulars, can only be ascertained by new observations and experiments. Block states, that they grow slowly, and mentions that some had been kept in the same pond for fifteen years.

* * * * *

At Munich, every child found begging is taken to a charitable establishment; the moment he enters his portrait is given to him, representing him in his rags, and he promises by oath to keep it all his life.

* * * * *

INFANCY.

[This is _one_ of the gems of the quarto volume of poetry recently published by the author of the "Omnipresence of the Deity;" but in our next we intend stringing together a few of the resplendent beauties which illumine almost every page.]

On yonder mead, that like a windless lake
Shines in the glow of heaven, a cherub boy
Is bounding, playful as a breeze new-born,
Light as the beam that dances by his side.
Phantom of beauty! with his trepid locks
Gleaming like water-wreaths,--a flower of life,
To whom the fairy world is fresh, the sky
A glory, and the earth one huge delight!
Joy shaped his brow, and Pleasure rolls his eye,
While Innocence, from out the budding lip
Darts her young smiles along his rounded cheek.
Grief hath not dimm'd the brightness of his form,
Love and Affection o'er him spread their wings,
And Nature, like a nurse, attends him with
Her sweetest looks. The humming bee will bound
From out the flower, nor sting his baby hand;
The birds sing to him from the sunny tree;
And suppliantly the fierce-eyed mastiff fawn
Beneath his feet, to court the playful touch.

To rise all rosy from the arms of sleep,
And, like the sky-bird, hail the bright-cheek'd morn
With gleeful song, then o'er the bladed mead
To chase the blue-wing'd butterfly, or play
With curly streams; or, led by watchful Love,
To hear the chorus of the trooping waves,
When the young breezes laugh them into life!
Or listen to the mimic ocean roar
Within the womb of spiry sea-shell wove,--
From sight and sound to catch intense delight,
And infant gladness from each happy face,--
These are the guileless duties of the day:
And when at length reposeful Evening comes,
Joy-worn he nestles in the welcome couch,
With kisses warm upon his cheek, to dream
Of heaven, till morning wakes him to the world.

The scene hath changed into a curtain'd room,
Where mournful glimmers of the mellow sun
Lie dreaming on the walls! Dim-eyed and sad,
And dumb with agony, two parents bend
O'er a pale image, in the coffin laid,--
Their infant once, the laughing, leaping boy,
The paragon and nursling of their souls!
Death touch'd him, and the life-glow fled away,
Swift as a gay hour's fancy; fresh and cold
As winter's shadow, with his eye-lids seal'd,
Like violet-lips at eve, he lies enrobed
An offering to the grave! but, pure as when
It wing'd from heaven, his spirit hath return'd,
To lisp his hallelujahs with the choirs
Of sinless babes, imparadised above.

_Death, a Poem, by R. Montgomery._

* * * * *

THE ZOOLOGICAL SOCIETY.

What a fashionable place
Soon the Regent's Park will grow!
Not alone the human race
To survey its beauties go;
Birds and beasts of every hue,
In order and sobriety,
Come, invited by the Zo-
Ological Society.

Notes of invitation go
To the west and to the east,
Begging of the Hippopo-
Tamus here to come and feast:
Sheep and panthers here we view,
Monstrous contrariety!
All united by the Zo-
Ological Society.

Monkeys leave their native seat,
Monkeys green and monkeys blue,
Other monkeys here to meet,
And kindly ask, "Pray how d'ye do?"
From New Holland the emu,
With his better moiety,
Has paid a visit to the Zo-
Ological Society.

Here we see the lazy tor-
Toise creeping with his shell,
And the drowsy, drowsy dor-
Mouse dreaming in his cell;
Here from all parts of the U-
Niverse we meet variety,
Lodged and boarded by the Zo-
Ological Society.

Bears at pleasure lounge and roll,
Leading lives devoid of pain,
Half day climbing up a pole,
Half day climbing down again;
Their minds tormented by no su-
Perfluous anxiety,
While on good terms with the Zo-
Ological Society.

Would a mammoth could be found
And made across the sea to swim!
But now, alas! upon the ground
The bones alone are left of him:
I fear a hungry mammoth too,
(So monstrous and unquiet he,)
By hunger urged might eat the Zo-
Ological Society!

_The Christmas Box._

* * * * *

INSECTS.

One great protection against all creeping things is, to stir the ground very frequently along the foot of the wall. That is their great place of resort; and frequent stirring and making the ground very fine, disturbs the peace of their numerous families, gives them trouble, makes them uneasy, and finally harasses them to death.

_Cobbett's English Gardener._

* * * * *

SIR W. TEMPLE'S GARDEN.

It was formerly the fashion to have a sort of canal, with broad grass walks on the sides, and with the water coming up to within a few inches of the closely shaven grass; and certainly few things were more beautiful than these. Sir William Temple had one of his own constructing in his gardens at Moor Park. On the outsides of the grass-walks were borders of beautiful flowers. I have stood for hours to look at this canal, than which I have never seen any thing of the gardening kind so beautiful in the whole course of my life.--_Ibid_.

* * * * *

BULBOUS ROOTS.

In glasses filled with water, bulbous roots, such as the hyacinth, narcissus, and jonquil, are blown. The time to put them in is from September to November, and the earliest ones will begin blowing about Christmas.
The glasses should be blue, as that colour best suits the roots; put water enough in to cover the bulb one-third of the way up, less rather than more; let the water be soft, change it once a week, and put in a pinch of salt every time you change it. Keep the glasses in a place moderately warm, and _near to the light_. A parlour window is a very common place for them, but is often too warm, and brings on the plants too early, and causes them to be weakly.--_Ibid_. * * * * * TRAVELLING INVALIDS. We cannot refrain from stating our belief, and this on the authority of intelligent physicians, as well as from personal observation, that much mischief is done by committing invalids to long and precarious journeys, for the sake of doubtful benefits. We have ourselves seen consumptive patients hurried along, through all the discomforts of bad roads, bad inns, and indifferent diet, to places, where certain partial advantages of climate poorly compensated for the loss of the many benefits which home and domestic care can best afford. We have seen such invalids lodged in cold, half-furnished houses, and shivering under blasts of wind from the Alps or Apennines, who might more happily have been sheltered in the vales of Somerset or Devon. On this topic, however, we refrain from saying more--further than to state our belief, that much misapprehension generally prevails, as to the comparative healthiness of England, and other parts of Europe. Certain phrases respecting climate have obtained fashionable currency amongst us, which greatly mislead the judgment as to facts. The accurate statistical tables, now extended to the greater part of Europe, furnish more secure grounds of opinion; and from these we derive the knowledge, that there is no one country in Europe where the average proportion of mortality is so small as in England. Some few details on this subject we subjoin,--tempted to do so by the common errors prevailing in relation to it. 
The proportion of deaths to the population is nearly one-third less in England than in France. Comparing the two capitals, the average mortality of London is about one-fifth less than that of Paris. What may appear a more singular statement, the proportion of deaths in London, a vast and luxurious metropolis, differs only by a small fraction from that of the whole of France; and is considerably less than the average of those Mediterranean shores which are especially frequented by invalids for the sake of health. In Italy, the proportion of deaths is a full third greater than in England; and even in Switzerland and Sweden, though the difference be less, it is still in favour of our own country.--_Q. Rev_. * * * * * NEWSPAPER LOVE. The paper so highly esteemed, entitled, _The Courier de l'Europe_, originated in the following circumstances:-- "Monsieur Guerrier de Berance was a native of Auvergne, whose fortune in the origin was very low, but who by his intrigues succeeded in gaining the place of Procureur General of the Custom-house. He married two wives; the name of the last was Millochin, who was both young and handsome. She soon began to find out that her husband was very disagreeable; and what caused her more particularly to remark his faults was her contrasting him with M. Cevres de la Tour, with whom she fell most desperately in love. This passion became so violent, that Madame Guerrier fled into England with her lover, who, in his turn, left his wife behind him in Paris. The finances of these two lovers growing rather low, M. Sevres de la Tour, who was a man of talent, thought, as a plan to enrich himself, to turn editor to a newspaper, and for this purpose started the _Courier de l'Europe_, which succeeded beyond his most sanguine hopes. Disgust, which commonly follows these sort of unions, caused Madame Guerrier to be deserted by her lover, and she was obliged to turn a teacher of languages for her subsistence.--_The Album of Love_. * * * * * THE GATHERER. 
"A snapper-up of unconsidered trifles."

SHAKSPEARE.

* * * * *

REPLY TO THE DIRGE ON MISS ELLEN GEE, OF KEW.

(_See Mirror, page 223_.)

Forgive, ye beauteous maids of Q,
The much relenting B,
Who vows he never will sting U,
While sipping of your T.

One nymph I wounded in the I,
The charming L N G,
The fates impell'd, I know not Y,
The luckless busy B.

And oh recall the sentence U
Pass'd on your humble B,
Let me remain at happy Q,
Send me not o'er the C.

And I will mourn upon A U,
The death of L N G,
And all the charming maids of Q
Will pity the poor B.

I will hum soft her L E G,
The reason some ask Y,
Because the maiden could not C,
By me she lost her I.

To soothe ye damsels I'll S A,
Far sooner would I B
Myself in funeral R A,
Than wound one fair at T.

F.H.

* * * * *

THE BITER BIT.

In the reign of Charles II. a physician to the court was walking with the king in the gallery of Windsor Castle, when they saw a man repairing a clock fixed there. The physician knowing the king's relish for a joke, accosted the man with, "My good friend, you are continually doctoring that clock, and yet it never goes well. Now if I were to treat my patients in such a way, I should lose all my credit. What can the reason be that you mistake so egregiously?" The man dryly replied, "The reason why you and I, Sir, are not upon a par is plain enough--the sun discovers all my blunders, but the earth covers yours."

G.I.F.

* * * * *

EPITAPH.

On a tablet in the outside wall of the old church, at Taunton, in Somersetshire, is the following on "James Waters, late of London, aged 49."

Death traversing the western road,
And asking where true merit lay,
Made in this town a short abode,
Then took this worthy man away.

W.R.

* * * * *

LIFE.

Grass of levity,
Span in brevity,
Flower's felicity,
Fire of misery,
Wind's stability
Is mortality.

* * * * *

L3. 19_s_. 6_d_. half bound, L3. 17_s_.

* * * * *

Edward, by Dr. Moore .................... 2_s_. 6_d_.
Roderick Random ......................... 2_s_. 6_d_.
The Mysteries of Udolpho ................ 3_s_. 6_d_.
http://www.fullbooks.com/The-Mirror-of-Literature-Amusement-andx765.html
Opened 5 years ago

Closed 5 years ago

Last modified 5 years ago

#16494 closed Bug (fixed)

HttpResponse should raise an error if given a non-string object without `__iter__`

Description

where response = <class 'django.http.HttpResponse'> and response._container = [<Response><Speak loop="1" voice="slt">Hello Friend</Speak></Response>]

Content type = text/xml

matt@dragoon:~/Projects/my_project$ python manage.py test my_app
Creating test database for alias 'default'...
======================================================================
ERROR: test_hello (my_app.tests.RestTestCase)
----------------------------------------------------------------------
print response
Traceback (most recent call last):
  File "/Users/matt/Projects/my_project/my_app/tests.py", line 22, in test_hello
    print result.content
  File "/Library/Python/2.6/site-packages/django/http/__init__.py", line 596, in _get_content
    return smart_str(''.join(self._container), self._charset)
TypeError: sequence item 0: expected string, Response found
----------------------------------------------------------------------
Ran 1 test in 3.177s

FAILED (errors=1)
Destroying test database for alias 'default'...

The following patch corrected the issue for me:

django/http/__init__.py
593,595c593
<     def _get_content(self):
<         if self.has_header('Content-Encoding'):
<             return ''.join(self._container)
<         return smart_str(''.join(self._container), self._charset)
---
>     def _get_content(self):

Attachments (1)

Change History (15)

comment:1 Changed 5 years ago by

comment:2 Changed 5 years ago by

comment:3 Changed 5 years ago by

I printed out the type of each object in the response's _container and realized I am getting the error because I was returning an object in my view whose __unicode__ method returns a string containing XML instead of returning an HttpResponse.
I think one of two resolutions could help people making this mistake here:

- Raise a TypeError if a view returns an object that does not behave like or does not subclass an HttpResponse
- Use the patch I included in the original description to attempt to convert each object in the _container list to a string before joining them

comment:4 Changed 5 years ago by

Pardon me. I mean it is currently returning an HttpResponse with a non-string as the first argument, e.g.

class Response(object):
    def __unicode__(self):
        return '<Response><Speak loop="1" voice="slt">Hello Friend</Speak></Response>'

return HttpResponse(Response())

comment:5 Changed 5 years ago by

The relevant code is here: Django currently assumes that everything that is an instance of the base string type or has no __iter__ attribute is a string. We should either be converting to a string on line 550, or raising an error if that object is not a string. I'm in favor of doing the string conversion. This looks like a legitimate bug to me, so I'm marking it as such and changing the title to reflect the real issue.

comment:6 Changed 5 years ago by

I just ran into this today. I was returning the pk of an object in the HttpResponse object like so:

return HttpResponse(obj.pk)

This caused the same error. One fix that I found floating around the internet was changing this:

    def _get_content(self):
        if self.has_header('Content-Encoding'):
            return ''.join(self._container)
        return smart_str(''.join(self._container), self._charset)

to this:

    def _get_content(self):
        if self.has_header('Content-Encoding'):
            return ''.join(self._container)
        return smart_str(''.join(map(str, self._container)), self._charset)

This would fix it as far as just forcing everything to a string. It worked for me when I tried it. However, in the interest of not modifying Django's libraries unnecessarily, I'll just be going through my code and fixing it there. It would be nice, though, if this were to make it into Django at some point.
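The mistake described in comment:4 and the map(str, ...) fix from comment:6 can both be reproduced in plain Python, with no Django involved. This is a hedged sketch: Response here is a stand-in for the reporter's class (it uses __str__, where the Python 2 code in the ticket used __unicode__).

```python
class Response(object):
    """Stand-in for the non-string object a view returned by mistake."""
    def __str__(self):
        return '<Response><Speak loop="1" voice="slt">Hello Friend</Speak></Response>'

container = [Response()]

# What the joined-container code path does today: str.join rejects non-strings.
try:
    ''.join(container)
except TypeError as exc:
    print(exc)  # e.g. "sequence item 0: expected str instance, Response found"

# The fix floated in comment:6: coerce every item to a string first.
print(''.join(map(str, container)))
```

The coerced join produces the XML string the view intended, which is why the later comments converge on converting to a string rather than raising.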
It seems unnecessary to have to call HttpResponse(str(obj.pk)).

Lastly, it seems likely that this appeared in Django 1.3. I upgraded not too long ago, and had no problems up until that point. This is the first time noticing this problem.

comment:7 Changed 5 years ago by

@anonymous I looked through the changelog and didn't see any recent changes in django/http/__init__.py that would have affected this. There may have been some elsewhere. Your post points out that we have this bug in the _container property as well. I'll work on a patch here - I don't see any good reason not to convert to a string when setting HttpResponse._container, since we go on to use it as a string.

comment:8 Changed 5 years ago by

I put together an initial draft of a patch, which adds tests and raises an exception on non-string, non-iterator input.

After further consideration, I noticed that when the HttpResponse.content is accessed via iterator, it DOES convert to a string. Given this, the most consistent fix (in line with existing behavior) is to convert to string on output. I'm working on a comprehensive patch to both clean up the code and do this in a way that is backwards compatible.

Changed 5 years ago by

Converts non-string inputs to strings just before output. Adds tests.

comment:9 Changed 5 years ago by

I've added a patch that converts everything to strings just before output. It cleans up the logic and fixes the original edge-case logic error. It also adds tests since this component was relatively under-tested.

comment:10 Changed 5 years ago by

I'm not overwhelmingly familiar with this chunk of the codebase, but from a cursory glance it appears that the provided patch offers a more consistent approach to handling the content of HttpResponse. The tests and docs look reasonable, but I'd like another set of eyes on the code itself prior to marking it RFC.
comment:11 Changed 5 years ago by

I haven't actually tested the patch, but according to my reading of it, I think there is one big problem and one smaller problem. Both are present already in current code.

- The big one: If the given content is iter(['a', 'b']), isn't it so that on the first call of response.content you will get 'ab' and on the second call ''. That is, the iter is consumed on the first call and on the second one it is already consumed.
- The smaller one: If the content is already a string, doesn't the .join(self._container) create a new copy of the string? That seems to be non-necessary. The content can be large and there is no point of copying it.

Try the following (should "fail" on both old and new version of get_content):

from django.core.management import setup_environ
import settings
setup_environ(settings)

from django.http import HttpResponse
h = HttpResponse(iter(['a', 'b']))
print h.content
print h.content

I think the best approach is to turn the given iterable to a string in set_content, store it in self._raw_content, and turn it into a correctly encoded string when content is asked. Test and make sure there will be as little copying of the content as possible, including the conversion to the correct encoding. UTF-8 should be the expected result, so maybe ._raw_content could be in UTF-8?

comment:12 Changed 5 years ago by

Forget about the string copying. It just doesn't matter performance wise, the _get_content isn't called when generating pages. If I understand this correctly, which I hope is now the case, the normal (as in when running under Apache) path just iterates the response object, so that is the case which should be optimized, not the access to .content.

The multiple consuming of the iterator would be nice to fix, though. But even that should not be a blocker for this ticket. The problem exists already, and can be fixed separately. So, I removed the patch needs improvement flag.
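The "big problem" from comment:11 — an iterator as content is consumed by the first access — can be shown without Django at all. Below is a minimal model of a content property that joins its stored container on every read; the function name is invented for the sketch:

```python
def get_content(container):
    """Simplified stand-in for HttpResponse._get_content: join the container."""
    return ''.join(map(str, container))

container = iter(['a', 'b'])
print(get_content(container))  # ab      -- the iterator is consumed here
print(get_content(container))  # (empty) -- second read joins an exhausted iterator
```

Caching the joined string on first access (rather than re-joining the raw iterator) is the usual way out, which is roughly where the later comments end up.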
comment:13 Changed 5 years ago by

As you say, the normal case is to iterate the HttpResponse object. This is why it is inappropriate to turn it into a string in _set_content(). Accessing content directly is only the correct thing to do if you're consuming the content once and for all, or the original content is not an iterator.

#7581 is the multiple-consumption-of-the-iterator issue you describe, and it has been outstanding for a long time. This patch fixes a different issue (though I'm sure the patch for that ticket will need to be updated once we commit this).

There's not enough information here to reproduce your error. What does your testcase look like? What does the rest of the project look like? If you can provide enough information for someone else to reproduce your issue, please feel free to re-open the ticket.
https://code.djangoproject.com/ticket/16494
I made some simple mistake (conversion?) in my program. I will be grateful if anyone has an idea what's wrong with it ;)

The problem is not with the algorithm itself so I am not explaining it - the problem is: why does the condition placed in the code between // HERE and // HERE seem to be never true? (The function does not work correctly.) Try this piece of code with 0.7 and 4 on entry.

int s(int k) // factorial for integers
{
    if (k==0 || k==1) {return 1;}
    else {return (s(k-1))*k;}
}

double newtonsymbol(double r, int k)
{
    if (k<0) return 0;
    int s(int k); // that's factorial above
    static double t=r;
    double h;
    h=r+k-1.0;
    static double numerator=r;
    double denominator;
    std::cout<<h-t<<" "<<t<<std::endl; /* that's optional; it shows the mistake;
                                          the condition would be fulfilled if h-t==0 */
    //HERE
    if (t==h)
    //HERE
    {
        denominator=s(k);
        std::cout<<"ok"<<std::endl;
        return (numerator/denominator);
    }
    if (h<-10) {return 3;} // that is added to stop the function so you can see what's going on
    {
        std::cout<<"not ok"<<std::endl;
        numerator=r*newtonsymbol(r-1,k);
    }
}

int main ()
{
    double p=0.7;
    int l=4;
    std::cout<<newtonsymbol(p,l);
    system("pause");
    return 0;
}
https://www.daniweb.com/programming/software-development/threads/167046/mistake-in-simple-math-function
1) Download SCT

SCT is my own free product and I have it hosted here on Github. If you are on a debian-based system you can install the .deb (you may get a standard compliance warning if you use the Ubuntu software center. It's okay to install this file. If you don't believe me, look at the source on Github). Otherwise, you will have to download the executable file (in bin) and run it directly.

2) Make a langdefines.ldf

This is a type of file that holds a language. The syntax is old_new. You can use "$" for commenting, but not on its own line. For instance:

old_new$oldnew$

oldnew would not become part of this line.

Here is how you could set it up to work with a hello world application for C++.

#include_import
std_txt
<<_<--
cout_say
endl_eol

3) Make a code file

You can now make a code file. It is just a normal text file that adheres to the langdefines rules. However, you can include strings that the langdefines does not know; they will just not be translated. In fact, this is the only way to get some characters into code.

import <iostream>
using namespace txt;
int main(){
say <-- "Hi!" <--eol;

Of course, this is an overly simplistic example. Langdefines can contain up to 10,000 lines.

4) Run sct

You should run "sct" from the command line (if you used the .deb) or just run the executable. Here is what to do at each prompt:

What is the total file path to the language definition file?
$ /home/yourname/example.ldf

This one is pretty self explanatory: where is your langdefines?

Would you like the langdefines automatically ordered? y or n
$ y

This one causes some confusion. Before I explain what this does, take an example. You have a langdefines that reroutes || to or like this: ||_or. However, below it you have this: for_door. This will become "do||" in the code, causing a massive error. Ordering the langdefines by line size helps to prevent this. Ordering is not foolproof, however. Comments are DELIBERATELY COUNTED.
If you want a particular line to be longer than the other for this reason, it is a good way to do it. Be careful with comments for this reason.

Are you using a Noran Make File? y/n
$ n

A Noran Make File is like a make file, but it specifies a list of files to translate at one time. The syntax is infile_outfile_magichar for every line. infile is the code, outfile is the output filepath, and magichar is what character you used for translation comments. Translation comments allow strings that you KNOW will be translated to be preserved. In our above example, you could save "door" by doing $door$ and using $ as your magichar. $ is the standard.

If you did not use a Noran Make File, this is what the rest of the program will be:

What is the total file path to the code?
$ /home/yourname/excode

This is where your code is located.

What is the file path to the output file?
$ /home/yourname/whatever.cpp

Finally, this is where to put the translated code.

If you would like to easily call this from other programs, it can run in argument mode, but it does not support iterative compiling.

Congratulations. You have now redefined C++ syntax and converted code that you wrote in your own custom language into runnable C++.

PS: Here is what my language looks like:

import <iostream>
use nspace std;
number program()
(*
itl(number index is 0; index lthen 10; index`incra`)
say <-- "$Hello world, Iteration $"<--index<--eol;
mail 0;
*)
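SCT itself does the work, but the substitution pass described above — apply each old_new rule to the code (rewriting the custom token back to its C++ spelling), longest langdefines line first, comments included in the length — can be sketched in a few lines. This is an illustrative reimplementation of the idea, not SCT's actual source; the rule file and code are the hello-world example from step 2.

```python
def load_rules(text):
    """Parse old_new lines from a langdefines file, longest line first."""
    rules = []
    for line in text.splitlines():
        body = line.split("$", 1)[0]            # "$" starts a comment
        if "_" in body:
            old, new = body.split("_", 1)
            # The whole line's length (comments included) decides the order,
            # mirroring the post's "comments are deliberately counted" rule.
            rules.append((len(line), old, new))
    rules.sort(key=lambda r: r[0], reverse=True)
    return [(old, new) for _, old, new in rules]

def translate(code, rules):
    """Rewrite each custom token (new) back to its C++ spelling (old)."""
    for old, new in rules:
        code = code.replace(new, old)
    return code

ldf = "#include_import\nstd_txt\n<<_<--\ncout_say\nendl_eol"
code = 'import <iostream>\nusing namespace txt;\nsay <-- "Hi!" <-- eol;'
print(translate(code, load_rules(ldf)))
# #include <iostream>
# using namespace std;
# cout << "Hi!" << endl;
```

Ordering matters for exactly the reason the post gives: a short rule can rewrite a substring of a longer rule's token before the longer rule gets its chance.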
http://profectium.blogspot.com/2012/06/your-own-programming-language-sct.html
This section describes a grammar (and forward-compatible parsing rules) common to any level of CSS (including CSS 2.1). Future updates of CSS will adhere to this core syntax, although they may add additional syntactic constraints. These descriptions are normative. They are also complemented by the normative grammar rules presented in Appendix G.

In this specification, the expressions "immediately before" or "immediately after" mean with no intervening white space or comments.

All levels of CSS — level 1, level 2, and any future levels — use the same core syntax. This allows UAs to parse (though not completely understand) style sheets written in levels of CSS that did not exist at the time the UAs were created. Designers can use this feature to create style sheets that work with older user agents, while also exercising the possibilities of the latest levels of CSS.

At the lexical level, CSS style sheets consist of a sequence of tokens. The list of tokens for CSS is as follows. The definitions use Lex-style regular expressions. Octal codes refer to ISO 10646 ([ISO10646]). As in Lex, in case of multiple matches, the longest match determines the token.

The macros in curly braces ({}) above are defined as follows:

For example, the rule of the longest match means that "red-->" is tokenized as the IDENT "red--" followed by the DELIM ">", rather than as an IDENT followed by a CDC.

Below is the core syntax for CSS. The sections that follow describe how to use it. Appendix G describes a more restrictive grammar that is closer to the CSS level 2 language.
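The longest-match rule lends itself to a small demonstration. The sketch below is a toy tokenizer over a handful of simplified patterns — stand-ins for the spec's real macros, not the full lexical grammar — that applies every pattern at the current position and keeps the longest match:

```python
import re

# Simplified token patterns; the real IDENT/CDO/CDC macros are richer.
TOKENS = [
    ("IDENT", re.compile(r"-?[_a-zA-Z][_a-zA-Z0-9-]*")),
    ("CDO",   re.compile(r"<!--")),
    ("CDC",   re.compile(r"-->")),
    ("S",     re.compile(r"[ \t\r\n\f]+")),
    ("DELIM", re.compile(r".")),
]

def tokenize(css):
    out, i = [], 0
    while i < len(css):
        # "In case of multiple matches, the longest match determines the token."
        candidates = [(name, p.match(css, i)) for name, p in TOKENS]
        name, m = max((c for c in candidates if c[1]),
                      key=lambda c: len(c[1].group()))
        out.append((name, m.group()))
        i = m.end()
    return out

print(tokenize("red-->"))  # [('IDENT', 'red--'), ('DELIM', '>')]
print(tokenize("-->"))     # [('CDC', '-->')]
```

"red-->" becomes the IDENT "red--" plus the DELIM ">" because the IDENT pattern's five-character match beats every alternative, exactly as in the spec's example; a bare "-->" still tokenizes as CDC.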
Parts of style sheets that can be parsed according to this grammar but not according to the grammar in Appendix G are among the parts that will be ignored according to the rules for handling parsing errors.

stylesheet  : [ CDO | CDC | S | statement ]*;
statement   : ruleset | at-rule;
at-rule     : ATKEYWORD S* any* [ block | ';' S* ];
block       : '{' S* [ any | block | ATKEYWORD S* | ';' S* ]* '}' S*;
ruleset     : selector? '{' S* declaration? [ ';' S* declaration? ]* '}' S*;
selector    : any+;
declaration : property S* ':' S* value;
property    : IDENT;
value       : [ any | block | ATKEYWORD S* ]+;
any         : [ IDENT | NUMBER | PERCENTAGE | DIMENSION | STRING
              | DELIM | URI | HASH | UNICODE-RANGE | INCLUDES
              | DASHMATCH | ':' | FUNCTION S* [any|unused]* ')'
              | '(' S* [any|unused]* ')' | '[' S* [any|unused]* ']'
              ] S*;
unused      : block | ATKEYWORD S* | ';' S* | CDO S* | CDC S*;

The "unused" production is not used in CSS and will not be used by any future extension. It is included here only to help with error handling. (See 4.2 "Rules for handling parsing errors.")

COMMENT tokens do not occur in the grammar (to keep it readable), but any number of these tokens may appear anywhere outside other tokens. (Note, however, that a comment before or within the @charset rule disables the @charset.)

The token S in the grammar above stands for white space.

CSS 2.1 implementers should always be able to use a CSS-conforming parser, whether or not they support any vendor-specific extensions. Authors should avoid vendor-specific extensions.

This section is informative. At the time of writing, the following prefixes are known to exist:

The following rules always hold:

Outside a string, a backslash followed by a newline stands for itself (i.e., a DELIM followed by a newline). Second, it cancels the meaning of special CSS characters. In fact, these two methods may be combined. Only one white space character is ignored after a hexadecimal escape. Note that this means that a "real" space after the escape sequence must be doubled.

A CSS style sheet, for any level of CSS, consists of a list of statements (see the grammar above). There are two kinds of statements: at-rules and rule sets. There may be white space around the statements.

At-rules start with an at-keyword, an '@' character followed immediately by an identifier (for example, '@import', '@page'). An at-rule consists of everything up to and including the next semicolon (;) or the next block, whichever comes first.
CSS 2.1 user agents must ignore any '@import' rule that occurs inside a block or after any non-ignored statement other than an @charset or an @import rule. Assume, for example, that a CSS 2.1 parser encounters this style sheet:

@import "subs.css";
h1 { color: blue }
@import "list.css";

The second '@import' is illegal according to CSS 2.1. The CSS 2.1 parser ignores it, effectively reducing the style sheet to:

@import "subs.css";
h1 { color: blue }

Instead, to achieve the effect of only importing a style sheet for 'print' media, use the @import rule with media syntax, e.g.:

@import "subs.css";
@import "print-main.css" print;
@media print {
  body { font-size: 10pt }
}
h1 { color: blue }

Note that the right brace between the double quotes does not match the opening brace of the block, and that the second single quote is an escaped single quote, and so does not match the first single quote:

{ causta: "}" + ({7} * '\'') }

Note that the above rule is not valid CSS 2.1, but it is still a block as defined above.

A rule set (also called "rule") consists of a selector followed by a declaration block. A declaration block starts with a left curly brace ({) and ends with the matching right curly brace (}). In between there must be a list of zero or more semicolon-separated (;) declarations.

The selector (see also the section on selectors) consists of everything up to (but not including) the first left curly brace ({). A selector always goes together with a declaration block. When a user agent cannot parse the selector (i.e., it is not valid CSS 2.1), it must ignore the selector and the following declaration block (if any) as well.

CSS 2.1 gives a special meaning to the comma (,) in selectors. However, since it is not known if the comma may acquire other meanings in future updates of CSS, the whole statement should be ignored if there is an error anywhere in the selector, even though the rest of the selector may look reasonable in CSS 2.1.

For example, since the "&" is not a valid token in a CSS 2.1 selector, a CSS 2.1 user agent must ignore the whole second line, and not set the color of h3 to red:

h1, h2 {color: green }
h3, h4 & h5 {color: red }
h3 {color: blue }

Here is a more complex example. The first two pairs of curly braces are inside a string, and do not mark the end of the selector. This is a valid CSS 2.1 rule:
p[example="public class foo\
{\
    private int x;\
\
    foo(int x) {\
        this.x = x;\
    }\
\
}"] { color: red }

A declaration is either empty or consists of a property name, followed by a colon (:), followed by a property value. Around each of these there may be white space.

The property name is an identifier. Any token may occur in the property value: identifiers, strings, numbers, lengths, percentages, URIs, colors, etc. A user agent must ignore a declaration with an invalid property name or an invalid value. Every CSS property has its own syntactic and semantic restrictions on the values it accepts.

For example, assume a CSS 2.1 parser encounters this style sheet:

h1 { color: red; font-style: 12pt }  /* Invalid value: 12pt */
p { color: blue;  font-vendor: any;  /* Invalid prop.: font-vendor */
    font-variant: small-caps }
em em { font-style: normal }

A CSS 2.1 parser will ignore these declarations, effectively reducing the style sheet to:

h1 { color: red; }
p { color: blue; font-variant: small-caps }
em em { font-style: normal }
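The "ignore the bad declaration, keep the rest" behavior can be modeled in a few lines of code. This sketch is not a CSS parser — the property whitelist and the value check are stand-ins invented for the example — but it shows the shape of the rule: each invalid declaration is dropped individually, and the surrounding declarations in the same block survive:

```python
# Hypothetical whitelist standing in for "properties this UA knows".
KNOWN_PROPERTIES = {"color", "font-variant", "font-style"}

def keep_valid_declarations(block):
    """Drop declarations with unknown properties or empty values; keep the rest."""
    kept = []
    for decl in block.split(";"):
        if ":" not in decl:
            continue                      # empty declarations are allowed
        prop, value = (s.strip() for s in decl.split(":", 1))
        if prop in KNOWN_PROPERTIES and value:
            kept.append((prop, value))
    return kept

print(keep_valid_declarations("color: red; rotation: 70minutes"))
# [('color', 'red')]
```

The unknown 'rotation' declaration disappears while 'color: red' is retained, mirroring the spec's reduced style sheets above.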
To ensure that new properties and new values for existing properties can be added in the future, user agents are required to obey the following rules when they encounter the following scenarios: h1 { color: red; rotation: 70minutes } the user agent will treat this as if the style sheet had been h1 { color: red } */ p @here {color: red} /* ruleset with unexpected at-keyword "@here" */ @foo @bar; /* at-rule with unexpected at-keyword "@bar" */ }} {{ - }} /* ruleset with unexpected right brace */ ) ( {} ) p {color: red } /* ruleset with unexpected right parenthesis */ } Something inside an at-rule that is ignored because it is invalid, such as an invalid declaration within an @media-rule, does not make the entire at-rule invalid. User agents must close all open constructs (for example: blocks, parentheses, brackets, rules, strings, and comments) at the end of the style sheet. For example: @media screen { p:before { content: 'Hello would be treated the same as: @media screen { p:before { content: 'Hello'; } } in a conformant UA. User agents must close strings upon reaching the end of a line (i.e., before an unescaped line feed, carriage return or form feed character),.. Note that many properties that allow an integer or real number as a value actually restrict the value to some range, often to a non-negative value.. In cases where the used length cannot be supported, user agents must approximate it in the actual value. There are two types of length units: relative and absolute. Relative length units specify a length relative to another length property. Style sheets that use relative units can more easily scale from one output environment to another. Relative units are: h1 { margin: 0.5em } /* em */ h1 { margin: 1ex } /*.) The 'ex' unit is defined by the element's first available font. The exception is when 'ex' occurs in the value of the 'font-size' property, in which case it refers to the 'ex' of the parent element." 
When specified for the root of the document tree (e.g., "HTML" in HTML), 'em' and 'ex' refer to the property's initial value. Child elements do not inherit the relative values specified for their parent; they inherit the computed values. Absolute length units are fixed in relation to each other.

URI values (Uniform Resource Identifiers, see [RFC3986], which includes URLs, URNs, etc.) in this specification are denoted by <uri>. The functional notation used to designate URIs in property values is "url()", as in:

body { background: url("") }

An example without quotes:

li { list-style: url() disc }

Some characters appearing in an unquoted URI, such as parentheses, white space characters, single quotes (') and double quotes ("), must be escaped with a backslash so that the resulting URI value is a URI token: '\(', '\)'. Depending on the type of URI, it might also be possible to write the above characters as URI-escapes (where "(" = %28, ")" = %29, etc.) as described in [RFC3986]. Note that COMMENT tokens cannot occur within other tokens: thus, "url(/*x*/pic.png)" denotes the URI "/*x*/pic.png", not "pic.png". User agents may vary in how they handle invalid URIs or URIs that designate unavailable or inapplicable resources.

Counters are denoted by case-sensitive identifiers (see the 'counter-increment' and 'counter-reset' properties). To refer to the value of a counter, the notation 'counter(<identifier>)' or 'counter(<identifier>, <'list-style-type'>)', with optional white space separating the tokens, is used. The default style is 'decimal'. To refer to a sequence of nested counters of the same name, the notation is 'counters(<identifier>, <string>)' or 'counters(<identifier>, <string>, <'list-style-type'>)' with optional white space separating the tokens. See "Nested counters and scope" in the chapter on generated content for how user agents must determine the value or values of the counter. See the definition of counter values of the 'content' property for how it must convert these values to a string. In CSS 2.1, the values of counters can only be referred to from the 'content' property.

A <color> is either a keyword or a numerical RGB specification.
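As a hypothetical illustration of the counter notations described above (the selector and the counter name "item" are invented for this example, not taken from the specification):

```css
/* Each li increments the "item" counter; counters(item, ".") joins the
   values of all nested "item" counters with ".", producing section-style
   numbering such as "1", "1.1", "2.3.1", ... */
ol { counter-reset: item }
li { counter-increment: item }
li:before { content: counters(item, ".") " " }
```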
The list of color keywords is: aqua, black, blue, fuchsia, gray, green, lime, maroon, navy, olive, orange, purple, red, silver, teal, white, and yellow. Note that only colors specified in CSS are affected; e.g., images are expected to carry their own color information. Values outside the device gamut should be clipped or mapped into the gamut when the gamut is known: the red, green, and blue values must be changed to fall within the range supported by the device. User agents may perform higher quality mapping of colors from one gamut to another. Note: mapping or clipping of color values should be done to the actual device gamut if known (which may be larger or smaller than 0..255).

It is possible to break strings over several lines, for aesthetic or other reasons, but in such a case the newline itself has to be escaped with a backslash (\). For information about character encodings in documents, see the HTML 4 specification ([HTML4], chapter 5) and the XML 1.0 specification ([XML10], sections 2.2 and 4.3.3, and Appendix F).

When a style sheet is embedded in another document, such as in the STYLE element or "style" attribute of HTML, the style sheet shares the character encoding of the whole document. When a style sheet resides in a separate file, user agents must observe the following priorities when determining a style sheet's character encoding (from highest priority to lowest):

1. An HTTP "charset" parameter in a "Content-Type" field (or similar parameters in other protocols)
2. BOM and/or @charset
3. <link charset=""> or other metadata from the linking mechanism (if any)
4. charset of referring style sheet or document (if any)
5. Assume UTF-8

Authors using an @charset rule must place the rule at the very beginning of the style sheet, preceded by no characters. (If a byte order mark is appropriate for the encoding used, it may precede the @charset rule.) After "@charset", authors specify the name of a character encoding (in quotes). For example:

@charset "ISO-8859-1";

@charset must be written literally, i.e., the 10 characters '@charset "' (lowercase, no backslash escapes), followed by the encoding name, followed by '";'. The name must be a charset name as described in the IANA registry. See [CHARSETS] for a complete list of charsets. Authors should use the charset names marked as "preferred MIME name" in the IANA registry. User agents must support at least the UTF-8 encoding.
User agents must ignore any @charset rule not at the beginning of the style sheet. When user agents detect the character encoding using the BOM and/or the @charset rule, they should follow the priorities listed above. User agents must ignore style sheets in unknown encodings. A style sheet may have to refer to characters that cannot be represented in the current character encoding. These characters must be written as escaped references to ISO 10646 characters. These escapes serve the same purpose as numeric character references in HTML or XML documents (see [HTML4]).
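For instance (a hypothetical rule, not taken from the specification), an author whose style sheet is stored in an encoding that cannot represent the copyright sign can still generate one via an escaped ISO 10646 reference:

```css
/* "\00A9" is the escape for U+00A9 (the copyright sign). A single white
   space character following the escape terminates it and is consumed,
   so two spaces are written here to leave one real space in the output. */
p.copyright:before { content: "\00A9  Example Corp" }
```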
http://www.w3.org/TR/CSS2/syndata
On Sat, 18 Jan 1997, Philip Blundell wrote:

> That would be the worst thing, because it clutters up the source code
> something chronic. We do _not_ want that to happen. Internationalisation
> belongs somewhere else.
>
> P.

Ok, either we can have:

1. a method that is fast, introduces minimal kernel bloat, can be
translated as and when is wanted, and works from _before_ booting.

Or

2. we can have a bloated, slow, possibly all-in-one-go method that
introduces one of: a huge (unswappable) module, a huge userspace daemon,
or the kernel finding a file at boot time to load in the strings, and
these strings have to be kept in sync with the kernel for it to make
sense. And it still won't do the boot messages, as they are generated
before any of the above could happen (they all require access to a
filesystem).

Well I know which I'd choose ...

The only other alternative that I can think of that has all the
advantages of the first method, bar one (the "it's right there" one),
would be to have an include file per .c (or set of .c's) that defines
the strings like:

/* strings for a.random.driver */
#ifdef LANG1
#define INFO_DEBUG_THE_CARD_SAY "Message that %s said %d in lang1"
...
#endif /* LANG1 */
#ifdef LANG2
#define INFO_DEBUG_THE_CARD_SAY "Message that %s said %d in lang2"
...
#endif /* LANG2 */
#ifndef INFO_DEBUG_THE_CARD_SAY
#define INFO_DEBUG_THE_CARD_SAY "Message that %s said %d in english"
#endif
...

but you'll end up with a block for lang1, lang2, ..., and a whole load
of #ifndef/#define/#endif's at the end to pick up untranslated messages
and use the English. It does clean up the code though; it becomes:

#include "a.random.driver.strings.h"
...
printk(KERN_DEBUG INFO_DEBUG_THE_CARD_SAY, device->name, status);

It's the same solution as before though ...

Bryn

-- 
PGP key pass phrase forgotten,   \ Overload -- core meltdown sequence
again :( and I don't care ;)     |  initiated.
This space is intentionally left / blank, apart from this text ;-)
https://lkml.org/lkml/1997/1/18/40
ATL/Tutorials - Create a simple ATL transformation

Latest revision as of 07:40, 4 June 2013

This tutorial shows you how to create your first simple transformation with ATL, through a well-known basic example: Families2Persons. Note: This tutorial is followed step by step in the ATL cheat sheet under Eclipse: ATL Development > Create a simple ATL transformation. Go to the Help > Cheat Sheets... menu to find it.

Objectives

The objective of this tutorial is to perform a transformation from a list of families to a list of persons. On one side (the source), we have a list of families. Each family has a last name and contains a father, a mother, and a number of sons and daughters (0, 1 or more), all with a first name. We want to transform this list into a new list of persons (the target). This means that each member of the family will become a person, without differentiating parents from children, and with no link between the members of the same family (except a part of their name). In the end, we must have only persons with their full name (first name & last name), male or female.

The Families

The Persons

In order to do this, there are a few requirements.

Requirements

First of all, you will need to install ATL in Eclipse. If you already have it installed, just skip this task. Otherwise, follow the few steps below:

- click on Help > Install New Software...
- then select an update site, and search for ATL in the filter field once the list of software is available
- once you have done this, you should see a line "ATL SDK - ATLAS Transformation Language SDK" in the list below, under the Modeling node
- check it, click Next >, and follow the instructions

You can check whether ATL is installed by going to Help > About Eclipse SDK, then clicking the Installation Details button; under the Plug-ins tab you should see several lines with ATL.
If you have any problem, please refer to the User Guide for further information about ATL installation.

Create a new ATL project

After the theory, let's start creating the project. To create a new ATL project, go to File > New > Other... and then select ATL > ATL Project. Then click the Next > button. Type a name for the project (say "Families2Persons" for our example). The project should now appear in your projects list. The User Guide also provides a detailed section on the creation of a new ATL project.

The metamodels

Now that our project is ready to use, we can fill it. Our first files are the representations of a family and a person, that is to say, how we want to symbolize them (just as a map symbolizes the real world). This is called a metamodel, and it corresponds to an Ecore file. To create the Ecore file, go to File > New > Other..., select Eclipse Modeling Framework > Ecore Model and click Next >. Select your Families2Persons project in the list, enter a name for the file (Families.ecore for instance), and click Finish. An empty file is added to your project. Repeat this task for the Persons.ecore metamodel. Now we need to fill these files. (Note that the User Guide shows other metamodel examples.)

The Families metamodel

As we saw in the Objectives part, a family has a last name, and a father, a mother, sons and daughters with a first name. That is what we need to tell the Families.ecore file. Open it with the default editor (Sample Ecore Model Editor). We will also need the Properties view, so if it is not already open, you can show it by going to Window > Show View > Other..., selecting General > Properties and clicking OK. The Families.ecore file comes in the form of a tree. The root should be: "platform:/resource/Families2Persons/Families.ecore". If you expand it, there is an empty node under it. Click on it, and in the Properties view, enter "Families" as the value of the "Name" property.
This node is where we are going to put everything that makes a family. So first we create a class "Family" by right-clicking on the Families node and clicking New Child > EClass. You can name it the same way you named the Families node above. Then we give it an attribute (New Child > EAttribute) and name it "lastName". We want to have one and only one last name per family, so we control its multiplicity: set 1 for the lower bound (which is 0 by default) and 1 for the upper bound (which should already be 1). These bounds can be set the same way as the name, but on the Lower Bound and Upper Bound properties. We can specify a type for this attribute, and we want it to be a string, so in the EType property, search for the EString type. At this moment, we have a family with its last name. Now we need members for this family. Therefore we are going to create another class (as we created the Family class): "Member". This class will be a child of the Families node, like the Family class. These members have a first name, so we add an attribute "firstName" of type EString; again, a member has one and only one first name (see above if you don't remember how to create an attribute, name it, give it a type, and change its multiplicity). Now we have to make the links between the family and the members. For this purpose, you have to create children of the Family class of type EReference. Name these references "father", "mother", "sons" and "daughters". They will have the EType Member. About the multiplicity, we have one father and one mother for one family (so upper and lower bounds set to 1), but we can have as many sons and daughters as we want, even 0 (so lower bound set to 0, and upper bound set to -1, which means *). Finally, set their Containment property to true so that they can contain members. Once these attributes are created and configured, we do the same for the Member class. It also needs references towards the Family class.
Just add 4 EReferences to the Member class: "familyFather", "familyMother", "familySon" and "familyDaughter" with EType Family. This time, each reference should have its multiplicity set to 0..1 (it is by default), because a member is either a father, or a mother, or a son, or a daughter, so the reference that is defined for a member shows its role in the family. Then, in order to tell which member reference corresponds to which family reference, set their EOpposite field to their counterpart in the Family class (for example, familyFather refers to the father reference of the Family class). The resulting file should look like this:

<?xml version="1.0" encoding="ISO-8859-1"?>
<xmi:XMI xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:ecore="http://www.eclipse.org/emf/2002/Ecore">
  <ecore:EPackage name="Families">
    <eClassifiers xsi:type="ecore:EClass" name="Family">
      <eStructuralFeatures xsi:type="ecore:EAttribute" name="lastName" lowerBound="1"
          eType="ecore:EDataType http://www.eclipse.org/emf/2002/Ecore#//EString"/>
      <eStructuralFeatures xsi:type="ecore:EReference" name="father" lowerBound="1"
          eType="#//Member" containment="true" eOpposite="#//Member/familyFather"/>
      <eStructuralFeatures xsi:type="ecore:EReference" name="mother" lowerBound="1"
          eType="#//Member" containment="true" eOpposite="#//Member/familyMother"/>
      <eStructuralFeatures xsi:type="ecore:EReference" name="sons" upperBound="-1"
          eType="#//Member" containment="true" eOpposite="#//Member/familySon"/>
      <eStructuralFeatures xsi:type="ecore:EReference" name="daughters" upperBound="-1"
          eType="#//Member" containment="true" eOpposite="#//Member/familyDaughter"/>
    </eClassifiers>
    <eClassifiers xsi:type="ecore:EClass" name="Member">
      <eStructuralFeatures xsi:type="ecore:EAttribute" name="firstName" lowerBound="1"
          eType="ecore:EDataType http://www.eclipse.org/emf/2002/Ecore#//EString"/>
      <eStructuralFeatures xsi:type="ecore:EReference" name="familyFather"
          eType="#//Family" eOpposite="#//Family/father"/>
      <eStructuralFeatures xsi:type="ecore:EReference" name="familyMother"
          eType="#//Family" eOpposite="#//Family/mother"/>
      <eStructuralFeatures xsi:type="ecore:EReference" name="familySon"
          eType="#//Family" eOpposite="#//Family/sons"/>
      <eStructuralFeatures xsi:type="ecore:EReference" name="familyDaughter"
          eType="#//Family" eOpposite="#//Family/daughters"/>
    </eClassifiers>
  </ecore:EPackage>
</xmi:XMI>

And here we are with the metamodel for our families!

The Persons metamodel

The principle is the same for the target metamodel, only less complicated. Open the Persons.ecore file, and name the root's child node "Persons". Then add a class "Person" with one attribute, "fullName", of EType EString and multiplicity 1..1. Then set the Abstract attribute of the Person class to "true". We need to do this because we won't directly instantiate this class, but two subclasses: "Male" and "Female", according to whether the person in the family was a man or a woman. Create these two classes at the same level as Person. We make them subclasses of Person by setting their ESuper Types property to Person. The resulting file should look like this:

<?xml version="1.0" encoding="ISO-8859-1"?>
<xmi:XMI xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:ecore="http://www.eclipse.org/emf/2002/Ecore">
  <ecore:EPackage name="Persons">
    <eClassifiers xsi:type="ecore:EClass" name="Person" abstract="true">
      <eStructuralFeatures xsi:type="ecore:EAttribute" name="fullName" lowerBound="1"
          eType="ecore:EDataType http://www.eclipse.org/emf/2002/Ecore#//EString"/>
    </eClassifiers>
    <eClassifiers xsi:type="ecore:EClass" name="Male" eSuperTypes="#//Person"/>
    <eClassifiers xsi:type="ecore:EClass" name="Female" eSuperTypes="#//Person"/>
  </ecore:EPackage>
</xmi:XMI>

And our second metamodel is ready!
The ATL transformation code

Now that we have represented what we have (Families, the source) and what we want to obtain (Persons, the target), we can concentrate on the core of the transformation: the ATL code. This code is going to match a part of the source with a part of the target. What we want in our example is to take each member of each family and transform him or her into a person. That implies melting the first and last name into a full name, defining whether it's a man or a woman, and copying these pieces of information into a Person object. We first need a file to put this code into. So create a new ATL file by going to File > New > Other..., and then ATL > ATL File. Name it "Families2Persons.atl" for instance, don't forget to select your project, and then click Finish. If you are asked to open the ATL perspective, click Yes. When you open the file, an error is marked (we will see how to fix it below), and it contains a single line:

module Families2Persons;

First we add two lines at the top of the file, one for each metamodel, so that the editor can use auto-completion and documentation when we type in code concerning the two metamodels:

-- @path Families=/Families2Persons/Families.ecore
-- @path Persons=/Families2Persons/Persons.ecore

Then we tell ATL that we have families in and we want persons out (this should fix the error):

create OUT: Persons from IN: Families;

Now we must define some helpers:

helper context Families!Member def: isFemale(): Boolean =
	if not self.familyMother.oclIsUndefined() then
		true
	else
		if not self.familyDaughter.oclIsUndefined() then
			true
		else
			false
		endif
	endif;

helper context Families!Member def: familyName: String =
	if not self.familyFather.oclIsUndefined() then
		self.familyFather.lastName
	else
		if not self.familyMother.oclIsUndefined() then
			self.familyMother.lastName
		else
			if not self.familySon.oclIsUndefined() then
				self.familySon.lastName
			else
				self.familyDaughter.lastName
			endif
		endif
	endif;

These helpers will be used in
the rules that we will see below.

- The first one is called on a member of a family (context Families!Member), gives us a Boolean (: Boolean), and tells us whether the member is a female or not, by verifying whether the familyDaughter or familyMother reference is defined.
- The second one is also called on a member of a family, this time gives us a string (: String), and returns the last name of the member. It must look for it in every reference to the family, to see which one is defined (familyFather, familyMother, familySon or familyDaughter).

And finally, we add two rules creating male and female persons from members of families:

rule Member2Male {
	from
		s: Families!Member (not s.isFemale())
	to
		t: Persons!Male (
			fullName <- s.firstName + ' ' + s.familyName
		)
}

rule Member2Female {
	from
		s: Families!Member (s.isFemale())
	to
		t: Persons!Female (
			fullName <- s.firstName + ' ' + s.familyName
		)
}

Each rule will be applied to the objects that respect the filter predicate in the from part. For instance, the first rule takes each member of each family (from s: Families!Member) that is not a female (using the helper we described above, not s.isFemale()). It then creates a male person (to t: Persons!Male) and sets its fullName attribute to the first name of the member followed by its last name (using the familyName helper we saw above). The principle is the same for the second rule, except that this time it takes only the female members. Note that the ATL editor provides syntax highlighting and indentation much better than what you can see above. Besides, you can find help on what we saw above in the User Guide, here and here.

The sample families model file

The transformation is ready to be used; we just need a sample model to run it on. First create a file in your project in which we will put the code of the model. Go to File > New > File, name it "sample-Families.xmi" for instance, and open it with a text editor.
Here is some sample code:

<?xml version="1.0" encoding="ISO-8859-1"?>
<xmi:XMI xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns="Families">
  <Family lastName="March">
    <father firstName="Jim"/>
    <mother firstName="Cindy"/>
    <sons firstName="Brandon"/>
    <daughters firstName="Brenda"/>
  </Family>
  <Family lastName="Sailor">
    <father firstName="Peter"/>
    <mother firstName="Jackie"/>
    <sons firstName="David"/>
    <sons firstName="Dylan"/>
    <daughters firstName="Kelly"/>
  </Family>
</xmi:XMI>

The launch configuration

We have everything we need to make the transformation, but there is one more step before we launch it, at least the first time: we have to configure the launch. When you are in the ATL file (Families2Persons.atl), click on Run > Run (or Ctrl+F11). A dialog opens. Several pieces of information are already filled in: the ATL module (our transformation file, Families2Persons.atl) and the metamodels (Families.ecore and Persons.ecore), but we need to complete the page. The Source Models (IN:, conforms to Families) part is the model we want to transform, that is to say our sample-Families.xmi; browse the workspace to add it. The Target Models (OUT:, conforms to Persons) part is the model to be generated; browse the workspace to find your project and enter a name for the file (say "sample-Persons.xmi"). A useful option can be found in the Common tab of the page: we can save our configuration so that ATL can find it the next time we want to run it, or if the project is exported. If you check Shared file and browse within your project, you can save this configuration in a file ("Families2Persons.launch" for example). You can find help on how to compile an ATL file in the User Guide, here.

Running the transformation

At last we can run the transformation by clicking Run on the configuration page. A file is then generated, named sample-Persons.xmi, containing the list of your family members transformed into persons.
Here is what you should get if you open it with a text editor:

<?xml version="1.0" encoding="ISO-8859-1"?>
<xmi:XMI xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns="Persons">
  <Male fullName="Jim March"/>
  <Male fullName="Brandon March"/>
  <Male fullName="Peter Sailor"/>
  <Male fullName="David Sailor"/>
  <Male fullName="Dylan Sailor"/>
  <Female fullName="Cindy March"/>
  <Female fullName="Brenda March"/>
  <Female fullName="Jackie Sailor"/>
  <Female fullName="Kelly Sailor"/>
</xmi:XMI>

Running an ATL launch configuration is explained in the User Guide, here. This is the end of this basic example. Further documentation, examples, and help can be found on the ATL website.
http://wiki.eclipse.org/index.php?title=ATL/Tutorials_-_Create_a_simple_ATL_transformation&diff=338922&oldid=194255
Write a function primes with a single argument x that returns a list of all prime numbers less than x. You may assume that x is an integer greater than 2. 2 is the smallest prime. All larger primes are integers x such that for all prime numbers y smaller than x, the remainder of x ÷ y is non-zero. We recommend implementing some version of the Sieve of Eratosthenes. In particular, you must not have any print statements when you submit it; we'll test with very large numbers, and printing out the list would take so long that your program would time out before you get results. When you run seive.py, nothing should happen: it defines a function, it does not run it. If in another file (which you do not submit) you write the following:

import seive
print(seive.primes(20))
many = seive.primes(12345)
print(many[-1])

you should get the following output quickly (in not more than a second or two):

[2, 3, 5, 7, 11, 13, 17, 19]
12343

Once I know that 17 is not divisible by 2, 3, or 5, I don't have to check other numbers; if 17 were divisible by something larger than 5, it would also have to be divisible by something smaller than 17 ÷ 5, and thus something smaller than 5. Generalizing this observation can dramatically speed up your code, allowing you to return 7-digit primes in reasonable time. There are also many other primality tests which are more efficient than the above for checking single large numbers, though some are less efficient for creating lists of small prime numbers. You'll probably want a loop within a loop: the outer loop to consider possible primes, the inner loop to check possible factors of the currently-being-considered possible prime. Many successful solutions have a variable named prime_so_far that is True until a factor is found, then it becomes False. Don't add a prime to the list of primes until after you've checked all of its possible prime factors!
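One way the hints above fit together; this is a sketch under the assignment's assumptions (x is an integer greater than 2), not the official answer:

```python
def primes(x):
    """Return a list of all primes less than x (x assumed to be an int > 2)."""
    found = [2]                            # 2 is the smallest prime
    for candidate in range(3, x, 2):       # even numbers > 2 are never prime
        prime_so_far = True
        for p in found:
            if p * p > candidate:          # no prime factor <= sqrt(candidate)
                break                      # -> candidate must be prime
            if candidate % p == 0:         # found a prime factor
                prime_so_far = False
                break
        if prime_so_far:
            found.append(candidate)
    return found
```

Note that this is trial division by the primes found so far (with the square-root cutoff that generalizes the 17 example), rather than a literal sieve; a true Sieve of Eratosthenes marks multiples in a boolean array instead.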
http://cs1110.cs.virginia.edu/w08-seive.html
Investors in Transocean Ltd (Symbol: RIG) saw new options begin trading today, for the July 5th expiration. At Stock Options Channel, our YieldBoost formula has looked up and down the RIG options chain for the new July 5th contracts and identified one put and one call contract of particular interest. The put contract at the $6.50 strike price has a current bid of 41 cents. If an investor was to sell-to-open that put contract, they are committing to purchase the stock at $6.50, but will also collect the premium, putting the cost basis of the shares at $6.09 (before broker commissions). To an investor already interested in purchasing shares of RIG, that could represent an attractive alternative to paying $6.61/share today. Should the contract expire worthless, the premium collected would represent a 6.31% return on the cash commitment, or 53.54% annualized — at Stock Options Channel we call this the YieldBoost. Below is a chart showing the trailing twelve month trading history for Transocean Ltd, highlighting in green where the $6.50 strike is located relative to that history: Turning to the calls side of the option chain, the call contract at the $7.00 strike price has a current bid of 32 cents. If an investor was to purchase shares of RIG stock at the current price level of $6.61/share, and then sell-to-open that call contract as a "covered call," they are committing to sell the stock at $7.00. Considering the call seller will also collect the premium, that would drive a total return (excluding dividends, if any) of 10.74% if the stock gets called away at the July 5th expiration (before broker commissions). Below is the same trailing twelve month trading history, with the $7.00 strike highlighted in red: Should the covered call contract expire worthless, the premium would represent a 4.84% boost of extra return to the investor, or 41.09% annualized, which we refer to as the YieldBoost. The implied volatility in the put contract example is 58%, while the implied volatility in the call contract example is 56%.
Meanwhile, we calculate the actual trailing twelve month volatility (considering the last 251 trading day closing values as well as today's price of $6.61).
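The figures quoted above follow from simple arithmetic on the quoted bids and strikes. The sketch below reproduces them; the 43-day span used for annualizing is an assumption inferred from the article date and the July 5th expiration, not a number stated in the article:

```python
def pct(x):
    """Express a fraction as a percentage rounded to two decimals."""
    return round(100 * x, 2)

# Put side: $6.50 strike sold for a 41-cent premium
cost_basis = round(6.50 - 0.41, 2)              # 6.09 per share
put_yield = pct(0.41 / 6.50)                    # 6.31% of the cash commitment
put_annualized = pct(0.41 / 6.50 * 365 / 43)    # 53.54% (assumed 43 days)

# Call side: buy at $6.61, sell the $7.00 call for 32 cents
call_boost = pct(0.32 / 6.61)                   # 4.84% boost
call_annualized = pct(0.32 / 6.61 * 365 / 43)   # 41.09% (assumed 43 days)
total_if_called = pct((7.00 - 6.61 + 0.32) / 6.61)  # 10.74% if called away
```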
https://www.nasdaq.com/articles/interesting-rig-put-and-call-options-july-5th-2019-05-23
Displaying data in a customized DataGrid

Displaying data is probably the most straightforward task we can ask the DataGrid to do for us. In this recipe, we'll create a collection of data and hand it over to the DataGrid for display. While the DataGrid may seem to have a rather fixed layout, there are many options available on this control that we can use to customize it. In this recipe, we'll focus on getting the data to show up in the DataGrid and customizing it to our liking.

Getting ready

In this recipe, we'll start from an empty Silverlight application. The finished solution for this recipe can be found in the Chapter04/Datagrid_Displaying_Data_Completed folder in the code bundle that is available on the Packt website.

How to do it...

We'll create a collection of Book objects and display this collection in a DataGrid. However, we want to customize the DataGrid. More specifically, we want to make the DataGrid fixed. In other words, we don't want the user to make any changes to the bound data or move the columns around. Also, we want to change the visual representation of the DataGrid by changing the background color of the rows. We also want the vertical column separators to be hidden and the horizontal ones to get a different color. Finally, we'll hook into the LoadingRow event, which gives us access to the values bound to each row and, based on those values, allows us to change the visual appearance of the row. To create this DataGrid, you'll need to carry out the following steps:

- Start a new Silverlight solution called DatagridDisplayingData in Visual Studio. We'll start by creating the Book class. Add a new class to the Silverlight project in the solution and name this class Book. Note that this class uses two enumerations—one for the Category and the other for the Language. These can be found in the sample code.
The following is the code for the Book class:

public class Book
{
    public string Title { get; set; }
    public string Author { get; set; }
    public int PageCount { get; set; }
    public DateTime PurchaseDate { get; set; }
    public Category Category { get; set; }
    public string Publisher { get; set; }
    public Languages Language { get; set; }
    public string ImageName { get; set; }
    public bool AlreadyRead { get; set; }
}

- In the code-behind of the generated MainPage.xaml file, we need to create a generic list of Book instances (List<Book>) and load data into this collection. This is shown in the following code:

private List<Book> bookCollection;

public MainPage()
{
    InitializeComponent();
    LoadBooks();
}

private void LoadBooks()
{
    bookCollection = new List<Book>();

    Book b1 = new Book();
    b1.Title = "Book AAA";
    b1.Author = "Author AAA";
    b1.Language = Languages.English;
    b1.PageCount = 350;
    b1.Publisher = "Publisher BBB";
    b1.PurchaseDate = new DateTime(2009, 3, 10);
    b1.ImageName = "AAA.png";
    b1.AlreadyRead = true;
    b1.Category = Category.Computing;
    bookCollection.Add(b1);
    ...
}

- Next, we'll add a DataGrid to the MainPage.xaml file. For now, we won't add any extra properties on the DataGrid. It's advisable to add it to the page by dragging it from the toolbox, so that Visual Studio adds the correct references to the required assemblies in the project, as well as the namespace mapping in the XAML code. Remove the AutoGenerateColumns="False" attribute for now, so that we'll see all the properties of the Book class appear in the DataGrid. The following line of code shows a default DataGrid with its name set to BookDataGrid:

<sdk:DataGrid x:Name="BookDataGrid"></sdk:DataGrid>

- Currently, no data is bound to the DataGrid. To make the DataGrid show the book collection, we set the ItemsSource property from the code-behind in the constructor.
This is shown in the following code:

public MainPage()
{
    InitializeComponent();
    LoadBooks();
    BookDataGrid.ItemsSource = bookCollection;
}

- Running the code now shows a default DataGrid that generates a column for each public property of the Book type. This happens because the AutoGenerateColumns property is True by default.
- Let's continue by making the DataGrid look the way we want it to look. By default, the DataGrid is user-editable, so we may want to change this feature. Setting the IsReadOnly property to True will make it impossible for a user to edit the data in the control. We can lock the display even further by setting both the CanUserResizeColumns and the CanUserReorderColumns properties to False. This will prohibit the user from resizing and reordering the columns inside the DataGrid, which are enabled by default. This is shown in the following code:

<sdk:DataGrid x:Name="BookDataGrid"
              IsReadOnly="True"
              CanUserResizeColumns="False"
              CanUserReorderColumns="False">
</sdk:DataGrid>

- The DataGrid also offers quite an impressive list of properties that we can use to change its appearance. By adding the following code, we specify alternating background colors (the RowBackground and AlternatingRowBackground properties), column widths (the ColumnWidth property), and row heights (the RowHeight property). We also specify how the gridlines should be displayed (the GridLinesVisibility and HorizontalGridLinesBrush properties). Finally, we specify that we also want a row header to be added (the HeadersVisibility property).

<sdk:DataGrid x:Name="BookDataGrid"
              IsReadOnly="True"
              CanUserResizeColumns="False"
              CanUserReorderColumns="False"
              RowBackground="LightBlue"
              AlternatingRowBackground="LightGray"
              ColumnWidth="90"
              RowHeight="30"
              GridLinesVisibility="Horizontal"
              HorizontalGridLinesBrush="Gray"
              HeadersVisibility="All">
</sdk:DataGrid>

- We can also get a hook into the loading of the rows. For this, the LoadingRow event has to be used. This event is triggered as each row gets loaded. Using this event, we can get access to a row and change its properties based on custom code.
In the following code, we specify that if the book is a thriller, we want the row to have a red background:

private void BookDataGrid_LoadingRow(object sender, DataGridRowEventArgs e)
{
    Book loadedBook = e.Row.DataContext as Book;
    if (loadedBook.Category == Category.Thriller)
    {
        e.Row.Background = new SolidColorBrush(Colors.Red); // It's a thriller!
        e.Row.Height = 40;
    }
    else
    {
        e.Row.Background = null;
    }
}

After completing these steps, we have the DataGrid that we wanted. It displays the data (including headers), fixes the columns, and makes it impossible for the user to edit the data. Also, the color of the rows and alternating rows is changed, the vertical grid lines are hidden, and a different color is applied to the horizontal grid lines. Using the LoadingRow event, we have checked whether the book being added is of the "Thriller" category, and if so, a red color is applied as the background color for the row. The result can be seen in the following screenshot:

How it works...

The DataGrid allows us to display data easily, while still offering many customization options to format the control as needed. The DataGrid is defined in the System.Windows.Controls namespace, which is located in the System.Windows.Controls.Data assembly. By default, this assembly is not referenced while creating a new Silverlight application. Therefore, the following extra references are added while dragging the control from the toolbox for the first time:

- System.ComponentModel.DataAnnotations
- System.Windows.Controls.Data
- System.Windows.Controls.Data.Input
- System.Windows.Data

While compiling the application, the corresponding assemblies are added to the XAP file (as can be seen in the following screenshot, which shows the contents of the XAP file). These assemblies need to be added because while installing the Silverlight plugin, they aren't installed as a part of the CLR. This is done in order to keep the plugin size small.
However, when we use them in our application, they are embedded as part of the application. This results in an increase of the download size of the XAP file. In most circumstances, this is not a problem. However, if the file size is an important requirement, then it is essential to keep an eye on this. Also, Visual Studio will include the following namespace mapping in the XAML file:

    xmlns:sdk="clr-namespace:System.Windows.Controls;
               assembly=System.Windows.Controls.Data"

From then on, we can use the control as shown in the following line of code:

    <sdk:DataGrid x:Name="BookDataGrid"> </sdk:DataGrid>

Once the control is added to the page, we can use it in a data binding scenario. To do so, we can point the ItemsSource property to any IEnumerable implementation. Each row in the DataGrid will correspond to an object in the collection. When AutoGenerateColumns is set to True (the default), the DataGrid uses reflection on the type of objects bound to it. For each public property it encounters, it generates a corresponding column. Out of the box, the DataGrid includes a text column, a checkbox column, and a template column. For all the types that can't be displayed, it uses the ToString method and a text column. If we want the DataGrid to feature automatic synchronization, the collection should implement the INotifyCollectionChanged interface. If changes to the objects are to be reflected in the DataGrid, then the objects in the collection should themselves implement the INotifyPropertyChanged interface.

There's more...
Even when loading large amounts of data into the DataGrid, the performance will still be very good. This is the result of the DataGrid implementing UI virtualization, which is enabled by default. Let's assume that the DataGrid is bound to a collection of 1,000,000 items (whether or not this is useful is another question). Loading all of these items into memory would be a time-consuming task as well as a big performance hit.
Due to UI virtualization, the control loads only the rows it's currently displaying. (It will actually load a few more to improve the scrolling experience.) While scrolling, a small lag appears when the control is loading the new items. Since Silverlight 3, the ListBox also features UI virtualization.

Inserting, updating, and deleting data in a DataGrid
The DataGrid is an outstanding control to use when working with large amounts of data at the same time. Through its Excel-like interface, not only can we easily view the data, but also add new records or update and delete existing ones. In this recipe, we'll take a look at how to build a DataGrid that supports all of the above actions on a collection of items.

Getting ready
This recipe builds on the code that was created in the previous recipe. To follow along with this recipe, you can keep using your code or use the starter solution located in the Chapter04/Datagrid_Editing_Data_Starter folder in the code bundle available on the Packt website. The finished solution for this recipe can be found in the Chapter04/Datagrid_Editing_Data_Completed folder.

How to do it...
In this recipe, we'll work with the same Book class as in the previous recipe. Through the use of a DataGrid, we'll manage an ObservableCollection<Book>. The following are the steps we need to perform:
- In the MainPage.xaml.cs file, we bind to a generic list of Book instances (List<Book>). For the DataGrid to react to changes in the bound collection, the collection itself should implement the INotifyCollectionChanged interface. Thus, instead of a List<Book>, we'll use an ObservableCollection<Book> as shown in the following line of code:

    ObservableCollection<Book> bookCollection = new ObservableCollection<Book>();

- Let's first look at deleting the items. We may want to link the hitting of the Delete key on the keyboard with the removal of a row in the DataGrid. In fact, we're asking to remove the currently selected item from the bound collection.
For this, we register for the KeyDown event on the DataGrid as shown in the following code:

    <sdk:DataGrid x:Name="BookDataGrid" KeyDown="BookDataGrid_KeyDown" ...>

- In the event handler, we'll need to check whether the key was the Delete key. Also, the required code for inserting the data, triggered by hitting the Insert key, is included. This is shown in the following code:

    private bool cellEditing = false;

    private void BookDataGrid_KeyDown(object sender, KeyEventArgs e)
    {
        if (e.Key == Key.Delete && !cellEditing)
        {
            RemoveBook();
        }
        else if (e.Key == Key.Insert && !cellEditing)
        {
            AddEmptyBook();
        }
    }

- Note the !cellEditing in the previous code. It's a Boolean field that we are using to check whether we are currently editing a value in a cell or simply have a row selected. In order to carry out this check, we should handle both the BeginningEdit and the CellEditEnded events of the DataGrid as shown in the following code. These are triggered when a cell enters or leaves edit mode, respectively.

    <sdk:DataGrid x:Name="BookDataGrid"
                  BeginningEdit="BookDataGrid_BeginningEdit"
                  CellEditEnded="BookDataGrid_CellEditEnded" ...>

- In the event handlers, we change the value of the cellEditing variable as shown in the following code:

    private void BookDataGrid_BeginningEdit(object sender, DataGridBeginningEditEventArgs e)
    {
        cellEditing = true;
    }

    private void BookDataGrid_CellEditEnded(object sender, DataGridCellEditEndedEventArgs e)
    {
        cellEditing = false;
    }

- Next, we need to write the code either to add an empty Book object or to remove an existing one. Here, we're actually working with the ObservableCollection<Book>: we're adding items to the collection or removing them from it. The application UI contains two buttons. We can add two Click event handlers that will trigger adding or removing an item using the following code. Note that while deleting, we are checking whether an item is selected.
    private void AddButton_Click(object sender, RoutedEventArgs e)
    {
        AddEmptyBook();
    }

    private void DeleteButton_Click(object sender, RoutedEventArgs e)
    {
        RemoveBook();
    }

    private void AddEmptyBook()
    {
        Book b = new Book();
        bookCollection.Add(b);
    }

    private void RemoveBook()
    {
        if (BookDataGrid.SelectedItem != null)
        {
            Book deleteBook = BookDataGrid.SelectedItem as Book;
            bookCollection.Remove(deleteBook);
        }
    }

- Finally, let's take a look at updating the items. In fact, simply typing in new values for the existing items in the DataGrid will push the updates back to the bound collection. Add a Grid containing TextBlock controls in order to see this. The entire Grid is bound to the selected row of the DataGrid. This is done by means of an element binding. The following is a part of the code; the remaining code can be found in the completed solution in the code bundle.

    <Grid DataContext="{Binding ElementName=BookDataGrid, Path=SelectedItem}">
        <TextBlock Text="Title:" FontWeight="Bold" Grid.Row="1" Grid.Column="0">
        </TextBlock>
        <TextBlock Text="{Binding Title}" Grid.Row="1" Grid.Column="1">
        </TextBlock>
    </Grid>

We now have a fully working application to manage the data of the Book collection. We have a data-entry application that allows us to perform CRUD (create, read, update, and delete) operations on the data using the DataGrid. The final application is shown in the following screenshot:

How it works...
The DataGrid is bound to an ObservableCollection<Book>.

To remove an item by hitting the Delete key, we first need to check that we're not editing the value of a cell. If we are, then the row shouldn't be deleted. This is done using the BeginningEdit and CellEditEnded events. The former is called before the user can edit the value. It can also be used to perform some action on the value in the cell, such as formatting. The latter event is called when the focus moves away from the cell.
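For cell edits like these to propagate back to the bound objects, the Book class must raise change notifications, as noted earlier. The following is a minimal sketch of what such a class could look like; the property names (Title, Category, AlreadyRead) are taken from the recipes, but the INotifyPropertyChanged wiring shown here is an assumed implementation, not the book's exact code.

```csharp
using System.ComponentModel;

// Hypothetical Category enum; only Thriller is confirmed by the text.
public enum Category { Thriller, Other }

public class Book : INotifyPropertyChanged
{
    private string title;
    public string Title
    {
        get { return title; }
        set { title = value; OnPropertyChanged("Title"); }
    }

    private Category category;
    public Category Category
    {
        get { return category; }
        set { category = value; OnPropertyChanged("Category"); }
    }

    private bool alreadyRead;
    public bool AlreadyRead
    {
        get { return alreadyRead; }
        set { alreadyRead = value; OnPropertyChanged("AlreadyRead"); }
    }

    // Remaining properties (PurchaseDate, Language, ...) follow the same pattern.

    public event PropertyChangedEventHandler PropertyChanged;

    private void OnPropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

With this in place, editing a cell raises PropertyChanged on the edited Book, so the bound TextBlock controls update immediately, while the ObservableCollection&lt;Book&gt; takes care of notifying the DataGrid about added and removed rows.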
In the end, managing (inserting, deleting, and so on) the data in the DataGrid comes down to managing the items in the collection. We leverage this here. We aren't adding any items to the DataGrid itself, but we are either adding items to the bound collection or removing items from the bound collection.

Sorting and grouping data in a DataGrid
Sorting the values within a column in a control such as a DataGrid is something that we take for granted. Silverlight's implementation has some very strong sorting options working out of the box for us. It allows us to sort by clicking on the header of a column, amongst other things. Along with sorting, the DataGrid enables the grouping of values. Items possessing a particular property (that is, in the same column) and having equal values can be visually grouped within the DataGrid. All of this is possible by using a view on top of the bound collection. In this recipe, we'll look at how we can leverage this view to customize the sorting and grouping of data within the DataGrid.

Getting ready
This sample continues with the same code that was created in the previous recipes of this article. If you want to follow along with this recipe, you can continue using your code or use the provided starter solution located in the Chapter04/Datagrid_Sorting_And_Grouping_Starter folder in the code bundle that is available on the Packt website. The finished code for this recipe can be found in the Chapter04/Datagrid_Sorting_And_Grouping_Completed folder.

How to do it...
We'll be using the familiar list of Book items again in this recipe. This list is implemented as an ObservableCollection<Book>.
- Instead of using the AutoGenerateColumns feature, we'll define the columns that we want to see manually. We'll make use of several DataGridTextColumns, a DataGridCheckBoxColumn and a DataGridTemplateColumn. The following is the code for the DataGrid:

    <sdk:DataGrid x:Name="CopyBookDataGrid" AutoGenerateColumns="False" ... >
        <sdk:DataGrid.Columns>
            <sdk:DataGridTextColumn ... />
            <sdk:DataGridTextColumn ... />
            <sdk:DataGridTextColumn ... />
            <sdk:DataGridTextColumn ... />
            <sdk:DataGridTextColumn ... />
            <sdk:DataGridCheckBoxColumn x:Name="CopyAlreadyReadColumn"
                Binding="{Binding AlreadyRead, Mode=TwoWay}"
                Header="Already read">
            </sdk:DataGridCheckBoxColumn>
            <sdk:DataGridTemplateColumn Header="Purchase date" ... >
                <sdk:DataGridTemplateColumn.CellTemplate>
                    <DataTemplate>
                        <controls:DatePicker SelectedDate="{Binding PurchaseDate}">
                        </controls:DatePicker>
                    </DataTemplate>
                </sdk:DataGridTemplateColumn.CellTemplate>
            </sdk:DataGridTemplateColumn>
        </sdk:DataGrid.Columns>
    </sdk:DataGrid>

- In order to implement both sorting and grouping, we'll use the PagedCollectionView. It offers us a view on top of our data and allows the data to be sorted, grouped, filtered and so on without changing the underlying collection. The PagedCollectionView is instantiated using the following code. We pass in the collection (in this case, the bookCollection) on which we want to put the view.

    PagedCollectionView view = new PagedCollectionView(bookCollection);

- In order to change the manner of sorting from code, we need to add a new SortDescription to the SortDescriptions collection of the view. In the following code, we are specifying that we want the sorting to occur on the Title property of the books in a descending order:

    view.SortDescriptions.Add(new SortDescription("Title",
        ListSortDirection.Descending));

- If we want our data to appear in groups, we can make it so by adding a new PropertyGroupDescription to the GroupDescriptions collection of the view. In this case, we want the grouping to be based on the value of the Language property.
This is shown in the following code:

    view.GroupDescriptions.Add(new PropertyGroupDescription("Language"));

- The DataGrid will not bind to the collection, but to the view. We specify this by setting the ItemsSource property to the instance of the PagedCollectionView. The following code should be placed in the constructor as well:

    public MainPage()
    {
        InitializeComponent();
        LoadBooks();
        view = new PagedCollectionView(bookCollection);
        view.SortDescriptions.Add(new SortDescription("Title",
            ListSortDirection.Descending));
        view.GroupDescriptions.Add(new PropertyGroupDescription("Language"));
        BookDataGrid.ItemsSource = view;
    }

We have now created a DataGrid that allows the user to sort the values in a column as well as group the values based on a value in the column. The resulting DataGrid is shown in the following screenshot:

How it works...
Actions such as sorting, grouping, and filtering don't work on the actual collection of data. They are applied on a view that sits on top of the collection (either a List<Book> or an ObservableCollection<Book>).

To change the sorting, we can add a new SortDescription to the SortDescriptions collection that the view encapsulates. Note that SortDescriptions is a collection to which we can add more than one sort field. The second SortDescription value will be used only when equal values are encountered for the first SortDescription value.

Grouping (using the PropertyGroupDescription) allows us to split the grid into different levels. Each section will contain items that have the same value for a particular property. Similar to sorting, we can add more than one PropertyGroupDescription, which results in nested groups.

There's more...
From code, we can expand or collapse all groups.
The following code shows us how to do so:

    private void CollapseGroupsButton_Click(object sender, RoutedEventArgs e)
    {
        foreach (CollectionViewGroup group in view.Groups)
        {
            BookDataGrid.CollapseRowGroup(group, true);
        }
    }

    private void ExpandGroupsButton_Click(object sender, RoutedEventArgs e)
    {
        foreach (CollectionViewGroup group in view.Groups)
        {
            BookDataGrid.ExpandRowGroup(group, true);
        }
    }

Sorting a template column
If we want to sort a template column, we have to specify which value needs to be taken into account for the sorting to be executed. Otherwise, Silverlight has no clue which field it should use. This is done by setting the SortMemberPath property as shown in the following code:

    <sdk:DataGridTemplateColumn SortMemberPath="PurchaseDate" ... >

We'll look at the DataGridTemplateColumn in more detail in the Using custom columns in the DataGrid recipe of this article.

Summary
In this article, we discussed the following:
- Displaying data in a customized DataGrid
- Inserting, updating, and deleting data in a DataGrid
- Sorting and grouping data in a DataGrid

If you have read this article you may be interested to view:
https://www.packtpub.com/books/content/data-manipulation-silverlight-4-data-grid
quality. This view consists of four layers, namely: quality focus, process, methods and tools. Figure 1.1 illustrates this software engineering view.

[Figure 1.1: Software Engineering – A Layered View, showing four layers from top to bottom: Tools, Methods, Process, Quality Focus]

Quality Focus
At the very foundation of this layer is a total focus on quality. It is a culture where commitment to continuous improvement on the software development process is fostered. This culture enables the development of more effective approaches to software engineering.

Process
Among other things, the process defines what work products need to be produced and what milestones are defined. It also includes assurance that quality is maintained, and that change is properly controlled and managed.

• Booch Method
The Booch method encompasses both a "micro development process" and a "macro development process". The micro level defines a set of analysis tasks that are reapplied for each step in the macro process. Hence, an evolutionary approach is maintained. Booch's micro development process identifies classes and objects and the semantics of classes and objects, defines relationships among classes and objects, and conducts a series of refinements to elaborate the analysis model.

• Coad and Yourdon Method
The Coad and Yourdon method is often viewed as one of the easiest methods to learn. Modeling notation is relatively simple, and guidelines for developing the analysis model are straightforward. A brief outline of Coad and Yourdon's process follows:
- Identify objects using "what to look for" criteria
- Define a generalization/specialization structure
- Define a whole/part structure
- Identify subjects (representations of subsystem components)
- Define attributes
- Define services

• Jacobson Method
Also called OOSE (object-oriented software engineering), the Jacobson method is a simplified version of the proprietary Objectory method, also developed by Jacobson.
This method is differentiated from others by its heavy emphasis on the use-case: a description or scenario that depicts how the user interacts with the product or system.

• Rumbaugh Method
Rumbaugh and his colleagues developed the object modeling technique (OMT) for analysis, system design, and object-level design. The analysis activity creates three models: the object model (a representation of objects, classes, hierarchies, and relationships), the dynamic model (a representation of object and system behavior), and the functional model (a high-level DFD-like representation of information flow through the system).

• Wirfs-Brock Method
Wirfs-Brock, Wilkerson, and Weiner do not make a clear distinction between analysis and design tasks. Rather, a continuous process that begins with the assessment of a customer specification and ends with design is proposed. A brief outline of Wirfs-Brock et al.'s analysis-related tasks follows:
- Evaluate the customer specification
- Extract candidate classes from the specification via grammatical parsing
- Group classes in an attempt to identify superclasses
- Define responsibilities for each class
- Assign responsibilities to each class
- Identify relationships between classes
- Define collaboration between classes based on responsibilities
- Build hierarchical representations of classes
- Construct a collaboration graph for the system

Tools
Tools provide support to the process and methods. Computer-aided software engineering provides a system of support to the software development project where information created by one tool can be used by another. Tools may be automated or semiautomated. Most tools are used to develop models. Models are patterns of something to be made, or simplifications of things. There are two models that are generally developed: a system model is an inexpensive representation of a complex system that one needs to study, while a software model is a blueprint of the software that needs to be built.
Like methodologies, several modeling tools are used to represent systems and software. Some of them are briefly enumerated below.

Structured Approach Modeling Tools:
• Entity-relationship Diagrams
• Data Flow Diagrams
• Structured English or Pseudocodes
• Flow Charts

Object-oriented Approach Modeling Tools:
• Unified Modeling Language (UML)

Quality within the Development Effort
As was mentioned in the previous section, quality is the mindset that must influence every software engineer. Focusing on quality in all software engineering activities reduces costs and improves time-to-market by minimizing rework. In order to do this, a software engineer must explicitly define what software quality is, have a set of activities that will ensure that every software engineering work product exhibits high quality, do quality control and assurance activities, and use metrics to develop strategies for improving the software product and process.
These characteristics or attributes must be measurable so that they can be compared to known standards. they take a look at the internal characteristics rather than the external. we also improve the quality of the resulting product. Quality of the Process There are many tasks that affects the quality of the software. Common process guidelines are briefly examined below. the software has quality if it gives what they want. the quality of the software suffers. How do we define quality? Three perspectives are used in understanding quality. we look at the quality of the product. all the time. • ISO 9000:2000 for Software. As software engineers. Process guidelines suggests that by improving the software development process. when a task fails.What is quality? Quality is the total characteristic of an entity to satisfy stated and implied needs. They also judge it based on ease of use and ease in learning to use it. It is a process meta-model that is based on a set of system and software engineering capabilities that must exists within an organization as the organization reaches different level of capability and maturity of its development process. For end-users. Quality in the Context of the Business Environment In this perspective. For the ones developing and maintaining the software. Sometimes. • Software Process Improvement and Capability Determination (SPICE). technical value of the software translates to business value. quality is viewed in terms of the products and services being provided by the business in which the software is used. It is a standard that defines a set of requirements for software process assessment. we value the quality of the software development process. specifically. They normally assess and categorized quality based on external characteristics such as number of failures per type of failure. The intent of the standard is to assist organization in developing an objective evaluation of the efficacy of any defined software process. 
why do we need it in the first place? How do we address the Quality Issues? We can address quality issues by: 4 .. systems or services that it provides. quality in the context of the business environment. we build models based on how the user's external requirements relate to the developer's internal requirements. designing. when they want it. i. • Maintainability. ISO 9000:2000 for Software and SPICE. methodologies. Manage user requirements because it will change over time. they influence the way software is developed such as good maintainability. and they comply with user requirements and standards. It is characterized by the ease of upgrading and maintaining. A mindset focus on quality is needed to discover errors and defects so that they can be addressed immediately. In order for it to work properly. a multi-tiered testing strategy. 3. 4. Some of them are enumerated below: • Usability. explicitly documented development standards (quality standards). It is the capability of the software to execute in different platforms and architecture. Understand people involved in the development process including end-users and stakeholders. It is the ability of the software to transfer from one system to another.1. formal technical reviews. • Portability. Quality standards are sets of principles. It is the ability of the software to evolve and adapt to changes over time. 2. 5. one takes a look at specific characteristics that the software exhibits. it should conform to explicitly stated functional and performance requirements (user's external characteristics). Implicit characteristics must be identified and documented. Use Quality Standards. It is the characteristic of the software that exhibits ease with which the user communicates with the system.. Characteristics of a Well-engineered Software To define a well-engineered software. 3. and implicit characteristics (developer's internal characteristics) that are expected of all professionally developed software. 
Requirements are the basis defining the characteristics of quality software. It is necessary to explicitly specify and prioritize them. secure and safe. control of software documentation and the changes made to it. This fosters an environment of collaboration and effective communication. It encompasses a quality management approach. procedures. 1. effective software engineering technology (methods and tools). 5 . and measuring and reporting mechanism. Commit to quality. people are unduly optimistic in their plans and forecasts. Software Quality Assurance and Techniques Software quality assurance is a subset of software engineering that ensures that all deliverables and work products are meet. • Dependability. It is considered as one of the most important activity that is applied throughout the software development process. and people prefer to use intuitive judgment rather than quantitative models. Software Quality A software has quality if it is fit for use. Its goal is to detect defects before the software is delivered as a final product to the end-users. • Reusability. It is the characteristic of the software to be reliable. Standards define a set of development criteria that guide the manner by which the software is engineered.e. it is working properly. and guidelines to bring about quality in the process such as CMMI. Understand the systematic biases in human nature such as people tend to be risk averse when there is a potential loss. 2. Three important points should be raised from the definition of software quality. Software Requirements are the foundation from which quality is measured. a procedure to assure compliance with software development standards. 4. analyzing and reporting defects and rework. • documents to be produced. Activities involved are the following: 1. These results contribute to the development of quality software. The SQA team has responsibility over the quality assurance planning. records keeping. 
They monitor and track defects or faults found with each work products. The SQA team ensures that deviations in the software activities and work products are handled based on defined standard operating procedures. It is the capability of the software to use resources efficiently. 5. They document it and ensures that corrections have been made. The SQA team reviews software engineering activities employed by the development teams to check for compliance with the software development process. Specifically. 3. Software Quality Assurance Activities Software Quality Assurance is composed of a variety of activities with the aim of building quality software. They monitor and track deviations from the software development process. • audits and reviews to be performed. to ensure that the software has been represented according to defined standards. 3. to uncover errors in function. They do this during the project planning phase. 4. and • amount of feedback required. They identify the: • evaluation to be performed.development team and SQA team. 6 . • standards that are applicable. to verify that the software under review meets user requirements. It involves two groups of people. The SQA team reviews work products to check for compliance with defined standards. • procedures for error reporting and tracking. logic or implementation for any representation of the software. its goals are: 1. and 5. 2. It serves to discover errors and defects that can be removed before software is shipped to the end-users.• Efficiency. Formal Technical Reviews Work products are outputs that are expected as a result of performing tasks in the software process. They document it and ensure that corrections have been made. Therefore. 6. The SQA team participates in the development of the project's software process description. to achieve software that is developed in a uniform manner. Formal Technical Reviews (FTR) are performed at various points of the software development process. 
to make projects more manageable. overseeing. they should be measurable and checked against requirements and standards. A technique to check the quality of the work products is the formal technical review. 2. they should be monitored and controlled. The development team selects a software development process and the SQA team checks it if it conform to the organizational policy and quality standards. The changes to this work products are significant. The SQA team reports deviations and non-compliance to standards to the senior management or stakeholders. The SQA team prepares the SQA Plan. It is managed by a moderator who as responsibility of overseeing the review. • Keep the number of participants to a minimum and insist on preparing for the review. data and code design etc. Fagan's Inspection Method It was introduced by Fagan in 1976 at IBM. A checklist provides structure when conducting the review. Remind everybody that it is not time to resolve these issues rather have them documented. Writing down comments and remarks by the reviewers is a good technique. it is not time for problem-solving session. specifications or function errors. It is inevitable that issues arise and people may not agree with its impact. it can be extended to include other work products such as technical documents. • Provide a checklist for the work product that is likely to be reviewed. Originally. • Minimize debate and rebuttal. • All classes of defects in documentation and work product are inspected not merely logic. Mention and clarify problem areas. However. • Point out problem areas but do not try to solve them. Checklists of questionnaires to be asked by the inspectors are used to define the task to stimulate increased defect finding. It would required a team of inspectors assigned to play roles that checks the work product against a prepared list of concerns. It checks the effectiveness of the review process. Reviews should not last more than two hours. 
• Inspections are carried out in a prescribed list of activities. It also helps the reviewers stay focus on the review. They are categorized as follows: 7 . The tone of the review should be loose but constructive. Conducting inspections require a lot of activities. It should be done and schedule for another meeting. • De-brief the review.A general guideline of conducting formal technical reviews is listed below. • inspectors are assigned specific roles to increase effectiveness. • Inspection are carried out by colleagues at all levels of seniority except the big boss. and used for reports which are analyzed in a manner similar to financial analysis. Two formal technical reviews of work products used in industry are the Fagan's Inspection Method and Walkthroughs. • Review the work product NOT the developer of the work product. • Plan for the agenda and stick to it. Those rules are listed as follows: • Inspections are carried out at a number of points in the process of project planning and systems development. • Schedule the reviews as part of the software process and ensure that resources are provided for each reviewer. Materials are inspected at a particular rate which has been found to give maximum error-finding ability. and set another meeting for their resolutions. Preparation prevents drifts in a meeting. It is a good practice to write down notes so that wording and priorities can be assessed by other reviewers. However. It is more formal than a walkthrough. it was used to check codes of programs. It follows certain procedural rules that each member should adhere to. • Inspection meetings are limited to two hours. • Inspections are led by a trained moderator. • Write down notes. It also helps the reviewers stay focus. model elements. It should aid in clarifying defects and actions to be done. The goal of the review is to discover errors and defect to improve the quality of the software. • Statistics on types of errors are key. • Giving of the overview. 
An action list is a list of actions that must be done in order to improve the quality of the work product which includes the rework for the defects. • He distributes the necessary materials of the work product to the reviewers. These are later on inspected by other inspections. Some guidelines must be followed in order to have a successful walkthrough. The defect list is assigned to a person for repair. • Holding a casual analysis meeting. Walkthrough A walkthrough is less formal than the inspection. not the person. • Holding the meeting. He will try to discover defects in the work product. The developer of the work product is present to explain the work product. • Criticize the product. No discussion about whether the defect is real or not is allowed. when and where they have to play the roles. He will perform the role that was assigned to him based on the documentation provided by the moderator. Emphasis is given to the way the inspection was done. Normally. Unlike the inspection where one has a moderator. • Keep vested interest group apart. similar with inspection. the roles that they have to play. the developer of the work product. The moderator ensures that the defects on the work products are addressed and reworked. The participants of the meeting are the inspectors. • Emphasize that the walkthrough is for error detection only and not error correction. normally around 3 people. the work product and corresponding documentation are given to a review team. • Preparing. Here. They are listed below: • No manager should be present. moderator and the developer of the work product. A 30-minute presentation of the project for the inspectors are given. A moderator is tasked to prepare a plan for the inspection.• Planning. and answer questions that inspectors ask. It can be omitted if everybody is familiar with the overall project. • No counting or sandbagging. Conducting a walkthrough. • Walkthrough Proper 8 . 
He is discouraged to fix the defects or criticize the developer of the work product. where comments of their correctness are elicited. This is optionally held where inspectors are given a chance to express their personal view on errors and improvements. A defect list is produced by the moderator. • He specifically asks each reviewer to bring to the walkthrough two positive comments and one negative comment about the work product. They are categorized as follows: • Pre-walkthrough Activities • The developer of the work product schedules the walkthrough preferably. • Reworking of the work product. Each inspector is given 1 to 2 hours alone to inspect the work product. A scribe is also present to produce an action list. and distributes the necessary documentation accordingly. a day or two in advance. moderates the walkthrough. • Following up the rework. this is the developer of the work product. would require many activities. resolution of issues etc. He decides who will be the inspectors. • Always document the action list. • The developer of the work product gives a brief presentation of the work product. coding. change management. According to him. Linear Sequential Model The Linear Sequential Model is also known as the waterfall model or the classic life cycle. composition of the development team. Sometimes. and the management and work products that are required. • Possibly. The Software Process The software process provides a strategy that a software development team employs in order to build quality software. another walkthrough may be scheduled. and the requirements of the software. and complexity of the problem. • An action list is produced at the end of the walkthrough. tasks sets and umbrella activities. • He is asked to submit a status report on the action taken to resolve the errors or discrepancies listed in the action list. testing and maintenance. design. progressing to the analysis of the software.3 shows this type of software process model. 
Issues are listed down in the action list. • Post-walkthrough Activities • The developer of the work product receives the action list. They are modified and adjusted to the specific characteristic of the software project. Pressman provides a graphical representation of the software process. These tasks would have milestones. It begins by analyzing the system. This may be omitted if the reviewers are familiar with the work product or project. They are also known as phases of the software development process. requirements management. • He solicit comments from the reviewers. deliverables or work products and software quality assurance (SQA) points. Common process models are discussed within this section. Task Sets Each of the activities in the process framework defines a set of tasks. This is the first model ever formalized. issues arise and presented but they should not find resolutions during the walkthrough. Framework of Activities These are activities that are performed by the people involved in the development process applicable to any software project regardless of project size. It insists that a phase can not begin unless the previous phase is finished. it provides the framework from which a comprehensive plan for software development can be established. formal technical reviews etc. Umbrella Activities These are activities that supports the framework of activities as the software development project progresses such as software project management. and other process models are based on this approach to development. methods and tools to be used. risk management. Figure 1. It suggests a systematic and sequential approach to the development of the software. Types of Software Process Models There are many types of software process models that suggest how to build software. 9 . It is chosen based on the nature of the project and application. It consists of framework activities. • It provides a basis for other software process models. 
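The strict phase gating that the linear sequential model insists on can be sketched in a few lines of code. This is an illustrative sketch only; the phase names and the `Project` class are invented for the example and are not part of any real tool.

```python
# Illustrative sketch of waterfall's rule: a phase cannot begin
# unless every previous phase has finished.

PHASES = ["system analysis", "software analysis", "design",
          "coding", "testing", "maintenance"]

class Project:
    def __init__(self):
        self.finished = []  # phases completed so far, in order

    def start(self, phase):
        position = PHASES.index(phase)
        # Waterfall gate: all earlier phases must already be finished.
        if self.finished != PHASES[:position]:
            raise RuntimeError(f"cannot start {phase!r} yet")
        self.finished.append(phase)

p = Project()
p.start("system analysis")    # allowed: first phase
p.start("software analysis")  # allowed: previous phase finished
```

Attempting to jump ahead (for example, starting coding before design) raises an error, which mirrors the model's insistence on finishing each phase first.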
The advantages of this model are:
• It is the first process model ever formulated.

The disadvantages of this model are:
• Real software projects rarely follow a strict sequential flow. In fact, it is very difficult to decide when one phase ends and the other begins.
• End-user involvement only occurs at the beginning (requirements engineering) and at the end (operations and maintenance).
• It does not address the fact that requirements may change during the software development project. Thus, it delays the development of the software.

Linear Sequential Model

Prototyping Model
To aid in the understanding of end-user requirements, prototypes are built. Prototypes are partially developed software that enable end-users and developers to examine aspects of the proposed system and decide if it is included in the final software product. This approach is best suited for the following situations:
• A customer defines a set of general objectives for the software but does not identify detailed input, processing, or output requirements.
• The developer may be unsure of the efficiency of an algorithm, the adaptability of a technology, or the form that human-computer interaction should take.

The advantage of this process model is:
• The end-users have an active part in defining the human-computer interaction requirements of the system. They get the actual "feel" of the software.

The disadvantages of this process model are:
• Customers may mistakenly accept the prototype as a working version of the software. Software quality is compromised because other software requirements are not considered, such as maintainability.
• Developers tend to make implementation compromises in order to have a working prototype without thinking of future expansion and maintenance.

Prototyping Model

Rapid Application Development (RAD) Model
This process is a linear sequential software development process that emphasizes an extremely short development cycle. It is achieved through a modular-based construction approach. In this process model, the software project is defined based on functional decomposition of the software. Functional partitions are assigned to different teams, and are developed in parallel. It is best used for software projects where requirements are well-understood, project scope is properly constrained, and a big budget with resources is available. Everybody is expected to be committed to a rapid approach to development.

The advantage of this model is:
• A fully functional system is created in a short span of time.

The disadvantages of this model are:
• For large but scalable projects, this process requires a sufficient number of developers to have the right number of development teams.
• Developers and customers must be committed to the rapid-fire of activities necessary to develop the software in a short amount of time.
• It is not a good process model for systems that cannot be modularized.
• It is not a good process model for systems that require high performance.
• It is not a good process model for systems that make use of new technology or a high degree of interoperability with existing computer programs such as legacy systems.

Rapid Application Development (RAD) Model

Evolutionary Process Models
This process model recognizes that software evolves over a period of time. It enables the development of an increasingly more complicated version of the software. The approach is iterative in nature, and it provides potential rapid development of incremental versions of the software. Specific evolutionary process models are the Incremental Model, Spiral Model, and Component-based Assembly Model.

Incremental Model
This process model combines the elements of a linear sequential model with the iterative philosophy of prototyping. Linear sequences are defined where each sequence produces an increment of the software. Unlike prototyping, the increment is an operational product.

Incremental Model

Spiral Model
It was originally proposed by Boehm. It is an evolutionary software process model that couples the iterative nature of prototyping with the controlled and systematic aspects of the linear sequential model. An important feature of this model is that it has risk analysis as one of its framework of activities. Therefore, it requires risk assessment expertise. Figure 1.7 shows an example of a spiral model.

Spiral Model

Component-based Assembly Model
It is similar to the Spiral Process Model. However, it makes use of object technologies where the emphasis of the development is on the creation of classes which encapsulate both data and the methods used to manipulate the data. Reusability is one of the quality characteristics that are always checked during the development of the software.

Component-based Assembly Model

Concurrent Development Model
The Concurrent Development Model is also known as concurrent engineering. It makes use of state charts to represent the concurrent relationship among tasks associated within a framework of activities. It is represented schematically by a series of major technical tasks and associated states. The user's needs, management decisions and review results drive the over-all progression of the development.

Formal Methods
The Formal Methods is a software engineering approach which encompasses a set of activities that lead to a mathematical specification of the software. It provides a mechanism for removing many of the problems that are difficult to overcome using other software engineering paradigms. It serves as a means to verify, discover and correct errors that might otherwise be undetected.

Factors that Affect the Choice of Process Model
• Type of the Project
• Methods and Tools to be Used
• Requirements of the Stakeholders
• Common Sense and Judgment

Understanding Systems
The software project that needs to be developed revolves around systems. Systems consist of a group of entities or components, organized by means of structure, interacting together to form specific interrelationships, and working together to achieve a common goal. Understanding systems provides a context for any project through the definition of the boundaries of the project. It asks the question, "What is included in the project? What is not?" In defining the system boundaries, a software engineer discovers the following:
• entities or groups of entities that are related and organized in some way within the system, in that they either provide input, do activities or receive output;
• activities or actions that must be performed by the entities or groups of entities in order to achieve the purpose of the system;
• a list of inputs; and
• a list of outputs.

As an example, consider a system whose goal is to handle club membership application. Entities that are involved in this system are the applicant, club staff and coach; they are represented as rectangular boxes. They are related with one another by performing certain activities within this system. The major activities that are performed are the submission of the application forms, the scheduling of mock try-outs and the assignment of the applicant to a squad; they are represented by a circle in the middle that defines the functionality of maintaining club membership information. To perform these actions, a list of inputs is necessary, specifically, the application forms and the schedule of the mock try-outs. They are represented by an arrow with the name of the data being passed; the arrow head indicates the flow of the data. The results that are expected from this system are the membership reports and, importantly, the squad listings. Again, they are represented by an arrow with the name of the data being passed, with the arrow head indicating the flow of the data. This would help the software engineer study the system where the software project revolves.

General Principles of Systems
Some general principles of systems are discussed below:
• The larger the system is, the more resources must be devoted to its everyday maintenance. As an example, the cost of maintaining a mainframe is very expensive compared to maintaining several personal computers.
• Systems are always part of larger systems, and they can always be partitioned into smaller systems. This is the most important principle that a software engineer must understand. Because systems are composed of smaller subsystems and vice versa, software systems can be developed in a modular way. It is important to determine the boundaries of the systems and their interactions so that the impact of their development is minimal and can be managed and controlled.
• The more specialized a system, the less it is able to adapt to different circumstances. Changes would have a great impact on the development of such systems. One should be careful that there are no dramatic changes in the environment or requirements when the software is being developed. Stakeholders and developers should be aware of the risks and costs of the changes during the development of the software.

Components of Automated Systems
There are two types of systems, namely, man-made systems and automated systems. Man-made systems are also considered manual systems. They are not perfect; they will always have areas for correctness and improvements. These areas for correctness and improvements can be addressed by automated systems. An automated system consists of components that support the operation of a domain-specific system. In general, it consists of the following:
1. Computer Hardware. This component is the physical device.
2. Computer Software. This component is the program that executes within the machine.
3. People. This component is responsible for the use of the computer hardware and software. They provide the data as input, and they interpret the output (information) for day-to-day decisions.
4. Procedures. This component is the policies and procedures that govern the operation of the automated system.
5. Data and Information. This component provides the input (data) and output (information).
6. Connectivity. This component allows the connection of one computer system with another computer system. It is also known as the network component.

Understanding People in the Development Effort
To help in the fostering of a quality mindset in the development of the software, one should understand the people involved in the software development process, particularly, their interest regarding the system and the software that needs to be developed. In this section, two major groups that are involved in the software development effort are discussed, namely, the end-users and the development team.

End-users
End-users are the people who will be using the end-product. Much of the requirements will be coming from this group. They can be grouped into two according to their involvement within the system organization and development, namely, those who are directly involved and those who are indirectly involved.

Those who are directly involved
Table 1 shows the categorization of the end-users according to the job functions that they perform within the system. Users at the operational level are good candidates for interview regarding the report layouts and code design.

Table 1

General Guidelines with End-Users
• The higher the level of the manager, the less he or she is likely to care about computer technology. They are more interested in substance rather than form. It would be best to ask him or her about the over-all results and performance the system can provide.
• The goals and priorities of management may be in conflict with those of the supervisory and operational users. This can be seen based on their different levels of concerns. As a software engineer, try to discover areas of commonality, and prioritize requirements. More on this in Chapter 3 - Requirements Engineering.
• Management may not provide resources, funding or time that the users feel is necessary to build an effective system. Resource and financial constraints will occur; keep an eye on them and address them accordingly. More on this in Chapter 3 - Requirements Engineering.

Those who are indirectly involved
Mostly, this group includes the auditors, standard bearers, and the quality assurance group. The general objective of this group is to ensure that the system is developed in accordance with various standards set, such as:
• Accounting standards developed by the organization's accounting operations or firm.
• Standards developed by other departments within the organization or by the customer or user who will inherit the system.
• Various standards imposed by the government regulatory agencies.

They provide the necessary notation and format of documentation, and they may be needed in the definition of the presentation and documentation of the system. It is important that they be involved in every activity that would require their expertise and opinion. Some possible problems that may be encountered with this group are listed below:
• They don't get involved in the project until the very end.

Development Team
The development team is responsible for building the software that will support a domain-specific system. It may consist of the following: systems analyst, systems designer, programmers, testers, and the quality assurance group.

System Analyst
His responsibility is understanding the system. Within this system, he identifies customer wants, and documents and prioritizes requirements. This involves breaking down the system to determine specific requirements, which will be the basis for the design of the software.

System Designer
His job is to transform a technology-free architectural design that will provide the framework within which the programmers can work. Usually, the system analyst and designer are the same person, but it must be emphasized that the functions require different focus and skill.

Programmers
Based on the system design, the programmers write the codes of the software using a particular programming language.

Testers
For each work product, it should be reviewed for faults and errors. This supports the quality culture needed to develop quality software. It ensures that work products meet requirements and standards defined.

Documentation in the Development Effort
What is documentation? It is a set of documents or informational products that describe a computer system. Each document is designed to perform a particular function, such as:
• REFERENCE, examples are technical or functional specifications
• INSTRUCTIONAL, examples are tutorials, demonstrations, prototypes, etc.
• MOTIVATIONAL, examples are brochures, demonstrations, prototypes

There are several types of documentation and informational work products. Some of them are listed below:
• System Features and Functions
• User and Management Summaries
• Users Manual
• Systems Administration Manuals
• Video
• Multimedia
• Tutorials
• Demonstrations
• Reference Guide
• Quick Reference Guide
• Technical References
• System Maintenance Files
• System Test Models
• Conversion Procedures
• Operations/Operators Manual
• On-line help
• Wall Charts
• Keyboard Layouts or Templates
• Newsletters

Good documents cannot improve messy systems. However, they can help in other ways. There are two main purposes of documentation. Specifically, they:
• provide a reasonably permanent statement of a system's structure or behavior through reference manuals, user guides and systems architecture documents; and
• serve as transitory documents that are part of the infrastructure involved in running real projects, such as scenarios, internal design documentation, meeting reports, bugs, etc.

The following table shows how documentation supports the software development process.

Criteria for Measuring Usability of Documents
A useful document furthers the understanding of the system's desired and actual behavior and structure. It serves to communicate the system's architectural versions. It provides a description of details that cannot be directly inferred from the software itself or from executable work products. Some criteria for measuring usability of documents are listed below:
1. Availability. It must be present when and where needed. Users should know that the documents exist. Related documents must be located in one manual or book; referrals to other manuals and books should be avoided.
2. Suitability. It should be aligned to users' tasks and interests. It should be accurate and complete. Each item of documentation should have a unique name for referencing and cross-referencing, a purpose or objective, and a target audience (who will be using the document).
3. Accessibility. It should be easy to find the information that users want. It should fit in an ordinary 8.5in x 11in paper for ease of handling, storage, and retrieval.
4. Readability. It should be written in a fluent and easy-to-read style and format. It should be understandable without further explanation. It should not have any abbreviations; if you must use one, provide a legend.

Importance of Documents and Manuals
Documents and manuals are important because:
• They save cost. With good manuals, one needs less personnel to train the users, and support on-going operations.
• They serve as security blankets. In case people leave, manuals and technical documents serve as written backup.
• They serve as sales and marketing tools. Good manuals differentiate their products - a slick manual means a slick product - especially for off-the-shelf software products.
• They serve as tangible deliverables. Management and users know little about computer jargon, but they can hold and see a user manual.
• They serve as contractual obligations.
• They serve as testing and implementation aids, such as system test scripts and models, hands-on training for new personnel and design aid.
• They are used to compare the old and new systems.

It is important to include the following items as part of the user's manual.

CHAPTER 2
Requirements Engineering

Designing and building a computer system is challenging, creative and just plain fun. However, developing good software that solves the wrong problem serves no one. It is important to understand the user's needs and requirements so that we solve the right problem and build the right system.

In this chapter, we will be discussing the concepts and dynamics of requirements engineering. It is a software development phase that consists of seven distinct tasks or activities whose goal is understanding and documenting stakeholders' requirements. Two important models will be built, namely, the requirements model, which is the system or problem domain model, and the analysis model, which serves as the base model of the software (solution model). The Requirements Traceability Matrix (RTM) will be introduced to help software engineers manage requirements, and the requirements metrics and their significance will also be discussed.

Requirements Engineering Concepts
Requirements Engineering allows software developers to understand the problem they are solving. It encompasses a set of tasks that lead to an understanding of what the business impact of the software will be, what the customer wants, and how end-users will interact with the software. It provides an appropriate mechanism for understanding stakeholders' needs, analyzing requirements, determining feasibility, negotiating a reasonable solution, specifying the solutions clearly, validating the specification and managing requirements as they are transformed into a software system.
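One simple way to picture the Requirements Traceability Matrix mentioned above is as a table that links each requirement forward to the design elements and test cases that cover it. The sketch below is illustrative only; the requirement IDs and artifact names are invented for the example.

```python
# Hypothetical RTM sketch: requirement IDs and artifact names are made up.
rtm = {
    "REQ-001": {"design": ["membership form layout"], "tests": ["TC-01", "TC-02"]},
    "REQ-002": {"design": ["squad listing report"],   "tests": []},
}

def trace_gaps(matrix):
    """Requirements not yet covered by a design element or a test case."""
    return sorted(rid for rid, links in matrix.items()
                  if not links["design"] or not links["tests"])

print(trace_gaps(rtm))  # -> ['REQ-002']
```

Scanning the matrix for empty cells is what makes an RTM useful for managing requirements: any requirement with no downstream artifact is a gap that must be addressed before the software is considered complete.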
Significance to the Customer, End-users, Software Development Team and Other Stakeholders
Requirements Engineering provides the basic agreement between end-users and developers on what the software should do. It is a clear statement on the scope and boundaries of the system to be analyzed and studied. It gives stakeholders an opportunity to define their requirements in a way that is understandable to the development team. This can be achieved through different documents, artifacts or work products such as use case models, analysis models, features and functions lists, user scenarios, etc.

Designing and building an elegant computer program that solves the wrong problem is a waste. This is the reason why it is important to understand what the customer wants before one begins to design and build a computer-based system. Requirements Engineering builds a bridge to design and construction. It allows the software development team to examine:
• the context of the software work to be performed
• the specific needs that design and construction must address
• the priorities that guide the order in which work is to be completed
• the data, functions and behaviors that will have a profound impact on the resultant design

Requirements Engineering, like all other software engineering activities, must be adapted to the needs of the process, projects, products and the people doing the work. It is an activity that starts at inception and continues until a base model of the software can be used at the design and construction phase.

Requirements Engineering Tasks
There are seven distinct tasks to requirements engineering, namely, inception, elicitation, elaboration, negotiation, specification, validation and management. It is important to keep in mind that some of these tasks occur in parallel and all are adapted to the needs of the project. All strive to define what the customer wants, and all serve to establish a solid foundation for the design and construction of what the customer needs.
Inception
In general, most software projects begin when there is a problem to be solved, or an opportunity identified. As an example, consider a business that discovered a need, or a potential new market or service. At inception, the problem scope and its nature are defined. The software engineer asks a set of context-free questions with the intent of establishing a basic understanding of the problem, the people who want the solution, the nature of the solution, and the effectiveness of the preliminary communication and collaboration between end-users and developers.

Initiating Requirements Engineering
Since this is a preliminary investigation of the problem, a Q&A (Question and Answer) approach or interview is an appropriate technique for understanding the problem and its nature. Enumerated below are the recommended steps in initiating the requirements engineering phase.

STEP 1: Identify stakeholders. A stakeholder is anyone who benefits in a direct or indirect way from the system which is being developed. The business operations managers, product managers, marketing people, internal and external customers, end-users, and others are the common people to interview. It is important at this step to create a list of people who will contribute input as requirements are elicited. The list of users will grow as more and more people get involved in elicitation.

STEP 2: Recognize multiple viewpoints. It is important to remember that different stakeholders will have different views of the system. Each would gain different benefits when the system is a success; each would face different risks if the development fails. At this step, categorize all stakeholder information and requirements. Also, identify requirements that are inconsistent and in conflict with one another. They should be organized in such a way that stakeholders can decide on a consistent set of requirements for the system.

STEP 3: Work toward collaboration. The success of most projects relies on collaboration.
To achieve this, find areas within the requirements that are common to stakeholders. However, the challenge here is addressing inconsistencies and conflicts. Collaboration does not mean that a committee decides on the requirements of the system. In many cases, to resolve conflicts a project champion, normally a business manager or senior technologist, decides which requirements are included when the software is developed.

STEP 4: Ask the first questions. To define the scope and nature of the problem, questions are asked of the customers and stakeholders. These questions may be categorized. As an example, consider the following questions:

Stakeholder's or Customer's Motivation:
1. Who is behind the request for this work?
2. Why are they requesting such work?
3. Who are the end-users of the system?
4. What are the benefits when the system has been developed successfully?
5. Are there any other ways of providing the solution to the problem? What are the alternatives?

Customer's and Stakeholder's Perception:
1. How can one characterize a "good" output of the software?
2. What are the problems that will be addressed by the software?
3. What is the business environment in which the system will be built?
4. Are there any special performance issues or constraints that will affect the way the solution is approached?

Effectiveness of the Communication:
1. Are we asking the right people the right questions?
2. Are the answers they are providing "official"?
3. Are the questions relevant to the problem?
4. Am I asking too many questions?
5. Can anyone else provide additional information?
6. Is there anything else that I need to know?

Inception Work Product
The main output or work product of the inception task is a one- to two-page product request, which is a paragraph summary of the problem and its nature.

Elicitation
After inception, one moves onward to elicitation. Elicitation is a task that helps the customer define what is required. However, this is not an easy task.
Among the problems encountered in elicitation are the following:

1. Problems of Scope. It is important that the boundaries of the system be clearly and properly defined. It is important to avoid using too much technical detail because it may confuse rather than clarify the system's objectives.
2. Problems of Understanding. It is sometimes very difficult for the customers or users to completely define what they need. Sometimes they have a poor understanding of the capabilities and limitations of their computing environment, or they don't have a full understanding of the problem domain. They may even omit information, believing that it is obvious.
3. Problems of Volatility. It is inevitable that requirements change over time.

To help overcome these problems, software engineers must approach the requirements gathering activity in an organized and systematic manner.

Collaborative Requirements Gathering

Unlike inception, where a Q&A (Question and Answer) approach is used, elicitation makes use of a requirements elicitation format that combines the elements of problem solving, elaboration, negotiation, and specification. It requires the cooperation of a group of end-users and developers to elicit requirements. They work together to:
• identify the problem
• propose elements of the solution
• negotiate different approaches
• specify a preliminary set of solution requirements

Joint Application Development is one collaborative requirements gathering technique that is popularly used to elicit requirements. The tasks involved in elicitation may be categorized into three groups, namely, pre-joint meeting tasks, joint meeting tasks, and post-joint meeting tasks.

Pre-Joint Meeting Tasks
1. Select a facilitator.
2. Invite the members of the team, which may be the software team, customers, and other stakeholders.
3. If there is no product request, one stakeholder should write one.
4. Distribute the product request to all attendees before the meeting.
5. Set the place, time and date of the meeting.

Joint Meeting Tasks
1. The first topic that needs to be resolved is the need and justification of the new product. Everyone should agree that the product is justified.
2. Each attendee is asked to make the following:
• a list of objects that are part of the environment that surrounds the system
• a list of other objects that are produced by the system
• a list of objects that are used by the system to perform its functions
• a list of services (processes or functions) that manipulate or interact with the objects
• a list of constraints such as cost, size and business rules
• a list of performance criteria such as speed, accuracy etc.
Note that the lists are not expected to be exhaustive but are expected to reflect each person's perception of the system.
3. Each participant presents his list to the group.
4. After all participants present, a combined list is created. It eliminates redundant entries and adds new ideas that come up during the discussion, but does not delete anything.
5. The combined list from the previous task is shortened, lengthened or reworded to reflect the product or system to be developed. Then, the consensus list in each topic area (objects, services, constraints and performance) is defined.
6. Once the consensus list is defined, the team is divided into sub-teams. Each will be working to develop the mini-specifications for one or more entries on the consensus list. The mini-specification is simply an elaboration of the item in the list using words and phrases.
7. Each sub-team, then, presents its mini-specification to all attendees. Additions, deletions and further elaboration are made. In some cases, it will uncover new objects, services, constraints or performance requirements that will be added to the original lists.
8. After each mini-specification is completed, each attendee makes a list of validation criteria for the product or system and presents the list to the team. A consensus list of validation criteria is then created.
9. In some cases, issues arise that cannot be resolved during the meeting. An issue list is maintained, and these issues will be acted on later.

Post-Joint Meeting Tasks
1. Compile the complete draft specification of the items discussed in the meeting. One or more participants is assigned the task of writing a complete draft specification using all inputs from the meeting.
2. Prioritize the requirements. One can use the Quality Function Deployment or the MoSCoW Technique.
3. In the succeeding meetings with the team, a short meeting should be conducted to review and, probably, reassign priorities.

Quality Function Deployment

Quality Function Deployment is a technique that emphasizes an understanding of what is valuable to the customer. Then, deploy these values throughout the engineering process. It identifies three types of requirements:

1. Normal Requirements. These requirements directly reflect the objectives and goals stated for a product or system during meetings with the customer. It means that if the requirements are present, the customer is satisfied.
2. Expected Requirements. These requirements are implicit to the product or system and may be so fundamental that the customer does not explicitly state them. The absence of these requirements may cause significant dissatisfaction. Examples of expected requirements are ease of human or machine interaction, overall operation correctness and reliability, and ease of software installation.
3. Exciting Requirements. These requirements reflect features that go beyond the customer's expectations and prove to be very satisfying when present.

During the software engineering process, each requirement is categorized based on these three types. Then, value analysis is conducted to determine the relative priority of each requirement based on three deployments, namely, function deployment, information deployment, and task deployment. Function deployment is used to determine the value of each function that is required for the system. Information deployment identifies both data objects and events that the system must consume and produce; this is related to a function. Task deployment examines the behavior of the product or system within the context of its environment.

MoSCoW Technique

Each requirement can be evaluated against classes of priority as specified in the table below.

Classification of Priorities
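Since the priority table itself did not survive extraction, the sketch below assumes the four standard MoSCoW classes (Must have, Should have, Could have, Won't have); a minimal, hypothetical Python sketch of grouping requirements by these classes — the requirement names are illustrative, not from the case study documents:

```python
# Hypothetical sketch: grouping requirements by MoSCoW priority class.
# The class labels below are the standard MoSCoW categories; the exact
# wording in the original table may differ.
from collections import defaultdict

MOSCOW_CLASSES = ("Must have", "Should have", "Could have", "Won't have")

def group_by_priority(requirements):
    """Bucket (requirement, priority) pairs into the MoSCoW classes."""
    groups = defaultdict(list)
    for req, priority in requirements:
        if priority not in MOSCOW_CLASSES:
            raise ValueError(f"Unknown MoSCoW class: {priority}")
        groups[priority].append(req)
    return dict(groups)

reqs = [
    ("Add athlete record", "Must have"),
    ("Update athlete status", "Must have"),
    ("Export records to spreadsheet", "Could have"),
]
grouped = group_by_priority(reqs)
print(grouped["Must have"])  # ['Add athlete record', 'Update athlete status']
```

Grouping first and then reviewing each bucket with the stakeholders mirrors the short re-prioritization meetings described above.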
Elicitation Work Product

The output of the elicitation task can vary depending on the size of the system or product to be built. For most systems, the output or work products include:
• A statement of need and feasibility
• A bounded statement of scope for the system or product
• A list of customers, users, and other stakeholders who participated in requirements elicitation
• A description of the system's technical environment
• A priority list of requirements, preferably, in terms of functions, objects and domain constraints that apply to each

Elaboration

The information obtained from the team during inception and elicitation is expanded and refined during elaboration. This requirements engineering task focuses on the defining, redefining and refining of models, namely, the requirements model (system or problem domain) and the analysis model (solution domain).

The requirements model is created using methodologies that capitalize on user scenarios, which define the way the system is used. It describes how the end-users and actors interact with the system. The analysis model is derived from the requirements model, where each scenario is analyzed to get the analysis classes, i.e., business domain entities that are visible to the end-user. The attributes of each analysis class are defined, and the responsibilities that are required by each class are identified. The relationships and collaboration between classes are identified, and a variety of supplementary UML diagrams are produced. It tries to model the "WHAT" rather than the "HOW". The end-result of elaboration is an analysis model that defines the informational, functional and behavioral domain of the problem. The development of these models will be discussed in the Requirements Analysis and Model, and Requirements Specifications sections of this chapter.

Elaboration Work Product

The requirements model and the analysis model are the main work products of this task.

Negotiation

In negotiation, customers, stakeholders and the software development team reconcile conflicts. Conflicts arise when customers are asking for more than what the software development team can achieve given limited system resources. To resolve these conflicts, requirements are ranked, risks associated with each requirement are identified and analyzed, estimates on development effort and costs are made, and delivery time is set. The purpose of negotiation is to develop a project plan that meets the requirements of the user while reflecting real-world constraints such as time, people and budget. It means that the customer gets the system or product that satisfies the majority of the needs, and the software team is able to realistically work towards meeting deadlines and budgets.

The Art of Negotiation

Negotiation is a means of establishing collaboration. For a successful software development project, collaboration is a must. Below are some guidelines in negotiating with stakeholders.

1. Remember that negotiation is not competition. Everybody should compromise. At some level, everybody should feel that their concerns have been addressed, or that they have achieved something.
2. Have a strategy. Listen to what the parties want to achieve. Decide on how we are going to make everything happen.
3. Listen effectively. Try not to formulate your response or reaction while the other is speaking. You might get something that can help you negotiate later on. Listening shows that you are concerned.
4. Focus on the other party's interest. Don't take hard positions if you want to avoid conflict.
5. Don't make it personal. Focus on the problem that needs to be solved.
6. Be creative. Don't be afraid to think out of the box.
7. Be ready to commit. Once an agreement has been reached, commit to the agreement and move on to other matters.

Specification

A specification is the final artifact or work product produced by the software engineer during requirements engineering. It serves as the foundation for subsequent software engineering activities, particularly, the design and construction of the software. At some level, it shows the informational, functional and behavioral aspects of the system. It can be written down as a document, a formal mathematical model, a set of graphical models, a prototype, or any combination of these. The models serve as your specifications. The requirements are prioritized and grouped within packages that will be implemented as software increments and delivered to the customer.

Validation

The work products produced as a consequence of requirements engineering are assessed for quality during the validation step. It examines the specification to ensure that all software requirements have been stated clearly and that inconsistencies, omissions, and errors have been detected and corrected. It checks the conformance of work products to the standards established in the software project. The review team that validates the requirements consists of software engineers, customers, users, and other stakeholders. They look for errors in content or interpretation, areas where clarification is required, missing information, inconsistencies, conflicting and unrealistic requirements, and ambiguity.

Requirements Validation Checklist

As the models are built, they are examined for consistency, omission, and ambiguity. Questions as suggested by Pressman are listed below to serve as a guideline for validating the work products of the requirements engineering phase.

1. Is each requirement consistent with the overall objective for the system or product?
2. Have all requirements been specified at the proper level of abstraction? That is, do some requirements provide a level of technical detail that is not appropriate at this stage?
3. Is the requirement really necessary, or does it represent an add-on feature that may not be essential to the objective of the system?
4. Is each requirement bounded and clear?
5. Does each requirement have attribution? That is, is a source (generally, a specific individual) noted for each requirement?
6. Do any of the requirements conflict with other requirements?
7. Is each requirement achievable in the technical environment that will house the system or product?
8. Is each requirement testable, once implemented?
9. Does the requirements model properly reflect the information, function and behavior of the system to be built?
10. Has the requirements model been "partitioned" in a way that exposes progressively more detailed information about the system?
11. Have the requirements patterns been used to simplify the requirements model? Have all patterns been properly validated? Are all patterns consistent with customer requirements?

These and other questions should be asked and answered to ensure that all work products reflect the customer's needs so that they provide a solid foundation for design and construction.

Management

It is a set of activities that help the project team identify, control, and track requirements and their changes at any time as the project progresses. Requirements management starts once they are identified. Each requirement is assigned a unique identifier. Once requirements have been identified, traceability tables are developed. The Requirements Traceability Matrix (RTM) is discussed in the Requirements Traceability Matrix section of this chapter, which will help software engineers manage requirements as the development process progresses.

Requirements Analysis and Model

During elaboration, information obtained during inception and elicitation is expanded and refined to produce two important models, namely, the requirements model and the analysis model. In this section, we will be discussing the requirements model and how it is built.

The Requirements Model

Rational Rose defines the Requirements Model as illustrated below. It consists of three elements, namely, the Use Case Model, Supplementary Specifications, and Glossary.

Requirements Model

Use Case Model

It is used to describe what the system will do. It serves as a contract between the customers, end-users and system developers. For customers and end-users, it is used to validate that the system will become what they expect it to be. For the developers, it is used to ensure that what they build is what is expected. The requirements model provides a model of the system or problem domain. It shows the functionality that the system provides and which users will communicate with the system.

The Use Case Model consists of two parts, namely, the use case diagrams and use case specifications. The tool used to define the Use Case Model is the Use Case Diagram. Each use case in the model is described in detail using the use case specifications. The use case specifications are textual documents that specify properties of the use cases such as flow of events, pre-conditions, post-conditions etc.
Supplementary Specifications

It contains those requirements that don't map to a specific use case. They may be non-functional requirements, such as maintainability of the source code, usability, reliability, performance, and supportability, or system constraints that restrict our choices for constructing a solution to the problem, such as "the system should be developed on Solaris and Java". It is an important complement to the Use Case Model because with it we are able to specify complete system requirements.

Glossary

It defines a common terminology for all models. This is used by the developers to establish a common dialect with customers and end-users. There should only be one glossary per system.

Scenario Modeling

The Use Case Model is a mechanism for capturing the desired behavior of the system without specifying how the behavior is to be implemented. Scenarios are instances of the functionality that the system provides. They capture the specific interactions that occur between producers and consumers of data, and the system itself. The points of view of the actors interacting with the system are used in describing the scenarios. It is important to note that building the model is an iterative process of refinement.

Use Case Diagram of UML

As was mentioned in the previous section, the Use Case Diagram of UML is used as the modeling tool for the Use Case Model. The use case diagram consists of actors and use cases.

Use Case Diagram Basic Notation

The Use Case Model can be refined to include stereotypes on associations and generalization of actors and use cases. Stereotypes in UML are a special use of model elements that is constrained to behave in a particular way. They are shown by using a keyword in matched guillemets (<< >>) such as <<extend>> and <<include>>. Generalization or specialization follows the same concepts that were discussed in object-oriented concepts.

Use Case Diagram Expanded Notation

Developing the Use Case Model

STEP 1: Identify actors. Identify the external actors that will interact with the system. An actor may be a person, device or another system. As an example, the figure below identifies the actors for the Club Membership Maintenance of the case study.

Club Membership Maintenance Actors

Two actors were identified, namely, club staff and coach.

STEP 2: Identify use cases. Identify use cases that the system needs to perform. Remember that use cases are sequences of actions that yield an observable result to actors. As an example, the image below identifies the initial use cases.

Club Membership Maintenance Use Cases

The following use cases were identified:
• Add Athlete Record
• Edit Athlete Record
• Delete Athlete Record
• Update Athlete's Status

STEP 3: Associate use cases with actors. The identified use cases are distributed to the two actors. Optionally, number the use cases. The figure below shows the first iteration of the use case model.

First Iteration of Club Membership Use Case Model

STEP 4: Refine and re-define the model. In the second iteration, elaboration is done by introducing the Maintain Athlete Record Use Case. This use case can be refined by using the enhanced notation. The Maintain Athlete Record Use Case is extended to have the following options:
• 1.1 Add Athlete Record
• 1.2 Edit Athlete Record
• 1.3 Remove Athlete Record
Every time the Update Athlete's Record Use Case is performed, an Edit Athlete Record is also performed. Stereotypes on associations are used to include or extend this use case. Also, another use case was added to specify that certain actors can view the athlete's record (View Athlete Record Use Case). This is being managed by the club staff. The second iteration of the use case model is seen in the figure below.

Second Iteration Club Membership Use Case Model

Modeling is an iterative process. The decision when to stop iterating is a subjective one and is done by the one modeling. To help in the decision-making, consider answering the questions found in the Requirements Model Validation Checklist.

STEP 5: For each use case, write the use case specification. For each use case found in the Use Case Model, the use case specification must be defined. The use case specification is a document where the internal details of the use case are specified. It can contain any of the sections listed below. However, the first five are the recommended sections.

1. Name. This is the name of the use case. It should be the same as the name found in the Use Case Model.
2. Brief Description. It describes the role and purpose of the use case using a few lines of sentences.
3. Flow of Events. These are events that describe what the use case is doing. This is the most important part of the requirements analysis part. It describes essentially what the use case is specifically doing, NOT how the system is designed to perform. The flow of events contains the most important information derived from use case modeling work. It is also described as one flow through a use case; an instance of the functionality of the system is a scenario. The flow of events of a use case consists of a basic flow and several alternative flows. The basic flow is the common work flow that the use case follows. Normally, we have one basic flow, while the alternative flows address regular variants, odd cases and exceptional flows handling error situations. There are two ways of expressing the flow of events, namely, textual and graphical.
4. Relationships. Here, other use cases that are associated with the current use case are specified. Normally, they are the extend or include use cases.
5. Pre-conditions. It specifies the conditions that exist before the use case starts.
6. Post-conditions. It specifies the conditions that exist after the use case ends.
7. Special Requirements. It contains other requirements that cannot be specified using the diagram, which is similar to the non-functional requirements.
8. Other Diagrams. Additional diagrams to help in clarifying requirements are placed in this section, such as screen prototypes, sketches of dialogs of users with the system etc.
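The recommended sections of a use case specification can be captured as a plain data structure; a minimal Python sketch, where the field names are illustrative assumptions and not part of any UML tool's API:

```python
# Hypothetical sketch: the use case specification sections above as a
# simple record. Field names are illustrative, not a standard format.
from dataclasses import dataclass, field

@dataclass
class UseCaseSpecification:
    name: str                      # same name as in the Use Case Model
    brief_description: str
    basic_flow: list = field(default_factory=list)        # common work flow
    alternative_flows: list = field(default_factory=list) # variants, error handling
    relationships: list = field(default_factory=list)     # <<include>>/<<extend>> use cases
    pre_conditions: list = field(default_factory=list)
    post_conditions: list = field(default_factory=list)
    special_requirements: list = field(default_factory=list)

spec = UseCaseSpecification(
    name="Add Athlete Record",
    brief_description="Club staff records a new athlete in the system.",
    basic_flow=["Staff enters athlete details",
                "System validates and saves the record"],
    pre_conditions=["Club staff is logged in"],
    post_conditions=["Athlete record exists in the master file"],
)
print(spec.name)  # Add Athlete Record
```

Keeping the specification in a structured form like this makes it easy to check, for example, that every use case has at least a basic flow before review.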
The flow of events provides a description of a sequence of actions that shows what the system needs to do in order to provide the service that an actor is expecting. To graphically illustrate the flow of events, one uses the Activity Diagram of UML. It is used similar to the Flowchart, and it is used to model each scenario of the use case. It uses the following notation.

Activity Diagram Notation

The figure below is an example of an activity diagram. It illustrates how athletes are initially assigned to a squad. The club staff gets the filled-up application form from the athletes and they prepare the mock try-out area. During the try-outs, the selection committee evaluates the performance of the athletes. If the athlete plays excellently, they are immediately assigned to the Competing Squad. Otherwise, they are assigned to the Training Squad. The club staff will keep the athlete records on the appropriate files.

Initial Athlete Squad Assignment

The activity diagram can be modified as a swimlane diagram. In this diagram, activities are aligned based on the actors who have responsibility for the specified activity. The figure below is the modified activity diagram where the actors responsible for the activities are drawn above the activity symbols.

Initial Athlete Squad Assignment using Swimlane Diagram

To help software engineers in defining the flow of events, a list of guidelines is presented below.
1. To reinforce actor responsibility, start the description with "When the actor...".
2. Describe the data exchange between the actor and the use case.
3. Try not to describe the details of the user interface unless needed.
4. Answer ALL "what" questions. Test designers will use this text to identify test cases.
5. Avoid terminologies such as "For example,...", "process" and "information".
6. Describe when the use case starts and ends.

STEP 6: Refine and re-define the use case specifications. Similar to developing the Use Case Model, one can refine and re-define the use case specification. It is also done iteratively. When to stop depends on the one modeling.
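The decision in the squad-assignment activity diagram above can be sketched directly in code; a minimal Python sketch, where the athlete names and the "excellent" rating label are illustrative assumptions:

```python
# Hypothetical sketch of the squad-assignment decision shown in the
# activity diagram: excellent performers go to the Competing Squad,
# everyone else to the Training Squad.
def assign_squad(performance: str) -> str:
    """Return the squad for an athlete based on try-out performance."""
    if performance == "excellent":
        return "Competing Squad"
    return "Training Squad"

for athlete, rating in [("Ana", "excellent"), ("Ben", "average")]:
    print(athlete, "->", assign_squad(rating))
# Ana -> Competing Squad
# Ben -> Training Squad
```

Note how the single decision node in the diagram maps to a single conditional in code; test designers can derive one test case per outgoing branch.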
One can answer the questions presented in the Requirements Model Validation Checklist to determine when to stop refining and re-defining the use case specifications.

Requirements Model Validation Checklist

Like any work product being produced at any phase, validation is required. Listed below are the questions that guide software engineers in validating the requirements model. It is important to note at this time that the checklist serves as a guide. The software engineer may add or remove questions depending on the circumstances and needs of the software development project.

Use Case Model Validation Checklist
1. Can we understand the Use Case Model?
2. Can we form a clear idea of the system's over-all functionality?
3. Can we see the relationships among the functions that the system needs to perform?
4. Did we address all functional requirements?
5. Does the use case model contain inconsistent behavior?
6. Can the use case model be divided into use case packages? Are they divided appropriately?

Actor Validation Checklist
1. Did we identify all actors?
2. Are all actors associated with at least one use case?
3. Does an actor specify a role? Should we merge or split actors?
4. Do the actors have intuitive and descriptive names?
5. Can both users and customers understand the names?

Use Case Validation Checklist
1. Are all use cases associated with actors or other use cases?
2. Are use cases independent of one another?
3. Are there any use cases that exhibit similar behavior or flow of events?
4. Are use cases given unique, intuitive or explanatory names?
5. Do customers and users alike understand the names and descriptions of the use cases?

Use Case Specification Validation Checklist
1. Can we clearly see who wishes to perform a use case?
2. Is the purpose of the use case also clear?
3. Is the use case description properly and clearly defined? Can we understand it? Does it encapsulate what the use case is supposed to do?
4. Is it clear when the flow of events starts? When it ends?
5. Is it clear how the flow of events starts? How it ends?
6. Can we clearly understand the actor interactions and exchanges of information?
7. Are there any complex use cases?

Glossary Validation Checklist
1. Is the term clear or concise?
2. Are the terms used within the use case specification?
3. Are the terms used consistently within the system? Are there any synonyms?

Analysis Model Validation Checklist

Similar with the Requirements Model, a validation is required. The following guide questions can be used.

Object Model
1. Are all analysis classes identified, defined, and documented?
2. Is the number of classes reasonable?
3. Does the class have a name indicative of its role?
4. Is the class a well-defined abstraction?
5. Are all attributes and responsibilities defined for a class? Are they functionally coupled?
6. Is the class traceable to a requirement?

Behavioral Model
1. Have all scenarios been handled and modeled? (Including exceptional cases)
2. Are the interaction diagrams clear?
3. Are the relationships clear and consistent?
4. Are messages well-defined?
5. Are all diagrams traceable to the requirements?

Requirements Traceability Matrix (RTM)

The Requirements Traceability Matrix (RTM) is a tool for managing requirements that can be used not only in requirements engineering but throughout the software engineering process. It attempts to whitebox the development approach. The whitebox approach refers to the development of system requirements without considering the technical implementation of those requirements.

Similar with UML, this approach uses the concept of perspectives. There are three, namely, conceptual, specification and implementation. But in terms of requirements, we deal with the conceptual and specification. The Conceptual Perspective deals with the "concepts in the domain". As an example, concepts within the domain of the case study are athletes, coaches, squads and teams. The Specification Perspective deals with interfaces. Interfaces deal with how the concepts (system elements) interact with one another, such as promoting or demoting an athlete.

Components of Requirements Traceability Matrix

There are many variations on the components that make up the RTM. Those listed in the table below are the recommended initial elements of the RTM.

Initial RTM Components

While the above elements are the initial RTM, one may add other elements. As the software development progresses, one can customize the matrix to tie the requirements with other work products or documents. This is really where the power of the RTM is reflected. The components and the use of the RTM should be adapted to the needs of the process, project and product. As software engineers, you can use all components, remove components or add components as you see fit.

Additional RTM Components

Example of a RTM:

Example of a RTM

Requirements Metrics

Measuring requirements usually focuses on three areas: process, product and resources. The requirements work products can be evaluated by looking first at the number of requirements. As the number grows, it gives an indication of how large the software project is. Requirement size can be a good input for estimating software development effort. Also, the requirement size can be tracked; it grows as the project moves along. As design and development occur, we would have a deeper understanding of the problem and solution, which could lead to uncovering requirements that were not apparent during the requirements engineering process.

Similarly, one can measure the number of times that the requirements have changed. A large number of changes indicates some instability and uncertainty in our understanding of what the system should do or how it should behave. In such a case, iterate back to the requirements engineering phase.

Requirements can be used by the designers and testers. It can be used to determine if the requirements are ready to be turned over to them.
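Since the RTM component tables did not survive extraction, the sketch below uses assumed, illustrative column names; a minimal Python sketch of an RTM kept as plain records keyed by the unique requirement identifier, from which the size and change-count metrics discussed above fall out directly:

```python
# Hypothetical sketch of a Requirements Traceability Matrix as plain
# records. Column names are illustrative assumptions, not the book's
# recommended components (those tables were lost in extraction).
rtm = [
    {"id": "REQ-001", "description": "Add athlete record",
     "use_case": "UC-1.1", "design_element": None, "changes": 0},
    {"id": "REQ-002", "description": "Update athlete status",
     "use_case": "UC-2", "design_element": None, "changes": 3},
]

def trace(matrix, req_id):
    """Find the RTM row for a requirement identifier."""
    return next((row for row in matrix if row["id"] == req_id), None)

# Size and volatility metrics discussed above:
size = len(rtm)
volatile = [row["id"] for row in rtm if row["changes"] > 2]
print(size, volatile)  # 2 ['REQ-002']
```

The `changes` counter is one simple way to spot the instability the text warns about: requirements whose count keeps climbing are candidates for iterating back to requirements engineering.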
Before the requirements are turned over, a thorough understanding of the requirements should be done. The Requirements Profile Test is a technique employed to determine the readiness to turn over requirements to the designers or testers. For the designers, they are asked to rate each requirement on a scale of 1 to 5 based on a system as specified in the table below:

System Designer Scale Description

For testers, they are asked to rate each requirement on a scale of 1 to 5 based on a system as specified in the table below:

System Tester Scale Description

If the results of the requirement's profile come out with 1's and 2's, requirements need to be reassessed and rewritten; you need to iterate back to the requirements engineering phase. Otherwise, the requirements can be passed on to the designers and testers. Assessment is subjective. However, the scores can provide useful information that encourages you to improve the quality of the requirements before design proceeds.
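The rating-scale tables did not survive extraction, so the sketch below assumes the reading reconstructed above, namely that ratings of 1 or 2 signal requirements needing rework; a minimal, hypothetical Python sketch of flagging such requirements:

```python
# Hypothetical sketch of the Requirements Profile Test rule described
# above: requirements rated 1 or 2 (on the 1-to-5 scale) are flagged
# for reassessment before being turned over to designers/testers.
# The scale semantics are an assumption; the original tables were lost.
def flag_for_reassessment(ratings):
    """Return the requirement ids whose rating is 1 or 2."""
    return [req_id for req_id, score in ratings.items() if score <= 2]

ratings = {"REQ-001": 5, "REQ-002": 2, "REQ-003": 4, "REQ-004": 1}
to_rework = flag_for_reassessment(ratings)
print(to_rework)  # ['REQ-002', 'REQ-004']
```

In practice, both the designer and tester ratings would be collected per requirement, and a requirement flagged by either reviewer group would be sent back for rewriting.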
systematic methodology and thorough review. function and behavior. It produces a design model that translates the analysis model into a blueprint for constructing and testing the software. that consists of good design components. 4. all implicit requirements desired by the end-user and developers. The design is derived from using a method repetitively to obtain information during the requirements engineering phase. The design metrics will also be discussed. and that can be created in an evolutionary manner. data structures. i. It sets a stage where construction and testing of software is done. 2. • be readable and understandable by those who will generate the code and test the software.. the design should: • implement all explicitly stated requirements as modeled in the analysis model. The design process is an iterative process of refinement. The design should be modular that are logically partitioned into subsystems and elements. The design should have a recognizable architecture that has been created using known architectural styles or patterns. function and behavior from an implementation perspective. The above guidelines encourages good design through the application of fundamental design principles. The design of the components should have independent functional characteristics. from a higher level of abstraction to lower levels of abstraction. The design should use a notation that convey its meaning. and • provide an over-all illustration of the software from the perspective of the data. Creativity is required for the formulation of a product or system where requirements (end-users and developers) and technical aspects are joined together. interfaces and components that are necessary to implement the software. at the same time. interfaces.Unlike Requirements Engineering which creates model that focuses on the description of data. interface design. 6. 
We will also learn what elements of the RTM are modified and added to trace design work products with the requirements. and components. we will learn the design concepts and principles. 7. process or system in sufficient detail to permit its physical realization. 5. It is the last software engineering phase where models are created. One minimize the number of attributes and operations that are unnecessarily inherited. Modularity also encourages functional independence. They are named and addressable components when linked and working together satisfy a requirement. 1. a detailed description of the solution is defined. we state the solution using broad terms. Interaction Coupling is the measure of the number of message types an object sends to another object. In object-oriented approach. three types of cohesion are used. Procedural abstractions refer to the sequences of commands or instructions that have a specific limited actions. Coupling is the degree of interconnectedness between design objects as represented by the number of links an object has and by the degree of interaction it has with other objects. thus. Functional Independence is the characteristic of a module or class to address a specific function as defined by the requirements. Modularity leads to information hiding. The software is decomposed into pieces called modules. Information hiding means hiding the details (attributes and operations) of the module or class from all others that have no need for such information. they are called classes. accommodate changes easily. They are achieved by defining modules that do a single task or function and have just enough interaction with other modules. At the higher level of abstraction. Good interaction coupling is kept to a minimum to avoid possible change ripples through the interface. many level of abstractions are used. 
Design Concepts
Design concepts provide the software engineer a foundation from which design methods can be applied. They provide the necessary framework for creating the design work products right, and they help the software engineer in creating a complete design model as the design evolves.

1. Abstraction
When designing a modular system, many levels of abstraction are used. As software engineers, we define different levels of abstraction as we design the blueprint of the software. Two types of abstraction are created: data abstractions and procedural abstractions. Data abstractions refer to a named collection of data that describes the information required by the system. Procedural abstractions refer to sequences of commands or instructions that have specific, limited actions.

2. Modularity
Modularity is the characteristic of software that allows its development and maintenance to be manageable. Design is modularized so that we can easily develop a plan for software increments, accommodate changes easily, test and debug effectively, and maintain the system with little side-effects. This limits or controls the propagation of changes and errors when modifications are done to the modules or classes, and it enforces access constraints on data and procedural details. Modules and classes communicate through interfaces.

Good design uses two important criteria: coupling and cohesion.

Coupling is the degree of interconnectedness between design objects, as represented by the number of links an object has and by the degree of interaction it has with other objects. In the object-oriented approach, two types of coupling are used:
• Interaction Coupling is the measure of the number of message types an object sends to another object, and the number of parameters passed with these message types. Good interaction coupling is kept to a minimum to avoid possible change ripples through the interface.
• Inheritance Coupling is the degree to which a subclass actually needs the features (attributes and operations) it inherits from its base class. One should minimize the number of attributes and operations that are unnecessarily inherited.

Cohesion is the measure to which an element (attribute, operation, or class within a package) contributes to a single purpose. In the object-oriented approach, three types of cohesion are used:
• Operation Cohesion is the degree to which an operation focuses on a single functional requirement. Good design produces highly cohesive operations.
• Class Cohesion is the degree to which a class is focused on a single requirement.
• Specialization Cohesion addresses the semantic cohesion of inheritance. An inheritance definition should reflect true inheritance rather than the sharing of syntactic structure.

3. Refinement
Refinement is also known as the process of elaboration. Refinement helps the software engineer to uncover the details as the development progresses. Abstraction complements refinement, as together they enable a software engineer to specify the behavior and data of a class or module while suppressing low levels of detail.

4. Refactoring
Refactoring is a technique that simplifies the design of a component without changing its function and behavior. It is a process of changing the software so that the external behavior remains the same while the internal structures are improved. During refactoring, the design model is checked for redundancy, unused design elements, inefficient or unnecessary algorithms, poorly constructed or inappropriate data structures, and any other design failures. These are corrected to produce a better design.

The Design Model
The work product of the design engineering phase is the design model, which consists of the architectural design, data design, interface design and component-level design.
1. Architectural Design. This refers to the overall structure of the software. It includes the ways in which it provides conceptual integrity for a system. It represents layers, subsystems and components. It is modeled using the package diagram of UML.
2. Data Design. This refers to the design and organization of data. Entity classes that are defined in the requirements engineering phase are refined to create the logical database design. Persistent classes are developed to access data from a database server. It is modeled using the class diagram.
3. Interface Design. This refers to the design of the interaction of the system with its environment, particularly the human-interaction aspects. It includes the dialog and screen designs; report and form layouts are included. It uses the class diagram and state transition diagrams.
4. Component-level Design. This refers to the design of the internal behavior of each class. Of particular interest are the control classes; this is the most important design because the functional requirements are represented by these classes. It uses the class diagrams and component diagrams.
5. Deployment-level Design.

Software Architecture
There is no generally agreed definition for the term software architecture. It is interpreted differently depending upon the context. Some describe it in terms of class structures and the ways in which they are grouped together. One view is that software architecture is the layered structure of software components and the manner in which these components interact. The software architecture of a system is an artifact: it is the result of the software design activity, and it involves making decisions on how the software is built. It gives an intellectual view of how the system is structured and how its components work together, and defining it is important for several reasons.

The basic notation of the package diagram is shown in Figure 4.1. A package is a model element that can contain other elements; packages are represented as folders. The diagram shows groupings of classes and the dependencies among them. A dependency exists between two elements if changes to the definition of one element may cause changes to the other. The dependencies are represented as broken arrow lines with the arrowhead as a line. In the example, the Client Subsystem is dependent on the Supplier Subsystem. If a package is represented as a subsystem, the subsystem stereotype is indicated (<<subsystem>>). The packages are realized by interfaces, which allows components to be developed independently as long as the interface does not change.
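Refactoring, as described above, can be shown with a tiny before/after sketch. The pricing example below is hypothetical; the external behavior of the two methods is identical while the internal structure of the second is cleaner:

```java
// Hypothetical before/after sketch of refactoring: external behavior stays the
// same while the internal structure improves.
public class RefactoringDemo {
    // Before: the discount rule is buried inside the computation as a magic number.
    public static double priceBefore(int quantity, double unitPrice) {
        double total = quantity * unitPrice;
        if (quantity > 100) {
            total = total - total * 0.10;
        }
        return total;
    }

    // After: the rule is extracted into a named, single-purpose method.
    public static double priceAfter(int quantity, double unitPrice) {
        return applyBulkDiscount(quantity, quantity * unitPrice);
    }

    private static double applyBulkDiscount(int quantity, double total) {
        final double BULK_DISCOUNT = 0.10;   // the design failure (a magic number) is removed
        return quantity > 100 ? total - total * BULK_DISCOUNT : total;
    }
}
```

A refactoring is only valid if, for every input, both versions produce the same result; that equivalence is exactly what a regression test checks.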
Deployment-level design refers to the design of how the software will be deployed for operational use. In this design, the software functionality, subsystems and components are distributed to the physical environment that will support the software. The deployment diagram is used to represent this model.

Others use the term software architecture to describe the overall organization of a system into subsystems. Buschmann et al. define it as follows:
• A software architecture is a description of the sub-systems and components of a software system and the relationships between them. Subsystems and components are typically specified in different views to show the relevant functional and non-functional properties of a software system.
It is this definition that is followed by this section.

Defining the architecture is important for several reasons. First, the representation of the software architecture enables communication between stakeholders (end-users and developers). Second, it enables design decisions that will have a significant effect on all software engineering work products and on the software's interoperability. Lastly, it normally controls the iterative and incremental development of the software.

Describing the Package Diagram
The Package Diagram shows the breakdown of larger systems into logical groupings of smaller subsystems. It is a mechanism used to organize design elements and allows modularization of the software components. It can be used to partition the system into parts which can be independently ordered, configured and delivered. Notice that the relationship between packages is illustrated with a broken line with a transparent blocked arrowhead.

Subsystems and Interfaces
A subsystem is a combination of a package (it can contain other model elements) and a class (it has behavior that interacts with other model elements). It typically groups together elements of the system that share common properties. It encapsulates an understandable set of responsibilities in order to ensure that it has integrity and can be maintained. As an example, one subsystem may contain the human-computer interfaces that deal with how people interact with the system; another subsystem may deal with data management. Each subsystem should have a clear boundary and fully defined interfaces with other subsystems, and each subsystem provides services for other subsystems.

The advantages of defining subsystems are as follows:
• It allows for the development of smaller units.
• It maximizes the reusability of components.
• It allows developers to handle complexity.
• It supports and eases maintainability.
• It supports portability. A subsystem can be deployed across a set of distributed computational nodes and changed without breaking other parts of the system.
The implementation of a subsystem can change without drastically changing other subsystems as long as the interface definition does not change. This is the secret of having plug-and-play components. It also provides restricted security control over key resources.

Interfaces define a set of operations which are realized by a subsystem. The interface can be canonical, as represented by the circle (as depicted by the Client Subsystem Interface), or it can be a class definition (as depicted by the Supplier Subsystem Interface, with the stereotype <<interface>>). The specification of the interface defines the precise nature of the subsystem's interaction with the rest of the system, but it does not describe the subsystem's internal structure. It allows the separation of the declaration of behavior from the realization of that behavior, and it serves as a contract to help in the independent development of the components by the development team while ensuring that the components can work together. In practice, an interface is realized by one or more subsystems.

There are two styles of communication subsystems use; they are known as client-server and peer-to-peer communication. In a client-server communication, the client subsystem requests services from the server subsystem, not vice versa, so the client subsystem needs to know the interface of the server subsystem. The communication is only in one direction.
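A subsystem interface of the kind described here can be sketched in Java. The AthleteFinder name merely echoes the chapter's Athlete example; the code is an illustrative assumption, not the case study's actual design:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a subsystem boundary expressed as a Java interface.
public class SubsystemDemo {
    // The contract: clients see only these operations, never the internal structure.
    public interface AthleteFinder {
        List<String> findByName(String name);
    }

    // One realization of the interface; a database-backed implementation could
    // replace it without touching any client code.
    public static class InMemoryAthleteFinder implements AthleteFinder {
        private final List<String> athletes = List.of("Ana Cruz", "Ben Cruz", "Carla Diaz");

        @Override
        public List<String> findByName(String name) {
            List<String> matches = new ArrayList<>();
            for (String athlete : athletes) {
                if (athlete.contains(name)) {
                    matches.add(athlete);
                }
            }
            return matches;
        }
    }

    public static void main(String[] args) {
        AthleteFinder finder = new InMemoryAthleteFinder();  // client depends on the interface only
        System.out.println(finder.findByName("Cruz"));
    }
}
```

This is the plug-and-play property in miniature: as long as `AthleteFinder` is unchanged, its realizations can be swapped freely.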
In a peer-to-peer communication, each subsystem knows the interface of the other subsystem. The communication is two-way, since either peer subsystem may request services from the other. In this case, coupling is tighter. In general, client-server communication is simpler to implement and maintain, since it is less tightly coupled than peer-to-peer communication.

There are two approaches in dividing software into subsystems; they are known as layering and partitioning. Layering focuses on subsystems as represented by different levels of abstraction or layers of services, while partitioning focuses on the different aspects of the functionality of the system as a whole. In general, both approaches are used, such that some subsystems are defined in layers and others as partitions.

Layered architectures are commonly used for the high-level structure of a system. The general structure is depicted below. Each layer of the architecture represents one or more subsystems, which may be differentiated from one another by differing levels of abstraction or by a different focus of their functionality. The top layer requests services from the layer below it, and interfaces are defined between the layers. In a closed layered architecture, layers request services only from the layer directly below them; they cannot skip layers. A closed layered architecture minimizes dependencies between layers and reduces the impact of a change to the interface of any one layer. In an open layered architecture, subsystems may request services from any layer below them. Open layered architectures allow for the development of more compact code, since the services of all lower-level layers can be accessed directly without extra program code to pass messages through each intervening layer. However, this breaks the encapsulation of the layers, increases the dependencies between layers, and increases the difficulty caused when a layer needs to be modified.

Some layers within a layered architecture may have to be decomposed because of their complexity; partitioning is required to define these decomposed components. Partitioning is particularly needed when systems are distributed.

A Sample of Layered Architecture
Layered architectures are used widely in practice. As an example, a four layer architecture consists of the following:
1. The Presentation Layer. This layer is responsible for presenting data to users, devices or other systems. In the example, they are the Athlete HCI Maintenance Subsystem and the Athlete HCI Find Subsystem.
2. The Application Layer. This layer is responsible for executing applications that are representing business logic. In the example, they are the Athlete Maintenance Subsystem and the Athlete Find Subsystem.
3. The Domain Layer. This layer is responsible for services or objects that are shared by different applications. In the example, it is the Athlete Domain.
4. The Database Layer. This layer is responsible for the storage and retrieval of information from a repository. In the example, it is the Athlete Database.

This section also provides an overview of the J2EE Platform. The Java 2 Enterprise Edition (J2EE) adopts a multi-tiered approach, and an associated patterns catalog has been developed. It is designed to provide client-side and server-side support for developing distributed and multi-layered applications. Applications are configured with:
• a Client-tier, which provides the user interface and supports one or more client types, both outside and inside a corporate firewall; and
• a Back-end, which provides the enterprise information systems for data management; it is supported through the Enterprise Information Systems (EIS), which have standard APIs.
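The closed-layer call flow described above can be sketched in plain Java. The three classes below are illustrative stand-ins for the presentation, domain and database layers, loosely echoing the Athlete example:

```java
// Minimal sketch of closed layering: each layer talks only to the layer
// directly below it. All names are illustrative.
public class LayeredDemo {
    public static class DatabaseLayer {                  // bottom: storage and retrieval
        public String fetch(String id) {
            return "A42".equals(id) ? "Athlete 42" : null;
        }
    }

    public static class DomainLayer {                    // middle: shared business objects
        private final DatabaseLayer database = new DatabaseLayer();

        public String describe(String id) {
            String record = database.fetch(id);          // only calls one layer down
            return record == null ? "not found" : record.toUpperCase();
        }
    }

    public static class PresentationLayer {              // top: presents data to users
        private final DomainLayer domain = new DomainLayer();

        public String render(String id) {
            return "[" + domain.describe(id) + "]";      // never skips to the database
        }
    }

    public static void main(String[] args) {
        System.out.println(new PresentationLayer().render("A42"));
    }
}
```

Because the presentation layer never names `DatabaseLayer`, the storage mechanism can change without any ripple above the domain layer.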
Between these tiers, Middle-tier modules provide client services through web containers in the Web-tier and business logic component services through the Enterprise JavaBeans (EJB) containers in the EJB-tier. The J2EE platform thus represents a standard for implementing and deploying enterprise applications.

Developing the Architectural Design
In constructing the software architecture, the design elements are grouped together in packages and subsystems. The relationships among them are illustrated by defining the package dependencies.

STEP 1: If the analysis model has not been validated yet, validate the analysis model. One can use the Analysis Model Validation Checklist. We must ensure that attributes and responsibilities are distributed to the classes, and that each class defines a single logical abstraction. If there are problems with the analysis model, iterate back to the requirements engineering phase.

STEP 2: Package related analysis classes together. As was mentioned previously, packages are a mechanism that allows us to group together model elements. Packaging decisions are based on packaging criteria on a number of different factors, which include configuration units, allocation of resources, user type groupings, and representation of the existing products and services the system uses. The following provides guidelines on how to group together classes.

Packaging Classes Guidelines
1. Packaging Functionally Related Classes
• Two classes are functionally related if they have a relationship with one another.
• Two classes are functionally related if they interact with the same actor, or are affected by the changes made by the same actor.
• If a change in one class affects a change to another class, they are functionally related.
• If a class is removed from the package and this has a great impact on a group of classes, the class is functionally related to the impacted classes.
• A class is functionally related to the class that creates instances of it.
• Two classes are functionally related if they have a large number of messages that they can send to one another.
• Two classes that are related to different actors should not be placed in the same package.
• Optional and mandatory classes should not be placed in the same package.

2. Packaging Boundary Classes
• If the system interfaces are likely to be changed or replaced, the boundary classes should be separated from the rest of the design.
• If no major interface changes are planned, the boundary classes should be placed together with the entity and control classes with which they are functionally related.
• If a boundary class is related to an optional service, group it in a separate subsystem with the classes that collaborate to provide the said service. In this way, when the subsystem is mapped onto an optional component interface, it can provide that service using the boundary class.
• Mandatory boundary classes that are not functionally related to any entity or control classes are grouped together separately.

After making a decision on how to group the analysis classes, the package dependencies are defined. Package Dependencies are also known as visibility; visibility defines what is and is not accessible within the package. The table below shows the visibility symbols that are placed beside the name of the class within the package.

Package Coupling defines how dependencies are defined between packages. It should adhere to the following rules:
1. Packages should not be cross-coupled.
2. Packages in lower layers should not be dependent on packages in upper layers. In general, packages should not skip layers; however, exceptions can be made and should be documented. Layers will be discussed in one of the succeeding steps.
3. Packages should not be dependent on subsystems. They should be dependent on other packages or on subsystem interfaces.

STEP 3: Identify Design Classes and Subsystems. The analysis classes are analyzed to determine if they can be design classes or subsystems. It is possible that analysis classes are expanded, collapsed, combined or removed from the design. At this point, decisions have to be made on the following:
• Which analysis classes are really classes? Which are not?
• Which analysis classes are subsystems?
• Which components are existing? Which components need to be designed and implemented?

The following serves as a guideline in identifying design classes:
1. An analysis class is a design class if it is a simple class or it represents a single logical abstraction.
2. If the analysis class is complex, it can be refined to become:
• a part of another class
• an aggregate class
• a group of classes that inherits from the same class
• a package
• a subsystem
• an association or relationship between design elements

The following serves as a guideline in identifying subsystems:
1. Object Collaboration. If the objects of the classes collaborate only with themselves, and the collaboration produces an observable result, encapsulate the classes within a subsystem.
2. Optionality. If the associations of the classes are optional, or are features which may be removed, upgraded or replaced with alternatives, encapsulate the classes within a subsystem.

Later, the design elements (design classes and subsystems) will need to be allocated to specific layers in the architecture.
3. User Interface. If a boundary class is used to display entity class information, it must be decided how to group it. Separating the boundary and related entity classes into different subsystems is called a horizontal subsystem, while grouping them together is called a vertical subsystem. The decision to go for a horizontal or vertical subsystem depends on the coupling of the user interface and entity classes.
4. Actor. If two different actors use the same functionality provided by a class, model them as different subsystems, since each actor may independently change requirements.
5. Class coupling and cohesion. Organize highly coupled classes into subsystems; separate along the lines of weak coupling.
6. Substitution. Represent different service levels for a particular capability as separate subsystems, each realizing the same interface.
7. Distribution. If a particular functionality must reside on a particular node, ensure that the subsystem functionality maps onto a single node.
8. Volatility. Encapsulate into one subsystem all classes that may change over a period of time.

Possible subsystems include analysis classes that provide complex services or utilities, and existing products or external systems in the design such as communication software, database access support, types and data structures, common utilities and application-specific products.

STEP 4: Define the interface of the subsystems. Interfaces are groups of externally visible (public) operations. In UML, an interface contains no internal structure, no attributes, no associations and only abstract operations; it is equivalent to an abstract class that has no attributes, no associations, and whose operation implementation details are not defined. It allows the separation of the declaration of behavior from its implementation. It serves as a contract to help in the independent development of the components by the development team, and it ensures that the components can work together.

STEP 5: Layer subsystems and classes. Design elements (design classes and subsystems) need to be allocated to specific layers in the architecture. In general, most boundary classes tend to appear at the top layer, control classes tend to appear in the middle, and entity classes appear below. This type of layering is known as the three layer architecture. There are many architecture styles and patterns that are more complex and comprehensive than the simple three layer architecture, and when the classes within the layers are elaborated, chances are that additional layers and packages will be used, defined and refined. For simplicity purposes, the case study uses the three layer architecture.

Defining the layers of the software application is necessary to achieve an organized means of designing and developing the software. The following serves as a guideline in defining the layers of the architecture:
1. Consider visibility. Dependencies among classes and subsystems should occur only within the current layer and the layer directly below it. If a dependency skips layers, it should be justified and documented.
2. Consider volatility.
Normally, upper layers are affected by changes in requirements, while lower layers are affected by changes in the environment and technology.
3. Consider the number of layers. Small systems should have 3 – 4 layers, while larger systems should have 5 – 7 layers.

Software Architecture Validation Checklist
Like any other work product, the software architecture should be validated. Use the following questions as the checklist.

Layers
1. For smaller systems, are there more than four layers?
2. For larger systems, are there more than seven layers?

Subsystems and Interfaces
1. Is the subsystem partitioning done in a logically consistent way across the architecture?
2. Does the name of the interface depict the role of the subsystem within the entire system?
3. Is the interface description concise and clear?
4. Are all operations that the subsystem needs to perform identified? Are there any missing operations?

Packages
1. Are the names of the packages descriptive?
2. Does the package description match the responsibilities?

Design Patterns
A design pattern describes a proven solution to a problem that keeps recurring. It leverages the knowledge and insights of other developers. Design patterns are reusable solutions to common problems; they address individual problems but can be combined in different ways to achieve an integrated solution for an entire system. A pattern is a description of the way that a problem can be solved, but it is not, by itself, a solution. It cannot be directly implemented; a successful implementation is an example of a design pattern.

Design patterns are not frameworks. Frameworks are partially completed software systems that may be targeted at a specific type of application; an application system can be customized to an organization's needs by completing the unfinished elements and adding application-specific elements. A framework is essentially a reusable mini-architecture that provides structure and behavior common to all applications of this type. Design patterns, on the other hand, are more abstract and general than frameworks; a pattern is more primitive than a framework. A framework can use several patterns, but a pattern cannot incorporate a framework.

Patterns may be documented using one of several alternative templates. The pattern template determines the style and structure of the pattern description, and these vary in the emphasis they place on different aspects of the pattern. In general, a pattern description includes the following elements:
1. Name. The name of the pattern should be meaningful and reflect the knowledge embodied by the pattern. This may be a single word or a short phrase.
2. Context. The context of the pattern represents the circumstances or preconditions under which it can occur. It should be detailed enough to allow the applicability of the pattern to be determined.
3. Problem. It should provide a description of the problem that the pattern addresses. It should identify and describe the objectives to be achieved within a specific context and the constraining forces.
4. Solution. It is a description of the static and dynamic relationships among the components of the pattern. The structure, the participants and their collaborations are all described. A solution should resolve all the forces in the given context; a solution that does not resolve all the forces fails.

The use of design patterns requires careful analysis of the problem that is to be addressed and the context in which it occurs. It is important to note that a pattern is not a prescriptive solution to the problem; it should be viewed as a guide on how to find a suitable solution. When a designer is contemplating the use of design patterns, several issues should be considered. The following questions provide a guideline for resolving these issues:
1. Is there a pattern that addresses a similar problem?
2. Does the pattern trigger an alternative solution that may be more acceptable?
3. Is there another simpler solution? Patterns should not be used just for the sake of it.
4. Is the context of the pattern consistent with that of the problem?
5. Are the consequences of using the pattern acceptable?
6. Are there constraints imposed by the software environment that would conflict with the use of the pattern?

How do we use a pattern? The following steps provide a guideline in using a selected design pattern:
1. To get a complete overview, read the pattern.
2. Study the structure, participants and collaborations of the pattern in detail.
3. Examine the sample code to see the pattern in use.
4. Name the pattern participants (i.e., classes) in terms that are meaningful to the application.
5. Define the classes.
6. Choose application-specific names for the operations.
7. Code or implement the operations that perform the responsibilities and collaborations in the pattern.
All members of the development team should receive proper training on the patterns in use.

Composite View
Context
The Composite View Pattern makes the development of the view more manageable through the creation of a template to handle common page elements for a view. For a web application, a page contains a combination of dynamic content and static elements such as a header, footer, logo, background, and so forth. The template captures the common features. This scenario occurs for a page that shows several kinds of information at the same time.

Problem
Difficulty of modifying and managing the layout of multiple views, due to duplication of code. Pages are built by formatting code directly on each view.

Solution
Use this pattern when a view is composed of multiple atomic sub-views. This solution provides for the creation of a composite view through the inclusion and substitution of modular dynamic components. Each component of the template may be included into the whole, and the layout of the page may be managed independently of the content. It promotes reusability of atomic portions of the view by encouraging modular design. It is used to generate pages containing display components that may be combined in a variety of ways.
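A rough plain-Java approximation of the Composite View idea follows; in a real J2EE application the same composition would typically be done with JSP includes rather than `Supplier` objects, so treat this only as a sketch:

```java
import java.util.function.Supplier;

// Composite View sketch: a template combines modular sub-views (header, body,
// footer), so the layout can change independently of the content.
public class CompositeViewDemo {
    // The template fixes the layout; each Supplier is an atomic, replaceable sub-view.
    public static String render(Supplier<String> header, Supplier<String> body, Supplier<String> footer) {
        return header.get() + "\n" + body.get() + "\n" + footer.get();
    }

    public static void main(String[] args) {
        String page = render(
                () -> "== Site Header ==",      // static element shared by every page
                () -> "Today's news feed...",   // dynamic content plugged into the template
                () -> "== Footer ==");
        System.out.println(page);
    }
}
```

Changing the layout means editing `render` once, not every page; changing content means swapping a single sub-view.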
Such pages are found on most websites' home pages, where you might find news feeds, weather information, stock quotes, etc. A benefit of using the Composite View pattern is that an interface designer can prototype the layout of a page, plugging in static content on each component; as the development of the project progresses, the actual content is substituted for these placeholders.

Front Controller
Context
The Front Controller Pattern provides a centralized controller for managing requests. It receives all incoming client requests, forwards each request to an appropriate request handler, and presents an appropriate response to the client.

Problem
The system needs a centralized access point for presentation-tier request handling to support the integration of system data retrieval, view management, and navigation. When someone accesses the view directly without going through a centralized request handler, problems may occur, such as:
• each view may provide its own system services, which may result in duplication of code; and
• navigation of the view is the responsibility of the view, which may result in commingled view content and view navigation.
Additionally, distributed control is more difficult to maintain, since changes will often have to be made in different places in the software.

Solution
A controller is used as the initial point of contact for handling a request. It centralizes decision-making controls. It is used to manage the handling of services such as invoking security services for authentication and authorization, delegating business processing, managing the choice of an appropriate view, handling errors, and managing the selection of content-creation strategies.

Data Access Object Design Pattern
Context
The Data Access Object (DAO) Design Pattern separates a resource's client interface from its data access mechanisms. It adapts a specific data resource's access API to a generic client interface. It allows the data access mechanism to change independently of the code that uses the data.

Problem
Data will be coming from different persistent storage mechanisms, such as relational databases, mainframe or legacy systems, B2B external services, LDAP, etc. The access mechanism varies based on the type of storage.

Solution
Use a Data Access Object to abstract and encapsulate all access to the data source. The DAO implements the access mechanism required to work with the data source. When the storage type changes, we only need to change the code that specifies the access mechanism. This supports maintainability of code by separating the access mechanism from the view and processing.
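The DAO separation just described can be sketched with an in-memory store standing in for a real database; all names below are hypothetical:

```java
import java.util.HashMap;
import java.util.Map;

// DAO sketch: the client programs against a generic interface, while the
// storage-specific mechanism hides behind it.
public class DaoDemo {
    public interface AthleteDao {                // generic client interface
        String findName(int id);
        void save(int id, String name);
    }

    // One concrete access mechanism; a JDBC- or LDAP-backed class could
    // replace it with no change to client code.
    public static class InMemoryAthleteDao implements AthleteDao {
        private final Map<Integer, String> store = new HashMap<>();

        @Override
        public String findName(int id) { return store.get(id); }

        @Override
        public void save(int id, String name) { store.put(id, name); }
    }

    // Client code: it never names the concrete storage mechanism.
    public static String lookup(AthleteDao dao, int id) {
        String name = dao.findName(id);
        return name == null ? "unknown" : name;
    }

    public static void main(String[] args) {
        AthleteDao dao = new InMemoryAthleteDao();
        dao.save(7, "Ana Cruz");
        System.out.println(lookup(dao, 7));
    }
}
```

If the storage type changes, only a new `AthleteDao` implementation is written; `lookup` and every other client stay untouched.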
If the underlying storage is not subject to change from one implementation to another. 1. It divides functionality among objects involved in maintaining and presenting data to minimize the degree of coupling between the objects. It serves as a software approximation of real-world process. where the view is responsible for calling the model when it needs to retrieve the most current data. it may require an HTML from for web customers. external service such as B2B.The data source can be a relational database system. For Web application. In a stand-alone GUI client. namely. It renders the contents of a model. a WML from for wireless customers. where the view registers itself with the model for change notifications or a pull model. It represents enterprise data and the business rules that govern access to and updates to this data. The actions performed on the model can be activating device. 3. View. It divides the application into three. model. • The same enterprise data needs to be updated through different interactions. the Factory Method pattern can be used. Controller. • The support of multiple types of views and interactions should not impact the components that provide the core functionality of the enterprise application. 2. The forces behind this pattern are as follows: • The same enterprise data needs to be accessed by different views. It can be achieved through a push model. a Java Swing Interface for administrators. For one application. a repository similar to LDAP etc. Data Access Object Strategy The DAO pattern can be made highly flexible by adopting the Abstract Factory Pattern and Factory Method Pattern. Model-View-Controller Design Pattern Problem Enterprise applications need to support multiple types of users with multiple types of interfaces. It accesses enterprise data through the model and specifies how that data is presented to the actors. 57 . Model. it can be clicks on buttons or menu selections. and an XMLbased Web services for suppliers. 
The Model-View-Controller (MVC) is a widely used design pattern for interactive applications. The model serves as a software approximation of a real-world process, so simple real-world modeling techniques apply when defining it. The view is responsible for maintaining the consistency of its presentation when the underlying model changes. The interactions that the controller translates can take several forms: in a stand-alone GUI client, they can be clicks on buttons or menu selections; for a Web application, they appear as GET and POST HTTP requests.

The strategies by which MVC can be implemented are as follows:
• For Web-based clients such as browsers, use Java Server Pages (JSP) to render the view, a Servlet as the controller, and Enterprise JavaBeans (EJB) components as the model.
• For a centralized controller, instead of having multiple servlets as controllers, a main servlet is used to make control more manageable. The Front Controller pattern can be a useful pattern for this strategy.

Data Design
Data design, also known as data architecting, is a software engineering task that creates a model of the data in a more implementation-specific representation. This section discusses the concept of persistence and teaches how to model persistent classes. Persistence means to make an element exist even after the application that created it terminates. For classes that need to be persistent, we need to identify the following:
• Granularity. It is the size of the persistent object.
• Volume. It is the number of objects to keep persistent.
• Duration. It defines how long to keep the object persistent.
• Access Mechanism. It defines whether the object is more or less constant, i.e., whether we allow modifications or updates.
• Reliability. It defines how the object will survive if a crash occurs.
To access the data in a database system, our programs should be able to connect to the database server. In terms of our software architecture, another layer, called the persistence layer, is created on top of the database layer. There are a lot of design patterns that can be used to model persistence.
Two persistence design patterns are discussed in the succeeding sections. We will not discuss how databases are analyzed, designed, and implemented; those techniques fall under the field of database analysis and design, which is discussed in a Database Course. For this course, we assume that the database was implemented using a relational database system.

Interface Design
The Interface Design is concerned with designing elements that facilitate how software communicates with humans, devices, and other systems that interoperate with it. In this section, we will be concentrating on design elements that interact with people, particularly pre-printed forms, reports, and screens. Designing good forms, reports, and screens is critical in determining how acceptable the system will be to end-users. It will ultimately establish the success of the system.

When do we use forms?
• Forms are used for turnaround documents.
• Forms are used if they are legally important documents.
• Forms are used if personnel need to do the transaction but do not have access to workstations.

When do we use reports?
• Reports are used for audit trails and controls.
• Reports are used for compliance with external agencies.
• Reports are used if the output is too voluminous for on-line browsing.

When do we use screens?
• Screens are used to query a single record from a database.
• Screens are used for intermediate steps in long interactive processes.
• Screens are used for low-volume output.

Report Design
Reports can be placed on ordinary paper, continuous forms, screen-based displays, or microfilm or microfiche.

Report Design Considerations
1. Media
• Where will the report be produced: CRT screen, bond paper, pre-printed forms, ordinary continuous forms, or microfilm/microfiche?
2. Number of Copies and Volume
• There should be a balance between the number of copies generated and the number of pages.
• How many copies of the report will be generated?
• On the average, how many pages will be generated?
• Can the user do without the report?
• How often does the user file a copy? Is it really needed?
3. Report Generation
• When do users need the report?
• When is the cut-off period for entering or processing data before reports are generated?
4. Report Frequency
• How often do the users need the report?
5. Report Figures
• Does the user require the reports to generate tables, graphs, charts, etc.?
As software engineers, we would want to decrease the emphasis on reports and forms and to increase the emphasis on screens.

The following gives the steps in designing reports.

Report Layout Guidelines
1. Adhere to the standard printout data zones of the report layout. The data zones are the headers, footers, detail lines, and summary.
2. Consider the following tips in designing report headers.
• The report heading should contain the title of the report, preparation date, club name, and page number.
• Always include the report date and page numbers.
• Clearly describe the report and its contents.
• Use logos only on externally distributed documents.
• Remove the corporate name from internal reports for security purposes.
3. Consider the following tips in designing the detail line.
• One line should represent one record of information. Fit details in one line; if more than one line is needed, alphanumeric fields are grouped on the first line and numeric fields are grouped on the next line.
• Important fields are placed on the left side.
• Group related items, such as Current Sales with Year-to-date Sales, or Account Number with Name and Address.
• Column headers identify the data items listed. Align column headers with the data item's length.
• The body of the report should have descriptive data on the left side and numerical data on the right side.
• Avoid printing repetitive data.
• Dates use the MM/DD/YY format.
• Codes are always explained when used.
• For numeric representations such as money, it is appropriate to include punctuation marks and signs.
4. Consider the following tips in designing the summary page.
• Summary pages can be printed at the beginning of the report or at the end; select only one.
• It is a good idea to group the number of details per page by monetary values, such as by 10's, 20's, and 50's.
• If trends are part of the report, represent them in graphic form such as charts.
• Place page totals, grand totals, and code-value tables in this area.
5. Consider the following tips in designing the report footer.
• It is a good place for an indicator that nothing follows.
• The bottom of each page provides instructions or an explanation of each code, while the bottom of the last page contains grand totals and an indication that no more pages follow.
• The first five lines and the last five lines are reserved for the headers and footers. Do not use this space.

Developing the Report Layouts
STEP 1: Define the Report Layout Standards. The standards are used as a guide for the implementation.
STEP 2: Prepare Report Layouts.

Forms Design
Forms are usually used for input when a workstation is not available. A form is sometimes used as a turnaround document.
Forms Layout Guidelines
1. The answer space should be enough for the user to comfortably enter all necessary information. For handwritten answer spaces, 3 lines per inch is a good measurement; if the form will be filled out with a typewriter, 6 or 8 lines per inch with 5 to 8 characters per inch is a good measurement.
2. Instructions should appear on the form, except when the same person fills it out over and over again. They should be brief. General instructions are placed on top. Specific instructions should be placed just before the corresponding items.
3. Use familiar words. Avoid using "not", "except", and "unless". Use positive, active, and short sentences.
4. Use upper-case letters to call attention to certain statements. Lower-case print is easier to read than upper-case.
5. Use the following guidelines for the typefaces of the fonts:
• Gothic: simple, squared off, no serifs; easy to read even when compressed; capital letters are desirable for headings.
• Italic: has serifs and a distinct slant; hard to read in large amounts or in long passages; good for printing out a word or phrase.
• Roman: has serifs but does not slant; best for large quantities; good for instructions.
6. Important data are positioned at the top of the document, reading left to right.

Developing the Form Layouts
STEP 1: Define the standards to be used for designing forms. An example of a standard for defining forms is listed below.
• Paper size should not be bigger than 8 1/2” x 11”.
• Color schemes: white copies go to the applicant; yellow copies go to the club staff; pink copies go to the coach.
• The heading should include the page number and title.
• Dates should use the DD/MM/YY format.
• Logos are used only for externally distributed documents.
• Ruling should follow 3 lines per inch.
• Binding should use 3-hole paper.
• Important data should be positioned on the left side of the document.
STEP 2: Prepare Form Samples. Using the standard format defined in STEP 1, design the layout of the forms. Redesign if necessary.

Screen and Dialog Design
The boundary classes are refined to define the screens that the users will be using and the interaction (dialog) of the screens with other components of the software. In general, two metaphors are used in designing user interfaces.
1. The Dialog Metaphor is an interaction between the user and the system through the use of menus, function keys, or the entering of a command through a command line. Such a metaphor is widely used in the structured approach of developing user interfaces that are shown on character-based or text-based terminals.
2. The Direct Manipulation Metaphor is an interaction between the user and the system using graphical user interfaces (GUIs). It gives the user the impression that they are manipulating objects on the screen. Such interfaces are event-driven, i.e., when a user, let us say, clicks a button (an event), the system responds. This metaphor is used in object-oriented development with visual programming integrated environments.

Dialog and Screen Design Guidelines
1. Consistency helps users learn the application faster, since functionality and appearance are the same across the different parts of the application. This applies to commands, the format of the data as it is presented to the user (such as date formats), the layout of the screens, etc. When presenting data items to users, always remember that:
• They should have a screen label or caption. As an example, you may have a data item that represents a student identification number, and you may decide to use 'Student ID' as its caption. For every screen that displays this item, use the value 'Student ID' as the label; do not change it to 'Student Number' or 'Student No.'.
• They should be displayed in the same way, such as in the case of presenting dates. If the format of the dates follows the 'MM-DD-YYYY' format, then all dates should be displayed in this format. Do not change it.
• As much as possible, they should be located in the same place on all screens.
2. Standardize the screen layout. This would involve defining style guides that should be applied to every type of screen element. Style guides support consistency of the interface. Several software vendors provide style guides that you may use. Common style guides are listed below:
• Java Look and Feel Design Guidelines (sun.com)
• The Windows Interface Guidelines for Software Design
• Macintosh Human Interface Guidelines (apple.com)
3. Stick to the rule: ONE IDEA PER SCREEN. If the main idea changes, another screen must appear.
4. Provide meaningful error messages and help screens. AVOID PROFANITY!
5. Wizard-type dialog design is always a good approach; it would be wise to always inform the user of the next step.
6. As much as possible, keep the user informed of what is going on. For controllers that require a longer time to process, use "XX% completed" messages or line indicators.

A prototype is a model that looks, and to some extent behaves, like the finished product, but is lacking in certain features. It is like an architect's scale model of a new building. There are several types of prototypes.
1. Horizontal prototypes provide a model of the user interface.
2. Vertical prototypes take one functional subsystem of the whole system and develop it through each layer: user interface, some logical functionality, and database.
3. Throwaway prototypes are prototypes that are discarded later, after they have served their purpose. This type of prototype is done during the requirements engineering phase. As much as possible, we avoid creating such prototypes.
The use of a Visual Programming Environment can support the development of prototypes; NetBeans and Java Studio Enterprise 8 provide a means to visually create our prototypes.

Describing the State Chart Diagram
The State Chart Diagram shows the possible states that an object can have. It is used to model the internal behavior of a screen; basically, it models the behavior of the screen when a user does something to the screen elements, such as clicking the OK button. It shows how the object's state changes as a result of events that are handled by the object, and it also identifies the events that cause the state changes.
Developing the Screen and Dialog Design
STEP 1: Prototype the user interface. The prototype deals only with the presentation or view layer of the software architecture; basically, it shows the screen sequencing when the user interacts with the software. No functionality is emulated.
STEP 2: For each screen, design the screen class. This step requires us to model the screen as a class.
STEP 3: For each screen class, model its internal behavior. The State Chart Diagram is used to model the internal behavior of the screen.
STEP 4: For each screen class, model the behavior of the screen with other classes. The behavior of the screen class relative to the other classes is known as the dialog design. The collaboration diagrams and sequence diagrams are used in this step.
Similar to other diagrams in UML, there is an extended or enhanced notation that can be used when defining the state of a class. The state is represented as a rounded rectangular object with the state name inside the object. The initial state, represented by a solid filled circle, signifies the entry point of the transition; an object cannot remain in its initial state but needs to move to a named state. The final state, represented by a bull's eye, signifies the end of the diagram. A transition is represented by an arrow line. A transition should have a transition string that shows the event that causes the transition, any guard conditions that need to be evaluated during the triggering of the event, and the action that needs to be performed when the event is triggered. The event should be an operation defined in the class.

States can have internal actions and activities. Internal actions define what operation should be performed upon entering the state, what operation should be performed upon exiting the state, and what operation should be performed while the object is in that state.

Nested states allow one to model a complex state by defining different levels of detail. Concurrent states mean that the behavior of the object can best be explained by regarding the product as two distinct sets of substates, each state of which can be entered and exited independently of the substates in the other set.

STEP 5: Document the screen classes. Identify all the attributes and operations of the screen. The documentation is similar to documenting DBClasses and Persistent Classes.
STEP 6: Modify the software architecture. The boundary subsystems are replaced by the classes for the screens or user interfaces. If additional control classes are defined, add them to the architecture.
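The state chart ideas above, named states, events with guard conditions, and entry actions, can be roughly approximated in code. The screen and its events below are hypothetical, invented only to illustrate the mapping from diagram to program.

```java
// Named states of a hypothetical data-entry screen.
enum ScreenState { EDITING, SUBMITTED, CLOSED }

class AddAthleteScreen {
    ScreenState state = ScreenState.EDITING;  // initial state
    boolean formValid = false;                // used as a guard condition

    // Event: OK button clicked. Guard: the form must be valid.
    void okClicked() {
        if (state == ScreenState.EDITING && formValid) {
            state = ScreenState.SUBMITTED;
            onEnterSubmitted();               // entry action of the new state
        }
    }

    // Event: Close button clicked; moves the object to its final state.
    void closeClicked() { state = ScreenState.CLOSED; }

    private void onEnterSubmitted() { /* e.g., disable the input fields */ }
}

public class StateDemo {
    public static void main(String[] args) {
        AddAthleteScreen s = new AddAthleteScreen();
        s.okClicked();                         // guard fails: still EDITING
        s.formValid = true;
        s.okClicked();                         // transition fires
        System.out.println(s.state);           // prints SUBMITTED
    }
}
```

Each event is an operation defined in the class, and the guard is checked before the transition fires, mirroring the transition string notation.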
Component-Level Design
The Component-Level Design defines the data structures, algorithms, interface characteristics, and communication mechanisms allocated to each software component. A component is the building block for developing computer software. It is a replaceable and almost independent part of the software that fulfills a clear function in the context of a well-defined architecture, and it can be independently developed. In object-oriented software engineering, a component is a set of collaborating classes that fulfills a particular functional requirement of the software. Each class in the component should be fully defined to include all attributes and operations related to the implementation. It is important that all interfaces and messages that allow the classes within the component to communicate and collaborate be defined.

Basic Component Design Principles
Four basic principles are used for designing components. They guide the software engineer in developing components that are more flexible and amenable to change, and they reduce the propagation of side effects when changes do occur.
1. The Open-Close Principle. The software engineer should define a component such that it can be extended without the need to modify its internal structure and behavior. When defining modules or components, they should be open for extension but they should not be modifiable.  Abstractions in the target programming language support this principle.
2. The Liskov Substitution Principle. The derived class or subclass can be a substitute for the base class: if a class is dependent on a base class, that class can use any derived class as a substitute for the base class. This principle enforces that any derived class should comply with any implied contract between the base class and any component that uses it.
3. The Dependency Inversion Principle. Classes should depend on abstractions, not on concretions. The more a component depends on other concrete components, the more difficult it can be to extend.
4. The Interface Segregation Principle. Software engineers are encouraged to develop client-specific interfaces rather than a single general-purpose interface. Only operations specific to a particular client should be defined in the client-specific interface; this minimizes inheriting irrelevant operations to the client.

Component-Level Design Guidelines
This guideline is applied to the design of the component, its interface, its dependencies, and its inheritance.
1. Component
• Architectural component names should come from the problem domain and be easily understood by the stakeholders, particularly the end-users. As an example, a component called Athlete is clear to anyone reading the component.
• Implementation component names should come from the implementation-specific name. As an example, PCLAthlete is the persistent class list for the athletes.
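The four principles can be seen together in one short, illustrative sketch (the names are invented, not from the case study): clients depend on a small abstraction, new behavior is added by extension rather than modification, and any implementation can substitute for the base type.

```java
// A small, client-specific abstraction (Interface Segregation).
interface ReportFormatter {
    String format(String body);
}

class PlainFormatter implements ReportFormatter {
    public String format(String body) { return body; }
}

// Added later by extension; ReportPrinter is never modified (Open-Close).
class HtmlFormatter implements ReportFormatter {
    public String format(String body) { return "<p>" + body + "</p>"; }
}

class ReportPrinter {
    // Depends on the abstraction, not a concretion (Dependency Inversion);
    // any ReportFormatter may be substituted safely (Liskov Substitution).
    String print(ReportFormatter f, String body) { return f.format(body); }
}

public class PrincipleDemo {
    public static void main(String[] args) {
        ReportPrinter p = new ReportPrinter();
        System.out.println(p.print(new PlainFormatter(), "hi"));  // prints hi
        System.out.println(p.print(new HtmlFormatter(), "hi"));   // prints <p>hi</p>
    }
}
```

Adding a third format, say CSV, would require only a new class; no existing class would change, which is exactly the flexibility the principles aim for.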
• Use stereotypes to identify the nature of the component, such as <<table>>, <<database>>, or <<screen>>.
2. Interfaces
• The canonical representation of the interface is recommended when the diagram becomes complex.
• Interfaces should flow from the left-hand side of the implementing component.
• Show only the interfaces that are relevant to the component under consideration.
3. Dependencies and Inheritance
• Dependencies should be modeled from left to right.
• Inheritance should be modeled from bottom (subclass or derived class) to top (superclass or base class).
• Component interdependence should be modeled from interface to interface rather than from component to component.

Component Diagram
The Component Diagram is used to model software components, their interfaces, dependencies, and hierarchies. Figure 4.55 shows the notation of this diagram.

Component Diagram Notation
A component is represented by a rectangular box with the component symbol inside. A dependency is depicted by a broken arrow; the Client Component is dependent on the Supplier Component. The dependency can be named.

Developing the Software Component
All classes identified so far should be elaborated, including the screens, the AddAthleteController, and the data design classes. Developing the component-level design involves the following steps.
STEP 1: Refine all classes. The classes are refined in such a way that they can be translated into a program. This step involves a series of substeps.
STEP 1.1: If necessary, redefine the sequence and collaboration diagrams to reflect the interaction of the current classes. Refactoring may be required before proceeding.
STEP 1.2: Distribute operations and refine the operation signature of each class. Use the data types and naming conventions of the target programming language. A short description is attached to each operation.
STEP 1.3: Refine the attributes of each class. Use the data types and naming conventions of the target programming language. A short description is attached to each attribute.
Figure 4.57 shows the elaboration of the attributes of the AddAthleteController.
STEP 1.4: Identify the visibility of the attributes and operations. The visibility symbols are enumerated in the table below.
STEP 1.5: Document the class. In particular, identify the pre-conditions and post-conditions. Pre-conditions are the conditions that must exist before the class can be used. Post-conditions are the conditions that exist after the class is used. Also, include the description of the attributes and operations.
STEP 2: If necessary, refine the packaging of the classes. In our example, it was decided that no repackaging is necessary.
STEP 3: Define the software components. Define the components using the component diagram.

Deployment-Level Design
The Deployment-Level Design creates a model that shows the physical architecture of the hardware and software of the system. It highlights the physical relationship between software and hardware. The components identified in the component-level design are distributed to hardware.

Deployment Diagram Notation
The Deployment Diagram is made up of nodes and communication associations. Nodes represent the computers; they are drawn as three-dimensional boxes. The communication associations show network connectivity.
Developing the Deployment Model
Developing the deployment diagram is very simple. Just distribute the software components identified in the component-level design to the computer nodes where they will reside.

Design Model Validation Checklist
Validation is needed to see if the design model:
• fulfills the requirements of the system
• is consistent with the design guidelines
• serves as a good basis for implementation

Software Component
1. Does the name of the software component clearly reflect its role within the system?
2. Are the dependencies defined?
3. Is there any model element within the software component visible outside?
4. If the software component has an interface, is the operation defined in the interface mapped to a model element within the software component?

Class
1. Does the name of the class clearly reflect its role within the system?
2. Does the class signify a single well-defined abstraction?
3. Are all operations and attributes within the class functionally coupled?
4. Are there any attributes, operations, or associations that need to be generalized?
5. Are all specific requirements about the class addressed?
6. Does the class exhibit the required behavior that the system needs?

Operation
1. Can we understand the operation?
2. Does the operation provide the behavior that the class needs?
3. Is the operation signature correct?
4. Are the parameters defined correctly?
5. Are the implementation specifications for the operation correct?
6. Does the operation signature comply with the standards of the target programming language?
7. Is the operation needed or used by the class?

Attribute
1. Is the name of the attribute descriptive?
2. Does the attribute signify a single conceptual thing?
3. Is the attribute needed or used by the class?
Mapping the Design Deliverables to the Requirements Traceability Matrix
Once the software components are finalized, we need to tie them up to the RTM of the project. This ensures that the model is related to a requirement. It also aids the software engineer in tracking the progress of the development.

Design Metrics
In object-oriented software engineering, the class is the fundamental unit. Measures and metrics for an individual class, the class hierarchy, and class collaboration are important to the software engineer, especially for the assessment of design quality. There are many sets of metrics for object-oriented software, but the widely used one is the CK Metrics Suite proposed by Chidamber and Kemerer. It consists of six class-based design metrics, which are listed below.
1. Weighted Methods per Class (WMC). This is computed as the summation of the complexity of all methods of a class. Assume that there are n methods defined for the class; we compute the complexity of each method and find the sum. There are many complexity metrics that can be used, but the common one is the cyclomatic complexity, which is discussed in the chapter on Software Testing. The number of methods and their complexity indicates the amount of effort required to implement and test a class; as the number of methods grows within the class, the more likely it will become complicated. It also complicates modifications and testing.
2. Depth of the Inheritance Tree (DIT). It is defined as the maximum length from the root superclass to the lowest subclass. As this number increases, the more complex the inheritance tree becomes.
3. Number of Children (NOC). The children of a class are the immediate subordinates of that class. Consider the class hierarchy in Figure 4.60; the NOC of Class4 is 2. As the number of children increases, reuse increases. However, care must be given such that the abstraction represented by the parent class is not diluted by children that are not appropriately members of the parent class. Also, as the number of children increases, the amount of testing required for the children of the parent class increases.
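Two of these metrics are simple enough to compute directly. The sketch below is an illustration under invented numbers: WMC is just the sum of the per-method complexities, and DIT can be approximated by walking a class's superclass chain up to the root.

```java
public class CkMetricsDemo {
    // WMC: the sum of the cyclomatic complexities of a class's n methods.
    static int wmc(int[] methodComplexities) {
        int sum = 0;
        for (int c : methodComplexities) sum += c;
        return sum;
    }

    // DIT: the number of superclass links from the class up to the root.
    static int dit(Class<?> c) {
        int depth = 0;
        while (c.getSuperclass() != null) {  // Object's superclass is null
            depth++;
            c = c.getSuperclass();
        }
        return depth;
    }

    public static void main(String[] args) {
        // A class with three methods of complexity 1, 3 and 2 has WMC 6.
        System.out.println(wmc(new int[] {1, 3, 2}));   // prints 6
        // String extends Object directly, so its DIT here is 1.
        System.out.println(dit(String.class));          // prints 1
    }
}
```

A real measurement tool would obtain the per-method complexities from the control-flow graphs of the code; the point here is only that both metrics are cheap to compute once those numbers exist.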
4. Coupling Between Object Classes (CBO). It is the number of collaborations that a class does with other objects. As this number increases, the reusability factor of the class decreases. It also complicates modifications and testing. It is for this reason that CBO is kept to a minimum.
5. Response for a Class (RFC). It is the number of methods that can be executed in response to a message given to an object of the class. As this number increases, the effort required to test also increases, because it increases the possible test sequences.
6. Lack of Cohesion in Methods (LCOM). It is the number of methods that access an attribute within the class. If this number is high, methods are coupled together through this attribute, which increases the complexity of the class design. It is best to keep LCOM as low as possible.

Chapter 4 Implementation

Programs are written in accordance with what is specified in the design. The design model has no value if the design's modularity is not carried forward to the implementation. The software's general purpose might remain the same throughout its life cycle, but its characteristics and nature may change over time as customer requirements are modified and enhancements are identified. Changes are first reflected on the design; the impact should be analyzed and addressed, and it will be traced down to lower-level components. This chapter does not intend to teach programming, as this is reserved for a programming course. It gives some programming guidelines that may be adopted when creating code, and some software engineering practices that every programmer must keep in mind. Following standards and procedures can help programmers organize their thoughts and avoid mistakes; this will be discussed in the programming documentation guidelines.
Programming Standards and Procedures
Teams are involved in developing software; several people work on it. A variety of tasks performed by different people is required to generate a quality product, and a great deal of cooperation and coordination is required. It is for this reason that programming standards and procedures must be defined and consistently used. Programming standards and procedures are important for several reasons. First, it is important that others be able to understand the code that was written, why it was written, and how it fits into the software to be developed; codes should be written in a way that is understandable not only to the programmer who wrote them but also to others (other programmers and testers). Second, they assist other team members, such as software testers, software integrators, and software maintainers, and they allow communication and coordination to be smooth among the software development teams. Third, they help in locating faults and aid in making changes, because they indicate the section where the changes should be applied.

The people involved in the development of the software must decide on implementation-specific standards. These may be:
1. Platform where the software will be developed and used. This includes the minimum and recommended requirements of the hardware and software; normally, they include versions. Upgrades are not really recommended; however, if upgrades are necessary, the impact should be analyzed and addressed. Sometimes the designer of the system is not able to address the specifics of the platform and programming environment.
2. Standards for correspondence between design and source code. Design correspondence with source code allows us to locate the necessary source code to be modified. It also helps in the translation from design to source code by maintaining the correspondence between the design components and the implementation components.
3. Standards on the source code documentation. Documenting source code so that it is clear and easy to follow makes it easier for the programmer to write and maintain the code.

Translating the design into code can be a dauntingting task is removed; Translating the design into code can be a daunting task. Design characteristics such as low coupling, high cohesion, and well-defined interfaces should also be characteristics of the program.
Programming Guidelines
Programming is a creative skill. The programmer has the flexibility to implement the code: he can choose the particular programming language constructs to use, how the data will be represented, and so on. The programmer adds his creativity and expertise to build the lines of code that implement the design. This section discusses several guidelines that apply to programming in general.

Modularity is a good design characteristic that must be translated into a program characteristic. By building the program in modular blocks, a programmer can hide implementation details at different levels, making the entire system easier to understand, test, and maintain. Codes can be rearranged or reconstructed with a minimum of rewriting. Coupling and cohesion are other design characteristics that must be translated into program characteristics.

Control Structure Guidelines
The control structure is defined by the software architecture. In object-oriented software, it is based on messages being sent among objects of classes, system states, and changes in the variables. It is important for the program structure to reflect the design's control structure.

Using Pseudocodes
The design usually provides a framework for each component; it is an outline of what is to be done in the component. The design component is used as a guide to the function and purpose of the component. Pseudocodes can be used to adapt the design to the chosen programming language. They are structured English that describes the flow of the program code. By adopting the constructs and data representations without becoming involved immediately in the specifics of a command or statement, the programmer can experiment and decide which implementation is best.

Documentation Guidelines
Program documentation is a set of written descriptions that explain to the reader what the programs do and how they do it. Two kinds of program documentation are created: internal documentation and external documentation.

Internal Documentation
It is a descriptive document written directly within the source code. It is directed at someone who will be reading the source code. Summary information is provided to describe the component's data structures, algorithms, and control flow. Normally, this information is placed at the beginning of the code; this section of the source code is known as the header comment block. It identifies the following elements:
• Component Name
• Author of the Component
• Date the Component was last created or modified
• Place where the component fits in the general system
• Details of the component's data structures, algorithms, and control flow
Optionally, one can add a history of the revisions that were made to the component. The history element consists of:
• Who modified the component?
• When was the component modified?
• What was the modification?
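A header comment block following the elements above might look like the sketch below. The component name PCLAthlete comes from the case study, but the author, dates, and implementation details are placeholder values for illustration.

```java
/**
 * Component Name : PCLAthlete
 * Author         : dela Cruz, Juan            (placeholder)
 * Date           : 05 Jan 2006                (placeholder)
 * Place in System: persistence layer; holds the list of athletes
 * Data Structures: backed by an ArrayList of athlete names
 * Control Flow   : simple add/count operations; no special flow
 *
 * Revision History:
 *   05 Jan 2006  dela Cruz  created component  (placeholder entry)
 */
import java.util.ArrayList;
import java.util.List;

public class PCLAthlete {
    private final List<String> athletes = new ArrayList<>();

    /** Adds one athlete name to the persistent class list. */
    public void add(String name) { athletes.add(name); }

    /** Returns the number of athletes currently in the list. */
    public int count() { return athletes.size(); }
}
```

The block sits at the top of the file, before any code, so a maintainer can learn what the component is, where it fits, and who changed it without reading a single statement.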
This section of the source code is known as the header comment block. It also identifies the pre-conditions and post-conditions of the source code. Internal documents are created for people who will be reading the code.

Tips in writing codes:
1. Use meaningful variable names and method names.
2. Use formatting to enhance the readability of the codes, such as indention and proper blocking.
3. Place additional comments to enlighten readers on some program statements.
4. Have separate methods for input and output.
5. Avoid writing codes that jump wildly from one place to another; avoid using GOTO's. If the control flow is complex and difficult to understand, one may need to restructure the program.

External Documentation

All other documents that are not part of the source code but are related to the source code are known as external documents. These types of documents are intended for people who may not necessarily read the codes. For object-oriented systems, they describe how components interact with one another, which includes object classes and their inheritance hierarchy.

Implementing Controllers

Implementing controllers is similar to writing programs in your previous courses. The design component is used as a guide to the function and purpose of the component. Writing code is also iterative, i.e., one starts with a draft.

Implementing Packages

One of the goals of programmers is to create reusable software components so that codes are not repeatedly written. Packages provide a mechanism for software reuse. The Java Programming Language provides a mechanism for defining packages; they are actually directories used to organize classes and interfaces. Placing a package statement at the beginning of a source file indicates that the class defined in the file is part of the specified package.

With hundreds of thousands of Java programmers around the world, the name one uses for his classes may conflict with classes developed by other programmers. Java provides a convention for unique package and class names: package names should be in all-lowercase ASCII letters and should follow the Internet Domain Name Convention, as specified in the X.500 format for distinguished names.

The following are the steps in defining a package in Java.
1. Choose a package name. In the example, it is abl.athlete.pc, which is the name of the package for persistent classes and class lists.
2. Add the package statement to the source code file for a reusable class definition. Consider the code of the Athlete persistent class shown in the text.
3. Define a public class. If the class is not public, it can be used only by other classes in the same package.
4. Compile the class so that it is placed in the appropriate package directory structure. When a Java file containing a package statement is compiled, the resulting *.class file is placed in the directory specified by the package statement; in the example, the *.class file is placed under the pc directory, which is under athlete, which is under abl. If these directories do not exist, the compiler creates them.
5. The compiled class is then made available to the compiler and interpreter.
6. To reuse the class, just import the package.

A good programming practice is the use of abstract classes and interfaces. They greatly increase the ability of the software to be reusable and manageable, and they also allow software to have 'plug-and-play' capabilities. The succeeding sections serve as a review of abstract classes and interfaces.
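As a sketch of these steps, the following file uses the package name abl.athlete.pc mentioned in the text. The class body shown here (a single id field) is an assumption made for illustration; it is not the actual Athlete persistent class from the example.

```java
// File: abl/athlete/pc/PCLAthlete.java
// The package name abl.athlete.pc is taken from the text's example;
// the fields of this class are illustrative assumptions only.
package abl.athlete.pc;

public class PCLAthlete {
    private String athleteId;

    public PCLAthlete(String athleteId) {
        this.athleteId = athleteId;
    }

    public String getAthleteId() {
        return athleteId;
    }
}
```

Compiling with `javac -d . PCLAthlete.java` places PCLAthlete.class under the abl/athlete/pc directory (step 4); another class can then reuse it with `import abl.athlete.pc.PCLAthlete;` (step 6).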
Interfaces

An interface is a special kind of block containing method signatures (and possibly constants) only. Interfaces define a standard and public way of specifying the behavior of classes. They allow classes, regardless of their location in the class hierarchy, to implement common behaviors. Note that interfaces define the signatures of a set of methods without the body.

Why do we use Interfaces? We need to use interfaces if we want unrelated classes to implement similar methods. Let's take as an example a class Line which contains methods that compute the length of the line and compare a Line object to objects of the same class. Now, suppose we have another class MyInteger which contains methods that compare a MyInteger object to objects of the same class. As we can see here, both of the classes have some similar methods which compare them with other objects of the same type, but they are not related whatsoever. In order to enforce a way to make sure that these two classes implement some methods with similar signatures, we can use an interface: we can create an interface, let's say interface Relation, which has some comparison method declarations. Through interfaces, we can actually capture similarities among unrelated classes without artificially forcing a class relationship.

Note that interfaces exhibit polymorphism as well, since a program may call an interface method, and the proper version of that method will be executed depending on the type of object passed to the interface method call.

Abstract Classes

Now suppose we want to create a superclass wherein certain methods contain some implementation, and some methods we just want to be overridden by its subclasses. An abstract class is a class that cannot be instantiated. It often appears at the top of an object-oriented programming class hierarchy, defining the broad types of actions possible with objects of all subclasses of the class. Those methods in the abstract classes that do not have implementation are called abstract methods. To create an abstract method, just write the method declaration without the body and use the abstract keyword.

Interface vs. Abstract Class

The following are the main differences between an interface and an abstract class: interface methods have no body; an interface can only define constants; and an interface has no direct inherited relationship with any particular class, since interfaces are defined independently.

Interface vs. Class

One common characteristic of an interface and a class is that they are both types. This means that an interface can be used in places where a class can be used. Another common characteristic is that both interface and class can define methods. However, an interface does not have an implementation code while the class has one.
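The Line example above can be sketched as follows. The interface name Relation comes from the text; the specific method names (isGreater, isLess, isEqual) and the line representation are assumptions made for illustration.

```java
// Hypothetical sketch of the Relation interface described in the text.
interface Relation {
    boolean isGreater(Object a, Object b);
    boolean isLess(Object a, Object b);
    boolean isEqual(Object a, Object b);
}

// Line is unrelated to MyInteger in the class hierarchy, yet both can
// implement Relation and be compared through the same set of methods.
class Line implements Relation {
    private double x1, y1, x2, y2;

    Line(double x1, double y1, double x2, double y2) {
        this.x1 = x1; this.y1 = y1; this.x2 = x2; this.y2 = y2;
    }

    double getLength() {
        return Math.sqrt((x2 - x1) * (x2 - x1) + (y2 - y1) * (y2 - y1));
    }

    public boolean isGreater(Object a, Object b) {
        return ((Line) a).getLength() > ((Line) b).getLength();
    }

    public boolean isLess(Object a, Object b) {
        return ((Line) a).getLength() < ((Line) b).getLength();
    }

    public boolean isEqual(Object a, Object b) {
        return ((Line) a).getLength() == ((Line) b).getLength();
    }
}
```

A MyInteger class could implement the same interface with its own comparison logic; code written against Relation then works with either class, which is the polymorphism the text refers to.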
Inheritance among Interfaces

Interfaces are not part of the class hierarchy; they are defined independently. However, interfaces can have inheritance relationships among themselves. There are other reasons for using interfaces besides capturing similarities among unrelated classes:
1. An interface formalizes polymorphism; it defines polymorphism in a declarative way, unrelated to implementation.
2. We need to use interfaces to model multiple inheritance, which allows a class to have more than one superclass.
3. Since an interface is a type, we can actually use an interface as a data type.
4. An interface can reveal an object's programming interface without revealing its class.

Implementation Metrics

Since the program characteristics must correspond to design characteristics, the metrics used for implementation are the metrics for the design. Other metrics considered at the implementation phase are used for succeeding similar projects; they serve as history data that can be used for estimation for other projects:
1. Lines of Code (LOC). This is the number of statement lines that was used. It can be for a program, a component or the entire software.
2. Number of Classes. This is the total number of classes created.
3. Number of Documentation Pages. The total number of documentation produced.
4. Cost. The total cost of developing the software.
5. Effort. This is the total number of days or months the project was developed.

Mapping Implementation Deliverables with the Requirements Traceability Matrix

Normally, no additional RTM elements are used for the implementation phase. Rather, we monitor the development of each software component defined under the Classes column. However, during the course of the development, additional classes may have been developed; they should be entered in this column under the appropriate package. We can use the RTM to determine how many of the classes have been implemented, integrated and tested, and it can be used to track the progress of the development team.
This declarative way of defining behavior is the key to the "plug-and-play" ability of an architecture. Note that multiple inheritance is not present in Java, but it is present in other object-oriented languages like C++.

Chapter 5
Software Testing

Software is tested to detect defects or faults before it is given to the end-users; it is a known fact that it is very difficult to correct a fault once the software is in use. In this chapter, the methods and techniques to develop test cases will be discussed. Software testing strategies are also presented to help us understand the steps needed to plan a series of steps that will result in the successful construction of the software.

Introduction to Software Testing

Software testing encompasses a set of activities with the primary goal of discovering faults and defects. Software is tested to demonstrate the existence of a fault or defect, because the goal of software testing is to discover them. In a sense, a test is successful only when a fault is detected or is a result of the failure of the testing procedure. Fault Identification is the process of identifying the cause of failure; Fault Correction and Removal is the process of making changes to the software and system to remove the fault. After a fault is corrected, a test is performed again to ensure that the correction of the fault does not generate additional or new errors.

Testing has the goals of ensuring that the software constructed implements a specific function (verification), and that the software is constructed traceable to customer requirements (validation). Specifically, it has the following objectives:
• To design a test case with high probability of finding as-yet undiscovered bugs
• To execute the program with the intent of finding bugs

Software Testing Principles
1. All tests should be traceable to the requirements.
2. Tests should be planned long before testing begins.
3. The Pareto Principle applies to software testing: 80% of all errors uncovered during testing will likely be traceable to 20% of all program modules or classes.
4. Testing should begin "in small" and progress toward testing "in the large". For object-oriented software development, the building block of software is the class. Software testing begins from a single component, using white-box and black-box techniques to derive test cases. Once the class is tested, integration of the class through communication and collaboration with other tested classes is tested. Finally, the software is tested as a whole system.
5. Exhaustive testing is not possible, but there are testing strategies that allow us to have a good chance of constructing software that is less likely to fail.
6. To be most effective, an independent third party (normally, a software testing group) should conduct the test.

Software testing is performed by a variety of people. Software developers and the quality assurance group conduct software testing. Software developers are responsible for testing individual program units before they perform the integration of these program units; however, they may have a vested interest in demonstrating that the code is error-free. The Quality Assurance group may be tasked to perform the test. Their main goal is to uncover as many errors as possible, and they ask the software developers to correct any errors that they have discovered.

Software Testing Strategies integrate software test case design methods into a well-planned series of steps that result in the successful implementation of the software. A strategy is a road map that helps the end-users, software developers and the quality assurance group plan and conduct testing in an organized way.
A Test Specification is an overall strategy that defines the specific tests that should be conducted; it also includes the procedures on how to conduct the test. In order to have an organized way of testing the software, a test specification needs to be developed, normally by the leader of the quality assurance group. If there is no quality assurance group, the project leader should formulate one. The test specification can be used to track the progress of the development team and to define a set of project milestones.

Software Test Case Design Methods

There are two ways software is tested. One approach is to test the internal workings of the software: it checks if internal operations perform as specified and ensures that all internal components have been adequately tested. The other approach is to test the software as a whole, i.e., to know how the software works and test if it conforms to the specified functionality in the requirements. The first approach is known as white-box testing, while the second approach is known as black-box testing.

White-Box Testing Techniques

White-box testing is also known as glass-box testing. It is a test case design technique that uses the internal control structure of the software component, as defined by its methods, to derive test cases. Its goal is to ensure that internal operations perform according to the specification of the software component, and that no logical errors, incorrect assumptions and typographical errors have been missed. It produces test cases that:
• Ensure that all independent paths within the component have been tested at least once;
• Test all logical decisions on their true and false sides;
• Test all loops at their boundaries and within their operational bounds;
• Test paths within the component that are considered "out of the mainstream"; and
• Test internal data structures for their validity.

There are several techniques that can be employed in defining test cases using the white-box approach. They are discussed in the succeeding sections.

Basis Path Testing

Basis Path Testing is a white-box testing technique that enables the test case designer to derive a logical complexity measure based on the procedural specification of a software component. This complexity measure is used as a guide for defining a basis set of execution paths to be tested.

Steps in Deriving Test Cases using Basis Path Testing

STEP 1: Use a procedural specification as input in deriving the basic set of execution paths. The procedural specification can be the design (pseudo-code or flowchart) or the source code itself. Normally, the operation or method procedural design will be used. As an example, assume that the following is the pseudo-code of some method of some class.

STEP 2: Draw the flow graph of the procedural specification. The flow graph depicts the logical control flow of the procedural specification, similar to a flow chart. It uses nodes, edges and regions. The nodes represent one or more statements; they can be mapped to sequences or conditions. The edges are the links of the nodes; they represent the flow of control. Areas that are bounded by the edges and nodes are called regions. The graph does not normally show the code; it is placed there for clarification purposes. Figure 6.1 shows the flow graph of the sample procedural code.

STEP 3: Compute for the complexity of the code. The cyclomatic complexity is used to determine the complexity of the procedural code. It is a number that specifies the number of linearly independent paths in the code, which is the basis of the basic set. It can be computed in three ways:
1. The number of regions of the flow graph.
2. The number of predicate nodes plus one, i.e., V(G) = P + 1.
3. The number of edges minus the number of nodes plus 2, i.e., V(G) = E – N + 2.

In the example, the number of regions is 5, the number of predicate nodes is 4, the number of edges is 13, and the number of nodes is 10.
Using the formulas:
• V(G) = 5 (regions)
• V(G) = 4 (predicates) + 1 = 5
• V(G) = 13 (edges) – 10 (nodes) + 2 = 5

STEP 4: Determine the basic set of execution paths. The cyclomatic complexity provides the number of linearly independent paths through the code. In the example, 5 linearly independent paths are identified (Path 1 through Path 5), each written as the sequence of nodes it traverses from the entry node (Node-1) to the exit node (Node-9), following the flow graph of Figure 6.1.

STEP 5: Document the test cases based on the identified execution paths. Test cases are prepared to force the execution of each path; conditions on predicate nodes should be properly set. Each test case is executed and its result is compared with the expected result. As an example:

PATH 1 Test Case:
For Node-1, condition2 should evaluate to TRUE (identify the necessary values).
For Node-3, the evaluation of var1 should lead to Node-4 (identify the value of var1).
Expected Results: should produce the necessary result for Node-9.

Control Structure Testing

Control Structure Testing is a white-box testing technique that tests three types of program control, namely, condition testing, looping testing and data flow testing.

1. Condition Testing. It is a test case design method that tests the logical conditions contained in a procedural specification. It focuses on testing each condition in the program by providing possible combinations of values. As an example, consider the following condition of a program:

if ((result < 0) && (numberOfTest != 100))

The test cases that can be generated are shown in the table below.

2. Looping Testing. It is a test case design method that focuses exclusively on the validity of iterative constructs or repetition. There are four classes of iteration: simple, nested, concatenated and unstructured.

For Simple Iteration, the test cases can be derived from the following possible executions of the iteration or repetition:
• Skip the loop entirely.
• Only one pass through the loop.
• Two passes through the loop.
• m passes through the loop, where m < n.
• n – 1, n, n + 1 passes through the loop.
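The four combinations for the compound condition shown under condition testing, if ((result < 0) && (numberOfTest != 100)), can be exercised with a small harness. The class and method names here are ours, not from the text; only the condition itself is taken from the example.

```java
// Condition testing for: if ((result < 0) && (numberOfTest != 100))
class ConditionTestDemo {
    // Wraps the compound condition from the text so each sub-condition
    // can be driven to true or false independently.
    static boolean conditionHolds(int result, int numberOfTest) {
        return (result < 0) && (numberOfTest != 100);
    }

    public static void main(String[] args) {
        System.out.println(conditionHolds(-1, 50));   // true  && true  -> true
        System.out.println(conditionHolds(-1, 100));  // true  && false -> false
        System.out.println(conditionHolds(5, 50));    // false && true  -> false
        System.out.println(conditionHolds(5, 100));   // false && false -> false
    }
}
```

Each call forces one row of the truth table, which is exactly the set of test cases condition testing asks for.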
For Nested Iterations, the test cases can be derived as follows:
• Start with the innermost loop.
• Conduct simple loop tests for the innermost loop while holding the outer loops at their minimum iteration parameter values. Add other tests for out-of-range or excluded values.
• Work outward, conducting tests for the next loop, but keeping all other outer loops at minimum values and other nested loops to "typical" values.
• Continue until all loops have been tested.

For Concatenated Iteration:
• If the loops are independent, the test for simple loops can be used.
• If the loops are dependent, the test for nested loops can be used.

For Unstructured Iteration, no test cases can be derived; it would be best to redesign the loop, since it is not a good iteration or repetition construct.

3. Data Flow Testing. It is a test case design method that selects test paths of a program according to the locations of the definitions and uses of variables in the program.

Black-Box Testing Techniques

Black-box testing is a test case design technique that focuses on testing the functional aspect of the software, i.e., whether it complies with functional requirements. Software engineers derive sets of input conditions that will fully test all functional requirements of the software. It defines a set of test cases that find incorrect or missing functions, errors in interface, errors in data structure, errors in external database access, performance errors, and errors in initialization and termination.

Graph-based Testing

Graph-based Testing is a black-box testing technique that uses the objects that are modeled in the software and the relationships among these objects. Understanding the dynamics of how these objects communicate and collaborate with one another can derive test cases.

Developing Test Cases Using Graph-based Testing

STEP 1: Create a graph of software objects and identify the relationships of these objects. Using nodes and edges, create a graph of software objects: the nodes represent the software objects, and their relationships are established through the use of edges. Properties can be used to describe the nodes and edges. For object-oriented software engineering, the collaboration diagram is a good input for graph-based testing, because you don't need to create a new graph.
STEP 2: Traverse the graph to define test cases. The derived test cases for the example are:

Test Case 1:
• The FindAthleteUI class sends a request to retrieve a list of athletes based on a search criteria. The request is sent to the FindAthleteRecord.
• The FindAthleteRecord sends a message to the DBAthlete to process the search criteria.
• The DBAthlete requests the database server to execute the SELECT statement. It populates the PCLAthlete class with the athlete information and returns the reference of the PCLAthlete to the AthleteListUI.
• The AthleteListUI lists the names of the athletes.

Guidelines for Graph-based Testing
1. Identify the start and stop points of the graph. There should be an entry and an exit node.
2. Name the nodes and specify their properties.
3. Establish their relationships through the use of edges. Specify the properties.
4. Derive test cases and ensure that there is node and edge coverage.

Equivalence Testing

Equivalence Testing is a black-box testing technique that uses the input domain of the program. It divides the input domain into sets of data from which test cases can be derived; this reduces the effort in testing the software. It makes use of equivalence classes, which are sets of valid and invalid states that an input may be in. Derived test cases are used to uncover errors that reflect a class of errors.

Guidelines in Identifying Equivalence Classes
1. Input condition is specified as a range of values. The test cases are one valid input and two invalid equivalence classes.
2. Input condition requires a specific value. The test cases are one valid and two invalid equivalence classes.
3. Input condition specifies a member of a set. The test cases are one valid and one invalid equivalence class.
4. Input condition is Boolean. The test cases are one valid and one invalid equivalence class.

As an example, consider a text message code for registering a mobile number to a text service for getting traffic reports. Assume that the message requires the following structure:
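The exact message structure is not reproduced in this excerpt, so the sketch below invents one purely for illustration: the keyword TRAFFIC followed by a space and an 11-digit mobile number. Each rejected input then represents one invalid equivalence class.

```java
// Hypothetical validator for a registration text message.
// Assumed format (NOT from the text): "TRAFFIC <11-digit mobile number>"
class RegistrationValidator {
    static boolean isValid(String message) {
        if (message == null) return false;
        String[] parts = message.split(" ");
        if (parts.length != 2) return false;           // structure: exactly two fields
        if (!parts[0].equals("TRAFFIC")) return false; // member of a set: the keyword
        return parts[1].matches("\\d{11}");            // specific value: 11 digits
    }
}
```

One valid test case ("TRAFFIC 09171234567") plus one test case per invalid class (wrong keyword, wrong number length, missing field) is enough to cover the equivalence classes of this hypothetical format.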
Boundary Value Testing

Boundary Value Testing is a black-box testing technique that uses the boundaries of the input domain to derive test cases. Most errors occur at the boundaries of the valid input values.

Guidelines in Deriving Test Cases Using Boundary Value Testing
1. If an input condition specifies a range bounded by n and m, the test cases that can be derived:
• use the values n and m;
• just above n and m;
• just below n and m.
2. If an input condition specifies a number of values, the test cases that can be derived:
• use the minimum;
• use the maximum;
• just above and below the minimum;
• just above and below the maximum.

Testing your Programs

Software testing can be done in several tasks or phases. Program testing is concerned with testing individual programs (unit testing) and their relationship with one another (integration testing). This section discusses concepts and methods that allow one to test programs.

Unit Testing

Unit testing is the basic level of testing. It has the intention of testing the smaller building blocks of a program, and it is the process of executing each module to confirm that each performs its assigned function. It involves testing the interface, local data structures, boundary conditions, independent paths and error handling paths. To create effective unit tests, one needs to understand the behavior of the unit of software that one is testing; it is important that software requirements can be translated into tests. This is usually done by decomposing the software requirements into simple testable behaviors.

To test a module, a driver and a stub are used. A Driver is a program that accepts test case data, passes the data to the component to be tested, and prints relevant results. A Stub is a program that performs support activities such as data manipulation, queries of the states of the component being tested, and prints verification of entry. If the driver and stub require a lot of effort to develop, unit testing may be delayed until integration testing. The environment in which unit tests can be performed is shown in Figure 6.4.
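A minimal sketch of the driver/stub arrangement described above. All names here (GradeStub, GradeComponent, GradeDriver) are invented for illustration; the real component, driver and stub depend on the module being tested.

```java
// Stub: stands in for a real data source and returns canned scores.
class GradeStub {
    static int[] getScores(String studentId) {
        return new int[] {80, 90, 100};
    }
}

// Component under test: averages the scores supplied by its data source.
class GradeComponent {
    static double computeAverage(String studentId) {
        int[] scores = GradeStub.getScores(studentId);
        int sum = 0;
        for (int s : scores) sum += s;
        return (double) sum / scores.length;
    }
}

// Driver: feeds test case data to the component and prints the results.
class GradeDriver {
    public static void main(String[] args) {
        String[] testIds = {"S001", "S002"};
        for (String id : testIds) {
            System.out.println(id + " -> " + GradeComponent.computeAverage(id));
        }
    }
}
```

Because the stub returns fixed data, the component can be exercised before its real database access exists, which is the point of the driver/stub environment.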
Integration Testing

In object-oriented software engineering, the concept of encapsulation is used to define a class, i.e., the data (attributes) and functions (methods or operations) are grouped together in a class. The smallest testable units are the operations defined in the class. As opposed to the conventional way of testing, operations defined in a class cannot be tested separately; they should be tested in the context in which objects of the class are instantiated. As an example, consider the class diagram shown below. The draw() method is defined in the Shape class, and all subclasses of this base class inherit this operation. When a subclass uses the draw() method, it is executed together with the private attributes and other methods within the context of the subclass. The context in which the method is used varies depending on which subclass executes it. Therefore, it is necessary to test the draw() method in all subclasses that use it, and as more subclasses are defined for a base class, more testing is required for the base class.

After unit testing, all classes must be tested for integration. Integration testing verifies that each component performs correctly within collaboration and that each interface is correct. Object-oriented software does not have an obvious hierarchy of control structure, which is a characteristic of the conventional way of developing software, so the traditional way of integration testing (top-down and bottom-up strategies) has little meaning in such software. However, there are two approaches that can be employed in performing integration testing.

Thread-based Testing Approach

Integration testing here is based on a group of classes that collaborate or interact when one input needs to be processed or one event has been triggered. A thread is a path of communication among classes that needs to process a single input or respond to an event. All possible threads should be tested. The sequence diagrams and collaboration diagrams can be used as the basis for this test.

Regression Testing

Sometimes when errors are corrected, it is necessary to re-test the software components. Regression testing is the re-execution of some subset of tests to ensure that changes have not produced unintended side effects. Re-testing occurs for the software function that was affected, the software component that has been changed, and the software components that are likely to be affected by the change.

Test-driven Development Methodology
Test-driven Development (TDD) is a method of developing software that adopts a test-first approach plus refactoring. The test-first approach is a programming technique involving analysis, design, coding and testing all together: in test-driven development, one writes the test first. Two basic steps are involved in this approach:
• Write just enough test to describe the next increment of behavior.
• Write just enough production code to pass the test.

Refactoring, on the other hand, means improving the design of the existing code. It is a process of changing code so that the internal structure of the code is improved without changing the behavior of the program. It is cleaning up bad code: a systematic way of improving the design of the code by removing redundant or duplicate code. Most developers refactor code on a regular basis using common sense.

Benefits of Test-driven Development

There are many benefits. Some of them are enumerated below.
1. It involves a simpler development process. Developers are more focused, because the only thing that they need to worry about is passing the next test. The developer focuses his attention on a small piece of software, gets it to work, and moves on.
2. It allows simple incremental development. One of the immediate benefits of TDD is that one has a working software system almost immediately. The first iteration of the software is very simple and may not have much functionality, but the functionality will improve as the development continues. This is less risky compared with building the whole system and hoping that the pieces will work together. It also allows you to stop the development at any time and quickly respond to any changes in the requirements.
3. It provides constant regression testing. The domino effect is very prevalent in software development: a change to one module may have unforeseen consequences throughout the rest of the project. This is why regression testing is needed. TDD runs the full set of unit tests every time a change is made, which means that any change to the code that has an undesired side effect will be detected immediately and corrected. The other benefit of constant regression testing is that you always have a fully working system at every iteration of the development.
4. It improves communication. The unit tests serve as a common language that can be used to communicate the exact behavior of a software component without ambiguities.
5. It improves understanding of required software behavior. Writing unit tests before writing the code helps the developer focus on understanding the required behavior of the software. As a developer writes a unit test, he is adding pass/fail criteria for the behavior of the software; each of these pass/fail criteria adds to the knowledge of how the software must behave. As more unit tests are added because of new features or corrections, the set of unit tests becomes a representative set of the required behaviors of the software.
6. It centralizes knowledge. The unit tests serve as a repository that provides information about the design decisions that went into the design of the module. The unit tests provide a list of requirements, while the source code provides the implementation of the requirements. Using these two sources of information makes it a lot easier for developers to understand the module and make changes that won't introduce bugs.
7. It improves software design. Often, one doesn't realize a design has problems until it has been used; tests allow us to uncover design problems. Requiring the developer to write the unit tests for the module before the code helps him define the behavior of the software better and to think about the software from a user's point of view. Each new test provides feedback into the design process, which helps in driving design decisions.
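A minimal sketch of one test-first cycle, using the shooting-average computation discussed in this chapter. The class name AthleteStatistics comes from the text; the method signature and the zero-attempt guard are assumptions, and in practice the test would normally be written with a framework such as JUnit.

```java
// Production code, written only after the test below was written and failed.
class AthleteStatistics {
    static double shootingAverage(int shotsMade, int shotsAttempted) {
        if (shotsAttempted == 0) {
            return 0.0;  // behavior chosen to satisfy the zero-attempt test
        }
        return (double) shotsMade / shotsAttempted;
    }
}

// Hand-rolled test; run with "java -ea" so the assertions are enabled.
class AthleteStatisticsTest {
    public static void main(String[] args) {
        assert AthleteStatistics.shootingAverage(5, 10) == 0.5;
        assert AthleteStatistics.shootingAverage(0, 0) == 0.0;
        System.out.println("All tests passed");
    }
}
```

Running the test class before AthleteStatistics exists fails to compile, which plays the role of the failing test; once the production code is in place, the same test passes and protects later refactoring.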
stress testing which test the software under abnormal quantity.Test-driven Development Steps information makes it a lot easier for developers to understand the module and make changes that won't introduce bugs. requests or processes. In the case of the example. Clean up the code. Do not be afraid to refactor your code. the source code we are referring to is the class AthleteStatistics which contains the methods that compute the shooting average of an athlete by game and season. Test-driven development basically is composed of the following steps: 1. you know that the first pass at writing a piece of code is usually not the best implementation. it is executed to verify for execution failure. you could write the test case. After making some assumptions like athletes will be identified by their ids and games by dates. • There must be a way to input the data needed to calculate the shooting average. As an example. "The software must be able to compute the shooting average of every athlete by game and for the season. frequency and volume of data. test requirements are derived from the analysis artifacts or work products that use UML such as the class diagrams. In the context of object-oriented software engineering. Requiring the developer to write the unit tests for the module before the code helps them define the behavior of the software better and helps them to think about the software from a users point of view. Make the test run as easily and quickly as possible. Once the unit test has been verified. recovery testing which test the recovery mechanism of the software when it fails etc. Once the unit test is written. sequence diagrams and collaboration diagrams. 7. 2. Testing the System System testing is concerned with testing an entire system based on its specification. • There must be a way to identify the game. security testing which test the protective mechanism of the software. • There must be a way to get a athlete's shooting average for a game. Conceptually. 
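The test-first flow for the shooting-average requirement can be sketched in Java. The class name AthleteStatistics comes from the example above; the method names, the use of athlete IDs and game dates as keys, and the plain assert statements (standing in for a JUnit test) are illustrative assumptions, not a fixed API.

```java
import java.util.HashMap;
import java.util.Map;

public class AthleteStatistics {
    // Maps "athleteId|gameDate" to {shotsMade, shotsAttempted}.
    private final Map<String, int[]> shots = new HashMap<>();

    // Behavior: input the data needed to calculate the shooting average.
    public void recordGame(String athleteId, String gameDate, int made, int attempted) {
        shots.put(athleteId + "|" + gameDate, new int[] { made, attempted });
    }

    // Behavior: shooting average for a single game, identified by athlete and date.
    public double gameAverage(String athleteId, String gameDate) {
        int[] s = shots.get(athleteId + "|" + gameDate);
        return (double) s[0] / s[1];
    }

    // Behavior: shooting average for the entire season (all recorded games).
    public double seasonAverage(String athleteId) {
        int made = 0, attempted = 0;
        for (Map.Entry<String, int[]> e : shots.entrySet()) {
            if (e.getKey().startsWith(athleteId + "|")) {
                made += e.getValue()[0];
                attempted += e.getValue()[1];
            }
        }
        return (double) made / attempted;
    }

    // In TDD these assertions are written first; they fail until the
    // methods above are implemented, then pass once the code works.
    public static void main(String[] args) {
        AthleteStatistics stats = new AthleteStatistics();
        stats.recordGame("A01", "2024-06-01", 5, 10);
        stats.recordGame("A01", "2024-06-08", 3, 5);
        assert stats.gameAverage("A01", "2024-06-01") == 0.5;
        assert stats.seasonAverage("A01") == (double) 8 / 15;
        System.out.println("All shooting-average tests passed.");
    }
}
```

Running the assertions before writing the method bodies verifies that the tests themselves work (step 2); only then is just enough production code written to make them pass.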
Generating System Test Cases
The succeeding steps describe how to derive test cases using use cases. The approach is adapted from Heumann.

STEP 1: Using the RTM, prioritize use cases. Many software development projects are constrained to deliver the software soon with a relatively small team of developers, so prioritizing which system functions should be developed and tested first is very crucial. The RTM is a good tool in determining this. Importance is gauged based on the frequency with which each function of the system is used. Consider the RTM formulated during the analysis as shown in Table 28. Users may decide to do Use Case 1.0 and its sub-use cases first rather than the rest.

Conceptually, functionality can be viewed as a set of processes that run horizontally through the system, and the objects as sub-system components that stand vertically. Each functionality does not use every object; however, each object may be used by many functional requirements. This transition from a functional to an object point of view is accomplished with use cases and scenarios. Use cases represent the high-level functionalities provided by the system to the user. However, use cases are not independent. They not only have the extend and include dependencies, they also have sequential dependencies, which come from the logic of the business process the system supports. When planning test cases, we need to identify possible execution sequences, as they may trigger different failures.

STEP 2: For each use case, create use case scenarios. A scenario is an instance of a use case, or a complete path through the use case. End users of the completed system can go down many paths as they execute the functionality specified in the use case. Each use case has a basic flow and alternative flows. The basic flow covers the normal flow of the use case. The alternative flows cover behavior of an optional or exceptional character relative to the normal behavior. For each use case, starting from higher to lower priority, create a sufficient set of scenarios. The more alternative flows, the more comprehensive the modeling and, consequently, the more thorough the testing. The list of scenarios should include the basic flow and at least one alternative flow. Having a detailed description of each scenario, such as the inputs, preconditions and postconditions, can be very helpful in the later stages of test generation. As an example, consider the list of scenarios for adding an athlete to the system in the table below.

STEP 3: For each scenario, take at least one test case and identify the conditions that will make it execute. A matrix format is suitable for clearly documenting the test cases for each scenario. The preconditions and flow of events of each scenario can be examined to identify the input variables and the constraints that bring the system to a specific state, as represented by the postconditions. The test matrix represents a framework of testing without involving any specific data values: V indicates valid, I indicates invalid, and N/A indicates not applicable. This matrix is an intermediate step and provides a good way to document the conditions that are being tested. As an example, consider the matrix shown in the table below.

STEP 4: For each test case, determine the data values. Once all test cases have been identified, they should be completed, reviewed and validated to ensure accuracy and to identify redundant or missing test cases. If this is ensured, the next step is to plug in the data values for each V and I. Table 31 shows sample values.

Before deploying the application, it should be tested using real data to see if other issues, such as performance, crop up. Such testing can range from an informal test drive to planned and systematically executed tests.
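A minimal sketch of the STEP 3 test matrix for the "add an athlete" use case is shown below. The scenario names, input variables, and placements of the V, I and N/A markers are illustrative assumptions; the point is that the matrix documents conditions first, and concrete data values are only plugged into the V and I cells afterwards (STEP 4).

```java
public class AddAthleteTestMatrix {
    static final String[] VARIABLES = { "AthleteName", "BirthDate", "Team" };

    // Each row: test case, then one marker per variable.
    // V = valid, I = invalid, N/A = not applicable.
    static final String[][] MATRIX = {
        { "TC-01 basic flow",         "V", "V", "V"   },
        { "TC-02 invalid birth date", "V", "I", "N/A" },
        { "TC-03 missing name",       "I", "V", "N/A" },
    };

    // STEP 4 plugs a concrete value into every V or I cell; N/A cells need none.
    static int cellsNeedingData(String[] row) {
        int count = 0;
        for (int i = 1; i < row.length; i++) {
            if (!row[i].equals("N/A")) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        for (String[] row : MATRIX) {
            System.out.println(row[0] + " needs " + cellsNeedingData(row) + " data value(s)");
        }
    }
}
```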
Validation Testing
Validation testing starts after the culmination of system testing. It consists of a series of black-box test cases that demonstrate conformity with the requirements. By this time the software is completely packaged as a system, and interface errors among software components have been uncovered and corrected. Validation testing focuses on user-visible actions and user-recognizable output. A Validation Criteria is a document containing all user-visible attributes of the software; it is the basis for validation testing. If the software exhibits these visible attributes, then the software complies with the requirements.

Alpha and Beta Testing
Alpha and beta testing is a series of acceptance tests that enable the customer or end-user to validate all requirements. This testing allows end-users to uncover errors and defects that only they can find. Alpha testing is conducted in a controlled environment, normally at the developer's site. End-users are asked to use the system as if it were being used naturally while the developers record errors and usage problems. Beta testing, on the other hand, is conducted at one or more customer sites, and developers are not present. This time, the end-users are the ones recording all errors and usage problems, and they report them to the developer for fixes.

Mapping the Software Testing Deliverables to the RTM
Once a component is implemented, the test specification and test cases must be developed. We need to keep track of them and, at the same time, assure that each test is traced to a requirement. Additional RTM components are therefore added. The following list gives the recommended elements for software testing in the RTM; these components should be related to a software component defined in the RTM.

Test Metrics
The metrics used for object-oriented design quality mentioned in Design Engineering can also provide an indication of the amount of testing effort needed to test object-oriented software. Additionally, other metrics can be considered for encapsulation and inheritance. Examples of such metrics are listed below.
1. Lack of Cohesion in Methods (LCOM). The higher the value of LCOM, the more states of the class need to be tested.
2. Percent Public and Protected (PAP). This is a measure of the percentage of class attributes that are public or protected. If the value of PAP is high, it increases the likelihood of side effects among classes because it leads to high coupling.
3. Public Access To Data Members (PAD). This is a measure of the number of classes or methods that can access another class's attributes. If the value of PAD is high, it could lead to many side effects.
4. Number of Root Classes (NOR). As the number of root classes increases, testing effort also increases.
5. Number of Children (NOC) and Depth of the Inheritance Tree (DIT). Superclass methods should be re-tested for every subclass.

Chapter 6 Introduction to Software Project Management
Building computer software involves a variety of phases, activities and tasks. Some of these activities are done iteratively in order for the computer software to be completed and delivered to the end-users for their operation. However, before the development can proceed, end-users and developers need estimates on how long the project will proceed and how much it will cost. The discussion in this chapter revolves around project management concepts that will help the group manage class-based software development projects. It is not the intention of this chapter to give a full-blown discussion of software project management; that is reserved for a Software Project Management course.

Software Project Management
Software project management is defined as the process of managing, allocating, and timing resources to develop computer software that meets requirements. It is a systematic integration of technical, human and financial resources to achieve software development goals and objectives, done in an efficient and expedient manner. The figure below shows the project management process. It consists of eight (8) tasks.

Problem Identification
It is a task where a proposed project is identified, defined and justified. The proposed project may be a new product, the implementation of a new process, or an improvement of existing facilities.

Problem Definition
It is a task where the purpose of the project is clarified. The mission statement of the software project is the main output of this task. It should specify how project management may be used to avoid missed deadlines, poor scheduling, inadequate resource allocation, lack of coordination, poor quality and conflicting priorities.

Project Planning
It is a task that defines the project plan. The project plan outlines a series of actions or steps needed to develop a work product or accomplish a goal. It specifies how to initiate a project and execute its objectives. Common components of the project plan include project objectives, project definition, team organization, and performance or validation criteria.

Project Organization
It is a task that specifies how to integrate the functions of the personnel in a project. It is usually done concurrently with project planning. It requires the skill of directing, guiding and supervising the project personnel. It is necessary that clear expectations for each personnel, and how their job functions contribute to the overall goals of the project, are identified.

Resource Allocation
It is the task of allocating resources such as money, people, equipment, tools, facilities, information, skills, etc. to the project in order to achieve software goals and objectives.

Project Scheduling
It is the task of allocating resources so that overall project objectives are achieved within a reasonable time span. It involves the assignment of time periods to specific tasks within the work schedule. It uses techniques such as resource availability analysis (human, material, money) and scheduling techniques (PERT, CPM, Gantt charts).

Tracking, Reporting and Controlling
It is a task that involves checking whether or not project results conform to project plans and performance specifications. It involves the process of measuring the relationship between planned performance and actual performance with respect to project objectives. The variables to be measured, the measurement scales, and the measuring approaches should be clearly specified during the planning activity. Controlling involves identifying and implementing proper actions to correct unacceptable deviations from expected performance. Corrective action may be rescheduling, reallocation of resources, or expedition of task performance.

Project Termination
It is the task that involves submission of the final report, the power-on of new equipment, or the signing of a release order. It may trigger follow-ups or spin-off projects.

Problem Identification and Definition
The development of a project starts with its initiation and definition. The purpose of this task is to make informed decisions when approving, rejecting and prioritizing projects. It requires that a product study request and proposal be given to provide a clear picture of the proposed project and the rationale behind it. A proposed project typically comes from a variety of sources, such as client or user requirements, organization engineering, market research, new technologies, regulatory or legal requirements, and other good informational sources. The main work product is the project proposal. It should address background information, initial project boundaries and scope, technical feasibility, cost and benefit, and risk. It does not include details of the requirements, planning and design.

Creating a project proposal requires a series of steps:
1. Define the need.
2. Identify alternative approaches to meet the need.
3. Recommend at least two alternatives.
4. Obtain approval.
A proposal team can be established to do the project proposal. It should include expertise from the following functional areas:
• Software/Hardware
• Network Support
• Data Processing Centers
• Data Security
• Database Administration
• Clients and Users
• Internal and External Auditors
• Other affected or support groups

All of the project proposal deliverables should be documented in the development plan or project file. The project proposal outline can be seen below.

I. Executive Summary
   A. Scope of the Project
   B. Business and Information System Objectives
      i. Business Objective
         a) Primary Objective
         b) Secondary Objective
      ii. Information System Objectives
         a) Primary Objective
         b) Secondary Objective
II. Success Criteria
III. Alternatives
IV. Schedule
V. Costs
VI. Benefits
VII. Risks
VIII. Recommendation

Executive Summary
It provides an introduction to the project, which includes the background information leading to the project's initiation and a summary of the work to be performed. It answers the questions: why is the project important, what is the business opportunity or need, what is the pertinent background information, who is the client, and who requested the project development effort.

Scope
It defines the boundaries of the project. It describes the functions and processes that are involved in the project. It answers the questions: what will be included in the project and what will not be included in the project.

Business and Information System Objectives
It provides specific targets and defines measurable results. Measurements and results are in terms of time, cost and performance. When defining objectives, the SMART (specific, measurable, attainable, result-oriented and time-oriented) and M&M (measurable and manageable) principles are used. It answers the questions: what are the desired results or benefits of this project, what is the ultimate achievement expected from this proposal, and what strategic, tactical or operational objectives are supported by this project.

Success Criteria
It identifies the high-level deliverables that must be in place in order for the project to be considered completed. It provides the specific measurements used to judge success, and it should support the project objectives. It answers the questions: how will we know when the project is completed, who will judge the success of the project, and how will they judge the project's success.

Alternatives
It defines the alternative solutions to the business problem or need. It may be a "make or buy" alternative, or a technical approach such as traditional coding, business process re-engineering, object-oriented development or integrated CASE. Each alternative should be evaluated to determine its schedule, cost, benefit and risk.

Schedule
It answers the questions: what are the major deliverables, what are the logical relationships between the major deliverables, what assumptions were used to develop the schedule, and what is a reasonable schedule range based on size, effort, dependencies and assumptions.

Costs
It represents the estimated cost of developing the software, expressed as a range of values. It answers the questions: what are the costs associated with the effort required to produce each major deliverable, what are the costs associated with hardware and software, what other non-labor costs need to be considered, and what operating costs need to be considered and identified.

Benefits
Benefits may be short-term or long-term. They may include potential savings, potential revenue, productivity improvements, improved quantity or quality, company or product positioning, integration issues, and other tangible and intangible benefits.

Risks
It identifies the risks of each alternative. Risks should be analyzed based on likelihood of occurrence and impact, and it should be stated how the risks can be mitigated.

Recommendation
The proposal team should work together to select the alternative that best balances project schedule, cost and benefit, and that meets the project objectives in the best way.

Project Organization
The project organization integrates the functions of the personnel in a project. It is necessary that clear expectations for each personnel, and how their job functions contribute to the overall goals and objectives of the project, be communicated for smooth collaboration. This task needs an understanding of the systems development organization structure. The figure below illustrates this structure.
Steering Committee
The Steering Committee decides on business issues that have a great effect on the development of the project. It is composed of top management (the President, CEO, Managing Director or Partner), Line Management (Functional Vice-presidents or Division Heads), Staff Management (Corporate Planning, Comptroller), MIS or EDP Management and, optionally, Consultants. Its functions consist of:
• formalizing the project team composition;
• reviewing and approving project plans and recommendations;
• approving and allocating resources for the project;
• synchronizing the system development effort with the rest of the organization's activities;
• regularly assessing progress of the project;
• resolving conflict situations;
• approving critical system change requests; and
• releasing the operational system to the user.

Project Team
The project team is composed of two sets of groups, namely, developers and end-users. Developers consist of the project manager, analysts, designers, programmers, and quality assurance members. The end-user group consists of EDP or MIS staff, user staff, casuals and consultants. The team is a small group with a leader or manager; its members regularly interact to fulfill a goal, which requires team spirit, contribution and coordination.

The Project Team Structure
The team structure can be organized as democratic decentralized, controlled decentralized or controlled centralized.

Democratic Decentralized (DD)
The Democratic Decentralized (DD) Team Structure has no permanent leader. Rather, there are task coordinators appointed for a short duration, who are replaced later on by others who may need to coordinate other tasks. Decisions are solved by group consensus. The communication is highly horizontal.

Controlled Decentralized (CD)
The Controlled Decentralized (CD) Team Structure has a permanent leader who primarily coordinates tasks, and a secondary leader responsible for subtasks. Decisions are solved by group consensus, but the implementation is done by subgroups. The communication during implementation is horizontal, but control is vertical.

Controlled Centralized (CC)
The Controlled Centralized (CC) Team Structure has a team leader responsible for team coordination and top-level decision-making. The communication between leader and team members is vertical.

The decision on what team structure to employ depends on the project characteristics. Use the table below to determine the team structure to be used for a project.

Project Responsibility Chart
The Project Responsibility Chart is a matrix consisting of columns of individuals or functional departments, and rows of required actions. It supports the need to communicate expectations and responsibilities among the personnel involved in the development of the software, and it keeps problems with communication and neglected obligations from arising. It answers the following questions:
• Who is to do what?
• How long will it take?
• Who is to inform whom of what?
• Whose approval is needed for what?
• Who is responsible for which results?
• What personnel interfaces are required?
• What support is needed from whom and when?
Project Scheduling
Project Scheduling is the task that describes the software development process for a particular project. It enumerates the phases or stages of the project, breaks each into discrete tasks or activities to be done, portrays the interactions among these pieces of work, and estimates the time that each task or activity will take. It is a time-phased sequencing of activities subject to precedence relationships, time constraints, and resource limitations, undertaken to accomplish specific objectives. It is a team process that gives the starting point for formulating a work program, and it is an iterative process that must be flexible enough to accommodate changes.

There are certain basic principles used in project scheduling. Some of them are enumerated below.
1. Compartmentalization. The product and process are decomposed into manageable activities and tasks.
2. Interdependency. The interdependency of each compartmentalized activity or task must be determined. Tasks can occur in sequence or in parallel; they can occur independently.
3. Time Allocation. Each task should be allocated some number of work units (person-days or man-days of effort). Each task must have a start and an end date, subject to interdependency and to the people responsible for the task (part-time or full-time).
4. Effort Validation. No more than the allocated number of people should be assigned at any given time.
5. Defined Responsibility. Each task must have an owner. It should be a team member.
6. Defined Outcome. Each task must have a defined outcome. Work products are combined in deliverables.
7. Defined Milestones. Each task or group of tasks should be associated with a project milestone. Project milestones are reviewed for quality and approved by the project sponsor.

Project Scheduling identifies activities or task sets, milestones and deliverables that must be accomplished to complete a particular project.
• An activity or task set is a collection of software engineering work tasks, milestones and deliverables. It is part of a project that takes place over a period of time; its duration and cost can be measured, and it requires the continuous use of a resource group.
• A milestone is an indication that something has been completed. It references a particular moment of time and signifies a point of accomplishment within the project schedule. It is not a duration of work. Examples of project milestones are user sign-off, approved system design, and system turnover.
• A deliverable is a list of items that a customer expects to see during the development of the project. It can include documents, demonstrations of functions, demonstrations of subsystems, demonstrations of accuracy, and demonstrations of reliability, security or speed.

Project Work Breakdown Structure (WBS)
The Project Work Breakdown Structure (WBS) is a tool that allows project managers to define task sets, milestones and deliverables. It is a systematic analysis approach of depicting a project as a set of discrete pieces of work. The tasks and milestones can be used in a project to track development or maintenance. Two methods are used in defining the work breakdown structure, namely, the Work Breakdown Analysis and the WBS Top-down Decomposition and Bottom-up Integration.

Work Breakdown Analysis
The Work Breakdown Analysis consists of the following steps:
1. Break the project into blocks of related activities.
2. Arrange the blocks into a logical hierarchy.
3. Define the Work Unit or Package. The work unit or package is the responsibility of one person. It should be executed until it finishes without any interruption, and it is written as a verb-noun phrase.

WBS Top-down Decomposition and Bottom-up Integration Process
This method defines two major activities, as specified in its name.
1. Top-down Decomposition
• Identify 4-7 major components of work. Analysis starts by identifying the major phases and the major deliverables that each produces.
• For each activity, break it down to define sub-activities and the work products produced by these sub-activities.
• Continue to subdivide an activity until you get to an activity that cannot be subdivided anymore. This atomic activity is called the work unit or package.
2. Bottom-up Integration
• Brainstorm all possible tasks. Do not worry about the sequence.
• Organize tasks into 4-7 major groupings reflecting how the project will be managed.
• Identify intermediate and final deliverables for each grouping.
• Perform functional decomposition until a task has one owner, clear deliverables, credible estimates, and can be tracked.
• Use a verb-object phrase at the lowest task level.
• Multiple iterations are required.
The recommended number of levels is four (4).

Work Breakdown Schedule Format
There are two common formats that can be used to represent the WBS, namely, the graphical or hierarchy chart and the outline format. An example of a hierarchy chart of the Pre-joint Meeting Task of Requirements Engineering is shown in the figure below.

GANTT Chart
The GANTT Chart is an easy way of scheduling tasks. It is a chart that consists of bars that indicate the length of the tasks. The horizontal dimension indicates the time while the vertical dimension indicates the tasks. Table 35 shows an example of a GANTT Chart.

Project Resource Allocation
Project resource allocation is the process of allocating or assigning money, people, equipment, tools, facilities, information, skills, etc. to the tasks identified in the project. These resources are needed in order to achieve software goals and objectives. There are three major project constraints that are always considered in a software development project:
• Time Constraints
• Resource Constraints
• Performance Constraints

Resource Availability Database
The project manager must use the resource availability database to manage the resources allocated for each task set. The Resource Availability Database specifies what resources are required against the resources that are available. The table below shows an example.
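The interdependency and time-allocation principles described above can be made concrete with a small scheduling sketch. The task names, durations and dependencies below are hypothetical; the earliest-finish rule is simply that a task can only start once every task it depends on has finished.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ScheduleSketch {
    // Earliest finish day of a task: the task starts after all its
    // prerequisites finish (interdependency), then runs for its
    // allocated work units (time allocation).
    public static int earliestFinish(String task,
                                     Map<String, Integer> duration,
                                     Map<String, List<String>> dependsOn) {
        int start = 0;
        for (String d : dependsOn.getOrDefault(task, List.of())) {
            start = Math.max(start, earliestFinish(d, duration, dependsOn));
        }
        return start + duration.get(task);
    }

    public static void main(String[] args) {
        // Hypothetical task set with durations in person-days.
        Map<String, Integer> duration = new HashMap<>();
        duration.put("Analysis", 5);
        duration.put("Design", 7);
        duration.put("Coding", 10);
        duration.put("Testing", 4);

        // Sequential dependencies between the tasks.
        Map<String, List<String>> dependsOn = new HashMap<>();
        dependsOn.put("Design", List.of("Analysis"));
        dependsOn.put("Coding", List.of("Design"));
        dependsOn.put("Testing", List.of("Coding"));

        System.out.println("Earliest finish of Testing: day "
                + earliestFinish("Testing", duration, dependsOn));
    }
}
```

Tasks that share no dependency chain can run in parallel, which is exactly the information a GANTT chart's bars visualize.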
Software Metrics
Software Metrics refers to a variety of measures used in computer software. Measurements can be applied to the software process with the intent of continuing the improvement of the process, and they can be used throughout a software project to assist in estimation, quality control, productivity assessment and project control. Metrics allow software engineers to make decisions on how to progress within the development effort, to help assess the quality of work products, and to assist in tactical decision-making as the project proceeds. Different phases or activities of a development project require different metrics; each chapter of this book provides the metrics that are normally used to help software engineers assess the quality of the software and the process.

Categories of Measurement
There are roughly two categories of measurements:
1. Direct Measures. In the software process, these can be the cost and effort applied. In the software itself, these can be the lines of code (LOC) produced, execution speed, memory size, and defects reported over a period of time.
2. Indirect Measures. These can be product measures such as functionality, quality, complexity, efficiency, reliability, maintainability, etc.

Examples of metrics that are applied to software projects are briefly enumerated below.
1. Size-Oriented Metrics. Used to collect direct measures of software output and quality based on the lines of code (LOC) produced.
2. Function-Oriented Metrics. Used to collect direct measures of software output and quality based on the program's functionality or utility.
3. Human-Oriented Metrics. Provide measures collected about the manner in which people develop computer software, and human perceptions about the effectiveness of tools and methods.
4. Productivity Metrics. Refer to the measure of the output of the software process.
5. Quality Metrics. Refer to the measure that provides an indication of how closely software conforms to the requirements. This is also known as the software's fitness for use.
6. Technical Metrics. Refer to the measure of the characteristics of the product, such as complexity, degree of collaboration, modularity, etc.

Size-oriented Metrics: Lines of Code (LOC)
A common direct measure used for software metrics is the Lines of Code (LOC). It is a size-oriented metric used to estimate effort and cost. It is important that the metrics of other projects are placed within a table to serve as a reference during estimation; the table below shows an example. In this table, KLOC is the lines of code produced (in thousands), effort in person-months specifies the number of months the project was developed, cost is the money spent producing the software, pages is the number of pages of documentation, errors is the number of errors reported after the software was delivered to the customer, and people is the number of people who developed the software.

One can derive metrics from this table. Example computations are shown below.
Productivity = KLOC / person-month
Quality = errors / KLOC
Cost = Cost / KLOC
Documentation = Pages / KLOC
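The size-oriented ratios above can be computed mechanically. A small sketch follows; the 15 KLOC, 8 person-month and P30,000 figures echo the chapter's sample historical database, while the error and page counts are invented for the demonstration.

```java
public class SizeOrientedMetrics {
    // Productivity = KLOC / person-month
    public static double productivity(double kloc, double personMonths) {
        return kloc / personMonths;
    }

    // Quality = errors / KLOC
    public static double quality(int errors, double kloc) {
        return errors / kloc;
    }

    // Cost = cost / KLOC
    public static double costPerKloc(double cost, double kloc) {
        return cost / kloc;
    }

    // Documentation = pages / KLOC
    public static double documentation(int pages, double kloc) {
        return pages / kloc;
    }

    public static void main(String[] args) {
        double kloc = 15.0, effort = 8.0, cost = 30000.0;  // from the sample database
        int errors = 45, pages = 300;                      // hypothetical values

        System.out.println("Productivity  = " + productivity(kloc, effort) + " KLOC/person-month");
        System.out.println("Quality       = " + quality(errors, kloc) + " errors/KLOC");
        System.out.println("Cost          = P" + costPerKloc(cost, kloc) + " per KLOC");
        System.out.println("Documentation = " + documentation(pages, kloc) + " pages/KLOC");
    }
}
```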
Function-Oriented Metrics: Function Points (FP)
Another metric that is commonly used is the Function Point (FP) metric. It focuses on the program's functionality or utility. It uses an empirical relationship based on countable measures of the software's information domain and assessments of software complexity.

Computing the Function Point
STEP 1. Determine the value of the information domain by using the table shown below. Fill up the count column and multiply each value by the chosen weight factor. As an example, if the number of inputs is 3 and the input data are simple, choose the weight factor of 3, i.e., 3 x 3 = 9.
STEP 2. Determine the sum of the complexity adjustment values Fi by answering the following questions, each rated from 0 (not important) to 5 (absolutely essential):
1. Does the system require reliable backup and recovery?
2. Are data communications required?
3. Are there distributed processing functions?
4. Is performance critical?
5. Will the system run in an existing, heavily utilized operational environment?
6. Does the system require on-line data entry?
7. Does the on-line data entry require the input transaction to be built over multiple screens or operations?
8. Are the master files updated on-line?
9. Are the inputs, outputs, files, or inquiries complex?
10. Is the internal processing complex?
11. Is the code designed to be reusable?
12. Are conversion and installation included in the design?
13. Is the system designed for multiple installations in different organizations?
14. Is the application designed to facilitate change and ease of use by the user?
STEP 3. Compute the function point using the formula indicated below.
FunctionPoint = Count-Total x [0.65 + 0.01 x sum(Fi)]

One can also derive metrics similar to the size-oriented metrics:
Productivity = FP / person-month
Quality = Defects / FP
Cost = Cost / FP
Documentation = Pages of Documentation / FP

Project Estimations
Project estimations are necessary to help software engineers determine the effort and cost required in building the software. Doing estimation requires experience, access to good historical data, and a commitment to quantitative measures when qualitative data are all that exist. One can use LOC and FP to provide cost and effort estimates: they provide estimation variables to "size" each element of the software, and they provide baseline metrics collected from past projects that are used in conjunction with estimation variables to develop cost and effort projections. As an example, consider the project estimates for the Club Membership Maintenance System.
The FP is computed as follows:

FP = 59 x [0.65 + 0.01 x 31] = 56.64 FP

The project heavily uses JAVA. The estimated LOC is computed as follows:

LOC = 56.64 x 63 = 3568.32 LOC

Derived Estimates: Assume that the Club Membership Maintenance is similar to Project Four.

Cost Estimates are computed as follows:

Cost of producing a single LOC (from the database): Cost/KLOC = P30,000 / 15,000 LOC = P2.00/LOC
Cost of the project = 3568.32 x 2.00 = P7136.64

Effort Estimates are computed as follows:

Number of LOC produced in a month (from the database): KLOC/Effort = 15,000 LOC / 8 months = 1875 LOC/month
Effort of the project = 3568.32 / 1875 = 1.9 or 2 months

Risk Management

Risk Management consists of steps that help the software engineers to understand and manage uncertainty within the software development process. A risk is considered a potential problem, i.e., it may happen or it may not. It is better to identify it, assess its probability of occurring, estimate its impact and establish a contingency plan if it does happen. Many things can happen during the progress of the development of the software, and understanding risks allows the creation of contingency plans with minimal impact to the software development project. It is for this reason that understanding risks and having measures that avoid them influence how well the project is managed.

There are different categories of risks that the software engineer needs to understand.

1. Project Risks. These are risks associated with the project plan. If such a risk becomes a reality, chances are there would be delays and budgetary and resource problems.
2. Technical Risks. These are risks associated with the implementation aspects of the software. If such a risk becomes a reality, it would be very difficult for us to implement the project. This may include design and interface problems, component problems, and database and infrastructure problems.
3. Business Risks. These are risks associated with the viability of the software. If such a risk becomes a reality, the software will become a problem in a business setting. This may include building a software that no one needs, building a software that cannot be marketed, and losing budgetary and personnel commitment.
4. Known Risks. These are risks that can be detected with careful evaluation of the project plan, business environment analysis, and other good informational sources.
5. Predictable Risks. These are risks that came from past similar projects that are expected to happen with the current one.
6. Unpredictable Risks. These are risks that are not detected.

Two important characteristics of risk are always analyzed and identified in risk management. Risk always involves uncertainty: risks may or may not happen, and they should be considered as potential problems that may arise. And if a risk becomes a reality, bad consequences and losses will occur. The level of uncertainty and degree of loss associated with each risk are identified and documented. A risk table documents each risk with the following components:

1. Risk ID. It is a unique identification number that is assigned to the risk.
2. Description. It is a brief description of the risk. The first column lists down all possible risks, including those that are most likely not to happen.
3. Categorization. Each risk is then categorized; categorization identifies the type of risk.
4. Probability. The probability value of the risk is identified as a percentage value of it occurring.
5. Impact. Next, the impact of the risk when it occurs is identified. It is assessed using the following values:
   1 - Catastrophic. It would result in failure to meet the requirements and non-acceptance of the software.
   2 - Critical. It would result in failure to meet the requirements, with system performance degraded to the point that mission success is questionable.
   3 - Marginal. It would result in failure to meet the requirements that would result in degradation of a secondary mission.
   4 - Negligible. It would result in inconvenience or usage problems.
6. RMMM. The last column contains the RMMM (Risk Mitigation, Monitoring and Management) plan for each risk. It contains the following components:
   • Risk ID. It is a unique identification number that is assigned to the risk.
   • Description. It is a brief description of the risk.
   • Risk Context. It defines the condition by which the risk may be realized.
   • Risk Mitigation and Monitoring. It defines the steps that need to be done in order to mitigate and monitor the risk.
   • Contingency Plan. It defines the steps that need to be done when the risk is realized.

Risk Identification Checklist

One can use the following to determine risks within the software development project. They are briefly enumerated below.

1. Product Size. These are risks that are related to the overall size of the software that needs to be developed.
2. Business Impact. These are risks that are related to the constraints imposed by management or the market.
3. Customer Characteristics. These are risks that are related to end-users' requirements and the ability of the development team to understand and communicate with them.
4. Process Definition. These are risks that are related to the software process and how the people involved in the development effort are organized.
5. Development Environment. These are risks that are related to the availability and quality of the tools that are used to develop the software.
6. Technology. These are risks that are related to the complexity of the system and the degree of newness that is part of the software.
7. Staff Size and Experience. These are risks that are related to the overall technical and project experience of the software engineers.

Product Size Risks

The product size is directly proportional to the project risk. As software engineers, we take a look at the following:
• How large is the estimated size of the software in terms of lines-of-code and function points?
• How many is the number of programs, files and transactions?
• What is the database size?
• How many is the estimated number of users?
• How many components are reused?
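The club membership estimation walkthrough earlier can be replayed with a few lines. The figures (P30,000 for 15,000 LOC, 8 months, and 63 LOC per FP for Java) are taken from the example; treat them as illustrative historical data:

```python
# Replays the club membership maintenance estimation example.
fp = 59 * (0.65 + 0.01 * 31)      # STEP 3 formula: 56.64 FP
loc = fp * 63                     # Java averages about 63 LOC per FP
cost_per_loc = 30000 / 15000      # P2.00 per LOC from the historical database
cost = loc * cost_per_loc         # about P7136.64
loc_per_month = 15000 / 8         # 1875 LOC per month from the database
effort = loc / loc_per_month      # about 1.9, rounded up to 2 months
print(round(fp, 2), round(loc, 2), round(cost, 2), round(effort, 1))
```

Running it reproduces the 56.64 FP, 3568.32 LOC, P7136.64 cost and roughly 2-month effort figures from the text.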
Business Impact Risks
• What is the business value of the software to the company or organization?
• What is the visibility of the software to senior management?
• How reasonable is the delivery date?
• How many customers will use the software?
• How many products or systems interoperate with the software?
• How large is the project documentation?
• Does the product documentation have quality?
• What are the governmental constraints applied to the development of the project?
• What is the cost associated with late delivery?
• What is the cost associated with a defective product?

Customer Related Risks
• What is the customer working relationship?
• What is the level of the customer's ability to state their requirements?
• Are customers willing to spend time for requirements gathering?
• Are customers willing to participate in formal technical reviews?
• How knowledgeable are the customers in terms of the technological aspects of the software?
• Do customers understand the software development process and their role?

Process Risks - Process Issues
• Does senior management show full support or commitment to the software development effort?
• Does the organization have a developed and written description of the software process to be used on this project?
• Do the staff members know the software development process?
• Was the software process used before in other projects with the same group of people?
• Has your organization developed or acquired a series of software engineering training courses for managers and technical staff?
• Are there any published software engineering standards that will be used by software developer and software manager?
• Are there any developed document outlines and examples of deliverables?
• Are formal technical reviews of the requirements specification, design and code done regularly? Do they have defined procedures?
• Are formal technical reviews of test procedures and test cases done regularly? Do they have defined procedures?
• Are the results of each formal technical review documented, including errors found and resources used? Do they have defined procedures?
• Do they have procedures that ensure that work conducted on a project conforms to software engineering standards?
• Do they use configuration management to maintain consistency among system/software requirements, design, code, and test cases?
• Do they have documents for the statement of work, software requirements specification, and software development plan for each subcontract?

Process Risks - Technical Issues
• Are there any mechanisms that facilitate clear communication between customer and developer?
• Are there any specific methods used for software analysis?
• Are there any specific methods used for component design?
• Are there any specific methods for data and architectural design?
• Are codes written in a high level language? How much is the percentage?
• Are documentation and coding standards followed?
• Are there any specific methods for test case design?
• Are software tools used to support planning and tracking activities?
• Are configuration management software tools used to control and track change activity throughout the software process?
• Are CASE tools used?
• Are quality metrics collected for all software projects?
• Are there any productivity metrics collected for previous software projects?

Technology Risks
• Is the organization new to the technology?
• Do we need to create new algorithms, input or output technology?
• Does the software need to use new or unsupported hardware?
• Does the software need to use a database system not tested in the application area?
• Do we need to define a specialized user interface?
• Do we need to use new analysis, design, or testing methods?
• Do we need to use unconventional software development methods, such as formal methods, AI-based approaches, and artificial neural networks?

Development Environment Risks
• Are there any software project management tools employed?
• Are there any tools for analysis and design?
• Are there any testing tools?
• Are there any software configuration management tools?
• Are all software tools integrated with one another?
• Do we need to train the members of the development team to use the tools?
• Are there any on-line help and support groups?

Risks Associated with Staff Size and Experience
• Is the staff turnover high?
• Do we have skilled team members?
• Is there enough number of people in the development team?
• Will the people be required to work overtime?
• Will there be any problems with the communications?

Software Configuration Management

Software Configuration Management is an umbrella activity that supports the software throughout its life cycle. It can occur any time as work products are developed and maintained. It focuses on managing change within the software development process. It consists of identifying change, controlling change, ensuring that change is being properly implemented, and reporting the change to others who may be affected by it. It manages items called software configuration items (SCIs) or units, which may be computer programs (both source-level and executable forms), documents (both technical or user) and data. They are characterized based on the software engineering phase or activities in which they are produced. The possible work products that can serve as software configuration items or units are listed below.
1. Project Planning Phase
   a) Software Project Plan
2. Requirements Engineering
   a) System Specifications
   b) Requirements Model
   c) Analysis Model
   d) Screen Prototypes
3. Design Engineering
   a) Database Design
   b) Dialogue and Screen Design
   c) Report Design
   d) Forms Design
   e) Software Architecture
   f) Component Design
   g) Deployment Design
4. Implementation
   a) Source Code Listing
   b) Executable or Binary Code
   c) Linked Modules
5. Testing
   a) Test Specifications
   b) Test Cases
6. Manuals
   a) User Manuals
   b) Technical Manuals
   c) Installation Manuals
7. Standards and Procedures for Software Engineering

Baseline

Managing SCIs uses the concept of baseline. A baseline is a software configuration management concept that helps software engineers control change without seriously impeding justifiable change. It is a specification or product that has been formally reviewed and agreed upon to be the basis for further development. A software engineering task develops a work product that becomes a software configuration item or unit. It goes through a formal technical review to detect faults and errors. Once the work product passes or is acceptable, it becomes a baseline and is saved in a project database. It can only be changed through a formal change procedure. Figure 7.4 shows the dynamics of managing the baseline of SCIs.

Sometimes, this work product is extracted from the project database as input to some other software engineering task. During the execution of the SE task, it is possible that the SCI has been modified. In this case, the modified SCI will pass through another formal technical review. If it passes or is acceptable, it is checked in after the review, subject to version control, and the modified version becomes the new baseline. Notice that "checking out" a SCI involves controls and procedures.

Software Configuration Tasks

Software configuration management has the responsibility to control change. It identifies software configuration items and the various versions of the software. It also audits the software configuration items to ensure that they have been properly developed and that the reporting of changes is applied to the configuration. Five tasks are done, namely, change identification, version control, change control, configuration audit and status reporting.

Change Identification. It is the process of identifying items that change throughout the software development process. It specifically identifies software configuration items (SCIs) and uniquely gives each an identifier and name.

Version Control. It is the procedures and tools to manage the different versions of configuration objects that are created during the software development process. A version is a collection of SCIs that can be used almost independently.

Change Control. It consists of human procedures and automated tools to provide a mechanism for the control of change. It has the following subset of tasks:
• A change request is submitted and evaluated to assess technical merit, potential side effects, and overall impact on other SCIs.
• A change report is created to identify the final decision on the status and priority of the change.
• The Engineering Change Order (ECO) is created that describes the change to be made, the constraints, and the criteria for review and audit.
• The SCI is checked out for modification.
• The SCI is checked in after the review and subject to version control.

Configuration Audit. It is the process of assessing an SCI for characteristics that are generally not considered during a formal technical review. It answers the following questions:
• Was the change specified in the ECO done? Are there any additional modifications made?
• Did the formal technical review assess the technical correctness of the work product?
• Were the software engineering procedures followed?
• Were the SCM procedures followed?
• Have all appropriate SCIs been updated?

Status Reporting. It is the process of informing people that are affected by the change. It answers the following questions:
• What happened?
• Who did it?
• When did it happen?
• What else will be affected?

Chapter 7 Software Development Tools

The implementation of software requires a variety of software tools. Ensuring that these are available in compatible versions and with sufficient licences is part of project management.
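The check-out/check-in cycle described above can be sketched as a toy model. This is purely illustrative (the class and method names are invented), and real SCM tools are far more involved:

```python
# Toy model of the baseline check-out/check-in cycle described above.
# Purely illustrative; real SCM tools (e.g. CVS) are far more involved.

class ConfigurationItem:
    def __init__(self, name, content):
        self.name = name
        self.versions = [content]   # version history; last entry is the baseline
        self.checked_out = False

    def check_out(self):
        # "checking out" an SCI involves controls and procedures
        assert not self.checked_out, "already checked out"
        self.checked_out = True
        return self.versions[-1]

    def check_in(self, modified, review_passed):
        # a modified SCI passes through another formal technical review;
        # if accepted, the modified version becomes the new baseline
        assert self.checked_out, "must be checked out first"
        if review_passed:
            self.versions.append(modified)
        self.checked_out = False

sci = ConfigurationItem("Software Project Plan", "v1 draft")
draft = sci.check_out()
sci.check_in(draft + " + schedule", review_passed=True)
print(sci.versions[-1])   # the new baseline: "v1 draft + schedule"
```

The point of the sketch is that a baseline can only move forward through the formal review step; direct edits to the stored versions are never made.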
Many of these tools have been designed and implemented to make the work of the software engineer easier and organized. In this chapter, software development tools are presented.

Integrated Development Environment (IDE)

An Integrated Development Environment (IDE) incorporates a multi-window editor, mechanisms for managing the files that make up the project, links to the compiler so that code can be compiled from within the IDE, and a debugger to help the programmer step through the code to find errors. NetBeans is an example of an IDE. Some IDEs have visual editors.

Visual Editors

Creating graphical user interfaces can be difficult when done manually. Visual editors allow the creation of these interfaces by simply dragging and dropping the components onto the forms and setting the parameters that control their appearance in a properties window. NetBeans supports this kind of visual editing.

Compilers, Interpreters and Run-time Support

All source code must be translated into an executable code. In Java, the source code is compiled into an intermediary bytecode format that is interpreted by the Java Virtual Machine (JVM). This environment is supported by most browsers, which means that you can run Java programs using simply a browser. If support for client-server mode of operation is needed, such as when ODBC or JDBC is used, additional special libraries or Java packages (a specific DriverManager of a vendor) should be installed or available during run-time, and separate client and server components should be installed.

CASE Tools

Computer-aided Software Engineering (CASE) tools are software that allow the software analysts to create models of the system and software. The main feature of such software is the project repository, which allows the maintenance of the links among the textual and structured descriptions of every class, attribute, operation, state, etc., to the diagrammatic representation. There are now several tools that support the Unified Modeling Language (UML). Other CASE tools support code generators; there are tools that generate Java, Visual Basic, C++ and SQL codes. This ensures that the implementation reflects the design diagrams. Examples of CASE tools are ArgoUML, Rational Rose, UMLet, and Dia.

Configuration Management

Configuration Management tools support the tracking of changes and dependencies between components and versions of source codes and resource files that are used to produce a particular release of a software package. This type of software development tool enables a software engineer to manage source codes in a single interface, and it supports the Software Configuration Management process. An example of this tool is the Concurrent Versions System (CVS).

Database Management Tools

A considerable amount of software is needed for a large-scale database management system. ObjectStore PSE Pro, for example, includes a post-processor that is used to process Java class files to make them persistent.

Conversion Tools

This type of tool is needed when an existing system needs to transfer its data to another system. Nowadays, new software replaces existing computerized systems. Packages like Data Junction provide automated tools to extract data from a wide range of systems and format it for a new system.

Document Generator

These are tools that allow the generation of technical or user documentation. Java has javadoc, which processes Java source files and generates HTML documentation in the style of the API documentation from special comments with embedded tags in the source code.

Software Project Management

Software project management tools support project management. They allow the project manager to set the project schedule, manage resources such as time and people, and track progress. MS Project is an example of this tool.

Installation Tools

These tools allow the creation of installation executables that, when run, automate the creation of directories, the extraction of files from archives and the setting up of parameters or registry entries.

Testing Tools

These are tools that identify test methods and generate test cases. Automated testing tools are available for some environments, such as NetBeans' support for JUnit. What is more likely is that programmers will develop their own tools to provide harnesses within which to test classes and subsystems.

References
• Pressman, Roger S., "Software Engineering – A Practitioner's Approach"
• JEDI, "Software Engineering", 2006
Hello asyncio/aiohttp

Async programming is not easy. It's not easy because using callbacks and thinking in terms of events and event handlers requires more effort than usual synchronous programming. But it is also difficult because asyncio is still relatively new and there are few blog posts and tutorials about it. Official docs are very terse and contain only basic examples. There are some Stack Overflow questions but not that many, only 410 as of time of writing (compare with 2 585 questions tagged "twisted"). There are couple of nice blog posts and articles about asyncio over there such as this, that, that or perhaps even this or this.

To make it easier let's start with the basics – simple HTTP hello world – just making GET and fetching one single HTTP response.

In synchronous world you just do:

    import requests

    def hello():
        return requests.get("")

    print(hello())

How does that look in aiohttp?

    #!/usr/local/bin/python3.5
    import asyncio
    from aiohttp import ClientSession

    async def hello():
        async with ClientSession() as session:
            async with session.get("") as response:
                response = await response.read()
                print(response)

    loop = asyncio.get_event_loop()
    loop.run_until_complete(hello())

hmm looks like I had to write lots of code for such a basic task… There is "async def" and "async with" and two "awaits" here. It seems really confusing at first sight, let's try to explain it then.

You make your function asynchronous by using async keyword before function definition and using await keyword. There are actually two asynchronous operations that our hello() function performs. First it fetches response asynchronously, then it reads response body in asynchronous manner.

Aiohttp recommends to use ClientSession as primary interface to make requests. ClientSession allows you to store cookies between requests and keeps objects that are common for all requests (event loop, connection and other things).
Session needs to be closed after using it, and closing session is another asynchronous operation, this is why you need async with every time you deal with sessions.

After you open client session you can use it to make requests. This is where another asynchronous operation starts, downloading request. Just as in case of client sessions responses must be closed explicitly, and context manager's with statement ensures it will be closed properly in all circumstances.

To start your program you need to run it in event loop, so you need to create instance of asyncio loop and put task into this loop. It all does sound bit difficult but it's not that complex and looks logical if you spend some time trying to understand it.

Fetch multiple urls

Now let's try to do something more interesting, fetching multiple urls one after another. With synchronous code you would do just:

    for url in urls:
        print(requests.get(url).text)

This is really quick and easy, async will not be that easy, so you should always consider if something more complex is actually necessary for your needs. If your app works nice with synchronous code maybe there is no need to bother with async code? If you do need to bother with async code here's how you do that. Our hello() async function stays the same but we need to wrap it in asyncio Future object and pass whole lists of Future objects as tasks to be executed in the loop.

    loop = asyncio.get_event_loop()

    tasks = []
    # I'm using test server localhost, but you can use any url
    url = "{}"
    for i in range(5):
        task = asyncio.ensure_future(hello(url.format(i)))
        tasks.append(task)

    loop.run_until_complete(asyncio.wait(tasks))

Now let's say we want to collect all responses in one list and do some postprocessing on them. At the moment we're not keeping response body anywhere, we just print it, let's return this response, keep it in list, and print all responses at the end.
To collect bunch of responses you probably need to write something along the lines of:

    #!/usr/local/bin/python3.5
    import asyncio
    from aiohttp import ClientSession

    async def fetch(url):
        async with ClientSession() as session:
            async with session.get(url) as response:
                return await response.read()

    async def run(loop, r):
        url = "{}"
        tasks = []
        for i in range(r):
            task = asyncio.ensure_future(fetch(url.format(i)))
            tasks.append(task)

        responses = await asyncio.gather(*tasks)
        # you now have all response bodies in this variable
        print(responses)

    def print_responses(result):
        print(result)

    loop = asyncio.get_event_loop()
    future = asyncio.ensure_future(run(loop, 4))
    loop.run_until_complete(future)

Notice usage of asyncio.gather(), this collects bunch of Future objects in one place and waits for all of them to finish.

Common gotchas

Now let's simulate real process of learning and let's make mistake in above script and try to debug it, this should be really helpful for demonstration purposes.

This is how sample broken async function looks like:

    # WARNING! BROKEN CODE DO NOT COPY PASTE
    async def fetch(url):
        async with ClientSession() as session:
            async with session.get(url) as response:
                return response.read()

This code is broken, but it's not that easy to figure out why if you don't know much about asyncio. Even if you know Python well but you don't know asyncio or aiohttp well you'll be in trouble to figure out what happens.

What is output of above function? It produces following output:

    pawel@pawel-VPCEH390X ~/p/l/benchmarker> ./bench.py
    [<generator object ClientResponse.read at 0x7fa68d465728>, <generator object ClientResponse.read at 0x7fa68cdd9468>, <generator object ClientResponse.read at 0x7fa68d4656d0>, <generator object ClientResponse.read at 0x7fa68cdd9af0>]

What happens here? You expected to get response objects after all processing is done, but here you actually get bunch of generators, why is that?
It happens because as I’ve mentioned earlier response.read() is async operation, this means that it does not return result immediately, it just returns generator. This generator still needs to be called and executed, and this does not happen by default, yield from in Python 3.4 and await in Python 3.5 were added exactly for this purpose: to actually iterate over generator function. Fix to above error is just adding await before response.read() . # async operation must be preceded by await return await response.read() # NOT: return response.read() Let’s break our code in some other way. # WARNING! BROKEN CODE DO NOT COPY PASTE async def run(loop, r): url = "{}" tasks = [] for i in range(r): task = asyncio.ensure_future(fetch(url.format(i))) tasks.append(task) responses = asyncio.gather(*tasks) print(responses) Again above code is broken but it’s not easy to figure out why if you’re just learning asyncio. Above produces following output: pawel@pawel-VPCEH390X ~/p/l/benchmarker> ./bench.py <_GatheringFuture pending> Task was destroyed but it is pending! task: <Task pending coro=<fetch() running at ./bench.py:7> wait_for=<Future pending cb=[Task._wakeup()]> cb=[gather.<locals>._done_callback(0)() at /usr/local/lib/python3.5/asyncio/tasks.py:602]> Task was destroyed but it is pending! task: <Task pending coro=<fetch() running at ./bench.py:7> wait_for=<Future pending cb=[Task._wakeup()]> cb=[gather.<locals>._done_callback(1)() at /usr/local/lib/python3.5/asyncio/tasks.py:602]> Task was destroyed but it is pending! task: <Task pending coro=<fetch() running at ./bench.py:7> wait_for=<Future pending cb=[Task._wakeup()]> cb=[gather.<locals>._done_callback(2)() at /usr/local/lib/python3.5/asyncio/tasks.py:602]> Task was destroyed but it is pending! task: <Task pending coro=<fetch() running at ./bench.py:7> wait_for=<Future pending cb=[Task._wakeup()]> cb=[gather.<locals>._done_callback(3)() at /usr/local/lib/python3.5/asyncio/tasks.py:602]> What happens here? 
If you examine your localhost logs you may see that requests are not reaching your server at all. Clearly no requests are performed. Print statement prints that responses variable contains <_GatheringFuture pending> object, and later it alerts that pending tasks were destroyed. Why is it happening? Again you forgot about await, the faulty line is this:

    responses = asyncio.gather(*tasks)

it should be:

    responses = await asyncio.gather(*tasks)

I guess main lesson from those mistakes is: always remember about using "await" if you're actually awaiting something.

Sync vs Async

Finally time for some fun. Let's check if async is really worth the hassle. What's the difference in efficiency between asynchronous client and blocking client? How many requests per minute can I send with my async client?

With these questions in mind I set up simple (async) aiohttp server. My server is going to read full html text of Frankenstein by Mary Shelley. It will add random delays between responses. Some responses will have zero delay, and some will have maximum of 3 seconds delay. This should resemble real applications, few apps respond to all requests with same latency, usually latency differs from response to response.
Server code looks like this:

    #!/usr/local/bin/python3.5
    import asyncio
    from datetime import datetime
    from aiohttp import web
    import random

    # set seed to ensure async and sync client get same distribution of delay values
    # and tests are fair
    random.seed(1)

    async def hello(request):
        name = request.match_info.get("name", "foo")
        n = datetime.now().isoformat()
        delay = random.randint(0, 3)
        await asyncio.sleep(delay)
        headers = {"content_type": "text/html", "delay": str(delay)}
        with open("frank.html", "rb") as html_body:
            print("{}: {} delay: {}".format(n, request.path, delay))
            response = web.Response(body=html_body.read(), headers=headers)
        return response

    app = web.Application()
    app.router.add_route("GET", "/{name}", hello)
    web.run_app(app)

Synchronous client looks like this:

    import requests

    r = 100
    url = "{}"
    for i in range(r):
        res = requests.get(url.format(i))
        delay = res.headers.get("DELAY")
        d = res.headers.get("DATE")
        print("{}:{} delay {}".format(d, res.url, delay))

How long will it take to run this? On my machine running above synchronous client took 2:45.54 minutes.

My async code looks just like above code samples, you can see it in full here. How long will async client take? On my machine it took 0:03.48 seconds.

It is interesting that it took exactly as long as longest delay from my server. If you look into messages printed by client script you can see how great async HTTP client is. Some responses had 0 delay but others got 3 seconds delay. In synchronous client they would be blocking and waiting, your machine would simply stay idle for this time. Async client does not waste time, when something is delayed it simply does something else, issues other requests or processes all other responses. You can see this clearly in logs, first there are responses with 0 delay, then after they arrived you can see responses with 1 second delay, and so on until most delayed responses arrive.
Testing the limits

Now that we know our async client is better let's try to test its limits and try to crash our localhost. I'm going to start with sending 1k async requests. I'm curious how many requests my client can handle.

    > time python3 bench.py
    2.68user 0.24system 0:07.14elapsed 40%CPU (0avgtext+0avgdata 53704maxresident)k
    0inputs+0outputs (0major+14156minor)pagefaults 0swaps

So 1k requests take 7 seconds, pretty nice! How about 10k? Trying to make 10k requests unfortunately fails…

    responses are <_GatheringFuture finished exception=ClientOSError(24, 'Cannot connect to host localhost:8080 ssl:False [Can not connect to localhost:8080 [Too many open files]]')>
    Traceback (most recent call last):
      File "/home/pawel/.local/lib/python3.5/site-packages/aiohttp/connector.py", line 581, in _create_connection
      File "/usr/local/lib/python3.5/asyncio/base_events.py", line 651, in create_connection
      File "/usr/local/lib/python3.5/asyncio/base_events.py", line 618, in create_connection
      File "/usr/local/lib/python3.5/socket.py", line 134, in __init__
    OSError: [Errno 24] Too many open files

That's bad, seems like I stumbled across 10k connections problem. It says "too many open files", and probably refers to number of open sockets. Why does it call them files? Sockets are just file descriptors, operating systems limit number of open sockets allowed. How many files are too many? I checked with python resource module and it seems like it's around 1024.

How can we bypass this? Primitive way is just increasing limit of open files. But this is probably not the good way to go. Much better way is just adding some synchronization in your client limiting number of concurrent requests it can process. I'm going to do this by adding asyncio.Semaphore() with max tasks of 1000.
The modified run() function now looks like this:

    async def run(loop, r):
        url = "http://localhost:8080/{}"
        tasks = []
        # allow at most 1000 requests in flight at once
        sem = asyncio.Semaphore(1000)
        for i in range(r):
            task = asyncio.ensure_future(fetch(url.format(i)))
            await sem.acquire()
            task.add_done_callback(lambda t: sem.release())
            tasks.append(task)
        responses = asyncio.gather(*tasks)
        responses.add_done_callback(print_responses)
        await responses

At this point I can process 10k urls. It takes 23 seconds, pretty nice! How about 100,000? This really makes my computer work hard, but surprisingly it works OK. The server turns out to be surprisingly stable, although you can see that RAM usage gets pretty high at this point and CPU usage is around 100% all the time. What I find interesting is that my server uses significantly less CPU than the client. Here's a snapshot of Linux ps output:

    pawel@pawel-VPCEH390X ~/p/l/benchmarker> ps ua | grep python
    USER   PID %CPU %MEM    VSZ    RSS TTY   STAT START TIME COMMAND
    pawel 2447 56.3  1.0 216124  64976 pts/9 Sl+  21:26 1:27 /usr/local/bin/python3.5 ./test_server.py
    pawel 2527  101  3.5 674732 212076 pts/0 Rl+  21:26 2:30 /usr/local/bin/python3.5 ./bench.py

Overall it took around 5 minutes before it crashed for some reason. It generated around 100k lines of output, so it's not that easy to pinpoint a traceback. It seems like some responses are not closed; is it because of some error from my server, or something in the client? After scrolling for a couple of seconds I found this exception in the client logs:

    File "/usr/local/lib/python3.5/asyncio/futures.py", line 387, in __iter__
        return self.result()  # May raise too.
    File "/usr/local/lib/python3.5/asyncio/futures.py", line 274, in result
        raise self._exception
    File "/usr/local/lib/python3.5/asyncio/selector_events.py", line 411, in _sock_connect
        sock.connect(address)
    OSError: [Errno 99] Cannot assign requested address

My hypothesis is that the test server went down for a split second, and this caused some client errors that were printed at the end.
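Two details of this experiment can be poked at with a stdlib-only sketch (the names, limit, and delays here are mine, not from bench.py): the file descriptor limit the 10k run crashed into, and the way the semaphore caps how many stubbed fetches run at once.

```python
import asyncio
import resource

# The "too many open files" ceiling: RLIMIT_NOFILE is the per-process
# limit on file descriptors, which sockets count against (Unix only).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("open file soft limit:", soft)

LIMIT = 10
in_flight = 0
peak = 0

async def fetch(i):
    # Stub request standing in for an aiohttp call; it just records
    # how many coroutines are running at once.
    global in_flight, peak
    in_flight += 1
    peak = max(peak, in_flight)
    await asyncio.sleep(0.01)
    in_flight -= 1
    return i

async def run(r):
    sem = asyncio.Semaphore(LIMIT)
    tasks = []
    for i in range(r):
        # wait for a free slot before scheduling the next request
        await sem.acquire()
        task = asyncio.ensure_future(fetch(i))
        task.add_done_callback(lambda t: sem.release())
        tasks.append(task)
    return await asyncio.gather(*tasks)

results = asyncio.run(run(100))
print("peak concurrency:", peak)
```

Raising the limit with ulimit -n would also work, but bounding concurrency keeps the client polite regardless of system configuration.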
Overall it's really not bad: 5 minutes for 100,000 requests makes around 20k requests per minute. Pretty powerful, if you ask me.

Finally, I'm going to try 1 million requests. I really hope my laptop is not going to explode while testing that. For this amount of requests I reduced the server's delays to range between 0 and 1.

1,000,000 requests finished in 52 minutes:

    1913.06user 1196.09system 52:06.87elapsed 99%CPU (0avgtext+0avgdata 5194260maxresident)k
    265144inputs+0outputs (18692major+2528207minor)pagefaults 0swaps

So my client made around 19,230 requests per minute. Not bad, is it? Note that the capabilities of my client were limited by the server responding with delays of 0 or 1 in this scenario; it also seems my test server crashed silently a couple of times.

Epilogue

You can see that asynchronous HTTP clients can be pretty powerful. Performing 1 million requests from an async client is not difficult, and the client performs really well in comparison to synchronous code. I wonder how it compares to other languages and async frameworks? Perhaps in some future post I could compare Twisted's treq with aiohttp. There is also the question of how many concurrent requests can be issued by async libraries in other languages. For example, what would the results of these benchmarks be for some Java async frameworks? Or C++ frameworks? Or Rust?
http://www.shellsec.com/news/13543.html
The combination of statements

    char* a[500];
    // ...
    a[m][j]

is strange, isn't it?

Actually, I used to work with Turbo C++ and it used to work there, and I am also new to CodeChef. But yes, it seems strange. It's working now, but with time limit exceeded, so I'm trying to develop better logic.

You were kind of lucky; your code is not working on ideone. I had to change

    Scanner s = new Scanner(System.in);
    String str = s.nextLine();

to

    //Scanner s = new Scanner(System.in);
    String str = rea.nextLine();

and the code returns 0 4 0 for the input from the problem statement. Can you fork it and fix it on ideone?

Your code is not working on ideone, can you fix it? I used your last submission in practice...

Try this test case: <<>. The answer should be 0, but your code gives 2.

@rishab why so? Because in <<> the last two brackets are matching, which is also the case with the official test case of <>>>, which gives 2 as output. Please clarify.

You should output the length of the longest valid prefix.

Thank you, got it.

"Ad hoc" means something that is not known in advance.

Answer for ><<>> should be 0. Why so? I can see that I have a valid string of length 4 here.

Read the problem statement carefully: it asks for the length of the longest valid prefix, so the expression must start with '<'. Hope it helps!

They have asked for a "prefix". Read the question again. Even the first '>' gives answer 0, as there is no valid prefix, and there is no need to check further.

This is my code; please let me know where the mistake is. I tried various test cases and they display correct output. I used the pair-parentheses algorithm to solve it and then multiplied the count by 2 to get the output. Can someone please point out the mistake here? I do not understand why this is giving WA. I am getting the wrong answer; please explain to me why.
    #include <bits/stdc++.h>
    #define fast ios_base::sync_with_stdio(false); cin.tie(NULL);
    using namespace std;

    int main(void) {
        stack<char> mystack;
        int t;
        int i = 0, ans = 0;
        string ch;
        int l;
        cin >> t;
        while (t--) {
            ans = 0;
            i = 0;
            cin >> ch;
            l = ch.size();
            while (l--) {
                if (ch[i] == '<') {
                    mystack.push(ch[i]);
                } else if (!mystack.empty() and ch[i] == '>') {
                    mystack.pop();
                    ans += 2;
                } else if (mystack.empty() and ch[i] == '>') {
                    break;
                }
                i++;
            }
            if (mystack.empty()) {
                cout << ans << endl;
            } else {
                cout << '0' << endl;
            }
        }
    }

I am also trying to figure out the same thing; if you get the answer, please tell me too.

Can someone please tell me why I am getting WA? Which test case am I missing? How is my solution wrong?

    #include <iostream>
    #include <stack>
    #include <vector>
    using namespace std;

    int main()
    {
        int t;
        vector<int> ans;
        cin >> t;
        while (t--)
        {
            stack<char> expr;
            string str;
            cin >> str;
            int sum = 0;
            for (int i = 0; i < str.length(); i++)
            {
                if (str[i] == '<')
                    expr.push(str[i]);
                else if (!expr.empty())
                {
                    expr.pop();
                    sum += 2;
                }
                else
                    break;
            }
            ans.push_back(sum);
        }
        for (auto i = ans.begin(); i != ans.end(); i++)
            cout << *i << "\n";
        return 0;
    }

The answer is correct. It should be 0, since there are no "lapin"s at the prefix.
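For reference, the counter-based idea the replies above circle around (scan left to right, track nesting depth, remember the last position where the depth returns to zero, and stop at the first unmatched '>') handles every test case discussed in this thread. A sketch in Python:

```python
def longest_valid_prefix(s):
    # depth = open '<' minus closed '>'; ans = length of the longest
    # prefix seen so far that is fully balanced.
    depth = 0
    ans = 0
    for i, ch in enumerate(s):
        if ch == '<':
            depth += 1
        else:
            depth -= 1
            if depth < 0:      # unmatched '>': no longer prefix can be valid
                break
            if depth == 0:     # a balanced prefix ends at position i
                ans = i + 1
    return ans

print(longest_valid_prefix("<>>>"))   # 2
print(longest_valid_prefix("<<>"))    # 0
print(longest_valid_prefix("><<>>"))  # 0
```

Because the answer must be a prefix, any '<' left unclosed at the end simply means the answer stays at the last fully balanced position, which is why <<> gives 0 while <>>> gives 2.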
https://discuss.codechef.com/t/compiler-editorial/5377?page=3
Joey blogs about his work here on a semi-daily basis.

Preparing for a release tomorrow. Yury fixed the Windows autobuilder over the weekend. The OSX autobuilder was broken by my changes on Friday, which turned out to contain a simple bug that took quite a long time to chase down.

Also added git annex sync --content-of=path to sync the contents of files in a path, rather than in the whole work tree as --content does. I would have rather made this be --content=path, but optparse-applicative does not support options that can be either boolean or have a string value. Really, I'd rather have git annex sync path do it, but that would be ambiguous with the remote name parameter.

Today's work was sponsored by Jake Vosloo on Patreon.

Found a bug in git-annex-shell where verbose messages would sometimes make it output things git-annex didn't expect. While fixing that, I wanted to add a test case, but the test suite actually does not test git-annex-shell at all. It would need to ssh, which test suites should not do. So, I took a detour.. Support for GIT_SSH and GIT_SSH_COMMAND has been requested before for various reasons. So I implemented that, which took 4 hours. (With one little possible compatibility caveat, since git-annex needs to pass the -n parameter to ssh sometimes, and git's interface doesn't allow for such a parameter.) Now the test suite can use those environment variables to make mock ssh remotes be accessed using local sh instead of ssh.

Today's work was sponsored by Trenton Cronholm on Patreon.

The new annex.securehashesonly config setting prevents annexed content that does not use a cryptographically secure hash from being downloaded or otherwise added to a repository. Using that and signed commits prevents SHA1 collisions from causing problems with annexed files. See using signed git commits for details about how to use it, and why I believe it makes git-annex safe despite git's vulnerability to SHA1 collisions in general.
If you are using git-annex to publish binary files in a repository, you should follow the instructions in using signed git commits. If you're using git to publish binary files, you can improve the security of your repository by switching to git-annex and signed commits.

Today's work was sponsored by Riku Voipio.

Yesterday I said that a git-annex repository using signed commits and the SHA2 backend would be secure from SHA1 collision attacks. Then I noticed that there were two ways to embed the necessary collision generation data inside git-annex key names. I've fixed both of them today, and cannot find any other ways to embed collision generation data in between a signed commit and the annexed files.

I also have a design for a way to configure git-annex to expect to see only keys using secure hash backends, which will make it easier to work with repositories that want to use signed commits and SHA2. Planning to implement that tomorrow. sha1 collision embedding in git-annex keys has the details.

The first SHA1 collision was announced today, produced by an identical-prefix collision attack. After looking into it all day, it does not appear to impact git's security immediately, except for targeted attacks against specific projects by very wealthy attackers. But we're well past the time when it seemed ok that git uses SHA1. If this gets improved into a chosen-prefix collision attack, git will start to be rather insecure.

Projects that store binary files in git, that might be worth $100k for an attacker to backdoor, should be concerned by the SHA1 collisions. A good example of such a project is <git://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git>. Using git-annex (with a suitable backend like SHA256) and signed commits together is a good way to secure such repositories.

Update 12:25 am: However, there are some ways to embed SHA1-colliding data in the names of git-annex keys.
That makes git-annex with signed commits no more secure than git with signed commits. I am working to fix git-annex to not use keys that have such problems.

Today was all about writing a tip on making a remote repo update when changes are pushed to it. That's a fairly simple page, because I added workarounds for all the complexity of making it work in direct mode repos, adjusted branches, and repos on filesystems not supporting executable git hooks. Basically, the user should be able to set the standard receive.denyCurrentBranch=updateInstead configuration on a remote, and then git push or git annex sync should update that remote's working tree.

There are a couple of unhandled cases; git push to a remote on a filesystem like FAT won't update it, and git annex sync will only update it if it's local, not accessed over ssh. Also, the emulation of git's updateInstead behavior is not perfect for direct mode repos and adjusted branches. Still, it's good enough that most users should find it meets their needs, I hope. How to set this kind of thing up is a fairly common FAQ, and this makes it much simpler.

(Oh yeah, the first ancient kernel arm build is still running. May finish before tomorrow.)

Today's work was sponsored by Jake Vosloo on Patreon.

The webapp's wormhole pairing almost worked perfectly on the first test. Turned out the remotedaemon was not noticing that the tor hidden service got enabled. After fixing that, it worked perfectly! So, I've merged that feature, and removed XMPP support from the assistant at the same time. If all goes well, the autobuilds will be updated soon, and it'll be released in time for new year's.

Anyone who's been using XMPP to keep repositories in sync will need to either switch to Tor, or could add a remote on an ssh server to sync by instead. See for the pointy-clicky way to do it, and for the command-line way.

Added the Magic Wormhole UI to the webapp for pairing Tor remotes.
This replaces the XMPP pairing UI when using "Share with a friend" and "Share with your other devices" in the webapp. I have not been able to fully test it yet, and it's part of the no-xmpp branch until I can.

It's been a while since I worked on the webapp. It was not as hard as I remembered to deal with Yesod. The inversion of control involved in coding for the web is as annoying as I remembered.

Today's work was sponsored by Riku Voipio.

Have been working on some improvements to git annex enable-tor. Made it su to root, using any su-like program that's available. And made it test the hidden service it sets up, and wait until it's propagated to the Tor directory authorities. The webapp will need these features, so I thought I might as well add them at the command-line level.

Also some messing about with locale and encoding issues. About most of which the less said the better. One significant thing is that I've made the filesystem encoding be used for all IO by git-annex, rather than needing to explicitly enable it for each file and process. So, there should be much less bother with encoding problems going forward.

git annex p2p --pair implemented, using Magic Wormhole codes that have to be exchanged between the repositories being paired. It looks like this, with the same thing being done at the same time in the other repository:

    joey@elephant:~/tmp/bench3/a> git annex p2p --pair
    p2p pair peer1 (using Magic Wormhole)
    This repository's pairing code is: 1-select-bluebird
    Enter the other repository's pairing code: (here I entered 8-fascinate-sawdust)
    Exchanging pairing data...
    Successfully exchanged pairing data.
    Connecting to peer1... ok

And just that simply, the two repositories find one another, Tor onion addresses and authentication data are exchanged, and a git remote is set up connecting via Tor.

    joey@elephant:~/tmp/bench3/a> git annex sync peer1
    commit ok
    pull peer1
    warning: no common commits
    remote: Counting objects: 5, done.
    remote: Compressing objects: 100% (3/3), done.
    remote: Total 5 (delta 0), reused 0 (delta 0)
    Unpacking objects: 100% (5/5), done.
    From tor-annex::5vkpoyz723otbmzo.onion:61900
     * [new branch]      git-annex -> peer1/git-annex

Very pleased with this, and also the whole thing worked on the very first try!

It might be slightly annoying to have to exchange two codes during pairing. It would be possible to make this work with only one code. I decided to go with two codes, even though it's only marginally more secure than one, mostly for UI reasons. The pairing interface and instructions for using it are simplified by being symmetric. (I also decided to revert the work I did on Friday to make p2p --link set up a bidirectional link. Better to keep --link the simplest possible primitive; pairing makes bidirectional links more easily.)

Next: Some more testing of this and the Tor hidden services, a webapp UI for P2P peering, and then finally removing XMPP support. I hope to finish that by New Years.

Today's work was sponsored by Jake Vosloo on Patreon.

Improved git annex p2p --link to create a bidirectional link automatically. Bidirectional links are desirable more often than not, so it's the default behavior. Also continued thinking about using Magic Wormhole for communicating p2p addresses for pairing. And filed some more bugs on Magic Wormhole.

Quite a backlog developed in the couple of weeks I was concentrating on tor support. I've taken a first pass through it and fixed the most pressing issues now. Most important was an ugly memory corruption problem in the GHC runtime system that may have led to data corruption when using git-annex with Linux kernels older than 4.5. All the Linux standalone builds of git-annex have been updated to fix that issue.
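A side note on the encoding work mentioned in these entries (using the filesystem encoding for all IO, so filenames that are not valid UTF-8 still work): git-annex solves this in Haskell, but the problem is language-neutral, and Python's surrogateescape error handler makes a compact illustration of the same escaping idea.

```python
# A byte string (say, a filename) that is not valid UTF-8.
raw = b"file-\xff\xfe.txt"

# surrogateescape smuggles the undecodable bytes through the str type
# as lone surrogates, so decoding never fails...
name = raw.decode("utf-8", errors="surrogateescape")

# ...and encoding restores the original bytes exactly.
roundtrip = name.encode("utf-8", errors="surrogateescape")
print(roundtrip == raw)
```

The point in both languages is the same: never reject a filename just because it is not valid UTF-8, and never lose the original bytes on the way back out.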
Today dealt with several more things, including fixing a buggy timestamp issue with metadata --batch, reverting the ssh ServerAliveInterval setting (it broke on too many systems with old ssh or complicated ssh configurations), making batch input not be rejected when it can't be decoded as UTF-8, and more.

Also, spent some time learning a little bit about Magic Wormhole and SPAKE, as a way to exchange tor remote addresses. Using Magic Wormhole for that seems like a reasonable plan. I did file a couple of bugs on it which will need to get fixed, and then using it is mostly a question of whether it's easy enough to install that git-annex can rely on it.

More improvements to tor support. Yesterday, debugged a reversion that broke push/pull over tor, and made actual useful error messages be displayed when there were problems. Also fixed a memory leak, although I fixed it by reorganizing code and could not figure out quite why it happened, other than that the ghc runtime was not managing to be as lazy as I would expect.

Today, added git ref change notification to the P2P protocol, and made the remotedaemon automatically fetch changes from tor remotes. So, it should work to use the assistant to keep repositories in sync over tor. I have not tried it yet, and linking over tor still needs to be done at the command line, so it's not really ready for webapp users yet.

Also fixed a denial of service attack in git-annex-shell and git-annex when talking to a remote git-annex-shell. It was possible to feed either one a large amount of data when it tried to read a line of data, and summon the OOM killer. The next release will be expedited somewhat because of that.

Today's work was sponsored by Thomas Hochstein on Patreon.

Git annex transfers over Tor worked correctly the first time I tried them today. I had been expecting protocol implementation bugs, so this was a nice surprise! Of course there were some bugs to fix. I had forgotten to add UUID discovery to git annex p2p --link.
And, resuming interrupted transfers was buggy.

Spent some time adding progress updates to the Tor remote. I was curious to see what speed transfers would run at. Speed will of course vary depending on the Tor relays being used, but this example with a 100 mb file is not bad:

    copy big4 (to peer1...) 62%  1.5MB/s 24s

There are still a couple of known bugs, but I've merged the tor branch into master already.

Alpernebbi has built a GUI for editing git-annex metadata. Something I always wanted! Read about it here.

Today's work was sponsored by Ethan Aubin.

Friday and today were spent implementing both sides of the P2P protocol for git-annex content transfers. There were some tricky cases to deal with. For example, when a file is being sent from a direct mode repository, or a v6 annex.thin repository, the content of the file can change as it's being transferred, including being appended to or truncated. Had to find a way to deal with that, to avoid breaking the protocol by not sending the indicated number of bytes of data. It all seems to be done now, but it's not been tested at all, and there are probably some bugs to find. (And progress info is not wired up yet.)

Today's work was sponsored by Trenton Cronholm on Patreon.

Today I finished the second-to-last big missing piece for tor hidden service remotes. Networks of these remotes are P2P networks, and there needs to be a way for peers to find one another, and to authenticate with one another. The git annex p2p command sets up links between peers in such a network.

So far it has only a basic interface that sets up a one-way link between two peers. In the first repository, run git annex p2p --gen-address. That outputs a long address. In the second repository, run git annex p2p --link peer1, and paste the address into it. That sets up a git remote named "peer1" that connects back to the first repository over tor. That is a one-directional link, while a bidirectional link would be much more convenient to have between peers.
Worse, the address can be reused by anyone who sees it, to link into the repository. And, the address is far too long to communicate in any way except for pasting it. So I want to improve that later. What I'd really like to have is an interface that displays a one-time-use phrase of five to ten words, that can be read over the phone or across the room. Exchange phrases with a friend, and get your repositories securely linked together with tor.

But, git annex p2p is good enough for now. I can move on to the final keystone of the tor support, which is file transfer over tor. That should, fingers crossed, be relatively easy, and the tor branch is close to mergeable now.

Today's work was sponsored by Riku Voipio.

Debian's tor daemon is very locked down in the directories it can read from, and so I've had a hard time finding a place to put the unix socket file for git-annex's tor hidden service. Painful details in. At least for now, I'm putting it under /etc/tor/, which is probably a FHS violation, but seems to be the only option that doesn't involve a lot of added complexity.

The Windows autobuilder is moving, since NEST is shutting down the server it has been using. Yury Zaytsev has set up a new Windows autobuilder, hosted at Dartmouth College this time.

The tor branch is coming along nicely. This weekend, I continued working on the P2P protocol, implementing it for network sockets, and extending it to support connecting up git-send-pack/git-receive-pack. There was a bit of a detour when I split the Free monad into two separate ones, one for Net operations and the other for Local filesystem operations.

This weekend's work was sponsored by Thomas Hochstein on Patreon.

Today, implemented a git-remote-tor-annex command that git will use for tor-annex:: urls, and made git annex remotedaemon serve the tor hidden service.
Now I have git push/pull working to the hidden service, for example:

    git pull tor-annex::eeaytkuhaupbarfi.onion:47651

That works very well, but does not yet check that the user is authorized to use the repo, beyond knowing the onion address. And currently it only works in git-annex repos; with some tweaks it should also work in plain git repos.

Next, I need to teach git-annex how to access tor-annex remotes. And after that, an interface in the webapp for setting them up and connecting them together.

Today's work was sponsored by Josh Taylor on Patreon.

For a Haskell programmer, a day where a big thing is implemented without the least scrap of code that touches the IO monad is a good day. And this was a good day for me!

Implemented the p2p protocol for tor hidden services. Its needs are somewhat similar to the external special remote protocol, but the two protocols do not fully overlap with one another. Rather than try to unify them, and so complicate both cases, I prefer to reuse as much code as possible between separate protocol implementations. The generating and parsing of messages is largely shared between them. I let the new p2p protocol otherwise develop in its own direction.

But, I do want to make this p2p protocol reusable for other types of p2p networks than tor hidden services. This was an opportunity to use the Free monad, which I'd never used before. It worked out great, letting me write monadic code to handle requests and responses in the protocol, that reads the content of files and resumes transfers and so on, all independent of any concrete implementation. The whole implementation of the protocol only needed 74 lines of monadic code.
It helped that I was able to factor out functions like this one, which is used both for handling a download, and by the remote when an upload is sent to it:

    receiveContent :: Key -> Offset -> Len -> Proto Bool
    receiveContent key offset len = do
        content <- receiveBytes len
        ok <- writeKeyFile key offset content
        sendMessage $ if ok then SUCCESS else FAILURE
        return ok

To get transcripts of the protocol in action, the Free monad can be evaluated purely, providing the other side of the conversation:

    ghci> putStrLn $ protoDump $ runPure (put (fromJust $ file2key "WORM--foo")) [PUT_FROM (Offset 10), SUCCESS]
    > PUT WORM--foo
    < PUT-FROM 10
    > DATA 90
    > bytes
    < SUCCESS
    result: True

    ghci> putStrLn $ protoDump $ runPure (serve (toUUID "myuuid")) [GET (Offset 0) (fromJust $ file2key "WORM--foo")]
    < GET 0 WORM--foo
    > PROTO-ERROR must AUTH first
    result: ()

Am very happy with all this pure code and that I'm finally using Free monads. Next I need to get down to the dirty business of wiring this up to actual IO actions, and an actual network connection.

Today's work was sponsored by Jake Vosloo on Patreon.

Fixed one howler of a bug today. Turns out that git annex fsck --all --from remote didn't actually check the content of the remote, but checked the local repository. Only --all was buggy; git annex fsck --from remote was ok. Don't think this is crash priority enough to make a release for, since only --all is affected.

Somewhat uncomfortably made git annex sync pass --allow-unrelated-histories to git merge. While I do think that git's recent refusal to merge unrelated histories is good in general, the problem is that initializing a direct mode repository involves making an empty commit. So merging from a remote into such a direct mode repository means merging unrelated histories, while an indirect mode repository doesn't. Seems best to avoid such inconsistencies, and the only way I could see to do it is to always use --allow-unrelated-histories.
May revisit this once direct mode is finally removed.

Using the git-annex arm standalone bundle on some WD NAS boxes used to work, and then it seems they changed their kernel to use a nonstandard page size, and broke it. This actually seems to be a bug in the gold linker, which defaults to an unnecessarily small page size on arm. The git-annex arm bundle is being adjusted to try to deal with this.

ghc 8 made error include some backtrace information. While it's really nice to have backtraces for unexpected exceptions in Haskell, it turns out that git-annex used error a lot with the intent of showing an error message to the user, and a backtrace clutters up such messages. So, I bit the bullet, checked through every error in git-annex, and made such ones not include a backtrace.

Also, I've been considering what protocol to use between git-annex nodes when communicating over tor. One way would be to make it very similar to git-annex-shell, using rsync etc, and possibly reusing code from git-annex-shell. However, it can take a while to make a connection across the tor network, and that method seems to need a new connection for each file transferred, etc.

Also thought about using an http based protocol. The servant library is great for that; you get both http client and server implementations almost for free. Resuming interrupted transfers might complicate it, and the hidden service side would need to listen on a unix socket, instead of the regular http port. It might be worth it to use http for tor, if it could be reused for git-annex http servers not on the tor network. But then I'd have to make the http server support git pull and push over http in a way that's compatible with how git uses http, including authentication. Which is a whole nother ball of complexity.
So, I'm leaning instead to using a simple custom protocol, something like:

    > AUTH $localuuid $token
    < AUTH-SUCCESS $remoteuuid

    > SENDPACK $length
    > $gitdata
    < RECVPACK $length
    < $gitdata

    > GET $pos $key
    < DATA $length
    < $bytes
    > SUCCESS

    > PUT $key
    < PUT-FROM $pos
    > DATA $length
    > $bytes
    < SUCCESS

Today's work was sponsored by Riku Voipio.

Have waited too long for some next-generation encrypted P2P network, like telehash, to emerge. Time to stop waiting; tor hidden services are not as cutting edge, but should work. Updated the design and started implementation in the tor branch.

Unfortunately, Tor's default configuration does not enable the ControlPort. And changing that in the configuration could be problematic. This makes it harder than it ought to be to register a tor hidden service. So, I implemented a git annex enable-tor command, which can be run as root to set it up. The webapp will probably use su-to-root or gksu to run it. There are some Linux-specific parts in there, and it uses a socket for communication between tor and the hidden service, which may cause problems for Windows porting later. The next step will be to get git annex remotedaemon to run as a tor hidden service.

Also made a no-xmpp branch which removes xmpp support from the assistant. That will remove 3000 lines of code when it's merged. Will probably wait until after tor hidden services are working.

Today's work was sponsored by Jake Vosloo on Patreon.

Worked on several bug reports today, fixing some easy ones, and following up on others. And then there are the hard bugs.. Very pleased that I was able to eventually reproduce a bug based entirely on the information that git-annex's output did not include a filename. Didn't quite get that bug fixed though. At the end of the day, got a bug report that git annex add of filenames containing spaces has broken. This is a recent reversion, and I'm pushing out a release with a fix ASAP.
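As a toy illustration only (this is not git-annex code, and the helper names are invented), the custom message grammar sketched above is simple enough to validate with a table of argument counts:

```python
# Argument counts for each message type in the sketched protocol.
MESSAGES = {
    "AUTH": 2,          # localuuid token
    "AUTH-SUCCESS": 1,  # remoteuuid
    "SENDPACK": 1,      # length (followed by git data)
    "RECVPACK": 1,      # length (followed by git data)
    "GET": 2,           # pos key
    "PUT": 1,           # key
    "PUT-FROM": 1,      # pos
    "DATA": 1,          # length (followed by bytes)
    "SUCCESS": 0,
}

def parse(line):
    # Split a protocol line into (message word, argument list),
    # rejecting unknown words and wrong argument counts.
    word, *args = line.split(" ")
    if word not in MESSAGES or len(args) != MESSAGES[word]:
        raise ValueError("bad message: " + line)
    return (word, args)

def serialize(word, args):
    return " ".join([word, *args]) if args else word

msg = parse("GET 0 WORM--foo")
print(msg, serialize(*msg))
```

A line-oriented format like this is easy to debug by hand over a socket, which fits the transcript-driven testing style shown in the earlier Free monad entry.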
Made a significant change today: enabled automatic retrying of transfers that fail. It's only done if the previous try managed to advance the progress by some amount. The assistant has already had such retrying for many years, but now it will also be done when using git-annex at the command line.

One good reason for a transfer to fail and need a retry is when the network connection stalls. You'd think that TCP keepalives would detect this kind of thing and kill the connection, but I've had enough complaints that I suppose that doesn't always work, or gets disabled. Ssh has a ServerAliveInterval that detects such stalls nicely for the kind of batch transfers git-annex uses ssh for, but it's not enabled by default. So I found a way to make git-annex enable it, while still letting ~/.ssh/config settings override that.

Also got back to analyzing an old bug report about proliferating ".nfs*.lock" files when using git-annex on nfs; this was caused by the wacky NFS behavior of renaming deleted files, and I found a change to the ssh connection caching cleanup code that should avoid the problem.

Several bug fixes involving v6 unlocked files today. Several related bugs were caused by relying on the inode cache information, without a fallback to handle the case where the inode cache had not gotten updated. While the inode cache is generally kept up-to-date well by the smudge/clean filtering, it is just a cache and can be out of date. Did some auditing for such problems and hopefully I've managed to find them all.

Also, there was a tricky upgrade case where a v5 repository contained a v6 unlocked file, and the annexed content got copied into it. This triggered the above-described bugs, and in this case the worktree needs to be updated on upgrade, to replace the pointer file with the content.

As I caught up with recent activity, it was nice to see some contributions from others. James MacMahon sent in a patch to improve the filenames generated by importfeed.
And, xloem is writing workflow documentation for git-annex in the Workflow guide.

Finished up where I left off yesterday, writing test cases and fixing bugs with syncing in adjusted branches. While adjusted branches need v6 mode, and v6 mode is still considered experimental, this is still a rather nasty bug, since it can make files go missing (though still available in git history of course). So, planning to release a new version with these fixes as soon as the autobuilders build it.

Over a month ago, I had some reports that syncing into adjusted branches was losing some files that had been committed. I couldn't reproduce it, but IIRC both felix and tbm reported problems in this area. And, felix kindly sent me enough of his git repo to hopefully reproduce the problem. Finally got back to that today.

Luckily, I was able to reproduce the bug using felix's repo. The bug only occurs when there's a change deep in a tree of an adjusted branch, and not always then. After staring at it for a couple of hours, I finally found the problem; a modification flag was not getting propagated in this case, and some changes made deep in the tree were not getting included into parent trees. So, I think I've fixed it, but need to look at it some more to be sure, and develop a test case. And fixing that exposed another bug in the same code. Gotta run unfortunately, so will finish this tomorrow..

Today's work was sponsored by Riku Voipio.

Several bug fixes today, and got caught up on the most recent messages. The backlog is 157. The most significant fix prevents git-annex from reading in the whole content of a large git object when it wants to check if it's an annex symlink. In several situations where large files were committed to git, or staged, git-annex could do a lot of work, use a lot of memory, and maybe crash. Fixed by checking the size of an object before asking git cat-file for its content.

Also a couple of improvements around versions and upgrading.
IIRC git-annex used to only support one repository version at a time, but this was changed to support V6 as an optional upgrade from V5, and so the supported versions became a list. Since V3 repositories are identical to V5 other than the version, I added it to the supported version list, and any V3 repos out there can be used without upgrading. Particularly useful if they're on read-only media. And, there was a bug in the automatic upgrading of a remote that caused it to be upgraded all the way to V6. Now it will only be upgraded to V5.

Today's work was sponsored by Jake Vosloo on Patreon.

Realized recently that despite all the nice concurrency support in git-annex, external special remotes were limited to handling one request at a time. While the external special remote protocol could almost support concurrent requests, that would complicate implementing them, and probably need a version flag to enable it, to avoid breaking existing ones. Instead, made git-annex start up multiple external special remote processes as needed to handle concurrency.

Today's work was sponsored by Josh Taylor on Patreon.

Did most of the optimisations that recent profiling suggested. This sped up a git annex find from 3.53 seconds to 1.73 seconds. And, git annex find --not --in remote from 12.41 seconds to 5.24 seconds. One of the optimisations sped up git-annex branch querying by up to 50%, which should also speed up use of some preferred content expressions. All in all, a very nice little optimisation pass.

Only had a couple hours today, which were spent doing some profiling of git-annex in situations where it has to look through a large working tree in order to find files to act on. The top five hot spots this found are responsible for between 50% and 80% of git-annex's total CPU use in these situations. The first optimisation sped up git annex find by around 18%. More tomorrow..

Catching up on backlog today. I hope to be back to a regular work schedule now.
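The version policy described above can be modelled roughly like this (a simplification for illustration, not git-annex's actual code):

```haskell
-- v3 is on-disk identical to v5, so both stay usable without an upgrade,
-- and v6 is a separate opt-in upgrade.
supportedVersions :: [Int]
supportedVersions = [3, 5, 6]

-- Automatic upgrade of a remote never goes past v5.
autoUpgradeTarget :: Int -> Int
autoUpgradeTarget v
    | v `elem` supportedVersions = v  -- already usable; leave it alone
    | otherwise = 5                   -- upgrade old formats, but only to v5
```

The bug fixed above amounts to the `otherwise` case having returned 6 instead of 5.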
Unanswered messages down to 156. A lot of time today spent answering questions.

There were several problems involving git branches with slashes in their name, such as "foo/bar" (but not "origin/master" or "refs/heads/foo"). Some branch names derived from such a branch would keep only the "bar" part. In git annex sync, this led to perhaps merging "foo/bar" into "other/bar" or "bar". And the adjusted branch code was entirely broken for such branches. I've fixed it now. Also made git annex addurl behave better when the file it wants to add is gitignored.

Thinking about implementing git annex copy --from A --to B. It does not seem too hard to do that, at least with a temp file used in between. See transitive transfers.

Today's work was sponsored by Thomas Hochstein on Patreon.

Turned out to not be very hard at all to make git annex get -JN assign different threads to different remotes that have the same cost. Something like that was requested back in 2011, but it didn't really make sense until parallel get was implemented last year. (Also spent too much time fixing up broken builds.)

Back after taking most of August off and working on other projects. Got the unanswered messages backlog down from 222 to 170. Still scary high.

Numerous little improvements today. Notable ones:

- Windows: Handle shebang in external special remote program. This is needed for git-annex-remote-rclone to work on Windows. Nice to see that external special remote is getting ported and apparently lots of use.
- Make --json and --quiet suppress automatic init messages, and any other messages that might be output before a command starts. This was a reversion introduced in the optparse-applicative changes over a year ago.

Also I'm developing a plan to improve parallel downloading when multiple remotes have the same cost. See get round robin.

Today's work was sponsored by Jake Vosloo on Patreon.

A user suggested adding --failed to retry failed transfers.
That was a great idea and I landed a patch for it 3 hours later. Love it when a user suggests something so clearly right and I am able to quickly make it happen!

Unfortunately, my funding from the DataLad project to work on git-annex is running out. It's been a very good two years funded that way, with an enormous amount of improvements and support and bug fixes, but all good things must end. I'll continue to get some funding from them for the next year, but only for half as much time as the past two years. I need to decide if it makes sense to keep working on git-annex to the extent I have been. There are definitely a few (hundred) things I still want to do on git-annex, starting with getting the git patches landed to make v6 mode really shine. Past that, it's mostly up to the users. If they keep suggesting great ideas and finding git-annex useful, I'll want to work on it more. What to do about funding? Maybe some git-annex users can contribute a small amount each month to fund development. I've set up a Patreon page for this.

Anyhoo... Back to today's (unfunded) work. --failed can be used with get, move, copy, and mirror. Of course those commands can all simply be re-run if some of the transfers fail and will pick up where they left off. But using --failed is faster because it does not need to scan all files to find out which still need to be transferred. And accumulated failures from multiple commands can be retried with a single use of --failed. It's even possible to do things like git annex get --from foo; git annex get --failed --from bar, which first downloads everything it can from the foo remote and falls back to using the bar remote for the rest. Although setting remote costs is probably a better approach most of the time.

Turns out that I had earlier disabled writing failure log files, except by the assistant, because only the assistant was using them. So, that had to be undone.
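A sketch of why --failed avoids the full scan: retrying just reads back a small failure log and replays it, instead of walking every file in the work tree. The log format and names below are invented for illustration; the real format is internal to git-annex:

```haskell
-- Suppose one failed transfer is recorded per line,
-- e.g. "download SHA256E-s1048576--d0f1..."
data Direction = Download | Upload
    deriving (Eq, Show)

parseFailureLog :: String -> [(Direction, String)]
parseFailureLog = concatMap parseLine . lines
  where
    parseLine l = case words l of
        ["download", key] -> [(Download, key)]
        ["upload",   key] -> [(Upload, key)]
        _                 -> []  -- ignore anything malformed
```

Replaying such a list is proportional to the number of failures, not to the size of the repository, which is where the speedup comes from.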
There's some potential for failure log files to accumulate annoyingly, so perhaps some expiry mechanism will be needed. This is why --failed is documented as retrying "recent" transfers. Anyway, the failure log files are cleaned up after successful transfers.

With yesterday's JSON groundwork in place, I quickly implemented git annex metadata --batch today in only 45 LoC. The interface is nicely elegant; the same JSON format that git-annex metadata outputs can be fed into it to get, set, delete, and modify metadata.

I've had to change the output of git annex metadata --json. The old output looked like this:

```
{"command":"metadata","file":"foo","key":"...","author":["bar"],...,"note":"...","success":true}
```

That was not good, because it didn't separate the metadata fields from the rest of the JSON object. What if a metadata field is named "note" or "success"? It would collide with the other "note" and "success" in the JSON. So, changed this to a new format, which moves the metadata fields into a "fields" object:

```
{"command":"metadata","file":"foo","key":"...","fields":{"author":["bar"],...},"note":"...","success":true}
```

I don't like breaking backwards compatibility of JSON output, but in this case I could see no real alternative. I don't know if anyone is using metadata --batch anyway. If you are and this will cause a problem, get in touch.

While making that change, I also improved the JSON output layer, so it can use Aeson. Update: And switched everything over to using Aeson, so git-annex no longer depends on two different JSON libraries. This let me use Aeson to generate the "fields" object for metadata --json. And it was also easy enough to use Aeson to parse the output of that command (and some simplified forms of it). So, I've laid the groundwork for git annex metadata --batch today.

A common complaint is that git annex fsck in a bare repository complains about missing content of deleted files.
That's because in a bare repository, git-annex operates on all versions of all files. Today I added a --branch option, so if you only want to check say, the master branch, you can:

```
git annex fsck --branch master
```

The new option has other uses too. Want to get all the files in the v1.0 tag?

```
git annex get --branch v1.0
```

It might be worth revisiting the implicit --all behavior for bare repositories. It could instead default to --branch HEAD or something like that. But I'd only want to change that if there was a strong consensus in favor.

Over 3/4 of the time spent implementing --branch was spent in adjusting the output of commands, to show "branch:file" is being operated on. How annoying.

First release in over a month. Before making this release, a few last minute fixes, including a partial workaround for the problem that Sqlite databases don't work on Lustre filesystems. Backlog is now down to 140 messages, and only 3 of those are from this month. Still higher than I like.

Noticed that in one of my git-annex repositories, git-annex was spending a full second at startup checking all the git-annex branches from remotes to see if they contained changes that needed to be merged in. So, I added a cache of recently merged branches to avoid that. I remember considering this optimisation years ago; don't know why I didn't do it then. Not every day that I can speed up git-annex so much! Also, made git annex log --all show location log changes for all keys. This was tricky to get right and fast.

Worked on recent bug reports. Two bugs fixed today were both reversions introduced when the v6 repository support was added. Backlog is down to 153.

Revisited my enhanced smudge/clean patch set for git, updating it for code review and to deal with changes in git since I've been away. This took several hours unfortunately.

Back from vacation, with a message backlog of 181.
I'm concentrating first on low-hanging fruit of easily implemented todos, and well reproducible bugs, to get started again. Implemented --batch mode for git annex get and git annex drop, and also enabled --json for those.

Investigated git-annex startup time. Turns out that cabal has a bug that causes many thousands of unnecessary syscalls when linking in the shared libraries. Working around it halved git-annex's startup time.

Fixed a bug that caused git annex testremote to crash when testing a freshly made external special remote.

Continued working on the enhanced smudge/clean interface in git today. Sent in a third version of the patch set, which is now quite complete. I'll be away for the next week and a half, on vacation.

Continued working on the enhanced smudge/clean interface in git, incorporating feedback from the git developers. In a spare half an hour, I made an improved-smudge-filters branch that teaches git-annex smudge to use the new interface. Doing a quick benchmark, git checkout of a deleted 1 gb file took:

- 19 seconds before
- 11 seconds with the new interface
- 0.1 seconds with the new interface and annex.thin set (while also saving 1 gb of disk space!)

So, this new interface is very much worthwhile.

Working on git, not git-annex the past two days, I have implemented the smudge-to-file/clean-from-file extension to the smudge/clean filter interface. Patches have been sent to the git developers, and hopefully they'll like it and include it. This will make git-annex v6 work a lot faster and better. Amazing how much harder it is to code on git than on git-annex! While I'm certainly not as familiar with the git code base, this is mostly because C requires so much more care about innumerable details and so much verbosity to do anything. I probably could have implemented this interface in git-annex in 2 hours, not 2 days.

There was one more test suite failure when run on FAT, which I've investigated today.
It turns out that a bug report was filed about the same problem, and at root it seems to be a bug in git merge. Luckily, it was not hard to work around the strange merge behavior. It's been very worthwhile running the test suite on FAT; it's pointed me at several problems with adjusted branches over the past weeks. It would be good to add another test suite pass to test adjusted branches explicitly, but when I tried adding that, there were a lot of failures where the test suite is confused by adjusted branch behavior and would need to be taught about it.

I've released git-annex 6.20160613. If you're using v6 repositories and especially adjusted branches, you should upgrade since it has many fixes.

Today I was indeed able to get to the bottom of and fix the bug that had stumped me the other day. Rest of the day was taken up by catching up on some bug reports and suggestions for v6 mode. Like making unlock and lock work for files that are not locally present. And, improving the behavior of the clean filter so it remembers what backend was used for a file before and continues using that same backend. About ready to make a release, but IIRC there's one remaining test suite failure on FAT.

Been having a difficult time fixing the two remaining test suite failures when run on a FAT filesystem. On Friday, I got quite lost trying to understand the first failure. At first I thought it had something to do with queued git staging commands not being run in the right git environment when git-annex is using a different index file or work tree. I did find and fix a potential bug in that area. It might be that some reports long ago of git-annex branch files getting written to the master branch were caused by that. But, fixing it did not help with the test suite failure at hand. Today, I quickly found the actual cause of the first failure. Of course, it had nothing to do with queued git commands at all, and was a simple fix in the end.
But, I've been staring at the second failure for hours and am not much wiser. All I know is, an invalid tree object gets generated by the adjusted branch code that contains some files more than once. (git gets very confused when a repository contains such tree objects; if you wanted to break a git repository, getting such trees into it might be a good way. cough) This invalid tree object seems to be caused by the basis ref for the adjusted branch diverging somehow from the adjusted branch itself. I have not been able to determine why or how the basis ref can diverge like that. Also, this failure is somewhat indeterminate; it doesn't always occur, and reordering the tests in the test suite can hide it. Weird. Well, hopefully looking at it again later with fresh eyes will help.

A productive day of small fixes. Including a change to deal with an incompatibility in git 2.9's commit.gpgsign, and a couple of fixes involving gcrypt repositories. Also several improvements to cloning from repositories where an adjusted branch is checked out. The clone automatically ends up with the adjusted branch checked out too. The test suite has 3 failures when run on a FAT repository, all involving adjusted branches. Managed to fix one of them today, hope to get to the others soon.

Release today includes a last-minute fix to parsing lines from the git-annex branch that might have one or more carriage returns at the end. This comes from Windows of course, where some things transparently add/remove \r at the end of lines while other things don't, which can result in quite a mess. Luckily it was not hard or expensive to handle. If you are lucky enough not to use Windows, the release also has several more interesting improvements.

git-annex has always balanced implicit and explicit behavior. Enabling a git repository to be used with git-annex needs an explicit init, to avoid foot-shooting; but a clone of a repository that is already using git-annex will be implicitly initialized.
Git remotes are implicitly checked to see if they use git-annex, so the user can add a git remote and immediately use git annex get to get files from it. There's a fine line here, and implicit git remote enabling sometimes crosses it; sometimes the remote doesn't have git-annex-shell, and so there's an ugly error message and annex-ignore has to be set to avoid trying to enable that git remote again. Sometimes the probe of a remote can occur when the user doesn't really expect it to (and it can involve a ssh password prompt). Part of the problem is, there's not an explicit way to enable a git remote to be used by git-annex. So, today, I made git annex enableremote do that, when the remote name passed to it is a git remote rather than a special remote. This way, you can avoid the implicit behavior if you want to. I also made git annex enableremote un-set annex-ignore, so if a remote got that set due to a transient configuration problem, it can be explicitly enabled.

Over the weekend, I noticed that a relative path to GIT_INDEX_FILE is interpreted in several different, inconsistent ways by git. git-annex mostly used absolute paths, but did use a relative path in git annex view. Now it will only use absolute paths to avoid git's wacky behavior.

Integrated some patches to support building with ghc 8.0.1, which was recently released.

The gnupg-options git configs were not always passed to gpg. Fixing this involved quite a lot of plumbing to get the options to the right functions, and consumed half of today. Also did some design work on the external special remote protocol to avoid backwards compatibility problems when adding new protocol features.

Fixed several problems with v6 mode today. The assistant was doing some pretty wrong things when changes were synced into v6 repos, and that behavior is fixed. Also dealt with a race that caused updates made to the keys database by one process to not be seen by another process.
And, made git annex add of an unlocked pointer file not annex the pointer file's content, but just add it to git as-is.

Also, Thowz pointed out that adjusted branches could be used to locally adjust where annex symlinks point to, when a repository's git directory is not in the usual location. I've added that, as git annex adjust --fix. It was quite easy to implement this, which makes me very happy with the adjusted branches code!

Posted a proposal for extending git smudge/clean filters with raw file access. If git gets an interface like that, it will make it easy to deal with most of the remaining v6 todo list.

It's not every day I add a new special remote encryption mode to git-annex! The new encryption=sharedpubkey mode lets anyone with a clone of the git repository (and access to the remote) store files in the remote, but then only the private key owner can access those files. Which opens up some interesting new use cases...

Lots of little fixes and improvements here and there over the past couple days. The main thing was fixing several bugs with adjusted branches and Windows. They seem to work now, and commits made on the adjusted branch are propagated back to master correctly.

It would be good to finish up the last todos for v6 mode this month. The sticking point is I need a way to update the file stat in the git index when git-annex gets/drops/etc an unlocked file. I have not decided yet if it makes the most sense to add a dependency on libgit2 for that, or extend git update-index, or even write a pure haskell library to manipulate index files. Each has its pluses and its minuses.

git-annex 6.20160419 has a rare security fix. A bug made encrypted special remotes that are configured to use chunks accidentally expose the checksums of content that is uploaded to the remote. Such information is supposed to be hidden from the remote's view by the encryption. The same bug also made resuming interrupted uploads to such remotes start over from the beginning.
After releasing that, I've been occupied today with fixing the Android autobuilder, which somehow got its build environment broken (unsure how), and fixing some other dependency issues.

I'm on a long weekend. This did not prevent git-annex from getting an impressive lot of features though, as Daniel Dent contributed git-annex-remote-rclone, which uses rclone to add support for a ton of additional cloud storage things, including: Google Drive, Openstack Swift, Rackspace cloud files, Memset Memstore, Dropbox, Google Cloud Storage, Amazon Cloud Drive, Microsoft One Drive, Hubic, Backblaze B2, Yandex Disk. Wow! I hope that rclone will end up packaged in more distributions (eg Debian) so this will be easier to set up.

Something that has come up repeatedly is that git annex reinject is too hard to use, since you have to tell it which annexed file you're providing the content for. Now git-annex reinject --known can be passed a list of files and it will reinject any that hash to known annexed contents and ignore the rest. That works best when only one backend is used in a repository; otherwise it would need to be run repeatedly with different --backend values.

Turns out that the GIT_COMMON_DIR feature used by adjusted branches is only a couple years old, so don't let adjusted branches be used with a too-old git. And, git merge is getting a new sanity check that prevents merging in a branch with a disconnected history. git annex sync will inherit that sanity check, but the assistant needs to let such merges happen when eg, pairing repositories, so more git version checking there.

The past three days have felt kind of low activity days, but somehow a lot of stuff still got done, both bug fixes and small features, and I am feeling pretty well caught up with backlog for the first time in over a month. Although as always there is some left, 110 messages.

On Monday I fixed a bug that could cause a hang when dropping content, if git-annex had to verify the content was present on a ssh remote.
That bug was bad enough to make an immediate release for, even though it was only a week since the last release.

Seems I forgot about executable files entirely when implementing v6 unlocked files. Fixed that oversight today.

Yesterday I released version 6.20160412, which is the first to support adjusted branches. Today, some planning for ways to better support annex.thin, but that seems to be stuck on needing a way to update git's index file. Which is the main thing needed to fix various problems with v6 unlocked files. Dove back into the backlog, got it down to 144 messages.

Several bug fixes. Think I'm really finished with adjusted branches now. Fixed a bug in annex symlink calculation when merging into an adjusted branch. And, fixed a race condition involving a push of master from another repository. While git annex adjust --unlock is reason enough to have adjusted branches, I do want to at some point look into implementing git annex adjust --hide-missing, and perhaps rewrite the view branches to use adjusted branches, which would allow for updating view branches when pulling from a remote. Also, turns out Windows supports hard links, so I got annex.thin working on Windows, as well as a few other things that work better with hard links.

Well, I had to rethink how merges into adjusted branches should be handled. The old method often led to unnecessary merge conflicts. My new approach should always avoid unnecessary merge conflicts, but it's quite a trick. To merge origin/master into adjusted/master, it first merges origin/master into master. But, since adjusted/master is checked out, it has to do the merge in a temporary work tree. Luckily this can be done fairly inexpensively. To handle merge conflicts at this stage, git-annex's automatic merge conflict resolver is used.
This approach wouldn't be feasible without a way to automatically resolve merge conflicts, because the user can't help with conflict resolution when the merge is not happening in their working tree. Once that out-of-tree merge is done, the result is adjusted, and merged into the adjusted branch. Since we know the adjusted branch is a child of the old master branch, this merge can be forced to always be a fast-forward. This second merge will only ever have conflicts if the work tree has something uncommitted in it that causes a merge conflict. Wow! That's super tricky, but it seems to work well. While I ended up throwing away everything I did last Thursday due to this new approach, the code is in some ways simpler than that old, busted approach. Feels like I've been working on adjusted branches too long.

Did make some excellent progress today. Upgrading a direct mode repo to v6 will now enter an adjusted branch where all files are unlocked. Using an adjusted branch like this avoids unlocking all files in the master branch of the repo, which means that different clones of a repo can be upgraded to v6 mode at different times. This should let me advance the timetable for enabling v6 by default, and getting rid of direct mode. Also, cloning a repository that has an adjusted branch checked out will now work; the clone starts out in the same adjusted branch.

But, I realized today that the way merges from origin/master into adjusted/master are done will often lead to merge conflicts. I have come up with a better way to handle these merges that won't unnecessarily conflict, but didn't feel ready to implement that today. Instead, I spent the latter half of the day getting caught up on some of the backlog. Got it down from some 200 messages to 150.

Spent all day fixing sync in adjusted branches. I was lost in the weeds for a long time.
Eventually, drawing this diagram helped me find my way to a solution:

```
origin/master   adjusted/master   master
A               A                 |--------------->A'
|               |                 |
|               |                 C'- - - - - - - - > C
B               |                 |
|               |--------------->M'<-----------------|
```

After implementing that, syncing in adjusted branches seems to work much better now. And I've finally merged support for them into master. There's still several bugs and race conditions and upgrade things to sort out around adjusted branches. Probably another week's work all told.

Back from Libreplanet and a week of spring break. Backlog is not too bad for two weeks mostly away; 143 messages. Finally got the OSX app updated for the git security fix yesterday. Had to drop builds for old OSX releases.

Getting back into working on adjusted branches now. Polishing up the UI and docs today. Nearly ready to merge the feature; the only blocker is there seems to be something a little bit wrong with how pulled changes are merged into the adjusted branch that I noticed in testing.

Pushed out a git-annex release this morning mostly because of the recent git security fix. Several git-annex builds bundle a copy of git and needed to be updated. Note that the OSX autobuilder is temporarily down and so it's not been updated yet -- hopefully soon.

Caught up with a few last things today, before I leave for a week in Boston. Converted several places that ran git hash-object repeatedly to feed data to a running process. This sped up git-annex add in direct mode and with v6 unlocked files, by up to 2x.

After a real brain-bender of a day, I have commit propagation from the adjusted branch back to the original branch working, without needing to reverse adjust the whole tree. This is faster, but the really nice thing is that it makes individual adjustments simpler to write. In fact, it's so simple that I took 10 minutes just now to implement a second adjustment!
```haskell
adjustTreeItem HideMissingAdjustment h ti@(TreeItem _ _ s) = do
    mk <- catKey s
    case mk of
        Just k -> ifM (inAnnex k)
            ( return (Just ti)
            , return Nothing
            )
        Nothing -> return (Just ti)
```

Over the weekend, I converted the linux "ancient" autobuilder to use stack. This makes it easier to get all the recent versions of all the haskell dependencies installed there. Also, merged my no-ffi branch, removing some library code from git-annex and adding new dependencies. It's good to remove code.

Today, fixed the OSX dmg file -- its bundled gpg was broken. I pushed out a new version of the OSX dmg file with the fix. With the recent incident in mind of malware inserted into the Transmission dmg, I've added a virus scan step to the release process for all the git-annex images. This way, we'll notice if an autobuilder gets a virus.

Also caught up on some backlog, although the remaining backlog is a little larger than I'd like at 135 messages. Hope to work some more on adjusted branches this week.

A few mornings ago, I had what may be a key insight about how to reverse adjustments when propagating changes back from the adjusted branch.

Tuesday was spent dealing with lock files. Turned out there were some bugs in the annex.pidlock configuration that prevented it from working, and could even lead to data loss. And then more lock files today, since I needed to lock git's index file the same way git does. This involved finding out how to emulate O_EXCL under Windows. Urgh.

Finally got back to working on adjusted branches today. And, I've just gotten syncing of commits from adjusted branches back to the original branch working!
Time for a short demo of what I've been building for the past couple weeks:

```
joey@darkstar:~/tmp/demo>ls -l
total 4
lrwxrwxrwx 1 joey joey 190 Mar 3 17:09 bigfile ->
joey@darkstar:~/tmp/demo>git annex adjust
Switched to branch 'adjusted/master(unlocked)'
ok
joey@darkstar:~/tmp/demo#master(unlocked)>ls -l
total 4
-rw-r--r-- 1 joey joey 1048576 Mar 3 17:09 bigfile
```

Entering the adjusted branch unlocked all the files.

```
joey@darkstar:~/tmp/demo#master(unlocked)>git mv bigfile newname
joey@darkstar:~/tmp/demo#master(unlocked)>git commit -m rename
[adjusted/master(unlocked) 29e1bc8] rename
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename bigfile => newname (100%)
joey@darkstar:~/tmp/demo#master(unlocked)>git log --pretty=oneline
29e1bc835080298bbeeaa4a9faf42858c050cad5 rename
a195537dc5beeee73fc026246bd102bae9770389 git-annex adjusted branch
5dc1d94d40af4bf4a88b52805e2a3ae855122958 add
joey@darkstar:~/tmp/demo#master(unlocked)>git log --pretty=oneline master
5dc1d94d40af4bf4a88b52805e2a3ae855122958 add
```

The commit was made on top of the commit that generated the adjusted branch. It's not yet reached the master branch.

```
joey@darkstar:~/tmp/demo#master(unlocked)>git annex sync
commit ok
joey@darkstar:~/tmp/demo#master(unlocked)>git log --pretty=oneline
b60c5d6dfe55107431b80382596f14f4dcd259c9 git-annex adjusted branch
joey@darkstar:~/tmp/demo#master(unlocked)>git log --pretty=oneline master
```

Now the commit has reached master. Notice how the history of the adjusted branch was rebased on top of the updated master branch as well.

```
joey@darkstar:~/tmp/demo#master(unlocked)>ls -l
total 1024
-rw-r--r-- 1 joey joey 1048576 Mar 3 17:09 newname
joey@darkstar:~/tmp/demo#master(unlocked)>git checkout master
Switched to branch 'master'
joey@darkstar:~/tmp/demo>ls -l
total 4
lrwxrwxrwx 1 joey joey 190 Mar 3 17:12 newname ->
```

Just as we'd want, the file is locked in master, and unlocked in the adjusted branch. (Not shown: git annex sync will also merge in and adjust changes from remotes.)
So, that all looks great! But, it's cheating a bit, because it locks all files when updating the master branch. I need to make it remember, somehow, when files were originally unlocked, and keep them unlocked. Also want to implement other adjustments, like hiding files whose content is not present.

Pushed out a release today, could not resist the leap day in the version number, and also there were enough bug fixes accumulated to make it worth doing.

I now have git-annex sync working inside adjusted branches, so pulls get adjusted appropriately before being merged into the adjusted branch. Seems to mostly work well, I did just find one bug in it though. Only propagating adjusted commits remains to be done to finish my adjusted branches prototype.

Now I have a proof of concept adjusted branches implementation, that creates a branch where all locked files are adjusted to be unlocked. It works! Building the adjusted branch is pretty fast; around 2 thousand files per second. And, I have a trick in my back pocket that could double that speed. It's important this be quite fast, because it'll be done often. Checking out the adjusted branch can be a bit slow though, since git runs git annex smudge once per unlocked file. So that might need to be optimised somehow. On the other hand, this should be done only rarely. I like that it generates reproducible git commits, so the same adjustments of the same branch will always have the same sha, no matter when and where it's done. Implementing that involved parsing git commit objects.

Next step will be merging pulled changes into the adjusted branch, while maintaining the desired adjustments.

Getting started on adjusted branches, taking a top-down and bottom-up approach. Yesterday I worked on improving the design. Today, built a git mktree interface that supports recursive tree generation and filtering, which is the low-level core of what's needed to implement the adjusted branches.
To test that, wrote a fun program that generates a git tree with all the filenames reversed.

    import Git.Tree
    import Git.CurrentRepo
    import Git.FilePath
    import Git.Types
    import System.FilePath

    main = do
        r <- Git.CurrentRepo.get
        (Tree t, cleanup) <- getTree (Ref "HEAD") r
        print =<< recordTree r (Tree (map reverseTree t))
        cleanup

    reverseTree :: TreeContent -> TreeContent
    reverseTree (TreeBlob f m s) = TreeBlob (reverseFile f) m s
    reverseTree (RecordedSubTree f s l) = NewSubTree (reverseFile f) (map reverseTree l)

    reverseFile :: TopFilePath -> TopFilePath
    reverseFile = asTopFilePath . joinPath . map reverse . splitPath . getTopFilePath

Also, fixed problems with the Android, Windows, and OSX builds today. Made a point release of the OSX dmg, because the last several releases of it will SIGILL on some hardware.

Should mention that there was a release two days ago. The main reason for the timing of that release is because the Linux standalone builds include glibc, which recently had a nasty security hole and had to be updated. Today, fixed a memory leak, and worked on getting caught up with backlog, which now stands at 112 messages.

In a v6 repository on a filesystem not supporting symlinks, it makes sense for commands like git annex add and git annex import to add the files unlocked, since locked files are not usable there. After implementing that, I also added an annex.addunlocked config setting, so that the same behavior can be configured in other repositories. Rest of the day was spent fixing up the test suite's v6 repository tests to work on FAT and Windows.

Made a no-cbits branch that removes several things that use C code and the FFI. I moved one of them out to a new haskell library. Others were replaced with other existing libraries. This will simplify git-annex's build process, and more library use is good. Planning to merge this branch in a week or two.

v6 unlocked files don't work on Windows.
I had assumed that since the build was succeeding, the test suite was passing there. But, it turns out the test suite was failing and somehow not failing the build. Have now fixed several problems with v6 on Windows. Still a couple test suite problems to address.

This was one of those days where I somehow end up dealing with tricky filename encoding problems all day. First, worked around inability for concurrent-output to display unicode characters when in a non-unicode locale. The normal trick that git-annex uses doesn't work in this case. Since it only affected -J, I decided to make git-annex detect the problem and make -J behave as if it was not built with the concurrent-output feature. So, it just doesn't display concurrent output, which is better than crashing with an encoding error.

The other problem affects v6 repos only. Seems that not all Strings will round trip through a persistent sqlite database. In particular, unicode surrogate characters are replaced with garbage. This is really a bug in persistent. But, for git-annex's purposes, it was possible to work around it, by detecting such Strings and serializing them differently. Then I had to enhance git annex fsck to fix up repositories that were affected by that problem.

Working on a design for adjusted branches. I've been kicking this idea around for a while to replace direct mode on crippled filesystems with v6 unlocked files. And the same thing would allow for hiding not present files. It's somewhat complicated, but the design I have seems like it would work.

The same parser was used for both preferred content expressions and annex.largefiles. Reworked that today, splitting it into two distinct parsers. It doesn't make any sense to use terms like "standard" or "lackingcopies" in annex.largefiles, and such are now rejected.
That groundwork also let me add a feature that only makes sense for annex.largefiles, and not for preferred content expressions: Matching by mime type, such as mimetype=text/*

For use cases that mix annexed files with files stored in git, the annex.largefiles config is more important in v6 repositories than before, since it configures the behavior of git add and even git commit -a. To make it possible to set annex.largefiles so it'll stick across clones of a repository, I have now made it be supported in .gitattributes files as well as git config. Setting it in .gitattributes looks a little bit different, since the regular .gitattributes syntax can be used to match on the filename.

    * annex.largefiles=(largerthan=100kb)
    *.c annex.largefiles=nothing

It seems there's no way to make a git attribute value contain whitespace. So, more complicated annex.largefiles expressions need to use parens to break up the words.

    * annex.largefiles=(largerthan=100kb)and(not(include=*.c))

Bugfix release of git-annex today. The release earlier this month had a bug that caused git annex sync --content to drop files that should be preferred content. So I had to rush out a fix after that bug was reported. (Some of the builds for the new release are still updating as I post this.)

In the past week I've been dealing with a blizzard. Snowed in for 6 days and counting. That has slightly back-burnered working on git-annex, and I've mostly been making enhancements that the DataLad project needs, along the lines of more commands supporting --batch and better --json output.

After finally releasing git-annex 6 yesterday, I did some catching up today, and got the message backlog back down from 120 to 100. By the way, the first OSX release of git-annex 6 was broken; I had to fix an issue on the builder and update the build. If you upgraded at the wrong time, you might find that git-annex doesn't run; if so reinstall it.
I now have an account on a separate OSX machine from the build machine, that automatically tests the daily build, to detect such problems.

Added git annex benchmark which uses the excellent Criterion to benchmark parts of git-annex. What I'm interested in benchmarking right now is the sqlite database that is used to manage v6 unlocked files, but having a built-in benchmark will probably have other uses later. The benchmark results were pretty good; queries from the database are quite fast (60 microseconds warm cache) and scale well as the size increases. I did find one scalability issue, which was fixed by adding another index to the database. It's the kind of schema change that's easy to make now, but would be a painful transition if it had to be done once this was in wide use.

Test suite is 100% green! Fixed one remaining bug it found, and solved the strange sqlite crash, which turned out to be caused by the test suite deleting its temporary repository before sqlite was done with the database inside it. The only remaining blocker for using v6 unlocked files is a bad interaction with shared clones. That should be easy to fix, so release of git-annex version 6 is now not far away!

While I've only talked about v6/smudge stuff here lately, I have been fixing various other bugs along the way, and have accumulated a dozen bug fixes since the last release. Earlier this week I fixed a bug in git annex unused. Yesterday I noticed that git annex migrate didn't copy over metadata. Today, fixed a crash of git annex view in a non-unicode locale. Etc. So it'll be good not to have the release blocked any longer by v6 stuff.

Been working hard on the last several test suite failures for v6 unlocked files. Now I've solved almost all of them, which is a big improvement to my confidence in its (almost) correctness. Frustratingly, the test suite is still not green after all this work. There's some kind of intermittent failure related to the sqlite database.
Only seems to happen when the test suite is running, and the error message is simply "Error", which is making it hard to track down.

Got the test suite passing 100%, but then added a pass that uses v6 unlocked files and 30-some more failures appeared. Fixed a couple of the bugs today.

After sprinting unexpectedly hard all December on v6, I need a change of pace, so I started digging into the website message backlog and fixed some bugs and posted some comments there.

Automatic merge conflict resolver updated to work with unlocked files in v6 repos. Fairly tricky and painful; thank goodness the test suite tests a lot of edge cases in that code.

If you've got some free holiday time, the v6 repository mode is now available in many of the daily builds, and there's documentation at unlocked files. It would be very useful now if you can give it a try. Use a clone or new repository for safety.

Yesterday I checked all parts of the code that special case direct mode, and found a few things that needed adjusting for v6 unlocked files. Today, I added the annex.thin config. Around 4 other major todo items need to be dealt with before this is ready for more than early adopters.

Got unexpectedly far today on optimising the database that v6 repositories use to keep track of unlocked files. The database schema may still need optimization, but everything else to do with the database is optimised. Writes to the database are queued together. And reads to the database avoid creating the database if it doesn't exist yet. Which means v5 repos, and v6 repos with no unlocked files, will avoid any database overhead.

Today was mostly spent making the assistant support v6 repositories. That was harder than expected, because I have not touched this part of the assistant's code much in a long time, and there are lots of tricky races and edge cases to deal with. The smudge branch has a 4500 line diff from master now, not counting documentation changes (another 500 lines).
The todo list for it is shrinking slowly now. May not get it done before the new year.

Two more days working on v6 and the smudge branch is almost ready to be merged. The test suite is passing again for v5 repos, and is almost passing for v6 repos. Also I decided to make git annex init create v5 repos for now, so git annex init --version=6 or a git annex upgrade is needed to get a v6 repo. So while I still have plenty of todo items for v6 repos, they are working reasonably well and almost ready for early adopters. The only real blocker to merging it is that the database stuff used by v6 is not optimised yet and probably slow, and even in v5 repos it will query the database. I hope to find an optimisation that avoids all database overhead unless unlocked files are used in a v6 repo. I'll probably make one more release before that is merged though.

Yesterday I fixed a small security hole in git annex repair, which could expose the contents of an otherwise not world-readable repository to local users.

BTW, the 2015 git-annex user survey closes in two weeks, please go fill it out if you haven't yet done so!

New special remote alert! Chris Kastorff has made a special remote supporting Backblaze's B2 storage service.

And I'm still working on v6 unlocked files. After beating on it for 2 more days, all git-annex commands should support them. There is still plenty of work to do on testing, upgrading, optimisation, merge conflict resolution, and reconciling staged changes.

Well, another day working on smudge filters, or unlocked files as the feature will be known when it's ready. Got both git annex get and git annex drop working for these files today. Get was the easy part; it just has to hard link or copy the object to the work tree file(s) that point to it. Handling dropping was hard. If the user drops a file, but it's unlocked and modified, it shouldn't reset it to the pointer file. For this, I reused the InodeCache stuff that was built for direct mode.
So the sqlite database tracks the InodeCaches of unlocked files, and when a key is dropped it can check if the file is modified. But that's not a complete solution, because when git uses a clean filter, it will write the file itself, and git-annex won't have an InodeCache for it. To handle this case, git-annex will fall back to verifying the content of the file when dropping it if its InodeCache isn't known. Bit of a shame to need an expensive checksum to drop an unlocked file; maybe the git clean filter interface will eventually be improved to let git-annex use it more efficiently.

Anyway, smudged aka unlocked files are working now well enough to be a proof of concept. I have several missing safety checks that need to be added to get the implementation to be really correct, and quite a lot of polishing still to do, including making unlock, lock, fsck, and merge handle them, and finishing repository upgrade code.

Made a lot of progress today. Implemented the database mapping a key to its associated files. As expected this database, when updated by the smudge/clean filters, is not always consistent with the current git work tree. In particular, commands like git mv don't update the database with the new filename. So queries of the database will need to do some additional work first to get it updated with any staged changes. But the database is good enough for a proof of concept, I hope.

Then I got git-annex commands treating smudged files as annexed files. So this works:

    joey@darkstar:~/tmp/new>git annex init
    init  ok
    (recording state in git...)
    joey@darkstar:~/tmp/new>cp ~/some.mp3 .
    joey@darkstar:~/tmp/new>git add some.mp3
    joey@darkstar:~/tmp/new>git diff --cached
    diff --git a/some.mp3 b/some.mp3
    new file mode 100644
    index 0000000..2df8868
    --- /dev/null
    +++ b/some.mp3
    @@ -0,0 +1 @@
    +/annex/objects/SHA256E-s191213--e4b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.mp3
    joey@darkstar:~/tmp/new>git annex whereis some.mp3
    whereis some.mp3 (1 copy)
        7de17427-329a-46ec-afd0-0a088f0d0b1b -- joey@darkstar:~/tmp/new [here]
    ok

get/drop don't yet update the smudged files, and that's the next step.

I've gotten git-annex working as a smudge/clean filter today in the smudge branch. It works ok in a local git repository. git add lets git-annex decide if it wants to annex a file's content, and checking out branches and other git commands involving those files works pretty well. It can sometimes be slow; git's smudge interface necessarily needs to copy the content of files around, particularly when checking out files, and so it's never going to be as fast as the good old git-annex symlink approach. Most of the slow parts are things that can't be done in direct mode repos though, like switching branches, so that isn't a regression.

No git-annex commands to manage the annexed content work yet. That will need a key to worktree file mapping to be maintained, and implementing that mapping and ensuring it's always consistent is probably going to be the harder part of this.

Also there's the question of how to handle upgrades from direct mode repositories. This will be an upgrade from annex.version 5 to 6, and you won't want to do it until all computers that have clones of a repository have upgraded to git-annex 6.x, since older versions won't be able to work with the upgraded repository. So, the repository upgrade will need to be run manually initially, and it seems I'll need to keep supporting direct mode for v5 repos in a transition period, which will probably be measured in years.
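For readers unfamiliar with the mechanism being hooked into here: git's smudge/clean filters can be demonstrated with plain git and sed. The following is an illustrative sketch of the general mechanism only, not git-annex's actual filter (which swaps annexed content for pointer files rather than rewriting text); the filter name and file contents are made up:

```shell
# Illustrative smudge/clean demo using only git and sed (no git-annex).
# "clean" rewrites content on the way INTO git; "smudge" on the way OUT.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config filter.demo.clean 'sed s/WORKTREE/REPO/'
git config filter.demo.smudge 'sed s/REPO/WORKTREE/'
echo '*.dat filter=demo' > .gitattributes
echo 'state=WORKTREE' > file.dat
git add file.dat
# The staged blob holds the cleaned version of the file:
git show :file.dat
```

Running this prints `state=REPO`: the worktree file still says `state=WORKTREE`, but the clean filter rewrote what git stored, just as git-annex's clean filter stores a pointer in place of the large file content.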
Spent a couple of days catching up on backlog, and my backlog is down to 80 messages now. Lowest in recent memory. Made the annex.largefiles config be honored by git annex import, git annex addurl, and even git annex importfeed. Planning to dive into smudge filters soon. The design seems ready to go, although there is some complication in needing to keep track of mappings between worktree files and annex keys.

The design is done, see stickers, and seems to work well, and even better is easy to modify. May find time to get these printed at some point!

Things have been relatively quiet on git-annex this week. I've been distracted with other projects. But, a library that I developed for propellor to help with concurrent console output has been rapidly developing into a kind of tiling region manager for the console, which may be just the thing git-annex needs on the concurrent download progress display front. After seeing it could go that way, and working on it around the clock to add features git-annex will need, here's a teaser of its abilities. Probably coming soonish to a git-annex -J near you!

The first release of git-annex was 5 years ago. There have been a total of 187 releases, growing to 50k lines of haskell code developed by 28 contributors (and another 10 or so external special remote contributors). Approximately 2000 people have posted questions, answers, bugs, todos, etc to this website, with 18900 posts in total. I've been funded for 3 of the 5 years to work on git-annex, with support from 1451 individuals and 6 organizations.

Released a new version today with rather more significant changes than usual (see recent devblog entries). The 2015 git-annex user survey is now live.

Feeling kind of ready to cut the next release of git-annex, but am giving the recent large changes just a little time to soak in and make sure they're ok. Yesterday, changed the order that git annex sync --content and the assistant do drops.
When dropping from the local repo and also some remotes, it now makes more sense to drop from the remotes first, and only then the local repo. There are scenarios where that order lets content be dropped from all the places that it should be, while the reverse order doesn't.

Today, caught up on recent bug reports, including fixing a bad merge commit that was made when git merge failed due to filenames not supported by a crippled filesystem, and cleaning up a network transport warning that was displayed incorrectly. Also developed a patch to the aws library to support google nearline when creating buckets.

Well, I've spent all week making git annex drop --from safe. On Tuesday I got a sinking feeling in my stomach, as I realized that there was a hole in git-annex's armor to prevent concurrent drops from violating numcopies or even losing the last copy of a file. The bug involved an unlikely race condition, and for all I know it's never happened in real life, but still this is not good. Since this is a potential data loss bug, expect a release pretty soon with the fix. And, there are 2 things to keep in mind about the fix:

- If a ssh remote is using an old version of git-annex, a drop may fail. Solution will be to just upgrade the git-annex on the remote to the fixed version.
- When a file is present in several special remotes, but not in any accessible git repositories, dropping it from one of the special remotes will now fail, where before it was allowed. Instead, the file has to be moved from one of the special remotes to the git repository, and can then safely be dropped from the git repository. This is a worrisome behavior change, but unavoidable.

Solving this clearly called for more locking, to prevent concurrency problems. But, at first I couldn't find a solution that would allow dropping content that was only located on special remotes.
I didn't want to make special remotes need to involve locking; that would be a nightmare to implement, and probably some existing special remotes don't have any way to do locking anyway. Happily, after thinking about it all through Wednesday, I found a solution, that while imperfect (see above) is probably the best one feasible. If my analysis is correct (and it seems so, although I'd like to write a more formal proof than the ad-hoc one I have so far), no locking is needed on special remotes, as long as the locking is done just right on the git repos and remotes. While this is not able to guarantee that numcopies is always preserved, it is able to guarantee that the last copy of a file is never removed. And, numcopies will always be preserved except for when this rare race condition occurs.

So, I've been implementing that all of yesterday and today. Getting it right involves building up 4 different kinds of evidence, which can be used to make sure that the last copy of a file can't possibly end up being dropped, no matter what other concurrent drops could be happening. I ended up with a very clean and robust implementation of this, and a 2,000 line diff. Whew!

Lots of porting work ongoing recently:

- I've been working with Goeke on building git-annex on Solaris/SmartOS. Who knows, this may lead to a binary distribution in some way, but to start with I got the disk free space code ported to Solaris, and have seen git-annex work there.
- Jirib has also been working on that same disk free code, porting it to OpenBSD. Hope to land an updated patch for that.
- Yury kindly updated the Windows autobuilder to a new Haskell Platform release, and I was able to land the winprocfix branch that fixes ssh password prompting in the webapp on Windows.
- The arm autobuilder is fixed and back in its colo, and should be making daily builds again.

While at the DerbyCon security conference, I got to thinking about verifying objects that git-annex downloads from remotes.
This can be expensive for big files, so git-annex has never done it at download time, instead deferring it to fsck time. But, that is a divergence from git, which always verifies checksums of objects it receives. So, it violates least surprise for git-annex to not verify checksums too. And this could weaken security in some use cases. So, today I changed that. Now whenever git-annex accepts an object into .git/annex/objects, it first verifies its checksum and size. I did add a setting to disable that and get back the old behavior: git config annex.verify false, and there's also a per-remote setting if you want to verify content from some remotes but not others.

I've mostly been chewing through old and new bug reports and support requests the past several days. The backlog is waaay low now -- only 82 messages! Just in time for me to go on another trip, to Louisville on Thursday.

Amazon S3 added an "Infrequent Access" storage class last week, and I got a patch into the haskell-aws library to support that, as well as partially supporting Google Nearline. That patch was accepted today, and git-annex is ready to use the new version of the library as soon as it's released.

At the end of today, I found myself rewriting git annex status to parse and adjust the output of git status --short. This new method makes it much more capable than before, including displaying Added files.

Made the release this morning, first one in 3 weeks. A fair lot of good stuff in there.

Just in time for the release, git-annex has support for Ceph. Thanks to Mesar Hameed for building the external special remote!

Seems that Git for Windows was released a few weeks ago, replacing msysgit. There were a couple problems using git-annex with that package of git, which I fixed on Thursday. The next release of git-annex won't work with msysgit any longer though; only with Git for Windows. On Friday, I improved the Windows package further, making it work even when git is not added to the system PATH.
In such an installation, git-annex will now work inside the "git bash" window, and I even got the webapp starting from the menu working without git in PATH.

In other dependency fun, the daily builds for Linux got broken due to a glibc bug in Debian unstable/testing, which makes the bundled curl and ssh segfault. With some difficulty I tracked that down, and it turns out the bug has been fixed upstream for quite a while. The daily builds are now using the fixed glibc 2.21.

Today, got back to making useful improvements, rather than chasing dependencies. Improved the bash completion for remotes and backends, made annex.hardlink be used more, and made special remotes that are configured with autoenable=true get automatically enabled by git annex init.

Today was a scramble to get caught up after weeks away. Got the message backlog down from over 160 to 123. Fixed two reversions, worked around a strange bug, and implemented support for the gpg.program configuration, and made several smaller improvements.

Did some work on Friday and Monday to let external special remotes be used in a readonly mode. This lets files that are stored in the remote be downloaded by git-annex without the user needing to install the external special remote program. For this to work, the external special remote just has to tell git-annex the urls to use. This was developed in collaboration with Benjamin Gilbert, who is developing gcsannex, a Google Cloud Storage special remote.

Today, got caught up with recent traffic, including fixing a couple of bugs. The backlog remains in the low 90's, which is a good place to be as I prepare for my August vacation week in the SF Bay Area, followed by a week for ICFP and the Haskell Symposium in Vancouver.

Been doing a little bit of optimisation work. Which meant, first improving the --debug output to show fractions of a second, and show when commands exit. That let me measure what takes up time when downloading files from ssh remotes.
Found one place I could spawn a thread to run a cleanup action, and this simple change reduced the non-data-transfer overhead to 1/6th of what it had been!

Catching up on weekend's traffic, and preparing for a release tomorrow. Found another place where the optparse-applicative conversion broke some command-line parsing; using git-annex metadata to dump metadata recursively got broken. This is the second known bug caused by that transition, which is not too surprising given how large it was. Tracked down and fixed a very tricky encoding problem with metadata values. The arm autobuilder broke so it won't boot; got a serial console hooked up to it and looks like a botched upgrade resulting in a udev/systemd/linux version mismatch.

The SHA-3 specification was released yesterday; git-annex got support for using SHA-3 hashes today. I had to add support for building with the new cryptonite library, as cryptohash doesn't (correctly) implement SHA-3 yet. Of course, nobody is likely to find a use for this for years, since SHA-2 is still perfectly fine, but it's nice to get support for new hashes in early.

Took a half day and worked on making it simpler to set up ssh remotes. The complexity I've gotten rid of is there's no need to take any action to get a ssh remote initialized as a git-annex repository. Where before, either git-annex init needed to be run on the remote, or a git-annex branch manually pushed to it, now the remote can simply be added and git annex sync will do the rest. This needed git-annex-shell changes, so will only work once servers are upgraded to use a newer version of git-annex.

Ended up spending most of today working on git annex proxy. It had a lot of buggy edge cases, which are all cleaned up now. Spent another couple hours catching up on recent traffic and fixing a couple other misc bugs.

Work today has started in the git-annex bug tracker, but the real bugs were elsewhere.
Got a patch into hinotify to fix its handling of filenames received from inotify events when used in a non-unicode locale. Tracked down why gitlab's git-annex-shell fails to initialize gcrypt repositories, and filed a bug on gitlab-shell.

Yesterday, I got the Android autobuilder fixed. I had started upgrading it to new versions of yesod etc, 2 months ago, and something in those new versions led to character encoding problems that broke the template haskell splicing. Had to throw away the work done for that upgrade, but at least it's building again, at last.

Made a release this morning, mostly because the release earlier this week turns out to have accidentally removed several options from git annex copy. Spent some time this afternoon improving how git-annex shuts down when --time-limit is used. This used to be a quick and dirty shutdown, similar to if git-annex were ctrl-c'd, but I reworked things so it does a clean shutdown, including running any buffered git commands. This made incremental fsck with --time-limit resume much better, since it saves the incremental fsck database on shutdown. Also tuned when the database gets checkpointed during an incremental fsck, to resume better after it's interrupted.

Made a release today, with recent work, including the optparse-applicative transition and initial gitlab.com support in the webapp. I had time before the release to work out most of the wrinkles in the gitlab.com support, but was not able to get gcrypt encrypted repos to work with gitlab, for reasons that remain murky. Their git-annex-shell seems to be misbehaving somehow. Will need to get some debugging assistance from the gitlab.com developers to figure that out.

I've been working on adding GitLab support to the webapp for the past 3 days. That's not the only thing I've been working on; I've continued to work on the older parts of the backlog, which is now shrunk to 91 messages, and made some minor improvements and bugfixes.
But, GitLab support in the webapp has certainly taken longer than I'd have expected. Only had to write 82 lines of GitLab specific code so far, but it went slowly. The user will need to cut and paste repository url and ssh public key back and forth between the webapp and GitLab for now. And the way GitLab repositories use git-annex makes it a bit tricky to set up; in one case the webapp has to do a forced push dry run to check if the repository on GitLab can be accessed by ssh. I found a way to adapt the existing code for setting up a ssh server to also support GitLab, so beyond the repo url prompt and ssh key setup, everything will be reused. I have something that works now, but there are lots of cases to test (encrypted repositories, enabling existing repositories, etc), so will need to work on it a bit more before merging this feature.

Also took some time to split the centralized git repository tutorial into three parts, one for each of GitHub, GitLab, and self-administered servers.

The git-annex package in Debian unstable hasn't been updated for 8 months. This is about to change; Richard Hartmann has stepped up and is preparing an upload of a recent version. Yay!

Worked on bash tab completion some more. Got "git annex" to also tab complete. However, for that to work perfectly when using bash-completion to demand-load completion scripts, a small improvement is needed in git's own completion script, to have it load git-annex's completion script. I sent a patch for that to the git developers, and hopefully it'll get accepted soon. Then fixed a relatively long-standing bug that prevented uploads to chunked remotes from resuming after the last successfully uploaded chunk.

Worked through the rest of the changes this weekend and morning, and the optparse-applicative branch has landed in master, including bash completion support.

Day 3 of the optparse-applicative conversion.
    116 files changed, 1607 insertions(+), 1135 deletions(-)

At this point, everything is done except for around 20 sub-commands. Probably takes 15 minutes work for each. Will finish plowing through it in the evenings. Meanwhile, made the release of version 5.20150710. The Android build for this version is not available yet, since I broke the autobuilder last week and haven't fixed it yet.

Now working on converting git-annex to use optparse-applicative for its command line parsing. I've wanted to do this for a long time, because the current code for options is generally horrible, runs in IO, and is not at all type safe, while optparse-applicative has wonderful composable parsers and lets each subcommand have its own data type representing all its options. What pushed me over the edge is that optparse-applicative has automatic bash completion!

    # source <(git-annex --bash-completion-script `which git-annex`)
    # git-annex fsck -
    --all    --key    -S    --from    --more    -U

Since nobody has managed to write a full bash completion for git-annex before, let alone keep it up-to-date with changes to the code, automating the problem away is a really nice win. The conversion is a rather huge undertaking; the diff is already over 3000 lines large after 8 hours of work, and I'm maybe 1/3rd done, with the groundwork laid (except for global options still todo) and a few subcommands converted. This won't land for this week's release; it'll need a lot of testing before it'll be ready for any release.

Mostly spent today getting to older messages in the backlog. This did result in a few fixes, but with 97 old messages left, I can feel the diminishing returns setting in, to try to understand old bug reports that are often unclear or lacking necessary info to reproduce them. By the way, if you feel your bug report or question has gotten lost in my backlog, the best thing to do is post an update to it, and help me reproduce it, or clarify it.
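The generated completion can also be persisted instead of sourced per-session. This is a hedged sketch: the --bash-completion-script flag is the one shown above, but the install path is an assumption based on common bash-completion conventions, so adjust it for your system:

```shell
# Sketch: persist git-annex's generated bash completion script.
# --bash-completion-script is from the post; the destination directory is
# an assumed bash-completion user location, not something git-annex dictates.
mkdir -p ~/.local/share/bash-completion/completions
git-annex --bash-completion-script "$(command -v git-annex)" \
    > ~/.local/share/bash-completion/completions/git-annex
```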
Moved on to looking through todo, which was a more productive way to find useful things to work on. Best change made today is that git annex unused can now be configured to look at the reflog. So, old versions of files are considered still used until the reflog expires. If you've wanted a way to only delete (or move away) unused files after they get to a certain age, this is a way to do that ...

Now caught up on nearly all of my backlog of messages, and indeed am getting to some messages that have been waiting for months. Backlog is down to 113! Couple of bugfixes resulted, and many questions answered. Think I'll spend a couple more days dealing with the older part of the backlog. Then, when that reaches diminishing returns, I'll move on to some big change. I have been thinking about the caching database on and off..

Back, and have spent all day focusing on new bug reports. All told, I fixed 4 bugs, followed up on all other bugs reported while I was away, and fixed the android autobuilder. The message backlog started the day at 250 or something, and is down to 178 now. Looks like others have been following up to forum posts while I was away (thanks!) so those should clear quickly.

Well, not the literal last push, but I've caught up on as much backlog as I can (142 messages remain) and spent today developing a few final features before tomorrow's release.

Some of the newer things displayed by git annex info were not included in the --json mode output. The json includes everything now.

git annex sync --all --content will make it consider all known annexed objects, not only those in the current work tree. By default that syncs all versions of all files, but of course preferred content can tune what repositories want. To make that work well with preferred content settings like "include=*.mp3", it makes two passes. The first pass is over the work tree, so preferred content expressions that match files by name will work.
The second pass is over all known keys, and preferred content expressions that don't care about the filename can match those keys. Two passes feels a bit like a hack, but it's a lot better than --all making nothing be synced when a preferred content expression matches against filenames... I actually had to resort to bloom filters to make the two passes work. This new feature led to some slightly tricky follow-on changes to the standard groups preferred content expressions.

Ever since git annex fsck --all was added, people have complained that there's no way to stop it complaining about keys whose content is gone for good. Well, there is now: git annex dead --key can be used when you know that a key is no longer available and want fsck to stop complaining about it. Running fsck on a directory will intentionally still complain about files in the directory with missing contents, even if the keys have been marked dead.

The crucial part was finding a good way to store the information; luckily location log files are parsed in a way that lets it be added there without breaking backwards compatibility. A bonus is that adding a key's content back to the annex will automatically bring it back from the dead.

I'm pondering making git annex drop --force automatically mark a key as dead when the last copy is dropped, but I don't know if it's too DWIM or worth the complication. Another approach would be to let fsck mark keys as dead, but that would certainly need an extra flag.

Now git-annex can be used to set up a public S3 remote. If you've cloned a repository that knows about such a remote, you can use the S3 remote without needing any S3 credentials. Read-only of course. This tip shows how to do it: public Amazon S3 remote

One rather neat way to use this is to configure the remote with encryption=shared. Then, the files stored in S3 will be encrypted, and anyone with access to the git repository can get and decrypt the files.
This feature will work for at least AWS S3, and for the Internet Archive's S3. It may work for other S3 services that can be configured to publish their files over unauthenticated http. There's a publicurl configuration setting to allow specifying the url when using a service that git-annex doesn't know the url for.

Actually, there was a hack for the IA before, that added the public url to an item when it was uploaded to the IA. While that hack is now not necessary, I've left it in place for now, to avoid breaking anything that depended on it.

Worked thru some backlog. Currently stands at 152 messages.

Merged work from Sebastian Reuße to teach the assistant to listen for systemd-networkd dbus events when the network connection changes.

Added git annex get --incomplete, which can be used to resume whatever it was you were downloading earlier and interrupted, that you've forgotten about.

The Isuma Media Players project is using git-annex to "create a two-way, distributed content distribution network for communities with poor connexions to the internet". My understanding is this involves places waaay up North. Reading over their design docs is quite interesting, both to see how they've leveraged things like git-annex metadata and preferred content expressions and the assistant, and areas where git-annex falls short.

Between DataLad, Isuma, Baobáxia, IA.BAK, and more, there are a lot of projects being built on top of git-annex now!

On Friday I installed the CubieTruck that is the new autobuilder for arm. This autobuilder is hosted at WetKnee Books, so its physical security includes a swamp. The hardware is not fast, but it's faster and far more stable than qemu arm emulation.

By Saturday I got the build environment all installed nicely, including building libraries that use template haskell! But, ghc crashed with an internal error building git-annex. I upgraded to ghc 7.10.1 (which took another day), but it also crashed.
Was almost giving up, but I looked at the ghc parameters, and -j2 stuck out in them. Removed the -j2, and the build works w/o crashing! \o/ (Filed a bug report on ghc.)

Anarcat has been working on improving the man pages, including lots of linking to related commands.

The 2015 Haskell Communities and Activities Report is out, and includes an entry for git-annex for the first time!

After a less active than usual week (dentist), I made a release last Friday. Unfortunately, it turns out that the Linux standalone builds in that release don't include the webapp. So, another release is planned tomorrow.

Yesterday and part of today I dug into the windows ssh webapp password entry broken reversion. Eventually cracked the problem; it seems that different versions of ssh for Windows do different things in an isatty check, and there's a flag that can be passed when starting ssh to make it not see a controlling tty. However, this needs changes to the process library, which db48x and I have now coded up. So a fix for this bug is waiting on a new release of that library. Oh well.

Rest of today was catching up on recent traffic, and improving the behavior of git annex fsck when there's a disk IO error while checksumming a file. Now it'll detect a hardware fault exception, and take that to mean the file is bad, and move it to the bad files directory, instead of just crashing.

I need better tooling to create disk IO errors on demand. Yanking disks out works, but is a blunt instrument. Anyone know of good tools for that?

There's something rotten in POSIX fcntl locking. It's not composable, or thread-safe. The most obvious problem with it is that if you have 2 threads, and they both try to take an exclusive lock of the same file (each opening it separately) ... They'll both succeed. Unlike 2 separate processes, where only one can take the lock.
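That same-process gotcha can be seen from any language that exposes fcntl record locks. A quick Python illustration (not git-annex code; expected behavior on POSIX systems, where record locks are owned per-process, not per-fd or per-thread):

```python
# Two exclusive fcntl locks on the same file, taken through two
# separate file descriptors in the SAME process, both succeed.
# From a second process, the second lockf would raise OSError (EAGAIN).
import fcntl
import tempfile

with tempfile.NamedTemporaryFile() as f:
    fd1 = open(f.name, "w")
    fd2 = open(f.name, "w")

    # First "thread" takes an exclusive, non-blocking lock.
    fcntl.lockf(fd1, fcntl.LOCK_EX | fcntl.LOCK_NB)

    # Second exclusive lock on a separate fd, same process:
    try:
        fcntl.lockf(fd2, fcntl.LOCK_EX | fcntl.LOCK_NB)
        both_succeeded = True
    except OSError:
        both_succeeded = False

    print(both_succeeded)  # True: no mutual exclusion within one process
    fd1.close()
    fd2.close()
```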
Then the really crazy bit: If a process has a lock file open and fcntl locked, and then the same process opens the lock file again, for any reason, closing the new FD will release the lock that was set using the other FD. So, that's a massive gotcha if you're writing complex multithreaded code. Or generally for composition of code. Of course, C programmers deal with this kind of thing all the time, but in the clean world of Haskell, this is a glaring problem. We don't expect to need to worry about this kind of unrelated side effect that breaks composition and thread safety.

After noticing this problem affected git-annex in at least one place, I have to assume there could be more. And I don't want to need to worry about this problem forever. So, I have been working today on a clean fix that I can cleanly switch all my lock-related code to use.

One reasonable approach would be to avoid fcntl locking, and use flock. But, flock works even less well on NFS than fcntl, and git-annex relies on some fcntl locking features. On Linux, there's an "open file description locks" feature that fixes POSIX fcntl locking to not have this horrible wart, but that's not portable.

Instead, my approach is to keep track of which files the process has locked. If it tries to do something with a lockfile that it already has locked, it avoids opening the same file again, and instead implements its own in-process locking behavior. I use STM to do that in a thread-safe manner.

I should probably break out git-annex's lock file handling code as a library. Eventually.. This was about as much fun as a root canal, and I'm having a real one tomorrow.

git-annex is now included in Stackage!

Daniel Kahn Gillmor is doing some work on reproducible builds of git-annex.

Today I added a feature to git annex unused that lets the user tune which refs they are interested in using. Annexed objects that are used by other refs then are considered unused.
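The in-process lock tracking described above is done with STM in git-annex; the same idea can be sketched in Python with a mutex-guarded registry (hypothetical names, not git-annex's actual API):

```python
import threading

class LockRegistry:
    """Per-process lock registry: threads serialize on an in-process
    mutex per path, and the underlying fcntl lock is only taken and
    dropped once per holder. Because only one thread at a time ever
    opens the lock file, no second open can clobber the fcntl lock.
    Models exclusive locks only; shared-lock refcounting is omitted."""

    def __init__(self, take_lock, drop_lock):
        self._mutex = threading.Lock()
        self._locks = {}          # path -> per-path in-process mutex
        self._take = take_lock    # e.g. open the file and fcntl-lock it
        self._drop = drop_lock    # e.g. close the fd, releasing the lock

    def _per_path(self, path):
        with self._mutex:
            return self._locks.setdefault(path, threading.Lock())

    def with_exclusive(self, path, action):
        m = self._per_path(path)
        with m:                   # threads exclude each other in-process
            self._take(path)
            try:
                return action()
            finally:
                self._drop(path)
```

The Haskell version gets the same effect with a TVar holding the table of held locks, which additionally composes nicely with other STM transactions.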
Did a fairly complicated refspec format for this, with globs and include/exclude of refs. Example: +refs/heads/*:+HEAD^:+refs/tags/*:-refs/tags/old-tag

Since Google dropped openid support, there seems to have been less activity on this website. Although possibly also a higher signal to noise ratio. I have been working on some ikiwiki changes to make it easier for users who don't have an openid to contribute. So git-annex's website should soon let you log in and make posts with just an email address.

People sometimes ask for a git-annex mailing list. I wouldn't mind having one, and would certainly subscribe, but don't see any reason that I should be involved in running it.

Implemented git annex drop --all. This also added for free drop with --unused and --key, which overlap with git annex dropunused and git annex dropkey.

The concurrentprogress branch had gone too long without being merged, and had a lot of merge conflicts. I resolved those, and went ahead and merged it into master. However, since the ascii-progress library is not ready yet, I made it a build flag, and it will build without it by default. So, git annex get -J5 can be used now, but no progress bars will display yet.

When doing concurrent downloads, either with the new -J or by hand by running multiple processes, there was a bug in the diskreserve checking code. It didn't consider the disk space that was in the process of being used by other concurrent downloads, so would let more downloads start up than there was space for. I was able to fix this pretty easily, thanks to the transfer log files. Those were originally added just to let the webapp display transfers, but proved very helpful here!

Finally, made .git/annex/transfer/failed/ files stop accumulating when the assistant is not being used. Looked into also cleaning up stale .git/annex/transfer/{upload,download}/ files (from interrupted transfers).
But, since those are used as lock files, it's difficult to remove them in a concurrency safe way.

Update: Unfortunately, I turned out to have stumbled over an apparent bug in haskell's implementation of file locking. Had to work around that. Happily, the workaround also let me implement cleanup of stale transfer info files, left behind when a git-annex process was interrupted. So, .git/annex/transfer/ will entirely stop accumulating cruft!

Lazy afternoon spent porting git-annex to build under ghc 7.10. Required rather a lot of changes to build, and even more to build cleanly after the AMP transition. Unfortunately, ghc 7.10 has started warning about every line that uses tab for indentation. I had to add additional cruft to turn those warnings off everywhere, and cannot say I'm happy about this at all.

Got the release out after more struggling with ssh on windows and a last minute fix to the quvi support.

The downloads.kitenet.net git annex repository had accumulated 6 gb of past builds that were not publicly available. I am publishing those on the Internet Archive now, so past builds can be downloaded using git-annex in that repository in the usual way. This worked great!

I have ordered a CubieTruck with 2 gb of ram to use for the new Arm builder. Hosting still TBD.

Looks like git-annex is almost ready to be included in stackage, which will make building it from source much less likely to fail due to broken libraries etc.

I've not been blogging, but have been busy this week. Backlog is down to 113 messages.

Tuesday: I got a weird bug report where git annex get was deleting a file. This turned out to be a bug in wget ftp://... where it would delete a symlink that was not where it had been told to download the file to. I put a workaround in git-annex; wget is now run in a temp directory. But this was a legitimate wget bug, and it's now been reported to the wget developers and will hopefully get fixed there.
Wednesday: Added a --batch mode for several plumbing commands (contentlocation, examinekey, and lookupkey). This avoids startup overhead, and so lets a lot of queries be done much faster. The implementation should make it easy to add --batch to more plumbing commands as needed, and could probably extend to non-plumbing commands too.

Today: The first 5 hours involved an incompatible mess of ssh and rsync versions on Windows. A Gordian knot of brokenness and dependency hell. I finally found a solution which involves downgrading the cygwin rsync to an older version, and using msysgit's ssh rather than cygwin's.

Finished up today with more post-Debian-release changes. Landed a patch to switch from dataenc to sandi that had been waiting since 2013, and got sandi installed on all the git-annex autobuilders. Finished up with some prep for a release tomorrow.

Finally, Debian has a new enough ghc that it can build template haskell on arm! So, whenever a new version of git-annex finally gets into Debian (I hope soon), the webapp will be available on arm for those arm laptops. Yay!

This also means I have the opportunity to make the standalone arm build be done much more simply. Currently it involves qemu and a separate companion native-mode container that it has to ssh into to build stuff, and that has to have the same versions of all libraries. It's just enormously complicated and touchy. With template haskell building support, all that complexity can fall away.

What I'd really like to do is get a fast-ish arm box with 2gb of ram hosted somewhere, and use that to do the builds, in native mode. Anyone want to help provide such a box for git-annex arm autobuilds?

Reduced activity this week (didn't work on the assistant after all), but several things got done:

Monday: Fixed fsck --fast --from remote to not fail when the remote didn't support fast copy mode.
And dealt with an incompatibility in S3 bucket names; the old hS3 library supported upper-case bucket names but the new one needs them all in lower case.

Wednesday: Caught up on most recent backlog, made some improvements to error handling in import, and improved integration with KDE's file manager to work with newer versions.

Today: Made import --deduplicate/--clean-duplicates actively verify that enough copies of a file exist before deleting it. And, thinking about some options for batch mode access to git-annex plumbing, to speed up things that use it a lot.

Posted a design for balanced preferred content. This would let preferred content expressions assign each file to N repositories out of a group, selected using math. Adding a repository could optionally be configured to automatically rebalance the files (not very bandwidth efficiently though). I think some have asked for a feature like this before, so read the design and see if it would be useful for you.

Spent a while debugging a problem with a S3 remote, which seems to have been a misconfiguration in the end. But several improvements came out of it to make it easier to debug S3 in the future etc.

I hope that today's git-annex release will be landing in Debian unstable toward the end of the month. And I'm looking forward to some changes that have been blocked by wanting to keep git-annex buildable on Debian 7. Yesterday I got rid of the SHA dependency, switching git-annex to use a newer version of cryptohash for HMAC generation (which its author Vincent Hanquez kindly added to it when I requested it, waay back in 2013). I'm considering using the LambdaCase extension to clean up a lot of the code next, and there are 500+ lines of old yesod compatibility code I can eventually remove. These changes and others will prevent backporting to the soon to be Debian oldstable, but the standalone tarball will still work there.
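The balanced preferred content design mentioned above assigns each file to N repositories out of a group using a hash of the key. One standard way to get a stable, roughly balanced choice is rendezvous hashing; this is an illustration of the idea, not the design's actual formula:

```python
import hashlib

def assigned_repos(key, repos, n):
    """Pick a stable set of n repositories for a key: score every
    (key, repo) pair with a hash and keep the n best-scoring repos.
    Every participant computes the same answer with no coordination,
    and adding or removing a repo only reshuffles a proportional
    share of the keys."""
    def score(repo):
        return hashlib.sha256((key + ":" + repo).encode()).hexdigest()
    return sorted(sorted(repos, key=score)[:n])
```

A preferred content expression could then boil down to "is my uuid in assigned_repos(key, group, N)?", with rebalancing meaning re-evaluating that after group membership changes.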
And, the git-annex-standalone.deb that can be installed on any version of Debian is now available from the NeuroDebian repository, and its build support has been merged into the source tree.

In the run up to the release today, I also dealt with getting the Windows build tested and working, now that it's been updated to newer versions of rsync, ssh, etc from Cygwin. Had to add several more dlls to the installer. That testing also turned up a case where git annex init could fail, which got a last-minute fix.

PS, scroll down this 10 year git timeline and see what you find!

Recent work has included improving fsck --from remote (and fixing a reversion caused by the relative path changes in January), and making annex.diskreserve be checked in more cases. And added a git annex required command for setting required content.

Also, I want to thank several people for their work:

- Roy sent a patch to enable http proxy support.. despite having only learned some haskell by "30 mins with YAHT". I investigated that more, and no patch is actually necessary, but just a newer version of the http-client library.
- CandyAngel has been posting lots of helpful comments on the website, including this tip that significantly speeds up a large git repository.
- Øyvind fixed a lot of typos throughout the git-annex documentation.
- Yaroslav has created a git-annex-standalone.deb package that will work on any system where debian packages can be installed, no matter how out of date it is (within reason), using the same methods as the standalone tarball.

Mostly working on Windows recently. Fixed handling of git repos on different drive letters. Fixed crazy start menu loop. Worked around a strange msysgit version problem. Also some more work on the concurrentprogress branch, making the progress display prettier.

Added one nice new feature yesterday: git annex info $dir now includes a table of repositories that are storing files in the directory, with their sizes.
    repositories containing these files:
        288.98 MB: ca9c5d52-f03a-11df-ac14-6b772ffe59f9 -- archive-5
        288.98 MB: f1c0ce8d-d848-4d21-988c-dd78eed172e8 -- archive-8
        10.48 MB: 587b9ccf-4548-4d6f-9765-27faecc4105f -- darkstar
        15.18 kB: 42d47daa-45fd-11e0-9827-9f142c1630b3 -- origin

Nice thing about this feature is it's done for free, with no extra work other than a little bit of addition. All the heavy location lookup work was already being done to get the numcopies stats.

Back working on git annex get --jobs=N today. It was going very well, until I realized I had a hard problem on my hands.

The hard problem is that the AnnexState structure at the core of git-annex is not able to be shared among multiple threads at all. There's too much complicated mutable state going on in there for that to be feasible at all.

In the git-annex assistant, which uses many threads, I long ago worked around this problem, by having a single shared AnnexState; when a thread needs to run an Annex action, it blocks until no other thread is using it. This worked ok for the assistant, with a little bit of thought to avoid long-duration Annex actions that could stall the rest of it. That won't work for concurrent get etc.

I spent a while investigating maybe making AnnexState thread safe, but it's just not built for it. Too many ways that can go wrong. For example, there's a CatFileHandle in the AnnexState. If two threads are running, they can both try to talk to the same git cat-file --batch command at once, with bad results. Worse yet, some parts of the code do things like modifying the AnnexState's Git repo to add environment variables to use when running git commands.

It's not all gloom and doom though. Only very isolated parts of the code change the working directory or set environment variables. And the assistant has surely smoked out other thread concurrency problems already.
And, separate git-annex programs can be run concurrently with no problems at all; it uses file locking to avoid different processes getting in each other's way. So AnnexState is the only remaining obstacle to concurrency.

So, here's how I've worked around it: When git annex get -J10 is run, it will start by allocating 10 job slots. A fresh AnnexState will be created, and copied into each slot. Each time a job runs, it uses its slot's own AnnexState. This means 10 git cat-file processes, and maybe some contention over lock files, but generally, a nice, easy, and hopefully trouble-free multithreaded mode.

And indeed, I've gotten git annex get -J10 working robustly! And from there it was trivial to enable -J for move and copy and mirror too!

The only real blocker to merging the concurrentprogress branch is some bugs in the ascii-progress library that make it draw very scrambled progress bars the way git-annex uses it.

I've had to release git-annex twice this week to fix reversions. On Monday, just after I made a planned release, I discovered a bug in it, and had to update it with a .1 release. Today's release fixes 2 other reversions introduced by recent changes, both only affecting the assistant.

Before making today's release, I did a bunch of other minor bugfixes and improvements, including adding a new contentlocation plumbing command. This release also changes git annex add when annex.largefiles is configured, so it will git add the non-large files. That is particularly useful in direct mode.

I feel that the assistant needs some TLC, so I might devote a week to it in the latter part of this month. My current funding doesn't cover work on the assistant, but I should have some spare time toward the end of the month.

Rethought distributed fsck. It's not really a fsck, but an expiration of inactive repositories, where fscking is one kind of activity. That insight let me reimplement it much more efficiently.
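The job-slot scheme described above, where each of N slots gets its own private copy of the state instead of all threads sharing one, can be sketched like this (hypothetical names; git-annex's real slots each carry a full AnnexState, including their own git cat-file handle):

```python
import queue
import threading

def run_jobs(jobs, make_state, num_slots):
    """Allocate num_slots fresh state objects up front. Each worker
    checks a state out of the pool, runs a job against it, and puts
    it back, so no state object is ever used by two threads at once."""
    slots = queue.Queue()
    for _ in range(num_slots):
        slots.put(make_state())   # one fresh state per slot
    pending = queue.Queue()
    for job in jobs:
        pending.put(job)

    def worker():
        while True:
            try:
                job = pending.get_nowait()
            except queue.Empty:
                return
            state = slots.get()   # exclusive use of one slot's state
            try:
                job(state)
            finally:
                slots.put(state)

    threads = [threading.Thread(target=worker) for _ in range(num_slots)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

The trade-off is the one noted in the text: N slots means N copies of per-state resources (like cat-file processes), in exchange for not having to make the state itself thread safe.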
Rather than updating all the location logs to prove it was active, git annex fsck can simply and inexpensively update an activity log. It's so cheap it'll do it by default. The git annex expire command then reads the activity log and expires (or unexpires) repositories that have not been active in the desired time period. Expiring a repository simply marks it as dead.

Yesterday, finished making --quiet really be quiet. That sounds easy, but it took several hours. On the concurrentprogress branch, I have ascii-progress hooked up and working, but it's not quite ready for prime time.

I've started work on parallel get. Today, laid the groundwork in two areas:

Evaluated the ascii-progress haskell library. It can display multiple progress bars in the terminal, portably, and its author Pedro Tacla Yamada has kindly offered to improve it to meet git-annex's needs. I ended up filing 10 issues on it today, around 3 of them blockers for git-annex using it.

Worked on making --quiet more quiet. Commands like rsync and wget need to have their progress output disabled when run in parallel. Didn't quite finish this yet.

Yesterday I made some improvements to how git-annex behaves when it's passed a massive number of directories or files on the command line. Eg, when driven by xargs. There turned out to be some bugs in that scenario. One problem I kind of had to paper over: While git-annex get normally is careful to get the files in the same order they were listed on the command line, it becomes very expensive to expand directories using git-ls-files, and reorder its output to preserve order, when a large number of files are passed on the command line. There was a O(N*M) time blowup. I worked around it by making it only preserve the order of the first 100 files. Assumption being that if you're specifying so many files on the command line, you probably have less of an attachment to their ordering.

Added two options to git annex fsck that allow for a form of distributed fsck.
This is useful in situations where repositories cannot be trusted to continue to exist, and cannot be checked directly, but you'd still like to keep track of their status. iabackup is one use case for this. By running a periodic fsck with the --distributed option, the repositories can verify that they still exist and that the information about their contents is still accurate. This is done by doing an extra update of the location log each time a file is verified by fsck to still be in the repository.

The other option looks like --expire="30d somerepo:60d". It checks that each specified repository has recorded a distributed fsck within the specified time period. If not, the repository is dropped from the location tracking log. Of course it can always update that later if it's really still around.

Distributed fsck is not the default because those extra location log updates increase the size of the git-annex branch. I did one thing to keep the size increase small: An identical line is logged for each key, including the timestamp, so git's delta compression will work as well as is possible. But, there's still commit and tree update overhead. Probably doesn't make sense to run distributed fscks too often for that and other reasons. If the git-annex branch does get too large, there's always git annex forget ...

(Update: This was later rethought and works much more efficiently now..)

Turns out that git has a feature I didn't know about; it will expand wildcards and other stuff in filenames passed to many git commands. This is on top of the shell's expansion. That led to some broken behavior by git annex add 'foo.*', and it could lead to other probably unwanted behavior, like git annex drop 'foo[barred]' dropping a file named food in addition to foo[barred].

For now, I've disabled this git feature throughout git-annex. If you relied on it for something, let me know, I might think about adding it back in specific places where it makes sense.
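The food example is just glob semantics: in a glob pattern, [barred] is a character class matching one character from the set {b, a, r, e, d}. Python's fnmatch implements the same glob rules, so it makes a quick sanity check of why the pathspec expansion was surprising:

```python
from fnmatch import fnmatch

# 'foo[barred]' means: "foo" followed by ONE of b, a, r, e, d.
# So it matches "food" ...
print(fnmatch("food", "foo[barred]"))          # True
# ... but does NOT match a file literally named "foo[barred]".
print(fnmatch("foo[barred]", "foo[barred]"))   # False
```

Which is exactly how a command naming one file can end up operating on a completely different one.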
Improved git annex importfeed to check the itemid of the feed and avoid re-downloading a file with the same itemid. Before, it would add duplicate files if a feed kept the itemid the same, but changed the url. This was easier than expected because annex.genmetadata already caused the itemid to be stored in the git-annex metadata. I just had to make it check the itemid metadata, and set itemid even when annex.genmetadata isn't set.

Also got 4 other bug reports fixed, even though I feel I'm taking it easy today. It's good to be relaxed again!

While I plowed through a lot of backlog the past several days, I still have some 120 messages piled deep. That work did result in a number of improvements, culminating in a rather rushed release of version 5.20150327 today, to fix a regression affecting git annex sync when using the standalone linux tarballs. Unfortunately, I then had to update those tarballs a second time after the release as the first fix was incomplete.

And, I'm feeling super stressed out. At this point, I think I should step away until the end of the month. Unfortunately, this will mean more backlog later. Including lots of noise and hand-holding that I just don't seem to have time for if I want to continue making forward progress. Maybe I'll think of a way to deal with it while I'm away. Currently, all I have is that I may have to start ignoring irc and the forum, and de-prioritizing bug reports that don't have either a working reproduction recipe or multiple independent confirmations that it's a real bug.

While traveling for several days, I filled dead time with a rather massive reorganization of the git-annex man page, and I finished that up this morning. That man page had gotten rather massive, at around 3 thousand lines. I split out 87 man pages, one for each git-annex command. Many of these were expanded with additional details, and have become a lot better thanks to the added focus and space.
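The importfeed change described above boils down to keying downloads on a feed item's itemid rather than its url. A minimal sketch of that dedup logic (hypothetical structure, not git-annex's code; the url fallback is an assumption for feeds without itemids):

```python
def items_to_download(feed_items, known_itemids):
    """Skip any feed item whose itemid has been seen before, even if
    its url changed; fall back to the url as the identity when a feed
    does not provide itemids. Mutates known_itemids in place."""
    todo = []
    for item in feed_items:
        ident = item.get("itemid") or item["url"]
        if ident in known_itemids:
            continue
        known_itemids.add(ident)
        todo.append(item)
    return todo
```

The point of storing the itemid in metadata is that the "seen" set survives across runs, so a feed that shuffles its urls doesn't cause duplicate downloads.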
See, for example, git-annex-find, or any of the links on the new git-annex man page. (Which is still over 1 thousand lines long..)

Also, git annex help <command> can be used to pull up a command's man page now!

I'm taking the rest of the day off to R&R from the big trip north, and expect to get back into the backlog of 143 messages starting tomorrow.

Caught up with most of the recent backlog today. Was not very bad. Fixed remotedaemon to support gcrypt remotes, which was never quite working before. Seem to be on track to making a release tomorrow with a whole month's changes.

After an intense week away, I didn't mean to work on git-annex today, but I got sucked back in..

Worked on some plumbing commands for mass repository creation. Made fromkey be able to read a stream of files to create from stdin. Added a new registerurl plumbing command, that reads a stream of keys and urls from stdin.

Did a deep dive into ipfs last night. It has great promise. As a first step toward using it with git-annex, I built an experimental ipfs special remote. It has some nice abilities; any ipfs address can be downloaded to a file in the repository:

    git annex addurl ipfs:QmYgXEfjsLbPvVKrrD4Hf6QvXYRPRjH5XFGajDqtxBnD4W --file somefile

And, any file in the git-annex repository can be published to the world via ipfs, by simply using git annex copy --to ipfs. The ipfs address for the file is then visible in git annex whereis. Had to extend the external special remote protocol slightly for that, so that ipfs addresses can be recorded as uris in git-annex, and will show up in git annex whereis.

Fixed a mojibake bug that affected metadata values that included both whitespace and unicode characters. This was very fiddly to get right.

Finished up Monday's work to support submodules, getting them working on filesystems that don't support symlinks.

This month is going to be a bit more random than usual where git-annex development is concerned.
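Plumbing like registerurl reads key/url pairs on stdin for mass registration. A sketch of that style of streaming reader (the whitespace-separated "key url" line format shown here is a plausible assumption for illustration, not the documented protocol):

```python
def parse_registrations(lines):
    """Yield (key, url) pairs from an input stream, one pair per
    line, whitespace-separated, skipping blank lines. The url may
    itself contain spaces-free text only in this simplified sketch."""
    for line in lines:
        line = line.strip()
        if not line:
            continue
        key, url = line.split(None, 1)
        yield key, url
```

Driving registration from a stream like this means a single process can register millions of urls without per-invocation overhead, which is the whole point of stdin-driven plumbing.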
- On Saturday, the Seven Day Roguelike competition begins, and I will be spending a week building a game in haskell, to the exclusion of almost all other work.
- On March 18th, I'll be at the Boston Haskell User's group. (Attending, not presenting.)
- March 19-20, I'll be at Dartmouth visiting with the DataLad developers and learning more about what it needs from git-annex.
- March 21-22, I'll be at the FSF's LibrePlanet conference at MIT.

Got started on the randomness today with this design proposal for using git-annex to back up the entire Internet Archive. This is something the Archive Team is considering taking on, and I had several hours driving and hiking to think about it and came up with a workable design. (Assuming a large enough crowd of volunteers.) Don't know if it will happen, but it was a useful thought problem to see how git-annex works, and doesn't work, in this unusual use case.

One interesting thing to come out of that is that git-annex fsck does not currently make any record of successful fscks. In a very large distributed system, it can be useful to have successful fscks of an object's content recorded, by updating the timestamp in the location log to say "this repository still had the content at this time".

I had thought that git-annex and git submodules couldn't mix. However, looking at it again, it turned out to be possible to use git-annex quite sanely in a submodule, with just a little tweaking of how git normally configures the repository. Details of this still experimental feature are in submodules. There is still some work to be done to make git-annex work with submodules in repositories on filesystems that don't support symlinks.

I'm snowed in, but keeping busy.. Developed a complete workaround for the sqlite SELECT ErrorBusy bug. So after a week, I finally have sqlite working robustly. And, I merged in the branch that uses sqlite for incremental fsck.
Benchmarking an incremental fsck --fast run, checking 40 thousand files, it used to take 4m30s using sticky bits, and using sqlite slowed it down by 10s. So one added second per 4 thousand or so files. I think that's ok. Incremental fsck is intended to be used in big repos, which are probably not checked in --fast mode, so the checksumming of files will by far swamp that overhead.

Also got sqlite and persistent installed on all the autobuilders. This was easier than expected, because persistent bundles its own copy of sqlite.

That would have been a good stopping place for the day's work.. But then I got to spend 5 more hours getting the EvilSplicer to support Persistent. Urgh.

Now I can look forward to using sqlite for something more interesting than incremental fsck, like metadata caching for views, or the direct mode mappings. But, given all the trouble I had with sqlite, I'm going to put that off for a little while, to make sure that I've really gotten sqlite to work robustly. Today's release doesn't have the database branch merged of course, but it still has a significant number of changes.

Developed a test case for the sqlite problem, that reliably reproduces it, and sent it to the sqlite mailing list. It seems that under heavy write load, when a new connection is made to the database, SELECT can fail for a little while. Once one SELECT succeeds, that database connection becomes solid, and won't fail any more (apparently). This makes me think there might be some connection initialization steps that don't end up finishing before the SELECT goes through in this situation. I should be able to work around this problem by probing new connections for stability, and probably will have to, since it'll be years before any bug-fixed sqlite is available everywhere.

I also noticed that current git-annex incremental parallel fsck doesn't really parallelize well; eg the processes do duplicate work. So, the database branch is not really a regression in this area.
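The connection-probing workaround mentioned above might be sketched like this (a hypothetical Python illustration, not git-annex's actual haskell code; `probe_connection` is an invented name):

```python
import sqlite3
import time

def probe_connection(path, attempts=10, delay=0.01):
    """Open a connection and retry a trivial SELECT until it succeeds,
    so the connection has proven itself stable before real use."""
    conn = sqlite3.connect(path, timeout=0)
    for i in range(attempts):
        try:
            conn.execute("SELECT 1").fetchone()
            return conn  # connection is now considered stable
        except sqlite3.OperationalError:
            time.sleep(delay * (i + 1))  # back off and retry
    conn.close()
    raise RuntimeError("connection never became stable")
```

The point of the probe is that, per the behavior described in the entry, once one SELECT succeeds on a connection, later SELECTs on that same connection do not spuriously fail.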
Breaking news: gitlab.com repositories now support git-annex! A very nice surprise! More git hosters should do this..

Back to sqlite concurrency. I thought I had it dealt with, but more testing today has turned up a lot more problems with sqlite and concurrent writers (and readers). First, I noticed that a process can be happily writing changes to the database, but if a second process starts reading from the database, this will make the writer start failing with BUSY, and keep failing until the second process goes idle. It turns out the solution to this is to use WAL mode, which prevents readers from blocking writers. After several hours (persistent doesn't make it easy to enable WAL mode), it seemed pretty robust with concurrent fsck. But then I saw SELECT fail with BUSY. I don't understand why a reader would fail in WAL mode; that's counter to the documentation. My best guess is that this happens when a checkpoint is being made. This seems to be a real bug in sqlite. It may only affect the older versions bundled with persistent.

Worked today on making incremental fsck's use of sqlite be safe with multiple concurrent fsck processes. The first problem was that having fsck --incremental running and starting a new fsck --incremental caused it to crash. And with good reason, since starting a new incremental fsck deletes the old database; the old process was left writing to a database that had been deleted and recreated out from underneath it. Fixed with some locking.

The next problem is harder. Sqlite doesn't support multiple concurrent writers at all. One of them will fail to write. It's not even possible to have two processes building up separate transactions at the same time. Before using sqlite, incremental fsck could work perfectly well with multiple fsck processes running concurrently. I'd like to keep that working. My partial solution, so far, is to make git-annex buffer writes, and every so often send them all to sqlite at once, in a transaction.
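In Python terms, that buffered-write approach might look something like this (an illustrative sketch, not git-annex's code; the class and table names are invented, and the WAL pragma reflects the WAL-mode fix described above):

```python
import sqlite3

class BufferedWriter:
    """Queue writes in memory; flush them all in one transaction."""
    def __init__(self, conn):
        self.conn = conn
        # WAL mode keeps readers from blocking the writer
        # (a no-op for in-memory databases).
        conn.execute("PRAGMA journal_mode=WAL")
        conn.execute("CREATE TABLE IF NOT EXISTS fscked (key TEXT PRIMARY KEY)")
        self.queue = []

    def record(self, key):
        self.queue.append(key)  # buffered; no database write yet

    def flush(self):
        try:
            with self.conn:  # one transaction for the whole batch
                self.conn.executemany(
                    "INSERT OR IGNORE INTO fscked VALUES (?)",
                    [(k,) for k in self.queue])
            self.queue.clear()
        except sqlite3.OperationalError:
            pass  # another writer holds the lock; keep queue, retry later
```

Because each process holds the write lock only briefly during a flush, and retries a failed flush later, multiple processes can share one database without any of them permanently losing writes.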
So most of the time, nothing is writing to the database. (And if it gets unlucky and a write fails due to a collision with another writer, it can just wait and retry the write later.) This lets multiple processes write to the database successfully. But, for the purposes of concurrent, incremental fsck, it's not ideal. Each process doesn't immediately learn of files that another process has checked. So they'll tend to do redundant work. Only way I can see to improve this is to use some other mechanism for short-term IPC between the fsck processes.

Also, I made git annex fsck --from remote --incremental use a different database per remote. This is a real improvement over the sticky bits; multiple incremental fscks can be in progress at once, checking different remotes.

Yesterday I did a little more investigation of key/value stores. I'd love a pure haskell key/value store that didn't buffer everything in memory, and that allowed concurrent readers, and was ACID, and production quality. But so far, I have not found anything that meets all those criteria. It seems that sqlite is the best choice for now.

Started working on the database branch today. The plan is to use sqlite for incremental fsck first, and if that works well, do the rest of what's planned in caching database. At least for now, I'm going to use a dedicated database file for each different thing. (This may not be as space-efficient due to lacking normalization, but it keeps things simple.) So, .git/annex/fsck.db will be used by incremental fsck, and it has a super simple Persistent database schema:

    Fscked
      key SKey
      UniqueKey key

It was pretty easy to implement this and make incremental fsck use it. The hard part is making it both fast and robust. At first, I was doing everything inside a single runSqlite action. Including creating the table. But, it turns out that runs as a single transaction, and if it was interrupted, this left the database in a state where it exists, but has no tables.
Hard to recover from. So, I separated out creating the database, made that be done in a separate transaction and fully atomically. Now fsck --incremental could be ctrl-c'd and resumed with fsck --more, but it would lose the transaction and so not remember anything had been checked. To fix that, I tried making a separate transaction per file fscked. That worked, and it resumes nicely where it left off, but all those transactions made it much slower. To fix the speed, I made it commit just one transaction per minute. This seems like an ok balance. Having fsck re-do one minute's work when restarting an interrupted incremental fsck is perfectly reasonable, and now the speed, using the sqlite database, is nearly as fast as the old sticky bit hack was. (Specifically, 6m7s old vs 6m27s new, fscking 37000 files from cold cache in --fast mode.)

There is still a problem with multiple concurrent fsck --more failing. Probably a concurrent writer problem? And, some porting will be required to get sqlite and persistent working on Windows and Android. So the branch isn't ready to merge yet, but it seems promising. In retrospect, while incremental fsck has the simplest database schema, it might be one of the harder things listed in caching database, just because it involves so many writes to the database. The other use cases are more read heavy.

Spent a couple hours to make the ssh-options git config setting be used in more places. Now it's used everywhere that git-annex supports ssh caching, including the git pull and git push done by sync and by the assistant. Also the remotedaemon and the gcrypt, rsync, and ddar special remotes.

Many more little improvements made yesterday and part of today. While it's only been a week since the last release, it feels almost time to make another one, after so many recent bug fixes and small improvements.

I've updated the roadmap. I have been operating without a roadmap for half a year, and it would be nice to have some plans.
Keeping up with bug reports and requests as they come in is a fine mode of work, but it can feel a little aimless. It's good to have a planned out course, or at least some longer term goals. After the next release, I've penciled in the second half of this month to work on the caching database.

Plowing through the backlog today, and fixing quite a few bugs! Got the backlog down to 87 messages from ~140. And some of the things I got to were old and/or hard. About a third of the day was spent revisiting git-annex branch shows commit with looong commitlog. I still don't understand how that behavior can happen, but I have a donated repository where it did happen. Made several changes to try to make the problem less likely to occur, and not as annoying when it does occur, and maybe get me more info if it does happen to someone again.

Made a release yesterday, and caught up on most recent messages earlier this week. Backlog stands at 128 messages.

Had to deal with an ugly problem with /usr/bin/glacier today. Seems that there are multiple programs all using that name, some of them shipping in some linux distributions, and the one from boto fails to fail when passed parameters it doesn't understand. Yugh! I had to make git-annex probe to make sure the right glacier program is installed. I'm planning to deprecate the glacier special remote at some point. Instead, I'd like to make the S3 special remote support the S3-glacier lifecycle, so objects can be uploaded to S3, set to transition to glacier, and then if necessary pulled back from glacier to S3. That should be much simpler and less prone to break. But not yet; haskell-aws needs glacier support added. Or I could use the new amazonka library, but I'd rather stick with haskell-aws.

Some other minor improvements today included adding git annex groupwanted, which makes for easier examples than using vicfg, and making git annex import support options like --include and --exclude.
Also, I moved many file matching options to only be accepted by the commands that actually use them. Of the remaining common options, most of them make sense for every command to accept (eg, --force and --debug). It would make sense to move --backend, --notify-start/finish, and perhaps --user-agent. Eventually.

Today I put together a lot of things I've been thinking about:

- There's some evidence that git-annex needs tuning to handle some unusual repositories. In particular very big repositories might benefit from different object hashing.
- It's really hard to handle upgrades that change the fundamentals of how git-annex repositories work. Such an upgrade would need every git-annex user to upgrade their repository, and would be very painful. It's hard to imagine a change that is worth that amount of pain.
- There are other changes some would like to see (like lower-case object hash directory names) that are certainly not enough to warrant a flag day repo format upgrade.
- It would be nice to let people who want to have some flexibility to play around with changes, in their own repos, as long as they don't a) make git-annex a lot more complicated, or b) negatively impact others. (Without having to fork git-annex.)

This is discussed in more depth in new repo versions. The solution, which I've built today, is support for tuning settings, when a new repository is first created. The resulting repository will be different in some significant way from a default git-annex repository, but git-annex will support it just fine. The main limitations are:

- You can't change the tuning of an existing repository (unless a tool gets written to transition it).
- You absolutely don't want to merge repo B, which has been tuned in nonstandard ways, into repo A which has not. Or A into B. (Unless you like watching slow motion car crashes.)

I built all the infrastructure for this today.
Basically, the git-annex branch gets a record of all tunings that have been applied, and they're automatically propagated to new clones of a repository. And I implemented the first tunable setting:

    git -c annex.tune.objecthashlower=true annex init

This is definitely an experimental feature for now. git-annex merge and similar commands will detect attempts to merge between incompatibly tuned repositories, and error out. But, there are a lot of ways to shoot yourself in the foot if you use this feature:

- Nothing stops git merge from merging two incompatible repositories.
- Nothing stops any version of git-annex older than today's from merging either.

Now that the groundwork is laid, I can pretty easily, and inexpensively, add more tunable settings. The next two I plan to add are already documented, annex.tune.objecthashdirectories and annex.tune.branchhashdirectories. Most new tunables should take about 4 lines of code to add to git-annex.

Today I got the pre-commit-annex hook working on Windows. It turns out that msysgit runs hook scripts even when they're not executable, and it parses the #! line itself. Now git-annex does too, on Windows.

Also, added a new chapter to the walkthrough, using special remotes. They clearly needed to be mentioned, especially to show the workflow of running initremote in one repository, then syncing another repository and running enableremote to enable the same special remote there.

Then more fun Windows porting! Turns out git-annex on Windows didn't handle files > 2 gb correctly; the way it was getting file size uses a too small data type on Windows. Luckily git-annex itself treats all file sizes as unbounded Integers, so I was easily able to swap in a getFileSize that returns correct values for large files.

While I haven't blogged since the 13th and have not been too active until today, there are still a number of little improvements that have been done here and there.
Including a fix for an interesting bug where the assistant would tell the remotedaemon that the network connection has been lost, twice in a row, and this would make the remotedaemon fail to reconnect to the remote when the network came up. I'm not sure what situation triggers this bug (Maybe machines with 2 interfaces? Or maybe a double disconnection event for 1 interface?), but I was able to reproduce it by sending messages to the remotedaemon, and so fixed it. Backlog is down to 118 messages.

Got a release out today. I'm feeling a little under the weather, so wanted something easy to do in the rest of the day that would be nice and constructive. Ended up going over the todo list. Old todos come in three groups; hard problems, already solved, and easy changes that never got done. I left the first group alone, closed many todos in the second group, and implemented a few easy changes. Including git annex sync -m and adding some more info to git annex info remote.

Worked more on the relativepaths branch last night, and I am actually fairly happy with it now, and plan to merge it after I've run it for a bit longer myself.

It seems that I did manage to get a git-annex executable that is built PIE so it will work on Android 5.0. But all the C programs like busybox included in the Android app also have to be built that way. Arranging for everything to get built twice and with the right options took up most of today.

git-annex internally uses all absolute paths all the time. For a couple of reasons, I'd like it to use relative paths. The best reason is, it would let a repository be moved while git-annex was running, without breaking. A lesser reason is that Windows has some crazy small limit on the length of a path (260 bytes?!), and using relative paths would avoid hitting it so often. I tried to do this today, in a relativepaths branch. I eventually got the test suite to pass, but I am very unsure about this change.
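The core of the relative-path idea can be illustrated with a small Python sketch (the paths here are invented examples): a path stored relative to the repository root survives the whole repository being moved, and is shorter to boot.

```python
import posixpath

repo = "/home/joey/annex"
f = "/home/joey/annex/photos/2015/cat.jpg"

# Store the path relative to the repository root...
rel = posixpath.relpath(f, repo)

# ...so it stays valid if the whole repository is moved:
moved = posixpath.join("/mnt/usb/annex", rel)
```

The shorter relative form is also what helps under Windows' small path-length limit.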
A lot of random assumptions broke, and the test suite won't catch them all. In a few places, git-annex commands do change the current directory, and that will break with relative paths. A frustrating day.

I've finally been clued into why git-annex isn't working on Android 5, and it seems fixing it is as easy as pie.. That is, passing -pie -fPIE to the linker. I've added a 5.0 build to the Android autobuilder. It is currently untested, so I hope to get feedback from someone with an Android 5 device; a test build is now available.

I've been working through the backlog of messages today, and gotten down from 170 to 128. Mostly answered a lot of interesting questions, such as "Where to start reading the source code?" Also did some work to make git-annex check git versions at runtime more often, instead of assuming the git version it was built against. It turns out this could be done pretty inexpensively in 2 of 4 cases, and one of the 2 fixed was the git check-attr behavior change, which could lead to git-annex add hanging if used with an old version of git.

Took a holiday week off from git-annex development, and started a new side project building shell-monad, which might eventually be used in some parts of git-annex that generate shell scripts. Message backlog is 165 and I have not dove back into it, but I have started spinning back up the development engines in preparation for new year takeoff.

Yesterday, added some minor new features -- git annex sync now supports git remote groups, and I added a new plumbing command setpresentkey for those times when you really need to mess with git-annex's internal bookkeeping. Also cleaned up a lot of build warning messages on OSX and Windows.

Today, first some improvements to make addurl more robust. Then the rest of the day was spent on Windows. Fixed (again) the Windows port's problem with rsync hating DOS style filenames. Got the rsync special remote fully working on Windows for the first time.
Best of all, got the Windows autobuilder to run the test suite successfully, and fixed a couple test suite failures on Windows.

Spent a couple days adding a bittorrent special remote to git-annex. This is better than the demo external torrent remote I made on Friday: It's built into git-annex; it supports magnet links; it even parses aria2c's output so the webapp can display progress bars. Besides needing aria2 to download torrents, it also currently depends on the btshowmetainfo command from the original bittorrent client (or bittornado). I looked into using instead, but that package is out of date and doesn't currently build. I've got a patch fixing that, but am waiting to hear back from the library's author. There is a bit of a behavior change here; while before git annex addurl of a torrent file would add the torrent file itself to the repository, it now will download and add the contents of the torrent. I think/hope this behavior change is ok..

Some more work on the interface that lets remotes claim urls for git annex addurl. Added support for remotes suggesting a filename to use when adding an url. Also, added support for urls that result in multiple files when downloaded. The obvious use case for that is an url to a torrent that contains multiple files. Then, got git annex importfeed to also check if a remote claims an url.

Finally, I put together a quick demo external remote using this new interface. git-annex-remote-torrent adds support for torrent files to git-annex, using aria2c to download them. It supports multi-file torrents, but not magnet links. (I'll probably rewrite this more robustly and efficiently in haskell sometime soon.) Here's a demo:

    # git annex initremote torrent type=external encryption=none externaltype=torrent
    initremote torrent ok
    (Recording state in git...)
    # ls
    # git annex addurl --fast
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
    100   198  100   198    0     0  3946k      0 --:--:-- --:--:-- --:--:-- 3946k
    addurl _home_joey_my.torrent/bar (using torrent) ok
    addurl _home_joey_my.torrent/baz (using torrent) ok
    addurl _home_joey_my.torrent/foo (using torrent) ok
    (Recording state in git...)
    # ls _home_joey_my.torrent/
    bar@  baz@  foo@
    # git annex get _home_joey_my.torrent/baz
    get _home_joey_my.torrent/baz (from torrent...)
      % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                     Dload  Upload   Total   Spent    Left  Speed
      0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
    100   198  100   198    0     0  3580k      0 --:--:-- --:--:-- --:--:-- 3580k
    12/11 18:14:56 [NOTICE] IPv4 DHT: listening on UDP port 6946
    12/11 18:14:56 [NOTICE] IPv4 BitTorrent: listening on TCP port 6961
    12/11 18:14:56 [NOTICE] IPv6 BitTorrent: listening on TCP port 6961
    12/11 18:14:56 [NOTICE] Seeding is over.
    12/11 18:14:57 [NOTICE] Download complete: /home/joey/tmp/tmp.Le89hJSXyh/tor
    12/11 18:14:57 [NOTICE] Your share ratio was 0.0, uploaded/downloaded=0B/0B
    Download Results:
    gid   |stat|avg speed  |path/URI
    ======+====+===========+=======================================================
    71f6b6|OK  |       0B/s|/home/joey/tmp/tmp.Le89hJSXyh/tor/baz
    Status Legend:
    (OK):download completed.
    ok
    (Recording state in git...)
    # git annex find
    _home_joey_my.torrent/baz
    # git annex whereis _home_joey_my.torrent/baz
    whereis _home_joey_my.torrent/baz (2 copies)
      1878241d-ee49-446d-8cce-041c46442d94 -- [torrent]
      52412020-2bb3-4aa4-ae16-0da22ba48875 -- joey@darkstar:~/tmp/repo [here]
    torrent: ok

Worked on extensible addurl today. When git annex addurl is run, remotes will be asked if they claim the url, and whichever remote does will be used to download it, and location tracking will indicate that remote contains the object. This is a massive 1000 line patch touching 30 files, including follow-on changes in rmurl and whereis and even rekey.
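At the protocol level, url claiming boils down to a request/response exchange with the external remote. Here is a hypothetical Python sketch of how a remote's dispatch loop might answer it; the CLAIMURL, CLAIMURL-SUCCESS, CLAIMURL-FAILURE, and UNSUPPORTED-REQUEST messages are from the external special remote protocol, while the torrent/magnet claiming logic is invented for illustration:

```python
def handle(line):
    """Answer one request line from git-annex (simplified sketch:
    only the url-claiming request is handled here)."""
    if line.startswith("CLAIMURL "):
        url = line.split(" ", 1)[1]
        # This hypothetical remote claims torrent files and magnet links.
        if url.endswith(".torrent") or url.startswith("magnet:"):
            return "CLAIMURL-SUCCESS"
        return "CLAIMURL-FAILURE"
    return "UNSUPPORTED-REQUEST"
```

A real remote would read these lines from stdin and write replies to stdout, alongside the other protocol messages (INITREMOTE, TRANSFER, etc).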
It should now be possible to build an external special remote that handles *.torrent and magnet: urls and passes them off to a bittorrent client for download, for example. Another use for this would be to make an external special remote that uses youtube-dl or some program other than quvi for downloading web videos. The builtin quvi support could probably be moved out of the web special remote, to a separate remote. I haven't tried to do that yet.

Today's release has a month's accumulated changes, including several nice new features: git annex undo, git annex proxy, git annex diffdriver, and I was able to land the s3-aws branch in this release too, so lots of improvements to the S3 support. Spent several hours getting the autobuilders updated, with the haskell aws library installed. Android and armel builds are still out of date. Also fixed two Windows bugs related to the location of the bundled ssh program.

Back from the holiday, catching up on traffic. Backlog stands at 113 messages. Here's a nice tip that Giovanni added: publishing your files to the public (using a public S3 bucket)

Just before going on break, I added a new feature that I didn't mention here. git annex diffdriver integrates git-annex with git's external diff driver support. So if you have a smart diff program that can diff, say, genome sequences, or cat videos, or something in some useful way, it can be hooked up to git diff and will be able to see the content of annexed files.

Also, I spent a couple hours today updating the license file included in the standalone git-annex builds to include the licenses of all the haskell libraries git-annex depends on. Which I had for some reason not thought to include before, despite them getting built into the git-annex binary.
It's simple enough that I added undo as an action in the file manager integration. And yes, you can undo an undo.

Ever since the direct mode guard was added a year ago, direct mode has been a lot safer to use, but very limited in the git commands that could be run in a direct mode repository. The worst limitation was that there was no way to git revert unwanted changes. But also, there was no way to check out different branches, or run commands like git mv. Today I made git annex proxy, which allows doing all of those things, and more. documentation here

It's so flexible that I'm not sure where the boundaries lie yet, but it seems it will work for any git command that updates both the work tree and the index. Some git commands only update one or the other and not both and won't work with the proxy. As an advanced user tool, I think this is a great solution. I still want to make a simpler undo command that can nicely integrate into file managers. The implementation of git annex proxy is quite simple, because it reuses all the complicated work tree update code that was already written for git annex merge.

And here's the lede I buried: I've gotten two years of funding to work on git-annex part-time! Details in my personal blog.

The OSX autobuilder has been updated to OSX 10.10 Yosemite. The resulting build might work on 10.9 Mavericks too, and I'd appreciate help testing that.

Went ahead and fixed the partial commit problem by making the pre-commit hook detect and block problematic partial commits.

S3 multipart is finally completely working. I still don't understand the memory issue that stumped me yesterday, but rewrote the code to use a simpler approach, which avoids the problem. Various other issues, and testing it with large files, took all day.
This is now merged into the s3-aws branch, so when that branch lands, S3 support will massively improve, from the current situation of using a buggy library that buffers uploaded files in memory, and cannot support very large file uploads at all, to being able to support hopefully files of arbitrary hugeness (at least up to a few terabytes). BTW, thanks to Aristid Breitkreuz and Junji Hashimoto for working on the multipart support in the aws library.

More work on S3 multipart uploads, since the aws library got fixed today to return the ETAGs for the parts. I got multipart uploads fully working, including progress display. The code takes care to stream each part in from the file and out the socket, so I'd hoped it would have good memory behavior. However, for reasons I have not tracked down, something in the aws library is causing each part to be buffered in memory. This is a problem, since I want to use 1 gb as the default part size.

Some progress on the S3 upload not using multipart bug. The aws library now includes the multipart API. However, when I dug into it, it looks like the API needs some changes to get the ETAG of each uploaded part. Once that's fixed, git-annex should be able to support S3 multipart uploads, although I think that git-annex's own chunking is better in most situations -- it supports resuming uploads and downloads better. The main use case for S3 multipart seems to be using git-annex to publish large files. Also, managed to get the backlog down from 100 to just 65 messages, including catching up on quite old parts of backlog.

New AWS region in Germany announced today. git-annex doesn't support it yet, unless you're using the s3-aws branch. I cleaned up that branch, got it building again, and re-tested it with testremote, and then fixed a problem the test suite found that was caused by some changes in the haskell aws library. Unfortunately, s3-aws is not ready to be merged because of some cabal dependency problems involving dbus and random.
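The part-streaming idea behind multipart uploads can be sketched in a few lines of Python (illustrative only; the real code is haskell, and `parts` is an invented name): the file is read lazily in fixed-size pieces, so only one part needs to be in memory at a time.

```python
import io

def parts(fileobj, part_size):
    """Yield successive fixed-size parts of a file; only the current
    part is held in memory, never the whole file."""
    while True:
        chunk = fileobj.read(part_size)
        if not chunk:
            return
        yield chunk

# Demo with an in-memory "file" standing in for a large upload:
sizes = [len(p) for p in parts(io.BytesIO(b"x" * 2500), 1000)]
```

If each yielded part were buffered by the upload library instead of streamed, memory use would grow to the part size, which is exactly the problem described above with a 1 gb default part size.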
I did go ahead and update Debian's haskell-aws package to cherry-pick from a newer version the change needed for Internet Archive support, which allows building the s3-aws branch on Debian. Getting closer..

Today, I've expanded git annex info to also be able to be used on annexed files and on remotes. Looking at the info for an individual remote is quite useful, especially for answering questions like: Does the remote have embedded creds? Are they encrypted? Does it use chunking? Is that old style chunking?

    remote: rsync.net
      description: rsync.net demo remote
      uuid: 15b42f18-ebf2-11e1-bea1-f71f1515f9f1
      cost: 250.0
      type: rsync
      url: xxx@usw-s002.rsync.net:foo
      encryption: encrypted (to gpg keys: 7321FC22AC211D23 C910D9222512E3C7)
      chunking: 1 MB chunks

    remote: ia3
      description: test [ia3]
      uuid: 12817311-a189-4de3-b806-5f339d304230
      cost: 200.0
      type: S3
      creds: embedded in git repository (not encrypted)
      bucket: joeyh-test-17oct-3
      internet archive item:
      encryption: not encrypted
      chunking: none

Should be quite useful info for debugging too..

Yesterday, I fixed a bug that prevented retrieving files from Glacier.

3 days spent redoing the Android autobuilder! The new version of yesod-routes generates TH splices that break the EvilSplicer. So after updating everything to new versions for the Nth time, I instead went back to older versions. The autobuilder now uses Debian jessie, instead of wheezy. And all haskell packages are pinned to use the same version as in jessie, rather than the newest versions. Since jessie is quite near to being frozen, this should make the autobuilder much less prone to getting broken by new versions of haskell packages that need patches for Android.

I happened to stumble over while doing that. This supports setting and unsetting environment variables on Windows, which I had not known a way to do from Haskell. Cleaned up several ugly corners of the Windows port using it.
git commit $some_unlocked_file seems like a reasonably common thing for someone to do, so it's surprising to find that it's a little bit broken, leaving the file staged in the index after (correctly) committing the annexed symlink. This is caused by a bug in git, and/or by git-annex abusing the git post-commit hook to do something it shouldn't do, although it's not unique in using the post-commit hook this way. I'm talking this over with Junio, and the fix will depend on the result of that conversation. It might involve git-annex detecting this case and canceling the commit, asking the user to git annex add the file first. Or it might involve a new git hook, although I have not had good luck getting hooks added to git before.

Meanwhile, today I did some other bug fixing. Fixed the Internet Archive support for embedcreds=yes. Made git annex map work for remote repos in a directory with an implicit ".git" prefix. And fixed a strange problem where the repository repair code caused a git gc to run and then tripped over its pid file. I seem to have enough fixes to make another release pretty soon. Especially since the current release of git-annex doesn't build with yesod 1.4. Backlog: 94 messages

Made two releases of git-annex, yesterday and today, which turned out to contain only Debian changes. So no need for other users to upgrade. This included fixing building on mips and arm architectures. The mips build was running out of memory, and I was able to work around that. Then the arm builds broke today, because of a recent change to the version of llvm that has completely trashed ghc. Luckily, I was able to work around that too. Hopefully that will get last week's security fix into Debian testing, and otherwise have git-annex in Debian in good shape for the upcoming freeze.

Working through the forum posts and bugs. Backlog is down to 95.

Discovered the first known security hole in git-annex!
Turns out that S3 and Glacier remotes that were configured with embedcreds=yes and encryption=pubkey or encryption=hybrid didn't actually encrypt the AWS credentials that get embedded into the git repo. This doesn't affect any repos set up by the assistant. I've fixed the problem and am going to make a release soon. If your repo is affected, see insecure embedded creds for what to do about it.

Made a release yesterday, which was all bugfixes. Today, a few more bug fixes. Looked into making the webapp create non-bare repositories on removable drives, but before I got too far into the code, I noticed there's a big problem with that idea. The rest of the day was spent getting caught up on forum posts etc. I'm happy to read lots of good answers that have been posted while I've been away. Here's an excellent example: That led to rewriting the docs for building git-annex from source. New page: fromsource. Backlog is now down to 117.

Yesterday and today were the first good solid days working on git-annex in a while. There's a big backlog, currently of 133 messages, so I have been concentrating on bug reports first. Happily, not many new bugs have been reported lately, and I've made good progress on them, fixing 5 bugs today, including a file descriptor leak.

catching up

In this end of summer rush, I've been too busy to blog for the past 20 days, but not entirely too busy to work on git-annex. Two releases have been made in that time, and a fair amount of improvements worked on. Including a new feature: When a local git repository is cloned with git clone --shared, git-annex detects this and defaults to a special mode where file contents get hard linked into the clone. It also makes the cloned repository be untrusted, to avoid confusing numcopies counting with the hard links. This can be useful for temporary working repositories without the overhead of lots of copies of files.

looking back

I want to look back further, over the crowdfunded year of work covered by this devblog.
There were a lot of things I wanted to accomplish this past year, and I managed to get to most of them. As well as a few surprises. Windows support improved more than I guessed in my wildest dreams. git-annex went from working not too well on the command line to being pretty solid there, as well as having a working and almost polished webapp on Windows. There are still warts -- it's Windows after all! Android didn't get many improvements. Most of the time I had budgeted to Android porting ended up being used on Windows porting instead. I did, however, get the Android build environment cleaned up a lot from the initial hacked-together one, and generally kept it building and working on Android. The direct mode guard was not planned, but the need for it became clear, and it's dramatically reduced the amount of command-line foot-shooting that goes on in direct mode. Repository repair was planned, and I'm very proud of git-repair. Also pleased with the webapp's UI for scheduling repository consistency checks. There's always room for improvement in this kind of thing, but this brings a new capability to both git and git-annex. The external special remote interface came together beautifully. External special remotes are now just as well supported as built-in ones, except the webapp cannot be used to configure them. Using git-remote-gcrypt for fully encrypted git repositories, including support in the webapp for setting them up (and gpg keys if necessary), happened. Still needs testing/more use/improvements. Avoided doing much in the area of gpg key management, which is probably good to avoid when possible, but is probably needed to make this a really suitable option for end users. Telehash is still being built, and it's not clear if they've gotten it to work at all yet. The v2 telehash has recently been superseded by a new v3. So I am not pleased that I didn't get git-annex working with telehash, but it was outside my control.
This is a problem that needs to get solved outside git-annex first, either by telehash or something else. The plan is to keep an eye on everything in this space, including, for example, Maidsafe. In the meantime, the new notifychanges support in git-annex-shell makes XMPP/telehash/whatever unnecessary in a lot of configurations. git-annex's remotedaemon architecture supports that and is designed to support other notification methods later. And the webapp has a lot of improvements in the area of setting up ssh remotes, so fewer users will be stuck with XMPP. I didn't quite get to deltas, but the final month of work on chunking provides a lot of new features and hopefully a foundation that will get to deltas eventually. There is a new haskell library that's being developed with the goal of being used for git-annex deltas. I hadn't planned to make git-annex be able to upgrade itself, when installed from this website. But there was a need for that, and so it happened. Even got a gpg key trust path for the distribution of git-annex. Metadata driven views was an entirely unplanned feature. The current prototype is very exciting; it opens up entire new use cases. I had to hold myself back to not work on it too much, especially as it shaded into adding a caching database to git-annex. Had too much other stuff planned to do all I wanted. Clearly this is an area I want to spend more time on! Those are most of the big features and changes, but probably half of my work on git-annex this past year was in smaller things, and general maintenance. Lots of others have contributed, some with code (like the large effort to switch to bootstrap3), and others with documentation, bug reports, etc.
Perhaps it's best to turn to git diff --stat to sum up the activity and see just how much both the crowdfunding campaign and the previous kickstarter have pushed git-annex into high gear:

	campaign:    5410 files changed, 124159 insertions(+), 79395 deletions(-)
	kickstarter: 4411 files changed, 123262 insertions(+), 13935 deletions(-)
	year before: 1281 files changed, 7263 insertions(+), 55831 deletions(-)

What's next? The hope is, no more crowdfunded campaigns where I have to promise the moon anytime soon. Instead, the goal is to move to a more mature and sustainable funding model, and continue to grow the git-annex community, and the spaces where it's useful. The plan is to be on vacation and/or low activity this week before DebConf. However, today I got involved in fixing a bug that caused the assistant to keep files open after syncing with repositories on removable media. Part of that bug involved lock files not being opened close-on-exec, and while fixing that I noticed again that the locking code was scattered all around and rather repetitive. That led to a lot of refactoring, which is always fun when it involves scary locking code. Thank goodness for referential transparency. Now there's a Utility.LockFile that works on both POSIX and Windows. However, that module actually exports very different functions for the two. While it might be tempting to try to do a portability layer, the two locking models are really very different, and there are lots of gotchas such a portability layer would face. The only API that's completely the same between the two is dropLock. This refactoring process, and the cleaner, more expressive code it led to, helped me spot a couple of bugs involving locking. See e386e26ef207db742da6d406183ab851571047ff and 0a4d301051e4933661b7b0a0791afa95bfe9a1d3. Neither bug has ever seemed to cause a problem, but it's nice to be able to spot and fix such bugs before they do. Over the past couple days, got the arm autobuilder working again.
It had been down since June with several problems. cabal install tended to crash; apparently this has something to do with threading in user-mode qemu, because -j1 avoids that. And strange invalid character problems were fixed by downgrading file-embed. Also, with Yury's help I got the Windows autobuilder upgraded to the new Haskell Platform and working again. Today a last few finishing touches, including getting rid of the last dependency on the old haskell HTTP library, since http-conduit is being used now. Ready for the release! Working on getting caught up with the backlog; 73 messages remain. Several minor bugs were fixed today. All edge cases. The most edge case one of all, I could not fix: git-annex cannot add a file that has a newline in its filename, because git cat-file --batch's interface does not support such filenames. Added a page documenting how to verify the signatures of git-annex releases. Over the past couple days, all the autobuilders have been updated to new dependencies needed by the recent work. Except for Windows, which needs to be updated to the new Haskell Platform first, so hopefully soon. Turns out that upgrading unix-compat means that inode(like) numbers are available even on Windows, which will make git-annex more robust there. Win-win. Yesterday, finished converting S3 to use the aws library. Very happy with the result (no memory leaks! connection caching!), but s3-aws is not merged into master yet. Waiting on a new release of the aws library so as to not break Internet Archive S3 support. Today, spent a few hours adding more tests to testremote. The new tests take a remote, and construct a modified version that is intentionally unavailable. Then they make sure trying to use it fails in appropriate ways. This was a very good thing to test; two bugs were immediately found and fixed. And that wraps up several weeks of hacking on the core of git-annex's remotes support, which started with reworking chunking and kind of took on a life of its own.
I plan a release of this new stuff in a week. The next week will be spent catching up on 117 messages of backlog that accumulated while I was in deep coding mode. Finished up webdav, and after running testremote for a long time, I'm satisfied it's good. The newchunks branch has now been merged into master completely. Spent the rest of the day beginning to rework the S3 special remote to use the aws library. This was pretty fiddly; I want to keep all the configuration exactly the same, so had to do a lot of mapping from hS3 configuration to aws configuration. Also there is some hairy stuff involving escaping from the ResourceT monad with responses and http connection managers intact. Stopped once initremote worked. The rest should be pretty easy, although Internet Archive support is blocked by. This is in the s3-aws branch until it gets usable. Today was spent reworking so much of the webdav special remote that it was essentially rewritten from scratch. The main improvement is that it now keeps a http connection open and uses it to perform multiple actions. Before, one connection was made per action. This is even done for operations on chunks. So, now storing a chunked file in webdav makes only 2 http connections total. Before, it would take around 10 connections per chunk. So a big win for performance, although there is still room for improvement: It would be possible to reduce that down to just 1 connection, and indeed keep a persistent connection reused when acting on multiple files. Finished up by making uploading a large (non-chunked) file to webdav not buffer the whole file in memory. I still need to make downloading a file from webdav not buffer it, and test, and then I'll be done with webdav and can move on to making similar changes to S3. Converted the webdav special remote to the new API. All done with converting everything now! 
I also updated the new API to support doing things like reusing the same http connection when removing and checking the presence of chunks. I've been working on improving the haskell DAV library, in a number of ways that will let me improve the webdav special remote. Including making changes that will let me do connection caching, and improving its API to support streaming content without buffering a whole file in memory. Just finished converting both rsync and gcrypt to the new API, and testing them. Still need to fix 2 test suite failures for gcrypt. Otherwise, only WebDAV remains unconverted. Earlier today, I investigated switching from hS3 to the aws library. Learned its API, which seemed a lot easier to comprehend than the other two times I looked at it. Wrote some test programs, which are in the s3-aws branch. I was able to stream in large files to S3, without ever buffering them in memory (which hS3's API precludes). And for chunking, it can reuse an http connection. This seems very promising. (Also, it might eventually get Glacier support..) I have uploaded haskell-aws to Debian, and once it gets into testing and backports, I plan to switch git-annex over to it. Have started converting lots of special remotes to the new API. Today, S3 and hook got chunking support. I also converted several remotes to the new API without supporting chunking: bup, ddar, and glacier (which should support chunking, but there were complications). This removed 110 lines of code while adding features! And, I seem to be able to convert them faster than testremote can test them. Now that S3 supports chunks, they can be used to work around several problems with S3 remotes, including file size limits, and a memory leak in the underlying S3 library. The S3 conversion included caching of the S3 connection when storing/retrieving chunks. [Update: Actually, it turns out it didn't; the hS3 library doesn't support persistent connections. Another reason I need to switch to a better S3 library!]
But the API doesn't yet support caching when removing or checking if chunks are present. I should probably expand the API, but got into some type checker messes when using generic enough data types to support everything. Should probably switch to ResourceT. Also, I tried, but failed, to make testremote check that storing a key is done atomically. The best I could come up with was a test that stored a key and had another thread repeatedly check if the object was present on the remote, logging the results and timestamps. It then becomes a statistical problem -- somewhere toward the end of the log it's ok if the key has become present -- but too early might indicate that it wasn't stored atomically. Perhaps it's my poor knowledge of statistics, but I could not find a way to analyze the log that reliably detected non-atomic storage. If someone would like to try to work on this, see the atomic-store-test branch. Built git annex testremote today. That took a little bit longer than expected, because it actually found several fencepost bugs in the chunking code. It also found a bug in the sample external special remote script. I am very pleased with this command. Being able to run 640 tests against any remote, without any possibility of damaging data already stored in the remote, is awesome. Should have written it a looong time ago! It took 9 hours, but I finally got to make c0dc134cded6078bb2e5fa2d4420b9cc09a292f7, which both removes 35 lines of code, and adds chunking support to all external special remotes! The groundwork for that commit involved taking the type scheme I sketched out yesterday, completely failing to make it work with such high-ranked types, and falling back to a simpler set of types that both I and GHC seem better at getting our heads around. Then I also had more fun with types, when it turned out I needed to run encryption in the Annex monad. So I had to go convert several parts of the utility libraries to use MonadIO and exception lifting. Yurk.
The final and most fun stumbling block caused git-annex to crash when retrieving a file from an external special remote that had neither encryption nor chunking. Amusingly, it was because I had not put in an optimisation (namely, just renaming the file that was retrieved in this case, rather than unnecessarily reading it in and writing it back out). It's not often that a lack of an optimisation causes code to crash! So, fun day, great result, and it should now be very simple to convert the bup, ddar, gcrypt, glacier, rsync, S3, and WebDAV special remotes to the new system. Fingers crossed. But first, I will probably take half a day or so and write a git annex testremote that can be run in a repository and does live testing of a special remote, including uploading and downloading files. There are quite a lot of cases to test now, and it seems best to get that in place before I start changing a lot of remotes without a way to test everything. Today's work was sponsored by Daniel Callahan. Zap! ... My internet gateway was destroyed by lightning. Limping along regardless, and a replacement is ordered. Got resuming of uploads to chunked remotes working. Easy! Next I want to convert the external special remotes to have these nice new features. But there is a wrinkle: The new chunking interface works entirely on ByteStrings containing the content, but the external special remote interface passes content around in files. I could just make it write the ByteString to a temp file, and pass the temp file to the external special remote to store. But then, when chunking is not being used, it would pointlessly read a file's content, only to write it back out to a temp file. Similarly, when retrieving a key, the external special remote saves it to a file. But we want a ByteString. Except, when not doing chunking or encryption, letting the external special remote save the content directly to a file is optimal.
One approach would be to change the protocol for external special remotes, so that the content is sent over the protocol rather than in temp files. But I think this would not be ideal for some kinds of external special remotes, and it would probably be quite a lot slower and more complicated. Instead, I am playing around with some type class trickery:

	{-# LANGUAGE Rank2Types, TypeSynonymInstances, FlexibleInstances, MultiParamTypeClasses #-}

	type Storer p = Key -> p -> MeterUpdate -> IO Bool

	-- For Storers that want to be provided with a file to store.
	type FileStorer a = Storer (ContentPipe a FilePath)

	-- For Storers that want to be provided with a ByteString to store.
	type ByteStringStorer a = Storer (ContentPipe a L.ByteString)

	class ContentPipe src dest where
		contentPipe :: src -> (dest -> IO a) -> IO a

	instance ContentPipe L.ByteString L.ByteString where
		contentPipe b a = a b

	-- This feels a lot like I could perhaps use pipes or conduit...
	instance ContentPipe FilePath FilePath where
		contentPipe f a = a f

	instance ContentPipe L.ByteString FilePath where
		contentPipe b a = withTmpFile "tmpXXXXXX" $ \f h -> do
			L.hPut h b
			hClose h
			a f

	instance ContentPipe FilePath L.ByteString where
		contentPipe f a = a =<< L.readFile f

The external special remote would be a FileStorer, so when a non-chunked, non-encrypted file is provided, it just runs on the FilePath with no extra work. While when a ByteString is provided, it's written out to a temp file and the temp file provided. And many other special remotes are ByteStringStorers, so they will just pass the provided ByteString through, or read in the content of a file. I think that would work. Though it is not optimal for external special remotes that are chunked but not encrypted. For that case, it might be worth extending the special remote protocol with a way to say "store a chunk of this file from byte N to byte M". Also, talked with ion about what would be involved in using rolling-checksum-based chunks.
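For anyone who wants to play with the idea, here's a cut-down, compilable version of the ContentPipe trick. This is my own simplification, not git-annex's actual module: the pair of source and destination types selects the instance, so a consumer written once against ByteStrings works whether the producer hands it a ByteString or a FilePath.

```haskell
{-# LANGUAGE MultiParamTypeClasses, TypeSynonymInstances, FlexibleInstances, FlexibleContexts #-}
import qualified Data.ByteString.Lazy.Char8 as L
import Data.Int (Int64)

class ContentPipe src dest where
	contentPipe :: src -> (dest -> IO a) -> IO a

-- Already have a ByteString: just hand it over.
instance ContentPipe L.ByteString L.ByteString where
	contentPipe b a = a b

-- Have a FilePath but want a ByteString: read the file lazily.
instance ContentPipe FilePath L.ByteString where
	contentPipe f a = a =<< L.readFile f

-- A consumer that wants a ByteString, usable with either source type.
byteLength :: ContentPipe src L.ByteString => src -> IO Int64
byteLength s = contentPipe s (return . L.length)
```

With this, byteLength (L.pack "hello") and byteLength on a FilePath both work, with instance selection doing the adaptation.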
That would allow for rsync or zsync like behavior, where when a file changed, git-annex uploads only the chunks that changed, and the unchanged chunks are reused. I am not ready to work on that yet, but I made some changes to the parsing of the chunk log, so that additional chunking schemes like this can be added to git-annex later without breaking backwards compatibility. Last night, went over the new chunking interface, tightened up exception handling, and improved the API so that things like WebDAV will be able to reuse a single connection while all of a key's chunks are being downloaded. I am pretty happy with the interface now, and expect to convert more special remotes to use it soon. Just finished adding a killer feature: Automatic resuming of interrupted downloads from chunked remotes. Sort of a poor man's rsync, which, while less efficient and awesome, is going to work on every remote that gets the new chunking interface, from S3 to WebDAV, to all of Tobias's external special remotes! It even allows for things like starting a download from one remote, interrupting, and resuming from another one, and so on. I had forgotten about resuming while designing the chunking API. Luckily, I got the design right anyway. Implementation was almost trivial, and only took about 2 hours! (See 9d4a766cd7b8e8b0fc7cd27b08249e4161b5380a) I'll later add resuming of interrupted uploads. It's not hard to detect such uploads with only one extra query of the remote, but in principle, it should be possible to do it with no extra overhead, since git-annex already checks if all the chunks are there before starting an upload. Remained frustratingly stuck until 3 pm on the same stuff that puzzled me yesterday. However, 6 hours later, I have the directory special remote 100% working with both new chunk= and legacy chunksize= configuration, both with and without encryption.
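The resume logic boils down to something simple: enumerate the chunks a key is stored as, check which are already present, and transfer only the rest. A toy model of that selection step (hypothetical names, not git-annex's actual code):

```haskell
-- Toy model of resuming a chunked transfer: given a presence test for
-- chunk numbers and the total chunk count, list the chunks still to do.
resumePlan :: (Int -> Bool) -> Int -> [Int]
resumePlan chunkPresent totalChunks =
	filter (not . chunkPresent) [1 .. totalChunks]
```

If chunks 1-3 of 5 survived an interrupted transfer, resumePlan (<= 3) 5 yields [4,5], and the transfer picks up from chunk 4. This also covers starting a download from one remote and finishing from another, since only chunk presence matters, not where the earlier chunks came from.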
So, the root of why this was hard, since I thought about it a lot today in between beating my head into the wall: git-annex's internal API for remotes is really, really simple. It basically comes down to:

	Remote
		{ storeKey :: Key -> AssociatedFile -> MeterUpdate -> Annex Bool
		, retrieveKeyFile :: Key -> AssociatedFile -> FilePath -> MeterUpdate -> Annex Bool
		, removeKey :: Key -> Annex Bool
		, hasKey :: Key -> Annex (Either String Bool)
		}

This simplicity is a Good Thing, because it maps very well to REST-type services. And it allows for quite a lot of variety in implementations of remotes. Ranging from regular git remotes, that rsync files around without git-annex ever loading them itself, to remotes like webdav that load and store files themselves, to remotes like tahoe that intentionally do not support git-annex's built-in encryption methods. However, the simplicity of that API means that lots of complicated stuff, like handling chunking, encryption, etc, has to be handled on a per-remote basis. Or, more generally, by Remote -> Remote transformers that take a remote and add some useful feature to it. One problem is that the API is so simple that a remote transformer that adds encryption is not feasible. In fact, every encryptable remote has had its own code that loads a file from local disk, encrypts it, and sends it to the remote. Because there's no way to make a remote transformer that converts a storeKey into an encrypted storeKey. (Ditto for retrieving keys.) I almost made the API more complicated today. Twice. But both times I ended up not doing so, and I think that was the right choice, even though it meant I had to write some quite painful code. In the end, I instead wrote a little module that pulls together supporting both encryption and chunking. I'm not completely happy, because those two things should be independent, and so separate. But, 120 lines of code that don't keep them separate is not the end of the world.
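To make the "Remote -> Remote transformer" idea concrete, here's a toy with much-simplified types (IO instead of Annex, String keys; nothing like the real record). A transformer that wraps operations with logging is easy, precisely because it never needs to see the file content:

```haskell
-- Much-simplified stand-in for the Remote record, for illustration
-- only; the real one lives in the Annex monad with richer types.
data Remote = Remote
	{ remoteName :: String
	, storeKey   :: String -> IO Bool
	, removeKey  :: String -> IO Bool
	}

-- A feasible Remote -> Remote transformer: log each operation, then
-- delegate. An encryption transformer is NOT expressible this way,
-- since storeKey receives only the key, never the content to encrypt.
withLogging :: Remote -> Remote
withLogging r = r
	{ storeKey  = \k -> putStrLn ("store "  ++ k) >> storeKey r k
	, removeKey = \k -> putStrLn ("remove " ++ k) >> removeKey r k
	}
```

The transformer leaves the underlying behavior unchanged, which is exactly the property that breaks down once a feature (like encryption) needs access to the content being stored.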
That module also contains some more powerful, less general APIs, that will work well with the kinds of remotes that will use it. The really nice result is that the implementation of the directory special remote melts down from 267 lines of code to just 172! (Plus some legacy code for the old style chunking, refactored out into a file I can delete one day.) It's a lot cleaner too. With all this done, I expect I can pretty easily add the new style chunking to most git-annex remotes, and remove code from them while doing it! Today's work was sponsored by Mark Hepburn. A lil bit in the weeds on the chunking rewrite right now. I did succeed in writing the core chunk generation code, which can be used for every special remote. It was pretty hairy (it needs to stream large files in constant memory, separating them into chunks, and get the progress display right across operations on chunks, etc). That took most of the day. Ended up getting stuck in integrating the encryptable remote code, and had to revert changes that could have led to rewriting (or perhaps eliminating?) most of the per-remote encryption specific code. Up till now, this has supported both encrypted and non-encrypted remotes; it was simply passed encrypted keys for an encrypted remote:

	remove :: Key -> Annex Bool

But with chunked encrypted keys, it seems it needs to be more complicated:

	remove' :: Maybe (Key -> Key) -> ChunkConfig -> Key -> Annex Bool

So that when the remote is configured to use chunking, it can look up the chunk keys, and then encrypt them, in order to remove all the encrypted chunk keys. I don't like that complication, so I want to find a cleaner abstraction. Will sleep on it. While I was looking at the encryptable remote generator, I realized the remote cost was being calculated wrongly for special remotes that are not encrypted. Fixed that bug. Today's work was sponsored by bak. The design for new style chunks seems done, and I laid the groundwork for it today.
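The constant-memory property of the chunk generation comes from lazy ByteStrings: splitting a lazily-read stream materializes one chunk at a time. A minimal sketch of just the splitting step (illustrative only, not the actual chunk generation code):

```haskell
import qualified Data.ByteString.Lazy as L
import Data.Int (Int64)

-- Split content into fixed-size chunks. With input from L.readFile,
-- chunks are materialized one at a time as they are consumed, so
-- memory use stays roughly constant no matter how large the file is.
chunkify :: Int64 -> L.ByteString -> [L.ByteString]
chunkify size b
	| size < 1  = [b]            -- chunking disabled: one big "chunk"
	| L.null b  = []
	| otherwise = case L.splitAt size b of
		(h, t) -> h : chunkify size t
```

The hairy parts the entry mentions -- progress display across chunk operations, and interleaving with encryption -- sit on top of a core shaped like this.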
Added chunk metadata to keys, reorganized the legacy chunking code for directory and webdav so it won't get (too badly) in the way, and implemented the chunk logs in the git-annex branch. Today's work was sponsored by LeastAuthority.com. Working on designs for better chunking. Having a hard time finding a way to totally obscure file sizes, but otherwise a good design seems to be coming together. I particularly like that the new design puts the chunk count in the Key (which is then encrypted for special remotes), rather than having it be some special extension. While thinking through chunking, I realized that the current chunking method can fail if two repositories have different chunksize settings for the same special remote and both upload the same key at the same time. Aren't races fun? The new design will eliminate this problem; in the meantime, updated the docs to recommend never changing a remote's chunksize setting. Updated the Debian backport. (Also the git-remote-gcrypt backport.) Made the assistant install a desktop file to integrate with Konqueror. Improved git annex repair, fixing a bug that could cause it to leave broken branch refs and yet think that the repair was successful. A bit surprised to see that it's now been a full year since I started doing development funded by my campaign. Not done yet! Update on campaign rewards: Today's work was sponsored by Douglas Butts. Spent hours today in a 10-minute build/test cycle, tracking down a bug that caused the assistant to crash on Windows after exactly 10 minutes uptime. Eventually found the cause; this is fallout from last month's work that got it logging to the debug.log on Windows. There was more, but that was the interesting one.. I have mostly been thinking about gcrypt today. This issue needs to be dealt with. The question is, does it really make sense to try to hide the people a git repository is encrypted for?
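To illustrate "chunk count in the Key": a chunk key can be thought of as the parent key's name extended with the chunk size and chunk number. This is a hypothetical formatter, not the real git-annex key serialization, but it shows why the approach composes with encryption -- a chunk key is just another key name, so an encrypted special remote can encrypt it like any other:

```haskell
-- Hypothetical sketch of deriving per-chunk key names from a parent
-- key name; the exact format git-annex uses is not shown here.
chunkKeyName :: String -> Integer -> Int -> String
chunkKeyName parent chunkSize n =
	parent ++ "-S" ++ show chunkSize ++ "-C" ++ show n
```

For example, map (chunkKeyName "SHA256-s30--d0f0" 10) [1..3] names three 10-byte chunks of a 30-byte object.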
I have posted some thoughts and am coming to the viewpoint that obscuring the identities of users of a repository is not a problem git-annex should try to solve itself, although it also shouldn't get in the way of someone who is able and wants to do that (by using tor, etc). Finally, I decided to go ahead and add a gcrypt.publish-participants setting to git-remote-gcrypt, and make git-annex set that by default when setting up a gcrypt repository. Some promising news from the ghc build on arm. I got a working ghc, and even ghci works. Which would make the template haskell in the webapp etc available on arm without the current horrible hacks. Have not managed to build the debian ghc package successfully yet though. Also, fixed a bug that made git annex sync not pull/push with a local repository that had not yet been initialized for use with git-annex. Today's work was sponsored by Stanley Yamane. Yay, the Linux autobuilder is back! Also fixed the Windows build. Fixed a reversion that prevented the webapp from starting properly on Windows, which was introduced by some bad locking when I put in the hack that makes it log to the log file on that platform. Various other minor fixes here and there. There are almost enough to do a release again soon. I've also been trying to bootstrap ghc 7.8 on arm, for Debian. There's a script that's supposed to allow building 7.8 using 7.6.3, dealing with a linker problem by using the gold linker. Hopefully that will work, since otherwise Debian could remain stuck with an old ghc, or worse, lose the arm ports. Neither would be great for git-annex.. Spent the past 2 days catching up on backlog and doing bug triage and some minor bug fixes and features. Backlog is 27, the lowest in quite a while, so I feel well on top of things. I was saddened to find this bug, where I almost managed to analyze the ugly bug's race condition, but not quite (and then went on vacation). BTW, I have not heard from anyone else who was hit by that bug so far.
The linux autobuilders are still down; their host server had a disk crash in an electrical outage. Might be down for a while. I would not mind setting up a redundant autobuilder if anyone else would like to donate a linux VM with 4+ gb of ram. Important: A bug caused the assistant to sometimes remove all files from the git repository. You should check if your repository is ok. If the bug hit you, it should be possible to revert the bad commit and recover your files with no data loss. See the bug report for details. This affected git-annex versions since 5.20140613, and only when using the assistant in direct mode. It should be fixed in today's release, 5.20140709. I'm available at urgent2014@joeyh.name to help anyone hit by this unfortunate bug. This is another bug in the direct mode merge code. I'm not happy about it. It's particularly annoying that I can't fix up after it automatically (because there's no way to know if any given commit in the git history that deletes all the files is the result of this bug, or a legitimate deletion of all files). The only good thing is that the design of git-annex is pretty robust, and in this case, despite stupidly committing the deletion of all the files in the repository, git-annex did take care to preserve all their contents, and so the problem should be able to be resolved without data loss. Unfortunately, the main autobuilder is down and I've had to spin up autobuilders on a different machine (thank goodness that's very automated now!), and so I have not been able to build the fixed git-annex for android yet. I hope to get that done later this evening. Yesterday, I fixed a few (much less bad) bugs, and did some thinking about plans for this month. The roadmap suggests working on some of chunks, deltas or gpgkeys. I don't know how to do deltas yet really. Chunks is pretty easily done. The gpg keys stuff is pretty open ended and needs some more work to define some use cases.
But, after today, I am more inclined to want to spend time on better testing and other means of avoiding this kind of situation. Got the release out. Had to fix various autobuilder issues. The arm autobuilder is unfortunately not working currently. Updated git-annex to build with a new version of the bloomfilter library. Got a bit distracted improving Haskell's directory listing code. The only real git-annex work today was fixing the assistant merge loop bug, which was caused by changes in the last release (that made direct mode merging crash/interrupt-safe). This is a kind of ugly bug, that can result in the assistant making lots of empty commits in direct mode repositories. So, I plan to make a new release on Monday. Spent the morning improving behavior when commit.gpgsign is set. Now git-annex will let gpg sign commits that are made when, eg, manually running git annex sync, but not commits implicitly made to the git-annex branch. And any commits made by the assistant are not gpg signed. This was slightly tricky, since lots of different places in git-annex ran git commit, git merge and similar. Then got back to a test I left running over vacation, that added millions of files to a git annex repo. This was able to reproduce a problem where git annex add blew the stack and crashed at the end. There turned out to be two different memory issues; one was in git-annex and the other is in Haskell's core getDirectoryContents. Was able to entirely fix it, eventually. Finally back to work with a new laptop! Did one fairly major feature today: When using git-annex to pull down podcasts, metadata from the feed is copied into git-annex's metadata store, if annex.genmetadata is set. Should be great for views etc! Worked through a lot of the backlog, which is down to 47 messages now. The only other bug fix of note is a fix on Android. A recent change to git made it try to chmod files, which tends to fail on the horrible /sdcard filesystem. Patched git to avoid that.
For some reason the autobuilder box rebooted while I was away, and somehow the docker containers didn't come back up -- so they got automatically rebuilt. But I have to manually finish up building the android and armel ones. Will be babysitting that build this evening. Today's work was sponsored by Ævar Arnfjörð Bjarmason. I am back from the beach, but my dev laptop is dead. A replacement is being shipped, and I have spent today getting my old netbook into a usable state so I can perhaps do some work using it in the meantime. (Backlog is 95 messages.) Last night, got logging to daemon.log working on Windows. Aside from XMPP not working (but it's near to being deprecated anyway), and some possible issues with unicode characters in filenames, the Windows port now seems in pretty good shape for a beta release. Today, mostly worked on fixing the release process so the metadata accurately reflects the version from the autobuilder that is included in the release. Turns out there was version skew in the last release (now manually corrected). This should avoid that happening again, and also automates more of my release process. After despairing of ever solving this yesterday (and for the past 6 months really), I've got the webapp running on Windows with no visible DOS box. Also have the assistant starting up in the background on login. It turns out a service was not the way to go. There is a way to write a VB Script that runs a "DOS" command in a hidden window, and this is what I used. Amazing how hard it was to work this out, probably partly because I don't have the Windows vocabulary to know what to look for. It's officially a Windows porting month. Now that I'm halfway through it, and with the last week of the month going to be a vacation, this makes sense. Today, finished up dealing with the timezone/timestamp issues on Windows. This got stranger and stranger the closer I looked at it.
After a timestamp change, a program that was already running will see one timestamp, while a program that is started after the change will see another one! My approach works pretty much no matter how Windows goes insane, though, and always recovers a true timestamp. Yay.

Also fixed a regression test failure on Windows, which turned out to be rooted in a bug in the command queue runner, which neglected to pass along environment overrides on Windows. Then I spent 5 hours tracking down a tricky test suite failure on Windows, which turned out to also affect FAT and be a recent reversion that has as its root cause a fun bug in git itself. Put in a not very good workaround. Thank goodness for test suites! Also got the arm autobuilder unstuck. Release tomorrow.

Spent all day on some horrible timestamp issues on legacy systems. On FAT, timestamps have a 2s granularity, which is ok, but then Linux adds a temporary higher resolution cache, which is lost on unmount. This confused git-annex, since the mtimes seemed to change and it had to re-checksum half the files to get unconfused, which was not good. I found a way to use the inode sentinel file to detect when on FAT, and put in a workaround without degrading git-annex everywhere else. On Windows, time zones are an utter disaster; the OS changes the mtime it reports for files after the time zone has changed. Also there's a bug in the haskell time library which makes it return old time zone data after a time zone change. (I just finished developing a fix for that bug..) Left with nothing but a few sticks, I rubbed them together, and actually found a way to deal with this problem too. Scary details in Windows file timestamp timezone madness. While I've implemented it, it's stuck on a branch until I find a way to make git-annex notice when the timezone changes while it's running. Today's work was sponsored by Svenne Krap.

Spent most of today improving behavior when a sync or merge is interrupted in direct mode.
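The FAT workaround above boils down to comparing mtimes at the granularity the filesystem can actually store. Here is a minimal sketch of that idea in Haskell (illustrative names only; the real inode-sentinel logic in git-annex is more involved):

```haskell
-- Sketch: comparing file mtimes when the filesystem may only store
-- 2-second granularity (FAT). These names are illustrative, not the
-- actual git-annex implementation.
import Data.Int (Int64)

-- FAT stores mtimes with 2-second resolution.
fatGranularity :: Int64
fatGranularity = 2

-- Round a timestamp (in seconds) down to the granularity boundary,
-- as FAT effectively does when the kernel's higher-resolution cache
-- is lost on unmount.
coarsen :: Int64 -> Int64
coarsen t = t - (t `mod` fatGranularity)

-- Two mtimes count as "unchanged" if they coarsen to the same value,
-- so a lost sub-2s cache does not force re-checksumming.
sameMtime :: Int64 -> Int64 -> Bool
sameMtime a b = coarsen a == coarsen b
```

With this kind of check, an mtime that merely lost its fractional part after an unmount compares as unchanged, so nothing gets needlessly re-checksummed.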
It was possible for an interrupt at the wrong time to leave the merge committed, but the work tree not yet updated. And then the next sync would make a commit that reverted the merged changes! To fix this I had to avoid making any merge commit, or indeed updating the index, until after the work tree is updated. It looked intractable for a while; I'm still surprised I eventually succeeded.

Did work on Windows porting today. First, fixed a reversion in the last release, that broke the git-annex branch pretty badly on Windows, causing \r to be written to files on that branch that should never have DOS line endings. Second, fixed a long-standing bug that prevented getting a file from a local bare repository on Windows. Also refreshed all autobuilders to deal with the gnutls and openssl security holes-of-the-week. (git-annex uses gnutls only for XMPP, and does not use openssl itself, but a few programs bundled with it, like curl, do use openssl.) A nice piece of news: OSX Homebrew now contains git-annex, so it can be easily installed with brew install git-annex.

Yesterday I recorded a new screencast, demoing using the assistant on a local network with a small server: git-annex assistant lan. That's the best screencast yet; having a real framing story was nice; recent improvements to git-annex are taken advantage of without being made a big deal; and audio and video are improved. (But there are some minor encoding glitches which I'd have to re-edit it to fix.)

The roadmap has this month dedicated to improving Android. But I think what I'd more like to do is whatever makes the assistant usable by the most people. This might mean doing more on Windows, since I hear from many who would benefit from that. Or maybe something not related to porting?

After making a release yesterday, I've been fixing some bugs in the webapp, all to do with repository configuration stored on the git-annex branch.
I was led into this by a strange little bug where the webapp stored configuration in the wrong repo in one situation. From there, I noticed that often when enabling an existing repository, the webapp would stomp on its group and preferred content and description, replacing them with defaults. This was a systematic problem; it had to be fixed in several places, and some of the fixes were quite tricky. For example, when adding a ssh repository, and it turns out there's already a git-annex repository at the entered location, it needs to avoid changing its configuration. But also, the configuration of that repo won't be known until after the first git pull from it. So it doesn't make sense to show the repository edit form after enabling such a repository. Also worked on a couple other bugs, and further cleaned up the bugs page. I think I am finally happy with how the bug list is displayed, with confirmed/moreinfo/etc tags. Today's work was sponsored by François Deppierraz.

Got a handle on the Android webapp static file problems (no, they were not really encoding problems!), and hopefully that's all fixed now. Also, only 3 modules use Char8 now. And updated the git-annex backport. That's all I did today. Meanwhile, a complete ZSH completion has been contributed by Schnouki. And, Ben Gamari sent in a patch moving from the deprecated MonadCatchIO-transformers library to the exceptions library.

These themed days are inadvertent, but it happened again: nearly everything done today had to do with encoding issues. The big news is that it turned out everything written to files in the git-annex branch had unicode characters truncated to 8 bits. Now fixed, so you should always get out the same thing you put in, no matter what encoding you use (but please use utf8). This affected things like storing repository descriptions, but worse, it affected metadata. (Also preferred content expressions, I suppose.) With that fixed, there are still 7 source files left that use Char8 libraries.
There used to be more; nearly every use of those is a bug. I looked over the remaining uses of it, and there might be a problem with Creds using it. I should probably make a push to stamp out all remaining uses of Char8. Other encoding bugs were less reproducible. And just now, Sören made some progress on Bootstrap3 icons missing on Android ... and my current theory is this is actually caused by an encoding issue too.

With some help from Sören, have been redoing the android build environment for git-annex. This included making propellor put it in a docker container, which was easy. But then much struggling with annoying stuff like getting the gnutls linking to work, and working around some dependency issues on hackage that make cabal's dependency resolver melt down. Finally succeeded, after much more time than I had wanted to spend on this.

Working on moving the android autobuilder to Docker & Propellor, which will finish containerizing all the autobuilds that I run. Updated ghc-android to use the released ghc 7.8.2, which will make it build more reliably. Also did bug triage. Bugs are now divided into confirmed and unconfirmed categories.

Keeping lots of things going these past few days..

- Rebootstrapping the armel autobuilder with propellor. Some qemu instability and the need to update haskell library patches meant this took a lot of hand-holding. Finally got a working setup today.
- Designing and ordering new git-annex stickers on clear vinyl backing; have put off sending those to campaign contributors for too long.
- Added a new feature to the webapp: It now remembers the ssh remotes that it sets up, and makes it easy to enable them elsewhere, the same as other sorts of remotes. Had a very pleasant surprise building this, when I was able to reuse all the UI code for enabling rsync and gcrypt remotes. I think this will be a useful feature as we transition away from XMPP.

Worked on triaging several bugs.
Fixed an easy one, which involved the assistant choosing the wrong path to a repository that has multiple remotes. After today, backlog is down to 43, nearly pre-Brazil levels.

It seems that git-remote-gcrypt never quite worked on OSX. It looked like it did, but a bug prevented anything being pushed to the remote. Tracked down and fixed that bug. This evening, getting back to working on the armel autobuilder setup using propellor. The autobuilder will use a pair of docker containers, one armel and a companion amd64, and their quite complex setup will be almost fully automated (except for the haskell library patching part). Today's work was sponsored by Mica Semrick.

Released git-annex 5.20140517 today. The changelog for this release is very unusual, because it's full of contributions from others! There are as many patches from others in this release as git-annex got in the first entire two years of its existence. I'd like to keep that going. Also, I could really use help triaging bug reports right now. So I have updated the contribute page with more info about easy ways to contribute to git-annex. If you read this devblog, you're an ideal contributor, and you don't need to know how to write haskell either.. So take a look at the page and see if you can help out.

Powered through the backlog today, and got it down to 67! Probably most of the rest is the hard ones though. A theme today was: It's stupid hard to get git-annex-shell installed into PATH. While that should be the simplest thing in the world, I'm pinned between two problems:

- There's no single portable package format, so all the nice ways to get things into PATH developed over the decades don't work for everybody.
- bash provides not a single dotfile that will work in all circumstances to configure PATH. In particular, "ssh $host git-annex-shell" causes bash to helpfully avoid looking at any dotfiles at all.
Today's flailing to work around that included:

- Merged a patch from Fraser Tweedale to allow git config remote.origin.annex-shell /not/in/path/git-annex-shell
- Merged a patch from Justin Lebar to allow symlinking the git-annex-shell etc from the standalone tarball to a directory that is in PATH. (Only on Linux, not OSX yet.)
- Improved the warning message git-annex prints when a remote server does not have git-annex-shell in PATH, suggesting some things the user could do to try to fix it.

I've found out why OSX machines were retrying upgrades repeatedly. The version in the .info file did not match the actual git-annex version for OSX. I've fixed the info file version, but will need to come up with a system to avoid such mismatches. Made a few other fixes. A notable one is that dragging and dropping repositories in the webapp to reorder the list (and configure costs) had been broken since November.

git-annex 5.20140421 finally got into Debian testing today, so I updated the backport. I recommend upgrading, especially if you're using the assistant with a ssh remote, since you'll get all of last month's nice features that make XMPP unnecessary in that configuration. Today's work was sponsored by Geoffrey Irving.

Spent the day testing the sshpassword branch. A few interesting things:

- I was able to get rid of 10 lines of Windows-specific code for rsync.net, which had been necessary for console ssh password prompting to work. Yay!
- git-remote-gcrypt turned out to be broken when there is no controlling tty. --no-tty has to be passed to gpg to avoid it falling over in this case, even when a gpg agent is available to be used. I fixed this with a new release of git-remote-gcrypt.

Mostly the new branch just worked! And is merged...

Merged a patch from Robie Basak that adds a new special remote that's sort of like bup but supports deletion: ddar. Backlog: 172. Today's work was sponsored by Andrew Cant.

My backlog is massive -- 181 items to answer.
Will probably take the rest of the month to get caught back up. Rather than digging into that yet, spent today working on the webapp's ssh password prompting. I simplified it so the password is entered on the same form as the rest of the server's information. That made the UI easy to build, but means that when a user already has a ssh key they want to use, they need to select "existing ssh key"; the webapp no longer probes to automatically detect that case.

Got the ssh password prompting in the webapp basically working, and it's a really nice improvement! I even got it to work on Windows (eventually...). It's still only in the sshpassword branch, since I need to test it more and probably fix some bugs. In particular, when enabling a remote that already exists, I think it never prompts for the password yet. Today's work was sponsored by Nicola Chiapolini.

I have a preliminary design for request routing. Won't be working on it immediately, but simulations show it can work well in a large ad-hoc network.

Sören Brunk's massive bootstrap 3 patch has landed! This is a 43 thousand line diff, with 2 thousand lines after the javascript and CSS libraries are filtered out. Either way, the biggest patch contributed by anyone to git-annex so far, and excellent work.

Meanwhile, I built a haskell program to simulate a network of highly distributed git-annex nodes with ad-hoc connections and the selective file syncing algorithm now documented at the bottom of efficiency. Currently around 33% of requested files never get to their destination in this simulation, but this is probably because its network is randomly generated, and so contains disconnected islands. So next, some data entry, from a map that involves an Amazon not in .com, dotted with names of people I have recently met...

I've moved out of implementation mode (unable to concentrate enough), and into high-level design mode.
Syncing efficiency has been an open TODO for years: to find a way to avoid flood-filling the network, and to find more efficient ways to ensure data only gets to the nodes that want it. Relatedly, Android devices often need a way to mark individual files they want to have. Had a very productive discussion with Vince and Fernao, and I think we're heading toward a design that will address both these needs, as well as some more Brazil-specific use cases, about which more later. Today's work was sponsored by Casa do Boneco.

Reviewed Sören's updated bootstrap3 patch, which appeared while I was traveling. Sören kindly fixed it to work with Debian stable's old version of Yesod, which was quite a lot of work. The new new bootstrap3 UI looks nice; found a few minor issues, but expect to be able to merge it soon.

Started on sshpassword groundwork. Added a simple password cache to the assistant, with automatic expiration, and made git-annex be able to be run by ssh as the SSH_ASKPASS program. The main difficulty will be changing the webapp's UI to prompt for the ssh password when one is needed. There are several code paths in ssh remote setup where a password might be needed. Since the cached password expires, it may need to be prompted for at any of those points. Since a new page is loading, it can't pop up a prompt on the current page; it needs to redirect to a password prompt page and then redirect back to the action that needed the password. ...At least, that's one way to do it. I'm going to sleep on it and hope I dream up a better way.

Today was mostly spent driving across Brazil, but I had energy this evening for a little work on git-annex. Made the assistant delete old temporary files on startup. I've had scattered reports of a few users whose .git/annex/tmp contained many files, apparently put there by the assistant when it locks down a file prior to annexing it. That seems like it could possibly be a bug -- or it could just be unclean shutdowns interrupting the assistant.
Anyway, this will deal with any source of tmp cruft, and I made sure to preserve tmp files for partially downloaded content.

Next month the roadmap has me working on sshpassword. That will be a nice UI improvement, and I'd be very surprised if it takes more than a week, which is great. Getting a jump on it today, investigating using SSH_ASKPASS. It seems this will even work on Windows! Preliminary design in sshpassword. Time to get on a plane to a plane to a plane to Brasilia!

Now git-annex's self-upgrade code will check the gpg signature of a new version before using it. To do this I had to include the gpg public keys in the git-annex distribution, and that raised the question of which public keys to include. Currently I have both the dedicated git-annex distribution signing key, and my own gpg key as a backup in case I somehow misplace the former. Also spent a while looking at the recent logs on the web server. There seem to be around 600 users of the assistant with upgrade checking enabled. That breaks down to 68% Linux amd64, 20% Linux i386, 11% OSX Mavericks, and 0.5% OSX Lion. Most are upgrading successfully, but there are a few that seem to repeatedly fail for some reason. (Not counting the OSX Lion users, who will probably never find an upgrade available.) I hope that someone who is experiencing an upgrade failure gets in touch with some debug logs. In the same time period, around 450 unique hosts manually downloaded a git-annex distribution. Also compare with Debian popcon, which has 1200 reporting git-annex users.

I hope this will be a really good release. Didn't get all the way to telehash this month, but the remotedaemon is pretty sweet. The updated roadmap pushes telehash back again. The files in this release are now gpg signed, after recently moving the downloads site to a dedicated server, which has a dedicated gpg key. You can verify the detached signatures as an additional security check over trusting SSL.
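The SSH_ASKPASS idea mentioned above works because ssh, when it has no controlling tty, runs the program named by that variable and reads the password from its stdout. A tiny sketch of the responder side (the ANNEX_CACHED_PASSWORD variable is a hypothetical stand-in; the real design caches the password in the assistant with expiry):

```haskell
-- Sketch of the SSH_ASKPASS responder side (illustrative only, not
-- git-annex's actual code).
import System.Environment (lookupEnv)

-- What to print when ssh invokes us as the askpass program: the
-- cached password if one is available, else an empty reply.
askpassReply :: Maybe String -> String
askpassReply = maybe "" id

-- Look up the cached password. An environment variable stands in here
-- for git-annex's real in-memory cache with automatic expiration.
cachedPassword :: IO (Maybe String)
cachedPassword = lookupEnv "ANNEX_CACHED_PASSWORD"  -- hypothetical name
```

ssh then reads whatever the askpass program prints and uses it as the password, so the webapp only has to prompt the user once per cache lifetime.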
The automatic upgrade code doesn't check the gpg signatures yet. Sören Brunk has ported the webapp to Bootstrap 3. The branch is not ready for merging yet (it would break the Debian stable backports), but that was a nice surprise.

Sometimes you don't notice something is missing for a long time, until it suddenly demands attention. Like today. Seems the webapp never had a way to stop using XMPP and delete the XMPP password. So I added one. The new support for instantly noticing changes on a ssh remote forgot to start up a connection to a new remote after it was created. Fixed that. (While doing some testing on Android for unrelated reasons, I noticed that my android tablet was pushing photos to a ssh server and my laptop immediately noticed and downloaded them from there, which is an excellent demo. I will deploy this on my trip in Brazil next week. Yes, I'm spending 2 weeks in Brazil with git-annex users; more on this later.) Finally, it turns out that "installing" git-annex from the standalone tarball, or DMG, on a server didn't make it usable by the webapp, because git-annex-shell is not in PATH on the server, and indeed git and rsync may not be in PATH either if they were installed with the git-annex bundle. Fixed this by making the bundle install a ~/.ssh/git-annex-wrapper, which the webapp will detect and use. Also, quite a lot of other bug chasing activity. Today's work was sponsored by Thomas Koch.

Worked through message backlog today. Got it down from around 70 to just 37. Was able to fix some bugs, including making the webapp start up more robustly in some misconfigurations. Added a new findref command which may be useful in a git update hook, to deny pushes of refs if the annexed content has not been sent first. BTW, I also added a new reinit command a few days ago, which can be useful if you're cloning back a deleted repository. Also a few days ago, I made uninit a lot faster.

After fixing a few bugs in the remotecontrol branch, it has landed in master.
Try a daily build today, and see if the assistant can keep in sync using nothing more than a remote ssh repository! So, now all the groundwork for telehash is laid too. I only need a telehash library to start developing on top of. Development on telehash-c is continuing, but I'm more excited that htelehash has been revived and is being updated to the v2 protocol, seemingly quite quickly.

Made ssh connection caching be used in several more places. git annex sync will use it when pushing/pulling to a remote, as will the assistant. And git-annex remotedaemon also uses connection caching. So, when a push lands on a ssh remote, the assistant will immediately notice it, and pull down the change over the same TCP connection used for the notifications. This was a bit of a pain to do. Had to set GIT_SSH=git-annex, and then when git invokes git-annex as ssh, it runs ssh with the connection caching parameters. Also, improved the network-manager and wicd code, so it detects when a connection has gone down. That propagates through to the remote-daemon, which closes all ssh connections. I need to also find out how to detect network connections/disconnections on OSX.. Otherwise, the remotecontrol branch seems ready to be merged. But I want to test it for a while first.

Followed up on yesterday's bug by writing some test cases for Utility.Scheduled, which led to some more bug fixes. Luckily nothing I need to rush out a release over. In the end, the code got a lot simpler and clearer:

    -- Check if the new Day occurs one month or more past the old Day.
    oneMonthPast :: Day -> Day -> Bool
    new `oneMonthPast` old = fromGregorian y (m+1) d <= new
      where
        (y,m,d) = toGregorian old

Today's work was sponsored by Asbjørn Sloth Tønnesen.

Pushed out a new release today, fixing two important bugs, followed by a second release which fixed the bugs harder. Automatic upgrading was broken on OSX. The webapp will tell you upgrading failed, and you'll need to manually download the .dmg and install it.
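For reference, the connection caching parameters handed to ssh via the GIT_SSH trick described above look roughly like this. The OpenSSH options are real; the list construction is an illustrative sketch, not git-annex's actual code, and the socket path handling is more involved in practice:

```haskell
-- Sketch of ssh connection caching options (OpenSSH multiplexing).
-- Every git-annex-spawned ssh pointed at the same control socket
-- shares one TCP connection.
sshCachingOptions :: FilePath -> [String]
sshCachingOptions socketfile =
  [ "-S", socketfile            -- control socket for connection sharing
  , "-o", "ControlMaster=auto"  -- start a master connection if none yet
  , "-o", "ControlPersist=yes"  -- keep it open for later commands
  ]
```

When git invokes git-annex in place of ssh, it can splice these options into the real ssh command line, so pushes, pulls, and change notifications all ride the same cached connection.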
With help from Maximiliano Curia, finally tracked down a bug I have been chasing for a while where the assistant would start using a lot of CPU while not seeming to be busy doing anything. Turned out to be triggered by a scheduled fsck that was configured to run once a month with no particular day specified. That bug turned out to affect users who first scheduled such a fsck job after the 11th day of the month. So I expedited putting a release out to avoid anyone else running into it starting tomorrow. (Oddly, the 11th day of this month also happens to be my birthday. I did not expect to have to cut 2 releases today..)

The git-remote-daemon now robustly handles loss of signal, with reconnection backoffs. And it detects if the remote ssh server has too old a version of git-annex-shell, and the webapp will display a warning message. Also, made the webapp show a network signal bars icon next to both ssh and xmpp remotes that it's currently connected with. And, updated the webapp's nudging to set up XMPP to now suggest either an XMPP or a ssh remote. I think that the remotecontrol branch is nearly ready for merging! Today's work was sponsored by Paul Tagliamonte.

git-remote-daemon is tied into the assistant, and working! Since it's not really ready yet, this is in the remotecontrol branch. My test case for this is two client repositories, both running the assistant. Both have a bare git repository, accessed over ssh, set up as their only remote, and no other way to keep in touch with one another. When I change a file in one repository, the other one instantly notices the change and syncs. This is gonna be awesome. Much less need for XMPP. Windows will be fully usable even without XMPP. Also, most of the work I did today will be fully reused when the telehash backend gets built. The telehash-c developer is making noises about it being almost ready for use, too! Today's work was sponsored by Frédéric Schütz.

Various bug triage today.
Was not good for much after shuffling paper for the whole first part of the day, but did get a few little things done. Regarding the OpenSSL vulnerability: git-annex does not use OpenSSL itself, but when using XMPP, the remote server's key could have been intercepted using this new technique. Also, the git-annex autobuilds and this website are served over https -- working on generating new https certificates now. Be safe out there..

Built the git-annex remotedaemon command today. It's buggy, but it already works! If you have a new enough git-annex-shell on a remote server, you can run "git annex remotedaemon" in a git-annex repository, and it will notice any pushes that get made to that remote from any other clone, and pull down the changes.

Added a git-annex-shell notifychanges command, which uses inotify (etc) to detect when git refs have changed, and informs the caller about the changes. This was relatively easy to write; I reused the existing inotify code, and factored out code for simple line-based protocols from the external special remote protocol. Also implemented the git-remote-daemon protocol. 200 lines of code total. Meanwhile, Johan Kiviniemi improved the dbus notifications, making them work on Ubuntu and adding icons. Awesome! There's going to be some fun to get git-annex-shell upgraded so that the assistant can use this new notify feature. While I have not started working on the assistant side of this, you can get a jump by installing today's upcoming release of git-annex. I had to push this out early because there was a bug that prevented the webapp from running on non-gnome systems. Since all changes in this release only affected Linux, today's release will be a Linux-only release.

I have a plan for this month. While waiting for telehash, I am going to build git-remote-daemon, which is the infrastructure git-annex will need to use telehash. Since it's generalized to support other protocols, I'll be able to start using it before telehash is ready.
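A simple line-based protocol of the sort factored out above can be sketched as a serializer/parser pair. The message names here are hypothetical, not the actual notifychanges or git-remote-daemon protocol:

```haskell
-- Toy line-based protocol: one message per line, first word is the
-- message name, rest are parameters. Illustrative only.
data Message
  = Ready               -- handshake complete
  | Change String       -- a git ref that changed on the remote
  deriving (Eq, Show)

serialize :: Message -> String
serialize Ready        = "READY"
serialize (Change ref) = "CHANGE " ++ ref

parse :: String -> Maybe Message
parse s = case words s of
  ["READY"]       -> Just Ready
  ["CHANGE", ref] -> Just (Change ref)
  _               -> Nothing
```

The appeal of this style is that both ends can be tested as pure functions, and the same framing code can be shared across protocols, which is what made the notifychanges work quick to build.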
In fact, I plan to first make it work with ssh:// remotes, where it will talk with git-annex-shell on the remote server. This will let the assistant immediately know when the server has received a commit, and that will simplify using the assistant with a ssh server -- no more need for XMPP in this case! It should also work with git-remote-gcrypt encrypted repositories, so it also covers the case of an untrusted ssh server where everything is end-to-end encrypted. Building the git-annex-shell part of this should be pretty easy, and building enough of the git-remote-daemon design to support it is also not hard.

Got caught up on all recent bugs and questions, although I still have a backlog of 27 older things that I really should find time for. Fixed a couple of bugs. One was that the assistant set up ssh authorized_keys that didn't work with the fish shell. Also got caught up on the current state of telehash-c. Have not quite gotten it to work, but it seems pretty close to being able to see it do something useful for the first time. Pushing out a release this evening with a good number of changes left over from March.

Last week's trip was productive, but I came home more tired than I realized. Found myself being snappy & stressed, so I have been on break. I did do a little git-annex dev in the past 5 days. On Saturday I implemented preferred content (although without the active checks I think it probably ought to have). Yesterday I had a long conversation with the Tahoe developers about improving git-annex's tahoe integration. Today, I have been wrapping up building propellor. To test its docker support, I used propellor to build and deploy a container that is a git-annex autobuilder. I'll be replacing the old autobuilder setup with this shortly, and expect to also publish docker images for git-annex autobuilders, so anyone who wants to can run their own autobuilder really easily.

I have April penciled in on the roadmap as the month to do telehash.
I don't know if telehash-c is ready for me yet, but it has had a lot of activity lately, so this schedule may still work out!

Catching up on conference backlog; 36 messages remain. Fixed git-annex-shell configlist to automatically initialize a git remote when a git-annex branch had been pushed to it. This is necessary for gitolite to be easy to use, and I'm sure it used to work. Updated the Debian backport, and made a Debian package of the fdo-notify haskell library used for notifications. Applied a patch from Alberto Berti to fix support for tahoe-lafs 1.10. And various other bug fixes and small improvements.

Attended the f-droid sprint at LibrePlanet, and have been getting a handle on how their build server works, with an eye toward adding git-annex to it. Not entirely successful getting vagrant to build an image yet. Yesterday coded up one nice improvement on the plane -- git annex unannex (and uninit) is now tons faster. Before, it did a git commit after every file processed; now there's just one commit at the end. This required using some locking to prevent the pre-commit hook from running in a confusing state.

Today: LibrePlanet, and a surprising amount of development. I've added file manager integration, only for Nautilus so far. The main part of this was adding --notify-start and --notify-finish, which use dbus desktop notifications to provide feedback. (Made possible thanks to Max Rabkin for updating fdo-notify to use the new dbus library, and ion for developing the initial Nautilus integration scripts.) Today's work and LibrePlanet visit was sponsored by Jürgen Lüters.

Yesterday, worked on cleaning up the todo list. Fixed the Windows slash problem with rsync remotes. Today, more Windows work; it turns out to have been quite buggy in its handling of non-ASCII characters in filenames.
Encoding stuff is never easy for me, but I eventually managed to find a way to fix that, although I think there are other filename encoding problems lurking in git-annex on Windows still to be dealt with.

Implemented an interesting metadata feature yesterday. It turns out that metadata can have metadata. Particularly, it can be useful to know when a field was last set. That was already being tracked internally (to make union merging work), so I was able to quite cheaply expose it as "$field-lastchanged" metadata that can be used like any other metadata. I've been thinking about how to implement required content expressions, and think I have a reasonably good handle on it.

The website broke and I spent several hours fixing it, changing the configuration to not let it break like this again, cleaning up after it, etc. Did manage to make a few minor bugfixes and improvements, but nothing stunning. I'll be attending LibrePlanet at MIT this weekend.

Added some power and convenience to preferred content expressions. Before, "standard" was a special case. Now it's a first-class keyword, so you can do things like "standard or present" to use the standard preferred content expression, modified to also want any file that happens to be present. Also added a way to write your own reusable preferred content expressions, tied to groups. To make a repository use them, set its preferred content to "groupwanted". Of course, "groupwanted" is also a first-class keyword, so "not groupwanted" or something can also be done. While I was at it, I made vicfg show the built-in standard preferred content expressions, for reference. This little IDE should be pretty self-explanatory, I hope. So, preferred content is almost its own little programming language now. Except I was careful to not allow recursion.

Did some more exploration and perf tuning and thinking on caching databases, and am pretty sure I know how I want to implement it.
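The first-class keywords described above can be pictured as terminals in a tiny expression language. A toy evaluator (an illustrative sketch, far simpler than git-annex's real preferred content matcher) shows how "standard" and "groupwanted" expand by substitution, and why recursion has to be disallowed:

```haskell
-- Toy model of preferred content expressions with first-class
-- keywords. Illustrative only; not git-annex's implementation.
data Expr
  = Standard      -- expands to the expression for the repo's group
  | GroupWanted   -- expands to the user-defined group expression
  | Present       -- is the file present in this repository?
  | Or Expr Expr
  | Not Expr

data Env = Env
  { standardExpr    :: Expr
  , groupWantedExpr :: Expr
  , isPresent       :: Bool
  }

-- Expansion is substitution, not general recursion: putting
-- "standard" inside the groupwanted expression would loop forever,
-- which is why recursion is not allowed.
eval :: Env -> Expr -> Bool
eval env Standard    = eval env (standardExpr env)
eval env GroupWanted = eval env (groupWantedExpr env)
eval env Present     = isPresent env
eval env (Or a b)    = eval env a || eval env b
eval env (Not a)     = not (eval env a)
```

With this picture, "standard or present" is just `Or Standard Present`, and "not groupwanted" is `Not GroupWanted`.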
Will be several stages, starting with using it for generating views, and ending(?) with using it for direct mode file mappings. Not sure I'm ready to dive into that yet, so instead spent the rest of the day working on small bugfixes and improvements. Only two significant ones..

Made the webapp use a constant-time string comparison (from securemem) to check if its auth token is valid. This could help avoid a potential timing attack to guess the auth token, although that is theoretical. Just best practice to do this. Seems that openssh 6.5p1 had another hidden surprise (in addition to its now-fixed bug in handling hostnames in .ssh/config) -- it broke the method git-annex was using for stopping a cached ssh connection, which led to some timeouts for failing DNS lookups. If git-annex seems to stall for a few seconds at startup/shutdown, that may be why (--debug will say for sure). I seem to have found a workaround that avoids this problem. Updated the Debian stable backport to the last release. Also it seems that the last release unexpectedly fixed the XMPP SIGILL on some OSX machines. Apparently when I rebuilt all the libraries recently, it somehow fixed that old unsolved bug.

RichiH suggested "wrt ballooning memory on repair: can you read in broken stuff and simply stop reading once you reach a certain threshold, then start repairing, re-run fsck, etc?" .. I had considered that but was not sure it would work. I think I've gotten it to work.

Now working on a design for using a caching database for some parts of git-annex. My initial benchmarks using SQLite indicate it would slow down associated file lookups by nearly an order of magnitude compared with the current ".map files" implementation. (But it would scale better in edge cases.) OTOH, using a SQLite database to index metadata for use in views looks very promising.

Squashed three or four more bugs today. Unanswered message backlog is down to 27.
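The constant-time comparison idea is simple: never return early at the first differing byte. git-annex uses the securemem library for this; a hand-rolled sketch of the same idea looks like:

```haskell
-- Sketch of a constant-time string comparison. git-annex uses the
-- securemem library; this version just illustrates the technique.
import Data.Bits (xor, (.|.))
import Data.Char (ord)

-- Instead of stopping at the first differing byte, OR together the
-- xor of every byte pair, so the time taken does not leak how long a
-- prefix of the guess matched. (Leaking the length is acceptable; the
-- token length is not secret.)
constantTimeEq :: String -> String -> Bool
constantTimeEq a b =
  length a == length b
    && foldl (.|.) 0 (zipWith (\x y -> ord x `xor` ord y) a b) == 0
```

A naive `==` bails out at the first mismatch, so an attacker timing many requests could guess the auth token byte by byte; folding over the whole string removes that signal.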
The most interesting problem today is that the git-repair code was using too much memory when git-fsck output a lot of problems (300 thousand!). I managed to halve the memory use in the worst case (and reduced it much more in more likely cases). But, don't really feel I can close that bug yet, since really big, really badly broken repositories can still run it out of memory. It would be good to find a way to reorganize the code so that the broken objects list streams through git-repair and never has to all be buffered in memory at once. But this is not easy. Release made yesterday, but only finished up the armel build today. And it turns out the OSX build was missing the webapp, so it's also been updated today. Post release bug triage including:

- Added a nice piece of UI to the webapp on user request: A "Sync now" menu item in the repository for each repo. (The one for the current repo syncs with all its remotes.)
- Copying files to a git repository on the same computer turns out to have had a resource leak issue, that caused 1 zombie process per file. With some tricky monad state caching, fixed that, and also eliminated 8% of the work done by git-annex in this case.
- Fixed git annex unused in direct mode to not think that files that were deleted out of the work tree by the user still existed and were unused.

Preparing for a release (probably tomorrow or Friday). Part of that was updating the autobuilders. Had to deal with the gnutls security hole fix, and upgrading that on the OSX autobuilder turned out to be quite complicated due to library version skew. Also, I switched the linux autobuilders over to building from Debian unstable, rather than stable. That should be ok to do now that the standalone build bundles all the libraries it needs... And the arm build has always used unstable, and has been reported working on a lot of systems. So I think this will be safe, but have backed up the old autobuilder chroots just in case.
Also been catching up on bug reports and traffic, and dealt with quite a lot of things today. Smarter log file rotation for the assistant, better webapp behavior when git is not installed, and a fix for the webdav 5 second timeout problem. Perhaps the most interesting change is a new annex.startupscan setting, which can be disabled to prevent the assistant from doing the expensive startup scan. This means it misses noticing any files that changed since it last ran, but this should be useful for those really big repositories. (Last night, did more work on the test suite, including even more checking of merge conflict resolution.) Today's work was sponsored by Michael Alan Dorman. Yesterday I learned of a nasty bug in handling of merges in direct mode. It turns out that if the remote repository has added a file, and there is a conflicting file in the local work tree, which has not been added to git, the local file was overwritten when git-annex did a merge. That's really bad, I'm very unhappy this bug lurked undetected for so long. Understanding the bug was easy. Fixing it turned out to be hard, because the automatic merge conflict resolution code was quite a mess. In particular, it wrote files to the work tree, which made it difficult for a later stage to detect and handle the abovementioned case. Also, the automatic merge resolution code had weird asymmetric structure that I never fully understood, and generally needed to be stared at for an hour to begin to understand it. In the process of cleaning that up, I wrote several more tests, to ensure that every case was handled correctly. Coverage was about 50% of the cases, and should now be 100%. To add to the fun, a while ago I had dealt with a bug on FAT/Windows where it sometimes lost the symlink bit during automatic merge resolution. Except it turned out my test case for it had a heisenbug, and I had not actually fixed it (I think).
In any case, my old fix for it was a large part of the ugliness I was cleaning up, and had to be rewritten. Fully tracking down and dealing with that took a large part of today. Finally this evening, I added support for automatically handling merge conflicts where one side is an annexed file, and the other side has the same filename committed to git in the normal way. This is not an important case, but it's worth it for completeness. There was an unexpected benefit to doing it; it turned out that the weird asymmetric part of the code went away. The final core of the automatic merge conflict resolver has morphed from a mess I'd not want to paste here to a quite concise and easy to follow bit of code.

    case (kus, kthem) of
        -- Both sides of conflict are annexed files
        (Just keyUs, Just keyThem) -> resolveby $
            if keyUs == keyThem
                then makelink keyUs
                else do
                    makelink keyUs
                    makelink keyThem
        -- Our side is annexed file, other side is not.
        (Just keyUs, Nothing) -> resolveby $ do
            graftin them file
            makelink keyUs
        -- Our side is not annexed file, other side is.
        (Nothing, Just keyThem) -> resolveby $ do
            graftin us file
            makelink keyThem
        -- Neither side is annexed file; cannot resolve.
        (Nothing, Nothing) -> return Nothing

Since the bug that started all this is so bad, I want to make a release pretty soon.. But I will probably let it soak and whale on the test suite a bit more first. (This bug is also probably worth backporting to old versions of git-annex in eg Debian stable.) Worked on metadata and views. Besides bugfixes, two features of note: Made git-annex run a hook script, pre-commit-annex. And I wrote a sample script that extracts metadata from lots of kinds of files, including photos and sound files, using extract(1) to do the heavy lifting. See automatically adding metadata. Views can be filtered to not include a tag or a field. For example, git annex view tag=* !old year!=2013. Today's work was sponsored by Stephan Schulz. Did not plan to work on git-annex today..
Unexpectedly ended up making the webapp support HTTPS. Not by default, but if a key and certificate are provided, it'll use them. Great for using the webapp remotely! See the new tip: remote webapp setup. Also removed support for --listen with a port, which was buggy and not necessary with HTTPS. Also fixed several webapp/assistant bugs, including one that let it be run in a bare git repository. And, made the quvi version be probed at runtime, rather than compile time. Pushed a release today. Rest of day spent beating head against Windows XMPP brick wall. Actually made a lot of progress -- Finally found the right approach, and got a clean build of the XMPP haskell libraries. But.. ghc fails to load the libraries when running Template Haskell. "Misaligned section: 18206e5b". Filed a bug report, and I'm sure this alignment problem can be fixed, but I'm not hopeful about fixing it myself. One workaround would be to use the EvilSplicer, building once without the XMPP library linked in, to get the TH splices expanded, and then a second time with the XMPP library and no TH. Made a winsplicehack branch with tons of ifdefs that allows doing this. However, several dozen haskell libraries would need to be patched to get it to work. I have the patches from Android, but would rather avoid doing all that again on Windows. Another workaround would be to move XMPP into a separate process from the webapp. This is not very appealing either, the IPC between them would be fairly complicated since the webapp does stuff like show lists of XMPP buddies, etc. But, one thing this idea has to recommend it is I am already considering using a separate helper daemon like this for Telehash. So there could be synergies between XMPP and Telehash support, possibly leading to some kind of plugin interface in git-annex for this sort of thing. But then, once Telehash or something like it is available and working well, I plan to deprecate XMPP entirely. 
It's been a flakey pain from the start, so that can't come too soon. Not a lot accomplished today. Some release prep, followed up on a few bug reports. Split git-annex's .git/annex/tmp into two directories. .git/annex/tmp will now be used only for partially transferred objects, while .git/annex/misctmp will be used for everything else. In particular this allows symlinking .git/annex/tmp to a ram disk, if you want to do that. (It's not possible for .git/annex/misctmp to be on a different filesystem from the rest of the repository for various reasons.) Beat on Windows XMPP for several more painful hours. Got all the haskell bindings installed, except for gnuidn. And patched network-client-xmpp to build without gnuidn. Have not managed to get it to link. More Windows porting. Made the build completely -Wall safe on Windows. Fixed some DOS path separator bugs that were preventing WebDav from working. Have now tested both box.com and Amazon S3 to be completely working in the webapp on Windows. Turns out that in the last release I broke making box.com, Amazon S3 and Glacier remotes from the webapp. Fixed that. Also, dealt with changes in the haskell DAV library that broke support for box.com, and worked around an exception handling bug in the library. I think I should try to enhance the test suite so it can run live tests on special remotes, which would at least have caught some of these recent problems... Since metadata is tied to a particular key, editing an annexed file, which causes the key to change, made the metadata seem to get lost. I've now fixed this; it copies the metadata from the old version to the new one. (Taking care to copy the log file identically, so git can reuse its blob.) That meant that git annex add has to check every file it adds to see if there's an old version. Happily, that check is fairly fast; I benchmarked my laptop running 2500 such checks a second. So it's not going to slow things down appreciably.
When generating a view, there's now a way to reuse part of the directory hierarchy of the parent branch. For example, git annex view tag=* podcasts/=* makes a view where the first level is the tags, and the second level is whatever podcasts/* directories the files were in. Also, year and month metadata can be automatically recorded when adding files to the annex. I made this only be done when annex.genmetadata is turned on, to avoid polluting repositories that don't want to use metadata. It would be nice if there was a way to add a hook script that's run when files are added, to collect their metadata. I am not sure yet if I am going to add that to git-annex though. It's already possible to do via the regular git post-commit hook. Just make it look at the commit to see what files were added, and then run git annex metadata to set their metadata appropriately. It would be good to at least have an example of such a script to eg, extract EXIF or ID3 metadata. Perhaps someone can contribute one? Spent the day catching up on the last week or so's traffic. Ended up making numerous small bug fixes and improvements. Message backlog stands at 44. Here's the screencast demoing views! Added to the design today the idea of automatically deriving metadata from the location of files in the master branch's directory tree. Eg, git annex view tag=* podcasts/=* in a repository that has a podcasts/ directory would make a tree like "$tag/$podcast". Seems promising. So much still to do with views.. I have belatedly added them to the roadmap for this month; doing Windows and Android in the same month was too much to expect. Still working on views. The most important addition today is that git annex precommit notices when files have been moved/copied/deleted in a view, and updates the metadata to reflect the changes. Also wrote some walkthrough documentation: metadata driven views. And, recorded a screencast demoing views, which I will upload next time I have bandwidth.
Today I built git annex view, and git annex vadd and a few related commands. A quick demo:

    joey@darkstar:~/lib/talks>ls
    Chaos_Communication_Congress/  FOSDEM/       Linux_Conference_Australia/
    Debian/                        LibrePlanet/  README.md
    joey@darkstar:~/lib/talks>git annex view tag=*
    view  (searching...)
    Switched to branch 'views/_'
    ok
    joey@darkstar:~/lib/talks#_>tree -d
    .
    |-- Debian
    |-- android
    |-- bigpicture
    |-- debhelper
    |-- git
    |-- git-annex
    `-- seen
    7 directories
    joey@darkstar:~/lib/talks#_>git annex vadd author=*
    vadd
    Switched to branch 'views/author=_;_'
    ok
    joey@darkstar:~/lib/talks#author=_;_>tree -d
    .
    |-- Benjamin Mako Hill
    |   `-- bigpicture
    |-- Denis Carikli
    |   `-- android
    |-- Joey Hess
    |   |-- Debian
    |   |-- bigpicture
    |   |-- debhelper
    |   |-- git
    |   `-- git-annex
    |-- Richard Hartmann
    |   |-- git
    |   `-- git-annex
    `-- Stefano Zacchiroli
        `-- Debian
    15 directories
    joey@darkstar:~/lib/talks#author=_;_>git annex vpop
    vpop 1
    Switched to branch 'views/_'
    ok
    joey@darkstar:~/lib/talks#_>git annex vadd tag=git-annex
    vadd
    Switched to branch 'views/(git-annex)'
    ok
    joey@darkstar:~/lib/talks#(git-annex)>ls
    1025_gitify_your_life_{Debian;2013;DebConf13;high}.ogv@
    git_annex___manage_files_with_git__without_checking_their_contents_into_git_{FOSDEM;2012;lightningtalks}.webm@
    mirror.linux.org.au_linux.conf.au_2013_mp4_gitannex_{Linux_Conference_Australia;2013}.mp4@
    joey@darkstar:~/lib/talks#_>git annex vpop 2
    vpop 2
    Switched to branch 'master'
    ok

Not 100% happy with the speed -- the generation of the view branch is close to optimal, and fast enough (unless the branch has very many matching files). And vadd can be quite fast if the view has already limited the total number of files to a smallish amount. But view has to look at every file's metadata, and this can take a while in a large repository. Needs indexes.
It also needs integration with git annex sync, so the view branches update when files are added to the master branch, and moving files around inside a view and committing them does not yet update their metadata. Today's work was sponsored by Daniel Atlas. Working on building metadata filtered branches. Spent most of the day on types and pure code. Finally at the end I wrote down two actions that I still need to implement to make it all work:

    applyView' :: MkFileView -> View -> Annex Git.Branch
    updateView :: View -> Git.Ref -> Git.Ref -> Annex Git.Branch

I know how to implement these, more or less. And in most cases they will be pretty fast. The more interesting part is already done. That was the issue of how to generate filenames in the filter branches. That depends on the View being used to filter and organize the branch, but also on the original filename used in the reference branch. Each filter branch has a reference branch (such as "master"), and displays a filtered and metadata-driven reorganized tree of files from its reference branch.

    fileViews :: View -> (FilePath -> FileView) -> FilePath -> MetaData -> Maybe [FileView]

So, a view that matches files tagged "haskell" or "git-annex" and with an author of "J*" will generate filenames like "haskell/Joachim/interesting_theoretical_talk.ogg" and "git-annex/Joey/mytalk.ogg". It can also work backwards from these filenames to derive the MetaData that is encoded in them.

    fromView :: View -> FileView -> MetaData

So, copying a file to "haskell/Joey/mytalk.ogg" lets it know that it's gained a "haskell" tag. I knew I was on the right track when fromView turned out to be only 6 lines of code! The trickiest part of all this, which I spent most of yesterday thinking about, is what to do if the master branch has files in subdirectories. It probably does not make sense to retain that hierarchical directory structure in the filtered branch, because we instead have a non-hierarchical metadata structure to express.
(And there would probably be a lot of deep directory structures containing only one file.) But throwing away the subdirectory information entirely means that two files with the same basename and same metadata would have colliding names. I eventually decided to embed the subdirectory information into the filenames used on the filter branch. Currently that is done by converting dir/subdir/file.foo to file(dir)(subdir).foo. We'll see how this works out in practice.. More Windows porting.. Seem to be getting near an end of the easy stuff, and also the webapp is getting pretty usable on Windows now, the only really important thing lacking is XMPP support. Made git-annex on Windows set HOME when it's not already set. Several of the bundled cygwin tools only look at HOME. This was made a lot harder and uglier due to there not being any way to modify the environment of the running process.. git-annex has to re-run itself with the fixed environment. Got rsync.net working in the webapp. Although with an extra rsync.net password prompt on Windows, which I cannot find a way to avoid. While testing that, I discovered that openssh 6.5p1 has broken support for ~/.ssh/config Host lines that contain upper case letters! I have filed a bug about this and put a quick fix in git-annex, which sometimes generated such lines. Windows porting all day. Fixed a lot of issues with the webapp, so quite productive. Except for the 2 hours wasted finding a way to kill a process by PID from Haskell on Windows. Last night, made git annex metadata able to set metadata on a whole directory or list of files if desired. And added a --metadata field=value switch (and corresponding preferred content terminal) which limits git-annex to acting on files with the specified metadata. Built the core data types, and log for metadata storage. Making metadata union merge well is tricky, but I have a design I'm happy with, that will allow distributed changes to metadata. 
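The subdirectory-embedding scheme described above (converting dir/subdir/file.foo to file(dir)(subdir).foo) can be sketched in a few lines. This is an illustration, not git-annex's actual implementation; the real code also has to combine this with the view's metadata-derived path components:

```haskell
import System.FilePath (splitDirectories, splitExtension)

-- Sketch: embed a file's original subdirectories into its basename, so
-- "dir/subdir/file.foo" becomes "file(dir)(subdir).foo". Files with the
-- same basename but different original locations then get distinct
-- names in a view branch. Assumes a non-empty relative path.
embedDirs :: FilePath -> FilePath
embedDirs p = base ++ concatMap (\d -> "(" ++ d ++ ")") dirs ++ ext
  where
    parts = splitDirectories p
    dirs = init parts
    (base, ext) = splitExtension (last parts)
```

A file at the top level passes through unchanged, since there are no directories to embed.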
Finished up the day with a git annex metadata command to get/set metadata for a file. This is all the groundwork needed to begin experimenting with generating git branches that display different metadata-driven views of annexed files. There's a new design document for letting git-annex store arbitrary metadata. The really neat thing about this is the user can check out only files matching the tags or values they care about, and get an automatically structured file tree layout that can be dynamically filtered. It's going to be awesome! See the metadata design page. In the meantime, spent most of today working on Windows. Very good progress, possibly motivated by wanting to get it over with so I can spend some time this month on the above.

- webapp can make box.com and S3 remotes. This just involved fixing a hack where the webapp set environment variables to communicate creds to initremote. Can't change environment on Windows (or I don't know how to).
- webapp can make repos on removable drives.
- git annex assistant --stop works, although this is not likely to really be useful.
- The source tree now has 0 func = error "Windows TODO" type stubbed-out functions to trip over.

Pushed out the new release. This is the first one where I consider the git-annex command line beta quality on Windows. Did some testing of the webapp on Windows, trying out every part of the UI. I now have eleven todo items involving the webapp listed in windows support. Most of them don't look too bad to fix. Last night I tracked down and fixed a bug in the DAV library that has been affecting WebDAV remotes. I've been deploying the fix for that today, including to the android and arm autobuilders. While I finished a clean reinstall of the android autobuilder, I ran into problems getting a clean reinstall of the arm autobuilder (some type mismatch error building yesod-core), so manually fixed its DAV for now. The WebDAV fix and other recent fixes make me want to make a release soon, probably Monday.
ObWindows: Fixed git-annex to not crash when run on Windows in a git repository that has a remote with a unix-style path like "/foo/bar". Seems that not everything agrees on whether such a path is absolute; even sometimes different parts of the same library disagree!

    import System.FilePath.Windows

    prop_windows_is_sane :: Bool
    prop_windows_is_sane = isAbsolute upath || ("C:\\STUFF" </> upath /= upath)
      where upath = "/foo/bar"

Perhaps more interestingly, I've been helping dxtrish port git-annex to OpenBSD and it seems most of the way there. git-annex has been using MissingH's absNormPath forever, but that's not very maintained and doesn't work on Windows. I've been wanting to get rid of it for some time, and finally did today, writing a simplifyPath that does the things git-annex needs and will work with all the Windows filename craziness, and takes advantage of the more modern System.FilePath to be quite a simple piece of code. A QuickCheck test found no important divergences from absNormPath. A good first step to making git-annex not depend on MissingH at all. That fixed one last Windows bug that was disabled in the test suite: git annex add ..\subdir\file will now work. I am re-installing the Android autobuilder for 2 reasons: I noticed I had accidentally lost a patch to make a library use the Android SSL cert directory, and also a new version of GHC is very near to release and so it makes sense to update. Down to 38 messages in the backlog. Added a new feature that started out with me wanting a way to undo a git-annex drop, but turned into something rather more powerful. The --in option can now be told to match files that were in a repository at some point in the past. For example, git annex get --in=here@{yesterday} will get any files that have been dropped over the past day. While git-annex's location tracking info is stored in git and thus versioned, very little of it makes use of past versions of the location tracking info (only git annex log).
I'm happy to have finally found a use for it! ObWindows porting: Fixed a bug in the symlink calculation code. Sounds simple; took 2 hours! Also various bug triage; updated git version on OSX; forwarded bug about DAV-0.6 being broken upstream; fixed a bug with initremote in encryption=pubkey mode. Backlog is 65 messages. Today's work was sponsored by Brock Spratlen. A more test driven day than usual. Yesterday I noticed a test case was failing on Windows in a way not related to what it was intended to test, and fixed the test case to not fail.. But knew I'd need to get to the bottom of what broke it eventually. Digging into that today, I eventually (after rather a long time stuck) determined the bug involved automatic conflict resolution, but only happened on systems without symlink support. This let me reproduce it on FAT outside Windows and do some fast TDD iterations in a much less unwieldy environment and fix the bug. While I've not been blogging over what amounted to a long weekend, looking over the changelog, there were quite a few things done. Mostly various improvements and fixes to git annex sync --content. Today, got the test suite to pass on Windows 100% again. With yesterday's release, I'm pretty much done with the month's work. Since there was no particular goal this month, it's been a grab bag of features and bugfixes. Quite a lot of them in this last release. I'll be away the next couple of days.. But got a start today on the next part of the roadmap, which is planned to be all about Windows and Android porting. Today, it was all about lock files, mostly on Windows. Lock files on Windows are horrific. I especially like that programs that want to open a file, for any reason, are encouraged in the official documentation to retry repeatedly if it fails, because some other random program, like a virus checker, might have opened the file first. Turns out Windows does support a shared file read mode.
This was just barely enough for me to implement both shared and exclusive file locking a-la-flock. Couldn't avoid a busy wait in a few places that block on a lock. Luckily, these are few, and the chances the lock will be taken for a long time are small. (I did think about trying to watch the file for close events and detect when the lock was released that way, but it seemed much too complicated and hard to avoid races.) Also, Windows only seems to support mandatory locks, while all locking in git-annex needs to be advisory locks. Ie, git-annex's locking shouldn't prevent a program from opening an annexed file! To work around that, I am using dedicated lock files on Windows. Also switched direct mode's annexed object locking to use dedicated lock files. AFAICS, this was pretty well broken in direct mode before. A big missing piece of the assistant is doing something about the content of old versions of files, and deleted files. In direct mode, editing or deleting a file necessarily loses its content from the local repository, but the content can still hang around in other repositories. So, the assistant needs to do something about that to avoid eating up disk space unnecessarily. I built on recent work, that lets preferred content expressions be matched against keys with no associated file. This means that I can run unused keys through all the machinery in the assistant that handles file transfers, and they'll end up being moved to whatever repository wants them. To control which repositories do want to retain unused files, and which not, I added an "unused" keyword to preferred content expressions. Client repositories and transfer repositories do not want to retain unused files, but backup etc repos do. One nice thing about this unused preferred content implementation is that it doesn't slow down normal matching of preferred content expressions at all. Can you guess why not?
See 4b55afe9e92c045d72b78747021e15e8dfc16416. So, the assistant will run git annex unused on a daily basis, and cause unused files to flow to repositories that want them. But what if no repositories do? To guard against filling up the local disk, there's a annex.expireunused configuration setting, that can cause old unused files to be deleted by the assistant after a number of days. I made the assistant check if there seem to be a lot of unused files piling up. (1000+, or 10% of disk used by them, or more space taken by unused files than is free.) If so, it'll pop up an alert to nudge the user to configure annex.expireunused. Still need to build the UI to configure that, and test all of this. Today's work was sponsored by Samuel Tardieu. Worked on cleaning up and reorganizing all the code that handles numcopies settings. Much nicer now. Fixed some bugs. As expected, making the preferred content numcopies check look at .gitattributes slows it down significantly. So, exposed both the slow and accurate check and a faster version that ignores .gitattributes. Also worked on the test suite, removing dependencies between tests. This will let tasty-rerun be used later to run only previously failing tests. In order to remove some hackishness in git annex sync --content, I finally fixed a bad design decision I made back at the very beginning (before I really knew haskell) when I built the command seek code, which had led to a kind of inversion of control. This took most of a night, but it made a lot of code in git-annex clearer, and it makes the command seeking code much more flexible in what it can do. Some of the oldest, and worst code in git-annex was removed in the process. Also, I've been reworking the numcopies configuration, to allow for a preferred content numcopies check. That will let the assistant, as well as git annex sync --content proactively make copies when needed in order to satisfy numcopies.
As part of this, git config annex.numcopies is deprecated, and there's a new git annex numcopies N command that sets the numcopies value that will be used by any clone of a repository. I got the preferred content checking of numcopies working too. However, I am unsure if checking for per-file .gitattributes annex.numcopies settings will make preferred content expressions too slow, so I have left that out for now. Today's work was sponsored by Josh Taylor. Spent the day building this new feature, which makes git annex sync --content do the same synchronization of file contents (to satisfy preferred content settings) that the assistant does. The result has not been tested a lot yet, but seems to work well. Activity has been a bit low again this week. It seems to make sense to do weekly releases currently (rather than bi-monthly), and Thursday's release had only one new feature (Tahoe LAFS) and a bunch of bug fixes. Looks like git-annex will get back into Debian testing soon, after various fixes to make it build on all architectures again, and then the backport can be updated again too. I have been struggling with a problem with the OSX builds, which fail with a SIGKILL on some machines. It seems that homebrew likes to aggressively optimise things it builds, and while I have had some success with its --build-bottle option, something in the gnutls stack used for XMPP is still over-optimised. Waiting to hear back from Kevin on cleaning up some optimised system libraries on the OSX host I use. (Is there some way to make a clean chroot on OSX that can be accessed by a non-root user?) Today I did some minor work involving the --json switch, and also a small change (well, under 300 line diff) allowing --all to be mixed with options like --copies and --in. Fixed a bug that one or two people had mentioned years ago, but I was never able to reproduce myself or get anyone to reproduce in a useful way.
It caused log files that were supposed to be committed to the git-annex branch to end up in master. Turned out to involve weird stuff when the environment contains two different settings for a single variable. So was easily fixed at last. (I'm pretty sure the code would have never had this bug if Data.AssocList was not buried inside an xml library, which rather discourages using it when dealing with the environment.) Also worked on, and hopefully fixed, another OSX cpu optimisations problem. This one involving shared libraries that git-annex uses for XMPP. Also made the assistant detect corrupt .git/annex/index files on startup and remove them. It was already able to recover from corrupt .git/index files. Today's work was sponsored by David Wagner. If you've been keeping an eye on the roadmap, you'll have seen that xmpp security keeps being pushed back. This was because it's a hard and annoying problem requiring custom crypto and with an ugly key validation problem built into it too. I've now removed it from the roadmap entirely, replacing it with a telehash design. I'm excited by the possibilities of using telehash with git-annex. It seems it would be quite easy to make it significantly more peer-to-peer and flexible. The only issue is that telehash is still under heavy development and the C implementation is not even usable yet.. (I'll probably end up writing Haskell bindings to that.) So I've pushed it down the roadmap to at least March. Spent the rest of the day making some minor improvements to external special remote protocol and doing some other minor bug fixes and backlog catch up. My backlog has exploded to nearly 50 messages remaining. Today's work was sponsored by Chad Horohoe. Been on reduced activity the past several days. I did spend a full day somewhere in there building the Tahoe LAFS special remote. Also, Tobias has finished updating his full suite of external special remotes to use the new interface! 
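The environment bug described above comes down to duplicate variable entries: the environment is just an association list, and different consumers may pick different entries for the same name. A sketch of the failure mode and one way to normalize it (`dedupEnv` is illustrative, not git-annex's actual fix):

```haskell
import Data.Function (on)
import Data.List (nubBy)

-- An environment with two entries for the same variable. `lookup` sees
-- the first entry, but code that folds over the whole list (or a child
-- process inheriting it) may effectively see the later one.
badEnv :: [(String, String)]
badEnv = [("GIT_DIR", ".git"), ("PATH", "/usr/bin"), ("GIT_DIR", "elsewhere")]

-- Keep only the first occurrence of each variable, matching what
-- `lookup` would have returned, so all consumers agree.
dedupEnv :: [(String, String)] -> [(String, String)]
dedupEnv = nubBy ((==) `on` fst)
```

nubBy keeps the first of any duplicates, so this agrees with lookup's behavior.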
Worked on closing up the fundraising campaign today (long overdue). This included adding a new wall-o-names to thanks. Taught the assistant to stop reusing an existing git annex transferkeys process after it detects a network connection change. I don't think this is a complete solution to what to do about long-duration network connections in remotes. For one thing a remote could take a long time to time out when the network is disconnected, and block other transfers (eg to local drives) in the meantime. But at least if a remote loses its network connection and does not try to reconnect on its own, and so is continually failing, this will get it back into a working state eventually. Also, fixed a problem with the OSX Mavericks build, it seems that the versions of wget and coreutils stuff that I was including in it were built by homebrew with full optimisations turned on, so didn't work on some CPUs. Replaced those with portable builds. Spent ages tracking down a memory leak in the assistant that showed up when a lot of files were added. Turned out to be a standard haskell laziness induced problem, fixed by adding strictness annotations. Actually there were several of them, that leaked at different rates. Eventually, I seem to have gotten them all fixed. (Memory graphs: before, leakbefore.png; after, leakafter.png.) Also fixed a bug in git annex add when the disk was completely full. In that situation, it could sometimes move the file from the work tree to .git/annex/objects and fail to put the symlink in place. Yesterday, added per-remote, per-key state storage. This is exported via the external special remote protocol, and I expect to use it at least for Tahoe-LAFS. Also, made the assistant write ssh config files with better permissions, so ssh won't refuse to use them. (The only case I know of where that happened was on Windows.) Today, made addurl and importfeed honor annex.diskreserve.
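The laziness-induced leak mentioned above is a classic Haskell pattern: an accumulator that is never forced piles up one thunk per file processed. A generic illustration (not the assistant's actual code) of the fix via bang patterns:

```haskell
{-# LANGUAGE BangPatterns #-}

-- Lazy accumulator: builds a chain of unevaluated (+) thunks, one per
-- list element, before anything is reduced -- a space leak that grows
-- with the input.
sumLazy :: [Int] -> Int
sumLazy = go 0
  where
    go acc (x:xs) = go (acc + x) xs
    go acc [] = acc

-- Strict accumulator: the bang pattern forces acc at each step, so the
-- fold runs in constant space. This is the kind of strictness
-- annotation that fixes such leaks.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc (x:xs) = go (acc + x) xs
    go !acc [] = acc
```

Both compute the same result; only their space behavior differs, which is why such leaks are easy to miss until a profile is taken.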
Found out about this the hard way, when an importfeed cron job filled up my server with youtube videos. I should probably also make import honor annex.diskreserve.

I've been working, so far inconclusively, on making the assistant deal with remotes that might open a long duration network connection. Problem being that if the connection is lost, and the remote is not smart enough to reconnect, all further use of it could fail. In a restarttransferrer branch, I have made the assistant start separate transferkeys processes for each remote. So if a remote starts to fail, the assistant can stop its transferkeys process, and restart it, solving the problem. But, if a resource needed for a remote is not available, this degrades to every transfer attempt to that remote restarting it. So I don't know if this is the right approach. Other approaches being considered include asking that implementors of external special remotes deal with reconnection themselves (Tobias, do you deal with this in your remotes?), or making the assistant only restart failing remotes after it detects there's been a network connection change.

Implemented read-only remotes. This may not cover every use case around wanting to clone a repository and use git-annex without leaking the existence of your clone back to it, but I think it hits most of them in a quite easy way, and allows for some potentially interesting stuff like partitioned networks of git-annex repositories.

Zooko and I have been talking things over (for rather too long), and I think have now agreed on how a more advanced git-annex Tahoe-LAFS special remote should work. This includes storing the tahoe file-caps in the git-annex branch. So, I really need to add that per-special-remote data storage feature I've been thinking about. Various work on Debian, OSX, and Windows stuff. Mostly uninteresting, but took most of the day. Made git annex mirror --all work.
I can see why I left it out; when the mirroring wants to drop an object, in --all mode it doesn't have an associated file in the tree, so it cannot look at the annex.numcopies in gitattributes. Same reason why git annex drop --all is not implemented. But decided to go ahead and only use other numcopies configuration for mirroring. Added GETWANTED and SETWANTED to the external special remote protocol, and that is as far as I want to go on adding git-annex plumbing stuff to the protocol. I expect Tobias will release a boatload of special remotes updated to the new protocol soon, which seems to prove it has everything that could reasonably be needed.

There is a nice public git-annex repository containing a growing collection of tech conference videos. Did some design work on ?untracked remotes, which I think will turn out to be read-only remotes. Being able to clone a repository and use git-annex in the clone without anything leaking back upstream is often desirable when using a public repository, or a repository with many users. Worked on bug report and forum backlog (24 messages left), and made a few bug fixes. The main one was a fix for a Windows-specific direct mode merge bug.

This month didn't go entirely to plan. I had not expected to work on the Windows assistant and webapp and get it so close to fully working. Nor had I expected to spend time and make significant progress on porting git-annex to Linux -- particularly to embedded NAS devices! I had hoped to encourage some others to develop git-annex, but only had one bite from a student and it didn't work out. Meanwhile, automatically rewarding committers with bitcoin is an interesting alternative approach to possibly motivating contributors, and I would like to set that up, but the software is new and I haven't had time yet. The only thing that went exactly as planned was the external special remote implementation.
A special surprise this month is that I have started hearing privately from several institutions that are starting to use git-annex in interesting ways. Hope I can share details of some of that in 2014!

Fixed a bug that could leave a direct mode repository stuck at annex.version 3. As part of that, v3 indirect mode repositories will be automatically updated to v5. There's no actual change in that upgrade, it just simplifies things to have only one supported annex.version. Added youtube playlist support to git-annex. Seems I had almost all the pieces needed, and didn't know it. Only about a dozen lines of code! Added PREPARE-FAILURE support to the external special remote interface. After I found the cable my kitten stole (her apport level is high), fixed file transfers to/from Android. This broke because git-annex assistant tries to use ionice, if it's in PATH, and Android's ionice is not suitable. It could probably include ionice in the busybox build and use that one, but I wanted a quick fix for this before the upcoming release.

The external special remote interface is now done, and tested working great! Now we just need all the old hook special remotes to be converted to use it. I punted on per-special-remote, per-key state storage in the git-annex branch for now. If I find an example of a remote that needs it (Tahoe-LAFS may, but still TBD), I'll add it. Added support for external special remotes to use the same credential storage that git-annex uses for S3 and WebDAV. The main improvement I'd like to make is to add an interface for transferring files where the file is streamed to/from the external special remote, rather than using temp files as it does now. This would be more efficient (sometimes) and make the progress bars better. But it needs to either use a named pipe, which is complicated and non-portable, or serialize the file's contents over a currently line-based protocol, which would be a pain. Anyway, this can be added later, the protocol is extensible.
Built most of the external special remote today. While I've written 600 lines of code for this, and think it's probably working, and complete (except for a couple of features), all I know is that it compiles. I've also written an example external special remote program in shell script, so the next step is to put the two together and see how it works. I also hope that some people who have built hook special remotes in the past will update them to the new external special remote interface, which is quite a lot better. Today's work was sponsored by Justine Lam.

Only did a few hours today, getting started on implementing the external special remote protocol. Mostly this involved writing down types for the various messages, and code to parse them. I'm very happy with how the parsing turned out; nearly all the work is handled by the data types and type classes, and so only one line of very simple code is needed to parse each message:

    instance Receivable Response where
        parseCommand "PREPARE-SUCCESS" = parse0 PREPARE_SUCCESS
        parseCommand "TRANSFER-SUCCESS" = parse2 TRANSFER_SUCCESS
        parseCommand "TRANSFER-FAILURE" = parse3 TRANSFER_FAILURE

An especially nice part of this implementation is that it knows exactly how many parameters each message should have (and their types of course), and so can both reject invalid messages, and avoid ambiguity in tokenizing the parameters. For example, the 3rd parameter of TRANSFER-FAILURE is an error message, and as it's the last parameter, it can contain multiple words.

    *Remote.External> parseMessage "TRANSFER-FAILURE STORE SHA1--foo doesn't work on Christmas" :: Maybe Response
    Just (TRANSFER_FAILURE Upload (Key {keyName = "foo", keyBackendName = "SHA1", keySize = Nothing, keyMtime = Nothing}) "doesn't work on Christmas")

That's the easy groundwork for external special remotes, done.

Resurfaced today to fix some problems with the Linux standalone builds in the Solstice release.
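For illustration, here is a hypothetical sketch of how a fixed-arity parser in this style can let only the final parameter contain spaces. The names and types here are mine, not the actual Remote.External code:

```haskell
-- parse2 expects exactly two parameters: the first is a single word,
-- and the second, being last, gets the entire rest of the line.
parse2 :: (String -> String -> a) -> String -> Maybe a
parse2 mk s = case break (== ' ') s of
    (p1, ' ':rest) | not (null p1) -> Just (mk p1 rest)
    _ -> Nothing

main :: IO ()
main = do
    print (parse2 (,) "STORE doesn't work on Christmas")
    -- Just ("STORE","doesn't work on Christmas")
    print (parse2 (,) "STORE")  -- Nothing: wrong number of parameters
```

In the real protocol code the per-parameter types (Key, Direction, and so on) are parsed too, which is how malformed or wrong-arity messages get rejected as Nothing.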
The worst of these prevented the amd64 build from running on some systems, and that build has been updated. The other problems all involved the binary shimming, and were less serious. As part of that work, replaced the hacky shell script that handled the linux library copying and binary shimming with a haskell program. Also worked on some Windows bugs, and fixed a typo in the test suite. Got my own little present: haskell-tasty finally got out of Incoming, so the next Debian package build will once again include the test suite. Got the arm webapp to build! (I have not tried to run it.) The build process for this is quite elaborate; 2 chroots, one amd64 and one armel, with the same versions of everything installed in each, and git-annex is built in the first to get the info the EvilSplicer needs to build it in the second. Fixed a nasty bug in the assistant on OSX, where at startup it would follow symlinks in the repository that pointed to directories outside the repository, and add the files found there. Didn't cause data loss itself (in direct mode the assistant doesn't touch the files), but certainly confusingly breaks things and makes it easy to shoot your foot off. I will be moving up the next scheduled release because of this bug, probably to Saturday. Looped the git developers in on a problem with git failing on some kernels due to RLIMIT_NOFILE not working. Looks like git will get more robust and this should make the armel build work on even more embedded devices. Today's work was sponsored by Johan Herland. Fixed a few problems in the armel build, and it's been confirmed to work on Raspberry Pi and Synology NAS. Since none of the fixes were specific to those platforms, it will probably work anywhere the kernel is new enough. That covers 9+% of the missing ports in the user survey! Thought through the possible issues with the assistant on Windows not being able to use lsof. I've convinced myself it's probably safe. 
(In fact, it might be safe to stop checking with lsof when using the assistant in direct mode entirely.) Also did some testing of some specific interesting circumstances (including 2 concurrent writers to a single file).

I've been working on adding the webapp to the armel build. This can mostly reuse the patches and EvilSplicer developed for Android, but it's taking some babysitting of the build to get yesod etc installed for various reasons. Will be surprised if I don't get there tomorrow. One other thing: I notice that the domain is up and running. This was set up by Subito, who offered me the domain, but I suggested he keep it and set up a pretty start page that points new users at the relevant parts of the wiki. I think he's done a good job with that!

Made the Linux standalone builds more self-contained; now they include their own linker and glibc, and ugly hacks to make them be used when running the included programs. This should make them more portable to older systems. Set up an arm autobuilder. This autobuilder runs in a Debian armel chroot, using qemu-user-static (with a patch to make it support some syscalls ghc uses). No webapp yet; waiting on feedback of how well it works. I hope this build will be usable on eg, Synology NAS and Raspberry Pi. Also worked on improving the assistant's batching of commits during the startup scan. And some other followups and bug triage. Today's work was sponsored by Hamish Coleman.

Made some improvements to git-annex's plumbing level commands today. Added new lookupkey and examinekey commands. Also expanded the things that git annex find can report about files. Among other things, the elusive hash directory locations can now be looked up, which IIRC a few people have asked for a way to do. Also did some work on the linux standalone tarball and OSX app. Both now include man pages, and it's also now possible to just unpack it and symlink git-annex into ~/bin or similar to add it to PATH.
Spent most of today catching up with a week's worth of traffic. Fixed 2 bugs. Message backlog is 23 messages. I've switched over to mostly working on Windows porting in the evenings when bored, with days spent on other git-annex stuff. So, getting back to the planned roadmap for this month..

Set up a tip4commit for git-annex. Anyone who gets a commit merged in will receive a currently small amount of bitcoin. This would almost be a good way to encourage more committers other than me, by putting say, half the money I have earmarked for that into the tip jar. The problem is, I make too many commits myself, so most of the money would be quickly tipped back out to me! I have gotten in touch with the tip4commit people, and hope they will give me a way to blacklist myself from being tipped.

Designed an external special remote protocol that seems pretty good for first-class special remotes implemented outside git-annex. It's moderately complicated on the git-annex side to make it simple and flexible on the special remote side, but I estimate only a few days to build it once I have the design finalized.

Windows: Tested the autobuilt Windows webapp. It works! Sorted out some issues with the bundled libraries. Reworked how git annex transferkeys communicates, to make it easier to port it to Windows. Narrowly managed to avoid needing to write Haskell bindings to Windows's equivalent of pipe(2). I think the Windows assistant can transfer keys now, and the webapp UI may even be able to be used to stop transfers. Needs testing. Investigated what I'll need to get XMPP working on Windows. Most of the libs are available in cygwin, but gsasl would need to be built from source. Also some kind of space-in-path problem is preventing cabal installing some of the necessary dependencies.

Got the Windows autobuilder building the webapp. Have not tried that build yet myself, but I have high hopes it will work.
Made other Windows improvements, including making the installer write a start menu entry file, and adding free disk space checking. Spent rest of the day improving git repair code on a real-world corrupted repository.

Fixed up a few problems with the Windows webapp, and it's now completely usable, using any browser other than MSIE. While there are missing features in the Windows port, all the UI for the features it does have seems to just work in the webapp. Fixed an ugly problem with Firefox, which turned out to have been introduced a while ago by a workaround for an ugly problem in Chrome. Web browsers are so wonderful, until they're crap. Think I've fixed the bug in the EvilLinker that was causing it to hang on the autobuilder, but still don't have a Windows autobuild with the webapp just yet. Also improved git annex import some more, and worked on a bug in git repository repair, which I will need to spend some more time on tomorrow.

I have seen the glory of the webapp running on Windows. One of the warp developers pointed me in the right direction and I developed a fix for the recv bug. My Windows and MSIE are old and fall over on some of the javascript, so it's not glorious enough for a screenshot. But large chunks of it do seem to work.

Windows webapp now starts, opens a web browser, and ... crashes. This is a bug in warp or a deep level of the stack. I know that yesod apps have run on Windows before, so apparently something has changed and introduced this problem. Also have a problem with the autobuilder; the EvilSplicer or something it runs is locking up on that system for reasons not yet determined. Looks like I will need to wait a bit longer for the Windows webapp, but I could keep working on porting the assistant in the meantime. The most important thing that I need to port is how to check if a file is being written to at the same time the assistant adds it to the repository. No real lsof equivalent on Windows.
I might be able to do something with exclusive locking to detect if there's a writer (but this would also block using the file while it was being added). Or I may be able to avoid the need for this check, at least in direct mode.

Android has the EvilSplicer, now Windows gets the EvilLinker. Fully automated, and truly horrible solution to the too long command line problem. Now when I run git annex webapp on Windows, it almost manages to open the web browser. At the same time, I worked with Yuri to upgrade the Windows autobuilder to a newer Haskell platform, which can install Yesod. I have not quite achieved a successful webapp build on the autobuilder, but it seems close. Here's a nice Haskell exercise for someone. I wrote this quick and dirty function in the EvilSplicer, but it's crying out for a generalized solution.

    {- Input contains something like
     - c:/program files/haskell platform/foo -LC:/Program Files/Haskell Platform/ -L...
     - and the *right* spaces must be escaped with \
     -
     - Argh.
     -}
    escapeDosPaths :: String -> String
    escapeDosPaths = replace "Program Files" "Program\\ Files"
        . replace "program files" "program\\ files"
        . replace "Haskell Platform" "Haskell\\ Platform"
        . replace "haskell platform" "haskell\\ platform"

Got the entire webapp to build on Windows. Compiling was easy. One line of code had to be #ifdefed out, and the whole rest of the webapp UI just built! Linking was epic. It seems that I really am running into a 32kb command line length limit, which causes the link command to fail on Windows. git-annex with all its bells and whistles enabled is just too big. Filed a ghc bug report, and got back a helpful response about using a @file response file to work around it. 6 hours of slogging through compiling dependencies and fighting with the toolchain later, I have managed to link git-annex with the webapp! The process is not automated yet.
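Taking up that exercise, one possible generalization (my sketch; not code that went into git-annex) escapes the spaces inside any of a list of known multi-word directory names, matching case-insensitively instead of hardcoding each capitalization:

```haskell
import Data.Char (isSpace, toLower)
import Data.List (isPrefixOf)

-- Scan the input; wherever one of the known directory names occurs
-- (ignoring case), escape its spaces with backslashes, leaving all
-- other spaces (the ones separating arguments) alone.
escapeKnownDirs :: [String] -> String -> String
escapeKnownDirs dirs = go
  where
    go [] = []
    go s@(c:cs) = case match s of
        Just n  -> escapeSpaces (take n s) ++ go (drop n s)
        Nothing -> c : go cs
    match s = case [length d | d <- dirs, lower d `isPrefixOf` lower s] of
        (n:_) -> Just n
        []    -> Nothing
    lower = map toLower
    escapeSpaces = concatMap (\ch -> if isSpace ch then "\\ " else [ch])

main :: IO ()
main = putStrLn (escapeKnownDirs ["Program Files", "Haskell Platform"]
    "c:/program files/foo -LC:/Program Files/Haskell Platform/")
```

Note that the argument-separating space after foo stays unescaped; only the spaces inside recognized directory names get the backslash treatment, which was the whole difficulty with the original input.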
While I was able to automate passing gcc a @file with its parameters, gcc then calls collect2, which calls ld, and both are passed too many parameters. I have not found a way to get gcc to generate a response file. So I did it manually. Urgh. Also, it crashes on startup with getAddrInfo failure. But some more porting is to be expected, now that the Windows webapp links.

Had planned to spend all day not working on git-annex and instead getting caught up on conference videos. However, got a little bit multitasky while watching those, and started investigating why, last time I worked on the Windows port, git-annex was failing to link. A good thing to do while watching conference videos since it involved lots of test builds with different flags. Eventually solved it. Building w/o WebDAV avoids crashing the compiler anyhow. Thought I'd try the resulting binary and see if perhaps I had forgotten to use the threaded RTS when I was running ghc by hand to link it last time, and perhaps that was why threads seemed to have hung back then. It was. This became clear when I saw a "deadlocked indefinitely in MVar" error message, which tells me that it's at least using the threaded RTS. So, I fixed that, and a few other minor things, and ran this command in a DOS prompt box:

    git annex watch --force --foreground --debug

And I've been making changes to files in that repository, and amazingly, the watcher is noticing them, and committing them! So, I was almost all the way to a Windows port of the watcher a month ago, and didn't know. It has some rough edges, including not doing anything to check if a newly created file is open for write when adding it, and getting the full assistant ported will be more work, and the full webapp may be a whole other set of problems, but this is a quite nice milestone for the Windows port.
While I am going to leave it up through the end of the year, I went over the data today to see what interesting preliminary conclusions I can draw. 11% build git-annex from source. More than I would have guessed. 20% use the prebuilt versions from the git-annex website. This is a number to keep in mind later, when more people have upgraded to the last release, which checks for upgrades. I can run some stats on the number of upgrade checks I receive, and multiplying that by 5 would give a good approximation of the total number of computers running git-annex. I'm surprised to see so many more Linux (79%) than OSX (15%) users. Also surprising is there are more Windows (2%) than Android (1%) users. (Android numbers may be artificially low since many users will use it in addition to one of the other OSes.) Android and Windows unsurprisingly lead in ports requested, but the Synology NAS is a surprise runner up, with 5% (more than IOS). In theory it would not be too hard to make a standalone arm tarball, which could be used on such a device, although IIRC the Synology had problems with a too old linker and libc. It would help if I could make the standalone tarball not depend on the system linker at all. A susprising number (3%) want some kind of port the the Raspberry Pi, which is weird because I'd think they'd just be using Raspbian on it.. but a standalone arm tarball would also cover that use case. A minimum of 1664 (probably closer to 2000) git annex repositories are being used by the 248 people who answered that question. Around 7 repositories per person average, which is either one repository checked out on 7 different machines or two repositories on 3 machines, etc. At least 143 terabytes of data are being stored in git-annex. This does not count redundant data. (It also excludes several hundred terabytes from one instituion that I don't think are fully online yet.) Average user has more than half a terabyte of data. 8% of users store scientific data in git-annex! 
* A couple of users are using it for game development assets, and 5% of users are using it for some form of business data.
* Only 10% of users are sharing a git-annex repository with at least one other person. 27% use it by themselves, but want to get others using their repositories. This probably points to it needing to be easier for nontechnical users.
* 61% of git-annex users have good or very good knowledge of git. This question intentionally used the same wording as the general git user survey, so the results can be compared. The curves have somewhat different shapes, with git-annex's users being biased more toward the higher knowledge levels than git's users.
* The question about how happy users are also used the same wording. While 74% of users are happy with git-annex, 94% are similarly happy with git, and while the median git-annex user is happy, the median git user is very happy. The 10% who wrote in "very enthusiastic, but still often bitten by quirks (so not very happy yet, but with lots of confidence in the potential)" might have thrown off this comparison some, but they certainly made their point!
* 3% of respondents say that a bug is preventing them from using git-annex, but that they have not reported the bug yet. Frustrating! 1% say that a bug that's been reported already is blocking them.
* 18% wrote in that they need the webapp to support using github (etc) as a central server. I've been moving in that direction with the encryption and some other changes, so it's probably time to make a UI for that.
* 12% want more control over which files are stored locally when using the assistant.
* A really surprising thing happened when someone wrote in that I should work on "not needing twice disk space of repo in direct mode", and 5% of people then picked this choice. This is some kind of documentation problem, because of course git-annex never needs 2x disk space, whether using direct mode or not. That's one of its advantages over git!
Somewhere between 59 and 161 of the survey respondents use Debian. I can compare this with Debian popularity contest data which has 400 active installations and 1000 total installations, and make guesses about what fraction of all git-annex users have answered the survey. By making different assumptions I got guesses that varied by 2 orders of magnitude, so not worth bothering with. Explicitly asking how many people use each Linux distribution would be a good idea in next year's survey.

Main work today was fixing Android DNS lookups; git-annex was trying to use /etc/resolv.conf to look up SRV records for XMPP, and had to be changed to use a getprop command instead. Since I can't remember dealing with this before (not impossible I made some quick fix to the dns library before and lost it though), I'm wondering if XMPP was ever usable on Android before. Cannot remember. May work now, anyway... Still working through the Thanksgiving backlog. Around 55 messages to go.

Wrote hairy code to automatically fix up bad bare repositories created by recent versions of git-annex. Managed to do it with only 1 stat call overhead (per local repository). Will probably keep that code in git-annex for a year or so, despite the bug only being present for a few weeks, because the repositories that need to be fixed might be on removable drives that are rarely used. Various other small bug fixes, including dealing with box.com having changed their WebDAV endpoint url. Spent a while evaluating various key/value storage possibilities. ?incremental fsck should not use sticky bit has the details.

Made a release yesterday to fix a bug that made git-annex init in a bare repository set core.bare=false. This bug only affected git-annex 5; it was introduced when building the direct mode guard. Currently recovering from it is a manual (pretty easy) process. Perhaps I should automate that, but I mostly wanted to get a fix out before too many people encountered the bug.
Today, I made the assistant run batch jobs with ionice and nocache, when those commands are available. Also, when the assistant transfers files, that also runs as a batch job. Changed how git-annex does commits, avoiding using git commit in direct mode, since in some situations git commit (not with -a!) wants to read the contents of files in the work tree, which can be very slow.

My last day before Thanksgiving, getting caught up with some recent bug reports, and quite a rush to get a lot of fixes in. Adding to the fun, wintery weather means very limited power today. It was a very productive day, especially for Android, which hopefully has XMPP working again (at least it builds..), halved the size of the package, etc. Fixed a stupid bug in the automatic v5 upgrade code; annex.version was not being set to 5, and so every git annex command was actually re-running the upgrade. Fixed another bug I introduced last Friday, which the test suite luckily caught, that broke using some local remotes in direct mode. Tracked down a behavior that makes git annex sync quite slow on filesystems that don't support symlinks. I need to switch direct mode to not using git commit at all, and use plumbing to make commits there. Will probably work on this over the holiday.

Upgrades should be working on OSX Mavericks, Linux, and sort of on Android. This needs more testing, so I have temporarily made the daily builds think they are an older version than the last git-annex release. So when you install a daily build, and start the webapp, it should try to upgrade (really downgrade) to the last release. Tests appreciated. Looking over the whole upgrade code base, it took 700 lines of code to build the whole thing, of which 75 are platform specific (and mostly come down to just 3 or 4 shell commands). Not bad. Last night, added support for quvi 0.9, which has a completely changed command line interface from the 0.4 version.
Plan to spend tomorrow catching up on bug reports etc and then low activity for rest of the week.

Completely finished up with making the assistant detect when git-annex's binary has changed and handling the restart. It's a bit tricky because during an upgrade there can be two assistant daemons running at the same time, in the same repository. Although I disable the watcher of the old one first. Luckily, git-annex has long supported running multiple concurrent git-annex processes in the same repository. The surprisingly annoying part turned out to be how to make the webapp redirect the browser to the new url when it's upgraded. Particularly needed when automatic upgrades are enabled, since the user will not then be taking any action in the webapp that could result in a redirect. My solution to this feels like overkill; the webapp does ajax long polling until it gets a url, and then redirects to it. Had to write javascript code and ugh. But, that turned out to also be useful when manually restarting the webapp (removed some horrible old code that ran a shell script to do it before), and also when shutting the webapp down. Getting back to upgrades, I have the assistant downloading the upgrade, and running a hook action once the key is transferred. Now all I need is some platform-specific code to install it. Will probably be hairy, especially on OSX where I need to somehow unmount the old git-annex dmg and mount the new one, from within a program running on the old dmg. Today's work was sponsored by Evan Deaubl.

The difference picking the right type can make! Last night, I realized that where I had a distributionSha256sum :: String, I should instead use distributionKey :: Key. This means that when git-annex is eventually downloading an upgrade, it can treat it as just another Key being downloaded from the web. So the webapp will show that transfer along with all the rest, and I can leverage tons of code for a new purpose.
For example, it can simply fsck the key once it's downloaded to verify its checksum. Also, built a DistributionUpdate program, which I'll run to generate the info files for a new version. And since I keep git-annex releases in a git-annex repo, this too leverages a lot of git-annex modules, and ended up being just 60 easy lines of code. The upgrade notification code is tested and working now. And, I made the assistant detect when the git-annex program binary is replaced or modified. Used my existing DirWatcher code for that. The plan is to restart the assistant on upgrade, although I need to add some sanity checks (eg, reuse the lsof code) first. And yes, this will work even for apt-get upgrade! Today's work was sponsored by Paul Tötterman.

Still working on the git repair code. Improved the test suite, which found some more bugs, and so I've been running tests all day and occasionally going and fixing a bug in the repair code. The hardest part of repairing a git repo has turned out to be reliably determining which objects in it are broken. Bugs in git don't help (but the git devs are going to fix the one I reported).

But the interesting new thing today is that I added some upgrade alert code to the webapp. Ideally everyone would get git-annex and other software as part of an OS distribution, which would include its own upgrade system -- but the survey tells me that a quarter of installs are from the prebuilt binaries I distribute. So, those builds are going to be built with knowledge of an upgrade url, and will periodically download a small info file (over https) to see if a newer version is available, and show an alert. I think all that's working, though I have not yet put the info files in place and tested it. The actual upgrade process will be a manual download and reinstall, to start with, and then perhaps I'll automate it further, depending on how hard that is on the different platforms.
Pushed out a minor release of git-annex today, mostly to fix build problems on Debian. No strong reason to upgrade to it otherwise. Continued where I left off with the Git.Destroyer. Fixed quite a lot of edge cases where git repair failed due to things like a corrupted .git/HEAD file (this makes git think it's not in a git repository), corrupt git objects that have an unknown object type and so crash git hard, and an interesting failure mode where git fsck wants to allocate 116 GB of memory due to a corrupted object size header. Reported that last to the git list, as well as working around it. At the end of the day, I ran a test creating 10000 corrupt git repositories, and all of them were recovered! Any improvements will probably involve finding new ways to corrupt git repositories that my code can't think of.

Wrote some evil code you don't want to run today. Git.Destroyer randomly generates Damage, and applies it to a git repository, in a way that is reproducible -- applying the same Damage to clones of the same git repo will always yield the same result. This let me build a test harness for git-repair, which repeatedly clones, damages, and repairs a repository. And when it fails, I can just ask it to retry after fixing the bug and it'll re-run every attempt it's logged. This is already yielding improvements to the git-repair code. The first randomly constructed Damage that it failed to recover turned out to be a truncated index file that hid some other corrupted object files from being repaired.
[Damage Empty (FileSelector 1), Damage Empty (FileSelector 2), Damage Empty (FileSelector 3), Damage Reverse (FileSelector 3), Damage (ScrambleFileMode 3) (FileSelector 5), Damage Delete (FileSelector 9), Damage (PrependGarbage "\SOH\STX\ENQ\f\a\ACK\b\DLE\n") (FileSelector 9), Damage Empty (FileSelector 12), Damage (CorruptByte 11 25) (FileSelector 6), Damage Empty (FileSelector 5), Damage (ScrambleFileMode 4294967281) (FileSelector 14) ] I need to improve the ranges of files that it damages -- currently QuickCheck seems to only be selecting one of the first 20 or so files. Also, it's quite common that it will damage .git/config so badly that git thinks it's not a git repository anymore. I am not sure if that is something git-repair should try to deal with. Today's work was sponsored by the WikiMedia Foundation. Release today, right on bi-weekly schedule. Rather startled at the size of the changelog for this one; along with the direct mode guard, it adds support for OS X Mavericks, Android 4.3/4.4, and fixes numerous bugs. Posted another question in the survey. Spun off git-repair as an independent package from git-annex. Of course, most of the source code is shared with git-annex. I need to do something with libraries eventually.. Fixed two difficult bugs with direct mode. One happened (sometimes) when a file was deleted and replaced with a directory by the same name and then those changes were merged into a direct mode repository. The other problem was that direct mode did not prevent writes to .git/annex/objects the way that indirect mode does, so when a file in the repository was not currently present, writing to the dangling symlink would follow it and write into the object directory. Hmm, I was going to say that it's a pity that direct mode still has so many bugs being found and fixed, but the last real bug fix to direct mode was made last May! Instead, I probably have to thank Tim for being a very thorough tester.
Finished switching the test suite to use the tasty framework, and prepared tasty packages for Debian. The user survey is producing some interesting and useful results! Added two more polls: using with and blocking problems (There were some load issues so if you were unable to vote yesterday, try again..) Worked on getting the autobuilder for OS X Mavericks set up. Eventually succeeded, after patching a few packages to work around a cpp that thinks it should parse haskell files as if they're C code. Also, Jimmy has resuscitated the OS X Lion autobuilder. A not too bad bug in automatic merge conflict resolution has been reported, so I will need to dig into that tomorrow. Didn't feel up to it today, so instead have been spending the remaining time finishing up a branch that switches the test suite to use the tasty test framework. One of my goals for this month is to get a better sense of how git-annex is being used, how it's working out for people, and what areas need to be concentrated on. To start on that, I am doing the 2013 git-annex user survey, similar to the git user surveys. I will be adding some less general polls later (suggestions for topics appreciated!), but you can go vote in any or all of 10 polls now. Found a workaround for yesterday's Windows build problem. Seems that only cabal runs gcc in a way that fails, so ghc --make builds it successfully. However, the watcher doesn't quite work on Windows. It does get events when files are created, but it seems to then hang before it can add the file to git, or indeed finish printing out a debug log message about the event. This looks like it could be a problem with the threaded ghc runtime on Windows, or something like that. Main work today was improving the git repository repair to handle corrupt index files. The assistant can now start up, detect that the index file is corrupt, and regenerate it all automatically.
Annoyingly, the Android 4.3 fix breaks git-annex on Android 4.0 (probably through 4.2), so I now have two separate builds of the Android app. Worked on Windows porting today. I've managed to get the assistant and watcher (but not yet webapp) to build on Windows. The git annex transferrer interface needs POSIX stuff, and seems to be the main thing that will need porting for Windows for the assistant to work, besides of course file change detection. For that, I've hooked up Win32-notify. So the watcher might work on Windows. At least in theory. Problem is, while all the code builds ok, it fails to link: ghc.exe: could not execute: C:\Program Files (x86)\Haskell Platform\2012.4.0.0\lib/../mingw/bin/gcc.exe I wonder if this is a case of too many parameters being passed? This happens both on the autobuilder and on my laptop, so I'm stuck here. Oh well, I was not planning to work on this anyway until February... Finally found the root cause of the Android 4.3/4.4 trouble, and a fix is now in place! As a bonus, it looks like I've fixed a problem accessing the environment on Android that had been worked around in an ugly way before. Big thanks to my remote hands Michael Alan, Sören, and subito. All told they ran 19 separate tests to help me narrow down this tricky problem, often repeating long command lines on software keyboards. Been chipping away at my backlog of messages, and it's down to 23 items. Finally managed to get ghc to build with a newer version of the NDK. This might mean a solution to git-annex on Android 4.2. I need help with testing. Finished the direct mode guard, including the new git annex status command. Spent the rest of the day working on various bug fixes. One of them turned into rather a lot of work to make the webapp's UI better for git remotes that do not have an annex.uuid. Started by tracking down a strange bug that was apparently ubuntu-specific and caused git-annex branch changes to get committed to master.
Root cause turned out to be failing to recover from an exception. I'm kicking myself about that, because I remember looking at the code where the bug was at least twice before and thinking "hmm, should add exception handling here? nah..". Exceptions are horrible. Made a release with a fix for that and a few minor other accumulated changes since last Friday's release. The main point of this release is to fix building without the webapp (so it will propagate to Debian testing, etc). This release does not include the direct mode guard, so I'll have a few weeks until the next release to get that tested. Fixed the test suite in directguard. This branch is now nearly ready to merge to master, but one command that is badly needed in guarded direct mode is "git status". So I am planning to rename "git annex status" to "git annex info", and make "git annex status" display something similar to "git status". Also took half an hour and added optional EKG support to git-annex. This is a Haskell library that can add a terrific monitoring console web UI to any program in 2 lines of code. Here we can see the git-annex webapp using resources at startup, followed in a few seconds by the assistant's startup scan of the repository. BTW, Kevin tells me that the machine used to build git-annex for OSX is going to be upgraded to 10.9 soon. So, hopefully I'll be making autobuilds of that. I may have to stop the 10.8.2 autobuilds though. Today's work was sponsored by Protonet. Long, long day coding up the direct mode guard today. About 90% of the fun is dealing with receive.denyCurrentBranch not preventing pushes that change the current branch, now that core.bare is set in direct mode. My current solution to this involves using a special branch when using direct mode, which nothing will ever push to (hopefully). A much nicer solution would be to use an update hook to deny pushes of the current branch -- but there are filesystems where repos cannot have git hooks.
The test suite is falling over, but the directguard branch otherwise seems usable. Today's work was sponsored by Carlo Matteo Capocasa. I've been investigating ways to implement a direct mode guard. Preventing a stray git commit -a or git add doing bad things in a direct mode repository seems increasingly important. First, considered moving .git, so git won't know it's a git repository. This doesn't seem too hard to do, but there will certainly be unexpected places that assume .git is the directory name. I dislike it more and more as I think about it though, because it moves direct mode git-annex toward being entirely separate from git, and I don't want to write my own version control system. Nor do I want to complicate the git ecosystem with tools needing to know about git-annex to work in such a repository. So, I'm happy that one of the other ideas I tried today seems quite promising. Just set core.bare=true in a direct mode repository. This nicely blocks all git commands that operate on the working tree from doing anything, which is just what's needed in direct mode, since they don't know how to handle the direct mode files. But it lets all git commands and other tools that don't touch the working tree continue to be used. You can even run git log file in such a repository (surprisingly!) It also gives an easy out for anyone who really wants to use git commands that operate on the work tree of their direct mode repository, by just passing -c core.bare=false. And it's really easy to implement in git-annex too -- it can just notice if a repo has core.bare and annex.direct both set, and pass that parameter to every git command it runs. I should be able to get by with only modifying 2 functions to implement this. Low activity the past couple of days. Released a new version of git-annex yesterday. Today fixed three bugs (including a local pairing one that was pretty complicated) and worked on getting caught up with traffic.
Spent today reviewing my plans for the month and filling in a couple of missing pieces. Noticed that I had forgotten to make repository repair clean up any stale git locks, despite writing that code at the beginning of the month, and added that in. Made the webapp notice when a repository that is being used does not have any consistency checks configured, and encourage the user to set up checks. This happens when the assistant is started (for the local repository), and when removable drives containing repositories are plugged in. If the reminders are annoying, they can be disabled with a couple clicks. And I think that just about wraps up the month. (If I get a chance, I would still like to add recovery of git-remote-gcrypt encrypted git repositories.) My roadmap has next month dedicated to user-driven features and polishing and bugfixing. All command line stuff today.. Added --want-get and --want-drop, which can be used to test preferred content settings of a repository. For example git annex find --in . --want-drop will list the same files that git annex drop --auto would try to drop. (Also renamed git annex content to git annex wanted.) Finally laid to rest problems with git annex unannex when multiple files point to the same key. It's a lot slower, but I'll stop getting bug reports about that. Finally got the assistant to repair git repositories on removable drives, or other local repos. Mostly this happens entirely automatically, whatever data in the git repo on the drive has been corrupted can just be copied to it from ~/annex/.git. And, the assistant will launch a git fsck of such a repo whenever it fails to sync with it, so the user does not even need to schedule periodic fscks. Although it's still a good idea, since some git repository problems don't prevent syncing from happening. Watching git annex heal problems like this is quite cool! One thing I had to defer till later is repairing corrupted gcrypt repositories.
I don't see a way to do it without deleting all the objects in the gcrypt repository, and re-pushing everything. And even doing that is tricky, since the gcrypt-id needs to stay the same. Got well caught up on bug fixes and traffic. Backlog is down to 40. Made the assistant wait for a few seconds before doing the startup scan when it's autostarted, since the desktop is often busy starting up at that same time. Fixed an ugly bug with chunked webdav and directory special remotes that caused it to not write a "chunkcount" file when storing data, so it didn't think the data was present later. I was able to make it recover nicely from that mistake, by probing for what chunks are actually present. Several people turn out to have had problems with git annex sync not working because receive.denyNonFastForwards is enabled. I made the webapp not enable it when setting up a ssh repository, and I made git annex sync print out a hint about this when it's failed to push. (I don't think this problem affects the assistant's own syncing.) Made the assistant try to repair a damaged git repository without prompting. It will only prompt when it fails to fetch all the lost objects from remotes. Glad to see that others have managed to get git-annex to build on Mac OS X 10.9. Now I just need someone to offer up a ssh account on that OS, and I could set up an autobuilder for it. The webapp now fully handles repairing damage to the repository. Along with all the git repository repair stuff already built, I added additional repairs of the git-annex branch and git-annex's index file. That was pretty easy actually, since git-annex already handles merging git-annex branches that can sometimes be quite out of date. So when git repo repair has to throw away recent changes to the git-annex branch, it just effectively becomes out of date. Added a git annex fsck --fast run to ensure that the git-annex branch reflects the current state of the repository.
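The receive.denyNonFastForwards failure mode mentioned above is easy to reproduce with plain git (throwaway paths and identity below are made up; the hint git-annex prints amounts to suggesting the config change shown at the end):

```shell
set -e
# Reproduce a push blocked by receive.denyNonFastForwards, then unblock it.
tmp=$(mktemp -d)
git init -q --bare "$tmp/origin.git"
git -C "$tmp/origin.git" config receive.denyNonFastForwards true

git init -q "$tmp/work"
cd "$tmp/work"
git config user.email demo@example.com
git config user.name demo
git remote add origin "$tmp/origin.git"
echo one > file
git add file
git commit -qm one
git push -q origin HEAD

# Rewrite history so the next push is a non-fast-forward.
git commit -q --amend -m one-amended
if git push -qf origin HEAD 2>/dev/null; then
    echo "unexpected: forced push accepted"
else
    echo "forced push denied by receive.denyNonFastForwards"
fi

# Relaxing the setting on the remote lets the forced push through.
git -C "$tmp/origin.git" config receive.denyNonFastForwards false
git push -qf origin HEAD
echo "forced push accepted after config change"
```

This is why syncing (which can push rewritten branches) breaks on remotes with the setting enabled, and why the webapp now avoids turning it on when it sets up a ssh repository.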
When the webapp runs a repair, it first stops the assistant from committing new files. Once the repair is done, that's started back up, and it runs a startup scan, which is just what is needed in this situation; it will add any new files, as well as any old files that the git repository damage caused to be removed from the index. Also made git annex repair run the git repository repair code, for those with a more command-line bent. It can be used in non-git-annex repos too! So, I'm nearly ready to wrap up working on disaster recovery. Lots has been accomplished this month. And I have put off making a release for entirely too long! The big missing piece is repair of git remotes located on removable drives. I may make a release before adding that, but removable drives are probably where git repository corruption is most likely to occur, so I certainly need to add that. Today's work was sponsored by Scott Robinson. I think that git-recover-repository is ready now. Made it deal with the index file referencing corrupt objects. The best approach I could think of for that is to just remove those objects from the index, so the user can re-add files from their work tree after recovery. Now to integrate this git repository repair capability into the git-annex assistant. I decided to run git fsck as part of a scheduled repository consistency check. It may also make sense for the assistant to notice when things are going wrong, and suggest an immediate check. I've started on the webapp UI to run a repository repair when fsck detects problems. Solid day of working on repository recovery. Got git recover-repository --force working, which involves fixing up branches that refer to missing objects. Mostly straightforward traversal of git commits, trees, blobs, to find when a branch has a problem, and identify an old version of it that predates the missing object. (Can also find them in the reflog.)
The main complication turned out to be that git branch -D and git show-ref don't behave very well when the commit objects pointed to by refs are themselves missing. And git has no low-level plumbing that avoids falling over these problems, so I had to write it myself. Testing has turned up one unexpected problem: Git's index can itself refer to missing objects, and that will break future commits, etc. So I need to find a way to validate the index, and when it's got problems, either throw it out, or possibly recover some of the staged data from it. Built a git-recover-repository command today. So far it only does the detection and deletion of corrupt objects, and retrieves them from remotes when possible. No handling yet of missing objects that cannot be recovered from remotes. Here's a couple of sample runs where I do bad things to the git repository and it fixes them: joey@darkstar:~/tmp/git-annex>chmod 644 .git/objects/pack/* joey@darkstar:~/tmp/git-annex>echo > .git/objects/pack/pack-a1a770c1569ac6e2746f85573adc59477b96ebc5.pack joey@darkstar:~/tmp/git-annex>~/src/git-annex/git-recover-repository Running git fsck ... git fsck found a problem but no specific broken objects. Perhaps a corrupt pack file? Unpacking all pack files. fatal: early EOF Unpacking objects: 100% (148/148), done. Unpacking objects: 100% (354/354), done.. joey@darkstar:~/tmp/git-annex>chmod 644 .git/objects/00/0800742987b9f9c34caea512b413e627dd718e joey@darkstar:~/tmp/git-annex>echo > .git/objects/00/0800742987b9f9c34caea512b413e627dd718e joey@darkstar:~/tmp/git-annex>~/src/git-annex/git-recover-repository Running git fsck ... error: unable to unpack 000800742987b9f9c34caea512b413e627dd718e header error: inflateEnd: stream consistency error (no message) error: unable to unpack 000800742987b9f9c34caea512b413e627dd718e header error: inflateEnd: stream consistency error (no message) git fsck found 1 broken objects. Unpacking all pack files. removing 1 corrupt loose objects. Works great! 
I need to move this and git-union-merge out of the git-annex source tree sometime. Today's work was sponsored by Francois Marier. Goal for the rest of the month is to build automatic recovery from git repository corruption. Spent today investigating how to do it and came up with a fairly detailed design. It will have two parts, first to handle repository problems that can be fixed by fetching objects from remotes, and secondly to recover from problems where data never got sent to a remote, and has been lost. In either case, the assistant should be able to detect the problem and automatically recover well enough to keep running. Since this also affects non-git-annex repositories, it will also be available in a standalone git-recover-repository command. A long day of bugfixing. Split into two major parts. First I got back to a bug I filed in August to do with the assistant misbehaving when run in a subdirectory of a git repository, and did a nice type-driven fix of the underlying problem (that also found and fixed some other related bugs that would not normally occur). Then, spent 4 hours in Windows purgatory working around crazy path separator issues. Productive day, but I'm wiped out. Backlog down to 51. While I said I was done with fsck scheduling yesterday, I ended up adding one more feature to it today: Full anacron style scheduling. So a fsck can be scheduled to run once per week, or month, or year, and it'll run the fsck the next time it's available after that much time has passed. The nice thing about this is I didn't have to change Cronner at all to add this, just improved the Recurrence data type and the code that calculates when to run events. Rest of the day I've been catching up on some bug reports. The main bug I fixed caused git-annex on Android to hang when adding files. This turns out to be because it's using a new (unreleased) version of git, and git check-attr -z output format has changed in an incompatible way.
I am currently 70 messages behind, which includes some ugly looking bug reports, so I will probably continue with this over the next couple days. Fixed a lot of bugs in the assistant's fsck handling today, and merged it into master. There are some enhancements that could be added to it, including fscking ssh remotes via git-annex-shell and adding the ability to schedule events to run every 30 days instead of on a specific day of the month. But enough on this feature for now. Today's work was sponsored by Daniel Brockman. Built everything needed to run a fsck when a remote gets connected. Have not tested it; only testing is blocking merging the incrementalfsck branch now. Also updated the OSX and Android builds to use a new gpg release (denial of service security fix), and updated the Debian backport, and did a small amount of bug fixing. I need to do several more days of bug fixing once I get this incremental fsck feature wrapped up before moving on to recovery of corrupt git repositories. Last night, built this nice user interface for configuring periodic fscks: Rather happy that that whole UI needed only 140 lines of code to build. Though rather more work behind it, as seen in this blog.. Today I added some support to git-annex for smart fscking of remotes. So far only git repos on local drives, but this should get extended to git-annex-shell for ssh remotes. The assistant can also run periodic fscks of these. Still need to test that, and find a way to make a removable drive's fsck job run when the drive gets plugged in. That's where picking "any time" will be useful; it'll let you configure fscking of removable drives when they're available, as long as they have not been fscked too recently. Today's work was sponsored by Georg Bauer. Some neat stuff is coming up, but today was a pretty blah day for me. I did get the Cronner tested and working (only had a few little bugs).
But I got stuck for quite a while making the Cronner stop git-annex fsck processes it was running when their jobs get removed. I had some code to do this that worked when run standalone, but not when run from git-annex. After considerable head-scratching, I found out this was due to forkProcess masking async exceptions, which seems likely to be a bug. Luckily was able to work around it. Async exceptions continue to strike me as the worst part of the worst part of Haskell (the worst part being exceptions in general). Was more productive after that.. Got the assistant to automatically queue re-downloads of any files that fsck throws out due to having bad contents, and made the webapp display an alert while fscking is running, which will go to the page to configure fsck schedules. Now all I need to do is build the UI of that page. Lots of progress from yesterday's modest start of building data types for scheduling. Last night I wrote the hairy calendar code to calculate when next to run a scheduled event. (This is actually quite superior to cron, which checks every minute to see if it should run each event!) Today I built a "Cronner" thread that handles spawning threads to handle each scheduled event. It even notices when changes have been made to its schedule and stops/starts event threads appropriately. Everything is hooked up, building, and there's a good chance it works without too many bugs, but while I've tested all the pure code (mostly automatically with quickcheck properties), I have not run the Cronner thread at all. And there is some tricky stuff in there, like noticing that the machine was asleep past when it expected to wake up, and deciding if it should still run a scheduled event, or should wait until next time. So tomorrow we'll see.. Today's work was sponsored by Ethan Aubin. Spent most of the day building some generic types for scheduling recurring events. Not sure if rolling my own was a good idea, but that's what I did.
In the incrementalfsck branch, I have hooked this up in git-annex vicfg, which now accepts and parses scheduled events like "fsck self every day at any time for 60 minutes" and "fsck self on day 1 of weeks divisible by 2 at 3:45 for 120 minutes", and stores them in the git-annex branch. The exact syntax is of course subject to change, but also doesn't matter a whole lot since the webapp will have a better interface. Finished up the automatic recovery from stale lock files. Turns out git has quite a few lock files; the assistant handles them all. Improved URL and WORM keys so the filenames used for them will always work on FAT (which has a crazy assortment of illegal characters). This is a tricky thing to deal with without breaking backwards compatibility, so it's only dealt with when creating new URL or WORM keys. I think my next step in this disaster recovery themed month will be adding periodic incremental fsck to the assistant. git annex fsck can already do an incremental fsck, so this should mostly involve adding a user interface to the webapp to configure when it should fsck. For example, you might choose to run it for up to 1 hour every night, with a goal of checking all your files once per month. Also will need to make the assistant do something useful when fsck finds a bad file (ie, queue a re-download). Started the day by getting the builds updated for yesterday's release. This included making it possible to build git-annex with Debian stable's version of cryptohash. Also updated the Debian stable backport to the previous release. The roadmap has this month devoted to improving git-annex's support for recovering from disasters, broken repos, and so on. Today I've been working on the first thing on the list, stale git index lock files. It's unfortunate that git uses simple files for locking, and does not use fcntl or flock to prevent the stale lock file problem. Perhaps they want it to work on broken NFS systems?
The problem with that line of thinking is it means all non-broken systems end up broken by stale lock files. Not a good tradeoff IMHO. There are actually two lock files that can end up stale when using git-annex; both .git/index.lock and .git/annex/index.lock. Today I concentrated on the latter, because I saw a way to prevent it from ever being a problem. All updates to that index file are done by git-annex when committing to the git-annex branch. git-annex already uses fcntl locking when manipulating its journal. So, that can be extended to also cover committing to the git-annex branch, and then the git index.lock file is irrelevant, and can just be removed if it exists when a commit is started. To ensure this makes sense, I used the type system to prove that the journal locking was in effect everywhere I need it to be. Very happy I was able to do that, although I am very much a novice at using the type system for interesting proofs. But doing so made it very easy to build up to a point where I could unlink the .git/annex/index.lock and be sure it was safe to do that. What about stale .git/index.lock files? I don't think it's appropriate for git-annex to generally recover from those, because it would change regular git command line behavior, and risks breaking something. However, I do want the assistant to be able to recover if such a file exists when it is starting up, since that would prevent it from running. Implemented that also today, although I am less happy with the way the assistant detects when this lock file is stale, which is somewhat heuristic (but should work even on networked filesystems with multiple writing machines). Today's work was sponsored by Torbjørn Thorsen. Did I say it would be easy to make the webapp detect when a gcrypt repository already existed and enable it? Well, it wasn't exactly hard, but it took over 300 lines of code and 3 hours.. So, gcrypt support is done for now.
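Returning to the locking discussion above: the advantage fcntl/flock-style locking has over git's .lock files is that the lock is held by a process, so it dies with its holder and can never go stale. A small demonstration using the flock(1) tool from util-linux (an illustration of the locking style, not git-annex's actual Haskell code):

```shell
# A lock file used only as a handle for kernel-level advisory locking;
# unlike git's .lock files, its mere existence means nothing.
lockfile=$(mktemp)

# Hold the lock in a background process for a moment.
flock "$lockfile" sleep 2 &
sleep 0.5

# A non-blocking attempt while it is held fails...
if flock -n "$lockfile" true; then
    echo "unexpected: lock acquired while held"
else
    echo "lock busy, as expected"
fi

wait
# ...and once the holder exits, the lock is free with no cleanup needed,
# even if the holder had crashed instead of exiting normally.
flock -n "$lockfile" true && echo "lock free again"
```

A stale .git/index.lock, by contrast, outlives a crashed git process and blocks everything until someone deletes it by hand, which is exactly the heuristic cleanup problem described above.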
The glaring omission is gpg key management for sharing gcrypt repositories between machines and/or people. But despite that, I think it's solid, and easy to use, and covers some great use cases. Pushed out a release. Now I really need to start thinking about disaster recovery. Today's work was sponsored by Dominik Wagenknecht. Long day, but I did finally finish up with gcrypt support. More or less. Got both creating and enabling existing gcrypt repositories on ssh servers working in the webapp. (But I ran out of time to make it detect when the user is manually entering a gcrypt repo that already exists. Should be easy so maybe tomorrow.) Fixed several bugs in git-annex's gcrypt support that turned up in testing. Made git-annex ensure that a gcrypt repository does not have receive.denyNonFastForwards set, because gcrypt relies on always forcing the push of the branch it stores its manifest on. Fixed a bug in git-annex-shell recvkey when it was receiving a file from an annex in direct mode. Also had to add a new git-annex-shell gcryptsetup command, which is needed to make setting up a gcrypt repository work when the assistant has set up a locked-down ssh key that can only run git-annex-shell. Painted myself into a bit of a corner there. And tested, tested, tested. So many possibilities and edge cases in this part of the code.. Today's work was sponsored by Hendrik Müller Hofstede. So close to being done with gcrypt support.. But still not quite there. Today I made the UI changes to support gcrypt when setting up a repository on a ssh server, and improved the probing and data types so it can tell which options the server supports. Fairly happy with how that is turning out. Have not yet hooked up the new buttons to make gcrypt repos.
While I was testing that my changes didn't break other stuff, I found a bug in the webapp that caused it to sometimes fail to transfer one file to/from a remote that was just added, because the transferrer process didn't know about the new remote yet, and crashed (and was restarted knowing about it, so successfully sent any other files). So got sidetracked on fixing that. Also did some work to make the gpg bundled with git-annex on OSX be compatible with the config files written by MacGPG. At first I was going to hack it to not crash on the options it didn't support, but it turned out that upgrading to version 1.4.14 actually fixed the problem that was making it build without support for DNS. Today's work was sponsored by Thomas Hochstein. Worked on making the assistant able to merge in existing encrypted git repositories from rsync.net. This had two parts. First, making the webapp UI where you click to enable a known special remote work with these encrypted repos. Secondly, handling the case where a user knows they have an encrypted repository on rsync.net, so enters in its hostname and path, but git-annex doesn't know about that special remote. The second case is important, for example, when the encrypted repository is a backup and you're restoring from it. It wouldn't do for the assistant, in that case, to make a new encrypted repo and push it over top of your backup! Handling that was a neat trick. It has to do quite a lot of probing, including downloading the whole encrypted git repo so it can decrypt it and merge it, to find out about the special remote configuration used for it. This all works with just 2 ssh connections, and only 1 ssh password prompt max. Next, on to generalizing this rsync.net specific code to work with arbitrary ssh servers! Today's work was made possible by RMS's vision 30 years ago.
Being still a little unsure of the UI and complexity for configuring gcrypt on ssh servers, I thought I'd start today with the special case of gcrypt on rsync.net. Since rsync.net allows running some git commands, gcrypt can be used to make encrypted git repositories on it. Here's the UI I came up with. It's complicated a bit by needing to explain the tradeoffs between the rsync and gcrypt special remotes. This works fine, but I did not get a chance to add support for enabling existing gcrypt repos on rsync.net. Anyway, most of the changes to make this work will also make it easier to add general support for gcrypt on ssh servers. Also spent a while fixing a bug in git-remote-gcrypt. Oddly gpg --list-keys --fast-list --fingerprint does not show the fingerprints of some keys. Today's work was sponsored by Cloudier - Thomas Djärv. Did various bug fixes and followup today. Amazing how a day can vanish that way. Made 4 actual improvements. I still have 46 messages in unanswered backlog. Although only 8 of them are from this month. Added support for gcrypt remotes to git-annex-shell. Now gcrypt special remotes probe when they are set up to see if the remote system has a suitable git-annex-shell, and if so all commands are sent to it. Kept the direct rsync mode working as a fallback. It turns out I made a bad decision when first adding gcrypt support to git-annex. To make implementation marginally easier, I decided to not put objects inside the usual annex/objects directory in a gcrypt remote. But that lack of consistency would have made adding support to git-annex-shell a lot harder. So, I decided to change this. Which means that anyone already using gcrypt with git-annex will need to manually move files around. Today's work was sponsored by Tobias Nix. Finished moving the Android autobuilder over to the new clean build environment. Tested the Android app, and it still works. Whew!
There's a small chance that the issue with the Android app not working on Android 4.3 has been fixed by this rebuild. I doubt it, but perhaps someone can download the daily build and give it another try.. I have 7 days left in which I'd like to get remote gcrypt repositories working in the assistant. I think that should be fairly easy, but a prerequisite for it is making git-annex-shell support being run on a gcrypt repository. That's needed so that the assistant's normal locked down ssh key setup can also be used for gcrypt repositories. At the same time, not all gcrypt endpoints will have git-annex-shell installed, and it seems to make sense to leave in the existing support for running raw rsync and git push commands against such a repository. So that's going to add some complication. It will also complicate git-annex-shell to support gcrypt repos. Basically, everything it does in git-annex repos will need to be reimplemented in gcrypt repositories. Generally in a more simple form; for example it doesn't need to (and can't) update location logs in a gcrypt repo. I also need to find a good UI to present the three available choices (unencrypted git, encrypted git, encrypted rsync) when setting up a repo on a ssh server. I don't want to just remove the encrypted rsync option, because it's useful when using xmpp to sync the git repo, and is simpler to set up since it uses shared encryption rather than gpg public keys. My current thought is to offer just 2 choices, encrypted and non-encrypted. If they choose encrypted, offer a choice of shared encryption or encrypting to a specific key. I think I can word this so it's pretty clear what the tradeoffs are. Made a release on Friday. But I had to rebuild the OSX and Linux standalone builds today to fix a bug in them. Spent the past three days redoing the whole Android build environment. I've been progressively moving from my first hacked up Android build env to something more reproducible and sane. 
Finally I am at the point where I can run a shell script (well, actually, 3 shell scripts) and get an Android build chroot. It's still not immune to breaking when new versions of haskell libs are uploaded, but this is much better, and should be maintainable going forward. This is a good starting point for getting git-annex into the F-Droid app store, or for trying to build with a newer version of the Android SDK and NDK, to perhaps get it working on Android 4.3. (Eventually. I am so sick of building Android stuff right now..) Friday was all spent struggling to get ghc-android to build. I had not built it successfully since February. I finally did, on Saturday, and I have made my own fork of it which builds using a known-good snapshot of the current development version of ghc. Building this in a Debian stable chroot means that there should be no possibility that upstream changes will break the build again. With ghc built, I moved on to building all the haskell libs git-annex needs. Unfortunately my build script for these also has stopped working since I made it in April. I failed to pin every package at a defined version, and things broke. So, I redid the build script, and updated all the haskell libs to the newest versions while I was at it. I have decided not to pin the library versions (at least until I find a foolproof way to do it), so this new script will break in the future, but it should break in a way I can fix up easily by just refreshing a patch. The new ghc-android build has a nice feature of at least being able to compile Template Haskell code (though still not run it at compile time). This made the patching needed in the Haskell libs quite a lot smaller. Offset somewhat by me needing to make general fixes to lots of libs to build with ghc head. Including some fun with ==# changing its type from Bool to Int#. In all, I think I removed around 2.5 thousand lines of patches! (Only 6 thousand lines to go...)
Today I improved ghc-android some more so it cross builds several C libraries that are needed to build several haskell libraries needed for XMPP. I had only ever built those once, and done it by hand, and very hackishly. Now they all build automatically too. And, I put together a script that builds the debian stable chroot and installs ghc-android. And, I hacked on the EvilSplicer (which is sadly still needed) to work with the new ghc/yesod/etc. At this point, I have git-annex successfully building, including the APK! In a bored hour waiting for a compile, I also sped up git annex add on OSX by I think a factor of 10. Using cryptohash for hash calculation now, when external hash programs are not available. It's still a few percentage points slower than external hash programs, or I'd use it by default. This period of important drudgery was sponsored by an unknown bitcoin user, and by Bradley Unterrheiner and Andreas Olsson. Spent a few hours improving gcrypt in some minor ways, including adding a --check option that the assistant can use to find out if a given repo is encrypted with gcrypt, and also tell if the necessary gpg key is available to decrypt it. Also merged in a fix to support subkeys, developed by a git-annex user who is the first person I've heard from who is using gcrypt. I don't want to maintain gcrypt, so I am glad its author has shown up again today. Got mostly caught up on backlog. The main bug I was able to track down today is git-annex using a lot of memory in certain repositories. This turns out to have happened when a really large file was committed right into the git repository (by mistake or on purpose). Some parts of git-annex buffer file contents in memory while trying to work out if they're git-annex keys. Fixed by making it first check if a file in git is marked as a symlink. Which was really hard to do!
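The trick in that fix is that git itself records whether a tree entry is a symlink: symlinks get file mode 120000, so the question can be answered from tree metadata without reading any file contents. A rough Python illustration of the idea (git-annex's implementation is in Haskell; this just shows the mode check):

```python
def is_symlink_entry(ls_tree_line):
    """Return True if a `git ls-tree` output line describes a symlink.

    git records symlinks with file mode 120000 (regular files use
    100644 or 100755), so the tree metadata answers the question
    without buffering the file's contents in memory.
    """
    mode = ls_tree_line.split(None, 1)[0]
    return mode == "120000"
```

Only entries that pass this check need their (small) symlink targets examined as possible git-annex keys; large regular files are skipped entirely.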
At least 4 people ran into this bug, which makes me suspect that lots of people are messing up when using direct mode (probably due to not reading the documentation, or having git commit -a hardwired into their fingers), and forcing git to commit large files into their repos, rather than having git-annex manage them. Implementing the direct mode guard seems more urgent now. Today's work was sponsored by Amitai Schlair. Spent basically all of today getting the assistant to be able to handle gcrypt special remotes that already exist when it's told to add a USB drive. This was quite tricky! And I did have to skip handling gcrypt repos that are not git-annex special remotes. Anyway, it's now almost easy to set up an encrypted sneakernet using a USB drive and some computers running the webapp. The only part that the assistant doesn't help with is gpg key management. Plan is to make a release on Friday, and then try to also add support for encrypted git repositories on remote servers. Tomorrow I will try to get through some of the communications backlog that has been piling up while I was head down working on gcrypt. I decided to keep gpg key generation very simple for now. So it generates a special-purpose key that is only intended to be used by git-annex. It hardcodes some key parameters, like RSA and 4096 bits (maximum recommended by gpg at this time). And there is no password on the key, although you can of course edit it and set one. This is because anyone who can access the computer to get the key can also look at the files in your git-annex repository. Also because I can't rely on gpg-agent being installed everywhere. All these simplifying assumptions may be revisited later, but are enough for now for someone who doesn't know about gpg (so doesn't have a key already) and just wants an encrypted repo on a removable drive. Put together a simple UI to deal with gpg taking quite a while to generate a key ...
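Hardcoding the key parameters maps naturally onto gpg's unattended key generation, which takes a batch-mode parameter file. A sketch of building such a file in Python — the name and email strings are invented for illustration, this is only a subset of the batch directives modern gpg supports, and it is not necessarily the exact parameter set git-annex uses:

```python
def gpg_batch_params(name, email):
    """Build a parameter file for `gpg --batch --gen-key` that asks
    for a password-less 4096-bit RSA key (illustrative; see gpg's
    unattended key generation docs for the full directive list)."""
    return "\n".join([
        "Key-Type: RSA",
        "Key-Length: 4096",
        "Name-Real: " + name,
        "Name-Email: " + email,
        "%no-protection",   # no passphrase on the generated key
        "%commit",          # actually create the key
    ]) + "\n"
```

The resulting text would be fed to gpg on stdin; since key generation can take a while (it needs entropy), a progress UI around it makes sense.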
Then I had to patch git-remote-gcrypt again, to have a per-remote signingkey setting, so that these special-purpose keys get used for signing their repo. Next, need to add support for adding an existing gcrypt repo as a remote (assuming it's encrypted to an available key). Then, gcrypt repos on ssh servers.. Also dealt with build breakage caused by a new version of the Haskell DNS library. Today's work was sponsored by Joseph Liu. Now the webapp can set up encrypted repositories on removable drives. This UI needs some work, and the button to create a new key is not wired up. Also if you have no gpg agent installed, there will be lots of password prompts at the console. Forked git-remote-gcrypt to fix a bug. Hopefully my patch will be merged; for now I recommend installing my forked version. Today's work was sponsored by Romain Lenglet. Fixed a typo that broke automatic youtube video support in addurl. Now there's an easy way to get an overview of how close your repository is to meeting the configured numcopies settings (or when it exceeds them).

# time git annex status .
[...]
numcopies stats:
    numcopies +0: 6686
    numcopies +1: 3793
    numcopies +3: 3156
    numcopies +2: 2743
    numcopies -1: 1242
    numcopies -4: 1098
    numcopies -3: 1009
    numcopies +4: 372

This does make git annex status slow when run on a large directory tree, so --fast disables that. Worked to get git-remote-gcrypt included in every git-annex autobuild bundle. (Except Windows; running a shell script there may need some work later..) Next I want to work on making the assistant easily able to create encrypted git repositories on removable drives. Which will involve a UI to select which gpg key to use, or creating (and backing up!) a gpg key. But, I got distracted chasing down some bugs on Windows. These were quite ugly; more direct mode mapping breakage which resulted in files not being accessible. Also fsck on Windows failed to detect and fix the problem. All fixed now.
(If you use git-annex on Windows, you should certainly upgrade and run git annex fsck.) As with most bugs in the Windows port, the underlying cause turned out to be stupid: isSymlink always returned False on Windows. Which makes sense from the perspective of Windows not quite having anything entirely like symlinks. But failed when that was being used to detect when files in the git tree being merged into the repository had the symlink bit set.. Did bug triage. Backlog down to 32 (mostly messages from August). I've been out sick. However, some things kept happening. Mesar contributed a build host, and the linux and android builds are now happening, hourly, there. (Thanks as well to the two other people who also offered hosting.) And I made a minor release to fix a bug in the test suite that, I was pleased to see, three different people reported. Today, my main work was getting git-annex to notice when a gcrypt remote located on some removable drive mount point is not the same gcrypt remote that was mounted there before. I was able to finesse this so it re-configures things to use the new gcrypt remote, as long as it's a special remote it knows about. (Otherwise it has to ignore the remote.) So, encrypted repos on removable drives will work just as well as non-encrypted repos! Also spent a while with rsync.net tech support trying to work out why someone's git-annex apparently opened a lot of concurrent ssh connections to rsync.net. Have not been able to reproduce the problem though. Also, a lot of catch-up to traffic. Still 63 messages backlogged however, and still not entirely well.. Got git annex sync working with gcrypt. So went ahead and made a release today. Lots of nice new features! Unfortunately the linux 64 bit daily build is failing, because my build host only has 2 gb of memory and it is no longer enough. I am looking for a new build host, ideally one that doesn't cost me $40/month for 3 gb of ram and 15 gb of disk.
(Extra special ideally one that I can run multiple builds per day on, rather than the current situation of only building overnight to avoid loading the machine during the day.) Until this is sorted out, no new 64 bit linux builds.. gcrypt is fully working now. Most of the examples in fully encrypted git repositories with gcrypt should work. A few known problems:
- git annex sync refuses to sync with gcrypt remotes (some url parsing issue).
- Swapping two drives with gcrypt repositories on the same mount point doesn't work yet.
- http urls are not supported.
About half way done with a gcrypt special remote. I can initremote it (the hard part to get working), and can send files to it. Can't yet get files back, or remove files, and only local repositories work so far, but this is enough to know it's going to be pretty nice! Did find one issue in gcrypt that I may need to develop a patch for: Woke up with a pretty solid plan for gcrypt. It will be structured as a separate special remote, so initremote will be needed, with a gitrepo= parameter (unless the remote already exists). git-annex will then set up the git remote, including pushing to it (needed to get a gcrypt-id). Didn't feel up to implementing that today. Instead I expectedly spent the day doing mostly Windows work, including setting up a VM on my new laptop for development. Including a ssh server in Windows, so I can script local builds and tests on Windows without ever having to touch the desktop. Much better! Started work on gcrypt support. The first question is, should git-annex leave it up to gcrypt to transport the data to the encrypted repository on a push/pull? gcrypt hooks into git nicely to make that just work. However, if I go this route, it limits the places the encrypted git repositories can be stored to regular git remotes (and rsync). The alternative is to somehow use gcrypt to generate/consume the data, but use the git-annex special remotes to store individual files.
Which would allow for a git repo stored on S3, etc. For now, I am going with the simple option, but I have not ruled out trying to make the latter work. It seems it would need changes to gcrypt though. Next question: Given a remote that uses gcrypt, how do I determine the annex.uuid of that repository. I found a nice solution to this. gcrypt has its own gcrypt-id, and I convert it to a UUID in a reproducible, and even standards-compliant way. So the same encrypted remote will automatically get the same annex.uuid wherever it's used. Nice. Does mean that git-annex cannot find a uuid until git pull or git push has been used, to let gcrypt get the gcrypt-id. Implemented that. The next step is actually making git-annex store data on gcrypt remotes. And it needs to store it encrypted of course. It seems best to avoid needing a git annex initremote for these gcrypt remotes, and just have git-annex automatically encrypt data stored on them. But I don't know. Without initializing them like a special remote is, I'm limited to using the gpg keys that gcrypt is configured to encrypt to, and cannot use the regular git-annex hybrid encryption scheme. Also, I need to generate and store a nonce anyway to HMAC encrypt keys. (Or modify gcrypt to put enough entropy in gcrypt-id that I can use it?) Another concern I have is that gcrypt's own encryption scheme is simply to use a list of public keys to encrypt to. It would be nicer if the full set of git-annex encryption schemes could be used. Then the webapp could use shared encryption to avoid needing to make the user set up a gpg key, or hybrid encryption could be used to add keys later, etc. But I see why gcrypt works the way it does. Otherwise, you can't make an encrypted repo with a friend set as one of the participants and have them be able to git clone it. Both hybrid and shared encryption store a secret inside the repo, which is not accessible if it's encrypted using that secret.
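The reproducible, standards-compliant gcrypt-id-to-UUID conversion described above is essentially what RFC 4122 name-based UUIDs provide. A Python sketch of the idea — the namespace UUID below is made up for illustration and is not the one git-annex actually uses:

```python
import uuid

# Illustrative namespace only; git-annex's real namespace UUID differs.
GCRYPT_NAMESPACE = uuid.uuid5(uuid.NAMESPACE_URL, "gcrypt-id")

def annex_uuid_for(gcrypt_id):
    """Derive a stable version-5 (name-based, SHA-1) UUID from a
    gcrypt-id, so the same encrypted remote maps to the same
    annex.uuid wherever it's used."""
    return str(uuid.uuid5(GCRYPT_NAMESPACE, gcrypt_id))
```

Since the mapping is a pure function of the gcrypt-id, no extra state has to be stored anywhere, and every clone computes the same annex.uuid independently.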
There are use cases where not being able to blindly clone a gcrypt repo would be ok. For example, you use the assistant to pair with a friend and then set up an encrypted repo in the cloud for both of you to use. Anyway, for now, I will need to deal with setting up gpg keys etc in the assistant. I don't want to tackle full gpgkeys yet. Instead, I think I will start by adding some simple stuff to the assistant:
- When adding a USB drive, offer to encrypt the repository on the drive so that only you can see it.
- When adding a ssh remote make a similar offer.
- Add a UI to add an arbitrary git remote with encryption. Let the user paste in the url to an empty remote they have, which could be to eg github. (In most cases this won't be used for annexed content..)
- When the user has no gpg key, prompt to set one up. (Securely!)
- Maybe have an interface to add another gpg key that can access the gcrypt repo. Note that this will need to re-encrypt and re-push the whole git history.
Now I can build git-annex twice as fast! And a typical incremental build is down to 10 seconds, from 51 seconds. Spent a productive evening working with Guilhem to get his encryption patches reviewed and merged. Now there is a way to remove revoked gpg keys, and there is a new encryption scheme available that uses public key encryption by default rather than git-annex's usual approach. That's not for everyone, but it is a good option to have available. I try hard to keep this devblog about git-annex development and not me. However, it is a shame that what I wanted to be the beginning of my first real month of work funded by the new campaign has been marred by my home's internet connection being taken out by a lightning strike, and by illness. Nearly back on my feet after that, and waiting for my new laptop to finally get here. Today's work: Finished up the git annex forget feature and merged it in.
Fixed the bug that was causing the commit race detection code to incorrectly fire on the commit made by the transition code. Few other bits and pieces. Implemented git annex forget --drop-dead, which is finally a way to remove all references to old repositories that you've marked as dead. I've still not merged in the forget branch, because I developed this while slightly ill, and have not tested it very well yet. John Millikin came through and fixed that haskell-gnutls segfault on OSX that I developed a reproducible test case for the other day. It's a bit hard to test, since the bug doesn't always happen, but the fix is already deployed for Mountain Lion autobuilder. However, I then found another way to make haskell-gnutls segfault, more reliably on OSX, and even sometimes on Linux. Just entering the wrong XMPP password in the assistant can trigger this crash. Hopefully John will work his magic again. Meanwhile, I fixed the sync-after-forget problem. Now sync always forces its push of the git-annex branch (as does the assistant). I considered but rejected having sync do the kind of uuid-tagged branch push that the assistant sometimes falls back to if it's failing to do a normal sync. It's ugly, but worse, it wouldn't work in the workflow where multiple clients are syncing to a central bare repository, because they'd not pull down the hidden uuid-tagged branches, and without the assistant running on the repository, nothing would ever merge their data into the git-annex branch. Forcing the push of synced/git-annex was easy, once I satisfied myself that it was always ok to do so. Also factored out a module that knows about all the different log files stored on the git-annex branch, which is all the support infrastructure that will be needed to make git annex forget --drop-dead work. Since this is basically a routing module, perhaps I'll get around to making it use a nice bidirectional routing library like Zwaluw one day. 
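A routing module like that boils down to one table mapping each kind of state to its file on the git-annex branch. A Python sketch with a few of the well-known log files — treat the exact set as illustrative, since the real module knows about every log type git-annex stores:

```python
# A few of the log files kept on the git-annex branch; the real
# routing module enumerates all of them.
BRANCH_LOGS = {
    "uuid":   "uuid.log",    # repository descriptions
    "remote": "remote.log",  # special remote configuration
    "trust":  "trust.log",   # trust levels (including "dead")
    "group":  "group.log",   # repository groups
}

def branch_file_for(log_type):
    """Route a log type to its filename on the git-annex branch."""
    return BRANCH_LOGS[log_type]
```

Centralizing the routing means a feature like forgetting dead repositories can iterate over every known log file, rather than each caller hardcoding filenames.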
Yesterday I spent making a release, and shopping for a new laptop, since this one is dying. (Soon I'll be able to compile git-annex fast-ish! Yay!) And thinking about the wishlist item about dropping git-annex history. Today, I added the git annex forget command. It's currently only lightly tested, seems to work, and is living in the forget branch until I gain confidence with it. It should be perfectly safe to use, even if it's buggy, because you can use git reflog git-annex to pull out and revert to an old version of your git-annex branch. So if you've been wanting this feature, please beta test! I actually implemented something more generic than just forgetting git history. There's now a whole mechanism for git-annex doing distributed transitions of whatever sort is needed. There were several subtleties involved in distributed transitions: First is how to tell when a given transition has already been done on a branch. At first I was thinking that the transition log should include the sha of the first commit on the old branch that got rewritten. However, that would mean that after a single transition had been done, every git-annex branch merge would need to look up the first commit of the current branch, to see if it's done the transition yet. That's slow! Instead, transitions are logged with a timestamp, and as long as a branch contains a transition with the same timestamp, it's been done. A really tricky problem is what to do if the local repository has transitioned, but a remote has not, and changes keep being made to the remote. What it does so far is incorporate the changes from the remote into the index, and re-run the transition code over the whole thing to yield a single new commit. This might not be very efficient (once I write the more full-featured transition code), but it lets the local repo keep up with what's going on in the remote, without directly merging with it (which would revert the transition).
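The timestamp trick described above can be sketched very simply: a branch has performed a transition exactly when its transition log contains an entry with the same timestamp, so the check is a cheap lookup rather than a walk of the branch's commit history. An illustrative Python version (the log representation here is invented, not git-annex's actual on-disk format):

```python
def transition_done(branch_transition_log, transition):
    """A branch has done a transition iff its transition log records
    the same (name, timestamp) pair -- a set lookup instead of
    searching the branch's commit history for a rewritten commit."""
    return transition in branch_transition_log

# Hypothetical log entries as (name, unix-timestamp) pairs.
local_log = {("forget", 1377000000)}
```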
And once the remote repository has its git-annex upgraded to one that knows about transitions, it will finish up the transition on its side automatically, and the two branches will once again merge. Related to the previous problem, we don't want to keep trying to merge from a remote branch when it's not yet transitioned. So a blacklist is used, of untransitioned commits that have already been integrated. One really subtle thing is that when the user does a transition more complicated than git annex forget, like the git annex forget --dead that I need to implement to forget dead remotes, they're not just telling git-annex to forget whatever dead remotes it knows right now. They're actually telling git-annex to perform the transition one time on every existing clone of the repository, at some point in the future. Repositories with unfinished transitions could hang around for years, and at some future point when git-annex runs in the repository again, it would merge in the current state of the world, and re-do the transition. So you might tell it to forget dead remotes today, and then the very repository you ran that in later becomes dead, and a long-slumbering repo wakes up and forgets about the repo that started the whole process! I hope users don't find this massively confusing, but that's how the implementation works right now. I think I have at least two more days of work to do to finish up this feature. I still need to add some extra features like forgetting about dead remotes, and forgetting about keys that are no longer present on any remote. After git annex forget, git annex sync will fail to push the synced/git-annex branch to remotes, since the branch is no longer a fast-forward of the old one. I will probably fix this by making git annex sync do a fallback push of a unique branch in this case, like the assistant already does. Although I may need to adjust that code to handle this case, too..
For some reason the automatic transitioning code triggers a "(recovery from race)" commit. This is certainly a bug somewhere, because you can't have a race with only 1 participant. Today's work was sponsored by Richard Hartmann. I've started a new page for my devblog, since I'm not focusing extensively on the assistant and so keeping the blog here increasingly felt wrong. Also, my new year of crowdfunded development formally starts in September, so a new blog seemed good.
http://git-annex.branchable.com/devblog/
I personally like the 'exclusive or' operator when it makes sense in context of boolean checks because of its conciseness. I much prefer to write if (boolean1 ^ boolean2) { //do ...

I have the following code: public class boolq { public static void main(String[] args) { boolean isTrue = true; ...

I've just seen this line of code in my housemate's code. Bool bool = method() > 0; ...

string name = "Tony"; boolean nameIsTony = name == "Tony"; nameIsTony is true ...

What does 'Conditional expressions can be only boolean, not integral.' mean? I do not know Java and I know C++ definitely not enough to understand what it means.. Please help (found ...

That is, if I have a statement that evaluates multiple conditions, in say an 'or' statement like so.. if(isVeryLikely() || isSomewhatLikely() || isHardlyLikely()) { ... }

I have some simple logic to check if the field is valid: private boolean isValidIfRequired(Object value) { return (required && !isEmpty(value)) || ...

Time can be a bit difficult the first time you need to think about it. Part of the problem is that there are some major time concepts that people try to ...

Hi all. Not sure if this is the right place... but anyway. I was just about to write (in a document) that a Boolean in a conditional expression would improve performance over the use of an expression. For example: boolean x = false if(x) do stuff else do other stuff ----------------- char x = 'N'; if(x == 'Y') do stuff else ...

hi I have a program where i create an Object tv, from the Fernseher class which is the first class i make. Then in main i create the Object in the FernseherRemote class. then trying to check the status of the tv, if it is on or off, it is initially off so i use the tv.setStatus(sum) to turn it on sum ...
http://www.java2s.com/Questions_And_Answers/Java-Data-Type/Boolean/condition.htm
In this video tutorial I show you how to scrape websites. I introduce 2 new modules: UrlLib and Beautiful Soup. UrlLib is preinstalled on Python, but you have to install Beautiful Soup for it to work. Beautiful Soup is available at their website. If you are using Python versions previous to Python 3.0 get this version Beautiful Soup for Python previous to 3.0. If you are using Python 3.0 or higher get this version of Beautiful Soup. To install it follow these steps: This is how you normally install all Python modules on every OS by the way! What is Website Scraping and is it Legal? Website scraping is almost always legal as long as you provide the following: As for what website scraping is: it is the act of removing information from one or many sites using some automated program. I provide you a program that was made to scrape the Huffington Post, but this code can be used to scrape most any site. Like always, a lot of code follows the video. If you have any questions or comments leave them below. And, if you missed my other Python Tutorials they are available here: All the Code from the Video

#! /usr/bin/python

from urllib import urlopen
from BeautifulSoup import BeautifulSoup
import re

# Copy all of the content from the provided web page
webpage = urlopen('').read()

# Grab everything that lies between the title tags using a REGEX
patFinderTitle = re.compile('<title>(.*)</title>')

# Grab the link to the original article using a REGEX
patFinderLink = re.compile('<link rel.*href="(.*)" />')

# Store the titles and links found by the regular expressions
findPatTitle = re.findall(patFinderTitle, webpage)
findPatLink = re.findall(patFinderLink, webpage)

listIterator = []
listIterator[:] = range(1, 100)

for i in listIterator:
    print findPatTitle[i]  # The title
    print findPatLink[i]   # The link
    print "\n"

# Here I retrieve and print to screen the titles and links with just Beautiful Soup
soup2 = BeautifulSoup(webpage)

print soup2.findAll('title')
print soup2.findAll('link')

titleSoup = soup2.findAll('title')
linkSoup = soup2.findAll('link')

for i in listIterator:
    print titleSoup[i]
    print linkSoup[i]
    print "\n"

Hi Derek, Thanks for all the great tutorials! They have really helped me a lot!
FYI I had to copy the beautiful soup files directly into the lib folder to get it to import properly. Also, when I copy the code from the site I get an error because the open single quote appears as a non-ASCII character. Example… webpage = urlopen(‘http://……) The single quote has to be deleted and typed back in for it to work. Just thought I’d share what I found since others may be experiencing it too. Have a nice weekend Yes the quotes sometimes get a little messed up. Also you have to place the tabs in the right place. I could provide a link to a file? WordPress messes things up sometimes. Well… Maybe I am doing some other things incorrectly too. Still trying to get the code to compile without errors. Might want to ignore what I wrote above. okay 🙂 It looks like you can’t copy/paste the code from your website into the module. You have to delete and retype the single and double quotes. Then it will run properly. Thanks for all the guidance. I would be hopelessly lost without your tutorials. I’ll provide links to the actual files later today so you can just download and run them. Here is all of the Python Tutorial code in one zip archive. That should help you out. Thanks Thanks Derek! I sent you an email on a Python Programming Job, it was sent from a different email address, but it is from me. Thanks again for all your help!! Hi Derek, Can you demonstrate a quick example on how to parse the data from the tcpdump output, given the output has already been converted to text format? Thanks, Tito I don’t have a lot of experience with the tcpdump library, but I find it is normally easiest to parse plain text through the use of regular expressions. That is how most of these parsing libraries work anyway. I’m going to try to expand my regular expression tutorial today. I’ll cover all of the most commonly wanted regular expressions. What specifically are you trying to do? 
As a matter of fact, after watching the regex tutorials, I’ll try to parse something on my own using regex with Python. I’ve been using grep with simple regex to search for what i want but it is too labor intensive Thanks again, Tito I’ll have the video tutorials for regular expressions very soon. They work the same in most every language except you use different methods. Thanks for your prompt reply. I am not looking for any thing specifics. I use grep and some simple regex to get what I need for the most part but that seems to involve too much manual labor and I m not a programmer/scripter by any mean. That’s why I have been trying to learn Python by reading easy books and watching your tutorials :). I’m putting up a ton of regular expression videos today. I hope they help. Does the code above work for RSS feeds? I tried using your code to scrape the following RSS feed: and I get the following errors: Traceback (most recent call last): File “C:\Documents and Settings\Administrator\My Documents\harvardext\week2\week2friday\PubMedScraper_edits.py”, line 56, in print findPatTitle[i] # The title IndexError: list index out of range The code works on any page as long as you provide the proper tags that surround the content that you want to get. Here is a more advanced tutorial on using Regular Expressions that may help you Complex Regular Expressions. Hope that helps? Hello Admin, Your py tutorials are HQ and best on youtube. Please keep them coming and lead us all to advanced python coding skills. Please don’t stop after 20-30 tutorials! Good Luck! When I visit the link you posted: , for BeautifulSoup download compatible with Py 2.7 I see tons of files and don’t know what do download. For almost everyone, the 3.2 series is the best choice. Sorry for not pointing that out Derek I have recently started with Python and initially found it difficult. Your tutorials have brought me a long way. Thanks for sharing and keep up the good work.
Graham Derek Thanks for putting up all these great videos. I’m wanting to scrape a webpage that requires a log-in first, is there an easy way to do this in Python? Thanks You’re welcome. The short answer to your question is maybe 🙂 Some sites will completely block you from getting through their login screen programmatically through the use of a captcha. If there is nothing blocking you have to figure out all of the requirements to login: username, password, encoding issues, what cookies are set, etc. I can’t think of a way to write code that would work with every site. This is definitely a hack job, but I’ll look to see if I can come up with something Hey Derek, I’ve been looking for your tutorials on FB and twitter, and couldn’t find it. Have you done it yet? If yes, can you post the link? If no, when is it going to come out? Thanks! Hi Ann, I did 3 tutorials on Facebook: How to Make Facebook Apps, How to Make Facebook Apps Pt 2 and How to integrate Facebook into WordPress. I plan on making more Facebook apps as soon as I figure out all of the recent changes. As per Twitter, what would you like to see? Great tutorials! Would love a tutorial on how to scrape friends and followers on Twitter, any plans for that? Scott I’m going to be covering social network programming next. I'll start with Facebook and then twitter. Explain exactly what you want to do with twitter and I’ll tell you if it is possible Hey Derek, I’m having a problem with the re.findall(patFinderTitle,webpage) portion of the code. I get the following error: Traceback (most recent call last): File “”, line 2, in File “/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7 /re.py”, line 177, in findall return _compile(pattern, flags) .findall(string) TypeError: expected string or buffer Any idea what the problem may be? Thanks I’m glad you like them 🙂 I have a few questions: Are you searching the Huffington Post or some other site? Beautiful Soup doesn’t work with all sites.
Have you edited the code in any way?

I think the problem was I didn't append read() to the end of urlopen… Sorry about that. Now I'm getting an IndexError: "list index out of range", and I've checked several times to make sure that the code I'm using is identical. Could it be a problem with Beautiful Soup? I'm worried I might not have gotten it on the path, but Eclipse seems to import it. Thanks for any help. And yes, I'm using the same Huffington Post link. Here is the code I'm using:

from urllib import urlopen
from BeautifulSoup import BeautifulSoup
import re
webpage = urlopen('').read()
patFinderTitle = re.compile('(.*)')
patFinderLink = re.compile('')
findPatTitle = re.findall(patFinderTitle, webpage)
findPatLink = re.findall(patFinderLink, webpage)
listIterator = []
listIterator[:] = range(1,100)
for i in listIterator:
    print findPatTitle[i]
    print findPatLink[i]
    print "\n"

Sorry Derek, I had a problem in my code. I was using (.*) instead of (.*)

I'm glad you fixed it. I figured it was some silly typo. Sorry I couldn't respond quicker. I'm getting a lot of comments lately.

Hi Reed, I am having the same problem. Can you explain what you meant by "I was using (.*) instead of (.*)"?

Hello Derek, great site you have here. I love Python; it was love at first sight. Your tutorials rock. Could you do a demo of how to use an RSS reader in Tkinter? Thanks

Thank you 🙂 I'm glad you like it. What specifically are you looking to do with RSS feeds?

Well, I'm trying to create a simple RSS reader, like scraping a web page and showing it with Python in a visual way, not text, to keep track of my favorite sites and stuff.

I'll do my best. I'm a bit backed up with tutorials at the moment.

Ok, no problem. I have really enjoyed your tutorials! Muchas gracias!
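For readers puzzled by Reed's fix (the comment system stripped the angle brackets from both of his patterns, making them look identical): the classic culprit in these scrapes is a greedy (.*) versus a non-greedy (.*?). A small demonstration of the difference:

```python
import re

# Two titles on one line, as an RSS feed often serves them.
html = "<title>One</title><title>Two</title>"

# Greedy: .* runs to the LAST closing tag, swallowing both titles at once.
greedy = re.findall(r"<title>(.*)</title>", html)

# Non-greedy: .*? stops at the FIRST closing tag, one title per match.
lazy = re.findall(r"<title>(.*?)</title>", html)

print(greedy)  # ['One</title><title>Two']
print(lazy)    # ['One', 'Two']
```

The greedy version returning one mangled match instead of many clean ones is exactly what makes a later `findPatTitle[i]` lookup run out of range.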
Do you have any suggestions for scraping websites which don't particularly want to be scraped, like Wikipedia? Again, thanks for all the great tutorials!

You're very welcome. I just looked at Wikipedia and, yikes, what a mess! You can of course grab data from it and probably delete all of the CSS. I don't think you should try to grab what you want using regex. It's probably a better idea to grab all of it and then delete what you don't want using regex. It's doable, but will take some time. I hope that helps.

Hi, I watched your videos and they are really good, but one thing that I didn't get is why you use "if __name__ == '__main__': main()". I've written so many beginner programs without using it. Can you please explain it to me?

That line allows your Python code to act as either a reusable module or as a standalone program. It simply tells the interpreter to call the main function.

I'll check this out as soon as possible. Everything should work perfectly unless the tags have been changed on the site.

Wonderful tutorial. Web scraping is the reason I have started to teach myself Python. A little problem with the code above, though troubleshooting it was a good learning experience for me: the divBegin line is not fully finished, which would identify the body_entry_text division. All in all, a wonderful job Derek, thank you for teaching me about Python!

You're very welcome. Thank you for pointing that out 🙂

First, thanks for all the videos, they're really great. I have a question about grabbing the titles and links from the Huffington Post RSS feed. From your code:

titleString = '(.*)'
origArticleLink = ''

I can strip the code to where it only grabs these lines and prints them out, and it works. But how?! The Huffington Post RSS feed seems to have changed its tags. I can't figure out how to scrape the new tags, but this old code is working when I don't see why it should! For example, here is the article title and link from the RSS feed.
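Derek's answer above about "if __name__ == '__main__': main()" can be seen in a tiny sketch (a hypothetical module file):

```python
def main():
    message = "running as a standalone program"
    print(message)
    return message

# When this file is executed directly, __name__ is "__main__" and main()
# runs.  When the file is imported from another script, __name__ is the
# module's name instead, so main() is NOT called automatically; the
# importer can still call main() itself whenever it wants.
if __name__ == "__main__":
    main()
```

That is why beginner programs work fine without it: the guard only matters once the same file is both run and imported.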
I can't figure out how to scrape it with new code, but the code I list above still works…

'Tabatha Takes Over': Tabatha's Appalled By Salon Employees Drinking At Work (VIDEO)

I checked out the code. Everything seems to still be working because the title and link to the original article are still set up the same way. I then use urlopen to do all of the heavy lifting in regards to grabbing the original articles. I'm not sure what could be going wrong with your code. The trick is to grab the original article link and let urlopen do its job for any other feeds you're pulling from. Does that make sense?

Oops, looks like the comments section killed the code I pasted in there. But you know what it should be: the simple title tag and the link ref tag.

Great tutorial Derek! Very helpful! I had one problem when running your code though. I also get an error message like the one below:

Traceback (most recent call last):
File "C:/Python27/beautifulSoupDemo.py", line 18, in
findPatTitle = re.findall(patFinderTitle,webpage)
File "C:\Python27\lib\re.py", line 177, in findall
return _compile(pattern, flags).findall(string)
TypeError: expected string or buffer

Any idea on how I can fix this? Thanks, Helena

This error seems to get cleared up if you update everything in Eclipse. Just click Help and then Check for Updates in Eclipse.

Never mind, problem fixed. Don't know how, but the program is working now. Thank you again, Derek, for posting this!

Great 🙂

Hey Derek, great tutorial. One question though: how do you get rid of the tags? Ann

Delete anything that qualifies as a tag using regular expressions. The title tags, and also the link tags.

Before even trying BeautifulSoup, I was trying to get the other code working.
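Ann's question about getting rid of the tags: Derek's "delete anything that qualifies as a tag" advice maps to a single re.sub call. A sketch on an invented snippet:

```python
import re

snippet = '<p>Hello <a href="http://example.com">world</a>!</p>'

# <[^>]+> matches a "<", then one or more characters that are not ">",
# then ">": i.e. exactly one complete tag, never spilling across
# neighbouring tags the way a greedy <.*> would.
clean = re.sub(r"<[^>]+>", "", snippet)

print(clean)  # Hello world!
```

This strips markup only; entities such as `&amp;` would still need a separate pass (e.g. `html.unescape`).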
If I print the webpage variable, the whole thing is there in all its glory, but when I print patFinderTitle, I see that it's an object. This explains the error code I'm getting when re.findall tries to run:

Traceback (most recent call last):
File "C:/Documents and Settings/Cyd/Desktop/pytut_13.py", line 14, in
findPatTitle = re.findall(patFinderTitle, webpage)
File "C:\Python32\lib\re.py", line 193, in findall
return _compile(pattern, flags).findall(string)
TypeError: can't use a string pattern on a bytes-like object

It doesn't explain why it works for you, though. I've looked through the Python documentation and can't find any explanation. Can you tell me what I'm missing? (I even tried copying and pasting your code for that section, and get the same error.) Thanks, Cyd

I guess using the code tag doesn't work. Anyway, it's an object. Okay, third try: pretend there are opening and closing tags on this:

_sre.SRE_Pattern object at 0x010B3180

Sorry again. Through much searching on the web, I found that I can put a 'b' in front of the pattern to make it a bytes pattern, and then everything works fine (except I get an ugly b' in front of each entry).

Nope, that didn't work. When it gets to articlePage, I'm told that the 'bytes' object has no attribute 'timeout' (part of urlopen). I guess it still wants a regular string. I just don't understand why the re.compile lines are returning bytes objects instead of strings.

Sorry I couldn't get to you quicker. I'll look over the code. Because I made this tutorial so long ago, I'm guessing Beautiful Soup must have changed. The code worked in the past, as you saw in the video. I've also heard that many people have been struggling because of recent changes in Eclipse and PyDev. I've seen numerous errors from people that didn't recently update PyDev. In hindsight, I probably should have avoided covering Beautiful Soup because the author seems to be giving up on the project.

No problem. I appreciate your quick response.
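Cyd's "can't use a string pattern on a bytes-like object" error is the Python 3 str/bytes split: urlopen(...).read() returns bytes, while the compiled pattern is text. Rather than prefixing the pattern with b (which then breaks later calls that expect text, as Cyd found), the usual fix is to decode the page first. A sketch with the download simulated by a bytes literal:

```python
import re

# urlopen(...).read() returns raw bytes in Python 3 (simulated here).
raw = b"<title>Breaking News</title>"

# Decode the bytes into text first; then a normal string pattern works
# everywhere, including later urlopen calls on the extracted links.
page = raw.decode("utf-8")
titles = re.findall(r"<title>(.*?)</title>", page)

print(titles)  # ['Breaking News']
```

The Python 3.2 era also explains the error: the Python 2 tutorial code never needed the decode because 2.x strings were byte strings.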
I found bs4 (the new Beautiful Soup) and it's working OK. I encoded the urlopen statements into UTF-8 and that fixed the object problem. Then for the soup.findall, I used soup.find_all and stuff started printing out. It got through four articles before hitting an error in an articlePage:

UnicodeDecodeError: 'utf8' codec can't decode byte 0x80 in position 132109: invalid start byte

But I must say, figuring this stuff out has GOT to make me a better programmer. Thanks again. On to the next tutorial!

That is pretty much how I learned to program. Except I didn't have the internet; I had to go dig up stuff in real books. Yuck! I'll have to revisit this tutorial and make corrections based off of BS not being backwards compatible. Thank you for pointing all of this out 🙂

Amazing videos. Thank you so much for sharing your knowledge. I am however having the following issue. It shows me the following error while running:

SyntaxError: Non-ASCII character '\x94' in file C:\Users\amoosa\workspace\PythonTest\PythonTest\Web_Scraping1.py on line 19, but no encoding declared; see for details

It points to the following sentence:

patFinderLink = re.compile("")

I am using Windows 7 and Eclipse with PyDev. Furthermore, could you point out or instruct how to take care of a login page to have credentials put in, so I can start doing the website scraping thereafter? Thanks.

Never mind, I found and corrected the problem. Apparently, the copy/paste function takes the '"' sign in differently. When I removed the quotes and re-added them manually, no errors occurred. Just thought I should let you know 🙂 Also, thank you for the videos. I am a big fan of your site.

Yes, that is a problem with my old videos. It was done for security reasons. I was going to go back and correct that in every tutorial, but I figured I'd just keep making new tutorials instead. It could take months to go back and fix all of my past errors 🙂

This site is amazing.
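The UnicodeDecodeError above ("invalid start byte") comes from a page that is not clean UTF-8. One hedged workaround is a lenient decode, which swaps undecodable bytes for the U+FFFD replacement character instead of aborting the scrape partway through the articles:

```python
# Simulated download: valid UTF-8 for "café", then a stray 0x80 byte,
# like the one that killed the articlePage decode at position 132109.
raw = b"caf\xc3\xa9 \x80 news"

# raw.decode("utf-8") would raise UnicodeDecodeError here;
# errors="replace" substitutes U+FFFD for the bad byte and carries on.
page = raw.decode("utf-8", errors="replace")
print(page)
```

`errors="ignore"` drops the bad bytes silently instead; which to use depends on whether you want the damage visible in the output.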
Is there any way you could do another tutorial on another website (non RSS/feed)?

Thank you. I show you how to do website scraping using other languages. I have a really neat tutorial using PHP here: PHP Website Scraping. The code is easy to translate into Python.

Just found your article on web scraping after seeing Kim Rees (Periscopic) on "Digging into Open Data" at OSCON 2012. Thanks!

You're very welcome 🙂 Here are some videos on web scraping with PHP: Web Design and Programming Pt 7 REGEX, Web Design and Programming Pt 8 Regex, and Web Design and Programming Pt 24 Regex on Steroids.

I'm also a member of Atlanta PHP Meetup. Check out "AtlantaPHP dot org" or stop by if you're in town. Again, thanks!

Hey Derek, loving the tutorials, but I ran into a problem with the code in this tutorial. I'm getting the below error:

TypeError: expected string or buffer

Related to this line:

findPatTitle = re.findall(patFinderTitle, webpage)

I did see that a few others had this error, and I saw your response about updating Eclipse. I just did that, and am still getting this error. I'm using Python 3.2 if that helps. The code in its entirety is:

from urllib.request import urlopen
from bs4 import BeautifulSoup
import re
webpage = urlopen('')
patFinderTitle = re.compile('(.*)')  # the (.*) is a regular exp to grab anything between title tags
patFinderLink = re.compile('')
findPatTitle = re.findall(patFinderTitle, webpage)
findPatLink = re.findall(patFinderLink, webpage)
listIterator = []
listIterator[:] = range(2,16)
for i in listIterator:
    print(findPatTitle[i])
    print(findPatLink[i])
    print("\n")

I think it is a Beautiful Soup issue. Since I made this tutorial, I no longer use that library. I'll take a look to see if that is the issue.

Hi, I'm getting an IndexError: list index out of range. I've stripped down the code to its basics and still have the problem. I would be very thankful if you could help me out.

Fixed. Indentation problem… Great!
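Two bugs recur in the scripts pasted through this thread, independent of Beautiful Soup: the page is never decoded (and `.read()` is sometimes missing), and a hard-coded `range(2,16)` raises IndexError whenever the feed has fewer items. A sketch of the safer loop, driven by what findall actually returned:

```python
import re

# Stand-in for a downloaded-and-decoded feed with only two items.
page = "<title>A</title><title>B</title>"

titles = re.findall(r"<title>(.*?)</title>", page)

# enumerate walks exactly the matches that exist -- no hard-coded range,
# so a short feed can never raise "list index out of range".
for i, title in enumerate(titles):
    print(i, title)
```

With a real page, `page` would come from `urlopen(url).read().decode("utf-8")`.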
I'm glad you fixed it 🙂

Hi, would you comment on how you would do the following using BeautifulSoup? I feel like it can be simplified significantly, but I run into errors when I try doing it with soup:

# def get_urls_of_restaurant():
#     list_urls = []
#     n = 0
#     nn = 0
#     for i in range(4):
#         url = urlopen('' + str(nn) + '-Dar_es_Salaam.html').readlines()  # open URL which lists restaurants
#         while n < len(url):
#             if '"' not in url[n] and '' in url[n] and len(url[n]) > 5:
#                 list_urls.append(url[n-1].split('"')[1])
#             n += 1
#         n = 0
#         nn += 30
#     list_urls.reverse()
#     print "Getting urls done! Get %s" % len(list_urls) + ' urls.'
#     return list_urls

Hi, what version of BeautifulSoup are you using? They changed a bunch of things lately and of course it isn't backwards compatible. What errors are you getting? Thanks, Derek

I am using beautifulsoup4.

That is the problem. I made this tutorial using BeautifulSoup 3. After a while I started having trouble with this library as well, and now I just use regular expressions to perform scraping techniques.

How do I check that the URL I'm going to is not a broken URL?

Check out this article: How to check for broken url in Python.

Hey, great tutorial. I plan to check out more of them. Your directions on installing Beautiful Soup were way better than the instructions on BeautifulSoup's website. Frank

Thank you 🙂 I do my best to make everything easy.

Hi Derek, I had a question about Python. Is there a way to handle exceptions of a program you don't know about (i.e. you want to catch a simple exception in a program someone else has written)? How would you invoke their program in our try/except block, or could they invoke the try/except block in their program?
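On the broken-URL question, beyond the article Derek links: a small stdlib helper that reports whether a URL can be opened at all. The data: URL in the demo is just an offline stand-in for a real web address; in production you would likely also inspect the HTTP status code rather than treat any response as success:

```python
import urllib.request
import urllib.error

def url_is_alive(url):
    """Return True when the URL can be opened, False otherwise."""
    try:
        urllib.request.urlopen(url)
        return True
    except (urllib.error.URLError, ValueError):
        # URLError covers unreachable hosts and HTTP errors;
        # ValueError covers strings that are not URLs at all.
        return False

print(url_is_alive("data:text/plain,hello"))  # True
print(url_is_alive("not a url at all"))       # False
```

A HEAD request (via `urllib.request.Request(url, method="HEAD")`) is a lighter-weight variant when you only need the status, not the body.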
Thank you for your time. Edit: I thought about using execfile() but it didn't help.

You can catch all Python exceptions like this, but it isn't recommended:

try:
    # do stuff
except Exception, e:
    print e

Hi Derek, thank you for your answer. My problem statement was like this:

1) Prog 1: debugger (try/except statement for a few errors), written by me
2) Prog 2: regular program, written by my team, by person X

Person X wants to know if certain exceptions are occurring in prog 2, so he imports prog 1 and, tada, he can catch the custom exceptions. I just want to know: how do I write prog 1 so it works like that?

Hi Derek, I'm a fan of your videos about scraping in Python, but I have a problem: my IDLE doesn't recognize BeautifulSoup. How can I get this import recognized? Regards!

Are you using Python 2.7? Type python -V in the terminal or console to find out.

Hey! I just want to ask you a question: is it possible to do website scraping without the Beautiful Soup module? I am currently making a program that runs different Python scripts, and I don't want other users to have to manually download Beautiful Soup. I also need to know what I should do when the tags are combined together, for example: "link = … title = titlehere"

Sure, you can do that, and I actually prefer to scrape without Beautiful Soup. You just have to use regular expressions. I cover how later in this tutorial. I also show how to scrape complicated stuff in PHP here: Website Scraping with PHP. The regex can easily be used in Python. I hope that helps.

Hi Derek, is it possible to get BS to do this? Go to a website homepage (video site), enter the article (normally by clicking the thumbnail's URL), grab the title, grab the video's tags, grab the video's embed code, then go back and move on to the next article.

It normally works, but BS can get buggy at times. Try it out and see.

Great tutorials, thank you (and keep it up)!
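On catching exceptions raised inside someone else's program ("prog 2"): a common pattern is a wrapper that imports the other code and calls it inside try/except. A sketch with a stand-in function playing the part of the teammate's program; note that Derek's "except Exception, e" is Python 2 syntax, spelled "except Exception as e" in Python 3:

```python
# risky_task is hypothetical; it stands in for a function imported
# from the teammate's module ("prog 2").
def risky_task(value):
    if value < 0:
        raise ValueError("negative input not allowed")
    return value * 2

def run_guarded(func, *args):
    """Wrapper 'prog 1': run someone else's code inside try/except."""
    try:
        return func(*args)
    except Exception as e:  # broad catch; narrow it to specific types in practice
        print("caught:", e)
        return None

print(run_guarded(risky_task, 5))   # 10
print(run_guarded(risky_task, -1))  # prints "caught: ..." then None
```

Person X would `import` the wrapper module and route calls through `run_guarded`, which is exactly the "he imports prog 1" arrangement described above.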
QUESTION: How do you scrape when the data spans multiple pages (with pagination, such as < Last >>)? Thank you!

Thank you 🙂 You'll have to grab multiple pages depending upon how everything is set up. This would be one of those case-by-case problems.

Could you post an example of web scraping where webpages are loaded with JavaScript (and extracting information from that site)?

I have numerous tutorials on regular expressions on my site. This regex tutorial may help the most. It is a PHP tutorial, but the regular expression part is the same in Python.

Can you tell me how to make a hotel booking system in Tkinter?

That is a pretty big project. Can you break it down into just the parts that are confusing to you?

Thank you very much.

You're very welcome 🙂

Hi mates, how is everything? As for what you want to say about this post, in my view it's actually awesome for me.

Thank you very much 🙂

Great post about screen scraping. This works pretty well for a single page or website. For a large project, the Scrapy framework is more suitable than the method mentioned in this post.

Thank you 🙂 Yes, I was just playing around, and mainly I was teaching what goes on with a screen scraping tool.

I really like your tutorial; it made things easy to understand. I have one problem I have been trying to figure out. I have a big file with over 200 lines. Here is an example of one of the lines:

DateTitle

The file name has the date in it, and the title. The date code is Y/M/D. The ldm is an abbreviation for the title. How can I make a script to get the date code and put it in here (Date) and the title here (Title)? I have several HTML files like this, each one with several hundred lines, all just like this. Any ideas are GREATLY appreciated!!!

Thank you 🙂 I made a Regex tutorial that seems to help people. It was done using PHP, but the regular expression codes work exactly the same way in Python. Here is the video: PHP Regex Tutorial. I explain the thought process behind using regex in complex situations.
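For the date-and-title question just above: the comment system stripped the actual HTML, so this is a hedged reconstruction. The file names, the "ldm" expansions, and the tag layout are all invented stand-ins in the Y/M/D-plus-abbreviation shape the commenter describes, pulled apart with one regex:

```python
import re

# Invented lines in the described shape: the href carries a Y/M/D date
# code plus an abbreviation ("ldm"); the link text carries the title.
lines = [
    '<a href="2012/03/14/ldm.html">Hypothetical Title One</a>',
    '<a href="2012/04/02/ldm.html">Hypothetical Title Two</a>',
]

pattern = re.compile(r'href="(\d{4})/(\d{2})/(\d{2})/\w+\.html">(.*?)</a>')

results = []
for line in lines:
    match = pattern.search(line)
    if match:
        year, month, day, title = match.groups()
        results.append(("%s/%s/%s" % (year, month, day), title))

for date, title in results:
    print("Date: %s  Title: %s" % (date, title))
```

The same loop over a few hundred real lines (read with `open(path)`) would emit one Date/Title pair per line, which is all the commenter's reformatting task needs.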
I hope it helps.

Hi Derek, I am trying to do some web scraping for my research in Python, but I am unable to construct the URL. After going through your post, I thought maybe I'd shoot you some of my questions. Any help is greatly appreciated.

I want to access this website from Python and submit my query to convert IDs, then get that response back from the website into my program. While studying this, I learned more about web scraping in Python. Is this the way to do it? For example, if I give input as 21707345 23482678, choose the option PMID to PMCID (or NIHMSID), and hit convert, it will return the PMCID and NIHMSID. I want these returned values to be used in my program. I guess that will be just parsing the results.

I went through basic YouTube videos for web scraping in Python such as this one. I also tried doing it myself using BeautifulSoup, but with no success. I also tried Firebug in Firefox to get the URL that I can use to query the website. As far as I got, I have the base URL, but I am unable to join it to the query to complete the URL. When queried, and when I tried to build the complete query, it is not working. So I am guessing I am missing something here. Thank you for your time and help.

The problem you're having revolves around the difference between scraping sites using the GET versus the POST method. The GET method is easy to use, and that is what I use in these tutorials when needed. This website, however, is using the POST method, which can be very complicated and maybe even impossible, because you can't pass information in the URL. As you noticed, when you pass information there is no change in the URL. This information may help you. I hope that helps.

Thanks a lot, Derek. I am sure this helps me learn things.

Great! I'm very happy to be able to help 🙂

Sorry for not mentioning the website. The website I am trying to access is

Hi Derek! I loved your Python tutorials! I went over some stuff on my own and have created two well-commented *.py files covering: 1.
Lambda expressions, functional programming tools, and list comprehension (all following the 2.7.6 documentation). 2. Iteration generators and related stuff (mostly following the Stack Overflow question at)

I'd be proud if you wanted to make a couple of video tutorials out of these! Your video editing and narrating are probably much better than what I could muster, and you already have a crowd. Please let me know (by e-mail, if at all possible) if you'd like me to post the *.py files on Google Docs or whatever… Good luck with your awesome project, I'm certainly going to continue using it! Inon.

Hi Inon, thank you for all of the nice compliments 🙂 Please feel free to post links to your code and most anything else in the comments here and I'll gladly share them. Thank you very much for the offer!

Shoot… gotta fix all the indentations… Sorry!
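Returning to the GET-versus-POST distinction from the ID-converter exchange above: the practical difference is where the query travels. A stdlib sketch; the endpoint and parameter names are invented (the real NCBI converter's differ), and nothing is actually sent here:

```python
import urllib.parse
import urllib.request

# Hypothetical converter endpoint -- a stand-in, not the real service.
URL = "http://example.com/idconv"
params = {"ids": "21707345,23482678", "format": "json"}

# GET: the query rides in the URL itself, which is why Firebug shows
# the whole request as one address you can rebuild by hand.
get_url = URL + "?" + urllib.parse.urlencode(params)

# POST: the same data travels in the request body instead, which is why
# the address bar never changes no matter what you submit.
post_req = urllib.request.Request(
    URL, data=urllib.parse.urlencode(params).encode("utf-8")
)

print(get_url)
print(post_req.get_method())  # POST
```

Either request would be sent with `urllib.request.urlopen(...)`; for a POST site, capturing the exact form field names from the browser's network tab is the part that has to be done case by case.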