Wireless Home Security and Java

Java Programming Notes # 734

- Preface
- Preview
- Discussion and Sample Code
- The Sockets15 Class
- The Sockets14 Class
- The Sockets10 Class
- The Sockets11 Class
- The Sockets11a Class
- The Sockets12 Class
- The Sockets13 Class
- Run the Programs
- Summary
- Good Network Practice
- References
- Complete Program Listing

Preface

There are many good reasons for learning to write Java programs that communicate with network servers, particularly HTTP servers.

Cleaning up your bookmark library

For example, in the earlier lesson entitled Using Java to Clean Up Your Bookmark Library, I showed you how to write a Java program that will help you to identify the broken links in your bookmark (Favorites) library so that you can either delete or repair them. Basically, the program in that earlier lesson cycles through your bookmark library, attempting to contact the servers listed there to download the headers for the specified resources. It then interprets those headers, providing you with information that you can use to clean up the library.

Setting the encryption key in your wireless router

In this lesson, I will show you how to write a Java program that makes it very easy to modify the encryption key in your wireless router, either manually or on an automated, scheduled basis. The premise is that if it is easy for you to change the encryption key, you may be inclined to change it more frequently. If you change the encryption key more frequently, this may (and I emphasize the word may) cause your home or small office wireless network to be more secure. (On the other hand, because I am not a security expert and don't have a full understanding of all of the ramifications of wireless network security, it is entirely possible that this may make your wireless network less secure. Please pay careful attention to the disclaimer that follows.)

What can you expect from this lesson?
My goal in this lesson is to teach you how to write HTTP client programs in Java, which communicate with HTTP servers, and which deal with the following HTTP/1.0 and/or HTTP/1.1 features:

- Standard basic authentication
- Nonstandard authentication
- Keep-alive features
- GET and HEAD methods with query strings
- POST methods with formatted message bodies

Regardless of whether or not you elect to use the program for setting the encryption key on your wireless router, the lesson should still teach you quite a lot about writing Java programs to communicate with HTTP servers.

Seven sample programs

I will provide and explain seven sample programs, each designed to illustrate certain HTTP programming concepts. One of the programs is designed to run against an HTTP server on the World Wide Web. Five are designed to run against the HTTP server that is built into a typical wireless router. (Such HTTP servers are used by the router administrator to configure and manage the router.) The seventh is a stand-alone program that doesn't communicate with a server at all.

Packet sniffing

Along the way, I will teach you a little about packet sniffing and show you how to eavesdrop on conversations between HTTP servers and HTTP clients. In particular, I recommend that you review the lessons referred to in the References section in preparation for understanding the material in this lesson.

Preview

Typically, in tutorial lessons that involve one or two large programs, this is the place where I explain in detail what you can expect to find in each of the programs. However, this lesson will explain seven relatively small programs. To keep the level of confusion down, it will probably be better for me to defer the detailed technical discussion until we get to the sections where I explain the code for the programs.
Topics covered by the programs

The following list provides a very brief summary description of each of the seven programs:

- Sockets15: Illustrates general communication between an HTTP client and an HTTP/1.1 server, which doesn't require authentication. Also shows how to cause the server to forego keep-alive and to close the connection at the end of each response by the server.
- Sockets14: Illustrates basic authentication using a Linksys WRT54G wireless router as the experimental platform.
- Sockets10: Shows how to automatically install a new WEP encryption key on a Linksys WRT54G wireless router requiring basic authentication.
- Sockets11: Shows how to automatically install a new WPA or WPA2 shared encryption key on two different versions of a Linksys WRT54G wireless router, one that supports WPA2, and one that does not support WPA2 but does support WPA.
- Sockets11A: A stand-alone, non-network program that generates and displays the same encryption key generated by Sockets11 for any particular data.
- Sockets12: Illustrates the authentication methodology on a Belkin 54G wireless router using the custom scheme provided by that router for that purpose. Note that this is completely different from the basic authentication methodology used on the Linksys WRT54G router.
- Sockets13: Shows how to automatically install a new WEP encryption key on a Belkin 54G wireless router requiring custom authentication.

Packet sniffing

Sprinkled throughout the discussions of these programs will be a discussion of the use of the Ethereal Network Protocol Analyzer program to discover exactly what is required to automate the normally manual process of installing encryption keys in these wireless routers.

Discussion and Sample Code

I will discuss the programs in fragments in the sections that follow.

The Sockets15 Class

A complete listing of this program is provided in Listing 49 near the end of the lesson.
The program named Sockets15 illustrates general communication between a program and an HTTP/1.1 server, which doesn't require authentication. It also illustrates how to cause the server to forego the keep-alive feature and to close the connection at the end of each response message. In operation, this program attempts to get the headers for two different resources from the same server. The server that was used for the testing of this program normally implements keep-alive by default. Although the use of the keep-alive feature can improve network efficiency, programming an HTTP client program to support the keep-alive feature can be a very difficult task. Therefore, this program instructs the server to forego the keep-alive feature and to close the connection following the transmission of the response to each client request. Sample output from the program is shown in the comments at the beginning of the program in Listing 49. In addition, I will show you portions of that output in the discussion that follows.

Will discuss in fragments

As is my custom, I will explain this program by discussing important parts of the program in fragments. The first such fragment is shown in Listing 1. Listing 1 shows the beginning of the class including the declaration of three instance variables and the initialization of one of them. The instance variable named server is initialized to contain the name of the server that I used to test this program. However, you could test the program against any HTTP/1.1 server that doesn't require authentication, and which implements keep-alive by default.

What is keep-alive anyway?

The original HTTP protocol was a simple request/response protocol. Using this simple protocol, the following steps occur during each request/response cycle:

- The client establishes a socket connection with the server.
- The client sends a request for a resource to the server.
- The server either delivers the resource back to the client or sends an error message.
- The server disconnects.

While this is a very simple protocol to implement, it is also slow. Often, more time is required to make and break the connection than is required to exchange the request and the response.

The protocol was enhanced

As a result, somewhere along the way, the protocol was enhanced to make it possible for the client to send a sequence of requests to the server during a single connection, and for the server to send a sequence of matching responses back to the client during the same connection. This is generally referred to as keep-alive, and I will discuss some of the technical manifestations of the feature later.

Examples of complexity

While the advent of keep-alive can greatly improve the efficiency of the communications between the client and the server, it also causes the client program to be considerably more complex. For example, the client may send a request for resources A and B to the server before learning from the server that resource A is not available but resource B is available and is being delivered. However, resource B may be of no value to the client without resource A. In addition, there is no guarantee that the server will support keep-alive over the long haul. The server may simply decide on its own to switch from keep-alive operation to the older request/response format after the client has already sent a long sequence of requests. The client program must be able to take issues like that into account and to make the necessary adjustments.

Can disable keep-alive

As a result, the HTTP/1.1 protocol makes it possible for client programs that don't have the ability to support keep-alive to instruct the server to revert back to the simple request/response form of the protocol by including a specific instruction in the request headers. As you will see shortly, this simple HTTP client program does not support keep-alive operation, and it does provide such an instruction to the server.
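The simple request/response cycle described above can be exercised end to end in a self-contained sketch. A tiny in-process "server" stands in for a real HTTP server so the example can run anywhere; the class and method names here are my own, not code from the lesson's listings.

```java
import java.io.*;
import java.net.*;

// One complete request/response cycle: connect, send a request, read
// the entire response until the server disconnects, then return it.
public class RequestResponseCycle {

    static String runOneCycle(int port, String request) throws IOException {
        try (Socket socket = new Socket("localhost", port);   // 1. connect
             PrintStream out = new PrintStream(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                 new InputStreamReader(socket.getInputStream()))) {
            out.print(request);                               // 2. send request
            StringBuilder response = new StringBuilder();
            String line;
            while ((line = in.readLine()) != null) {          // 3. read response
                response.append(line).append("\n");
            }                                                 // 4. server disconnected
            return response.toString();
        }
    }

    public static void main(String[] args) throws Exception {
        // Canned single-cycle server: accept one connection, send a
        // minimal response, then close (step 4 of the cycle).
        ServerSocket server = new ServerSocket(0);
        Thread t = new Thread(() -> {
            try (Socket client = server.accept();
                 PrintStream out = new PrintStream(client.getOutputStream(), true)) {
                out.print("HTTP/1.0 200 OK\r\n\r\n");
            } catch (IOException ignored) {}
        });
        t.start();
        System.out.print(runOneCycle(server.getLocalPort(),
                                     "HEAD / HTTP/1.0\r\n\r\n"));
        t.join();
        server.close();
    }
}
```

Note that the client learns the response is complete only because the server closes the connection, which is exactly the behavior the Sockets15 program will ask a real server to exhibit.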
The main method

The main method is shown in its entirety in Listing 2. The main method simply instantiates an object of the class and invokes the instance method named doIt on that object.

The method named doIt

The method named doIt is the primary processing method for the class named Sockets15. (In fact, the primary processing method for each of the seven sample programs discussed in this lesson will be named doIt.) The method named doIt for the class named Sockets15 begins in Listing 3. The code in Listing 3 invokes the getSocket method to get a socket, which is connected to the specified server, on port 80. Port 80 is the standard HTTP port. The getSocket method also gets input and output streams on the socket that can be used to communicate with the server. References to the Socket object as well as the input and output stream objects are returned encapsulated in an object of the simple wrapper class named SocketWrapper.

The SocketWrapper class

The SocketWrapper class is shown in its entirety in Listing 4. An object of this class is used to encapsulate the references to the three objects described above, making it easy to return three values from the getSocket method. An object of the class consists of nothing more than three encapsulated instance variables, which will be populated with references to the three objects. The getSocket method begins in Listing 5. The purpose of the getSocket method is to get a new Socket object connected to a server on port 80 along with input and output stream objects that can be used to communicate with the server by way of the connected socket. References to the Socket object and the two stream objects are returned in a wrapper object of type SocketWrapper. (The getSocket method will be used in six of the programs provided in this lesson. Therefore, I will explain it only once in conjunction with the class named Sockets15 and will simply refer to it in the discussions of the other five programs.)
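A minimal sketch of the SocketWrapper idea and the getSocket method may help before we walk through the listings. The field names below are my guesses rather than the lesson's actual Listing 4 and Listing 5 code, and I have parameterized the port for flexibility (the lesson's version hardcodes the standard HTTP port, 80).

```java
import java.io.*;
import java.net.*;

// Nothing more than three instance variables, used to return three
// values from getSocket at once.
class SocketWrapper {
    Socket socket;       // the connected socket
    BufferedReader in;   // for reading response lines from the server
    PrintStream out;     // eight-bit byte output stream, autoflush on
}

public class GetSocketSketch {
    static SocketWrapper getSocket(String server, int port) throws IOException {
        SocketWrapper wrapper = new SocketWrapper();
        // The Socket constructor blocks until the connection is
        // established, and throws an exception if it cannot be.
        wrapper.socket = new Socket(server, port);
        wrapper.in = new BufferedReader(
            new InputStreamReader(wrapper.socket.getInputStream()));
        // true => autoflush, so each request is pushed out immediately.
        wrapper.out = new PrintStream(wrapper.socket.getOutputStream(), true);
        return wrapper;
    }
}
```

The wrapper is just a convenience for returning the socket and both streams as a unit; the caller uses the streams and eventually closes the socket.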
Port 80

Listing 5 begins by declaring a variable named port and initializing it with the value of the standard HTTP port. (Much of the background material that you will need to know in order to understand the programs in this lesson, such as the standard HTTP port, was provided in my earlier lessons. This includes lessons that are referred to in the References section.) Then the code in Listing 5 instantiates a new empty object of the SocketWrapper class, to be populated later with references to the three objects.

A new Socket object

Finally, Listing 5 instantiates a new Socket object that is connected to the specified server on port 80. When the constructor returns, the connection will have been established. In the event that it is not possible to establish the connection, the constructor will throw an exception. As you will see later, this will cause the program to abort, printing a stack trace in the process.

Get an input stream

Listing 6 gets an input stream object used later for reading the response sent by the server. The reference to the stream object is saved in the wrapper object of type SocketWrapper. The syntax for creating input and output streams in Java is rather complicated. You can read more about that syntax in my earlier lessons referred to in the References section.

Get an output stream

Listing 7 gets an eight-bit byte output stream object on the socket (which will autoflush), and saves the reference to the stream object in the wrapper object.

What does Sun have to say?

Sun has this to say about the PrintStream class: "All characters printed by a PrintStream are converted into bytes using the platform's default character encoding. The PrintWriter class should be used in situations that require writing characters rather than bytes." I initially tried using an output stream of the PrintWriter class (which delivers sixteen-bit character data instead of eight-bit byte data).
When I began testing one of the later programs that uses the POST method to post data to the server, I discovered that the program did not work properly using the PrintWriter class. In order to cause the POST method to work properly, it was necessary for me to revert back to the PrintStream class that delivers eight-bit byte data instead of 16-bit character data. Listing 7 returns a reference to the populated SocketWrapper object, which signals the end of the getSocket method.

Returning to the doIt method ...

Returning now to the discussion of the doIt method, Listing 8 displays a message informing the user that the program is going to invoke the HEAD method on the server.

Invoke the HEAD method on the server

Listing 9 uses the output stream to invoke the HEAD method on the server.

The HEAD method

The HEAD method is one of several methods that a client can invoke on a server. This lesson will discuss the following methods:

- HEAD
- GET

HTTP transactions

According to HTTP Made Really Easy, "Like most network protocols, HTTP uses the client-server model: An HTTP client opens a connection and sends a request message to an HTTP server; the server then returns a response message, usually containing the resource that was requested. After delivering the response, the server closes the connection (making HTTP a stateless protocol, i.e. not maintaining any connection information between transactions)." Note that the above statement doesn't necessarily take the keep-alive feature that was introduced in version HTTP/1.1 into account. That's OK for us, however, because we are going to prevent the use of the keep-alive feature in these programs.
Format of request and response messages

According to the same source, the request and response messages are similar, consisting of:

- An initial line
- Zero or more header lines (at least one header line is required in an HTTP/1.1 request message)
- A blank line consisting of a carriage return (0x0D) followed by a line feed (0x0A), often referred to as a CRLF, on a line by itself
- An optional message body (such as query data or query output)

Because this lesson is mostly concerned with the creation and transmission of request messages (as opposed to the creation or interpretation of response messages), I will concentrate mostly on the format of requests. (On the other hand, the earlier lesson entitled Using Java to Clean Up Your Bookmark Library is very concerned with the interpretation of response messages.)

The initial request line

The initial request line is very similar for HEAD, GET, and POST request messages. It has three parts as shown in Figure 1.

The method

The first part of the initial request line is the name of the method to be invoked on the server (HEAD in the case of Figure 1). This part is shown in black in Figure 1. Method names are always specified in uppercase.

The requested resource

The second part of the initial request line (shown in red in Figure 1) consists of a forward slash character followed by the path to the requested resource. As I understand it, for the HEAD and GET methods, the requested resource specifies data that is to be sent by the server to the client. For the POST method, the requested resource names a script or program that is to be executed on the server to consume the POST data that will be sent later. The GET method attempts to send the entire requested resource back to the client. The HEAD method attempts to send only the headers associated with the requested resource to the client. (I will have more to say about headers later.)
The HTTP version

The third part of the initial request line (shown in blue in Figure 1) specifies the HTTP version in the format shown (such as HTTP/1.0 or HTTP/1.1). This part is always in uppercase characters.

The case in point

Referring back to Listing 9, the first line requests that the server execute the HEAD method using HTTP version 1.1, applying that method to a specified resource whose name is blank. The use of a blank resource (a forward slash followed by a blank character) requests that the server use whichever resource it considers to be the default. (In the early days of the web, the default resource was often a file named index.html, but that seems to be less the case now. In many cases, the default resource is the so-called "home page" for the server.) Because the requested method in Listing 9 is HEAD, the client is requesting that only the headers for the requested resource be delivered and not the entire resource.

The request headers

The initial request line in Listing 9 is followed by two header lines, which in turn are followed by the required blank line (CRLF). Many different header lines are possible. For example, Figure 2 shows a request sent by a Firefox browser to.

Captured using the Ethereal program

The material in Figure 2 was captured using the Ethereal program. (Note that line breaks were manually inserted into Figure 2 to force the material to fit into this narrow publication format.) I'm not going to try to teach you how to use the Ethereal program. Rather, I will simply refer you to the Ethereal User's Guide. There are many ways to view data that has been captured. As a hint, I will tell you that it is often convenient to save captured data to a text file and then to view the text file using a typical text editor program such as Notepad. That is how I was able to reproduce the material shown in Figure 2.

Getting back to header lines ...

Header lines are intended to provide information about the request or the response.
Generally, they are formatted as follows:

- Each header resides on a separate line terminated by a CRLF.
- The information is provided in a name:value format with the name and the value separated by a colon.
- The header name is not case sensitive.
- Any number of space or tab characters may follow the colon.
- A header may be continued onto the next line by beginning the continuation line with a space or a tab character.

Once again, according to HTTP Made Really Easy: "HTTP 1.0 defines 16 headers, though none are required. HTTP 1.1 defines 46 headers, and one (Host:) is required in requests." Thus, the first header line in Listing 9 and Figure 2 is the required Host: header line, specifying the name of the server to which the request is directed.

Disabling keep-alive

As mentioned earlier, this program does not support the keep-alive feature that is (apparently) the default operating mode for HTTP/1.1 servers. Therefore, the second header line in Listing 9, beginning with Connection: is a request (or instruction) to the server to close the connection following the response to the request. This makes it possible for the program to operate in a simple request/response ping-pong mode, which is much less complex than trying to support the keep-alive operating mode.

Supporting keep-alive

The Firefox browser, on the other hand, is happy to support keep-alive. Figure 2 shows two different header lines associated with the keep-alive feature. I'm confident that if you are interested, you can find more information about those header lines using Google.

The required blank line

The second header line in Listing 9 is followed by the required blank line consisting of a CRLF. (Invoking the println method with no parameters produces the correct CRLF byte stream on a Windows system. I don't know if that is the case for all operating systems. If not, you can get the same result by invoking the print method and printing the hexadecimal character values 0x0D and 0x0A.)
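A request like the one in Listing 9 can be assembled explicitly, using literal \r\n pairs so the byte stream is correct on any operating system (avoiding println's platform-dependent line terminator). The class and method names here are mine, and the server name is just a placeholder.

```java
// Builds a HEAD request: initial request line, the required Host:
// header, a Connection: close header to disable keep-alive, and the
// required blank line, each terminated by an explicit CRLF.
public class HeadRequestBuilder {
    static final String CRLF = "\r\n";   // 0x0D followed by 0x0A

    static String buildHeadRequest(String server, String resource) {
        return "HEAD " + resource + " HTTP/1.1" + CRLF   // initial request line
             + "Host: " + server + CRLF                  // required in HTTP/1.1
             + "Connection: close" + CRLF                // disable keep-alive
             + CRLF;                                     // required blank line
    }

    public static void main(String[] args) {
        System.out.print(buildHeadRequest("www.example.com", "/"));
    }
}
```

Writing the whole request through an autoflushing PrintStream's print method then delivers exactly these bytes to the server.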
The captured request

Using Ethereal to capture the conversation (and deleting the non-readable characters involved in the conversation) produces the request text shown in Figure 3. This is the request that was sent from the client (this program) to the server. (Figure 3 includes the required blank line.)

The captured response

The server response to the above request is shown in Figure 4. (A line break was manually inserted into Figure 4 to force the material to fit into this narrow publication format.) Perhaps the most interesting parts of the response are the initial response line and the header line that begins with the word Connection, both of which are highlighted in boldface in Figure 4.

HTTP/1.1 200 OK

According to HTTP Made Really Easy, the status code value of 200 in the initial response line tells us: "The request succeeded, and the resulting resource (e.g. file or script output) is returned in the message body." Of course, since this is a HEAD request, there is no output other than the header lines. Had the request been a GET request, however, there would most likely have been quite a lot of output.

Connection: close

The header line in the response in Figure 4 that begins with the word Connection tells us that the server is acknowledging our special request and that the connection will be closed at the end of the response. This tells us that in order to send another request to the server, we will need to get another socket object connected to the server.

Returning to the doIt method ...

The next statement in the doIt method, shown in Listing 10, invokes the local method named print to display up to fifteen lines of server response. The print method also gets and returns information about the closing of the connection. That information is saved in the variable named closedFlag to be used later.

The print method

The print method is shown in its entirety in Listing 11.
The purpose of this method is to get and display a specified number of lines of the server response. In the process, the method checks to determine if the server closed the connection at the end of its response. The method returns a boolean value indicating whether or not the server closed the connection. The print method is straightforward and shouldn't require further explanation.

Returning once more to the doIt method ...

Listing 12 begins by displaying a message. Then the code in Listing 12 confirms that the connection was properly closed by the server at the end of the previous response.

- If the connection was properly closed, the code in Listing 12 gets a new Socket connection with the server.
- If the connection was not closed, the code in Listing 12 displays an error message and terminates the program.

The comments at the beginning of Listing 49 show the result of both possibilities.

Invoke the HEAD method for a different resource

Listing 13 invokes the HEAD method on the server to request that the headers be provided for a different resource. In this case, the resource is not blank. In other words, the client program is not requesting the default resource from the server. Rather, the client program is requesting the headers for the file specified by the following path: /baldwin/index.html

The captured request

The captured request information for this part of the conversation is shown in Figure 5. Note that I added a dummy header in Figure 5 that I didn't put in Figure 3. As far as I know, unlike the other two header lines in Figure 5, the User-Agent header line is for information purposes only, and the server will take no special action based on that header line. Thus, it is really of no consequence, except perhaps that it may irritate the webmaster at the server end if she is keeping statistics on the different clients that communicate with her server.

What about the other two header lines?
You already know about the purpose and the result of the Connection header line, so what about the Host header line? As I indicated earlier, this is the only header line that is required for HTTP/1.1. (No header lines are required for HTTP/1.0.) Figure 6 shows the response headers that would be received from this particular server if the Host header line is omitted. The most important item in Figure 6 is the boldface first line indicating unhappiness on the part of the server. (Once again, a line break was manually inserted into Figure 6 to force the material to fit into this narrow publication format.)

A captured response to a correct request

Figure 7 shows the server response to the correct client request shown in Figure 5.

Additional information

Figure 7 shows several information items (highlighted in boldface) regarding this specific requested resource that were not included in the response of Figure 4 when the request was for the default resource. Often, this information (particularly the date on which the resource was last modified and possibly the length of the resource) is what a client that invokes the HEAD method is looking for. For example, the client may compare that date with the date on a local copy of the resource and then invoke the GET method to download the entire resource if the version on the server is newer than the local copy.

Close the socket

Although this is probably unnecessary and redundant, the code in Listing 14 closes the socket before ending the program just in case it hasn't been closed by the server. And that brings us to the end of the program named Sockets15. Hopefully, you will have learned something new about HTTP communications in this section. The discussions of the other programs in the following sections will build upon this material.

The Sockets14 Class

A complete listing of this class is provided in Listing 50 near the end of the lesson. The purpose of the class is to illustrate basic authentication.
In order to avoid dealing with the complexities of keep-alive, the program asks the server to close the connection at the end of each response. The class was tested and determined to authenticate properly with J2SE 5.0, WinXP, and a very recent Linksys WRT54G wireless router that supports WEP, WPA, and WPA2 encryption. (This version of the router can be identified by the inclusion of SES on the front panel.) The class was also determined to authenticate properly with a recent Linksys WRT54G wireless router that supports WEP and WPA encryption, but does not support WPA2 encryption. (Note that both of these wireless routers have the same model number, WRT54G.)

The class definition for Sockets14

The class definition begins in Listing 15. Note that in this case, the server is specified by its IP address rather than by a domain name. (I discuss the difference between the two in some of the lessons referred to in the References section.) Listing 15 also shows the main method, which instantiates an object of the class and invokes the method named doIt on that object.

The doIt method

This doIt method will attempt a login on the administrator panel for the router using the administrator's user name and password. The doIt method begins in Listing 16. The constant named adminUserPwd is initialized to contain the administrator password expressed in base64 format. The use of base64 encoding is a requirement of standard HTTP Basic Authentication. (See my earlier lesson entitled Understanding Base64 Data for an explanation of base64 encoding. Also see for an online base64 encoder/decoder that you can use to easily encode and decode string data.)

What does OmFkbWlu mean?

The base64 value of OmFkbWlu given in Listing 16 is the value required for the default administrator username and password for a Linksys WRT54G wireless router. (For example, go to and use the decoder that you will find there to decode the base64 value of OmFkbWlu and see what you get. You should get :admin.)
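You can also check this encoding directly in Java. The java.util.Base64 class used here appeared in Java 8, well after this lesson's J2SE 5.0 test environment, so treat it as a modern convenience rather than what the lesson's own code does.

```java
import java.util.Base64;
import java.nio.charset.StandardCharsets;

// Round-trips the default Linksys credentials through base64:
// a blank username, a colon, and the password "admin".
public class Base64Check {
    public static void main(String[] args) {
        String credentials = ":admin";   // blank username, colon, default password
        String encoded = Base64.getEncoder()
            .encodeToString(credentials.getBytes(StandardCharsets.US_ASCII));
        System.out.println(encoded);     // OmFkbWlu

        byte[] decoded = Base64.getDecoder().decode("OmFkbWlu");
        System.out.println(new String(decoded, StandardCharsets.US_ASCII)); // :admin
    }
}
```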
For a Linksys WRT54G wireless router, the default username is blank and the default password is admin. Basic authentication requires the base64 value to be the encoded version of the concatenation of the username and the password separated by a colon. Since the default username is blank and the default password is admin, the base64 value given in Listing 16 is the encoded version of :admin. (Note the leading colon.)

Get a Socket object

The statement in Listing 17 invokes the getSocket method to get a Socket object connected to the specified server on port 80. This is the same getSocket method that was discussed earlier. The statement in Listing 17 also gets input and output streams that can be used to communicate with the server.

Send initial GET command

The code in Listing 18 displays a message on the screen and then invokes the GET method on the server, requesting that the default resource be downloaded in its entirety. (Recall that the main difference between the HEAD method and the GET method is that invocation of the HEAD method requests only the headers for the specified resource whereas invocation of the GET method requests the entire specified resource.)

Three header lines

The statement that invokes the GET method in Listing 18 is followed by the same three header lines and the blank line that you saw in Listing 13 (except that the dummy value given to the User-Agent is different). (If the default resource for the server were not a protected resource, the server would respond by sending that resource to the client. However, that is not the case here. The administrator panel for the Linksys router is a protected resource.)

The captured request

The code in Listing 18 resulted in the request shown in Figure 8, as captured by the Ethereal program.

Display the response

Listing 19 invokes the print method to display up to fifteen lines of the server response. The print method in this class is very similar to the version that was discussed earlier.
The only real difference between the two is that the earlier version monitored the response lines to determine if the server closed the connection following the response. This version does that also. In addition, this version also monitors the response lines to determine if authentication is required.

New code in the print method

You can view the entire print method in Listing 50. Listing 20 shows the code in the print method that monitors for the requirement to authenticate the user. As you will see shortly, if the server responds with a status code value of 401, that means that the requested resource is protected and the user must provide proper authentication credentials in order to gain access to the resource. The code in Listing 20 simply monitors for the substring 401 in the response. (Upon reflection during the writing of this lesson, I realized that it would be possible for the server to deliver an unprotected resource containing the substring 401. That would cause the program to falsely conclude that authentication is required. I should have confirmed that the line containing the substring 401 was the first line of the response before reaching that conclusion. This is a bug in the program.)

Another difference

Another difference between this version of the print method and the earlier version has to do with how the results are returned. In the earlier version, only one return value was required. In this version, two return values are required:

- One value for authentication information
- One value for connection closing status

Therefore, in this version, the two values are encapsulated in an object of the simple StateWrapper class and a reference to that object is returned.

The response

Figure 9 shows the text returned by the server in the Linksys WRT54G wireless router when a request is made for the default resource as shown in Figure 8.
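A sketch of the two-flag version of the print method follows, including the fix suggested above for the false-positive bug: test the status code on the first response line only, instead of scanning every line for the substring 401. The class and field names here are my inventions, not the code of Listing 50, and detecting the closed connection via the Connection: close header is one plausible approach.

```java
import java.io.*;

// Two return values, encapsulated so a single reference can be returned.
class StateWrapper {
    boolean authRequired;     // server answered with status code 401
    boolean connectionClosed; // server sent a Connection: close header
}

public class ResponseScanner {
    static StateWrapper print(BufferedReader in, int maxLines) throws IOException {
        StateWrapper state = new StateWrapper();
        String line;
        int count = 0;
        boolean firstLine = true;
        while (count < maxLines && (line = in.readLine()) != null) {
            System.out.println(line);
            if (firstLine) {
                // A status line looks like "HTTP/1.0 401 Unauthorized";
                // the code is the second whitespace-delimited field.
                String[] parts = line.split(" ");
                state.authRequired = parts.length > 1 && parts[1].equals("401");
                firstLine = false;
            }
            if (line.toLowerCase().startsWith("connection: close")) {
                state.connectionClosed = true;
            }
            count++;
        }
        return state;
    }
}
```

Because only the first line is examined for the status code, a body line that merely contains "401" no longer triggers a false authentication flag.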
HTTP/1.0 401 Unauthorized Regarding the 401 Unauthorized response, one resource has this to say: "The request requires user authentication. The response must include a WWW-Authenticate header field ... containing a challenge applicable to the requested resource. The client may repeat the request with a suitable Authorization header field ..." Thus, the inclusion of the 401 status code indicates that the user must be authenticated before gaining access to the protected resource. This would cause a typical browser to display a form into which the user could enter the username and the password. The form would also provide a Submit button, by which the user could cause the request to be retransmitted to the server with authentication credentials included in the request. What about the Basic realm? Here is some of what the folks at Apache have to say about the system in general and the realm in particular: "When a ... resource has been protected using basic authentication, Apache sends a 401 Authentication Required header with the response ... to notify the client that user credentials must be supplied ... ... the client's browser, ... will ask the user to supply a username and password to be sent to the server. ... If the username is in the approved list, and if the password supplied is correct, the resource will be returned to the client. ... each request will be treated in the same way, even though they are from the same client. ... every resource which is requested ... will have to supply authentication credentials ... to receive the resource. ... the browser takes care of the details ... you only have to type in your username and password one time per browser session... ... other information will be passed back to the client. ... [the server] sends a name which is ... the protected area of the web site. This is called the realm.... The client browser caches the username and password that you supplied, and stores it along with the authentication realm ... 
if other resources are requested from the same realm, the same username and password can be returned to authenticate that request without requiring the user to type them in again." A very important conclusion This leads us to an important conclusion. Since the Linksys WRT54G router requires authentication simply to view the default page, this tells us that we will be required to send authentication credentials with every request that we send to the Linksys router for any resource whatsoever. (Later you will see that this is not the case for a Belkin 54G router. The server in the Belkin 54G router will deliver the default page, containing lots of routine information, without a requirement for authentication. It is only when the user needs to make changes to the router configuration that authentication is required.) A new request with authentication On the strength of web research and packet capture using the Ethereal program, I concluded that in order to access the default resource on the Linksys WRT54G router, I would need to send the request shown in the captured data in Figure 10. Note the inclusion of the boldface header that begins with the word Authorization, followed by a space and the word Basic, followed in turn by another space and the base64-encoded user name and password. Correct Authentication Request Header? According to this web resource, "Upon receipt of the server's 401 response, your web browser prompts you for the username and password associated with that realm. The Authentication header of your browser's follow-up request again contains the token "Basic" and the base64-encoded concatenation of the username, a colon, and the password:

Authorization: Basic [base64-username:password]

The server base64-decodes the credentials and compares them against his username-password database. If it finds a match, you are in."
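The two mechanics just described can be sketched in a few lines of Java: encoding the credentials for the Authorization header, and recognizing the 401 status. Two caveats: java.util.Base64 requires Java 8 or later (the code in this lesson targets J2SE 5.0, which would need a different encoder), and the status test below deliberately examines only the status line, which avoids the false-positive bug acknowledged earlier.

```java
import java.util.Base64;

public class BasicAuth {

    // Encode username:password for an HTTP Basic Authorization header.
    public static String token(String username, String password) {
        String credentials = username + ":" + password;
        return Base64.getEncoder().encodeToString(credentials.getBytes());
    }

    // Test only the HTTP status line for the 401 code, so a body line
    // that merely contains the substring 401 cannot trigger a false
    // positive.
    public static boolean isUnauthorized(String statusLine) {
        return statusLine.startsWith("HTTP/") && statusLine.contains(" 401 ");
    }

    public static void main(String[] args) {
        // A blank username and the default password admin produce the
        // value used throughout this lesson.
        System.out.println("Authorization: Basic " + token("", "admin"));
        // prints: Authorization: Basic OmFkbWlu
    }
}
```

Running the sketch reproduces the OmFkbWlu value shown in Listing 16, which confirms the encoding recipe.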
The response Sending the request with the authentication credentials shown in Figure 10 caused the server to respond as shown in Figure 11. Figure 11 shows only the first fifteen lines of the response with the line numbers being inserted by the program. Also, the long lines were truncated to force the material to fit into this narrow publication format. As you can see, lines 11 through 15 contain the beginning of the HTML code describing the default home page for the Linksys router. What about HTTP/1.0 200 OK? According to this web resource, the 200 OK status code means that the request has succeeded, and that the information returned with the response depends on the method used in the request; for a GET, an entity corresponding to the requested resource is sent in the response. Since the GET method was used in Figure 10, and a specific resource was not specified, the information returned with the response was an entity corresponding to the default home page for the server. If you were to view this page with a regular browser, you would see that it is the main page used by the administrator for setting configuration parameters for the wireless router. The remaining code Listing 21 shows the remaining code in the doIt method. Note in particular the boldface code that causes the authentication credentials to be sent with a new request for the default server resource. Knowing what you know now, there should be no mystery as to how the code shown in Listing 21 produced the results shown in Figure 10 and Figure 11. All is not perfect however Although this program properly authenticates with the server for two different relatively new versions of the Linksys WRT54G wireless router, despite many hours spent doing web research and poring over Ethereal capture dumps, I was unable to cause the program to properly authenticate with two other Linksys routers. One of the problem routers was an old Linksys BEFW11S4 802.11B wireless router. The other problem router was an old Linksys BEFSR41 cable router. The same symptoms were exhibited by both of the older Linksys routers. They wouldn't close the connection following the 401 message response even when requested to do so.
Then they ignored a following GET command containing the authentication request header. Old routers work OK with a standard browser Both of the older routers authenticate properly when operated using a Firefox browser or an Internet Explorer browser. I finally gave up and concluded that there is something that the browsers know how to do to make the older routers happy that I was unable to identify from either the web research or the Ethereal dumps. You may need to customize the programs The important message here is that unless you are using a Linksys WRT54G router, you shouldn't necessarily expect the programs in this lesson to work perfectly correctly with your router. Depending on the specifics of your router, you may have to download the Ethereal program (or some similar program) and examine the dumps of captured conversations between your browser and your router to determine how to customize these programs to make them compatible with your router. And that is probably more than you ever wanted to know about HTTP basic authentication. The Sockets10 Class A complete listing of this class is provided in Listing 51 near the end of this lesson. This class can be used to connect to a Linksys WRT54G wireless router (with or without SES/WPA2) to change the WEP key. The program installs a ten-character hexadecimal WEP key in the router. It would be a simple matter to modify the program to cause it to install a 26-character WEP key instead. Note that for simplicity, this class uses some deprecated methods. Note also that for simplicity, it asks the server to forego the use of keep-alive. The new key is displayed on the screen along with some other information. When the program terminates successfully, the new WEP key shown on the screen has been installed in the wireless router. This class was tested with J2SE 5.0, WinXP, and two different versions of the Linksys WRT54G wireless router. What have we learned so far? 
We have learned that we can disable keep-alive operation on the server by including the following line in the request headers: Connection: close Authentication We have learned that we can satisfy the authentication requirements of the Linksys server by including the following line in the request headers: Authorization: Basic OmFkbWlu where OmFkbWlu is the base64 encoded version of the concatenated username and password with the two being separated by a colon. The default username and password are :admin where the default username is blank. (Hopefully you have changed the default username and password on your wireless router. You can encode your own username and password with any base64 encoder and use the result to replace the encoded value shown above.) Each individual request must be authenticated We have learned that with basic authentication, each request must be separately authenticated because the HTTP protocol is a stateless protocol. As a result, if we already know that the resource that we are requesting is protected using basic authentication, then there is no need for us to go through the "401 Unauthorized" drill just to be notified by the server that the resource is protected. Rather, we can skip all of that and start the process by requesting the resource of interest and including the authentication credentials in the request headers.
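Putting those two headers together, the complete pre-authenticated GET request can be assembled as shown below. This is a sketch, not the code from the listings: the 192.168.1.1 address is the common Linksys default and is an assumption here, so substitute the address and credentials for your own router.

```java
public class AuthGetRequest {

    // Assemble the full authenticated request summarized above:
    // the GET line, the Host header, Connection: close to forego
    // keep-alive, the Basic credentials, and the blank line that
    // terminates the headers.
    public static String build(String host, String base64Credentials) {
        return "GET / HTTP/1.1\r\n"
             + "Host: " + host + "\r\n"
             + "Connection: close\r\n"
             + "Authorization: Basic " + base64Credentials + "\r\n"
             + "\r\n";
    }

    public static void main(String[] args) {
        // Default Linksys address and credentials -- assumptions.
        System.out.print(build("192.168.1.1", "OmFkbWlu"));
    }
}
```

Writing the returned string to the socket's output stream sends the request in one shot, with no "401 Unauthorized" round trip required.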
The resource of interest By paying careful attention to what was going on when I manually set the WEP password using a browser, and by using the Ethereal program to confirm my observations, I determined that the resource of interest for setting the WEP password in a Linksys WRT54G wireless router is: /apply.cgi Must invoke the POST method Also, by using the Ethereal program to monitor the conversation when I manually set the WEP key, I determined that the method invocation required to set the key is a POST to that resource. The body of the POST message Finally, by using the Ethereal program to intercept, capture, and analyze the conversation (and this is where the Ethereal program is indispensable), I concluded that the syntax of the body of the POST message used to set the WEP key is as shown in Figure 12. (Line breaks were manually inserted into Figure 12 to force the material to fit into this narrow publication format.) The new WEP key is shown in boldface in Figure 12. Figuring it all out Perhaps it should have been possible for me to analyze the source code for the web page containing the form that is used to change the WEP key and to determine the format shown in Figure 12 without the help of the Ethereal program. However, I have neither the patience nor the HTML skills to pull that off. Unless you have such patience and HTML skills, it will probably be necessary for you to use a similar program for analyzing your router if it is not a Linksys WRT54G router. Writing the program Once I realized that I knew all of the things that I have discussed above, it was a simple matter to write the Java program to automatically change the WEP key. A discussion of that program follows. The class definition The beginning of the class definition is shown in Listing 22. There is nothing new in Listing 22. It shows the declaration of a couple of instance variables and the main method. The doIt method The doIt method begins in Listing 23.
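Before walking through the listings, a sketch of how such a POST to /apply.cgi might be framed is shown below. Two details established later in this lesson are folded in: the Content-Length header, which the server appears to require, and the bare-LF terminator on the message body. The host, credentials, and body values are placeholders, not captured data.

```java
public class ApplyCgiPost {

    // Assemble a POST to /apply.cgi. The caller supplies the form-data
    // body (the Figure 12 syntax); this helper adds the request line,
    // the headers, and the trailing bare LF.
    public static String build(String host, String auth, String body) {
        String terminatedBody = body + "\n";   // bare LF, not CRLF
        return "POST /apply.cgi HTTP/1.1\r\n"
             + "Host: " + host + "\r\n"
             + "Connection: close\r\n"
             + "Authorization: Basic " + auth + "\r\n"
             // For the plain ASCII form data used here, the character
             // count equals the byte count, so length() is safe.
             + "Content-Length: " + terminatedBody.length() + "\r\n"
             + "\r\n"
             + terminatedBody;
    }
}
```

Computing Content-Length from the actual body, rather than hard-coding it, means the same helper keeps working if you later lengthen the key or change the form data.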
Get and display the new WEP key Listing 23 invokes the method named getWepKey to get a new ten-character hexadecimal WEP key based on the current date and a random number generator. The method named getWepKey This method generates a ten-character hexadecimal key based on the current date and a random number generator. The algorithm will generate the same key every time it is run on a given date, but will generate different keys on different dates. You can easily modify this algorithm to come up with a different recipe for the key. Also, you can easily modify this method to cause it to generate a 26-character hexadecimal key instead of a ten-character key. (If you do that, you will need to make some changes to the body of the POST message in the doIt method to cause that message to accommodate 26-character keys.) Note that for simplicity involving dates, this method uses some deprecated methods. The getWepKey method begins in Listing 24. Two secret values The code in Listing 24 declares and initializes the secret values for two constants that impact the overall security of the WEP key that is generated. If you use this class for your own purposes, you should change these two values. Get the current date Listing 25 begins by instantiating a Date object that encapsulates the current date and time according to the system clock. For this algorithm, we want to use the current date but we don't want to use the current time. In particular, if this algorithm is executed on different computers on the same date, we want the algorithm to produce the same key on both machines. We don't want that result to be influenced by the fact that the algorithm is executed at different times within the same date on the two machines. The code in Listing 25 manipulates the Date object, ending up with a Date object in which the time has been set to zero. Thus, the object contains the date only. 
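Taken together with the seeding, discarding, and hex-extraction steps described in the next several paragraphs, the getWepKey recipe can be sketched as shown below. The two secret constants are placeholders rather than the author's values, and java.util.Calendar is used in place of the deprecated Date methods mentioned above; otherwise the logic follows the recipe as described.

```java
import java.util.Calendar;
import java.util.Random;

public class WepKeyGen {

    // Placeholder secret values -- choose your own if you use this.
    private static final long BIAS = 987654321L;
    private static final int LIM = 50;

    // Generate the ten-character hexadecimal key for a given date.
    // Only the date (never the time) feeds the seed, so every run on
    // the same date yields the same key on every machine.
    public static String getWepKey(int year, int month, int day) {
        Calendar cal = Calendar.getInstance();
        cal.clear();                 // zero out all time fields
        cal.set(year, month, day);   // keep the date only
        Random gen = new Random(cal.getTimeInMillis() + BIAS);

        // Advance the sequence by discarding the first LIM values.
        for (int i = 0; i < LIM; i++) {
            gen.nextLong();
        }

        long val = gen.nextLong();
        // Set the sign bit so the value is negative, guaranteeing
        // sixteen hex digits with no leading zeros.
        val |= Long.MIN_VALUE;

        // Keep the first ten of the sixteen hexadecimal characters.
        return Long.toHexString(val).substring(0, 10);
    }

    public static void main(String[] args) {
        System.out.println(getWepKey(2024, Calendar.JANUARY, 15));
    }
}
```

Running the sketch twice on the same date prints the same ten hex characters both times, which is the property the whole scheme depends on.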
Instantiate a pseudorandom number generator An instance of the Random class is used to generate a stream of pseudorandom numbers. A seed is passed to the constructor for the class when the object is instantiated. If two instances of Random are created with the same seed, and the same sequence of method calls is made on each object, they will generate and return identical sequences of numbers. Conversely, if the two instances are created with different seeds, they will generate and return different sequences of numbers. An object of the class Random Listing 26 instantiates an object of the class Random using a seed that is based on the current date as modified by a secret value named bias known only to you. Unless an attacker knows the secret value, the probability is extremely low that he will be able to instantiate a Random object that generates and returns the same sequence of numbers as your object. On the other hand, if you install the algorithm using the same secret value on two or more machines and ascertain that the system clocks are showing the same dates on all of the machines, they will all produce the same sequence of numbers on any given date. Advance the random sequence Listing 27 advances through the random sequence of numbers by discarding the first n numbers, where n is another secret value named lim known only to you. Once again, if you install the algorithm on two or more machines as described above, and use the same secret value for n on each of the machines, the same quantity of random numbers will be discarded on all machines, causing all of the machines to be in synchronism once the numbers are discarded. (The use of two secret values is probably redundant. It may be just as secure to use only one secret value. It just feels better to use two of them.) Get a random number Finally, Listing 28 gets the actual random number that will be used to generate the hexadecimal key. The random number will be a 64-bit long value. 
However, it is possible that it may have leading bits with a value of zero. This is undesirable. Because the long value is maintained in two's complement format, one way to guarantee that there are no leading zero bits is to force the value to be negative. In this case, the first bit will always have a value of 1. Make it negative Listing 29 forces the long value to be negative, thus guaranteeing that there is a 1 in the first bit position. Extract a ten-character hexadecimal substring Listing 30 converts the long value to a String of sixteen hexadecimal characters, and then extracts the first ten characters from the string. (This is another place where you could change the recipe. For example, you could extract and return the middle ten characters instead of the first ten characters.) Listing 30 returns the hexadecimal string to be used as the new WEP key. Listing 30 also signals the end of the getWepKey method. Just how secure is this scheme? My concept is to have this program running automatically on a scheduled basis in the middle of the night on a machine that is connected to the router by a cable. (Never run this program on a machine that has a wireless connection to the router. If you do, you will simply be broadcasting the new WEP key. Keep in mind however, that the router has a built-in wireless access point, and that raises the question that I will pose below.) Running the program in the automatic mode will cause a new WEP key to be installed on the wireless router each time the program is run. Once again, just how secure is this scheme? I don't have the expertise to answer that question. (See the earlier disclaimer.) I will simply teach you how to write the program. It will be up to you to do the necessary research to decide for yourself whether or not you should use it. The real question is the following, and it is a question for which I don't know the answer.
If you know the answer, I will be interested in hearing from you via email. The real question is, if you use a computer connected by cable to a router that has a built-in wireless access point, is it possible for the communications between that computer and the router that are conducted via cable to be intercepted by another computer on a wireless basis using a packet sniffing program? If the answer to that question is yes, then the scheme isn't secure at all, and you probably shouldn't use it. A local program on each machine My scheme also has a simplified (strictly local) version (see Sockets11a, for example) of the program with the same algorithm installed on all of the other machines in the wireless network. When the users of those machines come to work in the morning, they will run the local version of the program on their machines to learn the new encryption key for the day. Then they will connect to the wireless router by typing in the new key on their machine when the encryption key is requested. (If they were connected to the router on a wireless basis when the key was changed, they should have been disconnected automatically by the router. In other words, as far as the router is concerned, if they were connected using the old key, they are no longer authorized to maintain a connection unless they can provide the new encryption key.) Alternatives to a local program Encryption key management is one of the most difficult aspects of security. If having a local version of the program on each machine in the network is a concern (and it might be in a small office environment but probably not in a home), there are a couple of other alternatives that might work. Using Public Key Cryptography One alternative would be to update the program to cause it to use Public Key Cryptography and to send an email message with an encrypted version of the new wireless router encryption key to each employee each time it is changed. 
I believe that this would be a pretty good approach. With this approach, the two secret values could be changed frequently by the network administrator without her having to install a recompiled version of the program on each machine. Then, even if an ex-employee were to know the algorithm, he wouldn't know the secret values necessary to use it. Printing a list of encryption keys A second alternative would be to write a simple program containing the same algorithm by which the network administrator could print the key for every day of the month in advance and distribute a copy to each employee at the beginning of the month. Of course, those employees would need to keep the list locked in their desks. (Somehow this sounds like the worst approach of all. Some employees would probably post the list on the side of a file cabinet in their office using refrigerator magnets to keep it in place.) I am not a network security expert As I mentioned early in this lesson, I am not a security expert. However, this scheme looks to me like a reasonable way to easily change the encryption key on a daily basis to keep the attackers off balance. Unless the attackers can crack the encryption key within 24 hours, this should prevent them from being able to break into the wireless network. And then I learn the truth ... I will probably receive a flood of email messages after publication of this lesson showing me that this scheme can easily be cracked by a ten-year old kid with a hand calculator. More on this scheme later, let's get back to the program Listing 31 picks back up in the doIt method where the new WEP key has been generated and it is time to install it in the wireless router. Listing 31 declares and initializes the constant that contains the administrator username and password in base64 format. There's nothing new here. Get a Socket object Listing 32 gets a new Socket object connected to the server on port 80. 
Listing 32 also displays a message to the effect that a POST is about ready to begin. Send the POST request Listing 33 sends the POST request followed by three header lines and a blank line as required. The new material here is the format of the POST request and the header line that specifies the length of the message body that is to follow, shown in boldface in Listing 33. (I believe that I determined experimentally that the header line that specifies the length of the message body is required. This is another place where the Ethereal program comes in extremely handy.) Post the new WEP key Listing 34 posts the new WEP key on the server just as though a human user filled in the form and clicked the Submit button. This syntax was captured from the actual data stream produced by a browser using the Ethereal program. A LF without a CR Note that this statement calls the print method instead of the println method, and then adds a LF without a CR at the end. (In doing the backup research for this lesson, I probably spent more time trying to discover that the POST message body must end with a LF instead of a CRLF than on any other topic.) The remainder of the doIt method The remaining code in the doIt method is shown in Listing 35. There is nothing new in Listing 35. Upgrade to a WPA or WPA2 encryption key Now that you know how to write a Java program that will automatically generate and install a new ten-character hexadecimal WEP key on a Linksys WRT54G wireless router, it is time to learn how to generate and install a WPA or WPA2 encryption key on the router for improved security. (I will let you research the difference between WEP, WPA, and WPA2 encryption keys and the security differences among them on your own.) Remember, however, that before you can use either a WPA key or a WPA2 key on the wireless router, all of the computers on your wireless network must be able to support such a key. For example, I still have a machine on my home network running Windows 98. 
Although Windows 98 does support the use of a WEP key, as near as I can tell, Windows 98 won't support the use of a WPA key. As a result, I'm still running my home network using a WEP key (but I change it every night on a random basis). The Sockets11 Class A complete listing of this class is provided in Listing 52 near the end of the lesson. This class can be used to connect to a Linksys WRT54G wireless router (with or without SES/WPA2) to change the WPA shared key. The router requires that the WPA key be 63 characters or less. This class generates a fifteen-character WPA key based on the current date and a random number generator. The class could be easily modified to extend the length of the key up to 63 characters if you choose to do so. Different behavior for different versions When this class is used with an older Linksys router that doesn't support WPA2 but does support WPA, the class sets the configuration to WPA Pre-Shared Key and TKIP. When used with a newer router that supports WPA2, the class sets the configuration to WPA2 Personal and TKIP+AES. For simplicity, this class uses some deprecated methods. The new key is displayed The new key is displayed on the screen. When the program terminates successfully, the new WPA key shown on the screen has been installed in the wireless router. The class was tested using J2SE 5.0, WinXP, and two different versions of a Linksys WRT54G wireless router, one that supports WPA2 and one that doesn't. A complete listing of the class As mentioned above, a complete listing of the class is provided in Listing 52. This class is very similar to the class that I explained under The Sockets10 Class. Therefore, I won't bore you by repeating the explanation for code that is essentially the same. Rather, I will highlight the code that is different between the two classes. The class definition The class definition begins in Listing 36. 
The first new code occurs at the point where the method named getWpaKey is called to generate and return the WPA key. The method named getWpaKey The method named getWpaKey begins in Listing 37. This method generates a 15-character key in the form ccc.ccc.ccc.ccc based on the current date and a random number generator. The characters range from the character 0 (ASCII value 48) through the character z (ASCII value 122) inclusive. The algorithm will generate the same key every time that it is run on a given date, but will generate different keys on different dates. You can easily modify this algorithm to come up with a different recipe for the key, or a longer key using the same general recipe, or both. Note that for simplicity, this method uses some deprecated methods in the handling of dates. The method code The getWpaKey method begins in Listing 37. The code in Listing 37 is essentially the same as the code that I explained earlier beginning in Listing 24. Construct sequence of twelve characters Listing 38 constructs a sequence of twelve characters using sequential random values between 48 and 122 inclusive. As mentioned earlier, the character values represent the characters from 0 through z when viewed according to an ASCII collating sequence. Insert periods every third character This method assumes that a user may be required to manually enter the WPA key when setting up a wireless connection on a client computer. To make this task easier, Listing 39 inserts a period every third character. (In most cases, it should be possible for the user to copy the key and paste it into the form used to set up the wireless connection. In those cases, the code to insert the periods should be eliminated, and the length of the key should be significantly increased for improved security.) Getting back to the doIt method ... The code in Listing 40 is essentially the same as the code that I explained in conjunction with the class named Sockets10. 
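The key-construction portion of getWpaKey, as just described, can be sketched as follows. The date-based seeding (secret bias, discarded leading values) is identical to the WEP version and is collapsed here into a plain seed parameter for brevity; the period placement and character range follow the description above.

```java
import java.util.Random;

public class WpaKeyGen {

    // Build a fifteen-character key in the form ccc.ccc.ccc.ccc.
    public static String getWpaKey(long seed) {
        Random gen = new Random(seed);
        StringBuilder key = new StringBuilder();
        for (int i = 0; i < 12; i++) {
            // Random character from '0' (ASCII 48) through 'z'
            // (ASCII 122) inclusive: 122 - 48 + 1 = 75 choices.
            key.append((char) (48 + gen.nextInt(75)));
        }
        // Insert a period after every third character, working from
        // the back so the earlier indices remain valid.
        key.insert(9, '.').insert(6, '.').insert(3, '.');
        return key.toString();
    }
}
```

Extending the key is a matter of raising the loop count (and adjusting the period positions); the router accepts up to 63 characters.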
Set the content length The code in Listing 41 differs from the code that I explained earlier in the following way. The code that I explained in conjunction with the class named Sockets10 was based on the assumption of a fixed-length ten-character hexadecimal key. (Recall that the choices for the length of a WEP key are exclusively ten characters and 26 characters. I assumed that you were not likely to modify the program to generate a 26-character key.) The code in Listing 41 assumes that you are very likely to modify the program to increase the key length because it is so easy to do. Therefore, the code in Listing 41 takes the length of the key into account when setting the content length for the POST message. Obviously this code could easily be retrofitted into the code for the WEP key in Listing 33. Post the data to the server Listing 42 shows the message body for the POST message. As before, the actual syntax was captured using the Ethereal program. As you can see, the POST message body for the WPA key is somewhat shorter than the POST message body for the WEP key shown in Listing 34. This is because the Linksys data entry form for a WPA key is simpler than the data entry form for a WEP key. The remainder of the code Listing 43 shows the remaining code in the doIt method. This code is essentially the same as the code that I explained earlier. The Sockets11a Class Now I am going to describe how you might implement this scheme in a home or small office network under Windows XP Professional. Before you do so, make certain that you read the disclaimer that I provided earlier. A simplified local version of the program Listing 53 provides a class named Sockets11a, which is a simplified version of the class named Sockets11. The Sockets11a class is designed to compute and display the WPA key for a given date using the same algorithm as in Sockets11. It has no network connection but runs as a local Java application.
You have two computers on your network Assume that you have two computers in your home network. Let's call them Computer A and Computer B. (This procedure should work for any number of computers on the network provided one of them is connected to the wireless router via a cable.) Computer A is connected via cable Computer A is a computer that you have connected to the wireless router using a cable. The main purpose for keeping Computer A on the network is to use it as a disk backup machine and possibly as a file or print server. You install the program named Sockets11 on this machine and schedule it to run at 1:00 am each morning. (See the earlier lesson entitled Consolidating Email using Java, Part 2 to learn how to schedule Java programs to run automatically at preset times under Windows XP.) Never run the program named Sockets11 on a computer that is connected to the wireless router using a wireless connection. If you do, you will be broadcasting your new WPA key to anyone who may be listening. Computer B has a wireless connection Computer B is a laptop computer, running Windows XP, with a wireless network interface card installed. You routinely use this computer to communicate with the Internet via your Linksys WRT54G wireless router. You install the program named Sockets11a on Computer B. The time sequence Assume that you successfully used Computer B to communicate with the Internet via the Linksys WRT54G wireless router on Monday. At 1:00 am on Tuesday morning, the WPA password on the wireless router is automatically changed by the program named Sockets11 running on Computer A. Assume that you keep Computer B running during the night so that it will perform a complete virus scan. When you get up on Tuesday morning, Windows XP may still be showing that Computer B is successfully connected to the wireless network. 
However, when you attempt to connect to Google, you get a message from the browser telling you that the browser could not find the requested page or words to that effect. This is because you originally connected to the wireless router on Monday using a WPA key that is no longer valid. Re-establish the wireless connection You need to re-establish the wireless connection using the new WPA key. You can accomplish that by performing the following steps: - First run the program named Sockets11a on Computer B and note the key value that is displayed. Just leave the command-line screen on the desktop because you are going to copy the new key from it and paste it into a Windows text field later. - Open the Start menu, select Connect To, and right click on Wireless Network Connection. Select Properties in the popup menu. This should open the Wireless Network Connection Properties dialog. - Select the Wireless Networks tab. Remove the wireless network from the list of preferred networks on the basis of the network SSID (name). - Click the Add button. This should expose the Wireless network properties dialog. - Enter the network name in the SSID field if it isn't already there. - Select WPA-PSK for Network Authentication. - Select AES for Data encryption if your router supports WPA2. Select TKIP if your router doesn't support WPA2. - Copy the new network key from the command-line screen and paste it into the two text fields that require the key. - Click the OK button to close the Wireless network properties dialog. - Click the OK button to close the Wireless Network Connection Properties dialog. Your wireless network icon in the system tray should now be showing that you have been disconnected from the wireless router. - Open the Start menu, select Connect To, and right click on Wireless Network Connection again. This time select View Available Wireless Networks on the popup menu. (You can also get there from the wireless icon in the system tray.) 
- Highlight the wireless network associated with your Linksys wireless router and click the Connect button.
- Paste the new key into the two required text fields on the Wireless Connection dialog if requested and click the Connect button on that dialog. After a few seconds, the Wireless Network dialog, as well as the wireless icon in the system tray, should show that you are connected to the wireless router.
- Now go back and confirm that you can successfully connect to Google via the Internet.

Congratulations

If the stingy neighbor in the apartment next door has been trying to crack your WPA key so that he can piggyback onto your Internet connection and avoid having to pay for a connection of his own (or possibly for more malicious reasons), he will now have to start all over, because the WPA key has changed. And it will change once every twenty-four hours if you use these two programs to implement the procedure described above.

Not as complicated as it looks

The above procedure looks complicated, but it is actually quite easy to perform once you get used to it. In addition, you will probably discover several shortcuts that shorten the procedure. However, I have described the long procedure that I believe will always work, because some shortcuts may work some of the time but not all of the time.

What if you don't have a Linksys WRT54G wireless router?

Then you will simply have to take what you have learned in this lesson and write a version of the program that is compatible with your wireless router. Once you have done that, the procedure described above should still work just fine with your program.

The Sockets12 Class

Now for something really different. I don't actually use a Linksys WRT54G wireless router in my home network. Rather, I use an older Belkin 54G which, with rebates, was the cheapest one I could find several years ago when I decided to upgrade my home network to support wireless. The Belkin router supports both WEP and WPA.
However, as I mentioned earlier, because I still have a computer on my network running Windows 98, I haven't been able to implement the more secure WPA, because it appears to be incompatible with Windows 98. The program required to set the encryption key in the Belkin router is much different from the program required to set the encryption key in the Linksys router.

No basic authentication

The biggest difference between the two routers is the mechanism by which the administrator establishes credentials that allow her to modify the router configuration parameters. When you connect to the Linksys router with your browser, the first and only thing that you see is the basic-authentication login screen. Until you enter an acceptable username and password, you are not allowed to see anything else.

When you connect to the Belkin router with your browser, you immediately see the router's home page, which displays a great deal of information about the router configuration. (That is probably not so good, because an attacker who manages to crack your encryption key and connect to your router can learn a great deal about the router configuration even if he doesn't know the administrator password.)

You will need to log in

Near the top of that screen is a note that reads: "You will need to log in before you can change any settings." On the left side of the screen is a list of more than twenty options for setting the router configuration. As mentioned above, a great deal of information is displayed in the center of the screen, including MAC addresses, IP addresses, SSID, etc. (This is another good reason for trying to make certain that the neighbor can't crack the encryption key.)

Click to Login

Near the very top of the screen are three hyperlinks that read: Home | Help | Login. When you point to the Login link with your mouse, the browser displays the link as login.html.
In other words, clicking that link will cause the file named login.html to be downloaded and displayed by your browser. The login.html page contains a single text field for entry of a password, along with a Clear button and a Submit button. There is no concept of a username relative to authentication on the Belkin router.

Study the HTML source code

If you are willing to study the source code for the HTML page, you can determine that clicking the Submit button will invoke the GET method on the server, requesting the resource named login.cgi and passing three parameters as a query string. (However, it is much easier to get that information through the use of the Ethereal program.) Once you have logged in with an acceptable password, you can select any of the options on the left side of the screen, including the option entitled Wireless Security.

The Wireless Security page

The Wireless Security page allows you to select one of the following Security Mode options:

- Disabled
- WPA-PSK (no server)
- 128bit WEP
- 64bit WEP
- WPA (with Radius Server)

Depending on the Security Mode that you select, the data-entry form will change to become appropriate for the data required for that Security Mode. The program that I will describe later is based on a Security Mode selection of 64bit WEP. In addition to the normal data-entry fields for WEP security, this page provides a Submit button labeled Apply Changes.

Examine the HTML source code

Once again, by examining the source code for the HTML page, it can be determined that clicking the Submit button will invoke the POST method on the server, requesting the resource named postapply.cgi and passing a very large number of parameters in the body of the POST message. (Also, once again, it is much easier to determine this through the use of the Ethereal program, particularly with regard to the format of the body of the POST message.)
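To make the shape of such a request concrete, here is a rough, self-contained sketch of how a program can assemble a raw HTTP POST message by hand. The resource name postapply.cgi comes from the discussion above; the parameter names in the body (page, security_mode, wep_key) and the router address are hypothetical placeholders, since the real names and the exact body format must be captured with a sniffer such as Ethereal.

```java
// Sketch only: hand-assembling a raw HTTP POST request to the router.
// The body's parameter names are invented placeholders -- capture the
// real ones with a packet sniffer such as Ethereal before relying on this.
public class PostRequestSketch {

    static String buildPost(String host, String resource, String body) {
        // Minimal headers for a form POST; Content-Length assumes an
        // ASCII body, which is the case for these routers.
        return "POST " + resource + " HTTP/1.1\r\n"
             + "Host: " + host + "\r\n"
             + "Content-Type: application/x-www-form-urlencoded\r\n"
             + "Content-Length: " + body.length() + "\r\n"
             + "Connection: close\r\n"
             + "\r\n"
             + body;
    }

    public static void main(String[] args) {
        String body = "page=wireless_encrypt&security_mode=wep64&wep_key=0123456789";
        System.out.print(buildPost("192.168.2.1", "/postapply.cgi", body));
    }
}
```

A real program along the lines of Sockets13 would write this string to a java.net.Socket connected to port 80 on the router and then read back the response lines, the first of which should read HTTP/1.0 200 Ok on success.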
The Logout hyperlink

At this point, the Login hyperlink has changed to a Logout hyperlink. Pointing to the Logout hyperlink with the mouse indicates that the associated resource is named logout.cgi. Clicking the Logout link returns you to the initial home page.

Login is persistent

Somehow, the router keeps track of the fact that the administrator is logged in and won't allow another login to occur until the first one logs out or the current login expires. (A configuration parameter that must be set along with the administrator password is a parameter named Login Timeout.) I don't know how the server keeps track of the fact that the administrator is currently logged in. Initially I suspected that it was done using cookies, but I didn't write any support for cookies in my program and it still works fine.

The Sockets12 class

The first class that I will explain illustrates the procedure for logging in and logging out of a Belkin 54G router, without any attempt to set the encryption key. A complete listing of this class is shown in Listing 54. I'm going to begin by showing you the screen output produced by the program named Sockets12. That output is shown in Figure 13.

HTTP/1.0 200 Ok

To make a long story short, the two server-response message lines highlighted in boldface that read HTTP/1.0 200 Ok demonstrate that the program was successful in logging in and then logging out of the server as an administrator. Much of the code in this class is very similar to code that I have already explained in this lesson. Therefore, I won't repeat that explanation.

The class definition

The class definition for the class named Sockets12 begins in Listing 44. The only thing that is new in Listing 44 is the fact that this login methodology does not require the administrator password to be expressed in base64, as is the case with HTTP basic authentication.

The doIt method

The method named doIt begins in Listing 45.
Login involves the GET method

The thing that is new about Listing 45 is that the actual login occurs as a result of invoking the GET method on the server, requesting a resource named login.cgi and passing three parameters to the resource as a query string. I was able to capture the syntax for this request using the Ethereal program. Given my limited knowledge of HTML, I would have been hard pressed to figure it out otherwise.

Display the response

Listing 46 invokes the local print method to get and display up to nine lines of server-response data. This produced the top half of the output shown in Figure 13, with the line numbers being inserted by the print method.

The logout code

Listing 47 shows the code that is used to log out from the router. Knowing what you now know, there is nothing in Listing 47 that should require further explanation.

Setting the WEP key

Now it is time for us to examine the code used to set the WEP key on the Belkin router. We will accomplish that using the class named Sockets13.

The Sockets13 Class

A complete listing of the Sockets13 class is provided in Listing 55. This class can be used to connect to a Belkin 54G wireless router and change the WEP key. The class installs a ten-character hexadecimal WEP key in the router. It would be a simple matter to modify the program to install a 26-character WEP key instead. Much of the code in the class named Sockets13 is very similar to code previously explained in this lesson. I won't repeat that explanation here. Rather, I will concentrate on the code that is different.

The output

Once again, I am going to begin by showing you the output produced by this program. That output is shown in Figure 14.

The new material

The new material is shown in boldface in Figure 14. This new material consists of:

- The display of the new WEP key at the beginning of the output.
- Nine lines of server-response data, resulting from the invocation of the POST method to install the new WEP key.
This material appears near the middle of Figure 14. As you can see, the server response to the invocation of the POST method began with HTTP/1.0 200 Ok, indicating that the server was happy with the process. In addition, a physical examination of the Wireless Security page using a browser confirmed that the WEP key was properly installed.

The new code

The new code for the class named Sockets13 is shown in Listing 48. There are no new concepts in Listing 48, only new details such as:

- The name of the requested resource in the invocation of the POST method.
- The syntax of, and the information contained in, the body of the POST message.

The most interesting items are highlighted in boldface in Listing 48. I would have found it very difficult to construct the body of the POST message without the help of the Ethereal program.

That's all folks

And that is probably a lot more than you ever wanted to know about the topics covered in this lesson.

Run the Programs

I encourage you to copy the code from the classes in the section entitled Complete Program Listings. Compile the code and execute it if you have a compatible wireless router. Experiment with the code, making changes and observing the results of your changes. If you don't have a compatible wireless router, modify the code to make it compatible with your wireless router. Above all, however, pay attention to the disclaimer that I provided earlier.

Summary

In this lesson, I taught you how to write a Java program that will automatically set the WEP, WPA, or WPA2 encryption key on a wireless router on an unattended, scheduled basis. Hopefully, in the process, I also taught you quite a bit about writing Java programs that will successfully communicate with HTTP servers. Lest you forget, I will remind you one more time to heed the disclaimer that I provided earlier.
Good Network Practice

Whether or not you decide (despite my disclaimer) to implement these ideas in hopes of improving your home or small-office wireless network security, there are some other things that you should consider doing.

Suggestions from CWNA Guide to Wireless LANs

The following ideas were generally taken from the textbook entitled CWNA Guide to Wireless LANs, Second Edition, by Mark Ciampa.

- Use the highest level of authentication and encryption supported by your wireless router and the computers on your network. Above all, make sure that authentication and encryption are not disabled, as is often the default at installation time.
- Disable the Wireless SSID Broadcast on your router. While this won't prevent serious attackers from finding your wireless router, it will discourage casual eavesdroppers.
- Change the SSID on your wireless router from its default to a more cryptic name. This will prevent attackers from finding your wireless router simply by entering the well-known SSIDs for different brands of wireless routers.
- Enable the Wireless MAC Filter feature. While this won't prevent a really serious attacker from breaking into your wireless network, it will force him to expend quite a lot of effort to do so. Hopefully that will cause him to search for an easier target at someone else's house.

My suggestions

In addition to those suggestions from Ciampa, my suggestions are:

- Use a strong encryption key by avoiding common words and phrases, birthdays, children's names, mother's maiden name, etc., and by mixing uppercase characters, lowercase characters, numbers, and special characters whenever possible.
- Change the encryption key often. The breaking of strong encryption keys generally requires the interception and analysis of large amounts of traffic. By changing the encryption key often, you deny the attacker the opportunity to intercept a large amount of traffic involving the use of the same encryption key.
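The suggestion to use a strong, frequently changed key is easy to automate in Java. As an illustrative sketch (not taken from the Sockets programs in this lesson), here is one way to generate a random hexadecimal WEP key using java.security.SecureRandom. A 64-bit WEP key is entered as 10 hexadecimal characters and a 128-bit key as 26 characters.

```java
import java.security.SecureRandom;

// Illustrative sketch: generating random hexadecimal WEP keys.
// 64-bit WEP takes a 10-character hex key; 128-bit WEP takes 26 characters.
public class WepKeyGenerator {
    private static final SecureRandom RNG = new SecureRandom();

    static String randomHexKey(int hexDigits) {
        StringBuilder key = new StringBuilder(hexDigits);
        for (int i = 0; i < hexDigits; i++) {
            // Character.forDigit maps 0..15 to '0'..'9', 'a'..'f'
            key.append(Character.forDigit(RNG.nextInt(16), 16));
        }
        return key.toString();
    }

    public static void main(String[] args) {
        System.out.println("64-bit WEP key:  " + randomHexKey(10));
        System.out.println("128-bit WEP key: " + randomHexKey(26));
    }
}
```

A program like Sockets13 could call such a method each night to obtain the key that it then installs in the router, rather than computing the key some other way.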
Physical security

And don't forget physical security for the router, particularly in a small-office environment. With many wireless routers, if an attacker can gain access to the equipment long enough to press the reset button (about ten seconds), the router will revert to its default operational configuration, which typically includes no security at all.

References

- 550 Network Programming - General Information
- 552 Network Programming - The InetAddress Class
- 554 Network Programming - The URL Class and the URLEncoder Class
- 556 Network Programming - The URLConnection Class
- 560 Network Programming - Sockets
- 562 Network Programming - Server Sockets
- 564 Network Programming - Datagram Clients
- 566 Network Programming - Datagram Servers
- 568 Network Programming - Stubs, Skeletons, and Remote Objects
- 060 Input and Output Streams
- 2188 Understanding Base64 Data
- 727 Public Key Cryptography 101 Using Java
- 2400 Consolidating Email using Java
- 2402 Consolidating Email using Java, Part 2
- 2404 Uploading Old Email to Gmail using Java
- 2410 Using Java to Clean Up Your Bookmark Library

Complete Program Listing
public class BracketMatcher extends java.lang.Object

BracketMatcher detects matching bracket pairs of the following types: (), {}, [], (**). It ignores brackets within strings and within Mathematica comments. It can accommodate nested comments. It searches in the typical way--expanding the current selection left and right to find the first enclosing matching brackets. If this description is not clear enough, simply create a MathSessionPane and investigate how its bracket-matching feature behaves.

To use it, create an instance and supply the input text either in the constructor or by using the setText() method. Then call the balance() method, supplying the character position of the left end of the current selection (or the position of the caret if there is no selection) and the length of the selection in characters (0 if there is no selected region).

The balance() method returns a Point, which acts simply as a container for two integers: the x value is the character position of the left-most matching bracket and the y value is the character position just to the right of the rightmost bracket. That means that these numbers mark the beginning and end of a selection region that encompasses a matched bracket pair. Null is returned if there is no block larger than the current selection that is enclosed by a matched pair and has no unclosed brackets within it.

A single BracketMatcher instance can be reused with different text by calling the setText() method each time.
See also: MathSessionPane

Methods inherited from class java.lang.Object: clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait

Constructors:

public BracketMatcher()

public BracketMatcher(java.lang.String text)
    text - the string of input that will be searched for matches

Methods:

public void setText(java.lang.String text)
    text - the string of input that will be searched for matches

public java.awt.Point balance(int leftEndOfSelection, int selectionLength)
    leftEndOfSelection - the character position of the left end of the current selection (or the position of the caret if there is no selection)
    selectionLength - the length of the selection in characters (0 if there is no selected region)

J/Link is Copyright (c) 1999-2017, Wolfram Research, Inc. All rights reserved.
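The expanding behavior described above can be illustrated with a small self-contained sketch. To be clear, this is not the J/Link implementation: it handles only (), [] and {}, skips the string and comment rules entirely, and assumes the input is well nested.

```java
import java.awt.Point;

// Toy sketch of the balance() semantics described above -- NOT the J/Link
// code. Given a selection [leftEndOfSelection, leftEndOfSelection +
// selectionLength), it expands outward to the nearest enclosing matched
// bracket pair and returns Point(x = left bracket position, y = position
// just past the right bracket), or null if no enclosing pair exists.
public class ToyBracketMatcher {
    private static final String OPENS = "([{";
    private static final String CLOSES = ")]}";

    public static Point balance(String text, int leftEndOfSelection, int selectionLength) {
        // Scan left for the nearest opener not already closed before the selection.
        int pendingClosers = 0;
        int left = -1;
        for (int i = leftEndOfSelection - 1; i >= 0; i--) {
            char c = text.charAt(i);
            if (CLOSES.indexOf(c) >= 0) {
                pendingClosers++;
            } else if (OPENS.indexOf(c) >= 0) {
                if (pendingClosers == 0) { left = i; break; }
                pendingClosers--;   // this opener pairs with a closer we passed
            }
        }
        if (left < 0) return null;  // no enclosing pair

        char opener = text.charAt(left);
        char closer = CLOSES.charAt(OPENS.indexOf(opener));
        // Scan right from the end of the selection for the matching closer.
        int depth = 0;
        for (int i = leftEndOfSelection + selectionLength; i < text.length(); i++) {
            char c = text.charAt(i);
            if (c == opener) {
                depth++;
            } else if (c == closer) {
                if (depth == 0) return new Point(left, i + 1);
                depth--;
            }
        }
        return null;
    }
}
```

For the text f[g[x]], a zero-length "selection" at the x (position 4) yields (3, 6), i.e. the [x] pair; feeding that region back in yields (1, 7), the enclosing [g[x]] pair. That is the same expand-left-and-right behavior the class documentation describes.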
The Debug class allows you to visualise information in the Editor that may help you understand or investigate what is going on in your project while it is running. For example, you can use it to print messages into the Console window (a Unity Editor window that shows errors, warnings and other messages generated by Unity, or your own scripts), draw visualization lines in the Scene view and Game view, and pause Play Mode in the Editor from script. (A Scene contains the environments and menus of your game. Think of each unique Scene file as a unique level. In each Scene, you place your environments, obstacles, and decorations, essentially designing and building your game in pieces.)

This page provides an overview of the Debug class and its common uses when scripting with it. For an exhaustive reference of every member of the Debug class, see the Debug script reference.

Unity itself sometimes logs errors, warnings and messages to the Console window. The Debug class provides you with the ability to do exactly the same from your own code, as shown below:

Debug.Log("This is a log message.");
Debug.LogWarning("This is a warning message!");
Debug.LogError("This is an error message!");

The three types (error, warning, and message) each have their own icon type in the Console window. Everything that is written to the Console window (by Unity, or your own code) is also written to a Log File. If you have Error Pause enabled in the Console, any errors that you write to the Console via the Debug class will cause Unity's Play Mode to pause.

You can also optionally provide a second parameter to these log methods to indicate that the message is associated with a particular GameObject (the fundamental object in Unity scenes, which can represent characters, props, scenery, cameras, waypoints, and more; a GameObject's functionality is defined by the Components attached to it), like this:

using UnityEngine;

public class DebugExample : MonoBehaviour
{
    void Start()
    {
        Debug.LogWarning("I come in peace!", this.gameObject);
    }
}

The benefit of this is that when you click the message in the Console, the GameObject you associated with that message is highlighted in the Hierarchy, allowing you to identify which GameObject the message related to. In the image below you can see that clicking the "I come in peace!" warning message highlights the "Alien (8)" GameObject.

The Debug class also offers two methods for drawing lines in the Scene view (an interactive view into the world you are creating, used to select and position scenery, characters, cameras, lights, and all other types of GameObject) and Game view. These are DrawLine and DrawRay. In this example, a script has been added to every Sphere GameObject in the scene, which uses Debug.DrawLine to indicate its vertical distance from the plane where Y equals zero. Note that the last parameter in this example is the duration in seconds that the line should stay visible in the Editor.

using UnityEngine;

public class DebugLineExample : MonoBehaviour
{
    // Start is called before the first frame update
    void Start()
    {
        float height = transform.position.y;
        Debug.DrawLine(transform.position, transform.position - Vector3.up * height, Color.magenta, 4);
    }
}

And the result in the Scene view looks like this:
qcompilerdetection.h error

usman-qt: Hello, I am new to Qt and C++ development and recently configured Qt with Visual Studio 2017. I have created a Qt console application, and everything works fine and executes without any issue. However, when I reference and include the header files of another C++ library to work with PDF documents, I get the same errors in the qcompilerdetection.h file.

#include <QtCore/QCoreApplication>
#include <iostream>

// everything works fine if I comment out the following line
#include "stdafx.h" // contains the references to the additional headers of the external library

using namespace std;

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    cout << "successful";
    return a.exec();
}

Christian Ehrlicher (Lifetime Qt Champion), replying to @usman-qt: And what has this problem to do with the topic? Please open a new topic for your question!

SGaist (Lifetime Qt Champion): Thread forked.

usman-qt: Can I get any updates?

Christian Ehrlicher (Lifetime Qt Champion): @usman-qt said: "Can I get any updates?" Which updates? You include another library and then you get warnings - so why do you blame Qt then?

[This post is deleted!]

Christian Ehrlicher (Lifetime Qt Champion): @usman-qt said: "It really seems unprofessional from the support team of such a well-reputed company where you are acting rudely instead of giving any satisfactory answer." First - this forum is a user-driven forum; it has absolutely nothing to do with TQtC, nor are we paid for anything we do here. If you want support from TQtC, then ask them via their support channels (and pay for it). Second - if you use Qt and it works fine and then add another library and it does not work, why do you blame Qt for this? Try to figure out what the other library does that breaks the Qt headers.
Asad Ali (Aspose): Regarding Aspose.PDF for C++: the common library asposecpplib has a MulticastDelegate::emit() function, and emit is a Qt keyword as well. This is the main reason for the conflict and the error you are facing. We are working on resolving the conflict, and as soon as it is resolved we will inform you. Please give us some time.

PS: I am Asad Ali and I work at Aspose as Developer Evangelist.

Asad Ali (Aspose): We would like to share with you that Qt support has been added in Aspose.PDF for C++ 20.6. We have created and successfully tested a Qt project that used Aspose.PDF for C++.
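As a general aside for anyone who runs into this kind of clash between Qt's lower-case keyword macros and a third-party library (this workaround is standard Qt practice, not something the thread above mentions): qmake projects can opt out of the emit, signals and slots macros entirely.

```
# Hypothetical .pro fragment shown for illustration.
# Disable Qt's lower-case keyword macros (emit/signals/slots) so they
# cannot collide with identifiers such as MulticastDelegate::emit().
CONFIG += no_keywords
```

With no_keywords in effect, your own Qt code must spell the macros in their upper-case forms instead: Q_EMIT, Q_SIGNALS and Q_SLOTS.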
3 ways to solve the expression problem

The expression problem—also known as the extensibility problem—is about adding new capabilities to existing code without modifying it. The term was coined by Philip Wadler at Bell Labs in the late 90s. As he put it:

The goal is to define a datatype by cases, where one can add new cases to the datatype and new functions over the datatype, without recompiling existing code, and while retaining static type safety (e.g., no casts).

Given that we might use a dynamic language instead of a language that requires a compilation step to introduce new functionality, I suggest we view the existing code Wadler talks about as code that comes from a library we don't have access to and can't modify directly. Let's imagine we can't read the source code of the original library and can only use its public API.

To recap, here is what we want to achieve:

- use the existing types provided by a third-party library, and introduce new operations on such types;
- use the existing operations provided by the library, and introduce new types that can be acted upon by such operations.

Cool. How do we go about solving goals 1 and 2?

Different approaches to solve the expression problem

In object-oriented languages goal 1 is hard, goal 2 is easy: introducing new operations on existing types is hard, because we would need to modify all existing types to support the new operations (and we might not be able to do that if we don't control the code which defines such types); on the other hand, introducing new types is easy, because we can just subclass the existing types.

In functional languages goal 1 is easy, goal 2 is hard: introducing new operations on existing types is easy, because we can just create a new function; but introducing new types is hard, because we would need to edit every function that accepts the new types we want to support.
So, neither with object-oriented languages nor with functional languages can we achieve goals 1 and 2 easily, and we need to rely on additional language features or design patterns to do that. Here are three approaches we can take to solve the expression problem:

- open classes (aka monkey patching)
- multiple dynamic dispatch (aka multimethods)
- single dynamic dispatch

Not all languages support all of these features. Let's see a few examples.

Open classes (aka monkey patching)

Dynamic languages such as Ruby, Python and Javascript can redefine at runtime any code they need to extend. Let's say there is a class that we want to extend. Here is what we have to do:

- import the class we want to extend;
- define new methods or redefine existing ones;
- attach them to the class we want to extend.

Step 1: import the class we want to extend.

class MyClass:
    a = 1
    b = '2'

    def get_value(self):
        return self.a

Step 2: define a new method.

def get_another_value(cls):
    return cls.b

Step 3: attach the new method to the existing class.

MyClass.get_another_value = get_another_value

With these 3 steps we achieved goal 1: introduce new operations on existing types. What about goal 2, namely introduce new types that can be acted upon by existing operations? This is a non-issue in dynamic languages like Python and Javascript. Since in these languages functions are first-class citizens, we can pass them around and bind them to our objects at runtime. And thanks to duck typing we can use existing functions with our new objects. In Python we can even allow our classes to define their own behavior with respect to language operators. This can be done with special method names, commonly known as magic methods. So if we have an existing class like this:
So if we have an existing class like this: class FooParent: def bar(self): return "baz" and we want to introduce new functionality when instances of this class are garbage collected, we can extend FooParent and overload __del__: class FooChild(FooParent): def __del__(self): print("I am garbage collected!") So when we write: foo = FooChild() del foo we might get "I am garbage collected!". JavaScript is based on prototypes rather than classes, so monkey patching involves extending the prototype of the object we want to extend. If we have an application and plan to extend the prototype of a library that no other library uses, that might be ok, but extending native objects such as String is a big no-no. Keep in mind that the problem here is that these native Javascript objects have a global scope. ClojureScript is able to bypass this issue and extend native objects prototypes safely because it extends the JS prototypes per namespace (if you want to know more about it, watch the talk “ClojureScript Anatomy” at around 19’25"). So, monkey patching can solve the expression problem. It’s convenient and easy to understand. It has several problems though. First of all, only dynamic languages can use it. Second, it’s easy to make a mess and forget what code we monkey patched and why. There are some ways to mitigate these issues. For example, in Ruby we can scope our monkey patches in a module or use Ruby refinements. Multiple dynamic dispatch (aka multimethods)Multiple dynamic dispatch (aka multimethods) Some—though not many—programming languages support multiple dispatch. In these languages a function uses more than one piece of information to determine which function to actually call (runtime polymorphism). Usually the pieces of information are the types of the arguments passed to the function. A language designed with multiple dispatch in mind is Julia. In fact in Julia multiple dispatch is so at the core of the language that + is a generic function with 96 implementations. 
And since generic functions are open, functions are more like protocols which users can also implement. Let's say we have a function f which comes from an existing library we don't control (if you want, you can try this code in a Julia REPL).

f(x::Float64, y::Float64) = 2x + y

If we call this function with f(2.0, 3.0) we get 7.0. That's fine and dandy, but what if we write f(2.0, 3)? If we do, we get this error.

julia> f(2.0, 3)
ERROR: MethodError: no method matching f(::Float64, ::Int32)
Closest candidates are:
  f(::Float64, ::Float64) at REPL[1]:1
Stacktrace:
 [1] top-level scope at none:0

On a side note, I think that Julia error messages are pretty great, maybe on par with Elm ones. We would really like to call f with an integer as its second argument. So what do we do? Well, in Julia we can simply define a new version of the function f:

f(x::Float64, y::Integer) = 2x + y

Now if we call f(2.0, 3) we get 7.0. Another language which supports multimethods is Clojure, but I'll write about Clojure multimethods in a future blog post.

Single dynamic dispatch

In some cases the type of the first argument in a function or method is enough to determine which function to call at runtime. A language that supports this flavor of runtime polymorphism in an elegant and performant way is Clojure. Clojure methods live outside of types. They don't have to be part of a class like in Java or C++. Have a look at this clojure gist. Triangle and Square are two data structures that obey the Areable protocol and the SelfAware protocol. If you are not familiar with Clojure, think of them as Java classes Triangle and Square that implement both the Areable and the SelfAware interfaces. These clojure protocols (or Java interfaces) define the area method and the whoami method.
; data structures ("shapes")
(defrecord Triangle [a b c])
(defrecord Square [edge])

; protocols
(defprotocol Areable
  (area [shape] "calculates the shape's area"))

(defprotocol SelfAware
  (whoami [shape] "returns the name of the shape"))

; implementations
(extend-type Triangle
  Areable
  (area [{:keys [a b c]}]
    "use Heron's formula to calculate area"
    (let [s (/ (+ a b c) 2)]
      (Math/sqrt (* s (- s a) (- s b) (- s c)))))
  SelfAware
  (whoami [this] "Triangle"))

(extend-type Square
  Areable
  (area [this] (* (:edge this) (:edge this)))
  SelfAware
  (whoami [this] "Square"))

Let's say that we want to extend Triangle and Square to provide new functionality that computes the perimeter. Without modifying existing code, in Clojure we are able to define a new protocol that contains the abstract definition of perimeter and provide a concrete implementation for the Triangle and the Square types.

(defprotocol Perimeterable
  (perimeter [shape] "calculates the perimeter of the shape"))

(extend-protocol Perimeterable
  Triangle
  (perimeter [{:keys [a b c]}] (+ a b c))
  Square
  (perimeter [square] (* (:edge square) 4)))

By doing so, we gained new functionality for the existing Triangle and Square types.

(let [triangle (->Triangle 1 1 1)]
  ; existing functionality
  (area triangle)
  (whoami triangle)
  ; new functionality
  (perimeter triangle))

Note: Clojure protocols can also extend final Java classes, even if I still don't know how they are able to do it.

Other approaches

Open classes and dynamic dispatch (single or multiple) are not the only approaches we can take to solve the expression problem. Here are a few approaches I haven't talked about in this article:

- Typeclasses
- Object algebras (only available in languages that support generics)
- Tagless final

References

This blog post was fairly short and introductory, but I hope it taught you a couple of things.
If you want to know more about the expression problem—and especially if you are interested in Clojure—have a look at these articles: - The Expression Problem and its solutions - Clojure’s Solutions to the Expression Problem - Solving the expression problem in Clojure - Solving the Expression Problem with Clojure 1.2
https://www.giacomodebidda.com/posts/3-ways-to-solve-the-expression-problem/
Barry Warsaw wrote:
>> Unless this new proposal also includes changing the meaning of
>> "except:" to "except Error".
> It's worth debating. OT1H, it's a semantic difference for Python 2.x
> (although +1 on the idea for Py3K).

I was speaking of Py3K here, yes.

> Going along with that, maybe the interpreter should do something
> different when an Exception that's not an Error reaches the top
> (e.g. not print a traceback if KeyboardInterrupt is seen --
> we usually just catch that, print "Interrupted" and exit).

SystemExit is already special-cased, as far as I can tell. KeyboardInterrupt could in fact be special-cased as well (I saw many Python newbies -- but otherwise experienced -- being disgusted at first when they interrupt their code with CTRL+C: they expect the program to exit "almost silently").

>> Also, under this new proposal, we could even remove
>> Exception from the builtins namespace in Py3k. It's almost
>> always wrong to use it, and if you really really need it, it's
>> spelled exceptions.Exception.
> I'm not sure I'd go as far as hiding Exception, since I don't think the
> penalty is that great and it makes it easier to document.

The situation (in Py3k) I was thinking of is when people see this code:

except:
    # something

and want to change it so as to get a name for the exception object. I *think* many could get confused and write:

except Exception, e:
    # something

which changes the meaning. It "sounds" correct, but it's wrong. Of course, it's easy to argue that "Exception" is just that, and people actually meant "Error". In a way, the current PEP 352 is superior here because it makes it harder to do the "bad" thing by giving it a complex name (BaseException).

Giovanni Bajo
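The hierarchy being debated here is essentially what PEP 352 and Python 3 eventually adopted: KeyboardInterrupt and SystemExit derive directly from BaseException rather than from Exception, so a bare "except Exception" no longer swallows them. A quick illustration (my own Python 3 example, not code from the thread):

```python
def caught_by_except_exception(exc):
    """Return True if an `except Exception` handler would catch exc."""
    try:
        raise exc
    except Exception:
        return True
    except BaseException:
        return False

print(caught_by_except_exception(ValueError("boom")))   # True
print(caught_by_except_exception(KeyboardInterrupt()))  # False: propagates to the top
print(caught_by_except_exception(SystemExit()))         # False: exits "almost silently"
```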
https://mail.python.org/pipermail/python-dev/2006-March/062585.html
Hello everyone! So I'm developing a game in Unity using the Leap Motion Controller. It is going to be an Air Hockey game and I want to move one of the Paddles with the Leap Motion. Unfortunately I'm pretty new to programming (learning programming in C# for almost 4 weeks now), so the scripts that come with Leap killed my understanding of code a bit. It was pretty overwhelming... Maybe somebody of you guys can tell me how to move an object with Leap without having the hands appear in the scene. Thanks in advance and have a nice day! Cheers, Felix

using UnityEngine;
using System.Collections;
using Leap;

public class LeapHandController : MonoBehaviour {

    public GameObject Palm;
    public GameObject Palm2;
    Controller LEAPcontroller;
    Frame frame;
    HandList Hands;
    Hand hand;
    Vector3 PalmPosition;
    public bool tierra = true;
    public static int Nsaltos;

    void Start () {
        LEAPcontroller = new Controller ();
        if (LEAPcontroller.IsConnected) {
            Debug.Log ("LEAP connected!");
        } else {
            Debug.Log ("LEAP is NOT connected!");
        }
    }

    void FixedUpdate() {
        frame = LEAPcontroller.Frame();
        Hands = frame.Hands;
        if (Hands.Count == 0 || Hands.Count > 1) {
            return;
        }
        PalmPosition = Hands [0].PalmPosition.ToUnity ();
        //Palm.gameObject.transform.position = PalmPosition * 1 ;
        Palm.gameObject.transform.up = (PalmPosition * 1 * Time.deltaTime);
        LEAPcontroller.EnableGesture(Gesture.GestureType.TYPE_CIRCLE);
        LEAPcontroller.EnableGesture(Gesture.GestureType.TYPE_KEY_TAP);
        LEAPcontroller.EnableGesture(Gesture.GestureType.TYPE_SWIPE);
        GestureList gesturesInFrame = frame.Gestures();
        if (!gesturesInFrame.IsEmpty) {
            foreach (Gesture gesture in gesturesInFrame) {
                switch (gesture.Type) {
                case Gesture.GestureType.TYPE_CIRCLE:
                    Debug.Log ("circle");
                    // Palm.gameObject.transform.=PalmPosition * 50;
                    break;
                case Gesture.GestureType.TYPE_KEY_TAP:
                    Debug.Log ("key");
                    if (Nsaltos == 0) {
                        Palm2.gameObject.transform.position += Vector3.up * 1;
                        Nsaltos = 1;
                    }
                    break;
                case Gesture.GestureType.TYPE_SCREEN_TAP:
                    Debug.Log ("screen");
                    break;
                case
Gesture.GestureType.TYPE_SWIPE:
                    Debug.Log ("swipe");
                    break;
                default:
                    Debug.Log ("nada");
                    break;
                }
            }
        }
    }
}

is this a new script or can i find it in the assets leap motion gives you?? Idc if the hands are in the shot i just want to move a sphere with my hand orientation

That isn't one of the Leap Motion scripts. To do what you want, you can use something like this:

using UnityEngine;
using System.Collections;
using Leap;

public class MatchHand : MonoBehaviour {
    //Assume a reference to the scene HandController object
    public HandController handCtrl;

    void Update() {
        Frame frame = handCtrl.GetFrame();
        Hand hand = frame.Hands.Frontmost;
        if (hand.IsValid) {
            transform.position = handCtrl.transform.TransformPoint(hand.PalmPosition.ToUnityScaled());
            transform.rotation = handCtrl.transform.rotation * hand.Basis.Rotation(false);
        }
    }
}

This moves the object to which it is attached to the position of your hand over the Leap Motion hardware. Units are in millimeters, so unless the object is close to the camera in your scene (as would probably be the case in VR/AR), you will have to scale up the position to see appreciable movement. The rotation of the object is also set to match your hand. This requires a HandController object from the Leap Motion core assets. You can just set the hand models to None if you don't want them to appear. (This can be done easily without the Core assets at all, but you would have to define the ToUnityScaled() and Rotation() extensions used to convert from the Leap Motion frame of reference to the Unity frame of reference.)

Where you put transform, would that be referring to an object that I wanna control? That's how it appears to me, I just wanna make sure

Yes. This script assumes it is attached to the object that you want to control. To control a different object, you could specify a reference to that object's transform instead.

Thank you One last question....
When I am trying to make a frame and what not I keep getting: NullReferenceException: Object not set to an instance of an object... It keeps pointing to the line where the frame is made:

Frame frame = handCtrl.GetFrame();

Nevermind, I fixed it... It's complaining about mesh triangles but it works. Thank you

And how would this be done with Orion? I find many of the previous functions don't work now. Any help is really appreciated. Thanks.

I need this working with Orion. How can I access frame.getframe() and hands.frontmost, etc.? Thanks. Best regards. Jesus.

For information on how to get tracking frames in a Unity script, please see the documentation. Note that the Hands.Frontmost API no longer exists in Orion. If you need that feature, it is easy enough to write yourself. All the old function did was compare the z-coordinates of all hands in the frame and return the farthest forward. Of course, this is a less well defined problem in VR since the Leap device is not stationary, so the definition of forward needs to be considered too.

Can anyone tell me what is to be done with the first and second code? I also wanna do the same thing as mentioned above but can't figure out how you guys have done it.
https://forums.leapmotion.com/t/how-do-i-move-objects-with-leap-motion-without-having-the-hands-appear-in-the-scene/2572
Hello libraries,

fast shift operations that don't verify the shift amount are a rather popular demand. How about providing them in Data.Bits?

-- |Shifts its first operand left by the amount of bits specified in
-- second operand. Result of shifting by negative/too large amount
-- is not specified. Should be used only in situations when you may
-- guarantee correct value of second operand or don't care about result :)
unsafeShiftL :: Int -> Int -> Int
unsafeShiftR :: Int -> Int -> Int

#ifdef __GLASGOW_HASKELL__
unsafeShiftL (I# a) (I# b) = (I# (a `iShiftL#` b))
unsafeShiftR (I# a) (I# b) = (I# (a `uncheckedIShiftRL#` b))
#else /* ! __GLASGOW_HASKELL__ */
unsafeShiftL = shiftL
unsafeShiftR = shiftR
#endif /* ! __GLASGOW_HASKELL__ */

it may be even better to include them in the Bits class with an obvious default definition

--
Best regards,
Bulat                            mailto:Bulat.Ziganshin at gmail.com
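For readers outside GHC internals, the checked/unchecked distinction being proposed can be sketched in Python (an illustration under stated assumptions, not the actual Haskell implementation): as I recall, GHC's checked shiftL on Int first tests the shift amount against the word size and returns 0 when it is out of range, while the iShiftL# primop emits the bare machine shift and leaves out-of-range amounts undefined.

```python
WORD = 64                 # assumed word size, in bits
MASK = (1 << WORD) - 1    # keep results in fixed width

def checked_shift_l(a, b):
    # branches on the shift amount, like the checked shiftL
    if b >= WORD:
        return 0
    return (a << b) & MASK

def unchecked_shift_l(a, b):
    # caller must guarantee 0 <= b < WORD, like iShiftL#
    return (a << b) & MASK

print(checked_shift_l(1, 3))    # 8
print(checked_shift_l(1, 100))  # 0: all bits shifted out
print(unchecked_shift_l(3, 2))  # 12
```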
http://www.haskell.org/pipermail/libraries/2006-November/006206.html
TL;DR Version: I ran a basic program and I'm getting the following error:

avr-objcopy: 'TestProject.elf': No such file

I thought AVR Studio 4 worked with hex files, not elf files . . . did I miss a really basic configuration setting that's triggering this error?

Full Version (including what I've already tried): I realize how incredibly dated this request is . . . but I have a super simple timer/GPIO application I was putting together for a friend so I thought I'd "quickly" throw something together. I had an old STK500 and an ATmega8515 lying around, downloaded the latest version of AVR Studio (7, I believe) to find there is no support for an STK500 (makes sense, it's super old). After trying a few different versions with similar problems, I just loaded AVR Studio 4 (as simple as it gets and what I originally used with an STK500) and decided to jump back in on this basic application. I kept getting errors about an elf file . . . which was odd because I thought the default output in AVR Studio 4 was .hex, not .elf. I loaded an old project that I had working years ago, same problem. Went to the newbie thread, followed the instructions to create a new project and just moved over the base level bit manipulation, and I'm still getting the same error:

avr-objcopy: 'TestProject.elf': No such file

The code I'm trying to run is:

#include <avr/io.h>

int main(void)
{
    // Set Port B pins as all outputs
    DDRB = 0xff;

    // Set all Port B pins as HIGH
    PORTB = 0xff;

    return 1;
}

The full log is shown below:

rm -rf TestProject.o TestProject.elf dep/* TestProject.hex TestProject.eep TestProject.lss TestProject.map
Build succeeded with 0 Warnings...
mmcu=atmega16a -Wall -gdwarf-2 -Os -std=gnu99 -funsigned-char -funsigned-bitfields -fpack-struct -fshort-enums -MD -MP -MT TestProject.o -MF dep/TestProject.o.d -c ../TestProject.c
/usr/bin/sh: -Wall: command not found
make: [TestProject.o] Error 127 (ignored)
mmcu=atmega16a -Wl,-Map=TestProject.map TestProject.o -o TestProject.elf
/usr/bin/sh: -Wl,-Map=TestProject.map: command not found
make: [TestProject.elf] Error 127 (ignored)
avr-objcopy -O ihex -R .eeprom -R .fuse -R .lock -R .signature TestProject.elf TestProject.hex
avr-objcopy: 'TestProject.elf': No such file
make: *** [TestProject.hex] Error 1
Build failed with 1 errors and 0 warnings...

I'm not even sure someone's able to help with this, considering how old it is, but if anyone can provide some insight on how to get this stupid error to go away so I can build a project, I'd greatly appreciate it! I'm guessing it's some sort of configuration thing but I would've assumed the install itself would have set things up correctly by default. I actually thought I'd be spending more time going through code and less time going through the setup process . . . heh. Oh well. Thanks in advance!

AS7 supports STK500. Add it via the Tools Menu. You must tell it the COM port.
.
AS4 supports C projects. And supports subsequent programming with the STK500.
.
Neither IDE appreciates you inventing your own build statements. Yes, you can do it but you need to know what you are doing.
.
Just study the Makefile that AS4 or AS7 generate for your project. Any home-brew that you want to invent should follow a similar strategy.
.
David.

The tools generate an ELF file, which is converted to HEX. objcopy is the utility which does the conversion - so, clearly, this will fail if the ELF file was not generated ... EDIT Something like this EDIT 2 Found a better diagram:... EDIT 3 clawson uses that diagram here:...
#GccToolchainFlow

You know when we were reviewing the stickies and someone suggested to unsticky a post I made at the top of the Studio forum about how to "fix" AS4.19 + WinAVR? Seems I made the right decision not to unsticky it. The first post is showing the classic signs of it not "seeing" the toolchain. EDIT: this post...... EDIT2: forgot to say that AS4.19 *will* automatically see "Atmel Toolchain for Windows" (I was convinced at the time it was a deliberate move to wean people off WinAVR!). The fact is that this is almost 10 years later and Atmel Toolchain for Windows *is* the better option now. Oh and never: from an embedded micro program!!

Who would do a thing like that ... ?

Yeah, definitely wasn't intending on telling it how to build the file . . . looks like it was an issue with AVR Studio 4.19. As far as AS7 supporting it, sounds like I should move to AS7 rather than AS4, if it'll support the STK500. Thanks for the information! Heh, the funny thing is I thought I had downloaded 4.18 because I read about the issues with 4.19 but clearly, I didn't. Should've recognized the issue earlier. It's funny, I haven't been in this forum for years but it's good to see some of the same names around here. Clawson and David, I think you've both been here helping me since I got started with microcontrollers! A little embarrassing that my first issue coming back is something as simple as this (ESPECIALLY with it being stickied at the top . . . heh). Thanks again for your help! And thanks to everyone else who took the time to respond! Now I can jump back into the actual programming! Have a good day/night, everyone!
https://www.avrfreaks.net/comment/2430316
Beginner Tutorial: Neural Nets in Theano

Theano is part framework and part library for evaluating and optimizing mathematical expressions. It's popular in the machine learning world because it allows you to build up optimized symbolic computational graphs and the gradients can be automatically computed. Moreover, Theano also supports running code on the GPU. Automatic gradients + GPU sounds pretty nice. I won't be showing you how to run on the GPU because I'm using a Macbook Air and as far as I know, Theano doesn't support or barely supports OpenCL at this time. But you can check out their documentation if you have an nVidia GPU ready to go.

Summary

As the title suggests, I'm going to show how to build a simple neural network (yep, you guessed it, using our favorite XOR problem..) using Theano. The reason I wrote this post is because I found the existing Theano tutorials to be not simple enough. I'm all about reducing things to fundamentals. Given that, I will not be using all the bells-and-whistles that Theano has to offer and I'm going to be writing code that maximizes for readability. Nonetheless, using what I show here, you should be able to scale up to more complex algorithms.

Assumptions

I assume you know how to write a simple neural network in Python (including training it with gradient descent/backpropagation). I also assume you've at least browsed through the Theano documentation and have a feel for what it's about (I didn't do it justice in my explanation of "why Theano" above).

import theano
import theano.tensor as T
import theano.tensor.nnet as nnet
import numpy as np

Before we actually build the neural network, let's just get familiarized with how Theano works. Let's do something really simple, we'll simply ask Theano to give us the derivative of a simple mathematical expression like

$$ f(x) = e^{\sin(x^2)} $$

As you can see, this is an equation of a single variable $x$. So let's use Theano to symbolically define our variable $x$.
What do I mean by symbolically? Well, we're going to be building a Theano expression using variables and numbers similar to how we'd write this equation down on paper. We're not actually computing anything yet. Since Theano is a Python library, we define these expression variables as one of many kinds of Theano variable types.

x = T.dscalar()
fx = T.exp(T.sin(x**2))

Here I've defined our expression that is equivalent to the mathematical one above. fx is now a variable itself that depends on the x variable.

type(fx) #just to show you that fx is a theano variable type
theano.tensor.var.TensorVariable

Okay, so that's nice. What now? Well, now we need to "compile" this expression into a Theano function. Theano will do some magic behind the scenes including building a computational graph, optimizing operations, and compiling to C code to get this to run fast and allow it to compute gradients.

f = theano.function(inputs=[x], outputs=[fx])
f(10)
[array(0.602681965908778)]

We compiled our fx expression into a Theano function. As you can see, theano.function has two required arguments, inputs and outputs. Our only input is our Theano variable x and our output is our fx expression. Then we ran the f() function supplying it with the value 10 and it accurately spit out the computation. So up until this point we could have easily just computed np.exp(np.sin(100)) using numpy and gotten the same result. But that would be an exact, imperative, computation and not a symbolic computational graph. Now let's show off Theano's autodifferentiation. To do that, we'll use T.grad() which will give us a symbolically differentiated expression of our function, then we pass it to theano.function to compile a new function to call it. wrt stands for 'with respect to', i.e. we're deriving our expression fx with respect to its variable x.
fp = T.grad(fx, wrt=x)
fprime = theano.function([x], fp)
fprime(15)
array(4.347404090286685)

4.347 is indeed the derivative of our expression evaluated at $x=15$, don't worry, I checked with WolframAlpha. And to be clear, Theano can take the derivative of arbitrarily complex expressions. Don't be fooled by our extremely simple starter expression here. Automatically calculating gradients is a huge help since it saves us the time of having to manually come up with the gradient expressions for whatever neural network we build. So there you have it. Those are the very basics of Theano. We're going to utilize a few other features of Theano in the neural net we'll build but not much.

Now, for an XOR neural network

We're going to symbolically define two Theano variables called x and y. We're going to build our familiar XOR network with 2 input units (+ a bias), 2 hidden units (+ a bias), and 1 output unit. So our x variable will always be a 2-element vector (e.g. [0,1]) and our y variable will always be a scalar and is our expected value for each pair of x values.

x = T.dvector()
y = T.dscalar()

Now let's define a Python function that will be a matrix multiplier and sigmoid function, so it will accept an x vector (and concatenate in a bias value of 1) and a w weight matrix, multiply them, and then run them through a sigmoid function. Theano has the sigmoid function built in the nnet class that we imported above. We'll use this function as our basic layer output function.

def layer(x, w):
    b = np.array([1], dtype=theano.config.floatX)
    new_x = T.concatenate([x, b])
    m = T.dot(w.T, new_x) #theta1: 3x3 * x: 3x1 = 3x1 ;;; theta2: 1x4 * 4x1
    h = nnet.sigmoid(m)
    return h

Theano can be a bit touchy. In order to concatenate a scalar value of 1 to our 1-dimensional vector x, we create a numpy array with a single element (1), and explicitly pass in the dtype parameter to make it a float64 and compatible with our Theano vector variable.
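Before moving on, it's worth sanity-checking that autodiff result by hand. This is a plain numpy check (my own, not from the original post): applying the chain rule to $f(x) = e^{\sin(x^2)}$ gives $f'(x) = e^{\sin(x^2)} \cos(x^2) \cdot 2x$, and a central finite difference agrees with both the hand derivative and the Theano value at $x=15$.

```python
import numpy as np

def f(x):
    # the same expression we handed to Theano
    return np.exp(np.sin(x ** 2))

def fprime_by_hand(x):
    # chain rule: exp(sin(x^2)) * cos(x^2) * 2x
    return np.exp(np.sin(x ** 2)) * np.cos(x ** 2) * 2 * x

x, h = 15.0, 1e-6
finite_diff = (f(x + h) - f(x - h)) / (2 * h)
print(fprime_by_hand(x))  # ~4.3474, matching Theano's fprime(15) above
print(finite_diff)
```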
You'll also notice that Theano provides its own version of many numpy functions, such as the dot product that we're using. Theano can work with numpy but in the end it all has to get converted to Theano types. This feels a little bit premature, but let's go ahead and implement our gradient descent function. Don't worry, it's very simple. We're just going to have a function that defines a learning rate alpha and accepts a cost/error expression and a weight matrix. It will use Theano's grad() function to compute the gradient of the cost function with respect to the given weight matrix and return an updated weight matrix.

def grad_desc(cost, theta):
    alpha = 0.1 #learning rate
    return theta - (alpha * T.grad(cost, wrt=theta))

We're making good progress. At this point we can define our weight matrices and initialize them to random values. Since our weight matrices will take on definite values, they're not going to be represented as Theano variables, they're going to be defined as Theano's shared variable. A shared variable is what we use for things we want to give a definite value but we also want to update. Notice that I didn't define the alpha or b (the bias term) as shared variables, I just hard-coded them as strict values because I am never going to update/modify them.

theta1 = theano.shared(np.array(np.random.rand(3,3), dtype=theano.config.floatX)) # randomly initialize
theta2 = theano.shared(np.array(np.random.rand(4,1), dtype=theano.config.floatX))

So here we've defined our two weight matrices for our 3 layer network and initialized them using numpy's random class. Again we specifically define the dtype parameter so it will be a float64, compatible with our Theano dscalar and dvector variable types. Here's where the fun begins. We can start actually doing our computations for each layer in the network.
Of course we'll start by computing the hidden layer's output using our previously defined layer function, and pass in the Theano x variable we defined above and our theta1 matrix.

hid1 = layer(x, theta1) #hidden layer

We can do the same for our final output layer. Notice I use the T.sum() function on the outside which is the same as numpy's sum(). This is only because Theano will complain if you don't make it explicitly clear that our output is returning a scalar and not a matrix. Our matrix dimensional analysis is sure to return a 1x1 single element vector but we need to convert it to a scalar since we're subtracting out1 from y in our cost expression that follows.

out1 = T.sum(layer(hid1, theta2)) #output layer
fc = (out1 - y)**2 #cost expression

Ahh, almost done. We're going to compile two Theano functions. One will be our cost expression (for training), and the other will be our output layer expression (to run the network forward).

cost = theano.function(inputs=[x, y], outputs=fc, updates=[
    (theta1, grad_desc(fc, theta1)),
    (theta2, grad_desc(fc, theta2))])
run_forward = theano.function(inputs=[x], outputs=out1)

Our theano.function call looks a bit different than in our first example. Yeah, we have this additional updates parameter. updates allows us to update our shared variables according to an expression. updates expects a list of 2-tuples:

updates=[(shared_variable, update_value), ...]

The second part of each tuple can be an expression or function that returns the new value we want to update the first part to. In our case, we have two shared variables we want to update, theta1 and theta2, and we want to use our grad_desc function to give us the updated data. Of course our grad_desc function expects two arguments, a cost function and a weight matrix, so we pass those in. fc is our cost expression. So every time we invoke/call the cost function that we've compiled with Theano, it will also update our shared variables according to our grad_desc rule.
Pretty convenient! Additionally, we've compiled a run_forward function just so we can run the network forward and make sure it has trained properly. We don't need to update anything there. Now let's define our training data and set up a for loop to iterate through our training epochs.

inputs = np.array([[0,1],[1,0],[1,1],[0,0]]).reshape(4,2) #training data X
exp_y = np.array([1, 1, 0, 0]) #training data Y
cur_cost = 0
for i in range(10000):
    for k in range(len(inputs)):
        cur_cost = cost(inputs[k], exp_y[k]) #call our Theano-compiled cost function, it will auto update weights
    if i % 500 == 0: #only print the cost every 500 epochs/iterations (to save space)
        print('Cost: %s' % (cur_cost,))

Cost: 0.6729492014975456
Cost: 0.23521333773509118
Cost: 0.20385060705569344
Cost: 0.09715044753510742
Cost: 0.039259128265329804
Cost: 0.027491611330928263
Cost: 0.013058140670015577
Cost: 0.007656970860067689
Cost: 0.005215440091514665
Cost: 0.0038843551856147704
Cost: 0.003063599050987251
Cost: 0.002513378114127917
Cost: 0.0021217874358153673
Cost: 0.0018303604198688056
Cost: 0.0016058512119977342
Cost: 0.0014280751222236468
Cost: 0.001284121957016395
Cost: 0.0011653769062277865
Cost: 0.0010658859592106108
Cost: 0.000981410600338758

#Training done! Let's test it out
print(run_forward([0,1]))
print(run_forward([1,1]))
print(run_forward([1,0]))
print(run_forward([0,0]))

0.9752392598335232
0.03272599279350485
0.965279382474992
0.030138157640063574

It works!

Closing words

Theano is a pretty robust and complicated library but hopefully this simple introduction helps you get started. I certainly struggled with it before it made sense to me. And clearly using Theano for an XOR neural network is overkill, but its optimization power and GPU utilization really comes into play for bigger projects. Nonetheless, not having to think about manually calculating gradients is nice. Cheers
http://outlace.com/theano.html
Do we need forward declarations in Java?

Predict the output of the following Java program.

Output:

fun() called: x = 5

The Java program compiles and runs fine. Note that Test1 and fun() are not declared before their use. Unlike C++, we don't need forward declarations in Java. Identifiers (class and method names) are recognized automatically from source files. Similarly, library methods are directly read from the libraries, and there is no need to create header files with declarations. Java uses a naming scheme where package and public class names must follow directory and file names respectively. This naming scheme allows the Java compiler to locate library files.

Please write comments if you find anything incorrect, or you want to share more information about the topic discussed above.
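The program listing itself did not survive in this copy of the article (only its output did). A minimal reconstruction consistent with that output, with Test1 and fun() referenced before they are declared, might look like the following; the exact original listing is an assumption:

```java
// Hypothetical reconstruction of the missing listing: main() uses Test1
// before the class is declared, yet no forward declaration is needed.
class Main {
    public static void main(String[] args) {
        Test1 t = new Test1(); // Test1 is declared further down in this file
        t.fun(5);              // prints "fun() called: x = 5"
    }
}

class Test1 {
    void fun(int x) {
        System.out.println("fun() called: x = " + x);
    }
}
```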
https://www.geeksforgeeks.org/do-we-need-forward-declarations-in-java/
XmlWriter.WriteAttributeString Method (String, String, String)

Microsoft Silverlight will reach end of support after October 2021. Learn more.

When overridden in a derived class, writes an attribute with the specified local name, namespace URI, and value.

Namespace: System.Xml
Assembly: System.Xml (in System.Xml.dll)

Syntax

'Declaration
Public Sub WriteAttributeString ( _
    localName As String, _
    ns As String, _
    value As String _
)

public void WriteAttributeString(
    string localName,
    string ns,
    string value
)

Parameters

- localName
  Type: System.String
  The local name of the attribute.
- ns
  Type: System.String
  The namespace URI to associate with the attribute.
- value
  Type: System.String
  The value of the attribute.

Exceptions

Remarks

This method writes out the attribute with a user defined namespace prefix and associates it with the given namespace. If localName is "xmlns" then this method also treats this as a namespace declaration. In this case, the ns argument can be null (a null reference (Nothing in Visual Basic)).
https://docs.microsoft.com/en-us/previous-versions/windows/silverlight/dotnet-windows-silverlight/xkd34zdt%28v%3Dvs.95%29
I've been wondering about how easy it might be to use SignalR so I decided to work through one of Microsoft's quick intro tutorials. While doing that I ran into some challenges so I wanted to take a shot at explaining how SignalR works myself. The entire point of using a technology like SignalR is updating remote clients. I wanted a simple example that would allow readers to see how it might work. A while back I used Firebase (more on this in a moment) to create a simple example which: Here's a simple example. The browser on the left is Google Chrome and the one on the right is Microsoft Edge. Both are pointed at the same URL, which is my site hosted at SmarterASP.net - Unlimited ASP.NET Web Hosting[^]. I got a 60 day free trial there and you can too - no credit card required. More on why I'm using SmarterASP.net later. Just open up two different browser windows and point them at my SmarterASP.net site:[^] Next, move one of the pawns in either of the browser windows and it will move in the other. I've already written the app I show you in this article as a Firebase[^] app. You can see that example at:[^] This one runs at my GoDaddy.com site. More about the struggles I had getting GoDaddy to host a SignalR app, later. I've worked with Firebase fairly extensively so I was very interested to see how SignalR would compare in relation to: If you tried both of them, I'm sure you saw that the Firebase one is far more responsive. You can see quite a lag even in the example gif above, which was recorded from the SignalR version written for this article. Now that you've seen it work, let's set up our SignalR project in Visual Studio and get started. As I stated earlier, I learned SignalR by working through one of Microsoft's tutorials. You can see that tutorial at: Tutorial: Getting Started with SignalR 2 | Microsoft Docs[^]. However, that article presents a few challenges. Fire up Visual Studio and create a new project.
Choose the Web project type on the left and choose the ASP.NET Web Application (.NET Framework) on the right side. Name the project pawns to keep it simple and click the [OK] button. A dialog box will appear. We don't need a lot in this project and we're going to add the SignalR libraries using Nuget, so choose the Empty Project and click the [OK] button. Once the template project is created, choose the Tools... menu item at the top of Visual Studio. Slide down to the Nuget Package Manager menu item. Another menu will pop out. Choose the Manage Nuget Packages for Solution... menu item. A very ugly window will open up in Visual Studio. On mine the window is very small and you can hardly see anything. Microsoft isn't something less than amazing when it comes to UI / UX, right? By default the Installed item at the top will be selected. Go ahead and search for: Microsoft.AspNet.SignalR. At this point I had to move slider bars around so I could actually see the stuff I needed on the screen. Go ahead and do that if you have to, because on the right we need to click a button we cannot even see right now so we can add the SignalR library. A dialog box will pop up giving you a preview of all the libraries which will be added to your project. Click the [OK] button and all the libraries will be added. Note: a license acceptance dialog also pops up to ensure you accept the license for use. In the Microsoft tutorial they have you add the OWIN library separately, but you don't really need to do that now because Nuget adds all dependencies for you. If you have any troubles with the project creation you can grab the v001 zip file at the top of this article and unzip it and you'll be all set. Of course, I deleted the downloaded nuget packages so the zip is just the source, but all you have to do is restore packages from Visual Studio and you'll be ready to continue this article. Once you've done all the previous steps, you have everything you need to use SignalR in your project.
Now, let's add some code. In the Microsoft tutorial the first thing the author tells you to do is add a new class that implements the SignalR behavior. However, I like to build the thing as we go to see how it all works together. So first, we'll set up the HTML page that we'll use as our app's user interface. Go ahead and add a new web page to your project. You can simply right-click your solution in the Solution Explorer. Next, slide down to the Add menu item. And finally select the HTML page menu item. A dialog box will pop up. Type the name index.htm in there and click the [OK] button. Yes, I use the 3-letter extension instead of the full 4 letters of HTML. When you add the file, Visual Studio will add a basic skeleton HTML file which will only display a blank page to the user. Of course, for our purposes we want to display three different colored pawns on a blue grid background. Let's alter the skeleton that Visual Studio provides for us by replacing it with the following code now:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8" />
    <title>pawns</title>
</head>
<style>
    body, html {
        margin: 0;
        padding: 0;
    }
</style>
<body>
    <img style="visibility:hidden;display:none;" src="assets/redBlueGreenPawn.png" id="allPawns" />
    <canvas id="gamescreen">Your browser does not support HTML5.</canvas>
</body>
</html>

Normally, I'd keep the CSS (Cascading Style Sheet) data in a separate file but I'm attempting to simplify this tutorial and there is really just the one style, which removes any margin or padding, so I've added it into our index.htm. Next, you see that I reference an image that you don't have. You can see it here and download it, if you like: You can see that even though there are three distinct pawns you can move around, the image itself is actually one PNG file. That's because of how you can easily reference portions of an image as separate HTML5 Canvas objects with the Canvas API.
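As a quick sketch of how that single sprite sheet can serve all three pawns (the helper name and frame dimensions below are my own assumptions, not code from this article): the nine-argument form of the Canvas API's drawImage takes a source rectangle, so each pawn is just an x-offset into the one shared image.

```javascript
// Hypothetical helper: assumes the three pawns sit side by side in the
// sprite sheet, each frameWidth pixels wide; pawnIndex is 0, 1 or 2.
function pawnSourceRect(pawnIndex, frameWidth, frameHeight) {
    return { sx: pawnIndex * frameWidth, sy: 0, sw: frameWidth, sh: frameHeight };
}

// In the browser, pawn i would then be drawn from the hidden allPawns <img> with:
// var r = pawnSourceRect(i, frameWidth, frameHeight);
// ctx.drawImage(allPawns, r.sx, r.sy, r.sw, r.sh, destX, destY, r.sw, r.sh);
```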
I will also add a new folder to the project named \assets and I will place the redBlueGreenPawn.png file in the folder so it is a part of the project. You'll get that file in the next (v002) download. Finally, we set up a Canvas element where our grid and pawns will be drawn. The real work is done by JavaScript, however. I know how many of you feel about that, but it is the way of the Web, so get used to it. :) We need to add references to some scripts that are provided to us by the Visual Studio project. Visual Studio allows you to drag and drop items to reference them. We need to add some references to the jQuery libraries now. Here's how you can do it in Visual Studio. In Solution Explorer, expand the \Scripts folder and you'll see a list of JavaScript files that Visual Studio added to the project for you. Actually, I believe NuGet added the jQuery ones because it knew you'd need them for use with SignalR. To add a reference to any of those, you simply click on it and drag it over to your source file. When you let go, it'll drop the correct script tag in as shown in the next image. Go ahead and add a reference to the jquery.signalR-2.2.2.min.js also. These are the minified versions of the JavaScript libraries that we need to get this working. I like to keep my custom JavaScript separate, so I'm going to add a folder and create a new JavaScript file named pawns.js. I'll show it to you and let you work out how to do this same thing. Take a close look at where I add the pawns.js reference in my HTM file. It's a bit important because that code references the Canvas and we are ensuring that the Canvas element is already loaded. Also notice the line that I've bolded in the HTML example below. It is a somewhat strange reference to a file that does not exist. That is part of how Microsoft shows you in the tutorial to reference SignalR. The code that will be referenced will be generated at runtime.
That's because the code is created by the SignalR C# libraries. We'll see more later on in the article. For now, make sure you add that reference. <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8" /> <title>pawns</title> <script src="Scripts/jquery-1.6.4.min.js"></script> <script src="Scripts/jquery.signalR-2.2.2.min.js"></script> <script src="signalR/hubs"></script> </head> <style> body, html { margin: 0; padding: 0; } </style> <body> <img style="visibility:hidden;display:none;" src="assets/redBlueGreenPawn.png" id="allPawns" /> <canvas id="gamescreen">Your browser does not support HTML5.</canvas> <script src="js/pawns.js"></script> </body> </html> You can also download v002 of the code at the top of this article and you'll be up to date at this point and ready to write the code to draw the pawns and grid. The code will build and run, but of course the displayed page will be completely blank. Let's take a look at how to draw the pawns and grid. I'm going to race through most of this, since it is only indirectly a part of what I want to talk about. First of all, I need to set up some variables I will need to use and set up the load event, which will fire once the browser has loaded the target page (index.htm) and all of its associated resources (images and JavaScript files).
// pawns.js var ctx = null; var theCanvas = null; var firebaseTokenRef = null; window.addEventListener("load", initApp); var mouseIsCaptured = false; var LINES = 20; var lineInterval = 0; var allTokens = []; // hoverToken -- token being hovered over with mouse var hoverToken = null; var pawnR = null; //$.on("mousemove", mouseMove function token(userToken){ this.size = userToken.size; this.imgSourceX = userToken.imgSourceX; this.imgSourceY = userToken.imgSourceY; this.imgSourceSize = userToken.imgSourceSize; this.imgIdTag = userToken.imgIdTag; this.gridLocation = userToken.gridLocation; } function gridlocation(value){ this.x = value.x; this.y = value.y } I also add a couple of types I've created (token and gridlocation) to make it easier to keep track of things. You'll see how they're used a bit later. When the browser loads everything, the initApp() function will run. Let's take a look at it. function initApp() { theCanvas = document.getElementById("gamescreen"); ctx = theCanvas.getContext("2d"); ctx.canvas.height = 650; ctx.canvas.width = ctx.canvas.height; initBoard(); } We begin to set up the Canvas where we'll draw the grid. Next, we call our custom code in initBoard(). function initBoard(){ lineInterval = Math.floor(ctx.canvas.width / LINES); console.log(lineInterval); initTokens(); } I calculate the line interval using the canvas width and the number of grid lines (LINES). After that I call initTokens() to get ready to draw the pawns (tokens). function initTokens(){ if (allTokens.length == 0) { allTokens = []; var currentToken =null; // add 3 pawns for (var i = 0; i < 3;i++) { currentToken = new token({ size:lineInterval, imgSourceX:i*128, imgSourceY:0*128, imgSourceSize:128, imgIdTag:'allPawns', gridLocation: new gridlocation({x:i*lineInterval,y:3*lineInterval}) }); allTokens.push(currentToken); } console.log(allTokens); } draw(); } Here, we just make sure the allTokens array is initialized to empty.
Next we add all the tokens to the array while sourcing the correct portion of our PNG image. It's easy to do with some math since each one is 128 pixels wide. Finally, we call our draw() function, which will draw all of our graphics to our Canvas element. function draw() { ctx.globalAlpha = 1; // fill the canvas background with white ctx.fillStyle = "white"; ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height); // draw the blue grid background for (var lineCount = 0; lineCount < LINES; lineCount++) { ctx.fillStyle = "blue"; ctx.fillRect(0, lineInterval * (lineCount + 1), ctx.canvas.width, 2); ctx.fillRect(lineInterval * (lineCount + 1), 0, 2, ctx.canvas.width); } // draw each token in its current location for (var tokenCount = 0; tokenCount < allTokens.length; tokenCount++) { drawClippedAsset( allTokens[tokenCount].imgSourceX, allTokens[tokenCount].imgSourceY, allTokens[tokenCount].imgSourceSize, allTokens[tokenCount].imgSourceSize, allTokens[tokenCount].gridLocation.x, allTokens[tokenCount].gridLocation.y, allTokens[tokenCount].size, allTokens[tokenCount].size, allTokens[tokenCount].imgIdTag ); } // if the mouse is hovering over the location of a token, show yellow highlight if (hoverToken !== null) { ctx.fillStyle = "yellow"; ctx.globalAlpha = .5 ctx.fillRect(hoverToken.gridLocation.x, hoverToken.gridLocation.y, hoverToken.size, hoverToken.size); ctx.globalAlpha = 1; drawClippedAsset( hoverToken.imgSourceX, hoverToken.imgSourceY, hoverToken.imgSourceSize, hoverToken.imgSourceSize, hoverToken.gridLocation.x, hoverToken.gridLocation.y, hoverToken.size, hoverToken.size, hoverToken.imgIdTag ); } } Basically, all we do is: fill the canvas background with white, draw the blue grid lines, draw each token at its current location, and highlight the token (if any) that the mouse is hovering over. The draw() function does use another helper method named drawClippedAsset(), which allows me to easily reference the tokens in our image.
It looks like the following: function drawClippedAsset(sx,sy,swidth,sheight,x,y,w,h,imageId) { var img = document.getElementById(imageId); if (img != null) { ctx.drawImage(img,sx,sy,swidth,sheight,x,y,w,h); } else { console.log("couldn't get element"); } } Once you add all of this code you will finally have the background grid drawn and the pawns drawn at their initial locations. If you get the code v003 at the top of this article you will be up to date so you can continue along with the article. Note: At this point I had to put a special addition into the web.config file because for some reason I was getting an odd error where OWIN was trying to run automatically. I'm sure it is related to the library being added with the NuGet install of SignalR. The line I added looks like: <appSettings> <add key="owin:AutomaticAppStartup" value="false" /> </appSettings> This seems to keep the thing running; otherwise OWIN attempts to load and the app crashes. We're now very close to getting things going with SignalR, but first we have to add the local code which will allow us to grab one of the pawns and move it around. Once we do that, we will allow the updated values to be broadcast to other clients. Let's add the code to do that work now. Yes, it's still more JavaScript. We need some event handlers which will do some work when the mouse is clicked (mousedown) and when the mouse is moved. Back in our initApp() function we want to add the event listeners for those two events. Now initApp() will look like the following (I've added the two addEventListener lines): function initApp() { theCanvas = document.getElementById("gamescreen"); ctx = theCanvas.getContext("2d"); ctx.canvas.height = 650; ctx.canvas.width = ctx.canvas.height; window.addEventListener("mousedown", mouseDownHandler); window.addEventListener("mousemove", handleMouseMove); initBoard(); } This is the straight-up pure JavaScript way to add those listeners. You can do it with jQuery too, but it's easy enough to do it like this. Now that we've registered the event listeners, we need to implement the methods handleMouseMove() and mouseDownHandler().
function handleMouseMove(e) { if (mouseIsCaptured) { if (hoverItem.isMoving) { var tempx = e.clientX - hoverItem.offSetX; var tempy = e.clientY - hoverItem.offSetY; hoverItem.gridLocation.x = tempx; hoverItem.gridLocation.y = tempy; if (tempx < 0) { hoverItem.gridLocation.x = 0; } if (tempx + lineInterval > 650) { hoverItem.gridLocation.x = 650 - lineInterval; } if (tempy < 0) { hoverItem.gridLocation.y = 0; } if (lineInterval + tempy > 650) { hoverItem.gridLocation.y = 650 - lineInterval; } allTokens[hoverItem.idx]=hoverItem; pawnR.server.send(hoverItem.gridLocation.x, hoverItem.gridLocation.y,hoverItem.idx); } draw(); } // otherwise user is just moving mouse / highlight tokens else { hoverToken = hitTestHoverItem({x:e.clientX,y:e.clientY}, allTokens); draw(); } } Any time the mouse is moved this function will run. I could've been more specific and said only run it when the mouse is moved over the Canvas element, but this will work. The first thing I do is check if mouseIsCaptured is true. That value gets set when mouseDownHandler fires (user clicks) and the method determines that the user is above one of the three pawns. That work is done in mouseDownHandler, so let's take a look. function mouseDownHandler(event) { var currentPoint = getMousePos(event); for (var tokenCount = allTokens.length - 1; tokenCount >= 0; tokenCount--) { if (hitTest(currentPoint, allTokens[tokenCount])) { currentToken = allTokens[tokenCount]; // the offset value is the diff. between the place inside the token // where the user clicked and the token's xy origin.
currentToken.offSetX = currentPoint.x - currentToken.gridLocation.x; currentToken.offSetY = currentPoint.y - currentToken.gridLocation.y; currentToken.isMoving = true; currentToken.idx = tokenCount; hoverItem = currentToken; console.log("b.x : " + currentToken.gridLocation.x + " b.y : " + currentToken.gridLocation.y); mouseIsCaptured = true; window.addEventListener("mouseup", mouseUpHandler); break; } } } In mouseDownHandler we simply iterate through the allTokens array and check each token's gridLocation. If we determine that the mouse pointer is within that area, we set the mouseIsCaptured boolean to true. I've broken out the code that checks to see if the mouse location is within any one of the tokens' locations and placed that code in a method called hitTest(). function hitTest(mouseLocation, hitTestObject) { var testObjXmax = hitTestObject.gridLocation.x + hitTestObject.size; var testObjYmax = hitTestObject.gridLocation.y + hitTestObject.size; if ( ((mouseLocation.x >= hitTestObject.gridLocation.x) && (mouseLocation.x <= testObjXmax)) && ((mouseLocation.y >= hitTestObject.gridLocation.y) && (mouseLocation.y <= testObjYmax))) { return true; } return false; } You can see we simply send in the mouse location and the object we want to test; the function checks whether the point falls inside that object's bounds and returns true or false. It makes it all very easy to use. There are a couple of other helper methods in there which will decide which pawn is being grabbed and which will continually call the draw() method so that the pawn is redrawn as the mouse is moved. At this point the user can grab any one of the pawns and move it around on the screen. Download the v004 zip file at the top of this article and you can try dragging the pawns around. We are finally ready to attempt to resolve the main challenge.
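Because hitTest() is pure math with no Canvas or mouse dependency, you can exercise it standalone. Here is a minimal sketch; the pawn literal below is a hand-built stand-in for one of the token objects, not the article's actual data:

```javascript
// Same bounds check as the article's hitTest(): a point is a hit when it
// falls inside the square [x, x+size] by [y, y+size] of the test object.
function hitTest(mouseLocation, hitTestObject) {
    var testObjXmax = hitTestObject.gridLocation.x + hitTestObject.size;
    var testObjYmax = hitTestObject.gridLocation.y + hitTestObject.size;
    return (mouseLocation.x >= hitTestObject.gridLocation.x) &&
           (mouseLocation.x <= testObjXmax) &&
           (mouseLocation.y >= hitTestObject.gridLocation.y) &&
           (mouseLocation.y <= testObjYmax);
}

// A fake pawn: a 32px square whose top-left corner sits at (64, 96).
var pawn = { size: 32, gridLocation: { x: 64, y: 96 } };

console.log(hitTest({ x: 70, y: 100 }, pawn)); // inside the square  -> true
console.log(hitTest({ x: 10, y: 10 }, pawn));  // outside the square -> false
```

This also makes the looping in mouseDownHandler clear: it simply asks this question once per token, from the top of the draw order down, and grabs the first token that answers true.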
What we want to do is: update every client that happens to be looking at our web page, so that when one of the pawns is moved, all other clients will see the pawn move. To get this going, the first thing we have to do is initialize the SignalR startup. The original Microsoft article tells us that we have to add a Startup.cs which does this work. Of course, it doesn't have to be named Startup, but it does need to be a new class in the project. Go ahead and right-click the Pawns project in Solution Explorer, choose the Add menu item that appears, and then choose Class... A dialog will pop up so you can add the name of the class you want to create. Go ahead and name it Startup.cs and click the [OK] button. The file will be added to your project and some template code will be added. Go ahead and replace the code in that file with the following: using Microsoft.Owin; using Owin; [assembly: OwinStartup(typeof(Pawns.Startup))] namespace Pawns { public class Startup { public void Configuration(IAppBuilder app) { // Any connection or hub wire up and configuration should go here app.MapSignalR(); } } } This code initializes the OWIN startup and maps the SignalR code so that it can build the JavaScript that you will need. It's basic boilerplate code to get everything ready. Now we can write some code to do something. What we want to do is send as little data to the other browsers as possible. That means we'd like to just send the gridLocation information of the moving pawn to the other browsers. That way, as the local pawn moves around and its location gets updated, the pawns in other clients are also updated. To do this work, we need to create a SignalR Hub class that will send the data to the other browsers. Now, we add another class to our project and this time we name it PawnHub. Notice in the next code snippet that we derive our PawnHub from the Hub type, which is a special SignalR library type that provides some special abilities.
Once you add that class, replace all of its code with the following: using Microsoft.AspNet.SignalR; namespace Pawns { [Microsoft.AspNet.SignalR.Hubs.HubName("pawnHub")] public class PawnHub : Hub { public void Send(int x, int y, int idx) { // Call the broadcastMessage method to update clients. Clients.Others.broadcastMessage(x, y, idx); } } } This is the piece that binds the C# to the JavaScript. When you build and run this code, it will generate a special JavaScript object named pawnHub. When that pawnHub object's Send() method is called, it will call the broadcastMessage() method, passing along the values that we have provided to it. Three Parameters Are Customizable The three parameters are customizable. I chose to send in the x and y locations of the object and the idx of the object from our allTokens array. This is a small amount of data to broadcast (just three integers) and it makes it quite easy to handle the code in JavaScript. Let's go add the JavaScript which will update all other client browsers now, so that when you move the pawn in one browser all others will be updated. Back in our initApp() we need to initialize our SignalR connection. The initApp() will now look like the following (I've added the SignalR wiring): function initApp() { theCanvas = document.getElementById("gamescreen"); ctx = theCanvas.getContext("2d"); ctx.canvas.height = 650; ctx.canvas.width = ctx.canvas.height; window.addEventListener("mousedown", mouseDownHandler); window.addEventListener("mousemove", handleMouseMove); pawnR = $.connection.pawnHub; pawnR.client.broadcastMessage = function (x, y, idx) { allTokens[idx].gridLocation.x = x; allTokens[idx].gridLocation.y = y; draw(); }; $.connection.hub.start().done(function () { console.log("Hub is started."); }); initBoard(); } We simply set up our pawnR, which is our variable to hold the SignalR hub proxy. As you can see, we've initialized it with a copy of the pawnHub object. This $.connection.pawnHub is in the code that SignalR generates for us at runtime. That's simply the syntax you use to get to that object. After you set up the handler, you have to start the hub by calling the start() method.
When it completes, it will run a function; here I have just added code to display a message with console.log(). This is the same example that you would see in the Microsoft tutorial. Once we do that, we have to tell the pawnHub object which JavaScript function to call when a client sends a broadcast message. You can see we have done that with the anonymous function, which simply takes the values sent (remember the C# PawnHub Send() method?) -- in our case, x, y and idx. It uses those values to update the appropriate token (from allTokens) and then calls draw() so that on the client, the pawn will move around (be drawn again) as the remote user moves the pawn. It's that simple. Now, we just need one line of code so that the data is sent when the user drags a pawn around. Since that code should run when the mouse is moved, we update the handleMouseMove handler with the pawnR.server.send(...) call you saw in the earlier listing. You can see that the method is named Send() and takes three parameters. That matches the definition of our Send() method in our PawnHub. We've mapped that PawnHub method to a JavaScript client method so that when it is called and broadcasts the message, all clients are updated. This gives us the entire solution. Get the complete solution in v005 of the download at the top of this article. Build and run, then open two separate browser windows pointing to the same URL and move a pawn. It will move in the other browser window too. I had to remove the following code from the web.config file so that the hub would start. If you leave it in, you will get errors. <appSettings> <add key="owin:AutomaticAppStartup" value="false" /> </appSettings> One more animated gif to show how fast it updates running locally. There were definitely challenges getting this working. Also, if you refresh the page the pawns always move back to their original locations.
That's because I did not do any persistence of those values to a data store. If you compare this SignalR solution to my Firebase solution, you will see that the Firebase one always keeps the locations of the pawns. That's because the Firebase solution inherently solves this by providing an object database where your items are stored remotely. I never could get it working on my GoDaddy hosted site. That's because the JavaScript seems to be generated on the fly in a virtual directory or something using that signalR/hubs reference. I never could figure it out. But I did not have to do anything special at all to get it working on the SmarterASP.net site. If you have trouble deploying your SignalR apps, definitely find out what your Web hosting service does. I hope you found this article a helpful example and introduction to using SignalR. I am also announcing my entry into writing science fiction. I am going to blog my book, Robot Hunters: Divided Resistance (Book 1 in the trilogy). You can read the first chapter at my web site / blog. It's rough and not quite finished, but maybe you're a sci-fi fan who is interested in writing as I am. Thanks for checking it out.
First version : 05/22/2017 This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
https://www.codeproject.com/Articles/1188400/Beginners-Guide-to-Using-SignalR-via-ASP-NET
Asp.Net Core - Exam Test 02
Total: 20 questions (100 marks)
1. Select some ways below that help send multiple models from the controller to a view (.cshtml) in Asp.Net Core
2. How can I invoke a view component in Asp.Net Core?
3. Choose the correct options about ViewBag in Asp.Net Core MVC
4. The host for an ASP.NET Core web application is configured in the _________ file.
5. How do you write C# code in a view (.cshtml) file?
6. Choose the correct options about middleware in Asp.Net Core
7. Which of the following is an entry point of an ASP.NET Core application?
8. How many ways are there to make a custom router in Asp.Net Core?
9. What is Kestrel in Asp.Net Core?
10. Which of the following is an entry point of an ASP.NET Core application?
11. ___________ applications can be installed and run on any platform without the .NET Core runtime
12. Which of the following is executed on each request in an ASP.NET Core application?
13. Which of the following middleware must be installed to serve static files in an ASP.NET Core application?
14. An ASP.NET Core web application uses __________ as an internal web server by default.
15. The Startup class must include the _________ method.
16. Which Helpers are introduced in ASP.NET Core?
17. What is the namespace of IHostBuilder that helps build the host in the Program.cs file?
18. What methods are used to enable session in ASP.NET Core?
19. Can I use the Html Tag helpers of Asp.Net Framework in Asp.Net Core?
20. Is PartialView from MVC 4 not supported in .Net Core?
https://quizdeveloper.com/quiz/aspdotnet-core-exam-test-02-qid6
Strategy Library: Intraday Dynamic Pairs Trading using Correlation and Cointegration Approach

Abstract

In this tutorial we implement a high-frequency, dynamic pairs trading strategy based on a market-neutral statistical arbitrage strategy using a two-stage correlation and cointegration approach. This strategy is based on George J. Miao's work. We applied this trading strategy to U.S. bank sector stocks and backtested it with 10-minute stock data from 2012 to 2013. Our trading strategy yields a compounding annual return up to 29.4% and a 0.968 Sharpe ratio. This strategy is especially profitable when the market is performing poorly. The profit results from mispricing, and mispricings are likely to happen when the market goes down or volatility increases. To explore this strategy further, we designed it to be flexible: we can change the data resolution to 5 minutes, 10 minutes or even 30 minutes by simply changing a parameter. It's also essential to choose optimized entering, closing and stop-loss thresholds. Everyone can have his/her own version of this strategy.

Introduction

High Frequency Trading (HFT) is a type of quantitative trading characterized by short holding periods and the use of sophisticated computer methods to trade securities rapidly. It aims to capture a small profit on every short-term trade (Cartea & Penalva, 2012). Statistical arbitrage is a situation where there is a statistical mispricing of one or more assets based on the expected values of these assets. When a profit opportunity arises from pricing inefficiencies between securities, traders can identify the statistical arbitrage situation through mathematical models. Statistical arbitrage depends heavily on the ability of market prices to return to a historical or predicted mean. The Law of One Price (LOP) lays the foundation for this assumption.
LOP states that two stocks with the same payoff in every state of nature must have the same current value (Gatev, Goetzmann, & Rouwenhorst, 2006). Thus, the spread between the prices of two close substitute stocks should have a stable, long-term equilibrium over time.

Data Description

In order to have more pairs with high correlation, we select stocks in a specific industry. Economically, we prefer traditional sectors because the companies in these sectors are more likely to be close substitutes. If we select N stocks, the number of pairs can be calculated by \(\textrm{C}_{n}^{2} = \frac{n(n-1)}{2}\). In the demonstrated strategy we used 80 stocks, so we have 3160 pairs in total. We used minute data and aggregate it into lower resolutions, thus 1 minute is the highest resolution for this strategy.

Correlation Approach

Correlation measures the degree to which the price trends of two stocks move together. A correlation filter is the first step to screen the candidate pairs. Consider two stocks A and B: a correlation coefficient between the stocks is a statistic that provides a measure of how strongly the two stocks A and B are associated. The correlation coefficient \(\rho\) of stock A and stock B is obtained by\[\rho = \frac{\sum_{i}^{N}(A_i - \bar{A})(B_i - \bar{B})}{[\sum_{i}^{N}(A_i - \bar{A})^2\sum_{i}^{N}(B_i - \bar{B})^2]^\frac{1}{2}}\] where \(\bar{A}\) and \(\bar{B}\) are the mean prices of stock A and stock B respectively, and N denotes the trading data range. \(\rho\) is in the range of [-1,1]. The more positive \(\rho\) is, the more positive the association of stock A and stock B is. However, pairs trading based on a correlation approach alone would have the disadvantage of instability over time. Correlation coefficients do not necessarily imply mean-reversion between the prices of the two stocks in a pair.
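The coefficient above is just the sample Pearson correlation, which is easy to compute directly. A quick sketch with NumPy (the two price series here are made-up numbers, not real market data):

```python
import numpy as np

def correlation(a, b):
    """Sample Pearson correlation of two price series (the rho above)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    num = np.sum((a - a.mean()) * (b - b.mean()))
    den = np.sqrt(np.sum((a - a.mean()) ** 2) * np.sum((b - b.mean()) ** 2))
    return num / den

# Two toy price paths that move almost in lockstep.
prices_a = [10.0, 10.5, 10.2, 11.0, 11.4, 11.1]
prices_b = [20.1, 21.0, 20.5, 22.1, 22.8, 22.3]
print(round(correlation(prices_a, prices_b), 4))  # close to 1, so this pair would pass a 0.9 filter
```

In practice one would call a library routine (e.g. pandas' DataFrame.corr, as the strategy code later does), but writing it out makes the formula concrete.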
In order to overcome this instability issue, a cointegration approach is further used as the second step of the selection process for the pairs.

Cointegration Approach

The cointegration concept is an innovative mathematical model in economics developed by Nobel laureates Engle and Granger. Cointegration states that, in some instances, even though two given time series are individually non-stationary, a specific linear combination of the two is actually stationary. In other words, the two time series move together in a lockstep pattern. The definition of cointegration is the following: assume that \(x_t\) and \(y_t\) are two non-stationary time series. If there exists a parameter \(\gamma\) such that\[z_t = y_t - \gamma x_t\] is a stationary process, then \(x_t\) and \(y_t\) are cointegrated. Cointegration is a powerful tool for investigating common asset trends in multivariate time series. In our case, let \(p_t^A\) and \(p_t^B\) be the prices of two stocks A and B respectively. If it is assumed that {\({p_t^A, p_t^B}\)} are individually non-stationary, there exists a parameter \(\gamma\) such that the following is a stationary process:\[P_t^A - \gamma P_t^B = \mu + \epsilon_t\] where \(\mu\) is the mean of the cointegration model and \(\epsilon_t\) is a stationary, mean-reverting process referred to as the cointegration residual. The parameter \(\gamma\) is known as the cointegration coefficient. The equation above represents a model of the cointegrated pair of stocks A and B. It's essential to understand how the cointegration residual together with the cointegration coefficient determines our trading direction. If \(\epsilon\) is positive beyond a given confidence interval, this is a signal that stock A is relatively overpriced and stock B is relatively underpriced, and we are going to long B and short A; if \(\epsilon\) is negative, we are going to long A and short B.
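A small numerical illustration of this definition, using simulated prices rather than real data: B is a random walk (non-stationary), and A is constructed as \(\gamma B_t + \mu + \epsilon_t\) with stationary noise, so the combination \(A_t - \gamma B_t\) is stationary even though neither price is. The OLS fit below is the first stage of the Engle-Granger procedure described in the next section:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
b = np.cumsum(rng.normal(size=n)) + 100.0              # random walk: non-stationary
gamma_true, mu_true = 1.5, 5.0
a = gamma_true * b + mu_true + rng.normal(scale=0.5, size=n)  # cointegrated with b

# Estimate mu and gamma by OLS of A on B.
X = np.column_stack([np.ones(n), b])
mu_hat, gamma_hat = np.linalg.lstsq(X, a, rcond=None)[0]

residual = a - gamma_hat * b - mu_hat   # epsilon_t: should hover around zero
print(round(gamma_hat, 2))              # close to the true 1.5
print(round(residual.std(), 2))         # close to the noise scale, 0.5
```

The residual here is the quantity the trading rules later standardize and threshold; its mean-reversion is exactly what the ADF test in the next section verifies.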
Cointegration Verification (optional reading part)

In the Engle-Granger method (Engle & Granger, 1987), we first set up a cointegration regression between stock A and stock B as stated in the equation above, and then estimate the regression parameters \(\mu\) and \(\gamma\) using ordinary least squares (OLS). Subsequently, we test the regression residual \(\epsilon_t\) to determine whether or not it is stationary. The most popular stationarity test in the area of cointegration, the Augmented Dickey-Fuller (ADF) test, is used on the regression residual \(\epsilon\) to determine whether it has a unit root. Testing for the presence of a unit root in the regression residual using the ADF test is given by\[\Delta Z_t = \alpha + \beta t + \gamma Z_{t-1} + \sum_{i = 1}^{p -1}\delta_i \Delta Z_{t-i} + \mu_t\] where \(\alpha\) is a constant, \(\beta\) is the coefficient on a time trend, p is the lag order of the autoregressive process, and \(\mu_t\) is a serially uncorrelated error term. The lag order p in the equation is usually unknown and therefore has to be estimated. To determine the lag order p, an information criterion for lag order selection is used. Here we choose the Bayesian Information Criterion (BIC):\[BIC = (T-p)\ln\frac{T\hat{\sigma}_p^2}{T-p} + T[1+\ln(\sqrt{2\pi})] + p\ln[\frac{\sum_{t=1}^{T}(\Delta Z_t)^2 -T\hat{\sigma}_p^2}{p}]\] where T is the sample size. The unit root test for the regression residual \(\epsilon\) using the ADF test is then carried out under the null hypothesis \(H_0 : \gamma = 0\) versus the alternative hypothesis \(H_1 : \gamma < 0\). The statistic of the ADF test is obtained by\[\textrm{ADF test} = \frac{\hat{\gamma }}{SE(\hat{\gamma })}\] The test statistic is compared with the critical value of the ADF test. If the test statistic is less than the critical value, then the null hypothesis is rejected, which means the regression residual \(\epsilon\) is stationary.
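The statistic \(\hat{\gamma}/SE(\hat{\gamma})\) can be computed directly. The sketch below implements only the plain Dickey-Fuller regression (lag order p = 0, intercept, no time trend) with NumPy to make the mechanics visible; a production version would instead use a library routine such as statsmodels' adfuller, which also handles the lag selection described above:

```python
import numpy as np

def df_stat(z):
    """Dickey-Fuller t-statistic for gamma in: dz_t = alpha + gamma * z_{t-1} + u_t.
    Strongly negative values reject the unit root, i.e. z looks stationary."""
    z = np.asarray(z, dtype=float)
    dz, zlag = np.diff(z), z[:-1]
    X = np.column_stack([np.ones(len(zlag)), zlag])
    beta, _, _, _ = np.linalg.lstsq(X, dz, rcond=None)
    resid = dz - X @ beta
    sigma2 = resid @ resid / (len(dz) - 2)         # error variance estimate
    cov = sigma2 * np.linalg.inv(X.T @ X)          # covariance of the OLS estimates
    return beta[1] / np.sqrt(cov[1, 1])            # gamma_hat / SE(gamma_hat)

rng = np.random.default_rng(1)
stationary = rng.normal(size=500)                  # white noise
random_walk = np.cumsum(rng.normal(size=500))      # unit root

print(df_stat(stationary) < -3.34)  # True: white noise easily passes the cutoff
print(df_stat(random_walk))         # much closer to zero; a random walk should not pass
```

The -3.34 cutoff used here is the one quoted in the next section of the article.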
Thus, when the null hypothesis is rejected, the two stock prices {\({p_t^A, p_t^B}\)} are cointegrated.

Pairs Trading Strategy

The pairs trading strategy uses trading signals based on the regression residual \(\epsilon_t\), which is modeled as a mean-reverting process. In order to select potential stocks for pairs trading, the two-stage correlation and cointegration approach is used. The first step is to identify potential stock pairs from the same sector, where the stock pairs are selected with a correlation coefficient of at least 0.9 using the correlation approach. The second step is to check the cointegration of the pairs that passed the correlation test. If the cointegration test value is equal to or less than -3.34, which is the critical value at a 95% confidence level, the null hypothesis \(H_0 : \gamma = 0\) is rejected; thus the residual \(\epsilon\) is stationary and the pair passes the cointegration test. The third step is to rank all of the stock pairs that passed the two-stage test according to their cointegration test values. The smaller the cointegration test value is, the higher the rank the stock pair is assigned. The final selection of stock pairs for trading is taken from the top of the ranking. The last step of the strategy is to define the trading rules. To open a pairs trade, the regression residual \(\epsilon_t\) must cross over the positive \(\sigma\) standard deviation threshold above the mean or cross under the negative \(\sigma\) standard deviation threshold below the mean. If the residual is positive, we short stock A and long stock B; if the residual is negative, we long stock A and short stock B. When the regression residual \(\epsilon_t\) returns to a certain level, the pairs trade is closed. Furthermore, in order to prevent losing too much on a single pairs trade, a stop-loss is used to close the pair when the residual hits the \(4\sigma\) positive or negative standard deviation level.
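The opening, closing and stop-loss rules reduce to a small state machine on the standardized residual \(\frac{\epsilon - \bar{\epsilon}}{\sigma}\). Below is an illustrative sketch; the function name is my own, and the default threshold values (2.32, 0.5, 4.5) are the ones discussed in the Parameter Adjustment section rather than anything prescribed by the paper:

```python
def pairs_signal(position, epsilon, mean, std,
                 open_thr=2.32, close_thr=0.50, stop_thr=4.5):
    """Map the current cointegration residual to a target position.

    position: 0 flat, -1 short A / long B (opened on a high residual),
              +1 long A / short B (opened on a low residual).
    Returns the new target position.
    """
    z = (epsilon - mean) / std               # standardized residual
    if position == 0:
        if z > open_thr:
            return -1                        # A overpriced vs B: short A, long B
        if z < -open_thr:
            return +1                        # A underpriced vs B: long A, short B
        return 0
    # Already in a trade: close on mean reversion, or bail out at the stop-loss.
    if abs(z) < close_thr or abs(z) > stop_thr:
        return 0
    return position

# Residual wanders out past the open threshold, then reverts to the mean.
pos = 0
for eps in [0.5, 2.5, 1.8, 0.3]:
    pos = pairs_signal(pos, eps, mean=0.0, std=1.0)
print(pos)  # 0: opened short at z=2.5, held at z=1.8, closed once |z| fell under 0.5
```

Because the open threshold exceeds the close threshold, the rule has hysteresis: a pair does not flip-flop in and out of a trade on small residual wiggles.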
In the training period, the training data covers a 3-month window, which rolls forward dynamically. Immediately after the training period, we begin a one-month trading period, and the rolling window automatically shifts ahead to record the new prices of the stocks in each pair. After the first trading period, we use the updated stock prices to select our pairs again and begin another trading period.
Parameter Adjustment
The performance of the strategy is sensitive to the parameters. There are four main parameters to adjust: the opening threshold, the closing threshold, the stop-loss threshold, and the data resolution. The opening threshold is the residual's z-score, \(\frac{\epsilon - \bar{\epsilon}}{\sigma}\), i.e. by how many standard deviations the residual deviates from its mean. By default we set it to 2.32 and -2.32, the critical values of the 99% confidence interval if we assume the residual is normally distributed. The closing threshold is calculated in the same way; we set it to 0.5 by default to close early and prevent further divergence. The stop-loss threshold is set to 4.5. This depends on the level of mispricing we can bear: the higher our tolerance to risk, the higher we can set this parameter. However, if we set this number too low, too many pairs may be closed by the stop-loss before reverting.
Method
In this trading strategy we define a class named 'pairs'. We manage pairs instead of stocks directly, which makes it more convenient to calculate correlation and cointegration, update the stock prices in each pair, and trade on the selected pairs.
Step 1: Pairs Class Definition
A pair is made up of two stocks, stock A and stock B. This class has several properties.
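The interaction of the three thresholds can be sketched in a few lines. The function name and the returned labels below are illustrative, not taken from the strategy code.

```python
def residual_signal(error, mean, sd, open_z=2.32, close_z=0.5, stop_z=4.5):
    """Map the current residual to an action using the thresholds from
    the text: 2.32 to open, 0.5 to close, 4.5 to stop loss."""
    z = (error - mean) / sd          # z-score of the residual
    if abs(z) >= stop_z:
        return "stop-loss"
    if z >= open_z:
        return "short A / long B"    # residual above the upper band
    if z <= -open_z:
        return "long A / short B"    # residual below the lower band
    if abs(z) <= close_z:
        return "close"
    return "hold"

print(residual_signal(3.0, 0.0, 1.0))   # short A / long B
print(residual_signal(0.2, 0.0, 1.0))   # close
```

Note the real algorithm also requires the residual to cross back through the opening band before a trade is opened; this sketch only classifies the current z-score.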
The basic properties include the symbols of stock A and stock B, the pandas DataFrame that contains the time and prices of the two stocks, the current error, the error of the last data point, and the lists that record stock prices for update purposes. Instead of updating the DataFrame every 5 minutes, we record the prices in lists and update the DataFrame monthly. This speeds up the algorithm at least 10 times, because manipulating a DataFrame is very time consuming. The cor_update method is used every month to update the correlation between the two stocks in the pair. The cointegration_test method is also used monthly to do the OLS regression, conduct the ADF test, and calculate the mean and standard deviation of the residual. The method also assigns these calculated values as properties of the pair object.

class pairs(object):

    def __init__(self, a, b):
        self.a = a
        self.b = b
        self.name = str(a) + ':' + str(b)
        self.df = pd.concat([a.df, b.df], axis=1).dropna()
        # The number of bars in the rolling window is determined by the
        # resolution, so we read it from the shape of the DataFrame here.
        self.num_bar = self.df.shape[0]
        self.cor = self.df.corr().iloc[0, 1]
        # Set the initial signals to 0
        self.error = 0
        self.last_error = 0
        self.a_price = []
        self.a_date = []
        self.b_price = []
        self.b_date = []

    def cor_update(self):
        self.cor = self.df.corr().iloc[0, 1]

    def cointegration_test(self):
        self.model = sm.ols(formula='%s ~ %s' % (str(self.a), str(self.b)), data=self.df).fit()
        # Conduct the ADF test on the residual. ts.adfuller() returns a
        # tuple whose first element is the test statistic.
        self.adf = ts.adfuller(self.model.resid, autolag='BIC')[0]
        self.mean_error = np.mean(self.model.resid)
        self.sd = np.std(self.model.resid)

    def price_record(self, data_a, data_b):
        self.a_price.append(float(data_a.Close))
        self.a_date.append(data_a.EndTime)
        self.b_price.append(float(data_b.Close))
        self.b_date.append(data_b.EndTime)

    def df_update(self):
        new_df = pd.DataFrame({str(self.a): self.a_price, str(self.b): self.b_price}, index=[self.a_date]).dropna()
        self.df = pd.concat([self.df, new_df])
        self.df = self.df.tail(self.num_bar)
        # After updating the DataFrame, empty the lists for the incoming data
        self.a_price = []
        self.a_date = []
        self.b_price = []
        self.b_date = []

Step 2: Generate and Clean Pairs
The function generate_pairs generates pairs from the stock symbols. self.pair_threshold and self.pair_num are pre-determined to control the number of candidate pairs. The pairs in self.pair_list are kept and updated throughout the backtesting period. We set self.pair_threshold to 0.88 and self.pair_num to 120 to limit the number of pairs in the list; if we put too many pairs in the list, the backtest becomes too time consuming. The function pair_clean is called after the two-stage screen. If the first pair contains stock A and stock B, and the second pair contains stock B and stock C, we remove the second pair because the overlapping signals would disturb the balance of our portfolio.
def generate_pairs(self):
    for i in range(len(self.symbols)):
        for j in range(i+1, len(self.symbols)):
            self.pair_list.append(pairs(self.symbols[i], self.symbols[j]))
    self.pair_list = [x for x in self.pair_list if x.cor > self.pair_threshold]
    self.pair_list.sort(key=lambda x: x.cor, reverse=True)
    if len(self.pair_list) > self.pair_num:
        self.pair_list = self.pair_list[:self.pair_num]

def pair_clean(self, list):
    l = []
    l.append(list[0])
    for i in list:
        symbols = [x.a for x in l] + [x.b for x in l]
        if i.a not in symbols and i.b not in symbols:
            l.append(i)
        else:
            pass
    return l

Step 3: Warming-up Period
This part is inside the OnData step. We set self.num_bar equal to the number of TradeBars in three months, which is determined by the resolution. During this period we fill the stock price lists, and assign each stock's price list to the symbol as a property. We also remove a symbol from the symbol list if it has no data.

if len(self.symbols[0].prices) < self.num_bar:
    # Iterate over a copy so that removing a symbol is safe
    for symbol in list(self.symbols):
        if data.ContainsKey(symbol) is True:
            symbol.prices.append(float(data[symbol].Close))
            symbol.dates.append(data[symbol].EndTime)
        else:
            self.Log('%s is missing' % str(symbol))
            self.symbols.remove(symbol)
    self.data_count = 0
    return

Step 4: Pairs Selection
This process is also inside the OnData step. It generates pairs if this is the first trading period of the algorithm; if not, it updates the DataFrame and correlation coefficient of each pair in self.pair_list. After that, the pairs with a correlation coefficient higher than 0.9 are selected into self.selected_pair. All the pairs in self.selected_pair are then tested for cointegration, and the pairs with a test value less than -3.34 are selected into the final list. This step also limits the number of pairs in the final list; by default we set self.selected_num to 10. self.count is a flag that counts the number of data points received.
Once it reaches the one-month count, one trading period has passed and it is reset to 0.

if self.count == 0 and len(self.symbols[0].prices) == self.num_bar:
    if self.generate_count == 0:
        for symbol in self.symbols:
            symbol.df = pd.DataFrame(symbol.prices, index=symbol.dates, columns=['%s' % str(symbol)])
        self.generate_pairs()
        self.generate_count += 1
        self.Log('pair list length:' + str(len(self.pair_list)))
        for pair in self.pair_list:
            pair.cor_update()
    # Update the DataFrame and correlation selection
    if len(self.pair_list[0].a_price) != 0:
        for pair in self.pair_list:
            pair.df_update()
            pair.cor_update()
    self.selected_pair = [x for x in self.pair_list if x.cor > 0.9]
    # Cointegration test
    for pair in self.selected_pair:
        pair.cointegration_test()
    # self.BIC holds the ADF critical value threshold (-3.34 in the text)
    self.selected_pair = [x for x in self.selected_pair if x.adf < self.BIC]
    self.selected_pair.sort(key=lambda x: x.adf)
    # If no pair passed the two-stage test, return.
    if len(self.selected_pair) == 0:
        self.Log('no selected pair')
        self.count += 1
        return
    # Clean the pairs to avoid overlapping stocks.
    self.selected_pair = self.pair_clean(self.selected_pair)
    # Assign a property to each selected pair; this is a signal used for trading.
    for pair in self.selected_pair:
        pair.touch = 0
        self.Log(str(pair.adf) + pair.name)
    # Limit the number of selected pairs.
    if len(self.selected_pair) > self.selected_num:
        self.selected_pair = self.selected_pair[:self.selected_num]
    self.count += 1
    self.data_count = 0
    return

Step 5: Trade Period
It would be too long to read if we pasted all the code of the trading period together, so we separate it into three parts: updating pairs, opening pair trades, and closing pair trades. All of these lines are inside the OnData step, under the condition if self.count != 0 and self.count < self.one_month, which means we are in the trading period.
Updating Pairs
This step updates the stock prices in each pair.
It also updates the signal called 'last_error'; immediately after this, the pairs receive their new signals.

num_select = len(self.selected_pair)
# Iterate over a copy so that removing a pair is safe
for pair in list(self.pair_list):
    if data.ContainsKey(pair.a) is True and data.ContainsKey(pair.b) is True:
        pair.price_record(data[pair.a], data[pair.b])
    else:
        self.Log('%s has no data' % str(pair.name))
        self.pair_list.remove(pair)
for pair in self.selected_pair:
    pair.last_error = pair.error
for pair in self.trading_pairs:
    pair.last_error = pair.error

Opening Pair Trades
For each pair in self.selected_pair, we take the current prices of the stocks and use the cointegration model to calculate the residual \(\epsilon\), which is assigned to the pair as a property named 'error'. self.trading_pairs is a list that stores the pairs currently being traded: once a pair trade is opened, the pair is added to the list, and it is removed when the trade is closed. The property 'touch' is a signal. If the residual \(\epsilon\) crosses above the positive threshold (\(+2.32\sigma\) here), the signal becomes +1; if it crosses below the negative threshold (\(-2.32\sigma\)), the signal becomes -1. For pairs with a +1 signal, if the error then crosses back down through the positive threshold, there is a signal to open a trade: we long stock B and short stock A. For pairs with a -1 signal, if the error crosses back up through the negative threshold, we long stock A and short stock B. When opening a trade, we need to record the current model and the current mean and standard deviation of the residual. This is necessary because if we enter a new trading period while the trade is still open, the cointegration model, mean, and standard deviation of the pair will have changed, and we need the original thresholds to close the trade. While adding the pair to self.trading_pairs, we also reset the signal 'touch' to 0 for further use.
for i in self.selected_pair:
    price_a = float(data[i.a].Close)
    price_b = float(data[i.b].Close)
    i.error = price_a - (i.model.params[0] + i.model.params[1]*price_b)
    if (self.Portfolio[i.a].Quantity == 0 and self.Portfolio[i.b].Quantity == 0) and i not in self.trading_pairs:
        if i.touch == 0:
            if i.error < i.mean_error - self.open_size*i.sd and i.last_error > i.mean_error - self.open_size*i.sd:
                i.touch += -1
            elif i.error > i.mean_error + self.open_size*i.sd and i.last_error < i.mean_error + self.open_size*i.sd:
                i.touch += 1
            else:
                pass
        elif i.touch == -1:
            if i.error > i.mean_error - self.open_size*i.sd and i.last_error < i.mean_error - self.open_size*i.sd:
                self.Log('long %s and short %s' % (str(i.a), str(i.b)))
                i.record_model = i.model
                i.record_mean_error = i.mean_error
                i.record_sd = i.sd
                self.trading_pairs.append(i)
                self.SetHoldings(i.a, 5.0/(len(self.selected_pair)))
                self.SetHoldings(i.b, -5.0/(len(self.selected_pair)))
                i.touch = 0
        elif i.touch == 1:
            if i.error < i.mean_error + self.open_size*i.sd and i.last_error > i.mean_error + self.open_size*i.sd:
                self.Log('long %s and short %s' % (str(i.b), str(i.a)))
                i.record_model = i.model
                i.record_mean_error = i.mean_error
                i.record_sd = i.sd
                self.trading_pairs.append(i)
                self.SetHoldings(i.b, 5.0/(len(self.selected_pair)))
                self.SetHoldings(i.a, -5.0/(len(self.selected_pair)))
                i.touch = 0
        else:
            pass
    else:
        pass

Closing Pair Trades
This part controls the exit from a pair trade and works similarly to the opening part. It uses the recorded original model and thresholds to determine whether or not we should close the position. If the residual \(\epsilon\) reaches our closing threshold, we liquidate stock A and stock B to close the trade. If the residual continues to deviate from the mean and goes too far, we also close the position to stop the loss. When we close a pair trade, we also remove the pair from self.trading_pairs.
# Iterate over a copy so that removing a pair is safe
for i in list(self.trading_pairs):
    price_a = float(data[i.a].Close)
    price_b = float(data[i.b].Close)
    i.error = price_a - (i.record_model.params[0] + i.record_model.params[1]*price_b)
    # Close when the residual crosses back through the closing threshold
    # from either side.
    if ((i.error < i.record_mean_error + self.close_size*i.record_sd
         and i.last_error > i.record_mean_error + self.close_size*i.record_sd)
        or (i.error > i.record_mean_error - self.close_size*i.record_sd
            and i.last_error < i.record_mean_error - self.close_size*i.record_sd)):
        self.Log('close %s' % str(i.name))
        self.Liquidate(i.a)
        self.Liquidate(i.b)
        self.trading_pairs.remove(i)
    # Stop-loss: the residual has moved too far beyond the recorded mean.
    elif (i.error < i.record_mean_error - self.stop_loss*i.record_sd
          or i.error > i.record_mean_error + self.stop_loss*i.record_sd):
        self.Log('close %s to stop loss' % str(i.name))
        self.Liquidate(i.a)
        self.Liquidate(i.b)
        self.trading_pairs.remove(i)
    else:
        pass

Result
We used 10-minute resolution data to backtest the strategy from Jan 2013 to Dec 2016. To demonstrate the in-sample training results, we randomly selected a training period from 2016-09-07 to 2013-11-30. The following table shows the top 10 selected pairs in that training period. We can see that the pair with the highest correlation coefficient does not necessarily have the best ADF test value. We rank by ADF test value because it is more robust. The upper part of the following chart plots the stock prices of the pair ING vs TCB; the lower part plots the residual's deviation from its mean in units of standard deviation. There are 5 trading opportunities if we set the opening threshold to 2.32. The following chart is the density plot of the residual error; from its shape we can see the error is approximately normally distributed.
Summary
The strategy is considered a market-neutral strategy because it is a long/short strategy betting on price convergence. Our backtested beta is -0.112, which is within our expectation. Theoretically, the higher the resolution we use, the higher the win rate: on one hand, higher resolution increases the number of data points in the training period, which makes it harder to pass the two-stage test; on the other hand, higher-resolution data lets us capture minor profits more accurately.
However, there is a trade-off between performance and backtesting time: higher resolution drastically increases the backtesting time. The number of stocks in the Initialize step also affects performance. Theoretically, the more stocks we have, the better the pairs we are likely to pick, but too many stocks is also time consuming. What's worth mentioning is that the optimized parameters are different for each sector; they depend on the features of the price patterns in the specific industry. Plotting the pair prices and the residual to observe them is a good way to adjust the thresholds.
References
- George J. Miao, High Frequency and Dynamic Pairs Trading Based on Statistical Arbitrage Using a Two-Stage Correlation and Cointegration Approach. Online Copy
- Cartea & Penalva, 2012, Where is the value in high frequency trading? Online Copy
- Gatev, Goetzmann, & Rouwenhorst, 2006, Pairs trading: Performance of a relative-value arbitrage rule. The Review of Financial Studies, 19(3), 797–827. Online Copy
- Engle & Granger, 1987, Co-integration and error correction: Representation, estimation, and testing. Econometrica, 55(2), 251–276. Online Copy
In this post, we will implement the functionality to create a dynamic Apex instance. This is mostly required when we have an interface method that is implemented by multiple Apex classes and has to be called dynamically. To create a dynamic Apex instance, we make use of Type.forName(). Let's hop into the implementation with a common scenario.
Implementation
First, create the interface FruitInterface with a method printFruitName() that takes no parameters.
FruitInterface.apxc

public interface FruitInterface {
    void printFruitName();
}

Create the Apex classes MangoFruit and OrangeFruit and implement the FruitInterface interface. Provide a body for the method printFruitName(); this method will simply display the name of the fruit.
MangoFruit.apxc

public class MangoFruit implements FruitInterface {
    public void printFruitName(){
        System.debug('This is Mango Fruit');
    }
}

OrangeFruit.apxc

public class OrangeFruit implements FruitInterface {
    public void printFruitName(){
        System.debug('This is Orange Fruit');
    }
}

Create a Dynamic Apex Instance
To create a dynamic Apex instance, we make use of Type.forName(). First, create an Apex class FruitController with a method getFruit() which accepts the name of a class as a parameter. If the parameter is MangoFruit, it should call the printFruitName() method of the MangoFruit class; if the parameter is OrangeFruit, it should call the printFruitName() method of the OrangeFruit class. For this, we usually write code with multiple if conditions, something like below:
FruitController.apxc

public class FruitController {
    public static void getFruit(String className){
        FruitInterface objFruit;
        if(className == 'MangoFruit'){
            objFruit = new MangoFruit();
        } else if(className == 'OrangeFruit'){
            objFruit = new OrangeFruit();
        }
        objFruit.printFruitName();
    }
}

But this implementation is not scalable. In the future, we might need to add another fruit class that implements FruitInterface, say BananaFruit.
In that case, we would need to change FruitController and add another if condition to handle the instance of BananaFruit.
Improved FruitController using Type.forName()
In such cases, we can make use of Type.forName(). Call the forName() method of the Type class and pass the name of the class as a parameter; this returns a Type instance. Then call the newInstance() method on that Type instance to get an instance of the Apex class whose name was passed in.

public class FruitController {
    public static void getFruit(String className){
        Type dynamicApexType = Type.forName(className);
        FruitInterface objFruit = (FruitInterface) dynamicApexType.newInstance();
        objFruit.printFruitName();
    }
}

No matter how many classes implement FruitInterface, this code will run without any changes. This is how we can make use of the Type.forName() method to create a dynamic Apex instance. If you want to know more about the Type Apex class, you can check the official Salesforce documentation. Thanks for reading.
That's Vimprovement! A Better vi
The Linux system comes with a vi clone called Vim. However, this editor can do more than just mimic vi. It has literally hundreds of additional functions, including commands for a help system, multiple windows, syntax coloring, program compilation and error correction, advanced searching and many more. By default, Vim starts in vi-compatibility mode, which means that many of the advanced features are turned off. To turn on these Vim improvements you need to create a $HOME/.vimrc file. Listing 1 contains a sample .vimrc file. You can also create one by copying your .exrc file if you have one; a zero-length file works as well. There are two flavors of the Vim editor. The command vim starts the console version of the editor, in which you edit inside the terminal window where you executed the command. The gvim command creates its own window. This second command is preferred because you get features, such as a command menu and toolbar, not found in the console version. One of the most useful innovations in Vim is the on-line, integrated help. The :help command displays a help window. For help on a specific subject, :help / displays the help text for the search (/) command (see Figure 1). You can move through this text using the normal editing commands, such as h, j, k, l, <Page-Up>, <Page-Down>, etc. As you scroll through the text, you'll see some lines that look like /<CR> Search forward for the [count]'th latest used pattern |last-pattern| with latest used |{offset}|. The text enclosed in vertical bars (|) is a Vim hyperlink. Position the cursor over one of these items (say |last-pattern|) and press CTRL-]. This will make the screen jump to the given subject. To go back to where you were before the jump, use the command CTRL-T. To get out of the help system, use the normal Vim exit commands :q or ZZ.
To see how some of the new Vim commands can help you, start by creating a simple program file containing the following text:

#include "even.h"

int even(int value)
{
    if (value & 1) == 1)    // Deliberate mistake
        return (1);
    return (0);
}

The first thing to notice is the changing colors of the text. This is called syntax highlighting: each component of a program (keyword, string, constant, preprocessor directive, etc.) gets a different color. In our initialization file, this was enabled by the :syntax on command. If you do a :syntax off, the highlighting disappears. The next thing to note as you type in the file is that you don't have to do any indenting; all the lines are automatically indented for you. This is because of the cindent option being turned on by the magic lines:

:autocmd FileType c,cpp :set cindent

This applies to C and C++ files only. (Unfortunately a full discussion of autocommands is beyond the scope of this article. You can do a :help autocmd for full instructions.) Note: the actual commands in the sample .vimrc file are a little fancier than the ones presented in this section, but they do the same thing. Now, let's have a little fun. Position the cursor on the "r" of one of the return statements and type xxxxxx. The return disappears. Now type "u" to undo the last change. The "n" returns. The old vi editor had only one level of undo; Vim has many. Type "u" again, and notice that you've got "rn" back. Type "u" four more times and the whole word returns. So how do you undo an undo? Through the new "redo" command: CTRL-r. By typing this a couple of times, you redo the deletes, which causes parts of "return" to disappear again. Now that you have a C file, you should create a header. It would be nice to be able to copy and paste the prototype from the C file into the header. With vi you couldn't do this. With Vim, it's easy. First, bring up both files in the editor. The command :split even.h splits the current window in two.
The top half gets the file even.h, and the bottom half even.c (see Figure 2). To move the cursor from the top window to the bottom window, use the command CTRL-Wj. To move up a window, use the command CTRL-Wk. To close a window, use ZZ or :quit.
Django's database-stored web content processor
Project description
deep-pages
About
My motivation to create this small package arose when I needed to create some small pages with static URLs and wanted to use some template tags, so unfortunately Django's flat pages weren't enough in my case.
Ok, so what does DeepPages do? With DeepPages you can store a page (or any other text-based content) in your database using the Page model, set a static URL for it and get it rendered. Simple.
How does it work? DeepPages provides two ways to be used in your Django project:
1. As Middleware
All you need is to add DeepPageTemplateRendererMiddleware as a middleware in your settings. I really do recommend inserting this middleware at the end of the MIDDLEWARE list.
2. As PageView (TemplateView inheritance)
Actually this was the first way I created. You need to include DeepPage's URL patterns in your project (see Install).
Signals
DeepPages has three signals that you can connect to. You can import them from signals.py. They are: page_requested, page_found and page_not_found.

from django.dispatch import receiver
from deeppages.signals import page_requested, page_found, page_not_found

@receiver(page_requested)
def page_requested_callback(sender, path, request):
    # do something here
    pass

@receiver(page_not_found)
def page_not_found_callback(sender, path, request):
    # do something here
    pass

@receiver(page_found)
def page_found_callback(sender, path, request, page, content, context):
    # do something here
    pass

In the page_found signal's receiver you can change the arguments content and context before they get rendered by the Middleware or PageView (depending on how you've configured your project).
Programmatic DeepPages Rendering
You can get a DeepPage rendered programmatically. To do this you just need to import the get_page_by_name function from utils.py.
Function statement:

def get_page_by_name(name, context=None, callback=None)

Where:
- name = the page name
- context (optional) = a dictionary with context for template processing
- callback (optional) = a function called with the arguments page and context before rendering; it should return the new page content
So, assuming that you've created a page named test-page, do this:

from deeppages.utils import get_page_by_name

def render():
    ctx = {'title': 'Test'}  # any template context you need
    rendered_page = get_page_by_name('test-page', ctx)
    # do something

Install

pip install deeppages

After the package install, add deeppages to your INSTALLED_APPS list.

INSTALLED_APPS = [
    ...
    'deeppages.apps.DeepPagesConfig',
    ...
]

If you want to use the Middleware way (personally, it's my preference btw), open your settings file and look for the MIDDLEWARE list.

MIDDLEWARE = [
    ...
    'deeppages.middleware.DeepPageTemplateRendererMiddleware',
]

Or, if you want to use the PageView way, you just need to open your project's URL patterns file (urls.py) and configure DeepPage as a URL pattern:

from deeppages.views import PageView

urlpatterns = [
    ...
    url(r'^deeppages/', include(deeppages.urls, namespace='deeppages')),
    ...
]

This way, if you create a page with URL /test-page/, it will be found at /deeppages/test-page/. Of course, you can also use it as the default URL seeker; for small projects this can work fine. For example:

urlpatterns = [
    ...
    url(r'', include(deeppages.urls, namespace='deeppages')),
    ...
]

Or, if you want to make your own view, you can import the PageView class and inherit from it:

from deeppages.views import PageView

class YourNewView(PageView):
    # do something
    pass

And your /test-page/ will then be found at /test-page/ as well. I'm using this package in a project that I'm developing and it isn't under a production environment yet, so be careful using this in production. Feel free to make it better and send your updates/suggestions. Enjoy.
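The callback argument of get_page_by_name described above can be illustrated without a Django project. The stub below only mimics the documented contract, namely that callback(page, context) runs before rendering and its return value replaces the page content; the class and function names here are stand-ins, not deeppages API.

```python
# A stand-in for the deeppages Page model, for demonstration only.
class FakePage:
    def __init__(self, content):
        self.content = content

def add_banner(page, context):
    # Called with the page and context before rendering; whatever it
    # returns becomes the new content. With deeppages installed you
    # would pass this function as `callback` to get_page_by_name.
    return '<!-- banner -->' + page.content

def fake_get_page_by_name(page, context=None, callback=None):
    content = page.content
    if callback:
        content = callback(page, context)
    return content  # a real call would render the template here

page = FakePage('<h1>{{ title }}</h1>')
print(fake_get_page_by_name(page, {'title': 'Hi'}, add_banner))
```

This mirrors the hook's intended use: injecting or rewriting content per request without storing the change in the database.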
Games, .NET, Performance, and More!
There are quite a few books that cover the concepts of packing data for the network. I remember reading a few articles in Game Developer Magazine that talked about several different levels of packing and finally some compression. You can always pick up a copy of Massively Multiplayer Game Development and learn all about a number of packaging techniques, though most of the network code relates to the propagation of network data, and not necessarily how to pack the data in a small space.
Too Big!
Check the comments. Super-bloated XML will definitely outstrip a fully expanded card name by many, many bytes. Jeez, I can't believe people actually send that over the wire. There are benefits in that you can't mistake the contents of the packet. The largest problem with XML for this type of data format lies in the eventual degeneration of the protocol or standard. Notice the XML is namespaced, but eventually someone will realize they can remove the namespace altogether and save a bunch of space. Words like hand and card are bloated, so maybe they'll change them to h and c. With every attribute you suffer the addition of 3 characters to your stream, and if you use an element body then you suffer repeating the element tag in order to close the element. The overhead is often insane. One could probably show how the lag induced by large packets of XML cascades to elongate various processes. If you played a few hundred hands (how many does one play in a sitting?) you might have up to 50% of the total play time wasted with mechanical and social lag.
Biggest
What is the biggest packet or message we can make to serve cards? What a dumb question, right? Wrong! It is a great question, because it shows that you don't need any experience with networking or data packing in order to serve cards over a network. As long as the data being passed uniquely identifies the card in some way we are fine. You could pass the string "Ace of Hearts"... Is that valid? Hell yeah.
If you are using two decks of cards is it still valid? Sure is. The biggest you can probably get is going to be passing a string representation of the card in question over the wire.
Big
Time to shrink it a bit and use some information about our data to make it smaller. We are probably using a message per card that we send; that isn't bad. So we need to go anywhere from 10 or 15 bytes for the string down to just a few. This is the funny part, because most users will take their knowledge of natural compression and end up with a human encoding of the card. A human encoding is likely something as simple as a 2-byte string coding. We'll use that for our Big encoding... A card now becomes a series of two string characters, "2H".
Normal
Up to now, a human could easily understand the coding we are using with minimal effort. However, at the normal level of encoding we are starting to look at how the computer is going to store the number. If we do something basic like label the cards 1 through 52, we can pack each card in a single byte. This is easy enough using the existing .NET BinaryWriter. That means we can pack a set of 5 cards into 5 bytes. Since the previous encoding was 2 bytes per card, we now have a 50% savings in size. Can we get even smaller?
Small
The cards have an equal likelihood of coming up, making compression very difficult to do, but the cards 1 through 52 can be represented in 6 bits quite easily, not the 8 we are packing them in now. We have two options for packing the cards at this level. The first is to simply pack them by left-shifting 6 bits at a time. If you are packing the information for 52 cards then this is the way you want to go. You have some extra options though, since you can also pack card/suit information if you desire. A suit is only 2 bits of information (00, 01, 10, 11), while 13 card values can be represented in 4 additional bits. This is the same as encoding the numerical ID of each card.
We don't save any space, but it does give us options.... Logically, we pack all of the card properties into a single integer for a hand of five. That's pretty awesome. I point out the second form only so that you know complex data types can also be packed quite easily. You probably want to add some enumeration types in order to clean the entire process up, but all in all you are in good hands. An entire hand in a single integer.
Smallest
Can you get it smaller? Definitely possible, but probably not worth it if you can't get yourself down to the next byte level. We'd have to shave off 6 bits at this point to get from 30 bits down to 24 bits. Remember that you can always use those 2 slack bits for something else, primarily a command identifier maybe? You could pass up to 4 commands using the remaining 2 bits, which might in your scenario stand for dealt, discard, view, or maybe the command identifies the player the card is going to, to implement view logic: players 1 through 4. With that in mind, since we have a use for these 2 bits already, maybe shaving off another 1 or 2 bits is something we should look at? If you think you have an algorithm in mind for packing 5 cards in less than 30 bits, go ahead and post it for all to share. It never hurts for potential opponents to think you're more than a little stupid and can hardly count all the money in your hip pocket, much less hold on to it.
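Although the post works in .NET, the 30-bit scheme above (five 6-bit cards plus the 2 slack bits used as a command identifier) can be sketched in a few lines of Python. The names and bit layout here are illustrative, not a wire format from the original post.

```python
def pack_hand(cards, command=0):
    """Pack five card ids (0-51) plus a 2-bit command into one
    32-bit integer: 2 command bits on top, then 5 x 6-bit cards."""
    assert 0 <= command < 4 and len(cards) == 5
    value = command
    for card in cards:
        assert 0 <= card < 52
        value = (value << 6) | card   # left-shift 6 bits at a time
    return value

def unpack_hand(value):
    """Recover the five card ids and the command identifier."""
    cards = []
    for _ in range(5):
        cards.append(value & 0x3F)   # low 6 bits = last card packed
        value >>= 6
    cards.reverse()
    return cards, value & 0x3

hand = [0, 12, 25, 38, 51]
packed = pack_hand(hand, command=2)
print(hex(packed), unpack_hand(packed))
```

The whole hand plus a command fits in one 32-bit integer, which matches the article's point that 30 bits suffice for the cards with 2 bits left over.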
http://weblogs.asp.net/justin_rogers/archive/2004/09/09/227260.aspx#6153677
In the previous post, we performed the general integration of MoSKito into the target project. In today's (rather short) step, we're going to add some business-value-related information with a Counter. Counters are a great way of counting stuff. Well, you probably didn't expect so much wisdom at the beginning of the post, so I'll try to rephrase: counters are good for counting or monitoring producers that are less technical. A typical monitored producer will generate statistics like request duration or average error count. Those are great for monitoring a web service. But what if you simply count the orders in your web shop? Then you're not interested in average duration or error count whatsoever, because you don't have or don't need this technical info. Instead, you have the burger-order count. So, that's what we are going to do today: count the burgers we sell. Since we used AOP integration in our last step, we will also use it for counters. There is a detailed post about counters in this blog; see it for more info.

I. Adding a Counter

To add an AOP-style counter, we need a class which, well, simply has a method we can call. So now let's add the counter:

package de.zaunberg.burgershop.service;

import net.anotheria.moskito.aop.annotation.Count;

@Count
public class OrderCounter {
    public void orderPlaced(){}
}

That's all: a class annotated with @Count and a method to call. Every call to any method of the class will be counted. Now we have to add the actual counting. The ShopServiceImpl is the location where an Order is actually generated, therefore it's also a good place to count orders. To count an order, we need an instance of the counter and a call to the orderPlaced method, so let's add both. First, we need a variable we can call:

@Monitor
public class ShopServiceImpl implements ShopService {
    private LinkedList<ShopableItem> items;
    private static Logger log = LoggerFactory.getLogger(ShopServiceImpl.class);
    //add the counter.
    private OrderCounter counter = new OrderCounter();

... and now the call itself:

@Override
public Order placeOrder(String... items) {
    ...
    counter.orderPlaced();
    return order;
}

Now, let's build the shop with mvn install and check what we have achieved. Click the link below to place a ready-made order (feel free to place the order in the normal way, though): This creates a new order. If we take a look at our MoSKito WebUI now (), we'll see we've got a new producer called OrderCounter within the Counter decorator, and we'll see that 1 order has been placed. If we add further methods to the counter class, we can monitor other business values, such as userRegistered, userLoggedIn, paymentReceived and so on. And all of this just by calling a method in an annotated class!

II. Counting by Parameter

But counters can do more. Let's assume we need to monitor the ingredients of our burgers and know how much of each ingredient we are selling, so we can order more. To do so, we add another counter class to our project, the IngredientCounter:

package de.zaunberg.burgershop.service;

import net.anotheria.moskito.aop.annotation.CountByParameter;

public class IngredientCounter {
    @CountByParameter
    public void ingredientUsed(String ingredient){
    }
}

You surely noticed the two differences from the previous counter:
1. We haven't annotated the class, but the method only.
2. We used @CountByParameter instead of @Count.

Since we have multiple ingredients, it wouldn't be very convenient to use the standard @Count annotation, because we would need to add a method for each ingredient, which wouldn't look elegant at all. @Count uses the name of the called method as a use-case name (stat name); @CountByParameter uses the first parameter of the method instead. This is much handier when the stat names are variables that we don't know when creating the application. We now add the new counter to the ShopServiceImpl:

@Monitor
public class ShopServiceImpl implements ShopService {
    ...
    private OrderCounter counter = new OrderCounter();
    private IngredientCounter ingredientCounter = new IngredientCounter();

    @Override
    public Order placeOrder(String... items) {
        ...
        Order order = new Order();
        for (String item : items){
            order.addItem(findItemByName(item));
            ingredientCounter.ingredientUsed(item);
        }
        counter.orderPlaced();
        return order;
    }
    ...
}

Let's build again, restart and place an order. Now we see two counters in the Counter decorator section. We also see that after one order is placed, the ingredient counter has 3 hits. This is logical, because our burger contains three ingredients. Let's find out which: apparently I ordered a wheat-bread burger with pork and extra cheese. Which is pretty regular, no cockroaches and other yummy stuff.

III. Creating Groups

Finally, let's add some metadata to our counters. Since both of them count business values, let's add them to a category "business", where we can later find all business-related producers. To do so, we pass an additional parameter to the @Count annotation:

@Count(category = "business", producerId = "orders")
public class OrderCounter {

As you can see, we also gave the counter a shorter name. @CountByParameter is a little different, because the method name is a mandatory part of the producer ID (at least for now), to separate multiple producers from each other. So let's rename the counter and the method to get a more readable WebUI:

@CountByParameter(category = "business", producerId = "ingredients")
public void used(String ingredient){

Now the WebUI has a business category, which is reachable from outside via a link (after we place another order), and the ingredient view also looks simpler. Of course, you can also add Accumulators and Thresholds to counters, and thus monitor those values over time. That's all for today. The next step will be about adding custom stats that aren't part of MoSKito's out-of-the-box package. Enjoy and see you soon!
http://blog.anotheria.net/msk/the-complete-moskito-integration-guide-step-2-add-some-counters/
To experiment, we'll use the following document. It has one title element and one verse element from the namespace, two verse elements from the namespace, and one verse element from the default namespace. <poem xmlns:red="" xmlns: <red:title>From Book IV</red:title> <blue:verse>The way he went, and on the Assyrian mount</blue:verse> <red:verse>Saw him disfigured, more then could befall</red:verse> <blue:verse>Spirit of happy sort: his gestures fierce</blue:verse> <verse>He marked and mad demeanor, then alone</verse> </poem> Sample documents and stylesheets are available in this zip file. Our first stylesheet has template rules that act on element nodes based on various conditions. Each adds a text node to the result tree about what it found (for example, "Found a red node:") followed by information about the node it found. Note that for the namespace, the stylesheet uses a prefix that is different from the one that the document above uses — instead of "blue", it uses the German word for "blue": "blau". <!-- xq255.xsl: converts xq254.xml into xq256.txt --> <xsl:stylesheet xmlns:xsl="" xmlns:red="" xmlns: <xsl:output <xsl:template Namespace nodes: <xsl:for-each <xsl:value-of<xsl:text> </xsl:text> </xsl:for-each> <xsl:apply-templates/> </xsl:template> <xsl:template Found a blue verse. 
name <xsl:value-of local-name <xsl:value-of namespace-uri <xsl:value-of contents <xsl:apply-templates/> </xsl:template> <xsl:template Found a red node: name <xsl:value-of local-name <xsl:value-of namespace-uri <xsl:value-of contents <xsl:apply-templates/> </xsl:template> <xsl:template Found a verse element from the default namespace: name <xsl:value-of local-name <xsl:value-of namespace-uri <xsl:value-of contents <xsl:apply-templates/> </xsl:template> <xsl:template </xsl:stylesheet> Let's look at the result of applying this stylesheet to the document above, before we talk about how it does what it does: Namespace nodes: xml blue red Found a red node: name red:title local-name title namespace-uri contents From Book IV Found a blue verse. name blue:verse local-name verse namespace-uri contents The way he went, and on the Assyrian mount Found a red node: name red:verse local-name verse namespace-uri contents Saw him disfigured, more then could befall Found a blue verse. name blue:verse local-name verse namespace-uri contents Spirit of happy sort: his gestures fierce Found a verse element from the default namespace: name verse local-name verse namespace-uri contents He marked and mad demeanor, then alone When the first template rule finds a poem element, it lists all the namespace nodes for that element. It does this by using an xsl:for-each instruction to count through the names in the namespace axis with any name ("*"), adding each one's name to the result tree by calling the name() function in an xsl:value-of instruction's select attribute. It then puts a single space after each name with an xsl:text element, so that they don't run together. In addition to the "blue" and "red" namespaces declared in the poem element's start-tag, note the "xml" namespace that starts the list in the result; an XSLT processor assumes that this was implicitly declared. (It's supposed to, but they don't all actually do so.) 
The second template rule looks for verse elements in the "blau" namespace. Remember, "blau" isn't really the namespace name — as we can see in the xsl:stylesheet start-tag, it's merely the prefix the stylesheet assigns to the namespace's real name, its URI. The sample source document has two verse elements from that namespace, and even though the stylesheet and source documents refer to this namespace with different prefixes, they're still referring to the same namespace -- so the XSLT processor recognizes them and adds two "Found a blue verse" text nodes to the result tree. Each of these result tree sections has four lines to tell us about the element node that the template processed: The first line uses the name() function to show us the element's full name. For all the verse elements from this namespace, the name is blue:verse. (The stylesheet's first template rule used the same function to retrieve namespace prefix names, and not element names, because namespace nodes were the type of node being handed to the name() function inside the xsl:for-each element that was counting through the namespace nodes.) A template rule looking for "verse" elements from the namespace hands this function element nodes, not namespace nodes, so it adds the element names to the result tree. The second line uses the local-name() function to show us the local part of the element's name—that is, the name that identifies it within that particular namespace. For an element with a full name of blue:verse, the local name is "verse". The third line uses the namespace-uri() function to get the full URI of the element's namespace. As we saw with the "blau" prefix, documents may assign any prefix to a namespace; it's the corresponding URI that really identifies the namespace.
For example, you can use "xsl" or "potrzebie" or "blue" as the prefix for your stylesheet's XSLT instructions, as long as the prefix is declared with the "http://www.w3.org/1999/XSL/Transform" URI so that your XSLT processor recognizes those elements as the special ones from the XSLT namespace. The fourth line shows the contents of the selected element node with an xsl:apply-templates instruction. The stylesheet's third template rule looks for any element in the "red" namespace and adds the same information to the result tree that the blue:verse template rule added. Because the source document included both a title element and a verse element from that namespace, both get a four-line report in the result. Their corresponding element type names show up in the "name" and "local-name" parts of the result tree. The stylesheet's final template rule suppresses any elements not accounted for in the first three template rules. We've seen how a template can select all the elements with a specific name from a specific namespace (in the example above, the verse elements from the "blau" namespace) and how it can select all the elements, regardless of their names, from a particular namespace (in the example, those from the "red" namespace). The next template shows how to select all the elements of a particular name regardless of their namespace: it has a match pattern for all the verse elements from any namespace.

<!-- xq257.xsl: converts xq254.xml into xq258.txt -->
<xsl:template match="*[local-name()='verse']">
Found a verse:
  name          <xsl:value-of select="name()"/>
  local-name    <xsl:value-of select="local-name()"/>
  namespace-uri <xsl:value-of select="namespace-uri()"/>
  contents      <xsl:apply-templates/>
</xsl:template>

Technically speaking, it's really matching all the elements for which the local part of their name is "verse". The match pattern looks for elements of any name ("*") that meet the condition in the predicate: the local-name() function must return a value of "verse".
When we apply this stylesheet to the document used in the earlier examples, the result shows two verse elements from the "blue" namespace, one from the "red" namespace, and one from the default namespace (that is, one with no specific namespace assigned to it—the last verse element in the source document). Found a verse: name blue:verse local-name verse namespace-uri contents The way he went, and on the Assyrian mount Found a verse: name red:verse local-name verse namespace-uri contents Saw him disfigured, more then could befall Found a verse: name blue:verse local-name verse namespace-uri contents Spirit of happy sort: his gestures fierce Found a verse: name verse local-name verse namespace-uri contents He marked and mad demeanor, then alone For a more realistic example, we'll convert certain elements of an XLink document, regardless of their element names, to HTML. The first template rule in the following stylesheet applies to elements with any name ("*") that meet both of the conditions in the predicate: They must have a type attribute in the XLink namespace with a value of "simple". They must have an href attribute in the XLink namespace. The value of this attribute doesn't affect whether the XSLT processor applies this template to the node.

<!-- xq259.xsl: converts xq260.xml into xq261.html -->
<xsl:stylesheet version="1.0"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
  xmlns:xlink="http://www.w3.org/1999/xlink"
  exclude-result-prefixes="xlink">

<xsl:output method="html"/>

<xsl:template match="*[@xlink:type='simple' and @xlink:href]">
  <a href="{@xlink:href}"><xsl:apply-templates/></a>
</xsl:template>

<xsl:template match="/">
  <html><body>
  <xsl:apply-templates/>
  </body></html>
</xsl:template>

</xsl:stylesheet>

If both of these conditions are true, the element—regardless of its element name—gets converted to an HTML a element in the result tree, with the source tree XLink element's href attribute value used for the HTML href attribute in the result tree version.
The template rule will do this to both the author and ingredient elements of the following document:

<recipe xmlns:xlink="http://www.w3.org/1999/xlink">
  <author xlink:href="http:/" xlink:type="simple">Joe "Cookie" Jones</author>
  <ingredients>
    <ingredient xlink:href="http:/" xlink:type="simple">flour</ingredient>
    <ingredient xlink:href="http:/" xlink:type="simple">sugar</ingredient>
  </ingredients>
  <steps/>
</recipe>

Because of the example's simplicity, the result won't look fancy in a browser, but it does demonstrate how the two different element types can both be converted to HTML a elements with one template rule because of the namespace of their attributes:

<html>
<body>
<a href="http:/">Joe "Cookie" Jones</a>
<a href="http:/">flour</a>
<a href="http:/">sugar</a>
</body>
</html>

It also demonstrates the use of the exclude-result-prefixes attribute in the xsl:stylesheet element that I mentioned in last month's column. This attribute tells the XSLT processor to keep the original elements' namespace declarations and prefixes out of the result, which helps to make the result something that any web browser would understand.
http://www.xml.com/pub/a/2001/05/02/trxml.html
16 June 2010 18:56 [Source: ICIS news] TORONTO (ICIS news)--Dow Chemical is preparing to restart plants at its petrochemicals complex in Schkopau, Germany. An electricity disruption late Tuesday caused a fire at Dow's synthetic rubber plant at Schkopau and forced the company to take all its plants there offline, a spokesman told ICIS news. He was confirming a German media report. The site's electricity had been restored and the fire had been quickly extinguished, he said. Dow was now working to restart production at Schkopau, where it produces synthetic rubber, polyethylene, polypropylene, polystyrene, caustic soda, vinyl chloride monomer, and adhesives for the automotive industry, among other products, the spokesman said. Dow had no timeline for when exactly the individual plants would be back in production, he said. In addition to Dow, major firms with production at Schkopau include INEOS, Vinnolit, Equipolymers, Manuli Stretch and RP Compounds, among others. Dow employs a staff of 6,000 in
http://www.icis.com/Articles/2010/06/16/9368592/dow-prepares-to-restart-german-schkopau-site-after-outage.html
Hi, I am new to Lucene. Hope the question is not too naive. From the Lucene FAQ, I know that an IndexSearcher instance should be shared by threads, rather than opening one for each thread. However, after an index rebuild, we need to create a new IndexSearcher instance and call close() on the old one. Here is the pseudo code:

public class SearchEngine {
    private IndexSearcher iSearcher = null;

    public void setIndexSearcher(IndexSearcher searcher) {
        this.iSearcher.close();
        this.iSearcher = searcher;
    }

    public void search() {
        iSearcher.search();
    }
}

As you can see, after an index rebuild, one thread calls SearchEngine.setIndexSearcher to install a new IndexSearcher. Before that, it also needs to clean up the resources of the old IndexSearcher by calling close(). At the same time, another thread may be in the middle of a search on the old IndexSearcher object. That is why I am concerned whether I might get some weird exception by calling searcher.close() and searcher.search() concurrently from different threads. Will that be a problem? Does Lucene take care of the synchronization between close() and search(), or is no synchronization needed at all? The bottom line is that I don't want to synchronize between close() and search(). Or is there a more elegant way of installing a new IndexSearcher after an index rebuild without any resource leak?
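One way to avoid synchronizing search() against close() is a reference-counted hand-off: the rebuilding thread publishes the new searcher and drops its reference to the old one, which is closed only when the last in-flight search releases it. The sketch below is plain Java, not the Lucene API; RefCountedSearcher and its methods are hypothetical stand-ins for IndexSearcher.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical stand-in for IndexSearcher: close() must not run while a
// search is in flight, so callers pin the instance with tryAcquire()/release().
class RefCountedSearcher {
    private final AtomicInteger refs = new AtomicInteger(1); // owner's reference
    private volatile boolean closed = false;

    boolean tryAcquire() {
        for (int r = refs.get(); r > 0; r = refs.get()) {
            if (refs.compareAndSet(r, r + 1)) return true;
        }
        return false; // count already hit zero; searcher is closed
    }

    void release() {
        if (refs.decrementAndGet() == 0) {
            closed = true; // a real implementation would call close() here
        }
    }

    boolean isClosed() { return closed; }
}

public class SafeSearchEngine {
    private volatile RefCountedSearcher current = new RefCountedSearcher();

    // Called after an index rebuild: publish the new searcher, then drop the
    // owner's reference to the old one. The old searcher closes as soon as
    // the last concurrent search releases it; search() needs no lock.
    public void setSearcher(RefCountedSearcher next) {
        RefCountedSearcher old;
        synchronized (this) {
            old = current;
            current = next;
        }
        old.release();
    }

    public void search() {
        RefCountedSearcher s;
        do { s = current; } while (!s.tryAcquire()); // retry if we raced a swap
        try {
            // s.search(query) would go here
        } finally {
            s.release();
        }
    }

    RefCountedSearcher currentSearcher() { return current; }

    public static void main(String[] args) {
        SafeSearchEngine engine = new SafeSearchEngine();
        RefCountedSearcher old = engine.currentSearcher();
        engine.search();                    // completes and releases its pin
        engine.setSearcher(new RefCountedSearcher());
        System.out.println(old.isClosed()); // true: no searches still in flight
    }
}
```

Later Lucene releases added a SearcherManager utility that packages exactly this acquire/release pattern, so check whether your version provides it before rolling your own.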
http://mail-archives.apache.org/mod_mbox/lucene-general/200909.mbox/%3C25667782.post@talk.nabble.com%3E
HTTP JSON Requests with Swift and Alamofire Editor’s note: See our latest post on using Carthage for adding Alamofire to your project. You don’t need to suffer through git submodules (the method described below) any longer. It didn’t take long after the introduction of Swift to begin seeing Stackoverflow questions asking about using AFNetworking, the popular Objective-C framework for making HTTP requests on iOS. Of course it can be done, as Swift and Objective-C can coexist together in the same project, but there’s the Objective-C way of doing things, and then there is the Swift way. Enter Alamofire, brought to you by the same author as AFNetworking. As you can guess, we’re interested in using this new framework with our Swift projects. Let’s get to it shall we. This tutorial will walk you through creating a new Xcode project using Swift to make use of the MyMemory language translator to translate simple phrases from English to Spanish. Of course we could have built an application that allowed the user to choose the source and destination languages, but we wanted to leave it as an exercise to the reader. Before you get started, I should point out that I’m using Xcode 6.1 (6A1052d). If you are using a different version, well, as they say, YMMV. While I suggest you go through this tutorial step-by-step, you can fast forward and download the working project on Bitbucket (see the end of this post for instructions to ensure Alamofire is updated in your checkout). Start Xcode and select File – New Project and create a Single View iOS application. For the Product Name we chose translator, and of course make sure your language is Swift. Before getting to Alamofire, let’s create a UI. Click on Main.storyboard to bring it up. Drag four labels and a text input field to the storyboard, arranged as follows: In this example we have also added constraints to the layout. 
I'm by no means an expert on using Xcode constraints, but if I did it correctly the UI should look appropriate on the iPhone 5, 6, or 6 Plus. The layout should also rotate correctly. Wire up the Text Field to the view controller as an IBOutlet. In this example I've named it textFieldToTranslate. Also wire up the label where we'll display our translated text; I've named it translatedTextLabel. Your IBOutlets in ViewController.swift should look like:

@IBOutlet weak var textFieldToTranslate: UITextField!
@IBOutlet weak var translatedTextLabel: UILabel!

Now, let's add the appropriate delegate methods to the view controller so we can interact with the text field. This includes:

- Adding UITextFieldDelegate to the ViewController class declaration
- Setting the delegate property of the UITextField to the view controller
- Adding the textFieldShouldReturn function to the ViewController class

I typically set the UITextField delegate property in the view controller viewDidAppear method, so let's add that:

override func viewDidAppear(animated: Bool) {
  super.viewDidAppear(animated)
  self.textFieldToTranslate.delegate = self
}

func textFieldShouldReturn(textField: UITextField) -> Bool {
  textField.resignFirstResponder()
  return true
}

We can compile and run our code as is, but of course it does next to nothing. Now, let's wire up Alamofire to translate the text that is in the text field.

Using Alamofire

Alamofire's setup is a bit different than with Objective-C frameworks. The Alamofire Github repository currently states, "Due to the current lack of proper infrastructure for Swift dependency management, using Alamofire in your project requires the following steps", followed by 7 steps to follow. We're going to follow those here, with some additional detail provided. Open a terminal (again, I like iTerm 2), and cd over to your Xcode project directory.
In our case we cd ~/projects/blogging/translator, and use the git submodule feature to check out the Alamofire code as a git submodule: Now, with the Mac Finder, locate your project directory, navigate into the Alamofire folder, and drag-and-drop the Alamofire.xcodeproj icon from the Finder window into the Xcode project. You should now see something like this in your Xcode project: Now, in Xcode, navigate to the target configuration window of Alamofire by clicking on the blue Alamofire project icon, and selecting the application target under the Targets heading in the sidebar. Ensure here that the Deployment Target of Alamofire is the same as the Deployment Target of the translator application. In our case, we are using a deployment target of 8.0. Now, click on the translator project icon (the blueprint icon), select the translator target, and in the tab bar at the top of that window open the Build Phases panel. Expand the Target Dependencies group, and add Alamofire: Finally, click on the + button at the top left of the panel (right under the label that says General) and select New Copy Files Phase. Rename this new phase to Copy Frameworks, set the Destination to Frameworks, and add Alamofire.framework. And that's all there is to it! Okay, so it's a pain in the ass doing all that, but I find it worth the trouble. For a minute or two of drag-and-drop-and-configure we have an excellent API to begin making HTTP requests with. Now that we have Alamofire added, let's use it in our textFieldShouldReturn function. Remember, textFieldShouldReturn is going to get called when the user presses the action key on the iOS keyboard associated with our text field.
First, add the statement import Alamofire at the top of your ViewController.swift file:

import Alamofire

Then, in textFieldShouldReturn we'll do the following:

let textToTranslate = self.textFieldToTranslate.text
let parameters = ["q":textToTranslate, "langpair":"en|es"]

Alamofire.request(.GET, "", parameters:parameters)
  .responseJSON { (_, _, JSON, _) -> Void in
    let translatedText: String? = JSON?.valueForKeyPath("responseData.translatedText") as String?

We're following the API at MyMemory for translating text from English to Spanish. Notice the simplicity of both the HTTP request with Alamofire and the handling of the response. .GET is an enumeration for the HTTP method to use for the request, followed by our URL and then a basic dictionary of URL parameters. The MyMemory API call returns a JSON string, so we can utilize the Alamofire responseJSON function to give us a JSON dictionary (it handles taking the JSON string returned in the body and converting it for us). Although it may look careless, the method by which we are getting the translatedText is good form. Since we declare translatedText as String? we are saying "This could be a String or nil." Moreover, by using JSON?.valueForKeyPath() we are saying, "JSON could respond to valueForKeyPath or it could be nil". If JSON is nil then it follows that translatedText will be nil as well. If JSON is not nil and we find a value at the given key path responseData.translatedText, then we have our translation available (which we'll display in the label). All of this code makes heavy use of chained optionals in Swift. For a refresher on Swift optionals in general, visit our post on the topic. Once we have the translated text we update our view controller label:

if let translated = translatedText {
  self.translatedTextLabel.text = translated
} else {
  self.translatedTextLabel.text = "No translation available."
}

Alamofire has a lot of features, and of course we've only scratched the surface.
Take a look through the extensive documentation on Github; it's all there: support for POST and the remaining cast of characters, various authentication methods (e.g., basic auth), uploading data, etc. Let's run our new application and translate Good night, friends! into Spanish. Nice, it worked! Of course, my Spanish-speaking amigos will point out that the opening signo de exclamación is missing, but that can be corrected. That's it for today, I hope you enjoyed the tutorial. Again, if you'd like a working example visit the Bitbucket repository. After checking out the repository, make sure to run the following in your alamofire-translator directory:

Exercises for the Reader

You will no doubt notice that the application is only capable of translating from English to Spanish. MyMemory allows for any-to-any translation, so why not add a popup menu to allow choosing which languages to translate from and to? And while the HTTP request is pretty quick, there's room perhaps for a progress indicator somewhere on the screen, either as a HUD or a basic activity spinner.
https://dev.iachieved.it/iachievedit/http-json-requests-with-swift-and-alamofire/
Software is made up of programs, a program contains a set of instructions, and each instruction is built from tokens. Tokens are therefore the building blocks used to write programs. Just as in English a paragraph is made up of words, punctuation marks and blank spaces, in a C++ program all the statements are made up of keywords, identifiers, constants, strings, operators and special symbols, collectively called C++ tokens. C++ tokens are the essential units a C++ compiler works with, and so are fundamental to C++ programming. A token is an individual entity of a C++ program. For example, some C++ tokens used in a C++ program are:

Reserved words: long, do, if, else, etc.
Identifiers: pay, salary, etc.
Constants: 470.6, 16, 49, etc.
Strings: "Dinesh", "2013-01", etc.
Operators: +, *, <, >=, &&, ||, etc.
Special symbols: (), {}, #, @, %, etc.

Variable
A variable is used for storing a value, either a number or a character, and a variable can vary its value, meaning its value may change. Variables give names to locations in the memory of the computer where different values are stored; these locations contain integer, real or character constants. For naming a variable there are some specific rules:
• A variable name is any combination of 1 to 8 letters, digits or underscores.
• The first character in the variable name must be a letter.
• No commas or blank spaces are allowed in a variable name.
• No special symbols are allowed in the name of a variable.

Character Set
A character set denotes the alphabets, digits and special symbols used for representing information. For example, C++ uses the alphabets A to Z and a to z, the digits 0 to 9, and many types of special symbols such as *, (, ), _ and #.
Data Type
Every variable has a data type, which denotes the type of data the variable will hold. Some of the built-in data types in the C++ language are:
1) int (integer)
2) char (character)
3) float
4) double
5) long

Constants
Constants, also known as literals, refer to fixed values: a constant is a value that never changes during the execution of the program. For example, in 3x + 2y = 10, the values 3, 2 and 10 are constants whose values never change, while x and y are variables that may vary in value. There are two types of constants in the language:
1) Primary constants
2) Secondary constants

The primary and secondary constants are further divided into other categories. Among the primary constants, numeric constants refer to numbers consisting of a sequence of digits (with or without a decimal point) that can be either positive or negative; by default, numeric constants are positive. Numeric constants can be further classified as integer constants and floating-point constants, which are listed in Table. Character constants refer to a single character enclosed in single quotes (' '). Examples of character constants are 'f', 'M', '8', '&' and '7'. All character constants are internally stored as integer values. Character constants can represent either printable or non-printable characters. Examples of printable character constants are 'a', '5', '#' and ';'. However, there are a few characters that cannot be included in a program directly through the keyboard, such as backspace and newline. These are known as non-printable characters and are included in a program using escape sequences. An escape sequence refers to a character preceded by the backslash character (\).
Some of the escape sequences used in C++ are listed in the table. String constants are sequences of any number of characters enclosed in double quotes (" "). Examples of string constants are "hello", "name", "colour", and "date". Note that string constants are always terminated by the null character ('\0'). The presence of a backslash character in a string constant indicates an escape sequence. For example, the string constant "welcome \"home" is displayed as welcome "home; the double quote next to the backslash is an escape sequence, not a delimiter for the string constant.
The secondary constants are:
1) Array
2) Pointer
3) Structure
4) Union
5) Enum (enumeration)
There are some specific rules for writing an integer constant:
• An integer constant must have at least one digit.
• It must not have a decimal point.
• It may be either positive or negative.
• If no sign is specified, it is treated as positive.
• No spaces or commas are allowed within it.
• The range of an integer constant (on 16-bit systems) is -32768 to 32767.
There are some specific rules for writing a real constant:
• A real constant must have at least one digit.
• It must have a decimal point.
• It may be either positive or negative.
• If no sign is specified, it is treated as positive.
• No spaces or commas are allowed within it.
In the exponential form of representation, a real constant has two parts: the part before the letter e is called the mantissa, and the part after e is called the exponent.
• The mantissa and exponent parts must be separated by the letter e.
• The mantissa part may have either a positive or a negative sign.
• The default sign is positive.
• The range of a real constant is approximately -3.4e38 to 3.4e38.
There are some specific rules for writing a character constant: a character constant is a single letter, a single digit, or a single symbol enclosed within single quotes.
• It must not have a decimal value.
• The range of a character is -128 to 127.
Identifiers: Identifiers are the names given to variables, arrays, functions, classes, structures, namespaces, and so on. When we declare a variable, the name we give it is an identifier. While defining identifiers in C++, programmers must follow the rules listed here:
• An identifier must be unique within its scope.
• An identifier may contain only upper-case letters, lower-case letters, the underscore character (_), and the digits 0 to 9.
• An identifier must start with a letter or an underscore.
• Identifiers are case-sensitive: an identifier in upper case is different from the same identifier in lower case.
• An identifier must be different from a keyword. In addition, identifiers that start with a double underscore, or with an underscore followed by an upper-case letter, must be avoided, as these names are reserved by the standard C++ library.
• An identifier must not contain other characters such as '*', ';', or whitespace characters (tabs, spaces, and newlines).
Some valid and invalid identifiers in C++ are listed here:
Po178_ddm //valid
_78hhvt4 //valid
902gt1 //invalid, as it starts with a digit
Tyy;ui8 //invalid, as it contains the ';' character
for //invalid, as it is a C++ keyword
Fg026 neo //invalid, as it contains a space
Keywords: A keyword is a word with a special, predefined meaning in the C++ language, such as for, if, and else (names such as cin and cout come from the standard library rather than being keywords). Always remember that we cannot give a variable the same name as a keyword, and we cannot create new keywords. All the keywords of C++ are listed in the table.
Operator: An operator is a special symbol that performs an operation. For example, + is used for adding two numbers.
Depending on the number of operands and the function performed, C++ operators can be classified into various categories. These include arithmetic operators, relational operators, logical operators, the conditional operator, assignment operators, bitwise operators, and other operators. By the number of operands they take, operators are further classified into unary operators, binary operators, and ternary operators.
Statement: A statement may contain constants, variables, and operators. For example, sum = 2*a + 3*b; is a statement in which 2 and 3 are constants, =, * and + are operators, and sum, a and b are variables. Always remember that a statement ends with a semicolon; if you forget the semicolon at the end of a statement, the compiler reports an error.
Punctuators, also known as separators, are tokens that serve different purposes based on the context in which they are used. Some punctuators are used as operators, some are used to demarcate a portion of the program, and so on. The punctuators defined in C++ include the asterisk '*', braces '{ }', brackets '[ ]', colon ':', comma ',', ellipsis '...', equals sign '=', semicolon ';', parentheses '( )' and the pound sign '#'.
http://ecomputernotes.com/cpp/introduction-to-oop/what-do-you-means-by-c-tokens-explain-variabledata-typeconstants-identifiers-and-keyword
Menus and applets in the AWT
Java's Abstract Windowing Toolkit (AWT) includes four concrete menu classes: menu bars representing a group of menus that appears across the top of the window or the screen (class MenuBar); pull-down menus that pull from menu bars or from other menus (class Menu); menu items that represent menu selections (class MenuItem); and menu items that the user can turn on and off (class CheckboxMenuItem). These classes are all subclasses of MenuComponent, not subclasses of Component. Because they aren't components, they can't be placed in any container -- the way you would place buttons and lists in a container. In a graphical user interface (GUI), the only way to use these menu classes is to place a menu bar (which can contain additional menus) in a frame using the frame's setMenuBar method. Since the applet class is not a subclass of Frame, it does not inherit the setMenuBar method. This means you cannot simply place a menu bar in an applet. Still, there are several ways to create applets with menus: (1) An applet can open a new frame that contains an AWT menu bar with pull-down menus -- perhaps in response to the user clicking on a button; (2) an applet can use a custom pop-up-menu class, either one that you implement yourself or one that comes from a third-party widget library. (The AWT itself does not include pop-up menus.); and (3) an applet can also use an AWT menu bar with pull-down menus embedded in its enclosing rectangle in a Web page. You can accomplish this last action using the simple technique described in this article. While each of the three approaches has its uses, the last one has several advantages over the other two. The main advantages of this approach over a design in which the applet opens a new frame with a menu bar are: An applet that does not open a new frame integrates better with Web pages and is less distracting to users than an applet that requires the user to click a button to open a frame.
This is especially true in windowing environments that require the user to manually place the new frame -- like many X window managers.
- Using AWT menus within an applet rather than opening a new frame allows you to embed an exact working copy of the GUI of a standalone application within a Web page -- or you can even embed the entire application. This capability can be valuable for user guides and tutorials, as well as for distributing applications to users who do not have a standalone Java interpreter installed on their machines. These users can use the application with a Java-enabled Web browser instead of launching the standalone application.
The main advantages of using AWT menus in applets rather than using a custom pop-up-menu class are:
- Using a custom pop-up-menu class requires you to either develop the class, purchase it, or at least find a suitable free one, whereas the AWT menus are built into every Java run-time environment.
- The AWT menus always have the look and feel of the native windowing environment, whereas a custom pop-up-menu widget might not.
- An applet that uses the AWT menus loads faster because it does not need to load the menu class from across the network.
Placing a menu bar and menus in an applet
Although an applet is not a subclass of Frame and therefore cannot contain a menu bar, it is always contained within some frame. You can find the frame that contains your applet using the following code, which was suggested for a different purpose in the Recently Asked Questions document on Sun's JavaSoft site:
Object f = getParent();
while (!(f instanceof Frame))
    f = ((Component) f).getParent();
Frame frame = (Frame) f;
This code finds the applet's parent, the parent's parent, and so on, until it finds a container that is an instance of class Frame. The following applet shows how to place menus in an applet. It creates, in its init method, a menu bar that contains two menus -- one with three menu items and the other with two.
It then finds its enclosing frame and adds the menu bar to that frame. It attempts to handle, in its action method, the events that are generated when a menu item is selected, but as you will see, if you try to make a menu selection, it fails. The next section of this article analyzes the problem and demonstrates how to solve it.
Handling menu events in an applet
Let's examine why the previous applet fails to handle menu selection events. The code that creates the menus and places them in the frame is the same code you would use in a subclass of Frame that uses menus. Here is an excerpt that creates a menu bar, a menu, and a menu item, and then places the menu bar in the containing frame:
mb = new MenuBar();
mb.add(fm = new Menu("File"));
fm.add(ol = new MenuItem("Open Location"));
frame.setMenuBar(mb);
The event-handling code is also typical:
public boolean action(Event e, Object arg) {
    if (e.target == ol) {
        /* Handle the event */
        return true;
    }
    return super.action(e, arg);
}
The problem is that the menu-item selection events are never passed to the applet's action method. A selection event is first passed to the menu item itself. The menu item does not handle the event, so it is passed from one object to its parent. Eventually, the event reaches the frame, which also does not handle it. The applet is a descendant of the frame, not its parent or ancestor, so the event is never passed to the applet. The solution is to replace the standard menu items with custom menu items that intercept the selection event and pass it explicitly to the applet rather than rely on the AWT's default event-handling strategy.
Our custom menu items are an almost trivial derivation from MenuItem:
public class RedirectingMenuItem extends MenuItem {
    private Component event_handler;

    public RedirectingMenuItem(Component event_handler, String label) {
        super(label);
        this.event_handler = event_handler;
    }

    public boolean postEvent(Event e) {
        if (event_handler.isValid())
            return event_handler.postEvent(e);
        else
            return false;
    }
}
The RedirectingMenuItem constructor expects an event-handling component, which it simply saves, and a label that it passes to the MenuItem constructor. The trick in this class is to intercept events by overriding the postEvent method and posting the event to the event-handling component. The event is only posted to the event-handling component if it is already valid (that is, its peer has been constructed by the AWT). Using the RedirectingMenuItem class is easy. We just replace the standard menu items with the redirecting menu items, and pass this as the reference to the event-handling component:
fm.add(ol = new RedirectingMenuItem(this, "Open Location"));
The applet below uses this technique to correctly handle menu events.
Ensuring correct applet layout
When we add a menu bar to an applet, it uses some of the space that is allocated to the applet in the Web page by the width and height parameters of the HTML applet tag. Typically, the applet itself is pushed down, and it should be resized by the AWT to lay itself out in a smaller space. The applet will not be resized automatically, however. You must call the frame's pack method after you add the menu bar with setMenuBar. The frame's pack method calculates the new, smaller size of the applet and resizes it. If the applet uses a layout manager, it is automatically called to lay out the components within the applet. The example below calls pack and works correctly. To demonstrate that it indeed works, we have added two buttons above and below the selection text using a BorderLayout layout manager.
If we omit the call to pack, the applet continues to believe that it occupies the entire frame and therefore lays out its components incorrectly. The applet and its layout manager assume that they have more vertical space than they actually have and place the bottom button in an invisible part of the applet. Do not forget to call pack!
Conclusion
We have shown in this article how to use a menu bar and menus in an applet. We have demonstrated how to place the menu bar in an applet, how to ensure that the applet's layout manager lays it out correctly, and perhaps most importantly, how to handle menu events. The RedirectingMenuItem class allows you to write event-handling code that is identical to the menu-related event-handling code that you would normally use in a standalone application.
Addendum: Incompatibilities
Feedback from readers indicates the applets described in this article do not work correctly on all platforms. In particular, the applets do not work under Microsoft Internet Explorer 3.0 and Netscape Navigator 3.0/2.02 for Windows 95 and for Windows NT. They do work under Netscape Navigator for Unix machines, except that menu panes do not always appear in the correct location on the screen. In particular, the panes appear in a correct location when first pulled down, but they continue to appear in the same location even if the page that contains the applet is scrolled up or down. While I regret that these problems exist, I believe these problems reflect bugs in the browsers. I do not believe that they reflect an inherent nonportability of the techniques the applets use, because all of these techniques are documented by Sun. I have filed a bug report with Netscape concerning these bugs, and I am hopeful that these bugs will be corrected so that these techniques can be widely deployed. These bugs most likely exist because the use of menus in applets is not straightforward, and this aspect of the AWT therefore was not tested by the browsers' developers.
Although it may seem plausible to expect applet developers to test the applet on any possible platform that might be used to run the applet over a network, the task is nearly impossible given the number of platforms. The need to test applets on multiple platforms implies that there are variations in the functionality of various Java platforms. Such variations defeat one of the most important benefits that Java promises to developers: "write once, run everywhere" (The Java Platform, A White Paper, by Douglas Kramer with Bill Joy and David Spenhoff, Sun Microsystems, May 1996). Hence, it is reasonable to expect that Sun and browser vendors will work to eliminate any such variations.
http://www.javaworld.com/article/2077288/core-java/using-menus-and-menu-bars-in-applets.html
Previously, in the llint C++ interpreter (in LowLevelInterpreter.h), I declared a handful of llint opcode aliases for opcodes that look like this:

const OpcodeID llint_op_call = op_call;
const OpcodeID llint_op_call_eval = op_call_eval;
...

When r128219 landed, it added a reference to llint_op_get_array_length, and this broke the C++ llint. This demonstrates that the above approach is too fragile in practice. So, I will refactor the FOR_EACH_OPCODE_ID() macro to create a separate FOR_EACH_CORE_OPCODE_ID() macro. This FOR_EACH_CORE_OPCODE_ID() macro will be used to automatically declare the llint opcode aliases that the C++ llint needs.

Created attachment 163520 [details] Fix.

Attachment 163520 [details] did not pass style-queue: Failed to run "['Tools/Scripts/check-webkit-style', '--diff-files', u'Source/JavaScriptCore/ChangeLog', u'Source..." exit_code: 1
Source/JavaScriptCore/bytecode/Opcode.h:42: Code inside a namespace should not be indented. [whitespace/indent] [4]
Total errors found: 1 in 3 files
If any of these errors are false positives, please file a bug against check-webkit-style.

The style checker will complain about indented code in the namespace. This is needed to stay consistent with existing code in Opcode.h. To resolve this complaint would mean making the changed code inconsistent with the rest, or going on an unindent spree that would make the diff hard to read. It's better off the way it is right now.

*** Bug 96509 has been marked as a duplicate of this bug. ***

Comment on attachment 163520 [details] Fix.
Clearing flags on attachment: 163520
Committed r128369: <>
All reviewed patches have been landed. Closing bug.
https://bugs.webkit.org/show_bug.cgi?id=96466
[7/20/05: Added an additional comment regarding setting up for Win32]
I've mentioned in my earlier posts just how nice Managed C++ in 2005 (C++/CLI) is when it comes to interoping with unmanaged code. It makes P/Invoke look downright painful by comparison. I'd rather toss 10,000 lines of P/Invoke code and restart in C++ than maintain it (and yes, I've done just that). Those legions out there that are using C# (or even VB.Net) shouldn't be afraid of C++ now; the managed syntax is so much better, it's really quite easy to work with. In this post I'm going to go through some examples of just how easy it is, specifically looking at accessing the Win32 API.
An example in C# using P/Invoke:
Let's say we want to write a wrapper base class for a window object in Win32. Let's say we want that class to do something relatively simple, like find a specific window by title and stash the handle (HWND) to it, and grab the window's coordinates. Well, first we'd have to define the necessary structs before we could get to defining the P/Invokes:

[StructLayout(LayoutKind.Sequential)]
public struct RECT
{
    public Int32 left;
    public Int32 top;
    public Int32 right;
    public Int32 bottom;
}

[StructLayout(LayoutKind.Sequential)]
public struct WINDOWINFO
{
    public UInt32 cbSize;
    public RECT rcWindow;
    public RECT rcClient;
    public UInt32 dwStyle;
    public UInt32 dwExStyle;
    public UInt32 dwWindowStatus;
    public UInt32 cxWindowBorders;
    public UInt32 cyWindowBorders;
    public UInt16 atomWindowType;
    public UInt16 wCreatorVersion;
}

Hmm, ok. Now the P/Invokes:

[DllImport("user32.dll", SetLastError = true)]
public static extern Boolean GetWindowInfo(
    IntPtr hwnd,
    out WINDOWINFO pwi);

[DllImport("user32.dll", SetLastError = true, CharSet = CharSet.Auto)]
public static extern IntPtr FindWindow(
    [MarshalAs(UnmanagedType.LPTStr)] string lpClassName,
    [MarshalAs(UnmanagedType.LPTStr)] string lpWindowName);

Ugh. Now we can finally call the appropriate APIs. This is a simple example too.
It can quickly balloon into a real pain in the ass. Say, when you have an API that takes a #define value for one or more of its arguments, such as the SendMessage API. You have to find said values, define an enum or constant, and hope desperately you manually copied the value correctly. (When you start doing hundreds or thousands, this isn't too difficult to mess up.)
A few initial humps on the way to the C++ example:
All right, I may lose some people here, but there are a few things you'll have to do on the C++ side. Native and .NET Interoperability describes in detail what you have to do. Usually you don't have to do much of anything. Strings, however, are a little different. (See How to: Marshal ANSI Strings Using C++ Interop for complete details -- there is a link to Unicode & COM strings from there). I'll give you a small class here that you can use to make this easier (note that I haven't split this up into a header/declaration and a definition for clarity, but I highly recommend it):

public ref class Convert
{
private:
    // Handle for marshalling the unmanaged string pointer.
    IntPtr unmanagedStringPointer;

    // Constructor for the managed to native conversion.
    Convert(String^ managedString)
    {
        this->unmanagedStringPointer = System::Runtime::InteropServices::Marshal::StringToHGlobalAnsi(managedString);
    }

    // Converts the marshalled pointer to a native string.
    char* ToNativeString()
    {
        return static_cast<char*>(this->unmanagedStringPointer.ToPointer());
    }

public:
    // Converts a managed string to a native string.
    static char* ToNativeString(String^ managedString)
    {
        return (gcnew Convert(managedString))->ToNativeString();
    }

    // Converts a native string to a managed string.
    static String^ ToString(char* nativeString)
    {
        return System::Runtime::InteropServices::Marshal::PtrToStringAnsi(static_cast<IntPtr>(nativeString));
    }

    // Destructor (implicitly implements IDisposable)
    ~Convert()
    {
        if (this->unmanagedStringPointer != IntPtr::Zero)
        {
            System::Runtime::InteropServices::Marshal::FreeHGlobal(this->unmanagedStringPointer);
        }
    }
};

This little helper will make things much easier. You should be able to add Unicode and COM string support by following the previous SDK link. The other thing you'll probably want to do is start adding helpers to convert from common Win32 datatypes to .NET datatypes (say RECT to Rectangle, etc.). You would have to do this in C# as well, of course.
Now for the C++ example:
Again, I won't break this up into a .h and a .cpp file, but this is only for clarity. I'm only using the helper above to convert strings. The rest comes straight from including the Win32 headers. (See the next section for details on how to set that up.)

public ref class Window
{
private:
    // The native HWND.
    HWND windowHwnd;

public:
    // Constructor that takes a window title.
    Window(String^ windowTitle)
    {
        HWND foundWindow = FindWindow(NULL, (LPCSTR)Convert::ToNativeString(windowTitle));
        if (foundWindow == NULL)
        {
            // didn't find the window, barf here...
        }
        else
        {
            this->windowHwnd = foundWindow;
        }
    }

    // Returns the bounds of the window in screen coordinates.
    property Drawing::Rectangle Bounds
    {
        Drawing::Rectangle get()
        {
            RECT windowRect;
            if (GetWindowRect(this->windowHwnd, &windowRect) == FALSE)
            {
                // Failed somehow. Deal with it... (Another post.)
            }
            else
            {
                return Drawing::Rectangle::FromLTRB(windowRect.left, windowRect.top, windowRect.right, windowRect.bottom);
            }
        }
    }
};

And that's it. One small price in setting up a string helper class and you're up and running. I could create the above in a DLL and use it in all my C# projects. (Hey, I do just that.) You can read the Platform SDK and pretty much use it directly.
You don't have to set up P/Invokes or redefine structs or #defines. You get IntelliSense... It's much more freeform to be able to see another useful API and relatively directly be able to use it. For example, when I was constructing my Window class from a specified HWND, I noticed there was an API for verifying that an HWND was valid -- OK, add one line of code...
Now, with everything there are always caveats. I have found two circumstances where things don't work seamlessly. If an API has a struct that contains a union or bitfields, I can't figure out how to access those members directly from managed code. (SendInput is an example.) There is a relatively easy workaround, and that is to create an unmanaged helper class. That's another post if anyone is interested.
Setting up a Managed C++ project for interop:
DLL, exe, lib -- it doesn't matter. Here are the key things you need:
- /clr option set for the project. (One of the general settings, see my last post.)
- Include the appropriate Win32 headers. (#include <windows.h> is the main one)
- Appropriate target OS #defines. (See Using the Windows Headers for the right values.)
- [7/20/05] Note that if you have a C++ forms app you'll need to remove $(NoInherit) from the Additional Dependencies property under Linker:Input.
It's probably a good idea to put the Win32 #includes in a precompiled header for speedier compilation (stdafx.h typically, see Creating Precompiled Header Files for more info).
https://blogs.msdn.microsoft.com/jeremykuhne/2005/06/11/pinvoke-no-way/
std::weak_ptr::expired
From cppreference.com

Checks whether the managed object has already been deleted. Equivalent to use_count() == 0.

Parameters
(none)

Return value
true if the managed object has already been deleted, false otherwise.

Exceptions
Throws nothing.

Notes
expired() may be faster than use_count().

Example
Demonstrates how expired is used to check validity of the pointer.

#include <iostream>
#include <memory>

std::weak_ptr<int> gw;

void f()
{
    if (!gw.expired()) {
        std::cout << "gw is valid\n";
    } else {
        std::cout << "gw is expired\n";
    }
}

int main()
{
    {
        auto sp = std::make_shared<int>(42);
        gw = sp;
        f();
    }
    f();
}

Output:
gw is valid
gw is expired
http://en.cppreference.com/mwiki/index.php?title=cpp/memory/weak_ptr/expired&oldid=42028
Integrate External C/C++ Code into Simulink Using C Function Blocks

You can call and integrate your external C code into Simulink® models using C Function blocks. C Function blocks allow you to call external C code and customize the integration of your code using the Output Code, Start Code, Initialize Conditions Code, and Terminate Code panes in the block parameters dialog. Use the C Function block to:
Call functions from external C code, and customize the code for your Simulink models.
Preprocess data to call a C function and postprocess data after calling the function.
Specify different code for simulation and code generation.
Call multiple functions.
Initialize and work with persistent data cached in the block.
Allocate and deallocate memory.
Use the C Function block to call external C algorithms into Simulink that you want to modify. To call a single C function from a Simulink model, use the C Caller block. To integrate dynamic systems that have continuous states or state changes, use the S-Function block.
Note: C99 is the standard version of C language supported for custom C code integration into Simulink.
The following examples use C Function blocks to calculate the sum and mean of inputs.

Write External Source Files

Begin by creating the external source files. Create a header file named data_array.h.

/* Define a struct called DataArray */
typedef struct DataArray_tag {
    /* Define a pointer called pData */
    double* pData;
    /* Define the variable length */
    int length;
} DataArray;

/* Function declaration */
double data_sum(DataArray data);

In the same folder, create a new file, data_array.c. In this file, write a C function that calculates the sum of input numbers.
#include "data_array.h"

/* Define a function that takes in a struct */
double data_sum(DataArray data)
{
    /* Define 2 local variables to use in the function */
    double sum = 0.0;
    int i;
    /* Calculate the sum of values */
    for (i = 0; i < data.length; i++) {
        sum = sum + data.pData[i];
    }
    /* Return the result to the block */
    return sum;
}

Enter the External Code Into Simulink

Create a new, blank model and add a C Function block. The C Function block is in the User-Defined Functions library of the Library Browser. Double-click the C Function block to open the block dialog. Click to open the Model Configuration Parameters dialog. In the Simulation Target pane, define your header file under Include headers on the Code information tab.
Tip: After you have entered information for Source file in the next step, you can click Auto-fill from Source files to have the header file name filled in automatically, using information contained in your source files.
Define the source file under Source files on the Code Information tab. To verify that your custom code can be parsed and built successfully, click Validate custom code.
Note: To use a C Function block in a For Each subsystem or with continuous sample time, or to optimize the use of the block in conditional input branch execution, all custom code functions called by the block must be deterministic, that is, always producing the same outputs for the same inputs. Identify which custom code functions are deterministic by using the Deterministic functions and Specify by function parameters in the Simulation Target pane. If the block references any custom code global variables, then Deterministic functions must be set to All in order for the block to be used in a For Each subsystem, in conditional input branch execution, or with continuous sample time. For an example showing a C Function block in a For Each subsystem, see Use C Function Block Within For Each Subsystem.
In the Output Code pane of the C Function block parameters dialog, write the code that the block executes during simulation. In this example, the external C function computes a sum. In the Output Code pane, write code that calls the function in data_array.c to compute the sum, then computes the mean.

/* declare the struct dataArr */
DataArray dataArr;
/* store the length and data coming in from the input port */
dataArr.pData = &data[0];
dataArr.length = length;
/* call the function from the external code to calculate sum */
sum = data_sum(dataArr);
/* calculate the mean */
mean = sum / length;

You can specify code that runs at the start of a simulation and at the end of a simulation in the Start Code and Terminate Code panes. Use the Symbols table to define the symbols used in the code in the block. Add or delete a symbol using the Add and Delete buttons. Define all symbols used in the Output Code, Start Code, Initialize Conditions Code, and Terminate Code panes to ensure that ports display correctly. In the Symbols table, for each symbol used in the code in the block, define the Name, Scope, Label, Type, Size, and Port, as appropriate. Close the block parameters dialog. After filling in the data in the table, the C Function block now has one input port and two output ports with the labels specified in the table. Add a Constant block to the Simulink canvas that will be the input to the C Function block. In the Constant block, create a random row array with 100 elements. To display the results, attach display blocks to the outputs of the C Function block.
#ifdef MATLAB_MEX_FILE
/* Enter simulation code */
#else
/* Enter code generation code */
#endif

Specify Declaration for Target-Specific Function for Code Generation

For code generation purposes, if you do not have the external header file with the declaration of a function (such as a target-specific device driver) that you want to call from the C Function block, you can include a declaration with the correct signature in the Output Code pane of the block. This action creates a function call to the expected function when code is generated, as in the following example:

#ifndef MATLAB_MEX_FILE
extern void driverFcnCall(int16_T in, int16_T * out);
driverFcnCall(blockIn, &blockOut);
#endif
https://au.mathworks.com/help/simulink/ug/call-and-integrate-external-c-algorithms-into-simulink-using-c-function-blocks.html
How can I add an eraser in "sketch.py" in Example? I want to customize "sketch.py" in Example. I want to add a button to switch to an eraser mode which enables erasing the scribbled line. "sketch.py" is very simple, using only ui.Path to draw, and I first thought that changing the color of the path (to white, which is the same color as the view's background) would do. However, ui.Path has no attribute for setting its color (is that correct?). Is there any good way to add the eraser function?
@satsuki.kojima said:
ui.Path has no attribute for setting its color (is that correct?)
No 😀
During drawing
def draw(self):
    if self.path:
        ui.set_color('red')
        self.path.stroke()
After saving
def path_action(self, sender):
    path = sender.path
    old_img = self.image_view.image
    width, height = self.image_view.width, self.image_view.height
    with ui.ImageContext(width, height) as ctx:
        if old_img:
            old_img.draw()
        ui.set_color('blue')
        path.stroke()
        self.image_view.image = ctx.get_image()
@satsuki.kojima said:
Is there any good way to add the eraser function?
That's what the clear button does, isn't it?
@cvp, I guess the idea is to add an eraser that clears some parts of the image. This is challenging because Path is a vector, so you would need to see which of the path's points are covered by the eraser, and remove & split the path accordingly. This would not be perfect if the line was drawn fast and has longer straight segments. An alternative is to turn the image into a bitmap when the eraser is used. Depending on what you want to use the image for, loss of vector information is probably not an issue.
@satsuki.kojima you could try this (a very little) modified MySketch.py

@satsuki.kojima said:

    your color palette

Not mine, an old sample from, I think, Pythonista's creator (or I even forgot I did it, very old brain)

@satsuki.kojima for the fun, MySketch.py modified for width (idea of @mikael). Small modification of the gist to 'show' the white color in the slider too.

@cvp Yeah... I finally decided to have my eraser set the line_width to 20, while the default pen uses 2. I'm building a game "4 numbers", where you guess 4 randomly given numbers from hints of hits and blows. I wanted to add a sketch board so the players can scribble their guessing process. By the way, it actually took me a while to fix my problem even though your advice was very clear. My sketch board showed some odd behavior. I finally found it was because I was adding the subview of path_view before image_view, as below:

    iv = ui.ImageView(frame=(0, 0, width, height))
    pv = PathView(frame=self.bounds)
    pv.action = self.path_action
    pv.main = self
    self.add_subview(pv)   # <- should be after self.add_subview(iv)
    self.add_subview(iv)
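Stripped of the Pythonista ui specifics, the approach settled on in this thread (an eraser is just a wide stroke in the background colour) can be modelled in plain Python. This is a hedged sketch: Stroke, SketchBoard, and their fields are invented names for illustration, not part of the ui module or MySketch.py.

```python
# Hypothetical model of "eraser = wide background-coloured stroke".
# Stroke and SketchBoard are invented names; the real sketch uses ui.Path.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Stroke:
    points: List[Tuple[float, float]]
    color: str
    width: float

@dataclass
class SketchBoard:
    background: str = 'white'
    eraser_mode: bool = False
    strokes: List[Stroke] = field(default_factory=list)

    def add_stroke(self, points):
        if self.eraser_mode:
            # Eraser: a wide stroke in the background colour,
            # like setting line_width = 20 in the modified MySketch.py.
            self.strokes.append(Stroke(points, color=self.background, width=20.0))
        else:
            # Default pen: a thin red stroke, like line_width = 2.
            self.strokes.append(Stroke(points, color='red', width=2.0))

board = SketchBoard()
board.add_stroke([(0, 0), (10, 10)])   # normal pen stroke
board.eraser_mode = True
board.add_stroke([(5, 5), (6, 6)])     # eraser pass over the line
```

Because the "erased" pixels are really painted over, this only works once strokes are flattened onto the image view, which is exactly what path_action does.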
https://forum.omz-software.com/topic/6756/how-can-i-add-an-easer-in-sketch-py-in-example
Greetings all. I am leaving the world of C/C++ console programs and learning Windows applications. I am using Visual C/C++ 6.0. My problem is that I've written a simple Hello World program (below). It compiles and executes with no problems at first, but if I save the project and reopen it, it still compiles OK but gives me link errors (below). I appreciate any help with this! Thanks in advance.

The link errors are as follows:

    Linking...
    LIBCD.lib(crt0.obj) : error LNK2001: unresolved external symbol _main
    Debug/Dgraphics2.exe : fatal error LNK1120: 1 unresolved externals
    Error executing link.exe.
    Dgraphics2.exe - 2 error(s), 0 warning(s)

The code is as follows:

    #define WIN32_LEAN_AND_MEAN

    #include <windows.h>
    #include <windowsx.h>

    int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance,
                       LPSTR lpcmdline, int ncmdshow)
    {
        MessageBox(NULL, "Whats up World!!", "My First Windows Program", 3);
        return 0;
    }

(Note: the WIN32_LEAN_AND_MEAN macro must be spelled in uppercase and defined before windows.h is included, otherwise it has no effect.)
http://cboard.cprogramming.com/windows-programming/7900-probably-simple-problem.html
CC-MAIN-2014-15
refinedweb
146
56.86
#include <nsScreenManagerXlib.h>

Definition at line 49 of file nsScreenManagerXlib.h.

Definition at line 44 of file nsScreenManagerXlib.cpp.

    {
        // nothing else to do. I guess we could cache a bunch of information
        // here, but we want to ask the device at runtime in case anything
        // has changed.
    }

Definition at line 52 of file nsScreenManagerXlib.cpp.

    {
        // nothing to see here.
    }

Definition at line 71 of file nsScreenManagerXlib.cpp.

    {
        nsIScreen* retval = nsnull;
        if ( !mCachedMainScreen )
            mCachedMainScreen = new nsScreenXlib ( );
        NS_IF_ADDREF(retval = mCachedMainScreen.get());
        return retval;
    }

Definition at line 63 of file nsScreenManagerXlib.h.

Definition at line 59 of file nsIScreenManager.idl.

Definition at line 56 of file nsIScreenManager.idl.
https://sourcecodebrowser.com/lightning-sunbird/0.9plus-pnobinonly/classns_screen_manager_xlib.html
groupby() Method: Split Data into Groups, Apply a Function to Groups, Combine the Results

- Nov 28 • 15 min read
- Key Terms: groupby, python, pandas

A group by is a process that typically involves splitting the data into groups based on some criteria, applying a function to each group independently, and then combining the outputted results.

Import Modules

    import pandas as pd
    import seaborn as sns
    import numpy as np

Group bys with Tips Dataset

Group by of a Single Column and Apply a Single Aggregate Method on a Column

The simplest example of a groupby() operation is to compute the size of groups in a single column. By size, the calculation is a count of unique occurrences of values in a single column. Here is the official documentation for this operation. This is the same operation as utilizing the value_counts() method in pandas.

Below, for the df_tips DataFrame, I call the groupby() method, pass in the sex column, and then chain the size() method.

    df_tips.groupby(by='sex').size()

    sex
    Male      157
    Female     87
    dtype: int64

To interpret the output above, 157 meals were served by males and 87 meals were served by females. A note: if there are any NaN or NaT values in the grouped column that would appear in the index, those are automatically excluded from your output (reference here).

In pandas, we can also group by one column and then perform an aggregate method on a different column. For example, in our dataset, I want to group by the sex column and then, across the total_bill column, find the mean bill size. To do this in pandas, given our df_tips DataFrame, apply the groupby() method and pass in the sex column (that'll be our index), then reference our ['total_bill'] column (that'll be our returned column) and chain the mean() method. Meals served by males had a mean bill size of 20.74 while meals served by females had a mean bill size of 18.06.
    df_tips.groupby(by='sex')['total_bill'].mean()

    sex
    Male      20.744076
    Female    18.056897
    Name: total_bill, dtype: float64

We can verify the output above with a query. We get the same result: meals served by males had a mean bill size of 20.74.

    df_tips.query("sex=='Male'")['total_bill'].mean()

    20.744076433121034

Another interesting tidbit with the groupby() method is the ability to group by a single column and call an aggregate method that will apply to all other numeric columns in the DataFrame. For example, if I group by the sex column and call the mean() method, the mean is calculated for the three other numeric columns in df_tips, which are total_bill, tip, and size.

    df_tips.groupby(by='sex').mean()

Aggregate Methods

Other aggregate methods you could perform with a groupby() method in pandas are:

To illustrate the difference between the size() and count() methods, I included this simple example below. The DataFrame below, df_rides, includes Dan and Jamie's ride data.

    data = {'person': ['Dan', 'Dan', 'Jamie', 'Jamie'],
            'ride_duration_minutes': [4, np.NaN, 8, 10]}
    df_rides = pd.DataFrame(data)
    df_rides

For one of Dan's rides, the ride_duration_minutes value is null. However, if we apply the size() method, we'll still see a count of 2 rides for Dan. We are 100% sure he took 2 rides; there's just a small issue in our dataset in which the exact duration of one ride wasn't recorded.

    df_rides.groupby(by='person')['ride_duration_minutes'].size()

    person
    Dan      2
    Jamie    2
    Name: ride_duration_minutes, dtype: int64

Upon applying the count() method, we only see a count of 1 for Dan because that's the number of non-null values in the ride_duration_minutes field that belongs to him.
    df_rides.groupby(by='person')['ride_duration_minutes'].count()

    person
    Dan      1
    Jamie    2
    Name: ride_duration_minutes, dtype: int64

Group by of a Single Column and Apply the describe() Method on a Single Column

With grouping of a single column, you can also apply the describe() method to a numerical column. Below, I group by the sex column, reference the total_bill column and apply the describe() method to its values. The describe() method outputs many descriptive statistics. Learn more about the describe() method on the official documentation page.

    df_tips.groupby(by='sex')['total_bill'].describe()

Group by of a Single Column and Apply a Lambda Expression on a Single Column

Most examples in this tutorial involve using simple aggregate methods like calculating the mean, sum or a count. However, with group bys, we have flexibility to apply custom lambda functions. You can learn more about lambda expressions from the Python 3 documentation and about using instance methods in group bys from the official pandas documentation. Below, I group by the sex column and apply a lambda expression to the total_bill column. The expression is to find the range of total_bill values. The range is the maximum value minus the minimum value. I also rename the single column returned on output so it's understandable.

    df_tips.groupby(by='sex').agg({'total_bill': lambda bill: bill.max() - bill.min()}).rename(columns={'total_bill': "range_total_bill"})

In this dataset, males had a bigger range of total_bill values.

Group by of Multiple Columns and Apply a Single Aggregate Method on a Column

We can group by multiple columns too. For example, I want to know the count of meals served by people's gender for each day of the week. So, call the groupby() method and set the by argument to a list of the columns we want to group by.
    df_tips.groupby(by=['sex', 'day']).size()

    sex     day
    Male    Thur    30
            Fri     10
            Sat     59
            Sun     58
    Female  Thur    32
            Fri      9
            Sat     28
            Sun     18
    dtype: int64

We can also group by multiple columns and apply an aggregate method on a different column. Below I group by people's gender and day of the week and find the total sum of those groups' bills.

    df_tips.groupby(by=['sex', 'day'])['total_bill'].sum()

    sex     day
    Male    Thur     561.44
            Fri      198.57
            Sat     1227.35
            Sun     1269.46
    Female  Thur     534.89
            Fri      127.31
            Sat      551.05
            Sun      357.70
    Name: total_bill, dtype: float64

Group by of a Single Column and Apply Multiple Aggregate Methods on a Column

The agg() method allows us to specify multiple functions to apply to each column. Below, I group by the sex column and then apply multiple aggregate methods to the total_bill column. Inside the agg() method, I pass a dictionary and specify total_bill as the key and a list of aggregate methods as the value. You can pass various types of syntax inside the argument for the agg() method. I chose a dictionary because that syntax will be helpful when we want to apply aggregate methods to multiple columns later on in this tutorial.

    df_tips.groupby(by='sex').agg({'total_bill': ['count', 'mean', 'sum']})

You can learn more about the agg() method on the official pandas documentation page. The code below performs the same group by operation as above; additionally, I rename the columns to have clearer names.

    df_tips.groupby(by='sex').agg({'total_bill': ['count', 'mean', 'sum']}).rename(columns={'count': 'count_meals_served', 'mean': 'average_bill_of_meal', 'sum': 'total_bills_of_meals'})

We can modify the format of the output above by chaining the unstack() and reset_index() methods after our group by operation. This format may be ideal for additional analysis later on.
    df_tips.groupby(by='sex').agg({'total_bill': ['count', 'mean', 'sum']}).unstack().reset_index().rename(columns={'level_0': 'aggregated_column', 'level_1': 'aggregate_metric', 'sex': 'grouped_column', 0: 'aggregate_calculation'}).round(2)

Group by of a Single Column and Apply Multiple Aggregate Methods on Multiple Columns

Below, I use the agg() method to apply two different aggregate methods to two different columns. I group by the sex column; for the total_bill column I apply the max method, and for the tip column I apply the min method.

    df_tips.groupby(by='sex').agg({'total_bill': 'max', 'tip': 'min'}).rename(columns={'total_bill': 'max_total_bill', 'tip': 'min_tip_amount'})

Group by of Multiple Columns and Apply a Groupwise Calculation on Multiple Columns

In restaurants, common math for guests is to calculate the tip for the waiter/waitress. My mom thinks a 20% tip is customary. So, if the bill was 10, you should tip 2 and pay 12 in total. I'm curious what the tip percentages are based on the gender of servers, meal and day of the week. We can perform that calculation with a groupby() and the pipe() method. The pipe() method allows us to call functions in a chain, so as the groupby() method is called, another function is called at the same time to perform data manipulations. You can learn more about pipe() from the official documentation. To perform this calculation, we need to group by sex, time and day, then call our pipe() method and calculate the tip divided by total_bill, multiplied by 100.

    df_tips.groupby(by=['sex', 'time', 'day']).pipe(lambda group: group.tip.sum()/group.total_bill.sum()*100)

    sex     time    day
    Male    Lunch   Thur    15.925121
                    Fri     16.686183
            Dinner  Fri     12.912840
                    Sat     14.824622
                    Sun     14.713343
    Female  Lunch   Thur    15.388192
                    Fri     19.691535
            Dinner  Thur    15.974441
                    Fri     19.636618
                    Sat     14.236458
                    Sun     16.944367
    dtype: float64

The highest tip percentage was for females at lunch on Friday (about 19.69%).
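To see the whole split-apply-combine cycle in one self-contained place, here is a minimal sketch using a tiny made-up DataFrame rather than the tips dataset (the column names mirror the tutorial's, but the numbers are invented):

```python
import pandas as pd

# Invented miniature version of the tips data.
df = pd.DataFrame({
    "sex": ["Male", "Male", "Female", "Female", "Female"],
    "total_bill": [10.0, 20.0, 30.0, 40.0, 50.0],
    "tip": [2.0, 4.0, 6.0, 8.0, 10.0],
})

# Split on sex, apply sum to each group's total_bill, combine into a Series.
bill_sums = df.groupby(by="sex")["total_bill"].sum()

# Same groups, with a groupwise tip percentage via pipe(), as in the
# tutorial's last example.
tip_pct = df.groupby(by="sex").pipe(
    lambda g: g.tip.sum() / g.total_bill.sum() * 100
)
```

Here every tip is 20% of the bill by construction, so tip_pct comes out as 20 for both groups, which is an easy way to sanity-check the pipe() expression.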
https://dfrieds.com/data-analysis/groupby-python-pandas
A general in-memory representation of a single property. More... #include <svn_props.h> A general in-memory representation of a single property. Most of the time, property lists will be stored completely in hashes. But sometimes it's useful to have an "ordered" collection of properties, in which case we use an array of these structures. Also: sometimes we want a list that represents a set of property *changes*, and in this case, an apr_hash_t won't work -- there's no way to represent a property deletion, because we can't store a NULL value in a hash. So instead, we use these structures. Definition at line 55 of file svn_props.h.
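The design point here, that a hash cannot represent "delete this property" because it cannot hold a NULL value, can be illustrated outside of C. This is a hedged Python analogue, not Subversion's actual API: an ordered list of (name, value) pairs where None stands in for the C NULL. (A Python dict could of course store None, so the list is used purely to mirror the apr_hash_t limitation being described.)

```python
# Illustrative analogue of an ordered property-change list, where a
# None value encodes a property deletion. Names are invented examples.
prop_changes = [
    ("svn:mime-type", "text/plain"),   # set a property
    ("svn:eol-style", None),           # delete a property
]

def apply_changes(props, changes):
    """Apply an ordered list of property changes to current properties."""
    props = dict(props)
    for name, value in changes:
        if value is None:
            props.pop(name, None)      # deletion
        else:
            props[name] = value        # set or overwrite
    return props

result = apply_changes({"svn:eol-style": "native"}, prop_changes)
```

Because the changes are a list rather than a hash, the same property could even appear twice (set, then deleted), and the order of application is preserved.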
http://subversion.apache.org/docs/api/1.6/structsvn__prop__t.html
In this example we start to approach a real playable game with a score. We give MyWidget a new name (GameBoard) and add some slots. We put the definition in gamebrd.h and the implementation in gamebrd.cpp. The CannonField now has a game over state. The layout problems in LCDRange are fixed.

    #include <qwidget.h>

    class QSlider;
    class QLabel;

    class LCDRange : public QWidget

We inherit QWidget rather than QVBox. QVBox is very easy to use, but again it showed its limitations, so we switch to the more powerful and slightly harder to use QVBoxLayout. (As you remember, QVBoxLayout is not a widget; it manages one.)

    #include <qlayout.h>

We need to include qlayout.h now to get the other layout management API.

    LCDRange::LCDRange( QWidget *parent, const char *name )
        : QWidget( parent, name )

We inherit QWidget in the usual way. The other constructor has the same change. init() is unchanged, except that we've added some lines at the end:

    QVBoxLayout * l = new QVBoxLayout( this );

We create a QVBoxLayout with all the default values, managing this widget's children.

    l->addWidget( lcd, 1 );

At the top we add the QLCDNumber with a non-zero stretch.

    l->addWidget( slider );
    l->addWidget( label );

Then we add the other two, both with the default zero stretch. This stretch control is something QVBoxLayout (and QHBoxLayout, and QGridLayout) offers but classes like QVBox do not. In this case we're saying that the QLCDNumber should stretch and the others should not.

The CannonField now has a game over state and a few new functions.

    bool gameOver() const { return gameEnded; }

This function returns TRUE if the game is over or FALSE if a game is going on.

    void setGameOver();
    void restartGame();

Here are two new slots: setGameOver() and restartGame().

    void canShoot( bool );

This new signal indicates that the CannonField is in a state where the shoot() slot makes sense. We'll use it below to enable and disable the Shoot button. When a shot is fired, the current angle and force are recorded and the shot timer is started:

    shoot_ang = ang;
    shoot_f = f;
    autoShootTimer->start( 50 );
    repaint();
    void CannonField::restartGame()
    {
        if ( isShooting() )
            autoShootTimer->stop();
        gameEnded = FALSE;
        repaint();
        emit canShoot( TRUE );
    }

There have also been changes in paintEvent():

    void CannonField::paintEvent( QPaintEvent *e )
    {
        QRect updateR = e->rect();
        QPainter p( this );

        if ( gameEnded ) {
            p.setPen( black );
            p.setFont( QFont( "Courier", 48, QFont::Bold ) );
            p.drawText( rect(), AlignCenter, "Game Over" );
        }

        if ( updateR.intersects( cannonRect() ) )
            paintCannon( &p );
        if ( isShooting() && updateR.intersects( shotRect() ) )
            paintShot( &p );
        if ( !gameEnded && updateR.intersects( targetRect() ) )
            paintTarget( &p );
    }

We draw the shot only when shooting and the target only when playing (that is, when the game has not ended).

This file is new. It contains the definition of the GameBoard class, which was last seen as MyWidget.

    class QPushButton;
    class LCDRange;
    class QLCDNumber;
    class CannonField;

    #include "lcdrange.h"
    #include "cannon.h"

    class GameBoard : public QWidget
    {
        Q_OBJECT
    public:
        GameBoard( QWidget *parent=0, const char *name=0 );

Among the members declared here are the QLCDNumbers which display the game status.

This file is new. It contains the implementation of the GameBoard class, which was last seen as MyWidget.

We have made some changes in the GameBoard constructor.

    cannonField = new CannonField( this, "cannonField" );

cannonField is now a member variable, so we carefully change the constructor to use it. (The good programmers at Trolltech never forget this, but I do. Caveat programmor - if "programmor" is Latin, at least. Anyway, back to the code.)

    connect( shoot, SIGNAL(clicked()), SLOT(fire()) );

Previously we connected the Shoot button's clicked() signal directly to the CannonField's shoot() slot. This time we want to keep track of the number of shots fired, so we connect it to a protected slot, fire(), in this class.
    QPushButton *restart = new QPushButton( "&New Game", this, "newgame" );

    hits = new QLCDNumber( 2, this, "hits" );
    shotsLeft = new QLCDNumber( 2, this, "shotsleft" );
    QLabel *hitsL = new QLabel( "HITS", this, "hitsLabel" );
    QLabel *shotsLeftL = new QLabel( "SHOTS LEFT", this, "shotsleftLabel" );

    QHBoxLayout *topBox = new QHBoxLayout;
    grid->addLayout( topBox, 0, 1 );
    topBox->addWidget( shoot );
    topBox->addWidget( hits );
    topBox->addWidget( hitsL );
    topBox->addWidget( shotsLeft );
    topBox->addWidget( shotsLeftL );
    topBox->addStretch( 1 );

(newGame() is a slot, but as we said, slots can be used as ordinary functions, too.)

(See Compiling for how to create a makefile and build the application.)

Exercises:

- Add a random wind factor and show it to the user.
- Make some splatter effects when the shot hits the target.
- Implement multiple targets.

You're now ready for Chapter 14.
http://idlebox.net/2007/apidocs/qt-x11-free-3.3.8.zip/tutorial1-13.html
CC-MAIN-2014-10
refinedweb
694
67.15
The QFontInfo class provides general information about fonts.

#include <qfontinfo.h>

List of all member functions.

The QFontInfo class provides the same access functions as QFont, e.g. family(), pointSize(), italic(), weight(), fixedPitch(), styleHint() etc. But whilst the QFont access functions return the values that were set, a QFontInfo object returns the values that apply to the font that will actually be used to draw the text.

For example, when the program asks for a 25pt Courier font on a machine that has a non-scalable 24pt Courier font, QFont will (normally) use the 24pt Courier for rendering. In this case, QFont::pointSize() returns 25 and QFontInfo::pointSize() returns 24.

There are three ways to create a QFontInfo object. The font must be screen-compatible, i.e. a font you use when drawing text in widgets or pixmaps, not QPicture or QPrinter. The font info object holds the information for the font that is passed in the constructor at the time it is created, and is not updated if the font's attributes are changed later. Use QPainter::fontInfo() to get the font info when painting. This will give correct results also when painting on a paint device that is not screen-compatible.

Returns TRUE if weight() would return a value greater than QFont::Normal; otherwise returns FALSE.
See also weight() and QFont::bold().

See also QFont::exactMatch().

See also QFont::family().
Example: fonts/simple-qfont-demo/viewer.cpp.

See also QFont::fixedPitch().

See also QFont::italic().

See also QFont::pointSize().

See also QFont::pointSize().
Example: fonts/simple-qfont-demo/viewer.cpp.

This file is part of the Qt toolkit. Copyright © 1995-2003 Trolltech. All Rights Reserved.
http://doc.trolltech.com/3.2/qfontinfo.html
Minimal Fixture everywhere

When people hear the word fixture in the context of tests, they may imagine a huge dataset in the database or REST service mocks. The test fixture is a broader concept, and it usually has a much smaller scope than you would expect.

A test fixture is defined as everything you need to execute the System Under Test (the SUT; it can be an object method, a standalone function, or a microservice API). You will require:

- direct inputs — method or function arguments, the data you will feed into the SUT
- expected direct outputs — values or exceptions to compare with the result of the SUT execution
- indirect inputs — arguments passed to and values returned from the queries to stubbed¹ SUT dependencies
- indirect outputs — recorded executions of commands sent to mocked SUT dependencies and the command arguments
- the SUT itself with all required dependencies (the real ones or replaced with test doubles)

In this article, I will be using test pattern terminology from the XUnit Test Patterns book. It introduced a common language for the problems and solutions around automated testing (and that was already in 2007!). I wholeheartedly recommend reading the book, even if it is more than 10 years old (or at least skim through the companion website). You will find there condensed wisdom about testing that you usually get only from reading tons of blogs or from making your own mistakes while practicing the craft.

Developers writing tests want to reap the benefits of having an automated test suite. It is supposed to provide a specification of their system and guard against unintended changes that violate the specification. Engineers want to spend minimal time on writing the tests. Those who practice TDD use tests as a feedback tool for their designs. The intentions are always good, but in practice I encounter lots of issues with fixture creation that undermine those lofty goals. Let's go through them and see how they may be fixed.
Too much data

You create a complex object/data structure when a simpler structure could be used to execute the SUT. For example, you set all the properties or you attach all the children/peers of the object, as in this code fragment in the Kotlin language:

    val invoice = Invoice(
        id = 12345,
        totalAmount = 100.toBigDecimal(),
        customer = Customer(id = 23456, firstName = "Marcin", surname = "Gryszko", middleName = "NA"),
        lineItems = listOf(
            LineItem(sku = "fake SKU 1", amount = 10.toBigDecimal()),
            LineItem(sku = "fake SKU 2", amount = 90.toBigDecimal())
        )
    )

The SUT will use only the Invoice.id, so you don't have to fill lineItems. You can leave them empty:

    val invoice = Invoice(
        id = 12345,
        totalAmount = 100.toBigDecimal(),
        customer = Customer(id = 23456, firstName = "Marcin", surname = "Gryszko", middleName = "NA"),
        lineItems = emptyList()
    )

Some people use external libraries to generate complete objects with random data (e.g. Podam). This approach can be even worse: you generate a fixture that is too large and may be random. Randomness and fixture complexity may make your test not only less understandable but also erratic.

Too complex data

Your fixture uses a complex representation when a simple one would be sufficient. We can simplify the previous example even more:

    val invoice = Invoice(
        id = 12345,
        totalAmount = 0.toBigDecimal(),
        customer = Customer(id = 0, firstName = "", surname = "", middleName = null),
        lineItems = emptyList()
    )

by setting required properties to the simplest representation possible, or null if a property is optional. They are just fill-in values required to construct the data.

In conversations with my fellow engineers, I often find a mix of astonishment and resistance when I propose to replace a value with null. We learned the hard way that nulls sneak into the runtime due to developer mistakes and cause costly errors. In tests, we want to introduce them deliberately to increase test sensitivity against regressions. Imagine that in the previous example somebody adds logic based on the invoice customer's middleName without modifying the test. If middleName had a value, there is a chance that the test wouldn't fail. With a null middleName, you will get a NullPointerException indicating that the value became used by the SUT.
Too realistic data

Your fixture uses some real examples from the domain, but your SUT doesn't care about the meaning of the data passed to it². The behaviour of the SUT isn't different if you use a real/realistic example or just a minimal one (as in the previous section) with invented values for the required properties. Take this example of a domain-to-JSON converter test:

    // Google is our main customer
    val tenant = DomainTenant(id = 92348476, companyName = "Google, LLC", address = "Mountain View, California, United States")

    val jsonTenant = converter.toJsonRepresentation(tenant)
    // assert JSON representation

The converter doesn't know nor care that Google is our most profitable customer. It is just an infrastructure class that maps one type to another. We can change this test to:

    val tenant = DomainTenant(id = 1, companyName = "::company::", address = "::address::")

You may notice a strange notation for the string constants. For the string values, I use a notation learned from J.B. Rainsberger to indicate that I need a value to pass to the SUT but I don't care what the exact value is.

Too little data

Yes, you can underspecify the SUT too. This happens when you are describing the behaviour of your test doubles with lenient matchers:

    when(invoiceCreator.create(any(), any())).thenReturn(OK)

You are allowing any argument to be passed to the stub, which is a source of indirect inputs. Those arguments could be replaced by another value (or even by null) in the SUT, and the test wouldn't stop you from doing so.
Generally, use strict matchers when specifying test double behaviour:

    when(invoiceCreator.create(invoice, tenant)).thenReturn(OK)

There are exceptions to this rule, described in more detail in: Effective test doubles, part 1 (tactical tips about the efficient and effective usage of test doubles, marcingryszko.medium.com).

Variations of data

Someone has in mind eventual extensions of the SUT and adds data variations to the fixture that in an unspecified future may change the logic of the SUT. Currently, this data is required in the fixture but irrelevant to the SUT. The SUT doesn't take any decision on that data:

    @ParameterizedTest
    @MethodSource("tenant")
    fun `create invoice`(tenant: Tenant) {
        // tenant is required to execute the SUT, but there is no SUT logic on tenant!
    }

You can safely remove those variations and use a single value (the simplest you can think of!).

Shared data

In your career, you are taught that duplication is bad. You eagerly remove it in tests without noticing that this process actually introduces other problems, worse than the duplication itself. You realize that some parts of the fixture are very similar between tests, so you extract them into parameterized creation methods that spawn similar objects valid for a variety of tests. Or you create and share standard objects to reuse in tests (a pattern known as Object Mother). As a result, your SUT is fed with too much data. Basically, it is the same complex-data problem as described in the Too complex data section, with the difference that the data is external to the test.

What are the consequences? Tests exhibit high coupling to the extracted fixture. They can become fragile and suddenly start to fail because somebody adapts the fixture to their own new test. Irrelevant details or mystery guests (parts of the fixture created outside of the test) appear, making it hard to connect the dots between the test inputs and outputs, a smell known as an obscure test.
The approach of extracting and externalizing shared parts of the fixture (to the outside of the test) leads to the pattern known as Standard Fixture, which is more of an antipattern than a boon. In practice, I find it implemented as:

- Object Mother — a class/object with static methods or variables creating or holding standard instances of fixture objects:

    class TestObjects {
        val standardInvoice = Invoice(…)
        val invoiceWithTwoLineItems = Invoice(…)
        val invoiceWithHighTotal = Invoice(…)
        val customer = Customer(…)

        fun invoiceForTenant(tenant: Tenant): Invoice = …

        // the list goes on
    }

- test Builder with pre-initialized object properties. You create instances of Invoice just by calling new InvoiceBuilder().build() and get the object filled with some mysterious data:

    public class InvoiceBuilder {
        private int id = 12345;
        private BigDecimal totalAmount = new BigDecimal(100);
        private Customer customer = new CustomerBuilder().build();
        private List<LineItem> lineItems = List.of(
            new LineItemBuilder().withSku("fake SKU 1").withAmount(10).build(),
            new LineItemBuilder().withSku("fake SKU 2").withAmount(90).build()
        );

        public InvoiceBuilder withId(int id) {
            this.id = id;
            return this;
        }

        // ...

        public Invoice build() {
            return new Invoice(id, totalAmount, customer, lineItems);
        }
    }

- creation methods — yes, you can have a standard fixture within the same test: those little inoffensive helper methods within the same test that create the same object again and again (maybe with some minor variations) and reuse it for different test cases…
they contain some irrelevant details for the tests) you can: - inline them and then remove all the unneeded data (following the tips from the previous sections) - group your test cases around the fixture (e.g. using nested tests) and move the shared fixture to the test group Strive for the minimal fixture To sum up the fixture dos and dont’s: prefer a minimalist fixture over a general one unless you have a really good reason to share the data. Your tests will document better the verified behaviour. You decrease the test fragility — everything that is needed to execute the SUT is right there in the test and your test is independent of other tests. Notice that when applying Test-Driven Development, chances are higher that you’ll have the minimal fixture. TDD mandates to write one test at a time and the very necessary test code to pass the test (not only the production code!). As a consequence, your fixture should contain the bare minimum to execute the production code and verify the result. In the test-after approach, you fit your tests to the SUT. It happens frequently in the last phase of the iteration when there is pressure to deliver the feature and jump to the next one. If there is an already created, shared object, the temptation to use it is high. So you reuse it, maybe adapting slightly to test requirements. And you are on a slippery slope to the entangled standard fixture. 1: I’m using the terms mock and stub, as defined in the XUnit Test Patterns book and popularized by Martin Fowler in his article Mocks Aren’t Stubs 2: Unless you are implementing a system test
https://marcingryszko.medium.com/minimal-fixture-everywhere-266f2c2958bb?source=user_profile---------9-------------------------------
CC-MAIN-2021-49
refinedweb
1,778
51.78
Programmers should definitely know how to use R. I don't mean they should switch from their current language to R, but they should think of R as a handy tool during development.

Again and again I find myself working with Java code like the following:

    public class SomeBigProject1 {
        public static double logStirlingApproximation(final int n) {
            return n*(Math.log(n)-1) + 0.5*Math.log(2*Math.PI*n);
        }

        public static double logFactorial(final int n) {
            double r = 0.0;
            for(int i=n;i>1;--i) {
                r += Math.log(i);
            }
            return r;
        }

        public static void main(final String[] args) {
            int nbad = 0;
            for(int n=1000;n<10000;++n) {
                if(Math.abs(logFactorial(n)-logStirlingApproximation(n))>=1.0e-5) {
                    ++nbad;
                }
            }
            System.out.println("nbad: " + nbad);
        }
    }

Imagine that this is some humongous project using Stirling's approximation as a replacement for the factorial. All the code up until main() is great, but the unfortunate developer has hard-coded an acceptance test into main(). If they run their big project, all they get out is:

    nbad: 7334

The developer needs to re-code and re-build to diagnose the failure, tweak their acceptance criteria or add more measurements. I strongly recommend a different work pattern: instead of bringing criteria into the code, bring the data out:

    public class SomeBigProject2 {
        public static void main(final String[] args) {
            System.out.println("n" + "\t" + "logFactorial" + "\t" + "logStirlingApproximation");
            for(int n=1000;n<10000;++n) {
                System.out.println(String.valueOf(n) + "\t"
                    + SomeBigProject1.logFactorial(n) + "\t"
                    + SomeBigProject1.logStirlingApproximation(n));
            }
        }
    }

Capture this output in a file named "data.tsv" and both Microsoft Excel and R can open it. Naturally I prefer to use R (so that is what I will demonstrate).
To read the results into R you start up an R session and type in a command like the following:

> d <- read.table('data.tsv', header=T, sep='\t', quote='', as.is=T, stringsAsFactors=F, comment.char='', allowEscapes=F)

Most of the arguments control what style of file R is to expect (what the field separator is, whether to expect escapes and quotes and so on). The settings I suggest here are the "ultra hardened" settings. If you make sure none of your fields have a tab or line-break in them when you print, then it is guaranteed R can read the data (no matter what wacky symbols are in it). On the Java side that usually means making sure any varying text fields are run through .replaceAll("\\s+"," ") "just in case."

At this point you can already look at your data with the summary() command:

> summary(d)
       n          logFactorial    logStirlingApproximation
 Min.   :1000   Min.   : 5912   Min.   : 5912
 1st Qu.:3250   1st Qu.:23034   1st Qu.:23034
 Median :5500   Median :41870   Median :41870
 Mean   :5500   Mean   :42536   Mean   :42536
 3rd Qu.:7749   3rd Qu.:61653   3rd Qu.:61653
 Max.   :9999   Max.   :82100   Max.   :82100

This immediately hints that you should have been thinking in terms of relative error instead of absolute error (since insisting on high absolute accuracy on large results does not always make sense). You also have access to standard statistical measures of agreement, like correlation:

> with(d, cor(logFactorial, logStirlingApproximation))
result: 1

You can see where your failures were:

> library(ggplot2)
> d$bad <- with(d, abs(logFactorial-logStirlingApproximation)>=1.0e-5)
> ggplot(d) + geom_point(aes(x=n, y=bad))

Yields the graph:

You can see all your failures are in the initial interval. You can then drill in:

> ggplot(d) + geom_point(aes(x=n, y=logFactorial-logStirlingApproximation)) + scale_y_log10()

And here we see some things (that are in general true for Stirling's approximation):
- It is very accurate.
- It is always an under estimate.
- It gets better as n gets larger.
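These observations can be cross-checked without R, using only a standard library. This sketch is mine, not the author's: it recomputes the article's two columns in Python and looks at the relative (rather than absolute) error, which is the reframing the summary() output suggested.

```python
import math

def log_stirling(n):
    # Stirling's approximation to log(n!)
    return n * (math.log(n) - 1) + 0.5 * math.log(2 * math.pi * n)

def log_factorial(n):
    # Exact log(n!) by summation, mirroring the Java version
    return sum(math.log(i) for i in range(2, n + 1))

# Sample the same range the Java program scanned, then inspect
# relative error instead of the hard-coded absolute threshold.
worst_rel = max(
    abs(log_factorial(n) - log_stirling(n)) / log_factorial(n)
    for n in range(1000, 10000, 500)
)
print(worst_rel)  # tiny: Stirling is extremely accurate in relative terms
```

Seen this way, the "7334 failures" are an artifact of the absolute 1.0e-5 criterion, not of the approximation.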
Essentially, by poking around with graphs in R you can figure out the nature of your errors (telling you what to fix) and generate findings that tell you how to fix your criteria (perhaps your code is working, but your test wasn't sensible). The "dump everything and then use R" technique is also particularly good for generating reports on code timings, using either geom_histogram or geom_density. For example, if we had data with a field runTimeMS then it is a simple one-liner to get a plot like the following:

> ggplot(t) + geom_density(aes(x=runTimeMS))

From this graph we can immediately see:
- Most of our run-times are very fast.
- We have a heavy right-tail (evidence of "contagion," or one slow-down causing others, like CPU or IO contention).
- Data is truncated at 100MS (could be something "censoring" the measurement, such as an exception being thrown or an abort).
- There is a spike at 30MS (something is true and slow for some subset of the data that isn't present in the majority).

This is a lot more than would be seen in a mean-only or mean-and-standard-deviation summary. We may even be seeing signs of two different bugs (the truncation and the spike). In all cases the key is to dump a lot of data in machine-readable form and then come back to analyze it. This is far more flexible than hoping to code in the right summaries, and then further hoping the summaries don't miss something important (or that you at least get a chance to notice if they do miss something). Being able to do exploratory statistics on dumps from your code (both results and timings) gives you incredible measurement, tuning and debugging powers. The scriptability of R means any later analysis is as easy as cut and paste.

Related posts:
- Automatic Differentiation with Scala
- R examine objects tutorial
- Learn Logistic Regression (and...
https://www.r-bloggers.com/programmers-should-know-r/
CC-MAIN-2016-44
refinedweb
939
55.64
Not updating imported module

I'm importing a module from a script. If I modify this module and run the script again (reloading), it keeps using the previous version. It doesn't update with the new changes. This doesn't happen with the same script and module in the same place running from TextMate. I guess it's a bug... Sometimes it's not easy to decide which forum to post to : ) Thanks!

Python creates .pyc files on the fly at runtime, but only if a compiled version does not already exist. You can force a module to reload with:

import myModule
reload(myModule)

good luck
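A side note on the answer above: the bare reload() builtin is Python 2; on Python 3 it lives in importlib. Here is a self-contained sketch of my own that writes a throwaway module to a temporary directory purely for demonstration, edits it on disk, and reloads it:

```python
import importlib
import pathlib
import sys
import tempfile

# Create a throwaway module just for this demo.
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "my_module.py").write_text("VALUE = 1\n")
sys.path.insert(0, str(tmp))

import my_module
assert my_module.VALUE == 1

# Edit the file on disk, as you would in an editor...
(tmp / "my_module.py").write_text("VALUE = 2  # changed\n")

# ...a second plain `import` is a no-op, but reload picks up the change:
importlib.reload(my_module)
assert my_module.VALUE == 2
```

This is the same fix as in the answer, just spelled for modern Python.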
https://forum.robofont.com/topic/165/not-updating-imported-module
CC-MAIN-2020-40
refinedweb
102
76.22
Cookies play an important role in most modern websites and web applications, allowing us to leave small strings of key/value pairs on the client's browser to help both developers and users by temporarily preserving important information such as preferences, unique identifiers, state etc. Fortunately for us, Flask makes working with cookies very simple. Let's get started.

Flask imports

Working with cookies requires a couple of imports from Flask.

request - To set and get cookies
make_response - To build a response to attach cookies to

Go ahead and import them at the top of your Flask app:

from flask import request, make_response

Create a new route

For this example, we're going to create a simple route with the URL /cookies:

@app.route("/cookies")
def cookies():
    resp = make_response("Cookies")
    return resp

We've created a response by passing a simple string to the make_response function resp = make_response("Cookies") which is then returned. The exact same thing would be achieved with:

@app.route("/cookies")
def cookies():
    return "Cookies"

We're not covering make_response in detail in this part, just know that you can build your response ahead of time and modify it before returning it! If you wanted to return a template, you would do the following:

@app.route("/cookies")
def cookies():
    resp = make_response(render_template("cookies.html"))
    return resp

It's exactly the same as:

@app.route("/cookies")
def cookies():
    return render_template("cookies.html")

The difference is that by using make_response we can build and modify our response ahead of sending it.

Setting cookies

Setting cookies is a simple affair. We simply attach them to our response object.
The syntax for setting a cookie is:

response.set_cookie("key", "value")

For example, let's set a cookie with the key of flavor and the value of chocolate chip:

@app.route("/cookies")
def cookies():
    resp = make_response("Cookies")
    resp.set_cookie("flavor", "chocolate chip")
    return resp

If you open the developer tools in your browser, head to the storage tab and select cookies from the navigation on the left, you'll see that the flavor cookie has been set!

Cookie parameters

You might notice in the developer tools that cookies have several parameters, including key, value, domain, path and more, all of which can be set using Flask.

The set_cookie function takes the following parameters:

set_cookie(
    key,
    value='',
    max_age=None,
    expires=None,
    path='/',
    domain=None,
    secure=False,
    httponly=False,
    samesite=None
)

See the table below for a breakdown of the cookie parameters available:

These options give us a great deal of control over how our cookies work and provide plenty of ways to manage them. Let's set the max_age and path keys with 10 seconds and the /cookies path:

resp.set_cookie(
    "flavor",
    value="chocolate chip",
    max_age=10,
    path=request.path
)

We've used request.path to access the path of the current route

If you check your browser's developer tools console, you'll see our flavor cookie now has an Expires on date along with a value for Path of /cookies.

We're going to come back to max_age in a minute with another example, but now let's talk through how to access cookies.

Accessing cookies

Just like we've used the request object to access many different request values including request.form, request.args, request.files and request.get_json()..
We use request.cookies to access the cookies with the following syntax:

cookies = request.cookies

If you run print(request.cookies) you'll see we get a nicely serialized Python dictionary:

print(request.cookies)
{'flavor': 'chocolate chip'}

As we're now working with a dictionary, we can access the individual values by key:

flavor = cookies.get("flavor")

Tip - Use cookies.get("key") to access keys in order to mute any KeyErrors when trying to access the dictionary values by key

Let's set a few more cookies:

resp.set_cookie("chocolate type", "dark")
resp.set_cookie("chewy", "yes")

If we now print request.cookies, we see:

{'flavor': 'chocolate chip', 'chocolate type': 'dark', 'chewy': 'yes'}

max age

You may notice that even after setting max_age in our flavor cookie, it still hangs around in the developer tools. Go ahead and comment out the first cookie we set:

@app.route("/cookies")
def cookies():
    resp = make_response("Set cookies")
    cookies = request.cookies
    print(cookies)
    # resp.set_cookie(
    #     "flavor",
    #     value="chocolate chip",
    #     max_age=10,
    #     path=request.path
    # )
    resp.set_cookie("chocolate type", "dark")
    resp.set_cookie("chewy", "yes")
    return resp

Refresh the page and give it 10 seconds or so. You'll notice we get the following in the terminal output:

{'chocolate type': 'dark', 'chewy': 'yes'}

Even though the flavor cookie persists in the browser, it's not sent to the server as we explicitly set the max_age variable in the cookie to 10 seconds. It will be deleted when the browser is closed.

Tip - To delete cookies from your browser, right click on the domain in the cookies tab in the developer tools and click on delete

Setting cookies from the client

Setting cookies using JavaScript is also very simple.

document.cookie = "key=value";

This will set the most basic type of cookie with no other meta information.
We can also provide the cookie with some more parameters like so: document.cookie = "key=value; expires=DDD, DD MMM YYYY HH:MM:SS UTC"; You can also add a path with the following: document.cookie = "key=value; expires=DDD, DD MMM YYYY HH:MM:SS UTC; path=/path"; You'll then be able to access any of the cookies set client-side using request.cookies Sessions Sessions use a special type of signed cookie, but you'll have to read the next episode in this series to learn more!
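Before moving on, it can help to see what set_cookie from earlier in this post actually puts on the wire: a Set-Cookie response header. Python's standard library can show that header format without running Flask at all. This is a sketch of mine using http.cookies, not Flask's internal implementation:

```python
from http.cookies import SimpleCookie

# Build a cookie like resp.set_cookie("flavor", "chocolate chip",
# max_age=10, path="/cookies") from the tutorial.
cookie = SimpleCookie()
cookie["flavor"] = "chocolate chip"
cookie["flavor"]["max-age"] = 10
cookie["flavor"]["path"] = "/cookies"

# One Set-Cookie header line, roughly what Flask emits.
header = cookie.output(header="Set-Cookie:")
print(header)

# Parsing an incoming Cookie request header, as request.cookies does:
incoming = SimpleCookie()
incoming.load("flavor=oatmeal; chewy=yes")
assert incoming["flavor"].value == "oatmeal"
assert incoming["chewy"].value == "yes"
```

Seeing both directions (serialize on the response, parse on the request) makes it clearer why request.cookies behaves like a plain dictionary.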
https://pythonise.com/series/learning-flask/flask-cookies
CC-MAIN-2021-17
refinedweb
933
55.34
Single cell tracking with napari In this application note, we will use napari (requires version 0.4.0 or greater) to visualize single cell tracking data using the Tracks layer. For an overview of the Tracks layer, please see the tracks layer fundamentals tutorial. This application note covers two examples: Visualization of a cell tracking challenge dataset Single cell tracking using btrack and napari 1. Cell tracking challenge data¶ The first example of track visualization uses data from the cell tracking challenge. We will use the C. elegans developing embryo dataset which consists of 3D+t volumetric imaging data, manually annotated tracks and cell lineage information. A full description of the data format can be found here. Extracting the tracks from the dataset¶ We need to extract the centroids of each cell and their associated track labels from the annotated dataset. We start by loading the images containing the centroids and unique track labels: import os import napari import numpy as np import pandas as pd from skimage.io import imread from skimage.measure import regionprops_table PATH = '/path/to/Fluo-N3DH-CE/' NUM_IMAGES = 195 def load_image(idx: int): """Load an image from the sequence. Parameters ---------- idx : int Index of the image to load. Returns ------- image : np.ndarray The image specified by the index, idx """ filename = os.path.join(PATH, '01_GT/TRA', f'man_track{idx:0>3}.tif') return imread(filename) stack = np.asarray([load_image(i) for i in range(NUM_IMAGES)]) For each image in the time-lapse sequence, we will now extract the unique track label ( track_id), centroid and timestamp in order to create the track data we will pass to the Tracks layer. For more information on the format of the track data, please see the “tracks data” section of the tracks layer fundamentals tutorial. def regionprops_plus_time(idx): """Return the unique track label, centroid and time for each track vertex. 
Parameters ---------- idx : int Index of the image to calculate the centroids and track labels. Returns ------- data_df : pd.DataFrame The dataframe of track data for one time step (specified by idx). """ props = regionprops_table(stack[idx, ...], properties=('label', 'centroid')) props['frame'] = np.full(props['label'].shape, idx) return pd.DataFrame(props) data_df_raw = pd.concat( [regionprops_plus_time(idx) for idx in range(NUM_IMAGES)] ).reset_index(drop=True) # sort the data lexicographically by track_id and time data_df = data_df_raw.sort_values(['label', 'frame'], ignore_index=True) # create the final data array: track_id, T, Z, Y, X data = data_df.loc[ :, ['label', 'frame', 'centroid-0', 'centroid-1', 'centroid-2'] ].to_numpy() This represents the minimum amount of information to display tracks in napari, and can already be visualised. At this point, there is no concept of track links, lineages, or tracks splitting or merging. These single tracks are sometimes known as tracklets: napari.view_tracks(data, name='tracklets') napari.run() Calculating the graph using the lineage information¶ The Tracks layer can also be used to visualize a track ‘graph’ using the additional keyword argument graph. The graph represents associations between tracks, by defining the mapping between a track_id and the parents of the track. This graph can be useful in single cell tracking to understand the lineage of cells over multiple cell division events. For more information on the format of the track graph, please see the “tracks graph” section of the tracks layer fundamentals tutorial. In the cell tracking challenge dataset, cell lineage information is stored in a text file man_track.txt in the following format: A text file representing an acyclic graph for the whole video. 
Every line corresponds to a single track that is encoded by four numbers separated by a space: L - a unique label of the track (label of markers, 16-bit positive value) B - a zero-based temporal index of the frame in which the track begins E - a zero-based temporal index of the frame in which the track ends P - label of the parent track (0 is used when no parent is defined) To extract the graph, we load the text file and convert it to a Nx4 integer numpy array, where the rows represent individual tracks and the columns represent L, B, E and P: lbep = np.loadtxt(os.path.join(PATH, '01_GT/TRA', 'man_track.txt'), dtype=np.uint) We can then create a dictionary representing the graph, where the key is the unique track label (L) and the value is the label of the parent track (P). full_graph = dict(lbep[:, [0, 3]]) Finally, we remove the root nodes (i.e. cells without a parent) for visualization with the Tracks layer: graph = {k: v for k, v in full_graph.items() if v != 0} Traversing the lineage trees to identify the root nodes¶ One property that is useful to visualize in single cell tracking is the track_id of the root node of the lineage trees, i.e. the founder cell. We create it with the following code: def root(node: int): """Recursive function to determine the root node of each subgraph. Parameters ---------- node : int the track_id of the starting graph node. Returns ------- root_id : int The track_id of the root of the track specified by node. """ if full_graph[node] == 0: # we found the root return node return root(full_graph[node]) roots = {k: root(k) for k in full_graph.keys()} The Tracks layer enables the vertices of the tracks to be colored by user specified properties. 
Here, we will create a property which represents the root_id of each tree, so that cells with a common ancestor are colored the same: properties = {'root_id': [roots[idx] for idx in data[:, 0]]} Visualizing the tracks with napari¶ Alongside the tracks, we can also visualize the fluorescence imaging data. timelapse = np.asarray( [imread(os.path.join(PATH, '01', f't{i:0>3}.tif')) for i in range(NUM_IMAGES)] ) Finally, we need to adjust the scaling of the data to account for the anisotropic nature of the images. We can use the scale feature of napari layers to set the voxel size where the z dimension is different to the size in the x and y dimensions. From the dataset, the voxel size (XYZ) in microns is 0.09 x 0.09 x 1.0. Therefore we can set the scale for the layers as: # scale factor for dimensions in TZYX order SCALE = (1.0, 1.0, 0.09, 0.09) We can now visualize the full, linked tracks in napari! viewer = napari.Viewer() viewer.add_image(timelapse, scale=SCALE, name='Fluo-N3DH-CE') viewer.add_tracks(data, properties=properties, graph=graph, scale=SCALE, name='tracks') napari.run() 2. Using btrack to track cells¶ The btrack library can be used for cell tracking. It provides a convenient to_napari() function to enable rapid visualization of the tracking results. You can learn more about the btrack library here. import btrack We start by loading a file containing the centroids of all the found cells in each frame of the source movie. Note that this file only contains the locations of cells in the movie, there are no tracks yet. We can use the btrack library to load this file as a list of objects that contain information about each found cell, including the TZYX position. The example dataset can be downloaded here. objects = btrack.dataio.import_CSV('napari_example.csv') Next, we set up a btrack.BayesianTracker instance using a context manager to ensure the library is properly initialized. The objects are added to the tracker using the .append() method. 
We also set the imaging volume using the volume property. As this is a 2D dataset, the limits of the volume are set to the XY dimensions of the image dataset, while the Z dimension is set to be very large (±1e5). In this case, setting the Z limits of the volume to be very large penalises tracks which initialize or terminate in the centre of the XY plane, unless they can be explained by corresponding cell division or death events. The tracker performs the process of linking individual cell observations into tracks and generating the associated track graph:

with btrack.BayesianTracker() as tracker:
    # configure the tracker using a config file
    tracker.configure_from_file('cell_config.json')
    tracker.append(objects)
    tracker.volume = ((0, 1600), (0, 1200), (-1e5, 1e5))

    # track and optimize
    tracker.track_interactive(step_size=100)
    tracker.optimize()

    # get the tracks in a format for napari visualization
    data, properties, graph = tracker.to_napari(ndim=2)

We set the configuration of the tracker from a configuration file using the .configure_from_file() method. An example configuration file can be found here. Next, the objects are linked into tracks using the .track_interactive() method. The step_size argument specifies how many steps are taken before reporting the tracking statistics. The .optimize() method then performs a global optimization on the dataset and creates lineage trees automatically. Finally, the .to_napari() method returns the track vertices, track properties and graph in a format that can be directly visualized using the napari Tracks layer:

viewer = napari.Viewer()
viewer.add_tracks(data, properties=properties, graph=graph)
napari.run()

A notebook for this example can be found in the btrack examples directory ( napari_btrack.ipynb)

Further reading¶

References for cell tracking challenge:

For a more advanced example of visualizing cell tracking data with napari, please see the Arboretum plugin for napari:
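As a closing aside, the LBEP-to-graph logic from the first example (building full_graph, filtering the roots out for the Tracks layer, and tracing each track back to its founder cell) can be exercised in isolation with made-up track numbers, no napari or numpy required. The toy table below is mine, not from the real dataset:

```python
# Toy LBEP table: (label, begin, end, parent); parent 0 means a root.
lbep = [
    (1, 0, 10, 0),   # founder cell
    (2, 11, 20, 1),  # daughters of track 1
    (3, 11, 20, 1),
    (4, 21, 30, 3),  # granddaughter
]

# Map each track label to its parent label, as in the tutorial.
full_graph = {label: parent for label, _, _, parent in lbep}

# Only non-root links go into the Tracks layer's `graph` argument.
graph = {k: v for k, v in full_graph.items() if v != 0}
assert graph == {2: 1, 3: 1, 4: 3}

def root(node):
    # Walk parent links until we reach a founder (parent == 0).
    while full_graph[node] != 0:
        node = full_graph[node]
    return node

roots = {k: root(k) for k in full_graph}
assert roots == {1: 1, 2: 1, 3: 1, 4: 1}  # everyone descends from track 1
```

Note this version walks the parent links iteratively, which avoids hitting Python's recursion limit on very deep lineage trees.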
https://napari.org/tutorials/tracking/cell_tracking.html
CC-MAIN-2022-05
refinedweb
1,518
55.95
XML::FOAFKnows::FromvCard - Perl module to create simple foaf:knows records from vCards

use XML::FOAFKnows::FromvCard;
# read a vCard file into $data
my $formatter = XML::FOAFKnows::FromvCard->format($data);
print $formatter->fragment;

The foafvcard script in the distribution is also a good and more elaborate usage example.

This module creates foaf:knows records of your contacts. It conforms with the Formatter API specification, version 0.95. It is not in the Formatter namespace, however, because it doesn't do what Formatters generally do, namely reformat all data from one format to another. Since it does conform, it can be used in most of the same contexts as a formatter.

format($string, [(seeAlso => $seeAlsoUri, uri => $myUri, email => $myEmail, attribute => 'CLASS', privacy => 'PRIVATE|PUBLIC')])

The format function that you call to initialise the converter. It takes the plain text as a string argument and returns an object of this class. In the present implementation, it does pretty much all the parsing and building of the output, so this is the only really expensive call of this module. In addition to the string, it can take a hash containing seeAlso, uri and email keys. You may set seeAlso to an URL to the rest of your FOAF, and you should specify one of uri or email to identify yourself. privacy and attribute are privacy options, and they can optionally be set to indicate what level of details should be included in the output. See the discussion in "Privacy Settings" for further details. You should ensure that the data passed is UTF-8, and has the UTF-8 flag set, as invalid RDF nodeIDs may result if you don't.

document([$charset])

This will return a full RDF document. The FOAF knows records will be wrapped in a Person element, which has to represent you somehow, see above.

fragment

This will return the FOAF knows records.

links

Will return all links found in the input plain text string as an arrayref.
The arrayref will for each element contain keys url and title, the former containing the URL, the latter the full name of the person if it exists.

title

Is meaningless for vCards, so will return undef.

By default, this module is conservative in what it outputs. FOAF is very powerful and will give us many interesting applications when we compile data about people. However, people may also feel that their privacy is compromised by having even their name so readily available, so you will have to be concerned about the privacy of your friends. vCards commonly contain an attribute that indicates the privacy level of the vCard. The name of this attribute can be set using the attribute parameter to format, and defaults to CLASS. If this attribute contains a "CONFIDENTIAL" value, this module will write nothing, and unless there is a "PUBLIC" class, it will only output the SHA1-hashed mailbox, a nick if it exists and a homepage if it exists. You may also set a privacy parameter to format. If set, it will override the above attribute for all vCards in the input. It may be set to PRIVATE or PUBLIC. In the first case, it will make sure only the above minimal information is included; in the latter, it will include many more properties (not defined, as it may change). If neither the privacy attribute nor the privacy parameter can be found, it will default to PRIVATE. Finally, note that even though we are hashing the e-mail addresses, they are not impossible to crack. It is, for many purposes, not infeasible to recover the plaintext e-mail addresses by a dictionary attack, i.e. combining common ISP domains with common names and comparing them with the hash. Hashing is therefore not a 100% guarantee that your friends' cleartext addresses will remain a secret if a determined attacker seeks them.

This is presently a beta release. It should do most things OK, but it has only been tested on vCards from three different sources.
Also, it is problematic to produce a full FOAF document, since the vCard has no concept at all of who knows all these folks. I have tried to approach this by allowing the URI of the person to be entered, but I don't know if this is workable. Feedback is very much appreciated. One may also report bugs at Text::vCard, Formatter, This module is currently maintained in a Subversion repository. The trunk can be checked out anonymously using e.g.: svn checkout FOAFKnows.
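For reference, the "SHA1-hashed mailbox" discussed in the privacy section is FOAF's mbox_sha1sum: the SHA-1 hex digest of the full mailto: URI. It is a one-liner in any language; here is a Python sketch of mine with a made-up address:

```python
import hashlib

def mbox_sha1sum(email):
    # foaf:mbox_sha1sum is the SHA-1 hex digest of the mailto: URI.
    return hashlib.sha1(("mailto:" + email).encode("ascii")).hexdigest()

digest = mbox_sha1sum("someone@example.org")
print(digest)  # 40 hex characters
assert len(digest) == 40
```

This also illustrates the dictionary-attack caveat above: anyone who can guess the address can recompute the same digest and compare.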
http://search.cpan.org/~kjetilk/XML-FOAFKnows-FromvCard/lib/XML/FOAFKnows/FromvCard.pm
crawl-002
refinedweb
738
62.27
Hello

I currently use 2 IDEs - IntelliJ and NetBeans 6.5, as both of them have strengths in some areas. In my personal opinion IntelliJ lacks in the area of web service support, and that should change in IntelliJ 9. Specifically:

- when editing WSDLs by hand, IntelliJ 8 doesn't seem to be able to recognize custom external XSD namespaces; it complains that the URI is not registered, even though I have XSDs with the same targetNamespace in the same directory as the WSDLs (NetBeans finds the XSDs automatically).
- it would be nice to have some graphical XSD & WSDL editor (not necessarily like in NetBeans)
- better support for web service stacks. We could have dialogs with various configuration options like we have for web.xml, and be able to configure WS-Security (and other WS-* standards) and other options of the web service stack there. IntelliJ should support Metro, Axis2 & Apache CXF this way (the last of which is currently not supported at all, but is advantageous due to its usage of Spring). Currently, if somebody wants to use some web service stack, he needs to read lots of documentation just to be able to configure something very simple. GUI web service stack configuration is common not only in NetBeans but also in JDeveloper.
- at some point we should have OpenESB support (with a designer).

I have not used the web service and JAXB support in IntelliJ, so I could be wrong here:

- it should be easy to regenerate web service Java from WSDL, and merge WSDL changes into the Java
- it should be possible to easily regenerate JAXB 2.0 Java classes after XSD changes, without leaving old unused files around

Should I post this also in JIRA, or could someone from JetBrains take care of this so that these suggestions are not forgotten?
https://intellij-support.jetbrains.com/hc/en-us/community/posts/206849085-Suggestions-for-IntelliJ-9
CC-MAIN-2020-24
refinedweb
295
57.3
Josh Branchaud

I'm a developer and consultant focused primarily on the web, specializing in React, Ruby on Rails, and PostgreSQL.

Work: Sr. Software Dev

Pinned and recent posts:
- Speeding Up An Expensive PostgreSQL Query: B-Tree vs. BRIN
- Strong Confirmation Modal with XState
- 1/7 GUI Tasks with React and XState: Counter
- Test ActionMailer `deliver_later` in RSpec Controller Tests
- Give your Postgres Queries More Memory to Work With
- Tackle that Big Task (1 min read)
- There is no triple-quote multi-line string syntax in Ruby
- Upgrade a Rails app from the Heroku-16 to the Heroku-18 Stack
- Beware The Missing Foreign Key Index: A Postgres Performance Gotcha
- Use an XState Machine with React
- Build a Custom RSpec Matcher for Comparing DateTimes
- 4 Months in to a 3 Month Experiment
- Reinstall PostgreSQL with OpenSSL using asdf
- A Few Methods for Returning Default Values when Creating ActiveRecord Objects
- The Modes of tmux
- Test Timing-Based JS Functions with Jest
- Reapply the Previous Visual Selection in Vim
- Interactively Browse Large Lists of Data with FZF
- Referencing the global namespace with Ruby's Scope Resolution Operator
- Living Notes on Working Effectively with Legacy Code
- Next-Gen Blogging (5 min read)
- Code Decay
- Practicing Well
- Crafting Commits (1 min read)
- Atomic Commits
- Slower Debugging
- Cleanup Commits (2 min read)
- Whiteboards as a Shared Resource
- Stepping back from the developer mindset
- Give pair programming another chance
- Don't Leave Your Empty States Empty
- Into the Flywheel

Recent comments:
- Reinstall PostgreSQL with OpenSSL using asdf: "I proactively used the flags detailed in this post to install..."
- Beware The Missing Foreign Key Index: A Postgres Performance Gotcha: "Thanks, Ian! Unfortunately, the query plan does not expose th..."
- Use an XState Machine with React: "Are there certain situations where you shouldn't use a state ..."
- Use an XState Machine with React: "What are some applications of state machines that you've foun..."
- How to configure Jest on a Next.js project: "Without Step 4, I got a pretty obscure error about needing to..."
- Good Commit Messages: "Absolutely! I tried to touch on that a bit in this Cleanup ..."
- How I learned to Learn in Public: "GitHub -- with its built-in support for markdown and code b..."
- A Little Bit of JavaScript: classnames: "I've picked up the habit from my current team to use void 0..."
https://dev.to/jbranchaud
CC-MAIN-2022-21
refinedweb
414
51.11
Core PHP Interview Questions

A PHP constructor and destructor are special functions which are automatically called when a PHP class object is created and destroyed. Generally, constructors are used to initialize the private variables of a class, and destructors to free the resources created/used by the class. Here is a sample class with a constructor and destructor in PHP.

<?php
class Foo {
    private $name;
    private $link;

    public function __construct($name) {
        $this->name = $name;
    }

    public function setLink(Foo $link) {
        $this->link = $link;
    }

    public function __destruct() {
        echo 'Destroying: ', $this->name, PHP_EOL;
    }
}
?>

PHP namespaces provide a way of grouping related classes, interfaces, functions and constants.

# define namespace and class in namespace
namespace Modules\Admin;
class CityController { }

# include the class using namespace
use Modules\Admin\CityController;

unlink: is used to remove a file from the server. Usage: unlink('path to file');
unset: is used to unset a variable. Usage: unset($var);

Use the function_exists('curl_version') function to check whether curl is enabled or not. This function returns true if curl is enabled, otherwise false.

Example:
if(function_exists('curl_version')){
    echo "Curl is enabled";
}else{
    echo "Curl is not enabled";
}

No, multiple inheritance is not supported by PHP.

PECL is an online directory or repository for all known PHP extensions. It also provides hosting facilities for downloading and development of PHP extensions. You can read more about PECL on the PECL website.

The func_num_args() function is used to get the number of arguments passed to a PHP function (the related func_get_args() returns the arguments themselves).

Sample Usage:
function foo() { return func_num_args(); }
echo foo(1,5,7,3); // output 4
echo foo('a','b'); // output 2
echo foo(); // output 0

You can add a 301 redirect in PHP by adding the below code snippet in your file.
header("HTTP/1.1 301 Moved Permanently");
header("Location: /option-a");
exit();

The PHP count function is used to get the length or number of elements in an array:

<?php
// initializing an array in PHP
$array = ['a','b','c'];
// Outputs 3
echo count($array);
?>

An exception that occurs at compile time is called a checked exception. This exception cannot be ignored and must be handled carefully (checked exceptions exist in Java, for example). Note: a checked exception that is not handled becomes an unchecked exception. An unchecked exception occurs at the time of execution.
https://www.onlineinterviewquestions.com/core-php-interview-questions/page/2/
CC-MAIN-2019-18
refinedweb
362
52.9
#include <fei_LogManager.hpp> Singleton class to manage attributes controlling the type and amount of data that should be written to the fei log file. Definition at line 22 of file fei_LogManager.hpp. destructor Definition at line 21 of file fei_LogManager.cpp. Accessor for the one-and-only instance of LogManager. Constructs a LogManager instance on the first call, returns that same instance on the first and all subsequent calls. Definition at line 25 of file fei_LogManager.cpp. Query output-level. Result is an enumeration. The enumeration is defined in fei_fwd.hpp. Definition at line 31 of file fei_LogManager.cpp. Set output-level, using an enumeration. The enumeration is defined in fei_fwd.hpp. Definition at line 36 of file fei_LogManager.cpp. Set output-level, using a string. Valid values are strings that match the names of the enumeration values. e.g., "MATRIX_FILES", etc. Definition at line 58 of file fei_LogManager.cpp. Specify path where debug-log files should be written. Definition at line 63 of file fei_LogManager.cpp. Query for string specifying path to where debug-log files should be written. Definition at line 68 of file fei_LogManager.cpp. Set numProcs and localProc (which will be used in the log-file-name). Definition at line 73 of file fei_LogManager.cpp. Register an instance of fei::Logger, to be notified when relevant attributes change. Definition at line 79 of file fei_LogManager.cpp. Remove an instance of fei::Logger from the notify list. Definition at line 84 of file fei_LogManager.cpp.
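The getLogManager accessor described above is the classic lazily-constructed singleton: build on first call, hand back the same object forever after. As a language-neutral illustration (a rough Python sketch of the pattern, not the fei code), it looks like this:

```python
class LogManager:
    """Minimal singleton sketch of the accessor pattern described above."""
    _instance = None

    def __init__(self):
        self.output_level = "NONE"
        self.output_path = ""

    @classmethod
    def get(cls):
        # Construct on the first call, return that same instance
        # on the first and all subsequent calls.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

a = LogManager.get()
b = LogManager.get()
assert a is b  # one-and-only instance

a.output_level = "MATRIX_FILES"
assert b.output_level == "MATRIX_FILES"  # same object, shared state
```

The shared-state assertion at the end is exactly why a logging manager is a common singleton: every caller sees the same output-level and path settings.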
http://trilinos.sandia.gov/packages/docs/r10.4/packages/fei/doc/html/classfei_1_1LogManager.html
CC-MAIN-2014-10
refinedweb
248
53.98
#include <services.h>

The space_services class is used to represent the parse state of a services file (see services(5) for file format information). Definition at line 29 of file services.h. Reimplemented in space_services_slurp. Definition at line 33 of file services.h.

- The destructor. Definition at line 24 of file services.cc.
- The constructor. It is private on purpose; use the create class method instead. Definition at line 31 of file services.cc.
- The default constructor. Do not use.
- The copy constructor. Do not use.
- The create class method is used to create a new dynamically allocated instance of this class. Definition at line 41 of file services.cc.
- The get method is used to get another record from the file. It returns false at end-of-file. Input lines are validated for simple correctness. Invalid lines cause fatal errors (see the error method) and are not passed through. Definition at line 48 of file services.cc.
- The is_ok_name method is used to determine if a service name is acceptable. It doesn't strictly follow IANA's rules because IANA itself does not. Definition at line 288 of file services.cc.
- The assignment operator. Do not use.
- The last_num instance variable is used to remember the previous service number. This is to allow diagnosis of sequencing errors. Definition at line 171 of file services.h.
http://nis-util.sourceforge.net/doxdoc/classspace__services.html
CC-MAIN-2018-05
refinedweb
225
71.71
Use regular expressions. Here's a quick example:

>>> import re
>>> re.findall(r'<li>(.+)</li>', html)
['I am some text 12345']

I tried this for testing:

>>> import urllib2
>>> import re
>>> re.findall(r'<p>(.+),/p>', html)

But the output was:

[]

I tried other tags too but all outputs were [], what's the problem?

The problem is that your html variable is just a string containing this value and not the actual HTML code ... the library that you have imported, urllib2 .. use it to get the code from that page. Read the urllib2 docs and also the example from there ..

import urllib2
response = urllib2.urlopen('')
html = response.read()

also.. should this

>>> re.findall(r'<p>(.+),/p>', html)

be

>>> re.findall(r'<p>(.+)</p>', html)?

and I am not sure if you read the link I gave you earlier about regular expressions, but the . matches any character including space. The + means one or more of the preceding pattern, so (.+) grabs all of the characters between <li> and </li> that match; with . alone, without +, it would simply match a single character.

Edited 1 Year Ago by Slavi

Use regular expressions

No no no, just to make it clear :) Have to post this link again. Use a parser, BeautifulSoup or lxml.

from bs4 import BeautifulSoup

html = '''\
<head>
  <title>Page Title</title>
</head>
<body>
  <li>Text in li 1</li>
  <li>Text in li 2</li>
</body>
</html>'''

soup = BeautifulSoup(html)
tag_li = soup.find_all('li')
print tag_li
for tag in tag_li:
    print tag.text

"""Output-->
[<li>Text in li 1</li>, <li>Text in li 2</li>]
Text in li 1
Text in li 2
"""

Edited 1 Year Ago by snippsat

Thank you @snippsat. Your example was exactly what I was looking for. And thank you @Slavi for your answer and explanation.

he's gone too far=D

Yes of course, to make it a great humoristic read. Regex can be OK to use sometimes, like when you only need a single text/value.
Both BeautifulSoup and lxml have built-in support for regex. Sometimes it's OK to use regex as a helper to a parser; when parsing dynamic websites you can get a lot of rubbish text.

Edited 1 Year Ago by snippsat

I wanted to post this question in a new discussion, but as it is related to this one I will ask here. My code:

from bs4 import BeautifulSoup
import urllib2

mylist = []
url = ''
html = urllib2.urlopen(url).read()
soup = BeautifulSoup(html)
tag_li = soup.find_all('li')
for tag in tag_li:
    if tag.text.startswith('A'):
        mylist.append(tag.text)

if 'A' in mylist[0]:
    if 'A' in mylist[1]:
        if 'A' in mylist[2]:
            print mylist
else:
    'sorry!'

The output should be the else message, but it prints this output:

[u'Apple', u'Age', u'Am']

What is the problem? I want the script to check whether the first 3 words (indexes) of mylist start with the letter 'A': if so, print the list, but if not, print 'sorry!'. But as you can see here, it has printed even index[4]! And one more question, how can I remove those u letters printed in the output?

Edited 1 Year Ago by Niloofar24

Well, it seems my question is basically wrong. Forget that question, sorry!
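The thread ends without an answer to the follow-up question, so here is a minimal sketch of the intended check (Python 3 syntax; the sample list is hypothetical stand-in data for the scraped li texts):

```python
# Hypothetical stand-in for the scraped <li> texts.
mylist = [u'Apple', u'Age', u'Am', u'Banana']

# Check that the first three entries all start with 'A'.
if all(word.startswith('A') for word in mylist[:3]):
    # Building the list with str() (or printing items individually)
    # avoids the u'' prefix that repr() adds to unicode strings in Python 2.
    print([str(word) for word in mylist[:3]])  # prints ['Apple', 'Age', 'Am']
else:
    print('sorry!')
```

One likely issue in the original: print mylist prints the entire list, not just the first three items; slicing with mylist[:3] limits both the check and the output.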
https://www.daniweb.com/programming/software-development/threads/492669/how-to-print-only-the-content-of-all-li-tags-from-a-url-page
CC-MAIN-2016-50
refinedweb
561
74.69
wikiHow is a “wiki,” similar to Wikipedia, which means that many of our articles are co-written by multiple authors. To create this article, volunteer authors worked to edit and improve it over time. Learn more...

Talk pages or discussion pages are a great way to start a discussion, leave comments for another editor, or notify of any action that you have taken on a wikiHow article. This wikiHow will show you how to use talk pages on wikiHow.

Steps

Method 1 of 3: Writing a Message

- 1. Locate the talk/discussion page. This usually has User_talk: or Discussion: in the namespace; this, however, won't be displayed on the page itself.
 - These namespaces will be shown in the URL, however, at the very end.
- 2. Locate the comment box. This is located at the bottom of the talk page and allows you to leave messages related to the page or the user's actions.
- 3. Enter a message for the article/user. The message should be related to the user's actions or edits to the article. Giving warnings that address non-existent behavior, as well as spam/personal attacks, is not acceptable.
- 4. Sign your message with four tildes ~~~~ (optional). If you want to give your message a personal touch, then signing with four tildes appends your signature and a timestamp.
- 5. Click on Preview. This will preview your message for errors before you post.
- 6. Click on Post. This will post your comment to the talk page.

Method 2 of 3: Replying to a Message

There are two ways to reply to a message on a talk page.

On Talk Pages (Discussion Pages)

- 1. Quote or mention the user you are replying to. Go into edit mode and copy all of the code in the user's message. Then type {{quote|(name of user)|2= and paste what you copied earlier.
- 2. Type your reply below the quote template. This should be related to the comment you made to the user.
- 3. Sign your post with four tildes ~~~~ (required). This will send a notification of your mention.
- 4. Click on Post.
This will post your comment to the talk page.

On User Talk Pages

- 1. Click on "Reply to (user)". This will take you to the talk page to leave a message on their talk page.
- 2. Type your reply in the box. This should be related to the comment you made to the user.
- 3. Sign your post with four tildes ~~~~ (optional). If you want to give your message a personal touch, then signing with four tildes appends your signature and a timestamp.
- 4. Click on Post. This will post your comment to the talk page.

Method 3 of 3: Keeping within Policies

- 1. Remember that wikiHow is not a social networking site. Like most wikis, it is not appropriate to just chat generally on talk pages. They exist as a means of discussing improvements to articles or discussing conduct pertaining to the editor. Unnecessary chat bogs down the recent changes patrol backlog, and as such, chatting excessively could result in a block.
- 2. Note that talk pages are an important means of communication. HTML that alters the message(s) so it is unreadable is against policy, as it disrupts communication with the editor.
- 3. Avoid using talk pages to troll or disrupt wikiHow. Do not ask for personally identifiable information, do not make personal attacks, do not spam messages, assume good faith, and do not leave unconstructive comments for other editors.

- An example of an acceptable message would be "I noticed you made an edit X that I do not agree with". An unacceptable message would be "Your edit is a piece of trash" or "If you revert my edit again, I will post your phone number".
https://www.wikihow.com/Use-Talk-Pages-on-wikiHow
CC-MAIN-2020-45
refinedweb
618
75.1
The Spotify engineering team recently released a new open-source tool called Backstage. While the initial release is still very much a work in progress, the tool has a lot of potential to fill a gap in developer tooling that many engineering teams may not even realize could help them. What is Backstage? Developed by the Spotify engineering team, Backstage is an open-source platform used for building developer portals. It is based on an internal tool Spotify uses to help organize development tools, documentation, and processes that new developers need to be aware of when developing a new app or API. Simply put, Backstage helps you build developer productivity tools. The idea behind Backstage is that it helps reduce the cognitive load on a new developer by pulling together commonly required resources into one browser-based user interface. Think about all the things you need to familiarize yourself with when you start developing something for a new organization. Is there a standard set of design patterns, frameworks, and programming languages that you are expected to use? Where can you find documentation about the organization’s APIs that you may need to consume? How and where can or should you deploy your solution? You can help your developers answer these types of questions by building your own custom instance of Backstage, using the growing library of existing plugins or building your own plugins. Note: Keep in mind that Backstage is still very new. In fact, the initial alpha version was released on 16 March 2020. Don’t expect a full ecosystem of plugins just yet. Do, however, expect a clean solution, fresh UI, thoughtful documentation, and a potential for something great. Some of the examples in this article could become stale rather quickly, so always refer to the official documentation when in doubt. 
Backstage tech stack highlights

Before we get hands-on with Backstage, let's look at a few tools and frameworks that are fundamental to the Backstage implementation.

- Node.js: Backstage is a web frontend that is designed to run on Node.js, at least at development time. Backstage currently requires Node 12; I had mixed results running on Node 14
- TypeScript: Backstage is mostly written in TypeScript, though you can code in pure JavaScript if you so choose
- React: The frontend code is written using React. React components play a fundamental role in Backstage's plugin architecture. Plugins are essentially individually packaged React components
- Yarn and Lerna: These two JavaScript tools go hand in hand. An alternative to npm, the Yarn package manager adds a few extra capabilities that enable Backstage's monorepo structure. Similarly, Lerna also helps enable a monorepo structure. More on this shortly

Getting started

Let's get started with Backstage by creating a new instance of it to explore what is included out of the box. There is a Backstage CLI (an npm package) we can use to quickly create a new Backstage workspace.

Note: You will need Node.js 12 installed to use the Backstage CLI.

Open a terminal and navigate to a folder on your computer where you want to create a new Backstage workspace. Run the following commands to install the CLI and run it. You only need to provide a name for your Backstage instance at this point.

> npm install -g @backstage/cli
> backstage-cli create-app
> Enter a name for the app [required] brian-backstage

Creating the app...
 Checking if the directory is available:
  checking brian-backstage ✔
 Creating a temporary app directory:
  creating temporary directory ✔
 Preparing files:
 ...
 Moving to final location:
  moving brian-backstage ✔
 Building the app:
  executing yarn install ✔
  executing yarn tsc ✔
  executing yarn build ✔

Successfully created brian-backstage

The build step may take some time to complete.
Once complete, navigate into the folder that was just created and start the app for the first time. For example:

cd brian-backstage
npm start

You should now be able to see your Backstage instance in the browser, running at. It will look something like this:

Exploring the repository structure

Backstage is structured as a monorepo. Everything you need to build an instance is included in a single repository. This simplifies the developer experience while allowing Backstage to have a plugin architecture where each plugin can be built, tested, and shared independently. Here is what the monorepo structure looks like:

The source for the main Backstage UI is found in the packages/app folder, and plugins can be found in the plugins folder. Notice that the app folder and each of the plugin folders are independent npm packages, complete with their own package.json. This structure is possible thanks to Lerna and Yarn. These two tools come together to create a seamless monorepo structure.

Yarn's workspace feature allows a single repository to contain the source for multiple npm packages. In Yarn terminology, a workspace is a folder containing an npm package. The list of folders considered to be Yarn workspaces is defined in the top-level package.json like this:

"workspaces": {
  "packages": [
    "packages/*",
    "plugins/*"
  ]
},

This configuration tells Yarn that any child folders within the packages and plugins folders are separate workspaces containing npm packages. Creating dependencies between these npm packages is as easy as referencing them as a normal npm package. For example:

// packages/app/src/plugins.ts
export { plugin as HelloworldPlugin } from '@backstage/plugin-helloworld-plugin';

Lerna provides the CLI commands to build, test, and lint all of the packages in the monorepo as one unit.
Its configuration can be found in lerna.json:

{
  "packages": ["packages/*", "plugins/*"],
  "npmClient": "yarn",
  "useWorkspaces": true,
  "version": "0.1.0"
}

Similar to Yarn, Lerna's configuration specifies a set of folders that contain npm packages. It also specifies that Yarn should be used as the npm client and the Yarn workspaces feature should be used.

The scripts defined in package.json provide a good demonstration of where Yarn and Lerna fit into the build process:

"scripts": {
  "start": "yarn workspace app start",
  "bundle": "yarn workspace app bundle",
  "build": "lerna run build",
  "tsc": "tsc",
  "clean": "backstage-cli clean && lerna run clean",
  "diff": "lerna run diff --",
  "test": "lerna run test --since origin/master -- --coverage",
  "test:all": "lerna run test -- --coverage",
  "lint": "lerna run lint --since origin/master --",
  "lint:all": "lerna run lint --",
  "create-plugin": "backstage-cli create-plugin",
  "remove-plugin": "backstage-cli remove-plugin"
},

Lerna is used for any of the scripts that should be run against the multiple workspaces. For example, when we run npm test, we want to run tests for the app and all of the plugins at the same time:

$ npm test

> root@1.0.0 test D:\brian-backstage
> lerna run test -- --coverage

lerna notice cli v3.22.1
lerna info Executing command in 3 packages: "yarn run test --coverage"
lerna info run Ran npm script 'test' in 'plugin-welcome' in 81.7s:
yarn run v1.22.4
$ backstage-cli test --coverage
...

Note: If you have not pushed your Backstage workspace into a remote repository such as GitHub, then some of the out-of-the-box Lerna scripts will fail. These scripts are designed to consider whether your local code differs from what is in your remote repository. If you don't want to push your code to a remote repository, remove the --since origin/master from the script.

Creating a custom plugin

The Backstage CLI lets you quickly generate a new plugin.
Run the following command within the root of the repository and provide a name for the plugin:

backstage-cli create-plugin
Enter an ID for the plugin [required] helloworld-plugin

The CLI will create a new plugin under the plugins folder. It wires up the plugin into the Backstage app. For example, you will notice a new route has been set up in plugins/helloworld-plugin/src/plugin.tsx:

export const rootRouteRef = createRouteRef({
  path: '/helloworld-plugin',
  title: 'helloworld-plugin',
});

Your plugin's main component, ExampleComponent, is available at the /helloworld-plugin path by default. Start your server with npm start and navigate to to view your plugin. Try changing the title of the plugin by modifying the ExampleComponent component.

Using existing plugins

The Spotify engineering team has made several plugins available in the main Backstage GitHub repo already. Some of these plugins consist of frontend and backend packages. Incorporating these plugins is almost as easy as running a Yarn command: yarn add @backstage/plugin-tech-radar.

Let's take a look at how to add the Tech Radar plugin. This plugin renders a visualization of your organization's standardized technologies. The data that drives the visualization can be provided from an external API, but for this example, we will use the sample data that comes built into the plugin.

There are actually two ways to use the Tech Radar plugin. There is a "simple configuration" that lets you install it as a normal Backstage plugin, and there is an "advanced configuration" that lets you reuse the Tech Radar visualization as a normal React component within your own custom plugin. Let's try the advanced configuration option and incorporate the Tech Radar visualization into the hello world plugin that we just created.

First you need to add the Tech Radar npm package to the plugin.
Navigate into the plugin's subdirectory and install the package:

cd plugins/helloworld-plugin
yarn add @backstage/plugin-tech-radar

Replace the contents of plugins\helloworld-plugin\src\components\ExampleComponent.tsx with the following code:

import React, { FC } from 'react';
import { Grid } from '@material-ui/core';
import { Header, Page, pageTheme, Content, ContentHeader, HeaderLabel, SupportButton } from '@backstage/core';
import { TechRadarComponent } from '@backstage/plugin-tech-radar';

const ExampleComponent: FC<{}> = () => (
  <Page theme={pageTheme.tool}>
    <Header title="Welcome to helloworld-plugin!" subtitle="Optional subtitle">
      <HeaderLabel label="Owner" value="Team X" />
      <HeaderLabel label="Lifecycle" value="Alpha" />
    </Header>
    <Content>
      <ContentHeader title="Hello Tech Radar">
        <SupportButton>A description of your plugin goes here.</SupportButton>
      </ContentHeader>
      <Grid container spacing={3}>
        <Grid item>
          <TechRadarComponent width={1000} height={400} />
        </Grid>
      </Grid>
    </Content>
  </Page>
);

export default ExampleComponent;

Line 4 imports the TechRadarComponent React UI component, and line 18 renders it. You will notice that we are specifying minimal props on the component: just width and height. The authors of this component included a rich set of sample data that is shown by default if a data source is not provided. You can provide your own data by specifying your own function on the getData prop. Check out the Tech Radar component API here.

When you run your app and access your hello world plugin, it should look something like this:

What's next?

We looked at how Backstage is structured, and how to create a new instance of it, build it, and run it. We also looked at how to create a custom plugin and reuse existing plugins. At this point, you may want to deploy what you have. One deployment option is to containerize and deploy your instance as a Docker container.
The Spotify engineering team's instance of Backstage serves as a great demonstration of how to do this. Check out their Dockerfile to get started and you will be deployed in no time.
https://blog.logrocket.com/better-developer-portals-spotify-backstage/
CC-MAIN-2020-40
refinedweb
1,842
53.61
You might also take a look at LibX, "A Browser Plugin for Libraries", that can do this sort of function for you automatically.

Change the location by saving the following as a bookmarklet:

javascript:(function(){ location.href = location.href.replace( location.hostname, location.hostname + '.ezproxy.its.uu.se' ); })()

However, the above first needs you to tell Firefox to load the original URL (so: you'll have to press Return in the location bar) to get the location object populated. Instead, to be prompted for a URL rather than first having your browser (try to) load it:

javascript:(function(){
  var url = prompt('Type URL to browse');
  var suffix = '.ezproxy.its.uu.se';
  /* Don't know how the proxy would handle https or specific ports;
   * let's just copy them...
   * $1 = optional protocol, like 'http[s]://'
   * $2 = domain, like 'superuser.com'
   * $3 = optional port, like ':8080'
   * $4 = rest of the URL, like '/questions/154689/ .. page/154692#154692'
   */
  url = url.replace( /(\w*:\/\/)?([^:\/]*)(:[0-9]*)?(.*)/, '$1$2' + suffix + '$3$4' );
  if(url.indexOf('http') != 0){
    url = 'http://' + url;
  }
  location.href = url;
})()

And once you've switched to using the proxy, you can use some jQuery magic to rewrite each location in the HTML that is served by the proxy -- but only needed if it doesn't do that for you on the fly. To be saved as a user script (like for Greasemonkey), with some initial code to first ensure jQuery is available, and to only be included for the domain of your proxy server (hence only when you're browsing using that proxy):

// ==UserScript==
// @name Rewrite URLs to use proxy
// @namespace
// @description Rewrites absolute URLs to use proxy
// @include http://*.ezproxy.its.uu.se/*
// ==/UserScript==

var $;
var suffix = '.ezproxy.its.uu.se';

// Rewrites an attribute to include the proxy server address, if a full
// domain is specified in that attribute.
function rewriteAttr(attrName){
  $('[' + attrName + ']').attr(attrName, function(){
    // Don't know how the proxy would handle https or specific ports;
    // let's just copy them...
    // $1 = protocol, like 'http[s]://'
    // $2 = domain, like 'superuser.com'
    // $3 = optional port, like ':8080'
    // $4 = rest of the URL, like '/questions/154689/ .. page/154692#154692'
    return $(this).attr(attrName).replace(
      /(\w*:\/\/)([^:\/]*)(:[0-9]*)?(.*)/,
      '$1$2' + suffix + '$3$4' );
  });
}

// Rewrite anchors such as <a href=""> and references
// like <link rel="stylesheet" href="">
function letsJQuery() {
  rewriteAttr('href');
  rewriteAttr('src');
}

// Loads jQuery if required.
// See
(function(){
  if (typeof unsafeWindow.jQuery == 'undefined') {
    var GM_Head = document.getElementsByTagName('head')[0] || document.documentElement;
    var GM_JQ = document.createElement('script');
    GM_JQ.src = '';
    GM_JQ.type = 'text/javascript';
    GM_JQ.async = true;
    GM_Head.insertBefore(GM_JQ, GM_Head.firstChild);
  }
  GM_wait();
})();

// Check if jQuery's loaded
function GM_wait() {
  if (typeof unsafeWindow.jQuery == 'undefined') {
    window.setTimeout(GM_wait, 100);
  } else {
    $ = unsafeWindow.jQuery.noConflict(true);
    letsJQuery();
  }
}

This is precisely the kind of situation that a Proxy auto-config (PAC) script is intended to solve. The following script will configure Firefox so that it will transparently route requests through your local proxy, without having to rewrite them. Save this file somewhere on your filesystem, and then go into the Connection Settings dialog and put the path in the "Automatic proxy configuration URL" setting. (This is supported by all the major browsers, not just Firefox.)

function FindProxyForURL(url, host) {
  return "com.ezproxy.its.uu.se";
}

This is a javascript function, and so conditional logic is possible as well.

How about using the URL Parser Firefox Add-on @ addons.mozilla.org/en-US/firefox/addon/176748/

Or use the bookmarklet from urlparser.com
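The hostname-rewriting regex from the bookmarklet above can be exercised outside the browser; here is a small sketch in plain Node-style JavaScript (no browser objects involved, just the same pattern and replacement):

```javascript
// Insert the proxy suffix into the hostname part of a URL, mirroring the
// bookmarklet: $1 = optional protocol, $2 = domain, $3 = optional port,
// $4 = the rest of the URL.
var suffix = '.ezproxy.its.uu.se';

function addProxySuffix(url) {
    return url.replace(
        /(\w*:\/\/)?([^:\/]*)(:[0-9]*)?(.*)/,
        '$1$2' + suffix + '$3$4'
    );
}

console.log(addProxySuffix('http://superuser.com/questions/154689'));
// → http://superuser.com.ezproxy.its.uu.se/questions/154689
```

Because there is no /g flag, only the first (leftmost) match is rewritten, which is exactly what you want for a single URL.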
http://superuser.com/questions/154689/what-firefox-functionality-do-i-use-to-rewrite-and-open-the-url-of-the-active-pa/154755#154755
crawl-003
refinedweb
585
58.38
NAME
remove − remove a file or directory

SYNOPSIS
#include <stdio.h>

int remove(const char *pathname);

DESCRIPTION
remove() deletes a name from the file system. It calls unlink(2) for files, and rmdir(2) for directories.

RETURN VALUE
On success, zero is returned. On error, −1 is returned, and errno is set appropriately.

ERRORS
The errors that occur are those for unlink(2) and rmdir(2).

ATTRIBUTES
For an explanation of the terms used in this section, see attributes(7).

CONFORMING TO
C89, C99, 4.3BSD, POSIX.1-2001.

NOTES
Under libc4 and libc5, remove() was an alias for unlink(2) (and hence would not remove directories).

COLOPHON
This page is part of release 3.53 of the Linux man-pages project. A description of the project, and information about reporting bugs, can be found at http://www.kernel.org/doc/man−pages/.
http://man.linuxtool.net/centos7/u3/man/3_remove.html
CC-MAIN-2019-30
refinedweb
101
70.9
Re: [ydn-javascript] Microsoft Visual Studio 2008

Hello, I wonder if someone could advise me on which framework tool or development environment toolkit to use with YUI. Thanks, Nora

----- Original Message ----
From: Frank Dietrich <fdietrich@...>
To: ydn-javascript@yahoogroups.com
Sent: Sunday, 3 August, 2008 6:47:12 PM
Subject: Re: [ydn-javascript] Microsoft Visual Studio 2008

As I said, I'm not sure if you can do JavaScript from within VS. But I'm sure some of the guys here will kick in and have the correct answer.

*********** REPLY SEPARATOR ***********

On 03.08.08 at 17:40 Noorhan Abbas wrote:

Hello, I am trying to get the yahoo calendar to work. The statement 'YAHOO.namespace( "") gives me the following error: 'namespace' is a new reserved word and should not be used as an identifier. Any clues?!!! Thank you, Nora

----- Original Message ----
From: Frank Dietrich <fdietrich@csi. com>
To: ydn-javascript@ yahoogroups. com
Sent: Sunday, 3 August, 2008 5:42:44 PM
Subject: Re: [ydn-javascript] Microsoft Visual Studio 2008

Nora,

>I have just started using the YUI in my project. I am trying to access
>the YAHOO components from within the Microsoft Visual Studio 2008
>Edition tool. It can not understand the statement YAHOO.namespace
>("").... Does anybody know why this is happening? I am using the YUI
>book and was trying out the first example of creating a simple
>calendar. Any recommendations?!!!
>The broader picture is: I will try to use the YAHOO TreeView control
>and link it with a web page created using the Google Appengine kit.

I'm not absolutely sure and others may correct me, but the namespace here more or less is a global variable (YAHOO) where "namespace" is a function creating a reference underneath it. I think this differs from the namespace-concept where it's more about naming. The idea behind it is the same, allowing same function-signatures in different contexts and thus protecting them from interference.
You'll need a reference to several YAHOO js-files that will make the YAHOO namespaces available. But I'm not sure if You then could work from within VS.

Frank

Not happy with your email address? Get the one you really want - millions of new email addresses available now at Yahoo!
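Frank's description of YAHOO.namespace() as "a function creating a reference underneath" a global variable can be sketched in plain JavaScript (this is a simplified stand-in for illustration, not YUI's actual implementation):

```javascript
// Simplified sketch of a namespace() helper: walk a dotted path,
// creating empty objects along the way, and return the leaf object.
var YAHOO = {};

function namespace(root, path) {
    var parts = path.split('.');
    var node = root;
    for (var i = 0; i < parts.length; i++) {
        // Reuse an existing level if present, otherwise create it.
        node[parts[i]] = node[parts[i]] || {};
        node = node[parts[i]];
    }
    return node;
}

namespace(YAHOO, 'example.calendar');
console.log(typeof YAHOO.example.calendar); // → object
```

Errors like the one Nora saw typically come from the editor's stricter JScript validation, which treats "namespace" as a future reserved word, not from the browser's JavaScript engine.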
https://groups.yahoo.com/neo/groups/ydn-javascript/conversations/topics/35464?o=1&d=-1
CC-MAIN-2017-26
refinedweb
396
64.91
Polymorphism is one of the OOP features that allows us to perform a single action in different ways. For example, let's say we have a class Animal that has a method sound(). Since this is a generic class, we can't give it a specific implementation like Roar, Meow, or Oink; we have to give a generic message.

public class Animal{
    ...
    public void sound(){
        System.out.println("Animal is making a sound");
    }
}

Now let's say we have two subclasses of Animal: Horse and Cat, which extend (see Inheritance) the Animal class. We can override the sound() method in each subclass to give it its own implementation. This is a perfect example of polymorphism (a feature that allows us to perform a single action in different ways). It would not make any sense to just call the generic sound() method, as each Animal has a different sound. Thus we can say that the action this method performs is based on the type of object.

What is polymorphism in programming?
Polymorphism is the capability of a method to do different things based on the object that it is acting upon. In other words, polymorphism allows you to define one interface and have multiple implementations. As we have seen in the above example, we defined the method sound() and have multiple implementations of it in the different subclasses. Which sound() method will be called is determined at runtime, so the example we gave above is a runtime polymorphism example.

Types of polymorphism and method overloading & overriding are covered in separate tutorials. You can refer to them before going through this topic.

Let's write down the complete code of it:

Example 1: Polymorphism in Java

Runtime Polymorphism example: Animal.java

public class Animal{
    public void sound(){
        System.out.println("Animal is making a sound");
    }
}

Method overloading, on the other hand, is compile-time polymorphism. Output:

a: 10
a and b: 10,20
double a: 5.5
O/P : 30.25
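The subclass bodies and the overloading example were elided from the page above; the sketch below reconstructs them so that the outputs match those shown (the Neigh/Meow strings and the Overload and Demo class names are my assumptions; Horse and Cat come from the text):

```java
class Animal {
    public void sound() {
        System.out.println("Animal is making a sound");
    }
}

class Horse extends Animal {
    @Override
    public void sound() {
        System.out.println("Neigh");  // assumed horse sound
    }
}

class Cat extends Animal {
    @Override
    public void sound() {
        System.out.println("Meow");
    }
}

// Compile-time polymorphism: three overloads of sum() chosen by signature.
class Overload {
    public void sum(int a) {
        System.out.println("a: " + a);
    }
    public void sum(int a, int b) {
        System.out.println("a and b: " + a + "," + b);
    }
    public double sum(double a) {
        System.out.println("double a: " + a);
        return a * a;
    }
}

public class Demo {
    public static void main(String[] args) {
        // Runtime polymorphism: the override is picked by the object's type.
        Animal a = new Horse();
        a.sound();   // prints Neigh
        a = new Cat();
        a.sound();   // prints Meow

        Overload o = new Overload();
        o.sum(10);       // a: 10
        o.sum(10, 20);   // a and b: 10,20
        System.out.println("O/P : " + o.sum(5.5));  // double a: 5.5, then O/P : 30.25
    }
}
```

Note that the overload is chosen by the compiler from the argument types, while the sound() override is chosen at runtime from the object's actual class.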
https://beginnersbook.com/2013/03/polymorphism-in-java/
CC-MAIN-2018-05
refinedweb
326
66.13
On Fri, Mar 9, 2012 at 3:11 PM, Laine Stump <laine laine org> wrote: > On 03/09/2012 09:16 AM, Jiri Denemark wrote: >> Hi. >> >> On Fri, Mar 09, 2012 at 11:32:47 +0000, Stefan Hajnoczi wrote: >> ... >>> static __inline__ int platform_test_xfs_fd(int fd) >>> { >>> struct statfs buf; >>> if (fstatfs(fd, &buf) < 0) >>> return 0; >>> return (buf.f_type == 0x58465342); /* XFSB */ >>> } >>> >>> In other words, XFS detection will fail when SELinux is enabled. >>> >>> I'm not familiar with libvirt's use of SELinux. Can someone explain >>> if we need to expand the policy in libvirt and how to do that? >> Actually, there is no SELinux policy in libvirt. Libvirt merely uses an >> appropriate security context when running qemu processes. The rules what such >> processes can do and what they are forbidden to do are described in SELinux >> policy which is provided as a separate package (or packages on some distros). >> So it's this policy (selinux-policy package on Fedora based distros) which >> would need to be expanded. Thus it should be negotiated with SELinux policy >> maintainers if they are willing to allow svirt_t domain calling fstatfs. > > (Also, since the problem occurs on NFS, this may need to be somehow > related to virt_use_nfs being turned on.) No, this XFS situation is independent of NFS. It's another codepath in QEMU where fstatfs(2) is called, I found it this morning. Stefan
https://www.redhat.com/archives/libvir-list/2012-March/msg00397.html
CC-MAIN-2016-36
refinedweb
229
64.41
Farid Zaripov wrote: > > -----Original Message----- > > From: Martin Sebor [mailto:sebor@roguewave.com] > > Sent: Friday, July 28, 2006 4:02 AM > > To: stdcxx-dev@incubator.apache.org > > Subject: Re: testsuite process helpers (was: RE: string > > methods thread safety) > [...] > > Martin, I have been updated the files rw_process.h, process.cpp and > 0.process.cpp. The new files are attached. Can you please post a diff instead of the entire files? It makes it easier to see changes. Also, the [PATCH] bit should be first on the subject line (as opposed to last), for two reasons: a) it's easier to spot it among all other emails in one's inbox, and b) it makes it possible to sort one's inbox and see all pending patches. Thanks! > > ChangeLog: > *_process_error_report_mode): New variable to enable/disable > rw_error() outputs within defined functions.. > (rw_wait_pid): Added the timeout parameter. Excellent! Although I think the UNIX branch should probably try to be more robust about handling an existing alarm (or avoiding the function and using some other mechanism). > ). > * process.cpp: Ditto. > * 0.process.cpp: New test exercising the rw_process_create(), > rw_process_kill() and rw_waitpid() functions. The test doesn't compile with strict compilers (such as EDG eccp). Keep in mind that extern "C" and extern "C++" functions or pointers to such things are incompatible. I.e., this is ill-formed: extern "C" void handle_signal (int) { } void foo () { void (*old_handler)(int) = signal (SIGALRM, handle_signal); } because old_handler has extern "C++" language linkage (there's no way to declare a local extern "C" pointer in an extern "C++" function without introducing a typedef at namespace scope, like so: extern "C" { typedef void sighandler_t (int); void handle_signal (int) { } } void foo () { sighandler_t* old_handler = signal (SIGALRM, handle_signal); } Thanks Martin
Ads: Sell Me On Selling Me

Ads that do more than just try to sell you something, and that serve as an inspiration for copywriters and art directors everywhere.

- REEBOK: "Terry Tate, Office Linebacker: Pilot" (short film, 03:42). The pilot film that introduces Terry Tate to the world at large.
- (01:04) This is a short video I was commissioned to make by Lush UK. They aim to help small charities and ask for nothing in return (i.e. they don't ask for their logo to be included on the charities…)
- (01:47) Discover the art of finger tutting, a new kind of dance performed by JayFunk, from LA. A commercial video for the Samsung Galaxy SII phone. Agency: Heaven. Production: LABANDEORIGINALE. Art direction: …
- (00:30) A 30-second spot for Good & Plenty candy. Made with 4 boxes of candy, 3 lights, 2 poster boards, 1 animator, and about 100 hours of work. Shot with the Canon 7D using Dragonframe. Dedicated to…
- (01:33) A montage of 2.5D animations created for Ad Hoc Films. By manipulating still photos we created these stunning slow-motion shots. Some shots were…
- (04:13) Havana Heat is a steamy love letter to Good Books in the bodice-ripping language made famous by Mills & Boon books, available along with millions of other titles through the online store at…
- QuitSmoking.com: Behind the Scenes (00:52). A quick behind-the-scenes look at the QuitSmoking "Kill the Habit" and "Drop the Habit" spots.
- (00:30) Project name: MTV Summer. Design & production company: Mirari & Co. Creative director: Jimmy Yuan. Executive producer: Michelle Xie. Music: Thom Kellar & Damien Lane / Beautiful Noise. Client: …
- History Channel: Mankind (00:30). Thanks to Mauro Zinni, in 2012 the History Channel gave us the privilege of developing one of their most important animated pieces. A 30-second ident which takes us through the history of humanity, a pretty…
- (01:25) A stop-motion chalk drawing for twixl media.
- (02:03) Path of Beauty, director's cut version. A woman walks in the Musée du Louvre, alone. The museum is completely empty. We follow this young woman on her dreamlike journey through…
The SOLID Principles are five principles of object-oriented class design. They are a set of rules and best practices to follow while designing a class structure. These five principles help us understand the need for certain design patterns and software architecture in general. So I believe that it is a topic that every developer should learn.

This article will teach you everything you need to know to apply SOLID principles to your projects. We will start by taking a look into the history of this term. Then we are going to get into the nitty-gritty details – the why's and how's of each principle – by creating a class design and improving it step by step.

So grab a cup of coffee or tea and let's jump right in!

Background

The SOLID principles were first introduced by the famous computer scientist Robert C. Martin (a.k.a. Uncle Bob) in his paper in 2000, but the SOLID acronym was introduced later by Michael Feathers.

Uncle Bob is also the author of the bestselling books Clean Code and Clean Architecture, and is one of the participants of the Agile Alliance. Therefore, it is not a surprise that all these concepts of clean coding, object-oriented architecture, and design patterns are somehow connected and complementary to each other. They all serve the same purpose: "To create understandable, readable, and testable code that many developers can collaboratively work on."

Let's look at each principle one by one. Following the SOLID acronym, they are:

- The Single Responsibility Principle
- The Open-Closed Principle
- The Liskov Substitution Principle
- The Interface Segregation Principle
- The Dependency Inversion Principle

The Single Responsibility Principle

The Single Responsibility Principle states that a class should do one thing, and therefore it should have only a single reason to change.

To state this principle more technically: only one potential change (database logic, logging logic, and so on) in the software's specification should be able to affect the specification of the class.

This means that if a class is a data container, like a Book class or a Student class, and it has some fields regarding that entity, it should change only when we change the data model.

Following the Single Responsibility Principle is important. First of all, because many different teams can work on the same project and edit the same class for different reasons, violating it can lead to incompatible modules.

Second, it makes version control easier. For example, say we have a persistence class that handles database operations, and we see a change in that file in the GitHub commits. By following the SRP, we will know that it is related to storage or database-related stuff.

Merge conflicts are another example. They appear when different teams change the same file. But if the SRP is followed, fewer conflicts will appear – files will have a single reason to change, and conflicts that do exist will be easier to resolve.

Common Pitfalls and Anti-patterns

In this section we will look at some common mistakes that violate the Single Responsibility Principle. Then we will talk about some ways to fix them.

We will look at the code for a simple bookstore invoice program as an example. Let's start by defining a book class to use in our invoice.

    class Book {
        String name;
        String authorName;
        int year;
        int price;
        String isbn;

        public Book(String name, String authorName, int year, int price, String isbn) {
            this.name = name;
            this.authorName = authorName;
            this.year = year;
            this.price = price;
            this.isbn = isbn;
        }
    }

This is a simple book class with some fields. Nothing fancy. I am not making the fields private so that we don't need to deal with getters and setters and can focus on the logic instead.

Now let's create the invoice class which will contain the logic for creating the invoice and calculating the total price. For now, assume that our bookstore only sells books and nothing else.
    public class Invoice {

        private Book book;
        private int quantity;
        private double discountRate;
        private double taxRate;
        private double total;

        public Invoice(Book book, int quantity, double discountRate, double taxRate) {
            this.book = book;
            this.quantity = quantity;
            this.discountRate = discountRate;
            this.taxRate = taxRate;
            this.total = this.calculateTotal();
        }

        public double calculateTotal() {
            double price = ((book.price - book.price * discountRate) * this.quantity);
            double priceWithTaxes = price * (1 + taxRate);
            return priceWithTaxes;
        }

        public void printInvoice() {
            System.out.println(quantity + "x " + book.name + " " + book.price + "$");
            System.out.println("Discount Rate: " + discountRate);
            System.out.println("Tax Rate: " + taxRate);
            System.out.println("Total: " + total);
        }

        public void saveToFile(String filename) {
            // Creates a file with given name and writes the invoice
        }
    }

Here is our invoice class. It also contains some fields about invoicing and 3 methods:

- calculateTotal method, which calculates the total price,
- printInvoice method, which should print the invoice to the console, and
- saveToFile method, responsible for writing the invoice to a file.

You should give yourself a second to think about what is wrong with this class design before reading the next paragraph.

Ok so what's going on here? Our class violates the Single Responsibility Principle in multiple ways.

The first violation is the printInvoice method, which contains our printing logic. The SRP states that our class should only have a single reason to change, and that reason should be a change in the invoice calculation for our class. But in this architecture, if we wanted to change the printing format, we would need to change the class. This is why we should not have printing logic mixed with business logic in the same class.

There is another method that violates the SRP in our class: the saveToFile method. It is also an extremely common mistake to mix persistence logic with business logic.
Don't just think in terms of writing to a file – it could be saving to a database, making an API call, or other stuff related to persistence.

So how can we fix this print function, you may ask. We can create new classes for our printing and persistence logic, so we will no longer need to modify the invoice class for those purposes.

We create 2 classes, InvoicePrinter and InvoicePersistence, and move the methods.

    public class InvoicePrinter {

        private Invoice invoice;

        public InvoicePrinter(Invoice invoice) {
            this.invoice = invoice;
        }

        public void print() {
            System.out.println(invoice.quantity + "x " + invoice.book.name + " " + invoice.book.price + " $");
            System.out.println("Discount Rate: " + invoice.discountRate);
            System.out.println("Tax Rate: " + invoice.taxRate);
            System.out.println("Total: " + invoice.total + " $");
        }
    }

    public class InvoicePersistence {

        Invoice invoice;

        public InvoicePersistence(Invoice invoice) {
            this.invoice = invoice;
        }

        public void saveToFile(String filename) {
            // Creates a file with given name and writes the invoice
        }
    }

Now our class structure obeys the Single Responsibility Principle and every class is responsible for one aspect of our application. Great!

Open-Closed Principle

The Open-Closed Principle requires that classes should be open for extension and closed to modification.

Modification means changing the code of an existing class, and extension means adding new functionality.

So what this principle wants to say is: we should be able to add new functionality without touching the existing code of the class. This is because whenever we modify existing code, we risk creating potential bugs. So we should avoid touching the tested and (mostly) reliable production code if possible.

But how are we going to add new functionality without touching the class, you may ask. It is usually done with the help of interfaces and abstract classes.
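Before the article's invoice refactoring, the "open for extension, closed to modification" idea can be shown with a tiny generic sketch. The Exporter/ReportService names here are invented for illustration and are not from the article: a new format is added by writing a new class, while the consuming class is never edited.

```java
import java.util.List;

// Hypothetical OCP sketch: TsvExporter is added later as an *extension*;
// ReportService itself never needs to be modified.
public class OcpSketch {
    public static void main(String[] args) {
        ReportService csv = new ReportService(new CsvExporter());
        ReportService tsv = new ReportService(new TsvExporter());
        System.out.println(csv.run(List.of("a", "b")));   // prints a,b
        System.out.println(tsv.run(List.of("a", "b")));
    }
}

interface Exporter {
    String export(List<String> rows);
}

class CsvExporter implements Exporter {
    public String export(List<String> rows) { return String.join(",", rows); }
}

// Added later: an extension, not a modification of existing code.
class TsvExporter implements Exporter {
    public String export(List<String> rows) { return String.join("\t", rows); }
}

class ReportService {
    private final Exporter exporter;   // depends only on the abstraction

    ReportService(Exporter exporter) { this.exporter = exporter; }

    String run(List<String> rows) { return exporter.export(rows); }
}
```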
Now that we have covered the basics of the principle, let's apply it to our Invoice application.

Let's say our boss came to us and said that they want invoices to be saved to a database so that we can search them easily. We think okay, this is easy peasy boss, just give me a second! We create the database, connect to it, and we add a save method to our InvoicePersistence class:

    public class InvoicePersistence {

        Invoice invoice;

        public InvoicePersistence(Invoice invoice) {
            this.invoice = invoice;
        }

        public void saveToFile(String filename) {
            // Creates a file with given name and writes the invoice
        }

        public void saveToDatabase() {
            // Saves the invoice to database
        }
    }

Unfortunately we, as the lazy developer for the book store, did not design the classes to be easily extendable in the future. So in order to add this feature, we have modified the InvoicePersistence class. If our class design obeyed the Open-Closed Principle, we would not need to change this class.

So, as the lazy but clever developer for the book store, we see the design problem and decide to refactor the code to obey the principle.

    interface InvoicePersistence {

        public void save(Invoice invoice);
    }

We change the type of InvoicePersistence to an interface and add a save method. Each persistence class will implement this save method.

    public class DatabasePersistence implements InvoicePersistence {

        @Override
        public void save(Invoice invoice) {
            // Save to DB
        }
    }

    public class FilePersistence implements InvoicePersistence {

        @Override
        public void save(Invoice invoice) {
            // Save to file
        }
    }

So our class structure now looks like this:

Now our persistence logic is easily extendable. If our boss asks us to add another database and have 2 different types of databases like MySQL and MongoDB, we can easily do that.

You may think that we could just create multiple classes without an interface and add a save method to all of them.
But let's say that we extend our app and have multiple persistence classes like InvoicePersistence and BookPersistence, and we create a PersistenceManager class that manages all the persistence classes:

    public class PersistenceManager {

        InvoicePersistence invoicePersistence;
        BookPersistence bookPersistence;

        public PersistenceManager(InvoicePersistence invoicePersistence,
                                  BookPersistence bookPersistence) {
            this.invoicePersistence = invoicePersistence;
            this.bookPersistence = bookPersistence;
        }
    }

We can now pass any class that implements the InvoicePersistence interface to this class with the help of polymorphism. This is the flexibility that interfaces provide.

Liskov Substitution Principle

The Liskov Substitution Principle states that subclasses should be substitutable for their base classes.

This means that, given that class B is a subclass of class A, we should be able to pass an object of class B to any method that expects an object of class A, and the method should not give any weird output in that case.

This is the expected behavior, because when we use inheritance we assume that the child class inherits everything that the superclass has. The child class extends the behavior but never narrows it down.

Therefore, when a class does not obey this principle, it leads to some nasty bugs that are hard to detect.

Liskov's principle is easy to understand but hard to detect in code. So let's look at an example.

    class Rectangle {

        protected int width, height;

        public Rectangle() {
        }

        public Rectangle(int width, int height) {
            this.width = width;
            this.height = height;
        }

        public int getWidth() {
            return width;
        }

        public void setWidth(int width) {
            this.width = width;
        }

        public int getHeight() {
            return height;
        }

        public void setHeight(int height) {
            this.height = height;
        }

        public int getArea() {
            return width * height;
        }
    }

We have a simple Rectangle class, and a getArea function which returns the area of the rectangle.

Now we decide to create another class for squares.
As you might know, a square is just a special type of rectangle where the width is equal to the height.

    class Square extends Rectangle {

        public Square() {}

        public Square(int size) {
            width = height = size;
        }

        @Override
        public void setWidth(int width) {
            super.setWidth(width);
            super.setHeight(width);
        }

        @Override
        public void setHeight(int height) {
            super.setHeight(height);
            super.setWidth(height);
        }
    }

Our Square class extends the Rectangle class. We set height and width to the same value in the constructor, but we do not want any client (someone who uses our class in their code) to change the height or width in a way that can violate the square property. Therefore we override the setters to set both properties whenever one of them is changed. But by doing that we have just violated the Liskov Substitution Principle.

Let's create a main class to perform tests on the getArea function.

    class Test {

        static void getAreaTest(Rectangle r) {
            int width = r.getWidth();
            r.setHeight(10);
            System.out.println("Expected area of " + (width * 10) + ", got " + r.getArea());
        }

        public static void main(String[] args) {
            Rectangle rc = new Rectangle(2, 3);
            getAreaTest(rc);

            Rectangle sq = new Square();
            sq.setWidth(5);
            getAreaTest(sq);
        }
    }

Your team's tester just came up with the testing function getAreaTest and tells you that your getArea function fails to pass the test for square objects.

In the first test, we create a rectangle where the width is 2 and the height is 3 and call getAreaTest. The output is 20 as expected, but things go wrong when we pass in the square. This is because the call to the setHeight function in the test sets the width as well, which results in an unexpected output.

Interface Segregation Principle

Segregation means keeping things separated, and the Interface Segregation Principle is about separating interfaces.

The principle states that many client-specific interfaces are better than one general-purpose interface.
Clients should not be forced to implement a function they do not need.

This is a simple principle to understand and apply, so let's see an example.

    public interface ParkingLot {

        void parkCar();                // Decrease empty spot count by 1
        void unparkCar();              // Increase empty spots by 1
        void getCapacity();            // Returns car capacity
        double calculateFee(Car car);  // Returns the price based on number of hours
        void doPayment(Car car);
    }

    class Car {
    }

We modeled a very simplified parking lot. It is the type of parking lot where you pay an hourly fee. Now consider that we want to implement a parking lot that is free.

    public class FreeParking implements ParkingLot {

        @Override
        public void parkCar() {
        }

        @Override
        public void unparkCar() {
        }

        @Override
        public void getCapacity() {
        }

        @Override
        public double calculateFee(Car car) {
            return 0;
        }

        @Override
        public void doPayment(Car car) {
            // unchecked exception, so no throws clause is needed
            throw new RuntimeException("Parking lot is free");
        }
    }

Our parking lot interface was composed of 2 things: parking-related logic (park car, unpark car, get capacity) and payment-related logic. But it is too specific. Because of that, our FreeParking class was forced to implement payment-related methods that are irrelevant to it. Let's separate or segregate the interfaces.

We've now separated the parking lot. With this new model, we can even go further and split the PaidParkingLot to support different types of payment.

Now our model is much more flexible and extendable, and the clients do not need to implement any irrelevant logic because we provide only parking-related functionality in the parking lot interface.

Dependency Inversion Principle

The Dependency Inversion Principle states that our classes should depend upon interfaces or abstract classes instead of concrete classes and functions.

In his article (2000), Uncle Bob summarizes this principle as follows: "If the OCP states the goal of OO architecture, the DIP states the primary mechanism."
These two principles are indeed related, and we have applied this pattern before while discussing the Open-Closed Principle.

We want our classes to be open to extension, so we have reorganized our dependencies to depend on interfaces instead of concrete classes. Our PersistenceManager class depends on InvoicePersistence instead of the classes that implement that interface.

Conclusion

In this article, we started with the history of the SOLID principles, and then we tried to acquire a clear understanding of the why's and how's of each principle. We even refactored a simple Invoice application to obey the SOLID principles.

I want to thank you for taking the time to read the whole article, and I hope that the above concepts are clear.

I suggest keeping these principles in mind while designing, writing, and refactoring your code so that your code will be much cleaner, more extendable, and more testable.

If you are interested in reading more articles like this, you can subscribe to my blog's mailing list to get notified when I publish a new article.
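As a closing sketch, the Dependency Inversion wiring described above can be written out end-to-end using the article's class names. The save signature is simplified here (it takes an id string and returns a tag) so the behavior can be observed without a real database or file system; treat the bodies as stand-ins.

```java
// High-level policy depends on the InvoicePersistence abstraction;
// swapping the low-level detail requires no change to PersistenceManager.
public class DipSketch {
    public static void main(String[] args) {
        System.out.println(new PersistenceManager(new DatabasePersistence()).saveInvoice("42"));
        System.out.println(new PersistenceManager(new FilePersistence()).saveInvoice("42"));
    }
}

interface InvoicePersistence {          // the abstraction both layers depend on
    String save(String invoiceId);
}

class DatabasePersistence implements InvoicePersistence {
    public String save(String invoiceId) { return "db:" + invoiceId; }
}

class FilePersistence implements InvoicePersistence {
    public String save(String invoiceId) { return "file:" + invoiceId; }
}

// Depends on the interface, never on a concrete class.
class PersistenceManager {
    private final InvoicePersistence persistence;

    PersistenceManager(InvoicePersistence persistence) { this.persistence = persistence; }

    String saveInvoice(String id) { return persistence.save(id); }
}
```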
How to Think Like a Computer Scientist: Learning with Python 2nd Edition/Strings

Contents

Strings

7.1 A compound data type

7.2 Length

7.3 Traversal and the for loop

A lot of computations involve processing a string one character at a time. Often they start at the beginning, select each character in turn, do something to it, and continue until the end. This pattern of processing is called a traversal. One way to encode a traversal is with a while statement:

    index = 0
    while index < len(fruit):
        letter = fruit[index]
        print letter
        index += 1

    # fruit = "banana"
    # The loop runs while index is less than 6 (6 is the length of fruit).
    # letter = fruit[index]: since index = 0, letter is "b" on the first pass.
    # letter is printed, then 1 is added to index,
    # and the loop continues until index reaches 6.

Using an index to traverse a set of values is so common that Python provides an alternative, simpler syntax – the for loop:

    for letter in fruit:
        print letter

Each time through the loop, the next character in the string is assigned to the variable letter. The loop continues until no characters are left.

The following example shows how to use concatenation and a for loop to generate an abecedarian series. Abecedarian refers to a series or list in which the elements appear in alphabetical order. For example, the output of this program:

    prefixes = "JKLMNOPQ"
    suffix = "ack"

    for letter in prefixes:
        print letter + suffix

is:

    Jack
    Kack
    Lack
    Mack
    Nack
    Oack
    Pack
    Qack

Of course, that's not quite right because Ouack and Quack are misspelled. You'll fix this as an exercise below.

7.4 String slices

String comparison

The comparison operators work on strings. To see if two strings are equal:

Other comparison operations are useful for putting words in lexicographical order. This is similar to the alphabetical order you would use with a dictionary, except that all the uppercase letters come before all the lowercase letters.
As a result:

A common way to address this problem is to convert strings to a standard format, such as all lowercase, before performing the comparison. A more difficult problem is making the program realize that zebras are not fruit.

Strings are immutable

It is tempting to use the [] operator on the left side of an assignment, with the intention of changing a character in a string. For example:

The in operator

The in operator tests if one string is a substring of another:

Note that a string is a substring of itself:

Combining the in operator with string concatenation using +, we can write a function that removes all the vowels from a string:

Test this function to confirm that it does what we wanted it to do.

7.8 A find function

What does the following function do?

    def find(strng, ch):
        index = 0
        while index < len(strng):
            if strng[index] == ch:
                return index
            index += 1
        return -1

    # Assume strng is "banana" and ch is "a".
    # The lines
    #     if strng[index] == ch:
    #         return index
    # check whether strng[index] equals "a".
    # On the first pass, index is 0 and strng[0] is "b" (not "a"),
    # so 1 is added to index.
    # On the second pass, index is 1, and strng[1] is "a",
    # so the loop is broken out of, and 1 is returned.
    # If ch cannot be found in strng, -1 is returned.
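One way to answer the "what does this function do?" question above is to run it: find returns the index of the first occurrence of ch in strng, or -1 if ch does not occur. The function is reproduced here (in modern print-free form) so the snippet runs on its own:

```python
def find(strng, ch):
    """Index of the first occurrence of ch in strng, or -1 (as in the text)."""
    index = 0
    while index < len(strng):
        if strng[index] == ch:
            return index
        index += 1
    return -1

# The traced case from the comments: "a" is first found at index 1.
assert find("banana", "a") == 1
# The very first character matches immediately:
assert find("banana", "b") == 0
# No match at all returns -1:
assert find("banana", "z") == -1
```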
Looping and counting

The following program counts the number of times the letter a appears in a string, and is another example of the counter pattern introduced in :ref:`counting`:

Optional parameters

To find the locations of the second or third occurrence of a character in a string, we can modify the find function, adding a third parameter for the starting position in the search string:

    def find2(strng, ch, start):
        index = start
        while index < len(strng):
            if strng[index] == ch:
                return index
            index += 1
        return -1

The call find2('banana', 'a', 2) now returns 3, the index of the first occurrence of 'a' in 'banana' at or after position 2.

    # index starts at the value of start.
    # While index is less than the length of the string:
    #     if strng[index] equals ch, return index, the location of ch in strng
    #     (note that return breaks out of the loop);
    #     else add 1 to index and continue until index equals the length of the string.
    # If no match is found, -1 is returned.

Existing calls to find will have to accommodate this change.

The string module

The string module contains useful functions that manipulate strings. As usual, we have to import the module before we can use it:

To see what is inside it, use the dir function with the module name as an argument.

Since string.digits is a string, we can print it to see what it contains:

Not surprisingly, it contains each of the decimal digits.

string.find is a function which does much the same thing as the function we wrote. To find out more about it, we can print out its docstring, __doc__, which contains documentation on the function:

The parameters in square brackets are optional parameters. We can use string.find much as we did our own find:

Like ours, it takes an additional argument that specifies the index at which it should start:

Unlike ours, its second optional parameter specifies the index at which the search should end:

In this example, the search fails because the letter b does not appear in the index range from 1 to 2 (not including 2).
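The behavior described above survives in modern Python as the str.find method (the old string-module functions are gone), so the optional start and end parameters can be checked directly:

```python
s = "banana"

# Like the book's find2, str.find takes an optional start index:
assert s.find("a", 2) == 3

# Unlike find2, it also takes an optional end index (exclusive),
# mirroring the string.find example in the text:
assert s.find("b", 1, 2) == -1   # "b" is not in s[1:2]

# With no extra arguments it searches the whole string:
assert s.find("n") == 2
```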
Character classification

We can use these constants and find to classify characters. For example, if find(lowercase, ch) returns a value other than -1, then ch must be lowercase:

Alternatively, we can take advantage of the in operator:

As yet another alternative, we can use the comparison operator: if ch is between a and z, it must be a lowercase letter.

Another constant defined in the string module may surprise you when you print it:

String formatting

The most concise and powerful way to format a string in Python is to use the string formatting operator, %, together with Python's string formatting operations. To see how this works, let's start with a few examples:

The syntax for the string formatting operation begins with a format string, which contains a sequence of characters and conversion specifications. Conversion specifications start with a % operator. Following the format string is a single % and then a sequence of values, one per conversion specification, separated by commas and enclosed in parentheses. The second example has two conversion specifications, %s and %d. The d in the second conversion specification indicates that the value is a decimal integer.

In the third example, variables n1 and n2 have integer values 4 and 5 respectively. There are four conversion specifications in the format string: three %d's and a %f. The f indicates that the value should be represented as a floating point number. The four values map to the four conversion specifications in order:

This program prints out a table of various powers of the numbers from 1 to 10. In its current form it relies on the tab character (\t) to align the columns of values, but this breaks down when the values in the table get larger than the 8-character tab width:

One possible solution would be to change the tab width, but the first column already has more space than it needs. The best solution would be to set the width of each column independently.
As you may have guessed by now, string formatting provides the solution:

Running this version produces the following output:

The - after each % in the conversion specifications indicates left justification. The numerical values specify the minimum field width, so %-13d is a left-justified number at least 13 characters wide.

Summary and First Exercises

This chapter introduced a lot of new ideas. The following summary and set of exercises may prove helpful in remembering what you learned.

Glossary

Exercises

Question 1

Modify:

    prefixes = "JKLMNOPQ"
    suffix = "ack"

    for letter in prefixes:
        print letter + suffix

so that Ouack and Quack are spelled correctly.

Question 2

Encapsulate:

    fruit = "banana"
    count = 0
    for char in fruit:
        if char == 'a':
            count += 1
    print count

in a function named count_letters, and generalize it so that it accepts the string and the letter as arguments.

Question 3

Now rewrite the count_letters function so that instead of traversing the string, it repeatedly calls find (the version from Optional parameters), with the optional third parameter to locate new occurrences of the letter being counted.

Question 4

Which version of is_lower do you think will be fastest? Can you think of other reasons besides speed to prefer one version or the other?

Question 5

Create a file named stringtools.py and put the following in it:

Add a function body to reverse to make the doctests pass.

Add mirror to stringtools.py. Write a function body for it that will make it work as indicated by the doctests.

Include remove_letter in stringtools.py. Write a function body for it that will make it work as indicated by the doctests.

Finally, add bodies to each of the following functions, one at a time, until all the doctests pass.
Try each of the following formatted string operations in a Python shell and record the results:

- "%s %d %f" % (5, 5, 5)
- "%-.2f" % 3
- "%-10.2f%-10.2f" % (7, 1.0/2)
- print " $%5.2f\n $%5.2f\n $%5.2f" % (3, 4.5, 11.2)

The following formatted strings have errors. Fix them:

- "%s %s %s %s" % ('this', 'that', 'something')
- "%s %s %s" % ('yes', 'no', 'up', 'down')
- "%d %f %f" % (3, 3, 'three')
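As a worked example of the conversion specifications discussed in this chapter, here is a small powers table aligned with explicit field widths instead of tabs. The exact layout of the book's table is assumed, since its code was lost in this copy; the % operator works the same way in modern Python.

```python
# A powers table in the spirit of the chapter's example: each column gets
# its own minimum width, so alignment survives multi-digit values.
def power_table_rows(max_base=4):
    rows = []
    for base in range(1, max_base + 1):
        # %-4d: left-justified decimal integer, at least 4 characters wide
        rows.append("%-4d%-6d%-6d" % (base, base ** 2, base ** 3))
    return rows

for row in power_table_rows():
    print(row)

# Left justification pads on the right up to the minimum field width:
assert "%-13d" % 5 == "5" + " " * 12
assert power_table_rows()[1] == "2   4     8     "
```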
On Thu, Mar 30, 2006 at 05:05:30PM +0200, Tomasz Zielonka wrote:
> Actually, it may require no effort from compiler implementors.
> I just managed to get the desired effect in current GHC! :-)

More specifically: in uniprocessor GHC 6.4.1.

>.

I just realised that this technique will only work on uniprocessors! :-(
It relies on only one thread running at any moment. If there are multiple
CPUs, yielding won't stop the current thread from consuming the list.

> The code isn't as beautiful as the naive wc implementation. That's
> because I haven't yet thought how to hide newEmptyMVar, forkIO, putMVar
> and takeMVar. Perhaps someone will come up with a solution to this.

Here is my attempt to make the code more pure. The "concurrently"
combinator uses CPS, because otherwise it was a bit difficult to split
evaluation into two phases: first forking the thread, second taking the
result from an MVar. I also tried using an additional data constructor
wrapper for the result, so that the first phase occurred when forcing the
constructor and the second when forcing its parameter, but it was tricky
to use properly considering that "let" and "where" bindings use
irrefutable patterns.

    import Control.Concurrent
    import Control.Monad
    import System.IO.Unsafe

    stepper :: Int -> [a] -> [a]
    stepper n l = s n l
      where
        s 0 (x:xs) = unsafePerformIO $ do
            yield
            return (x : s n xs)
        s i (x:xs) = x : s (i-1) xs
        s _ []     = []

    concurrently :: a -> (a -> b) -> b
    concurrently e f = unsafePerformIO $ do
        var <- newEmptyMVar
        forkIO $ putMVar var $! e
        return (f (unsafePerformIO (takeMVar var)))

    wc :: String -> (Int, Int, Int)
    wc cs0 =
        let cs = stepper 500 cs0 in
        concurrently (length (lines cs)) $ \ll ->
        concurrently (length (words cs)) $ \ww ->
        concurrently (length cs)         $ \cc ->
        (ll, ww, cc)

    main = do
        cs <- getContents
        print (wc cs)

It's probably worth noting that (in this case) when I remove "yield", so
I only use concurrency with no stepper, the space leak is also reduced,
but not completely.

Best regards
Tomasz
) - $dt->epoch() - $dt->hires_epoch() - $dt->is_finite(), $dt->is_infinite() - $dt->utc_rd_values() - $dt->local_rd_values() - $dt->leap_seconds() - $dt->utc_rd_as_seconds() - $dt->locale() - $dt->formatter() - "Set" Methods - Math Methods - $dt->duration_class() - $dt->add_duration( $duration_object ) - $dt->add( parameters for DateTime::Duration ) - $dt->add( $duration_object ) - $dt->subtract_duration( $duration_object ) - $dt->subtract( DateTime::Duration->new parameters ) - $dt->subtract( $duration_object ) - $dt->subtract_datetime( $datetime ) - $dt->delta_md( $datetime ) - $dt->delta_days( $datetime ) - $dt->delta_ms( $datetime ) - $dt->subtract_datetime_absolute( $datetime ) - Class Methods - Testing Code That Uses DateTime - How DateTime Math Works - Overloading - Formatters And Stringification - CLDR Patterns - strftime Patterns - DateTime.pm and Storable - THE DATETIME PROJECT ECOSYSTEM - KNOWN BUGS - SEE ALSO - SUPPORT - SOURCE - DONATIONS - AUTHOR - CONTRIBUTORS NAME DateTime - A date and time object for Perl VERSION version 1.43 ); All the object methods which return names or abbreviations return data based on a locale. This is done by setting the locale when constructing a DateTime object. If this is not set, then "en-US" is used.. Math If you are going to be doing date math, please read the section "How DateTime Math Works". Determining the Local Time Zone Can Be Slow If $ENV{TZ} is not set, it may involve reading a number of files. Globally Setting a Default Time Zone. Upper and Lower Bounds). METHODS DateTime provide many methods. The documentation breaks them down into groups based on what they do (constructor, accessors, modifiers, etc.). Constructors All constructors can die when invalid parameters are given. really slow (minutes). 
Warnings

All warnings from DateTime use the DateTime category and can be suppressed with:

    no warnings 'DateTime';

This warning may be removed in the future if DateTime::TimeZone is made much faster.

DateTime->new( ... )

The parameters to new() include:

month
An integer from 1-12.

day
An integer from 1-31, and it must be within the valid range of days for the specified month.

hour
An integer from 0-23.

minute
An integer from 0-59.

second
An integer from 0-61 (to allow for leap seconds). Values of 60 or 61 are only allowed when they match actual leap seconds.

nanosecond
An integer >= 0. If this number is greater than 1 billion, it will be normalized into the second value for the DateTime object.

locale
A locale code like "en-US" or an object returned by DateTime::Locale->load. See the DateTime::Locale documentation for details.

time_zone
The "time_zone" parameter can be either a string or a DateTime::TimeZone object.

Parsing Dates

DateTime->from_epoch( epoch => $epoch, ... )

DateTime->now( ... )
This class method is equivalent to calling from_epoch() with the value returned from Perl's time() function. Just as with the new() method, it accepts "time_zone" and "locale" parameters. By default, the returned object will be in the UTC time zone.

DateTime->today( ... )
This class method is equivalent to:

    DateTime->now(@_)->truncate( to => 'day' );

DateTime->from_object( object => $object, ... )

DateTime->last_day_of_month( ... )
This constructor takes the same arguments as can be given to the new() method, except for "day". Additionally, both "year" and "month" are required.

DateTime->from_day_of_year( ... )

$dt->clone()
This object method returns a new object that is a replica of the object upon which the method is called.

"Get" Methods

This class has many methods for retrieving information about an object.

$dt->year()
Returns the year.

$dt->ce_year()
Returns the year according to the BCE/CE numbering system. The year before year 1 in this system is year -1, aka "1 BCE".

$dt->era_name()
Returns the long name of the current era, something like "Before Christ". See the Locales section for more details.
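A minimal Perl sketch of the constructors described above in use (the dates chosen here are only illustrative):

```perl
use strict;
use warnings;
use DateTime;

# Construct a datetime from explicit components (the new() parameters above).
my $dt = DateTime->new(
    year      => 2017,
    month     => 6,
    day       => 30,
    hour      => 12,
    minute    => 30,
    time_zone => 'UTC',
);

# now() and today() (now() truncated to the day).
my $now   = DateTime->now( time_zone => 'UTC' );
my $today = DateTime->today();

# Construct from a Unix epoch value.
my $epoch_dt = DateTime->from_epoch( epoch => 0 );
print $epoch_dt->year, "\n";    # 1970
```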
$dt->era_abbr()
Returns the abbreviated name of the current era, something like "BC". See the Locales section for more details.

$dt->christian_era()
Returns a string, either "BC" or "AD", according to the year.

$dt->secular_era()
Returns a string, either "BCE" or "CE", according to the year.

$dt->year_with_era()
Returns a string containing the year immediately followed by its era abbreviation. The year is the absolute value of ce_year(), so that year 1 is "1AD" and year 0 is "1BC".

$dt->year_with_christian_era()
Like year_with_era(), but uses the christian_era() method to get the era name.

$dt->year_with_secular_era()
Like year_with_era(), but uses the secular_era() method to get the era name.

$dt->month()
Returns the month of the year, from 1..12. Also available as $dt->mon().

$dt->month_name()
Returns the name of the current month. See the Locales section for more details.

$dt->month_abbr()
Returns the abbreviated name of the current month. See the Locales section for more details.

$dt->day()
Returns the day of the month, from 1..31. Also available as $dt->mday() and $dt->day_of_month().

$dt->day_of_week()
Returns the day of the week as a number, from 1..7, with 1 being Monday and 7 being Sunday. Also available as $dt->wday() and $dt->dow().

$dt->local_day_of_week()
Returns the day of the week as a number, from 1..7. The day corresponding to 1 will vary based on the locale.

$dt->day_name()
Returns the name of the current day of the week. See the Locales section for more details.

$dt->day_abbr()
Returns the abbreviated name of the current day of the week. See the Locales section for more details.

$dt->day_of_year()
Returns the day of the year. Also available as $dt->doy().

$dt->quarter()
Returns the quarter of the year, from 1..4.

$dt->quarter_name()
Returns the name of the current quarter. See the Locales section for more details.

$dt->quarter_abbr()
Returns the abbreviated name of the current quarter. See the Locales section for more details.
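A short sketch exercising several of the "get" methods above, on a date the documentation itself uses as an example (June 9, 2003, the second Monday of that month):

```perl
use strict;
use warnings;
use DateTime;

my $dt = DateTime->new( year => 2003, month => 6, day => 9 );

print $dt->year,        "\n";   # 2003
print $dt->month,       "\n";   # 6
print $dt->month_name,  "\n";   # June (in the default en-US locale)
print $dt->day_of_week, "\n";   # 1 (a Monday)
```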
$dt->day_of_quarter()
Returns the day of the quarter. Also available as $dt->doq().

$dt->weekday_of_month()
Returns a number from 1..5 indicating which week day of the month this is. For example, June 9, 2003 is the second Monday of the month, and so this method returns 2 for that day.

$dt->ymd( $optional_separator ), $dt->mdy(...), $dt->dmy(...)
The $dt->ymd() method is also available as $dt->date().

$dt->hour()
Returns the hour of the day, from 0..23.

$dt->hour_1()
Returns the hour of the day, from 1..24.

$dt->hour_12()
Returns the hour of the day, from 1..12.

$dt->hour_12_0()
Returns the hour of the day, from 0..11.

$dt->am_or_pm()
Returns the appropriate localized abbreviation, depending on the current hour.

$dt->minute()
Returns the minute of the hour, from 0..59. Also available as $dt->min().

$dt->second()
Returns the second, from 0..61. The values 60 and 61 are used for leap seconds. Also available as $dt->sec().

$dt->fractional_second()
Returns the second, as a real number from 0.0 until 61.999999999. The values 60 and 61 are used for leap seconds.

$dt->millisecond()
Returns the fractional part of the second as milliseconds (1E-3 seconds). Half a second is 500 milliseconds. This value will always be rounded down to the nearest integer.

$dt->microsecond()
Returns the fractional part of the second as microseconds (1E-6 seconds). Half a second is 500_000 microseconds. This value will always be rounded down to the nearest integer.

$dt->nanosecond()
Returns the fractional part of the second as nanoseconds (1E-9 seconds). Half a second is 500_000_000 nanoseconds.

$dt->hms( $optional_separator )
Returns the hour, minute, and second, all zero-padded to two digits. If no separator is specified, a colon (:) is used by default. Also available as $dt->time().

$dt->datetime( $optional_separator )
This method is equivalent to:

    $dt->ymd('-') . 'T' . $dt->hms(':')

The $optional_separator parameter allows you to override the separator between the date and time, for example $dt->datetime(q{ }).
This method is also available as $dt->iso8601(), but it's not really a very good ISO8601 format, as it lacks a time zone. If called as $dt->iso8601() you cannot change the separator, as ISO8601 specifies that "T" must be used to separate them.

$dt->is_leap_year()
This method returns a true or false indicating whether or not the datetime object is in a leap year.

$dt->week_year()
Returns the year of the week. See $dt->week() for details.

$dt->week_number()
Returns the week of the year, from 1..53. See $dt->week() for details.

$dt->jd(), $dt->mjd()
These return the Julian Day and Modified Julian Day, respectively. The value returned is a floating point number. The fractional portion of the number represents the time portion of the datetime.

$dt->time_zone()
This returns the DateTime::TimeZone object for the datetime object.

$dt->offset()
This returns the offset from UTC, in seconds, of the datetime object according to the time zone.

$dt->is_dst()
Returns a boolean indicating whether or not the datetime object is currently in Daylight Saving Time.

$dt->time_zone_long_name()
This is a shortcut for $dt->time_zone->name. It's provided so that one can use "%{time_zone_long_name}" as a strftime format specifier.

$dt->time_zone_short_name()
This method returns the time zone abbreviation for the current time zone, such as "PST" or "GMT". These names are not definitive, and should not be used in any application intended for general use by users around the world.

$dt->strftime( $format, ... )

$dt->format_cldr( $format, ... )

$dt->hires_epoch()
Returns the epoch as a floating point number. The floating point portion of the value represents the nanosecond value of the object. This method is provided for compatibility, but note that floating point numbers cannot represent nanosecond precision exactly: adding 0.000000004 to 1325376000 returns 1325376000.

$dt->is_finite(), $dt->is_infinite()
These methods allow you to distinguish normal datetime objects from infinite ones.
Infinite datetime objects are documented in DateTime::Infinite.

$dt->utc_rd_values()
Returns the current UTC Rata Die days, seconds, and nanoseconds as a three element list. This exists primarily to allow other calendar modules to create objects based on the values provided by this object.

$dt->local_rd_values()
Returns the current local Rata Die days, seconds, and nanoseconds as a three element list. This exists for the benefit of other modules which might want to use this information for date math, such as DateTime::Event::Recurrence.

$dt->leap_seconds()
Returns the number of leap seconds that have happened up to the datetime represented by the object. For floating datetimes, this always returns 0.

$dt->utc_rd_as_seconds()
Returns the current UTC Rata Die days and seconds purely as seconds. This number ignores any fractional seconds stored in the object, as well as leap seconds.

$dt->locale()
Returns the current locale object.

$dt->formatter()
Returns the current formatter object or class. See "Formatters And Stringification" for details.

"Set" Methods

$dt->set( ... )

$dt->set_year(), $dt->set_month(), etc.
DateTime has a set_* method for every item that can be passed to the constructor:

    $dt->set_year()
    $dt->set_month()
    $dt->set_day()
    $dt->set_hour()
    $dt->set_minute()
    $dt->set_second()
    $dt->set_nanosecond()

These are shortcuts to calling set() with a single key. They all take a single parameter. In the en-US locale, the first day of the week is Sunday.

$dt->set_locale( $locale )
Sets the object's locale. You can provide either a locale code like "en-US" or an object returned by DateTime::Locale->load.

$dt->set_formatter( $formatter )
Set the formatter for the object. See "Formatters And Stringification" for details. You can set this to undef to revert to the default formatter.
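A small sketch (with illustrative values) of set() and the set_* shortcuts described above, which modify the object in place:

```perl
use strict;
use warnings;
use DateTime;

my $dt = DateTime->new( year => 2017, month => 1, day => 15, hour => 10 );

# set() takes the same keys as the constructor; set_* take one value each.
$dt->set( hour => 12, minute => 30 );
$dt->set_year(2018);

print $dt->datetime, "\n";    # 2018-01-15T12:30:00
```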
Math Methods

Like the set methods, math related methods always return the object itself, to allow for chaining:

    $dt->add( days => 1 )->subtract( seconds => 1 );

$dt->duration_class()
This returns DateTime::Duration, but exists so that a subclass of DateTime.pm can provide a different value.

$dt->add_duration( $duration_object )
This method adds a DateTime::Duration to the current datetime. See the DateTime::Duration docs for more details.

$dt->add( parameters for DateTime::Duration )
This method is syntactic sugar around the add_duration() method. It simply creates a new DateTime::Duration object using the parameters given, and then calls the add_duration() method.

$dt->add( $duration_object )
A synonym of $dt->add_duration( $duration_object ).

$dt->subtract_duration( $duration_object )
When given a DateTime::Duration object, this method simply calls invert() on that object and passes that new duration to the add_duration method.

$dt->subtract( DateTime::Duration->new parameters )
Like add(), this is syntactic sugar for the subtract_duration() method.

$dt->subtract( $duration_object )
A synonym of $dt->subtract_duration( $duration_object ).

$dt->subtract_datetime( $datetime )

$dt->delta_md( $datetime )

$dt->delta_days( $datetime )

$dt->delta_ms( $datetime )
Returns a duration which contains only minutes and seconds. Any day and month differences are converted to minutes and seconds. This method also always returns a positive (or zero) duration.

$dt->subtract_datetime_absolute( $datetime )

$dt->epoch()

Class Methods

DateTime->DefaultLocale( $locale )
This can be used to specify the default locale to be used when creating DateTime objects. If unset, then "en-US" is used.

Testing Code That Uses DateTime

How DateTime Math Works

math on non-UTC time zones
If you need to do date math on objects with non-UTC time zones, please read the caveats below carefully.
never do math on two objects where only one is in the floating time zone
The date math code accounts for leap seconds whenever the DateTime object is not in the floating time zone. If you try to do math where one object is in the floating zone and the other isn't, the results will be confusing and wrong.

See DateTime::Duration for a detailed explanation of these algorithms.

CLDR Patterns

G{1,3}
The abbreviated era (BC, AD).

GGGG
The wide era (Before Christ, Anno Domini).

GGGGG
The narrow era, if it exists (and it mostly doesn't).

y and y{3,}
The year, zero-prefixed as needed. Negative years will start with a "-", and this will be included in the length calculation. In other words, the "yyyyy" pattern will format year -1234 as "-1234", not "-01234".

yy
This is a special case. It always produces a two-digit year, so "1976" becomes "76". Negative years will start with a "-", making them one character longer.

Y{1,}
The year in "week of the year" calendars, from $dt->week_year().

u{1,}
Same as "y" except that "uu" is not a special case.

Q{1,2}
The quarter as a number (1..4).

QQQ
The abbreviated format form for the quarter.

QQQQ
The wide format form for the quarter.

q{1,2}
The quarter as a number (1..4).

qqq
The abbreviated stand-alone form for the quarter.

qqqq
The wide stand-alone form for the quarter.

M{1,2}
The numerical month.

MMM
The abbreviated format form for the month.

MMMM
The wide format form for the month.

MMMMM
The narrow format form for the month.

L{1,2}
The numerical month.

LLL
The abbreviated stand-alone form for the month.

LLLL
The wide stand-alone form for the month.

LLLLL
The narrow stand-alone form for the month.

w{1,2}
The week of the year, from $dt->week_number().

W
The week of the month, from $dt->week_of_month().

d{1,2}
The numeric day of the month.

D{1,3}
The numeric day of the year.

F
The day of the week in the month, from $dt->weekday_of_month().

g{1,}
The modified Julian day, from $dt->mjd().
E{1,3} and eee
The abbreviated format form for the day of the week.

EEEE and eeee
The wide format form for the day of the week.

EEEEE and eeeee
The narrow format form for the day of the week.

e{1,2}

c
The numeric day of the week from 1 to 7, treating Monday as the first of the week, regardless of locale.

ccc
The abbreviated stand-alone form for the day of the week.

cccc
The wide stand-alone form for the day of the week.

ccccc
The narrow format form for the day of the week.

a
The localized form of AM or PM for the time.

h{1,2}
The hour from 1-12.

H{1,2}
The hour from 0-23.

K{1,2}
The hour from 0-11.

k{1,2}
The hour from 1-24.

j{1,2}
The hour, in 12 or 24 hour form, based on the preferred form for the locale. In other words, this is equivalent to either "h{1,2}" or "H{1,2}".

m{1,2}
The minute.

s{1,2}
The second.

S{1,}
The fractional portion of the seconds, rounded based on the length of the specifier. This is returned without a leading decimal point, but may have leading or trailing zeroes.

A{1,}
The millisecond of the day, based on the current time. In other words, if it is 12:00:00.00, this returns 43200000.

z{1,3}
The time zone short name.

zzzz
The time zone long name.

Z{1,3}
The time zone offset.

ZZZZ
The time zone short name and the offset as one string, so something like "CDT-0500".

ZZZZZ
The time zone offset as a sexagesimal number, so something like "-05:00". (This is useful for W3C format.)

v{1,3}
The time zone short name.

vvvv
The time zone long name.

V{1,3}
The time zone short name.

VVVV
The time zone long name.

strftime Patterns

The following patterns are allowed in the format string given to the $dt->strftime() method:

%e
Like %d, the day of the month as a decimal number, but a leading zero is replaced by a space.

%F
Equivalent to %Y-%m-%d (the ISO 8601 date format)

A tab character.
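A minimal Perl sketch (dates and patterns chosen only for illustration) using both the strftime and CLDR pattern styles described above:

```perl
use strict;
use warnings;
use DateTime;

my $dt = DateTime->new(
    year => 2017, month => 6, day => 30,
    hour => 13, minute => 5, time_zone => 'UTC',
);

# strftime patterns: %F is %Y-%m-%d, as noted above.
print $dt->strftime('%F %H:%M'), "\n";             # 2017-06-30 13:05

# CLDR patterns use the field symbols from the table above.
print $dt->format_cldr('yyyy-MM-dd HH:mm'), "\n";  # 2017-06-30 13:05
```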
DateTime.pm and Storable

DateTime implements Storable hooks in order to reduce the size of a serialized DateTime object.

THE DATETIME PROJECT ECOSYSTEM

This module is part of a larger ecosystem of modules in the DateTime family.

DateTime::Set

The DateTime::Set module represents sets (including recurrences) of datetimes. Many modules return sets or recurrences.

Format Modules

The various format modules exist to parse and format datetimes. For example, you can use one as a formatter with a DateTime object. All format modules start with DateTime::Format::.

Calendar Modules

There are a number of modules on CPAN that implement non-Gregorian calendars, such as the Chinese, Mayan, and Julian calendars. All calendar modules start with DateTime::Calendar::.

Event Modules

There are a number of modules that calculate the dates for events, such as Easter, Sunrise, etc. All event modules start with DateTime::Event::.

Others

There are many other modules that work with DateTime, including modules in the DateTimeX namespace, as well as others. See the datetime wiki and search.cpan.org for more details.

KNOWN BUGS

SEE ALSO

A Date with Perl - a talk I've given at a few YAPCs.

datetime@perl.org mailing list

SUPPORT

CONTRIBUTORS

Ben Bennett <fiji@limey.net>
Christian Hansen <chansen@cpan.org>
Daisuke Maki <dmaki@cpan.org>
David E. Wheeler <david@justatheory.com>
David Precious <davidp@preshweb.co.uk>
Doug Bell <madcityzen@gmail.com>
Flávio Soibelmann Glock <fglock@gmail.com>
Gregory Oschwald <oschwald@gmail.com>
Hauke D <haukex@zero-g.net>
Iain Truskett <deceased>
Jason McIntosh <jmac@jmac.org>
Joshua Hoblitt <jhoblitt@cpan.org>
Karen Etheridge <ether@cpan.org>
Michael Conrad <mike@nrdvana.net>
Michael R. Davis <mrdvt92@users.noreply.github.com>
Nick Tonkin <1nickt@users.noreply.github.com>
Olaf Alders <olaf@wundersolutions.com>
Ovid <curtis_ovid_poe@yahoo.com>
Philippe Bruhat (BooK) <book@cpan.org>
Ricardo Signes <rjbs@cpan.org>
Richard Bowen <bowen@cpan.org>
Ron Hill <rkhill@cpan.org>
Sam Kington <github@illuminated.co.uk>
viviparous <viviparous@prc>

This software is Copyright (c) 2003 - 2017 by Dave Rolsky.

This is free software, licensed under:

    The Artistic License 2.0 (GPL Compatible)

The full text of the license can be found in the LICENSE file included with this distribution.
http://docs.activestate.com/activeperl/5.22/perl/lib/DateTime.html
Defining and Compiling Classes

This chapter describes the basics of defining and compiling classes with the InterSystems IRIS® data platform.

Introduction to Terminology

The following shows a simple InterSystems IRIS class definition, with some typical elements:

    Class Demo.MyClass Extends %RegisteredObject
    {

    Property Property1 As %String;

    Property Property2 As %Numeric;

    Method MyMethod() As %String
    {
        set returnvalue=..Property1_..Property2
        quit returnvalue
    }

    }

Note the following points:

The full class name is Demo.MyClass, the package name is Demo, and the short class name is MyClass.

This class extends the class %RegisteredObject. Equivalently, this class inherits from %RegisteredObject. %RegisteredObject is the superclass of this class, or this class is a subclass of %RegisteredObject. An InterSystems IRIS class can have multiple superclasses, as this chapter later discusses. The superclass(es) of a class determine how the class can be used.

This class defines two properties: Property1 and Property2. Property1 is of type %String, and Property2 is of type %Numeric.

This class defines one method: MyMethod(), which returns a value of type %String.

This class refers to several system classes provided by InterSystems IRIS. These classes are %RegisteredObject (whose full name is %Library.RegisteredObject), %String (%Library.String), and %Numeric (%Library.Numeric).

%RegisteredObject is a key class in InterSystems IRIS, because it defines the object interface. It provides the methods you use to create and work with object instances. %String and %Numeric are data type classes. As a consequence, the corresponding properties hold literal values (rather than other kinds of values).

Kinds of Classes

InterSystems IRIS provides a large set of class definitions that your classes can use in the following general ways:

You can use InterSystems IRIS classes as superclasses for your classes.
You can use InterSystems IRIS classes as values of properties, values of arguments to methods, values returned by methods, and so on.

Some InterSystems IRIS classes simply provide specific APIs. You typically do not use these classes in either of the preceding ways. Instead you write code that calls methods of the API.

The most common choices for superclasses are as follows:

%RegisteredObject — This class represents the object interface in its most generic form.

%Persistent — This class represents a persistent object. In addition to providing the object interface, this class provides methods for saving objects to the database and reading objects from the database.

%SerialObject — This class represents an object that can be embedded in (serialized within) another object.

Subclasses of any of the preceding classes.

None — It is not necessary to specify a superclass when you create a class.

The most common choices for values of properties, values of arguments to methods, values returned by methods, and so on are as follows:

Object classes (the classes contained in the previous list)

Data type classes

Collection classes

Stream classes

Later chapters of this book discuss these categories of classes.

Object Classes

The phrase object class refers to any subclass of %RegisteredObject. With an object class, you can create an instance of the class, specify properties of the instance, and invoke methods of the instance. A later chapter describes these tasks (and provides information that applies to all object classes). The generic term object refers to an instance of an object class.

There are three general categories of object classes:

Transient object classes or registered object classes are subclasses of %RegisteredObject but not of %Persistent or %SerialObject (see the following bullets). For details, see "Working with Registered Objects."

Persistent classes are subclasses of %Persistent, which is a direct subclass of %RegisteredObject.
The %Persistent class includes the behavior of %RegisteredObject and adds the ability to save objects to disk, reopen them, and so on. For details, see the chapter "Introduction to Persistent Objects" and the chapters that follow it.

Serial classes are subclasses of %SerialObject, which is a direct subclass of %RegisteredObject. The %SerialObject class includes the behavior of %RegisteredObject and adds the ability to create a string that represents the state of the object, for inclusion as a property within another object (usually either a transient object or a persistent object). The phrase serializing an object refers to the creation of this string. For details, see the chapter "Defining and Using Object-Valued Properties."

The following figure shows the inheritance relationship among these three classes. The boxes list some of the methods defined in the classes.

Collection classes and stream classes are object classes with specialized behavior.

Data Type Classes

The phrase data type class refers to any class whose ClassType keyword equals datatype or any subclass of such a class. These classes are not object classes (a data type class cannot define properties, and you cannot create an instance of the class). The purpose of a data type class (more accurately a data type generator class) is to be used as the type of a property of an object class.

Kinds of Class Members

An InterSystems IRIS class definition can include the following items, all known as class members:

Parameters — A parameter defines a constant value for use by this class. The value is set at compilation time, in most cases.

Methods — InterSystems IRIS supports two types of methods: instance methods and class methods. An instance method is invoked from a specific instance of a class and performs some action related to that instance; this type of method is useful only in object classes.
A class method is a method that can be invoked whether or not an instance of its class is in memory; this type of method is called a static method in other languages.

Properties — A property contains data for an instance of the class. Properties are useful only in object classes. The following subsection provides more information.

Class queries — A class query defines an SQL query that can be used by the class and specifies a class to use as a container for the query. Often (but not necessarily), you define class queries in a persistent class, to perform queries on the stored data for that class. You can, however, define class queries in any class.

Other kinds of class members that are relevant only for persistent classes:

Storage definitions

Indices

Foreign keys

SQL triggers

XData blocks — An XData block is a named unit of data defined within the class, typically for use by a method in the class. These have many possible applications.

Projections — A class projection provides a way to extend the behavior of the class compiler. The projection mechanism is used by the Java projections; hence the origin of the term projection.

Kinds of Properties

Formally, there are two kinds of properties: attributes and relationships.

Attributes hold values. Attribute properties are usually referred to simply as properties. Depending on the property definition, the value that it holds can be any of the following:

A literal value such as "MyString" and 1. Properties that hold literal values are based on data type classes and are also called data type properties. See the chapter "Defining and Using Literal Properties."

A stream. A stream is an InterSystems IRIS object that contains a value that would be too long for a string. See the chapter "Working with Streams."

A collection. InterSystems IRIS provides the ability to define a property as either a list or an array. The list or array items can be literal values or can be objects.
See the chapter “Working with Collections.”

Some other kind of object. See the chapter “Defining and Using Object-Valued Properties.”

Relationships hold associations between objects. Relationship properties are referred to as relationships. Relationships are supported only in persistent classes. See the chapter “Defining and Using Relationships.”

Defining a Class: The Basics

This section discusses basic class definitions in more detail. It discusses the following topics:

- Specifying class keywords
- Introduction to defining class parameters
- Introduction to defining properties
- Introduction to defining methods

Typically, you use an Integrated Development Environment (IDE) to define classes. You can also define classes programmatically using the InterSystems IRIS class definition classes or via an XML class definition file. If you define an SQL table using SQL DDL statements, the system creates a corresponding class definition.

Choosing a Superclass

When you define a class, one of your earliest design decisions is choosing the class (or classes) on which to base your class. If there is only a single superclass, include Extends followed by the superclass name, at the start of the class definition.

    Class Demo.MyClass Extends Superclass
    {
        //...
    }

If there are multiple superclasses, specify them as a comma-separated list, enclosed in parentheses.

    Class Demo.MyClass Extends (Superclass1, Superclass2, Superclass3)
    {
        //...
    }

It is not necessary to specify a superclass when you create a class. It is common to use %RegisteredObject as the superclass even if the class does not represent any kind of object, because doing so gives your class access to many commonly used macros, but you can instead directly include the include files that contain them.

Include Files

When you create a class that does not extend %RegisteredObject or any of its subclasses, you might want to include the following include files:

%occStatus.inc, which defines macros to work with %Status values.
%occMessages.inc, which defines macros to work with messages.

For details on the macros defined by these include files, see "Using System-supplied Macros" in Using ObjectScript. If your class does extend %RegisteredObject or any of its subclasses, these macros are available automatically.

You can also create your own include files and include them in class definitions as needed.

To include an include file at the beginning of a class definition, use syntax of the following form. Note that you must omit the .inc extension of the include file:

    Include MyMacros

For example:

    Include %occInclude

    Class Classname
    {
    }

To include multiple include files at the beginning of a class definition, use syntax of the following form:

    Include (MyMacros, YourMacros)

Note that this syntax does not have a leading pound sign (in contrast to the syntax required in a routine). Also, the Include directive is not case-sensitive, so you could use INCLUDE instead, for example. The include file name is case-sensitive.

See also the reference section on #Include in Using ObjectScript.

Specifying Class Keywords

In some cases, it is necessary to control details of the code generated by the class compiler. For one example, for a persistent class, you can specify an SQL table name, if you do not want to (or cannot) use the default table name. For another example, you can mark a class as final, so that subclasses of it cannot be created. The class definitions support a specific set of keywords for such purposes.

If you need to specify class keywords, include them within square brackets after the superclass, as follows:

    Class Demo.MyClass Extends Demo.MySuperclass [ Keyword1, Keyword2, ...]
    {
        //...
    }

For example, the available class keywords include Abstract and Final. For an introduction, see "Compiler Keywords," later in this chapter. InterSystems IRIS also provides specific keywords for each kind of class member.
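As an illustrative sketch of the keyword syntax (the class and table names here are hypothetical, not taken from the text above), a persistent class that overrides its SQL table name and cannot be subclassed might be declared like this:

```objectscript
/// SqlTableName overrides the default projected table name;
/// Final prevents subclasses of this class from being created.
Class Demo.Person Extends %Persistent [ Final, SqlTableName = PersonTable ]
{

Property Name As %String;

}
```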
Introduction to Defining Class Parameters

A class parameter defines a constant value for all objects of a given class. To add a class parameter to a class definition, add an element like one of the following to the class:

    Parameter PARAMNAME as Type;

    Parameter PARAMNAME as Type = value;

    Parameter PARAMNAME as Type [ Keywords ] = value;

Keywords represents any parameter keywords. For an introduction to keywords, see "Compiler Keywords," later in this chapter. For parameter keywords, see "Parameter Keywords" in the Class Definition Reference. These are optional.

Introduction to Defining Properties

An object class can include properties. To add a property to a class definition, add an element like one of the following to the class:

    Property PropName as Classname;

    Property PropName as Classname [ Keywords ];

    Property PropName as Classname(PARAM1=value,PARAM2=value) [ Keywords ];

    Property PropName as Classname(PARAM1=value,PARAM2=value);

PropName is the name of the property, and Classname is an optional class name (if you omit this, the property is assumed to be of type %String). Keywords represents any property keywords. For an introduction to keywords, see "Compiler Keywords," later in this chapter. For property keywords, see "Property Keywords" in the Class Definition Reference. These are optional.

Depending on the class used by the property, you might also be able to specify property parameters, as shown in the third and fourth variations. Notice that the property parameters, if included, are enclosed in parentheses and precede any property keywords. Also notice that the property keywords, if included, are enclosed in square brackets.

Introduction to Defining Methods

You can define two kinds of methods in InterSystems IRIS classes: class methods and instance methods.
To add a class method to a class definition, add an element like the following to the class:

ClassMethod MethodName(arguments) as Classname [ Keywords ]
{
 //method implementation
}

MethodName is the name of the method and arguments is a comma-separated list of arguments. Classname is an optional class name that represents the type of value (if any) returned by this method. Omit the As Classname part if the method does not return a value. Keywords represents any method keywords. For an introduction to keywords, see “Compiler Keywords,” later in this chapter. For method keywords, see “Method Keywords” in the Class Definition Reference. These are optional. To add an instance method, use the same syntax with Method instead of ClassMethod:

Method MethodName(arguments) as Classname [ Keywords ]
{
 //method implementation
}

Instance methods are relevant only in object classes.

Naming Conventions

Classes and class members follow specific naming conventions, which are detailed in this section.

Rules for Class and Class Member Names

This section describes the rules for class and member names, such as maximum length, allowed characters, and so on. A full class name includes its package name, as described in the next section. Every identifier must be unique within its context (that is, no two classes can have the same name). InterSystems IRIS has the following limits on package, class, and member names:

Each package name can have up to 189 unique characters.
Each class name can have up to 60 unique characters.
Each method and property name can have up to 180 unique characters. See the section “Class Member Names” for more details.
The combined length of the name of a property and of any indices on the property should be no longer than 180 characters.
The full name of each member (including the unqualified member name, the class name, the package name, and any separators) must be 220 characters or fewer.

Each name can include Unicode characters.
Identifiers preserve case: you must exactly match the case of a name; at the same time, two classes cannot have names that differ only in case. For example, the identifiers “id1” and “ID1” are considered identical for purposes of uniqueness. Identifiers must start with an alphabetic character, though they may contain numeric characters after the first position. Identifiers cannot contain spaces or punctuation characters, with the exception of package names, which may contain the “.” character. Certain identifiers start with the “%” character; this identifies a system item. For example, many of the methods and packages provided with the InterSystems IRIS library start with the “%” character. Member names can be delimited, which allows them to include characters that are otherwise not permitted. To create a delimited member name, use double quotes for the first and last characters of the name. For example:

Property "My Property" As %String;

For more details on identifiers, see the appendix “Rules and Guidelines for Identifiers” in the Orientation Guide for Server-Side Programming.

Class Names

Every class has a name that uniquely identifies it. A full class name consists of two parts: a package name and a class name; the class name follows the final “.” character in the name. A class name must be unique within its package; a package name must be unique within an InterSystems IRIS namespace. For details on packages, see the chapter “Packages.” Because persistent classes are automatically projected as SQL tables, a class definition must specify a table name that is not an SQL reserved word; if the name of a persistent class is an SQL reserved word, then the class definition must also specify a valid, non-reserved word value for its SQLTableName keyword.

Class Member Names

Every class member (such as a property or method) must have a name that is unique within its class and with a maximum length of 180 characters.
Further, a member of a persistent class cannot use an SQL reserved word as its identifier. It can define an alias, however, using the SQLName or SQLFieldName keyword of that member (as appropriate). InterSystems strongly recommends that you do not give two members the same name; this can have unexpected results.

Inheritance

An InterSystems IRIS class can inherit from already existing classes. If one class inherits from another, the inheriting class is known as a subclass and the class or classes it is derived from are known as superclasses. The following shows an example class definition that uses two superclasses:

Class User.MySubclass Extends (%Library.Persistent, %Library.Populate)
{
}

In addition to a class inheriting methods from its superclasses, the properties inherit additional methods from system property behavior classes and, in the case of a data type attribute, from the data type class. For example, if there is a class defined called Person:

Class MyApp.Person Extends %Library.Persistent
{
Property Name As %String;
Property DOB As %Date;
}

It is simple to derive a new class, Employee, from it:

Class MyApp.Employee Extends Person
{
Property Salary As %Integer;
Property Department As %String;
}

This definition establishes the Employee class as a subclass of the Person class. In addition to its own class parameters, properties, and methods, the Employee class includes all of these elements from the Person class.

Use of Subclasses

You can use a subclass in any place in which you might use its superclass. For example, using the above defined Employee and Person classes, it is possible to open an Employee object and refer to it as a Person:

Set x = ##class(MyApp.Person).%OpenId(id)
Write x.Name

We can also access Employee-specific attributes or methods:

Write x.Salary // displays the Salary property (only available in Employee instances)

Primary Superclass

The leftmost superclass that a subclass extends is known as its primary superclass.
A class inherits all the members of its primary superclass, including applicable class keywords, properties, methods, queries, indices, class parameters, and the parameters and keywords of the inherited properties and inherited methods. Except for items marked as Final, the subclass can override (but not delete) the characteristics of its inherited members. See the next section for more details about multiple inheritance.

Multiple Inheritance

By means of multiple inheritance, a class can inherit its behavior and class type from more than one superclass. To establish multiple inheritance, list multiple superclasses within parentheses. The leftmost superclass is the primary superclass. For example, if class X inherits from classes A, B, and C, its definition includes:

Class X Extends (A, B, C)
{
}

The default inheritance order for the class compiler is from left to right, which means that differences in member definitions among superclasses are resolved in favor of the leftmost superclass (in this case, A superseding B and C, and B superseding C). Specifically, for class X, the values of the class parameters, properties, and methods are inherited from class A (the first superclass listed), then from class B, and, finally, from class C. X also inherits any class members from B that A has not defined, and any class members from C that neither A nor B has defined. If class B has a class member with the same name as a member already inherited from A, then X uses the value from A; similarly, if C has a member with the same name as one inherited from either A or B, the order of precedence is A, then B, then C.
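A minimal sketch of that resolution order (all class names hypothetical): two superclasses define a class parameter with the same name, and the subclass picks up the leftmost definition by default:

```objectscript
Class Demo.A Extends %RegisteredObject
{
Parameter SOURCE = "A";
}

Class Demo.B Extends %RegisteredObject
{
Parameter SOURCE = "B";
}

/// With the default left-to-right order, SOURCE comes from Demo.A
Class Demo.LeftToRight Extends (Demo.A, Demo.B)
{
}
```

After compiling these classes, checking the parameter in the Terminal with Write ##class(Demo.LeftToRight).#SOURCE should display A, because Demo.A is the leftmost (primary) superclass.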
Because left-to-right inheritance is the default, there is no need to specify this; hence, the previous example class definition is equivalent to the following:

Class X Extends (A, B, C) [ Inheritance = left ]
{
}

To specify right-to-left inheritance among superclasses, use the Inheritance keyword with a value of right:

Class X Extends (A, B, C) [ Inheritance = right ]
{
}

With right-to-left inheritance, if multiple superclasses have members with the same name, the superclass to the right takes precedence. Even with right-to-left inheritance, the leftmost superclass (sometimes known as the first superclass) is still the primary superclass. This means that the subclass inherits only the class keyword values of its leftmost superclass — there is no override for these. For example, in the case of class X inheriting from classes A, B, and C with right-to-left inheritance, if there is a conflict between a member inherited from class A and one from class B, the member from class B overrides (replaces) the previously inherited member; likewise for the members of class C in relation to those of classes A and B. The class keywords for class X come exclusively from class A. (This is why extending classes A and B — in that order — with left-to-right inheritance is not the same as extending classes B and A — in that order — with right-to-left inheritance; the keywords are inherited from the leftmost superclass in either definition, which makes the two cases different.)

Additional Topics

Also see “%ClassName() and the Most Specific Type Class (MSTC)” in the chapter “Working with Registered Objects.”

Introduction to Compiler Keywords

As shown in “Defining a Class: The Basics,” you can include keywords in a class definition or in the definition of a class member. These keywords, also known as class attributes, generally affect the compiler. This section introduces some common keywords and discusses how InterSystems IRIS presents them.
Example

The following example shows a class definition with some commonly used keywords:

/// This sample persistent class represents a person.
Class MyApp.Person Extends %Persistent [ SqlTableName = MyAppPerson ]
{

/// Define a unique index for the SSN property.
Index SSNKey On SSN [ Unique ];

/// Name of the person.
Property Name As %String [ Required ];

/// Person's Social Security number.
Property SSN As %String(PATTERN = "3N1""-""2N1""-""4N") [ Required ];

}

This example shows the following keywords:

For the class definition, the Extends keyword specifies the superclass (or superclasses) from which this class inherits. Note that the Extends keyword has a different name when you view the class in other ways; see the next section.
For the class definition, the SqlTableName keyword determines the name of the associated table, if the default name is not to be used. This keyword is meaningful only for persistent classes, which are described later in this book.
For the index definition, the Unique keyword causes InterSystems IRIS to enforce uniqueness on the property on which the index is based (SSN in this example).
For the two properties, the Required keyword causes InterSystems IRIS to require non-null values for the properties.

PATTERN is not a keyword but instead is a property parameter; notice that PATTERN is enclosed in parentheses, rather than square brackets. Later chapters of this book discuss many additional keywords, but not all of them. Apart from keywords related to storage (which are not generally documented), you can find details on the keywords in the Class Definition Reference. The reference information demonstrates the syntax that applies when you view a class in the usual edit mode.

Creating Class Documentation

InterSystems IRIS provides a web page called the InterSystems Class Reference, which displays automatically generated reference information for the classes provided by InterSystems, as well as for classes you create.
Informally, the Class Reference is known as Documatic, because it is generated by the class %CSP.Documatic. This section introduces the Class Reference and explains how to create your own documentation and how to include HTML markup.

Introduction to the Class Reference

The purpose of the Class Reference is to advertise, to other programmers, which parts of a class can be used, and how to use them. The reference information shows the definitions of class members, but not their actual implementations. For example, it shows method signatures but not their internal definitions. It includes links between elements so that you can rapidly follow the logic of the code. There is also a search option.

Creating Documentation to Include in the Class Reference

To create documentation to include in the Class Reference, create comments within the class definitions — specifically comments that start with ///. If you precede the class declaration with such comments, the comments are shown at the top of the page for the class. If you precede a given class member with such comments, the comments are shown after the generated information for that class member. Once you compile the class, you can view its generated class documentation the next time you open the Class Reference documentation. If you add no Class Reference comments, items that you add to a class or package appear appropriately in the lists of class or package contents, but without any explanatory text. You can extend any existing Class Reference comments by modifying the class definition. The syntax rules for Class Reference comments are strict: All Class Reference comments that describe a class or class member must appear in a consecutive block immediately before the declaration of the item that they describe.
Each line in the block of comments must start with three slashes: ///

Tip: Note that, by default, the presentation combines the text of all the /// lines and treats the result as a single paragraph. You can insert HTML line breaks (<br>). Or you can use HTML formatting (such as <p> and </p>), as discussed in the next subsection.

The three slashes must begin at the first (left-most) position in the line. No blank lines are allowed within Class Reference comments. No blank lines are allowed between the last line of the Class Reference comments and the declaration for the item that they describe. The length of the Class Reference comment (all lines combined) must be less than the string length limit (which is extremely long). See “String Length Limit” in the Orientation Guide for Server-Side Programming. Class Reference comments allow plain text, plus any standard HTML element and a small number of specialized elements.

Using HTML Markup in Class Documentation

You can use HTML tags within the comments in a class. With regard to the allowed HTML elements, adhere to as strict an HTML standard as you can, for example XHTML. This ensures that your comments can be interpreted by any browser. In addition to standard HTML, you can use the following tags: CLASS, METHOD, PROPERTY, PARAMETER, QUERY, and EXAMPLE. (As with standard HTML tags, the names of these tags are not case-sensitive.) The most commonly used tags are described here. See the documentation for %CSP.Documatic for details of the others.

Use <CLASS> to tag class names. If the class exists, the contents are displayed as a link to the class' documentation. For example:

/// This uses the <CLASS>MyApp.MyClass</CLASS> class.

Use <EXAMPLE> to tag programming examples. This tag affects the appearance of the text. Note that each /// line becomes a separate line in the example (in contrast to the usual case, where the lines are combined into a single paragraph).
For example:

/// <EXAMPLE>
/// set o=..%New()
/// set o.MyProperty=42
/// set o.OtherProp="abc"
/// do o.WriteSummary()
/// </EXAMPLE>

Use <METHOD> to tag method names. If the method exists, the contents are displayed as a link to the method's documentation. For example:

/// This is identical to the <METHOD>Unique</METHOD> method.

Use <PROPERTY> to tag property names. If the property exists, the contents are displayed as a link to the property's documentation. For example:

/// This uses the value of the <PROPERTY>State</PROPERTY> property.

Here is a multi-line description using HTML markup:

/// The <METHOD>Factorial</METHOD> method returns the factorial
/// of the value specified by <VAR>x</VAR>.

Compiling Classes

InterSystems IRIS class definitions are compiled into application routines by the class compiler. Classes cannot be used in an application before they are compiled. The class compiler differs from the compilers available with other programming languages, such as Java, in two significant ways: first, the results of compilation are placed into a shared repository (database), not a file system. Second, it automatically provides support for persistent classes. Specifically, the class compiler does the following:

It generates a list of dependencies — classes that must be compiled first. Depending on the compile options used, any dependencies that have been modified since last being compiled will also be compiled.
It resolves inheritance — it determines which methods, properties, and other class members are inherited from superclasses. It stores this inheritance information into the class dictionary for later reference.
For persistent and serial classes, it determines the storage structure needed to store objects in the database and creates the necessary runtime information needed for the SQL representation of the class.
It executes any method generators defined (or inherited) by the class.
It creates one or more routines that contain the runtime code for the class.
The class compiler groups methods according to language (ObjectScript and Basic) and generates separate routines, each containing methods of one language or the other. It compiles all of the generated routines into executable code. It creates a class descriptor. This is a special data structure (stored as a routine) that contains all the runtime dispatch information needed to support a class (names of properties, locations of methods, and so on).

Invoking the Class Compiler

You can compile classes using an IDE (as documented elsewhere), and you can compile them in the Terminal. In the latter case, use the Compile() method of the %SYSTEM.OBJ object:

Do $System.OBJ.Compile("MyApp.MyClass")

Class Compiler Notes

Compilation Order

When you compile a class, the system also recompiles other classes if the class that you are compiling contains information about dependencies. For example, the system compiles any subclasses of the class. On some occasions, you may need to control the order in which the classes are compiled. To do so, use the System, DependsOn, and CompileAfter keywords. For details, see the Class Definition Reference. To find the classes that the compiler will recompile when you compile a given class, use the $SYSTEM.OBJ.GetDependencies() method. For example:

TESTNAMESPACE>d $system.OBJ.GetDependencies("Sample.Address",.included)

TESTNAMESPACE>zw included
included("Sample.Address")=""
included("Sample.Customer")=""
included("Sample.Employee")=""
included("Sample.Person")=""
included("Sample.Vendor")=""

The signature of this method is as follows:

classmethod GetDependencies(ByRef class As %String, Output included As %String, qspec As %String) as %Status

Where:

class is either a single class name (as in the example), a comma-separated list of class names, or a multidimensional array of class names. (If it is a multidimensional array, be sure to pass this argument by reference.) It can also include wildcards.
included is a multidimensional array of the names of the classes that will be compiled when class is compiled.
qspec is a string of compiler flags and qualifiers. See the next subsection. If you omit this, the method considers the current compiler flags and qualifiers.

Viewing Class Compiler Flags and Qualifiers

The Compile() method also allows you to supply flags and qualifiers that affect the result. Their position in the argument list is described in the explanation of the Compile() method. To view the applicable flags, execute the command:

Do $System.OBJ.ShowFlags()

This produces the following output:

See $system.OBJ.ShowQualifiers() for comprehensive list of qualifiers as flags have been superseded by qualifiers
b - Include sub classes.
c - Compile. Compile the class definition(s) after loading.
d - Display. This flag is set by default.
e - Delete extent.
h - Show hidden classes.
i - Validate XML export format against schema on Load.
k - Keep source. When this flag is set, source code of generated routines will be kept.
p - Percent. Include classes with names of the form %*.
r - Recursive. Compile all the classes that are dependency predecessors.
s - Process system messages or application messages.
u - Update only. Skip compilation of classes that are already up-to-date.
y - Include classes that are related to the current class in the way that they either reference to or are referenced by the current class in SQL usage.
These flags are deprecated: a, f, g, l, n, o, q, v
Default flags for this namespace
You may change the default flags with the SetFlags(flags,system) classmethod.

To view the full list of qualifiers, along with their description, type, and any associated values, execute the command:

Do $System.OBJ.ShowQualifiers()

Qualifier information displays in a format similar to one of the following:

Name: /checkschema
Description: Validate imported XML files against the schema definition.
Type: logical
Flag: i
Default Value: 1

Name: /checksysutd
Description: Check system classes for up-to-dateness
Type: logical
Default Value: 0

Name: /checkuptodate
Description: Skip classes or expanded classes that are up-to-date.
Type: enum
Flag: ll
Enum List: none,all,expandedonly,0,1
Default Value: expandedonly
Present Value: all
Negated Value: none

While many options can be specified by means of either flags or qualifiers, InterSystems recommends using qualifiers, as they are the newer mechanism. For more information, see Flags and Qualifiers.

Compiling Classes that Include Bitmap Indices

When compiling a class that contains a bitmap index, the class compiler generates a bitmap extent index if no bitmap extent index is defined for that class. Special care is required when adding a bitmap index to a class on a production system. For more information, see the section “Generating a Bitmap Extent Index” in the “Defining and Building Indices” chapter of the SQL Optimization Guide.

Compiling When There Are Existing Instances of a Class in Memory

If the compiler is called while an instance of the class being compiled is open, there is no error. The already open instance continues to use its existing code. If another instance is opened after compilation, it uses the newly compiled code.

Putting Classes in Deployed Mode

You might want to put some of your classes in deployed mode before you send them to customers; this process hides the source code. For any class definitions that contain method definitions that you do not want customers to see, compile the classes and then use $SYSTEM.OBJ.MakeClassDeployed(). For example:

d $system.OBJ.MakeClassDeployed("MyApp.MyClass")

For an alternative approach, see the article Adding Compiled Code to Customer Databases.

About Deployed Mode

When a class is in deployed mode, its method and trigger definitions have been removed.
(Note that if the class is a data type class, its method definitions are retained because they may be needed at runtime by cached queries.) You cannot export or compile a class that is in deployed mode, but you can compile its subclasses (if they are not in deployed mode). There is no way to reverse or undo deployment of a class. You can, however, replace the class by importing the definition from a file using the Management Portal, if you previously saved it to disk. (This is useful if you accidentally put one of your classes into deployed mode prematurely.)
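Pulling several of the earlier sections together, the following hypothetical class sketches a class parameter, a property with a property parameter and a keyword, an instance method, and a class method, followed by a Terminal session that compiles and exercises it:

```objectscript
Class Demo.Counter Extends %RegisteredObject
{

/// Class parameter: one constant value shared by all instances
Parameter STEP = 1;

/// Required property with a property parameter constraining its length
Property Name As %String(MAXLEN = 50) [ Required ];

Property Count As %Integer [ InitialExpression = 0 ];

/// Instance method: .. refers to the current instance,
/// and ..#STEP reads the class parameter
Method Increment() As %Integer
{
    Set ..Count = ..Count + ..#STEP
    Quit ..Count
}

/// Class method: callable without creating an instance
ClassMethod Describe() As %String
{
    Quit "A simple counter"
}

}
```

A Terminal session might then look like the following (output depends on your environment):

Do $System.OBJ.Compile("Demo.Counter")
Set c = ##class(Demo.Counter).%New()
Write c.Increment()
Write ##class(Demo.Counter).Describe()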
When you open the My Project page, you will notice there are several tabs on the left side. These tabs are:

Application
Compile
Debug
References
Resources
Settings
Security
Publish

In this tutorial, we talk about the Application, Resources, References, and Settings tabs.

Application Tab

This tab covers the general properties of your application. The first one we will cover is the assembly information. The assembly information stores information about the application's purpose, creator, copyright, trademark, and version. To edit this, click on the button that says Assembly Information.... A window will show (see attached picture) where you can edit each of these properties. Next, you should see a combo box with a title that says Shutdown Mode. In this, you can control whether the application will close when the startup form is closed, or when the last form closes, regardless of whether or not it is the startup form. Below that, there is another combo box that says Splash Screen. This is fairly self-explanatory. If you want your application to show a splash screen while it's starting up, set the desired form in that box. Also on this page is where you can change the icon of your application. You should be able to find that by yourself, because it says Icon right above it. From there, you can select an icon on your computer to serve as the application's icon. To the right of the icon menu, there is a combo box that says Application Type on it. There really shouldn't be any need to change that, because it determines what type of application you are making (console app, form app, or class library (dll)). Once you start making your application, changing that will really mess it up (in most cases). Right below that, there is yet another combo box that says Startup Form. That controls which form will be shown when the application starts. This can be very useful if you start making your application, and suddenly realize you need a different form to start up.
Instead of redesigning your entire project, you can simply change the item in that box. (A picture of the entire Application Tab is attached.) There are a few more things on that page, but we have covered most of the important things (things a beginner would probably use).

Resources Tab

Just as every individual form has its own set of resources, your entire application has its own set of resources as well. Adding resources should be fairly easy to figure out (if not, see the attached image). To access them in your code, use the namespace My.Resources. You can add almost any type of file as a resource. To access an image resource called "ImageRes", you would simply do

My.Resources.ImageRes 'This returns the bitmap called "ImageRes"

Settings Tab

There will be many times you want your applications to save some of their settings on exit. To do that, you could store them in an ini file, or, a much easier way, you can add a setting. Adding a setting, like adding a resource, should be fairly self-explanatory (again, if not, I attached an image for that too). Once you have your settings in there, they can be accessed in your code in the My.Settings namespace. For example, to access an integer setting called "IntSetting", this is what you would do

My.Settings.IntSetting 'This returns the integer setting called "IntSetting"

Also, settings, by default, save themselves automatically when the application shuts down (this can be turned off by unchecking the Save My.Settings on Shutdown box on the Application Tab). To save them manually, do

My.Settings.Save()

References Tab

This tab controls which COM or .NET dlls will be referenced by your application. It also controls the project-level imports. As you know, putting the code

Imports ANamespace
' ....

will cause you to be able to access classes and objects inside that namespace without having to put the namespace name before it.
If you frequently use one namespace throughout your entire application, instead of putting Imports statements at the beginning of every file, you can add a project-level import. The references part shouldn't be hard to figure out. If there is a dll that has methods you want to use in your application, add it to the list of references (this can also be done in the Solution Explorer).
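As a small end-to-end sketch of the Settings and Resources ideas above (the setting name IntSetting matches the earlier example; the resource name ImageRes and the method itself are hypothetical):

```vb
' Bumps a stored counter and shows an image resource on a PictureBox.
' Assumes an Integer setting "IntSetting" and an image resource "ImageRes"
' were added on the Settings and Resources tabs.
Public Sub ApplySavedState(target As PictureBox)
    ' Read and update the persisted setting
    My.Settings.IntSetting = My.Settings.IntSetting + 1

    ' Pull a bitmap out of the application's resources
    target.Image = My.Resources.ImageRes

    ' Settings save automatically on shutdown by default,
    ' but you can also persist them right away:
    My.Settings.Save()
End Sub
```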
In this tutorial, you will learn how to use the JobScheduler API available in Android Lollipop. The JobScheduler API allows developers to create jobs that execute in the background when certain conditions are met.

Introduction

When working with Android, there will be occasions where you will want to run a task at a later point in time or under certain conditions, such as when a device is plugged into a power source or connected to a Wi-Fi network. Thankfully with API 21, known by most people as Android Lollipop, Google has provided a new component known as the JobScheduler API to handle this very scenario. The JobScheduler API performs an operation for your application when a set of predefined conditions are met. Unlike the AlarmManager class, the timing isn't exact. In addition, the JobScheduler API is able to batch various jobs to run together. This allows your app to perform the given task while being considerate of the device's battery at the cost of timing control. In this article, you will learn more about the JobScheduler API and the JobService class by using them to run a simple background task in an Android application. The code for this tutorial is available on GitHub.

1. Creating the Job Service

To start, you're going to want to create a new Android project with a minimum required API of 21, because the JobScheduler API was added in the most recent version of Android and, at the time of writing, is not backwards compatible through a support library. Assuming you're using Android Studio, after you've hit the finished button for the new project, you should have a bare-bones "Hello World" application. The first step you're going to take with this project is to create a new Java class. To keep things simple, let's name it JobSchedulerService and extend the JobService class, which requires that two methods be created: onStartJob(JobParameters params) and onStopJob(JobParameters params).
public class JobSchedulerService extends JobService {

    @Override
    public boolean onStartJob(JobParameters params) {
        return false;
    }

    @Override
    public boolean onStopJob(JobParameters params) {
        return false;
    }

}

onStartJob(JobParameters params) is the method that you must use when you begin your task, because it is what the system uses to trigger jobs that have already been scheduled. As you can see, the method returns a boolean value. If the return value is false, the system assumes that whatever task has run did not take long and is done by the time the method returns. If the return value is true, then the system assumes that the task is going to take some time and the burden falls on you, the developer, to tell the system when the given task is complete by calling jobFinished(JobParameters params, boolean needsRescheduled). onStopJob(JobParameters params) is used by the system to cancel pending tasks when a cancel request is received. It's important to note that if onStartJob(JobParameters params) returns false, the system assumes there are no jobs currently running when a cancel request is received. In other words, it simply won't call onStopJob(JobParameters params). One thing to note is that the job service runs on your application's main thread. This means that you have to use another thread, a handler, or an asynchronous task to run longer tasks to not block the main thread. Because multithreading techniques are beyond the scope of this tutorial, let's keep it simple and implement a handler to run our task in the JobSchedulerService class.

private Handler mJobHandler = new Handler( new Handler.Callback() {
    @Override
    public boolean handleMessage( Message msg ) {
        Toast.makeText( getApplicationContext(), "JobService task running", Toast.LENGTH_SHORT ).show();
        jobFinished( (JobParameters) msg.obj, false );
        return true;
    }
} );

In the handler, you implement the handleMessage(Message msg) method that is a part of the Handler instance and have it run your task's logic.
In this case, we're keeping things very simple and post a Toast message from the application, though this is where you would put your logic for things like syncing data. When the task is done, you need to call jobFinished(JobParameters params, boolean needsRescheduled) to let the system know that you're done with that task and that it can begin queuing up the next operation. If you don't do this, your jobs will only run once and your application will not be allowed to perform additional jobs. The two parameters that jobFinished(JobParameters params, boolean needsRescheduled) takes are the JobParameters that were passed to the JobService class in the onStartJob(JobParameters params) method and a boolean value that lets the system know if it should reschedule the job based on the original requirements of the job. This boolean value is useful to understand, because it is how you handle the situations where your task is unable to complete because of other issues, such as a failed network call. With the Handler instance in place, you can implement onStartJob(JobParameters params) and onStopJob(JobParameters params) by posting messages to, and removing them from, the handler. You'll also notice that the number 1 is being passed to the Handler instance. This is the identifier that you're going to use for referencing the job.

@Override
public boolean onStartJob(JobParameters params) {
    mJobHandler.sendMessage( Message.obtain( mJobHandler, 1, params ) );
    return true;
}

@Override
public boolean onStopJob(JobParameters params) {
    mJobHandler.removeMessages( 1 );
    return false;
}

2. Creating the Job Scheduler

With the JobSchedulerService class finished, we can start looking at how your application will interact with the JobScheduler API. The first thing you will need to do is create a JobScheduler object, called mJobScheduler in the sample code, and initialize it by getting an instance of the system service JOB_SCHEDULER_SERVICE. In the sample application, this is done in the MainActivity class.

mJobScheduler = (JobScheduler) getSystemService( Context.JOB_SCHEDULER_SERVICE );
```java
JobInfo.Builder builder = new JobInfo.Builder( 1,
    new ComponentName( getPackageName(),
        JobSchedulerService.class.getName() ) );
```

This builder allows you to set many different options for controlling when your job will execute. The following code snippet shows how you could set your task to run periodically every three seconds.

```java
builder.setPeriodic( 3000 );
```

It's important to note that setRequiredNetworkType(int networkType), setRequiresCharging(boolean requireCharging) and setRequiresDeviceIdle(boolean requireIdle) may cause your job to never start unless setOverrideDeadline(long time) is also set, allowing your job to run even if conditions are not met. Once the preferred conditions are stated, you can build the JobInfo object and send it to your JobScheduler object, as shown below.

```java
if( mJobScheduler.schedule( builder.build() ) <= 0 ) {
    // If something goes wrong
}
```

If you need to cancel all of the jobs that you have scheduled, you can do so through the same JobScheduler object:

```java
mJobScheduler.cancelAll();
```

You should now be able to use the JobScheduler API with your own applications to batch jobs and run background operations.

Conclusion

In this article, you've learned how to implement a JobService subclass that uses a Handler object to run background tasks for your application. You've also learned how to use the JobInfo.Builder to set requirements for when your service should run. Using these, you should be able to improve how your own applications operate while being mindful of power consumption.
https://code.tutsplus.com/tutorials/using-the-jobscheduler-api-on-android-lollipop--cms-23562
> > It's probably a bit early to be changing manifest files until we decide on
> > new component namespaces and where they should live but you do have to
> > compile it somehow.
>
Agreed. I hope this gets figured out as we go forward. I'll be honest - part of the reason behind initiating this process is the desire to bring some of these "gray" areas to light so they can be discussed and decisions made. There seem to be a lot of questions being kicked down the road. I've never built a theme that "inherited" from another theme, so I don't know how easy or difficult that is. I've always assumed they were fairly self-contained. Perhaps they need to share a project with different build targets.

> Just not 100% sure if the popup is already showing do you need to restart
> the timer?
>
The thought process there was: if you wanted to front end this with a "ToastManager" singleton that only ever uses one instance of the Toast, all you would need to do is change the label and call show(). That would leave the option open for queuing notifications, etc. Presumably, if you're explicitly calling show() more than once you'd expect to get a "full" showing; though I may be off on that and am open to discussion on the behavior. Ultimately, it was just a guess at the desired behavior, and I didn't look into how Android handles that case.
http://mail-archives.apache.org/mod_mbox/incubator-flex-dev/201203.mbox/%3CCAE0mPOvjTGxVNS6CrmJXn_UgPftOJb=ciiKjTTFBszF2OzSHxQ@mail.gmail.com%3E
The great thing about blogging every day is that I'm guaranteed to make an idiot out of myself. There is just no way around it. There are some days (sometimes even several days in succession) when an obvious solution just isn't obvious. And it's very liberating to accept that and just move forward with learning. Actually, it's still completely embarrassing, but I'll stick with the first story.

Today's embarrassment comes in the form of friend classes in Dart. I continue to explore the memento pattern for the forthcoming Design Patterns in Dart. The implementation section in the Gang of Four book suggests that, to preserve encapsulation, the memento should expose its properties only to the originating class of the pattern, but not to anything else. Specifically, the caretaker that saves memento objects should not itself be able to access the properties of the memento. Since the memento represents a previous internal state, the caretaker should not be looking under the covers, so to speak. If it needs to know anything, it should ask the originating class. In the Gang of Four book, this is achieved via friend classes.

For the past two nights, I have struggled to bend Dart to my will in achieving this friend class behavior. Instead, as the esteemed James Hurford pointed out in last night's comments, Dart's libraries simply work this way. Private variables in Dart are private to a library, not a class. So if I make the properties of my memento object private, nothing outside of the library (like the caretaker from the pattern) can access them. But the originator class in the same library should still be able to access them.

The originating class with which I have been experimenting is the VelvetFogMachine, which plays wonderful selections of the great crooner's catalog. The nowPlaying getter method is used to generate the Playing memento objects:

```dart
class VelvetFogMachine {
  Song currentSong;
  double currentTime;
  // ...

  // Create a memento of the current state
  Playing get nowPlaying => new Playing(currentSong, time);

  // ...
}
```

Following James' suggestion, I should make the properties of Playing private (leading underscore variables are private in Dart):

```dart
// The Memento
class Playing {
  Song _song;
  double _time;
  Playing(this._song, this._time);
}
```

Next, I ensure that both of these classes exist in the velvet_fog_machine library:

```dart
library velvet_fog_machine;

// The "Originator"
class VelvetFogMachine { /* ... */ }

// The Memento
class Playing { /* ... */ }
```

With that, the VelvetFogMachine should have access to Playing's _song and _time private variables when restoring a previous memento:

```dart
class VelvetFogMachine {
  // ...

  // Restore from memento
  void backTo(Playing p) {
    print(" *** Whoa! This was a good one, let's hear it again :) ***");
    _play(p._song, p._time);
  }
}
```

And that actually works right away. Well, I shouldn't say "actually." I really should have remembered that this is how private variables work, but this is so much simpler than any of the craziness that I had been trying the previous two nights.

Back in the caretaker code—and outside the velvet_fog_machine library—I can play songs, store mementos, and restore mementos:

```dart
import 'package:memento_code/velvet_fog_machine.dart';
// ...
List<Playing> replayer = [];
var scatMan = new VelvetFogMachine();
// ...
scatMan.play(
  'New York, New York Medley',
  'A Vintage Year'
);
replayer.add(scatMan.nowPlaying);

scatMan.play(
  'The Lady is a Tramp',
  'The Velvet Frog: The Very Best of Mel Tormé'
);

// The New York, New York Medley with George Shearing really is wonderful
scatMan.backTo(replayer.last);
```

Running this from the command line produces the usual play() messages and verifies that the backTo() method was able to access the memento's private properties:

```
./bin/play_melvie.dart
Playing Blue Moon // The Velvet Frog: The Very Best of Mel Tormé @ 0.00
Playing 'Round Midnight // Tormé @ 0.00
Playing It Don't Mean A Thing (If It Ain't Got That Swing) // Best Of/ 20th Century @ 0.00
@ 1.28
```

Best of all, if I try to access the private variables from the caretaker:

```dart
print("Caretaker says last remembered song is: ${replayer.last._song}");
```

Then I get no-such-method errors and dartanalyzer complains:

```
[warning] The getter '_song' is not defined for the class 'Playing' (/home/chris/repos/design-patterns-in-dart/memento/bin/play_melvie.dart, line 40, col 68)
```

This is exactly the "friend class" behavior that I wanted. So the moral of the story is that it takes two days of trying dumb stuff on the internet before a kind soul takes pity on you and points you in the right direction. No wait, the moral of the story is that if you want friend classes in Dart, keep them in the same library. Thanks James!

Day #20

You're welcome. I didn't realise that members of a library were friends with each other either till last night. I just had a hunch that there was a better way.
https://japhr.blogspot.com/2015/12/dart-has-friend-class-baked-in.html
How do I add ASP.NET Core Identity to an existing ASP.NET Core MVC project?

I have already started my dotnet core mvc project without identity on Mac with the CLI, and now I want to add this feature. The only option I have known till now is to create a new project by:

```
dotnet new mvc --auth
```

Is there a better way to add identity to an existing project? I hope there is a 'dotnet new' command.

One suggested answer is to add the Identity package directly:

```
dotnet add package Microsoft.AspNetCore.Identity
```

According to docs.microsoft.com you can scaffold Identity into an existing MVC project with aspnet-codegenerator.

1) If you have not previously installed the ASP.NET Core scaffolder, install it now:

```
dotnet tool install -g dotnet-aspnet-codegenerator
```

2) Add a package reference to Microsoft.VisualStudio.Web.CodeGeneration.Design to the project (*.csproj) file. Run the following commands in the project directory:

```
dotnet add package Microsoft.VisualStudio.Web.CodeGeneration.Design
dotnet restore
```

3) Run the following command to list the Identity scaffolder options:

```
dotnet aspnet-codegenerator identity -h
```

4) In the project folder, run the Identity scaffolder with the options you want. For example, to set up Identity with the default UI and the minimum number of files, run the following command:

```
dotnet aspnet-codegenerator identity --useDefaultUI
```

5) The generated Identity database code requires Entity Framework Core Migrations. Create a migration and update the database.
For example, run the following commands:

```
dotnet ef migrations add CreateIdentitySchema
dotnet ef database update
```

6) Call UseAuthentication after UseStaticFiles:

```csharp
public class Startup
{
    // ...
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        // ...
        app.UseHsts();
        app.UseHttpsRedirection();
        app.UseStaticFiles();
        app.UseAuthentication(); // <-- add this line
        app.UseMvcWithDefaultRoute();
    }
}
```

ASP.NET Core 2.1 and later provides ASP.NET Core Identity as a Razor Class Library. Applications that include Identity can apply the scaffolder to selectively add the source code contained in the Identity Razor Class Library (RCL). You might want to generate source code so you can modify the code and change the behavior. ASP.NET Core Identity is basically a membership system intended to replace the existing membership system of classic ASP.NET.

For classic (non-Core) projects, you can manage this through the NuGet Package Manager: Tools -> NuGet Package Manager -> Console

```
Install-Package Microsoft.AspNet.Identity.Core
```

Comments:

- I look for an explanation why my question was so bad? It was a genuine doubt after all..!
- I even found the way to the answer. dotnet add package Microsoft.AspNetCore.Identity would do the trick.
- Starting with ASP.NET Core 2.1, Microsoft.AspNetCore.Identity.UI is also being introduced.
- This doesn't work with Core projects.
http://thetopsites.net/article/50179696.shtml
Jay Hayes

"JavaScript is single-threaded, so it doesn't scale. JavaScript is a toy language because it doesn't support multithreading." Outside (and inside) the web community, statements like these are common. And in a way, it's true: JavaScript's event loop means your program does one thing at a time. This intentional design decision shields us from an entire class of multithreading woes, but it has also birthed the misconception that JavaScript can't handle concurrency. But in fact, JavaScript's design is well-suited for solving a plethora of concurrency problems without succumbing to the "gotchas" of other multithreaded languages. You might say that JavaScript is single-threaded… just so it can be multithreaded!

You may want to do some homework if "concurrency" and "parallelism" are new to your vocabulary. TL;DR: for simple programs, we usually write "sequential" (or "serial") code: one step executes at a time, and must complete before the next step begins. If JavaScript could perform a "blocking" AJAX request with ajaxSync(), serial code might look like this:

```javascript
console.log('About to make a request.');
let json = ajaxSync('');
console.log(json);
console.log('Finished the request.');
/* => About to make a request.
   ... AJAX request runs ...
   ... a couple seconds later ...
   ... AJAX request finishes ...
   => { all: ['the', 'things'] }
   => Finished the request. */
```

Until the AJAX request completes, JavaScript pauses (or "blocks") any lines below from executing. In contrast, concurrency is when the execution of one series of steps can overlap another series of steps. In JavaScript, concurrency is often accomplished with async Web APIs and a callback:

```javascript
console.log('About to make a request.');
ajaxAsync('', json => {
  console.log(json);
  console.log('Finished the request.');
});
console.log('Started the request.');
/* => About to make a request.
   ... AJAX request runs in the background ...
   => Started the request.
   ... a couple seconds later ...
   ... AJAX request finishes ...
   => { all: ['the', 'things'] }
   => Finished the request. */
```

In this second version, the AJAX request only "blocks" the code inside the callback (logging the AJAX response), but the JavaScript runtime will go on executing lines after the AJAX request. The JavaScript runtime uses a mechanism, called the "event loop," to keep track of all in-progress async operations so it can notify your program when an operation finishes. If you are unfamiliar with the event loop, check out Philip Roberts' exceptional 20-minute overview from ScotlandJS: "Help, I'm stuck in an event-loop."

Thanks to the event loop, a single thread can perform an admirable amount of work concurrently. But why not just reach for multithreading? Software is harder to write (and debug) when it constantly switches between different tasks through multithreading. So unlike many languages, JavaScript finishes one thing at a time—a constraint called "run-to-completion"—and queues up other things to do in the background. Once the current task is done, it grabs the next chunk of work off the queue and executes it to completion. Since the JavaScript runtime never interrupts code that is already executing on the call stack, you can be sure that shared state (like global variables) won't randomly change mid-function—reentrancy isn't even a thing! Run-to-completion makes it easy to reason about highly concurrent code, for which reason Node.js is so popular for backend programming.

Although your JavaScript code is single-threaded and only does one thing at a time, the JavaScript runtime and Web APIs are multithreaded! When you pass a callback function to setTimeout() or start an AJAX request with fetch(), you are essentially spinning up a background thread in the runtime. Once that background thread completes, and once the current call stack finishes executing, your callback function is pushed onto the (now empty) call stack and run to completion.
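This run-to-completion ordering can be observed directly. The following sketch (not from the original article; the variable names are my own) records the order in which the current task, a Promise microtask, and a timer callback run under Node:

```javascript
const order = [];

order.push('sync start');

// Queued on the task (macrotask) queue; fires only after the current
// call stack empties AND all pending microtasks have drained.
setTimeout(() => order.push('timeout callback'), 0);

// Queued on the microtask queue; runs right after the current task
// runs to completion, before any timer fires.
Promise.resolve().then(() => order.push('promise microtask'));

order.push('sync end');

setTimeout(() => {
  // By now everything has fired:
  console.log(order);
  // => [ 'sync start', 'sync end', 'promise microtask', 'timeout callback' ]
}, 10);
```

Note that both synchronous pushes happen before either callback, even though the timer was scheduled with a delay of zero: the runtime never interrupts the task that is already on the call stack.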
So your JavaScript code itself is single-threaded, but it orchestrates legions of threads! However, we need some patterns to write concurrent code that is performant and readable.

Suppose we are building a media library app in the browser and are writing a function called updateMP3Meta() that will read in an MP3 file, parse out some ID3 metadata (e.g. song title, composer, artist) and update a matching Song record in the database. Assuming the read(), parseMP3() and Song.findByName() functions return Promises, we could implement it like this:

```javascript
let read = (path) => { ... };     // returns a Promise
let parseMP3 = (file) => { ... }; // returns a Promise
let Song = {
  findByName(name) { ... }        // returns a Promise
};

let updateMP3Meta = (path) => {
  return read(path)
    .then(file => {
      return parseMP3(file).then(meta => {
        return Song.findByName(file.name).then(song => {
          Object.assign(song, meta);
          return song.save();
        });
      });
    });
};
```

It does the job, but nested .then() callbacks quickly turn into callback hell and obscure intent… and bugs. We might try using Promise chaining to flatten the callback chain:

```javascript
let updateMP3Meta = (path) => {
  return read(path)
    .then(file => parseMP3(file))
    .then(meta => Song.findByName(file.name))
    .then(song => {
      Object.assign(song, meta);
      return song.save();
    });
};
```
async functions give us local scoping back without descending into callback hell. However, updateMP3Meta() unnecessarily forces some things to run serially. In particular, MP3 parsing and searching the database for a matching Song can actually be done in parallel; but the await operator forces Song.findByName() to run only after parseMP3() finishes. To get the most out of our single-threaded program, we need to invoke JavaScript’s event loop superpowers. We can queue two async operations and wait for both to complete: let updateMP3Meta = (path) => { return read(path) .then(file => { return Promise.all([ parseMP3(file), Song.findByName(file.name) ]); }) .then(([meta, song]) => { Object.assign(song, meta); return song.save(); }); }; We used Promise.all() to wait for concurrent operations to finish, then aggregated the results to update the Song. Promise.all() works just fine for a few concurrent spots, but code quickly devolves when you alternate between chunks of code that can be executed concurrently and others that are serial. This intrinsic ugliness is not much improved with async functions: let updateMP3Meta = async (path) => { let file = await read(path); let metaPromise = parseMP3(file); let songPromise = Song.findByName(file.name); let meta = await metaPromise; let song = await songPromise; Object.assign(song, meta); return song.save(); }; Instead of using an inline await, we used [meta|song]Promise local variables to begin an operation without blocking, then await both promises. While async functions make concurrent code easier to read, there is an underlying structural ugliness: we are manually telling JavaScript what parts can run concurrently, and when it should block for serial code. It’s okay for a spot or two, but when multiple chunks of serial code can be run concurrently, it gets incredibly unruly. We are essentially deriving the evaluation order of a dependency tree… and hardcoding the solution. 
This means “minor” changes, like swapping out a synchronous API for an async one, will cause drastic rewrites. That’s a code smell! To demonstrate this underlying ugliness, let’s try a more complex example. I recently worked on an MP3 importer in JavaScript that involved a fair amount of async work. (Check out “Encore, JavaScript! Create an MP3 Reader with DataViews + TextDecoder” or the parser source code if you’re interested in working with binary data and text encodings.) The main function takes in a File object (from drag-and-drop), loads it into an ArrayBuffer, parses MP3 metadata, computes the MP3’s duration, creates an Album in IndexedDB if one doesn’t already exist, and finally creates a new Song: import parser from 'id3-meta'; import read from './file-reader'; import getDuration from './duration'; import { mapSongMeta, mapAlbumMeta } from './meta'; import importAlbum from './album-importer'; import importSong from './song-importer'; export default async (file) => { // Read the file let buffer = await read(file); // Parse out the ID3 metadata let meta = await parser(file); let songMeta = mapSongMeta(meta); let albumMeta = mapAlbumMeta(meta); // Compute the duration let duration = await getDuration(buffer); // Import the album let albumId = await importAlbum(albumMeta); // Import the song let songId = await importSong({ ...songMeta, albumId, file, duration, meta }); return songId; }; This looks straightforward enough, but we’re forcing some async operations to run sequentially that can be executed concurrently. In particular, we could compute getDuration() at the same time that we parse the MP3 and import a new album. However, both operations will need to finish before invoking importSong(). 
Our first try might look like this: export default async (file) => { // Read the file let buffer = await read(file); // Compute the duration let durationPromise = getDuration(buffer); // Parse out the ID3 metadata let metaPromise = parser(file); let meta = await metaPromise; let songMeta = mapSongMeta(meta); let albumMeta = mapAlbumMeta(meta); // Import the album let albumIdPromise = importAlbum(albumMeta); let duration = await durationPromise; let albumId = await albumIdPromise; // Import the song let songId = await importSong({ ...songMeta, albumId, file, duration, meta }); return songId; }; That took a fair amount of brain tetris to get the order of awaits right: if we hadn’t moved getDuration() up a few lines in the function, we would have created a poor solution since importAlbum() only depends on albumMeta, which only depends on meta. But this solution is still suboptimal! getDuration() depends on buffer, but parser() could be executing at the same time as read(). To get the best solution, we would have to use Promise.all() and .then()s. To solve the underlying problem without evaluating a dependency graph by hand, we need to define groups of serial steps (which execute one-by-one in a blocking fashion), and combine those groups concurrently. What if there was a way to define such a dependency graph that’s readable, doesn’t break closures, doesn’t resort to .then(), and doesn’t require a library? That’s where async IIFEs come in. For every group of serial (dependent) operations, we’ll wrap them up into a micro API called a “task”: let myTask = (async () => { let other = await otherTask; let result = await doCompute(other.thing); return result; })(); Since all async functions return a Promise, the myTask local variable contains a Promise that will resolve to result. I prefer to call these *Task instead of *Promise. Inside the async IIFE, operations are sequential, but outside we aren’t blocking anything. 
Furthermore, inside a task we can wait on other tasks to finish, like otherTask, which could be another async IIFE. Let’s turn the getDuration() section into a task called durationTask: let durationTask = (async () => { let buffer = await readTask; let duration = await getDuration(buffer); return duration; })(); Since these tasks are defined inline, they have access to variables in the outer closure, including other tasks! Let’s refactor the entire importer with async IIFEs, or “tasks”: export default async (file) => { // Read the file let readTask = read(file); // Parse out the ID3 metadata let metaTask = (async () => { let meta = await parser(file); let songMeta = mapSongMeta(meta); let albumMeta = mapAlbumMeta(meta); return { meta, songMeta, albumMeta }; })(); // Import the album let albumImportTask = (async () => { let { albumMeta } = await metaTask; let albumId = await importAlbum(albumMeta); return albumId; })(); // Compute the duration let durationTask = (async () => { let buffer = await readTask; let duration = await getDuration(buffer); return duration; })(); // Import the song let songImportTask = (async () => { let albumId = await albumImportTask; let { meta, songMeta } = await metaTask; let duration = await durationTask; let songId = await importSong({ ...songMeta, albumId, file, duration, meta }); return songId; })(); let songId = await songImportTask; return songId; }; Now reading the file, computing duration, parsing metadata and database querying will automatically run concurrently or serially—we were even able to leave getDuration() in its original spot! By declaring tasks and awaiting them inside other tasks, we defined a dependency graph for the runtime and let it discover the optimal solution for us. 
Suppose we wanted to add another step to the import process, like retrieving album artwork from a web service: // Look up album artwork from a web service let albumArtwork = await fetchAlbumArtwork(albumMeta); Prior to the async IIFE refactor, adding this feature would have triggered a lot of changes throughout the file, but now we can add it with just a small isolated chunk of additions! +// Look up album artwork from a web service +let artworkTask = (async () => { + let { albumMeta } = await metaTask; + let artwork = await fetchAlbumArtwork(albumMeta); + return artwork; +})(); // Import the album let albumImportTask = (async () => { + let artwork = await artworkTask; let { albumMeta } = await metaTask; - let albumId = await importAlbum(albumMeta); + let albumId = await importAlbum({ artwork, ...albumMeta }); return albumId; })(); Tasks are declarative, so managing concurrent vs. serial execution order becomes an “execution detail” instead of an “implementation detail”! What if we revamped our parser() function so it could synchronously parse an ArrayBuffer instead of a File object? Before this would have triggered a cascade of line reordering, but now the change is trivial: // Parse out the ID3 metadata let metaTask = (async () => { + let buffer = await readTask; - let meta = await parser(file); + let meta = parser(buffer); let songMeta = mapSongMeta(meta); let albumMeta = mapAlbumMeta(meta); return { meta, songMeta, albumMeta }; })(); It’s tempting to take shortcuts and solve the dependency graph yourself. For example, after our changes to parser() above, all of the tasks depend on the file being read in, so you could block the entire function with await read(file) to save a few lines. However, these areas are likely to change, and organizing into serial tasks provides other benefits: these micro APIs make it is easier to read, debug, extract and reason about a complex chunk of concurrency. 
Since we wrapped these tasks into async IIFEs, why not extract them into dedicated functions? For the same reason we couldn’t use Promise chaining: we have to give up nested closures and lexically scoped variables. Extracting tasks into top level functions also begs a design question: if all these operations were synchronous, would we still perform this extraction? If you find yourself extracting async functions (as we did with importAlbum() and importSong()) because of their complexity or reusability, bravo! But ultimately, design principles for breaking down functions should be independent of whether the code is async vs. sync. Also, splitting functions or moving them too far from their context makes code harder to grasp, as Josh discusses in his post about extracting methods. Functional programming is well-suited to multithreading because it minimizes shared state and opts for local variables as the de facto state mechanism. And thanks to JavaScript’s event loop, we can deal with shared state by merging results inside a single thread. Next time, we’ll examine functional patterns for throttling concurrency on a single thread, then wrap up with techniques for efficiently managing a cluster of Web Workers… without worrying a shred about “thread safety.” Interested in learning next-generation JavaScript for the web platform? Want to leverage functional patterns to build complex Single Page Apps? Join us for a Front-End Essentials bootcamp at one of our Ranches, or we can come to you through our corporate training program. Jay Hayes
https://www.bignerdranch.com/blog/cross-stitching-elegant-concurrency-patterns-for-javascript/
How To Make An Infinite Game In Python 3

Hey there! In this tutorial, I'll show you how to make an infinite game. You will need to know some things about Python before you get started, though. So, without further ado, let's get started! (oh god this phrase is used too much)

Imports

For this tutorial, we'll only import one thing: random. Type import random in the program code and you should be good to go!

The Variables

Now, make a list storing two values: the strings left and right. Then, create a variable storing the score of your player, and set it to zero.

```python
left_or_right = ["left", "right"]
score = 0
```

The REAL Code

Ok, so now we can get on to the actual code. First, we make a while loop that runs forever using while True:. Then we put an input so the user can type left or right in, and we assign a variable which holds a random.choice value to pick left or right randomly.

```python
while True:
    inpt = input("Choose left or right: ")
    choosed = random.choice(left_or_right)
```

Now we add a few if statements to finish. The code that I'm going to show you means: if choosed is the same as the input given by the player (even if the capitalization differs), then the hallway is safe and your score goes up by one; else, say Game Over, display your score, and break from the loop.

```python
while True:
    inpt = input("Choose left or right: ")
    choosed = random.choice(left_or_right)
    if inpt.lower() == choosed:
        print("The hallway is safe!")
        score += 1
    else:
        print("Game over!")
        print("Score: " + str(score))
        break
```

The str() is VERY important. If you don't wrap it around the score variable in the last print statement, Python will raise a TypeError, because you can't concatenate a string and an integer.

If you combine all this code together, you get a simple infinite game! You can always add more things to your infinite game program. Thanks for reading! (The completed code is down below.)

```python
import random

left_or_right = ["left", "right"]
score = 0

while True:
    inpt = input("Choose left or right: ")
    choosed = random.choice(left_or_right)
    if inpt.lower() == choosed:
        print("The hallway is safe!")
        score += 1
    else:
        print("Game over!")
        print("Score: " + str(score))
        break
```
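If you want to unit-test the win/lose rule, one option is to pull the comparison out into a pure function and make the random source injectable. This refactor (the helper names here are my own, not part of the original tutorial) keeps the loop thin:

```python
import random

left_or_right = ["left", "right"]


def hallway_is_safe(guess, actual):
    """Return True when the player's guess matches the randomly
    chosen direction, ignoring capitalization."""
    return guess.lower() == actual


def play(get_guess, rng=random):
    """Run rounds until the player picks the wrong hallway;
    return the final score."""
    score = 0
    while True:
        actual = rng.choice(left_or_right)
        if hallway_is_safe(get_guess(), actual):
            score += 1
        else:
            return score


# The interactive game just feeds input() into play():
# print("Score: " + str(play(lambda: input("Choose left or right: "))))
```

Because the random source is injectable, a test can seed it and drive the game with canned guesses instead of real keyboard input.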
https://replit.com/talk/learn/How-To-Make-An-Infinite-Game-In-Python-3/60769
System.Win32.Error.Foreign

Description

This module provides functions which can be used as drop-in replacements for Win32 when writing wrappers to foreign imports. You will likely need to import modules from Win32 as well. To avoid accidentally calling the standard error handling functions, it's a good idea to hide a few names:

import qualified System.Win32.Error.Foreign as E
import System.Win32 hiding (failIfFalse_, failIf, failUnlessSuccess, failWith)

Handling error conditions in Windows revolves around a thread-local global variable representing the most recent error condition. Functions indicate that an error occurred in various ways. The C++ programmer will observe that a function failed, and immediately call GetLastError to retrieve details on the possible cause or to get a localized error message which can be relayed to a human in some way. There are some cases where an error code may mean different things depending on varying context, but in general these codes are globally unique. Microsoft documents which error codes may be expected for any given function.

When working with functions exported by Win32, error conditions are dealt with using the IOError exception type. Most native Win32 functions return an error code which can be used to determine whether something went wrong during their execution. By convention these functions are all named something of the form "c_DoSomething", where DoSomething matches the name given by Microsoft. A Haskell wrapper function named "doSomething" will typically, among other things, check this error code. Based on its value the operating system will be queried for additional error information, and a Haskell exception will be thrown.

Consider the createFile function used to open existing files which may or may not actually exist:

createFile "c:\\nofilehere.txt" gENERIC_READ fILE_SHARE_NONE Nothing oPEN_EXISTING 0 Nothing

If no file by that name exists the underlying c_CreateFile call will return iNVALID_HANDLE_VALUE.
This will result in an IOError exception being thrown with a String value indicating the function and file name. Internally, the IOError will also contain the error code, which will be converted to a general Haskell value. The Win32-errors package works similarly. A (simplified) wrapper around c_CreateFile could be written as follows. Source code from the Win32 package often provides a good starting point:

createFile name access mode =
    withTString name $ \c_name ->
        E.failIf (== E.toDWORD E.InvalidHandle) "CreateFile" $
            c_CreateFile c_name access fILE_SHARE_NONE nullPtr mode 0 nullPtr

Synopsis

Documentation

failIf :: (a -> Bool) -> Text -> IO a -> IO a Source

Copied from the Win32 package. Use this to throw a Win32 exception when an action returns a value satisfying the given predicate. The exception thrown will depend on a thread-local global error condition. The supplied Text value should be set to the human-friendly name of the action that triggered the error.

failIfFalse_ :: Text -> IO Bool -> IO () Source

This function mirrors the Win32 package's failIfFalse_ function.

failIfNull :: Text -> IO (Ptr a) -> IO (Ptr a) Source

This function mirrors the Win32 package's failIfNull function.

failUnlessSuccess :: Text -> IO DWORD -> IO () Source

Perform the supplied action, and throw a Win32Exception exception if the return code is anything other than Success. The supplied action returns a DWORD instead of an ErrCode so that foreign imports can be used more conveniently.

failWith :: Text -> ErrCode -> IO a Source

Throw a Win32Exception exception for the given function name and error code.

errorWin :: Text -> IO a Source

Windows maintains a thread-local value representing the previously triggered error code. Calling errorWin will look up the value, and throw a Win32Exception exception. The supplied Text argument should be set to the name of the function which triggered the error condition.
Calling this action when no error has occurred (0x00000000 -- ERROR_SUCCESS) will result in an exception being thrown for the Success error code.
https://hackage.haskell.org/package/Win32-errors-0.2.1/docs/System-Win32-Error-Foreign.html
Unity is currently one of the most popular choices when it comes to building virtual reality applications. Google Cardboard is a great first choice for getting started in the field – spending $20 on a VR headset rather than potentially hundreds is a much safer bet for getting started (and there are plenty of people with smartphones who could potentially enjoy your app with an equally smaller upfront cost!). I’ve previously covered how to build VR for the web today via JS, along with demos on Visualizing a Twitter Stream in VR and JS and plenty of others. For those who prefer Unity development, it’s about time I evened these numbers and wrote some more on VR development in Unity. For some rather strange reason, I couldn’t find many in-depth or varied guides to working with the Google Cardboard Unity SDK from scratch, so I did what any SitePoint writer would do. I put one together!

What You’ll Need

To follow along, you’ll need the following bits of software and hardware:

- Unity – At least v4.5 is recommended. I’m using Unity v5 Pro for this guide. Unity v5 Personal apparently works a-okay too.
- Windows or Mac – I’ll be using Windows in this tutorial for a change (mainly because most Unity devs seem to be on Windows, so it’s only fair I follow suit for the article!)
- Cardboard SDK for Unity – That link will provide you with the GitHub repo. I’ll explain this and setting up Java and Android in this article.
- Java SE SDK – This will be required to install the Android SDK.
- Android SDK – If you are downloading this for the first time, you only need the “SDK Tools Only” bits of the SDK. I’ll explain this soon.
- A Google Cardboard style headset
- An Android device to put inside that headset – it is possible to export this to work with iOS, but I won’t be covering that process as I don’t have an iPhone!
- Relatively basic knowledge of Unity – I’ll cover a lot of the absolute basics for those new to it all, but having a small amount of knowledge will help!

The Code

Want to jump straight into the code? My sample project is up at GitHub for you to do whatever you’d like!

Preparing the Android SDK

In order to create an Android mobile app, we’ll need the Android SDK working on our system. For seasoned Android developers who’ve already got this running, feel free to skip to the next section! To begin the Android SDK process, you first will need to check you’ve got the Java SE SDK installed on your system. If you aren’t sure, try skipping to the Android SDK install step first. The installation will complain if you don’t have Java!

Which Java?

If your system is Java-free, it can be a bit confusing to know which of the various Java options you’ll need. Head to the Java SE download page and choose the JDK download option:

Once you’ve got that downloaded, run through the very typical install process that looks a bit like so:

If you are new to Android development, you’ll need to download the Android SDK and install it. You only need the “SDK Tools Only” bit of the SDK, which is available at their Other Download Options page. This provides the bare minimum. You could download the whole Android Studio package if you’re looking to do a lot of Android app development in future. If you are sticking with the “SDK Tools Only” option, head to that page and choose the version for your operating system:

Run that installation file you’ve downloaded. If you weren’t sure if you had Java installed, the installation should confirm this for you:

Follow the rest of the prompts (they are pretty standard installation prompts) and install the SDK to your system. When it is done, keep the checkbox ticked before you click Finish; that way you can load the SDK Manager:

The SDK Manager should appear and look like so:

It is likely to have a bunch of checkboxes already selected for you.
Let’s reduce it to just what we’ll need for now. Underneath the “Tools” folder, you’ll want the following selected:

- Android SDK Tools – This should be at the top of the list.
- Android SDK Platform-tools – This will be right after the SDK Tools.
- Android SDK Build-tools – You should only need the latest revision of this (you can see the revisions under the “Rev.” column)

That should look like so:

Then, within the latest Android API folder (whichever is highest, at the time of writing that’s “Android 6.0 (API 23)”), choose the following:

- SDK Platform – You’ll need this!
- A system image – You can choose either an ARM or an Intel system image to allow you to emulate that Android system on your computer. If you’re looking to do all testing on your physical Android device, you can skip this.
- Google APIs – This will let us use any Google APIs with our app. This doesn’t include Google Cardboard – I’ve just included it for convenience, you can likely skip this one too.

That should look like the following:

Click “Install X packages” in the bottom right corner to begin the installation process. You’ll need to accept the licence before it will let you install. Once that is installed, you should have everything you need on the Android side of things!

Creating A Cardboard Empowered Unity Project

From the Cardboard SDK GitHub repo linked above, all you’ll really need to download is the Cardboard SDK For Unity .unitypackage file here. Once that has downloaded, we’re ready to begin our actual Unity project. Open up Unity and do the obvious first step – create a new project. Choose 3D and give your project a name. Don’t worry about adding asset packages on this screen, we’ll be adding our custom one in the next step. Select “Create Project” once you’re ready to begin:

Next, go to Assets > Import Package > Custom Package… and find your Cardboard SDK .unitypackage file you downloaded earlier.
If you are using Unity 5, you won’t need the legacy folders and can untick those:

When that is successful, your new Cardboard SDK assets should appear in your project inside a “Cardboard” folder. Within the Cardboard SDK folder, you’ll find four subfolders – “Editor”, “Prefabs”, “Resources” and “Scripts”.

Inside the “Prefabs” folder, you’ll find a Prefab called “CardboardMain.prefab”. Drag that into your scene and raise it up by 2 on the Y axis (otherwise the user will feel like they’re either tiny or super short!):

Open up the “CardboardMain” prefab and you’ll find the “Main Camera” within it. This is our two stereoscopic style cameras working in unison to display the scene in wondrous VR when looking at it through a Google Cardboard headset. In our “Main Camera” object, we’ll want to add a “Physics Raycaster” component. To do so, click “Add Component” whilst you’ve got the “Main Camera” open. I typically find it by typing in “raycaster” in the window that pops up. We’ll need a Physics Raycaster to be able to interact with elements by looking at them.

Next, you can add in an icon to show when the user is looking at an object. I used a free icon in the Unity store from the 64 Flat Game Icons pack by Office57. You could create your own sprite for this too if you’d prefer. Get a sprite from somewhere and import it into your project. Create an empty game object within the “Head” object in “CardboardMain”. I’ve renamed mine as “Target”. Then drag your sprite inside this object. That should be all you need to enable plenty of very useful Cardboard view tracking and controls.

Responding To Cardboard Events

Now we have our CardboardMain view created, we’ll need a few things in the scene to interact with. For my demo, I used the Magic Lamp by 3dFoin and SkySphere Volume 1 by the Hedgehog Team from the Unity Store.
You can use any objects you’d like; drag the lamp into your scene. When importing the SkySphere, feel free to untick the “Examples” folder to keep things clean in our project.

Place your lamp at position {x: 0, y: 1.5, z: 2} with a rotation of {x: -90, y: 90, z: 0} and scale it by 0.5 on all axes so it’s not too big in comparison to our scene. Then drag in the “Skyball_WithoutCap” from the SkySphere_V1 > Meshes folder. It should be placed at {0,0,0} with an X rotation of -90.

In order to interact with the Cardboard SDK on this object, you’ll need some code on that object. Our initial bit of functionality will focus around causing events to occur when we look at the lamp and then look away. Each time the user looks away from the lamp, the sky will change texture:

using UnityEngine;
using UnityEngine.EventSystems;
using System.Collections;

public class Lamp : MonoBehaviour {
    public Renderer skybox;
    public Material sky1;
    public Material sky2;
    public Material sky3;
    public Material sky4;

    private int currentSky = 0;

    void Start () {
    }

    public void stareAtLamp() {
        switch (currentSky) {
            case 0:
                skybox.material = sky2;
                currentSky++;
                break;
            case 1:
                skybox.material = sky3;
                currentSky++;
                break;
            case 2:
                skybox.material = sky4;
                currentSky++;
                break;
            case 3:
                skybox.material = sky1;
                currentSky = 0;
                break;
        }
    }
}

The code above is pretty straightforward stuff: we set up a skybox variable, four materials for that skybox and an integer to store the number of the current skybox. Then, we’ve got a public method called stareAtLamp() which changes our skybox’s material to the next one in the series of materials, or takes it back to the first material if we’re at the last one.

To add this code to our project, select your lamp and click “Add Component”. Type in “script” and you should find “New Script…”. Click that and call it “Lamp”. Put in our code from above. When your code is saved, those public variables will be available as options for your script.
Drag your skybox object from the Hierarchy of your project into the skybox variable. Find the materials for your SkySphere from the Assets > SkySphere_V1 > SkySphere > Materials > With_Planet folder (or whatever other materials you’d prefer) and drag a few of them into the four sky materials.

We will need one more component within our lamp object – an “Event Trigger”. This is what will actually call the stareAtLamp() function we coded earlier. Head to “Add Component” with your lamp object selected and find the “Event Trigger” component. Then once it is in your lamp object, click “Add New Event Type” and choose “PointerExit”:

From here, you’ll now have an area to add in responses to that “PointerExit” event. Click the “+” icon on the bottom left and then drag in your lamp object to the box that appears underneath “Runtime Only”:

This allows us to access the lamp’s functions as possible responses. Click on the dropdown that says “No function”, find “Lamp” and then choose the stareAtLamp() function.

In order for the event trigger to work, our scene needs an “Event System”. Head back to your scene and create one from UI > Event System. You can remove the “Standalone Input Module” from the Event System if you’d like. The main important component we do need is the “Gaze Input Module” from the Cardboard SDK. You can add this one by clicking “Add Component” and going to Scripts > GazeInputModule:

Now we define our target sprite as the one that appears when we are using the GazeInputModule and are looking at an object. Drag the “Target” object (or whatever you chose to call it within your CardboardMain object) into the cursor setting for the component.

In order to pick up when we are looking at this lamp, we’ll need to add a collider to it. To do so, click “Add Component” and type in “collider” to find “Sphere Collider”.
Add that in. Then, if necessary, adjust the position and radius of the sphere collider so that the green sphere around the lamp covers its shape.

We are almost ready to test out our VR app! Delete the “MainCamera” object in your scene if you haven’t already instinctively removed it – the “CardboardMain” object is our new camera. Before we test, add a simple floor to the scene too. Create a “Plane” object from 3D Object > Plane and ensure it is positioned at {x: 0, y: 0, z: 0}. Scale that plane to {x: 2, y: 1, z: 2} so that it extends out to the whole skybox.

We should have everything in place to be able to try out our VR app and see how well it plays. To do so, click the Play button at the top of Unity and it should get the app running for you. Two important controls to remember are:

- To simulate looking around your scene, hold down the Alt key while moving your mouse.
- To simulate tilting your head, hold down the Shift key while moving your mouse.

You should see your scene running nicely, showing your lamp. To see it a bit bigger, there’s a tiny icon on the top right. Click that and choose “Maximise”. You can also click “Maximise on Play” to request it does this automatically when you are testing. You should have your scene running maximised now.

If you look at the lamp, your icon should appear, indicating that you are indeed looking at the lamp. If you look away, the sky should change like magic! Excellent, that’s our first use of the Cardboard SDK. However, there’s one more function we haven’t taken advantage of on the Cardboard headset – our clicker on the side. Here’s how to engage that.

Using The Clicker

To add click events from the Cardboard headset, we’ll need to add a new event handler. First, we will set up what that event should actually do. For me, the lamp just isn’t quite magical enough. I’m going to include a particle system that is set off any time the viewer clicks the lamp.
To do so, open up the lamp once more and click “Add Component”, then type in “particle”. You should find “Particle System”. Add this onto your lamp. You should now see white particles floating out of it.

You can play around with the Particle System settings to your own preferences and see what you think looks the coolest! I set the color to be more of a yellow, changed the duration to 10 and set them to go in a random direction via the “Shape” section. One setting you’ll need to change so that the particles only appear on click is the “Rate” within “Emission” – set it to 0.

I also ended up using a “Gravity Modifier” of 1, which made the particles look a bit cooler as they fell back down over the lamp. Give that a try!

Within the code, we’ll add a new function called rubLamp() that emits a burst of particles from our Particle System:

public void rubLamp() {
    GetComponent<ParticleSystem>().Emit(10);
}

Then, we go back to our lamp object’s “Event Trigger” component and add one for “Pointer Click”. We then click and add our rubLamp() function as the response.

Now if we play our scene and click that lamp, we’ll have particles bursting from it. Quite neat! Now we definitely want to get it onto our phone so that we can try it within a Google Cardboard headset.

In Unity, go to File > Build Settings. In the screen that appears, choose “Android” from the list and then select “Player Settings”. The settings will appear on the right (you might need to move the “Build Settings” window to the left a little to see it all). Under Resolution and Presentation, change “Default Orientation” to “Landscape Left”.

Then underneath that, open up the “Other Settings” section and update the bundle identifier to include your company name and the app’s name – com.companyName.appName. I used com.SitePoint.GreatestVRProjectInTheWorld. You should now be ready to go!
Connect up your Android phone via USB to your computer and click “Build and Run” in the “Build Settings” window. Choose where to save your apk file and give it a name. Unity should then run through and do everything else for you. It will place the app on your phone and run it. Once the app is on your phone, disconnect it from your computer, put it into your Google Cardboard and enjoy your VR creation!

Working with Google Cardboard and Unity is surprisingly straightforward! From this initial demo, you’re now empowered with all the capabilities of Unity, combined with the capabilities of a whole portable VR platform! A little while back I covered calling Web APIs in Unity – the Cardboard SDK integrates well into that whole process, so you could have an IoT enabled VR Unity app if you combine the two techniques!

Feel free to use this code tutorial as the base for any Cardboard VR project you build in future! If you make something with it, please share it in the comments or get in touch with me on Twitter (@thatpatrickguy). I’d love to check it out! If you’re looking for other links and sample projects to guide you in VR and Unity development, I’ve got a set of curated links that might be perfect for you! Head over to Dev Diner and check out my VR with Unity Developer Guide, full of links to resources around the web. If you’ve got other great resources I don’t have listed – please let me know too!

Patrick's latest venture is online at DevDiner.com, a site for developers looking to get involved in emerging tech. He is a SitePoint contributing editor focused on exploring and sharing the possibilities of new technology such as the Internet of Things, virtual/augmented reality and wearables. He is an instructor at SitePoint Premium and O'Reilly, a Meta Pioneer and freelance web developer who loves every opportunity to tinker with something new in a tech demo.
http://m.dlxedu.com/m/detail/20/196023.html
Program Arcade Games With Python And Pygame

Chapter 12: Introduction to Classes

Classes and objects are very powerful programming tools. They make programming easier. In fact, you are already familiar with the concept of classes and objects. A class is a “classification” of an object. Like “person” or “image.” An object is a particular instance of a class. Like “Mary” is an instance of “Person.” Objects have attributes, such as a person's name, height, and age. Objects also have methods. Methods define what an object can do, like run, jump, or sit.

12.1 Why Learn About Classes?

Each character in an adventure game needs data: a name, location, strength, are they raising their arm, what direction they are headed, etc. Plus those characters do things. They run, jump, hit, and talk. Without classes, our Python code to store this data might look like:

name = "Link"
sex = "Male"
max_hit_points = 50
current_hit_points = 50

In order to do anything with this character, we'll need to pass that data to a function:

def display_character(name, sex, max_hit_points, current_hit_points):
    print(name, sex, max_hit_points, current_hit_points)

Now imagine creating a program that has a set of variables like that for each character, monster, and item in our game. Then we need to create functions that work with those items. We've now waded into a quagmire of data. All of a sudden this doesn't sound like fun at all. But wait, it gets worse! As our game expands, we may need to add new fields to describe our character. In this case we've added max_speed:

name = "Link"
sex = "Male"
max_hit_points = 50
current_hit_points = 50
max_speed = 10

def display_character(name, sex, max_hit_points, current_hit_points, max_speed):
    print(name, sex, max_hit_points, current_hit_points)

In the example above, there is only one function. But in a large video game, we might have hundreds of functions that deal with the main character.
Adding a new field to help describe what a character has and can do would require us to go through each one of those functions and add it to the parameter list. That would be a lot of work. And perhaps we need to add max_speed to different types of characters like monsters. There needs to be a better way. Somehow our program needs to package up those data fields so they can be managed easily.

12.2 Defining and Creating Simple Classes

A better way to manage multiple data attributes is to define a structure that has all of the information. Then we can give that “grouping” of information a name, like Character or Address. This can be easily done in Python and any other modern language by using a class. For example, we can define a class representing a character in a game:

class Character():
    """ This is a class that represents the main character in a game. """
    def __init__(self):
        """ This is a method that sets up the variables in the object. """
        self.name = "Link"
        self.sex = "Male"
        self.max_hit_points = 50
        self.current_hit_points = 50
        self.max_speed = 10
        self.armor_amount = 8

Here's another example; we define a class to hold all the fields for an address:

class Address():
    """ Hold all the fields for a mailing address. """
    def __init__(self):
        """ Set up the address fields. """
        self.name = ""
        self.line1 = ""
        self.line2 = ""
        self.city = ""
        self.state = ""
        self.zip = ""

In the code above, Address is the class name. The variables in the class, such as name and city, are called attributes or fields. (Note the similarities and differences between declaring a class and declaring a function.) Unlike functions and variables, class names should begin with an upper case letter. While it is possible to begin a class with a lower case letter, it is not considered good practice. The def __init__(self): is a special function called a constructor that is run automatically when the class is created. We'll discuss the constructor more in a bit. The self. is kind of like the pronoun my.
When inside the class Address we are talking about my name, my city, etc. We don't want to use self. outside of the class definition for Address to refer to an Address field. Why? Because just like the pronoun “my,” it means someone totally different when said by a different person!

To better visualize classes and how they relate, programmers often make diagrams. A diagram for the Address class would look like Figure 12.1. See how the class name is on top with the name of each attribute listed below. To the right of each attribute is the data type, such as string or integer.

The class code defines a class but it does not actually create an instance of one. The code told the computer what fields an address has and what the initial default values will be. We don't actually have an address yet though. We can define a class without creating one just like we can define a function without calling it. To create a class and set the fields, look at the example below:

# Create an address
home_address = Address()

# Set the fields in the address
home_address.name = "John Smith"
home_address.line1 = "701 N. C Street"
home_address.line2 = "Carver Science Building"
home_address.city = "Indianola"
home_address.state = "IA"
home_address.zip = "50125"

An instance of the address class is created in line 2. Note how the class Address name is used, followed by parentheses. The variable name can be anything that follows normal naming rules. To set the fields in the class, a program must use the dot operator. This operator is the period that is between the home_address and the field name. See how lines 5-10 use the dot operator to set each field value.

A very common mistake when working with classes is to forget to specify which instance of the class you want to work with. If only one address is created, it is natural to assume the computer will know to use that address you are talking about. This is not the case however. See the example below:

class Address():
    def __init__(self):
        self.name = ""
        self.line1 = ""
        self.line2 = ""
        self.city = ""
        self.state = ""
        self.zip = ""

# Create an address
my_address = Address()

# Alert! This does not set the address's name!
name = "Dr. Craven"

# This doesn't set the name for the address either
Address.name = "Dr. Craven"

# This does work:
my_address.name = "Dr. Craven"

A second address can be created and fields from both instances may be used. See the example below:

class Address():
    def __init__(self):
        self.name = ""
        self.line1 = ""
        self.line2 = ""
        self.city = ""
        self.state = ""
        self.zip = ""

# Create an address
home_address = Address()

# Set the fields in the address
home_address.name = "John Smith"
home_address.line1 = "701 N. C Street"
home_address.line2 = "Carver Science Building"
home_address.city = "Indianola"
home_address.state = "IA"
home_address.zip = "50125"

# Create another address
vacation_home_address = Address()

# Set the fields in the address
vacation_home_address.name = "John Smith"
vacation_home_address.line1 = "1122 Main Street"
vacation_home_address.line2 = ""
vacation_home_address.city = "Panama City Beach"
vacation_home_address.state = "FL"
vacation_home_address.zip = "32407"

print("The client's main home is in " + home_address.city)
print("His vacation home is in " + vacation_home_address.city)

Line 11 creates the first instance of Address; line 22 creates the second instance. The variable home_address points to the first instance and vacation_home_address points to the second. Lines 25-30 set the fields in this new class instance. Line 32 prints the city for the home address, because home_address appears before the dot operator. Line 33 prints the vacation address because vacation_home_address appears before the dot operator.

In the example, Address is called the class because it defines a new classification for a data object. The variables home_address and vacation_home_address refer to objects because they refer to actual instances of the class Address. A simple definition of an object is that it is an instance of a class. Like “Bob” and “Nancy” are instances of a Human class. By using PythonTutor we can visualize the execution of the code (see below). There are three variables in play. One points to the class definition of Address. The other two variables point to the different address objects and their data.

Putting lots of data fields into a class makes it easy to pass data in and out of a function. In the code below, the function takes in an address as a parameter and prints it out on the screen.
It is not necessary to pass parameters for each field of the address.

# Print an address to the screen
def print_address(address):
    print(address.name)
    # If there is a line1 in the address, print it
    if len(address.line1) > 0:
        print(address.line1)
    # If there is a line2 in the address, print it
    if len(address.line2) > 0:
        print(address.line2)
    print(address.city + ", " + address.state + " " + address.zip)

print_address(home_address)
print()
print_address(vacation_home_address)

12.3 Adding Methods to Classes

In addition to attributes, classes may have methods. A method is a function that exists inside of a class. Expanding the earlier example of a Dog class from review problem 1 above, the code below adds a method for a dog barking.

class Dog():
    def __init__(self):
        self.age = 0
        self.name = ""
        self.weight = 0

    def bark(self):
        print("Woof")

The method definition is contained in lines 7-8 above. Method definitions in a class look almost exactly like function definitions. The big difference is the addition of a parameter self on line 7. The first parameter of any method in a class must be self. This parameter is required even if the function does not use it. Here are the important items to keep in mind when creating methods for classes:

- Attributes should be listed first, methods after.
- The first parameter of any method must be self.
- Method definitions are indented exactly one tab stop.

Methods may be called in a manner similar to referencing attributes from an object. See the example code below.

my_dog = Dog()

my_dog.name = "Spot"
my_dog.weight = 20
my_dog.age = 3

my_dog.bark()

Line 1 creates the dog. Lines 3-5 set the attributes of the object. Line 7 calls the bark function. Note that even though the bark function has one parameter, self, the call does not pass in anything. This is because the first parameter is assumed to be a reference to the dog object itself.
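As a quick illustration of why self is required even when unused (my own example, not from the book): if you leave self out, the call my_dog.bark() still passes the object in as the first argument, and the call fails.

```python
class BadDog():
    def bark():  # Missing self!
        print("Woof")

bad_dog = BadDog()
try:
    # Python still passes bad_dog as the first argument...
    bad_dog.bark()
except TypeError as error:
    # ...but bark() was declared to take no arguments at all.
    print("TypeError:", error)
```

The resulting error, along the lines of "bark() takes 0 positional arguments but 1 was given", is a classic sign of a forgotten self parameter.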
Behind the scenes, Python makes a call that looks like:

# Example, not actually legal
Dog.bark(my_dog)

If the bark function needs to make reference to any of the attributes, then it does so using the self reference variable. For example, we can change the Dog class so that when the dog barks, it also prints out the dog's name. In the code below, the name attribute is accessed using a dot operator and the self reference.

def bark(self):
    print("Woof says", self.name)

Attributes are adjectives, and methods are verbs. The drawing for the class would look like Figure 12.3.

12.3.1 Example: Ball Class

This example code could be used in Python/Pygame to draw a ball. Having all the parameters contained in a class makes data management easier. The diagram for the Ball class is shown in Figure 12.4.

class Ball():
    def __init__(self):
        # --- Class Attributes ---
        # Ball position
        self.x = 0
        self.y = 0

        # Ball's vector
        self.change_x = 0
        self.change_y = 0

        # Ball size
        self.size = 10

        # Ball color
        self.color = [255, 255, 255]

    # --- Class Methods ---
    def move(self):
        self.x += self.change_x
        self.y += self.change_y

    def draw(self, screen):
        pygame.draw.circle(screen, self.color, [self.x, self.y], self.size)

Below is the code that would go ahead of the main program loop to create a ball and set its attributes:

theBall = Ball()
theBall.x = 100
theBall.y = 100
theBall.change_x = 2
theBall.change_y = 1
theBall.color = [255, 0, 0]

This code would go inside the main loop to move and draw the ball:

theBall.move()
theBall.draw(screen)

12.4 References

Here's where we separate the true programmers from the want-to-be's: understanding class references.
Take a look at the following code:

class Person():
    def __init__(self):
        self.name = ""
        self.money = 0

bob = Person()
bob.name = "Bob"
bob.money = 100

nancy = Person()
nancy.name = "Nancy"

print(bob.name, "has", bob.money, "dollars.")
print(nancy.name, "has", nancy.money, "dollars.")

The code above creates two instances of the Person() class, and we can visualize the two objects in Figure 12.5. The code above has nothing new. But the code below does:

class Person():
    def __init__(self):
        self.name = ""
        self.money = 0

bob = Person()
bob.name = "Bob"
bob.money = 100

nancy = bob
nancy.name = "Nancy"

print(bob.name, "has", bob.money, "dollars.")
print(nancy.name, "has", nancy.money, "dollars.")

See the difference on line 10? A common misconception when working with objects is to assume that the variable bob is the Person object. This is not the case. The variable bob is a reference to the Person object. That is, it stores the memory address of where the object is, not the object itself. If bob actually was the object, then line 9 could create a copy of the object and there would be two objects in existence. The output of the program would show both Bob and Nancy having 100 dollars. But when run, the program outputs the following instead:

Nancy has 100 dollars.
Nancy has 100 dollars.

What bob stores is a reference to the object. Besides reference, one may call this an address, a pointer, or a handle. A reference is an address in computer memory for where the object is stored. This address is a hexadecimal number which, if printed out, might look something like 0x1e504. When line 9 is run, the address is copied rather than the entire object the address points to. See Figure 12.6. We can also run this in PythonTutor to see how both of the variables point to the same object.

12.4.1 Functions and References

Look at the code example below. Line 1 creates a function that takes in a number as a parameter.
The variable money is a variable that contains a copy of the data that was passed in. Adding 100 to that number does not change the number that was stored in bob.money on line 11. Thus, the print statement on line 14 prints out 100, and not 200. def give_money1(money): money += 100 class Person(): def __init__(self): self.name = "" self.money = 0 bob = Person() bob.name = "Bob" bob.money = 100 give_money1(bob.money) print(bob.money) Running on PythonTutor we see that there are two instances of the money variable. One is a copy and local to the give_money1 function. Look at the additional code below. This code does cause bob.money to increase and the print statement to print 200. def give_money2(person): person.money += 100 give_money2(bob) print(bob.money) Why is this? Because person contains a copy of the memory address of the object, not the actual object itself. One can think of it as a bank account number. The function has a copy of the bank account number, not a copy of the whole bank account. So using the copy of the bank account number to deposit 100 dollars causes Bob's bank account balance to go up. Arrays work the same way. A function that takes in an array (list) as a parameter and modifies values in that array will be modifying the same array that the calling code created. The address of the array is copied, not the entire array. 12.4.2 Review Questions - Create a class called Cat. Give it attributes for name, color, and weight. Give it a method called meow. - Create an instance of the cat class, set the attributes, and call the meow method. - Create a class called Monster. Give it an attribute for name and an integer attribute for health. Create a method called decrease_health that takes in a parameter amount and decreases the health by that much. Inside that method, print that the animal died if health goes below zero. 12.5 Constructors There's a terrible problem with our class for Dog listed below. 
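The point above about lists holds in exactly the same way: a function receives a copy of the list's address, not a copy of the list, so in-place changes are visible to the caller. A minimal check (the function and variable names here are made up for illustration):

```python
def give_bonus(balances):
    # balances refers to the caller's list; changes happen in place
    for i in range(len(balances)):
        balances[i] += 100


accounts = [100, 250, 0]
give_bonus(accounts)
# accounts is now [200, 350, 100] -- the caller sees the change
```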
When we create a dog, by default the dog has no name. Dogs should have names! We should not allow dogs to be born and then never be given a name. Yet the code below allows this to happen, and that dog will never have a name.

class Dog():
    def __init__(self):
        self.name = ""

my_dog = Dog()

Python doesn't want this to happen. That's why Python classes have a special function that is called any time an instance of that class is created. By adding a function called a constructor, a programmer can add code that is automatically run each time an instance of the class is created. See the example constructor code below:

class Dog():
    def __init__(self):
        """ Constructor. Called when creating an object of this type. """
        self.name = ""
        print("A new dog is born!")

# This creates the dog
my_dog = Dog()

The constructor starts on line 2. It must be named __init__. There are two underscores before the init, and two underscores after. A common mistake is to only use one. The constructor must take in self as the first parameter just like other methods in a class. When the program is run, it will print:

A new dog is born!

When a Dog object is created on line 8, the __init__ function is automatically called and the message is printed to the screen.

12.5.1 Avoid This Mistake

Put everything for a method into just one definition. Don't define it twice. For example:

# Wrong:
class Dog():
    def __init__(self):
        self.age = 0
        self.name = ""
        self.weight = 0

    def __init__(self):
        print("New dog!")

The computer will just ignore the first __init__ and go with the last definition. Instead do this:

# Correct:
class Dog():
    def __init__(self):
        self.age = 0
        self.name = ""
        self.weight = 0
        print("New dog!")

A constructor can be used for initializing and setting data for the object. The example Dog class above still allows the name attribute to be left blank after the creation of the dog object. How do we keep this from happening? Many objects need to have values right when they are created.
The constructor function can be used to make this happen. See the code below: class Dog(): def __init__(self, new_name): """ Constructor. """ self.name = new_name # This creates the dog my_dog = Dog("Spot") # Print the name to verify it was set print(my_dog.name) # This line will give an error because # a name is not passed in. her_dog = Dog() On line 3 the constructor function now has an additional parameter named new_name. The value of this parameter is used to set the name attribute in the Dog class on line 8. It is no longer possible to create a Dog class without a name. The code on line 15 tries this. It will cause a Python error and it will not run. A common mistake is to name the parameter of the __init__ function the same as the attribute and assume that the values will automatically synchronize. This does not happen. 12.5.2 Review Questions - Should class names begin with an upper or lower case letter? - Should method names begin with an upper or lower case letter? - Should attribute names begin with an upper or lower case letter? - Which should be listed first in a class, attributes or methods? - What are other names for a reference? - What is another name for instance variable? - What is the name for an instance of a class? - Create a class called Star that will print out “A star is born!” every time it is created. - Create a class called Monster with attributes for health and a name. Add a constructor to the class that sets the health and name of the object with data passed in as parameters. 12.6 Inheritance Another powerful feature of using classes and objects is the ability to make use of inheritance. It is possible to create a class and inherit all of the attributes and methods of a parent class. 
For example, a program may create a class called Boat which has all the attributes needed to represent a boat in a game:

class Boat():
    def __init__(self):
        self.tonnage = 0
        self.name = ""
        self.is_docked = True

    def dock(self):
        if self.is_docked:
            print("You are already docked.")
        else:
            self.is_docked = True
            print("Docking")

    def undock(self):
        if not self.is_docked:
            print("You aren't docked.")
        else:
            self.is_docked = False
            print("Undocking")

To test out our code:

b = Boat()
b.dock()
b.undock()
b.undock()
b.dock()
b.dock()

The output:

You are already docked.
Undocking
You aren't docked.
Docking
You are already docked.

(If you watch the video for this section of the class, you'll note that the "Boat" class in the video doesn't actually run. The code above has been corrected, but I haven't fixed the video. Use this as a reminder: no matter how simple the code and how experienced the developer, test your code before you deliver it!)

Our program also needs a submarine. Our submarine can do everything a boat can, plus we need a command for submerge. Without inheritance we have two options.

- One, add the submerge() command to our boat. This isn't a great idea because we don't want to give the impression that our boats normally submerge.
- Two, we could create a copy of the Boat class and call it Submarine. In this class we'd add the submerge() command. This is easy at first, but things become harder if we change the Boat class. A programmer would need to remember to change not only the Boat class, but also make the same changes to the Submarine class. Keeping this code synchronized is time consuming and error-prone.

Luckily, there is a better way. Our program can create child classes that will inherit all the attributes and methods of the parent class. The child classes may then add fields and methods that correspond to their needs. For example:

class Submarine(Boat):
    def submerge(self):
        print("Submerge!")

Line 1 is the important part.
Just by putting Boat in between the parentheses during the class declaration, we have automatically picked up every attribute and method that is in the Boat class. If we update Boat, then the child class Submarine will automatically get these updates. Inheritance is that easy! The next code example is diagrammed out in Figure 12.10.

class Person():
    def __init__(self):
        self.name = ""

class Employee(Person):
    def __init__(self):
        # Call the parent/super class constructor first
        super().__init__()

        # Now set up our variables
        self.job_title = ""

class Customer(Person):
    def __init__(self):
        super().__init__()
        self.email = ""

john_smith = Person()
john_smith.name = "John Smith"

jane_employee = Employee()
jane_employee.name = "Jane Employee"
jane_employee.job_title = "Web Developer"

bob_customer = Customer()
bob_customer.name = "Bob Customer"
bob_customer.email = "send_me@spam.com"

By placing Person between the parentheses on lines 5 and 13, the programmer has told the computer that Person is a parent class to both Employee and Customer. This allows the program to set the name attribute on lines 19 and 22. Methods are also inherited. Any method the parent has, the child class will have too. But what if we have a method in both the child and parent class? We have two options. We can run them both with the super() keyword. Using super() followed by a dot operator and then a method name allows you to call the parent's version of the method. The code above shows the first option, using super so that we run not only the child constructor but also the parent constructor. If you are writing a method for a child and want to call a parent method, normally that call will be the first statement in the child method. Notice how it is in the example above. All constructors should call the parent constructor, because otherwise you'd have a child without a parent, and that is just sad. In fact, some languages force this rule, but Python doesn't. The second option?
Methods may be overridden by a child class to provide different functionality. The example below shows both options. The Employee.report overrides the Person.report because it never calls and runs the parent report method. The Customer report does call the parent and the report method in Customer adds to the Person functionality. class Person(): def __init__(self): self.name = "" def report(self): # Basic report print("Report for", self.name) class Employee(Person): def __init__(self): # Call the parent/super class constructor first super().__init__() # Now set up our variables self.job_title = "" def report(self): # Here we override report and just do this: print("Employee report for", self.name) class Customer(Person): def __init__(self): super().__init__() self.email = "" def report(self): # Run the parent report: super().report() # Now add our own stuff to the end so we do both print("Customer e-mail:", self.email) john_smith = Person() john_smith.name = "John Smith" jane_employee = Employee() jane_employee.name = "Jane Employee" jane_employee.job_title = "Web Developer" bob_customer = Customer() bob_customer.name = "Bob Customer" bob_customer.email = "send_me@spam.com" john_smith.report() jane_employee.report() bob_customer.report() 12.6.1 Is-A and Has-A Relationships Classes have two main types of relationships. They are “is a” and “has a” relationships. A parent class should always be a more general, abstract version of the child class. This type of child to parent relationship is called an is a relationship. For example, a parent class Animal could have a child class Dog. The Dog class could have a child class Poodle. Another example, a dolphin is a mammal. It does not work the other way, a mammal is not necessarily a dolphin. So the class Dolphin should never be a parent to a class Mammal. Likewise a class Table should not be a parent to a class Chair because a chair is not a table. The other type of relationship is the has a relationship. 
These relationships are implemented in code by class attributes. A dog has a name, and so the Dog class has an attribute for name. Likewise a person could have a dog, and that would be implemented by having the Person class have an attribute for Dog. The Person class would not derive from Dog because that would be some kind of insult. Looking at the prior code example we can see: - Employee is a person. - Customer is a person. - Person has a name. - Employee has a job title. - Customer has an e-mail. 12.7 Static Variables vs. Instance Variables The difference between static and instance variables is confusing. Thankfully it isn't necessary to completely understand the difference right now. But if you stick with programming, it will be. Therefore we will briefly introduce it here. There are also some oddities with Python that kept me confused the first several years I've made this book available. So you might see older videos and examples where I get it wrong. An instance variable is the type of class variable we've used so far. Each instance of the class gets its own value. For example, in a room full of people each person will have their own age. Some of the ages may be the same, but we still need to track each age individually. With instance variables, we can't just say “age” with a room full of people. We need to specify whose age we are talking about. Also, if there are no people in the room, then referring to an age when there are no people to have an age makes no sense. With static variables the value is the same for every single instance of the class. Even if there are no instances, there still is a value for a static variable. For example, we might have a count static variable for the number of Human classes in existence. No humans? The value is zero, but it still exists. In the example below, ClassA creates an instance variable. ClassB creates a static variable. 
# Example of an instance variable class ClassA(): def __init__(self): self.y = 3 # Example of a static variable class ClassB(): x = 7 # Create class instances a = ClassA() b = ClassB() # Two ways to print the static variable. # The second way is the proper way to do it. print(b.x) print(ClassB.x) # One way to print an instance variable. # The second generates an error, because we don't know what instance # to reference. print(a.y) print(ClassA.y) In the example above, lines 16 and 17 print out the static variable. Line 17 is the “proper” way to do so. Unlike before, we can refer to the class name when using static variables, rather than a variable that points to a particular instance. Because we are working with the class name, by looking at line 17 we instantly can tell we are working with a static variable. Line 16 could be either an instance or static variable. That confusion makes line 17 the better choice. Line 22 prints out the instance variable, just like we've done in prior examples. Line 23 will generate an error because each instance of y is different (it is an instance variable after all) and we aren't telling the computer what instance of ClassA we are talking about. 12.7.1 Instance Variables Hiding Static Variables This is one “feature” of Python I dislike. It is possible to have a static variable, and an instance variable with the same name. Look at the example below: # Class with a static variable class ClassB(): x = 7 # Create a class instance b = ClassB() # This prints 7 print(b.x) # This also prints 7 print(ClassB.x) # Set x to a new value using the class name ClassB.x = 8 # This also prints 8 print(b.x) # This prints 8 print(ClassB.x) # Set x to a new value using the instance. # Wait! Actually, it doesn't set x to a new value! # It creates a brand new variable, x. This x # is an instance variable. The static variable is # also called x. But they are two different # variables. This is super-confusing and is bad # practice. 
b.x = 9

# This prints 9
print(b.x)

# This prints 8. NOT 9!!!
print(ClassB.x)

Allowing instance variables to hide static variables caused confusion for me for many years!

12.8 Review

12.8.1 Multiple Choice Quiz

12.8.2 Short Answer Worksheet
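One legitimate use of a static variable mentioned earlier is counting how many instances of a class exist. A small sketch of that idea (the Human class and the count name are just for illustration; the counter is updated through the class name to avoid the instance-variable hiding trap described above):

```python
class Human():
    # Static variable: shared by the class itself, and it exists
    # even when there are zero instances.
    count = 0

    def __init__(self):
        # Update through the class name so we change the static
        # variable instead of creating a hiding instance variable.
        Human.count += 1


before = Human.count      # 0 -- no humans yet, but the count exists
a = Human()
b = Human()
after = Human.count       # 2
```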
I have the class below. There is an answer to this on StackOverflow, but it deals with throwing checked exceptions from mocks whose methods return a List. I'd like to understand this case; I can't see what I'm missing.

public class SimpleClass {
    private SimpleClass() {}

    public void runMethod(Request request, String param1, Map param2, Object param3, Object param4) {
        try {
            // doesSomething()....
        } catch (Exception e) {
            String message = "" + request.getAttribute(X) + "Some message";
            Logger.log(param1 + param2 + message);
        }
    }
}

public class SimpleClassTest {
    @Test
    public void testCatchBlock() {
        SimpleClass instanceObj = PowerMockito.mock(SimpleClass.class);
        Mockito.doThrow(new Exception()).when(instanceObj)
               .runMethod(request, anyString(), anyMap(), anyObject(), anyObject());
    }
}

org.mockito.exceptions.base.MockitoException: Checked exception is invalid for this method! Invalid: java.lang.Exception

You are getting unit testing with mocking wrong. Here:

SimpleClass instanceObj = PowerMockito.mock(SimpleClass.class);

There is no point in mocking the class that is under test! When you mock that class, you get a stub that has "nothing to do" with your real implementation. (As an aside, the exact error message appears because Mockito only lets you stub a checked exception on a method whose signature declares it; runMethod does not declare throws Exception.) A "working setup" would look more like:

public void methodUnderTest(X x, ...) {
    try {
        x.foo();
    } catch (Exception e) {
        ...
    }
}

and

X mockedX = mock(X.class);
when(mockedX.foo()).thenThrow(new WhateverException());

underTest.methodUnderTest(mockedX);
...

and then you could, for example, verify that the logger saw the expected logging call. In other words: you either use a mock to allow your code under test to do its job (with you being in control!), or to verify that some expected call took place on a mock object. But as said: it doesn't make any sense to mock the class that you want to test, because a mocked object doesn't know anything about the "real" implementation!
JNA, using native code with Java, demo

To use native code with Java, JNA is simpler than JNI; here is a simple demonstration...

Java Native Access is an extension to Java that allows the use of native APIs by including dynamic library files, DLLs under Windows. Unlike JNI, it does not require generating code to use C functions. To use them, simply include the file that defines them and declare the headers of these functions in an interface. We want to use the puts function of the C language, which is provided by the msvcrt.dll file in Windows. Create the following interface:

package CInterface;

import com.sun.jna.Library;

public interface CInterface extends Library {
    public int puts(String str);
}

We have declared the CInterface interface, which extends JNA's Library interface. Inside the interface, C functions are declared as methods. To use the puts method, simply create an instance of CInterface:

CInterface demo = (CInterface) Native.loadLibrary(libName, CInterface.class);
demo.puts("Hello World!");

For this to work, some imports are required:

import com.sun.jna.Library;
import com.sun.jna.Native;
import com.sun.jna.Platform;

In addition, the jna.jar library must be included in the project. A complete project for NetBeans can be downloaded. It contains two sources:

- CInterface.java that holds the interface above.
- Hello.java that uses this interface.

To run the demonstration:

- Download and install NetBeans.
- Download the project file; it is unpacked into the JNA directory.
- Download the jna.jar library on Sun's site and copy it into the JNA directory.
- Load the project in NetBeans from this directory.
- Add jna.jar to the list of libraries: Project Properties -> Libraries -> Add JAR / Folder.
- Compile the project.

Then type the command: java -jar /jna/dist/hello.jar "My own text!"

You can add the interface to all the functions you need, if they are present in the same DLL, and create an interface for each DLL file to include.

Full source code:

// JNA Demo.
Scriptol.com package CInterface; import com.sun.jna.Library; import com.sun.jna.Native; import com.sun.jna.Platform; public class hello { public static void main(String[] args) { String mytext = "Hello World!"; if (args.length != 1) { System.err.println("You can enter you own text between quotes..."); System.err.println("Syntax: java -jar /jna/dist/demo.jar \"myowntext\""); } else mytext = args[0]; // Library is c for unix and msvcrt for windows String libName = "c"; if (System.getProperty("os.name").contains("Windows")) { libName = "msvcrt"; } CInterface demo = (CInterface) Native.loadLibrary(libName, CInterface.class); demo.puts(mytext); } } Download the source code. The archive holds the file of the demo which displays a simple text using the puts function of the C language. A project file for NetBeans is included. See the Github project of Java Native Access for the jna.jar library.
Program Arcade GamesWith Python And Pygame Chapter 17: Sorting Binary searches only work on lists that are in order. So how do programs get a list in order? How does a program sort a list of items when the user clicks a column heading, or otherwise needs something sorted? There are several algorithms that do this. The two easiest algorithms for sorting are the selection sort and the insertion sort. Other sorting algorithms exist as well, such as the shell, merge, heap, and quick sorts. The best way to get an idea on how these sorts work is to watch them. To see common sorting algorithms in action visit this excellent website: Each sort has advantages and disadvantages. Some sort a list quickly if the list is almost in order to begin with. Some sort a list quickly if the list is in a completely random order. Other lists sort fast, but take more memory. Understanding how sorts work is important in selecting the proper sort for your program. 17.1 Swapping Values Before learning to sort, we need to learn how to swap values between two variables. This is a common operation in many sorting algorithms. Suppose a program has a list that looks like the following: my_list = [15,57,14,33,72,79,26,56,42,40] The developer wants to swap positions 0 and 2, which contain the numbers 15 and 14 respectively. See Figure 17.1. A first attempt at writing this code might look something like this: my_list[0] = my_list[2] my_list[2] = my_list[0] See Figure 17.2 to get an idea on what would happen. This clearly does not work. The first assignment list[0] = list[2] causes the value 15 that exists in position 0 to be overwritten with the 14 in position 2 and irretrievably lost. The next line with list[2] = list[0] just copies the 14 back to cell 2 which already has a 14. To fix this problem, swapping values in an array should be done in three steps. It is necessary to create a temporary variable to hold a value during the swap operation. See Figure 17.3. 
The code to do the swap looks like the following:

temp = my_list[0]
my_list[0] = my_list[2]
my_list[2] = temp

The first line copies the value of position 0 into the temp variable. This allows the code to write over position 0 with the value in position 2 without data being lost. The final line takes the old value of position 0, currently held in the temp variable, and places it in position 2.

17.2 Selection Sort

The selection sort starts by looking at element 0. The code then scans the rest of the list from element 1 to n-1 to find the smallest number. The smallest number is swapped into element 0. The code then moves on to element 1, then 2, and so forth. Graphically, the sort looks like Figure 17.4.

The code for a selection sort involves two nested loops. The outside loop tracks the current position that the code wants to swap the smallest value into. The inside loop starts at the current location and scans to the right in search of the smallest value. When it finds the smallest value, the swap takes place.

def selection_sort(my_list):
    """ Sort a list using the selection sort """

    # Loop through the entire array
    for cur_pos in range(len(my_list)):
        # Find the position that has the smallest number
        # Start with the current position
        min_pos = cur_pos

        # Scan left to right (end of the list)
        for scan_pos in range(cur_pos + 1, len(my_list)):

            # Is this position smallest?
            if my_list[scan_pos] < my_list[min_pos]:

                # It is, mark this position as the smallest
                min_pos = scan_pos

        # Swap the two values
        temp = my_list[min_pos]
        my_list[min_pos] = my_list[cur_pos]
        my_list[cur_pos] = temp

The outside loop will always run n times. The inside loop will run n/2 times on average. This will be the case regardless of whether the list is in order or not. The loops' efficiency may be improved by checking if min_pos and cur_pos are equal before line 16. If those variables are equal, there is no need to do the three lines of swap code. In order to test the selection sort code above, the following code may be used.
The first function will print out the list. The next code will create a list of random numbers, print it, sort it, and then print it again. On line 3 the print statement right-aligns the numbers to make the column of numbers easier to read. Formatting print statements will be covered in Chapter 20. # Before this code, paste the selection sort and import random def print_list(my_list): for item in my_list: print("{:3}".format(item), end="") print() # Create a list of random numbers my_list = [] for i in range(10): my_list.append(random.randrange(100)) # Try out the sort print_list(my_list) selection_sort(my_list) print_list(my_list) See an animation of the selection sort at: For a truly unique visualization of the selection sort, search YouTube for “selection sort dance” or use this link: You can trace through the code using. 17.3 Insertion Sort The insertion sort is similar to the selection sort in how the outer loop works. The insertion sort starts at the left side of the array and works to the right side. The difference is that the insertion sort does not select the smallest element and put it into place; the insertion sort selects the next element to the right of what was already sorted. Then it slides up each larger element until it gets to the correct location to insert. Graphically, it looks like Figure 17.5. The insertion sort breaks the list into two sections, the “sorted” half and the “unsorted” half. In each round of the outside loop, the algorithm will grab the next unsorted element and insert it into the list. In the code below, the key_pos marks the boundary between the sorted and unsorted portions of the list. The algorithm scans to the left of key_pos using the variable scan_pos. Note that in the insertion sort, scan_pos goes down to the left, rather than up to the right. Each cell location that is larger than key_value gets moved up (to the right) one location. 
When the loop finds a location smaller than key_value, it stops and puts key_value to the left of it. The outside loop with an insertion sort will run n times. The inside loop will run an average of n/2 times if the list is randomly shuffled. If the list is already close to sorted, then the inside loop does not run very much, and the sort time is closer to n.

def insertion_sort(my_list):
    """ Sort a list using the insertion sort """

    # Start at the second element (pos 1).
    # Use this element to insert into the
    # list.
    for key_pos in range(1, len(my_list)):

        # Get the value of the element to insert
        key_value = my_list[key_pos]

        # Scan from right to the left (start of list)
        scan_pos = key_pos - 1

        # Loop each element, moving them up until
        # we reach the position to insert the key
        while (scan_pos >= 0) and (my_list[scan_pos] > key_value):
            my_list[scan_pos + 1] = my_list[scan_pos]
            scan_pos = scan_pos - 1

        # Everything's been moved out of the way, insert
        # the key into the correct location
        my_list[scan_pos + 1] = key_value

See an animation of the insertion sort at: For another dance interpretation, search YouTube for "insertion sort dance" or use this link: You can trace through the code using PythonTutor.

17.3.1 Multiple Choice Quiz
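As a sanity check, either sort in this chapter can be compared against Python's built-in sorted() function, which returns a new sorted list. Also worth knowing: Python's tuple assignment lets you swap two values without the temp variable used earlier. A short sketch (the insertion_sort function is repeated from the chapter so the example is self-contained):

```python
import random

def insertion_sort(my_list):
    # Same algorithm as in the chapter text.
    for key_pos in range(1, len(my_list)):
        key_value = my_list[key_pos]
        scan_pos = key_pos - 1
        while (scan_pos >= 0) and (my_list[scan_pos] > key_value):
            my_list[scan_pos + 1] = my_list[scan_pos]
            scan_pos = scan_pos - 1
        my_list[scan_pos + 1] = key_value


numbers = [random.randrange(100) for i in range(10)]
expected = sorted(numbers)     # built-in sort; original list untouched

insertion_sort(numbers)        # sorts in place
match = (numbers == expected)  # True if our sort agrees with sorted()

# Pythonic swap without a temp variable:
a, b = 5, 9
a, b = b, a                    # a is now 9, b is now 5
```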
Smart charts in a SAP Fiori Object Page using ABAP CDS views and annotations

You can mix your own views to create hybrid apps, or just let Fiori Elements do all the work. Currently, there are 3 different types of Fiori Elements available:

- List report: Allows users to filter and work with large amounts of data.
- Object page: Shows all facets of a single business object.
- Overview page: Immediate domain specific insight on what needs attention. Offers quick actions.

For more information check this link:

OBS: A new template called Analytical List Page is available after the innovation version 1.48; I'm going to cover this subject in a future post.

A List Report template is always implemented in conjunction with an Object Page. This powerful template provides the ability to query and filter a set of records and navigate to a detail page of the record. For more information about these templates check the links below:

- List Report:
- Object Page:

An Object Page is basically composed of a Header and Facets (sections). Each facet is related to a group of data, and we can use the following layouts:

- Forms & Fields
- Contacts
- Tables
- Charts

Most developers don't know about the possibility of inserting a chart inside an Object Page, and it's not a difficult task to implement: with a sequence of simple steps you'll be able to enrich your application with a powerful analysis tool.

Okay, enough talking, let's start the development of our demo. :)

I'm going to split this post into 3 sections:

- ABAP CDS view
- OData project
- UI5 Project (Web IDE)

ABAP CDS View

To avoid spending time with table creation, we're going to reuse the Flight demo table offered by SAP, so let's create 2 new CDS views on top of this table:

- ZDEMO_FLIGHT: Returns all the flights, type of plane, dates and respective prices.
@AbapCatalog.sqlViewName: 'ZDEMOFLIGHT' @AbapCatalog.compiler.compareFilter: true @AccessControl.authorizationCheck: #NOT_REQUIRED @EndUserText.label: 'Flight' @UI.headerInfo: { title.value: 'FlightCode', description.value: 'PlaneType', typeName: 'Flight', typeNamePlural: 'Flights' } @OData.publish: true define view ZDEMO_FLIGHT as select from sflight association [0..*] to ZDEMO_FLIGHT_CHART as _Chart on $projection.FlightCode = _Chart.FlightCode and $projection.FlightDate = _Chart.FlightDate { @EndUserText.label: 'Flight Code' @UI: { lineItem.position: 10, fieldGroup: { qualifier: 'FlightDetails', position: 10 } } key concat(carrid, connid) as FlightCode, @UI: { selectionField.position: 10, lineItem.position: 20, fieldGroup: { qualifier: 'FlightDetails', position: 20 } } key fldate as FlightDate, @UI.lineItem.position: 30 @Semantics.amount.currencyCode: 'Currency' price as Price, @Semantics.currencyCode: true currency as Currency, planetype as PlaneType, _Chart } - ZDEMO_FLIGHT_CHART: Returns the maximum capacity of seats and occupied seats per class for each one of the flights. @AbapCatalog.sqlViewName: 'ZDEMOFLIGHTCHART' @AbapCatalog.compiler.compareFilter: true @AccessControl.authorizationCheck: #NOT_REQUIRED @EndUserText.label: 'Flight' @UI.chart: [{ qualifier: 'OccupiedSeats', chartType: #COLUMN, dimensions: [ 'FlightCode' ], measures: [ 'MaximumCapacity', 'EconomyOccupiedSeats', 'BusinessOccupiedSeats', 'FirstOccupiedSeats' ] }] define view ZDEMO_FLIGHT_CHART as select from sflight { key concat(carrid, connid) as FlightCode, key fldate as FlightDate, seatsmax as MaximumCapacity, seatsocc as EconomyOccupiedSeats, seatsocc_b as BusinessOccupiedSeats, seatsocc_f as FirstOccupiedSeats } Let’s review some important points about the annotations we have in these CDS views: - @UI.headerInfo: This annotation is used to place information in the header of the Object Page, in our case we place the Flight Code and Plane Type as title and description. 
- @UI.lineItem: This annotation determines the position of the field in the result list of the List Report.
- @UI.selectionField: This annotation determines the position of the field in the filter of the List Report.
- @UI.chart: This is the main annotation regarding our demo's purpose. It basically sets the chart type, dimensions and measures for Smart Chart consumption inside the Object Page.
- @Semantics.amount and @Semantics.currency: These annotations define a relation between an amount field and the respective currency.
- @EndUserText.label: This annotation provides a label for a specific field.
- @OData.publish: This annotation is used to publish the OData service automatically, without the need to create an OData project through transaction SEGW (you can check more details about the OData project in the next section).

An interesting point is that both CDS views share the same key, so why do I need to split the content into 2 different views? I don't know exactly, but there is some kind of restriction and the Smart Template expects this specific structure of separate views (one for the main entity and another one for the chart). I've tried to place the chart (and respective annotations) in a single view, but the Smart Chart wasn't rendered properly by the UI5 application. You can also notice that we define the association as [0..*] instead of [0..1]; if you don't follow this convention the chart will not appear on the screen either.

Now the ABAP CDS views are finished; we just need to expose/activate our OData service and generate our UI5 application.

OData Project

There are two options to expose a CDS view as an OData service: creating an OData project through transaction SEGW that references the view, or simply annotating the view itself:

@OData.publish: true
define view ZDEMO_FLIGHT as select from sflight { ... }

We use the second option to publish our service, but no matter the approach you decide to follow, just remember always to activate the OData service in the front-end server (SAP Gateway server) through the transaction /IWFND/MAINT_SERVICE.
UI5 Project (Web IDE)

There are some types of annotations that are not available through ABAP CDS; in this case we need to mix a little bit of the local annotations (published inside the UI5 application) with the annotations generated by the ABAP CDS views. Personally, I prefer to include all annotations in the CDS views, because if I need to perform maintenance there is no need to re-deploy the whole application; we just need to transport the ABAP CDS view that holds the relevant annotations and the job is done! But in the case of a Facet configuration (Object Page sections), there is no other option than to configure it through the UI5 local annotations.

Let's start creating a new project based on a List Report Application.

Note #1: I'm using the SAP Innovation version 1.48, but Smart Charts and the List Report Application have been available since version 1.44; if you are working with an on-premise solution you can still use this functionality.

Fill in the project name, title, namespace and description:

Define your data source and select the ZDEMO_FLIGHT_CDS service.

Note #2: Since we are publishing our OData service through the @OData.publish annotation, the system generates a project with the name of our ABAP CDS view plus the _CDS suffix.

Select the annotation ZDEMO_FLIGHT_CDS_VAN (generated automatically by the OData service / ABAP CDS).

Note #3: If you don't select this option, none of the annotations declared through your ABAP CDS view are going to flow to the UI5 application.

Finally, select your OData collection ZDEMO_FLIGHT and confirm the template creation.

Now open the project path, the annotations folder and the file annotations.xml. Open the annotation modeler and select the entity type ZDEMO_FLIGHTType.

Let's create 2 new facets: for the first one we will reference the field group with the ID FlightDetails, and in the second facet we will point to the chart annotation with the ID OccupiedSeats.
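For orientation, the two facets described above correspond roughly to local annotations like the following, written against the SAP UI vocabulary. This is a hand-written sketch, not the Annotation Modeler's actual output: the entity type target and, in particular, the navigation path to the chart (shown here as the association name _Chart) may be named differently in your generated service, so always let the modeler produce the final XML.

```xml
<Annotations Target="ZDEMO_FLIGHT_CDS.ZDEMO_FLIGHTType">
  <Annotation Term="UI.Facets">
    <Collection>
      <!-- First facet: the field group declared in the CDS view -->
      <Record Type="UI.ReferenceFacet">
        <PropertyValue Property="Label" String="Flight Details"/>
        <PropertyValue Property="Target" AnnotationPath="@UI.FieldGroup#FlightDetails"/>
      </Record>
      <!-- Second facet: the chart annotation on the associated chart view -->
      <Record Type="UI.ReferenceFacet">
        <PropertyValue Property="Label" String="Occupied Seats"/>
        <PropertyValue Property="Target" AnnotationPath="_Chart/@UI.Chart#OccupiedSeats"/>
      </Record>
    </Collection>
  </Annotation>
</Annotations>
```

The qualifiers after the # signs are the same FlightDetails and OccupiedSeats IDs declared in the CDS views, which is what lets the local facet configuration reference the back-end annotations.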
Note #4: Notice that all the annotations we declared through our ABAP CDS views are available under the External Annotations section; the Annotation Modeler automatically merges annotations from external sources (ABAP CDS or OData project) with the local annotations declared inside the UI5 application. If you want to declare your Smart Chart in a scenario without ABAP CDS there is no restriction; you just need to open the Annotation Modeler and place your own @UI.chart annotation.

After you finish editing this file, save the content and start the application. This is the expected outcome:

List Report

Object Page

Good one Felipe. Thanks for sharing. You can also have a mixed OData project where some of the entities are manually implemented and others use SADL and CDS views, just to provide more flexibility. Check out the blog -

Hey Saurabh, there is actually another possibility, where you generate the structure through the ABAP CDS view and populate part or all of the data through the OData service layer, specifically with ABAP code in the Data Provider Class Extension (suffix DPC_EXT). SAP uses this approach in standard applications to populate long texts or price calculations that need to run through a formula during runtime. If there is no option to retrieve this data through an ABAP CDS view you can use this mixed approach. I use this technique in a few of my developments and it works perfectly! The good part about this strategy is that you reduce the time configuring the SEGW. 🙂 Thanks for your contribution. Cheers, Felipe

Excellent Felipe. I have tried this and everything works perfectly. I am expecting your next blog on OVP & Analytical Page. Thanks, Syam

Hi Syam, thanks for the feedback! I'm currently working on a new article about KPIs and criticality using Overview Pages; I'll try to release this one by the end of the year. I've also been working on small articles focusing only on ABAP CDS development.
I will probably release my post about the Analytical List Page in 2018, but this subject definitely deserves a detailed post about the new functionalities and improvements compared with the List Report & Object Page. Cheers, Felipe

Hi Felipe, that's cool!! Thanks, Syam

Can we use CDS views in the master-detail template?

Hi Rolf, since you can encapsulate your CDS views in OData projects, it is actually possible to use them with any kind of Web IDE template or custom app. The only problem is that you will not benefit from the power of annotations, because they work only in two scenarios: You should be fine if your intention is only to consume the data instead of generating UI-driven applications based on annotations. Cheers, Felipe

Thanks a lot, Felipe. I have another requirement for my current development. Is it possible to create an icon tab bar in the Object Page using the annotation modeler? Currently, I am using a Facet to show different sub-screens. Thanks, Angshuman

Hi Angshuman, sorry, but I didn't understand how your query is related to Rolf's question. If you are facing issues with a specific development, I advise you to post your question in the Q&A section instead of using blog comments to ask for help. This way the community can help with your requirement and other users can benefit from the discussion too. If you have any queries related to the content of this article I'll be glad to help. Cheers, Felipe

Getting error: Unhandled Error: Request failed: Server Error URI: /file/as-OrionContent/ZDEMO_FLIGHT/

Hi Felipe, I have tried all those steps. Everything works fine, the list comes out and I am able to click on the line item, but the chart is not coming out. Do you have any idea where I can check?

Hi Methee, if you are using ABAP CDS views I advise you to review the declaration of the @UI.Chart annotation.
I remember facing some issues because I didn't declare the annotation inside an array, between [...]; check the example below. Also, review all of your annotations and try to use the same SAPUI5 version defined in my article (SAP Innovation 1.48). Since SAP has been updating their libraries constantly, it is always good to check if the functionality is still available in the current version. Cheers, Felipe

Hi Phillipe, I have downloaded SAP Web IDE version 1.53.5, but once I create an application I have found that the latest version I've got is 1.44 (as attached pic). Do you have any idea what I might be missing here? Thank you.

Hi, I believe you are talking about different versions of SAPUI5; after creating your project via SAP Web IDE you can select prior versions depending on your requirements. There is also an assistant to support this configuration. Cheers, Felipe

Hi Felipe, I am working with a List Report application based on Gateway OData (not CDS). The application is based purely on frontend UI annotations. The issue is that when I create any chart annotation and assign it to the Object Page, it always shows "no data available" (I even tried with a List Report extension and defined the chart in a fragment). Also, the batch call goes on infinitely. The question is: does any code implementation need to be done in a non-CDS OData service, or are the aggregate settings enough? Any help? Regards, Sakthi

Hi Sakthi, no code implementation is necessary if you are following the structure proposed by this article and defining the right set of annotations. Cheers, Felipe

Thanks Felipe, explained well 🙂

Hi Felipe - I am trying to display total seats per plane type (field snumber in table SCARPLAN) as a measure, but for some reason the measure is displayed in intervals of 0.2, 0.4 ... 1, and there are no vertical bars. Can you please help?

Hello Kshitij, I am also facing the same issue regarding the smart chart. Did you resolve this? If yes, please help.
Please add @DefaultAggregation: #SUM to your measure fields in the CDS view ZDEMO_FLIGHT_CHART. It will work. BR, RAM.

Hello Felipe, I followed the same steps as given by you, but I am still facing the issue displaying the smart chart. Kindly suggest.

Hi Felipe, I must say that I like the way you think. I was thinking that rather than implementing Cross App Navigation from an overview page via a chart click, is it possible to simply place another chart using the facets annotations? My thought is yes. Love your blogs, champ.
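For readers hitting the missing-bars issue discussed in the comments, this is roughly how RAM's @DefaultAggregation tip would look applied to the chart view from this post (a sketch only; I have not re-run the demo with it):

```
define view ZDEMO_FLIGHT_CHART
  as select from sflight
{
    key concat(carrid, connid) as FlightCode,
    key fldate as FlightDate,

    @DefaultAggregation: #SUM
    seatsmax as MaximumCapacity,

    @DefaultAggregation: #SUM
    seatsocc as EconomyOccupiedSeats,

    @DefaultAggregation: #SUM
    seatsocc_b as BusinessOccupiedSeats,

    @DefaultAggregation: #SUM
    seatsocc_f as FirstOccupiedSeats
}
```

The annotation tells the consuming Smart Chart how to aggregate each measure, which is why its absence can produce normalized values (0.2, 0.4, ... 1) instead of proper column bars.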
https://blogs.sap.com/2017/11/27/smart-charts-in-object-pages-using-abap-cds-annotations/
NAME
    csin, csinf, csinl - complex sine function

SYNOPSIS
    #include <complex.h>

    double complex csin(double complex z);
    float complex csinf(float complex z);
    long double complex csinl(long double complex z);

    Link with -lm.

DESCRIPTION
    These functions calculate the complex sine of z.

    The complex sine function is defined as:

        csin(z) = (exp(i * z) - exp(-i * z)) / (2 * i)

VERSIONS
    These functions first appeared in glibc in version 2.1.

ATTRIBUTES
    For an explanation of the terms used in this section, see attributes(7).

CONFORMING TO
    C99, POSIX.1-2001, POSIX.1-2008.

SEE ALSO
    cabs(3), casin(3), ccos(3), ctan(3), complex(7)

COLOPHON
    This page is part of release 4.15 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
https://www.zanteres.com/manpages/csinl.3.html
Hello: My instance is running behind a proxy with the root end point set to, say, /mysplunkapp. The configuration is done correctly in the web.conf file to reflect the root context path /mysplunkapp. The problem occurs while logging into Splunk. From Firebug, we see that a 303 "See Other" redirect fires. With this, the context path is lost and the proxy web server is not able to understand where to redirect the request to. Hence, users get lost (redirected to some other page on the proxy). This happens on and off, and for sure on the first login into Splunk. I looked into the Splunk Python code, but could not locate where the login is handled. Though I see login.html, where the FORM-based auth is taken care of, I could not see the real issue in the code. I suspect it's around the redirect and the return_to variable. In simple terms, what's expected is that after login into Splunk, the response should have the path "/mysplunkapp"; but what is redirected is just the root "/". Hence the proxy is not able to understand where to redirect to. Any idea how to address this? Thanks.

Debugged this extensively. Finally, I did a hack: I hardcoded my root context path /mysplunkapp in the account.py login(..) method. This works for me.

File affected: ../lib/python2.7/site-packages/splunk/appserver/mrsparkle/controllers/account.py

@expose_page(must_login=False, methods=['GET','POST'], verify_session=False)
@lock_session
@set_cache_level('never')
def login(self, username=None, password=None, return_to=None, cval=None, **kwargs):
    return_to = '/mysplunkapp/en-US/'  # <<=== FIX (1): force the root context path
    # Force a refresh of startup info so that we know to
    # redirect if license stuff has expired.
    startup.initVersionInfo(force=True)
    updateCheckerBaseURL = self.getUpdateCheckerBaseURL()

ps: Replace the hardcoded value with cherrypy.config.get('root.endpoint') if you don't like hardcoding.
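To avoid the hardcoded path, the redirect target could be built from the configured root endpoint, as the "ps" above suggests with cherrypy.config.get('root.endpoint'). A small illustrative helper (the function name is mine, not Splunk's) that normalizes the endpoint and appends the locale segment:

```python
def build_return_to(root_endpoint, locale="en-US"):
    """Build the post-login redirect path from the proxy root endpoint.

    Handles a missing/empty endpoint and a trailing slash, so both
    '/mysplunkapp' and '/mysplunkapp/' yield '/mysplunkapp/en-US/'.
    """
    root = (root_endpoint or "").rstrip("/")
    return "%s/%s/" % (root, locale)

# Inside account.py's login() this would replace the hardcoded value:
#   return_to = build_return_to(cherrypy.config.get('root.endpoint'))
print(build_return_to("/mysplunkapp"))  # /mysplunkapp/en-US/
```

With no root endpoint configured it degrades to plain "/en-US/", which matches the default (non-proxied) behavior.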
https://community.splunk.com/t5/Developing-for-Splunk-Enterprise/Splunk-behind-proxy-returning-wrong-context-path-with-303/m-p/70739
Resetting Your Extensions for the Last Time!

We've heard a great deal of feedback on how each update of the Productivity Power Tools re-enables all of the extensions when it installs. If you are careful about installing the Power Tools via the Extension Manager, or as long as an instance of Visual Studio is running, this version of the Productivity Power Tools will be the last which resets the extensions.

Please note: if you are running Visual Studio 2010 SP1 (Beta), you will need to uninstall previous versions of the Productivity Power Tools prior to upgrading, due to a change in the digital signature for the Power Tools.

Find

There are many different ways to find within Visual Studio (Incremental Search, Quick Find, Find in Files, the Find toolbar, etc.), and it often isn't clear which is best for a given task, or worse, that these options even exist. The Find dialog itself can also obstruct code and jump around while users are searching. Our solution to these problems is the new Find extension. In the screenshot below, you will see that we've turned the Quick Find and Incremental Search experiences into a find pop-up that is available at the top right hand corner of the editor. After hitting Ctrl+I or Ctrl+F, it simply highlights the find results as you type. From this small but powerful pop-up, you have access to most of the Quick Find functionality such as replace, options to match case, and added support for matching .NET regular expressions!

Release notes:
- As an extension, it was only possible to implement these changes for the code editor. You must still use Quick Find for searching in designers and other non-editor tabs.
- .NET regular expressions are only available in the Find extension. Find in Files will continue to use VS regular expressions.
- Feel free to email us your feedback: VSFindFeedback@microsoft.com

Enhanced Scrollbar

We've been looking into ways that we can improve the experience of navigating through code files.
Our solution is the source map, which has three modes that allow you to more easily see the interesting artifacts in your files (edits, breakpoints, bookmarks, errors, warnings, etc.) and make it easy for you to navigate between them. The default mode is the "scroll bar only" mode, which overlays icons onto the standard scrollbar to allow for viewing of these artifacts. In the source map mode, we've replaced the default scroll bar, allowing you to click on any item on the scrollbar to navigate directly to it. This source map mode also provides a preview of that part of the document as you hover. Finally, we have the detailed source map mode, which gives you a zoomed-out view of your entire file. You can switch between any of these modes by right-clicking on the scroll bar or going to Tools > Options > Productivity Power Tools > Source Map, where we have a host of other options that you can configure.

Middle-Click Scrolling

The ability to do middle-click scrolling in Visual Studio 2010 has been a top request from our beta customers that we weren't quite able to get into the release. With this extension you can press down on your scroll wheel and then move the mouse to quickly scroll through your document!

Organize Imports for Visual Basic

As part of the co-evolution strategy for Visual Basic and C#, we continue to bring the best features from each language experience to the other. With Organize Imports for Visual Basic we've added yet another feature to that list. From the context menu, it allows you to sort the imports logically and remove the ones that aren't being used.

Add Reference Support for Multi-Targeting

"How come I can't add a reference to System.Web?" Many users have scratched their heads trying to figure out why certain DLLs aren't showing up in the Add Reference dialog.
The confusion has been caused by the logic in the Add Reference dialog, which filters out assemblies that are not valid on the .NET Client Profile that many of the Visual Studio templates target by default. The Productivity Power Tools solution is to grey out the assemblies which are not available in the current framework profile. When you try to add them, it will automatically prompt you to re-target to a profile of the same framework which does support them.

Options in HTML Cut/Copy

Productivity Power Tools users have asked for the ability to tweak the HTML format which gets copied to the clipboard, and with this release you now have the ability to customize that to suit your needs. Simply go to Tools > Options > Productivity Power Tools > HTML Copy.

This release of the extension also fixes the commonly reported bug where Cut/Copy occasionally fail, which was fixed in VS 2010 SP1 Beta but was present in the October release of the Productivity Power Tools.

Awesome. I love these extensions. Keep them coming. One thing though. Nine times out of ten, the answer to the question "Why can't I add a reference to System.Web" should be "Because you shouldn't be referencing that assembly just so you can HTML-encode a string!"

Loving the new scrollbar (in the detailed map mode). I seriously think you are under-advertising it! A picture in the blog post is a definitive must 🙂

These are some great productivity enhancements. Especially the new scrollbar, I love it (but really would love to make it a bit/much smaller).

The Code Contract Editor Extensions QuickInfo tooltip location interferes with the Solution Navigator interactive tooltip location. This was mentioned by others on the Visual Studio Gallery page for the Code Contract Editor Extensions visualstudiogallery.msdn.microsoft.com/85f0aa38-a8a8-4811-8b86-e7f0b8d8c71b, but is not yet mentioned on the Visual Studio Gallery page for the Productivity Power Tools.
Can the Productivity Power Tools developers confirm they are aware of this and are planning to resolve it with the Code Contract Editor Extensions QuickInfo developers?

Wow, I'm impressed! In every release, I notice it's getting better and better. Very creative indeed. I hope other teams in VS would be this creative as well, and more :). Thank you for your creative and hard work!

The remove-unused-imports feature of the Organize Imports module seems to always delete any imports that are part of the root namespace of the file's project, even if they are being used.

@Matthieu: Does the file still compile after you run Organize Imports? It always should; if it doesn't, then please e-mail vspropowertools@microsoft.com with any additional details you might have. The extension will always remove redundant imports. If I have a project with root namespace "MyCompany.MyProduct", an "Imports MyCompany" will always be removed since that's implicitly in scope, even if there is a type being used from that parent namespace.

.NET regex searches are really welcome. Keep up the good work!

It breaks the build. For example, take a project with root namespace MyCompany.MyProduct, and two sub-namespaces: MyCompany.MyProduct.DataAccess and MyCompany.MyProduct.Models. If I have a file declaring a class in MyCompany.MyProduct.DataAccess where I need an import from MyCompany.MyProduct.Models, that import will be deleted by Organize Imports. I'll send a mail with a complete example.

There appears to be an issue with adding a reference to a local assembly if it exists in the GAC as well. We have some vendor assemblies that are installed in the GAC. We copy these assemblies to a vendor folder within the project (local assembly). This vendor folder is checked into version control with the rest of the project. When we add a reference to the local assembly, a reference to the GAC assembly is added instead. We are explicitly browsing to the assembly within the vendor folder.
Hello, another bug report: after updating to this version, the project- and solution-wide sort/remove usings utility (for C#) breaks. On a large solution (maybe 1000 source code files, more or less), eventually (never at the same file) I get an "object reference not set" error, or no error at all, and then VS2010 simply closes immediately (gone from the task list as well). There does not seem to be any pattern to reproduce it, just that it happens when run on a large enough solution/project. Opening the solution on another computer with the previous version of the utility processes the solution just fine. Any ideas? Thanks!

@Remco Blok – We're aware there are other extensions (like Code Contract Editor Extensions QuickInfo) that use the same space as the Solution Navigator Quick Info popup. Unfortunately, we can't put extensibility points into the Solution Navigator extension for third parties to plug into, and we can't prevent them from using the same space. For now, you can always disable the Solution Navigator's Quick Info and only use the Code Contract one by changing the Tools > Options > Productivity Power Tools > Solution Navigator > Enable Interactive Tooltips option.

@BarfieldMV – You can change the size of the new scrollbar by editing the registry under HKEY_CURRENT_USER/Software/Microsoft/VisualStudio/10.0/Text Editor. Add a new string value "SourceImageMargin/MarginWidth" and set its value to the desired width of the "source" part of the margin in pixels (default is 100).

Hi Matthew, thanks for your response on the issue with the Code Contract Editor Extensions. I can understand that you may not be able to currently cater for every other third party (ReSharper conflicts as well, I've read), but I did not expect you would consider the Code Contract Editor Extensions third party. The Code Contract Editor Extensions and Productivity Power Tools are built by the same company.
Rather than allowing the Code Contract Editor Extensions to extend the Productivity Power Tools, I thought they might perhaps be integrated. Thanks anyway.

Hello, great tools, but when I press F12 to go to the source (metadata) of some standard control, VS2010 starts to consume 100% CPU and never stops. I don't know if it helps, but I have Red Gate's Reflector integrated in VS too. Without the Power Tools, F12 works fine. Thanks.

I noticed that the Solution Navigator behaves a little differently than the Solution Explorer, specifically for scrolling. Can you add the functionality to allow the user to scroll in the Solution Navigator by simply hovering over the window and scrolling with the scroll wheel? This would mimic the default scrolling behavior in the Solution Explorer built into VS2010.

Remove and Organize namespaces in VB.NET is not working. I have tried several scenarios, standalone and TFS, same error.

There seems to be an issue (Dev10 crash) with XSLT debugging when the PPTs are enabled. One workaround is to disable the tools, restart VS, and optionally re-enable the tools again. This will fix the problem until the next machine reboot. If you did not re-enable the tools, all's fine after rebooting. Dev Team, please find the details here: social.msdn.microsoft.com/…/0d33eddb-55d7-4ee2-b136-02b429ff1785 You may want to add some information to the Known Issues paragraph on the download page until this is eventually resolved. Thanks!

Is the Enhanced Scrollbar available for Visual Studio 2012? Cool!
https://blogs.msdn.microsoft.com/visualstudio/2011/02/22/productivity-power-tools-introduces-find-organize-imports-for-vb-enhanced-scrollbar-middle-click-scrolling-and-more/
Summary The QueryRow object provides access to all the values in a row from the result of the queryJobs method. Discussion The QueryRow object can be accessed from QueryResult by index, field name or iterator. Code sample The following script runs a job query and accesses a query row. import arcpy #Establish a connection to a Workflow database conn = arcpy.wmx.Connect(r'c:\test\Workflow.jtc') #Run a query for jobs being assigned to current user. The query results are sorted by job name. result = conn.queryJobs("JOB_NAME,ASSIGNED_TO","JTX_JOBS","Job Name,Assigned To", "ASSIGNED_TO = '[SYS:CUR_LOGIN]'", "JOB_NAME") #To get total number of records being returned which are assigned to current user. print("There are %s jobs assigned to me" % str(len(result.rows))) # Access first row by index print ("The first value is %s" % str(result.rows[0])) # Access first row's job name field by field name print ("The job name is %s" % result.rows[0]['JOB_NAME']) # Access first row's fields values by iterator row = result.rows[0] for val in row: print("Value is %s" % str(val))
https://pro.arcgis.com/en/pro-app/latest/arcpy/workflow-manager/queryrow-class.htm
Before we can start writing the code that will read the input from the infrared receiver, we will need to load the IRremote library by shirriff. The following screenshot shows the library and version that we will use:

Once the library is loaded, we will need to start by importing the header file for the IRremote library and creating the global variables and directives. The following code shows how to do that:

#include <IRremote.h>

#define IR_PIN 2

IRrecv ir(IR_PIN);
decode_results irCode;

In the preceding code, we start off by including the IRremote.h header file into our project. We then define that the infrared receiver is connected ...
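The snippet above only declares the receiver. As a minimal sketch of how those globals are typically used (my addition based on the classic IRremote 2.x API by shirriff, not from this book; IRremote 3.x and later changed these calls), the receiver is started once in setup() and polled in loop():

```
#include <IRremote.h>

#define IR_PIN 2

IRrecv ir(IR_PIN);
decode_results irCode;

void setup() {
  Serial.begin(9600);
  ir.enableIRIn();             // start listening on IR_PIN
}

void loop() {
  if (ir.decode(&irCode)) {    // true when a full code has been captured
    Serial.println(irCode.value, HEX);  // print the raw code
    ir.resume();               // re-arm the receiver for the next code
  }
}
```

Pressing a button on any remote within range should print its code over the serial monitor, which is a quick way to verify the wiring before building anything on top of it.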
https://www.oreilly.com/library/view/mastering-arduino/9781788830584/a1401cd1-fa1d-4156-867b-02015eec4f3d.xhtml
Yeah, it sounds like you are a pretty advanced user. I am learning each of these techniques and I'm sure I will find ways to work around this problem as I move forward… but the point is that I am dancing around something here. The console used to be able to handle this in a light and reliable way (in Rhino 5) and it doesn't do that any more.

I know that @Alain has been working on improving the print output editor in a way that addresses your concerns. In Rhino 7 BETA, you can now choose to print the output as simple text. Also, copy-paste should now be simpler. I specifically asked for this to be shipped with Rhino 7 SR0. The default type hint was deliberately left as rhinoscriptsyntax. You can create your own defaults for this using a Grasshopper User Object. A GHUO can store default code, default type hints and default variable names for both inputs and outputs. It also stores the default window position and a few other minor settings. It also stores SDK mode ON/OFF. We can have it store more stuff if you need. Personally, maybe I'd like it to store font size.

That's fantastic news! I only just quickly tested the Rhino 7 Beta and must have missed this. Many many apologies, should have looked closer. And setting this option appears to be persistent; again, this is just awesome, guys.

Whoaah, that sounds like great news as well. How does that work when adding new input parameters via ZUI? Also only just noticed AutoSolve.

In Rhino 6 you can replace the output as well, in case the DataGrid is the bottleneck here. In theory you can replace the label with any other user control. Here is a quick proof of concept.
(It replaces the output grid with a simple text label) import System.Reflection as rf import System.Windows.Forms as forms def print_ex(line): editor = ghdoc.Component.Attributes.Form outputPage = editor.GetType().GetField("outputPage", rf.BindingFlags.NonPublic | rf.BindingFlags.Instance).GetValue(editor); label = forms.Label() label.Text = line outputPage.Controls.Clear() outputPage.Controls.Add(label) outputPage.SuspendLayout(); print_ex("foobar") The reflection call could be made only once per script execution, since this will be a potential bottleneck. @AndersDeleuran @Alexander_Jacobson Here is the workaround for RH6. Works extremly fast compared to the inbuild one. Give it a try. Of course its visible to the user, and I spend no time in making it more reliable or nice. But see it as quick solution. """Provides a scripting component. Inputs: x: The x script variable y: The y script variable Output: a: The a output variable""" __author__ = "tomja" __version__ = "2020.10.14" import rhinoscriptsyntax as rs from System.Text import StringBuilder import System.String as ss import System.Reflection as rf import System.Windows.Forms as forms outputPage = None textBox = None printdata = StringBuilder() def print_f(line): global outputPage, textBox if (outputPage == None and textBox == None): editor = ghdoc.Component.Attributes.Form outputPage = editor.GetType().GetField("outputPage", rf.BindingFlags.NonPublic | rf.BindingFlags.Instance).GetValue(editor); outputPage.Controls.Clear() textBox = forms.TextBox() textBox.Width = 1200 textBox.Height = 200 textBox.Multiline = True textBox.ScrollBars = forms.ScrollBars.Vertical textBox.AcceptsReturn = True textBox.AcceptsTab = True textBox.WordWrap = True outputPage.Controls.Add(textBox) outputPage.SuspendLayout() printdata.Append(line) # Your code goes in here for i in range(0,100000): print_f("test \r\n") # Add to the output before the script terminates textBox.Text = printdata.ToString()
https://discourse.mcneel.com/t/additional-rhino-6-ghpython-feedback/87096?page=2
CC-MAIN-2022-21
refinedweb
577
53.07
How to create a WordPress theme from scratch: a beginner's guide 2017 - Jobs

I bought the theme from this website: https://[login to view URL] I want to customize the website like [login to view URL] header Footer Layout etc ... Whatever. .. color) with minimalistic/aesthetic

I downloaded Caffe from GitHub. [login to view URL] I want to build it on Windows 10, VS 2015/2017. Only if you have rich experience with CMake, please bid.

Hi, I have a URL shortener website. I also had AdSense there, but it was disabled. Now can you guide me how I can earn from the URL shortener website ...list.

I have a webpage I designed and I need to put the site on WordPress. I have already created the WordPress page template, but the assets cannot be found and the styling, JavaScript, and overall design are broken. I've already created two .[login to view URL] files - one calling the content, and the actual content. The .php file calling ...

...looking for advisors to guide us through the planning stage of our website. This job is for one or two hours of short advice or introductory consultation only. The right person will be awarded a further contract to lead us in developing the website. Our website [login to view URL] is already up and running, but for future scalability, it is better to seek professi...

Hello, please draw a new version of the attached graphic from scratch. Please do not copy - draw something new. MAKE SURE THE ARCHER IS POINTING NORTH.

Please help me to install a new theme .. exp...,...

Hi mercy, can you guide me on how to be a professional writer? I need the most useful tips that writers use in writing research and theses. Their secrets. Thanks.

I need some help with internet marketing. ** p...

Need a creative music director to compose theme music for a Fantasy Sports Company. 1. The tune should be copyright-free. 2. Should provide raw files. 3. Should provide copyright for the tune.
4, Recommended Digital instruments to compose the tune I installed the cordova plugin fcm in my cordova application projec...build android, throw many errors, i tried with other plugins like cordova plugin firebase, but throws other errors. I think could be a bad installation of android sdk in Visual Studio. Need help on teamviewer to check configuration of the blank project with only cordova plugin. ... import landing pages in divi theme from others site fully full functionality Converting hydroxylamine to ketamine I am looking for someone to show me or teach me how to convert hydroxylamine into ketamine. I have very basic science knowledge so will need a step by step guide in basic English PLEASE DON'T OVERBID AND SEE DETAILS BELOW PROPERLY I need +18 adult stories in Arabic and English More details will be provided to interested bidder .. : We are launching our sports trading platforms. Initially the primary site will offer sports blogs, offers & general information. This brand will prom...background & logo (profile) We're quite clear of the design we're after and will provide 3-4 good examples of competitor sites, along with our existing logo, which will serve as a basis for context. .. user community to guide the... Hello, everyone! I need to translate some guide documents for the education of children from Spanish to English. So I am looking native Spanish translator now and he has too many experiences at translation. Thanks ...functionality-wise is working. but I am looking for redesigning it from scratch with a new angle/aspect. url: [log ind for at se URL] Please go through the storyboard attached to understand the website concept. The 1st phase will include only the homepage I will see the HTML because I want to have a first hand feel and I am not comfortable with jpegs..!! create a new html site from scratch I will provide some written explanations to act as a guide for the explainer video 3 to 4 minutes. 
My budget is $50.I need an explainer video (Animation kind of), about 3 to 4 minutes .. within one ... I need some changes to an existing website. I need you to create and build applications page. .. need some drawings done in Solidworks 2017. Price per drawing is $10-$13. Message directly for more details. [log ind for at se URL] I have an ASP based website that is on a mysql 2012 database. The host is shutting down this server and the site needs migrating over to a mysql 2017 server with the same host. The site is used for generating electrical safety certificates. Hello, I want to copy the entire Wordpress website currently located on the domain [log ind for at se URL] to two new domains / locations instead of setting up the website from scratch. Let's discuss and see how it can be done the safest and easiest way. - Website / WP installation to be copied: [log ind for at se URL] - Destination domains: ...
https://www.dk.freelancer.com/job-search/how-to-create-wordpress-theme-from-scratch-a-beginners-guide-2017/3/
CC-MAIN-2019-26
refinedweb
839
73.98
module "QtQuick.Controls" is not installed (static Qt 64bit Win10)

Hello all, I have built Qt 5.5.1 statically for Windows 10 (using MinGW) and I'm trying to build a simple QML app using this 64-bit kit. Here is the content of main.cpp:

QGuiApplication app(argc, argv);
TestApp testApp;
QQmlApplicationEngine engine;
QQmlContext* context = engine.rootContext();
engine.addImportPath( QStringLiteral("/qml") );
engine.addImportPath( QStringLiteral("/qml/QtQuick") );
engine.addImportPath( QStringLiteral("/qml/QtQuick/Controls") );
engine.load(QUrl(QStringLiteral("qrc:/main.qml")));

Here is the content of main.qml:

import QtQuick 2.5
import QtQuick.Window 2.2
import QtQuick.Controls 1.1

Window {
    visible: true
    width: Screen.width
    height: Screen.height

    Button {
        id: runButton
        anchors.verticalCenter: parent.verticalCenter
        text: "Click Me!"
        width: implicitWidth
        height: implicitHeight
        onClicked: {
        }
    }
}

The exe compiles just fine, but when I launch it I get the runtime error "module "QtQuick.Controls" is not installed". I've been searching the forums and googled a lot but I didn't find a solution. Do you have any ideas? Thanks in advance! Regards, Manfred

Have you deployed the app? Used windeployqt? Anyway, the Quick Controls QML code must also be copied to your local app directory; do you have it in the locations you have given to engine.addImportPath? It can be copied from the binary Qt distribution, which is what windeployqt does. I suppose you have checked that you have actually compiled the Quick Controls C++ code into your static build beforehand (I don't know if it's possible to leave it out, I'm not familiar with static builds).

Hi Eeli, yes, I have used windeployqt_64 but it does not copy anything, I suppose because Qt is statically linked.
Here is what windeployqt prints after it has finished:

Warnings while parsing QML type information of C:/Qt/qt-everywhere-enterprise-src-5.5.1/qtbase/qml/QtCharts:
C:\Qt\qt-everywhere-enterprise-src-5.5.1\qtbase\qml\QtCharts\plugins.qmltypes:1:24: Reading only version 1.1 parts.
C:\Qt\qt-everywhere-enterprise-src-5.5.1\qtbase\qml\QtCharts\plugins.qmltypes:10:5: Expected only Component and ModuleApi object definitions.

Then I copied the whole qml folder manually to the folder where my exe is, so it should be able to find it. QtQuickControls should be compiled and linked correctly, otherwise I would have had linker problems, wouldn't I?
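A likely cause with a static Qt build is that QML module plugins cannot be loaded dynamically from a statically linked exe: the plugin code has to be linked into the application and registered with Q_IMPORT_PLUGIN before the engine loads the QML. A build-configuration sketch follows; the plugin class names are assumptions and must be checked against the Q_PLUGIN_METADATA class in each module's plugin.cpp (in qtdeclarative and qtquickcontrols), and the matching static libraries still have to appear on the linker line.

// main.cpp (sketch only, not verified against Qt 5.5.1)
#include <QtPlugin>

Q_IMPORT_PLUGIN(QtQuick2Plugin)        // provides "import QtQuick 2.x"
Q_IMPORT_PLUGIN(QtQuick2WindowPlugin)  // provides "import QtQuick.Window 2.x" (name is an assumption)
// Q_IMPORT_PLUGIN(<ControlsPluginClass>)  // QtQuick.Controls: look up the exact
//                                         // class name in the module's plugin.cpp

Even with the plugins linked in, the module's qmldir and QML files must still be reachable through an import path or the resource system, since QtQuick.Controls 1.x is implemented largely in QML.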
https://forum.qt.io/topic/74470/module-qtquick-controls-is-not-installed-static-qt-64bit-win10
CC-MAIN-2018-43
refinedweb
396
52.05
battery-d 0.0.2

Simple way to get battery info.

battery-d is a simple library for reading battery info on Linux laptops. It provides access to the battery status (discharging, charging, or full), the battery level (0-100%), the time remaining, and the time until full. It also provides access to the raw data (which can differ between laptops).

To use this package, run the following command in your project's root directory:

dub run battery-d

Manual usage: put the following dependency into your project's dependences section: battery-d

As library

battery-d can be used as a library. Just add it to the dependencies in dub.json. Example usage:

import std.stdio;
import battery.d;

void main()
{
    auto b = new Battery();
    writeln("Level: ", b.level);
    writeln("Status: ", b.status);
}

Advanced usage

battery-d has been developed as a rewrite of an old Perl script which parses the output of the acpi command. By default, battery-d -pc outputs a coloured battery level:

- <= 20% - red
- <= 50% - yellow
- <= 100% - green

The output can be customized by editing the code or via CLI arguments:

battery-d -pc --threshold=30=["%F{orange}","%%%f"]

This command adds a new threshold, which prepends the battery level with %F{orange} and appends %%%f. The output will look like this: %F{orange}29%%%f. In zsh this produces an orange-coloured 29%.

Package info:
- Registered by Azbuka
- 0.0.2 released 5 years ago
- Azbukagh/battery-d
- MIT
- Authors: -
- Dependencies: none
- Versions: show all 5 versions
- Download stats: 0 downloads today, 0 this week, 0 this month, 14 downloads total
- Score: 0.4
- Short URL: battery-d.dub.pm
https://code.dlang.org/packages/battery-d
CC-MAIN-2022-33
refinedweb
257
56.55
itertools – Iterator functions for efficient looping

The Iterators

The chain() function takes several iterators as arguments and returns a single iterator that produces the contents of all of them as though they came from a single sequence.

from itertools import *

for i in chain([1, 2, 3], ['a', 'b', 'c']):
    print i

$ python itertools_chain.py
1
2
3
a
b
c

izip() returns an iterator that combines the elements of several iterators into tuples. It works like the built-in function zip(), except that it returns an iterator instead of a list.

from itertools import *

for i in izip([1, 2, 3], ['a', 'b', 'c']):
    print i

$ python itertools_izip.py
(1, 'a')
(2, 'b')
(3, 'c')

The islice() function returns an iterator which returns selected items from the input iterator, by index. It takes the same arguments as the slice operator for lists: start, stop, and step. The start and step arguments are optional.

from itertools import *

print 'Stop at 5:'
for i in islice(count(), 5):
    print i

print 'Start at 5, Stop at 10:'
for i in islice(count(), 5, 10):
    print i

print 'By tens to 100:'
for i in islice(count(), 0, 100, 10):
    print i

$ python itertools_islice.py
Stop at 5:
0
1
2
3
4
Start at 5, Stop at 10:
5
6
7
8
9
By tens to 100:
0
10
20
30
40
50
60
70
80
90

The tee() function returns several independent iterators (defaults to 2) based on a single original input. It has semantics similar to the Unix tee utility, which repeats the values it reads from its input and writes them to a named file and standard output.

from itertools import *

r = islice(count(), 5)
i1, i2 = tee(r)

for i in i1:
    print 'i1:', i
for i in i2:
    print 'i2:', i

$ python itertools_tee.py
i1: 0
i1: 1
i1: 2
i1: 3
i1: 4
i2: 0
i2: 1
i2: 2
i2: 3
i2: 4

Since the new iterators created by tee() share the input, you should not use the original iterator any more. If you do consume values from the original input, the new iterators will not produce those values:

from itertools import *

r = islice(count(), 5)
i1, i2 = tee(r)

for i in r:
    print 'r:', i
    if i > 1:
        break

for i in i1:
    print 'i1:', i
for i in i2:
    print 'i2:', i

$ python itertools_tee_error.py
r: 0
r: 1
r: 2
i1: 3
i1: 4
i2: 3
i2: 4

Converting Inputs

The imap() function returns an iterator that calls a function on the values in the input iterators, and returns the results. It works like the built-in map(), except that it stops when any input iterator is exhausted, instead of inserting None values to consume all of the inputs.
from itertools import *

print 'Doubles:'
for i in imap(lambda x: 2*x, xrange(5)):
    print i

print 'Multiples:'
for i in imap(lambda x, y: (x, y, x*y), xrange(5), xrange(5, 10)):
    print '%d * %d = %d' % i

$ python itertools_imap.py
Doubles:
0
2
4
6
8
Multiples:
0 * 5 = 0
1 * 6 = 6
2 * 7 = 14
3 * 8 = 24
4 * 9 = 36

The starmap() function is similar to imap(), but instead of constructing a tuple from multiple iterators, it splits up the items in a single iterator as arguments to the mapping function using the * syntax.

from itertools import *

values = [(0, 5), (1, 6), (2, 7), (3, 8), (4, 9)]
for i in starmap(lambda x, y: (x, y, x*y), values):
    print '%d * %d = %d' % i

$ python itertools_starmap.py
0 * 5 = 0
1 * 6 = 6
2 * 7 = 14
3 * 8 = 24
4 * 9 = 36

Producing New Values

The count() function returns an iterator that produces consecutive integers, indefinitely. The first number can be passed as an argument (the default is zero); there is no upper bound. The loop below stops when the finite list argument to izip() is exhausted.

from itertools import *

for i in izip(count(1), ['a', 'b', 'c']):
    print i

$ python itertools_count.py
(1, 'a')
(2, 'b')
(3, 'c')

The cycle() function returns an iterator that repeats the contents of the arguments it is given, indefinitely. The counter variable below is used to break out of the loop after a few cycles.

from itertools import *

i = 0
for item in cycle(['a', 'b', 'c']):
    i += 1
    if i == 10:
        break
    print (i, item)

$ python itertools_cycle.py
(1, 'a')
(2, 'b')
(3, 'c')
(4, 'a')
(5, 'b')
(6, 'c')
(7, 'a')
(8, 'b')
(9, 'c')

The repeat() function returns an iterator that produces the same value each time it is accessed. It keeps going forever, unless the optional times argument is provided to limit it.

from itertools import *

for i in repeat('over-and-over', 5):
    print i

$ python itertools_repeat.py
over-and-over
over-and-over
over-and-over
over-and-over
over-and-over

It is useful to combine repeat() with izip() or imap() when invariant values need to be included with the values from the other iterators.

from itertools import *

for i, s in izip(count(), repeat('over-and-over', 5)):
    print i, s

$ python itertools_repeat_izip.py
0 over-and-over
1 over-and-over
2 over-and-over
3 over-and-over
4 over-and-over

from itertools import *

for i in imap(lambda x, y: (x, y, x*y), repeat(2), xrange(5)):
    print '%d * %d = %d' % i

$ python itertools_repeat_imap.py
2 * 0 = 0
2 * 1 = 2
2 * 2 = 4
2 * 3 = 6
2 * 4 = 8

Filtering

The dropwhile() function returns an iterator that returns elements of the input iterator after a condition becomes false for the first time.
It does not filter every item of the input; after the condition is false the first time, all of the remaining items in the input are returned.

from itertools import *

def should_drop(x):
    print 'Testing:', x
    return (x < 1)

for i in dropwhile(should_drop, [ -1, 0, 1, 2, 3, 4, 1, -2 ]):
    print 'Yielding:', i

$ python itertools_dropwhile.py
Testing: -1
Testing: 0
Testing: 1
Yielding: 1
Yielding: 2
Yielding: 3
Yielding: 4
Yielding: 1
Yielding: -2

The opposite of dropwhile(), takewhile() returns an iterator that returns items from the input iterator as long as the test function returns true.

from itertools import *

def should_take(x):
    print 'Testing:', x
    return (x < 2)

for i in takewhile(should_take, [ -1, 0, 1, 2, 3, 4, 1, -2 ]):
    print 'Yielding:', i

$ python itertools_takewhile.py
Testing: -1
Yielding: -1
Testing: 0
Yielding: 0
Testing: 1
Yielding: 1
Testing: 2

ifilter() returns an iterator that works like the built-in filter() does for lists, including only items for which the test function returns true. It is different from dropwhile() in that every item is tested before it is returned.

from itertools import *

def check_item(x):
    print 'Testing:', x
    return (x < 1)

for i in ifilter(check_item, [ -1, 0, 1, 2, 3, 4, 1, -2 ]):
    print 'Yielding:', i

$ python itertools_ifilter.py
Testing: -1
Yielding: -1
Testing: 0
Yielding: 0
Testing: 1
Testing: 2
Testing: 3
Testing: 4
Testing: 1
Testing: -2
Yielding: -2

The opposite of ifilter(), ifilterfalse() returns an iterator that includes only items where the test function returns false.

from itertools import *

def check_item(x):
    print 'Testing:', x
    return (x < 1)

for i in ifilterfalse(check_item, [ -1, 0, 1, 2, 3, 4, 1, -2 ]):
    print 'Yielding:', i

$ python itertools_ifilterfalse.py
Testing: -1
Testing: 0
Testing: 1
Yielding: 1
Testing: 2
Yielding: 2
Testing: 3
Yielding: 3
Testing: 4
Yielding: 4
Testing: 1
Yielding: 1
Testing: -2

Grouping Data

The groupby() function returns an iterator that produces sets of values grouped by a common key.
This example from the standard library documentation shows how to group keys in a dictionary which have the same value:

from itertools import *
from operator import itemgetter

d = dict(a=1, b=2, c=1, d=2, e=1, f=2, g=3)
di = sorted(d.iteritems(), key=itemgetter(1))
for k, g in groupby(di, key=itemgetter(1)):
    print k, map(itemgetter(0), g)

$ python itertools_groupby.py
1 ['a', 'c', 'e']
2 ['b', 'd', 'f']
3 ['g']

This more complicated example illustrates grouping related values based on some attribute. Notice that the input sequence needs to be sorted on the key in order for the groupings to work out as expected.

from itertools import *

class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y
    def __repr__(self):
        return 'Point(%s, %s)' % (self.x, self.y)
    def __cmp__(self, other):
        return cmp((self.x, self.y), (other.x, other.y))

# Create a dataset of Point instances
data = list(imap(Point,
                 cycle(islice(count(), 3)),
                 islice(count(), 10),
                 )
            )
print 'Data:', data
print

# Try to group the unsorted data based on X values
print 'Grouped, unsorted:'
for k, g in groupby(data, lambda o: o.x):
    print k, list(g)
print

# Sort the data
data.sort()
print 'Sorted:', data
print

# Group the sorted data based on X values
print 'Grouped, sorted:'
for k, g in groupby(data, lambda o: o.x):
    print k, list(g)
print

$ python itertools_groupby_seq.py
Data: [Point(0, 0), Point(1, 1), Point(2, 2), Point(0, 3), Point(1, 4), Point(2, 5), Point(0, 6), Point(1, 7), Point(2, 8), Point(0, 9)]

Grouped, unsorted:
0 [Point(0, 0)]
1 [Point(1, 1)]
2 [Point(2, 2)]
0 [Point(0, 3)]
1 [Point(1, 4)]
2 [Point(2, 5)]
0 [Point(0, 6)]
1 [Point(1, 7)]
2 [Point(2, 8)]
0 [Point(0, 9)]

Sorted: [Point(0, 0), Point(0, 3), Point(0, 6), Point(0, 9), Point(1, 1), Point(1, 4), Point(1, 7), Point(2, 2), Point(2, 5), Point(2, 8)]

Grouped, sorted:
0 [Point(0, 0), Point(0, 3), Point(0, 6), Point(0, 9)]
1 [Point(1, 1), Point(1, 4), Point(1, 7)]
2 [Point(2, 2), Point(2, 5), Point(2, 8)]

See also

- itertools - The standard library documentation for this module.
- The Standard ML Basis Library - The library for SML.
- Definition of Haskell and the Standard Libraries - Standard library specification for the functional language Haskell.
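The listings above are the Python 2 API. Under Python 3 the i-prefixed variants are gone: izip(), imap(), and ifilter() became the built-ins zip(), map(), and filter() (which now return iterators), ifilterfalse() was renamed itertools.filterfalse(), and chain(), islice(), tee(), count(), cycle(), repeat(), dropwhile(), takewhile(), starmap(), and groupby() keep their names. A quick sketch of the same idioms in Python 3:

```python
# Python 3 equivalents of the Python 2 examples above
from itertools import chain, islice, count, repeat, filterfalse, starmap

# chain() is unchanged
print(list(chain([1, 2, 3], ['a', 'b', 'c'])))  # [1, 2, 3, 'a', 'b', 'c']

# izip() became the built-in zip(), which now returns an iterator
print(list(zip([1, 2, 3], ['a', 'b', 'c'])))    # [(1, 'a'), (2, 'b'), (3, 'c')]

# imap() became the built-in map()
print(list(map(lambda x: 2 * x, range(5))))     # [0, 2, 4, 6, 8]

# ifilterfalse() was renamed itertools.filterfalse()
print(list(filterfalse(lambda x: x < 1, [-1, 0, 1, 2])))  # [1, 2]

# islice(), count(), repeat(), and starmap() keep their names
print(list(islice(count(), 5)))                 # [0, 1, 2, 3, 4]
print(list(repeat('over-and-over', 3)))         # ['over-and-over'] * 3
print(list(starmap(lambda x, y: x * y, [(0, 5), (1, 6)])))  # [0, 6]
```

Note that the Python 3 versions are all lazy iterators, so list() is needed to materialize the results for printing.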
https://pymotw.com/2/itertools/index.html
CC-MAIN-2017-22
refinedweb
1,444
63.22
> On 13 Nov 2017, at 5:37 pm, Martin Pieuchot <m...@openbsd.org> wrote: > > On 13/11/17(Mon) 10:56, David Gwynne wrote: >> On Sun, Nov 12, 2017 at 02:45:05PM +0100, Martin Pieuchot wrote: >>> [...] >>> We're currently using net_tq() to distribute load for incoming packets. >>> So I believe you should schedule the task on the current taskq or the >>> first one if coming from userland. >> >> /em shrug. >> >> i dont know how to tell what the current softnet tq is. > > That's something we need to know. This will allow to have per taskq > data structure. maybe we could use curproc()->p_spare as an index into the array of softnets. > >> would it >> be enough to simply not mix if the ifq_idx into the argument to >> net_tq? > > Maybe. If a CPU processing packets from myx0 in softnet0 and want to > send forward via myx1, using net_tq() like you do it know will pick > softnet1. > My suggestion was to keep softnet1 busy with processing packets coming > from myx1 and enqueue the task on softnet0. This way no inter CPU > communication is needed. > But without testing we can't tell if it's more efficient the other way. scheduling the bundled start routine as a task on the current softnet would require each ifq to allocate a task for each softnet, and would probably result in a lot of thrashing if multiple threads are pushing packets onto a ring. one of them would have to remove the work from all of them. i think this is good enough for now. my tests with 2 softnets mixed in with this diff seemed ok. with 8 it sucked cos there was so much waiting on the netlock, so it's hard to say what the effect of this is. > >>>> because ifq work could now be pending in a softnet taskq, >>>> ifq_barrier also needs to put a barrier in the taskq. this is >>>> implemented using taskq_barrier, which i wrote ages ago but didn't >>>> have a use case for at the time. >> >> both hrvoje and i hit a deadlock when downing an interface. 
if >> softnet (or softnets) are waiting on the netlock that and ioctl >> path holds, then an ifq_barrier in that ioctl path will sleep forever >> waiting for a task to run in softnets that are waiting for the >> netlock. >> >> to mitigate this ive added code like what uvm has. thoughts? > > That's only called in ioctl(2) path, right? Which commands end up > there? ifq_barrier is generally called from the drv_stop/drv_down path by mpsafe drivers. if you go ifconfig down is the most obvious. other ioctl paths include those that generate ENETRESET, eg, configuring lladdr. some drivers also call their down routine during its own reconfiguration, or to fix a stuck ring. this diff also adds a call to ifq_barrier inside ifq_destroy, which is called when an interface is detached. > The problem with driver *_ioctl() routines is that pseudo-driver need > the NET_LOCK() while the others should not. So I'd rather see the > NET_LOCK() released before calling ifp->if_ioctl(). > >> Index: share/man/man9/task_add.9 >> =================================================================== >> RCS file: /cvs/src/share/man/man9/task_add.9,v >> retrieving revision 1.16 >> diff -u -p -r1.16 task_add.9 >> --- share/man/man9/task_add.9 14 Sep 2015 15:14:55 -0000 1.16 >> +++ share/man/man9/task_add.9 13 Nov 2017 00:46:17 -0000 >> @@ -20,6 +20,7 @@ >> .Sh NAME >> .Nm taskq_create , >> .Nm taskq_destroy , >> +.Nm taskq_barrier , >> .Nm task_set , >> .Nm task_add , >> .Nm task_del , >> @@ -37,6 +38,8 @@ >> .Ft void >> .Fn taskq_destroy "struct taskq *tq" >> .Ft void >> +.Fn taskq_barrier "struct taskq *tq" >> +.Ft void >> .Fn task_set "struct task *t" "void (*fn)(void *)" "void *arg" >> .Ft int >> .Fn task_add "struct taskq *tq" "struct task *t" >> @@ -88,6 +91,15 @@ Calling >> against the system taskq is an error and will lead to undefined >> behaviour or a system fault. 
>> .Pp >> +.Fn taskq_barrier >> +guarantees that any task that was running on the >> +.Fa tq >> +taskq when the barrier was called has finished by the time the barrier >> +returns. >> +.Fn taskq_barrier >> +is only supported on taskqs serviced by 1 thread, >> +and may not be called by a task running in the specified taskq. >> +.Pp >> It is the responsibility of the caller to provide the >> .Fn task_set , >> .Fn task_add , >> @@ -163,6 +175,8 @@ argument given in >> and >> .Fn taskq_destroy >> can be called during autoconf, or from process context. >> +.Fn taskq_barrier >> +can be called from process context. >> .Fn task_set , >> .Fn task_add , >> and >> Index: sys/sys/task.h >> =================================================================== >> RCS file: /cvs/src/sys/sys/task.h,v >> retrieving revision 1.11 >> diff -u -p -r1.11 task.h >> --- sys/sys/task.h 7 Jun 2016 07:53:33 -0000 1.11 >> +++ sys/sys/task.h 13 Nov 2017 00:46:17 -0000 >> @@ -43,6 +43,7 @@ extern struct taskq *const systqmp; >> >> struct taskq *taskq_create(const char *, unsigned int, int, unsigned int); >> void taskq_destroy(struct taskq *); >> +void taskq_barrier(struct taskq *); >> >> void task_set(struct task *, void (*)(void *), void *); >> int task_add(struct taskq *, struct task *); >> Index: sys/kern/kern_task.c >> =================================================================== >> RCS file: /cvs/src/sys/kern/kern_task.c,v >> retrieving revision 1.20 >> diff -u -p -r1.20 kern_task.c >> --- sys/kern/kern_task.c 30 Oct 2017 14:01:42 -0000 1.20 >> +++ sys/kern/kern_task.c 13 Nov 2017 00:46:17 -0000 >> @@ -22,6 +22,7 @@ >> #include <sys/mutex.h> >> #include <sys/kthread.h> >> #include <sys/task.h> >> +#include <sys/proc.h> >> >> #define TASK_ONQUEUE 1 >> >> @@ -68,6 +69,7 @@ struct taskq *const systqmp = &taskq_sys >> >> void taskq_init(void); /* called in init_main.c */ >> void taskq_create_thread(void *); >> +void taskq_barrier_task(void *); >> int taskq_sleep(const volatile void *, 
struct mutex *, int, >> const char *, int); >> int taskq_next_work(struct taskq *, struct task *, sleepfn); >> @@ -176,6 +178,30 @@ taskq_create_thread(void *arg) >> } while (tq->tq_running < tq->tq_nthreads); >> >> mtx_leave(&tq->tq_mtx); >> +} >> + >> +void >> +taskq_barrier(struct taskq *tq) >> +{ >> + struct sleep_state sls; >> + unsigned int notdone = 1; >> + struct task t = TASK_INITIALIZER(taskq_barrier_task, ¬done); >> + >> + task_add(tq, &t); >> + >> + while (notdone) { >> + sleep_setup(&sls, ¬done, PWAIT, "tqbar"); >> + sleep_finish(&sls, notdone); >> + } >> +} >> + >> +void >> +taskq_barrier_task(void *p) >> +{ >> + unsigned int *notdone = p; >> + >> + *notdone = 0; >> + wakeup_one(notdone); >> } >> >> void >> Index: sys/net/ifq.c >> =================================================================== >> RCS file: /cvs/src/sys/net/ifq.c,v >> retrieving revision 1.12 >> diff -u -p -r1.12 ifq.c >> --- sys/net/ifq.c 2 Jun 2017 00:07:12 -0000 1.12 >> +++ sys/net/ifq.c 13 Nov 2017 00:46:17 -0000 >> @@ -64,9 +64,16 @@ struct priq { >> void ifq_start_task(void *); >> void ifq_restart_task(void *); >> void ifq_barrier_task(void *); >> +void ifq_bundle_task(void *); >> >> #define TASK_ONQUEUE 0x1 >> >> +static inline void >> +ifq_run_start(struct ifqueue *ifq) >> +{ >> + ifq_serialize(ifq, &ifq->ifq_start); >> +} >> + >> void >> ifq_serialize(struct ifqueue *ifq, struct task *t) >> { >> @@ -108,6 +115,16 @@ ifq_is_serialized(struct ifqueue *ifq) >> } >> >> void >> +ifq_start(struct ifqueue *ifq) >> +{ >> + if (ifq_len(ifq) >= 4) { >> + task_del(ifq->ifq_softnet, &ifq->ifq_bundle); >> + ifq_run_start(ifq); >> + } else >> + task_add(ifq->ifq_softnet, &ifq->ifq_bundle); >> +} >> + >> +void >> ifq_start_task(void *p) >> { >> struct ifqueue *ifq = p; >> @@ -131,6 +148,14 @@ ifq_restart_task(void *p) >> } >> >> void >> +ifq_bundle_task(void *p) >> +{ >> + struct ifqueue *ifq = p; >> + >> + ifq_run_start(ifq); >> +} >> + >> +void >> ifq_barrier(struct ifqueue *ifq) >> { 
>> struct sleep_state sls; >> @@ -140,6 +165,18 @@ ifq_barrier(struct ifqueue *ifq) >> /* this should only be called from converted drivers */ >> KASSERT(ISSET(ifq->ifq_if->if_xflags, IFXF_MPSAFE)); >> >> + if (!task_del(ifq->ifq_softnet, &ifq->ifq_bundle)) { >> + int netlocked = (rw_status(&netlock) == RW_WRITE); >> + >> + if (netlocked) >> + NET_UNLOCK(); >> + >> + taskq_barrier(ifq->ifq_softnet); >> + >> + if (netlocked) >> + NET_LOCK(); >> + } >> + >> if (ifq->ifq_serializer == NULL) >> return; >> >> @@ -168,6 +205,7 @@ void >> ifq_init(struct ifqueue *ifq, struct ifnet *ifp, unsigned int idx) >> { >> ifq->ifq_if = ifp; >> + ifq->ifq_softnet = net_tq(ifp->if_index); >> ifq->ifq_softc = NULL; >> >> mtx_init(&ifq->ifq_mtx, IPL_NET); >> @@ -189,6 +227,7 @@ ifq_init(struct ifqueue *ifq, struct ifn >> mtx_init(&ifq->ifq_task_mtx, IPL_NET); >> TAILQ_INIT(&ifq->ifq_task_list); >> ifq->ifq_serializer = NULL; >> + task_set(&ifq->ifq_bundle, ifq_bundle_task, ifq); >> >> task_set(&ifq->ifq_start, ifq_start_task, ifq); >> task_set(&ifq->ifq_restart, ifq_restart_task, ifq); >> @@ -239,6 +278,8 @@ void >> ifq_destroy(struct ifqueue *ifq) >> { >> struct mbuf_list ml = MBUF_LIST_INITIALIZER(); >> + >> + ifq_barrier(ifq); /* ensure nothing is running with the ifq */ >> >> /* don't need to lock because this is the last use of the ifq */ >> >> Index: sys/net/ifq.h >> =================================================================== >> RCS file: /cvs/src/sys/net/ifq.h,v >> retrieving revision 1.13 >> diff -u -p -r1.13 ifq.h >> --- sys/net/ifq.h 3 May 2017 20:55:29 -0000 1.13 >> +++ sys/net/ifq.h 13 Nov 2017 00:46:17 -0000 >> @@ -25,6 +25,7 @@ struct ifq_ops; >> >> struct ifqueue { >> struct ifnet *ifq_if; >> + struct taskq *ifq_softnet; >> union { >> void *_ifq_softc; >> /* >> @@ -57,6 +58,7 @@ struct ifqueue { >> struct mutex ifq_task_mtx; >> struct task_list ifq_task_list; >> void *ifq_serializer; >> + struct task ifq_bundle; >> >> /* work to be serialised */ >> struct task 
ifq_start; >> @@ -378,6 +380,7 @@ void ifq_init(struct ifqueue *, struct >> void ifq_attach(struct ifqueue *, const struct ifq_ops *, void *); >> void ifq_destroy(struct ifqueue *); >> int ifq_enqueue(struct ifqueue *, struct mbuf *); >> +void ifq_start(struct ifqueue *); >> struct mbuf *ifq_deq_begin(struct ifqueue *); >> void ifq_deq_commit(struct ifqueue *, struct mbuf *); >> void ifq_deq_rollback(struct ifqueue *, struct mbuf *); >> @@ -411,12 +414,6 @@ static inline unsigned int >> ifq_is_oactive(struct ifqueue *ifq) >> { >> return (ifq->ifq_oactive); >> -} >> - >> -static inline void >> -ifq_start(struct ifqueue *ifq) >> -{ >> - ifq_serialize(ifq, &ifq->ifq_start); >> } >> >> static inline void >>
https://www.mail-archive.com/tech@openbsd.org/msg42668.html
CC-MAIN-2021-10
refinedweb
1,468
64.1
Ticket #8160 (closed Bugs: fixed) multiprecision: Number is not assigned after dividing zero by something.

Description

#include <iostream>
#include <boost/multiprecision/cpp_int.hpp>
using boost::multiprecision::cpp_int;

int main()
{
    cpp_int a = 1;
    a = 0 / cpp_int("1");
    std::cout << "a = " << a << "\n"; // a = 1
    return 0;
}

divide.hpp, line 341, says "All the limbs in x are zero, so is the result:". However, the result is never actually assigned to zero.

Attachments

Change History

comment:1 Changed 4 years ago by Stepan Podoskin <stepik-777@…>
- Summary changed from "multiprecision: Number is not assinned after dividing zero by something." to "multiprecision: Number is not assigned after dividing zero by something."

comment:3 Changed 4 years ago by johnmaddock
- Status changed from new to closed
- Resolution set to fixed

comment:4 Changed 4 years ago by Stepan Podoskin <stepik-777@…>
I just want to note that the comparison with zero here isn't necessary. There is:
if((r_order == 0) && (*pr == 0))
And a few lines below:
if((r_order == 0) && (*pr < y))
There is no reason to treat zero as a special value; it will be handled by the second if, so lines 339-345 can simply be removed.

comment:5 Changed 4 years ago by johnmaddock
Um.. clearly the bug I have at present has eaten my brains away :-( Will fix shortly, thanks for your patience!

Note: See TracTickets for help on using tickets.

patch "boost/multiprecision"
https://svn.boost.org/trac/boost/ticket/8160
CC-MAIN-2017-09
refinedweb
229
60.65
The Elastic APM Go agent provides an implementation of the OpenTracing API, building on top of the core Elastic APM API. Spans created through the OpenTracing API will be translated to Elastic APM transactions or spans. Root spans, and spans created with a remote span context, will be translated to Elastic APM transactions; all others will be created as Elastic APM spans.

Initializing the tracer

The OpenTracing support is implemented as a bridge on top of the core Elastic APM API. To initialize the OpenTracing tracer implementation, you must first import the apmot package:

import (
    "go.elastic.co/apm/module/apmot"
)

The apmot package exports a function, New, which returns an implementation of the opentracing.Tracer interface. If you simply call apmot.New() without any arguments, the returned tracer will wrap apm.DefaultTracer. If you wish to use a different apm.Tracer, then you can pass it with apmot.New(apmot.WithTracer(t)).

otTracer := apmot.New()

Once you have obtained an opentracing.Tracer, you can use the standard OpenTracing API to report spans to Elastic APM. Please refer to opentracing-go for documentation on the OpenTracing Go API.

import (
    "context"

    "github.com/opentracing/opentracing-go"

    "go.elastic.co/apm/module/apmot"
)

func main() {
    opentracing.SetGlobalTracer(apmot.New())
    parent, ctx := opentracing.StartSpanFromContext(context.Background(), "parent")
    child, _ := opentracing.StartSpanFromContext(ctx, "child")
    child.Finish()
    parent.Finish()
}

Mixing Native and OpenTracing APIs

When you import apmot, transactions and spans created with the native API will be made available as OpenTracing spans, enabling you to mix the use of the native and OpenTracing APIs, e.g.:

// Transaction created through native API.
transaction := apm.DefaultTracer.StartTransaction("GET /", "request")
ctx := apm.ContextWithTransaction(context.Background(), transaction)

// Span created through OpenTracing API will be a child of the transaction.
otSpan, ctx := opentracing.StartSpanFromContext(ctx, "ot-span")

// Span created through the native API will be a child of the span created
// above via the OpenTracing API.
apmSpan, ctx := apm.StartSpan(ctx, "apm-span", "apm-span")

The opentracing.SpanFromContext function will return an opentracing.Span that wraps either an apm.Span or apm.Transaction. These span objects are intended only for passing in context when creating a new span through the OpenTracing API, and are not fully functional spans. In particular, the Finish and Log* methods are no-ops, and the Tracer method returns a no-op tracer.

Elastic APM specific tags

Elastic APM defines some tags which are not included in the OpenTracing API, but are relevant in the context of Elastic APM. Some tags are relevant only to Elastic APM transactions.

- type - sets the type of the transaction or span, e.g. "request", or "ext.http". If type is not specified, then the type may be inferred from other tags: e.g. if "http.url" is specified, then the type will be "request" for transactions, and "ext.http" for spans. If no type can be inferred, it is set to "unknown".

The following tags are relevant only to root or service-entry spans, which are translated to Elastic APM transactions:

- user.id - sets the user ID, which is added to the transaction context.
- result - sets the result of the transaction. If result is not specified, but the error tag is set to true, then the transaction result will be set to "error".

Span Logs

The Span.LogKV and Span.LogFields methods will send error events to Elastic APM for logs with the "event" field set to "error". The deprecated log methods Span.Log, Span.LogEvent, and Span.LogEventWithPayload are no-ops.

Caveats

Context Propagation

We support the TextMap and HTTPHeaders propagation formats; Binary is not currently supported.

Span References

We support only ChildOf references. Other references, e.g. FollowsFrom, are not currently supported.

Baggage

Span.SetBaggageItem is a no-op; baggage items are silently dropped.
https://www.elastic.co/guide/en/apm/agent/go/1.x/opentracing.html
CC-MAIN-2019-39
refinedweb
611
52.46
Writing to a Text File

Writing to text files is useful when you want a way of permanently storing your data. Although databases and XML files are more popular choices, text files are still used by much legacy software, and if you are writing a small amount of simple data, a text file is an acceptable choice. Before we can write to a file, we need a stream, specifically a FileStream. A stream represents a file on your disk, or it can even point to a remote destination. The FileStream class will be used to point to a file. The FileStream object is then passed as a parameter to the constructor of the StreamWriter class. The StreamWriter class is the one responsible for writing to the stream, that is, the file indicated by the stream. Note that both the FileStream and StreamWriter classes are contained in the System.IO namespace. The following program asks the user to enter first names, last names, and ages of people.

using System;
using System.IO;
using System.Text;

namespace WritingToFile
{
    class Program
    {
        static void Main()
        {
            try
            {
                FileStream fs = new FileStream("sample.txt", FileMode.Create);
                StreamWriter writer = new StreamWriter(fs);
                StringBuilder output = new StringBuilder();
                int repeat = 0;

                do
                {
                    Console.WriteLine("Please enter first name: ");
                    output.Append(Console.ReadLine() + "#");
                    Console.WriteLine("Please enter last name: ");
                    output.Append(Console.ReadLine() + "#");
                    Console.WriteLine("Please enter age: ");
                    output.Append(Console.ReadLine());

                    writer.WriteLine(output.ToString());
                    output.Clear();

                    Console.WriteLine("Repeat? 1-Yes, 0-No : ");
                    repeat = Convert.ToInt32(Console.ReadLine());
                } while (repeat != 0);

                writer.Close();
            }
            catch (IOException ex)
            {
                Console.WriteLine(ex.Message);
            }
        }
    }
}

Figure 1

Please enter first name: John
Please enter last name: Smith
Please enter age: 21
Repeat? 1-Yes, 0-No: 1
Please enter first name: Mike
Please enter last name: Roberts
Please enter age: 31
Repeat? 1-Yes, 0-No: 1
Please enter first name: Garry
Please enter last name: Mathews
Please enter age: 27
Repeat? 1-Yes, 0-No: 0

The output shows that we entered three persons; they will be stored in a text file located at the path given when we created the FileStream object. The first statement inside the try block creates the FileStream. The path we provided is a relative path with no parent directories, so the text file will be found in the same directory as the program executable. The constructor of the FileStream class, as shown above, accepts the path of the file and a FileMode enumeration value. The FileMode enumeration specifies how the operating system should open the file. The available values are shown below:

Figure 2

Passing FileMode.Create simply means create the file and, if the file already exists, overwrite it. If we want to append new persons to the file instead, we can use FileMode.Append. Another enumeration, System.IO.FileAccess, determines the capabilities you have on the file. Its values are listed below:

Figure 3

To use this enumeration, you need a different constructor of the FileStream class, one which accepts a third argument taken from the FileAccess enumeration:

FileStream fs = new FileStream("sample.txt", FileMode.Create, FileAccess.ReadWrite);

After we have successfully created the FileStream object, we pass it to the constructor of the StreamWriter class:

StreamWriter writer = new StreamWriter(fs);

The StreamWriter class contains the actual methods for writing data to the file. These methods are shown below:

Figure 4

We used an instance of StringBuilder to build each line of output. Note that we appended a "#" character at the end of the first name and the last name. It is used as a delimiter so that it will be easy for us to separate each field when we read the data back in the next lesson. You can use any special character as a delimiter, but # and $ are the most widely used.
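The Create-versus-Append distinction is not specific to C#. As a rough cross-language sketch (Python's built-in open() modes "w" and "a" behave much like FileMode.Create and FileMode.Append; the demo file name is made up for illustration):

```python
import os
import tempfile

# Hypothetical demo file in the system temp directory.
path = os.path.join(tempfile.gettempdir(), "filemode_demo.txt")

with open(path, "w") as f:   # like FileMode.Create: create or overwrite
    f.write("first\n")

with open(path, "a") as f:   # like FileMode.Append: keep existing content
    f.write("second\n")

with open(path) as f:
    appended = f.read()      # "first\nsecond\n"

with open(path, "w") as f:   # Create again: the old contents are gone
    f.write("third\n")

with open(path) as f:
    overwritten = f.read()   # "third\n"

print(appended, overwritten)
```

The same rule applies in both languages: Create truncates an existing file, Append preserves it and writes at the end.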
Lastly, we need to close the StreamWriter using the Close() method. This releases the resources it holds and also closes the stream associated with it. We enclosed everything in a try block because these operations can throw a System.IO.IOException; for example, if the file cannot be opened, this exception will be thrown. The catch block handles this exception. Let's now look at the created file.

John#Smith#21
Mike#Roberts#31
Garry#Mathews#27

Each line represents a person, and each person is composed of three fields separated by a delimiter. The next lesson will retrieve each person from this file.
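The "#"-delimited record format itself is language-neutral. Here is a minimal sketch of writing and re-reading such records in Python (the file location and sample data are made up for illustration):

```python
import os
import tempfile

# Made-up sample records: (first name, last name, age).
records = [("John", "Smith", "21"), ("Mike", "Roberts", "31")]

path = os.path.join(tempfile.gettempdir(), "sample.txt")

# Write one "#"-delimited line per person.
with open(path, "w") as f:
    for rec in records:
        f.write("#".join(rec) + "\n")

# Read the lines back and split on the delimiter to recover the fields.
with open(path) as f:
    people = [tuple(line.rstrip("\n").split("#")) for line in f]

print(people)
```

Note that this only works as long as the delimiter character never appears inside a field, which is why an uncommon character like # or $ is chosen.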
https://compitionpoint.com/file-write/
Last week we looked at a preliminary version of this application that made use of an EntityJig to display a Spirograph as we provided the values needed to define it. While that was a good start, I decided it would be better to show additional graphics during the jig process, to give a clearer idea of the meaning of the information being requested from the user. I wanted, for instance, to show temporary circles indicating the radii of the outer and inner circles, mainly to make it clearer how the various parameters affect the display of the resultant Spirograph pattern. Anyway, my next step in this process was going to be a post showing how to implement IExtensionApplication from an F# application, but it turns out I've done that already <sigh>. But that's good, as it leaves the coast clear for me to get into the rest of the implementation. I needed IExtensionApplication's Initialize() callback to execute the demand-loading creation code provided in this recent post. Here's the source file I used for that:

// Declare a specific namespace and module name
namespace Spirograph

// Import managed assemblies
open Autodesk.AutoCAD.Runtime
open DemandLoading

type App() =
  interface IExtensionApplication with
    member x.Initialize() =
      try
        RegistryUpdate.RegisterForDemandLoading()
      with _ -> ()
    member x.Terminate() = ()

The application itself now works slightly differently, but using the same principles, overall. I've removed the old prompt-based SPI command, renaming the jig-based SPIG command to be SPI. The new jig has a little more to it - as we're using a DrawJig to draw additional geometry - but it shouldn't be any harder to understand than the last one. I'm also using a tip suggested by Fenton Webb to improve the performance of a complex jig: it's good practice to check WorldDraw.RegenAbort during your WorldDraw, and if the flag is set you should exit immediately. Not doing so can make your jig appear sluggish.
I’ve done this in a number of places, but I’ve also kept the segment count low when drawing our Spirograph pattern inside the jig, just to decrease the likelihood that we’ll have to cancel the draw operation. Here’s the application’s main F# source file: // Autodesk.AutoCAD.GraphicsInterface open System open DemandLoading // Return a sampling of points along a Spirograph's path let pointsOnSpirograph cenX cenY inRad outRad a tStart tEnd num = [| for i in tStart .. tEnd * num do let t = (float i) / (float num)) |] // Different modes of acquisition for our jig type AcquireMode = | Inner | Outer | A type SpiroJig() as this = class inherit DrawJig() // Our member variables let mutable (_pl : Polyline) = null let mutable _cen = Point3d.Origin let mutable _norm = new Vector3d(0.0,0.0,1.0) let mutable _inner = 0.0 let mutable _outer = 0.0 let mutable _a = 0.0 let mutable _mode = Outer member x.StartJig(ed : Editor, pt, pl) = // Set our center and start with the outer radius _cen <- pt _pl <- pl _mode <- Outer _norm <- ed.CurrentUserCoordinateSystem.CoordinateSystem3d.Zaxis let stat = ed.Drag(this) if stat.Status <> PromptStatus.Cancel then // Next we get the inner radius _mode <- Inner let stat = ed.Drag(this) if stat.Status <> PromptStatus.Cancel then // And finally the pen distance _mode <- A ed.Drag(this) else stat else stat // Our Sampler function to acquire the various distances override x.Sampler prompts = // We're just acquiring distances let jo = new JigPromptDistanceOptions() jo.UseBasePoint <- true jo.Cursor <- CursorType.RubberBand // Local function to acquire a distance and return // the appropriate status let getDist (prompts : JigPrompts) (opts : JigPromptDistanceOptions) oldVal = let res = prompts.AcquireDistance(opts) if res.Status <> PromptStatus.OK then (SamplerStatus.Cancel, 0.0) else if oldVal = res.Value then (SamplerStatus.NoChange, 0.0) else (SamplerStatus.OK, res.Value) // Then we have slightly different behavior depending // on the info we're 
      // acquiring
      match _mode with
      // The outer radius...
      | Outer ->
        jo.BasePoint <- _cen
        jo.Message <- "\nRadius of outer circle: "
        let (stat, res) = getDist prompts jo _outer
        if stat = SamplerStatus.OK then
          _outer <- res
        stat
      // The inner radius...
      | Inner ->
        jo.BasePoint <- _cen + new Vector3d(_outer, 0.0, 0.0)
        jo.Message <- "\nRadius of smaller circle: "
        let (stat, res) = getDist prompts jo _inner
        if stat = SamplerStatus.OK then
          _inner <- res
        stat
      // The pen distance...
      | A ->
        jo.BasePoint <- _cen + new Vector3d(_outer - _inner, 0.0, 0.0)
        jo.Message <- "\nPen distance from center of smaller circle: "
        let (stat, res) = getDist prompts jo _a
        if stat = SamplerStatus.OK then
          _a <- res
        stat

    // Our WorldDraw function to display the Spirograph and
    // the related temporary graphics
    override x.WorldDraw(draw : WorldDraw) =
      // Save our current colour, to reset later
      let col = draw.SubEntityTraits.Color
      // Make our construction geometry red
      draw.SubEntityTraits.Color <- (int16 1)
      match _mode with
      | Outer ->
        // Draw the outer circle
        draw.Geometry.Circle(_cen, _outer, _norm) |> ignore
      | Inner ->
        // Draw the outer and inner circles
        draw.Geometry.Circle(_cen, _outer, _norm) |> ignore
        draw.Geometry.Circle
          (_cen + new Vector3d(_outer - _inner, 0.0, 0.0), _inner, _norm)
          |> ignore
      | A ->
        // Draw the outer and inner circles
        draw.Geometry.Circle(_cen, _outer, _norm) |> ignore
        draw.Geometry.Circle
          (_cen + new Vector3d(_outer - _inner, 0.0, 0.0), _inner, _norm)
          |> ignore

      // Check the RegenAbort flag...
      // If it's set then we drop out of the function
      if not draw.RegenAbort then
        draw.SubEntityTraits.Color <- col

        // If getting the outer radius fix the other
        // parameters relative to it (as the inner radius
        // comes later we only need to fix the pen distance
        // against it)
        if _mode = Outer then
          let frac = _outer / 8.0
          _inner <- frac
          _a <- frac * 3.0
        else if _mode = Inner then
          _a <- _inner / 3.0

        // Generate the polyline with low accuracy
        // (fewer segments == quicker)
        if not draw.RegenAbort then
          // Generate our polyline
          x.Generate(2)
          if not draw.RegenAbort then
            // And then draw it
            draw.Geometry.Polyline(_pl, 0, _pl.NumberOfVertices - 1)
            |> ignore
      true

    // Generate a more accurate polyline
    member x.Perfect() = x.Generate(10)

    member x.Generate(num) =
      // Generate points based on the accuracy
      let pts = pointsOnSpirograph _cen.X _cen.Y _inner _outer _a 0 300 num
      // Remove all existing vertices but the first
      // (we need at least one, it seems)
      while _pl.NumberOfVertices > 1 do
        _pl.RemoveVertexAt(0)
      // Add the new vertices to our polyline
      for i in 0 ..
        pts.Length - 1 do
        _pl.AddVertexAt(i, pts.[i], 0.0, 0.0, 0.0)
      // Remove the first (original) vertex
      if _pl.NumberOfVertices > 1 then
        _pl.RemoveVertexAt(0)
  end

// Our jig-based command
[<CommandMethod("ADNPLUGINS", "SPI", CommandFlags.Modal)>]
let spirojig() =
  // Get the usual objects for the active document
  let doc = Application.DocumentManager.MdiActiveDocument
  let db = doc.Database
  let ed = doc.Editor
  // Ask for the centre of the spirograph
  let pr = ed.GetPoint("\nSelect centre of spirograph: ")
  if pr.Status = PromptStatus.OK then
    let cen = pr.Value
    // Create the polyline and run the jig
    let pl = new Polyline()
    let jig = new SpiroJig()
    let res = jig.StartJig(ed, cen, pl)
    if res.Status = PromptStatus.OK then
      // Perfect the polyline created, smoothing it up
      jig.Perfect()
      // And add our polyline to the modelspace
      use tr = db.TransactionManager.StartTransaction()
      let bt =
        tr.GetObject(db.BlockTableId, OpenMode.ForRead) :?> BlockTable
      let ms =
        tr.GetObject(bt.[BlockTableRecord.ModelSpace], OpenMode.ForWrite)
          :?> BlockTableRecord
      let id = ms.AppendEntity(pl)
      tr.AddNewlyCreatedDBObject(pl, true)
      tr.Commit()

[<CommandMethod("ADNPLUGINS", "REMOVESP", CommandFlags.Modal)>]
let removeSpirograph() =
  try
    RegistryUpdate.UnregisterForDemandLoading()
    let doc = Application.DocumentManager.MdiActiveDocument
    doc.Editor.WriteMessage
      ("\nThe Spirograph plugin will not be loaded" +
       " automatically in future editing sessions.")
  with _ -> ()

When we run the SPI command, we see red construction geometry being drawn along with our Spirograph pattern. We start with the radius of the outer circle, followed by the inner circle, which we can make small or large, and we then define the distance of the pen from the inner circle's centre, whether close to it or further away. You'll probably have noticed that I've structured the application as a potential Plugin of the Month. I haven't yet decided if it should become one - as it's really just for fun - but I decided to structure it as such, just in case.
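For reference, the Spirograph curve itself can be sampled independently of AutoCAD. This is a minimal Python sketch of the point-sampling idea, using the standard hypotrochoid equations x(t) = (R - r) cos t + a cos(((R - r)/r) t), y(t) = (R - r) sin t - a sin(((R - r)/r) t); the parameter names loosely mirror the F# function, but this is an illustrative stand-in, not the post's exact code:

```python
import math

def points_on_spirograph(cen_x, cen_y, in_rad, out_rad, a, t_end, num):
    """Sample points along a hypotrochoid, num samples per unit of t."""
    pts = []
    k = (out_rad - in_rad) / in_rad
    for i in range(t_end * num + 1):
        t = i / num
        x = cen_x + (out_rad - in_rad) * math.cos(t) + a * math.cos(k * t)
        y = cen_y + (out_rad - in_rad) * math.sin(t) - a * math.sin(k * t)
        pts.append((x, y))
    return pts

# Values matching the jig's defaults: inner = outer / 8, a = 3 * inner.
pts = points_on_spirograph(0.0, 0.0, 1.0, 8.0, 3.0, 31, 10)
print(len(pts), pts[0])  # at t = 0 the pen sits at ((R - r) + a, 0)
```

Raising num here plays the same role as the accuracy argument to Generate(): more samples per unit of t give a smoother polyline.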
http://through-the-interface.typepad.com/through_the_interface/2010/03/using-a-drawjig-from-f-to-create-spirograph-patterns-in-autocad.html
Verify Multiple Email Addresses

If you are migrating to Amazon SES from another email-sending solution, you may already have a long list of email addresses that you want to use to send email. The Python script in this example accepts a JSON-formatted list of email addresses as an input. The following example shows the structure of the input file:

[
  {
    "email": "carlos.salazar@example.com"
  },
  {
    "email": "mary.major@example.co.uk"
  },
  {
    "email": "wei.zhang@example.cn"
  }
]

The following script reads the input file and attempts to verify all of the email addresses contained in the file. This code example assumes that you have installed the AWS SDK for Python (Boto), and that you have created a shared credentials file. For more information about creating a shared credentials file, see Create a Shared Credentials File.

import json  # Python standard library
import boto3  # sudo pip install boto3
from botocore.exceptions import ClientError

# The full path to the file that contains the identities to be verified.
# The input file must be JSON-formatted; see above for a sample input file.
FILE_INPUT = '/path/to/identities.json'

# If necessary, replace us-west-2 with the AWS Region you're using for Amazon SES.
AWS_REGION = "us-west-2"

# Create a new SES client and specify a region.
client = boto3.client('ses', region_name=AWS_REGION)

# Read the file that contains the identities to be verified.
with open(FILE_INPUT) as data_file:
    data = json.load(data_file)

# Iterate through the array from the input file. Each time an object named
# 'email' is found, run the verify_email_identity operation against the value
# of that object.
for i in data:
    try:
        response = client.verify_email_identity(
            EmailAddress=i['email']
        )
    # Display an error if something goes wrong.
    except ClientError as e:
        print(e.response['Error']['Message'])
    # Otherwise, show the request ID of the verification message.
    else:
        print('Verification email sent to ' + i['email'] +
              '. Request ID: ' + response['ResponseMetadata']['RequestId'])
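If you want to sanity-check the input file before making any SES calls, the address-extraction step can be exercised on its own. This is a small standalone sketch (no AWS calls; the sample data is the one shown above, and load_identities is a hypothetical helper, not part of the AWS SDK):

```python
import json

def load_identities(json_text):
    """Parse the JSON identity list and return the email addresses,
    skipping any entries that lack an 'email' key."""
    return [entry["email"] for entry in json.loads(json_text) if "email" in entry]

sample = '''
[
  {"email": "carlos.salazar@example.com"},
  {"email": "mary.major@example.co.uk"},
  {"email": "wei.zhang@example.cn"}
]
'''
print(load_identities(sample))
```

Validating the file locally first means a malformed entry fails fast, instead of partway through a long run of verify_email_identity requests.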
https://docs.aws.amazon.com/ses/latest/DeveloperGuide/sample-code-bulk-verify.html
Eclipse Community Forums

How to obtain the runtime element of a bpel:import target

Christoph (2010-09-30): One can import WSDLs and XSDs using the bpel:import element. The import element knows 3 attributes: namespace, location, importType. Obviously, the "location" attribute points to the imported document. Now I want to know how I can get access to the runtime object of the imported document. So in the case where I imported a WSDL, the runtime object should be castable to org.eclipse.wst.wsdl.Definition. I suppose I need to take an indirection via the EcorePackage and get the element for the path as given by the "location" attribute, but I can't find any good tutorials or advice on this. The crucial question maybe is: who (which class) knows the model of the BPEL file as well as the models of the imported files? I hope I put the idea across.

Robert Brodt (2010-09-30): Hi Christoph, have a look at the bpel.model plug-in, specifically the ImportResolverRegistry, WSDLImportResolver, XSDImportResolver and WSDLUtil classes. You can see how these are used by looking at the EmfModelQuery#scanImports() method in the validator plug-in. Finally, WSDLUtil#resolveUsingFinder() demonstrates how to recurse into WSDLs (look for RESOLVING_DEEPLY). Let me know if you still have questions - have fun, Bob

Christoph (2010-10-01): Yes. That did it. Thanks a lot!
http://www.eclipse.org/forums/feed.php?mode=m&th=197592&basic=1
I'm completely new to R and the tm package, so please excuse my stupid question 😉 How can I show the text of a plain text corpus in the R tm package? I've loaded a corpus of 323 plain text files:

src <- DirSource("Korpora/technologie")
corpus <- Corpus(src)

But when I call the corpus with:

corpus[[1]]

I always get output like this instead of the corpus text itself:

<<PlainTextDocument>>
Metadata: 7
Content: chars: 144
Content: chars: 141
Content: chars: 224
Content: chars: 75
Content: chars: 105

How can I show the text of the corpus? Thanks!

UPDATE - Reproducible sample: I've tried it with the built-in sample text:

> data("crude")
> crude
<<VCorpus>>
Metadata: corpus specific: 0, document level (indexed): 0
Content: documents: 20
> crude[1]
<<VCorpus>>
Metadata: corpus specific: 0, document level (indexed): 0
Content: documents: 1
> crude[[1]]
<<PlainTextDocument>>
Metadata: 15
Content: chars: 527

How can I print the text of the documents?

UPDATE 2 - Session Info:

> sessionInfo()
R version 3.1.3 (2015-03-09)
Platform: x86_64-w64-mingw32/x64 (64-bit)
Running under: Windows 7 x64 (build 7601) Service Pack 1
locale:
[1] LC_COLLATE=German_Germany.1252
other attached packages:
[1] tm_0.6-1 NLP_0.1-7
loaded via a namespace (and not attached):
[1] parallel_3.1.3 slam_0.1-32 tools_3.1.3

Best Solution: This works for me to print the content text with the latest version of tm. Note: more or less as suggested by Ricky in the previous comment. (Sorry, I wanted to write a comment, only my rep is 25 and a minimum of 50 is needed to comment.)
https://itecnote.com/tecnote/r-how-to-show-corpus-text-in-r-tm-package/
zbar package

Introduction

Author: Zachary Pincus (zpincus@gmail.com)
Contributions: Rounak Singh (rounaksingh17@gmail.com) - example code and zbar.misc.

zbar-py is a module (compatible with both Python 2.7 and 3+) that provides an interface to the zbar bar-code reading library, which can read most barcode formats as well as QR codes. Input images must be 2D numpy arrays of type uint8 (i.e. 2D greyscale images). The zbar library itself is packaged along with zbar-py (it's built as a python extension), so no external dependencies are required. Building zbar requires the iconv library to be present, which you almost certainly have, except if you're on Windows; then you will probably need to download or build the iconv DLL (pre-built 32- and 64-bit binaries are available). The python code is under the MIT license, and zbar itself is licensed under the GNU LGPL version 2.1.

Prerequisites:
- iconv - C library required for building zbar-py (see above)
- numpy - for running zbar-py
- pygame - for examples using a webcam

Simple examples (more sophisticated examples can be found in the 'examples' directory):

1. Scan for barcodes in a 2D numpy array:

import zbar

image = read_image_into_numpy_array(...)  # whatever function you use to read an image file into a numpy array
scanner = zbar.Scanner()
results = scanner.scan(image)
for result in results:
    print(result.type, result.data, result.quality, result.position)

2. Scan for UPC-A barcodes and perform a checksum validity test:

import zbar
import zbar.misc

image = read_image_into_numpy_array(...)  # get an image into a numpy array
scanner = zbar.Scanner()
results = scanner.scan(image)
for result in results:
    if result.type == 'UPC-A':
        print(result.data, zbar.misc.upca_is_valid(result.data.decode('ascii')))
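For readers curious what a UPC-A validity test like zbar.misc.upca_is_valid involves, the checksum can be computed in a few lines of plain Python. This is the standard UPC-A check-digit rule, written here as an illustrative stand-in rather than zbar-py's actual implementation:

```python
def upca_is_valid(code):
    """Check a 12-digit UPC-A string against its check digit.

    Digits in odd positions (1st, 3rd, ..., 11th) are weighted 3,
    digits in even positions are weighted 1; the weighted sum plus
    the check digit must be a multiple of 10.
    """
    if len(code) != 12 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    total = 3 * sum(digits[0:11:2]) + sum(digits[1:11:2])
    check = (10 - total % 10) % 10
    return check == digits[11]

print(upca_is_valid("036000291452"))  # True: a well-known valid example
print(upca_is_valid("036000291453"))  # False: last digit corrupted
```

Because the check digit catches any single-digit misread, it is a cheap way to filter out low-quality scans before trusting a result.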
https://pypi.org/project/zbar-py/
RUNE MAGIC
VOLKISCH MAGIC
THE OCCULT MESSIAH
SEX SPIES AND SECRET SOCIETIES
THE ORDER OF THE SS
THE QUEST FOR THE HOLY GRAIL

this publication is under construction - more chapters to follow

That is the secret delight and security of hell - that it is not to be informed on, that it is protected from speech, that it just is, but cannot be public in the newspaper, be brought by any word to critical knowledge ...
Thomas Mann

PROLOGUE: Christmas Day, 1907. A castle in Upper Austria, on the Danube. Against a backdrop of snow-covered hills and ice-blue sky, church bells and Christmas carols, a flag is raised over Burg Werfenstein and, for the first time, the world sees a swastika banner fluttering in the breeze over Europe. Men dressed in white robes emblazoned with red crosses - Rune Magicians - raise their arms and voices in pagan chant to Baldur, the Sun God, Lord of the Winter Solstice. The Order of the New Templars is proclaimed, while, only a few miles away, young Adolf Hitler has just buried his mother.

Most of us live in a world that is neatly organized around several basic principles. Like medieval serfs who lived secure in the knowledge that there was a God in heaven and a Satan in hell, that humanity was the battleground between these two forces, and that God was winning, we twentieth-century serfs bask in the comfort of a world that we are told is the product of purely scientific principles. Genesis has given way to 'The Origin of Species' and the 'Big Bang'. We have, it is claimed, landed on the moon, rather than drawing it down with incantations and rites of witchcraft. We heal with lasers and sterilized instruments to a back-beat of the blips and beeps of electronic monitoring equipment; the fractured rhythms of the witch doctor's drums are but a faint echo of old fears and tainted memories.
Thus, we assume, all sane men and women are guided by scientific principles in their daily lives; and this is especially true, we sometimes like to think, of our politicians. What more prosaic a lot of people can there be than the House of Representatives or the House of Commons? A debate on the Senate floor - although televised in all its stultifying detail on something called C-SPAN - is rarely gripping; not quite the stuff of Becket or Richard III. There is little in the way of poetry or vision in Western politics any more, and that is largely because the Romantic ideals of our ancestors have been discredited with the passage of time. We are nations of laws, and these laws are constantly changing to reflect new 'realities' created - not by philosophers or metaphysicians or theologians - but by scientists and technicians. The very fact that Americans can tune in their television sets and watch live coverage of a debate in the House over funding allocations for a program they've never heard of is somewhat comforting. It means, in fact, that the wheels of government grind on, in the open, with boring, peristaltic regularity, aided and abetted by scientific invention and technological achievement. Meanwhile, outside on the streets, civilization is breaking down so fast that Western society is on the verge of a catastrophe of major proportions. A Communist might say the reasons for the decline of the Western way of life are purely economic and that the warring factions are economic classes struggling for dominance over the means of production. But there are few Communists abroad in the land any more; fewer still who could carry that argument with any conviction, no matter how reasonable it might seem today. After all, an L.A.
street gang performing a drive-by shooting on rival gang members to enforce their control of the drug trade on some bleak city block seems hardly what Mao had in mind when he wrote "All power comes from the barrel of a gun," or what Marx and Engels meant by "Working-men of all countries, unite!" The polarity within which so much of the twentieth century was written - Communism and Fascism - has crumbled. The Soviet Union - Hitler's greatest enemy after the Jews - has fallen. The map of Europe has been redrawn, with a reunified Germany as its center-piece - its capital is Berlin, once again. Who had reason to celebrate the most when the Berlin Wall came down? And who is celebrating now that the races have become, if anything, even more divided; when we read once again about a "Jewish-Masonic conspiracy" as the rationale for "ethnic cleansing"; when the potential for racial violence all over the world has escalated to heights unheard of thirty, forty years ago? While science is humming along nicely inside our homes, just what is happening outside, and why? If we are honest with ourselves we know we can't answer all these questions with a few canned explanations about the decline of the nuclear family or the failure of the social welfare system. There is clearly something more going on here. Our politicians are not men of science; they never were. They live as dangerously close to the 'Beast Within' as the rest of us; perhaps even more so. In the end, the difference between a seasoned politician and a gang-leader with a machine gun is very slight, a difference of style rather than substance. If that seems to be overstating the case, just ask anyone who lived in Berlin in 1919. Or in Munich in 1923. Or Vienna in 1938. Or Poland in 1941. Or Santiago in 1973. Or Sarajevo in 1994. The epigraph from Thomas Mann at the beginning of this book is taken from 'Doctor Faustus'.
The author thought that both quotation and source were apt selections to christen this discussion. 'Doctor Faustus', after all, is a novel that takes place in Germany during the rise and fall of Hitler. Its title comes from a long German tradition of Doctors Faustus: magicians and occultists who have sold their souls in the next world for personal power and glory in this one. Thomas Mann himself lived to see his books burned in great bonfires throughout Germany. Even more apt, however, is Satan's own explanation for the 'secret delight of hell': as Mann understands it, hell's delight is that it cannot be discussed or described, that 'it is not to be informed on.' It is the author's intention, therefore, to steal some of hell's secret delight and security; to shine a light, however feebly or inexpertly, on that corner of history's basement few professional historians have dared to visit, and to show the reader why. And to do that, we must begin with a scene of utter chaos; with the breakdown of civilization; with rooftop snipers and roving gangs with guns; with terror and madness; with mystical diagrams and pagan rituals in ruined castles. We must begin with Munich in 1918.

DESCENT INTO HELL

The city is in turmoil in the aftermath of the Kaiser's fallen Reich: Munich, capital city of Bavaria. Within forty-eight hours there is a meeting of the Thule Gesellschaft. The Thule, a mystical society based in part on the theosophical writings of Guido von List and Lanz von Liebenfels - which is to say, an amalgam of Eastern religion, theosophy, anti-Semitism, and Teutonic paganism - is in reality a front for the secret, super-racist, and super-occult "German Order Walvater of the Holy Grail," or Germanenorden, which is using the name Thule Gesellschaft - or Thule Society, a "literary-cultural society" - as a cover to confuse Munich's fledgling Red Army, which is on the lookout for right-wing extremists.
Sebottendorff's preoccupation with occult organizations - with their elaborate initiation ceremonies and complex magical rituals, from the List Society's inner HAO (Higher Armanen Order) to the Order of the New Templars - will soon culminate in a pitched battle in the streets of Munich between the neopagan Thule Society and the "godless Communists."

FEBRUARY 21, 1919. The idealistic but hapless Kurt Eisner is assassinated. Sebottendorff moves to prevent the Communists from taking over the government. The Thule organizes among the anti-Communist factions in Munich, and Sebottendorff (together with his friend, the racist priest Bernhard Stempfle) begins conspiring with the "exiled" Bavarian government in Bamberg for a counterrevolt.

APRIL 13, 1919. The Palm Sunday Putsch. An abortive attempt by the Thule Gesellschaft - with other anti-Communist groups - to take power in Munich. There is bloodshed. The Putsch fails. Munich explodes into anarchy. The Communists seize control of the city and begin taking hostages. The Red Army is on the march ... and hunting for the Thule Gesellschaft.

APRIL 26, 1919. Sebottendorff is away at Bamberg, busy organizing a Freikorps (Free Corps) assault on Communist headquarters, when a Red Army unit raids Thule Society offices and arrests its secretary, the Gräfin Hella von Westarp, along with other prominent members. Seven of the hostages are executed in the courtyard of Luitpold High School. It is probably the worst mistake they could have made. The next day, an obituary appears in Sebottendorff's 'Münchener Beobachter' - a newspaper which a year later becomes the official organ of the NSDAP, the 'Völkischer Beobachter' - giving the names of the seven murdered cultists and laying the blame on the doorstep of the Red Army. The citizens of Munich are finally outraged, shaken out of their lethargy. Thulists continue their well-organized campaign of propaganda against the Communist regime. The people take to the streets.
The Free Corps - twenty thousand strong - marches on Munich under the command of General von Oven. For the first time in history, storm troopers - members of the Ehrhardt Free Corps Brigade - march beneath a swastika flag, with swastikas painted on their helmets, singing a swastika hymn. As they enter the city, they find that the Thule has managed to organize a full-scale citizen rebellion against the Soviet government. They join forces. When the dust settles on May 3, the Communists have been defeated in Munich, politically and militarily. Hundreds of people, including many innocent civilians, have been slaughtered in their streets and homes by the crusading "Whites" with the swastika banners. But there will be no Socialist or Communist government in Germany until after World War II, over twenty-five years later, and even then it will rule over only a part of a divided nation. Meanwhile, Freikorps units move on to the defence of Latvia. Even Germany's own right wing is divided into two camps: those in favour of restoring the monarchy, and separating Bavaria from the rest of Germany, and those in favour of a unified Greater German Reich, without a monarch but with a leader, a leader with vision - a German messiah - a Führer. Where is that Führer to be found? Unwittingly, the Thule Gesellschaft provides the answer. Meeting in the expensive Vier Jahreszeiten (Four Seasons) Hotel, the Thule lays its plans: it will cache weapons, it will organize units of the Free Corps, particularly the Ehrhardt Brigade, and it will cultivate a tiny political party of rough-and-tumble factory workers meeting in beer halls, the German Workers' Party (DAP). With these, they will be able to form a united front against Communism, international Freemasonry, and world Jewry. Within a year, this project of the Thule Gesellschaft will become the NSDAP: the National Socialist German Workers' Party. It will sport a swastika flag and a swastika armband, and its leader will be a war veteran, a corporal who had been sent by the German Army to spy on the organization: Adolf Hitler.
Power, therefore, was no longer a mystical quality of kingly blood - i.e., of an individual sovereign - but inherent in the control, manipulation, and interpretation of the sex act and its product. Power shifted - according to Foucault - from the symbol or sign of the blood toward the object of sex; Machiavelli moving down the talk show couch to make room for Freud, who will take it with him when he leaves. This seeming digression has been made because it so perfectly describes what will follow in the remainder of this study; - we will be watching how these twin forces - blood and sex - came to be epitomized in the occult struggles, the mystical agon of the Third Reich. Rather than remain an abstract philosophical problem, however, the themes of blood and sex become very real, very conspicuous in the writings, acts, and preoccupations of the magicians who gave birth to the Occult Reich, and of those who carried out the policies of the SS. The reader is asked to remember this brief encapsulation of Foucault's observations as we follow the argument down the last hundred- odd years since the birth of the German Theosophical movement and its illegitimate offspring, the sex-and-blood rune magicians List, Liebenfels, and Sebottendorff. Before we get ahead of ourselves, however, let us begin where most Western twentieth- century occultism begins, with the birth of the Theosophical Society in New York City in 1875 and the subsequent occult revival that spread to England and the Continent with such powerful consequences. SECRET SUPERMEN AND SECRET DOCTRINES Madame Helena Petrovna Blavatsky (1831-1891) (see left) was born in what is now Ukraine. She would be forty-four years old before creating the Society for which she is best remembered, but her most important achievements still lay ahead of her. 
In 1877 - two years after starting the Theosophical Society - she would publish 'Isis Unveiled' (see right), an energetic blend of Eastern religion and mysticism, European mythology, and Egyptian occultism, which would pave the way for her even more ambitious 'The Secret Doctrine' (see left) in 1888.

Some authors have written that the popularity of Blavatsky's writings in the late nineteenth century was evidence of an anti-positivist reaction among the middle classes to the effect that science was having on religious belief. In other words, science was going so far toward 'proving' the errors of faith that the average person - in a state of existential and ontological angst - embraced the quasi-scientific approach toward religion represented in 'The Secret Doctrine'. Chief among these scientific challenges was Darwin (see left): people were startled to discover that Biblical myths were at odds with scientific theories, and thus began to doubt everything they had ever believed. They found themselves spiritually - and, more significantly, morally - adrift. Blavatsky provided a much-appreciated antidote to Darwin, even as she was brazenly appropriating (and, some might argue, reversing) his theory of evolution.

As unusual as her theories might appear to some today, they were actually quite brilliant for her time, for they enabled intelligent and educated men and women to maintain deep spiritual beliefs while simultaneously acknowledging the inroads made by scientific research into areas previously considered beyond the domain of mere human knowledge. Blavatsky outlined a map of evolution that went far beyond Darwin to include vanished races from time immemorial through the present imperfect race of humans, and continuing on to races far into the future. Based on a selection of various Asian scriptures, the message of 'The Secret Doctrine' would later be picked up by the German occultists, who welcomed the pseudo-scientific prose of its author as the answer to a dream.
The smug and condescending attitude of scientists and their devotees toward the 'unscientific' had proved contagious among many in the newly created middle class, and mystics began to find themselves in the ridiculous position of having to satisfy the requirements of science in what are patently unscientific (we may say 'non-scientific') pursuits. Blavatsky (see right) took her cue from Darwin: she popularized the notion of a spiritual struggle between various races, and of the inherent superiority of the Aryan race (see left), hypothetically the latest in the line of spiritual evolution. Blavatsky would borrow heavily from carefully chosen scientific authors in fields as diverse as archaeology and astronomy to bolster her arguments for the existence of Atlantis (see right), extraterrestrial (or super-terrestrial) life-forms, the creation of animals by humans (as opposed to the Darwinian line of succession), etc. It should be remembered that Blavatsky's works - notably 'Isis Unveiled' and 'The Secret Doctrine' - appear to be the result of prodigious scholarship and were extremely convincing.

The rationale behind many later National Socialist preoccupations - an 'initiated' version of astrology and astronomy, the cosmic truths coded within pagan myths - all of these and more can be found both in Blavatsky and in the doctrines of the Third Reich itself, specifically in the ideology of the SS. It was, after all, Blavatsky who pointed out the supreme occult significance of the swastika (see left). And it was a follower of Blavatsky who was instrumental in introducing the 'Protocols of the Elders of Zion' to a Western European community eager for a scapegoat. This is not to imply that Theosophy is inherently fascist. Although Blavatsky herself did not become overtly involved in political campaigning or intriguing, many of her followers and self-appointed devotees could not help but use their new-found faith as a springboard into the political arena.
The fascinating mixture of armchair archaeology, paleo-astronomy, comparative religion, Asian scriptural sources, and European mythology that can be found in Blavatsky's writings was enough to cause a kind of explosion of consciousness among many women and men of her generation, including the scientists who would one day direct entire departments within the SS. Blavatsky's 'creative' method of scholarship inspired admirers and imitators throughout the world, who considered the theories put forward in such books as 'The Secret Doctrine' to be literally true, and who used her writings as the basis for further research.

In a way, this was understandable. In ancient times, alchemists were the only chemists; as the centuries went by and science developed a philosophy and methodology of its own, the alchemists and chemists split off from each other and went their separate ways. So it was with the rest of academia. In the nineteenth century - bereft of a unified vision of humanity and cosmos, cosmos and God - it was no longer easy to be an expert in every field of science and philosophy; by the twentieth century, it would become impossible. The writings of people like Blavatsky and her spiritual descendants represent what could be the last gasp of the 'Renaissance Man' before science, medicine, the Industrial Revolution, and mechanized warfare made specialization a necessity and the medieval image of the all-powerful and all-knowing Magician a bitter-sweet memory.

GERMAN INITIATES

The German Section of the Theosophical Society (see left) was founded in the town of Elberfeld on July 22, 1884. Blavatsky was staying there at the home of Marie Gebhard (1832-92), née L'Estrange, a native of Dublin who married a well-to-do German and, moving to elegant surroundings in her new homeland, devoted her leisure time to a study of occultism and ritual magic.
Frau Gebhard had corresponded regularly with the famous French magician and author of several popular books on magic, Eliphas Levi (the Abbé Louis Constant; see right). She is known to have visited the Master at least once a year in Paris for ten years, until his death in 1875, in order to receive personalized instruction in the occult arts. A room at her estate in Elberfeld was completely devoted to these pursuits, and it was there that the German Section of the TS was inaugurated, with a Dr. Wilhelm Hübbe-Schleiden (1846-1916) as its first president.

Although Hübbe-Schleiden would become well known as the publisher of the influential German occult magazine 'Die Sphinx' (see right), prior to his occult career he was an outspoken supporter of German nationalism and colonialism. This is mentioned only to show how early on occultism and political adventurism - specifically an elitist, racist adventurism - were linked. While not exactly a proponent of an early Lebensraum policy, Hübbe-Schleiden had once been the manager of an estate in West Africa and was, at the time of his tenure as president of the Theosophical Society in Germany, a senior civil servant with the Colonial Office, energetically promoting the expansion of Germany's colonies abroad.

In all fairness, however, it must be admitted that 'Die Sphinx' was one of the first, and also one of the more up-market, occult periodicals of its time. It catered to an intellectual audience, and its contributors included scientists, philosophers, and other mainstream academics writing on a variety of topics, from the paranormal and psychical research to archaeology and mysticism. As such, it was firmly in the Theosophical camp, which required some sort of accommodation with mainstream science. One man of science who would come to personify this uneasy truce was a Blavatsky enthusiast who became influential in the German movement: Dr.
Franz Hartmann (see left) travelled to Theosophical Society headquarters in Adyar, India, to sit at the feet of the Masters, evidently impressing his hosts greatly. He was trusted so highly that, while Blavatsky was in Elberfeld helping jumpstart the German Section, Hartmann was in Adyar as acting president of the Theosophical Society, and he remained in India until 1885. Hartmann is of considerable interest to this study, as it was he who helped create the Ordo Templi Orientis (see left), a German occult society formed around the idea of sexual magic. Other illustrious members of the OTO would include another Theosophist, Dr. Rudolf Steiner (see right), who will figure again in our story.

Another personal friend of Mme. Blavatsky was Dr. William Wynn Westcott (1848-1925) (see right), a coroner and a Theosophist who founded the Hermetic Order of the Golden Dawn in England in 1888, the same year 'The Secret Doctrine' was published. Westcott claimed for the Golden Dawn (see left) an ancient and exalted pedigree; membership in such an order offered the occult seeker much the same satisfactions as any other - but with the added benefit that the seeker elevates himself in his own eyes to stratospheric levels of arcane wisdom beyond the feeble understanding of mere mortals. That is, until the next occult order is formed and another - more formidable - initiation becomes available.

Thus Hartmann will become involved, in 1902, with one John Yarker (see right), whose Masonic order, the Ancient and Primitive Rite of Memphis and Mizraim, would claim many otherwise-sincere individuals as members, and whose lineage would eventually touch the Pan-German, anti-Semitic movement which gave birth to National Socialism. Thus Hartmann is the axle on which this peculiar Wheel of Life will turn. Wherever we pick up the thread of twentieth-century Western occultism and ritual magic, we can follow it back along a trail that leads to Hartmann.
A few years later, Hartmann became involved with another Lebensreform community, this time at Ascona, in Switzerland, where we will eventually find his associate and fellow OTO initiate Theodor Reuss (see left) sitting out the First World War in 1917. There Hartmann began his own journal, the 'Lotusbluthen' (Lotus Blossoms), in 1892, which printed translations of many Theosophical and related writings. The logo of the 'Lotusbluthen' included the ubiquitous swastika. Among Hartmann's many other publications were translations of the 'Bhagavad-Gita' (one of Himmler's favorite texts) and the 'Dao De Jing', the sacred text of Taoism. It is a measure of Hartmann's popularity and reputation that some of his writings have been translated into English and are available today under a variety of imprints. Little of what Hartmann wrote, however, could be said to fall under the OTO's domain of 'sex-magic.'

Hartmann would eventually take on as a kind of disciple and amanuensis a young Theosophist, Hugo Vollrath (see right) (born 1877). In 1899, Hartmann picked up this university student as a personal secretary, and the two would go on speaking tours together, drumming up business for the Theosophical Society. Vollrath, an intense young man, eventually became involved with the Leipzig branch of the Society, and soon found himself embroiled in one scandal after another. It quickly became evident to the other members that Vollrath saw Theosophy as a potential cash cow. He began a series of publishing ventures, introducing Theosophy and, later, astrology to the German-speaking public. The Theosophists complained about Vollrath's apparent lack of sincerity to the General Secretary of the German Section of the Society, who at that time was Dr. Rudolf Steiner. Steiner, a friend of Dr. Hartmann, had become involved with both Theosophy and the OTO, only to eventually leave them both to found his own group, the Anthroposophical Society (which also exists to this day).
In 1908, Steiner was forced to expel Vollrath from the German Section, but the damage had already been done. The Theosophists had created a monster, and Vollrath would go on to become a Theosophical publisher to be reckoned with, providing a forum for the men who were laying the foundations of a New World Order. An associate of Vollrath will be Johannes Baltzli, a Theosophist and the secretary of yet another mystical organization, the List Society. Baltzli would contribute articles to Vollrath's new Theosophical magazine, 'Prana', and soon the bizarre ideas of racist and rune magician Guido von List would fill the pages of this otherwise-bland outlet previously devoted to the writings of Blavatsky, her successor Annie Besant, and the wandering 'Bishop' Leadbetter (see right).

And, as if to emphasize how inextricable German occultism was from German racism, it is through his astrological journal, 'Astrologische Rundschau' (see left), that Vollrath has additional impact on our story, for in 1920 he turned it over to the editorial ministrations of no less a historic personage than the Baron Rudolf von Sebottendorff: mystic, Freemason, initiate of the Eastern mysteries, and now astrologer. The Baron needed a new career. After all, his last occult experiment - although an unqualified success in the political arena - had turned on him. He needed new pastures, and editing 'Astrologische Rundschau' from the relative safety of Switzerland seemed just the ticket. Maybe there - with a completely new audience of adoring fans - he could forget about the Thule Gesellschaft.

MUNICH - 1919

To hear most historians speak of the Thule Gesellschaft, one would think that it was a slight aberration, an anomaly that does not deserve close scrutiny. It is mentioned almost in passing in John Toland's 'Adolf Hitler' and in works by Joachim Fest and other historians of the Third Reich.
Its founder, the same Rudolf Sebottendorff, wrote its story himself in a book he published in 1933, a book that was suppressed under the Third Reich. But to understand the Thule Gesellschaft is to understand the origins of National Socialism itself much more thoroughly than has been done to date. For the NSDAP was never merely a political party; it was always much more. Hitler himself warned his critics that if they understood National Socialism as a political party only, they were missing the point. Many observers have since agreed. Politics alone did not create such a force in human history. As Robert G. L. Waite says in his 'The Psychopathic God: Adolf Hitler':

'The hard historic fact about the genocide is that it was not caused by the exigencies of war, nor was it a political manoeuvre to cope with internal unrest and domestic conflict. These people were killed as the result of one of Hitler's ideas: the idea of a superior race and the need to exterminate what he considered to be the vermin that were attacking it.'

This was an idea that can be traced to Hitler's early student days in Vienna and to the influence of racial tracts published by the leading occult, anti-Semitic lights of the day: Guido von List and Lanz von Liebenfels. And from there, directly to the occultist and Eastern initiate Rudolf von Sebottendorff and his brainchild, the Thule Gesellschaft. In this century, in Europe, racism had its roots in occultism. Racism is, after all, an expression of fundamental fears, and such fears often find a home in the milieu of primordial, preconscious archetypes that is the environment of both religion and occultism. Racism and the occult were often found sharing the same magic circles in the early days of this century, and therewith hangs a tale.

THE PROTOCOLS OF THE ELDERS OF ZION

It has been the refusal of historians to view the NSDAP as a religious - or at least a mystical - organization, a cult, that has contributed to so much confusion over the phenomena of National Socialism and Aryanism.
Indeed, yet another Blavatsky protégée - during the time of Hartmann, Hübbe-Schleiden, and Vollrath - was the mysterious Yuliana Glinka, a Russian noblewoman who donated enormous sums of money to spiritualist mediums and their circles, and who was instrumental in promoting the document known as 'The Protocols of the Elders of Zion', which, along with 'Mein Kampf', can be considered one of the significant texts of National Socialism. As Norman Cohn illustrates at some length, the Protocols were largely thought to be the product of a conspiracy between the Okhrana (the Czarist secret police) and occult circles operating in Paris and St. Petersburg. Originally, this pamphlet was an attack against both the Jews and the Freemasons, and was probably created around 1895 to discredit enemies of the head of the Okhrana in Paris, one Rachkhovsky (see left). It was the occultist Mme. Glinka who 'leaked' the manuscript to the press. The newspapers seized upon it in the heyday of the Dreyfus Affair and the first-ever Zionist Congress, presided over by Theodor Herzl in 1897. Here was documentary evidence that the Jews, operating through the lodge network and secret rituals of the Masonic Society, were putting the final touches on their program of world domination. The authenticity of the Protocols became a matter of faith among some of the most influential political and intellectual leaders of Europe.

Thus, this single most inflammatory and crucial document of the Third Reich had its origins in that strange twilight world where occultism and espionage meet. And it is important to realize that the Masonic Society was considered just as culpable as the Jews; that it was, in fact, a Jewish 'front': for the Elders sign themselves as thirty-third degree Masons (see right). The plot as described in the Protocols involves a schedule for world domination that was believed to be well on its way to full implementation.
By taking over the reins of commerce and by fomenting world revolution, the Jewish-Masonic conspiracy against Christian monarchies was nearly successful. With the destruction of the Second Reich, the last bastion of Aryan supremacy was removed and victory virtually assured. Using the twin tools of Democracy and Communism, the Jewish-Masonic cult had emasculated the potentially troublesome populations of America and Russia. To the völkisch believers, the only hope of salvation was to be found - not in any revived Christian fundamentalism, for Christianity (as a Jewish creature) was also suspect - but in the rediscovered faith of their fathers, the Odinist religion (see left) that had been stolen from them by the fire and sword of the Inquisition. 'The Protocols' implied that the Jews had infected all governments, all commerce, all of the arts and media; everything was suspect. Only the pure faith of the Old Ones - abandoned for centuries and thus beyond reproach - could offer salvation.

THE RUNE MAGICIANS

On the Continent there existed a group of purely German nationalistic cults that were the result of this underground surge of neo-paganism. These cults were usually linked in some way with the more overtly political 'Pan-German' movement, which sought to unite all the German-speaking peoples of Europe into a single, coherent nation. The 'Pan-German' movement envisioned a single German national and racial entity that would abrogate or dissolve sovereign boundaries and unite the German speakers all over Europe wherever sizeable numbers could be found. In order to provide a solid philosophical or ethical framework for this peculiarly German desire for monolithic statehood, some sort of precedent was required to show that much of what is now Europe was actually once part of a greater German Reich, even if that Reich was in the remote - even prehistoric - past.
If it could be proved that everything from the Ukraine to the Atlantic was at one time part of an ancient Teutonic Empire, then the German people would have historical justification for the acquisitive urges they were suddenly experiencing, as well as a seemingly rational excuse for the exercise of their right to bear arms against all and sundry. What was required, then, was the assistance of the twin sciences of archaeology and linguistics; and where better do these two rational arts combine than in their occult child, the runes?

The obsession with runes enjoyed by a certain minority of Germans at the turn of the last century has been discussed by other authors in other books, but usually as a kind of crank occupation not fit for serious academics. Before we go on to study the contributions to Nazi ideology by such famous rune promoters as Guido von List and Rudolf von Sebottendorff, it would behove us to pause for a moment to observe to what extent this arcane lore was - and is - making its effect on traditional academia.

Runes (see left) are simple alphabetic symbols. They owe their odd and distinctive shapes to the fact that they were designed to be carved on wood, stone, or metal, as opposed to written with a pen; thus, only straight lines are used to form the letters. The words formed by the runes that concern us are generally in some form of Nordic tongue, and thus belong to that class of things purely Teutonic, pre-Christian, and German. In the new, urban, middle-class world where a multiplicity of words, books, ideas, and philosophies seemed to contend in a violent thunderstorm of polysyllabic chatter, the clean simplicity and bare prose of the runes and their sagas stood, to the Pan-Germans, the anti-Semites, and the Aryan mystics, for a saner time, an honest time, when the questions were few and the answers as clear and natural as the light of the sun. They are also tangible relics of an ancient legacy - landmarks of historic accomplishments.
Unlike words printed or written on paper (a German proverb reminds us that 'paper is patient'), runes were inscribed with earnest deliberation, using iron implements on solid rock: serious messages from the past intended to survive the centuries. The effort required to carve these messages was quite different from the ease with which pen slides over paper; the implication being that whatever was written in runes was not the mindless static of superficial minds chewing up the forests with self-absorbed monologues.

The alphabet was therefore abandoned for mystical purposes by the Pan-German cults in favour of the runes. What was the alphabet, after all, but some sort of Semitic invention? The runes, on the other hand, were the pure expression of people of German blood. If a rune were discovered carved into a stone found lying in a field in Tibet, for instance, it was simply further proof of Teutonic migration and domination. And once the swastika - a sacred symbol in many parts of the world - was identified as a rune, the German occultists were well on their way to proclaiming the entire globe German territory.

Many academics of the day placed a great deal of importance on runic studies and on the use of runes to establish the extent of Nordic migrations. For instance, as late as 1932 and 1940 Hjalmar R. Holand was publishing his analyses of the famous Kensington Stone, analyses that were later examined by some of the leading German rune experts in Nazi Germany, including Richard Hennig in the 'Zeitschrift fur Rassenkunde' (Magazine of Race Science) in 1937, Wolfgang Krause in an issue of 'Germanien' (the official organ of the Ahnenerbe-SS: see Chapter Six) of the same year, and Eilert Pastor in 'Wacht am Osten', also of 1937.
If his analyses were determined to have merit, then the presence of Nordic peoples in America as far back as the fourteenth century (over a hundred years before Columbus) could be established, with rather sobering political ramifications considering German policy regarding former Teutonic territories! Briefly, the Kensington Stone is a slab carved with runic characters, found on a farm in Minnesota in 1898. The runes describe an Indian massacre said to have taken place in the year 1362. While some scholars have considered the Stone to be a hoax, others disagree. Mr. Holand went even further, however, by suggesting that various Indian tribes may have intermarried with Nordic peoples at that time, a circumstance that would account for the presence of blue eyes and fair hair among the Mandan population, for example. (Wouldn't this also, after all, explain how the swastika turns up in North America as a Native American symbol?) While Holand himself does not take all this to its illogical conclusion, it is clear that some in the Third Reich would have considered this just one more proof of ancient Teutonic expansion. More importantly, this work is not a Third Reich propaganda tract, or the crazed scribblings of a völkisch medium, but rather the sober offering of a college-educated American author of Scandinavian descent, representing his carefully considered contribution to the growing literature of rune studies.

Today, similar studies have been undertaken by American and European epigraphers and by the Diffusionists led by Harvard Professor Barry Fell. These professional and amateur archaeologists have made substantial contributions to this neglected field, and labour to preserve from vandalism and neglect those ancient stone inscriptions wherever they might be found. Runic inscriptions are evidence of ancient voyages otherwise unrecorded, and refer to vast tracts of unexplored history for which few documents remain.
They have also demonstrated sophisticated advances in astronomy and navigation that may require significant portions of world history to be rewritten. Books and research like the foregoing provide the 'missing link' between the fanciful and outlandish works by authors such as von List on the one hand, and the regular academic community that considers the rune scholars to be nothing more than a cabal of German occultists on the other.

The notion of a hidden science of runes was given impetus by the writings of Blavatsky (see left), in which the runes are discussed in connection with her racial theories. For example, if there truly is a caste system of races, and if the present Master Race is the Aryan, and if the Aryan is a blond-haired, blue-eyed Nordic race, then it stands to reason that the Germans are the Master Race. If runic symbols, such as the swastika (see right), are evidence of a secret Aryan science of symbols, and if the ancient German (Teutonic) method of communication was this same runic system, and if runes can be discovered all over the known world, then (a) that is further evidence that the Germans are the Master Race and (b) it is also further evidence that Germans once ruled the entire world; from which it follows that Germany has a 'legitimate' stake in such property.

Even more importantly, the runes themselves have mystical as well as practical applications. They are not merely alphabetic symbols that identify their users as Aryans: within the construction of the individual runes themselves are certain potent designs - like printed electronic circuitry - that can connect one directly to God. Coded within their stark diagrams are secret formulae for achieving telepathic power, foretelling the future, and peering into the past: innate magical abilities that the Aryans - through inbreeding, neglect, and ruthless suppression by the Christian authorities - have lost.
The remains of this occult science are to be found in an 'initiated' interpretation of the runes; and, in conjunction with an aggressive eugenics program, the careful application of rune magic will enable the Aryan race to walk once more with the Gods in the halls of Valhalla (see left). Thus the Aryans are not simply a superior race in a strictly Darwinian sense; they are also the 'Chosen People', divinely ordained supermen locked in cosmic combat with a race of subhuman beings - red, brown, black, yellow - under the command of the Jews, the Communists, and the Freemasons: worshipers all of the demon Jehovah. This satanic conspiracy has robbed the Aryan male of his manhood, has leached from him his birthright, his mystical powers, the very land that once was his; it has enslaved him in chains made of debts to Jewish bankers, of twisted ideas of democracy and freedom learned from the Masons, under a dictatorship of the proletariat imposed by the Bolsheviks.

But then, how to explain the sad condition of this once and innately superior race? Divided, conquered, bereft of all its old territories, in enormous debt ... what happened? And how to rectify the situation? For the answers to these questions, we must resort to the literature of the völkisch apologists, specifically to Guido von List and his student, the former Cistercian monk Lanz von Liebenfels.

There is a popular notion that the swastika - owing to its association with the Nazi Party - is somehow a representation of Evil; but this would be unknown to the Eastern peoples who probably gave the world the swastika in the first place. Thus the swastika was not a Nazi invention, nor was its association with occultism solely a figment of Mme. Blavatsky's imagination (see left).
And in 1897 the young Adolf Hitler, attending school at the Benedictine Monastery at Lambach, would pass every day beneath an archway which bears the monastery's coat of arms (see right) cast in stone - and its most prominent feature is the swastika.

The völkisch pagan cults were as conscious of their anti-Christian potential as they were of their own anti-Semitic intent. This was not paganism as a pure cult (such as the modern Wicca phenomenon) but paganism as a movement set up in opposition to Judeo-Christianity. It is perhaps this strategy more than any other that has allowed völkisch occultism, in various forms, to survive its calamitous defeat in World War II, and to continue to exert an influence over young people and old down the years into our present decade.

Thule itself was believed to be the northernmost point on earth, an entryway into a subterranean landscape peopled by giants. A kind of Teutonic Eden, Thule was the mythic origin of all 'Aryans': an equally mythic white-skinned, blue-eyed, blond-haired race. The Pan-German movement had begun much earlier, and there were many anti-Semitic groups that, in their search for evidence of ancient Teutonic greatness, left no stone unturned. There are pages and pages of documents and photographs of megaliths, dolmens, and standing stones from all over Europe, and their interpretation by Nazi academics, in the files of captured German documents in the archives of Germany and America.

Thule was a siren song to these early German occultists, who would have recognized in it something of the fictional subterranean master race celebrated in the novel 'Vril' (see right). And it was the magazine 'Ostara' (see left), edited by Lanz von Liebenfels, that so attracted Hitler in the latter's early days as an impoverished artist in Vienna; and we now know that Hitler - so inflamed by the wild occult, racial, and anti-Semitic theories he found in Ostara - actually paid an unannounced visit to the editor's offices and came face-to-face with Liebenfels himself.
This information comes from an interview with von Liebenfels after the war, when he was struggling with the de-nazification process and would have had no ulterior motive in describing this meeting, since the revelation of a personal relationship with Hitler could conceivably only hurt him. Who was Lanz von Liebenfels, and how did he manage such an emotional impact on young Hitler? If all one had to go on were back copies of Ostara, we would have to say that he was a cross between Pat Buchanan and Henry Lee Lucas, with a little Jimmy Swaggart thrown in to provide the Biblical and sexual references.

Von Liebenfels's ideology was complex. His Order of the New Templars was an occult lodge that met at a ruined castle high on a cliff over the Danube - the eerie Burg Werfenstein (see left) in Upper Austria, a few miles upriver from Hitler's childhood home - among other sites. The members wore white, surplice-style robes (see right). Oddly enough, during the eighteenth century the Templar legend had enjoyed a kind of revival during the development of speculative Freemasonry (see left).

THE THULE GESELLSCHAFT

If the theories and proposals of List (see right) and Liebenfels (see left) represented the ideological wing of this milieu, what the Germanenorden became was its organizational arm: a Masonic-style society devoted to the eradication of Freemasonry itself, and an anti-Semitic mutual-help and support network based on racial principles (one had to prove one's Aryan heritage by providing birth certificates going back several generations). The Germanenorden was formally established - along with the Reichshammerbund - in May of 1912 at the home of Theodor Fritsch (see left). Out of this milieu would eventually come the German Workers' Party, a consciously Socialist-sounding title. It was this group that Hitler was sent to spy on. It should be pointed out that there is a great deal of controversy over the early days and connections of the DAP with the Thule Society.
Indeed, during the "troubles" of 1918, when the German revolution was in full swing with the collapse of the Second Reich, Pan-German groups were shut down all over Germany with the exception of the Thule Society (which was, we remember, purely a "literary-cultural" society); and its premises at the Four Seasons Hotel were used as a meeting place - and sometime hiding place - for such notables as Rudolf Hess and Alfred Rosenberg, not to mention the poet Dietrich Eckart. ... quasi-Renaissance ...

"... this idea of himself as the German Messiah was the source of his personal power. It enabled him to become the ruler of eighty million people - and in the space of twelve short years to leave his ineradicable mark on history."
- SCHELLENBERG

That Hitler was fascinated by the occult is proven: the Berchtesgaden library, discovered in a mine after the war, contained many volumes on occultism. His small collection of books as a student contained works on mythology and a collection of von Liebenfels's racist-occult magazine, Ostara, and he even visited with the Templar Master (as seen in Chapter Two). Friends of his from the early days recall long conversations on occult themes - everything from reincarnation to yoga to paganism and magic - and his later biographers, such as Sir Alan Bullock, record Hitler's familiarity with occult topics in the days prior to the Second World War. While Hitler appreciated the "scholarship" he discovered in the occult magazines and books he devoured, he never took a particular liking to the type of people who composed occult lodges. The occultists who were members of his inner circle - such as Hess, Rosenberg, Gutberlet, and Eckart - lived on the periphery of the Thule and Germanenorden lodges; while Eckart and Rosenberg were members of the Thule, it is clear that they would have been exploiting that membership for their own, hidden, agenda.
The leadership and influence of men like Sebottendorff was strong, and it is doubtful whether Hitler would have willingly accepted a role subservient to an occult (or political) master. History has shown that no occult order can survive two masters. Hitler was an activist. Almost any action was better than sitting around a room in a robe and meditating on Thor. Hitler was a pacer. He couldn't sit still for long. And he was a demagogue, almost from the beginning. He had to lead; and if he couldn't lead, he would absent himself from the action and the conversation altogether. But was Hitler a ritualistic cultist? A black-robed, ritual-performing, invocation-chanting priest of Satan? Probably not. Hitler was born on April 20, 1889, to an Austrian civil servant in the town of Braunau-am-Inn, a locale said to be famous at the time for its large proportion of native mediums. It is even claimed that Hitler shared the same wet nurse as two famous 'channelers' of the day: Rudi and Willy Schneider. That Hitler himself might have been a medium was a contention made by a great many of his personal friends and other observers, who described the Fuhrer in terms ranging from "hypnotic" to "demoniacally possessed". So we will begin with Hitler's childhood schooling at Lambach Monastery, from 1897 to 1899, under the guidance of Catholic monks. It is so indicative of the atmosphere in which the Nazi Party would later take root in Germany and Austria that, as mentioned earlier, the coat of arms of this monastery is a swastika, before which Hitler would pass every school day and which even now adorns the chapel where Hitler would attend choir practice - and several other places - and which was even visible from his apartment window. Hitler (like Heinrich Himmler, Joseph Goebbels, and so many other prominent Nazis) was born and reared a Roman Catholic, a fact that is often forgotten.
His mother was devout, his father rather less so; and it is important to recognize that Hitler never got along well with his father but idolized his mother. Like all good Catholic children of a certain age, Hitler was confirmed in the Church. The Roman Catholic Confirmation ceremony is one in which young Catholics reaffirm the sacrament of Baptism: that is, with their own voice they confirm their acceptance of the vows made for them by their godparents when they were infants. They officially reject "Satan, and all his pomps, and all his works," in a ceremony which evidently left the young Hitler either totally unimpressed or strangely tense, for he was distracted and restless that whole afternoon until some neighbourhood children came by and invited him to a game of cowboys and Indians, which he joined with unbridled enthusiasm. We may wonder at what point Hitler lost his interest in the Church or, indeed, if he ever had any interest to lose. Many Christian organizations enthusiastically supported Hitler in the early years of his dictatorship, choosing not to believe that the virulently anti-Christian stance of his neo-pagan NSDAP was sincere. They were accompanied in their folly by many Jewish people and organizations which could not accept that the anti-Semitism of the Party was anything more than a cheap political ploy. Germany, of course, was the birthplace of the Lutheran Reformation, the last stronghold of the Heiliges Römisches Reich (Holy Roman Empire), and a country of Christians of whatever persuasion. Germany was also the country of Walpurgisnacht, that famous pagan festival celebrated on April 30 every year, traditionally at the top of Mount Brocken in the Harz Mountains, where the Witches' Sabbath supposedly takes place.
Germany was also the scene of what we might call "Christian revisionism," an attempt to describe the resurrection of Christ as a myth perpetrated by his disciples: a thesis promulgated by Professor Reimarus of Hamburg in the eighteenth century, who insisted that Jesus was nothing more than a Jewish rebel, and that his body had been stolen from his tomb by his followers. Eventually, German scholarship would prove that the Gospels were written much later than anyone had previously realized, a position represented by no less than the esteemed Biblical commentator and professor of the University of Marburg, Rudolf Bultmann, who, in his 'Jesus and the Word' - published before Hitler came to power - came to the conclusion that the life of Jesus was virtually unknowable. Thus we have a land where scientific research and religious fervor meet; a country that will occasionally engage in an almost masochistic turning-inward upon itself and its cherished ideals, devouring its own children in the process. We have a nation where fierce religious beliefs live cheek by jowl with fierce religious dissent; a land where Lutherans and Catholics, Christians and pagans, each lay claim to the country's psyche. The Holy Inquisition was founded there in 1231 in response to the Cathar threat to the Holy See; yet Germany was also the birthplace of Rosicrucianism, the core documents of that movement - the 'Fama Fraternitatis' and the 'Confessio' - having been published there in 1614 and 1615. The infamous bugaboo of right-wing conspiracy buffs - the 'Illuminaten Orden', the dread Illuminati of Adam Weishaupt - began in Ingolstadt, Bavaria, in 1776. (The birthplace of Ernst Röhm and once the home of Heinrich Himmler - and now of BMW - Ingolstadt is also well known as the city where the fictional Dr. Frankenstein created his Monster.)
And Germany became the country where the Anthroposophical Society of Rudolf Steiner was founded less than 150 years later, an organization that was banned and persecuted by the Third Reich, resulting in Steiner's own untimely death in 1925. So it was perhaps inevitable that the type of occultism which would develop on such fertile soil would be the syncretist type represented by List and Liebenfels: an anti-Papist neo-Templarism mixed with Teutonic mythology and anti-Semitism, blended in a mind-boggling metaphysical stew and spiced with a fanatic desire to prove the undiluted "purity" of the blood. It also comes as no surprise that the ultimate British secret occult society of that era - the Golden Dawn - was traditionally said to have originated, not on that "blessed isle," but in Germany itself, with the forged "cypher manuscript" of a non-existent Fraulein A. Sprengel in Stuttgart. It was somehow important to the Chiefs of the Golden Dawn - Dr. William Wynn Westcott, MacGregor Mathers, and Dr. William R. Woodman - to demonstrate a German origin for their Society, even though later scholarship has shown that the possibly forged documents were in a grammatically poor, error-ridden German. So why not a British or a Celtic origin? Or French or Italian, for that matter? Or Middle Eastern? Mathers's command of Latin was good enough to enable him to perform the first-ever English translation of the Sefer ha-Zohar, the central text of Jewish Qabalism, from the Latin version by Knorr von Rosenroth, a translation still in use today. Indeed, Qabalism is a major element of the Golden Dawn system of initiation. So why strain for a German origin for the Golden Dawn when Mathers could have forged an ancient Latin pedigree from anywhere else in Europe or the Middle East much more easily?
Because the late nineteenth-century occult revival was taking place - not in England or France or anywhere else on the Continent - but in Germany itself: the land of the first Rosicrucians, the Teutonic Knights, Paracelsus, Johannes Trithemius, the notorious Vehmgericht (the secret tribunal to exact vigilante justice that was revived in the Third Reich), and other famous figures of medieval mysticism, both real and mythical. And it was from Germany, after all, that Aleister Crowley's most famous import originated, one still in existence today: the Ordo Templi Orientis, or OTO. We have noted how Hitler was influenced by the writings of völkisch occultists like Liebenfels. This had happened at a time in his life when everything around him was falling apart. In 1907, his beloved mother died in an excruciating manner: diagnosed with breast cancer, she submitted to the painful application of iodoform to her chest. This was a method by which - it was believed - the acid-like characteristics of iodoform would literally burn out the cancerous cells. She succumbed, however, on December 21, dying in the light of a Christmas tree near her bed. (Four days later, Lanz von Liebenfels would raise his swastika flag over Burg Werfenstein, not far from the Hitler home in Upper Austria.) That this experience would have figuratively burned itself into her son and thereby affected his psyche in profound and disturbing ways - particularly in relation to women - seems certain: there is evidence that his mother's gruesome death may have affected his sex life. But, rejected as an artist - his first career choice - no less than three times, he abandoned the humanities for politics and thereby came to vent his anger and frustration on the whole world.
It has been said that Hitler's problem in terms of art was his inability to draw the human body; perhaps, then, a reflection of his inability to purge himself of the trauma of watching his mother die in such a horrible fashion on the eve of the winter solstice. Thus orphaned, estranged from most of his family, somewhat impoverished, continually rejected in his quest for acceptance at the Academy of Fine Arts in Vienna, sleeping in men's dorms and living on the dole, this lover of grand opera - reduced to prostituting what he believed was his great artistic talent by painting picture postcards for tourists - was prime material for the screeds of the German and Austrian occultists. In another age, or another country, Hitler might have blamed his gross misfortunes on a plague of evil spirits, and sought the assistance of an exorcist or witch doctor. Instead, the perfectly scientific-sounding jargon he found in Liebenfels's 'Ostara' provided him with another - equally occult and nefarious - enemy: an evil race whose very blood, and cells, and genes were slowly possessing and dispossessing the entire German people. Authors like Liebenfels took the racial theories of Blavatsky - with her root-races and evolutionary scheme - and mixed them with the programs of Social Darwinists and eugenicists, and took the resulting mixture to a logical conclusion: exterminate the subhumans and so avoid polluting the gene pool with their recessive traits. Hitler was not completely credulous: that is, he did not surrender his entire life to a blind acceptance of occult beliefs; otherwise, he would have spent his remaining years sitting around seance tables and invoking spirit guides like many of his contemporaries in Europe and America. Instead, Hitler was nothing if not pragmatic, and not easily fooled by fake mediums and other occult charlatans. He sought real-world solutions to the problems posed by mystics such as Liebenfels.
That is, he agreed with occult theory, and seemed to take much of it as accepted fact; it was occult practice - particularly the occult practices taking place in his own environment of self-deluded, albeit self-proclaimed, magi and bishops and seers - that he couldn't stomach, although he loved to read about occultism and to discuss it with those of his friends who had done some of the same reading. To Hitler, the occult was possibly a further refinement of the Roman Catholicism he was brought up with. According to Schellenberg and others around the Fuhrer, Hitler did not believe in an afterlife or a personal god. In the years before World War I and after the death of his mother, Hitler lived in relative poverty in Vienna. He eventually had his own space at a men's dormitory, where he was given a small, clean room of his own, and managed to buy some watercolors. He would paint scenes of churches and local landmarks, and a friend would hawk them on the streets for a cut of the proceeds. It was in Vienna and during these tough times that Hitler made the personal acquaintance of Lanz von Liebenfels at the latter's office, sometime in 1909. Liebenfels remembered that Hitler appeared so distraught and so impoverished that the New Templar himself gave Hitler free copies of Ostara and bus fare back home. It would be von Liebenfels who would greet the ascension of Hitler to Germany's throne with tremendous enthusiasm, as a sign of the great occult power that was sweeping through the world under the sign of the swastika (before he was silenced by that same regime after the Anschluss in 1938). Hitler was also fascinated by the opera, particularly Wagner (1813-1883). The four operas that compose the famous 'Ring Cycle' were a favourite, of course, and 'Parsifal', 'Lohengrin' ... virtually all of Wagner's mythological and mystical work. One Wagnerian opera that stands out as an early favourite of Hitler's is a lesser-known and infrequently performed work called 'Rienzi'.
Hitler was captivated by this opera and took his friend Gustav Kubizek to see it several times. They had to stand during the performance, since they could not afford seat tickets. It is an intriguing footnote to the story of the occult Reich that 'Rienzi's' libretto was based on an historical novel of medieval Rome by the celebrated English occultist and best-selling author Lord Bulwer-Lytton (1803-1873). Rienzi was a patriot who attempted to reform the Roman government, but who eventually failed and went to his death. Rienzi - whose real name was Niccolo Gabrini - was often called "the last of the Romans." As for Bulwer-Lytton, who is probably best known for his 'The Last Days of Pompeii', he was the author of the popular occult novels 'Zanoni' and 'Vril - The Coming Race', the latter having inspired the creation of a German secret society by the same name. (Bulwer-Lytton's name would also be lumped together with those of Byron, Moore, Shelley, Rousseau, George Sand, and Victor Hugo as a member of the "Satanic School" of literature: a trend of certain Romantic poets towards the anti-Christian, unconventional, and occasionally obscene in literature.) It would be Wagner's peculiar vision of cosmology and world history - which finds its most perfect expression in 'Parsifal', with its moving, if peculiar, pagan spin on the Christ mythos - that would influence Hitler and an entire generation of Germans who were cutting their milk teeth on Teutonic mythology as German prehistory and on the writings of erstwhile Wagner devotee Friedrich Nietzsche (1844-1900), the philosopher who popularized the concept of the "superman." The heady combination of Nietzsche and Wagner provided an atmosphere in which strange pagan societies could develop among the otherwise fastidious members of polite society.
Groups such as the Thule Society, the Edda Society, the List Society, the Germanenorden, and the Order of New Templars would include nobles, military officers, college professors, and wealthy industrialists among their ranks. It was also the influence of Wagner to which we can attribute that fascination for orders of knighthood, the quest for the pagan Grail, Teutonic gods, and blond-haired heroes that would eventually dominate the Weltanschauung of Hitler's most ardent supporter, Heinrich Himmler. About the year 1911 Hitler made the acquaintance of one Josef Greiner - another resident of the men's hostel, an unemployed lamplighter - and they would spend hours discussing such arcane lore as astrology, religion, and the occult sciences. According to Greiner in his published memoirs, Hitler was fascinated by stories of yoga and the magical accomplishments of the Hindu fakirs. He read with enthusiasm the travel books of Swedish explorer Sven Hedin, who blazed trails through the Himalayas in search of Tibetan Shangri-las. But in 1913, defeated in his dream of becoming an artist and thereby redesigning the great public buildings of Vienna, Linz, and other Austrian cities, Hitler finally left his homeland for Germany, crossing over the border from Austria-Hungary on May 24 and arriving in Munich the next day. A year and a month later, Archduke Ferdinand would be assassinated at Sarajevo by a member of a Serbian secret society called the Black Hand. In July 1914, Austria will declare war on Serbia. Three days later, on August 1, Germany will mobilize against Czarist Russia; on August 3, she will declare war against France; on August 16, Hitler will enlist with the 1st Bavarian Infantry Regiment. The young artist - broke, his artistic efforts constantly rejected, and living a humiliating life on charity - embraces war with glee.
If there is still any doubt about Hitler's enthusiasm for occult and völkisch themes, the following should put all objections to rest. Adolf Hitler is twenty-six years old; by the time the war ends in 1918, he will have been awarded the Iron Cross, First and Second Classes, and will have proven himself an exceptionally brave combat soldier. But in October of 1918, he is blinded by a mustard gas attack in Belgium. He temporarily loses his sight, and is sent to the sanatorium at Pasewalk. The doctors, not familiar with this type of condition, believe it to be psychosomatic. While they may be wrong, he does eventually regain his sight, only to lose it again as word of Germany's surrender reaches his ears on Martin Luther's birthday: November 9, 1918. (On that same day, in Munich, Baron Sebottendorff would call the Saturday meeting of the Thule Gesellschaft to order, and his cultists would begin to forge identity papers, spy on the Reds, and stockpile weapons.) Yet it is during Hitler's blindness that he receives a kind of mystical enlightenment, like that experienced by Guido von List many years before during his own temporary blindness (or like that of Saul, blinded on the way to Damascus), for, from that point on, Adolf Hitler has changed. He has been illumined, perhaps. He has spoken - as the Golden Dawn would have said - to his 'Holy Guardian Angel', his higher Self - the 'True Will'. He has been blinded fighting the Allies in defence of his adopted country, Germany, only to regain his sight to witness Germany's capitulation and the abdication of the Kaiser - whom the Allies had already characterized as the Antichrist - and the resulting collapse of the Reich. After the successful overthrow of the Soviet Government by the Free Corps under Thule Society leadership, the Thulists recognize that they need to organize the workers into a coherent political party, else the Communists will return with a vengeance.
Sebottendorff has already formed the Political Workers' Circle out of his base at the rather expensive and exclusive Four Seasons Hotel. From this Circle will be spawned the German Workers' Party, with rail worker and locksmith Anton Drexler at its head. It is this Party that Hitler will infiltrate - on the orders of a Captain Mayr, who reports to a clique of wealthy industrialists and officers operating, coincidentally, out of the Four Seasons Hotel - in September of 1919. Drexler will give him a small pamphlet that he has authored, containing explosive phrases like "National Socialism" and the rather sinister "New World Order." Hitler is captivated by these concepts, and decides that his spying days are over. Drexler is equally captivated by the brash and outspoken young Austrian corporal, and urges him to join the Party. Adolf Hitler becomes German Workers' Party member 555. Later, perhaps for superstitious reasons, Hitler will annoy the Old Guard by claiming that he was member number 7; this will be proven wrong when it is revealed that the Party began its numbering system at 500 in order to appear larger than it really was. (Hitler was actually member number 7 of the executive committee of the Party, formed later.) In a bizarre coincidence, the number 555 will come up again a little later as the numerological value of the word 'Necronomicon', a book of black magick that was first introduced to the Western world in a short story by H.P. Lovecraft entitled "The Hound" (1922). Lovecraft was also an anti-Semite and an ethnophobe, as many of his writings and letters attest. During the years that Nazism rose to total power in Germany, Lovecraft was writing stories about an un-namable evil that could be conjured using the formulas of the 'Necronomicon', and along the way introduced yet another "black book," the 'Unaussprechlichen Kulten' ("Unspeakable Cults" or, alternatively, "Unpronounceable Cults"!)
of the mythical German anthropologist von Junzt. He wrote about the mysterious and abhorred practices of Asians and Arabs in his short story "The Horror at Red Hook," among other tales, and - save for the rather high literary quality of his stories when compared to the articles of a von Liebenfels - their racist nature could have easily promised him publication in select copies of Ostara. While the actual nature and extent of Lovecraft's anti-Semitism and ethnophobia have become the subject of much debate, it is safe to say that many of his stories do not meet the criteria set down by our faithful watchdogs of the Politically Correct. (To followers of Aleister Crowley, the number 555 is the qabalistic equivalent of an ancient Hebrew term meaning "Darkness," an appropriate connotation, from a Jewish perspective, of what Hitler represented.) Gradually, Hitler - carrying out his own mysterious agenda, spawned at the sanatorium in Pasewalk - begins to assume total control of the German Workers' Party (Deutsche Arbeiterpartei, or DAP). He changes its name to the National Socialist German Workers' Party - Nationalsozialistische Deutsche Arbeiterpartei, or NSDAP. He will also design its emblem with the help of a Thule Society member, the dentist Dr. Friedrich Krohn, and the swastika will become the official symbol of the new Nazi Party. Still broke, Hitler lives in a tiny rented room in Munich. His bookcase has a few well-thumbed volumes, including the memoirs of the famous Swedish explorer Sven Hedin (already mentioned), whose principal destination has always been Asia, with an emphasis on Tibet. Sven Hedin will later become deeply involved with the infamous Ahnenerbe: a research organization within Himmler's SS.

DIETRICH ECKART

Although hungry, poorly dressed, and uncomfortable in high society, Hitler comes to the attention of one of Germany's most famous poets, the eccentric genius Dietrich Eckart (1868-1923).
Eckart, encouraged by his friends in the Thule, went to hear him speak at DAP meetings and, like so many people after him, became entranced by the hypnotic, wild-eyed Austrian fanatic. He takes Hitler under his wing and introduces him to the elite of Munich society. Dietrich Eckart was an author and playwright who owned his own newspaper. Famous for his translation into German of 'Peer Gynt', Eckart was one of Munich's coffeehouse darlings, as well known for his biting wit and sarcasm as for his felicitous use of the German language in poetry and plays. With a circulation of some thirty thousand, his newspaper - Auf Gut Deutsch ("In Good German") - ranked with the 'Völkischer Beobachter' and 'Ostara' as an influential anti-Semitic publication. His protege was none other than Alfred Rosenberg, the Baltic-born anti-Semite who is later to become one of the architects of the official pagan policies of the Third Reich. Eckart, Rosenberg, and, later, Rudolf Hess become Hitler's closest companions and co-conspirators in the first years of the 1920s in Munich. It was Eckart who ... the cosmological theories of Hans Hörbiger, and introduced them to his Austrian corporal. His close relationship with both Rosenberg and Hess would have provided fertile ground for any number of wide-reaching discussions on mystical subjects. It has even been claimed that Eckart and Hitler attended seances in which ghostly ectoplasmic forms were seen. There is also evidence that Eckart was approached by none other than the eminent occultist Rudolf Steiner himself. Steiner sought support for his "Threefold Commonwealth" idea in the pages of Eckart's Auf Gut Deutsch - an attempt that was doomed to failure. According to Eckart, Steiner was a crazed sex magician and a member of the Jewish-Masonic conspiracy. An article, written by Alfred Rosenberg and published almost a ... Occult Reich, and a tenet of their basic beliefs.
As for Eckart himself, most histories give him very little print space. His influence over Hitler is downplayed, yet it was Eckart who clearly groomed Hitler for the role he was later to play, and who spent those three years orchestrating his rise to power. It was Eckart who helped arrange financing for the nascent NSDAP from European and American industrialists, including Henry Ford. And it was Eckart who, along with Rosenberg, accompanied Hitler to Upper Bavaria with fifteen hundred Storm Troopers to "liberate" the town of Coburg from the Reds, in what was arguably Hitler's first real military victory. Hitler's popularity and influence in Germany was growing at a speed that must have amazed Hitler himself, considering that only a few years earlier he had been practically unknown. But his anti-Communist, anti-Capitalist platform was winning him converts from all over Germany's political spectrum. The old guard - those members of Germany's defeated army who came home to find their nation unrecognisable, in shreds from the hundreds of wars taking place between dozens of private armies and political parties, and in absolute economic chaos - drank in Hitler's speeches like cool steins of draft in the very beer cellars where the NSDAP met. And on February 24, 1920, in the Hofbrauhaus - at the meeting during which his Twenty-Five Point program for saving Germany was proclaimed, introduced by Marc Sesselmann (a Thulist and member of the DAP) - he told them what they wanted to hear: that the war was lost because of Capitalists, Communists, Freemasons and, of course, international Jewry, which was behind them all. That the Germans were enslaved by punitive interest payments. That swift and violent action was needed if Germany was to be snatched from the jaws of a satanic conspiracy. The speech was welcomed by thunderous applause from the approximately two thousand listeners, and the die of the Occult Reich was cast.
At this time the 'Protocols of the Elders of Zion' was being widely disseminated, raising alarms about a grand conspiracy of Jews and Freemasons bent on destroying Germany as they were at that moment destroying Russia. If Hitler were in power, his listeners believed, he would throw out all these undesirable elements - by force of arms, if necessary - and the country would be right again.

EINE ART MENSCHENOPFER (A Kind of Human Sacrifice)

As Hitler was travelling around Germany, raising consciousness and gathering recruits, a secret organization within the Ehrhardt Freikorps Brigade was itching for revolution. They eventually carried out (on June 24, 1922) the most famous assassination of the era, one that is still remembered today by those who lived through it, as Americans remember where they were when Kennedy was killed. The society was called Organization Consul, and its members included Erwin Kern, Hermann Fischer, Ernst von Salomon, and Ernst-Werner Techow. Organization Consul was a terrorist cell within the Ehrhardt Brigade, dedicated to carrying out bombings and assassinations against leftist targets and "Versailles" politicians, i.e., the "November criminals" who were believed to have sold Germany down the river at the Armistice and later at the Versailles Peace Conference. While the Freikorps marched openly and provocatively through the streets, their brothers in Organization Consul stuck to the alleys. Their target for June 1922 was none other than Walther Rathenau, foreign minister of the Weimar Republic. Rathenau's father had founded what later became AEG, Germany's version of General Electric, by purchasing Edison's patents on the electric light bulb. Rathenau himself - a sensitive, artistic soul who became enmeshed in high finance, industry, and politics almost against his will (he was a lover of poetry and music who had written volumes of aphorisms under a pseudonym) - was Jewish. But that was not his only crime.
He was also wealthy, admired, powerful, and a man with far-ranging vision. He had virtually single-handedly ensured that Germany would be able to wage a continuous war under the Kaiser by arranging to bring all of Germany's raw materials under centralized control in 1914. He had successfully negotiated the famous Treaty of Rapallo with the Soviet Union when France was frantically trying to isolate Germany from the European community after the war. He had written books describing the political and cultural situation in Germany with insight and wit. In short, he was a man of many accomplishments and, what is more, a sympathetic and elegant figure whom even the conspirators admitted "unites in himself everything in this age that is of value in thought, in honour, and in spirituality." We might not be discussing Rathenau at this point were it not for a peculiar phenomenon surrounding his death that is referred to by historian Norman Cohn. Of course, the Freikorps (and particularly the Ehrhardt Brigade, as we have seen) was heavily influenced by völkisch and other Pan-German occultism. And it was the Ehrhardt Brigade that marched into Munich that May Day in 1919, wearing the swastika as their symbol and singing the Hakenkreuz hymn. But Rathenau was identified with the most legendary conspiracy of all time, and was numbered among its members in the crazed imaginations of desperate men. Walther Rathenau, they believed, was one of the actual Elders of Zion. His assassination would be a blow against the international Jewish/Masonic/Communist/Capitalist cabal to dominate the world. He did unite in himself all those qualities and values recognized by the Organization Consul itself, and thereby symbolized the success of the Zionist conspiracy. Therefore, according to Cohn: Rathenau was not simply assassinated as an Elder of Zion, he was offered up as a human sacrifice to the sun-god of ancient Germanic religion.
The murder was timed to coincide with the summer solstice; and when the news was published, young Germans gathered on hilltops to celebrate simultaneously the turning of the year and the destruction of one who symbolized the powers of darkness. In later years, Ernst Röhm would deliver a eulogy at the graves of two of the assassins, saying that their spirit "is the spirit of the SS, Himmler's black soldiers." The human sacrifice of Walther Rathenau - timed to occur on a pagan holiday or "sabbath" that was observed by Nazi cultists throughout Germany - was the signal that the new Aryan faith was increasing in strength. It certainly must have seemed that way to Hitler. THE LIBERATION OF COBURG With Eckart and now Rosenberg at his side, Hitler strode all over Germany like an avenging angel on a budget, seeking out targets of opportunity. With him could be counted a contingent of six hundred former Free Korps men who had sworn an oath of loyalty to the cause, a kind of bodyguard that was now known as the dreaded SA, the Sturmabteilung, the brown-shirted Storm Troopers. The unifying symbol of the SA, of course, was the swastika, which they wore as armbands, and which they flew on black-white-red flags after a design approved by Hitler. They were also accompanied by a brass band that played rousing marches at every public meeting of the Nazi Party. Hitler himself had presided over very little actual armed conflict up to this time, but was ready for battle when they reached the town of Coburg in Upper Bavaria on October 14, 1922, for a 'German Day' celebration. This time, they were met with opposition in the form of a crowd of opponents of various persuasions who began by jeering and shouting epithets, calling Hitler's followers murderers and criminals, and who proceeded very shortly to throw rocks at the marching Storm Troopers. Hitler gave a signal with his whip, and the Troopers fell upon the crowd with reckless abandon.
The hostile crowd was forced back, and the march continued, but the talk on the street was that the Communists had only fallen back to regroup and that a major confrontation would take place in twenty-four hours. The following day, in spite of a call to all leftists to throw out the Nazi intruders, the threatened confrontation never materialized. And now they were ready for the rest of Germany. THE COMBINATION OF STELLAR INFLUENCES In a letter written to Hitler by a female admirer in Munich a little over a month before the famous Beer Hall Putsch of November 1923, the future leader of Germany was advised of certain astrological predictions made by Frau Elsbeth Ebertin, the dowager empress of an impressive line of German astrologers whose innovative techniques are still employed today in Europe and America. MUNICH, 30 SEPTEMBER 1923 Highly honored Mr. Hitler, Allow me, as an old member and a fanatical adherent of your movement, to point out to you a matter that would surely interest you. I have in front of me a work of an expert of scientific astrology who is famous and popular in all of Germany, E. Ebertin Publishers, 1914. The following is an excerpt of the article in question. No name is given in the article, but it can only be your esteemed person who is referred to therein (Ebertin, p. 54). "A fighter born on April 20, 1889, at whose birth the sun stood at 29° of Aries, might, by his all too daring actions, place himself in danger and possibly soon contribute to the impetus which will start the stone rolling. "According to the stellar constellations the man must definitely be taken seriously and is destined for the role of a leader in future struggles. "It almost seems as if he whom I have in mind has been chosen by fate, under this strong influence of Aries, to sacrifice himself for the German people and to bear everything courageously and bravely, even if it should be a matter of life and death; but at the least to give the impetus to a German liberation movement, which then will erupt quite suddenly in an elementary way.
"However, I don't want to preempt fate. Time will tell, but as things are going at the time of my writing they cannot continue! "The German people can only come to itself again in the political and religious field through some spiritual leaders sent by God, namely by the agency of individuals who believe in God and have a cosmological sensitivity, and who are above party politics, several of whom I have discovered among April natives (that is to say only if the star constellations are favorable). "Once the right point in time will have come, i.e., once the Versailles peace treaty will have proved to be impossible to fulfill and will have been overturned, then the stars - which are now still shining in hidden places - will beautifully appear as shining meteors, similar to the heavenly bodies which are now newly discovered or become visible ..." etc. etc. You must forgive me if I could not help but inform you of the foregoing. Most respectfully, Heil und Sieg! Most devotedly, Maria Heiden, Munich It is worthwhile to quote the entire text, as it illustrates both the self-professed "fanatical" devotion of the letter writer and the political sentiments of Frau Ebertin at this time. Frau Heiden quoted the comments from Ebertin's own book of predictions, Ein Blick in die Zukunft (A Glimpse into the Future) for the year 1924, which was published in July of 1923. It was brought to Hitler's attention by a number of other admirers as well, and Frau Ebertin herself sent a copy of her book to the Völkischer Beobachter ... but according to Ebertin her predictions only served to irritate Hitler. Hitler was not one who was willing to believe that his fate was out of his hands and written in the indelible ink of the stars, at least not when he felt he had the future - his own and Germany's - in his grasp, as he did that September of 1923. But it all came to an end with the failed Munich Beer Hall Putsch of November 1923.
An ill-planned and poorly executed attempt to take over the Bavarian government by force resulted in a major setback for the Party. Hitler was arrested; Hess - who had escaped to Austria - was being sought by the authorities and would eventually surrender himself; and Dietrich Eckart - Hitler's mentor and protector - died in Berchtesgaden on December 26 of that year, his protégé in prison but his optimism unbounded. Eckart knew where Hitler was headed, because it was he who had pushed him in the right direction. To rebut those who claim that Eckart's influence and effect on Hitler was not relevant, one merely has to point to the memorial services that were held every year in his honour by Hitler, including the lavish ceremony on December 26, 1933 (the year Hitler came to power); the monument put up over his grave in Berchtesgaden; the eulogies written for him by such important contemporaries as Rosenberg (who would later become enormously influential in the Third Reich); and the speeches made on the anniversary of his death by such men as Baldur von Schirach (the head of the Hitler Youth). Hitler owed a great deal to Eckart, and the evidence left behind shows that he knew and understood that; after all, the final words of Mein Kampf show that Hitler's infamous memoir was dedicated to him. Another contribution of Eckart, and one that is frequently missed even by occult historians, is his connection with Henry Ford. Eckart was approached by agents of the American automobile manufacturer as early as 1920-1921. Ford was a notorious anti-Semite, and had actually written a book - 'The International Jew' - which was enormously popular in Germany, where a German-language version was a best-seller. Hitler had read it before writing 'Mein Kampf'. Hitler even had a picture of Ford hanging in his office at Party headquarters (the Brown House).
It is worthwhile to note that the German publisher of 'The International Jew' (as well as of an early German edition of the 'Protocols of the Elders of Zion') was none other than Theodor Fritsch, the man who founded the Germanenorden in 1912, for which the Thule Society served as a front. The support of Henry Ford was vital to the survival of the Nazi Party in the early days, and one of Hitler's proudest achievements. He would award that quintessential American with the highest Nazi honor it was possible to bestow on a non-German, the Grand Cross of the Supreme Order of the German Eagle, in 1938. He was the first American and only the fourth person to be given the award. Even Baldur von Schirach would credit Henry Ford's writings for having converted him to anti-Semitism. An earlier recipient of the award was Benito Mussolini that same year. Thus it was Eckart who handled some of these early financial contributions from Henry Ford, and Eckart who, among others, dealt directly with the Ford representatives in Germany. After Eckart, perhaps no single other human would come to exert that type of influence over Hitler until Hanussen, the psychic and astrologer who honed Hitler's public-speaking skills ... and who performed occult rituals on Hitler's behalf. As Eckart's ghost continued to inspire Hitler from beyond the grave, Hitler would indeed dance; and in that 'danse macabre' Hanussen would lead. HANUSSEN In the last days of 1932, Hitler was contemplating suicide. Members of the NSDAP were in danger of defecting to other political organizations. His own trusted disciples were dividing the Party into warring factions that could not be controlled. It was at this moment that the Viennese clairvoyant Hanussen - whose real name was Herschel Steinschneider, the son of a Jewish vaudeville performer, and who had risen to fame in the early 1930s - entered Hitler's orbit, even as powerful forces were working to thwart the attempts of Hitler to gain power in Germany.
Hanussen's famous prediction pointed to the end of January - close to the Sabbath of Oimelc, one of the four "cross-quarter" days of the witches' calendar. It seemed an outrageous prediction but - after a series of bizarre coincidences and half-baked conspiratorial machinations on the part of his opponents - Hitler went from washed-up political has-been to chancellor of Germany with dizzying speed in thirty days and, on January 30, 1933, he assumed power. Hanussen himself was murdered shortly afterwards, and his death signalled danger for mediums and psychics throughout the Reich. Perhaps Hanussen "knew too much," or perhaps he even had connections to the Communist Party (hence, it was whispered, his accurate prediction of the Reichstag fire) - or to the fire's alleged perpetrator, Karl Ernst, who was executed during the Röhm purge with a bewildered 'Heil Hitler' on his lips. And then, of course, Hanussen's father was Jewish, which would have been reason enough to execute the inordinately influential seer. Unfortunately, we will never know what happened, for Hanussen died as he had lived: the Count St. Germain of Weimar Germany, a complete and compelling mystery. THE MASTER OF THE PENDULUM Another occultist in Hitler's inner circle was the Thulist, astrologer, and pendulum expert Wilhelm Gutberlet (born 1870). Gutberlet first comes to the historian's attention as a shareholder in the 'Völkischer Beobachter', Sebottendorff's former newspaper. Franz Eher Verlag was a publishing company that Sebottendorff purchased in 1918 for about five thousand Reichsmarks (RM). It consisted of a newspaper, the 'Münchener Beobachter', that had ceased publication with the death of its founder in June. Sebottendorff picked it up and moved its offices to the Thule meeting rooms at the Four Seasons Hotel, turning it into an anti-Semitic organ that was eventually taken over - after a series of intervening ownerships by other parties - by the German Workers' Party after Sebottendorff left Munich.
In 1920, Wilhelm Gutberlet owned shares worth 10,000 RM, or about 8.5 percent of the total value of the paper. It was renamed the 'Völkischer Beobachter', and as such became the propaganda machine of the NSDAP. Gutberlet was a Thulist. He was also one of Hitler's earliest followers. A medical doctor, he was present at the first meeting of the German Workers' Party that Hitler attended and had remained a close friend and confidant since then - in other words, since 1919. Gutberlet virtually disappears from most official accounts of the NSDAP until he reappears in Schellenberg's memoirs. Walter Schellenberg was chief of the Foreign Intelligence section of the SD (Sicherheitsdienst, or Security Service), and survived the war to write about his experiences as a spymaster in Europe. According to Schellenberg: Hitler's racial concern was one of his characteristic features. I discussed this several times with Dr. Gutberlet, a Munich physician who belonged to the intimate circle around Hitler. Gutberlet believed in the pendulum's mystic power and had many discussions with him on racial questions. Thus, in Gutberlet, we have an occultist, a Thulist, an astrologer, a racist, a pendulum expert, and a confidant of Hitler, all wrapped into one. The matter of the "sidereal pendulum" itself will be dealt with later, but for now we can agree that Gutberlet's influence over Hitler's thinking must have been profound, for the Fuhrer himself constantly ridiculed the völkisch occult groups in his official speeches ... while secretly soliciting their advice and counsel away from the prying eyes of both the press and the public. And it is revealing to know that Gutberlet, the astrologer and mystic, was consulted by Hitler on racial matters as well as on mystical subjects, thus providing additional evidence that Hitler's racism was motivated by his occultism.
List, Liebenfels, Eckart, Hanussen, Gutberlet - these are only five of the many occultists whose influence surrounded Hitler from his early days as an art student, and throughout his later career. To complete the story, we have to investigate Haushofer, Hess, Himmler and many others, for - as the Reich consolidated and became more powerful - other occult lodges in Germany were active and were seen to pose a threat to the new regime. While drawing upon some of the same traditions as the Order of New Templars, the Germanenorden, and the Thule Society - Eastern religions, rituals associated with astrology and mythology, sexual formulas for becoming powerful and casting spells - they had other associations which made them suspect. Heydrich was informed about the smallest detail of Hitler's private life. The reports showed that Hitler was so ruled by the daemonic forces driving him that he ceased to have thoughts of normal cohabitation with a woman. The ecstasies of power in every form were sufficient for him. (Schellenberg) Hitler's entourage included the pagan ideologue Alfred Rosenberg (whom Hitler made head of the Nazi Party), a high-profile Reichsleiter with a blatantly pagan and anti-Christian worldview. We have covered runic mysticism already; the Blood and Soil doctrine is too complex to examine thoroughly here. The idea that the unregenerate anti-Semite and godfather of the Occult Reich, Guido von List, might have been a Golden Dawn initiate is an unsettling proposition, but there is no evidence for this. SEX MAGIC Crowley accepted initiations into a variety of occult lodges and societies in his time, and eventually picked up an initiation into something called the Ordo Templi Orientis, or the Order of the Eastern Temple.
This was the brainchild of one Karl Kellner, a wealthy German Freemason of high rank in a rather distaff branch of Freemasonry (the Rite of Memphis and Mizraim of John Yarker), who claimed that he was instructed in the techniques of sex-magic by a Hindu adept and two Arab magi during his travels in the East. Certainly, among some practitioners in the East, the left-hand tantric circle has become a sexual one, and there is a small library of techniques, rituals, invocations, chants, etc. appropriate to this type of magic. These treatises discuss the occult methods to be employed during autoerotic, heterosexual, and homosexual sex acts, and concern everything from uniting with one's god or goddess through masturbation or intercourse, to making talismans for various purposes, and even using sex to achieve enlightenment. These few booklets can stand as the West's answer to, and interpretation of, Hindu tantrism, particularly of the Kaula Shastra variety, with a little distaff Sufism thrown in for good measure. Quite simply, we are dealing with the subordination of the sex act to the Great Work by the magician and mystic of every age. When Hitler was fighting the Allies as an enlisted man in the trenches of France and Belgium, writing Wotanist poetry full of magical symbolism, Crowley was writing pro-German propaganda for 'The Fatherland', a journal published in New York by one George Viereck, who had known Crowley slightly from years before. Crowley needed a job, and agreed to take over as editor of 'The Fatherland'. He claimed to be Irish, which would have made him a natural enemy of the English if true (which it wasn't). In Germany, the OTO's traditions were carried on by lodges such as the Fraternitas Saturni - the Brotherhood of Saturn - founded by Eugen Grosche.
The magicians of the Brotherhood of Saturn saw it, therefore, as a gate opening upon this world from the domain of daemons; and daemons were thought to be nothing more than powerful forces which - to the uninitiated - appeared fearsome and evil, but which the initiate (with proper training and discipline) could tame to more productive ends. This mystification of the sex act among the German occult lodges was perfectly consistent with later National Socialist fashions regarding sex and power. As Susan Sontag points out in her essay, "Fascinating Fascism": The fascist ideal is to transform sexual energy into a "spiritual" force, for the benefit of the community. The erotic (that is, women) is always present as a temptation, with the most admirable response being a heroic repression of the sexual impulse.... Fascist aesthetics is based on the containment of vital forces; movements are confined, held tight, held in. It is a short step from the Nazi art that Sontag brilliantly describes to the German art of sex-magic. In the late 1970s, the author received a communication from a traditional (that is to say, non-Crowleyan) OTO lodge operating out of Frankfurt; evidence, if slim, that an OTO continued to function that had not accepted Thelema and was still working the original grades. The SA (Sturmabteilung), or Brownshirts, were really the first shock troops - "Storm Troopers" - of the Nazis: brutal enforcers in uniform who intimidated the opposition and acted as a kind of private army for the Party (a type of Free Korps, such as that supported by the Thule; indeed the leader of the SA, Ernst Röhm, had been a Free Korps leader well known around Thule headquarters). Within the bureaucracy of the Nazi government there eventually arose a body dedicated to such research: The Ancestral Heritage Research and Teaching Organization. So, how to describe the Ahnenerbe? Its curator, the Munich Indologist Walther Wüst - an expert in the Vedic tongue - enjoyed the distinction of the SS rank of Oberfuhrer, or Brigadier.
THE MIDDLE POINT OF THE WORLD The SS ceremonial rooms were designed by Professor Diebitsch and experts from the Ahnenerbe. In any event, Himmler created the King Heinrich Memorial Institute in 1938 in Quedlinburg. The summer solstice was a sacred day to the Nazis and, as we have seen, was the occasion of the "human sacrifice" of Walther Rathenau. 'I am sending to you now ... six photographs with explanatory text.' [25] The SS organization had been built up by Himmler on the principles of the Order of the Jesuits. It was written by none other than Lanz von Liebenfels, he of the Order of New Templars. The Ahnenerbe's scholars held Thule, the famous destination of Pytheas in the fourth century BC, to be identical to Iceland. For them, Thule corresponded to their own Atlantis myth; while the rest of the human race might have descended from monkeys, the Aryans were held to descend from the god-men of Thule. The Ahnenerbe made the Nazis' most prestigious race theorist an honorary member about a year before he died. Thus were the worlds of "scientific racism", Social Darwinist eugenics and mystical Nordic paganism and anthropology linked, and to them both the political agenda of the Third Reich, which involved not only Lebensraum and a "drive to the East" but also the elimination of those deemed racially inferior. It would seem that Schafer's primary goal in trekking through the Himalayas was more scientific in nature - and hence of less immediate value - than the Reich leadership was willing to accommodate. Happily, not all of Schafer's observations were of the sexual habits of the Lachung and other Himalayan peoples, nor of the flora and fauna, as the following article from the Nazi Volkischer Beobachter of July 29, 1939, relates: 'Dr. Ernst Schafer, SS-Hauptsturmfuhrer, has now completed the first German SS-Tibet Expedition with extraordinarily great success and will soon return to Germany with his guides.
The participants of the expedition returned with a wealth of material; I have been unable to discover what happened to it after the war, though (for reasons too complex to discuss here) I have my own suspicions. The Ahnenerbe also promoted the Welt-Eis-Lehre, or 'World Ice Theory', once popularized by the Austrian engineer Hanns Horbiger. In a manuscript authored by an anonymous SS-Obersturmfuhrer, we note the same attempt to put the World Ice Theory into a purely "scientific" framework, with the same unselfconscious irony: 'The Need and Format of a New Implementation of the World Ice Theory'. THE KNIGHT, DEATH AND THE DOUBLE Scientists, doctors, and professional people in every field found themselves "doubling" to the extent that what would be considered normal, civilized behaviour in a healthy society had to be suppressed in favour of a belief in the purity of the race and the sacred mission of the Occult Messiah. Science was still expected to carry on, however, and scientists found themselves making their knowledge and method subservient to the New Religion - even as they insisted that the Bible was either full of errors or had been interpreted incorrectly by generations of self-serving clerics. 'At dinner ... he spoke of India and Indian philosophy. This led him to speak of a subject which was a hobbyhorse of his: in a lively manner he described to me the result of researches in German witchcraft trials. He said it was monstrous that thousands of witches had been burned during the Middle Ages. So much good German blood had been stupidly destroyed. From this he began an attack on the Catholic Church, and at the same time on Calvin; before I had caught up with all this he was discussing the Spanish Inquisition and the essential nature of primitive Christianity.'
For some reason, these legends have attracted amateur astronomers (as mentioned in a previous chapter) and others who see in these stories a coded form of astronomical observations. LUCIFER'S SERVANTS As we saw in the preceding chapter, Himmler's personal agenda was to amass enough data - archaeological, historical, cultural, religious, and occult - to prove the ancient supremacy of the Aryan race. In this admittedly weird endeavor, Himmler had two distinct sets of ideological opponents. First of all, there were the scientists who disparaged such canonical Nazi claims as Aryan racial purity and the prevalence of an Aryan proto-civilization; if primitive Christianity had really been an Aryan cult, then Christ could not possibly have been Jewish. As much as one may wish to argue with the thesis of 'Crusade against the Grail' or 'Lucifer's Servants', there is nothing of the raving mystagogue about Rahn. Interestingly, the fact that both Wiligut and Rahn retired from the SS at the same time - in the same month - is suggestive of some collusion between the two mythologians: the one elderly, the other young. Rahn's exploits and the mystery surrounding his resignation and subsequent death have received a great deal of attention in European circles over the years, although they are little known in America. His unusual life story has led to considerable speculation that Rahn actually did discover something in his travels, and that - since he seemed to confide in Wiligut - they both had to be gotten rid of. Himmler had a rabid dislike of homosexuals, and through the auspices of Nazi psychiatrists at the Goring Institute tried to have several SS men "cured" of this "malady." Although that was probably not an option with an SS man as relatively well known as Rahn, he was possibly looking at some sort of reprisal in the future, either professionally or in some other way. Unfortunately, we shall never know.
HOLY BLOOD, HOLY GRAIL The authors of 'Holy Blood, Holy Grail' trace the story of the Priory of Zion, which they link to an underground tradition of Freemasonry and Templarism spanning the centuries and which finds its modern manifestation in the Knights of Malta, Italy's P-2, and other such groups. By appropriating the Grail legend, the Nazis rob Christianity of a huge chunk of its popular mythology. The chalice a Catholic priest raises during the Mass becomes a pagan cauldron; the mystery of the Blood of Christ becomes a hollow echo of pagan sacrifice. Appropriation of the Grail symbolism then becomes an assault on Christian faith itself; at least, on the popular faith of the lumpenproletariat - and it implies the need for a holy war against the black-clad SS, the satanic monsters who had "stolen" God's sacred Cup from the righteous, whether that Grail be sacred stone or golden cup.
Description:
------------
addChild() does not behave as expected(?) when used on a node that wasn't previously declared. In the example below, we initialize an empty tree, to which we attempt to add a new node at "/child/grandchild" without previously adding "child" to the tree. addChild() seems to operate on a temporary SimpleXMLElement, and the changes are never applied to the original object/tree. I believe that this bug is also responsible for the segfault that happens if getName() is used on the temporary SimpleXMLElement. I was very tempted to file this bug as a reproducible crash, but decided to file it under SimpleXML in case it would help route it to the maintainer faster.

Tested on:
PHP 5.2.4-dev (cli) (built: Jun 27 2007 20:04:30), WinXP, libxml2 2.6.26, SimpleXML Revision: 1.151.2.22.2.29
PHP 5.2.2-pl1-gentoo (cli) (built: May 24 2007 00:26:35), libxml 2.6.27, SimpleXML Revision: 1.151.2.22.2.26

Reproduce code:
---------------
<?php
$xml = simplexml_load_string('<?xml version="1.0" encoding="utf-8" ?><root />');
$xml->child->addChild('grandchild');
echo $xml->asXML();

Expected result:
----------------
I expect SimpleXML to create "child" if it does not exist, then add "grandchild" to that node. The output should be:
<?xml version="1.0" encoding="utf-8"?>
<root><child><grandchild/></child></root>

Actual result:
--------------
Neither node is added to the tree. The output is:
<?xml version="1.0" encoding="utf-8"?>
<root/>

[Comment] I believe this is related to this bug: on my machine (Mac OS X 10.4.9 Intel; PHP 5.2.3/5.2-200707181030) I get a bus error if I do it with a namespace, i.e. $xml->child->addChild('grandchild','','whatever');
Exception: EXC_BAD_ACCESS (0x0001) Codes: KERN_PROTECTION_FAILURE (0x0002) at 0x00000020
Thread 0 Crashed: 0 php 0x0018596f zim_simplexml_element_addChild + 263 (simplexml.c:1512) ...
Ben.

This bug has been fixed in CVS.
Snapshots of the sources are packaged every three hours; this change will be in the next snapshot. You can grab the snapshot at. Thank you for the report, and for helping us make PHP better. Segfaults are fixed. addChild() cannot be called on a temporary node, so the output is correct and a warning is now issued.
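For readers more at home in Python, the behavior the reporter expected - create the intermediate node if it is missing, then descend into it - can be sketched with the standard library's ElementTree. This is an illustrative analogue, not SimpleXML's API; the helper name add_child_at_path is an invention for this sketch:

```python
# Analogue of the reporter's expected behavior using Python's stdlib
# xml.etree.ElementTree: explicitly create each missing intermediate
# node before appending the new leaf, instead of relying on the parent
# magically coming into existence.
import xml.etree.ElementTree as ET

def add_child_at_path(root, path, tag):
    node = root
    for step in path.split('/'):       # walk the path, creating nodes as needed
        nxt = node.find(step)
        if nxt is None:
            nxt = ET.SubElement(node, step)
        node = nxt
    return ET.SubElement(node, tag)    # finally attach the new child

root = ET.fromstring('<root/>')
add_child_at_path(root, 'child', 'grandchild')
```

The same "check for the parent, create it explicitly" pattern is the usual workaround in SimpleXML as well, since the fix makes calling addChild() on a temporary node a warning rather than silently discarding the change.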
Light-weight OCR engine.

Project description

This library provides a clean interface to segment and recognize text in an image. It's optimized for printed text, e.g. scanned documents and website screenshots.

Installation

pip install liteocr

The installation includes both the liteocr Python3 library and a command line executable.

Usage

>> liteocr

Performs OCR on an image file and writes the recognition results to JSON.

usage: LiteOCR [-h] [-d] [--extra-whitelist str] [--all-unicode] [--lang str] [--min-text-size int] [--max-text-size int] [--uniformity-thresh :0.0<=float<1.0] [--thin-line-thresh :odd int] [--conf-thresh :0<=int<100] [--box-expand-factor :0.0<=float<1.0] [--horizontal-pooling int] str str

positional arguments:
str image file
str output JSON file

optional arguments:
-h, --help show this help message and exit
-d, --display display recognized bounding boxes and text on top of the image

engine: parameters to liteocr.OCREngine constructor
--extra-whitelist str string of extra chars for Tesseract to consider; only takes effect when all_unicode is False
--all-unicode if True, Tesseract will consider all possible unicode characters
--lang str language in the text. Defaults to English.

recognition: parameters to OCREngine.recognize() method
--min-text-size int min text height/width in pixels, below which will be ignored
--max-text-size int max text height/width in pixels, above which will be ignored
--uniformity-thresh :0.0<=float<1.0 ignore a region if the number of pixels neither black nor white < [thresh]
--thin-line-thresh :odd int remove all lines thinner than [thresh] pixels. Can be used to remove the thin borders of web page textboxes.
--conf-thresh :0<=int<100 ignore regions with OCR confidence < thresh.
--box-expand-factor :0.0<=float<1.0 expand the bounding box outwards in case certain chars are cut off.
--horizontal-pooling int result bounding boxes will be more connected with more pooling, but large pooling might lower accuracy.
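To get an intuition for what --horizontal-pooling trades off, here is a toy sketch of the idea - this is not liteocr's actual implementation, and the (x, y, w, h) box format and function name are assumptions for illustration. Boxes whose horizontal gap falls within the pooling window are merged into one region, so more pooling yields more connected (but coarser) text boxes:

```python
# Toy illustration of horizontal pooling (not liteocr's actual code):
# character/word boxes (x, y, w, h) whose horizontal gap is at most
# `pooling` pixels are merged into a single text-region box.
def pool_horizontal(boxes, pooling):
    merged = []
    for x, y, w, h in sorted(boxes):          # scan boxes left to right
        if merged:
            mx, my, mw, mh = merged[-1]
            if x - (mx + mw) <= pooling:      # gap small enough: merge in
                nx2 = max(mx + mw, x + w)
                ny1, ny2 = min(my, y), max(my + mh, y + h)
                merged[-1] = (mx, ny1, nx2 - mx, ny2 - ny1)
                continue
        merged.append((x, y, w, h))           # gap too wide: start new region
    return merged
```

With a small pooling value, adjacent characters merge into words; with a large one, whole lines fuse into a single box, which is exactly why oversized pooling can lower recognition accuracy.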
Python3 library

from liteocr import OCREngine, load_img, draw_rect, draw_text, disp

image_file = 'my_img.png'
img = load_img(image_file)

# you can either use a context manager or call engine.close() manually at the end.
with OCREngine() as engine:
    # engine.recognize() can accept a file name, a numpy image, or a PIL image.
    for text, box, conf in engine.recognize(image_file):
        print(box, '\tconfidence =', conf, '\ttext =', text)
        draw_rect(img, box)
        draw_text(img, text, box, color='bw')

# display the image with recognized text boxes overlaid
disp(img, pause=False)

Notes

I deprecated and moved the old code into a separate folder. The old API calls Tesseract directly on the entire image. The low recall wasn't trivial to fix at all, as I realized later:

- The command-line Tesseract makes really weird global page segmentation decisions. It ignores certain text regions with no apparent pattern. I have tried many different combinations of a handful of tuneable parameters, but none of them helps. My hands are tied because Tesseract is poorly documented and very few people ask such questions on Stack Overflow.
- Tesserocr is a Python package that builds a .pyx wrapper around Tesseract's C++ API. There are a few native API methods that can iterate through text regions, but they randomly fail with SegFault (ughh!!!). I spent a lot of time trying to fix it, but gave up in despair ...
- Tesseract is the best open-source OCR engine, which means I don't have other choices. I thought about using Google's online OCR API, but we shouldn't be bothered by internet connection and API call limits.

So I ended up using a new workflow:

- Apply OpenCV magic to produce better text segmentation.
- Run Tesseract on each of the segmented text boxes. It's much more transparent than running on the whole image.
- Collect the text results and mean confidence level (yield as a generator).
In my last post, we looked at building an AsyncLock in terms of an AsyncSemaphore. In this post, we'll build a more advanced construct, an asynchronous reader/writer lock. An asynchronous reader/writer lock is more complicated than any of the previous coordination primitives we've created. It also involves more policy, meaning there are more decisions to be made about how exactly the type should behave. For the purposes of this example, I've made a few decisions. First, writers take precedence over readers. This means that regardless of the order in which read or write requests arrive, if a writer is waiting, it will get priority over any number of waiting readers, even if it arrived later than those readers. Second, I've decided not to throttle readers, meaning that all waiting readers will be released as soon as there are no writers outstanding or waiting. Both of those points could be debated based on the intended usage of the type, so you might choose to modify the implementation based on your needs. Here's the shape of the type we'll build: public class AsyncReaderWriterLock { public AsyncReaderWriterLock(); public Task<Releaser> ReaderLockAsync(); public Task<Releaser> WriterLockAsync(); public struct Releaser : IDisposable { public void Dispose(); } } As with the AsyncLock, we'll utilize a disposable Releaser to make it easy to use this type in a scoped manner, e.g.
private readonly AsyncReaderWriterLock m_lock = new AsyncReaderWriterLock();
…
using (var releaser = await m_lock.ReaderLockAsync())
{
    … // protected code here
}

This Releaser is almost identical to that used in AsyncLock, except that we're using the same type to represent both readers and writers, and since we need to behave differently based on which kind of lock is being released, I've parameterized the Releaser accordingly:

public struct Releaser : IDisposable
{
    private readonly AsyncReaderWriterLock m_toRelease;
    private readonly bool m_writer;

    internal Releaser(AsyncReaderWriterLock toRelease, bool writer)
    {
        m_toRelease = toRelease;
        m_writer = writer;
    }

    public void Dispose()
    {
        if (m_toRelease != null)
        {
            if (m_writer) m_toRelease.WriterRelease();
            else m_toRelease.ReaderRelease();
        }
    }
}

In terms of member variables, I need several more for this type than I've needed for the other data structures previously discussed. First, we will have fast paths in this type, so I want to cache a Task<Releaser> for reader waits that complete immediately, and one for writer waits that complete immediately.

private readonly Task<Releaser> m_readerReleaser;
private readonly Task<Releaser> m_writerReleaser;

These members will be initialized in the constructor:

public AsyncReaderWriterLock()
{
    m_readerReleaser = Task.FromResult(new Releaser(this, false));
    m_writerReleaser = Task.FromResult(new Releaser(this, true));
}

Next, I need to maintain a queue of writer waiters, one TaskCompletionSource<Releaser> for each, since I need to be able to wake them individually. I also need a TaskCompletionSource<Releaser> for my readers; however, for the readers, per our previously discussed design, when it's time to allow a reader to run, I can allow them all to run, and therefore I just need a single TaskCompletionSource<Releaser> that all of the readers in a given group will wait on.
However, since I'm maintaining a single TaskCompletionSource<Releaser> for all readers, I also need to maintain a count of how many readers are waiting, so that when I eventually wake them all, I can keep track of all of their releases and know when there are no more outstanding readers.

private readonly Queue<TaskCompletionSource<Releaser>> m_waitingWriters =
    new Queue<TaskCompletionSource<Releaser>>();
private TaskCompletionSource<Releaser> m_waitingReader =
    new TaskCompletionSource<Releaser>();
private int m_readersWaiting;

Finally, I need a variable to maintain the current status of the lock. This will be an integer, where a value of 0 means that no one has acquired the lock, a value of -1 means that a writer has acquired the lock, and a positive value means that one or more readers have acquired the lock, where the positive value indicates how many.

private int m_status;

We now have four methods to implement: ReaderLockAsync, ReaderRelease, WriterLockAsync, and WriterRelease. ReaderLockAsync is used when a new reader wants in. After acquiring the lock on m_waitingWriters (which we'll use across all four of these methods to ensure data consistency), we need to determine whether the reader should be allowed in immediately or should be forced to wait. Based on the policy described earlier, if there are currently no writers active or waiting, then this reader can be allowed in immediately; in that case, we increment the status (which would have either been 0, meaning no activity on the lock, or positive, meaning there are currently readers) and we return the cached reader releaser. If, however, there was an active or waiting writer, then we need to force the reader to wait, which we do by incrementing the count of the number of readers waiting, and return the m_waitingReader task (or, rather, a continuation off of the reader task, ensuring that all awaiters will be able to run concurrently rather than getting serialized).
public Task<Releaser> ReaderLockAsync()
{
    lock (m_waitingWriters)
    {
        if (m_status >= 0 && m_waitingWriters.Count == 0)
        {
            ++m_status;
            return m_readerReleaser;
        }
        else
        {
            ++m_readersWaiting;
            return m_waitingReader.Task.ContinueWith(t => t.Result);
        }
    }
}

WriterLockAsync is used when a new writer wants in. As with ReaderLockAsync, there are two cases to deal with: when the writer can be allowed in immediately, and when the writer must be forced to wait. The only time a writer can be allowed in immediately is when the lock is currently not being used at all; since a writer must be exclusive, it can't run when there are active readers or active writers. So, if m_status is 0, we change the status to indicate that there's now an active writer, and we return the cached writer releaser. Otherwise, we create a new TaskCompletionSource<Releaser> for this writer, queue it, and return its Task.

public Task<Releaser> WriterLockAsync()
{
    lock (m_waitingWriters)
    {
        if (m_status == 0)
        {
            m_status = -1;
            return m_writerReleaser;
        }
        else
        {
            var waiter = new TaskCompletionSource<Releaser>();
            m_waitingWriters.Enqueue(waiter);
            return waiter.Task;
        }
    }
}

Now we need to write the release functions, which are called when an active reader or writer completes its work and wants to release its hold on the lock. ReaderRelease needs to decrement the count of active readers, and then check the current state of the lock. If it was the last active reader and there are now writers waiting, then it needs to wake one of those writers and mark that the lock now has an active writer. We don't need to check for any pending readers; if there are any writers, then they'd take priority anyway, and if there aren't any pending writers, then any readers that had arrived would have been allowed in immediately.
private void ReaderRelease()
{
    TaskCompletionSource<Releaser> toWake = null;

    lock (m_waitingWriters)
    {
        --m_status;
        if (m_status == 0 && m_waitingWriters.Count > 0)
        {
            m_status = -1;
            toWake = m_waitingWriters.Dequeue();
        }
    }

    if (toWake != null)
        toWake.SetResult(new Releaser(this, true));
}

Finally, we need our WriterRelease method. When a writer completes, if there are any pending writers waiting to get in, we simply dequeue and complete one of their tasks (we don't need to update the lock's status, since there will still be a single active writer, with one having completed and a new one having taken its place). If there aren't any writers, but there are readers waiting, then we can complete the single task on which all of those readers are waiting; in that case, we also need to create a new single task for all subsequent readers to wait on, and we need to update our status accordingly to now indicate how many active readers there are. If there weren't any writers or readers waiting, then we can simply reset the lock's status.

private void WriterRelease()
{
    TaskCompletionSource<Releaser> toWake = null;
    bool toWakeIsWriter = false;

    lock (m_waitingWriters)
    {
        if (m_waitingWriters.Count > 0)
        {
            toWake = m_waitingWriters.Dequeue();
            toWakeIsWriter = true;
        }
        else if (m_readersWaiting > 0)
        {
            toWake = m_waitingReader;
            m_status = m_readersWaiting;
            m_readersWaiting = 0;
            m_waitingReader = new TaskCompletionSource<Releaser>();
        }
        else m_status = 0;
    }

    if (toWake != null)
        toWake.SetResult(new Releaser(this, toWakeIsWriter));
}

That's it. Now, a production implementation of such a lock would likely want to be better instrumented, throw exceptions for erroneous usage (e.g. releasing when there wasn't anything to be released), and so forth, but this should give you a basic sense of how such an asynchronous reader/writer lock could be implemented. Before I conclude, it's worth highlighting that .NET 4.5 includes a related type: ConcurrentExclusiveSchedulerPair.
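If it helps to see the same policy outside of C#, the design above translates almost mechanically to Python's asyncio. This is my own sketch, not a standard library type: plain Futures stand in for TaskCompletionSource<Releaser>, and for brevity it exposes explicit release methods rather than a disposable Releaser. Because the event loop is single-threaded and none of these methods awaits while mutating state, each method body is effectively atomic, so nothing corresponds to the C# lock (m_waitingWriters) blocks:

```python
import asyncio

class AsyncReaderWriterLock:
    """Writer-preferring async reader/writer lock (illustrative sketch only).

    _status uses the same encoding as the C# version:
    0 = free, -1 = one active writer, n > 0 = n active readers.
    """

    def __init__(self):
        self._status = 0
        self._waiting_writers = []   # one Future per queued writer
        self._waiting_reader = None  # one shared Future for the waiting reader group
        self._readers_waiting = 0

    async def reader_lock(self):
        # Fast path: no writer is active or waiting.
        if self._status >= 0 and not self._waiting_writers:
            self._status += 1
            return
        self._readers_waiting += 1
        if self._waiting_reader is None:
            self._waiting_reader = asyncio.get_running_loop().create_future()
        await self._waiting_reader

    async def writer_lock(self):
        # Fast path: the lock is completely free.
        if self._status == 0:
            self._status = -1
            return
        fut = asyncio.get_running_loop().create_future()
        self._waiting_writers.append(fut)
        await fut

    def reader_release(self):
        self._status -= 1
        # Last reader out: hand the lock to a queued writer, if any.
        if self._status == 0 and self._waiting_writers:
            self._status = -1
            self._waiting_writers.pop(0).set_result(None)

    def writer_release(self):
        if self._waiting_writers:
            # Hand off writer-to-writer; status stays -1.
            self._waiting_writers.pop(0).set_result(None)
        elif self._readers_waiting:
            # Release the whole reader group at once.
            self._status = self._readers_waiting
            self._readers_waiting = 0
            fut, self._waiting_reader = self._waiting_reader, None
            fut.set_result(None)
        else:
            self._status = 0
```

A Releaser equivalent could be layered on top as an async context manager so callers get the same scoped `async with` shape as the C# using-block.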
I briefly discussed this type when describing what was new for parallelism in .NET 4.5 Developer Preview, but in short, it provides reader/writer-like scheduling for tasks (and it's robust and has been well-tested, unlike the code in this post). Hanging off of an instance of ConcurrentExclusiveSchedulerPair are two TaskScheduler instances: ConcurrentScheduler and ExclusiveScheduler. These two schedulers collude to ensure that an "exclusive" (or writer) task may only run when no other task associated with the schedulers is running, and that one or more "concurrent" (or reader) tasks may run concurrently as long as there are no exclusive tasks. The type includes more advanced capabilities than what I've implemented in this post, for example being able to throttle readers (whereas in my AsyncReaderWriterLock in this post, all readers are allowed in as long as there are no writers).

You can use ConcurrentExclusiveSchedulerPair to build solutions similar to those you might use AsyncReaderWriterLock for. For example, instead of wrapping the protected code with a block that will on entrance access and await AsyncReaderWriterLock.WriterLockAsync and then on exit call the returned Releaser's Dispose, you could instead await a task queued to the ConcurrentExclusiveSchedulerPair's ExclusiveScheduler. ConcurrentExclusiveSchedulerPair also works well with systems layered on top of TPL that work in terms of TaskScheduler, like TPL Dataflow. You can, for example, create multiple ActionBlocks that all target the same ConcurrentScheduler instance configured with a maximum concurrency level, and then all of those blocks will collude to ensure they don't go above that maximum. However, there is a key behavior aspect of ConcurrentExclusiveSchedulerPair to keep in mind: it works at the level of a Task's execution, and this may or may not be what you want.
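The reader-throttling idea just mentioned (a maximum concurrency level on the "concurrent" side) is easy to picture with a counted semaphore. The following is a small asyncio sketch of the concept only, not of the .NET type itself:

```python
import asyncio

async def run_throttled(sem, coro_fn, *args):
    """Run coro_fn under a concurrency cap -- loosely analogous to queueing
    work to a ConcurrentScheduler configured with a maximum concurrency level."""
    async with sem:
        return await coro_fn(*args)

async def demo():
    sem = asyncio.Semaphore(2)     # at most 2 concurrent "readers"
    running = peak = 0

    async def reader(i):
        nonlocal running, peak
        running += 1
        peak = max(peak, running)  # record how many overlap
        await asyncio.sleep(0.01)  # simulated read work
        running -= 1
        return i

    results = await asyncio.gather(*(run_throttled(sem, reader, i) for i in range(5)))
    return peak, results

peak, results = asyncio.run(demo())
print("peak concurrency:", peak)   # -> peak concurrency: 2
```

Five readers are submitted, but the semaphore never lets more than two overlap, which is exactly the throttling behavior the post's AsyncReaderWriterLock deliberately omits.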
If you write code like the following:

Task.Factory.StartNew(async delegate
{
    … // code #1
    await SomethingAsync();
    … // code #2
}, CancellationToken.None, TaskCreationOptions.None, myExclusiveScheduler);

that could either end up resulting in one Task queued to the scheduler (if the Task returned from SomethingAsync was complete by the time we awaited it), or it could result in two Tasks queued to the scheduler (if the Task returned from SomethingAsync wasn't yet complete by the time we awaited it). If it results in two tasks, then the atomicity provided by the exclusive scheduler applies to each task individually, not across them… in effect the exclusive lock can be released while awaiting. For certain scenarios, this is exactly what you want, and in particular for cases where you're using the atomicity to provide consistency while accessing in-memory data structures and where you ensure that await calls do not come in the middle of such modifications. To achieve the same behavior with AsyncReaderWriterLock, you'd need to do something like the following:

Task t = null;
using (var releaser = await m_lock.WriterLockAsync())
{
    … // code #1
    t = SomethingAsync();
}
await t;
using (var releaser = await m_lock.WriterLockAsync())
{
    … // code #2
}

That concludes my short series on building async coordination primitives. I hope you enjoyed it.

Join the conversation | Add Comment

Hey, there are missing await keywords here too. The same typo as in the previous post. Unfortunately the Task class implements IDisposable, so the compiler interprets this as valid code. Would it be a reliable solution to wrap the returned Task instances with lightweight objects that have only a GetAwaiter method? Thanks, Tamas.

I'd typed that example at the end directly into Live Writer, so no compiler was involved. I've fixed the typo. Regarding wrapping Task, you could do that if you wanted to.
If you remembered to do it, it would help flag missing awaits, but then again, if you remember to do it, you might as well remember to use awaits 😉 I expect an FxCop rule or some similar analysis tool would be more reliable.

Great series, thanks. Are there plans to include any of these types natively in .NET 4.5? (Perhaps other than AsyncReaderWriterLock, which seems redundant compared to ConcurrentExclusiveSchedulerPair.)

Hi Dave- I'm glad you enjoyed the posts. Regarding .NET 4.5 plans, stay tuned 🙂

Great series! Very interesting. I was wondering about memory barriers. In these sample posts, you're using lock during each acquire/release – which includes a memory barrier – so a volatile qualifier wouldn't be necessary for any data protected by AsyncReaderWriterLock (I think). Question: what memory barrier guarantees does ConcurrentExclusiveSchedulerPair make?

Hi Stephen- I'm glad you like the series. CESP doesn't itself need to make any such guarantees, because it only schedules Tasks, and tasks themselves have their own guarantees. Any data published prior to a Task getting scheduled will be visible to that Task's body, and any modifications by a Task will be visible to anyone joining with the Task when it completes.

Hi Stephen, I have a very general question regarding the async context 'behind' await – I was not able to clear this up from the existing documents. In order to have lock-free code I use the actor model and single-threaded synchronization contexts (like WinForms or Stephen Cleary's Nito library). In this case, it seems that when continuing after 'await' it is guaranteed to be on the same thread as before – therefore no locks are needed to protect data structures accessed before and after await. But – what thread is used inside the async method2 called through 'await'? Do I have to protect shared data structures when accessing them from e.g. the UI thread as well as from a task generated by the compiler through await?
In my opinion this would be *VERY* dangerous. But on the other hand, how can these auto-generated tasks be scheduled onto other threads and CPUs in order to speed up long-running tasks? e.g.:

List<string> list; // shared, not threadsafe data

async Task method1()
{
    list.Add("Start");
    await method2();
    list.Add("Stop"); // ok, same thread (when NOT using the default threadpool synchronization context)
}

async Task method2()
{
    for (int i = 0; i < 1000; i++)
    {
        list.Add("Doing " + i); // ???? what thread ????
        DoSomethingComputeIntense();
    }
}

Hi Stefan- Asynchronous methods are invoked synchronously. When you call method2, you're calling it on the current thread. If method2 never awaits something that's not yet completed, method2 will run to completion on the current thread. Since you have no awaits in your method2, method2 will run entirely synchronously on the current thread. If you want to run code on a thread other than the current one, you can use Task.Run, e.g.

… // on UI thread
await Task.Run(() =>
{
    … // on ThreadPool thread
});
… // back on UI thread

I hope that helps.

Sorry, my example was not so good, so I'll try to formulate my question differently:
1) Is it true that I will never run on another thread unless I use "await Task.Run(…)" or explicitly start a thread?
Reason:
– the start of an async function runs synchronously
– the completion after await runs on the same thread as before the await, since I use a single-threaded synchronization context.
2) Is it true that, using this programming model, the non-interruptible (atomic) code portions span from one await to the next await, regardless of how deeply nested the async functions are?
Many thanks for your response!

Hi Stefan- Regarding (1), no, that's not true. There are two initial cases to think through here: has the task being awaited completed or not when you await it. If it has completed, then the await is effectively a nop, and the current thread just continues running what comes after the await.
The second case of the task not yet being complete is the more interesting one, and again there are two cases here, based on whether or not there's a custom SynchronizationContext or TaskScheduler on the current thread. If there is such a context/scheduler set, then the code that comes after the await will be scheduled back to that context/scheduler when the awaited task completes (e.g. if you were running on the UI thread of a WPF app when you awaited the task, then when the task completes, the continuation will be queued back to the Dispatcher for the UI thread). If there is no custom context, then there's nowhere to marshal the continuation back to, and as such, it'll typically just run on whatever thread is completing the task… that's likely to be a thread pool thread.

Now, for your specific example under "Reason", yes, that's true. Since you were using a single-threaded synchronization context, all of the awaits will return back to that same thread, so unless you explicitly change that, you'll only execute on that one thread. Such "explicit" means could include using Task.Run(…) to put a delegate onto a ThreadPool thread. It could also include using "await task.ConfigureAwait(false);" to ignore the current context if there is one. Regarding (2), yes, that's true. The only points at which an async method will potentially yield are at the await sites within the method.

Now I got it – many thanks for your detailed answer! Coming back to async coordination primitives … I understand the possibility to do such things, and it is instructive – but they look very scary to me. C# async/await seems to be a concept to bring asynchronous, multitasked applications to the mainstream. Wouldn't it be nice to have a small set of rules that enables 90% of programmers to write deadlock- and race-condition-free multitasked applications without ever thinking about locks of whatever kind?
I think an actor-based model could come near to that goal, and the rules would have to include:
– Always have a single-threaded synchronization context, and be warned when falling back to a default multithreaded threadpool context.
– Data belongs to a sync context; do something against foreign threads trying to access your data.
– Guarantee that data sent to another sync context (e.g. posted messages, posted lambdas+parameters…) is not accessible by the originating sync context anymore.
It would be interesting to hear what you think about an actor-based model and what you would do to enforce such guiding rules.

Sorry for being a bit off topic here. I recently updated AsyncWcfLib (sourceforge.net/…/asyncwcflib) to support async-await and an actor-based programming model (sourceforge.net/…/index.php). It is a library implementation of the ideas I mentioned in my last post. The library user can write lock-free code even for a multithreaded and asynchronous application. But I could not make it completely foolproof. The user has to be careful not to fall back on a threadpool thread, and he may not modify an already-sent message. I'm doing some thread-safety checking at runtime (like WinForms or WPF) to fight against these programming errors.

Any chance of an awaitable (global) Mutex, or is that not possible?

@ Richard Szalay: By "global", you mean across Windows processes?

I do, though since Mutexes have thread identity I ended up using them synchronously in a background thread. (Semaphores aren't available on WP7.)

Is there a specific reason to keep creating new Releasers instead of caching just 2, a reader one and a writer one, just like the cached tasks?

"or, rather, a continuation off of the reader task, ensuring that all awaiters will be able to run concurrently rather than getting serialized" – What does this mean? Can you explain it in detail, thanks.

@Bar Arnon: Releaser is a struct; it's cheap to create and doesn't involve any allocations.
@Bin Du: Continuations can be either asynchronous or synchronous with respect to the antecedent task completing; when a task completes, it'll queue any continuations registered as asynchronous, and it'll execute any continuations registered as synchronous. Obviously, if it's executing a continuation, it can only run one at a time, so if there are multiple synchronous continuations registered, they'll be executed serially with respect to each other. ContinueWith creates asynchronous continuations by default, but you can ask for a synchronous continuation via the TaskContinuationOptions.ExecuteSynchronously option. The await keyword by default creates synchronous continuations (because they have less overhead and the most common case is that a Task is created and then immediately awaited). So, if m_waitingReader.Task were returned directly, it's likely that multiple pieces of code would end up awaiting it with synchronous continuations that would end up getting executed serially with respect to each other. By tacking on a ContinueWith call, the continuations on m_waitingReader.Task will all be asynchronous.

Regarding the latest issue about synchronous continuations and your answer, I am a little confused, so I need to clarify some things. Is it true, in general, that in C# 5.0 async-await functionality, when we await one Task instance across multiple threads, then:
1) If we have SynchronizationContext.Current set, then all continuations are serialized and executed on the same context (i.e. on the same UI thread) serially? This case is rather obvious I hope.?
3)?
4) If we have configured awaiting by using ConfigureAwait(false), then continuations are marshalled back to the default TaskScheduler (TaskScheduler.Default), so it is the same situation as above (case 3), thus continuations are serialized?
Thank you for your attention, Best regards, Sgn @sgn: Most of your questions are answered at blogs.msdn.com/…/10293335.aspx, but answers below as well… re: "If we have SynchronizationContent.Current set, then all continuations are serialized and executed on the same Context ( i.e on the same UI thread ) serially? This case is rather obvious I hope." When you await a task (assuming you don't use ConfigureAwait(false)), the system looks to see if there's a current SynchronizationContext. If there is, it ensures the continuation executes there. Whether all continuations on the same task and associated with the same SynchronizationContext are executed serially depends on the SynchronizationContext. Most, but not all, SynchronizationContexts will execute queued work serially, such as ones for UI contexts, but it's not a requirement, and in fact the base SynchronizationContext class targets the ThreadPool and has no such guarantee… its Post implementation just does ThreadPool.QueueUserWorkItem.?" When you await a task (assuming you don't use ConfigureAwait(false)), the system first looks to see if there's a custom SynchronizationContext. If there isn't, it then looks to see if there's a custom TaskScheduler. As with SynchronizationContext, it's up to the scheduler how the queued work runs. Some TaskSchedulers will ensure queued work runs serialized, others will let work run in parallel. re: ?" Usually the case but not guaranteed. In many cases the system will iterate through such continuations on the task and just execute them serially rather than queueing them, but there are situations in which they'll be queued instead. re: "If we have configured awaiting by using ConfigureAwait( false ), then continuations are marshalled back to the default TaskScheduler ( TaskScheduler.Default ), so is is the same situation as above ( Case 3rd ), thus continuations are serialized?" It's the same as above, as if TaskScheduler.Default was being used. Thank you for your explanation. 
I will definitely read your FAQ post blog, but I would like to ask you last question about async-await (multiple) continuations issue:)? Than you for your answers and wonderful blog posts? Best regards, Sgn @Sgn: re: )?" Yes, TryExecuteTaskInline is used from the thread that's completing the Task's execution. You can trace through all of this yourself in the reference source, e.g. here's the method invoked to run all of a task's continuations: referencesource.microsoft.com
https://blogs.msdn.microsoft.com/pfxteam/2012/02/12/building-async-coordination-primitives-part-7-asyncreaderwriterlock/