Hey there,

So I guess you're the first person I've come to meet online or offline who actually enjoys working with SOAP :) Your message is huge, yet entertaining, so I've actually read it all. Top-posting answers to keep this thread from exploding in length:

1) You do it manually, from scratch. This comes up a lot, and I believe someone will do it eventually.

2) Rpclib supports all this, and it also supports other protocols. It doesn't support NCName, but to get that you just need to:

    from rpclib.model.primitive import Unicode

    class NCName(Unicode):
        __type_name__ = 'NCName'

and you're set. Send me a pull request if that works for you.

3) Nope. The Java ecosystem has plenty of those, though. Avoid ZSI and SOAPpy; they're not maintained anymore. Use maintained packages like Rpclib, Ladon or PySimpleSoap for the server and SUDS for the client.

Best Regards,
Burak

PS: I'm totally going to steal Ladon's documentation generator :)

On 03/16/12 20:20, Alex Railean wrote:
> Hi everyone,
>
> I am glad I discovered this list. I've read most of the conversations
> from the "beginning of time" up to this day and I am looking forward
> to interacting with you.
>
> SOAP is a new concept for me, and there are questions I hope you can
> help with. I am not intimately familiar with the terminology, so I
> suspect some of the questions may be dumb.
>
> The service I am working on must comply with an existing WSDL
> specification: (standardized in ETSI 102.204)
>
> My questions are:
> 1. What is the recommended approach for writing a SOAP server that
>    implements an existing WSDL?
>
> 2. Which of the existing frameworks
>    a. supports basic types such as xsd:anyURI, xsd:dateTime or
>       xsd:base64Binary?
>    b. allows the declaration of enumerations and their possible
>       values?
>    c. allows the declaration of custom types for which some
>       information is passed not as an XML element, but as an
>       attribute?
>    d. allows the use of custom complex types that are defined in an
>       existing namespace? (I think this is the same as [a], but I am
>       not entirely sure about it.)
>
> 3. Are there any books on SOAP/XML and Python you can recommend? If
>    there is no Python-specific literature, perhaps there are good
>    books about SOAP/XML in general?
>
> The part below is optional :-), it describes my experiences with
> SOAP/XML in Python.
>
> Having studied the available tools, I chose to go with Ladon, because:
> - the description makes it look very simple
> - the syntax is straightforward and concise
> - it has very clear examples
> - documentation is generated automatically
>
> I was able to create a simple SOAP/XML server and call its functions
> within an hour; it worked just as advertised.
>
> Surprisingly, I only found 2 references to Ladon on this mailing list;
> it seems that the rest of the world is doing something completely
> different.
>
> Although my first experience was very positive, I later realized
> things may be more complicated than I anticipated.
>
> Ladon makes it easy to write services from scratch: I focus on the
> logic, and the WSDL is automagically generated.
>
> However, my actual objective is to take an existing WSDL and implement
> its rules in my own code. Since my WSDL is relatively simple (it has 7
> functions; not sure if this is the proper SOAP terminology), I figured
> I'd just define my classes and functions accordingly, such that the
> generated WSDL is identical to the reference WSDL.
>
> This is where things got tricky. While everything is easy with
> primitive types such as strings or numbers, the XML schema of the spec
> has a lot of other types in it: simple ones such as anyURI or NCName,
> and complex ones.
>
> The two examples I gave are still strings, but they have some
> constraints applied to them. If I declare them as strings, I can
> perform all the necessary checks myself, but the generated WSDL file
> doesn't look right (because "xs:string" and "xs:anyURI" are
> different). This is probably going to cause compatibility issues with
> systems that are supposed to use my service when it is released.
>
> My conclusion was that I would have to define them as custom types
> derived from primitive types, but... this simply doesn't feel right,
> because it involves reinventing a lot of things.
>
> Then I turned to ZSI, as people say it generates code from the given
> WSDL; it sounded like this was the thing I needed. I was also hoping
> it would take care of defining all the types for me.
>
> ZSI seems pretty arcane and not very alive (even though the mailing
> list seems to be somewhat active), so I am not sure if it is a good
> choice.
>
> I hope you weren't bored to death by the account of my SOAP
> adventures.
>
> Alex
>
> p.s. If anyone ever visits Moldova, I'd be happy to show you around
> :-)
>
> _______________________________________________
> Soap mailing list
> Soap at python.org
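Burak's NCName recipe near the top of this thread boils down to overriding a single class attribute. A minimal self-contained sketch of that pattern, with a plain stub standing in for `rpclib.model.primitive.Unicode` (the real class carries XML serialization machinery; only the naming hook is shown here):

```python
# A stub standing in for rpclib.model.primitive.Unicode -- NOT the real
# class, which carries serialization machinery; only the naming hook
# matters for this illustration.
class Unicode:
    __type_name__ = 'string'   # what the generated WSDL calls this type

# Burak's recipe: derive a new type and override __type_name__ so the
# WSDL generator would emit xs:NCName instead of xs:string.
class NCName(Unicode):
    __type_name__ = 'NCName'

print(NCName.__type_name__)  # NCName
```

Per Burak's reply, the overridden `__type_name__` is what rpclib's WSDL generator picks up; everything else is inherited from the primitive base type, which is exactly why Alex's "custom types derived from primitive types" conclusion is the intended approach.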
https://mail.python.org/pipermail/soap/2012-March/000771.html
I have used the following set of code, and I need to check the accuracy of X_train and X_test. The following code works for me in my classification problem over multi-labeled classes:

    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.svm import LinearSVC
    from sklearn.feature_extraction.text import TfidfTransformer
    from sklearn.multiclass import OneVsRestClassifier

    y_train = [..., [2], [2]]
    X_test = np.array(['nice day in nyc',
                       'the capital of great britain is london',
                       'i like london better than new york'])
    target_names = ['Class 1', 'Class 2', 'Class 3']
    classifier = Pipeline([
        ('vectorizer', CountVectorizer(min_df=1, max_df=...)),
        ...

Expected output:

    nice day in nyc => Class 1
    the capital of great britain is london => Class 2
    i like london better than new york => Class 3

    >>> classifier.score(X_train, X_test)

If you want to get an accuracy score for your test set, you'll need to create an answer key, which you can call y_test. You can't know if your predictions are correct unless you know the correct answers. Once you have an answer key, you can get the accuracy. The method you want is sklearn.metrics.accuracy_score. I've written it out below:

    from sklearn.metrics import accuracy_score

    # ... everything else the same ...

    # create an answer key
    # I hope this is correct!
    y_test = [[1], [2], [3]]

    # same as yours...
    classifier.fit(X_train, y_train)
    predicted = classifier.predict(X_test)

    # get the accuracy
    print accuracy_score(y_test, predicted)

Also, sklearn has several other metrics besides accuracy. See them here: sklearn.metrics
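The core of the reply (no answer key, no score) can be shown without sklearn at all. Below is a sketch of the exact-match rule that accuracy computes, using the answer key from the reply and made-up predictions; it is an illustration, not sklearn's real implementation:

```python
def accuracy(y_true, y_pred):
    # Fraction of samples whose prediction exactly matches the answer
    # key -- the exact-match rule behind an accuracy score.
    matches = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return matches / len(y_true)

y_test    = [[1], [2], [3]]         # the answer key from the reply
predicted = [[1], [2], [2]]         # hypothetical classifier output
print(accuracy(y_test, predicted))  # 0.6666666666666666
```

With two of three predictions matching the key, the score is 2/3; this is the same comparison you get from `accuracy_score(y_test, predicted)` once you have both lists in hand.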
https://codedump.io/share/idWse1QuFX9c/1/python--how-to-find-accuracy-result-in-svm-text-classifier-algorithm-for-multilabel-class
Re: More than One

- From: stcheng@xxxxxxxxxxxxxxxxxxxx (Steven Cheng[MSFT])
- Date: Tue, 27 Dec 2005 09:40:55 GMT

Hi Thom,

Any further progress on this, or did the things in my last reply also help a little? If there is still anything else we can help with, please feel free to post here.

Regards,

Steven Cheng
Microsoft Online Support

Get Secure!
(This posting is provided "AS IS", with no warranties, and confers no rights.)

--------------------
| From: stcheng@xxxxxxxxxxxxxxxxxxxx (Steven Cheng[MSFT])
| Organization: Microsoft
| Date: Wed, 21 Dec 2005 03:39:41 GMT
| Subject: Re: More than One
| Newsgroups: microsoft.public.dotnet.framework.aspnet
|
| Thanks for your response Thom,
|
| For a common ASP.NET web application (1.1 or 2.0), it's still OK to
| deploy the application by simply copying the page files to the target
| virtual directory and the assemblies to the target bin folder (if it is
| a strong-named assembly, we need to put it into the GAC).
|
| I am hoping to have a similar arrangement with 2.0 where there will be 32
| subdirectories from the root and each will be a separate customer with
| separate access controls.
| ==============================
| I think it is possible, since we can just create 32 separate application
| virtual directories under the root webspace, and each application virtual
| directory can have its own virtual-directory-based IIS settings
| (authentication mode, etc.).
|
| |?
| ===============================
| No, this won't affect the isolation of each ASP.NET web application.
| "No namespace" in ASP.NET 2.0 is because all the web page classes
| or source files (in the app_code folder) are dynamically compiled at
| runtime, so they are given a runtime-generated internal namespace; this
| is not controlled by us. And for each web application, its assemblies
| will be loaded into its own AppDomain, so classes in one application
| won't conflict with those in other applications.
|
| Thanks,
|
| Steven Cheng
| Microsoft Online Support
|
| Get Secure!
| (This posting is provided "AS IS", with no warranties, and confers no
| rights.)
|
| --------------------
| | From: "Thom Little" <thom@xxxxxxxxxx>
| | Subject: Re: More than One
| | Date: Tue, 20 Dec 2005 03:08:18 -0500
| |
| | Thank you for the information. I am still working my way through it.
| |
| | In pre-ASP and ASP 3 I worked with a customer by developing a website
| | on my remote space (currently containing my applications and 37
| | customer applications that have limited access). When the customer is
| | happy with the result I simply copy the "space" from my remote server
| | to the root of their remote server "webspace" and it is published to
| | the world.
| |
| | I am hoping to have a similar arrangement with 2.0 where there will be
| | 32 subdirectories from the root and each will be a separate customer
| | with separate access controls.
| |
| |?
| |
| | --
| | -- Thom Little -- -- Thom Little Associates, Ltd.
| |
| | "Steven Cheng[MSFT]" <stcheng@xxxxxxxxxxxxxxxxxxxx> wrote in message
| | news:UfT9XnSBGHA.2560@xxxxxxxxxxxxxxxxxxxxxxxx...
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.aspnet/2005-12/msg04023.html
In this article, we explore some tips and tricks for using the Microsoft .NET framework for developing Web services. In the first section, I'll provide a brief comparison between ASP.NET Web services and .NET remoting, and then I'll delve into five tips I've found useful for developing ASP.NET Web services. ASP.NET Web services and .NET remoting are two separate paradigms for building distributed applications using Internet-friendly protocols and the .NET framework. Each has its advantages and drawbacks, which are important factors in deciding which one to use for your application. Web services typically use SOAP for the message format and require that you use IIS for the HTTP message transport. This makes Web services good for communication over the Internet, and for communication between non-Windows systems. Web services are a good choice for message-oriented services that must support a wide range of client platforms and a potentially heavy load. Microsoft's MapPoint.NET service is an example of an ASP.NET Web service. Remoting can be configured to use either SOAP or Microsoft's proprietary binary protocol for communication. The binary protocol yields higher performance, and is great for .NET to .NET communication, but cannot be used to communicate with non-Windows platforms. Remoting does not require an IIS Web server, making it a good choice for peer-to-peer development, but this also means that it cannot leverage the scalability and performance of IIS to support a high number of connections or requests per second. Microsoft's Terrarium is an example of a peer-to-peer application built using .NET remoting. Unless you specify otherwise, .NET will attempt to bind your Web services to three separate protocols: HTTP/POST, HTTP/GET, and SOAP. We say attempt, because depending on the parameter and return types for a service, the HTTP/GET protocol may not be possible. 
Bindings for these three protocols will be included in the WSDL file automatically generated by .NET, and consumer clients will have the option of choosing any one of them for communication with your service. You can easily remove these bindings by adding the following section to your Web.config file:

    <webServices>
      <protocols>
        <remove name="HttpPost" />
        <remove name="HttpGet" />
      </protocols>
    </webServices>

This section tells the WSDL generator not to include bindings for HTTP/POST and HTTP/GET: the two remove elements specify that HttpPost and HttpGet should not be supported. Security and interoperability are two good reasons to avoid exposing your Web services using HTTP/POST or HTTP/GET. HTTP/GET is less secure than SOAP, and since HTTP/GET is commonly used for Web linking, a malicious user could potentially trick someone into unknowingly calling a Web service with their security credentials when they thought they were clicking a Web link. With regard to interoperability, where SOAP is a widely used standard for Web service communication, HTTP/GET and HTTP/POST are not. As a result, many automatic proxy generation tools weren't designed to "understand" the HTTP/GET and HTTP/POST bindings included by default in a .NET-generated WSDL document. If your service doesn't make use of these bindings, removing them can increase your service's interoperability.

Debugging can be an exceptionally difficult task for Web services application developers, because neither the .NET SDK nor Visual Studio .NET includes tools for viewing the SOAP messages sent back and forth between the client and service. Having the capability to view these messages becomes particularly important when you try to identify the cause(s) of interoperability problems between .NET and non-.NET clients and servers, because such problems are often related to the format of the SOAP messages (e.g. "Is the SOAPAction field present?"). One great tool for viewing the message exchanges is tcpTrace.
This tool works by setting up a tunnel between your client and server. When you start tcpTrace, you're prompted to enter the destination URL and port number, along with a local port number on which tcpTrace will listen. You can then point your proxy stub to this local port by setting the stub's Url property (e.g. localhost:8080). tcpTrace will log all of the request and response HTTP messages. One limitation to tcpTrace is that its location in the message flow makes it useless for viewing messages sent over SSL. If you need to view the contents of SOAP messages sent over SSL, your best bet is to write a custom ISAPI filter. This design advice has no doubt been beaten to death in the literature regarding proper approaches to tiered application design, but it's especially important for distributed computing environments like Web services, so I'll take yet another swing at the horse. When designing for performance and scalability in a distributed system, you want to make sure that you minimize the number of calls between the client and server. By minimizing the number of calls, you improve application speed, reduce communications overhead (why send three SOAP headers when one will do?), and reduce network traffic, all of which are generally considered very good things. So what do chatty and chunky designs look like? 
Consider the following chatty service and client:

Example 1a: A Chatty Service

    using System;
    using System.Web.Services;

    namespace ChattyService
    {
        public class ChattyService : WebService
        {
            private string username;
            private string password;

            public string Username
            {
                [WebMethod]
                set { username = value; }
            }

            public string Password
            {
                [WebMethod]
                set { password = value; }
            }

            [WebMethod]
            public bool Logon()
            {
                // Authentication logic here
                return true;
            }
        }
    }

Example 1b: A Chatty Consumer

    namespace ChattyConsumer
    {
        class ChattyConsumerClass
        {
            [STAThread]
            static void Main(string[] args)
            {
                bool bReturn = false;
                Proxy.ChattyService objSvc = new Proxy.ChattyService();
                objSvc.set_Username("alex");
                objSvc.set_Password("opensesame");
                bReturn = objSvc.Logon();
            }
        }
    }

In Example 1a, username and password are designed as properties that must first be set before calling the Logon() method. What is not clear from the code alone is that the properties are exposed as Web methods, which means that each get/set call on a property results in a call to the service: a chatty service. Example 1b shows the consumer used to call the service. You can see that the username and password properties are set, followed by a call to the Logon() method, for a total of three trips to the service. A better design is to use a chunky interface like the following:

Example 2a: A Chunkier Interface

    using System;
    using System.Web.Services;

    namespace ChattyService
    {
        public class ChattyService : WebService
        {
            [WebMethod]
            public bool Logon(string Username, string Password)
            {
                // Authentication logic here
                return true;
            }
        }
    }

The Logon() method in this example includes the username and password as part of the method signature. This is a better design, because it reduces the number of calls to the server from three to one; however, for a large number of parameters, the method signature can become a bit of a beast.
In that case, it makes sense to start modeling the input parameters as complex types; for example, a credentials object that encapsulates the username and password variables. The method in Example 2a can be called with the following code:

Example 2b: A Chunky Consumer

    namespace ChattyConsumer
    {
        class ChattyConsumerClass
        {
            [STAThread]
            static void Main(string[] args)
            {
                bool bReturn = false;
                Proxy.ChunkyService objSvc = new Proxy.ChunkyService();
                bReturn = objSvc.Logon("alex", "opensesame");
            }
        }
    }

This approach results in one call to the Logon() Web method, reducing the number of network calls to 33% of the first approach.

ASP.NET Web services can take advantage of all of the features available to their .aspx cousins. This includes the ability to use a Web.config file to store application-specific data (for example, database connection strings, file paths, and so on). Using Web.config instead of the global.asax file enables you to change configuration settings without having to rebuild the project.

While .NET's implementation of session state solves a number of the problems found in its ASP 3.0 predecessor (for example, request serialization, reliance on cookies, and lack of support for Web farms), it still suffers from several drawbacks. You should realize that it was not designed specifically for managing state in Web service applications, but rather for managing state in ASP.NET applications in general, and as a result it relies on HTTP cookies (there is also an option to use a cookie-less mode that uses munged URLs, which is not compatible with a Web service). Cookies are HTTP-specific, and while fine for the Web, where all browsers support them, using them in your Web services ties you to the HTTP protocol.
SOAP is designed to work independently of the transport protocol, so tying your application to HTTP limits your flexibility and may create a lot of additional work for you down the line if you need to provide services over a transport protocol other than HTTP (for example, SMTP). A better approach to managing state is to use a ticketing system implemented as metadata in a SOAP header. You can learn more about this technique in Chapter 5, "Managing State," of Programming .NET Web Services. O'Reilly & Associates recently released (September 2002) Programming .NET Web Services. Sample Chapter 2, "Creating ASP.NET Web Services," is available free online. You can also look at the Table of Contents, the Index, and the full description of the book. For more information, or to order the book, click here.
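The chatty-versus-chunky trade-off discussed earlier is, at bottom, arithmetic on round trips. A language-neutral sketch (in Python, with a made-up fixed per-call latency purely for illustration) of the three-calls-versus-one comparison:

```python
# Hypothetical fixed cost per SOAP round trip; a made-up number for
# illustration only -- real latency varies with network and server load.
CALL_OVERHEAD_MS = 50

def chatty_logon_cost():
    # set_Username + set_Password + Logon() = three round trips
    return 3 * CALL_OVERHEAD_MS

def chunky_logon_cost():
    # Logon(username, password) = one round trip
    return 1 * CALL_OVERHEAD_MS

print(chatty_logon_cost(), chunky_logon_cost())  # 150 50
```

Whatever the actual per-call overhead is, the chunky design pays it once instead of three times, which is the "33% of the first approach" figure from the article.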
http://archive.oreilly.com/lpt/a/2777
POWDER Working Group Blog
Archives for: June 2008

Friday, June 27th 2008

10:35:39 am, Categories: Meeting summaries

Meeting Summaries, back to normal

Oops... I've been rather remiss in posting the meeting updates. The next three posts are those for the last few weeks, all of which say things like "the group expects to publish new versions of the document any day now." OK, as I write they really are being installed and the W3C webmaster is making final adjustments... More soon.

Phil ARCHER

10:32:54 am, Categories: Meeting summaries

Meeting Summary 23rd June 2008

The group is working towards getting its already published documents updated and ready for Last Call, which, all being well, will be announced at the end of this month. New versions of the Description Resources and Grouping documents are being put through the W3C publication process, along with the new Formal Semantics document. In addition, the XML schemata and datatype definitions necessary for POWDER and POWDER-BASE are ready to go, along with the POWDER-S RDF vocabulary. These have the appearance of ancillary documents; however, they are crucial, as they are what will make the Protocol for Web Description Resources actually work!

The meeting this week spent most of its time, however, looking at two documents that are not quite ready for publication yet: the Primer and the test suite. It was felt that we need to have more examples of POWDER processing ready to include in the Primer before it can be published, hopefully around the end of next month. A Test Suite is also being prepared. Some sample data has been prepared and the format of the suite worked out. This week will see specific references to sections of the normative documents included, so that it is clearer exactly what each test is about. It is hoped that the Test Suite will be ready for First Public Working Draft by this time next week, when the aim is to announce Last Call on several documents.

Phil ARCHER

10:31:42 am, Categories: Meeting summaries

Meeting Summary 16th June 2008

This was a relatively short meeting but an important one, as we have reached a significant milestone in the group's work. With the exception of the use cases, new versions of all the POWDER documents currently in the public domain are ready to be published, along with some new ones. These include a new Formal Semantics document, XML schemata and RDF vocabulary namespace documents. Work on the Primer and Test Suite is also progressing well, and these should be ready at the end of the month, when the group plans to make a Last Call announcement on its major Recommendations Track documents.

Phil ARCHER

10:30:40 am, Categories: Meeting summaries

Meeting Summary 2nd June 2008

The bulk of the meeting concerned the Primer, which the group intends to publish in the near future as a first public working draft and evolve over the remainder of the year as a Note. It explains why and how to use POWDER and is meant as a general introduction to the whole protocol. Technical detail is included where necessary but is introduced in a near-tutorial style. There are a number of open issues, but these are being reduced, and a FPWD should be ready next week. The group resolved to publish an updated working draft of the Description Resources document and to seek permission to publish a FPWD of its Formal Semantics document. Meanwhile, updated versions of the schema and vocabulary documents are close to being ready to publish. The Test Suite and an updated version of the Grouping of Resources document should also be ready for the group to resolve to publish on next week's call. The WG was pleased to note the recent discussion in the TAG concerning the HTTP Link Header, which is very positive for POWDER....
https://www.w3.org/blog/powder/2008/06/
package org.objectstyle.cayenne.dataview.dvmodeler;

import java.awt.Component;
import java.awt.Graphics;
import javax.swing.Icon;

/**
 * @author Andriy Shapochka
 * @version 1.0
 */
public class EmptyIcon implements Icon {
    public static final Icon DEFAULT_ICON = new EmptyIcon();

    private int width = 1;
    private int height = 1;

    public EmptyIcon() {
    }

    public EmptyIcon(int width, int height) {
        this.width = width;
        this.height = height;
    }

    public void paintIcon(Component c, Graphics g, int x, int y) {
    }

    public int getIconWidth() {
        return width;
    }

    public int getIconHeight() {
        return height;
    }
}
http://kickjava.com/src/org/objectstyle/cayenne/dataview/dvmodeler/EmptyIcon.java.htm
Applet, Netscape and Opera but not IE, encore
2015-10-11 Views:0

I've got the following trivial applet that runs under Netscape and Opera, but not under IE. It's also at as the ColorChooser choice. There's a half dozen Dukes from previous posts, and my sincere gratitude for anyone who can tell me what's wrong.

    import java.awt.BorderLayout;
    import java.awt.Button;
    import java.awt.Label;

    public class CCApplet extends java.applet.Applet {
        public void start() {
            Label l = new Label("CCApplet!");
            setLayout(new BorderLayout());
            add(l, BorderLayout.NORTH);
            add(new Button("South"), BorderLayout.SOUTH);
        }
    } // end of class CCApplet

The .class is in the same directory as the .html. Putting it in a .jar doesn't seem to matter. My IE runs other applets from the web, but not the ones I write. Thanks in advance.

Check that everything in IE's Java Plug-in is valid and correct. Check IE's Internet Options. What JVM version are you wanting the applet to be able to run with? What does this statement mean? "...most visitors can use without knowing what a JRE is."

To get the greatest compatibility, use code that conforms to the 1.1 API, compile to that target version, possibly use the old 1.1 libraries, and then test each JVM. (BTW, Swing has limited compatibility: it isn't MS compatible, and wasn't a [Sun] standard until JVM version 1.2.)
http://www.amicuk.com/applet-netscape-and-opera-but-not-ie-encore/
Ingemar Wrote:
He said C, which to me means using QuickTime's stand-alone APIs. And there are updated calls, but I haven't figured out how to use them yet.

    /* C translation from Pascal source file: MovieHack.p */
    // Super simple movie player for playing sounds (like mp3)
    // Plays a file named "Sample.mp3" in the same folder as the binary.
    #include <stdio.h>
    #include <QuickTime/Movies.h>

    Movie theMovie;
    OSErr err;
    FSSpec fileSpec;
    short resRefNum;
    SInt16 actualResId;
    Rect theMovieBox;
    WindowPtr w;

    int main()
    {
        err = EnterMovies();
        actualResId = DoTheRightThing;

        // The following lines need replacing by FSRef-based calls
        err = FSMakeFSSpec(0, 0, "\pSample.mp3", &fileSpec);
        if (err != noErr) printf("FSMakeFSSpec %d\n", err);

        err = OpenMovieFile(&fileSpec, &resRefNum, 0);
        if (err != noErr) printf("OpenMovieFile %d\n", err);

        err = NewMovieFromFile(&theMovie, resRefNum, &actualResId,
                               0L, newMovieActive, 0L);
        if (err != noErr) printf("NewMovieFromFile %d\n", err);

        // Try playing it
        StartMovie(theMovie);
        while (!IsMovieDone(theMovie))
            MoviesTask(0L, 0);

        DisposeMovie(theMovie);
        ExitMovies();
    }

ThemsAllTook Wrote:
FSSpec? WindowPtr? I don't think we should be recommending to new users that they use deprecated technologies.

Ingemar Wrote:
Did you at all notice that I clearly stated, three times, that the FSSpec should be replaced by FSRef-based calls? (And the WindowPtr is not used at all.)
http://idevgames.com/forums/printthread.php?tid=1285
CC-MAIN-2017-17
refinedweb
203
50.33
Things you should know:
- how to use scanf and printf
- how to use if and while
- how to declare variables
- variable types

First, start your C program like normal:

#include <stdio.h>

int main()
{

Okay, so what do you do now? Well, your program needs to accept input for a calculation. For that, we will use the scanf() function. Think of a simple math problem: 1+1. In C programming that is an int, a char, and another int. But wait, what about .2+.5? Those are floating point numbers in C. So the scanf code should look like this:

float num1, num2;
char operation;
scanf("%f%c%f", &num1, &operation, &num2);

Remember: use the & sign because scanf needs a pointer to the variable! This will define two floating point numbers and an operator. Now your program has to compute that. Add this line of code:

if (operation == '+')
    printf("%f\n", num1+num2);

Remember: use == not = when comparing things! This code says: if char operation equals the plus sign, then print the floating point number num1+num2. Now that you know that, the rest is easy:

if (operation == '-')
    printf("%f\n", num1-num2);
if (operation == '*')
    printf("%f\n", num1*num2);
if (operation == '/')
    printf("%f\n", num1/num2);

We did the same thing, but added subtraction, multiplication, and division. Now the program does all of that only one time. We want to keep entering more problems for the computer to solve. The solution: put all of the scanf code and the if code into a while loop.

while (1) {
    scanf("%f%c%f", &num1, &operation, &num2);
    // this is where the rest of the code is
    printf("%f\n", num1/num2);
}

We use while (1) because we want to loop forever. As long as the user has math problems, we will keep accepting them until they click X. Now let's finish the program:

return 0;
}

Exit with success. Now we are done! Let's see what all the code looks like at the same time:

#include <stdio.h>

int main()
{
    float num1, num2;
    char operation;
    while (1) {
        scanf("%f%c%f", &num1, &operation, &num2);
        if (operation == '+')
            printf("%f\n", num1+num2);
        if (operation == '-')
            printf("%f\n", num1-num2);
        if (operation == '*')
            printf("%f\n", num1*num2);
        if (operation == '/')
            printf("%f\n", num1/num2);
    }
    return 0;
}

Your code should look like that; if it doesn't, fix your mistakes. Compile your calculator and try it out! It should work fine, just remember it can only do one calculation at a time. This calculator is very simple; I will post a more advanced calculator tutorial later if this was too easy.

Edited by Guest, 11 August 2010 - 10:33 AM.
http://forum.codecall.net/topic/50733-very-simple-c-calculator/
crawl-003
refinedweb
447
73.37
This is a discussion on string operation and related exception within the C++ Programming forums, part of the General Programming Boards category.

Hi Daved,

In release mode, this line will check the boundary. Any comments?

Code: _SCL_SECURE_VALIDATE_RANGE(_Pos < size());

Originally Posted by ... Can you post the definition of _SCL_SECURE_VALIDATE_RANGE?

>> Here it is. Any comments?

Question: Is it always defined like that, or is its definition dependent on some other symbol, in the way that _HAS_ITERATOR_DEBUGGING is?

On closer inspection, if _SECURE_SCL is defined to 1, _SCL_SECURE_VALIDATE_RANGE is defined as George2 has posted. Otherwise it is simply defined as nothing. So far as I see, _SECURE_SCL is not dependent on Release/Debug and defaults to 1 if not defined. It may be defined in other headers, however. Not sure.

In Debug, operator [] asserts several times but does not throw. at() throws in both Debug and Release. If _SECURE_SCL is not defined, then Release ignores invalid subscript ranges (go figure), while Debug still complains about out-of-range access. And lastly, iterator debugging is defined in both Release and Debug, and _SECURE_SCL is always defined to 1 inside the vector header.

For information on how to enable C++11 on your compiler, look here. Listen well, I'm a genius! ^_^

Hi Elysia,

Three more comments:
1. What is your conclusion? operator[] can throw in release mode?
2. I do not quite understand what you mean in the following sentence: "In Debug, operator [] asserts several times but does not throw."
3. How is _SECURE_SCL defined or not defined? Is it implicitly defined when some other commonly used macros are defined? You mentioned both "throws" and "does not throw". Sorry, English is not my native language; what do you mean? :-)

regards,
George

It still seems as if you can govern whether bounds checking is done with the _SECURE_SCL symbol. Although given Microsoft's recent attention to security and avoiding things like buffer overruns, it wouldn't be that surprising if it now defaulted to range checking in release builds. Have you looked at the Microsoft documentation? That might have better information than trying to read through the code itself.

So now I have to specify a fundamental design decision (whether operator[] does bounds checks) every time I declare a vector?

vector(bool bEnableExceptions = false)

I literally barfed on that one..

#define SafeVector vector(true)

I expect cars to have seven cylinders, and be built entirely of fiberglass. I would expect the operator [] and the member function at() both to throw exceptions if exceptions are enabled. Otherwise, if they aren't enabled, neither should throw (or make out-of-bounds checks). Why would I want to give up all the conveniences of a std::vector and go back to dumb arrays just because I don't want bounds checking? I understand they want to keep C compatibility, but then you would be using a C array and not std::vector, since std::vector wouldn't even compile under C, so no C project would ever use it.

In Release, only one assert is raised. You can still define _SECURE_SCL to 0 before including the header and it will remain 0. "Does not throw" means the function will not throw (operator [], for example, will assert but never throw in either Debug or Release).

For information on how to enable C++11 on your compiler, look here.

They use a hardware interrupt, __asm int 3, on Intel & AMD processors.

2. I am wondering how assert is implemented internally. Using some software interrupt, or through an exception-handling approach?

<removed copyrighted code, sorry - CornedBee>

Code:
#undef assert
#ifdef NDEBUG
#define assert(exp) ((void)0)
#else
#ifdef __cplusplus
extern "C" {
#endif
_CRTIMP void __cdecl _assert(void *, void *, unsigned);
#ifdef __cplusplus
}
#endif
#define assert(exp) (void)( (exp) || (_assert(#exp, __FILE__, __LINE__), 0) )
#endif /* NDEBUG */

So it is just an __crtMessageBoxA call followed by _DbgBreak to start debugging.

Last edited by CornedBee; 03-02-2008 at 07:04 AM. The first 90% of a project takes 90% of the time, the last 10% takes the other 90% of the time.

Here's another interesting tidbit. There's a define called _SECURE_SCL_THROWS. If defined to 1, operator [] will throw an exception if the index is out of range. It also requires _SECURE_SCL to be defined as 1. Microsoft also provides special functions in the library: checked iterators (access will be checked) and non-checked iterators (access will not be checked). If you've defined _SECURE_SCL to 1 (meaning you want checked access), all accesses will be checked. If you call functions that do not normally perform a check, you'll get a warning and still get checked access. However, if it's defined to 0 (meaning you don't want checked access), you won't get any access check unless you use a checked iterator (checked_iterator), which is a Microsoft extension.

Last edited by Elysia; 03-02-2008 at 06:15 AM.

Thanks Elysia,

regards,
George
http://cboard.cprogramming.com/cplusplus-programming/99576-string-operation-related-exception-3.html
CC-MAIN-2013-48
refinedweb
949
56.86
- substituteAllString command broken in 6.5??? - Changing shading dymatically - Pls explain about python , how can i use for maya? - find the 4 corners of a SUd-D patch - convert sound to animation - Hotkeys question - Query about values returned via filterExpand - Utter Frustration with MEL please help me - python script editor History? - changin data types for variables - MEL fix for shelved cylindrical and sphereical mapping bug - modelPanel with own view handling - MEL Modelling - Clearing reference transform edits? - Isolate / Hide / Render selected tools - MEL a way to read light of an object - outliner help - mel noob question - Separate window for reference images? - query if an attribute is muted? - problem with for loop and lattices - MEL Newb Help - File i/o question (newb) - How to move a joint to another joint by scale and rotation only and not translation? - graphUI (alias created mel script) - version control - Create Editor Similar to Connection editor - Extend the glow - time stopping - completely remove dragCallback on iconTextButton - pointOnCurve question (coordinates to parameter conversion) - New To Mel - Quick Question - Display selected vert name - Assigning an expression fails... - expression not updating - Adding to a strip of particles in Mel - ramp = linstep? - Calling a procedure within a MEL script from a custom attribute - .NET in Maya (similar to Max 9) - MEL Script, Shader Painting - blend a sphere with a cube. script req? - get ID of Nurbs CV point? - MEL-Problem: faceNormalOutput - [Python]bug with maya.cmds.skinPercent? - Pymel - python and mel working together - system command - Safer Delete History - dynamic vertex selection - Setting particle attrs - Maya commands across a network? - how do I make colored UI on linux? - changing Quality To Production Quality via mel - fopen and 64bit - align local scale axis to the rotation axis... 
- Strange behavior of my mel - changing namespace via mel - copy object at vertices - city building up - Squiggly Particle Motion? - Cleaning up variables - replace reference in MEL? - Finding Object's Shader - execute script based on currentFrame - littel big question - source myscript.mel - Shelfbutton name - query keyboard with mel - marking menu on right click?? - Links to nice MEL reels - Distance Attribute Creator - MEL books for FX artists? - polyEvaluate one face? - Removing a Specific Reference Node - "Unable to start debugging" - random scale Y? - Updating MPxDeformerNode when time changes - Reset Scale Attributes - changing diffuse value trough a loop - is VC++ the only compiler? - Question on maya connection - dgMod.createNode("transform") doesn't create a thing - Need help with script syntax - need help with procedure - show poly objects and joint nodes in the hypergraph window? - how do i convert .BMP to .XPM on linux - Copy/Paste multiple joints keys - Interactive mel execution, performance - How To Automated Snap Together Tool Script - Anyone ever used Appverifier on a Maya plugin? - Conditionnode - total noob - ZJ Mel Scripts? - query last command in undo queue - Query modelPanel size? - How can I get Maya to write out a text file - Tangent CV - how to synchronize 2 rig controllers? - new to mel, Help? - Accessing artPuttyCtx functionality in batch mode - MenuSet - Mel Newbie - Isolate selection! - Random Constrained Movement on Y - New to Mel - Get Local Rotation One object relate another - Keyboard Autofire - Problem Installing OMToolbox - undeclared variable - searchReplaceNames - Find objects visible to camera - Eric miller's muscle system - connect a variable to an attribut - UI building - radio buttons and loop - attrFieldSliderGrp offers no slider!? - proj management issue - Mel "system" command? - query if the modelPanel is not camera? - How to add influence object in the API? - How do you create UIs for character controls selection? 
- Questions Regarding MEL - Curve guided texture painting - Calculate Transforms - parent switching - Very Simple Motion Capture? - String + String = Float? - Python vs MEL, simple Question - Converting Rotations into Vectors - is anyone care to build a tiny script for me? - grow thorns on a tree - Nesting Columns/Rows? - FloatSliderGrp Question - MEL commands for MentalRay and maybe a flag like "-showallFlags" ? - how to print this " - API Command to do colorAtPoint ... - Webbrowser call and OS X dashboard - textField Help plz - hyperPanel error - Incremental Save script V1.0 - Set Translate Pivot to Rotation Script??? - The tedious task of converting mel to python... - Retrieving Global Coordinates of an Object? - Mass texture reference script - Sparking Shader concept - a couple of syntax problems - How to Start? - Plugin: Mel Editor For Eclipse - Rendering timestamp - Position and rotation based on reference objects - GUI Interface Help - set the extension type in render globals - Python & Maya API question - Mel script to change/update a shelf icon? - Iterate through selected objects? (selectedNodes gives annoying output) - Duplicate with transform an animate Object - random offset/movement? - Querying the type of light - random naming - python package - Euler angles to unit vector - selecting objects in a series-aargh!! - mayabatch?? - Grouping in object as they are created - Sticky Marking Menus? - LightLinking Bug removal with MEL?? - find ":" in shadernames and delete out ?! - Play order of frames - Help with trimmed NURBS surfaces - suggestions - Nested Procedures & Function - Absolute maya root. - Paint Effects MEL - GUI Help: Control to select attribute - help me with Character Set - Calculating Distance between two objects? - Python - Switch / Case statement - own headupdisplays - How to Programatically Align a Joints X Axis to the Joint Itself? - pointOnCurve my first newbie mel - Match Rotation Axis? 
- query scene units (centimetres or metres) - rescale an object by link to camera's focal length ? - Selecting a Control runs a Script? Howto? (i have no idea what its even called) - Switching tabs in the Render Settings Window - Maya mulitiple opening sessions in Mac ? - autoload mel scripts? - help needed:rendering maya files as .psd layers - rendering sub regions high res - Dynamicaly build UI - How do I get the numbers of CV's on a curve?? - maya 7.0 to 8.5. - Is "polyColorPerVertex" a bug? - Naming problem - List user created namespaces only - select drive dialog - How to Return Array of String from a function - create object at frame 50 , should not exist before frame 50 - Proj management: textScrollList - Assigning texture - Menu Display>ObjectDisplay>Template: What is that command? - Scale command scales at what percentage - delete specific faces? - query and setting per vertex weight color? - Replacing the RM menu "Complete Tool / Select All" - cacheFile problem - counter issue with select? - code only works in stages?? - syntax problem with variable in expression - why is this proc executing? - MEL Studio Pro and Maya 8.5 or 2008 compatible? - is it possible to name hairSystem and the output curves whithout renaming comand? - Quick expressions question - Changing default selection action - Vairables in Icon Path - Custom Rightclick Menus? - keep a button pressed - Querying whether a key is being pressed - quering maya folder? - an attribute to control the amount of frames that it takes to animate an object. - Forearm Twist: Multiply/Divide vs. Expression - floatSliderButtonGrp width question - pickWalk and lights - Change attributes of multiple objects - connect expresion to multiple object attrb - Unable to build Maya 8.5 plug-in. - script jobs and attribute changes - average the length of a selection of edges ? 
- Changing selectType automatically when a scene opens - scripted renderglobals batchrendered result in one frame rendered over and over again - Mel command what changes spacing of lines in Grid of UV Editor? - BakeResults - global string problem:"requires a constant value" - multiple commands button - polyPlane vertex attribute setting - MayaClockDemo, how to fix recording ? - Documentation on hidden commands? - Getting/Setting attributes with a Single Wildcard character? - How to pass arguments when button is pressed - animated fractal speed is too fast - Test Field Focus Out - Browse Field in MEL - Browse Field in MEL UI - API: compute function not invoked - adding attributes and sliders - move keyframe - setAttr -unkeyable true...what does that do?objecyt remains keyable - Print to the Command Response? - [Resolved] Calling Python Functions from within a Marking Menu? - fprint not accepting "\n" new line character - What is required to get paintable weights on a custom deformer? - Hiding nodes from the outliner: mel or C?
http://forums.cgsociety.org/archive/index.php?f-89-p-20.html
CC-MAIN-2017-17
refinedweb
1,283
57.87
nng_pipe_get(3)

NAME
nng_pipe_get - get pipe option

SYNOPSIS

#include <nng/nng.h>

int nng_pipe_get(nng_pipe p, const char *opt, void *val, size_t *valszp);
int nng_pipe_get_bool(nng_pipe p, const char *opt, bool *bvalp);
int nng_pipe_get_int(nng_pipe p, const char *opt, int *ivalp);
int nng_pipe_get_ms(nng_pipe p, const char *opt, nng_duration *durp);
int nng_pipe_get_ptr(nng_pipe p, const char *opt, void **ptr);
int nng_pipe_get_addr(nng_pipe p, const char *opt, nng_sockaddr *sap);
int nng_pipe_get_string(nng_pipe p, const char *opt, char **strp);
int nng_pipe_get_size(nng_pipe p, const char *opt, size_t *zp);
int nng_pipe_get_uint64(nng_pipe p, const char *opt, uint64_t *u64p);

DESCRIPTION

nng_pipe_get()
This is untyped, and can be used to retrieve the value of any option.

nng_pipe_get_bool()
This function is for options which take a Boolean (bool). The value will be stored at bvalp.

nng_pipe_get_int()
This function is for options which take an integer (int). The value will be stored at ivalp.

nng_pipe_get_ms()
This function is used to retrieve time durations (nng_duration) in milliseconds, which are stored in durp.

nng_pipe_get_ptr()
This function is used to retrieve a pointer to structured data into the value referenced by ptr.

nng_pipe_get_size()
This function is used to retrieve a size into the pointer zp, typically for buffer sizes, message maximum sizes, and similar options.

nng_pipe_get_addr()
This function is used to retrieve an nng_sockaddr into sap.

nng_pipe_get_string()
This function is used to retrieve a string into strp. This string is created from the source using nng_strdup() and consequently must be freed by the caller using nng_strfree() when it is no longer needed.

nng_pipe_get_uint64()
This function is used to retrieve a 64-bit unsigned value into the value referenced by u64p. This is typically used for options related to identifiers, network numbers, and similar.

RETURN VALUES

These functions return 0 on success, and non-zero otherwise.
https://nng.nanomsg.org/man/tip/nng_pipe_get.3.html
CC-MAIN-2021-04
refinedweb
277
59.64
This was my first meeting with Michael. For those that don't know him, Michael Rys is the principal program manager lead for SQL Server's Beyond Relational Data team and represents Microsoft on the W3C XML Query working group. We had exchanged a lot of emails on various XML/XQuery stuff in the past but never got a chance to meet in person. He presented an interesting session on XML/XQuery best practices for improving XQuery performance. He was kind enough to dedicate quite a lot of time to me after the session, discussing a long list of problems and feature requests that I put forward. Here is a quick summary of the feature requests/enhancements I discussed.

- Memory and performance issues while dealing with large XML documents. XQuery seems to perform well if the size of the XML document is relatively small. However, if the document size is over a few MB, XQuery seems to be a bit slow, and if the document is pretty big (say, 50 or 100 MB) there seem to be some memory issues. [OPENXML() seems to perform better when the document is big. However, this may vary from case to case.]

- After performing an XQuery modify() operation, there is no way to verify that the value/element/attribute was updated/inserted/deleted. You can update an element that does not exist and the modify() operation will still succeed. You can attempt to delete an element or attribute that does not exist and still the operation will succeed. There is no easy way to identify whether the operation REALLY did any modification to the XML content or not. The only way we can do this currently is by running a second query after the update/insert/delete operation to see if the data was modified. If you think this feature is important, please vote at

- There is no built-in support for comparing two XML values. [Well, you can use my function posted here :-)] Please vote at

- Need a way to check if a given string is a WELL-FORMED XML string. This can avoid generating an error while trying to cast a string to XML. [Something close to the ISDATE() or ISNUMERIC() functions.] Please vote at

- There is no way to generate the XML declaration [<?xml version="1.0" ...] when generating XML documents using FOR XML. The only option we have currently is string concatenation. Please vote at

- When generating XML documents with a namespace declaration using FOR XML PATH, the namespace declaration is repeated along with each child element. Ideally, this should be present only within the top-level element. Please vote at

- Position of elements is significant in XML. There is no way to retrieve the position of elements from an XML document [..and I have posted a workaround here]. It would be great if an XQuery expression could return the position/row number of each element present in the XML document. Please vote at

- There is no way to validate an XML document against a schema collection. The only workaround available currently is to do an assignment operation within a TRY-CATCH block: if the operation succeeds, the XML is valid, and if an error is raised, the XML is invalid. There needs to be a better way to do this. Please vote at

- SSMS comes with a nice XSD/XML editor. However, there is no way to invoke this editor: there is no toolbar, menu, or keyboard shortcut. One dirty way to launch it is by opening an XML document from File -> Open, and SSMS will load the XML document in the XML/XSD editor. If you find it hard to work without it, please vote at

- There is no way to update multiple elements in a single query. If an XML document contains 10 Employee elements and I want to update all of them in a single query, it is not possible currently. Please vote at There is also a similar request that asks for supporting update of multiple elements. Vote at:

- When generating XML documents using FOR XML, the element names must be static, so we need to have prior knowledge of the element names. If the names of the elements come from a table, a dynamic query is needed to generate the XML document. [Michael suggested a way to get this done, but I could not make it work. I will contact him again and take some help to test this.]

- There is no way to retrieve the path at which a given value is placed within the XML document. A few people asked me about this. I am still not sure how important this feature is and have not filed a Connect item yet. If you think this is an important feature, please go ahead and create a Connect item. [Please do not forget to share it with me :-)]

- Types declared within schema collections are not reusable outside the schema collection. This results in a lot of code duplication and maintenance headache. Please vote at:

- Need a way to generate an XML tree (recursive) using FOR XML. A recursive function (as given in Books Online) is not very efficient. I have posted a workaround here, but that is a bit complex. We need something simpler. Please vote at

Tags: XML, SQL Server Heroes
http://beyondrelational.com/modules/2/blogs/28/posts/10430/meeting-the-sql-server-heroes-2-michael-rys.aspx
CC-MAIN-2016-40
refinedweb
908
63.7
This is my first C# code, so it will not be a very ideal piece of code, but it will be of help to those who are trying to parse a JSON feed from GW on Windows Mobile. To start with, I have a simple screen with a "Get Data" button. On clicking this button, an HTTP GET call is made to the GW and a JSON response is received. Subsequently this is parsed and printed on the console. I am not very comfortable with screen designing as yet, so I am not venturing into that as of now. Along with the above I will put in some titbits which might be of help for anyone who is coding in Visual Studio and C# for the first time, like myself.

I am using the following environment:
- Microsoft Visual Studio 2008 Version 9.0.30729.1 SP
- Microsoft .NET Framework Version 3.5 SP1
- Windows Mobile 6 Professional SDK
- Windows Mobile 6 Classic Emulator

Prerequisites

Before starting the actual coding, there are some prerequisites to take care of.

- Setting up the emulator to access the network on your laptop. To access GW we need to set up the emulator to use the network on the computer. The steps for this are well explained in the below link.

- Providing a storage card for the emulator. On the main window of the studio go to Tools -> Options -> Device Tools -> Devices. On the right side, under Devices, select the right emulator and click Properties. Then click Emulator Options. Here, under the General tab, provide a folder path for "Shared folder" (e.g. c:\devicemem). Henceforth this location will act as the storage card for the emulator. Any I/O action can be done on this location from the emulator as well as the laptop. If a change is made in a file from the laptop, a running emulator will also see the change; thus it is not required to restart the emulator to see the change.

- Getting Json.NET. Go to the Downloads tab on the top. Depending on the SDK and Visual Studio version you are using, download the most suitable library. For my environment I downloaded Json.NET 3.5 Release 8.

- Adding Json.NET to the project. Adding a reference to the project is very simple. In the Solution Explorer tab on the right side of the project window, right-click on the References node and select Add Reference. Now browse to the location where you downloaded and unzipped the library and select the proper DLL. In my case it is under "Compact", as mentioned in the Readme.txt present in the zip.

- Json.NET documentation. In the above URL there are 2 sections which were referred to for the scope of this blog. They are "CustomCreationConverter" and "Serializing Collections" under "Serializing and Deserializing JSON".

- Simple ways of seeing output/debugging.

To print output on the Immediate window:

using System.Diagnostics;
Debug.WriteLine("Any text");

To show output in a dialog box:

MessageBox.Show("Any Message");

Project creation and first screen

To start coding, please follow the steps below:
- Create a Smart Device Project; it can be found under File -> New Project -> Visual C# -> Smart Device -> Smart Device Project.
- Next, add a button and change its label to "Get Data".
- Double-click on the button and start coding as mentioned in the next section.

Actual code

Making an HTTP GET call to GW and reading the response stream:

string sURL;
sURL = "";
WebRequest wrGETURL;
wrGETURL = WebRequest.Create(sURL);
wrGETURL.Headers.Add("Authorization: Basic n3VwdXNlcjI6czNwdXNlcg=="); // Please provide proper Authorization
Stream objStream;
objStream = wrGETURL.GetResponse().GetResponseStream();
StreamReader reader = new StreamReader(objStream);
string text = reader.ReadToEnd();

Reading JSON from a file. Sometimes it is easier to store the feed in a local file and work with it; this way one can keep changing the JSON stored in the file to get the right class structure required to parse the actual feed. Here I have stored the feed in a local file feed.txt in the c:\devicemem folder (see the section above on providing a storage card for the emulator). This file is accessible from the emulator.

System.IO.StreamReader myFile = new System.IO.StreamReader("\\Storage Card\\feed.txt");
string text = myFile.ReadToEnd();
myFile.Close();

Map classes to the Travel Agency collection. We will need to create 4 classes here:
- TravelagencyCollection
- TravelagencyCollection__Metadata
- TravelagencyCollectionD
- TravelagencyCollectionResults

You can find the class structure in the attached Collection.cs.txt.

Parsing JSON. It is just one line of code, as listed below:

using Newtonsoft.Json;
TravelagencyCollection tc = JsonConvert.DeserializeObject<TravelagencyCollection>(text); // for "text" see above

Accessing the parsed data:

Debug.WriteLine(tc.d.results[0].__metadata.type);
Debug.WriteLine(tc.d.results[1].NAME);

Hope this helps!
https://blogs.sap.com/2013/04/15/parsing-json-from-netweaver-gateway-on-windows-mobile-c/
CC-MAIN-2018-05
refinedweb
781
58.08
Unit 2: Lists

Introduction

The most important datatype in any functional programming language is the list. A list is a linearly ordered collection of elements. All the elements of a list must be of the same type. The easiest way in which to specify a list is by enumeration: you simply write the elements of the list between square brackets and separate them with commas. Here are some examples with their types:

[3, 7, 5, 88] :: [Int]
[[2, 3], [4, 8, 17]] :: [[Int]]
['t', 'i', 'm', 'e'] :: [Char]
"time" :: [Char]

Note that the last two of these are equivalent. The empty list, written [], belongs to type [a].

Haskell has a means of producing lists of integers in arithmetical progression. A few examples should make clear how this is achieved:

ghci> [1 .. 10]
[1,2,3,4,5,6,7,8,9,10]
ghci> [3, 5 .. 11]
[3,5,7,9,11]
ghci> [(-9), (-7) .. (-3)]
[-9,-7,-5,-3]

Because Haskell is a lazy functional programming language it can handle infinite lists. Here are some examples:

nats = [1 .. ]
evens = [2, 4 .. ]
negs = [(-1), (-2) .. ]
ones = 1:ones
stars = '*':stars

Haskell provides several list operators. Some are: head (unary prefix), which extracts the first element of a non-empty list; tail (unary prefix), which returns the tail of a non-empty list, that is to say, the list of all the elements except the first; length (unary prefix), which returns the length of a list; : (binary infix), which sticks an element at the front of a list; and !! (binary infix), which extracts an element of a list. Note that this list indexing operator treats the first element of a list as occupying position 0. Some examples of these operators in action should make it clear what they do:

ghci> 'f':"ear"
"fear"
ghci> head [7, 18, 3]
7
ghci> head "Harvard"
'H'
ghci> tail [7, 18, 3]
[18,3]
ghci> tail "Harvard"
"arvard"
ghci> length "time"
4
ghci> length [8, 1, 4, 5, 2]
5
ghci> "time" !! 2
'm'
ghci> [1, 2, 3, 4] !! 0
1

These operators have the following types:

(:) :: a -> [a] -> [a]
head :: [a] -> a
tail :: [a] -> [a]
length :: [a] -> Int
(!!) :: [a] -> Int -> a

Note that (head xs):(tail xs) is the same as xs except when xs is the empty list. Note also that every list can be represented using the cons operator and the empty list. For example, [2, 8, 13] can be depicted as 2:8:13:[]; there's no need to add parentheses as cons associates to the right.

Defining functions that operate on lists

A function to sum the elements of a list of integers can be defined like this:

sum :: Integral a => [a] -> a
sum ys
  | ys == []  = 0
  | otherwise = head ys + sum (tail ys)

It is better, however, to use pattern-matching, thus:

sum :: Integral a => [a] -> a
sum [] = 0
sum (y:ys) = y + sum ys

Here, the patterns are [] and (y:ys). In dealing with lists a pattern can contain variables and any number of occurrences of the empty list [] and the cons operator :. The function sum is actually defined in the Haskell Prelude; the above definitions are just presented here as examples.

List addition and subtraction

Two useful binary infix functions on lists are ++ (list addition) and \\ (list subtraction). In talking about the operator ++, various idioms are used. People talk about joining two lists together or concatenating them. The operator ++ itself is often called append. These days, list subtraction is more usually called list difference.

List addition takes two lists as its arguments and sticks them together. List subtraction removes elements from a list, for example: [1, 2, 3, 4, 5] \\ [1, 4] is equivalent to [2, 3, 5]; [1, 1, 1, 1] \\ [1, 4] is equivalent to [1, 1, 1]; and [1, 1, 1, 1] \\ [1, 1] is equivalent to [1, 1].

List difference is not included in the standard Prelude. If you want to use it, you need to import the library of functions Data.List, which you do as follows:

ghci> import Data.List
ghci> [1,8,77,11] \\ [77]
[1,8,11]

You can also add import Data.List to any file you load into GHCi.
Memoisation

In mathematics the Fibonacci numbers are usually defined like this:

fib :: Integer -> Integer
fib 0 = 0
fib 1 = 1
fib i = fib (i - 1) + fib (i - 2)

Although this works in Haskell, it is extremely inefficient. A more efficient definition prevents the re-evaluation of the same Fibonacci number: the values are stored in a list. (Note that the argument type is now Int, because the indexing operator !! takes an Int.) The definition is as follows:

fib :: Int -> Integer
fib j = fiblist !! j

fiblist :: [Integer]
fiblist = map f [0 ..]
  where
    f 0 = 0
    f 1 = 1
    f i = fiblist !! (i - 1) + fiblist !! (i - 2)

Intuitively, fiblist contains the infinite list of Fibonacci numbers. Each element, say the ith, can be expressed in at least two ways, namely as fib i and as fiblist !! i. This version of the Fibonacci numbers is very much more efficient.

© Antoni Diller (17 September 2021)
https://www.cantab.net/users/antoni.diller/haskell/units/unit02.html
8.12. Machine Translation and Data Sets

Machine translation (MT) refers to the automatic translation of a segment of text from one language to another. Solving this problem with neural networks is often called neural machine translation (NMT). Compared to the language models we discussed before, a major difference for MT is that the output is a sequence of words instead of a single word. The length of the output sequence can differ from the length of the source sequence. In the rest of this section, we will demonstrate how to pre-process an MT dataset and transform it into a set of data batches.

In [1]:
import sys
sys.path.insert(0, '..')

import collections
import d2l
import zipfile
from mxnet import nd
from mxnet.gluon import utils as gutils, data as gdata

8.12.1. Read and Pre-process Data

We first download a dataset that contains a set of English sentences with the corresponding French translations. As can be seen below, each line contains an English sentence and its French translation, separated by a TAB.

In [2]:
fname = gutils.download('')
with zipfile.ZipFile(fname, 'r') as f:
    raw_text = f.read('fra.txt').decode("utf-8")
print(raw_text[0:95])

Go.	Va !
Hi.	Salut !
Run!	Cours !
Run!	Courez !
Who?	Qui ?
Wow!	Ça alors !
Fire!	Au feu !
Help!

Words and punctuation marks should be separated by spaces, but this dataset has a few exceptions. We fix them by adding the necessary spaces before punctuation marks and replacing non-breaking spaces with ordinary spaces. In addition, we convert all characters to lower case.

In [3]:
def preprocess_raw(text):
    text = text.replace('\u202f', ' ').replace('\xa0', ' ')
    out = ''
    for i, char in enumerate(text.lower()):
        if char in (',', '!', '.') and i > 0 and text[i-1] != ' ':
            out += ' '
        out += char
    return out

text = preprocess_raw(raw_text)
print(text[0:95])

go .	va !
hi .	salut !
run !	cours !
run !	courez !
who?	qui ?
wow !	ça alors !
fire !	au feu !

8.12.2.
Tokenization

A word or a punctuation mark is treated as a token; a sentence is then a list of tokens. We convert the text data into a set of source (English) sentences and a set of target (French) sentences, each a list of lists of tokens. To simplify the later model training, we only use the first num_examples sentence pairs.

In [4]:
num_examples = 50000
source, target = [], []
for i, line in enumerate(text.split('\n')):
    if i > num_examples:
        break
    parts = line.split('\t')
    if len(parts) == 2:
        source.append(parts[0].split(' '))
        target.append(parts[1].split(' '))

source[0:3], target[0:3]

Out[4]:
([['go', '.'], ['hi', '.'], ['run', '!']],
 [['va', '!'], ['salut', '!'], ['cours', '!']])

We visualize the histogram of the number of tokens per sentence in the following figure. As can be seen, a sentence on average contains 5 tokens, and most sentences have fewer than 10 tokens.

In [5]:
d2l.set_figsize()
d2l.plt.hist([[len(l) for l in source], [len(l) for l in target]],
             label=['source', 'target'])
d2l.plt.legend(loc='upper right');

8.12.3. Vocabulary

Now we build a vocabulary for the source sentences and print its size.

In [6]:
def build_vocab(tokens):
    tokens = [token for line in tokens for token in line]
    return d2l.Vocab(tokens, min_freq=3, use_special_tokens=True)

src_vocab = build_vocab(source)
len(src_vocab)

Out[6]:
3790

8.12.4. Load Dataset

Since sentences have variable lengths, we define a pad function to trim or pad a sentence to a fixed length.

In [7]:
def pad(line, max_len, padding_token):
    if len(line) > max_len:
        return line[:max_len]
    return line + [padding_token] * (max_len - len(line))

pad(src_vocab[source[0]], 10, src_vocab.pad)

Out[7]:
[37, 4, 0, 0, 0, 0, 0, 0, 0, 0]

Now we can convert a list of sentences into a (num_examples, max_len) index array. We also record the length of each sentence without the padding tokens, called the valid length.
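Before involving MXNet, the trim-or-pad logic and the valid-length bookkeeping can be sketched in plain Python. The pad index 0 below mirrors the output above; the variable names are illustrative:

```python
def pad(line, max_len, padding_token):
    """Trim a token-index list if too long, otherwise right-pad it."""
    if len(line) > max_len:
        return line[:max_len]
    return line + [padding_token] * (max_len - len(line))

PAD = 0                      # index reserved for the padding token
sentence = [37, 4]           # e.g. the indices of "go ."
padded = pad(sentence, 10, PAD)
# The valid length is the number of non-padding tokens, which is what
# (array != vocab.pad).sum(axis=1) computes on the NDArray version.
valid_len = sum(tok != PAD for tok in padded)
print(padded, valid_len)     # [37, 4, 0, 0, 0, 0, 0, 0, 0, 0] 2
```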
In addition, we add the special “<bos>” and “<eos>” tokens to the target sentences so that our model will know the signals for starting and ending prediction.

In [8]:
def build_array(lines, vocab, max_len, is_source):
    lines = [vocab[line] for line in lines]
    if not is_source:
        lines = [[vocab.bos] + line + [vocab.eos] for line in lines]
    array = nd.array([pad(line, max_len, vocab.pad) for line in lines])
    valid_len = (array != vocab.pad).sum(axis=1)
    return array, valid_len

Finally, we construct data iterators to read data batches from the source and target index arrays.

In [9]:
def load_data_nmt(batch_size, max_len):
    # This function is saved in d2l.
    src_vocab, tgt_vocab = build_vocab(source), build_vocab(target)
    src_array, src_valid_len = build_array(source, src_vocab, max_len, True)
    tgt_array, tgt_valid_len = build_array(target, tgt_vocab, max_len, False)
    train_data = gdata.ArrayDataset(
        src_array, src_valid_len, tgt_array, tgt_valid_len)
    train_iter = gdata.DataLoader(train_data, batch_size, shuffle=True)
    return src_vocab, tgt_vocab, train_iter

Let’s read the first batch.

In [10]:
src_vocab, tgt_vocab, train_iter = load_data_nmt(batch_size=2, max_len=8)
for X, X_valid_len, Y, Y_valid_len in train_iter:
    print('X =', X.astype('int32'),
          '\nValid lengths for X =', X_valid_len,
          '\nY =', Y.astype('int32'),
          '\nValid lengths for Y =', Y_valid_len)
    break

X = [[  57   15    6 1785    0    0    0    0]
 [  12  317   10  185    4    0    0    0]]
<NDArray 2x8 @cpu(0)>
Valid lengths for X = [4. 5.]
<NDArray 2 @cpu(0)>
Y = [[   1   51   92   26   84    8   39    3]
 [   1   15  265 1219    4    2    0    0]]
<NDArray 2x8 @cpu(0)>
Valid lengths for Y = [8. 6.]
<NDArray 2 @cpu(0)>
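The whole target-side treatment — bos/eos wrapping, padding, and valid lengths — can likewise be sketched with plain lists. The indices 1, 2 and 0 for bos/eos/pad are made up for illustration; in the real code they come from the vocabulary object:

```python
BOS, EOS, PAD = 1, 2, 0  # hypothetical special-token indices

def pad(line, max_len, padding_token):
    # Trim if too long, otherwise right-pad to the fixed length.
    if len(line) > max_len:
        return line[:max_len]
    return line + [padding_token] * (max_len - len(line))

def build_array(lines, max_len, is_source):
    # Only target sentences are wrapped in <bos> ... <eos>.
    if not is_source:
        lines = [[BOS] + line + [EOS] for line in lines]
    array = [pad(line, max_len, PAD) for line in lines]
    # Valid length = number of non-padding tokens in each row.
    valid_len = [sum(tok != PAD for tok in row) for row in array]
    return array, valid_len

tgt, vlen = build_array([[51, 92, 26]], 8, is_source=False)
print(tgt, vlen)  # [[1, 51, 92, 26, 2, 0, 0, 0]] [5]
```

Note how the valid length of a target row counts the bos and eos tokens, which is consistent with the batch printout above (Y rows starting with index 1 and valid lengths that include the markers).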
http://d2l.ai/chapter_recurrent-neural-networks/machine-translation.html
Depending. Officially, the US occupation of Iraq ended on June 28, 2004. (I didn't know that, did anyone else?) But in reality the US is still in charge. Among the "100 Orders" of L. Paul Bremer is Order #17 which grants foreign contractors, including private security firms, full immunity from Iraq's laws. Even if they, say, kill someone or cause an environmental disaster, the injured party cannot turn to the Iraqi legal system. Rather, the charges must be brought to US courts. I wonder if the Iraqi people realize that their nation is no more than a satellite of the US. September 17, 2007 11:41 AM | Reply | Permalink I'm shocked to find Blackwater operating Little Birds. Which flavor? Troop transport on benches or gunship? If the latter, how did they get them? My rule of thumb for distinguishing legitimate contract security from mercenary functions is: if they were protecting a convoy, the mission criterion would be OK, but I think they cross a line with air support. -- Howard *equal opportunity offense to both extremes* "Those who cannot remember the past are condemned to repeat it" [George Santayana] September 17, 2007 11:59 AM | Reply | Permalink "I wonder if the Iraqi people realize that their nation is no more than a satellite of the US." Given that 93% or so of them support attacks on Americans, I would say that they do realize they are pawns in a game they don't understand and have no interest in playing. September 17, 2007 12:02 PM | Reply | Permalink Since we have an administration that doesn't own a fleck of respect for US law, why wouldn't you expect them to behave like outlaws when abroad? Which is precisely how they have behaved with US and Iraqi lives and bodies, our money, and a blind eye to crime, corruption and murder. If they want to act like criminals, for Christ sake let's try them as such. This madness must end! And, phelicity, no I did not know we were no longer the occupying power.
I don't think that international law allows you to just declare yourself no longer the occupying power, especially when you have 170,000 troops in country and self-declare to be at war with the populace. But preznit has always had his cake and icecream, and gets to eat it, too, so everything is as he believes it to be. Facts and moral questions are irrelevant. September 17, 2007 1:08 PM | Reply | Permalink I hope the Dems get behind this and support the request by our "ally." This would do more to end the surge than any other course they could take ... September 17, 2007 1:35 PM | Reply | Permalink O Blackwater O Blackwater Mesopotamian moon is shining on me... gonna make ev'rything all right September 17, 2007 1:53 PM | Reply | Permalink I was also shocked by a little paragraph in a CNN article: The report [Congressional Research Service report]." I wonder who else they can't prosecute in their own country? Does that apply to other types of private companies operating in Iraq? September 17, 2007 2:44 PM | Reply | Permalink The use of contractors was once a fairly heated topic. Sadly it dropped off the radar here in America much like the continued violence and casualty figures. Of the many cynical and barbarous activities this administration is responsible for this one seriously needs to be addressed. The last time I clearly remember hearing any real dialogue regarding these contractors, or rather mercenaries, was during the torture scandals. There was some very valid concern that these mercenaries were under no clear chain of command or clear legal constraints. And as expected, there appears to have been little or nothing done since then to clear up this dubious situation. If this administration and the flaccid press can't even accurately report what our military and the Iraqi government are doing in Iraq think about how very little is known about these mercenary companies operating over there?
Something occurred to me when this recent story broke and in light of the ongoing fervor regarding Iran. One of the points often cited as a hindrance was that our military is spread dangerously thin and is already showing signs of breaking in numerous places. With this in mind the question of tangling with Iran seemed implausible. Well, what if the White House simply turned the entire Iraqi mess over to these mercenary companies and freed up our beleaguered military to wrestle with this new enemy? Ah yes, a fresh start so to speak. Let the downward spiral in Iraq continue to fade from our minds and let "the market" handle it. Because we all know how effective that old "market" can be when it comes to dealing with people and their well being. September 17, 2007 3:03 PM | Reply | Permalink I'm sensitive to using the term "mercenary" about someone who indeed might be a criminal, because "mercenary" has a fairly specific meaning in the Geneva Conventions. The context there is that the individual is taking active part in combat, motivated solely by money. At Abu Ghraib, there were contract interpreters and mercenaries that indeed may have violated, if they were military personnel, the Uniform Code of Military Justice (UCMJ). I freely admit to being confused here, as I thought that contract employees in Iraq had been placed under the UCMJ. Might that have been only contractors to the Department of Defense, and, if the individuals in this case were under Department of State contract, the UCMJ might not apply? Apropos of the "market", some years ago, GEN Creighton Abrams, then Army Chief of Staff and, by all accounts, a highly ethical man whose term was cut short by death from cancer, introduced the Total Force concept, which was, along with the Gulf of Tonkin resolution, a means of preventing inappropriate use of the military without Congressional approval.
In what now seem the innocent days of Vietnam, it was seen as a very major step, and a strong Congressional move, to mobilize Reservists. Reservists are always Federal, while the Guard must be Federalized to go under regular military command. Army forces are categorized as combat, combat support, and combat service support. Combat arms are specialties that are expected to involve themselves directly in fighting, such as infantry, armor, special forces and artillery. Combat support often will be close to battle, such as military intelligence, signal (i.e., communications) and, especially in Iraq, military police. The system has trouble deciding if the Engineers are combat arms or combat support. Combat Service Support organizations are needed to sustain combat, but, with exceptions, don't usually engage in it. These include supply, maintenance, transportation and medical. Under Total Force, all but a minimum of Combat Service Support, and a good deal of Combat Support, went into the Reserves, and the idea was that Congressional action would be necessary to support a longer-term deployment requiring support services. This Administration has managed to side-step Total Force by contracting out many Combat Service and Combat Service Support functions. It's been done in the name of economy, but, in reality, it circumvents the Army-developed means of getting Congressional approval of major deployments. -- Howard *equal opportunity offense to both extremes* "Those who cannot remember the past are condemned to repeat it" [George Santayana] September 17, 2007 3:24 PM | Reply | Permalink Imperialism is a dirty business, isn't it? September 17, 2007 4:33 PM | Reply | Permalink You're correct in pointing out that there are multiple missions or types of contracts being handed out to these companies and not all of them are combat related. 
There was a great story (I think it was on Frontline but I'm not sure) about the growing frustration within the military at the most mundane of tasks such as cleaning of latrines and laundry and food services having all largely been "out sourced" leaving many soldiers, not necessarily in combat mission roles, with little or nothing to do. In addition it is making some of those same services infinitely more expensive for both the tax payers and the soldiers themselves. This is absurd and obscene. And it's incredibly inefficient. I look at how much impact our industrial base, logistics, and design had to do with our country's success in the Second World War. Standard parts were the order of the day - such as on our armored vehicles. Many of them and their components were based on one central design with common parts making manufacturing and in-the-field maintenance much more cost effective and practical. We've gone in the exact opposite direction with modern weapon systems and the logistical support needed to operate them in distant theaters. It's an invitation to waste both in lives lost (see lack of properly armored vehicles reaching Iraq in a timely manner) and money wasted (see everywhere you look in our military today). It really seems as though instead of thinking of the military as a whole or the soldier in particular the desire was to make sure there were enough slices of war pie to go around to all the different companies pulling chairs up to the table. Not smart. But of the legions of contractors in Iraq, there are also a large number of them in actual combat rolls. These are an even more severe problem in my mind than the immensely wasteful logistical contractors. These people in many cases are ex-military so they are at least familiar with the conduct that is expected of a soldier within the theater of operation. But of course in these cases there is no clear system in which to monitor or enforce any violations. 
Actually, given the lack of any clear rules, it could be argued that there can be no legal violations since there are no rules. When and how the Geneva Conventions can be applied has already proven to be about as important to this administration as our Constitution. And as conscienceless as our administration is, big business has even less. And by and large it is all driven by money. Most of these contractors are there for it. Whether they are driving supply trucks or escorting them, most are lured to Iraq by the promise of a big payday. I'd be shocked (and extremely skeptical) if any of these "contractors" were there for humanitarian reasons or to help get Iraq back on its feet. Sure they might SAY that they are, but I'm becoming much more reliant on the old adage - actions speak louder than words. September 17, 2007 4:40 PM | Reply | Permalink The name creeps me out. The unfortunately named "Custer Battle" is actually two guys with those names. "Blackwater" does include Cofer Black, but it seems the company wants to imply still waters running deep. September 17, 2007 4:44 PM | Reply | Permalink Maintenance is an interesting subject. On the one hand, it simply is not technically possible to have WWII style parts for electronics. Oh, I remember vacuum tubes and soldering wires, but there is no practical way to repair an integrated circuit, or a multilayer printed circuit board. There are other aspects of maintenance, however, where the outsourcing could be deadly. US tank crews do at least first level maintenance on their vehicles, including fairly complex things like replacing tracks or zeroing the main gun and gun computer. In many armies, this would be considered at least second level, with first level something like changing an air filter. There are mechanics at company or battalion that do more complex things.
There were cases, however, where we were fighting with allies where manual labor was looked down on, but they suddenly discovered the tank quit and they didn't know why -- but it might be as simple as changing the air filter, or that the engine was wrecked because the oil wasn't changed. I know people who have worked with Kuwaitis, who now insist on armored vehicle crews knowing a fair bit about maintenance as well as operation. I cannot rationalize any legal framework where contractors can take on independent combat roles with more than personal weaponry. As I've mentioned, one good guideline is that personal weapons are the things medical personnel are allowed to carry for self- and patient protection, if they don't wear the Red Cross or equivalent. I remember my mother, an Army Medical Service Corps officer who happened to have a hobby of target shooting, agonizing over the ethics of whether to go armed or not. She decided to do so, but she also said she would only fire -- and to kill -- in the defense of her patients. I had a couple of offers, which turned out to be fraudulent, to go to Iraq to do very specialized things, mostly in the civilian sector, such as helping establish a Kurdish university, including a telemedicine service. Other parts were very involved communications engineering, which also would involve turning them over to the Iraqis. Supposedly, personal weapons were permitted, but I wouldn't have dreamed of going off on a combat mission. -- Howard *equal opportunity offense to both extremes* "Those who cannot remember the past are condemned to repeat it" [George Santayana] September 17, 2007 4:50 PM | Reply | Permalink Imperialism is a dirty business, isn't it? It's also one that begins by looking enormously profitable but usually ends in bankruptcy. 
September 17, 2007 4:58 PM | Reply | Permalink While contractors working directly for the military in support roles are subject to the UCMJ (Uniform Code of Military Justice), they are actually a small minority of the mercenaries in Iraq. Those working for State, CIA, or civilian firms contracted by the Pentagon (think Halliburton, though there are plenty more), operate in a vacuum, a weird no-law zone that is specific to the Bush Administration. They are exempt from any law the Iraqi government might pass and there is no US law governing their conduct. They are beyond any constraint and may kill and maim and torture as they see fit and the Iraqis have no recourse. I am shocked that more Americans have no idea of the numbers of Iraqi young men who disappeared into Abu Ghraib and other less famous places never to emerge. They died in US custody and were quietly buried with no acknowledgement that they had ever been taken from their homes. Much of this was farmed out to the mercenary firms. There was no way to attach criminal liability to them. It is as despicable and revolting as anything the US government has ever done and happened while you and I were supposed to be watching. September 17, 2007 5:14 PM | Reply | Permalink The reason there is a distinction between national soldiers and private agents is that a country is sort of immune from criminal justice, while the private agent is subject to it. A country stands behind its use of force, essentially saying it feels justified, and if somebody has a beef they can come with their army. The entire citizenry accepts responsibility. (And even soldiers acting in a war are subject to UCMJ.) The private agent does not have the country standing behind him, and should be subject to justice. If he is exempted, like the national soldiers, there is power without responsibility. This is the reason non-state combatants lack full protection of Geneva conventions, and why we may call someone a terrorist.
In this case, it is at least partially the fault of US officials that tasked Blackwater this way. September 17, 2007 5:35 PM | Reply | Permalink Again, I urge staying close to the Geneva definition of mercenary, and considering other definitions of criminality. There is the banning of privateering under the Treaty of Paris of 1856, actually in an annex to the main treaty, which ended the Crimean War. Relatively few nations actually ratified it, but there were a sufficient number of prosecutions that a privateering defense is probably not acceptable under customary international law. The legal doctrine of hostis humani generis has been interpreted to be within the rights of any nation to enforce. It speaks of enemies of humanity, which, in this context, means pirate or slave trader. One could argue that any national presence in Iraq could apprehend these individuals under this doctrine, and either the US would have to disavow them -- probably making them pirates -- or accept responsibility. They might be immune from US or Iraqi action, but what about British? It would be a reasonable Democratic action to introduce legislation that any US contractor in an area of military operations (i.e., where there is a military area command) must be subject to the UCMJ. Bush might veto it or give a signing statement, but that would be political dynamite. I'm not saying you are using it in this manner, but there is an unfortunate tendency to use "mercenary" as an epithet rather than as a legal term. More use of the legal definitions, and of related issues such as pirates, could contribute to bringing irresponsibility under control. Consider the other national contingents, and remember that a desire for justice can be found in the damndest places. Look up Georg Konrad Morgen sometime, and find a legal officer within the Nazi SS who searched out corruption, but resisted Allied pressure, at Nuremberg, to testify against individuals for whom he did not believe there was evidence of guilt.
By all accounts, he was an exceptionally just man. -- Howard *equal opportunity offense to both extremes* "Those who cannot remember the past are condemned to repeat it" [George Santayana] September 17, 2007 5:59 PM | Reply | Permalink I think it was all prearranged under Bremer's rule. Everything was arranged for the benefit of US, just like an imperial ruler would do. It was disgusting at the time. And it's disgusting as it plays out. September 17, 2007 6:04 PM | Reply | Permalink C'mon HC, the streets of Baghdad are not some Harvard debating society, the doctrine of hostis humani generis won't help when the RPG's are coming your way. Spreadin' democracy is dangerous business. If Cheney/Bush and Co. can flush the US Constitution down the turlet, they sure as hell aren't goin to give a hoot about some words written on a paper in, for Gawd's sake, Paris France, in 1856. September 17, 2007 8:39 PM | Reply | Permalink They seem to be acting as if they are unaccountable. Are they acting as soldiers (a private army being unconstitutional)? It seems to me that the tens of thousands of private security contractors in Iraq could be considered a private army in that they are protecting US officials and even generals. Blackwater (scroll down) is hiring for policing the Big Easy (this is a dated newsletter)! Let’s not forget that BW and others were rushed into New Orleans after Katrina and there were reports of civilians shot by private security contractors there and no charges. There was also talk of contracts for border control. It seems to me like some lines have been crossed in attempts at privatizing the military and militarizing “emergencies” at home. September 17, 2007 9:08 PM | Reply | Permalink By creating a new "set" of "security contractors", one which intersects the otherwise statutorily separated sets "soldiers" and "cops" the Masters of Blackwater et al have oddly subverted the mandates of posse comitatus.
The result, an unaccountable, thoroughly militarized police force with total impunity for any crimes. Suppose: Blackwater + Katrina like emergency + blanket presidential pardons on group basis* for all past and future acts done in furtherance of executive order blah blah blah.... Hello permanent Republican majority, goodbye republican government. *a domestic rule 17, if you will... September 17, 2007 11:41 PM | Reply | Permalink The Emperor used to say to me, "Darth", he'd say, "Darth, nothing binds the hearts and minds of the people to the Imperium like having our storm troopers rape their daughters and kill their aged mothers with impunity...When they see that trooper freely walking his beat tomorrow, they'll show RESPECT!" September 17, 2007 11:54 PM | Reply | Permalink Mr. Pinochet didn't find the Spanish in a Harvard debating society. Mr. Eichmann did not debate the Israeli team, and Mr. Noriega did not debate the people playing loud rock music along with a goat wearing red underwear. I'm being slightly facetious here, as the idea of one sovereign state going into another to arrest them, especially in the Pinochet case, in the name of acts done in a third case, is not something I particularly like. Nevertheless, whether a third country makes the arrest or the US comes to its senses and no longer has Bush and Cheney in office, there will be an opportunity to have laws that deal appropriately with contractors. A fairly straightforward general principle for the US, not having to reach back into the twisty turny passages of the Treaty of Paris, is to put all US contractors, in an area of military operations, under the UCMJ. In the real world, where covert operations are sometimes ways to avoid much larger killing, the National Clandestine Service may be an exception. Of course, when one plays covert, one presumably is intelligently waiving certain protections and disciplines. 
-- Howard *equal opportunity offense to both extremes* "Those who cannot remember the past are condemned to repeat it" [George Santayana] September 18, 2007 4:42 AM | Reply | Permalink I hate the entire idea of these contractors. They are repulsive to me. War is not a business venture; it's mankind trying to prove it shouldn't exist. The idea of people and companies looking to exploit war is the worst humanity has to offer. I see little difference between them and a dictator using similar brutality to maintain authority. They both use chaos, fear, and violence to achieve an end. I'm sorry but I don't think I can even post on this topic without saying this. In reading your post here (and Blackwater hiring for NO) I was reminded of the private armies of thugs hired by the steel barons to whip their workers back in line. You would think something similar could never happen in this country again. But then I also remembered seeing people forced to stand in a cage blocks away from a political event in order to protest. And then get shot with gas & rubber bullets. Can someone remind me what country we live in again? We seem high on violence & when we can't get our fix on someone else we're more than happy to do it to ourselves... September 18, 2007 5:15 AM | Reply | Permalink I was just watching a Today Show story of a student persistently asking Kerry why he didn’t challenge vote tampering. He’s pulled away and later tasered by the campus police. Anne Curry excuses the police because the student did seem to want attention. Of course, they don’t show the video of Reverend Lennox Yearwood getting his leg broken by security outside of the Petraeus hearing. At the bottom of the BW newsletter is Chaplain’s Corner, an article by Chaplain Staton, VBPD (?). He coldly explains how the victims in N.O. are to blame for their predicament.
I guess it’s kind of a pep talk: September 18, 2007 6:09 AM | Reply | Permalink We are in agreement about contractors in anything that a real soldier would call a combat role, or the equivalent of the private police thugs arrayed against the UAW at the Battle of the Overpass. At the same time, just as I prefer precise use of the word "mercenary", I do believe there are legitimate contractor roles in various support functions. I also want a discussion of how the Total Force concept might be brought back to help enforce reality and Congressional checks & balances. To take one of the least controversial examples I can imagine, think of an Army intelligence document analyst who has reached mandatory retirement -- and has absolute fluency in Arabic, Farsi, and Kurdish, and a great understanding of the area and culture. When she examines a captured document, she's using 30 years of knowledge. We don't want that knowledge lost, and such a person can reasonably be in an intelligence office in Iraq or the US. There might be occasions when she might go out of her office if we need a true interpreter, not a translator. Let me also mention civilian employees of the US government, but in combat zones. In 1990, quite a number of US tanks needed upgrading from M1 to M1A1, which was a factory-level process involving upgrading the main gun, armor, and other systems. Quite a number of civil service workers volunteered to go to a potential combat zone, work 12-18 hour days, and building a field factory to do the conversions. They weren't hired to be at risk, and I respect what they did. Now, realize that tanks aren't completely made by the government. Contractor technicians and engineers also went along, including people who had designed the upgrades and were the best possible experts to work out bugs in the process. Things start getting more confusing when you start talking about large-scale construction. 
On the one hand, you have the Air Force's RED HORSE units (an acronym for something) that can fly in and create a field airstrip in an incredibly short time. When it comes to larger construction projects -- warehouses, roads, etc. -- the Engineer tables of organization do call for augmentation that may be local or US citizens. For a really big construction project, like a large airfield, construction companies like Bechtel may have the best people and equipment to do it. In this, I am not getting into "permanent" or not. Certain aircraft can only land on runways that are heavily built. I would hope, then, that we realize that we don't want to circumvent Total Force or use mercenaries, but there can be legitimate roles for civil servants and for commercial contractors. There are areas in the US where such people might carry a pistol or light rifle for self-protection, but they don't have helicopter gunships to call in. -- Howard *equal opportunity offense to both extremes* "Those who cannot remember the past are condemned to repeat it" [George Santayana] September 18, 2007 7:02 AM | Reply | Permalink I shouldn't be astonished or depressed by this but I am. I went to the Blackwater site and read two things. The first was a story, the header of which read I then read the section which Don Key refers to above: I then went to the New Orleans Police Department Website and read the recruiting section on salary and benefits: In other words, a Blackwater employee can make more than twice what a New Orleans Police Officer makes, for performing duties probably less hazardous. I should mention to be absolutely fair, that a police officer with a Ph. D. could make $4000 per annum more. Big Whoop. Privatization sure does provide a cheaper alternative doesn't it? If we have to look for problems in the public safety apparatus I don't think we need to look further than this. Of course, the same thing obtains between the military and the contractor in Iraq. Want to make big bucks?
Be a private contractor. aMike September 18, 2007 10:19 AM | Reply | Permalink I thought Abrams was ethical. In 1996 I was dispatched to his mansion in Saigon to draw floorplans of the building, to determine if we were paying rent for it that was under the embassy ceiling prices. Clipboard and measuring tape in hand, I was standing in his outer office when the General entered the room. The four or five others in the room, including his aide, were behind me, and I couldn't see them snap to attention. It turns out that I was on sick call in basic training the day they covered military courtesy. What did I know? Abrams just stared at me for a second or two, while I just stood there hiding behind my clipboard, and then he just went about his business as if nothing happened. That's class, in my book. Neoboho September 18, 2007 11:06 AM | Reply | Permalink I can understand your desire to properly use the word mercenary (as dictated within international treaties) and I agree with you. But I disagree with you in terms of most of these contractors, be they in combat roles or not. Your examples are sound and reasonable. And I can fully agree that there are indeed indispensable people that our nation may need to call upon in times where their particular talents and knowledge could be vitally important. But how many of these civilian contractors do we now have in Iraq? And would it even be necessary in the examples you cited if we simply were more prepared? Certainly 30 years of experience is something that is virtually impossible to replace. But if that is the case, doesn't the situation dictate that the system needs to be reevaluated? So one person retires and there's no one to replace them? How about 10 or 100 people? Can no one retire without breaking the system? But there are now literally about as many "civilian contractors" over there as there are troops, maybe more.
Just how many documents are there to be translated, or tanks needing upgrading, that we need this many people over there? On top of that, where are the results? How long are these tasks supposed to take? Is there anyone even watching these people? Where's the water and power? Where's anything even remotely resembling improvement (outside of the posh Green Zone and our Club Med embassy)? The entire thing reeks of profiteering. What's occurred is that, like most things that American capitalism touches, we've found a way to break something in order to continue to make money off of it. And there's no way to slice our new privatized army that even comes close to being efficient, effective or practical. It's simply been turned into a gigantic maze full of trap doors for people to slip in and out of carrying large sacks of taxpayer money. And that's how these people make "the big bucks." Ask us Californians about our nation's energy policy and how "privatized" businesses financially raped us. These same sorts of "businessmen" are doing the same thing here, only in this case a lot of people are dying while they swindle billions. It's disgusting and there can be no forgiving these people. Period. There are about 100,000 government contractors operating in Iraq, not counting subcontractors, a total that is approaching the size of the U.S. military force there, according to the military's first census of the growing population of civilians operating in the battlefield. The survey finding, which includes Americans, Iraqis and third-party nationals hired by companies operating under U.S. government contracts, is significantly higher and wider in scope than the Pentagon's only previous estimate, which said there were 25,000 security contractors in the country. The number has certainly gone up since then (and this is WaPo!).
And (this is unverified) a friend of mine mentioned that part of Petraeus's drawdown plans for some of the troops included an equal number of civilian contractors to take their places. So as of last year the Pentagon (an agency notorious for its inability to accurately count ANYTHING) said there were 25,000 "security contractors" in Iraq. My guess is that the number is probably closer to 40,000 or even 50,000 of these supposed "security" personnel. But who really knows? It all depends on whose matrix you use, I guess. But however you count it, that certainly qualifies as an army in my book. An army being paid money from private entities, from businesses, to perform armed activities in a hostile war zone. And whatever word we end up using to label these people, we need something a little more vile and accurately descriptive than "civilian contractors" or "security contractors". I think that the idea of the Total Force policy is fundamentally sound but seems a little simplistic, and it neglects the element that is the primary problem here, which is the civilian or rather commercial sector. The web that links politician to business exec to general is as shady as it is complex. For example, there are pet "defense" programs (which aid a general's future, a politician's constituents and an exec's bottom line). These things can last for decades and run into astronomically large amounts of money. So these three players have found it necessary to work together in order to keep the gravy train rolling. If anything, they've actually created and implemented their own version of the Total Force policy. September 18, 2007 11:19 AM | Reply | Permalink With the windfall profits Blackwater is earning, Howard, they could afford to buy the birds from Viktor Bout himself, who is probably also supplying KBR with frozen chicken.
BTW, as I recall, the issue of contractors came up early in the Bush Administration with the killing of Veronica Bowers and her infant daughter in the Peruvian Amazon. Under the auspices of Plan Colombia, it was civilian contractors employed as military advisors who identified the Bowers' plane to the Peruvians, who shot it down. I appreciate your views on the term "mercenary", but since GW took office, any meaningful distinction really blurs. For example, the replacement of military advisors (204th MI Battalion, El Paso) with civilian contractors was the Bush Administration's response to the controversies hitting the streets about corrupt local authorities setting up the 204th for "hits" from the ground, as well as warning the druggies on the ground about surveillance missions. See Salon.com - Treachery over the Andes. Neoboho September 18, 2007 12:18 PM | Reply | Permalink Thanks for an extensive response. I'll focus on a few things. You just put your finger on something very key, not only in the military. I am going to mention something for which I don't have sources, but which I heard from several informed people. Supposedly, among the roughly 1000 US citizens in the Embassy, 6 are at native proficiency in Arabic (S5/R5 rating), and about 30 at moderate to professional competency (S3/R3 or better). It would seem as if one of the most important missed investments over the last few decades was language training. For State, so there could be a home for the gay linguists that got bounced out of the military by DADT. Yes, reevaluation is needed. I'm sufficiently familiar with intelligence to know that the promotions flow better for the collection rather than the analytic side, so really good analysts are scarce. For analysts, we do need to look at ways of preserving knowledge. That might look like a master and apprentice pair. In places, computer expert systems might help. Lots of things might help, and I don't think anyone is really trying to improve.
Some of this will necessarily be classified, but I can't imagine at a level that the appropriate Congressional committees can't monitor. I've known a few people in government that had become world-class experts at something, in some cases things that people hadn't realized could be a specialty until an individual demonstrated it. There's a fellow at the National Archives who is a walking index to WWII documents, and I'm sure he is far, far beyond retirement age. Indeed, the problem of preserving knowledge extends into corporate America, where cost-cutting can get rid of the older, higher-salaried experts that can't be billed cost-plus to the government, as is happening in lots of this contracting. As far as security contractors, look at issues in the US, such as privatizing prisons, or security investigation, or tax collection. That might be a really good place to start with policy development, although someone really good in publicity is going to have to break through the prejudicial aspect of "bureaucrat" and compare it with the costs of contracting. I was a contractor in the Labor Department computer center for three years, and many had been there longer as pseudo-employees. They'd go through a farce of "recompeting" the allegedly "task order" resident staff, and all that meant is the middleman would change, the prior middleman would lay us off, and the new one take us on, hopefully without loss of benefits. In like manner, what about private armed guards in the US? Some places have fairly stringent licensing for "special police", but where is the proper line where someone in a job should be responsible, through a civil service line of command, to elected officials? It's hard to explain how a rent-a-cop is OK in the US, where he might shoot US citizens, and then say someone armed, perhaps a little more heavily, that guards a convoy or individuals in a war zone is radically different. I see them as very similar policy or ethical issues. 
Where I have real problems is where "security contractors" are not augmentees, but operate as units, and, for Geneva Convention reasons, have heavier weapons than what are generally accepted as personal arms. I wonder if anyone in Congress recognized that the appropriations they were authorizing to allow contracting were devastating the idea of Total Force? For the immediate term, we can't fix that, but I consider Total Force a basically good idea if you have a Congress with courage. We aren't going to fix the politician-business linkage without hard work and courage. Our current problem, I believe, is rather different than the Military-Industrial Complex about which Eisenhower warned. That Complex was more characteristic of the Cold War. Indeed, we need thoughtful analysis of the part of the enemy that we have met, and, in the immortal words of Pogo, are us. -- Howard *equal opportunity offense to both extremes* "Those who cannot remember the past are condemned to repeat it" [George Santayana] September 18, 2007 12:30 PM | Reply | Permalink In other words, you can raise a family on Blackwater pay, but not police pay. September 18, 2007 2:20 PM | Reply | Permalink I've never really understood the problem with translators. Why aren't our good friends in Mossad and even Saudi Arabia assisting in translation? Sure, there are probably some things you might not want them involved in, but they could probably handle a large bulk of the not-so-top-secret material and leave the double super secret stuff for our own translators. It might even be informative to give both Mossad and the Saudis the same documents to translate just to see if the results matched. September 18, 2007 3:14 PM | Reply | Permalink Your analysis highlights how incompatible the military and police functions are, and the ways an attempt to merge them vitiates both.
The appropriate accountability standards alone are so widely variant that the military model ("kill'em all, let god sort'em out") swallows up the police model (think civilian review boards...) Then, of course, beyond the military model of minimal accountability, we have the BW model--no accountability--which is why privatizing the military is counterproductive to our foreign policy ends. September 18, 2007 3:16 PM | Reply | Permalink Why does not our Princeton PhD Commander order only Arabic spoken at mess and off-duty...How long would it take for the "translation" problem to be obviated? The only reason our entire occupying force is not as fluent as the graduate of any *Berlitz total immersion program is that we (apparently...) do not want them to be. *What is it, two weeks, three weeks? I forget.... September 18, 2007 3:20 PM | Reply | Permalink I'm pretty sure it's three weeks... September 18, 2007 3:21 PM | Reply | Permalink They apparently were. Jan 8 2007 U.” September 18, 2007 3:39 PM | Reply | Permalink In fact, the intelligence community does something much like this, for the translation of broadcasts and public documents. In WWII, the Foreign Broadcast Information Service (FBIS) was created under the Federal Communications Commission, and eventually wound up under CIA, in an administratively reasonable way. FBIS, whose charter was radio broadcasts from which a surprising amount of good intelligence has come, deliberately has listening posts scattered around the world, for several reasons: The military had a translation organization for documents called the Joint Publications Research Service (JPRS), which primarily worked with open publications. It was my sad fate, during Vietnam, to have to read its (accurate) jargon-filled translation of the North Vietnamese party journal, Nhan Dan, which now has an excellent website. Eventually, it was realized that the two organizations could merge. JPRS also used locals in the field, for many of the same reasons.
The two functions were merged under FBIS, which is now part of the National Open Source Intelligence Center. NOSIC also does website scanning, library research, etc. Sometimes the analyst does have access to supersecret material to help understand the full picture, while sometimes the "all-source" analyst gets fed by NOSIC and can request studies without giving a (potentially classified) reason. So, there is a lot of stuff that is already done this way. Being an FBIS interpreter in a comfortable office, with high-grade earphones, however, is a lot different than going out with a combat patrol. In a combat zone, there's a lot more uncertainty about the trustworthiness of interpreters. -- Howard *equal opportunity offense to both extremes* "Those who cannot remember the past are condemned to repeat it" [George Santayana] September 18, 2007 3:40 PM | Reply | Permalink In principle, this is an excellent idea, although you aren't going to get professional fluency. The basic Arabic course at the Defense Language Institute runs 62 weeks of total immersion. That being said, any level of language is useful in the field, and, not infrequently, foreign militaries, in the same mess hall, can be quite happy to help with language. Sometimes there are sensitivities. I speak only a few phrases of Arabic, although I recognize more transliterated ones. Last year, I was in the hospital for a mystery intestinal bleed, and was not getting a lot of attention from the first hospitalist. The second said something that made me realize he was Pakistani, and, as he was leaving and said "I think you will be OK, but we don't know why yet", I responded "Inshallah". Literally, that's "God willing", but it has a lot more cultural context. The doctor looked like he hit a glass wall, spun around, and said "WHAT did you say?" I repeated it. He asked me why I chose to say that, and I muttered something about it being polite. He went out, turned, and said goodbye in Arabic, and I answered correctly.
That more or less exhausted my Arabic unless I was ordering dinner. About an hour later, two nurses came in and asked what I had done to the doctor. They said he came out of your room, grinning from ear to ear, which he rarely did, and told them he wanted me taken care of as if I was family. When a friend's son, a Marine reservist, was activated to go to Iraq, his father asked friends what he should take with him. When I confirmed he had a CD player, I suggested some how-to-speak-Arabic disks, and pointed out it could save his life. Another Army friend did urban patrols in Iraq, and, while in no way fluent, studied some Arabic. He learned that offering cigarettes is socially important, and, while he's a nonsmoker, he started carrying them. His comment was that just a few courteous words, and an offered cigarette, made an enormous difference in reaction to him. While the orders were not to accept food, when he was offered ceremonial tea or coffee, and especially bread, he took it with respect. Within a couple of weeks, Iraqis were whispering IED and arms cache locations to him. He's one of the most professional soldiers that I know, illustrated by knowing that his rifle and armor isn't always the most important thing to have with him. -- Howard *equal opportunity offense to both extremes* "Those who cannot remember the past are condemned to repeat it" [George Santayana] September 18, 2007 3:52 PM | Reply | Permalink Blackwater et al were needed in New Orleans because the National Guard was otherwise occupied. Other big contractors secured lucrative no-bid contracts in both Iraq and N.O. And the taxpayers are really paying many times more than what a soldier or policeman would cost. Some of these contracts in N.O. have paid the companies three or four times more than what they pay the worker. And in both Iraq and N.O., contractors have brought in cheap labor. A no ID requirement was passed quickly for the gulf to allow undocumented workers to be hired. 
Of course, in both places, it would have made sense to try to farm as much of the reconstruction out to residents as possible. Think how much real reconstruction is accomplished by employing those who have been thrown out of work, and the percolating effect of those monies. September 18, 2007 4:06 PM | Reply | Permalink Mea Culpa. Your original post used present tense, so I assumed Blackwater was hiring for New Orleans now, not in 2005. The Blackwater server was messed up when I signed on this morning and the 2005 date was not headlined on the page. aMike September 18, 2007 5:33 PM | Reply | Permalink ~ Now what does that do in respect to private contractors hired as security for State Department personnel and embassy protection and related purposes? Or those with the CIA, or civilian firms contracted by the Pentagon? ~OGD~ September 18, 2007 6:03 PM | Reply | Permalink ~ Who says they aren't? We'd be the last to know. ~OGD~ September 18, 2007 6:16 PM | Reply | Permalink re: "I suggested some how-to-speak-Arabic disks, and pointed out it could save his life." It does seem like such a no-brainer, does it not? That said, I speculate that the failure of a command structure to order (after all, they have the power so to do...) the speaking of Arabic (by soldiers who, let us remember, have been here for a year before and, god help us, will probably be back here again) bespeaks a positive aversion to having our (warning, HYPERBOLE ALERT) "stormtroopers" "go native". September 18, 2007 7:51 PM | Reply | Permalink No, it's my culpa, not yours. I phrased it badly and added a parenthetical note in the original that wasn't clear. My appy polly loggy... September 18, 2007 7:52 PM | Reply | Permalink de jure--these private contractors are subject to the military justice system. de facto--nothing, because (1) you don't want to piss off the people who keep you alive or they might not and (2) military justice is to justice as . . . you know.
September 18, 2007 8:08 PM | Reply | Permalink Considering Bush's very loose definition of "enemy combatant", could these mercenaries not be charged and tried either in the US, the Hague, or elsewhere? If not under Bush's rule, then what is international law's stand on mercenaries? The immediate start of such legal processes against the companies, the individual men, and the people who hire them might have a sufficient damping effect (money drain) on these mercenaries to make it difficult for them to profit from their occupation armies. Wouldn't it be fun if Duhbya, as the Commander In Cheep of these mercenary forces, were found guilty under international law of being a mercenary? September 19, 2007 5:13 AM | Reply | Permalink Much as I dislike GWB, and much as I dislike inappropriate and probably illegal acts of security contractors, I believe that using "mercenary" as an epithet, and calling on the ICC which has no real jurisdiction here, is probably a waste of effort. What is much more likely to do something is to get better analysis of what happened -- such as how Blackwater got a gunship -- and find evidence supporting UCMJ violations. It's very likely, for example, that someone in uniform, or a supply contractor reporting to military and thus under the UCMJ, made a military helicopter available. If there's solid evidence of UCMJ violations, get them into the chain of command while making media aware. In parallel, there may be at least symbolic legislation introduced if only strictly military contractors, not State Department contractors in a war zone, are under the UCMJ. -- Howard *equal opportunity offense to both extremes* "Those who cannot remember the past are condemned to repeat it" [George Santayana] September 19, 2007 7:36 AM | Reply | Permalink "I wonder if the Iraqi people realize that their nation is no more than a satellite of the US." In light of all this, I would say "colony" might be a more appropriate term.
dc September 19, 2007 9:25 AM | Reply | Permalink 3 words: 'end the war'...spend the 1/2 trillion per year 'defense' budget on something more gainful, like green-tech or maybe even a national rail system. Reasoning? If we keep doing the same old thing the same old way, we're going to keep getting the same old result. It wasn't really that long ago that our national transportation infrastructure consisted of sailing ships, coal-fired locomotives, and horses. Now it's ALL highway traffic and we import the stuff that we use to fuel the cars with. Maybe, the reality is that globalization is basically a really crappy idea...? September 19, 2007 6:23 PM | Reply | Permalink Do the Dems have the guts to get behind anything of substance on Iraq -NO! September 19, 2007 6:40 PM | Reply | Permalink Blackwater's relationship to Iraqis may have a deeper and darker history than just this latest storm. Wasn't it Blackwater employees who were killed, burnt and hung from a bridge in Fallujah? At the time of that Fallujah incident, there appeared a couple of sentences in an Iraqi woman's blog [Raed's mom's blog]. She was reporting her own great horror that Iraqis would ever do such a horrific thing, and then went on to write, briefly, that she shared her feelings over a clothesline with a neighbor woman who happened to work for a health agency in Iraq. The neighbor told her, 'You don't understand.... if only you could know what was done by these [Americans] to our imprisoned women." I am paraphrasing this from memory. Some time later, when the Abu Ghraib story came out, I went back to that blog, scrolled extensively and found again and re-read those couple of sentences. If Blackwater also had contractors working in Abu Ghraib as did another private company, or if the Iraqis believed that any and all private contractors were given a role in the abuses of prisoners, it sheds a very different light on the Fallujah story. 
We, the American public, will probably never know the extent to which Iraqis may hate the hired 'contractors' whom they believe are operating outside the laws, and against whom they believe they can have no recourse. I think this may be especially true for Iraqis who need or want the protection of American troops, but still need to have someone to hate for what they have suffered since the invasion. September 19, 2007 8:40 PM | Reply | Permalink
http://tpmcafe.talkingpointsmemo.com/2007/09/17/the_iraq_blackwater_test/
Ext.onReady(function() { Ext.Msg.prompt("Enter your name", "Your name"); }); Ext.Msg.prompt('Enter your name', 'Your name', function(btn, txt) { if (btn == "ok" && txt.length > 0) { Ext.Msg.alert('Result', 'You have set the name to: ' + txt); } }, this, true, 'Lukasz Kurylo'); Ext.Msg.confirm("Confirmation", "Do you want to create a new account?", function(btn) if (btn == 'yes') Ext.Msg.alert("Result", "New account created successfully"); } else Ext.Msg.alert("Result", "Abort"); }, this); var i = 0; Ext.Msg.progress("Progress example", "some message", "progress text"); var x = window.setInterval(function() i = i + 0.01; Ext.Msg.updateProgress(i); if (i >= 1) window.clearInterval(x); Ext.Msg.hide(); }, 50); Ext.Msg.show({ title: 'Gender', msg: 'Exter your gender', width: 300, buttons: Ext.MessageBox.OKCANCEL, multiline: false, icon: Ext.MessageBox.INFO, prompt: true, fn: function(btn, txt) if (btn == 'ok') { //do sth } }); Ext.select('.button').on('click', function() var wait = Ext.Msg.wait('Installation..', 'Setup'); var sec = 2; setTimeout(actionFunc, 1000); function actionFunc() if (sec < 10) { wait.updateText('progress: ' + (++sec)*10 + '%'); setTimeout(actionFunc, 1000); } else wait.hide(); <input type="button" value="Click Me" class="button"> Ext JS is a very powerful javascript UI library which allows you to create a rich internet applications. Ext JS is very easy in use, in learn and has very intuitive API. Supports all major web browsers (IE, Opera, Firefox, Safari). Starting with this post I would like to initiate a series of articles explaining the nuances of how to use a basics Ext aspects in web developments with ASP.NET MVC and how to create a great-looking user interfaces in web apps. Links to all related articles to this tutorial I will be posting below - Integration ExtJS and ASP.NET MVC - Interaction with user. Dialogs. And remember. Here is very good documentation to Ext JS library created by their authors. If you don’t understand something (e.g. 
one of the config options) that appears in one of the tutorials, the first place you should go is the link above. For more powerful examples showing how to use the Ext JS library in various situations, look at the official Samples & Demos section. The official forum is located here. I also assume that you know what a class, method, event and property is (since this tutorial is intended for .NET programmers). Creating an RSS or Atom feed is very simple in ASP.NET MVC. In v1.0 there isn't a built-in mechanism to work with feeds; however, we can very quickly build our own. All we have to do is create an xml structure in accordance with the specifications, plus a new ActionResult-derived class to handle the result. But let's start from the beginning. First of all, we need to create a new MVC project and call it e.g. RssFeed. The data for the feeds will be taken from a database, so we have to build a model. Let's create a new database called SampleDB with one table, Article, as shown below, and add some records for testing. The next step is to create a DataContext class. Select the Model catalog in Solution Explorer, add a new item of type LINQ to SQL Classes named SampleDB.dbml, and drag the Article table onto it from Server Explorer. The data taken from the database needs to be held somewhere, so we create two classes: one for the channel data, like title or description, and a second one for the information describing our articles:

public class Feed {
    // channel information
    public string Title { set; get; }
    public string Description { set; get; }
    public string Link { set; get; }
    public string Author { set; get; }
    public List<FeedItem> Items { set; get; }
    public DateTime Updated { set; get; }
}

public class FeedItem {
    // feed item information (Title and Description are assigned in the controller below)
    public string Title { set; get; }
    public string Description { set; get; }
    public DateTime PublicDate { set; get; }
    public Guid Id { set; get; }
}

We will fill these classes in the controller, so add a new controller called FeedController.
In this class we create the action methods that handle the RSS and/or Atom feeds, passing the data to the ActionResult constructors.

public class FeedController : Controller {
    SampleDBDataContext db = new SampleDBDataContext();

    [NonAction]
    private Feed GetData() {
        Feed rss = new Feed() {
            Description = "Site description",
            Link = "link",
            Title = "Simple rss/atom feed",
            Author = "Author name",
            Updated = DateTime.Now
        };
        // get articles
        var articles = from a in db.Articles
                       select new FeedItem {
                           PublicDate = a.PublicDate,
                           Title = a.Title,
                           Description = a.Content,
                           Id = a.ArticleId
                       };
        rss.Items = articles.ToList();
        return rss;
    }

    public ActionResult RSS() {
        Feed rss = GetData();
        return new RssResult(rss);
    }

    public ActionResult Atom() {
        Feed atom = GetData();
        return new AtomResult(atom);
    }
}

At the end we need to create new ActionResult-derived classes, one for Atom and one for RSS. In these classes we will use LINQ to XML to build the xml file structure from the data provided in the constructor, and send the result to the user in the ExecuteResult method.
RSS feed

public class RssResult : ActionResult {
    public Feed RssFeeds { set; get; }

    public RssResult(Feed f) {
        RssFeeds = f;
    }

    private XDocument CreateXmlDoc() {
        // create the rss xml structure
        XDocument doc = new XDocument(
            new XDeclaration("1.0", "UTF-8", ""),
            new XElement("rss", new XAttribute("version", "2.0"),
                new XElement("channel",
                    new XElement("title", RssFeeds.Title),
                    new XElement("link", RssFeeds.Link),
                    new XElement("description", RssFeeds.Description)
                )));

        foreach (FeedItem item in RssFeeds.Items) {
            XElement i = new XElement("item",
                new XElement("title", HttpUtility.HtmlEncode(item.Title)),
                new XElement("link", "" + item.Id),
                new XElement("pubDate", item.PublicDate),
                new XElement("description", HttpUtility.HtmlEncode(item.Description)));
            doc.Element("rss")
               .Element("channel")
               .Add(i);
        }
        return doc;
    }

    public override void ExecuteResult(ControllerContext context) {
        context.HttpContext.Response.ContentType = "application/rss+xml";
        using (System.Xml.XmlWriter writer =
               System.Xml.XmlWriter.Create(context.HttpContext.Response.Output))
            CreateXmlDoc().Save(writer);
    }
}

Atom feed

public class AtomResult : ActionResult {
    public Feed AtomFeeds { set; get; }

    public AtomResult(Feed f) {
        AtomFeeds = f;
    }

    XNamespace ns = "";

    private XDocument CreateXmlDoc() {
        XDocument doc = new XDocument(
            new XDeclaration("1.0", "UTF-8", ""),
            new XElement(ns + "feed",
                new XElement(ns + "title", AtomFeeds.Title),
                new XElement(ns + "link",
                    new XAttribute("href", ""),
                    new XAttribute("rel", "self")),
                new XElement(ns + "updated",
                    // the updated element must be in
                    // year-month-dayThour:minutes:secondsTimeZone format
                    AtomFeeds.Updated.ToString("yyyy-MM-dd\\THH:mm:ss%K")),
                new XElement(ns + "author",
                    new XElement(ns + "name", AtomFeeds.Author)),
                // id must be constant and unique for this channel;
                // if a uri address is used, it doesn't have to be real
                new XElement(ns + "id", "")
            ));

        foreach (FeedItem item in AtomFeeds.Items) {
            XElement i = new XElement(ns + "entry",
                new XElement(ns + "title", item.Title),
                new XElement(ns + "link", new XAttribute("href", "/Article/" + item.Id)),
                // id must be constant and unique, otherwise each update
                // by feed readers will duplicate all entries
                new XElement(ns + "id", item.Id),
                new XElement(ns + "updated", item.PublicDate.ToString("yyyy-MM-dd\\THH:mm:ss%K")),
                new XElement(ns + "summary", item.Description.Length > 255
                    ? item.Description.Substring(0, 254).Insert(254, "...")
                    : item.Description));
            doc.Element(ns + "feed").Add(i);
        }
        return doc;
    }

    public override void ExecuteResult(ControllerContext context) {
        context.HttpContext.Response.ContentType = "application/atom+xml";
        using (System.Xml.XmlWriter writer =
               System.Xml.XmlWriter.Create(context.HttpContext.Response.Output))
            CreateXmlDoc().Save(writer);
    }
}

At the very end we can add link(s) for the channel(s) in the master page. Thanks to them, in the address bar of e.g. Firefox we will get a small icon that lets us subscribe to the channel. Link for RSS: <link href="<%= Url.Content("/Feed/RSS") %>" type="application/rss+xml" rel="alternate" title="RSS Feed" /> Link for Atom: <link href="<%= Url.Content("/Feed/Atom") %>" type="application/atom+xml" rel="alternate" title="Atom Feed" /> Full source code for this example can be found here (click the blue button called "Pobierz plik"). Implementing new ActionResult classes is only one way to build this. Another is to use the standard View engine: it is possible to derive a new class from ViewPage that is strongly typed with the model. Besides generating the xml structure manually in our ActionResult classes, there is also the possibility of using the classes built into the .NET Framework: Atom10FeedFormatter and Rss20FeedFormatter. This saves us from having to know the structure of the xml files for each channel. Examples of that are available here: - Guy Burstein's blog post (strongly-typed ViewPage) - Rss20FeedFormatter class - Atom10FeedFormatter class The RSS/Atom document structures are described here
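Outside .NET, the same channel/item shape can be sketched with nothing but a standard library XML builder. The following is an illustrative sketch only, not part of the original tutorial: the function name build_rss, the example URLs, and the sample item are all made up here, but the fields mirror the Feed/FeedItem classes above, and pubDate uses the RFC 822 date style that RSS 2.0 expects.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def build_rss(title, link, description, items):
    """Build a minimal RSS 2.0 document as a string.

    items is a list of dicts with 'title', 'link', 'pubDate',
    'description' keys, mirroring the FeedItem fields above."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    # required channel elements: title, link, description
    for tag, text in (("title", title), ("link", link), ("description", description)):
        ET.SubElement(channel, tag).text = text
    for it in items:
        item = ET.SubElement(channel, "item")
        for tag in ("title", "link", "pubDate", "description"):
            ET.SubElement(item, tag).text = it[tag]
    return ET.tostring(rss, encoding="unicode")

xml = build_rss(
    "Simple rss/atom feed", "http://example.com", "Site description",
    [{"title": "First post",
      "link": "http://example.com/Article/1",
      "pubDate": datetime(2009, 8, 1, tzinfo=timezone.utc)
                 .strftime("%a, %d %b %Y %H:%M:%S %z"),
      "description": "Hello"}],
)
print(xml)
```

As in the C# version, the serializer handles escaping of the text nodes, so there is no need for a manual HtmlEncode step.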
http://geekswithblogs.net/lszk/archive/2009/08.aspx
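As a cross-check of the structure above, here is a minimal, framework-free Python sketch that builds the same RSS 2.0 skeleton as RssResult's CreateXmlDoc(), using only the standard library. The feed values are invented for illustration:

```python
import xml.etree.ElementTree as ET

def build_rss(title, link, description, items):
    # <rss version="2.0"><channel>...</channel></rss>, same shape as CreateXmlDoc()
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = title
    ET.SubElement(channel, "link").text = link
    ET.SubElement(channel, "description").text = description
    for it in items:
        entry = ET.SubElement(channel, "item")
        # ElementTree escapes text automatically, playing the role
        # of HttpUtility.HtmlEncode in the C# version
        ET.SubElement(entry, "title").text = it["title"]
        ET.SubElement(entry, "link").text = it["link"]
        ET.SubElement(entry, "pubDate").text = it["pubDate"]
        ET.SubElement(entry, "description").text = it["description"]
    return ET.tostring(rss, encoding="unicode")
```

The same pattern with a `feed`/`entry`/`updated` vocabulary (and a namespace) yields the Atom variant.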
DODELETE.C

#include "mail.h"

/*
 * Delete the current record specified by the mail structure.
 * This function is called by mail_delete() and mail_store(),
 * after the record has been located by _mail_find().
 */
int
_mail_dodelete(MAIL *mail)
{
    int    i;
    char  *ptr;
    off_t  freeptr, saveptr;

    /* Set data buffer to all blanks */
    for (ptr = mail->datbuf, i = 0; i < mail->datlen - 1; i++)
        *ptr++ = ' ';
    *ptr = 0;   /* null terminate for _mail_writedat() */

    /* Set key to blanks */
    ptr = mail->idxbuf;
    while (*ptr)
        *ptr++ = ' ';

    /* We have to lock the free list */
    if (writew_lock(mail->idxfd, FREE_OFF, SEEK_SET, 1) < 0)
        err_dump("writew_lock error");

    /* Write the data record with all blanks */
    _mail_writedat(mail, mail->datbuf, mail->datoff, SEEK_SET);

    /* Read the free list pointer.  Its value becomes the chain ptr
       field of the deleted index record.  This means the deleted
       record becomes the head of the free list. */
    freeptr = _mail_readptr(mail, FREE_OFF);

    /* Save the contents of the index record chain ptr, before
       it's rewritten by _mail_writeidx(). */
    saveptr = mail->ptrval;

    /* Rewrite the index record.  This also rewrites the length of
       the index record, the data offset, and the data length, none
       of which has changed, but that's OK. */
    _mail_writeidx(mail, NULL, mail->idxoff, SEEK_SET, freeptr);

    /* Write the new free list pointer */
    _mail_writeptr(mail, FREE_OFF, mail->idxoff);

    /* Rewrite the chain ptr that pointed to this record being
       deleted.  Recall that _mail_find() sets mail->ptroff to point
       to this chain ptr.  We set this chain ptr to the contents of
       the deleted record's chain ptr, saveptr, which can be either
       zero or nonzero. */
    _mail_writeptr(mail, mail->ptroff, saveptr);

    if (un_lock(mail->idxfd, FREE_OFF, SEEK_SET, 1) < 0)
        err_dump("un_lock error");

    return (0);
}
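The pointer shuffling above is easier to see stripped of the file I/O and locking. Here is a toy in-memory Python model of the same free-list insertion; the dict of records and the function name are purely illustrative, not part of the original code:

```python
# records: offset -> {"key": ..., "next": offset or -1}; -1 ends a chain.
# free_head plays the role of the pointer stored at FREE_OFF.

def dodelete(records, free_head, off, prev_off):
    """Delete records[off]; prev_off holds the chain ptr that points
    at it (what _mail_find() leaves in mail->ptroff)."""
    saveptr = records[off]["next"]       # saved like mail->ptrval
    records[off]["key"] = None           # blank out the key/data
    records[off]["next"] = free_head     # deleted record now heads the free list
    records[prev_off]["next"] = saveptr  # unlink it from the hash chain
    return off                           # new free-list head (the FREE_OFF value)
```

The two writes that matter are the last two: the deleted record points at the old free list, and its predecessor in the hash chain now skips over it.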
http://read.pudn.com/downloads12/sourcecode/unix_linux/50527/DTMS/DODELETE.C__.htm
This tutorial will teach you how to use the Repl.it Auth API. Prerequisites You are required to know the following before you start: - Basic knowledge of Python/Flask - Basic knowledge of Jinja2 (Flask templating) - Basic knowledge of HTML Starting off We'll start off with a basic Flask template (main.py) from flask import Flask, render_template, request app = Flask('app') @app.route('/') def hello_world(): return render_template('index.html') app.run(host='0.0.0.0', port=8080) (/templates/index.html) <!doctype html> <html> <head> <title>Repl Auth</title> </head> <body> Hello! </body> </html> Nothing interesting yet. The authentication script Now, we'll add the authentication script. <div> <script authed="location.reload()" src=""></script> </div> This can be placed anywhere in the document body and will create an iframe in its parent element. Additionally, any JavaScript placed in the authed attribute will be executed when the person finishes authenticating, so the current one will just reload when the user authenticates. If you run it now, you will notice a big Let (your site url) know who you are? with a small version of your profile and an Authorize button. You can click the button but nothing will happen. The headers Now, let's make something happen. Go back to your main.py file; we will be grabbing the Repl.it specific headers for the request and extracting data from them. The main ones we care about are: X-Replit-User-Id, X-Replit-User-Name, and X-Replit-User-Roles. The username one will probably be the most useful for now. With this information, we can let our HTML template be aware of them. (main.py) @app.route('/') def hello_world(): return render_template( 'index.html', user_id=request.headers['X-Replit-User-Id'], user_name=request.headers['X-Replit-User-Name'], user_roles=request.headers['X-Replit-User-Roles'] ) (templates/index.html) <body> {% if user_id %} <h1>Hello, {{ user_name }}!</h1> <p>Your user id is {{ user_id }}.</p> {% else %} Hello! 
Please log in.
<div>
  <script authed="location.reload()" src=""></script>
</div>
{% endif %}
</body>

Success! Now, run your code. It should display a big Hello, (your username)! along with your user ID.

If you want to port this to other languages or frameworks like NodeJS + Express, just be aware of how you can get specific request headers.

Warning
Also, be aware that if you're going to be using an accounts system, PLEASE do all the specific logic for checking users on the BACKEND, that means NOT doing it with JavaScript in your HTML.

That is all. Please upvote my post if you found it helpful :) If you want it, here is the source code for the basic Repl Auth script demonstrated in this tutorial.

Thank you so much!
Did it on express!
Yeah, JavaScript logic for checking users is asking for some attacker to come and bypass authentication.
@mylesbartlett there's no way to circumvent it as repl sends the authentication headers to the server and there's no way to forge them. Content security policies prevent most iframe attacks you can think of, so I would say it's pretty secure.
@mat1 Would getting the user's profile picture be

def hello_world():
    return render_template(
        'index.html',
        user_id=request.headers["X-Replit-User-Id"],
        user_name=request.headers["X-Replit-User-Name"],
        user_roles=request.headers["X-Replit-User-Roles"],
        user_profile_pic=request.headers["X-Replit-User-Picture"]
    )

or something like that...?
How would I make this backend? I need it for this project I'm making:
@JamesGordon1 you could probably make a request to repl.it/login but since you don't host a site with flask or something similar I'm not sure how you'd set it up.
If you want to use repl.it authentication with PHP:
You shouldn't be able to. It's just like using google auth api. The only thing you can change is the auth key.
@MrEconomical well yeah, but the prompt that shows up to log in I don't think you can change
@AdCharity @Coder100 You can make templates in NodeJS with EJS.
Here's a tutorial on how you can do that: Ok, thank you! I have succeeded: @TaylorLiang First person makes something cool with this will get a MAJOR shoutout on the next newsletter. @mat time for a real life example!? How about creating and storing data with the repl.it accounts? @enigma_dev @amasad I used to constantly use this bookmarklet script that would allow me to get to a repl quickly, now I made it into a site that anyone can use with repl auth: @amasad repl mail, repl chat ;) @21natzil can you please showcase repl auth and repl mail/chat in the next newsletter? @MrEconomical @amasad sure thing! @amasad thanks for featuring my project! it really means a lot to me!
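One practical note on the header-reading pattern from the tutorial: indexing `request.headers['X-Replit-User-Id']` can raise a `KeyError` when a header is absent (e.g. when testing outside the Repl.it proxy), so `.get()` with a default is safer. A small framework-agnostic sketch — the helper name is made up, and a plain dict stands in for Flask's `request.headers`:

```python
def repl_user(headers):
    """Extract Repl.it auth info, tolerating missing headers."""
    return {
        "id": headers.get("X-Replit-User-Id", ""),
        "name": headers.get("X-Replit-User-Name", ""),
        "roles": headers.get("X-Replit-User-Roles", ""),
    }
```

The empty-string defaults keep the Jinja `{% if user_id %}` check working unchanged for logged-out visitors.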
https://repl.it/talk/learn/Authenticating-users-with-Replit-Auth/23460
This is your resource to discuss support topics with your peers, and learn from each other.

02-16-2011 09:41 AM
Well, at least, it works on the Home stage - it's invisible on the New stage. Hmmm...

02-16-2011 12:19 PM
hey johnp, glad you got it working as a proof of concept. at least you know it works now! so for it only showing up in the Home stage, are you using view components or are you using states?

02-16-2011 12:27 PM
I'm using states except for top swipe which is a component. I do have it set to includeIn="New" - which it doesn't display in, but when I set it to Home it does. I started reading up but got distracted with some graphical elements I'm trying to finish up, sorry I don't have any other updates for you.

02-16-2011 01:18 PM
hey johnp, dont sweat, i get distracted and digress all the time haha. question, are you running your project as a regular flex project or a mobile project? dont know if that's relevant but thought id ask. anyways my theory, as i think comantis has stated, is that something might be covering up your toggle switch. so try to set it with weird x and y values (the container) and see if that helps it show up. place it somewhere you know nothing else will be and see if it shows up. below is a sample code i wrote up showing that it works using states.
when you toggle, it switches the states from "home" to "new" but keeps the toggle visible in both views:

FlexTestHome.mxml:

<?xml version="1.0" encoding="utf-8"?>
<s:View xmlns:fx="http://ns.adobe.com/mxml/2009"
        xmlns:s="library://ns.adobe.com/flex/spark"
        xmlns:mx="library://ns.adobe.com/flex/mx"
        creationComplete="init()">
    <fx:Script>
        <![CDATA[
            import qnx.ui.buttons.ToggleSwitch;

            private var toggleSwitch:ToggleSwitch;

            private function init():void
            {
                toggleSwitch = new ToggleSwitch();
                toggleSwitch.setSize(300,60);
                toggleSwitch.selectedLabel = "New";
                toggleSwitch.defaultLabel = "Home";
                toggleSwitch.addEventListener(Event.SELECT, onSelect);
                toggleSwitchContainer.addChild(toggleSwitch);
                trace("toggleSwitchContainer.numChildren: " + toggleSwitchContainer.numChildren);
            }

            private function onSelect(e:Event):void
            {
                if (e.target.selected)
                {
                    currentState = ToggleSwitch(e.currentTarget).selectedLabel;
                }
                else
                {
                    currentState = ToggleSwitch(e.currentTarget).defaultLabel;
                }
                this.title = currentState;
            }
        ]]>
    </fx:Script>
    <fx:Declarations>
        <!-- Place non-visual elements (e.g., services, value objects) here -->
    </fx:Declarations>
    <s:states>
        <s:State name="Home"/>
        <s:State name="New"/>
    </s:states>
    <mx:UIComponent id="toggleSwitchContainer"/>
</s:View>

02-16-2011 01:32 PM
I've tried different background, no background, random locations on top of other elements, off on its own. I've tried colored background, I've tried it in all the states (only shows in Home). This might be a hint to you: It displays fine in BOTH states of Home/New if I includeIn="Home, New" Is your brain tickled yet? Mine's pickled!

02-16-2011 01:37 PM
wow that really is strange lol. my next theory is: are the states properly being switched? like is the "New" state active or is another state active. is there something else that's also just in the "New" state along with the mx.UIComponent? the icons you mentioned probably are but i'm just throwing up ideas at this point haha. ah also try to create a new flex mobile project and try to run the sample app. maybe it's your project that is causing the errors. that way we can weed more stuff out. good luck!
02-16-2011 01:54 PM
I feel it has something to do with my init function. If I set currentState="New" in my Application it loads the slider just fine within the New state. However, setting the UIComponent to includeIn="Home" and not New while currentState is set to New, it does not render in Home. To change states I'm using a "roundabout" way to do it - I have a component to handle my swipe down nav, and I'm using FlexGlobals.topLevelApplication.currentState to change states as I couldn't get my custom event listeners to work properly, so I gave up and went with that - which seems to work not only for changing state, but I'm also able to call the functions from topLevelApplication.function()

02-16-2011 02:04 PM
that's weird. i tried running the code i posted in the previous post with the state switching via the toggle and then replaced the currentState with FlexGlobals.topLevelApplication.currentState and got the following error when i tried to toggle:

ArgumentError: Undefined state 'New'.

how are you setting the states in your app?

02-16-2011 02:06 PM
Button:
<s:Button

goHome() function:
FlexGlobals.topLevelApplication.currentState = "Home";

02-16-2011 02:18 PM
Messy way to get it to work also:
includeIn="Home, New" visible.Home="false"
Shows up fine in New. I'll rebuild everything tonight and see if I yield any different results. My init function just doesn't want that to load unless it is also loading in the state the app starts in.
https://supportforums.blackberry.com/t5/Adobe-AIR-Development/ToggleSwitch-in-Flex/td-p/797017/page/3
How To Add Custom Headers To Google Cloud Endpoints

If you want to add custom response headers to GCE (Google Cloud Endpoints), you're going to have to do some monkey patching.

import endpoints.util as util

# Note: If someone imports send_wsgi_response before here, the function will NOT be decorated
# and the original function will be used until this bit runs
def add_headers(wsgi_func):
    def wrapper(status, headers, content, start_response, cors_handler=None):
        headers.append(('Some-Header', 'some-value'))
        return wsgi_func(status, headers, content, start_response, cors_handler)
    return wrapper

util.send_wsgi_response = add_headers(util.send_wsgi_response)

I put this bit in my __init__.py that takes care of loading up the service, since I want to make sure this runs first. This is a specific example for my needs, so customize as you see fit.
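The wrap-and-reassign trick above works for any module-level function. Here is a self-contained toy version of the same pattern (no GCE involved — the stand-in function and header name are invented), with `functools.wraps` added so the patched function keeps its original name and docstring:

```python
import functools

def send_wsgi_response(status, headers, content):
    """Stand-in for the real endpoints.util.send_wsgi_response."""
    return status, headers, content

def add_headers(wsgi_func):
    @functools.wraps(wsgi_func)  # preserve __name__/__doc__ of the wrapped function
    def wrapper(status, headers, content):
        headers.append(("Some-Header", "some-value"))
        return wsgi_func(status, headers, content)
    return wrapper

# Rebind the name, exactly like util.send_wsgi_response = add_headers(...)
send_wsgi_response = add_headers(send_wsgi_response)
```

As the post's comment warns, anyone who grabbed a reference to the original function before the rebinding still calls the unpatched version — the patch replaces the *name*, not the function object.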
http://www.pygopar.com/how-to-add-custom-headers-to-google-cloud-endpoints
Today at work, I came across a very simple requirement: "Insert Multiple Rows in a SharePoint list using SharePoint 2007 Services". Although this is a very common problem and you can find plenty of information by searching on the same topic, for the benefit of my readers I decided to go ahead and blog the solution to it.

For the purpose of this post, I have created a SharePoint list called TestInsertion as below. We will insert multiple records from a console application.

Create a console application and add a DTO [Data Transfer Object] class to represent the list. If you look at the class below, its property names are the same as the columns of the target list TestInsertion.

public class Items
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string EmailAddress { get; set; }
    public string Region { get; set; }
}

Now go ahead and define two global variables in Program.cs. Make sure the list TestInsertion resides in Yoursubsite [See the Site URL].

Before we start writing code to insert multiple rows, we need to add the Web Service to the console program. To do that, right click on the console application and select Add Service Reference. Click on the Advanced button and select Add Web Reference. In URL, give the URL of the SharePoint Service.

Assume you have a function returning List<Items> to insert, as below. The static function GetItemsToInsert() below returns the List<Items> to insert.
public static List<Items> GetItemsToInsert()
{
    List<Items> lstItems = new List<Items>
    {
        new Items { FirstName = "John",      LastName = "Papa",  EmailAddress = "John.papa@blahblah.com",      Region = "USA" },
        new Items { FirstName = "Scott",     LastName = "Gui",   EmailAddress = "Scott.Gui@blahblah.com",      Region = "USA" },
        new Items { FirstName = "Dhananjay", LastName = "Kumar", EmailAddress = "Dhananjay.kumar@blahblah.com", Region = "India" },
        new Items { FirstName = "Pinal",     LastName = "dave",  EmailAddress = "Pinal.dave@blahblah.com",     Region = "India" },
        new Items { FirstName = "Victor",    LastName = "Gui",   EmailAddress = "Victor.Gui@Blahblah.com",     Region = "USA" },
        new Items { FirstName = "Sahil",     LastName = "Malik", EmailAddress = "sahil.Malik@blahblah.com",    Region = "USA" },
    };
    return lstItems;
}

To insert records, you first need to create a proxy of the list web service, as below. Pass default credentials to access the SharePoint Service. After creating the proxy, we need the GUID of the list and the default view of the list. We can get both with the code below.

To insert records we need to create an XML document and send it to the SharePoint list service. When setting the attribute for View, we pass the Guid of the list default view fetched previously. Now the XML document is created and we need to create the XML body representing the data to be inserted.

In the above code snippet, the points to note are:
- We are making a call to the GetItemsToInsert() function.
- We convert the List<Items> to an array.
- We iterate through each element of the array and create Method elements.
- Since we need to insert (add) records, the cmd value is New.
- The key point to understand here is the Method element. If we need to insert 5 records, we need to create 5 Method elements. So in the code above we create a Method inside the array loop, such that the number of Method elements matches the array count.
- We need to make sure that the field names are the same as the internal names of the SharePoint list columns.
As of now, we have created the data to be inserted in the list. To do the actual insertion, assign the created data as the inner XML of the XML document and call the UpdateListItems() method of the SharePoint Service.

For your reference, the full source code is given below. Feel free to use it and modify it for your requirements.

Program.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Xml;
using System.Data;

namespace InsertingIteminSharePointListRemotely
{
    class Program
    {
        static string" + result[i].FirstName + "</Field>";
                dataToInsert = dataToInsert + "<Field Name=\"LastName\">" + result[i].LastName + "</Field>";
                dataToInsert = dataToInsert + "<Field Name=\"EmailAddress\">" + result[i].EmailAddress + "</Field>";
                dataToInsert = dataToInsert + "<Field Name=\"Region\">" + result[i].Region + "</Field>";
                dataToInsert = dataToInsert + "</Method>";
            }
            #endregion

            #region Inserting Record
            docToElemnt.InnerXml = dataToInsert;
            try
            {
                listService.UpdateListItems(strListID, docToElemnt);
                Console.WriteLine("Item Inserted");
            }
            catch (Exception ex)
            {
                Console.WriteLine(ex.StackTrace + ex.Message);
            }
            #endregion

            Console.ReadKey(true);
        }
    }
}

I hope this quick post was useful to you. Follow @debugmode_
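The Method/Field payload that the post assembles through string concatenation can also be built with an XML library, which avoids escaping mistakes. Here is a hypothetical Python sketch of the same batch structure — the element and attribute names follow the post plus the standard Batch wrapper that UpdateListItems expects; the builder function itself is invented for illustration:

```python
import xml.etree.ElementTree as ET

def build_batch(items):
    """Build a <Batch> of Cmd="New" <Method> elements, one per record."""
    batch = ET.Element("Batch", OnError="Continue")
    for i, fields in enumerate(items, start=1):
        method = ET.SubElement(batch, "Method", ID=str(i), Cmd="New")
        for name, value in fields.items():
            # Field Name must match the internal column name, as noted above
            f = ET.SubElement(method, "Field", Name=name)
            f.text = value
    return batch
```

Five records produce five Method elements, mirroring the loop in the C# listing; text values are escaped automatically.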
http://debugmode.net/category/sharepoint/
# Tarantool Data Grid: Architecture and Features ![](https://habrastorage.org/r/w780q1/webt/v7/6-/di/v76-dikmo5ifekbsg35xutol68s.jpeg) In 2017, we won the competition for the development of the transaction core for Alfa-Bank's investment business and started working at once. (Vladimir Drynkin, Development Team Lead for Alfa-Bank's Investment Business Transaction Core, [spoke](https://www.youtube.com/watch?v=o9XuVXTotHU) about the investment business core at HighLoad++ 2018.) This system was supposed to aggregate transaction data in different formats from various sources, unify the data, save it, and provide access to it. In the process of development, the system evolved and extended its functions. At some point, we realized that we created something much more than just application software designed for a well-defined scope of tasks: we created a system for building distributed applications with persistent storage. Our experience served as a basis for the new product, [Tarantool Data Grid](https://www.tarantool.io/en/datagrid/) (TDG). I want to talk about TDG architecture and the solutions that we worked out during the development. I will introduce the basic functions and show how our product could become the basis for building turnkey solutions. In terms of architecture, we divided the system into separate *roles*. Every one of them is responsible for a specific range of tasks. One running instance of an application implements one or more role types. There may be several roles of the same type in a cluster: ![](https://habrastorage.org/r/w1560/getpro/habr/post_images/ff3/bf8/e64/ff3bf8e64bed30112a4fd1b004337204.png) Connector --------- The Connector is responsible for communication with the outside world; it is designed to accept the request, parse it, and if it succeeds, then it sends the data for processing to the input processor. The following formats are supported: HTTP, SOAP, Kafka, FIX. 
The architecture allows us to add support for new formats (IBM MQ support is coming soon). If request parsing fails, the connector returns an error. Otherwise, it responds that the request has been processed successfully, even if an error occurred during further processing. This is done on purpose in order to work with the systems that do not know how to repeat requests, or vice versa, do it too aggressively. To make sure that no data is lost, the repair queue is used: the object joins the queue and is removed from it only after successful processing. The administrator receives notifications about the objects remaining in the repair queue and can retry processing after handling a software error or hardware failure.

### Input Processor

The Input Processor categorizes the received data by characteristics and calls the corresponding handlers. Handlers are Lua code that runs in a sandbox, so they cannot affect the system operation. At this stage, the data could be transformed as required, and if necessary, any number of tasks may run to implement the necessary logic. For example, when adding a new user in MDM (Master Data Management built based on Tarantool Data Grid), a golden record would be created as a separate task so that the request processing doesn't slow down.

The sandbox supports requests for reading, changing, and adding data. It also allows you to call some function for all the roles of the storage type and aggregate the result (map/reduce).

Handlers can be described in files:

```
sum.lua

local x, y = unpack(...)
return x + y
```

Then declared in the configuration:

```
functions:
  sum: { __file: sum.lua }
```

Why Lua? Lua is a straightforward language. Based on our experience, people start to write code that solves their problem only a couple of hours after seeing the language for the first time. And these are not only professional developers, but also, for example, analysts. Moreover, thanks to the JIT compiler, Lua is speedy.
### Storage The Storage stores persistent data. Before saving, the data is validated for compliance with the data scheme. To describe the scheme, we use the extended [Apache Avro](https://avro.apache.org/) format. Example: ``` { "name": "User", "type": "record", "logicalType": "Aggregate", "fields": [ { "name": "id", "type": "string" }, { "name": "first_name", "type": "string" }, { "name": "last_name", "type": "string" } ], "indexes": ["id"] } ``` Based on this description, DDL (Data Definition Language) for Tarantool DBMS and [GraphQL](https://graphql.org/) schema for data access are generated automatically. Asynchronous data replication is supported (we also plan to add synchronous replication). ### Output Processor Sometimes it is necessary to notify external consumers about the new data. That is why we have the Output Processor role. After saving the data, it could be transferred into the appropriate handler (for example, to transform it as required by the consumer), and then transferred to the connector for sending. The repair queue is also used here: if no one accepts the object, the administrator can try again later. ### Scaling The Connector, Input Processor, and Output Processor roles are stateless, which allows us to scale the system horizontally by merely adding new application instances with the necessary enabled role. For horizontal storage scaling, a cluster is organized using the virtual buckets [approach](https://www.youtube.com/watch?v=9PW5agbLyQM). After adding a new server, some buckets from the old servers move to a new server in the background. This process is transparent for the users and does not affect the operation of the entire system. ### Data Properties Objects may be huge and contain other objects. We ensure adding and updating data atomically, and saving the object with all the dependencies on a single virtual bucket. This is done to avoid the so-called «smearing» of the object across multiple physical servers. 
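The virtual-buckets routing just described can be sketched in a few lines: every object is mapped by a hash of its shard key into a fixed number of buckets, and rebalancing changes only the bucket-to-server map, never the hash. This is an illustrative Python model, not TDG code; the bucket count and names are invented:

```python
import zlib

BUCKET_COUNT = 3000  # fixed for the lifetime of the cluster (illustrative value)

def bucket_id(shard_key: str) -> int:
    # Deterministic hash -> bucket. An object and all its dependencies
    # share one shard key, so they land in the same bucket (no "smearing").
    return zlib.crc32(shard_key.encode()) % BUCKET_COUNT

def route(shard_key: str, bucket_map: dict) -> str:
    # bucket_map: bucket id -> server; adding a server only edits this map,
    # moving whole buckets in the background
    return bucket_map[bucket_id(shard_key)]
```

Because keys hash to buckets rather than to servers, growing the cluster never re-hashes the data, which is what keeps rebalancing transparent to users.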
Versioning is also supported: each update of the object creates a new version, and we can always make a time slice to see how everything looked like at the time. For data that does not need a long history, we can limit the number of versions or even store only the last one, that is, we can disable versioning for a specific data type. We can also set the historical limits: for example, delete all the objects of a specific type older than a year. Archiving is also supported: we can upload objects above a certain age to free up the cluster space. ### Tasks Interesting features to be noted include the ability to run tasks on time, at the user's request, or automatically from the sandbox: ![](https://habrastorage.org/r/w1560/getpro/habr/post_images/386/513/8b6/3865138b6add38e46e973582d430abc4.png) Here we can see another role called Runner. This role has no state; if necessary, more application instances with this role could be added to the cluster. The Runner is responsible for completing the tasks. As I have already mentioned, new tasks could be created from the sandbox; they join the queue on the storage and then run on the runner. This type of tasks is called a Job. We also have a task type called Task, that is, a user-defined task that would run on time (using the cron syntax) or on-demand. To run and track such tasks, we have a convenient task manager. The scheduler role must be enabled to use this function. This role has a state, so it does not scale which is not necessary anyway. However, like any other role it can have a replica that starts working if the master suddenly fails. ### Logger Another role is called Logger. It collects logs from all cluster members and provides an interface for uploading and viewing them via the web interface. ### Services It is worth mentioning that the system makes it easy to create services. In the configuration file, you can specify which requests should be sent to the user-written handler running in the sandbox. 
Such a handler may, for example, perform some kind of analytical request and return the result. The service is described in the configuration file: ``` services: sum: doc: "adds two numbers" function: sum return_type: int args: x: int y: int ``` The GraphQL API is generated automatically, and the service is available for calls: ``` query { sum(x: 1, y: 2) } ``` This calls the `sum` handler that returns the result: ``` 3 ``` ### Request Profiling and Metrics We implemented support for the OpenTracing protocol to bring a better understanding of the system mechanisms and request profiling. On demand, the system can send information about how the request was executed to tools supporting this protocol (e.g. Zipkin): ![](https://habrastorage.org/r/w1560/getpro/habr/post_images/fef/257/a0f/fef257a0fcc0acd5d8c2136ec210c48e.png) Needless to say, the system provides internal metrics that can be collected using Prometheus and visualized using Grafana. ### Deployment Tarantool Data Grid can be deployed from RPM-packages or archives using the built-in utility or Ansible. Kubernetes is also supported ([Tarantool Kubernetes Operator](https://habr.com/ru/company/mailru/blog/472428/)). An application that implements business logic (configuration, handlers) is loaded into the deployed Tarantool Data Grid cluster in the archive via the UI or as a script using the provided API. ### Sample Applications What applications can you create with Tarantool Data Grid? In fact, most business tasks are somehow related to data stream processing, storing and accessing. Therefore, if you have large data streams that require secure storage, and accessibility, then our product could save you much time in development and help you concentrate on your business logic. For example, you would like to gather information about the real estate market to stay up to date on the best offers in the future. In this case, we single out the following tasks: 1. 
Robots gathering information from open sources would be your data sources. You can solve this problem using ready-made solutions or by writing code in any language. 2. Next, Tarantool Data Grid accepts and saves the data. If the data format from various sources is different, then you could write code in Lua that would convert everything to a single format. At the pre-processing stage, you could also, for example, filter recurring offers or further update database information about agents operating in the market. 3. Now you already have a scalable solution in the cluster that could be filled with data and used to create data samples. Then you can implement new functions, for example, write a service that would create a data request and return the most advantageous offer per day. It would only require several lines in the configuration file and some Lua code. ### What is next? For us, a priority is to increase the development convenience with [Tarantool Data Grid](https://www.tarantool.io/en/datagrid/). (For example, this is an IDE with support for profiling and debugging handlers that work in the sandbox.) We also pay great attention to security issues. Right now, our product is being certified by FSTEC of Russia (Federal Service for Technology and Export Control) to acknowledge the high level of security and meet the certification requirements for software products used in personal data information systems and federal information systems.
https://habr.com/ru/post/471744/
#include <wx/app.h> This class is essential for writing console-only or hybrid apps without having to define wxUSE_GUI=0. It is used to: You should use the macro wxIMPLEMENT_APP(appClass) in your application implementation file to tell wxWidgets how to create an instance of your application class. Use wxDECLARE_APP(appClass) in a header file if you want the wxGetApp() function (which returns a reference to your application object) to be visible to other files. Destructor. Creates the wxAppTraits object when GetTraits() needs it for the first time. Deletes the pending events of all wxEvtHandlers of this application. See wxEvtHandler::DeletePendingEvents() for warnings about deleting the pending events. Call this to explicitly exit the main message (event) loop. You should normally exit the main loop (and the application) by deleting the top window. This function simply calls wxEvtLoopBase::Exit() on the active loop. Overridden wxEventFilter method. This function is called before processing any event and allows the application to preempt the processing of some events, see wxEventFilter documentation for more information. The wxApp implementation of this method always returns -1, indicating that the event should be processed normally. Implements wxEventFilter. Returns the user-readable application name. The difference between this string and the one returned by GetAppName() is that this one is meant to be shown to the user and so should be used for the window titles, page headers and so on while the other one should be only used internally, e.g. for the file names or configuration file keys. If the application name for display had been previously set by SetAppDisplayName(), it will be returned by this function. Otherwise, if SetAppName() had been called, its value will be returned, also as is. Finally, if none was called, this function returns the program name capitalized using wxString::Capitalize(). Returns the application name.
If SetAppName() had been called, returns the string passed to it. Otherwise returns the program name, i.e. the value of argv[0] passed to the main() function. Gets the class name of the application. The class name may be used in a platform specific manner to refer to the application. Returns the one and only global application object. Usually wxTheApp is used instead. Returns the main event loop instance, i.e. the event loop which is started by OnRun() and which dispatches all events sent from the native toolkit to the application (except when new event loops are temporarily set up). The returned value may be NULL. Put initialization code which needs a non-NULL main event loop into OnEventLoopEnter(). Returns a pointer to the wxAppTraits object for the application. If you want to customize the wxAppTraits object, you must override the CreateTraits() function. Returns the user-readable vendor name. The difference between this string and the one returned by GetVendorName() is that this one is meant to be shown to the user and so should be used for the window titles, page headers and so on while the other one should be only used internally, e.g. for the file names or configuration file keys. By default, returns the same string as GetVendorName(). Returns the application's vendor. Returns true if there are pending events on the internal pending event list. Whenever wxEvtHandler::QueueEvent or wxEvtHandler::AddPendingEvent() are called (not only for wxApp itself, but for any event handler of the application!), the internal wxApp's list of handlers with pending events is updated and this function will return true. Check if the object had been scheduled for destruction with ScheduleForDestruction(). This function may be useful as an optimization to avoid doing something with an object which will soon be destroyed in any case. Called by wxWidgets on creation of the application. Override this if you wish to provide your own (environment-dependent) main loop.
Called when the help option (--help) was specified on the command line. The default behaviour is to show the program usage text and abort the program. Return true to continue normal execution or false to return false from OnInit(), thus terminating the program. Called by wxEventLoopBase::SetActive(): you can override this function and put here the code which needs an active event loop. Note that this function is called whenever an event loop is activated; you may want to use wxEventLoopBase::IsMain() to perform initialization specific to the app's main event loop. Called by wxEventLoopBase::OnExit() for each event loop which is exited. This function is called if an unhandled exception occurs inside the main application event loop. It can return true to ignore the exception and continue running the loop, or false to exit the loop and terminate the program. The default behaviour of this function is the latter in all ports except under Windows, where a dialog is shown to the user which allows them to choose between the different options. You may override this function in your class to do something more appropriate. If this method rethrows the exception and the exception can't be stored for later processing using StoreCurrentException(), the program will terminate after calling OnUnhandledException(). You should consider overriding this method to perform whichever last-resort exception handling would be done in a typical C++ program in a try/catch block around the entire main() function. As this method is called during exception handling, you may use the C++ throw keyword to rethrow the current exception to catch it again and analyze it. This function may be called if something fatal happens: an unhandled exception under Win32. This must be provided by the application, and will usually create the application's main window, optionally calling SetTopWindow().
You may use OnExit() to clean up anything initialized here, provided that the function returns true. Notice that if you want to use the command line processing provided by wxWidgets you have to call the base class version in the derived class OnInit(). Return true to continue processing, false to exit the application immediately. Called from OnInit() and may be used to initialize the parser with the command line options for this application. The base class version adds support for a few standard options only. Note that this method should just configure the parser to accept the desired command line options by calling wxCmdLineParser::AddOption(), wxCmdLineParser::AddSwitch() and similar methods, but should not call wxCmdLineParser::Parse() as this will be done by wxWidgets itself slightly later. This function is called when an unhandled C++ exception occurs in user code called by wxWidgets. Any unhandled exceptions thrown from (overridden versions of) OnInit() and OnExit() methods as well as any exceptions thrown from inside the main loop and re-thrown by OnUnhandledException() will result in a call to this function. By the time this function is called, the program is already about to exit and the exception can't be handled nor ignored any more; override OnUnhandledException() or use explicit try/catch blocks around the OnInit() body to be able to handle the exception earlier. The default implementation dumps information about the exception using wxMessageOutputBest. Process all pending events; it is necessary to call this function to process events posted with wxEvtHandler::QueueEvent or wxEvtHandler::AddPendingEvent. This happens during each event loop iteration (see wxEventLoopBase) in GUI mode, but it may also be called directly. Note that this function does not only process the pending events for the wxApp object itself (which derives from wxEvtHandler) but also the pending events for any event handler of this application.
This function will immediately return and do nothing if SuspendProcessingOfPendingEvents() was called. Resume processing of the pending events previously stopped because of a call to SuspendProcessingOfPendingEvents(). Method to rethrow exceptions stored by StoreCurrentException(). If StoreCurrentException() is overridden, this function should be overridden as well to rethrow the exceptions stored by it when the control gets back to our code, i.e. when it's safe to do it. See StoreCurrentException() for an example of implementing this method. The default version does nothing when using C++98 and uses std::rethrow_exception() in C++11. Delayed objects destruction. In applications using events it may be unsafe for an event handler to delete the object which generated the event because more events may be still pending for the same object. In this case the handler may call ScheduleForDestruction() instead. Schedule the object for destruction in the near future. Notice that if the application is not using an event loop, i.e. if UsesEventLoop() returns false, this method will simply delete the object immediately. Examples of using this function inside wxWidgets itself include deleting the top level windows when they are closed and sockets when they are disconnected. Set the application name to be used in the user-visible places such as window titles. See GetAppDisplayName() for more about the differences between the display name and name. Notice that if this function is called, the name is used as is, without any capitalization as done by default by GetAppDisplayName(). Sets the name of the application. This name should be used for file names, configuration file entries and other internal strings. For the user-visible strings, such as the window titles, the application display name set by SetAppDisplayName() is used instead. By default the application name is set to the name of its executable file. Sets the class name of the application. 
This may be used in a platform specific manner to refer to the application. Sets the C locale to the default locale for the current environment. It is advised to call this to ensure that the underlying toolkit uses the locale in which the numbers and monetary amounts are shown in the format expected by the user and so on. Calling this function is roughly equivalent to calling setlocale(LC_ALL, ""), but it performs additional toolkit-specific tasks under some platforms and so should be used instead of setlocale() itself. Alternatively, you can use wxLocale to change the locale with more control. Notice that this does not change the global C++ locale; you need to do that explicitly if you want (e.g. with std::locale::global()), but be warned that locale support in the C++ standard library can be poor or worse under some platforms. Allows external code to modify global wxTheApp, but you should really know what you're doing if you call it. Set the vendor name to be used in the user-visible places. See GetVendorDisplayName() for more about the differences between the display name and name. Sets the name of the application's vendor. The name will be used in registry access. A default name is set by wxWidgets. Method to store exceptions not handled by OnExceptionInMainLoop(). This function can be overridden to store the current exception, in view of rethrowing it later when RethrowStoredException() is called. If the exception was stored, return true. If the exception can't be stored, i.e. if this function returns false, the program will abort after calling OnUnhandledException(). It is necessary to override this function if OnExceptionInMainLoop() doesn't catch all exceptions, but you still want to handle them using explicit try/catch statements.
Typical use could be to allow code that shows a modal dialog inside a try/catch block to work. By default, throwing an exception from an event handler called from the dialog's modal event loop would terminate the application, as the exception can't be safely propagated to the code in the catch clause because of the presence of the native system functions (through which C++ exceptions can't, generally speaking, propagate) in the call stack between them. Overriding this method allows the exception to be stored when it is detected and rethrown using RethrowStoredException() when the native system function dispatching the dialog events terminates, with the result that such code works as expected. Temporarily suspends processing of the pending events. Returns true if the application is using an event loop. This function always returns true for the GUI applications which must use an event loop, but by default only returns true for console programs if an event loop is already running, as it can't know whether one will be created in the future. Thus, it only makes sense to override it in console applications which do use an event loop, to return true instead of checking if there is a currently active event loop. Yields control to pending messages in the event loop. This method is a convenient wrapper for wxEvtLoopBase::Yield(). If the main loop is currently running, it calls this method on it. Otherwise it creates a temporary event loop and uses it instead, which can be useful to process pending messages during the program startup, before the main loop is created. Use extreme caution when calling this function as, just as wxEvtLoopBase::Yield(), it can result in unexpected reentrances. Number of command line arguments (after environment-specific processing). Command line arguments (after environment-specific processing). Under Windows and Linux/Unix, you should parse the command line arguments and check for files to be opened when starting your application.
Under macOS, you need to override MacOpenFiles() since command line arguments are used differently there. You may use the wxCmdLineParser to parse command line arguments.
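The OnInitCmdLine()/OnCmdLineParsed() flow described above can be sketched in a console application like this. This is an uncompiled illustration, not code from the reference page: the class name and the "quiet" switch are invented for the example, and building it requires a real wxWidgets installation.

```cpp
#include <wx/app.h>
#include <wx/cmdline.h>

class MyApp : public wxAppConsole
{
public:
    virtual bool OnInit() override
    {
        // The base class runs the command line parser configured below.
        return wxAppConsole::OnInit();
    }

    virtual void OnInitCmdLine(wxCmdLineParser& parser) override
    {
        // Keep the standard options (--help, --verbose, ...).
        wxAppConsole::OnInitCmdLine(parser);
        // "quiet" is a made-up switch for this sketch.
        parser.AddSwitch("q", "quiet", "suppress informational output");
    }

    virtual bool OnCmdLineParsed(wxCmdLineParser& parser) override
    {
        m_quiet = parser.Found("quiet");
        return wxAppConsole::OnCmdLineParsed(parser);
    }

    virtual int OnRun() override
    {
        // Real work would go here; m_quiet controls chattiness.
        return 0;
    }

private:
    bool m_quiet = false;
};

wxIMPLEMENT_APP_CONSOLE(MyApp);
```

Note that, as the reference text says, the overrides only configure the parser; wxWidgets itself calls wxCmdLineParser::Parse() from the base class OnInit().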
https://docs.wxwidgets.org/trunk/classwx_app_console.html
//
// This program finds the area of a triangle given the points of the triangle. It computes
// the area of the triangle using three different formulas, each will result in the same answer.
//
// Area1 = 1/2 | (x1y2 - x2y1) + (x2y3 - x3y2) + (x3y1 - x1y3) |
// (distance)^2 = (x-difference)^2 + (y-difference)^2
// Area2 = sqrt( s(s-a)(s-b)(s-c) )
// c^2 = A^2 + B^2 - 2 A B cos(theta)
//
// C = sqrt( pow(A, 2) + pow(B, 2) - 2*A*B*cos(theta) )
//
// Area3 = 0.5*A*B*sin(theta)
//

#include <iostream>
#include <cmath>
#include <iomanip>

using namespace std;

int main()
{
    float x1, x2, x3, y1, y2, y3, Area1, Area2, Area3, s, a, b, c, Perimeter, theta;

    // The user needs to input the points of the triangle.
    cout << "Please enter the value of x1: ";
    cin >> x1;
    cout << "Please enter the value of y1: ";
    cin >> y1;
    cout << "Please enter the value of x2: ";
    cin >> x2;
    cout << "Please enter the value of y2: ";
    cin >> y2;
    cout << "Please enter the value of x3: ";
    cin >> x3;
    cout << "Please enter the value of y3: ";
    cin >> y3;

    // Find Area1 using the first formula. fabs() is used rather than abs()
    // so the absolute value of the float is taken, not of a truncated int.
    Area1 = 0.5 * fabs((x1*y2 - x2*y1) + (x2*y3 - x3*y2) + (x3*y1 - x1*y3));

    // Use the distance formula to find the three sides of the triangle.
    a = sqrt( pow(x2-x1, 2) + pow(y2-y1, 2) );
    b = sqrt( pow(x3-x2, 2) + pow(y3-y2, 2) );
    c = sqrt( pow(x3-x1, 2) + pow(y3-y1, 2) );

    // Find the perimeter of the triangle.
    Perimeter = a + b + c;

    // The semiperimeter is half the perimeter.
    s = Perimeter / 2;

    // Find Area2 using Heron's formula.
    Area2 = sqrt( s*(s-a)*(s-b)*(s-c) );

    // Solve for theta in the formula c^2 = A^2 + B^2 - 2 A B cos(theta).
    theta = acos(( pow(a, 2) + pow(b, 2) - pow(c, 2) ) / ( 2.00 * a * b ));

    // Find Area3 using the given formula.
    Area3 = 0.5*( a*b*sin(theta) );

    // Display the three areas after being calculated by the three formulas.
    cout << "Area1 is: " << setprecision(12) << Area1
         << "\nArea2 is: " << setprecision(12) << Area2
         << "\nArea3 is: " << setprecision(12) << Area3 << endl;

    return 0;
}

These are the vertices:

(0,2), (0,0), (1.73205, 1)
(-1.8,-1), (0.7,-1), (-1.1,1.4)
(1,1), (15,6), (3,11)

The question is: One would expect that since all the integers in the third batch of vertices have exact representations in the machine, with no round-off errors, they should cause the least problems. Yet a glance at the output shows that it has the biggest errors. Can you explain why?

The most common formula for the area of a triangle does not appear among the methods above, because it assumes the base is easy to find. Unfortunately, it's only really easy to find in analytic geometry if it runs parallel to the x-axis, which does not always happen. To be able to use this equation for a more general case, consider the following figure:

                 . (x3,y3)
                /|
               / |
              /  |
  (x1,y1) . -----+  (x4,y4)
             \ b |
              \  | h
               \ |
                \|
                 . (x2,y2)

The area of the triangle formed from these three points can be found with the usual ½bh, if one gets the correct value for b. The right end of that line segment would appear on the line connecting Points 2 and 3. Suppose we call it (x4, y4). Both of these representations of the slope of the line must be the same:

    y3 - y2     y4 - y2
    -------  =  -------
    x3 - x2     x4 - x2

Use this information to find and output the point (x4, y4) and the triangle area computed from it. (Your program would then output four area calculations instead of three.)
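As a starting point for that exercise, the slope equality can be solved for x4 directly once y4 is fixed at y1. This sketch is mine, not from the thread (the helper names are invented), and it assumes y2 != y3 so the line through Points 2 and 3 is not horizontal:

```cpp
#include <cmath>

// Find (x4, y4): the horizontal line through (x1, y1) meets the line joining
// (x2, y2) and (x3, y3). Then Area = 1/2 * b * h with b = |x4 - x1| and
// h = |y3 - y2|, the total height spanned on either side of the base line.
double areaHalfBaseHeight(double x1, double y1, double x2, double y2,
                          double x3, double y3, double &x4, double &y4) {
    // From (y3-y2)/(x3-x2) = (y4-y2)/(x4-x2) with y4 = y1, solve for x4.
    y4 = y1;
    x4 = x2 + (y1 - y2) * (x3 - x2) / (y3 - y2);
    double b = std::fabs(x4 - x1);
    double h = std::fabs(y3 - y2);
    return 0.5 * b * h;
}

// The cross-product formula from the thread, for comparison.
double areaShoelace(double x1, double y1, double x2, double y2,
                    double x3, double y3) {
    return 0.5 * std::fabs((x1*y2 - x2*y1) + (x2*y3 - x3*y2) + (x3*y1 - x1*y3));
}
```

For the first set of vertices, (0,2), (0,0), (1.73205,1), this gives x4 ≈ 3.4641 and the same area as Area1; expanding (x4-x1)(y3-y2) algebraically reproduces the cross-product expression, which is why the two agree.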
https://www.daniweb.com/programming/software-development/threads/259216/question
Is there a function that will change UTF-8 to Unicode code points, leaving non-special characters as normal letters and numbers? i.e. the German word "tchüß" would be rendered as something like "tch\20AC\21AC" (please note that I am making the Unicode codes up).

EDIT: I am experimenting with the following function, but although this one works well with ASCII 32-127, it seems to fail for double-byte chars:

function strToHex ($string)
{
    $hex = '';
    for ($i = 0; $i < mb_strlen ($string, "utf-8"); $i++) {
        $id = ord (mb_substr ($string, $i, 1, "utf-8"));
        $hex .= ($id < 128) ? mb_substr ($string, $i, 1, "utf-8") : "&#" . $id . ";";
    }
    return ($hex);
}

Converting one character set to another can be done with iconv. Note that UTF-8 is already a Unicode encoding. Another way is simply using htmlentities with the right character set:
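The likely culprit in the function above is ord(): it only ever looks at the first byte of its argument, so every multibyte character collapses to the value of its lead byte. A hedged sketch of the fixes follows; mb_ord() needs PHP 7.2+, the name strToNumericEntities is mine, and the one-liners at the end reconstruct the answer's elided examples rather than quote them:

```php
<?php
// Numeric character references for everything outside ASCII:
// "tchüß" becomes "tch&#252;&#223;".
function strToNumericEntities(string $string): string
{
    $out = '';
    $len = mb_strlen($string, 'UTF-8');
    for ($i = 0; $i < $len; $i++) {
        $char = mb_substr($string, $i, 1, 'UTF-8');
        $cp   = mb_ord($char, 'UTF-8');   // the real code point, unlike ord()
        $out .= ($cp < 128) ? $char : '&#' . $cp . ';';
    }
    return $out;
}

// One-liner alternatives along the lines the answer hints at:
$entities  = mb_convert_encoding('tchüß', 'HTML-ENTITIES', 'UTF-8');
$entities2 = htmlentities('tchüß', ENT_QUOTES, 'UTF-8');
```

The one-liners may emit named entities (such as &uuml;) where they exist, rather than numeric references, so pick the variant that matches the output you actually need.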
https://codedump.io/share/ANTPIMR8doMr/1/utf-8-to-unicode-code-points
Problem: The program allows the user to enter grade information for up to 20 students in a class: Name, Exam 1 grade, Exam 2 grade, Homework grade, Final Exam grade. For each student, first calculate a final grade using the formula:

finalgrade = 0.20 * Exam 1 + 0.20 * Exam 2 + 0.35 * Homework + 0.25 * Final Exam

then assign a letter grade on the basis of 90-100 = A, 80-89 = B, 70-79 = C, 60-69 = D, less than 60 = F. All the information, including the final grade and the letter grade, should be written and displayed to a file.

Note: The program should STOP after 20 loops. Ask the user if they want to continue? If the counter < 20, prompt the user and store the answer - e.g. enter a y to continue or an n to quit. You can use a do-while loop condition such as: while counter less than 20 and proceed equal to 'Y'.

Question for you: How would you handle the exit condition?

Code:

#include <fstream>
#include <iostream>
#include <iomanip>
#include <cstdlib>

using namespace std;

int main()
{
    const int MAXSTUDENTS = 20;
    const int MAXNAME = 20;
    char filename[] = "allgrades.dat";   // was MAXCHARS = 10: too small for this file name
    int i;
    char lastname[MAXNAME], lettergrade;
    float exam1, exam2, homework, finalexam, finalgrade;
    ofstream outFile;

    outFile.open(filename);
    if (outFile.fail()) {
        cout << "\nNot successful opening " << filename << endl;
        exit(1);   // was exit(l): the letter l, not the digit 1
    }

    outFile << setiosflags(ios::fixed) << setiosflags(ios::showpoint) << setprecision(2);

    for (i = 1; i <= MAXSTUDENTS; i++) {   // was "1 <= MAXSTUDENTS": an infinite loop
        cout << "\nEnter the student's last name: ";
        cin >> lastname;
        cout << "\nEnter exam 1's grade: ";
        cin >> exam1;
        cout << "\nEnter exam 2's grade: ";
        cin >> exam2;
        cout << "\nEnter the student's homework grade: ";
        cin >> homework;
        cout << "\nEnter the final exam grade: ";
        cin >> finalexam;

        finalgrade = 0.20 * exam1 + 0.20 * exam2 + 0.35 * homework + 0.25 * finalexam;

        if (finalgrade >= 90)
            lettergrade = 'A';
        else if (finalgrade >= 80)
            lettergrade = 'B';
        else if (finalgrade >= 70)
            lettergrade = 'C';
        else if (finalgrade >= 60)
            lettergrade = 'D';
        else
            lettergrade = 'F';   // was "finalgrade = 'F'": assigned to the wrong variable

        cout << lastname << " " << exam1 << " " << exam2 << " " << homework << " "
             << finalexam << " " << finalgrade << " " << lettergrade << endl;
        outFile << lastname << " " << exam1 << " " << exam2 << " " << homework << " "
                << finalexam << " " << finalgrade << " " << lettergrade << endl;
    }

    return 0;
}
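On the "how would you handle the exit condition?" question posed above, here is one hedged sketch. The function name and the one-field-per-student input format are invented for illustration; reading from a generic stream keeps the do-while condition exercisable without a keyboard:

```cpp
#include <istream>
#include <sstream>
#include <string>

// Loop until maxStudents entries are read or the user answers something
// other than 'y'/'Y' to the continue prompt. Returns how many were entered.
int countStudentsEntered(std::istream &in, int maxStudents) {
    int counter = 0;
    char proceed = 'y';
    do {
        std::string name;
        if (!(in >> name))        // one token stands in for a full record here
            break;
        ++counter;
        if (counter < maxStudents) {
            if (!(in >> proceed)) // ask "continue? (y/n)" only if room remains
                break;
        }
    } while (counter < maxStudents && (proceed == 'y' || proceed == 'Y'));
    return counter;
}
```

So the loop stops on whichever comes first: the 20-student cap or an answer other than 'y'. Feeding it "alice y bob n" yields 2 entries; with a cap of 3 and all-'y' answers it stops at exactly 3.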
http://cboard.cprogramming.com/cplusplus-programming/40682-please-review-comment-printable-thread.html
I'm not absolutely certain whether this should be here or on Web Applications, but here goes. I need wiki software, preferably something that runs on PHP but this is not a hard requirement, which supports namespaces. That is, I need to be able to give users an arbitrary number of wiki spaces that do not overlap or interact with each other. FOSS is preferred as well. I've been unable to find something like this so far, so I may have to roll my own solution. Does anyone know of such an application? I suggest you consider MediaWiki for this. It is PHP-based FOSS software. A search on the MediaWiki page reveals these results regarding Namespaces. It's not FOSS, but Confluence will
http://serverfault.com/questions/289165/simple-feature-light-wiki-software-with-namespace-support
Function1 ()
{
    int A;
    char B[20];
    // ...
    Function2 (A);
}

// ...

Function2 (int C)
{
    int D;
    int E;
}

// ...

main (int argc, char **argv)
{
    Function1 ();
}

#include <iostream>
#include <stack>
#include <string>

using namespace std;

int main()
{
    stack<string> stackName;
    string name;
    int SIZE = 0;
    int count = 0;

    cout << "Enter how big the stack is: ";
    cin >> SIZE;
    cout << endl;
    count = SIZE;

    for (int i = 0; i < count; i++) {
        cout << "Enter name to add to stack: ";
        cin >> name;
        stackName.push(name);
        cout << endl;
    }

    for (int i = 0; i < count; i++) {
        cout << "Popping: " << stackName.top() << "\n";
        stackName.pop();   // without this, top() prints the same name every time
    }
}
https://www.experts-exchange.com/questions/23504075/Stacks-container.html
What's new in Python 3.8?

Sanket Saurav

The latest, greatest version of Python is going to be out in beta soon. While there's still some time before the final stable version is available, it is worth looking into all that's new. Python 3.8 adds some new syntax to the language, a few minor changes to existing behavior, and mostly a bunch of speed improvements — maintaining the tradition from the earlier 3.7 release. This post outlines the most significant additions and changes you should know about Python 3.8. Take a look!

1. The walrus operator

Assignment expressions have come to Python with the "walrus" operator :=. This will enable you to assign values to a variable as part of an expression. The major benefit of this is it saves you some lines of code when you want to use, say, the value of an expression in a subsequent condition. So, something like this:

length = len(my_list)
if length > 10:
    print(f"List is too long ({length} elements, expected <= 10)")

can now be written in short like this:

if (length := len(my_list)) > 10:
    print(f"List is too long ({length} elements, expected <= 10)")

Yay for brevity, but some might say this affects readability of code — it can be argued that the first variant here is clearer and more explicit. This discussion was the center of a major controversy in the Python community.

2. Positional-only arguments

A special marker, /, can now be used when defining a method's arguments to specify that the function only accepts positional arguments to the left of the marker. Keyword-only arguments have been available in Python with the * marker in functions, and the addition of the / marker for positional-only arguments improves the language's consistency and allows for a robust API design. Take an example of this function:

def pow(x, y, z=None, /):
    r = x**y
    if z is not None:
        r %= z
    return r

The / marker here means that passing values for x, y and z can only be done positionally, and not using keyword arguments.
The behavior is illustrated below:

>>> pow(2, 10)        # valid
>>> pow(2, 10, 17)    # valid
>>> pow(x=2, y=10)    # invalid, will raise a TypeError
>>> pow(2, 10, z=17)  # invalid, will raise a TypeError

A more detailed explanation of the motivation and use-cases can be found in PEP 570.

3. f-strings now support "="

Python programmers often use "printf-style" debugging. In the old days this was pretty verbose:

print "foo=", foo, "bar=", bar

f-strings make this a bit nicer:

print(f"foo={foo} bar={bar}")

But you still have to repeat yourself: you have to write out the string "foo", and then the expression "foo". The = specifier, used as f'{expr=}', expands to the text of the expression, an equal sign, then the repr of the evaluated expression. So now, you can simply write:

print(f"{foo=} {bar=}")

A small step for the language, but a giant leap for everyone who sprinkles print() statements for debugging!

4. reversed() now works with dict

Since Python 3.7, dictionaries preserve the order of insertion of keys. The reversed() built-in can now be used to access the dictionary in the reverse order of insertion — just like OrderedDict.

>>> my_dict = dict(a=1, b=2)
>>> list(reversed(my_dict))
['b', 'a']
>>> list(reversed(my_dict.items()))
[('b', 2), ('a', 1)]

5. Simplified iterable unpacking for return and yield

An unintentional restriction has existed since Python 3.2 which disallowed unpacking iterables without parentheses in return and yield statements. So, the following was allowed:

def foo():
    rest = (4, 5, 6)
    t = 1, 2, 3, *rest
    return t

But these resulted in a SyntaxError:

def bar():
    rest = (4, 5, 6)
    return 1, 2, 3, *rest

def baz():
    rest = (4, 5, 6)
    yield 1, 2, 3, *rest

The latest release fixes this behavior, so the above two approaches are now allowed.

6. New syntax warnings

The Python interpreter now throws a SyntaxWarning in some cases when a comma is missed before a tuple or list. So when you accidentally do this:

data = [
    (1, 2, 3)  # oops, missing comma!
    (4, 5, 6)
]

instead of showing TypeError: 'tuple' object is not callable, which doesn't really tell you what's wrong, a helpful warning will be shown pointing out that you probably missed a comma. Pretty helpful while debugging!

The compiler now also produces a SyntaxWarning when identity checks (is and is not) are used with certain types of literals (e.g. strings, integers, etc.). You rarely want to compare identities with literals other than None, and a compiler warning can help avoid a number of elusive bugs.

7. Performance improvements

This release adds a number of performance speed-ups to the interpreter, following suit from the previous 3.7 release.

- operator.itemgetter() is now 33% faster. This was made possible by optimizing argument handling and adding a fast path for the common case of a single non-negative integer index into a tuple (which is the typical use case in the standard library).
- Field lookups in collections.namedtuple() are now more than two times faster, making them the fastest form of instance variable lookup in Python.
- The list constructor does not over-allocate the internal item buffer if the input iterable has a known length (the input implements __len__). This makes the created list 12% smaller on average.
- Class variable writes are now twice as fast: when a non-dunder attribute was updated, there was an unnecessary call to update slots, which is optimized.
- Invocation of some simple built-ins and methods is now 20-50% faster. The overhead of converting arguments to these methods is reduced.
- uuid.UUID now uses __slots__ to reduce its memory footprint.

Summary

The upcoming release of Python adds some great new features to the language and significantly improves the performance with fundamental speed-up fixes. There is a small number of behavior changes that might require modifying existing code while upgrading to Python 3.8, but the performance gains and new syntax make it totally worth the effort.
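Several of the features above can be exercised together in one short snippet (it requires Python 3.8 or newer to run):

```python
# Walrus operator: bind and test in a single expression.
values = [1, 2, 3, 4]
if (n := len(values)) > 3:
    message = f"too long ({n} elements)"

# f-string "=" specifier: expression text, "=", then its value.
foo = 42
debug = f"{foo=}"

# reversed() over a dict walks keys in reverse insertion order.
order = list(reversed(dict(a=1, b=2, c=3)))
```

Here message becomes "too long (4 elements)", debug becomes "foo=42", and order is ['c', 'b', 'a'].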
A detailed change log of all that's new can be found here. Originally posted on DeepSource Blog.
https://dev.to/deepsource/what-s-new-in-python-3-8-1onl
This blog post is adapted from a talk I gave at Strange Loop 2014 with the same title. Watch the video! When I started Hacker School, I wanted to learn how the Linux kernel works. I'd been using Linux for ten years, but I still didn't understand very well what my kernel did. While there, I found out that: - the Linux kernel source code isn't all totally impossible to understand - kernel programming is not just for wizards, it can also be for me! - systems programming is REALLY INTERESTING - I could write toy kernel modules, for fun! - and, most surprisingly of all, all of this stuff was useful. I hadn't been doing low level programming at all – I'd written a little bit of C in university, and otherwise had been doing web development and machine learning. But it turned out that my newfound operating systems knowledge helped me solve regular programming tasks more easily. I also now feel like if I were to be put on Survivor: fix a bug in my kernel's USB driver, I'd stand a chance of not being immediately kicked off the island. This is all going to be about Linux, but a lot of the same concepts apply to OS X. We'll talk about - what even is a kernel? - why bother learning about this stuff? - A few strategies for understanding the Linux kernel better, on your own terms: - strace all the things! - Read some kernel code! - Write a fun kernel module! - Write an operating system! - Try the Eudyptula challenge - Do an internship. What even is a kernel? In a few words: A kernel is a bunch of code that knows how to interact with your hardware. Linux is mostly written in C, with a bit of assembly. Let's say you go to google.com in your browser. That requires typing, sending data over a network, allocating some memory, and maybe writing some cache files.
Your kernel has code that

- interprets your keypresses every time you press a key
- speaks the TCP/IP protocol, for sending information over the network to Google
- communicates with your hard drive to write bytes to it
- understands how your filesystem is implemented (what do the bytes on the hard drive even mean?!)
- gives CPU time to all the different processes that might be running
- speaks to your graphics card to display the page
- keeps track of all the memory that's been allocated

and much, much more. All of that code is running all the time you're using your computer! This is a lot to handle all at once! The only concept I want you to understand for the rest of this post is system calls. System calls are your kernel's API – regular programs that you write can interact with your computer's hardware using system calls. A few example system calls:

- open opens files
- sendto and recvfrom send and receive network data
- write writes to disk
- chmod changes the permissions of a file
- brk and sbrk allocate memory

So when you call the open() function in Python, somewhere down the stack that eventually uses the open system call. That's all you need to know about the kernel for now! It's a bunch of C code that's running all the time on your computer, and you interact with it using system calls.

Why learn about the Linux kernel, anyway?

There are some obvious reasons: it's really fun! Not everyone knows about it! Saying you wrote a kernel module for fun is cool! But there's a more serious reason: learning about the interface between your operating system and your programs will make you a better programmer. Let's see how!

Reason 1: strace

Imagine that you're writing a Python program, and it's meant to be reading some data from a file /user/bork/awesome.txt. But it's not working! A pretty basic question is: is your program even opening the right file? You could start using your regular debugging techniques to investigate (print some things out! use a debugger!).
But the amazing thing is that on Linux, the only way to open a file is with the open system call. You can get a list of all of these calls to open (and therefore every file your program has opened) with a tool called strace. Let’s do a quick example! Let’s imagine I want to know what files Chrome has opened! $ strace -e open google-chrome [... lots of output omitted ...] open("/home/bork/.config/google-chrome/Consent To Send Stats", O_RDONLY) = 36 open("/proc/meminfo", O_RDONLY|O_CLOEXEC) = 36 open("/etc/opt/chrome/policies/managed/lastpass_policy.json", O_RDONLY) = 36 This is a really powerful tool for observing the behavior for a program that we wouldn’t have if we didn’t understand some basics about system calls. I use strace to: - see if the file I think my program is opening is what it’s really opening (system call: read) - find out what log file my misbehaving poorly documented program is writing to (though I could also use lsof) (system call: write) - spy on what data my program is sending over the network (system calls: sendtoand recvfrom) - find out every time my program opens a network connection (system call: socket) I love strace so much I gave a lightning talk about just strace: Spying on your programs with strace. Reason 2: /proc /proc lets you recover your deleted files, and is a great example of how understanding your operating system a little better is an amazing programming tool. How does it do that? Let’s imagine that we’ve written a program smile.c, and we’re in the middle of running it. But then we accidentally delete the binary! The PID of that process right now is 8604. I can find the executable for that process at /proc/8604/exe: /proc/8604/exe -> /home/bork/work/talks/2014-09-strangeloop/smile (deleted) It’s (deleted), but we can still look at it! cat /proc/8604/exe > recovered_smile will recover our executable. Wow. There’s also a ton of other really useful information about processes in /proc. 
(like which files they have open – try ls -l /proc/<pid>/fd) You can find out more with man proc.

Reason 3: ftrace

ftrace is totally different from strace. strace traces system calls and ftrace traces kernel functions. I honestly haven't had occasion to do this yet but it is REALLY COOL so I am telling you about it. Imagine that you're having some problems with TCP, and you're seeing a lot of TCP retransmits. ftrace can give you information about every time the TCP retransmit function in the kernel is called! To see how to actually do this, read Brendan Gregg's post Linux ftrace TCP Retransmit Tracing. There also appear to be some articles about ftrace on Linux Weekly News! I dream of one day actually investigating this :)

Reason 4: perf

Your CPU has a whole bunch of different levels of caching (L1! L2!) that can have really significant impacts on performance. perf is a great tool that can tell you

- how often the different caches are being used (how many L1 cache misses are there?)
- how many CPU cycles your program used (!!)
- profiling information (how much time was spent in each function?)

and a whole bunch of other insanely useful performance information. If you want to know more about awesome CPU cycle tracking, I wrote about it in I can spy on my CPU cycles with perf!.

Convinced yet? Understanding your operating system better is super useful and will make you a better programmer, even if you write Python. The most useful tools for high-level programming I've found are strace and /proc. As far as I can tell ftrace and perf are mostly useful for lower-level programming. There's also tcpdump and lsof and netstat and all kinds of things I won't go into here. Now you're hopefully convinced that learning more about Linux is worth your time. Let's go over some strategies for understanding Linux better!

Strategy 1: strace all the things!

I mentioned strace before briefly. strace is literally my favorite program in the universe.
A great way to get a better sense for what your kernel is doing is – take a simple program that you understand well (like ls), and run strace on it. This will show you at what points the program is communicating with your kernel. I took a 13 hour train ride from Montreal to New York once and straced killall and it was REALLY FUN. Let’s try ls! I ran strace -o out ls to save the output to a file. strace will output a WHOLE BUNCH OF CRAP. It turns out that starting up a program is pretty complicated, and in this case most of the system calls have to do with that. There’s a lot of - opening libraries: open("/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) - putting those libraries into memory: mmap(NULL, 2126312, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7faf507fc000 and a bunch of other things I don’t really understand. My main strategy when stracing for fun is to ignore all the crap at the beginning, and just focus on what I understand. It turns out that ls doesn’t need to do a lot! openat(AT_FDCWD, ".", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) = 3 getdents(3, /* 5 entries */, 32768) = 136 getdents(3, /* 0 entries */, 32768) = 0 close(3) = 0 fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 12), ...}) = 0 mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7faf5104a000 write(1, "giraffe out penguin\n", 22) = 22 close(1) = 0 munmap(0x7faf5104a000, 4096) = 0 close(2) = 0 exit_group(0) = ? This is awesome! Here’s what it needed to do: - Open the current directory: openat(AT_FDCWD, ".", O_RDONLY|O_NONBLOCK|O_DIRECTORY|O_CLOEXEC) - Get the contents of that directory: getdents(3, /* 5 entries */, 32768) = 136. Looks like it was 136 bytes of stuff! - Close the directory: close(3) - Write the files to standard out: write(1, "giraffe out penguin\n", 22) = 22 - Close a bunch of things to clean up. That was really simple, and we already learned a new system call! That mmap in the middle there? No idea what that does. 
But it’s totally fine! STRACE IS THE BEST. So! Running strace on random processes and looking up the documentation for system calls you don’t recognize is an easy way to learn a ton! Warning: Don’t strace processes that you actually need to run efficiently! strace is like putting a huge stop sign in front of your process every time you use a system call, which is all the time. Brendan Gregg has a great post about strace which you should read. Also you should probably read everything he writes. Strategy 2: Read some kernel code! Okay, let’s imagine that we’ve gotten interested in getdents (the system call to list the contents of a directory), and we want to understand better what it actually does. There’s this fantastic tool called livegrep that lets you search through kernel code. It’s by Nelson Elhage who is pretty great. So let’s use it to find the source for getdents, which lists all the entries in a directory! I searched for it using livegrep, and found the source. On line 211, it calls iterate_dir. So let’s look that up! It’s here. Honestly this code makes no sense to me (maybe res = file->f_op->iterate(file, ctx) is what’s iterating over the directory?). But it’s neat that we can look at it! If you want to know about current Linux kernel development, Linux Weekly News is a great resource. For example, here’s an interesting article about the btrfs filesystem! Strategy 3: Write a fun kernel module! Kernel modules sound intimidating but they’re actually really approachable! All a kernel module fundamentally is is - An init function to run when the module is loaded - A cleanup function to run when the module is unloaded You load kernel modules with insmod and unload them with rmmod. Here’s a working “Hello world” kernel module!
#include <linux/module.h> // included for all kernel modules #include <linux/kernel.h> // included for KERN_INFO #include <linux/init.h> // included for __init and __exit macros static int __init hello_init(void) { printk(KERN_INFO "WOW I AM A KERNEL HACKER!!!\n"); return 0; // Non-zero return means that the module couldn't be loaded. } static void __exit hello_cleanup(void) { printk(KERN_INFO "I am dead.\n"); } module_init(hello_init); module_exit(hello_cleanup); That’s it! printk writes to the system log, and if you run dmesg, you’ll see what it printed! Let’s look at another fun kernel module! I gave a talk about kernel hacking at CUSEC in January, and I needed a fun example. My friend Tavish suggested “hey julia! What if you made a kernel module that rick rolls you every time you open a file?” And my awesome partner Kamal said “that sounds like fun!” and inside a weekend he’d totally done it! You can see the extremely well-commented source here: rickroll.c. Basically what it needs to do when loaded is - find the system call table (it turns out this is not trivial!) - Disable write protection so that we’re actually allowed to modify it (!!) - Save the old open so we can put it back - Replace the open system call with our own rickroll_open system call That’s it! Here’s the relevant code: sys_call_table = find_sys_call_table(); DISABLE_WRITE_PROTECTION; original_sys_open = (void *) sys_call_table[__NR_open]; sys_call_table[__NR_open] = (unsigned long *) rickroll_open; ENABLE_WRITE_PROTECTION; printk(KERN_INFO "Never gonna give you up!\n"); The rickroll_open function is also pretty understandable.
Here’s a sketch of it, though I’ve left out some important implementation details that you should totally read: rickroll.c static char *rickroll_filename = "/home/bork/media/music/Rick Astley - Never Gonna Give You Up.mp3"; asmlinkage long rickroll_open(const char __user *filename, int flags, umode_t mode) { if(strcmp(filename + len - 4, ".mp3")) { /* Just pass through to the real sys_open if the extension isn't .mp3 */ return (*original_sys_open)(filename, flags, mode); } else { /* Otherwise we're going to hijack the open */ fd = (*original_sys_open)(rickroll_filename, flags, mode); return fd; } } SO FUN RIGHT. The source is super well documented and interesting and you should go read it. And if you think “but Kamal must be a kernel hacking wizard! I could never do that!“, it is not so! Kamal is pretty great, but he had never written kernel code before that weekend. I understand that he googled things like “how to hijack system call table linux”. You could do the same! Kernel modules are an especially nice way to start because writing toy kernel modules plays nicely into writing real kernel modules like hardware drivers. Or you could start out writing drivers right away! Whatever floats your boat :) The reference for learning about writing drivers is called Linux Device Drivers or “LDD3”. The fabulous Jessica McKellar is writing the new version, LDD4. Strategy 4: Write an operating system! This sounds really unapproachable! And writing a full-featured operating system from scratch is a ton of work. But the great thing about operating systems is that yours doesn’t need to be full-featured! I wrote a small operating system that basically only has a keyboard driver. And doesn’t compile for anyone except me. It was 3 weeks of work, and I learned SO MUCH. There’s a super great wiki with lots of information about making operating systems.
A few of the blog posts that I wrote while working on it: - Writing an OS in Rust in tiny steps - After 5 days, my OS doesn’t crash when I press a key - SOMETHING IS ERASING MY PROGRAM WHILE IT’S RUNNING (oh wait oops) I learned about linkers and bootloaders and interrupts and memory management and how executing a program works and so many more things! And I’ll never finish it, and that’s okay. Strategy 5: Do the Eudyptula challenge If you don’t have an infinite number of ideas for hilarious kernel module pranks to play on your friends (I certainly don’t!), the Eudyptula Challenge is specifically built to help you get started with kernel programming, with progressively harder steps. The first one is to just write a “hello world” kernel module, which is pretty straightforward! They’re pretty strict about the way you send email (helping you practice for the linux kernel mailing list, maybe!). I haven’t tried it myself yet, but Alex Clemmer tells me that it is hard but possible. Try it out! Strategy 6: Do an internship If you’re really serious about all this, there are a couple of programs I know of: - Google Summer of Code, for students - The GNOME outreach program for women The GNOME outreach program for women (OPW) is a great program that provides mentorship and a 3-month paid internship for women who would like to contribute to the Linux kernel. More than 1000 patches from OPW interns and alumni have been accepted into the kernel. In the application you submit a simple patch to the kernel (!!), and it’s very well documented. You don’t need to be an expert, though you do need to know some C. You can apply now! The application deadline for the current round is October 31, 2014, and you can find more information on the kernel OPW website. 
Resources To recap, here are a few super useful resources for learning that I’ve mentioned: - Previous writing: 4 paths to being a kernel hacker, everything I’ve written about kernels - I learned all of this at Hacker School - LXR and livegrep are great for searching the Linux kernel - Linux Device Drivers 3 is available free online. - The OPW internship for the Linux kernel - Linux Weekly News (here’s an index) - Brendan Gregg has a ton of extremely useful writing about performance analysis tools like perf and ftrace on Linux. You can be a kernel hacker I’m not a kernel hacker, really. But now when I look at awesome actual kernel hackers like Valerie Aurora or Sarah Sharp, I no longer think that they’re wizards. I now think those are people who worked really hard on becoming better at kernel programming, and did it for a long time! And if I spent a lot of time working on learning more, I could be a kernel hacker too. And so could you.
https://jvns.ca/blog/2014/09/18/you-can-be-a-kernel-hacker/
"Not a major overhaul"? (Score:5, Insightful) I mean, it's not, but it makes it sound like C++11 is a minor update. Lambdas, auto, concurrency, are these minor updates? There's a lot of interesting stuff in C++11! Re: (Score:2): (Score:2) damn my angle brackets got blanked out. [system_error] for std::error_code std::atomic[size_t] for thread-safe counter std::atomic[bool] for thread-safe flag Re: (Score:2) Re:"Not a major overhaul"? (Score:5, Informative) > What C++ compiler are you using? g++ 4.6 - standard in Ubuntu Two of the features I'm waiting on are class level non-static initialisers and templated typedefs. I've heard Microsoft's C++ compiler has better C++11 support but I've never tried it. Beware that MinGW has a bug so std::thread is disabled. I've heard mingw-w64 works better. You might want to also try boost::thread (same library essentially, except std::thread has move semantics). Re: (Score:2) Re: (Score:3) VC++ 2010 used to have better C++11 support than g++ for some time, but the latter has overtaken it now, and it looks like it'll keep ahead for the next release also - VC still doesn't do variadic templates, for example. It doesn't have instance field initializers, either, nor templated typedefs. That said, C++11 support is being implemented pretty rapidly in general - I mean, we've only just got the final spec out, what, a couple months ago? and more than half of it is already supported in all major compile Re: (Score:2) Microsoft's C++11 support in VS2010 was pretty good at the time: lambdas, auto, new function decl syntax, and that's about it. They've really dropped the ball in VS11 though: They've basically added strongly typed enums, and that's about it. Still no word on variadic templates, template aliases or initializer lists.
Re: (Score:3) Slide 5 of the deck here [msdn.com] says that initializer lists, template aliases, variadic templates, and other features are coming in a series of out of band releases after VC11 RTM (but sooner than the next major release [msdn.com] of VC). That slide also lists the stdlib and language features that are included in VC11 Beta/RTM. Re: (Score:3, Informative) Sure. But it should be noted that this feature (along with many others brought to the new standard, I'm sure) was introduced in the Boost set of libraries first. Re: (Score:2) That is incorrect. Re:"Not a major overhaul"? (Score:5, Informative) You didn't anyway. You type in "int" to loop over a vector. Only if you want to tie yourself to using a vector. Using a proper iterator costs you nothing in code space or execution time (because for a vector it optimizes down to just pointer arithmetic anyway), but means that at some future time you can replace that vector with a different data structure without having to modify the code that operates on it. Re: (Score:2, Informative) for (auto i : v) { } Even in terms of typing time it's a nice addition, ignoring the structure-independence benefits of this sort of thing. Re: (Score:2, Informative) It's pretty simple why you would use iterators. While the data you store might not change much over time, the amount of data stored for successful apps tends to grow alarmingly over time. Allowing the vector to change to some other more elegant solution when the need arises without having to rewrite large swaths of code. Re: (Score:2) For a vector both are equivalent, but for structures like "list", iterator is faster. Re:"Not a major overhaul"? (Score:5, Insightful) Take goto, which is all that's really needed for modern programming. Using goto, you can implement subroutines (gosub like functions), while and for loops, etc. But each time you implement one of the higher constructs, your implementation will be slightly different.
And if you force yourself to use the optimal implementation all the time, you'll just be writing a lot of boilerplate code. So having functions and while loops implemented as language constructs is a good thing. Similarly, std::thread + lambdas is a great addition. goto is all you need for modern programming? (Score:3) (Not arguing with your broad point that these new additions to C++ will be useful.) Goto is in no way sufficient for modern programming. You also need some way to manipulate whatever passes for the (effectively stack) constructs that allow nested parameters and locals. And you need some way to hide the bulky extra code (as in text macros and such). I got bit on this when I should have known better, back in '97 or so -- using a Fujitsu CoBOL, forgot that the old CoBOL PERFORM was kind of like the old BASIC GOSUB. Re:"Not a major overhaul"? (Score:5, Informative) Don't forget initializer lists, variadic templates, non-static data member initializers, finally fixing that Template> (note the >>) thing, rvalues, nullptr, strongly-typed enums, constructor improvements (holy god we don't have to rewrite every fucking thing every fucking time or split off into an ::init()), user-defined literals which is crazy cool combined with templates and initializer lists, and lots of stuff I'm sure I'm forgetting about. Since starting on C#, I've kind of felt like I'm back in the dark ages in C++, even as it remains my favorite language. I've already started using a lot of these improvements, and while C++ still has its rough edges, the improvement in "fun" while coding is massive. No more for (some_container_type<vector<map<int, string> > >::reverse_iterator aargh = instance.begin(); aargh != instance.end(); ++aargh) for me!
I was watching Bjarne talk about C++11 at Going Native and now I'm anxiously awaiting our (long overdue) transition to VS2010 (from 2005). I know it doesn't cover all the good stuff, but it has a lot of the goodness in there. Though, I'm writing cross platform code with a Mac team that refuses to use Clang, so God knows when I'll be allowed to use anything from C++11 :( Re: (Score:2) Re: (Score:3) firstly, I wish I was using VC6 still - the IDE was the best there ever was, fast, simple, didn't flicker or use up all your RAM. However, you can use a different IDE [wordpress.com] if you must keep the old compiler for backwards compatibility. Alternatively, you can replace the compiler [intel.com] in the IDE. Re: (Score:2) Wait, there is an object oriented version of C? These are days of miracle and wonder. Re:In practice it's like a different language. (Score:5, Insightful): (Score:2) class A { public: enum Type { TYPE_A, TYPE_B, TYPE_C } mType; void Foo (); }; class B : public A { void Foo (); }; class C : public A { void Foo (); }; A * p Re: (Score:2) The practices you have listed are actually good C++ programming (except "custom string and array classes", whatever that means). Just because there is a bad library bundled with the language, it does not mean that everyone must prefer it over the native libc, it is useful only when it fits the purpose of the program. Re:In practice it's like a different language. (Score:5, Insightful) >Time to join the 21st century grandpa. FILE* leaks. Hell no. And get off my lawn.. FILE* leaks? I assume by this you mean when sloppy programmers fail to close their files and you start burning through file descriptors. Sounds like a bug to me, and again, stop sucking. Or do what we do - throw an object with a destructor containing fclose() around it. Then you get all the awesomeness of FILE* (including those awesome formatting commands like fprintf and fscanf) without the danger of your file staying open when something goes nuts.
Why on earth would you want memcpy() to call anything? It's a low level byte move. Anybody with five minutes of familiarity with it should know that. If you wanted something different, use the assignment operator. void* have all sorts of applications, most recently to me in writing architecture neutral VMs where really all the native machine knows is that it's moving around some sort of pointer. Now the custom string and array classes? That I'll agree on. Troll on. Re:In practice it's like a different language. (Score:5, Insightful) printf() isn't typesafe, but it's a fuckton more readable than all that cout formatting stuff Readable? Meh. Localisable? Now that is an important attribute. Consider this trivial bit of code: You want to localise this, so you wrap each string in a function that returns the localised version (by the way, gcc and clang have an attribute that you can put on functions that says if the output is safe to use as a format string in any case where the input is). So now you have something that looks like this: Okay, no problem. Now let's consider the C++ equivalent: Harder to read? Maybe, but the tokens are all in the order that they'll appear in the output, so I'll give C++ that one. Now let's localise it. How about something like:. No problem in the C version, the format string is just translated as "Le %2$s %1$s". The arguments are reversed (well, it's a little more complex because you also need agreement between the noun and the article, but I'll skip over that for now - it can be solved in the same way). What about the C++ version? Well, because the word order is in the code, not the data, you need different code paths for English and French. And that's just with a three-word sentence. Now consider the kind of sentence that actually appears in a typical UI. Consider German word-order rules. Your simple C / Objective-C strings file has to be replaced by a huge tangle of C++ code. Re:In practice it's like a different language. 
(Score:4, Funny) It seems like it would be a lot easier to just change the word order of the offending languages than to screw up perfectly good C++. Re:In practice it's like a different language. (Score:4, Informative) Re:Uh. No. (Score:4, Informative) Operator overloading is to address the ambiguity of having by-value and by-reference passing of objects being possible via different semantics in the language. If I have a pointer and write: ObjType* pObjRef = then it's pretty obvious I have an object reference. But what does ObjType Obj2 = Obj1; actually mean? C++ defines this type of transaction as always being a copy operation. But an object is a complex datatype - doing a straight copy of all its memory doesn't necessarily give me sensible behavior. So you need operator overloading to let you enforce sensible behavior. That you can also use it to create syntactic sugar or completely illogical behavior doesn't make it bad though. And absent a garbage collector, I'm not sure it actually makes sense to do what C# does and try and treat all object variables as references (in that when would you deallocate things?) Re: (Score:2) I still prefer printf to cout. Though php's echo is better than both, which is sad. Re:In practice it's like a different language. (Score:5, Insightful) Don't be a zealot, a pragmatic programmer should find the right trade-off between reusing code and writing an optimal one for a specific problem/area. Nothing can be optimal in all cases - sometimes you need to be as close to hardware as possible [beyond3d.com] at the expense of unreadable/inflexible code (for me, those are the most interesting and challenging areas), and sometimes you only care about maintainability of your code by a disposable programming drone.
STL isn't suited for all possible uses, sometimes you need [open-std.org] your own string and container classes. I don't see what that has to do with the above. I suppose there are some rare cases where the standard library isn't appropriate, but are you arguing this is an excuse *never* to use it? By the way, your reference seems severely dated. Some of its complaints are still valid, but seem based on the state of STL support ten years ago. Re:In practice it's like a different language. (Score:4, Informative) I think he simply claimed that you have to deal with C++ written by C programmers all too often. That's my experience, too. I'm probably one of those C programmers. I use C++ features when I feel they are appropriate, but my definition of "the right thing" changed over years. I started to value custom-tailored solutions, often hardware- and problem-specific. No longer I'm trying to find (or create) code that would fit all (or even the most) cases, and micro-managed security is of lesser concern to me. I don't see what that has to do with the above. I suppose there are some rare cases where the standard library isn't appropriate, but are you arguing this is an excuse *never* to use it? There are always multiple tradeoffs involved, that's why I am very wary of saying "never". STL is no silver bullet, this is what I'm saying. E.g. STL is bad at managing memory, it's hard to make it NOT allocate it dynamically - yes, you can write a custom allocator for it, but STL allocators are inflexible and also - for some poorly thought reason - manage object construction, not just memory allocation, making this unnecessarily hard. STL is a template library, and templates produce separate implementation for every type used - whereas you can have a single void *-based container for all your POD types, with templates providing just minimal wrapper. 
Also, STL makes it hard for you to control how often memory (object instances) is copied - there's no way to influence its behavior if memory copying tops your profiler results. If you are developing primarily for desktop computers which are quite beefy (and also vague in terms of hardware), this may matter less (although it will never cease to matter - just think of all that slow code running on our desktops which eats the performance improvements we still (marginally) get, making desktop performance appear flat since 2004), but if your target is well-defined hardware-wise and if you know/set "upper bounds" for all the practical problem sizes your program is designed to handle, you can think of more optimal solutions for your case. This is one of reasons why you can still play the newest games on 2005 hardware (XBox 360) which hasn't even got an out-of-order CPU, while running the newest desktop software (not games!) on PC of the same era is problematic. To sum up, I'm not saying "never use STL", but I would not say "always use STL" either. People claim that using "standard" code helps you create more secure, robust, performant programs - but in my experience, you will end up running a lot of custom tools (all kinds of profilers and validators) anyway before shipping your binary, that makes the point of "performant/secure code by design" moot. When profiler tells you that some code in an STL container underperforms, reimplementing it - or even understanding why that happens - is harder than fixing/improving your own implementation, yet you'll probably have to do that anyway. And no one starts "from scratch" these days, each "low-level" programmer I know has their own set of STL-alike classes. By the way, your reference seems severely dated. Some of its complaints are still valid, but seem based on the state of STL support ten years ago. 
Well, it is perhaps improving with new "move" constructors and whatnot, but that's actually a problem: it's better to have cross-platform code with predictable behavior than code that is supposedly optimal everywhere, but which depends on highly varied implementations. What good is it for me to know that some compilers (e.g. gcc), on some platforms (e.g. x86), handle new, C++11-ready STL well - gcc isn't particularly good at optimizing code even for x86, and another compiler that is a good optimizer may not support the newest features and make my STL-heavy code suck. P.S. Also, STL (in implementations I saw) is written in quite unreadable, convoluted style - I always wondered why. This is of course less rele Re: (Score:3, Insightful) #warning rant coming "printf instead of std::cout" I love c++, but it definitely has some dark spots, even after the long overdue c++0x update (c++11 whatever). Thank deity we finally have standard smart pointers, better templates in various ways, move semantics, etc. It was really necessary to finally have that, even if it was a decade late (I'm sure we lost quite a bunch of good c++ programmers and projects to more newfangled languages in the delay). Even though we often won't be able to use it until all co Re: (Score:3) Maybe because pure C++ with all its advances looks worse than random perl code or assembly. I'm not asking for cobol like scheme, but come on, enough of the !@#$%^&*(){}|} magic I want auto! (Score:2) Unfortunately I code in java these days.. chances that oracle will see the light? :-) Re:I want auto! (Score:5, Funny) chances that oracle will see the light? :-) Last time they saw the Sun, it did not end well... Re: (Score:2) chances that oracle will see the light? Roughly the same as the chance that Larry Ellison will fly his MiG 25 straight into the side of a mountain. Re: (Score:3) Re:I want auto! (Score:4, Insightful) Now what happens when you change the name of a return type from a commonly used function?
Have fun with your maintenance nightmare. If you are lucky the compiler will complain, if you are unlucky you break existing code without knowing it. Re: (Score:2) Re: (Score:3) Re: (Score:2) Re: (Score:2) However, take a look at clang [llvm.org]. One day, this and much more will be possible. Re:I want auto! (Score:5, Insightful) Prohibiting use of "var" is gross over-kill. There are times where it's use is not recommended, certainly. But even in those cases, it can lead you to the point that there's a "bad code smell" where methods aren't named well, variables aren't named well, classes aren't named well. var x = foo(); is definitely less readable. But var x = new ComplexObject(); is every bit as readable, if not more so, because you don't have a redundant "ComplexObject" in the declaration. You always know exactly what type "x" is. It's also very useful in cases where the return type is a complicated generic. it saves a lot of typing, and is definitely more readable. Here is a very good discussion on the benefits and uses of "var": [blogspot.com] Re: (Score:2) Re: (Score:3) Prohibiting use of "var" is gross over-kill. Gross overkill is, of course, the main point of most coding standards. Re: (Score:3) In general I agree.. but for some stuff (iterating through a collection for instance) I think it is acceptable. When you start dealing with collections of collections, iterator types become a nightmare! in Java of course you can just use the foreach syntax (Kitten kitten : kittens). Re: (Score:3) in Java of course you can just use the foreach syntax (Kitten kitten : kittens). Which is also a C++11 feature, now. Except they call it a "range-based for loop." for ( Kitten& kitten : kittens ) Using a reference is optional, I think, but is probably the better way to go in the general case (so you can modify the object instead of a copy).++. Re: (Score:3) (object) ~= (void*) I mean that in spirit, not in actual details. 
Yes, I know there's about a billion, trillion ways they're different. Re: (Score:2) Seriously get a better IDE if you think using var is confusing. var saves me from having to do this: Dictionary, AnotherClass myCollection = new Dictionary, AnotherClass(); instead I can go: var myCollection = new Dictionary, AnotherClass(); which is significantly more readable and removes redundant overhead of typing a type twice. And if you can't figure out what type it is, you should change your profession. or even better: var collection = someclass.ACollection; without having to explicitly know what type AColl Re: (Score:2) Here's a construct that I used today: If I want to iterate through this, I need to create a variable of type . Since I'm not totally sure of the exact type of my map, I need to find out where it's declared and copy that. If I then change that vector to a list, I need to change every iterator that uses that. Okay, I should probably use a typedef, but sin Re: (Score:3) Auto in K&R was a storage class, not a request for compiler type inference. Re: (Score:2) And also completely redundant (every local variable is an auto variable unless declared differently), and thus completely useless and never used. I think it's wonderful they managed to re-purpose a dead keyword into something wonderfully useful, and the name even fits so well. Re:I want auto! (Score:5, Informative) "auto" was always implemented, since the very first version of C, it just had a different meaning - it means that variable has an "automatic storage class" (as opposed to "static storage class" etc). Because automatic was the default, it was almost always redundant, but it did have a meaning. It actually goes way back to B [wikipedia.org], which only had a single data type - machine word. 
Variable declarations looked somewhat like C, but, for the lack of type, they started with the storage class instead, i.e.: In C, we've got types, so you'd normally write "static int y" for a static local, and just "int z" for an automatic one - "auto" being implied. However, C inherited some of B's semantics as "default int" - i.e. if the declaration is clearly a variable, but it omits the type, assume that type to be "int" (i.e. machine word). So in C the above code snippet from B is actually valid, and declares x, y and z to all be ints. Then "auto" got inherited by C++, which dropped the "default int", making auto completely redundant - you couldn't write "auto x" in C++ anymore, and in all other cases where you could use "auto", like "auto int x = 123", it was always redundant. So when they appropriated it for type inference in C++11, it was technically a breaking change - it just wasn't ever used by anyone in production code in the old way, so nobody noticed. Re: (Score:2) Then simply prepend the type identifier to the variable name, simples! Or, if you don't do this, you can use the IDE's ability to tell you the type by hovering over the variable. In which case, it really doesn't matter if you declare it using var or not. Re:I want auto! (Score:5, Insightful) Because var still works within the type system and gives you compile-time errors, and casting to object is a massive sledgehammer that delays errors until runtime (with all of runtime error checking's glory, like only failing some of the time, which you can basically read as "never on a developer's machine, sometimes on TQA's machine, and always on a customer's machine (unless support is on the phone with them)"), and a stupid idea in general (I'm looking at you, Objective-C!). Re: (Score:2) It gives you some of the benefits of duck typing with very few of its drawbacks, and the reduced noise makes your code clearer.
I know that seems counter-intuitive to you right now, but if you try it, you'll see the benefits, and realize that the type names appearing next to the declarations are, in most cases, redundant or irrelevant when it comes to actually understanding what the code does. It's more of a fuzzy thing, but I started using it at my job, everybody hated it for two weeks, I gave no fucks and k

Re: (Score:2)

If it's just the same function, build the properties for the anonymous type before initializing the anonymous type.

Can you explain what you mean by this? You can't pass an anonymous type to another function as its type.

Actually, you can, if that function is generic. It's why you can pass an IEnumerable of anon type objects to something like .Where(), and access properties on that object without any casting within the lambda that you also pass.

Nothing to see (Score:2)

Seriously, a big disappointment.

and in the next revision... (Score:5, Funny)

For my next trick (Score:3)

1) Hordes of comments are about to appear that denigrate C++. 2) 95% of these commenters have never written any C++ code more sophisticated than

Not seeing (1) (Score:5, Insightful)

Re: (Score:2, Funny)

That's way too verbose. I overloaded the ;

Re: (Score:2)

C++ SUCKS!! It's a piece of shit language with a very crappy garbage collector

If you are making extensive use of the C++ garbage collector and it isn't working very well, I can understand your frustration :D

Re: (Score:3)

Garbage collectors only manage memory. Smart pointers, on the other hand, can manage any kind of resource - whatever you release in your destructor. That's why languages like Java and C# still need stuff like Closeable and IDisposable, and syntactic sugar for them to avoid writing try/finally ladders. Whereas in C++, resource management has always been done right, and memory is treated as just another kind of resource.

Re: (Score:3)

I'll mention that very few C++ programmers know what they are.
That's the problem with those programmers, not the language. C++ is powerful and fast by design;

Not really. Most Java programmers know about the weak references in Java. And in Java they are less necessary than they are in C++ (because the C++ gc model stemming from smart pointers is based on reference counts). If a necessary tool goes underused in a certain context, then it is the problem with how the context is presented to its audience -- not with the audience.

But C++ is a very powerful tool when used right.

Any tool is only as powerful as it is useful. It is not reasonable to judge tools only by the peak performance achieved by its most commi

Fascinating Software Engineering Challenge (Score:5, Insightful)

In some ways, a lot of what is being added to C++ makes me think of Scala, just less readable. While the additions and extensions certainly make things more interesting and potentially more powerful/easier for the *individual* programmer, I look forward to seeing what sort of interesting train wrecks happen when larger teams try to make use of these features. I certainly hope the debuggers are being updated to be useful when someone's written a template that uses a #define'd macro to specify a closure that is passed through an anonymous function, etc. This strikes me as the next generation's 'multi-threading' -- where almost every programmer claims they can handle multi-threaded programming, but very few actually do it well. Particularly in teams when interactions require coordination. Going to take a whole new round of winnowing the wheat from the chaff when it comes to finding developers who can actually use these features well without driving their coworkers insane.

Re: (Score:3)

Time to overhaul the academics. Nearly every engineering course here (I'm in India) has a couple of programming courses. A lot of students do coding some time or the other.
Yet not even a sentence is uttered about threads or parallelism, even though practically every computer they code on has multiple cores. They should probably introduce a course on parallel processing as an elective for freshmen.

Re: (Score:3)

There's a very good reason for this. Concurrent programming is hard. An acceptable coverage of concurrent programming cannot be given in the space of a couple of programming courses.

Re: (Score:3)

Both C++ and Scala are attempts to modernize limited, poorly designed languages (C, Java) by adding things like type parameters and functional features. There are a bunch more of those attempts and I think they have mostly been failures. It's roughly like: "Never try to teach a pig to sing. It wastes time and annoys the pig"

He's optimistic (Score:2)

FTA: I expect to see the first complete C++11 compiler sometime after I see the apocalypse.

Oh right... I guess that would be December 2012, wouldn't it? :) Seriously... while a lot of compilers implement some of the features, I really don't think there's a hope in hell of seeing any real progression to adopting the standard. With C, the standard developed around what many compilers were already doing... ditto with the original C++ spec.

Re: (Score:3)

The majority of it is implemented. [gnu.org]

Re:He's optimistic (Score:5, Informative)

> But C++11 describes a standard that absolutely nobody has ever got anywhere close to, so I don't imagine that there's going to be a lot of drive to adopt it.

Re: (Score:3)

C++98 was worse. There were export templates, which were a major mistake (read the proposal to remove it from C++11 for some amusement). There was Microsoft, who lost interest in C++ around that time, and condemned a generation of Windows programmers to Visual Studio 6 and a weak standard library implementation. And yet gcc was usable around 1999 or so. I started using the std namespace in late November that year.

Assigning new values to constants can be useful!
(Score:2)

I was amused by the comment: "Back then, I wrote Fortran subroutines which took computed dimension arrays by declaring the arrays with crazy bounds, numbers I hoped would never be used as constants, and then "assigning" the re

Just awful what C++ is turning into (Score:2)

I watched a few of the "Gone Native" webcasts on the C++ extensions, and it's crazy what they're doing with the language.

Re: (Score:3)

They need to just start from scratch and create a limited subset of features that doesn't pretend to be C and doesn't lug around all of the past mistakes in the standard, and call it C+++.

They did. Check out the D Programming Language.

So, has it gained a standard ABI yet? (Score:2)

Probably the last thing preventing it from being truly safe and useful in shared libraries across implementations. How many years has it been?

So what is a good book to learn C++11 (Score:2)

Re:So what is a good book to learn C++11 (Score:5, Funny)

Dante's Inferno. Sample quote:

Obscure, profound it was, and nebulous,
So that by fixing on its depths my sight -
Nothing whatever I discerned therein.

Meh... (Score:3, Interesting)

Good for showing off, bad for getting actual work done, esp for projects that last more than a year or two.

Re: (Score:3)

I gave up on C++ years ago. It has really become a 'geek cred' language with a constantly changing 'right' way and aesthetic, perfect for figuring out if a fellow geek is from the same snapshot of teaching you came from, but that is about it. It has become overly complex with redundant language features that one needs to keep relearning in order to understand other people's code, and of course with complexity comes the ability to show off your knowledge through doing things in 'clever' ways. Good for showing off, bad for getting actual work done, esp for projects that last more than a year or two.

Well, what are the alternatives? In my domain, it's plain old C.
Java is not an option, and from the little I hear fashion changes *more* quickly there anyway. I think your impression of the changing "right" way is warped, or perhaps you're in a subculture I'm not familiar with. I see these different ways: (a) C with classes, from C programmers who've read a book but don't quite get it. (b) OOP/Design Patterns rule! Popular in the mid-1990s. Lots of inheritance and stuff; every piece of code aspiring to gen

Re: (Score:3)

Re:News? (Score:5, Funny)

That said, as a professional C++ developer working in HPC, this is exciting.

Stop pretending and get back to your FORTRAN!

Re: (Score:2)

prominent compilers support it? The one supporting the most platforms, gcc, doesn't have it all yet

Re: (Score:3)

which is in the new standard

No it's not. Vendors have always been allowed to tack on GC. None of the big ones do.

Re: (Score:2)

WTF?

Re:George Bjarne Lucas Stroustrup (Score:4, Insightful)

Re: (Score:2, Informative)

It is easy to refute your argument on memory safety and auto with a single line of code:

auto obj = make_shared<SomeClass>( arg1, arg2 );

Lambda expressions can only be assigned to an auto, because the actual type is compiler defined.

auto some_callable_type = []( float f ){ return f * f; };

Concurrency isn't supported? What more do you want apart from: threads, mutexes, atomics, thread local storage, a concurrency safe memory model, futures, promises, async tasks and thread exception transfer?

Re: (Score:3)

The main purpose of "auto" is to allow

for (auto p = arr.begin(); p != arr.end(); p++) { ... }

without worrying about the type of arr.

No, it's not. The above is written as follows in C++11:

for (auto& elem : arr) { ... }

And, of course, you can just as well use the actual element type there instead of auto.
https://developers.slashdot.org/story/12/02/24/1954225/stroustrup-reveals-whats-new-in-c-11?sdsrc=prevbtmprev
Name: jl125535
Date: 03/27/2003

FULL PRODUCT VERSION :
java version "1.4.1"
Java(TM) 2 Runtime Environment, Standard Edition (build 1.4.1-b21)
Java HotSpot(TM) Client VM (build 1.4.1-b21, mixed mode)

FULL OS VERSION :
Linux galahad 2.4.20 #2 Mon Dec 16 22:53:44 CET 2002 i686 unknown (RedHat 7.1)

A DESCRIPTION OF THE PROBLEM :
When adding a KeyListener to any kind of component, keyReleased events are fired while the key on the keyboard is held down. This is different in Windows: when holding a keyboard button down, keyPressed and keyTyped are fired, but keyReleased is only fired when releasing the button (behaviour as wished). 1.4.2 on Solaris 8 also exhibits the desired behavior. Thus, on Linux it is difficult to write suitable KeyListeners, for instance for games, since you can never react appropriately when you do not know if a key is held or not. A solution using a mechanism that will use timestamps in order to check every n milliseconds whether an event has been fired or not is way too poor to implement, sorry.

STEPS TO FOLLOW TO REPRODUCE THE PROBLEM :
Compile the code below and press a key.

EXPECTED VERSUS ACTUAL BEHAVIOR :
keyReleased should only be fired when the key is actually released. Instead, keyReleased is fired while the key is held down.

REPRODUCIBILITY :
This bug can be reproduced always.
---------- BEGIN SOURCE ----------
import java.awt.event.*;
import java.awt.*;
import javax.swing.*;

public class Test {
    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        JFrame fr = new JFrame();
        JLabel l = new JLabel("a label");
        fr.getContentPane().add(l);
        fr.pack();
        fr.show();
        l.requestFocus();
        fr.setDefaultCloseOperation(fr.EXIT_ON_CLOSE);
        l.addKeyListener(new KL());
    }

    private static class KL implements KeyListener {
        public void keyPressed(KeyEvent e) {
            System.out.println("pressed");
        }
        public void keyReleased(KeyEvent e) {
            System.out.println("released");
        }
        public void keyTyped(KeyEvent e) {
            System.out.println("typed");
        }
    }
}
---------- END SOURCE ----------
(Review ID: 182446)
======================================================================
https://bugs.java.com/bugdatabase/view_bug.do?bug_id=4839127
Translation(s): none

The basic idea behind initramfs is that a (gzipped) cpio archive holding an early user-space environment is unpacked by the kernel at boot, which then hands control to a /init program inside it. This has several advantages:

- Customizing the early boot process becomes much easier. Anybody who needs to change how the system boots can now do so with user-space code; patching the kernel itself will no longer be required.
- Moving the initialization code into user space makes it easier to write that code - it has a full C library, memory protection, etc.
- User-space code is required to deal with the kernel via system calls. This requirement will flush a lot of in-kernel "magic" currently used by the initialization code; the result will be cleaner, safer code.

It includes:

- A small C library ("klibc") will be merged to support initramfs applications.
- A small kinit application will be created with klibc. In the beginning, it will only do enough work to show that the mechanism is functioning properly.
- The initrd (initial ramdisk) subsystem will be moved into kinit, and out of the kernel itself.
- The mounting of the root filesystem will be moved to user space. A lot of code for dealing with things like NFS-mounted root filesystems will go away.

The kernel currently has 3 ways to mount the root filesystem. To remain backwards compatible, the /init binary will only run if it comes via an initramfs cpio archive. If this is not the case, init/main.c:init() will run prepare_namespace() to mount the final root and exec one of the predefined init binaries.

See: CategoryKernel | CategoryBootProcess
https://wiki.debian.org/initramfs
Opened 8 years ago
Closed 7 years ago

#2045 closed defect (worksforme)

Implicit limit of 1000 builders per slave

Description

It has been noted that when > ~1000 builders are configured for a single slave, the slave cannot go online. This /must be/ due to a protocol packet size limit, PB transmission size. The message noted at logs is:

File ".../site-packages/buildbot-0.8.4_pre_741_g2089c5b-py2.6.egg/buildbot/process/slavebuilder.py", line 107, in <lambda>
    self.remote.callRemote("setMaster", self))
...
File ".../twisted/spread/flavors.py", line 127, in jellyFor
    return "remote", jellier.invoker.registerReference(self)
File ".../twisted/spread/pb.py", line 666, in registerReference
    raise Error("Maximum PB reference count exceeded.")
twisted.spread.pb.Error: Maximum PB reference count exceeded.

Change History (5)

comment:1 Changed 8 years ago by dustin
- Keywords performance added
- Milestone changed from undecided to 0.8.+
- Type changed from undecided to defect

comment:2 Changed 8 years ago by armenzg

comment:3 Changed 8 years ago by dustin

Well, the value to patch is here: so you can monkey-patch that fairly easily at runtime:

from twisted.spread import pb
pb.MAX_BROKER_REFS = 2048

I don't necessarily think that's a good idea! I'll leave this bug open to track finding a better solution that doesn't burden PB with so many references.

comment:4 Changed 8 years ago by peterschueller
- Cc schueller.p@… added

ah finally I know why suddenly a part of my builders can no longer attach! :( Thanks for the quickfix, however it is important to note that this has to be done BOTH in master and slave (quite obvious but I first failed at that nevertheless). Another issue is the 10 second timeout for "buildbot restart" or "buildbot reconfig" commands. I have huge amounts of builders, so this always times out, which is kind of uncomfortable.
comment:5 Changed 7 years ago by dustin
- Keywords master-slave added; performance removed
- Resolution set to worksforme
- Status changed from new to closed

This workaround is useful for folks running into this limit. The real fix is a new master/slave protocol. We might be hitting the same issue in the thread "Can the PB limit be increased?"
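The monkey-patch from comment 3 is just a module attribute assignment made before any PB connections are created. A minimal runnable sketch follows; the try/except stand-in exists only so the pattern runs without Twisted installed, and in a real master.cfg or buildbot.tac you would import twisted.spread.pb directly:

```python
# Workaround sketch for the ~1000-builder cap: raise Twisted's PB broker
# reference limit before any connections are made. Per comment 4, this must
# be applied on BOTH the master and the slave.
try:
    from twisted.spread import pb  # the real module, when Twisted is present
except ImportError:
    import types
    pb = types.SimpleNamespace(MAX_BROKER_REFS=1024)  # stand-in for this demo

pb.MAX_BROKER_REFS = 2048  # pick a value comfortably above your builder count
print(pb.MAX_BROKER_REFS)
```

Because Python modules are singletons, every later `from twisted.spread import pb` in the same process sees the raised limit, which is why a one-line patch at the top of the config is enough.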
http://trac.buildbot.net/ticket/2045
Components and supplies

Apps and online services

About this project

Functioning and uses

A datalogger is a particular assembly which is able to record some values on a support (like an SD card) periodically. In other words, a measure, represented by a value, is taken every N seconds. So this system can be used in many projects that need a measurement every N seconds and that have to record it on a non-volatile memory (for meteorologists, this type of system can be used to make probes like a hygrometer or a thermometer). When you plug your SD card into a computer, an Excel-readable file named Light is created, and all the values of the analog reading appear in a column named "Brightness value per seconds".

Connections

There are no connections to make between the Arduino UNO and the Datalogger Shield; you only have to plug the datalogger shield on the Arduino UNO. The code of this project records on the card the brightness once per second. The values come from a photoresistor, so connections are really simple.

Code

Datalogger code (Arduino)

#include <SPI.h>
#include <SD.h>

const char* filename = "Light.csv";
File file;

void setup() {
  Serial.begin(9600);
  pinMode(10, OUTPUT);
  if (!SD.begin(10)) {
    Serial.println("Error : Push the reset button");
    for (;;);
  }
  file = SD.open(filename, FILE_WRITE);
  if (file.size() == 0) {
    file.println("Brightness value per seconds");
    file.flush();
  }
}

void loop() {
  measure();
  delay(1000);
}

// Read the photoresistor on A0 and append the value to the CSV file.
void measure() {
  int lightvalue = analogRead(A0);
  Serial.println(lightvalue);
  file.println(lightvalue);
  file.flush();
}

Author
MisterBotBreak - 25 projects - 60 followers
Published on October 29, 2019
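Once the card is back in a computer, Light.csv is plain text: one header line followed by one reading per line. A hedged sketch of reading it back with Python's csv module; the inline sample string stands in for the real file so the snippet runs anywhere, and the sample readings are made up:

```python
# Parse the datalogger's output: a header line ("Brightness value per
# seconds") followed by one integer reading per line.
import csv
import io

# Stand-in for open("Light.csv") so the sketch runs without an SD card:
sample = "Brightness value per seconds\n512\n530\n498\n"

with io.StringIO(sample) as f:
    rows = list(csv.reader(f))

header = rows[0][0]                       # the column title written in setup()
readings = [int(r[0]) for r in rows[1:]]  # one value per call to measure()
print(header, readings)
```

Swapping `io.StringIO(sample)` for `open("Light.csv")` gives the real thing; since each reading arrives one second apart, the row index doubles as a timestamp in seconds.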
https://create.arduino.cc/projecthub/MisterBotBreak/how-to-use-a-datalogger-ffd5f4
24, 2005 at 05:49:09PM -0300, bulia byak wrote:
> Before we can think about a release, one thing we absolutely must do
> is clean up the current mess with dialogs. Everyone who did dialogs
> recently (and there were a lot) did it in their own incompatible way.

I'm glad this is getting attention.

> (Personally, I attribute this to the lack of attention and leadership
> from the initiators of the gtkmm transition, but there's no use to
> argue about that now; let's just fix what we have.)

No, much of the mess is due to work done before the gtkmm transition was really initiated. The dialogs that were transitioned into src/ui/dialog/ are all fairly consistent, the one exception being the Preferences dialog; however, there had been discussions of redesigning it anyway. It is true that the last two weeks I have not been putting attention into it, but I think you will find that due to the months I had put into it leading up to this, the project is in a much better position to establish a full solution than it ever was before. The cause of the shift in attention is probably now obvious to those who follow slashdot. My company had a round of layoffs two weeks ago, and while I wasn't affected, I had to reprioritize my commitments. Anyway, I'm sorry this happened, and apologize if my lack of attention has forced people to scramble. However, the good news is that finally more people are getting seriously involved in this work.

> Recently I've been working on the Dialog and DialogManager classes in
> ui/dialog, and I almost fixed them with regard to things like
> transientizing, remembering geometry, and F12 behavior. There are
> remaining issues still but I'm pretty sure I'll be able to fix them.

Thanks, I'm glad you took care of completing this. I had never really understood how all this stuff should work, and had been hoping you could put some effort into it.
I hope that you found that with the common Dialog class, this will solve the problem in a permanent way, instead of re-implementing the transient code for each new dialog.

> Unfortunately, many dialogs do not use these classes or use them in a
> wrong way. The old gtk dialogs need not be touched at this time; they
> are OK as they are, and we can always convert them later. What needs
> to be fixed urgently is the recently added gtkmm dialogs, all of which
> are broken in different ways.
>
> The simplest dialog to use as a template is ui/dialog/messages.cpp;
> another is ui/dialog/memory.cpp. To create a dialog, you need to pass
> the prefs path (path in the preferences.xml for the dialog settings;
> make sure it exists in preferences-skeleton.h) and the verb that
> creates the dialog (make sure it exists in verbs.h/cpp). E.g.:
>
> Messages::Messages()
>     : Dialog ("dialogs.messages", SP_VERB_DIALOG_DEBUG)
>
> You also need to register the dialog's factory in the DialogManager
> constructor and to make sure its verb in verbs.cpp does the right
> thing:
>
> switch (reinterpret_cast<std::size_t>(data)) {
>     ...
>     case SP_VERB_DIALOG_DEBUG:
>         dt->_dlg_mgr->showDialog("Messages");
>         break;

Btw, this is intended to only be an interim solution. Ultimately, I would like to see a mechanism that allows the dialogs to be registered in a way that would allow invoking them without passing a string. I.e., instead of a switch statement with cases to handle each verb defined as above, you could replace the entire switch statement with just:

dt->_dlg_mgr->showDialog(reinterpret_cast<std::size_t>(data));

The advantage to this is more than just saving lines of code, though. By doing dialog invocation entirely dynamically, it will enable dynamic addition of dialogs to Inkscape at run time. This in turn means that Ted's dynamically loaded extensions will be able to add their dialogs to Inkscape when they're loaded.
My hope is that by allowing extensions to be better decoupled from the core codebase, it will give us better ways of solving extension dependency issues more flexibly.

> Also don't forget to remove your own F12 handlers, event handlers,
> transientizing code, title setting, size setting etc. if you have it
> in your dialog classes. This is all in Dialog now. Not only will this
> stuff finally work as it must, but this will lead to a significant
> reduction of code.

Btw, if anyone has a better name for the F12 handler than "f12_handler", PLEASE feel free to rename it. ;-)

Also, I notice that in some cases people are using the 'Impl Trick' such as is shown in memory.*. I had demonstrated this trick a few months ago, and I still feel it is an elegant way to solve header inclusion issues, but the design of Dialog::Manager makes this completely unnecessary. Notice in verbs.cpp and other places that use dialogs, that with the new design the verb handler doesn't need to actually "know" what the dialog is, but just calls routines in the parent Dialog class. This means that the individual dialog headers do NOT need to be included in verbs.cpp or any other places that launch dialogs; thus, the impact of having the private gtkmm widget members defined in the .h is not propagated beyond Dialog::Manager. Of course, there's nothing *wrong* with using the Impl trick, but I found that it conceptually simplifies the dialog code to not use it, which I feel means the dialog code will be easier for others to understand and maintain.

Thus, I would encourage using the messages.* dialog as the template to follow. This is Ishmal's old Debug dialog, that I converted into the new format. Note that I renamed it from Debug to Messages in order to keep it consistent with what it is called in the Inkscape menus.
I notice that in the old dialogs there are major discrepancies between what the dialog is called within the application and what it is called in the code; this leads to unnecessary confusion. If you wish to rename the dialog inside the application, please also rename the file, so when people need to debug a dialog's behavior, it will be extremely obvious which file it is. ;-)

Also, there is a naming convention with the new dialog design, that jonadab and I tried to adhere to. First, do not include 'Dialog' in the dialog's name; it's redundant, and since the class is in the "Inkscape::UI::Dialog::" namespace already, it is unnecessary and just makes the name longer. ;-) Second, use '-' to separate words, so it should be 'clone-tiler.*' not 'clonetiler.*'. Third, make the name as close to the in-Inkscape name, as mentioned above.

Oh, and be attentive to how the name of the dialog shows up in various places. Remember that in Inkscape it shows up in the verb (i.e., the menu and statusbar) as well as in the title of the dialog itself. In some cases in the old dialogs I noticed this got to be inconsistent; it will look a lot better to users if it is consistent throughout.

Bryce

----- End forwarded message -----
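The dynamic-registration idea Bryce sketches is language-agnostic; in Python-flavored pseudocode, the switch statement collapses into a dictionary of factories that extensions can add to at load time. All names below are illustrative, not Inkscape's actual API:

```python
# Sketch of verb-keyed dialog registration: each dialog registers a factory
# under its verb, and show_dialog() looks the factory up at runtime, so
# dynamically loaded extensions can register dialogs without any core switch
# statement knowing about them.
class DialogManager:
    def __init__(self):
        self._factories = {}  # verb -> factory callable
        self._open = {}       # verb -> already-created dialog instance

    def register(self, verb, factory):
        self._factories[verb] = factory

    def show_dialog(self, verb):
        # Create the dialog on first use, then reuse the same instance.
        if verb not in self._open:
            self._open[verb] = self._factories[verb]()
        return self._open[verb]

class MessagesDialog:
    title = "Messages"

dm = DialogManager()
dm.register("SP_VERB_DIALOG_DEBUG", MessagesDialog)  # done at startup or extension load
d1 = dm.show_dialog("SP_VERB_DIALOG_DEBUG")
d2 = dm.show_dialog("SP_VERB_DIALOG_DEBUG")
print(d1.title, d1 is d2)
```

The lookup table makes adding a dialog a one-line registration rather than an edit to a central switch, which is exactly the decoupling the email argues for.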
https://sourceforge.net/p/inkscape/mailman/message/14324755/
Python Recipe: Grab page, scrape table, download file

Scraping seems like such a sought-after skill that it feels like a good idea to throw up a basic walkthrough here, where beginners can cut and paste code and any feedback can be memorialized.

- Install the necessary Python modules, mechanize and Beautiful Soup.
- Train our computer to visit Ben's list of The Greatest Albums in the History of 2007.
- Parse the html and scrape out Ben's rankings.
- Click through to Ben's list of The Greatest Albums in the History of 2006 and repeat the scrape.
- Do it all over again, but this time download the cover art.

1. Download the mechanize and Beautiful Soup modules. Install them.

There are a dozen different methods for going about our task, so you shouldn't assume the one I'm about to show you is the only or the best. It's just one way to do it. And doing it this way requires a couple additions to your Python installation, which might seem a little daunting but should be doable unless IT has your computer on double secret probation.

A module is a collection of functions, definitions and statements contained in a separate file that you can import into your script. Examples native to Python used in our earlier scripts included "re", "os" and "string." Out there on the Web, kind and ambitious programmers are constantly drafting, updating and publishing new modules to boil down complicated tasks into simpler forms. If it wasn't for these people, praise be upon them, I probably wouldn't have a job. If you want to take advantage of their contributions, you need to plug their creations into your local Python installation. It's usually not that hard, even on Windows!

To accomplish today's task, we're going to rely on two third-party modules. The first is mechanize, a Python translation of the popular Perl module for calling up and walking through Web pages. The second is Beautiful Soup, a superlatively elegant means for parsing HTML and XML documents.
Working hand-in-hand, they can accomplish most simple web scrapes. If you're working on Linux or Mac OS X, this is going to be a piece of cake. All you need is to use Python's auto-installer Easy Install to issue the following commands:

sudo easy_install mechanize
sudo easy_install BeautifulSoup

And now you can check if the modules are available for use by cracking open your python interpreter...

python

...and attempting to import the new modules...

from mechanize import Browser
from BeautifulSoup import BeautifulSoup

If the interpreter accepts the commands and kicks down the next line without an error, you know you're okay. If it throws an error, you know something is off.

I don't have a lot of Python experience working in Windows, but the method for adding modules that I've had success with is simply downloading the .py files to my desktop and dumping them in the "lib" folder of my Python installation. If, like me, you use ActiveState's ActivePython distribution for Windows, it should be easily found at C:/Python25/lib/. And when you browse around the directory, you should already see os.py, re.py and other modules we're already familiar with. So just visit the mechanize and Beautiful Soup homepages and retrieve the latest download. Dump the .py files in your lib folder and now you should be able to fire up your python interpreter just the same as above and introduce yourself to our new friends.

With that out of the way, we now have all the tools we need to grip and rip. So let's do it!
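As a scriptable alternative to the interpreter check above, the standard library can report whether an import would succeed without actually performing it. This is a modern-Python sketch; the module names are just the two the tutorial installs:

```python
# Check whether the tutorial's two third-party modules are importable.
# find_spec returns None when the module cannot be found on the path.
import importlib.util

def available(name):
    return importlib.util.find_spec(name) is not None

for name in ("mechanize", "BeautifulSoup"):
    print(name, "ok" if available(name) else "MISSING")
```

Dropping a check like this at the top of a scraper gives readers a friendlier failure than a bare ImportError halfway through a run.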
2. Open the command line, create a working directory, move there.

We're going to start the same way we did in the first three lessons, creating a working folder for all our files and moving in with our command line.

cd Documents/
mkdir py-scrape-and-download
cd py-scrape-and-download/

#!/usr/bin/env python
from mechanize import Browser
from BeautifulSoup import BeautifulSoup

mech = Browser()
url = ""
page = mech.open(url)
html = page.read()
soup = BeautifulSoup(html)
print soup.prettify()

Our first snippet of code, seen above, shows a basic introduction to each of our new modules. After they've been imported in lines two and three, we put mechanize's browser to use right away, storing it in a variable I've decided to call mech, but which you could call anything you wanted (ex. browser, br, ie, whatever). We then use its open() method to grab the location of our first scrape target, my favorite albums of 2007, and store that in another variable we'll call page. That's enough to go out on the web and grab the page; now we need to tell Python what to do with it. Mechanize's read() method will return all of the HTML in the page, which we store, simply, in a variable called html and then pass to Beautiful Soup's default method so it can be prepared for processing.

The reason we need to pass the page to Beautiful Soup is that there is a ton of HTML code in the page we don't want. Our ultimate goal isn't to print out the complete page source. We don't want all the junky td and img and body tags. We want to free the data from the HTML by printing it out in a machine readable format we can repurpose for our own needs. In the next step we'll ask Beautiful Soup to step through the code and pull out only the good parts, but here in the first iteration we'll pause with just printing out the complete page code using a fun Beautiful Soup method called prettify(). It will spit out the HTML in a well-formed format. To take a look, save and quit out of your script (ESC, SHIFT+ZZ in vim) and fire it up from the command-line:

python py-scrape-and-download.py

And you should see something like....
</title> </head> <body> <h2> The 10 Greatest Albums in the History of 2007 </h2> <table padding="1" width="60%" border="1" style="text-align:center;"> <tr style="font-weight:bold"> <td> Rank </td> <td> Artist </td> ...</tr></table></body></html> ...which means that you've successfully retrieved and printed out our first target. Now let's move on to scraping the data out from the HTML. #!/usr/bin/env python from mechanize import Browser from BeautifulSoup import BeautifulSoup mech = Browser() url = "" page = mech.open(url) html = page.read() soup = BeautifulSoup(html) table = soup.find("table", border=1) for row in table.findAll('tr')[1:]: col = row.findAll('td') rank = col[0].string artist = col[1].string album = col[2].string cover_link = col[3].img['src'] record = (rank, artist, album, cover_link) print "|".join(record) The second version of our script, seen above, removes the prettify() command that concluded version one and replaces it with the Beautiful Soup code necessary to parse the rankings from the page. When you're scraping a real target out there on the wild Web, the mechanize part of the script is likely to remain pretty much the same, but the Beautiful Soup portion that pulls the data from the page is going to have change each time, tailored to work with however your target HTML is structured. So your job as the scraper is to inspect your target table and figure out how you can get Beautiful Soup to hone in on the elements you want to harvest. I like to do this using the Firefox plugin Firebug, which allows you to right-click and, by choosing the "Inspect Element" option, have the browser pull up and highlight the HTML underlying any portion of the page. But all that's really necessary is that you take a look at the page's source code. Since most HTML pages you'll be targeting, including my sample site, will include more than one set of table tags, you often have to find something unique about the table you're after. 
This is necessary so that Beautiful Soup knows how to zoom in on that section of the code you're after and ignore all the flotsam around it. If you look closely at this particular page, you'll note that while both table tags have the same width value, an easy way to distinguish them is that they have different border values...

<table width="60%" border="1" style="text-align: center;" padding="1">
...
<table width="60%" border="0">

...and the one we want to harvest has a border value of one. That's why the first Beautiful Soup command seen in the snippet above uses the find() method to capture the table with that characteristic.

table = soup.find("table", border=1)

Once that's been accomplished, the new table variable is immediately put to use in a loop that is designed to step through each row and pull out the data we want.

for row in table.findAll('tr')[1:]:

It uses Beautiful Soup's findAll() method to put all of the tr tags (which is the HTML equivalent of a row) into a list. The [1:] modifier at the end instructs the loop to skip the first item, which, from looking at the page, we can tell is an unneeded header line. Then, after the loop is set up on the tr tags, we set up another list that will grab all of the td tags (the HTML equivalent of a column) from each row.

col = row.findAll('td')

Now pulling out the data is simply a matter of figuring out which order we can expect the data to appear in each row and pulling the corresponding values from the list. Since we expect rank, artist, album and cover to appear in each row from left to right, the first element of the col variable (col[0]) can always be expected to be the rank and the last element (col[3]) can always be expected to be the cover. So we create a new set of values to retrieve each, with some Beautiful Soup specific objects tacked on the end to grab only the bits we want.
rank = col[0].string
artist = col[1].string
album = col[2].string
cover_link = col[3].img['src']

The ".string" object will return the text within the target tag (similar to javascript's innerHTML method). But in the case of something like the cover art, which is an image tag, not a string value, we can step down to the next tag nested within the td column -- img -- and access its source attribute by tacking on ['src']. This would work just the same for a hyperlink (.a['href']) or any other attribute. And if you've got multiple layers of nested tags, you can simply step down through them with a linked set of objects. For example, "b.a.string" would retrieve the string within a link within a bold tag. There's great documentation on these and other Beautiful Soup tricks here.

After we've wrangled out the data we want from the HTML, the only challenge remaining is to print it out. I accomplish that above by loading the column values into a list called record and printing it out using a trick that will print them with a pipe-delimiter using the .join method.

record = (rank, artist, album, cover_link)
print "|".join(record)

Phew. That's a lot of explaining. I hope it made sense. I'm happy to clarify or elaborate on any of it. But if you save the snippet above and run it, you should get a simple print out of the data that looks something like this:

10|LCD Soundsystem|Sound of Silver|
9|Ulrich Schnauss|Goodbye|
8|The Clientele|God Save The Clientele|
7|The Modernist|Collectors Series Pt. 1: Popular Songs|
6|Bebel Gilberto|Momento|
5|Various Artists|Jay Deelicious: 1995-1998|
4|Lindstrom and Prins Thomas|BBC Essential Mix|
3|Go Home Productions|This Was Pop|
2|Apparat|Walls|
1|Caribou|Andorra|

See the difference?! Pretty cool, right? But, really, you could've done that with copy and paste. Or, if you're slick, maybe even Excel's Web Query.
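Before moving on, it's worth isolating the two little idioms doing the heavy lifting in that loop: the [1:] slice that skips the header row and the "|".join() call that glues the columns together. Here's a minimal, network-free sketch using made-up rows in place of the scraped table (the data is illustrative, not pulled from the page):

```python
# Stand-in rows for the scraped table; the first tuple plays the
# role of the header row that findAll('tr')[1:] skips.
rows = [
    ("Rank", "Artist", "Album"),
    ("10", "LCD Soundsystem", "Sound of Silver"),
    ("9", "Ulrich Schnauss", "Goodbye"),
]

lines = []
for row in rows[1:]:             # skip the header, just like [1:] in the scraper
    lines.append("|".join(row))  # glue the columns with a pipe delimiter

print("\n".join(lines))
```

Run it and you get the same pipe-delimited shape the scraper prints, which is why the trick transfers so cleanly once real table rows are in play.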
As with our previous recipes, the real efficiencies aren't found until you can train your computer to repeat a task over a large body of data. One of the great things mechanize can do is step through pages one by one and help Beautiful Soup suck the data out of each. This is very helpful when you're trying to scrape the search results from online web queries, which are commonly displayed in paginated sets that run into hundreds and hundreds of pages. Today's example is only two pages in length, though the principles we learn here can later be applied to broader data sets. But before we can run, we have to learn how to walk. So, in that spirit, here's a simple expansion of our script above that will click on the "Next" link at the bottom of our example page and repeat the scrape on my 2006 list.

#!/usr/bin/env python
from mechanize import Browser
from BeautifulSoup import BeautifulSoup

def extract(soup, year):
    table = soup.find("table", border=1)
    for row in table.findAll('tr')[1:]:
        col = row.findAll('td')
        rank = col[0].string
        artist = col[1].string
        album = col[2].string
        cover_link = col[3].img['src']
        record = (str(year), rank, artist, album, cover_link)
        print "|".join(record)

mech = Browser()
url = ""
page1 = mech.open(url)
html1 = page1.read()
soup1 = BeautifulSoup(html1)
extract(soup1, 2007)

page2 = mech.follow_link(text_regex="Next")
html2 = page2.read()
soup2 = BeautifulSoup(html2)
extract(soup2, 2006)

Note that our Beautiful Soup snippet remains the same as above, but we've moved it to the top of the script and placed it in a Python function called extract. Structured this way, the extract function is reusable on any number of pages as long as the HTML you're looking to parse is formatted the same way. The function accepts two parameters, soup and year, which are passed in the lower part of our script after Beautiful Soup captures each page's contents. The first snippet ...

page1 = mech.open(url)
html1 = page1.read()
soup1 = BeautifulSoup(html1)
extract(soup1, 2007)

...essentially does the same thing as our early versions: visits the URL for my 2007 list and parses out the table.
The only change is that the soup variable is now being passed to the extract function along with the year, so that it can be printed alongside the data columns in our output by adding it to the "record" list inside the function here:

record = (str(year), rank, artist, album, cover_link)

I figured it's a nice add since then our eventual results will contain a field that discerns the 2007 list from the 2006 list. Now check out how easy it is to get mechanize to step through to the next page.

page2 = mech.follow_link(text_regex="Next")
html2 = page2.read()
soup2 = BeautifulSoup(html2)
extract(soup2, 2006)

All it takes is feeding the link's string value to mechanize's follow_link() method and, boom, you're walking over to the next page. Treat what you get back the same as we did our first "page" and, bam, you've done it. Save the script, run it, and you should see something more like this:

2007|10|LCD Soundsystem|Sound of Silver|
2007|9|Ulrich Schnauss|Goodbye|
2007|8|The Clientele|God Save The Clientele|
2007|7|The Modernist|Collectors Series Pt. 1: Popular Songs|
2007|6|Bebel Gilberto|Momento|
2007|5|Various Artists|Jay Deelicious: 1995-1998|
2007|4|Lindstrom and Prins Thomas|BBC Essential Mix|
2007|3|Go Home Productions|This Was Pop|
2007|2|Apparat|Walls|
2007|1|Caribou|Andorra|
2006|10|Lily Allen|Alright, Still|
2006|9|Nouvelle Vague|Nouvelle Vague|
2006|8|Bookashade|Movements|
2006|7|Charlotte Gainsbourg|5:55|
2006|6|The Drive-By Truckers|The Blessing and the Curse|
2006|5|Basement Jaxx|Crazy Itch Radio|
2006|4|Love is All|Nine Times The Same Song|
2006|3|Ewan Pearson|Sci.Fi.Hi.Fi_01|
2006|2|Neko Case|Fox Confessor Brings The Flood|
2006|1|Ellen Allien & Apparat|Orchestra of Bubbles|

Now all that's left on our checklist is to figure out a way to download the cover art in addition to recording the urls. When we're interested in just snatching a simple file off the web, I like to use the urlretrieve() function found in Python's urllib module.
All you have to do is add it to your import line, as below, and tell it where to save the files. I just stuff it in the extract loop so it pulls down the file immediately after scraping its row in the table. Check it out.

#!/usr/bin/env python
from mechanize import Browser
from BeautifulSoup import BeautifulSoup
import urllib, os

def extract(soup, year):
    table = soup.find("table", border=1)
    for row in table.findAll('tr')[1:]:
        col = row.findAll('td')
        rank = col[0].string
        artist = col[1].string
        album = col[2].string
        cover_link = col[3].img['src']
        record = (str(year), rank, artist, album, cover_link)
        print >> outfile, "|".join(record)
        save_as = os.path.join("./", album + ".jpg")
        urllib.urlretrieve("" + cover_link, save_as)
        print "Downloaded %s album cover" % album

outfile = open("albums.txt", "w")

mech = Browser()
url = ""
page1 = mech.open(url)
html1 = page1.read()
soup1 = BeautifulSoup(html1)
extract(soup1, 2007)

page2 = mech.follow_link(text_regex="Next")
html2 = page2.read()
soup2 = BeautifulSoup(html2)
extract(soup2, 2006)

outfile.close()

While I was at it, I also added in an outfile where the scrape results are saved in a text file, just like we did in our previous recipes. Run this version and then check out your working directory, where you should see all the images as well as the new outfile. Voila. I think we're done.

If this is useful for people, next time we can cover how you leverage these basic tools against search forms and larger result sets. Per usual, if you spot a screw up, or I'm not being clear, just shoot me an email or drop a comment and we'll sort it out. Hope this is helpful to somebody.

And, as a postscript, since we're kind of on a roll here, I thought it might be fun to cook up an LAT version of the Python Cookbook cover, in the classic O'Reilly style. What do you think? I couldn't quite find the right font.
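Back on the code for a second: one small detail from that final script deserves a closer look, the save_as line. os.path.join() plus plain string concatenation is all it takes to turn a scraped album name into a local filename for the download. A quick sketch of just that piece (the album names below are only examples):

```python
import os

def cover_path(directory, album):
    # Build "<directory>/<album>.jpg", the same way the script's
    # save_as = os.path.join("./", album + ".jpg") line does.
    return os.path.join(directory, album + ".jpg")

print(cover_path("./", "Andorra"))   # -> ./Andorra.jpg
print(cover_path("covers", "Walls")) # -> covers/Walls.jpg on Unix-style paths
```

One caveat worth knowing: an album name containing a slash would change the directory, so in real scrapes it's wise to strip or replace path separators before building the filename.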
Applied Microsoft .NET Framework Programming

Common Object Operations

In this chapter, I'll describe how to properly implement the operations that all objects must exhibit. Specifically, I'll talk about object equality, identity, hash codes, and cloning.

Object Equality and Identity

The System.Object type offers a virtual method, named Equals, whose purpose is to return true if two objects have the same "value". The .NET Framework Class Library (FCL) includes many methods, such as System.Array's IndexOf method and System.Collections.ArrayList's Contains method, that internally call Equals. Because Equals is defined by Object and because every type is ultimately derived from Object, every instance of every type offers the Equals method. For types that don't explicitly override Equals, the implementation provided by Object (or the nearest base class that overrides Equals) is inherited. The following code shows how System.Object's Equals method is essentially implemented:

class Object {
    public virtual Boolean Equals(Object obj) {
        // If both references point to the same
        // object, they must be equal.
        if (this == obj) return(true);

        // Assume that the objects are not equal.
        return(false);
    }
    ...
}

When you implement your own Equals method, you must ensure that it adheres to the four properties of equality:

- Equals must be reflexive; that is, x.Equals(x) must return true.
- Equals must be symmetric; that is, x.Equals(y) must return the same value as y.Equals(x).
- Equals must be transitive; that is, if x.Equals(y) returns true and y.Equals(z) returns true, then x.Equals(z) must also return true.
- Equals must be consistent. Provided that there are no changes in the two values being compared, Equals should consistently return true or false.

If your implementation of Equals fails to adhere to all these rules, your application will behave in strange and unpredictable ways.
Unfortunately, implementing your own version of Equals isn't as easy and straightforward as you might expect. You must do a number of operations correctly, and, depending on the type you're defining, the operations are slightly different. Fortunately, there are only three different ways to implement Equals. Let's look at each pattern individually.

Implementing Equals for a Reference Type Whose Base Classes Don't Override Object's Equals

The following code shows how to implement Equals for a type that directly inherits Object's Equals implementation:

// This is a reference type (because of 'class').
class MyRefType : BaseType {
    RefType refobj; // This field is a reference type.
    ValType valobj; // This field is a value type.

    public override Boolean Equals(Object obj) {
        // Because 'this' isn't null, if obj is null,
        // then the objects can't be equal.
        if (obj == null) return false;

        // If the objects are of different types, they can't be equal.
        if (this.GetType() != obj.GetType()) return false;

        // Cast obj to your type; this can't throw because you
        // know the objects are of the same type.
        MyRefType other = (MyRefType) obj;

        // To compare reference type fields, do this:
        if (!Object.Equals(this.refobj, other.refobj)) return false;

        // To compare value type fields, do this:
        if (!this.valobj.Equals(other.valobj)) return false;

        return true; // The objects are equal.
    }
}

This version of Equals starts out by comparing obj against null. If the object being compared is not null, then the types of the two objects are compared. If the objects are of different types, they can't be equal. If both objects are the same type, then you cast obj to MyRefType, which can't possibly throw an exception because you know that both objects are of the same type. Finally, the fields in both objects are compared, and true is returned if all fields are equal.

You must be very careful when comparing the individual fields. The preceding code shows two different ways to compare the fields based on what types of fields you're using.

- Comparing reference type fields To compare reference type fields, you should call Object's static Equals method. Object's static Equals method is just a little helper method that returns true if two reference objects are equal.
Here's how Object's static Equals method is implemented internally:

public static Boolean Equals(Object objA, Object objB) {
    // If objA and objB refer to the same object, return true.
    if (objA == objB) return true;

    // If objA or objB is null, they can't be equal, so return false.
    if ((objA == null) || (objB == null)) return false;

    // Ask objA if objB is equal to it, and return the result.
    return objA.Equals(objB);
}

You use this method to compare reference type fields because it's legal for them to have a value of null. Certainly, calling refobj.Equals(other.refobj) will throw a NullReferenceException if refobj is null. Object's static Equals helper method performs the proper checks against null for you.

- Comparing value type fields To compare value type fields, you should call the field type's Equals method to have it compare the two fields. You shouldn't call Object's static Equals method because value types can never be null and calling the static Equals method would box both value type objects.

Implementing Equals for a Reference Type When One or More of Its Base Classes Overrides Object's Equals

The following code shows how to implement Equals for a type that inherits an implementation of Equals other than the one Object provides:

// This is a reference type (because of 'class').
class MyRefType : BaseType {
    RefType refobj; // This field is a reference type.
    ValType valobj; // This field is a value type.

    public override Boolean Equals(Object obj) {
        // Let the base type compare its fields.
        if (!base.Equals(obj)) return false;

        // All the code from here down is identical to
        // that shown in the previous version.

        // Because 'this' isn't null, if obj is null,
        // then the objects can't be equal.
        // NOTE: This line can be deleted if you trust that
        // the base type implemented Equals correctly.
        if (obj == null) return false;

        // If the objects are of different types, they can't
        // be equal.
        // NOTE: This line can be deleted if you trust that
        // the base type implemented Equals correctly.
        if (this.GetType() != obj.GetType()) return false;

        // Cast obj to your type and compare the fields exactly
        // as shown in the previous version.
        MyRefType other = (MyRefType) obj;
        if (!Object.Equals(this.refobj, other.refobj)) return false;
        if (!this.valobj.Equals(other.valobj)) return false;

        return true; // The objects are equal.
    }
}

This code is practically identical to the code shown in the previous section. The only difference is that this version allows its base type to compare its fields too. If the base type doesn't think the objects are equal, then they can't be equal. It is very important that you do not call base.Equals if this would result in calling the Equals method provided by System.Object. The reason is that Object's Equals method returns true only if the references point to the same object. If the references don't point to the same object, then false will be returned and your Equals method will always return false!

Certainly, if you're defining a type that is directly derived from Object, you should implement Equals as shown in the previous section. If you're defining a type that isn't directly derived from Object, you must first determine if that type (or any of its base types, except Object) provides an implementation of Equals. If any of the base types provide an implementation of Equals, then call base.Equals as shown in this section.

Implementing Equals for a Value Type

As I mentioned in Chapter 5, all value types are derived from System.ValueType. ValueType overrides the implementation of Equals offered by System.Object. Internally, System.ValueType's Equals method uses reflection (covered in Chapter 20) to get the type's instance fields and compares the fields of both objects to see if they have equal values. This process is very slow, but it's a reasonably good default implementation that all value types will inherit. However, it does mean that reference types inherit an implementation of Equals that is really identity and that value types inherit an implementation of Equals that is value equality. For value types that don't explicitly override Equals, the implementation provided by ValueType is inherited.
The following code shows how System.ValueType's Equals method is essentially implemented:

class ValueType {
    public override Boolean Equals(Object obj) {
        // Because 'this' isn't null, if obj is null,
        // then the objects can't be equal.
        if (obj == null) return false;

        // Get the type of 'this' object.
        Type thisType = this.GetType();

        // If 'this' and 'obj' are different types, they can't be equal.
        if (thisType != obj.GetType()) return false;

        // Get the set of public and private instance
        // fields associated with this type.
        FieldInfo[] fields = thisType.GetFields(BindingFlags.Public |
            BindingFlags.NonPublic | BindingFlags.Instance);

        // Compare each instance field for equality.
        for (Int32 i = 0; i < fields.Length; i++) {
            // Get the value of the field from both objects.
            Object thisValue = fields[i].GetValue(this);
            Object thatValue = fields[i].GetValue(obj);

            // If the values aren't equal, the objects aren't equal.
            if (!Object.Equals(thisValue, thatValue)) return false;
        }

        // All the field values are equal, and the objects are equal.
        return true;
    }
    ...
}

Even though ValueType offers a pretty good implementation for Equals that would work for most value types that you define, you should still provide your own implementation of Equals. The reason is that your implementation will perform significantly faster and will be able to avoid extra boxing operations. The following code shows how to implement Equals for a value type:

// This is a value type (because of 'struct').
struct MyValType {
    RefType refobj; // This field is a reference type.
    ValType valobj; // This field is a value type.

    public override Boolean Equals(Object obj) {
        // If obj is not your type, then the objects can't be equal.
        if (!(obj is MyValType)) return false;

        // Call the type-safe overload of Equals to do the work.
        return this.Equals((MyValType) obj);
    }

    // Implement a strongly typed version of Equals.
    public Boolean Equals(MyValType obj) {
        // To compare reference fields, do this:
        if (!Object.Equals(this.refobj, obj.refobj)) return false;

        // To compare value fields, do this:
        if (!this.valobj.Equals(obj.valobj)) return false;

        return true; // Objects are equal.
    }

    // Optionally overload operator==
    public static Boolean operator==(MyValType v1, MyValType v2) {
        return (v1.Equals(v2));
    }

    // Optionally overload operator!=
    public static Boolean operator!=(MyValType v1, MyValType v2) {
        return !(v1 == v2);
    }
}

For value types, the type should define a strongly typed version of Equals. This version takes the defining type as a parameter, giving you type safety and avoiding extra boxing operations. You should also provide strongly typed operator overloads for the == and != operators. The following code demonstrates how to test two value types for equality:

MyValType v1, v2;

// The following line calls the strongly typed version of
// Equals (no boxing occurs).
if (v1.Equals(v2)) { ... }

// The following line calls the version of
// Equals that takes an object (4 is boxed).
if (v1.Equals(4)) { ... }

// The following doesn't compile because operator==
// doesn't take a MyValType and an Int32.
if (v1 == 4) { ... }

// The following compiles, and no boxing occurs.
if (v1 == v2) { ... }

Inside the strongly typed Equals method, the code compares the fields in exactly the same way that you'd compare them for reference types. Keep in mind that the code doesn't do any casting, doesn't compare the two instances to see if they're the same type, and doesn't call the base type's Equals method. These operations aren't necessary because the method's parameter already ensures that the instances are of the same type. Also, because all value types are immediately derived from System.ValueType, you know that your base type has no fields of its own that need to be compared. You'll notice in the Equals method that takes an Object that I used the is operator to check the type of obj.
I used is instead of GetType because calling GetType on an instance of a value type requires that the instance be boxed. I demonstrated this in the "Boxing and Unboxing Value Types" section in Chapter 5.

Summary of Implementing Equals and the ==/!= Operators

In this section, I summarize how to implement equality for your own types:

- Compiler primitive types Your compiler will provide implementations of the == and != operators for types that it considers primitives. For example, the C# compiler knows how to compare Object, Boolean, Char, Int16, UInt16, Int32, UInt32, Int64, UInt64, Single, Double, Decimal, and so on for equality. In addition, these types provide implementations of Equals, so you can call this method as well as use operators.
- Reference types For reference types you define, override the Equals method and in the method do all the work necessary to compare object states and return. If your type doesn't inherit Object's Equals method, call the base type's Equals method. If you want to, overload the == and != operators and have them call the Equals method to do the actual work of comparing the fields.
- Value types For your value types, define a type-safe version of Equals that does all the work necessary to compare object states and return. Implement the type-unsafe version of Equals by having it call the type-safe Equals internally. You also should provide overloads of the == and != operators that call the type-safe Equals method internally.

Identity

The purpose of a type's Equals method is to compare two instances of the type and return true if the instances have equivalent states or values. However, it's sometimes useful to see whether two references refer to the same, identical object.
To do this, System.Object offers a static method called ReferenceEquals, which is implemented as follows:

class Object {
    public static Boolean ReferenceEquals(Object objA, Object objB) {
        return (objA == objB);
    }
}

As you can plainly see, ReferenceEquals simply uses the == operator to compare the two references. This works because of rules contained within the C# compiler. When the C# compiler sees that two references of type Object are being compared using the == operator, the compiler generates IL code that checks whether the two variables contain the same reference. If you're writing C# code, you could use the == operator instead of calling Object's ReferenceEquals method if you prefer. However, you must be very careful. The == operator is guaranteed to check identity only if the variables on both sides of the == operator are of the System.Object type. If a variable isn't of the Object type and if that variable's type has overloaded the == operator, the C# compiler will produce code to call the overloaded operator's method instead. So, for clarity and to ensure that your code always works as expected, don't use the == operator to check for identity; instead, you should use Object's static ReferenceEquals method. Here's some code demonstrating how to use ReferenceEquals:

static void Main() {
    // Construct a reference type object.
    RefType r1 = new RefType();

    // Make another variable point to the reference object.
    RefType r2 = r1;

    // Do r1 and r2 point to the same object?
    Console.WriteLine(Object.ReferenceEquals(r1, r2)); // "True"

    // Construct another reference type object.
    r2 = new RefType();

    // Do r1 and r2 point to the same object?
    Console.WriteLine(Object.ReferenceEquals(r1, r2)); // "False"

    // Create an instance of a value type.
    Int32 x = 5;

    // Do x and x point to the same object?
    Console.WriteLine(Object.ReferenceEquals(x, x)); // "False"
    // "False" is displayed because x is boxed twice
    // into two different objects.
}

Object Hash Codes

The designers of the FCL decided that it would be incredibly useful if any instance of any object could be placed into a hash table collection. To this end, System.Object provides a virtual GetHashCode method so that an Int32 hash code can be obtained for any and all objects.

If you define a type and override the Equals method, you should also override the GetHashCode method. In fact, Microsoft's C# compiler emits a warning if you define a type that overrides just one of these methods. For example, compiling the following type yields this warning: "warning CS0659: 'App' overrides Object.Equals(object o) but does not override Object.GetHashCode()."

class App {
    public override Boolean Equals(Object obj) { ... }
}

The reason why a type must define both Equals and GetHashCode is that the implementation of the System.Collections.Hashtable type requires that any two objects that are equal must have the same hash code value. So if you override Equals, you should override GetHashCode to ensure that the algorithm you use for calculating equality corresponds to the algorithm you use for calculating the object's hash code.

Basically, when you add a key/value pair to a Hashtable object, a hash code for the key object is obtained first. This hash code indicates what "bucket" the key/value pair should be stored in. When the Hashtable object needs to look up a key, it gets the hash code for the specified key object. This code identifies the "bucket" that is then searched for a stored key object that is equal to the specified key object. Using this algorithm of storing and looking up keys means that if you change a key object that is in a Hashtable, the Hashtable will no longer be able to find the object. If you intend to change a key object in a hash table, you should first remove the original key/value pair, next modify the key object, and then add the new key/value pair back into the hash table.
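This equal-objects-must-hash-equal contract isn't peculiar to the FCL's Hashtable; every hashed collection imposes it. As a language-neutral aside (my own illustration, not from the chapter), Python's dict makes the identical demand of __eq__ and __hash__:

```python
class Point(object):
    def __init__(self, x, y):
        self.x, self.y = x, y

    # Equal values must compare equal...
    def __eq__(self, other):
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)

    # ...and equal values must hash equal, or lookups land in the wrong bucket.
    def __hash__(self):
        return self.x ^ self.y  # same XOR scheme as the chapter's Point example

table = {Point(1, 2): "found"}
print(table[Point(1, 2)])  # a distinct-but-equal key still locates the entry
```

Break the pairing (say, hash on identity while comparing on value) and the lookup above fails with a KeyError, which is exactly the failure mode the chapter warns about when a Hashtable key is mutated after insertion.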
Defining a GetHashCode method can be easy and straightforward. But, depending on your data types and the distribution of data, it can be tricky to come up with a hashing algorithm that returns a well-distributed range of values. Here's a simple example that will probably work just fine for Point objects:

class Point {
    Int32 x, y;

    public override Int32 GetHashCode() {
        return x ^ y; // x XOR'd with y
    }
    ...
}

When selecting an algorithm for calculating hash codes for instances of your type, try to follow these guidelines:

- Use an algorithm that gives a good random distribution for the best performance of the hash table.
- Your algorithm can also call the base type's GetHashCode method, including its return value in your own algorithm. However, you don't generally want to call Object's or ValueType's GetHashCode method because the implementation in either method doesn't lend itself to high-performance hashing algorithms.
- Your algorithm should use at least one instance field.
- Ideally, the fields you use in your algorithm should be immutable; that is, the fields should be initialized when the object is constructed and they should never again change during the object's lifetime.
- Your algorithm should execute as quickly as possible.
- Objects with the same value should return the same code. For example, two String objects with the same text should return the same hash code value.

System.Object's implementation of the GetHashCode method doesn't know anything about its derived type and any fields that are in the type. For this reason, Object's GetHashCode method returns a number that is guaranteed to uniquely identify the object within the AppDomain; this number is guaranteed not to change for the lifetime of the object. After the object is garbage collected, however, its unique number can be reused as the hash code for a new object.
System.ValueType's implementation of GetHashCode uses reflection and returns the hash code of the first instance field defined in the type. This is a naïve implementation that might be good for some value types, but I still recommend that you implement GetHashCode yourself. Even if your hash code algorithm returns the hash code for the first instance field, your implementation will be faster than ValueType's implementation. Here's what ValueType's implementation of GetHashCode looks like:

class ValueType {
    public override Int32 GetHashCode() {
        // Get this type's public/private instance fields.
        FieldInfo[] fields = this.GetType().GetFields(
            BindingFlags.Instance | BindingFlags.Public | BindingFlags.NonPublic);

        if (fields.Length > 0) {
            // Return the hash code for the first non-null field.
            for (Int32 i = 0; i < fields.Length; i++) {
                Object obj = fields[i].GetValue(this);
                if (obj != null) return obj.GetHashCode();
            }
        }

        // No non-null fields exist; return a unique value for the type.
        // NOTE: GetMethodTablePtrAsInt is an internal, undocumented method.
        return GetMethodTablePtrAsInt(this);
    }
}

If you're implementing your own hash table collection for some reason or you're implementing any piece of code where you'll be calling GetHashCode, you should never persist hash code values. The reason is that hash code values are subject to change. For example, a future version of a type might use a different algorithm for calculating the object's hash code.

Object Cloning

At times, you want to take an existing object and make a copy of it. For example, you might want to make a copy of an Int32, a String, an ArrayList, a Delegate, or some other object. For some types, however, cloning an object instance doesn't make sense. For example, it doesn't make sense to clone a System.Threading.Thread object since creating another Thread object and copying its fields doesn't create a new thread.
Also, for some types, when an instance is constructed, the object is added to a linked list or some other data structure. Simple object cloning would corrupt the semantics of the type. A class must decide whether or not it allows instances of itself to be cloned. If a class wants instances of itself to be cloneable, the class should implement the ICloneable interface, which is defined as follows. (I'll talk about interfaces in depth in Chapter 15.)

public interface ICloneable {
    Object Clone();
}

This interface defines just one method, Clone. Your implementation of Clone is supposed to construct a new instance of the type and initialize the new object's state so that it is identical to the original object. The ICloneable interface doesn't explicitly state whether Clone should make a shallow copy of its fields or a deep copy. So you must decide for yourself what makes the most sense for your type and then clearly document what your Clone implementation does. Many developers implement Clone so that it makes a shallow copy. If you want a shallow copy made for your type, implement your type's Clone method by calling System.Object's protected MemberwiseClone method, as demonstrated here:

class MyType : ICloneable {
    public Object Clone() {
        return MemberwiseClone();
    }
}

Internally, Object's MemberwiseClone method allocates memory for a new object. The new object's type matches the type of the object referred to by the this reference. MemberwiseClone then iterates through all the instance fields for the type (and its base types) and copies the bits from the original object to the new object. Note that no constructor is called for the new object; its state will simply match that of the original object. Alternatively, you can implement the Clone method entirely yourself, and you don't have to call Object's MemberwiseClone method.
Here's an example:

class MyType : ICloneable {
    ArrayList set;

    // Private constructor called by Clone
    private MyType(ArrayList set) {
        // Refer to a deep copy of the set passed.
        this.set = (ArrayList) set.Clone();
    }

    public Object Clone() {
        // Construct a new MyType object, passing it the
        // set used by the original object.
        return new MyType(set);
    }
}

You might have realized that the discussion in this section has been geared toward reference types. I concentrated on reference types because instances of value types always support making shallow copies of themselves. After all, the system has to be able to copy a value type's bytes when boxing it. The following code demonstrates the cloning of value types:

static void Main() {
    Int32 x = 5;
    Int32 y = x;   // Copy the bytes from x to y.
    Object o = x;  // Boxing x copies the bytes from x to the heap.
    y = (Int32) o; // Unbox o, and copy bytes from the heap to y.
}

Of course, if you're defining a value type and you'd like your type to support deep cloning, then you should have the value type implement the ICloneable interface as shown earlier. (Don't call MemberwiseClone, but rather, allocate a new object and implement your deep copy semantics.)

About the Author

Jeffrey Richter is a co-founder of Wintellect (), a training, debugging, and consulting firm dedicated to helping companies build better software, faster. He is the author of "Applied Microsoft .NET Framework Programming" (Microsoft Press) and several Windows programming books. Jeffrey is also a contributing editor to MSDN Magazine where he authors the .NET column. Jeffrey has been consulting with Microsoft's .NET Framework team since October 1999.

Applied Microsoft .NET Framework Programming. Copyright 2002, Jeffrey Richter. Reproduced by permission of Microsoft Press. All rights reserved.
http://www.codeguru.com/columns/chapters/article.php/c6681/Applied-Microsoft-NET-Framework-Programming.htm
CC-MAIN-2013-48
refinedweb
4,041
64
Deprecated in iOS 10.0: os_log(3) has replaced asl(3)

So iOS 10.0 apparently deprecates the asl (Apple System Log) API and replaces it with the very limited os_log API. I use something similar to the code snippet below to read out log writes for the running app, to show in a UITextView in-app, and now it is full of deprecation warnings. Does anyone know of a way to read the printed log using the new os_log API? Because I only see an API for writing ().

import asl

let query = asl_new(UInt32(ASL_TYPE_QUERY))
let response = asl_search(nil, query)
while let message = asl_next(response) {
    var i: UInt32 = 0
    let key = asl_key(message, i)
    print(asl_get(message, key))
    ...
}

Answer: Looks like you need to use the enhanced Console instead of your own log viewer. The logs are compressed and not expanded until viewed; this makes logging much less intrusive at debug levels. There is no text form of the logs, however. See the 2016 WWDC video session 721, "Unified Logging and Activity Tracing". Also, the Apple sample app that demos the new approach has an undocumented build setting that I had to add to my iOS app. See the setting in the 'Paper Company (Swift)' iOS app. The setting is found in the Targets section of the top-level Xcode window. These are the steps that I followed:

On the Build Settings page, add in "User-Defined" a new section = ASSETCATALOG_COMPRESSION. Under it add two lines:

Debug = lossless
Release = respect-asset-catalog

After adding this build setting, logging worked in my app as per the video session demo.
https://codedump.io/share/IBooHGLkCOYQ/1/read-logs-using-the-new-swift-oslog-api
ist".[6]] Following continued assessment of all geological and geophysical data, renowned petroleum engineers, geologists and geophysicists continue to ask … "where is the most logical place to drill where we can be sure of 'tapping' G-d and his blessings of the twelve tribes as G-d prepares him to view the "Promised Land" which HE had given to HIS chosen "Children of Israel" — makes direct reference to the existence of OIL in Israel.So, friends of Israel, as we begin "The Great Treasure Hunt" … may we join hands with the Government of Israel and its people and together look toward and pray for "the discovery of OIL in their homeland, the Land of Israel" and on the "Head ONE … The Prayer ." (1 Kings 8:41-43). TWO … The Instructions early rain and the late rain, that thou mayest gather in thy corn, and thy wine, and thine OIL [the OIL of Israel]." (Deuteronomy 11:13-14) THREE ... The Evidence "Then thou shalt see, and flow together, and thine heart shall fear, and be enlarged; because the abundance of the sea shall be converted unto thee, the forces of the Gentiles shall come unto thee." (Isaiah 60:5) These three specific sets of instructions received by John Brown in 1983 from G-d while in the land of Israel, were a consuming "fire" which has been born by him through the years for the exact purpose that G-d first lit it in his heart! Today, G-d continues to confirm, by HIS Word, the reality of HIS promise when HE also promised …"For surely there is an end; and thy expectation shall not be cut off." (Proverbs 23:18)Son:21-28)[7] THE PROMISES … Also Unto the Gentiles (Leviticus 19:33, 34) For the prophets foretold that the Gentiles (Isaiah 55:5) would also call upon G-d and they too would serve him (Zephaniah 3:9) and he would answer their prayers (Isaiah 65:24) and he would save them (Isaiah 52:10) and set them in the land! 
(Isaiah 14:1) For it is also written: "Also the sons of the stranger, that join themselves to the Lord, to serve Him, and to love the name of the Lord, to be His servants, every one that keepeth) A NEW COVENANT PEOPLE … (Jeremiah 31:31-34) For G-d has promised that not only would He honor his First Covenant and would cause the Jews to return (Jeremiah 33:14-26), but that one day He would also make a New Covenant (Jeremiah 31:33-34) and (in Hosea 2:23) He said: "… And I will say to them that were not my people, Thou art my people!" And they shall say, "you are my God" and they will help restore the Nation of Israel and many people would come to the Land of Israel to seek the G-d of Israel (Zechariah 8:20-23) (Isaiah 65:1) and these strangers who dwelt among the people of Israel would also share in the promised inheritance (Ezekiel 47:21-23) and .... "the stranger that dwells with you shall be to you as one born among you." (Leviticus 19:34) For G-d's promise to Abraham was, "and in thy seed shall all the nations of the earth be blessed". (Genesis 22:18) So, now we a new covenant people (Deuteronomy 29:12-15) have came to the land of Israel to claim our promised inheritance and to receive the blessings of Abraham. (Galatians 3:6-14). For it is also written, "... is He the God of the Jews only? Is HE not the God of the Gentiles? Yes, of the Gentiles also," (Romans 3:29-30) for G-d hath not cast away HIS people which HE foreknew. "For if their [the Jewish people] being cast away is the reconciling of the world, what will their acceptance be but life from the dead?" (Romans 11:1-24) Because, "that blindness in part has happened to Israel until the fullness of the Gentiles be come in. And so all Israel will be saved." (Romans 11:24-36) Therefore: As a Christian, one must recognize that it is written "… if [we] the Gentiles have been made partakers of their [Israel's] spiritual things, their [Gentiles'] duty is also to minister to them [Israel] in material things." 
(Romans 15:27) Because "the G-d of heaven He will prosper us [Zion]" (Nehemiah 2:20) with the oil of Israel … so that we a new covenant people of Zion will minister and assist "Israel in their petroleum and material needs." Because, as touching the election, "they [the Jewish People] are beloved for the sake of the fathers." (Romans 11:28) And, when they do turn to the Lord, the veil on their hearts will be taken away. (2 Corinthians 3:14-16) THE PURPOSE … … To Receive The Blessings Of Jacob (Genesis 49:1-2 and 22-26) Zion Oil & Gas was ordained by G-d The Blessing Of Moses (Deuteronomy 33:1,13-19) "And this is the blessing which Moses the man of GOD blessed the sons of Israel before his death" (Deuteronomy 33:1) :"They shall call the peoples to the mountain; there they shall offer sacrifices of righteousness; for they shall suck of the abundance of the seas, and of treasures [the Oil of Israel] hid in the sand." (Deuteronomy 33:19)' "...); Zion's main exploration licence is the "Joseph Project" and Brown proceeds to infer geological prospects from Biblical verses:[4] THE JOSEPH PROJECT ... The Inheritance (Genesis 48:15-22) "Joseph shall have two portions. And you shall inherit it, one as well as another; that concerning which I lifted up my hand to give it to your fathers; and this land shall fall to you as an inheritance." (Ezekiel 47:13-14) "And it shall come to pass, that you shall divide it by lot for an inheritance to you, and to the strangers [Gentile believers] that sojourn among you, who shall beget children among you: and they shall be to you as those born in the country among the children of Israel; they shall have an inheritance with you among the tribes of Israel. And it shall come to pass, that in whatever tribe the stranger may stay, there you shall give him his inheritance, says the Lord GOD." 
(Ezekiel 47:22-23) … The Plan … By Faith (Hebrews 11:1-40)From its inception, G-d’s plan was born through Faith in G-d’s Word into the heart of John Brown, Zion’s founder, and, in all matters concerning Zion, the Word of G-d has been the basis of John Brown’s conduct and actions: For it is written, "This book of Law shall not depart from your mouth, but you shall meditate in it day and night, that you may observe to do according to all that is written in it." (Joshua 1:8) Consequently, the founder of Zion shall respect and "Fear God, and keep HIS commandments: for that is the WHOLE duty of man." (Ecclesiastes 12:13) Zion’s approach is somewhat unprecedented … and that is because of its founder's, John Brown's, FAITH in G-d and HIS promises (Joshua 1:8) and in G-d’s faithfulness and ability to perform all HIS Word, as we also walk in the steps of faith as Our Father Abraham walked. (Romans 4:11-25) And Jesus said, "It is written [[[RationalWiki:Annotated Bible/Deuteronomy#Deuteronomy 8:3|Deuteronomy 8:3]]], Man shall not live by bread alone, but by every word of God." (Luke 4:4) Now, "FAITH is the substance of things [the Oil of Israel] hoped for, the evidence of things not seen. … [And, it is also written that] By FAITH Abraham, when he was tested, offered up Isaac, and … , By FAITH Isaac blessed Jacob and Esau concerning things to come; … By FAITH Jacob, when he was dying, blessed each of the sons of Joseph." (Hebrews 11:1-22) G-d’s word establishes and so states "…The JUST shall live by his FAITH [[[RationalWiki:Annotated Bible/Habakkuk#Habakkuk 2:4|Habakkuk 2:4]]] … But without FAITH, it is impossible to please HIM." (Hebrews 10:38 and 11:6) For: We worship by faith as Abel. We walk by faith as Enoch. We work by faith as Noah. We live by faith as Abraham. We govern by faith as Moses. We follow by faith as Israel. We fight by faith as Joshua. We conquer by faith as Gideon. 
By faith we are patient in suffering, courageous in battle, made strong out of weakness, and are victorious in defeat. We are more than conquerors by our faith seeing "… there is one God who will justify the circumcised By Faith and the uncircumcised through faith." (Romans 3:29-30) So, it is only through faith in JESUS CHRIST that we are saved. "It is written in the prophets, 'And they shall all be taught by God' [[[RationalWiki:Annotated Bible/Isaiah#Isaiah 54:13|Isaiah 54:13]]] Therefore everyone who hath heard, AND HATH LEARNED OF THE FATHER, cometh unto me." (John 6:45-51)

Geology

Zion's basic premise is that Brown noticed on a 1973 map of the 12 tribes of Israel that Asher's Land looked like a foot, which he took as a clue from the Bible:[4] "And of Joseph he said, blessed of the LORD be his land, for the precious things of HEAVEN, for the dew and for the DEEP [the Oil of Israel] THAT COUCHETH BENEATH ... and, for the chief thing of the ancient mountains and for the precious things [Meged] of) "And of Asher he said, Be Asher blessed above sons; let him be acceptable to his brethren, and let him DIP HIS FOOT IN OIL." (Deuteronomy 33:24) for over a decade, and (unsurprisingly) found less oil than is in your car.[8][9] However, as a desperate measure for success John Brown beefed up his exploration activities with a call for more prayers.[11][12][13] The fund-raising had only an 80% uptake[14] and raised only $24.5 million. It would appear that Zion still has funding issues: in December 2011, the expiration date of the company's warrants was extended by 11 months.[17]

Allegations of fraud

One investor has made accusations of fraud against Brown across several internet forums and has created a site entitled "John Brown and the Zion Oil and Gas Securities Fraud"[18]. The chief complaint appears to be that Brown believes that "the oil will only flow when a super spiritual person known as Joseph comes into his life".
The complainant Yousef Yomtov, who uses the alias 'Yosalov'[19], alleges that he is the "super spiritual person"[20] but Brown has refused to meet with him.[21] As only a wingnut can do, Yosalov also sees something highly suspicious in the fact that in January 2009 Zion announced that they had issued "approximately 666,000 warrants with a $7.00 exercise price".[22][23] Although Brown frequently omits the middle vowel from the word God, his usage is far from consistent.

- ↑ The bizarre underlining is as shown in the original document
- [1]
- ↑ [2]
- ↑ Presumably based on his anti-abortion and anti-homosexual activities.
- ↑ Skeptic Friends Network, Forum thread: "Zion Oil" getting into hot water?
- ↑ Zion Oil Investor Alert!
- ↑ Zion Oil Investor Alert! Full Report
https://rationalwiki.org/wiki/Zion_Oil_and_Gas
Chat Control

You can use the Chat frame control to add a Windows Live chat frame to a webpage. The chat frame enables you to post messages and share them with other users. Although all visitors to the page can view the chat session, only users who have a Windows Live ID can post messages. You can add a Chat frame to your website by including a <wl:chat-frame> tag. The following image is an example of how the Chat control appears on a webpage. The following example shows the markup for inserting the Chat frame control into a webpage.

<html xmlns="" xmlns:>
<head>
</head>
<body>
<script src=""></script>
<wl:chat-frame>
</wl:chat-frame>
</body>
</html>

Note: The Chat frame control is unique among the Messenger Connect UI controls in that it requires no authentication infrastructure on your site. Place the <wl:chat-frame> tag on the page and then reference the loader script in the <head> tag. Unlike with other UI controls, you do not need to first add a <wl:app> control tag. The <wl:chat-frame> tag is used to embed a Messenger chat control within an IFRAME element on the page. Users must explicitly sign in, and chat sessions do not persist between pages. If a user opens multiple chat windows, each window is considered a separate endpoint. Users can choose between two views of the chat stream: Everyone and Friends. When the Everyone view is selected, postings from everyone on the chat stream are displayed. When the Friends view is selected, only postings from other friends are displayed. Sites that implement a chat control can use metadata to advertise the chat event to the visitor's Messenger friends, contacts on Windows Live, and other sites such as Facebook. By default, the shared chat-event metadata can be configured by setting the optional attributes of the Chat Control. If no event attributes have been set, the chat control can also use Open Graph protocol tags. A user can share chat-event metadata by inviting contacts to chat directly.
During sign-in a user can also select a check box that shares a link to the chat page with friends. If a user selects the "Share a link with this page to your friends" check box during sign-in, the chat information and an optional comment will be posted to the user's Windows Live feed. If the user sends an invitation to a friend, an instant message will be sent along with a link to the chat page.

Many popular social networking sites use a tag-based metadata schema called the Open Graph Protocol, which is used to annotate webpages and include them in an object graph. The most common current use of Open Graph annotations is to enable site visitors to "like" the page with Facebook. For more information about Open Graph protocols, see The Open Graph Protocol. The following example demonstrates a typical Open Graph protocol website annotation. Open Graph <meta> tags that define properties begin with the og: prefix.

<html xmlns:>
<head>
<title>The Rock (1996)</title>
<meta property="og:title" content="The Rock"/>
<meta property="og:type" content="movie"/>
<meta property="og:url" content=""/>
<meta property="og:image" content=""/>
<meta property="og:description" content="loren ipsum"/>
...
</head>

Open Graph properties can also be used as default values for the event details that were not set by the chat-frame attributes. To use Open Graph properties as default values for unspecified event details, add an XML schema namespace declaration referencing the Open Graph protocol schema. The following table shows how the Open Graph tags map to the chat control attributes.
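To make the tag-to-property mapping concrete, here is a hedged, illustrative Java sketch of how a consumer of such annotations could collect og:* properties from a page. The regex approach and the OgScan name are my own, for demonstration only; they are not part of any Messenger Connect API, and a real consumer should use a proper HTML parser.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class OgScan {
    // Collect og:* meta properties from a page, in document order.
    static Map<String, String> ogProperties(String html) {
        Map<String, String> props = new LinkedHashMap<>();
        Pattern p = Pattern.compile(
            "<meta\\s+property=\"og:([^\"]+)\"\\s+content=\"([^\"]*)\"\\s*/?>");
        Matcher m = p.matcher(html);
        while (m.find()) {
            props.put(m.group(1), m.group(2));
        }
        return props;
    }

    public static void main(String[] args) {
        String page = "<head>"
            + "<meta property=\"og:title\" content=\"The Rock\"/>"
            + "<meta property=\"og:type\" content=\"movie\"/>"
            + "</head>";
        System.out.println(ogProperties(page));
        // {title=The Rock, type=movie}
    }
}
```

A chat control falling back to these values would then read props.get("title"), props.get("description"), and so on, for any event attribute the page author left unset.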
https://msdn.microsoft.com/library/ff750122
Testing Scala with JUnit

If you invent a new computer programming language, you should probably also invent a new testing framework to go with it. However, if the new language is for the Java Virtual Machine (JVM), why not just use JUnit? Scala, as you might already know, is an object-oriented functional language for the JVM invented by Martin Odersky, author of the Java 1.3 compiler. There is ScalaTest, which definitely has some features to recommend it. But once you get Scala working in IntelliJ, using JUnit is super easy and requires no additional setup. For the project into which I'm gradually adding Scala classes, a program to draw diagrams of prime numbers in certain imaginary domains, I created the class ImagQuadInt to mirror ImaginaryQuadraticInteger and ImagQuadRing to mirror ImaginaryQuadraticRing for smoother use in the Scala REPL, which I wrote about a few weeks ago. It's also a project I'm gradually switching over to test-driven development. Since the Scala classes I've added are just, as Cay Horstmann would call it, "a thin paste" on Java classes, I didn't consider it necessary to write tests for them. For instance, ImagQuadInt.+ is just a version of ImaginaryQuadraticInteger.plus that takes advantage of operator overloading in Scala. Here is a quick quote of the latter with a lot left out:

/**
 * Addition operation.
 * @param summand The number to add.
 * @return The result.
 * @throws AlgebraicDegreeOverflowException When adding numbers from different domains.
 * @throws ArithmeticException When either the real part or the imaginary part overflows the int primitive.
 */
public ImaginaryQuadraticInteger plus(ImaginaryQuadraticInteger summand) {
    // Omitting lines in which decision to throw exceptions is made
    // Omitting lines about the so-called "half-integers"
    // Also omitting the lines with the actual adding
    return new ImaginaryQuadraticInteger((int) sumRealPart, (int) sumImagPart, this.imagQuadRing, sumDenom);
}

I think I have tested this quite thoroughly, so there's probably not much to be gained from testing its simple Scala wrapper, which is now quoted in its entirety:

/** Adds an Int to this ImagQuadInt (overloaded operator for ImaginaryQuadraticInteger.plus()).
 *
 * @param summand The Int to add, a purely real integer.
 * @return The result of the addition.
 */
def +(summand: Int): ImagQuadInt = {
  val temp = this.plus(summand)
  new ImagQuadInt(temp.realPartMult, temp.imagPartMult, temp.imagQuadRing, temp.denominator)
}

Oops, that's the one to add a purely real Int to an ImagQuadInt. Well, the one I meant to quote is not that different. Once I started thinking about how to do test-driven development in Scala, though, I realized I might as well use ImagQuadInt to try out the basics of using a testing framework with Scala in IntelliJ. There are quite a few wrinkles to the process. First of all, if you ask IntelliJ to auto-generate a test class with test stubs, it will generate a new Java class in the test folder, not a new Scala class. I'm guessing it's the same in NetBeans. But with all the auto-complete IntelliJ does, it is not too big a deal to create a new almost empty Scala file in the test folder and then start writing into it the JUnit imports.

package imaginaryquadraticinteger

import org.junit.Test
import org.junit.Assert._

You can put in semicolons if you want. Note that the underscore is the wildcard character in Scala, not the asterisk like in Java. I suppose that as I develop this, I might need @BeforeClass and the like. For now, this is enough to start getting into it.
class ImagQuadIntTest {

  val ring = new ImagQuadRing(-2)
  val operand1 = new ImagQuadInt(3, 5, ring)
  val operand2 = new ImagQuadInt(4, 2, ring)

  /** Test of + method of class ImagQuadInt. */
  @Test def testPlus(): Unit = {
    println("operator+")
    val expResult = new ImagQuadInt(7, 7, ring)
    val result = operand1 + operand2
    assertEquals(expResult, result)
  }

As my classmates at Integrate Detroit like to remind me, test is not necessary in the test subroutine name for JUnit 4. I could call that first test just +, but that would be confusing, and an argument against operator overloading. So I'm going with testPlus, just as in ImaginaryQuadraticIntegerTest. One thing I really don't like about Scala is that void is called Unit. Quite confusing. It should have been called Void, in my opinion. IntelliJ should show green play buttons next to the tests. Now you can run them just as you would any ordinary JUnit test in Java. To test that exceptions are thrown when they should be, almost everyone prefers JUnit's annotation argument expected to the good old-fashioned try-catch-with-fail, but in Scala there's a major wrinkle that I would not have figured out on my own.

/** Verifying numbers from different rings trigger AlgebraicDegreeOverflowException */
@Test(expected = classOf[AlgebraicDegreeOverflowException])
def testTimesDiffRings(): Unit = {
  println("Multiplication by number from another ring")
  val numFromDiffRing = new ImagQuadInt(1, 1, new ImagQuadRing(-5))
  operand1 * numFromDiffRing
}

With ScalaTest I could do

@Test def testTimesDiffRings(): Unit = {
  println("Multiplication by number from another ring")
  val numFromDiffRing = new ImagQuadInt(1, 1, new ImagQuadRing(-5))
  evaluating { operand1 * numFromDiffRing } should produce [AlgebraicDegreeOverflowException]
}

which I like a heck of a lot better, but which I haven't yet figured out how to do in IntelliJ.
Adding the lines

import org.scalatest.junit.JUnitSuite
import org.scalatest.junit.ShouldMatchersForJUnit

is the easiest part of the setup. Messing with the project build dependencies… I'll have to figure that out another time. Any time you want to give your testing of exceptions a little bit more foresight, you're probably going to have to go with the good old try-catch with fail. For example, it has always seemed strange to me that division by zero in Java's primitive numeric types causes ArithmeticException. I think IllegalArgumentException makes more sense. A reasonable argument could even be made for NotDivisibleException for classes implementing the AlgebraicInteger interface. That's something I've been going back and forth on. But although TestNG does have the capability to declare multiple expected exceptions (I'm guessing that the test passes as long as one of the listed exceptions is thrown), it looks like JUnit still doesn't have that capability. ScalaTest might, I don't know yet. Then for now, my division by zero test looks like this:

/** Verify that division by zero causes an appropriate exception. */
@Test def testDivideByZero(): Unit = {
  print("Division by zero")
  val zero = new ImagQuadInt(0, 0, ring)
  try {
    operand2 / zero
  } catch {
    case nde: NotDivisibleException => println(" caused NotDivisibleException \"" + nde.getMessage + "\"")
    case iae: IllegalArgumentException => println(" caused IllegalArgumentException \"" + iae.getMessage + "\"")
    case ae: ArithmeticException => println(" caused ArithmeticException \"" + ae.getMessage + "\"")
    case e: Exception => println(" caused " + e.getClass + " \"" + e.getMessage + "\"")
      fail("Division by zero should not have caused " + e.getClass)
  }
}

The pertinent code is at GitHub.
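Since JUnit 4's expected argument accepts only a single exception class, the TestNG-style "any of several exceptions" check can be approximated outside the framework. This is a hedged sketch in plain Java with no JUnit dependency; the helper name throwsOneOf is my own invention, not a JUnit API:

```java
import java.util.Arrays;
import java.util.List;

public class MultiExpect {
    // Pass if running the action throws any one of the allowed exception types.
    // A stand-in for the multi-exception 'expected' that JUnit 4 lacks.
    static boolean throwsOneOf(Runnable action, List<Class<? extends Exception>> allowed) {
        try {
            action.run();
            return false; // nothing was thrown, so the check fails
        } catch (Exception e) {
            return allowed.stream().anyMatch(c -> c.isInstance(e));
        }
    }

    public static void main(String[] args) {
        int zero = 0;
        boolean ok = throwsOneOf(() -> { int q = 1 / zero; },
                Arrays.asList(ArithmeticException.class, IllegalArgumentException.class));
        System.out.println(ok); // true: ArithmeticException is in the allowed set
    }
}
```

Inside a real test method you would call fail() whenever throwsOneOf returns false, which is essentially what the try-catch-with-fail pattern above does by hand.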
https://alonso-delarte.medium.com/testing-scala-with-junit-a79bc2d1bb4c
A little about the history... At present EFx (Enterprise Framework) is in a state of transition from the Everett world (.NET Framework 1.1) to the .NET Framework 2.0 world. Although quite a separate issue from the base framework, EFx is also migrating to the new diagramming and code generation tools of Visual Studio 2005. The vision here is to provide Guidance Automation and Diagramming to the framework to enable developers to build components on the framework without having to understand the complexities of the underlying classes and, to be frank, to save heaps of time in rote coding of the subsystems. A subsystem is simply a concept, an association of vertical slices of functionality (feature sets if you like) that split the service or application into logical categories. For example, an e-procurement system may have one subsystem for 'purchasing' and one for 'customer management'. A subsystem simply equates to components in a number of layers: a layer containing any number of web service classes, a layer containing any number of business logic components, a layer containing any number of data access and/or service agent components, and a number of auxiliary support layers/components (resources/commonality/entities etc). For the developer then, the experience of creating a subsystem requires them to create up to 7 separate projects in a single solution. If you consider that most enterprise services may contain a handful of subsystems, that's a lot of projects to code up. [Of course the benefits (too many to list here) of all these components far outweigh the effort required to simply create a bunch of projects.] Today, developers using EFx have to follow manual guidelines on how to add these projects in each layer to their framework solution. Although it's very straightforward to create the subsystems, it is fairly arduous and prone to manual errors.
The work is fairly rote, and this is the single reason we have not been able to release the framework as a single installation package that developers could pick up and run with right away. (The risk of butchering the architecture and structure by taking shortcuts is too high). The key piece we were missing was templates and wizards to do this for the developers. Initially (back in mid-2004), the thinking was that we could do this using Enterprise Templates. [I should explain at this point that as an MS Services Consultant, we have to deal with technologies our customers have and can use in the field at the present time. Of course, Whitehorse was on the horizon, but that was of no use to our customers at the time]. So, after a little delving into this promising technology (Ent. Templates), we ended up with nothing useful. It was simply too hard to get started and deliver anything usable for us. Developing with the framework itself requires a very basic understanding of layered .NET architectures and the C# language and relies on solid understanding and appreciation of abstraction and encapsulation. Writing subsystems follows a simple pattern that most developers understand easily enough. The beauty of programming on the framework was that you rarely needed to leave its namespace to get anything done. The framework makes extensive use of the Application Blocks (now Enterprise Library) and provides additional features which make the standard tasks very simple, i.e. Validation, Authorisation. The framework also ships with a sample subsystem that showcases most of the common tasks developers code by hand in a solution. However, we found with most developers using a structured framework that it took some time for them to get to grips with how it hung together. In the field (customers & partners), we find that most developers simply are not used to implementing layered architectures, so keeping them from breaking the contract at each layer was the hardest part.
(I have lots of funny examples here). Of course, they were initially required to understand the available services and support libraries in the framework, and so it took some time to become familiar with it. We would have to deliver training courses on it and a great deal of education on .NET practices and patterns. We wanted to address this issue, and therefore increase the uptake of the framework, so we decided to take some steps to re-birth the framework and give it a new, more appealing developer experience and an easier feel. One of the first steps we took was contacting the patterns & practices group to see if they could help us with expanding the framework and bringing it to the masses. Unfortunately, due to some bad timing and probably the wrong contact person at the time, this process was seriously delayed. Anyhow, fortunately much later, after getting the run-around a little, we found the right person around the same time some new key offerings were in the pipeline (GAT especially). With Ent.Lib, GAT, .NET 2.0, VS2005, DSL, Indigo and other factors looming upon us, it was time to make some significant upgrades to the framework. The idea being that with p&p's help, under their wing so to speak, we could make these upgrades to the technology and package the framework and tools as a single package that developers could download and get started with immediately. The vision would then be that creating the subsystems and defining the interaction between the layers would be a customisable visual experience rather than a hand-coded experience. At about that time (and not so long ago in fact), we re-discovered the DSL toolkit. Earlier, we had briefly looked at DSL and dismissed it as not applicable to our problem right now (in hindsight, of course, totally off the mark); at that time we were convinced GAT alone would solve the immediate issues. However, clearly now, a synergy between these two technologies is quite obviously the way forward. So where are we now?
At present the p&p is busy shipping Ent.Lib 2.0. VS 2005 is out now, DSL and GAT are releasing soon, Indigo got renamed (I must have missed that), WWF came out of nowhere, and we have a bunch of other new compelling technologies to leverage. Of course, Software Factories (I feel unclean getting on that bandwagon) is now the trendy topic today, which is right on time for the framework, because people (customers/partners) might start seriously considering a framework in mainstream development projects in the field. This would let me and my fellow consultants breathe a sigh of relief, since we wouldn't have to do so many firefighting/troubleshooting/eleventh-hour rescue engagements to try and recover failed enterprise development projects. You have to remember that while a lot of us are fighting here at the bleeding edge, there are normal developers working out there who are oblivious to all this; in fact, recently on one of my accounts the lead developer (con)fessed up that he had no idea what SOA was or even that WS-standards existed! This was a lead developer writing real code for real customers. Of course, this is not really funny; you have to understand and sympathise with these guys. They simply don't have the time to go rooting around all the new 'cool' stuff; they are focused on the here and now, struggling with the current platform to deliver a solution on time to a customer, and this may be after lots of overtime and over the course of a year-long development project which is already over schedule and over budget. Of course, I get to see a great deal of this in my work. The reason these guys are so busy and why projects are failing after such long periods is the very reason that Enterprise Framework was created, that is, the problem it is destined to solve, I mean [:)]. (I'll post another article on why these projects are failing (the root causes) and why these developers are overstretched).
So, at present we are upgrading and building a new face for Enterprise Framework. Currently we are on Ent.Lib 1.1, and plans are in progress to build a new DSL with GAT recipes to model the entire service layers. We also plan to leverage WCF (Indigo, thanks Ed), WWF and WPF and a bunch of other emerging stuff. We have a long road ahead, and fortunately, with the LOB Application Toolkit on the cards in p&p, I am hoping we will have an offering second to none in the marketplace some time next year. I shall be discussing the progress of EFx and our experiences with DSL and GAT in the near future.
http://blogs.msdn.com/b/jezzsa/archive/2005/10/22/next-steps-with-efx.aspx
TableModel model = table.getModel();
while (SQL.next()) {
    model.addColumn(SQL.getString(1));
}

It doesn't work; there is no addColumn method. THANKS!

Related questions:

Inserting rows in JTable: see the section on inserting rows in a JTable using the insertRow() method.

restrict jtable editing: How to restrict a JTable from editing, or disable JTable editing?

public class MyTableModel extends AbstractTableModel {
    public boolean isCellEditable(int row, int column) {
        return false; // cells cannot be edited
    }
}

populate with resultset: How to display the data of a ResultSet using a JTable? JTable is a component of the Java Swing toolkit. The JTable class is helpful in displaying data in tabular format. You can also edit the data.
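A likely fix for the addColumn complaint, sketched and hedged: the TableModel interface indeed has no addColumn method, but the concrete DefaultTableModel class (the model JTable uses by default) does, so construct, or cast to, a DefaultTableModel instead. The column names below are hypothetical stand-ins for the SQL.getString(1) values read from the ResultSet in the original snippet.

```java
import javax.swing.table.DefaultTableModel;

public class ColumnDemo {
    public static void main(String[] args) {
        // TableModel (the interface) has no addColumn;
        // DefaultTableModel (the concrete class) does.
        DefaultTableModel model = new DefaultTableModel();

        // Hypothetical stand-ins for SQL.getString(1) values.
        String[] namesFromResultSet = { "id", "name", "price" };
        for (String columnName : namesFromResultSet) {
            model.addColumn(columnName);
        }
        System.out.println(model.getColumnCount()); // 3
    }
}
```

You would then construct the table with new JTable(model), or install the model on an existing table with table.setModel(model).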
http://www.roseindia.net/discussion/18239-Inserting-a-Column-in-JTable.html
Slashdot: Another Sony Rootkit? Posted by ScuttleMonkey on Mon Aug 27, 2007, 09:40 AM. Related story: BioShock Installs a Rootkit (529 comments). This discussion has been archived; no new comments can be posted (317 comments).

Re:Sony: Hype here notwithstanding, this is not a "rootkit". It seems to be a bizarre form of write-protection.

Format before use: And using an OS that won't run anything from the newly attached memory by default would also help. Is there a way to permanently disable this?
Is there anything that would break if one were to find a way to nullify this functionality in OS calls? (Ryan Fenton)

Rootkits aside: The issue here is the biometric stuff. If your CC number gets stolen, or your password gets hacked, you can simply cancel the old CC / reset your account, etc. Now, what happens when your data 'fingerprint' (retina scan, whatever) gets hacked and compromised? Get new fingers? Get new eyeballs (a la Tom Cruise)? I think not. The sooner people learn not to buy and trust this crap the better. Then again, perhaps the people that buy this crap deserve an MS-designed rootkit anyway.

What did you expect? How fucking stupid can you people be? Stop buying Sony! (mcgrew)

About Sony and rootkits: I feel like I finally have to create a user account to correct a misconception I see a lot on the internet. It wasn't Sony that put a rootkit on the music CDs, it was Sony-BMG, which is a separate company that is 50/50 owned by Sony and Bertelsmann (BMG stands for Bertelsmann Music Group).
Furthermore, the top executives at Sony-BMG all come from the BMG side, like that guy Thomas Hesse who made those stupid remarks that consumers shouldn't care about rootkits. If anything, all the anger toward Sony should be directed at the entity involved, which is Sony-BMG. Just boycott their music.

Not rootkit, not malware: The malware aspect comes in because the Sony software installs a driver for the fingerprint reader in a special hidden directory, presumably with the intention of making the driver more difficult to tamper with and/or bypass. The idea here is that if an attacker can tamper with the driver, they can have the tampered driver send a false "correct read" signal to the vault, which would expose the content to attackers. Vista's driver protection basically works the same way, by preventing you from editing sections of the registry and editing/deleting certain files. So, in theory anyway, if Sony updates the driver for Vista this behavior shouldn't be necessary (not that it is now), because Sony can make it a "signed" driver that is more difficult to tamper with. The driver might also contain some sort of obfuscated code (I'm not that familiar with this kind of driver hacking). On the grand scale of software that breaks Windows conventions, this is a rather petty example. There are anti-virus tools and debuggers that tamper with the kernel. There is DRM software that breaks other apps on your system. There are virtual disk drives that can destroy your entire Windows install. Really, one hidden driver ain't so bad. Here's a question: does the uninstaller remove this hidden driver cleanly? If so, what's the problem? You shouldn't be using this Sony software anyway. Do you really want to stick your confidential data into a proprietary database cobbled together in a weekend by a few chumps at Sony? There are far more robust and flexible password vaults out there. Many are free.
Do any of you know if you can use the fingerprint reader without installing Sony's software?

What to hide? But seriously, this device seems to be designed for securing your data. Would you trust a vendor who takes these measures to hide the inner workings of the device? It's not that obfuscated, hidden, binary code ever stopped ambitious crackers. On the contrary, I think it just gives a false feeling of security to the vendor.

Re: Ha! it melted anyway! > when I looked at it the whole case was drooping and had his thumbprint in it. Well, after all, it *is* a thumbprint reader! (I agree with the other poster, there's no way a USB device can suck enough power to melt itself.)

Re: Is this a problem under Linux too? It's possible, for a particular kernel version known in advance. In other words, thanks to the multitude of Linux configurations, such an attack vector isn't practically feasible. Rootkits try to patch the syscall table, but that is not always trivial from user space, and again, not reliable. Now, with such a short update cycle (about 3-6 months), I haven't seen Linux rootkits in the wild for a very long time. Back in the 2.0/2.2 days there were rootkits, as well as popular security systems against them. On the other side, the Linux file system API does support so-called namespaces (or what Windows calls mount points). IOW, it is possible to remove something so it would be invisible to the user and his/her applications. But then it is a feature for the user, not against the user, so s/he can easily see that something was manipulated and undo the manipulations.
http://it.slashdot.org/it/07/08/27/1334210.shtml
This is Part 4 of our ongoing series on NumPy optimization. In Parts 1 and 2 we covered the concepts of vectorization and broadcasting, and how they can be applied to optimize an implementation of the K-Means clustering algorithm. Next in the queue, Part 3 covered important concepts like strides, reshape, and transpose in NumPy. In this post, Part 4, we'll cover the application of those concepts to speed up a deep learning-based object detector: YOLO. Here are the links to the earlier parts for your reference. Part 3 outlined how various operations like reshape and transpose can be used to avoid unnecessary memory allocation and data copying, thus speeding up our code. In this part, we will see it in action. We will focus particularly on a specific element in the output pipeline of our object detector that involves re-arranging information in memory. Then we'll implement a naïve version where we perform this re-arranging of information by using a loop to copy the information to a new place. Following this, we'll use reshape and transpose to optimize the operation so that we can do it without using the loop. This will lead to a considerable speed-up in the FPS of the detector. So let's get started!

Understanding the problem statement

The problem that we are dealing with here arises due to, yet again, nested loops (no surprises there!). I encountered it when I was working on the output pipeline of a deep learning-based object detector called YOLO (You Only Look Once) a couple of years ago. Now, I don't want to digress into the details of YOLO, so I will keep the problem statement very limited and succinct. I will describe enough so that you can follow along even if you have never heard of object detection. However, in case you are interested in following up, I have written a 5-part series on how to implement YOLO from scratch here. YOLO uses a convolutional neural network to predict objects in an image.
The output of the detector is a convolutional feature map. In case the couple of lines above sound like gibberish, here's a simplified version. YOLO is a neural network that will output a convolutional feature map, which is nothing but a fancy name for a data structure that is often used in computer vision (just like linked lists, dictionaries, etc.). A convolutional feature map is essentially a spatial grid with multiple channels. Each channel contains information about a specific feature across all locations in the spatial grid. In a way, an image can also be viewed as a convolutional feature map, with 3 channels describing the intensities of the red, green, and blue colors. (A 3-D image, where each channel contains a specific feature.) Let's say we are trying to detect objects in an image containing a rail engine. We give the image to the network and our output is something that looks like the diagram below. The image is divided into a grid. Each cell of the feature map looks for an object in the part of the image corresponding to that grid cell. Each cell contains data about 3 bounding boxes which are centered in the grid. If we have a grid of, say, 3 x 3, then we will have data for 3 x 3 x 3 = 27 such boxes. The output pipeline would filter these boxes down to the few containing objects, while most of the others would be rejected. The typical steps involved are:

- Thresholding boxes on the basis of some score.
- Removing all but one of the overlapping boxes that point to the same object.
- Transforming the data to actual boxes that can be drawn on an image.

However, given the way information is stored in a convolutional feature map, performing the operations highlighted above can lead to messy code (for more details, refer to Part 4 of the YOLO series). To make the code easier to read and manage, we would like to take the information stored in a convolutional feature map and rearrange it to a tabular form, like the one below. And that's the problem!
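Once the information is in tabular form, the first filtering step listed above (thresholding boxes on some score) reduces to a one-line boolean mask. Here is a minimal sketch; the column layout (score in column 4) and the 0.5 threshold are illustrative assumptions, not values from the detector:

```python
import numpy as np

# Hypothetical tabular output: one row per box.
# Columns 0-3: box coordinates; column 4: an objectness score (assumed layout).
boxes = np.array([
    [10.0, 10.0, 50.0, 50.0, 0.9],
    [20.0, 20.0, 60.0, 60.0, 0.3],
    [30.0, 30.0, 70.0, 70.0, 0.8],
])

# Keep only the rows whose score clears the threshold.
confident = boxes[boxes[:, 4] > 0.5]
print(confident.shape)  # (2, 5)
```

With the data still in [C H W] form, the same filter would need index gymnastics across the channel dimension, which is exactly the mess the rearrangement avoids.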
We simply need to rearrange information. That's it. No need to go further into YOLO! So let's get to it.

Setting up the experiment

For the purposes of this experiment, I have provided the pickled convolutional feature map, which can be downloaded from here. We first load the feature map into our system using pickle.

import pickle
import numpy as np

conv_map = pickle.load(open("conv_map.pkl", "rb"))

In PyTorch, one of the most widely used deep learning libraries, convolutional feature maps are stored in the [B x C x H x W] format, where B is the batch size, C is the channels, and H and W are the spatial dimensions. In the visualization used above, the convolutional map was demonstrated in [H W C] format; however, using the [C H W] format helps optimize computations under the hood. The pickled file contains a NumPy array representing a convolutional feature map in the [B C H W] format. We begin by printing the shape of the loaded array.

print(conv_map.shape)   # output -> (1, 255, 13, 13)

Here the batch size is 1, the number of channels is 255, and the spatial dimensions are 13 x 13. The 255 channels correspond to 3 boxes, with the information for each box represented by 85 floats. We want to create a data structure where each row represents a box and we have 85 columns representing this information.

Naïve solution

Let's try out the naïve solution first. We will begin by pre-allocating space for our new data structure and then fill it up by looping over the convolutional feature map.

# Get the shape-related info
b, c, h, w = conv_map.shape
box_info_length = 85

# Pre-allocate the memory
output = np.zeros((b, c * h * w // box_info_length, box_info_length))

# Loop over dimensions
for x_b in range(b):
    counter = 0   # reset the row counter for each batch element
    for x_h in range(h):
        for x_w in range(w):
            for x_c in range(0, c, box_info_length):
                # Set the values
                output[x_b, counter] = conv_map[x_b, x_c: x_c + box_info_length, x_h, x_w]
                counter += 1

We iterate first across each cell in the image, and then over each of the boxes described by the cell.
As you can probably figure out, a quadruple-nested loop ain't a very good idea. If we use larger image sizes (bigger h and w), a larger number of boxes per cell (c), or a larger batch size (b), the solution would not scale well. Can we vectorize this? The operation here involves rearranging information rather than performing a computation that can be vectorized. Even if we treat the rearrangement as a copy operation, in order to get the desired result we need to reshape the original data in such a way that the vectorized result has the tabular shape we require. Hence, even for vectorization, data reshaping/rearrangement is a prerequisite before the vectorized copy can be performed. So, if we want to speed up the above data reshaping exercise, we must come up with a way to do it without using slow Pythonic loops. We will use the features that we learned about in Part 3 (reshape and transpose) to do so.

Optimized Solution

Let us begin by initializing various helper variables.

output = conv_map[:]
b, c, h, w = conv_map.shape
box_info_length = 85

- b is the batch size, 1.
- box_info_length is the number of elements used to describe a single box, 85.
- c is the number of channels, carrying information about 3 boxes per grid cell: box_info_length * 3 = 255.
- h and w are the spatial dimensions, both of them 13.

Consequently, the number of columns in our target data structure will be equal to box_info_length (85), since we need to describe one box per row. Here, again, is the image showing the transformation that needs to be performed, for your reference. From here on I will refer to the convolutional map as the source array, and the tabular form we want to create as the target array. In our target array, the first dimension is the batch. The second dimension is the boxes themselves. Each cell predicts 3 boxes in our case, so our target array will have H x W x 3, or 13 x 13 x 3 = 507, such boxes.
In our source array, the 3 boxes at, say, an arbitrary location [h,w] are given by the channel slices [0:85, h, w], [85:170, h, w] and [170:255, h, w] respectively. The same information is described by three rows in our target array. In memory, the data of the source array is stored something like this:

[b0][c0][h0,w0] [b0][c0][h0,w1] [b0][c0][h0,w2] [b0][c0][h0,w3]

Now we would like to visualize how data is stored in the target array. But before that, we introduce some terminology to map information between the source and target arrays. In the target array, each box is represented by a row. The row with index D(h, w, n) represents box n predicted by the grid cell at [h,w] in the source array. The columns of the target array are denoted [f1, f2 ...... fn], representing the features of a box. Our source array has 4 dimensions, whereas our target array has 3. The two dimensions used to represent the spatial details of the boxes (h and w) in the source array are rolled into one in the target array. Therefore, we first begin by flattening the spatial dimensions of our source array into one.

output = output.reshape(b, c, h * w)

Once this is done, can we simply reshape the array to [b, num_boxes, box_info_length] and be done with it? The answer is no. Why? Because we know that reshaping does not change how the array is stored in memory. In the array we just reshaped, the information is stored in the following way:

[b0][c0][h0,w0] [b0][c0][h0,w1] [b0][c0][h0,w2] [b0][c0][h0,w3]

However, in our target array, two adjacent values in memory describe two features of the same box:

[b0][D(h0, w0, b0)][f1] [b0][D(h0, w0, b0)][f2] [b0][D(h0, w0, b0)][f3]

For each batch, and for each spatial location in that batch, we store the information across the channels, then move to the next spatial location, and then to the next batch element. Note that f1 ... fn is basically the information described along the channel dimension c in the source array. (Refer to the transformation diagram above.)
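As a quick aside recalling Part 3 (this example is mine, not from the post): a transpose by itself moves no data at all; it only swaps the strides NumPy uses to walk the buffer, which is why this route is cheap:

```python
import numpy as np

a = np.arange(24.0).reshape(2, 3, 4)  # float64, so each element is 8 bytes
t = a.transpose(0, 2, 1)

print(a.strides)               # (96, 32, 8)
print(t.strides)               # (96, 8, 32): axes 1 and 2 swapped, no data moved
print(np.shares_memory(a, t))  # True: t is just a view onto a's buffer
```

The copy only happens later, when a reshape is asked to produce a layout that the swapped strides cannot express.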
This means that in the internal memory, our source array is stored differently from our target array. Reshaping alone can't do the job; we must change how the array is stored in memory. The fundamental difference between the array we have and the one we want to arrive at is the order of the dimensions encoding channel and spatial information. If we take the array we have and swap the channel dimension with the spatial dimension, we get an array that is stored in memory like this:

[b0][h0,w0][c0] [b0][h0,w0][c1] [b0][h0,w0][c2] [b0][h0,w0][c3]

This is precisely how the information is stored in the target array. This reordering of dimensions can be accomplished by a transpose operation.

output = output.transpose(0, 2, 1)

However, our job is not done yet. We still have the entire channel information in the third dimension of the new array. As stated above, the channel dimension contains the information of three boxes end to end. The array we have now is of shape [b][h*w][c]; arr[b][h,w] describes the three boxes arr[b][h,w][0:c//3], arr[b][h,w][c//3 : 2*c//3] and arr[b][h,w][2*c//3 : c]. What we want is an array of shape [b][h*w*3][c/3], such that the 3 boxes per grid cell are accommodated as 3 entries in the second dimension. Note that the way our current array is stored in memory is the same as the way our target array would be:

# Current array
[b0][h0,w0][c0] [b0][h0,w0][c1] [b0][h0,w0][c2] [b0][h0,w0][c3]

# Destination array
[b0][D(h0, w0, b0)][f1] [b0][D(h0, w0, b0)][f2] [b0][D(h0, w0, b0)][f3]

Therefore, we can easily turn our current array into the destination array with a reshape operation.

output = output.reshape(b, c * h * w // box_info_length, box_info_length)

This finally gives us our desired output without the loop!

Timing the new code

Now that we have a new way to accomplish our task, let us compare the time taken by the two versions of the code. Let's time the loop version first.
%%timeit -n 1000
b, c, h, w = conv_map.shape
box_info_length = 85
output = np.zeros((b, c * h * w // box_info_length, box_info_length))

for x_b in range(b):
    counter = 0
    for x_h in range(h):
        for x_w in range(w):
            for x_c in range(0, c, box_info_length):
                output[x_b, counter] = conv_map[x_b, x_c: x_c + box_info_length, x_h, x_w]
                counter += 1

Let us now time the optimized code.

%%timeit -n 1000
output = conv_map[:]
b, c, h, w = conv_map.shape
box_info_length = 85

output = output.reshape(b, c, h * w)
output = output.transpose(0, 2, 1)
output = output.reshape(b, c * h * w // box_info_length, box_info_length)

We see that the optimized code runs in about 18.2 nanoseconds, which is about 20 times faster than the loop version!

Effect on FPS of the Object Detector

While the optimized code runs about 20x faster than the loop version, the actual boost in FPS is about 8x when I use a confidence threshold of 0.75 on my RTX 2060 GPU. We get a smaller speed-up for a few reasons. First, if the number of objects detected is too large, then the bottleneck of the code becomes drawing the rectangular boxes on the images, not the output pipeline; therefore the gains from the output pipeline don't correspond exactly to gains in FPS. Second, each frame has a different number of boxes; even the moving average of the frame rate varies throughout the video. Of course, even the confidence threshold will affect the frame rate, as a lower confidence threshold means more boxes detected. Third, the speed-up you get may also vary depending on what sort of GPU you are using. That's why I have demonstrated the time difference between the snippets of the code and not the detector itself. The speed-up in FPS on the detector will vary on a case-by-case basis, depending on the parameters I have talked about above. However, every time I have used this trick, my speed-ups have always been considerable.
Even a 5x speed-up in FPS can mean that object detection can be performed in real time. Setting up an object detector can be quite an elaborate exercise and is beyond the scope of this post, as not all readers may be into deep learning-based computer vision. For those who are interested, you can check out my YOLO series. That particular implementation uses reshape and transpose to speed up the output pipeline.

Conclusion

That's it for this part, folks. In this part, we saw how methods like reshape and transpose can help us speed up information-rearrangement tasks. While the reshape that follows a transpose does require making a copy, that copy happens inside NumPy's C routines and is therefore much faster than the same rearrangement implemented with Pythonic loops. While we optimized the output of an object detector, similar ideas can be applied to allied applications like semantic segmentation, pose detection, etc., where a convolutional feature map is our ultimate output. For now, I'm going to put this series to rest. I hope these posts have helped you appreciate NumPy better and write more optimized code. Till then, signing off!
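As a closing sanity check (my own addition, using the article's shapes), here is a self-contained sketch that builds a random feature map and verifies that the loop version and the reshape/transpose version produce identical arrays:

```python
import numpy as np

b, c, h, w = 1, 255, 13, 13
box_info_length = 85
conv_map = np.random.rand(b, c, h, w)

# Naive quadruple loop: one row per (cell, box) pair
loop_out = np.zeros((b, c * h * w // box_info_length, box_info_length))
for x_b in range(b):
    counter = 0
    for x_h in range(h):
        for x_w in range(w):
            for x_c in range(0, c, box_info_length):
                loop_out[x_b, counter] = conv_map[x_b, x_c:x_c + box_info_length, x_h, x_w]
                counter += 1

# Flatten spatial dims, swap axes, then reshape
fast_out = (conv_map.reshape(b, c, h * w)
                    .transpose(0, 2, 1)
                    .reshape(b, c * h * w // box_info_length, box_info_length))

print(np.array_equal(loop_out, fast_out))  # True
```

Both routes copy the exact same floats, so the comparison is an exact equality, not an approximate one.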
https://blog.paperspace.com/faster-object-detection-numpy-reshape-transpose/
06-16-2010 04:43 AM
When attempting to back up the database, I get the following error message: "Incorrect syntax near the keyword 'Database'. Incorrect syntax near the keyword 'With'. If this statement is a common table expression or an xmlnamespaces clause, the previous statement must be terminated with a semicolon. PRC_MAINT_BACKUPDB: ERROR EXECUTING DATABASE BACKUP STATEMENT". The first error, 'Database', may be related to the folder name, Database-database files, produced when started from the .ADF file. OS: Win 7 Home Premium. Help to solve both problems would be appreciated. Ron

06-16-2010 07:25 AM
I see a reference to 'Database-database files'. Is the name of your database "Database"? If it is, was this converted from a previous version of Act!? Have you ever been able to create a backup?

06-16-2010 08:08 AM
Yes, the name was Database and it was converted from v2009. I have not backed up this database since moving from v2009, as it is in My Documents on a partition that is backed up daily. I am attempting to move the database to a laptop for use when out of town. Ron

06-16-2010 08:17 AM
The issue may be caused by the name. Try saving your database to a new name using the File > Save Copy As option. Then back up the new copy.

06-16-2010 06:43 PM
That fixed the problem. Strangely, I had to switch to Administrator to be able to copy the database. Once copied and switched back to my own login, the database opened without a hitch and can now be backed up. Great, thanks for the help. Ron
https://community.act.com/t5/Act/Act-2010-Problem-Backing-Up-Database/m-p/73562/highlight/true
Step 1: PROJECT PARTS
- GOduino III or Arduino Uno. You should be able to get this project to work with most Arduinos with some tweaking.
- Sound sensor ($4 from ebay). I used Seeed's sound sensor.
- IR 940nm LED transmitter ($0.10 from ebay).
- IR receiver, 38KHz, 3-pin (not the 2-pin LEDs) ($1 from ebay). I got mine from Tayda2009.
- 1K Ohm resistor.
- Breadboard.
- Jumper wires.
- Power: you can use USB power or any battery that can source 7V to 12V and over 500mA.

SOFTWARE
- The Arduino IDE (Download from)
- IRremote library. Download from and extract the content of the zip file to the folder Arduino\libraries\IRremote

Step 2: WIRING THE TV LOUDNESS GUARD
SOUND SENSOR
- GND pin ---> Arduino GND pin
- VCC pin ---> Arduino 5V pin
- SIG pin ---> Arduino A0 pin

IR TRANSMITTER LED
- Cathode ---> Arduino GND
- Anode ---> 1K Ohms ---> Arduino pin 3 (PWM)

IR 38KHZ RECEIVER (FACING YOU)
- Right pin ---> Arduino 5V pin
- Middle pin ---> Arduino GND pin
- Left pin ---> Arduino 11 pin (PWM)

Step 3: DECODE YOUR REMOTE CONTROL BUTTONS
Since our gadget needs to simulate sending a Volume Down remote control command whenever the TV volume is too high, we need to figure out the code for your particular TV remote. This is done easily using the example program provided by the IRremote library.
- With the TV Loudness Guard gadget fully wired, connect your Arduino to your PC.
- From the Arduino IDE, load the example file IRrecvDump, which can be found under the menu File/Examples/IRremote.
- Open the Arduino IDE serial monitor.
- Point your remote control at the IR LED receiver (3-pin) and press the Volume Down button. You will see numbers being displayed on the Serial Monitor.
- Record the number generated when you pressed your remote button. In my case, the Volume Down button produced 1CE3E817 with a bit count of 32, which I will use in my Arduino program. You need to replace my remote code with the code captured for your own Volume Down button.
Step 4: PREPARE THE ARDUINO PROGRAM

REMOTE BUTTON CODE & BIT
#define REMOTE_CODE: your remote code as returned by the IRrecvDump decoder utility, prefixed with "0x"
#define REMOTE_BIT: your remote code data size as returned by the IRrecvDump decoder utility

This Arduino program works for most remote controls, but you need to tell it about your remote control protocol using the info you gathered in the previous step when you decoded your remote control buttons with the IRrecvDump utility. It's possible to make the remote selection dynamic at run time, so you don't have to change and upload code. I might do this in a later version of this gadget.

VOLUME LEVEL THRESHOLD
#define NOISE_LEVEL: a number from 0 to 1023. Start with 500, then fine-tune. This is the number that decides at what point the Arduino will start transmitting Volume Down codes.
NOTE: The sound sensor I am using has a built-in potentiometer, which also controls the sensor's sensitivity.

VOLUME CHANGE SPEED
#define REPEAT_TX: from 1 to as many as you want. Start with 3, then fine-tune. This changes how many times the remote code is transmitted to the TV. If you want a more drastic drop in TV volume, increase this number. If you want a more gradual change in volume, lower it.

FEATURE TODO LIST
It's very simple to program more functionality into this gadget. Some of the features that can be added:
- Average the audio level over a period of time to determine whether an increase in volume is persistent, requiring volume control, or momentary and should be ignored.
- Read the audio level after a period of time. If the audio is too low, increase the volume by a certain increment.
- Make the program inclusive of supported remote protocols
- Add Panasonic & JVC support

THE ARDUINO CODE
Cut and paste the code below into your Arduino IDE.

//=================================================
/*
PROJECT: TV Volume Guard
AUTHOR: Hazim Bitar (techbitar)
DATE: FEB 9, 2013
CONTACT: techbitar at gmail dot com
LICENSE: My code is in the public domain.
IRremote library: copyright by Ken Shirriff
*/

#include <IRremote.h>

#define NOISE_LEVEL 350         // level of noise to detect, from 0 to 1023
#define REPEAT_TX 3             // how many times to transmit the IR remote code
#define REMOTE_CODE 0x1CE3E817  // remote code to transmit. This is for my TV. Replace with yours.
#define REMOTE_BIT 32
#define SOUND_SENSOR_PIN A0     // sound sensor connected to this analog pin
#define LED 13                  // LED used to blink when volume is too high

IRsend irsend;  // instantiate IR object

void setup() {
  pinMode(LED, OUTPUT);
}

void loop() {
  int soundLevel = analogRead(SOUND_SENSOR_PIN);  // read the sound sensor
  if (soundLevel > NOISE_LEVEL) {                 // compare to the noise level threshold you decide
    digitalWrite(LED, HIGH);                      // LED on
    delay(200);
    for (int txCount = 0; txCount < REPEAT_TX; txCount++) {  // how many times to transmit the IR remote code
      irsend.sendNEC(REMOTE_CODE, REMOTE_BIT);               // Change to match your remote protocol
      delay(200);
      // Uncomment the function that matches your remote control protocol as shown by IRrecvDump:
      // irsend.sendNEC(REMOTE_CODE, REMOTE_BIT);
      // irsend.sendSony(REMOTE_CODE, REMOTE_BIT);
      // irsend.sendRC5(REMOTE_CODE, REMOTE_BIT);
      // irsend.sendRC6(REMOTE_CODE, REMOTE_BIT);
    }
  }
  digitalWrite(LED, LOW);  // LED off
}
//=================================================

This is too complicated for little ole me and I was wondering, are there any TVs that control the volume without extra equipment? Missy, gaynel13@gmail.com

big respect for u

you didn't include "IRremote.h" and while I have it, the code has presented some problems in my IDE; did you change it from the IR library online?
I'm doing this but I'm having trouble with the code. The IDE will not compile it, due to countless errors. Anyone else have trouble, or understand the code in depth?!
http://www.instructables.com/id/TV-Volume-Loudness-Guard-using-Arduino/?ALLSTEPS
- The Cart Class's Needs
- Defining the Cart Class
- Making the Cart Iterable and Countable
- The Item Class
- Using the Code
- Conclusion

Defining the Cart Class

To start, the cart class needs an internal array for storing all the items in the cart:

class ShoppingCart {
    protected $items = array();

I’ve made this attribute protected, not private, just in case you need to derive another shopping cart class from this one. Next, the isEmpty() method of the class can be used to quickly see whether or not there are items in the cart. It returns a Boolean:

public function isEmpty() {
    return (empty($this->items));
}

The addItem() method should add items to the cart. Here it is in full, then I’ll explain it:

public function addItem(Item $item) {
    // Need the item id:
    $id = $item->getId();
    // Throw an exception if there's no id:
    if (!$id) throw new Exception('The cart requires items with unique ID values.');
    // Add or update:
    if (isset($this->items[$id])) {
        $this->updateItem($item, $this->items[$id]['qty'] + 1);
    } else {
        $this->items[$id] = array('item' => $item, 'qty' => 1);
    }
} // End of addItem() method.

The method takes one argument of type Item. This restriction is accomplished via PHP’s type hinting ability, and an exception will be thrown if an Item object is not received (see Figure 1). I’ll explain the Item class later in the article, but understand now that so long as the items used by your site are of type Item, or a derived class, this will work. Items are stored in the internal array using the item ID as the index. The item ID should be a unique reference to each item the site sells, whether that means an automatically incremented primary key or a string SKU. This method must therefore first find the item ID by calling the Item object’s getId() method. If no ID value exists, an exception is thrown. (I’ve included an example of this validation here, but have omitted it from the other methods in the class for the sake of brevity.)
Next, the method has a check to confirm that this is a new item being added to the cart, rather than another of an existing item being added. That check just sees if there's already an element in the array indexed at the ID. If so, then the updateItem() method will be called instead, passing along the item being added/updated and a quantity of one more than the current quantity (this will mean more when you see the full class definition). If this is indeed a new item to the cart, then the item is added as an array to the items array, indexed at the item's ID value.

Next, let's write the updateItem() method. It could be called internally (as just shown), or when a form is submitted to update the quantities of the items in the cart.

```php
public function updateItem(Item $item, $qty) {

    // Need the unique item id:
    $id = $item->getId();

    // Delete or update accordingly:
    if ($qty === 0) {
        $this->deleteItem($item);
    } elseif ( ($qty > 0) && ($qty != $this->items[$id]['qty']) ) {
        $this->items[$id]['qty'] = $qty;
    }

} // End of updateItem() method.
```

As you can see, the method is similar to addItem(), except that it takes both an item and a quantity as its arguments. If the quantity is 0, then the item is removed from the cart. Otherwise, so long as the quantity is a positive integer and not equal to the current quantity, that item's quantity is updated in the cart.

The deleteItem() method just removes the item from the cart:

```php
public function deleteItem(Item $item) {
    $id = $item->getId();
    if (isset($this->items[$id])) {
        unset($this->items[$id]);
    }
}
```

And that concludes the core functionality. Let's make it more useful, though, by applying two interfaces defined in the Standard PHP Library.
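The add-or-update bookkeeping described above is language-agnostic. As a quick illustration, here is the same pattern sketched in JavaScript; this is a hypothetical, minimal rendition for illustration only, not the article's code, though it keys a plain object by item ID exactly as the PHP class keys its $items array:

```javascript
// Minimal sketch of the cart bookkeeping: items are stored keyed by
// their unique ID, each entry holding the item plus a quantity.
class ShoppingCart {
  constructor() {
    this.items = {}; // id -> { item, qty }
  }

  isEmpty() {
    return Object.keys(this.items).length === 0;
  }

  addItem(item) {
    const id = item.id;
    if (!id) throw new Error('The cart requires items with unique ID values.');
    if (this.items[id]) {
      // Already in the cart: bump the quantity by one.
      this.updateItem(item, this.items[id].qty + 1);
    } else {
      this.items[id] = { item: item, qty: 1 };
    }
  }

  updateItem(item, qty) {
    const id = item.id;
    if (qty === 0) {
      this.deleteItem(item);
    } else if (qty > 0 && qty !== this.items[id].qty) {
      this.items[id].qty = qty;
    }
  }

  deleteItem(item) {
    delete this.items[item.id];
  }
}
```

As in the PHP version, adding the same item twice simply increments its quantity, and updating an item to a quantity of zero removes it from the cart.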
Build a chat app with React Native

A basic understanding of React and Node.js is needed to follow this tutorial.

Social chat applications are hugely popular these days, allowing people to stay connected on topics they are interested in from all over the world. In this article we're going to explore creating a simple chat app in the React Native framework, which allows us to use the same source code to target both Android and iOS. In order to keep this example simple to follow we're going to focus only on the basics: a single chat room, and no authentication of the people chatting.

The application will work in two parts. The client application will receive events from Pusher informing it of new users and new messages, and there will be a server application that is responsible for sending messages to Pusher.

In order to implement this you need to have the following on your computer:

- A recent version of Node.js
- A text editor

You will also need a mobile device with the Expo tools installed (available from the Android Play Store or the Apple App Store for free). This is used to test the React Native application whilst you are still developing it. It works by allowing you to start and host the application on your workstation, and connect to it remotely from your mobile device as long as you are on the same network.

Note as well that this article assumes some prior experience with writing JavaScript applications, and with the React framework, especially working with the ES6 and JSX versions of the language.

Creating a Pusher application to use

Firstly, we'll need to create a Pusher application that we can connect our server and client to. This can be done for free here. When you create your application, you will need to make note of your App ID, App Key, App Secret and Cluster:

Creating the server application

Our server application is going to be written in Node.js using the Express web framework. We are going to have three RESTful endpoints, and no actual views.
The endpoints are:

- PUT /users/:name - indicate that a new user has joined
- DELETE /users/:name - indicate that a user has left
- POST /users/:name/messages - send a message to the chat room

Creating a new Node application is done using the npm init call, as follows:

```shell
> npm init
package name: (server) react-native-chat-server
version: (1.0.0)
description: Server component for our React Native Chat application
entry point: (index.js)
test command:
git repository:
keywords:
author:
license: (ISC)
{
  "name": "react-native-chat-server",
  "version": "1.0.0",
  "description": "Server component for our React Native Chat application",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}
Is this ok? (yes)
```

We then need to install the modules that we're going to depend on: express, body-parser (to allow us to parse incoming JSON bodies) and pusher (to talk to the Pusher API).

```shell
> npm install --save express body-parser pusher
```

This gives us everything we need to get our server application written.
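Even before the React Native client exists, those three endpoints can be exercised from any HTTP client. As a sketch (hypothetical helpers, not part of the tutorial's code), here is a small JavaScript module that builds the request for each endpoint; the SERVER address is an assumption matching the Express app listening on port 4000:

```javascript
// Builds { url, options } pairs for the three chat-server endpoints.
// SERVER is an assumption: adjust it to wherever the Express app runs.
const SERVER = 'http://localhost:4000';

// PUT /users/:name - a new user has joined.
function joinRequest(name) {
  return {
    url: `${SERVER}/users/${encodeURIComponent(name)}`,
    options: { method: 'PUT' },
  };
}

// DELETE /users/:name - a user has left.
function leaveRequest(name) {
  return {
    url: `${SERVER}/users/${encodeURIComponent(name)}`,
    options: { method: 'DELETE' },
  };
}

// POST /users/:name/messages - send a chat message as JSON.
function messageRequest(name, message) {
  return {
    url: `${SERVER}/users/${encodeURIComponent(name)}/messages`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ message: message }),
    },
  };
}
```

Each pair can be passed straight to fetch, e.g. `const { url, options } = messageRequest('alice', 'hello'); fetch(url, options);`.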
Because it's so simple we can do it all in one file, index.js, which will look like this:

```javascript
const express = require('express');
const bodyParser = require('body-parser');
const Pusher = require('pusher');

const pusherConfig = require('./pusher.json'); // (1)
const pusherClient = new Pusher(pusherConfig);

const app = express(); // (2)
app.use(bodyParser.json());

app.put('/users/:name', function(req, res) { // (3)
  console.log('User joined: ' + req.params.name);
  pusherClient.trigger('chat_channel', 'join', {
    name: req.params.name
  });
  res.sendStatus(204);
});

app.delete('/users/:name', function(req, res) { // (4)
  console.log('User left: ' + req.params.name);
  pusherClient.trigger('chat_channel', 'part', {
    name: req.params.name
  });
  res.sendStatus(204);
});

app.post('/users/:name/messages', function(req, res) { // (5)
  console.log('User ' + req.params.name + ' sent message: ' + req.body.message);
  pusherClient.trigger('chat_channel', 'message', {
    name: req.params.name,
    message: req.body.message
  });
  res.sendStatus(204);
});

app.listen(4000, function() { // (6)
  console.log('App listening on port 4000');
});
```

This is the entire server application, which works as follows:

- Create a new Pusher client and configure it to connect to our Pusher application, as configured above
- Create a new Express server
- Add a new route, PUT /users/:name. This will send a join message to the Pusher application with the name of the user that has joined as the payload
- Add a new route, DELETE /users/:name. This will send a part message to the Pusher application with the name of the user that has just departed as the payload
- Add a third route, POST /users/:name/messages. This will send a message message to the Pusher application with the name of the user sending the message and the actual message as the payload
- Start the server listening on port 4000

Pusher has native support for Join and Leave notification as a part of its API, by leveraging the Presence Channel functionality.
This requires authentication to be implemented before the client can use it, though, which is beyond the scope of this article, but it will give a much better experience if you are already implementing authentication.

Note: Why the names join and part? It's a throwback to the IRC specification. The names aren't important at all, as long as they are distinct from each other and consistent with what the client expects.

Before we can start the application, we need a Pusher configuration file. This goes in pusher.json and looks like this:

```json
{
    "appId": "SOME_APP_ID",
    "key": "SOME_APP_KEY",
    "secret": "SOME_APP_SECRET",
    "cluster": "SOME_CLUSTER",
    "encrypted": true
}
```

The values used here are exactly the ones taken from the Pusher application config we saw above.

Starting the server

We can now start this up and test it out. Starting it up is simply done by executing index.js:

```shell
> node index.js
App listening on port 4000
```

If we then use a REST client to send the appropriate messages to our server, the corresponding events will appear in the Debug Console in the Pusher Dashboard, proving that they are coming through correctly. You can do the same for the other messages, and see how it looks:

Creating the client application

Our client application is going to be built using React Native, leveraging the create-react-native-app scaffolding tool to do a lot of the work for us. This first needs to be installed onto the system, as follows:

```shell
> npm install -g create-react-native-app
```

Once installed we can then create our application, ready for working on:

```shell
> create-react-native-app client

Creating a new React Native app in client.

Using package manager as npm with npm interface.
Installing packages. This might take a couple minutes.
Installing react-native-scripts...
npm WARN react-redux@5.0.6 requires a peer of react@^0.14.0 || ^15.0.0-0 || ^16.0.0-0 but none was installed.
Installing dependencies using npm...
npm WARN react-native-branch@2.0.0-beta.3 requires a peer of react@>=15.4.0 but none was installed.
npm WARN lottie-react-native@1.1.1 requires a peer of react@>=15.3.1 but none was installed.

Success! Created client at client
Inside that directory, you can run several commands:

  npm start
    Starts the development server so you can open your app in the Expo app on your phone.

  npm run ios (Mac only, requires Xcode)
    Starts the development server and loads your app in an iOS simulator.

  npm run android (Requires Android build tools)
    Starts the development server and loads your app on a connected Android device or emulator.

  npm test
    Starts the test runner.

  npm run eject
    Removes this tool and copies build dependencies, configuration files and scripts into the app
    directory. If you do this, you can't go back!

We suggest that you begin by typing:

  cd client
  npm start

Happy hacking!
```

We can now start up the template application and ensure that it works correctly. Starting it is a case of running npm start from the project directory:

Amongst other things, this shows a huge QR code on the screen. This is designed for the Expo app on your mobile device to read in order to connect to the application. If we now load up Expo and scan this code with it, it will load the application for you to see:

Adding a Login screen

The first thing we're going to need is a screen where the user can enter a name to appear as. This is simply going to be a label and a text input field for now. To achieve this, we are going to create a new React component that handles this.
This will go in Login.js and look like this:

```javascript
import React from 'react';
import { StyleSheet, Text, TextInput, KeyboardAvoidingView } from 'react-native';

export default class Login extends React.Component { // (1)
  render() {
    return (
      <KeyboardAvoidingView style={styles.container}> {/* (2) */}
        <Text>Enter the name to connect as:</Text> {/* (3) */}
        <TextInput
          autoCapitalize="none" // (4)
          autoCorrect={false}
          autoFocus
          keyboardType="default"
          maxLength={ 20 }
          placeholder="Username"
          returnKeyType="done"
          enablesReturnKeyAutomatically
          style={styles.username}
          onSubmitEditing={this.props.onSubmitName} />
      </KeyboardAvoidingView>
    );
  }
}

const styles = StyleSheet.create({ // (5)
  container: {
    flex: 1,
    backgroundColor: '#fff',
    alignItems: 'center',
    justifyContent: 'center'
  },
  username: {
    alignSelf: 'stretch',
    textAlign: 'center'
  }
});
```

This works as follows:

- Define our Login component that we are going to use
- Render the KeyboardAvoidingView. This is a special wrapper that understands the keyboard on the device and shifts things around so that they aren't hidden underneath it
- Render some simple text as a label for the user
- Render a text input field that will collect the name the user wants to register as. When the user presses the Submit button this will call a provided callback to handle the fact
- Apply some styling to the components so that they look as we want them to

We then need to make use of this in our application.
For now this is a simple case of updating App.js as follows:

```javascript
import React from 'react';
import Login from './Login';

export default class App extends React.Component { // (1)
  constructor(props) {
    super(props); // (2)

    this.handleSubmitName = this.onSubmitName.bind(this); // (3)

    this.state = { // (4)
      hasName: false
    };
  }

  onSubmitName(e) { // (5)
    const name = e.nativeEvent.text;
    this.setState({
      name,
      hasName: true
    });
  }

  render() { // (6)
    return (
      <Login onSubmitName={ this.handleSubmitName } />
    );
  }
}
```

This is how this works:

- Define our application component
- We need a constructor to set up our initial state, so we need to pass the props up to the parent
- Add a local binding for handling when a name is submitted. This is so that the correct value for `this` is used
- Set the initial state of the component. This is the fact that no name has been selected yet. We'll be making use of that later
- When a name is submitted, update the component state to reflect this
- Actually render the component. This only renders the Login view for now

If you left your application running then it will automatically reload. If not then restart it and you will see it now look like this:

Managing the connection to Pusher

Once we've got the ability to enter a name, we want to be able to make use of it. This will be a Higher Order Component that manages the connection to Pusher but doesn't render anything itself.

Firstly we are going to need some more modules to actually support talking to Pusher. For this we are going to use the pusher-js module, which has React Native support. This is important because React Native is not a fully Node-compatible environment, so the full pusher module will not work correctly.

```shell
> npm install --save pusher-js
```

We then need our component that will make use of this.
Write a file ChatClient.js:

```javascript
import React from 'react';
import Pusher from 'pusher-js/react-native';
import { StyleSheet, Text, KeyboardAvoidingView } from 'react-native';

import pusherConfig from './pusher.json';

export default class ChatClient extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      messages: []
    };

    this.pusher = new Pusher(pusherConfig.key, pusherConfig); // (1)
    this.chatChannel = this.pusher.subscribe('chat_channel'); // (2)
    this.chatChannel.bind('pusher:subscription_succeeded', () => { // (3)
      this.chatChannel.bind('join', (data) => { // (4)
        this.handleJoin(data.name);
      });
      this.chatChannel.bind('part', (data) => { // (5)
        this.handlePart(data.name);
      });
      this.chatChannel.bind('message', (data) => { // (6)
        this.handleMessage(data.name, data.message);
      });
    });

    this.handleSendMessage = this.onSendMessage.bind(this); // (9)
  }

  handleJoin(name) { // (4)
    const messages = this.state.messages.slice();
    messages.push({action: 'join', name: name});
    this.setState({
      messages: messages
    });
  }

  handlePart(name) { // (5)
    const messages = this.state.messages.slice();
    messages.push({action: 'part', name: name});
    this.setState({
      messages: messages
    });
  }

  handleMessage(name, message) { // (6)
    const messages = this.state.messages.slice();
    messages.push({action: 'message', name: name, message: message});
    this.setState({
      messages: messages
    });
  }

  componentDidMount() { // (7)
    fetch(`${pusherConfig.restServer}/users/${this.props.name}`, {
      method: 'PUT'
    });
  }

  componentWillUnmount() { // (8)
    fetch(`${pusherConfig.restServer}/users/${this.props.name}`, {
      method: 'DELETE'
    });
  }

  onSendMessage(text) { // (9)
    const payload = {
      message: text
    };
    fetch(`${pusherConfig.restServer}/users/${this.props.name}/messages`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json'
      },
      body: JSON.stringify(payload)
    });
  }

  render() { // (10)
    const messages = this.state.messages;
    return (
      <Text>Messages: { messages.length }</Text>
    );
  }
}
```

There's an awful lot going on here, so let's go over it all:

- This is our Pusher client. The configuration for this is read from a file almost identical to the one on the server; the only difference is that this file also has the URL to that server, but that's not used by Pusher
- This is where we subscribe to the Pusher channel that our server is adding all of the messages to
- This is a callback for when the subscription has been successful, since it's an asynchronous event
- This is a callback registered whenever we receive a join message on the channel, and it adds a message to our list
- This is a callback registered whenever we receive a part message on the channel, and it adds a message to our list
- This is a callback registered whenever we receive a message message on the channel, and it adds a message to our list
- When the component first mounts, we send a message to the server informing it of the user that has connected
- When the component unmounts, we send a message to the server informing it of the user that has left
- This is the handler for sending a message to the server, which will be hooked up soon
- For now we just render a counter of the number of messages received

This isn't very fancy yet, but it already does all of the communications with both our server and with Pusher to get all of the data flow necessary. Note that to communicate with our server we use the Fetch API, which is a standard part of the React Native environment. We do need the address of the server, which we put into our pusher.json file to configure it. This file then looks as follows:

```json
{
    "appId": "SOME_APP_ID",
    "key": "SOME_APP_KEY",
    "secret": "SOME_APP_SECRET",
    "cluster": "SOME_CLUSTER",
    "encrypted": true,
    "restServer": "http://<address of your server>:4000"
}
```

Note: When you actually deploy this for real, the restServer property will need to be changed to the address of the live server.

Chat Display

The next thing that we need is a way to display all of the messages that happen in our chat.
This will be a list containing every message, displaying when people join, when they leave and what they said. This will look like this:

```javascript
import React from 'react';
import { StyleSheet, Text, TextInput, FlatList, KeyboardAvoidingView } from 'react-native';
import { Constants } from 'expo';

export default class ChatView extends React.Component {
  constructor(props) {
    super(props);
    this.handleSendMessage = this.onSendMessage.bind(this);
  }

  onSendMessage(e) { // (1)
    this.props.onSendMessage(e.nativeEvent.text);
    this.refs.input.clear();
  }

  render() { // (2)
    return (
      <KeyboardAvoidingView style={styles.container}>
        <FlatList data={ this.props.messages }
          renderItem={ this.renderItem }
          styles={ styles.messages } />
        <TextInput autoFocus
          ref="input"
          style={ styles.input }
          onSubmitEditing={ this.handleSendMessage } />
      </KeyboardAvoidingView>
    );
  }

  renderItem({item}) { // (3)
    const action = item.action;
    const name = item.name;

    if (action == 'join') {
      return <Text style={ styles.joinPart }>{ name } has joined</Text>;
    } else if (action == 'part') {
      return <Text style={ styles.joinPart }>{ name } has left</Text>;
    } else if (action == 'message') {
      return <Text>{ name }: { item.message }</Text>;
    }
  }
}

const styles = StyleSheet.create({
  container: {
    flex: 1,
    backgroundColor: '#fff',
    alignItems: 'flex-start',
    justifyContent: 'flex-start',
    paddingTop: Constants.statusBarHeight
  },
  messages: {
    alignSelf: 'stretch'
  },
  input: {
    alignSelf: 'stretch'
  },
  joinPart: {
    fontStyle: 'italic'
  }
});
```

This works as follows:

- When the user submits a new message, we call the handler we were provided, and then clear the input box so that they can type the next message
- Render a FlatList of messages, and an input box for the user to type their messages into. Each message is rendered by the renderItem function
- Actually render the messages in the list into the appropriate components. Every message is in a Text component, with the actual text and the styling depending on the type of message.
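The branching inside renderItem is really just a mapping from message objects to display strings. Pulled out as a plain function (an illustrative sketch, not part of the tutorial's component), it looks like this and is trivial to test in isolation:

```javascript
// Maps a chat message object to the text shown for it, mirroring the
// three branches of renderItem: join notices, part notices, and
// ordinary chat messages.
function formatMessage(item) {
  if (item.action === 'join') {
    return `${item.name} has joined`;
  } else if (item.action === 'part') {
    return `${item.name} has left`;
  }
  // Ordinary chat message.
  return `${item.name}: ${item.message}`;
}
```

In the component itself the same branches additionally pick a style: the join/part notices are rendered in italics via styles.joinPart.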
We then need to update the render method of the ChatClient.js component to look as follows:

```javascript
render() {
  const messages = this.state.messages;
  return (
    <ChatView messages={ messages } onSendMessage={ this.handleSendMessage } />
  );
}
```

This is simply so that it renders our new ChatView in place of just the number of messages received.

Finally, we need to update our main view to display the chat client when logged in. Update App.js to look like this:

```javascript
render() {
  if (this.state.hasName) {
    return (
      <ChatClient name={ this.state.name } />
    );
  } else {
    return (
      <Login onSubmitName={ this.handleSubmitName } />
    );
  }
}
```

The end result of this will look something like this:

Conclusion

This article has shown an introduction to the fantastic React Native framework for building universal mobile applications, and shown how it can be used in conjunction with the Pusher service for handling realtime messaging between multiple different clients. All of the source code for this application is available at Github.

September 12, 2017 by Graham Cox
Understanding Struts Controller

It is the Controller part of the Struts Framework. ActionServlet is configured... This servlet is responsible for handling all the requests for the Struts...

Related questions and answers:

- Struts Servlet: Can I call my action class from a servlet? If yes, then how? "Hi friend, I am sending you a link. I hope that this link will help you; please visit it for more information."
- struts: When is the ActionServlet invoked in Struts? "Hi Friend, please visit the following link:"
- why servlet as controller: Hi Friends, what are the main reasons for using a servlet as the controller in Struts? Why not a JSP? Thanks, Prakash. "Hi... to generate the proper user response. So a servlet is used as the controller. Thanks."
- Struts 2.0 (JSP-Servlet): How to call two actions from a single JSP page in Struts 2?
- jsp (JSP-Servlet): Struts, JSP: how can I get a single record from the database in tabular form, in Struts as well as in JSP? Can anyone tell me how I can implement session tracking in Struts? It's urgent. "Session tracking? You mean... for later use in any other JSP or servlet (action class) while the session exists."
- action Servlet: What is the difference between ActionServlet and a normal servlet? And why is ActionServlet required?
- Singleton & Threadsafe in struts: Hi, how do I implement singleton and thread-safety in Struts, and the same in JSP and servlet? Thanks in advance.
- struts (JSP-Servlet): ...page from servlet. I set and get the values on the Java page through beans. I... Servlet to JSP: "MyServlet.java" import java.io.*; import javax.servlet...,res); } } "web.xml" for servlet mapping: MyServletName
- struts first example: I got errors in the Struts first example, like... welcome.title=Struts Blank Application welcome.heading=Welcome! index.jsp struts-config.xml
- Struts with Servlet Jsp (Development process): Hi Friends, can you give me the steps for developing a simple web application with Struts, servlets and JSP? ...then click the submit button. 2) Control goes to the servlet (controller) through UserLogin...
- Architecture (Struts): Hi Friends, can you give a clear Struts architecture with flow? "Hi friend, Struts is an open source framework used for developing J2EE web applications using the Model View Controller..."
- Servlet action is currently unavailable: Hi, I am getting the below error when I run the project, so please can anyone help me: HTTP Status 503 - Servlet action is currently unavailable.
- Java (Struts): This error occurs when we run our Struts web application: Servlet action is currently unavailable.
- Struts 2: ...passed through its servlet filter, which initializes the Struts dispatcher needed... I am getting the following error, please help me out: "The Struts dispatcher cannot be found. This is usually caused by using Struts tags..."
- Unable to understand Struts: = "success"; /** This is the action called from the Struts... Processes requests for both HTTP GET and POST methods. @param request servlet request @param response servlet response @throws ServletException...
- Struts (Framework): ...knowledge of JSP-Servlet, nothing else. Best of luck for Struts. Thanks... "...a Struts application? Before that, what kind of things are necessary... given that you are going to learn Struts. Just refer..." ...of the application. "Hi friend, Struts is an open source framework used for developing J2EE web applications using the Model View Controller (MVC) design pattern. It uses and extends the Java Servlet API to encourage developers..."
- struts <logic:iterate> problem (JSP-Servlet): How can I limit the number of rows to be displayed using the tag? "Hi friend, ... visit..."
- Struts: ..."controller" in the Model-View-Controller (MVC) design pattern for web... to this servlet. There can be one instance of this servlet class, which receives... with the application. The servlet delegates the handling of a request to a RequestProcessor.
- Struts Resources: RoseIndia.Net is the ultimate Struts resource for the web development community using Struts in their development. Struts is the ultimate framework for the development of web...
- What is Struts?: Understand the Struts framework. This article tells you what Struts is and how learning Struts is useful in creating... The Struts 2 version is based on the following Java technologies: Servlet...
- What is Struts 2 framework: Hi, I am new to Java web programming. I have completed JSP, Servlet and HTML. Now I want to learn Struts 2. Tell me, what is Struts 2? Thanks.
- java (Struts): [Servlet Error]-[/loginpage.jsp]: javax.servlet.jsp.JspException: Cannot find...
- java.lang.ClassNotFoundException (Struts): \lib\servlet-api;C:\Program Files\Java\jdk1.6.0_14\bin;C:\Program Files\MySQL\MySQL... INFO: Starting Servlet Engine: Apache Tomcat/6.0.20 Jan 28, 2010 4:03:02... Foundation\Tomcat 6.0\lib\servlet-api;C:\Pr
- Struts Projects: Easy Struts projects to learn and get into development ASAP. These Struts projects will help you jump the hurdle of learning complex... Understanding Spring Struts Hibernate DAO Layer...
- What is Struts Framework?: Learn about the Struts Framework. The Struts Framework uses the latest technologies such as the Servlet API and Filter API... This article discusses the main points of the Struts framework.
- Struts MVC: Struts is an open source MVC framework in Java, based on servlets, JSP and Java Beans... every request passes through the controller. In Struts all the user requests pass...
- Problems With Struts (Development process): Respected Sir, while deploying a Struts application in Tomcat 5.0 using ForwardAction, this can... an exception "servlet action is not available" with 404. Kindly...
- Struts Books: ...the Model-View-Controller (MVC) design paradigm. Want to learn Struts and want to get... of design are deeply rooted. Struts uses the Model-View-Controller design pattern... of Struts and its Model-View-Controller (MVC) architecture. The authors...
- Struts 2.2.1 - Struts 2.2.1 Tutorial: The Struts 2.2.1 framework is released... prior development experience in JSP, Servlet and a good understanding of core Java... Features of Struts 2.2.1; understanding the MVC design pattern...
- Struts Tutorials: Struts is comprised of a controller servlet, beans and other Java classes... a controller for your application (the Struts servlet acts as a common controller)... experience with Apache Struts and the Apache Tomcat servlet engine. You should...
- Struts Links - Links to Many Struts Resources: ...but servlet, JSP, Struts, and JSF training courses at public venues are periodically... Jakarta Struts Tutorials: one of the best Jakarta Struts tutorials available on the web.
- Struts Quick Start: ...and it provides a controller (servlet) to handle all the requests from clients. The Struts servlet reads the configuration parameters from struts-config.xml... a quick start to Struts technology.
- Struts 1.x vs Struts 2.x: ...POJOs. In Struts 2, the servlet contexts are represented as simple Maps, which... Struts 2.x is very simple as compared to Struts 1.x; a few...
- History of Struts: In this section we will discuss the history of web applications and the history of Struts: why Struts was developed and the problems... for their IIS web server. Sun Microsystems came up with Servlet and JSP...
- Struts Frameworks: The Struts framework is very useful in the development of web applications. It is based on the popular Model-View-Controller design pattern, also known as the MVC design pattern.
- Struts Articles: The goal of the Struts framework is to enforce an MVC-style (Model-View-Controller)... the existing Struts validation framework with AJAX. A few components, such as a controller... are familiar with the Struts framework in the servlet environment.
- how to use Struts: When submitting a form, and while the submit is in progress you press Refresh, what will happen in Struts?
- How singleton in struts (Java Interview Questions): Hi, how do I implement singleton and thread-safety in Struts, and the same in JSP and servlet? Thanks, Prakash.
- Introduction to Struts 2 Framework: In Struts 1, ActionServlet works as the controller. In the latest version of Struts 2, StrutsPrepareAndExecuteFilter... and the View. All the user requests pass through the controller. In Struts all...
- What is Struts - Struts Architecture: The Struts controller components: whenever a user requests something, the request is handled by the Struts ActionServlet. When the ActionServlet receives...
- saritha project (Struts): ...in the web.xml? "Hi Friend, if OnlineRecruit is your servlet then check whether you have done the servlet-mapping in the web.xml or not. Then restart..."
- Struts 2 - History of Struts 2: Apache Struts... taken over by the Apache Software Foundation in 2002. Struts has provided an excellent framework for developing applications easily by organizing JSP and Servlet...
- struts: How do I start with Struts? "Hello Friend, please visit the following links... you can easily learn Struts. Thanks."
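Several of the answers above describe the same idea: ActionServlet is a front controller, a single entry point that receives every request and delegates it to a per-action handler looked up from a mapping (the role struts-config.xml plays in Struts, with a RequestProcessor doing the dispatch). That pattern can be sketched in a few lines of plain JavaScript; this is an illustrative, hypothetical sketch of the pattern only, not Struts code:

```javascript
// Front-controller sketch: one dispatch function receives every
// request path and delegates to the action mapped for it, the way
// ActionServlet delegates based on the struts-config.xml mappings.
const actionMappings = {
  '/login': (params) => `welcome ${params.user}`,
  '/logout': () => 'goodbye',
};

function frontController(path, params = {}) {
  const action = actionMappings[path];
  if (!action) {
    // No action mapped for this path.
    return '404: no action mapped for ' + path;
  }
  return action(params);
}
```

Because every request funnels through the single dispatch point, cross-cutting concerns (logging, authentication, form population) can be applied in one place, which is the main reason the answers above give for using a servlet, rather than individual JSPs, as the controller.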
Wicked Cool Perl Scripts 239 239 Michael J. Ross writes Web sites. In fact, Perl has been called the glue that holds the Internet together. The tremendous flexibility and power of Perl is seen in Steve Oualline's book Wicked Cool Perl Scripts: Useful Perl Scripts That Solve Difficult Problems." Read the rest of Michael's review Published by the cleverly named No Starch Press, Wicked Cool Perl Scripts comprises 336 pages, spanning 11 chapters, with a brief introduction, as well as an index. The book appeared in February 2006, and was published under the ISBN of 1593270623. No Starch Press maintains a Web page for the book, where readers can find a sample chapter (the third one, covering CGI debugging), in PDF format. There is a link for downloading all of the source code. The book presents 47 scripts, grouped into 11 categories: general-purpose utilities, Web site Web sites, require more than ten pages. Fortunately, the scripts generally contain enough comments Web sites Web site or a Linux/Unix server, would not even have to know the language in order to download these Perl scripts, and use them to solve problems on the job. Michael J. Ross is a freelance writer, computer consultant, and the editor of the free newsletter of PristinePlanet.com. You can purchase Wicked Cool Perl Scripts from bn.com. Slashdot welcomes readers' book reviews -- to see your own review here, read the book review guidelines, then visit the submission page. Solve it (Score:5, Funny) Useful Perl Scripts That Solve Difficult Problems. Heh, Steve Oualline. In his book on Vim, he had this example of code where the program comment went something like this: Program -- Solve it -- Solves the worlds problems. All of them. At once. This will be a great program when I finish it. */ Re:Solve it (Score:2, Funny) Re:Solve it (Score:5, Informative) Solve it, Fixed version (Score:2, Interesting) #!/usr/bin/perl # # Program -- Solve it -- Solves the worlds problems. # All of them. At once. 
    # This will be a great program when I finish it.
    #
    sub main() {
        # no idea at all...
    }
    main();
    exit;
    1;
    __END__

or the even better, the POD way:

    #!/usr/bin/perl

    =head1 GOAL

    Program -- Solve it -- Solves the worlds problems.
    All of them. At once.
    This will be a great program when I finish it.

    =cut

    sub main() {
        # no idea at all...
    }
    main();
    exit;

Re:Solve it, Fixed version (Score:2, Funny)

    from __future__ import solutions
    solutions.solveTheWorldsProblems()

Re:Solve it, Fixed version (Score:2)
Re:Solve it, Fixed version (Score:5, Interesting)

That looks as though you don't know Perl very well. There's no need for a separate main routine, as whatever is in the file that's not part of another subroutine is assumed to be part of main. There's also no need to have the program end in a true statement (that's necessary only for modules) or to use an __END__. In the true spirit of Perl (i.e. eliminating needless elements) here's a refined version:

Re:Solve it, Fixed version (Score:5, Insightful)
Re:Solve it, Fixed version (Score:3, Informative)

Gaah, that was in a book? (Score:2, Informative)

Re:Gaah, that was in a book? (Score:2)

Apøstrøphe bites can be nasty.

Re:Solve it (Score:5, Insightful)
Re:Solve it (Score:5, Insightful)

See, we can both make absolutist, arbitrary statements with no basis in reality. Fun, eh?

Re:Solve it (Score:4, Interesting)

"Perl is the most wonderfully architected, elegant, flexible language in the world." Indeed. In fact, it's the only language in the world that can guarantee perfect software [cpan.org], every time. Even when you're drunk [cpan.org]. 8^)

Re:Solve it (Score:3, Interesting)

Rest assured, Perl 6 has planning in abundance. You'll probably be less happy to learn that the guiding principle seems to be a "lot of this, lot of that" philosophy. But, this time with a plan. (My personal call on Perl 6? It's time to just ship it. Whatever "it" is. After a

Good Practice? (Score:2, Insightful)

Is these really that good of a practice though?
Your pc's will be jam-packed with go you never wrote

Re:Good Practice? (Score:5, Insightful)

yes and no (Score:5, Insightful)

Re:yes and no (Score:5, Funny)

Finkployd

Re:yes and no (Score:3, Insightful)

DIY is not always the answer, but in cases where the person is doing it for their own education I don't see a downside. There is also a compelling reason to do it for administration scripts were you

Re:yes and no (Score:3, Insightful)

The grandparent excerpt reads "the administrator of a Web site or a Linux/Unix server, would not even have to know the language in order to download these Perl scripts, and use them to solve problems on the job" -- implying that the admin not only doesn't

Re:Good Practice? (Score:4, Funny)
Re:Good Practice? (Score:5, Insightful)

Let's try this from another angle. "In fact, the administrator of a Web site of Unix/Linux server would not even have to know the language in order to download Apache, compile it, and use it to serve pages." "Is this really that good of a practice though? Your PCs will be jam-packed with software you never wrote ... therefore you don't really know what's going on with your own machines. Write your own programs, kiddies." (I corrected your spelling, grammar, and punctuation errors as well.) Basically, your argument amounts to absolutely nothing, because it's no different from other programs. Do you REALLY think that admins typically vet every line of code on their systems? People don't live that long. Know what the difference is between a C program you don't understand, and a perl script you don't understand? The C program is compiled once, and the perl script is JIT-compiled every time you run it.

Re:Good Practice? (Score:2)
Re:Good Practice? (Score:2, Funny)
Re:Good Practice? (Score:2)

Yeah, no kidding! It's like all those lazy administrators that go installing arbitrary software from random third parties. I mean, do you have any idea what

Sweet (Score:2, Funny)

"Wicked" Cool? (Score:5, Funny)

Re:"Wicked" Cool? (Score:2)
Re:"Wicked" Cool? (Score:2)
Re:"Wicked" Cool? (Score:3, Informative)

-matthew

Re:"Wicked" Cool? (Score:2)
Re:"Wicked" Cool? (Score:2)
Re:"Wicked" Cool? (Score:4, Funny)

Please replace every instance of "Wicked" with "Hella" to improve readability. Thank you for your cooperation.

Re:"Wicked" Cool? (Score:2, Interesting)

Please replace every instance of "Wicked" with "Hella" to improve readability. That's only the Bay area. Even the rest of California really wants them to quit using that stupid sounding term.

Perl glue (Score:3, Funny)

Hey, wait a minute! That's not glue... ewww... what is that? Larry, have you been touching yourself again?

Re:Perl glue (Score:2)

That would be "pearl"

Re:Perl glue (Score:3, Funny)

Future wickedness (Score:5, Informative)

The slide show links show some terrifying code snippets. These Perl-merlins are wicked, indeed.

Now that is a cool poll (Score:2)

Vista or Perl 6. The race is on...

Re:Now that is a cool poll (Score:5, Funny)

Let the religious flamefest begin! (Score:4, Funny)

In that corner - advocates of Ruby (I haven't got a clue on this one, folks) And in this corner - dinosaurs like myself who still think awk/sed/sh is a pretty neat thing. Wait a minute, that's three corners. Uh . . .

Re:Let the religious flamefest begin! (Score:2)

PERC? Pathologically Eclectic Rubbish Lister, or sometimes Practical Extraction and Reporting Language. The first one is said to have come first, and was the "real" meaning of the acronym, but I think the second one is the more "official" meaning. As for awk, sed, and sh, I use them all the time for a lot of small scripts, but when I'm writing something that involves complex logic, I prefer to do it in Perl.

Re:Let the religious flamefest begin! (Score:3, Informative)

"Pathologically Eclectic Rubbish Lister, or sometimes Practical Extraction and Reporting Language. The first one is said to have come first, and was the "real" meaning of the acronym, but I think the second one is the more "official" meaning." Er, no [wikipedia.org]. The word 'Perl' is a backronym [wikipedia.org]. Wikipedia sez:

Re:Let the religious flamefest begin! (Score:5, Funny)

Weep not, brave one, it's called a triangle and is perfectly normal.

Re:Let the religious flamefest begin! (Score:2)
Re:Let the religious flamefest begin! (Score:2)

In that corner - advocates of Ruby (I haven't got a clue on this one, folks) And in this corner - dinosaurs like myself who still think awk/sed/sh is a pretty neat thing. Wait a minute, that's three corners. Uh . . It's OK, you live in a triangle. Celebrate your inner trilateralness! But what's PERC?

Re:Let the religious flamefest begin! (Score:3, Funny)

I dunno. Maybe that's why so many use redundant acronyms - one less word to think of.

Re:Let the religious flamefest begin! (Score:2)
Re:Let the religious flamefest begin! (Score:2)

Hmm, PERC... the Dell RAID cards? Actually, I think your acronym fits those just fine.

Perl? Bah! (Score:5, Funny)

Re:Perl? Bah! (Score:3, Funny)

Well done.

Is this more useful (Score:4, Insightful)

I'm asking seriously, because of all of the "cookbooks" and collection books of this sort that I've seen on the shelves at Borders, they're all full of things that a quick bit of googling could come up with. In fact, a little searching usually yields better solutions, and I'm convinced they're written by copy/pasting google results into the author's editor of choice. I'm all for good dead-tree reference material, but I've been frustrated trying to find books that don't contain stuff-i-already-know, or stuff-i-can-get-free on the 'net. I guess it can't be good for the dead tree tech manual industry, but so long as universities and colleges force students to buy the books (and a new revision of the same book every year), that's all fine and good.
Re:Is this more useful (Score:3, Interesting)

I sit on the couch and just read them and learn a lot.

Well, (Score:2)

OB Ruby fanboyism (Score:2, Informative)

You know, I love Perl. I've been using it for CGI stuff, for system-administration stuff, etc, for six or seven years now. In fact, the only things I haven't written in Perl during that time have been things that were either too lightweight (five line shell scripts) or too in need of structure (a free/Free clone of Advance Wars in Java). That said, every new script I've written so far this summer has been written in Ruby. I hate to sound like a Ruby fanboy, but I think Ruby is really a better perl than

Re:OB Ruby fanboyism (Score:3, Insightful)

Hey!! Perl has real objects... you just have to work at it... Of course, being a Perl programmer, I am very averse to work, hence all the time I spend reading Slashdot.

Re:OB Ruby fanboyism (Score:2)

So my question is, what would I want to do with objects that a language like Ruby would let me do (because Ruby has real objects), but I can't do (as easily) in Perl (because Perl doesn't have real objects)? Other than preventing myself from directly accessing an object's internal workings, which breaks the concept of OOP but isn't a problem if I simply choose not to do it.

Re:OB Ruby fanboyism (Score:4, Interesting)
Re:OB Ruby fanboyism (Score:3, Funny)

s/objects/anything/g

Re:OB Ruby fanboyism (Score:2)

1) CPAN
2) ubiquity

'course, you've already mentioned CPAN, but it's an important point to reiterate. The power of Perl, in large part, lies in the massive number of third party libraries available. As for ubiquity, it's rare these days for me to ssh into a machine that doesn't have Perl. The same can't be said

Re:OB Ruby fanboyism (Score:5, Insightful)

3) unicode

I have to deal with lots of unicode, index it, run regexes on it, and so on. Ruby lacks any real unicode support, which has made it a deal-breaker.

Re:OB Ruby fanboyism (Score:2)
Re:OB Ruby fanboyism (Score:2, Interesting)
Re:OB Ruby fanboyism (Score:4, Insightful)

I don't want this to degenerate into rabid fanboyism, but it seems the benefits of a "real" (or, real, if you prefer) object system over Perl's are routinely exaggerated. Yes, it could be better, but for 95% of the things you do, it works just fine. And of course Perl has iterators and closures (and first order functions, and all that other hard-to-maintain stuff the Functional crowd always goes on about). It's probably one of the things I like best about Perl, it just has features as part of the language, no one makes a huge deal of it. I've looked at Ruby (ok, glanced), and I just can't stomach the syntax - it's like writing Java in VB. Entirely subjective, of course. Though, as long as I live I will not understand this recent fad of trying your best not to delimit code blocks clearly - it smells of choosing ideology over utility. Definitely agree about the lack of "simple and rigid" struct-like things, I miss those often. And of course for anyone who wants a feature that Perl doesn't have, there's Perl 6 - that will have every feature that has existed in any language, ever.

Re:OB Ruby fanboyism (Score:2)

    $blah = chomp $blah;
    if ($blah eq 'a') {

Ruby's syntax *SUCKS* (Score:3, Interesting)

Every text editor I know and use can match braces. Where does this block end? By clicking one or two keys I immediately know. I have no idea (and I really don't want to know) how do you match a generic "end" with the corresponding block opener in Ruby. Do you think this is unimportant? If so, you aren't a professional programmer, and it's OK for you to use that toy language, Ru

Re:Ruby's syntax *SUCKS* (Score:3, Insightful)

I have to call BS on this one. I highly doubt that the parent poster has seen the ubiquitous pile of legacy Perl code lying around, and how bad and bug-ridden it can be because the programmers didn't do enough to control a sloppy language. I think Perl Medic is the only Perl book I'll ever recommend for t

It occurs to me... (Score:4, Insightful)

Really shitty, unreadable Perl that makes your head hurt. Stuff that was only designed to execute from top to bottom like a shell script and it only works in the context of some main file. Won't even run if you use strict. And beautiful, expressive object-oriented, properly packaged stuff that lets you plug and test it any which way. Stuff you aren't afraid to instrument, modify, or implement because you just understand the author's intent. They look almost like two different languages. Then again, I think this is what happens when you have languages that have a lot of built in functionality blessed with tersity and "reasonable defaults". You can't write spaghetti code in a more structured language like Java or C that does anything useful without code generation. Perl makes it possible even without syntax highlighting. It's _too easy_ on the programmer. *shrugs* It's like VB... except you had a web GUI with a simple "use CGI". And pervasive regular expressions (which, IMHO, is the thing that makes perl unreadable if you don't learn about the m//x modifier). In 1999. So anytime you go online searching google for a Perl script that does XYZ, you end up on one of those code-sharing sites full of "consultants" idiots and you've got the worst kind of code floating around there. Copy and pasted crap with no structure... just hacked up until it works. And Perl's permissiveness allows it to proliferate. I just write everything myself, or get it from CPAN. CPAN at least acts as a kind of quality control. If you want to see how to write or model your code, use CPAN modules as examples. You learn a lot.

Fine, but about the title... (Score:2, Insightful)

Re:Fine, but about the title... (Score:2)

-matthew

Space = Money (Score:2)

I thought these people got paid per page?
the relationship between Perl and C (Score:3, Informative)

Only recently, despite having read a lot of the Perl books and hung around online a lot, I found out something about the history of Perl that I almost couldn't believe... that originally, it was just a glue language for a big honking chunk of Unix system calls, mostly written in C. My credulity is because Perl sometimes seems like the anti-C... especially in terms of handling strings, since my memory of C is using char arrays for everything. But it makes sense.... C offered blazing speed, and Perl was a great duct tape glue for all that. It's amazing that it had such quality memory management, string handling, associative arrays, and loosey goosey syntax for reg ex etc. But it's great. I think it falls apart once you get to the perl 5 object model, which I've never been able to really get my head around... but for anything that should be written programmatically rather than structured from objects, it's really great.

Re:the relationship between Perl and C (Score:5, Insightful)

My advice for OOP in Perl:
1) learn how OOP is supposed to work in some other language
2) pretend that it works that way in Perl, and try not to think about how "bless" actually works.

Re:the relationship between Perl and C (Score:2)

Well, I was well aware of the "weird UNIXisms" in Perl, but I kind of thought that functions like localtime() were still "weird" in Perl because Wall et al. wrote them to be slantwise compatible with what C programmers on the system knew, not that they were all pretty much straight passthroughs. It still blows my mind, actually, that such a loosely typed language has straight pass throughs to misc. Unix C fun

*shrugs* (Score:3, Insightful)

Perl's scalar numbers internally are the same size as the system's ints, so that handles that detail. Perl auto-coerces strings to numbers and vice-versa. So it can handle the number and string arguments using built-in API functions that just take whatever perl expression and coerce it down to the appropriate scalar context in the

Re:the relationship between Perl and C (Score:2)

Perl praise and beefs interspersed (Score:5, Insightful)

Praise
======
1) The power of perl is irrefutable -- it helps slap together quick and clean solutions to irritating admin problems. The flip-side of being a perl jockey I guess is that one tends to try and create a solution to many a problem that already has a solution - because searching CPAN can be a pain at times.
2) Use of the more flexible features of the languages (such as Hashes, hash of hashes etc) data/number munging and organization becomes more manageable.
3) Using Perl's almost endless modules, a lot of relatively complicated tasks can be simplified.
4) Annoyance factor of numerous tasks (especially Administrative and reporting) can be reduced drastically with the help of Perl.

Beefs
=====
1) The beef I guess is that unlike Python or Perl's other competitors, Perl modules don't come tightly integrated with the core distro. Agreed that Perl probably has a lot more modules than any of those other languages do, but a larger than ordinary de facto distribution (why not include important modules like Digest::MD5, Crypt modules, SSH modules etc?) would be desirable (especially in those situations where you don't have access to the internet directly from within corporate networks and can't install the modules with the "perl -MCPAN -e shell" option). There might be those Perl veterans who would say -- "build your own distro with your custom modules already packaged" -- and while that might be a very smart thing to do, many a time (when one keeps moving from one environment to another -- some call it job hopping), it helps to be able to download one single perl distro package or rpm or the source+compile and have basic administrative scripts work -- especially those that rely on centralized automation (ssh-based trusts, copies across the network, etc).
2) Also, perl's syntax can be terse and difficult for noobies to understand (or even older perl-hands for that matter -- when someone has written code without appropriate comments, etc).
3) Tinkering with Python recently, I found its simplicity refreshing and its syntax easier to comprehend (especially when compared with Perl's (imho) complicated "scoping" requirements, etc).
4) Sometimes (and I guess it depends on the person writing the code) Perl tends to over-complicate things that can be easily handled via Shell scripts.

Re:Perl praise and beefs interspersed (Score:2)

perldoc perlmodlib tells you which modules come with perl. In the 5.8 version, one such module is Digest::MD5. The core crypt() function also passes things directly through to the OS's crypt library. However, for portability reasons, you should use Crypt::PasswdMD5 for MD5-based crypted passwords, as some libraries (such as Windows') don't support MD5 hashes. Having said that, I have my own list of things I dislike about perl [livejournal.com].

NOT Worth While (Score:2, Informative)

Write yes, read not so much (Score:4, Interesting)

Yes, I know you can write very readable maintainable perl. In theory. The only example of this I have ever seen is the Calcium web calendar. But whenever our perl guy writes something it looks more like

    ($l=join("",))=~s/.*\n/index($`,$&)>=$[||print$&/

(stolen from

You need a new perl guy (Score:3, Insightful)

Fire your perl guy -- he's clearly a menace.
And after you fire him, tell him to stop reading perlmonks.org. After a while, he'll start doing things like using foreach() instead of map() when it makes the script clearer. And as an added bonus, he won't waste time trying to find the bug he caused from ov

My Ex-Language (Score:5, Funny)

Re:My Ex-Language (Score:5, Interesting)

Look over the perl 6 syntax and the increased punctuation, then compare to Ruby. I've been working with perl now for about 10 years or more, and Ruby has replaced or supplanted all my Perl work over the last several months. No going back. Not even bothering to learn Perl 6. CPAN is sweet, but so much is already built into ruby's standard libs. Ruby syntax is clean. Real OOP. Self documenting. No way I'm going back, especially not for Perl 6, which is too much, and too late.

Evil Perl (Score:4, Interesting)

    @a{($b=pop)=~/\w/g}=@a=0..9;$a="@{[keys%a]}";sub a{@a?map@a=(a(@_,pop@a),@a),1..@a:($_="$b ",eval"y,$a,@_,",/\b0/)||eval&pop}a

Version 2:

    while($_=$a=pop){/([a-z])/i?map{($t=$a)=~s/$1/$_/ "}0..9:/\b0\d/||eval&&print}

They do the same thing. I'll post what that is in a follow-up, for the sake of any masochists who want to figure it out for themselves.
Re:Evil Perl (Score:2) ",eval"y,$a,@_,",/\b0/)||eval && print;pop}a Re:Evil Perl (Score:2) $_="$b\n" except I used a real newline instead of \n to save a byte. (None of the newlines in either version are optional, although in one case any whitespace char would do.) Perl is VB (Score:2, Interesting) Perl Necklace? (Score:2, Funny) C is the glue... (Score:3, Insightful) That's not true. C was considered the glue of the Internet... Perl is the gaffa tape. As much as I hate granting time to the Perl haters (Score:5, Insightful) I work in a shop where we maintain (after last count) 112,002 lines of perl in a single system (which also contains about half a million lines of C). Guess what? It's not a problem! Not in the slightest! And you know why? - Modules - Coding conventions - Mature programmers Two of those three are redundant. Take a guess which ones (the third item isn't part of the anwer set). If you take a programmer that writes disciplined, careful, extensible, extendable and professional C - are they going to start generating hacked up crap when they switch to Perl? No. They're not. They split source among modules. They use naming conventions. They use strict. They use the namespaces. They use clear syntax. The end result looks almost like C most of the time. Except when it doesn't, 'cause it's Perl. What does C written by hack-job Perl "programmers" look like? Rephrasing #37 - "It ain't the arrow, it's the (Native American)". Re:As much as I hate granting time to the Perl hat (Score:5, Funny) Java. Re:All I want to know is... (Score:5, Funny) Re:One of my favorites... (Score:2) I'm not impressed (Score:4, Insightful) Re:Overhyped (Score:2, Interesting) Not that I've learned any other scripting languages to have some comparison, admittedly. Re:Yes... (Score:2) *Sigh* Why am I bothering, you're just a troll... 
Re:Clarification (Score:4, Interesting) You, sir, are a veritable fount' of wisdom...</sarcasm> Re:Clarification (Score:4, Insightful) It can be unreadable, but that doesn't mean that it is. Code can be good for lots of reasons- not just legibility Perl is probably the language with the highest chance of accepting the output of a random number generator as a valid program. That honor belongs to sendmail; We used to offhook the telephone couplers whenever someone had messed sendmail.cf up to get a good working setup from the line noise that'd leak through.
http://slashdot.org/story/06/06/28/148252/wicked-cool-perl-scripts
This post starts with a fairly obscure topic - how an overloaded operator delete behaves in light of polymorphism; amazingly, it then gets even more obscure - shedding light on the trickery the compiler employs to make this work, by generating more than one destructor for certain classes. If you're into such things, read on. If not, sorry about that; I heard that three new Javascript libraries were released this week for MVC JSON-based dynamic CSS layout. Everyone's switching! Hurry up to keep up with the cool guys and leave this grumpy compiler engineer to mumble to himself.

Virtual operator delete?

Consider this code sample:

    #include <cstdio>

    class Animal {
    public:
      virtual void say() = 0;
      virtual ~Animal() {}
    };

    class Sheep : public Animal {
    public:
      virtual void say() { printf("Sheep says baaaaa\n"); }
      virtual ~Sheep() { printf("Sheep is dead\n"); }

      void operator delete(void* p) {
        printf("Reclaiming Sheep storage from %p\n", p);
        ::operator delete(p);
      }
    };

    int main(int argc, char** argv) {
      Animal* ap = new Sheep;
      ap->say();
      delete ap;
      return 0;
    }

What happens when ap is deleted? Two things:

- The destructor of the object pointed to by ap is called.
- operator delete is called on ap to reclaim heap storage.

Part 1 is fairly clear: the static type of ap is Animal, but the compiler knows that Animal has a virtual destructor. So it looks up the actual destructor to invoke in the virtual table stored in the object ap points to. Since the dynamic type of ap is Sheep, the destructor found there will be Sheep::~Sheep, which is correct.

What about that operator delete, though? Is operator delete virtual too? Is it also stored in the virtual table? Because if it isn't, how does the compiler know which operator delete to invoke?

No, operator delete is not virtual. It is not stored in the virtual table. In fact, operator delete is a static member.
The C++11 standard says so explicitly in section 12.5:

    Any deallocation function for a class X is a static member (even if not explicitly declared static).

It also adds:

    Since member allocation and deallocation functions are static they cannot be virtual.

And if you keep reading, it actually mandates that even though this is the case, when the base destructor is virtual, operator delete will be correctly looked up in the scope of the class that is the dynamic, not the static type of the object. Indeed, the code snippet above works correctly and prints:

    Sheep says baaaaa
    Sheep is dead
    Reclaiming Sheep storage from 0x1ed1be0

Deleting destructor

So how does this work, if operator delete is not virtual? The answer is in a special destructor created by the compiler. It's called the deleting destructor and its existence is described by the Itanium C++ ABI:

    deleting destructor of a class T - A function that, in addition to the actions required of a complete object destructor, calls the appropriate deallocation function (i.e., operator delete) for T.

The ABI goes on to provide more details:

    The entries for virtual destructors are actually pairs of entries. The first destructor, called the complete object destructor, performs the destruction without calling delete() on the object. The second destructor, called the deleting destructor, calls delete() after destroying the object.

So now the mechanics of this operation should be fairly clear. The compiler mimics "virtuality" of operator delete by invoking it from the destructor. Since the destructor is virtual, what ends up called eventually is the destructor for the dynamic type of the object. In our example this would be the destructor of Sheep, which can call the right operator delete since it's in the same static scope. However, as the ABI says, such classes need two destructors. If an object is destructed but not deleted from the heap, calling operator delete is wrong.
So a separate version of the destructor exists for non-delete destructions.

Examining how the compiler implements deleting destructors

That's quite a bit of theory. Let's see how this is done in practice by studying the machine code generated by gcc for our code sample. First, I'll slightly modify main to invoke another function that just creates and discards a new Sheep without involving the heap.

    void foo() {
      Sheep s;
    }

    int main(int argc, char** argv) {
      Animal* ap = new Sheep;
      ap->say();
      delete ap;
      foo();
      return 0;
    }

And compiling this with the flags [1]:

    g++ -O2 -g -static -std=c++11 -fno-inline -fno-exceptions

We get the following disassembly for main. I've annotated the disassembly with comments to explain what's going on:

    0000000000400cf0 <main>:
      400cf0: push   %rbx
      400cf1: mov    $0x8,%edi

      // Call operator new to allocate a new object of type Sheep, and call
      // the constructor of Sheep. Neither Sheep nor Animal have fields, so
      // their size is 8 bytes for the virtual table pointer.
      // The pointer to the object will live in %rbx. The vtable pointer in this
      // object (set up by the constructor of Sheep) points to the virtual
      // table of Sheep, because this is the actual type of the object (even
      // though we hold it by a pointer to Animal here).
      400cf6: callq  401750 <_Znwm>
      400cfb: mov    %rax,%rbx
      400cfe: mov    %rax,%rdi
      400d01: callq  4011f0 <_ZN5SheepC1Ev>

      // The first 8 bytes of an Animal object is the vtable pointer. So move
      // the address of vtable into %rax, and the object pointer itself ("this")
      // into %rdi.
      // Since the vtable's first entry is the say() method, the call that
      // actually happens here is Sheep::say(ap) where ap is the object pointer
      // passed into the (implicit) "this" parameter.
      400d06: mov    (%rbx),%rax
      400d09: mov    %rbx,%rdi
      400d0c: callq  *(%rax)

      // Once again, move the vtable address into %rax and the object pointer
      // into %rdi. This time, invoke the function that lives at offset 0x10 in
      // the vtable.
      // This is the deleting destructor, as we'll soon see.
      400d0e: mov    (%rbx),%rax
      400d11: mov    %rbx,%rdi
      400d14: callq  *0x10(%rax)

      // Finally call foo() and return.
      400d17: callq  4010d0 <_Z3foov>
      400d1c: xor    %eax,%eax
      400d1e: pop    %rbx
      400d1f: retq

A diagram of the memory layout of the virtual table for Sheep can be helpful here. Since neither Animal nor Sheep have any fields, the only "contents" of a Sheep object is the vtable pointer which occupies the first 8 bytes:

                                Virtual table for Sheep:
    ap:
    --------------              -----------------------
    | vtable ptr |  --------->  | Sheep::say()        |  0x00
    --------------              -----------------------
                                | Sheep::~Sheep()     |  0x08
                                -----------------------
                                | Sheep deleting dtor |  0x10
                                -----------------------

The two destructors seen here have the roles described earlier. Let's see their annotated disassembly:

    // Sheep::~Sheep
    0000000000401140 <_ZN5SheepD1Ev>:
      // Call printf("Sheep is dead\n")
      401140: push   %rbx
      401141: mov    $0x49dc7c,%esi
      401146: mov    %rdi,%rbx
      401149: movq   $0x49dd50,(%rdi)
      401150: xor    %eax,%eax
      401152: mov    $0x1,%edi
      401157: callq  446260 <___printf_chk>
      40115c: mov    %rbx,%rdi
      40115f: pop    %rbx

      // Call Animal::~Animal, destroying the base class. Note the cool tail
      // call here (using jmpq instead of a call instruction - control does not
      // return here but the return instruction from _ZN6AnimalD1Ev will return
      // straight to the caller).
      401160: jmpq   4010f0 <_ZN6AnimalD1Ev>
      401165: nopw   %cs:0x0(%rax,%rax,1)
      40116f: nop

    // Sheep deleting destructor. The D0 part of the mangled name for deleting
    // destructors, as opposed to D1 for the regular destructor, is mandated by
    // the ABI name mangling rules.
    00000000004011c0 <_ZN5SheepD0Ev>:
      4011c0: push   %rbx

      // Call Sheep::~Sheep
      4011c1: mov    %rdi,%rbx
      4011c4: callq  401140 <_ZN5SheepD1Ev>
      4011c9: mov    %rbx,%rdi
      4011cc: pop    %rbx

      // Call Sheep::operator delete
      4011cd: jmpq   401190 <_ZN5SheepdlEPv>
      4011d2: nopw   %cs:0x0(%rax,%rax,1)
      4011dc: nopl   0x0(%rax)

Now, going back to the amended code sample, let's see what code is generated for foo:

    00000000004010d0 <_Z3foov>:
      4010d0: sub    $0x18,%rsp
      4010d4: mov    %rsp,%rdi
      4010d7: movq   $0x49dd30,(%rsp)
      4010df: callq  401140 <_ZN5SheepD1Ev>
      4010e4: add    $0x18,%rsp
      4010e8: retq
      4010e9: nopl   0x0(%rax)

foo just calls Sheep::~Sheep. It shouldn't call the deleting destructor, because it does not actually delete an object from the heap. It is also interesting to examine how the destructor(s) of Animal look, since unlike Sheep, Animal does not define a custom operator delete:

    // Animal::~Animal
    00000000004010f0 <_ZN6AnimalD1Ev>:
      4010f0: movq   $0x49dcf0,(%rdi)
      4010f7: retq
      4010f8: nopl   0x0(%rax,%rax,1)

    // Animal deleting destructor
    0000000000401100 <_ZN6AnimalD0Ev>:
      401100: push   %rbx

      // Call Animal::~Animal
      401101: mov    %rdi,%rbx
      401104: callq  4010f0 <_ZN6AnimalD1Ev>
      401109: mov    %rbx,%rdi
      40110c: pop    %rbx

      // Call the global ::operator delete
      40110d: jmpq   4011f0 <_ZdlPv>
      401112: nopw   %cs:0x0(%rax,%rax,1)
      40111c: nopl   0x0(%rax)

As expected, the destructor of Animal calls the global ::operator delete.

Classes with virtual destructors vs. regular destructors

I want to emphasize that this special treatment - generation of a deleting destructor - is done not for classes that have a custom operator delete, but for all classes with virtual destructors. This is because when we delete an object through a pointer to the base class, the compiler has no way of knowing what operator delete to invoke, so this has to be done for every class where the destructor is virtual [2].
Here's a clarifying example:

#include <cstdio>

class Regular {
public:
  ~Regular() {
    printf("Regular dtor\n");
  }
};

class Virtual {
public:
  virtual ~Virtual() {
    printf("Virtual dtor\n");
  }
};

int main(int argc, char **argv) {
  Regular* hr = new Regular;
  delete hr;

  Virtual* hv = new Virtual;
  delete hv;
  return 0;
}

The only difference between Regular and Virtual here is the destructor being virtual in the latter. Let's examine the machine code for main to see how the two delete statements are lowered:

0000000000400cf0 <main>:
  400cf0:       push   %rbx
  400cf1:       mov    $0x1,%edi
// Allocate a new Regular object with the global ::operator new
  400cf6:       callq  4016a0 <_Znwm>
// If hr != nullptr, call Regular::~Regular, and then call the global
// ::operator delete on hr.
  400cfb:       test   %rax,%rax
  400cfe:       mov    %rax,%rbx
  400d01:       je     400d13 <main+0x23>
  400d03:       mov    %rax,%rdi
  400d06:       callq  401130 <_ZN7RegularD1Ev>
  400d0b:       mov    %rbx,%rdi
  400d0e:       callq  401160 <_ZdlPv>
  400d13:       mov    $0x8,%edi
// Allocate a new Virtual object with the global ::operator new
  400d18:       callq  4016a0 <_Znwm>
  400d1d:       mov    %rax,%rbx
  400d20:       mov    %rax,%rdi
// Call the constructor for Virtual. We didn't define a default
// constructor, but the compiler did - to populate the vtable pointer
// properly.
  400d23:       callq  401150 <_ZN7VirtualC1Ev>
// If hv != nullptr, call the deleting destructor of Virtual through the
// virtual table. Do not call operator delete for hv; this will be done by
// the deleting destructor.
  400d28:       test   %rbx,%rbx
  400d2b:       je     400d36 <main+0x46>
  400d2d:       mov    (%rbx),%rax
  400d30:       mov    %rbx,%rdi
  400d33:       callq  *0x8(%rax)
  400d36:       xor    %eax,%eax
  400d38:       pop    %rbx
  400d39:       retq
  400d3a:       nopw   0x0(%rax,%rax,1)

The key difference here is that for deleting Regular, the compiler inserts a call to the (global) operator delete after the destructor. However, for Virtual it can't do that, so it just calls the deleting destructor, which will take care of the deletion as we've seen earlier.
http://eli.thegreenplace.net/2015/c-deleting-destructors-and-virtual-operator-delete/
So let's take this scenario: your database is getting hammered with requests and building up load over time, and we would like to place a caching layer in front of it that returns data from the cache, reducing traffic to the database and improving our application's performance.

The Scenario:

Our scenario will be very simple for this demonstration:

- The database will be SQLite with product information (product_name, product_description)
- The caching layer will be Memcached
- Our client will be written in Python. It checks whether the product name is in the cache; if not, a GET_MISS is returned, the data is fetched from the database, returned to the client, and saved to the cache
- The next time the item is read, a GET_HIT is received and the item is delivered to the client directly from the cache

SQL Database:

As mentioned, we will be using SQLite for the demonstration. Create the table and populate some very basic data:

$ sqlite3 db.sql -header -column
SQLite version 3.16.0 2016-11-04 19:09:39
Enter ".help" for usage hints.
sqlite> create table products (product_name STRING(32), product_description STRING(32));
sqlite> insert into products values('apple', 'fruit called apple');
sqlite> insert into products values('guitar', 'musical instrument');

Read all the data from the table:

sqlite> select * from products;
product_name  product_description
------------  -------------------
apple         fruit called apple
guitar        musical instrument
sqlite> .exit

Run a Memcached Container:

We will use Docker to run a memcached container on our workstation:

$ docker run -itd --name memcached -p 11211:11211 rbekker87/memcached:alpine

Our Application Code:

I will use pymemcache as our client library.
Install:

$ virtualenv .venv && source .venv/bin/activate
$ pip install pymemcache

Our application code, which will be in Python:

import sqlite3 as sql
from pymemcache.client import base

product_name = 'guitar'

client = base.Client(('localhost', 11211))
result = client.get(product_name)

def query_db(product_name):
    db_connection = sql.connect('db.sql')
    c = db_connection.cursor()
    try:
        # Use a parameterized query rather than string formatting,
        # which avoids SQL injection.
        c.execute('select product_description from products where product_name = ?', (product_name,))
        data = c.fetchone()[0]
        db_connection.close()
    except (TypeError, sql.Error):
        data = 'invalid'
    return data

if result is None:
    print("got a miss, need to get the data from db")
    result = query_db(product_name)
    if result == 'invalid':
        print("requested data does not exist in db")
    else:
        print("returning data to client from db")
        print("=> Product: {p}, Description: {d}".format(p=product_name, d=result))
        print("setting the data to memcache")
        client.set(product_name, result)
else:
    print("got the data directly from memcache")
    print("=> Product: {p}, Description: {d}".format(p=product_name, d=result))

Explanation:

- We have a function that takes an argument of the product name, makes the call to the database, and returns the description of that product
- We make a get operation to memcached; if nothing is returned, we know the item does not exist in our cache
- We then call our function to get the data from the database and return it directly to the client, and
- Save it to the cache in memcached, so the next time the same product is queried it will be delivered directly from the cache

The Demo:

Our product name is guitar. Let's call the product, which will be the first time, so memcached won't have the item in its cache:

$ python app.py
got a miss, need to get the data from db
returning data to client from db
=> Product: guitar, Description: musical instrument
setting the data to memcache

Now from the output we can see that the item was delivered from the database and saved to the cache. Let's call that
same product and observe the behavior:

$ python app.py
got the data directly from memcache
=> Product: guitar, Description: musical instrument

When our cache instance gets rebooted we will lose the data that is in the cache, but since the source of truth is in our database, data will be re-added to the cache as it is requested. That is one good reason not to rely on a cache service as your primary data source.

What if the product we request is not in our cache or database? Let's say the product tree:

$ python app.py
got a miss, need to get the data from db
requested data does not exist in db

This was a really simple scenario, but when working with massive amounts of data, you can gain a lot of performance from caching.

Resources:
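As a final aside, the cache-aside pattern the script implements can be factored into a reusable helper. Here is a minimal sketch using a plain dict to stand in for the memcached client, so it runs without a server; the names are illustrative, not part of pymemcache:

```python
def cache_aside_get(cache, key, loader):
    """Return `key` from `cache`; on a miss, load it, store it, return it.

    `cache` needs get/set methods with the same shape as pymemcache's
    Client (get returns None on a miss); `loader` is the slow
    backing-store lookup.
    """
    value = cache.get(key)
    if value is None:
        value = loader(key)          # consult the database
        if value is not None:
            cache.set(key, value)    # populate the cache for next time
    return value


class DictCache:
    """In-memory stand-in for pymemcache's base.Client (illustration only)."""
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value


if __name__ == "__main__":
    db_calls = []

    def slow_lookup(key):
        db_calls.append(key)
        return {"guitar": "musical instrument"}.get(key)

    cache = DictCache()
    print(cache_aside_get(cache, "guitar", slow_lookup))  # miss: loads from "db"
    print(cache_aside_get(cache, "guitar", slow_lookup))  # hit: served from cache
    print(len(db_calls))  # the backing store was consulted only once
```

To use it with a real memcached instance, pass a `pymemcache.client.base.Client` as `cache` and your database query function as `loader`.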
https://sysadmins.co.za/give-your-database-a-break-and-use-memcached-to-return-frequently-accessed-data/
Traditionally, the "three main sources of FlightGear users' joy" (TM) are tightly coupled together: The Base Package depends on the Sim-/FlightGear source mainly because aircraft features are being developed in sync with the available FDM- and rendering facilities. The Scenery on the other hand depends on the Base Package because we rely on deriving runway threshold positions and other geographic entities from the files which are shipped as a part of the Base Package.

We all know about the related downsides: The enthusiastic developer is adjusting the layout of his local airfield, submits his change to Robin Peel - and has to wait one or two years until his change makes it into the official Scenery .... maybe he has to wait even longer, because almost nobody cares about updating the respective data files in the Base Package during the release process.

So, we didn't spare any effort to hold a democratic election among the main active Scenery developers, and surprisingly we came to the conclusion that it's time to untie the Scenery from this dependency chain :-))

We plan to achieve this goal by a solution as minimalistic as possible, still keeping compliance with traditional rules in the FlightGear development process. In other words, we're deriving an absolute minimal subset - basically the geographic information that is required to match the Scenery airport layout - from the respective sources, and store this in a 'traditional' file format that, as usual, allows every contributor to verify his contributions before submitting to some general repository.

Those entities we identified as overlapping between the visual representation of airfields in the Scenery and the respective data files in the Base Package are:

1.) Threshold positions and orientation (typical startup positions),
2.) ILS positions and orientation (apparently depends on runways),
3.) Taxiway ground network (we don't want AI airliners to taxi off the tarmac),
4.)
Aircraft parking positions (to avoid airliners parking on the grass or inside the terminal building). We're storing this information in a couple of XML files in a newly created 'Airports/' directory alongside the well known 'Terrain/' and 'Objects/' directories. The new directory is being organized using sort of a poor-man's alphabetical index, using three stages of subdirectories. Note, this directory structure is not meant to be used primarily as a datasource during runtime. Instead, this directory structure is designed as a transport to be editable by the average developer. A tool to create a condensed runtime-extract is being prepared and is expected to get introduced until the next FlightGear software release is due. Even though there's no immediate use of this feature, we'll add it to the upcoming Scenery release in order to allow the software adapting to it and to have everything in place until the next software release. You'll find an example of the mentioned structure in CVS. Regards, Martin. -- Unix _IS_ user friendly - it's just selective about who its friends are ! -------------------------------------------------------------------------- John Denker wrote: >> I don't know what the 'Sport Model' is, can you elaborate? > > A summary of _Sport Model_ features can be found here: > Sport Model was last updated to CVS FlightGear over a year ago. Do you have any plans to update it to current CVS? Thanks, Tim James Turner wrote: >. I don't find this use of type enums in a base class to be clean at all. I have nothing against having a type field in a base, but with an enum approach the base has to have knowledge of all the derived classes, and any time you add a new one the base has to be modified. I'd prefer to see here a singleton type object defined in each derived class that compared to in order to find the type. Also, your search member functions don't seem to belong to "FGPositioned," but to the index that stores these things. 
Finally, in my own code I've been using the simgear and flightgear namespaces instead of the SG and FG prefixes, but I won't force anyone else to do that :)

Tim

The attached patch refactors the KLN89 code to use the same navaid / fix / airport storage as the rest of FG - instead of copying each (large) list when the instrument is initialised. This gets rid of the _waypoints member of DCLGPS - the waypoint class is still used in the internal flightplans; a job for another day is to change the code to use 'standard' flight plans, when they're better defined.

Some comments on the patch:

- it removes the hard-coded precision approaches Dave Luff put in for testing. If this troubles anyone, I can re-instate them, but my gut feeling is no one apart from Dave knew about them
- it factors all the code that does 'order idents by the KLN89's ordering scheme' into one place
- it extends some of the FGblahList search methods to take an FGIdentOrdering - which is essentially a wrapper to let me pass a string ordering object into the search methods without making everything 'templatey'. It's not massively elegant, but equally it's not hugely intrusive on the code.

The major improvement, however, is an internal one - this gets rid of the worst offender for the 'iterate over everything in list blah' pattern - the MkVIII was another which I've fixed, and the remaining one is the airport list GUI widget, which I'm going to add a special accessor for, so the FGAirportList can return a pu-compatible, filtered list itself.

Anyway, I'm not sure how widely used the KLN89 is - it's obviously got a huge amount of features, though equally the code I've changed isn't *that* widely used in it. The major one is adding waypoints in the FPL page. If there's anyone out there who uses the unit regularly, and could test the patch, that'd be great.

James

> 'FGPositioned' ..... I'll buy a beer/beer-substitute for
> anyone who comes up with a shorter, more meaningful class name.

How about "FGSite"???
-- It's shorter. -- Also in this context "site" is a noun, which seems preferable to adjectives such as "positioned" or "located". [The corresponding verb (site) and adjective (sited) exist also, in case they are needed.] -- It comes from the Latin _situs_ meaning position, location, or situation. -- I don't see any conflicting uses of the term "site" within FGFS. On 08/15/2008 06:53 PM, James Turner wrote: > If you could enumerate the issues here, in a new thread, that'd be > interesting. Here's a start: *) There should be false localizer courses abeam the antenna, as there are in real life. This is implemented in the _Sport Model_ in navradio.cxx *) There should be false glideslopes above the true glideslope, as there are in real life. This should be straightforward to implement in analogy to the localizer. *) The code in navradio.cxx assumes the gs service volume is the same as the LOC service volume, which is not correct. *) The "intermittency" feature in navradio.cxx (near the edge of the service volume) is a cute idea, and might look OK if only the flag were fluctuating, but it is not at all realistic to have the needle fluctuating at frame rate. Real-world needles have a strong low-pass filter in them. *) In the real world, some localizers have an _expanded_ service volume. The code needs to respect the range info in the database, rather than relying on some compiled-in assumption. The _Sport Model_ handles this correctly. *) The "has_dme" code in navradio.cxx needs to find a DME if and only if it is paired with the azimuth navaid (LOC or VOR). Right now it unwisely accepts any DME on the frequency, even if it is thousands of miles away. *) And it should check to see whether the DME receiver is tuned to the appropriate DME frequency before playing the ident. 
Right now it plays the DME ident without checking to see whether there is a DME receiver on board, let alone checking its tuning, let alone checking the audio volume or the routing through the audio panel. NOTE: There is a tricky issue here, having to do with synchronizing the audio for two almost-independent instruments: -- In the aircraft, the DME receiver is independent of the azimuth receiver ... except that sometimes the DME is remotely tuned by the azimuth receiver. -- Similarly, on the ground, the DME station is independent of the azimuth station ... except that the audio ident is synchronized. I don't see a reasonable way to put the synchronized audio feature into navradio.cxx or into dme.cxx. Possibly constructive suggestion: create a new module (azimuth_dme_shared_ident.cxx or some such) ... and let it play traffic cop between navradio.cxx and dme.cxx. This looks like a classic "multiple inheritance" situation. Tangential remark: In the real world, this synchronization is not handled in the aircraft at all; it is implemented in the ground station. One could imagine, in the long run, moving some of these tricky features out of the aircraft ... doing them on a per-station basis rather than on a per-aircraft basis. Obvious candidates for this include -- local weather -- choice of active runway -- ATIS -- synchronized ident ++ This has tremendous advantages in multiplayer situations, where you would like all players to get a consistent view of the world. ++ But even in single-player situations, it is common for aircraft to have more than one radio, and if you tune up ATIS on two (or three or four) receivers you should hear the /same/ ATIS. This is another strong argument for why ATIS should not be synthesized on-the-fly within the radio. There is code in the _Sport Model_ to achieve consistency for multi-radio ATIS. *) Why does marker_beacon.cxx go to a lot of trouble to calculate term_tbl, low_tbl, and high_tbl ... which are then never used? 
*) The marker beacons need a high/low sensitivity switch.

*) The service-volume code in marker_beacon.cxx is not a paragon of clarity, and produces results that seem unrealistic.

*) Remember, this is not a complete list.

> I don't know what the 'Sport Model' is, can you elaborate?

A summary of _Sport Model_ features can be found here:

On Sat 16 August 2008, Stuart Buchanan wrote:
> ---.

Maybe it could be delivered with the updated F-8E (new JSBSim), probably not before next year (will depend on free time). I guess that somebody else could have done the Nasal/redout.nas modifications before.

-- Gérard
"J'ai décidé d'être heureux parce que c'est bon pour la santé. Voltaire"

As you may have noticed, Erik has recently overhauled (or rather rewritten) xmlgrep. That's a fast utility for finding properties in an XML file. It works with any XML file, not just our internal <PropertyList> flavor. In many cases a simple recursive grep isn't good enough. If you do a simple search for "<name", because you want "view" names listed, then you get this:

$ grep '<name' $FG_ROOT/preferences.xml
<name type="string">Helvetica.txf</name>
<name>Cockpit View</name>
[...]

The first one doesn't look like a "view" name at all. More like a font name? You could let grep display a few lines before and after the match so that you get some context. But that's a bit messy and painful to use as input for further commands. xmlgrep allows you to address XML elements with their full path, e.g.

/PropertyList/sim/view/name

So far the altruistic part. Actually, I just wanted to introduce another, similar tool. ;-) It doesn't want to compete with xmlgrep and was never meant to. It's really a byproduct of something else for which I needed it. It works only for <PropertyList>s and is slower than xmlgrep. It doesn't search for any particular pattern, but just lists them all, so you will need to pipe the output into e.g.
grep:

lsprop lists all properties in the files given on the command line, or, if none were specified, then of $FG_ROOT/preferences.xml and $FG_ROOT/Aircraft/*/*-set.xml. It follows all "include"s and treats the "omit-node" flag correctly (I hope :-). Example:

$ lsprop|grep --color "/systems/refuel/type"
$FG_ROOT/Aircraft/ufo/ufo-set.xml:522: /systems/refuel/type = "boom"
$FG_ROOT/Aircraft/ufo/ufo-set.xml:523: /systems/refuel/type[1] = "probe"
$FG_ROOT/Aircraft/f16/f16-set.xml:158: /systems/refuel/type = "boom"
$FG_ROOT/Aircraft/KC135/KC135-set.xml:169: /systems/refuel/type = "boom"
$FG_ROOT/Aircraft/a4/a4f-set.xml:183: /systems/refuel/type = "probe"
$FG_ROOT/Aircraft/A-10/A-10-set.xml:1034: /systems/refuel/type = "boom"
$FG_ROOT/Aircraft/f-14b/f-14b-set.xml:549: /systems/refuel/type = "probe"
$FG_ROOT/Aircraft/Lightning/lightning-set.xml:276: /systems/refuel/type = "probe"
$FG_ROOT/Aircraft/A-6E/A-6E-set.xml:151: /sim/systems/refuel/type = "probe"

Use --help/-h for a few options. More options to come ...
m.

On Fri 15 August 2008, Alexis Bory - xiii wrote:
> Erik Hofman wrote:
> > Hi,
> >
> > Ever since I switched to the CVS version of FlightGear I wondered
> > whether the black-out behavior really is that realistic. Although I
> > never experienced it I couldn't imagine this would happen in real
> > life, at least not with an anti-g suit. In an excerpt from a NASA
> > document describing a simulation involving black-outs I got the
> > following piece of text:
> > > blackout simulation: The algorithm used a direct relationship
> > > between the logarithm of the load factor a(n) and the logarithm of
> > > the time to blackout; the simulation used 300 sec to blackout at 5g
> > > and 10 sec to blackout at 9g, with simulated tunnel vision during
> > > the interim period.
> >
> > Maybe this will help to make it more realistic.
> > Agreed, and there is something annoying about blackout and HUD: when
> > you can't see anything due to the blackout effect, the HUD remains
> > visible as if nothing were happening... For sure that's not realistic
> > at all :-)
> >
> > Alexis

Coming back to Erik's and Alexis' remarks, and my crazy first answer: wouldn't it be possible to include, in the Cockpit view parameters, additional delay parameters which give the delay for 5g blackout and 9g blackout? And, why not, a setting for whether we want the HUD visible during blackout/redout? That delay depends on the pilot :) (physical condition)

Regards
-- Gérard
"J'ai décidé d'être heureux parce que c'est bon pour la santé. Voltaire"

On 16 Aug 2008, at 11:37, Melchior FRANZ wrote:
> Mathias will know more about it. :-)

Gave this a quick eyeball and it seems pretty nice - and straightforward to integrate with my proposed base class. So, yes, to re-iterate: spatial indexes good, but what I care about right now is the unified API to talk about such 'things' and make the spatial and non-spatial queries. If this is added to SimGear, I'd definitely use it instead of my SGBucket-hash based approach. I'd be happy to test Mathias' code more closely, and test-integrate it with my local tree, where I'm already experimenting with FGPositioned to verify the work involved is approximately consistent with my guesstimates - so far it seems to be. And to re-iterate something else - 'FGPositioned' is an awful name, I'd love a better alternative. FGLocated? Yuck.

James

* Thomas Förster -- Saturday 16 August 2008:
> As far as I know, Mathias Fröhlich has written a general
> templated spatial index for simgear.

Mathias will know more about it. :-)
m.

On 16 Aug 2008, at 09:50, Thomas Förster wrote:
>!!!)

Hmm, okay, some slight confusion here I think. While I've proposed this API to help with spatial lookups, it's not the world's greatest spatial implementation - an R-tree or something would be better.
What it does aim to do is collect all the different 'find', 'search' and 'query' methods on the FGblahList classes, which are all rather ambiguously named, and overloaded on different sets of pos / distance / ident / frequency / heading / etc. However, the main thing is that I need to be able to treat a group of these things homogeneously, to get rid of code that currently has to do, given an ident A:

- search the airports list
- search the navaids list
- search the fix list

I want to add user waypoints, which would be another case, and for the NAV display I want to do a spatial query (all within 160nm, let's say) of this nature - the existing call sites that look like this are doing it by ident. (Oh, and obstacles, which is another case, and in theory we could show WX-radar lightning strikes the same way, although I'd probably use a different spatial index scheme internally for movable objects ... hmmm ... actually this could also support TCAS displays by having an FGMoveablePositioned entry for each multiplayer and AI plane.)

Once the abstraction is in place, I'd be delighted to use whatever spatial implementation anyone can propose; mine is just the simplest one I know will work 'well enough'. It's much easier to try different spatial solutions when all the users are collected together.

Oh, and unifying the name index also allows me to handle a problem I've encountered cleaning up the KLN-89b code, namely that it uses a different string ordering from ASCII (letters before numbers instead of after). I've written an STL comparator which implements this ordering, and patched the code to use it, but unifying the 'findFirstWithIdent' rules which got added to the various list classes would make dealing with this much simpler. (Incidentally, is this 'letters before numbers' string ordering standard in the aviation world?
There was already a comment about something similar next to one of the many 'search' methods) > Note that it is not possible with your interface to find the > nearest "FGPositioned" to an arbitrary geodetic position yet. This is > somewhat critical (e.g. finding things based on the user aircraft). > This is my fault for causing confusion, see below. > Additional methods I'd like to see is an extension of the nearest > neighbor > query to N-nearest neighbors. Examples might be finding the 3(or 5) > nearest > metar stations for weather interpolation, 5 nearest fixes to an > airport for > rough route planning... I'm planning to do methods like the above, I just didn't bother to show them in my sample file - absolutely I do need them, and intend to support them. Once I have the SGBucket-based index, they're simple to implement, and again, once the abstraction is there, changing the underlying spatial index is easy. Sorry, I should have said the list of 'find' methods wasn't supposed to be exhaustive - just to point out 'some spatial queries go here, and non-spatial ones too, go here' James James,!!!) Note that it is not possible with your interface to find the nearest "FGPositioned" to an arbitrary geodetic position yet. This is somewhat critical (e.g. finding things based on the user aircraft). Additional methods I'd like to see is an extension of the nearest neighbor query to N-nearest neighbors. Examples might be finding the 3(or 5) nearest metar stations for weather interpolation, 5 nearest fixes to an airport for rough route planning... Thomas I'm planning to do a slightly intensive re-factoring - creating a base class where one didn't exist before. The (draft) base class is attached - it will become a base for the following: FGAirport FGRunway FGFix FGNavRecord ATCData The members are pretty uncontroversial, I think. What I'll do is keep the 'old' accessors / members on each derived class, to avoid a massive diff, and clean them up in separate, boring patches. 
Once the base class is in, and nothing is broken, I'll proceed as follows: - make FGFix, FGRunway and ATCdata be 'pointery', so all these things are living on the heap and do proper virtual calls. I'll actually make them SGReferenced I hope, but I need to see how much more work that will be. (FGNavRecord is already an SGReferenced, for example) - create a centralised, SGBucket-based spatial index of the whole lot, with some query methods: static FGPositionedList FGPositioned::findWithinRangeOfLocation(double lat, double lon, double rangeNm); static FGPositionedList FGPositioned::findWithIdent(....) static FGPositionedList FGPositioned::findClosestWithIdent(....) (and of course some variants / overloads to filter by type. I need to check about overloading statics, but ideally FGNavRecord::findWithinRangeOfLocation would be overloaded to only return navaids, etc, etc). - gradually simplify or kill off the various FGblahList classes in favour of unified queries on this (i.e clean up the call sites). This includes removing the 'poor mans' spatial index in navlist, the total *absence* of a spatial DB in airports, the SGBucket-based one in commlist.cxx. At the same time I'd let the derived classes keep their specialised indices and query methods - eg airport idents are unique, navaids and ATCDatas can be indexed by frequency, and so on. (this last step will take some time :) The motivation for this is of course being able to build my NAV display, but it's also the starting point for working on improving the airways code and creating a standard FGFlightPlan class - an airway or flightplan is essentially built out of FGPositioned objects, tagged with some extra data. And indeed that's what Dave Luff's GPS code looks like - just using its own internal waypoint and flightplan classes for these jobs, which is one of the many things I hope to improve. 
Incidentally, the unified 'type' enum will replace several existing type fields - the FGAirport type member, the FGRunway taxiway flag, the FGNavRecord type, and the ATCData type. There's also scope to add more - I've optimistically included an 'OBSTACLE' type, for example, in the hope that I can find an existing obstacle DB to import. In any case, adding types is cheap. Feedback appreciated.

Note I hate the name 'FGPositioned' but can't think of a better name that accurately reflects what the class does. I'll buy a beer/beer-substitute for anyone who comes up with a shorter, more meaningful class name.

Cheers,
James

On 15 Aug 2008, at 23:07, John Denker wrote:
> Possibly constructive suggestions:
>
> 1) In the case of paired transmitters, turn on the one that serves the
> runway _favored by the wind_. Do this regardless of the location of
> the aircraft.
>
> Remark: This works even in multiplayer situations.
>
> Remark: Put a heavy low-pass filter on the choice of runway, so it
> doesn't go nuts if the wind is variable. Maybe cooperate with the ATIS
> code so that the "active" runway reported by ATIS is the runway with
> the active transmitter.

Funnily enough, that's the next step in my runways code. However, selectively enabling and disabling ILS/LOC/GS navaids will take a bit more engineering. I think it's possible, since Durk's runway prefs logic knows which runway(s) are active for landings. And that code already has the low-pass filter, AND soon will be hooked up to the ATIS code.

> 2) Fix the many other bugs in the service-volume code. Note that there
> is code in the Sport Model to handle false LOC courses, false GS
> paths, extended LOC volumes, and other stuff you encounter in the real
> world.

I don't know what the 'Sport Model' is, can you elaborate? If you could enumerate the issues here, in a new thread, that'd be interesting.
I'm not promising to work on any issues besides this one, but at least we'd have the problems tracked in a searchable, archived place.

Cheers,
James
https://sourceforge.net/p/flightgear/mailman/flightgear-devel/?viewmonth=200808&viewday=16
Part 2: Google Maps

Section 1: Introduction to Part 2

When approaching any sufficiently high-level computing topic, it is often true that we need to be familiar with a broad range of underlying and related concepts to establish the foundation necessary to learn the new material. Virtually every technology not only borrows from, but also builds on, others. Furthermore, it is often the case with programming projects that the bulk of the work comes at the beginning and involves carefully thinking through the problem, devising a solution, and building those parts of the project that we need to write before we can get to the code we want to write.

In the first part of the tutorial we wrote a Perl script which allows us to generate valid KML files from collections of geotagged photos. Using our new tool, we generated a number of files containing all of the data (about our photos) that we'll use to populate our Google Maps. We also looked at the structure of KML, and covered a number of other important topics which are key to understanding the material presented here. If you have not read part 1, do take the time at least to glance through it to confirm for yourself that you won't have trouble following along with this discussion.

Now we turn our attention to Google Maps. After this short introduction we'll handle the tutorial in 3 parts.

Part 1: This section discusses the document object model (DOM), which is a formal methodology for negotiating XML files, like the KML files we created in part 1.

Part 2: Next we'll look at XHTML and the XHTML file we'll use to display our Google Map. As you'll see, the file included with the project is no more than a shell, but it does suffice to allow us to present our map. Furthermore, it affords us an opportunity to talk about integrating XHTML, Javascript, and the Google Maps API. There are a few important things we'll need to understand.
Working with a simple page will help us avoid complicating the issues unnecessarily and will make it easy for you to relocate the map to a page of your own (maybe as part of a weblog, for example) with a minimum amount of fuss.

Part 3: Finally, we'll look at Javascript and the Google Maps API.

Before we get to all of that, there are a couple of preliminary topics to talk about:

XML

We've already looked at XML, describing it as an open, extensible markup language. We've said that XML is a metalanguage which allows us to define other languages with specific syntactic structures and vocabularies tailored to discrete problem domains. Furthermore we've said that KML is one such XML-based language, engineered to address the problem of describing geographical data and the positioning of geographical placemarks and other features to display on a map, among other applications.

XHTML

We haven't yet talked about XHTML, which is a reimplementation of HTML as a standard, XML-based format. HTML (Hypertext Markup Language) has served as the preeminent language for authoring documents on the World Wide Web since the web's inception in the late 1980s and early 1990s. HTML is the markup language1 which makes hypertext2 linking between documents on the web possible, and defines the set of tags, i.e. elements and attributes, that indicate the structure of a document using plain-text codes included alongside the content itself. Just as the web was not the first hypertext system3, HTML was not the first markup language, though both have become extremely important. Without delving too deeply into the primordial history of markup, HTML is a descendant of SGML (Standard Generalized Markup Language), which is itself an offspring of GML (Generalized Markup Language). The Wikipedia entry for GML offers this description of the language: GML frees document creators from specific document formatting concerns such as font specification, line spacing, and page layout required by Script.
SCRIPT was an early text formatting language developed by IBM, and a precursor to modern page description languages like Postscript and LaTeX, that describe the structure and appearance of documents. Not surprisingly the goal of GML is echoed in the intent of HTML, though the two are far removed from each other. Berners-Lee considered HTML to be an application of SGML from the very beginning, but with a clear emphasis on simplicity and winnowing down much of the overhead that had always characterized formal markup languages. In the early days, the web had to struggle for acceptance. The birth of the web is a story of humble beginnings4. The significance of the web was by no means a foregone conclusion, and early adoption required that the markup syntax of the web be as accessible as possible. After all, more stodgy languages already existed. The simplicity of HTML certainly contributed to the success of the web (among many other factors), but it also meant that the standard was lacking in key areas. As the web gained wider appeal there was a tendency to take advantage of the early permissiveness of the standard. Developers of mainstream web browsers exacerbated the problem significantly by tolerating noncompliant documents and encouraging the use of proprietary tags in defiance of efforts on the part of standards organizations to rein in the syntax of the language. In the short term, this was a confusing situation for everyone and led to incompatibilities and erratic behavior among an ever increasing multitude of documents on the web and the various web browsers available, which differed in their support for, and interpretation of, what were in essence emerging 'dialects' of HTML. These web browsers differed not only from one another, but also frequently from one version to the next of the same application. There was a real need to establish stability in the language.
Lack of dependable syntax meant that the job of building a parser capable of adequately adhering to the standards, and at the same time accommodating various abuses of the standard, had become an exercise in futility. Unlike HTML, XHTML must strictly adhere to the XML standard, which means that any compliant XML parser can negotiate properly formatted documents with relative ease. This reliability paves the way for a more efficient and sophisticated web, resulting in not only more consistent rendering of content, but the development of software that is able to differentiate and act on content based on context and relationships among data. This is one of the primary advantages of XML, as has already been discussed, and it is a key advantage of XHTML as well. Long term, Berners-Lee, who currently serves as the director of the World Wide Web Consortium (W3C) and a senior researcher at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), continues to actively pursue the vision of a Semantic Web. That is, a web that can be mined as a rich repository of data by intelligent software agents, which not only present information, perform basic pattern matching, and the like, but are capable of analysing information in a truer sense. The W3C's Semantic Web Roadmap offers a brief introduction to the concept. Whether this vision accurately predicts the future of the web remains to be seen. At the moment the Semantic Web is a veritable stew of protocols, frameworks, concepts, extensions, namespaces, vocabularies, schema, ontologies, and other components. It is the subject of much debate, virtually all of it far beyond the scope of this tutorial. But this 'brave new web' is not purely academic. The increased emphasis on standards-compliant markup has resulted in developers of web browsers and content creation tools steering their apps toward the standards. This in turn motivates authors to produce compliant documents.
In fact the Strict variations of XHTML do not allow for deviation from the standards. Two immediate benefits of this work, whether or not it ultimately leads to some future web, are (1) more consistent document rendering across platforms among mainstream web browsers, and (2) the emergence of the Document Object Model (DOM). We'll look at the DOM shortly.

Section 2: Object Models and the DOM

An object model is a collection of objects and the typically hierarchical, often complex, relationships among them, where the most generic definition of an object is simply a 'thing'. Object models are all around us and are not limited to programming, or the broader topic of computing for that matter. If, for example, you were to describe a car in terms of its object model, you might start by talking about the chassis, or base frame, and then go on to describe the body of the car, its wheels, axles, exhaust system, the drivetrain, etc.; everything that is directly attached to the frame. For each of these you could describe various attributes and elements of that object; e.g. the body material is an attribute of the body object. The elements may in fact be other objects that collectively comprise those objects within which they are contained. The body of the car contains the engine compartment, passenger compartment, and trunk elements. The engine compartment includes the engine, which is itself a collection of smaller objects. The passenger compartment can be described as the collection of front and rear compartment objects, where the front compartment object includes at least a driver side compartment with a steering wheel, instrument panel, etc. These objects have their own attributes and elements, perhaps consisting of other objects, each of which is an object itself, with a more specific function than the objects in which they are contained.
If you've ever put together a toy model of a car you may remember seeing an exploded schematic diagram of the completed model which clearly shows all of the objects, from the largest container elements to the smallest ties, screws and other fasteners that hold all of the pieces together. This is a nice way to visualize the object model of the toy car. Of course the object model I have described is only an informal example. It may be fair to dispute the model I've sketched here. As long as you have a better general understanding of what an object model is, then this example has served its purpose. Object models are very common within the realms of computer technology, information technology, programming languages, and formal notation.

The Document Object Model

The Document Object Model (DOM) is a standard object model for describing, accessing and manipulating HTML documents and XML-based formats. The often quoted description of the DOM from the W3C's site dedicated to the specification is: The Document Object Model is a platform- and language-neutral interface that will allow programs and scripts to dynamically access and update the content, structure and style of documents. The document can be further processed and the results of that processing can be incorporated back into the presented page. Remember that XHTML is a special case of XML (i.e. it is an XML-based format) and essentially no more than a formalization of HTML. Because it is an XML-based format the XML DOM applies. Furthermore, because XHTML describes a single format with a number of required structural elements and only a limited collection of allowable elements and attributes, the HTML DOM has a number of unique objects, and methods for acting on those objects, that are not available in the more generic XML DOM. Having said that, I want to emphasize that the two are more alike than they are different.
Technically we will look at the XML DOM here, but nearly everything discussed is applicable to the HTML DOM as well. Though there are differences, some of which we'll encounter later in this tutorial, as an introduction it is appropriate to limit ourselves to fundamental concepts which the two share in common. We need both. We'll rely on the XML DOM to parse the KML files we generated in part one of the tutorial to populate our Google Map, and the HTML DOM to add the map to our webpage. By the time we've completed this part of the tutorial hopefully you will appreciate just how integral the DOM is to modern web design and development, though we'll only have skimmed the surface of it. The concept of the DOM is actually quite easily understood. It will seem intuitive to anyone who has ever dealt with tree structures (from filesystem hierarchies to family trees). Even if you haven't had any experience with this sort of data structure, you should anticipate being able to pick it up quickly. Under the Document Object Model individual components of the structure are referred to as nodes. Elements, attributes and the text contained within elements are all nodes. The DOM represents an XML document as an inverted tree with a root node at the top. As far as the structure of an actual XML document is concerned, the root is the element that contains all others. In every structure all other nodes are contained within a document node. In the KML files we've generated the Document element is the root element. Every other node is a descendant of Document. We can express the reciprocal relationship by stating that Document is an ancestor of every element other than itself. Relationships among nodes of the document are described in familial terms. Direct descendants are called child nodes, or simply children, of their immediate ancestor, referred to as a parent.
In our KML files, we can make a couple of other useful observations about this structure:

- Every node other than the root has exactly one parent.
- Parents may have any number of children, including zero, though a node without any children won't be referred to as a parent. (A node with no children is called a leaf.)

Implicitly there are other familial relationships among nodes. For example, elements with parents that are siblings could be thought of as 'cousins' I suppose, but it is unusual to see these relationships named or otherwise acknowledged. There is one important subtlety. Text is always stored in a text node and never directly in some other element node. For example, the description elements in our KML files contain either plain text or html descriptions of associated Placemarks. This text is not contained directly in the description node. Instead the description node contains an unseen text node which contains the descriptive text. So the text is a grandchild of the description node, and a child of a text node, which is the direct descendant of description. Make sure that you understand this before continuing. Because of the inherent structure of XML, we can unambiguously navigate a document without knowing anything else about it, except that it validates. We can move around the tree without being able to name the nodes before we begin. Starting at the root document node, we can traverse that node's children, move laterally among siblings, travel more deeply from parent to child, and then work our way back up the tree negotiating parent relationships. We haven't yet described how we move among siblings. The DOM allows us to treat siblings as a list of nodes, and take advantage of the relationships that exist among elements in any list. Specifically, we can refer to the first (firstChild) and last (lastChild) nodes to position ourselves in the list.
Once we are at some location in the list, we can refer to the previous (previousSibling) and next (nextSibling) nodes to navigate among siblings. Programmatically we can use the DOM to treat siblings in an XML data structure as we would any other list. For example, we can loop through sibling nodes, working on each node in turn. Keep in mind that we are using generic terminology, not referring to specific node names, and we are relying only on the structure of XML, which we know must be dependable if the document adheres to the standard. This will work well for our KML files, and it is certainly not limited to KML.

There are primarily two techniques we can use to find and manipulate elements in our XML structure using the DOM.

- Node Properties

Firstly, we can take advantage of relationships among element nodes as we have been discussing. A number of node properties, some of which have already been mentioned, allow us to move between nodes in the structure. These include: firstChild, lastChild, previousSibling, nextSibling, parentNode. If we look at a fragment of KML, similar to the KML files generated in part 1 of the tutorial, starting at the Placemark element...

<Placemark>
  <name>value</name>
  <Snippet maxLines="1">value</Snippet>
  <description><![CDATA[ value ]]></description>
  <Point>
    <coordinates>value</coordinates>
  </Point>
</Placemark>

...we see a number of these properties:

- <name> is the firstChild of <Placemark>
- <Point> is the lastChild of <Placemark>
- The nextSibling of <name> is <Snippet>
- The previousSibling of <description> is <Snippet>
- <Placemark> is the parentNode of <name>, <Snippet>, <description>, and <Point>

- getElementsByTagName()

Secondly, we can use the method getElementsByTagName() to find any element regardless of the document structure. For example, using the syntax...

getElementsByTagName("name")

...we can retrieve, as a NodeList, all <name> elements that are descendants of the node on which we call the method. The following Javascript statement returns a list of all <name> elements in the document and stores that list at the variable list_of_nodes.
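Since we won't always have a browser DOM handy while reading, here is a small self-contained sketch that mimics these navigation properties with plain Javascript objects. To be clear, this is a mock, not the real DOM: makeNode and findByTagName are helpers invented for this illustration, but the properties they wire up (firstChild, lastChild, previousSibling, nextSibling, parentNode, nodeName) behave like their DOM counterparts.

```javascript
// Mock DOM nodes: plain objects wired together with the same navigation
// properties the real DOM provides. makeNode and findByTagName are
// illustration-only helpers, not part of any DOM API.
function makeNode(nodeName, children) {
  var node = {
    nodeName: nodeName, parentNode: null,
    firstChild: null, lastChild: null,
    previousSibling: null, nextSibling: null
  };
  children = children || [];
  for (var i = 0; i < children.length; i++) {
    children[i].parentNode = node;
    if (i > 0) {
      children[i - 1].nextSibling = children[i];
      children[i].previousSibling = children[i - 1];
    }
  }
  node.firstChild = children[0] || null;
  node.lastChild = children[children.length - 1] || null;
  return node;
}

// The Placemark fragment from above, as a mock tree.
var placemark = makeNode('Placemark', [
  makeNode('name'),
  makeNode('Snippet'),
  makeNode('description'),
  makeNode('Point', [makeNode('coordinates')])
]);

// Loop over siblings exactly as we would with real DOM nodes.
var order = [];
for (var n = placemark.firstChild; n !== null; n = n.nextSibling) {
  order.push(n.nodeName);
}
console.log(order.join(', ')); // name, Snippet, description, Point

// A recursive stand-in for getElementsByTagName(): collect every
// descendant whose nodeName matches.
function findByTagName(node, tagName) {
  var found = [];
  for (var c = node.firstChild; c !== null; c = c.nextSibling) {
    if (c.nodeName === tagName) found.push(c);
    found = found.concat(findByTagName(c, tagName));
  }
  return found;
}
console.log(findByTagName(placemark, 'coordinates').length); // 1
```

Against a real parsed document the browser supplies all of this machinery; the statement below from the tutorial does with one call what findByTagName sketches by hand.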
var list_of_nodes = getElementsByTagName("name");

What is a node list? A node list (NodeList) is an object representing an ordered list of nodes, where each node represents an element in the XML document. The order of the NodeList is the same as the order of the elements in the document. Elements at the top of the document appear first in the list, and the first element returned, i.e. the first position in the list, is numbered 0. Keep in mind that the list includes all <name> elements. If you look at the KML files we've generated you may notice that both <Folder> and <Placemark> elements contain <name>. We need to be aware that getElementsByTagName("name") will return all of these if we are starting at the document root. We can differentiate between these <name> elements in a number of different ways. For example we can insist that the node is a child of a <Placemark> node to exclude <name> elements that are the children of <Folder> elements. We need to be able to refer to various properties of these nodes if we are going to act on them in any reasonable way. The XML DOM exposes several useful node properties including: nodeName, nodeValue, and nodeType.

nodeName: is (quite obviously) the name of the node. What might be less obvious is precisely how the DOM defines nodeName.

- The nodeName of an element is always its tag name, e.g. 'Placemark' is the name of the <Placemark> element
- The nodeName of an attribute is the attribute name, e.g. the name of the maxLines attribute of <Snippet> is 'maxLines'
- The nodeName of any text node is always the string '#text'. e.g., the plain text or html that we're using as our Placemark descriptions are each contained in a text node, as has already been discussed, and the name of this text node is '#text', and not the name of the element which surrounds the value in the XML document
- The nodeName of the root document node is always the literal string '#document'.

nodeValue: is what you might expect.

- The value of a text node is the text itself. So the text node of one of our Placemark elements is all of the plain-text or html within the description tags. Again, as far as the DOM is concerned the text is actually contained within a text node which is a child of a description node
- The value of an attribute node is simply the attribute value, e.g. maxLines="1" has a nodeValue of 1
- nodeValue is not defined for the document node and all element nodes.

nodeType: is certainly not something you could guess. A specific value is assigned to each of the available types (categories) of nodes. The following is an incomplete list of common node types.

- Element, type 1
- Attribute, type 2
- Text, type 3
- Comment, type 8
- Document, type 9

As I have already said, we will take advantage of the DOM to parse our KML files and populate our Google Maps. Many of the ideas we've seen here will seem much more concrete when we put them to use. Still, this has been a very brief look at an interesting topic that is especially important to web developers and designers. I would recommend that you pick up a book that does a good job of describing the document object model if you are planning on doing any significant amount of that kind of work. This need not be an intimidating topic, though I have found that the majority of books and articles do an inadequate job with it. If you are only interested in following along with this tutorial then this brief treatment of the DOM, the comments in the included source files, and the Javascript code itself should be sufficient for you to complete the project.

Section 3: XHTML

The next topic we need to look at before moving on to the Javascript that will complete the project is the XHTML page on which we will present our map. A very simple page has been provided. It will be obvious from looking at the file that it is no more than a shell; the basic framework necessary to present our map in a web page.
I doubt that any effort I put into building an elaborate XHTML page would accomplish more than confusing the issues and detracting from the overall value of the tutorial. Absolutely no one should be intimidated by the page, nor should it take more than a moment to understand it. There are some noteworthy parts of the page you will want to pay attention to:

- The page is compliant with the XHTML 1.0 Strict standard as written. It could just as easily be XHTML 1.1, HTML 4.01 Transitional, or any other specification. In the case of XHTML 1.1, all that would need to change is the doctype declaration and MIME type.

Compare the two doctype declarations:

XHTML 1.0 Strict
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">

XHTML 1.1
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">

and the MIME types:

XHTML 1.0 Strict: text/html
XHTML 1.1: application/xhtml+xml

I chose XHTML 1.0 Strict because it is an XML-based format and closer to HTML 4.01 than XHTML 1.1. Furthermore, you may need to reconfigure your web server or change the extensions on your XHTML 1.1 files before your server will deliver the proper MIME types. Again, this point is fairly trivial considering how simple the page is. However, it does present an opportunity to mention validation. We've already discussed validating the KML files generated in the first part of this tutorial. In that case we used the Feed Validator to confirm that our KML files were up to spec so that we wouldn't run into trouble opening the files in Google Earth, or when it came time to incorporate them into our Google Maps script. Here we'll use the W3C's Markup Validation Service to check our XHTML file. The procedure is similar to what we did with KML validation, though we're using a different validation service against a different type of file.

- Simply visit the validator's page in your web browser,
- type or paste the complete URI to your map page into the text entry box labeled 'Address:', and
- click the 'Check' button.
A valid page will result in the following message: "This Page Is Valid XHTML 1.0 Strict!" while an invalid page will produce the error message: "This page is not Valid XHTML 1.0 Strict!" Like the Feed Validator, the W3C validation service makes a distinction between errors, which are failures of the page to comply with the standard and must be corrected, and warnings, which are detected deviations from what may be considered ideal. You should take these warnings under advisement and correct as many as you are able to, unless you're intentionally ignoring them for some good reason. Besides validating a page publicly available on the web by URI, you have the option of uploading a local file, or even copying and pasting a full document into a text entry box for validation, what the validator calls 'Validate by Direct Input'. There are some options you can use to affect the behavior of the service. The default behavior should be fine, but using options you can do things like:

- temporarily override the automatically detected Document Type
- group reported error messages by type
- display the source of the page being validated with the results
- fix problems with your page using the HTML Tidy library, which is code that can correct common editing mistakes (for example it can add closing tags, fix simple tag nesting issues, etc.) and apply a consistent structural style to messily written pages

The validator is a valuable tool for all of us and I encourage you to make a habit of using it, especially when designing your own pages, including weblog templates or themes. Keep in mind that there is no such thing as a personal publicly accessible web page. If your site is accessible on the web then you have a responsibility to produce clean markup for your visitors and the rest of the global community.
If you're unmotivated by the argument that you should validate because it's 'the right thing to do', then keep in mind that invalid markup may effectively reduce the size of your audience and impact your site's ranking with search engines. Furthermore, nonstandard markup is ultimately more time-consuming and difficult to maintain, and is likely to break in unpredictable ways. This brings up a dirty little not-so-secret truth about some of the more popular web services, which are becoming increasingly important these days. Many very useful, slick-looking web services generate broken markup. What can you do about it? Well, from a practical standpoint, don't be too upset by this. The services wouldn't be popular if they weren't functional. So, if you want to use one of them and it works for you and your audience then I say go for it. Of course, this is no excuse to produce invalid markup yourself. In fact, this is yet another very practical reason to do everything you can to keep your own code as compliant as possible. The problems you introduce that work on their own may react badly and unpredictably with the problems you inherit from others, which may also function correctly until they conflict with your broken markup. These are likely to be very difficult issues to diagnose and resolve, which is good reason to avoid them to begin with. There is a well known quote from internet pioneer Jon Postel that applies here: "be conservative in what you do, be liberal in what you accept from others". Of course, feel free to complain to whichever organization is responsible for the web services you use that are fouling up your otherwise standards-compliant site. Something along the lines of... Thank you for the service you provide but please invest whatever resources are necessary to clean up the code you're asking others to embed in their sites ...is probably the right approach.
- Notice the tags in the header that we use to include our own Javascript functions and the code necessary to interact with the Google Maps API. From the Google Maps API documentation: The URL is the location of a JavaScript file that includes all of the symbols you need for placing Google Maps on your pages. Your page must contain a script tag pointing to that URL, using the key you got when you signed up for the API. If your Maps API key were "abcdefg", then your script tag might look like this: <script src="http://maps.google.com/maps?file=api&v=2&key=abcdefg" type="text/javascript"></script> It's not necessary that you understand these lines to use them, but they are simple enough. The script tag references the location of Javascript code hosted at maps.google.com (a specific server, or more likely a group of servers, in the google.com domain) that defines the various functions we'll use on our page, and in our own Javascript code, to produce and manipulate the Google Map. We provide some information to Google with our request. We need only refer to the official documentation to learn about these and possibly other available parameters. From the documentation: The v parameter within the URL refers to the version number of the Google Maps API to use. Most users of the API will want to use the stable "Version 2" API by passing the v=2 parameter within that URL. You may instead obtain the latest release (including the latest features) by passing v=2.x instead. However, be aware that the latest release may not be as stable as the v=2 release. We update the Google Maps API about every two weeks, at which point features within the v=2.x release are migrated into the stable v=2 release unless problems are discovered. We have already identified the 'key' parameter, which is required to authorize the page against a valid Google Maps API key. Remember that these keys are tied to a specific directory on your server.
You must keep your pages that contain code that references the API in the directory you specified when you requested the key. It will not work with pages in subdirectories. You can request additional keys to serve pages from multiple directories, or in the case that you lose track of the key(s) you have already generated. You do not need to request that unused or misplaced keys be revoked. In fact there is no established procedure for doing this even if you wanted to. There are of course terms of service governing the Google Maps API. In most cases these terms won't interfere with what you want to do, but I'm sure there are many legitimate uses of the API that do not meet the terms of service for one reason or another. You will find all of the legal information you need to know about the API on the 'Google Maps API - Sign Up' page, including the complete Terms of Use and a link to a form for contacting Google if you are unsure that your usage will meet their terms. Concerning our own Javascript code, you will see the following script tag in the XHTML file

<script src="maps_tutorial.js" type="text/javascript"></script>

which references an external file where we will create our Javascript code. Notice that this is a relative link. The tag will work for you as written if you keep the filename 'maps_tutorial.js', and if you place the file in the same directory as your XHTML page. Of course you are free to use any filename you like at any location by simply changing the value of the src attribute. Any relative path will work, as will absolute paths from your web server's document root and fully qualified URLs. There are a few ways for us to include Javascript on our XHTML pages. Javascript can be included:

- in the head section of the page,
- in the body of the page, or
- in an external file referenced by the script tag's src attribute.

- Javascript code written into the head section of a page is available from anywhere on the page, but only on the page.
Defined functions are executed on request, either because they are explicitly called or triggered in response to an event.

- Javascript written into the body of the page is executed as soon as the page is loaded. You might include code this way to generate the content of the page dynamically.
- Javascript in an external file is available to possibly many documents. For example, Google's Maps API is available globally to anyone who requests an API key. This is a significant advantage of creating external Javascript files, but it is not the only one.

I've repeatedly said just how simple and blatantly obvious the included XHTML file is. It's all of 30 lines long, including all of the header info and basic structural bits. On the other hand, the relatively simple Javascript file accompanying the tutorial is hundreds of lines long (including comments). The XHTML file would seem to be radically more complex, and unnecessarily so, if we simply dumped all of the Javascript into the file. This separation of structure (XHTML) from behavior (Javascript) is an important tenet of modern web development. Unless you are doing something highly unusual, the Javascript code should exist in its own file(s) referenced from your pages as we have done here. Except where appropriate, the other two techniques mentioned amount to no more than bad habit, which you should work to break or be careful to avoid in the first place. Even with a page as simple as this, separating the structure of the XHTML page from the behavior of the code makes it easier to discuss each as discrete aspects of the project.

- You will notice if you look at the body tag that two HTML events have been specified, along with behaviors (Javascript functions) triggered by the events. onload is a trigger that causes the associated script to execute when the page is loaded. setup_map() is the name of one of the functions in our external Javascript file.
It is responsible for (a) working with the Google Maps API to create a new map on our page, (b) defining the custom icons we'll use to represent markers on our map, and (c) initiating the process of adding markers to the map by calling the other functions we'll define. As the name implies, setup_map() is the function that gets the whole process of generating our custom map started. As you might guess, the behavior associated with the event onunload is triggered when the visitor leaves the page. GUnload() is a function from the Google Maps API that takes care of cleaning up the data structures created by the API. From the documentation: function GUnload - You can call this function to cause the map API to cleanup internal data structures to release memory. This helps you to work around various browser bugs that cause memory leaks in web applications. You should call this function in the unload event handler of your page. After this function was called, the map objects that you've created in this page will be dysfunctional. As you can see from this description, we're doing the right thing by associating GUnload with the onunload event. We'll be dealing with the Google Maps API and taking a closer look at setup_map() later in the tutorial.

- I'm including a basic stylesheet which does no more than center the map on the page horizontally and style the font used on the page. For the purposes of this tutorial, you can either keep the CSS file as it is, make any modifications you like, or even choose not to use it altogether. If you do choose to forgo the stylesheet, just delete the included tutorial.css file and remove the link tag from the head section of the XHTML file. If you keep the file, notice that I've used a relative link. The style info will be available to your XHTML page without making any changes as long as you keep the filename 'tutorial.css' and place the file in the same directory as your XHTML page.
You are free to use any filename you like at any location by simply changing the value of the href attribute of the link tag. Any relative path will work, as will absolute paths from your web server's document root, and fully qualified URLs. There is one bit of style information that I am purposefully keeping out of the external stylesheet and instead including in the XHTML file as an inline style; that is the height and width of the div which will contain our map. From the Google Maps API Documentation:

When you create a new map instance, you specify a named element in the page (usually a div element) to contain the map. Unless you specify a size explicitly for the map, the map uses the size of the container to determine its size.

I've left the size of the map div outside of the stylesheet to de-emphasize the topic of CSS as much as possible. This is not meant to imply that CSS is an unimportant topic. In fact, CSS is well worth your time and effort to learn. But there are already quite a few topics that we must cover to do a responsible job of introducing Google Maps in a tutorial like this. CSS is not necessarily one of them, so I'm pushing it to the side in favor of emphasizing other required topics. Of course, I am still using a stylesheet for much the same reasons I opted to use an external Javascript file. Separating presentation from structure is as important as separating structure and behavior. Unfortunately, a full discussion of CSS is beyond the scope of this article. Fortunately, there are many good books and other resources available on the topic. Other than what has already been discussed, the XHTML file consists of a number of div elements, each named by an id attribute. For example:

<div id="map"></div>

is the container for our map, as we have discussed. The other divs specify locations on the page where we'll output status messages related to the operation of our script.
For example, after we request the photos.kml file from our server (so that we can include markers for each of our photos on our map), the response code returned from our web server will be written to:

<div id="photos_kml_response"></div>

The other divs are target locations for response codes resulting from requests for the other KML files we need to download, and for some basic statistics, e.g. the number of markers generated. The purpose of each of these divs should be obvious to you after looking through the source for our Javascript code, which is responsible for writing these messages, and the XHTML file itself. In fact, you may be able to predict the purpose of each from the names chosen for the id attributes. Alternatively, we could have included a single 'status' div for all of these messages, but considering the simplicity of the XHTML file, I decided that handling each in a separate div would be a more explicit way of structuring the file. Typically, you would not expose this sort of 'debugging' message to your visitors. These messages are useful only during development; afterwards the code responsible for producing them should be disabled in the Javascript and the divs removed from your XHTML pages.

Section 4: A Note about Web Programming

Why include these messages at all? Because it can be difficult to debug Javascript code. This is true of all web-based application development, and of network programming more generally. There are a number of reasons that this is the case. Firstly, the success or failure of our code is dependent on many factors that are out of our control.
- If you are connecting to a remote machine and that computer is malfunctioning, then your code may fail to execute as expected.
- Transient networking issues may cause significant problems even if the machines and all of the code involved are behaving properly.
- Network applications tend to get complicated quickly because the requirement of network communication means that the applications depend on many underlying network services (e.g. name resolution) and network devices (routers, switches, other gateway devices, etc.). What's more, because of increasing concerns about security, many of these devices are intentionally designed to interfere with network communications. Of course the idea is that this interference affects only unwanted or harmful apps but practically speaking this is virtually impossible. - Compromises inherent in the design of the security mechanisms, configuration errors, other mistakes introduced during development or implementation, and the intentional actions of those agents who endeavor to abuse the network all contribute to the sorts of problems you may encounter when developing network applications. Fortunately for us, our network application is particularly simple. Traffic over HTTP between web client and server is almost universally allowed, because the web is viewed as such a fundamentally important application. Furthermore, our communications with Google are almost entirely mediated by functions provided to us as part of the Maps API. Assuming we trust that Google knows what they are doing, then we can hope to avoid many of the problems that might otherwise affect client/server interactions. Of course, Google can't guarantee the stability of the network and those network services that are out of its control any better than we can. For example, transient network issues may still disrupt our application. What's more, we are responsible for understanding the API so that we can be sure that we are using it correctly. We need to supply the various functions Google provides with appropriate data in the correct formats, and we must be prepared for the values returned. 
A well-designed API simplifies these interactions, but we must do our part as well.

Secondly, and this is more specifically an issue with Javascript and other web programming, a web browser is extremely limited as a debugging environment. We must do a lot of the heavy lifting to expose even the most basic information that will be useful to us when it comes to identifying problems with our code. This is a particularly annoying situation considering that the sorts of errors we may encounter are so varied and difficult to predict, as has already been discussed. Unfortunately the topic of debugging Javascript is well beyond the scope of this tutorial. In fact, it deserves a substantial article of its own. So what can we do? The single best piece of advice I can give is to use Firefox when you are working on the Javascript portion of this project. Firefox includes an Error Console that will show you warnings and errors encountered during the execution of this script. This basic information will be incredibly useful to you if you find yourself staring at an empty webpage where your map should be. You will find the 'Error Console' under the 'Tools' menu. While we're on the topic of debugging Javascript code, I want to point out that this is another reason to use a very basic shell of an XHTML file before dumping your maps into an existing page, e.g. an existing weblog theme. It very often is the case that your existing webpages have a number of problems that will generate warnings or errors in the Error Console. There is quite a bit of code involved when a weblog engine generates a page dynamically, especially considering all of the various extensions to the base script that you have the opportunity to add. It would be a mistake to assume that all of this code runs without throwing off a number of errors, even if the finished page displays perfectly in your browser.
These errors have nothing to do with your map of course, but will nonetheless complicate the task of identifying and resolving problems you encounter when building it. A page as simple as the XHTML file included with this tutorial will not contribute any errors or warnings to the messages you see in the Error Console. Once you have your map working, it's a trivial task to move it to another page simply by copying and pasting the script tags to your new page, and adding a 'map' div where you want the map to be displayed. If you want something more than the Error Console, take a look at the Venkman Javascript Debugger () which is available as an add-on for Firefox and other Mozilla-based browsers. From the Introduction on the Venkman project page:

Venkman is the code name for Mozilla's JavaScript Debugger. Venkman aims to provide a powerful JavaScript debugging environment for Mozilla based browsers namely Firefox 2.x, the Netscape 7.x series of browsers, Netscape 9.x series, Mozilla Seamonkey 1.x and Mozilla Seamonkey 2.x.

Having discussed all of the preliminary topics, we can succinctly describe how to write the Javascript code that will generate our map. Following the pattern established in the first part of the tutorial, we'll do this in three passes.

Section 5: An Introduction to the Code

We'll begin with a high-level, natural-language discussion that faithfully describes the code. The second and third passes can be found in the source file itself, accompanying this tutorial. What I'm referring to as a second pass are the detailed comments you will find throughout the source. Pass three is the code itself. After reading through the discussion here, I recommend that you read through the source file. Not only will this help familiarize you with the code, but it will give you the opportunity to make any changes necessary so that the code meets your needs. You'll find the complete source code for the javascript portion of this tutorial in the file 'maps_tutorial.js'.
This discussion will frequently refer to the official Google Maps documentation pages and version two of the API Reference. The documentation is well written and fairly easy to understand. After you have finished with this tutorial, you will want to spend some time reviewing the API reference. You will no doubt find answers to many of your questions and discover new ways of doing things. Where the descriptions presented in the reference are clearly written (and relevant, of course) I will quote them here rather than complicating the issue with my own clumsy explanations. I'll take this opportunity to say that this is not intended to be an introduction to the Javascript language. I will be careful to describe the execution of the script, i.e. how it operates, even when it may seem obvious to those of you who have some programming experience. This is done intentionally to make the code as accessible as possible. Those of you who find this discussion too chatty are welcome to move on directly to the source code. I will not be spending any significant amount of time on the syntax of Javascript. There are any number of good introductory Javascript books and other resources available. It shouldn't take you much time at all to pick up enough of the syntax to follow along with the rest of the tutorial, especially given that a completely functional script is available for you to download, use and modify. I'm hopeful that even those of you who have had little to no exposure to Javascript before now will be able to copy, paste and reason your way through the script well enough to get your Google Map up and running.

Section 6: Natural language walk-through

The first function definition we see is setup_map(). You may remember from the discussion about the XHTML page above that setup_map() is attached to the onload event on our page. Accordingly, this function is called when the page is loaded.
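Concretely, that wiring is the pair of event attributes on the body tag of the XHTML file, as discussed earlier. A fragment (your file may differ in its other attributes):

```html
<body onload="setup_map()" onunload="GUnload()">
```
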
First, we check that the browser is compatible with the Google Maps API using the function GBrowserIsCompatible(). From the API Reference:

This function decides whether the maps API can be used in the current browser.

If the browser is compatible then we continue; otherwise we return, having done nothing. You may want to handle the case that the browser is incompatible with the API by writing a helpful message to the page. Alternatively, you could do something more complicated, but certainly you won't be generating a map, so the best way to handle this may be a simple message that is informative without wasting your visitors' time or causing some unexpected behavior. Assuming the browser is compatible, we generate a map by creating an instance of the class GMap2. The class defines a Google Map, and we can request an instance of that class with a single statement:

map = new GMap2(document.getElementById("map"));

From the API Reference:

class GMap2 - Instantiate class GMap2 in order to create a map. This is the central class in the API. Everything else is auxiliary. Creates a new map inside of the given HTML container, which is typically a DIV element. If no set of map types is given in the optional argument opts.mapTypes, the default set G_DEFAULT_MAP_TYPES is used. If no size is given in the optional argument opts.size, then the size of the container is used. If opts.size is given, then the container element of the map is resized accordingly. See class GMapOptions.

You can safely ignore the discussion of optional arguments. In the absence of these options, sensible defaults will be used. From the statement above: map, which is the target of the assignment, is a variable which going forward will be a reference to the map object created with this statement. We will manipulate this object throughout the execution of the script to modify the map that appears on the page.
There is also "map", the id of the structural element of the page that will contain our map. We saw this div when we looked at our XHTML file. Note the use of the HTML DOM in the statement. Though our use of the XML DOM with the KML files containing our Placemark data will be a little more extensive, what we do with the HTML DOM will not be much more complicated than what we see here. We have already discussed the method getElementsByTagName() when we looked at the XML DOM. getElementById() is similar. We simply pass the method the unique name of one of the id attributes included on the page to target the named container. Because all id names are unique, we can ignore the nested structure of the document and the relationships among elements when using getElementById. Technically the method returns a reference to the first object with the named id, but there should be only one. In the HTML DOM, the document object represents the entire page. So, document.getElementById("map") returns a reference to the first container with the id "map" anywhere on the page. Adding standard controls to our map is just as easy thanks to the API.

map.addControl(new GLargeMapControl());

This statement adds the built-in large pan and zoom control. This is the standard control which typically appears along the left side of a Google Map and allows the user to select the zoom level of the map, either by incrementally increasing or decreasing the level or by jumping directly to the desired resolution using a slider. The control also allows visitors to pan the map in any direction. The statement instructs the API to create a new GLargeMapControl() object and then add the control to our map object. Alternatively, we could have opted for the equally common smaller control with the statement

map.addControl(new GSmallMapControl());

which offers much the same functionality but lacks the slider. The advantage is that the control takes up significantly less space.
This may be especially important for smaller maps. The choice is yours. Next we'll add a type control which allows visitors to switch between the three primary map views, 'Map', 'Satellite', and 'Hybrid': map.addControl(new GMapTypeControl()); This is functionality that most users appreciate and it does seem appropriate for our map of photos. Map view is often the least cluttered of the three, and so the least distracting, and the easiest to navigate. Satellite view on the other hand may help users better understand the context of our photos. Hybrid view combines the two so that we can have the satellite imagery but still find our way around the map with street names and other important indicators. It is possible to implement your own controls by taking advantage of several classes Google makes available via the API for enabling and disabling standard overlay information and toggling between map types. You may have a good reason for doing this, and if so I don't want to discourage you. I do want to say however, that unless you have cause to do something else, keeping the standard controls is recommended. Chances are good that your visitors will be familiar with the standard controls and replacing them just for the sake of being different is unnecessarily disorienting for users. Interface consistency should be one of the primary goals of any developer. Keep in mind that your map is essentially an instance of the Google Maps application. The more consistent we all are, the more comfortable our visitors will be with our collective maps. We've created a map (an instance of the GMap2 class), specified where to place the map on the page, and we've added some controls. Before we can expect the Google API to be able to generate a map, we need to tell it what to display, i.e. what location to show us. We do this by setting the center point of the map and defining an initial zoom level. 
Taken together, a center point, zoom level, and the size of the map (determined by the height and width attributes of the 'map' div on our XHTML page) completely define the map that we see. A better way to think about an interactive Google Map might be to imagine it as a dynamic window offering a view of a potentially much larger map of the world. We can change the resolution of that view by adjusting the zoom level, or adjust our vantage point by panning our view of the map. The API provides the function setCenter() to set the position of the map. From the reference:

setCenter - Sets the map view to the given center. Optionally, also sets zoom level and map type. The map type must be known to the map. See the constructor, and the method addMapType(). This method must be called first after construction to set the initial state of the map. It is an error to call any other operations on the map before calling setCenter().

Do not overlook this line from that description: This method must be called first after construction to set the initial state of the map.

map.setCenter(initial_center, initial_zoom);

This statement, from setup_map(), sets the center of our map object. setCenter must be provided a point and a zoom level. Technically the zoom level is optional, but it's one of those optional arguments that you will specify most of the time. Generally a point is a coordinate pair, i.e. standard latitude and longitude values. When working with the Google Maps API, a coordinate pair is represented by a GLatLng object. From the API reference:

class GLatLng - GLatLng is a point in geographical coordinates longitude and latitude. Notice that although usual map projections associate longitude with the x-coordinate of the map, and latitude with the y-coordinate, the latitude coordinate is always written first, followed by the longitude, as is custom in cartography. Notice also that you cannot modify the coordinates of a GLatLng.
If you want to compute another point, you have to create a new one. When working with GLatLng objects be very careful about the order of the values in the pair. The reference tells us that the latitude must always be written first. Also notice that the latitude and longitude must be specified in decimal degrees, as opposed to degrees, minutes, and seconds. We have already taken care of converting the coordinate metadata from our photos to the proper format in the first part of the project. The second argument in the call to setCenter above is the zoom level. Google Maps allows for a range of zoom levels. The higher the integer value the greater the amount of detail (the higher the resolution) of the map. Of course higher resolution comes at the expense of field of view. In other words, as we make the map larger, we get a more detailed image of a smaller area. This correctly implies that smaller zoom level values (lower integer values) present a wider field of view (more of the Earth is visible on the map) but less detail (any specific area is smaller in terms of the number of pixels it occupies). The smallest zoom level available is 0, which displays a map of the entire world. At a zoom level of 4 we can fit all of (Western) Europe on a map roughly 800px wide by 500px tall: Increasing the zoom level to 6, adjusts the resolution such that France occupies nearly the entire map: and at 12 we have a map just larger than the city of Paris in France: There is one issue about zoom level that we want to be careful to remember. Though 0 consistently represents the low end of the scale, always producing a very wide world map, the high end is less reliable. In an urban area, for example the city of Paris, high zoom levels are likely to be useful because at this level we can clearly distinguish between individual streets, and even buildings and landmarks, like this satellite map of the Eiffel Tower in Paris at zoom level 17. 
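Before moving on, here is how the statements discussed so far fit together. This is a sketch, not the tutorial's actual setup_map(): the name setup_map_sketch and the `api` parameter are my devices so the flow can be read (and exercised) in isolation; in the real script GBrowserIsCompatible, GMap2, GLatLng and the control classes are globals supplied by Google's script tag. The center and zoom are placeholders (Paris at level 12, echoing the example above).

```javascript
// Sketch of the map-setup flow covered so far: compatibility check,
// map construction, standard controls, then the mandatory first call
// to setCenter(). `api` stands in for the Maps API globals and `doc`
// for the browser's document object.
function setup_map_sketch(api, doc) {
  if (!api.GBrowserIsCompatible()) {
    return null; // nothing to do in an unsupported browser
  }
  var map = new api.GMap2(doc.getElementById("map"));
  map.addControl(new api.GLargeMapControl()); // pan/zoom with slider
  map.addControl(new api.GMapTypeControl());  // Map/Satellite/Hybrid
  // setCenter must be the first operation after construction:
  // latitude first, then longitude, in decimal degrees, plus zoom.
  map.setCenter(new api.GLatLng(48.8583, 2.2945), 12);
  map.enableDoubleClickZoom();
  return map;
}
```

The dependency-injected shape is only for readability here; the accompanying maps_tutorial.js calls the API globals directly.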
On the other hand, there are many areas of the planet where high zoom levels will result in a map displaying nothing of any interest. This issue is especially problematic with Satellite maps. The satellite imagery available via Google Maps is fairly inconsistent and somewhat unpredictable at high zoom levels. The problem is that Google simply does not have high resolution satellite imagery covering the entire planet. Where no images exist, instead of a map you will see the message 'We are sorry, but we don't have imagery at this zoom level for this region.' Your only choice is to move to a different region, or reduce the zoom level until you reach a point where a lower resolution image is available. We need to be aware of these issues when working with Google Maps or our visitors may find themselves staring at a broken-looking page. In some cases, adding a feature to our map is as simple as invoking a function from the API with a single statement, as is the case with the next line from our setup_map function.

map.enableDoubleClickZoom();

This does what you might guess from the name. With this statement included among our Javascript code, double-clicking on the map will increase the zoom by one integer level, while centering the map on the position clicked. Holding the Control key while double-clicking will decrease the zoom level. Without this statement, double-clicking the map centers the map without affecting the zoom level. This feature is disabled for all maps by default in the API, so if you would rather leave the 'double click zoom' feature disabled, simply comment out this line in the source file or remove it entirely. At this point we've defined our initial map. It is complete except that there are no markers. There are no icons indicating the location of the markers, no info windows with descriptive text, thumbnails, and links to our photo gallery attached to the non-existent markers.
In fact there is nothing linking our photos to the map whatsoever. This is no small omission. We had better get started on customizing our rather generic map. Still in the setup_map function, we define all of the custom icons we'll use in our project. Icons (GIcon) are very visible components on the map. Icons are the symbols that represent markers at a particular point on the map but you should think of the marker (GMarker) as the primary object. It is this object which will represent each of our photos. The marker includes an icon (GIcon) described above, a point (GPoint) which describes where to place the marker, and optionally an info window (GInfoWindow) which is an area of content containing descriptive information about the marker displayed when it is clicked. It is not uncommon for people to confuse points and markers either because they do not understand the distinction or because of simple carelessness, but the two are not interchangeable. We'll discuss markers later. Here we're defining the icons we'll use so that later we can refer to them by name. Icons are a required component of markers but it is not necessary to create custom icons. If we don't supply a custom icon then Google Maps will use a default. Why then go through the bother of creating custom icons at all? There is no single answer to that question. In fact it's perfectly acceptable to decide to use the defaults. Someone at Google has taken the time to design icons that work quite well, and it would be a mistake to waste your time and effort creating icons that aren't as nice. But there are a number of common reasons why you might want to consider creating your own, and why I have decided to include custom icons with this project. - Branding and identification of markers as belonging to a specific application or set. It's possible for a Google map to include not only your markers but also markers from Google (e.g. search results), and even third party applications. 
The term 'Mashup' has been coined to describe an application that combines data from multiple sources to create what is in some sense a completely new application. Mashups can be much more than the sum of their collected data sources. Custom icons allow users to distinguish markers related to your data from all of the others. - Custom markers can communicate more information than the default icon. Using a default icon tells a visitor where a marker is located but not much else. With custom icons you can vary the size, color, and even create a number of entirely different icon styles to indicate that the icons represent different categories of data or to communicate information about the status of a marker. For example a weather application may use several different icons to represent possible weather conditions, e.g. the weather for a particular location may be sunny, raining, overcast, warm, cold, or even a combination of these conditions, maybe sunny but cold. The developer may choose to vary the size of the icons to indicate the severity or intensity of the weather pattern. For example, an icon depicting a rain cloud might be enlarged to indicate a storm and reduced in size to indicate a light rain. The same weather map may use red colored icons to immediately warn users of dangerous conditions. - Size of icons influences visibility of markers on the map. While large icons may be more noticeable, and small icons more likely to blend into the background detail if there is not a lot of contrast between the map and the icon, it is possible to fit more small icons than large in a given area and resolution without overlap. - Icons which indicate general regions vs icons which mark a specific point on the map. Frequently icons may be used to indicate either a region of the map or a specific place. Icons that represent a precise location should clearly indicate that by attaching to the map at a specific point. 
It's less useful, maybe even inappropriate, for an icon designating a region to appear to indicate a specific location. We can make regional icons resemble 'badges' which appear to float over the map, while point-specific icons can be made with lines, tails or arrows that appear to extend down to the point of attachment, in much the same way as Google's default icons.
- Aesthetic sensibility. Of all of these common reasons for using custom icons, this is the hardest to justify. You may choose to use custom icons because you consider Google's default icons ugly. That's fair enough; certainly taste is subjective. The problem with this argument is that it is difficult to completely banish Google's icons from your maps, even if you do choose to use custom icons for your markers. I've chosen to use several different custom icons for all of the reasons just discussed, except for aesthetic sensibility. I like Google's default icon and wouldn't create a custom icon, something that requires a nontrivial amount of time and effort, just to be different. Let's briefly discuss the rationale for the icons we'll be using before we get to the business of the code for creating the icons. First, let's quickly review the concept of clustering. We've already described what clustering is and its advantages, so I'll just remind you that clustering is a way of replacing multiple markers with one representing the group when the resolution of the map (i.e. the zoom level) and the proximity of the markers would result in overlap such that it would not be possible to distinguish any one from the others. Let's say that at every possible zoom level we want to guide visitors toward our photos. At the lowest zoom levels a visitor will be looking at large geographical areas condensed within a small amount of space in terms of display size. Entire countries are reduced to the size of a small number of pixels.
At these levels even just two markers representing photos taken literally thousands of miles away from each other may overlap. A typical collection of photos may include hundreds of photos or more with large groupings of them appearing in the same relatively small geographical areas. For example, the majority of the photos included in the gallery accompanying this tutorial were taken in and around Boston, Massachusetts, USA. It would be ineffective to display markers for these photos at many of the lowest zoom levels. Instead we'll create an icon to indicate areas on the map where photos are available at higher zoom levels. By displaying markers attached to regions appropriate for the current zoom level of the map, we avoid the issue of overlapping markers and the performance issues that would result from trying to display a world's worth of photos in a single view. The details of how to do this are discussed in the comments included with the source code. The first type of icon we want to create is a regional icon. Depending on the zoom level we will draw a marker with this icon centered on a country, state/province, or city, where we have at least one photo in our collection. You may remember that we created a KML file matching each of these regions. We will rely on these KML files to create our regional markers. For example for the lowest few zoom levels, let's say levels 0 - 2, we can distinguish between countries but not smaller regions. At these zoom levels our regional icons will indicate countries where we have at least one photo. At higher zoom levels the same icons will be used with markers to indicate states and then, as the level increases, eventually cities where we have photos to display on the map. 
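The zoom-band scheme just described can be captured in a small helper. Note the cut-offs below are illustrative: the text only pins down levels 0-2 as country level, and the real thresholds live in the accompanying source file.

```javascript
// Map a Google Maps zoom level to the kind of marker drawn at that
// level. Only the 0-2 => country band comes directly from the text;
// the other boundaries are placeholder guesses for illustration.
function region_type_for_zoom(zoom) {
  if (zoom <= 2) return "country"; // whole countries distinguishable
  if (zoom <= 5) return "state";   // states/provinces
  if (zoom <= 8) return "city";    // individual cities
  return "photo";                  // markers for individual photos
}
```

A dispatcher like this keeps the zoom-to-region policy in one place, so adjusting the bands later means editing a single function.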
At intermediate zoom levels the resolution is such that there is a reasonable amount of separation between small geographical areas, so we can add markers for individual photos and expect that many of the icons will be visible without overlap and, for any one view, we will not have so many markers that we overwhelm the map. It may be true that we have thousands of photos, but many of the corresponding markers will fall outside the bounds of the view. However, at these levels it may be that we have a number of images appearing in the same small area of the map. In this case we include only a single marker for one of the images, and use an icon to not only indicate the location of a specific photo, but also to let visitors know that there are additional icons which are currently obstructed by this one. Where we can clearly distinguish between the icons for all of our markers, we will use a third custom icon that simply indicates the location of a particular photo. At the highest zoom levels this is the icon we will use for all of our markers. Though we have described only three distinct styles, we need to define one more type. So that we can include markers for as many photos as possible at intermediate zoom levels, we will need two differently sized icons: one for the highest zoom levels and a smaller one for intermediate levels. Now we can discuss how we will create these icon styles. This is only intended to be an overview; for the full details please refer to the Javascript file accompanying this tutorial. From the API reference:

class GIcon - An icon specifies the images used to display a GMarker on the map. For browser compatibility reasons, specifying an icon is actually quite complex. Note that you can use the default Maps icon G_DEFAULT_ICON if you don't want to specify your own.

There are quite a few properties to consider when it comes to creating a new icon. Not all of these properties are required, and in fact we do not use all of them in the tutorial.
Refer to the API Reference for the full details (). We will limit ourselves to the following list of properties: image, shadow, iconSize, shadowSize, iconAnchor, and infoWindowAnchor.

To create a custom icon we create a new instance of the GIcon class:

icon = new GIcon();

The image property specifies a URL referencing a file used as the foreground image:

photo_icon_large_image = "";

The shadow property specifies a URL referencing a file used as a shadow image for this icon. The shadow image should be based on the foreground image, of course. I'll have more to say about the shadow at the end of the list of properties.

photo_icon_large_shadow = "";

iconSize is the size in pixels of the foreground image, listed as (width, height):

photo_large.iconSize = new GSize(32, 34);

shadowSize is the size in pixels of the shadow image, also listed as (width, height):

photo_large.shadowSize = new GSize(56, 32);

iconAnchor is "the pixel coordinate relative to the top left corner of the image at which this icon is anchored to the map" (this description is taken from the API reference):

icon.iconAnchor = new GPoint(6, 20);

infoWindowAnchor is "the pixel coordinate relative to the top left corner of the image at which the info window is anchored to the icon" (also from the API reference):

icon.infoWindowAnchor = new GPoint(5, 1);

We create a new instance of the GIcon class and define all of these properties for each of the following custom icon styles:

regional_icon = new GIcon();
photo_large = new GIcon();
photo_small = new GIcon();
photo_bunch = new GIcon();

Before we move on, a couple of notes about the icon foreground and shadow images. The foreground image should be of reasonable size. Google's own default image is approximately 32 pixels high by 20 pixels wide. We've already discussed the tradeoffs involved in creating larger versus smaller icons. The shadow image is derived from the foreground image.
Using an image editing application of your choice (I used Adobe's Photoshop), the steps are as follows:

- Copy the foreground image to a new file.
- Fill the shape of the object so that it's black. The icons included in the project started out as Adobe Illustrator files, which I opened in Photoshop. Once in Photoshop I could select the object and set the fill and outline colors to black. If you have another type of image you may need to use a selection tool to grab the shape and set the fill that way.
- Shear the shape at a 45 degree angle. (Note that shearing is not the same as rotating. Shearing the image essentially stretches it at the specified angle rather than turning it around a fixed point. It's the shearing effect that gives it the look of a shadow.)
- Scale the image to half its original height without adjusting the width. (Don't do a proportional scale.)
- Blur the image. In Photoshop this means applying a blur filter. Whatever is identified in your application as a standard blur should work fine.
- Finally, adjust the opacity of the image to lighten it a bit.
- Save the image as a transparent PNG.

After defining all of the icons, we're essentially finished setting up the map itself. The last statement of the setup_map() function calls setup_markers(), which controls the rest of the execution of the script. In setup_markers() we do the following:

- Download from the server the KML files generated in the first part of the tutorial, which contain all of the data about our photos. To do this we take advantage of the function GDownloadUrl() from the Google Maps API. This is an important function for us and its use is described at length in the comments of the source code.
- After parsing the KML structure from the data files using a parse method, again from the API, we create a number of NodeLists, which are objects that mimic lists of nodes from the DOM. I want to emphasize that these aren't lists in the traditional sense.
A NodeList is a particular type of object which represents an ordered collection of nodes (not an array of nodes). This distinction may seem subtle, but it's not: virtually all of the array object methods that you may be used to using aren't available for NodeList objects. Still, we can loop through these NodeLists and create markers for each of the elements from the original KML files, where each Placemark corresponds to one of our photos.

- We push all of these markers onto one or more arrays where they're collected until we're done processing all of the nodes from the KML structure.
- Finally, we pass these arrays to a number of marker managers (GMarkerManager), which are objects the API makes available to us to control the display of collections of markers based on the bounds and zoom level of the map.

Marker Managers

The marker managers automatically remove markers that fall outside of the bounds of the view. Without a marker manager, all of the markers, including those that can't be seen, are added to the map; this adds to the amount of work the application must do but isn't useful. Alternatively, we could accomplish the same thing by determining the bounds of the map and checking that each marker falls within the view before adding it, repeating this every time the map was moved or the zoom level increased or decreased. The marker managers handle this for us.

Another advantage is that we can define a number of marker managers and assign different collections of markers to each. For each marker manager we set a zoom range over which it is active; that is to say, levels at which it adds and removes all of the markers it's responsible for from the map, subject to the bounds of the view as just described. We can set these marker managers up to accomplish clustering.
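To see what the marker managers are saving us from doing by hand, here is a toy sketch of the underlying filtering logic in plain JavaScript. This is not the Maps API itself; GMarkerManager performs this bookkeeping (far more efficiently) for us:

```javascript
// A minimal stand-in for a marker manager's job: given markers tagged
// with {lat, lng}, the manager's active zoom range, the current zoom,
// and the view bounds, return only the markers that belong on the map.
function visibleMarkers(markers, minZoom, maxZoom, zoom, bounds) {
  if (zoom < minZoom || zoom > maxZoom) return []; // manager inactive at this zoom
  return markers.filter(function (m) {
    return m.lat >= bounds.south && m.lat <= bounds.north &&
           m.lng >= bounds.west && m.lng <= bounds.east;
  });
}
```

With several such "managers", each owning a different collection (regional markers, bunched photos, individual photos) and a different zoom range, you get the clustering behavior described above.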
As we move from low to higher zoom levels we can enable different managers to display increasingly more of our markers so that we're never overloading the map, but always representing as many of our photos as can be accommodated at the current zoom level. Though we've described the setup_map function here, I'll refer you to the source code for detailed discussion of the rest of the project. It simply makes more sense to structure the discussion as the source code is written. The easiest way to handle that is in the source file itself. Certainly don't be intimidated by the source. It is all explained very explicitly. In fact, because the file is so heavily commented, I'm including a duplicate maps_tutorial.js that omits the commenting so that those of you who feel that the narration is distracting can focus on the code itself. This means we've come to the end of the tutorial, except that you must read through the source file to come to the end of the story of the javascript portion of the project. If your Google Map is working, congratulations. If not, keep at it. When it comes to programming, persistence is a virtue. Review those portions of the tutorial that you may not have understood completely, or double-check your source file looking for any small mistake. As has already been mentioned, Firefox's Error Console should help point you in the right direction. Once you get the project working, it will only take a few minutes to create a map of your geotagged photos going forward. It really is something that is valuable on its own and potentially the beginning of any number of other more sophisticated projects. Google Maps has been a fantastically successful application for Google and in recognition of this, the amount of time and effort they continue to invest is only increasing. Recently the API has gotten radically better. They've added their geocoder, which we've used in this project, and expanded it to reach literally billions of new people. 
Moreover, they're improving their own applications based on the API. For example, they've recently launched My Maps, which is a simple web-based interface for customizing a Google Map with information including photos, markers, routes, etc. And they're doing everything they can to encourage independent developers like us. Literally, from the time I started writing this tutorial to the time it was completed, it's become possible to do more with Google Maps. One of the best places to find out about improvements to Google Maps and the Maps API is the Official Google Maps API Blog.

Whether this project we've put together is everything you want to do with Google Maps or just the beginning of something bigger, I hope this tutorial helps you to enjoy your photos.

Read Part I of Google Earth, Google Maps and Your Photos: a Tutorial

Notes:

1 A markup language is any language that uses text codes or symbols alongside the content itself to specify the style and layout of the document.

2 A simple definition of hypertext from the W3C's "What is Hypertext" glossary of hypertext terms. Note that this document is not a complete list of terms, nor does it necessarily reflect current usage (last updated 1995). See the appendix for additional resources.

3 History of Hypertext.

4 The World Wide Web: A very short personal history.

Downloads:
https://www.packtpub.com/books/content/google-earth-google-maps-and-your-photos-tutorial-part-ii
If you're using the standard library a lot, typing "std::" before everything you use from the standard library can become repetitive. C++ provides some alternatives to simplify things, called "using statements".

The using declaration

One way to simplify things is to utilize a using declaration statement, such as "using std::cout;" near the top of a function. A using declaration tells the compiler that an unqualified name (here, cout) should refer to the named entity in the namespace (std::cout). This doesn't save much effort in this trivial example, but if you are using cout a lot inside of a function, a using declaration can make your code more readable. Note that you will need a separate using declaration for each name you use (e.g. one for std::cout, one for std::cin, and one for std::endl). Although this method is less explicit than using the "std::" prefix, it's generally considered safe and acceptable.

The using directive

Another way to simplify things is to use a using directive statement, such as "using namespace std;". A using directive tells the compiler that we're using everything in the named namespace, so no prefix is needed for any of its names. There's open debate about whether this solution is good practice or not. Because using directives pull in all of the names in a namespace, the chance of naming collisions goes up significantly (but is still relatively small).

Suggestion: We recommend you avoid "using directives" entirely.

An ambiguity example with using directives

For illustrative purposes, let's take a look at an example where a using directive causes ambiguity. Suppose we define our own function named cout, and then place "using namespace std;" inside main before using the name cout. The compiler is unable to determine whether we meant std::cout or the cout function we've defined. In this case, it will fail to compile with an "ambiguous symbol" error. Although this example is trivial, if we had explicitly prefixed std::cout, or used a using declaration ("using std::cout;") instead of a using directive, then our program wouldn't have any issues in the first place. Many people avoid "using directives" altogether for this reason.
Others find them acceptable so long as they are used only within individual functions (which limits the places where naming collisions can occur to just those functions).

Limiting the scope of using declarations and directives

If a using declaration or directive is used within a block, the using statement applies only within that block (it follows normal block scoping rules). If it is used outside of a function, it applies from that point to the end of the file. This is considered bad practice.

Rule: Avoid "using" statements outside of a function.

Never understood namespaces or the purpose of namespaces before. learncpp saves my life. Thanks for your efforts, sir Alex. I also want to be a programmer who will not only program but also teach others. Thanks a lot, a great inspiration for me.

You're welcome. The best way to ensure you really understand something is to teach others. You'll very quickly find out where the gaps in your knowledge are. 🙂

In the case of using declarations, if I use "using std::cout" and I wanted to use a different cout in the global namespace, can I just prefix it like this: "::cout", without any problems?

It depends. If your using namespace std is inside the function, then using the global namespace prefix (::) should resolve to your user-defined function. If your using namespace std is in the global space (outside of a function) then that won't work.

You said that if a using declaration or directive is used within a block, the using statement applies only within that block (it follows normal scoping rules). Then why does this produce an error?

When you do a "using namespace std;" inside of main, everything inside namespace std is now accessible without the std:: qualifier while inside main. That means inside of main, there's now a conflict between our user-defined cout and std::cout, because the compiler can't tell which one "cout" refers to.

Uh-oh! I didn't understand! Do you mean that even if the using namespace std was declared inside of main, I can access everything out of main()? But how? How can it go out of scope!?

No.
If you do a using statement inside a function, that using statement only applies within the function. Since you did a using namespace std inside of main, then inside of main you don't need the std:: prefix. But you would need it outside of main. The problem here happens because:

1) There is a function above main() named cout(), and this cout is visible inside of main.
2) #include <iostream> is bringing in another declaration for cout.

Normally this wouldn't cause a problem, because cout would be our function, and std::cout would be the one in iostream. But the using statement tells the compiler that std::cout should be called cout, which conflicts with the function we defined above. We would have the same problem if the using statement were used above main(), so that the objects in the standard library could be accessed without the std:: prefix anywhere below that point in the file.

Ok. So, you mean that the using statement doesn't apply within main(), so it didn't see the cout inside of main, but as iostream brought in the declaration of std::cout it conflicted with the cout function... But then, what is the difference between a using statement declared globally and a statement declared inside a function?

> If you do a using statement inside a function, that using statement only applies within the function. Since you did a using namespace std inside of main, then inside of main you don't need the std:: prefix.

A using statement inside the function applies the effect of the using statement for the rest of that function only. A using statement outside the function applies to the rest of the file.

That's what I'm asking... if it only applies within that function, why did it show an error even when the cout() function was declared globally and was outside the scope of main()? The cout() function is not even called. And if it is able to see outside of main(), then it must be the same as a using directive that is declared globally.
Remember that in a .cpp file, anything declared is visible until the end of the scope it is declared in. That means if something is declared in a function, it's visible until the end of the function. If something is declared in the global part of the file (outside of a function), it's visible until the end of the file. So when we #include <iostream>, all of the declarations in iostream are visible and usable for the rest of the file. When we define function cout, that cout is visible and usable for the rest of the file.

Inside of main(), there are three things happening:

1) The using statement makes all of the functionality in namespace std accessible without the std:: prefix. Because the using statement is declared within the function, this is true only until the end of the function.
2) Because of the using statement, the std::cout from iostream (now accessible as just cout) and the cout from the function we defined are both visible from within the main() function, thus causing a potential naming conflict.
3) The use of cout in the statement following the using statement causes the compiler to not be able to disambiguate which cout we mean, causing an error.

This potential conflict actually happens because we are using cout in the statement following the using statement.

Ok man, I understood! Tell me if I am right... if we had declared the cout function below main(), then it would not have been able to identify cout() and would have used std's cout in main() (as we all know the compiler reads files sequentially).

Correct.

int main()
{
    using namespace Foo;
    using namespace Goo;
    return 0;
}

So, we can't do this???

You can (even though it's bad practice), so long as none of the names in namespace Foo and namespace Goo collide.

If we avoid using cout as a function, will it reduce naming collisions?

It depends. If you always use explicit prefixes (e.g. std::) then no. If you like to use "using namespace std;" liberally, then yes.
Hi, Alex. I understand that the prefixes are always explicit, so they are always safe. Now I want to know if there is some sort of order of precedence between the using directive and the using declaration. For example, suppose I build a namespace called "mynamespace" with a cout function in it (I'm new to C++ and don't know how to do it yet). Then I put this:

int cout()
{
    ...
}

int main()
{
    using namespace mynamespace;
    using std::cout;
    cout ....
}

What will actually happen? Also, is there an explicit qualifier for the function (in the above example the "int cout()" function) that I created? Thank you!

The more specific using declaration (using std::cout) takes precedence over a using directive (using namespace mynamespace). Assuming your cout() function was inside namespace mynamespace, the qualifier would be mynamespace::cout. I talk more about namespaces and how to create your own in chapter 4.

Thank you, Alex!

Hi Alex, I have a question about how the "using" statement (line 5) is used in the code below. This is a little different from what I am used to or what your literature here explains. I am mostly confused about the equals sign (=) used in the using statement. Please explain. Moreover, why does the statement in line 17 have nothing preceding the scope resolution operator ::? What is going on here?

"using" is one of the C++ keywords that has a lot of different meanings depending on context. This form of using is called a type alias, and it is used to give a type an easier to remember/use name (in this case, StringPtr now means std::shared_ptr). I talk about type aliases in lesson 4.6. The scope resolution operator with nothing preceding it is used to indicate that the object being resolved lives in the global scope. In this case, the :: is extraneous because there's no other definition of print_count to conflict with.
However, if main() had an identifier named print_count (a variable or a type), then print_count would select that name instead of the one in the global scope (the locally defined print_count would hide/shadow the global one). In that case, the :: could be used to say, "no, I mean the print_count that lives in the global scope, not the locally defined one". I talk about shadowing in lesson 4.1a and :: as a prefix in lesson 4.3b.

Do you know if there would be any possible way to sum this up, or simplify it at least? It just seems really long and irritating.

There's no way to simplify the general logic, but you could condense this a bit by not commenting the vanity spacing lines and combining those with the previous print statements, e.g.:

Hi Alex, first, I really want to thank you for creating such a helpful website. Your explanation is very clear to follow. Just one quick question. I changed the cout operation a bit as below:

#include <iostream>

int cout()
{
    return 10;
}

int main()
{
    using std::cout;
    cout << cout();
}

This produces an error message: type 'ostream' (aka 'basic_ostream') does not provide a call operator. But if I get rid of the using std::cout and put in std::cout << cout();, then it works. Is this a dead end, which forces me to put in the std::cout << cout();? Is there any workaround where I can still use the namespace directive or std::cout declaration?

Yes, your example is producing a naming conflict, because the compiler doesn't know whether you mean your cout() or std::cout(). There's no way to fix this using a using directive (that's what's causing the issue in the first place). While you found one way to resolve the issue (explicit use of std::), another way would be to give your function a different name to avoid the naming conflict altogether.

Thank you so much~!

In the collision example you used the directive statement within the main function, but the naming collision still happened. Why?
Using directives don't provide any priority for the compiler to pick which version of the function it should use. So in this case, it's not sure whether it should use our version of cout() or std::cout(). That's one reason why using declarations are better: if a naming collision occurs, it'll use the one specified in the using declaration.

But why is the compiler unable to know the difference between a function we defined and a keyword? We always write a function along with these brackets: (). And in the collision example inside the main function no such () brackets were used. So wasn't it obvious that we were actually talking about the "cout" keyword there?

That's a good question. I'm not sure why the compiler can't disambiguate that one version of cout is a function and the other is an object of type std::ostream.

Hello there! In the collision example you provided, I didn't quite understand why a using declaration prevents collisions while a using directive does not. Isn't cout trying to use the std namespace while declared as a variable either way? Thanks in advance, and thank you VERY much for this tutorial. 🙂

A using declaration says "treat all instances of name X as namespace::X" (which implies a priority in the case of conflict). A using directive says "import all of these names from the namespace", but doesn't tell you which to prefer if a conflict occurs.

#include <iostream>

int cout()
{
    return 5;
}

int main()
{
    using namespace std;
    cout << "Hello world";
    return 0;
}

The above program executed well without any error. It displayed "Hello world". Please let me know why it didn't give an error.

Unsure. Perhaps your compiler was able to intelligently resolve the ambiguity. Visual Studio 2015 gives an ambiguous symbol error (naming conflict) between our int cout(void) function and the std::ostream object std::cout.

Thanks for the very clear and complete tutorial! A question about namespaces.
What if I still want to have a function called cout in my code and call it in the main function, where I'm using the std namespace declaration? Should I declare a namespace?

You can either put your version of cout in a separate namespace (and then access it via namespace::cout), or leave it in the global namespace and access it via ::cout (nothing before the ::, which indicates the global namespace).

In the case above, everything inside the std namespace is brought into the scope of the main function, so the compiler is not able to resolve between the user-defined cout() function and the namespace's cout object. But in the case below, using std::cout, we still brought the cout object into the scope of main, so how does the compiler resolve between these two now, as the function cout() is still accessible in main?

In the case where we do "using namespace std", we import all of the names, but we haven't told the compiler anything about how to prioritize conflicting names. In the case where we do "using std::cout", we're explicitly telling the compiler "when you see cout(), we mean std::cout", so it knows how to resolve this, even when it sees a conflict.
In other words, "using std::cout" only has the potential to cause a naming collision with the identifier "cout", whereas "using namespace std" has the potential to cause a naming collision with _every_ identifier in the std namespace (and there are a lot!).

Just to be clear, how do prefixes, directives, and declarations interact if used together? Is there some sort of order of precedence, do they cause errors, or does something else happen? For instance, if I write a directive that pulls in a library with a cout function other than std's, but also write a declaration of using std::cout;, what will the compiler do? And what about other scenarios, like directive vs. prefix, or declaration vs. prefix?

Prefixes should always be fine since they're explicit. Declarations will take precedence over directives.

When using the using declaration for cout as noted above under "The using declaration", is it correct that since it is within the main() function it will only work within that function? Say, if I went to another function, would I have to enter it in again for it to work within that function? Just trying to clear some things up for myself as I'm taking notes going through these chapters/sections. Thanks!

Yes, if you use a using directive or declaration within a function, it only applies within that function.
Hello, when we use "using namespace std" we tell the compiler to look into a whole namespace for definitions of objects like "cout". The other way is to explicitly name the desired object, like:

using std::cout;
using std::cin;
using std::endl;

My doubt is, if we are dealing with a very large file, is that going to affect the execution time? I mean, what is the optimised way of using it? My second doubt is silly... how do people paste snippets of their code when they comment here?

All identifier resolution happens at compile/link time, so there's no runtime performance penalty for using using statements (it could make your compilation take longer though).

Thanksies muchly for the site and tutorials, really handy and much better than the last time I tried to learn C++ a few years ago from a textbook riddled with several typos per page (yeah, fun debugging when you don't even know what the syntax is supposed to look like -_-; ) so this is a huge improvement. Decided to take the basic concept a step further with the std::cin statement test:

int main()
{
    // set cout to std::cout because I'm lazy :: also test that this statement works properly
    using std::cout;
    // test to make sure cin works the way cout does
    using std::cin;
    /* Also using only individual commands :: I don't trust automating more than absolutely necessary.
     * Computers are stupid, don't entrust them with anything that may involve making a decision.
     * Since "using namespace std;" requires the computer to make a decision if two commands overlap,
     * I'm simply not willing to entrust it with a decision because, with my luck, it'll make the wrong call.
     */
    // Also, the above is a test of enclosed comments across multiple lines.

    // set X and Y variables first, prefer to set early before messing with anything else
    int x = 0; // defaults X so it doesn't break when called
    int y = 0; // defaults Y so it doesn't break when called

    // ask user for a number
    cout << "Enter a number:";
    // read number from console and store it in x
    cin >> x;
    // ask user for a second number
    cout << "Enter a second number:";
    // User inputs Y value :: test for second variable and order
    cin >> y;
    // adds totals and displays X+Y :: test for expression
    cout << "You entered a combined total of " << x+y << std::endl;
    return 0;
}

I like to test things like this by running several previous things I've learned together to make sure they actually work the way I think they do. In this case, I discovered that the space at the end of "Enter a number: " in the original example is unnecessary in some cases; you can't see it anyway, since it's a command prompt and the only thing different would be a space between the : and your inputted integer. The space makes it a little prettier, but doesn't stand out as especially needed. The second thing was actually the first chronologically discovered; the realization that when I put in:

std::cout << "You entered a combined total of" << x+y << std::endl;

...it printed off "You entered a combined total ofX". This meant I had to go searching for where the space was coming from, which turned out to be included within the end of the quotations. This also made me curious: since spaces are preserved in quotes, does it minimize them to one space or does it count all spaces? A simple test of adding 3 spaces to the end of the quotations so it'd be obvious verifies that, yes, C++ does in fact preserve all spaces entered within quotations. Combined with checking the printed data, and noting it has a monospaced font, this means it can be used to print off ASCII art.
=P Anyway, mostly irrelevant, but it did prove that the code works (mostly) the way I thought it did, and I was able to learn some neat things by testing some variations thereof. For any newbies reading, I'd generally recommend running the presented text first (write it out manually instead of copy/paste, helps you to remember the commands via repetition), and then do something a little different to test how it works in practice under new conditions. It helps to make sure you understand what's happening and why it's working, and if something breaks during the test, it means you did something wrong in an enclosed environment where you can't break too much. Better to screw up while learning in a relatively safe area than screw up later on when getting paid and your boss gets pissy when you break it and spend an hour debugging something you could've learned early on. Learning involves making mistakes; use your learning time to get all the mistakes out of the way by feeling out what works and what doesn't before it matters. =3 Also, while I overdid it (many times over) with the comments, in past experience I've learned spending 15 minutes writing out clear explanations for why you did something will save you hours of work later on when it invariably breaks and you can't recall what you were thinking at the time you wrote it. It's even more useful if someone else has to read your code or work because they may not come to the same conclusions you did, such as "using std" instead of "using namespace". Any time there's a judgement call, give a clear explanation, it'll save headaches in the long run, even if it feels like a waste of time now. Great use of experimentation to aid your learning process. I’d second the advice for readers to play with the examples and extend/alter them as a way to learn. 
I totally agree with "write it out manually instead of copy/paste, helps you to remember the commands via repetition." What seems a waste of time now is, in reality, a tremendous blessing when you are writing code and have no source for copy/paste. Also, concerning commenting, there was I think a moderator (maybe a frequent poster) on the AutoIt forum who had a signature that may embody the need for commenting. I don't remember exactly, but it was something like this: "I was looking at some code and wondering what drugs this guy was on, then I remembered that I wrote the code." Of course it was funny, but I believe it could have been a lack of appropriate commenting. Another option was that he might have written inelegant code in the first place. But the lesson about commenting is there. The coder didn't know why the code was written in the way it was. _aleph_

This is a wonderful approach to learning. I'm sure by now you know that a 'space' is actually a character with an ASCII value...

Instead of:

#include <iostream>

int main()
{
    std::cout << "Hi!" << std::endl;
    std::cout << "My name is Alex." << std::endl;
}

couldn't I just use "using namespace std;" in the beginning?

#include <iostream>
using namespace std;

int main()
{
    cout << "Hi!" << std::endl;
    cout << "My name is Alex." << std::endl;
}

In this case, you may write 'endl' instead of 'std::endl':

using namespace std;

int main()
{
    cout << "Hi!" << endl;
    cout << "My name is Alex." << endl;
}

Generally, though, using directives should not be used outside of function bodies; it's much better to place the using directive inside the function that needs it.

What if after using namespace std; I needed to use other objects not from the standard library, but from another library I created? THANK YOU.

It's no problem. If the objects in your own library are not in a namespace, just call them directly (e.g. doSomething()).
If the objects in your own library are in a namespace, either call them directly (e.g. myNamespace::doSomething()) or use a using statement for that library (e.g. using namespace myNamespace) and then call them directly (e.g. doSomething()). As long as the names of the objects in your library are different from those in the standard library, you won't have any problems. And even if you do encounter a naming collision, there are ways of working around it. For example, you can remove the using statements and use explicit qualifiers (e.g. std:: or myNamespace::).

Thank you.
http://www.learncpp.com/cpp-tutorial/4-3c-using-statements/
[OpenGL] Deprecated methods - how to treat them?

Posted Saturday, 30 May, 2009 - 11:43 by the Fiddler

This has been discussed before, but no conclusion was reached: how should OpenTK treat deprecated methods? Possible approaches, sorted from least to most intrusive:
- Do nothing. (This is the current approach)
- Add a message to the method summary: "This method is deprecated." (Minimally intrusive, no compile-time warning)
- Mark methods with the ObsoleteAttribute. (Compilation warning, potential breakage with "Treat warnings as errors" option)
- Move methods to a different assembly, called OpenTK.Graphics.Deprecated. (Reduces OpenTK.dll size by 287KB (15%), see below for more details)
- [New] Create two OpenTK.dlls, one with deprecated functions, one without. The consumer picks one. (One-time decision, no breakage; however, it increases download size and makes the OpenTK build system / distribution slightly more complex. Idea provided by Mincus)

I have toyed with approach #4 with mixed results so far. Ideally, we would like to introduce minimal breaking changes to the consumer:
- Add a reference to OpenTK.Graphics.Deprecated.dll
- Add "using OpenTK.Graphics.Deprecated".
- Continue using OpenTK as before.

The question is how to implement this? One possible approach is to create a GL class in the deprecated namespace that inherits from OpenTK.Graphics.GL:

// OpenTK.dll
namespace OpenTK.Graphics
{
    public class GL
    {
        /* Non-deprecated methods */
    }
}

// OpenTK.Graphics.Deprecated.dll
namespace OpenTK.Graphics.Deprecated
{
    public class GL : OpenTK.Graphics.GL
    {
        /* Deprecated methods */
    }
}

The user can consume this as follows:

using OpenTK.Graphics;
using GL = OpenTK.Graphics.Deprecated.GL;
...
GL.ClearColor(Color.MidnightBlue);

Questions:
- Which approach (1-4) do you prefer? Why? (If you have another idea, please post it!)
- Do you consider the approach I outlined as too intrusive or complex?
It requires a new reference and a single line of code in each source file that uses deprecated methods.
- Can you think of a better / simpler alternative to using GL = OpenTK.Graphics.Deprecated.GL?
- Do you know whether C++/CLI, Boo, IronPython, IronRuby have anything equivalent to the "using" directive? F# and VB.Net have, so it's not a problem.

Re: [OpenGL] Deprecated methods - how to treat them?

1.1 & 1.2) Not a solution.
1.3) Sounds interesting; individual warnings can be disabled, so treating warnings as errors is not a problem. I've never tried this, but does it work to flag a single value in an enum as deprecated too? So for example calling GL.Enable( EnableCap.Light0 ); would generate a warning, but GL.Enable( EnableCap.DepthTest ); does not.
1.4) The flaws with this might be that it's non-obvious and it cannot eliminate deprecated tokens from enums. Do you have any numbers on how much faster the GL ProcAddresses are loaded without the legacy stuff? But this certainly is a good idea too.

Re: [OpenGL] Deprecated methods - how to treat them?

1.3) Indeed, you can mark individual enum values as Obsolete. Never thought of this before, but it opens up some interesting possibilities! You can disable [Obsolete] warnings with #pragma warning disable 0618 or the "/nowarn:0618" compiler option. Unfortunately, there is no way to disable this warning for e.g. OpenTK.Graphics but leave it intact for your code, unless you manually pepper the code with "#pragma warning disable/restore". In that light, 1.4 might be less annoying, as it's a "do and forget" solution.
1.4) Indeed, this requires more setup on the user side (1.3 is a mere warning by default, while 1.4 will cause compilation to fail unless you add the necessary declarations). It also adds the potential for future breakage, once more methods are marked as deprecated. The cost of the extra assembly probably outweighs the benefit of fewer GetProcAddress calls.
I don't have exact numbers, but the number of deprecated functions is around 250 (judging from the size of the dll).

Re: [OpenGL] Deprecated methods - how to treat them?
As far as I can see, the lifetime of a method in the deprecation model looks like this: Now, we have two options: we can say that OpenTK will always track the latest version of the specs, treating the first four categories as core and the rest as deprecated (whatever we decide that "deprecated" means for us); or we can move OpenTK.Graphics outside of OpenTK and have several versions side by side: When you create a new application, you pick a target OpenGL version and stick with that, without fear that a future spec revision (and subsequent OpenTK release) might offer a different OpenGL API and break your code. Edit: Both of these would, imo, cause minimum interference, but they feel rather crude in some ways and doubtless have other issues. Just throwing ideas out. :o) Keep 'em coming. Unfortunately (or fortunately, depending on your point of view) .Net interfaces cannot be influenced by the consuming application. This means that the OpenTK.Graphics API is fixed as soon as OpenTK is compiled. If you need two different APIs (e.g. one with deprecated functions, one without), you have to use conditional compilation on the OpenTK side and allow the consuming application to pick which of the dlls it prefers. This only leaves your "two binaries" model as an option, which indeed works. This is somewhat similar to model 1.4, with the difference that you have two complete dlls (one with deprecated functions, one without) and the consumer picks one. Model 1.4, on the other hand, offers one complete dll (without deprecated functions) and one "addon" (for the deprecated ones) and the consumer uses either the first or both. Your idea is superior, in that the user can pick the correct dll and never need to change a single line of code. The cost is a more complicated compilation model for OpenTK (needs two passes, one for each binary, with different options) and slightly increased download size. Re: [OpenGL] Deprecated methods - how to treat them? 
It's hard to make a call here. On one side I'd like to see the legacy stuff move to its own .dll and R.I.P., but as long as enums still contain legacy tokens the split is not complete (see the GL.Enable example above). #define or [Obsolete] both can deal with enums much better. #define probably results in the lightest .dll after building OpenTK and can handle future deprecations better than [Obsolete] or creating a 3rd .dll; however, it will turn GL.cs from inconvenient-legible to painfully-legible.

Re: [OpenGL] Deprecated methods - how to treat them?

I'd say GL.cs left the inconvenient-legible state a few revisions ago, when it gained documentation and debugging functionality. As of right now, it weighs in at 5727KB of source and has a 50% chance of making Visual Studio hang. Better check GLDelegates.cs instead; it contains the same information but it's much more manageable.

Re: [OpenGL] Deprecated methods - how to treat them?

Well, if that implies you have no objections to #defines, why not? IIRC it was one of the very first suggestions for how to handle the problem, but I don't remember why it was abandoned. It would also have the advantage that all CLS-compliant overloads could be tied to a conditional, so you can compile OpenTK to use uint everywhere and only contain GL 3.1 functions & tokens. (No intent to start another CLS-compliance usefulness discussion, but I would not object to adding this manually to OpenTK.Audio to reduce overload choices.)

Re: [OpenGL] Deprecated methods - how to treat them?

No objections on that front. I just wish to gather more information before committing to a solution. I will push this functionality to trunk shortly, so we can play with actual code instead of theory.

Re: [OpenGL] Deprecated methods - how to treat them?

This is probably selfish of me, as I'm new to OpenGL and would like to start just coding for OpenGL 3.0/3.1. It would make my life easier if all I had to call was, say, an OpenGL v3.+ dll, so that I would only be dealing with v3.+.
Therefore I'd be less likely to get confused, and hopefully it would lessen the learning process at the same time, as I wouldn't be using previous OpenGL stuff by mistake.
http://www.opentk.com/node/910
I am trying to get a tweet with location information using Python. Although it ran, I was able to get tweets matching queries, but I could not get tweets containing location information.

Source code:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import tweepy
import csv

consumer_key = ""
consumer_secret = ""
access_key = ""
access_secret = ""

tweet_data = []

# CSV output
with open('tweets_20171225.csv', 'a', newline='', encoding='utf-8') as f:
    writer = csv.writer(f, lineterminator='\n')
    writer.writerow(["id", "created_at", "geo", "text"])
    writer.writerows(tweet_data)

What I tried: I deleted count = 100, items(100), or (api.search, geocode=""). I hope you can tell me what is wrong. I would like to get only tweets with location information.

Answer # 1

Thank you for pointing out. As you said, it seemed to be because there were few tweets with positioning.
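The accepted explanation (very few tweets actually carry coordinates) can be illustrated without any API calls. A sketch using plain dicts shaped like Twitter status objects, since real tweepy requests need credentials and network access:

```python
def filter_geotagged(tweets):
    """Keep only tweets that carry location data.

    `tweets` is any iterable of dicts shaped like the Twitter API's
    status objects; only the `geo`/`coordinates` keys are examined.
    """
    return [t for t in tweets
            if t.get("geo") is not None or t.get("coordinates") is not None]

# Stand-in data: most tweets have no location attached, which is why
# a geocode search often returns far fewer results than expected.
sample = [
    {"id": 1, "text": "hello", "geo": None, "coordinates": None},
    {"id": 2, "text": "at the station",
     "geo": {"type": "Point", "coordinates": [35.68, 139.76]},
     "coordinates": None},
    {"id": 3, "text": "lunch", "geo": None, "coordinates": None},
]

print([t["id"] for t in filter_geotagged(sample)])  # [2]
```

Filtering client-side like this, after a broad search, is often the practical answer when only a small fraction of results are geotagged.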
https://www.tutorialfor.com/questions-57375.htm
Repository::Simple::Type::Value - Abstract base class for value types

If you are just a casual user of Repository::Simple, then the nature of this class isn't a concern. However, if you want to extend the functionality of Repository::Simple, then you may be interested in this class. To create a value type, subclass this class and implement methods as appropriate. Below are listed the expected inputs/outputs for each method and the nature of the default implementation, if one is provided. Your type should provide a well-documented constructor.

This method MUST be implemented by the subclass. It should return a short string naming the class. This name should be in "ns:name" form, as namespaces are an intended feature for implementation in the future.

Given a scalar value, this method should throw an exception if the value is not acceptable for some reason. If the value is acceptable, the method must not throw an exception. If not defined, all input is considered acceptable.

Given a flat scalar value, this method may transform the value into the representation to be accessed by the end-user, and return that as a scalar (possibly a reference to a complex type). For example, if this type represents a DateTime object, then the method will take some string-formatted date and parse it into a DateTime object. This method will be called whenever loading the value from storage.

Given the end-user representation of this type (possibly a reference to a complex type), this method may transform the value into a scalar value for storage and return it. For example, if this type represents a DateTime object, then the method should return a string representation of the DateTime object. This method will be called whenever saving the value back to storage.
http://search.cpan.org/~hanenkamp/Repository-Simple-0.06/lib/Repository/Simple/Type/Value.pm
1. Revision History

1.1. Revision 0 - June 17th, 2019

Initial release.

2. Motivation

Many codebases write a version of a small utility function converting an enumeration to its underlying type. The reason for this function is very simple: applying static_cast (or similar) to change an enumeration to its underlying type makes it harder to quickly read and maintain places where the user explicitly converts from a strongly-typed enumeration to its underlying value. For the purposes of working with an untyped API or similar, casts just look like any old cast, making it harder to read code and potentially incorrect when enumeration types are changed from signed / unsigned or similar. Much of the same rationale is why this is Item 10 in Scott Meyers' Effective Modern C++.

Around Christmas of 2016, the number of these function invocations for C++ was around 200, including both to_underlying/to_underlying_type/toUtype (the last in that list being the way it was spelled by Scott Meyers). As of June 17th, 2019, the collective hits on GitHub and other source engines total well over 1,000, disregarding duplication from common base frameworks such as the realm mobile app database and more. The usefulness of this function appears in loggers for enumerations, casting for C APIs, stream operations, and more. We are seeing an explosive move and growth in the usage of Modern C++ utilities, and this growth clearly indicates that the foresight and advice of Scott Meyers is being taken seriously by the full gamut of hobbyist to commercial software engineers. Therefore, it would seem prudent to make the spelling and semantics of this oft-reached-for utility standard in C++.

Typical casts can also mask potential bugs from size/signed-ness changes and hide programmer intent. For example, going from this code,

enum class ABCD {
    A = 0x1012,
    B = 0x405324,
    C = A & B
};

// sometime later ...
void do_work(ABCD some_value) {
    // no warning, no visual indication,
    // is this what the person wanted,
    // what was the original intent in this
    // 'harmless' code?
    internal_untyped_api(static_cast<int>(some_value));
}

To this code:

#include <cstdint>

// changed enumeration, underlying type
enum class ABCD : uint32_t {
    A = 0x1012,
    B = 0x405324,
    C = A & B,
    D = 0xFFFFFFFF // !!
};

// from before:
void do_work(ABCD some_value) {
    // no warning, no visual indication,
    // is this what the person wanted,
    // what was the original intent in this
    // 'harmless' code?
    internal_untyped_api(static_cast<int>(some_value));
}

is dangerous, but the static_cast is seen by the compiler as intentional by the user. The static_cast<int> here is a code smell, because the cast is the wrong one for the enumeration. If the internal untyped API takes an integral value larger than the size of int and friends, then this code might very well pass a bit pattern that will be interpreted as the wrong value inside the API, too. Of course, this change does not trigger warnings or errors: a static_cast is a declaration of intent that says "I meant to do this cast", even if that cast was done before any changes or refactoring was performed on the enumeration. Doing it the right way is also cumbersome:

void do_work(ABCD some_value) {
    // will produce proper warnings,
    // but is cumbersome to type
    internal_untyped_api(static_cast<std::underlying_type_t<ABCD>>(some_value));
}

It is also vulnerable to the parameter's type changing from an enumeration to another type that is convertible to an integer. Because it is still a static_cast, unless someone changes the parameter's type while also deleting the old cast, that code will still compile:

void do_work(OtherEnumeration value) {
    // no warnings, no errors, ouch!
    internal_untyped_api(static_cast<std::underlying_type_t<ABCD>>(value));
}

We propose an intent-preserving function used in many codebases across C++, called to_underlying, to be used with enumeration values.

3.
Design

to_underlying completely avoids all of the above-mentioned problems related to code reuse and refactoring. It makes it harder to write bugs when passing strongly-typed enumerations into untyped APIs such as C code and similar. It only works on enumeration types. It will cast the enumeration to its integral representation with a static_cast to std::underlying_type_t<T>. This means that the value passed into the function provides the type information, and the type information is provided by the compiler, not by the user. This makes it easy to find conversion points for "unsafe" actions, reducing search and refactoring area. It also puts the cast inside of a utility function, meaning that warnings relating to size and signed-ness differences can still be caught in many cases, since the result's usage comes from a function, not from an explicitly inserted user cast.

#include <utility>

void do_work(MyEnum value) {
    // changes to MyEnum's underlying type are tracked automatically,
    // proper warnings for signed/unsigned mismatch,
    // and ease-of-use!
    internal_untyped_api(std::to_underlying(value));
}

4. Proposed Wording

The wording proposed here is relative to [n4800].

4.1. Proposed Feature Test Macro

The proposed library feature test macro is __cpp_lib_to_underlying.

4.2. Intent

The intent of this wording is to introduce one function into the <utility> header, called to_underlying. If the input to the function is not an enumeration, then the program is ill-formed.

4.3. Proposed Library Wording

Append to §17.3.1 General [support.limits.general]'s Table 35 one additional entry.

Add the following into §20.2.1 Header <utility> synopsis [utility.syn]:

Add a new section §20.2.7 Function template to_underlying [utility.underlying]:

namespace std {
    template <typename T>
    constexpr std::underlying_type_t<T> to_underlying(T value) noexcept;
}

1 Constraints: T shall satisfy std::is_enum_v<T>.

2 Returns: static_cast<std::underlying_type_t<T>>(value).

5.
Acknowledgements

Thanks to Rein Halbersma for bringing this up as part of the things that would make programming in his field easier, and to the others who chimed in. Thanks to Walter E. Brown for the encouragement to Rein Halbersma to get this paper moving.
http://www.open-std.org/JTC1/SC22/wg21/docs/papers/2019/p1682r0.html
Tiny python docstring tip

Tomer Keren

When defining interfaces in python using the abc.ABC metaclass, sometimes it gets pretty annoying to have so many empty methods, waiting to be filled in.

import abc

class Requester(abc.ABC):
    @abc.abstractmethod
    def get(self, endpoint: str) -> Response:
        """Sends a GET request."""
        pass

    @abc.abstractmethod
    def post(self, endpoint: str, params: dict) -> Response:
        """Makes a POST request."""
        pass

All these uses of pass always felt pretty ugly to me, and luckily there is a solution! Because docstrings are simply a string expression at the start of a function - you can just let go of the pass!

import abc

class Requester(abc.ABC):
    @abc.abstractmethod
    def get(self, url: str, url_params: dict) -> Response:
        """Sends a GET request."""

    @abc.abstractmethod
    def post(self, url: str, url_params: dict) -> Response:
        """Makes a POST request."""

Now isn't that so much nicer?

I would rather use stubs, as explained here: github.com/python/mypy/wiki/Creati...

Stubs and abstract classes aren't the same thing. Stubs annotate existing classes/interfaces/methods, helping mypy make static checks, while abstract classes declare new constructs, used to later be inherited from and implemented (with only runtime checks).
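The trick works because a docstring is itself a complete statement, so the function body is never empty. A quick self-contained check (Greeter and English are illustrative names, not from the post):

```python
import abc

class Greeter(abc.ABC):
    @abc.abstractmethod
    def greet(self) -> str:
        """Return a greeting."""  # the docstring is the entire body - no pass needed

class English(Greeter):
    def greet(self) -> str:
        return "hello"

# The docstring is still attached to the method as __doc__.
print(Greeter.greet.__doc__)  # Return a greeting.
print(English().greet())      # hello

# And the class remains properly abstract.
try:
    Greeter()
except TypeError:
    print("Greeter is abstract")
```

Nothing about the abstract-method machinery changes; only the redundant pass goes away.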
https://dev.to/tadaboody/tiny-python-docstring-tip-5f2k
sub ret_list {
    return $_[0..$#_];
}

I'd like to give you a good trouting, but I'm not certain if you did this on purpose or not. :-) The "proper" way to take a slice is as follows:

sub ret_slice {
    return @_[0..$#_];
}

This will return the desired result. I think a better example to illustrate your point (lists vs. arrays) would have been this:

# the data
our @zot = qw(apples Ford perl Jennifer);

# the output
print "Func Context RetVal \n",
      "---- ------- ------ \n";

{ # our function
    my @list   = &ret_std( @zot );
    my $scalar = &ret_std( @zot );
    print "Std  LIST   @{list} \n",     # prints 'apples Ford perl'
          "Std  SCALAR ${scalar} \n\n"; # prints 3
}

{ # a poorly-written function
    my @list   = &ret_bad( @zot );
    my $scalar = &ret_bad( @zot );
    print "Bad  LIST   @{list} \n",     # prints 'apples Ford perl'
          "Bad  SCALAR ${scalar} \n\n"; # prints 'perl'
}

{ # a better function
    my @list   = &ret_good( @zot );
    my $scalar = &ret_good( @zot );
    print "Good LIST   @{list} \n",     # prints 'apples Ford perl'
          "Good SCALAR ${scalar} \n\n"; # prints 'apples Ford perl'
}

# the functions

# returns full list, or number of elements
sub ret_std {
    my @foo = @_[0..2];
    return @foo;
}

# returns a list each time, but how long, and which parts??
sub ret_bad {
    return @_[0..2];
}

# the "proper" function (from perldoc -f wantarray)
# returns the full list, as a space-delimited scalar or list
sub ret_good {
    my @bar = @_[0..2];
    return (wantarray()) ? @bar : "@bar";
}

I apologize for the length and relative messiness of the code (this would be a good place to use write formats) but I hope I get my point across. Essentially, I follow what you're saying and you raise several crucial issues. Most importantly, PAY ATTENTION to A) where the return value(s) from your function are being used, and B) how your function is delivering those return values. Is it clear, or at least documented? &ret_bad() in particular scares me, I would hate to have a library full of functions like that.
"Huh, I got the LAST element of the list? WTF?" I hope I don't come across as being snide. I understand what you are trying to say and it was definitely a very thoughtful post, and should serve as a warning to us all. Thank you. :-) Patience, meditation, and good use of the scalar function will see us through. Alakaboo

In reply to (Slice 'em and Dice 'em) RE: Arrays are not lists by mwp in thread Arrays are not lists by tilly
http://www.perlmonks.org/?parent=26326;node_id=3333
How to use StringBuilder, StringBuffer and why you should use them. This video also unveils the mystery of formatting strings with printf() and related methods; vital skills for any Java course or aspiring software developer. When the video is running, click the maximize button in the lower-right-hand corner to make it full screen.

Code for this tutorial:

public class App {
    public static void main(String[] args) {
        // Inefficient
        String info = "";
        info += "My name is Bob.";
        info += " ";
        info += "I am a builder.";
        System.out.println(info);

        // More efficient.
        StringBuilder sb = new StringBuilder("");
        sb.append("My name is Sue.");
        sb.append(" ");
        sb.append("I am a lion tamer.");
        System.out.println(sb.toString());

        // The same as above, but nicer ....
        StringBuilder s = new StringBuilder();
        s.append("My name is Roger.")
         .append(" ")
         .append("I am a skydiver.");
        System.out.println(s.toString());

        ///// Formatting //////////////////////////////////

        // Outputting newlines and tabs
        System.out.print("Here is some text.\tThat was a tab.\nThat was a newline.");
        System.out.println(" More text.");

        // Formatting integers
        // %-10d means: output an integer in a space ten characters wide,
        // padding with space and left-aligning (%10d would right-align)
        System.out.printf("Total cost %-10d; quantity is %d\n", 5, 120);

        // Demo-ing integer and string formatting control sequences
        for(int i=0; i<20; i++) {
            System.out.printf("%-2d: %s\n", i, "here is some text");
        }

        // Formatting floating point values
        // Two decimal places:
        System.out.printf("Total value: %.2f\n", 5.6874);

        // One decimal place, left-aligned in 6-character field:
        System.out.printf("Total value: %-6.1f\n", 343.23423);

        // You can also use the String.format() method if you want to retrieve
        // a formatted string.
        String formatted = String.format("This is a floating-point value: %.3f", 5.12345);
        System.out.println(formatted);

        // Use double %% for outputting a % sign.
        System.out.printf("Giving it %d%% is physically impossible.", 100);
    }
}

Output:

My name is Bob. I am a builder.
My name is Sue. I am a lion tamer.
My name is Roger. I am a skydiver.
Here is some text.	That was a tab.
That was a newline. More text.
Total cost 5         ; quantity is 120
0 : here is some text
1 : here is some text
2 : here is some text
3 : here is some text
4 : here is some text
5 : here is some text
6 : here is some text
7 : here is some text
8 : here is some text
9 : here is some text
10: here is some text
11: here is some text
12: here is some text
13: here is some text
14: here is some text
15: here is some text
16: here is some text
17: here is some text
18: here is some text
19: here is some text
Total value: 5.69
Total value: 343.2
This is a floating-point value: 5.123
Giving it 100% is physically impossible.
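The title also mentions StringBuffer, which the tutorial code never shows. A short sketch of the difference: StringBuffer has essentially the same append/insert/reverse API as StringBuilder, but its methods are synchronized, making it the safer (if slightly slower) choice when several threads share one builder. The names below are illustrative, not from the tutorial.

```java
public class BufferDemo {
    public static void main(String[] args) {
        // StringBuffer: same fluent API as StringBuilder,
        // but every method is synchronized (thread-safe).
        StringBuffer buf = new StringBuffer();
        buf.append("My name is Pat.");
        buf.append(" ");
        buf.append("I am a juggler.");
        System.out.println(buf.toString());

        // insert() and reverse() work on both classes.
        StringBuilder sb = new StringBuilder("name: Bob");
        sb.insert(0, "My ");
        System.out.println(sb);           // My name: Bob
        System.out.println(sb.reverse()); // boB :eman yM
    }
}
```

For single-threaded code like the examples above, StringBuilder is the usual choice; reach for StringBuffer only when the same buffer is mutated from multiple threads.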
https://caveofprogramming.com/java-video/java-for-complete-beginners-video-part-20-stringbuilder-stringbuffer-formatting-strings.html
Rob Bazinet (RB): So, Chris, tell us who you are and how are you involved in Unity?

Chris Tavares (CT): My name is Chris Tavares. I'm a senior software developer in Microsoft's patterns & practices group. I am currently the lead developer on Enterprise Library 4 and the Unity Application Block. I also wrote the vast majority of the Unity code, so Unity is pretty much my fault. I've been at patterns & practices for a little over two years. Previous to coming to Microsoft, I've bounced around the industry doing contracting, shrinkwrap development, and even a little bit of embedded software waaay back in the 90's.

RB: What is the Unity Application Block?

CT: For example, in a banking system, you may have an object that manages account transfers. This object needs to get hold of the individual account objects, plus there's also security rules and auditing requirements. A common implementation could look something like this:

public class AccountTransfer
{
    public void TransferMoney(int sourceAccountNumber, int destAccountNumber, decimal amount)
    {
        Account sourceAccount = AccountDatabase.GetAccount(sourceAccountNumber);
        Account destAccount = AccountDatabase.GetAccount(destAccountNumber);

        sourceAccount.Withdraw(amount);
        destAccount.Deposit(amount);

        Logger.Write("Transferred {0} from {1} to {2}",
            amount, sourceAccountNumber, destAccountNumber);
    }
}

Granted, this is pretty terrible code (no transaction management, for example), but work with me here. ;-)

This is pretty straightforward, but it's also highly coupled to its environment. The calls to the global AccountDatabase class mean that you can't even compile this by itself, let alone test it. And what happens if the accounts are from two different banks? Similarly, the global Logger means that you cannot use this class in the absence of not just a logger, but that specific global logger class. This results in a lot of pain when trying to write unit tests, and longer term it greatly limits the flexibility.
The principle of Separation of Concerns says a class shouldn't be doing multiple things. Here, the class violates it: it hard-wires not just the details of how to transfer money, but also how to get accounts out of the database and how to write log messages. To restore that flexibility, those concerns should be in separate objects, which are then passed into the object that uses them, like this:

public class AccountTransfer
{
    private IAccountRepository accounts;
    private ILogger logger;

    public AccountTransfer(IAccountRepository accounts, ILogger logger)
    {
        this.accounts = accounts;
        this.logger = logger;
    }

    public void TransferMoney(int sourceAccountNumber, int destAccountNumber, decimal amount)
    {
        Account sourceAccount = accounts.GetAccount(sourceAccountNumber);
        Account destAccount = accounts.GetAccount(destAccountNumber);

        sourceAccount.Withdraw(amount);
        destAccount.Deposit(amount);

        logger.Write("Transferred {0} from {1} to {2}",
            amount, sourceAccountNumber, destAccountNumber);
    }
}

This gets us closer. Now we don't depend on external global objects, just on the instances passed in the constructor. This class can now be tested in isolation, and can even talk to different banks simply by passing in a different implementation of IAccountRepository. However, now there's a new cost. The creators of AccountTransfer now have to know how to create the needed dependent objects. What account repository do you use? Which logger? If these are set up in configuration, for example, now you've got code that's dependent on your configuration everywhere, and you're back at square one.

That's where the Dependency Injection container comes in. It is a smart object factory. You tell the container how to resolve the dependencies of a particular object.
Using Unity, for example, you could configure the container like this (using the API; there's also support for external configuration files):

IUnityContainer container = new UnityContainer();
container.RegisterType<IAccountRepository, ContosoBankRepository>();
container.RegisterType<ILogger, DatabaseLogger>();

What this does is tell the container "If any object we resolve has a dependency on an instance of IAccountRepository, create a ContosoBankRepository and use that. If anyone needs an ILogger, give them a DatabaseLogger." Now, you can ask the container to give you an instance of an object that has dependencies, like this:

container.Resolve<AccountTransfer>();

The Resolve call tries to create an instance of AccountTransfer. The container sees that the constructor requires an IAccountRepository and an ILogger, so it creates those (using the concrete types previously specified) and passes them to the constructor.

This use of the container centralizes all the wiring of your application. This provides (typically) a single place in your application that deals with hooking objects together, and frees the individual objects of the responsibility for construction of object graphs. The resulting approach really pays off in both testability and flexibility. And if your dependencies change as your classes evolve, that doesn't impact the creation of those objects, just the configuration of the container.

RB: Is Unity part of the Enterprise Library or does it stand on its own? There was some information I had read in the past that Microsoft Dependency Injection Block would be released as part of Enterprise Library 4.0.

CT: Unity stands on its own. Enterprise Library 4 is built on top of parts of Unity, and you can use Unity to access the functionality of Enterprise Library. A small bit of history should hopefully clear up the confusion.
Back when Enterprise Library 2 and the Composite UI Application Block (CAB) were released, under the hood of both of them was a small library called ObjectBuilder. ObjectBuilder was a framework used to build Dependency Injection containers. Both CAB and Enterprise Library consumed OB, but OB was its own independent thing that was later shipped separately[1]. Part of Unity is an updated version of ObjectBuilder.

Enterprise Library 4 continues to use ObjectBuilder the way it did before: to read configuration and build up the appropriate Enterprise Library objects. We are also introducing a new way to access the Enterprise Library functions by resolving them directly through a container, rather than having that mechanism hidden. Scott Densmore has a blog post[2] that goes into more detail about what we're planning there.

So, to reiterate: Unity is a standalone block. Enterprise Library uses parts of Unity or can be used with Unity. To save downloaders trouble, Enterprise Library includes the Unity binaries, so if all you care about is using Enterprise Library you're good to go without anything else to install.

RB: Under what circumstances would a developer choose to use Unity?

CT: The first question to decide is whether you want or need to do dependency injection. If so, I think Unity is a good choice, but we also suggest looking at the other containers in this space. Scott Hanselman has started a list of .NET DI container projects[3].

RB: How is Unity different from other DI containers and how does it stack up against them?

CT: Speaking from the patterns & practices pulpit, I have to be very careful on questions like this. I don't want to give the impression of either endorsing or discouraging the use of outside projects. We strongly recommend that everyone evaluate their options and choose the container that meets their needs best, either Unity or one of the existing open source projects.
RB: There are quite a few DI containers already; what was the motivation for your team in creating Unity?

CT: Patterns & practices has been giving guidance around dependency injection for a while now. CAB, the Mobile Client Software Factory, the Smart Client Software Factory, the Web Client Software Factory, and Enterprise Library have all used DI in various ways. That last word is the killer: "various". Each project, although they were all built on ObjectBuilder, did DI differently and incompatibly. Having an explicit, fully featured container that we can use unencumbered allows us to provide better guidance around DI and container-based architectures.

There are other reasons as well. We have customers that will not touch open source software for whatever reason. Having a DI container from Microsoft makes them feel safer and lets them get the advantages. It also puts them in a good position to later switch containers if they wish. Another goal is to raise the profile of Dependency Injection both inside and outside of Microsoft. Having a container from Microsoft helps promote DI to the general Microsoft .NET developer community and to internal Microsoft developers as well.

RB: What would you suggest is the best way for a developer or team to get started with Unity?

CT: Grab the download, install it, read through the docs, and take a look at the simple quick starts we shipped.

RB: Are there any quick starts available for Unity with best-practice and sample code?

CT: We have a small example of a Windows Forms application (the Stoplight Simulator) that uses the container to inject services. It's pretty small and quite approachable. Versions are included in C# and VB.NET.

RB: What are the future plans for Unity?

CT: Nothing is set in stone, of course. My personal goals are to add a few more features (the ability to intercept method calls is highest on the list), and to get future patterns & practices assets to standardize on container-based architectures.
A better configuration file system would be nice. In the longer term, I'd love to get some or all of these concepts into the core platform in a reasonable way.

RB: Chris, thank you for your time today and the great information on Unity.

Taken from the Unity web site, Unity is:

Developers should look at the Introduction to Unity to get a good overview of getting started with Unity. Additional information about the Unity Application Block can be found at the Patterns and Practices web site, and it can be downloaded from the CodePlex web site. Readers can follow Chris at his blog. Dependency Injection coming to Enterprise Library 4 was covered on InfoQ in December 2007, in an article called Microsoft Enterprise Library 4.0 will get a dose of Dependency Injection.

[0] [1] [2] [3]
http://www.infoq.com/news/2008/04/microsoft-unity/
Hey, complete beginner programmer here, learnt about user input today and decided to mess about. All codes with integers and doubles worked perfectly, but I'm stuck with string. It shows up as incorrect when I write in "Yes", with or without brackets. I'm guessing there's more to it than just assigning "Yes" to the string ans variable. Any help would be appreciated!

Code Java:

package MessingAbout;

import java.util.Scanner;

public class BlueSky {
    public static void main(String[] args) {
        Scanner rocky = new Scanner(System.in);
        String que, ans;
        ans = "Yes";
        System.out.println("Is the sky blue?");
        que = rocky.nextLine();
        if (que == ans) {
            System.out.println("Correct!");
        } else {
            System.out.println("Incorrect!");
        }
    }
}
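For anyone hitting the same wall: in Java, == on two String variables compares references (whether they point at the same object), not character content, and Scanner.nextLine() hands back a new String object, so que == ans is false even when the text matches. String.equals compares contents. A small standalone sketch of the difference (class and variable names are just for illustration):

```java
public class StringCompareDemo {
    public static void main(String[] args) {
        // Simulates what Scanner.nextLine() gives you: a distinct String object.
        String typed = new String("Yes");
        String expected = "Yes";

        // Reference comparison: false, because they are two different objects.
        System.out.println(typed == expected);      // false

        // Content comparison: true, because the characters match.
        System.out.println(typed.equals(expected)); // true
    }
}
```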
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/33633-if-strings-%5Bsolved%5D-printingthethread.html
Binary sketch size: 3,396 bytes (of a 30,720 byte maximum)

#include <pitches.h>

That's not an error message; that is a normal message after a successful compilation. Can you please tell me how you created a new tab, because I can't find the button indicated in the tutorial to make a new tab. (But then I still wonder why the tutorial doesn't show it like that.) And indeed, the .cpp file is missing. But the pitches.h file as shown in the tutorial is not a proper library at all. As I stated, I use IDE version 1.0.4. I don't know what you mean by "dialog", but just above the window in which you type the statements of the program there are 5 buttons on the left side and 1 at the utmost right side.
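For context on why pitches.h is "not a proper library at all": in the Arduino tone tutorial it is just a header full of #define constants mapping note names to frequencies in hertz, with no implementation file, which is why no .cpp is expected to exist for it. A few representative lines (the full tutorial file defines many more notes):

```cpp
// pitches.h from the Arduino toneMelody tutorial is nothing but
// preprocessor definitions -- note frequencies in Hz -- so there is
// no matching .cpp file and nothing to "install" as a library.
#define NOTE_C4  262
#define NOTE_D4  294
#define NOTE_E4  330
#define NOTE_F4  349
#define NOTE_G4  392
#define NOTE_A4  440
#define NOTE_B4  494
```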
http://forum.arduino.cc/index.php?topic=156459.msg1174306
EXP(3)                    BSD Programmer's Manual                    EXP(3)

NAME
     exp, expm1, log, log10, log1p, pow - exponential, logarithm, power
     functions

SYNOPSIS
     #include <math.h>

     double exp(double x);
     double expm1(double x);
     double log(double x);
     double log10(double x);
     double log1p(double x);
     double pow(double x, double y);

DESCRIPTION
     The exp() function computes the exponential value of the given argument
     x.

     The expm1() function computes the value exp(x)-1 accurately even for
     tiny argument x.

     The log() function computes the value of the natural logarithm of the
     argument x.

     The log10() function computes the value of the logarithm of argument x
     to base 10.

     The log1p() function computes the value of log(1+x) accurately even for
     tiny argument x.

     The pow() function computes the value of x to the exponent y.

ERROR (due to Roundoff etc.)
     exp(x), log(x), expm1(x) and log1p(x) are accurate to within an up, and
     log10(x) to within about 2 ups; an up is one Unit in the Last Place.
     The error in pow(x, y) is below about 2 ups when its magnitude is
     moderate, but increases as pow(x, y) approaches the over/underflow
     thresholds until almost as many bits could be lost as are occupied by
     the floating-point format's exponent field; that is 8 bits for VAX D
     and 11 bits for IEEE 754 Double. No such drastic loss has been exposed
     by testing; the worst errors observed have been below 20 ups for VAX D,
     300 ups for IEEE 754 Double. Moderate values of pow() are accurate
     enough that pow(integer, integer) is exact until it is bigger than
     2**56 on a VAX, 2**53 for IEEE 754.

RETURN VALUES
     These functions will return the appropriate computation unless an error
     occurs or an argument is out of range. The functions exp(), expm1() and
     pow() detect if the computed value will overflow and set the global
     variable errno to ERANGE. Previous implementations of pow may have
     defined x**0 to be undefined in some or all of these cases. Here are
     reasons for returning x**0 = 1 always:

SEE ALSO
     infnan(3)

HISTORY
     The exp(), log() and pow() functions appeared in Version 6 AT&T UNIX.
     A log10() function appeared in Version 7 AT&T UNIX. The log1p() and
     expm1() functions appeared in 4.3BSD.

4th Berkeley Distribution       April 19, 1994                            2
http://modman.unixdev.net/?sektion=3&page=pow&manpath=4.4BSD-Lite2
Quote from nhg1
This mod is epic, but there are a few things missing: giving your mobs armor and weapons. Both the skeleton and the zombie can wear armor and they both can hold weapons -- skeletons bows, and zombies pretty much everything they pick up. Also, where are the pig zombie parts? You could use golden nuggets as their parts. It would also be nice to give them bat wings, but I'm not too sure what you could give them so they could get that.

Quote from Bulbaswat
So any answer on the white texture issues?

return (new Random().nextInt(2) == 0 ? "alive" : "dead");

Quote from Bulbaswat
No, I have no texture packs. Would it help if I post a copy of the files I have for the mod? See if something is missing or the like?

Quote from Badgyr
A couple things that would make this mod pretty awesome would be the option to give minions armour and weapons, and some more commands other than sit and follow. Maybe toss in some actual sitting animations just for shiggles.

Quote from Jade Knightblazer
Read a few posts up -- it has already been stated that minions could equip weapons and armor.

Quote from sirolf2009
the next version will allow minions to hold armor and weapons.

Quote from Jade Knightblazer
Sorry, here is a clearer post -- you weren't missing anything glaringly obvious in-game.

God Mod RP-Craft
And remember: Don't fear the (c)reaper!

Quote from AznaktaX
It's alive.......IT'S ALIVE!!!! IT'S AAALIIIIIVEEEEE!!!! (Sorry, I always wanted to do that!) So, are the necromancer villagers just villagers with a different texture?

the next version will allow minions to hold armor and weapons.

not really... i can't seem to re-create the bug. are you using a texture pack?

if you didn't extract it shouldn't be changed at all... you didn't extract it did you?
When the two mods are combined, the soul in a jar and the bottle of blood are turned into pet bats, making it impossible to bring life to your creations. Anyway, awesome mod!!!

1st = Make this mod multiplayer.
2nd = Once you kill a player, it will drop a special kind of item (ONLY WITH THE PSYCHE).
3rd = Mix the player with other mobs and voilà: a Frankenstein or a Notch-creeper.
http://www.minecraftforum.net/forums/mapping-and-modding/minecraft-mods/1286889-1-6-4-the-necromancy-mod-1-5-necro-api-1-2?page=7&cookieTest=1
Hi list!

There is a bug tracked in Red Hat bugzilla

The problem is best demonstrated by this Makefile snippet:

    all:;@echo e\
    cho

With this make invocation, it works as intended:

    $ make 'SHELL=/bin/sh'
    echo

But when the SHELL variable contains quotes, it fails:

    $ make 'SHELL="/bin/sh"'
    e
    /bin/sh: line 1: cho: command not found
    make: *** [all] Error 127

The problem is that when SHELL contains quotations etc., /bin/sh is invoked, and the whole command is passed through that. But the outer shell then destroys the backslash-newline sequences. The solution is to singly-quote these. The attached patch against make 3.81 does this. Testsuite passes.

Comments welcome.

Thanks,
PM

--- make-3.81-orig/job.c	2007-02-21 19:10:54.000000000 +0100
+++ make-3.81-pm/job.c	2007-02-22 18:13:59.000000000 +0100
@@ -2706,7 +2706,7 @@
     unsigned int line_len = strlen (line);
     char *new_line = (char *) alloca (shell_len + (sizeof (minus_c) - 1)
-                                      + (line_len * 2) + 1);
+                                      + (line_len * 4) + 1);
     char *command_ptr = NULL; /* used for batch_mode_shell mode */
 # ifdef __EMX__ /* is this necessary? */
@@ -2740,9 +2740,10 @@
 #endif
 	  if (PRESERVE_BSNL)
 	    {
-	      *(ap++) = '\\';
+	      *(ap++) = '\'';
 	      *(ap++) = '\\';
 	      *(ap++) = '\n';
+	      *(ap++) = '\'';
 	    }
 	  ++p;
https://lists.gnu.org/archive/html/bug-make/2007-02/msg00040.html
The QXmlNodeModelIndex class identifies a node in an XML node model subclassed from QAbstractXmlNodeModel.

#include <QXmlNodeModelIndex>

This class is not part of the Qt GUI Framework Edition. Note: All functions in this class are reentrant. This class was introduced in Qt 4.4.

The QXmlNodeModelIndex class identifies a node in an XML node model subclassed from QAbstractXmlNodeModel. QXmlNodeModelIndex is an index into an XML node model.

Identifies the specific node comparison operator that should be used.

Typedef for QList<QXmlNodeModelIndex>.

Identifies a kind of node. Note that the optional XML declaration at the very beginning of the XML document is not a processing instruction. See also QAbstractXmlNodeModel::kind().

Default constructor. Creates an item that is null. See also isNull().

Standard copy constructor. Creates a QXmlNodeModelIndex instance that is a copy of other.

Returns the second data value. The node index holds two data values; data() returns the first one. See also data().

Returns the first data value. The node index holds two data values; additionalData() returns the second one. See also additionalData().

Returns the first data value as a void* pointer. See also additionalData().

Returns true if other is the same node as this.

Returns true if this node is the same as other. This operator does not compare values, children, or names of nodes. It compares node identities, i.e., whether two nodes are from the same document and are found at the exact same place.
https://doc.qt.io/archives/4.6/qxmlnodemodelindex.html
An MVC controller can return many types of output to the view. In this article we will learn about the EmptyResult return type of MVC. Instead of going deep into theory, let us start with its practical implementation. To know more about the action result types, please refer to my previous article.

EmptyResult is one of the output formats in ASP.NET MVC that is shown to the client.

What is EmptyResult?

EmptyResult is a class in MVC which does not return anything to the client; it is just like a void method. EmptyResult is used when you want to execute logic inside the controller action method but do not want any result back in the view. In that situation the EmptyResult return type is very useful.

Key points
- It does not return any output to the browser.
- It shows an empty result in the browser, without adding a view.
- It does not require adding a view.

Methods of EmptyResult

The following are the methods of the EmptyResult class:
- Equals: checks whether two objects are equal.
- ExecuteResult: executes the result for the given controller context.
- Finalize: frees the memory occupied by an object, allowing another object to be allocated in the freed memory.
- GetHashCode: gets a numeric value used to identify and insert an object in a hash-based collection.
- GetType: returns the type of the current object.
- MemberwiseClone: creates a shallow copy of the current object.
- ToString: converts the current result to a string.

Step 1: Create an MVC application.
Step 3: Add a controller to the Controller folder in the created MVC application, give the class a name such as Home and click OK.

HomeController.cs

public class HomeController : Controller
{
    // GET: returns nothing to the client
    public EmptyResult EmptyData()
    {
        return new EmptyResult();
    }
}

Let's run the application and see the output.
From the above examples we have learned about the EmptyResult return type and its use.

Summary

I hope this article is useful to all readers. If you have any suggestions, please contact me.
http://www.compilemode.com/2016/04/empty-result-type-in-asp-net-mvc.html
Let's say you're building a really fancy UI, and you don't want to use plain old radio buttons in your form. You might have a list of items, and you'd like your customer to select one of them. You also don't want them to submit the form without selecting an item. Here's an example of how you can use Stimulus.js to handle the selection, put the value into an input field, and then enable the submit button.

Getting Started

I'm assuming you've set up your page to properly pull in Stimulus controllers. The Handbook has instructions if you're new and need to get started.

Build the HTML

Let's add the Stimulus annotations to our form. The <form> tag gets the controller, each list option is a target, and clicking an option invokes the selectRadioOption() function. There is another input field that I've made visible, but disabled. I imagine you would make this hidden in a production form, but I wanted you to be able to see it. The submit button is also a target, so that we can enable it once a selection has been made. I've filled in the form to order a flavor of ice cream, but you can fit this to your needs.

<h1>Order An Ice Cream Cone</h1>
<form data-controller="radio-selector">
  <ul>
    <li data-target="radio-selector.option" data-action="click->radio-selector#selectRadioOption">
      Chocolate
    </li>
    <li data-target="radio-selector.option" data-action="click->radio-selector#selectRadioOption">
      Vanilla
    </li>
    <li data-target="radio-selector.option" data-action="click->radio-selector#selectRadioOption">
      Strawberry
    </li>
  </ul>
  Your order is
  <input id="option" value="Please Select An Option Above" disabled data-target="radio-selector.input" />
  <br />
  <input type="submit" value="Finish Order" disabled data-target="radio-selector.submit" />
</form>
Setup the Stimulus Controller

Our controller needs the targets for the selector options, the form input, and the form submit button. We also need the code for our selectRadioOption() function. And that's all that radio_selector_controller.js will have.

import { Controller } from "stimulus"

export default class extends Controller {
  static targets = ["option", "input", "submit"]

  selectRadioOption(event) {
    this.optionTargets.forEach((el, i) => {
      el.classList.toggle("active", event.target == el)
    })
    this.inputTarget.value = event.target.innerText
    this.submitTarget.disabled = false
  }
}

The selectRadioOption() function gives the selected item an active class, which here only sets the background color to blueviolet. The function sets the value of our form input from the selected item's text, and then enables the submit button of the form.

In Closing

This is a small example I extracted from a more complicated form. You could add data attributes to each option for a richer UI, such as the price of the cone, or perhaps a flavor_id that you can pass on to your ordering system.

Want To Learn More?

Try out some more of my Stimulus.js Tutorials.
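If the page-setup step mentioned above is unclear, wiring the controller into a Stimulus application typically looks something like this. This is a sketch assuming a bundler-based build with manual registration (no autoloading helper); the "radio-selector" identifier follows from the radio_selector_controller.js filename convention:

```javascript
// app.js -- hypothetical entry point; registers the controller by hand.
import { Application } from "stimulus"
import RadioSelectorController from "./radio_selector_controller"

const application = Application.start()
// The identifier must match the data-controller attribute in the HTML.
application.register("radio-selector", RadioSelectorController)
```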
https://johnbeatty.co/2018/04/18/stimulus-js-tutorial-implementing-radio-selection-in-a-form/
Helps with your plugin creation: gives you a usable template and an empty UI. This is the first step toward your first plugin. You no longer need to restart QGIS when you modify your script.

The generated files: main file.py, interface.py, resources.py, interface.ui, resources.qrc (compiled with pyuic4 and pyrcc4) -- a very simplified schema.

Open Qt Designer. Load the empty UI generated with Plugin Builder and add classes. Once your UI is done:

Download PyQt4 (binary packages). If you used Plugin Builder, change the call for your UI (cf. my post).

Open your main file.py (i.e. my_first_plugin.py). In the import from PyQt4.QtGui, add QDialog and QFileDialog:
- QFileDialog is the loading/saving window
- QDialog lets you know where an action comes from

Add QDialog as a base class of your main class, then init QDialog:

from PyQt4.QtGui import QDialog, QFileDialog

class MyFirstPlugin(QDialog):
    def __init__(self):
        QDialog.__init__(self)

Import QgsMessageLog for easier debugging. Then you create a log message with:

from qgis.core import QgsMessageLog

QgsMessageLog.logMessage('bug using : ' + yourVariabletoTest)

Once QDialog is initialized, you can use self.sender(). It tells you where an action comes from (useful because you can't use args). Example: you click the "runButton" and self.dlg is your ui():

self.dlg.runButton.clicked.connect(self.saveTo)

def saveTo(self):
    if self.sender() == self.dlg.runButton:
        # do the job
        fileName = QFileDialog.getSaveFileName(self.dlg, \
            "Select output file", "", "TIF (*.tif)")

Then a window opens that lets the user select the file to save. Now, what to do with your filename? Save it in a QLineEdit field. If outRaster is a QLineEdit field, you can show the user his file choice, then read it back into a variable:

self.dlg.outRaster.setText(fileName)
outRasterVariable = self.dlg.outRaster.text()

You now have all the basics to make your own QGIS plugin.
Some tips:

try:
    from osgeo import gdal
except ImportError:
    QgsMessageLog.logMessage('Gdal is not available')

- Remember to use QgsMessageLog to debug
- Make sure your script runs in plain Python first, then merge it into your plugin
- Use self.sender() from QDialog to replace args from the UI
- To avoid bugs, use the try/except pattern
http://slides.com/nicart/qgisplugin_tutorial/fullscreen
NAME
     fclose — close a stream

SYNOPSIS
     #include <stdio.h>

     int
     fclose(FILE *stream);

DESCRIPTION
     The fclose() function dissociates the named stream from its underlying
     file or set of functions. If the stream was being used for output, any
     buffered data is written first, using fflush(3).

RETURN VALUES
     Upon successful completion 0 is returned. Otherwise, EOF is returned
     and the global variable errno is set to indicate the error. In either
     case no further access to the stream is possible.

ERRORS
     [EBADF]  The argument stream is not an open stream.

     The fclose() function may also fail and set errno for any of the errors
     specified for the routines close(2) or fflush(3).

STANDARDS
     The fclose() function conforms to ANSI X3.159-1989 ("ANSI C89").

HISTORY
     The fclose() function first appeared in Version 7 AT&T UNIX.
https://man.openbsd.org/fclose.3
Setting An Avatar/Buddy Icon With Ruby's XMPP4R API

I have been making service bots using Ruby, XMPP4R, and Jabber for some time now. I recently had someone ask if I could add a buddy icon to one of them. I had a hard time finding info on the net about doing this with XMPP4R. After some trial and error and RFC reading, I got it to work. Here is a skeleton code listing of what I did. This is also detailed at:

require 'rubygems'
require 'xmpp4r'
# allows me to use the vcard stuff
require 'xmpp4r/vcard'
# for encoding the buddy icon
require 'base64'

class JabberBot
  # user : Jabber ID
  # pass : Password
  def initialize(user, pass)
    # no debugging for now
    Jabber::debug = false

    # connect Jabber client to the server
    @client = Jabber::Client.new(Jabber::JID.new(user))
    @client.connect
    @client.auth(pass)

    # send initial presence to let the server know you are ready for messages
    @client.send(Jabber::Presence::new)

    # set vcard info
    # this is in a thread because it waits on a server response
    # the XMPP4R docs suggest placing this code inside a thread
    avatar_sha1 = nil # this gets used later on
    Thread.new do
      vcard = Jabber::Vcard::IqVcard.new
      vcard["FN"] = "My Bot"       # full name
      vcard["NICKNAME"] = "mybot"  # nickname

      # buddy icon stuff
      vcard["PHOTO/TYPE"] = "image/png"
      # open buddy icon/avatar image file
      image_file = File.new("buddy.png", "r")
      # Base64 encode the file contents
      image_b64 = Base64.b64encode(image_file.read())

      # process sha1 hash of photo contents
      # this is used by the presence setting
      image_file.rewind # must rewind the file to the beginning
      avatar_sha1 = Digest::SHA1.hexdigest(image_file.read())

      # set the BINVAL to the Base64 encoded contents of our image
      vcard["PHOTO/BINVAL"] = image_b64

      begin
        # create a vcard helper and immediately set the vcard info
        vcard_helper = Jabber::Vcard::Helper.new(@client).set(vcard)
      rescue
        # very simple error "logging" if you can call it that
        puts "#{Time.now} vcard operation timed out."
      end
    end

    # just a 'keepalive' thread to keep the jabber server in touch
    # sends a presence entity every 30 seconds
    Thread.new do
      while true do
        # saying "I'm alive!" to the Jabber server
        pres = Jabber::Presence::new

        # according to the RFC the server expects a SHA1 hash of the avatar
        # to be sent with subsequent presence messages, in the format:
        #   <x xmlns='vcard-temp:x:update'><photo>sha1-hash-of-image</photo></x>
        # append buddy icon/avatar info
        if not avatar_sha1.nil?
          # send the sha1 hash of the avatar to the server as per the RFC
          x = REXML::Element::new("x")
          x.add_namespace('vcard-temp:x:update')
          photo = REXML::Element::new("photo")
          # this is the avatar hash as computed in the vcard thread above
          avatar_hash = REXML::Text.new(avatar_sha1)
          # add text to photo
          photo.add(avatar_hash)
          # add photo to x
          x.add(photo)
          # add x to presence
          pres.add_element(x)
        end

        # send presence entity
        @client.send(pres)
        sleep 30
      end
    end # end 'keepalive'
  end # initialize
end

# the most barebones basic way to run this code
bot = JabberBot.new('username@jabber.server', 'password')
while true do
end
https://dzone.com/articles/setting-avatarbuddy-icon-rubys
Recently, Jeffrey Rosenbluth published (and showcased on Reddit) a pretty cool Haskell package called static-canvas. This package uses the free monad DSL pattern to make a DSL for programming for the HTML5 canvas, restricted to fairly simple static use cases. While you can't use this to make user interfaces, it's still potentially a pretty cool tool, and there are a few very clear examples on the GitHub readme.

As with most things involving pretty graphics or pictures, I think this would be a whole ton of fun to experiment with interactively, making it a great fit for IHaskell, an interactive notebook-based environment for Haskell. IHaskell allows the creation of "addon" packages to specify how to display various data types in its browser-based UI. These addons can render data types as text, as images, or even as HTML mixed with Javascript; they can even render them as interactive Javascript widgets that can evaluate Haskell code at will. All of this is done without GHCJS or similar Haskell-to-Javascript compilation tools.

However, these display packages have mostly been written by only a few people, those fairly closely involved with IHaskell development. As the creator of IHaskell, I'd love to have more of these packages, but I obviously can't create display instances for all existing packages, and certainly can't anticipate what people might want for their own packages or new ones. Thus, I'd love to use this very neat library as a showcase and tutorial for how to make IHaskell display packages.

In this section, I'll very briefly introduce you to the tools IHaskell provides for creating IHaskell display packages. If you'd like to get to the real meat of this tutorial, skip this, read the next section, and maybe come back here if you need to.

IHaskell internally uses a data type called Display to represent possible outputs. The Display data type looks like this:

-- In IHaskell.Display
data Display = Display [DisplayData]       -- Display just one thing.
             | ManyDisplay [Display]       -- Display several things.

In turn, the DisplayData data type from the ipython-kernel package specifies how to actually display the object in the browser:

-- In IHaskell.IPython.Display
data DisplayData = DisplayData MimeType Text

-- All the possible ways to display things.
data MimeType = PlainText
              | MimeHtml
              | MimePng Width Height  -- Base64 encoded.
              | MimeJpg Width Height  -- Base64 encoded.
              | MimeSvg
              | MimeLatex
              | MimeJavascript

For example, to output the string "Hello" in red in the browser, you might construct a value like this:

redStr :: Display
redStr = Display [textDisplay, htmlDisplay]

textDisplay :: DisplayData
textDisplay = DisplayData PlainText "Hello"

htmlDisplay :: DisplayData
htmlDisplay = DisplayData MimeHtml "<span style=\"color: red;\">Hello</span>"

You may note that Display takes a list of DisplayData values; this allows IHaskell to choose the proper display mechanism for the frontend. The frontend can be a console or the in-browser notebook, and the in-browser notebook may have different preferences for displays, so by providing different ways to render output, the best possible rendering can be chosen for each interface.

Instead of always using the data types, IHaskell.Display exports the following convenience functions:

-- Construct displays from raw strings of different types.
plain :: String -> DisplayData
html :: String -> DisplayData
svg :: String -> DisplayData
latex :: String -> DisplayData
javascript :: String -> DisplayData

-- Encode into base 64.
encode64 :: String -> Base64
decode64 :: ByteString -> Base64

-- Display images.
png :: Int -> Int -> Base64 -> DisplayData
jpg :: Int -> Int -> Base64 -> DisplayData

-- Create final Displays.
Display :: [DisplayData] -> Display
many :: [Display] -> Display

In order to create a display for some data type, we must first import the main IHaskell display module:

import IHaskell.Display

This package contains the following typeclass:

class IHaskellDisplay a where
  display :: a -> IO Display

In order to display a data type, create an instance of IHaskellDisplay for your data type – then, any expression that results in your data type will generate a corresponding display. Let's go ahead and do this for CanvasFree a from the static-canvas package.

-- Start with necessary imports.
import IHaskell.Display                   -- From the 'ihaskell' package.
import IHaskell.IPython.Types (MimeType(..))
import Graphics.Static                    -- From the 'static-canvas' package.

-- Text conversion functions.
import Data.Text.Lazy.Builder (toLazyText)
import Data.Text.Lazy (toStrict)

Now that we have the imports out of the way, we can define the core instance necessary:

-- Since CanvasFree is a type synonym, we need a language pragma.
{-# LANGUAGE TypeSynonymInstances #-}

instance IHaskellDisplay (CanvasFree ()) where
  -- display :: CanvasFree () -> IO Display
  display canvas = return $
      let src = toStrict $ toLazyText $ buildScript width height canvas
      in Display [DisplayData MimeHtml src]
    where (height, width) = (200, 600)

We can now copy and paste the examples from the static-canvas GitHub page, and see them appear right in the notebook!

{-# LANGUAGE OverloadedStrings #-}

As we play with this a little more, we see that this is a little bit unsatisfactory. Specifically, the width and the height of the resulting canvas are fixed in the IHaskellDisplay instance!
I would solve this by creating a custom Canvas data type that stores these:

data Canvas = Canvas
  { width  :: Int
  , height :: Int
  , canvas :: CanvasFree ()
  }

Then we could define an IHaskellDisplay instance that respects this width and height:

{-# LANGUAGE TypeSynonymInstances #-}

instance IHaskellDisplay Canvas where
  -- display :: Canvas -> IO Display
  display cnv = return $
    let src = toStrict $ toLazyText $
                buildScript (width cnv) (height cnv) (canvas cnv)
    in Display [DisplayData MimeHtml src]

Then when we use this we can specify how to display our canvases:

Canvas 200 600 $ do
  font "italic 60pt Calibri"
  lineWidth 6
  strokeStyle blue
  fillStyle goldenrod
  textBaseline TextBaselineMiddle
  strokeText "Hello" 150 100
  fillText "Hello World!" 150 100

Sadly, it seems that the static-canvas library currently only supports having one generated canvas on the page – if you try to add another one, it simply modifies the pre-existing one. This is probably a bug that should be fixed, though!

Once you've made an IHaskell display instance, you can easily package it up and stick it on Hackage. Specifically, for a package named package-name, you should take everything before the -. Then, prepend ihaskell- to the package name. Finally, make sure there exists a module IHaskell.Display.Package, where Package is the first word in package-name capitalized. If this is done, then IHaskell will happily load your package and instance upon startup, making it very easy for your users to install the display addon!

For example, the hatex library is exposed as an addon through the ihaskell-hatex display package and the IHaskell.Display.Hatex module in that package. The juicypixels library has an addon package called ihaskell-juicypixels with a module IHaskell.Display.Juicypixels. As I write this now, I realize that this protocol is a little bit weird.
Specifically, I think the rule that you take the first thing before the `-` is not too great; rather, the `-` should perhaps be treated as a word separator, so that `package-name` would get translated to `ihaskell-package-name` and `IHaskell.Display.PackageName`. (We do need some standard!) If you have any opinions about this, or suggestions for how to improve this process, please let me know!

Anyway, I hope that this brief tutorial can show someone how to write small IHaskell addons. Perhaps someone will find it useful – please get in touch if you have any questions, comments, or suggestions!
https://nbviewer.ipython.org/github/IHaskell/IHaskell/blob/jamesdbrock-patch-1/notebooks/Static%20Canvas%20IHaskell%20Display.ipynb
Hello everybody,

I have some files, both .cpp and .h, that were coded in C++ by someone else (only 5 files, which create a Binary Search Tree). I would like to use them in my project. First of all, I copied the files that I need into the same directory as my .cpp file that contains the main method. Then I added the files using Add > Add Existing Item in Visual Studio 2003 .NET. When I clicked build, the compiler did not give any errors or warnings and compiled successfully. Nevertheless, when I want to create an object of a class from my main method, the compiler does not see any of the classes. I have spent my whole day on this but couldn't get one step further; any help would be appreciated. If you want, I can send the project file to whoever wants to help; it is quite small, and I only need to know why it does not see the classes.

Code:
// ********************************************************
// Header file TreeException.h for the ADT binary tree.
// ********************************************************

#include <stdexcept>
#include <string>

using namespace std;

class TreeException : public logic_error
{
public:
    TreeException(const string& message = "")
        : logic_error(message.c_str())
    {
    }
}; // end TreeException

Kind Regards,
https://cboard.cprogramming.com/cplusplus-programming/64178-can-not-reach-classes-create-instances-them.html
This is the mail archive of the libstdc++@gcc.gnu.org mailing list for the libstdc++ project.

|hmm. streampos is not actually used:
|
|      pos_type
|      tellg(void);
|
|      __istream_type&
|      seekg(pos_type);
|
|      __istream_type&
|      seekg(off_type, ios_base::seekdir);

According to Josuttis, "The C++ Standard Library", pages 634--636, instead of std::ios::pos_type pos (= file.tellg()), I can also write std::streampos pos. Quote from Josuttis: "Class fpos<> is used to define the types streampos for char and wstreampos for wchar_t streams. These types are used to define the pos_type of the corresponding character traits. And the pos_type member of the traits is used to define pos_type of the corresponding stream classes. Thus, you could also use streampos as the type of the stream positions."

Since old code (using the old iostream implementation; confer e.g. the version of libstdc++ which is shipped with gcc-2.95.2) defines streampos as the return type of tellg, I propose to expose streampos to the global namespace, if that makes sense.

Peter Schmid
http://gcc.gnu.org/ml/libstdc++/2001-03/msg00196.html
Are you looking to adapt your Django application to different linguistic and cultural settings? From a developer starting out on a project to one wanting to adapt an existing Django application, this beginner's guide talks about the nuances of working with the process of i18n in Django. First, we'll cover how you can modify your Django application to enable i18n, and then how to integrate your i18n workflow with Lokalise. Then, we'll talk about various techniques in Django that may help you with i18n concepts.

- Step 1: Prepare your Django Project for I18n with Lokalise
- Step 2: Integrate with Lokalise
- Step 3: Set a Locale in Django
- Translation of Text: Integrate with gettext in Python
- Lazy Translation with Django
- Django i18n in Templates
- Translations in Django with JavaScript
- Final Thoughts on Django I18n

Step 1: Prepare your Django Project for I18n with Lokalise

1.1 Setup New Project

Let us start by creating a project in Django using:

```
django-admin startproject blog
```

The directory structure of your project is as follows:

```
blog
├── blog
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
├── db.sqlite3
└── manage.py
```

1.2 Enable I18n in Django

The first task, before we get started with any of the techniques, is to enable i18n in Django. By default, Django is geared up to handle i18n. If you navigate to your project's settings.py, you will notice that the variable USE_I18N is set to True. Even if your project does not use i18n, this can remain enabled, as it causes no overhead to your overall project.

```python
USE_I18N = True
```

Also, you can set the default language using the variable LANGUAGE_CODE in the same settings.py file:

```python
LANGUAGE_CODE = 'en'
```

Next, you can set up the languages that you plan to support in your Django application:

```python
LANGUAGES = [
    ('en', 'English'),
    ('bn', 'Bengali'),
]
```

You can also set the variable LANGUAGE_BIDI, which decides the direction in which the text is read.
It should be set to True for right-to-left languages like Arabic, and to False otherwise.

1.3 Start a New Django App

Next, let's start an application called blogs to test our i18n techniques on:

```
python manage.py startapp blogs
```

This creates the basic structure of an application within a Django project, as shown below:

```
blog
├── blog
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
├── db.sqlite3
├── manage.py
└── blogs
    ├── __init__.py
    ├── admin.py
    ├── apps.py
    ├── migrations
    │   └── __init__.py
    ├── models.py
    ├── templates
    ├── tests.py
    ├── urls.py
    └── views.py
```

1.4 Create Translation Files in Django

Next, we need to run the makemessages command in Django, which automatically creates and updates PO files for the application to use. You may encounter an error with the following command if gettext is not installed on your local system:

```
./manage.py makemessages -l en
```

The last argument, en, sets the language for translation. You can run this command from the root of the project or from the application directory. It essentially scans your project for all strings that require translation. The following is the updated structure of the project:

```
blog
├── blog
│   ├── __init__.py
│   ├── settings.py
│   ├── urls.py
│   └── wsgi.py
├── db.sqlite3
├── manage.py
└── blogs
    ├── __init__.py
    ├── admin.py
    ├── apps.py
    ├── locale
    │   └── en
    │       └── LC_MESSAGES
    │           └── django.po
    ├── migrations
    │   └── __init__.py
    ├── models.py
    ├── templates
    ├── tests.py
    ├── urls.py
    └── views.py
```

Notice that a locale directory has been created within the application. In it, a directory is created for each language. The django.po file within the LC_MESSAGES directory contains all strings marked for translation. We will now see how you can integrate your Django application with your Lokalise account.
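Under the hood, the makemessages step above performs an xgettext-style scan: it walks your source files looking for strings wrapped in the translation functions and collects them as msgids. The toy sketch below illustrates just that scanning idea in plain Python; it is a deliberately simplified regex, not how Django's real extractor works.

```python
import re

# A tiny piece of "source code" containing gettext-marked strings.
SOURCE = '''
from django.utils.translation import gettext as _

def view(request):
    return _("Hello World") + _('Goodbye')
'''

# Match _("...") and _('...') calls and collect the quoted msgids.
PATTERN = re.compile(r"""_\(\s*(['"])(.+?)\1\s*\)""")

msgids = [m.group(2) for m in PATTERN.finditer(SOURCE)]
print(msgids)  # ['Hello World', 'Goodbye']
```

The real extractor also handles template tags, multi-line calls, translator comments, and plural forms, which is why you should always rely on makemessages rather than rolling your own scanner.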
A PO file generated by Django looks like this:

```
# blogs/locale/de/LC_MESSAGES/django.po
#: templates/blogs/index.html:3
msgid "Hello World"
msgstr ""
```

You can edit the PO file to fill in the msgstr values that store the translations. Once the translations are filled in, run ./manage.py compilemessages so Django can compile the PO files into the binary MO files it reads at runtime.

Step 2: Integrate with Lokalise

2.1 Setup Lokalise CLI

Before you can build an interface between your Django application and your Lokalise account, you need to install the Lokalise CLI client. Here are the installation instructions for the CLI client. The latest version of the CLI tool is now v2, so make sure you upgrade if you are using the older version.

If you use a Mac, installation of the CLI client is carried out through Homebrew. Run the following commands to download and install v2 of the Lokalise CLI client:

```
brew tap lokalise/brew
brew install lokalise
```

If you do not use Homebrew, you can download the installer directly. On Linux systems, you can download the installation script and run it in a single command as shown below:

```
curl -sfL | sh
```

Alternately, you can install Lokalise CLI v2 using the installer for Linux (32 bit | 64 bit). If you are working on Windows, you can only install the CLI client using the installers (32 bit | 64 bit).

2.2 Configure the CLI tool

Once you have installed the CLI, the command lokalise2 is available for you to use. Any interaction that you perform with your Lokalise account is validated through a token. A token essentially mimics your user access and can be generated from your Lokalise account. Additionally, if you are working on a certain project within Lokalise, you also need to supply the project ID. You can save the token and project ID in the /etc/lokalise.cfg file or supply them as parameters to every lokalise2 command. For instance:

```
Token = "xxxxxxxxxx"
Project = "xxxxx.xxxxx"
```

Please note that supplying the token in your commands may save it to your bash history, which is a potential security risk.
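The Token/Project file shown above is a simple key = "value" format. As a plain-Python illustration of how such a file can be read (a toy parser written for this example only; the lokalise2 CLI does its own config handling), consider:

```python
# Toy parser for the key = "value" config format shown above.
def parse_config(text):
    values = {}
    for line in text.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            # Drop surrounding whitespace and the quotes around values.
            values[key.strip()] = value.strip().strip('"')
    return values

cfg = parse_config('Token = "xxxxxxxxxx"\nProject = "xxxxx.xxxxx"')
print(cfg)  # {'Token': 'xxxxxxxxxx', 'Project': 'xxxxx.xxxxx'}
```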
2.3 Exchange Files between Lokalise and your Django Application

To upload a file to your Lokalise project, run the following command and replace <token> and <project_id> with your actual token and project ID:

```
lokalise2 --token <token> --project-id <project_id> file upload --file "locale/en/LC_MESSAGES/default.po" --lang-iso en
```

You can download the files from your Lokalise project using the following command:

```
lokalise2 --token <token> --project-id <project_id> file download --format po --filter-langs en --original-filenames=true --directory-prefix "" --unzip-to "locale/en/LC_MESSAGES/"
```

You will need to re-run the command to download the files for each language.

Step 3: Set a Locale in Django

3.1 Use Inbuilt Functions in Django

Django comes with a default view, django.conf.urls.i18n, that sets the user's language. You can simply map a URL to trigger this view. For example:

```python
path('i18n/', include('django.conf.urls.i18n')),
```

This view expects the language parameter as a POST variable, and it saves the current user's language preference in the session. In addition to this, there is an optional next parameter that sets the URL to which the view should redirect after setting the language.

3.2 Set Locale Settings Manually

While the default implementation is good for basic i18n, you may need to set additional parameters related to the locale, such as a geography, in addition to the language. To set such a variable, you need to create a custom view. In this example, let's manually set a language ID in our Django application. You can follow the same process to set any other variable related to the locale of the user.

First, set the URL pattern in your application's urls.py file as shown below. We capture the variable language_id from the URL. Whenever someone visits the URL /your_app/set_language/15/, Django will call up the set_language view with the parameter language_id set to 15.
```python
urlpatterns = patterns(
    'my_app.views',
    (r'^set_language/(?P<language_id>\d+)/$', 'set_language'),
)
```

Now we'll define the set_language() view. Because the URL pattern captures language_id as a named group, Django passes it to the view as an argument; the view stores it in a session variable and redirects home:

```python
from django.http import HttpResponseRedirect

def set_language(request, language_id):
    # Save the language in a session variable, then redirect to home.
    request.session['language_id'] = language_id
    return HttpResponseRedirect('/')
```

Translation of Text: Integrate with gettext in Python

We have explored the usage of the gettext module in Python i18n. Django has a wrapper for it in its translation module. The following view searches for the msgid "Hello World" in the current locale's PO file and returns the output as an HTTP response:

```python
from django.utils.translation import gettext as _

def display_message(request):
    output = _("Hello World")
    return HttpResponse(output)
```

Here is the part of the PO file that is relevant to this translation when the language is set to Bengali:

```
msgid "Hello World"
msgstr "ওহে বিশ্ব"
```

The HTTP response would simply be ওহে বিশ্ব, i.e., the translated string from the PO file.

The argument to the gettext function may be any string. For instance, if you would like to print a string that displays how many days ago you last logged in, you can modify the function as shown below:

```python
def my_view(request, day):
    if day > 1:
        output = _('You logged in %(day)s days ago.') % {'day': day}
    else:
        output = _('You logged in %(day)s day ago.') % {'day': day}
    return HttpResponse(output)
```

The corresponding entries to handle such a situation in the PO file are shown below (note that the msgids must match the source strings exactly for the lookup to succeed):

```
msgid "You logged in %(day)s day ago."
msgid_plural "You logged in %(day)s days ago."
msgstr[0] "আপনি একদিন আগে লগ ইন করেছেন"
msgstr[1] "আপনি %(day)s দিন আগে লগ ইন করেছেন"
```

Lazy Translation with Django

If you follow this path of translation, Django translates all strings on the go. An interesting feature of Django is the ability to defer translation until it is absolutely necessary.
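Before looking at Django's own API, the idea of lazy translation can be sketched in a few lines of plain Python: wrap the catalog lookup in a proxy object that only performs the lookup when the value is rendered. The LazyString class and the dict-based catalog below are illustrative inventions, not Django internals (Django implements this with a more general lazy() utility).

```python
class LazyString:
    """Defer a translation lookup until the value is actually used."""

    def __init__(self, translate, msgid):
        self._translate = translate
        self._msgid = msgid

    def __str__(self):
        # The lookup runs here, at render time, not at creation time.
        return self._translate(self._msgid)


# A stand-in translation catalog; initially the Bengali entry from above.
catalog = {"Hello World": "ওহে বিশ্ব"}

greeting = LazyString(lambda s: catalog.get(s, s), "Hello World")

# The active catalog can change after 'greeting' is created...
catalog["Hello World"] = "Hallo Welt"

# ...and the lazy value reflects the catalog at the moment of use.
print(str(greeting))  # Hallo Welt
```

This is exactly why a lazy lookup is the right choice for strings evaluated at import time (for example, model field labels): the actual translation happens per-request, once the active language is known.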
Django has a pre-defined function, gettext_lazy(), which translates strings lazily. The translation takes place when the application needs to use the value of the string, not when the view is called up:

```python
from django.utils.translation import gettext_lazy as _

def display_message(request):
    output = _("Hello World")
    return HttpResponse(output)
```

In this example, the translation does not occur until the value needs to be sent as an HTTP response.

Django i18n in Templates

So far, we have spoken extensively about i18n in the back end. In such use cases, you send the translated strings directly to the front end, where you can simply display them in the templating engine. However, you can also translate a string from within Django templates. To use this feature, you first need to load the i18n module in the template. Next, use the trans keyword followed by the string to translate, as shown below:

```
{% extends 'admin.html' %}
{% load i18n %}

{% block content %}
<h1>{% trans 'Hello World' %}</h1>
{% endblock %}
```

Translations in Django with JavaScript

The gettext module is available only through the back end in Python; therefore, it is not possible for you to translate using gettext directly in JavaScript. Furthermore, your JavaScript code does not have direct access to the translation files. Django provides a solution for passing the translations to JavaScript, which enables you to call the gettext() function from within your JavaScript code.

First, enable the URL where JavaScript will get the required resources:

```python
from django.views.i18n import JavaScriptCatalog

urlpatterns = [
    path('jsi18n/', JavaScriptCatalog.as_view(), name='javascript-catalog'),
]
```

Next, call the catalog from your templates to fetch the translation resources. You can then call the gettext() function, just like you would in Python, to get the related translation.
```
$ console.log(gettext('Hello World'));
ওহে বিশ্ব
```

Here is a complete list of gettext-related functions that you can call from within your JavaScript code.

Final Thoughts on Django I18n

In this tutorial, we explored the topic of i18n in Django. Here is what we learned to do:

- Set up a Django project and enable i18n
- Create and manage translations in Django
- Integrate the translations in your Django application with your Lokalise account
- Translate strings in Django and work with lazy translations
- Translate strings on the go within templates
- Integrate translations with JavaScript in Django
https://lokalise.com/blog/django-i18n-beginners-guide/
Name

    KHR_reusable_sync

Name Strings

    EGL_KHR_reusable_sync

Contributors

    Acorn Pooley
    Gary King
    Gregory Prisament
    Jon Leech
    Robert Palmer

Contacts

    Acorn Pooley, NVIDIA Corporation (apooley 'at' nvidia.com)
    Gary King, NVIDIA Corporation (gking 'at' nvidia.com)
    Gregory Prisament, NVIDIA Corporation (gprisament 'at' nvidia.com)
    Jon Leech (jon 'at' alumni.caltech.edu)
    Robert Palmer (robert.palmer 'at' nokia.com)

Notice

Status

    Complete. Approved by the Khronos Board of Promoters on August 28,
    2009.

Version

    Version 22, January 31, 2014

Number

    EGL Extension

Overview

    This extension is derived from the GL_ARB_sync extension but
    introduces a type of sync object known as a "reusable sync object",
    comparable to an OS semaphore. The specification is designed to
    allow additional types of sync objects to be easily introduced in
    later extensions. Reusable sync objects may be used to synchronize
    activity between threads or between client APIs. Synchronization is
    accomplished by explicitly changing the status of a reusable object
    using EGL API commands.

New Types

    /*
     * EGLSyncKHR is an opaque handle to an EGL sync object
     */
    typedef void* EGLSyncKHR;

    /*
     * EGLTimeKHR is a 64-bit unsigned integer representing intervals
     * in nanoseconds.
     */
    #include <khrplatform.h>
    typedef khronos_utime_nanoseconds_t EGLTimeKHR;

New Procedures and Functions

    EGLSyncKHR eglCreateSyncKHR(
                    EGLDisplay dpy,
                    EGLenum type,
                    const EGLint *attrib_list);

    EGLBoolean eglDestroySyncKHR(
                    EGLDisplay dpy,
                    EGLSyncKHR sync);

    EGLint eglClientWaitSyncKHR(
                    EGLDisplay dpy,
                    EGLSyncKHR sync,
                    EGLint flags,
                    EGLTimeKHR timeout);

    EGLBoolean eglSignalSyncKHR(
                    EGLDisplay dpy,
                    EGLSyncKHR sync,
                    EGLenum mode);

    EGLBoolean eglGetSyncAttribKHR(
                    EGLDisplay dpy,
                    EGLSyncKHR sync,
                    EGLint attribute,
                    EGLint *value);

New Tokens

    Accepted by the <type> parameter of eglCreateSyncKHR, and returned
    in <value> when eglGetSyncAttribKHR is called with <attribute>
    EGL_SYNC_TYPE_KHR:

        EGL_SYNC_REUSABLE_KHR              0x30FA

    Accepted by the <attribute> parameter of eglGetSyncAttribKHR:

        EGL_SYNC_TYPE_KHR                  0x30F7
        EGL_SYNC_STATUS_KHR                0x30F1

    Accepted by the <mode> parameter of eglSignalSyncKHR and returned
    in <value> when eglGetSyncAttribKHR is called with <attribute>
    EGL_SYNC_STATUS_KHR:

        EGL_SIGNALED_KHR                   0x30F2
        EGL_UNSIGNALED_KHR                 0x30F3

    Accepted in the <flags> parameter of eglClientWaitSyncKHR:

        EGL_SYNC_FLUSH_COMMANDS_BIT_KHR    0x0001

    Accepted in the <timeout> parameter of eglClientWaitSyncKHR:

        EGL_FOREVER_KHR                    0xFFFFFFFFFFFFFFFFull

    Returned by eglClientWaitSyncKHR:

        EGL_TIMEOUT_EXPIRED_KHR            0x30F5
        EGL_CONDITION_SATISFIED_KHR        0x30F6

    Returned by eglCreateSyncKHR in the event of an error:

        EGL_NO_SYNC_KHR                    ((EGLSyncKHR)0)

Initially, sync objects are unsignaled. EGL may be asked to wait for a sync object to become signaled, or a sync object's status may be queried. Depending on the type of a sync object, its status may be changed either by an external event, or by explicitly signaling and unsignaling the sync. Sync objects are associated with an EGLDisplay when they are created, and have <attributes> defining additional aspects of the sync object. All sync objects include attributes for their type and their status. Additional attributes are discussed below for different types of sync objects.
<Reusable sync objects> are created in the unsignaled state, and may be signaled and/or unsignaled repeatedly. Every transition of a reusable sync object's status from unsignaled to signaled will release any threads waiting on that sync object.

The command

    EGLSyncKHR eglCreateSyncKHR(
                    EGLDisplay dpy,
                    EGLenum type,
                    const EGLint *attrib_list);

creates a sync object of the specified <type> associated with the specified display <dpy>, and returns a handle to the new object. <attrib_list> is an attribute-value list specifying other attributes of the sync object, terminated by an attribute entry EGL_NONE. Attributes not specified in the list will be assigned their default values.

If <type> is EGL_SYNC_REUSABLE_KHR, a reusable sync object is created. In this case <attrib_list> must be NULL or empty (containing only EGL_NONE). Attributes of the reusable sync object are set as follows:

    Attribute Name         Initial Attribute Value(s)
    ---------------        --------------------------
    EGL_SYNC_TYPE_KHR      EGL_SYNC_REUSABLE_KHR
    EGL_SYNC_STATUS_KHR    EGL_UNSIGNALED_KHR

Errors
------

    * If <dpy> is not the name of a valid, initialized EGLDisplay,
      EGL_NO_SYNC_KHR is returned and an EGL_BAD_DISPLAY error is
      generated.

    * If <attrib_list> is neither NULL nor empty (containing only
      EGL_NONE), EGL_NO_SYNC_KHR is returned and an EGL_BAD_ATTRIBUTE
      error is generated.

    * If <type> is not a supported type of sync object, EGL_NO_SYNC_KHR
      is returned and an EGL_BAD_ATTRIBUTE error is generated.

The command

    EGLint eglClientWaitSyncKHR(
                    EGLDisplay dpy,
                    EGLSyncKHR sync,
                    EGLint flags,
                    EGLTimeKHR timeout);

blocks the calling thread until the specified sync object <sync> is signaled, or until <timeout> nanoseconds have passed. More than one eglClientWaitSyncKHR may be outstanding on the same <sync> at any given time. When there are multiple threads blocked on the same <sync> and the sync object is signaled, all such threads are released, but the order in which they are released is not defined.
If the value of <timeout> is zero, then eglClientWaitSyncKHR simply tests the current status of <sync>. If the value of <timeout> is the special value EGL_FOREVER_KHR, then eglClientWaitSyncKHR does not time out. For all other values, <timeout> is adjusted to the closest value allowed by the implementation-dependent timeout accuracy, which may be substantially longer than one nanosecond.

eglClientWaitSyncKHR returns one of three status values describing the reason for returning. A return value of EGL_TIMEOUT_EXPIRED_KHR indicates that the specified timeout period expired before <sync> was signaled, or if <timeout> is zero, indicates that <sync> is not signaled. A return value of EGL_CONDITION_SATISFIED_KHR indicates that <sync> was signaled before the timeout expired, which includes the case when <sync> was already signaled when eglClientWaitSyncKHR was called. If an error occurs then an error is generated and EGL_FALSE is returned.

If a sync object is destroyed while an eglClientWaitSyncKHR is blocking on that object, eglClientWaitSyncKHR will unblock and return immediately, just as if the sync object had been signaled prior to being destroyed.

Errors
------

    * If <sync> is not a valid sync object for <dpy>, EGL_FALSE is
      returned and an EGL_BAD_PARAMETER error is generated.

    * If <dpy> does not match the EGLDisplay passed to eglCreateSyncKHR
      when <sync> was created, the behaviour is undefined.

The command

    EGLBoolean eglSignalSyncKHR(
                    EGLDisplay dpy,
                    EGLSyncKHR sync,
                    EGLenum mode);

signals or unsignals the reusable sync object <sync> by changing its status to <mode>, which must be one of the values in table 3.bb. If as a result of calling eglSignalSyncKHR the status of <sync> transitions from unsignaled to signaled, any eglClientWaitSyncKHR commands blocking on <sync> will unblock. Assuming no errors are generated, EGL_TRUE is returned.
    Mode                  Effect
    ------------------    -------------
    EGL_SIGNALED_KHR      Set the status of <sync> to signaled
    EGL_UNSIGNALED_KHR    Set the status of <sync> to unsignaled

    Table 3.bb  Modes Accepted by eglSignalSyncKHR Command

Errors
------

    * If <sync> is not a valid sync object for <dpy>, EGL_FALSE is
      returned and an EGL_BAD_PARAMETER error is generated.

    * If the type of <sync> is not EGL_SYNC_REUSABLE_KHR, EGL_FALSE is
      returned and an EGL_BAD_MATCH error is generated.

    * If <dpy> does not match the EGLDisplay passed to eglCreateSyncKHR
      when <sync> was created, the behaviour is undefined.

The command

    EGLBoolean eglGetSyncAttribKHR(
                    EGLDisplay dpy,
                    EGLSyncKHR sync,
                    EGLint attribute,
                    EGLint *value);

is used to query attributes of the sync object <sync>. Legal values for <attribute> depend on the type of sync object, as shown in table 3.cc. Assuming no errors are generated, EGL_TRUE is returned and the value of the queried attribute is returned in <value>.

    Attribute              Description                 Supported Sync Objects
    -----------------      -----------------------     ----------------------
    EGL_SYNC_TYPE_KHR      Type of the sync object     All
    EGL_SYNC_STATUS_KHR    Status of the sync object   All

    Table 3.cc  Attributes Accepted by eglGetSyncAttribKHR Command

Errors
------

    * If <attribute> is not one of the attributes in table 3.cc,
      EGL_FALSE is returned and an EGL_BAD_ATTRIBUTE error is generated.

    * If <attribute> is not supported for the type of sync object passed
      in <sync>, EGL_FALSE is returned and an EGL_BAD_MATCH error is
      generated.

If any error occurs, <*value> is not modified.

The command

    EGLBoolean eglDestroySyncKHR(
                    EGLDisplay dpy,
                    EGLSyncKHR sync);

is used to destroy an existing sync object. If any eglClientWaitSyncKHR commands are blocking on <sync> when eglDestroySyncKHR is called, they will be woken up, as if <sync> were signaled. If no errors are generated, EGL_TRUE is returned, and <sync> will no longer be the handle of a valid sync object.
Issues

    Note about the Issues
    ---------------------

    The wording for this extension was originally written as a single
    extension defining two types of sync object; a "reusable sync
    object" and a "fence sync object". That extension was split to
    produce standalone extensions for each type of sync object, and
    references to the other type removed from the specification
    language. This issues list has been simplified to remove references
    to fence sync objects but is otherwise very similar to the
    EGL_KHR_fence_sync extension issues list.

    [REMOVED - found in the fence_sync extension.]

    ... EGLSyncKHR objects to OpenKODE synchronization primitives rather
    than platform-specific ones. We suggest that this functionality, if
    needed, be added as a layered extension instead of being included
    here. This way, EGL_KHR_sync remains minimal and easy to implement
    on a variety of platforms.

    6. Please provide a more detailed description of how
       eglClientWaitSyncKHR behaves.

       RESOLVED: eglClientWaitSyncKHR blocks until the status of the
       sync object transitions to the signaled state. Sync object status
       is either signaled or unsignaled. More detailed rules describing
       signalling follow (these may need to be embedded into the actual
       spec language):

       * A reusable sync object has two possible status values:
         signaled or unsignaled.

       * When created, the status of the sync object is unsignaled by
         default.

       * A reusable sync can be set to signaled or unsignaled status
         using eglSignalSyncKHR.

       * A wait function called on a sync object in the unsignaled
         state will block. It unblocks (note, not "returns to the
         application") when the sync object transitions to the signaled
         state.

       * A wait function called on a sync object in the signaled state
         will return immediately.

    7. Should the 'flags' argument to eglClientWaitSyncKHR be EGLint or
       EGLuint?

       RESOLVED: EGLint, setting a precedent for explicit bitmask types
       in EGL going forward.
       We don't have an EGLuint type and it is overkill for this purpose
       when other bitmasks (surface type and api type) are already using
       EGLint attribute fields.

    8. Can multiple WaitSyncs be placed on the same sync object?

       RESOLVED: Yes. This has been allowed all along but we now state
       it more clearly in the spec language. However, there is some
       concern that this is hard to implement and of limited use, and we
       might remove this capability before approving the extension. One
       way to do this while allowing multiple waiters at some future
       point is to expose it through the API to developers as either a
       sync attribute allowing multiple waits (default not allowing it),
       or a parameter to WaitSync, which initially must be something
       like EGL_SINGLE_WAIT_ONLY.

    9. Should eglDestroySyncKHR release all WaitSyncs placed on a
       reusable sync object?

       RESOLVED: Yes. It is safest to release all threads waiting on a
       reusable object when the sync object is deleted so that waiting
       threads do not wait forever.

Revision History

    #22 (Jon Leech, January 31, 2014)
    - Clarify return value of ClientWaitSyncKHR when called with
      <timeout> of zero for an unsignaled <sync> (Bug 11576).

    #21 (Jon Leech, April 23, 2013)
    - Simplify issues list to remove issues specific to fence sync
      objects.

    #20 (Jon Leech, September 8, 2009)
    - Change status to complete and note approval by the Promoters.
      Minor formatting changes.

    #19 (Robert Palmer, July 14, 2009)
    - Branch wording from draft KHR_sync specification. Remove ability
      to create fence sync objects and all tokens/wording specific to
      them.

    #18 (Robert Palmer, July 8, 2009)
    - Issues 8 and 9 declared resolved in EGL meeting 2009-07-08

    #17 (Robert Palmer, July 8, 2009)
    - Update eglDestroySyncKHR to special-case deletion of fence sync
      objects. This is explained in issue 9.
    - Corrected EGL_REUSABLE_SYNC_KHR -> EGL_SYNC_REUSABLE_KHR
    - Define value for EGL_SYNC_REUSABLE_KHR
    - Fix typo and whitespace

    #16 (Jon Leech, July 7, 2009)
    - Update description of new tokens to match changes to the
      eglCreateSyncKHR entry point in revision 15.

    #15 (Jon Leech, June 16, 2009)
    - Define separate one-time fence sync and reusable sync extensions
      and corresponding extension strings. Remove AUTO_RESET and
      eglFenceKHR. Rename eglCreateFenceSyncKHR to eglCreateSyncKHR and
      change initial status of reusable syncs to unsignaled. Clarify
      which functions apply to which types of sync objects. Update
      issues list.

    #14 (Jon Leech, April 29, 2009)
    - Clarify that all waiters are woken up on signalling a sync. Remove
      tabs to clean up some formatting issues.

    #13 (Acorn Pooley, April 2, 2009)
    - Renamed GL_OES_egl_sync -> GL_OES_EGL_sync
              VG_KHR_egl_sync -> VG_KHR_EGL_sync

    #12 (Jon Leech, April 1, 2009)
    - Changed sync flags type from EGLuint to EGLint and add issue 7.

    #11 (Acorn Pooley, February 4, 2009)
    - Add error case to eglGetSyncAttribKHR.
    - Fix year on rev 8-10 (2008 -> 2009)

    #10 (Acorn Pooley, February 4, 2009)
    - Clarify some error message descriptions

    #9 (Greg Prisament, January 15, 2009)
    - Destroy now wakes up all waits (eglClientWaitSyncKHR)
    - Add EGLDisplay <dpy> as first parameter to all commands
    - Split into 3 extension strings, EGL_KHR_sync, GL_OES_egl_sync,
      VG_KHR_egl_sync, all described in this document.
    - Add attribute AUTO_RESET_KHR
    - Time type uses the type from khrplatform.h
    - Remove EGL_ALREADY_SIGNALLED

    #8 (Jon Leech, November 11, 2009)
    - Assign enum values

    #7 (Acorn Pooley, October 30, 2008)
    - Fix typos
    - Remove obsolete wording about Native sync objects (see issue 5)
    - Formatting: remove tabs, 80 columns

    #6 (Acorn Pooley, October 27, 2008)
    - Corrected 'enum' to 'EGLenum' in prototypes.

    #5 (Jon Leech, September 9, 2008)
    - Removed native sync support (eglCreateNativeSyncKHR and
      EGL_SYNC_NATIVE_SYNC_KHR), and re-flowed spec to fit in 80
      columns.
    #4 (Jon Leech, November 20, 2007)
    - Corrected 'enum' to 'EGLenum' in prototypes.

    #3 (Jon Leech, April 5, 2007)
    - Added draft Status and TBD Number

    #2 (November 27, 2006)
    - Changed OES token to KHR
https://skia.googlesource.com/external/github.com/KhronosGroup/EGL-Registry/+/11478904448bbdf5757b798c856a525aa2b351b1/extensions/KHR/EGL_KHR_reusable_sync.txt
This documentation covers version 1.0.3 of libspf2. It does not yet cover all public functions; however, it contains more than enough information to effectively use this library. Read the header files for more information. Please submit corrections or requests for missing information to libspf2 [ta] rt.anarres.org.

#include <spf2/spf.h>

    The main spf2 header file must be included.

#include <spf2/spf_dns_resolv.h>

    Include this for definitions relating to the basic SPF resolver.

#include <spf2/spf_dns_cache.h>

    Include this for definitions relating to the caching SPF resolver.

SPF_err_t

    An error code: an integer. See 'Error Codes' below.

SPF_config_t

    A handle to an instantiation of the SPF library.

SPF_dns_config_t

    A handle to an SPF DNS resolver.

SPF_c_results_t

    A struct which holds the output from the SPF bytecode compiler.

SPF_output_t

    A response from the SPF engine. This is a struct containing the
    following fields:

    result: One of SPF_RESULT_PASS, SPF_RESULT_FAIL,
        SPF_RESULT_SOFTFAIL, SPF_RESULT_NEUTRAL, SPF_RESULT_UNKNOWN,
        SPF_RESULT_ERROR, SPF_RESULT_NONE.
    received_spf: A header to be inserted into the checked mail.
    smtp_comment: An SMTP error message XXX(?)
    err: The error message from the SPF library (justifying a failure?)

SPF_config_t SPF_create_config()

    Construct a new handle for accessing the SPF library.

    Return values:
        (SPF_config_t)0: If construction of the handle fails.
        (an opaque handle): If construction of the handle succeeds.

SPF_config_t SPF_dup_config(SPF_config_t handle)

    Duplicate a handle for accessing the SPF library.

    Return values:
        (SPF_config_t)0: If construction of the handle fails.
        (an opaque handle): If construction of the handle succeeds.

void SPF_destroy_config(SPF_config_t handle)

    Destroy a handle for accessing the SPF library.

void SPF_set_debug(SPF_config_t handle, int debug)

    Set the debugging level for an SPF handle.
The SPF library allows a certain amount of debugging output to be generated for help in determining why things succeeded or failed. Currently, only the following debug levels are implemented:

SPF_dns_config_t SPF_dns_create_config_resolv(SPF_dns_config_t layer_below, int debug)
    Create a handle for accessing a non-caching DNS resolver. The value passed for layer_below will be 0.
void SPF_dns_destroy_config_resolv(SPF_dns_config_t handle)
    Destroy a handle for accessing a non-caching DNS resolver.
SPF_dns_config_t SPF_dns_create_config_cache(SPF_dns_config_t layer_below, int debug)
    Create a handle for accessing a caching DNS resolver. The value passed for layer_below will be a handle to a non-caching DNS resolver created using SPF_dns_create_config_resolv.
void SPF_dns_destroy_config_cache(SPF_dns_config_t handle)
    Destroy a handle for accessing a caching DNS resolver.
void SPF_init_c_results(SPF_c_results_t *data)
    Initialize the given compiler results structure so that it may receive the output of a compilation. This must be freed when it is no longer to be used.
void SPF_free_c_results(SPF_c_results_t *data)
    Frees the bytecode contained in the SPF_c_results_t structure.
SPF_err_t SPF_compile_local_policy(SPF_config_t handle, const char *spf_record, int use_default_whitelist, SPF_c_results_t *c_results)
    Several of the SPF specifications support a "local policy" option. This is both very important and not particularly obvious in how it works.
    Email may come from many sources. Sometimes these sources are not direct, and not all of these indirect sources correctly rewrite the envelope-from to specify the new domain that is resending the email. This can happen on incorrectly configured mailing lists, or with people who have set up unix-like .forward files. Often, you want to accept these emails, even if they would technically fail the SPF check. So, you can set up a "local policy" that lists these sources of known-ok emails.
If a local policy is set, it will allow you to whitelist these sources. There is a default, globally maintained whitelist of known trusted email forwarders that is generally a good idea to use. SPF checks that pass due to local policies will be noted in the messages generated from SPF_result(). As such, it is best if the local policy is checked only right before the SPF check is sure to fail. SPF records that say that a domain never sends email should not do any checking of the local policy.

The exact spot in the evaluation of the SPF record was defined in a message sent to the SPF-devel mailing list. It said in part:

    Philip Gladstone says:
    Message-ID: <400B56AB.30702@gladstonefamily.net>
    Date: Sun, 18 Jan 2004 23:01:47 -0500

    I think that the localpolicy should only be inserted if the final
    mechanism is '-all', and it should be inserted after the last
    mechanism which is not '-'.

    Thus for the case of 'v=spf1 +a +mx -all', this would be interpreted
    as 'v=spf1 +a +mx +localpolicy -all'. Whereas 'v=spf1 -all' would
    remain the same (no non-'-' mechanism). 'v=spf1 +a +mx
    -exists:%stuff -all' would become 'v=spf1 +a +mx +localpolicy
    -exists:%stuff -all'.

This local policy string can be any string with macro variables included. It is first byte-compiled, and then the result can be set in the configuration.

void SPF_set_local_policy(SPF_config_t handle, SPF_c_results_t c_results)

SPF_err_t SPF_compile_exp(SPF_config_t handle, const char *exp, SPF_c_results_t *c_results)
    When the SPF check fails, an "explanation" string is generated for use by the MTA with the 4xx or 5xx reject code. This explanation string can be any string with macro variables included. It is first byte-compiled, and then the result can be set in the configuration. If an SPF record does not use the "exp=" modifier to specify a more appropriate explanation string, this default explanation string will be used.
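Philip Gladstone's rule lends itself to a direct implementation. The following standalone C sketch (our illustration, not libspf2's actual compiler code) applies the insertion rule to a textual record:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Apply the insertion rule quoted above: "+localpolicy" goes after the
 * last mechanism that does not start with '-', and only when the record
 * ends in "-all".  Returns a newly malloc'd string. */
static char *insert_local_policy(const char *record)
{
    size_t len = strlen(record);
    char *out = malloc(len + sizeof(" +localpolicy"));
    if (out == NULL)
        return NULL;

    /* Only records whose final mechanism is "-all" get a local policy. */
    if (len < 4 || strcmp(record + len - 4, "-all") != 0) {
        strcpy(out, record);
        return out;
    }

    /* Find the end of the last token that does not begin with '-',
     * skipping the "v=spf1" version tag itself. */
    const char *pos = NULL;
    const char *p = strchr(record, ' ');
    while (p != NULL) {
        const char *tok = p + 1;
        const char *end = strchr(tok, ' ');
        if (*tok != '-' && *tok != '\0')
            pos = (end != NULL) ? end : record + len;
        p = end;
    }

    if (pos == NULL) {            /* e.g. "v=spf1 -all": unchanged */
        strcpy(out, record);
        return out;
    }

    size_t head = (size_t)(pos - record);
    memcpy(out, record, head);
    strcpy(out + head, " +localpolicy");
    strcat(out, pos);
    return out;
}
```

Note that 'v=spf1 -all' comes back unchanged, exactly as the quoted message requires.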
int SPF_set_exp(SPF_config_t handle, SPF_c_results_t c_results)
    Set the explanation string for the given SPF handle.
int SPF_set_rec_dom(SPF_config_t handle, const char *receiving_hostname)
    Set the local hostname. Part of the Received-SPF: email header requires the domain name of the receiving MTA.
int SPF_set_ipv4(SPF_config_t handle, struct in_addr ipv4)
    Set the IPv4 address of the SMTP client.
int SPF_set_ipv4_str(SPF_config_t handle, const char *ipv4_address)
    Set the IPv4 address of the SMTP client.
int SPF_set_ipv6(SPF_config_t handle, struct in6_addr ipv6)
    Set the IPv6 address of the SMTP client.
int SPF_set_ipv6_str(SPF_config_t handle, const char *ipv6_address)
    Set the IPv6 address of the SMTP client.
int SPF_set_helo_dom(SPF_config_t handle, char *helohost)
    Set the HELO domain of the SMTP client.
    SPF needs both an IP address and a domain name to do its checking. The IP address is set by one of the above routines, but the domain name is not so simple. The domain name is normally obtained from the envelope-from (SMTP MAIL FROM: command), but if that is null (MAIL FROM:<>), then the HELO domain is used (SMTP HELO or EHLO commands).
    If there is no local part to the envelope-from email address, the name "postmaster" is used instead. This is always the case when the HELO domain has to be used, but it might also happen with the envelope-from, depending on how the MTA works.
    Whatever the source of the domain name, the SPF spec defines this as the "current domain". Normally, you wouldn't set this directly; you would call the SPF_set_helo_dom() and SPF_set_env_from() routines. However, when an SPF record is being evaluated, the current domain is changed when an include or redirect mechanism is executed.
int SPF_set_env_from(SPF_config_t handle, char *from)
    Set the 'MAIL FROM' address from the SMTP client.
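The fallback rules above can be summarized in a few lines. This sketch (our own helper, not part of the libspf2 API) picks the SPF "current domain" from the envelope-from and HELO values:

```c
#include <assert.h>
#include <string.h>

/* Pick the SPF "current domain" per the rules described above:
 * use the domain of the envelope-from when one is present, and fall
 * back to the HELO domain for a null sender (MAIL FROM:<>). */
static const char *current_domain(const char *env_from, const char *helo)
{
    /* MAIL FROM:<> (null sender): fall back to the HELO domain. */
    if (env_from == NULL || *env_from == '\0')
        return helo;

    /* Otherwise take everything after the '@'; an address with no '@'
     * is treated as already being a bare domain. */
    const char *at = strchr(env_from, '@');
    return (at != NULL) ? at + 1 : env_from;
}
```

The "postmaster" default for a missing local part would be handled the same way when building the macro environment.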
SPF_output_t SPF_result(SPF_config_t handle, SPF_dns_config_t resolver)
    Perform an SPF query based on the parameters specified in the handle, and return a result. The SPF_result() function does most of the real, important work. SPF_result() checks the IP address and the envelope-from (or HELO domain) as was configured using the spfcid variable and sees if it is valid. It returns all the info that the caller will need to use the SPF check results. See the description of the structure SPF_output_t for details about the return value of SPF_result() and how it should be used. It may use the DNS configuration to fetch additional information.
    Actually, SPF_result() is just an easy-to-use wrapper around SPF_get_spf(), SPF_eval_id() and SPF_result_comments().
SPF_output_t SPF_result_2mx(SPF_config_t handle, SPF_dns_config_t resolver)
    SPF_result_2mx() does everything that SPF_result() does, but it first checks to see if the sending system is a recognized MX secondary for the email recipient. If so, then it returns "pass" and does not perform the SPF query. Note that the sending system may be an MX secondary for some (but not all) of the recipients of a multi-recipient message, which is why SPF_result_2mx() may be called many times, with the final result being obtained from SPF_result_2mx_msg(). In effect, SPF_result_2mx() adds the mechanism "mx:
    If you do not know what a secondary MX is, you probably don't have one. Use the SPF_result() function instead.

Error Codes

SPF_E_SUCCESS
SPF_E_NO_MEMORY
SPF_E_NOT_SPF
SPF_E_SYNTAX
SPF_E_MOD_W_PREF
SPF_E_INVALID_CHAR
SPF_E_UNKNOWN_MECH
SPF_E_INVALID_OPT
SPF_E_INVALID_CIDR
SPF_E_MISSING_OPT
SPF_E_INTERNAL_ERROR
SPF_E_INVALID_ESC
SPF_E_INVALID_VAR
SPF_E_BIG_SUBDOM
SPF_E_INVALID_DELIM
SPF_E_BIG_STRING
SPF_E_BIG_MECH
SPF_E_BIG_MOD
SPF_E_BIG_DNS
SPF_E_INVALID_IP4
SPF_E_INVALID_IP6
SPF_E_INVALID_PREFIX
SPF_E_RESULT_UNKNOWN
SPF_E_UNINIT_VAR
SPF_E_MOD_NOT_FOUND
SPF_E_NOT_CONFIG
SPF_E_DNS_ERROR
SPF_E_BAD_HOST_IP
SPF_E_BAD_HOST_TLD
SPF_E_MECH_AFTER_ALL
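A caller typically switches on the result field of SPF_output_t to decide what to tell the SMTP client. The sketch below uses locally defined stand-ins for the documented constants (the real ones come from <spf2/spf.h>), and the accept/reject/tempfail mapping is one common policy choice, not something the library mandates:

```c
#include <assert.h>

/* Stand-ins for the documented result codes; these local definitions
 * are only for illustration -- the real ones live in <spf2/spf.h>. */
typedef enum {
    SPF_RESULT_PASS, SPF_RESULT_FAIL, SPF_RESULT_SOFTFAIL,
    SPF_RESULT_NEUTRAL, SPF_RESULT_UNKNOWN, SPF_RESULT_ERROR,
    SPF_RESULT_NONE
} spf_result_t;

typedef enum { SMTP_ACCEPT, SMTP_REJECT, SMTP_TEMPFAIL } smtp_action_t;

/* One reasonable policy: reject hard failures, tempfail on DNS/library
 * errors so the sender retries, and accept everything else (inserting
 * the received_spf header is left to the MTA). */
static smtp_action_t action_for(spf_result_t r)
{
    switch (r) {
    case SPF_RESULT_FAIL:  return SMTP_REJECT;
    case SPF_RESULT_ERROR: return SMTP_TEMPFAIL;
    default:               return SMTP_ACCEPT;
    }
}
```

On SMTP_REJECT, the smtp_comment field of the output is the natural text to hand back with the 5xx code.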
http://www.libspf2.org/docs/api.html
Internal equity capital is, conceptually speaking, by far the most difficult and controversial cost to measure, but, like other sources of funds, it certainly does involve a cost to the firm. It may be recalled that the objective of financial management is to maximize shareholders' wealth, and that maximization of the market price of shares is the operational substitute for wealth maximization. When equity holders invest their funds they expect returns in the form of dividends. As compensation for their higher risk exposure, holders of such securities expect a higher return, and a higher cost is therefore associated with them.

Retained earnings, as a source of finance for investment proposals, differ from other sources like debt, preference shares and equities. The use of debt is associated with a contractual obligation to pay a fixed rate of interest to the suppliers of funds and, often, to repay the principal at some predetermined date. An almost similar kind of stipulation applies to the use of preference shares. In the case of ordinary shares, although there is no provision for any predetermined payment to the shareholders, a certain expected rate of dividend provides a starting point for the computation of the cost of equity capital. Retained earnings, by contrast, obviously do not involve any formal arrangement of this kind: there is no obligation, formal or implied, on a firm to pay a return on retained earnings. Retained earnings may therefore appear to carry no cost, since they represent funds which have not been raised from outside. The contention that retained earnings are free of cost, however, is not correct. On the contrary, they do involve a cost, like any other source.
It is true that a firm is not obliged to pay a return (dividend or interest) on retained earnings. But retention of earnings does have implications for the shareholders of the firm. If earnings were not retained, they would be paid out to the ordinary shareholders as dividends. In other words, retention of earnings implies withholding of dividends from the holders of ordinary shares; when earnings are retained, shareholders are forced to forgo dividends. The dividends foregone by the equity holders are, in fact, an opportunity cost, so retained earnings involve an opportunity cost. The firm is implicitly required to earn on the retained earnings at least a rate equal to the one the shareholders would have earned had the earnings been distributed to them. This is the cost of retained earnings. The cost of retained earnings may therefore be defined as the opportunity cost, in terms of dividends foregone by (or withheld from) the equity shareholders.
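The opportunity-cost definition above can be stated compactly. Writing $k_r$ for the cost of retained earnings and $k_e$ for the rate shareholders could earn by reinvesting distributed dividends themselves (the symbols and the numbers are our illustration, not the text's), the retention of earnings is justified only when

```latex
k_r \;\ge\; k_e .
```

For example, if shareholders could earn $k_e = 10\%$ on dividends paid out to them, then on every \$100 of earnings the firm withholds it must earn at least \$10 per year; otherwise the shareholders would have been better off receiving the dividend.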
http://www.transtutors.com/homework-help/corporate-finance/capital-cost/internal-equity-and-flotation-cost/
IRC log of xproc on 2009-10-15 Timestamps are in UTC. 14:52:30 [RRSAgent] RRSAgent has joined #xproc 14:52:30 [RRSAgent] logging to 14:52:37 [Norm] Zakim, this will be xproc 14:52:37 [Zakim] ok, Norm; I see XML_PMWG()11:00AM scheduled to start in 8 minutes 15:00:25 [Vojtech] Vojtech has joined #xproc 15:00:48 [PGrosso] PGrosso has joined #xproc 15:01:18 [Zakim] XML_PMWG()11:00AM has now started 15:01:25 [Zakim] +[ArborText] 15:02:22 [Norm] Norm has joined #xproc 15:03:00 [Zakim] +Jeroen 15:03:29 [Norm] Meeting: XML Processing Model WG 15:03:29 [Norm] Date: 15 Oct 2009 15:03:29 [Norm] Agenda: 15:03:29 [Norm] Meeting: 156 15:03:29 [Norm] Chair: Norm 15:03:30 [Norm] Scribe: Norm 15:03:32 [Norm] ScribeNick: Norm 15:03:58 [Norm] Regrets: Henry 15:04:02 [Norm] Zakim, who's on the phone? 15:04:02 [Zakim] On the phone I see PGrosso, Vojtech 15:04:06 [MoZ] Zakim, what's the code 15:04:09 [Zakim] I don't understand 'what's the code', MoZ 15:04:09 [MoZ] Zakim, what's the code? 15:04:12 [Zakim] the conference code is 97762 (tel:+1.617.761.6200 tel:+33.4.89.06.34.99 tel:+44.117.370.6152), MoZ 15:04:32 [Zakim] + +1.413.624.aaaa 15:04:38 [Norm] Zakim, aaaa is Norm 15:04:38 [Zakim] +Norm; got it 15:04:50 [Norm] Present: Paul, Norm, Vojtech 15:04:53 [Zakim] +MoZ 15:05:44 [Norm] Present: Paul, Norm, Vojtech, Mohamed 15:05:55 [Norm] Topic: Accept this agenda? 15:05:55 [Norm] -> 15:06:07 [alexmilowski] alexmilowski has joined #xproc 15:06:17 [Norm] Accepted 15:06:23 [Norm] Topic: Accept minutes from the previous meeting? 15:06:23 [Norm] -> 15:06:27 [Zakim] +Alex_Milows 15:06:28 [Norm] Accepted 15:06:32 [Norm] Present: Paul, Norm, Vojtech, Mohamed, Alex 15:06:43 [Norm] Topic: Next meeting: telcon 22 Oct 2009 15:06:50 [Norm] No regrets heard. 15:07:13 [Norm] Paul gives regrets for 29 Oct 15:08:19 [Norm] Topic: W3C Technical Plenary 15:08:35 [Norm] Expected: Norm, Henry, Alex, Vojtech, correct? 15:09:02 [Norm] Norm: I believe I requested a phone, we'll hook up Zakim for you if we can. 
15:10:00 [Norm] Alex gives regrets for TPAC after all 15:10:30 [ht] Aw p1sh :-( 15:11:08 [Norm] Topic: Versioning and forwards compatibility 15:11:27 [Norm] 0-> 15:11:47 [Norm] Norm attempts to summarize where we stand. 15:12:10 [Norm] Norm: The status quo allows very limited forwards-compatibility and only if new step declarations are loaded from the W3C server. 15:12:20 [Norm] Norm: You can't just load a V2 pipeline in a V1 processor and expect it to work. 15:13:20 [Norm] Norm: The desire has been expressed, from several camps, to provide a more flexible story. 15:13:36 [Norm] Norm: They want something that will "just work" 15:14:51 [Norm] Norm: And the ability to use p:choose to select between two different flavors of pipeline, for V1 or V2. 15:15:12 [Norm] Norm: And they don't want to have to hit the W3C web server for new decls to make it work. 15:16:05 [Norm] Norm: The last point to make is, if you don't do versioning "right" in V1, it's very hard to get it right later. 15:16:54 [Norm] Norm: Do we care? Are we going to try to address this in V1? 15:17:21 [Norm] No one is heard to argue for the status qo. 15:17:24 [Norm] s/qo./quo./ 15:18:00 [Norm] Mohamed: Let's go as far as possible in the investigation since this is the last chance. But at the end, perhaps we'll stick to the status quo. 15:18:22 [ht] +1 to MoZ 15:18:36 [Norm] Norm: I think the proposals boil down to two camps: 15:18:55 [Norm] 1. Some sort of defaulting rules for what to do when you see steps/ports in the p: namespace that you don't recognize. 15:19:09 [Norm] 2. Some sort of static manipulation like XSLT's use-when 15:21:32 [Norm] Norm attempts to summarize the two flavors. 15:21:39 [Norm] Norm: Thoughts? 15:21:44 [Norm] Norm: Comments? 15:22:13 [Norm] Alex: Thinking about Norm's proposal for empty sequence on output. 15:22:36 [Norm] Norm: One weakness of my proposal is that it requires you to have the 2.0 declarations. 
15:23:03 [ht] As I said in email, I find Jeni's critique of that [being online] compelling 15:24:11 [Norm] Alex: The 1.0 processor isn't going to do the right thing... 15:24:36 [Norm] Vojtech: If we introduce defaulting that can work without the declarations, then we have to be a lot more relaxed about static errors. 15:25:21 [ht] That's why I prefer the use-when approach 15:25:30 [Norm] Alex: If you're trying to do this, the onus on the 1.0 processor is that it has to go retrieve the stuff. 15:26:11 [Norm] Norm: I'm (mostly) convinced by the offline arguments. 15:27:09 [Norm] Norm: The reality is, I think the use-when approach is the most practical. It balances ignoring things with user-control. 15:28:31 [Norm] ...It's the C preprocessor. 15:28:40 [Norm] Vojtech: Does this solve what we want to solve? 15:29:03 [Norm] ...This is really the forwards-compatible mode. When you evaluate these use-when conditions, you wind up with some sort of pipeline. 15:29:35 [Norm] Alex: One interpretation is that we have this static view and we're trying to adjust that. The other is that the data flow graph is the same in both situations, but what the steps can do is different. 15:30:17 [Norm] ...In the case where you produce an empty sequence, you can still hook things up. 15:30:22 [Norm] Norm: Right. If you ahve the declarations. 15:30:25 [Norm] s/ahve/have/ 15:32:35 [Norm] Norm natters on about the need to have declarations. 15:33:00 [Norm] Norm points out that the user community that wants us to fix this isn't going to be happy with the need to load declaratins. 15:33:10 [Norm] Alex: I'm not against the use-when approach. 15:34:50 [Norm] Norm wonders out loud about the requirement to use use-when; or can we have p:when that contains a step we don't recognize and still "compile" the pipeline. 15:35:12 [Norm] Alex: I don't think use-when is as simple as it is in the case of XSLT. 15:35:34 [Norm] Norm: And does this really solve the backwards compability problem? 
If you throw a random V2 pipeline at a V1 processor, what will it do? 15:38:07 [Norm] Norm: If a pipeline uses a V2 step unconditionally, there's nothing a V1 processor can do with it. 15:38:39 [Norm] ...So if an author wants to write a pipeline that can run in V1, he or she will have to use some sort of conditionality. 15:40:11 [MoZ] q+ 15:40:40 [Norm] ack moz 15:40:52 [Norm] Mohamed: My understanding is that we have three levels: 15:41:05 [Norm] ...1: mandate importing of step decls; without them then there are two levels: 15:41:14 [Norm] ...1a: primary ports 15:41:18 [Norm] ...1b: secondary ports 15:41:30 [Norm] ...My understanding is that only primary ports could be connected automatically. 15:41:32 [Norm] Norm: Right. 15:41:48 [Norm] Mohamed: So I think the lack of information that we have to compute the graph dependencies is only on primary ports. 15:41:54 [Norm] Norm: Right. 15:42:16 [Norm] Mohamed: On one branch, we say that a new primary port can never be added in V.next. 15:42:56 [Norm] ...Or, we say that if you add some primary port then we say that you have to make the connection explicit if you want to run it in V1. 15:43:24 [Norm] Vojtech: This is only about existing steps, not about new steps. 15:43:26 [Norm] Norm: True 15:43:57 [Norm] Mohamed: The same could be true for new steps, you have to make all the bindings explicit. 15:44:10 [Norm] Norm: So if you see a new step, you assume that it has no primary inputs or outputs. 15:44:45 [Norm] Mohamed: Yes. 15:44:47 [Norm] Norm: Interesting. 15:45:20 [Norm] Mohamed: As soon as we say that we don't want the user to import the step declarations, then the user will have to put new information into the pipeline to make it work in V1. 15:46:12 [Norm] Vojtech: For new V2 things that have have inputs like "iteration-source" that aren't recognized, then the pipeline will just fail. 15:48:14 [Norm] More discussion. 
15:48:40 [Norm] Mohamed: The only point of my proposal is that it moves the entire static validation into dynamic when you have p:choose or p:try 15:51:22 [Norm] More discussion 15:51:36 [Norm] Norm talks through the "propagate errors up" story again. 15:53:22 [Norm] Mohamed observes that we have to support p:choose/p:try dynamically. 15:56:35 [Norm] Vojtech: We have to decide if a new messages output port on p:xslt should work or should be a failure. 15:56:41 [Norm] ...I don't think we can really solve this. 15:56:52 [Norm] Alex: I'm having trouble following the different situations we want to handle. 15:57:34 [Norm] Norm: Is there consensus to explore a radical solution that involves defaulting, use-when, and the other mechanisms that we've discussed here? 15:58:05 [Norm] Proposal: Norm to attempt to write this up? 15:58:21 [ht] Yes 15:58:33 [Norm] Accepted. 15:58:35 [ht] Willing to help 15:59:29 [Norm] Norm: I think the solution has these general features: 15:59:35 [Norm] 1. Remove the requirement to load step declarations 15:59:55 [Norm] 2. Add use-when to allow authors to be explicitly dynamic 16:00:38 [Norm] 3. Add some defaulting rules to allow for dynamic selection (p:choose) of steps from different versions. (p:identity vs. p:identity-with-colors) 16:00:48 [Norm] 4. Add a version attribute to identify when forwards compatible behavior is required 16:02:16 [Norm] Norm: I don't want a V1 processor running a pipeline that it thinks is a V1 pipeline to be unable to statically reject broken pipelines. 16:02:30 [Norm] Norm: I suppose we could require a version attribute, like XSLT does. 16:03:12 [Norm] Topic: Any other business? 16:03:45 [Norm] ht, can you send email reminding us how short LC and CR are allowed to be? 16:03:54 [Norm] Adjourned. 
16:03:58 [Zakim] -Alex_Milows 16:04:04 [Zakim] -PGrosso 16:04:08 [ht] Will do 16:04:08 [Zakim] -Vojtech 16:04:11 [Zakim] -MoZ 16:04:12 [Zakim] -Norm 16:04:14 [Norm] RRSAgent, set logs world-visible 16:04:17 [Zakim] XML_PMWG()11:00AM has ended 16:04:18 [Norm] RRSAgent, draft minutes 16:04:18 [RRSAgent] I have made the request to generate Norm 16:04:19 [Zakim] Attendees were PGrosso, Jeroen, Vojtech, +1.413.624.aaaa, Norm, MoZ, Alex_Milows 16:05:48 [alexmilowski] alexmilowski has left #xproc 16:09:00 [PGrosso] PGrosso has left #xproc 18:00:22 [Zakim] Zakim has left #xproc 18:53:03 [Norm] RRSAgent, bye 18:53:03 [RRSAgent] I see no action items
http://www.w3.org/2009/10/15-xproc-irc
ncl_stitle man page

STITLE — Creates scrolled movie or video titles. It receives all input through the argument list.

Utility
    This routine is part of the Scrolled_title utility in NCAR Graphics. To see the overview man page for this utility, type "man scrolled_title".

Synopsis
    CALL STITLE (CRDS,NCDS,IYST,IYND,TMST,TMMV,TMND,MTST)

C-Binding Synopsis
    #include <ncarg/ncargC.h>
    void c_stitle (char *crds[], int ncds, int iyst, int iynd, float tmst, float tmmv, float tmnd, int mtst)

Description
- CRDS (an input array, dimensioned NCDS, of type CHARACTER*n, where "n" is greater than or equal to 21) is the "card input buffer". This array must be filled, prior to calling STITLE, either by internal manipulations or by reading n-character "cards". Each element of the array CRDS represents one line on the scroll (or, sometimes, a continuation of a previous line) and contains the following:
  - Columns 1-5: MX, the X coordinate of the line of text on the scroll. This is normally a value between 1 and 1024, inclusive. Exactly how the line of text is positioned relative to the specified X coordinate depends on the value of ICNT (in columns 14-15). If the value -9999 is used for MX, it indicates a continuation line: characters from columns 21 through "n" are just appended to the characters from the previous card to form the line of text. Any number of continuation cards may be used, but the total number of characters in a line of text must not be greater than 512. Trailing blanks are omitted from each card, including those that are followed by a continuation card; thus, if there are to be blanks between the characters from one card and the characters from a continuation card, those blanks must be placed in columns 21 and following of the continuation card. On a continuation card, columns 6-20 are ignored.
  - Columns 6-10: MY, the Y coordinate of the line of text on the scroll. MY may range from -9999 to 99999.
  - Columns 11-13: ICLR, the index of the color to be used for the line of text. If this field is blank, the default foreground color specified by the value of the internal parameter 'FGC' will be used.
  - Columns 14-15: ICNT, the centering option:
    - 0 means "start the text at MX".
    - 1 means "center the text about MX".
    - 2 means "end the text at MX".
  - Columns 16-20: SIZE, the desired size of the characters to be used in writing the line. SIZE is given as a multiplier of a default height specified by the value of the internal parameter 'PSZ', the default value of which is 21 (out of 1024). Values of SIZE from .75 to 2.5 are recommended.
  - Columns 21-n: Text for this line (or for continuation of a line when MX = -9999).
- NCDS (an input expression of type INTEGER) is the dimension of the array CRDS (i.e., the number of card images).
- MTST (an input expression of type INTEGER) selects a "practice" run:
  - 0 means "real run".
  - 1 means "practice run".
  During real runs, frames are created for the fade-in sequence (if the user has turned on fade-in by setting the internal parameter 'FIN' non-zero), the stationary sequence at the start (if TMST is non-zero), the scrolling time (if TMMV is non-zero), the stationary sequence at the end (if TMND is non-zero), and the fade-out sequence (if the user has turned on fade-out by setting the internal parameter 'FOU' non-zero). During practice appropriate value.)

C-Binding Description
    The C-binding argument descriptions are the same as the FORTRAN argument descriptions.

Usage
    STITLE takes input through its argument list and generates graphic output that moves a body of text up through the viewing window. This is done by outputting the number of frames required to generate a movie sequence of the duration specified by you. At each frame, STITLE skips plotting lines of text that are completely outside of the viewing window and clips those that are partially outside the window.

Examples
    Use the ncargex command to see the following relevant examples: fslfont, slex01, slex02, tstitl.
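The fixed-column card layout in the Description is easy to get wrong by hand. The helper below is our illustration, not part of NCAR Graphics: the field widths 5, 5, 3, 2 and 5 come from the layout above, while the "%5.2f" rendering of SIZE is an assumption (any text that reads as the right number in columns 16-20 will do).

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Build one CRDS "card" image: columns 1-5 MX, 6-10 MY, 11-13 ICLR,
 * 14-15 ICNT, 16-20 SIZE, and 21 onward the text of the line. */
static void make_card(char *card, size_t cardlen,
                      int mx, int my, int iclr, int icnt,
                      double size, const char *text)
{
    snprintf(card, cardlen, "%5d%5d%3d%2d%5.2f%s",
             mx, my, iclr, icnt, size, text);
}
```

A card for a centered line at MX=512, MY=100, color 1, at 1.5 times the default height would then be built with make_card(card, sizeof card, 512, 100, 1, 1, 1.5, "HELLO").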
Access
    To use STITLE or c_stitle, load the NCAR Graphics libraries.

See Also
    ftitle, scrolled_title, scrolled_title_params, slgeti, slgetr, slogap, slrset, slseti, slsetr, ncarg_cbind.
    Hardcopy: NCAR Graphics Fundamentals, UNIX Version; User's Guide for NCAR GKS-0A Graphics

Copyright
    University Corporation for Atmospheric Research
    The use of this Software is governed by a License Agreement.
https://www.mankier.com/3/ncl_stitle
Guessing Game - Statistics in Java

Hello! I'm a new programmer and I got stuck with my task. I'm trying to create a Guessing Game. The game randomizes a number between 1 and 100 and lets the user guess. When correct, the program asks if the player wants to play a new game or return to the menu. While this is happening the program will also count the guesses and the number of games played, for statistics.

The menu consists of:
1. Play
2. Statistics
3. Exit

I've got the game and exit part working alright. It is the statistics that is giving me a hard time. The statistics consist of:
Number of games played
The lowest number of guesses
The highest number of guesses
The average number of guesses

The average is no problem: NumberOfGuesses / NumberOfGames. The problem is the lowest and highest number of guesses. The program must keep track of my best game and my worst game, even if I play 50 games. Here is where I'm stuck. And I'm not sure that my main class is correct, looks awfully empty. It works though. Appreciate help!

MAIN CLASS:
Code:
package thegame;

/**
 * @author Emil
 */
public class TheGame {

    /**
     * @param args the command line arguments
     */
    public static void main(String[] args) {
        newGame theGame = new newGame();
        theGame.showMainMenu();
        theGame.playGame();
        theGame.showStatistics();
    }
}
THE GAME CLASS. Here I have included all my methods for new game, statistics, menu, exit etc.
Code:
package thegame;

import java.util.Random;
import java.util.Scanner;

/**
 * @author Madeleine
 */
public class newGame {

    //Initialize variables
    int numberOfGames = 0;
    int numberOfGuesses = 0;
    //int counter1;
    //int counter2;
    //private static final int MIN_NUMBER = 1;
    //private static final int MAX_NUMBER = 100;
    //double avarageGuesses;
    int highestGuesses;
    int lowestGuesses;
    //int currentGuess = 0;

    //Method for the main menu.
    public void showMainMenu() {
        System.out.println("Welcome to The Game. Choose an option.");
        System.out.println("1. New Game");
        System.out.println("2. Statistics");
        System.out.println("3. Exit");
        System.out.println();

        int menuOption;
        Scanner input = new Scanner(System.in);
        System.out.print("Selection: ");
        menuOption = input.nextInt();
        System.out.println();

        if (menuOption < 1 || menuOption > 3) {
            System.out.println("Invalid Menu Selection!");
            System.out.println("Please make another selection.");
            showMainMenu();
        }

        //Reads users input and display the method chosen. If not 1,2 or 3,
        //error message is displayed.
        if (menuOption == 1) playGame();
        if (menuOption == 2) showStatistics();
        if (menuOption == 3) exitGame();
        //End Method
    }

    //Method for the game.
    public void playGame() {
        //Variables and scanner.
        boolean win = false;
        Random rand = new Random();
        int theNumber = rand.nextInt(100);
        int guess;
        Scanner input = new Scanner(System.in);

        System.out.println("Welcome. I would like to play a game. Guess a number"
                + " between 1 and 100!");

        //While guess is not right, loop. When guess is right, end loop.
        while (win == false) {
            guess = input.nextInt();
            while (guess < 0 || guess > 100) {
                System.out.println("Please type a number between 1 and 100."
                        + " Don't worry, this guess doesn't count.");
                System.out.println("Try again:");
                guess = input.nextInt();
            }
            if (guess == theNumber) {
                win = true;
                numberOfGuesses++;
            }
            //Adding a guess each time a user guesses wrong. If user guess under 0
            //or over 100, error message is displayed and guess is not counted.
            else if (guess > theNumber) {
                System.out.println("The number guessed is too high!"
                        + " Guess again:");
                numberOfGuesses++;
            } else if (guess < theNumber) {
                System.out.println("The number guessed is too low!"
                        + " Guess again:");
                numberOfGuesses++;
            }
        }

        //Game end. +1 for numberOfGames
        System.out.println("You beat the game. Congratulations!");
        System.out.printf("\nThe number was indeed %d.\n", theNumber);
        numberOfGames++;

        System.out.println("Want to start a new game? Type in 1 to start a new game"
                + " or type 2, to return to the main menu.");
        int playAgain = input.nextInt();

        //Asks if user will play again or return to menu.
        while (playAgain > 2 || playAgain < 0) {
            System.out.println("Choose a number between 1 and 2.");
            playAgain = input.nextInt();
        }
        if (playAgain == 1) playGame();
        else showMainMenu();
        //Ends Method
    }

    //Method for statistics.
    public void showStatistics() {
        //countGuesses();
        Scanner input = new Scanner(System.in);
        if (numberOfGames == 0) {
            System.out.println("Please, play a game before statistics can"
                    + " be shown.");
            showMainMenu();
        }

        //Shows statistics.
        System.out.println("Statistic:");
        System.out.println();
        System.out.println("Played games:" + numberOfGames);
        System.out.println("Number of Guesses:" + numberOfGuesses);
        System.out.println("Lowest number of guesses:" + lowestGuesses);
        System.out.println("Highest number of guesses: " + highestGuesses);
        double avarageGuesses;
        avarageGuesses = numberOfGuesses / numberOfGames;
        System.out.printf("Avarage number of guesses: %.1f", avarageGuesses);
        System.out.println();

        System.out.println("Please, type 1 to return to the main menu."
                + "\nPress 2 to share result on Facebook!\n");

        //Choice to return to menu or "post on facebook".
        int returnMenu;
        returnMenu = input.nextInt();
        while (returnMenu < 1 || returnMenu > 2) {
            System.out.println("Please, type 1 or 2.");
            returnMenu = input.nextInt();
        }
        if (returnMenu == 1) showMainMenu();
        if (returnMenu == 2)
            System.out.println("Open your broswer. Log on Facebook and type"
                    + " that stuff in bro! You're awesome!");
        System.out.println("And here is the menu again, please play"
                + " one more time!");
        System.out.println();
        showMainMenu();
        //If user chooses 2, it goes to menu after a short message.
        //End Method
    }

    //Method to exit game.
    public void exitGame() {
        System.out.println("Are you sure that you want to exit The Game?"
                + "\nType YES to exit and NO to return to the main menu.\n");
        Scanner input = new Scanner(System.in);
        String exit;
        exit = input.next();
        exit = exit.toUpperCase();

        //Prompt user to input YES or NO. Also ignores case-sensitivity.
        while (!(exit.equals("YES") || (exit.equals("NO")))) {
            System.out.println("Please type Yes or No.");
            exit = input.next();
            exit = exit.toUpperCase();
        }
        if (exit.equals("YES"))
            System.out.println("The game has ended. Please come again!");
        System.exit(0);
        if (exit.equals("NO")) showMainMenu();
    }//End Method

    //End Class
}
Dangit, I had such a nice long one typed up here; stupid back mouse button >.>. Remind me to come back in case I forget since these mark them as read. This looks like homework, so I can't give the answer, but can provide ideas; it's quite simple: keep a running least and most amount of guesses as a property of the class. When verifying and completing the game, check and see if this game's score is better/worse than one of them, and if so, simply replace it. Most of it's in the playGame() method, but you'll need to do the one for statistics to show them as well.

Solved it! Thank you for the help, it was the best =). As you said, simple. I just forgot to initialize numberOfGuesses to 0 every time I start a new game.

Alright, I'm back with a new problem. Still the same program. I should implement the menu as a method showMainMenu() that returns an integer depending on the user's choice. The method showMainMenu() is found in the GAME class.
And here is what I got so far:

Code:
public static void main(String[] args) {
    newGame test = new newGame();
    int callMethod = test.showMainMenu();
    while (callMethod == 1) {
        test.playGame();
    }
    if (callMethod == 2) test.showStatistics();
}

When I run this, it seems to be working. The game plays out fine, and the statistics show a friendly error message if numberOfGames == 0. The problem is that when I now use showMainMenu(); anywhere in my code, I get an error. The menu pops up like it should, but any choice I make just gives me errors. I think it has something to do with the fact that the menu just returns an integer, and I can't seem to make my methods go back to main after they are executed. Anyone have an idea?

And as the previous poster wrote, this is an assignment. So I would enjoy tips but not the entire solution. I do appreciate quick answers, since it is due on Sunday.
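The replies above hint at a pattern rather than a full solution. As a generic, hypothetical sketch (deliberately not the assignment's code), a menu method can simply return the user's choice while the caller owns the dispatch loop, so control always comes back to the menu after each action finishes. The method and action names here are illustrative stand-ins:

```java
// Generic menu-dispatch sketch. The menu only RETURNS a choice; the
// caller's loop decides what to do with it and when to stop.
public class MenuDemo {

    static String dispatch(int choice) {
        switch (choice) {
            case 1:  return "play";        // here you would call playGame()
            case 2:  return "statistics";  // here you would call showStatistics()
            case 3:  return "exit";        // here you would call exitGame()
            default: return "invalid";
        }
    }

    public static void main(String[] args) {
        // Stands in for repeated showMainMenu() return values read from the user.
        int[] simulatedChoices = {1, 2, 3};
        for (int choice : simulatedChoices) {
            String action = dispatch(choice);
            System.out.println(action);
            if (action.equals("exit")) {
                break; // the only way out of the loop is an explicit exit choice
            }
        }
    }
}
```

The key idea is that no action method ever calls the menu back; when an action returns, the loop in main naturally shows the menu again.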
http://www.codingforums.com/java-and-jsp/303919-guessing-game-statistics-java.html
/*
 * ----------------------------------------------------------------------------
 * "THE BEER-WARE LICENSE" (Revision 42):
 * <phk@FreeBSD.org> wrote this file. As long as you retain this notice you
 * can do whatever you want with this stuff. If we meet some day, and you think
 * this stuff is worth it, you can buy me a beer in return. Poul-Henning Kamp
 * ----------------------------------------------------------------------------
 *
 * $FreeBSD: src/sys/i386/i386/elan-mmcr.c,v 1.6.2.1 2002/09/17 22:39:53 sam Exp $
 * $DragonFly: src/sys/i386/i386/elan-mmcr.c,v 1.7 2004/05/19 22:52:57 dillon Exp $
 * The AMD Elan sc520 is a system-on-chip gadget which is used in embedded
 * kind of things, see for instance, and it has a few quirks
 * we need to deal with.
 * Unfortunately we cannot identify the gadget by CPUID output because it
 * depends on strapping options and only the stepping field may be useful
 * and those are undocumented from AMDs side.
 *
 * So instead we recognize the on-chip host-PCI bridge and call back from
 * sys/i386/pci/pci_bus.c to here if we find it.
 */

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/conf.h>
#include <sys/proc.h>
#include <sys/sysctl.h>
#include <sys/time.h>

#include <machine/md_var.h>

#include <vm/vm.h>
#include <vm/pmap.h>

uint16_t *elan_mmcr;

#if 0

static unsigned
elan_get_timecount(struct timecounter *tc)
{
    return (elan_mmcr[0xc84 / 2]);
}

static struct timecounter elan_timecounter = {
    elan_get_timecount,
    0,
    0xffff,
    33333333 / 4,
    "ELAN"
};

#endif

void
init_AMD_Elan_sc520(void)
{
    u_int new;
    int i;

    if (bootverbose)
        printf("Doing h0h0magic for AMD Elan sc520\n");
    elan_mmcr = pmap_mapdev(0xfffef000, 0x1000);

    /*-
     * The i8254 is driven with a nonstandard frequency which is
     * derived thusly:
     *   f = 32768 * 45 * 25 / 31 = 1189161.29...
     * We use the sysctl to get the timecounter etc into whack.
     */

    new = 1189161;
    i = kernel_sysctlbyname("machdep.i8254_freq",
        NULL, 0,
        &new, sizeof new,
        NULL);
    if (bootverbose)
        printf("sysctl machdep.i8254_freq=%d returns %d\n", new, i);

#if 0
    /* Start GP timer #2 and use it as timecounter, hz permitting */
    elan_mmcr[0xc82 / 2] = 0xc001;
    init_timecounter(&elan_timecounter);
#endif
}


/*
 * Device driver initialization stuff
 */

static d_open_t elan_open;
static d_close_t elan_close;
static d_ioctl_t elan_ioctl;
static d_mmap_t elan_mmap;

#define CDEV_MAJOR 100      /* Share with xrpu */
static struct cdevsw elan_cdevsw = {
    /* name */      "elan",
    /* maj */       CDEV_MAJOR,
    /* flags */     0,
    /* port */      NULL,
    /* clone */     NULL,

    /* open */      elan_open,
    /* close */     elan_close,
    /* read */      noread,
    /* write */     nowrite,
    /* ioctl */     elan_ioctl,
    /* poll */      nopoll,
    /* mmap */      elan_mmap,
    /* strategy */  nostrategy,
    /* dump */      nodump,
    /* psize */     nopsize
};

static int
elan_open(dev_t dev, int flag, int mode, struct thread *td)
{
    return (0);
}

static int
elan_close(dev_t dev, int flag, int mode, struct thread *td)
{
    return (0);
}

static int
elan_mmap(dev_t dev, vm_offset_t offset, int nprot)
{
    if (offset >= 0x1000)
        return (-1);
    return (i386_btop(0xfffef000));
}

static int
elan_ioctl(dev_t dev, u_long cmd, caddr_t arg, int flag, struct thread *td)
{
    return (ENOENT);
}

static void
elan_drvinit(void)
{

    if (elan_mmcr == NULL)
        return;
    printf("Elan-mmcr driver: MMCR at %p\n", elan_mmcr);
    cdevsw_add(&elan_cdevsw, 0, 0);
    make_dev(&elan_cdevsw, 0, UID_ROOT, GID_WHEEL, 0600, "elan-mmcr");
    return;
}

SYSINIT(elan, SI_SUB_PSEUDO, SI_ORDER_MIDDLE + CDEV_MAJOR, elan_drvinit, NULL);
http://www.dragonflybsd.org/cvsweb/src/sys/i386/i386/Attic/elan-mmcr.c?f=h;content-type=text%2Fx-cvsweb-markup;ln=1;rev=1.7
In this problem, we are given a number N. Our task is to create a program to find the last two digits of the Nth Fibonacci number in C++. We need to find the last two digits (i.e. the two least significant digits) of the Nth Fibonacci number.

Let's take an example to understand the problem,

Input: N = 120
Output: 40

A simple solution would be using the direct Fibonacci recurrence to find the Nth term, but computing the full term is not feasible when N is a large number. To overcome this, we will use a property of the Fibonacci series: its last two digits repeat with a period of 300 terms (the Pisano period for modulus 100). For example, the last two digits of the 75th term are the same as those of the 975th term. This means that computing terms up to index 300 covers all possible two-digit endings; to find which term to use, we take N mod 300.

#include <iostream>
using namespace std;

// Returns the Nth Fibonacci number modulo 100. Reducing modulo 100 inside
// the loop keeps values small and avoids overflowing long int, which the
// plain sum would do well before N = 300.
long int fibo(int N) {
   long int a = 0, b = 1, c;
   for (int i = 1; i <= N; i++) {
      c = (a + b) % 100;
      a = b;
      b = c;
   }
   return a; // a holds F(N) % 100, and covers N = 0 and N = 1 correctly
}

int findLastTwoDigitNterm(int N) {
   N = N % 300; // the last two digits repeat with period 300
   return fibo(N);
}

int main() {
   int N = 683;
   cout << "The last two digits of " << N
        << "th Fibonacci term are " << findLastTwoDigitNterm(N);
   return 0;
}

The last two digits of 683th Fibonacci term are 97
https://www.tutorialspoint.com/program-to-find-last-two-digits-of-nth-fibonacci-number-in-cplusplus
The problem statement is: given an array of n integers, find all the pairs of numbers in the array whose difference is a constant k. I will post three methods. The first is the brute-force method and has runtime complexity O(n²); the next has runtime complexity O(n * log(n)); and the last has runtime complexity O(n).

The O(n²) solution

This is the brute-force solution: check each pair in the list for difference k. Fix the ith element of the list and iterate over the list from j = (i + 1) up to j = (n - 1), where i = 0, 1, 2, ..., (n - 1) and n is the number of elements in the list, and check if the numbers at positions i and j differ by k. Here is the source code.

#include <stdio.h>
#include <stdlib.h>

#define ABS(x) (((x)<0)?-(x):(x))

int main (void)
{
  int *arr, i, j, n, k, count = 0;

  scanf (" %d %d", &n, &k);
  arr = malloc (sizeof (int) * n);
  for (i=0; i<n; i++)
  {
    scanf (" %d", &arr[i]);
  }

  /* brute force */
  for (i=0; i<n; i++)
  {
    for (j=i + 1; j<n; j++)
    {
      if (ABS (arr[j] - arr[i]) == k)
      {
        count++;
      }
    }
  }
  printf ("%d\n", count);

  free (arr);
  return 0;
}

Each number is fixed in turn, and the rest of the list starting from that index is scanned, so the growth of the execution time for this method is clearly O(n²). ABS takes the absolute difference, because each pair (a, b) is compared only once: if a - b is -k, we would otherwise miss the pair, as there is no separate b - a comparison.

The O(n * log(n)) solution

In this method, we first sort the list using any O(n * log(n)) sorting algorithm. I am assuming that the numbers are sorted in non-decreasing order. After sorting, we know one fact: if j > i and arr[j] - arr[i] > k, then there exists no index m > j with arr[m] - arr[i] = k. This will help us search. The simple way is, after we fix an element i, to linearly search for its pair.
We search whether arr[i] + k is present in the array segment from j = (i + 1) up to j = (n - 1). If it is present, we have a pair; if not, we don't. This alone won't be O(n * log(n)), as it may still involve scanning to the end of the list for each element in the worst case. But it will definitely be much better than the previous solution, as we immediately break at the point where the difference goes beyond k.

We can improve on the previous idea by replacing linear search with binary search. In this case, for a location i in the sorted array, search for arr[i] + k in the array range j = (i + 1) to j = (n - 1). If it is found, we have a pair; if not, we don't.

If k is small and n is large, the linear search may reach matches and failures faster; if k is large, binary search will be faster. Using binary search guarantees each lookup takes O(log(n)) time, so for n elements the search for pairs with difference k completes in O(n * log(n)) time. I will show the binary-search implementation of this problem below. I have used the C standard library qsort and bsearch functions to quicksort and binary-search the list.

#include <stdio.h>
#include <stdlib.h>

static int compare (const void *a, const void *b)
{
  return *((int *) a) - *((int *) b);
}

int main (void)
{
  int i, n, k, *arr, count = 0, key;

  scanf (" %d", &n);
  scanf (" %d", &k);
  arr = malloc (sizeof (int) * n);
  for (i=0; i<n; i++)
  {
    scanf (" %d", &arr[i]);
  }

  qsort (arr, n, sizeof (int), compare);

  for (i=0; i<n; i++)
  {
    key = arr[i] + k;
    if (bsearch (&key, arr + i + 1, n - i - 1, sizeof (int), compare) != NULL)
    {
      count++;
      printf ("(%d, %d)\n", arr[i], arr[i] + k);
    }
  }
  printf ("Total: %d\n", count);

  free (arr);
  return 0;
}

The O(n) solution

To find the pairs of numbers in a list with difference k, we have to scan the list at least once, selecting one element in each iteration and doing some processing with it. This scan by itself is O(n).
To solve this problem in O(n), whatever we do with each element selected during the scan needs to take O(1) time. As long as the per-element processing grows no faster than O(1), the entire process of finding the k-difference pairs stays O(n). Therefore the answer is clear: a hash table comes to the rescue.

The process is still the same as before. We populate the list's numbers into the hash table, then iterate over each element in the hash table. For each element e in the hash table, we check whether (e + k) is also present in the table. If yes, we have a pair; if no, we don't. Insertion and search in a hash table are O(1) in the average case. One caveat: the hash table stores each value only once, so duplicate values collapse into a single entry, and for k = 0 every value trivially matches itself; this variant therefore reports pairs of distinct values rather than all index pairs.

Let's code it. I will use C++ to implement this, as the STL already has containers that use hash tables. The existing map container in the C++ STL is implemented with some kind of self-balancing binary search tree, most probably a red-black tree (that's what the implementation on my system uses), so using map will only get you O(log(n)) insertion and search. Instead I will use the unordered_map from the C++ STL. Note that unordered_map is implemented using hash tables and was introduced in the C++11 standard, so to compile the code you need the appropriate compiler switches if your compiler does not compile with the C++11 standard by default. For gcc you need to add the -std=c++11 or at least the -std=c++0x switch.

/* Point to be noted:
 * map container insertion and search is not O(1), unordered_map is.
 * unordered_map container class is introduced from C++11 standard,
 * therefore to compile you need -std=c++0x or -std=c++11.
 */
#include <iostream>
#include <unordered_map>

using namespace std;

int main (void)
{
  unordered_map<int, bool> hash;
  int n, k, val, count = 0;

  cin >> n;
  cin >> k;

  /* Load the values in hash */
  for (int i=0; i<n; i++)
  {
    cin >> val;
    hash.insert (pair<int, bool> (val, true));
  }

  /* For each value inside the hash, add `k' to it and see if the sum
   * exists in the hash. If yes, we have a pair; otherwise, we have no pair.
   */
  for (unordered_map<int, bool>::iterator it = hash.begin (); it != hash.end (); it++)
  {
    if (hash.find (it->first + k) != hash.end ())
    {
      cout << "(" << it->first << ", " << (it->first + k) << ")" << endl;
      count++;
    }
  }
  cout << "Pairs: " << count << endl;

  return 0;
}

And before I end, I will write the same in Perl (learning Perl :) ).

#!/usr/bin/perl -w

my $n = <STDIN>;
my $k = <STDIN>;
my %hash;
my $count = 0;
my $val;

for (my $i = 0; $i < $n; $i++)
{
  chomp ($val = <STDIN>);
  $hash{$val} = 1;
}

foreach my $key (keys %hash)
{
  if (exists ($hash{$key + $k}))
  {
    print ("($key, ", $key + $k, ")\n");
    $count++;
  }
}

print ("\nTotal pairs: $count\n");

For the Perl implementation, note that when entering values from the keyboard you should enter each number on its own line, without leading or trailing spaces or decimal points; I have not included any processing to parse numbers out of a line. One thing to note: when you press Enter after typing an integer, the value stored in $val carries a trailing newline character, and without care every key would go into the hash table with that trailing newline. Later, when we compute $val + $k, the result has no newline at the end, and we would never find a match. Therefore it is important to remove the trailing newline, if any, using the chomp function. So, that was it.
https://phoxis.org/2013/08/20/find-pairs-of-numbers-in-an-array-whose-difference-are-a-constant-k/
Created on 2004-11-27 21:37 by netvigator, last changed 2009-04-23 12:23 by gpolo.

Logged In: YES user_id=80475

Kurt, as far as I can tell, there is nothing in Tkinter that gives us any control over this. If you concur, please mark this as 3rd party and/or platform specific and close it.

Logged In: YES user_id=149084

Yes, if OP wants to pursue it, he should take it up with the Tk people:

Logged In: YES user_id=1167414

> Yes, if OP wants to pursue it, he should take it up with the Tk people:

1) Who is OP?
2) Is this ball in my court or someone else's?

Thanks, netvigator aka Rick Graves

Logged In: YES user_id=469548

OP = opening poster. So yes, the ball is in your court. :)

Logged In: YES user_id=1167414

I posted the "bug" on the Tk list as suggested. Today I got this:

begin quote
> Comment By: Jeffrey Hobbs (hobbs)
Date: 2004-12-24 11:25

Logged In: YES user_id=72656

This is not a bug, but rather just that Tk differentiates between the regular arrow up and keypad up on some systems, depending on how their system keymaps operate. The first is <Up> and the second is <KP_Up> on Linux, but both are <Up> on Windows. This has always been the case for KP_Enter as well. The fact that Windows doesn't separate these is by design, but has also caused people to want them separated (see TIP). IOW, the bindings should be on <Up> and <KP_Up> if they are to be considered equivalent in an app. This is best handled by using virtual events (like <<Up>>) and adding the specific event names that you want to apply to it. Please filter this back to the other reports.

----------------------------------------------------------------------

You can respond by visiting:
end quote

Would someone please either reopen this or let me know what my next step should be.

Thanks, Rick

Logged In: YES user_id=149084

OK, thanks for the leg work. I'll take a further look. My keyboards are IBM Model M SpaceSavers. I don't do keypads...
:-)

Confirmed in trunk; the <KP_Up> (80) events show in a terminal if one adds some print-based debug help. Unfortunately this is not that easy for us. While we could add some code like this:

import Tkinter

text = Tkinter.Text()
text.event_add("<<Up>>", "<Key-Up>")
text.event_add("<<Up>>", "<Key-KP_Up>")
text.bind_class("Text", "<<Up>>", text.bind_class("Text", "<Key-Up>"))
text.pack()
text.mainloop()

it won't work as most would expect. I will be asking someone about all these unsupported commands.

Ah yes, here is something that would do what we wanted (for "up" only):

import Tkinter

def my_up(event):
    widget = event.widget
    if event.keysym == 'KP_Up' and event.state & 16:
        widget.tk.call('tk::TextInsert', widget, event.char)
        return "break"
    pos = widget.tk.call('tk::TextUpDownLine', widget, -1)
    widget.tk.call('tk::TextSetCursor', widget, pos)

text = Tkinter.Text()
text.event_add("<<Up>>", "<Key-Up>")
text.event_add("<<Up>>", "<Key-KP_Up>")
text.bind_class("Text", "<<Up>>", my_up)
text.pack()
text.mainloop()
http://bugs.python.org/issue1074333
Re: user creation with similar profile

You didn't tell us which Oracle version you are using. There are many workarounds for this. In 10g, you can simply use Data Pump to achieve this: export the schema without data, then import using the remap_schema option. If you have OEM, you can also use its "create like" feature. There may be other workarounds as well.

On 6/27/07, nilesh kumar <nileshkum_at_gmail.com> wrote:
>

--
Best Regards,
Syed Jaffar Hussain
Oracle ACE
8i, 9i & 10g OCP DBA
----------------------------------------------------------------------------------
"Winners don't do different things. They do things differently."
--
on Wed Jun 27 2007 - 02:13:53 CDT
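The Data Pump workaround mentioned above can be sketched on the command line roughly as follows. This is an illustrative sketch only: the usernames, password, directory object, and dump file name are placeholders, not values from this thread.

```shell
# Export only the metadata (no table rows) of the source schema
expdp system/password schemas=OLDUSER content=metadata_only \
      directory=DPUMP_DIR dumpfile=olduser_meta.dmp

# Re-create the same objects under a new schema name
impdp system/password remap_schema=OLDUSER:NEWUSER \
      directory=DPUMP_DIR dumpfile=olduser_meta.dmp
```

content=metadata_only is what makes this an "export the schema without data," and remap_schema retargets every exported object to the new user on import.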
http://www.orafaq.com/maillist/oracle-l/2007/06/27/1362.htm
Learn how to set up your Adafruit Feather HUZZAH ESP8266 with Ubidots.

Begin by connecting your Feather HUZZAH ESP8266 to your computer's USB port to configure the device.

1. Download the Arduino IDE if you do not already have it.

1a. Open the Arduino IDE, select Files -> Preferences and enter the URL below into the Additional Board Manager URLs field. You can add multiple URLs by separating them with commas:

NOTE: If you're a Mac user, please note that some configurations in the Arduino software are slightly different from Windows and Linux. Further, you may have to install the following driver to be able to upload to your NodeMCU.

2. Open Boards Manager from Tools -> Board -> Boards Manager and install the esp8266 platform. To find the correct device quickly, search for ESP8266 in the search bar.

3. Select your Adafruit Feather HUZZAH ESP8266 from the Tools > Board menu.

4. Additionally, we need to be able to communicate with the Feather HUZZAH ESP8266 by selecting the proper COM port. Go to Tools > Port and select the appropriate port for your device.

5. To keep everything running fast and smooth, make sure the upload speed is set to 115200. Go to Tools > Upload Speed > 115200.

IMPORTANT NOTE: Don't forget you will also need to install the SiLabs CP2104 driver to be able to program the board properly.

6. Close and REBOOT the Arduino IDE.

7. Now with everything configured, UPLOAD the Blink sketch to verify that everything is working properly. Go to File > Examples > Basics > Blink and compile the code.

8. Once the code is properly uploaded, a red LED will start blinking.

9. Download and install the Ubidots library. For a detailed explanation of how to install libraries using the Arduino IDE, refer to this guide.
With the following example, you will be able to send simulated random readings from the Feather HUZZAH ESP8266 to Ubidots.

1. To begin posting values to Ubidots, open the Arduino IDE and paste the sample code below. Once you have pasted the code, be sure to assign the following parameters: your Ubidots token, your Wi-Fi SSID, and your Wi-Fi password.

/****************************************
 * Include Libraries
 ****************************************/
#include "Ubidots.h"

/****************************************
 * Define Instances and Constants
 ****************************************/
const char* UBIDOTS_TOKEN = "...";  // Put here your Ubidots TOKEN
const char* WIFI_SSID = "...";      // Put here your Wi-Fi SSID
const char* WIFI_PASS = "...";      // Put here your Wi-Fi password

Ubidots ubidots(UBIDOTS_TOKEN, UBI_HTTP);

/****************************************
 * Auxiliar Functions
 ****************************************/
// Put here your auxiliar functions

/****************************************
 * Main Functions
 ****************************************/
void setup() {
  Serial.begin(115200);
  ubidots.wifiConnect(WIFI_SSID, WIFI_PASS);
  // ubidots.setDebug(true);  // Uncomment this line for printing debug messages
}

void loop() {
  float value1 = random(0, 9) * 10;
  float value2 = random(0, 9) * 100;
  float value3 = random(0, 9) * 1000;
  ubidots.add("Variable_Name_One", value1);  // Change for your variable name
  ubidots.add("Variable_Name_Two", value2);
  ubidots.add("Variable_Name_Three", value3);

  bool bufferSent = false;
  bufferSent = ubidots.send("feather-huzzah");  // Will send data to a device label that matches the device Id

  if (bufferSent) {
    // Do something if values were sent properly
    Serial.println("Values sent by the device");
  }

  delay(5000);
}

2. Verify your code within the Arduino IDE. To do this, look for the "check mark" icon in the top left corner of the Arduino IDE and press it to verify your code.

3. Upload the code into your Feather HUZZAH ESP8266. Then go to your Ubidots account: you will find a new device whose label matches the one sent in the code, "feather-huzzah", where you can visualize your data.
Maria Carlina Hernandez
Alexander
https://hackaday.io/project/31280-connect-your-feather-huzzah-esp8266-to-ubidots
How to Override Classes in PHP and Composer

If you're working with PHP, maybe using Laravel or some other framework, in this tutorial we'll cover how to override classes using Composer. This is helpful when you are using a library or package and want to override certain functionality, but you can't really edit the code directly.

Prerequisites

This tutorial assumes you already have a project that uses Composer, and thus a composer.json file.

But First, What's PSR-4?

PSR stands for PHP Standard Recommendation. PSR-4 specifies standards for namespaces, class names, etc. For example, let's say you have the following file structure in your project:

- app
  |
  |_ _ _ Model
       |
       |_ _ User.php

This is similar to the structure you'd have when using Laravel. PSR-4's standard says that the namespace should mirror the file structure exactly. So, inside User.php the namespace should be app\Model. However, when we're using Laravel we always capitalize App, so the namespace would be App\Model. How is that not breaking the standard? Well, that's because of the following lines in composer.json:

"autoload": {
    "psr-4": {
        "App\\": "app/",
        //....
    }
}

So, how does this work?

Autoloading with Composer

Using the autoload key, we can specify how to autoload certain files based on the specification, which in this case is psr-4. So, first, we add the autoload key, which is an object that has the key psr-4:

"autoload": {
    "psr-4": {
    }
}

Inside the psr-4 object, we'll have key-value pairs that specify how the files should be autoloaded. The key is the namespace to be loaded, and the value is where it should be loaded from. So, in the example of Laravel, the App\ namespace is the key and app/ is the value. That means that all classes in the App\ namespace should be loaded from the app directory.

How to Use This for Overriding Classes

Let's say we have a class Vendor\Model\User and we want to override it. First, the class that overrides it should be in a specific directory.
So, maybe you can create the path app/overrides to place your overriding classes in. It can be any path you want; it doesn't matter.

Let's say we created app/overrides/User.php. We want this class to override Vendor\Model\User. The first step is to make sure that app/overrides/User.php declares the same namespace:

namespace Vendor\Model;

You can then place whatever code you want inside the class.

Next, we need to let Composer know where to autoload the namespace Vendor\Model from. So, the key here should be Vendor\Model, and the value should be the path to the directory that holds the overriding class, which in our case is app/overrides:

"autoload": {
    "psr-4": {
        "Vendor\\Model\\": "app/overrides"
    }
}

That's it. After running composer dump-autoload to regenerate the autoloader, Vendor\Model will be autoloaded from app/overrides instead of the original place, and this way you can override any class you want.

Extra Options

Instead of the value being a string of the path the namespace should be loaded from, you can provide an array. This tells Composer that it should look for the classes in multiple places:

"Vendor\\Model\\": ["app/overrides", "src"]

You can also provide a fallback directory for any namespace:

"autoload": {
    "psr-4": {
        "": "app/overrides/"
    }
}

If you would like to connect and talk more about this article or programming in general, you can find me on my Twitter account @shahednasserr
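For completeness, here is a minimal sketch of what the overriding file might contain. The class body here is hypothetical; the only part that matters for the override to resolve is the namespace line:

```php
<?php
// app/overrides/User.php
// The namespace matches the vendor class being replaced,
// NOT this file's actual location on disk.
namespace Vendor\Model;

class User
{
    // Hypothetical replacement behavior; in practice, keep whatever
    // methods the original class's consumers expect.
    public function getName(): string
    {
        return 'overridden user';
    }
}
```

Because the file's namespace deliberately differs from its directory, Composer only finds it through the extra psr-4 mapping shown above.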
https://shahednasser.medium.com/how-to-override-classes-in-php-and-composer-aa44a2a068a5?responsesOpen=true&source=---------6----------------------------
Microservices are all the rage. They are talked about everywhere, and it seems like everyone wants them nowadays. There are probably as many implementations of them as there are words in this paragraph, and we'll add yet another one into the mix. But this comes from several implementations and years of experience developing enterprise grade microservice ecosystems for big clients. Now, I'm letting you in on the same techniques and best practices I've been using in the real world. And thus, you have the logic behind this book. I'm going to show you how to develop a powerful, flexible, and scalable microservice ecosystem, and hopefully along the way spark ideas for you to go off on your own endeavors and create even more. And we're not talking about some skimpy little web page or a single service; I've packed this book full of more microservices than you can shake a stick at, and I am sure your ideas will take shape and you will enhance this ecosystem to meet your needs. In this chapter, we will cover: - What a microservice is - What a microservice architecture is - Pros and cons of a microservice - Installing and an overview of Topshelf - Installing and an overview of RabbitMQ - Installing and an overview of EasyNetQ - Installing and an overview of Autofac - Installing and an overview of Quartz - Installing and an overview of Noda Time Ok, let's just go ahead and get this one out of the way. Let's start this book off by talking a bit about exactly what a microservice is, to us at least. Let's start with a simplistic visual diagram of what we're going to accomplish in this book. This diagram says it all, and if this looks too confusing, this might be a good place to stop reading! Let's next agree to define a microservice as an independently deployable and developable, small, modular service that addresses a specific and unique business process or problem, and communicates via a lightweight event-based, asynchronous, message-based architecture. 
A lot of words in that one I know, but I promise by this end of the book that the approach will make perfect sense to you. Basically, what we are talking about here is the Messages central component in the previous diagram. I know that some of you might be asking yourselves, what's the difference between a service and a microservice? That is one very good question. Lord knows I've had some very heated discussions from non-believers over the years, and no doubt you might as well. So, let's talk a bit about what a Service-Oriented Architecture (SOA) is. The SOA is a software design paradigm where services are the central focus. For the purposes of discussion and clarity, let's define a service as a discrete unit of functionality that can be accessed remotely and acted upon independently. The characteristics of a service in terms of a SOA are: - It represents a specific business function or purpose (hopefully) - It is self-contained - It can and should function as a black box - It may also be comprised of other associated services - There is a hard and dedicated contract for each service (usually) Some folks like to consider a microservice nothing more than a more formalized and refined version of an SOA. Perhaps in some ways, that could be the case. Many people believe that the SOA just never really formalized, and microservices are the missing formality. And although I am sure an argument could be made for that being true, microservices are usually designed differently, with a response-actor paradigm, and they usually use smaller or siloed databases (when permissible), and smaller and faster messaging protocols versus things like a giant Enterprise Service Bus (ESB). Let's take a moment and talk about the microservice architecture itself. Just as there is no one set definition for a microservice, there is also not one set architecture. What we will do is make a list of some of the characteristics that we view a microservice architecture to have. 
That list would then look something like this: - Each microservice can be deployed, developed, maintained, and then redeployed independently. - Each microservice focuses on a specific business purpose and goal and is non-monolithic. - Each microservice receives requests, processes them, and then may or may not send a response. - Microservices practice decentralized governance and in some cases, when permissible, decentralized data management. - Perhaps most importantly, at least in my mind anyways, I always design a microservice around failure. In fact, they are designed to fail. By following this paradigm, you will always be able to handle failures gracefully and not allow one failing microservice to negatively impact the entire ecosystem. By negatively impact, I mean a state where all other microservices are throwing exceptions due to the one errant microservice. Every microservice needs to be able to gracefully handle not being able to complete its task. - Finally, let's stay flexible and state that our microservice architecture is free to remain fluid and evolutionary. - No microservice talks directly to another microservice. Communication is always done in the form of messages. With all that in mind, we've now created our definition of a microservice and its architecture and characteristics. Feel free to adjust these as you or your situation sees fit. Remember, as C# developers we don't always have the luxury, save truly greenfield projects, to dictate all the terms. Do the best you can with the room you have to operate within. As an example, chances are you will have to work with the corporate database and their rules rather than a small siloed database as described earlier. It's still a microservice, so go for it! Let's run down some pros and cons of a microservice architecture. 
Here are a few of the positive points of a microservice architecture:

- They give developers the freedom to independently architect, develop, and deploy services
- Microservices can be developed in different languages if permitted
- Easier integration and deployment than traditional monolithic applications and services
- Microservices are organized around specific business capabilities
- When change is required, only the specific microservice needs to be changed and redeployed
- Enhanced fault isolation
- They are easier to scale
- Integration with external services is made easier

Here are a few negatives when considering a microservice architecture. Please keep in mind that negative does not equal bad, just information that may affect your decision:

- Testing can be more involved
- Duplication of effort and code can occur more often
- Product management could become more complicated
- Developers may have more work when it comes to communications infrastructure
- Memory consumption may increase

Let's take a look at someone who took a monolithic application, broke it down into components, and created a microservice-based system. The following is the story of Parkster; I think you will enjoy it!

A growing digital parking service from Sweden is right now breaking down their monolithic application towards microservices. Follow their story!

They want to see a world where you don't need to guesstimate the required parking time or stand in line waiting by a busy parking meter. It should be easy to pay for parking, for everyone, everywhere. Moreover, Parkster doesn't want the customer to pay more when using tools of the future; that's why there are no extra fees if you are using Parkster's app when parking:

Breaking up a tightly coupled monolithic application

Like many other companies, they reached the point where even attempting a single small code change felt risky; no one wants to add new code that could disrupt operations in some unforeseen way.
One day they had enough: the application had to be decoupled. "The biggest reason we moved from monolith to microservices was decoupling," said Anders Davoust, developer at Parkster.

Application decoupling

Breaking down their codebase has also given the software developers the freedom to use whatever technologies make sense for a particular service. Different parts of the application can evolve independently, whether they are written in different languages or run on a container orchestration platform. Now, Parkster's goal is to get rid of the old monolithic repo entirely, and focus on a new era where the whole system is built upon microservices.

The following is a list of concepts that relate to messaging:

- Producer: an application that creates and sends messages
- Consumer: an application that receives messages from a queue
- Queue: a buffer that stores messages until a consumer retrieves them
- Advanced Message Queuing Protocol (AMQP): the open standard messaging protocol that RabbitMQ implements
- Virtual host (vhost): a virtual host provides a way to segregate applications using the same RabbitMQ instance. Different users can have different access privileges to different vhosts, and queues and exchanges can be created so that they only exist in one vhost.

Throughout this book, we will be dealing a lot with message queues. You will also see them prevalent in the software we are developing. Message queues are how our ecosystem communicates, maintains separation of concerns, and allows for fluid and fast development. With that being said, before we get too far along into something else, let's spend some time discussing exactly what message queues are and what they do. Let's think about the functionality of a message queue. Queues are two-sided components: messages enter from one side and exit from the other. Thus, each message queue can establish connections on both sides; on the input side, a queue fetches messages from one or more exchanges, while on the output side, the queue can be connected to one or more consumers.
From the queue's point of view, being connected to more than one exchange with the same routing key is transparent, since the only thing that concerns the message queue itself is the incoming messages.

Put another way, the basic architecture of a message queue is simple. There are client applications called producers that create messages and deliver them to the broker (the message queue). Other applications, called consumers, connect to the queue and subscribe to the messages to be processed. An application can be a producer, a consumer, or both at the same time. Messages placed onto the queue are stored until a consumer retrieves them.

Breaking that down even further, the preceding diagram illustrates the following process:

- The user sends a PDF creation request to the web application
- The web application (the producer) sends a message to RabbitMQ, including data from the request, such as name and email
- An exchange accepts the messages from the producer application and routes them to the correct message queues for PDF creation
- The PDF processing worker (the consumer) receives the task and starts the processing of the PDF

Let's now look at some of the different message queue configurations that we can use. For now, let's think of a queue as an ordered collection or list of messages. In the diagrams that follow, we're going to use P to represent a producer, C to represent a consumer, and red rectangles to represent a queue. Here's our legend:

Let's start by taking the simplest of all possible scenarios. We have a single producer, which sends one or more messages (each message is one red block) to a single consumer, such as in the following diagram. Our next step up the difficulty ladder would be to have a single producer publish one or more messages to multiple consumers, such as in the following diagram. This is distributing tasks (work) among different workers, also sometimes referred to as the competing consumers pattern.
This means that each consumer will take one or more messages. Depending upon how the message queues are set up, the consumers may each receive a copy of every message, or alternate in their reception based upon availability. So, in one scenario, consumer one may take ten messages, consumer two may take five, then consumer one takes another ten. Alternatively, the messages that consumer one takes, consumer two does not get and vice versa: Next, we have the ever so famous publish/subscribe paradigm, where messages are sent to various consumers at once. Each consumer will get a copy of the message, unlike the scenario shown previously where consumers may have to compete for each message: Our next scenario provides us with the ability for a client to selectively decide which message(s) they are interested in, and only receive those. Using a direct exchange, the consumers are able to ask for the type of message that they wish to receive: If we were to expand this direct exchange map out a little bit, here's what our system might look like: A direct exchange delivers messages to queues based on a message routing key. The routing key is a message attribute added into the message header by the producer. The routing key can be seen as an address that the exchange is using to decide how to route the message. A message goes to the queue(s) whose binding key exactly matches the routing key of the message. The direct exchange type is useful when you would like to distinguish between messages published to the same exchange using a simple string identifier. Next, as you will see me use quite heavily in this book, our consumers can receive selected messages based upon patterns (topics) with what is known as a topic queue. Users subscribe to the topic(s) that they wish to receive, and those messages will be sent to them. Note that this is not a competing consumers pattern where only one microservice will receive the message. 
Any microservice that is subscribed will receive the selected messages. If we expand this one out a little bit, we can see what our system might look like:

Topic exchanges route messages to queues based on wildcard matches between the routing key and the routing pattern specified by the queue binding. Messages are routed to one or many queues based on a match between a message routing key and this pattern. The routing key must be a list of words, delimited by a period (.). The routing patterns may contain an asterisk (*) to match a word in a specific position of the routing key (for example, a routing pattern of agreements.*.*.b.* matches only routing keys where the first word is agreements and the fourth word is b), and a hash (#) to match zero or more words.

Finally, we have the request/reply pattern. This scenario will have a client subscribing to a message, but rather than consume the message and end there, a reply message is required, usually containing the status result of the operation that took place. The loop and chain of custody is not complete until the final response is received and acknowledged.

Now that you know all you need to know about message queues and how they work, let's fill in our initial diagram a bit more so it is more reflective of what we are doing, what we hope to accomplish, and how we expect our ecosystem to function. Although we will primarily be focusing on topic exchanges, we may occasionally switch to fanout, direct, and other exchange types. In the end, the visual that we are after for our ecosystem is this:

Let's start with a very simple message, the deployment messages:

public class DeploymentStartMessage
{
    public DateTime Date { get; set; }
}

public class DeploymentStopMessage
{
    public DateTime Date { get; set; }
}

As you can see, they are not overly complicated. What will happen is that we will have a DeploymentMonitor microservice. As soon as our deployment kicks off, we will send a DeploymentStartMessage to the message queue.
Our microservice manager will receive the message, and immediately disable tracking each microservice's health until the DeploymentStopMessage is received.

Note: Always include all your messages in the same namespace. This makes it much easier for EasyNetQ and its type name resolver to know where the messages are coming from. It also gives you a centralized location for all your messages and, lastly, prevents a lot of weird-looking exchange and queue names!

Now that we have shown you what a deployment message looks like, let's discuss what happens when you subscribe to a message. An EasyNetQ subscriber subscribes to a message type (the .NET type of the message class). Once the subscription to a type has been set up by calling the Subscribe method, a persistent queue will be created on the RabbitMQ broker and any messages of that type will be placed on the queue. RabbitMQ will send any messages from the queue to the subscriber whenever it is connected. To subscribe to a message, we need to give EasyNetQ an action to perform whenever a message arrives. We do this by passing the Subscribe method a delegate, such as this:

bus.Subscribe<MyMessage>("my_subscription_id", msg => Console.WriteLine(msg.Text));

Now, every time an instance of MyMessage is published, EasyNetQ will call our delegate and print the message's Text property to the console. EasyNetQ will create a unique queue on the RabbitMQ broker for each unique combination of message type and subscription ID. Each call to Subscribe creates a new queue consumer. If you call the Subscribe method twice with the same message type and subscription ID, you will create two consumers consuming from the same queue. RabbitMQ will then round-robin successive messages to each consumer in turn. This is great for scaling and work-sharing. Say you've created a service that processes a particular message, but it's getting overloaded with work.
Simply start a new instance of that service (on the same machine, or a different one) and, without having to configure anything, you get automatic scaling. If you call the Subscribe method twice with different subscription IDs but the same message type, you will create two queues, each with its own consumer. A copy of each message of the given type will be routed to each queue, so each consumer will get all the messages (of that type). This is great if you've got several different services that all care about the same message type.

As messages are received from queues subscribed to via EasyNetQ, they are placed on an in-memory queue. A single thread sits in a loop, taking messages from the queue and calling their action delegates. Since the delegates are processed one at a time on a single thread, you should avoid long-running synchronous IO operations. Return control from the delegate as soon as possible. SubscribeAsync allows your subscriber delegate to return a Task immediately and then asynchronously execute long-running IO operations. Once the long-running operation is complete, simply complete the Task.

All the subscribe methods return an ISubscriptionResult. It contains properties that describe the IExchange and IQueue used by the underlying IConsumer; these can be further manipulated using the advanced API, IAdvancedBus, if required. You can cancel a subscriber at any time by calling Dispose on the ISubscriptionResult instance or on its ConsumerCancellation property:

var subscriptionResult = bus.Subscribe<MyMessage>("sub_id", MyHandler);
subscriptionResult.Dispose();

This will stop EasyNetQ consuming from the queue and close the consumer's channel. It is equivalent to calling:

subscriptionResult.ConsumerCancellation.Dispose();

Note that disposing of the IBus or IAdvancedBus instance will also cancel all consumers and close the connection to RabbitMQ.
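To make the queue-per-(message type, subscription ID) rule concrete, here is a small language-agnostic simulation in Python. This is not EasyNetQ or RabbitMQ code; the Broker class and its names are invented for illustration. Consumers sharing a subscription ID compete for messages round-robin, while each distinct subscription ID gets its own queue and therefore a copy of every message:

```python
from collections import defaultdict
from itertools import cycle

class Broker:
    """Toy model of EasyNetQ's queue-per-(type, subscription id) behaviour."""
    def __init__(self):
        self.queues = defaultdict(list)  # (msg_type, sub_id) -> consumer handlers
        self.rr = {}                     # round-robin iterator per queue

    def subscribe(self, msg_type, sub_id, handler):
        key = (msg_type, sub_id)
        self.queues[key].append(handler)
        # Rebuild the round-robin cycle so new consumers join the rotation.
        self.rr[key] = cycle(self.queues[key])

    def publish(self, message):
        # Each unique (type, sub_id) queue gets a copy of the message;
        # consumers on the same queue receive messages round-robin.
        for (msg_type, sub_id), handlers in self.queues.items():
            if isinstance(message, msg_type):
                next(self.rr[(msg_type, sub_id)])(message)

broker = Broker()
received = []
broker.subscribe(str, "sub_a", lambda m: received.append(("a1", m)))
broker.subscribe(str, "sub_a", lambda m: received.append(("a2", m)))  # same queue: competes
broker.subscribe(str, "sub_b", lambda m: received.append(("b", m)))   # own queue: gets all

for msg in ["m1", "m2"]:
    broker.publish(msg)

# sub_a's two consumers alternate between messages; sub_b sees every message.
```

The same division of labor is what lets you scale a service just by starting another instance with the same subscription ID, while a service with its own subscription ID still observes the full message stream.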
Even though I can honestly say that I have developed interfaces that could accommodate any change made to both sides without ever modifying the interface, most people don't design to that extreme. There will, more likely than not, come a time when you will have to change a message to accommodate a new feature or request, and so on. Now we get into the issue of message versioning.

To enable support for versioned messages, we need to ensure the required components are configured. The simplest way to achieve this is as follows:

var bus = RabbitHutch.CreateBus("host=localhost",
    services => services.EnableMessageVersioning());

Once support for versioned messages is enabled, we must explicitly opt in any messages we want to be treated as versioned. As an example, let's say we have a message defined called MyMessage. As you can see in the following listing, it is not versioned and will be treated the same way as any other message when it is published:

public class MyMessage
{
    public string Text { get; set; }
}

The next message that you see is versioned, and ultimately it will find its way to both the V2 and previous subscribers by using the ISupersede interface:

public class MyMessageV2 : MyMessage, ISupersede<MyMessage>
{
    public int Number { get; set; }
}

Let's stop for a second and think about what's happening here. When we publish a message, EasyNetQ usually creates an exchange for the message type and publishes the message to that exchange. Subscribers create queues that are bound to the exchange and therefore receive any messages published to it. With message versioning enabled, EasyNetQ will create an exchange for each message type in the version hierarchy and bind those exchanges together. When you publish the MyMessageV2 message, it will be sent to the MyMessageV2 exchange, which will automatically forward it to the MyMessage exchange.
When messages are serialized, EasyNetQ stores the message type name in the type property of the message properties. This metadata is sent along with your message to any subscribers, who can then use it to deserialize the message. With message versioning enabled, EasyNetQ will also store all the superseded message types in a header in the message properties. Subscribers will use this to find the first available type that the message can be deserialized into, meaning that even if an endpoint does not have the latest version of a message, as long as it has some version, the message can be deserialized and handled.

Here are a few tips for message versioning:

- If the change cannot be implemented by extending the original message type, then it is not a new version of the message; it is a new message type
- If you are unsure, prefer to create a new message type rather than version an existing message
- Versioned messages should not be used with request/response, as the message types are part of the request/response contract and Request<V1,Response> is not the same as Request<V2,Response>, even if V2 extends V1 (that is, public class V2 : V1 {})
- Versioned messages should not be used with send/receive, as this is targeted sending and therefore there is a declared dependency between the sender and the receiver

Messages are not published directly to any specific message queue. Instead, the producer sends messages to an exchange. Exchanges are message routing agents, defined per virtual host within RabbitMQ. An exchange is responsible for routing messages to the different queues. An exchange accepts messages from the producer application and routes them to message queues with the help of header attributes, bindings, and routing keys. A binding is a link that you set up to bind a queue to an exchange. The routing key is a message attribute. The exchange might look at this key when deciding how to route the message to queues (depending on the exchange type).
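The deserialization fallback described above can be pinned down in a few lines of logic. The following Python fragment is a sketch with hypothetical names, not EasyNetQ code: given the concrete type the publisher sent and the superseded-types header, it picks the newest type the receiving endpoint actually knows about:

```python
def resolve_message_type(type_header, superseded_header, known_types):
    """Pick the newest message type the endpoint can deserialize.

    type_header:       the concrete type the publisher sent, e.g. "MyMessageV2"
    superseded_header: older versions of the type, newest first
    known_types:       the set of types this endpoint has loaded
    """
    for candidate in [type_header, *superseded_header]:
        if candidate in known_types:
            return candidate
    raise LookupError("no compatible message type found")

# An endpoint that only knows the original contract still handles a V2 message:
resolved = resolve_message_type("MyMessageV2", ["MyMessage"], {"MyMessage"})
```

An up-to-date endpoint would resolve the same message straight to MyMessageV2; the header only matters when the subscriber lags behind the publisher.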
Exchanges, connections, and queues can be configured with parameters such as durable, temporary, and auto-delete upon creation. Durable exchanges will survive server restarts and will last until they are explicitly deleted. Temporary exchanges exist until RabbitMQ is shut down. Auto-deleted exchanges are removed once the last bound object is unbound from the exchange.

As we begin to explore more about messages, I want to give a big shoutout to Lovisa Johansson at CloudAMQP for permission to reprint information that she and others have done an excellent job of obtaining. Everyone should visit CloudAMQP; it is an infinite source of wisdom when it comes to RabbitMQ. The following is the standard RabbitMQ message flow (shown in the figure Message flow in RabbitMQ):

- The producer publishes a message to an exchange. When you create the exchange, you have to specify its type. The different types of exchanges are explained in detail later on.
- The exchange receives the message and is now responsible for routing it. The exchange takes different message attributes into account, such as the routing key, depending on the exchange type.
- Bindings have to be created from the exchange to queues. In this case, we have bindings to two different queues from the exchange. The exchange routes the message into the queues depending on message attributes.
- The messages stay in the queue until they are handled by a consumer.
- The consumer handles the message.

A direct exchange delivers messages to queues based on a message routing key.
The routing key can be seen as an address that the exchange uses to decide how to route the message. A message goes to the queue(s) whose binding key exactly matches the routing key of the message. The direct exchange type is useful when you would like to distinguish messages published to the same exchange using a simple string identifier.

Imagine that Queue A (create_pdf_queue) in the following diagram is bound to a direct exchange (pdf_events) with the binding key pdf_create. When a new message with the routing key pdf_create arrives at the direct exchange, the exchange routes it to the queue whose binding_key equals the routing_key; in this case, Queue A (create_pdf_queue).

SCENARIO 1:
- Exchange: pdf_events
- Queue A: create_pdf_queue
- Binding key between the exchange (pdf_events) and Queue A (create_pdf_queue): pdf_create

SCENARIO 2:
- Exchange: pdf_events
- Queue B: pdf_log_queue
- Binding key between the exchange (pdf_events) and Queue B (pdf_log_queue): pdf_log

EXAMPLE: A message with the routing key pdf_log is sent to the exchange pdf_events. The message is routed to pdf_log_queue because the routing key (pdf_log) matches the binding key (pdf_log). If the message routing key does not match any binding key, the message is discarded, as seen in the direct exchange diagram. A message goes to the queue(s) whose binding key exactly matches the routing key of the message.

The default exchange is a pre-declared direct exchange with no name, usually referred to by the empty string "". When you use the default exchange, your message will be delivered to the queue with a name equal to the routing key of the message. Every queue is automatically bound to the default exchange with a routing key that is the same as the queue name.

Topic exchanges route messages to queues based on wildcard matches between the routing key and the routing pattern specified by the queue binding.
Messages are routed to one or many queues based on a match between a message routing key and this pattern. The routing key must be a list of words, delimited by a period (.); examples are agreements.us and agreements.eu.stockholm, which, in this case, identify agreements that are set up for a company with offices in lots of different locations. The routing patterns may contain an asterisk (*) to match a word in a specific position of the routing key (for example, a routing pattern of agreements.*.*.b.* matches only routing keys where the first word is agreements and the fourth word is b), and a hash (#) to match zero or more words. The following diagram shows three example scenarios:

SCENARIO 1: Consumer A is interested in all the agreements in Berlin:
- Exchange: agreements
- Queue A: berlin_agreements
- Routing pattern between the exchange (agreements) and Queue A (berlin_agreements): agreements.eu.berlin.#
- Examples of message routing keys that will match: agreements.eu.berlin and agreements.eu.berlin.headstore

SCENARIO 2: Consumer B is interested in all the agreements:
- Exchange: agreements
- Queue B: all_agreements
- Routing pattern between the exchange (agreements) and Queue B (all_agreements): agreements.#
- Examples of message routing keys that will match: agreements.eu.berlin and agreements.us

Topic exchange: messages are routed to one or many queues based on a match between a message routing key and the routing pattern.

SCENARIO 3: Consumer C is interested in all agreements for European head stores:
- Exchange: agreements
- Queue C: headstore_agreements
- Routing pattern between the exchange (agreements) and Queue C (headstore_agreements): agreements.eu.*.headstore
- Examples of message routing keys that will match: agreements.eu.berlin.headstore and agreements.eu.stockholm.headstore

The following fanout exchange figure shows an example where a message received by the exchange is copied and routed to all three queues that are bound to the exchange. It could be sport or weather news updates that should be sent out to each connected mobile device when something happens.
The default exchange that AMQP brokers must provide for the fanout exchange type is amq.fanout. Received messages are routed to all queues that are bound to the exchange.

SCENARIO 1:
- Exchange: sport news
- Queue A: Mobile client Queue A
- Binding: binding between the exchange (sport news) and Queue A (mobile client Queue A)

EXAMPLE: A message is sent to the exchange sport news. The message is routed to all queues (Queue A, Queue B, and Queue C) because all queues are bound to the exchange. Provided routing keys are ignored.

A headers exchange routes messages based on arguments containing headers and optional values. Headers exchanges are very similar to topic exchanges, but they route based on header values instead of routing keys. A message is considered matching if the value of the header equals the value specified upon binding. A special argument named x-match, which can be added in the binding between your exchange and your queue, tells whether all headers must match or just one: either any common header between the message and the binding counts as a match, or all the headers referenced in the binding need to be present in the message for it to match. The x-match property can have two different values: any or all, where all is the default value. A value of all means all header pairs (key, value) must match, and a value of any means at least one of the header pairs must match. Headers can be constructed using a wider range of data types (an integer or hash, for example) instead of a string.
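The any/all semantics just described can be captured in a tiny routing predicate. The following Python function is an illustrative sketch of the matching rule only, not broker code (real RabbitMQ has additional conventions, such as ignoring binding keys that start with x-):

```python
def headers_match(binding, message_headers):
    """Return True if a message's headers satisfy a headers-exchange binding."""
    x_match = binding.get("x-match", "all")  # RabbitMQ defaults x-match to "all"
    pairs = [(k, v) for k, v in binding.items() if k != "x-match"]
    hits = [message_headers.get(k) == v for k, v in pairs]
    return any(hits) if x_match == "any" else all(hits)

# Queue A requires both headers to match; Queue B is satisfied by either one.
queue_a = {"format": "pdf", "type": "report", "x-match": "all"}
queue_b = {"format": "pdf", "type": "log", "x-match": "any"}

a_match = headers_match(queue_a, {"format": "pdf", "type": "report"})  # both match
b_match = headers_match(queue_b, {"format": "zip", "type": "log"})     # type=log is enough
```

Running the predicate against the bindings in the scenarios below is an easy way to sanity-check which queues a given set of message headers would reach.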
The headers exchange type (used with the binding argument any) is useful for directing messages that may contain a subset of known (unordered) criteria:

- Exchange: binding to Queue A with arguments (key = value): format = pdf, type = report, x-match = all
- Exchange: binding to Queue B with arguments (key = value): format = pdf, type = log, x-match = any
- Exchange: binding to Queue C with arguments (key = value): format = zip, type = report, x-match = all

SCENARIO 1: Message 1 is published to the exchange with the header arguments (key = value): format = pdf, type = report. Message 1 is delivered to Queue A, because all key/value pairs match.

SCENARIO 2: Message 2 is published to the exchange with the header argument (key = value): format = pdf. Message 2 is delivered to Queue B, because that binding is configured to match any one of its headers (format = pdf or type = log), and format = pdf matches. It is not delivered to Queue A, whose binding requires all of its headers to match.

A headers exchange routes messages to queues that are bound using arguments (key and value) that contain headers and optional values.

SCENARIO 3: Message 3 is published to the exchange with the header arguments (key = value): format = zip, type = log. Message 3 is delivered to Queue B, since type = log satisfies its any binding; it is not delivered to Queue C, whose binding requires all of its headers (format = zip and type = report) to match.

Following are all the common messages we have defined for this book.
You may feel free to change any of them as needed; they are merely guides to get you thinking in the microservice mindset:

[Queue("Bitcoin", ExchangeName = "EvolvedAI")]
[Serializable]
public class BitcoinSpendMessage
{
    public decimal amount { get; set; }
}

[Queue("Bitcoin", ExchangeName = "EvolvedAI")]
[Serializable]
public class BitcoinSpendReceipt
{
    public long ID { get; set; }
    public decimal amount { get; set; }
    public bool success { get; set; }
    public DateTime time { get; set; }
}

[Queue("Financial", ExchangeName = "EvolvedAI")]
[Serializable]
public class BondsRequestMessage
{
    // fields elided in the source text
}

[Queue("Financial", ExchangeName = "EvolvedAI")]
[Serializable]
public class BondsResponseMessage
{
    public long ID { get; set; }
    // remaining fields elided in the source text
}

[Queue("Financial", ExchangeName = "EvolvedAI")]
[Serializable]
public class CreditDefaultSwapRequestMessage
{
    public double fixedRate { get; set; }
    public double notional { get; set; }
    public double recoveryRate { get; set; }
    public double fairRate { get; set; }
    public double fairNPV { get; set; }
}

[Queue("Financial", ExchangeName = "EvolvedAI")]
[Serializable]
public class CreditDefaultSwapResponseMessage
{
    public long ID { get; set; }
    public double fixedRate { get; set; }
    public double notional { get; set; }
    public double recoveryRate { get; set; }
    public double fairRate { get; set; }
    public double fairNPV { get; set; }
}

[Serializable]
[Queue("Deployments", ExchangeName = "EvolvedAI")]
public class DeploymentStartMessage
{
    public long ID { get; set; }
    public DateTime Date { get; set; }
}

[Serializable]
[Queue("Deployments", ExchangeName = "EvolvedAI")]
public class DeploymentStopMessage
{
    public long ID { get; set; }
    public DateTime Date { get; set; }
}

[Queue("Email", ExchangeName = "EvolvedAI")]
[Serializable]
public class EmailSendRequest
{
    public string From;
    public string To;
    public string Subject;
    public string Body;
}

[Serializable]
[Queue("FileSystem", ExchangeName = "EvolvedAI")]
public class FileSystemChangeMessage
{
    public long ID { get; set; }
    public int ChangeType { get; set; }
    public int EventType { get; set; }
    public DateTime ChangeDate { get; set; }
    public string FullPath { get; set; }
    public string OldPath { get; set; }
    public string Name { get; set; }
    public string OldName { get; set; }
}

[Serializable]
[Queue("Health", ExchangeName = "EvolvedAI")]
public class HealthStatusMessage
{
    public string ID { get; set; }
    public DateTime date { get; set; }
    public string serviceName { get; set; }
    public int status { get; set; }
    public string message { get; set; }
    public double memoryUsed { get; set; }
    public double CPU { get; set; }
}

[Serializable]
[Queue("Memory", ExchangeName = "EvolvedAI")]
public class MemoryUpdateMessage
{
    public long ID { get; set; }
    public string Text { get; set; }
    public int Gen1CollectionCount { get; set; }
    public int Gen2CollectionCount { get; set; }
    public float TimeSpentPercent { get; set; }
    public string MemoryBeforeCollection { get; set; }
    public string MemoryAfterCollection { get; set; }
    public DateTime Date { get; set; }
}

[Serializable]
[Queue("MachineLearning", ExchangeName = "EvolvedAI")]
public class MLMessage
{
    public long ID { get; set; }
    public int MessageType { get; set; }
    public int LayerType { get; set; }
    public double param1 { get; set; }
    public double param2 { get; set; }
    public double param3 { get; set; }
    public double param4 { get; set; }
    public double replyVal1 { get; set; }
    public double replyVal2 { get; set; }
    public string replyMsg1 { get; set; }
    public string replyMsg2 { get; set; }
}

[Serializable]
[Queue("Trello", ExchangeName = "EvolvedAI")]
public class TrelloResponseMessage
{
    public bool Success { get; set; }
    public string Message { get; set; }
}

In this chapter, we defined what a microservice and its architecture mean to us. We also had an in-depth discussion about queues and their different configurations. Without any further ado, let's move on and start talking about some of the pieces of our puzzle.
We're going to discuss the fantastic world of open source software and take a look at some of the many tools and frameworks we are highlighting in this book in order to create our ecosystem. This entire book is written, and the software is developed, with the sole purpose of you being able to quickly develop a microservice ecosystem, and there is no better way to do this than to leverage the many great open source contributions made.
https://www.packtpub.com/product/hands-on-microservices-with-c/9781789533682
check if thread is running

Hey,

What is best practice for a watchdog in a multithreading system? So if one thread has an unhandled exception, restart that thread?

@Gijs I thought about the same thing. I would just add a short sleep at the end of the loop, so that in case it fails very quickly it doesn't go into a tight loop, especially if the code in the thread is sending anything (over the network, or to a device connected via I2C or SPI, etc.). Of course, it relies on the contents of the function being able to restart a second time. I'm not sure this is 100% foolproof, as I believe there are cases where some conditions will not result in proper exceptions being thrown, so using and feeding the WDT (or an external watchdog) may be a good idea as well.

I'd imagine you could do something like this, but there's probably a better solution that I cannot think of right now. Also, I'm not sure whether this also handles exceptions that occur inside foo():

def thread1():
    while True:
        try:
            # run some code
            foo()
        except:
            # handle all potential errors
            pass

If anyone has a better solution, I'm interested as well!
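Building on the snippet above, one way to approximate a watchdog is to run the worker through a supervisor wrapper that catches the failure, waits a moment to avoid a tight loop, and restarts the worker, giving up after too many failures. This is only a sketch of the idea; foo, the retry count, and the delay are placeholders you would tune for your device:

```python
import time

def supervised(worker, retries=3, delay=0.0):
    """Run worker(); if it raises, wait `delay` seconds and restart it,
    giving up after `retries` failed attempts."""
    attempts = 0
    while True:
        try:
            worker()
            return attempts          # worker finished normally
        except Exception as exc:
            attempts += 1
            if attempts > retries:
                raise                # give up after too many failures
            print("worker failed (%s), restarting" % exc)
            time.sleep(delay)        # don't spin if it fails immediately

# A worker that fails twice before succeeding:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("boom")

attempts = supervised(flaky, retries=5)
```

On a Pycom board you would presumably start this with something like _thread.start_new_thread(supervised, (worker,)) and, as suggested in the thread, still feed the hardware WDT from inside the loop, since not every failure mode surfaces as a Python exception.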
https://forum.pycom.io/topic/7132/check-if-thread-is-running
There are many features in C++ that can be used to enhance the quality of code written with classic C design even if no object oriented techniques are used. This article describes a technique to protect against value overflow and out-of-bounds access of arrays. This article started with a discussion about how C projects could use features in C++ to improve the quality of the code without having to do any major redesign. The built-in integral types in C and C++ are very crude. They map directly to what can be represented in hardware as bytes and words with or without signs. There is no way to say that a number can only have values in the range 1 to 100. The best you can do is to use an unsigned char which typically has a value range from 0 to 255, but this does not provide any checking for overflow. It is easy to create an integral type that does the range checking as Pascal and Ada do. The implementation of BoundedInt in listing 1 shows how this can be done with C++ templates. It takes three parameters. The first two specify the inclusive range of allowed values. The third parameter specifies the underlying type to be used and uses a default type given by the BoundedIntTraits class. 
#include <cassert>

template <int Lower, int Upper,
          typename INT =
            typename BoundedIntTraits<Lower, Upper>::Type>
class BoundedInt
{
public:
    // Default constructor
    BoundedInt()
#ifndef NDEBUG
      : m_initialised(false)
#endif
    {}

    // Conversion constructor
    BoundedInt(int i)
      : m_i(static_cast<INT>(i))
#ifndef NDEBUG
      , m_initialised(true)
#endif
    {
        // Check input value
        assert((Lower <= i) && (i <= Upper));
    }

    // Conversion back to a builtin type
    operator INT()
    {
        assert(m_initialised);
        return m_i;
    }

    // Assignment operators
    BoundedInt & operator+=(int rhs)
    {
        assert(m_initialised);
        // Check for overflow
        assert(m_i/2 + rhs/2 + (m_i & rhs & 1) <= Upper/2);
        assert(Lower/2 <= m_i/2 + rhs/2 - ((m_i ^ rhs) & 1));
        // Check result value
        assert((Lower <= m_i + rhs) && (m_i + rhs <= Upper));
        // Perform operation
        m_i += rhs;
        return *this;
    }

    // Increment and decrement operators.
    BoundedInt & operator++()
    {
        assert(m_initialised);
        // Check for overflow
        assert(m_i < Upper);
        // Perform operation
        ++m_i;
        return *this;
    }

    // Other operators ...

private:
    INT m_i;
#ifndef NDEBUG
    bool m_initialised;
#endif
};

Listing 1: Definition of BoundedInt. Only the plus operator is shown here. The other arithmetic operators follow the same design.

The BoundedIntTraits class is used to find the smallest built-in type that can hold numbers of the specified range. It uses some meta-programming to figure out which type to use. The implementation of the BoundedIntTraits class is shown in Listing 2.

#include <climits>

// Compile time assertion:
template <bool condition>
struct StaticAssert;
template <>
struct StaticAssert<true> {};

// Template for finding the smallest
// built-in type that can hold a given
// value range, based on a set of
// conditions.
template< bool sign, bool negbyte, bool negshort, bool negint, bool sbyte, bool ubyte, bool sshort, bool ushort, bool sint> struct BoundedIntType; template<> struct BoundedIntType< true, true, true, true, true, true, true, true, true> { typedef signed char Type; }; template< bool negbyte, bool sbyte, bool ubyte> struct BoundedIntType< true, negbyte, true, true, sbyte, ubyte, true, true, true> { typedef signed short Type; }; template<bool negbyte, bool negshort, bool sbyte, bool ubyte, bool sshort, bool ushort> struct BoundedIntType< true, negbyte, negshort, true, sbyte, ubyte, sshort, ushort, true> { typedef signed int Type; }; template <bool sbyte> struct BoundedIntType< false, true, true, true, sbyte, true, true, true, true> { typedef unsigned char Type; }; template< bool sbyte, bool ubyte, bool sshort> struct BoundedIntType< false, true, true, true, sbyte, ubyte, sshort, true, true> { typedef unsigned short Type; }; template< bool sbyte, bool ubyte, bool sshort, bool ushort, bool sint> struct BoundedIntType< false, true, true, true, sbyte, ubyte, sshort, ushort, sint> { typedef unsigned int Type; }; // The traits template provides value // range information to the // BoundedIntType to get the smallest // possible type. template <int Lower, int Upper> struct BoundedIntTraits { StaticAssert<(Lower <= Upper)> check; typedef typename BoundedIntType<Lower < 0, Lower >= CHAR_MIN, Lower >= SHRT_MIN, Lower >= INT_MIN, Upper <= CHAR_MAX, Upper <= UCHAR_MAX, Upper <= SHRT_MAX, Upper <= USHRT_MAX, Upper <= INT_MAX>::Type Type; }; Listing 2: Definition of BoundedIntTraits. The types long and unsigned long are not included to keep the listing shorter. The checking is performed here by using the assert() macro. Note that this checking only happens in debug builds and not in the release builds to reduce the overhead for this checking. Using inlining and the assert() macro removes any overhead in optimised release builds. 
With a good optimiser the resulting code will be identical to when built-in types are used. Alternatives to assert() can of course be used, such as throwing an exception or logging a message to a file.

The BoundedInt class is only designed to work with value ranges that fit in an int. To support wider ranges, all methods that take an int as a parameter must have overloaded siblings that take a long, or even long long where supported.

The operator+=() member must check that the new value is within the valid range. It also has to check that there is no overflow during addition. The method of detecting overflow is complicated as there is no support for detecting overflow for built-in types in C and C++. The method here scales down all values to manageable sizes in order to do an overflow check. Because of the scaling down, it has to keep track of carry over data from the least significant bits to work properly in edge cases where the value range is close to the value range of the underlying type.

Other arithmetic assignment operators that BoundedInt should support are not shown here as they would take too much space. The design of these operators follows the design for the plus operator.

There are no binary arithmetic operators defined. When a BoundedInt object is used in a binary arithmetic operation, it will be converted to a built-in integral type before the operation. This means that there is no checking of the results of these operations, unless the result is assigned to a BoundedInt object. There is a pitfall here in that overflow cannot be checked for.

BoundedInt<-10, INT_MAX> a = 10;
a += INT_MAX;     // Overflow checked
a = a + INT_MAX;  // Overflow not checked

A default constructor is available in order to mimic the behaviour of built-in types. It does not initialise the value but maintains a flag to indicate that this object does not have a defined value. This flag is checked by member functions that access or modify the value.
The m_initialised member flag is surrounded by conditional pre-processing directives to avoid overhead in release builds. The copy constructor and copy assignment operators are not defined as the compiler generated versions are appropriate.

Below are some examples from an imaginary C project implementing a lift control, with a single change to use BoundedInt:

typedef BoundedInt<-4, 17> FloorNumber;

FloorNumber liftPosition = 0;
const FloorNumber myOfficeFloor = 10;

/* go up */
++liftPosition;

/* go up fast */
liftPosition += 4;

printf("The lift is %d floors away.\n",
       abs(liftPosition - myOfficeFloor));

BoundedInt objects can appear in any arbitrarily complex expression thanks to the conversion operator. Because the conversion operator is inlined, the BoundedInt object will generate exactly the same code as when using a built-in type.

A BoundedInt object can be used as a bounds checked index into arrays. Example:

const int SixPackSize = 6;
Bottle myBeers[SixPackSize];
BoundedInt<0, SixPackSize-1> ix;
for( ix = 0 ; ix < SixPackSize ; ++ix )
{
    drink(myBeers[ix]);
}

If ix for some reason is changed to an invalid value, the BoundedInt class will warn about this. We can take this one step further by creating a class that only allows element access using numbers within the allowed range.

template <typename T, size_t Size>
class BoundedArray
{
public:
    T& operator[](BoundedInt<0, Size-1> ix)
    {
        return m_data[ix];
    }
public:
    T m_data[Size];
};

Note that the member data is public to allow aggregate initialisation. See how this is used below. The member data can be made public without risk of misuse as the data is equally accessible through the index operator as with direct access. Whenever an element is requested using an index of any built-in integral type, that index is converted to a BoundedInt which checks that its value is within the acceptable range.
This template takes two parameters: the type of the elements in the array and a non-type template parameter to indicate the size of the array. The simple example above will work as before with only a small change to the definition of myBeers.

BoundedArray<Bottle, SixPackSize> myBeers;

This array can be initialised in the same way as a built-in array:

BoundedArray<Bottle, SixPackSize> myBeers = { ... };

There is no overhead in release builds for this array class. The index operator is inlined and there is no indirect pointer access to the underlying array. Having the size as a template parameter may look like we are causing code bloat if several arrays of different sizes are used. Yes, there will be several instantiations, but because all functions are inlined and optimised away there is no extra code that can multiply.

In the same way as for using checked array indices, we can create a smart pointer class that makes sure that it points to an element inside the array. It will have to know the base address of the array and the size to do the checking. This information is retrieved from the array class when a pointer is created. The starting point is an example with built-in pointers:

Bottle* p = myBeers;
for( ; p->size != 0 ; ++p )
{
    drink(*p);
}

myBeers is an array where the last element's members are cleared as a termination condition. We replace the built-in pointer p with a smart pointer:

BoundedPointer<Bottle> p = myBeers;

The loop in the example above remains unchanged. The definition of BoundedPointer is shown in listing 3. The array base address, array size and the initialised flag are kept as members only for debug builds to perform the runtime checks. To avoid this overhead in release builds, the m_base, m_size and m_initialised members are surrounded with conditional preprocessing directives.
#include <cstddef>
#include <cassert>

template <typename T>
class BoundedPointer
{
public:
    // Default constructor
    BoundedPointer()
#ifndef NDEBUG
        : m_initialised(false)
#endif
    {}

    // Constructor from a built-in array
    template <size_t Size>
    BoundedPointer(T (&arr)[Size])
        : m_p(arr)
#ifndef NDEBUG
        , m_base(arr), m_size(Size)
        , m_initialised(true)
#endif
    {}

    // Constructor from a user defined array
    BoundedPointer(const T* base, size_t size)
        : m_p(const_cast<T*>(base))
#ifndef NDEBUG
        , m_base(m_p)
        , m_size(size)
        , m_initialised(true)
#endif
    {}

    // Constructor from null
    BoundedPointer(void * value)
        : m_p(static_cast<T *>(value))
#ifndef NDEBUG
        , m_base(m_p), m_size(1)
        , m_initialised(true)
#endif
    {}

    // Dereference operators
    T & operator*()
    {
        assert(m_initialised);
        assert(m_p != 0);
        return *m_p;
    }

    T * operator->()
    {
        assert(m_initialised);
        assert(m_p != 0);
        return m_p;
    }

    T & operator[](size_t ix)
    {
        assert(m_initialised);
        assert(m_p != 0);
        assert(m_p + ix < m_base + m_size);
        return m_p[ix];
    }

    // Pointer arithmetic operations
    ptrdiff_t operator-(BoundedPointer const & rhs)
    {
        // Check validity of the pointers
        assert(m_initialised);
        assert(rhs.m_initialised);
        assert(m_p != 0);
        assert(rhs.m_p != 0);
        // Ensure both pointers point to same array
        assert(m_base == rhs.m_base);
        return m_p - rhs.m_p;
    }

    BoundedPointer & operator+=(ptrdiff_t rhs)
    {
        // Check validity of the pointer
        assert(m_initialised);
        assert(m_p != 0);
        m_p += rhs;
        assert(m_base <= m_p && m_p < m_base + m_size);
        return *this;
    }

    BoundedPointer & operator++()
    {
        // Check validity of the pointer
        assert(m_initialised);
        assert(m_p != 0);
        ++m_p;
        assert(m_p < m_base + m_size);
        return *this;
    }

    // Other arithmetic operators ...

    // Comparison operators
    bool operator==(BoundedPointer const & rhs)
    {
        // Check validity of the pointers
        assert(m_initialised);
        assert(rhs.m_initialised);
        assert(m_p != 0);
        assert(rhs.m_p != 0);
        // Make sure that both pointers point
        // to the same array
        assert(m_base == rhs.m_base);
        return m_p == rhs.m_p;
    }

    // Other comparison operators ...

private:
    T * m_p;
#ifndef NDEBUG
    T * m_base;
    size_t m_size;
    bool m_initialised;
#endif
};

// Binary arithmetic operators
template <typename T>
inline BoundedPointer<T> operator+(BoundedPointer<T> lhs, int rhs)
{
    return lhs.operator+=(rhs);
}

template <typename T>
inline BoundedPointer<T> operator+(int lhs, BoundedPointer<T> rhs)
{
    return rhs.operator+=(lhs);
}

Listing 3: Definition of BoundedPointer.

A BoundedPointer object can be constructed from built-in arrays and from user defined array types. The constructor for user defined array types takes two parameters (base address and size) and is intended to be called from conversion operators of those array classes. This conversion operator for BoundedArray looks like this:

template <typename T, size_t Size>
class BoundedArray
{
public:
    ...
    operator BoundedPointer<T>()
    {
        return BoundedPointer<T>(m_data, Size);
    }
};

There is also a constructor that takes a void* parameter to support assignment from NULL. A T* parameter cannot be used as it would conflict with the constructor for built-in arrays.

The BoundedPointer class supports all the operations that can be used with built-in pointers. There are checks for incrementing and decrementing the pointer to make sure that it does not point outside its array. As with BoundedInt there are checks to see that the pointer is initialised when it is used. All methods are inlined to avoid any overhead in release builds.

The classes described here are designed to do the bounds checking during unit and system testing when compiled in debug mode. It is important to run as many test cases as possible that exercise all boundary conditions.
In release builds, all you have to do is make sure that the NDEBUG macro is defined, inlining is enabled and the optimisation level is as high as possible. Then your code will be as efficient as if built-in types were used.

The BoundedIntTraits class in listing 2 hides the chosen underlying integral type. If the ranges change in the future, there is no need to manually change the underlying type to one that covers the wider range.

This article describes the design of a class that wraps an array and adds bounds checking functionality. There are many more possible classes that can be used in this framework for different purposes. Examples include a class that manages dynamically allocated arrays. A possible extension to the checked pointer is to keep track of whether the array still exists: if the array goes out of scope or is de-allocated, the pointer shall be set to an invalid state. This is straightforward to implement but is outside the scope of this article.

This article does not discuss checked iterators for STL containers, as the article was originally intended to motivate C users to adopt C++ to improve their lives. For STL there are already implementations that check the validity of iterators.

Although the code in this article has been tested with several C++ compilers, there are some difficulties with some existing compilers. If your compiler does not support partial template specialisation you cannot use the traits class BoundedIntTraits. You can avoid it by removing BoundedIntTraits from the template parameter list of BoundedInt and replacing it with int. You will lose the feature where the underlying type of BoundedInt is automatically chosen from the specified range; it will be int if a type is not specified.

With the strategies shown in this article it is possible to catch various out-of-bounds conditions during the testing phase at no cost to the released code.
An additional benefit is that the bounds given to BoundedInt and the array types document their valid ranges well.
https://accu.org/index.php/journals/313
StrictMode highlights potential problems in an application. IMP: Strict mode checks are run in development mode only; they do not impact the production build.

import ReactDOM, { findDOMNode } from 'react-dom';
import React, { Component, StrictMode } from 'react';

class App extends Component {
    constructor() {
        super();
        this.state = {
            count: 0
        };
    }

    componentDidMount() {
        const node = findDOMNode(this);
        // Set red color to our <h1> tag.
        node.style.color = 'red';
    }

    render() {
        return (
            <StrictMode>
                <h1>Heading</h1>
            </StrictMode>
        )
    }
}

export default App;

The `findDOMNode()` method is DEPRECATED, which is why we get the warning under `<StrictMode>`. In the above example we can see the error:

Warning: findDOMNode is deprecated in StrictMode. findDOMNode was passed an instance of App which is inside StrictMode. Instead, add a ref directly to the element you want to reference. Learn more about using refs safely here:
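The warning itself suggests the fix: attach a ref directly to the element instead of calling `findDOMNode()`. A sketch of that rewrite, keeping the component and heading from the example above (`headingRef` is a made-up name; `createRef` is the standard React 16.3+ API):

```jsx
import React, { Component, StrictMode, createRef } from 'react';

class App extends Component {
    constructor() {
        super();
        this.headingRef = createRef();  // ref replaces findDOMNode()
    }

    componentDidMount() {
        // Access the DOM node directly through the ref.
        this.headingRef.current.style.color = 'red';
    }

    render() {
        return (
            <StrictMode>
                <h1 ref={this.headingRef}>Heading</h1>
            </StrictMode>
        );
    }
}

export default App;
```

With the ref in place, the component renders inside `<StrictMode>` without triggering the deprecation warning.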
https://maheshwaghmare.com/reactjs/api/strictmode/
I want to share my solutions to the obvious problems of using the UART to communicate with some other bit of kit.

- Junk at startup and at regular intervals. Some of this, such as "ip=...", comes from Espressif's binary blobs; short of patching those (which might be worth trying), my solution is in hardware.
- Baud rate - easy to fix.
- Non-blocking read - again straightforward.

The schematic is self-explanatory. There are plenty of perfectly valid ways of doing this, and the choice partly depends on what components you have lying about. If you have a board that already has pullups on some GPIO then you only need to add one common NPN transistor and one resistor. When you actually want to transmit, make the GPIO an output and pull it low; afterwards (allowing for transmission time) send it high or switch it back to input.

You should in any case call esp.osdebug(None) at startup as this reduces the chances of other unwanted output.

No doubt with better documentation one might be able to use the official API, but in the meantime...

    def baudrate(rate):
        machine.mem32[0x60000014] = int(80000000/rate)

    >>> uart = machine.UART(0)
    >>> time.sleep(2); uart.read()
    brexit yawn
    b'brexit yawn'
    >>>
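The post calls the non-blocking read "straightforward" but doesn't show one. A minimal sketch, assuming a MicroPython board (this runs on-device, not under desktop Python, and `poll_uart` is a made-up helper name):

    import machine

    uart = machine.UART(0)

    def poll_uart():
        # uart.any() reports how many bytes are already buffered,
        # so the read() below returns at once instead of blocking.
        if uart.any():
            return uart.read()
        return None

Call `poll_uart()` from your main loop; it returns a bytes object when data has arrived and `None` otherwise, so the loop never stalls waiting on the UART.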
https://forum.micropython.org/viewtopic.php?p=11748