How to set a PDF to expire (Working Script)(Bryan_Hardesty) Mar 21, 2008 9:12 AM
Here is a little script I made up the other night. You can use it to allow a PDF to be opened only until a set date. I use this for when my employees go to service a customer: I want them to be able to see the customer's information, but only for 24 to 48 hours.

CheckExpiration()

function CheckExpiration()
{
/*-----START EDIT-----*/
var LastDay = 21
var LastMonth = 3
var LastYear = 2008
/*-----END EDIT-------*/

/* DO NOT EDIT PAST HERE !!! */
var today = new Date();
var myDate = new Date();
LastMonth = LastMonth - 1
myDate.setFullYear(LastYear, LastMonth, LastDay);

if (myDate < today)
 {
 this.closeDoc(1);
 app.alert("This file has expired.", 1, 0, "Expired");
 }
}
1. Re: How to set a PDF to expire (Working Script)Patrick Leckey Mar 21, 2008 9:19 AM (in response to (Bryan_Hardesty))Well, as long as the user doesn't change the date on their computer or turn off JavaScript in Acrobat. If a user does either of those things, the form will open regardless of your script.
Adobe LiveCycle Policy Server provides this functionality in a server / client setup so that it authenticates the user, date and time against a trusted server before the form is opened.
2. Re: How to set a PDF to expire (Working Script)Patrick Leckey Mar 24, 2008 6:30 AM (in response to (Bryan_Hardesty))Actually that was the name for the LiveCycle version 7 suite. In version 8 the product that provides this functionality is called LiveCycle Rights Management.
It does not use scripting of any sort and so it cannot be so easily disabled as by removing a check from a preference setting or by changing the year on your clock to 2007 (since the script above does not have a start date to validate against either). It is embedded into the document and forces Acrobat or Reader to validate against your Rights Management server before you are allowed to open the document.
2. Re: How to set a PDF to expire (Working Script)(SteveMajors) Apr 5, 2008 2:40 PM (in response to (Bryan_Hardesty))If a fairly technical person with nearly no coding skills (for example, a person like ME...) wanted to use this script, just what does it take to implement?
I totally understand that it is limited to 'keeping honest people honest' and has nearly zero "real" protection.
However, I have a request from an associate to protect pdfs by date and he wants it quick and cheap..... (we don't have tons of servers or want to acquire the know-how of maintaining them ourselves).
We've been trying the ADC, but have issues with it opening files on a Mac. Other solutions look nice (FileOpen, etc.), though they are a bit pricier than we'd like, since the main thing is simply to have docs expire on a given date (we like the 'by user' features, but that's really not a huge requirement at this time).
Thanks for your input.
4. Re: How to set a PDF to expire (Working Script)(Aandi_Inston) Apr 7, 2008 1:08 AM (in response to (SteveMajors))Be very sure to explain to your associate: here is a solution, but the
end user only has to turn off JavaScript to avoid it.
Aandi Inston
5. Re: How to set a PDF to expire (Working Script)(SteveMajors) Apr 7, 2008 8:09 AM (in response to (Bryan_Hardesty))Thanks, Aandi.
I clearly understand that and have tried to tell him, but 'pictures speak louder than words' and seeing it happen (or not) will likely help him understand everything.
What I have found (by studying the help files all weekend) is ONE way to implement this JavaScript code (I'm not sure if it is 'right'...) that WORKS on ONE OUT OF SIX computers...
I placed the code on the Page Properties of the first page (steps below for the 'newbies' like me...) - it was really simple once I found out how to do it...
**** STEP BY STEP FOR OTHER NEWBIES ****
To use the JavaScript as described in this post (at least, as I did - maybe someone else will have better ideas and describe it for us...)
1. click on Pages on the left sidebar
2. right-click on the first page
3. select Page Properties, Action
4. under Select Action choose Run a Javascript
5. click Add
6. in the window that pops up, copy/paste the code from the post
7. click OK, OK to get back to the doc (change the Javascript later if you want, this is just 'initial testing'...)
8. IMPORTANT! Save the file under a different file name (or you may not be able to open it later!)
*********END OF STEP-BY-STEP*********
I included #8 because now, when I try to open the 'expired' doc in Acrobat, I get the 'expired' message and can't open it at all, but in IE, I see the document - on 5 of my 6 computers (including the one that Acrobat won't open!)
So, even though this is not 'secure' nor a 'pro' solution, please help me understand a) if I did this right and b) why it only works on the ONE computer (running Win 2000 Server and IE 6 with Reader 7) and not on any others (including NT, XP, Vista and 2000 Pro with various versions of IE and Reader).
Thank you for your time.
What I would really like to see is your reply with DETAILS on how to test this and both 1) learn how this stuff is done and 2) show my associate the difference in turning it 'on' and 'off'.
Best regards
Steve Majors
6. Re: How to set a PDF to expire (Working Script)(Aandi_Inston) Apr 7, 2008 8:42 AM (in response to (SteveMajors))One tip is to always check the JavaScript console. There may be a
message in there about the problem.
Aandi Inston
7. Re: How to set a PDF to expire (Working Script)(SteveMajors) Apr 7, 2008 8:58 AM (in response to (Bryan_Hardesty))Aandi,
Thanks. Sure wish I knew what that was, or where to find it....
guess it's back to the 'search the help file' again.....
I really do thank you (and all the experienced folks out there) for your tips/guidance, however, PLEASE remember that I got Adobe Acrobat last week (haven't even received the CD yet...) and I'm about as lost as anyone can get! (step-by-step is highly appreciated... - not only for me, but in reading the forums, it seems there are many more out there as bad, if not worse off than me!)
Steve
P.S. (added after doing some searching) I found that "Javascript console" is something that browsers have.... check out for a nice page... They say, "In IE, go to Tools > Internet Options and choose the Advanced tab. Make sure the check box for "Display a notification for every script error" is checked. " I'm off to try that....
P.P.S. Turned that on, tried the 'expired' page and nothing special - was able to read the entire thing...
8. Re: How to set a PDF to expire (Working Script)(Aandi_Inston) Apr 7, 2008 9:39 AM (in response to (SteveMajors))They now call it the JavaScript debugger in Acrobat Professional, look
under Advanced > Document Processing. Not sure about other products.
Browsers have a different JavaScript environment to Acrobat; each one
may have a console, but when running Acrobat JavaScript you need the
Acrobat JavaScript console.
Please remember that you are now learning to be a programmer, and that
isn't something you can get good at in a day, a week, or a month; nor
through a handful of tips.
Aandi Inston
9. Re: How to set a PDF to expire (Working Script)(SteveMajors) Apr 7, 2008 10:07 AM (in response to (Bryan_Hardesty))Thanks for the message with details!
When I turn on the JavaScript debugger in Acrobat Pro, I get this message (whether the 'clean' file or the 'expired' - same message).
Don't know if it has anything to do with the issue, but it is what it is...
Acrobat Database Connectivity Built-in Functions Version 8.0
Acrobat EScript Built-in Functions Version 8.0
Acrobat Annotations / Collaboration Built-in Functions Version 8.0
Acrobat Annotations / Collaboration Built-in Wizard Functions Version 8.0
Acrobat Multimedia Version 8.0
Acrobat SOAP 8.0
NotSupportedError: Not supported in this Acrobat configuration.
Doc.closeDoc:17:Page undefined:Open
As for being a 'programmer' - that ain't gonna happen.... I've been in computers since 1978 (Apple ][ days...) and found out a long time ago that I don't think like a programmer (it takes a special person with special skills and dedication, IMHO...), though I do like to 'hack around' and try out the simple stuff on my own.
What I'd really like to do/have/find is someone who totally understands this stuff, as well as the business side of things, and who will stick to being 'on staff'. What we find is that "programmers" tend to get the majority of a project done, then something else comes along - certainly on the bigger projects. We have a mostly-written back-end project done in Perl, but now the programmer isn't answering the phone, or isn't getting into it to finish...
That's why I like to 'hack' - small changes are something I can do myself.
Anyway, I think this has gone as far as I want to go with it - something that 'kinda' works on one out of 6 computers seems to me to be a FAILURE, not an OPTION....
Thanks for your replies and the instructions on how to at least look at the debugger thing.
All the best.
10. Re: How to set a PDF to expire (Working Script)(Aandi_Inston) Apr 7, 2008 10:57 AM (in response to (SteveMajors))>NotSupportedError: Not supported in this Acrobat configuration.
>Doc.closeDoc:17:Page undefined:Open
Ok, if we refer to the JavaScript Reference there may be some clues
there. It's basically saying that closeDoc (which you'll find in the
code somewhere) isn't being allowed, usually for some security or
impracticality reason, rather than because you didn't ask right.
But no: no useful notes. It may be that you are trying to do this in a
browser document? You can't close the document window for a browser.
>
>As for being a 'programmer' - that ain't gonna happen....
It's already happened. You may feel you're just doing copy-and-paste
programming - but an increasing number of "programmers" actually
believe this is all there is to programming. I respect your judgement
that you don't want to be what you see a programmer as being, but you
are doing programming tasks, just as someone who has to saw a piece of
wood is doing carpentry tasks - and the saw is still sharp!
What I'm saying, I guess, is that trying to get this working while
simultaneously saying "No! I don't want to learn this stuff" isn't
going to work.
Aandi Inston
11. Re: How to set a PDF to expire (Working Script)Patrick Leckey Apr 7, 2008 11:55 AM (in response to (Bryan_Hardesty))Good catch, Aandi. Hadn't occurred to me until now. closeDoc does not work in a browser window, you're right. Since AcroJS stops processing at the first error, you never get the app.alert message because closeDoc comes before it, and you see the document because closeDoc can't close a browser window.
Just another reason why "JavaScript-based Document Security" is a misnomer and why this script really shouldn't be used for any sort of security - all you have to do is open the document in a browser to bypass the security.
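The failure mode Patrick describes can be sketched in plain JavaScript (the helper name and return values below are mine, not from the thread): compute the decision first, and have the caller attempt app.alert before this.closeDoc, so a failing closeDoc cannot suppress the alert.

```javascript
// Pure date logic, runnable in any JavaScript engine. The Acrobat calls
// (app.alert, this.closeDoc) would be made by the caller based on the
// returned action, with closeDoc attempted last, since closeDoc is not
// available when the PDF is viewed inside a browser window.
function expiryAction(lastYear, lastMonth, lastDay, now) {
  // JavaScript months are zero-based, so subtract 1 from the month
  var expiry = new Date(lastYear, lastMonth - 1, lastDay);
  return (expiry < now) ? "alert-then-close" : "open";
}

console.log(expiryAction(2008, 3, 21, new Date(2008, 3, 22))); // "alert-then-close"
console.log(expiryAction(2008, 3, 21, new Date(2008, 1, 1)));  // "open"
```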
12. Re: How to set a PDF to expire (Working Script)(Dawn_Kay) May 9, 2008 10:13 AM (in response to (Bryan_Hardesty))When I attempt this.. I get to step 4 and then it will not allow me to Add...
Is there a specification I am missing... a file type.. or reason my PDF won't allow me to edit such things.
13. Re: How to set a PDF to expire (Working Script)gkaiseril May 9, 2008 10:23 AM (in response to (Bryan_Hardesty))These instructions apply to PDFs NOT created by LiveCycle Designer.
14. Re: How to set a PDF to expire (Working Script)(Dawn_Kay) May 9, 2008 10:28 AM (in response to (Bryan_Hardesty))Thanks Geo - that is my issue.
Does anyone know if there is a way to generate similar results in Designer? (an expiration by date, or number of times opened, etc...)
15. Re: How to set a PDF to expire (Working Script)Patrick Leckey May 9, 2008 11:56 AM (in response to (Bryan_Hardesty))You do understand that this method provides no security at all, right?
Turning off JavaScript or changing the date on your computer will circumvent any form of "security" this script may seem to provide.
16. Re: How to set a PDF to expire (Working Script)(Dawn_Kay) May 12, 2008 5:45 AM (in response to (Bryan_Hardesty))Yes, I understand there are obvious ways around it, but it is better than having no alternative.
17. Re: How to set a PDF to expire (Working Script)Patrick Leckey May 12, 2008 5:54 AM (in response to (Bryan_Hardesty))The alternative is Adobe LiveCycle Policy Server.
EDIT: Sorry, it's been renamed Adobe LiveCycle Rights Management for the LiveCycle ES Suite.
18. Re: How to set a PDF to expire (Working Script)(sobencha) May 18, 2008 5:48 AM (in response to (Bryan_Hardesty))Two questions regarding the JavaScript approach...
1) Is there a way to add this bit of JavaScript information to a pdf file in a batch fashion via Java, Python, etc.? I would like to add this to a build process in Ant. Any suggestions?
2) The responses saying how insecure this is and recommending the client/server approach seem worthless for a situation where the PDFs may never see internet connectivity. The only reason for using PDFs is for mobile users of my information. That is first and foremost why I need some document-embedded solution. Are there any known document-embedded solutions that do not involve client/server communication?
Thanks a lot for any assistance.
19. Re: How to set a PDF to expire (Working Script)gkaiseril May 18, 2008 10:14 AM (in response to (Bryan_Hardesty))And Acrobat JavaScript may not work on many PDA and other wireless devices that might display content.
20. Re: How to set a PDF to expire (Working Script)Patrick Leckey May 20, 2008 5:43 AM (in response to (Bryan_Hardesty))They may be "worthless" solutions for non-internet-connected scenarios, but that doesn't make this implementation any more secure. It's still a laughably insecure approach to expiring a document unless you are in complete control of the viewing environment. I know for a fact that a lot of corporate deployments of Acrobat default to JavaScript turned off - which means that, for them, your document never expires. A lot of home users turn JavaScript off too, because they don't "trust" JavaScript. Not to mention all the people that use non-Adobe viewers (Foxit Reader, for example) that may not handle the JavaScript correctly, causing unknown results.
There is no perfect solution for expiring a PDF in a non-internet-connected scenario. If you can't control the time checking in a known-safe server environment and have to rely on information from the local system, your security is lost, since anybody can do anything to the local system. If you're looking for something showy that will make people feel warm and think that you have some form of security on your documents, use the above script. But I would be wary of passing it off, especially in a professional environment, as "secure". Anybody who wants to spend 5 minutes on Google looking at PDF security will realize your "security" is a complete sham, and that could reflect badly on you.
The best option to secure a PDF in a non-connected environment is to apply document encryption and only give the password to those who need to view the document.
Oh and LiveCycle Rights Management ES has plenty of fallback configuration options for how to handle non-connected environments with policy-protected PDFs.
21. Re: How to set a PDF to expire (Working Script)(And_Be) Jun 16, 2008 8:21 AM (in response to (Bryan_Hardesty))Do you know how to add a specific time to the expiry?
22. Re: How to set a PDF to expire (Working Script)gkaiseril Jun 16, 2008 9:43 AM (in response to (Bryan_Hardesty))You need to add the variables for hours, minutes, seconds and milliseconds to the code. A generalized version that will work with omitted time elements follows.
function CheckExpiration(LastYear, LastMonth, LastDate, LastHour, LastMin, LastSec, LastMS) {
// document level function to see if passed date less than today's date
// check that numbers are passed as parameters
if (isNaN(LastYear) ) LastYear = 1900;
if (isNaN(LastMonth) ) LastMonth = 1;
if (isNaN(LastDate) ) LastDate = 1;
if (isNaN(LastHour) ) LastHour = 0;
if (isNaN(LastMin) ) LastMin = 0;
if (isNaN(LastSec) ) LastSec= 0;
if (isNaN(LastMS) ) LastMS = 0;
LastMonth = LastMonth - 1; // adjust the passed month to the zero based month
// make the expiration date time object a numeric value
var myDate = new Date( Number(LastYear), Number(LastMonth), Number(LastDate), Number(LastHour), Number(LastMin), Number(LastSec), Number(LastMS) ).valueOf(); // convert passed expiration date time to a date time object value
// get the current date time's object as a numeric value
var today = new Date().valueOf();
// return logical value of the comparison of the passed expiration date value to today - if true document has expired
return (myDate < today);
}
// the following code has to be executed after the above function is defined
// edit following fields
var ExpireYear = 2008; // 2008
var ExpireMonth = 3; // March
var ExpireDate = 21; // 21st
var ExpireHour = 12; // noon
var ExpireMin = 0;
// the following code has to be executed after the above function and variables are defined.
// test for an expired document by passing the time elements to the CheckExpiration() function, which compares them to the current date and time.
if (CheckExpiration(ExpireYear, ExpireMonth, ExpireDate, ExpireHour, ExpireMin) ) {
this.closeDoc(1);
app.alert("This file has expired.",1,0,"Expired");
}
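Stripped of the Acrobat-only closeDoc and alert calls, the generalized check above is ordinary JavaScript, so its date logic can be sanity-checked in any engine. Here is a self-contained copy for that purpose (the lowercase function name is mine):

```javascript
// Same parameter defaulting and zero-based-month handling as the
// document-level function above, minus the Acrobat-specific calls.
function checkExpiration(y, m, d, h, min, s, ms) {
  if (isNaN(y)) y = 1900;
  if (isNaN(m)) m = 1;
  if (isNaN(d)) d = 1;
  if (isNaN(h)) h = 0;
  if (isNaN(min)) min = 0;
  if (isNaN(s)) s = 0;
  if (isNaN(ms)) ms = 0;
  var expiry = new Date(Number(y), Number(m) - 1, Number(d),
                        Number(h), Number(min), Number(s), Number(ms)).valueOf();
  return expiry < new Date().valueOf(); // true means the document has expired
}

console.log(checkExpiration(2008, 3, 21, 12, 0)); // true (date is long past)
console.log(checkExpiration(2999, 1, 1));         // false (far in the future)
```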
23. Re: How to set a PDF to expire (Working Script)(Allegra_Pilosio) Aug 1, 2008 7:53 PM (in response to (Bryan_Hardesty))Yes, the script does work to expire the PDF; however, you can still open the PDF file through Photoshop or Illustrator even though it has expired. Is there any other solution?
24. Re: How to set a PDF to expire (Working Script)(SteveMajors) Aug 1, 2008 9:16 PM (in response to (Bryan_Hardesty))Here's a slightly different method, but it works (and there is certainly no 'work around' for anyone to get data you don't want them to get!).

It uses the action on open (sorry, I don't know the 'official' name of this - I'll describe it below) in the PDF to send the user to a server-side checking mechanism (in this case, a date checker - but you could do the same thing with lots of stuff).

Here's how I set it up, and it works great!

1. Start a new document that you can get a form field into (don't use LiveCycle Designer for this - that's too fancy - just make a document in Word that says ______ then print to PDF to get a nice simple PDF that you can edit directly in Acrobat).

2. Do a 'Run Form Field Recognition' and Acrobat will find the blank as a field.

3. Edit this field to have a "key" name (in my example I will use "today", which tells me this is the day that my document starts).

4. Under the 'Options' tab, set a Default Value of today's date (this will be the 'test against' date - my example shows how to expire a document 30 days from its creation date, but you could use anything you like - it is simple to modify!).

5. From the 'Format' tab, use the Select format category of Date. Use a date format you like (my example uses mm/dd/yyyy). Note that this MUST match the server-side code described later.

6. For a 'live' document, you may want to go to the General tab and make the field Hidden, as well as other things so the 'false' page isn't seen, but that is outside the scope of this example.

OK, now on to setting up the document to automatically send this info to your server for testing...

1. Go to the 'Pages' view (click on the icon at the top left).
2. Right-click on the page and select Page Properties.
3. Under the 'Actions' tab, select "Page Open" and "Submit a form" from the drop-down boxes.
4. Enter a URL to your web page (see below) that does the checking (like so it is easy to remember).
5. Check the HTML radio button.
6. Check the 'Only these...' button and then 'Select fields'. Make sure your "key" field is checked, as well as the 'Include Selected' button (sometimes, for me, these weren't - I don't know why, so check it!).

Now, on to the server side...

Here's the code that goes into the PHP server file:

<?php
$day = date("d");
$month = date("m");
$year = date("Y");
$today = substr($_REQUEST["today"], 3, 2);
$tomonth = substr($_REQUEST["today"], 0, 2);
$toyear = substr($_REQUEST["today"], 6, 4);
$tsp = mktime(0, 0, 0, $month, $day, $year);
$tsd = mktime(0, 0, 0, $tomonth, $today, $toyear);
$xdays = ($tsp - $tsd) / (24 * 60 * 60); // as calculated from the dates
$maxdays = 30; // set this to whatever your 'expire from today' limit is
if ($xdays >= $maxdays) {
    // do what you like to tell the user it is expired
} else {
    // show the info you want them to see
}
?>

(The info shown (or not) is outside the scope of this message; this is just a 'one way to make this work' example. In my system, I use a program called fpdf to create the PDF from scratch, building it to show the reader what I want them to see - whether it is a "Sorry, this is expired" document, or the data that they came for if the time hasn't expired.)

THAT'S IT!!! It works great and you can use it to do tons of stuff - just set the "key" on the original document and make the server code check whatever it is you want!

Pretty cool, I think! (And it only took me, a novice programmer, about 3 hours to figure it all out!)

Best of success with it - enjoy!
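The PHP day-difference arithmetic above can be mirrored in JavaScript for local testing (the helper name and the injected 'now' parameter are mine): the submitted mm/dd/yyyy string is split the same way the substr() calls split it, and the elapsed days are compared to the limit.

```javascript
// "stamp" plays the role of $_REQUEST["today"] (mm/dd/yyyy) and
// maxDays plays the role of $maxdays in the PHP above.
function isExpired(stamp, maxDays, now) {
  var month = Number(stamp.substr(0, 2)); // like substr($_REQUEST["today"], 0, 2)
  var day   = Number(stamp.substr(3, 2)); // like substr(..., 3, 2)
  var year  = Number(stamp.substr(6, 4)); // like substr(..., 6, 4)
  var created = new Date(year, month - 1, day);
  var elapsedDays = (now - created) / (24 * 60 * 60 * 1000);
  return elapsedDays >= maxDays;
}

console.log(isExpired("03/21/2008", 30, new Date(2008, 3, 25))); // true  (35 days later)
console.log(isExpired("03/21/2008", 30, new Date(2008, 2, 25))); // false (4 days later)
```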
25. Re: How to set a PDF to expire (Working Script)(Allegra_Pilosio) Aug 1, 2008 11:42 PM (in response to (Bryan_Hardesty))Thank you for your time Steve.
Sorry, I should have been a little clearer: I am actually a designer (not a programmer),
so the above is a little overwhelming. Are you able to set it out step by step?
Basically what I am trying to do is to set an expiration date on pdf files that I supply to my clients, so that once the file has expired it also can not be opened and edited via Photoshop/Illustrator should the pdf file land in the hands of another designer.
Is this possible?
26. Re: How to set a PDF to expire (Working Script)Bernd Alheit Aug 2, 2008 1:48 AM (in response to (Bryan_Hardesty))Use digital rights management (DRM) software.
27. Re: How to set a PDF to expire (Working Script)(Allegra_Pilosio) Aug 2, 2008 2:13 AM (in response to (Bryan_Hardesty))Can you recommend any? I am a Mac user, and I don't have a big budget - I am a freelancer.
28. Re: How to set a PDF to expire (Working Script)(SteveMajors) Aug 2, 2008 5:58 AM (in response to (Bryan_Hardesty))Try Adobe Document Center. You will need Acrobat 9 for a Mac (as I understand it - I have a client that told me that yesterday...)
29. Re: How to set a PDF to expire (Working Script)(M._Ahmad) Sep 18, 2008 5:30 AM (in response to (Bryan_Hardesty))Hi,
I have posted the same problem in forum at:
M. Ahmad, "How to close a PDF doc opended in IE web browser using JavaScript?" #, 16 Sep 2008 6:44 am
My example script works in the PDF but not in the browser; most of you are having the same problem.
I will just say:
1) SetPageAction is not the right place to put your JavaScript for this purpose, because the script will not run until the file is opened to that page or the user goes to that page.
2) I put this script in as a Document Level Script, and that way it runs as soon as the file is opened.
3) Yes, the script can be added to one file or a group of files through BATCH PROCESSING. To do this you have to write another script that adds this expiry script to the file(s) automatically through batch processing.
4) I agree, it is not true security, but it is better than having nothing at all.
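As a sketch of point 3 (the function name is mine, and the Acrobat-only attach call is shown commented out because it exists only inside Acrobat): a batch sequence can build the expiry script as a string and attach it as a document-level script.

```javascript
// Builds the document-level expiry script as a string. Inside an
// Acrobat batch-processing sequence you would then attach it with:
//   this.addScript("Expiry", buildExpiryScript(2008, 3, 21));
function buildExpiryScript(year, month, day) {
  return [
    // month - 1 because JavaScript Date months are zero-based
    "var expiry = new Date(" + year + ", " + (month - 1) + ", " + day + ");",
    "if (expiry < new Date()) {",
    "  app.alert('This file has expired.', 1, 0, 'Expired');",
    "  this.closeDoc(1);",
    "}"
  ].join("\n");
}

console.log(buildExpiryScript(2008, 3, 21));
```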
Thanks.
M.Ahmad
30. Re: How to set a PDF to expire (Working Script)gkaiseril Sep 18, 2008 6:13 AM (in response to (Bryan_Hardesty))Just be aware that any user who turns off JavaScript in their copy of Reader or Acrobat will not be restricted by this approach.
31. Re: How to set a PDF to expire (Working Script)Bernd Alheit Sep 18, 2008 6:40 AM (in response to (Bryan_Hardesty))Or any user with another PDF viewer.
32. Re: How to set a PDF to expire (Working Script)(Peter_Wepplo) Nov 19, 2008 4:28 PM (in response to (Bryan_Hardesty))I am interested in enabling the method of SteveMajors - Aug 1, 08 PST (#24 of 31). I have tried doing it, but it is beyond my knowledge of what I need to do or how to do it.
I don't know whether the options he advocated are necessary or not. I believe he was populating his form from his server, so disabling the script blocks access to the file as well. However, I would like to start with something simple, like just the date being sent from my server.
I think my problem is in properly setting up the .php code and capturing it in acrobat (instruction #6).
33. Re: How to set a PDF to expire (Working Script)(faisal_naik) Dec 16, 2008 1:17 AM (in response to (Bryan_Hardesty))Hello,
Using SteveMajors's method of having the PDF checked against a script on a website, is it possible to cross-check the date a PDF is created on the user's system against an expiry? To put it simply, I want a user downloading a copy of a PDF from my intranet to be able to use it for 24 hours only (the counter starting from the time it is downloaded onto the user's system). This is to ensure some form of document control / version control.
Thanks!
34. Re: How to set a PDF to expire (Working Script)(Valentine_Deepak_Crasta) Dec 18, 2008 4:35 AM (in response to (Bryan_Hardesty))Allegra Pilosio, you said the script to expire the PDF is working. To restrict opening the file in Illustrator or Photoshop, you can set the document security in the PDF.
Use Ctrl+D or Cmd+D, choose password security
Compatibility: Acrobat 3 or later
Encrypt all document contents
Set a password
Printing Allowed: High Resolution
Changes Allowed: None
That's it - people can't open your PDF in Photoshop or in Illustrator without knowing the password.
35. Re: How to set a PDF to expire (Working Script)(Melissa_Green) Jan 10, 2009 11:02 AM (in response to (Bryan_Hardesty))Too bad we can't turn this around and somehow require that JavaScript is ON before proceeding. Maybe a pair of documents: the first checks for JavaScript and shows a link to the second document with the time-sensitive data. Again, not a long-term solution.
36. Re: How to set a PDF to expire (Working Script)DimitriM Jan 10, 2009 12:01 PM (in response to (Bryan_Hardesty))Hi Melissa,
Yes, the problem with all these solutions is that in order to make them work, JavaScript must be turned on. But your mention of a "cover" document explaining to the user that JS must be turned on is possible. There is an example PDF at-
AcroDialogs Product Page
Scroll down to the link for "Document License Dialog Example" to download it. If the user does not turn JS on then they cannot view the information under the "cover" layer. If JS is already turned on then they can view it.
Again, this is not an airtight security method, just a pretty good deterrent.
Hope this helps,
Dimitri
WindJack Solutions
37. Re: How to set a PDF to expire (Working Script)(pari_patel) Jan 10, 2009 4:28 PM (in response to (Bryan_Hardesty))I'm not a big expert on Windows Active Directory, but I am sure there is an option within Active Directory to prevent users from switching off features like JavaScript.
The solution you have proposed here is actually quite handy. As mentioned, this is really to help protect honest people. If you need something more secure, I would suggest looking at Windows Active Directory, although of course you need to be running this service in the first place.
Regards. Peter.
38. Re: How to set a PDF to expire (Working Script)try67 Jan 11, 2009 3:18 AM (in response to (Bryan_Hardesty))Switching off JavaScript can be done from within Acrobat, so Active Directory can't prevent it. The layer option is good... I thought of another one -- hiding the pages in templates that are made hidden only when JavaScript is enabled and the script has not yet expired. Of course, an experienced user can display the pages themselves, but it would work for most.
39. Re: How to set a PDF to expire (Working Script)(JRG) Mar 12, 2009 7:48 AM (in response to (Bryan_Hardesty))I'm not familiar with the JavaScript API, but is there a way to modify document appearance with the API? If so, then the document could be rendered in an unreadable format (e.g. white print on white background) and the JavaScript could check the date and modify the appearance. Thus, if JavaScript were disabled, the document could be opened but not "used".
Yes, I understand that any of these measures can be likened to a lock on a screen door, but sometimes that's all you need to redirect the actions of the "almost honest".

Source: https://forums.adobe.com/message/1096909
Jabber::NS - Jabber namespaces
use Jabber::NS qw(<some tag>); print NS_AUTH;
Jabber::NS is simply a load of constants that reflect Jabber namespace constants (and other things). These can be imported into your program with the use statement. These namespace constants are based on those specified in the lib/lib.h file in the Jabber server source.
By default, nothing is imported - specify one or more tags or individual constants in the use statement as shown in the SYNOPSIS.
The tags are:
:stream - Stream namespaces, such as jabber:client.
:iq - IQ namespaces, such as jabber:iq:auth.
:x - X namespaces, such as jabber:x:oob.
:misc - Miscellaneous namespaces, such as the w3c one for XHTML.
:flags - Various flags, such as r_HANDLED, used by Jabber::Connection.
:all - Brings in all the namespaces that this module offers.
Don't forget to prefix these tag names with a colon, e.g.:
use Jabber::NS qw(:iq :x);
Jabber::NodeFactory, Jabber::Connection
DJ Adams
early
This module is free software; you can redistribute it and/or modify it under the same terms as Perl itself. | http://search.cpan.org/~qmacro/Jabber-Connection-0.04/lib/Jabber/NS.pm | CC-MAIN-2015-35 | refinedweb | 181 | 67.25 |
Overview On 'vue-360' Library:
Some of the key features of the 'vue-360' library are:
amount - the total number of images used to display the 360-degree preview.
imagePath - the relative path, or full domain path, to the images, without the image name.
fileName - the image file name.
spinReverse - a boolean property used to rotate the images in reverse order. The default value is false.
autoplay - autoplays your images. The default value is 24 images.
loop - the number of loops you want for autoplay. The default value is 1.
boxShadow - applies a box-shadow background. The default value is false.
buttonClass - applies styling to the buttons. The default value is 'light'.
paddingIndex - applies a leading zero to the image index. The default value is false.
disableZoom - disables the zoom functionality. The default value is false.
scrollImage - scrolls images instead of the default zoom. The default value is false.
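As a sketch of how several of these props can be combined on the component (the prop values and the '{index}' file-name token here are illustrative assumptions, not taken from this article):

```html
<vue-three-sixty
  :amount="16"
  imagePath="./bike360"
  fileName="bike_{index}.jpg"
  :spinReverse="true"
  :loop="2"
  buttonClass="dark"
  :paddingIndex="true"
/>
```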
Create A Sample Vue 2.0 Application:
The 'vue-360' library works fine with Vue 2.0 applications. At the time of writing, this plugin does not work with Vue 3.0, though it might be supported in future releases.
Install 'vue-360' Library:
npm install vue-360
Configure Font Awesome Icons:
The 'vue-360' library adds some action buttons like 'Play', 'Next Image', 'Previous Image', etc. For these buttons, the library uses Font Awesome icons, so we need to add the Font Awesome CSS file to render them. Add the CSS link in the 'public/index.html' page, inside the HTML header tag.
public/index.html:(inside of the Html header tag)
<link href="" rel="stylesheet" type="text/css">
Prepare Images To Use In 360-Degree Vue Component:
Images can either be stored within our own application, or consumed from another website that hosts them. For this sample, I'm saving the images within my VueJS application, inside of the 'public' folder.
Configure 'vue-360' Library Vue Component And CSS:
Now we have to configure the 'VueThreeSixty' component in the main.js file. The main.js file is the entry file for the application, so any configuration made in this file is called a global or root configuration. We also need to import the 'vue-360' library CSS file in main.js.
src/main.js:
import Vue from 'vue'
import App from './App.vue'
import VueThreeSixty from 'vue-360'

import 'vue-360/dist/css/style.css'

Vue.use(VueThreeSixty)

Vue.config.productionTip = false

new Vue({
  render: h => h(App),
}).$mount('#app')
- (Line: 3) Loading 'VueThreeSixty' component from 'vue-360' library.
- (Line: 5) Imported CSS reference of 'vue-360' library.
- (Line: 7) Configured 'VueThreeSixty' component into the application 'Vue' instance.
Render 'vue-360' Vue Component:
The 3 main input properties to configure the 'vue-three-sixty' component are 'amount', 'imagePath', and 'fileName'.
Now clean up all the existing code in the 'HelloWorld.vue' component and add the below code:
src/components/HelloWorld.vue:(Html Part)
<template>
  <div>
    <vue-three-sixty
      :amount="16"
      imagePath="./bike360"
      fileName="bike-{index}.jpg"
    />
  </div>
</template>
- The 'vue-three-sixty' is a vue component derived from the 'vue-360' library.
- (Line: 4) The property 'amount' set with a value to '16'. This count value is the number of images to frame the 360-degree animation.
- (Line: 5) The property 'imagePath'. If you are using images from a different domain, use the full image URL excluding the image name. If you are using images saved in the application's 'public' folder, use the folder path inside of the 'public' folder. For this sample, I stored the images inside of a 'bike360' folder saved in the 'public' folder, so in my case the 'imagePath' value is './bike360'.
- (Line: 6) The property 'fileName'. The image names must contain a number to determine the order of the images. This number is supplied dynamically by the 'vue-three-sixty' component through its context variable 'index'.
- So 'vue-three-sixty' component loads images inside of the 'imagePath' specified then it creates a nice 360-degree image animation.
Wrapping Up:
Hopefully, this article delivered some useful information about integrating the 'vue-360' library for an image previewer component in a VueJS application. I'd love to have your feedback, suggestions, and better techniques in the comment section below.
I can request a Wildcard cert or a SAN Enabled cert, but apparently not both functions at the same time. Most of the SSL Cert providers I've checked with don't support this effect [Incommon/Comodo, DIGICert, etc...etc...]
- The current SAN cert I have, which captures the namespace of the tenant, install fails with the CN doesn’t match in the HCP gui. We believe that this is because the CN should be a wildcard [*.hcp.its.unc.edu vs. hcp.its.unc.edu]…as this is what the guide “HCP Certificate HowTo” written by Thorsten Simons indicates as well. –The provider doesn’t allow wildcards in the CN of a multi-domain or Unified Communications SAN Enabled Cert...so I am stuck there.
- I currently have a wildcard cert installed; however, it doesn’t capture and protect the namespace buckets of the tenant.
Has anyone actually done this??? as every example I find only has self-signed certs installed.
So, I thought I would report my findings and see if this matches your expectations, as it is not matching ours.
GoDaddy says they cannot issue a Wildcard & SAN Enabled Cert for both *.hcp.its.unc.edu [domain] and *.unc01.hcp.its.unc.edu [first tenant]. They said that they could provide us with two wildcard certs to install both on the HCP system. I'm not sure that two certs are supposed to be installed at the same time here. All the documentation seems to indicate that there is one active cert at a time which is replicated to all nodes in the hcp cluster. Plus, the moment we add another tenant, we’d then have three certs installed, then 4, then 5, then 6….etc…etc…if that is expected, then ok.
DigiCert says, that in order to get the desired effect of wildcard san enabled certs, we ultimately require three certs to be issued. One wildcard cert for *.hcp.its.unc.edu, another wildcard cert for *.unc01.hcp.its.unc.edu. Then they would take both of those certs and combine them into a third cert effectively doing what we are trying to accomplish….but that it would cost us a lot more money as we’d be paying for multiple certs and custom manual configs…..and, then the moment we add another tenant to the HCP system, that’s now 5 certs. The original two, combined into the third….plus the new cert for the new tenant and the original two wildcard cert, all three combined into a 5th cert.
I can install a self-signed cert without issue….other than the client errors or hassle of installing a customized self-signed root-authority file for each user that wishes to make use of this product----where they may or may not need to update this file each and every time we add additional tenants? Even if this isn’t the case, this process of self-signed is undesirable
So, can you point me to directions, or really to an actual human being which has accomplished this before in a non-self-signed way? Otherwise, our options seem to be either to be held hostage by DigiCert for an expensive process which really, imho, should be cheap…..or, we take HCP out of the picture for terminating our SSL connections, fronting it by some webapps on a Windows or Linux box to handle the process…which sort of somewhat defeats the purpose of having an HCP system in the first place. | https://community.hitachivantara.com/thread/8217-has-anyone-successfully-installed-a-third-party-san-enabled-ssl-cert-on-hds-hcp | CC-MAIN-2018-34 | refinedweb | 585 | 71.44 |
Monitoring AWS IoT 1-Click with Amazon CloudWatch
AWS IoT 1-Click automatically monitors devices on your behalf and reports metrics through Amazon CloudWatch. These metrics are reported in the device region where the devices were registered by the manufacturer. For more information about device regions, see How AWS IoT 1-Click Works. You can find the metrics in the Amazon CloudWatch dashboard under the IoT1Click namespace. You can also use Amazon CloudWatch Events to define which automated actions to take when an event matches a rule. The following actions can be triggered:
Invoking an AWS Lambda function.
Invoking Amazon EC2 Run Command.
Relaying the event to Amazon Kinesis Data Streams.
Activating an AWS Step Functions state machine.
Notifying an Amazon SNS topic or an AWS SMS queue.
AWS IoT 1-Click tracks and reports the following metrics:
TotalEvents tracks the number of events published by devices. This metric can be viewed and graphed by device event, project, device type, or product type.
RemainingLife represents the approximate percentage of life remaining for a device. AWS IoT 1-Click reports this number based on the manufacturer’s rating of the device. For example, if a button is designed to last for approximately 2000 clicks, and 500 clicks have been recorded, the RemainingLife value is reported as 75%. The RemainingLife metric can be viewed and graphed by project, device type, or product type. Customers can use the RemainingLife metric to set up alarms that are triggered when devices fall below a certain threshold. Customers can then query the RemainingLife of devices by using the
GetDeviceHealthParameters method to identify devices that have low RemainingLife values.
CallbackInvocationErrors tracks failures in invoking the callbacks (Lambda functions) when the device emits an event. The CallbackInvocationErrors metric can be viewed and graphed by invoked callback (Lambda function ARNs set as callbacks) or by project. Customers can set up alarms for the CallbackInvocationErrors metric to be notified when AWS IoT 1-Click was unable to route events from their devices to their configured Lambda functions.
For more information, see the Amazon CloudWatch Events User Guide. | https://docs.aws.amazon.com/iot-1-click/latest/developerguide/1click-cloudwatch.html | CC-MAIN-2019-04 | refinedweb | 338 | 56.25 |
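As an illustration, the metrics above can be queried programmatically through the CloudWatch API. The sketch below builds a GetMetricStatistics request for use with boto3; the exact namespace and dimension names here are assumptions (check the CloudWatch console for the values used in your account):

```python
import datetime

def build_total_events_request(project_name, hours=24):
    """Parameters for CloudWatch get_metric_statistics; the namespace and
    dimension name are assumed, not confirmed by the documentation above."""
    now = datetime.datetime.utcnow()
    return {
        "Namespace": "AWS/IoT1Click",
        "MetricName": "TotalEvents",
        "Dimensions": [{"Name": "Project", "Value": project_name}],
        "StartTime": now - datetime.timedelta(hours=hours),
        "EndTime": now,
        "Period": 3600,          # one datapoint per hour
        "Statistics": ["Sum"],
    }

# With AWS credentials configured, you would pass this to boto3, e.g.:
#   boto3.client("cloudwatch").get_metric_statistics(
#       **build_total_events_request("my-project"))
req = build_total_events_request("my-project")
print(req["MetricName"], req["Period"])
```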
# Why PVS-Studio Uses Data Flow Analysis: Based on Gripping Error in Open Asset Import Library

An essential part of any modern static code analyzer is data flow analysis. However, from an outside perspective, the use of data flow analysis and its benefit is unclear. Some people still consider static analysis a tool searching for something in code according to a certain pattern. Thus, we occasionally write blog posts to show how this or that technology, used in the PVS-Studio analyzer, helps to identify another interesting error. Today, we have such an article about the bug found in the Base64, one of the encoding standard implementations of binary data.
It all started with checking the latest version of the Qt 6 library. There was a separate usual [article](https://www.viva64.com/en/b/0801/) on this, where I'd described 77 errors found. It turned out that at first, I decided to flip through the report, not excluding the third-party libraries' warnings. In other words, I didn't exclude the warnings related to \src\3rdparty in the settings. It so happened that I immediately ran up against a gripping error example in the [Open Asset Import Library](https://github.com/assimp/assimp). So, I decided to write this extra little note about it.
This defect highlights the benefit of data flow analysis in tools such as [PVS-Studio](https://www.viva64.com/en/pvs-studio/). Without that, it's impossible to find numerous errors. By the way, if you're interested in learning more about data flow analysis and other aspects of the tool's setup, you can read the [Technologies used in the PVS-Studio code analyzer for finding bugs and potential vulnerabilities](https://www.viva64.com/en/b/0592/) article.
Now, let's turn our attention right to the error, found in the Open Asset Import Library (assimp). File: \src\3rdparty\assimp\src\code\FBX\FBXUtil.cpp.
```
std::string EncodeBase64(const char* data, size_t length)
{
// calculate extra bytes needed to get a multiple of 3
size_t extraBytes = 3 - length % 3;
// number of base64 bytes
size_t encodedBytes = 4 * (length + extraBytes) / 3;
std::string encoded_string(encodedBytes, '=');
// read blocks of 3 bytes
for (size_t ib3 = 0; ib3 < length / 3; ib3++)
{
const size_t iByte = ib3 * 3;
const size_t iEncodedByte = ib3 * 4;
const char* currData = &data[iByte];
EncodeByteBlock(currData, encoded_string, iEncodedByte);
}
// if size of data is not a multiple of 3,
// also encode the final bytes (and add zeros where needed)
if (extraBytes > 0)
{
char finalBytes[4] = { 0,0,0,0 };
memcpy(&finalBytes[0], &data[length - length % 3], length % 3);
const size_t iEncodedByte = encodedBytes - 4;
EncodeByteBlock(&finalBytes[0], encoded_string, iEncodedByte);
// add '=' at the end
for (size_t i = 0; i < 4 * extraBytes / 3; i++)
encoded_string[encodedBytes - i - 1] = '=';
}
return encoded_string;
}
```
If you want, for a start you may try to detect the error yourself. So that you don't accidentally read the answer right away, let me show you some other exciting articles and briefly tell you what Base64 is:). Here's a list of additional articles on related topics:
1. [February 31](https://www.viva64.com/en/b/0550/);
2. [Machine Learning in Static Analysis of Program Source Code](https://www.viva64.com/en/b/0706/);
3. [How to introduce a static code analyzer in a legacy project and not to discourage the team](https://www.viva64.com/en/b/0743/).
Ok, let's go on. Here is the coding algorithm implementation of a byte string in [Base64](https://en.wikipedia.org/wiki/Base64) encoding. This is the coding standard of binary data with only 64 characters. The encoding alphabet contains text and numeric Latin characters A-Z, a-z, and 0-9 (62 characters) and 2 additional characters that vary among implementations. Base64 encoding converts every 3 source bytes into 4 encoded characters.
If only one or two bytes are left to encode, as a result, we have only the first two or three characters of the line. The output will be padded with one or two additional pad characters (=). The padding character "=" prevents further bits from being added to the reconstructed data. This point is implemented incorrectly in the function considered.
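As a side demonstration (independent of the C++ library under discussion), the 3-bytes-to-4-characters rule and the '=' padding can be checked with Python's standard base64 module:

```python
import base64

# 3 input bytes -> exactly 4 output characters, no padding needed
assert base64.b64encode(b"abc") == b"YWJj"

# 2 leftover bytes -> 3 significant characters plus one '=' pad
assert base64.b64encode(b"ab") == b"YWI="

# 1 leftover byte -> 2 significant characters plus two '=' pads
assert base64.b64encode(b"a") == b"YQ=="

print("padding behaves as described")
```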
Found the error? Well done. If not, that's ok too. You need to delve into the code to notice that something goes wrong. The analyzer reports about this "something wrong" with the warning: [V547](https://www.viva64.com/en/w/v547/) [CWE-571] Expression 'extraBytes > 0' is always true. FBXUtil.cpp 224
To understand what worried the analyzer, let's take a look at the initialization of the *extraBytes* variable:
```
// calculate extra bytes needed to get a multiple of 3
size_t extraBytes = 3 - length % 3;
```
The programmer planned to calculate how many additional bytes of input data need to be processed if their total number is not equal to 3. To do this, we just need to divide the number of processed bytes by modulo 3. A correct option of the variable initialization looks like this:
```
size_t extraBytes = length % 3;
```
Then, if, for example, 5 bytes are processed, we get 5 % 3 = 2. So, we need to additionally process 2 bytes. If the input received 6 bytes, then nothing needs to be processed separately, since 6 % 3 = 0.
Although, it may have meant the number of bytes missing for a multiple of three. Then, the correct code should look that way:
```
size_t extraBytes = (3 - length % 3) % 3;
```
Right now, I'm not interested in trying to figure out the right variant. Anyway, the programmer wrote some average meaningless version of the code:
```
size_t extraBytes = 3 - length % 3;
```
Right at the moment of analyzing this code, the analyzer uses data flow analysis. Whatever value is in the *length* variable, after modulo division, a value in the range [0..2] will be obtained. The PVS-Studio analyzer can work with ranges, exact values, and sets. That is, we are talking about [Value Range Analysis](https://en.wikipedia.org/wiki/Value_range_analysis). In this case, it is the range of values that will be used.
Let's continue the evaluations:
```
size_t extraBytes = 3 - [0..2];
```
It turns out that the *extraBytes* variable will never be equal to zero. The analyzer will evaluate the following possible range of its values: [1..3].
Until the moment of checking, the variable is not changed anywhere. The analyzer reports us that the check result will always be true. Therefore, the tool is absolutely right:
```
if (extraBytes > 0)
```
This is a simple but wonderful example. It shows how the data flow analysis allowed us to evaluate the range of variable values. It also helped us to be certain that the variable does not change, and finally, that the condition is always true.
Of course, the incorrectness of function operation is not limited to the execution of a code fragment that should not be executed. Everything goes awry there. Imagine, you want to encode 6 characters. In this case, the output string must contain 8 characters. Let's quickly estimate how the considered function will behave.
```
// calculate extra bytes needed to get a multiple of 3
size_t extraBytes = 3 - length % 3; // 3-6%3 = 3
// number of base64 bytes
size_t encodedBytes = 4 * (length + extraBytes) / 3; // 4*(6+3)/3 = 12
std::string encoded_string(encodedBytes, '=');
```
The output string happened to contain 12 characters, not 8. Further, everything will work in a wrong way, too. There's no point in going into details.
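The size arithmetic is easy to replay outside C++. Here is a small Python sketch (my own re-derivation, not code from the library) contrasting the buggy *extraBytes* with the corrected formula:

```python
def buggy_sizes(length):
    extra = 3 - length % 3          # bug: never 0, range is [1..3]
    return extra, 4 * (length + extra) // 3

def fixed_sizes(length):
    extra = (3 - length % 3) % 3    # 0 when length is a multiple of 3
    return extra, 4 * (length + extra) // 3

# 6 input bytes must produce 8 Base64 characters
print(buggy_sizes(6))   # (3, 12): four characters too many
print(fixed_sizes(6))   # (0, 8)
print(fixed_sizes(5))   # (1, 8): 5 bytes pad out to 8 characters
```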
That's how nice and easy static analysis found the error in the code. Just imagine how painful it would be to debug and understand why the characters encoding in Base64 encoding went wrong. By the way, here comes the question of the third-party libraries' quality. I touched upon it in the following article: [Why it is important to apply static analysis for open libraries that you add to your project](https://www.viva64.com/en/b/0762/).
Try to use PVS-Studio regularly in your development process to find many bugs as early as possible. You'll like it :). If you are developing an open-source project, you can use the analyzer for [free](https://www.viva64.com/en/b/0614/). Thanks for your attention. Wish you bugless code. | https://habr.com/ru/post/543138/ | null | null | 1,384 | 54.83 |
Blender 3D: Noob to Pro/Advanced Tutorials/Python Scripting/Procedural object creation
Text Blocks[edit]
A Blender document can contain text blocks, which are not the same as text objects in a 3D scene (though the former can be converted to the latter). Besides generating text objects, a text block can serve any purpose you like; for example, use it to pass workflow instructions to a colleague along with the document; display a copyright or help message in the initial layout that a user sees on opening the document; or hold a Python script that can be run by the user to perform some useful action related to the document.
Text blocks are edited in the Text Editor window. The Text Editor also provides commands to load the contents of a text block from an external file, save it to an external file, and execute the text as a Python script.
Your First Script[edit]
Open a new, default Blender document. Split the 3D View in two vertically. Change the type of one side to a Text Editor window. In the header, you will see a small popup menu showing just a double-headed arrow; click on this, and it should show three items: “Text”, “ADD NEW” and “OPEN NEW”. “ADD NEW” creates a new, empty text block, while “OPEN NEW” creates a new text block by reading its contents from an external file. But “Text” is already the name of a default empty text block, so just use that; as soon as you select it, you should see a red insertion cursor appear at the top left, indicating that you can start typing.
Unlike the Interactive Python Console, nothing is automatically imported for you. So as in any other Python script, you need to mention every module you want to access.
Let us write a script that inserts a new mesh primitive into a Blender document, namely a tetrahedron. First we need to create a new mesh datablock, which we will name “Tetrahedron”:
NewMesh = Blender.Mesh.New("Tetrahedron")
Then we need to define the coordinates of the vertices; for a tetrahedron with edges of length 1 Blender unit, suitable values are (0, -1/√3, 0), (1/2, 1/(2√3), 0), (-1/2, 1/(2√3), 0) and (0, 0, √(2/3)). Or in Python:
NewMesh.verts.extend \ ( [ (0, -1 / math.sqrt(3),0), (0.5, 1 / (2 * math.sqrt(3)), 0), (-0.5, 1 / (2 * math.sqrt(3)), 0), (0, 0, math.sqrt(2 / 3)), ] )
We also need to define the faces of the object; each face is defined by listing a sequence of indexes into the above array of vertices (you don’t need to bother about defining the edges; Blender will deduce them from the faces):
NewMesh.faces.extend \ ( [[0, 1, 2], [0, 1, 3], [1, 2, 3], [2, 0, 3]] )
That suffices for the mesh, now we create an actual object datablock that will appear in the scene (which we also name “Tetrahedron”):
TheObj = Blender.Object.New("Mesh", "Tetrahedron")
Link it to the mesh we just made:
TheObj.link(NewMesh)
And to make the object appear in the scene, it has to be linked to it:
TheScene = Blender.Scene.GetCurrent() TheScene.link(TheObj)
And finally, tell Blender that the scene has changed:
TheScene.update()
and to redraw the 3D view to show the updated scene:
Blender.Window.Redraw()
Put It All Together[edit]
Your complete script should look like this, including the imports of the referenced math and Blender modules; note also the use of the "from __future__ import division" directive to ensure that the "/" operator always returns a real, not an integer, result; this is good practice with Python 2.x, since it becomes mandatory behaviour beginning with Python 3.0.
from __future__ import division
import math
import Blender

NewMesh = Blender.Mesh.New("Tetrahedron")
NewMesh.verts.extend \
  (
    [
        (0, -1 / math.sqrt(3), 0),
        (0.5, 1 / (2 * math.sqrt(3)), 0),
        (-0.5, 1 / (2 * math.sqrt(3)), 0),
        (0, 0, math.sqrt(2 / 3)),
    ]
  )
NewMesh.faces.extend \
  (
    [[0, 1, 2], [0, 1, 3], [1, 2, 3], [2, 0, 3]]
  )
TheObj = Blender.Object.New("Mesh", "Tetrahedron")
TheObj.link(NewMesh)
TheScene = Blender.Scene.GetCurrent()
TheScene.link(TheObj)
TheScene.update()
Blender.Window.Redraw()
In your 3D View, get rid of any default cube to avoid it obscuring things. Now come back to the Text Editor window showing the above script, and press ALT + P to execute it; you should see your new tetrahedron appear in the 3D view!
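As a quick sanity check (plain Python 3, no Blender required), you can confirm that the four vertices used above really form a regular tetrahedron with edge length 1:

```python
import math
from itertools import combinations

verts = [
    (0, -1 / math.sqrt(3), 0),
    (0.5, 1 / (2 * math.sqrt(3)), 0),
    (-0.5, 1 / (2 * math.sqrt(3)), 0),
    (0, 0, math.sqrt(2 / 3)),
]

# all 6 pairwise distances should be 1 (within floating-point tolerance)
edges = [math.dist(a, b) for a, b in combinations(verts, 2)]
assert all(abs(d - 1.0) < 1e-9 for d in edges)
print("6 edges, all of length 1")
```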
If You Hit An Error[edit]
If there is any error running the script, Blender will display a cryptic message to let you know. For example, the following simple one-line script
raise RuntimeError("Uh-oh")
displays this message:
To get more details you will have to look in Standard Error; on Linux/Unix systems, the message will appear in the terminal session if you invoked Blender from the command line; otherwise it will be appended to your ~/.xsession-errors file if you launched Blender from a GUI. On Windows the message appears in the console window. By hunting around in the appropriate place, you should be able to find the full Python traceback:
Traceback (most recent call last): File "Text.001", line 1, in <module> RuntimeError: Uh-oh | https://en.wikibooks.org/wiki/Blender_3D:_Noob_to_Pro/Advanced_Tutorials/Python_Scripting/Procedural_object_creation | CC-MAIN-2018-13 | refinedweb | 818 | 70.53 |
Odoo Help
How to set the 'To' field of an email template with a many2many field
There are 2 parameters in the email templates:
To (Emails)
To (Partners)
I want to use a many2many fields variable for this. Something like ${object.user_ids}. Is there any way to do it?
Hi,
To (Emails) :
'email_to': fields.char('To (Emails)', help="Comma-separated emailaddresses (placeholders may be used here)"),
To (Partners) :
'partner_to': fields.char('To (Partners)',
help="Comma-separated ids of recipient partners (placeholders may be used here)",
=> list of id of class res partner, not res users, but a user has a related partner
add in To (Partners)
${object.get_partner_ids(object.user_ids)}
in the class where is defined field user_ids, add the function used in ${object.get_partner_ids(object.user_ids)}
def _get_partner_ids(self, cr, uid, user_ids) :
return str([user.partner_id.id for user in user_ids]).replace('[', '').replace(']', '')
bye
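The bracket-stripping trick in the answer above can be seen in isolation in plain Python (a standalone illustration, independent of Odoo's ORM):

```python
# str() of a list of ids gives '[7, 12, 42]'; stripping the brackets
# leaves the comma-separated id list that 'partner_to' expects.
ids = [7, 12, 42]
partner_to = str(ids).replace('[', '').replace(']', '')
print(partner_to)  # 7, 12, 42
```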
I have a similar case, but can't figure out how I get the correct partner ids. I have a message model that is related to project.project and I want to get the followers of the project into my mail.template.
Here is my code so far. I stuck at the relation between the field project.message_follower_ids and res_partner. (project.message_follower_ids is related to mail.followers -> res_id).
<field name="partner_to">${object.get_partner_ids(object.project_id.message_follower_ids)}</field>
@api.multi # TODO get the followers of the active Scrum Project
def get_partner_ids(self, user_ids):
return str([user_ids.ids]).replace('[', '').replace(']', '')
Thanks Cyril ! This is excellent.
Just a couple of changes that I had to make from my end:
The function name beginning with underscore (_get_partner_ids) had to be changed to just get_partner_ids because the private class was inaccessible from the template.
partner.id should be partner_id.id
Calling the function from template was defined without the cr,uid because that was raising issues for me
${object.get_partner_ids(object.user_ids)}
Also, could you tell me why the additional char field is required?
Thanks again !
could yoy explain what you said : the additional char field ?
This char field: 'partner_to': fields.char('To (Partners)', help="Comma-separated ids of recipient partners (placeholders may be used here)", It is not being used right? So I was wondering why it is required.
it is not required, but you should have at least one email address, if you want use directy address a@a.com,...., use field, 'email_to', if you want use email field of a partner, use 'partner_to', bye
Write a C++ program that reads a word from the keyboard, stores it in a string.
Checks whether the word is a palindrome.
A palindrome reads the same from left to right as from right to left.
The following are examples of palindromes:"OTTO, " "deed, " and "level."
Use the subscript operator [].
Continually read and check words.
#include <iostream>
#include <string>
using namespace std;

int main()
{
    string word;                     // Empty string
    char key = 'y';
    while (key == 'y' || key == 'Y')
    {
        cout << "\n Enter a word: ";
        cin >> word;
        int i = 0, j = word.length() - 1;
        for (; i <= j; ++i, --j)
            if (word[i] != word[j])
                break;
        if (i > j)                   // All characters equal?
            cout << "\nThe word " << word << " is a Palindrome !" << endl;
        else
            cout << "\nThe word " << word << " is not a palindrome" << endl;
        cout << "\nRepeat? (y/n) ";
        do
            cin.get(key);
        while (key != 'y' && key != 'Y' && key != 'n' && key != 'N');
        cin.sync();
    }
    return 0;
}
Lloyd Dupont wrote:
> 1./ why not a "package" (and using) directive in gcc's ObjectiveC
> implementation. to implement namespace, which are a good idea (i
> found) to avoid name collision when mixing a huge API with its own.
Go ahead, provide a patch ! I also think it's a good addition, at least
if it stays simple (eg I pretty much like the Java way).
If I remember right there was a lengthy discussion of this on
comp.lang.objective-c some year ago, you may want to check the archives.
> 2./ what about shipping gcc ObjectiveC with NSObject, NSString,
> NSInvocation and NSException. As i did not manage (hum, yes i must
> admit) to build GNUstep.
> (NSThread would also be cool)
Bad idea IMHO. There are already various implementations of all of these
classes and all have their pros and cons. Also where do you want to draw
the line ?
Did you try to compile libFoundation ? This is more lightweight than
gstep-base and *might* be easier to compile on Win (didn't try for a
long time).
Greetings
Helge
--
SKYRIX Software AG -
Web Application Technology for Enterprises
"Timothy J. Wood" wrote:
> This is a terrible idea. I don't know of any good solution (although
> OpenStep/NT must have done something...).
All windows includes can be wrapped by
#define BOOL _WINBOOL
#define id _id
# include <windows.h>
#undef id
#undef BOOL
windows.h obviously doesn't include objc.h so the above thing should
work, right ?
> BOOL is four bytes on Windows and BOOL is one byte under ObjC.
I don't think that anywhere is specified that ObjC BOOL has to be one
byte ...
> The only reasonable 'solution' to this is to take the approach someone
> used on VxWorks in objc.h:
>
> #ifdef __vxworks
> typedef int BOOL;
> #else
> typedef unsigned char BOOL;
> #endif
I think this is even better. Using compatible BOOLs makes probably most
sense. But one has to watch that type encoding doesn't break, but I'm
not sure whether this is really a problem.
> Of course, the problem here is that then you introduce sizeof(BOOL)
> encoding problems between applications running with GNU ObjC vs. Apple
> ObjC (say, in some sort of DO server, if that were even possible between
> the two platforms).
No. The NSArchiver/NSUnarchiver defines the encoding format, not the
binary architecture of the host platform (at least in any Foundation I
know). Eg NSArchiver also usually moves the binary encoded data to
network byte order.
> Does anyone have any brilliant ideas to get around this?
I don't think it's brilliant, but the #defines as shown above are the
usual 'trick' to get around 'incompatible' C libraries and should work
just fine ?! You very often have C libs defining BOOL and/or 'id',
that's nothing special to Windows ;-)
Greetings
Helge
--
SKYRIX Software AG -
Web Application Technology for Enterprises
oups, it was a bug of me..... forget it...
anyway i still have a problem with end of file.
i am obliged to read EOF, and return a special token and treat it as a
top level rule.
is that normal ?...
could some one help me.
i am new to bison:
i want to parse something like:
#-----------------
int a,b;
int c;
.... to be continued ...
#------------------
my lexer seems to work.
in my parser i have:
#--------------------
%token END /* END for EOF */
%token NUMBER STRING ID TYPE
%type <token> ID
%%
declaration: TYPE ID_declList ';' {
printf("declaration !!..\n");
return 0;
}
ID_declList: ID_decl ',' ID_declList { printf("list\n"); }
| ID_decl { printf("decl\n"); }
ID_decl: ID { printf("ID:%s\n", $1); }
#-------------------------
and this don't work, outputting
ID:a
find any
decl
declaration !!..
yyparse() = 0
parse error
yyparse() = 1
parse error
yyparse() = 1
parse error
yyparse() = 1
parse error
yyparse() = 1
parse error
yyparse() = 1
parse error
............................;
i should have 2 declaration ! (ID:a, ID:b..)
i have attached the complete code.. if someone could help me with it.
i have understand the idea, but it seems i have a lot of trouble with
practice...
"Steve D. Perkins" wrote:
> > i experiment such problem.
> > i fix them by setting needed env var as follow
> > ...
> I have tried setting these additional environment variables
> with no further success. Something
humm....
BTW do you install the w95 patch ? it solve of my problem.
after release of "mingw-1.0-20010608.tar.gz", someone (mummmit ?)
post 2 or thre path, one of them fixing the search path problem on some
windows station. most notably w95, but i already suspect w2000. (it done
for me !)
so go to sourceforge and link later (after 20010608) version of gcc...
On Mon, 23 Jul 2001, Henrik Stokseth wrote:
> <reinhard.jessich@...> wrote:
>
> > And where can I find a Posix-compatible sh?
>
> try cygwin bash @
Thank you, but this uses the cygwin.dll. I'm looking for a mingw (VisualC) port of sh/bash.
Regards,
Reinhard
--
Ing. Reinhard Jessich mailto: reinhard.jessich@...
A-1190 Vienna, Goergengasse 2/2/1 phone: +43/1/3692600 mobile: +43/664/1735439
>
I have tried setting these additional environment variables (modified for my
particular installation directory of course), with no further success. Something
seems exceptionally fishy about all this. The previous version of MinGW I installed
just a few short months ago required no environmental settings at all (even an entry
in PATH was technically optional)... it seems strange that MinGW suddenly developed
dependencies on 5 or 6 environmental variables during the switch to a single-bundle
distribution. Also, other mailing list posts I have read through offer conflicting
environmental variable opinions... some listing other variables you have left out
(such as COMPILER_PATH), others claiming that you still need nothing other than PATH
at all.
I'm trying to finish up work with the MinGW FAQ that I've been doing over the
past couple of weeks, and find it terribly embarrassing that even the person trying
to document the thing can't get it installed cleanly. Has anyone else encountered
strange problems with include file and library search paths, that ended up being
explained by something external I might be overlooking? If this is any further of a
clue, I tried running gcc -print-search-dirs from an NT command-prompt... and while
the 'programs:' and 'libraries:' sections looked fine, the 'install:' portion had an
incorrect Cygwin-style path that struck me as odd (see below). Perhaps this has
nothing to do with anything, though.
C:\>gcc -print-search-dirs
install: /mingw/lib/gcc-lib/mingw32/2.95.3-4/
programs: c:/development/mingw/bin/../lib/gcc-lib/mingw32/2.95.3-4/;<etc>....
...
you could search in your install, set the right path, and this should
work later...
This weekend I finally decided to upgrade my MinGW environment... nuking the
previous collection-of-individual-packages based installation (from several months
ago), and dropping the contents of "mingw-1.0-20010608.tar.gz" in its place.
However, I quickly discovered problems compiling even simple "Hello World"
applications.
The first time I tried compiling my "Hello World", I encountered an error about
"iostream.h" not being found. I spent awhile researching through the mailing list
archives, and found a recommended fix to be adding
"<MinGW-path>/lib/gcc-lib/mingw32/2.95.3-4" to my PATH. I thought this sounded a bit
flaky, as my previous working setup didn't need that directory in the PATH... but I
made the addition anyway, and compilation succeeds.
However, I'm still having no luck with the linking portion... my simple one-line
"Hello World" C++ source file generates these errors during linking:
undefined reference to `endl(ostream &)'
undefined reference to `cout'
undefined reference to `ostream::operator<<(char const *)'
I have tried manually adding directories to the GCC search path with "-I" and
"-L" on the command-line... but fixing one missing dependency only opens up new
failed dependencies of THAT guy! Obviously, this is not normal and I must be missing
something hopefully obvious.
If this matters... I am using Windows 2000. I have the most recent version of
Cygwin installed, but my MinGW entries come before the Cygwin bin directory in my
PATH (both within my Cygwin environment and outside of it). I am getting the same
results whether I try compiling my "Hello World" from a Cygwin bash session or a NT
command-prompt.
Steve
Just to add my $.02.
I use zsh as my shell and had to set
GCC_EXEC_PREFIX=d:/mingw/lib/gcc-lib/ to get it to be able to compile
from any drive other than D: (or even on D: if I installed mingw to
other than d:/mingw).
I believe it has to do with the fact that ZSH does not change directory
to where the executable is started from thus causing the builtin
relative search paths to fail.
Hope this may help.
Sam
<reinhard.jessich@...> wrote:
> And where can I find a Posix-compatible sh?
try cygwin bash @
-henrik
On Mon, 23 Jul 2001, Danny Smith wrote:
> This make binary works best if you have a Posix-compatible sh installed
> in your path, but will also work with only cmd or command.
And where can I find a Posix-compatible sh?
I know zsh from UnxUtils, but there are some small problems with it and it
would be nice to have another sh or bash.
Thank you,
Reinhard
--
Ing. Reinhard Jessich mailto: reinhard.jessich@...
A-1190 Vienna, Goergengasse 2/2/1 phone: +43/1/3692600 mobile: +43/664/1735439
I'm not a member of this list, I just wanted to give a big Thank You to
the MinGW developers. I love games and just got started on a hobby game
programming project using Allegro and MinGW. Thanks for making that
possible. I'll try to remember to send out another email when I've
completed something fun. :)
Thanks again,
Blain
> I'm thinking a problem in binutils/dlltool.
you might be right.
as dlltool is, i guess, no longer maintained, since you build dll
with gcc now.
gcc -shared -o mydll.dll <my_source_or_object_files>
and optionally, if you also want to produce an import library (though
it is not strictly needed, as you can link directly against the
gcc-produced DLL without trouble):
gcc -shared -o mydll.dll -Wl,--out-implib,libmylib.a
<my_source_or_object_files>
Hi All,
I'm having a problem with mingw producing corrupt dlls. I had been using
2.95.2 19991024 and had no problems, but since an upgrade, it get an
invalid page fault in the dll without having changed any code. Right now
I'm using gcc-2.95.3-5 and binutils-2.11.90-20010705. I know this isn't
much to go on, but where could the problem be? I'm thinking a problem
in binutils/dlltool.
--
Peter Zelezny. <zed@...>
hum, i have 2 "good idea" about ObjectiveC.
1./ why not a "package" (and using) directive in gcc's ObjectiveC
implementation. to implement namespace, which are a good idea (i
found) to avoid name collision when mixing a huge API with its own.
2./ what about shipping gcc ObjectiveC with NSObject, NSString,
NSInvocation and NSException. As i did not manage (hum, yes i must
admit) to build GNUstep.
(NSThread would also be cool)
--- Chris <sstallone@...> wrote: > When I want to compiler I get
following error:
>
> GCC.EXE: installation problem, cannot exec
> `e:/mingw/lib/gcc-lib/mingw32/2.95.3-4/cpp0.exe': no such
> file or directory
>
>
>
> Using Snapshot MinGW-1.0-20010608
> Unpacked in e:\mingw
> SET GCC_EXEC_PREFIX=e:\mingw\lib\gcc-lib\
>
>
> Any solutions???
>
1) Send queries like this to the MinGW list, not to me personally.
2) Read the list archives. Many people have reported the same problem
on W9x/ME. The solution, also repeated many times in the archives, is
to download the gcc-2.95.3-20010613-W95patch.
The newest gcc-2.95.3 should not have this problem.
Danny
>
_____________________________________________________________________________ - Yahoo! Messenger
- Voice chat, mail alerts, stock quotes and favourite news and lots more!
Hello, Mumit, all
I am writing a C++ game, and I am trying to use DLLs with LoadLibrary
and GetProcAddress. I wrote both the DLL and the game app with Mingw32;
however, when I try to do a LoadLibrary call on the generated DLL I get a
null pointer, and GetLastError says an error with code 487 happened, which
means: attempt to access invalid address (ERROR_INVALID_ADDRESS).
I thought this might have been a mingw bug in the exe, so I recompiled with
VC++ - same error. So I thought it was a problem with my code. I wrote a
simple app that would just load the library, call a function and end, and it
worked as expected, so I believed it was a problem in my code even more. But
I am using the same libs, and the same way of using LoadLibrary in both
apps. I checked every possibility and couldn't find anything wrong. I even
made it so that in the exe LoadLibrary gets called first thing in WinMain -
still the same error. So, just to be sure, I took a random DLL and made my
application load that instead, and there were no problems, which makes me
believe the problem is in my mingw-generated DLL. However, since it works
in one app (the small one) and not in the other (the big one), I believe it
may be a bug in Mingw32. Anyway, here is how I am making the DLL - please
help me before I lose my hair :o)
LFLAGS = -mdll -lgdi32 -lwsock32 -lstdc++ -lddraw -lcomdlg32 -ldxguid
-fvtable-thunks
dlltool --output-def export.def AeonScript.o
perl FixDef.pl ---> this is a small script to append function = to the C++
"function__FRt5deque3Zt12basic_string . . . etc." naming to the .def
dlltool -d AeonScript.def -e exports.o -l libAeonScript.a AeonScript.o
gcc exports.o $(OBJS) -oAeonScript.dll $(LFLAGS)
Thanks, and greets to all.
Thank you very much. It IS what I want (though I would also very much
appreciate a Win32 port). I need the -z option. I did not know before that
DJGPP supported long file names.
Regards
Wu Yongwei
----------------------------------------------------
On 19 Jul 2001, at 12:11, adah@... wrote:
> I don't understand why. Anyone have a try? If this is difficult, maybe
> emailing me a working current copy of tar.exe is also OK. The ones I
have
> found on the Net is too old, especially that they did not contain
> compression.
You might want to use the tar from djgpp, even though it's a dos32
program, it works very well in the windows console, supporting long
filenames and the -z, -Z and --use-compress-prog options. If you
run it only on windows it doesn't depend on any other packages
either.
In general, I usually pull everything that hasn't got a decent mingw
port off the djgpp archive. It really is a very good platform.
Jon Svendsen
_______________________________________________
MinGW-users mailing list
MinGW-users@...
You may change your MinGW Account Options at:
I have uploaded a modified version of gcc-2.95.3, with support for ObjC
to SF file release page:
This release contains a beta version of the ObjC compiler for MinGW
(gcc-2.95.3) in addition to the C, C++ and G77 compilers.
If updating an existing installation you will need to update the gcc
driver files in /bin as well as the ObjC-specific binaries. The
safest option is to back-up, then overwrite all existing files with the
contents of this archive, which contains a complete installation of the
gcc binaries.
This release should work on both W9x and NT *without* the need for the
W95 patch released on 2001-06-13.
The release contains only the static libobjc.a runtime lib. I realise
that a dll version is required for many apps. However, the dll build
procedure in the gcc-2.95.3 sources needs some modification. Some help
from ObjC users would be appreciated.
Danny
I have uploaded version 3.79.1 of GNU make, built for MinGW target, to
SF file release page
This make binary works best if you have a Posix-compatible sh installed
in your path, but will also work with only cmd or command.
As always, I would appreciate hearing of any bugs you discover. Fixes
are nice too.
Danny
17 April 2008 11:56 [Source: ICIS news]
DUBAI (ICIS news)--Qurain Petrochemical Industries Co (QPIC) is planning to build a Kuwaiti dinars (KD) 800m ($3bn) purified terephthalic acid (PTA) plant.
Speaking at QPIC's general meeting which took place late on Wednesday, the CEO said that the company has received all the necessary governmental approvals to start the project.
Further details about the project, which will be a joint venture with local and international companies, were not disclosed.
On other projects QPIC is involved in, the paraxylene and styrene plant has reached 75% completion, while the Olefins II project will go on stream by the end of this year, the CEO told the meeting.
A methanol project in
QPIC is a joint stock company established in 2004 whose main investments are in the petrochemical industry.
($1 = KD0.27) | http://www.icis.com/Articles/2008/04/17/9112334/kuwaits-qurain-plans-to-develop-pta-plant.html | CC-MAIN-2015-22 | refinedweb | 142 | 60.58 |
Comment Re:What it has to do with privacy? (Score 2) 150
It's a cosmetic change. There is nothing visible that wasn't visible to begin with.
Agreed. The backlash after any Facebook redesign is ridiculous. Now we have to complain to the FTC?
Here's some Python, that will work in Python 3.1 (which is the most consistent for educational purposes, in my opinion). No external libraries required - all the standard distributions of Python 3.1 include turtle graphics.
from turtle import *
tracer(10, 0) # speeds up display - turtles can be slow!
for x in range(-160, 160):
    color(x / 320 % 1, x / 160 % 1, x / 100 % 1)
    penup()
    goto(-x, -100)
    pendown()
    goto(x, 100)
More fun, in my opinion, are recursive functions:
from turtle import *
delay(0)
def tree(length):
    if length < 1:  # base case: stop recursing on very short branches
        fd(length)
        bk(length)
    else:
        fd(length)
        lt(20)
        tree(length * 0.7)  # branch scale factor: a plausible guess, the original value is missing
        rt(60)
        tree(length * 0.7)
        lt(40)
        bk(length)

lt(90)
tree(50)
Exactly why I hope to work on getting (or see someone else work on getting) the new turtle module ported to work with Canvas objects. It has a couple very straightforward interfaces, and would be awesome for creating animations - and it has the added bonus of probably not being that hard to port.
Obviously a port of SDL (and thus all the libraries/modules that depend upon it, such as pygame) or pyglet is not likely to happen, but it seems like there will be quite a few simpler options. And this also opens the doors for other, web-Python specific libraries that use HTML/Canvas as their primary means of output...
JavaScript has a lot going for it, but it also has quite a few downsides. Just off the top of my head:
A good mainframe would last decades. Google's frankenframe (let's call it what it is) must be sloughing off parts like skin cells from a Texan with eczema.
In the computer world, where Moore's law reigns supreme, I would much prefer to have an excuse to refresh my hardware every few years and take advantage of all the advancements of technology that have taken place in that time. It seems that Google has figured out how to make this sort of thing modular and easily swappable, so kudos to them.
I'm almost certain that's for cost reasons. Sure, Google could probably get Gigabyte to custom-make a board - but then they'd have to pay that much extra to custom-design it, and Gigabyte would probably charge them a little bit more. As it is, they can just use the same lines that Gigabyte is already running, and get the same hefty discount that Joe the Computer builder gets from the massive volume they're running.
I for one heartily agree. (And yes, this comment is me doing some achievement-grabbing).
"Oh what wouldn't I give to be spat at in the face..." -- a prisoner in "Life of Brian" | https://slashdot.org/~slthytove | CC-MAIN-2016-44 | refinedweb | 531 | 71.75 |
Atlas gave us the ability to easily call Web Services from JavaScript. MS AJAX has gone one step further! We can now call methods in the codebehind of the current page from JavaScript. Here's how:
This is designed for v1.0.61025.0 of AJAXExtensionsToolbox.dll
Set the EnablePageMethods attribute to true
<asp:ScriptManager ID="ScriptManager1" runat="server" EnablePageMethods="true" />
The method has to be declared static. It must also be marked with the WebMethod attribute. You'll probably find that you need to include System.Web.Services
using System.Web.Services;
[WebMethod]
public static string MyMethod(string name)
{
return "Hello " + name;
}
To call the method from javascript you need to append PageMethods. to the front of the method name.
<script>
function test(){
alert(PageMethods.MyMethod("Paul Hayman"));
}
</script>
This is cool! What about security? If you needed to be logged in to execute "MyMethod", does the xmlhttpRequest issued contain a session cookie with it?
This is something I considered myself.. what I decided to do in my implementation was check that the user was in one of the allowed rolegroups for the operation.
To clarify, I used this for passing back information from a content editor. So, a check to see if the user was an Administrator or Editor was sufficient.
Paul
Hi, I get an exception "PageMethods is not defined". Any ideas? is PageMethods an internal js object or am I missing something?
After discussing it with Wouter we discovered that his problem was caused by the WebMethod being located inside a UserControl, not the Page.
It would appear that you can't call a WebMethod located in a UserControl.
If anyone knows how to achieve this, post a comment.
I'm able to hit the server method, but it is returning me "undefined" instead of the string "Hello whatever".
What am I doing wrong?
I'm facing with the same problem as Andres V. When I call my webservice method, returning a string, i get "undefined"...
Yes, the actual way of doing this is to use a "onComplete" because of the asynchronous call, as follows:
<script type="text/javascript">
function test(){
PageMethods.MyMethod("Gerard van de Ven", OnMyMethodComplete);
}
function OnMyMethodComplete(result, userContext, methodName)
{
alert(result);
}
</script>
excellent, thanks
Not marking the PageMethod as STATIC is quite a common mistake; be sure to mark your method as STATIC :)
I am having a problem calling my webmethod from javascript. Everything was working fine until I implemented security. Now when I do PageMethod.TestCall("something"); I get a 401 - Authentication Error. Do you have any ideas on how to resolve this problem?
I am trying to make a call back to my server from the javascript the same as you have here. My problem is that now that I have secured my website with forms authentication, the call back to the server is throwing a 401-Authentication error. Have you seen this before? Do you have a work around for this problem?
just what i was looking for thanks
Having problems here....
I'm using the same code verbadum (enabling scriptManager page methods, your javaScript and codebehind method) on a master page. For some reason, it's not liking PageMethods and won't call the server side function. Any Suggestions?
Vector–Matrix Inner Product with Compute Shader and C++ AMP
Large vector-matrix inner products by the GPU are 250 times faster than straight forward CPU implementations on my PC. Using C++ AMP or a Compute Shader the GPU realized a performance of over 30 gFLOPS. That is a huge increase, but my GPU has a “computational power” (whatever that may be) of 1 teraFLOP, and 30 gFLOPS is still a long way from 1000 gFLOPS.
This article presents a general architectural view of the GPU and some details of a particular exemplar: the ATI Radeon HD5750. Then code examples follow that show various approaches to large vector-matrix products. Of course, the algorithms at the end of the article are the fastest. They are also the simplest.
Unified View of the GPU Architecture
Programming the GPU is based on an architectural view of the GPU. The purpose of this architectural view is to provide a unified perspective on GPUs from various vendors, hence with different hardware setups. It is this unified architecture that is being programmed against using DirectX11. A good source of information on Direct Compute and Compute Shaders is the Microsoft Direct Compute Blog. The architecture described below is based on information from Chas Boyd's talk at PDC09, as published on Channel9. Of course, this blog post only presents some fragments of the information found there.
A GPU is considered to be built from a number of SIMD cores. SIMD means: Single Instruction Multiple Data. By the way, the pictures below are hyperlinks to their source.
The idea is that a single instruction is executed on a lot of data, in parallel. The SIMD processing unit is particularly fit for “data parallel” algorithms. A GPU may consist of 32 SIMD cores (yes, the image shows 40 cores) that access memory with 32 floats at a time (128 bit bus width). Typically the processor runs at 1Ghz, and has a (theoretical) computational power of about 1 TeraFLOP.
A SIMD core uses several kinds of memory:
- 16 Kbyte of (32-bit) registers. Used for local variables
- 8 Kbyte SIMD shared memory, L1 cache.
- L2 cache
The GPU as a whole has typically 1Gb of general RAM. Memory access bandwidth is typically of order 100GBit/s.
Programming Model
A GPU is programmed using a Compute Shader or C++ AMP. Developers can write compute shaders in HLSL (it looks like C) to be executed on the GPU. AMP is a C++ library. The GPU can run up to 1024 threads per SIMD. A thread is a line of execution through code. The SIMD shared memory is shared among the threads of a SIMD. It is programmable in the sense that you can declare variables (arrays) as "groupshared" and they will be stored in the Local Data Share. Note, however, that over-allocation will spill the variables to general RAM, thus reducing performance. Local variables in shader code will be stored in registers.
Tactics
The GPU architecture suggests programming tactics that will optimize performance.
- Do your program logic on the CPU, send the data to the GPU for operations that apply to (about) all of the data and contain a minimal number of alternative processing paths.
- Load as much data as possible into the GPU general RAM, so as to prevent the GPU waiting for data from CPU memory.
- Declare registers to store isolated local variables
- Cache data that you reuse in “groupshared” Memory. Don’t cache data you don’t reuse. Keep in mind that you can share cached data among the threads of a single group only.
- Use as much threads as possible. This requires you use only small amounts of cache memory per thread.
- Utilize the GPU as efficiently as possible by offering much more threads to it than it can process in a small amount of time.
- Plan the use of threads and memory ahead, then experiment to optimize.
Loading data from CPU memory into GPU memory passes the PCIe bridge which has a bandwidth, typically of order 1GBit/s; that is, it is a bottleneck.
So, you really want to load as much data as possible into GPU memory before executing your code.
The trick in planning your parallelism is to chop up (schedule, that is 🙂) the work in SIMD-size chunks. You can declare groups of threads; the size of the groups and the number of groups. A group is typically executed by a single SIMD. To optimize performance, use Group Shared Memory, and set up the memory consumption of your thread group so it will fit into the available Group Shared Memory. That is: restrict the number of threads per group, and make sure you have a sufficient number of groups. Thread groups are three dimensional. My hypothesis at this time is that it is best to fit the dimensionality of the thread groups to match the structure of the end result. More about this below. Synchronization of the threads within a thread group flushes the Group Shared Memory of the SIMD.
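A small helper (my own naming, not from the talk) makes the group-count bookkeeping concrete — covering n work items with a fixed group size is just a ceiling division:

```cpp
#include <cassert>

// Number of thread groups needed to cover n work items when each
// group runs groupSize threads (ceiling division).
int NumGroups(int n, int groupSize) {
    return (n + groupSize - 1) / groupSize;
}
```

For the vector sizes used later in this post, NumGroups(12288, 128) dispatches 96 groups — far more groups than there are SIMD cores, which helps keep the GPU busy.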
A register typically has a lifetime that is bound to a thread. Individual threads are member of several groups – depending on how you program stuff. So, intermediate results aggregated by thread groups can be stored in registers.
Does My ATI Radeon HD5750 GPU Look Like This Architecture… A Bit?
The picture below (from here) is of the HD5770, which has 10 SIMD cores, one more than the HD5750.
What do we see here?
- SIMD engines. We see 10 cores for the HD5770, but there are 9 in the HD5750. Each core consists of 16 red blocks (streaming cores) and 4 yellow blocks (texture units).
- Registers (light red lines between the red blocks).
- L1 Textures caches, 18Kbyte per SIMD.
- Local Data Share, 32 Kbyte per SIMD.
- L2 caches, 8 Kbyte each.
Not visible is the 1Gb general RAM.
The processing unit runs at 700 MHz, memory runs at 1,150 MHz. Overclocking is possible, however. The computational power is 1.008 TeraFLOP. Memory bandwidth is 73.6 GB/s.
So, my GPU is quite a lot less powerful than the reference model. At first, a bit disappointing but on the other hand: much software I write for this GPU cannot run on the PCs of most people I know – their PCs are too old.
Various Approaches to Vector-Matrix Multiplication
Below we will see a number of approaches to vector-matrix multiplication discussed. The will include measurements of time and capacity. So, how do we execute the code and what do we measure?
Times measured include a number of iterations that each multiply the vector by the matrix. Usually this is 100 iterations, but fast alternatives get 1000 iterations. The faster the alternative, the more we are interested in variance and overhead.
Measurements:
- Do not include data upload and download times.
- Concern an equal data load, 12,288 input elements if the alternative can handle it.
- Correctness check; computation is also performed by CPU code, reference code.
- Run a release build from Visual Studio, without debugging.
- Allow AMP programs a warm-up run.
Vector-Matrix Product by CPU: Reference Measurement
In order to determine the performance gain, we measure the time it takes the CPU to perform the product. The algorithm, and hence the code, is straightforward:
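A minimal sketch of such a straightforward implementation (function and variable names are mine) could look like this:

```cpp
#include <cassert>
#include <vector>

// result[col] = sum over row of vec[row] * mat[row][col],
// with the matrix stored row-major as a flat rows x cols array.
std::vector<float> VectorMatrixCpu(const std::vector<float>& vec,
                                   const std::vector<float>& mat,
                                   int rows, int cols) {
    std::vector<float> result(cols, 0.0f);
    for (int row = 0; row < rows; ++row)
        for (int col = 0; col < cols; ++col)
            result[col] += vec[row] * mat[row * cols + col];
    return result;
}
```

Each output element costs one multiply and one add per matrix row, which is where the factor 2 in the gFLOPS formula below comes from.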
In this particular case rows = cols = 12,288. The average over 100 runs is 2,452 ms, or 2.45 seconds. This amounts to a time performance of 0.12 gFLOPS (gigaFLOPS: floating point operations per second). We restrict floating point operations to addition and multiplication (yes, that includes subtraction and division). We calculate gFLOPS as:
2 / ms x Rows / 1000 x Cols / 1000, where ms is the average time in milliseconds.
The result of the test is correct.
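As a sanity check, the gFLOPS formula above can be wrapped in a tiny helper (purely illustrative, not from the original post):

```cpp
#include <cassert>

// gFLOPS for a vector-matrix product: one multiply and one add per
// matrix element, i.e. 2 * rows * cols operations, divided by the run
// time in milliseconds and scaled to units of 1e9 operations/second.
double GFlops(double ms, double rows, double cols) {
    return 2.0 / ms * (rows / 1000.0) * (cols / 1000.0);
}
```

Plugging in the reference measurement above, GFlops(2452, 12288, 12288) indeed comes out at roughly 0.12.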
Parallel Patterns Library
Although this blog post is about GPU performance, I took a quick look at PPL performance. We then see a performance gain of a factor 2, but the result is incorrect, that is, the above code leads to indeterminacy in a parallel_for loop. I left it at that, for now.
Matrix-Matrix Product
We can of course, view a vector as a matrix with a single column. The C++ AMP documentation has a running code example of a matrix multiplication. There is also an accompanying compute shader analog.
AMP
To the standard AMP example I've added some optimizing changes, and measured the performance. The AMP code looks like this:
Here: amp is an alias for the Concurrency namespace. The tile size TS has been set to 32, which is the maximum; the product of the dimensional extents of a compute domain should not exceed 1024. The extent of the compute domain has been changed to depend on B, the matrix, instead of the output vector. The loop that sums element products has been unrolled in order to further improve performance.
As mentioned above, we start with a warming up. As is clear from the code we do not measure data transport to and from the GPU. Time measurements are over 100 iterations. The average run time obtained is 9,266.6 ms, hence 0.01 gFLOPS. The result after the test run was correct.
The data load is limited to 7*1024 = 7,168; that is 8*1024 is unstable.
Compute Shader
The above code was adapted to also run as a compute shader. The code looks like this:
The variables Group_SIZE_X and Group_SIZE_Y are passed into the shader at compile time, and are set to 32 each.
Time measurements are over 100 iterations. The average run time obtained is 11,468.3 ms, hence 0.01 gFLOPS. The result after the test run was correct. The data load is limited to 7*1024 = 7,168; that is 8*1024 is unstable.
Analysis
The performance of the compute shader is slightly worse than the AMP variant. Analysis with the Visual Studio 11 Concurrency Visualizer shows that work by the GPU in the case of the compute shader program is executed in small spurts, separated by small periods of idleness, whereas in the AMP program the work is executed by the GPU in one contiguous period of time.
Nevertheless, performance is bad, worse than the CPU alternative. Why? Take a look at the picture below:
For any value of t_idx.global[0] – which is based on the extent of the matrix – that is unequal to zero, vector A does not have a value. So, in fact, if N is the number of elements in the vector, we do O(N³) retrievals but only O(N²) computations. So, we need an algorithm that is based on the extent of a vector, say the output vector.
Vector-Matrix Product
Somehow, it proved easier to develop the vector-matrix product as a compute shader. This is in spite of the fact that unlike AMP, it is not possible (yet?) to trace a running compute shader in Visual Studio. The idea of the algorithm is that we tile the vector in one dimension, and the matrix in two, thus obtaining the effect that the vector tile can be reused in multiplications with the matrix tile.
Compute Shader
A new compute shader was developed. This compute shader caches vector and matrix data in Group Shared memory. The HLSL code looks like this:
This program can handle much larger amounts of data. Indeed, it runs problem-free for an input vector of 12,288 elements and a total data size of 576 Mbyte. The time performance is 10.3 ms per run, averaged over 1,000 runs, which amounts to 29.3 gFLOPS. The result of the final run was reported to be correct.
AMP
In analogy to the compute shader above I wrote (and borrowed 🙂 ) a C++ AMP program. The main method looks like this:
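The tiled structure described here can be sketched as a plain C++ CPU emulation (the tile size, names, and row-major layout are my assumptions; the real AMP kernel stages each tile of the vector in tile_static memory and shares it among the threads of a tile):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// CPU emulation of the tiled algorithm: the input vector is processed
// TS elements at a time, so one staged tile of A is reused against a
// TS-high band of the matrix for every output element.
std::vector<float> TiledVectorMatrix(const std::vector<float>& A,
                                     const std::vector<float>& B,
                                     int size, int TS) {
    std::vector<float> C(size, 0.0f);
    for (int t = 0; t < size; t += TS) {          // one tile of A per pass
        const int end = std::min(t + TS, size);
        for (int col = 0; col < size; ++col) {    // each output element
            float sum = 0.0f;
            for (int k = t; k < end; ++k)
                sum += A[k] * B[k * size + col];  // row-major size x size matrix
            C[col] += sum;                        // accumulate partial products
        }
    }
    return C;
}
```

On the GPU the reuse pays off because the staged tile sits in fast Group Shared Memory instead of general RAM; on the CPU this sketch only illustrates the traversal order.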
The matrix is a vector with size * size elements. The tile size was chosen to be 128, because that setting yields optimal performance. The program was run on an input vector of 12,288 elements, with total data size of 576 Mbyte. The time performance is 10.1 ms per run, averaged over 1,000 runs, which amounts to 30.0 gFLOPS. The result of the final run was reported to be correct.
Analysis
We see here that the performance has much improved. When compared to the reference case, we can now do it (in milliseconds) 2,452 : 10.1 = 243 : 1, hence 243 times faster.
Simpler
Then, I read an MSDN Magazine article on AMP tiling by Daniel Moth, and it reminded me that caching is useless if you do not reuse the data. Well, the above algorithm does not reuse the cached matrix data. So I adapted the Compute Shader program to retrieve matrix data from central GPU memory directly. The HLSL code looks like this:
Note the tileSize of 512(!). This program was run for a vector of 12,288 elements and a total data size of 576 Mbyte. The time performance is again 10.3 ms for a multiplication, which amounts to 29.3 gFLOPS (averaged over 1,000 runs). The result of the final run was reported to be correct. So, indeed, caching the matrix data does not add any performance improvement.
AMP
For completeness, the AMP version:
Time performance is optimal for a tile size of 128, in case the number of vector elements is 12,288. We obtain an average run time of 9.7 ms (averaged over 1,000 runs), and a corresponding 31.1 gFLOPS. The result of the final run was correct. This program is 2452 / 9.7 = 252.8 times as fast as the reference implementation.
Conclusions
Developing an algorithm for vector-matrix inner product has demonstrated comparable performance for Compute Shaders and AMP, but much better tooling support for AMP: we can step through AMP code while debugging, and the Concurrency Visualizer has an AMP line. This better tool support helped very well in analyzing performance of a first shot at the algorithm. The final algorithm proved over 250 times faster than a straight forward CPU program for the same functionality.
Detailed knowledge of the GPU architecture, or the hardware model, proved of limited value. When trying to run the program with either the maximum number of threads per group, or the maximum amount of data per Group Shared Memory, I ran into parameter value limits, instabilities, performance loss, and incorrect results. I guess you will have to leave the detailed optimization to the GPU driver and to the AMP compiler.
One question keeps bothering me though: Where is my TeraFLOP?
I mean, Direct Compute was introduced with the slogan "A teraFLOP for every one of us", AMP is built on top of Direct Compute, and my GPU has a computational power of 1.008 TeraFLOP. Am I not 'one of us'?
A coworker just sent me some Python code for review and, among such code, there was the addition of a function similar to:
def PathWithCurrentDate(prefix, now=None):
    """Extend a path with a year/month/day subdirectory layout.

    Args:
      prefix: string, The path to extend with the date subcomponents.
      now: datetime.date, The date to use for the path; if None, use the
        current date.

    Returns:
      string, The new computed path with the date appended.
    """
    path = os.path.join(prefix, '%Y', '%m', '%d')
    if now:
        return now.strftime(path)
    else:
        return datetime.datetime.now().strftime(path)
The purpose of this function, as the docstring says, is to simplify the construction of a path that lays out files on disk depending on a given date.
This function works just fine… but it has a serious design problem (in
my opinion) that you only see when you try to write unit tests for such
function (guess what, the code to review did not include any unit tests
for this). If I ask you to write tests for
PathWithCurrentDate, how
would you do that? You would need to consider these cases (at the very
very least):
- Passing
now=Nonecorrectly fetches the current date. To write such a test, we must stub out the call to
datetime.datetime.now()so that our test is deterministic. This is easy to do with helper libraries but does not count as trivial to me.
- Could
datetime.datetime.now()raise an exception? If so, test that the exception is correctly propagated to match the function contract.
- Passing an actual date to
nowworks. We know this is a different code path that does not call
datetime.datetime.now(), but still we must stub it out to ensure that the test is not going through that past in case the current date actually matches the date hardcoded in the test as an argument to
now.
My point is: why is such a trivial function so complex to validate? Why such a trivial function needs to depend on external state? Things become more obvious if we take a look at a caller of this function:
def BackupTree(source, destination): path = PathWithCurrentDate(destination) CreateArchive(source, os.path.join(path, 'archive.tar.gz'))
Now, question again: how do we test this? Our tests would look like:
def testOk(self): # Why do we even have to do this? ... create stub for datetime.datetime.now() to return a fake date ... CreateArchive('/foo', '/backups/prefix') ... validate that the archive was generated in the fake date directory ...
Having to stub out the call to
datetime.datetime.now() before calling
CreateArchive is a really, really weird thing at first glance. To be
able to write this test, you must have deep insight of how the auxiliary
functions called within the function work to know what dependencies on
external state they have. Lots of black magic involved.
All this said, the above may not seem like a big issue because, well, a
call to
datetime.datetime.now() is cheap. But imagine that the call
being performed deep inside the dependency tree was more expensive and
dealt with some external state that is hard to mock out.
The trick to make this simpler and clearer is to apply a form of
Dependency
injection (or,
rather, “value injection”). We want the
PathWithCurrentDate function
to be a simple data manipulation routine that has no dependencies on
external state (i.e. make it purely functional). The easiest way to do
so is to remove the
now=None code path and pass the date in right from
the most external caller (aka, the
main() program). For example
(skipping docstrings for brevity):
def PathWithCurrentDate(prefix, now): path = os.path.join(prefix, '%Y', '%m', '%d') return now.strftime(path) def BackupTree(source, destination, backup_date): path = PathWithCurrentDate(destination, backup_date) CreateArchive(source, os.path.join(path, 'archive.tar.gz'))
With this approach, the dependency on
datetime.datetime.now() (aka, a
dependency on global state) completely vanishes from the code. The code
paths to validate are less, and they are much simpler to test. There is
no need to stub out a function call seemingly unused by
BackupTree.
Another advantage of this approach can be seen if we were to have multiple functions accessing the same path. In this case, we would need to ensure that all calls receive the exact same date… what if the program kept running past 12AM and the “now” value changed? It is trivial to reason about this feature if the code does not have hidden queries to “now” (aka global state) within the code… but it becomes tricky to ensure our code is right if we can’t easily audit where the “now” value is queried from!
The “drawback”, as some will think, is that the caller of any of these functions must do more work on its own to provide the correct arguments to the called functions. “And if I always want the backup to be created on the current directory, why can’t the backup function decide on itself?”, they may argue. But, to me, the former is definitely not a drawback and the latter… is troublesome as explained in this post.
Another “drawback”, as some others would say, is that testing is not a goal. Indeed it is not: testing is only a means to “correct” code, but it is also true that having testable code often improves (internal) APIs and overall design.
To conclude: the above is an over-simplistic case study and my explanations will surely not convince anyone to stop doing black evil “clever” magic from within functions (and, worse, from within constructors). You will only realize that the above makes any sense when you start unit-testing your code. Start today! :-) | https://jmmv.dev/2010/12/dependency-injection-and-testing.html | CC-MAIN-2022-21 | refinedweb | 955 | 63.49 |
First off... I ran into some problems at the very beginning, and after making some changes got it to work, but I'm not sure why. This is the original code:
#include <iostream> using namespace std; // It won't compile like this, until I declare the Stats functions outside the class as well, why is this? class mob { public: void Stats (int Hp, int Atk, int Def, int Dex); }; class Char { public: char Name[10]; void Stats (int Hp, int Atk, int Def, int Dex); }; void Char::Stats() int main() { mob Witch, Ogre; Witch.Stats (10, 10, 10, 10); Ogre.Stats (15, 5, 9, 4); Char Char; Char.Name; cin >> Char.Name; cout << Char.Name; return 0; }
Here is how far I've gotten so far and to what most of my other questions will pertain to:
#include <iostream> #include <string> using namespace std; class mob { public: char *Name [10]; void Stats (int, int, int, int, int); }; class Your { public: char Name[10]; void Stats (int, int, int, int, int); }; void mob::Stats(int Hp, int Atk, int Def, int Dex, int Spd) { Hp; Atk; Def; Dex; Spd; }; void Your::Stats(int Hp, int Atk, int Def, int Dex, int Spd) { Hp; Atk; Def; Dex; Spd; }; int main() { mob Witch, Ogre; Your Char; Witch.Name = {"Witch"}; Witch.Stats (10, 10, 10, 10, 10); Ogre.Name = {"Ogre"}; Ogre.Stats (15, 5, 9, 4, 7); Char.Name; cout << "Input a name: \n"; cin >> Char.Name; cout << "Your name is: " << Char.Name << endl; int MyHp, MyAtk, MyDef, MyDex, MySpd; Char.Stats (MyHp, MyAtk, MyDef, MyDex, MySpd); cout << "Input your Health Points: (1-15) "; cin >> MyHp; cout << "Input your Atk Power: (1-15) "; cin >> MyAtk; cout << "Input your Def Power: (1-15) "; cin >> MyDef; cout << "Input your Dexterity: (1-15) "; cin >> MyDex; cout << "Input your Speed: (1-15) "; cin >> MySpd; cout << "\nAre you ready to fight?(Yes or No)" << endl; int Value = 1; string Response; while (Value != 0) { cin >> Response; if (Response == "Yes") { Value = 0;} else "Are you ready now?"; } cout << "\nThen let's begin." << endl; cout << "Your first opponent is " << Witch.Name << endl << endl; cout << "Ready... Set... Rumble!! " << endl; return 0; }
So I've done a couple baddies and stats, but I'm having a hard time figuring out how to use these to fight with.
1. To find out who goes first, I was going to include an if statement comparing my speed to the mobs, but how do I do input the speed from Witch.Stats into the if statement?
2. How would I go about changing Witch.Name on line 73 to something more flexible, so that the name of the monster would be called up like a variable(like (variable-name).Name) so based on some score system it would be able to decide who I'm fighting next.
I obviously need more then that for the game but I'll figure out the rest for myself. I've heard of DirectX and OpenGL, but I won't feel ready to start learning that until I get a better handle on C++. Finally, can anyone recommend any header files that could be used for games, offer any tips, or suggest any useful books to read on the subject? | http://www.dreamincode.net/forums/topic/210507-some-questions-regarding-a-txt-based-rpg/ | CC-MAIN-2016-26 | refinedweb | 539 | 88.47 |
In the last part we took a loot at graphing scatterplots and line graphs with SVG. If that seemed a little too trivial we'll try using WebGL this time. This post will not cover all the minutia of using WebGL, I have other posts on that you can use but we'll try to keep it as minimal as possible.
Boilerplate
I'm basically just going to rip out the render method of the SVG chart but keep all the same attributes. For now I'm going to drop the "shape" part of the tuple because this becomes a little more complicated than I want it to be but we can examine that later maybe.
function hyphenCaseToCamelCase(text) { return text.replace(/-([a-z])/g, g => g[1].toUpperCase()); } class WcGraphGl extends HTMLElement { #points = []; #width = 320; #height = 240; #xmax = 100; #xmin = -100; #ymax = 100; #ymin = -100; #func; #step = 1; #thickness = 1; #continuous = false; #defaultSize = 2; #defaultColor = "#F00" static observedAttributes = ["points", "func", "step", "width", "height", "xmin", "xmax", "ymin", "ymax", "default-size", "default-color", "continuous", "thickness"]; constructor() { super(); this.bind(this); } bind(element) { element.attachEvents.bind(element); } render() { if (!this.shadowRoot) { this.attachShadow({ mode: "open" }); } } attachEvents() { } connectedCallback() { this.render(); this.attachEvents(); } attributeChangedCallback(name, oldValue, newValue) { this[hyphenCaseToCamelCase(name)] = newValue; } set points(value) { if (typeof (value) === "string") { value = JSON.parse(value); } value = value.map(p => ({ x: p[0], y: p[1], color: p[2] ?? this.#defaultColor, size: p[3] ?? 
this.#defaultSize })); this.#points = value; this.render(); } get points() { return this.#points; } set width(value) { this.#width = parseFloat(value); } get width() { return this.#width; } set height(value) { this.#height = parseFloat(value); } get height() { return this.#height; } set xmax(value) { this.#xmax = parseFloat(value); } get xmax() { return this.#xmax; } set xmin(value) { this.#xmin = parseFloat(value); } get xmin() { return this.#xmin; } set ymax(value) { this.#ymax = parseFloat(value); } get ymax() { return this.#ymax; } set ymin(value) { this.#ymin = parseFloat(value); } get ymin() { return this.#ymin; } set func(value) { this.#func = new Function(["x"], value); this.render(); } set step(value) { this.#step = parseFloat(value); } set defaultSize(value) { this.#defaultSize = parseFloat(value); } set defaultColor(value) { this.#defaultColor = value; } set continuous(value) { this.#continuous = value !== undefined; } set thickness(value) { this.#thickness = parseFloat(value); } } customElements.define("wc-graph-gl", WcGraphGl);
Setting up the canvas
connectedCallback(){ this.attachShadow({ mode: "open" }); this.canvas = document.createElement("canvas"); this.shadowRoot.appendChild(this.canvas); this.canvas.height = this.#height; this.canvas.width = this.#width; this.context = this.canvas.getContext("webgl2"); this.render(); this.attachEvents(); }
Nothing fancy. We setup everything on connectedCallback so we can re-use it later. I'm also using WebGL2 which should be supported in all browsers now.
Shader Boilerplate
WebGL has a lot of boilerplate so lets get to it.
function compileShader(context, text, type) { const shader = context.createShader(type); context.shaderSource(shader, text); context.compileShader(shader); if (!context.getShaderParameter(shader, context.COMPILE_STATUS)) { throw new Error(`Failed to compile shader: ${context.getShaderInfoLog(shader)}`); } return shader; } function compileProgram(context, vertexShader, fragmentShader) { const program = context.createProgram(); context.attachShader(program, vertexShader); context.attachShader(program, fragmentShader); context.linkProgram(program); if (!context.getProgramParameter(program, context.LINK_STATUS)) { throw new Error(`Failed to compile WebGL program: ${context.getProgramInfoLog(program)}`); } return program; } render(){ if(!this.context) return; const vertexShader = compileShader(this.context, ` attribute vec2 aVertexPosition; void main(){ gl_Position = vec4(aVertexPosition, 1.0, 1.0); gl_PointSize = 10.0; } `, this.context.VERTEX_SHADER); const fragmentShader = compileShader(this.context, ` void main() { gl_FragColor = vec4(1.0, 0, 0, 1); } `, this.context.FRAGMENT_SHADER); const program = compileProgram(this.context, vertexShader, fragmentShader) this.context.useProgram(program); }
If we don't have a context, which can happen if an attribute change triggers a render before we've attached to some DOM, we just abort. We continue in
render to use the context and set up a basic vertex and fragment shader, compile them, link them, and then compile the program. We won't go into the the details here but this is the minimum we need to get started. The one interesting new thing here is
gl_PointSize. This controls the size of the dots that are drawn.
Vertices
//setup vertices const positionBuffer = this.context.createBuffer(); this.context.bindBuffer(this.context.ARRAY_BUFFER, positionBuffer); const positions = new Float32Array(this.#points.flat()); this.context.bufferData(this.context.ARRAY_BUFFER, positions, this.context.STATIC_DRAW); const positionLocation = this.context.getAttribLocation(program, "aVertexPosition"); this.context.enableVertexAttribArray(positionLocation); this.context.vertexAttribPointer(positionLocation, 3, this.context.FLOAT, false, 0, 0);
We create a buffer with our points, associate it with attribute
aVertexPosition. Again lots of song and dance but we need all of it.
Drawing
Drawing is at least easier. We clear the screen and then tell it to draw everything as points:
this.context.clear(this.context.COLOR_BUFFER_BIT | this.context.DEPTH_BUFFER_BIT); this.context.drawArrays(this.context.POINTS, this.#points.length, this.context.UNSIGNED_SHORT, 0);
This won't work for a few reasons though. If you hardcode
this.#points to
[ -1.0, -1.0, 1.0, -1.0, 1.0, 1.0, -1.0, 1.0 ]
And set the the length of
drawArrays to 4 (final parameter) you should see 4 red dots in the corners of the canvas. This is sufficient to see if you made any syntax mistakes so far.
Vertex Colors
Firstly is that our points are not the right shape. We have an array of struct objects representing the points but what we actually want is a series of vectors. We can change this pretty easily in
set points but we run into a problem with the color. In the SVG chart we could use any DOM color (like "blue") because it was using the DOM to render it. In WebGL we don't have those, we just have vec4s of floats. Converting DOM colors to RGB is actually a lot harder than you think and usually means hackery so we'll have to drop that. What we can do is either use a series of floats or we can convert it from hex values (conversion from other formats is non-trivial). Let's start with a series of floats because it's easy.
So
{ x, y, color, size } becomes
[x,y,r,g,b,size]
But we have another problem. The biggest vector we can pass to WebGL is a vec4 and we have 6 components. We need to split it up into two vectors. Color and x,y,size. We could also split off size but that'll just create more overhead so I'm going to squish it in to the existing buffer since we have extra space.
Let's rebuild
set points:
set points(value) { if (typeof (value) === "string") { value = JSON.parse(value); } this.#points = value.map(p => [ p[0], p[1], p[6] ?? this.#defaultSize ]); this.#colors = value.map(p => p.length > 2 ? [ p[2], p[3], p[4], p[5] ] : this.#defaultColor); this.render(); }
There's other ways you could approach how to input things but I thought a flat-array was best. But maybe you could nest the color array. In any case we pull out the x,y and size to one array and the color into another or if the input doesn't have a color we use the default. This means
defaultColor has to also be a 4 value array so be sure to update that. But also, since we're dealing with alpha the 4th element should be 1 or you may be frustrated when things don't appear. There's also a lot of validation that could happen here, I won't do it for code size but you may if you wish.
Then we need to copy-paste the vertex code and update it to use
this.#colors:
//setup color const colorBuffer = this.context.createBuffer(); this.context.bindBuffer(this.context.ARRAY_BUFFER, colorBuffer); const colorsArray = new Float32Array(this.#colors.flat()); this.context.bufferData(this.context.ARRAY_BUFFER, colorsArray, this.context.STATIC_DRAW); const colorLocation = this.context.getAttribLocation(program, "aVertexColor"); this.context.enableVertexAttribArray(colorLocation); this.context.vertexAttribPointer(colorLocation, 4, this.context.FLOAT, false, 0, 0);
Please note that we also need to update to use
this.#colors.flat() and
this.#points.flat() because the buffers are flat series of values. When assigning the attributes make sure colors uses size 4, and update the name to
aVertextColor (or whatever you want to call it) and update
aVertexPosition to be size 3.
Then we can update the shaders:
//Setup WebGL shaders const vertexShader = compileShader(this.context, ` attribute vec3 aVertexPosition; attribute vec4 aVertexColor; varying mediump vec4 vColor; void main(){ gl_Position = vec4(aVertexPosition.xy, 1.0, 1.0); gl_PointSize = aVertexPosition.z; vColor = aVertexColor; } `, this.context.VERTEX_SHADER); const fragmentShader = compileShader(this.context, ` varying mediump vec4 vColor; void main() { gl_FragColor = vColor; } `, this.context.FRAGMENT_SHADER);
We have our 3-valued position (x,y,size), and our 4-valued color per vertex. The first 2 parts of the location become the x,y of a 4-valued final position, and the 3rd value
z becomes the point size. Then we simply forward the color down to the fragment shader with the
varying variable. In the fragment shader we use it directly. Now sizes and colors should work for points.
Still, we won't see anything outside the -1 to 1 range because that's the coordinates of screen space. We need to scale our points to fit but lucky for us that can be done directly in the vertex shader.
Scaling vertices
We need to setup the bounds. This is similar to setting up a buffer:
//setup bounds const bounds = new Float32Array([this.#xmin, this.#xmax, this.#ymin, this.#ymax]); const boundsLocation = this.context.getUniformLocation(program, "uBounds"); this.context.uniform4fv(boundsLocation, bounds);
I'm putting them all into a single 4-value vector instead of 4 scalars because it makes more sense.
And now for the magic in the vertex shader:
attribute vec3 aVertexPosition; attribute vec4 aVertexColor; uniform vec4 uBounds; varying mediump vec4 vColor; float inverseLerp(float a, float b, float v){ return (v-a)/(b-a); } void main(){ gl_PointSize = aVertexPosition.z; gl_Position = vec4(mix(-1.0,1.0,inverseLerp(uBounds.x, uBounds.y, aVertexPosition.x)), mix(-1.0,1.0,inverseLerp(uBounds.z, uBounds.w, aVertexPosition.y)), 1. 0, 1.0); vColor = aVertexPosition; }
One of the cool things is that all the transforms can be done in on the GPU in the vertex shader but it takes a bit to know what you are doing. I've introduced a function
inverseLerp which is the inverse of a lerp and if you remember last time I said that's exactly what the
windowValue function did. Surprisingly, no inverse lerp function exists in GLSL so you have to write it. I've also called it
inverseLerp because it's a little more understandable in the graphics context here. You can see that we use .x,.y,.z,.w to refer to the components. This is a shorthand but you can also use index notation [0],[1],[2],[3] if you want. But there's more, we also have
mix.
mix is a built-in function that is the lerp function. We need this because of how screen space works. Previously, we did the inverse lerp on both axes and multiplied by the max length. This works because the coordinates go from 0 to max length. However, in WebGL screen space they are -1 to 1 so we have to re-scale it back to the screen. Basically we take the original coordinates, normalize them and then use those normalized coordinates to re-project back to the canvas. I'm fairly sure that there must be a simplification, especially to do the vector multiplication in one step, so if you find it let me know! The rest is the same, we pass through the color and use the
z coordinate for the size.
And it should look roughly identical to the SVG ones. Keep in mind that the size of the pixel is 1/2 what it is in SVG. I'm not sure if that's due to DPI scaling factor or what but to get the same dot size it needs to be 2x bigger.
Finally, lets add the function graphing utility. We can start by making sure that default color works correctly:
set defaultColor(value) { if(typeof(value) === "string"){ this.#defaultColor = JSON.parse(value); } else { this.#defaultColor = value; } }
We parse if it's a string otherwise assume the user knows what they are doing as passes and array. Then let's add the code to render out the function into points:
let points; let colors; if(this.#func){ points = []; colors = []; for (let x = this.#xmin; x < this.#xmax; x += this.#step) { const y = this.#func(x); points.push([x, y, this.#defaultSize]); colors.push(this.#defaultColor); } } else { points = this.#points; colors = this.#colors; }
We need to do some adjusting to the variable names so that when we bind we bind the buffers we use the data from
colors rather than
this.#colors and
points rather
this.#points.
Continuous lines
This part is pretty easy, we just draw twice, once with lines and once with points.
//draw this.context.clear(this.context.COLOR_BUFFER_BIT | this.context.DEPTH_BUFFER_BIT); if(this.#continuous){ this.context.lineWidth(this.#thickness); this.context.drawArrays(this.context.LINE_STRIP, 0, points.length); } this.context.drawArrays(this.context.POINTS, 0, points.length);
Sadly, while you can set the line width, it's very unlikely your platform will actually support it. See, WebGL doesn't have to respect the line width and at least on my machine it doesn't and I will always get 1.0 thick lines. You can query to see if it is supported:
console.log(this.context.getParameter(this.context.ALIASED_LINE_WIDTH_RANGE));
This will get you the min and max supported line width which for me was
[1.0, 1.0]. This sucks and in order to fix it we'll likely need to look into rasterizing lines ourselves, but that's for another day.
Another thing is that these lines will behave differently. Remember how in the last one we decided to make the continuous line a single color so as not to use line segments for everything? Well in WebGL we have the opposite problem. By default the lines will switch color based on the point, but if we wanted a solid line we'd need to create a new color buffer with all lines the same color, or we'd need to add a uniform parameter to the shader program to say "these are lines, so just make them this color". I'm not going to do that but that's how you could.
Drawing the guides
So we have those sorta useless guides on the SVG implementation and this turns out to be an interesting problem in WebGL. See, we setup all these buffers, uniforms and a shader program just to render one thing, the points. But how do we add other things? Basically we have to do it all again. But we can at least take one short-cut. Instead of creating a new shader program that draws the guides we can re-use the existing one, since afterall it's just a basic line-drawing shader.
//draw guides { const positionBuffer = this.context.createBuffer(); this.context.bindBuffer(this.context.ARRAY_BUFFER, positionBuffer); const positions = new Float32Array([ (this.#xmax + this.#xmin) / 2, this.#ymin, 10, (this.#xmax + this.#xmin) / 2, this.#ymax, 10, this.#xmin, (this.#ymax + this.#ymin) / 2, 10, this.#xmax, (this.#ymax + this.#ymin) / 2, 10 ]); this.context.bufferData(this.context.ARRAY_BUFFER, positions, this.context.STATIC_D const positionLocation = this.context.getAttribLocation(program, "aVertexPosition") this.context.enableVertexAttribArray(positionLocation); this.context.vertexAttribPointer(positionLocation, 3, this.context.FLOAT, false, 0, const colorBuffer = this.context.createBuffer(); this.context.bindBuffer(this.context.ARRAY_BUFFER, colorBuffer); const colorsArray = new Float32Array([ 0,0,0,1, 0,0,0,1, 0,0,0,1, 0,0,0,1 ]); this.context.bufferData(this.context.ARRAY_BUFFER, colorsArray, this.context.STATIC const colorLocation = this.context.getAttribLocation(program, "aVertexColor"); this.context.enableVertexAttribArray(colorLocation); this.context.vertexAttribPointer(colorLocation, 4, this.context.FLOAT, false, 0, 0) const bounds = new Float32Array([this.#xmin, this.#xmax, this.#ymin, this.#ymax]); const boundsLocation = this.context.getUniformLocation(program, "uBounds"); this.context.uniform4fv(boundsLocation, bounds); this.context.clear(this.context.COLOR_BUFFER_BIT | this.context.DEPTH_BUFFER_BIT); this.context.drawArrays(this.context.LINES, 0, points.length); }
We can put this after we set
useProgram but before we setup the attributes for the points. The curly block is intentional so I can reuse variable names. We set 4 points at the half-way point of each scale and we supply 4 colors for the vertices (in this case black). Then we can draw using
gl.LINES which is similar to
gl.LINE_STRIP except it's not continuous. We also need to move the buffer
clear method here otherwise we'll clear the buffer when we do the point drawing instead of draw over it.
And there we go, an almost approximation of the SVG drawing:
Conclusion
The code this time was bigger, more complex and even missing a few features like stroke width and shape. We can still add them but it'll be a lot more work. On the other hand all the heavy lifting was moved to the GPU which makes it really fast. It's really up to you to decide how you should proceed but don't let your framework make that decision for you!
Discussion (0) | https://practicaldev-herokuapp-com.global.ssl.fastly.net/ndesmic/graphing-with-web-components-2-webgl-e8h | CC-MAIN-2021-39 | refinedweb | 2,885 | 52.15 |
I'm trying to make JonesForth run on a recent MacBook out of the box, just using Mac tools.
I started to convert everything 64 bits and attend to the Mac assembler syntax.
I got things to assemble, but I immediately run into a curious segmentation fault:
/* NEXT macro. */ .macro NEXT lodsq jmpq *(%rax) .endm ... /* Assembler entry point. */ .text .globl start .balign 16 start: cld mov %rsp,var_SZ(%rip) // Save the initial data stack pointer in FORTH variable S0. mov return_stack_top(%rip),%rbp // Initialise the return stack. //call set_up_data_segment mov cold_start(%rip),%rsi // Initialise interpreter. NEXT // Run interpreter! .const cold_start: // High-level code without a codeword. .quad QUIT
QUIT is defined like this via macro defword:
.macro defword .const_data .balign 8 .globl name_$3 name_$3 : .quad $4 // Link .byte $2+$1 // Flags + length byte .ascii $0 // The name .balign 8 // Padding to next four-byte boundary .globl $3 $3 : .quad DOCOL // Codeword - the interpreter // list of word pointers follow .endm // QUIT must not return (ie. must not call EXIT). defword "QUIT",4,,QUIT,name_TELL .quad RZ,RSPSTORE // R0 RSP!, clear the return stack .quad INTERPRET // Interpret the next word .quad BRANCH,-16 // And loop (indefinitely) ...more code
When I run this, I get a segmentation fault the first time in the NEXT macro:
(lldb) run There is a running process, kill it and restart?: [Y/n] y Process 83000 exited with status = 9 (0x00000009) Process 83042 launched: '/Users/klapauciusisgreat/jonesforth64/jonesforth' (x86_64) Process 83042 stopped * thread #1, stop reason = EXC_BAD_ACCESS (code=EXC_I386_GPFLT) frame #0: 0x0000000100000698 jonesforth`start + 24 jonesforth`start: -> 0x100000698 <+24>: jmpq *(%rax) 0x10000069a <+26>: nopw (%rax,%rax) jonesforth`code_DROP: 0x1000006a0 <+0>: popq %rax 0x1000006a1 <+1>: lodsq (%rsi), %rax Target 0: (jonesforth) stopped.
rax does point to what I think is the dereferenced address, DOCOL:
(lldb) register read General Purpose Registers: rax = 0x0000000100000660 jonesforth`DOCOL
So one mystery is:
I commented out the original segment setup code in the original that called brk to set up a data segment. Another [implementation] also did not call it at all, so I thought I could as well ignore this. Is there any magic on how to set up segment permissions with syscalls in a 64-bit binary on Catalina? The make command is pretty much the standard JonesForth one:
jonesforth: jonesforth.S gcc -nostdlib -g -static $(BUILD_ID_NONE) -o $@ $<
P.S.: Yes, I can get JonesForth to work perfectly in Docker images, but that's besides the point. I really want it to work in 64 bit on Catalina, out of the box.
The original code had something like
mov $cold_start,%rsi
And the Apple assembler complains about not being able to use 32 immediate addressing in 64-bit binaries.
So I tried
mov $cold_start(%rip),%rsi
but that also doesn't work.
So I tried
mov cold_start(%rip),%rsi
which assembles, but of course it dereferences
cold start, which is not something I need.
The correct way of doing this is apparently
lea cold_start(%rip),%rsi
This seems to work as intended.
User contributions licensed under CC BY-SA 3.0 | https://windows-hexerror.linestarve.com/q/so61881258-Porting-JonesForth-to-macOS-v1015-Catalina | CC-MAIN-2020-40 | refinedweb | 511 | 67.45 |
Thoughts from the EPS Windows Server Performance Team
Useful Microsoft Blogs
Platforms Blogs
Today I would like to do a quick walkthrough of the VMMap tool from Windows Sysinternals.
This tool gives us a neat summary of a process’ virtual and physical memory usage. It shows a graphical summary as well as a detailed report of a given process’ memory usage pattern.
VMMap breaks down the memory usage to distinguish space used by file images loaded into the process address space, shared memory, memory used by mapped files, heap, stack, free space and more.
This is useful for one to easily understand where the process’ allocated virtual memory is being utilized.
The main window of VMMap looks like this:
As soon as you open VMMap, you are prompted to choose the process you want to analyze.
Note: You must run this in elevated mode in Windows Vista and higher.
If you want to view a previously saved report just click Cancel and open the saved report from the main window.
It also allows you to compare the two most recent snapshots and view the differences using the Show Changes option.
VMMap also has the following command line switches:
From a performance engineer’s perspective, most of the virtual memory usage questions are related to memory leak issues and are often resolved by the use of the Performance Monitor tool.
However, VMMap is useful in understanding where specifically allocated memory is utilized.
Another good use for this tool is determining the cause of virtual memory fragmentation. Memory fragmentation can cause cases where a memory allocation fails even though you have ample free space available in process’ address space.
In such cases you can look under the “Largest” column under “Free” memory type to find out the largest free block available. If your allocation is larger than the largest free block, your allocation will fail and the program may throw an “out of memory” error.
An example of this is Exchange virtual memory fragmentation, where you sometimes experience memory allocation failures even though you don’t appear to actually be out of memory.
While this tool may be more useful for developers to understand their programs memory usage patterns and tweak them better, we can have our share of VMMap too.
VMMap Download:
Sumesh P.
VMMap is a very nice tool, but for some reason, there is often a mismatch in Private Bytes when comparing to pslist or Process Explorer. The deviation seems to originate from "Image", where the sum of Private Bytes usually does not match the listed sum in the upper pane. The deviation is small unless the module is loaded from a file share, where the full module size seems to be added to the Private Bytes sum. A very small program loading imageres.dll (~20 MBytes on Windows 7) can demo this behavior (both for x86 and x64):
#include <windows.h>
void main(void)
{
HMODULE hMod;
hMod = LoadLibrary("imageres.dll");
//hMod = LoadLibrary("u:\\imageres.dll");
if(!hMod)
printf("GLE=%d\n", GetLastError());
else
Sleep(60000);
}
When loading from local disk, total Private Bytes for "Image" remains small, but if you copy imageres.dll to a file share (U:\ in the example above) and load it from there, the full module size seems to be added to the total, but does not appear in the details pane (the module is not relocated in any case). This is just a small demo with simple standard ingredients. If you have a larger application with several modules on a file share, the deviation can be huge. This was tested with VMMap 3.11 on 64 bit Windows 7. | http://blogs.technet.com/b/askperf/archive/2010/01/29/vmmap-a-peek-inside-virtual-memory.aspx | CC-MAIN-2014-41 | refinedweb | 606 | 60.65 |
I didn't get it help please guys....!!!1
I Got 99 Problems, But a Switch Ain't One
Hi arcace59459!
I would be happy to help! But first I need to know what you need help with.
Looking at your screenshot it looks like you have not attempted the problem.
Did you need help understanding the question? Or are you looking for guidance getting started solving the problem?
Let me know how I can best help you out !
Joe
@arcace59459 Please try the exercise first. Read the directions and make an attempt at it before asking for help from others.
I never had a problem in the exercise to begin with, that was directed to @arcace59459. Do you have a problem with your code? If so, please post your code and error message.
The instructions are as follows:
On line 2, fill in the if statement to check if answer is greater than 5.
On line 4, fill in the elif so that the function outputs -1 if answer is less than 5.
def greater_less_equal_5(answer):
if ________:
return 1
elif ________:
return -1
else:
return 0
print greater_less_equal_5(4)
print greater_less_equal_5(5)
print greater_less_equal_5(6)
So, on line 2 I put variations like something to give me greater than 5 and less than 5 in order to trigger the elif. On line 4 I used something to get a number less than 5.
The error was that my return values were incorrect.
What are the instructions saying in laymen's terms?
Can you show the actual code you have?
Guys thanks I have done it but I needed help with the instructions because I didn't know what to do.
i did everything and i know what to do first, but then i reset the code.@chipjumper36731
hello
screenshot your code please... so i can help you.xx
Yes it does. Thanks. I didn't fully understand what I was being asked to do. I understand based on your example that the word in the parenthesis was determining what should be greater than or less than. Great help.
i did the same thing without adding in the word "answer". so i kept getting an error i was getting upset too thank you. you helped me without even knowing it!!!
OMG you are a genius! I've been sitting here for half an hour trying to figure this out!
Thank you!
I'm confused.
answer is compared to 5 this I get it, but what is the value of answer ?
Is it comparing the length of the variable against 5 (like: len(answer) > 5) ?
This is a super confusing instruction set as you can see from the feedback above. There is no variable "answer" so I think folks started trying to put in expressions to meet the requirements. Might want to tweak the instructions so they are a bit clearer.
I need help with the if/else problem DUDE!!! | https://discuss.codecademy.com/t/i-got-99-problems-but-a-switch-aint-one/56206 | CC-MAIN-2018-34 | refinedweb | 490 | 84.17 |
21 August 2008 05:00 [Source: ICIS news]
SINGAPORE (ICIS news)--South Korean polypropylene (PP) producer Polymirae is considering cutting production in response to weak Asian demand, sources close to the company said on Thursday.
“Polymirae is expected to cut production by the end of this week,” one of the sources said, adding that the extent of the reduction was not known yet.
Polymirae runs a 630,000 tonne/year plant at Yeosu.
It received propylene feedstock from ?xml:namespace>
It is unclear if YNCC’s recent cracker cutbacks are related to the PP maker's decision to mull production cuts.
GS Caltex was running its 180,000 tonne/year PP plant at Yeosu at normal rates but it was concerned that its inventories would rise to critically high levels if demand did not pick up, a company source said.
“Our inventories are now about 10% higher (than the typical levels) but it is not critical yet,” the source said.
Other PP producers in
For more on PP | http://www.icis.com/Articles/2008/08/21/9150601/polymirae-daelim-mull-pp-cuts-on-weak-demand.html | CC-MAIN-2013-48 | refinedweb | 168 | 59.74 |
06 February 2013 17:56 [Source: ICIS news]
(recasts, amending date range at end of Table 1)
LONDON (ICIS)--Exports of emulsion styrene butadiene rubber (E-SBR) from the EU to the rest of the world nearly doubled in October 2012 compared with the same month in 2011 because sellers targeted other markets amid poor demand in Europe, a producer said on Wednesday.
Eurostat data shows that the EU exported 12,304 tonnes of E-SBR in October 2011 compared with 23,548 tonnes in October 2012, an increase of 91% (see table 1 below).
The latest Eurostat data is only available until October last year. The data shows there is a clear trend to increase exports from Europe to other parts of the world, in particular Asia. Total export volumes from January to October 2012 already well surpassed exports in the entire year of 2011.
Producers said that sales in Europe remain poor and, as a result, they have increased their exports to countries in Asia and Latin America (see table 2 below).
"Spot availability in Europe is next to nothing, as we are exporting all the surplus to Asia where prices are more attractive," the producer said.
It added that the fourth quarter was by far the worst quarter in 2012 and this has resulted in record export volumes. Eurostat will publish its November trade data in mid-February.
?xml:namespace>
($1 = €0.74) | http://www.icis.com/Articles/2013/02/06/9638671/Europe-SBR-exports-up-91-in-October-2012.html | CC-MAIN-2014-35 | refinedweb | 236 | 56.89 |
Write data to a socket.
#include <zircon/syscalls.h> zx_status_t zx_socket_write(zx_handle_t handle, uint32_t options, const void* buffer, size_t buffer_size, size_t* actual);
zx_socket_write() attempts to write buffer_size bytes to the socket specified by handle. The pointer to bytes may be NULL if buffer_size is zero.
If ZX_SOCKET_CONTROL is passed to options, then
zx_socket_write() attempts to write into the socket control plane. A write to the control plane is never short. If the socket control plane has insufficient space for buffer, it writes nothing and returns ZX_ERR_OUT_OF_RANGE. Only a single control plane message can be active in a socket direction. object_wait_one or object_wait_async). For datagram sockets, attempting to write a packet larger than the socket's capacity will fail with ZX_ERR_OUT_OF_RANGE.
A ZX_SOCKET_DATAGRAM socket write is never short. If the socket has insufficient space for buffer, it writes nothing and returns ZX_ERR_SHOULD_WAIT. If the write succeeds, buffer_size is returned via actual.
handle must be of type ZX_OBJ_TYPE_SOCKET and have ZX_RIGHT_WRITE.
zx_socket_write() returns ZX_OK on success.
ZX_ERR_BAD_HANDLE handle is not a valid handle.
ZX_ERR_BAD_STATE options includes ZX_SOCKET_CONTROL and the socket was not created with ZX_SOCKET_HAS_CONTROL.
ZX_ERR_WRONG_TYPE handle is not a socket handle.
ZX_ERR_INVALID_ARGS buffer is an invalid pointer.
ZX_ERR_ACCESS_DENIED handle does not have ZX_RIGHT_WRITE.
ZX_ERR_SHOULD_WAIT The buffer underlying the socket is full. For the control plane, a previous control message is still in the socket.
ZX_ERR_OUT_OF_RANGE The socket was created with ZX_SOCKET_DATAGRAM and buffer is larger than the remaining space in the socket.
ZX_ERR_BAD_STATE Writing has been disabled for this socket endpoint.
ZX_ERR_PEER_CLOSED The other side of the socket is closed.
ZX_ERR_NO_MEMORY Failure due to lack of memory. There is no good way for userspace to handle this (unlikely) error. In a future build this error will no longer occur.
zx_socket_accept()
zx_socket_create()
zx_socket_read()
zx_socket_share()
zx_socket_shutdown() | https://fuchsia.googlesource.com/fuchsia/+/419b51fe8a82d81b63b0e67951ec6e224c2194f7/zircon/docs/syscalls/socket_write.md | CC-MAIN-2020-24 | refinedweb | 293 | 52.56 |
On Oct 4, 2010, at 3:20 PM, Cristopher Ewing wrote: > Greetings all, > > I have taken the first steps towards a refactoring of ZopeSkel from a > monolithic package into a federated series of namespace packages. For > information on the reasoning behind this move, and the goals of the move, > please refer to the SPLITTING_PROPOSAL document in the trunk of ZopeSkel: > >
I was reviewing the proposal and had a thought about naming. There is no real reason to use the zopeskel namespace, especially for the core package. Shouldn't we make the namespace more inviting to non Zope projects? I took a look on pypi and there are current packages named skel and skeleton. One abandoned package is called skeletor. It had similar goals way back when. That would be a fitting name, but would probably be confusing given there is already a skeletor package. Just thought I'd voice my concern before we got too far into the renaming process. The ZopeSkel egg could still serve as the all-in-one for Zope/Plone devs. Clayton -- Six Feet Up, Inc. | Sponsor of Plone Conference 2010 (Oct. 25th-31st) Direct Line: +1 (317) 861-5948 x603 Email: clay...@sixfeetup.com Try Plone 4 Today at: How am I doing? Please contact my manager Gabrielle Hendryx-Parker at gabrie...@sixfeetup.com with any feedback. _______________________________________________ ZopeSkel mailing list ZopeSkel@lists.plone.org | https://www.mail-archive.com/zopeskel@lists.plone.org/msg00178.html | CC-MAIN-2017-47 | refinedweb | 230 | 75.61 |
Question:
I.
Solution:1)
Solution:2.
Solution:3 []
Solution:4
for i in xrange(len(somelist) - 1, -1, -1): if some_condition(somelist, i): del somelist[i]
You need to go backwards otherwise it's a bit like sawing off the tree-branch that you are sitting on :-)
Solution)
Solution:6
For those that like functional programming:
somelist[:] = filter(lambda tup: not determine(tup), somelist)
or
from itertools import ifilterfalse somelist[:] = list(ifilterfalse(determine, somelist))
Solution:7
The official Python 2 tutorial 4.2. "for Statements"']
which is what was suggested at:
The Python 2 documentation 7.3. "The for statement" gives the same advice:)
Could Python do this better?
It seems like this particular Python API could be improved. Compare it, for instance, with its Java counterpart ListIterator, which makes it crystal clear that you cannot modify a list being iterated except with the iterator itself, and gives you efficient ways to do so without copying the list. Come on, Python!
Solution:8
It might be smart to also just create a new list if the current list item meets the desired criteria.
so:
for item in originalList: if (item != badValue): newList.append(item)
and to avoid having to re-code the entire project with the new lists name:
originalList[:] = newList
note, from Python documentation:
copy.copy(x) Return a shallow copy of x.
copy.deepcopy(x) Return a deep copy of x.
Solution:9
I needed to do this with a huge list, and duplicating the list seemed expensive, especially since in my case the number of deletions would be few compared to the items that remain. I took this low-level approach.
array = [lots of stuff] arraySize = len(array) i = 0 while i < arraySize: if someTest(array[i]): del array[i] arraySize -= 1 else: i += 1
What I don't know is how efficient a couple of deletes are compared to copying a large list. Please comment if you have any insight.
Solution:10
This answer was originally written in response to a question which has since been marked as duplicate: Removing coordinates from list on python
There are two problems in your code:
1) When using remove(), you attempt to remove integers whereas you need to remove a tuple.
2) The for loop will skip items in your list.
Let's run through what happens when we execute your code:
>>> L1 = [(1,2), (5,6), (-1,-2), (1,-2)] >>> for (a,b) in L1: ... if a < 0 or b < 0: ... L1.remove(a,b) ... Traceback (most recent call last): File "<stdin>", line 3, in <module> TypeError: remove() takes exactly one argument (2 given)
The first problem is that you are passing both 'a' and 'b' to remove(), but remove() only accepts a single argument. So how can we get remove() to work properly with your list? We need to figure out what each element of your list is. In this case, each one is a tuple. To see this, let's access one element of the list (indexing starts at 0):
>>> L1[1] (5, 6) >>> type(L1[1]) <type 'tuple'>
Aha! Each element of L1 is actually a tuple. So that's what we need to be passing to remove(). Tuples in python are very easy, they're simply made by enclosing values in parentheses. "a, b" is not a tuple, but "(a, b)" is a tuple. So we modify your code and run it again:
# The remove line now includes an extra "()" to make a tuple out of "a,b" L1.remove((a,b))
This code runs without any error, but let's look at the list it outputs:
L1 is now: [(1, 2), (5, 6), (1, -2)]
Why is (1,-2) still in your list? It turns out modifying the list while using a loop to iterate over it is a very bad idea without special care. The reason that (1, -2) remains in the list is that the locations of each item within the list changed between iterations of the for loop. Let's look at what happens if we feed the above code a longer list:
L1 = [(1,2),(5,6),(-1,-2),(1,-2),(3,4),(5,7),(-4,4),(2,1),(-3,-3),(5,-1),(0,6)] ### Outputs: L1 is now: [(1, 2), (5, 6), (1, -2), (3, 4), (5, 7), (2, 1), (5, -1), (0, 6)]
As you can infer from that result, every time that the conditional statement evaluates to true and a list item is removed, the next iteration of the loop will skip evaluation of the next item in the list because its values are now located at different indices.
The most intuitive solution is to copy the list, then iterate over the original list and only modify the copy. You can try doing so like this:
L2 = L1 for (a,b) in L1: if a < 0 or b < 0 : L2.remove((a,b)) # Now, remove the original copy of L1 and replace with L2 print L2 is L1 del L1 L1 = L2; del L2 print ("L1 is now: ", L1)
However, the output will be identical to before:
'L1 is now: ', [(1, 2), (5, 6), (1, -2), (3, 4), (5, 7), (2, 1), (5, -1), (0, 6)]
This is because when we created L2, python did not actually create a new object. Instead, it merely referenced L2 to the same object as L1. We can verify this with 'is' which is different from merely "equals" (==).
>>> L2=L1 >>> L1 is L2 True
We can make a true copy using copy.copy(). Then everything works as expected:
import copy L1 = [(1,2), (5,6),(-1,-2), (1,-2),(3,4),(5,7),(-4,4),(2,1),(-3,-3),(5,-1),(0,6)] L2 = copy.copy(L1) for (a,b) in L1: if a < 0 or b < 0 : L2.remove((a,b)) # Now, remove the original copy of L1 and replace with L2 del L1 L1 = L2; del L2 >>> L1 is now: [(1, 2), (5, 6), (3, 4), (5, 7), (2, 1), (0, 6)]
Finally, there is one cleaner solution than having to make an entirely new copy of L1. The reversed() function:
L1 = [(1,2), (5,6),(-1,-2), (1,-2),(3,4),(5,7),(-4,4),(2,1),(-3,-3),(5,-1),(0,6)] for (a,b) in reversed(L1): if a < 0 or b < 0 : L1.remove((a,b)) print ("L1 is now: ", L1) >>> L1 is now: [(1, 2), (5, 6), (3, 4), (5, 7), (2, 1), (0, 6)]
Unfortunately, I cannot adequately describe how reversed() works. It returns a 'listreverseiterator' object when a list is passed to it. For practical purposes, you can think of it as creating a reversed copy of its argument. This is the solution I recommend.
Solution:11
If you want to do anything else during the iteration, it may be nice to get both the index (which guarantees you being able to reference it, for example if you have a list of dicts) and the actual list item contents.
inlist = [{'field1':10, 'field2':20}, {'field1':30, 'field2':15}] for idx, i in enumerate(inlist): do some stuff with i['field1'] if somecondition: xlist.append(idx) for i in reversed(xlist): del inlist[i]
enumerate gives you access to the item and the index at once.
reversed is so that the indices that you're going to later delete don't change on you.
Solution:12
You might want to use
filter() available as the built-in.
For more details check here
Solution:13
You can try for-looping in reverse so for some_list you'll do something like:
list_len = len(some_list) for i in range(list_len): reverse_i = list_len - 1 - i cur = some_list[reverse_i] # some logic with cur element if some_condition: some_list.pop(reverse_i)
This way the index is aligned and doesn't suffer from the list updates (regardless whether you pop cur element or not).
Solution:14
One possible solution, useful if you want not only remove some things, but also do something with all elements in a single loop:
alist = ['good', 'bad', 'good', 'bad', 'good'] i = 0 for x in alist[:]: if x == 'bad': alist.pop(i) i -= 1 # do something cool with x or just print x print(x) i += 1
Solution:15
I needed to do something similar and in my case the problem was memory - I needed to merge multiple dataset objects within a list, after doing some stuff with them, as a new object, and needed to get rid of each entry I was merging to avoid duplicating all of them and blowing up memory. In my case having the objects in a dictionary instead of a list worked fine:
```
k = range(5) v = ['a','b','c','d','e'] d = {key:val for key,val in zip(k, v)} print d for i in range(5): print d[i] d.pop(i) print d
```
Solution:16
TLDR:
I wrote a library that allows you to do this:
from fluidIter import FluidIterable fSomeList = FluidIterable(someList) for tup in fSomeList: if determine(tup): # remove 'tup' without "breaking" the iteration fSomeList.remove(tup) # tup has also been removed from 'someList' # as well as 'fSomeList'
It's best to use another method if possible that doesn't require modifying your iterable while iterating over it, but for some algorithms it might not be that straight forward. And so if you are sure that you really do want the code pattern described in the original question, it is possible.
Should work on all mutable sequences not just lists.
Full answer:
Edit: The last code example in this answer gives a use case for why you might sometimes want to modify a list in place rather than use a list comprehension. The first part of the answers serves as tutorial of how an array can be modified in place.
The solution follows on from this answer (for a related question) from senderle. Which explains how the the array index is updated while iterating through a list that has been modified. The solution below is designed to correctly track the array index even if the list is modified.
Download
fluidIter.py from here, it is just a single file so no need to install git. There is no installer so you will need to make sure that the file is in the python path your self. The code has been written for python 3 and is untested on python 2.
from fluidIter import FluidIterable l = [0,1,2,3,4,5,6,7,8] fluidL = FluidIterable(l) for i in fluidL: print('initial state of list on this iteration: ' + str(fluidL)) print('current iteration value: ' + str(i)) print('popped value: ' + str(fluidL.pop(2))) print(' ') print('Final List Value: ' + str(l))
This will produce the following output:
initial state of list on this iteration: [0, 1, 2, 3, 4, 5, 6, 7, 8] current iteration value: 0 popped value: 2 initial state of list on this iteration: [0, 1, 3, 4, 5, 6, 7, 8] current iteration value: 1 popped value: 3 initial state of list on this iteration: [0, 1, 4, 5, 6, 7, 8] current iteration value: 4 popped value: 4 initial state of list on this iteration: [0, 1, 5, 6, 7, 8] current iteration value: 5 popped value: 5 initial state of list on this iteration: [0, 1, 6, 7, 8] current iteration value: 6 popped value: 6 initial state of list on this iteration: [0, 1, 7, 8] current iteration value: 7 popped value: 7 initial state of list on this iteration: [0, 1, 8] current iteration value: 8 popped value: 8 Final List Value: [0, 1]
Above we have used the
pop method on the fluid list object. Other common iterable methods are also implemented such as
del fluidL[i],
.remove,
.insert,
.append,
.extend. The list can also be modified using slices (
sort and
reverse methods are not implemented).
The only condition is that you must only modify the list in place, if at any point
fluidL or
l were reassigned to a different list object the code would not work. The original
fluidL object would still be used by the for loop but would become out of scope for us to modify.
i.e.
fluidL[2] = 'a' # is OK fluidL = [0, 1, 'a', 3, 4, 5, 6, 7, 8] # is not OK
If we want to access the current index value of the list we cannot use enumerate, as this only counts how many times the for loop has run. Instead we will use the iterator object directly.
fluidArr = FluidIterable([0,1,2,3]) # get iterator first so can query the current index fluidArrIter = fluidArr.__iter__() for i, v in enumerate(fluidArrIter): print('enum: ', i) print('current val: ', v) print('current ind: ', fluidArrIter.currentIndex) print(fluidArr) fluidArr.insert(0,'a') print(' ') print('Final List Value: ' + str(fluidArr))
This will output the following:
enum: 0 current val: 0 current ind: 0 [0, 1, 2, 3] enum: 1 current val: 1 current ind: 2 ['a', 0, 1, 2, 3] enum: 2 current val: 2 current ind: 4 ['a', 'a', 0, 1, 2, 3] enum: 3 current val: 3 current ind: 6 ['a', 'a', 'a', 0, 1, 2, 3] Final List Value: ['a', 'a', 'a', 'a', 0, 1, 2, 3]
The
FluidIterable class just provides a wrapper for the original list object. The original object can be accessed as a property of the fluid object like so:
originalList = fluidArr.fixedIterable
More examples / tests can be found in the
if __name__ is "__main__": section at the bottom of
fluidIter.py. These are worth looking at because they explain what happens in various situations. Such as: Replacing a large sections of the list using a slice. Or using (and modifying) the same iterable in nested for loops.
As I stated to start with: this is a complicated solution that will hurt the readability of your code and make it more difficult to debug. Therefore other solutions such as the list comprehensions mentioned in David Raznick's answer should be considered first. That being said, I have found times where this class has been useful to me and has been easier to use than keeping track of the indices of elements that need deleting.
Edit: As mentioned in the comments, this answer does not really present a problem for which this approach provides a solution. I will try to address that here:
List comprehensions provide a way to generate a new list but these approaches tend to look at each element in isolation rather than the current state of the list as a whole.
i.e.
newList = [i for i in oldList if testFunc(i)]
But what if the result of the
testFunc depends on the elements that have been added to
newList already? Or the elements still in
oldList that might be added next? There might still be a way to use a list comprehension but it will begin to lose it's elegance, and for me it feels easier to modify a list in place.
The code below is one example of an algorithm that suffers from the above problem. The algorithm will reduce a list so that no element is a multiple of any other element.
randInts = [70, 20, 61, 80, 54, 18, 7, 18, 55, 9] fRandInts = FluidIterable(randInts) fRandIntsIter = fRandInts.__iter__() # for each value in the list (outer loop) # test against every other value in the list (inner loop) for i in fRandIntsIter: print(' ') print('outer val: ', i) innerIntsIter = fRandInts.__iter__() for j in innerIntsIter: innerIndex = innerIntsIter.currentIndex # skip the element that the outloop is currently on # because we don't want to test a value against itself if not innerIndex == fRandIntsIter.currentIndex: # if the test element, j, is a multiple # of the reference element, i, then remove 'j' if j%i == 0: print('remove val: ', j) # remove element in place, without breaking the # iteration of either loop del fRandInts[innerIndex] # end if multiple, then remove # end if not the same value as outer loop # end inner loop # end outerloop print('') print('final list: ', randInts)
The output and the final reduced list are shown below
outer val: 70 outer val: 20 remove val: 80 outer val: 61 outer val: 54 outer val: 18 remove val: 54 remove val: 18 outer val: 7 remove val: 70 outer val: 55 outer val: 9 remove val: 18 final list: [20, 61, 7, 55, 9]
Solution:17
The other answers are correct that it is usually a bad idea to delete from a list that you're iterating. Reverse iterating avoids the pitfalls, but it is much more difficult to follow code that does that, so usually you're better off using a list comprehension or
filter.
There is, however, one case where it is safe to remove elements from a sequence that you are iterating: if you're only removing one item while you're iterating. This can be ensured using a
return or a
break. For example:
for i, item in enumerate(lst): if item % 4 == 0: foo(item) del lst[i] break
This is often easier to understand than a list comprehension when you're doing some operations with side effects on the first item in a list that meets some condition and then removing that item from the list immediately after.
Solution:18
For anything that has the potential to be really big, I use the following.
import numpy as np orig_list = np.array([1, 2, 3, 4, 5, 100, 8, 13]) remove_me = [100, 1] cleaned = np.delete(orig_list, remove_me) print(cleaned)
That should be significantly faster than anything else.
Solution:19
In some situations, where you're doing more than simply filtering a list one item at time, you want your iteration to change while iterating.
Here is an example where copying the list beforehand is incorrect, reverse iteration is impossible and a list comprehension is also not an option.
""" Sieve of Eratosthenes """ def generate_primes(n): """ Generates all primes less than n. """ primes = list(range(2,n)) idx = 0 while idx < len(primes): p = primes[idx] for multiple in range(p+p, n, p): try: primes.remove(multiple) except ValueError: pass #EAFP idx += 1 yield p
Solution:20
Right away you want to create a copy of the list so you can have that as a reference when you are iterating through and deleting tuples in that list that meet a certain criteria.
Then it depends on what type of list you want for the output whether that be a list of the removed tuples or a list of the tuples that are not removed.
As David pointed out, I recommend list comprehension to keep the elements you don't want to remove.
somelist = [x for x in somelist if not determine(x)]
Note:If u also have question or solution just comment us below or mail us on toontricks1994@gmail.com
EmoticonEmoticon | http://www.toontricks.com/2018/10/tutorial-how-to-remove-items-from-list.html | CC-MAIN-2019-04 | refinedweb | 3,147 | 63.43 |
I have an add() method for my doubly circular linked list which is supposed to add new Nodes at the end while maintaining the data structure. But when I create a doubly circular linked list and print the contents, the output is a list of the first item only.
The method is adding only the first item only.
My class makes use of DoubleNode class which is imported. DoubleNode<E>works fine.
Double node is just a node which has a getPrevious() method in addition to getNext() method.
Code java:
public void add(E item){ public class DCLinkedList<E> implements SomeInterface<E>{ private int numberOfItems= 0; private DoubleNode<E> first; private DoubleNode<E> currentPosition; public DCLinkedList(){ first = null; currentPosition = first;} public void add(E item){ private DoubleNode<E> node = new DoubleNode<E>(item); if(numElements == 0){ first = node; first.setPrevious(first); first.setNext(first); numberOfItems++; currentPosition = first; } else{ node.setBack(currentPosition.getPrevious()); currentPosition.getPrevious().setNext(node); node.setNext(first); currentPosition = head; numberOfItems++; } public String toString(){ String string = ""; int i = 0; while( i < numberOfItems){ string += currentPosition.getInfo()+" "; i++; } return "{"+string.substring(0,string.length()-1)+"}"; } } //Here's what happens when I try to run the code. public static void main(String[] args){ DCLinkedList<Integer> someList = new DCLinkedList<Integer>(); someList.add(1); someList.add(2); someList.add(4); System.out.println(list.toString()); }
--- Update ---
Was in the middle of deleting post but thought some will find it useful. It turns out the problem was in my toString() method. I actually solved the problem while writing the toString() method. I saw my mistake. So, you should think about your code while posting. You might just spot the mistake and correct it. ;) | http://www.javaprogrammingforums.com/%20whats-wrong-my-code/36967-my-cirular-doubly-linked-list-only-adds-first-number-printingthethread.html | CC-MAIN-2014-42 | refinedweb | 278 | 57.47 |
If an output stream is non-blocking, it may return NS_BASE_STREAM_WOULD_BLOCK when written to. More...
import "nsIAsyncOutputStream.idl";
If an output stream is non-blocking, it may return NS_BASE_STREAM_WOULD_BLOCK when written to.
The caller must then wait for the stream to become writable. If the stream implements nsIAsyncOutputStream, then the caller can use this interface to request an asynchronous notification when the stream becomes writable or closed (via the AsyncWait method).
While this interface is almost exclusively used with non-blocking streams, it is not necessary that nsIOutputStream::isNonBlocking return true. Nor is it necessary that a non-blocking nsIOutputStream implementation also implement nsIAsyncOutputStream.
Asynchronously wait for the stream to be writable or closed.
The notification is one-shot, meaning that each asyncWait call will result in exactly one notification callback. After the OnOutputStreamReady event is dispatched, the stream releases its reference to the nsIOutputStreamCallback object. It is safe to call asyncWait again from the notification handler.
This method may be called at any time (even if write has not been called). In other words, this method may be called when the stream already has room for more data. It may also be called when the stream is closed. If the stream is already writable or closed when AsyncWait is called, then the OnOutputStreamReady event will be dispatched immediately. Otherwise, the event will be dispatched when the stream becomes writable or closed.
This method closes the stream and sets its internal status.
If the stream is already closed, then this method is ignored. Once the stream is closed, the stream's status cannot be changed. Any successful status code passed to this method is treated as NS_BASE_STREAM_CLOSED, which is equivalent to nsIInputStream::close.
NOTE: this method exists in part to support pipes, which have both an input end and an output end. If the output end of a pipe is closed, then reads from the input end of the pipe will fail. The error code returned when an attempt is made to read from a "closed" pipe corresponds to the status code passed in when the output end of the pipe is closed, which greatly simplifies working with pipes in some cases.
If passed to asyncWait, this flag overrides the default behavior, causing the OnOutputStreamReady notification to be suppressed until the stream becomes closed (either as a result of closeWithStatus/close being called on the stream or possibly due to some error in the underlying stream). | http://doxygen.db48x.net/comm-central/html/interfacensIAsyncOutputStream.html | CC-MAIN-2019-09 | refinedweb | 406 | 62.48 |
How do I use
fork in C?
The only way to create a new process in UNIX is with the fork system call, which duplicates the current process. Example:

#include <stdio.h>
#include <unistd.h>

int main(void) {
  pid_t pid = fork();
  if (pid == 0) {
    printf("I'm the child process.\n");
  } else {
    printf("I'm the parent process; the child got pid %d.\n", pid);
  }
  return 0;
}
This prints:

% ./a.out
I'm the parent process; the child got pid 45055.
I'm the child process.
A call to fork() instructs the operating system to duplicate the calling process. The new process is identical, with one main difference: the new process gets a new process ID, and this ID is returned to the caller of fork(). The new process is returned the value 0, by which it knows that it is the child, because 0 is not a valid process ID.
Notice we get one line of output from each process. The order of the lines here is non-deterministic, since they come from different processes! We receive both lines because both processes' stdout descriptors reference the same underlying output. The fork() system call copies all of the parent process's file descriptors, including the standard streams (stdin, stdout, and stderr).
The fork system call is often combined with execve as a way to start a new process from a program file. I'll cover fork/execve in a future post.
Source: https://jameshfisher.com/2017/02/06/how-do-i-use-fork-in-c/
Logging in with gdm though not running a session manager, I get
XAUTHORITY=/home/ejb/.Xauthority
If I su and reset this environment variable to the same value, up2date
fails with the following error:
------------------------------
Xlib: connection to ":0.0" refused by server
Xlib: Client is not authorized to connect to Server
Traceback (innermost last):
File "/usr/sbin/up2date", line 455, in ?
main()
File "/usr/sbin/up2date", line 441, in main
import gui
File "/usr/share/rhn/up2date/gui.py", line 16, in ?
import gtk, GTK, GDK
File "/usr/lib/python1.5/site-packages/gtk.py", line 29, in ?
_gtk.gtk_init()
RuntimeError: cannot open display
------------------------------
Note that running other X clients at this time works. I reported this
along with other problems in a bug 26011 about printtool, but I'm
reporting it again because it was not the primary focus of 26011.
Running xhost localhost works around the problem but should not be
necessary.
Maybe the category of this bug should have python, python-gtk, or something
like that.
Note: the same Xlib errors occur when I run up2date as myself and enter the root
password when prompted. It was only after this failed that I tried running su
first.
We (Red Hat) should really try to fix this before next release.
I've attempted to change the category of this bug to usermode since that seems
to be where the problem is. /usr/bin/up2date is a link to consolehelper which
in turn invokes /usr/sbin/up2date. If I su and invoke /usr/sbin/up2date, there
is no display access problem. consolehelper (or userhelper) seems to reset all
environment variables except
HOME TERM DISPLAY SHELL USER LOGNAME PATH
The bug seems to be in userhelper.c. If my quick reading of the code is
correct, it looks like the exec of the new process is done before XAUTHORITY is
restored.
Okay -- I figured it out. Get rid of this line:
session optional /lib/security/pam_xauth.so
in /etc/pam.d/up2date
and the problem is solved. This will need to be done for printconf-gui and any
other X application that is linked to usermode.
sorry for all the noise.... I set the category back to up2date since that's the
component that owns /etc/pam.d/up2date. More likely someone else will find this
in a query.....
The real problem may be that pam_xauth doesn't work right. I'm going to stop
babbling about this now. I just didn't want to leave things with the impression
that I am suggesting that just removing the offending line from
/etc/pam.d/up2date was necessarily the right fix. pam_xauth may have been there
for a reason and may not be working. I noticed a while ago that my XAUTHORITY
environment variable disappeared after su, but I did not suspect pam. I thought
it was a feature of su. I don't think I have enough data to actually submit a
bug report about pam_xauth though. Anyway, I trust that the person to whom this
bug has been assigned will look at all the facts, understand what's happening
and what's supposed to happen, and do the Right Thing. That's all I'm going to
say today.
I'm unable to reproduce this with:
gdm-2.0beta2-39
pam-0.74-3
up2date-2.1.7-2
up2date-gnome-2.1.7-2
usermode-1.37-2
My .Xclients file has been modified to simply "exec xterm". Logging in via gdm,
I get an XAUTHORITY variable, but when I switch to another VT where I'm already
logged in as root, remove root's .Xauthority file, and su from the X session, I
can run both up2date and xclock without difficulty (though, as mentioned, the
XAUTHORITY variable disappears).
There *is* a problem in how unprivileged graphical apps are run (gnorpm-auth,
when the user elects to run it unprivileged and XAUTHORITY doesn't point to
~/.Xauthority), and that'll be fixed in 1.40, but I can't reproduce the reported
problem with either printtool or up2date.
can't duplicate it either...
I will send very explicit instructions on how to reproduce this problem and
re-open unless it turns out to be a problem in my own environment.
By explicit instructions, I mean, "create a user with the following home
directory, boot at runlevel 3 with the following inittab, log in, type the
following commands..."
Okay, I definitely know what the problem is. My uid is 417 which is <= 499
which is the systemuser parameter's value in pam_xauth. The problem I described
would happen for any user with uid < 500.
After being unable to reproduce this from a freshly created user created via
useradd but being able to reproduce it every time from my own login even after
replacing the entire contents of my home directory with the default user's, I
finally traced it down to the uid, and found the argument systemuser to
pam_xauth. If I add systemuser=299 to the pam_xauth line in /etc/pam.d/up2date,
then it works for me too.
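For reference, the workaround discussed in these comments would presumably leave the pam_xauth line in /etc/pam.d/up2date looking roughly like this (an illustrative sketch based on the line quoted earlier in the bug; check the pam_xauth documentation for your installed version):

```
session    optional    /lib/security/pam_xauth.so systemuser=299
```

This lowers the threshold below which a user is treated as a "system user", so ordinary accounts with UIDs between 300 and 499 get their X authority forwarded again.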
I know useradd starts creating users with user ids of 500, but I've been
creating users since long before RedHat existed and I have lots of users with
IDs less than 500. Why does this value have to be so high?
As it is now, I have a simple work-around: I can add the systemuser=299 argument
to all pam.d files that have pam_xauth in them. I also have a heinous
workaround: I can renumber all my uids on all my systems. I'd be really happy
if there were a place I could change this globally other than editing
pam_xauth.c to change the default value of this field.
Anyway, I would understand if you weren't planning on fixing this, but I've
reopened the bug so that this can receive some consideration. I've also changed
severity to "low" and category to "pam".
*** Bug 26011 has been marked as a duplicate of this bug. ***
Nice catch. Historically, we've assigned new users UIDs starting at 500, but
have had to make the area between 100 and 500 a "no man's land" for maximum
compatibility with other OSs, which may assign users UIDs below 500. We'll
change this default in pam_xauth for the next release.
Thanks!
Source: https://bugzilla.redhat.com/show_bug.cgi?id=26343
An assembly is first and foremost a deployment unit, they should normally be used to group code that works together and is deployed together or putting it the other way around, they are used to split code that may not be deployed together.
There are reasons to split code between multiple assemblies even if you intend to deploy them together but these are exceptions to the rule. I would see independent versioning requirements as one possible exceptional reason.
What you really shouldn’t do is create assemblies just for the sake of splitting your code for cosmetic reasons. I’m not saying that you shouldn’t organize your code, just saying there are better tools for that job. In this case, that would be namespaces alongside project folders and while on the subject of namespaces, another thing that really does not make any sense is to try to have a single namespace per assembly. If you’re down that path, take a step back cause you’re doing it wrong…
I've seen, more than once, .NET solutions suffer from this assembly explosion, quickly escalating to hundreds of assemblies for something that sure as hell wasn't that complex, and where 80% of the assemblies end up being deployed together due to a high level of dependencies.
However, you also need to avoid doing the opposite and cram everything in a single assembly. As pretty much everything in software development the correct answer depends on many things specific to the scenario at hand.
Be conscious of your decisions and why you make them.
Source: https://exceptionalcode.wordpress.com/2014/11/25/a-different-kind-of-assembly-hell/
Deep Copying Parent Nodes from Source to Target
You can deep copy a similar parent node from the source to the target using one drag and drop action. This eliminates the need to map every individual leaf node in large mappings. You can then override individual leaf mappings on a case-by-case basis. Similar is defined as a source and target having the same QName (name and namespace) and data type. A similar parent node can also be mapped if both source and target nodes are repeating elements. While performing a deep copy, repeating parent nodes are mapped with the for-each tag, while nonrepeating parent nodes are not mapped. Leaf elements and attributes are mapped with the value-of tag. You can only perform deep copying on the mapper page. Deep copying of functions on the Build Mappings page is not supported.
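In XSLT terms, the mapping behavior described above presumably resembles the following sketch, with the repeating parent wrapped in for-each and its leaves mapped with value-of. The element names are modeled on the TypedAddressList example used later in this topic; the mapper's actual generated output may differ:

```xml
<!-- Hypothetical sketch, not the mapper's literal output. -->
<xsl:for-each select="TypedAddressList">
  <TypedAddressList>
    <!-- Leaf element mapped with value-of -->
    <AddressLine1>
      <xsl:value-of select="AddressLine1"/>
    </AddressLine1>
    <!-- A nonrepeating parent such as Country is not mapped itself;
         only its leaf children would receive value-of mappings. -->
  </TypedAddressList>
</xsl:for-each>
```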
- Access the mapper in an integration.
- Drag the similar parent node from the source to the target. The mapper identifies that the selected repeating elements are similar and asks if you want to map similar descendants.
- Click Yes. You can also select a checkbox to remember your preferences for this session. For this example, the source TypedAddressList element is mapped to the target TypedAddressList element. Attributes and leaf elements are mapped. Nonleaf elements, such as Country, are not mapped. Nonrepeating parents cannot be mapped, but their children can be mapped.
[Illustration: deep_copy_in_mapper.png]
- Edit leaf mappings on a case-by-case basis. However, if you delete the parent, everything is deleted.
Source: https://docs.oracle.com/en/cloud/paas/integration-cloud-service/ocmap/deep-copying-parent-nodes-source-target.html
rurpy at yahoo.com wrote:
> Fredrik Lundh wrote:
>
>> def convert(old):
>>
>>     new = dict(
>>         CODE=old['CODEDATA'],
>>         DATE=old['DATE']
>>     )
>>
>>     if old['CONTACTTYPE'] == 2:
>>         new['CONTACT'] = old['FIRSTCONTACT']
>>     else:
>>         new['CONTACT'] = old['SECONDCONTACT']
>>
>>     return new
>
> I don't find your code any more readable than the OP's
> equivalent code:
>
> def convert(old):
>     new = {
>         CODE: old['CODEDATA'],
>         DATE: old['DATE'],
>         CONTACT: old['FIRSTCONTACT'] \
>             if old['CONTACTTYPE'] == 2 \
>             else old['OLDDCONTACT']
>     }
>     return new

The problem I have with your code is that it is too similar to:

def convert(old):
    new = {
        CODE: old['CODEDATA'],
        DATE: old['DATE'],
        CONTACT: old['FIRSTCONTACT']
    }
    if old['CONTACTTYPE'] == 2:
    else:
        old['OLDDCONTACT']
    return new

Yes, the above code is broken. But it *looks* right, at first glance, unless you train yourself to never write

if cond: TRUE_CLAUSE
else: FALSE_CLAUSE

as a two-liner. Regardless of whatever benefits the ternary operator is going to have, in my opinion allowing people to write

TRUE_CLAUSE if COND else FALSE_CLAUSE

will increase the amount of poorly written, hard to read code.

> The OPs code make one pass through the dict, your's makes
> two.

The original code binds a name to an empty dict, then rebinds the name to a populated dict. Your code simply creates the dict in one step. Fredrik's code creates an initial dict, then adds a new key and value to it. That's hardly making two passes through the dict -- what does that even mean?

> I do not know what effect (if any) that has in the case of
> a very large dict.

Quick and dirty speed test:

py> from time import time
py> def makedict1():
...     return dict.fromkeys(range(1000))
...
py> def makedict2():
...     D = {}
...     for i in range(1000):
...         D[i] = None
...     return D
...
py> assert makedict1() == makedict2()
py> def test(func, n=100):
...     t = time()
...     for i in range(n):
...         tmp = func()
...     return (time() - t)/n
...
py> test(makedict1)
0.00020779848098754884
py> test(makedict2)
0.00042409896850585936

That's not timing quite the same thing you refer to, but it is suggestive: if you create an empty dict, and then populate it one item at a time, it will take approximately twice as long as creating the non-empty dict directly.

As a very unscientific, system-dependent, statistically shoddy ball-park estimate, it looks like each item assignment to a pre-existing dict takes less than 0.0000002s. So if you had a dict and added another million items to it, one at a time, it might take an extra 0.2s in total more than what it would have taken you if you wrote those one million items in your source code. I can live with that.

--
Steven.
Source: https://mail.python.org/pipermail/python-list/2005-November/305076.html
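A present-day version of the same quick benchmark, using the standard timeit module, might look like this (timings will vary by machine; the exact numbers above are from 2005 hardware):

```python
import timeit

def makedict1():
    # Build the populated dict in one step.
    return dict.fromkeys(range(1000))

def makedict2():
    # Start empty, then populate one item at a time.
    d = {}
    for i in range(1000):
        d[i] = None
    return d

# Both approaches must produce the same dict.
assert makedict1() == makedict2()

t1 = timeit.timeit(makedict1, number=1000)
t2 = timeit.timeit(makedict2, number=1000)
print(f"one step:        {t1:.4f}s")
print(f"item at a time:  {t2:.4f}s")
```

The qualitative conclusion in the post (building in one step beats populating item by item) still holds on current CPython, though the ratio depends on the interpreter version.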
Created attachment 627486 [details]
testcase
The attached testcase sets the class name of an element in window.onload, but in Firefox >= 13 the CSS rule that references that class name is not applied.
The testcase works as expected in Opera, Google Chrome and Firefox <= 12.
Created attachment 627487 [details]
New testcase
Sorry, the previous attachment was wrong.
In this testcase, the block should turn red when you hover the mouse over it. This works in Firefox 12 and other browsers, but not in Firefox 13.
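The attachment itself is not reproduced here, but based on the description the new testcase is presumably along these lines (a reconstruction, not the actual attachment): the class is assigned in window.onload, and the block should turn red on hover.

```html
<!DOCTYPE html>
<style>
  #block { width: 100px; height: 100px; background: green; }
  .test:hover { background: red; }
</style>
<div id="block"></div>
<script>
  window.onload = function () {
    // The class is set *after* initial style resolution; in Firefox 13
    // the .test:hover rule then never matches when the mouse hovers.
    document.getElementById("block").className = "test";
  };
</script>
```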
Regression range from Linux m-c nightlies is (not surprisingly; I only downloaded 2 nightlies to test this):
Confirmed as regression from by local backout of the nsCSSFrameConstructor.cpp changes in it.
Taking. Wish this had gotten reported earlier; at this point the chances of me being able to convince the beta drivers to take a fix for 13 might be slim. :(
So the key issue here is that during initial CSS matching the node does not have class="test", so we short-circuit matching the ".test:hover" selector before matching the :hover part, and hence don't set the "affected by :hover" flag on the element.
Then when the class is changed, we do HasAttributeDependentStyle, but the ".test:hover" selector doesn't match because ":hover" doesn't match, so we don't do a restyle on the element. During HasAttributeDependentStyle, mForStyling is false, so we don't set the "affected by :hover" flag again.
Finally, when we enter hover the "affected by :hover" flag is not set, so we don't think we need to restyle.
Nice testcase!
And even a naive change that fixed HasAttributeDependentStyle when matching :hover to flag the node would fail in general, I think, because we shortcut the matching in AttrEnumFunc altogether if the restyle hint for the current selector is subsumed by our pending restyle hint. So we might not match the :hover during the restyle at all.
So the obvious options are:
1) Back out bug 732667.
2) Add a boolean flag to nsCSSSelector (maybe shrinking the number of bits mPseudoType
to avoid changing the struct size) that indicates whether the selector includes
:hover. Set this as needed while building the rule cascade; check for this flag in
SelectorMatches() after the namespace/tag checks, and set the bit on the DOM node
accordingly. Need to watch out for :not here.
3) Restrict the optimization from bug 732667 to selectors that only include :hover with a
tag name and nothing else. It would still be good enough for github, and would not be
subject to the issues here because tag names are immutable.
Everything else I've thought of so far is a slower version of #2.
I think #3 is the way to go, personally. I'll write up a patch.
Created attachment 627838 [details] [diff] [review]
Don't apply the dynamic :hover reresolution skipping optimization to selectors which can match on mutable state other than :hover.
I tried to add a test for this to test_hover, but couldn't get it to fail even without this patch....
Comment on attachment 627838 [details] [diff] [review]
Don't apply the dynamic :hover reresolution skipping optimization to selectors which can match on mutable state other than :hover.
r=dbaron
I guess AncestorFilter interactions are ok because any change to a class or ID on an ancestor will always trigger the full restyle when there's a selector that's relevant (because we're only looking at part of the selector).
Yes, exactly.
Comment on attachment 627838 [details] [diff] [review]
Don't apply the dynamic :hover reresolution skipping optimization to selectors which can match on mutable state other than :hover.
[Approval Request Comment]
Bug caused by (feature/regressing bug #): bug 732667
User impact if declined: Some elements that should respond to hover won't
Testing completed (on m-c, etc.): Passes attached test and some other basic
smoketesting.
Risk to taking this patch (and alternatives if risky): Risk should be low: the
patch is making an optimization apply in fewer cases so it doesn't lead to
incorrect behavior. The main alternative is disabling the optimization
altogether (i.e. backing out bug 732667).
String or UUID changes made by this patch: None
Comment on attachment 627838 [details] [diff] [review]
Don't apply the dynamic :hover reresolution skipping optimization to selectors which can match on mutable state other than :hover.
[Triage Comment]
There isn't enough evidence that this is a common action in websites to warrant any additional risk between our final beta and RC. We haven't received feedback around site breakage caused by this bug. Please re-nominate if you disagree.
We will, however, release note this bug for FF13. Approving for Aurora 14.
Hi
I'm having this same issue; even in the test case the problem still exists. Using Mac Firefox 13. Tested in earlier versions of Firefox, this works fine, but the problem persists in Firefox 13.
Yes, the fix didn't quite make it into version 13. It's fixed in 14 though, which is due to be released in mid-July (it's currently in beta).
Indeed. See the Firefox 13 release notes...
Thanks for getting back to me so quickly guys :)
Verified using the test case attached in the Description that the CSS rule is applied.
Verified using Firefox 14 beta 8 on Windows
Verified on Firefox 15 beta 3, using the test case attached in the Description that - the CSS rule is applied.
Verified on Windows XP, Ubuntu 12.04 and Mac OS X 10.7:
Mozilla/5.0 (Windows NT
Source: https://bugzilla.mozilla.org/show_bug.cgi?id=758885
How Swift 5.5 enables us to conditionally compile postfix member expressions using compiler directives, and what kinds of situations that this new feature could be useful in.
Antoine van der Lee, creator of SwiftLee, joins John to discuss the new language features that are being introduced as part of Swift 5.5 — from the brand new concurrency system, to convenience features and various improvements.
How we can now use Swift’s very convenient “dot syntax” to refer to protocol-conforming types, and how that improves some of SwiftUI’s styling APIs.
What Swift’s @unknown attribute does, and why the compiler tells us to use it when switching on certain enums.
How Swift 5.5 enables computed properties to become either throwing or asynchronous, and what sort of situations that these new capabilities could become useful in.
What sort of capabilities that a mutating Swift context has, and what the mutating and nonmutating keywords do.
Chris Lattner returns to the show to discuss Swift’s new concurrency features, the ongoing evolution of the language, and the importance of both language and API design. This, and much more, on this special 100th episode of the show.
Availability checks let us conditionally use new system APIs and features while still enabling the rest of our code to keep running on older system versions. Let’s take a look at how they can be used.
Swift’s enums are awesome, but they’re not always the best choice for modeling a given piece of data. Let’s explore why that is, and what other tools that can be good to keep in mind in order to avoid certain problematic enum cases.
How a Swift property wrapper can refer to its enclosing type, and examples of how that capability could be used.
Vincent Pradeilles joins John to discuss various ways to use Swift language features like key paths and closures, how they relate to patterns typically used within functional programming, and when and how to adopt such patterns.
A roundup of some of the key ways in which Swift 5.3 enhances the overall experience of building views using SwiftUI.
An overview of the tools and directives that enable us to influence how our Swift code gets compiled, and what sort of situations that each of those tools might be particularly useful in.
A closer look at Swift’s result builders feature, and how it can give us some really valuable insights into how SwiftUI’s DSL operates under the hood.
A look at a few somewhat lesser-known ways in which enums can be used to solve various problems in Swift-based apps and libraries.
Let’s take a closer look at opaque return types, how they can be used both with and without SwiftUI, and how they compare to similar generic programming techniques, such as type erasure.
An introduction to Swift’s type inference system, how it makes the syntax of the language so lightweight, and how to work around some of its limitations.
In this Basics article, let’s take a look at a few examples of the various kinds of properties that Swift supports, and what their characteristics are.
Swift enums are really powerful, but they can often be made even more capable when mixed with other kinds of Swift types — such as protocols and structs. This week, let’s take a look at a few examples of doing just that.
What makes Swift a protocol-oriented language, and how can protocols be used to create abstractions and to enable code reuse? That’s what we’ll take a look at in this Basics article.
A summary of all Swift by Sundell content published during March 2020.
Let’s explore how optional values work in Swift, including how they’re implemented under the hood, and what sort of techniques that we can use to handle them.
This week, let’s take a look at how Swift’s property wrappers work, and explore a few examples of situations in which they could be really useful.
Welcome to Swift Clips — a new series of shorter videos showcasing interesting and useful Swift tips and techniques. In this first episode we’ll take a look at first class functions, which is a language feature that enables us to use functions in really powerful ways.
An overview of Swift’s five different levels of access control, how they work, and when each of them might be useful in practice.
This week, let’s dive deep into the world of pattern matching in Swift — to take a look at how we can construct completely custom patterns, and some of the interesting techniques that we can unlock by doing so.
How enums work in Swift, a look at some of their most prominent features, and examples of situations in which they can be incredibly useful.
Swift 5.1 has now been officially released, and despite being a minor release, it contains a substantial number of changes and improvements. This week, let’s take a look at five of those features, and what kind of situations they could be useful in.
This week, let’s take a look at how subscripting works in Swift, and a few different ways to incorporate it into the way we design APIs — including some brand new capabilities that are being added in Swift 5.1.
Being able to express basic values using inline literals is an essential feature in most programming languages. This week, let’s focus on string literals, by taking a take a look at the many different ways that they can be used and how we — through Swift’s highly protocol-oriented design — are able to customize the way literals are interpreted.
One really elegant aspect of Swift’s design is how it manages to hide much of its power behind much simpler programming constructs. Pattern matching is one source of that power, especially considering how it’s integrated into many different aspects of the language.
When using syntactic sugar, what we ideally want is to be able to strike a nice balance between low verbosity and clarity, and this week, let’s take a look at a few different ways that type aliases can enable us to do just that.
A really elegant aspect of Swift's take on optionals is that they're largely implemented using the type system - since all optional values are actually represented using an enum under the hood. That gives us some interesting capabilities, since we can extend that enum to add our own convenience APIs and other kinds of functionality. This week, let's take a look at how to do just that.
Swift keeps gaining more and more features that are more dynamic in nature, while still retaining its focus on type-safe code. This week, let’s take a look at how key paths in Swift work, and some of the cool and powerful things they can let us do.
Type inference is a key feature of the Swift type system and plays a big part in the syntax of the language - making it less verbose by eliminating the need for manual type annotations where the compiler itself can infer the types of various values.
One really interesting feature of Swift is the ability to create lightweight value containers using tuples. The concept is quite simple - tuples let us easily group together any number of objects or values without having to create a new type. But even though it's a simple concept, it opens up some really interesting opportunities, both in terms of API design and when structuring code..
Languages that support first class functions enable us to use functions and methods just like any other object or value. We can pass them as arguments, save them in properties or return them from another function. In other words, the language treats functions as "first class citizens".
Even though closures are very widely used, there are a lot of behaviors and caveats to keep in mind when using them. This week, let's take a closer look at closures, how capturing works and some techniques that can make handling them easier.
Swift’s @autoclosure attribute enables us to define an argument that automatically gets wrapped in a closure. It’s primarily used to defer execution of a (potentially expensive) expression to when it’s actually needed, rather than doing it directly when the argument is passed.
Lazy properties allow you to create certain parts of a Swift type when needed, rather than doing it as part of its initialization process. This can be useful in order to avoid optionals, or to improve performance when certain properties might be expensive to create. This week, let’s take a look at a few ways to define lazy properties in Swift, and how different techniques are useful in different situations.
While Swift does not yet feature a dedicated namespace keyword, it does support nesting types within others. Let’s take a look at how using such nested types can help us improve the structure of our code.
Source: https://swiftbysundell.com/tags/language-features/
It's very easy to get bogged down in the early stages of the namespace design without actually progressing much further. The stumbling block seems to be that it feels conceptually wrong to have only one domain, yet administrators can't put their finger on what the problem is. Experienced Windows NT administrators who manage multiple domains seem to find this much more of a problem than those coming from another operating system.
If you follow the guidelines in the initial steps of the namespace design, you quite probably will end up with one domain to start with. That's the whole point of the design process: to reduce the number of domains you need. Yet NT administrators tend to feel that they have conceptually lost something very important; with only one domain, somehow this design doesn't "feel right."
This is partly a conceptual problem: a set of domains with individual objects managed by different teams can feel more secure and complete than a set of Organizational Units in a single domain containing individual objects managed by different teams. It's also partly an organizational problem and, possibly, a political problem. Putting in an Active Directory environment is a significant undertaking for an organization and shouldn't be taken lightly. This change is likely to impact everyone across the company, assuming you're deploying across the enterprise. Changes at that level are likely to require ratification by a person or group who may not be directly involved on a day-to-day basis with the team proposing the change. So you have to present a business case that explains the benefits of moving to Active Directory.
Following our advice in this chapter and Microsoft's official guidelines from the white papers or Resource Kit will lead most companies to a single domain for their namespace design. It is your network, and you can do what you want. More domains give you better control over replication traffic but may mean more expense in terms of hardware. If you do decide to have multiple domains but have users in certain locations that need to log on to more than one domain, you need DCs for each domain that the users need in that location. This can be expensive. We'll come back to this again later, but let's start by considering the number of domains you need.
If the algorithm we use to help you determine the number of domains gives you too small a figure in your opinion, here's how you can raise it:
Have one domain for every single-master and multimaster Windows NT domain that you have. If you are using the Windows NT multimaster domain model, consider the entire set of multimasters as one domain under Active Directory (use Organizational Units for your resource domains).
Have one domain per geographical region, such as Asia-Pacific, Africa, Europe, and so on.
Have extra domains whenever putting data into one domain would deny you the control over replication that you would like if you used Organizational Units instead. It's all very well for us to say that Organizational Units are better, but that isn't true in all situations. If you work through the algorithm and come up with a single domain holding five Organizational Units, but you don't want any of the replication traffic from any of those Organizational Units to go around to certain parts of your network, you need to consider separate domains.
Even Microsoft didn't end up with one domain. They did manage to collapse a lot of Windows NT domains, though, and that's what you should be aiming for if you have multiple Windows NT domains.
There are two parts to this: how you construct a business case itself for such a wide-reaching change and how you can show that you're aiming to save money with this new plan.
Simply stated, your business case should answer two main questions:
Why should you not stay where you are now?
Why should you move to Active Directory?
If you can sensibly answer these two questions, you've probably solved half your business case; the other half is cost. Here we're talking about actual money. Will using Active Directory provide you with a tangible business cost reduction? Will it reduce your Total Cost of Ownership (TCO)? It sure will, but only if you design it correctly. Design it the wrong way, and you'll increase costs.
Imagine first that you have a company with two sites, Paris and Leicester, separated by a 64 Kb WAN link. Now imagine you have one domain run by Leicester. You do not have to place a DC in Paris if it is acceptable that when a user logs on, the WAN link uses bandwidth for items like these:
Roaming user profiles
Access to resources, such as server-based home directories
GPOs
Application deployment via Microsoft Installer (MSI) files
If authentication across the link from Paris would represent a reasonable amount of traffic, but you do not want profiles and resources coming across the slow link, you could combat that by putting a member server in Paris that could service those resources. You could even redirect application deployment mount points to the local member server in Paris (note that I'm saying member server and not DC here). However, if GPOs themselves won't go across the link, you need to consider a DC in Paris holding all the local resources. That gives you two sites, one domain, and two DCs.
Now let's expand this to imagine that you have a company with 50 WAN locations; they could be shops, banks, suppliers, or whatever. These are the Active Directory sites. Next, imagine that the same company has 10 major business units: Finance, Marketing, Sales, IS, and so on. You really have 3 choices when designing Active Directory for this environment:
Assuming everything else is equal, create a single domain with a DC in whichever sites require faster access than they would get across any link. Now make the business units Organizational Units under the single domain.
Everything is in one domain.
You need as many DCs as you have sites with links that you consider too slow. If you want to count a rough minimum, make it 1 DC per site with more DCs for larger sites; that is a rough minimum of 50 DCs. This is a low-cost solution.
With one forest and one domain, any user can log on quickly anywhere because authentication is always to a local DC.
Every part of the domain is replicated to every other part of the domain, so you have no granularity if you don't want objects from one business unit replicating to DCs everywhere.
Create multiple domains representing the 10 major business units. Place DCs for each business unit in whichever sites require faster access than they would get across any link.
This means more domains than the previous solution, but replication can now be better controlled on a per-business unit basis between sites.
Active Directory cannot host multiple domains on a single DC. This can make for an extremely high cost due to the large number of DCs that you may need. If you need to be able to log on to each of the 10 business unit domains from every site, you need 10 DCs per site, which makes 500 DCs. That's a much more costly solution.
With one forest and multiple domains, any user can log on quickly at any site that has a local DC for her domain; otherwise, she would have to span a WAN link to authenticate her logon and send down her data.
Create multiple domains representing geographical regions that encompass the 50 sites. Make these geographical regions the domains and have each domain hold Organizational Units representing business units that contain only the users from that region.
Even if you end up with 10 geographic regions, the DCs for each region are placed only in the sites belonging to that region. So if there were 5 sites per region (to make the math simple), each of the 5 needs only 1 DC. As the namespace model is a geographic model, you need to place a DC for Europe in the Asia-Pacific region only if the Asia-Pacific region ever has visiting users from Europe who need to authenticate faster than they would across the WAN link from Asia-Pacific to Europe. So the number of DCs that you need is going to be smaller.
Domain replication traffic now occurs only within a region and between regions that have DCs hosting the same domain.
You end up duplicating the business units in all the domains... or maybe not, if some regions don't need all business units; you get the idea.
With one forest and multiple domains, any user can log on quickly at any site that has a local DC for his domain; otherwise he would have to span a WAN link to authenticate his logon and send down his data.
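The DC-count arithmetic behind the three options can be re-derived in a few lines; the site, business-unit, and region counts below are just the example figures used above:

```java
public class DcCount {

    // Option 1: one domain, roughly one DC per site with a slow link.
    static int singleDomain(int sites) {
        return sites;
    }

    // Option 2: one domain per business unit, with logon needed from every site.
    static int domainPerBusinessUnit(int sites, int businessUnits) {
        return sites * businessUnits;
    }

    // Option 3: one domain per region, with DCs only in that region's own sites.
    static int domainPerRegion(int regions, int sitesPerRegion) {
        return regions * sitesPerRegion;
    }

    public static void main(String[] args) {
        System.out.println(singleDomain(50));              // 50 DCs
        System.out.println(domainPerBusinessUnit(50, 10)); // 500 DCs
        System.out.println(domainPerRegion(10, 5));        // 50 DCs, and replication stays regional
    }
}
```

The point of the comparison: options 1 and 3 need roughly the same number of DCs, but option 3 also keeps replication traffic inside each region.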
We hope this illustrates that while it is easy to map a simple and elegant design on paper, there can be limitations on the feasibility of the design based on replication issues, DC placement, and cost.
Arguably, there are a number of "best" ways to design depending on whom you talk to. We propose an iterative approach with Active Directory, and this is probably going to happen anyway due to the nature of the many competing factors that come into play. On your first pass through this chapter, you'll get a draft design in hand for the namespace. In Chapter 9, you'll get a draft site and replication design. Then you'll come up against the issue that your namespace design may need changing based on the new draft sites and replication design, specifically on the issues of domain replication and server placement that we have just covered. After you've revised the namespace design, you can sit down and look at the GPO design (using Chapter 7 and Chapter 10) in a broad sense, as this will have an impact on the Organizational Unit structure that you have previously drafted in your namespace design. And so it goes.
While this is the way to design, you will come up against parts of your organization that do not fit in with the design that you're making. The point is to realize that your job is to identify a very good solution for your organization and then decide how to adapt that solution to the real world that your company lives in. One domain may be ideal but may not be practicable in terms of cost or human resources. You have to go through stages of modifying the design to a compromise solution that you're happy with.
Hi Gary,
> I want the <form>'s POST method deliver the (keyword,value) pairs from
> the form
> to a python-based handler. My problem is this: I can't get the **kw
> syntax to work with mod_python. That is, if my form looks like this
[...]
> ... and my python function is defined as follows ...
>
> def makechanges(request, **kw):
>
> ... makechanges() receives kw={} regardless of the status of the
> checkboxes. If instead my function is defined as ...
Define your function like this:
def makechanges(request):
The request object contains your fields, in request.form.list.
You can process them in a loop:
for f in request.form.list:
#f.name is the name of the field
#f.value holds the value of the field
(See more in the mod_python docs, chapter 4.4)
Cheers,
Sandor
This article is based on Android in Practice, to be published in Summer:
Technique: HTTP the Java way
Introduction
The standard Java class library already has a solution for HTTP messaging. An open-source implementation of these classes is bundled with Android's class library, which is based on Apache Harmony. It's simple and bare-bones in its structure and, while it supports features like proxy servers, cookies (to some degree), and SSL, the one thing that it lacks more than anything else is a class interface and component structure that doesn't leave you bathed in tears. Still, more elaborate HTTP solutions are often wrappers around the standard Java interfaces and, if you don't need all the abstraction provided, for example, by Apache HttpClient interfaces, the stock Java classes may not only be sufficient, they also perform much better thanks to a much slimmer, more low-level implementation.
Problem
You must perform simple networking tasks via HTTP (such as downloading a file) and you want to avoid the performance penalty imposed by the higher-level, much larger, and more complex Apache HttpClient implementation.
Solution
If you ever find yourself in this situation, you probably want to do HTTP conversations through a java.net.HttpURLConnection. HttpURLConnection is a subtype of the more generic URLConnection, which represents a general purpose data connection to a network endpoint specified by a java.net.URL. A URLConnection is never instantiated directly; instead, you construct a URL instance from a string and, based on the URL’s scheme, a proper implementation of URLConnection is returned:
URL url = new URL("");
HttpURLConnection conn = (HttpURLConnection) url.openConnection();
conn.connect();
...
conn.disconnect();
This connection implementation lookup by URL scheme is performed by a protocol handler object. The Java class library and Android already provide protocol handlers for all common schemes such as HTTP(S), FTP, MAILTO, FILE, and so on, so typically you don’t have to worry about that. This,of course, also means that you are free to create your own protocol handlers that instantiate your own custom URLConnection. URLConnection is based on TCP sockets and the standard java.io stream classes. That means I/O is blocking, so remember to never run them on the main UI thread.
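As a quick illustration of that pluggability, here is a toy protocol handler. Everything about the "demo" scheme below is invented for the example; only the `URLStreamHandler`/`URLConnection` machinery is the real JDK API:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import java.net.URLStreamHandler;

// A toy handler for a made-up "demo" scheme that serves an in-memory document.
public class DemoHandler extends URLStreamHandler {

    @Override
    protected URLConnection openConnection(URL u) {
        return new URLConnection(u) {
            @Override
            public void connect() {
                // Nothing to establish; the "server" is just a byte array.
            }

            @Override
            public InputStream getInputStream() throws IOException {
                return new ByteArrayInputStream("hello from demo".getBytes("UTF-8"));
            }
        };
    }
}
```

Passing an instance to the URL constructor, as in new URL(null, "demo://x", new DemoHandler()), bypasses the scheme lookup entirely, so you can exercise URLConnection-based code without touching the network.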
Let’s see how it works in a practical example. We want to extend the MyMovies application to display a simple message dialog with the latest update news downloaded from a Web server so the user is always up to date about what has changed in the latest release. For this to work, we just have to place a text file containing the update notes somewhere on a Web server, download and read the file, and display its text in a message dialog. Figure 1 shows what that will look like.
For simplicity, we will show the dialog on every application start, a “detail” that would probably annoy the heck out of your users if this were a production release, but the example serves our purpose well enough. Implementation wise, the plan is to write an AsyncTask that establishes a connection to an HTTP server via HttpURLConnection and downloads the file containing the update notes text. We then send this text via a Handler object to our main activity so we can show an AlertDialog with that text. Let’s look at the MyMovies activity class first, which contains the callback for the handler to show the pop-up dialog (listing 1).
Listing 1 Showing an update notes pop-up dialog in MyMovies
public class MyMovies extends ListActivity implements Callback {

    private MovieAdapter adapter;

    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
        ...
        new UpdateNoticeTask(new Handler(this)).execute();          #A
    }

    ...

    public boolean handleMessage(Message msg) {
        String updateNotice = msg.getData().getString("text");      #B
        AlertDialog.Builder dialog = new AlertDialog.Builder(this);
        dialog.setTitle("What's new");
        dialog.setMessage(updateNotice);                            #C
        dialog.setIcon(android.R.drawable.ic_dialog_info);
        dialog.setPositiveButton(getString(android.R.string.ok),
                new OnClickListener() {
                    public void onClick(DialogInterface dialog, int which) {
                        dialog.dismiss();
                    }
                });
        dialog.show();
        return false;
    }
}

#A Starts a new task to download the file
#B Reads update text from the message
#C Sets update text on the dialog
We launch the UpdateNoticeTask in the last line of onCreate() since that’s where the download proceeds. Listing 2 has the source code.
Listing 2 Downloading a text file using HttpURLConnection and AsyncTask
public class UpdateNoticeTask extends AsyncTask<Void, Void, String> {

    private String updateUrl = "";
    private HttpURLConnection connection;
    private Handler handler;

    public UpdateNoticeTask(Handler handler) {
        this.handler = handler;
    }

    @Override
    protected String doInBackground(Void... params) {
        try {
            URL url = new URL(updateUrl);
            connection = (HttpURLConnection) url.openConnection();       #1
            connection.setRequestMethod("GET");                          #2
            connection.setRequestProperty("Accept", "text/plain");       #2
            connection.connect();                                        #3
            int statusCode = connection.getResponseCode();
            if (statusCode != HttpURLConnection.HTTP_OK) {               #4
                return "Error: Failed getting update notes";
            }
            String text = readTextFromServer();                          #5
            connection.disconnect();                                     #6
            return text;
        } catch (Exception e) {
            return "Error: " + e.getMessage();
        }
    }

    private String readTextFromServer() throws IOException {
        InputStreamReader isr = new InputStreamReader(connection.getInputStream());
        BufferedReader br = new BufferedReader(isr);
        StringBuilder sb = new StringBuilder();
        String line = br.readLine();
        while (line != null) {
            sb.append(line + "\n");
            line = br.readLine();
        }
        return sb.toString();
    }

    @Override
    protected void onPostExecute(String updateNotice) {                  #7
        Message message = new Message();
        Bundle data = new Bundle();
        data.putString("text", updateNotice);
        message.setData(data);
        handler.sendMessage(message);
    }
}

#1 Get an instance of HttpURLConnection
#2 Configure the HTTP request
#3 Establish the connection
#4 Handle non-200 server reply
#5 Read text from response body
#6 Close the connection
#7 Pass retrieved text to the activity
After reading the URL from the parameters, the first thing we have to do is use that URL object to retrieve a fitting URLConnection instance (#1), an HttpURLConnection in this case, since our URL has the http:// scheme. Note that the call to openConnection() does not yet establish a connection to the server, it merely instantiates a connection object. We then configure our HTTP request (#2): we first tell it that it should use the GET method to request the file (we could have also omitted this call, since GET is the default) and also set an HTTP Accept header to tell the server what kind of document type we expect it to return—plain text in this case. The request is now configured and can be sent to the server by a call to connect() (#3). Depending on the server reply, we either return an error message if we received a status message that was not 200/OK (#4) or proceed to read the text from the response body (#5). Don't forget to close the connection when you are done processing the response (#6). Finally, we send the text we received from the server to our main activity using the Handler (#7).
Discussion
The example we showed here was extremely simple—the simplest kind of request you can send, really. For these scenarios, HttpURLConnection does the job quite well. The biggest problem with it is its class architecture. HttpURLConnection shares a large part of its interface with the general purpose URLConnection (obviously, since it inherits from it), which means that some abstraction is required for method names. If you’ve never used HttpURLConnection before, you have probably pondered the call to setRequestProperty(), which is the way to set HTTP headers—not very intuitive. This is simply because implementations for other protocols may not even have the concept of header fields but would still share the same interface, so the methods in this class all have rather generic names.
While this is just a cosmetic thing, the actual problem with URLConnection is its entire lack of a proper separation of concerns. The request, the response, and the mechanisms to send and receive them are merged into a single class, often leaving you wondering which methods to use to process which part of this triplet. This is a bit like creating a five-course meal just to stick it in the blender: you can still serve it, but it’s just disgusting. It also makes each part difficult to customize and even more difficult to mock out when writing unit tests. It’s simply not a beaming example of good object-oriented class design.
There are also some more practical problems with this class. If you find yourself in a situation where you need to intercept requests to preprocess and modify them (a good example is message signing in secure communication environments, where the sender needs to compute a signature over a request’s properties and then modify the request to include the signature), then HttpURLConnection is not a good choice for sending HTTP requests. That’s because request payload is sent unbuffered, so there is no way to get your hands on it in a non-intrusive way. Last but not least, HttpURLConnection in Apache Harmony has bugs—serious bugs. One of the major ones is detailed in the sidebar.
Android, HttpURLConnection, and HTTP header fields
As you already know, the Java class library bundled with Android is based on Apache Harmony, the open-source Java implementation driven by the Apache foundation. At the time of this writing, there is a serious bug that affects HTTP messaging using HttpURLConnection: it sends HTTP header field names in lowercase. This is nonconformant to the HTTP specification and, in fact, breaks many HTTP servers since they will simply drop these header fields. This can have a wide array of effects, with documents served to you in the format that doesn't match your request (for example, the Accept header field was ignored) and failed requests to protect resources due to the server's failure to recognize the Authorization header field. A workaround is simply to not use HttpURLConnection until these problems have been fixed and use Apache HttpClient instead. If you want to follow the progress of fixing this bug, you can find the official issue report at this Web address:
Summary
Overall, HttpURLConnection is a simply structured but rather low-level way of doing HTTP messaging, and its clumsy interface, lack of object-orientation, and proper abstractions make it difficult to use. For simple tasks like the file download shown here it’s absolutely fine and comes with the least overhead (it doesn’t take a sledgehammer to crack a nut) but, if you want to do more complex things like request interception, connection pooling, or multipart file uploads, then don’t bother with it—there’s a much better way to do this in the Java world and, thanks to the engineers at Google, it’s bundled with Android!
On 2013/03/11 20:34, Aristeu Rozanski wrote: > On Mon, Mar 11, 2013 at 04:27:58PM +0800, Gao feng wrote: >> On 2013/03/06 23:10, Daniel P. Berrange wrote: >>> From: "Daniel P. Berrange" <berrange redhat com> >>> >>> To allow the efficient correlation of container audit messages >>> with host hosts, include the pid namespace inode in audit >>> messages. >>> >>> The resulting audit message will be >>> >>> type=VIRT_CONTROL msg=audit(1362582468.378:50): pid=19284 uid=0 auid=0 ses=312 subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023>> >>> Note the 'pid-ns' field showing the inode number of the PID >>> namespace of the container init process. Since /proc/PID/ns/pid >>> doesn't exist on older kernels, we keep the previous 'init-pid' >>> field too, showing the host PID of the init process. >>> >> >> The inode numbers of /proc/PID/ns/pid are different even two task >> in the same pid namespace... >> ignore this incorrect information,I used the lstat... >> We can't use this inode number to identify pid namespace. >> Or I misunderstand what's the purpose of this patch? > > not in kernels 3.8 and newer > > Thanks for your explanation. | https://www.redhat.com/archives/libvir-list/2013-March/msg00525.html | CC-MAIN-2014-23 | refinedweb | 191 | 67.96 |
On December 16, 2009, Spring framework version 3.0 was finally released.
For those with Spring projects already in progress, as well as those starting new projects, a number of new features are available, and some changes must be made to take advantage of them. It is strongly recommended that the full release with documents is downloaded as the changes are sweeping and sometimes drastic. That said, you shouldn’t need to do much differently than before to get the same stuff out of your Spring applications. At the very least, a slew of compiler warnings, if not errors, will be introduced if you just take a “replace the JAR files” approach to upgrading as many of the Controllers and other elements have been deprecated or otherwise reworked.
Downloading the full release with documentation brings forth a new 800-page reference document, which we should all read, but we know not all of us will. This article will endeavor to point out some pitfalls and help make a quick conversion of an existing web project to the new paradigm as well as try to show how to quickly make a new web project from scratch using the new Spring 3.0 framework.
Swing by the Spring download page and grab the release archive, if you haven't already. Expand it so the files will be at the ready. If you grab the with-docs version (which is recommended at least once), peruse the reference documents for areas you expect to have trouble.
The assumption made is that you’re at least familiar with Spring enough to recognize some of the generalities and translate them to your own project; a word of warning–this isn’t intended to be a “learn Spring” tutorial.
This bit assumes you’ve got a Spring 2.x project and you’re looking to upgrade it to the new framework. Presumably if you’ve been developing with the Spring 3.0 beta and release candidates these problems have already been addressed; perhaps not, though, or perhaps there’s some clarity you seek. Hopefully this section helps in either case.
In your project, we’ll assume a standard structure of a source directory for the Java and such, and a web content directory for the web-served bits. In the web content directory lives our magic WEB-INF folder, and in the WEB-INF/lib folder is where the Spring JAR files are located. If your configuration is different, you’ll have to translate, and should probably consider this more concise approach instead.
In a Spring 2.x project, the spring-beans.jar, spring-core.jar, spring-web.jar, and whatever other Spring JAR files you’ve been using will need to be removed to avoid conflicting with the new framework files. Remove also any .tld or .xsd or other files that may have been copied from the old framework folders into the project. A flurry of compiler errors should be evident as we’ve removed core classes required by the project. Don’t worry about correcting any of these just yet as we’ve got to put the replacement JARs in place, which will remove most of those errors, and give us just a few warnings in their place.
As we look in the new Spring 3.0 archive's dist folder, we see immediately that the file naming format has been changed. For the most part, once you get comfortable with the new format it starts to make sense. In most cases the classes have been packaged inside the JAR that starts with the same package name. For example, the starting point for nearly every Spring web application is the DispatcherServlet. The DispatcherServlet is in the org.springframework.web.servlet package. There should be an org.springframework.web.servlet-3.0.0.RELEASE.jar file in the dist folder. In the Spring 2.x framework, the DispatcherServlet is in the spring-webmvc.jar file. Other classes we may use are likewise tucked into their package-named JAR files. For example, if we use the org.springframework.beans.factory.config.PropertiesFactoryBean, we can find that it is hiding in the org.springframework.beans-3.0.0.RELEASE.jar file.
With this in mind, for each of the missing class errors that our compiler is giving us, copy the appropriate package-named JAR file to our WEB-INF/lib folder. As we recompile, we’ll lose all or most of the errors. I leave the caveat that your project may be doing something different than mine, and that perhaps there’s a dependency that you have that also requires an upgraded JAR…so upgrade those as well. The project I’m basing this on uses Hibernate and some Apache Commons bits, and none of them required any changes to support the upgrade of Spring.
When done, there shouldn’t be any need to change any source files to satisfy any of the errors created by changing the JAR files. In some of the Spring-related XML files, like the applicationContext.xml (or whatever you named yours), there may be a bit that says something like “spring-beans-2.5.xsd” that needs to be changed to “spring-beans-3.0.xsd” or “spring-context” or whatever other bits you’ve used in your application. Change those to match the new version, and that should be it for the change.
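For instance, a schema-based context file only needs its schemaLocation version bumped; the bean shown here is a placeholder:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.0.xsd">
    <!-- was: http://www.springframework.org/schema/beans/spring-beans-2.5.xsd -->

    <bean id="exampleBean" class="tld.domain.project.ExampleBean" />

</beans>
```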
It should be the case that the application will again compile and execute as before, without any other changes. Of course, there may be some configuration file quirks, again depending on the complexity of your project, or how hard you banged on Spring to get it to work.
If you’re not already using annotations, though, a diligent eye will notice that we’re now probably left with some compiler warnings. Most notably, any class that extends a Spring Controller will complain that the Controller it’s extending has been deprecated. Spring 3.0 pushes us hard into using annotations.
I was already using annotations for most of my controllers, but I did have one “catch-all” controller for URLs that I hadn’t mapped. The two bits in my Spring configuration that I changed included removing the bean for the AnnotationMethodHandlerAdapter (which in retrospect I may not have needed anyway), and removing the SimpleControllerHandlerAdapter and SimpleUrlHandlerMapping beans I’d configured for my catch-all, replacing that bean instead with @Controller and @RequestMapping(“/*”) annotations which did the same default URL handling.
This reduced the Controller/URL handling parts of my applicationContext.xml to just this bit.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
    xmlns:p="http://www.springframework.org/schema/p"
    xmlns:context="http://www.springframework.org/schema/context"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://www.springframework.org/schema/beans
        http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
        http://www.springframework.org/schema/context
        http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <bean class="org.springframework.web.servlet.view.InternalResourceViewResolver"
        p:

    <context:component-scan base-package="tld.domain.project.controllers" />

</beans>
Two lines, with wrapping, that control all of our URL and JSP handling. The first tells the ViewResolver where the related JSP files are, and the second tells Spring in which package I've tucked my annotated Controllers. I also have some Hibernate configuration, Locale resolvers, and ResourceBundle bits, but they didn't change a bit from 2.5 to 3.0, and this is enough to satisfy the annotated Controller discussion. Your paths may vary, and that's a horrible package name.
If you’re not using annotations, here’s where it gets a little tricky: you’re probably using a specific Controller for specific tasks. Perhaps you’re a fan of the SimpleFormController or you roll your own with the AbstractController. The difficulty in making the change from those to the annotated Controller is going to depend on the complexity of the Controllers used and the functions therein. Let’s take a simple SimpleFormController as an example and convert it to an annotated Controller instead, to give a quick example.
First, one of our application context XML files would contain a bit not unlike this defining our bean and telling Spring to handle all otherwise unmapped URLs with our controller.
<bean id="exampleSimpleFormController" class="tld.domain.project.ExampleSimpleFormController">
    <property name="commandClass" value="tld.domain.project.CommandClass" />
    <property name="commandName" value="Command" />
    <property name="formView" value="index" />
    <property name="successView" value="index" />
</bean>

<bean class="org.springframework.web.servlet.mvc.SimpleControllerHandlerAdapter" />

<bean class="org.springframework.web.servlet.handler.SimpleUrlHandlerMapping">
    <property name="order" value="99" />
    <property name="mappings">
        <props>
            <prop key="/*">exampleSimpleFormController</prop>
        </props>
    </property>
</bean>
Our Controller class would look something like this, presumably doing a little bit more work between the call and return, but this actually meets our criteria for the example.
public class ExampleSimpleFormController extends SimpleFormController {

    @Override
    protected ModelAndView handleRequestInternal(final HttpServletRequest httpServletRequest,
            final HttpServletResponse httpServletResponse) throws Exception {
        // Do your work here
        return super.handleRequestInternal(httpServletRequest, httpServletResponse);
    }
}
We can convert this to an annotated Controller without too many changes and losing no functionality. Simply remove those bits in the application context XML file related to this bean, make sure it’s in a package covered by the annotated class scan (the context:component-scan bit above). Make the following changes, and you’ve converted the controller to an annotated one.
@Controller
public class ExampleSimpleFormController {

    @RequestMapping("/*")
    public String handleRequestInternal(final HttpServletRequest httpServletRequest,
            final HttpServletResponse httpServletResponse) throws Exception {
        // Do your work here
        return "index";
    }
}
Again, of course, the complexity of your Controller will make that more difficult. This class, however, will function exactly the same in both versions. The name of the class can be changed, as can the name of the method handling our default request, with consideration to use elsewhere.
One obnoxious thing about this example is that it’s tediously simple. All it’s really going to do is handle any URL request not matched by another Controller and return the rendered WEB-INF/index.jsp page. The original Spring configuration implied, as a SimpleFormController would, that some action would be taking place, as we had defined a command object and had to define both success and failure view names. Let’s make a tougher example and convert it from a SimpleFormController to an annotated Controller with some simple form input.
Let’s begin with a JSP fragment that contains this simple login form:
<div id="form">
    <form:form action="login.ext" commandName="Login" method="post">
        <div><label for="user">Name</label> <form:input path="user" /></div>
        <div><label for="password">Password</label> <form:input path="password" /></div>
        <div><input id="submit" type="submit" value="Log-in" /></div>
    </form:form>
</div>
Note that the .ext used in the action should match whatever URL mapping done in the web.xml tying the URL to the DispatcherServlet. It could be that /* is handled by the DispatcherServlet, but that’s not recommended. We’d back this with an Object that presumably had a pair of member Strings, for the name and password, clumsily shortened for brevity like this:
public class Login {
    public String user = null;
    public String password = null;
}
We’d reference this class as our Controller’s command object, giving it the same name as the one in our form, all tied into our bean definition in an appropriate XML file. The Controller class then probably would have getters and setters to set the object. We can eliminate most of that busy work with our new annotated Controller that does a little trivial validation.
@Controller
@RequestMapping("/login")
public class LoginController {

    @RequestMapping(method = RequestMethod.GET)
    public String handleGET() {
        return "login";
    }

    @RequestMapping(method = RequestMethod.POST)
    public String handlePOST(@ModelAttribute("Login") final Login login) {
        if ((login != null) && (login.user != null) && (login.password != null)) {
            return "index";
        }
        return "login";
    }
}
Note that while the .ext was specified in the form, we can leave it out (or put it in) in the RequestMapping, which will by default then map all login.* and login/ requests to this Controller unless a better match to the URL is found. When the Controller gets a GET request, it simply renders the login.jsp page. When it gets a POST request, it verifies that the user and password fields have values (I did say trivial), and then renders the index.jsp page, otherwise it re-renders the login.jsp page.
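For completeness, the .ext convention assumes a DispatcherServlet mapping along these lines in web.xml; the servlet name is arbitrary:

```xml
<servlet>
    <servlet-name>dispatcher</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <load-on-startup>1</load-on-startup>
</servlet>

<servlet-mapping>
    <servlet-name>dispatcher</servlet-name>
    <url-pattern>*.ext</url-pattern>
</servlet-mapping>
```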
A more comprehensive example will follow as we set up a new simple web application in the next section, but that’s the crux of a quick transformation from Spring 2.x to Spring 3.0 that will work with most projects. A little work to transform deprecated Controllers to annotated Controllers, but that can be postponed for a bit and done as time allows.
Sadly, in my opinion, the Spring documentation spreads out all of the bits and pieces necessary to make a project from nothing. Additionally, what strong suggestions there are tend to start with “use the sample project and remove what you don’t need,” which is both tedious and intimidating if you’re not sure what is safe to remove. I prefer an approach of starting with nothing and add the minimum necessary to get things going.
Of course, a bit of up front design is always nice; it’s hard to make a project without any kind of intention behind it. For the purposes of this example, we’ll make a trivial in-memory Twitter clone out of just a couple of annotated Spring Controllers and the JSPs that render the pages. No security, graphics, or style sheets to clog the works, just simple annotations and tags.
For the view, we’ll have a simple one-page interface that will give us a simple form to add a post, a simple search form, and a list of the posts thus far, ordered in descending date (newest on top).
For our controllers, we’ll need something to handle the search, something to handle the post, and something to give us the list of previous posts. We’ll use a simple multi-action controller with a different target for each of our form actions.
For our model, we’ll simply have a post object, and we’ll maintain a small collection of them (so we don’t overwhelm our example system or run out of memory).
We’ll assume and discuss as if the whole world uses Java6, Tomcat v6, and Eclipse, and that all of the paths are correctly configured, and that Eclipse has a Tomcat server configured. In Eclipse, make a new Dynamic Web Project; give it a name (I’ll call it “microblog”), associate it with the Tomcat server, let it create source and web content folders, and generate a default web.xml file.
Since we know we’re going to use Spring for our framework, let’s put that in our application. Start with editing the WEB-INF/web.xml file in the web content folder. By default the XML contains a description that has the display-name of the application and a generous list of welcome-files. Reduce the welcome-file-list to just one entry, like index.htm, and create an empty file of the same name in the root of the web content folder; a quirk in Tomcat will throw a 404 error if the file doesn’t exist, even though it won’t be used once we replace it with a Spring Controller in just a bit.
A quick fix for another little Tomcat quirk: add a context-param for the webAppRootKey to avoid collisions with other applications. Give it a unique name so that Tomcat won’t complain every time the app starts, and so no application will fail because it’s suddenly looking in the wrong folder. The param-name needs to be webAppRootKey, but the param-value should be unique within your Tomcat server. Put this between the display-name and welcome-file-list.
<context-param>
  <param-name>webAppRootKey</param-name>
  <param-value>microblog.appRoot</param-value>
</context-param>
Then, since we’re going to be making a really simple Spring application, we’ll start with the textbook minimum configuration. We need to declare our Servlet to handle our requests, and map URLs to the servlet. Put this between the context-param and the welcome-file-list.

<servlet>
  <servlet-name>microblog</servlet-name>
  <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
  <init-param>
    <param-name>contextConfigLocation</param-name>
    <param-value>/WEB-INF/microblog-servlet.xml</param-value>
  </init-param>
</servlet>
<servlet-mapping>
  <servlet-name>microblog</servlet-name>
  <url-pattern>*.htm</url-pattern>
</servlet-mapping>
The servlet definition tells Tomcat that there’s a servlet named “microblog” of the type DispatcherServlet, with the provided parameter. The parameter will be passed to the DispatcherServlet and tell it where to find the context information; we’re going to tuck it away in the WEB-INF folder where web browsers can’t get to it. The servlet-mapping definition tells Tomcat that all URLs reaching this application that end in .htm will be handled by our microblog Servlet.
This gives us a simple WEB-INF/web.xml file that looks like this one.
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
         version="2.5">
  <display-name>microblog</display-name>
  <context-param>
    <param-name>webAppRootKey</param-name>
    <param-value>microblog.appRoot</param-value>
  </context-param>
  <servlet>
    <servlet-name>microblog</servlet-name>
    <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class>
    <init-param>
      <param-name>contextConfigLocation</param-name>
      <param-value>/WEB-INF/microblog-servlet.xml</param-value>
    </init-param>
  </servlet>
  <servlet-mapping>
    <servlet-name>microblog</servlet-name>
    <url-pattern>*.htm</url-pattern>
  </servlet-mapping>
  <welcome-file-list>
    <welcome-file>index.htm</welcome-file>
  </welcome-file-list>
</web-app>
Since we’ve declared that we’re going to use the DispatcherServlet, we’ll need to copy the appropriate Spring JAR file to our WEB-INF/lib folder. DispatcherServlet is in the org.springframework.web.servlet package, which most closely matches the org.springframework.web.servlet-3.0.0.RELEASE.jar, so copy that one. You can verify by searching for the type and Eclipse should be able to find it.
Although we’re going to be annotating our Controllers and such, some may point out how we’re sticking with the old XML way of configuring Spring. This is due in part to the comfort we have with XML configurations, the trivial configuration we’re going to have, and finally because the Spring documentation tells us that annotation-based configuration is not a 100% replacement. Rather than introduce anything confusing at this point, we’ll stick with the trivial XML we need to make our application.
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:context="http://www.springframework.org/schema/context"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd">
  <bean class="org.springframework.web.servlet.view.InternalResourceViewResolver">
    <property name="prefix" value="/WEB-INF/jsp/" />
    <property name="suffix" value=".jsp" />
  </bean>
  <context:component-scan base-package="com.objectpartners.microblog" />
</beans>
This was discussed before, but so you don’t have to scroll back: this essentially two-element XML file (one bean, one context element) tells our DispatcherServlet what we’re going to use for our view resolver (to render our JSPs), and to scan our package for annotations. Since we’ve stated that our JSPs are going to be in WEB-INF/jsp, we should take this opportunity to create that folder. Additionally, we’ve declared our package to scan, so we should create that path in our source folder, too; in Eclipse we just create an empty package. Our two additional classes are in the org.springframework.web.servlet package with our DispatcherServlet, so we don’t need to copy an additional JAR file just yet.
Where to go next is a choice of style and discipline; controller, model, or view, or all at once. I think we need somewhere for the first URL request to go, so let’s build a simple catch-all controller to deliver our default page, and see where that leads us.
Our project is going to be terribly small, so despite any other design patterns, we’re going to put everything right in our previously created package. Let’s create a class named CatchAll to catch everything that isn’t otherwise handled.
package com.objectpartners.microblog;

import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class CatchAll {

    @RequestMapping("/*")
    public String catchAll() {
        return "index";
    }
}
Curiously, org.springframework.stereotype.Controller isn’t found in a JAR named after the package, and the name doesn’t give any hint that this is an annotation class, either. We can confirm this is the right fully-qualified class name in the Spring API documentation, but finding it in the JAR files is a bit tedious. It’s hiding in org.springframework.context-3.0.0.RELEASE.jar, which is one of the few deviations from the new naming scheme we’ll run into. The RequestMapping annotation is, as expected, in the org.springframework.web-3.0.0.RELEASE.jar file. Add both of those files to our WEB-INF/lib and any compile errors so far will be resolved.
Our handler is woefully simple, and will tell Spring to render the index.jsp for every URL received. We told Spring before that these files would exist in the WEB-INF/jsp folder, so let’s make one right now; we’ll revisit and make it useful in a moment. Simply add a “hello, world” JSP so we can test that everything works so far; using Eclipse, just add a new JSP page to the folder, name it index.jsp, and add some text between the body tag.
So far, all looks well, and while not meeting our project goals, we should be at an acceptable starting point. That is, the app looks like it has all of the required configuration elements. Add the project to the configured Tomcat server in Eclipse and start it up. Tomcat should start with no problem, but if we try to visit the application, we’ll end up with an exception because we’re missing some classes we didn’t declare, but that Spring needs to function. There are going to be a lot of missing classes, so be prepared. In our Tomcat console, and probably on the web page, we should see a stack trace that warns us that there’s a missing class.
java.lang.NoClassDefFoundError: org/springframework/beans/BeansException
We need to add org.springframework.beans.BeansException, which is in the org.springframework.beans-3.0.0.RELEASE.jar file, so copy that to the WEB-INF/lib folder and restart Tomcat. A second try, another exception for another missing class.
java.lang.NoClassDefFoundError: org/springframework/core/NestedRuntimeException
Copy the org.springframework.core-3.0.0.RELEASE.jar file to the WEB-INF/lib folder, restart Tomcat, and try again. We could have almost guessed we’d need beans and core, but we’re trying to add as few files as possible, remember. Trying again, and, yes, another failure. This time from a dependency Spring has for logging.
java.lang.NoClassDefFoundError: org/apache/commons/logging/LogFactory
Quickly scanning the documentation, we can see that Spring leaves it to us to decide which logging implementation we want to use. Some application servers provide such logging; some environments need a little extra configuration. We could rebuild Spring without the dependency, or take the easy solution: get the latest JAR from Apache Commons Logging and add it to our WEB-INF/lib folder. Restart, retry, re-fail.
java.lang.NoClassDefFoundError: org/springframework/asm/ClassVisitor
Copy org.springframework.asm-3.0.0.RELEASE.jar to WEB-INF/lib, restart, reload.
java.lang.NoClassDefFoundError: org/springframework/expression/PropertyAccessor
This one is hiding in org.springframework.expression-3.0.0.RELEASE.jar so copy that, restart, reload.
The next one gets a little tricky.
java.lang.NoClassDefFoundError: javax/servlet/jsp/jstl/core/Config
While this should be a part of Tomcat, it isn’t. I suppose it’s possible that the tag library might not be used by enough Servlet or JSP applications to make it a default. If you’re not using Tomcat, you might already have this in your J2EE engine’s classpath. Since we are running Tomcat in our example, grab the Taglibs from the Apache website. Grab the version 1.1 file (1.2 isn’t quite done yet) and copy the JAR files from the downloaded lib folder to your WEB-INF/lib folder. Restart Tomcat.
Finally, we should be rewarded with the text of our index.jsp page. Our annotated Spring web application works for now. Back to making it useful.
Since we were last editing the index.jsp, let’s add the tags and form elements to display a list of our micro-blogging posts and some simple forms for submitting a post and searching.
<%@ page language="java" contentType="text/html; charset=UTF-8" pageEncoding="UTF-8" %>
<%@ taglib prefix="form" uri="http://www.springframework.org/tags/form" %>
<%@ taglib prefix="c" uri="http://java.sun.com/jsp/jstl/core" %>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title>Microblog</title>
</head>
<body>
<div><form:form action="post.htm" commandName="Post">
  <div><form:textarea path="post" /></div>
  <div><input id="submit" type="submit" value="Post!" /></div>
</form:form></div>
<div><form:form action="search.htm" commandName="Search">
  <div><form:input path="post" /></div>
  <div><input id="submit" type="submit" value="Search!" /></div>
</form:form></div>
<div><c:forEach items="${Posts}" var="post">
  <div>
    <div>${post.post}</div>
    <div>${post.time}</div>
  </div>
</c:forEach></div>
</body>
</html>
Near the top we added a couple taglib lines telling the JSP renderer to use the Spring form and JSTL core tags. The first form is a big text area with a button to submit; it’s aiming for the post.htm action, and will use a page bean named “Post” for storing its data. The second form is an input and a submit button; it’s aiming for the search.htm action and will use a page bean named “Search” for storing its data. Finally, there’s a forEach loop that will simply spew whatever comes out of the array or collection page bean named “Posts.”
Let’s throw together a quick bean to satisfy the forms. We’ll cheat and use the same little guy, for brevity. A simple bean with two Strings.
package com.objectpartners.microblog;

public class Post {

    private String post = null;
    private String time = null;

    public String getPost() {
        return post;
    }

    public String getTime() {
        return time;
    }

    public void setPost(String post) {
        this.post = post;
    }

    public void setTime(String time) {
        this.time = time;
    }
}
We’ve already got our CatchAll Controller, so let’s just tweak that to handle these new requests.
package com.objectpartners.microblog;

import java.util.Calendar;
import java.util.LinkedList;
import java.util.List;

import javax.servlet.http.HttpServletRequest;

import org.springframework.stereotype.Controller;
import org.springframework.ui.ModelMap;
import org.springframework.web.bind.annotation.ModelAttribute;
import org.springframework.web.bind.annotation.RequestMapping;

@Controller
public class CatchAll {

    @RequestMapping("/*")
    public String catchAll(final HttpServletRequest httpServletRequest,
            final ModelMap modelMap) {
        modelMap.addAttribute("Post", new Post());
        modelMap.addAttribute("Search", new Post());
        final Object object = httpServletRequest.getSession()
                .getServletContext().getAttribute("Posts");
        if (object instanceof List<?>) {
            modelMap.addAttribute("Posts", object);
        }
        return "index";
    }

    @RequestMapping("/post")
    @SuppressWarnings("unchecked")
    public String post(final HttpServletRequest httpServletRequest,
            @ModelAttribute("Post") final Post post, final ModelMap modelMap) {
        LinkedList<Post> posts = null;
        final Object object = httpServletRequest.getSession()
                .getServletContext().getAttribute("Posts");
        if (object instanceof LinkedList<?>) {
            posts = (LinkedList<Post>) object;
        } else {
            posts = new LinkedList<Post>();
            httpServletRequest.getSession().getServletContext().setAttribute(
                    "Posts", posts);
        }
        if (post.getPost() != null && !post.getPost().trim().isEmpty()) {
            post.setTime(Calendar.getInstance().getTime().toString());
            posts.offerFirst(post);
        }
        while (posts.size() > 100) {
            posts.pollLast();
        }
        modelMap.addAttribute("Posts", posts);
        modelMap.addAttribute("Post", new Post());
        modelMap.addAttribute("Search", new Post());
        return "index";
    }

    @RequestMapping("/search")
    @SuppressWarnings("unchecked")
    public String search(final HttpServletRequest httpServletRequest,
            @ModelAttribute("Search") final Post search, final ModelMap modelMap) {
        final Object object = httpServletRequest.getSession()
                .getServletContext().getAttribute("Posts");
        if (object instanceof List<?>) {
            // Guard against a null query to avoid a NullPointerException.
            if (search.getPost() == null || search.getPost().trim().isEmpty()) {
                modelMap.addAttribute("Posts", (List<Post>) object);
            } else {
                final List<Post> posts = new LinkedList<Post>();
                for (final Post post : (List<Post>) object) {
                    if (post.getPost().toLowerCase().contains(
                            search.getPost().toLowerCase())) {
                        posts.add(post);
                    }
                }
                modelMap.addAttribute("Posts", posts);
            }
        }
        modelMap.addAttribute("Post", new Post());
        modelMap.addAttribute("Search", new Post());
        return "index";
    }
}
We’ve upgraded our default request method, catchAll(), adding a couple of parameters that Spring will autowire for us. We need the HttpServletRequest to gain access to the collection we’re maintaining in memory, or more specifically in the Servlet context’s memory (we could use a static variable instead). We need the ModelMap to store our page beans for the form. We could annotate these a little better, getting directly to the individual beans, but that adds extra class material only to end up building a bunch of empty objects. The catchAll() method simply grabs the collection from the ServletContext, if it exists, and sends back that collection and some empty beans to satisfy the JSP.
We added a post() method, and mapped that to the /post request to handle when the user taps the “Post!” button from the form. Note the ModelAttribute annotation telling Spring to associate the form data for a form named “Post” with the parameter. This method will create the collection if it doesn’t already exist. It validates that the post contains text (and nothing more) and adds it to the head of the list. It also truncates the list to be sure our simple list won’t get too long. Like catchAll(), it puts the necessary elements in the ModelMap to render the form.
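The list bookkeeping in post() is easy to miss in the middle of the Spring plumbing. Isolated from the framework, the offerFirst()/pollLast() pattern looks like this (class and constant names are mine; the cap of 100 matches the controller above):

```java
import java.util.LinkedList;

// Sketch of the bounded, newest-first feed maintained by post().
public class BoundedFeed {
    static final int MAX = 100;

    static void add(LinkedList<String> posts, String text) {
        if (text != null && !text.trim().isEmpty()) {
            posts.offerFirst(text);   // newest entries go to the head
        }
        while (posts.size() > MAX) {
            posts.pollLast();         // trim the oldest from the tail
        }
    }
}
```

Because LinkedList implements Deque in Java 6, offerFirst() and pollLast() give exactly the newest-on-top ordering the JSP’s forEach loop relies on.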
Finally, the search() method will handle the /search request when the user taps the “Search!” button. Like the post() method it has an annotation allowing Spring to autowire the form data. This method searches for all posts containing the string as input. If the string is empty, all post elements are returned. The ModelMap is populated and the form is rendered.
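Stripped of the ServletContext and ModelMap plumbing, the matching rule in search() reduces to a few lines; this sketch uses plain strings instead of Post beans (names are illustrative):

```java
import java.util.LinkedList;
import java.util.List;

// Mirrors search(): a blank query returns everything,
// otherwise keep case-insensitive substring matches.
public class SearchFilter {
    static List<String> filter(List<String> posts, String query) {
        if (query == null || query.trim().isEmpty()) {
            return posts;
        }
        final List<String> hits = new LinkedList<String>();
        for (final String post : posts) {
            if (post.toLowerCase().contains(query.toLowerCase())) {
                hits.add(post);
            }
        }
        return hits;
    }
}
```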
In our trivial application, we have (after much trial-and-error correction) the minimum Spring and other JAR files necessary to support our application. We’ve got one annotated Controller class, handling all of the actions our one JSP page can dish out. It isn’t pretty, but it works. | https://objectpartners.com/2010/03/01/updating-or-starting-spring-3-0-project/ | CC-MAIN-2019-04 | refinedweb | 4,801 | 55.54 |
20 November 2013 18:46 [Source: ICIS news]
HOUSTON (ICIS)--US base oil prices are falling on seasonally soft demand and sidelined buyer interest, buyers and sellers said on Wednesday.
Buyers see current base oil supply in most grades as readily available, while demand is at a lull.
“Buyers will be de-stocking their own inventory in November and December, but they will come back into the market in January,” one base oil supplier said this week.
Supply availability is fostering more domestic spot business, sources said.
“There is no question that November demand is soft,” a base oil producer said.
But US base oil demand generally slows down during the fourth quarter as refiners work to lower inventories because of year-end tax considerations.
Market sources said that refinery stocks are now largely relieved, but the year-end de-stocking mode is ongoing in the buyers’ sector.
“There are always more [base oils] available at this time of the year,” another seller commented.
“This is just normal and about the same as we saw at this time last year,” the source added.
Group I spot solvent neutral (SN) base stocks were assessed by ICIS at $3.08-3.15/gal (€0.60/litre), losing 6 cents off the low end of the spread.
Group I SN 600/650 grades fell sharply this week, dropping by 22 cents across the spread to range at $3.37-3.47/gal.
Brightstock lost 10 cents off the spread to range at $3.95-4.15/gal.
All spreads are on an FOB (free on board) US Gulf basis.
Here’s a disclaimer: I avoid Linux and am no Linux expert by any means. I shudder at the thought of black screens with a flashing cursor. I find myself moving my mouse around trying to find an icon to click or a menu to select.
It’s from such a perspective that this article will demonstrate how anyone (even I!) can get Mono up and running on a fresh, clean Linux box. I’ll walk through my experience of installing the package, and discuss all the components needed to run Mono.
What Is Mono?
What’s Usable in Mono?
Currently, Mono is in development, but the project has reached some significant milestones that make it suitable and stable enough for deployment today. The C# compiler is the only fully featured compiler for Mono. Yes, that’s right: VB.NET and J# developers will need to switch to C# to fully use Mono. Watch the progress of the VB.NET compiler here.
The compiler itself is written in C#. You can download all 1.7 million lines of C# code and compile this yourself, or, as we’ll see shortly, you can use one of the many distributions to ease the installation process.
ASP.NET is fully-featured and supported within Mono. In fact, it’s the great strength of Mono at present. You can build and deploy both Web forms and Web services using either the built-in Web server that ships with Mono (XSP) or through an Apache modification, Mod Mono. For those who are uncomfortable using Windows and IIS to host applications, Mono provides a viable alternative.
Windows Forms is currently under development, but functionality is progressing. Though complex WinForm applications, such as Web Matrix, are currently unavailable, there are alternatives to WinForms in Mono that build GUI applications. Gtk# and #WT are wrappers to the popular GUI tools on Linux. WinForms itself is being built as a library extension to Wine, the Win32 implementation on Linux. The project’s progress is documented here.
ADO.NET and the System.Data classes are also fairly mature; however, they aren’t at production level at the time of writing. This is one of the largest projects within Mono, and at present you can connect to a wide variety of databases. Native support is provided for many of the databases usually associated with Linux, such as PostgreSQL.
Mono has successfully been used in commercial and heavy-use applications. Novell used the tool to build two of its products, iFolder and ZenWorks. Also, SourceGear has used Web services deployed in Mono within its Vault application.
Getting Started: Download Mono
Mono is available in many packages. You can download the latest source code build and a daily snapshot source code build, through CVS, an RPM package, or a Red Carpet install for those with Xiamian Desktop. By far the easiest to use is the Red Carpet system, which, while similar to RPM, offers good versioning control, allowing you to upgrade Mono on your machine very easily.
The Mono download page details the various packages and how they can be downloaded, as well as specific packages for specific varieties of Linux.
I downloaded the Mono Runtime tarball and used the following command to unpack the distribution:
tar -zxvf mono-1.0.tar.gz
Once the tarball was extracted, I could start the installation process using:
./configure
make
make install
It was at this point I realised I needed to upgrade pkg-config on the system as the installation spat out some errors. I found the RPM distribution for this, and installed it using the following command:
rpm -Uvh pkgconfig-0.15.0-4mdk.i586.rpm
The make process now worked without a hitch.
Hello World in C# Running on Linux
It's always a good thing to use the cliched "Hello World" example to test an installation!
Open your favourite Linux text editor (I used vi) and enter the following simple C# application code:
public class HelloWorld
{
static void Main()
{
System.Console.WriteLine("Hello World!");
}
}
Save this file as HelloWorld.cs and compile the class with the Mono C# compiler:
mcs HelloWorld.cs
In the directory to which you saved HelloWorld.cs, you should now see a HelloWorld.exe file. This is standard MSIL code that can be executed on any computer on which the .NET framework is installed, including Windows.
There are two ways to run Mono applications. One method is to use "mint," which is an interpreter for the MSIL bytecode. Mint does not compile and run your applications into native machine code, hence it is not a JIT runtime. "Mono" however is a JIT runtime, which compiles bytecode when first requested into machine code native for the platform and processor for which it was designed.
This means that, for performance, the Mono application is much faster than mint, though it's not as portable as it is tied to the particular operating system. Mint, on the other hand, is far more portable and, as it's written in ANSI C, may be used on a multitude of deployment platforms.
To run our Hello World application, we can use the following command, which invokes Mono:
mono HelloWorld.exe
We use the following command to invoke mint:
mint HelloWorld.exe
Dishing Up ASP.NET
Mono comes with its own Web server, ready to dish out your ASP.NET applications. XSP is very easy to use and makes for a simple ASP.NET Web server, almost akin to the Cassini server that ships with Web Matrix on Windows. For more robust hosting, you can download a module called mod_mono that allows Apache to support ASP.NET. In this article, however, we'll examine the creation of a simple Web application and host it using XSP.
I must admit that I cheated when I created the code for the Web application: I used Visual Studio to create the Web application and its associated files. Then, I copied the code over to the Linux box that was ready to host.
For our example, we'll create a Web page with a simple button and a label. When the button is clicked, the label will show that the user clicked the button.
<%@ Page Language="C#" %>
<script runat="server">
void Button1_Click(object sender, EventArgs e)
{
titleTag.InnerText = "You clicked the button!";
}
void Page_Load(object sender, EventArgs e)
{
if (!IsPostBack)
titleTag.InnerText = "Page has loaded";
}
</script>
<html>
<head>
<title runat="server" id="titleTag"/>
</head>
<body>
<form runat="server">
<asp:Button id="Button1" runat="server" onclick="Button1_Click" text="Click me" />
</form>
</body>
</html>
You can save this file to a directory that will act as your root for the Web application.
If you are using codebehinds for your application, you will also need to compile the source files using the Mono compiler, in order to obtain the compiled assembly for the application. Just as with ASP.NET on Windows, you'll need to drop this into a /bin directory on the root.
XSP, by default, listens on the 8080 port so as to not interfere with Apache; however, you can set up the server, via the command line, to listen at a different port. You can also specify the root directory of the application you wish to host:
xsp.exe [--root rootdir]
[--port N]
With the server running, you can access your page through any standard Web browser. And, there we have it: ASP.NET served over Linux!
Third Party Tools
There are numerous tools available to aid your developments in Mono so that, unlike me, you don't need to resort back to Visual Studio.
- MonoDevelop:
With GUI features still lacking in Mono, MonoDevelop is really pushing the current implementations to show what can be achieved in Mono. Resembling #develop for Windows, MonoDevelop supports syntax highlighting and the compilation of projects from an easy-to-use interface. However, this tool is still at an early stage of development and presently lacks the features needed to make it a truly useful instrument.
- Eclipse:
Billed as "a kind of universal tool platform -- an open extensible IDE for anything and nothing in particular," Eclipse is a great solution for developing Mono applications. By downloading and installing the Improve C# plugin for Eclipse you can have a fully-featured and free Java based IDE for your Mono developments.
- MonoDoc:
This is a browser for the Mono documentation. It isn't installed by default through the standard Mono packages, but it is a life saver for those needing to check whether certain APIs and parts of the framework are available in Mono.
Summary
From my study of Mono, I've come to understand that this is a very important project for .NET. The release of .NET from the confines of Microsoft operating systems will allow it to expand within communities that are normally off-limits to Microsoft technologies.
Mono can only grow stronger, and perhaps in the near future, .NET developers will be able to develop for Linux as easily as we develop for Windows today.
Logged In: YES
user_id=827666
Forgive me for speaking out of turn, I just stumbled in here
looking for something to convert a .dbx file to mbox format,
but...
It seems to me you don't really need this. Once you have
files in mbox format, you can use any number of methods to
split them into individual messages. You can use formail
(part of procmail), along with a little shell script
#!/bin/sh
#
# createfile - Use this with formail to split a Unix mbox into
# individual files named message.1, message.2, ...
#
# FILENO=1 formail -ds createfile < mboxfile
#
# Basically, formail is looking for the name of a program,
so this is
# a program which simply redirects its input to a file.
#
cat > message.$FILENO
or Perl's Email::Folder module
perl -MEmail::Folder -e
'$f=Email::Folder->new(shift);while($m=$f->next_message){$n++;open
F,"> message.$n" or die; print F $m->as_string}' mboxfile
or Python's email and mailbox modules
#!/usr/bin/python
import email
import mailbox

num = 1
for msg in mailbox.PortableUnixMailbox(open('mboxfile'),
                                       email.message_from_file):
    file = open('message.' + str(num), 'w')
    print >>file, msg
    num += 1
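For what it's worth, on Python 3 (where PortableUnixMailbox and the print >> syntax no longer exist) the same split can be sketched with the modern mailbox API; the function name and file prefix here are my own:

```python
import mailbox

def split_mbox(path, prefix='message.'):
    """Write each message of an mbox file to its own numbered file."""
    written = []
    for num, msg in enumerate(mailbox.mbox(path), start=1):
        name = prefix + str(num)
        with open(name, 'w') as out:
            out.write(str(msg))   # str(msg) includes headers and body
        written.append(name)
    return written
```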
or something similar in whatever your favorite scripting
language is. Once you've got the data out of its proprietary
format, the battle's already won. | https://sourceforge.net/p/ol2mbox/feature-requests/4/ | CC-MAIN-2017-04 | refinedweb | 215 | 65.62 |
[I am sorry I did not pay enough attention to this old thread at that time. (See for why this is relevant now.)] At Sun, 6 Sep 2009 17:56:34 +0100, Richard Frith-Macdonald wrote: > 1. we don't want to expose internal workings because we don't want > developers to start to depend on those internals in such a way that > it's hard for us to change things later without breaking existing > applications. > 2. we don't want to expose internal workings because changes to them > may break API and mean that apps need to be recompiled. > > Issue 1 is real, but we can't quantify how important it is as it's a > amatter of perceptions rather than a true technical issue. Luckily > it's quite easily largely fixable, simply by removing pthread.h form > the header and replacing the types from pthread.h with opaque dummy > types of the same size, so that there is no temptation for developers > to use them directly. So I did that, though really, I'm not sure that > was worth the bother, since the ivars concerned were already clearly > marked as private. As they're marked as @private, there's no way for developers to access them in subclasses, right? Only the size matters for the purpose of subclassing. So I fail to see what the issue is, and how including a specific header and using library types in ivars for a *required* library (configure bails out if pthread is not found) is "exposing internal workings". > I guess it's just good to do it to avoid having the extra symbols > polluting developers namespaces. I admit I don't understand this. > Issue 2 is a technical problem ... if someone subclasses one of the > lock classes, their compiled code is obviously dependent on the size > of the superclass and if that size changes (eg due to using a > different pthread implementation) then the size of the superclass > would change even though the version of the base library is > unchanged. 
So potentially an app linked with one copy of the base > library would fail to run properly with another copy of the library > even though it was the same version! If the different pthread implementation is ABI incompatible, that would mean that gnustep-base (and anything else using this new pthread implementation) *must* be rebuilt. Two different incompatible implementations on one platform can only coexist if they have different SONAME. So, if Base is rebuilt because the size of, say, pthread_mutex_t is different (i.e. ABI change in pthread), then config.status will substitute @GS_PTHREAD_MUTEX_T@ to the new value, achieving what ... the compiler would do automatically with David's code before that change. If Base is not rebuilt (for whatever reason -- user/distro omission for example), the opaque type will not help at all, because gs_mutex_t would still have the wrong size -- it would match the size of pthread_mutex_t at the time Base was compiled. The class size will not match the actual size at runtime, possibly leading to the breakage the trick was intended to avoid. In conclusion, I think this change serves no purpose and doesn't make *anyone's* life easier. I may be wrong, but it seems to me that using directly pthread types in ivars does no harm at all, in practice. | http://lists.gnu.org/archive/html/discuss-gnustep/2010-08/msg00033.html | CC-MAIN-2013-48 | refinedweb | 556 | 60.65 |
Hi.?
Thanks, works great!
Thanks! Worked for me. The error is gone. But is it normal to do it this way? Because seeing the DLL file in my Unity editor seems kinda...
Okay I can fix it easily, but why the whole stack is modified?
Okay I can fix it easily, but why the whole stack is modified?
I want to place objects using matrix stack, but the following code seems to modify the whole stack, so that using stack becomes a nonsense. What would you do in order to use stack? I need it to...
This is a part of code from tutorial:
//This file is licensed under the MIT License.
#include <string>
#include <vector>
#include <stack>
Thanks, I was thinking about this, but how I can do it?
I need to make scene where two triangles are moving around on a circle like on a carousel:
position1 = [ sin alpha, cos alpha]
position2 = [ -sin alpha, -cos alpha]
All that I have achieved is... | https://www.opengl.org/discussion_boards/search.php?s=5ee2722a026530f17ae0a91989e14272&searchid=1432214 | CC-MAIN-2015-40 | refinedweb | 132 | 80.11 |
Hello Mr. Kay
I'd like to process small to medium sized XML documents with virtually one
open-sized XSLT stylesheet. The structure of the different input documents
should be able to evolve and grow independently from the process itself. The
process should be able to handle any new namespace and element type with the
addition of corresponding template matches. Since such a all-mighty stylesheet
would soon grow too big, I would like to break it down, for example by
namespace:
1) The process first scans the input document for all the namespaces
contained. Then it dynamically builds a stylesheet, importing the according
stylesheets per namespace. The result gets compiled and the input transformed.
2) The process scans the input document for the namespaces contained and
builds a pipeline with the pre-compiled stylesheets, handling these
namespaces. The input gets transformed step-by-step.
3) The process starts with the stylesheet for the root namespace. For all the
unknown namespaces transforms the node with the pre-compiled stylesheet for
the according namespace via the saxon:transform() extension. Every "foreign"
node gets transformed separatly.
Which approach do you think performs best? do you have ideas for another
approach?
The number of namespaces within one input document is only a small subset of
the possible overall selection. Some namespaces appear more often, others very
seldom. A proper cache management would prevent the process from pre-compiling
all the stylesheets in a long term, wide coverage usage.
Thanks,
Bruno | https://sourceforge.net/p/saxon/discussion/94026/thread/ff13b355/ | CC-MAIN-2017-13 | refinedweb | 247 | 64.81 |
XMonad.Actions.Submap
Description
A module that allows the user to create a sub-mapping of key bindings.
Usage
First, import this module into your
~/.xmonad/xmonad.hs:
import XMonad.Actions.Submap
Allows you to create a sub-mapping of keys. Example:
, ((modm, xK_a), submap . M.fromList $ [ ((0, xK_n), spawn "mpc next") , ((0, xK_p), spawn "mpc prev") , ((0, xK_z), spawn "mpc random") , ((0, xK_space), spawn "mpc toggle") ])
So, for example, to run 'spawn "mpc next"', you would hit mod-a (to
trigger the submapping) and then
n to run that action. (0 means "no
modifier"). You are, of course, free to use any combination of
modifiers in the submapping. However, anyModifier will not work,
because that is a special value passed to XGrabKey() and not an actual
modifier.
For detailed instructions on editing your key bindings, see XMonad.Doc.Extending.
submap :: Map (KeyMask, KeySym) (X ()) -> X () Source #
submapDefault :: X () -> Map (KeyMask, KeySym) (X ()) -> X () Source # | https://hackage.haskell.org/package/xmonad-contrib-0.15/docs/XMonad-Actions-Submap.html | CC-MAIN-2019-13 | refinedweb | 155 | 66.74 |
User talk:Richard
User:Richard's talk page
Please don't message me here, the wiki sucks. I'm usually on IRC instead.
Contents
- 1 List of relations: scrollbar needed
- 2 Your old London map
- 3 Re: gathering data
- 4 East Cotswolds
- 5 Public Domain
- 6 different size potlatch editors
- 7 Zoom bug in Potlatch
- 8 potlatch presets bug
- 9 colon troubles
- 10 Potlach 0.8b and waypoints
- 11 a small thank-you
- 12 Potlatch translation
- 13 help with gnash compatibility
- 14 Adlestrop Rail Atlas
- 15 translation to Brazilian Portuguese of Potlatch Messages
- 16 Praise
- 17 change in xml causes mkgmap to bomb
- 18 lat/long resolution in Potlatch
- 19 Panel Image
- 20 Cycle map scripts
- 21 A link to the keyboard shortcuts
- 22 osm2ai & multipolygons
- 23 License
- 24 Your T-Shirt idea
- 25 Fish fingers
- 26 Where did your Garmin cycle maps go?
- 27 relation warning in potlatch
- 28 Is this a Potlatch Bug?
- 29 Talk:WikiProject_FLOSS#Removal_of_phrase_with_citation_of_Potlatch
- 30 API 0.6 edit
- 31 OSM nameservice
- 32 cyclenet- the dog's danglies
- 33 Garmin cycle map
- 34 No scrapers
- 35 Don't revert
- 36 Thank you - P2@ "editing"
- 37 "track" description in Template:Map Features:highway
- 38 Shoulders
- 39 Welcome to Wikipedia users
List of relations: scrollbar needed
Hi Richard! I've almost mapped all of the public transportation lines in my town, but now I'm facing the problem that there are too many relations. That is to say: the list is longer than my screen is high (1200px), so I'll need a scrollbar to go on.
And btw: alphabetical sorting would be nice. And in long lists, the highlighted item tends to be higher than the mouse is, the more I get to the bottom of the list.
Last but not least I would appreciate a function to easily assign relations to ways, like a stamp function to select a relation, so that every way that is then selected gets that relation. Or something like that.
cheers, RalpH himself 22:25, 4 December 2008 (UTC)
- Hi Ralph: if you mean the tag/relations panel at the bottom of the screen, there is a (left-right) scrollbar - admittedly a fairly little one! Is that what you're looking for? --Richard 11:33, 5 December 2008 (UTC)
- Nope! I mean the list in the "Add way to an existing relation" dialogue. This list is sometimes higher than the screen, and there is definitely no scroll bar. Picture: [1] RalpH himself 18:28, 5 December 2008 (UTC)
- done. --RalpH himself 15:15, 7 December 2008 (UTC)
Your old London map
Hi Richard. Someone's expressing interest in your old map scanning work: Talk:Out-of-copyright_maps -- Harry Wood 13:35, 31 May 2006 (UTC)
Re: gathering data
Re: gathering data, you might be interested in -- Owhite - 13:12, 10 Jan 2006
East Cotswolds
Hi Richard. I've set up a page for the East_Cotswolds, fancy putting on where you've done / might do in future? -- Gagravarr 22:45, 3 Sep 2006 (BST)
Public Domain
I have added the PD-user template to my user page as well, and changed it to automatically set up a category containing all users who have this template on their user page. It's just the two of us right now but we'll get there eventually... --Frederik Ramm 15:14, 22 February 2007 (UTC)
different size potlatch editors
I do most of my OSM mapping on a 23" monitor at 1920 x 1200, and Potlatch occupies about a sixth of the screen. I would love to have something that could take advantage of the extra screen space (JOSM is not an option, the work firewall breaks it). Are you willing to build different size editors for those with big screens? Myfanwy 21:25, 1 November 2007 (UTC)
Zoom bug in Potlatch
Richard, I can't access my email right now, so I'll use this channel. Something is wrong with the zoom of the Yahoo! imagery in the new version of Potlatch. I hope you can fix it. Otherwise you did a lot of great work. I don't see highway=service among the possible highway tags, but that's a minor nitpick. Polyglot 09:14, 26 December 2007 (UTC)
- Sorry about that - memo to self: don't commit stuff just before Christmas. :) It's fixed now. Will look at highway=service when I'm back at my development machine. cheers --Richard 11:41, 27 December 2007 (UTC)
potlatch presets bug
Hey Richard, good work with potlatch :) As of 0.6c, whenever you use the pre-sets (either by mouse or keyboard), the attributes of the way/node won't update until you click off and then select it again. Perhaps this is related to the change to allow more attributes being displayed? --Brainwad 14:40, 21 January 2008 (UTC)
colon troubles
The problem with colons is back :/ Again I assume it's something to do with the change to how many tags you can see. --Brainwad 01:56, 24 January 2008 (UTC)
- It's a regression caused when SteveC moved some of the server code - hopefully he can sort it. --Richard 02:01, 24 January 2008 (UTC)
Potlach 0.8b and waypoints
Hi. I see that 0.8b has reinstated showing waypoints from GPX tracks. It wasn't clear to me whether the problem was with the upload or the editing? I'm still not seeing my waypoints so will try re-loading the track, but can you clarify? Do waypoints need timestamps to show up?
- The problem was that, in 0.8a, Potlatch got its start latitude and longitude from the first trackpoint in the GPX track. (This allows GPXs to be edited before they've been processed through the database.) Unfortunately, since waypoints are often in the GPX before the trackpoints, this meant Potlatch was trying to process them before it had a base lat/long, and therefore failing. As of 0.8b, Potlatch processes the waypoints after the trackpoints.
- You shouldn't need to reupload, but if you're having problems, let me know the track ID and I'll look into it. --Richard 09:35, 23 April 2008 (UTC)
- Well there's a big backlog of uploads at the moment, but as we don't need to wait for the upload before editing now :-) I've had a go and I'm still not seeing the waypoints. The file is RochdaleCanal.gpx and I guess by the URL it has been given an ID of 99934. I waypointed all the locks and bridges I went through last week. Thanks hugely. --POHB 11:16, 23 April 2008 (UTC)
- Ok, thanks - I'll have a look. I notice that the track was produced by Garmin something-or-other - maybe it lays things out differently to GPSbabel (which is what it's tested on). Will get back to you asap (and how could I resist a track of the Rochdale?). :) --Richard 11:41, 23 April 2008 (UTC)
- I saw your message that you'd fixed it - just after I uploaded an edited file where I'd hacked a load of the Garmin guff out and moved the waypoints to the end. That one worked fine. --POHB 12:39, 29 April 2008 (UTC)
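The ordering fix described in this thread (process trackpoints first, then waypoints) can be sketched as a two-pass parse. This is an illustrative reconstruction in Python, not Potlatch's actual ActionScript code; the sample file and names are made up:

```python
import xml.etree.ElementTree as ET

# GPX 1.1 namespace; real Garmin files use this (or GPX 1.0).
NS = "{http://www.topografix.com/GPX/1/1}"

def parse_gpx(xml_text):
    """Two-pass read: collect trackpoints first (to get a base lat/long),
    then waypoints, so <wpt> elements that appear *before* any <trkpt>
    in the file are no longer processed before a base position exists."""
    root = ET.fromstring(xml_text)
    trackpoints = [(float(p.get("lat")), float(p.get("lon")))
                   for p in root.iter(NS + "trkpt")]
    base = trackpoints[0] if trackpoints else None
    waypoints = [(float(p.get("lat")), float(p.get("lon")),
                  p.findtext(NS + "name", default=""))
                 for p in root.iter(NS + "wpt")]
    return base, trackpoints, waypoints

# A Garmin-style file listing its waypoints before the track:
sample = """<gpx xmlns="http://www.topografix.com/GPX/1/1">
  <wpt lat="53.68" lon="-2.10"><name>Lock 1</name></wpt>
  <trk><trkseg>
    <trkpt lat="53.69" lon="-2.11"/>
    <trkpt lat="53.70" lon="-2.12"/>
  </trkseg></trk>
</gpx>"""

base, tps, wps = parse_gpx(sample)
print(base)        # (53.69, -2.11): available even though a waypoint came first
print(wps[0][2])   # Lock 1
```

A single-pass parser that tried to place waypoints as it met them would fail on exactly the kind of file shown, which is the 0.8a behaviour reported above.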
a small thank-you
Potlatch is great. Really, really great. Much better than I dared hope for when I first clicked on the "Edit" tab. Don't need to install anything, plus I can use it on any computer. Best of all, I can demonstrate editing OpenStreetMap on-the-spot to others. THANK YOU, THANK YOU and THANK YOU again! --Emexes 14:12, 25 May 2008 (UTC)
Potlatch translation
Hello,
I would like to help and I could translate the Potlatch user interface to German. I am a German native speaker and I am working on internationalisation tasks in my job. Is there a language file? --Lulu-Ann 13:42, 5 June 2008 (UTC)
Hi again, I have finished the German translation as far as I could.
- You are using the terms "point" and "node" in the English version.
I have seen "node" used more in the wiki. Maybe you want to decide to use only one word.
- The same with "way" and "route" - Or is a route something different when we are talking about relations?
- There is a sentence where you probably add a variable at the end. I have added %unknown% to the German text to point out where the variable goes.
- What about the bug report mailing? Do you want "English preferred" or similar added to the other languages' texts?
Bye
- Do you need an additional text for "choose language"?
--Lulu-Ann 16:45, 5 June 2008 (UTC)
help with gnash compatibility
Richard,
Potlatch is awesome! Adobe's flash player for Linux however, is very not awesome. Gnash has a long way to go, but at least it doesn't crash Firefox all the time.
But I miss potlatch! So I've started hacking on gnash to get it to support potlatch.
Today I got the AMF encoding/decoding to the point that potlatch loads and displays streets in gnash... after about an hour... still some work to do there. I'm hoping over the next week or two I'll have time to get it to send multiple requests over the same connection and do multitasking properly.
So I'm writing to see how much you'd like to be involved with helping me achieve full compatibility between gnash and potlatch. I'm sure I could do it eventually by myself, but it'd be so much faster and more fun if I could chat with you along the way. If you'd like to be available to me even just for occasional questions, please let me know! I've got a contact form at jasonwoof.com/contact.html or you can find me on freenode (nick: JasonWoof). Thanks! -- JasonWoof 02:36, 19 July 2008 (UTC)
- Thank you Jason. There might be some information here WikiProject_FLOSS#Project_No_Flash . logictheo 12:42, 5 May 2010 (UTC)
- Thanks logictheo, but... a lot has changed in two years. I've quit the gnash project because I couldn't get along with the project leader over IRC. Before I left I was able to get potlatch working, though very slowly. Or at least potlatch loaded, I forget if we got saving to work. Hopefully other people are picking it up where I left off. I no longer have high hopes for gnash's success. I'm betting my hamster on HTML5 and the abolishment of software patents. -- JasonWoof 00:30, 10 May 2010 (UTC)
Adlestrop Rail Atlas
Am I permitted to use this to confirm the station names, and status of lines when tracing older disused/abandoned lines from NPE/Seventh Series(as they become available)? ShakespeareFan00 10:39, 24 July 2008 (UTC)
translation to Brazilian Portuguese of Potlatch Messages
Hello Richard, my colleague Alan Tamer Vasques was so kind as to provide a Brazilian Portuguese translation of the Potlatch messages. We put it in Pt-br:Potlatch/Translation because there is already an incomplete Portuguese translation page, apparently in Portuguese of Portugal. We hope that the translations will be useful. Thanks for providing Potlatch! --Ulf Mehlig 21:07, 24 July 2008 (UTC)
Praise
Big thanks for the quick fix of the relation-adding bug in v0.10.
Additionally, I'd like to say the enhancement of the history dialog in v0.10b is a _very_ helpful feature, saving a lot of mouse clicks. Great job! --HeikoE 08:33, 1 August 2008 (UTC)
change in xml causes mkgmap to bomb
The XML returned by wget'ing the OSM from seems to have changed. There are single quotes surrounding the parms. There used to be double quotes. The osmcut Java code requires double quotes, so it bombs out with an invalid index error on line 144 of the Java source code. Here's how it used to be:
<node id="26856937" timestamp="2008-07-24T10:35:24Z" lat="32.7953416" lon="-79.9385824"> <tag k="created_by" v="YahooApplet 1.0"/>
Here's how it is now:
<node id='29561771' lat='34.7825807' lon='-82.4543862' user='sadam' osmxapi: </node>
I can think of 3 ways to fix this:
- get informationfreeway.org to change back
- change preprocess.pl so it creates double quote parms
- change osmcut to deal with either single or double
Or am I missing something obvious?
- preprocess.pl is really only designed to cope with the XML from planet dumps; it doesn't make any attempt to parse it properly. I guess it would be possible to add an option to use a real XML library, rather than regexes, but that's not really my area of expertise I'm afraid - I loathe XML! Is there a particular reason you'd like to use an Information Freeway download rather than a planet file? --Richard 12:56, 31 August 2008 (UTC)
Richard, I was using Information Freeway mainly from newbie ignorance and because I live at the intersection of 3 states in the United States and wanted a portion of each state. But I have no problem going with CloudMade or one of the other Planet extracts. Thanks Art (CyclingGreen)
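On the "real XML library" point above: XML 1.0 allows attribute values to be delimited by either single or double quotes, so any conforming parser reads both forms identically; only regex-based extraction cares about the difference. A minimal illustration, in Python for brevity (the tools discussed here are actually Perl and Java):

```python
import xml.etree.ElementTree as ET

# Both forms below are valid XML 1.0: attribute values may be wrapped in
# either double or single quotes, and a conforming parser treats them the same.
double_quoted = '<node id="26856937" lat="32.7953416" lon="-79.9385824"/>'
single_quoted = "<node id='29561771' lat='34.7825807' lon='-82.4543862'/>"

for xml_text in (double_quoted, single_quoted):
    node = ET.fromstring(xml_text)
    print(node.get("id"), node.get("lat"), node.get("lon"))
```

So a regex that hardcodes `"` will break on the single-quoted variant, while a parser-based reader keeps working regardless of which style the server emits.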
lat/long resolution in Potlatch
Hi Richard, I just want to ask what is the resolution you are using for the latitude and longitude in Potlatch. According to Data Primitives, the latitude and longitude are stored to 7 decimal places, which translates to about 1 cm at the equator (if I computed correctly). But in Potlatch, I can't position nodes to that level of accuracy. I only get a resolution of around 5 meters or so. Thanks! --Seav 08:59, 4 October 2008 (UTC)
- Hello! Potlatch itself doesn't have any explicit limit on resolution: the limit is likely to be that of Flash's precision at the scale at which Potlatch works, and certainly that will be less precise than that available in the database. I've just experimented and I think I can get two nodes (within Oxfordshire, England) to within 0.00059km of each other - i.e. 59cm - though my calculations might be wrong. There is of course a minimum 'snap to pixel' so you may need to move a node a way away, then back, to get this level of precision. --Richard 09:17, 4 October 2008 (UTC)
- Yes, because Potlatch's "native" co-ordinates are calculated at zoom level 13; the co-ordinate system stays constant if you're zooming in or out, but the Flash movieclip for the map is enlarged/reduced accordingly. So I suspect when you get to zoom level 19, Flash is no longer able to discern a sub-3px difference due to the enlargement factor currently in operation; after all, you're effectively moving a tiny fraction of a zoom-13 pixel. --Richard 15:38, 4 October 2008 (UTC)
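For anyone wanting to check the figures in this thread, the arithmetic is quick to sketch (assuming a spherical Earth of roughly 40,075 km equatorial circumference and standard 256-pixel Web Mercator tiles):

```python
CIRCUMFERENCE_M = 40_075_000  # equatorial circumference in metres (approx.)

# The database stores 7 decimal places of a degree:
db_res = 1e-7 / 360 * CIRCUMFERENCE_M
print(f"1e-7 degrees is about {db_res * 100:.1f} cm at the equator")  # ~1.1 cm

# Size of one pixel at a given Web Mercator zoom level (at the equator):
def pixel_size_m(zoom, tile_px=256):
    return CIRCUMFERENCE_M / (tile_px * 2 ** zoom)

print(f"one zoom-13 pixel is about {pixel_size_m(13):.1f} m")   # ~19.1 m
print(f"one zoom-19 pixel is about {pixel_size_m(19):.2f} m")   # ~0.30 m
```

This matches the thread: the database's 7-decimal-place storage resolves to about a centimetre, while the editor's effective precision is bounded by pixel snapping at the zoom level in use.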
Panel Image
Hi Richard, I have updated the previous panel image and put it on DE:Potlatch with German subtitles, and the other translations (e.g. Potlatch) with hopefully correct English subtitles ;) If something changes I can relatively easily adapt it accordingly, I think. Best regards, --Krza 01:39, 7 December 2008 (UTC)
Cycle map scripts
Hi, I used your scripts and instructions OSM Map On Garmin/Cycle map to create a cycle map for Scotland (gmapsupp.img) using this world extract. There is a 2.5km gap in one of the NCN routes that doesn't get rendered on my eTrex Vista. The relevant section is here. The non-rendered section is part of a continuous way that gets rendered on either side. Bug? Also, do you know of any way to view the generated gmapsupp.img under Linux? ChrisB 13:03, 29 April 2009 (UTC)
A link to the keyboard shortcuts
One common piece of feedback I get from all my friends I've introduced OSM to so far is that they find the editor way too confusing and unusable. It's not until I give them a link Potlatch/Keyboard shortcuts that they can use it. There have been numerous instances of newcomers trying to play with Potlatch, messing up and leaving without knowing how to undo or delete their changes. I think what's urgently needed is a help button in the editor which is visible at all times and not only at the start. It could pop up a list of shortcuts as well as links to the wiki --Planemad/Talk 05:56, 14 May 2009 (UTC)
osm2ai & multipolygons
Hi, I tried to add transformation of multipolygon relations to Illustrator's compound paths. Sorry I messed up your code, I'm not a programmer, but it seems to be working [I hope]. --Platlas23 11:19, 21 May 2009 (UTC)
License
Hi Richard,:45, 4 August 2009 (UTC)
- Hi. :))
Your T-Shirt idea
Hi, I really like your T-Shirt idea. Do you have an SVG or other vector file of it? --Jorges 15:31, 14 August 2009 (UTC)
Fish fingers
I like them four to a sandwich with lots of ketchup and, occasionally, cheese.
Now, does Richard's page really deserve to be a haddock? Jonathan Bennett 14:17, 24 November 2009 (UTC)
Where did your Garmin cycle maps go?
The OSM Garmin download page points to but the garmin directory appears to no longer exist. ChrisB 20:12, 11 December 2009 (UTC)
- Yep, sorry, the dev server changed over and I haven't reuploaded them yet. Will do so when I get a chance. --Richard 20:55, 11 December 2009 (UTC)
relation warning in potlatch
Hello Richard,
many streets include points that are members of a relation, i.e. Relation:restriction, Tag:highway=bus_stop and many more. A user who does not know relations moves or deletes these "useless" points without being aware that he damages or destroys the relation. There is a useful feature in Potlatch that warns if there is a double point at the same place. Is it possible to add a similar feature for points that are members of a relation, and pop up a warning message if the user clicks such a point? He should not move or delete that point if he does not know what a relation is or how this can harm the relation. I have to admit that it is also difficult for me to keep the relations in mind every moment. -- Tirkon 17:08, 1 March 2010 (UTC)
- I can certainly add something that adds a prompt if you try to delete such a street, just as it does for tagged nodes. But please put it on trac as an enhancement, or I'll probably forget! cheers --Richard 18:44, 1 March 2010 (UTC)
- Done. Thank you for your answer. :-)
Is this a Potlatch Bug?
Hello Richard, please have a look at this map and the traffic light in the middle. Change to Potlatch and click the black square at the traffic light. Potlatch immediately wants to begin a new way. This only happens with the black square, but not with red points. Is this a bug or a feature? -- Tirkon 21:17, 1 March 2010 (UTC)
- Potlatch always extends the current way (not starting a new one) when you click on the end point of a way. If you didn't want to extend it, just press Enter/Return to stop drawing. (Incidentally IRC is better for this sort of question, the wiki sucks. :) ) --Richard 21:56, 1 March 2010 (UTC)
Talk:WikiProject_FLOSS#Removal_of_phrase_with_citation_of_Potlatch
I replied. Regards, logictheo 13:00, 5 May 2010 (UTC)
API 0.6 edit
You guessed wrong: the editor in question was Merkaartor, and the problem is a need to edit user preferences via the CLI
OSM nameservice
Hi Richard,
sorry, it wasn't my aim to annoy anybody by calling a development 'bad'. Unfortunately I noticed that a lot of people say that Nominatim results and their representation could get better. A long time ago I read that the developers say they are unhappy with the codebase. Isn't that true? --!i!
21:17, 24 March 2011 (UTC)
- As someone who has worked with SQL databases for 20 years, I'd say a) Nominatim is extremely ambitious; b) it's quick; c) there is nothing wrong with the code base; d) any piece of namefinder technology which is miles better than Google for finding my home address has something going for it; and e) I'm sure it could do with more people working on its development, like every other bit of OSM.
- The major problems with Nominatim are handling issues with tagging of places (nodes, ways etc), and the fact that this tagging is often wildly inconsistent, variable or just plain missing. This is not helped by people blaming Nominatim for what are, essentially, data issues.
- The one feature which I think Nominatim could do with is fuzzy matching of names to cope with typos, spelling errors etc, but mainly we could do with more place name control reports (to find inconsistencies in mapping), and find a better way to tag a number of place elements (region, locality come to mind, but we have no good way of doing mountain ranges, nested named residential areas, valleys as in 'Wensleydale', 'Neandertal', 'Val di Susa'). Like a lot of OSM tools, using the tool properly helps improve the data. In the meantime remember GIGO. -- SK53 21:35, 24 March 2011 (UTC)
cyclenet- the dog's danglies
Here's more praise for you, Richard. Having torn my hair out with overpriced and useless commercial (g**min) crap (notably Topo, grr), CycleNet works really well. Many thanks from some South Downs trail riders
Garmin cycle map
Hi, I was wondering if there are going to be any more regular updates of your garmin cycle map - last one appears to be 30/12/2010. Also, the "Find->Transportation" function on the Garmin doesn't show train stations with your map. It used to, maybe back in 2009, but at some point it stopped working. Do you know why? Is it something that would be easy to fix? ChrisB 18:42, 14 June 2011 (BST)
No scrapers
hi there Richard,
I noticed you removed MOBAC from Trebuddy. Even though I understand your point about preventing end users from slowing down our map with tile-scraping apps, I added it after I scanned which tools are supported by MOBAC, to make sure users can benefit from OSM material as easily as possible. So please re-add the hint again :) --!i!
21:01, 1 October 2011 (BST)
- If the users benefit at the expense of our servers, that's an unsustainable situation. I would far rather users were directed towards solutions that the OSM servers can cope with. Do you know how much load MOBAC has put on our servers? As in, 30% at one point? --Richard 23:27, 1 October 2011 (BST)
- As said, I agree completely with you that MOBAC scraping is a problem, of course. Still, it's not a reason to "ban" it on the wiki or to remove links towards the tool. Better would be to add a hint on the MOBAC wiki page and to point the authors of the tool to it once again. Removing the links just confuses end users and makes them search longer for how to use OSM offline. So please add it again, Richard. --!i!
20:24, 2 October 2011 (BST)
- Putting a link on the OSM wiki gives the suggestion that OSM endorses MOBAC (which it sure as heck doesn't). By removing the link (which is, don't forget, on a page about an unrelated program) there is less chance that people will find it, therefore less chance that our tile servers will be scraped to buggery and back. And if you think that the authors of scraper tools are willing to happily post whatever notices OSM requests of them... well, you obviously haven't dealt with many. --Richard 01:04, 3 October 2011 (BST)
- I don't think so, Richard. This will just result in a "WTF!?! How do I get this damned map into this fuckedup APP?!?" and people needing more support. Using this link and the MOBAC wiki page gives us the option to show them a hint that this isn't a useful procedure to get the maps. On the other hand, offline tile usage is a very common problem for low-budget apps, so it's a bit up to us to solve this problem, too. So please add the link again. --!i!
07:44, 3 October 2011 (BST)
- As for me, I don't like it if our project only cares about itself. But on this aspect my point is clear, and till now nobody has shown me what the benefits of this approach would be. But see, Firefishy has a very similar opinion and cleared the MOBAC page completely. Dunno if this was a nice step :( Going to revert your changes, but I created a label to help you solve the problem a bit Template:KillsTileServers --!i!
19:45, 3 October 2011 (BST)
Don't revert
Please don't revert legitimate edits, such as you did to Copyright. Your action has been reverted. --BrandonSkyPimenta (Talk • Contribs) 03:30, 27 May 2012 (BST)
Thank you - P2@ "editing"
Hi Richard! Thanks for your edit at "editing". I guessed it (when doing some edits to this page), read it somewhere (that you plan to keep P2 active and as an intermediate editor) but did not find it when I searched for it. Harry seems to have misunderstood that. Cheers --Aseerel4c26 (talk) 02:39, 12 January 2014 (UTC)
"track" description in Template:Map Features:highway
Hi Richard,
Shoulders
Hi Richard, why did you delete so much information on shoulder tagging [[3]]? All description on left/right was removed from the wiki page - and this is in use several thousand times.
--Mueschel (talk) 13:25, 16 April 2016 (UTC)
- Not really deleting, just making it more concise by explaining in the 'Refinement' section that you can use normal tags suitably namespaced. If you want to add details back in I'd suggest you do it there. --Richard (talk) 13:43, 16 April 2016 (UTC)
Welcome to Wikipedia users
Many thanks for this page. If it's reasonably stable, I volunteer to translate and adapt it into French. Gall (talk) 13:32, 29 April 2016 (UTC)
Hi,Thanks Jakub for your helpful advice, I took the time to fix the points you mentioned and re-uploaded the package to debexpo.
For future, debian-python@lists.debian.org might a better place to ask for sponsorship of packages like this.
Ok, I will, is that ok if I stay on debian-mentors for that first package ?
According to Python Policy 2.2, the binary package name should be python-autoslug. (Now, you may say that's too generic and that the package name should include "django" - but so should upstream module name...)Yes, I do believe that django-requiring modules should stick in django namespace. Anyway, it's never the case but all debian packages I saw are named python-django-*, which is quite usefull for the user.
debian/control: - Remove XB-Python-Version, it servers no purpose.- More importantly, remove "Provides: ${python:Provides}". See <> for rationale. - Short description is not a sentence, so it should not end with a dot. See Developer's Reference 6.2.2.
Done
debian/python-django-autoslug.preinst:- It doesn't make sense. You package was never affected by #479852 (besides, the bug is already fixed in stable).
done
debian/rules: - Please consider removing useless comments.- Unexporting anything is not necessary, the *FLAGS don't affect your package by any means.
done
debian/watch:- Please write one. That shouldn't be hard, as the tarballs are available at PyPI.
done
autoslug/__version.py: - So is it 1.3.5 or 1.4.1 after all?
It's definitely 1.4.1. By the way, I noticed that the actual home is , I changed that in the debian/control and debian/copyright too. So the actual latest version from bitbucket doesn't have that confusing _version.py file.It's definitely 1.4.1. By the way, I noticed that the actual home is , I changed that in the debian/control and debian/copyright too. So the actual latest version from bitbucket doesn't have that confusing _version.py file.
autoslug/tests.py: - Please run them at build time.
done (needed to add python-all to build-depends).I see two lines errors on in "QA" section of :
Error: /home/expo/data/live/incoming/django-autoslug_1.4.1-1_amd64.changesError: django-autoslug_1.4.1-1.dsc
What do they mean ? Do you see any other issues with this package ? Cheers, and thanks again :) -- Jocelyn Delalande Blog (fr) Home IRC JocelynD /OFTC | https://lists.debian.org/debian-mentors/2011/09/msg00326.html | CC-MAIN-2017-04 | refinedweb | 409 | 61.63 |
Knowing how to plot a Dataframe will help you perform better data analysis in just a few lines of code. Visualizing a Dataframe is one of the first activities carried out by Data scientists to understand the data better.
Visualizing a dataset often gives a better picture and helps you plan out your course of action. It also makes it easy to spot outliers and make speculations for the existence of any correlation in the dataset.
In short, knowing how to visualize a Dataframe is an important skill to have.
Methods to Plot a Dataframe in Python
Let’s get started with importing a dataset.
1. Import the dataset
For the scope of this tutorial we are going to be using the California Housing dataset.
Let’s start with importing the data into a data frame using pandas.
import pandas as pd housing = pd.read_csv("/sample_data/california_housing.csv") housing.head()
Plotting using Pandas
You can plot your Dataframe using .plot() method in Pandas Dataframe.
You will need to import matplotlib into your python notebook. Use the following line to do so.
import matplotlib.pyplot as plt
1. Plotting Dataframe Histograms
To plot histograms corresponding to all the columns in housing data, use the following line of code:
housing.hist(bins=50, figsize=(15,15)) plt.show()
This is good when you need to see all the columns plotted together. Next, let’s look at how to make scatter plots between two columns.
2. Scatter Plots
Scatter plots help in determining correlation between two variables.
To plot a scatter plot between two variables use the following line of code :
housing.plot(x='population', y = 'median_house_value', kind='scatter') plt.show()
This gives the following output :
We can see that there are a few outliers in the dataset. We can’t see a strong correlation between the two variables.
Let’s try plotting median income against median house value.
housing.plot(x='median_income', y = 'median_house_value', kind='scatter')
plt.show()
Here we can see a positive correlation between the two variables. As the median income goes up, the median housing value also tends to go up.
To see an example of an even stronger correlation let’s plot another scatter plot. This time between population and total rooms. Logically these two should have a strong positive correlation.
A positive correlation means that the two variables tend to increase and decrease together.
housing.plot(x='population', y = 'total_rooms', kind='scatter')
plt.show()
Our speculation was right, total rooms and population do have a strong positive correlation. We can say so because both the variables tend to increase together, as can be seen in the graph.
The different values that the kind argument accepts are as follows:
- ‘line’ : line plot (default)
- ‘bar’ : vertical bar plot
- ‘barh’ : horizontal bar plot
- ‘hist’ : histogram
- ‘box’ : boxplot
- ‘kde’ : Kernel Density Estimation plot
- ‘density’ : same as ‘kde’
- ‘area’ : area plot
- ‘pie’ : pie plot
- ‘scatter’ : scatter plot
- ‘hexbin’ : hexbin plot
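As a short sketch of how these kind values are used in practice (a small stand-in DataFrame is used here so the snippet is self-contained; with the housing data you would pass columns such as 'population' and 'total_rooms' instead):

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # headless-safe backend for this sketch
import matplotlib.pyplot as plt

# A small stand-in DataFrame for illustration only.
df = pd.DataFrame({"x": [1, 2, 3, 4], "y": [10, 20, 15, 30]})

line_ax = df.plot(x="x", y="y", kind="line")  # the default
bar_ax = df.plot(x="x", y="y", kind="bar")    # vertical bar plot
box_ax = df.plot(y="y", kind="box")           # boxplot of one column
plt.show()
```

Each call returns a matplotlib Axes object, so you can customize titles, labels, and limits afterwards.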
Plotting using Seaborn
Alternatively, you can also plot a Dataframe using Seaborn. It is a Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics.
Seaborn is a very powerful visualization tool. You get a lot of customization options along with it.
1. Import Seaborn
Let’s start with importing Seaborn into our python notebook.
import seaborn as sns
2. Using Distplot
Seaborn provides the option to plot a distplot. A distplot is a histogram with an automatic calculation of a good default bin size.
You can create one using the following line of code:
sns.distplot(housing['median_house_value'])
Here, too, you can spot the outliers. Let's try plotting one for median income as well.
sns.distplot(housing['median_income'])
Conclusion
This tutorial was about plotting a Pandas Dataframe in Python. We covered two different methods of plotting a DataFrame. Hope you had fun learning with us! | https://www.askpython.com/python-modules/pandas/plot-graph-for-a-dataframe | CC-MAIN-2021-31 | refinedweb | 648 | 58.89 |
Plone1.0 beta1 released
The evening of September 12, 2002 saw the release of Plone 1.0 beta1. The coordination involved was immense - quality assurance was a large part of our more recent release strategy. Plone 1.0 beta1 showcases the hard work that the Plone Team and contributors have put into the project over the last 16 months.
The beta is feature complete. I am confident that moving from beta to 1.0 will be very smooth sailing. We are in a position now to clean out the "bugs and features": that are scheduled for the 1.0 release.
Andy McKay has updated the Win32 installer, which is available from "plone download files.": We have added a multitude of features and lots of bug cleanups. The "HISTORY": file tells of the changes from alpha4 to beta1. The biggest ones to note are a refactoring of FormTool (our form validation/controller mechanism); the i18n namespace has been completely overhauled and Plone is now 100% localizable; a tree widget was added to ease navigation of larger sites; and slots (left/right columns) are dictated by Folder properties - left_slot and right_slot, which are paths to macros to be included on that page.
The development effort is going to continue full bore. We have been hearing some great testimonials from large companies and agencies who have implemented Plone solutions - we hope to get those up. If you are a company that needs training, please don't hesitate to contact "alan runyan.":mailto:runyaga@clearnoodle.com There are many firms around the world that are experienced Plone consultants. If you contact me I can put you in touch with one closer to your location.
Thanks to "all of the contributors": who have helped make Plone a great product. We hope to show the development and business world that Plone is the fastest, most maintainable, and most scalable content management system in the open-source world, one that can be easily customized for a client.
What makes Plone so flexible and maintainable? The underlying technologies:
- "Python": - the most elegant language I have ever used.
- "Zope": - an Application server that delivers on its Rapid Development promises.
- "CMF": - a Product suite that eases the development of content management systems.
NOTE: The Official Announcement can be found "here":
Sincerely,
~runyaga
**The Plone Team** | https://plone.org/news/2002/Plone-beta1.news | CC-MAIN-2021-21 | refinedweb | 383 | 63.59 |
Boot Graphics Resource Table definition.
More...
#include <Acpi50.h>
Boot Graphics Resource Table definition.
Definition at line 1051 of file Acpi50.h.
Definition at line 1052 of file Acpi50.h.
2-byte (16-bit) version ID.
This value must be 1.
Definition at line 1056 of file Acpi50.h.
1-byte status field indicating current status about the table.
Bits [7:1] = Reserved (must be zero)
Bit [0] = Valid. A one indicates the boot image graphic is valid.
Definition at line 1062 of file Acpi50.h.
1-byte enumerated type field indicating format of the image.
0 = Bitmap
1 - 255 = Reserved (for future use)
Definition at line 1068 of file Acpi50.h.
8-byte (64-bit) physical address pointing to the firmware's in-memory copy of the image bitmap.
Definition at line 1073 of file Acpi50.h.
A 4-byte (32-bit) unsigned long describing the display X-offset of the boot image.
(X, Y) display offset of the top left corner of the boot image. The top left corner of the display is at offset (0, 0).
Definition at line 1079 of file Acpi50.h.
A 4-byte (32-bit) unsigned long describing the display Y-offset of the boot image.
Definition at line 1085 of file Acpi50.h. | https://dox.ipxe.org/structEFI__ACPI__5__0__BOOT__GRAPHICS__RESOURCE__TABLE.html | CC-MAIN-2020-10 | refinedweb | 211 | 70.8 |
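The field layout documented above can be illustrated with a short parsing sketch. This is not part of the EDK II headers; it assumes the standard 36-byte ACPI description header precedes the fields listed here, giving the documented table size of 56 bytes:

```python
import struct

# Little-endian BGRT layout: a 36-byte ACPI description header, then
# Version (UINT16), Status (UINT8), ImageType (UINT8),
# ImageAddress (UINT64), ImageOffsetX (UINT32), ImageOffsetY (UINT32).
ACPI_HEADER_FMT = "<4sIBB6s8sI4sI"  # 36 bytes
BGRT_FIELDS_FMT = "<HBBQII"         # 20 bytes

def parse_bgrt(blob):
    """Unpack the BGRT-specific fields from a raw table image."""
    hdr_size = struct.calcsize(ACPI_HEADER_FMT)
    version, status, image_type, image_addr, off_x, off_y = \
        struct.unpack_from(BGRT_FIELDS_FMT, blob, hdr_size)
    return {"Version": version, "Status": status, "ImageType": image_type,
            "ImageAddress": image_addr, "ImageOffsetX": off_x,
            "ImageOffsetY": off_y}
```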
Recently I’ve been a bit surprised to find out that ASP.NET MVC [HandleError] attribute forces your website to bypass the Application_Error event. I wouldn’t normally be concerned about this, but typically Application_Error is your last resort to log errors. Logging is a vital feature of any application and I wouldn’t like to stop using it in favour of the [HandleError] attribute.
Okay, some may say: what's wrong with you, just drop [HandleError] and rely on web.config and Application_Error error handling. That doesn't sound like a bad idea, but I don't particularly like it. Why? Because of 2 things:
- [HandleError] is a formal Microsoft way to handle exceptions in ASP.NET MVC
- [HandleError] allows you to have separate error views for every controller (or just for some plus a generic one)
After some research I realised that it does seem to be an unfortunate oversight from Microsoft ASP.NET team, that a default [HandleError] filter doesn’t really allow you to track errors. With some Google help I have found a few suggestions how to overcome the issue, but none of them looked really simple and elegant. I didn’t want to copy/paste/modify/test some large code just to track .NET exceptions.
I reckon I have come up with a better and much simpler solution. All you need to do is to insert a code snippet into your error.aspx file and that code snippet will do logging or whatever else is required:
Related posts:
- ASP.NET MVC3 – Application_Error not firing log4net
- Limiting LINQ String Field Lengths
- Linq to SQL select and update oddity
- Power of the ASP.NET MVC + jquery
- Remote Desktop Connection Manager
Tags: ASP.NET MVC
instead of inserting the snippet in your error page, can you not handle it in some sort of base class and have your error pages all extend from that?
i have several error pages in my site and the thought of copy-pasting it several times into each page just feels a bit “icky”.
That’s right, I guess it can be done by creating a base class which handles the error event in its constructor.
However if you’re concerned about amount of code needed for both scenarios (and hence testing efforts) I’m not sure base class approach wins. But I agree it’d be potentially more elegant.
Just extend the attribute:
public class HandleAndLogErrorAttribute : HandleErrorAttribute
{
public override void OnException(ExceptionContext filterContext)
{
base.OnException(filterContext);
MyWebSite.Models.Utils.Logging.LogErrorView(filterContext.Exception);
}
}
Thanks Jeremy, this makes perfect sense. | http://www.revium.com.au/articles/sandbox/asp-net-mvc-handleerror-and-logging/ | CC-MAIN-2014-41 | refinedweb | 428 | 54.73 |
Go to: Synopsis. Return value. Related. Flags. Python examples.
scriptJob([allChildren=boolean], [attributeAdded=[string, string]], [attributeChange=[string, string]], [attributeDeleted=[string, string]], [compressUndo=boolean], [conditionChange=[string, string]], [conditionFalse=[string, string]], [conditionTrue=[string, script]], [connectionChange=[string, string]], [disregardIndex=boolean], [event=[string, string]], [exists=int], [force=boolean], [idleEvent=string], [kill=int], [killAll=boolean], [killWithScene=boolean], [listConditions=boolean], [listEvents=boolean], [listJobs=boolean], [nodeNameChanged=[string, string]], [parent=string], [permanent=boolean], [protected=boolean], [replacePrevious=boolean], [runOnce=boolean], [timeChange=string], [uiDeleted=[string, string]])
Note: Strings representing object names and arguments must be separated by commas. This is not depicted in the synopsis.
scriptJob is undoable, NOT queryable, and NOT editable. This command creates a "script job", which is a MEL command or script. This job is attached to the named condition, event, or attribute. Each time the condition switches to the desired state (or the trigger is triggered, etc), the script is run. Script jobs are tied to the event loop in the interactive application. They are run during idle events. This means that script jobs do not exist in the batch application. The scriptJob command does nothing in batch mode. This triggering happens very frequently, so for speed considerations no events are forwarded during playback. This means that you cannot use scriptJob -tc tcCallback; to alter animation behaviour. Use an expression instead, or the rendering callbacks "preRenderMel" and "postRenderMel". When setting up jobs for conditions, it is invalid to set up jobs for the true state, false state, and state change at the same time. The behaviour is undefined. The user can only set up jobs for the true and/or false state, or only for the state change, but not all three at the same time. I.e., if you do:
// Set up a job that runs for the life of the application.
// This job cannot be deleted with the "kill" command no matter what.
scriptJob -e "SelectionChanged" "print \"Annoying Message!\\n\"" -permanent;

// set up a job for the true state
scriptJob -ct "playingBack" playBackCallback;

// set up a job for the false state
scriptJob -cf "playingBack" playBackCallback;

then you should NOT do

scriptJob -cc "playingBack" playBackCallback;

otherwise it will lead to undefined behaviour. This command can also be used to list available conditions and events, and to kill running jobs.
import maya.cmds as cmds

# create a job that deletes things when they are selected
jobNum = cmds.scriptJob( ct= ["SomethingSelected","cmds.delete()"], protected=True)

# Now display the job
jobs = cmds.scriptJob( listJobs=True )

# Now kill it (need to use -force flag since it's protected)
cmds.scriptJob( kill=jobNum, force=True)

# create a sphere, but print a warning the next time it
# is raised over 10 units high
def warn():
    height = cmds.getAttr( 'mySphere.ty' )
    if height > 10.0:
        print 'Sphere is too high!'

cmds.sphere( n='mySphere' )
cmds.scriptJob( runOnce=True, attributeChange=['mySphere.ty', warn] )

# create a job to detect a new attribute named "tag"
#
def detectNewTagAttr():
    print "New tag attribute was added"

cmds.scriptJob( runOnce=True, attributeAdded=['mySphere.tag',detectNewTagAttr] )
cmds.addAttr( 'mySphere', ln='tag', sn='tg', dt='string')

# list all the existing conditions and print them nicely
conds2 = cmds.scriptJob( listConditions=True )
for cond in sorted(conds2):
    print cond
More Examples (14:05) with Jeremy McLain
Here are some tips on writing extension methods.
As you can imagine extension methods could be misused to cause a lot of confusion. 0:00 Notice how they appear to be part of the original class, but in fact they aren't. 0:05 When writing extension methods, I stick to a set of guidelines. 0:11 We'll go through these at the end of the workshop. 0:15 A good guideline is to only create extension methods that you could 0:17 realistically see as being part of the original class that you're extending. 0:21 A great example of an extension method is the IsEven method called on an integer. 0:25 So let's create a new class. 0:31 We'll call it, IntExtensions. 0:34 Again, we'll make it a public static class. 0:43 And in here, we'll have a public static method that returns a bool. 0:49 And it's called IsEven. 0:56 It's an extension method, so I've to say this and 0:59 the type that it's going to extend is integer. 1:03 I'll just call the parameter, value, here. 1:07 So in our method, we wanna check to see if this value is an even value. 1:12 So we'll return. 1:18 To see if the value is even, we can just divide the value by 2 and 1:20 to see if there's any remainder. 1:24 There's actually a special operator for that in C#, it's called modulus. 1:27 So we can return the value %2. 1:33 And if that is equal to zero then we know that value is even. 1:38 Otherwise it's false. 1:44 Another way to do this is to use a bitwise operator. 1:47 What we can do is we can take the integer value and 1:50 bitwise & it with the value 1, 1:53 because all odd integers have their lowest bit as 1. 1:58 Now don't worry if you're not familiar with bitwise operations. 2:04 This is just a very fast way to determine if the last bit of a integer is 1. 2:08 So now we can call the IsEven method on any integer 2:15 just like it was part of the original class. 2:20 In order to call it though, we need to make sure that we're 2:22 using the Treehouse.Common namespace where we want to use it. 2:25 So up here, we need to add, using Treehouse.Common. 
2:29 Now down here, we can say 5.IsEven() and this should return false. 2:37 Now it's conceivable that the Treehouse.common namespace 2:45 has a lot of other classes in it. 2:49 We can reduce the scope even more by using a static using directive. 2:51 So we can actually say using static Treehouse.Common.IntExtensions. 2:56 This works with any static class. 3:04 And this makes it even more obvious that 3:07 we want to use the extension methods in this IntExtensions class. 3:10 Now, instead of using the entire Treehouse.Common namespace and 3:15 pulling in all of the other classes that might be in that namespace, 3:19 we're just pulling in the IntExtensions class. 3:23 And thereby, only pulling in this IsEven extension method. 3:26 Remember that just because there's an extension method that extends a type, 3:30 doesn't mean that it can be used from anywhere the type is used. 3:35 Extension methods are scoped to the namespace that they're declared in. 3:38 And we can further reduce that scope with the using static directive just like this. 3:42 It's a good practice to declare extension methods in a limited namespace. 3:48 For example, 3:53 we could have put the IntExtension class in the system namespace. 3:54 But then this extension method would be available everywhere. 3:58 And it wouldn't be very obvious that this method isn't part of the original 4:02 integer type. 4:05 There are a number of classes and 4:07 interfaces that I find myself writing extension methods for a lot. 4:08 One of those is the string class. 4:13 Let's take a look at the documentation for the string class. 4:15 Here's the documentation for the string class. 4:20 If we find the methods that start with split. 4:22 Here we see the all of the methods that start with split take a character array. 4:26 Now this first one, 4:32 actually takes a character array that's actually a params argument. 
4:34 And a params argument allows us to pass in zero, one or 4:38 as many characters as we want and what it does is it just 4:43 combines all those into a single argument of type character array. 4:47 But the thing is, with params arguments, 4:52 is they have to be the last argument in the method. 4:54 So, like this one right here, takes a character array and the second 4:59 argument is the count which is the maximum number of substrings it should return. 5:05 So the first argument here, actually, it's just this regular old character array. 5:11 It's not a params, 5:17 which means we have to instantiate a character array and pass that in. 5:18 So this makes it kind of cumbersome when calling split, 5:24 if we only wanna pass in a single character as the separator. 5:29 We don't wanna have to create a character array every time we call the split method. 5:33 So we can leave this pain a little bit by creating an extension method 5:38 that does this for us. 5:42 So let's go back to our code here and I'll create a new file. 5:44 And it's a string extension, so 5:57 we'll put it in the StringExtensions class. 6:00 So it'll have to be a public static class. 6:08 And we'll make a public static method that returns a string array and 6:15 we'll call it split to match the other ones and it will extend a string. 6:20 So, we'll say this string. 6:26 And here we have to decide what we want to name the this parameter. 6:33 Now there's some debate about what you should name this. 6:36 Personally, I like to call it this. 6:40 But the problem is, is this is a keyword which, 6:43 we can't put it here because it's a keyword in C#. 6:47 It causes the C# compiler to kind of go into a fit. 6:50 So, [LAUGH] well, it just causes a compiler error. 6:56 So what we can do if we want to use a C# keyword as a variable 7:00 is we just prefix it with the @ symbol and this makes a lot of sense because 7:05 this parameter is actually called this parameter. 
7:10 [LAUGH] So calling it @this makes a lot of sense. 7:14 Some people don't like that convention and 7:18 it's fine if you don't wanna do it this way. 7:20 Other people like to call the this parameter target or 7:22 source or value or extended or 7:26 something that just makes sense for the method that you code in, at that time. 7:30 I like to call it @this because I don't have to think about what to name the this 7:34 parameter every time I write an extension method. 7:38 It also makes argument null exceptions really obvious and 7:41 I'll show you what I mean in just a little bit. 7:45 Let's continue to write this method. 7:48 So we have to say which character we want to separate the string on, so 7:51 pass in the separator. 7:56 And we also wanna say the maximum number of times we want the string to split, 8:00 so we'll say int count. 8:04 The thing with extension methods is that 8:06 they can actually be called on an object whose value is null. 8:09 Now, if we were to call an instance method. 8:14 A method that's inside of a regular class on null, 8:17 we'd get a NullReferenceException thrown. 8:21 But because split is actually just a regular static method inside of a static 8:24 class and the object that is being called on is just another argument or 8:31 parameter being passed into the method, that means this can actually be null. 8:37 So we have to check for that, and throw an exception if that's the case. 8:42 Just to demonstrate that these extension methods are just really static 8:47 methods inside of a static class, we can go back to the program class and 8:52 change where we're calling random item, 8:57 to actually be look like a normal static method. 9:00 So we can say IListExtensions.RandomItem() and 9:05 we can pass in our list here. 9:11 So now this looks a lot more like the utility function that we had before. 9:15 So back in our split method, we need to check for null. 
9:24 So I'll say, if(@this == null), 9:28 Now I've said normally, if a method is called on an object whose value is null 9:35 then we'll get a NullReferenceException. 9:39 But sometimes people actually call extension methods 9:43 like a regular static method. 9:46 So instead of throwing a NullReferenceException which really 9:48 wouldn't make sense if you're calling it as a utility method or as a static method. 9:52 What we'll throw is a ArgumentNullException. 9:56 So I'll say, throw new ArgumentNullException. 10:00 And here, we pass in the name of the argument that is null, 10:06 so in this case it's @this. 10:11 There we go. 10:15 Now there's a cool trick in C#. 10:16 There's an operator called nameof that we can use. 10:18 That can make it so that instead of passing in a string here, 10:24 we can actually just pass in the variable itself and nameof will return 10:29 the actual name, the string, that is the name of the variable. 10:34 This is handy because we may wanna refactor in the future and 10:39 change the name of the variable here. 10:43 And as part of that refactoring, the compiler will catch 10:46 that we've changed the variable and will tell us we also need to change it here. 10:50 Strings can actually be quite dangerous when we're using them to refer to code. 10:54 So it's always best to always just stick in the code. 10:59 Now down here, we can finish implementing our method. 11:03 So, we want to return. 11:05 And we'll just call the regular split method from the string class. 11:08 So we'll say @this.Split(). 11:12 Now, this is how we would normally call this method. 11:16 We would create a new array. 11:20 And in the array, we would specify the separator that we want to split on. 11:25 And then pass in count. 11:35 So there you go. 11:37 So now when we're using this method, instead of calling string.Split, 11:38 creating a new array with the separator we wanna split on, 11:44 we can just call this method like so. 
11:48 We'll just say "mySTring".Split('S', 11:51 3) like that. 11:59 Pretty nifty. 12:02 So I mentioned before that calling this parameter @this can 12:09 actually help a little bit when we have argument null exceptions. 12:13 So let's go to program and let's cause one of these exceptions. 12:19 So let's create a string. 12:27 And we'll just call it myString, and set it equal to null. 12:29 And we'll call myString.Split. 12:37 And notice now that there are a lot of different split methods in this 12:45 string class. 12:49 We can go down here, these ones are all part of the regular class. 12:50 And we'll just pass in you can split on a comma, three times. 12:58 And it's not seeing our new extension method because we don't have 13:04 the namespace up here. 13:09 So let's bring this back to using static Treehouse.Common. 13:10 There we go. 13:21 So let's run this and let's see what kind of exception we get. 13:22 All right, so because we're in the debugger, 13:26 it breaks right here where the ArgumentNullException is being thrown. 13:28 And here we see ArgumentNullException was unhandled and 13:33 we go down here, we can click on this view detail. 13:37 And here we see value cannot be null parameter name is this. 13:40 Now my logic for calling the parameter this is because if 13:46 an ArgumentNullException is thrown and the parameter name is this, then I know right 13:51 away that the method that threw this exception is a extension method. 13:55 So that gives me a lot better idea of where to look. 14:01 | https://teamtreehouse.com/library/more-examples | CC-MAIN-2020-40 | refinedweb | 2,406 | 81.33 |
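The two parity checks walked through in this video (the modulus operator and the bitwise AND) are language-agnostic; here is the same idea sketched in Python rather than C#, for comparison:

```python
def is_even(value: int) -> bool:
    """Parity via the modulus operator: even numbers leave no remainder."""
    return value % 2 == 0

def is_even_bitwise(value: int) -> bool:
    """Parity via a bitwise AND: every odd integer has its lowest bit set."""
    return (value & 1) == 0
```

Both functions agree for negative integers as well, since Python defines % and & consistently on them.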
-- | A wrapper around the types and functions from "Data.Graph" to make programming with them less painful. Also
-- implements some extra useful goodies such as 'successors' and 'sccGraph', and improves the documentation of
-- the behaviour of some functions.
--
-- As it wraps "Data.Graph", this module only supports directed graphs with unlabelled edges.
--
-- Incorporates code from the 'containers' package which is (c) The University of Glasgow 2002 and based
-- on code described in:
--
--   /Lazy Depth-First Search and Linear Graph Algorithms in Haskell/,
--   by David King and John Launchbury
module Data.Graph.Wrapper (
    Edge,
    Graph,
    vertex,

    fromListSimple,
    fromList,
    fromListLenient,
    fromListBy,
    fromVerticesEdges,

    toList,
    vertices,
    edges,

    successors,
    outdegree,
    indegree,

    transpose,

    reachableVertices,
    hasPath,

    topologicalSort,
    depthNumbering,

    SCC(..),
    stronglyConnectedComponents,
    sccGraph,

    traverseWithKey
  ) where

import Data.Graph.Wrapper.Internal

import Control.Arrow (second)
import Control.Monad
import Control.Monad.ST

import Data.Array
import Data.Array.ST
import qualified Data.Graph as G
import qualified Data.IntSet as IS
import Data.List (sortBy, mapAccumL)
import Data.Maybe (fromMaybe, fromJust, mapMaybe)
import qualified Data.Map as M
import Data.Ord
import qualified Data.Set as S

import qualified Data.Foldable as Foldable
import qualified Data.Traversable as Traversable


fst3 :: (a, b, c) -> a
fst3 (a, _, _) = a

snd3 :: (a, b, c) -> b
snd3 (_, b, _) = b

thd3 :: (a, b, c) -> c
thd3 (_, _, c) = c


-- amapWithKey :: Ix i => (i -> v -> v') -> Array i v -> Array i v'
-- -- More efficient, but not portable (uses GHC.Arr exports):
-- --amapWithKey f arr = unsafeArray' (bounds arr) (numElements arr) [(i, f i (unsafeAt arr i)) | i <- [0 .. n - 1]]
-- amapWithKey f arr = array (bounds arr) [(i, f i v) | (i, v) <- assocs arr]

amapWithKeyM :: (Monad m, Ix i) => (i -> v -> m v') -> Array i v -> m (Array i v')
amapWithKeyM f arr = liftM (array (bounds arr)) $ mapM (\(i, v) -> liftM (\v' -> (i, v')) $ f i v) (assocs arr)

-- | Construct a 'Graph' where the vertex data double up as the indices.
--
-- Unlike 'Data.Graph.graphFromEdges', vertex data that is listed as edges that are not actually themselves
-- present in the input list are reported as an error.
fromListSimple :: Ord v => [(v, [v])] -> Graph v v
fromListSimple = fromListBy id

-- | Construct a 'Graph' that contains the given vertex data, linked up according to the supplied key extraction
-- function and edge list.
--
-- Unlike 'Data.Graph.graphFromEdges', indexes in the edge list that do not correspond to the index of some item in the
-- input list are reported as an error.
fromListBy :: Ord i => (v -> i) -> [(v, [i])] -> Graph i v
fromListBy f vertices = fromList [(f v, v, is) | (v, is) <- vertices]

-- | Construct a 'Graph' directly from a list of vertices (and vertex data).
--
-- If either end of an 'Edge' does not correspond to a supplied vertex, an error will be raised.
fromVerticesEdges :: Ord i => [(i, v)] -> [Edge i] -> Graph i v
fromVerticesEdges vertices edges
  | M.null final_edges_map = fromList done_vertices
  | otherwise              = error "fromVerticesEdges: some edges originated from non-existant vertices"
  where
    (final_edges_map, done_vertices) = mapAccumL accum (M.fromListWith (++) (map (second return) edges)) vertices
    accum edges_map (i, v) = case M.updateLookupWithKey (\_ _ -> Nothing) i edges_map of
        (mb_is, edges_map) -> (edges_map, (i, v, fromMaybe [] mb_is))

-- | Construct a 'Graph' that contains the given vertex data, linked up according to the supplied index and edge list.
--
-- Unlike 'Data.Graph.graphFromEdges', indexes in the edge list that do not correspond to the index of some item in the
-- input list are reported as an error.
fromList :: Ord i => [(i, v, [i])] -> Graph i v
fromList = fromList' False

-- | Construct a 'Graph' that contains the given vertex data, linked up according to the supplied index and edge list.
--
-- Like 'Data.Graph.graphFromEdges', indexes in the edge list that do not correspond to the index of some item in the
-- input list are silently ignored.
fromListLenient :: Ord i => [(i, v, [i])] -> Graph i v
fromListLenient = fromList' True

{-# INLINE fromList' #-}
fromList' :: Ord i => Bool -> [(i, v, [i])] -> Graph i v
fromList' lenient vertices = G graph key_map vertex_map
  where
    max_v = length vertices - 1
    bounds0 = (0, max_v) :: (G.Vertex, G.Vertex)

    sorted_vertices = sortBy (comparing fst3) vertices

    index_vertex = if lenient then mapMaybe (indexGVertex'_maybe key_map) else map (indexGVertex' key_map)

    graph      = array bounds0 $ [0..] `zip` map (index_vertex . thd3) sorted_vertices
    key_map    = array bounds0 $ [0..] `zip` map fst3 sorted_vertices
    vertex_map = array bounds0 $ [0..] `zip` map snd3 sorted_vertices

-- | Morally, the inverse of 'fromList'. The order of the elements in the output list is unspecified, as is the order of the edges
-- in each node's adjacency list. For this reason, @toList . fromList@ is not necessarily the identity function.
toList :: Ord i => Graph i v -> [(i, v, [i])]
toList g = [(indexGVertexArray g ! m, gVertexVertexArray g ! m, map (indexGVertexArray g !) ns) | (m, ns) <- assocs (graph g)]

-- | Find the vertices we can reach from a vertex with the given indentity
successors :: Ord i => Graph i v -> i -> [i]
successors g i = map (gVertexIndex g) (graph g ! indexGVertex g i)

-- | Number of edges going out of the vertex.
--
-- It is worth sharing a partial application of 'outdegree' to the 'Graph' argument if you intend to query
-- for the outdegrees of a number of vertices.
outdegree :: Ord i => Graph i v -> i -> Int
outdegree g = \i -> outdegrees ! indexGVertex g i
  where outdegrees = G.outdegree (graph g)

-- | Number of edges going in to the vertex.
--
-- It is worth sharing a partial application of 'indegree' to the 'Graph' argument if you intend to query
-- for the indegrees of a number of vertices.
indegree :: Ord i => Graph i v -> i -> Int
indegree g = \i -> indegrees ! indexGVertex g i
  where indegrees = G.indegree (graph g)

-- | The graph formed by flipping all the edges, so edges from i to j now go from j to i
transpose :: Graph i v -> Graph i v
transpose g = g { graph = G.transposeG (graph g) }

-- | Topological sort of of the graph (<>). If the graph is acyclic,
-- vertices will only appear in the list once all of those vertices with arrows to them have already appeared.
--
-- Vertex /i/ precedes /j/ in the output whenever /j/ is reachable from /i/ but not vice versa.
topologicalSort :: Graph i v -> [i]
topologicalSort g = map (gVertexIndex g) $ G.topSort (graph g)

-- | List all of the vertices reachable from the given starting point
reachableVertices :: Ord i => Graph i v -> i -> [i]
reachableVertices g = map (gVertexIndex g) . G.reachable (graph g) . indexGVertex g

-- | Is the second vertex reachable by following edges from the first vertex?
--
-- It is worth sharing a partial application of 'hasPath' to the first vertex if you are testing for several
-- vertices being reachable from it.
hasPath :: Ord i => Graph i v -> i -> i -> Bool
hasPath g i1 = (`elem` reachableVertices g i1)

-- | Number the vertices in the graph by how far away they are from the given roots. The roots themselves have depth 0,
-- and every subsequent link we traverse adds 1 to the depth. If a vertex is not reachable it will have a depth of 'Nothing'.
depthNumbering :: Ord i => Graph i v -> [i] -> Graph i (v, Maybe Int)
depthNumbering g is = runST $ do
    -- This array records the minimum known depth for the node at the moment
    depth_array <- newArray (bounds (graph g)) Nothing :: ST s (STArray s G.Vertex (Maybe Int))

    let -- Lets us adjust the known depth given a new observation
        atDepth gv depth = do
            mb_old_depth <- readArray depth_array gv
            let depth' = maybe depth (`min` depth) mb_old_depth
            depth' `seq` writeArray depth_array gv (Just depth')

        -- Do an depth-first search on the graph (checking for cycles to prevent non-termination),
        -- recording the depth at which any node was seen in that array.
        gos seen depth gvs = mapM_ (go seen depth) gvs
        go seen depth gv
          | depth `seq` False   = error "depthNumbering: unreachable"
          | gv `IS.member` seen = return ()
          | otherwise = do
              gv `atDepth` depth
              gos (IS.insert gv seen) (depth + 1) (graph g ! gv)

    gos IS.empty 0 (map (indexGVertex g) is)

    -- let go _    _     []  = return ()
    --     go seen depth gvs = do
    --         let go_one (seen, next_gvs) gv
    --               | gv `IS.member` seen = return (seen, next_gvs)
    --               | otherwise = do gv `atDepth` depth
    --                                return (IS.insert gv seen, next_gvs ++ (graph g ! gv))
    --         (seen, next_gvs) <- foldM go_one (seen, []) gvs
    --         go seen (depth + 1) next_gvs
    --
    -- go IS.empty 0 (map (indexGVertex g) is)

    gvva <- amapWithKeyM (\gv v -> liftM (\mb_depth -> (v, mb_depth)) $ readArray depth_array gv) (gVertexVertexArray g)
    return $ g { gVertexVertexArray = gvva }


data SCC i = AcyclicSCC i
           | CyclicSCC [i]
           deriving (Show)

instance Functor SCC where
    fmap f (AcyclicSCC v) = AcyclicSCC (f v)
    fmap f (CyclicSCC vs) = CyclicSCC (map f vs)

instance Foldable.Foldable SCC where
    foldMap f (AcyclicSCC v) = f v
    foldMap f (CyclicSCC vs) = Foldable.foldMap f vs

instance Traversable.Traversable SCC where
    traverse f (AcyclicSCC v) = fmap AcyclicSCC (f v)
    traverse f (CyclicSCC vs) = fmap CyclicSCC (Traversable.traverse f vs)

-- | Strongly connected components (<>).
--
-- The SCCs are listed in a *reverse topological order*. That is to say, any edges *to* a node in the SCC
-- originate either *from*:
--
--   1) Within the SCC itself (in the case of a 'CyclicSCC' only)
--   2) Or from a node in a SCC later on in the list
--
-- Vertex /i/ strictly precedes /j/ in the output whenever /i/ is reachable from /j/ but not vice versa.
-- Vertex /i/ occurs in the same SCC as /j/ whenever both /i/ is reachable from /j/ and /j/ is reachable from /i/.
stronglyConnectedComponents :: Graph i v -> [SCC i]
stronglyConnectedComponents g = map decode forest
  where
    forest = G.scc (graph g)
    decode (G.Node v []) | mentions_itself v = CyclicSCC [gVertexIndex g v]
                         | otherwise         = AcyclicSCC (gVertexIndex g v)
    decode other = CyclicSCC (dec other [])
      where dec (G.Node v ts) vs = gVertexIndex g v : foldr dec vs ts
    mentions_itself v = v `elem` (graph g ! v)

-- | The graph formed by the strongly connected components of the input graph. Each node in the resulting
-- graph is indexed by the set of vertex indices from the input graph that it contains.
sccGraph :: Ord i => Graph i v -> Graph (S.Set i) (M.Map i v)
sccGraph g = fromList nodes'
  where
    -- As we consume the SCCs, we accumulate a Map i (S.Set i) that tells us which SCC any given index belongs to.
    -- When we do a lookup, it is sufficient to look in the map accumulated so far because nodes that are successors
    -- of a SCC must occur to the *left* of it in the list.
    (_final_i2scc_i, nodes') = mapAccumL go M.empty (stronglyConnectedComponents g)

    --go :: M.Map i (S.Set i) -> SCC i -> (M.Map i (S.Set i), (S.Set i, M.Map i v, [S.Set i]))
    go i2scc_i scc = (i2scc_i', (scc_i, Foldable.foldMap (\i -> M.singleton i (vertex g i)) scc, Foldable.foldMap (\i -> map (fromJust . (`M.lookup` i2scc_i')) (successors g i)) scc))
      where
        -- The mechanism by which we index the new graph -- the set of indexes of its components
        scc_i = Foldable.foldMap S.singleton scc
        i2scc_i' = i2scc_i `M.union` Foldable.foldMap (\i -> M.singleton i scc_i) scc
.
It's been some time is there some way to retrieve the active build system now?
Na, you can do it manually, though. Have a look here:
It doesn't take into consideration the variants though, but it's not hard to implement.
variants
Build system from the current project aren't taken into consideration too, but, again, it can be fixed.
That only returns the possible build systems for the build system selection, not what is actually selected. Besides, you can override the build in the menu.
I screwed around a bit and here is my attempt:
_BUILD_SYS_HANDLES = dict()
def get_build_system(window):
buildhandle = _BUILD_SYS_HANDLES[window.id()]
if isinstance(buildhandle, int):
return window.project_data()['build_systems'][buildhandle]
elif isinstance(buildhandle, str) and buildhandle != "":
return sublime.decode_value(sublime.load_resource(buildhandle))
class BuildSystemWatcher(sublime_plugin.EventListener):
def on_window_command(self, window, command_name, args):
if command_name != 'set_build_system':
return None
index = args.get('index', None)
global _BUILD_SYS_HANDLES
if not index is None:
_BUILD_SYS_HANDLES[window.id()] = index
else:
_BUILD_SYS_HANDLES[window.id()] = args.get('file', None)
print(_BUILD_SYS_HANDLES) # debug
This almost works... It works when the user selects a build system after the plugin is loaded. On startup however, every window gets its selected build system in some way and that doesn't get caught by this event listener. So, this is pretty much unreliable.
If Sublime Text would issue the "select_build_system" command for each of its windows after all the plugins are loaded, this would actually work. | https://forum.sublimetext.com/t/getting-the-current-build-system/5886/7 | CC-MAIN-2018-09 | refinedweb | 239 | 51.75 |
Image processing using ObjC
Hi all. Non-programmer here. I've been using pythonista to process photos of student work, to remove the background and shadows etc. It is working fairly well, however, it is incredibly slow if I use full resolution photos. I thought I might be able to speed things up by using ObjC and CoreImage Filters, however, I have no idea how to get started using ObjC within Pythonista, other than what is shown in the sample script Camera Scanner.py, which is significantly past my understanding.
Can anyone give me an idea on how to get started? I'm happy to experiment with different filters, just not sure how to get a filter actually working and have an image file outputted.
My original Pythonista script is very simple - just a series of filters (I use the workflow app to get the photos into Pythonista, then the cleaned up photos are PDFed so I can open them in Notability to annotate, to send back to my students (highschool Chemistry)).
Any help would be much appreciated!
Original Script:
from PIL import Image, ImageOps, ImageFilter, ImageChops import photos, clipboard, webbrowser imo=clipboard.get_image(idx=0) if not imo.mode == 'RGB': img=imo.convert('RGB') im1=img.filter(ImageFilter.MaxFilter(size=9)) im2=ImageChops.subtract(im1,img) im3=ImageOps.invert(im2) im4=im3.filter(ImageFilter.SHARPEN) im5=ImageOps.autocontrast(im4,cutoff=1) clipboard.set_image(im5,jpeg_quality=1.0) webbrowser.open('workflow://')
- plessner14
Take a look at this as it uses some CoreImage filters.
Thanks for that - trying to figure it out now - steep learning curve!
- bruceathome
Hi plessner14. The photo editor script fails when I try and run it - 'no module named toolz' (line 5).
Regards,
Bruce
So I'm trying to do something like the following script:
import photos, console
from objc_util import *
CIFilter, CIImage, CIContext, CIDetector, CIVector = map(ObjCClass, ['CIFilter', 'CIImage', 'CIContext', 'CIDetector', 'CIVector'])
imo = photos.pick_image()
img = ObjCClass('CIGaussianBlur').imo
img.show()
Unfortunately I'm obviously not getting the syntax correct, as it fails, and I can't find a simple example of what I want to do - just run a filter, with parameters I specify, on an image I pick from the photo album.
Can anyone help?
- plessner14
@bruceathome Sorry about that. toolz is a pure Python helper module that I used. You can download it with pip in Stash
@bruceathome CoreImage filters are certainly an interesting use case for ObjC bridging, but I understand that it can be a bit daunting to figure out how to get started. So I've made this little wrapper module that allows you to use CoreImage in a more “pythonic” manner, without needing to know very much about Objective-C or Cocoa.
It's meant to be used as a module, though you can also run it directly for a quick demo.
Save the script above as
core_image.pyin the same folder as your other script. Then try something like this:
import photos from core_image import CImage img = CImage(photos.pick_asset()) img2 = img.filter('CIGaussianBlur', radius=10) img2.show() # or: # img2.save_jpeg('output.jpg')
You can find a bunch of other examples, and details on how to set different kinds of filter parameters in the gist.
You'll also need Apple's Core Image Filter Reference to look up filter names and their different parameters. There are a lot.
Have fun!
@plessner - thanks for that - all up and running now.
@omz admin - that's pretty much exactly what I was after - very much appreciated.
Regards,
Bruce | https://forum.omz-software.com/topic/4504/image-processing-using-objc/5 | CC-MAIN-2020-10 | refinedweb | 587 | 56.76 |
On Wed, Jun 2, 2010 at 8:36 AM, Gmail <arnodel at googlemail.com> wrote: > > On 1 Jun 2010, at 18:36, cool-RR wrote: > > Hello, > >.) > > > Not exactly (python 2.6): > > >>> class Foo(object): > ... def f(self): pass > ... > >>> Foo.f > <unbound method Foo.f> > >>> Foo.f.im_class > <class '__main__.Foo'> > >>> class Bar(Foo): pass > ... > >>> bar.f > <unbound method Bar.f> > >>> Bar.f.im_class > <class '__main__.Bar'> > > >? > > > Unbound methods in Python 2.X were objects that were created on class > attribute access, not when the class was created, so what you are asking for > is different from what Python 2.X provided. Here is a very simplified way > to mimic 2.X in 3.X via metaclasses (Python 3.2): > > >>> class FooType(type): > ... def __getattribute__(self, attrname): > ... attr = super().__dict__[attrname] > ... if isinstance(attr, type(lambda:0)): > ... return ("unbound method", self, attr) > ... else: > ... return attr > ... > >>> class Foo(metaclass=FooType): > ... def f(self):pass > ... > >>> Foo.f > ('unbound method', <class '__main__.Foo'>, <function f at 0x445fa8>) > >>> Foo().f() > >>> > > What you want maybe instead is a metaclass that overrides type.__new__ or > type.__init__ so that each function in the attributes of the class is > wrapped in some kind of wrapper like this: > > class DefinedIn: > def __init__(self, f, classdef): > self.classdef = classdef > self.f = f > def __call__(self, *args, **kwargs): > return self.f(*args, **kwargs) > > -- > Arnaud > > Thanks for the corrections and the metaclass, Arnaud. (And thanks to you too, Terry.) I might use it in my project. > so what you are asking for is different from what Python 2.X provided. Yes, I have been imprecise. So I'll correct my idea: I want Python 3.x to tell me the class from which the unbound method was accessed. (It can be done either on creation or or access, whatever seems better to you.) So I propose this as a modification of Python. Ram. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: <> | https://mail.python.org/pipermail/python-ideas/2010-June/007318.html | CC-MAIN-2018-26 | refinedweb | 319 | 80.78 |
@bigtest/mocha
Convergent Mocha functions for testing against asynchronous states
Synopsis
import { describe, beforeEach, it } from '@bigtest/mocha'; import { expect } from 'chai'; describe('clicking my button', () => { beforeEach(() => $button.click()); // repeatedly asserts until passing it('shows a loading indicator', () => { expect($button.className).to.include('is-loading'); }); // repeatedly asserts it is passing until the timeout it.always('does not navigate away', () => { expect(app.location).to.equal('/') }).timeout(200); })
Convergent Assertions
Typically, when testing asynchronous states (such as rendered content in an application) your tests need to run at just the right moment so that they are executed in the correct context. If your tests run too soon, they will fail; if they run too slow, well then you just have slow tests.
Converging on a state means asserting against a state until the assertion passes, in which case you have successfully converged on that state!
This package uses
@bigtest/convergence
to repeatedly run assertions until they pass, or until the timeout has
expired. Performing tests in this way allows them to pass the moment
the desired state is achieved. This results in very fast tests when
testing asynchronous things.
Read the
@bigtest/convergence docs on
convergences
for more info as to why converging on a desired state is better than
trying to time it properly.
How does it work?
For the most part, you write tests in the exact same way that you're
used to writing tests with Mocha. The only difference is that this
package wraps Mocha's
it in a convergence helper so that any
assertions you write using
it become convergent assertions that
allow you to easily test asynchronous states.
This package also wraps the Mocha hooks
before,
after,
beforeEach, and
afterEach to support automatically timing and
running returned
Convergence instances from
@bigtest/convergence.
Writing Tests
Because convergent assertions are run repeatedly until they pass, it is highly recommended that you do not perform any side-effects in your assertions. This will result in your side-effect being run multiple, perhaps even hundreds of times.
For this reason, you should keep your side-effect producing code in hooks and out of your assertions. These "pure assertions" also help your tests be more readable and explicit.
:no_entry_sign: do not do this:
describe('my button', () => { it('shows a loading indicator after clicking', () => { // this will be called every time the assertion runs $button.click(); // it might take a few milliseconds for any side-effects to // happen, so this might fail the first time and cause this entire // assertion to run again, thus clicking the button again expect($button.className).to.include('is-loading'); }); });
:white_check_mark: do this:
describe('clicking my button', () => { // keep side-effects inside hooks beforeEach(() => $button.click()); // a pure assertion has no side-effects; even if it fails, it can // be run again and again without consequence it('shows a loading indicator', () => { expect($button.className).to.include('is-loading'); }); });
Asserting that something has not happened
Another common scenario is asserting that something has not happened. If you were to test for this normally (or even with a convergent assertion above) the test could potentially pass successfully before a side-effect has time to even happen.
In these scenarios, you want to converge when the state meets an expectation for a given period of time. In other words, "if this assertion remains true for X amount of time, this test is considered to be passing."
@bigtest/mocha provides an
it.always method to do just this. This
method will run the assertion throughout the entire timeout period
ensuring it never fails. When the assertion does fail, the test
fails. If the assertion never fails, it will pass just after the
timeout period.
describe('clicking my button', () => { beforeEach(() => $button.click()); // the default timeout for it.always is 100ms it.always('does not navigate away for at least 1 second', () => { expect(app.location).to.equal('/'); }).timeout(1000); });
Convergent Hooks
Sometimes you may attempt to perform an async task to find it fails
due to a preconceived state not being met. For example, you can't
click a button if it doesn't exist in the DOM. You may use
@bigtest/convergence to converge on these states and return
convergences inside of your hooks. The hooks provided by
@bigtest/mocha will automatically set the timeout and run returned
Convergence instances.
describe('clicking my button', () => { // @bigtest/mocha will wait for a returned Convergence to converge // before continuing with the assertions beforeEach(() => new Convergence() .once(() => expect($button).to.exist) .do(() => $button.click())); it('shows a loading indicator', () => { expect($button.className).to.include('is-loading'); }); });
@bigtest/convergence API
docs
for working with the
Convergence class.
Pausing Tests
Pausing tests can be very useful when debugging. It allows you to
investigate your application during a critical moment in your testing
suite. Mocha does not have a convinient way to pause tests, but
@bigtest/mocha helps alleviate this by providing an
it.pause
method. It works by setting the current timeout to
0 and gives Mocha
a promise that never resolves. This effectively pauses the entire
suite until you remove
it.pause and restart your tests.
describe('clicking my button', () => { beforeEach(() => $button.click()); // if the class is never set, this test will fail; it.pause allows // us to investigate the app at this point in the test suite it.pause('shows a loading indicator', () => { expect($button.className).to.include('is-loading'); }); }); | https://www.npmtrends.com/@bigtest/mocha | CC-MAIN-2021-43 | refinedweb | 892 | 56.15 |
Things to know about premium messaging
Happy summer!
We just wanted to check in and let you know a couple of things about the upcoming general availability of our premium messaging offering. As we have more and more customers join our preview program, we wanted to make sure that you are aware of a few benefits, as well as a couple "nice-to-know" requirements.
- 1 MB message size - We heard your feedback and premium messaging now supports message sizes up to 1 MB instead of 256KB.
- Partitioning –Premium messaging uses 2 partitions.
- Partitioning was originally introduced for load distribution and availability by favoring healthy nodes. However, management operations can become more complex as the number of partitions increases. Since premium messaging offers dedicated capacity, partitioning is not useful for storage size either. Hence, we only use 2 partitions with premium messaging in order to increase availability.
- 80GB is still supported by allowing up to 40GB per partition
- Default API version - For premium namespaces, the new default REST API version is "2013-10"
- If you do not provide an API version, or a lower version than “2013-10”, we will override the value to "2013-10"
- Client - Please use most up to date version of the .NET client available here.
Stay tuned for more blogs about the upcoming availability of premium messaging!
Questions? Let us know in the comments.
Happy messaging! | https://docs.microsoft.com/en-us/archive/blogs/servicebus/things-to-know-about-premium-messaging | CC-MAIN-2022-33 | refinedweb | 231 | 53.81 |
This is an XML namespace reserved for use with the VoiceML 2.0 Implementation Report.
The Implementation Report for VoiceXML 2.0 describes a framework for authoring VoiceXML tests. The framework abstracts the markup used to define interactions with the user, allowing vendors to use their own test infrastructure by applying an XSLT transformation to the test source. Modifications to the test infrastructure should only require a change to the XSLT template followed by re-transformation of the test source.
Note. This namespace may change without notice. For updated information, please refer to the latest version of the VoiceML Implementation Report.
For more information about XML, please refer to The Extensible Markup Language (XML) 1.0 specification. For more information about XML namespaces, please refer to the Namespaces in XML specification. | http://www.w3.org/2002/vxml-conformance | CC-MAIN-2013-20 | refinedweb | 131 | 50.12 |
Effortless unit testing with AVA
There are a lot of test runners out there. Mocha, Jasmine, tape and more. I hear you thinking: “another framework?”. But Ava is a worthy alternative for the existing solutions.
Simplicity
First of all AVA is simple. It’s really easy to set up. First install ava:
npm install ava -g, then run ava by running
ava. AVA will by default detect tests using common patterns. Let’s make one:
// test.js import test from 'ava' test('one plus one is two', t => { t.is(1 + 1, 2) })
This is it, simple as that. Maybe you are thinking, ES6 code? Don’t you have to transpile that? No, AVA supports ES6 by default, awesome!
Efficiency
Tests run in parallel by default, this means that in general they will run faster. You’re able to run them in serial if needed. However, having your tests run in parallel also forces you to make sure all your tests are completely independent of each other, which is always a good practice.
Asynchronous
AVA supports asynchronous testing by default:
test('async test', t => { t.plan(1) setTimeout(() => { t.is(3, 1 + 2) }, 1000) })
We let AVA know that one assertion is coming up, so AVA will wait for the timeout to finish. If you return a promise you don’t have to wait at all.
test('async test', t => { return Promise.resolve('wecodetheweb') .then(text => { t.is(text, 'wecodetheweb'); }); })
Only one
Another nifty feature that you can only run the test that you are working on. This can be handy if you have a lot of tests and you want to focus one fixing one at a time:
test('one plus one is two', t => { t.is(1 + 1, 2) }) test.only('two plus one is three', t => { t.is(1 + 2, 3) })
It will now only run the second test.
Mocking
AVA has no mocking built in, to mock functions just use Sinon.js. Install sinon by running
npm install sinon --save-dev. Then use it in your tests:
import sinon from 'sinon' const myFunction = sinon.spy() test('my function is running!', t => { myFunction() t.true(myFunction.called) })
Running AVA locally
Installing AVA globally is okay to play around with it. But in a “real” project you want your dependencies to be local. To setup AVA for your project simply run
npm install ava --save-dev. Then add the following script to your package.json:
{ "scripts": { "test": "ava", "test:watch": "ava --watch" } }
You can now run your tests using
npm run test or
npm run test:watch if you want to let them automatically run on change.
Code coverage
Getting code coverage reports is also really easy. AVA runs tests using so called “subprocesses”, this is why just using istanbul for code coverage will not work. Luckily there is a wrapper around istanbul that supports subprocesses called nyc. Install nyc using
npm install nyc --save-dev, then run it with
nyc ava.
Conclusion
AVA is really worth a try, it combines the ease of use of Jasmine with the simplicity of tape. It works for both front- and back-end Javascript applications. I can start summing up all the features in this article, but that’s what the documentation is for.
Reference
- AVA: github.com/sindresorhus/ava
- Sinon.js: sinonjs.org
- nyc: github.com/bcoe/nyc | https://wecodetheweb.com/2016/04/19/effortless-unit-testing-with-ava/ | CC-MAIN-2019-18 | refinedweb | 558 | 77.13 |
Session.Timeout means that after how much time the user's session will expire and the user will not be able to access the items in the Session object. By default the Session.Timeout is 20 minutes. You can change this through the web.config or the page level code. Let's see a small example:
Session.Timeout = 1;
Session["Name"] = "Mohammad Azam";
In the above code I am setting the Session.Timeout to "1" minute. After that I put "Mohammad Azam" in the Session object. This means that if you don't make request to the server and sit idle then after 1 minute you won't be able to access the "Name" key in the Session bag. A good scenario will be a user filling out a long form and you store the values of the form in a Session object. The user fills only couple of fields and then sit idle for 25 minutes keeping in mind that the Session timeout is 20 minutes. Now, if the user tries to retrieve something from the Session object after 25 minutes it will throw the ArgumentNullException as the Session object has been removed.
The question is how do we renew the Session and how do we notify the user that his session is about to expire. The idea is to use JavaScript and notify the user 10 seconds before the Session is about to expire.
First, here is the code from the TestPage.aspx.cs:
public partial class TestPage : BasePage { protected void Page_Load(object sender, EventArgs e) { }
protected void Button1_Click(object sender, EventArgs e) { Session["Name"] = "Mohammad Azam"; }
protected void Button2_Click(object sender, EventArgs e) { if (Session["Name"] == null) throw new ArgumentNullException("Session[Name] is null"); } }
With the click of a button I insert my name "Mohammad Azam" into the Session object. The Session.Timeout is 1 minute and is configured in the web.config file.
Here is the code for the BasePage.cs:
public class BasePage : System.Web.UI.Page {
public BasePage() { this.Load += new EventHandler(BasePage_Load); }
void BasePage_Load(object sender, EventArgs e) { AjaxPro.Utility.RegisterTypeForAjax(typeof(BasePage)); InjectTimerScript(); }
[AjaxPro.AjaxMethod] public void UpdateSession() { }
public void InjectTimerScript() { // set the time for 10 seconds less then the expiration time!
var t = Session.Timeout * 50 * 1000; //var t = 1000; string script = String.Empty;
if (!ClientScript.IsClientScriptBlockRegistered("Timer")) { script = "function refreshSession() { document.getElementById(\"btnRefreshSession\").style.display ='none'; document.getElementById(\"divTimeOutMessage\").innerHTML = \"\"; DemoWatiN.BasePage.UpdateSession(); } function notifyTimeOut() { document.getElementById(\"btnRefreshSession\").style.display ='block'; document.getElementById(\"divTimeOutMessage\").innerHTML = \" Your Session is about to expire! \" } setInterval(\"notifyTimeOut()\"," + t + ")";
ClientScript.RegisterClientScriptBlock(this.GetType(), "Timer", script, true); } }
protected override void Render(HtmlTextWriter writer) { writer.Write("<div id=\"divTimeOutMessage\"></div><input type=\"button\" id=\"btnRefreshSession\" style=\"display:none\" value=\"Refresh Session\" onclick=\"refreshSession()\" />"); base.Render(writer); }
When the page loads I register the page to use AJAXPRO.NET library and inject the timer script. The InjectTimerScript method injects the setInterval method which fires 10 seconds before the Session timeout. It also presents the user with a button "Refresh Session". When the "Refresh Session" button is clicked an Ajax call is made to the "UpdateSession" method. There is NO code inside the UpdateSession method. The call/request renews the session for another 1 minute.
This technique helps us to notify the users that their session is about to expire so if they like to continue they must refresh/renew their session.
Print | posted @ Saturday, January 05, 2008 6:21 AM
©
Mohammad Azam
Key West theme by
Robb Allen. | http://geekswithblogs.net/AzamSharp/archive/2008/01/05/118269.aspx | crawl-002 | refinedweb | 580 | 51.14 |
Opened 8 years ago
Closed 5 years ago
#11816 closed Cleanup/optimization (duplicate)
defaults in genereated settings.py are absolute paths for template directories, media directories and media urls
Description
'best practice' seems to be to use
import os OUR_ROOT = os.path.realpath( os.path.dirname(__file__) ) MEDIA_ROOT = os.path.join(OUR_ROOT, 'media') MEDIA_URL = '/media/' TEMPLATE_DIRS = ( # Put strings here, like "/home/html/django_templates" or "C:/www/django/templates". # Always use forward slashes, even on Windows. # Don't forget to use absolute paths, not relative paths. os.path.join(OUR_ROOT, 'templates') )
Shouldn't the automatically generated settings.py file include these things right from the beginning ?
It can take a newbie like me a long time to figure out all these little details required to make a project relocatable.
Change History (4)
comment:1 Changed 8 years ago by
comment:2 Changed 8 years ago by
comment:3 Changed 6 years ago by
comment:4 Changed 5 years ago by
This is a duplicate of #16504 which was closed as wontfix.
I have mixed feelings about this idea.
While it would make the initial setup of a new project slightly faster and more-beginner friendly, I really, really don't want people to store uploaded data (media files) next to code in production. This is prone to awful misconfigurations — from making the Python code writable by the webserver to serving uninterpreted Python files from the web root, and probably several others.
Since we can't guess what a good location would be (maybe
/var/www/media,
/var/www/{{ projectname }}/media,
D:\\media, ...), the default is empty.
two weeks and not out of triage ? | https://code.djangoproject.com/ticket/11816 | CC-MAIN-2017-13 | refinedweb | 272 | 56.25 |
Opened 4 months ago
Closed 4 months ago
#29489 closed Bug (duplicate)
Shell and default language preference
Description
When we run the django shell (./manage.py shell), the default language code (settings.LANGUAGE_CODE) is not activated by default (settings.USE_I18N is set to True) and user has to activate it manually by performing something like this:
from django.conf import settings from django.utils import translation; translation.activate(settings.LANGUAGE_CODE)
Change History (2)
comment:1 Changed 4 months ago by
comment:2 Changed 4 months ago by
I think Serguey is right. Would be nice to have your feedback with this on Django 2.1.
Note: See TracTickets for help on using tickets.
Duplicate of #17379? | https://code.djangoproject.com/ticket/29489 | CC-MAIN-2018-43 | refinedweb | 116 | 60.82 |
Preface
We know that JavaScript runs in a single-threaded execution environment. This means that only one task can run at a time; if there are multiple tasks, they must queue up, and the next task starts only after the previous one has finished.
Although this model is simple to implement and keeps the execution environment simple, a single long-running task forces every task behind it to wait, delaying the whole program. The familiar "unresponsive" (frozen) browser is often caused by a piece of JavaScript running for a long time (such as an infinite loop), stalling the entire page at that point so that no other task can proceed.
To solve this problem, JavaScript divides task execution into two modes: synchronous and asynchronous. This article introduces several approaches to asynchronous programming and, by comparing them, arrives at the best asynchronous programming solution!
If you would like to read more articles like this, check out my GitHub blog.
I. Synchronous and Asynchronous
We can think of an asynchronous task as being split into two sections: the first section runs, other tasks run in between, and execution returns to the second section when the result is ready. The code that follows an asynchronous task runs immediately, without waiting for the asynchronous task to finish; in other words, asynchronous tasks have no "blocking" effect. For example, suppose a task reads a file and then processes it: the read is started first, other tasks run in the meantime, and the processing step runs only once the file contents arrive.
This discontinuous execution is called asynchronous; correspondingly, continuous execution is called synchronous.
Asynchronous mode is very important. In the browser, time-consuming operations should run asynchronously to keep the browser responsive; the best example is an Ajax request. On the server, "asynchronous mode" is practically the only mode, because the execution environment is single-threaded: if every HTTP request were handled synchronously, server performance would drop sharply and the server would quickly stop responding. The following sections introduce six approaches to asynchronous programming.
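The "two sections" idea can be sketched in code. Below is a minimal simulation in which setTimeout stands in for a slow task such as a file read (Node's fs.readFile behaves the same way), and an array records the order in which the pieces actually run.

```javascript
// Minimal simulation of the "two sections" of an asynchronous task.
// setTimeout stands in for a slow operation such as reading a file.
const order = []

order.push('section 1: start the task')

setTimeout(() => {
  // section 2: runs later, once the result is ready
  order.push('section 2: handle the result')
  console.log(order)
}, 100)

// this code does NOT wait for the task above: no "blocking" effect
order.push('other tasks keep running')
```

Running this prints the two synchronous entries first and the delayed section last, which is exactly the discontinuous execution described above.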
II. The Callback Function
The callback function is the most basic approach to asynchronous operations. The following code is an example:
ajax(url, () => {
  // processing logic
})
But callbacks have a fatal weakness: they easily lead to callback hell. If several requests depend on one another, you might end up writing code like this:
ajax(url, () => {
  // processing logic
  ajax(url1, () => {
    // processing logic
    ajax(url2, () => {
      // processing logic
    })
  })
})
The advantage of callbacks is that they are simple and easy to understand and implement. The disadvantages: they hurt readability and maintainability, the tight coupling between the parts makes the program structure messy and the flow hard to follow (especially with deeply nested callbacks), and each task can specify only one callback function. In addition, errors thrown asynchronously cannot be caught with try/catch, and a callback cannot return a value directly.
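Because try/catch cannot reach an error thrown later inside a callback (the try block has already exited by then), callback-based APIs conventionally pass the error through the callback itself. Here is a sketch of the Node-style "error-first" convention; readData is a made-up example function, not a real API:

```javascript
// Sketch of the "error-first" callback convention: instead of throwing,
// the task passes its error as the first argument of the callback.
function readData(callback) {
  // simulate a slow task that fails
  setTimeout(() => callback(new Error('read failed'), null), 0)
}

readData((err, data) => {
  if (err) {
    // the error travels through the callback argument, not through throw
    console.log('handled:', err.message)
    return
  }
  console.log('data:', data)
})
```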
III. Event Listening
With this approach, the execution of asynchronous tasks does not depend on the order of the code, but on whether an event occurs.
Below are two functions, f1 and f2. The intent is that f2 must not run until f1 has finished. First, bind an event to f1 (jQuery is used here):
f1.on('done', f2);
The line above means: when the done event fires on f1, execute f2. Then rewrite f1:
function f1() {
  setTimeout(function () {
    // ...
    f1.trigger('done');
  }, 1000);
}
In the code above, f1.trigger('done') fires the done event as soon as f1's work completes, which starts the execution of f2.
The advantages of this approach: it is fairly easy to understand, multiple events can be bound, each event can have multiple callback functions, and the parts are decoupled, which helps modularity. The disadvantage is that the whole program becomes event-driven and the flow of execution becomes very unclear; when reading the code, it is difficult to follow the main process.
IV. Publish and Subscribe
We assume that there is a “signal center”. When a task is completed, it will “publish” a signal to the signal center. Other tasks can “subscribe” to the signal center to know when they can start executing. This is called “publish-subscribe pattern”, also known as “observer pattern”.
First, f2 subscribes to the done signal from the signal center jQuery.
jQuery.subscribe('done', f2);
Then f1 is rewritten as follows:
function f1() { setTimeout(function () { // ... jQuery.publish('done'); }, 1000); }
In the above code, jQuery.publish(‘done’) means that after f1 execution is completed, the done signal is issued to jQuery, the signal center, thus triggering f2 execution.
F2 can unsubscribe after execution
jQuery.unsubscribe('done', f2);
The nature of this method is similar to “event monitoring”, but obviously superior to the latter. Because you can monitor the operation of the program by looking at the “message center” to know how many signals exist and how many subscribers there are for each signal.
V. Promise/A+
Promise means to promise that I will give you a result after a period of time. When will it take some time? The answer is asynchronous operation. Asynchronous refers to operations that may take a long time to produce results, such as network requests, reading local files, etc.
1. Three states of 1.Promise
- Pending-Initial state of Promise object instance at creation time
- Fulfilled—- can be understood as a state of success.
- Rejected—- can be understood as a state of failure
Once this promise changes from waiting state to other state, it can never change the state.For example, once the state is resolved, it cannot be changed to full again.
let p = new Promise((resolve, reject) => { reject('reject') Resolve('success')// Invalid code will not be executed }) p.then( value => { console.log(value) }, reason => { console.log(reason)//reject } )
When we construct Promise, the code inside the constructor is executed immediately
new Promise((resolve, reject) => { console.log('new Promise') resolve('success') }) console.log('end') // new Promise => end
2. Chain call of 2.promise
- Each call returns a new Promise instance (which is why then can use chain calls)
- If a result is returned in then, the result will be passed to the next successful callback in then.
- If there is an exception in then, the next then failure callback will be taken.
- If return is used in then, the value of return will be wrapped by Promise.resolve () (see examples 1 and 2)
- Parameters may not be passed in then, and if they are not passed, they will pass through to the next then (see Example 3)
- Catch will catch the exception that was not caught.
Let’s look at a few examples:
//Example 1 Promise.resolve(1) .then(res => { console.log(res) Return 2 // packaged as Promise.resolve(2) }) .catch(err => 3) .then(res => console.log(res))
//Example 2 Promise.resolve(1) .then(x => x + 1) .then(x => { throw new Error('My Error') }) .catch(() => 1) .then(x => x + 1) .then(x => console.log(x)) //2 .catch(console.error)
//Example 3 let fs = require('fs') function read(url) { return new Promise((resolve, reject) => { fs.readFile(url, 'utf8', (err, data) => { if (err) reject(err) resolve(data) }) }) } read('./name.txt') .then(function(data) { An exception occurred in throw new Error() //then, which will lead to the next then failure callback. })//Since the next then has no failed callback, it will continue to look down. If none exists, it will be caught by catch. .then(function(data) { console.log('data') }) .then() .then(null, function(err) { console.log('then', err)// then error }) .catch(function(err) { console.log('error') })
Promise can not only capture errors, but also solve the problem of callback hell. The previous callback hell example can be rewritten into the following code:
ajax(url) .then(res => { console.log(res) return ajax(url1) }).then(res => { console.log(res) return ajax(url2) }).then(res => console.log(res))
It also has some shortcomings, such as unable to cancel Promise, and errors need to be captured through callback functions.
VI. Generators/ yield
Generator function is an asynchronous programming solution provided by ES6. Its syntax behavior is completely different from that of traditional functions. The greatest feature of Generator is that it can control the execution of functions.
- Syntactically, it can be first understood as a Generator function is a state machine that encapsulates multiple internal states.
- The Generator function is not only a state machine, but also a traverser object generation function..
- The function can be paused, yield can be paused, the next method can be started, and the expression result after yield is returned each time.
- The yield expression itself does not return a value, or always returns undefined.The next method can take a parameter, which is treated as the return value of the previous yield expression.
Let’s look at an example first:
function *foo(x) { let y = 2 * (yield (x + 1)) let z = yield (y / 3) return (x + y + z) } let it = foo(5) console.log(it.next()) // => {value: 6, done: false} console.log(it.next(12)) // => {value: 8, done: false} console.log(it.next(13)) // => {value: 42, done: true}
Perhaps the result is not consistent with your imagination. Next we will analyze the code line by line:
- First, unlike ordinary functions, the Generator function call returns an iterator
- When the first next is executed, the transfer session is ignored, and the function pauses at yield (x+1), so return 5+1 = 6
- When performing the second next, the passed-in parameter 12 will be taken as the return value of the previous yield expression. If you do not pass in the parameter, yield will always return undefined. Let y = 2 at this time12, so the second yield is equal to 212 / 3 = 8
- When performing the third next, the passed-in parameter 13 will be taken as the return value of the previous yield expression, so z = 13, x = 5, y = 24, and the sum equals 42
Let’s look at another example: there are three local files, 1.txt,2.txt and 3.txt, which have only one sentence. The next request depends on the result of the previous request. We want to call the three files in turn through the Generator function.
//1.txt file 2.txt
//2.txt file 3.txt
//3.txt file End
let fs = require('fs') function read(file) { return new Promise(function(resolve, reject) { fs.readFile(file, 'utf8', function(err, data) { if (err) reject(err) resolve(data) }) }) } function* r() { let r1 = yield read('./1.txt') let r2 = yield read(r1) let r3 = yield read(r2) console.log(r1) console.log(r2) console.log(r3) } let it = r() let { value, done } = it.next() Value.then (function) (data) {//value is a promise console.log(data) //data=>2.txt let { value, done } = it.next(data) value.then(function(data) { console.log(data) //data=>3.txt let { value, done } = it.next(data) value.then(function(data) { Log (data)//data = > end }) }) }) // 2.txt=>3.txt= > end
From the above example, we can see manual iteration
GeneratorFunctions are very troublesome, and the implementation logic is somewhat convoluted, while actual development will generally be coordinated.
coLibrary to use.
coIs a generator-based process control tool for Node.js and browsers. with Promise, you can write non-blocking code in a more elegant way.
Installation
coThe library only needs to:
npm install co
The above example can be easily realized in two sentences.
function* r() { let r1 = yield read('./1.txt') let r2 = yield read(r1) let r3 = yield read(r2) console.log(r1) console.log(r2) console.log(r3) } let co = require('co') co(r()).then(function(data) { console.log(data) }) // 2.txt=>3.txt= > end =>undefined
We can solve the problem of callback hell through the Generator function, and can rewrite the previous callback hell example into the following code:
function *fetch() { yield ajax(url, () => {}) yield ajax(url1, () => {}) yield ajax(url2, () => {}) } let it = fetch() let result1 = it.next() let result2 = it.next() let result3 = it.next()
Async/await
Introduction to 1.Async/Await
With async/await, you can easily accomplish what you did with the generator and the co function. It has the following characteristics:
- Async/await is implemented based on Promise and cannot be used for ordinary callback functions.
- Async/await, like Promise, is non-blocking.
- Async/await makes asynchronous code look like synchronous code, which is its magic.
If async is added to a function, the function will return a Promise.
async function async1() { return "1" } console.log(async1()) // -> Promise {<resolved>: "1"}
The Generator function calls three files in turn. The example is written in async/await and can be realized in a few words.
let fs = require('fs') function read(file) { return new Promise(function(resolve, reject) { fs.readFile(file, 'utf8', function(err, data) { if (err) reject(err) resolve(data) }) }) } async function readResult(params) { try { Letp1 = Await Read (params,' utf8')//Await is followed by a Promise instance let p2 = await read(p1, 'utf8') let p3 = await read(p2, 'utf8') console.log('p1', p1) console.log('p2', p2) console.log('p3', p3) return p3 } catch (error) { console.log(error) } } Readresult ('1.txt'). then (/async function returns a promise data => { console.log(data) }, err => console.log(err) ) // p1 2.txt // p2 3.txt // p3 End //End
2.Async/Await concurrent request
If you request two files, there is no relationship, you can request them concurrently.
let fs = require('fs') function read(file) { return new Promise(function(resolve, reject) { fs.readFile(file, 'utf8', function(err, data) { if (err) reject(err) resolve(data) }) }) } function readAll() { read1() Read2()// This function executes synchronously } async function read1() { let r = await read('1.txt','utf8') console.log(r) } async function read2() { let r = await read('2.txt','utf8') console.log(r) } readAll() // 2.txt 3.txt
Viii. summary
1.JS asynchronous programming evolution history: callback-> promise-> generator-> async+away
The realization of the 2.async/await function is to wrap the Generator function and the automatic actuator in a function.
3.async/await can be said to be the ultimate asynchronous solution.
(1) async/await function has the following advantages over Promise:
- Handling then’s call chain can write code more clearly and accurately
- And it can also gracefully solve the problem of callback to hell.
Of course, async/await function also has some shortcomings, because await has transformed asynchronous code into synchronous code. If multiple asynchronous codes have no dependency but use await will lead to performance degradation, if the code does not have dependency, it is completely possible to use Promise.all.
(2) async/await function improves the Generator function in the following three points:
- Built-in actuator.
The execution of the Generator function must depend on the executor, so there is a co function library, while the async function has its own executor. That is to say,Async functions are executed exactly like normal functions, with only one line required..
- Wider applicability. Co function library convention, yield command can only be followed by Thunk function or Promise object, andThe await command of the async function can be followed by a Promise object and values of the original type (numeric, string, and Boolean values, but this is equivalent to a synchronization operation).
- Better semantics. Async and await have clearer semantics than asterisks and yield. Async indicates that there are asynchronous operations in the func tion, await indicates that the expression immediately following needs to wait for the result. | https://ddcode.net/2019/05/31/six-schemes-of-js-asynchronous-programming/ | CC-MAIN-2021-10 | refinedweb | 2,577 | 56.96 |
Entire .SE TLD Drops Off the Internet
Icemaann writes "Pingdom and Network World are reporting that the SE tld dropped off the internet yesterday due to a bug in the script that generates the SE zone file. The SE tld has close to one million domains, all of which went down due to a missing trailing dot in the SE zone file. Some caching nameservers may still be returning invalid DNS responses for 24 hours."
No big deal (Score:2, Informative)
The downtime lasted 30 minutes, and most domains were probably cached by nameservers anyway.
Re:No big deal (Score:4, Informative)
Yeah, been there done that. *My* fumble only brought 10,000 domains down for about 10 minutes, and no one noticed. (I think all the domains hosted only cat pictures anyway.)
Sorry, that's as big a responsibility as any employer has ever deemed suitable for my incompetent ass.
Re: (Score:3, Interesting)
My biggest bug resulted in about a dozen tigers getting tranquilized.
Re:No big deal (Score:4, Funny)
Are you my motherboard?
Re:No big deal (Score:5, Funny)
The downtime lasted 30 minutes, and most domains were probably cached by nameservers anyway.
I once viddied an animated documentary about a small town in Colorado that lost the internet for 22 minutes [wikipedia.org]. It was not pretty. Our hearts and minds go out to you, people of Sweden. I cannot even fathom what that would be like
... I hope the looting and rioting has died down with the restoration of the internet.
Re: (Score:2)
Nah. In Sweden, when you want to see hot chicks, you just have to go outside. Even looking out the window might suffice. ^^
Re: (Score:2)
Re: (Score:2)
For the poor souls in the .se domain, it was the end of the universe.
Re: (Score:2)
It was not the whole internet, it was only the .se tld ...
Can I riot anyway?
Re: (Score:2)
Depends. Do you live in Sweden? Do you have Swedish relatives? Are there any Swedish meatballs in your refrigerator?
Re: (Score:2)
"For those who love Adam Smith, in Argentina we have only two ISP providers, Telecom and Telefonica. Telefonica has bougth Telecom, so now we have a BIG monopoly on cell phones, wired phones, and internet services."
Ahhh... but that's pure free market in action, señor mío, so you must be grateful.
Re: (Score:3, Insightful)
While the impact of this is no big deal, it's still kind of scary that the people running a decently-sized ccTLD would make such a novice mistake on their zonefile.
Re: (Score:3, Insightful)
You expect them to be absolutely perfect all the time no matter what, forever and ever?
/That's/ unrealistic.
Re: (Score:3, Informative)
Incorrect. The zone file is hosted by Autonomica AB (who own the servers that are authoritative for the "se" domain according to the root servers).
If you were talking about a change to the NS records, you'd - I assume - be correct - Verisign operates a.root-servers.net (which I assume is the root)
Re:No big deal (Score:5, Funny)
The downtime lasted 30 minutes, and most domains were probably cached by nameservers anyway.
I didn't notice the DNS freak out, but I did notice the internet's smug meter had dropped about 30%.
Re:No big deal (Score:5, Funny)
but I did notice the internet's smug meter had dropped about 30%.
Norwegian detected.
Re:No big deal (Score:5, Insightful)
DNS is very simple, but it's just as prone to human error as anything else. If you're responsible for the records of a large number of domains (like, say, an entire country), you probably ought to take some time to develop proper testing and change control procedures before you fiddle with it. It sounds like these guys didn't take it seriously enough and got burned. I hope they'll learn their lesson from this and change their procedures.
Re:No big deal (Score:5, Funny)
DNS is very simple, but it's just as prone to human error as anything else.
Are you kidding? I've been programming DNS for a long time, and if theirs one thing I learned, its that programmers like me don't make errors.
Re: (Score:2, Redundant)
Are you kidding? I've been programming DNS for a long time, and if theirs one thing I learned, its that programmers like me don't make errors.
If one doesn't count spelling errors, apparently.
Re: (Score:2)
"When I worked for a major telecomm here in the US, one of our partner companies submitted a text file generated on a *nix machine [...] I found it more interesting that the reason why the partner company didn't want to muck with it was because the file would be 'validated' with their servers. The inclusion of two CRs threw off the checksum value and nothing would work."
So, the partner company sent you some files. You inserted them on your system which sudden and misteriosuly failed. You blame your partne
Re: (Score:2)
Incorrect. Notepad does not interpret LF or CR on their own as a line break, so you'd find it pretty obvious that the file is malformed when the whole damn thing shows up on a single line. Wordpad will transparently fix it though.
Re: (Score:2)
No big deal? No big deal??? Where the hell else am I supposed to go to look at pictures of hot Swedish women hitting the nightclub scene (in a way that's at least a little SFW) if I can't get to [thelocal.se]?
Re: (Score:3, Insightful)
I wish browsers would store the IP address of the page as well as the domain name in bookmarks. That way if the DNS server goes down you could still get to the site. Of course, the primary lookup should still be the domain name, since a site can have its address changed; the browser would only look at the IP if the DNS lookup failed.
Re: (Score:2)
That feature would be very handy.
The main reason one can't simply record host/ip pairs right now, is due to named-based virtual web servers.
Even if you put in the IP manually, without sending the correct domain in the http request, you won't get the proper page.
Having the IP as a separate field in the bookmarks would let the browser connect to any IP you put there (be it cached, or manually changed when a server is renumbered), but it would still have the needed data to send in the http request to make the
Re:unless you are swedish (Score:4, Insightful)
its "no big deal" until you need to know something off the internet right now, high stakes
I need to know what a fourteen year old thinks about copyright law and I need to know it NOW [smbc-comics.com] !
Re:unless you are swedish (Score:4, Insightful)
Anything sufficiently "high stakes" shouldn't rely on an unreliable medium.
Re: (Score:3, Funny)
If a packet gets through, great. If not, well, it's not the end of the world.
Sounds like a lot of cities' approaches to freeway systems/traffic control.
Re: (Score:3, Funny)
Cache your porn, folks. Just sayin'.
Re: (Score:2)
Re: (Score:2)
Tell that to the "Operation SpoilSport" computers running the missle silos.
Re: (Score:2)
There goes my favorite web site ! (Score:3, Funny)
Goat.se
Re: (Score:3, Funny)
Goat.se
Huh... that's interesting. I've never heard of that one before... I think, though, that based on your recommendation I'll share the link with the rest of the office. I've seen a lot of your posts here in Slashdot, Anonymous Coward, and all the ones I've seen have been pretty highly rated, so I'm guessing you wouldn't link me to a website that wasn't interesting.
Re: (Score:2)
(humor)
The satellite Microsoft Retro Fan Site Windows98.se also went down.
And look. My sig this month is all about your joke.
(No Closing tag. The humor never ends.)
Re:There goes my favorite web site ! [Goat...] (Score:2, Funny)
Don't worry, there's plenty of mirrors......unfortunately.
Re: (Score:3, Funny)
Goat.se
Arrgh... the horror... [goat.se] You'll want to claw your eyes out!
change control / management, anyone? (Score:5, Insightful)
I seriously hope someone is fired or loses a contract over this. Where was the validation, change control, etc? I would expect that at the TLD level, a change to a configuration file would have to be inspected by someone AND run through some syntax-checking scripts...
As for the person who was modded up for saying "hey, no big deal, fixed in 30 minutes!", not quite. DNS servers (and individual computers!) cache negative results. Anything anyone did a query on during those 30 minutes will be negatively cached by their system and their local DNS server. Granted, a whole lot of local Swedish ISPs and network providers have probably flushed their DNS server caches, but it's still going to seriously impact traffic to many, many sites, especially for everyone outside Sweden.
Re: (Score:2)
It really isn't a big deal. The mistake was made, the world has the opportunity to learn from it and the economic impact was probably small but scalable enough to take seriously.
Now if it happened again I'd hope action were taken... don't be so vengeful, SuperBanana!
Re: (Score:3, Insightful)
I'll go one better and say we should try him in a military tribunal and sentence him to hard time in ADX. That will send the world a message - NO MISTAKES OR ELSE.
Get real man, this is a human error. Your struggle for perfection baffles my monkey brain.
Re:change control / management, anyone? (Score:4, Funny)
I seriously hope someone is fired or loses a contract over this.
You'll be happy to know that the person responsible has been found. The person in question was described as having unusually bushy eyebrows and speaking in a thick Swedish accent. His last comment about the incident, before being dragged away, was "bork bork bork".
Re: (Score:2)
I seriously hope someone is fired or loses a contract over this.
It seems a silly idea to fire somebody just after having invested $(whatever_this_snafu_is_supposed_to_have_cost) into his education.
Re: (Score:2)
I seriously hope someone is fired or loses a contract over this.
It seems a silly idea to fire somebody just after having invested $(whatever_this_snafu_is_supposed_to_have_cost) into his education.
Disagree... Obviously that file was being maintained by hand, BS press releases about scripts to the contrary. So the failure was at the management level for allowing such a crazy working procedure with no testing infrastructure at all. The only "education" the peon got was "typos cause problems", not exactly a Nobel prize winning contribution to human knowledge (although in comparison to a recent winner...) Since management doesn't make mistakes, and someone has to be the fall guy... the excuse will pro
Re:change control / management, anyone? (Score:5, Insightful)
As a DNS admin myself, touching high value zones, let me tell you, missing a stupid dot happens all the time. All the change control in the world doesn't help when you just don't type one little period. Even more helpfully, most tools won't notice and the zone will pass a configuration check because missing the trailing "." is syntactically correct.
Let me add as well that the "change management" you want is just fantastic... no making changes during core hours. When you run a 24/7 business, non-core hours means something like 2am. At 2am, I, and most mammals, are not at their mental best, so missing a single dot isn't horribly hard.
The only thing I'd suggest they do is use an offline test box for zones, then promote that change to prod. Then, you can load all the mistakes you want, do your digs, and if stuff works, THEN you move it to prod. I never ever make changes on production servers, they are done offline, tested, then put into prod with scripts. It makes it a lot harder for missing periods to make it into production.
Finally, this is a good reason why negative caching should have low TTLs. If you run a DNS server that can't handle low neg-caching TTLs, it's time to upgrade from a 386.
Cheers.
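The parent's point that a missing trailing dot is syntactically legal deserves spelling out: in a master file, a name without the final dot is *relative*, and the server silently appends the zone origin. A toy illustration (not BIND's actual code, just the name semantics):

```python
# Toy illustration of master-file name semantics: a name without the final
# dot is relative, so the zone origin gets appended. This is why a missing
# trailing dot passes a syntax check yet changes the meaning of the record.

def qualify(name, origin='se.'):
    """Return the absolute form of a master-file name."""
    return name if name.endswith('.') else name + '.' + origin

print(qualify('ns1.example.se.'))  # ns1.example.se.     (already absolute)
print(qualify('ns1.example.se'))   # ns1.example.se.se.  (origin appended!)
```

The second result points at a nameserver that doesn't exist, which is exactly how a one-character slip takes a zone down.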
I agree, but I'll also be a monkey's uncle when free software is designed this way.
What does this failure have to do with free software? If anything, it should be easy.
Even if you have an open source DNS server that uses text files, a major DNS registrar should be automating the hell out of it. I'm struggling to think of a reason why you wouldn't generate all your DNS records from a database. The files aren't that complicated, and they're essentially tabular data anyway.
I once saw an admin go on about how much 'better' Linux DNS servers were, then spend 5 hours hunting typos in the DNS co
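A minimal sketch of what the parent suggests: emit the zone from tabular data and enforce absolute names at generation time, so a stray relative name can never reach the file. The record layout and names here are made up for illustration:

```python
# Minimal sketch: generate NS records from tabular data, refusing to emit
# any name that is not absolute. The delegation table is hypothetical.

delegations = [
    ('example.se.', 'ns1.example.se.'),
    ('another.se.', 'ns.another.se.'),
]

def emit_zone(rows, ttl=86400):
    lines = []
    for owner, ns in rows:
        for name in (owner, ns):
            if not name.endswith('.'):
                raise ValueError('relative name would corrupt zone: %r' % name)
        lines.append('%s %d IN NS %s' % (owner, ttl, ns))
    return '\n'.join(lines) + '\n'

print(emit_zone(delegations))
```

With the data in a database and the file strictly generated, a typo becomes a failed build instead of a broken TLD.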
Re: (Score:3, Insightful)
Not if the configuration check you wrote checks for the trailing "." anyways. And if it doesn't, you need to rewrite it.
Re: (Score:2)
It's not "a" dot, it's "every" dot. A bad script adn DNSSEC are to blame. Note that this is version 4 (5?) of dnssec. The earlier ones just didn't work.
And there's a real bad gotcha in the current one they haven't found yet that has still to raise it's ugly head. In time.
Re: (Score:2)
at 2am, I, and most mammals, are not at their mental best,
I'm a black-footed ferret, you insensitive clod!
Re: (Score:2)
I'm no DNS expert, but I can't fathom why negative responses are cached at all. You have many, many more requests for valid domains than you do for invalid ones and the vast majority of the invalid ones are one-off typos. I just don't see what the benefit is. We could do away with an entire class of sysadmin headaches if all resolver software configuration and network policies defaulted to not caching negative responses.
Re: (Score:2)
Obviously, it passed syntax-checking, or the server wouldn't have loaded it. What you are looking for is semantic-checking, which is much more difficult. I expect that the generation scripts will be expanded to check for more things; that's generally what happens (you check for what you can think of, and expand the checking when someone thinks of a better way to break things).
Negative caching (in BIND anyway) tops out at 3 hours (it looks like .se has it set to 2 hours). The NS record TTL is 2 days, so o
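For reference, the negative-caching lifetime the parent cites comes from the zone's SOA record: per RFC 2308 it is the lesser of the SOA record's own TTL and its MINIMUM field, and resolvers may cap it further (BIND's historical 3-hour ceiling). A one-liner makes the rule concrete; the figures below are illustrative:

```python
# RFC 2308: the TTL for a cached negative answer is the lesser of the SOA
# record's TTL and its MINIMUM field; resolvers may impose their own cap
# (BIND historically caps this at 3 hours = 10800 seconds).

def negative_ttl(soa_ttl, soa_minimum, resolver_cap=10800):
    return min(soa_ttl, soa_minimum, resolver_cap)

print(negative_ttl(soa_ttl=172800, soa_minimum=7200))  # 7200 (2 hours)
```

So a bad answer cached during the outage would linger at most a couple of hours on a well-behaved resolver, not the 2-day NS TTL.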
Re: (Score:2)
And if you get so emotional for 30min of Internet downtime you will probably die out of stress too soon.
Re: (Score:2)
"I would expect that at the TLD level, a change to a configuration file would have to be inspected by someone AND run through some syntax-checking scripts..."
Expect price and time-to-activation increase for second level domains way beyond current status then.
"DNS servers (and individual computers!) cache negative results."
Yeah, but in practice only for individual resources, not whole domains, since negative answers from authoritative sources must include SOA references as per RFC2308.
"Anything anyone did a
Re:change control / management, anyone? (Score:4, Funny)
Sweden porn?
IKEA instruction manuals?
Re: (Score:2)
Don't you mean "I wrong code" in this context?
Re: (Score:2)
20 GOTO 10
Re:change control / management, anyone? (Score:4, Insightful)
Excessive paperwork like 30 min to fill out a change request form to do something like make a 30 second edit to a config file and sighup a daemon is stupid and you'll hear no argument from me on that. Change control per se however, is essential, particularly in a large enterprise. Running part of that kind of infrastructure without change control would be like trying to manage the kernel source tree without cvs (or svn or $REPOS_OF_CHOICE, analogy holds either way.)
The problem is not change control, its the way it is implemented. Change control methodology is designed by PHBs who haven't actually done the tech work in years, if they ever did. It's then scribbled all over by a "business analyst" who thinks a sigpipe is a plumbing problem and by the time guys actually doing the work get hold of it it has become a nightmare of procedural BS when all you really needed was a way to make sure everything you do to a live production system is documented and that anything other than emergency break-fix at least got basic testing and a second pair of eyes looking at it before rolling it out.
Re: (Score:2)
Running part of that kind of infrastructure without change control would be like trying to manage the kernel source tree without cvs (or svn or $REPOS_OF_CHOICE, analogy holds either way.)
I hate to break it to you, but until 2002 the Linux kernel was managed without automated version control. It worked quite well, actually.
Re: (Score:2)
Then why did they stop doing it?
Actually, I'll tell *you* why they stopped doing it: because Linus realized he was doing by hand a job that could be done much better by machine.
Re: (Score:2)
"Then why did they stop doing it?"
Because it didn't scale, not because Linus thought his previous procedure made kernel quality suffer.
Re: (Score:2)
Running part of that kind of infrastructure without change control would be like trying to manage the kernel source tree with cvs.
FTFY
;-)
Re: (Score:2)
Yes, that's why we have testbeds. The problem is not the missing character or whatever, is testing stuff before making a change in a system which affects thousands of websites.
So I guess it's... (Score:5, Funny)
Re: (Score:2)
I'm chopping up the zone files if that's ok with you (tosses random shyte over shoulder)
We'll scoop up all the trailing dots and put them in the stew
BORKBORKBORK!
Re: (Score:2)
Re: (Score:2)
Iceland? That would be BjorkBjorkBjork surely?
This is a Muppets' Swedish Chef reference.
Ah, the joy of automated oopsies. (Score:2)
somewhere in sweden: (Score:3, Funny)
DNS is the problem (Score:5, Interesting)
Re: (Score:2)
And your robust solution to a scalable global directory of name-to-ip address mapping is... ?
Re:DNS is the problem (Score:5, Funny)
Regedit32.exe
Upgrade .com to .exe (Score:2)
Regedit32.exe
I agree. It's long past time for the .com domain to be upgraded to .exe.
Re: (Score:2)
I agree. It's long past time for the .com domain to be upgraded to .exe.
No, .exe is the new domain for malware and trojan distribution websites. Did you miss the recent ICANN memo?
Re: (Score:2)
DHT. Thanks for asking. Efforts are already underway, quietly, so ICANN doesn't notice and cannot co-opt it. Oh, and name and address shortages? Thing of the past.
The era of artificial scarcity promoting a monopoly to make the insiders very wealthy is nearly at an end. [icann.org]
I'm shocked nobody is asking "what have all those people done for 10 years and many many millions of dollars".
Re:DNS is the problem (Score:4, Insightful)
Except the Pakistan affair was about the BGP routing protocol. I agree the file format is nutty, though.
I can't think of a better alternative to the hierarchical system, perhaps you have a suggestion. A flat namespace would be an administrative impossibility, not to mention the stress it would put on name servers. Increasing the number of TLDs would lessen the impact of a single failure, though.
Re: (Score:3, Insightful)
Pakistan taking out Youtube had absolutely nothing to do with DNS, they wrongly propagated a BGP announcement for the youtube IPs outside of Pakistan, so about 1/3 of the internet routed traffic into their black hole instead of to Youtube. Pretty effective blocking had they kept it internal, but they didn't.
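To make the parent's point concrete: BGP (like IP forwarding generally) prefers the most specific matching prefix, so announcing a /24 inside someone else's /22 captures that slice of traffic. The prefixes below are the ones widely reported for the 2008 incident, used here purely for illustration:

```python
# Longest-prefix match: a more-specific announcement wins, which is how a
# leaked /24 can capture traffic covered by a legitimate /22. Prefixes are
# the widely reported ones from the 2008 YouTube incident (illustrative).
import ipaddress

table = [
    (ipaddress.ip_network('208.65.152.0/22'), 'YouTube (legitimate)'),
    (ipaddress.ip_network('208.65.153.0/24'), 'Pakistan Telecom (leak)'),
]

def best_route(dst):
    addr = ipaddress.ip_address(dst)
    matches = [(net, who) for net, who in table if addr in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(best_route('208.65.153.238'))  # Pakistan Telecom (leak)
print(best_route('208.65.152.10'))   # YouTube (legitimate)
```

No DNS involved at all: resolution still returned the right address, but packets to it were routed into the more-specific black hole.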
Re: (Score:3, Informative)
Well, in the 1980s when the RFCs for zone files (1034/1035) were written, it probably sounded like a perfectly sound way to configure this sort of thing, same with DNS in general (those RFCs were also written in the 1980s).
If it were invented from scratch today I'm sure it would resemble something like LDAP.
The fact we haven't had more mass DNS failures like this is actually surprising.
Re: (Score:2)
It still boggles my mind that anyone thought zone files are a good idea. The file format is so damn brittle that a single byte can spell disaster. On top of that, the hierarchical naming structure presents an inherent systemic risk for all sub-domains, as exhibited by this .se fiasco. Nevermind the injection attacks, Pakistan taking out Youtube, and the rest; you have organizations like Verisign which profit immensely off of keeping the system broken. And don't even bother mentioning DNSSEC, as it still doesn't resolve this fundamental issue. The next systemic fuckup will simply be a signed fuckup.
Yes, it's a shame you were still in diapers when this solution was developed. They could have benefited from your vast wisdom. Or maybe not, if you think the problem with YouTube in Pakistan was due to DNS rather than BGP.
Re: (Score:2, Insightful)
Besides, if we redesigned it now, it would be insanely complex and bloated, not to mention never fully implemented (CSS? ha!), as there would be too many parties "contributing".
Re:DNS is the problem (Score:5, Informative)
Part of the problem with DNS these days, which your post exemplifies, is that from very early on "BIND's implementation of DNS", and "DNS The Protocol" have been mashed together and confused by the RFC authors (who were involved with the BIND implementation and had motive to encourage the world to think only in BIND terms) and basically everyone who ever used DNS in any capacity. Zonefiles are not implicit in DNS address resolution (neither for authoritative servers or recursive caches). They really aren't any part of the wire DNS protocol for resolving names. They *are* part of a wire protocol for secondary servers that slave zonefiles from primary servers, but even in that case it's really more a "BIND convention" than a necessity. Ultimately how you transfer a zone's records from a master server to a slave server is up to however those two servers and their administrators agree to do so. You can skip the AXFR protocol that uses zonefiles and instead do something else that works for both of you. Inventing a new method of slaving zone data is easy and doesn't involved much complicated rollout. Some people just rsync zonefiles for instance instead of using AXFR today.
It's really frustrating (believe me, I've done it) when you try to implement a new DNS server daemon from scratch from the RFCs, and you have to wade through this mess of "what's a BIND convention that doesn't matter and what's important to the actual DNS protocol for resolving names on the wire".
Re: (Score:2)
BIND was the spec for DNS for a while. But recently Vixie has washed his hands of that mess by saying "Don't use BIND as a spec".
Like that helps Paul.
Re: (Score:3, Interesting).
Re: (Score:2)
It gets worse. In 2007, Paul Vixie wrote an article in ACM Queue [acm.org] basically praising the vagueness of the DNS protocol specifications:.
Correlation does not imply causation.
DNS didn't grow to be huge because it was designed loosely, it happened to grow big because coincidentally the Internet took off and become huge, and the Internet happened to use DNS. It would be a bit of a stretch to say that the Internet become the size it is today because one of the many underpinning protocols and standards was loosely specified.
The Internet could have used any number of alternate name lookup systems, and it would have grown to its current size just f
Re: (Score:3, Interesting)
The file format is so damn brittle, that a single byte can spell disaster.
You know what, so is ELF. Who said you should write zonefiles by hand let alone without any kind of syntax verification.
Input syntax is never really an issue. You only ever lack the necessary tools or you are unable to use them properly. It can always be hidden behind a precompiler or whatever necessary.
Hmmm... wait, termcap. I stand corrected.
Why MaraDNS uses a special zone file format (Score:2, Interesting)
This is why MaraDNS [maradns.org] (my open-source DNS server) uses a special zone file format.
MaraDNS uses a zone file format that, for the most part, resembles BIND zone files. However, the zone file format has some minor differences so the common "Forgot to put a dot at the end of a hostname" and the "forgot to update the SOA serial number" problems do not happen; a domain name without a dot at the end in a syntax error in MaraDNS' zone file parser; if you want to end a hostname with the name of the zone in questio
Re: (Score:3, Insightful)
Can MaraDNS handle IPv6 now? Last time I used it I had to ditch it in end as IPv6 support was lacking.
There's møre to Sweden than .se (Score:5, Funny)
Wi nøt trei a høliday in Sweden this yer?
See the løveli lakes
The wonderful telephøne system
And mani interesting furry animals
Re:There's møre to Sweden than .se (Score:5, Funny)
We apologise for the fault in the previous post. Those responsible have been sacked.
Nö There's Nöt! (Score:2)
The Swedish alphabet does not have the letter "ø", it's written "ö" in Swedish. The letter "ø" is found in Danish and Norwegian.
The letter is NOT a ligature or a diacritical variant of the letter o! The vowel it sounds most like is the vowel in "bird" or "hurt".
Re:No There's Not! (Score:2)
Hand in your geek card: [youtube.com]
A moose once bit my DNS... (Score:2)
What's a " .SE TLD"? (Score:2)
That's how it looked like in Thunderbird's RSS reader.
It happens (Score:2)
Because Unix admins never test-run their code.
Simple solution (Score:2)
Re: (Score:3, Interesting)
Uh, it would make no difference.
DNS is hierarchical, and has teh caching.
2 independent groups running DNS would strive to make sure they sync with each other quickly - thus all failures would sync quickly too.
The difference between
- the delay of a correct change propagating across the two firms running DNS
- the delay of an incorrect change propagating within a single DNS
would essentially be zero.
No good things could come from what you propose unless it was specifically designed to have a 24
Re: (Score:2)
You can't protect against a single point of failure when you're talking about a person updating a system. Redundancy protects against computer error, not human error.
See, ultimately, somebody, somewhere has to be responsible for the name updating. Having it in two places just means that an incorrect update gets pushed to both places by the person making the change.
In this case, the effects were minimized by the nature of DNS itself, and the caching mechanisms involved. Most servers probably never saw the ch
Re: (Score:2)
Re: (Score:2)
"In this case, the effects were minimized by the nature of DNS itself"
Well, at least somebody shows some common sense.
Of course, losing a whole TLD even if only for half an hour is a shame probably the one that did it won't include in his resume, but the fact is that nobody will expend more on secure a resource than it's very value. DNS is basically distributed, cached information described on plain text files; if an update works (which is vastly most of the time), it works; if it isn't you detect the fail
Re: (Score:3, Funny)
It looks like someone messed up the summary. I'm pretty sure it should be:
Peengdum und Netvurk Vurld ere-a repurteeng thet zee SE tld drupped ooffff zee internet yesterdey dooe-a tu a boog in zee screept thet generetes zee SE zune-a feele-a. Zee SE tld hes cluse-a tu oone-a meelliun dumeeens thet ell vent doon dooe-a tu meessing zee treeeling dut in zee SE zune-a feele-a. Sume-a cecheeng nemeserfers mey steell be-a retoorneeng infeleed DNS respunses fur 24 huoors.
Re: (Score:2)
Re: (Score:2) | http://tech.slashdot.org/story/09/10/13/1537207/Entire-SE-TLD-Drops-Off-the-Internet | CC-MAIN-2015-18 | refinedweb | 4,895 | 72.26 |
<detail> contains information specific to the fault in question, and sometimes the package
that created it. In this case, the stack trace is just a standard Java printStackTrace()
output, and anyone who cares to look for the <stacktrace> element can get at it. We've
discussed putting in a switch to turn this on and off for a service/engine, in fact.
So the short answer is no, it doesn't follow a particular standard. For things like this,
and exception class names, it might be nice if all the Java implementations agreed on a standard,
and that might be something you see in future revs of JAX-RPC.
--Glen
> -----Original Message-----
> From: Thomas Börkel [mailto:tb@ap-ag.com]
> Sent: Thursday, October 17, 2002 10:48 AM
> To: Axis Dev Mailinglist
> Subject: SOAP stacktrace
>
>
> HI!
>
> If the server throws an exception then Axis puts the
> stacktrace into the <detail> tag of the response with
> namespace "ns2". Does this follow some standard, so should
> other SOAP implementations (like .NET) understand this? The
> current implementation of .NET (using generated proxy
> classes) does not even provide the text of the fault detail,
> only the fault string.
>
> Thanks!
>
> Regards,
> Thomas
> | http://mail-archives.apache.org/mod_mbox/axis-java-dev/200210.mbox/%3CCB1FF0A474AEA84EA0206D5B05F6A4CB02A4BE86@S1001EXM02.macromedia.com%3E | CC-MAIN-2018-30 | refinedweb | 197 | 64 |
Can't make shortcut icons.
Once version upgrade is done on iOS 13, you can't make a shortcut of a script of Pythonisa on the home screen.
Is this a bug? Is there any change in the way?
@generic shortcut "execute Pythonista script" does not work for iCloud folder of Pythonista
That works, but non iCloud and without arguments
Not sure if this helps anyone but also since running this script does not work at the moment. This workaround does work. If you make a 2 action shortcut with:
pythonista3://iCloud/shortcutScript.py?action=run | <open url>
it should work. You can get get your script URL with this:
import _pythonista, os def this_url(): _script = os.path.realpath(__file__) script_url = _pythonista.make_url(_script) return script_url
@couplelongnecks Pythonista doc says
Run a script from your library: Use pythonista://MyScript.py?action=run for running a script that is in your library. By default, the script path is relative to Pythonista’s local documents folder. Add ?root=icloud to make the path relative to Pythonista’s iCloud folder instead.
@cvp This works for you?
I havent been able to get that working. If its in the docs its probably user error on my part.
Excuse the ignorance but whats the difference/benefit to this method?
@couplelongnecks you're right, it does not work, sorry.
But it seems that pythonista3://iCloud/shortcutScript.py?action=run does not work too...
But this works
pythonista3://shortcutScript.py?action=run&root=icloud
@cvp
Awesome! That works for me!
The link I posted must have been a fluke, yours must be the robust way. So will implement! :)
This post is deleted!last edited by
This post is deleted!last edited by
- abakan4222
This post is deleted!last edited by | https://forum.omz-software.com/topic/5861/can-t-make-shortcut-icons/16 | CC-MAIN-2021-49 | refinedweb | 293 | 79.67 |
This patch adds a prctl to modify current->comm as shown in /proc.This feature was requested by KDE developers. In KDE most programsare started by forking from a kdeinit program that already has the libraries loaded and some other state. Problem is to give these forked programs the proper name.It already writes the command line in the environment (as seen in ps),but top uses a different field in /proc/pid/status that reportscurrent->comm. And that was always "kdeinit" instead of thereal command name. So you ended up with lots of kdeinitsin your top listing, which was not very useful.This patch adds a new prctl PR_SET_NAME to allow a program to change its comm field. I considered the potential security issues of a program obscuringitself with this interface, but I don't think it matters muchbecause a program can already obscure itself when the admin usesps instead of top. In case of a KDE desktop calling everythingkdeinit is much more obfuscation than the alternative.diff -u linux-2.6.8-5/kernel/sys.c-o linux-2.6.8-5/kernel/sys.c--- linux-2.6.8-5/kernel/sys.c-o 2004-08-14 07:36:16.000000000 +0200+++ linux-2.6.8-5/kernel/sys.c 2004-09-07 11:34:07.000000000 +0200@@ -1660,6 +1660,13 @@ } current->keep_capabilities = arg2; break;+ case PR_SET_NAME: {+ struct task_struct *me = current;+ me->comm[sizeof(me->comm)-1] = 0;+ if (strncpy_from_user(me->comm, (char *)arg2, sizeof(me->comm)-1) < 0)+ return -EFAULT;+ return 0;+ } default: error = -EINVAL; break;diff -u linux-2.6.8-5/include/linux/prctl.h-o linux-2.6.8-5/include/linux/prctl.h--- linux-2.6.8-5/include/linux/prctl.h-o 2004-08-14 07:37:14.000000000 +0200+++ linux-2.6.8-5/include/linux/prctl.h 2004-09-07 11:35:02.000000000 +0200@@ -49,5 +49,6 @@ # define PR_TIMING_TIMESTAMP 1 /* Accurate timestamp based process timing */ +#define PR_SET_NAME 15 /* Set process name. 
*/ #endif /* _LINUX_PRCTL_H */-To unsubscribe from this list: send the line "unsubscribe linux-kernel" inthe body of a message to majordomo@vger.kernel.orgMore majordomo info at read the FAQ at | http://lkml.org/lkml/2004/9/7/116 | CC-MAIN-2016-07 | refinedweb | 364 | 50.94 |
Measure Distance with a Sonar Sensor on an Arduino
Learn how to measure distances up to 20 ft with a sonar sensor on an Arduino!
Get measuring!
Sonar Sensors
Sonar’s most popular and primary use is to be able to "see" underwater. Sonar uses the propagation of sound to detect objects. Since sound waves travel farther in water than they do in air, sonar is preferred over other types of sensors like radar for this reason. Even though it's preferred for underwater sensing, sonar can still be used in air; however, there exists the small chance of interference, which we might see when measuring distance.
There are two types of sonar: passive sonar and active sonar. Active sonar has an emitter and a detector: depending on the time that the signal takes to come back to the sonar, it can detect the range or distance of an object and its orientation. It also detects the strength of a signal to determine how much time it took to be picked up by the receiver. Passive sonars are used to pick up signals from vessels and other marine life like whales and submarines. Passive sonars don't have emitters; they just receive sound waves coming towards them.
Bill of Materials
- Arduino Uno
- MaxBotix Ultrasonic Range Finder
- 3 Loose Wires
- Soldering Iron
- Solder
- Computer with Arduino IDE (Integrated Development Environment)
- USB Type B to connect the Arduino
- Multimeter
We will be using an Arduino Uno as our microprocessor to be able to read distance detected by the sonar. The sonar that we are using is the Maxbotix Ultrasonic Range Finder, but any models that are close to this one with an output as a pulse width or analog might be able to be used in this project. The three loose wires will be soldered to the Ultrasonic Range finder. We need the solder and soldering iron to solder wires to the sensor. Once everything is soldered and in place, the code below will have to be uploaded to the Arduino via the IDE, and it will also be connected with a USB Type B.
Getting Started
Since the Arduino and the code will interpret the output of the sonar in volts, we do not want there to be any false connections or shorts between the circuit, so we have to make sure that when the pins are soldered there is no solder residue that can cause a short.
The 3 pins that will be soldered on the sonar sensor are shown below.
Solder a wire to the ground, V in of +5 Volts, and the second from the bottom, which is the pulse width output. After soldering these three pins, clean with a cotton swab and some alcohol around the holes to get rid of any residue that may remain from the solder. To check for any shorts then use the multimeter and check the resistance between these three pins. Between the GND and the +5 V there should be OL or infinite resistance. If you check with the multimeter for an open or if you check continuity, then it should not come up. If there is some continuity between any of these three pins, then you need to de-solder the wires and clean up any solder residue. Once the wires are soldered on the Sonar Sensor and you have checked for no shorts then you can connect to the Arduino.
How to connect the sensor to the Arduino
You can connect the sensor and the Arduino above with a breadboard as a medium or you can connect directly from the sensor to the Arduino. The sensor is being grounded on the Arduino and is receiving power from the Arduino’s +5V output. The sensor's pulse width output is being connected to any input on the Arduino that can accept a pulse width. In this case, I'm using digital pin 3.
#include "Maxbotix.h" Maxbotix rangeSensorPW(3, Maxbotix::PW, Maxbotix::LV); // #3 Defines which Digital Input in being Read //Maxbotix:PW defines that the Arduino is reading PW signals void setup() { Serial.begin(9600); } void loop() { unsigned long start; Serial.println("Reading 1st Sensor"); //Serial Monitor will print this line start = millis(); // Number of Milli seconds until the Sonar Receives the signal it sent out Serial.print("PW 1: "); Serial.print(rangeSensorPW.getRange()*.393701); // Multiply by this to convert Cm to Inches Serial.print(" inches - "); Serial.print(millis() - start); Serial.println("ms"); Serial.println(); delay(1500); // Wait for 1.5 Seconds }
When the Arduino is connected as shown in the diagram above and the code in uploaded, you can open the serial monitor and the distances will be displayed in inches with a refresh every 1.5 seconds. When you run the serial monitor, depending on where your sonar sensor is pointing, it will give you a certain number of inches. If you put your hand or another large object where the sonar is pointed, it will also read that and display its distance. For this specific sonar, the range is 20 feet.
Below is an image of how the serial monitor and code should look like once they're running. Happy building!
Give this project a try for yourself! Get the BOM.
if we need to use this idea to measure distances underwater by connect it into an ROV sysetm, does this work underwater ?
This one, no.. It’s not waterproof. So the moment you immerse it in water,it short-circuits. If you want to use one underwater, you have to find a specific one for that.
And there is no way to make this one waterproof: boxing will block the “output waves”...
Can someone help me find micro-miniature underwater sonar transceivers for building an array? Thank you. | https://www.allaboutcircuits.com/projects/measure-distance-with-a-sonar-sensor-on-an-arduino/ | CC-MAIN-2021-39 | refinedweb | 963 | 69.62 |
Last few days I was facing a problem of creating CrystalReport with dynamic query. More specifically, I wanted to change the query in runtime and populate the report with query-results. And I somehow managed to get rid of this. Still I’m not sure if it’s the best way, but it works.
Followings are the steps I found useful:
- Firstly, create a crystal report with the database fields (which will be viewed) embedded in the report.
- Then, create a ReportDocument object, say ‘rptDoc’. [required namespace ‘CrystalDecisions.CrystalReports.Engine’]
- Load the crystal report within rptDoc.
- Set its data source, with the DataSet / DataTable which contains the results of the query.
- Finally, for viewing the report set the ReportSource property of CrystalReportViewer to this rptDoc object.
Sample Code:
// prepare the DataTable for the report
string connectionString = "Dsn=TestDB";
OdbcConnection conn = new OdbcConnection(connectionString);
conn.Open();
string query = "select * from person where personId <>; // dynamic query
OdbcCommand command = new OdbcCommand(query, conn);
OdbcDataAdapter adapter = new OdbcDataAdapter(command);
DataSet ds = new DataSet();
adapter.Fill(ds);
DataTable dt = ds.Tables[0];
conn.Close();
// populate the report with the created DataTable dt
string reportPatfh = "CrystalReport1.rpt"; // path of the crystal report
ReportDocument rptDoc = new ReportDocument();
rptDoc.Load(reportPatfh);
rptDoc.SetDataSource(dt);
crystalReportViewer1.ReportSource = rptDoc;
Notes:
Here only data are changed dynamically, not the format of the report.
1 Comment:
Hi,
We have Crystal Report XIR2 Designer. Is it possible to create dynamic query just with designer? If so can you please help me on how to create dynamic query on runtime.
Many Thanks. | http://tipsntricksbd.blogspot.com/2008/01/crystal-report-populate-with-dynamic.html | CC-MAIN-2018-05 | refinedweb | 258 | 50.33 |
In this article, we will understand MVP, execute a sample project with MVP, implement the same using
a Windows UI, and then finally we will discuss about the differences between MVP and MVC.
For the past several days, I have been writing and recording videos on Design Patterns, UML, FPA, Enterprise blocks, and you can watch the videos
at.
You can download my 400 .NET FAQ EBook from.
This article assumes you know MVC; in case you do not, you can read about it at 3Musketeers.aspx.
The best way to understand any architectural concept is by taking a small practical sample and then applying the fundamentals on the same. So let’s first discuss
a simple scenario and then let’s think how MVP fits in to the same. Below is a pictorial representation of a simple stock project. We can increase the stock value
and decrease the stock value. Depending on the stock, we can have three status: overstocked, under stocked, or optimally stocked.
We have three scenarios depending on which the UI will change color:
What we will do is define the problem logic in two parts:
The other responsibilities look fine, so let’s concentrate on the last responsibility where the UI needs to change colors depending
on the stock value. This kind of logic in presentation is termed as presentation logic.
If you look from a generic perspective, the work responsibilities allocated to the business object and the user interface look fine. There is a slight
concern when we look at the presentation logic.
We have three big problems:
If we want to reuse the presentation logic in some other UI type like Windows. I mean to say we have implemented this logic in ASP.NET web pages and we want
to reuse the same in a Windows app. As we have the presentation logic tied up in the UI code, it will be difficult to decouple it from the UI. We also would
like to use the same presentation logic with other pages of the same type.
The presentation is tied up with the business object. It’s doing a lot of work checking the stock status using the business object and changing the UI colors accordingly.
If we want to test the user interfaces, it becomes difficult as the UI is highly coupled with the UI. It becomes difficult to test these parts as different pieces.
So let’s take the above three problems and see how we can solve them. MVP will help us easily solve the above three problems.
If we want to reuse the presentation logic irrespective of the UI type, we need to move this logic to a separate class. MVP does the same thing.
It introduces something called as the presenter in which the presentation logic is centralized.
To decouple the presentation from the business object, we introduce an interface for every UI. The presenter always talks through the interface and the concrete UI, i.e., the web page
communicates through the view interface. This way, the model is never in touch with the concrete UI, i.e., the ASPX or Windows interface. All user interfaces should
implement this interface so that the presenter can communicate with the UI in a decoupled manner.
Problem 3 will get solved if we solve the first two problems.
We will first visualize how the UI will look like. There will be two buttons, one which increments the stock value and the other which decrements the stock value.
The stock value is displayed on the stock text box and the color is set according to the value of the stock.
All events are first sent to the presenter. So all the events need to connect to some methods on the presenter. All data needed by the UI, i.e., in this instance,
we need the stock value and the color should be defined in the interface so that the presenter can communicate using the same.
Shown below is the interface for the ASPX page. We need the stock value so we have defined a method called setStockValue and we also need the color,
so we have defined a method called setColor.
setStockValue
setColor
public interface StockView
{
void setStockValue(int intStockValue);
void setColor(System.Drawing.Color objColor);
}
The presenter class will aggregate the view class. We have defined an Add method which takes the view object. This view will set when the ASP.NET page starts.
Add
public class clsPresenter
{
StockView iObjStockView;
public void add(StockView ObjStockView)
{
iObjStockView = ObjStockView;
}
.....
.....
.....
}
When the user presses the increase stock button, it will call the increasestock method of clsPresenter, and when the decrease stock button is called,
it will call the decreasestock method of the clsPresenter class. Once it increments or decrements the value, it passes the value to the UI through the interface.
increasestock
clsPresenter
decreasestock
public void increaseStock()
{
Stock.IncrementStock();
iObjStockView.setStockValue(Stock.intStockValue);
ChangeColor();
}
public void decreaseStock()
{
Stock.DecrementStock();
iObjStockView.setStockValue(Stock.intStockValue);
ChangeColor();
}
We had also talked about moving the presentation logic in the presenter. So we have defined the ChangeColor method which takes the status from the Stock object
and communicates through the view to the ASPX page.
ChangeColor
Stock
public void ChangeColor()
{
if(Stock.getStatus()==-1)
{
iObjStockView.setColor(Color.Red);
}
else if (Stock.getStatus() == 1)
{
iObjStockView.setColor(Color.Blue);
}
else
{
iObjStockView.setColor(Color.Green);
}
}
Now let’s understand how the UI will look like. The UI, either an ASPX page or a Windows form, should inherit from the stock view interface which we have previously explained.
In the page load, we have passed the reference of this page object to the presenter. This is necessary so that the presenter can call back and update data which
it has received from the model.
public partial class DisplayStock : System.Web.UI.Page,StockView
{
private clsPresenter objPresenter = new clsPresenter();
protected void Page_Load(object sender, EventArgs e)
{
objPresenter.add(this);
}
…..
…..
…..
}
}
As we have inherited from an interface, we also need to implement the method. So we have implemented the setStockValue and the setColor method.
Note that these methods will be called by the presenter. In the buttons, we have called the increaseStock and DecreaseStock methods of the presenter.
increaseStock
DecreaseStock
public partial class DisplayStock : System.Web.UI.Page,StockView
{
private clsPresenter objPresenter = new clsPresenter();
protected void Page_Load(object sender, EventArgs e)
{
objPresenter.add(this);
}
public void setStockValue(int intStockValue)
{
txtStockValue.Text = intStockValue.ToString();
}
public void setColor(System.Drawing.Color objColor)
{
txtStockValue.BackColor = objColor;
}
protected void btnIncreaseStock_Click(object sender, EventArgs e)
{
objPresenter.increaseStock();
}
protected void btnDecreaseStock_Click(object sender, EventArgs e)
{
objPresenter.decreaseStock();
}
}
The model is pretty simple. It just increments and decrements the stock value through the two methods IncrementStock and DecrementStock.
It also has a getStatus function which tells us the stock level type: over stocked, under stocked, or optimally stocked. For simplicity,
we have defined the stock value as a static object.
IncrementStock
DecrementStock
getStatus
public class Stock
{
public static int intStockValue;
public static void IncrementStock()
{
intStockValue++;
}
public static void DecrementStock()
{
intStockValue--;
}
public static int getStatus()
{
// if less than zero then -1
// if more than 5 then 1
// if in between 0
if (intStockValue > 5)
{
return 1;
}
else if (intStockValue < 0)
{
return -1;
}
else
{
return 0;
}
}
}
Below is a complete flow of the stock project from an MVP perspective. The UI first hits the presenter. So all the events emitted from the UI will first route to the presenter.
The presenter will use the model and then communicate back through the interface view. This interface view is the same interface which your UI will inherit.
We have solved all the three problems with all the actions passing through the presenter; the ASPX page/ Windows form is completely decoupled from the model. The presenter centralizes
the presentation logic and communicates through the interface. As the presentation logic is in a call, we can reuse the logic.
As all the commands are passing through the presenter, the UI is decoupled from the model. Now that we have all the components decoupled, we can test
the UI component separately using the presenter.
To just show how magical the presenter is, I have reused the same presentation logic in a Windows application.
In the below sample, we have ported the same presenter logic in a Windows application:
private void Form1_Load(object sender, EventArgs e)
{
objpresenter.add(this);
}
private void btnInCreaseStock_Click(object sender, EventArgs e)
{
objpresenter.increaseStock();
}
private void btnDecreaseStock_Click(object sender, EventArgs e)
{
objpresenter.decreaseStock();
}
#region StockView Members
public void setStockValue(int intStockValue)
{
txtStockValue.Text = intStockValue.ToString();
}
public void setColor(Color objColor)
{
txtStockValue.BackColor = objColor;
}
You can understand the difference of how consuming the model objects directly and using a presenter varies. When we use the presenter, we have moved the UI presentation
logic to the presenter and also decoupled the model from the view. Below is the code in which the UI directly uses the model… lot of work, right?
With the presenter, all the presentation logic is now centralized:
The above figure summarizes the differences between MVP and MVC. We will summarize the figure in tabular format below.
You can download the stock project from this link: click here. The project is coded from three aspects:
The Web and Windows samples are shown to illustrate how we can reuse the presentation logic.
It would be selfish to say that all the above knowledge is mine. Below are some links which I have referred to while I was writing this
N a v a n e e t h wrote:Article says:
All events are first sent to the presenter.
No this is not correct. Presenter will not handle events. It is handled by the view and redirects calls to presenter.
N a v a n e e t h wrote:Article says:
The presenter class will aggregate the view class. We have defined an ‘Add’ method which takes the view object. This view will set when the ASP.NET page starts.:Interfaces will be implemented, not inherited.
N a v a n e e t h wrote:Article says:
In MVP presenter handles all the UI events In MVC the views handles the events
This is wrong. In both events are handled by views. MVC is less suitable for stand alone applications.
N a v a n e e t h wrote:Article says:
In MVC the controller passes the model to the view and the view then updates itself.
Wrong. In MVC, model is tightly bounded with view. View listens to Model and updates automatically. This is done using databinding features provided with the UI frameworks. Controller just updates the model.
Oleg Zhukov wrote:actually there are two kinds of it: constructor injection and method injection, so I don't think the author broke the law here
Oleg Zhukov wrote:Well, I should say here that there are two kinds of inheritance: interface inheritance and implementation inheritance
Oleg Zhukov wrote:Are you in quarrel with this guy?
Oleg Zhukov wrote:It has minor problems in properly expressing some concepts (probably due to the language barrier) but the overall expression is quite good.
N a v a n e e t h wrote:I don't know. But AFAIK, interfaces are said to be implemented. Do you have any materials to prove your point?
N a v a n e e t h wrote:Never . But I got a feeling that he is advertising his videos and tutorials which you can see in the first paragraph of the article. It is there for all his articles. So the intention is clear.
N a v a n e e t h wrote:So true. That's why I voted 3 instead of 1. I never said it is a total crap, did I?
N a v a n e e t h wrote:I appreciate your effort for posting the article here. It would have much better if you did good research before you post
N a v a n e e t h wrote:The first paragraph in your article gives a feeling to the reader that you are writing article just for advertising your tutorials, books and videos. That may be the reason for getting many low votes.
Shivprasad koirala wrote:But i do not agree with the thing that i do not research before writing.If you read MVP very closely all the above things will be answered
N a v a n e e t h wrote:No this is not correct. Presenter will not handle events. It is handled by the view and redirects calls to presenter.
N a v a n e e t h wrote:This is wrong. In both events are handled by views. MVC is less suitable for stand alone applications.
N a v a n e e t h wrote:Wrong. In MVC, model is tightly bounded with view. View listens to Model and updates automatically. This is done using databinding features provided with the UI frameworks. Controller just updates the model
N a v a n e e t h wrote:I think you should do some more research on the topic before writing an article. MVP has two varients. "Supervising Control" and "Passive view". You haven't explained anything about that. Consider reading fowler's article on these.
Ramon Smits wrote:Using Java/hungarion like notation.
Ramon Smits wrote:Not making use of enums
Ramon Smits wrote:Comparison between MVC and MVP is incorrect.
Ramon Smits wrote:Laks information about why each of the patterns are designed in the first place and why MVP seems to "fix" comparing to MVC in for example an ASP.NET application. Why didn't you use MVC?
Ramon Smits wrote:* Comparison between MVC and MVP is incorrect.
* Laks information about why each of the patterns are designed in the first place and why MVP seems to "fix" comparing to MVC in for example an ASP.NET application. Why didn't you use MVC?
General News Suggestion Question Bug Answer Joke Rant Admin
Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages. | http://www.codeproject.com/Articles/31210/Model-View-Presenter?fid=1531640&df=90&mpp=10&noise=1&prof=True&sort=Position&view=Expanded&spc=None&select=4103075&fr=11 | CC-MAIN-2014-41 | refinedweb | 2,353 | 57.37 |
In this blog I shall show the actual step by step process on AIF(Application Interface Framework) configuration for outbound Scenario along with how to code for the same.
AIF for outbound Interface.
Say if there is any outbound interface developed in PI from S4 as shown in the below screen then one can use the below steps to generate the proxy. class with methods is created as shown in the fig 2.
Fig 2. Default proxy class.
In the transaction /AIF/CUST
Fig 3. All the structures mentioned in this diagram should be available in the system.
AIF configuration
Open the transaction /AIF/CUST
In this example we are getting the message from the service max
So we configure a namespace for the same system in the path
SAP Application Framework –>Interface Development –> Define Namespace
Fig 3. Namespace for the the third party system
Define the interfaces in the path
SAP Application Framework–>Interface Development–>Define interfaces
Fig 4.
Fig 5.
Define the structure mapping for the interface in the path
SAP Application Framework–>Interface Development–>Define structure Mappings
Fig 6.
Fig 7.
Continue to Part 2 for the ABAP part .
Continue to Part 3 for checking the proxy structure details.
Continue to Part 4 for checking the created AIF structure details. | https://blogs.sap.com/2018/09/26/aif-implementation-for-pi-proxiesoutbound-part-1/ | CC-MAIN-2018-43 | refinedweb | 213 | 55.44 |
eav_hashes
eav_hashes is a neato gem for implementing the EAV (entity-attribute-value)
database design pattern in your Rails models. All you need to do is add one
line to your model's code and that's it! Schema generation is automatically
handled for you.
Why would I need it?
Rails' ActiveRecord includes a helper function,
serialize, to allow you to
save complex data types (like hashes) into your database. Unfortunately, it
isn't very useful. A lot of overhead is created from serialization and
deserialization, and you can't search by the contents of your hash. That's
where
eav_hashes comes in.
How does it work?
Great question! Lets dive in with a simple code example, but first lets set up the gem.
Put this in your gemfile...
gem "eav_hashes", "~> 1.0.2"
...and update your bundle.
$ bundle install
Now, lets make this Rails model.
class Product < ActiveRecord::Base eav_hash_for :tech_specs end
Now run this generator to create a migration:
$ rails generate eav_migration Product tech_specs
And run the migration:
$ rake db:migrate
Now watch the magic the happen:
# Assuming this whole example is on a blank DB, of course a_product = Product.new a_product.tech_specs["Widget Power"] = "1.21 GW" a_product.tech_specs["Battery Life (hours)"] = 12 a_product.tech_specs["Warranty (years)"] = 3.5 a_product.tech_specs["RoHS Compliant"] = true a_product.save! # Setting a value to nil deletes the entry a_product.tech_specs["Warranty (years)"] = nil a_product.save! the_same_product = Product.first puts the_same_product.tech_specs["nonexistant key"] # magic alert: this actually gets the count of EVERY entry of every # hash for this model, but for this example this works puts "Entry Count: #{ProductTechSpecsEntry.count}" the_same_product.tech_specs.each_pair do |key, value| puts "#{key}: #{value.to_s}" end # Ruby's default types: Integer, Float, Complex, Rational, Symbol, # TrueClass, and FalseClass are preserved between transactions like # you would expect them to. puts the_same_product.tech_specs["Battery Life (hours)"]+3
And the output, as you can expect, will be along the lines of:
nil Entry Count: 3 Widget Power: 1.21 GW Battery Life (hours): 12 RoHS Compliant: true 15
That looks incredibly simple, right? Good! It's supposed to be! All the magic
happens when you call
save!.
Now you could start doing other cool stuff, like searching for products based on their tech specs! You've already figured out how to do this, haven't you?
flux_capacitor = Product.find_by_tech_specs("Widget Power", "1.21 GW")
Nifty, right?
Can I store arrays/hashes/custom types inside my hashes?
Sure, but they'll be serialized into YAML (so you cant search by them like you
would an eav_hash). The
value column is a TEXT type by default but if you
want to optimize your DB size you could change it to a VARCHAR in the migration
if you don't plan on storing large values.
What if I want to change the table name?
By default,
eav_hash uses a table name derived from the following:
"<ClassName>_<hash_name>".tableize
You can change this by passing a symbol to the
:table_name argument:
class Widget < ActiveRecord::Base eav_hash_for :foobar, table_name: :bar_foo end
Just remember to edit the table name in the migration, or use the following migration generator:
$ rails generate eav_migration Widget foobar bar_foo
What's the catch?
By using this software, you agree to write me into your will as your next of kin, and to sacrifice the soul of your first born child to Beelzebub.
Just kidding, the code is released under the MIT license so you can use it for whatever purposes you see fit. Just don't sue me if your application blows up from the sheer awesomeness! Check out the LICENSE file for more information.
Special Thanks!
Thanks to Matt Kimmel for adding support for models contained in namespaces.
Thanks to Arpad Lukacs for adding Enumerator-like behavior.
I found a bug or want to contribute!
You're probably reading this from GitHub, so you know what to do. If not, the Github project is at | http://www.rubydoc.info/github/iostat/eav_hashes | CC-MAIN-2017-26 | refinedweb | 657 | 67.45 |
The following are the release notes for the 8.12.9 version of sendmail(1M), plus the 8.12.10 security features, included in UnixWare. These notes were provided by sendmail.org, and are part of the sendmail distribution.
We list below only the changes since the previous version of sendmail included with UnixWare (version 8.10.1).
Note that the version numbers at the beginning of each section below indicate the versions of sendmail and its configuration file sendmail.cf, respectively, to which the notes apply.
Also see ..
8.12.10/8.12.10 2003/09/24

SECURITY: Fix a buffer overflow in address parsing. Problem detected by Michal Zalewski, patch from Todd C. Miller of Courtesan Consulting.
Settings for IRIX 6: remove confSBINDIR, i.e., use the default /usr/sbin; change owner/group of man pages and user executables to root/sys; set the optimization limit to 0 (unlimited). Based on patch from Ayamura Kikuchi, M.D., and proposal from Kari Hurtta of the Finnish Meteorological Institute.
Do not assume LDAP support is installed by default under Solaris 8 and later.
Add support for OpenUNIX.
CONFIG: Increment version number of config file to 10.
CONFIG: Add an install target and a README file in cf/cf.
CONFIG: Don't accept addresses of the form a@b@, a@b@c, a@[b]c, etc.
CONFIG: Reject empty recipient addresses (in check_rcpt).
CONFIG: The access map uses an option of -T<TMPF> to deal with temporary lookup failures.
CONFIG: New value for the access map: SKIP, which causes the default action to be taken by aborting the search for domain names or IP nets.
CONFIG: check_rcpt can deal with TEMPFAIL for either the recipient or the relay address as long as the other part allows the email to get through.
CONFIG: Entries for virtusertable can make use of a third parameter "%3" which contains the "+detail" of a wildcard match, i.e., an entry like user+*@domain. This allows handling of details by using %1%3 as the RHS. Additionally, a "+" wildcard has been introduced to match only non-empty details of addresses.
CONFIG: Numbers for rulesets used by MAILERs have been removed, and hence there is no required order within the MAILER section anymore, except that MAILER(`uucp') must come after MAILER(`smtp') if uucp-dom and uucp-uudom are used.
CONFIG: Hosts listed in the generics domain class {G} (GENERICS_DOMAIN() and GENERICS_DOMAIN_FILE()) are treated as canonical. Suggested by Per Hedeland of Ericsson.
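The "%3" parameter described above can be pictured with a virtusertable source entry along these lines (the domains and mailbox names here are hypothetical, and this is only a sketch of the mechanism):

    info+*@example.com      request%3@inside.example.net

Since %3 carries the matched "+detail" including the plus sign, mail to info+sales@example.com would, under this entry, be redirected to request+sales@inside.example.net; the "+*" form on the left-hand side matches only non-empty details.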
CONFIG: If FEATURE(`delay_checks') is used, make sure that a lookup in the access map which returns OK or RELAY actually terminates check_* ruleset checking.
CONFIG: New tag TLS_Rcpt: for the access map, to be used by ruleset tls_rcpt; see cf/README for details.
CONFIG: Change format of the Received: header line which reveals whether STARTTLS has been used to "(version=${tls_version} cipher=${cipher} bits=${cipher_bits} verify=${verify})".
CONFIG: Use "Spam:" as the tag for lookups for the FEATURE(`delay_checks') options friends/haters instead of "To:", and enable specification of whole domains instead of just users. Notice: this change is not backward compatible. Suggested by Chris Adams from HiWAAY Informations Services.
CONFIG: Allow for local extensions for most new rulesets; see cf/README for details.
CONFIG: New FEATURE(`lookupdotdomain') to also look up .domain in the access map. Proposed by Randall Winchester of the University of Maryland.
CONFIG: New FEATURE(`local_no_masquerade') to avoid masquerading for the local mailer. Proposed by Ingo Brueckl of Wupper Online.
CONFIG: confRELAY_MSG/confREJECT_MSG can override the default messages for an unauthorized relaying attempt/for access map entries with RHS REJECT, respectively.
CONFIG: FEATURE(`always_add_domain') takes an optional argument to specify another domain to be added instead of the local one. Suggested by Richard H. Gumpertz of Computer Problem Solving.
CONFIG: confAUTH_OPTIONS allows setting of Cyrus-SASL specific options; see doc/op/op.me for details.
CONFIG: confAUTH_MAX_BITS sets the maximum encryption strength for the security layer in SMTP AUTH (SASL).
CONFIG: If Local_localaddr resolves to $#ok, localaddr is terminated immediately.
CONFIG: FEATURE(`enhdnsbl') is an enhanced version of dnsbl which allows checking of the return values of the DNS lookups. See cf/README for details.
CONFIG: FEATURE(`dnsbl') now allows specifying the behavior for temporary lookup failures.
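To illustrate the new "Spam:" tag (the addresses are hypothetical, and this assumes FEATURE(`delay_checks') is enabled with the friend/hater option as described above), access map source entries might look like:

    Spam:abuse@example.com      FRIEND
    Spam:example.org            FRIEND

With the `friend' variant, mail to a matching recipient bypasses the delayed check_* rules; note that whole domains can now be listed, not just individual users.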
CONFIG: New option confDELIVER_BY_MIN to specify the minimum time for Deliver By (RFC 2852) or to turn off the extension.
CONFIG: New option confSHARED_MEMORY_KEY to set the key for shared memory use.
CONFIG: New FEATURE(`compat_check') to look up a key consisting of the sender and the recipient address delimited by the string "<@>", e.g., sender@sdomain<@>recipient@rdomain, in the access map. Based on code contributed by Mathias Koerber of Singapore Telecommunications Ltd.
CONFIG: Add EXPOSED_USER_FILE() command to allow an exposed user file. Suggested by John Beck of Sun Microsystems.
CONFIG: Don't use MAILER-DAEMON for error messages delivered via LMTP. Problem reported by Larry Greenfield of CMU.
CONFIG: New FEATURE(`preserve_luser_host') to preserve the name of the recipient host if LUSER_RELAY is used.
CONFIG: New FEATURE(`preserve_local_plus_detail') to preserve the +detail portion of the address when passing the address to the local delivery agent. Disables alias and .forward +detail stripping. Only use if the LDA supports this.
CONFIG: Removed deprecated FEATURE(`rbl').
CONFIG: Add LDAPROUTE_EQUIVALENT() and LDAPROUTE_EQUIVALENT_FILE() which allow you to specify 'equivalent' hosts for LDAP Routing lookups. Equivalent hostnames are replaced by the masquerade domain name for lookups. See cf/README for additional details.
CONFIG: Add a fourth argument to FEATURE(`ldap_routing') which instructs the rulesets on what to do if the address being looked up has +detail information. See cf/README for more information.
CONFIG: When choosing a new destination via LDAP Routing, also look up the new routing address/host in the mailertable. Based on patch from Don Badrak of the United States Census Bureau.
CONFIG: Do not reject the SMTP Mail from: command if LDAP Routing is in use and the bounce option is enabled. Only reject recipients as user unknown.
CONFIG: Provide LDAP support for the remaining database map features.
See the ``USING LDAP FOR ALIASES AND MAPS'' section of cf/README for more information.
CONFIG: Add confLDAP_CLUSTER which defines the ${sendmailMTACluster} macro used for LDAP searches as described above in ``USING LDAP FOR ALIASES, MAPS, AND CLASSES''.
CONFIG: confCLIENT_OPTIONS has been replaced by CLIENT_OPTIONS(), which takes the options as argument and can be used multiple times; see cf/README for details.
CONFIG: Add configuration macros for new options:
    confBAD_RCPT_THROTTLE            BadRcptThrottle
    confDIRECT_SUBMISSION_MODIFIERS  DirectSubmissionModifiers
    confMAILBOX_DATABASE             MailboxDatabase
    confMAX_QUEUE_CHILDREN           MaxQueueChildren
    confMAX_RUNNERS_PER_QUEUE        MaxRunnersPerQueue
    confNICE_QUEUE_RUN               NiceQueueRun
    confQUEUE_FILE_MODE              QueueFileMode
    confFAST_SPLIT                   FastSplit
    confTLS_SRV_OPTIONS              TLSSrvOptions
See above (and related documentation) for further information.
CONFIG: Add configuration variables for new timeout options:
    confTO_ACONNECT    Timeout.aconnect
    confTO_AUTH        Timeout.auth
    confTO_LHLO        Timeout.lhlo
    confTO_STARTTLS    Timeout.starttls
CONFIG: Add configuration macros for the mail filter API:
    confINPUT_MAIL_FILTERS       InputMailFilters
    confMILTER_LOG_LEVEL         Milter.LogLevel
    confMILTER_MACROS_CONNECT    Milter.macros.connect
    confMILTER_MACROS_HELO       Milter.macros.helo
    confMILTER_MACROS_ENVFROM    Milter.macros.envfrom
    confMILTER_MACROS_ENVRCPT    Milter.macros.envrcpt
Mail filters can be defined via INPUT_MAIL_FILTER() and MAIL_FILTER(). See libmilter/README, cf/README, and doc/op/op.me for details.
CONFIG: Add support for accepting temporarily unresolvable domains. See cf/README for details. Based on patch by Motonori Nakamura of Kyoto University.
CONFIG: confDEQUOTE_OPTS can be used to specify options for the dequote map.
CONFIG: New macro QUEUE_GROUP() to define queue groups.
CONFIG: New FEATURE(`queuegroup') to select a queue group based on the full e-mail address or the domain of the recipient.
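As a rough sketch of how the two new queue group hooks might fit together in a .mc file (the group name, path, and domain below are hypothetical; the exact equates are documented in cf/README for this release):

    QUEUE_GROUP(`slowqueue', `P=/var/spool/mqueue/slow, R=2')
    FEATURE(`queuegroup', `mqueue')

FEATURE(`queuegroup')'s optional argument names the default group; the access map can then direct mail for particular recipients into a group via entries tagged QGRP:, e.g., QGRP:example.com with RHS slowqueue.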
CONFIG: Any IPv6 addresses used in configuration should be prefixed by the "IPv6:" tag to identify the address properly. For example, if you want to use the IPv6 address 2002:c0a8:51d2::23f4 in the access database, you would need to use IPv6:2002:c0a8:51d2::23f4 on the left hand side. This affects the access database as well as the relay-domains and local-host-names files.
CONFIG: OSTYPE(aux) has been renamed to OSTYPE(a-ux).
CONFIG: Avoid expansion of m4 keywords in SMART_HOST.
CONFIG: Add MASQUERADE_EXCEPTION_FILE() for reading masquerading exceptions from a file. Suggested by Trey Breckenridge of Mississippi State University.
CONFIG: Add LOCAL_USER_FILE() for reading local users (LOCAL_USER() -- $={L}) entries from a file.
CONTRIB: dnsblaccess.m4 is a further enhanced version of enhdnsbl.m4 which allows looking up error codes in the access map. Contributed by Neil Rickert of Northern Illinois University.
DEVTOOLS: Add new options for installation of include and library files: confINCGRP, confINCMODE, confINCOWN, confLIBGRP, confLIBMODE, confLIBOWN.
DEVTOOLS: Add new option confDONT_INSTALL_CATMAN to turn off installation of the formatted man pages on operating systems which don't include cat directories.
EDITMAP: New program for editing maps as a supplement to makemap.
MAIL.LOCAL: Mail.local now uses the libsm mbdb package to look up local mail recipients. New option -D mbdb specifies the mailbox database type.
MAIL.LOCAL: New option "-h filename" which instructs mail.local to deliver the mail to the named file in the user's home directory instead of the system mail spool area. Based on patch from Doug Hardie of the Los Angeles Free-Net.
MAILSTATS: New command line option -P which acts the same as -p but doesn't truncate the statistics file.
MAKEMAP: Add new option -t to specify a different delimiter instead of white space.
RMAIL: Invoke sendmail with '-G' to indicate this is a gateway submission.
Problem noted by Kari Hurtta of the Finnish Meteorological Institute.
SMRSH: Use the vendor supplied directory on FreeBSD 3.3 and later.
VACATION: Change the Auto-Submitted: header value from auto-generated to auto-replied. From Kenneth Murchison of Oceana Matrix Ltd.
VACATION: New option -d to send error/debug messages to stdout instead of syslog.
VACATION: New option -U which prevents the attempt to look up login in the password file. The -f and -m options must be used to specify the database and message file since there is no home directory for the default settings for these options.
VACATION: Vacation now uses the libsm mbdb package to look up local mail recipients; it reads the MailboxDatabase option from the sendmail.cf file. New option -C cffile specifies the path of the sendmail.cf file.

New Directories:
    libmilter/docs

New Files:
    cf/cf/README
    cf/cf/submit.cf
    cf/cf/submit.mc
    cf/feature/authinfo.m4
    cf/feature/compat_check.m4
    cf/feature/enhdnsbl.m4
    cf/feature/msp.m4
    cf/feature/local_no_masquerade.m4
    cf/feature/lookupdotdomain.m4
    cf/feature/preserve_luser_host.m4
    cf/feature/preserve_local_plus_detail.m4
    cf/feature/queuegroup.m4
    cf/sendmail.schema
    contrib/dnsblaccess.m4
    devtools/M4/UNIX/sm-test.m4
    devtools/OS/OpenUNIX.5.i386
    editmap/*
    include/sm/*
    libsm/*
    libsmutil/cf.c
    libsmutil/err.c
    sendmail/SECURITY
    sendmail/TUNING
    sendmail/bf.c
    sendmail/bf.h
    sendmail/sasl.c
    sendmail/sm_resolve.c
    sendmail/sm_resolve.h
    sendmail/tls.c

Deleted Files:
    cf/feature/rbl.m4
    cf/ostype/aix2.m4
    devtools/OS/AIX.2
    include/sendmail/cdefs.h
    include/sendmail/errstring.h
    include/sendmail/useful.h
    libsmutil/errstring.c
    sendmail/bf_portable.c
    sendmail/bf_portable.h
    sendmail/bf_torek.c
    sendmail/bf_torek.h
    sendmail/clock.c

Renamed Files:
    cf/cf/generic-solaris2.mc => cf/cf/generic-solaris.mc
    cf/cf/generic-solaris2.cf => cf/cf/generic-solaris.cf
    cf/ostype/aux.m4 => cf/ostype/a-ux.m4
8.11.7/8.11.7 2003/03/29

SECURITY: Fix a remote buffer overflow in header parsing by dropping sender and recipient header comments if the comments are too long. Problem noted by Mark Dowd of ISS X-Force. To restore the pre-8.11.7 defaults, set MaxMimeHeaderLength to 0/0.
Properly clean up macros to avoid persistence of session data across various connections. This could cause session oriented restrictions, e.g., STARTTLS requirements, to erroneously allow a connection. Problem noted by Tim Maletic of Priority Health.
Ignore comments in NIS host records when trying to find the canonical name for a host.
Fix a memory leak when closing Hesiod maps.
Set the ${msg_size} macro when reading a message from the command line or the queue.
Prevent a segmentation fault when clearing the event list by turning off alarms before checking if the event list is empty. Problem noted by Allan E Johannesen of Worcester Polytechnic Institute.
Fix a potential core dump problem if the environment variable NAME is set. Problem noted by Beth A. Chaney of Purdue University.
Prevent a race condition on child cleanup for delivery to files. Problem noted by Fletcher Mattox of the University of Texas.
CONFIG: Do not bounce mail if FEATURE(`ldap_routing')'s bounce parameter is set and the LDAP lookup returns a temporary error.
CONFIG: Fix a syntax error in the try_tls ruleset if FEATURE(`access_db') is not enabled.
LIBSMDB: Fix a lock race condition that affects makemap, praliases, and vacation.
LIBSMDB: Avoid a file creation race condition for Berkeley DB 1.X and NDBM on systems with the O_EXLOCK open(2) flag.
MAKEMAP: Avoid going beyond the end of an input line if it does not contain a value for a key. Based on patch from Mark Bixby from Hewlett-Packard.
MAIL.LOCAL: Fix a truncation race condition if the close() on the mailbox fails. Problem noted by Tomoko Fukuzawa of Sun Microsystems.
SMRSH: SECURITY: Only allow regular files or symbolic links to be used for a command. Problem noted by David Endler of iDEFENSE, Inc.
8.11.6/8.11.6 2001/08/20

SECURITY: Fix a possible memory access violation when specifying out-of-bounds debug parameters. Problem detected by Cade Cairns of SecurityFocus.
Avoid leaking recipient information in unrelated DSNs. This could happen if a connection is aborted, several mails had been scheduled for delivery via that connection, and the timeout is reached such that several DSNs are sent next. Problem noted by Dileepan Moorkanat of Hewlett-Packard.
Fix a possible segmentation violation when specifying too many wildcard operators in a rule. Problem detected by Werner Wiethege.
Avoid a segmentation fault on non-matching Hesiod lookups. Problem noted by Russell McOrmond of flora.ca
8.11.5/8.11.5 2001/07/31

Fix a possible race condition when sending a HUP signal to restart the daemon. This could terminate the current process without starting a new daemon. Problem reported by Wolfgang Breyha of SE Netway Communications.
Only apply MaxHeadersLength when receiving a message via SMTP or the command line. Problem noted by Andrey J. Melnikoff.
When finding the system's local hostname on an IPv6-enabled system which doesn't have any IPv6 interface addresses, fall back to looking up only IPv4 addresses. Problem noted by Tim Bosserman of EarthLink.
When commands were being rejected due to check_relay or TCP Wrappers, the ETRN command was not giving a response.
Incoming IPv4 connections on a Family=inet6 daemon (using IPv4-mapped addresses) were incorrectly labeled as "may be forged". Problem noted by Per Steinar Iversen of Oslo University College.
Shutdown address test mode cleanly on SIGTERM. Problem noted by Greg King of the OAO Corporation.
Restore the original real uid (changed in main() to prevent out of band signals) before invoking a delivery agent. Some delivery agents use this for the "From " envelope "header". Problem noted by Leslie Carroll of the University at Albany.
Mark closed file descriptors properly to avoid reuse. Problem noted by Jeff Bronson of J.D. Bronson, Inc.
Setting Timeout options on the command line will also override their sub-suboptions in the .cf file, e.g., -O Timeout.queuereturn=2d will set all queuereturn timeouts to 2 days. Problem noted by Roger B.A. Klorese.
Portability:
    BSD/OS has a broken setreuid() implementation. Problem noted by Vernon Schryver of Rhyolite Software.
    BSD/OS has /dev/urandom(4) (as of version 4.1/199910 ?). Noted by Vernon Schryver of Rhyolite Software.
    BSD/OS has fchown(2). Noted by Dave Yadallee of Netline 2000 Internet Solutions Inc.
    Solaris 2.X and later have strerror(3). From Sebastian Hagedorn of Cologne University.
CONFIG: Fix parsing for IPv6 domain literals in addresses (user@[IPv6:address]). Problem noted by Liyuan Zhou.
8.11.4/8.11.4 2001/05/28

Clean up signal handling routines to reduce the chances of heap corruption and other potential race conditions. Terminating and restarting the daemon may not be instantaneous due to this change. Also, non-root users can no longer send out-of-band signals. Problem reported by Michal Zalewski of BindView.
If LogLevel is greater than 9 and SASL fails to negotiate an encryption layer, avoid a core dump logging the encryption strength. Problem noted by Miroslav Zubcic of Crol.
If a server offers "AUTH=" and "AUTH " and the list of mechanisms is different in those two lines, sendmail might not have recognized (and used) all of the offered mechanisms.
Fix an IP address lookup problem on Solaris 2.0 - 2.3. Patch from Kenji Miyake.
This time, really don't use the .. directory when expanding QueueDirectory wildcards.
If a process is interrupted while closing a map, don't try to close the same map again while exiting.
Allow local mailers (F=l) to contact remote hosts (e.g., via LMTP). Problem noted by Norbert Klasen of the University of Tuebingen.
If Timeout.QueueReturn was set to a value less than the time it took to write a new queue file (e.g., 0 seconds), the bounce message would be lost. Problem noted by Lorraine L Goff of Oklahoma State University.
Pass map argument vector into map rewriting engine for the regex and prog map types. Problem noted by Stephen Gildea of InTouch Systems, Inc.
When closing an LDAP map due to a temporary error, close all of the other LDAP maps which share the original map's connection to the LDAP server. Patch from Victor Duchovni of Morgan Stanley.
To detect changes of NDBM aliases files, check the timestamp of the .pag file instead of the .dir file. Problem noted by Neil Rickert of Northern Illinois University.
Don't treat temporary hesiod lookup failures as permanent. Patch from Werner Wiethege.
If ClientPortOptions is set, make sure to create the outgoing socket with the family set in that option. Patch from Sean Farley.
Avoid a segmentation fault trying to dereference a NULL pointer when logging a MaxHopCount exceeded error with an empty recipient list. Problem noted by Chris Adams of HiWAAY Internet Services.
Fix DSN for "Too many hops" bounces. Problem noticed by Ulrich Windl of the Universitaet Regensburg.
Fix DSN for "mail loops back to me" bounces. Problem noticed by Kari Hurtta of the Finnish Meteorological Institute.
Portability:
    OpenBSD has a broken setreuid() implementation.
CONFIG: Undo change from 8.11.1: change 501 SMTP reply code back to 553 since it is allowed by DRUMS.
CONFIG: Add OSTYPE(freebsd4) for FreeBSD 4.X.
DEVTOOLS: install.sh did not properly handle paths in the source file name argument. Noted by Kari Hurtta of the Finnish Meteorological Institute.
DEVTOOLS: Add FAST_PID_RECYCLE to compile time options for OpenBSD since it generates random process ids.
PRALIASES: Add back adaptive algorithm to deal with different endings of entries in the database (with/without trailing ' '). Patch from John Beck of Sun Microsystems.

New Files:
    cf/ostype/freebsd4.m4
8.11.3/8.11.3 2001/02/27

Prevent a segmentation fault when a bogus value was used in the LDAPDefaultSpec option's -r, -s, or -M flags and if a bogus option was used. Problem noted by Allan E Johannesen of Worcester Polytechnic Institute.
Prevent "token too long" message by shortening {currHeader} which could be too long if the last copied character was a quote. Problem detected by Jan Krueger of digitalanswers communications consulting gmbh.
Additional IPv6 check for unspecified addresses. Patch from Jun-ichiro itojun Hagino of the KAME Project.
Do not ignore the ClientPortOptions setting if DaemonPortOptions Modifier=b (bind to same interface) is set and the connection came in from the command line.
Do not bind to the loopback address if DaemonPortOptions Modifier=b (bind to same interface) is set. Patch from John Beck of Sun Microsystems.
Properly deal with open failures on non-optional maps used in check_* rulesets by returning a temporary failure.
Buffered file I/O files were not being properly fsync'ed to disk when they were committed.
Properly encode '=' for the AUTH= parameter of the MAIL command. Problem noted by Hadmut Danisch.
Under certain circumstances the macro {server_name} could be set to the wrong hostname (of a previous connection), which may cause some rulesets to return wrong results. This would usually cause mail to be queued up and delivered later on.
Ignore the F=z (LMTP) mailer flag if $u is given in the mailer A= equate. Problem noted by Motonori Nakamura of Kyoto University.
Work around broken accept() implementations which only partially fill in the peer address if the socket is closed before accept() completes.
Return an SMTP "421" temporary failure if the data file can't be opened where the "354" reply would normally be given.
Prevent a CPU loop in trying to expand a macro which doesn't exist in a queue run. Problem noted by Gordon Lack of Glaxo Wellcome.
If delivering via a program and that program exits with EX_TEMPFAIL, note that fact for the mailq display instead of just showing "Deferred". Problem noted by Motonori Nakamura of Kyoto University.
If doing canonification via /etc/hosts, try both the fully qualified hostname as well as the first portion of the hostname. Problem noted by David Bremner of the University of New Brunswick.
Portability:
    Fix a compilation problem for mail.local and rmail if SFIO is in use. Problem noted by Auteria Wally Winzer Jr. of Champion Nutrition.
    IPv6 changes for platforms using KAME. Patch from Jun-ichiro itojun Hagino of the KAME Project.
    OpenBSD 2.7 and higher has srandomdev(3).
    OpenBSD 2.8 and higher has BSDI-style login classes. Patch from Todd C. Miller of Courtesan Consulting.
    UnixWare 7.1.1 doesn't allow h_errno to be set directly if sendmail is being compiled with -kthread. Problem noted by Orion Poplawski of CQG, Inc.
CONTRIB: buildvirtuser: Substitute current domain for $DOMAIN and current left hand side for $LHS in virtuser files.
DEVTOOLS: Do not pass make targets to recursive Build invocations. Problem noted by Jeff Bronson of J.D. Bronson, Inc.
MAIL.LOCAL: In LMTP mode, do not return errors regarding problems storing the temporary message file until after the remote side has sent the final DATA termination dot. Problem noted by Allan E Johannesen of Worcester Polytechnic Institute.
MAIL.LOCAL: If LMTP mode is set, give a temporary error if users are also specified on the command line. Patch from Motonori Nakamura of Kyoto University.
PRALIASES: Skip over AliasFile specifications which aren't based on database files (i.e., only show dbm, hash, and btree).

Renamed Files:
    devtools/OS/OSF1.V5.0 => devtools/OS/OSF1.V5.x
8.11.2/8.11.2 2000/12/29

Prevent a segmentation fault when trying to set a class in address test mode due to a negative array index. Audit other array indexing. This bug is not believed to be exploitable. Noted by Michal Zalewski of the "Internet for Schools" project (IdS).
Add an FFR (for future release) to drop privileges when using address test mode. This will be turned on in 8.12. It can be enabled by compiling with:
    APPENDDEF(`conf_sendmail_ENVDEF', `-D_FFR_TESTMODE_DROP_PRIVS')
in your devtools/Site/site.config.m4 file. Suggested by Michal Zalewski of the "Internet for Schools" project (IdS).
Fix a potential problem with the Cyrus-SASL security layer which may have caused I/O errors, especially for the mechanism DIGEST-MD5.
When QueueSortOrder was set to host, sendmail might not read enough of the queue file to determine the host, making the sort sub-optimal. Problem noted by Jeff Earickson of Colby College.
Don't issue DSNs for addresses which use the NOTIFY parameter (per RFC 1891) but don't have FAILURE as value.
Initialize the Cyrus-SASL library before the SMTP daemon is started. This implies that every change to SASL related files requires a restart of the daemon, e.g., Sendmail.conf, new SASL mechanisms (in form of shared libraries).
Properly set the STARTTLS related macros during a queue run for a cached connection. Bug reported by Michael Kellen of NxNetworks, Inc.
Log the server name in relay= for ruleset tls_server instead of the client name.
Include the original length of a bad field/header when reporting MaxMimeHeaderLength problems. Requested by Ulrich Windl of the Universitat Regensburg.
Fix delivery to set-user-ID files that are expanded from aliases in DeliveryMode queue. Problem noted by Ric Anderson of the University of Arizona.
Fix the LDAP map -m (match only) flag. Problem noted by Jeff Giuliano of Collective Technologies.
Avoid using a negative argument for sleep() calls when delaying answers to EXPN/VRFY commands on systems which respond very slowly.
Problem noted by Mikolaj J. Habryn of Optus Internet Engineering. Make sure the F=u flag is set in the default prog mailer definition. Problem noted by Kari Hurtta of the Finnish Meteorological Institute. Fix IPv6 check for unspecified addresses. Patch from Jun-ichiro itojun Hagino of the KAME Project. Fix return values for IRIX nsd map. From Kari Hurtta of the Finnish Meteorological Institute. Fix parsing of DaemonPortOptions and ClientPortOptions. Read all of the parameters to find Family= setting before trying to interpret Addr= and Port=. Problem noted by Valdis Kletnieks of Virginia Tech. When delivering to a file directly from an alias, do not call initgroups(); instead use the DefaultUser group information. Problem noted by Marc Schaefer of ALPHANET NF. RunAsUser now overrides the ownership of the control socket, if created. Otherwise, sendmail can not remove it upon close. Problem noted by Werner Wiethege. Fix ConnectionRateThrottle counting as the option is the number of overall connections, not the number of connections per socket. A future version may change this to per socket counting. Portability: Clean up libsmdb so it functions properly on platforms where sizeof(u_int32_t) != sizeof(size_t). Problem noted by Rein Tollevik of Basefarm AS. Fix man page formatting for compatibility with Solaris' whatis. From Stephen Gildea of InTouch Systems, Inc. UnixWare 7 includes snprintf() support. From Larry Rosenman. IPv6 changes for platforms using KAME. Patch from Jun-ichiro itojun Hagino of the KAME Project. Avoid a typedef compile conflict with Berkeley DB 3.X and Solaris 2.5 or earlier. Problem noted by Bob Hughes of Pacific Access. Add preliminary support for AIX 5. Contributed by Valdis Kletnieks of Virginia Tech. Solaris 9 load average support from Andrew Tucker of Sun Microsystems. CONFIG: Reject addresses of the form a!b if FEATURE(`nouucp', `r') is used. 
Problem noted by Phil Homewood of Asia Online, patch from Neil Rickert of Northern Illinois University. CONFIG: Change the default DNS based blacklist server for FEATURE(`dnsbl') to blackholes.mail-abuse.org. CONFIG: Deal correctly with the 'C' flag in {daemon_flags}, i.e., implicitly assume canonical host names. CONFIG: Deal with "::" in IPv6 addresses for access_db. Based on patch by Motonori Nakamura of Kyoto University. CONFIG: New OSTYPE(`aix5') contributed by Valdis Kletnieks of Virginia Tech. CONFIG: Pass the illegal header form <list:;> through untouched instead of making it worse. Problem noted by Motonori Nakamura of Kyoto University. CONTRIB: Added buildvirtuser (see `perldoc contrib/buildvirtuser`). CONTRIB: qtool.pl: An empty queue is not an error. Problem noted by Jan Krueger of digitalanswers communications consulting gmbh. CONTRIB: domainmap.m4: Handle domains with '-' in them. From Mark Roth of the University of Illinois at Urbana-Champaign. DEVTOOLS: Change the internal devtools OS, REL, and ARCH m4 variables into bldOS, bldREL, and bldARCH to prevent namespace collisions. Problem noted by Motonori Nakamura of Kyoto University. RMAIL: Undo the 8.11.1 change to use -G when calling sendmail. It causes some changes in behavior and may break rmail for installations where sendmail is actually a wrapper to another MTA. The change will re-appear in a future version. SMRSH: Use the vendor supplied directory on HPUX 10.X, HPUX 11.X, and SunOS 5.8. Requested by Jeff A. Earickson of Colby College and John Beck of Sun Microsystems. VACATION: Fix pattern matching for addresses to ignore. VACATION: Don't reply to addresses of the form owner-* or *-owner. New Files: cf/ostype/aix5.m4 contrib/buildvirtuser devtools/OS/AIX.5.0. Problem noted by Tim "Darth Dice" Bosserman of EarthLink.
8.11.0/8.11.0 2000/07/19 SECURITY: If sendmail is installed as a non-root set-user-ID binary (not the normal case), some operating systems will still keep a saved-uid of the effective-uid when sendmail tries to drop all of its privileges. If sendmail needs to drop these privileges and the operating system doesn't set the saved-uid as well, exit with an error. Problem noted by Kari Hurtta of the Finnish Meteorological Institute. SECURITY: sendmail depends on snprintf() NUL terminating the string it populates. It is possible that some broken implementations of snprintf() exist that do not do this. Systems in this category should compile with -DSNPRINTF_IS_BROKEN=1. Use test/t_snprintf.c to test your system and report broken implementations to sendmail-bugs@sendmail.org and your OS vendor. Problem noted by Slawomir Piotrowski of TELSAT GP. Support SMTP Service Extension for Secure SMTP (RFC 2487) (STARTTLS). Implementation influenced by the example programs of OpenSSL and the work of Lutz Jaenicke of TU Cottbus. Add new STARTTLS related options CACERTPath, CACERTFile, ClientCertFile, ClientKeyFile, DHParameters, RandFile, ServerCertFile, and ServerKeyFile. These are documented in cf/README and doc/op/op.*. New STARTTLS related macros: ${cert_issuer}, ${cert_subject}, ${tls_version}, ${cipher}, ${cipher_bits}, ${verify}, ${server_name}, and ${server_addr}. These are documented in cf/README and doc/op/op.*. Add support for the Entropy Gathering Daemon (EGD) for better random data. New DontBlameSendmail option InsufficientEntropy for systems which don't properly seed the PRNG for OpenSSL but want to try to use STARTTLS despite the security problems. Support the security layer in SMTP AUTH for mechanisms which support encryption. Based on code contributed by Tim Martin of CMU. Add new macro ${auth_ssf} to reflect the SMTP AUTH security strength factor. LDAP's -1 (single match only) flag was not honored if the -z (delimiter) flag was not given. 
Problem noted by ST Wong of the Chinese University of Hong Kong. Fix from Mark Adamson of CMU. Add more protection from accidentally tripping OpenLDAP 1.X's ld_errno == LDAP_DECODING_ERROR hack on ldap_next_attribute(). Suggested by Kurt Zeilenga of OpenLDAP. Fix the default family selection for DaemonPortOptions. As documented, unless a family is specified in a DaemonPortOptions option, "inet" is the default. It is also the default if no DaemonPortOptions value is set. Therefore, IPv6 users should configure additional sockets by adding DaemonPortOptions settings with Family=inet6 if they wish to also listen on IPv6 interfaces. Problem noted by Jun-ichiro itojun Hagino of the KAME Project. Set ${if_family} when setting ${if_addr} and ${if_name} to reflect the interface information for an outgoing connection. Not doing so was creating a mismatch between the socket family and address used in subsequent connections if the M=b modifier was set in DaemonPortOptions. Problem noted by John Beck of Sun Microsystems. If DaemonPortOptions modifier M=b is used, determine the socket family based on the IP address. ${if_family} is no longer persistent (i.e., saved in qf files). Patch from John Beck of Sun Microsystems. sendmail 8.10 and 8.11 reused the ${if_addr} and ${if_family} macros for both the incoming interface address/family and the outgoing interface address/family. In order for M=b modifier in DaemonPortOptions to work properly, preserve the incoming information in the queue file for later delivery attempts. Use SMTP error code and enhanced status code from check_relay in responses to commands. Problem noted by Jeff Wasilko of smoe.org. Add more vigilance in checking for putc() errors on output streams to protect from a bug in Solaris 2.6's putc(). Problem noted by Graeme Hewson of Oracle. The LDAP map -n option (return attribute names only) wasn't working. Problem noted by Ajay Matia. 
Under certain circumstances, an address could be listed as deferred but would be bounced back to the sender as failed to be delivered when it really should have been queued. Problem noted by Allan E Johannesen of Worcester Polytechnic Institute. Prevent a segmentation fault in a child SMTP process from getting the SMTP transaction out of sync. Problem noted by Per Hedeland of Ericsson. Turn off RES_DEBUG if SFIO is defined unless SFIO_STDIO_COMPAT is defined to avoid a core dump due to incompatibilities between sfio and stdio. Problem noted by Neil Rickert of Northern Illinois University. Don't log useless envelope ID on initial connection log. Problem noted by Kari Hurtta of the Finnish Meteorological Institute. Convert the free disk space shown in a control socket status query to kilobyte units. If TryNullMXList is True and there is a temporary DNS failure looking up the hostname, requeue the message for a later attempt. Problem noted by Ari Heikkinen of Pohjois-Savo Polytechnic. Under the proper circumstances, failed connections would be recorded as "Bad file number" instead of "Connection failed" in the queue file and persistent host status. Problem noted by Graeme Hewson of Oracle. Avoid getting into an endless loop if a non-hoststat directory exists within the hoststatus directory (e.g., lost+found). Patch from Valdis Kletnieks of Virginia Tech. Make sure Timeout.queuereturn=now returns a bounce message to the sender. Problem noted by Per Hedeland of Ericsson. If a message data file can't be opened at delivery time, panic and abort the attempt instead of delivering a message that states "<<< No Message Collected >>>". Fixup the GID checking code from 8.10.2 as it was overly restrictive. Problem noted by Mark G. Thomas of Mark G. Thomas Consulting. Preserve source port number instead of replacing it with the ident port number (113). Document the queue status characters in the mailq man page. Suggested by Ulrich Windl of the Universitat Regensburg. 
Process queued items in which none of the recipient addresses have host portions (or there are no recipients). Problem noted by Valdis Kletnieks of Virginia Tech. If a cached LDAP connection is used for multiple maps, make sure only the first to open the connection is allowed to close it so a later map close doesn't break the connection for other maps. Problem noted by Wolfgang Hottgenroth of UUNET. Netscape's LDAP libraries do not support Kerberos V4 authentication. Patch from Rainer Schoepf of the University of Mainz. Provide workaround for inconsistent handling of data passed via callbacks to Cyrus SASL prior to version 1.5.23. Mention ENHANCEDSTATUSCODES in the SMTP HELP helpfile. Omission noted by Ulrich Windl of the Universitat Regensburg. Portability: Add the ability to read IPv6 interface addresses into class 'w' under FreeBSD (and possibly others). From Jun Kuriyama of IMG SRC, Inc. and the FreeBSD Project. Replace code for finding the number of CPUs on HPUX. NCRUNIX MP-RAS 3.02 SO_REUSEADDR socket option does not work properly causing problems if the accept() fails and the socket needs to be reopened. Patch from Tom Moore of NCR. NetBSD uses a .0 extension of formatted man pages. From Andrew Brown of Crossbar Security. Return to using the IPv6 AI_DEFAULT flag instead of AI_V4MAPPED for calls to getipnodebyname(). The Linux implementation is broken so AI_ADDRCONFIG is stripped under Linux. From John Beck of Sun Microsystems and John Kennedy of Cal State University, Chico. CONFIG: Catch invalid addresses containing a ',' at the wrong place. Patch from Neil Rickert of Northern Illinois University. 
CONFIG: New variables for the new sendmail options: confCACERT_PATH CACERTPath confCACERT CACERTFile confCLIENT_CERT ClientCertFile confCLIENT_KEY ClientKeyFile confDH_PARAMETERS DHParameters confRAND_FILE RandFile confSERVER_CERT ServerCertFile confSERVER_KEY ServerKeyFile CONFIG: Provide basic rulesets for TLS policy control and add new tags to the access database to support these policies. See cf/README for more information. CONFIG: Add TLS information to the Received: header. CONFIG: Call tls_client ruleset from check_mail in case it wasn't called due to a STARTTLS command. CONFIG: If TLS_PERM_ERR is defined, TLS related errors are permanent instead of temporary. CONFIG: FEATURE(`relay_hosts_only') didn't work in combination with the access map and relaying to a domain without using a To: tag. Problem noted by Mark G. Thomas of Mark G. Thomas Consulting. CONFIG: Set confEBINDIR to /usr/sbin to match the devtools entry in OSTYPE(`linux') and OSTYPE(`mklinux'). From Tim Pierce of RootsWeb.com. CONFIG: Make sure FEATURE(`nullclient') doesn't use aliasing and forwarding to make it as close to the old behavior as possible. Problem noted by George W. Baltz of the University of Maryland. CONFIG: Added OSTYPE(`darwin') for Mac OS X and Darwin users. From Wilfredo Sanchez of Apple Computer, Inc. CONFIG: Changed the map names used by FEATURE(`ldap_routing') from ldap_mailhost and ldap_mailroutingaddress to ldapmh and ldapmra as underscores in map names cause problems if underscore is in OperatorChars. Problem noted by Bob Zeitz of the University of Alberta. CONFIG: Apply blacklist_recipients also to hosts in class {w}. Patch from Michael Tratz of Esosoft Corporation. CONFIG: Use A=TCP ... instead of A=IPC ... in SMTP mailers. CONTRIB: Add link_hash.sh to create symbolic links to the hash of X.509 certificates. 
CONTRIB: passwd-to-alias.pl: More protection from special characters; treat special shells as root aliases; skip entries where the GECOS full name and username match. From Ulrich Windl of the Universitat Regensburg. CONTRIB: qtool.pl: Add missing last_modified_time method and fix a typo. Patch from Graeme Hewson of Oracle. CONTRIB: re-mqueue.pl: Improve handling of a race between re-mqueue and sendmail. Patch from Graeme Hewson of Oracle. CONTRIB: re-mqueue.pl: Don't exit(0) at end so can be called as subroutine Patch from Graeme Hewson of Oracle. CONTRIB: Add movemail.pl (move old mail messages between queues by calling re-mqueue.pl) and movemail.conf (configuration script for movemail.pl). From Graeme Hewson of Oracle. CONTRIB: Add cidrexpand (expands CIDR blocks as a preprocessor to makemap). From Derek J. Balling of Yahoo,Inc. DEVTOOLS: INSTALL_RAWMAN installation option mistakenly applied any extension modifications (e.g., MAN8EXT) to the installation target. Patch from James Ralston of Carnegie Mellon University. DEVTOOLS: Add support for SunOS 5.9. DEVTOOLS: New option confLN contains the command used to create links. LIBSMDB: Berkeley DB 2.X and 3.X errors might be lost and not reported. MAIL.LOCAL: DG/UX portability. Problem noted by Tim Boyer of Denman Tire Corporation. MAIL.LOCAL: Prevent a possible DoS attack when compiled with -DCONTENTLENGTH. Based on patch from 3APA3A@SECURITY.NNOV.RU. MAILSTATS: Fix usage statement (-p and -o are optional). MAKEMAP: Change man page layout as workaround for problem with nroff and -man on Solaris 7. Patch from Larry Williamson. RMAIL: AIX 4.3 has snprintf(). Problem noted by David Hayes of Black Diamond Equipment, Limited. RMAIL: Prevent a segmentation fault if the incoming message does not have a From line. VACATION: Read all of the headers before deciding whether or not to respond instead of stopping after finding recipient. 
Added Files: cf/ostype/darwin.m4 contrib/cidrexpand contrib/link_hash.sh contrib/movemail.conf contrib/movemail.pl devtools/OS/SunOS.5.9 test/t_snprintf.c
8.10.2/8.10.2 2000/06/07 SECURITY: Work around broken Linux setuid() implementation. On Linux, a normal user process has the ability to subvert the setuid() call such that it is impossible for a root process to drop its privileges. Problem noted by Wojciech Purczynski of elzabsoft.pl. SECURITY: Add more vigilance around set*uid(), setgid(), setgroups(), initgroups(), and chroot() calls. Added Files: test/t_setuid.c
The following are the known problems and limitations in Sendmail 8.12.9, as provided by sendmail.org. For descriptions of bugs that have been fixed, see the ``Sendmail Version 8.12.9 Release Notes''.
* Delivery to programs that generate too much output may cause problems
If e-mail is delivered to a program which generates too much output, then sendmail may issue an error:
timeout waiting for input from local during Draining Input
Make sure that the program does not generate output beyond a status message (corresponding to the exit status). This may require a wrapper around the actual program to redirect output to /dev/null.
Such a problem has been reported for bulk_mailer.
*.
* Header checks are not called if header value is too long or empty.
If the value of a header is longer than 1250 (MAXNAME + MAXATOM - 6) characters or it contains a single word longer than 256 (MAXNAME) characters then no header check is done even if one is configured for the header.
* Sender addresses whose domain part cause a temporary A record lookup failure but have a valid MX record will be temporarily rejected in the default configuration. Solution: fix the DNS at the sender side. If that's not easy to achieve, possible workarounds are: - add an entry to the access map: dom.ain OK - (only for advanced users) replace
# Resolve map (to check if a host exists in check_mail) Kresolve host -a<OKR> -T<TEMP>
with
# Resolve map (to check if a host exists in check_mail) Kcanon host -a<OKR> -T<TEMP> Kdnsmx dns -R MX -a<OKR> -T<TEMP> Kresolve sequence dnsmx canon
* Duplicate error messages.
Sometimes identical, duplicate error messages can be generated. As near as I can tell, this is rare and relatively innocuous.
* Misleading error messages.
If an illegal address is specified on the command line together with at least one valid address and PostmasterCopy is set, the DSN does not contain the illegal address, but only the valid address(es).
* .
* Client ignores SIZE parameter.
When sendmail acts as client and the server specifies a limit for the mail size, sendmail will ignore this and try to send the mail anyway. The server will usually reject the MAIL command which specifies the size of the message and hence this problem is not significant.
*-user-ID files
Sendmail will deliver to a fail if the file is owned by the DefaultUser or has the set-user-ID.
* MAIL_HUB always takes precedence over LOCAL_RELAY
Despite the information in the documentation, MAIL_HUB ($H) will always be used if set instead of LOCAL_RELAY ($R). This will be fixed in a future version.
$Revision: 8.55.2.1 $, Last updated $Date: 2002/12/18 22:38:48 $ | http://uw714doc.xinuos.com/en/MM_admin/sendmail-8.12.9-rn.html | CC-MAIN-2020-05 | refinedweb | 7,012 | 51.65 |
One of the first things many already noted - myself included - was that the offending class library was not in the Harmony source tree, which should mean it didn't come from Harmony. However, the header and some of the language suggested otherwise, but in fact really didn't, as Apache's statement explains.
"Even though the code in question has an Apache license, it is not part of Harmony. PolicyNodeImpl.java is simply not a Harmony class," the ASF states, "Verifying that something is from the Apache Software Foundation is very easy to do: our sources are all posted online. So it is sad when people don't take that step."
Of course, we did take that step, but people still weren't convinced, mostly because of the language at the beginning of the file. Via email, ASF member Geir Magnusson Jr. explained this in more detail.
"I realize the code is under the org.apache.harmony package and has the Apache License on it, but putting things in org.apache.* namespace when testing Java in the org.apache.* package generally is a technical necessity, and anyone is free to license software under the Apache License," Geir explains, "In fact, we encourage it :)."
So, that's that, then. Despite claims made all over the place - including on OSNews, made by me - the class in question does not, and did not come, from the Apache Harmony project. My apologies for taking part in spreading the confusion. | http://www.osnews.com/story/23968/Apache_Software_Foundation_Disputed_Code_Not_from_Harmony | CC-MAIN-2015-27 | refinedweb | 245 | 64.1 |
Having an issue in unity web requests. When I try to download a file from my url if it's not included or if I've updated it online I always get this as a return if I use web client, web request or any variation.
> <!DOCTYPE HTML PUBLIC "-//W3C//DTD
> HTML 4.01//EN"
> "">
> <html>
>
> <head> <title>MythosWorks</title>
> <meta name="description" content="">
> <meta name="keywords" content="">
> </head> <frameset rows="100%,*"
> <frame
> </frameset>
>
> </html>
I just want to return the text form the json file. How would I go about that this is my unity script currently.
> > using System.Collections; using System.Collections.Generic; using
> System.IO; using System.Net; using
> UnityEngine; using
> UnityEngine.Networking;
>
> public class VersionControl :
> MonoBehaviour {
> // Start is called before the first frame update
> void Start()
> {
> VersionCheck();
> }
>
> // Update is called once per frame
> void Update()
> {
>
> }
>
> // Loading Game Check from dictionary file > version > build!
> private void VersionCheck()
> {
> // Check if file exists!
> if (File.Exists(Application.dataPath +
> "/Scripts/DB/words.json"))
> {
> // Set file path & create reader
> string path = Application.dataPath +
> "/Scripts/DB/words.json";
> using (StreamReader dfetch = new StreamReader(path))
> {
> // fetch data from file and build list, then provide word
> count!
> string json = dfetch.ReadToEnd();
> jsonDataClass jsnData = JsonUtility.FromJson<jsonDataClass>(json);
> }
> }
> // Fetch file form repo!
> else
> {
> WebClient moogle = new WebClient();
> moogle.Headers["User-Agent"] =
> "Mozilla/5.0 (Windows NT 6.3; Win64; x64) AppleWebKit/537.36
> (KHTML, like Gecko)
> Chrome/45.0.2454.85 Safari/537.36";
> moogle.DownloadFile("",
> Application.dataPath +
> "/Scripts/DB/words.json");
> Debug.Log("Working on download");
> VersionCheck();
> }
> }
Tried without the headers as well. Don't matter if I have to parse the data down, but just really want to download the exact file and save it.
Answer by Mouton
·
Oct 06, 2019 at 03:05 PM
You need to change how you serve the static file from the server. In your server application, you need to serve the file with content-type header set to application/json then serve the file directly as a JSON entity.
content-type
application91 People are following this question.
How to get UnityWebRequest to work for downloading JSON
1
Answer
Alternative to BuildStreamedSceneAssetBundle ?
0
Answers
Multiple Cars not working
1
Answer
Distribute terrain in zones
3
Answers
Making the player change his movement when it hit an object with collider
0
Answers | https://answers.unity.com/questions/1670028/web-request-pulls-html-script-instead-of-file-text.html?sort=oldest | CC-MAIN-2020-29 | refinedweb | 383 | 50.73 |
The
Order typeclass abstracts the ability to compare two instances of any object and determine their total order.
Depending on your needs this comparison can be structural -the content of the object-, referential -the memory address of the object-, based on an identity -like an Id field-, or any combination of the above.
It can be considered the typeclass equivalent of Java’s
Comparable.
fun F.compare(b: F): Int
Compare [a] with [b]. Returns an Int whose sign is:
x < y
x = y
x > y
import arrow.* import arrow.typeclasses.* import arrow.instances.* Int.order().run { 1.compare(2) } // -1
Lesser than or equal to defines total order in a set, it compares two elements and returns true if they’re equal or the first is lesser than the second.
It is the opposite of
gte.
Int.order().run { 1.lte(2) } // true
Greater than or equal compares two elements and returns true if they’re equal or the first is lesser than the second.
It is the opposite of
lte.
Int.order().run { 1.gte(2) } // false
Compares two elements and respectively returns the maximum or minimum in respect to their order.
Int.order().run { 1.min(2) } // 1
Int.order().run { 1.max(2) } // 2
Sorts the elements in a
Tuple2
Int.order().run { 1.sort(2) } // Tuple2(a=2, b=1)
Arrow provides
OrderLaws in the form of test cases for internal verification of lawful instances and third party apps creating their own
Order instances.
Orderinstances
Order has a constructor to create an
Order instance from a compare function
(F, F) -> Int.
Order { a: Int, b: Int -> b - a }.run { 1.lt(2) } // false
See Deriving and creating custom typeclass to provide your own
Order instances for custom datatypes. | http://arrow-kt.io/docs/typeclasses/order/ | CC-MAIN-2018-17 | refinedweb | 293 | 55.13 |
Created on 2008-12-14 16:48 by merrellb, last changed 2009-08-06 02:10 by jnoller. This issue is now closed.
Despite carefully matching my get() and task_done() statements I would
often trigger "raise ValueError('task_done() called too many times')" in
my multiprocessing.JoinableQueue (multiprocessing/queues.py)
Looking over the code (and a lot of debug logging), it appears that the
issue arises from JoinableQueue.put() not being protected with a locking
mechanism. A preemption after the first line allows other processes to
resume without releasing the _unfinished_tasks semaphore.
The simplest solution seems to be allowing task_done() to block while
waiting to acquire the _unfinished_tasks semaphore.
Replacing:
if not self._unfinished_tasks.acquire(False):
raise ValueError('task_done() called too many times')
With simply:
self._unfinished_tasks.acquire()
This would however remove the error checking provided (given the many
far more subtler error that can be made, I might argue it is of limited
value). Alternately the JoinableQueue.put() method could be better
protected.
Here are a few stabs at how this might be addressed.
1) As originally suggested. Allow task_done() to block waiting to
acquire _unfinished_tasks. This will allow the put() process to resume,
release() _unfinished_tasks at which point task_done() will unblock. No
harm, no foul but you do lose some error checking (and maybe some
performance?)
2) One can't protect JoinableQueue.put() by simply acquiring _cond
before calling Queue.put(). Fixed size queues will block if the queue
is full, causing deadlock when task_done() can't acquire _cond. The
most obvious solution would seem to be reimplementing
JoinableQueue.put() (not simply calling Queue.put()) and then inserting self._unfinished_tasks.release() into a protected portion. Perhaps:
def put(self, obj, block=True, timeout=None):
assert not self._closed
if not self._sem.acquire(block, timeout):
raise Full
self._notempty.acquire()
self._cond.acquire()
try:
if self._thread is None:
self._start_thread()
self._buffer.append(obj)
self._unfinished_tasks.release()
self._notempty.notify()
finally:
self._cond.release()
self._notempty.release()
We may be able to get away with not acquiring _cond as _notempty would
provide some protection. However its relationship to get() isn't
entirely clear to me so I am not sure if this would be sufficient.
Hi Brian - do you have a chunk of code that exacerbates this? I'm having
problems reproducing this, and need a test so I can prove out the fix.
Hey Jesse,
It was good meeting you at Pycon. I don't have anything handy at the moment
although, if memory serves, the most trivial of example seemed to illustrate
the problem. Basically any situation where a joinable queue would keep
bumping up against being empty (ie retiring items faster than they are being
fed), and does enough work between get() and task_done() to be preempted
would eventually break. FWIW I was running on a Windows box.
I am afraid I am away from my computer until late tonight but I can try to
cook something up then (I presume you are sprinting today?). Also I think
the issue becomes clear when you think about what happens if
joinablequeue.task_done() gets preempted between its few lines.
-brian
On Mon, Mar 30, 2009 at 2:55 PM, Jesse Noller <report@bugs.python.org>wrote:
>
> Jesse Noller <jnoller@gmail.com> added the comment:
>
> Hi Brian - do you have a chunk of code that exacerbates this? I'm having
> problems reproducing this, and need a test so I can prove out the fix.
>
> ----------
>
> _______________________________________
> Python tracker <report@bugs.python.org>
> <>
> _______________________________________
>
Jesse,
I am afraid my last post may have confused the issue. As I mentioned in
my first post, the problem arises when JoinableQueue.put is preempted
between its two lines. Perhaps the easiest way to illustrate this is to
exacerbate it by modifying JoinableQueue.put to force a preemption at
this inopportune time.
import time
def put(self, item, block=True, timeout=None):
Queue.put(self, item, block, timeout)
time.sleep(1)
self._unfinished_tasks.release()
Almost any example will now fail.
from multiprocessing import JoinableQueue, Process
def printer(in_queue):
while True:
print in_queue.get()
in_queue.task_done()
if __name__ == '__main__':
jqueue = JoinableQueue()
a = Process(target = printer, args=(jqueue,)).start()
jqueue.put("blah")
I ran into the same problem and am greatful to Brian for reporting this
as I thought I was loosing my mind.
Brian noted that he was running windows and I can confirm that Brian's
test case is reproducable on my laptop running:
Ubuntu 9.04
python 2.6.2
Although I'm reluctant to try Brian's suggestions without additional
comments even if they do work. I'll be using this in production.
Filipe,
Thanks for the confirmation. While I think the second option (ie
properly protecting JoinableQueue.put()) is best, the first option
(simply removing the 'task_done() called too many times' check) should
be safe (presuming your get() and put() calls actually match).
Jesse, any luck sorting out the best fix for this? I really think that
JoinableQueue (in my opinion the most useful form of multiprocessing
queues) can't be guaranteed to work on any system right now.
-brian.
Cool., let me know if there is anything I can do to help.
On Mon, Jun 29, 2009 at 7:46 AM, Jesse Noller <report@bugs.python.org>wrote:
>
> Jesse Noller <jnoller@gmail.com> added the comment:
>
>.
>
> ----------
>
> _______________________________________
> Python tracker <report@bugs.python.org>
> <>
> _______________________________________
>
Fix checked into python trunk with r74326, 26 maint w/ r74327
I used the protected JoinableQueue put method suggested by Brian. | http://bugs.python.org/issue4660 | CC-MAIN-2017-04 | refinedweb | 915 | 60.01 |
PMIEND(3) Library Functions Manual PMIEND(3)
pmiEnd - finish up a LOGIMPORT archive
#include <pcp/pmapi.h> #include <pcp/impl.h> #include <pcp/import.h> int pmiEnd(void); cc ... -lpcp_import -lpcp
use PCP::LogImport; pmiEnd();
As part of the Performance Co-Pilot Log Import API (see LOGIMPORT(3)), pmiEnd closes the current context, forcing the trailer records to be written to the PCP archive files, and then these files are closed. In normal operations, an application would include a call to pmiEnd at the end of processing for each context created with a call to pmiStart(3).
pmiEnd returns zero on success else a negative value that can be turned into an error message by calling pmiErrStr(3).
LOGIMPORT(3) and pmiStart PMIEND(3)
Pages that refer to this page: logimport(3), pmistart(3) | http://man7.org/linux/man-pages/man3/pmiend.3.html | CC-MAIN-2017-26 | refinedweb | 134 | 65.22 |
InsertTag() - Error
On 27/11/2016 at 20:34, xxxxxxxx wrote:
Hello there,
**Error message:**
A problem with this project has been detected: Object "Platonic" - Tag 5671 not in sync. Please save and contact MAXON Support with a description of the last used commands, actions or plugins.
I made a video to better explain this error message.
I want to obtain the same result as in the method 01 using python.
Python code:

```python
import c4d
from c4d import gui

def main():
    ObjToAddTag = doc.GetActiveObjects(False)[0]
    OTag = doc.SearchObject("Cube")
    ObjToAddTag.InsertTag(OTag.GetFirstTag().GetNext())
    c4d.EventAdd()

if __name__ == '__main__':
    main()
```
Thanks!
On 28/11/2016 at 06:45, xxxxxxxx wrote:
Hi Mustapha,
the reason for your problem is that you try to insert, for a second time, a tag which is already inserted on another object.
Here:
ObjToAddTag.InsertTag(OTag.GetFirstTag().GetNext())
A tag (actually any GeListNode-derived entity) can (and must) only be a member of one single list at any given time.
So in your case there are two options:
Either you want to copy the tag from object A onto object B, then you need to clone the tag first (GetClone()) :
# ... if OTag is None or OTag.GetFirstTag() is None: return tagSrc = OTag.GetFirstTag().GetNext() if tagSrc is None: return tagClone = tagSrc.GetClone(c4d.COPYFLAGS_0) if tagClone is None: return ObjToAddTag.InsertTag(tagClone) # ...
Or you want to move the tag from one object to the other, then you need to Remove() it from source object, first:
# ... if OTag is None or OTag.GetFirstTag() is None: return tag = OTag.GetFirstTag().GetNext() if tag is None: return tag.Remove() ObjToAddTag.InsertTag(tag) # ...
One final note, I hope you don't mind. As long as a topic is not too complicated, we prefer to not need to watch any video. Usually a text explanation is enough and the need to watch a video only costs extra time. Also video are rarely helpful on programming questions, as you can't easily copy code from them.
On 28/11/2016 at 11:21, xxxxxxxx wrote:
Hi Andreas,
Thank you very much for your assistance, I tested your scripte and I obtained the same result.
I will explain you what I want to do exactly, I want to apply my material on etch faces of Platonic object for example, to obtain a wireframe texture. (fit the texure on etch faces) see the images below.
when I move manualy the UVW tag from the Cube object to the Platonic then I add my material (preview material below) I obtain a wireframe texure as the image above.
Material :
but when i use python to move or clone the UVW tag I obtain the result below and a error message when I run the render.
Result when I move the UVW tag using Python :
Thanks.
On 29/11/2016 at 03:37, xxxxxxxx wrote:
Hi Mustapha,
I'm sorry in so many ways...
First of all i'm sorry, for being triggered by the most obvious error in your script, that I didn't look into your actual problem. Terribly sorry!
The actual reason for your issue: You copy/move a UVW Tag from one object to another object with a different polygon count. Resulting in the UVW tag not being "in sync" with the host object, as the polygon count differs from the number of UV polygons in the tag.
Now, I'm sorry for the second time, because internally this is handled via a MSG_POLYGONS_CHANGED/VariableChanged (links into C++ docs), unfortunately this message is not correctly supported via Python, yet. I have put it on our list and hopefully we will be able to address this with one of our future updates.
Depending on your needs you can manually work around these limitations.
For example like so:
tagUVW = platonic.GetTag(c4d.Tuvw) # platonic is the source object in this example cntPlatonic = platonic.GetPolygonCount() cntSphere = sphere.GetPolygonCount() # sphere is the destination object in this example tagDst = sphere.MakeVariableTag(c4d.Tuvw, cntSphere, None) for idxPoly in range(0, cntSphere) : uvwdict = tagUVW.GetSlow(idxPoly % cntPlatonic) tagDst.SetSlow(idxPoly, uvwdict["a"], uvwdict["b"], uvwdict["c"], uvwdict["d"]) c4d.EventAdd()
Notes:
- The above example works only for copying the tag from a lower poly count object to a higher polygon count object. But you get the idea, I guess.
- Also repeating the coordinates for the additional polygons, might not be what you want. But at least it comes close to, what was shown in the screenshots.
And then the above example might look overly complicated with a CpySlow() function being available in the UVWTag class. Well, this is where I'm sorry for the third time. There's a bug in the index range checking of CpySlow(), so it can't be used in this case. This will be most likely addressed in one of the next service packs.
No, my back hurts from apologizing so much. But it was due.
On 29/11/2016 at 07:23, xxxxxxxx wrote:
Hi Andreas,
Thank you very much again for your assistance.
No problem, no need to apologize for that. We recognize the huge job you are doing to help the Cinema 4D Community.
And a big thanks for the proposed solution in the above example that works perfectly. | https://plugincafe.maxon.net/topic/9826/13231_inserttag--error | CC-MAIN-2019-13 | refinedweb | 878 | 65.32 |
Hello the below image.
Custom Button in Flutter
First thing to do is to create a new dart file named “custom_button.dart” inside you ‘lib‘ folder or any folder inside ‘lib‘.
My Folder structure is like this
Folder Structure
Now we will create the custom class inside the ‘custom_button.dart‘.
Our Custom Button has one parameter in the constructor which is a onPressed function that is going to be called when user taps on the button.
CustomButton({@required this.onPressed}); final GestureTapCallback onPressed;
Then we will add a ‘Row’ Widget with widgets ‘Text’ and ‘Icon’ as Children. We will add ‘SizeBox’ in between the ‘Text’ and ‘Icon’ to add some space.
The Text Widget will look like this.
Text( "Tap Me", maxLines: 1, style: TextStyle(color: Colors.white), )
As you see, you can style the ‘Text‘ with the ‘style‘ property.
Custom Button – Complete Code
The whole custom button Code will look like this.
import 'package:flutter/foundation.dart';(), ); } }
‘StadiumBorder()‘ to give the curved effect for the button.
‘Padding‘ widget is added to give padding for the whole widget. Padding Widget will accept only one child. RawMaterialButton is the parent component that comes from material library in Flutter. GestureTapCallback handles the tap on the button.
Our Custom Button is done.
Use it
In the parent widget component, import the custom_button.dart and add as like a normal widget. Don’t forget to give the ‘onPressed’ property as it is a required one.
class _CustomWidgetDemoState extends State<CustomWidgetDemo> { @override Widget build(BuildContext context) { return Scaffold( appBar: AppBar( title: Text(widget.title), ), body: Center( child: Column( mainAxisAlignment: MainAxisAlignment.spaceEvenly, children: <Widget>[ Text("here is a custom button in FLutter"), CustomButton( onPressed: () { print("Tapped Me"); }, ) ], ), ), ); } }
Source Code
Get the complete source code from here.
Thanks for reading. Please leave your valuable comments below. | https://www.coderzheaven.com/2019/01/03/create-custom-widgetbutton-in-flutter-android-or-ios/ | CC-MAIN-2020-50 | refinedweb | 299 | 60.11 |
Topic
SystemAdmin 110000D4XK
6772 Posts
Pinned topic Namespace error
2013-01-01T22:26:36Z |
Could someone help me with this error, after I have configured a web service proxy, I'm getting this error from the system log "Policy mapping:no namespace mapping found".
Updated on 2013-01-15T16:44:44Z at 2013-01-15T16:44:44Z by SystemAdmin
Re: Namespace error2013-01-02T23:58:49Z
This is the accepted answer. This is the accepted answer.Please attach your WSDL (or a simplified version showing the same error message) here.
Hermann<myXsltBlog/> <myXsltTweets/>
- SystemAdmin 110000D4XK6772 Posts
Re: Namespace error2013-01-15T16:44:44Z
This is the accepted answer. This is the accepted answer. | https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014924272 | CC-MAIN-2017-47 | refinedweb | 113 | 56.45 |
Last week, I embarked on an adventure into the world of web application programming. Since my work place uses Python as much as possible and my boss likes TurboGears, I chose it for this endeavor. I have worked through various TurboGears tutorials and thought it looked pretty cool. However, it doesn't take long to discover that there is a lot of undocumented functionality. In this case, I needed a web application that could access multiple databases. I knew SqlAlchemy could do it and since TG2 uses SqlAlchemy, I figured it would too. In this article you will get to travel down the rabbit hole with me as I explain how I figured it out.
When you go looking for help, the first article Google is likely to return is Mark Ramm's blog post about how easy it is to do. However, Ramm does not actually explain how to do it. Another fellow on the TurboGears Google Group posted a link to the Pylons way of setting up multiple databases. If you go there, you'll find out that the first step is set up multiple SqlAlchemy URLs in your config file, which in the case of TurboGears is your development.ini file. All you need to do is add one or more SqlAlchemy URLs to the [app:main] section.
So, instead of this:
sqlalchemy.url = sqlite:///%(here)s/devdata.db
You would do something like this:
sqlalchemy.first.url=mssql://user:password@ntsql.servername.com/database
sqlalchemy.second.url=sqlite:///%(here)s/devdata.db
Notice that you add to the dot notation to make the URLs unique. I think the "sqlalchemy" and "url" parts are required at the beginning and the end, but the rest can be whatever you want. If that was all you needed to do, this would indeed be an easy setup. However, we have a couple more files to modify. The next part was pretty tricky. I discovered that there's some comments on multiple databases in the model folder's __init__.py file. It claims that you need to create a new MetaData object, which is true, but the commented out example is misleading. In the example, the metadata is not bound to an engine object. Without that info, the metadata object will be basically useless. I then noticed the init_model method which is there for reflection purposes. Upon further digging, I found that you needed to modify it to pass in the engines needed for your various databases, This is where I ended up putting my new metadata object. Technically, you could create it before the method and then just make it global, but I didn't need that in my test case. Change yours as needed. Following are the changes I needed to make:
# Global session manager: DBSession() returns the Thread-local # session object appropriate for the current web request. maker = sessionmaker(autoflush=True, autocommit=False, extension=ZopeTransactionExtension()) DBSession = scoped_session(maker) maker2 = sessionmaker(autoflush=True, autocommit=False, extension=ZopeTransactionExtension()) secondSession = scoped_session(maker2) def init_model(engineOne, engineTwo): """Call me before using any of the tables or classes in the model.""" DBSession.configure(bind=engineOne) secondSession.configure(bind=engineTwo) # you only need this metadata # if you want to autoload a table second_metadata = MetaData(engineTwo)
The last file that should be edited is the "app_cfg.py" located in the config folder. Here you need to subclass the "AppConfig" object to override the "setup_sqlalchemy" method. This is required only if you have modified your init_model method (above) to accept multiple engines. If you do not do this, you'll receive a lovely traceback about your method needing additional parameters. Here's my code:
from pylons import config as pylons_config from tg.configuration import config class MyAppConfig(AppConfig): def setup_sqlalchemy(self): """Setup SQLAlchemy database engine.""" from sqlalchemy import engine_from_config engineOne = engine_from_config(pylons_config, 'sqlalchemy.first.') engineTwo = engine_from_config(pylons_config, 'sqlalchemy.second.') config['pylons.app_globals'].engineOne = engineOne config['pylons.app_globals'].sa_engine = engineTwo # Pass the engine to initmodel, to be able to introspect tables init_model(engineOne, engineTwo) base_config = MyAppConfig()
It should be noted here that the default websetup.py file is hardwired to use a variable called "sa_engine" when you're using authentication. Thus, I assign one of the engines to that variable above. You could also go into websetup.py and just edit it as needed to accept whatever customizations are required. I do not know if there are additional files that need to be modified as this I ended up just changing the variable name above rather than mess with additional issues like this one.
Once all that is done, you should be good to go. On the TurboGears IRC, one of the members there mentioned a way to use multiple databases by "calling setup twice". He didn't explain how this worked, so I don't know if it is simpler, but if you want to investigate this method and report back, that's fine by me. I went ahead and created some dummy files with my changes so you can see then in context. They are downloadable below:
Sample files as zip or tar | https://www.blog.pythonlibrary.org/2009/06/13/using-multiple-databases-in-turbogears-2/ | CC-MAIN-2022-27 | refinedweb | 848 | 55.74 |
Excellent. It worked. Thank you very much.
James Smith
----- Original Message -----
From: "Greg Trasuk" <stratuscom@on.aibn.com>
To: <tomcat-user@jakarta.apache.org>
Sent: Monday, September 10, 2001 2:28 PM
Subject: RE: XML version of JSP question
> Hi James:
>
> It appears that you need to put your text into CDATA sections inside
> <jsp:text> tags. The following seems to work:
>
> <?xml version="1.0" ?>
> <jsp:root xmlns:
> <jsp:scriptlet>
> int a = 15;
> </jsp:scriptlet>
> <jsp:text><![CDATA[<html><head><title>test</title></head>
> <body>
> Hi everyone! The number is:]]>
> </jsp:text> <jsp:expression>a</jsp:expression>
> <jsp:text>.<br /></jsp:text>
> <jsp:scriptlet>
> out.println("I like the mongoose!<br />");
> </jsp:scriptlet>
> <jsp:text><![CDATA[</body></html>]]></jsp:text>
> </jsp:root>
>
> Greg Trasuk, President
> StratusCom Manufacturing Systems Inc. - We use information technology to
> solve business problems on your plant floor.
>
>
> > -----Original Message-----
> > From: James Smith [mailto:jksmith@email.arizona.edu]
> > Sent: Monday, September 10, 2001 11:25 AM
> > To: tomcat-user@jakarta.apache.org
> > Subject: XML version of JSP question
> >
> >
> > Hello. I've looked through the archives, but I haven't seen
> > a solution to
> > this problem. I'm using XSLT to create a JSP page and to get
> > that to work,
> > I am using Tomcat 4.0 Release Candidate 1 (on a Windows 2000 box, only
> > running Tomcat--no Apache nor IIS). But the simple tests of
> > the XML-version
> > of JSP I'm using seems to process all the scriplets first,
> > and then the
> > HTML. For instance, this code in regular JSP format returns the right
> > results:
> >
> > <%
> > int a = 15;
> > %>
> > <html><head><title>hi there</title></head>
> > <body>
> > Hi. My variable is: <%=a%>. <br />
> > <% out.println("I like mongeese!<br />"); %>
> > </body></html>
> >
> > ******the results*******************
> >
> > Hi. My variable is: 15.
> > I like mongeese!
> >
> > **********************************
> >
> > While the XML-style, which looks like this:
> >
> > <?xml version="1.0" ?>
> > <jsp:root xmlns: >
> > <jsp:scriptlet>
> > int a = 15;
> > </jsp:scriptlet>
> > <html><head><title>test</title></head>
> > <body>
> > Hi everyone! The number is:
> > <jsp:expression>a</jsp:expression>.<br />
> > <jsp:scriptlet>
> > out.println("I like the mongoose!<br />");
> > </jsp:scriptlet>
> > </body></html>
> > </jsp:root>
> >
> > Returns this:
> > ****************************
> > 15
> >
> >
> >
> > I like the mongoose! Hi everyone! The number is: .
> > ****************************************
> >
> > I attempted to add a namespace to my HTML tags in the XML
> > version; however,
> > when I tried to run that version, I received an exception
> > saying that the
> > URI I was using () was not resolvable:
> >
> > ***********************
> >
> > org.apache.jasper.JasperException: This absolute uri
> > () cannot be resolved in either
> > web.xml or the
> > jar files deployed with this application
> >
> > ************************
> >
> > So my questions are:
> > 1) How do I get Jasper to resolve the namespace URI? (I'm
> > assuming that I
> > have to stick a line in web.xml?)
> > 2) How do I get Tomcat to render XML-style JSP in the right
> > order, just as
> > regular style JSP renders?
> >
> > Thank all of you very much for your assistance in these questions.
> >
> > James Smith
> >
> | http://mail-archives.apache.org/mod_mbox/tomcat-users/200109.mbox/%3C00c801c13a42$e09099b0$0d07c480@apollo%3E | CC-MAIN-2013-48 | refinedweb | 484 | 61.43 |
See also: IRC log
<trackbot> Date: 29 November 2012
Chair Sean Hayes
IANA response
<plh>
plh: Reply by Dec 15.
... charset parametre
... c/parameter
... RFC3023 and diff
<plh> "If supplied, the charset parameter must match the XML encoding declaration."
plh: Remove all references.
mike: Detailed reply by
Mike.
... Talk about this sentence.
If supplied, the charset parameter must match the XML encoding declaration, or if absent, the actual encoding
scribe: Filed a new Issue-197 on
TTML 1.0.
... If supplied, the charset parameter must match the XML encoding declaration, or if absent, the actual encoding.
plh: Check XML spec.
mike: Silent.
plh: UTF-8 or 16 or you have to have a XML declaration.
mike: Even if only UTF-8 or 16, you still have issues.
plh: What is actual encoding definition?
mike: TTML 1.0 is silent. XML 1.0 is unclear.
<plh>
<plh> [[ parsed entities which are stored in an encoding other than UTF-8 or UTF-16 MUST begin with a text declaration (see 4.3.1 The Text Declaration) containing an encoding declaration ]]
glenn: Discusses what XML 1.0 says.
sean: TTML doesn't specify this.
Is it premature to register a MIME type at this time.
... If we tackle all the issues for XML, we need to have a note on application of TTML using XML encoding.
glenn: Could have a TTML 1.0 SE appendix that defines a wire encoding.
[discuss optioins]
c/options
plh: ttml+xml
mike: We are left with the specification telling us to do something without a MIME type.
<plh>
mike: People are using dxfp.
[Discuss what we do in testing.]
mike: Let lapse. We can start over.
sean: Lapse and get ducks in a row with a new note and then re-register.
glenn: Still wish to put an annex in SE.
sean: Complete by next week's call.
Action Glenn Create an Annex on using of XML wire encoding in TTML 1.0 SE.
<trackbot> Created ACTION-132 - Create an Annex on using of XML wire encoding in TTML 1.0 SE. [on Glenn Adams - due 2012-12-06].
plh: Respond to IANA we won't be ready by Dec 15.
mike: We have most of the bits to redo a submisson that is acceptable to IANA.
Resolution: Agreed to give Glenn 1 week to write an Annex. If this doesn't materialize, send email to IANA on the expiration of the request.
Issue-182?
<trackbot> ISSUE-182 -- Allow more than one profile to be used in the SDP-US. Add use of ttp:profile element. -- open
<trackbot>
mmartin3: Added namespace.
... Can close.
Close Issue-182
<trackbot> ISSUE-182 Allow more than one profile to be used in the SDP-US. Add use of ttp:profile element. closed
Issue-190?
<trackbot> ISSUE-190 -- Background vs Window Color -- pending review
<trackbot>
mmartin3: Added example 2.
... Explanatory text.
glenn: Need colorization.
... copy into Visual Studio.
mike: On Issue-182, we don't address multiple profiles.
glenn: Strange to be different than TTML 1.0.
mmartin3: See also Conformance. [TTML10] allows zero or more profiles (ttp:profile in the head element) to be specified and used simultaneously. A presentation processor may reject documents it does not understand.
sean: Close on SE on Issue-183.
glenn: TTML 1.0 talks about semantically additive profiles.
sean: Algorithm exists.
glenn: Agreed.
sean: Have implemented TTML 1.0
and it gave me an answer using the algorithm.
... May need more text to describe.
mike: Agreed it should be in TTML
1.0.
... DXFP 1.0 + SDP-US profile is the issue.
... Deal with in TTML 1.0.
glenn: How does ttp:profile constrain processor or content.
Issue-190?
<trackbot> ISSUE-190 -- Background vs Window Color -- pending review
<trackbot>
Close Issue-190?
Close Issue-190
<trackbot> ISSUE-190 Background vs Window Color closed
Issue-188?
<trackbot> ISSUE-188 -- Bounding SDP-US rendering complexity -- open
<trackbot>
mmartin3: Descriptive text in
annex.
... Section A.3
Text: Consideration may be given to defining a rendering model that accounts for drawing performance.
mike: Need to capture for future?
sean: Create an issue on v.next.
pal: Change target for
deliverable for Issue-188.
... Add comment on why target was changed.
Issue-194?
<trackbot> ISSUE-194 -- root container needs better definition -- open
<trackbot>
mmartin3: Glenn added information in 7.4.2:
mike: Why did you choose a note rather than a requirement?
glenn: Use SE terminology.
mike: Why not include in R0021 or R0022?
glenn: Clarification of what media object means in this context.
mike: OK with text.
Close Issue-194.
Close Issue-194
<trackbot> ISSUE-194 root container needs better definition closed
Issue-196?
<trackbot> ISSUE-196 -- Constraining encoding to UTF-8 -- open
<trackbot>
mmartin3: 2 reqts Section
10.1
...http: //dvcs.w3.org/hg/ttml/raw-file/tip/ttml10-sdp-us/Overview.html#Encoding_Constraints
A TTML document must be concretely represented as a well-formed [XML10] entity. A TTML document must be concretely represented using the UTF-8 character encoding [UNICODE].
glenn: Describes reqts.
... Add normative references.
... document representation in table
Close Issue-196
<trackbot> ISSUE-196 Constraining encoding to UTF-8 closed
glenn: Added prose before
requirements.
... Content Authors must adhere to and Presentation processors must support the following constraints:....
Issue-189?
<trackbot> ISSUE-189 -- do regions scroll their flowed content? -- open
<trackbot>
mmartin3: Example SE, text SDP-US pointed to it.
Asked a question about displayAlign.
scribe: Defer to Glenn on his questions.
glenn: Didn't know where to add
the example in SE.
... Discusses his concerns.
... Synchronic documents, no animation of the content.
... What is the need for this?
mike: Has alluded many experts how to simulate a scrolling region using synchronic timing is critical.
sean: This uses overflow to implement what looks like a roll-up.
glenn: For users' needs.
... Suggest an appendix to show applications of TTML.
sean: Put in new section after 1.2.
mike: Need to simulate where captions is an application.
Add empty section in SE as placeholder and point to it from SDP-US.
c/Add/Glenn: Add
sean: Put all examples in an annex?
glenn: Add empty subsection
reference.
... Put in rough content in SE and then reference in SDP-US.
sean: We should close all issues and then agree to publish to Working Group Note.
pal: Read through document and then publish.
glenn: Editorial issues - need to
do some formating and text cleanup.
... Rewrite Conformance and Error Handling
sean: Moratorium
... Publish as Last Call
... 2-4 week review
Resolution: To publish as Last Call this week with changes.
Final at New Year. Default Present: [Microsoft], +1.408.771.aaaa, +1.425.658.aabb, Plh, +1.310.210.aacc, glenn, +1.858.847.aadd, Sean WARNING: No "Present: ... " found! Possibly Present: Microsoft Sean Text aaaa aacc aadd courtney glenn mike mike2 mmartin3 pal plh trackbot You can indicate people for the Present list like this: <dbooth> Present: dbooth jonathan mary <dbooth> Present+ amy Regrets: Frans and Andreas Found Date: 29 Nov 2012 Guessing minutes URL: People with action items:[End of scribe.perl diagnostic output] | http://www.w3.org/2012/11/29-tt-minutes.html | CC-MAIN-2014-42 | refinedweb | 1,177 | 70.39 |
Backwards compatibility and TripleO¶
TripleO has run with good but not perfect backwards compatibility since creation. It’s time to formalise this in a documentable and testable fashion.
TripleO will follow Semantic Versioning (aka semver) for versioning all releases. We will strive to avoid breaking backwards compatibility at all, and if we have to it will be because of extenuating circumstances such as security fixes with no other way to fix things.
Problem Description¶
TripleO has historically run with an unspoken backwards compatibility policy but we now have too many people making changes - we need to build a single clear policy or else our contributors will have to rework things when one reviewer asks for backwards compat when they thought it was not needed (or vice versa do the work to be backwards compatible when it isn’t needed.
Secondly, because we haven’t marked any of our projects as 1.0.0 there is no way for users or developers to tell when and where backwards compatibility is needed / appropriate.
Proposed Change¶
Adopt the following high level heuristics for identifying backwards incompatible changes:
Making changes that break user code that scripts or uses a public interface.
Becoming unable to install something we could previously.
Being unable to install something because someone else has altered things - e.g. being unable to install F20 if it no longer exists on the internet is not an incompatible change - if it were returned to the net, we’d be able to install it again. If we remove the code to support this thing, then we’re making an incompatible change. The one exception here is unsupported projects - e.g. unsupported releases of OpenStack, or Fedora, or Ubuntu. Because unsupported releases are security issues, and we expect most of our dependencies to do releases, and stop supporting things, we will not treat cleaning up code only needed to support such an unsupported release as backwards compatible. For instance, breaking the ability to deploy a previous still supported OpenStack release where we had previously been able to deploy it is a backwards incompatible change, but breaking the ability to deploy an unsupported OpenStack release is not.
Corollaries to these principles:
Breaking a public API (network or Python). The public API of a project is any released API (e.g. not explicitly marked alpha/beta/rc) in a version that is >= 1.0.0. For Python projects, a _ prefix marks a namespace as non-public e.g. in
foo.\_bar.quux
quuxis not public because it’s in a non-public namespace. For our projects that accept environment variables, if the variable is documented (in the README.md/user documentation) then the variable is part of the public interface. Otherwise it is not.
Increasing the set of required parameters to Heat templates. This breaks scripts that use TripleO to deploy. Note that adding new parameters which need to be set when deploying new things is fine because the user is doing more than just pulling in updated code.
Decreasing the set of accepted parameters to Heat templates. Likewise, this breaks scripts using the Heat templates to do deploys. If the parameters are no longer accepted because they are for no longer supported versions of OpenStack then that is covered by the carve-out above.
Increasing the required metadata to use an element except when both Tuskar and tripleo-heat-templates have been updated to use it. There is a bi-directional dependency from t-i-e to t-h-t and back - when we change signals in the templates we have to update t-i-e first, and when we change parameters to elements we have to alter t-h-t first. We could choose to make t-h-t and t-i-e completely independent, but don’t believe that is a sensible use of time - they are closely connected, even though loosely coupled. Instead we’re treating them a single unit: at any point in time t-h-t can only guarantee to deploy images built from some minimum version of t-i-e, and t-i-e can only guarantee to be deployed with some minimum version of t-h-t. The public API here is t-h-t’s parameters, and the link to t-i-e is equivalent to the dependency on a helper library for a Python library/program: requiring new minor versions of the helper library is not generally considered to be an API break of the calling code. Upgrades will still work with this constraint - machines will get a new image at the same time as new metadata, with a rebuild in the middle. Downgrades / rollback may require switching to an older template at the same time, but that was already the case.
Decreasing the accepted metadata for an element if that would result in an error or misbehaviour.
Other sorts of changes may also be backwards incompatible, and if identified will be treated as such - that is, this list is not comprehensive.
We don’t consider the internal structure of Heat templates to be an API, nor any test code within the TripleO codebases (whether it may appear to be public or not).
TripleO’s incubator is not released and has no backwards compatibility guarantees - but a point in time incubator snapshot interacts with ongoing releases of other components - and they will be following semver, which means that a user wanting stability can get that as long as they don’t change the incubator.
TripleO will promote all its component projects to 1.0 within one OpenStack release cycle of them being created. Projects may not become dependencies of a project with a 1.0 or greater version until they are at 1.0 themselves. This restriction serves to prevent version locking (makes upgrades impossible) by the depending version, or breakage (breaks users) if the pre 1.0 project breaks compatibility. Adding new projects will involve creating test jobs that test the desired interactions before the dependency is added, so that the API can be validated before the new project has reached 1.0.
Adopt the following rule on when we are willing to [deliberately] break backwards compatibility:
When all known uses of the code are for no longer supported OpenStack releases.
- If the PTL signs off on the break. E.g. a high impact security fix for which
we cannot figure out a backwards compatible way to deliver it to our users and distributors.
We also need to:
Set a timeline for new codebases to become mature (one cycle). Existing codebases will have the clock start when this specification is approved.
Set rules for allowing anyone to depend on new codebases (codebase must be 1.0.0).
Document what backwards compatible means in the context of heat templates and elements.
Add an explicit test job for deploying Icehouse from trunk, because that will tell us about our ability to deploy currently supported OpenStack versions which we could previously deploy - that failing would indicate the proposed patch is backwards incompatible.
If needed either fix Icehouse, or take a consensus decision to exclude Icehouse support from this policy.
Commit to preserving backwards compatibility.
When we need alternate codepaths to support backwards compatibility we will mark them clearly to facilitate future cleanup:
# Backwards compatibility: <....> if .. # Trunk ... elif # Icehouse ... else # Havana ...
Alternatives¶
We could say that we don’t do backwards compatibility and release like the OpenStack API services do, but this makes working with us really difficult and it also forces folk with stable support desires to work from separate branches rather than being able to collaborate on a single codebase.
We could treat tripleo-heat-templates and tripleo-image-elements separately to the individual components and run them under different rules - e.g. using stable branches rather than semver. But there have been so few times that backwards compatibility would be hard for us that this doesn’t seem worth doing.
Security Impact¶
Keeping code around longer may have security considerations, but this is a well known interaction.
Performance Impact¶
None anticipated. Images will be a marginally larger due to carrying backwards compat code around.
Other Deployer Impact¶
Deployers will appreciate not having to rework things. Not that they have had to, but still.
Developer Impact¶
Developers will have clear expectations set about backwards compatibility which will help them avoid being asked to rework things. They and reviewers will need to look out for backward incompatible changes and special case handling of them to deliver the compatibility we aspire to.
Implementation¶
Dependencies¶
None. An argument could be made for doing a quick cleanup of stuff, but the reality is that it’s not such a burden we’ve had to clean it up yet.
Testing¶
To ensure we don’t accidentally break backwards compatibility we should look at the oslo cross-project matrix eventually - e.g. run os-refresh-config against older releases of os-apply-config to ensure we’re not breaking compatibility. Our general policy of building releases of things and using those goes a long way to giving us good confidence though - we can be fairly sure of no single-step regressions (but will still have to watch out for N-step regressions unless some mechanism is put in place). | https://specs.openstack.org/openstack/tripleo-specs/specs/juno/backwards-compat-policy.html | CC-MAIN-2021-04 | refinedweb | 1,546 | 51.68 |
- Type: Bug
- Status: Closed
- Priority: Major
- Resolution: Fixed
- Affects Version/s: None
- Component/s: Go - Compiler
- Labels: None
The Go Thrift compiler assumes that all service methods use only types from the same namespace. If a service method uses a type from an included Thrift file, the compiler still attempts to refer to the type using the calling file's namespace.
Steps to reproduce:
1. Create a new directory/project
2. Copy the files in the gist to the directory ()
3. Generate Go code using Thrift
4. Examine the gen-go/expanded/my_service-remote/my_service-remote.go file
5. Line 133 refers to `expanded.NewBaseThing()` which does not exist.
Expected:
Line 133 should use `my_base.NewBaseThing` and an import for `my_base` should be present.
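A minimal pair of Thrift files in the shape of this reproduction (file and identifier names are assumed here; the actual reproduction files are in the linked gist):

```thrift
// my_base.thrift
namespace go my_base

struct BaseThing {
  1: i32 id,
}

// my_service.thrift
namespace go expanded

include "my_base.thrift"

service MyService {
  // The generated my_service-remote.go should refer to
  // my_base.NewBaseThing(), not expanded.NewBaseThing().
  void Process(1: my_base.BaseThing thing),
}
```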
- duplicates: THRIFT-3413 Thrift code generation bug in Go when extending service (Closed)
- is duplicated by: THRIFT-3443 Thrift include can generate uncompilable code (Closed)
- relates to: THRIFT-3443 Thrift include can generate uncompilable code (Closed)
Rob, since we're moving TODO and BUGS (and maybe in the future other things) out of guile-core/ into workbook/, it behooves us to figure out how to get these files back into the tree on release.

apparently, automake supports "dist-hook", which we currently use in these makefiles:

  ./doc/Makefile.am
  ./guile-readline/Makefile.am

it is easy to add something like:

  dist-hook:
          @echo doing dist-hook...
  if MAINTAINER_MODE
          @echo 'This is a snapshot of the TODO file.' > TODO
          @echo >> TODO
          cat $(workbook)/tasks/TODO >> TODO
          $(workbook)/../scripts/render-bugs $(workbook)/bugs > BUGS
  endif

to top-level Makefile.am. the question is how to define $(workbook)? we can either require maintainers to specify this directory like:

  make workbook=some/path/to/workbook dist

or adopt the convention that workbook is always at $(top_srcdir)/../workbook and codify that in Makefile.am.

so: what is your preference? what other files can we factor out of guile-core/? where would we put them in workbook/? where is a good place to document all this? (RELEASE?)

if you see where i'm going w/ this, you can obviously make the changes yourself, but i'm happy to do them for you if given reasonable answers.

thi
(Almost) Real-Time output streams. More...
#include "fosi.h"
#include "rtconversions.hpp"
#include "rtstreambufs.hpp"
Go to the source code of this file.
(Almost) Real-Time output streams.
If you really have to print something out from a real-time thread, you can use the streams of the os namespace, which call the rtos_printf functions (which are supposed to be as real-time as possible) of the OS you are using. Be warned: these classes have not been tested extensively and might in certain cases still break hard real-time behaviour. Avoid using them from your most critical threads and in production code.
Definition in file rtstreams.hpp. | http://www.orocos.org/stable/documentation/rtt/v2.x/api/html/rtstreams_8hpp.html | CC-MAIN-2015-06 | refinedweb | 105 | 60.41 |
Errata for Agile Web Development with Rails 3.2
The latest version of the book is P3.0, released 13-Nov-11.
- Fixed in: B2.0
PDF page: all
It would be convenient if the page numbers could be synchronized such that they are always along the outside edge of the page. Some chapters seem to place it on the inside edge (for duplexed printing). Example, Ch 6 - outside edge, CH 7 - inside edge--Steven Finnegan
- Reported in: B2.0 (01-May-13)
Paper page: 0
Errata error ...
on pragprog.com/titles/rails4/errata
it says
The latest version of the book is P3.2, released about 3 hours ago. If
you've bought a PDF of the book and would like to upgrade it to this version
(for free), visit your home page.
from my home page I go to
pragprog.com/downloads/1307843/185539b84b425eda
where I see
Downloads for Agile Web Development with Rails 3.2 (4th edition) (0.0)
which does not give me much confidence that I will see P3.2
The pdf download _IS_ named
agile-web-development-with-rails-3-2_0_0.pdf
--Michael Bianchi
- Reported in: B1.0 (06-Mar-13)
- Fixed in: B2.0
PDF page: 5
The command used in the book reads as "rvm install 2.0.0", but this is causing an error saying the version is too confusing. What really helps is issuing the command like this: "rvm install ruby-2.0.0-p0" (the part after the last dash will change in shorter iterations, I guess).--Christian Blume
- Reported in: B1.0 (03-Mar-13)
- Fixed in: B2.0
PDF page: 11
- Reported in: B2.0 (01-May-13)
- Fixed: 01-May-13, awaiting book release
Paper page: 13
re: Agile Web Development with Rails 3.2 (4th edition)
I cannot generate the documentation as explained on page 13.
I get
pwd
/foveal/home/mbianchi/rails.d/dummy_app
rake doc:rails
rake aborted!'
/foveal/home/mbianchi/.gem/ruby/1.9.1/gems/rake-10.0.4/lib/rake/task.rb:203:in `each'
:170:in `invoke'
/foveal/home/mbianchi/.gem/ruby/1.9.1/gems/rake-10.0.4/lib/rake/application.rb:143:in `invoke_task'
/foveal/home/mbianchi/.gem/ruby/1.9.1/gems/rake-10.0.4/lib/rake/application.rb:101:in `block (2 levels) in top_level'
/foveal/home/mbianchi/.gem/ruby/1.9.1/gems/rake-10.0.4/lib/rake/application.rb:101:in
`each'
/foveal/home/mbianchi/.gem/ruby/1.9.1/gems/rake-10.0.4/lib/rake/application.rb:101:in `block in top_level'
/foveal/home/mbianchi/.gem/ruby/1.9.1/gems/rake-10.0.4/lib/rake/application.rb:110:in `run_with_threads'
/foveal/home/mbianchi/.gem/ruby/1.9.1/gems/rake-10.0.4/lib/rake/application.rb:95:in `top_level'
/foveal/home/mbianchi/.gem/ruby/1.9.1/gems/rake-10.0.4/lib/rake/application.rb:73:in `block in run'
/foveal/home/mbianchi/.gem/ruby/1.9.1/gems/rake-10.0.4/lib/rake/application.rb:160:in `standard_exception_handling'
/foveal/home/mbianchi/.gem/ruby/1.9.1/gems/rake-10.0.4/lib/rake/application.rb:70:in `run'
Tasks: TOP => doc:rails => doc/api/index.html
(See full trace by running task with --trace)
gem list actionmailer
*** LOCAL GEMS ***
actionmailer (3.2.13, 3.2.8)
ls /usr/share/gems/gems/actionmailer-3.2.8/
MIT-LICENSE lib/
--Michael Bianchi
- Reported in: B1.0 (07-Mar-13)
- Fixed in: B2.0
PDF page: 15
Mikel Lindsaar has changed the company name from RubyX to reInteractive--Trung LE
- Reported in: P3.0 (06-Jan-13)
- Fixed in: B1.0
PDF page: 16.2
For next edition, the chapter on deployment via Capistrano needs to be updated:
- if you are using RVM, the first two lines in the RVM section of deploy.rb (the one starting "$:.unshift . . . " and "require rvm/capistrano") are no longer needed when the capistrano gem is used and will generate an error if they are included.
- the default :rvm_type is now :user, so it's the reverse of what the book currently says, you would include this only if rvm is installed at the system level.--Mike Ruch
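A sketch of the resulting RVM section of config/deploy.rb, assuming the rvm-capistrano DSL (the ruby version string is illustrative):

```ruby
# config/deploy.rb -- RVM section (illustrative sketch)
#
# No longer needed when the capistrano gem is used; including these
# lines now generates an error:
#   $:.unshift(File.expand_path("./lib", ENV["rvm_path"]))
#   require "rvm/capistrano"

set :rvm_ruby_string, "1.9.3"  # assumed ruby version
set :rvm_type, :system         # only when RVM is installed system-wide;
                               # :user is now the default
```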
- Reported in: P3.0 (07-Dec-12)
- Fixed in: B1.0
PDF page: 19
Running Windows version Rails 3.2.1. When you attempt to run this new version it fails because of the asset pipeline support that was added. You need to change the application.rb file and set:

    # Enable the asset pipeline
    config.assets.enabled = false

This will allow the book as written to function correctly.--Charlie Brown
- Reported in: B1.0 (27-Feb-13)
- Fixed in: B2.0
PDF page: 20
"Some browsers (such as Safari) will mistakenly try to interpret some of the templates as HTML" - Safari 6.0.3 renders the Ruby code just fine. OS X 10.8.3 if it helps (I don't presently have an older system to test on.)--Mark Glossop
- Reported in: B1.0 (07-Mar-13)
- Fixed in: B2.0
PDF page: 28
My surname LE as in Trung LE is missing the accent, it should be Trung Lê--Trung Lê
- Reported in: P3.0 (26-Feb-13)
- Fixed in: B2.0
PDF page: 28
Assuming Ubuntu is one of the most popular Linux distribution. The upgrade command does not work on it.
$ gem update --system.
--Xavier John
- Reported in: P3.0 (12-Nov-12)
- Fixed in: B1.0
PDF page: 29
Installing on Mac:
sudo port install rb-rubygem
missing s on end - should read:
sudo port install rb-rubygems
--scott macri
- Reported in: B1.0 (03-Mar-13)
- Fixed in: B2.0
PDF page: 31
(Got page number wrong).
- Reported in: P2.2 (03-Mar-12)
- Fixed in: B2.0
PDF page: 31
It seems that every use of the [Add to Cart] button in the PDF version comes across in the mobi ebook version as "ADD ADD CART button" with at least two ADDs, sometime many more. Doesn't show up in the PDF version. I'm reading with the Mobipocket Reader. Would prefer to read it using Kindle for PC app, but there doesn't appear to be any way to do that. Only on a Kindle, not any of their reading apps.--Gary Holeman
- Reported in: B1.0 (03-Mar-13)
- Fixed in: B2.0
PDF page: 34
Remove reference to postgres-pr. It's undocumented, and hasn't been updated since Dec 2009.
- Reported in: P3.0 (24-Aug-12)
- Fixed in: B1.0
PDF page: 62
rails generate scaffold Product \
title:string description:text image_url:string price:decimal
The \ only works in a Linux environment. It causes errors on both Windows and Mac.
- Reported in: B1.0 (07-Mar-13)
- Fixed in: B2.0
PDF page: 64
For the paragraph on recommending installing Xcode 4.1 from App Store, I think we should mention that we just need to install Command Line Tools package which could be downloaded from developer.apple.com/downloads/--Trung Lê
- Reported in: P3.0 (23-Aug-12)
- Fixed in: B1.0
PDF page: 65
In the image of the web viewer showing an empty product listing, the location field shows "localhost:3000/product". The "s" on the end of product has probably been cut off because the browser window isn't wide enough to contain the entire url. You may want to widen the browser window.--Kim Shrier
- Reported in: B1.0 (07-Mar-13)
- Fixed in: B2.0
PDF page: 66
OSX 1.0.7+ is known to have issues with UTF-8 input within IRB due to a bug within libreadline that is shipped by default on the platform. The fix is to compile with patched libreadline:
$ rvm pkg install readline
$ rvm reinstall 2.0.0 --with-readline-dir=$rvm_path/usr--Trung Lê
- Reported in: P1.0 (22-Nov-12)
- Fixed in: B1.0
Paper page: 70
For me the cycle method doesn't work to create alternating background colors.--Adrian
- Reported in: P3.0 (20-Jul-13)
PDF page: 71
<body class='<%= controller.controller_name %>'> didn't work
But when I change it for:
<body class="<%= controller.controller_name %>">
worked =)--Matías Mascazzini
- Reported in: P2.2 (20-Jul-13)
PDF page: 71
<body class='<%= controller.controller_name %>'> didn't work
But when I change it for:
<body class="<%= controller.controller_name %>">
worked =)--Matías Mascazzini
- Reported in: P3.0 (13-Dec-12)
- Fixed in: B1.0
Paper page: 72
<table> should say <table class="products">
Sam Ruby says: class="products" is placed on the body element on the previous page
- Reported in: B1.0 (28-Feb-13)
- Fixed in: B2.0
PDF page: 72
The file products.css.scss has comments /* START_HIGHLIGHT */ and /* END_HIGHLIGHT */. This has the effect of creating an empty products.css file, and no cycled rows as described in the PDF book.--Ken Burgett
- Reported in: P3.0 (24-Dec-12)
- Fixed in: B1.0
PDF page: 72
The page rails32/depot_a/app/views/products/index.html.erb linked to on page 72 displays the code in a table, instead of as html--DJH
Sam Ruby says: view source
- Reported in: P2.2 (14-Jul-12)
- Fixed in: B1.0
PDF page: 80
Validating `:image_url, presence: true`, makes the image_url field required.
A few lines down, in the same model, `validates :image_url, allow_blank: true` is used. These are conflicting, and the `presence:true` wins out.
As a result leaving the image_url field blank results in an error.--Eldar Omuraliev
- Reported in: P3.0 (04-Dec-12)
- Fixed in: B1.0
Paper page: 81
Tests don't pass
Sam Ruby says: works for me
- Reported in: P3.0 (23-Aug-12)
- Fixed in: B1.0
PDF page: 84
In the paragraph after the 'test "product price must be positive" do' code, third sentence, you mention using the join method to concatenate the error messages associated with the :price field. However, you aren't using the join method in the test code. It looks like you are just comparing 2 array instances.--Kim Shrier
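For clarity, the distinction the report draws can be shown in plain Ruby (the message text is made up for illustration):

```ruby
# A hypothetical array of validation messages, similar in shape to what
# errors[:price] returns in the test under discussion (text is made up).
messages = ["must be greater than or equal to 0.01"]

# Comparing two arrays checks them element by element:
array_equal = (messages == ["must be greater than or equal to 0.01"])

# join instead concatenates the messages into a single string:
joined = messages.join("; ")
```

Either style can work in a test; the prose and the code just need to agree on which one is used.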
- Reported in: B1.0 (07-Mar-13)
- Fixed in: B2.0
PDF page: 85
Sublime Text Editor has gained much traction for the past years and has replaced the defacto TextMate. It's good if we could mention Sublime here.--Trung Lê
- Reported in: P3.0 (30-Sep-12)
- Fixed in: B1.0
PDF page: 88
the seed.rb uses rails.jpg for the product, and the file provided in media.pragprog.com/titles/rails4/code/rails32/depot_a/app/assets/images/ is a png file--Paulo de Souza Geyer
- Reported in: P2.1 (30-Aug-12)
- Fixed in: B1.0
Paper page: 92
# root :to => 'welcome#index'
root to: => 'store#index', as: 'store'
Has to be
# root :to => 'welcome#index'
root :to => 'store#index', as: 'store'--Ralph
- Reported in: B1.0 (07-Mar-13)
- Fixed in: B2.0
PDF page: 93
Please remove paragraph mentioning about postgres-pr library because that lib is very outdated and inactive--Trung Lê
- Reported in: B1.0 (07-Mar-13)
- Fixed in: B2.0
PDF page: 95
I think we should mention PostgresSQL in 'We installed (or upgraded) the SQLite 3 and MySQL databases' (page 95) and 'MySQL and SQLite adapters..." (page94). Because I think PostgreSQL is now very decent choice--Trung Lê
- Reported in: B1.0 (07-Mar-13)
- Fixed in: B2.0
PDF page: 96
Please replace URL `rvm.beginrescueend.com/` with `rvm.io`--Trung Lê
- Reported in: B1.0 (07-Mar-13)
- Fixed in: B2.0
PDF page: 96
Please recursively search and replace URL rvm.beginrescueend.com with rvm.io--Trung Le
- Reported in: B1.0 (10-Apr-13)
- Fixed in: B2.0
PDF page: 96
There shouldn't be a "#" after the comma on line 5 for application.html.erb
<!DOCTYPE html>
<html>
<head>
<title>Pragprog Books Online Store</title>
<%= stylesheet_link_tag "application", media: "all",#
"data-turbolinks-track" => true %>
<%= javascript_include_tag "application", "data-turbolinks-track" => true %>
<%= csrf_meta_tags %>
</head>--Brad Ballard
- Reported in: B1.0 (01-Mar-13)
- Fixed in: B2.0
PDF page: 96
Clicking on "Download rails40/depot_e/app/views/layouts/application.html.erb"
does not open in the browser but instead jumps to PDF page 449
--James R Grier
- Reported in: P2.0 (26-Sep-11)
- Fixed in: B1.0
PDF page: 98
Why don't you teach to use flash standard messages by adding lines like those in the application template?
<% flash.each do |key, value| %>
<%= content_tag(:div, sanitize(value), :class => "flash #{key}") %>
<% end %>
This is much more DRYer than adding:
<% if notice %>
<p id="notice"><%= notice %></p>
<% end %>
to each single view!
Regards,
--Duccio Armenise
- Reported in: P3.0 (23-Aug-12)
- Fixed in: B1.0
PDF page: 98
There appears to be a problem with the css for the img tag in the banner. When viewing the page in Safari 6.0, the logo image shows up on top of the text instead of to the left, where it belongs. This is also a problem in Firefox and Chrome. --Kim Shrier
- Reported in: P2.2 (12-Jul-12)
- Fixed in: B1.0
PDF page: 98
'}' missing at the end of the #columns declaration (in the pdf AND on the page code/rails32/depot_e/app/assets/stylesheets/application.css.scss )
"Sam Ruby says: the close parens show up on the next page."
no, please look it up. in my pdf and on the page the '}' is missing!--Deniz
- Reported in: B1.0 (20-Apr-13)
- Fixed in: B2.0
PDF page: 99
I believe Rails 4 is using the minitest framework now as it has replaced test unit in ruby 1.9+. The book still mentions test unit.--Peter Rhoades
- Reported in: B1.0 (20-Mar-13)
- Fixed in: B2.0
PDF page: 100
With Task B Validation and Unit Testing in Iteration B2.
The directory structure in the project differs to what is described in the book. Where the book refers to ../depot/test/models/product_test.rb the currently generated project has the file at ../depot/test/unit/product_test.rb
Also the command "rake test:models" seems to have been superseded by "rake test:units"--Ockert Botha
- Reported in: B1.0 (26-Apr-13)
- Fixed in: B2.0
PDF page: 100
when adding this to my products_controller_test
@update = {
:title => 'Lorem Ipsum',
:description => 'Wibbles are fun!',
:image_url => 'lorem.jpg',
:price => 19.95
}
and the rows:
post :create, :product => @update
and
put :update, :id => @product.to_param, :product => @update
The test still gives me the same errors
) Failure:
test_should_create_product(ProductsControllerTest) [test/functional/products_controller_test.rb:20]:
"Product.count" didn't change by 1.
<3> expected but was
<2>.
2) Failure:
test_should_update_product(ProductsControllerTest) [test/functional/products_controller_test.rb:39]:
Expected response to be a <:redirect>, but was <200>
Im running ruby 1.9.3 and Rails 3.2.13
--Patrik N
- Reported in: P3.0 (23-Aug-12)
- Fixed in: B1.0
PDF page: 101
The formatting for this page appears to be wrong. After the sentence, "First, let’s take a look at what Rails generated for us:", the rest of the page is blank even though there is plenty of room for the code snippet and some of the following text.--Kim Shrier
- Reported in: P3.0 (12-Feb-13)
- Fixed in: B1.0
PDF page: 104
In section 8.5, "What We Just Did," item 3 states: "Add a call to the order() method with the Products controller..."
This call to order() was added to the StoreController, not ProductsController, back on page 93.--James Miller
- Reported in: B1.0 (01-Apr-13)
- Fixed in: B2.0
PDF page: 107
Last test("...unique title - i18n") fails.
--- expected
+++ actual
@@ -1 +1 @@
-"translation missing: en.activerecord.errors.messages.taken"
+"has already been taken"
--Cassio Cabral
- Reported in: B1.0 (27-Feb-13)
- Fixed in: B2.0
PDF page: 108
"current_cart()..." in the first and second paragraphs should be changed to "set_cart()". "CreateCart module" in the second paragraph should be changed to "CurrentCart module".--Austin J. Alexander
- Reported in: P3.0 (08-Nov-12)
- Fixed in: B1.0
Paper page: 111
Minor nit, second paragraph: says that you pass the product into @cart.line_items.build, then save into an instance variable named @line_item.
The code you show calls build, saves to @line_item, then sets the product on the line item.--Doug
- Reported in: P3.0 (23-Aug-12)
- Fixed in: B1.0
PDF page: 112
In the first sentence on the page, you mention adding the new css lines to the rule for .entry. However, the lines were added to the rule for price_line.--Kim Shrier
- Reported in: P3.0 (02-Oct-12)
- Fixed in: B1.0
PDF page: 112
@line_item = @cart.line_items.build
should be:
@line_item = @cart.line_items.build(:product_id => product.id)
this even fix the AJAX problems I've tried hard to fix on the page 140.--Celso de Sá Lopes
- Reported in: P2.2 (04-Jul-12)
- Fixed in: B1.0
PDF page: 113
the image showing a double add of the "CoffeeScript" title is wrong. It should just be one, despite a refresh (because you're simply refreshing the CART ULR - and not the POST!)--Jeff Lim
- Reported in: P3.0 (24-Jan-13)
- Fixed in: B1.0
PDF page: 115
The book says to rename application.css to application.css.scss. Previously when told to delete public/index.html, the book also listed the Git command to do so. It would be good to do the same here (something like git mv application.css application.css.scss).--Andy Schott
- Reported in: P2.2 (20-Jul-13)
PDF page: 116
Missing the line "@line_item.product = product" in the controller. Add after the line "@line_item = @cart.add_product(product.id)"--Matías Mascazzini
- Reported in: P2.2 (11-Oct-12)
- Fixed in: B1.0
PDF page: 118
related to erratum #49593
When applying the migration:
depot> rake db:migrate
the following error occurs:
"Can't mass assign protected attributes: quantity"
This is fixed by making the quantity attribute accessible,
in the model "line_item.rb":
#...
attr_accessible :product, :product_id, :cart_id, :quantity
--Alexandros K
- Reported in: P3.0 (23-Aug-12)
- Fixed in: B1.0
PDF page: 118
In the paragraph right after the code for add_product, the last sentence is worded strangely. It seems that something is missing after the word "starting".--Kim Shrier
- Reported in: B1.0 (24-Mar-13)
- Fixed in: B2.0
PDF page: 120
The only edit needed for the line_items_controller.rb is the one noted by the --> arrow pointing to the second line starting "@line_item = ..." The other two arrows pointing to "product = ..." and "format.html..." don't change from the state they are already in--Jim Skibbie
- Reported in: B1.0 (06-Apr-13)
- Fixed in: B2.0
PDF page: 123
In reference to Donald Guy's submission #51022 I was able to reproduce this by restarting the rails server between changes instead of keep the rails server running in a separate terminal window while making updates to the code in another terminal. The simple fix I found when stuck in the same scenario was to rename the migration or modify config/environments/development.rb to prevent raising ActiveRecord::PendingMigrationError. With this said it might help to address the following:
1. Stopping and starting the rails web server. Although earlier in the book it mentions stopping and starting the rails server it might help to write a quick warning to address the different behavior associated with restarting the rails server when working on the depot app.
2. Introduce rake db:migrate:status when talking about migration methods change(), up(), down() and running rake db:rollback.
3. Talk briefly about config/environments/development.rb and more specifically about the setting for pending migrations in the development environment using config.active_record.migration_error = :page_load--Owen Pletz
- Reported in: B1.0 (13-Mar-13)
- Fixed in: B2.0
PDF page: 123
Following along using the same versions (ruby 2.0, rails 4.0beta1), after rolling back the migration (rake db:rollback), I can't see the result as shown because I get instead a ActiveRecord::PendingMigrationError. I managed to see them by temporarily moving the migration away -- don't know another way to surpress the checking here.--Donald Guy
- Reported in: P2.2 (17-Jun-12)
- Fixed in: B2.0
PDF page: 125
The "David says" sidebar at the top of the page comes smack in the middle of the code example for line_items_controller.rb--Martin Wehlou
- Reported in: P2.2 (26-Jul-12)
- Fixed in: B1.0
PDF page: 125
I'm running rails 3.2.3.
I received this error when trying to add a book to the cart:
Can't mass-assign protected attributes: product
adding "attr_accessible :product" to the /app/model/line_item.rb file solved this issue for me. I'm not sure if this is the "correct" solution.
I believe this is related to issue #49439
I also just wanted to say thanks for the time you put into this.
- Reported in: B1.0 (30-Apr-13)
- Fixed in: B2.0
PDF page: 128
The word "has" should be "have" in "they has a lot of useful
information." This is in the admonition to review log files periodically.--Dave Hackett
- Reported in: B1.0 (31-Mar-13)
- Fixed in: B2.0
PDF page: 133
In the solutions wiki page for "Play Time" that the PDF links to, for Activity 2, the hint says "Hint: add two tests to test/unit/cart_test.rb", but that path doesn't exist in Rails 4. It should be /test/models/cart_test.rb.--Jim Skibbie
- Reported in: P3.0 (19-Oct-12)
- Fixed in: B1.0
PDF page: 134
I think you wanted to change the source for rails32/depot_k/app/views/carts/show.html.erb to be:
<% if notice %>
<p id="notice"><%= notice %></p>
<% end %>
<%= render(@cart) %>
--Kim Shrier
- Reported in: P2.2 (28-Apr-12)
- Fixed in: B1.0
PDF page: 135
On page 135 of 737 in iBooks, there should not be a "\" in the following command:
rails generate scaffold Product \ title:string description:text image_url:string price:decimal
Should be:
rails generate scaffold Product title:string description:text image_url:string price:decimal--Liam McArdle
- Reported in: P2.2 (08-Jul-12)
- Fixed in: B1.0
PDF page: 136
Trivially, but annoyingly, Figure 20 splits the line_items controller code started on page 135. Also (trivially) the screenshot of the page has an unrelated pop-up on the top right corner. --John Goetz
- Reported in: P2.1 (12-Jul-12)
- Fixed in: B1.0
Paper page: 136
You forgot to run rake test:functionals after making these changes (iteration F1)! Most of the tests broke but I was on my own trying to figure out how to fix them.
1) setting @cart via a before_filter in the application controller (rather than in StoreController.index()) fixed most of them
2) I had to update the line items controller test to reflect that we changed the redirect path
3) I had to rem out the test for cart creation because my fix for #1 broke it (and it didn't seem relevant anymore)--Joseph Shraibman
- Reported in: P2.2 (06-Jun-12)
- Fixed in: B1.0
PDF page: 138
the alias "j" is used to escape rendered partial for javascript. But "j" is an alias for "json_encode" while here escape_javascript is needed.
So rails32/depot_l/app/views/line_items/create.js.erb should be
$('#cart').html("<%= escape_javascript render @cart %>");
instead of
$('#cart').html("<%=j render @cart %>");--Martin Meier
- Reported in: B1.0 (07-Mar-13)
- Fixed in: B2.0
PDF page: 139
Please update Footnote [24] to use rvm.io URL instead--Trung Lê
- Reported in: B1.0 (07-Mar-13)
- Fixed in: B2.0
PDF page: 139
Footnote [25] URL is no longer valid, the correct one is
gembundler.com/v1.3/bundle_exec.html--Trung Lê
- Reported in: P2.2 (26-Jul-12)
- Fixed in: B1.0
PDF page: 139
Paper page: 121
This did not work...
redirect_to store_url, notice: 'Invalid cart'
So I changed it to:
redirect_to store_index_url, notice: 'Invalid cart'
--Jez
- Reported in: P3.0 (21-Nov-13)
Paper page: 141
I get an error stating that the 'jquery-ui' couldn't be found in the application.js file. Has the reference changed for jQuery UI? --Thomas
- Reported in: P3.0 (13-Dec-12)
- Fixed in: B1.0
PDF page: 142
This comparison <% if line_item == @current_item %> will always return false; it doesn't work because we are comparing two different objects, not their values.
I tried it and it failed. Instead, I think you should compare the product_id inside these objects.
--Mamadou Toure
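The reporter's point about object versus value comparison can be sketched in plain Ruby. The class below is a stand-in, not ActiveRecord (ActiveRecord overrides == to compare records by id, so behaviour inside Rails can differ):

```ruby
# Plain-Ruby stand-in for a line item (illustrative only).
class Item
  attr_reader :product_id

  def initialize(product_id)
    @product_id = product_id
  end
end

a = Item.new(42)
b = Item.new(42)

# Default Object#== compares identity, so two distinct objects with the
# same attribute values are not equal:
same_object = (a == b)

# Comparing the attribute values directly does what the reporter expects:
same_value = (a.product_id == b.product_id)
```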
- Reported in: P2.2 (05-Aug-12)
- Fixed in: B1.0
PDF page: 150
Paper page: 132
I think something's missing here...
<table>
➤ <%= render(cart.line_items) %>
I did not get same display with the book (p.136) so I appended the id...
<table id="cart">
➤ <%= render(cart.line_items) %>
--JH
- Reported in: P3.0 (11-Dec-12)
- Fixed in: B1.0
PDF page: 151
Paper page: 134
After "Let’s avoid all of that and replace the original template with code that causes the partial to be rendered:",
the code snippet for "carts/show.html.erb" should be :
<% if notice %>
<p id="notice"><%= notice %></p>
<% end %>
<%= render(@cart) %>
(the version you put does not delegate to the cart partial)--Olivier Michallat
- Reported in: P2.2 (06-Apr-13)
- Fixed in: B2.0
PDF page: 152
Paper page: 134
The file rails32/depot_k/app/assets/stylesheets/carts.css.scss differs from the content displayed in the book--Francisco Peña
Sam Ruby says: Updated in P3.0
- Reported in: B1.0 (19-Mar-13)
- Fixed in: B2.0
PDF page: 153
store.js.coffee
➤ $(document).on "page:change", ->
➤ $('.store .entry > img').click ->
➤ $(this).parent().find(':submit').click()
The above code does not work for me. I added "ready" to the first line and it works now, this is how it should look:
$(document).on "ready page:change", ->--Nadeem
- Reported in: P2.2 (29-Jul-12)
- Fixed in: B1.0
PDF page: 157
The code segment shown for new.html.erb does not match my existing file at all. This confused me until I realized the code segment is just missing the arrows in the left margin to indicate that each line is new or changed. I suggest adding those new/changed-line indicator arrows to clarify that the file's contents are completely replaced.--Brian Ensink
- Reported in: B1.0 (21-Apr-13)
- Fixed in: B2.0
PDF page: 160
I could not get tests to pass, or the Checkout button to work without adding the following two lines to the orders_controller.rb:
class OrdersController < ApplicationController
> include CurrentCart
> before_action :set_cart, only: [:new, :create]
Otherwise, when I ran tests from pg 161, or later, pressed the Checkout button, I got an error: "NoMethodError: undefined method `line_items' for nil:NilClass"--Jim Skibbie
- Reported in: P3.0 (21-Oct-12)
- Fixed in: B1.0
PDF page: 162
In the test fixture line_items.yml, you highlight one line for updating but the entire file listing you have does not match what was generated by the generate scaffold command. I currently have the following:
# Read about fixtures at (url removed to get it through the pragprog spam filter)
one:
product_id: 1
cart_id: 1
two:
product_id: 1
cart_id: 1
You have to change every line to match what you have in the listing.--Kim Shrier
- Reported in: P2.2 (20-Jun-13)
PDF page: 164
Paper page: 146
Name "JQuery" should be "jQuery".--Ing. Martin Bachtík
- Reported in: B1.0 (26-Mar-13)
- Fixed in: B2.0
PDF page: 164
the code line
<%= render(:partial => "cart_item" , :collection => cart.line_items) %>
should be the same as in Page 156 :
<%= render(cart.line_items) %>
since we do not have the cart_item partial in the 4th edition of this book.--Jing Kai
- Reported in: P3.0 (21-Oct-12)
- Fixed in: B1.0
PDF page: 165
In the code for orders_controller_test.rb, I only needed to modify the assert_redirected line. The post line was already correct. I am doing this under rails 3.2.8 so it may be different in 3.2.6
--Kim Shrier
- Reported in: P3.0 (21-Oct-12)
- Fixed in: B1.0
PDF page: 166
The output from "select * from line_items;" should have 2 rows, the one you showed for the CoffeeScript line item but also one for the Ruby 1.9 line item which is not shown.
--Kim Shrier
- Reported in: B1.0 (06-Apr-13)
- Fixed in: B2.0
PDF page: 168
The text describes selecting params[:order] in the first line, but the example has been updated to use strong params (calling the private order_params method of the controller). The text should probably be updated to reflect this change--Donald Guy
- Reported in: P3.0 (26-Jan-13)
- Fixed in: B1.0
PDF page: 169
There is a syntax error on the second line of the code for who_bought.atom.builder. There needs to be an equals between feed.title and its new value:
feed.title = "Who bought #{@product.title}"--Andy Schott
- Reported in: B1.0 (20-Mar-13)
- Fixed in: B2.0
PDF page: 172
In the last paragraph on the page you ask the reader to recall a section which they have not even read yet as it is much further on in the book!
"Remember back in Section 25.4, Finding More at RailsPlugins.org, on page 424 when we cached partial results of responses,..."--Nadeem
- Reported in: B1.0 (04-Mar-13)
- Fixed in: B2.0
PDF page: 175
Paper page: 160
I'd suggest you tell to add the before_action callback for the cart (before_action :set_cart, only: [:new, :create]). Otherwise, there will be a problem to call the line_items function. --André Furquim Xotta
- Reported in: P3.0 (21-Oct-12)
- Fixed in: B1.0
PDF page: 175
In Figure 25, the URL in the location field is localhost:3000/en/orders which probably works if you are running the Rails app behind apache or nginx with the right configuration but when running it under webrick it should be localhost:3000/orders.--Kim Shrier
- Reported in: P3.0 (08-Nov-12)
- Fixed in: B1.0
PDF page: 176
I created the file with the Ruby code and ran it, but nothing happens to the database--derek
Sam Ruby says: I don't understand this errata.
- Reported in: P3.0 (21-Oct-12)
- Fixed in: B1.0
PDF page: 185
The integration test generated does not match your listing. It should be:
require 'test_helper'
class UserStoriesTest < ActionDispatch::IntegrationTest
# test "the truth" do
# assert true
# end
end
The important difference is that it inherits from ActionDispatch::IntegrationTest, not ActionController::IntegrationTest as shown in the listing.--Kim Shrier
- Reported in: P2.2 (06-Apr-12)
- Fixed in: B1.0
Paper page: 190
[p. 190] Why does this form use ":" after each field label? It
doesn't match the other forms, and it looks ugly and broken if there are
form errors. On [p. 193], the colons are part of the label.--Shannon -jj Behrens
- Reported in: B1.0 (06-Apr-13)
- Fixed in: B2.0
PDF page: 195
While editing users_controller.rb, you need to change user_params to permit :password and :password_confirmation (and probably disallow password_digest). Otherwise first attempt to create a user leads to a confusing "Password can't be blank" error notice.--Donald Guy
- Reported in: B1.0 (06-Apr-13)
- Fixed in: B2.0
PDF page: 196
The code example for the new user form omits the class="field" and class="actions" from the divs. Though they are included by the initial scaffold generation, if the user deletes them, the layout of the form gets all out of whack. --Donald Guy
- Reported in: P3.0 (14-Jan-13)
- Fixed in: B1.0
PDF page: 200
specifically for the value of hashed_password.
...should be...
specifically for the value of password_digest.--T Farrington
- Reported in: B1.0 (05-Mar-13)
- Fixed in: B2.0
PDF page: 211
Paper page: 198
I'd suggest updating the user_params private method in the users_controller in order for the controller test to pass (adding the :password and :password_confirmation symbols). You forgot to mention that in the text.--André Furquim Xotta
- Reported in: P2.1 (05-Dec-11)
- Fixed in: B1.0
PDF page: 211
"Note that we did not choose to put the administrative and session functions inside this scope, because it is not our intent to translate them at this time."
But the "resources :products do get :who_bought, on: :member end" is surely an administrative-only controller and function -- and is in the "scope '(:locale)'" block. I believe the "resources :products" should just be moved out of the "scope '(:locale)'" block, but I have not tested this.
(Also, the "get 'login'" line above isn't properly indented on my PDF.)--Seth Arnold
- Reported in: P3.0 (21-Nov-12)
- Fixed in: B1.0
Paper page: 221
You should provide an English version of errors.template, or there will be errors in the English version. Also, it's hard to know what errors.template.body says for those who don't know Spanish.--Iven
- Reported in: P3.0 (22-Oct-12)
- Fixed in: B1.0
PDF page: 222
In the code for app/views/orders/_form.html.erb, in Rails 3.2.8, it does appear that you need to use the i18n functions for labels if you want them translated. Without the i18n functions, name, email, and pay_type were not translated.--Kim Shrier
- Reported in: B1.0 (04-Apr-13)
- Fixed in: B2.0
PDF page: 229
figure 37 shows
Body Html
not
Hay problemas con los siguientes campos:
--Tim Morgan
- Reported in: P2.2 (22-May-13)
PDF page: 231
"Linux users should have already installed Apache in in Section 1.3, Installing on Linux, on page 6."
The word in is repeated--Daniel Garcia
- Reported in: P3.0 (27-Dec-12)
- Fixed in: B2.0
PDF page: 231
Reading and understanding chapter 16 was sometimes hard. I wasn't always sure whether you were talking about the server or the development machine. It is clearly described where to install the Git deployment repository and where to install Capistrano. But often I got lost.
So the suggestion is to clearly indicate if we talk about the server or the development machine. As you did on page 246 when you clearly indicate # on your server.--Pierre Sugar
- Reported in: P3.0 (30-Jan-13)
- Fixed in: B1.0
PDF page: 233
As of Mac OS X 10.8, the Web Sharing system preference is no longer included with the operating system. Apache can still be enabled via Terminal, or in a GUI by purchasing OS X Server from the Mac App Store.--Andy Schott
- Reported in: P3.0 (03-Feb-13)
- Fixed in: B2.0
PDF page: 234
I had a lot of problems getting virtual hosts working in Apache. foundationphp.com/tutorials/apache22_vhosts.php is what finally helped me to get it working.--Andy Schott
- Reported in: P3.0 (23-Dec-12)
- Fixed in: B1.0
PDF page: 241
# Deploy with Capistrano
➤ gem 'capistrano'
should be
# Deploy with Capistrano
➤ gem 'rvm-capistrano'
Otherwise when invoking "cap deploy:setup" I get the error
/home/pierre/.rvm/gems/ruby-1.9.3-p286/gems/capistrano-2.13.5/lib/capistrano/con
figuration/loading.rb:152:in `require': cannot load such file -- rvm/capistrano
(LoadError)--Pierre Sugar
- Reported in: P3.0 (24-Oct-12)
- Fixed in: B1.0
PDF page: 242
In the paragraph after the capify command, you mention the creation of the Capfile and say that we do not need to modify it. You then show the contents of the Capfile with the assets line uncommented. So we do need to make this one simple modification.--Kim Shrier
- Reported in: P3.0 (24-Oct-12)
- Fixed in: B1.0
PDF page: 242
In the listing of the Capfile, you show the line:
Dir['vendor/gems/*/recipes/*.rb','vendor/plugins/*/recipes/*.rb'].each { |plugin| load(plugin) }
which is missing from the one I have. This may just be another difference between 3.2.6 and 3.2.8.
--Kim Shrier
- Reported in: P3.0 (03-Feb-13)
- Fixed in: B2.0
PDF page: 243
If you are using Rails v3.2.11, there will be errors during deployment (when running cap deploy:migrations) with the deploy.rb file as-is. You need to change the line that sets the environment to production as follows:
set :rails_env, "production"--Andy Schott
- Reported in: P3.0 (27-Dec-12)
- Fixed in: B1.0
PDF page: 244
The last sentence in the first paragraph reads
The :deploy_to may need to be tweaked to match where we told Apache it could find the config/public directory for the application.
I assume config/public should be depot/public or DocumentRoot or Directory indicated in VirtualHost
--Pierre Sugar
- Reported in: P3.0 (27-Dec-12)
- Fixed in: B1.0
PDF page: 246
In the section "Using Console to Look at a Live Application" it reads
# On your server
$ cd /home/rubys/work/depot/
$ rails console production
As written on page 240 Capistrano is adding current between depot and public. Therefore the cd commands should include current
# On your server
$ cd /home/rubys/work/depot/current
$ rails console production
--Pierre Sugar
- Reported in: P3.0 (26-Feb-13)
- Fixed in: B2.0
PDF page: 287
The block "order" starts with: "SQL that rows ...", which should be "SQL specifies that..."
That is, the word "specifies" is missing.--Björn Peemöller
- Reported in: P2.1 (12-Feb-13)
- Fixed in: B1.0
Paper page: 296
In the third paragraph (one above title 'Grouping Related Callbacks Together'), the sentence "If you try declaring them as handlers using the second technique..."
word 'second' should be 'first'.
- Reported in: B1.0 (28-Feb-13)
- Fixed in: B2.0
PDF page: 305
I believe Rails Observers have been removed from Rails 4. It would be good if this section could be rewritten to expound on how to do things like an audit trail using modules, and using self.included(base) to register the callbacks on the base class.--Jim Haungs
- Reported in: P3.0 (29-Sep-12)
- Fixed in: B1.0
PDF page: 351
It's missing the equal sign for the form_for on the first line of the get.html.erb view template. it should be:
<%= form_for(:picture, url: {action: 'save'}, html: {multipart: true}) do |form| %>
That corrects the render problems.--Celso de Sá Lopes
- Reported in: B1.0 (29-Mar-13)
- Fixed in: B2.0
PDF page: 383
Replace current_cart with set_cart and SetCart module with CurrentCart--Trung Lê
- Reported in: B1.0 (29-Mar-13)
- Fixed in: B2.0
PDF page: 383
Please explain the usage of concerns here; as a beginner, one would get very confused.--Trung Lê
Sam Ruby says: As explained in the text, a concern is nothing more than a way to share common code (even as little as a single method) between controllers. If you feel this is not enough, consider starting a discussion on the forum, and we can work through this.
- Reported in: B1.0 (29-Mar-13)
- Fixed in: B2.0
PDF page: 384
It is highly recommended to use #find_by method in Rails 4. Please replace find_by_product_id with find_by(product_id: id)--Trung Lê
- Reported in: B1.0 (29-Mar-13)
- Fixed in: B2.0
PDF page: 399
" ...“ordered pairs of product_ids and quantity.”
Change product_ids to product_id--Trung Lê
- Reported in: P3.0 (10-Feb-13)
- Fixed in: B1.0
PDF page: 430
"You can see the list of middlewares that Rails provides for Rails applications using the command rake middleware." That should probably read, "...middlewares that Rake provides for Rails applications..."--Andy Schott
- Reported in: P3.0 (26-Feb-13)
- Fixed in: B2.0
PDF page: 431
When I run the command
rails generate jquery:install --ui --force
I get
deprecated You are using Rails 3.1 with the asset pipeline enabled, so this generator is not needed.
The necessary files are already in your asset pipeline.
Just add `//= require jquery` and `//= require jquery_ujs` to your app/assets/javascripts/application.js
If you upgraded your app from Rails 3.0 and still have jquery.js, rails.js, or jquery_ujs.js in your javascripts, be sure to remove them.
If you do not want the asset pipeline enabled, you may turn it off in application.rb and re-run this generator.
--Xavier John
- Reported in: P3.0 (18-Aug-12)
- Fixed in: B1.0
PDF page: 432
Original code doesn't escape directory white spaces. See suggested change below.
namespace :db do
desc "Backup the development database"
task :backup => :environment do
backup_dir = ENV['DIR'] || File.join(Rails.root, 'db', 'backup')
source = Regexp.escape(File.join(Rails.root, 'db', "development.db"))
dest = Regexp.escape(File.join(backup_dir, "development.backup"))
makedirs backup_dir, :verbose => true
sh "sqlite3 #{source} .dump > #{dest}"
end
end--OTA
- Reported in: B1.0 (04-Apr-13)
- Fixed in: B2.0
PDF page: 581
who_boughtrequests should have a space in between--Trung Lê
Sam Ruby says: The only occurrence I can find of this sequence of words is on page 176, and on this page there is a line break between the two words.
Stuff To Be Considered in the Next Edition
- Reported in: P2.2 (11-Apr-12)
PDF page: 1
Could the report-erratum url in the pdf footer, be modified to contain the pdf page number?--Brian Maltzan | https://pragprog.com/titles/rails32/errata | CC-MAIN-2016-44 | refinedweb | 6,917 | 69.68 |
"Displaying" XLinks?
January 2, 2003
Q: How do I display XLinks?
How can I get my stylesheet to display XLinks? It displays text without the linking.
Here's an excerpt from the schema:
...
<xsd:element name="who">
  <xsd:complexType>
    <xsd:sequence>
      <xsd:element name="description" ... />
    </xsd:sequence>
    <xsd:attribute name="href" ... />
    <xsd:attribute name="type" ... />
  </xsd:complexType>
</xsd:element>
...
...and some sample XML:
...
<who href=""
     type="simple"
     xmlns:xlink="http://www.w3.org/1999/xlink">
  <description>Who</description>
</who>
...
...and the stylesheet:
<xsl:apply-templates
A: It's possible that displaying XLinks isn't the major obstacle you're facing, which is, rather, understanding them in the first place. Just in case, you might want to review some of this background:
- The XLink 1.0 Recommendation itself, of course, details all you need to know (in theory) in order to "use" (the scare-quotes are important) XLink in your XML documents.
- Three XML.com resources provide a good handle on the political ups and downs of XLink and its support (or lack thereof) in applications to date:
- Bob DuCharme's article, "XLink: Who Cares?"
- Kendall Clark's XML-Deviant column ("The Absent Yet Present Link"), recounting discussions on the XML-DEV mailing list about XLink's prolonged invisibility.
- Clark's back-to-back feature and XML-Deviant column, "Introducing HLink" and "TAG Rejects XLink," respectively. HLink was an alternative to XLink proposed by the XHTML Working Group of the W3C.
Reading those last three bulleted resources may leave you wondering if XLink is worth taking seriously at all. (Your document, after all, includes both of those features: the href and type attributes discussed next.)
The key is that the href and type attributes must be placed in the XLink namespace, as identified by the XLink namespace declaration and corresponding namespace prefixes. Thus, your document should be changed, just slightly, to something like the following (changes in boldface):
<who xlink:href=""
     xlink:type="simple"
     xmlns:xlink="http://www.w3.org/1999/xlink">
  <description>Who</description>
</who>
(Your schema, naturally, would have to take the namespace-qualified attribute names into account. Again, though, a schema or DTD does not by itself establish an element's suitability as an XLinking element. Also note that the element to which the namespace-qualified attributes apply is not itself in the XLink namespace.)
The fact that you're using a stylesheet at all makes me think what you're after is the ability to render the who element as a standard hyperlink in a non-XLink-aware application -- in, say, a Web browser. But the brief excerpt from your stylesheet doesn't do that at all, at least on its own. To put it in a larger context, this excerpt must exist within a template rule. Typically (although not necessarily always) what you'd see in a stylesheet would be two or more template rules linked together in a "trickle-down" fashion, such as:
<xsl:template match="who">
  [processing to occur when such a who element is matched in source tree]
  <xsl:apply-templates select="description"/>
</xsl:template>

<xsl:template match="description">
  [processing to occur when description element is matched in source tree]
</xsl:template>
See the way the value of an xsl:apply-templates element's select attribute maps onto the match attribute of a different xsl:template element? In your case, assuming you haven't skipped anything critical in your stylesheet, you have not provided a specific template rule for the description element. This means that that element will be processed by XSLT's built-in rules -- which is to say that "it displays text without the linking."
Your objective, in your stylesheet, should be to transform the reasonably intelligent XLink from your source document into its (possibly) less intelligent sibling in XHTML: a plain old a element.
One approach would be to replace the second template rule above, as follows:
<xsl:template match="description">
  <a href="{../@xlink:href}"><xsl:value-of select="."/></a>
</xsl:template>
This template rule:
- Instantiates in the result tree, for each description element in the source tree, a simple a element.
- Assigns to each a element's href attribute the value of the description element's parent's xlink:href attribute.
- Provides each a element with a text value: the string-value of the description element currently being processed.
In the case of your sample document, what the browser (or XSLT transformation engine) will produce with these two template rules is:
<a href="">Who</a>
And that, I take it, is what you're looking for.
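For readers who want to sanity-check the namespace point outside an XSLT processor, the same simple-link-to-anchor mapping can be reproduced in a few lines of Python's standard library. This is only an illustrative sketch; the URL and element content here are placeholder values, not taken from the original question:

```python
import xml.etree.ElementTree as ET

# Placeholder document: the URL is hypothetical.
doc = """<who xmlns:xlink="http://www.w3.org/1999/xlink"
            xlink:type="simple" xlink:href="http://www.example.com/">
  <description>Who</description>
</who>"""

root = ET.fromstring(doc)
XLINK = "{http://www.w3.org/1999/xlink}"   # Clark notation for qualified names
href = root.get(XLINK + "href")            # found only under the XLink namespace
label = root.findtext("description")
anchor = '<a href="%s">%s</a>' % (href, label)
print(anchor)
```

Note that root.get("href") with the unqualified name returns nothing, which is exactly the trap described above: without the xlink: prefix, href and type are ordinary attributes, not XLink ones.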
Note that this kind of transformation (from a simple-type XLink in the source tree to a classic XHTML anchor in the result) is pretty straightforward. If you're looking for a real challenge, though, try transforming an extended-type XLink into something XHTML can handle. You will probably have to consider some rather fancy Dynamic HTML and CSS effects. And I, for one, would be interested in what you come up with. | http://www.xml.com/pub/a/2003/01/02/qa.html | CC-MAIN-2017-13 | refinedweb | 788 | 51.38 |
Hi Joe, I have a new feature request for Device-Mapper that I'm hoping you'll consider. Currently, the DM ioctl interface allows device's to be renamed, but the UUID remains unchanged for the life of the device. This patch adds the ability to change the UUID as well as the name. The changes are pretty straight-forward. The UUID rename uses the same ioctl command as regular rename. I have added a new flag field that can be set on that command to indicate a UUID rename instead of a regular rename. The rename() and dm_hash_rename() functions in dm-ioctl.c were modified to check for this flag, and take the appropriate actions. The change is backwards compatible with the old version of the ioctl interface (i.e. LVM2 will be able to run on the new interface without any changes) but not forwards compatible (i.e. any app that wants to use this new UUID rename function needs the new ioctl version). Thus, the DM_VERSION_MINOR is incremented by one (according to the comments in dm-ioctl.h). There are a couple reasons for this request. First, some of the EVMS modules do not have actual UUIDs stored anywhere, but rather generate UUIDs based on other metadata information. For instance, a DOS partition has a name that is based on the location of that partition within the partition tables (e.g. hda6), and has a UUID that reflects the name of the disk, the staring LBA, and the partition size (e.g. hda:63:10240). Since DOS partitions can be "moved" into available freespace elsewhere on the disk, the UUID could change, even though the name might stay the same. Thus, we'd like to keep the assigned UUID up-to-date in the kernel, to make sure it can be correctly located in future query's to the kernel. Another reason for the change is due to our implementation of snapshotting. When taking a snapshot of a device (we'll call that device the origin child), we'd like the newly created snapshot-origin (we'll call it the origin parent) to effectively switch device numbers with the child. 
This way, any active users of the origin child will not notice that the snapshot has taken place. However, the device numbers cannot be changed. Thus, we create the parent device with the child's mapping, perform a series of renames, and reload the child device with a snapshot-origin map. But in order for the renames to work correctly, both the names and uuids must be switched. Please take a look at the patch, and let me know if you have any questions. Also, as the 2.4 and 2.5 versions of the interface are quite similar, this patch should apply to 2.5.63-dm-1 as well. Thanks! Kevin Corry --- 2.4.20a/drivers/md/dm-ioctl.c 2003/02/13 19:35:45 +++ 2.4.20b/drivers/md/dm-ioctl.c 2003/02/27 21:51:41 @@ -262,7 +262,7 @@ up_write(&_hash_lock); } -int dm_hash_rename(const char *old, const char *new) +int dm_hash_rename(const char *old, const char *new, int uuid_rename) { char *new_name, *old_name; struct hash_cell *hc; @@ -279,7 +279,7 @@ /* * Is new free ? */ - hc = __get_name_cell(new); + hc = uuid_rename ? __get_uuid_cell(new) : __get_name_cell(new); if (hc) { DMWARN("asked to rename to an already existing name %s -> %s", old, new); @@ -290,7 +290,7 @@ /* * Is there such a device as 'old' ? */ - hc = __get_name_cell(old); + hc = uuid_rename ? __get_uuid_cell(old) : __get_name_cell(old); if (!hc) { DMWARN("asked to rename a non existent device %s -> %s", old, new); @@ -301,14 +301,21 @@ /* * rename and move the name cell. 
*/ - list_del(&hc->name_list); - old_name = hc->name; - hc->name = new_name; - list_add(&hc->name_list, _name_buckets + hash_str(new_name)); - - /* rename the device node in devfs */ - unregister_with_devfs(hc); - register_with_devfs(hc); + if (uuid_rename) { + list_del(&hc->uuid_list); + old_name = hc->uuid; + hc->uuid = new_name; + list_add(&hc->uuid_list, _name_buckets + hash_str(new_name)); + } else { +); kfree(old_name); @@ -909,6 +916,8 @@ static int rename(struct dm_ioctl *param, struct dm_ioctl *user) { int r; + int uuid_rename = (param->flags & DM_RENAME_UUID_FLAG); + char *old_name = (uuid_rename) ? param->uuid : param->name; char *new_name = (char *) param + param->data_start; if (valid_str(new_name, (void *) param, @@ -917,11 +926,13 @@ return -EINVAL; } - r = check_name(new_name); - if (r) - return r; + if (!uuid_rename) { + r = check_name(new_name); + if (r) + return r; + } - return dm_hash_rename(param->name, new_name); + return dm_hash_rename(old_name, new_name, uuid_rename); } /*----------------------------------------------------------------- --- linux-2.4.20a/include/linux/dm-ioctl.h 2003/01/10 16:49:14 +++ linux-2.4.20b/include/linux/dm-ioctl.h 2003/02/27 19:55:51 @@ -130,7 +130,7 @@ #define DM_TARGET_WAIT _IOWR(DM_IOCTL, DM_TARGET_WAIT_CMD, struct dm_ioctl) #define DM_VERSION_MAJOR 1 -#define DM_VERSION_MINOR 0 +#define DM_VERSION_MINOR 1 #define DM_VERSION_PATCHLEVEL 6 #define DM_VERSION_EXTRA "-ioctl (2002-10-15)" @@ -145,5 +145,11 @@ * rather than current status. */ #define DM_STATUS_TABLE_FLAG 0x00000010 + +/* + * Flag passed into ioctl RENAME command to change the uuid + * rather than the name. + */ +#define DM_RENAME_UUID_FLAG 0x00000020 #endif /* _LINUX_DM_IOCTL_H */ | http://www.redhat.com/archives/dm-devel/2003-February/msg00020.html | CC-MAIN-2014-42 | refinedweb | 814 | 64.3 |
In Maya 2017 you need to inherit from MayaQWidgetDockableMixin to create a dockable window. This in turn parents your custom window (a QMainWindow in my case) to a workspaceControl. I can create a custom icon for my window if I don't set it to dockable, but as soon as I do, the custom icon is destroyed and the workspaceControl takes over. There is no access to the icon through the workspaceControl flags. Has anyone solved this?
Have you tried getting the QObject/QWidget for the workspaceControl? Latest PyMEL has utility functions to get them for PySide and PyQt. Send me some UI code. I'll suffer through it with you.
I'm taking a swing from that direction. I will keep you posted.
Well, that was fun... So, when you set dockable to True, the widget gets buried in several layers of other widgets, which requires you to change the widget you're setting the icon for. So you just need to override the show() method to set the window icon on the proper widget when dockable=True:
# Assumes Maya 2017's bundled PySide2 and the mayaMixin module:
from maya.app.general import mayaMixin as mmi
from PySide2 import QtGui, QtWidgets

class testDockUI(mmi.MayaQWidgetDockableMixin, QtWidgets.QMainWindow):
    def __init__(self, parent=None):
        super(testDockUI, self).__init__(parent)
        self.setWindowTitle('Test Dock UI')
        pixMap = QtGui.QPixmap(12, 12)
        pixMap.fill(QtGui.QColor('red'))
        self.iconData = QtGui.QIcon(pixMap)
        self.setWindowIcon(self.iconData)

    def show(self, *args, **kwargs):
        super(testDockUI, self).show(*args, **kwargs)
        if 'dockable' in kwargs:
            if kwargs['dockable']:
                QtWidgets.QWidget.setWindowIcon(self.parent().parent().parent().parent().parent(), self.iconData)
Unfortunately, after docking/undocking, the icon gets reset. Can't figure out where that's happening to set it back again.
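As an aside, a chain like .parent().parent().parent().parent().parent() is brittle: if a Maya update adds or removes a wrapper layer, the icon lands on the wrong widget. One hedged alternative is to climb until the chain runs out, shown here as plain Python with a duck-typed parent() method and no Qt required (Qt's own QWidget.window() performs essentially this walk):

```python
def top_level_ancestor(widget, max_depth=10):
    """Climb the parent() chain until it ends (i.e. we reached the
    top-level widget) or max_depth levels have been traversed."""
    for _ in range(max_depth):
        parent = widget.parent()
        if parent is None:
            return widget
        widget = parent
    return widget
```

With the UI above you could then write top_level_ancestor(self).setWindowIcon(self.iconData) instead of counting parents by hand.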
Thanks for looking at this Mile. It pains me to see so much digging for the parent. There should be a dockTriggered event or something like that to look for. I'm going to work with this a bit and will report back.
I did find this method, which seems to eliminate the need for parent(). However, it requires you to wrap a workspaceControl as a Qt object.

"""Docking concept courtesy of Lior ben horin"""
name = 'Chickens'
main_control = pm.workspaceControl(name, iw=300, ttc=["AttributeEditor", -1], li=False, mw=True, wp='preferred', label="Chickens")
control_widget = omui.MQtUtil.findControl(name)
control_wrap = wrapInstance(long(control_widget), QtWidgets.QWidget)
control_wrap.setStyleSheet("background-color:black;")
control_wrap.setAttribute(QtCore.Qt.WA_DeleteOnClose)

pixMap = QtGui.QPixmap(12, 12)
pixMap.fill(QtGui.QColor('blue'))
iconData = QtGui.QIcon(pixMap)
v = control_wrap.window()
v.setWindowIcon(iconData)
Ah, gotcha. So you're just using the workspaceControl command now instead of the MayaQWidgetDockableMixin. Definitely like that approach more.
I just realized that sets the icon for several Maya windows. I think I need to step down by one.
It looks like the parent() method does the same. | http://tech-artists.org/t/maya-2017-dockable-windows-and-custom-icons/9421 | CC-MAIN-2017-51 | refinedweb | 451 | 52.76 |
Buddies
Are there any limitations on the use of OpenGL shaders?
In Cg shaders I know the possible shader inputs… and in OpenGL? Are the inputs the same?
Are you sure you want to use OpenGL shaders? I suggest just using plain Cg. The only reason to use them is if you are doing embedded programming. And if you are doing that you can look them up in source…
I´m not sure.
I buy a big book about OpenGL shaders and i think pratice it in Panda3D. I read a lot about CG shaders and pratice a little. I read cg manual, Panda´s manual about it and other articles but… i can´t feel the spirit of shaders development
. Today i just see my xbox and think "someday, someday… "
Maybe i need more time … dont know
Lot of thanx Tree
In GLSL, there are already a big number of variables provided to you by the GLSL language. We don’t provide custom Panda3D-specific inputs, but in general you don’t need those.
Dummy question:
Then I can use all standard OpenGL code without worry? I couldn't do it before, but I'll try more carefully tonight.
I know I'm asking a lot, but could someone post a very basic code example that interacts with OpenGL (or a link instead)?
Very Thanx
You can use any GLSL shader with Panda3D that you can find on the web. (Note that some of them may require custom inputs, though.)
Thanx, but I have another problem. I tried to use the basic GLSL shader described in the manual and got this error:
:display:gsg:glgsg(error): An error occurred while compiling shader!
0(1) : error C0000: syntax error, unexpected '<' at token "<"
0(1) : error C0501: type name expected at token "<"
0(1) : warning C7530: OpenGL requires main to return void
0(1) : error C1110: function "main" has no return statement
:display:gsg:glgsg(error): at 413 of c:\panda3d-1.7.0\panda\src\glstuff\glShaderContext_src.cxx : GL error 1281
I'm on Windows 7, Panda3D 1.7.0, a GeForce 9600, and Notepad++ to write the shader.
My program:
import direct.directbase.DirectStart
from panda3d.core import *
from sys import exit

shaderLoc = ["../shaders/vertexBasic.glsl", "../shaders/fragBasic.glsl"]
smile = loader.loadModel("smiley")
smile.reparentTo(render)
shader = Shader.load(Shader.SLGLSL, shaderLoc[0], shaderLoc[1], "")
smile.setShader(shader)
camera.setPos(0, -20, 0)
base.disableMouse()
base.setFrameRateMeter(True)
base.accept("escape", exit)
base.run()
Vertex
void main()
{
    gl_Position = ftransform();
}
Fragment
void main()
{
    gl_FragColor = vec4(gl_Color.g, gl_Color.r, gl_Color.b, 1.0);
}
Any ideas?
Regards
That’s a bug in 1.7.0. The latest buildbot release fixes that.
Ok master. I'll download it.
Thanx
Buddies
I downloaded the latest buildbot of Panda and tried the same code as before. Now I got this message:
:display:gsg:glgsg(error): An error occurred while linking shader program!
Fragment info
-------------
0(2) : error C5052: gl_Position is not accessible in this profile
(0) : error C5052: gl_Vertex is not accessible in this profile
What is this? What the hell is a profile?
A profile is a machine language a shader can be compiled to. The best analogy I can think of is writing a program in C++: it can be compiled for Windows, Linux, Macs, etc. Different profiles support different features.
Thanx David
Everything runs fine now
My Python editor had a bug; I reinstalled it and everything is all right!
Bram Moolenaar wrote:
> You can improve this a lot by changing:
>
>     if ch_canread(ch)
>       let text = ch_read(ch, {'timeout':0})
>       caddexpr text
>       cbottom
>     endif
>
> To:
>
>     if ch_canread(ch)
>       while ch_canread(ch)
>         let text = ch_read(ch, {'timeout':0})
>         caddexpr text
>       endwhile
>       cbottom
>     endif
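The loop Bram suggests is the generic "drain" pattern: while input is available, do zero-timeout reads, and only update the display once the channel is empty. A rough Python analogue, with a queue.Queue standing in for the Vim channel (names hypothetical):

```python
import queue

def drain(ch):
    """Pull every message currently queued on `ch` without blocking,
    mirroring `while ch_canread(ch)` around a zero-timeout read."""
    messages = []
    while True:
        try:
            messages.append(ch.get_nowait())   # timeout 0: never block
        except queue.Empty:
            break
    return messages
```

In the Vim script, the equivalent of get_nowait() is ch_read(ch, {'timeout': 0}), and the single post-loop step corresponds to the one cbottom call.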
Yes, I intentionally do so to make reading really slower than writing.

> But, you might still miss some messages if the job exits early.
>
> I suppose we will need to add an option to tell Vim that you will read
> the messages, not using a callback.  I think this should do it:
>
>     "drop"    Specifies when to drop messages:
>               "auto"   When there is no callback to handle a message.
>                        The "close_cb" is also considered for this.
>               "never"  All messages will be kept.

It would be very nice if I could have the "drop" option in job.
-----------------------------------------------------------
This is an automatically generated e-mail. To reply, visit:
-----------------------------------------------------------
src/slave/http.cpp (lines 89 - 105)

    This needs to be defined in the namespace that `TaskInfo` is defined in, that is, in the `mesos` namespace.

src/slave/http.cpp (line 97)

    If we were using `model` before, we need to maintain that. We have a `json` defined for `CommandInfo` in `src/common/http.cpp`; we should add the declaration `void json(JSON::ObjectWriter* writer, const CommandInfo& command);` to `src/common/http.hpp`. We can then use it like this:

    ```
    writer->field("command", task.command());
    ```

src/slave/http.cpp (line 125)

    Same as above: since we were using `model` before, we need to maintain this. We just need to remove the `JSON::Protobuf` here:

    ```
    writer->element(task);
    ```

src/slave/http.cpp (line 493)

    Curious as to why we added a newline here.

src/slave/http.cpp (line 497)

    Same here, why the newline?

- Michael Park


On Feb. 29, 2016, 7:01 a.m., Neil Conway wrote:
> -----------------------------------------------------------
> This is an automatically generated e-mail. To reply, visit:
> -----------------------------------------------------------
>
> (Updated Feb. 29, 2016, 7:01 a.m.)
>
> Review request for mesos and Michael Park.
>
> Repository: mesos
>
> Description
> -----------
>
> Updated `/state` agent endpoint to use jsonify.
>
> Diffs
> -----
>
>   src/slave/http.cpp 4eb1fafdfa72094511b0b2684a3c2705bd8c7c5e
>
> Testing
> -------
>
> make check
>
> Thanks,
>
> Neil Conway
I have the following classes:
class foo {
    public void a() {
        print("a");
    }
    public void ...
Let's say I have this base class:
abstract public class Base {
    abstract public Map save();
    abstract public void load(Map data);
}
Typically an overridden method can be called with super. For example:
public class SuperClass {
    public void something() {
        out("called from super");
    }
}
public ...
Of course it will be a compile error in case 2. The Horse class is a descendant of Animal. Animal is a 'wider' class than Horse. In terms of 'is-a', a Horse is an Animal, but an Animal is not a Horse. It's correct to assign a Horse instance to an Animal variable. But at compile time you are still working with an Animal variable, and Animal doesn't have ...