Search and Replace a Line in a File in Python

In this article, we will learn to search and replace the contents of a file in Python, using built-in functions and some custom code. Python provides multiple built-in functions for file handling. Instead of creating a new, modified file, we will search for a line in a file and replace it with another line in the same file. This replaces every matching line in the file and avoids the overhead of changing each line by hand. Let us discuss some of these ways to search and replace lines.

Example: Replace only the First Line of a File using FileInput

The example below takes the review.txt file and replaces its first line with a new line.

    import fileinput

    filename = "review.txt"
    with fileinput.FileInput(filename, inplace=True, backup='.bak') as f:
        for line in f:
            if f.isfirstline():
                print("In the case of Ghostbusters", end='\n')
            else:
                print(line, end='')

Example: Search any Line of a File and Replace it using FileInput

The example below takes the review.txt file and replaces a particular line within the file. It searches for the line and replaces it.

    import fileinput

    filename = "review.txt"
    with fileinput.FileInput(filename, inplace=True, backup='.bak') as f:
        for line in f:
            if line == "the movie would still work played perfectly straight\n":
                print("the movie work played perfectly straight", end='\n')
            else:
                print(line, end='')

Conclusion

In this article, we learned to search and replace a line in a file using the replace() method and the fileinput module, along with some custom code. Therefore, to search and replace a line in Python, the user can load the entire file and replace its contents in the same file, instead of creating a new file and then overwriting the old one.
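As a rough sketch of the load-and-replace approach described in the conclusion (the filename and the search/replacement strings below are only illustrative), the whole file can be read into memory, modified with str.replace(), and written back to the same file:

```python
def replace_line(path, old_text, new_text):
    # Read the whole file into memory.
    with open(path, "r") as f:
        text = f.read()
    # Replace every occurrence of the old text with the new text.
    text = text.replace(old_text, new_text)
    # Overwrite the same file with the modified contents.
    with open(path, "w") as f:
        f.write(text)
```

Note that this loads the entire file into memory, so it suits small and medium files; for very large files the line-by-line fileinput approach above is preferable.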
https://www.studytonight.com/python-howtos/search-and-replace-a-line-in-a-file-in-python
Basic DNS Concepts

Updated: May 3, 2010
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2

DNS is a distributed database that represents a namespace. The namespace contains all of the information needed for any client to look up any name. Any DNS server can answer queries about any name within its namespace. A DNS server answers queries in one of the following ways:

- If the answer is in its cache, it answers the query from the cache.
- If the answer is in a zone hosted by the DNS server, it answers the query from its zone. A zone is a portion of the DNS tree stored on a DNS server. When a DNS server hosts a zone, it is authoritative for the names in that zone, which means that the DNS server can answer queries for any name in the zone. For example, a server hosting the zone contoso.com can answer queries for any name in contoso.com.
- If the server cannot answer the query from its cache or zones, it queries other servers for the answer.

It is important to understand the core features of DNS, such as delegation, recursive name resolution, and Active Directory–integrated zones, because they have a direct impact on your Active Directory logical structure design. For more information about DNS and Active Directory, see "DNS and Active Directory" later in this chapter.
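The three-way answering order described above (cache first, then hosted zones, then other servers) can be sketched as a toy lookup function. Everything here is illustrative; the function and data structures are not part of any real DNS server implementation:

```python
def answer_query(name, cache, zones):
    # 1. Answer from the cache if possible.
    if name in cache:
        return ("cache", cache[name])
    # 2. Answer authoritatively if the name falls inside a hosted zone.
    for zone, records in zones.items():
        if name == zone or name.endswith("." + zone):
            # The server is authoritative for this zone: a missing name is a
            # definitive "does not exist", not a reason to ask other servers.
            return ("zone", records.get(name))
    # 3. Otherwise the server would query other servers for the answer.
    return ("forward", None)
```

For instance, with zones = {"contoso.com": {"www.contoso.com": "192.0.2.10"}}, a query for www.contoso.com is answered authoritatively from the zone, while a name outside the cache and all hosted zones falls through to forwarding.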
https://technet.microsoft.com/en-us/library/cc755671(v=ws.10).aspx
Create DOM Tree out of a Dynamic HTML String

1 min read

Just a very short post before I go to bed as I'm really tired today. I just played around with some JavaScript code, building a Google Chrome extension, when I came to the point where I needed to process the DOM tree of an HTML string. Basically I had something like the following:

    var dynamicHtmlString = "<pre>public class SomeClass{....}</pre>";

The dynamicHtmlString was built by my JavaScript code through a long-lasting process. At the end I wanted to format those (basically pieces of source code) in a proper way by associating the "prettyprint" CSS class to all "pre" elements and then calling google-code-prettify's prettyPrint() method. document.body.innerHTML = dynamicHtmlString; doesn't work, obviously, as I wanted to use jQuery for traversing the DOM and adding the CSS class. Luckily, jQuery also supports creating the DOM tree, by simply wrapping HTML code within $():

    $(dynamicHtmlString).appendTo("body");
    $("pre").each(function(){
        $(this).attr("class", "prettyprint"); //check this for compatibility with v1.6 -> use .prop(...) otherwise
    });
    prettyPrint();

So that did the job. Pretty easy, isn't it :). This tells a lot about the power of jQuery. Love this library!
https://juristr.com/blog/2011/06/create-dom-tree-out-of-dynamic-html/
User Define Alert Example

Kim DC Zamora, February 23, 2012 at 6:57 AM

I couldn't make it work on my own. It can launch, but it doesn't have the Alert effect. The image and the text are already there without even clicking the word MIDlet. How can I make the alert work?
http://roseindia.net/discussion/22532-User-Define-Alert-Example.html
>>> I had no idea this is even possible... and the manual advises that the >>> defvar should "usually be at top-level". So, commenting out all the >>> file-level (defvar date) in all of org and then changing in org-bbdb.el >>> like this: >> In general, having them at file-level is just as good as having it >> within a defun. > Either I've lost you here or the conclusion is that we can't do anything > in org to make these defvars less evil and should just silence the > warnings associated with them? The only thing that can be done for dynamic vars is to make them namespace-clean, so if renaming them is not an option (which could be the case for example because those vars appears in user customizations), then indeed, there's not much you can do. Stefan
https://lists.gnu.org/archive/html/emacs-devel/2012-04/msg00863.html
Directory Listing

- interpolation from reduced to full for finley Function. needs more work.
- moving isDirectSolver() function to escript to correct solver library selection logic. Also updated Ripley SystemMatrix overload with recent change.
- added unroll flag to dudley and finley and enabled tests that were skipped before....
- s/assertEquals/assertEqual/g since the former is deprecated.
- fixing coverity-detected leaks CID 115857,115858.
- fixing coverity-detected leaks CID 115859,115860,115861.
- fixing coverity-detected leaks CID 115863,115864,115865.
- skip paso test in finley without paso and fix assemble parameter issue without paso.
- fixed typo.
- trying to support 2nd order elements in finley with trilinos.
- WIP trilinos-based DOF->node interpolation in finley. Not complete yet.
- typo with trilinos.
- guard slave-only functions in readgmsh to avoid unused func warnings in non-MPI builds.
- more test adjustments.
- fixed typo.
- merged finley Mesh and MeshAdapter into FinleyDomain.
- fixed types for index type long..
- rather than skipping pde system tests with trilinos we now expect failures, which actually translate to quite a few unexpected successes. There is still a threading issue somewhere....
- index type fix.
- fix savanna options file and make sure libraries get added once only.
- now refraining from adding all libraries to all targets. So we don't link unnecessary libraries, e.g. escript does not need parmetis etc...
- last round of namespacing defines.
- more namespacing of defines.
- more index_t != int fixes in dudley and finley. Both compile now with index type long. Weipa doesn't....
- Relicense all the things!
- please keep third-party includes after our own to avoid reintroducing the "python has to be first" issue.
- Compile error fix. Not sure why my system didn't pick it up.
- Fixing some namespaces and includes..
- Fixed a corner case in finley where some ranks have no reduced DOFs. This came to light when bumping the number of testing ranks to 20 on Savanna.
- fixed another current exception hang.
- removed useless include.
- Bye bye esysUtils. Also removed first.h as escript/DataTypes.h is now required everywhere and fulfills that role by including a boost python header first.
- a few more include rearrangements.
- removed some macros and moved index.h to escript.
- moved esys MPI to escript.
- 1) finally put dudley into its own namespace 2) replaced all Esys*Error calls by exceptions in dudley and paso
- update 2.
- update.
- small fix.
- eliminated Esys_setError() et al from finley. We do have the issue of potentially throwing exceptions in some ranks and not others but I'd claim that was the case before in various places.
- Added IOError. Also started replacing some 'setError()' calls by exception throws as the latest test failures are bogus because an error was set but never checked for.
- fixed a few exception types in tests.
- Major rework of our exceptions. We now have specific AssertException NotImplementedError ValueError which translate to the corresponding python exception type. I have gone through a few places and replaced things but not everywhere..
- moved random to escript.
- moved exception translators...
- metis kindly put its own real_t typedef in the global namespace so we need to be careful not to clash with that :-(.
- hmm, this change was necessary on Trusty for some reason, it shouldn't break other builds...
- Type fix for parmetis with long indices
- DOF and reduced DOF can now be passed to ownSample() in Finley.
- fix for Bug #326
- 64-bit index fixes in finley.
- made finley::Mesh::write long index compatible.
- made namespace use consistent between header and cpp to help poor doxygen find the right functions
- fix for build failures on x86 machines due to underlying type of size_t being different for a format string
- cleanup of finley test imports and docstrings
- parmetis options. We've been passing an array that's too small according to current parmetis doco.
- Avoid unnecessary allocs/frees in parallel region which caused hangs using many threads and intel OMP on savanna (probably due to a bug in the intel library or glibc).
- fixing a missing build dependency of pythonMPI existing before individual tests are run
- improved error checking in gmshreader suite
- fixing a memory leak in the new MPI wrapper, along with some comment updating to satisfy doxygen a bit better
- fixed gmsh named tags, along with a little more cleanup
- fixes I forgot to commit and that concludes our story of how the gmsh ruined christmas
- further debug removings
- fixed infinite loop in gmsh reader
- fixed may be unitialised error
- reworked code to add more openMpness
- fixed compiler errors
- working mpi version of readGmsh. tweaks and unit tests to come
- fixed longstanding lazy test failures due to calling getDataRO() without resolving Data object first (only in finley).
- adding speckley to ripley interpolation (single process only).
- removing renamed tests
- MPI Fixed a bug causing Mac crashes..).
- more work on pyvisi integration
- and more work on saving the element jacobians (please don;t look at the code)
- more tests on slicing
- solution and reduced solution can have reference numbers now!
- first steps towards the reuse of the element jacobians
- pdetools test added now done
- test_utilOnFinley.py small bug fixed
- add remainder of includes to SConscript includes install
- add mechanism to install .h files to inc directories (still need to specify full lists of .h files to install)
- rationalise #includes and forward declarations
- a few more benchmarks added
- restructure escript source tree move src/Data/* -> src remove inc modify #includes and cpppath settings accordingly
- make adjustments to allow for new source tree structure rationalise all #includes
- reorganise finley src tree to remove inc dir and src/finley directory
- reorganised esysUtils to remove inc directory
- adjustments to includes to allow for new paso src tree structure
- typo fixed: the anisotropic problems are returneing the correct answer now.
- systems benchmarks run now
- some mopre problems added
- some mopre problems added
- fix installation to directories specified by pyinstall and libinstall
- minor comment and formatting changes
- replace libdir default settings - SConstruct was broken without them
- reset the solver iter_max increased
- remove old make based build system
- finley source directories now merged into one directory: src/finley
- small typy fixed
- a simple linear elastic solver
- typo fixed
- ILU has been replicated is called RILU (recursive ILU) now. ILU will now be reimplemented.
- cut of for small values
- frame to run a single test out of the test_util suite
- Laplace benchmarks added
- laplace benchmarks added
- finleybench is build now
- Finley options added plus some other stuff
- run finley benchmark
- first version of a finley benchmark suite for test the paso solver
- explicitly define dependencies between modules
- cast to float added. this avoids problems with very small values
- cast to float added. this avoids problems with very small coodinates.
- cast to float added. this avoids problems with very small values
- setup scons configuration to make and install python code
- pass in solver libraries to link with via options files now
- pass in platform specific libraries via options files now
- fix compiler options to enable compilation on gcc platforms
- implement switching of debug compiler options based on command line flag 'debug'
- pass in cc/cxx settings from config file
- import sys first pass at importing compile options from an external file'
- collect all information needed to determine build configuration (ie: debug/nodebug, host, platform etc)
- removed ?
- remove MeshAdapter test for now - will only work with scons build system
- fix settings to account for new finley source tree structure
- can now build finleyC/finleycpp concurrently
- first draft of config/construct files for finleyC/finleycpp libraries
- add __init__.pyc to install list
- python init file for finley module
- removed all references to modellib module
- *** Initial revision
https://svn.geocomp.uq.edu.au/escript/trunk/finley/?view=log&sortby=date&pathrev=6420
Hi,

Being new to Python scripting, I am facing a simple problem. I am writing a Python script "MyScript.py" in which I am calling the third-party program PyMOL through the import function. Here is the simplified script (mainly inspired from this blog:):

    #! /usr/bin/python
    # Usage: MyScript.py <input>

    # launch PyMOL from the terminal
    import __main__
    __main__.pymol_argv = ['pymol', '-qc']

    import sys
    import pymol
    pymol.finish_launching()

    # Input
    file = sys.argv[1]

    # run PyMOL command
    pymol.cmd.do("load %s" % file)

    # Exit PyMOL
    pymol.cmd.quit()

I get this error message:

    Traceback (most recent call last):
      File "./MyScript.py", line 10, in <module>
        import pymol
    ImportError: No module named pymol

* My path to Python site-packages: /Library/Python/2.7/site-packages
* My path to PyMOL: /Users/me/my_apps/MacPyMOL_1.3.app/Contents/MacOS/MacPyMOL (I made an alias pymol='MacPyMOL' in .bash_profile)
* I also tried to replace in the script:

    import sys
    import pymol

by

    import sys
    sys.path.append("/Users/me/my_apps/MacPyMOL_1.3.app/Contents/MacOS")
    import MacPyMOL

but still the same. I looked at other forums but it didn't solve the problem:;24c8090b.1011

If someone could direct me to a solution... Thanks!
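A likely cause of this ImportError is that the directory containing the importable pymol package is not on sys.path; appending the bundle's Contents/MacOS directory (which holds the launcher binary, not the Python modules) is usually not enough. As a hedged sketch, not a definitive fix, and with all candidate paths below being hypothetical placeholders to adjust to the actual installation, one can probe candidate directories before importing:

```python
import os
import sys

def add_first_existing(candidates):
    """Prepend the first existing directory from `candidates` to sys.path.

    Returns the directory that was added, or None if none exist. The
    candidate paths a real script would probe are installation-specific.
    """
    for path in candidates:
        if os.path.isdir(path):
            sys.path.insert(0, path)
            return path
    return None

# Hypothetical locations where a PyMOL install might keep its Python modules.
added = add_first_existing([
    "/Users/me/my_apps/MacPyMOL_1.3.app/pymol/modules",
    "/Library/Python/2.7/site-packages",
])
```

If add_first_existing() returns a directory, a subsequent `import pymol` will search it first; if it returns None, none of the guessed locations exist on that machine and the module directory has to be located by hand.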
https://www.biostars.org/p/113328/
SnakesAndRubiesTranscript

Transcript of the video of the Django Presentation by Adrian Holovaty. Slides available from:

Introduction

Hey guys, thanks for coming out, I'm Adrian, one of the developers of Django, which is a Python web framework. Nice to see that a lot of you guys were Python developers. First I'm going to show you a little bit about the background of Django - where it came from. It was derived out of a real-world application, so, in order to understand Django, it's kind of important to understand the culture it kind of came out of. I know that sounds really exciting, but... Basically, I'm Adrian, I live here in Chicago, grew up in the suburbs, I work for washingtonpost.com as editor of editorial innovations, which is a fancy title for basically Mad Scientist - I get to do cool stuff with the website, like basically build database-driven applications on a deadline. I've only been doing that for three months. Before that I was actually living in Kansas - which is not quite as bad as you might think (slide: Kansas board of education) (Laughter) It's actually a lot better than that because I lived in a town called Lawrence, Kansas, really cool college town - got a great music scene, there's a lot of cool bars and restaurants and stuff - very very cool place. But that's not the reason I moved there. I moved there because the newspaper is really really good. I've got a news background - my background is between programming and journalism. I develop news web applications. This web operation in Lawrence, Kansas is really really cool, really really cutting edge, and they do a lot of really really cool stuff, so I went to work there.

Adrian's background with lawrence.com

I'll give you an example of one of the cool things that we did.
I told you about the music scene in Lawrence; uh, we said ok, wouldn't it be cool if there were a site you could go to that represented Lawrence, all the music scene had, all the stuff, all the events happening, all the bars and restaurants, that stuff. So we made this: Lawrence.com Lawrence.com is easily the most in-depth local entertainment site in the world; this thing fricken kicks ass. Starting out with the basics, it has an events calendar, which is you know, every entertainment site has one, has the standard stuff, when and where it's happening, what the cost is, but when it gets really interesting is the bands database. I don't know if you know allmusic.com, but it's essentially allmusic.com at a local level, plus it ties into the events database, so that on this band's page it says that they have shows at the Granada on Friday, at the <somewhere> on Tuesday, and it pulls in all the genres, all the biographical information about the band, and everything's in a huge database, so that when you go to the page for this band, you can see oh - those are the musicians in the band, click on one of the musicians and you can see all the other bands he's in, you can see OK - this guy plays pedal steel, so you click on pedal steel, you see all the other people in town that play pedal steel. It's just kind of, this huge database of all local entertainment, and because it's very music oriented, every band has song clips, and we have entire MP3s, around a thousand MP3s of all local music, which is very very intense. The way this stuff integrates is very very cool; on the events page, if you're looking at a band in the band database, it'll pull in those links and say related band pages, it'll look at those bands and say ok, do any of these bands have songs in the songs database? Yes, in this case there were, so it says if you go to this event, you might hear these songs, blah blah blah blah blah. So we're doing some really cool things. 
Including a radio station that's this little flash-wizzy interface that automatically grabs all the songs for a particular band, there are genre stations, you can make your own stations - which are called playlists - it's very very interactive site, you can create your playlist, you can comment on other people's playlists, you can comment on essentially any object in the system. There's an album discography database; you can comment on any band, any event, any story in the blogs, there's an intense restaurant database with not only the basics like which cuisine it is, and which region, but also whether it's a homey's favourite (*laughs*) - the site authors refer to themselves as the homies - whether it's vegetarian friendly, and whether it accepts barter (*laughs*). No restaurant in Lawrence accepts barter, but we've been thinking of adding a little thing with adding that, where if you search for it, it says, like, "are you stupid?", or something. But, the point is, it does a lot of cool stuff. Because we have all the restaurant closing and opening times, we're able to say here are all the restaurants that are open right now, so at 4pm or whatever time it is now, it's not very impressive, but if you're drunk at 1am and you want to get some grub, you can go to this page and there's all the stuff that's still open. Speaking of being drunk, there's a drinks specials database, which is updated every day and has all the drinks specials for every bar in town every day. People actually print this thing out, and put it next to their beds - I'm not making this up - Simon's roommate used to do this. And you can export drink specials to your ipod, so if you're wandering around town, you can get out your iPod and there are your drink specials. The drinks specials integrate in the events calendar. *audience comment* Could you integrate the drinks specials with a GPS thing so drunk people could find their way to the bars? (laughter) That's a good idea. 
I hope you're getting the impression that everything kind of knows about everything else. It's one gigantic database. This example events page (on a slide) - if an event happens at a bar, and the bar has a drinks special for the date the event is happening, it'll pull in the drinks special. If you go to this event, here's a drinks special for that place. Now living in Chicago, I really wish that MetroMix did something like this. Metromix... bad. So there's tons of cool data on this site, I'll go through a couple more things. The downloads page; we wanted to have kind of a front door for all the stuff you can download on this site, of course we could put random MP3s up, "here's a bunch of random MP3s", but that wouldn't be that exciting, so we were thinking - what would be a cool, timely way of displaying MP3s as they're added to the site? This page, what it does, is it looks at the events calendar, it finds all the bands that are playing according to the events calendar in a local band database, and of those, which ones have MP3s. So every time you go to this page, it's fresh, it's new, it's got new stuff according to who's playing, so everyone can get the MP3, get band information - oh, by the way, they're playing live at the <somewhere>. There's a mobile edition, so that when you're wandering around drunk in downtown Lawrence, you can pull up the drinks specials to further your alcoholism, you can see which restaurants are open right now, and you can get a bunch of other stuff that kind of makes sense to have in a cellphone context. There's blogs, there's podcasts, there's comments on everything, there's other stuff that I didn't want to offend anyone with (slide says 'pictures of drunk sorority girls')... especially, why am I bringing this up?
The point is, you kind of have to understand the intensity of all the cool shit we did in Lawrence, and that was kind of our unofficial motto - we build cool shit - we wanted to do cool stuff, we wanted to put it on really rapidly, improve these sites - we'd get these ideas, like, on the events calendar you can sign up to get reminders of an event - that just happened one day when somebody told me 'hey, did you know that you can send SMS via email?' 'oh, I didn't know that, let's add it to the site' 'ok, let's do it today'. My boss calls me on a Saturday night "Hey, oh here's this cool idea for the site, why don't we do it?" and I'm like "hey, I'm at the computer anyway, I'll just do it right now". So that's the kind of culture. Where does this come from? Well it's this guy - Rob Curly - he was our boss there, and he really encouraged this really really rapid web development. It's essentially web development - computer programming - with journalism deadlines. It's a fusion of those two concepts. The first thing that he is really really pushing on us is that we need to develop things very very quickly. An example of that is, toward the end of last year, around the elections, he said: "Hey, oh my God, I just realised there's a presidential debate tonight. Wouldn't it be cool if users could sign on to our news site and rate the candidates. Wouldn't it be cool if people could rate candidates on whether they made sense, how eloquent they were, whether they made any weird facial expressions. Oh, and by the way, if you could do it in four hours, please". So, this kind of culture... first of all, it drove us insane. Secondly, it really encouraged us to come up with some sort of solution that would let us make websites - first of all that were very intense and cool and interactive and all that - but also so that we could do it quickly. The second thing - we did end up making that presidential debate score card in four hours - it was Jacob, who's doing AV stuff today. 
The second thing that he's always pressing on us was that the ink was never dry on these babies. A newspaper term, of course, ink being a newspaper thing, but the concept is that you create a web application of some sort, of course you have to create it quickly, that's the first point I had, but once you create it, the thing's not done yet. Inevitably, requirements change, you're going to have to change that thing to add new features, more ways of browsing, more ways of searching, all that stuff. The restaurants thing I showed you, for example, when that first started, it was just type of food and location of town. All those other things, including barter - barter I just added one night when I was futzing around - but all those other things were ideas that our editors had - oh, wouldn't it be cool if on our restaurants page you could search by whether they accepted Visa or American Express - so boom, the ink is never dry on these things. You've got to be able to not only make the first version very quickly, but you've got to be able to update them very very fast. So. We originally used PHP for this, the original version of Lawrence.com was a PHP app which started out OK, but it got quite messy. The problems with PHP were that the syntax is very verbose, you had to edit a lot of files to get stuff done, I know in theory it is possible to do good PHP, but it's just so much of a hassle - the language doesn't encourage good practise. Introspection abilities were very bad, you couldn't really follow the Don't Repeat Yourself principle, because you had to put bits in this file, and this in this file, and it was hard to maintain. And then there's the namespace thing (slide of PHP functions) - this is, of course, for those of you that didn't get the joke, a list of the first 30 PHP functions that start with 'A', available in any PHP script, and I believe - what's the latest on PHP namespaces? It's going into 6, probably? 
Discovered Python

So, Simon and I, around this point - it was around two years ago when we were getting a little tired of PHP, we discovered Python, and we just immediately fell in love. Essentially, we loved the terse syntax, we loved how you could introspect, we loved the dynamic abilities - so clean, and it encouraged best practices. We said OK, we have to do this, we have to move all web development to Python, because we're going to go fricken insane if we stay with this PHP. So, we decided to make this framework for our internal needs - I showed you this entertainment site we have, we have a news site, a sports site, and our organisation does development for outside companies - so kind of a lot of different types of stuff that we do. So this framework that we were putting together had a couple of goals. One, like, I hope I've been pounding into your brain, we really had the need to make web development stupidly fast. Like four hours, not six months, fast. Like two days for an intense classified system. Jacob just this past week wrote an entire classified system between 9pm and 3am. No doubt powered by a lot of coffee, or something. We wanted to automate the repetitive stuff; there are certain tasks in web development that are just so frickin boring, like validation routines, and making sure that input is clean and all this stuff - we just wanted the framework to take care of this for us. We wanted to practise loose coupling, so that if we didn't like a certain part of this system, we could swap in another template language, we could swap in another database abstraction layer. We wanted to follow best practices - Simon and I, who developed this originally, are obsessive perfectionists. We really really strongly believe in the foundation principles of HTTP - that everything that affects data on the server should use POST, that you use GET properly, beautiful URLs, because doing ugly URLs is just pathetic. We wanted it to be efficient.
We didn't want this to be some sort of really high level framework that did a lot of cool stuff, but didn't care about the efficiency. We needed it to be fast, we're powering more than a million page views per day, so if it doesn't need to do a database query, don't do a fricken database query. Be efficient. The end goal was that we wanted to create lawrence.com with this framework.

Starting to create the framework

When we created the framework, we kind of had a trial by fire, because we had a bunch of websites that we were creating as we were creating the framework, so we had to take a break, then go back to the framework, take another break, then go back to the framework. Essentially, the end goal though was to make lawrence.com. So, one of these trials by fire was when Rob came up to us and he said "Hey, it's summer, that means it's time for little league." and he had this wacky idea - why don't we take local little league, which is about a hundred... more than two hundred teams or something like that, and treat these teams like they're the New York Yankees. So I'm thinking every team gets its own page (see slides) with a list of who they're playing, what field they're at, game info - tied into our local weather database, so if a game's in the next five days, which is the time range for which we have weather forecasts, it'll pull in that weather forecast and display it right on there. Why don't we give every league of L.L. its own page, with what teams are in it, what the standings are. We didn't want to do player detail pages because of privacy reasons, but if these guys were 18, you know we would have been doing that. We wanted to have a database of every field that these guys play on. Because there are incredibly subtle differences between 4H east and 4H middle, but you can go on the site, and click on there, and do a little 360 degree view thing... if you care.
And of course, we wanted to make it so that parents could sign up for cellphone alerts, so that if games were cancelled the website would email them. And of course, do this in three days. So, we did. Using Django, this was the first really big thing that we did - and it worked, and we were like "Wow, this is really really cool, we should use this". And we did, we continued to use it, and we ended up recreating lawrence.com to use it. lawrence.com is now powered by that, our news site ljworld.com is powered by it, all of our internal sites that our company had and all our commercial development that we did for other companies. So, let's fast forward a couple of years to PyCon 2005. If you don't know, this is a Python conference that's held every year. Simon and I, and also our other developer Jacob Kaplan-Moss, who's handling AV, went there, and we did a short lightning talk. It was five minutes, showing off Django and what it does and how fast it makes things. And people were just going gaga over those things. A lot of people came up to me afterward, I got a lot of emails afterward, so we were thinking 'wow, we should open source this' - not only because it's going to be a cool service to contribute back to the community, but because it would get outside developers giving us code. So, you know, it's kind of a win-win. So we did, we open sourced it in July. This is the Django website; I'll have that URL later also. Since July, we've had a ton of awesome contributions, people have been using it, people all over the world - Poland, Australia - it's awesome. So what does this actually do?

Django: More Technical Discussion and Examples

I've kind of given a lot of background, but haven't actually talked code. There's the stack. Number one is the database wrapper - now what this does is it converts Python code into SQL, so you don't have to write SEQUEL statements.
I'm going to alternate my usage of SQL and SEQUEL to please both camps, let's see if I can keep that straight. The second level is the URL dispatcher, which, because we're obsessed with clean URLs, and because we mix and match different applications on different sites - for example on Lawrence.com we have a forums system - and on our news site we have a forums system, but they have different URLs in different places, so the URL logic had to be abstracted into this URL dispatcher thing. There's a template system, which separates content from presentation; I think that's pretty much a well known concept. And there's the admin framework, which I think is pretty much Django's "crown jewel", which I will go over in a minute. That essentially automates all the boring ass stuff that's involved with validation and creating sites that are purely for editing stuff. Finally, there are niceties galore, such as the RSS framework, and all these other things that I'll go into.

Step 1: When you do a Django app, the first thing - assuming it's database driven, because it doesn't necessarily have to use a database - the first thing you do is you design your database schema. Now if we're going to do a sample application for the purposes of this, we'll do a little blog. It won't be written by a cat, but pretend it is (Slide shows Ginger The Cat's Blog). When you're creating generally a database driven app, the first thing you do is you create table statements. Create a table "blog entries" that has a headline varchar entry, body, then the pub date. Well, in Django how you would represent that is, instead of in SQL, you would use a Python class. It looks very similar. The headline is just an attribute of that class, the body is another attribute and the pub_date is another attribute. Slide shows:

class Entry(meta.Model):
    headline = meta.CharField(maxlength=50)
    body = meta.TextField()
    pub_date = meta.DateTimeField()

So that's it.
All you need to do is write those 4 lines, and you get a whole heck of a lot of stuff free. First thing is, it generates the create table statement for you - if it didn't do that, it would really suck. The second thing is, it gives you a Python API to actually edit this information. So you don't have to write SQL statements. Here's how you would instantiate an entry object, passing in the headline, body and pub date:

e = Entry(headline="Hello", body="Welcome to my blog", pub_date=date(2005,12,3))
e.save() # saves to DB

Pretty straightforward, I'm not writing any SQL. I can change attributes on it, save again, it will do the update statement instead of an insert. You can get stuff out of the database using this get_list API:

entry_list = entries.get_list()

You can pass parameters to get_list that specify essentially your WHERE clause - how you want to narrow down the search, such as just show me the entries with a certain pub_date, just show me the ones that have published=true, or something. You can get a particular entry out of the database like this:

e = entries.get_object(pk=3)

Here I'm passing pk=3, the primary key is three, that would give me that entry. And, you just use attribute syntax to access the column values, so e.headline would display the headline. Note to purists: This does allow you to still drop into raw SQL if you want to - I hope you've got the impression that Simon and I, when we were designing this, we're big perfectionists, and we know you have to drop into SQL if you need to. We don't want the framework to be something that gets in your way, we want it to be something that makes things super quick, so it's very easy to drop into SQL - and essentially in any part of this stack, you can drop down a level if need be. So, a lot of web development revolves around these admin forms, right.
This is the piece of crap that I use to update my blog, that I wrote in PHP like five years ago, literally, and you know everyone who's a web developer has done this kind of thing - it's like, if you're a web developer you've made these. This is a tremendous pain in the butt, because we have to first of all display the form, we have to redisplay it with validation errors if there are any validation errors, you have to write the validation logic, and finally you have to do the logic that actually saves things. And that's just for adding. There's changing and deleting too. Fortunately, with Django this is a solved problem, because we've abstracted it to a level where it will generate those forms automatically for you, in a production ready thing that we call the Admin site. To do that, here's the entry class I was showing you that represents a blog entry. If you want to give it admin capability, all you have to do is add this:

class Entry(meta.Model):
    headline = meta.CharField(maxlength=50)
    body = meta.TextField()
    pub_date = meta.DateTimeField()

    class META:
        admin = meta.Admin()

META is an inner class that holds metadata about that. Admin is just one of many things you can put in there, it's just kind of a holding place for metadata. So just putting that line in, it will give you - boom - an admin site. This (see slide) is the admin site for the Lawrence operation, which has a ton of stuff. Essentially when you add that admin, it looks for all the objects in your system that have admin turned on, and displays links to them, it keeps track of your recent actions on the side, and it does authentication and all that stuff.
Here's, for example, an add screen for a photo object - it knows, based on the database type, based on the field definition, what type of widget to use, so for example that first one there is 'photo', that when it was in the model was described in Python as a file field, it knows that file fields in a database is a textfield, and in the admin interface it's a little widget that is a file selector. For the second one there, it's a text field in the database, and it knows that it's a bigass text-area. The third thing, that's a many to many relationship, so for those it pulls in a multiple select box, and down there staff member is a many to one, so it puts in a little dropdown, and at the bottom, creation date - which I hope you can see - is a datetime field, and for any datetime field it automatically puts in the little date widgets, and you can click on the little today shortcut, and there's a calendar icon, and it pulls in a javascript calendar, and it's very exciting. So the point is, this is production ready, it just works and you don't have to hack with it, and you don't have to code all this logic. The really messy thing I mentioned about admin sites is the validation - well it automatically takes care of that for you. If I had submitted it empty, this is what I would have gotten (see slide). It knows which fields are required, because you specify that in your model. You keep every piece of domain logic in your model - that's the one true place that all metadata about your objects lives. One of those things is whether a field is required, one of those things is implicit in the type of the field, for instance date fields must be in a certain format, so the admin will validate that for you automatically, and of course you can add custom validators to your heart's content. Uh, the point with this is that it's completely production ready. 
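That automatic, metadata-driven validation is easy to picture as a generic routine. Here's a toy sketch of the idea in plain Python - my own illustration, not Django's actual admin code - where each field's metadata says whether it's required:

```python
def validate(data, field_specs):
    """Check submitted form data against per-field metadata.

    field_specs maps a field name to its metadata, e.g.
    {'headline': {'required': True}}. Returns a dict of
    field name -> error message; empty means the data is valid.
    """
    errors = {}
    for name, spec in field_specs.items():
        if spec.get('required') and not data.get(name):
            errors[name] = 'This field is required.'
    return errors
```

The point is that the rules live in one place (the field metadata), and a single generic routine enforces them for every form, instead of hand-written validation logic per page.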
The way we worked in Lawrence was that we would talk to the client - here's a client that we had (slide) Lawrence chamber of commerce, they wanted us to make a website for them, so we would talk to them, and they said "OK, let's put an accommodations thing, let's have a thing of all the local hotels, and the shopping malls and all that stuff so that people will come visit Lawrence". So, given that, we just coded up those models in half an hour, and we were able to give them an admin site which, one, blew their minds, and two, let them put stuff in there right away so they can start putting content in so that we can focus on doing the interesting things. Here's another site we did - a site for the attorney's association of Kansas, and this blue-jean clad website in Denver that was a local radio station - these are all examples of clients that we gave them the admin interface right away, they started putting data in, and we were able to work on it. Production ready. So, that let us focus on the interesting stuff. We haven't really written much code thus far, we've just written those four lines of the model that describes the data. The next thing you do, when you write a Django app is you design your URLs. We don't want this kind of stuff - we don't want to put .php, .cgi, and we don't want annoying query strings with question marks and scary characters. We don't want to expose the underlying framework in the URLs, because that's evil. We don't, we especially don't want to do this: foo.com/bar/site_view/pageinfrastructure/0,2545,TCP167364285088,00.html Which I believe is vignette and is disgusting. It's very very ugly. So, how it works in the Django framework is you specify it in pure Python code. This is what's called a URLconf - a URL configuration where you just... it's essentially a mapping between URLs and their callbacks. 
So in this example:

(
    ('^blog/$', 'myapp.blog.homepage'),
    ('^blog/archive/$', 'myapp.blog.archive'),
    ('^blog/archive/(\d+)/$', 'myapp.blog.permalink'),
)

if anyone goes to '/blog/' it will call the Python function myapp.blog.homepage. If anyone goes to '/blog/archive/', it will call the archive. In the last example, it displays the capability of capturing things from the URL, so if you go to blog/archive then any number - that's just simple regular expression syntax - it will pass that as a parameter to the function myapp.blog.permalink. Let's take one of these for example:

('^blog/archive/(\d+)/$', 'myapp.blog.permalink')

This was the last one on that screen. That will call the function:

def permalink(request, entry_id):
    return HttpResponse("Hello World")

That takes entry_id, which is whatever was captured in the url, and it returns HttpResponse. This is a view function. Views get, as their first argument, a request object which has all sorts of metadata about the request, like what browser it was, GET and POST data, what URL it was, and all that, and entry_id was taken from the URL. A view is essentially responsible for returning a HttpResponse object, which can be anything you want - it can be HTML, it can be plain text, it can be a PDF, it can be CSV, you can generate images, anything you want. But this is really really lame and boring, so let's beef it up a little bit. This is what the code would look like if you actually wanted to implement this:

def permalink(request, entry_id):
    e = get_object_or_404(entries, pk=entry_id)
    return render_to_response('blog_permalink', {'entry': e})

The first line there is get_object_or_404, you pass it your model module, and you tell it which parameters to use - so use the entries and get the object whose pk id is whatever number was passed in the URL, save that in the variable e, then you render to a response object using the blog_permalink template, passing it entry as the template context.
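The dispatcher itself is simple to picture. Here's a stripped-down sketch of the idea in plain Python - not Django's actual implementation - where each regex is tried in order and any captured groups become arguments to the matching view:

```python
import re

# A toy URLconf: regex pattern -> view callable.
urlpatterns = [
    (r'^blog/$', lambda request: 'homepage'),
    (r'^blog/archive/$', lambda request: 'archive'),
    (r'^blog/archive/(\d+)/$',
     lambda request, entry_id: 'entry %s' % entry_id),
]

def dispatch(path, request=None):
    """Find the first pattern matching path and call its view."""
    for pattern, view in urlpatterns:
        match = re.match(pattern, path)
        if match:
            # Captured groups from the URL become extra view arguments.
            return view(request, *match.groups())
    return '404 Not Found'
```

The whole mapping between URLs and code lives in that one list, which is why moving an application to different URLs on a different site is just an edit to the URLconf.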
I'll get to the template context in a little bit. A very little code, but it does a lot. I'll also mention for those of you that are in PHP or anything in those scary worlds, you don't have to worry about quoting database, you don't have to worry about SQL injections, because entry_id is quoted for you by the backend database, so it makes things - it just takes care of things for you. What does the template look like? I'll go back a bit - we loaded the blog_permalink template and passed it one variable, 'entry' - that last part is a dictionary for those guys who don't know Python, that's essentially a hashtable where entry is the name of the variable in the template, and e is whatever you want that variable's value to be. So the template would look like this:

<h1>{{ entry.headline }}</h1>
<p>Posted on {{ entry.pub_date }}</p>
{{ entry.body }}

You put the headline between H1's, then Posted On, and whatever the pub_date is, and the entry body. It's a very simple template language, and yet very very powerful, and it does not allow Python code execution. It's intended to be designer friendly. Part of our philosophy is that the designer isn't - doesn't need to be a programmer as well. He shouldn't have to deal with Python code, and he shouldn't have to deal with the security issues of writing pure Python code. However you can write custom tags that let you do other assorted logic with the pure power of Python if you want to. That's basically the stack - models, URLs, views - which are those functions - and templates. Any one of these pieces of the stack can be swapped out for something else because it's all very loosely coupled. What else do you get with this framework?
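Before going on, the {{ variable.attribute }} lookup at the heart of that template can itself be sketched in a few lines of plain Python (again, just the core idea, not Django's real template engine):

```python
import re

def render(template, context):
    """Replace {{ var }} or {{ var.attr }} markers with context values."""
    def lookup(match):
        obj_name, _, attr = match.group(1).partition('.')
        value = context[obj_name]
        if attr:
            # Dotted names become attribute lookups on the object.
            value = getattr(value, attr)
        return str(value)
    return re.sub(r'\{\{\s*([\w.]+)\s*\}\}', lookup, template)
```

Because the substitution is pure lookup, with no code execution, a designer can edit templates without being handed the power (or the security risks) of the full language.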
The first really really big contribution from the community after we open sourced it was internationalisation - so that any app in Django can automatically handle the language setting from browsers, so that if your browser is configured to use Polish, or you want to send out Polish things, it provides a framework for specifying translation strings. I don't know if anyone here deals with i18n, but it's very very handy. Already we have 19 translations of the Django admin, and it's really a trip to go in there for the hell of it, and change your browser language to French or Icelandic, and see the admin work in a different language. It's really really cool. And the cool thing about it is that we have Welsh! I don't think any other open source project... that's probably an exaggeration, but we have Welsh, so I'm really excited. Another thing is the cache system, because we're really really interested in performance. You can cache objects in a very granular, low level way - you can cache pages, or you can cache your entire site. I'm about to launch my first Django app at the Washington Post, and that's going to be using the entire site-wide cache, because it's like four million records, and it's a high-traffic site. So the cache system in that case was as easy as adding a single line that says "use cache=true". It doesn't actually say that, but essentially it does. There's authentication. The admin site that I showed you gives you free authentication, users and permissions, and you can have groups, and assign permissions to groups, and all that. But you can use it outside of it if you want to - you're not tied to using it - you're not tied to using the admin site if you don't want to anyway. There's anonymous sessions, handy for shopping carts, and all that stuff. There's an RSS framework that shouldn't even be legal, because it makes you - it's like this amount of code, and you have RSS feeds for everything. It's very very cool.
Then there's the concept of generic views. That blog_permalink thing I showed you, that was a view - but there are certain patterns in web development that can be abstracted at an even higher level. That was a good example - display a template rendering some object from the URL. So with Django you can step even one step back, and not have to write any Python code at all - you just, in your URLconf, you point to a generic view function, and you tell it "I'm using the blog object" and it will automatically do everything for you. The only thing you have to do is write the template. We really find ourselves, when we do Django, focusing on the user interface and the templates, because there is no code to write.

What sites are using Django?

So what sites are using this? I did a little side project - chicagocrime.org - if any of you guys live in the city, use this and become scared. It's a freely browsable database of crimes reported in Chicago, it's updated every day - you can go to a city block level, browse by date, street, all this stuff. It was slashdotted a couple of times. As a result of the first time, that's when we wrote the cache framework. But after that, it, er, it held up very well. Here's an example of AJAX with Django, the crime map, you can specify which crimes you want to look for, it automatically updates the map ... sexy. There's this Polish - the Polish version of Friendster - grono.net - it was a big Java shop, and they have converted a couple of pages on their site to use Django, and they've said that it's revolutionised things. For one, it's fun to write code again, because Python kicks ass. For two, the amount of code goes from this (indicates large quantity) to this (indicates small quantity). And the performance is better. So I don't know why you wouldn't do this. Just announced this week is a site of Greenpeace which is doing this cool new thing using Django, that I'm interested in finding out more about, myself.
So, if you like this, if you like what you're hearing here, you can put one of these guys on your site ("I wish this site were powered by Django" button/logo/image). I have one on my site because it's still running that PHP abomination. This actually was requested from the community, believe it or not. The community is really cool.

Concluding Remarks

I'm just going to close with a couple of quotes.

"I've played more with Django, and grown to love it more with every passing day. I'm desperately looking for someone to pay me to build something with Django, and if it doesn't happen soon, I'm just going to go on strike" - James Bennett

"I've spent a few hours re-dunking my head in the Django soup. That is one impressive open source project... that shows (or at least seems to show!) that the people who designed it really knew what they were doing. It feels like opening up an iPod box and admiring the packaging, design, button layout." - David Ascher, author of "Python Cookbook"

"I'm starting to rewrite the CCIW website with Django, and I'm amazed at my progress" - Luke Plant

"And I will migrate my PHP stuff to Django and such frameworks in the future. PHP's braindead decisions by its developers has annoyed me a bit too much by now" - asmodai

"Time to write a todo list webapp? 16 mins" - Bill de hOra

"ever since I laid eyes on Django, I knew one day I'd be running my website off this web framework" - alastair

"Conclusion: Django makes life easier" - Espen Grindhaug

So, I invite you to check it out, it's at djangoproject.com, and very very cool, and that's about it. So thank you very much for attending.
https://code.djangoproject.com/wiki/SnakesAndRubiesTranscript/DjangoPresentation?version=7
. Many Unix, and some Windows, users will be familiar with environment variables. These are key/value strings such as USER=andy or SHELL=/bin/bash, and they form part of the global environment provided to a process by the OS. Windows has a similar concept, although it has a few subtle differences and in this post I’m only discussing the situation in POSIX. POSIX provides various interfaces to query and set these variables. Probably the most well known of these are setenv() and getenv(), so let’s start with those. The getenv() function is pretty modest - you pass in the name of an environment variable and it returns you a pointer to the value. Simple enough, but immediately the spidey sense starts tingling. The function returns a char* instead of a const char* for one thing, but “the application shall ensure that it does not modify the string pointed to by the getenv() function”. Well, OK, perhaps they didn’t have const in the days this function was written. They also presumably hadn’t heard of thread-safety or re-entrancy, because anything that returns a static pointer pretty clearly does neither. The setenv() function is also fairly simple - you pass in a new variable name and value, and a flag indicating whether you’re happy for the assignment to overwrite any previous value. But the man page talks about this function modifying the contents of environ - oh yes, let’s talk about that first… You’ll notice neither of the functions so far has given a way to iterate through all the current environment variables that are set. It turns out that the only POSIX-supported way to do this is use the global environ variable provided by the library. This is similar to argv that’s passed into main() except that instead of an argc equivalent, the environ array is null-terminated. 
Things start to smell a little fishy when you realise that environ isn't actually declared in any header files - the application itself has to include something like this, as taken from the POSIX page on environment variables:

extern char** environ;

OK, so just like argv there's some OS-supplied storage that contains the environment. It's not const, but hey ho, neither is argv and we seem to cope fine with just not modifying that directly. Except that the key point here is that setenv() does modify environ - the man page even explicitly states that's how it works. Unlike argv, therefore, you can't just treat it as some effectively read-only constant¹ array and quietly ignore the fact that the compiler won't stop you modifying it. It gets even more crazy when you realise that, according to the man page for the exec family, it's quite valid to replace your entire environment by assigning a whole new value to environ. You read that correctly - not updating the pointers within environ, just repointing the whole thing at your own allocated memory. So then, when setenv() comes along and wants to modify this, how on earth can it do so? It has no idea how big an array you've allocated in your own code - it either has to copy the whole lot to somewhere provided by the system, or cross its fingers and hope there's enough space. And don't even get me started on the memory management Pandora's Box that is putenv()… In summary, therefore, I've decided that the only sensible course of action is to use environment variables as little as possible. If you must use them as opposed to command-line arguments, you should parse them away right at the beginning of main() and put them into other storage within your code, never worrying about the environment again. If you're writing a library… Well, good luck with that - let's hope your application doesn't mess around with the environment too badly before you want to query it. Whatever you do, don't update it!
It’s quite possible to work around all this brokenness, of course, as long as you can make some basic assumptions of sanity about your libraries. But it’s all just such a dirty little mess in the otherwise mostly sensible tidiness that POSIX has imposed on the various APIs that exist. Surely there’s got to be a more sensible way to control the behaviour of applications and libraries? For example, we could have some sort of system-wide database of key/value pairs - unlike the environment it could be lovely and clean and type-safe, and all properly namespaced too. For performance reasons we could stick it in some almost unparseable binary blob. There’s no way such a system could be abused by applications, right? It would remain clean and usable, I’m sure. Now all we need is a snappy name for it - something that indicates the way that values can be registered with it. Perhaps, The Register? No, people will confuse it with the online tech site. What about The Repository? Hm, confusing with source control. I dunno, I’ll think about it some more. Yes, I’m aware there are some use-cases for modifying argv too, but I class those as unusual cases, and they also tend to be quite system-specific (assuming you want to resize the strings in the process). ↩
https://www.andy-pearce.com/blog/posts/2015/Mar/an-unhealthy-environment/
ImageMagick is an open-source tool that you can use to create, edit, compose, or convert digital images. It supports over 200 image formats. According to its website, ImageMagick can resize, flip, mirror, rotate, distort, shear, and transform images, adjust image colors, apply various special effects, or draw text, lines, polygons, ellipses, and Bézier curves. For more information about ImageMagick, you should go to their website. Wand is a Python wrapper around ImageMagick. Wand has many similar features to Pillow and is the closest thing you could label as an alternative to Pillow. Wand is easy to install with pip:

python3 -m pip install wand

You have to have ImageMagick installed too. Depending on which operating system you are using, you may need to set up an environment variable as well. See the documentation for Wand if you have any issues getting it installed or configured. Wand can do many different image processing tasks. In the next few sections, you will see how capable it is. You will start by learning about Wand's image effects! Wand has several different image effects that are built in (the full listing is in Wand's documentation). Some of these effects are present in Pillow and some are not. For example, Pillow does not have Despeckle or Kuwahara. To see how you can use these effects, you will use a ducklings photo. You will try out Wand's edge() method. Create a new Python file and name it wand_edger.py. Then enter the following code:

# wand_edger.py
from wand.image import Image

def edge(input_image_path, output_path):
    with Image(filename=input_image_path) as img:
        img.transform_colorspace("gray")
        img.edge(radius=3)
        img.save(filename=output_path)

if __name__ == "__main__":
    edge("ducklings.jpg", "edged.jpg")

The first new item here is the import: from wand.image import Image. The Image class is your primary method of working with photos in Wand. Next, you create an edge() function and open up the ducklings photo. Then you change the image to grayscale.
Then you apply edge(), which takes in a radius argument. The radius is an aperture-like setting. You should experiment with different values for radius, as it changes the result. Now let's take a look at the special effects that Wand provides. Wand supports quite a few other effects that they have dubbed "Special Effects" (the documentation has before-and-after photos for all of them). Some of these are fun or interesting. You will try using Vignette in this section, using a photo of the author, Michael Driscoll. Create a new file named wand_vignette.py and add this code to it:

# wand_vignette.py
from wand.image import Image

def vignette(input_image_path, output_path):
    with Image(filename=input_image_path) as img:
        img.vignette(x=10, y=10)
        img.save(filename=output_path)

if __name__ == "__main__":
    vignette("author.jpg", "vignette.jpg")

In this example, you call vignette(). It takes several different arguments, but you only supply x and y. These arguments control the amount of edging that goes around the image that you are adding a vignette to. This is a fun way to make your photos look unique and classy. Give it a try with some of your photos! Now you are ready to learn how to crop with Wand. Cropping with Wand is similar to how Pillow crops. You can pass in four coordinates (left, top, right, bottom) or (left, top, width, and height). You will use the ducklings photo and find out how to crop the photo down to only the birds. Create a new file and name it wand_crop.py.
Then add the following code:

# wand_crop.py
from wand.image import Image


def crop(input_image_path, output_path, left, top, width, height):
    with Image(filename=input_image_path) as img:
        img.crop(left, top, width=width, height=height)
        img.save(filename=output_path)


if __name__ == "__main__":
    crop("ducklings.jpg", "cropped.jpg", 100, 800, 800, 800)

For this example, you supply left, top, width, and height. When you run this code, the photo will be cropped to look like this:

You can experiment with different values to see how they affect the crop.

Wand can do much more than what is demonstrated in this section. It can do most of the same things as Pillow and more. Pillow's main benefit over Wand is that Pillow is written in Python and doesn't require an external binary (i.e., ImageMagick) like Wand does. The Wand package is quite powerful. It can do most of the same things that the Pillow package can do and a few things that it can't. Of course, Pillow also has features that Wand does not. You should definitely check out both packages to see which one is a better fit for what you are doing. Both packages are great ways to edit and manipulate images with Python. Give Wand a try and use some of its many other effects to see just how powerful it is!
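One nuance worth calling out from the crop discussion: the two coordinate styles are interchangeable, since right = left + width and bottom = top + height. The tiny helpers below are hypothetical (they are not part of Wand) and just make the conversion explicit:

```python
def size_to_box(left, top, width, height):
    """Convert (left, top, width, height) into (left, top, right, bottom)."""
    return left, top, left + width, top + height


def box_to_size(left, top, right, bottom):
    """Convert (left, top, right, bottom) into (left, top, width, height)."""
    return left, top, right - left, bottom - top


# The crop used above was left=100, top=800, width=800, height=800,
# which describes the same region as the box (100, 800, 900, 1600).
print(size_to_box(100, 800, 800, 800))  # → (100, 800, 900, 1600)
```

So whichever pair of arguments you hand to crop(), you are describing the same rectangle.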
https://www.blog.pythonlibrary.org/2021/07/06/an-intro-to-image-processing-with-wand-imagemagick-and-python/
Work with Packages in Golang

Stay DRY with Go packages! Writing reusable code is how you stay DRY, and in Go, this can be done easily through packages. In this article, I will explain how you can work with Go packages. You will learn how to import a package, export from a package, set up your own Go package, and install third-party packages. I assume you have some programming experience and you are familiar with the basic syntax of Go. Let's get started! 🏃

What is a Go package?

A Go package is a directory in your project workspace that houses one or more Go source files or other nested Go packages. In Go, every source file must belong to a package. All Go source files must begin with a package declaration like below.

package <package_name>

Every function, type, and variable of a source file belongs to its declared package and can be accessed by other source files within the same package. Go source files that live in the same directory must belong to the same package. Although it is not necessary to name a package after its directory, it is a good convention that you should follow.

A simple example

Go comes with a handful of built-in packages. One of them is the fmt package, which provides versatile I/O functionality. To use this package, we can import it as follows:

import "fmt"

When you import a package, you have access to the functions exported by the package. You can use these functions using the dot operator. One such function exported by fmt is Println(), which formats and writes data to the standard output. The eagle-eyed readers may have noticed a benefit of this design pattern. Accessing an exported function via the dot operator helps to prevent naming conflicts. You can reuse function names across different packages, which helps to keep function names concise and meaningful!

Package main

In Go, there are two types of packages: an executable package and a utility package. An executable package, as the name suggests, is a special package that holds an executable program for Go to compile and run.
On the other hand, a utility package contains reusable helpers that support the program in an executable package. The fmt package of Go is an example of a utility package.

To create an executable package in Go, you need to meet two criteria:
- The name of the package must be main.
- It must contain a function called main(), which serves as the entry point of the executable program.

Let's look at an example of a simple source file main.go in the main package. To execute a main package, you can either use go build to compile and then run the executable file manually, or use go run to do it in one step.

$ go run main.go

In the example above, there is only one source file in the main package. With more complex programs, there could be more than one source file in a main package. To compile and run them, you need to point Go to the directory of the main package.

$ go run /path/to/directory/of/main_package

Multiple and nested imports

In most cases, you would want to import multiple packages. There are two ways you can do this in Go: repeating the import keyword once per package, or grouping the packages inside a single parenthesized import block. Personally, I prefer the second method because it looks much cleaner without the repeated import keywords!

Sometimes, packages are designed to be the children of a parent package. This parent-child relation is logical in cases where the child package performs a specific task of the parent package. For example, inside Go's math package, there is a nested rand package that implements pseudo-random number generators. We can import the rand package through the path "math/rand". In Go's lingo, the last element of an import path indicates the nested package that is imported into a source file.

Export from packages

If you come from the world of JavaScript, you should be familiar with the export keyword when working with JavaScript modules. In Go, we can export functionality from packages as well. However, instead of using a designated keyword like JavaScript, Go relies on letter casing!
Any functions, types, or variables with a capitalized name are exported and visible outside of a package. On the other hand, anything that does not start with a capital letter is private, i.e. not accessible outside of the package. Restricting the export of certain functionality from a package is a good design practice. It enables you to "force" other developers to use your package through specific interfaces. This minimizes the misuse of a package and unintended errors.

Create Go packages

Enough of theory! Let's try to create our own Go packages and use them with the concepts you have learned. Open up your terminal and type in the command below to create a project directory. I'll name it pkg-tutorial. You can name it whatever you want.

$ mkdir pkg-tutorial

Then, we need to create a Go module. Modules are Go's new dependency management system, introduced in version 1.11. In simple words, a Go module is a collection of Go packages managed by a go.mod file. Every Go module must have a path defined in go.mod. The path in go.mod represents the path to the module, and it is also used as the import path for the packages in the module. Do not panic if you do not understand what I just said. Go modules deserve a separate article. For now, you just need to type the following commands in your terminal.

$ cd pkg-tutorial
$ go mod init github.com/jseow5177/pkg-tutorial

github.com/jseow5177/pkg-tutorial is the module path. After you run the commands above, you should see a go.mod file created in your workspace.

Next, we will create three utility packages: numbers, texts, and greets, with greets nested in texts. The entry point of our program is in main.go, which belongs to the main package at the root of our workspace. After adding a couple of source files, the project structure will look like this:

pkg-tutorial/
├── go.mod
├── main.go
├── numbers/
│   └── math.go
└── texts/
    ├── casing.go
    └── greets/
        └── welcome.go

Once you are done, at the root of your project workspace, execute go run main.go in your terminal.
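To make the casing rule concrete, here is a runnable sketch of the idea behind casing.go. In the real project these functions would live in package texts; the file below uses package main so it can run standalone, and the function bodies are my own illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// Title starts with a capital letter. Inside a texts package it would be
// exported and callable from other packages as texts.Title.
func Title(s string) string {
	words := strings.Fields(lowerCase(s))
	for i, w := range words {
		words[i] = strings.ToUpper(w[:1]) + w[1:]
	}
	return strings.Join(words, " ")
}

// lowerCase starts with a lowercase letter, so it would stay private to
// its package -- other packages could not call texts.lowerCase.
func lowerCase(s string) string {
	return strings.ToLower(s)
}

func main() {
	fmt.Println(Title("i LOVE go")) // prints "I Love Go"
}
```

The compiler enforces the rule: from outside the package, a call to the lowercase name simply fails to compile.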
You should see the following output:

I love Go!
2 + 5 is 7
2 - 5 is -3
Hello World

Here are a few things that you should take note of:
- All import paths of packages are relative to the module path github.com/jseow5177/pkg-tutorial.
- Nested packages are created as subdirectories. You import them as usual, starting from the module path.
- fmt is Go's built-in package. Hence, you won't be importing it from our module.
- The lowerCase function in casing.go is not exported. Try calling it in main.go and see what you get!
- I followed the convention where a package name is the same as the directory name. However, this is not needed for the main package.

Pretty simple, right?

Install third-party packages

Go's built-in packages provide a wide range of functionality, but they may not be sufficient for your project. In most cases, you would require packages written by other developers in the open-source community. Fortunately, Go has made it easy for you to install third-party packages into your project. For example, gorilla/csrf is a Go package that protects your application against cross-site request forgery (CSRF) attacks. To install it, you use the go get command and provide the path to the gorilla/csrf package.

go get github.com/gorilla/csrf

If you haven't noticed, the module path above is the link to the package's GitHub repository! After you run the command, Go will fetch the package from GitHub into your local project. The go.mod file will be updated accordingly to make gorilla/csrf a project dependency. You can then import it as follows:

import "github.com/gorilla/csrf"

Final Thoughts

That is all for packages with Go! In this article, you have learned about Go packages and how you can use them to organize your code in a Go project. As the size of a project grows, Go packages can keep our code organized, modular, and reusable. Always remember to stay DRY. Thank you for reading. Peace! ✌️
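After the go get command above, the go.mod file would look something like this (the go directive and the csrf version number here are illustrative; yours will reflect your toolchain and the latest release):

```
module github.com/jseow5177/pkg-tutorial

go 1.16

require github.com/gorilla/csrf v1.7.0
```

The require directive is what records gorilla/csrf as a project dependency.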
https://jonathanseow.medium.com/working-with-packages-in-golang-5f247f49090f?source=post_internal_links---------1----------------------------
/* LRU
 *
 * Created on September 18, 2006
 *
 * Copyright (C) 2006
 */
package org.archive.util;

import java.util.LinkedHashMap;
import java.util.Map;

/**
 * A least-recently used cache. As new entries are added to the map, the
 * least-recently accessed entries are removed.
 *
 * @author pjack
 *
 * @param <K> The key type of the LRU
 * @param <V> The value type of the LRU
 */
public class LRU<K,V> extends LinkedHashMap<K,V> {

    /**
     * Generated by Eclipse.
     */
    private static final long serialVersionUID = 1032420936705267913L;

    /**
     * The maximum number of entries to store in the cache.
     */
    private int max;

    /**
     * Constructor.
     *
     * @param max the maximum number of entries to cache
     */
    public LRU(int max) {
        super(max, (float)0.75, true);
        this.max = max;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K,V> entry) {
        return size() >= max;
    }
}
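A quick, hypothetical usage sketch shows how this class behaves. Note the >= comparison in removeEldestEntry: as written, a cache constructed with max = 3 begins evicting as soon as the third entry is inserted, so it only ever retains max - 1 entries — worth knowing if you reuse the class.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Demo of the LRU idea above, with the LRU class inlined so this file
// is self-contained. The >= in removeEldestEntry means a "capacity 3"
// cache actually holds at most 2 entries.
class LruDemo {

    static class LRU<K, V> extends LinkedHashMap<K, V> {
        private final int max;

        LRU(int max) {
            super(max, 0.75f, true); // true = access order, as in the original
            this.max = max;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> entry) {
            return size() >= max;
        }
    }

    // Fill a capacity-3 cache and return it so the behavior can be inspected.
    static Map<String, String> demo() {
        LRU<String, String> cache = new LRU<>(3);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.put("c", "3"); // size reaches 3, so "a" (the eldest) is evicted
        cache.get("b");      // touch "b" so it becomes most recently used
        cache.put("d", "4"); // "c" is now the eldest and is evicted
        return cache;
    }

    public static void main(String[] args) {
        System.out.println(demo().keySet()); // [b, d]
    }
}
```

Changing the test to size() > max would give the cache its full stated capacity.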
http://kickjava.com/src/org/archive/util/LRU.java.htm
I've been lurking around here for a while, but this is my first post. I've been having trouble passing a pointer of a multidimensional array to a function. I think I got it, but I'd like to post it here to see if it's right. Here's the problem we were given:

Write a complete program and function for the following. You must use a #define for the size of the array. The program will prompt and let the user fill up a 4 x 4 array of integers. Then call a function called TWOES that will be passed a pointer to the array. Using this pointer you must traverse all memory locations in the array and count the elements that are not evenly divisible by 2. The function will return this number of elements that are not divisible by 2. Back in main, the program will print out the value that is returned from the function. Note that the prototype for the function will look like this:

int TWOES( int * );

This is what I got. It works — I tested the values passed with printf functions, and they are passing the correct values — but is this right?

Code:
#include <stdio.h>

#define ROW 4
#define COL 4

int numbers[ROW][COL]; /* Initialize the array */

int TWOES(int *); /* function prototype */

int main()
{
    int rownum, colnum;
    int count;

    for (rownum = 0; rownum < ROW; rownum++)
    {
        for (colnum = 0; colnum < COL; colnum++)
        {
            printf("Please enter an integer:");
            scanf("%d", &numbers[rownum][colnum]); /* user input into array */
        }
    }

    int *ptr;
    ptr = &numbers[0][0];

    count = TWOES(&ptr[0]); /* call the function */

    printf("%d of your numbers are NOT divisible by 2\n", count); /* print value obtained from function */

    return 0;
}

int TWOES(int *tempptr) /* function */
{
    int r, c;
    int count = 0;

    for (r = 0; r < ROW; r++)
    {
        for (c = 0; c < COL; c++)
        {
            if (*tempptr % 2 != 0) /* running through the array and testing values */
            {
                count++;
            }
            tempptr++; /* Increasing by 4 bytes in memory through loop */
        }
    }

    return (count); /* returning calculation to main() */
}
http://cboard.cprogramming.com/c-programming/93495-multidimensional-arrays-functions.html
Hello everybody

I use scrot to take screenshots. Usually I need them directly as an image file (.png or so). Sometimes I like them directly in my clipboard. So: is it also possible to take a screenshot directly into my local clipboard? I like doing something like this:

$ scrot -s foo.png
$ xclip foo.png

Then I'd like to paste the image from the clipboard into Gimp/LibreOffice/${YourGuiSoftware}. The only thing I saw was a Python script ( … n-in-linux). Is there no software in the Arch Linux repositories to do this task? Thanks for your answers.

Mindfuckup
Offline

Not exactly what you want, but editing ~/.bashrc and adding the following alias might (not sure) work:

alias scrotclip='scrot -s ~/foo.png && xclip ~/foo.png && rm ~/foo.png'

On the next login, your user should be able to run scrotclip, which should run the posted commands in order.

Offline

<necro> import png:- | xclip -selection c -t image/png </necro>

Last edited by trollie (2015-08-05 10:43:47)

Offline

Trollie,

First, welcome to Arch Linux. Although your post is relevant, I note that you know this is an old thread. I shall, therefore, use this as an opportunity to bring this thread to a
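Combining the replies above, a working version of the suggested ~/.bashrc alias would pass the PNG to xclip as image data (-t image/png) rather than as a filename string; the alias name and temp path here are arbitrary choices of mine:

```shell
# ~/.bashrc -- select a region with scrot, then place the PNG on the
# X clipboard as image/png data so GUI apps can paste the actual image.
alias scrotclip='scrot -s /tmp/scrotclip.png && \
    xclip -selection clipboard -t image/png -i /tmp/scrotclip.png && \
    rm /tmp/scrotclip.png'
```

Without the -t image/png flag, most applications would paste the file's bytes or name as text instead of the picture itself.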
https://bbs.archlinux.org/viewtopic.php?pid=1550758
Ok, so while reviewing this code, which is meant to convert yards and feet to inches, I noticed on lines 17-19 and 32 there are "string data" (if that makes sense) as parameters in each method. What do they denote, what do they serve, and what is their purpose? MSDN usually has convoluted explanations on this, so I guess I'm resorting to you faithful guys again!

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace Lab1
{
    class Lab1
    {
        static void Main()
        {
            int inches, yards, feet;

            // this prepares to write code to give the instructions to the user
            DisplayInstructions();

            inches = GetInches("inches"); // why is a string (parameter) inside a method?
            feet = GetInches("feet");
            yards = GetInches("yards");

            DisplayResults(inches, yards, feet);
        }

        // this method will "DisplayInstructions" to the user
        public static void DisplayInstructions()
        {
            Console.WriteLine("This applications will allow the user to input data in yards, feet and inches " +
                "& then convert that data into inches.\n");
            Console.WriteLine("Press any key to continue...");
            Console.ReadKey();
            Console.Clear();
        }

        // This method will acquire the value in inches according to user input
        public static int GetInches(string whichOne)
        {
            Console.Write("Please enter in the {0}: ", whichOne);
            return int.Parse(Console.ReadLine());
        }

        // this method will convert a value given in yards to inches
        public static int ConvertYards(int yards)
        {
            return yards * 36;
        }

        // this method will convert feet to inches
        public static int ConvertFeet(int feet)
        {
            return feet * 12;
        }

        // this method will display the results of converting inches, yards, and feet to inches
        public static void DisplayResults(int inches, int yards, int feet)
        {
            Console.Clear();
            Console.WriteLine("Inches: {0:n1}" +
                "\nFeet: {1:n1}" +
                "\nYards: {2:n1}" +
                "\nFeet to Inches: {3:n1}" +
                "\nYards to Inches: {4:n1}",
                inches, feet, yards, ConvertFeet(feet), ConvertYards(yards));
            Console.ReadKey();
        }
    }
}

Many thanks!
https://www.daniweb.com/programming/software-development/threads/384915/why-is-there-a-string-inside-paranthesis-in-a-method
NAME
    sys/shm.h - XSI shared memory facility

SYNOPSIS
    #include <sys/shm.h>

DESCRIPTION
    The <sys/shm.h> header shall define the following symbolic constants:

    SHM_RDONLY    Attach read-only (else read-write).
    SHM_RND       Round attach address to SHMLBA.

    The <sys/shm.h> header shall define the following symbolic value:

    SHMLBA        Segment low boundary address multiple.

    The following data types shall be defined through typedef:

    shmatt_t      Unsigned integer used for the number of current attaches
                  that must be able to store values at least as large as a
                  type unsigned short.

    The shmid_ds structure shall be defined. In addition, all of the
    symbols from <sys/ipc.h> shall be defined when this header is included.

    The following sections are informative.

APPLICATION USAGE
    None.

RATIONALE
    None.

FUTURE DIRECTIONS
    None.

SEE ALSO
    <sys/ipc.h>, <sys/types.h>, the System Interfaces volume of
    IEEE Std 1003.1-2001, shmat(), shmctl()
http://manpages.ubuntu.com/manpages/hardy/man7/sys_shm.h.7posix.html
Sorting mailing list jobs

Superadmin: Login to CMS. Create admin account. Manage admin credential. Admin: Login to ...Payment gateway. Voucher code and promo. Share product link to social media (fb, twitter, etc). Search result indexing. Product return (refund or send back). Filter and sorting for product search result. Blog or press. Contact form. Help page.

In need of laravel php developer to assist in sorting out paypal api integration. immediate hire

I need to build a flowershop website on Shoppify with the following functionalities: - Implement a flowershop theme (ex. [login to view URL]) - Customizable calendar on the frontend for user to choose delivery date of product. - Payment processing integrated (US) and also PayPal - We also need to build a custom Order Manager System as a simple website that can: 1. Automatically get the orde...

I need you to develop some software for me. I would like this software to be developed for Mac. Algorithm kind of may be Sorting techniques and implement them kind of using open Cl some other different techniques etc this is kind of research thing

I have a clothes shop on WooCommerce, and I can't seem to be able to setup an automatic mailing system upon orders. To simplify, whenever someone is making an order, I would like him to get an email that the order has been received with an invoice. Whenever he pays, I want him to get an email with payment of confirmation.

Data base with ability to lodge and retrieve submitted artifacts and responses online. Will need: website built, data base built, sorting out hosting in my cloud. pm for details if you have PROVEN WORLD CLASS experience only. budget tba.

I have 1400 .csv files that I need data extracted from. All the data that I need are numbers located in the column "I". I need them to be sorted into excel files with 500k records in each of them.
sorting and ...add on new features shown as below for the version of 1.1 Job requirement: Base on the first version, add-on features: WhereToGo main page (Table: wheretogo): 0) Items sorting by distance 1) Selected item highlight with color 2) The selected item will show 3 more pictures, slide up from map bottom area, close to slide down and hide the box. 3) ...friendly per page -built in chat linked to android phone -google analytics -web landing pages that are lead generating which includes sign up reports -include multi layered sorting of info..search (map, by area, year of occupancy, builder) -please look at..... -include all historical all old condo info provided by client to be uploaded *** We need a mailing list (in two parts) for Commercial Nuclear. Whereas we are brokers of a nuclear safety technology and engineering firm, we seek to make contact with nuclear manufacturers and nuclear suppliers on a worldwide basis. We need senior-level contacts of fabricators, suppliers, designers, engineering firms - especially outside of the Changing existing widget to the one attached to this project Looking for someone to every two weeks engage with my mailing list of customers - included segmenting the mailing list and making better returns. ...overt and covert products. We're quickly growing and need to rapidly expand our marketing efforts across all channels. We're looking for someone to create a Spreadsheet of mailing addresses and potentially emails for law enforcement in a particular region of the country. If it goes well, this job has the potential to grow to include the entire U.S. Please refer to the file attached for a simple explanation and sample of the work. will &... Hi I need someone ASAP to start now to scrape and put the identifying info from the site on a excel spreadsheet. As many Name/Address etc..as you can must be quick. This trial on the site we will give you access to ends tomorrow. The goal is no less than 5000. 
I have a mysql table containing version, views and sales. I need significance be calculated for A/B/N Testings, so for MORE than only 2 versions. So it could be A/B/C/D, etc. So preferably I would like to have one function, that returns significance and the probability. And second I would prefer to have another function to create a nicely looking table colored in green for the winning versio... I have the attached grid-container that needs the ability to be sorted with a button, using either [login to view URL] or [login to view URL] the sortable values will need to be on 'name' and 'Current Value:' The values that are all static such as border style, font size, margin, line-height should all be moved out of the grid so they are not repeated in every single 'g... &... Errors: 1- SobiPro: sorting function for 2 table columns, only sort by name asc/desc 2- iCagenda: List all function and Print the complete list of events (maybe via [login to view URL]) If you have questions pls. don't hesitate to ask, i'll provide all the infos u need. !!! IMPORTANT !!! Please don't give a quote if you don't have enough capac I need a few hours of work helping me catch up with paper work and sorting through tax stuff. BTW, this is an acupuncture clinic, so if you want some "free" acupuncture, that can be included in our deal. Actually, i am having an issue in sorting with the pagination, if someone is expert in python Django so mostly it would take half an hour to fix this. Please write "I CAN Fix Sorting" in the proposal if you are really confident that you can do it. Thanks ...What Is XSL? 
Overview of XSL Transformations XSL Templates Processing the Child elements Node Set Functions Operators Number Functions String Functions Numbering and Sorting Defining Constants with xsl:variable Named Templates Making Choices Merging Multiple Style Sheets Determining the Output Format Altering Document Structure JSTL 1 ...we don't know what will be the best approach or what objections will come. So I like keeping it free flowing and use what what we need. I need some better formatting and sorting of this so that the client has a clean one page document he can feel confident with in presenting to his management team. I usually have 6-7 pages of information. I need Bulldogtribe es un nuevo proyecto de Blog y Tienda online dedicado interamente a la raza de perros Bulldog. Ofrecemos informaciones y consejos sobre esta raza y además en nuestra tienda online tenemos productos propios y de otras marcas hechos o seleccionados para el Bulldog. Se trata de un proyecto de nicho basado en productos naturales y/o 100% handmade con calidad superior. Todo produc Need some help with sorting DNS MX records... you need to have experience.. And experience with hostgator! looking to set my ubuntu server mailing correctly .., you would just need to create a mailing version I am a new real estate investor. I need help creating, managing, and scheduling my marketing mailers. Currently I would like help sorting through a list I purchased from the county. Then making a spreadsheet of the target properties. It is not terribly difficult work. But I find it tedious. My focus is being taken away from what I should be doing. I Need to create custom pages ...colors of the picture in a list/database (only the values of the pictures need to be stored) 3. store the most left and right point and calculate the difference . store this also in the database in field "measurement". 3. compare the stored main 5 colors and measurement of the picture with earlier stored "pictures" in the list/database 5. 
If colors We need to get the mailing addresses of the top media influencers in the lifestyle, health and nutrition space for a mailing campaign. You will be provided with emails with social media posts which need sorting Configure my ubuntu 16.04 VPS for Mass mailing. You can decide the configurations. I will be only available online after 7pm to 4 am
https://www.freelancer.com/job-search/sorting-mailing-list/4/
@laribee on Twitter

I was running through a group talk I do at the recent Philly CodeCamp (which was a huge success by the way, special thanks to Brian Donahue for doing the heavy lifting in organizing an ALT.NET track) and we were ping pong pairing on the well known bank example. The first story we covered looked like this:

Deposit Funds

As an account holder, I want to deposit funds into my account, so that I can save funds for a rainy day.

Criteria

When depositing funds into an account, the balance should be incremented by the amount deposited.

So someone or another mentioned the whole "simplest thing possible" ethos of pair programming and Agile at large. I have to say I agree, but to a point. For example, this specification would pass (using the "specs" base class I recently published on Google Code):

using NUnit.Framework;
using XEVA.Framework.Specs;

namespace Banking.Model
{
   [TestFixture]
   public class When_depositing_funds_into_an_account : Spec
   {
      private Account _account;

      protected override void Before_each_spec()
      {
         _account = new Account(0M);
      }

      [Test]
      public void Increase_the_account_balance_by_the_amount_deposited()
      {
         _account.Deposit(20M);
         Assert.AreEqual(20M, _account.Balance);
      }
   }

   public class Account
   {
      private decimal _balance;

      public Account(decimal balance)
      {
         _balance = balance;
      }

      public decimal Balance
      {
         get { return _balance; }
      }

      public void Deposit(decimal amount)
      {
         _balance = amount;
      }
   }
}

The problem with this is that the spec (remember: not test) oversimplifies things. It doesn't express enough value in and of itself. It's a very test-driven approach as opposed to a behavior-driven approach. When possible, I think BDD specifications should resemble the acceptance criteria attached to the story over leaving a breadcrumb trail of "the dead simplest possible thing" design.
Instead of writing two specs, I'd choose one that illustrates that two deposits to an account are additive:

_account.Deposit(20M);
_account.Deposit(10M);
Assert.AreEqual(30M, _account.Balance);

The implementation then becomes:

_balance += amount;

This shouldn't come as a revelation to anyone, but I think it's extremely important that we represent how we'd cover these problems in the real world. To me, a specification is a whole concept in and of itself. Even though they are extremely granular, specifications express how the software implements business value. Leaving a wake of half-implemented tests in order to adhere to some kind of arbitrary "single expectation/assertion" rule is far, far less important than authoring meaningful and conceptually integral specifications.

Funny how a lot of people misinterpret the simplest thing for the stupidest thing. Of course the paradigm is not 'a single assert' but test one thing. Kent Beck has a number of techniques for getting to the green bar. 'Fake it 'til you make it' is the approach a lot of people think of when they reference this, the idea that you get the test passing by doing the dumbest thing, and then, and here is the key: implement it sensibly in the refactor step. I've never really warmed to the approach, but Kent does describe it as 'first gear', a warm up when you begin TDD. I've always quite liked his Triangulation technique, where you build up a body of tests, each one testing another feature. So here we might have two tests. The first would make one deposit, the second would make two deposits, etc. But once the engine is running you are allowed to shift up into third: Obvious Implementation, where you know what the valid implementation should be and you code it straight away rather than waiting for a triangulating test or refactoring step. Kent's advice is to start in low gear and then move up to high gear, but switch down again if you find your tests are not helping your design enough.
The trouble with most examples is that they are so obvious that we can never see the point of these low gears, but they are useful when we are hill climbing through areas where we are uncertain of our design. But a lot of folks quote that 'do the simplest thing' mantra at you without really getting it.

Pingback from » Daily Bits - January 15, 2008 Alvin Ashcraft's Daily Geek Bits: Daily links plus random ramblings about development, gadgets and raising rugrats.

Just curious, do you do both BDD specs and TDD tests? BDD specs matching acceptance criteria only doesn't really lead to the full design and doesn't give the coverage traditional TDD might give.

The problem I have run into is that sometimes a desired behavior cannot be expressed in a single clause. The bank example can be misleading because of its simplicity. Have you encountered a spec that was difficult to express because it was "multi dimensional"? Next time I run into one I'll post about it...

@Jimmy - All specs all the time. I would make the argument that not all specs have the luxury of an acceptance criteria. You might drive out specs in the course of fulfilling a story. What changes between TDD and BDD is that a) we're using the specification format and b) sometimes the intended reader of the specification is the user (model) and sometimes it's the developer (cross cutting components). Generally speaking though you will end up with more executable specifications than scenarios or acceptance criteria. Back to "BDD is TDD done right" and with a big change in language (spec over test).
We'd break that up into two specifications in the model layer (at least) but we'd have a number of specifications in, say, our MVC front end. Specifications, to me, really mean "class specifications." It is only a side-effect (a happy one) that in certain layers of our applications specifications are business facing (entities, model). The more important use of specifications is that we have readable documentation of our code for various audiences: developer, BA, tester, domain expert... I'd love to see an example; it's very possible I'm not understanding your problem. I wrote that code! The funny thing about this is that, while I do believe that keeping it as simple as possible for people new to TDD/BDD, my "bug" was a total accident that I only realized a couple minutes after I wrote it, and then (jokingly) brushed it off as being the simplest thing that worked :) I will say that I do keep my specs pretty simple. I had a hard time really grokking TDD because of my tendency to think too far ahead, so I put restraints on myself to make sure I don't move too fast and miss something. I do strive for one expectation/assert per spec, but it's more like testing one interaction, or one *type* of state change. So if my method calls a service, acts on the results, and updates a view, I'd have at least 3 specs to cover those 3 actions (and use SetupResult.For() for anything I wasn't explicity specifying in a spec) I am really due for some blog posts. Not to pick on you guys; I've done the same thing in demo many times. I think I'll offer this opinion going forward though. Well, I think it's not really a problem of oversimplifying, but rather a bug in the code AND a bug in the test itself. The test says "increase the account balance". So, to test that, you have to be able get the account balance before the deposit to see if it's really being added. This is telling me that your set-up assumption of an initial 0 balance is not
http://codebetter.com/blogs/david_laribee/archive/2008/01/14/avoid-oversimplification-in-bdd.aspx
File::Util::Manual - File::Util Reference version 4.132140 This manual is the complete reference to all available public methods for use in File::Util. It also touches on a few other topics as set forth below. For a "nutshell"-type reference full of actual small example code snippets, take a look at File::Util::Manual::Examples. For examples of full programs using File::Util, take a look at the File::Util::Cookbook. Now we'll start out with some brief notes about what File::Util is (and isn't), then we'll talk about the syntax used in File::Util. After that we discuss custom error handling and diagnostics in File::Util. Finally, the rest of this document will cover File::Util's object methods, one by one, with brief usage examples. File::Util is a "Pure Perl" library that provides you with several easy-to-use tools to wrangle files and directories. It has higher-order methods (that's fancy talk for saying that you can feed subroutine references to some of File::Util's object methods and they will be treated like "callbacks"). File::Util is mainly Object-Oriented Perl, but strives to be gentle and accommodating to those who do not know about or who do not like "OO" interfaces. As such, many of the object methods available in File::Util can also be imported into your namespace and used like regular subroutines to make short work of simple tasks. For more advanced tasks and features, you will need to use File::Util's object-oriented interface. Don't worry, it's easy, and there are plenty of examples here in the documentation to get you off to a great and productive start. If you run into trouble, help is available. File::Util tries its best to adhere to these guiding principles: Make hard things easier and safer to do while avoiding common mistakes associated with file handling in Perl. Code using File::Util will automatically be abiding by best practices with regard to Perl IO.
File::Util makes the right decisions for you with regard to all the little details involved in the vast majority of file-related tasks. File locking is automatically performed for you! File handles are always lexically scoped. Safe reads and writes are performed with hard limits on the amount of RAM you are allowed to consume in your process per file read. (You can adjust the limits.) We make sure that File::Util is going to work on your computer or virtual machine. If you run Windows, Mac, Linux, BSD, some flavor of Unix, etc... File::Util should work right out of the box. There are currently no platforms where Perl runs that we do not support. If Perl can run on it, File::Util can run on it. If you want unicode support, however, you need to at least be running Perl 5.8 or better. File::Util has been around for a long time, and so has Perl. We'd like to think that this is because they are good things! This means there is a lot of backward-compatibility to account for, even within File::Util itself. In the last several years, there has never been a release of File::Util that intentionally broke code running a previous version. We are unaware of that even happening. File::Util is written to support both old and new features, syntaxes, and interfaces with full backward-compatibility. If requested, File::Util outputs extremely detailed error messages when something goes wrong in a File::Util operation. The diagnostic error messages not only provide information about what went wrong, but also hints on how to fix the problem. These error messages can easily be turned on and off. See DIAGNOSTICS for the details. File::Util uses no XS or C underpinnings that require you to have a compiler or make utility on your system in order to use it. Simply follow standard installation procedures (INSTALLATION) and you're done. No compiling required. 
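The points above can be sketched with a minimal first session. This is a hedged illustration rather than part of the manual proper: the filename is hypothetical, and it assumes File::Util is installed from CPAN.

```perl
use strict;
use warnings;
use File::Util;

# Construct an object; onfail => 'warn' makes failures warn instead of die
my $ftl = File::Util->new( { onfail => 'warn' } );

# Write a file -- locking and lexically scoped handles are automatic
$ftl->write_file( 'notes.txt' => "first line\n" );

# Append a second line using the options hashref
$ftl->write_file( 'notes.txt' => "second line\n", { mode => 'append' } );

# Slurp the file back in (reads are guarded by RAM limits you can adjust)
print $ftl->load_file( 'notes.txt' );
```

Everything shown here (write_file, load_file, the onfail and mode options) is covered in detail in the sections that follow.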
File::Util offers significant performance increases over other modules for most directory-walking and searching, whether doing so in a single directory or in many directories recursively. (See also the benchmarking and profiling scripts included in the performance subdirectory as part of this distribution)* However File::Util is NOT a single-purpose file-finding/searching utility like File::Find::Rule which offers a handful of extra built-in search features that File::Util does not give you out of the box, such as searching for files by owner/group or size. It is possible to accomplish the same things by taking advantage of File::Util's callbacks if you want to, but this isn't the "one thing" File::Util was built to do. *Sometimes it doesn't matter how fast you can search through a directory 1000 times. Performance alone isn't the best criteria for choosing a module. In the past, File::Util relied on an older method invocation syntax that was not robust enough to support the newer features that have been added since version 4.0. In addition to making new features possible, the new syntax is more in keeping with what the Perl community has come to expect from its favorite modules, like Moose and DBIx::Class. # this legacy syntax looks clunky and kind of smells like shell script $f->list_dir( '/some/dir', '--recurse', '--as-ref', '--pattern=[^\d]' ); # This syntax is much more robust, and supports new features $f->list_dir( '/some/dir' => { files_match => { or => [ qr/bender$/, qr/^flexo/ ] }, parent_matches => { and => [ qr/^Planet/, qr/Express$/ ] }, callback => \&deliver_interstellar_shipment, files_only => 1, recurse => 1, as_ref => 1, } ) If you already have code that uses the old syntax, DON'T WORRY -- it's still fully supported behind the scenes. However, for new code that takes advantage of new features like higher order functions (callbacks), or advanced matching for directory listings, you'll need to use the syntax as set forth in this document. 
The old syntax isn't covered here, because you shouldn't use it anymore. As shown in the code example above, the new syntax uses hash references to specify options for calls to File::Util methods. This documentation refers to these as the "options hashref". The code examples below illustrate what they are and how they are used. Advanced Perl programmers will recognize these right away. NOTE: "hashref" is short for "hash reference". Hash references use curly brackets and look like this:

my $hashref = { name => 'Larry', language => 'Perl', pet => 'Velociraptor' };

File::Util uses these hash references as argument modifiers that allow you to enable or disable certain features or behaviors, so you get the output you want, like this:

my $result = $ftl->some_method_call( arg1, arg2, { options hashref } );

A couple of real examples would look like this:

$ftl->write_file( '/some/file.txt', 'Hello World!', { mode => 'append' } );

$ftl->list_dir( '/home/dangerian' => { recurse => 1, files_only => 1 } );

Managing potential errors is a big part of Perl IO. File::Util gives you several options. In fact, every single call to a File::Util method which accepts an "options hashref" can also include an error handling directive. File::Util has some pre-defined error handling behaviors that you can choose from, or you can supply your own error handler routine. This is accomplished via the onfail option. As an added convenience, when you use this option with the File::Util constructor method, it sets the default error handling policy for all failures; in other words, you can set up one error handler for everything and never have to worry about it after that.

# Set every error to cause a warning instead of dying by default
my $ftl = File::Util->new( { onfail => 'warn' } );

$ftl->write_file( 'C:\\' => 'woof!'
); # now this call will warn and not die The predefined onfail behaviors and their syntaxes are covered below. die This is what File::Util already does: it calls CORE::die() with an error message when it encounters a fatal error, and your program terminates. Example: my $ftl = File::Util->new( ... { onfail => 'die' } ); zero When you use the predefined zero behavior as the onfail handler, File::Util will return a zero value (the integer 0) if it encounters a fatal error, instead of dying. File::Util won't warn about the error or abort execution. You will just get a zero back instead of what you would have gotten otherwise, and execution will continue as if no error happened. Example: my $content = File::Util->load_file( ... { onfail => 'zero' } ); undefined When you use the predefined undefined behavior as the onfail handler, if File::Util runs into a fatal error it will return undef. Execution will not be aborted, and no warnings will be issued. A value of undef will just get sent back to the caller instead of what you would have gotten otherwise. Execution will then continue on as if no error happened. Note: This option usually makes more practical sense than onfail => 'zero' Example: my $handle = File::Util->open_handle( ... { onfail => 'undefined' } ); warn When you use the predefined warn behavior as the onfail handler, File::Util will return undef if it encounters a fatal error, instead of dying. Then File::Util will emit a warning with details about the error, but will not abort execution. You will just get a warning message sent to STDERR and undef gets sent back to the caller instead of what you would have gotten otherwise. Other than the warning, execution will continue as if no error ever happened. Example: my $write_ok = File::Util->write_file( ... 
{ onfail => 'warn' } );

message

When you use the predefined message behavior as the onfail handler, if File::Util runs into a fatal error it will return an error message in the form of a string containing details about the problem. Execution will not be aborted, and no warnings will be issued. You will just get an error message sent back to the caller instead of what you would have gotten otherwise. Execution will then continue on as if no error happened. Example:

my @files = File::Util->list_dir( ... { onfail => 'message' } );

subroutine reference

If you supply a code reference to the onfail option in a File::Util method call, it will execute that code if it encounters a fatal error. You must supply a true code reference, as shown in the examples below, either to a named or anonymous subroutine. The subroutine you specify will receive two arguments as its input in @_. The first will be the text of the error message, and the second will be a stack trace in text format. You can send them to a logger, to your sysadmin in an email alert, or whatever you like -- because it is *your* error handler. WARNING! If you do not call die or exit at the end of your error handler, File::Util will NOT exit, but continue to execute. When you opt to use this feature, you are fully responsible for your process' error handling and post-error execution. Examples using the constructor:

# step 1) define your custom error handler
sub politician_error_handler
{
   my ( $err, $stack ) = @_;

   # do stuff like ...
   $logger->debug( $stack );

   die 'We neither confirm nor deny that an IO error has happened.';
}

# step 2) apply your error handler
my $ftl = File::Util->new( { onfail => \&politician_error_handler } );

-OR-

# Define and apply your error handler in one step:
my $ftl = File::Util->new( { onfail => sub { my ( $err, $stack ) = @_; # do stuff ...
} } ); Examples in individual method calls: $ftl->write_file( 'greedo' => 'try bargain' => { onfail => \&shoot_first } ); my $file_handle = $ftl->open_handle( '/this/might/not/work' => { onfail => sub { warn "Couldn't open first choice, trying a backup plan..."; return $ftl->open_handle( '/this/one/should/work' ); } } ); When things go wrong, sometimes it's nice to get as much information as possible about the error. In File::Util, you incur no performance penalties by enabling more verbose error messages. In fact, you're encouraged to do so. You can globally enable diagnostic messages (for every File::Util object you create), or on a per-object basis, or even on a per-call basis when you just want to diagnose a problem with a single method invocation. Here's how: use File::Util qw( :diag ); my $ftl = File::Util->new( diag => 1 ); $ftl->diagnostic( 1 ); # turn diagnostic mode on # ... do some troubleshooting ... $ftl->diagnostic( 0 ); # turn diagnostic mode off $ftl->load_file( 'abc.txt' => { diag => 1 } ); Note: In the past, some of the methods listed would state that they were autoloaded methods. This mechanism has been changed in favor of more modern practices, in step with the evolution of computing over the last decade since File::Util was first released. Methods listed in alphabetical order. atomize_path atomize_path( [/file/path or file_name] ) This method is used internally by File::Util to handle absolute filenames on different platforms in a portable manner, but it can be a useful tool for you as well. This method takes a single string as its argument. The string is expected to be a fully-qualified (absolute) or relative path to a file or directory. It carefully splits the string into three parts: The root of the path, the rest of the path, and the final file/directory named in the string. Depending on the input, the root and/or path may be empty strings. 
The following table can serve as a guide in what to expect from atomize_path()

+-------------------------+----------+--------------------+----------------+
|          INPUT          |   ROOT   |   PATH-COMPONENT   |    FILE/DIR    |
+-------------------------+----------+--------------------+----------------+
| C:\foo\bar\baz.txt      | C:\      | foo\bar            | baz.txt        |
| /foo/bar/baz.txt        | /        | foo/bar            | baz.txt        |
| ./a/b/c/d/e/f/g.txt     |          | ./a/b/c/d/e/f      | g.txt          |
| :a:b:c:d:e:f:g.txt      | :        | a:b:c:d:e:f        | g.txt          |
| ../wibble/wombat.ini    |          | ../wibble          | wombat.ini     |
| ..\woot\noot.doc        |          | ..\woot            | noot.doc       |
| ../../zoot.conf         |          | ../..              | zoot.conf      |
| /root                   | /        |                    | root           |
| /etc/sudoers            | /        | etc                | sudoers        |
| /                       | /        |                    |                |
| D:\                     | D:\      |                    |                |
| D:\autorun.inf          | D:\      |                    | autorun.inf    |
+-------------------------+----------+--------------------+----------------+

bitmask

bitmask( [file name] ) Gets the bitmask of the named file, provided the file exists. If the file exists and is accessible, the bitmask of the named file is returned in four digit octal notation e.g.- 0644. Otherwise, returns undef if the file does not exist or could not be accessed.

can_flock

can_flock Returns 1 if the current system claims to support flock() and if the Perl process can successfully call it. (see "flock" in perlfunc.) Unless both of these conditions are true, a zero value (0) is returned. This is a constant method. It accepts no arguments and will always return the same value for the system on which it is executed. Note: Perl tries to support or emulate flock whenever it can via available system calls, namely flock; lockf; or with fcntl.

created

created( [file name] ) Returns the time of creation for the named file in non-leap seconds since whatever your system considers to be the epoch. Suitable for feeding to Perl's built-in functions "gmtime" and "localtime". (see "time" in perlfunc.)
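As a small, hedged sketch of atomize_path() in use (the path is hypothetical), the three components come back as a list, matching the table above:

```perl
use strict;
use warnings;
use File::Util;

my $ftl = File::Util->new();

# Split an absolute path into root, path component, and file name
my ( $root, $path, $file ) = $ftl->atomize_path( '/foo/bar/baz.txt' );

print "root: $root\n";   # root: /
print "path: $path\n";   # path: foo/bar
print "file: $file\n";   # file: baz.txt
```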
diagnostic

diagnostic( [true / false value] ) When called without any arguments, this method returns a true or false value to reflect the current setting for the use of diagnostic (verbose) error messages when a File::Util object encounters errors. When called with a true or false value as its single argument, this tells the File::Util object whether or not it should enable diagnostic error messages in the event of a failure. A true value indicates that the File::Util object will enable diagnostic mode, and a false value indicates that it will not. The default setting for diagnostic() is 0 (NOT enabled.) see also DIAGNOSTICS

ebcdic

ebcdic Returns 1 if the machine on which the code is running uses EBCDIC, or returns 0 if not. (see perlebcdic.) This is a constant method. It accepts no arguments and will always return the same value for the system on which it is executed.

escape_filename.

existent

existent( [file name] ) Returns 1 if the named file (or directory) exists. Otherwise a value of undef is returned. This works the same as Perl's built-in -e file test operator, (see "-X" in perlfunc), it's just easier for some people to remember.

file_type

file_type( [file name] ) Returns a list of keywords corresponding to each of Perl's built-in file tests (those specific to file types) for which the named file returns true. (see "-X" in perlfunc.) The keywords and their definitions appear below; the order of keywords returned is the same as the order in which they are listed here:

PLAIN      File is a plain file.
TEXT       File is a text file.
BINARY     File is a binary file.
DIRECTORY  File is a directory.
SYMLINK    File is a symbolic link.
PIPE       File is a named pipe (FIFO).
SOCKET     File is a socket.
BLOCK      File is a block special file.
CHARACTER  File is a character special file.
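A hedged sketch of file_type() in use (the paths are hypothetical, and the keywords returned depend on your system):

```perl
use strict;
use warnings;
use File::Util;

my $ftl = File::Util->new();

# For an ordinary text file, the returned list of keywords would
# typically contain PLAIN and TEXT
my @types = $ftl->file_type( '/etc/hosts' );
print "@types\n";

# For a directory, the list would typically contain DIRECTORY
print join( ' ', $ftl->file_type( '/etc' ) ), "\n";
```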
flock_rules flock_rules( [keyword list] ) Sets I/O race condition policy, or tells File::Util how it should handle race conditions created when a file can't be locked because it is already locked somewhere else (usually by another process). An empty call to this method returns a list of keywords representing the rules that are currently in effect for the object. Otherwise, a call should include a list containing your chosen directive keywords in order of precedence. The rules will be applied in cascading order when a File::Util object attempts to lock a file, so if the actions specified by the first rule don't result in success, the second rule is applied, and so on. This setting can be dynamically changed at any point in your code by calling this method as desired. The default behavior of File::Util is to try and obtain an exclusive lock on all file opens (if supported by your operating system). If a lock cannot be obtained, File::Util will throw an exception and exit. If you want to change that behavior, this method is the way to do it. One common situation is for someone to want their code to first try for a lock, and failing that, to wait until one can be obtained. If that's what you want, see the examples after the keywords list below. Recognized keywords: NOBLOCKEX tries to get an exclusive lock on the file without blocking (waiting) NOBLOCKSH tries to get a shared lock on the file without blocking BLOCKEX waits to get an exclusive lock BLOCKSH waits to get a shared lock FAIL dies with stack trace WARN warn()s about the error and returns undef IGNORE ignores the failure to get an exclusive lock UNDEF returns undef ZERO returns 0 Examples: flock_rules( qw( NOBLOCKEX FAIL ) ); This is the default policy. When in effect, the File::Util object will first attempt to get a non-blocking exclusive lock on the file. If that attempt fails the File::Util object will call die() with an error. 
flock_rules( qw( NOBLOCKEX BLOCKEX FAIL ) ); The File::Util object will first attempt to get a non-blocking exclusive lock on the file. If that attempt fails it falls back to the second policy rule "BLOCKEX" and tries again to get an exclusive lock on the file, but this time by blocking (waiting for its turn). If that second attempt fails, the File::Util object will fail with an error.

flock_rules( qw( BLOCKEX IGNORE ) ); The File::Util object will attempt to get a blocking exclusive lock on the file. If that attempt fails it will ignore the error, and go on to open the file anyway; no failures or warnings will occur.

is_bin

is_bin( [file name] ) Returns 1 if the named file exists and appears to be a binary file. Otherwise a value of undef is returned, indicating that the named file either does not exist or is of another file type. This works the same as Perl's built-in -B file test operator, (see "-X" in perlfunc), it's just easier for some people to remember.

is_readable

is_readable( [file name] ) Returns 1 if the named file (or directory) is readable by your program according to the applied permissions of the file system on which the file resides. Otherwise a value of undef is returned. This works the same as Perl's built-in -r file test operator, (see "-X" in perlfunc), it's just easier for some people to remember.

is_writable

is_writable( [file name] ) Returns 1 if the named file (or directory) is writable by your program according to the applied permissions of the file system on which the file resides. Otherwise a value of undef is returned. This works the same as Perl's built-in -w file test operator, (see "-X" in perlfunc), it's just easier for some people to remember.

last_access

last_access( [file name] ) Returns the last accessed time for the named file in non-leap seconds since whatever your system considers to be the epoch. Suitable for feeding to Perl's built-in functions "gmtime" and "localtime". (see "time" in perlfunc.)
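Tying the file-test and timestamp methods above together in one hedged sketch (the filename is hypothetical):

```perl
use strict;
use warnings;
use File::Util;

my $ftl  = File::Util->new();
my $file = 'report.log';   # hypothetical file

if ( $ftl->is_readable( $file ) ) {

   # last_access() and last_modified() return epoch seconds,
   # so they feed straight into localtime()
   print 'last read:    ', scalar localtime( $ftl->last_access( $file ) ), "\n";
   print 'last written: ', scalar localtime( $ftl->last_modified( $file ) ), "\n";

   print "looks like a binary file\n" if $ftl->is_bin( $file );
}
```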
last_changed last_changed( [file name] ) Returns the inode change time for the named file in non-leap seconds since whatever your system considers to be the epoch. Suitable for feeding to Perl's built-in functions "gmtime" and "localtime". (see "time" in perlfunc.) last_modified last_modified( [file name] ) Returns the last modified time for the named file in non-leap seconds since whatever your system considers to be the epoch. Suitable for feeding to Perl's built-in functions "gmtime" and "localtime". (see "time" in perlfunc.) line_count line_count( [file name] ) Returns the number of lines in the named file. Fails with an error if the named file does not exist. list_dir list_dir( [directory name] => { option => value, ... } ) Returns all file names in the specified directory, sorted in alphabetical order. Fails with an error if no such directory is found, or if the directory is inaccessible. Note that this is one of File::Util's most robust methods, and can be very useful. It can be used as a higher order function (accepting callback subrefs), and can be used for advanced pattern matching against files. It can also return a hierarchical data structure of the file tree you ask it to walk. See the File::Util::Manual::Examples for several useful ways to use list_dir(). Syntax example to recursively return a list of subdirectories in directory "dir_name": my @dirs = $f->list_dir( 'dir_name' => { dirs_only => 1, recurse => 1 } ); list_dir() callback => subroutine reference list_dir() can accept references to subroutines of your own. If you pass it a code reference using this option, File::Util will execute your code every time list_dir() enters a directory. This is particularly useful when combined with the recurse option which is explained below. 
When you create a callback function, File::Util will pass it four arguments in this order: The name of the current directory, a reference to a list of subdirectories in the current directory, a reference to a list of files in the current directory, and the depth (positive integer) relative to the directory you provided as your first argument to list_dir(). This means if you pass in a path such as /var/tmp, that "/var/tmp" is at a depth of 0, "/var/tmp/foo" is 1 deep, and so on down through the "/var/tmp" directory. Remember that the code in your callback gets executed in real time, as list_dir() is walking the directory tree. Consider this example:

# Define a subroutine to print the byte size and depth of all files in a
# directory, designed to be used as a callback function to list_dir()
sub filesize
{
   my ( $selfdir, $subdirs, $files, $depth ) = @_;

   print "$_ | " . ( -s $_ ) . " | $depth levels deep\n" for @$files;
}

# Now list directory recursively, invoking the callback on every recursion
$f->list_dir( './droids' => { recurse => 1, callback => \&filesize } );

# Output would look something like
#
# ./droids/by-owner/luke/R2.spec | 1024 | 3 levels deep
# ./droids/by-owner/luke/C2P0.spec | 2048 | 3 levels deep
# ./droids/by-boss/dooku/Grievous.spec | 4096 | 3 levels deep
# ./droids/by-series/imperial/sentries/R5.spec | 1024 | 4 levels deep
#
# Depth breakdown
#
# level 0 => ./droids/
# level 1 => ./droids/by-owner/
# level 1 => ./droids/by-boss/
# level 1 => ./droids/by-series/
# level 2 => ./droids/by-owner/luke/
# level 2 => ./droids/by-boss/dooku/
# level 2 => ./droids/by-series/imperial/
# level 3 => ./droids/by-series/imperial/sentries/

Another way to use callbacks is in combination with closures, to "close around" a variable or variables defined in the same scope as the callback.
A demonstration of this technique is shown below:

{
   my $size_total;
   my $dir = 'C:\Users\superman\projects\scripts_and_binaries';

   # how many total bytes are in all of the executable files in $dir
   $f->list_dir( $dir => {
      callback => sub
      {
         my ( $selfdir, $subdirs, $files, $depth ) = @_;

         $size_total += -s $_ for grep { -B $_ } @$files;
      }
   } );

   print "There's $size_total bytes of binary files in my projects dir.";
}

d_callback => subroutine reference

A d_callback is just like a callback, except it is only executed on directories encountered in the file tree, not files, and its input is slightly different. @_ is comprised of (in order) the name of the current directory, a reference to a list of all subdirectories in that directory, and the depth (positive integer) relative to the top level directory in the path you provided as your first argument to list_dir.

f_callback => subroutine reference

Similarly, an f_callback is just like a callback, except it is only concerned with files encountered in the file tree, not directories. Its input is also slightly different. @_ is comprised of (in order) the name of the current directory, a reference to a list of all files present in that directory, and the depth (positive integer) relative to the top level directory in the path you provided as your first argument to list_dir.

dirs_only => boolean

Return only directory contents which are also directories.

files_only => boolean

Return only directory contents which are files.

max_depth => positive integer

Works just like the -maxdepth flag in the GNU find command. This option tells list_dir() to limit results to directories at no more than the maximum depth you specify. This only works in tandem with the recurse option (or the recurse_fast option which is similar). For compatibility reasons, you can use "maxdepth" without the underscore instead, and get the same functionality.

no_fsdots => boolean

do not include "." and ".."
in the list of directory contents returned

abort_depth => positive integer

Override the global limit on abort_depth recursions for directory listings, on a per-listing basis with this option. Just like the main abort_depth() object method, this option takes a positive integer. The default is 1000. Sometimes it is useful to increase this number by quite a lot when walking directories with callbacks.

with_paths => boolean

Return results with the preceding file paths intact, relative to the directory named in the call.

recurse => boolean

Recurse into subdirectories. In other words, open up subdirectories and continue to descend into the directory tree either as far as it goes, or until the abort_depth limit is reached. See abort_depth()

recurse_fast => boolean

Recurse into subdirectories, without checking for filesystem loops. This works exactly like the recurse option, except it turns off internal checking for duplicate inodes while descending through a file tree. You get a performance boost at the sacrifice of a little "safety checking". The bigger your file tree, the more performance gains you see. This option has no effect on Windows. (see perldoc -f stat)

dirs_as_ref => boolean

When returning a directory listing, include first a reference to the list of subdirectories found, followed by anything else returned by the call.

files_as_ref => boolean

When returning a directory listing, include last a reference to the list of files found, preceded by a list of subdirectories found (or preceded by a list reference to subdirectories found if dirs_as_ref was also used).

as_ref => boolean

Return a pair of list references: the first is a reference to any subdirectories found by the call, the second is a reference to any files found by the call.

sl_after_dirs => boolean

Append a directory separator ( "/", "\", or ":" depending on your system ) to all directories found by the call. Useful in visual displays for quick differentiation between subdirectories and files.
ignore_case => boolean

Return items in a case-insensitive alphabetic sort order, as opposed to the default. **By default, items returned by the call to this method are alphabetically sorted in a case-sensitive manner, such that "Zoo.txt" comes before "alligator.txt". This is also the way files are listed at the system level on most operating systems. However, if you'd like the directory contents returned by this method to be sorted without regard to case, use this option. That way, "alligator.txt" will come before "Zoo.txt".

count_only => boolean

Returns a single value: an integer reflecting the number of items found in the directory after applying any filter criteria that may also have been specified by other options (i.e.- "dirs_only", "recurse", etc.)

as_tree => boolean

Returns a hierarchical data structure (hashref) of the file tree in the directory you specify as the first argument to list_dir(). Use in combination with other options to get the exact results you want in the data structure. *Note: When using this option, the "files_only" and "dirs_only" options are ignored, but you can still specify things like a "max_depth" argument. Note also that you need to specifically call this with the "recurse" or "recurse_fast" option or you will only get a single-level tree structure.
One quick example:

my $tree = $ftl->list_dir( '/tmp' => { as_tree => 1, recurse => 1, } );

# output would look something like this if you Data::Dumper'd it
{
   '/' => {
      '_DIR_PARENT_' => undef,
      '_DIR_SELF_'   => '/',
      'tmp' => {
         '_DIR_PARENT_' => '/',
         '_DIR_SELF_'   => '/tmp',
         'hJMOsoGuEb' => {
            '_DIR_PARENT_' => '/tmp',
            '_DIR_SELF_'   => '/tmp/hJMOsoGuEb',
            'a.txt'  => '/tmp/hJMOsoGuEb/a.txt',
            'b.log'  => '/tmp/hJMOsoGuEb/b.log',
            'c.ini'  => '/tmp/hJMOsoGuEb/c.ini',
            'd.bat'  => '/tmp/hJMOsoGuEb/d.bat',
            'e.sh'   => '/tmp/hJMOsoGuEb/e.sh',
            'f.conf' => '/tmp/hJMOsoGuEb/f.conf',
            'g.bin'  => '/tmp/hJMOsoGuEb/g.bin',
            'h.rc'   => '/tmp/hJMOsoGuEb/h.rc',
         }
      }
   }
}

When using this option, the hashref you get back will have certain metadata entries at each level of the hierarchy, namely there will be two special keys: "_DIR_SELF_" and "_DIR_PARENT_". Their values will be the name of the directory itself, and the name of its parent, respectively. That metadata can be extremely helpful when iterating over and parsing the hashref later on, but if you don't want the metadata, include the dirmeta option and set it to a zero (false) value as shown below:

my $tree = $ftl->list_dir( '/some/dir' => { as_tree => 1, recurse => 1, dirmeta => 0, } );

**Remember: the as_tree option doesn't recurse into subdirectories unless you tell it to with recurse => 1

list_dir() pattern matching

list_dir() can use Perl regular expressions to match against and thereby filter the results it returns. It can match based on file name, directory name, the path preceding results, and the parent directory of results. The matching arguments you use must be real regular expression references as shown (i.e.- NOT strings). Regular expressions can be provided as a single argument value, or a specifically crafted hashref designating a list of patterns to match against in either an "or" manner, or an "and"ed cumulative manner. Some short examples of proper syntax will be provided after the list of matching options below.
**If you experience a big slowdown in directory listings while using regular expressions, check to make sure your regular expressions are properly written and optimized. In general, directory listings should not be slow or resource-intensive. Badly-written regular expressions will result in considerable slowdowns and bottlenecks in any application.

files_match => qr/regexp/
files_match => { and/or => [ qr/listref of/, qr/regexps/ ] }

Return only file names matching the regex(es). Preceding directories are included in the results; for technical reasons they are not excluded (if they were excluded, list_dir() would not be able to "cascade" or recurse into subdirectories in search of matching files). Use the files_only option in combination with this matching parameter to exclude the preceding directory names.

dirs_match => qr/regexp/
dirs_match => { and/or => [ qr/listref of/, qr/regexps/ ] }

Return only files and subdirectory names in directories that match the regex(es) you specify. BE CAREFUL with this one!! It doesn't "cascade" the way you might expect; for technical reasons, it won't descend into directories that don't match the regex(es) you provide. For example, if you want to match a directory name that is three levels deep against a given pattern, but don't know (or don't care about) the names of the intermediate directories -- THIS IS NOT THE OPTION YOU ARE LOOKING FOR. Use the path_matches option instead. *NOTE: Bear in mind that just because you tell list_dir() to match each directory against the regex(es) you specify here, that doesn't mean you are telling it to only show directories in its results. You will get file names in matching directories included in the results as well, unless you combine this with the dirs_only option.

path_matches => qr/regexp/
path_matches => { and/or => [ qr/listref of/, qr/regexps/ ] }

Return only files and subdirectory names with preceding paths that match the regex(es) you specify.
   parent_matches => qr/regexp/
   parent_matches => { and/or => [ qr/listref of/, qr/regexps/ ] }

Return only files and subdirectory names whose parent directory matches the regex(es) you specify.

list_dir()

Single-argument matching examples

   my @files = $f->list_dir(
      '../notes' => { files_match => qr/\.txt$/i, files_only => 1 }
   );

   my @dirs = $f->list_dir(
      '/var' => {
         dirs_match => qr/log|spool/i,
         recurse    => 1,
         dirs_only  => 1,
      }
   );

   my @dirs = $f->list_dir(
      '/home' => {
         path_matches => qr/Desktop/,
         recurse      => 1,
         dirs_only    => 1,
      }
   );

   my @files = $f->list_dir(
      '/home/tommy/projects' => {
         parent_matches => qr/^\.git$/,
         recurse        => 1,
      }
   );

Multiple-argument matching examples with OR

   my @files = $f->list_dir(
      'C:\Users\Billy G' => {
         parent_matches => { or => [ qr/Desktop/, qr/Pictures/ ] },
         recurse        => 1,
      }
   );

   # ... same concepts apply to "files_match", "dirs_match",
   # and "path_matches" filtering

Multiple-argument matching examples with AND

   my @files = $f->list_dir(
      '/home/leia' => {
         parent_matches => { and => [ qr/Anakin/, qr/Amidala/ ] },
         recurse        => 1,
      }
   );

   my @files = $f->list_dir(
      '/home/mace' => {
         path_matches => { and => [ qr/^(?!.*dark.side)/i, qr/[Ff]orce/ ] },
         recurse      => 1,
      }
   );

   # ... same concepts apply to "files_match" and "dirs_match" filtering

**When you specify regexes for more than one filter type parameter, the patterns are AND'ed together, as you'd expect, and all matching criteria must be satisfied for a successful overall match.

   my @files = $f->list_dir(
      '/var' => {
         dirs_match     => { or => [ qr/^log$/, qr/^lib$/ ] },
         files_match    => { or => [ qr/^syslog/, qr/\.isam$/i ] },
         parent_matches => qr/[[:alpha:]]+/,
         path_matches   => qr/^(?!.*home)/,
         recurse        => 1,
         files_only     => 1,
      }
   );

Negative matches (when you want to NOT match something) - use Perl!

As shown in the File::Util::Manual::Examples, Perl already provides support for negated matching in the form of "zero-width negative assertions". (See perlre for details on how they work.)
Use syntax like the regular expressions below to match anything that is NOT part of the subpattern.

   # match all files with names that do NOT contain "apple" (case sensitive)
   my @no_apples = $f->list_dir(
      'Pictures/fruit' => { files_match => qr/^(?!.*apple)/ }
   );

   # match all files that do NOT end in *.mp3 (case INsensitive)
   # also, don't match files that end in *.wav either
   my @no_music = $f->list_dir(
      '/opt/music' => {
         files_match => { and => [ qr/^(?!.*mp3$)/i, qr/^(?!.*wav$)/i ] }
      }
   );

load_dir

load_dir( [directory name] => { options } )

Returns a data structure containing the contents of each file present in the named directory. The type of data structure returned is determined by the optional data-type option parameter. Only one option at a time may be used for a given call to this method. Recognized options are listed below.

   my $files_hash_ref = $f->load_dir( $dirname ); # default (hashref)

   -OR-
   my $files_list_ref = $f->load_dir( $dirname => { as_listref => 1 } );

   -OR-
   my @files = $f->load_dir( $dirname => { as_list => 1 } );

load_dir()

as_hashref => boolean (default)

Implicit. If no option is passed in, the default behavior is to return a reference to an anonymous hash whose keys are the names of each file in the specified directory; the hash values contain the contents of the file represented by its corresponding key.

as_list => boolean

Causes the method to return a list comprised of the contents loaded from each file (in case-sensitive order) located in the named directory. This is useful in situations where you don't care what the filenames were and you just want a list of file contents.

as_listref => boolean

Same as above, except an array reference to the list of items is returned rather than the list itself. This is more efficient than the above, particularly when dealing with large lists.

load_dir() does not recurse or accept matching parameters, etc.
It's an effective tool for loading up things like a directory of template files on a web server, or for storing binary data streams in memory. Use it however you like.

However, if you do want to load files into a hashref/listref or array while using the advanced features of list_dir(), just use list_dir() to return the files and map the contents into your variable:

   my $hash_ref = {};

   %$hash_ref =
      map { $_ => $ftl->load_file( $_ ) }
      $ftl->list_dir( $dir_name => { advanced options... } );

Note: This method does not distinguish between plain files and other file types such as binaries, FIFOs, sockets, etc.

Restrictions imposed by the current "read limit" (see the read_limit() entry below) will be applied to the individual files opened by this method as well. Adjust the read limit as necessary.

Example usage:

   my $files = $f->load_dir( 'templates/stock-ticker' );

The above code creates an anonymous hash reference that is stored in the variable named "$files". The keys and values of the hash referenced by "$files" would resemble those of the following code snippet (given that the files in the named directory were the files 'a.txt', 'b.html', 'c.dat', and 'd.conf'):

   my $files = {
      'a.txt'  => 'the contents of file a.txt',
      'b.html' => 'the contents of file b.html',
      'c.dat'  => 'the contents of file c.dat',
      'd.conf' => 'the contents of file d.conf',
   };

load_file

load_file( [file name] => { options } )
load_file( file_handle => [file handle reference] => { options } )

If [file name] is passed, returns the contents of [file name] in a string. If a [file handle reference] is passed instead, the filehandle will be CORE::read() and the data obtained by the read will be returned in a string.

If you desire the contents of the file (or file handle data) in a list of lines instead of a single string, this can be accomplished through the use of the as_lines option (see below).
load_file()

as_lines => boolean

If this option is enabled then your call to load_file will return a list of strings, each one of which is a line as it was read from the file [file name]. The lines are returned in the order they are read, from the beginning of the file to the end.

This is not the default behavior. The default behavior is for load_file to return a single string containing the entire contents of the file.

no_lock => boolean

By default this method will attempt to get a lock on the file while it is being read, following whatever rules are in place for the flock policy established either by default (implicitly) or changed by you in a call to File::Util::flock_rules() (see the flock_rules() entry below).

This method will not try to get a lock on the file if the File::Util object was created with the option no_lock or if the method was called with the option no_lock.

This method will automatically call binmode() on binary files for you. If you pass in a filehandle instead of a file name you do not get this automatic check performed for you. In such a case, you'll have to call binmode() on the filehandle yourself. Once you pass a filehandle to this method it has no way of telling if the file opened to that filehandle is binary or not.

binmode => [ boolean or 'utf8' ]

Tell File::Util to read the file in binmode (if set to a true boolean: 1), or, to read the file as UTF-8 encoded data, specify a value of 'utf8' for this option (see "binmode" in perlfunc). You need Perl 5.8 or better to use 'utf8' or your program will fail with an error message.

Example usage:

   my $encoded_data = $ftl->load_file( 'encoded.txt' => { binmode => 'utf8' } );

read_limit => positive integer

Override the global read limit setting for the File::Util object you are working with, on a one-time basis.
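As a brief sketch of the as_lines option described above (the file name is made up for illustration):

```
use strict;
use warnings;
use File::Util;

my $ftl = File::Util->new();

# get the file back as a list of its lines rather than one big string
my @lines = $ftl->load_file( 'notes.txt' => { as_lines => 1 } );

# e.g. report how many lines the file contained
print scalar @lines, " lines read\n";
```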
By specifying this option with a positive integer value (representing the maximum number of bytes to allow for your load_file() call), you are telling load_file() to ignore the global/default setting for just that call, and to apply your one-time limit of [ positive integer ] bytes on the file while it is read into memory.

Notes: This method does not distinguish between plain files and other file types such as binaries, FIFOs, sockets, etc.

Restrictions imposed by the current "read limit" (see the read_limit() entry below) will be applied to the files opened by this method. Adjust the read limit as necessary, either by overriding it (using the read_limit option above) or by adjusting the global value for your File::Util object with the provided read_limit() object method.

make_dir

make_dir( [new directory name], [bitmask] => { options } )

Attempts to create (recursively) a directory as [new directory name] with the [bitmask] provided. The bitmask is an optional argument and defaults to oct 777, combined with the current user's umask. If specified, the bitmask must be supplied in the form required by the native perl umask function (as an octal number). See "umask" in perlfunc for more information about the format of the bitmask argument.

As mentioned above, the recursive creation of directories is transparently handled for you. This means that if the name of the directory you pass in contains a parent directory that does not exist, the parent directory(ies) will be created for you automatically and silently in order to create the final directory in the [new directory name].

Simply put, if [new directory name] is "/path/to/directory" and the directory "/path/to" does not exist, the directory "/path/to" will be created and the "/path/to/directory" directory will be created thereafter. All directories created will be created with the [bitmask] you specify, or with the default of oct 777, combined with the current user's umask.
Upon successful creation of the [new directory name], the [new directory name] is returned to the caller.

make_dir()

if_not_exists => boolean

Example:

   $f->make_dir( '/home/jspice' => oct 755 => { if_not_exists => 1 } );

If this option is enabled then make_dir will not attempt to create the directory if it already exists. Rather, it will return the name of the directory as it normally would if the directory did not exist prior to calling this method.

If a call to this method is made without the if_not_exists option and the directory specified as [new directory name] does in fact exist, an error will result, as it is impossible to create a directory that already exists.

abort_depth

abort_depth( [positive integer] )

When called without any arguments, this method returns an integer reflecting the current number of times the File::Util object will dive into the subdirectories it discovers when recursively listing directory contents from a call to File::Util::list_dir(). The default is 1000. If the number is exceeded, the File::Util object will fail with an error.

When called with an argument, it sets the maximum number of times a File::Util object will recurse into subdirectories before failing with an error message. This method can only be called with a numeric integer value. Passing a bad argument to this method will cause it to fail with an error.

needs_binmode

needs_binmode

Returns 1 if the machine on which the code is running requires that binmode() (a built-in function) be called on open file handles, or returns 0 if not (see "binmode" in perlfunc).

This is a constant method. It accepts no arguments and will always return the same value for the system on which it is executed.

new

new( { options } )

This is the File::Util constructor method. It returns a new File::Util object reference when you call it. It recognizes various options that govern the behavior of the new File::Util object.
new()

use_flock => boolean

Optionally specify this option to the File::Util::new method to instruct the new object whether or not it should attempt to use flock() in its I/O operations. The default is to use flock() if available on your system. Specify this option with a true or false value ( 1 or 0 ), true to use flock(), false to not use it.

read_limit => positive integer

Optionally specify this option to the File::Util::new method to instruct the new object that it should never attempt to open and read in a file greater than the number of bytes you specify. This argument can only be a numeric integer value, otherwise it will be silently ignored. The default read limit for File::Util objects is 52428800 bytes (50 megabytes).

abort_depth => positive integer

Optionally specify this option to the File::Util::new method to instruct the new object to set the maximum number of times it will recurse into subdirectories while performing directory listing operations before failing with an error message. This argument can only be a numeric integer value, otherwise it will be silently ignored. (See also: abort_depth().)

onfail => designated handler

Set the default policy for how the new File::Util object handles fatal errors. This option takes any one of a list of predefined keywords, or a reference to a named or anonymous error handling subroutine of your own. You can supply an onfail handler to nearly any function in File::Util, but when you do so for the new() constructor, you are setting the default.

Acceptable values are all covered in the ERROR HANDLING section (above), along with proper syntax and example usage.

onfail

onfail( [keyword or code reference] )

Dynamically set/change the default error handling policy for an object. This works exactly the same as it does when you specify an "onfail" handler to the constructor method (see also new).
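Putting the constructor options above together, here is a brief sketch (the specific option values are chosen arbitrarily for illustration):

```
use strict;
use warnings;
use File::Util;

# create an object that uses flock(), refuses to slurp files over 10 MB,
# recurses at most 50 directories deep, and returns error messages
# instead of die()-ing on failure
my $ftl = File::Util->new(
   {
      use_flock   => 1,
      read_limit  => 10485760,   # 10 megabytes, in bytes
      abort_depth => 50,
      onfail      => 'message',
   }
);
```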
The syntax and keywords available to use for this method are already discussed above in the ERROR HANDLING section, so refer to that for in-depth details.

Here are some examples:

   $ftl->onfail( 'die' );
   $ftl->onfail( 'zero' );
   $ftl->onfail( 'undefined' );
   $ftl->onfail( 'message' );
   $ftl->onfail( \&subroutine_reference );
   $ftl->onfail( sub { my ( $error, $stack_trace ) = @_; ... } );

open_handle

open_handle( [file name] => [mode] => { options } )
open_handle( file => [file name] => mode => [mode] => { options } )

Attempts to get a lexically scoped open file handle on [file name] in [mode] mode. Returns the file handle if successful or generates a fatal error with a diagnostic message if the operation fails.

You will need to remember to call close() on the filehandle yourself, at your own discretion. Leaving filehandles open is not a good practice, and is not recommended (see "close" in perlfunc).

Once you have the file handle you would use it as you would use any file handle. Remember that unless you specifically turn file locking off when the File::Util object is created (see new) or by using the no_lock option when calling open_handle, file locking is going to automagically be handled for you behind the scenes, so long as your OS supports file locking of any kind at all. Great! It's very convenient for you to not have to worry about portability in taking care of file locking between one application and the next; by using File::Util in all of them, you know that you're covered.

As a slight inconvenience for the price of a larger set of features (compare write_file to this method), you will have to release the file lock on the open handle yourself. File::Util can't manage it for you anymore once it turns the handle over to you. At that point, it's all yours. In order to release the file lock on your file handle, call unlock_open_handle() on it. Otherwise the lock will remain for the life of your process.
If you don't want to use the free portable file locking, remember the no_lock option, which will turn off file locking for your open handle. Seldom, however, should you ever opt to not use file locking unless you really know what you are doing. The only obvious exception would be if you are working with files on a network-mounted filesystem like NFS or SMB (CIFS), in which case locking can be buggy.

Any non-existent directories in the path preceding the actual file name will be automatically (and silently - no warnings) created for you. If there is an error while trying to create any preceding directories, the failure results in a fatal error. If all directories preceding the name of the file already exist, the bitmask argument has no effect and is silently ignored.

The default behavior of open_handle() is to open file handles using Perl's native open() (see "open" in perlfunc). Unless you use the use_sysopen option, only the following modes are valid:

mode => 'read' (this is the default mode)

[file name] is opened in read-only mode. If the file does not yet exist then a fatal error will occur.

mode => 'write'

Optionally you can ask File::Util to open your handle using CORE::sysopen instead of using the native Perl CORE::open(). This is accomplished by enabling the use_sysopen option. Using this feature opens up more possibilities as far as the open modes you can choose from, but also carries with it a few caveats, so you have to be careful, just as you'd have to be a little more careful when using sysopen() anyway.

Specifically you need to remember that when using this feature you must NOT mix different types of I/O when working with the file handle. You can't go opening file handles with sysopen() and print to them as you normally would print to a file handle. You have to use syswrite() instead. The same applies here. If you get a sysopen()'d filehandle from open_handle(), it is imperative that you use syswrite() on it.
You'll also need to use sysseek() and other sys* commands on the filehandle instead of their native Perl equivalents (see "sysopen" in perlfunc, "syswrite" in perlfunc, "sysseek" in perlfunc, "sysread" in perlfunc).

That said, here are the different modes you can choose from to get a file handle when using the use_sysopen option. Remember that these won't work unless you use that option, and will generate an error if you try using them without it. The standard 'read', 'write', and 'append' modes are already available to you by default. These are the extended modes:

mode => 'rwcreate'

[file name] is opened in read-write mode, and will be created for you if it does not already exist.

mode => 'rwupdate'

[file name] is opened for you in read-write mode, but must already exist. If it does not exist, a fatal error will result.

mode => 'rwclobber'

[file name] is opened for you in read-write mode. If the file already exists, its contents will be "clobbered" or wiped out. The file will then be empty and you will be working with the then-truncated file. This cannot be undone. Once you call open_handle() using this option, your file WILL be wiped out. If the file does not exist yet, it will be created for you.

mode => 'rwappend'

[file name] will be opened for you in read-write mode ready for appending. The file's contents will not be wiped out; they will be preserved and you will be working in append fashion. If the file does not exist, it will be created for you.

Remember to use sysread() and not plain read() when reading those sysopen()'d filehandles!

open_handle()

binmode => [ boolean or 'utf8' ]

Tell File::Util to open the file in binmode (if set to a true boolean: 1), or, to open the file with UTF-8 encoding, specify a value of 'utf8' for this option (see "binmode" in perlfunc). You need Perl 5.8 or better to use 'utf8' or your program will fail with an error message.

Example usage:

   $ftl->open_handle( 'encoded.txt' => { binmode => 'utf8' } );
use_sysopen => boolean

Instead of opening the file using Perl's native open() command, File::Util will open the file with the sysopen() command. You will have to remember that your filehandle is a sysopen()'d one, and that you will not be able to use native Perl I/O functions on it. You will have to use the sys* equivalents. See perlopentut for a more in-depth explanation of why you can't mix native Perl I/O with system I/O.

read_limit

read_limit( [positive integer] )

By default, the largest size file that File::Util will read into memory and return via load_file() is 52428800 bytes (50 megabytes). This value can be modified by calling this method with an integer value reflecting the new limit you want to impose, in bytes. For example, if you want to set the limit to 10 megabytes, call the method with an argument of 10485760.

If this method is called without an argument, the read limit currently in force for the File::Util object will be returned.

return_path

return_path( [string] )

Takes the file path from the file name provided and returns it, such that /who/you/callin/scruffy.txt is returned as /who/you/callin.

size

size( [file name] )

Returns the file size of [file name] in bytes. Returns 0 if the file is empty. Returns undef if the file does not exist.

split_path

split_path( [string] )

Takes a path/filename, fully-qualified or relative (it doesn't matter), and returns a list comprising the root of the path (if any), each directory in the path, and the final part of the path (be it a file, a directory, or otherwise).

This method doesn't divine or detect any information about the path; it simply manipulates the string value. It doesn't map it to any real filesystem object. It doesn't matter whether or not the file/path named in the input string exists.

strip_path

strip_path( [string] )

Strips the file path from the file name provided and returns the file name only.
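As an illustrative sketch of how the three path helpers above relate to one another (the example path is made up, and the exact list returned by split_path is an assumption based on the description above, not verified output):

```
use strict;
use warnings;
use File::Util;

my $ftl = File::Util->new();

my $path = '/usr/local/bin/perl';

# everything but the last component
my $dir  = $ftl->return_path( $path );   # '/usr/local/bin'

# only the last component
my $name = $ftl->strip_path( $path );    # 'perl'

# root (if any), each directory, and the final part, as a list --
# presumably something like ( '/', 'usr', 'local', 'bin', 'perl' )
my @parts = $ftl->split_path( $path );
```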
Given /kessel/run/12/parsecs, it returns parsecs.

Given C:\you\scoundrel, it returns scoundrel.

touch

touch( [file name] )

Behaves like the *nix touch command; updates the access and modification times of the specified file to the current time. If the file does not exist, File::Util tries to create it empty. This method will fail with a fatal error if system permissions deny alterations to or creation of the file.

Returns 1 if successful. If unsuccessful, fails with an error.

trunc

trunc( [file name] )

Truncates [file name] (i.e.- wipes out, or "clobbers", the contents of the specified file).

Returns 1 if successful. If unsuccessful, fails with a descriptive error message about what went wrong.

unlock_open_handle

unlock_open_handle( [file handle] )

Releases the flock on a file handle you opened with open_handle.

Returns true on success, false on failure. Will not raise a fatal error if the unlock operation fails. You can capture the return value from your call to this method and die() if you so desire. Failure is not very likely, or File::Util wouldn't have been able to get a portable lock on the file in the first place.

If File::Util wasn't ever able to lock the file due to limitations of your operating system, a call to this method will return a true value.

If file locking has been disabled on the file handle via the no_lock option at the time open_handle was called, or if file locking was disabled using the use_flock method, or if file locking was disabled on the entire File::Util object at the time of its creation (see new()), calling this method will have no effect and a true value will be returned.

use_flock

use_flock( [true / false value] )

When called without any arguments, this method returns a true or false value reflecting the current use of flock() within the File::Util object.

When called with a true or false value as its single argument, this method will tell the File::Util object whether or not it should attempt to use flock() in its I/O operations.
A true value indicates that the File::Util object will use flock() if available; a false value indicates that it will not. The default is to use flock() when available on your system.

If you are working with files on an NFS mount or a Windows file share, it is quite likely that using flock will be buggy and cause unexpected failures in your program. You should not use flock in such situations.

File locking has known issues on SOLARIS. Solaris claims to offer a native flock() implementation, but after obtaining a lock on a file, Solaris will very often just silently refuse to unlock it again until your process has completely exited. This is not an issue with File::Util or even with Perl itself. Other programming languages encounter the same problems; it is a system-level issue. So please be aware of this if you are a Solaris user and want to use file locking on your OS. You may have to explicitly disable file locking completely.

write_file

write_file( [file name] => [string] => { other_options } )
write_file( { file => [file name], content => [string], mode => [mode], other_options } )

Syntax examples:

   # get some content (a string returned from a function call, perhaps)
   my $answer = ask_commissioner( 'Can he be trusted?' );

   $ftl->write_file( 'Harvey_Dent.txt' => $answer );

   -OR-
   # get some binary content, maybe a picture...
   my $binary_data = get_mugshot( alias => 'twoface' );

   $ftl->write_file( 'suspect.png' => $binary_data => { binmode => 1 } );

   -OR-
   # write a file with UTF-8 encoding (unicode character support)
   $ftl->write_file( 'encoded.txt' => $encoded_data => { binmode => 'utf8' } );

   -OR-
   $ftl->write_file(
      {
         file    => '/gotham/city/ballots/Bruce_Wayne.txt',
         content => 'Vote for Harvey!',
         bitmask => oct 600, # <- secret ballot file permissions
      }
   );

Attempts to write [string] to [file name] in mode [mode]. [string] should be a string or a scalar variable containing a string.
The string can be any type of data, such as a binary stream, or ASCII text with line breaks, etc. Be sure to enable the binmode => 1 option for binary streams, and be sure to specify a value of binmode => 'utf8' for UTF-8 encoded data. NOTE: you will need Perl version 5.8 or better to use the 'utf8' feature, or your program will fail with an error.

Returns 1 if successful, or fails with an error if not successful.

Any non-existent directories in the path preceding the actual file name will be automatically (and silently - no warnings) created for you. If there is a problem while trying to create any preceding directories, the failure results in a fatal error. If all directories preceding the name of the file already exist, the bitmask argument has no effect and is silently ignored.

mode => 'write' (this is the default mode)

write_file()

binmode => [ boolean or 'utf8' ]

Tell File::Util to write the file in binmode (if set to a true boolean: 1), or, to write the file with UTF-8 encoding, specify a value of 'utf8' for this option (see "binmode" in perlfunc). You need Perl 5.8 or better to use 'utf8' or your program will fail with an error message.

Example usage:

   $ftl->write_file( 'encoded.txt' => $encoded_data => { binmode => 'utf8' } );

empty_writes_OK => boolean

Allows you to call this method without providing a content argument (it lets you create an empty file without warning you or failing). Be advised that if you enable this option, it will have the same effect as truncating a file that already has content in it (i.e.- it will "clobber" non-empty files).

valid_filename

valid_filename( [string] )

For the given string, returns 1 if the string is a legal file name for the system on which the program is running, or returns undef if it is not.

This method does not test for the validity of file paths! It tests for the validity of file names only.
(It is used internally to check beforehand if a file name is usable when creating new files, but it is also a public method available for external use.)

NL

NL

Short for "New Line". Returns the correct new line character (or character sequence) for the system on which your program runs.

SL

SL

Short for "Slash". Returns the correct directory path separator for the system on which your program runs.

OS

OS

Returns the File::Util keyword for the operating system FAMILY it detected. The keyword for the detected operating system will be one of the following, derived from the contents of $^O, or, if $^O cannot be found, from the contents of $Config::Config{osname} (see the native Config library), or, if that doesn't contain a recognizable value, finally falls back to UNIX.

Generally speaking, Linux operating systems are going to be detected as UNIX. This isn't a bug. The OS FAMILY to which they belong uses UNIX-style filesystem conventions and line endings, which are the relevant things to file handling operations.

Specifics:

   OS name =~ /^(?:darwin|bsdos)/i
   OS name =~ /^cygwin/i
   OS name =~ /^MSWin/i
   OS name =~ /^vms/i
   OS name =~ /^dos/i
   OS name =~ /^MacOS/i
   OS name =~ /^epoc/i
   OS name =~ /^os2/i

SEE ALSO: File::Util::Cookbook, File::Util::Manual::Examples, File::Util
package org.springframework.batch.item.file;

import org.springframework.batch.item.file.mapping.FieldSetMapper;
import org.springframework.batch.item.file.transform.LineTokenizer;

/**
 * Interface for mapping lines (strings) to domain objects, typically used to map
 * lines read from a file to domain objects on a per-line basis. Implementations
 * of this interface perform the actual work of parsing a line without having to
 * deal with how the line was obtained.
 *
 * @author Robert Kasanicky
 * @param <T> type of the domain object
 * @see FieldSetMapper
 * @see LineTokenizer
 * @since 2.0
 */
public interface LineMapper<T> {

	/**
	 * Implementations must implement this method to map the provided line to
	 * the parameter type T. The line number represents the number of lines
	 * into a file the current line resides.
	 *
	 * @param line to be mapped
	 * @param lineNumber of the current line
	 * @return mapped object of type T
	 * @throws Exception if an error occurred while parsing.
	 */
	T mapLine(String line, int lineNumber) throws Exception;
}
This page explains how to scale a deployed application in Google Kubernetes Engine.

Overview

When you deploy an application in GKE, you define how many replicas of the application you'd like to run. When you scale an application, you increase or decrease the number of replicas. Each replica of your application represents a Kubernetes Pod that encapsulates your application's container.

Inspecting an application

Before scaling your application, you should inspect the application and ensure that it is healthy. To see all applications deployed to your cluster, run kubectl get [CONTROLLER]. Substitute [CONTROLLER] for deployments, statefulsets, or another controller object type.

For example, if you run kubectl get deployments and you have created only one Deployment, the command's output should look similar to the following:

   NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
   my-app    1         1         1            1           10m

The output of this command is similar for all objects, but may appear slightly different. For Deployments, the output has six columns:

- NAME lists the names of the Deployments in the cluster.
- DESIRED displays the desired number of replicas, or the desired state, of the application, which you define when you create the Deployment.
- CURRENT displays the current number of replicas running.
- UP-TO-DATE displays the number of replicas that have been updated to reach the desired state.
- AVAILABLE displays the number of replicas of the application available to users.
- AGE displays the amount of time that the application has been running in the cluster.

In this example, there is only one Deployment, my-app, which has only one replica because its desired state is one replica. You define the desired state at the time of creation, and you can change it at any time by scaling the application.

Inspecting StatefulSets

Before scaling a StatefulSet, you should inspect it by running kubectl describe statefulset my-app. In the output of this command, check the Pods Status field. If the Failed value is greater than 0, scaling might fail.

If a StatefulSet appears to be unhealthy, run kubectl get pods to see which replicas are unhealthy. Then, run kubectl delete [POD], where [POD] is the name of the unhealthy Pod. Attempting to scale a StatefulSet while it is unhealthy may cause it to become unavailable.
Scaling an application

The following sections describe each method you can use to scale an application. The kubectl scale method is the fastest way to scale. However, you may prefer another method in some situations, like when updating configuration files or when performing in-place modifications.

kubectl scale

kubectl scale lets you instantaneously change the number of replicas running your application.

To use kubectl scale, you specify the new number of replicas by setting the --replicas flag. For example, to scale my-app to four replicas, run the following command, substituting [CONTROLLER] for deployment, statefulset, or another controller object type:

   kubectl scale [CONTROLLER] my-app --replicas 4

If successful, this command's output should be similar to deployment "my-app" scaled.

Next, run kubectl get [CONTROLLER] my-app. The output should look similar to the following:

   NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
   my-app    4         4         4            4           15m

kubectl apply

You can use kubectl apply to apply a new configuration file to an existing controller object. kubectl apply is useful for making multiple changes to a resource, and may be useful for users who prefer to manage their resources in configuration files.

To scale using kubectl apply, the configuration file you supply should include a new number of replicas in the replicas field of the object's specification.

The following is an updated version of the configuration file for the example my-app object. The example shows a Deployment, so if you use another type of controller, such as a StatefulSet, change the kind accordingly. This example works best on a cluster with at least three Nodes.

   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: my-app
   spec:
     replicas: 3
     selector:
       matchLabels:
         app: app
     template:
       metadata:
         labels:
           app: app
       spec:
         containers:
         - name: my-container
           image: gcr.io/google-samples/hello-app:2.0

In this file, the value of the replicas field is 3.
When this configuration file is applied, the object my-app scales to three replicas. To apply an updated configuration file, run the following command:

kubectl apply -f config.yaml

Next, run kubectl get [CONTROLLER] my-app. The output should look similar to the following:

NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
my-app    3         3         3            3           15m

Console

To scale a workload in Google Cloud Console, perform the following steps:

Visit the Google Kubernetes Engine Workloads menu in Cloud Console.
Select the desired workload from the menu.
Click Actions, then Scale.
In the Replicas field, enter the desired number of replicas.
Click Scale.

Autoscaling Deployments

You can autoscale Deployments based on CPU utilization of Pods using kubectl autoscale or from the GKE Workloads menu in Cloud Console.

kubectl autoscale

kubectl autoscale creates a HorizontalPodAutoscaler (or HPA) object that targets a specified resource (called the scale target) and scales it as needed. The HPA periodically adjusts the number of replicas of the scale target to match the average CPU utilization that you specify.

When you use kubectl autoscale, you specify a maximum and minimum number of replicas for your application, as well as a CPU utilization target. For example, to set the maximum number of replicas to six and the minimum to four, with a CPU utilization target of 50%, run the following command:

kubectl autoscale deployment my-app --max 6 --min 4 --cpu-percent 50

In this command, the --max flag is required. The --cpu-percent flag is the target CPU utilization over all the Pods. This command does not immediately scale the Deployment to six replicas, unless there is already a systemic demand.

After running kubectl autoscale, the HorizontalPodAutoscaler object is created and targets the application. When there is a change in load, the object increases or decreases the application's replicas.
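The same policy can also be expressed declaratively. As a sketch, a HorizontalPodAutoscaler manifest equivalent to the kubectl autoscale command above might look like this (using the autoscaling/v1 form shown elsewhere on this page; field values mirror the command's flags):

```yaml
# Hypothetical manifest matching:
#   kubectl autoscale deployment my-app --max 6 --min 4 --cpu-percent 50
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:            # the scale target
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 4
  maxReplicas: 6
  targetCPUUtilizationPercentage: 50
```

Applying a file like this with kubectl apply -f has the same effect as the imperative command, and keeps the autoscaling policy under version control alongside the rest of your configuration.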
To see a specific HorizontalPodAutoscaler object in your cluster, run:

kubectl get hpa [HPA_NAME]

To see the HorizontalPodAutoscaler configuration:

kubectl get hpa [HPA_NAME] -o yaml

The output of this command is similar to the following:

apiVersion: v1
items:
- apiVersion: autoscaling/v1
  kind: HorizontalPodAutoscaler
  metadata:
    creationTimestamp: ...
    name: [HPA_NAME]
    namespace: default
    resourceVersion: "664"
    selfLink: ...
    uid: ...
  spec:
    maxReplicas: 10
    minReplicas: 1
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: [HPA_NAME]
    targetCPUUtilizationPercentage: 50
  status:
    currentReplicas: 0
    desiredReplicas: 0
kind: List
metadata: {}
resourceVersion: ""
selfLink: ""

In this example output, the targetCPUUtilizationPercentage field holds the value of 50 passed in from the kubectl autoscale example.

To see a detailed description of a specific HorizontalPodAutoscaler object in the cluster:

kubectl describe hpa [HPA_NAME]

You can modify the HorizontalPodAutoscaler by applying a new configuration file with kubectl apply, using kubectl edit, or using kubectl patch.

To delete a HorizontalPodAutoscaler object:

kubectl delete hpa [HPA_NAME]

Console

To autoscale a Deployment, perform the following steps:

Visit the Google Kubernetes Engine Workloads menu in Cloud Console.
Select the desired workload from the menu.
Click Actions, then Autoscale.
Fill the Maximum number of pods field with the desired maximum number of Pods.
Optionally, fill the Minimum number of pods and Target CPU utilization in percent fields with the desired values.
Click Autoscale.

Autoscaling with Custom Metrics

You can scale your Deployments based on custom metrics exported from Stackdriver Kubernetes Engine Monitoring. To learn how to use custom metrics to autoscale deployments, refer to the Autoscaling Deployments with Custom Metrics tutorial.
https://cloud.google.com/kubernetes-engine/docs/how-to/scaling-apps?hl=no
Functions in C++ Programming

A function is a logically grouped set of statements that perform a specific task. For example, a function sort() may sort a group of data. Every C++ program has a function named main() where the execution of the program starts. It is a mandatory function in C++.

Advantages of Function

Creating functions in a program is beneficial. They
- Avoid repetition of code.
- Increase program readability.
- Divide a complex problem into many simpler problems.
- Reduce the chances of error.
- Make modifying a program easier.
- Make unit testing possible.

Components of Function

A function usually has three components. They are:
- Function prototype/declaration
- Function definition
- Function call

1. Function prototype/declaration

The function declaration informs the compiler about the function's name, the type and number of arguments it receives, and the type of value it returns.

Syntax for function declaration

returntype function_name ([arguments type]);

For example,

void display(char);
/* function name = display, receives a character as argument and returns nothing */

int sum(int,int);
/* function name = sum, receives two integers as argument and returns an integer */

2. Function definition

It is the most important part of the function, consisting of the body of the function. It is a block of statements that specifies what task is to be performed. When a function is called, control is transferred to the function definition.

Syntax for function definition

returntype function_name ([arguments])
{
    statement(s);
    ...
}

Return Statement

Functions can return values. A return statement is used to return values to the invoking function. The type of value a function can return is specified in the function prototype. A function which has void as its return type doesn't return any value. Besides basic data types, a function can return objects and pointers too. A return statement is usually placed at the end of the function definition or inside a branching statement.
For example,

int sum (int x, int y)
{
    int s = x+y;
    return s;
}

In this function, the return type of sum() is int. So it returns an integer value s to the invoking function.

3. Function call

A function call statement calls the function by matching its name and arguments. A function call can be made by using the function name and providing the required parameters.

Syntax for function call

function_name ([actual arguments]);

For example,

display(a);
s = sum(x,y);

A function can be called in two ways. They are:
- Call by value
- Call by reference

Call by value

When a function is called, the called function creates a copy of all the arguments present in the calling statement. These new copies occupy separate memory locations and the function works on these copies only. This method of calling a function is called call by value. In this method, only the value of the argument is passed. So, any changes made to those values inside the function are visible only inside the function. Their values remain unchanged outside it.

Example 1: C++ program to determine whether a number is even or odd using a function (call by value)

#include <iostream>
#include <conio.h>
using namespace std;

int iseven(int); // function prototype

int main()
{
    int n;
    cout<<"Enter a number: ";
    cin>>n;
    if (iseven(n)) // function call by value
        cout<<n<<" is even";
    else
        cout<<n<<" is odd";
    getch();
    return 0;
}

int iseven(int x) // function definition
{
    int r;
    if (x%2 == 0)
        r=1;
    else
        r=0;
    return r;
}

In this program, a number entered by the user is passed as a parameter to a user-defined function iseven(). It receives the integer value and returns 1 if it is even, otherwise it returns 0.

Output

Enter a number: 16
16 is even

Enter a number: 31
31 is odd

Call by reference

In this method of calling a function, the reference of the argument is passed rather than its value. The argument received by the function and the actual argument occupy the same memory address.
So, any changes made to those values inside the function are visible both inside and outside the function.

For example, consider a function swap(int,int) which receives two integer arguments and swaps their values. If this function is called by value then the changes to the variables after swapping won't be seen outside the function. This problem can be solved by calling the function by reference.

Example 2: C++ program to swap two values using a function (call by reference)

#include <iostream>
#include <conio.h>
using namespace std;

void swap(int &, int &); // function prototype

int main()
{
    int a,b;
    cout<<"Enter two numbers: ";
    cin>>a>>b;
    cout<<"Before swapping"<<endl;
    cout<<"a = "<<a<<endl;
    cout<<"b = "<<b<<endl;
    swap(a,b); // function call by reference
    cout<<"After swapping"<<endl;
    cout<<"a = "<<a<<endl;
    cout<<"b = "<<b<<endl;
    getch();
    return 0;
}

void swap(int &x, int &y) // function definition
{
    x=x+y;
    y=x-y;
    x=x-y;
}

This program swaps the values of two integer variables. Two integer values entered by the user are passed by reference to a function swap() which swaps the values of the two variables. After swapping these values, the result is printed.

Output

Enter two numbers: 19 45
Before swapping
a = 19
b = 45
After swapping
a = 45
b = 19

Types of function

There are two kinds of functions. They are:
- Library functions
- User-defined functions

1. Library functions

Library functions are built-in functions that are defined in the C++ library. Their prototypes are present in header files, so we need to include the specific header files to use library functions. These functions can be used by simply calling them. Some library functions are pow(), sqrt(), strcpy(), toupper(), isdigit(), etc.

2. User-defined functions

These functions are defined by the user as per the requirement, hence called user-defined functions. The function definition is written by the user and is present in the program. main() is an example of a user-defined function.
Static Member Function

In C++, functions defined inside a class can normally only be accessed through its objects. But when a class has a private static data member (variable), it cannot be accessed directly, so we need a function specifically to access those data members; these are static member functions. They are defined/declared using the keyword static before their name. We can also define a static function outside the class declaration. These act similar to normal functions.

Syntax of Static Member Function

static returntype function_name ([argument list])
{
    body of function
}

Properties of Static Member Function

- A static member function can access only other static data members declared in that class.
- We do not need to create class objects to access these functions. They can be accessed directly using the class name and the scope resolution operator (::) instead of an object of the class:

classname::function_name ([actual argument]);

- A static member function cannot be virtual.

Example 3: C++ program to use a static member function

#include<iostream>
#include<conio.h>
using namespace std;

class test
{
    static int x;
    int id;
public:
    test()
    {
        x++;
        id=x;
    }
    static void static_display()
    {
        cout<<"x = "<<x<<endl;
    }
    void display()
    {
        cout<<"id= "<<id<<endl;
    }
};

int test::x;

int main()
{
    test o1,o2;
    test::static_display();
    o1.display();
    test::static_display();
    o2.display();
    getch();
    return 0;
}

This program is similar to the previous program, but here we have used a static member function to display the content of the static data member. Static member functions are called using the class name and the scope resolution operator (::), as in the program:

test::static_display();

Since a static member function can't access other members of a class except static data members, a separate function is used to display them.

Output

x = 2
id= 1
x = 2
id= 2
https://www.programtopia.net/cplusplus/docs/functions
Comments on Docker: A 'Shipping Container' for Linux Code (2014-04-16T18:12:34+00:00)

peacengell (2013-08-02T04:42:09+00:00):
We should think about security as well.. look good. I will check it in more details.. future computing = more security. help ever, hurts never.

Kieran Grant (2013-08-02T00:07:46+00:00):
The use of Linux Containers is an excellent idea, the combined use of cgroups, the isolation mechanisms, namespace/pid/mount etc, will make it very successful. Will be good for portable platforms. This could even be used for user applications, to provide an isolated and standard environment across distros, of course, not for performance sensitive programs that need optimisations for each host. This would allow programs to come pre-packaged with their dependencies, but of course, with an increase in size of the programs, and an increase in security management. Overall, for servers, it appears to be a good idea. Might also work on Desktop computers.
https://www.linux.com/feeds/comments/article/docker-a-shipping-container-for-linux-code
One of the strengths of the Android platform compared to iOS, for example, is that it has an open source basis, which makes it easier to produce your own applications and distribute them without waiting for a lengthy approval process. You can set up your own Android app on your PC as long as you have the right software installed, and you can even take it for a test drive using an Android emulator so you can see what it will look like when it's run on a smartphone.

App Inventor provides you with a simple drag-and-drop environment that you can use to generate new applications made up of building blocks of code and media. It's an attempt to make application development possible for people who aren't hardcore coders, but it's not recommended for production environments. Assuming that you'd like to try the full coded environment, we'll demonstrate how to produce a simple 'hello world' application. If you'd rather work in a GUI, we'll discuss App Inventor later on.

Android apps are written in Java code, so you'll need a Java development kit installed on your PC. You also need an integrated development environment (IDE) so you can write and test the code. You also need to get your computer ready for the Android SDK.

Start by installing a Java Development Kit for your version of Windows. You also need to install Eclipse IDE for Java developers. When you install Eclipse it will check for the JDK. It's best to unzip Eclipse in the same directory as the JDK. If it can't find the JDK it won't install, but you can always move the required files to whatever directory the Eclipse installer is examining.

With Eclipse up and running, you can download the Android SDK. Extract it to a safe directory on your PC and make a note of where it is. Back in Eclipse you need to add the Android Development Tools. To do this, choose 'Help > Install new software'. Next to 'Work with', enter the update site address and click 'Add'. In the pane below this, check 'Development tools' and click 'Next'.
Select 'Android DDMS' and 'Android Development Tools'. Click 'Next', accept the terms and restart. You need to point the ADT plugin to where you extracted the Android SDK. In Eclipse choose 'Window > Preferences > Android'. Next to 'SDK location' click 'Browse' and locate the folder with the SDK. Click 'Apply' and 'OK'.

Android platform

Now that you've sorted out the programming environment, you also need to get at least one version of the Android platform. You can do this in the Android SDK and AVD Manager, which you can launch in Eclipse if you've set your system up correctly.

Choose 'Window > Android SDK and AVD Manager' to open it, then select 'Available packages' and tick the box next to the platform package you want. Click 'Install selected' and wait for the components to download. Verify and accept the new components if prompted and they will be added to your existing Android SDK folders.

Android virtual devices

Having downloaded a version of Android, you need to set up an Android Virtual Device (AVD) to run the emulator. You can do this in the Android SDK and AVD Manager. Choose 'Window > Android SDK and AVD manager' and select 'Virtual devices'. Click 'New' and provide a name for your new device. Select the Android platform that you want to use as the target.

Under 'Hardware', click 'New' and select a device if you want to add more virtual hardware, then click 'Create AVD'. For a simple AVD, you'll generally be fine sticking with the default options. You can now close the Android SDK and AVD Manager.
In the field marked 'Project name', enter HelloAndroid. For 'Application name' enter Hello, Android. For 'Package name' supply com.example.helloandroid and for 'Create Activity', enter HelloAndroid. Click 'Finish'.

These parameters are used to set up your project in Eclipse. The project name is also the name for the directory in your workspace that will contain your project files. Eclipse will create it for you. Assuming you accepted the default Windows workspace of C:\Users\[username]\workspace, you'll find the above directory at C:\Users\[username]\workspace\HelloAndroid. If you browse to this in Windows Explorer, you'll see a number of subfolders and files set up as part of the project.

The application name is the title of your app, which will be displayed on the Android device. Change this to change the name of the app. You need to be a bit more careful with the package name. This is the namespace for the package where your source code resides. It needs to follow the rules for naming packages in Java. It also needs to be unique across the Android system, which is why a domain-style package is used; 'com.example' is reserved for examples like this. If you develop an app that's published, you'll need to use your own namespace. This usually relates to the organisation publishing the app.

'Create Activity' relates to the class stub generated by the plug-in. An activity is basically an action. It might need to set up a user interface if it needs one. We left other project fields at their default values, but it's useful to know what they do. 'Min SDK version' lets you set the minimum API required by your application. If 'Use default location' is ticked, your project will be saved in your workspace. You can opt to change this if you want to store the files elsewhere. 'Build target' is the platform target for your application. It's the minimum version of Android that it will run on.
If you develop an app to run on an earlier version of Android, it should run on a later one too, but one developed for a later version of the platform probably won't run on an earlier version. For an example like this, the build target isn't critical as long as you can get your application to run in the emulator. It's more of a concern when you come to release an app.

Finally, the option to create the project from an existing example enables you to select some existing code to modify. You'll find this of more interest as you move on to greater programming challenges.

Modify the code

You should now see your project displayed in the Package Explorer, which is shown in the left-hand pane of Eclipse. Double-click 'HelloAndroid' to expand it. Also expand 'src' and 'com.example.helloandroid'. Double-click 'HelloAndroid.java' to see the code that's already been set up. In the main pane you should see the following text:

package com.example.helloandroid;

import android.app.Activity;
import android.os.Bundle;

public class HelloAndroid extends Activity {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.main);
    }
}

If you can't see all of this, try looking to the left-hand side of the pane and expanding any plus signs that indicate collapsed code. This defines your application without actually doing anything at this stage. To make it do some work, we need to add an object that will contain your text. Having done that, we also need to specify the text.

Below 'import android.os.Bundle;' add the following line:

import android.widget.TextView;

Also add the following above the two sets of closing curly brackets:

TextView tv = new TextView(this);
tv.setText("My First Android App");
setContentView(tv);

You can replace the text within the quotes to make your app say whatever you like.
Check that the code in its entirety reads as the following, assuming you kept the displayed text the same:

package com.example.helloandroid;

import android.app.Activity;
import android.os.Bundle;
import android.widget.TextView;

public class HelloAndroid extends Activity {
    /** Called when the activity is first created. */
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        TextView tv = new TextView(this);
        tv.setText("My First Android App");
        setContentView(tv);
    }
}

Save the changes to your code. You can now try it out in the Android emulator. In Eclipse, choose 'Run > Run > Android application'. The emulator launches. It can take a few minutes to boot into Android, so be patient. Once booted, your app should run automatically and you'll see a grey title bar with the app name in it. Below this, your chosen text is displayed.

Press the 'Home' button in the emulator to return to the Android home screen. Click the 'Applications' button to see the list of available applications. Among these you should see 'Hello, Android'. Select this to launch your app again.

Test your app on an Android device

Now you've successfully run your app in the emulator, you can try running it on a real device. First you need to ensure that the USB driver is installed in the Android SDK and AVD manager. Choose 'Window > Android SDK and AVD manager > Available packages'. Select the Android repository, ensure that the USB driver is ticked and click 'Install selected'.

Connect your phone to a spare USB port and wait for Windows to detect it. In the New Hardware wizard, choose 'Locate and install drivers' and opt to browse your computer for the driver software. Browse to the 'Android SDK' folder and locate the subfolder for the USB driver. Windows should find and install it from here.

Now you need to declare your app as debuggable. In Eclipse, expand your HelloAndroid application and double-click 'AndroidManifest.xml'.
Move to the 'Application' tab and select 'True' from the Debuggable dropdown list. Save the project. Go to your Android phone and choose 'Menu' from the home screen, then select 'Applications > Development' and enable USB debugging. Now you can reconnect it to your PC via USB.

If you want to check that the SDK can see your phone, browse to the 'Tools' directory in your 'Android SDK' folder. Launch 'adb.exe' and you should be able to see your phone listed as 'Device'.

To launch your application on the connected phone, choose 'Run > Run > Android application' in Eclipse. Now you have both the emulator and your phone connected, so you need to specify which you want to run the app on. Eclipse presents you with a Device Chooser that lists all the available devices and emulators. Select your phone from this list to install and run the app.

Now you've produced and run a very basic application from raw code in an emulator and on an Android device, you can begin to learn how to develop your own. It helps to have some knowledge of Java programming, but you'll also find a number of stepped tutorials in the Android Developer Resources pages. These include introductions to the different views available to apps and how to implement them. You'll also find ways to use common resources like location information, and find out how to debug your work.

You can find a full list of sample code on these pages too. This will help you to work through example applications that you can modify to your own ends. These include games such as Snake and Lunar Lander, plus utilities like Note Pad and Wiktionary. You can find even more samples at Apps-for-Android.
http://www.techradar.com/us/news/phone-and-communications/mobile-phones/how-to-build-an-android-app-1046599
SQL Server supports referencing heterogeneous OLE DB data sources in Transact-SQL statements by using either a linked server name or an ad hoc name.

A linked server is a virtual server that has been defined to SQL Server with all the information required to access an OLE DB data source. A linked server name is defined by using the sp_addlinkedserver system stored procedure. The linked server definition contains all the information required to locate the OLE DB data source. Local SQL Server logins are mapped to logins in the linked server by using sp_addlinkedsrvlogin. Remote tables can be referenced by using the linked server name as the server name part of a four-part object name. For more information, see Identifying a Data Source by Using a Linked Server Name.

An ad hoc name is used for infrequent queries against OLE DB data sources that are not defined as linked servers. In SQL Server, the OPENROWSET and OPENDATASOURCE functions provide connection information for accessing data from OLE DB data sources.

By default, ad hoc names are not supported. The DisallowAdhocAccess provider option must be set to 0 and the Ad Hoc Distributed Queries advanced configuration option must be enabled.

OPENROWSET and OPENDATASOURCE should be used only to reference OLE DB data sources that are accessed infrequently. For any data sources that will be accessed more than several times, define a linked server. Neither OPENDATASOURCE nor OPENROWSET provides all the functionality of linked server definitions. This includes security management and the ability to query catalog information. Every time that these functions are called, all connection information, including passwords, must be provided.

OPENROWSET and OPENDATASOURCE appear to be functions and for convenience are referred to as functions; however, OPENROWSET and OPENDATASOURCE are macros and do not support supplying Transact-SQL variables as arguments.
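As a sketch of the linked-server workflow described above, the following Transact-SQL defines a linked server, maps a login, and queries a remote table. The server, database, login, and table names here are invented for illustration:

```sql
-- Define a linked server for a remote SQL Server instance.
EXEC sp_addlinkedserver
    @server = N'SalesSrv',            -- name used in queries (illustrative)
    @srvproduct = N'',
    @provider = N'SQLNCLI',           -- SQL Server Native Client OLE DB provider
    @datasrc = N'remotehost\sales';   -- network name of the remote server

-- Map local logins to a login on the linked server.
EXEC sp_addlinkedsrvlogin
    @rmtsrvname = N'SalesSrv',
    @useself = N'FALSE',
    @locallogin = NULL,
    @rmtuser = N'salesreader',
    @rmtpassword = N'********';

-- Reference a remote table through a four-part name.
SELECT TOP 10 *
FROM SalesSrv.SalesDb.dbo.Orders;
```

Once the definition exists, the connection details and login mapping are stored with the server, so queries only need the four-part name.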
OPENROWSET can be used with any OLE DB provider that returns a rowset, and can be used anywhere a table or view reference is used in a Transact-SQL statement. OPENROWSET is specified with the following:

All the information required to connect to the OLE DB data source.

Either the name of an object that will generate a rowset, or a query that will generate a rowset.

OPENDATASOURCE provides connection information as part of a four-part object name. This function supports only OLE DB providers that expose multiple rowsets by using the catalog.schema.object notation. OPENDATASOURCE can be used in the same locations in Transact-SQL syntax that a linked server name can be used. OPENDATASOURCE is specified with the information required to connect to the OLE DB data source.

sp_addserver is maintained as a compatibility feature for existing applications, but this stored procedure will not be supported in future releases. As applications are ported to SQL Server 2008, these applications may have to be run for a while with some new code that uses distributed queries against a linked server definition and some legacy code that uses a remote server definition.

Both linked servers and remote servers use the same namespace. Therefore, either the linked server or the remote server definition must use a name that differs from the network name of the server being accessed remotely. Define one of the entries with a different server name, and use sp_setnetname to associate that definition with the network name of the remote server.
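For comparison with the linked-server approach, here is a sketch of the two ad hoc forms. All connection details must be supplied on every call, and the server, database, and table names are invented for illustration:

```sql
-- Ad hoc query with OPENROWSET: the provider name, connection string,
-- and a pass-through query are all supplied in the call.
SELECT o.*
FROM OPENROWSET('SQLNCLI',
        'Server=remotehost\sales;Trusted_Connection=yes;',
        'SELECT OrderID, Total FROM SalesDb.dbo.Orders') AS o;

-- Equivalent reference with OPENDATASOURCE, used where a linked
-- server name would otherwise appear in a four-part name.
SELECT OrderID, Total
FROM OPENDATASOURCE('SQLNCLI',
        'Data Source=remotehost\sales;Integrated Security=SSPI')
        .SalesDb.dbo.Orders;
```

Both forms fail unless the Ad Hoc Distributed Queries configuration option is enabled, as noted above.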
http://msdn.microsoft.com/en-us/library/ms188313(v=sql.100).aspx
Hi!

> > Either drop this one or explain why it is good idea. It seems to be
> > independend on the rest.
> This code I just copy from old ppc swsusp port, I don't why, :).

So drop the patch...

> > > > @@ -144,9 +151,13 @@
> > > > }
> > > >
> > > > /* Free memory before shutting down devices. */
> > > > - free_some_memory();
> > > > + /* free_some_memory(); */
> >
> > Needs to be if (!swsusp_pagecache), right?
> I think we can drop this one, In write_page_caches has same code, and do
> the best.

So at least delete it properly; no need to comment it out.

> > > + if (swsusp_pbe_pgdir->orig_address == 0) return;
> > > + for (i = 0; i < PAGE_SIZE / (sizeof(unsigned long)); i+=4) {
> > > + *(((unsigned long *)(swsusp_pbe_pgdir->orig_address) + i)) =
> > > + *(((unsigned long *)(swsusp_pbe_pgdir->address) + i));
> > > + *(((unsigned long *)(swsusp_pbe_pgdir->orig_address) + i+1)) =
> > > + *(((unsigned long *)(swsusp_pbe_pgdir->address) + i+1));
> > > + *(((unsigned long *)(swsusp_pbe_pgdir->orig_address) + i+2)) =
> > > + *(((unsigned long *)(swsusp_pbe_pgdir->address) + i+2));
> > > + *(((unsigned long *)(swsusp_pbe_pgdir->orig_address) + i+3)) =
> > > + *(((unsigned long *)(swsusp_pbe_pgdir->address) + i+3));
> >
> > Do you really have to do manual loop unrolling? Why can't C code be
> > same for i386 and ppc?
> here is stupid code, update in my new patch, I using memcopy in i386, it
> create small assemble code.

Warning: memcpy() may uses MMX or SSE or something on some cpus....

> > Please avoid "return (0);". Using "return 0;" will do just fine.
> fixed.
> here is my patch relative with your big diff, hope can merge.

I have already too big difference against mainline, so I can only merge trivial patches at this point. When 2.6.10 comes out, I'd like to merge "no-high-order-allocation" patch, and "pagecache writer" sometime after
http://lkml.org/lkml/2004/11/24/76
This is a simple Python project demonstrating a useful application of logarithms. The logarithm of a number to a specified base is the power to which the base must be raised to get the number. From their invention by John Napier in the 17th century until a few decades ago, slide rules and books of log tables were used to simplify multiplication by turning it into a process of addition. Modern science, technology and engineering all depended on that simple idea.

The following example demonstrates logarithms. If...

Exponentiation

10^2 = 100

...then the base 10 logarithm of 100 is 2:

Logarithm

log10(100) = 2

Any number can be used as the base, but the usual bases are 2, 10 and e, which is the base of natural logarithms. e is an irrational number and is 2.718281828 to 9dp. Python's math module provides functions for all three of these, and scientific calculators usually provide at least base e and base 10 logarithm keys. The three Python functions are:

- math.log - base e, ie. natural logarithm
- math.log10 - base 10
- math.log2 - base 2

[Figure: John Napier 1550-1617, discoverer of logarithms]

Coding

This project will consist of two functions, plus a main function to call them. The first function is ridiculously simple and just calculates compound interest over a period of several years, firstly just calculating the final amount and secondly calculating all interim yearly amounts. The second function will carry out the process in reverse: starting with the opening balance and an interest rate, we will calculate how long it will take for our money to grow to a specified amount.

This is probably one of those little bits of mathematics that makes you think "hmm, we did that in school but I'd forgotten all about it". I have used interest as an obvious example but of course the principle can be applied to any form of exponential growth or decay: reproduction of bacteria, radioactive decay and so on.
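A quick interactive check of the three functions, and of the change-of-base rule that relates them:

```python
import math

x = 100.0

print(math.log(x))    # natural (base e) logarithm
print(math.log10(x))  # base 10 logarithm: 2.0
print(math.log2(x))   # base 2 logarithm

# Any base can be derived from any other via the change-of-base rule:
# log_b(x) = log(x) / log(b)
print(math.log(x) / math.log(10))  # same value as math.log10(x)
```

The change-of-base rule is why, later in this project, it won't matter which logarithm function we use as long as we are consistent.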
Create a new folder somewhere convenient and within it create an empty file called logarithms.py. You can download the zip file or clone/download the Github repo if you prefer. Then paste the following into the file.

Source Code Links

logarithms.py

import math


def main():
    """
    Run the calculateamounts and timetoamount function.
    """
    print("-----------------")
    print("| codedrome.com |")
    print("| Logarithms    |")
    print("-----------------\n")

    calculateamounts(1000, 1.1, 12)
    print("")
    timetoamount(1000, 1.1, 3138.43)


def calculateamounts(startamount, interest, years):
    """
    Calculate totals including compound interest from arguments,
    as a final total and then including intermediate yearly totals.
    """
    # Calculate and show final amount
    currentamount = startamount

    # Due to operator precedence ** is evaluated before *
    endamount = startamount * interest ** years

    print("startamount {:.2f}".format(startamount))
    print("years       {:d}".format(years))
    print("interest    {:.2f}%".format((interest - 1) * 100))
    print("endamount   {:.2f}\n".format(endamount))

    # Calculate all yearly amounts.
    for y in range(1, years + 1):
        currentamount *= interest
        print("Year {:2d}: {:.2f}".format(y, currentamount))


def timetoamount(startamount, interest, requiredamount):
    """
    Calculate and print the number of years required to reach
    requiredamount from startamount at given rate of interest.
    """
    yearstorequiredamount = math.log(requiredamount / startamount) / math.log(interest)

    print("startamount           {:.2f}".format(startamount))
    print("interest              {:.2f}%".format((interest - 1) * 100))
    print("requiredamount        {:.2f}".format(requiredamount))
    print("yearstorequiredamount {:.2f}".format(yearstorequiredamount))


main()

calculateamounts

This function takes as arguments a start amount, an interest rate (which is expressed as a decimal fraction, eg. 10% would be 1.1) and a number of years. It calculates how much you will have after earning the specified interest for the specified number of years.
It actually does this twice: firstly in one hit, by multiplying the start amount by interest to the power of years - note the comment regarding operator precedence. It then does it year by year within a for loop.

timetoamount

Now let's get to the main purpose of this little program: given a certain amount of money earning a certain amount of interest, how long will it take for the balance to grow to a certain amount? This is calculated by taking the logarithm of the new amount as a proportion of the original amount, and dividing it by the logarithm of the growth rate. That sentence probably doesn't make sense, even if you understood how to do it beforehand, so here it is as a formula.

Formula for yearstorequiredamount

yearstorequiredamount = log(requiredamount / startamount) / log(interest)

Using the values hardcoded into main we already know that in 12 years £1,000 will grow to £3,138 at 10% pa. (If you know where I can get 10% interest please let me know immediately!) Plugging those numbers into the formula we get

Formula for yearstorequiredamount - example

yearstorequiredamount = log(3138.43 / 1000) / log(1.1)
                      = log(3.13843) / log(1.1)
                      = 0.496712446 / 0.041392685
                      = 12.00000546

The rounding error is less than 3 minutes so probably not worth worrying about, but basically we have turned the process round and calculated what we already knew: that it takes 12 years to get to the final amount from our starting point at the specified interest rate.

I have used math.log, the natural logarithm (base e), but it doesn't matter which you use, as we are calculating ratios rather than actual amounts - as long as you use the same one both times. If you have a minute to spare try out all three, and then try breaking the code by mixing up different log functions.
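The point that the base doesn't matter is easy to demonstrate. This small sketch (not part of the article's source file) computes yearstorequiredamount with all three of Python's log functions and gets the same answer each time, because the base cancels out in the division:

```python
import math

startamount = 1000
interest = 1.1
requiredamount = 3138.43

ratio = requiredamount / startamount

# Same formula, three different bases - the base cancels out
# because numerator and denominator use the same log function.
years_e = math.log(ratio) / math.log(interest)
years_10 = math.log10(ratio) / math.log10(interest)
years_2 = math.log2(ratio) / math.log2(interest)

print(years_e, years_10, years_2)  # all approximately 12
```

Mixing the bases (say, math.log10 on top and math.log underneath) is exactly the "breaking the code" experiment suggested above: the result comes out scaled by a constant factor, and wrong.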
Now run the program with this command:

Running the Program

python3.7 logarithms.py

This should give you the following output:

Program Output

-----------------
| codedrome.com |
| Logarithms |
-----------------

startamount 1000.00
years 12
interest 10.00%
endamount 3138.43

Year  1: 1100.00
Year  2: 1210.00
Year  3: 1331.00
Year  4: 1464.10
Year  5: 1610.51
Year  6: 1771.56
Year  7: 1948.72
Year  8: 2143.59
Year  9: 2357.95
Year 10: 2593.74
Year 11: 2853.12
Year 12: 3138.43

startamount 1000.00
interest 10.00%
requiredamount 3138.43
yearstorequiredamount 12.00

The first chunk of output tells you that after 12 years at 10% £1000 will have grown to £3138.43. We then have the same information again, but this time calculated year by year. Lastly we reverse the process by calculating how long it will take to get to £3138.43 from £1000 at 10% pa.

Please follow this blog on Twitter for news of future posts and other useful Python stuff.
https://www.codedrome.com/logarithms-a-practical-use-in-python/
There are times when it is important for you to get input from users for the execution of programs. To do this you need Java console input methods.

Java Reading Console Input Methods
- Using BufferedReader Class
- Using Scanner Class
- Using Console Class

Now let us have a close look at these methods one by one.

Using the BufferedReader Class

This is the oldest technique in Java for reading console input, first introduced in JDK 1.0. Using this technique we wrap System.in in an InputStreamReader, which is in turn wrapped in a BufferedReader. This is done using the following syntax:

BufferedReader br = new BufferedReader(new InputStreamReader(System.in));

Using this will link the character-based stream br to the console for input through System.in.

NOTE: when using BufferedReader an IOException needs to be thrown, otherwise an error message will be shown at compile time. The BufferedReader class is defined in the java.io package, so you need to import java.io first.

Let us have a look at an example to make the concept clearer. In this example, we will get a string of characters entered by the user. The program will display the characters one by one on the screen. It will continue until the termination character is encountered in the string.

// Program to read a string using BufferedReader class.
import java.io.*;

class bread
{
    public static void main(String args[]) throws IOException
    {
        char ch;
        BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
        System.out.println("Enter any string of your choice (To terminate press 'z')");
        do
        {
            ch = (char) br.read();
            System.out.println(ch);
        } while (ch != 'z');
    }
}

Output 1

Here, when z is encountered in the string of characters, the program will stop displaying any further characters entered by the user.

Output 2

Using the Scanner Class

The Scanner class was introduced in JDK 1.5 and has been widely used ever since.
The Scanner class provides various methods that ease the way we get input from the console. The Scanner class is defined in the java.util package, so you need to import it first. The Scanner also uses System.in and its syntax is as follows:

Scanner obj_name = new Scanner(System.in);

Some of the utility methods the Scanner class provides are as follows:

Let us have a look at an example to understand the concept in a better way. In this example we will read numbers from a user, find the sum of these numbers and display the result.

// Program to calculate the sum of n numbers using Scanner class
import java.util.*;

class scanner_eg
{
    public static void main(String args[])
    {
        Scanner obj = new Scanner(System.in);
        double total = 0;
        System.out.println("Enter numbers to add. Enter any string to end list.");
        while (obj.hasNext())
        {
            if (obj.hasNextDouble())
            {
                total += obj.nextDouble();
            }
            else
            {
                break;
            }
        }
        System.out.println("Sum of the numbers is " + total);
    }
}

Output

NOTE: whenever we use a next method, at run time it will search for input of the type it expects. If such input is not available, it will throw an exception. Hence, it is always beneficial to check the input beforehand by using the corresponding hasNext method before calling the next method.

Using the Console Class

This is another way of reading user input from the console in Java, introduced in JDK 1.6. This technique also uses System.in for reading the input. It is best suited for reading input which does not require echoing of user input, like reading user passwords: it reads user input without echoing the characters entered by the user. The Console class is defined in the java.io package, which needs to be imported before using the Console class. Let us consider an example.
import java.io.*;

class consoleEg
{
    public static void main(String args[])
    {
        String name;
        System.out.println("Enter your name: ");
        Console c = System.console();
        name = c.readLine();
        System.out.println("Your name is: " + name);
    }
}

Output

NOTE: the only drawback of this technique is that it works only in interactive environments; it does not work inside an IDE.

In Java we can read console input in three ways: using the BufferedReader class, the Scanner class, or the Console class. Depending on which way you want to read user input, you can implement it in your programs.
https://csveda.com/java/java-reading-console-input/
Revision history for DateStamp

1.0.4  Thu Apr 6 12:34:15 CDT 2006
    Changed a "like" statement in the test case. No changes to actual code, though.

1.0.3  Fri Jan 13 16:55:15 CST 2006
    Ah, a new year. A new month. A slew of bugs revealed. Fixed some problems
    with how the month/day/year was being looked up. No changes to any of the
    interface methods.

1.0.2  Tue Nov 29 19:23:49 CST 2005
    Fixed %day_alpha error. Renamed entire release from "DateTime::Current" to
    "DateStamp" because of namespace conflicts.
    !!! NOTE !!!
    If--for any reason--you have version 1.0.0/1.0.1 of DateTime::Current,
    please delete it and replace with current version of DateStamp.
    >> modules/DateTime-Current-v1.0.0.meta
    >> Scheduled for deletion (due at Fri, 02 Dec 2005 15:25:30 GMT)
    >> modules/DateTime-Current-v1.0.0.readme
    >> Scheduled for deletion (due at Fri, 02 Dec 2005 15:25:30 GMT)
    >> modules/DateTime-Current-v1.0.0.tar.gz
    >> Scheduled for deletion (due at Fri, 02 Dec 2005 15:25:30 GMT)

1.0.1  Mon Nov 28 14:47:30 CST 2005
    Squashed small bug in _hour value calculation. Initial release to CPAN.

1.0.0
    Packaged and ready to test/submit.
https://metacpan.org/release/TWYLIE/DateStamp-v1.0.4/source/Changes
salcio:

Hi, I think I found quite a nice solution - no recursion, no arrays. We iterate only once through the whole list.

int GetNode(Node *head, int positionFromTail)
{
    int index = 0;
    Node* current = head;
    Node* result = head;
    while (current != NULL)
    {
        current = current->next;
        if (index++ > positionFromTail)
        {
            result = result->next;
        }
    }
    return result->data;
}

RodneyShag:

O(1) space complexity Java iterative solution. From my HackerRank solutions. I use the "runner" technique. We make a runner pointer move k elements into the list. Then we create a curr pointer. We move both pointers 1 step at a time until runner is at the end. When this happens, curr will be at the kth to last element.

Runtime: O(n)
Space Complexity: O(1)

int GetNode(Node head, int k) {
    Node curr = head;
    Node runner = head;

    /* Move runner into the list by k elements */
    for (int i = 0; i < k; i++) {
        runner = runner.next;
    }

    /* Move both pointers */
    while (runner.next != null) {
        curr = curr.next;
        runner = runner.next;
    }
    return curr.data;
}

Let me know if you have any questions.

onuremreerol:

Here is another clean and tidy solution in Java. If you have any questions, feel free to ask.

int GetNode(Node head, int n) {
    Node temp = head;
    for (int i = 0; head.next != null; i++) {
        head = head.next;
        if (i >= n)
            temp = temp.next;
    }
    return temp.data;
}

shubhamgoyal1101:

Python 3 code, easy to understand :). I used one pointer to get the length of the list, then subtracted the position from it and reached the required node via another pointer which started at the head of the list.

def GetNode(head, position):
    count = 0
    second_head = head
    while head.next:
        head = head.next
        count += 1
    for i in range(count - position):
        second_head = second_head.next
    return second_head.data
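For comparison across the answers in this thread, here is the same "runner" two-pointer idea from the Java answer, sketched in Python with a minimal hypothetical Node class (HackerRank supplies its own node type, so the class here exists only to make the example self-contained):

```python
class Node:
    def __init__(self, data, next=None):
        self.data = data
        self.next = next

def get_node(head, position_from_tail):
    """Return the data of the node position_from_tail places from the end."""
    runner = head
    # Put the runner position_from_tail nodes ahead of current.
    for _ in range(position_from_tail):
        runner = runner.next
    current = head
    # When the runner reaches the last node, current is the answer.
    while runner.next is not None:
        runner = runner.next
        current = current.next
    return current.data

# Build the list 1 -> 2 -> 3 -> 4 -> 5
head = Node(1, Node(2, Node(3, Node(4, Node(5)))))
print(get_node(head, 0))  # 5 (the tail itself)
print(get_node(head, 2))  # 3
```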
https://www.hackerrank.com/challenges/get-the-value-of-the-node-at-a-specific-position-from-the-tail/forum
.rangeThis module defines the notion of a range. Ranges generalize the concept of arrays, lists, or anything that involves sequential access. This abstraction enables the same set of algorithms (see std.algorithm) to be used with a vast variety of different concrete types. For example, a linear search algorithm such as std.algorithm.find works not just for arrays, but for linked-lists, input files, incoming network data, etc. For more detailed information about the conceptual aspect of ranges and the motivation behind them, see Andrei Alexandrescu's article On Iteration. This module defines several templates for testing whether a given object is a range, and what kind of range it is: A number of templates are provided that test for various range capabilities: A rich set of range creation and composition templates are provided that let you construct new ranges out of existing ranges: These range-construction tools are implemented using templates; but sometimes an object-based interface for ranges is needed. For this purpose, this module provides a number of object and interface definitions that can be used to wrap around range objects created by the above templates. Ranges whose elements are sorted afford better efficiency with certain operations. For this, the assumeSorted function can be used to construct a SortedRange from a pre-sorted range. The std.algorithm.sort function also conveniently returns a SortedRange. SortedRange objects provide some additional range operations that take advantage of the fact that the range is sorted. Finally, this module also defines some convenience functions for manipulating ranges: Source: std/range.d License: Boost License 1.0. Authors: Andrei Alexandrescu, David Simcha, and Jonathan M Davis. Credit for some of the ideas in building this module goes to Leonardo Maffi. - template isInputRange(R) - Returns true if R is an input range. An input range must define the primitives empty, popFront, and front. 
The following code should compile for any input range. R r; // can define a range object if (r.empty) {} // can test for empty r.popFront(); // can invoke popFront() auto h = r.front; // can get the front of the range of non-void typeThe semantics of an input range (not checkable during compilation) are assumed to be the following (r is an object of type R): - r.empty returns false iff there is more data available in the range. - r.front returns the current element in the range. It may return by value or by reference. Calling r.front is allowed only if calling r.empty has, or would have, returned false. - r.popFront advances to the next element in the range. Calling r.popFront is allowed only if calling r.empty has, or would have, returned false. - void put(R, E)(ref R r, E e); - Outputs e to r. The exact effect is dependent upon the two types. Several cases are accepted, as described below. The code snippets are attempted in order, and the first to compile "wins" and gets evaluated. In this table "doPut" is a method that places e into r, using the correct primitive: r.put(e) if R defines put, r.front = e if r is an input range (followed by r.popFront()), or r(e) otherwise. Tip: put should not be used "UFCS-style", e.g. r.put(e). Doing this may call R.put directly, by-passing any transformation feature provided by Range.put. put(r, e) is prefered. - template isOutputRange(R, E) - Returns true if R is an output range for elements of type E. An output range is defined functionally as a range that supports the operation put(r, e) as defined above. Examples: void myprint(in char[] s) { } static assert(isOutputRange!(typeof(&myprint), char)); static assert(!isOutputRange!(char[], char)); static assert( isOutputRange!(dchar[], wchar)); static assert( isOutputRange!(dchar[], dchar)); - template isForwardRange(R) - Returns true if R is a forward range. A forward range is an input range r that can save "checkpoints" by saving r.save to another value of type R. 
Notable examples of input ranges that are not forward ranges are file/socket ranges; copying such a range will not save the position in the stream, and they most likely reuse an internal buffer as the entire stream does not sit in memory. Subsequently, advancing either the original or the copy will advance the stream, so the copies are not independent. The following code should compile for any forward range. static assert(isInputRange!R); R r1; static assert (is(typeof(r1.save) == R));Saving a range is not duplicating it; in the example above, r1 and r2 still refer to the same underlying data. They just navigate that data independently. The semantics of a forward range (not checkable during compilation) are the same as for an input range, with the additional requirement that backtracking must be possible by saving a copy of the range object with save and using it later. - template isBidirectionalRange(R) - Returns true if R is a bidirectional range. A bidirectional range is a forward range that also offers the primitives back and popBack. The following code should compile for any bidirectional range. R r; static assert(isForwardRange!R); // is forward range r.popBack(); // can invoke popBack auto t = r.back; // can get the back of the range auto w = r.front; static assert(is(typeof(t) == typeof(w))); // same type for front and backThe semantics of a bidirectional range (not checkable during compilation) are assumed to be the following (r is an object of type R): - r.back returns (possibly a reference to) the last element in the range. Calling r.back is allowed only if calling r.empty has, or would have, returned false. - template isRandomAccessRange(R) - Returns true if R is a random-access range. A random-access range is a bidirectional range that also offers the primitive opIndex, OR an infinite forward range that offers opIndex. In either case, the range must either offer length or be infinite. The following code should compile for any random-access range. 
// range is finite and bidirectional or infinite and forward. static assert(isBidirectionalRange!R || isForwardRange!R && isInfinite!R); R r = void; auto e = r[1]; // can index static assert(is(typeof(e) == typeof(r.front))); // same type for indexed and front static assert(!isNarrowString!R); // narrow strings cannot be indexed as ranges static assert(hasLength!R || isInfinite!R); // must have length or be infinite // $ must work as it does with arrays if opIndex works with $ static if(is(typeof(r[$]))) { static assert(is(typeof(r.front) == typeof(r[$]))); // $ - 1 doesn't make sense with infinite ranges but needs to work // with finite ones. static if(!isInfinite!R) static assert(is(typeof(r.front) == typeof(r[$ - 1]))); }The semantics of a random-access range (not checkable during compilation) are assumed to be the following (r is an object of type R): - r.opIndex(n) returns a reference to the nth element in the range. - template hasMobileElements(R) - Returns true iff R supports the moveFront primitive, as well as moveBack and moveAt if it's a bidirectional or random access range. These may be explicitly implemented, or may work via the default behavior of the module level functions moveFront and friends. Examples: static struct HasPostblit { this(this) {} } auto nonMobile = map!"a"(repeat(HasPostblit.init)); static assert(!hasMobileElements!(typeof(nonMobile))); static assert( hasMobileElements!(int[])); static assert( hasMobileElements!(inout(int)[])); static assert( hasMobileElements!(typeof(iota(1000)))); - template ElementType(R) - The element type of R. R does not have to be a range. The element type is determined as the type yielded by r.front for an object r of type R. For example, ElementType!(T[]) is T if T[] isn't a narrow string; if it is, the element type is dchar. If R doesn't have front, ElementType!R is void. 
Examples: // Standard arrays: returns the type of the elements of the array static assert(is(ElementType!(int[]) == int)); // Accessing .front retrieves the decoded dchar static assert(is(ElementType!(char[]) == dchar)); // rvalue static assert(is(ElementType!(dchar[]) == dchar)); // lvalue // Ditto static assert(is(ElementType!(string) == dchar)); static assert(is(ElementType!(dstring) == immutable(dchar))); // For ranges it gets the type of .front. auto range = iota(0, 10); static assert(is(ElementType!(typeof(range)) == int)); - template ElementEncodingType(R) - The encoding element type of R. For narrow strings (char[], wchar[] and their qualified variants including string and wstring), ElementEncodingType is the character type of the string. For all other types, ElementEncodingType is the same as ElementType. Examples: // internally the range stores the encoded type static assert(is(ElementEncodingType!(char[]) == char)); static assert(is(ElementEncodingType!(wstring) == immutable(wchar))); static assert(is(ElementEncodingType!(byte[]) == byte)); auto range = iota(0, 10); static assert(is(ElementEncodingType!(typeof(range)) == int)); - template hasSwappableElements(R) - Returns true if R is a forward range and has swappable elements. The following code should compile for any range with swappable elements. R r; static assert(isForwardRange!(R)); // range is forward swap(r.front, r.front); // can swap elements of the rangeExamples: static assert(!hasSwappableElements!(const int[])); static assert(!hasSwappableElements!(const(int)[])); static assert(!hasSwappableElements!(inout(int)[])); static assert( hasSwappableElements!(int[])); - template hasAssignableElements(R) - Returns true if R is a forward range and has mutable elements. The following code should compile for any range with assignable elements. 
R r; static assert(isForwardRange!R); // range is forward auto e = r.front; r.front = e; // can assign elements of the rangeExamples: static assert(!hasAssignableElements!(const int[])); static assert(!hasAssignableElements!(const(int)[])); static assert( hasAssignableElements!(int[])); static assert(!hasAssignableElements!(inout(int)[])); - template hasLvalueElements(R) - Tests whether R has lvalue elements. These are defined as elements that can be passed by reference and have their address taken. Examples: static assert( hasLvalueElements!(int[])); static assert( hasLvalueElements!(const(int)[])); static assert( hasLvalueElements!(inout(int)[])); static assert( hasLvalueElements!(immutable(int)[])); static assert(!hasLvalueElements!(typeof(iota(3)))); auto c = chain([1, 2, 3], [4, 5, 6]); static assert( hasLvalueElements!(typeof(c))); - template hasLength(R) - Returns true if R has a length member that returns an integral type. R does not have to be a range. Note that length is an optional primitive as no range must implement it. Some ranges do not store their length explicitly, some cannot compute it without actually exhausting the range (e.g. socket streams), and some other ranges may be infinite. Although narrow string types (char[], wchar[], and their qualified derivatives) do define a length property, hasLength yields false for them. This is because a narrow string's length does not reflect the number of characters, but instead the number of encoding units, and as such is not useful with range-oriented algorithms. Examples: static assert(!hasLength!(char[])); static assert( hasLength!(int[])); static assert( hasLength!(inout(int)[])); struct A { ulong length; } struct B { size_t length() { return 0; } } struct C { @property size_t length() { return 0; } } static assert( hasLength!(A)); static assert(!hasLength!(B)); static assert( hasLength!(C)); - template isInfinite(R) - Returns true if R is an infinite input range. 
An infinite input range is an input range that has a statically-defined enumerated member called empty that is always false, for example: struct MyInfiniteRange { enum bool empty = false; ... }Examples: static assert(!isInfinite!(int[])); static assert( isInfinite!(Repeat!(int))); - template hasSlicing(R) - Returns true if R offers a slicing operator with integral boundaries that returns a forward range type. For finite ranges, the result of opSlice must be of the same type as the original range type. If the range defines opDollar, then it must support subtraction. For infinite ranges, when not using opDollar, the result of opSlice must be the result of take or takeExactly on the original range (they both return the same type for infinite ranges). However, when using opDollar, the result of opSlice must be that of the original range type. The following code must compile for hasSlicing to be true: R r = void; static if(isInfinite!R) typeof(take(r, 1)) s = r[1 .. 2]; else { static assert(is(typeof(r[1 .. 2]) == R)); R s = r[1 .. 2]; } s = r[1 .. 2]; static if(is(typeof(r[0 .. $]))) { static assert(is(typeof(r[0 .. $]) == R)); R t = r[0 .. $]; t = r[0 .. $]; static if(!isInfinite!R) { static assert(is(typeof(r[0 .. $ - 1]) == R)); R u = r[0 .. $ - 1]; u = r[0 .. $ - 1]; } } static assert(isForwardRange!(typeof(r[1 .. 2]))); static assert(hasLength!(typeof(r[1 .. 
2])));Examples: static assert( hasSlicing!(int[])); static assert( hasSlicing!(const(int)[])); static assert(!hasSlicing!(const int[])); static assert( hasSlicing!(inout(int)[])); static assert(!hasSlicing!(inout int [])); static assert( hasSlicing!(immutable(int)[])); static assert(!hasSlicing!(immutable int[])); static assert(!hasSlicing!string); static assert( hasSlicing!dstring); enum rangeFuncs = "@property int front();" ~ "void popFront();" ~ "@property bool empty();" ~ "@property auto save() { return this; }" ~ "@property size_t length();"; struct A { mixin(rangeFuncs); int opSlice(size_t, size_t); } struct B { mixin(rangeFuncs); B opSlice(size_t, size_t); } struct C { mixin(rangeFuncs); @disable this(); C opSlice(size_t, size_t); } struct D { mixin(rangeFuncs); int[] opSlice(size_t, size_t); } static assert(!hasSlicing!(A)); static assert( hasSlicing!(B)); static assert( hasSlicing!(C)); static assert(!hasSlicing!(D)); struct InfOnes { enum empty = false; void popFront() {} @property int front() { return 1; } @property InfOnes save() { return this; } auto opSlice(size_t i, size_t j) { return takeExactly(this, j - i); } auto opSlice(size_t i, Dollar d) { return this; } struct Dollar {} Dollar opDollar() const { return Dollar.init; } } static assert(hasSlicing!InfOnes);
- auto walkLength(Range)(Range range) if (isInputRange!Range && !isInfinite!Range); auto walkLength(Range)(Range range, const size_t upTo) if (isInputRange!Range);
- This is a best-effort implementation of length for any kind of range. If hasLength!Range, simply returns range.length without checking upTo (when specified). Otherwise, walks the range through its length and returns the number of elements seen. Performs Ο(n) evaluations of range.empty and range.popFront(), where n is the effective length of range. The upTo parameter is useful to "cut the losses" in case the interest is in seeing whether the range has at least some number of elements.
If the parameter upTo is specified, stops if upTo steps have been taken and returns upTo. Infinite ranges are compatible, provided the parameter upTo is specified, in which case the implementation simply returns upTo. - auto retro(Range)(Range r) if (isBidirectionalRange!(Unqual!Range)); - Iterates a bidirectional range backwards. The original range can be accessed by using the source property. Applying retro twice to the same range yields the original range. Examples: int[] a = [ 1, 2, 3, 4, 5 ]; assert(equal(retro(a), [ 5, 4, 3, 2, 1 ][])); assert(retro(a).source is a); assert(retro(retro(a)) is a); - auto stride(Range)(Range r, size_t n) if (isInputRange!(Unqual!Range)); - Iterates range r with stride n. If the range is a random-access range, moves by indexing into the range; otherwise, moves by successive calls to popFront. Applying stride twice to the same range results in a stride with a step that is the product of the two applications. Throws: Exception if n == 0. Example: int[] a = [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 ]; assert(equal(stride(a, 3), [ 1, 4, 7, 10 ][])); assert(stride(stride(a, 2), 3) == stride(a, 6)); - auto chain(Ranges...)(Ranges rs) if (Ranges.length > 0 && allSatisfy!(isInputRange, staticMap!(Unqual, Ranges)) && !is(CommonType!(staticMap!(ElementType, staticMap!(Unqual, Ranges))) == void)); - Spans multiple ranges in sequence. The function chain takes any number of ranges and returns a Chain!(R1, R2,...) object. The ranges may be different, but they must have the same element type. The result is a range that offers the front, popFront, and empty primitives. If all input ranges offer random access and length, Chain offers them as well. If only one range is offered to Chain or chain, the Chain type exits the picture by aliasing itself directly to that range's type. 
Example: int[] arr1 = [ 1, 2, 3, 4 ]; int[] arr2 = [ 5, 6 ]; int[] arr3 = [ 7 ]; auto s = chain(arr1, arr2, arr3); assert(s.length == 7); assert(s[5] == 6); assert(equal(s, [1, 2, 3, 4, 5, 6, 7][])); - auto roundRobin(Rs...)(Rs rs) if (Rs.length > 1 && allSatisfy!(isInputRange, staticMap!(Unqual, Rs))); - roundRobin(r1, r2, r3) yields r1.front, then r2.front, then r3.front, after which it pops off one element from each and continues again from r1. For example, if two ranges are involved, it alternately yields elements off the two ranges. roundRobin stops after it has consumed all ranges (skipping over the ones that finish early). Examples: int[] a = [ 1, 2, 3 ]; int[] b = [ 10, 20, 30, 40 ]; auto r = roundRobin(a, b); assert(equal(r, [ 1, 10, 2, 20, 3, 30, 40 ])); - auto radial(Range, I)(Range r, I startingIndex) if (isRandomAccessRange!(Unqual!Range) && hasLength!(Unqual!Range) && isIntegral!I); auto radial(R)(R r) if (isRandomAccessRange!(Unqual!R) && hasLength!(Unqual!R)); - Iterates a random-access range starting from a given point and progressively extending left and right from that point. If no initial point is given, iteration starts from the middle of the range. Iteration spans the entire range. Examples: int[] a = [ 1, 2, 3, 4, 5 ]; assert(equal(radial(a), [ 3, 4, 2, 5, 1 ])); a = [ 1, 2, 3, 4 ]; assert(equal(radial(a), [ 2, 3, 1, 4 ])); - struct Take(Range) if (isInputRange!(Unqual!Range) && !(!isInfinite!(Unqual!Range) && hasSlicing!(Unqual!Range) || is(Range T == Take!T))); Take!R take(R)(R input, size_t n) if (isInputRange!(Unqual!R) && !isInfinite!(Unqual!R) && hasSlicing!(Unqual!R)); - Lazily takes only up to n elements of a range. This is particularly useful when using with infinite ranges. If the range offers random access and length, Take offers them as well. 
Examples: int[] arr1 = [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ]; auto s = take(arr1, 5); assert(s.length == 5); assert(s[4] == 5); assert(equal(s, [ 1, 2, 3, 4, 5 ][])); - auto takeExactly(R)(R range, size_t n) if (isInputRange!R); - Similar to take, but assumes that range has at least n elements. Consequently, the result of takeExactly(range, n) always defines the length property (and initializes it to n) even when range itself does not define length. The result of takeExactly is identical to that of take in cases where the original range defines length or is infinite. Examples: auto a = [ 1, 2, 3, 4, 5 ]; auto b = takeExactly(a, 3); assert(equal(b, [1, 2, 3])); static assert(is(typeof(b.length) == size_t)); assert(b.length == 3); assert(b.front == 1); assert(b.back == 3); - auto takeOne(R)(R source) if (isInputRange!R); - Returns a range with at most one element; for example, takeOne([42, 43, 44]) returns a range consisting of the integer 42. Calling popFront() off that range renders it empty. In effect takeOne(r) is somewhat equivalent to take(r, 1) but in certain interfaces it is important to know statically that the range may only have at most one element. The type returned by takeOne is a random-access range with length regardless of R's capabilities (another feature that distinguishes takeOne from take). Examples: auto s = takeOne([42, 43, 44]); static assert(isRandomAccessRange!(typeof(s))); assert(s.length == 1); assert(!s.empty); assert(s.front == 42); s.front = 43; assert(s.front == 43); assert(s.back == 43); assert(s[0] == 43); s.popFront(); assert(s.length == 0); assert(s.empty); - auto takeNone(R)() if (isInputRange!R); - Returns an empty range which is statically known to be empty and is guaranteed to have length and be random access regardless of R's capabilities. Examples: auto range = takeNone!(int[])(); assert(range.length == 0); assert(range.empty); - auto takeNone(R)(R range) if (isInputRange!R); - Creates an empty range from the given range in Ο(1). 
If it can, it will return the same range type. If not, it will return takeExactly(range, 0). Examples: assert(takeNone([42, 27, 19]).empty); assert(takeNone("dlang.org").empty); assert(takeNone(filter!"true"([42, 27, 19])).empty); - R drop(R)(R range, size_t n) if (isInputRange!R); R dropBack(R)(R range, size_t n) if (isBidirectionalRange!R); - Convenience function which calls range.popFrontN(n) and returns range. drop makes it easier to pop elements from a range and then pass it to another function within a single expression, whereas popFrontN would require multiple statements. dropBack provides the same functionality but instead calls range.popBackN(n). Note: drop and dropBack will only pop up to n elements but will stop if the range is empty first. Examples: assert([0, 2, 1, 5, 0, 3].drop(3) == [5, 0, 3]); assert("hello world".drop(6) == "world"); assert("hello world".drop(50).empty); assert("hello world".take(6).drop(3).equal("lo ")); - R dropExactly(R)(R range, size_t n) if (isInputRange!R); R dropBackExactly(R)(R range, size_t n) if (isBidirectionalRange!R); - Similar to drop and dropBack but they call range.popFrontExactly(n) and range.popBackExactly(n) instead. Note: Unlike drop, dropExactly will assume that the range holds at least n elements. This makes dropExactly faster than drop, but it also means that if range does not contain at least n elements, it will attempt to call popFront on an empty range, which is undefined behavior. So, only use popFrontExactly when it is guaranteed that range holds at least n elements. 
Examples: auto a = [1, 2, 3]; assert(a.dropExactly(2) == [3]); assert(a.dropBackExactly(2) == [1]); string s = "日本語"; assert(s.dropExactly(2) == "語"); assert(s.dropBackExactly(2) == "日"); auto bd = filterBidirectional!"true"([1, 2, 3]); assert(bd.dropExactly(2).equal([3])); assert(bd.dropBackExactly(2).equal([1])); - R dropOne(R)(R range) if (isInputRange!R); R dropBackOne(R)(R range) if (isBidirectionalRange!R); - Convenience function which calls range.popFront() and returns range. dropOne makes it easier to pop an element from a range and then pass it to another function within a single expression, whereas popFront would require multiple statements. dropBackOne provides the same functionality but instead calls range.popBack(). Examples: import std.container : DList; auto dl = DList!int(9, 1, 2, 3, 9); assert(dl[].dropOne().dropBackOne().equal([1, 2, 3])); auto a = [1, 2, 3]; assert(a.dropOne() == [2, 3]); assert(a.dropBackOne() == [1, 2]); string s = "日本語"; assert(s.dropOne() == "本語"); assert(s.dropBackOne() == "日本"); auto bd = filterBidirectional!"true"([1, 2, 3]); assert(bd.dropOne().equal([2, 3])); assert(bd.dropBackOne().equal([1, 2])); - size_t popFrontN(Range)(ref Range r, size_t n) if (isInputRange!Range); size_t popBackN(Range)(ref Range r, size_t n) if (isBidirectionalRange!Range); - Eagerly advances r itself (not a copy) up to n times (by calling r.popFront). popFrontN takes r by ref, so it mutates the original range. Completes in Ο(1) steps for ranges that support slicing and have length. Completes in Ο(n) time for all other ranges. Returns: How much r was actually advanced, which may be less than n if r did not have at least n elements. popBackN will behave the same but instead removes elements from the back of the (bidirectional) range instead of the front. 
Examples: int[] a = [ 1, 2, 3, 4, 5 ]; a.popFrontN(2); assert(a == [ 3, 4, 5 ]); a.popFrontN(7); assert(a == [ ]); Examples: auto LL = iota(1L, 7L); auto r = popFrontN(LL, 2); assert(equal(LL, [3L, 4L, 5L, 6L])); assert(r == 2); Examples: int[] a = [ 1, 2, 3, 4, 5 ]; a.popBackN(2); assert(a == [ 1, 2, 3 ]); a.popBackN(7); assert(a == [ ]); Examples: auto LL = iota(1L, 7L); auto r = popBackN(LL, 2); assert(equal(LL, [1L, 2L, 3L, 4L])); assert(r == 2); - void popFrontExactly(Range)(ref Range r, size_t n) if (isInputRange!Range); void popBackExactly(Range)(ref Range r, size_t n) if (isBidirectionalRange!Range); - Eagerly advances r itself (not a copy) exactly n times (by calling r.popFront). popFrontExactly takes r by ref, so it mutates the original range. Completes in Ο(1) steps for ranges that support slicing, and have either length or are infinite. Completes in Ο(n) time for all other ranges. Note: Unlike popFrontN, popFrontExactly will assume that the range holds at least n elements. This makes popFrontExactly faster than popFrontN, but it also means that if range does not contain at least n elements, it will attempt to call popFront on an empty range, which is undefined behavior. So, only use popFrontExactly when it is guaranteed that range holds at least n elements. popBackExactly will behave the same but instead removes elements from the back of the (bidirectional) range instead of the front. Examples: auto a = [1, 2, 3]; a.popFrontExactly(1); assert(a == [2, 3]); a.popBackExactly(1); assert(a == [2]); string s = "日本語"; s.popFrontExactly(1); assert(s == "本語"); s.popBackExactly(1); assert(s == "本"); auto bd = filterBidirectional!"true"([1, 2, 3]); bd.popFrontExactly(1); assert(bd.equal([2, 3])); bd.popBackExactly(1); assert(bd.equal([2])); - struct Repeat(T); Repeat!T repeat(T)(T value); - Repeats one value forever. Models an infinite bidirectional and random access range, with slicing.
Examples: assert(equal(5.repeat().take(4), [ 5, 5, 5, 5 ])); - Take!(Repeat!T) repeat(T)(T value, size_t n); - Repeats value exactly n times. Equivalent to take(repeat(value), n). Examples: assert(equal(5.repeat(4), 5.repeat().take(4))); - struct Cycle(R) if (isForwardRange!R && !isInfinite!R); Cycle!R cycle(R)(R input) if (isForwardRange!R && !isInfinite!R); Cycle!R cycle(R)(R input, size_t index = 0) if (isRandomAccessRange!R && !isInfinite!R); - Repeats the given forward range ad infinitum. If the original range is infinite (a fact that would make Cycle the identity application), Cycle detects that and aliases itself to the range type itself. If the original range has random access, Cycle offers random access and also offers a constructor taking an initial position index. Cycle works with static arrays in addition to ranges, mostly for performance reasons. Tip: This is a great way to implement simple circular buffers. Examples: assert(equal(take(cycle([1, 2][]), 5), [ 1, 2, 1, 2, 1 ][])); - struct Zip(Ranges...) if (Ranges.length && allSatisfy!(isInputRange, Ranges)); auto zip(Ranges...)(Ranges ranges) if (Ranges.length && allSatisfy!(isInputRange, Ranges)); auto zip(Ranges...)(StoppingPolicy sp, Ranges ranges) if (Ranges.length && allSatisfy!(isInputRange, Ranges)); - Iterate several ranges in lockstep. The element type is a proxy tuple that allows accessing the current element in the nth range by using e[n]. Example: int[] a = [ 1, 2, 3 ]; string[] b = [ "a", "b", "c" ]; // prints 1:a 2:b 3:c foreach (e; zip(a, b)) { write(e[0], ':', e[1], ' '); } Zip offers the lowest range facilities of all components, e.g. it offers random access iff all ranges offer random access, and also offers mutation and swapping if all ranges offer it. Due to this, Zip is extremely powerful because it allows manipulating several ranges in lockstep.
For example, the following code sorts two arrays in parallel: Examples: int[] a = [ 1, 2, 3 ]; string[] b = [ "a", "b", "c" ]; sort!("a[0] > b[0]")(zip(a, b)); assert(a == [ 3, 2, 1 ]); assert(b == [ "c", "b", "a" ]); - this(R rs, StoppingPolicy s = StoppingPolicy.shortest); - Builds an object. Usually this is invoked indirectly by using the zip function. - bool empty; - Returns true if the range is at end. The test depends on the stopping policy. - @property ElementType front(); - Returns the current iterated element. - @property void front(ElementType v); - Sets the front of all iterated ranges. - ElementType moveFront(); - Moves out the front. - @property ElementType back(); - Returns the rightmost element. - ElementType moveBack(); - Moves out the back. - @property void back(ElementType v); - Sets the rightmost element in all iterated ranges. - void popFront(); - Advances to the next element in all controlled ranges. - void popBack(); - Calls popBack for all controlled ranges. - @property auto length(); - Returns the length of this range. Defined only if all ranges define length. - alias opDollar = length; - Returns the length of this range. Defined only if all ranges define length. - auto opSlice(size_t from, size_t to); - Returns a slice of the range. Defined only if all ranges define slicing. - ElementType opIndex(size_t n); - Returns the nth element in the composite range. Defined if all ranges offer random access. - void opIndexAssign(ElementType v, size_t n); - Assigns to the nth element in the composite range. Defined if all ranges offer random access. - ElementType moveAt(size_t n); - Destructively reads the nth element in the composite range. Defined if all ranges offer random access.
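As a complement to the sorting example above, here is a minimal hedged sketch of how zip behaves over ranges of different lengths with the default stopping policy (assumes zip and walkLength from std.range):

```d
import std.range : zip, walkLength;

void main()
{
    int[] a = [ 1, 2, 3, 4 ];
    string[] b = [ "a", "b" ];
    // With the default StoppingPolicy.shortest, iteration stops once the
    // shorter range is exhausted, so only two pairs are produced.
    assert(zip(a, b).walkLength == 2);
}
```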
- enum StoppingPolicy: int; - Dictates how iteration in a Zip should stop. By default stop at the end of the shortest of all ranges. - struct Lockstep(Ranges...) if (Ranges.length > 1 && allSatisfy!(isInputRange, Ranges)); Lockstep!Ranges lockstep(Ranges...)(Ranges ranges) if (allSatisfy!(isInputRange, Ranges)); Lockstep!Ranges lockstep(Ranges...)(Ranges ranges, StoppingPolicy s) if (allSatisfy!(isInputRange, Ranges)); - Iterate multiple ranges in lockstep using a foreach loop. If only a single range is passed in, the Lockstep aliases itself away. If the ranges are of different lengths and s == StoppingPolicy.shortest, stop after the shortest range is empty. If the ranges are of different lengths and s == StoppingPolicy.requireSameLength, throw an exception. s may not be StoppingPolicy.longest, and passing this will throw an exception. By default StoppingPolicy is set to StoppingPolicy.shortest. BUGS: If a range does not offer lvalue access, but ref is used in the foreach loop, it will be silently accepted but any modifications to the variable will not be propagated to the underlying range. Lockstep also supports iterating with an index variable: Example: foreach(index, a, b; lockstep(arr1, arr2)) { writefln("Index %s: a = %s, b = %s", index, a, b); } Examples: auto arr1 = [1,2,3,4,5]; auto arr2 = [6,7,8,9,10]; foreach(ref a, ref b; lockstep(arr1, arr2)) { a += b; } assert(arr1 == [7,9,11,13,15]);
When calling recurrence, the function that computes the next value is specified as a template argument, and the initial values in the recurrence are passed as regular arguments. For example, in a Fibonacci sequence, there are two initial values (and therefore a state size of 2) because computing the next Fibonacci value needs the past two values. If the function is passed in string form, the state has name "a" and the zero-based index in the recurrence has name "n". The given string must return the desired value for a[n] given a[n - 1], a[n - 2], a[n - 3],..., a[n - stateSize]. The state size is dictated by the number of arguments passed to the call to recurrence. The Recurrence struct itself takes care of managing the recurrence's state and shifting it appropriately. Example: // The Fibonacci numbers, using the string form of the recurrence function auto fib = recurrence!("a[n-1] + a[n-2]")(1, 1); // print the first 10 Fibonacci numbers foreach (e; take(fib, 10)) { writeln(e); } - struct Sequence(alias fun, State); auto sequence(alias fun, State...)(State args); - Sequence is similar to Recurrence except that iteration is presented in the so-called closed form. This means that the nth element in the series is computable directly from the initial values and n itself. This implies that the interface offered by Sequence is a random-access range, as opposed to the regular Recurrence, which only offers forward iteration. The state of the sequence is stored as a Tuple so it can be heterogeneous. Examples: auto odds = sequence!("a[0] + n * a[1]")(1, 2); assert(odds.front == 1); odds.popFront(); assert(odds.front == 3); odds.popFront(); assert(odds.front == 5); - auto iota(B, E, S)(B begin, E end, S step) if ((isIntegral!(CommonType!(B, E)) || isPointer!(CommonType!(B, E))) && isIntegral!S); auto iota(B, E)(B begin, E end) if (isFloatingPoint!(CommonType!(B, E))); auto iota(B, E)(B begin, E end) if (isIntegral!(CommonType!(B, E)) || isPointer!(CommonType!(B, E))); auto iota(E)(E end); - Returns a range that goes through the numbers begin, begin + step, begin + 2 * step, ..., up to and excluding end. The range offered is a random access range.
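A minimal hedged sketch of iota in use (assumes iota from std.range and equal from std.algorithm):

```d
import std.algorithm : equal;
import std.range : iota;

void main()
{
    // begin = 0, end = 10 (excluded), step = 3.
    assert(equal(iota(0, 10, 3), [ 0, 3, 6, 9 ]));
    // Two-argument form: step is 1.
    assert(equal(iota(1, 5), [ 1, 2, 3, 4 ]));
    // One-argument form: starts at 0.
    assert(iota(5).length == 5);
}
```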
The two-argument version has step = 1. If begin < end && step < 0 or begin > end && step > 0 or begin == end, then an empty range is returned. Throws: Exception if begin != end && step == 0. - enum TransverseOptions: int; - Options for the FrontTransversal and Transversal ranges (below). - assumeJagged - When transversed, the elements of a range of ranges are assumed to have different lengths (e.g. a jagged array). - enforceNotJagged - The transversal enforces that the elements of a range of ranges have all the same length (e.g. an array of arrays, all having the same length). Checking is done once upon construction of the transversal range. - assumeNotJagged - The transversal assumes, without verifying, that the elements of a range of ranges have all the same length. This option is useful if checking was already done from the outside of the range. - struct FrontTransversal(Ror, TransverseOptions opt = TransverseOptions.assumeJagged); FrontTransversal!(RangeOfRanges, opt) frontTransversal(TransverseOptions opt = TransverseOptions.assumeJagged, RangeOfRanges)(RangeOfRanges rr); - Given a range of ranges, iterate transversally through the first elements of each of the enclosed ranges. Examples: int[][] x = new int[][2]; x[0] = [1, 2]; x[1] = [3, 4]; auto ror = frontTransversal(x); assert(equal(ror, [ 1, 3 ][])); - this(RangeOfRanges input); - Construction from an input. - bool empty; @property ref auto front(); ElementType moveFront(); void popFront(); - Forward range primitives. - @property FrontTransversal save(); - Duplicates this frontTransversal. Note that only the encapsulating range of range will be duplicated. Underlying ranges will not be duplicated. - @property ref auto back(); void popBack(); ElementType moveBack(); - Bidirectional primitives. They are offered if isBidirectionalRange!RangeOfRanges.
- ref auto opIndex(size_t n); ElementType moveAt(size_t n); void opIndexAssign(ElementType val, size_t n); - Random-access primitives. They are offered if isRandomAccessRange!RangeOfRanges. - struct Transversal(Ror, TransverseOptions opt = TransverseOptions.assumeJagged); Transversal!(RangeOfRanges, opt) transversal(TransverseOptions opt = TransverseOptions.assumeJagged, RangeOfRanges)(RangeOfRanges rr, size_t n); - Given a range of ranges, iterate transversally through the nth element of each of the enclosed ranges. All elements of the enclosing range must offer random access. Examples: int[][] x = new int[][2]; x[0] = [1, 2]; x[1] = [3, 4]; auto ror = transversal(x, 1); assert(equal(ror, [ 2, 4 ][])); - this(RangeOfRanges input, size_t n); - Construction from an input and an index. - bool empty; @property ref auto front(); E moveFront(); @property auto front(E val); void popFront(); @property typeof(this) save(); - Forward range primitives. - @property ref auto back(); void popBack(); E moveBack(); @property auto back(E val); - Bidirectional primitives. They are offered if isBidirectionalRange!RangeOfRanges. - ref auto opIndex(size_t n); E moveAt(size_t n); void opIndexAssign(E val, size_t n); @property size_t length(); alias opDollar = length; - Random-access primitives. They are offered if isRandomAccessRange!RangeOfRanges. - struct Indexed(Source, Indices) if (isRandomAccessRange!Source && isInputRange!Indices && is(typeof(Source.init[ElementType!Indices.init]))); Indexed!(Source, Indices) indexed(Source, Indices)(Source source, Indices indices); - This struct takes two ranges, source and indices, and creates a view of source as if its elements were reordered according to indices. indices may include only a subset of the elements of source and may also repeat elements. Source must be a random access range. The returned range will be bidirectional or random-access if Indices is bidirectional or random-access, respectively.
Examples: auto source = [1, 2, 3, 4, 5]; auto indices = [4, 3, 1, 2, 0, 4]; auto ind = indexed(source, indices); assert(equal(ind, [5, 4, 2, 3, 1, 5])); assert(equal(retro(ind), [5, 1, 3, 2, 4, 5])); - @property ref auto front(); void popFront(); @property typeof(this) save(); @property ref auto front(ElementType!Source newVal); auto moveFront(); @property ref auto back(); void popBack(); @property ref auto back(ElementType!Source newVal); auto moveBack(); @property size_t length(); ref auto opIndex(size_t index); typeof(this) opSlice(size_t a, size_t b); auto opIndexAssign(ElementType!Source newVal, size_t index); auto moveAt(size_t index); - Range primitives - @property Source source(); - Returns the source range. - @property Indices indices(); - Returns the indices range. - size_t physicalIndex(size_t logicalIndex); - Returns the physical index into the source range corresponding to a given logical index. This is useful, for example, when indexing an Indexed without adding another layer of indirection. Examples: auto ind = indexed([1, 2, 3, 4, 5], [1, 3, 4]); assert(ind.physicalIndex(0) == 1); - struct Chunks(Source) if (isForwardRange!Source); Chunks!Source chunks(Source)(Source source, size_t chunkSize) if (isForwardRange!Source); - This range iterates over fixed-sized chunks of size chunkSize of a source range. Source must be a forward range. If !isInfinite!Source and source.walkLength is not evenly divisible by chunkSize, the back element of this range will contain fewer than chunkSize elements. 
Examples: auto source = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]; auto chunks = chunks(source, 4); assert(chunks[0] == [1, 2, 3, 4]); assert(chunks[1] == [5, 6, 7, 8]); assert(chunks[2] == [9, 10]); assert(chunks.back == chunks[2]); assert(chunks.front == chunks[0]); assert(chunks.length == 3); assert(equal(retro(array(chunks)), array(retro(chunks)))); - this(Source source, size_t chunkSize); - Standard constructor - @property auto front(); void popFront(); @property bool empty(); @property typeof(this) save(); - Forward range primitives. Always present. - @property size_t length(); - Length. Only if hasLength!Source is true - auto opIndex(size_t index); typeof(this) opSlice(size_t lower, size_t upper); - Indexing and slicing operations. Provided only if hasSlicing!Source is true. - @property auto back(); void popBack(); - Bidirectional range primitives. Provided only if both hasSlicing!Source and hasLength!Source are true. - auto only(Values...)(auto ref Values values) if (!is(CommonType!Values == void) || Values.length == 0); - Assemble values into a range that carries all its elements in-situ. Useful when a single value or multiple disconnected values must be passed to an algorithm expecting a range, without having to perform dynamic memory allocation. As copying the range means copying all elements, it can be safely returned from functions. For the same reason, copying the returned range may be expensive for a large number of arguments. Examples: import std.uni; assert(equal(only('♡'), "♡")); assert([1, 2, 3, 4].findSplitBefore(only(3))[0] == [1, 2]); assert(only("one", "two", "three").joiner(" ").equal("one two three")); string title = "The D Programming Language"; assert(filter!isUpper(title).map!only().join(".") == "T.D.P.L"); - ElementType!R moveFront(R)(R r); - Moves the front of r out and returns it. Leaves r.front in a destroyable state that does not allocate any resources (usually equal to its .init value). 
Examples: auto a = [ 1, 2, 3 ]; assert(moveFront(a) == 1); // define a perfunctory input range struct InputRange { @property bool empty() { return false; } @property int front() { return 42; } void popFront() {} int moveFront() { return 43; } } InputRange r; assert(moveFront(r) == 43); - ElementType!R moveBack(R)(R r); - Moves the back of r out and returns it. Leaves r.back in a destroyable state that does not allocate any resources (usually equal to its .init value). Examples: struct TestRange { int payload = 5; @property bool empty() { return false; } @property TestRange save() { return this; } @property ref int front() { return payload; } @property ref int back() { return payload; } void popFront() { } void popBack() { } } static assert(isBidirectionalRange!TestRange); TestRange r; auto x = moveBack(r); assert(x == 5); - ElementType!R moveAt(R, I)(R r, I i) if (isIntegral!I); - Moves element at index i of r out and returns it. Leaves r[i] in a destroyable state that does not allocate any resources (usually equal to its .init value). Examples: auto a = [1,2,3,4]; foreach(idx, it; a) { assert(it == moveAt(a, idx)); } - interface InputRange(E); - Wrapper interfaces that allow ranges to be used through runtime polymorphism, e.g. when a range must cross a virtual function boundary. Examples: void useRange(InputRange!int range) { // Function body. } // Create a range type. auto squares = map!"a * a"(iota(10)); // Wrap it in an interface. auto squaresWrapped = inputRangeObject(squares); // Use it. useRange(squaresWrapped); Limitations: These interfaces are not capable of forwarding ref access to elements. Infiniteness of the wrapped range is not propagated. Length is not propagated in the case of non-random access ranges. See Also: inputRangeObject - outputRangeObject - Convenience function for creating an OutputRangeObject with a base range of type R that accepts types E. Examples: uint[] outputArray; auto app = appender(&outputArray); auto appWrapped = outputRangeObject!(uint, uint[])(app); static assert(is(typeof(appWrapped) : OutputRange!(uint[]))); static assert(is(typeof(appWrapped) : OutputRange!(uint)));
The following code should compile: T1 foo(); T2 bar(); fn(foo(), bar()); fn(bar(), foo()); - enum SearchPolicy: int; - Policy used with the searching primitives lowerBound, upperBound, and equalRange of SortedRange below. - linear - Searches in a linear fashion. - trot - Searches with a step that grows linearly (1, 2, 3,...) leading to a quadratic search schedule (indexes tried are 0, 1, 3, 6, 10, 15, 21, 28,...) Once the search overshoots its target, the remaining interval is searched using binary search. The search is completed in Ο(sqrt(n)) time. Use it when you are reasonably confident that the value is around the beginning of the range. - gallop - Performs a galloping search algorithm, i.e. searches with a step that doubles every time (1, 2, 4, 8, ...) leading to an exponential search schedule (indexes tried are 0, 1, 3, 7, 15, 31, 63,...) Once the search overshoots its target, the remaining interval is searched using binary search. A value is found in Ο(log(n)) time. - binarySearch - Searches using a classic interval halving policy. The search starts in the middle of the range, and each search step cuts the range in half. This policy finds a value in Ο(log(n)) time but is less cache friendly than gallop for large ranges. The binarySearch policy is used as the last step of trot, gallop, trotBackwards, and gallopBackwards strategies. - trotBackwards - Similar to trot but starts backwards. Use it when confident that the value is around the end of the range. - gallopBackwards - Similar to gallop but starts backwards. Use it when confident that the value is around the end of the range. - struct SortedRange(Range, alias pred = "a < b") if (isInputRange!Range); - Represents a sorted range. In addition to the regular range primitives, supports additional operations that take advantage of the ordering, such as merge and binary search.
To obtain a SortedRange from an unsorted range r, use std.algorithm.sort which sorts r in place and returns the corresponding SortedRange. To construct a SortedRange from a range r that is known to be already sorted, use assumeSorted described below. Examples: auto a = [ 1, 2, 3, 42, 52, 64 ]; auto r = assumeSorted(a); assert(r.contains(3)); assert(!r.contains(32)); auto r1 = sort!"a > b"(a); assert(r1.contains(3)); assert(!r1.contains(32)); assert(r1.release() == [ 64, 52, 42, 3, 2, 1 ]);Examples: SortedRange could accept ranges weaker than random-access, but it is unable to provide interesting functionality for them. Therefore, SortedRange is currently restricted to random-access ranges. No copy of the original range is ever made. If the underlying range is changed concurrently with its corresponding SortedRange in ways that break its sortedness, SortedRange will work erratically. auto a = [ 1, 2, 3, 42, 52, 64 ]; auto r = assumeSorted(a); assert(r.contains(42)); swap(a[3], a[5]); // illegal to break sortedness of original range assert(!r.contains(42)); // passes although it shouldn't - @property bool empty(); @property auto save(); @property ref auto front(); void popFront(); @property ref auto back(); void popBack(); ref auto opIndex(size_t i); auto opSlice(size_t a, size_t b); @property size_t length(); alias opDollar = length; - Range primitives. - auto release(); - Releases the controlled range and returns it. - auto lowerBound(SearchPolicy sp = SearchPolicy.binarySearch, V)(V value) if (isTwoWayCompatible!(predFun, ElementType!Range, V) && hasSlicing!Range); - This function uses a search with policy sp to find the largest left subrange on which pred(x, value) is true for all x (e.g., if pred is "less than", returns the portion of the range with elements strictly smaller than value). The search schedule and its complexity are documented in SearchPolicy. See also STL's lower_bound. 
Example: auto a = assumeSorted([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ]); auto p = a.lowerBound(4); assert(equal(p, [ 0, 1, 2, 3 ])); - auto upperBound(SearchPolicy sp = SearchPolicy.binarySearch, V)(V value) if (isTwoWayCompatible!(predFun, ElementType!Range, V)); - This function searches with policy sp to find the largest right subrange on which pred(value, x) is true for all x (e.g., if pred is "less than", returns the portion of the range with elements strictly greater than value). The search schedule and its complexity are documented in SearchPolicy. For ranges that do not offer random access, SearchPolicy.linear is the only policy allowed (and it must be specified explicitly lest it expose user code to unexpected inefficiencies). For random-access searches, all policies are allowed, and SearchPolicy.binarySearch is the default. See Also: STL's upper_bound. Example: auto a = assumeSorted([ 1, 2, 3, 3, 3, 4, 4, 5, 6 ]); auto p = a.upperBound(3); assert(equal(p, [4, 4, 5, 6])); - auto equalRange(V)(V value) if (isTwoWayCompatible!(predFun, ElementType!Range, V) && isRandomAccessRange!Range); - Returns the subrange containing all elements e for which both pred(e, value) and pred(value, e) evaluate to false (e.g., if pred is "less than", returns the portion of the range with elements equal to value). Uses a classic binary search with interval halving until it finds a value that satisfies the condition, then uses SearchPolicy.gallopBackwards to find the left boundary and SearchPolicy.gallop to find the right boundary. These policies are justified by the fact that the two boundaries are likely to be near the first found value (i.e., equal ranges are relatively small). Completes the entire search in Ο(log(n)) time. See also STL's equal_range.
Example: auto a = [ 1, 2, 3, 3, 3, 4, 4, 5, 6 ]; auto r = assumeSorted(a).equalRange(3); assert(equal(r, [ 3, 3, 3 ])); - auto trisect(V)(V value) if (isTwoWayCompatible!(predFun, ElementType!Range, V) && isRandomAccessRange!Range); - Returns a tuple r such that r[0] is the same as the result of lowerBound(value), r[1] is the same as the result of equalRange(value), and r[2] is the same as the result of upperBound(value). The call is faster than computing all three separately. Uses a search schedule similar to equalRange. Completes the entire search in Ο(log(n)) time. Example: auto a = [ 1, 2, 3, 3, 3, 4, 4, 5, 6 ]; auto r = assumeSorted(a).trisect(3); assert(equal(r[0], [ 1, 2 ])); assert(equal(r[1], [ 3, 3, 3 ])); assert(equal(r[2], [ 4, 4, 5, 6 ])); - bool contains(V)(V value) if (isRandomAccessRange!Range); - Returns true if and only if value can be found in range, which is assumed to be sorted. Performs Ο(log(r.length)) evaluations of pred. See also STL's binary_search. - auto assumeSorted(alias pred = "a < b", R)(R r) if (isInputRange!(Unqual!R)); - Assumes r is sorted by predicate pred and returns the corresponding SortedRange!(pred, R) having r as support. To keep the checking costs low, the cost is Ο(1) in release mode (no checks for sortedness are performed). In debug mode, a few random elements of r are checked for sortedness. The size of the sample is proportional to Ο(log(r.length)). That way, checking has no effect on the complexity of subsequent operations specific to sorted ranges (such as binary search). The probability of an arbitrary unsorted range failing the test is very high (however, an almost-sorted range is likely to pass it). To check for sortedness at cost Ο(n), use std.algorithm.isSorted. - struct RefRange(R) if (isForwardRange!R); - Wrapper which effectively makes it possible to pass a range by reference. Both the original range and the RefRange will always have the exact same elements. Any operation done on one will affect the other.
So, for instance, if it's passed to a function which would implicitly copy the original range if it were passed to it, the original range is not copied but is consumed as if it were a reference type. Note that save works as normal and operates on a new range, so if save is ever called on the RefRange, then no operations on the saved range will affect the original. Examples: import std.algorithm; ubyte[] buffer = [1, 9, 45, 12, 22]; auto found1 = find(buffer, 45); assert(found1 == [45, 12, 22]); assert(buffer == [1, 9, 45, 12, 22]); auto wrapped1 = refRange(&buffer); auto found2 = find(wrapped1, 45); assert(*found2.ptr == [45, 12, 22]); assert(buffer == [45, 12, 22]); auto found3 = find(wrapped1.save, 22); assert(*found3.ptr == [22]); assert(buffer == [45, 12, 22]); string str = "hello world"; auto wrappedStr = refRange(&str); assert(str.front == 'h'); str.popFrontN(5); assert(str == " world"); assert(wrappedStr.front == ' '); assert(*wrappedStr.ptr == " world"); - pure nothrow @safe this(R* range); - - auto opAssign(RefRange rhs); - This does not assign the pointer of rhs to this RefRange. Rather it assigns the range pointed to by rhs to the range pointed to by this RefRange. This is because any operation on a RefRange is the same as if it occurred on the original range. The one exception is when a RefRange is assigned null either directly or because rhs is null. In that case, RefRange no longer refers to the original range but is null. Examples: ubyte[] buffer1 = [1, 2, 3, 4, 5]; ubyte[] buffer2 = [6, 7, 8, 9, 10]; auto wrapped1 = refRange(&buffer1); auto wrapped2 = refRange(&buffer2); assert(wrapped1.ptr is &buffer1); assert(wrapped2.ptr is &buffer2); assert(wrapped1.ptr !is wrapped2.ptr); assert(buffer1 != buffer2); wrapped1 = wrapped2; //Everything points to the same stuff as before. assert(wrapped1.ptr is &buffer1); assert(wrapped2.ptr is &buffer2); assert(wrapped1.ptr !is wrapped2.ptr); //But buffer1 has changed due to the assignment.
assert(buffer1 == [6, 7, 8, 9, 10]); assert(buffer2 == [6, 7, 8, 9, 10]); buffer2 = [11, 12, 13, 14, 15]; //Everything points to the same stuff as before. assert(wrapped1.ptr is &buffer1); assert(wrapped2.ptr is &buffer2); assert(wrapped1.ptr !is wrapped2.ptr); //But buffer2 has changed due to the assignment. assert(buffer1 == [6, 7, 8, 9, 10]); assert(buffer2 == [11, 12, 13, 14, 15]); wrapped2 = null; //The pointer changed for wrapped2 but not wrapped1. assert(wrapped1.ptr is &buffer1); assert(wrapped2.ptr is null); assert(wrapped1.ptr !is wrapped2.ptr); //buffer2 is not affected by the assignment. assert(buffer1 == [6, 7, 8, 9, 10]); assert(buffer2 == [11, 12, 13, 14, 15]); - auto opAssign(typeof(null) rhs); - - inout pure nothrow @property @safe inout(R*) ptr(); - A pointer to the wrapped range. - @property auto front(); const @property auto front(); @property auto front(ElementType!R value); - - @property bool empty(); const @property bool empty(); - - void popFront(); - - @property auto save(); const @property auto save(); auto opSlice(); const auto opSlice(); - - @property auto back(); const @property auto back(); @property auto back(ElementType!R value); void popBack(); - Only defined if isBidirectionalRange!R is true. - ref auto opIndex(IndexType)(IndexType index); const ref auto opIndex(IndexType)(IndexType index); - Only defined if isRandomAccessRange!R is true. - auto moveFront(); - Only defined if hasMobileElements!R and isForwardRange!R are true. - auto moveBack(); - Only defined if hasMobileElements!R and isBidirectionalRange!R are true. - auto moveAt(IndexType)(IndexType index) if (is(typeof((*_range).moveAt(index)))); - Only defined if hasMobileElements!R and isRandomAccessRange!R are true. - @property auto length(); const @property auto length(); - Only defined if hasLength!R is true.
- auto opSlice(IndexType1, IndexType2)(IndexType1 begin, IndexType2 end); const auto opSlice(IndexType1, IndexType2)(IndexType1 begin, IndexType2 end); - Only defined if hasSlicing!R is true. - auto refRange(R)(R* range) if (isForwardRange!R && !is(R == class)); - Helper function for constructing a RefRange. If the given range is not a forward range or it is a class type (and thus is already a reference type), then the original range is returned rather than a RefRange. - struct NullSink; - An OutputRange that discards the data it receives.
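A minimal hedged sketch of NullSink in use, e.g. to exercise output-producing code without storing the result (assumes put from std.range and formattedWrite from std.format):

```d
import std.format : formattedWrite;
import std.range : NullSink, put;

void main()
{
    auto sink = NullSink();
    // Every element put into the sink is accepted and discarded.
    put(sink, "this text goes nowhere");
    sink.formattedWrite("%d bottles of beer", 99);
}
```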
https://docarchives.dlang.io/v2.066.0/phobos/std_range.html
16.2. Deep Convolutional Generative Adversarial Networks

In Section 16.1, we introduced the basic ideas behind how GANs work. We showed that they can draw samples from some simple, easy-to-sample distribution, like a uniform or normal distribution, and transform them into samples that appear to match the distribution of some dataset. While our example of matching a 2D Gaussian distribution got the point across, it is not especially exciting. In this section, we will demonstrate how you can use GANs to generate photorealistic images. We will base our models on the deep convolutional GANs (DCGAN) introduced in [Radford et al., 2015]. We will borrow the convolutional architectures that have proven so successful for discriminative computer vision problems and show how, via GANs, they can be leveraged to generate photorealistic images.

from mxnet import gluon, init, np, npx
from mxnet.gluon import nn
import d2l
import zipfile

npx.set_np()

16.2.1. The Pokemon Dataset

The dataset we will use is a collection of Pokemon sprites obtained from pokemondb. First download, extract and load this dataset.

data_dir = '../data/'
url = ''
sha1 = 'c065c0e2593b8b161a2d7873e42418bf6a21106c'
fname = gluon.utils.download(url, data_dir, sha1_hash=sha1)
with zipfile.ZipFile(fname) as f:
    f.extractall(data_dir)
pokemon = gluon.data.vision.datasets.ImageFolderDataset(data_dir + 'pokemon')

Downloading ../data/pokemon.zip from...

We resize each image into \(64\times 64\). The ToTensor transformation will project the pixel value into \([0, 1]\), while our generator will use the tanh function to obtain outputs in \([-1, 1]\). Therefore we normalize the data with mean \(0.5\) and standard deviation \(0.5\) to match the value range.
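The normalization step maps pixel values from \([0, 1]\) to \([-1, 1]\); a quick pure-Python check of that arithmetic (independent of MXNet, helper name is my own):

```python
def normalize(pixel, mean=0.5, std=0.5):
    """Apply the (x - mean) / std transform used by transforms.Normalize."""
    return (pixel - mean) / std

# ToTensor output lies in [0, 1]; after normalization it lies in [-1, 1],
# matching the range of the generator's tanh output.
print(normalize(0.0))  # -1.0
print(normalize(0.5))  # 0.0
print(normalize(1.0))  # 1.0
```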
batch_size = 256
transformer = gluon.data.vision.transforms.Compose([
    gluon.data.vision.transforms.Resize(64),
    gluon.data.vision.transforms.ToTensor(),
    gluon.data.vision.transforms.Normalize(0.5, 0.5)
])
data_iter = gluon.data.DataLoader(
    pokemon.transform_first(transformer),
    batch_size=batch_size, shuffle=True,
    num_workers=d2l.get_dataloader_workers())

Let's visualize the first 20 images.

d2l.set_figsize((4, 4))
for X, y in data_iter:
    imgs = X[0:20,:,:,:].transpose(0, 2, 3, 1) / 2 + 0.5
    d2l.show_images(imgs, num_rows=4, num_cols=5)
    break

16.2.2. The Generator

The generator needs to map the noise variable \(\mathbf z\in\mathbb R^d\), a length-\(d\) vector, to an RGB image with width and height \(64\times 64\). In Section 13.11 we introduced the fully convolutional network that uses a transposed convolution layer (refer to Section 13.10) to enlarge the input size. The basic block of the generator contains a transposed convolution layer followed by batch normalization and a ReLU activation.

class G_block(nn.Block):
    def __init__(self, channels, kernel_size=4,
                 strides=2, padding=1, **kwargs):
        super(G_block, self).__init__(**kwargs)
        self.conv2d_trans = nn.Conv2DTranspose(
            channels, kernel_size, strides, padding, use_bias=False)
        self.batch_norm = nn.BatchNorm()
        self.activation = nn.Activation('relu')

    def forward(self, X):
        return self.activation(self.batch_norm(self.conv2d_trans(X)))

By default, the transposed convolution layer uses a \(k_h = k_w = 4\) kernel, \(s_h = s_w = 2\) strides, and \(p_h = p_w = 1\) padding. With an input shape of \(n_h^{'} \times n_w^{'} = 16 \times 16\), the generator block will double the input's width and height.

x = np.zeros((2, 3, 16, 16))
g_blk = G_block(20)
g_blk.initialize()
g_blk(x).shape

(2, 20, 32, 32)

If we change the transposed convolution layer to a \(4\times 4\) kernel with \(1\times 1\) strides and zero padding, then with an input size of \(1 \times 1\), the output will have its width and height each increased by 3.
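Both size changes follow the standard transposed-convolution output rule \(n_{out} = (n_{in} - 1)\,s - 2p + k\); a small sketch checking the two cases above (pure Python, helper name is my own):

```python
def conv_transpose_out(n, k, s, p):
    """Spatial output size of a transposed convolution layer."""
    return (n - 1) * s - 2 * p + k

# Default G_block: kernel 4, stride 2, padding 1 doubles the size.
print(conv_transpose_out(16, k=4, s=2, p=1))  # 32
# First generator block: kernel 4, stride 1, no padding turns 1x1 into 4x4.
print(conv_transpose_out(1, k=4, s=1, p=0))   # 4
```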
x = np.zeros((2, 3, 1, 1))
g_blk = G_block(20, strides=1, padding=0)
g_blk.initialize()
g_blk(x).shape

(2, 20, 4, 4)

The generator consists of four basic blocks that increase the input's width and height from 1 to 32. At the same time, it first projects the latent variable into \(64\times 8\) channels, and then halves the channels each time. At last, a transposed convolution layer is used to generate the output. It further doubles the width and height to match the desired \(64\times 64\) shape, and reduces the channel size to \(3\). The tanh activation function is applied to project output values into the \((-1, 1)\) range.

n_G = 64
net_G = nn.Sequential()
net_G.add(G_block(n_G*8, strides=1, padding=0),  # output: (64*8, 4, 4)
          G_block(n_G*4),  # output: (64*4, 8, 8)
          G_block(n_G*2),  # output: (64*2, 16, 16)
          G_block(n_G),    # output: (64, 32, 32)
          nn.Conv2DTranspose(
              3, kernel_size=4, strides=2, padding=1,
              use_bias=False, activation='tanh'))  # output: (3, 64, 64)

Generate a 100-dimensional latent variable to verify the generator's output shape.

x = np.zeros((1, 100, 1, 1))
net_G.initialize()
net_G(x).shape

(1, 3, 64, 64)

16.2.3. Discriminator

The discriminator is a normal convolutional network except that it uses a leaky ReLU as its activation function. Given \(\alpha \in[0, 1]\), its definition is

\(\textrm{leaky ReLU}(x) = \begin{cases}x & \text{if}\ x > 0\\ \alpha x & \text{otherwise}\end{cases}.\)

As can be seen, it is the normal ReLU if \(\alpha=0\), and an identity function if \(\alpha=1\). For \(\alpha \in (0, 1)\), leaky ReLU is a nonlinear function that gives a non-zero output for a negative input. It aims to fix the "dying ReLU" problem, in which a neuron might always output a negative value and therefore cannot make any progress, since the gradient of ReLU is 0 there.

alphas = [0, 0.2, 0.4, .6, .8, 1]
x = np.arange(-2, 1, 0.1)
Y = [nn.LeakyReLU(alpha)(x).asnumpy() for alpha in alphas]
d2l.plot(x.asnumpy(), Y, 'x', 'y', alphas)

The basic block of the discriminator is a convolution layer followed by a batch normalization layer and a leaky ReLU activation.
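The piecewise definition of leaky ReLU is easy to check in plain Python (a minimal scalar version, independent of MXNet's nn.LeakyReLU):

```python
def leaky_relu(x, alpha=0.2):
    """leaky_relu(x) = x for x > 0, alpha * x otherwise."""
    return x if x > 0 else alpha * x

print(leaky_relu(2.0))            # 2.0  (positive inputs pass through)
print(leaky_relu(-1.0))           # -0.2 (negative inputs keep a small slope)
print(leaky_relu(-1.0, alpha=1))  # -1.0 (alpha = 1 gives the identity)
# With alpha = 0 the function reduces to the plain ReLU.
```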
The hyper-parameters of the convolution layer are similar to those of the transposed convolution layer in the generator block.

class D_block(nn.Block):
    def __init__(self, channels, kernel_size=4, strides=2,
                 padding=1, alpha=0.2, **kwargs):
        super(D_block, self).__init__(**kwargs)
        self.conv2d = nn.Conv2D(
            channels, kernel_size, strides, padding, use_bias=False)
        self.batch_norm = nn.BatchNorm()
        self.activation = nn.LeakyReLU(alpha)

    def forward(self, X):
        return self.activation(self.batch_norm(self.conv2d(X)))

A basic block with default settings will halve the width and height of the inputs, as we demonstrated in Section 6.3. For example, given an input shape \(n_h = n_w = 16\), with a kernel shape \(k_h = k_w = 4\), a stride shape \(s_h = s_w = 2\), and a padding shape \(p_h = p_w = 1\), the output shape will be:

\(n_h^{'} \times n_w^{'} = \lfloor(n_h - k_h + 2p_h + s_h)/s_h\rfloor \times \lfloor(n_w - k_w + 2p_w + s_w)/s_w\rfloor = 8 \times 8.\)

x = np.zeros((2, 3, 16, 16))
d_blk = D_block(20)
d_blk.initialize()
d_blk(x).shape

(2, 20, 8, 8)

The discriminator is a mirror of the generator.

n_D = 64
net_D = nn.Sequential()
net_D.add(D_block(n_D),    # output: (64, 32, 32)
          D_block(n_D*2),  # output: (64*2, 16, 16)
          D_block(n_D*4),  # output: (64*4, 8, 8)
          D_block(n_D*8),  # output: (64*8, 4, 4)
          nn.Conv2D(1, kernel_size=4, use_bias=False))  # output: (1, 1, 1)

It uses a convolution layer with output channel \(1\) as the last layer to obtain a single prediction value.

x = np.zeros((1, 3, 64, 64))
net_D.initialize()
net_D(x).shape

(1, 1, 1, 1)

16.2.4. Training

Compared to the basic GAN in Section 16.1, we use the same learning rate for both the generator and the discriminator, since they are similar to each other. In addition, we change \(\beta_1\) in Adam (Section 11.10) from \(0.9\) to \(0.5\). It decreases the smoothness of the momentum, the exponentially weighted moving average of past gradients, to take care of the rapidly changing gradients that arise because the generator and the discriminator fight with each other. Besides, the randomly generated noise Z is a 4-D tensor, and we use the GPU to accelerate the computation.
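The discriminator's halving mirrors the generator's doubling; the ordinary convolution output rule \(\lfloor (n - k + 2p + s)/s \rfloor\) can be verified in a few lines (pure Python, helper name is my own):

```python
def conv_out(n, k, s, p):
    """Spatial output size of an ordinary convolution layer."""
    return (n - k + 2 * p + s) // s

# Default D_block: kernel 4, stride 2, padding 1 halves the size.
print(conv_out(16, k=4, s=2, p=1))  # 8

# Four stacked D_blocks shrink 64 -> 32 -> 16 -> 8 -> 4 ...
n = 64
for _ in range(4):
    n = conv_out(n, k=4, s=2, p=1)
print(n)                            # 4
# ... and the final kernel-4 convolution (stride 1, no padding) gives 1.
print(conv_out(n, k=4, s=1, p=0))   # 1
```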
def train(net_D, net_G, data_iter, num_epochs, lr, latent_dim,
          ctx=d2l.try_gpu()):
    loss = gluon.loss.SigmoidBCELoss()
    net_D.initialize(init=init.Normal(0.02), force_reinit=True, ctx=ctx)
    net_G.initialize(init=init.Normal(0.02), force_reinit=True, ctx=ctx)
    trainer_hp = {'learning_rate': lr, 'beta1': 0.5}
    trainer_D = gluon.Trainer(net_D.collect_params(), 'adam', trainer_hp)
    trainer_G = gluon.Trainer(net_G.collect_params(), 'adam', trainer_hp)
    animator = d2l.Animator(xlabel='epoch', ylabel='loss',
                            xlim=[1, num_epochs], nrows=2, figsize=(5, 5),
                            legend=['discriminator', 'generator'])
    for epoch in range(1, num_epochs + 1):
        # Train one epoch
        timer, metric = d2l.Timer(), d2l.Accumulator(3)  # loss_D, loss_G, #examples
        for X, _ in data_iter:
            batch_size = X.shape[0]
            Z = np.random.normal(0, 1, size=(batch_size, latent_dim, 1, 1))
            X, Z = X.as_in_context(ctx), Z.as_in_context(ctx)
            metric.add(d2l.update_D(X, Z, net_D, net_G, loss, trainer_D),
                       d2l.update_G(Z, net_D, net_G, loss, trainer_G),
                       batch_size)
        # Show generated examples
        Z = np.random.normal(0, 1, size=(21, latent_dim, 1, 1), ctx=ctx)
        # Normalize the synthetic data to [0, 1]
        fake_x = net_G(Z).transpose(0, 2, 3, 1) / 2 + 0.5
        imgs = np.concatenate(
            [np.concatenate([fake_x[i * 7 + j] for j in range(7)], axis=1)
             for i in range(len(fake_x) // 7)], axis=0)
        animator.axes[1].cla()
        animator.axes[1].imshow(imgs.asnumpy())
        # Show the losses
        loss_D, loss_G = metric[0] / metric[2], metric[1] / metric[2]
        animator.add(epoch, (loss_D, loss_G))
    print('loss_D %.3f, loss_G %.3f, %d examples/sec on %s' % (
        loss_D, loss_G, metric[2] / timer.stop(), ctx))

Now let's train the model.

latent_dim, lr, num_epochs = 100, 0.005, 40
train(net_D, net_G, data_iter, num_epochs, lr, latent_dim)

loss_D 0.175, loss_G 5.508, 2696 examples/sec on gpu(0)

16.2.5. Summary

The DCGAN architecture has four convolutional layers for the discriminator and four "fractionally-strided" (transposed) convolutional layers for the generator.

The discriminator is a 4-layer strided convolutional network with batch normalization (except on its input layer) and leaky ReLU activations.

Leaky ReLU is a nonlinear function that gives a non-zero output for a negative input.
It aims to fix the "dying ReLU" problem and helps gradients flow more easily through the architecture.

16.2.6. Exercises

1. What will happen if we use a standard ReLU activation rather than leaky ReLU?
2. Apply DCGAN on Fashion-MNIST and see which categories work well and which do not.
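The training section sets Adam's \(\beta_1\) to 0.5 rather than 0.9; its effect on the exponentially weighted moving average of gradients can be sketched in plain Python (toy numbers, not actual training):

```python
def ewma(values, beta):
    """Exponentially weighted moving average, as used for Adam's momentum."""
    m = 0.0
    out = []
    for g in values:
        m = beta * m + (1 - beta) * g
        out.append(m)
    return out

# A gradient that suddenly flips sign, as can happen when the generator
# and the discriminator push against each other.
grads = [1.0, 1.0, 1.0, -1.0, -1.0]
print(ewma(grads, beta=0.9))  # momentum is still positive after the flip
print(ewma(grads, beta=0.5))  # momentum reacts much faster to the flip
```

With \(\beta_1 = 0.5\) the average forgets old gradients quickly, which is why the chapter describes it as decreasing the smoothness of the momentum.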
http://d2l.ai/chapter_generative_adversarial_networks/dcgan.html
ECE 106S PROGRAMMING FUNDAMENTALS
Spring 2007 Midterm Test

This exam is open textbook and open notes. Use of computing and/or communicating devices is NOT permitted. Do not remove any sheets from this test book. Answer all questions in the space provided. No additional sheets are permitted. Work independently. The value of each part of each question is indicated. The total value of all questions is 100. Write your name and student number in the space below. Do the same on the top of each sheet of this exam book.

Name: ___________________________________ (Underline last name)

Question 1. (7 marks). General.

Answer the following questions either by Yes or No, or by providing a very brief and direct answer when indicated.

(a) Yes or No? Suppose that all memory that is allocated using new is correctly de-allocated using delete. Then program memory can never be exhausted.

(b) Yes or No? A single next instruction in the DDD debugger allows running a function to completion.

(c) Yes or No? The following code ensures that the value of xp cannot be modified in main().

    class X {
        int x;
        int get() const;
    };

    int main()
    {
        class X *xp;
        xp->get();
    }

(d) Yes or No? An object file (e.g., main.o) can be executed by changing its name to main.exe and typing the command ./main.exe at the Linux command prompt.

(e) What is the name of the software tool you use in the lab to convert your C++ code into an executable program?

(f) Yes or No? One should always use delete to destroy memory allocated with new before returning from a function.

(g) Yes or No? If you do not provide a copy constructor for your class, a default one will be created for you.

Question 2. (12 marks). The Make Utility.

The following table shows several invocations of the Make utility using the above correct Makefile.
For each invocation, indicate the commands that are executed as a result of the invocation, in the order in which they are invoked. To simplify providing an answer, the lines of the Makefile are numbered as shown above; just indicate the line number corresponding to a command in the table provided below. The invocations of Make are in the order shown in the table. Assume that the Makefile exists in the same directory as all the .cc and .h files. Recall that the touch command simply updates the timestamp of its argument to the current time.

    Make Invocation            Commands Executed (indicate line number)
    make clean
    make recurse.o
    make difftool
    make all
    touch difftool
    make
    touch diffdata.h
    make

Question 3. (9 marks). Pointers and Memory Management.

Assume that the following code will compile and run properly.

     1  int a = 6;
     2  int *b = &a;
     3
     4  int *
     5  foo(int **c)
     6  {
     7      (**c)++;
     8      *c = b;
     9      int *d = new int;
    10      *d = 10;
    11      // Point #1
    12      return d;
    13  }
    14
    15  int
    16  main()
    17  {
    18      int e = 7;
    19      int *f = &e;
    20
    21      f = foo(&f);
    22      (*f)++;
    23      // Point #2
    24      return 0;
    25  }

(a) (3.5 marks). Complete the following diagram by showing the values of variables and/or pointers when execution reaches the point labeled "Point #1". For an integer variable, simply show the integer value inside the corresponding box. For a pointer, indicate the value of the pointer by drawing an arrow from the box corresponding to the pointer to the box corresponding to the variable the pointer points to.

[Diagram: boxes labeled a, b, c, d, e, f, and "new int"]

(b) (3.5 marks). Complete the following diagram by showing the values of variables or pointers when execution reaches the point labeled "Point #2". For an integer variable, simply show the integer value inside the corresponding box. For a pointer, indicate the value of the pointer by drawing an arrow from the box corresponding to the pointer to the box corresponding to the variable the pointer points to.
Cross out any variables, pointers or memory allocations that no longer exist.

[Diagram: boxes labeled a, b, c, d, e, f]

(c) (2 marks). While the program will run correctly as written, it contains a non-fatal memory allocation problem. Write the single line of code that will fix the error. Specify the line number in the program after which the line of code should be inserted.

Question 4. (8 marks). Scopes.

The following class definition describes a simple C++ class called sampleClass.

    #include <iostream>
    using namespace std;

    class sampleClass {
    private:
        int val;
    public:
        sampleClass();
        sampleClass(int v);
        ~sampleClass();
    };

    sampleClass::sampleClass()
    {
        val = 0;
        cout << "Constructing " << val << endl;
    }

    sampleClass::sampleClass(int v)
    {
        val = v;
        cout << "Constructing " << val << endl;
    }

    sampleClass::~sampleClass()
    {
        cout << "Destructing " << val << endl;
    }

Consider the following code, which uses sampleClass:

    sampleClass a(1);

    void f1()
    {
        sampleClass a[2];
        cout << "Leaving f1()" << endl;
        return;
    }

    sampleClass *f2()
    {
        sampleClass *a = new sampleClass(2);
        cout << "Calling f1()" << endl;
        f1();
        cout << "Leaving f2()" << endl;
        return a;
    }

    int main()
    {
        sampleClass a(3);
        if ((2 + 2) == 4) {
            cout << "Calling f2" << endl;
            sampleClass *a = f2();
            cout << "Back from f2" << endl;
            delete a;
        }
        cout << "Leaving main" << endl;
        return 0;
    }

In the space provided below, write the output that an execution of the above program would produce in the order in which it is produced. Use one entry in the table for each line of output produced.

Question 5. (12 marks). Classes and Objects.

    struct triplet {
        int first;
        int second;
        int third;
    };

    class mystery {
    private:
        int x;
        int y;
        struct triplet* the_triplet;
    public:
        mystery(int f, int s, int t);
        ~mystery();
        mystery & mystery_member(const mystery & other) const;
    };

    mystery::~mystery()
    {
        delete the_triplet;
    }

(a) (6 marks).
Indicate by placing an X in the appropriate column whether each of the following statements that appear in the body of a main() function that uses mystery.

    mystery g;

    mystery f(1,2,3);
    cout << f.y;

    mystery f(1,2,3);
    mystery g(4,5,6);
    g = f;

    mystery f(1,2,3);
    mystery g(4,5,6);
    if (g < f) return (0);

    mystery f(1,2,3);
    mystery g(4,5,6);
    mystery_member(g) = f;

(b) (6 marks). Indicate by placing an X in the appropriate column whether each of the following statements that appear in the body of the member function mystery_member.

Question 6. (8 marks). Memory Management.

Consider the following definition of the two classes, database and element.

    class database {
    private:
        int count;
        element** thearray;
    public:
        // ...
    };

    class element {
    private:
        int count;
        char* name;
    public:
        // ...
    };

The public functions of each of the two classes include the constructors, accessor methods and the destructor. A main function uses these class definitions to construct a database object called mydatabase and many element objects as shown in the figure below.

[Figure: mydatabase holds count and thearray; thearray points to a dynamically allocated array of n element pointers (indices 0 to n-1), where unused entries are NULL and each used entry points to a dynamically allocated element object with count and name fields.]

(a) (4 marks). Write the following constructors for the two classes, database and element. The constructor for element should initialize count to 0 and name to an empty string. The constructor for database should initialize count to 0 and dynamically create an n element array as shown in the figure above. Each element of the array should be initialized to NULL.

    database::database(int n)
    {

    element::element()
    {

(b) (4 marks). Write the destructors of the two classes, database and element, such that no memory leaks exist when the object mydatabase goes out of scope. Note that all variables are dynamically allocated as indicated in the above figure, except mydatabase, which is an automatic variable. Write your code in the space provided below.
    database::~database()
    {

    element::~element()
    {

Question 7. (12 marks). Objects.

    struct pair {
        int first;
        int second;
    };

    class usePair {
    private:
        bool valid;
        struct pair* thepair;
    public:
        usePair();
        usePair(int f, int s);
        ~usePair();
        void setFirst(int f);
        void setSecond(int s);
        usePair & operator= (usePair rhs);
        void print();
    };

    #include "usepair.h"
    #include <iostream>
    using namespace std;

    usePair::usePair()
    {
        valid = true;
        thepair = new struct pair;
        thepair->first = 0;
        thepair->second = 0;
    }

    usePair::usePair(int f, int s)
    {
        valid = true;
        thepair = new struct pair;
        thepair->first = f;
        thepair->second = s;
    }

    usePair::~usePair()
    {
        delete thepair;
    }

    void usePair::setFirst(int f)
    {
        thepair->first = f;
    }

    void usePair::setSecond(int s)
    {
        thepair->second = s;
    }

    usePair & usePair::operator= (usePair rhs)
    {
        valid = rhs.valid;
        thepair->first = rhs.thepair->first;
        thepair->second = rhs.thepair->second;
        rhs.thepair->first = thepair->second;
        rhs.thepair->second = thepair->first;
        return (*this);
    }

    void usePair::print()
    {
        cout << "(" << thepair->first << "," << thepair->second << ")" << endl;
    }

Now consider the following main function, which uses the above class.

    #include <iostream>
    #include "usePair.h"
    using namespace std;

    int main()
    {
        usePair a(1,1);
        usePair b(4,16);
        usePair c = b;
        usePair d;

        a.print();    // Statement # 1
        b.print();    // Statement # 2
        c.print();    // Statement # 3
        d.print();    // Statement # 4

        a.setFirst(0);
        b.setFirst(8);
        c.setSecond(20);
        d.setFirst(5);
        d.setSecond(10);

        a.print();    // Statement # 5
        b.print();    // Statement # 6
        c.print();    // Statement # 7
        d.print();    // Statement # 8

        a = b;
        d = c;

        a.print();    // Statement # 9
        b.print();    // Statement # 10
        c.print();    // Statement # 11
        d.print();    // Statement # 12

        return (0);
    }

Indicate what is being printed by each statement in the above main function.
For simplicity, each statement that produces output has been given a number, and you can write the output of each statement in the table below.

    Statement #    Output
    1
    2
    3
    4
    5
    6
    7
    8
    9
    10
    11
    12

Question 8. (9 marks). Linked Lists.

(a) (3 marks). You are given two pointers P and Q to the singly-linked list shown below. Assume that each node in the list contains a data field (of type integer) and a next field. Use the P and Q pointers to switch the positions of the nodes B and C so that the nodes occur in the order A, C, B and D. Do not simply swap the integer values associated with nodes B and C. Also, do not change the values of P and Q and do not use any additional variables.

[Figure: singly-linked list A -> B -> C -> D with data values 1, 2, 3, 4; each node has data and next fields; pointers P and Q point into the list.]

(b) (6 marks). You are again given two pointers P and Q to the list shown below. Use these pointers to switch the positions of the nodes B and E so that the nodes occur in the order A, E, C, D, B and F. Do not simply swap the integer values associated with nodes B and E, and do not change the values of P and Q. You can declare and use one additional variable.

[Figure: singly-linked list A -> B -> C -> D -> E -> F with data values 1, 2, 3, 4, 5, 6; each node has data and next fields; pointers P and Q point into the list.]

Question 9. (6 marks). Tree Traversals.

(a) (3 marks). Give the inorder, preorder, and postorder traversals of the tree shown below. The tree represents the mathematical expression x - y + z.

[Figure: expression tree with root +, left child - (with children x and y), and right child z]

Inorder Traversal:
Preorder Traversal:
Postorder Traversal:

(b) (3 marks). The following tree is a Binary Search Tree (BST) that contains all integers between 1 and 14. Give the inorder traversal of the tree.

[Figure: BST whose nodes, listed level by level, are 10; 6, 11; 2, 7, 12; 1, 3, 9, 14; 5, 8, 13]

Question 10. (5 marks). Tree Traversals.

Write a recursive function to perform the triple-order traversal of a binary tree, meaning that for each node of the tree, the function first visits the node, then traverses its left subtree (in triple-order), then visits the node again, then traverses its right subtree (in triple-order), then visits the node again.
Hence, the triple-order traversal of the tree shown below is: 3 4 2 2 2 4 1 1 1 4 3 5 5 5 3.

[Figure: binary tree with root 3, left child 4 (which has children 2 and 1), and right child 5]

Write a recursive function to perform the triple-order traversal of a binary tree. Assume that visiting a node simply prints its key to cout. Your code should be very short (5-6 lines)! Excessively long code will be penalized.

    class treenode {
    public:
        int data;
        treenode *left;
        treenode *right;
    };

    treenode *Root;  // root of the tree

Question 11. (6 marks). C++ I/O.

Your task is to expand the command parser from Lab 3 to implement an additional operation, printmultiple, which accepts multiple student numbers and prints out a record for each student. The command format is:

The first parameter, N, indicates the number of student numbers that follow on the line. The following examples are both valid commands:

    printmultiple 0
    Done.

    printmultiple 2 987654321 987654320
    OUTPUT FOR STUDENT 987654321 APPEARS HERE
    OUTPUT FOR STUDENT 987654320 APPEARS HERE
    Done.

If any of the numbers are invalid (e.g., "foo" or "-12"), you should skip processing the remainder of the line without printing an error message. You may assume that you have access to the following function, which handles the printing of the student record:

The printStudent function will return a false value if the student number does not exist in the database; in this event, you should skip the remaining student numbers, without printing an error message. When you are finished processing the printmultiple command (whether there were errors or not) you should print "Done". You are not expected to handle any other errors, or produce any additional output.

Using only the cin operator for input, complete the following code fragment for processing a single printmultiple command:

    char buffer[MAX_COMMAND_SIZE];
    // ...
    if (!strcmp(buffer, "printmultiple")) {

Question 12. (6 marks). Recursion.
A tail-recursive function is a special kind of recursive function in which, after completing all other operations, the function simply calls itself and returns the result of this call. Below, the factorial function is shown on the left. Write a tail-recursive factorial function. A template for this function is shown on the right.

    int factorial(int n)                int factorial(              )
    {                                   {
        if (n <= 1) return 1;
        return n * factorial(n-1);          return factorial(              );
    }                                   }
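The idea behind tail-recursive factorial (not part of the original exam, and the parameter name is my own) is to thread an accumulator through the calls so the multiplication happens before the recursive call; a Python sketch:

```python
def factorial(n, acc=1):
    """Tail-recursive factorial: the multiplication happens before the
    recursive call, so the call is the function's final action."""
    if n <= 1:
        return acc
    return factorial(n - 1, acc * n)  # nothing left to do after this call

print(factorial(5))  # 120
print(factorial(1))  # 1
```

Languages with tail-call optimization can turn this into a loop; plain C++ compilers often do the same under optimization.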
https://fr.scribd.com/document/37415320/2007-Midterm-ECE106
Wayne Johnson and I presented the Playable API at GDC 2016, and we would like to share with you what we talked about. Playables are a new graph-like API in Unity that allows precise programmatic control over animation and audio. We are also planning on having video use the same API. Please be aware that the API is still experimental, as we are still working on making it very slick and easy to use. The examples provided here work with Unity 5.4. We expect that there will be changes in the API again before it's integrated into one of our future releases. Keep an eye on our Roadmap for updates.

The simple case: Playing a clip

One of the main goals of the playable system is to make sure simple things are simple to do. So let's start with the basics and play a clip. Easy? Now, we can control time and speed simply like this:

A step further: Creating a dynamic Blend Tree

BlendTrees allow blending animation values between different AnimationClips or even between other BlendTrees. Mecanim's BlendTrees are quite powerful, but they need to be built in the Editor. With Playables, it's now possible to create and control BlendTrees at runtime. We can do that using the AnimationMixerPlayable. Then, we control the weights using:

Pro tip: make sure the sum of the weights is 1.0f.

Custom Playable: Crossfade

While designing the Playable API, one of our key goals was for the system to be extensible. The CustomAnimationPlayable allows you to write your own Playable logic. The secret sauce here is the PrepareFrame callback, which automatically gets called when the Playable graph is traversed by the game loop. PrepareFrame is where the blending/weighting logic of the CustomPlayable happens. In the following example, we implemented a CrossFade Playable that looks and behaves a lot like the Legacy animation system's CrossFade:

We are also working on another callback for the CustomPlayable, which will be ProcessFrame.
In the case of animation, this will allow writing directly into the animation stream and will give users the power to have their own IK, procedural animation, physics-based systems, live mocap, etc.

Putting it all together with AnimatorControllerPlayable

Another very nice thing about the API is that you can actually have AnimatorControllers in your graphs. Yes! This means that you can blend between StateMachines! This allows objects and props to supply their own animation StateMachine. For instance, let's say you have a weapon and you want the weapon to "teach" the player how to use it. The weapon will provide the AnimatorController (StateMachine) to the Player and will ask to crossfade to it. The CharacterPlayableHandler is a simple interface over a CrossFade Playable.

Seeing is believing

It's very important to be able to visualize the Playable graphs during runtime. In order to do so, we have created the GraphVisualizer, an open source editor suite that allows previewing Playable graphs. You can download it right now from BitBucket at. There are examples included and it's very simple to use!

So what about audio?

During the GDC Dev Day talk we also presented a Playable API for audio, which works in a similar way to the current Animation Playable API. The API allows very powerful, low-level control of audio concepts within Unity. It will not only allow programmers to do new and interesting things with audio in Unity, but will also serve as the foundation of future audio tooling in the Editor. We would have liked to present a section outlining code examples for Audio Playables; however, this API is currently in flux and will not be present in the 5.4 release cycle. We're looking forward to showing you the Audio Playables once we have finalised the API!

That is it for now! We hope this gets you excited about Playables.
As we said, this is still stuff we are working on; there are a couple of things that we want to change and improve before we move it out of the experimental stage. So please, try it and tell us what you think about it!

21 replies on "Unleashing Animation and Audio"

Time.time gives seconds, so the weight resolution is too rough when, for example, the animationClip is 2 s in total, if we assume PrepareFrame is called at FPS rate. Also, shifting the port while playing might cause glitches. Have fun!

    namespace ws.winx.unity.components {

        public class CrossFadeMixerPlayable : CustomAnimationPlayable {
            private float m_TransitionDuration = 1.0f;
            private float m_TransitionTime = 0.0f;
            private float m_TransitionStart = 0.0f;
            private bool m_InTransition = false;
            private AnimationMixerPlayable mixer;

            private enum FreePort { PORT0 = 0, PORT1 }
            private FreePort __freePort;
            private float __frameDuration;

            public CrossFadeMixerPlayable() {
                mixer = AnimationMixerPlayable.Create();
                Playable.Connect(mixer, this);
                mixer.AddInput(Playable.Null);
                mixer.AddInput(Playable.Null);
                __freePort = FreePort.PORT0;
            }

            public void Play(AnimationClipPlayable animationClipPlayable) {
                this.Crossfade(animationClipPlayable, 0f);
            }

            /// transitionDuration: in sec
            /// frameRate: (default) 30
            public void Crossfade(AnimationClipPlayable animationPlayable,
                                  float transitionDuration, int frameRate = 30) {
                m_TransitionTime = 0.0f;
                __frameDuration = 1f / frameRate;

                Playable playable = mixer.GetInput((int)__freePort);
                if (playable != Playable.Null) {
                    playable.time = 0f;
                    playable.state = PlayState.Paused;
                    Playable.Disconnect(mixer, (int)__freePort);
                }

                // Connect to free port
                Playable.Connect(animationPlayable, mixer, 0, (int)__freePort);
                animationPlayable.time = 0f;
                animationPlayable.state = PlayState.Playing;

                // UPGRADES
                // make dynamic transition value equal duration of current playable to finish???
                // transitionDuration = (1f - (float)(playable1.time - (int)playable1.time)) * playable1.CastTo<AnimationClipPlayable>().clip.length;
                // if this duration is longer than the new playable, then shift the start of the transition by the amount of the difference

                if (__freePort == FreePort.PORT0) {
                    __freePort = FreePort.PORT1;
                } else {
                    __freePort = FreePort.PORT0;
                }

                m_TransitionDuration = transitionDuration;
                m_InTransition = true;

                Debug.Log("Time at start:" + m_TransitionTime + " duration:" + m_TransitionDuration);
            }

            public override void PrepareFrame(FrameData info) {
                if (m_InTransition) {
                    if (m_TransitionTime <= m_TransitionDuration) {
                        m_TransitionTime += __frameDuration;  // Time.time - m_TransitionStart;
                        float weight = Mathf.Clamp01(m_TransitionTime / m_TransitionDuration);
                        mixer.SetInputWeight(1 - (int)__freePort, weight);
                        mixer.SetInputWeight((int)__freePort, 1.0f - weight);
                        Debug.Log("Transition time update:" + m_TransitionTime);
                    } else {
                        mixer.RemoveInput((int)__freePort);
                        m_InTransition = false;
                        Debug.Log("Transition end at:" + m_TransitionTime);
                    }
                    Debug.Log(this.ToString());
                }
            }

            public override string ToString() {
                string info = string.Empty;
                info += " crossfade layer mixer time:" + mixer.time;
                for (int i = 0; i < mixer.inputCount; i++) {
                    Playable playable = mixer.GetInput(i);
                    if (playable != Playable.Null) {
                        info += "\n Port:" + i + " State:" + playable.state
                              + " time:" + ((float)(playable.time - (int)playable.time))
                              + " weight:" + mixer.GetInputWeight(i)
                              + " clip:" + playable.CastTo<AnimationClipPlayable>().clip;
                    } else {
                        info += "\n Port:" + i + " Empty!";
                    }
                }
                return info;
            }
        }
    }

AnimationLayerMixerPlayable — could you shed some light on this sealed type?

All this hassle and we don't even have a proper tween engine. Mecanim is a big fail, making once-simple things overly complicated.

Unfortunately you are right. A bad decision was made by making Mecanim an animation-specific FSM instead of a general one, thus breaking the connection with the rest of the logic, and without an API that offers extra animation control by code.
Soon, when more control was obviously needed, they patched it with StateBehaviours. StateBehaviours live inside the asset and are a bit difficult to communicate with from GameObjects in the Scene, and the old school in Unity avoids UnityEvents. If you had followed, @pierrpaul mentioned "Legacy" twice at GDC and even in this text ("that looks and behaves a lot like the Legacy"), so it is obvious where things are going.

Is it possible to use AvatarTarget with Playables? What is the progress of Video Playables? * AvatarMask and layers in Playables

It's great! Something I would like to see, and desperately need, is to play just a section of a clip. I have an object with an animation of 800 frames that I need to cut into 100-frame clips at runtime. So maybe something like:

    AnimationClip clip;  // assign to your clips
    AnimationClipPlayable playable = AnimationClipPlayable.Create(clip, 0, 100);  // StartFrame, EndFrame
    GetComponent<Animator>().Play(playable);

That is interesting! Not sure if it's requested enough to be added to the API, but the good thing is that it's fairly easy to implement this with a CustomPlayable: simply handle the time directly in PrepareFrame.

In the latest versions of Unity I can't export a stable apk to run on Android devices. A "program force closed" error appears on most devices! It doesn't even show the Unity splash screen!
This really isn't something that replaces Mecanim; it complements it. With Playables you can tell an actual AnimationController to play a clip directly, which wasn't possible before (only possible with Legacy). You can also determine exactly how crossfading will work (based on any sort of curve or parameter you want). You can crossfade between state machines too. This unlocks so many possibilities for animation.

Be careful not to become an architecture astronaut. If you make a universal uber-framework, be sure to at least show practical, cool things you can do with it. Showing something in 50 lines of code that has been solved for years and that everybody takes for granted is the worst thing you can present to get people excited. There are so many things you could show that are currently "not so easy" or even impossible. What about a fully standard animated character running around in typical Mecanim 2D-blend-tree fashion (as seen in all the tech demos), and then you slightly blend in some electrocuted animation, regardless of the current Mecanim state he is in, working in all states? Or what about procedurally generated animation? Or physics blending with animations?

Hi. As I explain a little bit in the post, we want to give you guys access to the "animation feed". Meaning that with CustomPlayable you will have a function called ProcessFrame that will allow you to do all sorts of procedural animation (IK, mocap, pull systems, physics integration, etc.). It's not ready yet, but I hope to post something here as soon as I get something worth showing :)

Hey, where can I buy this ASAP? Will this let me do my own soundtrack, like 2D arcadey sound, and upload my own samples? Will I be able to make movie clips in the game?

Looks awesome, and I can't wait for this to be officially released. It's great that you guys are making some of the engine's systems more open to allow us to come up with our own solutions if we need to.
Any plans to have forward and backward controls for movie texture clips? I was hoping that would also be included.

This looks very promising. "We are also planning on having video use the same API." This actually catches my attention, since the whole movie texture system definitely needs an overhaul. Does this mean we'll get video support for mobile and VR as well, without launching external players? As of now you're bound to use plugins, and you depend on whether the creator continues development. Especially now with VR, video is a big thing, but native support is a huge problem. The rest of the article is interesting as well; I was just curious about video, since at last year's Unite, promises were made to improve it.

We are in the process of overhauling our video support to properly utilise the hardware decoding capabilities of mobile and other devices. This will make high-resolution video playback on mobile devices possible, including for VR applications. To begin with, we will roll out these video improvements as they become stable. From there we will integrate video concepts into the Playable framework to allow a low-level API for video compositing.

Will this work with the Legacy animation system? I'm never going to be able to use Mecanim due to my rig's IK structure.

I don't see why Mecanim would cause you trouble for that. If you want custom IK with Mecanim, simply run your Mecanim animations normally, and then make an IK script that processes IK in LateUpdate(). LateUpdate() happens right after the new animation pose has been computed, but right before the scene is rendered.
https://blogs.unity3d.com/cn/2016/04/13/unleashing-animation-and-audio-2/
Hi guys--I have to write a program that reads a set of integers and then finds and prints the sum of the even and odd integers. For instance, if the numbers were: 9, 10, 17, -20, 22, -3, evens should be 12, odds 23, all 35. I'm messing up the math but here's what I have below--help!!!

import java.util.*;

public class Integers
{
    static Scanner console = new Scanner(System.in);
    static final int SENTINEL = -999;

    public static void main(String[] args)
    {
        int number;
        int sumAll = 0;
        int sumEven = 0;
        int sumOdd = 0;

        System.out.println("Enter integers, positive, negative, or zeros ending with " + SENTINEL);
        number = console.nextInt();
        System.out.println();

        int count = 0;
        while (number != SENTINEL)
        {
            sumAll += count;
            number = console.nextInt();
            if (count % 2 == 0)
                sumEven += count;
            else
                sumOdd += count;
            count++;
        }

        System.out.println("Sum of all integers: " + sumAll);
        System.out.println("Sum of the even integers: " + sumEven);
        System.out.println("Sum of the odd integers: " + sumOdd);
    }
}
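The math goes wrong because the loop adds the loop counter count instead of the number just read, and it also tests even/odd on count. A corrected sketch (the class and helper names here are mine, not from the original post); in the original program, the fix is to use number in all three accumulations, classify the current number before reading the next one, and move console.nextInt() to the end of the loop:

```java
public class IntegerSums {

    // Returns {sumEven, sumOdd, sumAll} for the given values.
    static int[] sums(int[] values) {
        int sumEven = 0, sumOdd = 0, sumAll = 0;
        for (int number : values) {
            sumAll += number;        // add the value itself, not the counter
            if (number % 2 == 0) {
                sumEven += number;   // in Java, -4 % 2 == 0, so negatives work
            } else {
                sumOdd += number;
            }
        }
        return new int[] { sumEven, sumOdd, sumAll };
    }

    public static void main(String[] args) {
        int[] r = sums(new int[] { 9, 10, 17, -20, 22, -3 });
        System.out.println("Sum of all integers: " + r[2]);
        System.out.println("Sum of the even integers: " + r[0]);
        System.out.println("Sum of the odd integers: " + r[1]);
    }
}
```

With the example input 9, 10, 17, -20, 22, -3, this prints 35, 12, and 23, matching the expected sums.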
http://mathhelpforum.com/advanced-math-topics/6873-java-program-summing-integers.html
I'm going to take a brief intermission in my Scala series and show a head-to-head comparison of some code in Scala and C#. To do this, I'm going to go with the first and second problems from Project Euler. If you're not familiar with the site, it's a playground full of problems that are absolutely perfect for functional languages (because they tend to be mathematical functions). So let's get started with Question #1:

If we list all the natural numbers below 10 that are multiples of 3 or 5, we get 3, 5, 6 and 9. The sum of these multiples is 23. Find the sum of all the multiples of 3 or 5 below 1000.

We'll start with the C# version:

public class Problem1
{
    public int GetResult()
    {
        return Enumerable.Range(1, 999).Where(i => i % 3 == 0 || i % 5 == 0).Sum();
    }
}

With the magic of Linq this is pretty easy (I'll show later how Linq is basically a way to do list comprehensions in C#, but that's for one of the longer posts). Now, on to the Scala version (which you can paste into your REPL or SimplyScala if you want):

(1 until 1000).filter(r => r % 3 == 0 || r % 5 == 0).reduceLeft(_+_)

Comparing these two, they are fairly similar. Creating the initial range of numbers is a little easier in Scala (and using the until "keyword" means we don't have to use 999 like in C#). Instead of Where, Scala uses the more traditional filter function, and we have to do a little more work and use the reduceLeft function with the special _+_ I talked about before, but overall they are quite similar. So let's move on to Question #2, which seems pretty straightforward: we want a Fibonacci sequence generator, then we simply need to filter out odd values, and sum the even values that are less than 4 million to get our answer.
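(As a quick sanity check of that plan, here is the same pipeline sketched in Python; this snippet is my addition and not part of the original post.)

```python
def fib():
    """Unbounded Fibonacci generator: 0, 1, 1, 2, 3, 5, ..."""
    a, b = 0, 1
    while True:
        yield a
        a, b = b, a + b

def sum_even_fib(limit):
    """Sum the even Fibonacci values that do not exceed `limit`."""
    total = 0
    for n in fib():
        if n > limit:
            break
        if n % 2 == 0:
            total += n
    return total

print(sum_even_fib(4_000_000))  # prints 4613732
```

The even Fibonacci numbers up to 4 million (2, 8, 34, 144, ..., 3524578) sum to 4613732, which is what both implementations below should produce.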
Let's start with C#:

public class Problem2
{
    private IEnumerable<int> Fibonacci()
    {
        var tuple = Tuple.Create(0, 1);
        while (true)
        {
            yield return tuple.Item1;
            tuple = Tuple.Create(tuple.Item2, tuple.Item1 + tuple.Item2);
        }
    }

    public int GetResult()
    {
        return Fibonacci().Where(i => i % 2 == 0).TakeWhile(i => i < 4000000).Sum();
    }
}

This one is a little more involved because we have to generate a Fibonacci sequence. I decided to use an iterator and the magical yield keyword to make a never-ending sequence (well, never-ending until we end up with an overflow exception, that is), but beyond that the solution is very similar to problem #1. Now for the Scala version:

lazy val fib:Stream[Int] = Stream.cons(0,Stream.cons(1,fib.zip(fib.tail).map(p => p._1 + p._2)))
fib.filter(_ % 2 == 0).takeWhile(_ <= 4000000).reduceLeft(_+_)

Well, isn't this interesting… The first line is the equivalent of our C# iterator. It's creating a lazy stream, which is a stream whose contents are calculated as they are needed (just like our iterator). The difference here is that Scala doesn't try to evaluate it all if you just type fib into the REPL: it will give you the first few results and then say "Look, I could go on, but you didn't tell me how far to go so I'm just going to stop here", and it spits out a ? to let you know that there could be more. This means that Scala has a deeper understanding that this thing may well never end. Keeping this in mind, we're actually calculating its contents recursively (after hard-coding the 0 and 1 values) by using the zip function, which will take a list and combine it with another list into a collection of tuples. For the second list, which gets passed in to the zip function on fib, we're specifying fib.tail, which is our list minus the first element. So if our list starts out looking like List(0,1,...) then fib.tail is List(1,...). That means the initial call to zip creates the tuple (0,1).
Now, from there we use the map function (translate this to Select in Linq-ease) to return the sum of the first and second items from our tuple. So now we have just created the third element in our sequence: 1. The next time round this all happens again, only on the next elements in the two sequences respectively. So the zip function returns a tuple with the second and third elements in the sequence, (1,1), and lo and behold the 4th element in our sequence is born. This will go on until you stop asking for values, or you get an overflow exception. The entire time, the evaluation of what is in the sequence is exactly one element ahead of what is being returned. Kinda mind-bending, no?

Now for the second line, we once again have an almost one-to-one map to our C# code. We filter out the odd values, take all the values less than 4 million, and then sum up the results.

Hopefully this has been at least a little bit enlightening. I'll continue making some more detailed forays into the world of Scala, but I thought an occasional one-to-one comparison might help shed some light on some of the places where Scala offers some added elegance over what is possible in C#, as well as those spots where it doesn't.
http://drrandom.org/2011/08/04/making-the-climb-head-to-head-with-project-euler-questions-1-2/
resolving all the FUD. Pootle, Launchpad translations, Translatewiki, Transifex, and Zanata. So there really is no excuse not to i18n your Python application. In fact, GNU Mailman has been i18n'd for many years, and pioneered the supporting code in Python's standard library, namely the gettext module. As part of the Mailman 3 effort, I've also written a higher level library called flufl.i18n which makes it even easier to i18n your application, even in tricky multi-language contexts such as server programs, where you might need to get a German translation and a French translation in one operation, then turn around and get Japanese, Italian, and English for the next operation.

If you read the gettext module's documentation, you'd be inclined to do this at the very start of your application:

from gettext import gettext as _
gettext.textdomain(my_program_name)

Then you'd wrap translatable strings in code like this:

print _('Here is something I want to tell you')

Anyway, if you do write the above code, you'll be in for a heap of trouble, as my colleague soon found out. Just running his program with --help in a French locale, he was getting the dreaded UnicodeEncodeError: "UnicodeEncodeError: 'ascii' codec can't encode character".

First, why is that code wrong, and why does it lead to the UnicodeEncodeErrors? What might not be obvious from the Python 2 gettext documentation is that gettext.gettext() always returns 8-bit strings (a.k.a. byte strings in Python 3 terminology), and these 8-bit strings are encoded with the charset defined in the language's catalog file. So the 8-bit strings that gettext.gettext() returns are in the catalog's charset rather than ASCII, and mixing them with unicode text is exactly what triggers the error. What you really want in Python 2 is something like this:

from gettext import ugettext as _

which you'd think you should be able to do, the "u" prefix meaning "give me unicode".
But for reasons I can only describe as based on our misunderstandings of unicode and i18n at the time, you can't actually do that, because ugettext() is not exposed as a module-level function. It is available in the class-based API, but that's a more advanced API that again almost no one uses. Sadly, it's too late to fix this in Python 2. The good news is that in Python 3 it is fixed, not by exposing ugettext(), but by changing the most commonly used gettext module APIs to return unicode strings directly, as it always should have done. In Python 3, the obvious code just works:

from gettext import gettext as _

What can you do in Python 2 then? Here's what you should use instead of the two lines of code at the beginning of this article:

_ = gettext.translation(my_program_name).ugettext

and now you can wrap all your translatable strings in _('Foo') and it should Just Work.

(Perhaps more usefully, you can use the gettext.install() function to put _() into the builtins namespace. Or you can use the flufl.i18n API, which always returns unicode strings in both Python 2 and Python 3.)

Also interesting was that I could never reproduce the crash when ssh'd into the French locale VM. It would only crash for me when I was logged into a terminal on the VM's graphical desktop. The only difference between the two that I could tell was that in the desktop's terminal, locale(8) returned French values (e.g. fr_FR.UTF-8) for everything, but in the ssh console, it returned the French values for everything except the LC_CTYPE environment variable. For the life of me, I could not get LC_CTYPE set to anything other than en_US.UTF-8 in the ssh context, so the reproducible test case would just return the English text, and not crash. This happened even if I explicitly set that environment variable either as a separate export command in the shell, or as a prefix to the normally crashing command. Maybe there's something in ssh that causes this, but I couldn't find it.

One last thing.
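A sketch of that pattern which also degrades gracefully when no catalog is installed (the fallback=True argument and the helper name are my additions, not from the article):

```python
import gettext

def make_ugettext(domain, localedir=None):
    """Return a translation function that yields unicode strings on
    both Python 2 and Python 3."""
    # fallback=True avoids an IOError/OSError when no .mo catalog exists.
    t = gettext.translation(domain, localedir=localedir, fallback=True)
    try:
        return t.ugettext   # Python 2: the explicit unicode variant
    except AttributeError:
        return t.gettext    # Python 3: str is already unicode

_ = make_ugettext('my_program_name')
print(_('Here is something I want to tell you'))
```

With a fallback translation in place, untranslated messages simply pass through unchanged, which makes the program safe to run in any locale during development.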
It's important to understand that Python's gettext module only handles Python strings, and other subsystems may be involved. The classic example is GObject Introspection, the newest and recommended interface to the GNOME Object system. If your Python-GI based project needs to translate strings too (e.g. in menus or other UI elements), you'll have to use both the gettext API for your Python strings, and set the locale for the C-based bits using locale.setlocale(). This is because Python's API does not set the locale automatically, and Python-GI exposes no other way to control the language it uses for translations.
https://www.wefearchange.org/2012/06/the-right-way-to-internationalize-your.html
These files can be any type of file; files embedded as images must be in image formats supported by Qt. The output from QEmbed is a C++ header file which you should include in a C++ source file. In the source file, you should make a wrapper function that suits your application. Two functions are provided; your wrapper function could just call one of these, or you can implement your own. Here's a simple example of usage for each of the supplied functions:

#include "generated_qembed_file.h"

QImage myFindImage(const char* name)
{
    return qembed_findImage(name);
}

Just call the generated function; name is the original image filename without the extension.

#include "generated_qembed_file.h"

QByteArray myFindData(const char* name)
{
    return qembed_findData(name);
}

Just call the generated function; name is the original filename with the extension.

Alternatively, look at the output from QEmbed and write a function tailored to your needs.
http://idlebox.net/2007/apidocs/qt-x11-free-3.3.8.zip/qembed.html
Brian "Beej Jorgensen" Hall
beej@beej.us
Version 3.0.21
June 8, 2016

2. What is a socket?
2.1. Two Types of Internet Sockets
2.2. Low level Nonsense and Network Theory
3. IP Addresses, structs, and Data Munging
3.1. IP Addresses, versions 4 and 6
3.2. Byte Order
3.3. structs
3.4. IP Addresses, Part Deux
4. Jumping from IPv4 to IPv6
5. System Calls or Bust
5.1. getaddrinfo()--Prepare to launch!
5.2. socket()--Get the File Descriptor!
5.3. bind()--What port am I on?
5.4. connect()--Hey, you!
5.5. listen()--Will somebody please call me?
5.6. accept()--"Thank you for calling port 3490."
5.7. send() and recv()--Talk to me, baby!
5.8. sendto() and recvfrom()--Talk to me, DGRAM-style
5.9. close() and shutdown()--Get outta my face!
5.10. getpeername()--Who are you?
5.11. gethostname()--Who am I?
6. Client-Server Background
6.1. A Simple Stream Server
6.2. A Simple Stream Client
6.3. Datagram Sockets
7. Slightly Advanced Techniques
7.1. Blocking
7.2. select()--Synchronous I/O Multiplexing
7.3. Handling Partial send()s
7.4. Serialization--How to Pack Data
7.5. Son of Data Encapsulation
7.6. Broadcast Packets--Hello, World!
9. Man Pages
9.1. accept()
9.2. bind()
9.3. connect()
9.4. close()
9.5. getaddrinfo(), freeaddrinfo(), gai_strerror()
9.6. gethostname()
9.7. gethostbyname(), gethostbyaddr()
9.8. getnameinfo()
9.9. getpeername()
9.10. errno
9.11. fcntl()
9.12. htons(), htonl(), ntohs(), ntohl()
9.13. inet_ntoa(), inet_aton(), inet_addr
9.14. inet_ntop(), inet_pton()
9.15. listen()
9.16. perror(), strerror()
9.17. poll()
9.18. recv(), recvfrom()
9.19. select()
9.20. setsockopt(), getsockopt()
9.21. send(), sendto()
9.22. shutdown()
9.23. socket()
9.24. struct sockaddr and pals
10. More References
10.1. Books
10.2. Web References
10.3. RFCs

Hey! If you're on a Solaris or SunOS box, you'll need to add "-lnsl -lsocket -lresolv" to the end of the compile command. If you're on Windows, you need to use closesocket() instead of close(). Also, select() only works with socket descriptors, not file descriptors (like 0 for stdin).
There is also a socket class that you can use, CSocket. Check your compiler's help pages for more information.

You hear talk of "sockets" all the time, and perhaps you are wondering just what they are exactly. Well, they're this: a way to speak to other programs using standard Unix file descriptors.

What? Okay--you may have heard some Unix hacker state, "Jeez, everything in Unix is a file!"

When it comes to byte order, you can convert two kinds of numbers: short (two bytes) and long (four bytes). Say you want to convert a short from Host Byte Order to Network Byte Order: start with "h" for "host", follow it with "to", then "n" for "network", and "s" for "short": h-to-n-s, or htons() (read: "Host to Network Short"). You can use almost every combination of "n", "h", "s", and "l" you want; for example, there is NOT a stolh() ("Short to Long Host") function--not at this party, anyway. But there are:

htons() -- host to network short
htonl() -- host to network long
ntohs() -- network to host short
ntohl() -- network to host long

First the easy one: a socket descriptor. A socket descriptor is just a regular int. Things get weird from here, so just read through and bear with me.

My First Struct(TM)--struct addrinfo. This structure is a more recent invention, and is used to prep the socket address structures for subsequent use. It's also used in host name lookups, and service name lookups. That'll make more sense later when we get to actual usage, but just know for now that it's one of the first things you'll call when making a connection.

struct addrinfo {
    int              ai_flags;     // AI_PASSIVE, AI_CANONNAME, etc.
    int              ai_family;    // AF_INET, AF_INET6, AF_UNSPEC
    int              ai_socktype;  // SOCK_STREAM, SOCK_DGRAM
    int              ai_protocol;  // use 0 for "any"
    size_t           ai_addrlen;   // size of ai_addr in bytes
    struct sockaddr *ai_addr;      // struct sockaddr_in or _in6
    char            *ai_canonname; // full canonical hostname
    struct addrinfo *ai_next;      // linked list, next node
};

You'll load this struct up a bit and then call getaddrinfo(), which returns a pointer to a linked list of these structures; if the host has multiple addresses, there could be several results for you to choose from. I'd use the first result that worked, but you might have different business needs; I don't know everything, man!

You'll see that the ai_addr field in the struct addrinfo is a pointer to a struct sockaddr. This is where we start getting into the nitty-gritty details of what's inside an IP address structure. You might not usually need to write to these structures; oftentimes, a call to getaddrinfo() to fill out your struct addrinfo for you is all you'll need. You will, however, have to peer inside these structs to get the values out, so I'm presenting them here.
(Also, in all the code written before struct addrinfo was invented, we packed all this stuff by hand, so you'll see a lot of IPv4 code out in the wild that does exactly that. You know, in old versions of this guide and so on.)

Some structs are IPv4, some are IPv6, and some are both. I'll make notes of which are what.

Anyway, the struct sockaddr holds socket address information for many types of sockets:

struct sockaddr {
    unsigned short sa_family;   // address family, AF_xxx
    char           sa_data[14]; // 14 bytes of protocol address
};

To deal with struct sockaddr, programmers created a parallel structure for IPv4:

// (IPv4 only--see struct sockaddr_in6 for IPv6)
struct sockaddr_in {
    short int          sin_family;  // Address family, AF_INET
    unsigned short int sin_port;    // Port number
    struct in_addr     sin_addr;    // Internet address
    unsigned char      sin_zero[8]; // Same size as struct sockaddr
};

The sin_zero field, which is included to pad the structure to the length of a struct sockaddr, should be set to all zeros with the function memset(). Also, notice that sin_family corresponds to sa_family in a struct sockaddr and should be set to "AF_INET". Finally, the sin_port must be in Network Byte Order (by using htons()!)

Let's dig deeper! You see the sin_addr field is a struct in_addr. What is that thing? Well, not to be overly dramatic, but it's one of the scariest unions of all time:

// (IPv4 only--see struct in6_addr for IPv6)
// Internet address (a structure for historical reasons)
struct in_addr {
    uint32_t s_addr; // that's a 32-bit int (4 bytes)
};

Whoa! (Well, it used to be a union, but now those days seem to be gone. Good riddance.)

What about IPv6? Similar structs exist for it, as well:

// (IPv6 only--see struct sockaddr_in and struct in_addr for IPv4)
struct sockaddr_in6 {
    u_int16_t       sin6_family;   // address family, AF_INET6
    u_int16_t       sin6_port;     // port number, Network Byte Order
    u_int32_t       sin6_flowinfo; // IPv6 flow information
    struct in6_addr sin6_addr;     // IPv6 address
    u_int32_t       sin6_scope_id; // Scope ID
};

struct in6_addr {
    unsigned char s6_addr[16];     // IPv6 address
};

Lastly, there is the struct sockaddr_storage, which is designed to be large enough to hold both IPv4 and IPv6 structures. (See, for some calls, sometimes you don't know in advance if it's going to fill out your struct sockaddr with an IPv4 or IPv6 address. So you pass in this parallel structure, very similar to struct sockaddr except larger, and then cast it to the type you need:

struct sockaddr_storage {
    sa_family_t ss_family;  // address family
    // all this is padding, implementation specific, ignore it:
    char        __ss_pad1[_SS_PAD1SIZE];
    int64_t     __ss_align;
    char        __ss_pad2[_SS_PAD2SIZE];
};

What's important is the address family in the ss_family field--check this to see if it's AF_INET or AF_INET6 (for IPv4 or IPv6). Then you can cast it to a struct sockaddr_in or struct sockaddr_in6 if you wanna.

Let's say you have a string containing an IP address, like "10.12.110.57" or "2001:db8:63b3:1::3490", that you want to store into one of these structs. The function you want to use, inet_pton(), converts an IP address in numbers-and-dots notation into either a struct in_addr or a struct in6_addr depending on whether you specify AF_INET or AF_INET6. ("pton" stands for "presentation to network"; you can call it "printable to network" if that's easier to remember.)
The conversion can be made as follows:

struct sockaddr_in sa;   // IPv4
struct sockaddr_in6 sa6; // IPv6

inet_pton(AF_INET, "10.12.110.57", &(sa.sin_addr));             // IPv4
inet_pton(AF_INET6, "2001:db8:63b3:1::3490", &(sa6.sin6_addr)); // IPv6

But what if you're going the other way? What if you have a struct in_addr and you want to print it in numbers-and-dots notation? (Or a struct in6_addr that you want in, uh, "hex-and-colons" notation.) In this case, you'll want to use the function inet_ntop() ("ntop" means "network to presentation"; you can call it "network to printable" if that's easier to remember), like this:

// IPv4:
char ip4[INET_ADDRSTRLEN];  // space to hold the IPv4 string
struct sockaddr_in sa;      // pretend this is loaded with something

inet_ntop(AF_INET, &(sa.sin_addr), ip4, INET_ADDRSTRLEN);
printf("The IPv4 address is: %s\n", ip4);

These functions only work with numeric IP addresses--they won't do any nameserver DNS lookup on a hostname. (Also, the IPv6 "any" address is declared like so: struct in6_addr ia6 = IN6ADDR_ANY_INIT;)

Et voila!

This is the section where we get into the system calls (and other library calls) that allow you to access the network functionality of a Unix box, or any box that supports the sockets API for that matter (BSD, Windows, Linux, Mac, what-have-you).

(Please note that for brevity, many code snippets below do not include necessary error checking. And they very commonly assume that the result from calls to getaddrinfo() succeed and return a valid entry in the linked list. Both of these situations are properly addressed in the stand-alone programs, though, so use those as a model.)

This is a real workhorse of a function with a lot of options, but usage is actually pretty simple. It helps set up the structs you need later on.

A tiny bit of history: it used to be that you would use a function called gethostbyname() to do DNS lookups. Then you'd load that information by hand into a struct sockaddr_in, and use that in your calls. This is no longer necessary, thankfully. (Nor is it desirable, if you want to write code that works for both IPv4 and IPv6!) In these modern times, you now have the function getaddrinfo() that does all kinds of good stuff for you, including DNS and service name lookups, and fills out the structs you need, besides!

Let's take a look!

#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>

int getaddrinfo(const char *node,     // e.g. "www.example.com" or IP
                const char *service,  // e.g.
"http" or port number const struct addrinfo *hints, struct addrinfo **res); You give this function three input parameters, and it gives you a pointer to a linked-list, res, of results. The node parameter is the host name to connect to, or an IP address. Next is the parameter service, which can be a port number, like "80", or the name of a particular service (found in The IANA Port List or the /etc/services file on your Unix machine) like "http" or "ftp" or "telnet" or "smtp" or whatever. Finally, the hints parameter points to a struct addrinfo that you've already filled out with relevant information. Here's a sample call if you're a server who wants to listen on your host's IP address, port 3490. Note that this doesn't actually do any listening or network setup; it merely sets hints.ai_flags = AI_PASSIVE; // fill in my IP for me if ((status = getaddrinfo(NULL, "3490", &hints, &servinfo)) != 0) { fprintf(stderr, "getaddrinfo error: %s\n", gai_strerror(status)); exit(1); } // servinfo now points to a linked list of 1 or more struct addrinfos // ... do everything until you don't need servinfo anymore .... freeaddrinfo(servinfo); // free the linked-list Notice that I set the ai_family to AF_UNSPEC, thereby saying that I don't care if we use IPv4 or IPv6. You can set it to AF_INET or AF_INET6 if you want one or the other specifically. Also, you'll see the AI_PASSIVE flag in there; this tells getaddrinfo() to assign the address of my local host to the socket structures. This is nice because then you don't have to hardcode it. (Or you can put a specific address in as the first parameter to getaddrinfo() where I currently have NULL, up there.) Then we make the call. If there's an error (getaddrinfo() returns non-zero), we can print it out using the function gai_strerror(), as you see. If everything works properly, though, servinfo will point to a linked list of struct addrinfos, each of which contains a struct sockaddr of some kind that we can use later! Nifty! 
Finally, when we're eventually all done with the linked list that getaddrinfo() so graciously allocated for us, we can (and should) free it all up with a call to freeaddrinfo().

Here's a sample call if you're a client who wants to connect to a particular server, say "www.example.com" port 3490. Again, this doesn't actually connect, but it sets up the structures we'll use later:

int status;
struct addrinfo hints;
struct addrinfo *servinfo;  // will point to the results

memset(&hints, 0, sizeof hints); // make sure the struct is empty
hints.ai_family = AF_UNSPEC;     // don't care IPv4 or IPv6
hints.ai_socktype = SOCK_STREAM; // TCP stream sockets

// get ready to connect
status = getaddrinfo("www.example.com", "3490", &hints, &servinfo);

// servinfo now points to a linked list of 1 or more struct addrinfos

// etc.

I keep saying that servinfo is a linked list with all kinds of address information. Let's write a quick demo program to show off this information. This short program will print the IP addresses for whatever host you specify on the command line:

/*
** showip.c -- show IP addresses for a host given on the command line
*/

#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netdb.h>
#include <arpa/inet.h>
#include <netinet/in.h>

int main(int argc, char *argv[])
{
    struct addrinfo hints, *res, *p;
    int status;
    char ipstr[INET6_ADDRSTRLEN];

    if (argc != 2) {
        fprintf(stderr, "usage: showip hostname\n");
        return 1;
    }

    memset(&hints, 0, sizeof hints);
    hints.ai_family = AF_UNSPEC; // AF_INET or AF_INET6 to force version
    hints.ai_socktype = SOCK_STREAM;

    if ((status = getaddrinfo(argv[1], NULL, &hints, &res)) != 0) {
        fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(status));
        return 2;
    }

    printf("IP addresses for %s:\n\n", argv[1]);

    for (p = res; p != NULL; p = p->ai_next) {
        void *addr;
        char *ipver;

        // get the pointer to the address itself,
        // different fields in IPv4 and IPv6:
        if (p->ai_family == AF_INET) { // IPv4
            struct sockaddr_in *ipv4 = (struct sockaddr_in *)p->ai_addr;
            addr = &(ipv4->sin_addr);
            ipver = "IPv4";
        } else { // IPv6
            struct sockaddr_in6 *ipv6 = (struct sockaddr_in6 *)p->ai_addr;
            addr = &(ipv6->sin6_addr);
            ipver = "IPv6";
        }

        // convert the IP to a string and print it:
        inet_ntop(p->ai_family, addr, ipstr, sizeof ipstr);
        printf("  %s: %s\n", ipver, ipstr);
    }

    freeaddrinfo(res); // free the linked list

    return 0;
}

As you see, the code calls getaddrinfo() on whatever you pass on the command line, which fills out the linked list pointed to by res, and then we can iterate over the list and print stuff out or do whatever.

(There's a little bit of ugliness there where we have to dig into the different types of struct sockaddrs depending on the IP version. Sorry about that! I'm not sure of a better way around it.)

Sample run! Everyone loves screenshots:

$ showip www.example.net
IP addresses for www.example.net:

IPv4: 192.0.2.88

$ showip ipv6.example.com
IP addresses for ipv6.example.com:

IPv4: 192.0.2.101
IPv6: 2001:db8:8c00:22::171

Now that we have that under control, we'll use the results we get from getaddrinfo() to pass to other socket functions and, at long last, get our network connection established! Keep reading!

I guess I can put it off no longer--I have to talk about the socket() system call.
Here's the breakdown:

#include <sys/types.h>
#include <sys/socket.h>

int socket(int domain, int type, int protocol);

But what are these arguments? They allow you to say what kind of socket you want (IPv4 or IPv6, stream or datagram, and TCP or UDP).

It used to be people would hardcode these values, and you can absolutely still do that. (domain is PF_INET or PF_INET6, type is SOCK_STREAM or SOCK_DGRAM, and protocol can be set to 0 to choose the proper protocol for the given type. Or you can call getprotobyname() to look up the protocol you want, "tcp" or "udp".)

(This PF_INET thing is a close relative of the AF_INET that you can use when initializing the sin_family field in your struct sockaddr_in. In fact, they're so closely related that they actually have the same value, and many programmers will call socket() and pass AF_INET as the first argument instead of PF_INET. Now, get some milk and cookies, because it's time for a story. Once upon a time, a long time ago, it was thought that maybe an address family (what the "AF" in "AF_INET" stands for) might support several protocols that were referred to by their protocol family (what the "PF" in "PF_INET" stands for). That didn't happen. And they all lived happily ever after, The End. So the most correct thing to do is to use AF_INET in your struct sockaddr_in and PF_INET in your call to socket().)

Anyway, enough of that. What you really want to do is use the values from the results of the call to getaddrinfo(), and feed them into socket() directly like this:

int s;
struct addrinfo hints, *res;

// do the lookup
// [pretend we already filled out the "hints" struct]
getaddrinfo("www.example.com", "http", &hints, &res);

// [again, you should do error-checking on getaddrinfo(), and walk
// the "res" linked list looking for valid entries instead of just
// assuming the first one is good (like many of these examples do.)
// See the section on client/server for real examples.]
s = socket(res->ai_family, res->ai_socktype, res->ai_protocol);

socket() simply returns to you a socket descriptor that you can use in later system calls, or -1 on error. The global variable errno is set to the error's value (see the errno man page for more details, and a quick note on using errno in multithreaded programs.)

Once you have a socket, you might have to associate it with a port on your local machine. (This is commonly done if you're going to listen() for incoming connections on a specific port--multiplayer network games do this when they tell you to "connect to 192.168.5.10 port 3490".) The port number is used by the kernel to match an incoming packet to a certain process's socket descriptor. If you're going to only be doing a connect() (because you're the client, not the server), this is probably unnecessary. Read it anyway, just for kicks.

Here is the synopsis for the bind() system call:

#include <sys/types.h>
#include <sys/socket.h>

int bind(int sockfd, struct sockaddr *my_addr, int addrlen);

sockfd is the socket file descriptor returned by socket(). my_addr is a pointer to a struct sockaddr that contains information about your address, namely, port and IP address. addrlen is the length in bytes of that address.

Whew. That's a bit to absorb in one chunk. Let's have an example that binds the socket to the host the program is running on, port 3490:

struct addrinfo hints, *res;
int sockfd;

// first, load up address structs with getaddrinfo():

memset(&hints, 0, sizeof hints);
hints.ai_family = AF_UNSPEC;     // use IPv4 or IPv6, whichever
hints.ai_socktype = SOCK_STREAM;
hints.ai_flags = AI_PASSIVE;     // fill in my IP for me

getaddrinfo(NULL, "3490", &hints, &res);

// make a socket:

sockfd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);

// bind it to the port we passed in to getaddrinfo():

bind(sockfd, res->ai_addr, res->ai_addrlen);

By using the AI_PASSIVE flag, I'm telling the program to bind to the IP of the host it's running on. If you want to bind to a specific local IP address, drop the AI_PASSIVE and put an IP address in for the first argument to getaddrinfo().

bind() also returns -1 on error and sets errno to the error's value.

Lots of old code manually packs the struct sockaddr_in before calling bind(). Obviously this is IPv4-specific, but there's really nothing stopping you from doing the same thing with IPv6, except that using getaddrinfo() is going to be easier, generally. Anyway, the old code looks something like this:

// !!! THIS IS THE OLD WAY !!!
int sockfd;
struct sockaddr_in my_addr;

sockfd = socket(PF_INET, SOCK_STREAM, 0);

my_addr.sin_family = AF_INET;
my_addr.sin_port = htons(MYPORT);     // short, network byte order
my_addr.sin_addr.s_addr = inet_addr("10.12.110.57");
memset(my_addr.sin_zero, '\0', sizeof my_addr.sin_zero);

bind(sockfd, (struct sockaddr *)&my_addr, sizeof my_addr);

In the above code, you could also assign INADDR_ANY to the s_addr field if you wanted to bind to your local IP address (like the AI_PASSIVE flag, above.) The IPv6 version of INADDR_ANY is a global variable in6addr_any that is assigned into the sin6_addr field of your struct sockaddr_in6. (There is also a macro IN6ADDR_ANY_INIT that you can use in a variable initializer.)

Sometimes you might notice that you try to rerun a server and bind() fails, claiming "Address already in use." What does that mean? Well, a little bit of a socket that was connected is still hanging around in the kernel, and it's hogging the port. You can either wait for it to clear (a minute or so), or add code to your program allowing it to reuse the port, like this:

int yes = 1;
//char yes = '1'; // Solaris people use this

// lose the pesky "Address already in use" error message
if (setsockopt(listener, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes) == -1) {
    perror("setsockopt");
    exit(1);
}

Next up, the connect() call:

#include <sys/types.h>
#include <sys/socket.h>

int connect(int sockfd, struct sockaddr *serv_addr, int addrlen);

sockfd is our friendly neighborhood socket file descriptor, as returned by the socket() call, serv_addr is a struct sockaddr containing the destination port and IP address, and addrlen is the length in bytes of the server address structure. All of this information can be gleaned from the results of the getaddrinfo() call, which rocks.

Is this starting to make more sense? I can't hear you from here, so I'll just have to hope that it is.
Let's have an example where we make a socket connection to "", port 3490:

struct addrinfo hints, *res;
int sockfd;

// first, load up address structs with getaddrinfo():
memset(&hints, 0, sizeof hints);
hints.ai_family = AF_UNSPEC;
hints.ai_socktype = SOCK_STREAM;
getaddrinfo("", "3490", &hints, &res);

// make a socket:
sockfd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);

// connect!
connect(sockfd, res->ai_addr, res->ai_addrlen);

Again, old-school programs filled out their own struct sockaddr_ins to pass to connect(). You can do that if you want to. See the similar note in the bind() section, above. Here is the synopsis for listen(): int listen(int sockfd, int backlog); You'll probably want to call bind() before you call listen() so that the server is running on a specific port. (You have to be able to tell your buddies which port to connect to!) So if you're going to be listening for incoming connections, the sequence of system calls you'll make is: getaddrinfo(); socket(); bind(); listen(); /* accept() goes here */ I'll just leave that in the place of sample code, since it's fairly self-explanatory. (The code in the accept() section, below, is more complete.) The really tricky part of this whole sha-bang is the call to accept(). Get ready: when a connection comes in, accept() hands you a brand new socket file descriptor. The original one is still listening for more new connections, and the newly created one is finally ready to send() and recv(). We're there! The call is as follows: #include <sys/types.h> #include <sys/socket.h> int accept(int sockfd, struct sockaddr *addr, socklen_t *addrlen); sockfd is the listen()ing socket descriptor. Easy enough. addr will usually be a pointer to a local struct sockaddr_storage. This is where the information about the incoming connection will go (and with it you can determine which host is calling you from which port).
addrlen is a local integer variable that should be set to sizeof(struct sockaddr_storage):

#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>

#define MYPORT "3490"  // the port users will be connecting to
#define BACKLOG 10     // how many pending connections queue will hold

int main(void)
{
    struct sockaddr_storage their_addr;
    socklen_t addr_size;
    struct addrinfo hints, *res;
    int sockfd, new_fd;

    // !! don't forget your error checking for these calls !!

Here is the synopsis for send(): int send(int sockfd, const void *msg, int len, int flags); For example:

char *msg = "Beej was here!";
int len, bytes_sent;
. . .
len = strlen(msg);
bytes_sent = send(sockfd, msg, len, 0);
. . .

send() returns the number of bytes actually sent out; this might be less than the number you told it to send! The recv() call is similar: int recv(int sockfd, void *buf, int len, int flags); For sendto(), the to parameter is a pointer to a struct sockaddr (which will probably be another struct sockaddr_in or struct sockaddr_in6 or struct sockaddr_storage that you cast at the last minute) which contains the destination IP address and port. tolen, an int deep-down, can simply be set to sizeof *to or sizeof(struct sockaddr_storage). To get your hands on the destination address structure, you'll probably either get it from getaddrinfo(), or from recvfrom(), below, or you'll fill it out by hand. recvfrom() is basically the same, with the addition of a couple of fields. from is a pointer to a local struct sockaddr_storage that will be filled with the IP address and port of the originating machine. fromlen is a pointer to a local int that should be initialized to sizeof *from or sizeof(struct sockaddr_storage). When the function returns, fromlen will contain the length of the address actually stored in from. recvfrom() returns the number of bytes received, or -1 on error (with errno set accordingly.) So, here's a question: why do we use struct sockaddr_storage as the socket type? Why not struct sockaddr_in? Because, you see, we want to not tie ourselves down to IPv4 or IPv6. So we use the generic struct sockaddr_storage which we know will be big enough for either. (So...
here's another question: why isn't struct sockaddr itself big enough for any address? We even cast the general-purpose struct sockaddr_storage to the general-purpose struct sockaddr! Seems extraneous and redundant, huh. The answer is, it just isn't big enough, and I'd guess that changing it at this point would be Problematic. So they made a new one.) The how parameter to shutdown() can be one of the following: 0 Further receives are disallowed; 1 Further sends are disallowed; 2 Further sends and receives are disallowed (like close()). Note that shutdown() doesn't actually close the file descriptor; it just changes its usability. To free a socket descriptor, you need to use close(). Nothing to it. (Except to remember that if you're using Windows and Winsock that you should call closesocket() instead of close().) This function is so easy. It's so easy, I almost didn't give it its own section. But here it is anyway. The function getpeername() will tell you who is at the other end of a connected stream socket. The synopsis: int getpeername(int sockfd, struct sockaddr *addr, int *addrlen); addrlen should be initialized to sizeof *addr or sizeof(struct sockaddr). The function returns -1 on error and sets errno accordingly. Once you have their address, you can use inet_ntop(), getnameinfo(), or gethostbyaddr() to get more information. The synopsis for gethostname(): #include <unistd.h> int gethostname(char *hostname, size_t size); Every time you use ftp, there's a remote program, ftpd, that serves you. Often, there will only be one server on a machine, and that server will handle multiple clients using fork(). The basic routine is: the server will wait for a connection, accept() it, and fork() a child process to handle it. This is what our sample server does in the next section. All this server does is send the string "Hello, world!" out over a stream connection. All you need to do to test this server is run it in one window, and telnet to it from another with: $ telnet remotehostname 3490 where remotehostname is the name of the machine you're running it on. /* ** server.c -- a stream socket server demo */ There's not much more to it than that, so I'll just present a couple of sample programs: talker.c and listener.c. listener sits on a machine waiting for an incoming packet on port 4950.
talker sends a packet to that port, on the specified machine, that contains whatever the user enters on the command line. Here is the source for listener.c: /* ** listener.c -- a datagram sockets "server" demo */ And that's all there is to it! Except for one more tiny detail that I've mentioned many times in the past: connected datagram sockets. I need to talk about this here, since we're in the datagram section of the document. Let's say that talker calls connect() and specifies the listener's address. From that point on, talker may only send to and receive from the address specified by connect(). For this reason, you don't have to use sendto() and recvfrom(); you can simply use send() and recv(). A socket can also be set to non-blocking; if you call recv() on it and there is no data waiting, it will return -1 and errno will be set to EAGAIN or EWOULDBLOCK. (Wait, it can return EAGAIN or EWOULDBLOCK? Which do you check for? The specification doesn't actually specify which your system will return, so for portability, check them both.) select() works with sets of file descriptors of type fd_set. The following macros operate on this type: FD_SET(int fd, fd_set *set), FD_CLR(int fd, fd_set *set), FD_ISSET(int fd, fd_set *set), and FD_ZERO(fd_set *set). You might be wondering if you can use select() as a portable sub-second timer, and you are right: it might be. Some Unices can use select in this manner, and some can't. You should see what your local man page says on the matter if you want to attempt it. Some Unices update the time in your struct timeval to reflect the amount of time still remaining before a timeout. But others do not. Don't rely on that occurring if you want to be portable. (Use gettimeofday() if you need to track time elapsed. It's a bummer, I know, but that's the way it is.) Quick note to all you Linux fans out there: sometimes, in rare circumstances, Linux's select() can return "ready-to-read" and then not actually be ready to read! This means it will block on the read() after the select() says it won't! Why you little! Anyway, the workaround solution is to set the O_NONBLOCK flag on the receiving socket so it errors with EWOULDBLOCK (which you can just safely ignore if it occurs). See the fcntl() reference page for more info on setting a socket to non-blocking.
It's easy enough to send text data across the network, you're finding, but what happens if you want to send some "binary" data like ints or floats? It turns out you have a few options. The easiest is to just transmit the data raw, exactly as it sits in memory; what's not to like? Well, it turns out that not all architectures represent a double (or int for that matter) with the same bit representation or even the same byte ordering! The code is decidedly non-portable. (Hey, maybe you don't need portability, in which case this is nice and fast.) When packing integer types, we've already seen how the htons()-class of functions can help keep things portable by transforming the numbers into Network Byte Order, and how that's the Right Thing to do. Unfortunately, there are no similar functions for float types. Is all hope lost? Fear not! For floats, here's something quick and dirty with plenty of room for improvement: it stores the float in a 32-bit number. The high bit (31) is used to store the sign of the number ("1" means negative), and the next fifteen bits (30-16) are used to store the whole number portion of the float. Finally, the remaining bits (15-0) are used to store the fractional portion of the number. There are macros for handling 32-bit (probably a float) and 64-bit (probably a double) numbers, but the pack754() function could be called directly and told to encode bits-worth of data (expbits of which are reserved for the normalized number's exponent.) What about structs? Unfortunately for you, the compiler is free to put padding all over the place in a struct, and that means you can't portably send the whole thing over the wire in one chunk. (Aren't you getting sick of hearing "can't do this", "can't do that"? Sorry! To quote a friend, "Whenever anything goes wrong, I always blame Microsoft." This one might not be Microsoft's fault, admittedly, but my friend's statement is completely true.) Back to it: the best way to send the struct over the wire is to pack each field independently and then unpack them into the struct when they arrive on the other side. char array instead of another integer.)
#include <stdio.h>
#include <ctype.h>
#include <stdarg.h>
#include <string.h>

/*
** packi16() -- store a 16-bit int into a char buffer (like htons())
*/
void packi16(unsigned char *buf, unsigned int i)
{
    *buf++ = i>>8; *buf++ = i;
}

/*
** packi32() -- store a 32-bit int into a char buffer (like htonl())
*/
void packi32(unsigned char *buf, unsigned long int i)
{
    *buf++ = i>>24; *buf++ = i>>16;
    *buf++ = i>>8;  *buf++ = i;
}

/*
** packi64() -- store a 64-bit int into a char buffer (like htonl())
*/
void packi64(unsigned char *buf, unsigned long long int i)
{
    *buf++ = i>>56; *buf++ = i>>48;
    *buf++ = i>>40; *buf++ = i>>32;
    *buf++ = i>>24; *buf++ = i>>16;
    *buf++ = i>>8;  *buf++ = i;
}

/*
** unpacki16() -- unpack a 16-bit int from a char buffer (like ntohs())
*/
int unpacki16(unsigned char *buf)
{
    unsigned int i2 = ((unsigned int)buf[0]<<8) | buf[1];
    int i;

    // change unsigned numbers to signed
    if (i2 <= 0x7fffu) { i = i2; }
    else { i = -1 - (unsigned int)(0xffffu - i2); }

    return i;
}

/*
** unpacku16() -- unpack a 16-bit unsigned from a char buffer (like ntohs())
*/
unsigned int unpacku16(unsigned char *buf)
{
    return ((unsigned int)buf[0]<<8) | buf[1];
}

/*
** unpacki32() -- unpack a 32-bit int from a char buffer (like ntohl())
*/
long int unpacki32(unsigned char *buf)
{
    unsigned long int i2 = ((unsigned long int)buf[0]<<24) |
                           ((unsigned long int)buf[1]<<16) |
                           ((unsigned long int)buf[2]<<8)  |
                           buf[3];
    long int i;

    // change unsigned numbers to signed
    if (i2 <= 0x7fffffffu) { i = i2; }
    else { i = -1 - (long int)(0xffffffffu - i2); }

    return i;
}

/*
** unpacku32() -- unpack a 32-bit unsigned from a char buffer (like ntohl())
*/
unsigned long int unpacku32(unsigned char *buf)
{
    return ((unsigned long int)buf[0]<<24) |
           ((unsigned long int)buf[1]<<16) |
           ((unsigned long int)buf[2]<<8)  |
           buf[3];
}

/*
** unpacki64() -- unpack a 64-bit int from a char buffer (like ntohl())
*/
long long int unpacki64(unsigned char *buf)
{
    unsigned long long int i2 = ((unsigned long long int)buf[0]<<56) |
                                ((unsigned long long int)buf[1]<<48) |
                                ((unsigned long long int)buf[2]<<40) |
                                ((unsigned long long int)buf[3]<<32) |
                                ((unsigned long long int)buf[4]<<24) |
                                ((unsigned long long int)buf[5]<<16) |
                                ((unsigned long long int)buf[6]<<8)  |
                                buf[7];
    long long int i;

    // change unsigned numbers to signed
    if (i2 <= 0x7fffffffffffffffu) { i = i2; }
    else { i = -1 - (long long int)(0xffffffffffffffffu - i2); }

    return i;
}

/*
** unpacku64() -- unpack a 64-bit unsigned from a char buffer (like ntohl())
*/
unsigned long long int unpacku64(unsigned char *buf)
{
    return ((unsigned long long int)buf[0]<<56) |
           ((unsigned long long int)buf[1]<<48) |
           ((unsigned long long int)buf[2]<<40) |
           ((unsigned long long int)buf[3]<<32) |
           ((unsigned long long int)buf[4]<<24) |
           ((unsigned long long int)buf[5]<<16) |
           ((unsigned long long int)buf[6]<<8)  |
           buf[7];
}

/*
** pack() -- store data dictated by the format string in the buffer
**
**   bits |signed   unsigned   float   string
**   -----+----------------------------------
**      8 |   c        C
**     16 |   h        H         f
**     32 |   l        L         d
**     64 |   q        Q         g
**      - |                               s
**
**  (16-bit unsigned length is automatically prepended to strings)
*/
unsigned int pack(unsigned char *buf, char *format, ...)
{
    va_list ap;

    signed char c;              // 8-bit
    unsigned char C;

    int h;                      // 16-bit
    unsigned int H;

    long int l;                 // 32-bit
    unsigned long int L;

    long long int q;            // 64-bit
    unsigned long long int Q;

    float f;                    // floats
    double d;
    long double g;
    unsigned long long int fhold;

    char *s;                    // strings
    unsigned int len;

    unsigned int size = 0;

    va_start(ap, format);

    for(; *format != '\0'; format++) {
        switch(*format) {
        case 'c': // 8-bit
            size += 1;
            c = (signed char)va_arg(ap, int); // promoted
            *buf++ = c;
            break;

        case 'C': // 8-bit unsigned
            size += 1;
            C = (unsigned char)va_arg(ap, unsigned int); // promoted
            *buf++ = C;
            break;

        case 'h': // 16-bit
            size += 2;
            h = va_arg(ap, int);
            packi16(buf, h);
            buf += 2;
            break;

        case 'H': // 16-bit unsigned
            size += 2;
            H = va_arg(ap, unsigned int);
            packi16(buf, H);
            buf += 2;
            break;

        case 'l': // 32-bit
            size += 4;
            l = va_arg(ap, long int);
            packi32(buf, l);
            buf += 4;
            break;

        case 'L': // 32-bit unsigned
            size += 4;
            L = va_arg(ap, unsigned long int);
            packi32(buf, L);
            buf += 4;
            break;

        case 'q': // 64-bit
            size += 8;
            q = va_arg(ap, long long int);
            packi64(buf, q);
            buf += 8;
            break;

        case 'Q': // 64-bit unsigned
            size += 8;
            Q = va_arg(ap, unsigned long long int);
            packi64(buf, Q);
            buf += 8;
            break;

        case 'f': // float-16
            size += 2;
            f = (float)va_arg(ap, double); // promoted
            fhold = pack754_16(f); // convert to IEEE 754
            packi16(buf, fhold);
            buf += 2;
            break;

        case 'd': // float-32
            size += 4;
            d = va_arg(ap, double);
            fhold = pack754_32(d); // convert to IEEE 754
            packi32(buf, fhold);
            buf += 4;
            break;

        case 'g': // float-64
            size += 8;
            g = va_arg(ap, long double);
            fhold = pack754_64(g); // convert to IEEE 754
            packi64(buf, fhold);
            buf += 8;
            break;

        case 's': // string
            s = va_arg(ap, char*);
            len = strlen(s);
            size += len + 2;
            packi16(buf, len);
            buf += 2;
            memcpy(buf, s, len);
            buf += len;
            break;
        }
    }

    va_end(ap);

    return size;
}

/*
** unpack() -- unpack data dictated by the format string into the buffer
**
**   bits |signed   unsigned   float   string
**   -----+----------------------------------
**      8 |   c        C
**     16 |   h        H         f
**     32 |   l        L         d
**     64 |   q        Q         g
**      - |                               s
**
**  (string is extracted based on its stored length, but 's' can be
**  prepended with a max length)
*/
void unpack(unsigned char *buf, char *format, ...)
{
    va_list ap;

    signed char *c;              // 8-bit
    unsigned char *C;

    int *h;                      // 16-bit
    unsigned int *H;

    long int *l;                 // 32-bit
    unsigned long int *L;

    long long int *q;            // 64-bit
    unsigned long long int *Q;

    float *f;                    // floats
    double *d;
    long double *g;
    unsigned long long int fhold;

    char *s;                     // strings
    unsigned int len, maxstrlen=0, count;
va_start(ap, format);

for(; *format != '\0'; format++) {
    switch(*format) {
    case 'c': // 8-bit
        c = va_arg(ap, signed char*);
        if (*buf <= 0x7f) { *c = *buf; } // re-sign
        else { *c = -1 - (unsigned char)(0xffu - *buf); }
        buf++;
        break;

    case 'C': // 8-bit unsigned
        C = va_arg(ap, unsigned char*);
        *C = *buf++;
        break;

    case 'h': // 16-bit
        h = va_arg(ap, int*);
        *h = unpacki16(buf);
        buf += 2;
        break;

    case 'H': // 16-bit unsigned
        H = va_arg(ap, unsigned int*);
        *H = unpacku16(buf);
        buf += 2;
        break;

    case 'l': // 32-bit
        l = va_arg(ap, long int*);
        *l = unpacki32(buf);
        buf += 4;
        break;

    case 'L': // 32-bit unsigned
        L = va_arg(ap, unsigned long int*);
        *L = unpacku32(buf);
        buf += 4;
        break;

    case 'q': // 64-bit
        q = va_arg(ap, long long int*);
        *q = unpacki64(buf);
        buf += 8;
        break;

    case 'Q': // 64-bit unsigned
        Q = va_arg(ap, unsigned long long int*);
        *Q = unpacku64(buf);
        buf += 8;
        break;

    case 'f': // float
        f = va_arg(ap, float*);
        fhold = unpacku16(buf);
        *f = unpack754_16(fhold);
        buf += 2;
        break;

    case 'd': // float-32
        d = va_arg(ap, double*);
        fhold = unpacku32(buf);
        *d = unpack754_32(fhold);
        buf += 4;
        break;

    case 'g': // float-64
        g = va_arg(ap, long double*);
        fhold = unpacku64(buf);
        *g = unpack754_64(fhold);
        buf += 8;
        break;

    case 's': // string
        s = va_arg(ap, char*);
        len = unpacku16(buf);
        buf += 2;
        if (maxstrlen > 0 && len >= maxstrlen) count = maxstrlen - 1;
        else count = len;
        memcpy(s, buf, count);
        s[count] = '\0';
        buf += len;
        break;

    default:
        if (isdigit(*format)) { // track max str len
            maxstrlen = maxstrlen * 10 + (*format-'0');
        }
    }

    if (!isdigit(*format)) maxstrlen = 0;
}

va_end(ap);
}

Where can I get those header files? If you don't have them on your system already, you probably don't need them. Check the manual for your particular platform. If you're building for Windows, you only need to #include <winsock.h>. What do I do when bind() reports "Address already in use"? You have to use setsockopt() with the SO_REUSEADDR option on the listening socket. Check out the section on bind() and the section on select() for an example. How do I get a list of open sockets on the system? Use the netstat command. Check the man page for full details, but you should get some good output just typing: $ netstat The only trick is determining which socket is associated with which program.
:-) How can I view the routing table? Run the route command (in /sbin on most Linuxes) or the command netstat -r. How can I run the client and server programs if I only have one computer? Don't I need a network to write network programs?! Fortunately, virtually all machines implement a loopback device; just have the client connect to localhost on the same box. How can I tell if the remote side has closed the connection? You can tell because recv() will return 0. How do I implement a "ping" utility? What is ICMP? Where can I find out more about raw sockets and SOCK_RAW? All your raw sockets questions will be answered in W. Richard Stevens' UNIX Network Programming books. Also, look in the ping/ subdirectory in Stevens' UNIX Network Programming source code, available online. How do I change or shorten the timeout on a call to connect()? How do I build for Windows? First, delete Windows and install Linux or BSD. };-). No, actually, just see the section on building for Windows in the introduction. How do I build for Solaris/SunOS? I keep getting linker errors when I try to compile! The linker errors happen because Sun boxes don't automatically compile in the socket libraries. See the section on building for Solaris/SunOS in the introduction for an example of how to do this. Why does select() keep falling out on a signal? How can I implement a timeout on a call to recv()? How do I encrypt or compress the data before sending it through the socket? What is this "PF_INET" I keep seeing? Is it related to AF_INET? Yes, yes it is. See the section on socket() for details. How can I write a server that accepts shell commands from a client and executes them? I'm sending a slew of data, but when I recv(), it only receives 536 bytes or 1460 bytes at a time. But if I run it on my local machine, it receives all the data at the same time. What's going on? You're hitting the MTU, the maximum chunk size the underlying layers will hand you at once; keep calling recv() in a loop until all your data is read. I'm on a Windows box and I don't have the fork() system call or any kind of struct sigaction. What to do? (A POSIX compatibility layer, if present, may supply fork() and sigaction.) Search the help that came with VC++ for "fork" or "POSIX" and see if it gives you any clues.
If that doesn't work at all, ditch the fork()/sigaction stuff and replace it with the Win32 equivalent: CreateProcess(). I don't know how to use CreateProcess(); it takes a bazillion arguments, but it should be covered in the docs that came with VC++. I'm behind a firewall; how do I let people outside the firewall know my IP address so they can connect to my machine? ;-) How do I write a packet sniffer? How do I put my Ethernet interface into promiscuous mode? How can I set a custom timeout value for a TCP or UDP socket? How can I tell which ports are available to use? Is there a list of "official" port numbers? s The listen()ing socket descriptor. addr This is filled in with the address of the site that's connecting to you. addrlen This is filled in with the sizeof() the structure returned in the addr parameter. You can safely ignore it if you assume you're getting a struct sockaddr_in back, which you know you are, because that's the type you passed in for addr. See also: accept(), struct sockaddr_in. With bind(), you can fill out a struct sockaddr by hand; if not, use the results from getaddrinfo(), as per above. In IPv4, the sin_addr.s_addr field of the struct sockaddr_in structure is set to INADDR_ANY. In IPv6, the sin6_addr field of the struct sockaddr_in6 structure is assigned from the global variable in6addr_any. Or, if you're declaring a new struct in6_addr, you can initialize it to IN6ADDR_ANY_INIT. See also: bind(), struct sockaddr_in, struct in_addr. You can also fill out a struct sockaddr if you want to. getaddrinfo(): Get information about a host name and/or service and load up a struct sockaddr with the result. It fills out the struct sockaddr for you, taking care of the gritty details (like if it's IPv4 or IPv6.) It replaces the old functions gethostbyname() and getservbyname(). The description, below, contains a lot of information that might be a little daunting, but actual usage is pretty simple. It might be worth it to check out the examples first. Passing NULL as the hostname (with the AI_PASSIVE flag) gets you a struct sockaddr with the address of the current host.
That's excellent for setting up a server when you don't want to hardcode the address. getaddrinfo() gives you a linked list of struct addrinfos, and you can go through this list to get all the addresses that match what you passed in with the hints. ", &hints, &servinfo)) != 0) { fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rv)); exit(1); } // loop through all the results and connect to the first we can for(p = servinfo; p != NULL; p = p->ai_next) { if ((sockfd = socket(p->ai_family, p->ai_socktype, p->ai_protocol)) == -1) { perror("socket"); continue; } if (connect(sockfd, p->ai_addr, p->ai_addrlen) == -1) { struct in_addr. Conversely, if you have a struct in_addr or a struct in6_addr, you can use gethostbyaddr() to get the hostname back. gethostbyaddr() is IPv6 compatible, but you should use the newer shinier getnameinfo() instead. gethostbyname() returns a struct hostent which contains tons of information, including the IP address. (Other information is the official host name, a list of aliases, the address type, the length of the addresses, and the list of addresses; it's a general-purpose structure that's pretty easy to use for our specific purposes once you see how.) gethostbyaddr() takes a struct in_addr or struct in6_addr and brings you up a corresponding host name (if there is one), so it's sort of the reverse of gethostbyname(). As for parameters, even though addr is a char*, you actually want to pass in a pointer to a struct in_addr. len should be sizeof(struct in_addr), and type should be AF_INET. So what is this struct hostent that gets returned? It has a number of fields that contain information about the host in question. char *h_name The real canonical host name. char **h_aliases A list of aliases that can be accessed with arrays; the last element is NULL. int h_addrtype The result's address type, which really should be AF_INET for our purposes. int h_length The length of the addresses in bytes, which is 4 for IP (version 4) addresses. char **h_addr_list A list of IP addresses for this host.
Although this is a char**, it's really an array of struct in_addr*s in disguise. The last array element is NULL. h_addr A commonly defined alias for h_addr_list[0]. If you just want any old IP address for this host (yeah, they can have more than one) just use this field. Returns a pointer to a resultant struct hostent on success, or NULL on error. See also: gethostbyname(), struct in_addr. getnameinfo(): Look up the host name and service name information for a given struct sockaddr. It takes a struct sockaddr and does a name and service name lookup on it. It replaces the old gethostbyaddr() and getservbyport() functions. You have to pass in a pointer to a struct sockaddr (which in actuality is probably a struct sockaddr_in or struct sockaddr_in6 that you've cast) in the sa parameter, and the length of that struct in the salen parameter. Here, that would be a struct sockaddr_in filled with information about the machine you're connected to. errno: Holds the error code for the last system call. #include <errno.h> int errno; The value of the variable is the latest error to have transpired, which might be the code for "success" if the last action succeeded. fcntl(): Control socket descriptors. #include <sys/unistd.h> #include <sys/fcntl.h> int fcntl(int s, int cmd, long arg); O_NONBLOCK: Set the socket to be non-blocking. See the section on blocking for more details. O_ASYNC: Set the socket to do asynchronous I/O. When data is ready to be recv()'d on the socket, the signal SIGIO will be raised. This is rare to see, and beyond the scope of the guide. And I think it's only available on certain systems.

int s = socket(PF_INET, SOCK_STREAM, 0);
fcntl(s, F_SETFL, O_NONBLOCK); // set to non-blocking
fcntl(s, F_SETFL, O_ASYNC);    // set to asynchronous I/O

htons(), htonl(), ntohs(), ntohl(): Convert host to network short, host to network long, network to host short, and network to host long. inet_aton(), inet_ntoa(): Convert IP addresses from a dots-and-number string to a struct in_addr and back. These are IPv4-only; you'll need getaddrinfo() for that sort of thing these days. listen(): Tell a socket to listen for incoming connections. #include <sys/socket.h> int listen(int s, int backlog);
Returns zero on success, or -1 on error (and errno will be set accordingly.) MSG_OOB: Receive Out of Band data. This is how to get data that has been sent to you with the MSG_OOB flag in send(). As the receiving side, you will have had signal SIGURG raised telling you there is urgent data. In your handler for that signal, you could call recv() with this MSG_OOB flag. MSG_PEEK: If you want to call recv() "just for pretend", you can call it with this flag. This will tell you what's waiting in the buffer for when you call recv() "for real" (i.e. without the MSG_PEEK flag). It's like a sneak preview into the next recv() call. MSG_WAITALL: Tell recv() to not return until it has received all the data you specified in the len parameter. It will ignore your wishes in extreme circumstances, however, like if a signal interrupts the call or if some error occurs or if the remote side closes the connection, etc. Don't be mad with it. When you call recv(), it will block until there is some data to read. If you want to not block, set the socket to non-blocking or check with select() or poll() to see if there is incoming data before calling recv() or recvfrom(). See also: send(), sendto(), select(), poll(), Blocking. select(): Check if socket descriptors are ready to read/write. Note for Linux users: Linux's select() can return "ready-to-read" and then not actually be ready to read, thus causing the subsequent read() call to block. You can work around this bug by setting the O_NONBLOCK flag on the receiving socket so it errors with EWOULDBLOCK, then ignoring this error if it occurs. See the fcntl() reference page for more info on setting a socket to non-blocking. Returns the number of descriptors in the set on success, 0 if the timeout was reached, or -1 on error (and errno will be set accordingly.)
Also, the sets are modified to show which sockets are ready. getsockopt(), setsockopt(): Set various options for a socket. #include <sys/types.h> #include <sys/socket.h> int getsockopt(int s, int level, int optname, void *optval, socklen_t *optlen); int setsockopt(int s, int level, int optname, const void *optval, socklen_t optlen); SO_BINDTODEVICE: Bind this socket to a symbolic device name like eth0 instead of using bind() to bind it to an IP address. Type the command ifconfig under Unix to see the device names. SO_REUSEADDR: Allows other sockets to bind() to this port, unless there is an active listening socket bound to the port already. This enables you to get around those "Address already in use" error messages when you try to restart your server after a crash. SO_BROADCAST: Allows UDP datagram (SOCK_DGRAM) sockets to send and receive packets sent to and from the broadcast address. Does nothing (NOTHING!!) to TCP stream sockets! Hahaha! optlen should be set to the length of optval, probably sizeof(int), but varies depending on the option. Note that in the case of getsockopt(), this is a pointer to a socklen_t, and it specifies the maximum size object that will be stored in optval (to prevent buffer overflows). And getsockopt() will modify the value of optlen to reflect the number of bytes actually set. Returns zero on success, or -1 on error (and errno will be set accordingly.) The send() flags can be one of the following: MSG_OOB: Send as "out of band" data. TCP supports this, and it's a way to tell the receiving system that this data has a higher priority than the normal data. The receiver will receive the signal SIGURG and it can then receive this data without first receiving all the rest of the normal data in the queue. MSG_DONTROUTE: Don't send this data over a router, just keep it local. MSG_DONTWAIT: If send() would block because outbound traffic is clogged, have it return EAGAIN. This is like an "enable non-blocking just for this send." See the section on blocking for more details.
MSG_NOSIGNAL: If you send() to a remote host which is no longer recv()ing, you'll typically get the signal SIGPIPE. Adding this flag prevents that signal from being raised. shutdown(): Stop further sends and receives on a socket. #include <sys/socket.h> int shutdown(int s, int how); socket(): Get a socket descriptor. #include <sys/types.h> #include <sys/socket.h> int socket(int domain, int type, int protocol); domain domain describes what kind of socket you're interested in. This can, believe me, be a wide variety of things, but since this is a socket guide, it's going to be PF_INET for IPv4, and PF_INET6 for IPv6. type Also, the type parameter can be a number of things, but you'll probably be setting it to either SOCK_STREAM for reliable TCP sockets (send(), recv()) or SOCK_DGRAM for unreliable fast UDP sockets (sendto(), recvfrom().) (Another interesting socket type is SOCK_RAW which can be used to construct packets by hand. It's pretty cool.) protocol Finally, the protocol parameter tells which protocol to use with a certain socket type. Like I've already said, for instance, SOCK_STREAM uses TCP. Fortunately for you, when using SOCK_STREAM or SOCK_DGRAM, you can just set the protocol to 0, and it'll use the proper protocol automatically. Otherwise, you can use getprotobyname() to look up the proper protocol number. Returns the new socket descriptor to be used in subsequent calls, or -1 on error (and errno will be set accordingly.) See also: accept(), bind(), getaddrinfo(), listen(). struct sockaddr_in, struct in_addr: Structures for handling internet addresses. These are the basic structures for all syscalls and functions that deal with internet addresses. Often you'll use getaddrinfo() to fill these structures out for you. See also: accept(), bind(), connect(), inet_aton(), inet_ntoa(). You can email me at beej@beej.us. :-) BSD Sockets: A Quick And Dirty Primer (Unix system programming info, too!) And here are some relevant Wikipedia pages: Transmission Control Protocol (TCP), User Datagram Protocol (UDP), Serialization (packing and unpacking data). RFCs: the real dirt!
These are documents that describe assigned numbers, programming APIs, and protocols that are used on the Internet. I've included links to a few of them here for your enjoyment, so grab a bucket of popcorn and put on your thinking cap:

RFC 1: The First RFC; this gives you an idea of what the "Internet" was like just as it was coming to life, and an insight into how it was being designed from the ground up. (This RFC is completely obsolete, obviously!)
RFC 768: The User Datagram Protocol (UDP)
RFC 791: The Internet Protocol (IP)
RFC 793: The Transmission Control Protocol (TCP)
RFC 854: The Telnet Protocol
RFC 959: File Transfer Protocol (FTP)
RFC 1350: The Trivial File Transfer Protocol (TFTP)
RFC 1459: Internet Relay Chat Protocol (IRC)
RFC 1918: Address Allocation for Private Internets
RFC 2131: Dynamic Host Configuration Protocol (DHCP)
RFC 2616: Hypertext Transfer Protocol (HTTP)
RFC 2821: Simple Mail Transfer Protocol (SMTP)
RFC 3330: Special-Use IPv4 Addresses
RFC 3493: Basic Socket Interface Extensions for IPv6
RFC 3542: Advanced Sockets Application Program Interface (API) for IPv6
RFC 3849: IPv6 Address Prefix Reserved for Documentation
RFC 3920: Extensible Messaging and Presence Protocol (XMPP)
RFC 3977: Network News Transfer Protocol (NNTP)
RFC 4193: Unique Local IPv6 Unicast Addresses
RFC 4506: External Data Representation Standard (XDR)

The IETF has a nice online tool for searching and browsing RFCs.
Hello! I'm back with my first full-stack web application: "Water of Life," a Scotch whisky API using a React JS frontend and Object-Oriented Ruby, Active Record, and Sinatra on the backend. For those unfamiliar with Scotch, it is essentially just whisky distilled and bottled in Scotland. Scotland itself is divided into regions, each region has many distilleries, and each distillery has many bottles. Sounds like a perfect dynamic to take advantage of Active Record macros! The API comprises three tables: regions, distilleries, and bottles, each with a corresponding model that inherits from ActiveRecord::Base. As for the associations between the models, a region instance has many distilleries, and a distillery instance has many bottles. The distillery model acts as the join class so that a region instance has many bottles through distilleries, and a bottle instance has one region through the distillery to which it belongs:

class Region < ActiveRecord::Base
  has_many :distilleries
  has_many :bottles, through: :distilleries
end

class Distillery < ActiveRecord::Base
  belongs_to :region
  has_many :bottles
end

class Bottle < ActiveRecord::Base
  belongs_to :distillery
  has_one :region, through: :distillery
end

With these associations established, "Water of Life" harnesses the power of Active Record to provide useful, organized data the client requests through the routes established with Sinatra. Yes, I did it Sinatra's way. Not "My Way." But trust me, "The Best is Yet to Come..." The first and most comprehensive endpoint in the application controller is the get "/all" route:

get "/all" do
  regions = Region.all
  regions.to_json(include: { distilleries: { include: :bottles } })
end

Simply, this returns all of the API's data, hierarchically structured by region, distillery, and bottle.
More dynamic routes, like get "/bottles/:id", take advantage of the params hash to locate and return the data for a specific bottle: get "/bottles/:id" do bottle = Bottle.find(params[:id]) bottle.to_json(include: { distillery: { include: :region } }) end Or even delete it from the database entirely: delete "/bottles/:id" do bottle = Bottle.find(params[:id]) bottle.destroy bottle.to_json(include: { distillery: { include: :region } }) end In both of these methods, the routes are not only returning information about the specific bottle instance but are "including" the data from its associated distillery and region as well: bottle.to_json(include: { distillery: { include: :region } Such functionality is possible because of the associations established earlier with Active Record. Finally, for one of the post routes on the API... As a quasi well-adapted perfectionist, I want my API to be efficiently created and properly maintained, which includes new bottles being added to the database in the correct format. Many scotches have a multiple-word name: "Lagavulin 16 Year Old," "macallan double cask 12 year old," etc. I prefer the former, not the latter. Of course, I could have imposed a regex validator on the frontend form, but users shouldn't inherit my problems (after all, I'm not their parent class!). There are Ruby gems that provide a method to "titleize" a string, but I wanted to write my own method to better understand the process involved: def titleize(string) title_cased = string.split.map do |word| letters = word.split("") letters[0] = letters[0].upcase letters.join end title_cased.join(" ") end And that, as they say, is that. My first API and full-stack web application. How far we've come... Feeling thirsty? Check out "Water of Life" for details on your scotch of choice. And, if it's missing, please add it! Discussion (0)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/michaellobman/water-of-life-a-scotch-whisky-api-1abe
Treecc input files consist of zero or more declarations that define nodes, operations, options, etc. The following sections describe each of these elements. Node types are defined using the ‘node’ keyword in input files. The general form of the declaration is: An identifier that is used to refer to the node type elsewhere in the treecc definition. It is also the name of the type that will be visible to the programmer in literal code blocks. An identifier that refers to the parent node type that ‘NAME’ inherits from. If ‘PNAME’ is not supplied, then ‘NAME’ is a top-level declaration. It is legal to supply a ‘PNAME’ that has not yet been defined in the input. Any combination of ‘%abstract’ and ‘%typedef’: The node type cannot be constructed by the programmer. In addition, the programmer does not need to define operation cases for this node type if all subtypes have cases associated with them. The node type is used as the common return type for node creation functions. Top-level declarations must have a ‘%typedef’ keyword. The ‘FIELDS’ part of a node declaration defines the fields that make up the node type. Each field has the following general form: The field is not used in the node's constructor. When the node is constructed, the value of this field will be undefined unless ‘VALUE’ is specified. The type that is associated with the field. Types can be declared using a subset of the C declaration syntax, augmented with some C++ and Java features. See section Types used in fields and parameters, for more information. The name to associate with the field. Treecc verifies that the field does not currently exist in this node type, or in any of its ancestor node types. The default value to assign to the field in the node's constructor. This can only be used on fields that are declared with ‘%nocreate’. The value must be enclosed in braces. For example ‘{NULL}’ would be used to initialize a field with ‘NULL’. 
The braces are required because the default value is expressed in the underlying source language, and can use any of the usual constant declaration features present in that language. When the output language is C, treecc creates a struct-based type called ‘NAME’ that contains the fields for ‘NAME’ and all of its ancestor classes. The type also contains some house-keeping fields that are used internally by the generated code. The following is an example: The programmer should avoid using any identifier that ends with ‘__’, because it may clash with house-keeping identifiers that are generated by treecc. When the output language is C++, Java, or C#, treecc creates a class called ‘NAME’ that inherits from the class ‘PNAME’. The field definitions for ‘NAME’ are converted into public members in the output. Types that are used in field and parameter declarations have a syntax which is a subset of features found in C, C++, and Java: Types are usually followed by an identifier that names the field or parameter. The name is required for fields and is optional for parameters. For example ‘int’ is usually equivalent to ‘int x’ in parameter declarations. The following are some examples of using types: The grammar used by treecc is slightly ambiguous. The last example above declares a parameter called ‘Element’, that has type ‘const’. The programmer probably intended to declare an anonymous parameter with type ‘const Element’ instead. This ambiguity is unavoidable given that treecc is not fully aware of the underlying language's type system. When treecc sees a type that ends in a sequence of identifiers, it will always interpret the last identifier as the field or parameter name. Thus, the programmer must write the following instead: Treecc cannot declare types using the full power of C's type system. The most common forms of declarations are supported, and the rest can usually be obtained by defining a ‘typedef’ within a literal code block.
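Putting the node and field syntax above together, a declaration can look like the following sketch (the names expression, binary, and type_code are illustrative, not prescribed by treecc):

```
%node expression %abstract %typedef =
{
    %nocreate type_code type = {int_type};
}

%node binary expression %abstract =
{
    expression *expr1;
    expression *expr2;
}

%node plus binary
```

Here plus inherits the two expression fields from binary, and every node carries a type field that is excluded from the constructors and defaults to int_type.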
See section Literal code declarations, for more information on literal code blocks. It is the responsibility of the programmer to use type constructs that are supported by the underlying programming language. Types such as ‘const char *’ will give an error when the output is compiled with a Java compiler, for example. Enumerated types are a special kind of node type that can be used by the programmer for simple values that don't require a full abstract syntax tree node. The following is an example of defining a list of the primitive machine types used in a Java virtual machine: Enumerations are useful when writing code generators and type inferencing routines. The general form is: An identifier to be used to name the enumerated type. The name must not have been previously used as a node type, an enumerated type, or an enumerated value. A comma-separated list of identifiers that name the values within the enumeration. Each of the names must be unique, and must not have been used previously as a node type, an enumerated type, or an enumerated value. Logically, each enumerated value is a special node type that inherits from a parent node type corresponding to the enumerated type ‘NAME’. When the output language is C or C++, treecc generates an enumerated typedef for ‘NAME’ that contains the enumerated values in the same order as was used in the input file. The typedef name can be used elsewhere in the code as the type of the enumeration. When the output language is Java, treecc generates a class called ‘NAME’ that contains the enumerated values as integer constants. Elsewhere in the code, the type ‘int’ must be used to declare variables of the enumerated type. Enumerated values are referred to as ‘NAME.VALUE’. If the enumerated type is used as a trigger parameter, then ‘NAME’ must be used instead of ‘int’: treecc will convert the type when the Java code is output. 
When the output language is C#, treecc generates an enumerated value type called ‘NAME’ that contains the enumerated values as members. The C# type ‘NAME’ can be used elsewhere in the code as the type of the enumeration. Enumerated values are referred to as ‘NAME.VALUE’. Operations are declared in two parts: the declaration, and the cases. The declaration part defines the prototype for the operation and the cases define how to handle specific kinds of nodes for the operation. Operations are defined over one or more trigger parameters. Each trigger parameter specifies a node type or an enumerated type that is selected upon to determine what course of action to take. The following are some examples of operation declarations: Trigger parameters are specified by enclosing them in square brackets. If none of the parameters are enclosed in square brackets, then treecc assumes that the first parameter is the trigger. The general form of an operation declaration is as follows: Specifies that the operation is associated with a node type as a virtual method. There must be only one trigger parameter, and it must be the first parameter. Non-virtual operations are written to the output source files as global functions. Optimise the generation of the operation code so that all cases are inline within the code for the function itself. This can only be used with non-virtual operations, and may improve code efficiency if there are lots of operation cases with a small amount of code in each. Split the generation of the multi-trigger operation code across multiple functions, to reduce the size of each individual function. It is sometimes necessary to split large %inline operations to avoid compiler limits on function size. The type of the return value for the operation. This should be ‘void’ if the operation does not have a return value. The name of the class to place the operation's definition within. 
This can only be used with non-virtual operations, and is intended for languages such as Java and C# that cannot declare methods outside of classes. The class name will be ignored if the output language is C. If a class name is required, but the programmer did not supply it, then ‘NAME’ will be used as the default. The exception to this is the C# language: ‘CLASS’ must always be supplied and it must be different from ‘NAME’. This is due to a "feature" in some C# compilers that forbid a method with the same name as its enclosing class. The name of the operation. The parameters to the operation. Trigger parameters may be enclosed in square brackets. Trigger parameters must be either node types or enumerated types. Once an operation has been declared, the programmer can specify its cases anywhere in the input files. It is not necessary that the cases appear after the operation, or that they be contiguous within the input files. This permits the programmer to place operation cases where they are logically required for maintenance reasons. There must be sufficient operation cases defined to cover every possible combination of node types and enumerated values that inherit from the specified trigger types. An operation case has the following general form: The name of the operation for which this case applies. A comma-separated list of node types or enumerated values that define the specific case that is handled by the following code. Source code in the output source language that implements the operation case. Multiple trigger combinations can be associated with a single block of code, by listing them all, separated by commas. For example: "(*)" is used below to indicate an option that is enabled by default. Enable the generation of code that can track the current filename and line number when nodes are created. See section Tracking line numbers in source files, for more information. (*) Disable the generation of code that performs line number tracking.
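An operation declaration and its cases can look like the following sketch (the infer_type name and the node types are illustrative, building on the node example earlier):

```
%operation void infer_type([expression *e])

infer_type(binary)
{
    infer_type(e->expr1);
    infer_type(e->expr2);
    /* combine the operand types into the node's type here */
}

infer_type(plus),
infer_type(minus)
{
    /* one body shared by two trigger cases, listed with a comma */
}
```

treecc checks at generation time that the cases cover every concrete node type reachable from the trigger type.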
Optimise the creation of singleton node types. These are node types without any fields. Treecc can optimise the code so that only one instance of a singleton node type exists in the system. This can speed up the creation of nodes for constants within compilers. (*) Singleton optimisations will have no effect if ‘track_lines’ is enabled, because line tracking uses special hidden fields in every node. Disable the optimisation of singleton node types. Enable the generation of reentrant code that does not rely upon any global variables. Separate copies of the compiler state can be used safely in separate threads. However, the same copy of the compiler state cannot be used safely in two or more threads. Disable the generation of reentrant code. The interface to node management functions is simpler, but cannot be used in a threaded environment. (*) Force output source files to be written, even if they are unchanged. This option can also be set using the ‘-f’ command-line option. Don't force output source files to be written if they are the same as before. (*) This option can help smooth integration of treecc with make. Only those output files that have changed will be modified. This reduces the number of files that the underlying source language compiler must process after treecc is executed. Use virtual methods in the node type factories, so that the programmer can subclass the factory and provide new implementations of node creation functions. This option is ignored for C, which does not use factories. Don't use virtual methods in the node type factories. (*) Use abstract virtual methods in the node type factories. The programmer is responsible for subclassing the factory to provide node creation functionality. Don't use abstract virtual methods in the node type factories. (*) Put the kind field in the node, for more efficient access at runtime. (*) Put the kind field in the vtable, and not the node. 
This saves some memory, at the cost of slower access to the kind value at runtime. This option only applies when the language is C. The kind field is always placed in the node in other languages, because it isn't possible to modify the vtable. Specify the prefix to be used in output files in place of "yy". Specify the name of the state type. The state type is generated by treecc to perform centralised memory management and reentrancy support. The default value is ‘YYNODESTATE’. If the output language uses factories, then this will also be the name of the factory base class. Specify the namespace to write definitions to in the output source files. This option is ignored when the output language is C. Same as ‘%option namespace = NAME’. Provided because ‘package’ is more natural for Java programmers. Specify the numeric base to use for allocating numeric values to node types. By default, node type allocation begins at 1. Specify the output language. Must be one of "C", "C++", "Java", "C#", "Ruby", "PHP", or "Python". The default is "C". Specify the size of the memory blocks to use in C and C++ node allocators. Strip filenames down to their base name in #line directives, i.e. strip off the directory component. This can be helpful in combination with the %include %readonly command when treecc input files may be processed from different directories, causing common output files to change unexpectedly. Don't strip filenames in #line directives. (*) Use internal as the access mode for classes in C#, rather than public. Use public as the access mode for classes in C#, rather than internal. (*) Print #line markers in languages that use them. (*) Do not print #line markers, even in languages that normally use them. Use treecc's standard node allocator for C and C++. This option has no effect for other output languages. (*) Do not use treecc's standard node allocator for C and C++. This can be useful when the programmer wants to redirect node allocation to their own routines.
Use libgc as a garbage-collecting node allocator for C and C++. This option has no effect for other output languages. Do not use libgc as a garbage-collecting node allocator for C and C++. (*) Specify the base type for the root node of the treecc node hierarchy. The default is no base type. Sometimes it is necessary to embed literal code within output ‘.h’ and source files. Usually this is to ‘#include’ definitions from other files, or to define functions that cannot be easily expressed as operations. A literal code block is specified by enclosing it in ‘%{’ and ‘%}’. The block can also be prefixed with the following flags: Write the literal code to the currently active declaration header file, instead of the source file. Write the literal code to both the currently active declaration header file and the currently active source file. Write the literal code to the end of the file, instead of the beginning. Another form of literal code block is one which begins with ‘%%’ and extends to the end of the current input file. This form implicitly has the ‘%end’ flag. Most treecc compiler definitions will be too large to be manageable in a single input file. They also will be too large to write to a single output file, because that may overload the source language compiler. Multiple input files can be specified on the command-line, or they can be explicitly included by other input files with the following declarations: Include the contents of the specified file at the current point within the current input file. ‘FILENAME’ is interpreted relative to the name of the current input file. If the ‘%readonly’ keyword is supplied, then any output files that are generated by the included file must be read-only. That is, no changes are expected by performing the inclusion. The ‘%readonly’ keyword is useful for building compilers in layers. The programmer may group a large number of useful node types and operations together that are independent of the particulars of a given language.
The programmer then defines language-specific compilers that "inherit" the common definitions. Read-only inclusions ensure that any extensions that are added by the language-specific parts do not "leak" into the common code. Output files can be changed using the following declarations: Change the currently active declaration header file to ‘FILENAME’, which is interpreted relative to the current input file. This option has no effect for languages without header files (Java and C#). Any node types and operations that are defined after a ‘%header’ declaration will be declared in ‘FILENAME’. Change the currently active source file to ‘FILENAME’, which is interpreted relative to the current input file. This option has no effect for languages that require a single class per file (Java). Any node types and operations that are defined after a ‘%output’ declaration will have their implementations placed in ‘FILENAME’. Change the output source directory to ‘DIRNAME’. This is only used for Java, which requires that a single file be used for each class. All classes are written to the specified directory. By default, ‘DIRNAME’ is the current directory where treecc was invoked. When treecc generates the output source code, it must insert several common house-keeping functions and classes into the code. By default, these are written to the first header and source files. This can be changed with the ‘%common’ declaration: Output the common house-keeping code to the currently active declaration header file and the currently active source file. This is typically used as follows: This document was generated by Klaus Treichel on January 18, 2009 using texi2html 1.78.
http://www.gnu.org/software/dotgnu/treecc/treecc_4.html
test_suites/control.bvt def create_suite_job(suite_name, board, build, pool, check_hosts=True, num=None, file_bugs=False, timeout=24, timeout_mins=None, priority=priorities.Priority.DEFAULT, suite_args=None, wait_for_results=True): """ Create a job to run a test suite on the given device with the given image. When the timeout specified in the control file is reached, the job is guaranteed to have completed and results will be available. @param suite_name: the test suite to run, e.g. 'bvt'. @param board: the kind of device to run the tests on. @param build: unique name by which to refer to the image from now on. @param pool: Specify the pool of machines to use for scheduling purposes. @param check_hosts: require appropriate live hosts to exist in the lab. @param num: Specify the number of machines to schedule across. @param file_bugs: File a bug on each test failure in this suite. @param timeout: The max lifetime of this suite, in hours. @param timeout_mins: The max lifetime of this suite, in minutes. Takes priority over timeout. @param priority: Integer denoting priority. Higher is more important. @param suite_args: Optional arguments which will be parsed by the suite control file. Used by control.test_that_wrapper to determine which tests to run. @param wait_for_results: Set to False to run the suite job without waiting for test jobs to finish. Default is True. @raises ControlFileNotFound: if a unique suite control file doesn't exist. @raises NoControlFileList: if we can't list the control files at all. @raises StageBuildFailure: if the dev server throws 500 while staging build. @raises ControlFileEmpty: if the control file exists on the server, but can't be read. @return: the job ID of the suite; -1 on error.
http://www.chromium.org/chromium-os/testing/dynamic-test-suites
Update: Belorussian translation. At Edgeio we had a fairly complicated network setup, and at one point I quickly hacked together a Ruby script to merge the paths generated by multiple traceroute runs together into directed graphs, showing the routing from a few selected hosts in our environment to all our other hosts. I generated dot-files suitable for Graphviz from it. It was a helpful way of looking for weird inconsistencies in routing, in particular between our two locations. Unfortunately, when Edgeio closed down I think the script was lost, and in any case if it isn't I wouldn't be able to get permission to release it without more hassle than it'd take me to recreate it from scratch. So recreate it from scratch is exactly what I did.

Here's an example of a traceroute to docs.google.com and other hosts (scaled down): (Gradients and shadows courtesy of my XSL transform to make graphviz output prettier)

There's a couple of caveats: I just strip out failed probes, and I don't try to reconcile the names of the endpoints (which I preferred to include for readability) with the IP addresses of the trace, so the first/last grey nodes before the named/blue nodes may be redundant. This script by default runs traceroute 3 times for each target, and that's the reason why there are more possible paths than endpoints; it mostly illustrates failover and/or load balancing, but can also be affected by fluctuations in dynamic routing. It's usually fairly stable, and in fact at Edgeio I found several network problems by re-running the script when something was up and looking at how the routing had changed. 3 runs seemed sufficient for my use, but for large networks adding more may give a better picture of the routing.

You can find the full script here, but here are the guts:

First I defined a convenience method to run traceroute and capture the output.
This is intended for a POSIX OS (Linux / Unix / BSD's), but mainly requires a working traceroute where the output is a number of lines starting with a hop count and then the ip address. TRACEROUTE must be set to a valid traceroute command.

```ruby
require 'set'
require 'socket'
require 'timeout'

TRACEROUTE = `which traceroute`.chomp

def traceroute host
  `#{TRACEROUTE} -n #{host}`
end
```

The TraceViz class does the gruntwork. @edges contains the edges of the graph, in other words which pairs of ip addresses represent a hop further in the network. @nodes contains a set of the ip addresses found. @targets contains the hostnames of the start and end-points - it's used only to style them differently:

```ruby
class TraceViz
  def initialize(times, timeout)
    @times, @timeout = times, timeout
    @edges = Set.new
    @nodes = Set.new
    @targets = Set.new
    @this_host = Socket.gethostname
    @targets << @this_host
  end
```

The #trace method executes the traceroutes, and enforces a timeout:

```ruby
  def trace host
    @times.times do |i|
      STDERR.puts "Trace ##{i+1} for #{host}"
      Timeout::timeout(@timeout) do
        process_trace(host, traceroute(host))
      end rescue nil
    end
  end
```

#process_trace handles the parsing of the trace, by splitting the output into lines, extracting the IP addresses, and then adding each of them to @nodes, and adding each pair to @edges. I don't care if we've seen them before, since I use Sets, so the previous (identical) nodes/edges will just overwrite the same values:

```ruby
  def process_trace host, trace
    @targets << host
    # On modern Rubies a String is no longer enumerable, so split into lines first.
    trace = [@this_host] + trace.lines.collect do |line|
      line = line.split
      line[0].to_i > 0 && line[1] != "*" ? line[1] : nil
    end.compact
    trace << host
    trace.each { |h| @nodes << h }
    trace.each_cons(2) { |h1, h2| @edges << [h1, h2] }
  end
```

Finally #to_dot generates a graphviz compatible directed graph:

```ruby
  def to_dot
    res = "digraph G {"
    @edges.each { |h1, h2| res << " \"#{h1}\"->\"#{h2}\"\n" }
    @nodes.each do |n|
      color = @targets.member?(n) ? "lightblue" : "lightgrey"
      res << " \"#{n}\" [style=filled fillcolor=#{color}]\n"
    end
    res << "}"
  end
end
```

First run the script to generate the dot-files, and then generate an SVG file from it:

```shell
ruby traceviz.rb docs.google.com > trace.dot
dot -Tsvg trace.dot > trace.svg
```

Optionally, process the script with my XSL transform to make it prettier (adding the gradients from above etc) - I'm using xsltproc from libxslt:

```shell
xsltproc notugly.xsl trace.svg > trace-notugly.svg
```

Then I used "rsvg" from librsvg2 to turn it into a PNG:

```shell
rsvg trace-notugly.svg traceviz.png
```

Of course these steps are easily enough wrapped into a script.
https://hokstad.com/traceviz-visualizing-traceroute-output-with-graphivz
I just came across this: >>> . Now, the empty string is a substring of every string, so how can find fail? find, from the doc, should generally be equivalent to S[start:end].find(substring) + start, except if the substring is not found; but since the empty string is a substring of the empty string, it should never fail.

Looking at the source code for find (in stringlib/find.h):

    Py_LOCAL_INLINE(Py_ssize_t)
    stringlib_find(const STRINGLIB_CHAR* str, Py_ssize_t str_len,
                   const STRINGLIB_CHAR* sub, Py_ssize_t sub_len,
                   Py_ssize_t offset)
    {
        Py_ssize_t pos;

        if (str_len < 0)
            return -1;

I believe it should be:

    if (str_len < 0)
        return (sub_len == 0 ? 0 : -1);

Is there any reason for having this unexpected behaviour, or was this simply overlooked?
https://mail.python.org/pipermail/python-list/2012-November/635538.html
COVID 19 cases data from Johns Hopkins, augmented and reformatted jhu.edu-covid19-2.4.38. Modified 2020-07-08T04:59:37

Resources | Packages | Documentation | Contacts | References | Data Dictionary

Resources
- confirmed. Confirmed Non-US cases by date and country
- deaths. Non-US Death cases by date and country
- recovered. Non-US recoveries cases by date and country
- confirmed_us. Confirmed US cases by date and country
- deaths_us. US Death cases by date and country

Documentation
This dataset processes and augments the COVID-19 data provided by Johns Hopkins University Center for Systems Science and Engineering (JHU CSSE). The source data is checked into Github daily, and is collected from a variety of sources. This dataset reformats the data into tidy format, with dates expressed as values instead of column headings, and adds several fields that are useful for analysis.

The 'rate_t5d' column is the growth rate from 5 days before the observation to the observation. For example, for a row with a current observation of value x_5, and a past observation of x_0, the rate_t5d is calculated as e^((log(x_5)-log(x_0)) / 5)-1. The result is that x_5 = x_0 * (1+rate_t5d)^5, where rate_t5d is the rate computed from the previous 5 days.

Caveats
- China's minimum cases in the dataset is 548, so its value for days since 100 cases is shifted by 6 days. It's just a guess, but it looks good.
- Countries that haven't reached 100 cases yet will have a days since 100 cases value that is always negative.

Documentation Links

Contacts
- Origin: John Hopkins University Center for Systems Science and Engineering
- Wrangler: Eric Busboom, Civic Knowledge

Data Dictionary: confirmed | deaths | recovered | confirmed_us | deaths_us

References
Urls used in the creation of this data package.
- ts_base_url. Base URL for time series data
- confirmed_ts_source. Source for time series of confirmed cases, excluding US
- death_ts_source. Source for time series of deaths, excluding US
- recov_ts_source. Source for time series of recoveries, excluding US
- confirmed_ts_us_source. Source for time series of confirmed cases, US Only
- death_ts_us_source. Source for time series of deaths, US Only

Packages
- s3: s3://library.metatab.org/jhu.edu-covid19-2.4.38.csv
- csv
- source

Accessing Data in Vanilla Pandas

import pandas as pd
confirmed_df = pd.read_csv('')
deaths_df = pd.read_csv('')
recovered_df = pd.read_csv('')
confirmed_us_df = pd.read_csv('')
deaths_us_df = pd.read_csv('')

Accessing Package in Metapack

import metapack as mp
pkg = mp.open_package('')
# Create Dataframes
confirmed_df = pkg.resource('confirmed').dataframe()
deaths_df = pkg.resource('deaths').dataframe()
recovered_df = pkg.resource('recovered').dataframe()
confirmed_us_df = pkg.resource('confirmed_us').dataframe()
deaths_us_df = pkg.resource('deaths_us').dataframe()
https://data.sandiegodata.org/dataset/jhu-edu-covid19/
This program reverses the array. For example if user enters the array elements as 1, 2, 3, 4, 5 then the program would reverse the array and the elements of array would be 5, 4, 3, 2, 1. To understand this program, you should have the knowledge of following Java Programming topics:

Example: Program to reverse the array

```java
import java.util.Scanner;

public class Example {
    public static void main(String args[]) {
        int counter, i = 0, j = 0, temp;
        int number[] = new int[100];
        Scanner scanner = new Scanner(System.in);
        System.out.print("How many elements you want to enter: ");
        counter = scanner.nextInt();

        /* This loop stores all the elements that we enter in
         * the array number. First element is at number[0], second at
         * number[1] and so on
         */
        for (i = 0; i < counter; i++) {
            System.out.print("Enter Array Element" + (i + 1) + ": ");
            number[i] = scanner.nextInt();
        }

        /* Here we are writing the logic to swap first element with
         * last element, second last element with second element and
         * so on. On the first iteration of while loop i is the index
         * of first element and j is the index of last. On the second
         * iteration i is the index of second and j is the index of
         * second last.
         */
        j = i - 1;
        i = 0;
        scanner.close();
        while (i < j) {
            temp = number[i];
            number[i] = number[j];
            number[j] = temp;
            i++;
            j--;
        }

        System.out.print("Reversed array: ");
        for (i = 0; i < counter; i++) {
            System.out.print(number[i] + " ");
        }
    }
}
```

Output:

```
How many elements you want to enter: 5
Enter Array Element1: 11
Enter Array Element2: 22
Enter Array Element3: 33
Enter Array Element4: 44
Enter Array Element5: 55
Reversed array: 55 44 33 22 11
```
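The two-pointer swap at the heart of the program can be isolated into a small self-contained class (ReverseDemo is just an illustrative name, not part of the original program):

```java
import java.util.Arrays;

public class ReverseDemo {
    // Swap the ends and move inward, exactly like the while loop above.
    static void reverse(int[] a) {
        for (int i = 0, j = a.length - 1; i < j; i++, j--) {
            int temp = a[i];
            a[i] = a[j];
            a[j] = temp;
        }
    }

    public static void main(String[] args) {
        int[] numbers = {11, 22, 33, 44, 55};
        reverse(numbers);
        System.out.println(Arrays.toString(numbers));
    }
}
```

Because i and j meet in the middle, each element is swapped exactly once, giving a single O(n) pass with no extra array.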
https://beginnersbook.com/2017/09/java-program-to-reverse-the-array/
Yesterday I visited once again a customer that is currently migrating .NET applications to the latest version. About 1 year ago I wrote a post on migrating applications from .NET version 1.1 to 2.0 and 3.0. At the end of last year version 3.5 was also released, so now a lot of customers don't know if it is better or not to migrate directly to version 3.5. My advice is that it is better to take the big step and therefore migrate (just once) everything to the .NET Framework 3.5, and so start to use the new Visual Studio 2008 and the new features and technologies (like LINQ) offered by this latest framework version. In a migration discussion it is also important to show customers where they have to pay attention in the migration process. In fact, if you decide to migrate everything to .NET 3.5 you have to know that .NET 3.5 comes with an SP that applies to .NET Framework 2.0 and 3.0 (as shown in the following figure): Moreover, by installing Visual Studio 2008, you automatically get .NET Framework 3.5. One of the most interesting features of VS2008 is multitargeting: you have the option to choose between different framework versions: .NET Framework 2.0, 3.0 or 3.5. In reality the available versions (because of the installation of version 3.5) are .NET Framework 2.0 SP1, 3.0 SP1 and 3.5. What does it mean for you? If you open your "old" 2.0 application and you recompile it, you are actually recompiling it using version 2.0 SP1. Of course the decision of migrating everything to 2.0 SP1 / 3.0 SP1 depends a lot on the situation you have on your servers and clients: what kind of framework version is deployed and what are the barriers (approval, delivery, presence of other applications that you can't control, …). So, ideally it would be nice if you could migrate everything to 3.5, but in reality you may have to find compromises.
In any case, if you decide to migrate everything to .NET 3.5, you may find the following links, followed by a post I wrote about one year ago, quite useful:

· Problems that are fixed in the .NET Framework 2.0 Service Pack 1
· Problems that are fixed in the .NET Framework 3.0 Service Pack 1

Note: in my old post I mention Visual Studio 2005 SP1. Of course, if you decide to migrate directly to .NET 3.5, what I say about VS2005 SP1 is still valid for VS2008.

Hope it helps,
Ken Casada

From my old blog post about migrating .NET applications from version 1.1 to 2.0, 3.0

Why did I decide to write this post? There are always quite a lot of customers asking questions about migrating .NET 1.1 to 2.0. Moreover, Microsoft recently released version 3.0 of the .NET Framework, a version that is also part of the new Windows Vista operating system. So I think that now is a good time to make things a little bit clearer about the newest version and, of course, to show you the possibilities for migrating your applications, so that you can profit from all the advantages offered by the newest version. I will also mention the possible problems you can encounter and how you can solve them in the best way. While writing this post I found that the documentation on MSDN is not always up to date and quite fragmented. That's why I tried to summarize information found around the web and bring it up to date. I also found really useful information on Scott Guthrie's blog and Peter Laudati's blog. Thanks guys!

OK, then let's start. The most important thing to know is that at its base the .NET Framework 3.0 uses version 2.0 of the framework. There are no technical changes. If you already have version 2.0 and you install version 3.0, the DLLs of version 2.0 will not be replaced.
If you don't already have version 2.0 (the same that is provided with Visual Studio 2005), then when you install version 3.0 you automatically install version 2.0 plus 4 new pieces, 4 new technologies: Windows Presentation Foundation, Windows Communication Foundation, Windows Workflow Foundation and Windows CardSpace. I will not cover the details of these 4 new technologies (if you are interested in them, you can get extra information from the official Microsoft website or from the community website).

.NET Framework 3.0 is not a modification of version 2.0; it's just the framework 2.0 with these 4 newly added technologies: WPF, WF, WCF and Windows CardSpace.

What does that mean? If you are already using .NET Framework version 2.0, installing version 3.0 doesn't break any existing application. That's why for the rest of this post I will focus on the migration of .NET applications from version 1.1 to version 2.0.

To start, let me mention the possible upgrade scenarios. The first option is not upgrading. I know, it sounds funny, but you should really ask yourself if you need to migrate at all. If you have a 1.1 application that is working just fine and you don't have any plans to modify or update it, you can leave it as it is. It works just fine. In any case, what I really encourage you to do is a cost/benefit analysis of your particular situation. See if it makes sense to upgrade your application. In many cases not upgrading might be the right solution for you. So it really depends on the situation you are in. The problem is that in reality we often have several applications with different needs but just one machine. You may, for instance, have just one web server and two web applications: one already using version 2.0 and one still compiled and tested with version 1.1.
So, what you need in this case is a way to maintain your old application on 1.x, but at the same time have the opportunity to write new applications with the new 2.0 version. What can you do? You can have both framework versions installed on the same machine: in our specific example, one web server with both framework versions installed, version 1.1 and version 2.0. So what you can do is just upgrade the framework to 2.0 (or directly to 3.0, since, as I mentioned before, by installing 3.0 you also get 2.0) and so have both framework versions installed on the same machine. In our specific case our old apps will then use the old framework version, while our new apps will use version 2.0 of the framework. In this case we speak of "side-by-side execution".

Another possibility is to run our apps in "2.0 backwards compatibility mode". What does that mean? That you keep your app compiled for 1.1, but your application is executed using 2.0.

OK, but what happens in general when you load, when you run, a .NET 1.1 application on a machine that has both the 1.1 and 2.0 framework versions installed? Which version will be used? Actually, this question opens a lot of other questions. Let's have a look at the following table:

Application type | Computer with 1.1 | Computer with 2.0 | Computer with 1.1 and 2.0
1.1 stand-alone application (Web or Microsoft Windows client) | Loads with 1.1 | Loads with 2.0 (*) | Loads with 1.1
2.0 stand-alone application (Web or Microsoft Windows client) | Fails | Loads with 2.0 | Loads with 2.0
1.1 add-in to a native application (such as Office or Internet Explorer) | Loads with 1.1 | Loads with 2.0 (*) | Loads with 2.0 unless the process is configured to run against 1.1 (*)
2.0 add-in to a native application (such as Office or Internet Explorer) | Fails | Loads with 2.0 | Loads with 2.0 (*)

(*) There are some breaking changes in .NET Framework 2.0.

The table shows us different situations. In the first column we have the type of application.
The first row is a .NET stand-alone application (Web or Windows client) compiled for 1.1; the second is the same type of application but this time compiled for 2.0. The third and fourth rows represent unmanaged applications that host the Common Language Runtime, i.e. non-.NET applications that manually load the CLR in order to run .NET code, for instance unmanaged applications that load an add-in written in .NET. If you have a look at the first column, nothing special must be said: if you just have version 1.1, it is clear that .NET applications compiled for 2.0 will not work. In the second and third columns things are a little bit more interesting: if you have a 2.0 application, version 2.0 will always be loaded (that's also logical!). Where you really need to pay more attention, and do more testing, are the scenarios in bold. The first important thing to say is that framework version 2.0 is mostly backwards compatible with the 1.1 framework. What does that mean? That even if your applications are compiled for 1.1, they should run fine on the 2.0 framework. Therefore the scenario in bold should work out for you most of the time if you choose not to migrate your 1.1 application forward. But (and there is always a "but") there are some breaking changes in the 2.0 framework. That's the reason why I put red asterisks by the bold scenarios that point you to a link. If you navigate to the specified link () you will get the list of breaking changes. By analyzing this list you will probably notice that the majority of your applications don't even touch any of these cases, so your 1.1 application will probably work without problems. In any case, if your 1.1 application fails to run on 2.0 for whatever reason, you still have the possibility to load it with the old framework version and thus use the scenario described in the 3rd column, in which you have both framework versions installed on the same machine :)
In the above table we still have one special case left, in which an unmanaged application uses a .NET assembly compiled for 1.1. While managed applications built with 1.1 continue to load framework 1.1 as long as 1.1 is installed, an unmanaged application that hosts .NET code compiled with 1.1 will always load the newest installed version of the .NET Framework by default. In some cases this could be a problem, and if you really have a problem with this behavior there is still a workaround: there is a possibility to force the unmanaged application to load the old framework version. What do you have to do? First, locate the unmanaged EXE, i.e. the folder where the EXE of your application is located. Create a text file whose name is the name of your application's EXE file with the ".config" extension added. Then, inside the file you have created, paste the following configuration tag (which explicitly says that the version to be used is version 1.1).

Create the text file: <unmanagedEXEname>.exe.config (e.g. myapp.exe becomes myapp.exe.config)

Paste this text into the new text file:

<?xml version="1.0"?>
<configuration>
  <startup>
    <supportedRuntime version="v1.1.4322"/>
  </startup>
</configuration>

In case you want to load a web application with a specific framework version, you don't have to edit any configuration file; things are even easier. The first important thing to say is that installing the .NET Framework 2.0 will not, by default, change any existing web application to 2.0. If you want to change the version used, it's enough to open IIS and, inside the site properties dialog of your web application, select the ASP.NET tab and choose the framework version which must be used by your application.
The ASP.NET tab is a new tab; you only get it by installing .NET Framework 2.0. Another important thing to know: don't try to mix ASP.NET 1.1 and 2.0 apps within a single AppPool on Windows Server 2003! You can try, but you will see that it doesn't work!

By now we have seen how your .NET application will work on machines that have only the 2.0 framework, and on machines that have both versions of the framework installed. We still have one option left: converting our application to 2.0. What does that mean? We don't want to keep our assemblies compiled for 1.1; we really want to recompile everything with 2.0 and profit from the new .NET 2.0 features. First, upgrade your Visual Studio 2005 to SP1. The target of VS2005 is clearly the .NET Framework version 2.0. The installation of SP1 is really very important, because SP1 helps you a lot with the conversion process (it solves a lot of conversion problems that were present before). For the developers who are already using Windows Vista, since the beginning of March there is a specific update for VS2005 targeting Windows Vista users ().

But what do we have to do if we want to migrate our applications from .NET 1.1 to 2.0? What are the steps involved in this migration process? Where do we have to start? First, back up your VS2003 solution before attempting any migration; then open the VS2002/2003 project/solution in VS2005 SP1. A conversion wizard will be started and will convert your project/solution file to VS2005. Finally, compile everything. Everything will be compiled with 2.0, and so you get a .NET Framework 2.0 application. Of course this is quite a quick-and-dirty answer, but if you are migrating a Windows Forms application, the majority of the time this easy answer will be all it takes. Instead, if you are migrating an ASP.NET web application, you have to do some extra work… But let's start with non-web applications first.
As I mentioned, most of the time opening the application in VS2005 and running it through the conversion wizard will be all it takes. This will merely update the project files (.vbproj/.csproj) to work with 2005. It will not update your application code to take advantage of the new .NET 2.0 features. At this point, when you compile your existing 1.1 code, it will be compiled against the 2.0 framework. If the code doesn't compile, check out the list of breaking changes in the 2.0 framework I mentioned before.

For migrating web applications there can be more work involved, since the project model for web applications has changed greatly in VS2005. Or, better said, we have two options for migrating an ASP.NET 1.1 application to 2.0. Actually, when Visual Studio 2005 was released there was only the possibility to migrate to the new default web site project (let's call it Option #2), but later, after users' complaints about the difficulties they had migrating to the new project type, Microsoft decided to release another project model (Web Application Project; I will call it Option #1). Initially Option #1 was released as a separate download; now a fixed version is part of Visual Studio 2005 SP1. That's why it is so important to use VS2005 SP1. That is all you need today, no extra download.

So the first option is to migrate to the Web Application Project template, which basically uses the same model as the previous VS2003 project type. Because of that, it's the easiest way to convert your application to .NET Framework 2.0, especially if you have a complex web application. In a Web Application Project we have a project file, and only files that are referenced inside this project file are part of the project, are displayed in the Solution Explorer, and are compiled during a build. This is not the case with Option #2, where there is no project file and all files in the folder are part of the project. Which model to choose?
As I said, it's probably easier to migrate to the Web Application model because it contains minimal changes compared to VS2003. On the other side, I also have to say that the web site model was introduced because there was a specific need for this type of project. As an example, here are some points described in a post on Scott Guthrie's blog. If you think of the real world, web projects are almost always built using a variety of different tools in parallel. Images are worked on using tools like Photoshop or Illustrator. CSS is maybe edited with FrontPage, and the code with Visual Studio. Sometimes the same person performs all of these tasks; typically, for larger projects, they are split up across multiple people working together. One of the biggest complaints about VS2003 is how hard it is to manage these cross-tool workflows. For example, a designer might update a CSS file in a web project to use a new background image, and add the new image to an images directory under the web project using their image-editing tool. If the web developer using VS2003 does not coordinate this change by updating his or her project file to remove the old image and add the new image to the project file manifest, then they will find that things work OK during development, but fail in production, since the new image will not be copied/deployed when the web project is built to a new location. Another problem you can have is that you always need to check out the project file every time a file is added, renamed or deleted. If you have a complex project with multiple users this can be a problem. In conclusion, which model to choose? It depends; it really depends on your requirements. Some users will find the new web site project option more appropriate for their applications, while other users will prefer the Web Application Project option.
Web Application Projects provide the best path for upgrading existing Visual Studio .NET 2003 applications to Visual Studio 2005, and are highly recommended for that scenario. Going forward, Microsoft will fully support both models.

If you choose Option #1, what do you have to do?

1. First you have to validate your solution in VS2003. If you are using a remote project, copy your remote project locally, otherwise the migration wizard will have a problem. Open your solution with VS2003 and build the whole solution. Validate the whole solution and be sure that your application is working correctly.

2. With the second step we start the migration process. Open the solution file in VS2005. At this point the conversion wizard is automatically launched. The migration wizard proposes to make a backup of your current solution. Do that! It's really important; if for whatever reason you want to go back, you always have your backup. The conversion will update the solution file and the project file to be compliant with VS2005. After the solution and project files are upgraded to the Visual Studio 2005 format, validate that your application builds without errors and runs as expected. At this point, the most common error will be conflicts with names that have been introduced in the .NET Framework version 2.0. For example, the new personalization feature introduces classes with names like Profile or Membership. The Profile name, in particular, is fairly commonly used by developers who want to keep track of user information (ambiguous class reference). To fix this kind of error, you can either fully qualify existing names with a namespace or rename the members so they do not conflict. You may also get warnings about using obsolete members: a warning telling you that a method has been deprecated just means that sometime in a future major version the method will not be supported anymore.
When you get a warning of this type, you always get an alternative method to call (the compiler suggests another method to use). When compiling you may also get an error caused by one of the breaking changes I mentioned before but, as I said, the majority of your applications will probably compile without problems. Finally, start the application and validate all the functionality.

3. The next step is to convert your web project to use partial classes. What are partial classes? Partial classes are a new technique introduced with the .NET Framework 2.0, which allows us to separate what has been generated by the designer from the part that has been customized by the user. With Visual Studio 2003 we have P1.aspx and then the code-behind file with the .cs extension (obviously in the case we are working with C#). With Visual Studio 2005 the code-behind file is split into two entities: one is the .aspx.cs file and the other is the designer file. The designer file contains the code generated by the designer.

Visual Studio 2003 (files: P1.aspx, P1.aspx.cs):

P1.aspx:
    Inherits="AppName.P1" Codebehind="P1.aspx.cs"

P1.aspx.cs:
    namespace AppName {
        public class P1 : System.Web.UI.Page {
            // Contains both user & auto-generated code, e.g.
            protected System.Web.UI.WebControls.Label Label1;
            override protected void OnInit(EventArgs e) { … }
        }
    }

Visual Studio 2005 (files: P2.aspx, P2.aspx.cs, P2.aspx.designer.cs):

P2.aspx:
    Inherits="P2" CodeBehind="P2.aspx.cs"

P2.aspx.cs:
    namespace AppName {
        public partial class P2 : System.Web.UI.Page {
            // Contains user code
        }
    }

P2.aspx.designer.cs:
    namespace AppName {
        public partial class P2 {
            protected System.Web.UI.WebControls.Label Label1;
            override protected void OnInit(EventArgs e) { … }
        }
    }

In the first case we establish a connection between the .aspx file and the code-behind file with the "Codebehind" directive: we say explicitly that the code-behind file is P1.aspx.cs, and this file contains the normal class definition that you already know (public class P1 …).
With Visual Studio 2005 things are a little bit different. The definition of our class is split into two parts, in two different files. The class definition itself is "public partial class P2". The .aspx.cs file contains the code defined by the user, while the designer file contains the auto-generated code. What's important here is not to discuss all the advantages of this solution, but how to convert your solution to use partial classes. It's very simple: you just have to right-click on the root node of your web project and select "Convert to Web Application". This will move the generated designer code into the ".designer.cs" file. Build everything and do some fix-up if needed. If a control declaration has accidentally been removed (personally I never had this problem) and you need to add the missing declaration, add it to the code-behind file (not to the designer file).

4. The 4th step is to fix up HTML errors. Why? Because Visual Studio 2005 generates and validates XHTML-compliant markup. This helps you build web applications that are standards compliant and helps minimize issues with browser-specific rendering. Visual Studio 2003 did not generate XHTML-compliant markup, so you might see validation and rendering issues with pages created in Visual Studio .NET 2003. The conversion wizard keeps the HTML validation setting at Internet Explorer 6 (which was the Visual Studio .NET 2003 default setting). If you want to make your HTML XHTML-compliant, you can change the validation setting and, using the compiler's help instructions, make your HTML XHTML-compliant.

Let's now see the steps involved if you want to migrate to the Visual Studio 2005 web site model:

1. The first step is, as before, the validation inside Visual Studio 2003. Open your solution inside Visual Studio 2003 and validate it: test that everything builds fine and works as it should.

2.
The second step consists of a review of the site architecture for possible conflicts:

a) If you have multiple project files referring to the same set of files, it is possible for those files to be migrated twice (usually, if this happens, it means that there is a duplicate). If you really need to reference the same set of files, I think the easiest thing to do is to migrate to the Web Application Project.

b) If you have two pages referencing the same code-behind file, that's bad design. Change it and use one unique code-behind file for each WebForm and UserControl.

c) If you have other projects referencing the web project, you have to move the shared code to a separate class library and then reference this class library.

d) In Visual Studio .NET 2003 you had to explicitly decide whether or not to include files in your web project. Since with the web site model no project file exists, all files inside a folder are considered part of the project. The conversion wizard will ignore files that were excluded from the VS2003 project and leave them where they are; no conversion process will be applied to them. What you can do when the conversion is finished is to exclude these files with "Exclude from Project" (right mouse click). This command will just put an exclude extension on the specific files, and unluckily this is not done automatically by the wizard.

3. To start the conversion wizard you have to select "Open Web Site" from the "File" menu and select Local IIS. During the conversion process, the first step you are asked to execute is to back up your project. Here you just need to pay attention in case you decide to define the backup folder yourself. In fact, this must be a folder outside the one of your web application! Otherwise you will get chaos (remember: with the web site project, all files inside a folder are considered part of the project, so pay attention to this point!).
In case your solution has multiple projects (for instance your web project and other non-web projects), you first have to migrate the non-web projects and then, in a second step, the web project itself using "Add Existing Web Site" (this will start the conversion wizard for the web project).

4. The 4th step is probably the most complicated one, because quite a lot of manual fix-up can be necessary. First, as described for the Web Application Project (step 2), you have to try to compile everything and resolve any conflicts. What was said before is still valid. In addition to that there is some extra clean-up work to do. Here are some examples:

a) You may remember that Visual Studio 2003 generated a resource file for each WebForm and UserControl, and in general users did not use them. The conversion wizard doesn't remove them, but what I suggest is to create your own resource file inside the new App_GlobalResources folder, move all the resource declarations inside this file, and then remove all .resx files that are no longer used at the WebForm or UserControl level. That way you will have a cleaner solution.

b) In VS2003 the designer auto-generated member functions like OnInit and InitializeComponent() in the code-behind class file. These functions are not used by VS2005 but will be called if they exist. They are not removed by the conversion wizard. So, once you have migrated your application, you can remove these functions from the code-behind file (of course only if they contain no user-specific code). It's just clean-up work.

c) VS2005, and in particular the new web site model, brings a new compilation model. In VS2003 the compilation was done by Visual Studio 2003 for all .cs and .vb files, and those files were compiled into a single assembly inside the bin folder. In VS2005 we don't work with just one assembly; we have multiple assemblies, and because of this change you generally also have to do some general fix-up.
Luckily, quite a lot is done by the conversion wizard, but sometimes you might need to fix up something manually. What I suggest is to have a look at the following article, which reports the common web project conversion issues and solutions. If you work with Visual Studio 2005 SP1 (which includes the final version of the conversion wizard), you will see that most of the issues have already been solved, and so no manual fix-up is needed. That's why I strongly recommend the installation of VS2005 SP1!

5. The last step is, as always, making your application XHTML-compliant. This step is optional; for details I refer you to point 4 of the other migration option (Web Application Project).

Note: on this topic I recently held a TechTalk. You can download the slides from the following address:
http://blogs.msdn.com/b/swiss_dpe_team/archive/2008/04/10/migrating-from-net-1-1-to-2-0-3-0-and-3-5.aspx
Java provides the java.util.regex package for pattern matching with regular expressions. The following example illustrates how to find a digit string within a given alphanumeric string (the input string and pattern lines, lost in the original, are reconstructed from the output shown below):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Demo
{
    public static void main(String args[])
    {
        // String to be scanned to find the pattern.
        String line = "This order was placed for QT3000! OK?";
        String pattern = "(.*)(\\d+)(.*)";

        // Create a Pattern object
        Pattern r = Pattern.compile(pattern);

        // Now create matcher object.
        Matcher m = r.matcher(line);
        if (m.find( )) {
            System.out.println("Found value: " + m.group(0));
            System.out.println("Found value: " + m.group(1));
            System.out.println("Found value: " + m.group(2));
        } else {
            System.out.println("NO MATCH");
        }
    }
}

On execution of the above program we will get the output below:

Found value: This order was placed for QT3000! OK?
Found value: This order was placed for QT300
Found value: 0
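A follow-up sketch, not from the original article: the same Pattern/Matcher API can loop over every match instead of stopping at the first one, by calling Matcher.find() repeatedly. The class and method names (DigitRuns, digitRuns) are my own.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class DigitRuns {
    // Returns every maximal run of digits found in the input.
    // Each successful find() advances past the previous match.
    public static List<String> digitRuns(String input) {
        List<String> runs = new ArrayList<>();
        Matcher m = Pattern.compile("\\d+").matcher(input);
        while (m.find()) {
            runs.add(m.group());
        }
        return runs;
    }

    public static void main(String[] args) {
        System.out.println(digitRuns("This order was placed for QT3000! OK?"));
        // prints [3000]
    }
}
```

Note the contrast with the greedy "(.*)(\d+)(.*)" pattern above: there, the first greedy group swallows most of the digits, so group(2) captures only "0", while "\d+" on its own captures the whole digit run.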
http://itsourcecode.com/2016/03/java-regular-expression/
Hi all & thanks, Tilman! We did not analyze Arabic WP, but the tools released alongside the paper could be used to produce the analysis. One challenge with working with redirects (and a motivation for the short paper) is that redirects are actually quite dynamic. They may exist for a time and then be re-routed as the coverage of a topic changes and moves. Thus the discussion in the paper of redirect "spells" and all the work to do something more than just drop them from the analysis. Also relevant to Reem's original concern and the subsequent discussion here: page views accrue to redirects even when the *content* that is viewed exists on the page that is the target of the redirect. Thus even a page that gets few/zero edits may be viewed via a redirect in a way that the usual page view data does not account for very precisely. Not surprisingly, I agree with Tilman that it would be very interesting to see how some of the comparisons/analyses discussed in this thread might change with more precise accounting of redirects :) later, Aaron On Thu, Sep 15, 2016 at 11:55 PM, Tilman Bayer <tba...@wikimedia.org> wrote: > To Andrew's point about excluding redirects, see also this paper by > Benjamin Mako Hill and Aaron Shaw (CCed): > ghteous/consider-the-redirect > <> > (don't know if they have data for Arabic Wikipedia too) > > In short, the distribution of edits is very different for redirects and > articles. In light of this, and to address Reem's original question, it's > probably worth looking at the actual histogram before relying on the > average or other statistical moments.
> > Also interesting in this regard, although the data is not current: > > <> > > On Thu, Sep 15, 2016 at 7:00 AM, Dan Andreescu <dandree...@wikimedia.org> > wrote: > >> Good point, updated to *exclude redirects* and rerun: >> >> total_namespace_0_revisions: 457,574,404 >> total_namespace_0_pages: 5,236,104 >> >> per namespace 0 non-redirect article: >> >> standard deviation of edits: *324.45* >> *average* edits: *87.54* >> standard deviation of days between first and last edit: *1360.16* >> *average* days between first and last edit: *2316.37* >> >> So you were right, Andrew, numbers change, but I think the nature of the >> data is roughly the same. It's interesting that average difference between >> first and last edit is smaller than two standard deviations. That suggests >> that curve is also slightly lopsided, with perhaps lots of more recently >> created articles and few long lived ones. But that "recent" could be the >> spike in the 2007-2011 period. It may be interesting to play with these >> metrics more, and I'll keep this in mind as we build the new infrastructure >> (making these queries as fast as possible and easy to dig into). >> >> On Wed, Sep 14, 2016 at 6:18 PM, Andrew Gray <andrew.g...@dunelm.org.uk> >> wrote: >> >>> Hi Dan, >>> >>> Thanks for running these! >>> >>> I'm struck by the figure of 12.8m pages in ns0 - it looks like this >>> includes redirects (there are ~7.6m ns0 redirects on enwiki, and ~5.2m >>> articles). This will probably skew things a lot, as the majority of >>> those will probably be edited once and never touched again, barring >>> the target page being moved,. Given they're ~60% of the pages, this >>> will introduce a lot of extra weight for "articles with very few >>> edits" and "articles that get edited very infrequently". >>> >>> It might be worth trying to filter out redirects - I suspect this >>> would have a noticeable effect on both the distribution and the mean >>> time between edits. >>> >>> Andrew. 
>>> >>> On 14 September 2016 at 22:01, Dan Andreescu <dandree...@wikimedia.org> >>> wrote: >>> > Quick follow up 'cause I was curious. I calculated the average and >>> standard >>> > deviation for edits per namespace 0 article on enwiki. I tried to do >>> it on >>> > the research db replicas but it took forever so I did it on the hadoop >>> > cluster. Including archived pages isn't useful, doesn't change the >>> results >>> > almost at all. Including pages outside namespace 0 increases the >>> standard >>> > deviation and decreases the average. Here are the results: >>> > >>> > 484,170,218 edits on namespace 0 >>> > 12,756,342 pages in namespace 0 >>> > >>> > standard deviation for edits per page: 213.58 >>> > average edits per page: 38.02 >>> > average days between first and last edit per page: 1215.27 >>> > >>> > So considering the standard deviation is much larger than the mean, I'm >>> > pretty confident to answer yes, I think the vast majority of articles >>> in >>> > namespace 0 on enwiki get very few edits. The dataset we're working on >>> > releasing as part of wikistats 2.0 will allow these kinds of questions >>> to be >>> > answered really easily and really quickly. Stay tuned over the next >>> few >>> > quarters :) >>> > >>> > And the queries: >>> > >>> <> >>> > >>> > If you want to edit those queries to find something else out, I'm >>> happy to >>> > run them one or two more times, but then I really have to get back to >>> my >>> > real job :) >>> > >>> > On Wed, Sep 7, 2016 at 12:42 PM, Andrew Gray < >>> andrew.g...@dunelm.org.uk> >>> > wrote: >>> >> >>> >> Hi Reem, >>> >> >>> >> Here's some rough estimates. >>> >> >>> >> English - >>> <> >>> >> >>> >> English has ~5.2 million articles, with an average of ~92 edits per >>> >> article, not counting deleted edits (or deleted articles). Note that >>> 80% of >>> >> those articles are more than three years old, so they've had plenty >>> of time >>> >> to build up the 92 edits. 
>>>>> [The page does not explicitly say that only article edits are counted in the tables, but this is easy to confirm - <> has 847m edits]
>>>>>
>>>>> Arabic - <>
>>>>>
>>>>> Arabic has ~437k articles, ~31 edits/article - but only half of these are more than three years old, so they're on average a lot younger than the English ones.
>>>>>
>>>>> As of July there are 3.3m edits/month in English - this is equal to an average of 0.63 edits/article/month - and 226k edits/month in Arabic, equal to 0.52 edits/article/month. July was a slow month for Arabic, and March had more than twice as many edits, 487k, across 415k articles.
>>>>>
>>>>> These are plain averages. The distribution is going to be very skewed, so high-edit articles get most of the attention, and the other articles easily go months without attention. If we assume an 80:20 distribution - which is a wild guess but sounds plausible - then the "long tail" of 80% of articles would get 20% of the edits. In this case, a plausible average would be:
>>>>>
>>>>> * English long tail, 4.16m articles and 660k edits/month = average of six months between each edit
>>>>> * Arabic (July) long tail, 350k articles and 45k edits/month = average of seven or eight months between each edit
>>>>> * Arabic (March) long tail, 332k articles and 97k edits/month = average of three and a half months between each edit
>>>>>
>>>>> This is a broad range, but it feels more or less right for all those unloved pages...
>>>>>
>>>>> Andrew.
>>>>> On 7 September 2016 at 14:52, Reem Al-Kashif <reemalkas...@gmail.com> wrote:
>>>>>> Hi,
>>>>>>
>>>>>> I always hear people saying that most of the articles usually receive little to no edits (and that is used to encourage participants to make sure their articles are good enough). I would like to know if there are statistics that support this for the English and Arabic Wikipedia.
>>>>>>
>>>>>> Best,
>>>>>> Reem
>>>>>>
>>>>>> --
>>>>>> Kind regards,
>>>>>> Reem Al-Kashif
>
> --
> Tilman Bayer
> Senior Analyst
> Wikimedia Foundation
> IRC (Freenode): HaeB

_______________________________________________
Analytics mailing list
Analytics@lists.wikimedia.org
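Andrew's 80:20 back-of-envelope estimate in the thread above can be reproduced with a few lines of arithmetic. The figures are taken directly from his message; the function name and the 80:20 split parameters are just for illustration:

```python
def months_between_edits(articles, edits_per_month,
                         tail_articles=0.8, tail_edits=0.2):
    """Average months between edits for a 'long tail' article, assuming
    the bottom tail_articles share of articles gets only the tail_edits
    share of the monthly edits (Andrew's 80:20 guess)."""
    tail_article_count = articles * tail_articles
    tail_edit_count = edits_per_month * tail_edits
    # Edits per article per month, inverted to get months per edit
    return tail_article_count / tail_edit_count

# Figures quoted in the thread (total articles, total edits/month)
print(round(months_between_edits(5_200_000, 3_300_000), 1))  # English: 6.3
print(round(months_between_edits(437_000, 226_000), 1))      # Arabic, July: 7.7
print(round(months_between_edits(415_000, 487_000), 1))      # Arabic, March: 3.4
```

These match the "six months", "seven or eight months", and "three and a half months" estimates in the message.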
https://www.mail-archive.com/analytics@lists.wikimedia.org/msg03796.html
Issues

ZF-7696: Bug: 'include(FrontController.php) [function.include]: failed to open stream: No such file or directory' error message

Description

Hi,

Even though I saw that this bug was fixed, I still get this error message: 'Warning: include(FrontController.php) [function.include]: failed to open stream: No such file or directory in /xxx/library/Zend/Loader.php on line 83'

The workaround of adding 'false' at class_exists() in BootstrapAbstract.php (line 354) still solves this.

Posted by David Abdemoulaie (hobodave) on 2009-08-31T17:58:28.000+0000

The false parameter was removed from the get_class() call in r17801. Simply adding it back causes the test case Zend_Application_Bootstrap_BootstrapAbstractTest::testRequestingPluginsByAutoloadableClassNameShouldNotRaiseFatalErrors to fail. I'm going to try and see if I can't find out a way for this to work in both cases.

Posted by David Abdemoulaie (hobodave) on 2009-09-17T13:59:31.000+0000

I'm stumped. Any ideas Matthew?

Posted by Glen Ainscow (darkangel) on 2009-09-29T03:06:24.000+0000

I get this error as soon as I enable the fallback autoloader. Any ideas?

Posted by Miroslav Kubelik (koubel) on 2009-09-29T03:54:50.000+0000

Yes, if you want to remove this warning, you need to disable the fallback autoloader and rewrite your code for proper namespace-based loading. Using false in get_class isn't a solution, because it produces other problems, as Matthew said. I think module bootstrapping needs to be deeply revised, but I don't currently have any ideas.

Posted by Glen Ainscow (darkangel) on 2009-09-30T04:48:54.000+0000

What if you're using a library (or application classes) that aren't namespaced?

Posted by David Abdemoulaie (hobodave) on 2010-04-02T09:16:44.000+0000

I created a patch that prevents this warning from occurring. It doesn't break any unit tests. However, I tried but couldn't duplicate the warning in a unit test context.
I can duplicate it in an application at will though by doing the following: resources.modules[] = This will result in 1 warning per module bootstrap. Posted by David Abdemoulaie (hobodave) on 2010-05-05T14:47:36.000+0000 Fixed in r22124
http://framework.zend.com/issues/browse/ZF-7696?focusedCommentId=40394&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
RethinkDB is a NoSQL database. It has an easy-to-use API for interacting with the database. RethinkDB also makes it simple to set up database clusters; that is, groups of servers serving the same databases and tables. Clusters are a way to easily scale your databases without any downtime. This tutorial will look at how to set up a cluster, import data, and secure it. If you are new to RethinkDB, look at the basics in this tutorial before diving into the more complex cluster configuration process. This tutorial requires at least 2 Droplets running Ubuntu 14.04 LTS, named rethink1 and rethink2 (these names will be used throughout this tutorial). You should set up a non-root sudo user on each Droplet before setting up RethinkDB – doing so is a good security practice. This tutorial also references the Python client driver, which is explained in this tutorial. Clusters in RethinkDB have no special nodes; it is a pure peer-to-peer network. Before we can configure the cluster, we need to install RethinkDB. On each server, from your home directory, add the RethinkDB key and repository to apt-get: source /etc/lsb-release && echo "deb $DISTRIB_CODENAME main" | sudo tee /etc/apt/sources.list.d/rethinkdb.list wget -qO- | sudo apt-key add - Then update apt-get and install RethinkDB: sudo apt-get update sudo apt-get install rethinkdb Next, we need to set RethinkDB to run on startup. RethinkDB ships with a script to run on startup, but that script needs to be enabled: sudo cp /etc/rethinkdb/default.conf.sample /etc/rethinkdb/instances.d/cluster_instance.conf The startup script also serves as a configuration file. Let's open this file: sudo nano /etc/rethinkdb/instances.d/cluster_instance.conf The machine's name (the one in the web management console and log files) is set in this file.
Let’s make this the same as the machine’s hostname by finding the line (at the very bottom): # machine-name=server1 And changing it to: machine-name=rethink1 (Note: If you don’t set the name before starting RethinkDB for the first time, it will automatically set a DOTA-themed name.) Set RethinkDB so it is accessible from all network interfaces by finding the line: # bind=127.0.0.1 And changing it to: bind=all Save the configuration and close nano (by pressing Ctrl-X, then Y, then Enter). We can now start RethinkDB with the new configuration file: sudo service rethinkdb start You should see this output: rethinkdb: cluster_instance: Starting instance. (logging to `/var/lib/rethinkdb/cluster_instance/data/log_file') RethinkDB is now up and running. We have turned on the bind=all option, making RethinkDB accessible from outside the server. This is insecure. So, we will need to block RethinkDB off from the Internet. But we need to allow access to its services from authorized computers. For the cluster port, we will use a firewall to enclose our cluster. For the web management console and the driver port, we will use SSH tunnels to access them from outside the server. SSH tunnels redirect requests on a client computer to a remote computer over SSH, giving the client access to all of the services only available on the remote server’s localhost name space. Repeat these steps on all your RethinkDB servers. First, block all outside connections: # The Web Management Console sudo iptables -A INPUT -i eth0 -p tcp --dport 8080 -j DROP sudo iptables -I INPUT -i eth0 -s 127.0.0.1 -p tcp --dport 8080 -j ACCEPT # The Driver Port sudo iptables -A INPUT -i eth0 -p tcp --dport 28015 -j DROP sudo iptables -I INPUT -i eth0 -s 127.0.0.1 -p tcp --dport 28015 -j ACCEPT # The Cluster Port sudo iptables -A INPUT -i eth0 -p tcp --dport 29015 -j DROP sudo iptables -I INPUT -i eth0 -s 127.0.0.1 -p tcp --dport 29015 -j ACCEPT For more information on configuring IPTables, check out this tutorial. 
Let’s install “iptables-persistent” to save our rules: sudo apt-get update sudo apt-get install iptables-persistent You will see a menu like this: Select the Yes option (press Enter) to save the firewall rules. You will also see a similar menu about IPv6 rules, which you can save too. To access RethinkDB’s web management console and the driver interface we need to set up the SSH tunnel. Let’s create a new user for the ssh tunnel on rethink1: sudo adduser ssh-to-me Then set up the authorized keys file for our new user: sudo mkdir /home/ssh-to-me/.ssh sudo touch /home/ssh-to-me/.ssh/authorized_keys If you are using SSH to connect to the cloud server, open a terminal on your local computer. If you are not, you may want to learn more about SSH keys. Get your public key and copy it to your clipboard: cat ~/.ssh/id_rsa.pub Then add that key to the new account by opening the authorized_keys file on the server: sudo nano /home/ssh-to-me/.ssh/authorized_keys Paste your key into the file. Then save and close nano ( Ctrl-X, then Y, then Enter). You need to repeat all of these steps for your other cluster nodes. You may want to import a pre-existing database into your cluster. This is only needed if you have a pre-existing database on another server or on this server; otherwise, RethinkDB will automatically create an empty database. If you need to import an external database: If the database you wish to import is not stored on rethink1, you need to copy it across. First, find the path of your current RethinkDB database. This would be the auto-created rethinkdb_data directory if you used the rethinkdb command to start your old database. 
Then, copy it using scp on rethink1: sudo scp -rpC From Server User@From Server IP:/RethinkDB Data Folder/* /var/lib/rethinkdb/cluster_instance/data For example: sudo scp -rpC root@111.222.111.222:/home/user/rethinkdb_data/* /var/lib/rethinkdb/cluster_instance/data Then restart RethinkDB: sudo service rethinkdb restart If you have an existing database on rethink1: If you have an existing RethinkDB database on rethink1, the procedure is different. First open the configuration file on rethink1: sudo nano /etc/rethinkdb/instances.d/cluster_instance.conf Then, find the path of the RethinkDB database you want to import. This would be the auto-created rethinkdb_data directory if you used the rethinkdb command to start your old database. Insert that path into the configuration file by adding the line: directory=/home/user/rethink/rethinkdb_data/ Close the file to save your changes (using Ctrl-X, then Y, then Enter). Now restart RethinkDB: sudo service rethinkdb restart It is important to note that importing a pre-existing database will mean that rethink1 will inherit the name of the database’s old machine. You will need to know this when managing the sharding of the database later on. In order to create a cluster, you need to allow all of the cluster machines through each other’s firewalls. On your rethink1 machine, add an IPTables rule to allow the other nodes through the firewall. In this example, you should replace rethink2 IP with the IP address of that server: sudo iptables -I INPUT -i eth0 -s rethink2 IP -p tcp --dport 29015 -j ACCEPT Repeat the command for any other nodes you want to add. Then save the firewall rules: sudo sh -c "iptables-save > /etc/iptables/rules.v4" Then repeat these steps for your other nodes. For a two-server setup, you should now connect to rethink2 and unblock the IP of rethink1. Now you need to connect all of the nodes to create a cluster. 
Use SSH to connect to rethink2 and open the configuration file: sudo nano /etc/rethinkdb/instances.d/cluster_instance.conf The join option specifies the address of the cluster to join. Find the join line in the configuration file: # join=example.com:29015 And replace it with: join=rethink1 IP Save and close the configuration file (using Ctrl-X, then Y, then Enter). Then restart RethinkDB: sudo service rethinkdb restart The first node, rethink1, does NOT need the join update. Repeat the configuration file editing on all of the other nodes, except for rethink1. You now have a fully functioning RethinkDB cluster! The web management console is an easy to use, online interface that gives access to the basic management functions of RethinkDB. This console is useful when you need to view the status of the cluster, run single RethinkDB commands, and change basic table settings. Every RethinkDB instance in the cluster is serving a management console, but this is only available from the server’s localhost name space, since we used the firewall rules to block it off from the rest of the world. We can use an SSH tunnel to redirect our requests for localhost:8080 to rethink1, which will send the request to localhost:8080 inside its name space. This will allow you to access the web management console. You can do this using SSH on your local computer: ssh -L 8080:localhost:8080 ssh-to-me@rethink1 IP If you go to localhost:8080 in your browser you will now see your RethinkDB web management console. If you receive a bind: Address already in use error, you are already using port 8080 on your computer. You can forward the web management console to a different port, one which is available on your computer. For example, you can forward it to port 8081 and go to localhost:8081: ssh -L 8081:localhost:8080 ssh-to-me@rethink1 IP If you see a conflict about having two test databases, you can rename one of them. 
In this setup, all of the RethinkDB servers (the web management console, driver port, and cluster port) are blocked off from the outside world. We can use an SSH tunnel to connect to the driver port, just like with the web management console. The driver port is how the RethinkDB API drivers (the ones you build applications with) connect to your cluster. First, pick a node to connect with. If you have multiple clients (e.g., web app servers) connecting to the cluster, you will want to balance them out across the cluster. It would be a good idea to write a list of your clients, then allocate a server for each client. Try to group the clients so clients that need similar tables connect to the same cloud server or group of servers and so no server becomes overloaded with lots of clients. In this example, we'll use rethink2 as our connecting server. However, in a larger system where your database and web app servers are separate, you'd want to do this from a web app server that's actually making database calls. Then, on the connecting server, generate an SSH key: ssh-keygen -t rsa And copy that to your clipboard: cat ~/.ssh/id_rsa.pub Then authorize the new key on the cluster node (in this example, rethink1) by opening the authorized_keys file and pasting the key on a new line: sudo nano /home/ssh-to-me/.ssh/authorized_keys Close nano and save the file (Ctrl-X, then Y, then Enter). Next, use SSH tunneling to access the driver port, from the connecting server: ssh -L 28015:localhost:28015 ssh-to-me@Cluster Node IP -f -N The driver is now accessible from localhost:28015. If you get a bind: Address already in use error, you can change the port. For example, use port 28016: ssh -L 28016:localhost:28015 ssh-to-me@Cluster Node IP -f -N Install the Python driver on the connecting server. There's a quick run-through of the commands here, and you can read about them in more detail in this tutorial.
Install the Python virtual environment: sudo apt-get install python-virtualenv Make the ~/rethink directory: cd ~ mkdir rethink Move into the directory and create the new virtual environment structure: cd rethink virtualenv venv Activate the environment (you must activate the environment every time before starting the Python interface, or you’ll get an error about missing modules): source venv/bin/activate Install the RethinkDB module: pip install rethinkdb Now start Python from the connecting server: python Connect to the database, making sure to replace 28015 with the port you used, if necessary: import rethinkdb as r r.connect("localhost", 28015).repl() Create the table test: r.db("test").table_create("test").run() Insert data into the table test: r.db("test").table("test").insert({"hello":"world"}).run() r.db("test").table("test").insert({"hello":"world number 2"}).run() And print out the data: list(r.db("test").table("test").run()) You should see output similar to the following: [{u'hello': u'world number 2', u'id': u'0203ba8b-390d-4483-901d-83988e6befa1'}, {u'hello': u'world', u'id': u'7d17cd96-0b03-4033-bf1a-75a59d405e63'}] In RethinkDB, you can configure a table to be sharded (split) across multiple cloud servers. Sharding is an easy way to have data sets larger than what fits in the RAM of a single machine perform well, since more RAM is available for caching. Since sharding also splits the dataset across multiple machines, you can have larger, low-performance tables, since more disk space is available to the table. This can be done through the Web Management Console. To do this, go to the Tables tab in the Web Management Console. Click on the test table (the one we created in the previous section) to enter its settings. Scroll down to the Sharding settings card. Click the Edit button. There, you can enter the number of servers to split the table over. Enter 2 for this example. Click the Rebalance button to save the setting. 
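Earlier, we noted that you should list your clients and balance them across the cluster's nodes. That bookkeeping can be sketched with a simple round-robin assignment; the client and node names below are only illustrative, and this doesn't account for grouping clients by the tables they use:

```python
from itertools import cycle

def allocate_clients(clients, nodes):
    """Round-robin assignment of client machines to cluster nodes,
    so no single RethinkDB server ends up with all the connections."""
    assignments = {}
    node_pool = cycle(nodes)  # endlessly repeat the node list
    for client in clients:
        assignments[client] = next(node_pool)
    return assignments

# Hypothetical web app servers, plus the two nodes from this tutorial
print(allocate_clients(["web1", "web2", "web3"], ["rethink1", "rethink2"]))
# {'web1': 'rethink1', 'web2': 'rethink2', 'web3': 'rethink1'}
```

Each client would then open its SSH tunnel and driver connection to its assigned node.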
You may notice that there is a maximum to the amount of shards you can have. This is equal to the number of documents in your database. If you are trying to set up sharding for a new table, you will either need to wait for more data or add dummy data to allow yourself to add more shards. Internally, RethinkDB has range-based shards, based on the document IDs. This means that if we have a dataset with the IDs A, B, C, and D, RethinkDB might split it into 2 shards: A, B (-infinity to C) and C, D (C to +infinity). If you were to insert a document with the ID A1, that would be within the first shard’s range (-infinity to C), so it would go in that shard. We can set the boundaries of the shards, which can optimize your database configuration. Before we can do that, we will want to add a dataset to play with. In the Data Explorer tab of the web management console we can create a table by running this command (click Run after typing): r.db('test').tableCreate('testShards') Then insert our test data: r.db("test").table("testShards").insert([ {id:"A"}, {id:"B"}, {id:"C"}, {id:"D"}, {id:"E"}]) Now we can configure the shards in detail. To do so, we need to enter the admin shell. The admin shell is a more advanced way of controlling your cluster, allowing you to fine-tune your set up. On the rethink1 machine, open the admin shell: rethinkdb admin Then we can view some information about our table: ls test.testShards Expected output: table 'testShards' b91fda27-a9f1-4aeb-bf6c-a7a4211fb674 ... 1 replica for 1 shard shard machine uuid name primary -inf-+inf 91d89c12-01c7-487f-b5c7-b2460d2da22e rethink1 yes In RethinkDB there are many ways to name a table. You can use database.name ( test.testShards), the name ( testShards) or the table uuid ( e780f2d2-1baf-4667-b725-b228c7869aab). These can all be used interchangeably. Let’s split this shard. 
We will make 2 shards: -infinity to C and C to +infinity: split shard test.testShards C The generic form of the command is: split shard TABLE SPLIT-POINT Running ls testShards again shows the shard has been split. You may want to move the new shard from one machine to another. For this example, we can pin (move) the shard -inf-C (-infinty to C) to the machine rethink2: pin shard test.testShards -inf-C --master rethink2 The generic form of the command is: pin shard TABLE SHARD-RANGE --master MACHINE-NAME If you ls testShards again, you should see that the shard has moved to a different server. We can also merge 2 shards if we know the common boundary. Let’s merge the shards we just made (-infinity to C and C to +infinity): merge shard test.testShards C The generic form of the command is: merge shard TABLE COMMON-BOUNDARY To exit the shell, type exit When a document is split over multiple machines, one machine will always hold its primary index. If the cloud server with the primary index for a particular document is taken offline, the document will be lost. So, before you remove a machine, you should migrate all the primary shards on it away from it. In this example, we will try to migrate the data off the node rethink2 to leave rethink1 as the sole node. Enter the RethinkDB admin shell on rethink1: rethinkdb admin First, let’s list the shards (groups of documents) that rethink2 is responsible for: ls rethink2 Your output should look something like this: machine 'rethink2' bc2113fc-efbb-4afc-a2ed-cbccb0c55897 in datacenter 00000000-0000-0000-0000-000000000000 hosting 1 replicas from 1 tables table name shard primary b91fda27-a9f1-4aeb-bf6c-a7a4211fb674 testShards -inf-+inf yes This shows that rethink2 is responsible for the primary shard of a table with the id “bdfceefb-5ebe-4ca6-b946-869178c51f93”. Next, we will move this shard to rethink1. 
This is referred to as pinning: pin shard test.testShards -inf-+inf --master rethink1 The generic form of the command is: pin shard TABLE SHARD-RANGE --master MACHINE-NAME If you now run ls rethink1 you will see that the shard has been moved to that machine. Once every shard has been moved from rethink2 to rethink1, you should exit the admin shell: exit It is now safe to stop RethinkDB on the unwanted server. Important: run this on the machine you want to remove. In this example, we will run this on rethink2: sudo service rethinkdb stop The next time you visit the web management console, RethinkDB will display a bright red warning. Click Resolve issues. If the issue shown looks like the one above, and no master responsibilities are listed, click Permanently remove. This will remove the machine from the cluster. If it lists any master responsibilities, turn RethinkDB back on ( sudo service rethinkdb start) and make sure you migrate every primary shard off that server. Note: If you try to re-add the machine to the cluster, you will see messages about a zombie machine. You can remove the data directory to rectify this issue: sudo rm -r /var/lib/rethinkdb/cluster_instance/ sudo service rethinkdb restart lose the primary of a shard, you lose the data of the shard only if you have no replica for the shard. As long as you have a replica, you can declare the permanently remove the master from the cluster, and a replica will be elected master. Can you explain cluster little bit more? let say you have 10 servers? What do you put on join for rest of 9 servers? just join rethink1 or you do for example on rehtink2: join rethink1 join rethink3 join rethink4 join rethink5 join rethink6 join rethink7 join rethink8 join rethink9 join rethink10 or on rehtink3: join rethink1 join rethink2 join rethink4 join rethink5 join rethink6 join rethink7 join rethink8 join rethink9 join rethink10 It would be great if you can answer this. what if rethink1 goes down? 
I see you covered a lot but you have not explain at all what happens if rehtink1 goes down in 2 node cluster setup?
https://www.digitalocean.com/community/tutorials/how-to-create-a-sharded-rethinkdb-cluster-on-ubuntu-14-04
String Pluralization in JavaScript Using Simplur. Handling plural/singular forms of nouns in English can be difficult in software. Using a library called Simplur provides you with a simple JavaScript utility for solving this problem! There are some problems in programming that originate from the human language. One of these problems is singular/plural nouns. A single potato is spelled "potato" but two/more is spelled "potatoes". How do you tackle this with JavaScript? There are several ways, but they're not particularly elegant… const shoppingCart = ['guitar', 'bicycle', 'shoes']; /* 1st approach */ const noun = shoppingCart.length >= 2 ? 'items' : 'item'; const text1 = `You have ${shoppingCart.length} ${noun} in your shopping cart`; // "You have 3 items in your shopping cart" /* 2nd approach */ const text2 = `You have ${shoppingCart.length} item(s) in your shopping cart`; // "You have 3 item(s) in your shopping cart" While these solutions work, a new library called Simplur has a really smart way to solve it 🌟 We're using English examples, but many languages exhibit this same problem where plural nouns are spelled differently than their singular forms. Using Simplur You can install simplur via npm: $ npm install simplur Here's how simplur is used: import simplur from 'simplur'; const breadCount = 12; const text = simplur`Get ${breadCount} loa[f|ves] of bread`; // "Get 12 loaves of bread" Simplur uses a template string that's tagged using the simplur function, and when the count is anything other than exactly 1 it uses the second noun form. Simple as that! Empty Singular Form Simplur also works with nouns that only add a suffix for their plural form (instead of changing a significant portion of the word): const shoppingCart = ['shoes']; const text = simplur`You have ${shoppingCart.length} item[|s] in your shopping cart`; // "You have 1 item in your shopping cart" You just need to omit the first form.
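The bracket syntax can be understood as a simple substitution rule: pick the first form when the count is exactly 1, the second form otherwise. Here is a language-agnostic sketch of that rule in Python - this is not simplur's actual implementation, just the behavior its examples show:

```python
import re

def pluralize(template, count):
    """Resolve "[singular|plural]" markers: the first form is used
    only when count is exactly 1, the second form otherwise."""
    pick = lambda m: m.group(1) if count == 1 else m.group(2)
    return re.sub(r'\[([^|\]]*)\|([^\]]*)\]', pick, template)

print(pluralize("Get 12 loa[f|ves] of bread", 12))  # Get 12 loaves of bread
print(pluralize("You have 1 item[|s]", 1))          # You have 1 item
```

Because the rule is applied to every marker in the string, it also handles several markers at once, which is what the "look ahead" examples below rely on.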
Multiple Noun Forms You can include several nouns and simplur will "look ahead". In English, our "demonstratives" like this/that/these/those can be easily handled this way: const chipmunks = ['alvin', 'simon', 'theodore']; const text = simplur`[That|Those] ${chipmunks.length} chipmunk[|s] [is|are] getting away!`; // "Those 3 chipmunks are getting away!" Conclusion The English language (and most languages) is not a precise instrument and this can lead to interesting programming problems. Hopefully you found simplur useful for your JavaScript projects!
https://www.digitalocean.com/community/tutorials/js-simplur
Generally, in Swift we use the playground interface for programming. For that we need to install the Xcode editor, Apple's free built-in editor, which is used for development on Apple platforms such as macOS software and iPhone apps. Once we are familiar with Swift concepts, we can use the Xcode IDE to develop applications for any Apple platform. The basic requirement for installing the Xcode editor is an Apple laptop or Mac computer, which we need for iOS application development using Swift. We can download the Xcode editor from the Apple website or the Apple App Store, and the editor is completely free; we don't need to pay Apple anything. In case you want to download Xcode from the App Store, open the App Store on the system, search for "Xcode" in the search bar, and download it. The following screenshot shows how to search for Xcode and download it from the App Store. In case you want to download Xcode from the website, we need to sign up for the Apple Developer Program, which is free, and download the Xcode editor - but remember, when we go to publish our application, Apple will charge $99. Once we sign up on the Apple website with an Apple account, go to the Downloads section and download the Xcode editor. Use this Download Xcode Editor URL to log in and download the Xcode editor. If you log in to the Apple website using the above URL and go to the Downloads section, you will find the Xcode editor to download, as shown below. After downloading, click on the Xcode.dmg file, then just drag Xcode and drop it into the Applications folder, as shown below; the Xcode installation will take 2 to 3 minutes. Now we will see how to write a simple "Hello World" program in Swift with an example. For that, go to Search, which is on the top right side of the menu bar --> click on it, search for "Xcode", and open it as shown below.
Once we open Xcode, the welcome window will open as shown below. In the welcome window, click on the first option, "Get started with a playground", to create a project in Swift. After selecting "Get started with a playground", a new window will open in which we need to give the project name as "Hello World", select the platform as "iOS", and click Next, as shown below. Once we click the Next button, a new dialog will open in which we need to select the location to save our project. Once you select the location to save the project, click the Create button, as shown below. Once we click the Create button, a new project is created and the playground window opens, as shown below. Generally, the Swift playground window contains both input and output windows side by side, as shown above. Following is the simple "Hello World" code written in the Swift playground (left-hand side).

import UIKit

print("Hello World")

The above program will display the result as shown below (right-hand side). This is how we can use the Swift playground to write and execute programs based on our requirements.
https://www.tutlane.com/tutorial/swift/swift-development-environment-setup
How to act on an HTTP POST redirect

#include <curl/curl.h>

CURLcode curl_easy_setopt(CURL *handle, CURLOPT_POSTREDIR, long bitmask);

Pass a bitmask to control how libcurl acts on redirects after POSTs that get a 301, 302 or 303 response back. A parameter with bit 0 set (value CURL_REDIR_POST_301) tells the library to respect RFC 2616/10.3.2 and not convert POST requests into GET requests when following a 301 redirect. Setting bit 1 (value CURL_REDIR_POST_302) makes libcurl maintain the request method after a 302 redirect, and setting bit 2 (value CURL_REDIR_POST_303) does the same after a 303 response. The non-RFC behaviour of converting such requests to GET is ubiquitous in web browsers, so the library does the conversion by default to maintain consistency. However, a server may require a POST to remain a POST after such a redirection.

This option is meaningful only when setting CURLOPT_FOLLOWLOCATION(3).

Default: 0

Protocols: HTTP

Availability: Added in 7.17.1. This option was known as CURLOPT_POST301 up to 7.19.0 as it only supported the 301 then.

Return value: Returns CURLE_OK if the option is supported, and CURLE_UNKNOWN_OPTION if not.
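The bitmask semantics can be illustrated with a small sketch. The bit values below follow the CURL_REDIR_POST_* defines in curl/curl.h (bit 0 for 301, bit 1 for 302, bit 2 for 303); the sketch is plain Python for illustration, not libcurl code:

```python
# Bitmask values matching the CURL_REDIR_POST_* defines in curl/curl.h
CURL_REDIR_POST_301 = 1 << 0
CURL_REDIR_POST_302 = 1 << 1
CURL_REDIR_POST_303 = 1 << 2
CURL_REDIR_POST_ALL = (CURL_REDIR_POST_301 |
                       CURL_REDIR_POST_302 |
                       CURL_REDIR_POST_303)

def keeps_post(bitmask, status):
    """True if a POST stays a POST after the given redirect status;
    otherwise libcurl would convert it to a GET (the browser-like default)."""
    bit = {301: CURL_REDIR_POST_301,
           302: CURL_REDIR_POST_302,
           303: CURL_REDIR_POST_303}[status]
    return bool(bitmask & bit)

print(keeps_post(CURL_REDIR_POST_301, 301))  # True
print(keeps_post(CURL_REDIR_POST_301, 302))  # False: converted to GET
print(keeps_post(CURL_REDIR_POST_ALL, 303))  # True
```

A bitmask of 0 (the default) keeps none of the bits set, so every POST redirect is converted to GET.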
https://www.carta.tech/man-pages/man3/CURLOPT_POSTREDIR.3.html
We are pleased to announce the release of DOMIT! 1.1. This release includes better error handling and logging, and a default error mode whereby DOMIT RSS does not die on an error.

After months of development, DOMIT! version 1.0 is now available. With this release come many speed and feature enhancements, many bugfixes, improved namespace support, experimental XPath support, and new documentation in DocBook format. For more information, visit the DOMIT! homepage at:

DOMIT! 0.98 is now available. This release fixes a critical bug in the loadXML method of DOMIT!_Document. DOMIT! 0.98 users should update immediately. For more information, please visit the DOMIT! home page at:

We are pleased to announce the release of DOMIT! version 0.98. This version brings PHP5 compatibility, HTTP / proxy connections and authentication, and several new helper methods. For more information, please visit the DOMIT! home page at:

We are pleased to announce the release of DOMIT! 0.97. This release upgrades SAXY to version 0.86. For more information, please visit the DOMIT! home page at:

We are pleased to announce the release of DOMIT! version 0.96. This release implements namespace support, fixes several minor bugs, enables SAX error messages to be accessed through the DOM Document, and modifies the expandEmptyElementTags method so that exceptions to the expansion rule can be made. For more information, please visit the DOMIT! homepage at:

We are pleased to announce that DOMIT! version 0.95 has been released. This version comprises a number of bug fixes and helper methods suggested by users. Some of the class files have been reorganized and phpDocumentor style comments have been added. The DOMIT! license has been changed from GPL to LGPL, to allow incorporation with closed source projects. For more information, please visit the DOMIT! homepage at

This release fixes a problem in converting to entities when saving an xml document.

We are pleased to announce the release of DOMIT! 0.93.
This version incorporates a number of minor bug fixes.

Version 0.92 of DOMIT! fixes several critical bugs in the parsing of text nodes.

DOMIT! version 0.91 comprises several small modifications, with the goal of better cross-platform availability and standards compliance.

We are pleased to announce the release of DOMIT! version 0.9. This release brings an overall cleanup of code and minor bug fixes, as well as a number of new methods for convenient display of the document structure. DOMIT! Lite, an abridged version of DOMIT! that has been optimized for speed, is included with the package. A new online testing suite is included in this release and can also be found online at:... read more

We are pleased to announce the release of DOMIT! version 0.8. The current version moves closer to DOM Level 1 compliance, with a number of enhancements in the parsing and representation of the XML prolog. Some convenience methods have also been added for handling non-ASCII characters through UTF-8 encoding, string normalization has been improved, performance optimizations have been introduced, and problems with the cloneNode method have been resolved. ... read more

Version 0.7 brings DOMIT! to nearly complete DOM Level 1 compliance. It has been overhauled architecturally and sports much cleaner code, including better error handling. DOMIT! now handles DocumentFragments, CharacterData, Comments, ProcessingInstructions, DOMImplementation, xml declarations, and doctypes. Attributes are now implemented as Attr objects in a NamedNodeMap, as opposed to an associative PHP array. Many of the more esoteric DOM methods are now supported. ... read more

I'm pleased to announce that the version 0.6 release of DOMIT! brings a number of significant optimizations to the parser. Although its featureset has not changed from the last version, you should upgrade to 0.6 immediately!

DOMIT! version 0.52 has been released.
This fixes a bug in the DOMIT_NodeList class, in which appendNode and removeNode were returning a shallow copy of DOMIT_Node. A new DOMIT! tutorial has been added: "OOP Techniques for Modifying an XML Document: Part 1". Some minor revisions to the old DOMIT! tutorial have been made.

Version 0.51 of DOMIT! includes an updated version of SAXY, which allows CDATASections to be parsed properly. Upgrade to this version if you are currently using version 0.5 and experiencing problems.

DOMIT! version 0.5 includes a number of significant changes.
- DOMIT_Parser (and SAXY) now preserve CDATASection nodes by default when parsing an XML string. Note that this functionality will not work if you use the Expat option.
- The getNamedElement method of DOMIT_Node was renamed to getNamedElements.
- getElementsByTagName is now a method of DOMIT_Element in addition to DOMIT_Document, as defined by the DOM spec... read more

DOMIT! version 0.41 has been released. This is a minor release which updates the SAXY parser that is bundled with DOMIT!. SAXY now correctly parses equal signs and tabs within attributes.

DOMIT! version 0.4 has been released. This release provides several new methods for the safe manipulation of attributes. New features:
- getAttribute($name) - returns the value of the named attribute
- setAttribute($name, $value) - sets the named attribute to the specified value
- removeAttribute($name) - removes the named attribute
- hasAttribute($name) - determines whether the specified attribute exists... read more

DOMIT! version 0.3 has been released. This release includes:
- several new convenience methods for accessing DOM Document data
- improvement upon several existing methods, and
- a documentation update.
New features:
- getNodesByNodeType - returns an array of nodes of a specified nodeType (e.g., all text nodes)
- getNodesByNodeValue - returns an array of nodes of a specified value (e.g., all text nodes with a value of "This is a test")... read more

DOMIT! version 0.2 has been released.
This release adds several new utility methods and minor bug fixes. New features:
- a getElementsByPath method, which allows access to element nodes using "path"-like expressions
- a loadXML method, which loads an XML file from the specified path
- a saveXML method, which saves a DOM document to the specified XML file
- a parsedBy method, which informs you whether parsing occurred in conjunction with either the Expat or SAXY parser. ... read more

DOMIT! version 0.11 has been released. This fixes a minor bug in which "/" characters in Attribute nodes were mistaken for end-tag characters.

We are pleased to announce the first release of DOMIT!, an XML parser for PHP 4. DOMIT! is lightweight, fast, and implements a generous subset of DOM Level 1. ... read more
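DOMIT! itself is PHP, but the "path"-like element access that getElementsByPath is described as providing can be sketched with Python's standard-library ElementTree as an analogue (this is not DOMIT!'s API, and the XML snippet is invented for illustration):

```python
import xml.etree.ElementTree as ET

# A small, made-up document standing in for a parsed DOM tree.
doc = ET.fromstring(
    "<library>"
    "<book><title>First</title></book>"
    "<book><title>Second</title></book>"
    "</library>"
)

# findall accepts a simple path expression, much like the
# "path"-like access described for getElementsByPath.
titles = [t.text for t in doc.findall("book/title")]
print(titles)  # ['First', 'Second']
```

The same idea — addressing elements by a slash-separated path instead of walking the tree node by node — is what the changelog entry above advertises.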
http://sourceforge.net/p/domit-xmlparser/news/?source=navbar
CC-MAIN-2015-11
refinedweb
1,071
59.6
cylp 0.2.3.6

A unique feature of CyLP is that you can use it to alter the solution process of the solvers from within Python. For example, you may define cut generators, branch-and-bound strategies, and primal/dual Simplex pivot rules completely in Python. You may read your LP from an mps file or use the CyL.

Who uses CyLP
CyLP is being used in a wide range of practical and research fields. Some of the users include:
- PyArt, the Python ARM Radar Toolkit, used by Atmospheric Radiation Measurement (U.S. Department of Energy).
- Meteorological Institute, University of Bonn.
- Sherbrooke university hospital (Centre hospitalier universitaire de Sherbrooke): CyLP is used for nurse scheduling.
- Maisonneuve-Rosemont hospital (L'hopital HMR): CyLP is used for physician scheduling with preferences.
- Lehigh University: CyLP is used to teach mixed-integer cuts.
- IBM T. J. Watson Research Center
- Saarland University, Germany

Installation
The easiest way to install CyLP is by using the binaries. If that's not possible you may always compile it from source.

Requirements
CyLP needs NumPy () and SciPy (). If you wish to install CyLP from source, you will also need to compile Cbc. Details of this process are given below.

Binary Installation
If you have setuptools installed you may run:
$ easy_install cylp
If a binary is available for your architecture it will be installed. Otherwise you will see an error telling you to specify where to find a Cbc installation. That's because easy_install is trying to compile the source. In this case you'll have to compile Cbc and set an environment variable to point to it before calling easy_install again. The details are given in the Installing from source section.

Installing from source
- Optional step: If you want to run the doctests (i.e. make doctest in the doc directory) you should also define:
$ export CYLP_SOURCE_DIR=/Path/to/cylp

Now you can use CyLP in your python code.
For example:

>>> from cylp.cy import CyClpSimplex
>>> s = CyClpSimplex()
>>> s.readMps('../input/netlib/adlittle.mps')
0
>>> s.initialSolve()
'optimal'
>>> round(s.objectiveValue, 3)
225494.963

Or simply go to CyLP and run:
$ python -m unittest discover
to run all CyLP unit tests.

Modeling Example
Here is an example of how to model with CyLP's modeling facility:

import numpy as np
from cylp.cy import CyClpSimplex
from cylp.py.modeling.CyLPModel import CyLPArray

s = CyClpSimplex()

# Add variables
x = s.addVariable('x', 3)
y = s.addVariable('y', 2)

# Create coefficients and bounds
A = np.matrix([[1., 2., 0], [1., 0, 1.]])
B = np.matrix([[1., 0, 0], [0, 0, 1.]])
D = np.matrix([[1., 2.], [0, 1]])
a = CyLPArray([5, 2.5])
b = CyLPArray([4.2, 3])
x_u = CyLPArray([2., 3.5])

# Add constraints
s += A * x <= a
s += 2 <= B * x + D * y <= b
s += y >= 0
s += 1.1 <= x[1:3] <= x_u

# Set the objective function
c = CyLPArray([1., -2., 3.])
s.objective = c * x + 2 * y.sum()

# Solve using primal Simplex
s.primal()
print s.primalVariableSolution['x']

Documentation
You may access CyLP's documentation:
- Online: Please visit
- Offline: To install CyLP's documentation in your repository, you need Sphinx (). You can generate the documentation by going to cylp/doc and running make html or make latex, and access the documentation under cylp/doc/build. You can also run make doctest to perform all the doctests.
- Author: Mehdi Towhidi (mehdi.towhidi@gerad.ca), Dominique Orban (dominique.orban@gerad.
- Package Index Owner: mpy, Ted.Ralphs
- Package Index Maintainer: Ted.Ralphs, dorban
- DOAP record: cylp-0.2.3.6.xml
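The modeling example above requires CyLP and a compiled Cbc to run. As a rough, hypothetical analogue for readers without those installed, the same kind of problem (linear objective, linear inequality constraints, variable bounds) can be stated with SciPy's linprog — a different solver entirely, not part of CyLP, and the tiny LP here is invented for illustration:

```python
from scipy.optimize import linprog

# Hypothetical problem: minimize x + 2y subject to x + y >= 1, x >= 0, y >= 0.
# linprog takes <= constraints, so x + y >= 1 is rewritten as -x - y <= -1.
res = linprog(
    c=[1, 2],
    A_ub=[[-1, -1]],
    b_ub=[-1],
    bounds=[(0, None), (0, None)],
)

print(res.x)    # optimal point, approximately [1, 0]
print(res.fun)  # optimal objective value, approximately 1.0
```

CyLP's operator-overloading style (`s += A * x <= a`) is more expressive than linprog's matrix form, which is part of the package's pitch; this sketch only shows the underlying LP shape.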
https://pypi.python.org/pypi/cylp/0.2.3.6
CC-MAIN-2017-04
refinedweb
589
61.02
import cv2 error in python

I have installed OpenCV 3.4.0 with both Python 2.7 and Python 3.5 bindings. I am able to import and use OpenCV successfully in Python 2 but not Python 3. I get the following error:

ImportError: /usr/local/lib/libopencv_text.so.3.4: undefined symbol: _ZNK2cv3dnn19experimental_dnn_v33Net14getLayerShapesERKSt6vectorIiSaIiEEiRS3_IS5_SaIS5_EESA

This worked transiently until I had to upgrade cuDNN for CUDA compatibility purposes. I have cleaned, uninstalled and reinstalled OpenCV, but the issue is not resolved. My additional setup is as follows:
- Ubuntu 16.04
- cuDNN 7.6.1 (I have tried with 7.6.3 as well)
- CUDA 10.0

I am limited to keeping the above versions because of other dependencies. Thanks in advance.
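Not part of the original question, but a first diagnostic step for an undefined-symbol ImportError like the one above is to confirm which extension file each interpreter would actually load — a stale build left on one interpreter's path is a common cause. A sketch using only the standard library (the module name cv2 is taken from the question; any other module name works the same way):

```python
import importlib.util

def module_origin(name):
    """Return the file path Python would import for `name`, or None if absent."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec is not None else None

# Run this under both python2-style and python3 environments to see whether
# they resolve "cv2" to the same (possibly stale) shared object.
print(module_origin("cv2"))
```

If the two interpreters report different paths, or the path points into an old install prefix such as /usr/local/lib, that narrows down which copy needs to be rebuilt or removed.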
https://answers.opencv.org/question/218333/import-cv2-error-in-python/
CC-MAIN-2019-43
refinedweb
116
55.2
I have two tabs. I have an input (type number) on each one. I have no callbacks. When I put a number in one of the inputs it carries over to the other input. A heavily simplified piece of the code is below:

import dash_html_components as html
import dash_core_components as dcc
import dash

app = dash.Dash()
app.scripts.config.serve_locally = True

app.layout = html.Div([
    dcc.Tabs(
        children = [
            dcc.Tab(
                children = [
                    dcc.Input(
                        id = "input_1",
                        type = "number",
                        min = 0,
                    ),
                ],
                label = "thing_1",
            ),
            dcc.Tab(
                children = [
                    dcc.Input(
                        id = "input_2",
                        type = "number",
                        min = 0,
                    ),
                ],
                label = "thing_2",
            ),
        ]
    ),
])

if __name__ == "__main__":
    app.run_server(debug=True)

Running this, one should see that inputting a number into the input (id = "input_1") on tab thing_1 will also fill in the input (id = "input_2") on tab thing_2. Due to my lack of knowledge of the inner workings of Dash I'm not quite sure what I need to do to fix this. It would be great if someone could offer some assistance. Many thanks
https://community.plotly.com/t/tab-change-carries-over-input-values/19247
CC-MAIN-2021-49
refinedweb
164
69.18
NAME
Config::IniFiles - A module for reading .ini-style configuration files.

SYNOPSIS
use Config::IniFiles;
my $cfg = new Config::IniFiles( -file => "/path/configfile.ini" );
print "We have parm " . $cfg->val( 'Section', 'Parameter' ) . "." if $cfg->val( 'Section', 'Parameter' );

The first non-blank character of the line indicating a section must be a left bracket, and the last non-blank character must be a right bracket. Lines that begin with either of the comment characters ('#' or ';') will be ignored. Any amount of whitespace may precede the comment character. Multi-line values are also supported, via the EOT markers described below.

USAGE -- Object Interface
One Config::IniFiles object is required per configuration file. The following named parameters are available:

*-file* filename
Specifies a file to load the parameters from. If this option is not specified, (ie: ...)

val ($section, $parameter)
Returns the value of the specified parameter ("$parameter") in section "$section"; returns undef if no section or no parameter for the given section exists. If you want a multi-line/value field returned as an array, just specify an array as the receiver:
@values = $cfg->val('Section', 'Parameter');

Parameters ($section)
Returns an array containing the parameters contained in the specified section.

GroupMembers ($group)
Returns an array containing the members of the specified $group. Each element of the array is a section name. For example, given the sections

[Group Element 1]
...
[Group Element 2]
...

GroupMembers would return ("Group Element 1", "Group Element 2").

WriteConfig ($filename)
Writes out a new copy of the configuration file. A temporary file (ending in .new) is written out and then renamed to the specified filename. Also see BUGS below.

RewriteConfig
Same as WriteConfig, but specifies that the original configuration file should be rewritten.

SetSectionComment($section, @comment)
Sets the comment for section $section to the lines contained in @comment. Each comment line will be prepended with "#" if it doesn't already have a comment character (ie: if $line !~ m/^[#;]/).

GetParameterComment ($section, $parameter)
Gets the comment attached to a parameter.

DeleteParameterComment ($section, $parameter)
DeleteParameterEOT ($section, $parameter)
Removes the EOT marker for the given section and parameter. When writing a configuration file, if no EOT marker is defined then "EOT" is used.

DeleteSection ( $section_name )
Completely removes the entire section from the configuration.

The object's internal storage is laid out as follows:
->{sCMT}{$section} = \@comment_lines
->{group}{$group} = \@group_members
->{parms}{$section} = \@section_parms
->{EOT}{$sect}{$parm} = "end of text string"
->{pCMT}{$section}{$parm} = \@comment_lines
->{v}{$section}{$parm} = $value OR \@values

AUTHOR and ACKNOWLEDGEMENTS
The original code was written by Scott Hutton. It has since been taken over by Rich Bowen, Jeremy Wadsack, Daniel Winkelmann, Pires Claudio, and Adrian Phillips. Geez, that's a lot of people. And apologies to the folks I missed. If you want someone to bug about this, that would be: Rich Bowen <rbowen at rcbow.

Change log
$Log: not supported by cvs2svn $.
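Config::IniFiles is Perl, but the .ini structure it documents — bracketed section names, per-section parameters, and "grouped" sections like [Group Element 1] — can be illustrated with Python's standard configparser. The GroupMembers-style lookup below is a hand-rolled sketch, not part of either library's API, and the ini text is invented for illustration:

```python
import configparser

INI = """
[Group Element 1]
param = one

[Group Element 2]
param = two
"""

cfg = configparser.ConfigParser()
cfg.read_string(INI)

# Analogue of val($section, $parameter): fetch one parameter from one section.
print(cfg.get("Group Element 1", "param"))  # one

# Analogue of GroupMembers($group): sections whose name starts with the group word.
members = [s for s in cfg.sections() if s.startswith("Group")]
print(members)  # ['Group Element 1', 'Group Element 2']
```

In Config::IniFiles the grouping is built in (the part of the section name before the first space is the group); here it is approximated with a simple prefix test.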
https://bitbucket.org/shlomif/perl-config-inifiles/src/48335bf02fb5/config-inifiles/
CC-MAIN-2015-11
refinedweb
423
51.75
REBOOT(2) BSD Programmer's Manual REBOOT(2)

NAME
reboot - reboot system or halt processor

SYNOPSIS
#include <unistd.h>
#include <sys/reboot.h>

int reboot(int howto);

DESCRIPTION
reboot() reboots the system. Only the superuser may reboot a machine. By default, the system reboots from the file ``bsd'' in the root file system of unit 0 of a disk chosen in a processor-specific way — that is, from ``xx(0,0)bsd'', where xx is the default disk name, without prompting for the file name. An automatic consistency check of the disks is normally performed (see fsck(8)).

The howto argument is a mask of options, including:

RB_POWERDOWN
If used in conjunction with RB_HALT, and if the system hardware supports the function, the system will be powered off.

RB_USERREQ
By default, the system will halt if reboot() is called during startup (before the system has finished autoconfiguration), even if RB_HALT is not specified. This is because panic(9)s during startup will probably just repeat on the next boot. Use of this option implies that the user has requested the action specified (for example, using the ddb(4) boot reboot command), so the system will reboot if a halt is not explicitly requested.

RETURN VALUES
If successful, this call never returns. Otherwise, -1 is returned and an error is returned in the global variable errno.

ERRORS
[EPERM] The caller is not the superuser.

SEE ALSO
ddb(4), crash(8), halt(8), init(8), reboot(8), savecore(8), boot(9), panic(9)

HISTORY
The reboot() function call appeared in 4.0BSD.

BUGS
Not all platforms support all possible arguments. For example, RB_POWERDOWN is supported only on the i386, sparc, and mac68k platforms.
http://mirbsd.mirsolutions.de/htman/sparc/man2/reboot.htm
crawl-003
refinedweb
249
56.96
IMUA ‘IOLANI | A Voice for Students since 1923 | December 17, 2012 | Honolulu, Hawai‘i | Volume 88, Issue 3

‘Iolani increases concussion awareness and protection
By Amy Nakamura

When volleyball player Aloha Cerit ‘18 dove for a ball, she collided with another player and struck the floor headfirst. She suffered a concussion. “After getting the concussion, I felt like I was in a daze,” Cerit recalled. “I wasn’t aware of everything that was going on around me. I continued to have symptoms for three weeks, and I had trouble with schoolwork. I had to read every letter of every word in every sentence, just like how you do when you’re learning how to read. I had a hard time processing words and focusing on my assignments.”

This year, for the first time, ‘Iolani required all students and parents to sign and return a Concussion Awareness form during the first weeks of school. ‘Iolani enhanced its Concussion Management Program in response to the National Federation of State High School Association (NFHSA) rule change requiring that athletes who suffer concussion-like symptoms withdraw from sports participation. Many ‘Iolani athletes suffered concussions this past fall season.

According to Ms. Louise Inafuku, ‘Iolani athletic trainer, “A concussion is when your body is moving at a fast speed and when you suddenly stop, your brain, which is still moving, hits the inside of your skull. This causes your brain and head to rapidly move back and forth.”

Having a concussion is not fun. Athletes who have a concussion experience symptoms of dizziness, nausea, fatigue, headaches, and moodiness. In order to return to athletic activity, athletes must receive clearance from the school’s trainers by completing three tasks. First, they must take a written Primary Care Physician (PCP) clearance test. If an athlete’s PCP test diagnoses them with a concussion, athletes must take an Immediate Post-Concussion Assessment test (ImPACT) before returning to school. ImPACT evaluates an athlete’s physical abilities post-concussion. It measures symptoms, visual and verbal memory, and reaction time. Next, they must obtain a written neuropsychologist clearance including scores from the ImPACT. Finally, athletes must successfully complete the Return to Activity Plan (RAP). ‘Iolani’s athletic trainers and other professionals determine whether the athlete is ready to start the RAP, which involves seven steps to help athletes regain full participation in their sport. These steps include complete cognitive rest, light exercise, running, weight training, and finally, returning to school and sports.

Statistically, football causes the most concussions. So far this year at ‘Iolani, however, cheerleaders and volleyball players have received the most concussions. Concussions can occur in several ways. In sports one may be hit in the head by a ball, or may fall and hit the ground headfirst. But, according to the ‘Iolani Athletic Training Room website, playing sports isn’t the only way to shake up the brain. Car accidents, certain playground activities, and other rapid physical movements can also cause concussions. Students who attempt to continue their schoolwork or sports with an untreated concussion may fall behind in school, or perform less well in sports. Students who think they may have a concussion or who want to know more about how to protect themselves from concussions should talk to a doctor, teacher, trainer or parent.

CarrieAnn Randolph | Imua Iolani: ‘Iolani athletic trainer Ms. Louise Inafuku treats Max Maneafaiga ‘13, who is recovering from a concussion he received while wrestling.

‘Iolani junior selected to be Senate page
By David Pang

Matthew Beattie-Callahan ‘14 will serve as a page in the U.S. Senate for the spring semester of 2013. Beattie-Callahan is one of only 30 juniors from across the U.S. chosen for the prestigious program. After applying for the position earlier this year on Sen. Daniel Inouye’s website, Beattie-Callahan received word of his appointment in early December. “I’m looking forward to having the opportunity to gain first-hand experience and knowledge of our country’s political system,” Beattie-Callahan said. For the full article, go to imuaonline.org.

Index: Editorials--3, Features--1 & 2, Lighter Side--4, Sports--3

‘58 classmates live on as characters in Sakamoto plays
By David Pang

David Pang | Imua Iolani: Edward Sakamoto ‘58, author of the play Fishing for Wives, visited Mrs. Lee Cataluna’s Creative Writing class to teach the students better playwriting techniques and to share lessons from his life.

Edward Sakamoto ‘58 is no stranger to the arts. Not by a long shot. For over 50 years, Sakamoto has established himself as one of the most prolific playwrights in Hawaii, writing 19 plays thus far. He is a retired editor from the Los Angeles Times and a recipient of the Po‘okela playwriting awards for his plays Aloha Las Vegas and Our Hearts Were Touched With Fire. In 1997, then-Governor Ben Cayetano presented Sakamoto with the Hawaii Award for Literature, the highest honor for a writer in the state. Since then, Sakamoto’s plays have expanded beyond the islands to venues such as East West Players in Los Angeles. His newest play, Fishing for Wives, a story of two fishermen with women problems, premiered last month at Honolulu’s Kumu Kahua Theatre, where it was sold out every night for five weeks. The play is headed for New York, where it will open in 2014 at the prestigious Pan Asian Repertory Theatre. While in Hawaii for the opening of his latest play, Sakamoto visited Creative Writing students in Mrs. Lee Cataluna’s class to speak about his experiences and life lessons. He said that he may have gained his inspiration for playwriting as early as his freshman year at ‘Iolani when he rewrote the ending to “Treasure Island.” His teacher read it aloud to the class and gave him an A+.
Needless to say, Sakamoto’s best subject was English. When asked what his worst subjects were, he responded, “chemistry, physics and geometry.” Spirited and willing to answer questions, Sakamoto revealed that he neither writes outlines nor uses notes when he drafts his plays. Instead he writes many drafts, often putting in themes and adding character development as the drafts progress. However, he never reveals his plays to anyone until he is finished because it “dissipates the creative energy.” One of the most interesting aspects of Sakamoto’s plays is that he names his characters after his ‘Iolani classmates. He reports that they have a good laugh about it while watching the play. However, Sakamoto is well-mannered and makes “a point to not make my classmates bad characters.” When asked if he has ever named a bad character after anyone, he replied that he has never done so. Sakamoto’s visit offered the class a valuable insight on playwriting from one of the most famous playwrights in Hawaii. Marissa Uyemura ‘13, a student in the Creative Writing class, said, “It was a good experience for us to meet Mr. Sakamoto because he is a successful playwright who graduated from ‘Iolani. I think that it’s good for students to see alumni who are successful in professions outside of math and science.”

Features | Page 2

Pain, from pinky to thumb
By Micah Goshi
David Pang | Imua Iolani

Each finger has its own story, and can be categorized from simply bad ideas to occurrences that I still cannot explain. While pressing the keyboard keys sends a sharp pain up my hand due to my latest injury, I will start this essay of pain with the most common way of hurting my fingers: volleyball. My left ring, pinkie, and pointer fingers and my right middle and ring fingers have all met the blunt force of a volleyball. The culprits consist of an all-state player, my teammate Tahi, and two female JV players.
Four of the five stories include minor accidents with gruesome results, such as tendons ripping forty-five degrees in the wrong direction. However, the fifth one is just embarrassing. During my sophomore year, I was managing the girls JV team by shagging balls before a game. Just as I caught a ball, another one came and hit the tip of my pinkie straight on, causing a minor dislocation. Looking up from my throbbing hand, I saw that the ball had been hit by the tiny libero, five-foot, eighty-five-pound Joie Wakabayashi ‘13. Since then, I have never been able to bend my pinkie correctly.

Following the common injuries are the “what are the chances” types. Two years ago at the Family Fair, I played in the annual futsal tournament. Coach Mike Among was upset at me and the other players for playing a game that could have hurt us during the volleyball season. Thinking that an injury would be unlikely, we played anyways. I was the goalie at the time, and for good measure, I asked my parents to buy me gloves. However, just as my mom entered Kozuki Stadium with my gloves, Tristan Medios-Simon ‘13 kicked a soccer ball directly into my thumb, jamming it so hard that I had to tape it for the rest of the varsity volleyball season.

The spraining of my other thumb was also the result of an unfortunate turn of events. Last year my youth group was on Lana’i, standing on the hot sand and laughing at a sign that read, “Warning: Dangerous Shore Break.” At the time, the waves were only about three feet high, but an hour later, six-footers started to roll in. After I dove into the ocean, I heard my friend call out, “Party Wave!” There came an eight-and-a-half-foot wave for us all to enjoy. I started swimming as fast as I could and was picked up by the surge. It looked as if we were all flying eight feet high. However, that feeling only lasted for a short while because the wave dropped me face first into the sand and pushed my legs over my head, making a backwards U-shape.
After I flipped several times in the sand, my thumb caught the ground and bent forward, resulting in paralysis for the next five minutes.

I would consider my right pointer and pinkie injuries as part of Lower School mayhem. Both occurred during the sixth grade but were completely different. My pointer was simply stepped on and twisted at a funny angle, while my pinkie was sprained because my basketball team had a contest to see who could dribble the ball the longest with their pinkie.

Although all nine of these injuries have amusing stories, the story of how I cut open my left middle finger is the strangest. The culprits were a ceramic toilet cover and my own curiosity. My friends and I were vacationing in a house on Maui. After a day of relaxation, I found that the toilet was not working. At first, I was afraid that I had clogged it, but then I realized that it could not have been the case because I had only made shi shi. As I opened the top, I felt a pinch and immediately let go of the cover. Blood started to flow out of the cut like Coca-Cola and Mentos. To this day, that was the most blood I have ever lost, and I have a toilet to blame.

Although I’ve hurt all my fingers here in Hawaii, I will soon be exploring a new world with many more ways to re-injure all my fingers again. Perhaps I will make it my goal to keep myself from having closed fists until the day I graduate from college, though I highly doubt it.

Packs hefty issue until iPad arrival
Winter Dance Showcase canceled for ‘higher calling’ performance

By Max Wei

Bag-heavy seventh graders are a common sight in Castle hallways. Their posture plummets as a result of the copious textbooks on their backs. Which raises the question: exactly how much do those bulging backpacks weigh? 10 seventh graders sitting around the Dr. Sun Yat-Sen statue offered up their backpacks for measurement. The average weight of their backpacks was a whopping 16.38 pounds.
The weight of the bags ranged from 3.4 pounds to 23.6 pounds. According to the American Academy of Pediatrics, a schoolbag should ideally weigh no more than 10 percent of a child’s weight. Students’ troubles with their lockers contribute even more to the stereotypical “seventh grade turtle shell of textbooks.” Here are some tips for lightening the load: - Practice opening the combination lock: by the time you reach eighth grade, you can perhaps open lockers in By Lauren Yamaguchi Instead of performing for family and friends this year at the 28th annual Winter Dance Showcase, the ‘Iolani dancers put on a show at the Christmas party for Family Programs Hawaii. A nonprofit agency that provides quality care and statewide services to more than 4,000 children and families involved in Hawaii’s child welfare system, Family Programs Hawaii first approached ‘Iolani dance teacher Cerene Okimura three months ago and asked for hula dancers to entertain the guests at the party. Because the party was on the same day as the Winter Dance Showcase, Okimura initially declined the offer. However, after discussing the offer with hula teachers Lehua Carvalho and Sean Nakayama, Okimura decided that, “There had to be a higher calling ... Students can sometimes get wrapped up with tests, grades, extracurricular activities, and trying to fill their col- Max Wei | Imua Iolani Kevan Elias ‘18 lugs his heavy backpack across campus. less than 10 seconds. - Have a cycling plan: make book groups to grab and swap out. Switching out two classes’ worth of books is easiest. A lighter bag means you can run to class faster. - Carry textbooks by hand: redistributing the weight makes walking easier. - Sit down outside your locker and wait for the iPads to come next year. lege résumé. That can be self-consuming. There has to be something more to perform for.” With that said, Okimura canceled the Dance Showcase, and ‘Iolani dancers helped out Family Programs Hawaii. 
Hula 2K dancer Bailey Sylvester ‘15 said that the performance was “great and charitable because of how good it is to give back to the community.” Dance 4 senior Jamie Lee stated that the performance was a “good opportunity to help, especially through dance because it’s something we love doing.” To add to the Christmas spirit, the ‘Iolani Key Club, alumni, and parents united with the dancers to produce 1,100 goodie bags containing homemade cookies, candy, toothbrushes, and toothpaste to give to the 1,000 children attending the Christmas party. Although there is no Winter Dance Showcase this year, the dancers will still perform their pieces, excluding the Christmas songs, next January. Christmas in Germany By Alanna Simao Over this Christmas break, the ‘Iolani Stage Bands will head to Germany, the Czech Republic, and Switzerland for a 15-day winter wonderland adventure filled with fun and, of course, music. This special opportunity first arose last year when Jazz Studies faculty from the University of Texas at Arlington visited as guest artists for the Stage Bands’ final concert. They were so impressed with the band that they invited ‘Iolani to the Winter International Jazz ‘N‘ Youth Exchange Music Festival in Germany. ‘Iolani will be playing in two concerts at the festival, first in Ibbenbüren, and then in the small town of Krov, where the mayor personally invited everyone to attend. Students also look forward to a ski trip in Switzerland, as well as observing gothic architecture in the beautiful city of Prague in the Czech Republic. Lisa Nakayama ’13 and Taylor-Ann Katase ’13 are most excited for “snow and winter clothes.” Frishan Paulo ’14 is eager to see the “fireworks on New Year’s Eve in Prague.” Even though the students will miss Christmas and New Year’s Eve with their families and friends, they will still bring the holiday spirit along with them: “We’re doing a Secret Santa gift exchange,” says Nakayama. 
Accompanying the Stage Bands on the tour will be eight dancers from Halau ‘Iolani, as well as guest artists Tim Ishii, Director of Jazz Studies at UT-Arlington, and professional German trumpeter Ulrich Shulz. ‘Iolani Alanna Simao | Imua Iolani Stage Band players diligently prepare for their winter performance in Germany. Stage Bands director Curtis Abe commissioned Dan Cavanagh, a pianist and the Associate Director of Jazz Studies at UT-Arlington, to write a song specifically for the European tour. Cavanagh drew inspiration for the song, called “The Owl King,” from the Hawaiian legend of Kapo‘i, which tells of a young man who spares an owl’s eggs when hunting, and the king of the owls later saves him in reparation. The song features trills and long tones that emulate a flock of owls flying into the distance. Editorials & Sports December 17, 2012 Page 3 Adolescents unappreciative of actors’ dedication, skill By Ashley Mizuo According to The Broadway League, the average Broadway audience member from 2011 to 2012 was 43.5 years old. Many teenagers do not attend live theater performances because tickets are sometimes more expensive than for the movies or simply because they think plays are boring. As a result, members of the younger generation have not learned how to be respectful audience members at perfomances of live theater. I get it. Watching a theater performance seems like something old people do when they have nothing better to occupy their time. The stereotype is that all the plots are boring and that the singing in musicals is unnecessary. Let me say how wrong both those ideas are. The plots in live theater are certainly not boring. Most plots are actually better than those of the blockbusters people watch in the movie theaters. What is on the stage is real talent. The actors don’t have to cut onions right before the sad scene, and the director can’t edit the footage to make it seem as if the actress is crying over her dead werewolf boyfriend. 
Those tears on stage are real, and the laughter on stage is real. As for the singing, imagine what would happen on “Glee” if no one sang. I do not think the show would be nearly half as good as it is. The same goes with musicals; every song is necessary and adds even more magic to the stage. When the emotion is too big for words, the characters must break into song. When I went to see the ‘Iolani Dramatic Players’ wonderful production of “Daughters of Atreus” last month, I heard inappropriate laughter and comments from the audience. These interruptions took attention away from the stage and inappropriately diverted it to the audience. Watching a live theater performance is different from seeing a movie or going to a rock concert. People do not eat in the theater. It is rude and distracting, especially if the wrappers are loud. Furthermore, the performers can hear every laugh and every comment that comes from the audience. Whooping and hollering during a show is distracting and disrespectful to the performers. Young people tend to forget that it is rude, mistakenly thinking that they are complimenting the performers.

Cassie Busekrus | Imua Iolani
At last month’s performance of “Daughters of Atreus,” some audience members were disrespectful to the cast and crew by laughing and whooping.

I am disappointed when I watch a performance, and there is a group of teenagers, who look much like me, laughing hysterically during emotional parts of the play. I understand that when there is a high level of emotion on stage, people laugh out of awkwardness or discomfort, but I wish those teenagers had tried to stifle their laughter so the rest of us could have enjoyed the show without the interruptions and distractions from one or two thoughtless audience members.
Illustration by Bianca Bystrom Pino

Update: One Team Chapel
By David Pang
Last spring, the middle school faculty reached a consensus to change the One Team Chapel format in order to better encourage the Raider spirit. The teachers of grades 7 and 8 want to recognize the positive behaviors that students display rather than focus on specific individuals and awards. For this reason, the One Team Chapel was put on hold. The faculty is working to create a new plan to encourage good citizenship. On the status of the One Team Chapel, Ms. Sara Finnemore said, “It’s not gone. It’s not cancelled. We’re developing a new way to do it.” The One Team Chapel is scheduled to make a return this coming May.

For Coach Dom, lifting weights is a spiritual exercise
By CarrieAnn Randolph
Photos courtesy of Dominic Ahuna ‘93
Dominic Ahuna ‘93, known as Coach Dom on campus, recently competed in the American Open Masters weightlifting competition, earning him a spot in both the national and world championship competitions.

A few weeks ago, Dominic Ahuna ’93, ‘Iolani’s strength and conditioning coach, accomplished what some might call a miraculous feat at the American Open Masters Championships in Monrovia, Calif. The American Open Masters is the Olympic sport of weightlifting’s competition for senior athletes out of Olympic contention. Weightlifting--not to be confused with powerlifting or bodybuilding--is the ultimate sport of strength and power. It consists of two events: the snatch and the clean-and-jerk. Ahuna, known as Coach Dom around the ‘Iolani campus, won first place by completing six perfect lifts. He lifted 297 pounds (135 kg) in the snatch and 341 pounds (155 kg) in the clean-and-jerk. He now is qualified for the Nationals and is also eligible to compete in the World Championships. After training since August to compete in the American Open Masters Championships, just 13 days before the competition, Coach Dom partially tore his left pectoral muscle.
“I was in the middle of a lift and I heard it tear, like when you tear meat or chicken off the bone,” he said, motioning with his hands. A normal pectoral strain or tear would take several weeks to heal. Coach Dom, however, who only started competing three years ago, is a Christian and asked people to pray for God to heal him. “I decided to walk in faith, and kept training despite the discomfort,” he said. “By the third day the pain was completely gone and by the sixth day 90 percent of my strength and 100 percent of my range of motion had returned.” He competed in the contest only 12 days after the initial tear. Coach Dom said that the previously injured muscle actually felt stronger than the non-injured side. While at the competition, he met with Olympic coaches and ministered to other athletes. Even though he already qualifies for the World Championships, Coach Dom is going to compete in the Nationals in order to gain experience. That competition will occur in March in Moorestown, NJ. For athletes, injuries and setbacks are expected, but it is how the athletes respond that defines their character. In Coach Dom’s first weightlifting competition, the 2009 Aloha State Games, he broke three state records all with a torn quad muscle. “God heals you,” said Coach Dom when asked for any advice for injured or recovering athletes. “Illness or injury never comes from Him. We can always ask Him to take it away. Even if you’re not a believer, it’s not always based on how much faith you have, if you pray or even if you go to church.”

Merry Christmas!

Page 4 Imua ‘Iolani

Are you rapture-ready?
by Ilana Buffenstein
Though past theories pinning the apocalypse to specific dates have all gone belly up, their continued existence shows the creators’ comically minimal faith in humanity. On Jan. 1, 2000, many thought the end of the millennium meant the end of the world.
Hexakosioihexekontahexaphobics, afraid of the number 666, all believed that June 6, 2006 (6/6/06), would be a “devil of a day.” The fundamentalist Christian Harold Camping calculated May 21, 2011, as the date of his imminent ascension to Eternal Paradise--a slight miscalculation that left 144,000 of his followers, who had sold all their earthly possessions, broke and homeless. The end of the Mayan calendar in 2012 is the current doomsday prediction. Though not the conventional “world-overrun-by-zombies” shtick, it is sure to change the way people act and think. Though many don’t take it seriously, their behavior is affected by the concept of an impending catastrophe. In fact, people have already begun to react. Facebook posts, songs like Ke$ha’s “Die Young,” and that annoying religious man in Waikiki telling us to repent before Armageddon all demonstrate obsessions with the end of the world. If these weren’t signs enough, the impossible coupling of balding Steve Carell and goddess Keira Knightley in the film “Seeking a Friend for the End of the World” suggests our impending doom. On a more local scale, Chinatown is abuzz with plans for its End of the World Block Party. To many, Doomsday has become a joke. And yet, what makes this next predicted date universally intriguing is the tiny but still possible chance that our everyday lives could be turned upside down. When we wake up on December 22, many of us may be disappointed that nothing has changed. But if the world does end, at least it won’t be awkward when people come to Winter Ball dressed as zombies.

Rachael Heller and Ilana Buffenstein

Jews on Christmas Eve
Meet at the Chinese restaurant
or at the movies.

Chanukah party;
the ignorant goyim* ask,
“Where’s the Christmas tree?”

Every Christmas Eve
I’m stooped over the toilet:
way too much fruitcake.

Bipolar Frosty
His death inevitable
He cries tears of coal.
The player Santa
Always hitting on my mom
She’s not interested

*goyim: Yiddish for a non-Jewish person

Kekoa Morris also contributed to this article.

Winter Ball: expectations vs. reality
Angie Anderson | Imua Iolani

The Imua Christmas Wishlist
Rachael Heller and Ilana Buffenstein
1. “Bic for Her” pens to hold in my fragile lady hands
2. A binder for all my cutouts of female politicians
3. Hoop earrings, I don’t care if they’re Regina George’s thing
4. An Instagram account to live vicariously through other people’s Christmases
5. A free Junior Class ring
6. A menorah
7. An extra 12 hours of sleep (obviously)
8. A zombie best friend when the apocalypse comes.
9. An iPad. Oh wait.
10. To make it to the front page of Imua.
11. For Kristen Stewart’s face to change.
12. You.

Imua ‘Iolani is published by the students of 'Iolani School, located at 563 Kamoku Street, Honolulu, Hawaii, 96826. Est. 1923, printed at Hawaii Hochi Ltd.

Editors-In-Chief: Maile Greenhill, Maya Stevens
News Editors: Matthew Callahan, Claire Furukawa
Features Editors: Jaylene-Rose Lee, Alanna Simao
Arts & Entertainment Editors: Cassie Busekrus, Chanelle Huang
Opinion Editor: Lauren Goto
Middle School Editors: Amy Nakamura, Emily Nomura
Lower School Editors: Lindsey Combs, David Pang
Sports Editors: Brittany Amano, Carrie Ann Randolph
Video/Hiki Nō: Korry Luke, Ashley Mizuo, Sarah Zhang
150th Anniversary Editor: Max Wei
Photo Editors: Anna Brandes, Lia Ho, Kekoa Morris
Staff Writers: Ilana Buffenstein, Rachael Heller, Pascha Hokama, Daniella Kim, Kady Matsuzaki
Advisers: Ms. Lee Cataluna, Mr. John Tamanaha
Contributor: Bianca Bystrom Pino

Imua 'Iolani accepts advertising submissions on a space-available basis. The deadline for the next issue is Jan. 6. The opinions herein expressed do not necessarily reflect the views of the administration, faculty, staff of 'Iolani School or the Imua 'Iolani.
https://issuu.com/iolaniimua/docs/december__12
I have a small problem. I am a beginner in C programming and I need some help, please. My project works on images: I use a camera to read a number from a very small part, for example (0901), but sometimes the component has a defect next to the number, so the number comes out as (??09?01??). I don't know if my code is good. The number is written to the file below, but I would like help making it take only the 4 digits, without the (?). Also, should it return 0 if the number exists and 1 if the number is longer than 4 digits?

#include "stdafx.h"
#include <stdio.h>
#include <string.h>

int main()
{
    FILE *fp;
    char buf[16];               /* roomy enough for the sample string */
    int i;
    const char *foo = "12?12??34";
    int l = strlen(foo);

    fp = fopen("C:\\Users\\Noy\\Desktop\\tet.txt", "w");
    if (fp != NULL)
    {
        for (i = 0; i < l; i++)
        {
            if (*foo == '?')
                buf[i] = ' ';
            else
                buf[i] = *foo;
            foo++;              /* advance to the next character */
        }
        for (i = 0; i < l; i++)
            fprintf(fp, "%c", buf[i]);
        fclose(fp);
    }
    return 0;
}
https://www.daniweb.com/programming/software-development/threads/296133/search-and-replace-a-string-need-small-help-in-c-programming
Unity as a Library: Bring Unity Features to Your Android App 🎮

Unity is a game engine developed by Unity Technologies, running on Windows, MacOS and Linux, for building cross-platform games and applications (Android, iOS, Windows, PlayStation…). It has been around for 15 years now, and it gives us the opportunity to build high-quality graphic experiences or games on mobile. Unity is becoming more and more popular: according to the company, “50% of the new mobile games are made with Unity”. A lot of studios (Blizzard, Square Enix, White Elk…) trust Unity for developing their games or standalone AR/VR applications. Unity has recently improved its support for building great game experiences on Android and gave us Unity as a Library to add a Unity game to an existing Android application as a proper library integration. Let’s see how to export a small Unity experience and add it to your own application, how to easily extend the UnityPlayerActivity generated in the library, and how to add some native UI components to your extended Activity in order to interact with the game!

First steps with Unity

First, we are going to build a tiny 3D experience with Unity (version 2020.1.11f1), with no need for extensive knowledge of the game engine. Our main focus here is to have an Android library that can be used in an existing Android application. Source code of the Unity project can be found on my Github account:

Here is what the Unity editor looks like with a scene and some 3D assets added to it:

When starting a new project, for instance a 3D game here, you will be able to easily add 3D components thanks to the “GameObject” menu located at the top of the editor. You can select multiple shapes, lights or even UI components, but you can also import a more complex 3D asset that you have locally or remotely if you get assets from the Unity Asset Store. Then we can create and attach a C# script to the 3D capsule we added on the scene.
Keep in mind that the script file name should match the class name of your script. To attach it to the asset, click on “Add Component” and choose “New Script”. For the example, in the created script, we are going to implement a ChangeColor() method for changing the color of the material and 4 other functions for moving the object in the scene (a behavior similar to what you get with the keypad arrows while playing a game). Every script attached to an object extends the base class MonoBehaviour, which offers some lifecycle methods that are very helpful for developing your apps and games. Let’s have a look at a newly created script:

public class MyAsset : MonoBehaviour
{
    // Start is called before the first frame update
    void Start()
    {
    }

    // Update is called once per frame
    void Update()
    {
    }
}

You can override these methods, alongside many others depending on what you need, and implement the behavior of the 3D asset. Here for example, let’s override Update() for handling the click on the capsule from the touch screen, and we are going to expose the ChangeColor() method that will be used by our Android application to change the color of the object. In order to be notified by the Android application, the exposed method signature must be as follows:

// Exposed method to be notified by UnitySendMessage()
void MyMethod(string message)
{
    // Do something
}

What is annoying here is that the parameter of the function must be of string type, so it might be complicated to pass some complex data with the UnitySendMessage() method. Now let’s have a look at the final script: Once the scripting is done without any compilation error, we can convert our Unity project to an Android project that will be used with Android Studio.
To export the project, go to the “File” menu and click on “Build Settings”; there you will be able to export the Unity project as an Android Studio-compatible project. Make sure to target the Android platform and to have checked the “Export Project” box. If you already have some Java/Kotlin files in your Unity game, do not forget to check the “Symlink Sources” box before exporting. As you may know, applications published on Google Play need to support 64-bit architectures. To do so, click on the “Player Settings” and scroll to “Other Settings” to enable IL2CPP for the scripting backend configuration. Then, when exporting the project, ARM64 devices will be supported. You are now all set for working directly with the Unity project exported as an Android Studio project. When you open this project in Android Studio, you will see a screen like this one: Let’s explore the project structure. It is composed of two modules (launcher and unityLibrary). The first one is for actually launching the application, where you can implement all the Android stuff you want: creating new Activity, Views or Fragment… and the second is the Unity game as a library module. In this one, you will have access to the UnityPlayerActivity that can be extended; most interestingly, we can use the UnityPlayer to send messages to our Unity game. Now let’s bring this module into an existing Android application.
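Concretely, importing the unityLibrary module into an existing app is done with a bit of Gradle wiring. A sketch following Unity's documented integration steps (the path is a placeholder for wherever you keep the exported project; this is not code shown in the article):

```groovy
// settings.gradle of the existing app
include ':unityLibrary'
project(':unityLibrary').projectDir = new File('/path/to/exported/unityLibrary')

// app/build.gradle of the existing app
dependencies {
    implementation project(':unityLibrary')
}
```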
Source code of the sample Android app can be found on my Github account:

Oleur/android-unity-as-library-sample
Sample Android app that showcases Unity As A Library (UAAL) and how you can use it in an existing app.
github.com

From the exported project, you can either implement your app directly on the launcher module generated by Unity or import the unityLibrary module in our existing app. Let’s work with the second solution! After importing the Unity module in our application, let’s have a look at the generated UnityPlayerActivity. What interests us here is the UnityPlayer. If we look at its implementation under the hood, it is a FrameLayout. It will hold the SurfaceView where the Unity experience will be rendered and handle all the things that the game needs to run correctly on Android. Do not forget to add this string resource, otherwise the Unity game will crash:

<string name="game_view_content_description">Game view</string>

Now that we know how the UnityPlayerActivity is structured, we can extend it in our app module to get the best of it. Thanks to the UnityPlayer, we can add any native UI components to the screen in order to interact with the game. Here, we will add two buttons at the top of the screen, one for changing the color of the capsule and the other for quitting the Unity game. At the bottom, there will be 4 buttons for moving the capsule around in the scene. Let’s see how we can achieve that! The UnityPlayer object provides a static method to send native messages to the Unity game objects. To send the action, you have to provide the game object you want to target, the method name on our script and the parameters (as a String) you want to pass.

// Send native message to Unity
UnitySendMessage(String gameObject, String method, String params)

All together, the UnityGameActivity looks like this and can be started from anywhere in your app.
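Because UnitySendMessage only carries a single String, any structured payload has to be flattened before sending. A small illustrative helper (the class name and delimiter are my own invention, not part of the Unity API) might look like this on the Android side:

```java
public class UnityMessagePacker {
    // Join fields into the one String parameter UnitySendMessage accepts;
    // the C# script on the Unity side would split on the same delimiter.
    public static String pack(String... fields) {
        return String.join("|", fields);
    }

    public static void main(String[] args) {
        String payload = pack("255", "0", "0");
        // The real call (requires the Unity classes on the classpath):
        // UnityPlayer.UnitySendMessage("Capsule", "ChangeColor", payload);
        System.out.println(payload); // prints 255|0|0
    }
}
```

The receiving C# method then parses the string back into the values it needs, which works around the single-string limitation at the cost of a small parsing step on each side.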
And here’s what this a̶m̶a̶z̶i̶n̶g̶ ̶A̶A̶A̶ game looks like in our application :) In this article we explored how to take advantage of Unity as a Library to bring amazing games or AR/VR experiences in your Android applications. In a new article, I will go into the details of what is generated inside the Unity library (Gradle files, Java/C++ classes…). If you are interested in learning more about the Unity game engine and how to master it, Unity provides a lot of high quality resources for beginners and advanced learners. Thanks for reading, I hope it gave you a glimpse at Unity and showed you how easy is it to integrate Unity features in existing apps. Do not hesitate to ping me on Twitter I will share more stuff on Android in the coming weeks 😀 🎮
https://medium.com/swlh/unity-as-a-library-bring-unity-features-to-your-android-app-9936bea9b775?responsesOpen=true&source=---------4----------------------------
I am working on Rules, and I'm a bit confused about the Events. I need to create a Rule that performs an action whenever a content item's field changes, for example when the field Status changes from true to false. Can the OnVersioned event do that? Or, if I need to create a new event, how do I do that? I have read Sebastien's post here: . But I need more details.

Overriding Publishing in a handler should give you the new and old versions of the item.

bertrandleroy wrote: Overriding Publishing in a handler should give you the new and old versions of the item.

Can you give some sample code? I'm still confused about it.

No, I don't have any. Write a handler, override Publishing, use intellisense on the context object.

I have written a new handler like this, just copying the Content.Rules code, but I didn't know how to register and trigger that event...

public interface ITestContentHandler : IDependency {
    void Changed(ChangeStatusContentContext context);
}

public abstract class TestContentHandler : ITestContentHandler {
    protected TestContentHandler() {
        Filters = new List<IContentFilter>();
        Logger = NullLogger.Instance;
    }

    public List<IContentFilter> Filters { get; set; }
    public ILogger Logger { get; set; }

    protected void OnChanged<TPart>(Action<ChangeStatusContentContext, TPart> handler) where TPart : class, IContent {
        Filters.Add(new TestInlineStorageFilter<TPart> { OnChanged = handler });
    }

    class TestInlineStorageFilter<TPart> : TestStorageFilterBase<TPart> where TPart : class, IContent {
        public Action<ChangeStatusContentContext, TPart> OnChanged { get; set; }

        protected override void Changed(ChangeStatusContentContext context, TPart instance) {
            if (OnChanged != null)
                OnChanged(context, instance);
        }
    }

    void ITestContentHandler.Changed(ChangeStatusContentContext context) {
        foreach (var filter in Filters.OfType<ITestContentStorageFilter>())
            filter.Changed(context);
        Changed(context);
    }

    protected virtual void Changed(ChangeStatusContentContext context) { }
}

I'm not sure I get what this is supposed to do. You've got nested classes in there for no clear reason, and you didn't actually override anything. In particular, you didn't override the publishing method as I had advised.

Hello you guys, I am trying to do the same thing as taki27. I have created the handler to override the publishing method, and I have added a debug point to my content handler class, but the OnPublishing event never gets triggered when I change the content of the item. Am I taking it wrong?

public class StatusHandler : ContentHandler {
    OnPublishing((publishing, part) => {
        // here is my processing line
    });
}

I have also tried to override OnVersioned, but it is not getting triggered either. Do you guys have any advice for me, please?

Is your item versionable, i.e. does it have Save and Publish buttons? Otherwise only the Created event will be fired.

Is "Versionable" in the sense of having a publish button different from declaring a content part as Versioned in Migrations.cs?

They're different. A content part can be defined as supporting versioning by declaring .ContentPartVersionRecord(). However this doesn't mean the content type is versioning enabled, which is done via .Draftable() on a content type definition.

If a content part supports versioning, is it just a history of every single state the part has had, or is it a set of named states (e.g. "Published", "Draft")?

Thanks randompete. Could I ask if there is any way that I can create a custom event that gets triggered every time a content item is updated (not by pressing the publish button, but through my own form)? I have a feeling that I will have to wait for the next few releases for this. Thanks.

Hello everyone, I'm still stuck at this issue and hope that I could solve it in Orchard 1.4. I realized that there are Updating and Updated events in the content handler class since Orchard 1.4, and I tried to override the OnUpdating method in my custom handler.
However, the problem is that it only runs if I update the content using the default editor; if I update it manually from code, my custom OnUpdating method does not run. Is there a way to accomplish this, or is it just impossible? Could you guys help me on this, please? I really appreciate it. Thanks.

I don't get it: if it's your own code doing the updating, you know that you are updating, don't you?

So to recap the idea, here is how I want it to be: I have a content part named Status that can be attached to any content item. When that content item is changed and has its status updated, I want to be able to track down that event so I can find the matching rule for it. Hope it is clearer now. Thanks very much.

Then if it is a part, the editing should be done inside a Driver. And you should use BuildEditor/UpdateEditor from your controller, so that the Updating event is triggered.
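A sketch of what that last piece of advice can look like in code (StatusPart and the lambda body are hypothetical; the OnUpdated hook comes from Orchard's ContentHandler base class):

```csharp
public class StatusPartHandler : ContentHandler {
    public StatusPartHandler() {
        // Fires when an item carrying StatusPart is updated through
        // BuildEditor/UpdateEditor, including from your own controller.
        OnUpdated<StatusPart>((context, part) => {
            // react to the status change here
        });
    }
}

// In the custom controller, go through the content manager instead of
// writing to the record directly, so the events above are triggered:
// _contentManager.UpdateEditor(item, updater);
```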
http://orchard.codeplex.com/discussions/284775
Building a Family History Web Service - Posted: Oct 31, 2006 at 10:37 AM

In my next two entries I’m going to review a family history Web service I’ve built using Visual Web Developer 2005 Express Edition. In this entry, I’ll start with a look at the Web service. In a following article, I’ll show you a client application I’ve built for my sister and cousins to use for collecting data about our shared ancestors (sorry, but Mom still won’t go near a computer). To begin, I fired up Visual Web Developer 2005 Express Edition and created a new ASP.NET Web site. A nice convenience of the latest version of ASP.NET is that you don’t need to install Internet Information Server (IIS) on your computer to build Web sites or Web services. Instead, you can run the application locally using the built-in Visual Web Developer Web Server. When you run a Web application, you’ll notice an icon appears in your task bar. If you double-click on the icon, a details dialog box similar to the one shown below appears. I really, really like this feature as it reduces the hassle of configuring your system to build Web applications to zero. But I’m getting a bit ahead of the game. Once I had my new Web service opened in the IDE, my first task was to create a store for my family data. When you create a new project, you’ll notice a folder titled "Data" in the list of project items. When I right-clicked the folder and selected "Add New Item..." from the context menu, this displayed the dialog shown below. You can use XML or an ordinary text file as your data source for a Web application. As you can see, I chose a database – I named it, and then clicked the Add button. This created a brand new SQL Server Express database file for me in the project.
Another great feature of Visual Studio 2005 in all its flavors is that you get built-in database support, without the muss or fuss of using an external program. Think of it as a one-stop shop. And Visual Web Developer 2005 Express Edition provides a wickedly simple designer for defining tables, views, stored procedures and other types of database objects. I created a new table named "person" and defined columns for the various attributes I want to record in my family history database. With the skeleton of my database laid out, I moved to the Service.cs code file in my Web services application. Here I started by adding an attribute to the class definition.

Visual C#

[WebService(Namespace = "uri:family:geneology",
    Name = "Family History",
    Description = "Web service for gathering family history data.")]

Visual Basic

<WebService(Namespace:="uri:family:geneology", Name:="Family History", _
    Description:="Web service for gathering family history data.")>

The WebServiceAttribute class (you can leave out "Attribute" in your code as I did) lets you give the Web service a unique internet identity using the namespace attribute. I also gave the Web service a name and a description. Then I renamed the stub "Hello World" Web method to "FindPerson" and updated its WebMethodAttribute declaration to describe the task I intend for the Web method. In this case, the FindPerson method returns a list of people stored in my family history database based on some search criteria. After thinking a bit about how to return search results, I decided against using a DataSet (which is possible but not really a good idea if you want your Web service to get along well with non-.NET clients) and chose to model a Person object after the contents of my database and use that as my return type. I added a class named "Person" to my project and defined a number of public properties that deliberately map to the columns in the person table of the database.
Of course, I am not solely bound to mapping the attributes of the table data to the properties of the class; I will probably extend the class some more with behaviors as this application takes on more heft later on. Next, to make the class truly useful, I added a SerializableAttribute to the class declaration.

Visual C#

[Serializable()]
public class Person
{
    ...
}

Visual Basic

<Serializable()> _
Public Class Person
    ...
End Class

This signals to the ASP.NET runtime that the class and all of its properties can be serialized (that is, converted to XML) either as an input to one of my Web methods or as a return value. Returning to the Web method, I updated the method signature to return an array of Person objects (families do have members with the same first and last names, after all). I also added another Web method called "UpdatePerson" for adding a new person to my database or for updating an existing person. Again, I took advantage of the underlying serialization capabilities of ASP.NET by declaring a single input parameter of type Person. Following good coding practice, I factored out the actual nuts and bolts work of querying and updating the database to three private helper methods in the Service.cs file. If you look over the code in the "SearchPerson", "AddPerson" and "UpdatePerson" methods you will find the application uses stored procedures to perform work against the database. Again, the database tools provided in Visual Studio Web Developer 2005 Express made this chore virtually painless.

That completes the tour of this application. Next time, I’ll show you the client application I’m going to use to manage querying and data entry for the Web service. In the meantime, I encourage you to download Visual Web Developer 2005 Express Edition and explore this Web service application.

Peter, I am new to developing and have read your article about using web services to input data into a database. I have been given a project in which I have been asked to create a web app which takes 2 parameters and inserts them into a database using a web service. I have found vast amounts of walkthroughs and tutorials on returning data, but none on inserting. I was hoping you could assist me with resources or code snippets which will allow me to accomplish this task; as far as I understand, your service provides this kind of functionality. Thanks for your time, any advice will be greatly received. Ben jazpa125@hotmail.com

Thanks for this, you have taken a daunting task and simplified it for me. Much appreciated.

Fantastic and clear, the best I've seen on WS.
https://channel9.msdn.com/coding4fun/articles/Building-a-Family-History-Web-Service
I have the following sample code where I download a pdf from the European Parliament website on a given legislative proposal:

EDIT: I ended up just getting the link and feeding it to Adobe's online conversion tool (see the code below):

import mechanize
import urllib2
import re
from BeautifulSoup import *

adobe = ""
url = ""

def get_pdf(soup2):
    link = soup2.findAll("a", "com_acronym")
    new_link = []
    amendments = []
    for i in link:
        if "REPORT" in i["href"]:
            new_link.append(i["href"])
    if new_link == None:
        print "No A number"
    else:
        for i in new_link:
            page = br.open(str(i)).read()
            bs = BeautifulSoup(page)
            text = bs.findAll("a")
            for i in text:
                if re.search("PDF", str(i)) != None:
                    pdf_link = "" + i["href"]
            pdf = urllib2.urlopen(pdf_link)
            name_pdf = "%s_%s.pdf" % (y, p)
            localfile = open(name_pdf, "w")
            localfile.write(pdf.read())
            localfile.close()

            br.open(adobe)
            br.select_form(name = "convertFrm")
            br.form["srcPdfUrl"] = str(pdf_link)
            br["convertTo"] = ["html"]
            br["visuallyImpaired"] = ["notcompatible"]
            br.form["platform"] = ["Macintosh"]
            pdf_html = br.submit()
            soup = BeautifulSoup(pdf_html)

page = range(1, 2)        # can be set to 400 to get every document for a given year
year = range(1999, 2000)  # can be set to 2011 to get documents from all years

for y in year:
    for p in page:
        br = mechanize.Browser()
        br.open(url)
        br.select_form(name = "byReferenceForm")
        br.form["year"] = str(y)
        br.form["sequence"] = str(p)
        response = br.submit()
        soup1 = BeautifulSoup(response)
        test = soup1.find(text="No search result")
        if test != None:
            print "%s %s No page skipping..." % (y, p)
        else:
            print "%s %s Writing dossier..." % (y, p)
            for i in br.links(url_regex="file.jsp"):
                link = i
            response2 = br.follow_link(link).read()
            soup2 = BeautifulSoup(response2)
            get_pdf(soup2)
That is, try each step separately (finding the links, downloading the files, extracting the text) and then piece them together. For calling out, use subprocess.Popen or subprocess.call().
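To make the "calling out" step concrete, here is a minimal sketch of wrapping one of those command-line extractors with subprocess. The tool name and the -layout flag are just an assumed example (pdftotext is one common choice); substitute whichever utility you settle on:

```python
import subprocess

def extraction_command(pdf_path, txt_path):
    # Build the argument list for a command-line extractor.
    # "pdftotext -layout" is only an example; swap in your own tool.
    return ["pdftotext", "-layout", pdf_path, txt_path]

def pdf_to_text(pdf_path, txt_path):
    # subprocess.call runs the tool and returns its exit status,
    # so the caller can check whether extraction succeeded.
    return subprocess.call(extraction_command(pdf_path, txt_path))

print(extraction_command("report.pdf", "report.txt"))
```

Keeping the command construction in its own function makes each step testable on its own, which is exactly the piece-by-piece approach suggested above.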
https://codedump.io/share/8m5AZcfSU1Lj/1/converting-a-pdf-to-texthtml-in-python-so-i-can-parse-it
The main reason for starting this blog was to serve as an incentive to do some long-overdue tidying. Like pretty much any programmer who's been working for a few years, I've got a little bag of tricks I've been carrying around throughout my career (17 years so far, of mainly C++, Delphi, and C#). Over the years, some bits and bobs have become obsolete (like that string class I wrote back in 1993, and everything I've ever written in Visual Basic), others have been lost in transitions and moves, but it's still a sizeable bag. Most of the things in the bag, though, have been unorganised little snippets of code, random functions and classes that have been half-formed, often neither tidy nor documented enough to be presented publicly. Some have come from experiments and trials, some from various hobby projects, and some from times when my work contract has given me some rights to the software I write. The last part there is a sensitive area. In most employment contracts, the employer retains the full right to the code you write as part of your job. If you are an independent contractor, the rights issue can be more flexibly negotiated, and there's sometimes a distinction made between business-related code and supporting code. In most cases, the code I've written in my day job is code I have no rights to. But because I'm a geek, I've sometimes thought "Oooh, that's interesting", and sat down to try out and experiment with some concept in my spare time. In those cases, the inspiration might have come from my day job, but the code has been written in my spare time, from scratch. It may be because I've wanted to learn more, or because it's been an interesting problem to solve, or simply because the day job has only required a partial, specialised solution, so I've wanted to do a full solution for my own intellectual satisfaction.
It's often been the case, too, that I've found a use for these little snippets later, for a different employer, and in those cases I've taken my old code and tweaked or rewritten it to fit into the style and needs of the current workplace. So I finally decided to tidy things up. I'll be re-visiting my old code folders, extracting what's useful or interesting, tidying it up, and packaging it. I will:

- stick to a single style of coding (per language) and rewrite the code to have a consistent look
- organise the code into namespaces and classes
- comment the code thoroughly
- use Doxygen or similar to extract the comments to useful documentation
- write a simple test app to exercise the code for each module
- for each module, create packages with:
  - only source code
  - source code, documentation, and example/test app
- release under a BSD license

Why? Well, in a large part to make it easier for me to find, and introduce at the places I work. And I also think other people might find these bits and bobs useful. I wonder, though, whether it would also be useful to make the code available on SourceForge and/or CodeProject?
https://coolcowstudio.wordpress.com/2010/07/16/old-bag-of-tricks/
DataImport

data-import is a data-migration framework. The goal of the project is to provide a simple api to migrate data from a legacy schema into a new one. It's based on jeremyevans/sequel.

Installation

    gem 'data-import'

you can put your migration configuration in any file you like. We suggest something like mapping.rb

    source :sequel, 'sqlite:/'
    target :sequel, 'sqlite:/'

    import 'Animals' do
      from 'tblAnimal', :primary_key => 'sAnimalID'
      to 'animals'

      mapping 'sAnimalID' => 'id'
      mapping 'strAnimalTitleText' => 'name'
      mapping 'sAnimalAge' => 'age'
      mapping 'strThreat' do |context, threat|
        rating = ['none', 'medium', 'big'].index(threat) + 1
        {:danger_rating => rating}
      end
    end

to run the import just execute:

    mapping_path = Rails.root + 'mapping.rb'
    DataImport.run_config! mapping_path

if you execute the import frequently you can create a Rake-Task:

    desc "Imports the data from the source database"
    task :import do
      mapping_path = Rails.root + 'mapping.rb'
      options = {}
      options[:only] = ENV['RUN_ONLY'].split(',') if ENV['RUN_ONLY'].present?
      DataImport.run_config! mapping_path, options
    end

Configuration

data-import provides a clean dsl to define your mappings from the legacy schema to the new one.

Before Filter

data-import allows you to define a global filter. This filter can be used to make global transformations like encoding fixes. You can define a filter which downcases every string like so:

    before_filter do |row|
      row.each do |k, v|
        row[k] = v.downcase if v.respond_to?(:downcase)
      end
    end

Simple Mappings

You've already seen a very basic example of the dsl in the Installation-Section. This part shows off the features of the mapping-DSL.

Structure

every mapping starts with a call to import followed by the name of the mapping. You can name mappings however you like. The block passed to import contains the mapping itself. You can supply the source-table with from and the target-table with to.
Make sure that you set the primary-key on the source-table, otherwise pagination will not work properly and the migration will fill up your RAM.

    import 'Users' do
      from 'tblUser', :primary_key => 'sUserID'
      to 'users'

Column-Mappings

You can create simple name-mappings with a call to mapping:

    mapping 'sUserID' => 'id'
    mapping 'strEmail' => 'email'
    mapping 'strUsername' => 'username'

If you need to process a column you can add a block. This will pass in the values of the columns you specified after mapping. The return value of the block should be a hash or nil. Nil means no mapping at all and in case of a hash you have to use the column-names of the target-table as keys.

    mapping 'strThreat' do |context, threat|
      rating = ['none', 'medium', 'big'].index(threat) + 1
      {:danger_rating => rating}
    end

Dependencies

You can specify dependencies between definitions. Dependencies are always run before a given definition will be executed. Adding all necessary dependencies also allows you to run a set of definitions instead of everything.

    import 'Roles' do
      from 'tblRole', :primary_key => 'sRoleID'
      to 'roles'
    end

    import 'SubscriptionPlans' do
      from 'tblSubcriptionCat', :primary_key => 'sSubscriptionCatID'
      to 'subscription_plans'
    end

    import 'Users' do
      from 'tblUser', :primary_key => 'sUserID'
      to 'users'

      dependencies 'SubscriptionPlans'
    end

    import 'Permissions' do
      from 'tblUserRoles'
      to 'permissions'

      dependencies 'Users', 'Roles'
    end

you can now run parts of your mappings using the :only option:

    DataImport.run_config! 'mappings.rb', :only => ['Users']
    # => imports SubscriptionPlans then Users
    DataImport.run_config! 'mappings.rb', :only => ['Roles']
    # => imports Roles only
    DataImport.run_config! 'mappings.rb', :only => ['Permissions']
    # => imports Roles, SubscriptionPlans, Users and then Permissions

Examples

you can learn a lot from the integration specs.

Community

Got a question? Just send me a message and I'll try to get to you as soon as possible. Found a bug? Please submit a new issue.
Fixed something?

- Fork data-import
- Create a topic branch - git checkout -b my_branch
- Make your changes and update the History.txt file
- Push to your branch - git push origin my_branch
- Send me a pull-request for your topic branch
- That's it!
https://www.rubydoc.info/gems/data-import/0.0.1
Haskell for Kids: Week 2!

It's time for the second weekly roundup from my Haskell programming class for kids! (Here's a link to last week's.)

Pictures From Last Week

Remember, last week's assignment was to write a program to draw an involved picture about something of your choosing. Here are some of the results:

- Marcello:
- Sophia:
- Grant:
- Ms. Sue:
- My own picture!

This Week: Organization

The theme for this week is organization. One problem that's easy to run into quickly in computer programming is that you write computer programs that are so involved and complicated that you get stuck: the program might even work, but it's so hard to understand that you dread having to change it! We spent some time talking about ways to organize your computer programs to make them easier to read.

Parts of a Program

Just to review, a computer program that you wrote on the first day of class might look like this:

import Graphics.Gloss

picture = color red (pictures [
    circle 80,
    translate 50 100 (rectangleSolid 100 50)
    ])

We looked a little closer at how programs are built during our second class last week. Here are the parts.

Import Statement

The first line is the import statement: remember that's where we tell the computer that we will be using the Gloss library. If it weren't for that line, then the computer would have no idea what words like color, red, pictures, circle, and so on actually mean! You only have to import a library once, no matter how many times you use the things that it defines.

Definitions

After the import statement, your program defines some new words, called variables. The one variable we defined in the program earlier was picture. For the moment, we always define a variable called picture, because that's how the computer knows what to draw. What we didn't mention in my last post, though, is that you can define other variables besides picture! And when you do, you can use them in the same way you could use circle or other built-in picture types.
For example, look at this program:

import Graphics.Gloss

picture = snowman

snowman = pictures [
    translate 0 ( 50) top,
    translate 0 (  0) middle,
    translate 0 (-75) bottom
    ]

top = circleSolid 50
middle = circleSolid 70
bottom = circleSolid 90

This program defines five variables: picture, snowman, top, middle, and bottom. Again, the computer is only going to draw the thing labeled picture, but the other variables are useful for keeping things organized. There are two important rules here, which are part of the programming language, Haskell:

- Each new definition has to start in the left-hand column. It should not be indented.
- If you start a new line in the same definition, you have to indent it. It doesn't matter how far, but you need to indent at least a little.

It came up that I'm defining picture first, and then snowman, and so on… some people thought it would make more sense to do it in the other order. That's fine! The order of your definitions makes no difference at all, so feel free to start with picture and work your way down to the parts (sometimes called "top down"), or to start with the parts and work your way up to the overall picture (sometimes called "bottom up"), in whatever way makes sense to you. You could even put them in alphabetical order. The computer doesn't care. (I don't actually recommend alphabetical order… but you could!)

Expressions

The parts after the equal sign (=) in your definitions are called expressions. This is where you say what the variables actually mean. Most of what we looked at last week had to do with the different kinds of expressions you can put there: circles, rectangles, and so on, and even some more complicated expressions involving color or translate and so on. In our second class last week we said a little bit more about what an expression is.
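As an aside for readers who already know a language like Python: Haskell writes function application without commas, so an expression like color red (circle 80) corresponds to nested calls with explicit parentheses. The functions below are toy stand-ins (not the real Gloss library), just to show the function-and-parameters shape:

```python
# Haskell:  color red (circle 80)
# Python-style equivalent, using stand-in functions that just
# record the picture as nested tuples:

def circle(radius):
    return ("circle", radius)

def color(c, picture):
    # color is the function; c and picture are its two parameters
    return ("color", c, picture)

red = "red"

print(color(red, circle(80)))  # → ('color', 'red', ('circle', 80))
```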
Look at this example:

picture = translate 10 20 (circleSolid 50)

The expression on the right side of the definition of picture has four parts to it, all written next to each other. When you see something like that, the first part is called a function, and the rest of them are called parameters. So here we have:

- translate – That's the function
- 10 – That's the first parameter
- 20 – That's the second parameter
- (circleSolid 50) – That whole thing is the third parameter

Pay attention to the parentheses! Parentheses mean "treat this whole part here as just one thing". Every function expects a certain number of parameters. For example, translate always expects three: the first is how far to move horizontally, the second is how far to move vertically. The third is the picture that you're moving. If you tried to give it only 2 parameters, or 5 parameters, it won't work! (If you want to, try it and see what happens.)

What about the part inside the parentheses? It's just another expression! Can you identify which part of it is the function, and which are parameters?

There's one other piece: you might notice that pictures is a bit different. The piece that comes after pictures has square brackets around it, and you can have as many things in there as you like, with commas between them. That's just another kind of expression, called a list. So far, pictures is the only thing we've seen that uses lists, but we'll see another one soon!

Here's the whiteboard after we discussed the parts of a program and expressions:

The conversation bubble was a different discussion: we talked about how you can make interesting shapes by drawing in white over the top of another shape.

Some Advice

Most of that was from the second class last week. This week, we spent some time talking about good advice for writing computer programs. We came up with the following:

Be Fanatical about Organization

If you don't stay organized, it's just not possible to write interesting computer programs.
Your programs should be designed out of pieces, and you should pick those pieces to be meaningful. Here's an example of what not to write:

import Graphics.Gloss

picture = pictures [
    translate 71 123 (rectangleSolid 30 200),
    rotate 45 (translate 10 1 (color green (circle 50))),
    translate 100 100 (scale 2 2 (rectangleWire 10 10)),
    … and on, and on, and on …

That's okay to do when you're trying things out or playing around. But eventually, you want to break up your program into manageable pieces! A long list of 40 different drawing operations isn't very fun to read, and it definitely isn't fun to try to make changes to later on. Instead, try to do something like the snowman example earlier: the pieces of the snowman are broken down into smaller pieces, so you can think about one thing at a time.

Of course, not just any pieces will do! You don't want one piece to have the grass and one leg and sun, and then the next piece to have another leg and a tree! Try to find pieces of your picture that make sense on their own. Computer programmers even have a fancy word for that: we call those pieces coherent or we say that they have a lot of cohesion.

Earlier, we mentioned the ideas of "top down" and "bottom up" approaches to programming: you can either start by defining picture, and then define its pieces… or you can start by defining the pieces, and then build bigger things out of them. For now, whichever direction you find easier is okay, but just pay attention to which one you're doing, and maybe taking the opposite approach once in a while will make certain things easier.

Be a Good Communicator

The second piece of advice we talked about is to be a good communicator. You're not just writing a computer program for the computer to run it: you're also writing something for other people to read. You might show your programs to other people in the class, or to teachers, or plenty of other people. A really good way to communicate well is to pick good names for things.
If you're drawing a flower, your program will be a lot easier to understand when your variables are called leaf, stem, and petal; not picture1, picture2, and picture3!

Another tool you can use that's helpful for communicating well is called comments. The computer will ignore any part of your program that's between a {- and a matching -}. (That's an open curly brace followed immediately by a dash… and then a dash followed by a closing curly brace.) For shorter comments, the computer will ignore anything from -- (two dashes in a row) to the end of the line. So you can write something like this:

{- Picture of my dog, by Chris Smith

   This is a picture of my dog, Ruby.
   She is part border collie.
-}
picture = pictures [
    circleSolid 50,        -- That's her body
    rectangleSolid 20 100  -- That's her tail
    ]

(No, that doesn't really look much like a dog… gimme a break, it was just a quick example!)

By picking good names and adding comments to explain what you're doing, you can make your programs a lot easier to read and understand, both for yourself and other people. That's what we mean by being a good communicator.

Hide Unimportant Ideas

There's only so much that one person can remember at a time! Because of that, another important idea in computer programming is to hide the stuff that isn't important, so that you can focus on what is. We talked about a new language feature that helps here, and that is let and where. Here's an example of using where in a program.

import Graphics.Gloss

picture = snowman

snowman = pictures [
    translate 0 ( 50) top,
    translate 0 (  0) middle,
    translate 0 (-75) bottom
    ]
  where
    top = circleSolid 50
    middle = circleSolid 70
    bottom = circleSolid 90

That is almost the same snowman program as before: but with one difference. The variables top, middle, and bottom are defined after the word where, and still inside the definition of snowman.
(It might not be obvious here, but the word where is still indented by a few spaces, so it's still part of the definition of snowman.)

What this does is make top and middle and bottom only mean something inside of the definition of snowman. So you can use them when you're defining snowman, but if you tried to use them in a different part of your program, it wouldn't work. The error message you'd get would say "not in scope".

The idea of scope is important in computer programming. The scope of a variable is how much of the program it can be used in. When you write more involved programs, you don't want every single variable that you define to be visible everywhere! As this snowman program gets bigger, maybe we might want the same name, like top, to mean something completely different when we're drawing a different part of the scene. That's okay, because this top is only "in scope" during the definition of snowman.

We talked about an analogy for this: if you're in England, it makes perfect sense to say "I saw the queen today!" But, if you're in the United States, it doesn't make sense any more, because we don't have a queen. The phrase "the queen" means something different depending on where you are! So if I talked like the Haskell programming language, and you said to me "I saw the queen today!", I might say in response, "Not in scope: the queen".

By the way, here's the way to write that same snowman program but using let instead of where.

import Graphics.Gloss

picture = snowman

snowman = let
    top = circleSolid 50
    middle = circleSolid 70
    bottom = circleSolid 90
  in pictures [
    translate 0 ( 50) top,
    translate 0 (  0) middle,
    translate 0 (-75) bottom
    ]

This means exactly the same thing: you just get the choice between defining the pieces first (that's let) or defining the pieces afterward (that's where).

Pay Attention To Details

Finally, the last piece of advice we had for computer programming was to pay close attention to details.
There are lots of situations where a normal person might be able to figure out what you want, but your computer program still won't work if you don't get all the details right! We looked at some of these details more closely:

- Indentation: You are not allowed to indent the first line of a definition. You must indent every line after that. These are rules in the Haskell programming language; if you don't follow them, your programs won't work. Also, if you use let and where, then you have to indent all the definitions there by the same amount.

- Parentheses: A lot of people have trouble with this when they get started with programming. (For that matter, most of us still make mistakes with parentheses even when we've been programming for years!) You have to match up your opening and closing parentheses, or your program just won't work. We talked about a few things that help. Some programming tools, including the web site we're using right now, will help you match up parentheses by pointing them out. Using the web site, try putting your cursor right after a parenthesis: the matching one will have a gray box drawn around it! This can help a lot… but it's only there if the tool you're using happens to do it. We also talked about counting parentheses. There's a trick most programmers figure out for checking their parentheses: start out with the number 0 in your head, and then read through part of your program, and add one every time you see an open parenthesis, and subtract one every time you see a close parenthesis. If everything matches, you'll end up back at zero at the end! If not, you may need to figure out where you're missing a parenthesis.

- Spelling: If you misspell a word, it won't work! So paying attention to spelling is important.

- Capitalization: Whether words are capitalized or not matters! So, for example, red works fine as a color, but Red doesn't work!
We talked about the convention of capitalizing new words inside a variable name, even though the variable name starts in lower case, like in circleSolid. That has a name: "camel case"… because it's sort of like upper-case or lower-case, except it has a hump in the middle.

- Spaces: Most of the time, you can put spaces in your program however you like: but they often do need to be there! You can't run together numbers or variable names and expect it to work. A lot of programmers like to use spaces to line things up in nice columns, too.

Here's our whiteboard at the end of this part:

Paying attention to details like this is very important, and will help a lot as you get more experience with writing programs.

Things Other People Said

We finished up today by reading some quotations by famous people about computer programming, talking about them, and did some laughing, too.

Any fool can write programs that a computer can understand. Good programmers write code that humans can understand. – Martin Fowler

This is talking about the importance of being a good communicator, which was one of the points above.

Controlling complexity is the essence of computer programming. – Brian Kernighan

Brian Kernighan was one of two people who invented C, one of the most popular programming languages in the world. He's talking here about the importance of organization, and of hiding unnecessary details, so that you can build programs without being overwhelmed by how complicated everything gets! In fact, he says that's the most important thing about computer programming.

Any sufficiently advanced bug is indistinguishable from a feature. – Rich Kulawiec

Remember a "bug" is a mistake in your program. A "feature" is something it does, and is supposed to do.
I threw this quotation in for Sophia, because on the first day of class, she found a bug in my web site where you could stretch a solid circle and it would accidentally leave a hole in the middle, and she used it to make a mouth as part of her first program. But then the next day, I fixed the bug and broke her program! That's definitely a bug that turned into a feature.

If you're about to add a comment, ask yourself, "How can I improve the code so that this comment isn't needed?" – Steve McConnell

This is a great concept! We talked about using comments to explain what's happening in your programs: but what's even better than having a comment is organizing your program so that it's obvious what you're doing, and you don't need the comment any more. This certainly doesn't mean never write comments: it just means see if you can keep things simple so there's not as much to explain.

Great software, likewise, requires a fanatical devotion to beauty. If you look inside good software, you find that parts no one will ever see are beautiful too. – Paul Graham

This is Paul Graham being funny, but also making a very good point. This touches on the first and last points of advice earlier.

There are two ways of writing computer programs: One way is to make it so simple that there are obviously no mistakes. The other way is to make it so complicated that there are no obvious mistakes. – C.A.R. Hoare (sort of)

The last one is actually slightly misquoted. I didn't correct it because the original version means nearly the same thing but uses more technical language.

Your Assignment

Your assignment is to clean up your computer program! Imagine that you're going to share with other people not just the picture it draws, but the computer program itself. Get it as clean and nice looking as you can: try to break things down in logical ways, pick really good names, fix all the details, and communicate well to people reading your code. See you next week!
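P.S. The parenthesis-counting trick from the "Pay Attention To Details" section can itself be written as a tiny program. Here it is in Python rather than Haskell, purely as an illustration of the counting idea:

```python
def paren_count(program_text):
    # Start at 0, add one for every '(', subtract one for every ')'.
    count = 0
    for character in program_text:
        if character == "(":
            count += 1
        elif character == ")":
            count -= 1
    return count  # 0 means everything matched up

print(paren_count("translate 10 20 (circleSolid 50)"))   # → 0
print(paren_count("pictures [ circle 80, (circle 50 ]")) # → 1, a '(' was never closed
```

A nonzero answer tells you a parenthesis is missing somewhere, which is exactly how the in-your-head version of the trick works.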
https://cdsmith.wordpress.com/2011/08/23/haskell-for-kids-week-2/?like=1&_wpnonce=d1fb2cdf27
This may not really be actionable yet, but I've been thinking about it a lot.

Twisted Loop

twisted.internet has a lot of cruft in it. In particular there are lots of things like twisted.internet.main, twisted.internet.posixbase, twisted.internet.abstract, twisted.internet.process, which are sort of public, but sort of not. There are also a bunch of old-style classes. Also, the package name, twisted.internet, was kind of an in-joke from a long time ago; not everything in the module is about internet networking, and it doesn't really fit into the 'twisted something' naming idiom, where 'something' is a thing that idiomatically is sometimes referred to as twisted in english (twisted web, twisted words, etc). A better name would be 'twisted.loop'. This would also allow us to replace the not-quite-accurately-named "reactor" object with a "Loop" object, which would ideally make more sense to newcomers.

At some point, when we've addressed some of the issues like a new, clean process transport and a new, clean producer/consumer interface, I'd like to make a new public face for those APIs in a new package, twisted.loop. I'd like to stress that the idea here is not to rewrite anything; this would be a very deliberate moving of code. In fact, I imagine the first cut at something like this would be simply an empty package with some imports from twisted.internet and an __all__. Later, we could move the contents of twisted.internet to twisted.loop._private, and add a compatibility import, and possibly __path__ (not quite sure about the technical details), and then add a deprecation. (Except for twisted.internet.defer, of course; that goes back to twisted.python.)

dash suggests that if we're going to jump in and scramble things around, we should also do away with our favorite sys.modules shenanigan, twisted.internet.reactor. I agree. One part of twisted.loop should be an elegant way to bootstrap the loop.
One way to do this (and this idea is really pre-new-namespace, something that would go into twisted.internet first) might be a way to provide a standard function that does something like this:

@inlineCallbacks
def runServiceInLoop(serviceFactory):
    loop = yield waitForSomebodyToInstantiateALoop()
    # wouldn't it be nice if stuff like callWhenRunning returned Deferreds
    # instead of taking functions
    yield loop.deferUntilRunning()
    serviceFactory.startService()
    loop.addSystemEventTrigger('before', 'shutdown', serviceFactory.stopService)
    # or equivalent, system event triggers are pretty dumb.

Other random thoughts which might reduce confusion around reactor.run and maybe make it a bit less global:

- In the presence of a function like the proposed runServiceInLoop, would it be cool to call loop.run() in an atexit hook, if it hasn't been run yet? While that seems kind of hacky, it does place quick example scripts at the same level as regular functions that are called when the reactor is already running: the rule is 'return control to your caller' in either case, instead of 'return control to your caller sometimes, and other times, call reactor.run()'.

- I keep hearing there's some thing that the Tkinter and gtk modules do which runs the reactor along with readline when you are sitting at the interactive prompt. Can we also do that, and run the reactor when people start callin' interactive functions?

A somewhat more controversial idea: once the reactor is running, I think we should put it in the context, because once the reactor is running, there is only one "real" reactor, and that's it. It's not reentrant, and it's not going to be reentrant, so that's it. We shouldn't install such a thing before it's actually been run, because.
foom suggested on IRC that dynamic scope is cool because it lets you have variables which are global, but thread-local once they've been set; I'm not sure if he meant that tongue-in-cheek, but that's what the reactor - or, should I say, the loop - should be like. (A complete tangent: perhaps we should put a reactor into each thread? That way we wouldn't need to have the somewhat arbitrary / weird distinction between callInThread and callFromThread; callInThread would just mean "find a reactor for a thread that isn't too busy, and call callFromThread on that reactor". If some VM were to ever give us free-threading, it would be handy to be able to have one reactor per thread, and there isn't really any reason that we couldn't do this already, except for a few more ill-conceived global variables...) -glyph
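For reference, the sys.modules shenanigan mentioned above — installing an object under a module name so that a plain import hands it back — looks roughly like this in standalone Python (the module names here are made up for illustration, not real Twisted internals):

```python
import sys
import types

def install_alias(name, replacement):
    # The "sys.modules shenanigan": make `import name` hand back
    # `replacement` instead of loading anything from disk.
    sys.modules[name] = replacement

# A stand-in for a hypothetical twisted.loop module.
loop_module = types.ModuleType("loop_module")
loop_module.Loop = type("Loop", (), {})

install_alias("legacy_loop", loop_module)

import legacy_loop  # resolved straight out of sys.modules
print(legacy_loop.Loop is loop_module.Loop)  # → True
```

This is exactly the kind of magic the proposal wants to retire in favor of an explicit bootstrap function.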
http://twistedmatrix.com/trac/wiki/Plan/TwistedLoop?version=1
how to use DynamicInvoke method to return a value

Discussion in 'ASP .Net' started by flyingchen, Aug 2.
http://www.thecodingforums.com/threads/how-to-use-dynamicinvoke-method-to-return-a-value.526369/
Here is a simple test-case I set up. Here is my .htaccess file:

-------------------------
SetHandler python-program
PythonHandler main
PythonDebug On
-------------------------

(I am using version 2.7.10 as shipped in Debian stable's libapache-mod-python package, which is why I am using "python-program")

Here is my main.py

-----------------------------
from mod_python import apache
import testmodule

def handler(req):
    req.write("Hello World!")
    testmodule.foo(req)
    return apache.OK
-----------------------------

and here is the imported testmodule.py

--------------------------------
def foo(req):
    req.write('three blind mice')
--------------------------------

The first time I load this test in my web browser, I see the expected result:

Hello World!three blind mice

I edit testmodule.py and change the string to something else:

--------------------------------------
def foo(req):
    req.write('mary had a little lamb')
--------------------------------------

I then reload my web browser. Sometimes this results in "Hello World!mary had a little lamb", and sometimes it results in "Hello World!three blind mice". I edit the string again, and hit reload again. Maybe I get the new string.... or maybe I get one of the strings I used previously. I keep making changes to testmodule.py and I keep reloading the results, and the results are almost always wrong.

I know this is not a browser cache problem. I have cleared/disabled my web-browser's cache. I read about "Multiple Interpreters" in the documentation, and have tried to force my code to run in a single interpreter by adding:

PythonInterpreter "test_interpreter"

to my .htaccess file, but that makes no difference. This problem only occurs in the included module. I can edit main.py and my changes are always applied. I notice in the apache logs that every time I reload the page I see:

[notice] mod_python: (Re)importing main from None

But I can find no way to force testmodule to be re-imported. What am I doing wrong?

--- James Paige
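The symptom described is consistent with Python's normal import caching: the handler module gets (re)imported, but a plain `import testmodule` inside it is satisfied from the interpreter's module cache until something explicitly reloads it. The caching itself can be demonstrated in standalone Python, with no Apache involved (the module name below is made up for the demo):

```python
import importlib
import os
import sys
import tempfile

# Create a throwaway module on disk and make it importable.
tmpdir = tempfile.mkdtemp()
sys.path.insert(0, tmpdir)
module_path = os.path.join(tmpdir, "demo_module.py")

with open(module_path, "w") as f:
    f.write("LINE = 'three blind mice'\n")
import demo_module
first = demo_module.LINE

# Edit the file on disk, then import again.
with open(module_path, "w") as f:
    f.write("LINE = 'mary had a little lamb'\n")
import demo_module              # satisfied from sys.modules: old code
cached = demo_module.LINE

importlib.reload(demo_module)   # explicit reload re-executes the file
reloaded = demo_module.LINE

print(first, cached, reloaded)
```

The second `import` never touches the edited file; only the explicit reload picks up the change, which matches the "included module never updates" behaviour above.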
http://www.modpython.org/pipermail/mod_python/2006-January/020078.html
How can I pass the csv file (or a file stream, or something along those lines) to the rake task I'm running on the remote app, via rake task arguments? So I can get the contents of that file in the task and do something with it. It's not a big file.

Update

I tried with the suggestion from Luc:

desc 'Test task'
namespace :app do
  task :pipe_file => [:environment] do |t, args|
    puts "START"
    File.open('my_temp_file', 'w') do |f2|
      while line = STDIN.gets
        f2.puts line
      end
    end
    puts "DONE"
  end
end

cat tst.csv | bundle exec rake app:pipe_file

You can pipe the content of your file to your rake task:

cat my_file | heroku run rake my_task

Then inside your task you need to start by reading STDIN:

STDIN.binmode
tmp_file = Tempfile.new('temp_file_prefix', Rails.root.join('tmp'))
tmp_file.write(STDIN.read)
tmp_file.close

# Process tmp_file here.
puts tmp_file.path

tmp_file.unlink

Hope it helps!
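The spool-STDIN-to-a-tempfile move in the answer is language-agnostic; here is the same idea sketched in Python (the function name and the in-memory demo stream are made up for illustration):

```python
import io
import tempfile

def spool_stream_to_tempfile(stream, suffix=".csv"):
    """Copy a piped-in stream (e.g. sys.stdin) to a temp file and return its path."""
    with tempfile.NamedTemporaryFile("w", suffix=suffix, delete=False) as tmp:
        for line in stream:
            tmp.write(line)
        return tmp.name

# Demo with an in-memory stream standing in for a piped CSV:
demo_path = spool_stream_to_tempfile(io.StringIO("name,qty\nwidget,3\n"))
with open(demo_path) as f:
    demo_contents = f.read()
```

In real use you would pass `sys.stdin` and then hand the returned path to whatever processes the file, exactly as the Ruby answer does with `Tempfile`.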
https://codedump.io/share/PBeIYXMV3iWR/1/running-rake-task-on-remote-app-use-file-from-local-machine
I'm currently working on a small project using a Gumstix Overo with the Tobi extension board. Via the I2C bus I connected a HMC6343 compass from Sparkfun. The underlying OS is Linux 2.6.36. The following code is used to communicate with the peripherals via I2C:

******************************************************************************
#include ...
#include <linux/i2c-dev.h>
#include <fcntl.h>

using namespace std;

int main()
{
    //cout << "!!!COMPASS TESTING!!!" << endl; // prints !!!Hello World!!!

    int fh;
    unsigned char COMMAND = {0x50};
    unsigned char data[6];
    unsigned char TARGET_ADDR = {0x19};

    fh = open("/dev/i2c-3", O_RDWR);
    if (fh < 0) {
        // ERROR HANDLING; you can check errno to see what went wrong
        //cout << "Could not open the file!" << endl;
    }

    // tell the driver we want the device with address 0x19 on the I2C bus
    int io = ioctl(fh, I2C_SLAVE, TARGET_ADDR);
    if( io < 0 ){
        //cout << "IOCTL failed!" << endl;
    }
    else{
        //cout << "IOCTL output: " << io << endl;
    }

    int w = write(fh, &COMMAND, 1);
    if (w != 1) {
        //cout << "i2c write failed: " << w << endl;
    }

    sleep(0.1);

    //cout << "Read Bytes from Slave: " << endl;
    int r = read(fh, data, 6);
    if(r != 6){
        //cout << "i2c read failed: " << r << endl;
        //read 6 byte data from slave to master
    }
    else{
        //process heading data:
        //cout << "[0]: " << data[0] << endl;
        //cout << "[1]: " << data[1] << endl;
        //cout << "[2]: " << data[2] << endl;
        //cout << "[3]: " << data[3] << endl;
        //cout << "[4]: " << data[4] << endl;
        //cout << "[5]: " << data[5] << endl;
    }

    close(fh);
    return 0;
}
******************************************************************************

The output on the bus itself looks like this (recorded with a logic analyzer):

Filehandle and ioctl call seem to be correct. What puzzles me is that no data byte is being sent during the write process. It's just the address of the peripheral and its ACK bit being sent. What am I doing wrong? I always get a "write failed" message from the method call, and subsequently a "read failed". The values being returned are always -1.

Thanks in advance.

--
https://sourceforge.net/p/gumstix/mailman/message/27952408/
Your First Component (Windows)

This guide walks you through your first custom Grasshopper component library using Visual Studio. It is presumed you already have the necessary tools installed and are ready to go. If you are not there yet, see Installing Tools (Windows).

HelloGrasshopper

We will use the Grasshopper Assembly templates to create a new, basic, component library called HelloGrasshopper. If you are familiar with Visual Studio, these step-by-step instructions may be overly detailed for you. The executive summary: create a new project using the Grasshopper Assembly template (this guide uses Visual Studio 2017 Community Edition and C#).

- Navigate to File > New > Project…
- A New Project wizard should appear. In the left column, find the Installed > Templates > Visual C# > Rhinoceros section. In the central list, select the Grasshopper Add-On template…
- For the purposes of this Guide, we will name our demo plugin HelloGrasshopper. At the bottom of the window, fill in the Name field. Browse and select a location for this project on your disk…
- The New Grasshopper Assembly dialog appears. Check the Provide sample code checkbox.
- This is where you fill out information about your first component:
  - Add-on display name: the name of the component library itself.
  - Name: the name of the component as displayed in the ribbon bar and search menus.
  - Nickname: the default name of the component when inserted into the canvas.
  - Category: name of the tab where the component icon will be shown.
  - Subcategory: name of the group inside the tab where the icon will be shown.
  - Description: description shown in the tooltip when the mouse is over the component icon in the menu.
- For the purposes of this guide, we will accept the defaults and click Finish…
- A new solution called HelloGrasshopper should open…

Boilerplate Build

- Before we do anything, let's build and run HelloGrasshopper to make sure everything is working as expected. We'll just build the boilerplate Plugin template.
Click the Start (play) button in the toolbar corner of Visual Studio (or press F5) to Start Debugging…

- Rhinoceros launches.
- Since this is the first time you are debugging the components, you need to tell Grasshopper where to look. In the Rhino command prompt, run the GrasshopperDeveloperSettings command…
- Uncheck the Memory load *.GHA assemblies using COFF byte arrays checkbox.
- Click the Add Folder button and add the bin output folder of your project to Grasshopper's search path. NOTE: You only need to do this step once during the development of your component, unless you move it elsewhere.
- (Optional) Automatically start Grasshopper every time Rhino starts…
  - Navigate to Tools > Options > General.
  - In the Run these commands every time Rhino starts text area, type _Grasshopper then click OK.
- Run the Grasshopper command to start Grasshopper. If you don't blink, you might see Grasshopper say it is loading "HelloGrasshopper" in the status bar of the splash screen.
- Navigate to Curve > Primitive in the components menus. You should see HelloGrasshopper in the list with a blank icon. Drag this onto the canvas. The component should "work."
- Exit Rhinoceros. This stops the session. Go back to Visual Studio. Let's take a look at the…

Component Anatomy

- Use the Solution Explorer to expand the Solution (.sln) so that it looks like this… NOTE: Depending on your edition of Visual Studio, it may look slightly different.
- The HelloGrasshopper project (.csproj) has the same name as its parent solution…this is the project that was created for us by the Grasshopper Assembly template wizard earlier.
- Properties contains the AssemblyInfo.cs source file. This file contains the meta-data (author, version, etc) about the component library.
- References: Just as with most projects, you will be referencing other libraries. The Grasshopper Assembly template added the necessary references to create a custom Grasshopper component.
- GH_IO - or GH_IO.dll - is the Grasshopper Input/Output library required to read and write Grasshopper files.
- Grasshopper - or Grasshopper.dll - is the Grasshopper base namespace.
- RhinoCommon - or RhinoCommon.dll - is the Rhinoceros .NET SDK.
- System, System.Core, System.Drawing, System.Windows.Forms are .NET foundational libraries.
- HelloGrasshopperInfo.cs contains the component library information, such as the name, icon, etc.
- HelloGrasshopperComponent.cs is where the action is. Let's take a look at this file…

Make Changes

- Open HelloGrasshopperComponent.cs in Visual Studio's Source Editor (if it isn't already). Notice that HelloGrasshopperComponent inherits from GH_Component…

public class HelloGrasshopperComponent : GH_Component

- If you hover over GH_Component you will notice this is actually Grasshopper.Kernel.GH_Component. HelloGrasshopperComponent also overrides two methods for determining the input and output parameters…

protected override void RegisterInputParams(GH_Component.GH_InputParamManager pManager)
...
protected override void RegisterOutputParams(GH_Component.GH_OutputParamManager pManager)

The actual work done by the component is to be found in the SolveInstance method…

protected override void SolveInstance(IGH_DataAccess DA)

As you can see, this is where the action happens. This boilerplate component creates a spiral on a plane. Just to make sure everything is working, let's change the default plane on which the spiral is constructed. On line 67, in SolveInstance, notice that an XY plane is constructed…

Plane plane = Plane.WorldXY;

Further down in the SolveInstance method, you will notice that the input data is being fed into this plane…

if (!DA.GetData(0, ref plane)) return;

Go back to RegisterInputParams, and find the line where the Plane input is registered.
The last argument being fed to the method - Plane.WorldXY - is the default value of the input…

pManager.AddPlaneParameter("Plane", "P", "Base plane for spiral", GH_ParamAccess.item, Plane.WorldXY);

Change the default value of the Plane input to be Plane.WorldYZ…

pManager.AddPlaneParameter("Plane", "P", "Base plane for spiral", GH_ParamAccess.item, Plane.WorldYZ);

- Now let's examine what happens when inputs are given to this component…

Debugging

- Set a breakpoint on line 99 of HelloGrasshopperComponent.cs. You set breakpoints in Visual Studio by clicking in the gutter…
- Build and Run.
- In Grasshopper, place a HelloGrasshopper component on the canvas…as soon as you do, you should hit your breakpoint and pause…
- The reason you hit the breakpoint is because the SolveInstance method was called once initially when the component was placed on the canvas. With Rhino and Grasshopper paused, in Visual Studio switch to the Autos tab (if it is not already there). In the list, find the plane object. Our plane is a Rhino.Geometry.Plane with a value of {Origin=0,0,0 XAxis=0,1,0, YAxis=0,0,1, ZAxis=1,0,0}…a YZ plane, the default, as expected.
- Continue in Grasshopper by pressing the Continue button in the upper menu of Visual Studio (or press F5)…
- Control is passed back to Grasshopper and the spiral draws in the Rhino viewport. Now, place an XY Plane component on the canvas and feed it as an input into HelloGrasshopper's Plane input. Notice you hit your breakpoint again, because SolveInstance is being called now that the input values have changed.
- Exit Grasshopper and Rhino or Stop the debugging session.
- Remove the breakpoint you created above by clicking on it in the gutter.

DONE!

Congratulations! You have just built your first Grasshopper component for Rhino for Windows. Now what?

Next Steps

You've built a component library from boilerplate code, but what about putting together a new simple component "from scratch" and adding it to your project?
(Component libraries are made up of multiple components after all). Next, check out the Simple Component guide.
https://developer.rhino3d.com/guides/grasshopper/your-first-component-windows/
Although I've retired from full time work, I still consult for lots of small mom-n-pop places. Mostly, it's little scripts to automate doing this and that. Sometimes, the boss' kid or nephew was asked to get ambitious and solve a problem. When the inevitable happens, they call me to bail them out. For the most part, it's usually something like some file got moved/renamed/deleted. Sometimes, they got ambitious and attempted to write a batch file. This time, a college freshman, who claimed to be "good with computers", had written a program to control the little scripts and jobs in an automated fashion. Apparently, it was getting too complicated for him and they asked me if I could work with it.

It's a pity that Windows doesn't have some sort of way to run a task on a schedule...

Anonymized, but structurally unmodified, and no, there wasn't a single comment in the file:

public class TaskScheduler {
  public void runTask(int taskNum, int ...args) throws Exception {
    switch (taskNum) {
      case 1: function1(args[0]);
              return;
      case 2: function2(args[0],args[1]);
              return;
      case 3: function3(args[0],args[1],args[2],true);
              return;
      case 4: function3(args[0],args[1],args[2],false);
              return;
      case 5: runTask(2,args[1]+1);
              return;
      case 6: runTask(3,args[1]+1,args[2]+1);
              runTask(5,args[1]);
              return;

      // OP: triple-nested switch meaning: "Run only during business hours: 9-5, M-F, with special case on Wed"
      case 7:
        switch (new GregorianCalendar().get(Calendar.DAY_OF_WEEK)) {
          case 0: return;
          case 1:
          case 2:
          case 3: runTask(3, 5, 8);
          case 4:
          case 5: {
            int hourOfDay = new GregorianCalendar().get(Calendar.HOUR);
            runTask(6, hourOfDay, 23);
            switch (hourOfDay) {
              case 0:
              case 1:
              case 2:
              case 3:
              case 4:
              case 5:
              case 6:
              case 7:
              case 8: return;
              case 9:
              case 10:
              case 11:
              case 12:
              case 13:
              case 14:
              case 15:
              case 16:
              case 17: runTask(2, args[1]);
                       return;
              case 18:
              case 19:
              case 20:
              case 21:
              case 22:
              case 23: return;
              default: return; // OP: I suppose that 25'th hour *could*
                               // fall during business hours!!!
            }
            return;
          }
          case 6: return;
          default: return; // OP: in case of days outside the range: Sun..Sat
        }

      //...

      case 184: {
        function184(new Date());
        runTask(1);
        runTask(27);
        runTask(16, 1, 15, 34);
        // OP: 84 more
        return;
      }
      default: return;
    }
  }

  // all renamed for anonymity, but they were about this meaningfully named
  void function1(int arg) {
    // ...
  }

  void function2(int arg1, int arg2) {
    // ...
  }

  void function3(int arg1, int arg2, int arg3, boolean arg4) {
    // ...
  }

  // ...

  void function184(Date d) {
    // ...
  }
}
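For contrast, the usual cure for this kind of switch pyramid is a dispatch table plus an explicit business-hours predicate. A sketch in Python (task numbers, task bodies, and the exact hours rule are illustrative stand-ins, not a port of the original):

```python
from datetime import datetime

# Registry mapping task numbers to handlers (bodies are stand-ins).
TASKS = {
    1: lambda: "ran cleanup",
    2: lambda day, hour: f"ran report day={day} hour={hour}",
}

def in_business_hours(now):
    """Mon-Fri (weekday 0-4), 9:00 through 16:59."""
    return now.weekday() < 5 and 9 <= now.hour < 17

def run_task(num, *args, now=None):
    """Dispatch task `num`, but only during business hours."""
    now = now or datetime.now()
    if not in_business_hours(now):
        return None  # outside business hours: do nothing
    return TASKS[num](*args)

# Wednesday 2024-01-03 10:00 is inside business hours; Saturday 2024-01-06 is not.
in_hours = run_task(2, 3, 10, now=datetime(2024, 1, 3, 10, 0))
off_hours = run_task(1, now=datetime(2024, 1, 6, 10, 0))
```

One table lookup and one readable predicate replace three nested switches, and adding task 185 is one dictionary entry rather than another case block.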
http://thedailywtf.com/articles/a-case-of-bad-timing
With Instructables you can share what you make with the world, and tap into an ever-growing community of creative experts.

The Arduino Ethernet Shield allows you to easily connect your Arduino to the internet. This shield enables your Arduino to send and receive data from anywhere in the ...

Setting it up is as simple as plugging the header pins from the shield into your Arduino. Note that the Ethernet Shield sold at Radioshack is online compatible ...

The Ethernet Shield is based upon the W5100 chip, which has an internal 16K buffer. It has a connection speed of up to 10/100Mb. This is not the ...

Plug the Arduino into your computer's USB port, and the Ethernet shield into your router (or direct internet connection). Next, open the Arduino development environment. I highly recommend ...

You can use the Arduino Ethernet shield as a web server to load an HTML page or function as a chat server. You can also parse requests sent ...

You can also use the Ethernet Shield as a client. In other words, you can use it to read websites like a web browser. Websites have a lot ...

/* This code is in the public domain. */

#include <SPI.h>
#include <Ethernet.h>

// Enter a MAC address and IP address for your controller below.
// The IP address will be dependent on your local network:
byte mac[] = { 0x00, 0xAA, 0xBB, 0xCC, 0xDE, 0x01 };
IPAddress ip(191,11,1,1); //<<< ENTER YOUR IP ADDRESS HERE!!!

// initialize the library instance:
EthernetClient client;

const int requestInterval = 60000; // delay between requests
();
}

void loop() {
  if(tweet == ">Hello Cruel World"){
    digitalWrite(2, HIGH);
    Serial.println("LED ON!");
  }
  if(tweet != ">Hello Cruel World"){
    digitalWrite(2, LOW);
    Serial.println("LED OFF!");
  }
  // close the connection to the server:
  client.stop();
}
}
}
}
else if (millis() - lastAttemptTime > requestInterval) {
  // if you're not connected, and two minutes have passed since
  // your last connection, then attempt to connect again:
  connectToServer();
}
}

void connectToServer() {
  // attempt to connect, and wait a millisecond:
  Serial();
}
http://www.instructables.com/id/Arduino-Ethernet-Shield-Tutorial/step5/Client/
07 March 2012 06:30 [Source: ICIS news]

SINGAPORE (ICIS)--This includes expansions in Xinjiang's coal-based petrochemical sector, in order to take advantage of the region's abundant coal and natural gas resources.

By 2015, a 2.6m tonne/year coal-based urea plant, an 800,000 tonne/year dimethyl ether (DME) unit, a 60bn cubic metre/year coal-based natural gas unit, 1m tonne/year of coal-based olefins capacity and a 1m tonne/year coal-based monoethylene glycol (MEG) plant will be brought on stream.

In addition, the region will develop its salt chemical industry, with its polyvinyl chloride (PVC) and potash fertilizer capacities to reach a total of 5.7m tonnes/year and 3m tonnes/year, respectively, by the same year.

The region is targeting its overall output growth rate to be at an average of 15% per year during the 12th Five-Year period,
http://www.icis.com/Articles/2012/03/07/9538939/chinas-xinjiang-region-to-expand-petrochemical-industry-by.html
Script in Orlando

Andrew Clinick
Microsoft Corporation

June 12, 2000

Download Script0600.exe.

Contents
Administering Windows with Windows Script
The Ultimate Add-User Script
Administrative Sanity Workbench
Summary
Highlights from Orlando

Luckily for me, this month's "Scripting Clinic" coincides with Tech·Ed, where the Windows Script Team is giving presentations on Windows Script Host (WSH) and Version 5.5 of JScript® and Visual Basic® Scripting Edition. I thought I'd take this opportunity to expand on my "Administering Windows with Windows Script" Tech·Ed presentation, and give some more in-depth explanations of how the demos work. Windows Media shows of this session and Peter Torr's "Windows Script for Developers" presentation are available on the MSDN Web site, so you can take a look at them as well. Just to round things off, I'm also including some photos I took of the script presentations and some key folks in the Windows Script community who flew into Orlando (some from as far away as New Zealand—now that's what I call dedication!).

Administering Windows with Windows Script

This presentation is an overview of how Windows Script and the scriptable technologies in Windows 2000 make administration easier than ever. I've touched on these subjects in prior articles (If It Moves, Script It and Windows 2000: Script Heaven), so I recommend taking a look at those articles in conjunction with this column. The key to the presentation is the two major demos that we put together:

- The Ultimate Add-User Script
- Administrative Sanity Workbench

The slides are worth looking at—but this session was the last one of the week at Tech·Ed, and people were experiencing severe PowerPoint overload by that point. We needed to keep their attention somehow, and everyone loves a good demo.
The Ultimate Add-User Script

This script was developed to help administrators deal with the Monday morning rush when Human Resources (HR) sends them a list of 60 new hires that need accounts set up by 9 A.M., and the list arrives at 8:55 A.M. Each user needs:

- An account set up on the domain for logon.
- An Exchange mailbox.
- A home drive.
- A personal Web site on the intranet with a page telling people who the employee is, his/her phone number, etc.
- A welcome e-mail telling each new hire all about this wonderful new system that has been set up.

This is quite a task, and it would require the administrator to move among a number of applications to achieve the desired result. It would certainly take longer than five minutes for 60 users—a perfect scenario for script to save the day.

Requirements for the Script

- Allow HR to provide a list of new hires in a format an HR employee can easily use.
- Set up a domain logon account.
- Set up an Exchange mailbox.
- Set up a home drive for the user and map it to a local drive when the user logs on.
- Set up a v-root based on the home drive on the corporate user Web site.
- Create a home page with the user's information.
- Send a confirmation mail to the user.
- Do all of the above without asking for any input from the administrator.

How the Script Works

There are two parts to the script: an Excel spreadsheet and the script that runs using the spreadsheet. You can get both by downloading the code for this article. I chose Excel because it's pretty simple for HR to use and Excel documents can easily be distributed (by e-mail for example). As an alternative, I considered a Web site that would result in some XML being sent to the administrator—but Excel is a simple way to do this, and it works pretty well. (Again, to view the spreadsheet, download the script0600.exe sample file using the link at the top of this article.)

When launched, the script first checks to see which domains and exchange servers it needs to talk to.
We wanted to make our demo reasonably foolproof (you'd be amazed how your ability to remember things disappears once you're up in front of 6,500 people), so if you don't pass any arguments to the script, it prompts for the required ones with default values. The default values I have in the script are based on the setup I had for my demo domain, the infamous fabrikam.com. Be sure to change those to values that are appropriate for your organization. For more information on the machine setups, see the Machine Setup section below.

Setting Up the User

Once all the parameters for the script are set up, the script checks to see which domain it's going to use, and stores that information for later use. It does this by using the Active Directory Services Interface (ADSI) Lightweight Directory Access Protocol (LDAP) provider. When calling LDAP with the name of the computer on which the script is being run, it returns an object that the script can query for the domain to be used. Once you have the account domain, you can then bind to the LDAP provider again to pull further information needed for adding users and so forth.
' -------------------------------------------------------------
' Get the Default Domain name where we will create Windows 2000 users
' based on the computer name given
Set objRoot = GetObject("LDAP://" & strAccountComputer & "/RootDSE")
strAccountDomain = objRoot.Get("DefaultNamingContext")
strRootDomain = objRoot.Get("RootDomainNamingContext")
Set objRoot = Nothing

' Bind to the current account Domain to get its Netbios name
Set objDomain = GetObject ("LDAP://" & strAccountDomain)

' The domain name will be used to set access rights on directories
strAccountNBDomain = UCase (objDomain.get ("name"))
Set objDomain = Nothing

' Bind to the root domain to get its canonical name for UPN construction
Set objRootDomain = GetObject("LDAP://" & strRootDomain)

' Retrieve a constructed property, so first we do a GetInfoEx (for UPN construction)
objRootDomain.GetInfoEx Array("canonicalName"), 0
strRootDNSDomainName = objRootDomain.Get("canonicalName")

' Remove the / at the end
strRootDNSDomainName = Mid (strRootDNSDomainName, 1, _
    Len(strRootDNSDomainName) - 1)

Once the domain information has been gathered, the script has to load up the Excel spreadsheet that contains information about the new hires; then pull out the data to use in the script. This is achieved by creating an instance of Excel via createobject, and by then using the Excel object model to retrieve results from a predefined range in the spreadsheet.
' -------------------------------------------------------------
' Start the Excel Worksheet reading
' Bind to an Excel worksheet object
Set objXL = WScript.CreateObject("EXCEL.application")
Wscript.Echo "Reading Excel datasheet '" & strExcelSheetPath & "'"
Wscript.Echo

' Make it visible
'objXL.Visible = True

' Open the desired workbook containing the users list
objXL.workbooks.open strExcelSheetPath

' Activate the named page "Export"
objXL.sheets("Export").Activate

' Put the cursor on the A2 cell
objXL.ActiveSheet.range("A2").Activate

Once the range has been returned, the script can iterate through the row in the range and pull the user information from each cell in the row:

' Until we run out of rows
Do While objXL.activecell.Value <> ""

    ' -----------------------------------------------------
    ' Parameters for the user's mailbox creation under LDAP: namespace
    strFirstName = objXL.activecell.offset(0, 0).Value
    strLastName = objXL.activecell.offset(0, 1).Value

    '... More extraction of info here see the script for exact details

For each row, the script takes the information from the spreadsheet and uses ADSI to create a domain user. This is a reasonably simple process of calling the LDAP provider with the right information to create a new user. The key to this is the moniker for the user:

With this information, ADSI can query the Active Directory to see whether the user already exists. If the user does exist, then HR made a mistake (surely not!) and the script gracefully exits. If no user exists, a new user can be created using the create method from ADSI and setting a whole bunch of properties on the user with the information from the spreadsheet. Once all the properties have been set, the script tells ADSI to commit the changes to the directory via the setinfo method, and the user is created.

Once the user has been created, the script checks to see whether Exchange 2000 has been installed on the domain. If it has, the script adds the user to Exchange as well.
Adding a user to Exchange is pretty simple, because ADSI also provides a mechanism for talking to Exchange. The only real difference is that, rather than passing in a domain to the LDAP provider, you pass in the Exchange Organization. Once you query ADSI with the Exchange Organization, adding a user is pretty much identical to adding a user to Active Directory. In fact, it is creating a user in Active Directory, but this time with Exchange user information (the joys of an integrated directory!). The properties for an Exchange user are a little different, including X400 addresses (which this scripter hopes he never understands!), nicknames, and so forth. You can see how the script does this in the source code's EnableE-mailFunction.vbs include file.

Setting Up the File Share

Once users have been set up correctly, they need to have home folders and Web v-roots. These are set up by using a combination of ADSI and the FileSystemObject. The script creates a folder using the createfolder method on the FileSystemObject, and then calls ADSI to create a share. To create a share, the function uses the WINNT provider to ADSI, and calls the create method on it. This is just like calling the create method to add a user; the only difference is that you get a different ADSI object because you've passed in a different moniker. This time round, we pass in WINNT://machinename/lanmanserver rather than LDAP://domain/user, etc. Assuming that no share already exists, the script creates a share for the user.

Web Sharing

Creating a Web share is just an extension of the file share, since it, too, uses FileSystemObject and ADSI. Each user will have a myweb folder in his home folder, which will be used to create a v-root with the naming convention on the corporate Web server. This means the actual hierarchy of myweb is never exposed from the server—and, hopefully, it's a little easier for the user to understand.
Setting up a v-root in Internet Information Services (IIS) is achieved by using the IIS handler for ADSI. Again, you just query ADSI with the path to the folder by using the IIS ADSI handler, which returns an object that you can query (e.g., IIS://servername/sharename). Assuming that no share already exists, the script calls the create method on the returned object.

Once the v-root has been set up, the script creates a home page for the user that displays information about the user (phone number, and so on). This information is the same data that came from the spreadsheet, which is now also in the Active Directory. You could write an Active Server Page (ASP) file that queried ADSI for the information every time you went to the vroot—but that's probably a bit of overkill, since the page is really just a placeholder and the user will probably replace it pretty quickly. As a result, the script creates a static Web page with the information and saves that into the myweb folder.

To get the information into a Web page, we use the resource feature of WSH 2.0 and the replace function in VBScript. The resource element contains the HTML that will be used to create the page with the fields that will be replaced with the user information; the fields are delimited by %'s. The great thing about using the resource element is that you can put the HTML in a separate file, allowing HR folks to update the page easily without having to touch the script. If you're planning to use strings within a WSH Script, I highly recommend the resource element.

The script loads the string from the resource using the getResource function available within a .wsf file (and .wsc, if you're familiar with Windows Script Components). The script then uses the VBScript replace function to go through each user information field from the spreadsheet and update the HTML page. Once the replace has finished, the page is saved to the folder as default.htm.
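The resource-string-plus-replace pattern described above translates directly to other languages; here is a Python sketch with a hypothetical %-delimited template (the field names and values are invented for illustration, not taken from the actual WSH resource):

```python
# A %-delimited page template, in the spirit of the WSH resource element.
TEMPLATE = (
    "<html><body>"
    "<h1>%FirstName% %LastName%</h1>"
    "<p>Phone: %Phone%</p>"
    "</body></html>"
)

def fill_template(template, fields):
    """Replace each %Name% placeholder with its value, like the VBScript Replace loop."""
    page = template
    for name, value in fields.items():
        page = page.replace(f"%{name}%", value)
    return page

home_page = fill_template(TEMPLATE, {
    "FirstName": "Jo", "LastName": "Bloggs", "Phone": "555-0100",
})
```

Keeping the template separate from the loop preserves the article's main point: non-programmers can edit the page layout without ever touching the replacement code.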
Telling the User That They're Set Up

Once the user has been successfully added to the domain and to Exchange, and the home page is set up, the script sends the user a message that everything is done. To send the mail, the script uses the resource element and Collaborative Data Objects (CDO). The text of the e-mail is contained in the "WelcomeLetter" resource, and uses the same format as the HTML version, with %-delimited fields. The fields are replaced to build the e-mail, which tells the user where to get help and provides a list of the information that has been entered for them. Once the mail is ready to send, the script creates an instance of CDO, and calls the send method to send the mail.

Acknowledgements

There are two people I can't thank enough for getting this script going for Tech·Ed. First of all, Allain Lissoir from Compaq did a fantastic job of building the base script with ADSI. Allain's script was the basis for this demo, and we're grateful to him for all his work in the Windows Scripting arena. Finally, Mike Whalen, a developer on the Windows Script Host team, helped me out considerably by implementing my vague ideas so elegantly in the script.

Administrative Sanity Workbench

Setting up the user is only half the battle for system administrators. Once those new users can log on to the system, many of them will be phoning up helpdesk with questions about why their machine isn't working. To manage this and make it easier for a system administrator to keep some form of sanity, I put together a Web site using Windows Script, ASP, NetMeeting Remote Desktop sharing, and Windows Management Instrumentation. Some key things I'd like to point out here:

- You don't have to be running all your scripts in WSH in order to do admin tasks.
- You can use your skills from WSH just as easily in ASP.
- It's pretty easy to develop a UI that integrates a bunch of scriptable technologies.
I put together the sample in a couple of days, including a very bumpy plane trip from Seattle to Denver.

What Does It Do?

The Administrative Sanity Workbench is a Web site with a main page that divides the process of helping users into three steps:

- Give them the benefit of the doubt. This page shows the admin information about the user's machine (disk, memory, processes, etc.), and gives the administrator some limited control of the user's machine, such as listing and closing running processes.
- Still can't help 'em? If you can't sort out users' problems with Step One, you can take greater control of their machines via NetMeeting Remote Desktop sharing. This is a good way to provide mentoring support for the user and deal with those annoying concepts you just can't explain over the phone.
- Show them some love. This is the last resort, used for those situations when not even NetMeeting will sort out the problem. Clicking the button will cause the remote machine to reboot!

Figure 1. The REBOOT button

How Does It Work?

The main page that shows up when you navigate to the Administrative Sanity Workbench is a simple HTML page with a table for layout, an input box for the machine name, an <IFRAME> (which is used to show the information about the user's machine), an object tag for embedding the NetMeeting control onto the page, and a plain old HTML button for rebooting the machine.

Step One

When the page is loaded, it uses some script event handlers to hook up script to the buttons on the page. The first script that runs is the onclick handler for the Get Machine Info button. When the user clicks the button, a simple VBScript (it could easily be JScript) calls the navigate method on the <IFRAME> (frmInfo), which is on the page, passing in the URL for the machineinfo.asp file and giving the machine name as an argument to the page.
The navigate method causes the <IFRAME> to reload itself with the content returned by machineinfo.asp, which will be the machine information for the specified machine in the input box.

The machineinfo.asp file utilizes WMI to query the user's machine for its current running information. WMI allows script to query remote machines, so the script is actually running on the administrator's Web server and communicating over the network to the end user's machine.

The code in machineinfo.asp is actually pretty simple. First, it displays a title for the page using the machine name passed in when the <IFRAME> loaded in the page. It does this by using the Request.QueryString collection and, specifically, the machine variable. Once the title has been displayed, the main part of the script runs.

To query a machine about its current state, the script must first connect to the machine using WMI. This is achieved by using the WbemScripting.SWbemLocator object. This has a ConnectServer method, which returns an object that can be queried using standard SQL syntax, making the machine look like a database.

Note: There are a number of ways to query machines in WMI, but I like the SQL interface.

Once the script creates the WbemScripting object, it calls the ConnectServer method with the machine name passed into the page, and then sets the security impersonation level to 3 (which means the calls to the object will impersonate the user who created the object). Security is obviously important with WMI, since you don't want just anyone looking at your machine or, worse, changing settings. To enforce security, WMI will allow only users with administrative privileges to query WMI information. The Administrative Sanity Workbench further enforces this by turning off anonymous access to the Web site and enforcing an admins-only policy. I suggest you do something similar if you build a similar workbench.
Once the connection to the machine has been established, we can start to query it for the information that we require. First, machineinfo.asp will return some general operating system settings, including the OS that the machine is running, the specific OS version, and how much memory is available. To get this information, the script builds a simple SQL query, then calls the ExecQuery method on the objService object returned earlier by the ConnectServer method call.

Typically, there will be only one running operating system on a machine. (I can't imagine how you'd have two or more operating systems running simultaneously on one machine, but who knows in the future.) To make things reasonably easy, the script iterates through the collection of operating systems returned by the query and writes out the values for Caption (description of the OS), Version (version of the OS), FreeMemory, and FreeVirtualMemory. You'll notice that the script calls FormatNumber on the memory results to make the return values a little more human-readable.

The machineinfo page also returns information about the hard disks on the machine and the processes running. Getting this information is simply a case of changing the SQL query and executing it on the WMI object.

The Disk query is:

The Process query is:

The disk information is shown by iterating through the drive collection returned, and writing out the name and freespace properties on the drive object. The processes list does essentially the same, but with the addition of a hyperlink at the end of each line to show a process. This hyperlink allows the administrator to kill a selected process. The hyperlink calls kill.asp, and passes in the machine name and the process ID. The kill.asp file uses the same WbemScripting object and queries the machine for the process passed in. When the query is executed, the script calls the Terminate method on the process object returned, and the process is killed.
Set objLocator = CreateObject("WbemScripting.SWbemLocator")
Set objService = objLocator.ConnectServer(Request.QueryString("machine"), "root/cimv2")
objService.Security_.ImpersonationLevel = 3

'Set the query string.
strQuery = "Select processid, name, executablepath From Win32_Process where processid=" _
    & Request.QueryString("pid")

Dim objProcesses
Set objProcesses = objService.ExecQuery(strQuery)

For Each obj In objProcesses
    x = obj.Terminate
Next

With some reasonably simple script, machineinfo.asp provides a pretty powerful page that can help to diagnose and solve users' problems.

Step Two

Step One is great if there is an obvious problem on the machine, but quite often it's impossible to determine the problem without actually going to the machine. In the past, this meant physically getting up and going to the box. There are obvious downsides to this:

- You can't always get to the user machines; they could be miles away.
- You have to meet and talk to the users (okay, it isn't so bad!).

Luckily, NetMeeting 3.01 has a solution that makes this pretty simple: Remote Desktop Sharing. This feature allows you to share out your entire desktop and enables certain users to take control of it if you want them to. NetMeeting ships with Windows 2000, Windows 98 Second Edition, and Windows Millennium Edition, and is downloadable for Windows 95, Windows NT 4.0, and Windows 98.

To take advantage of Remote Desktop Sharing, the Administrative Sanity Workbench integrates NetMeeting so that an admin can just click the call button, and NetMeeting automatically connects to the end user's machine. Integrating NetMeeting into an HTML page is simple: Just add the object, then add some script event handlers for the onclick event for the Call User and Hang Up buttons. To call the user, only one line of script is required:

To end the call, the script calls the LeaveConference method on NetMeeting.
Once a call has been accepted, you can control a user's desktop and take a look at what the problem might be. The great thing about this is that you can guide the user through the problems he or she is having. This is a boon for anyone who's tried to explain, over the phone, how to get things to work. I use it to help out some of my more technically challenged family members who call me for technical support. If you suffer from this, you might want to try it out, too.

Step Three

There comes a time when a user just can't be helped, and some extreme measures must be taken for your sanity and quite possibly theirs as well. To implement this, the Admin Sanity Workbench includes the ultimate tool, reboot.asp. It's a simple page that just takes the machine name, calls WMI to get the operating system that's running, then calls the Shutdown method on it. Simple, but deadly efficient.

Machine Setup

There are some setup requirements to get these scripts to work on your machines. You'll need a Windows 2000 Domain Controller set up with Active Directory, and you'll need at least one machine running Windows with Windows Script 5.1 installed (Windows 2000 Professional already has this, as does any machine with Internet Explorer 5.01 installed). To take advantage of the Exchange features, you'll need the latest release candidate of Exchange 2000 (Beta 2 should work, but I ran with the latest release candidate to be safe).

On the server, you need two shares, Home$ and profile$. To use the script unchanged, the server should be called ScriptDemo; the domain, fabrikam; and all the files should be placed in a folder c:\demo. Changing these values will be pretty simple, so you might want to do that rather than set up your infrastructure the same as mine.

Summary

We hope that these scripts give you a head start in managing your Windows installations, and that they provide some good pointers as to what's possible with a little script.
Please feel free to enhance the scripts as you see fit and send in your improvements to the newsgroups at news://msnews.microsoft.com/microsoft.public.scripting.wsh.

Highlights from Orlando

Scripters were out in force in Orlando. More than 6,000 attended the Administering Windows 2000 with Windows Script session, and 1,500 arrived for the Windows Script for Developers presentation. If you couldn't make it then, try to catch the sessions at your local Tech·Ed (Peter Torr is giving the sessions in Europe), or check out the Windows Media on-demand versions on MSDN.

I took my trusty digital camera along with me, and tried to capture some of the atmosphere at Tech·Ed. Here are a few highlights:

Dino Esposito. Prolific writer (check out his excellent WSH book) and all-round script evangelist.

Ian Morrish. MSDN regional director, purveyor of all things script related.

6,500 people at the "Administering Windows 2000 with Windows Script" session.
https://msdn.microsoft.com/en-us/library/ms974582(d=printer).aspx
How would I accomplish tinting the tab bar of a TabbedPage in Xamarin.Forms? The TabbedPage doesn't seem to expose a property or method to set the tint directly. Every child page of my TabbedPage is a NavigationPage. Setting the "Tint" of the NavigationPage adjusts the nav bar; setting the "BackgroundColor" of those same NavigationPage children adjusts the tab bar in a very subtle way (seems to be a mix of the color I choose and some extreme opacity). This is on iOS specifically. How can I set it to the actual color I am specifying for the BackgroundColor, so that I can have it match the nav bar Tint?

There's two ways to do this: via the Appearance API, which works globally, or using a custom renderer.

[assembly: ExportRenderer(typeof(TabbedPage), typeof(TabbedPageCustom))]
namespace MobileCRM.iOS
{
    public class TabbedPageCustom : TabbedRenderer
    {
        public TabbedPageCustom ()
        {
            TabBar.TintColor = MonoTouch.UIKit.UIColor.Black;
            TabBar.BarTintColor = MonoTouch.UIKit.UIColor.Blue;
            TabBar.BackgroundColor = MonoTouch.UIKit.UIColor.Green;
        }
    }
}

That fixed it, thank you! How could this be accomplished on Android?

The code snippet above seems to no longer work in the current version. How is this to be done with the current Xamarin.Forms version?

Sorry, but when I add the code to my solution, the "TabBar" cannot be resolved. I think your code just covers a part of the whole development, right? What else do I need? Thank you!

@RonaldKasper have you added the namespaces such as Xamarin.Forms.Platform.iOS and Xamarin.Forms, and your renderer class's current namespace?

There's an effort on the Xamarin Forms Labs project to extend the tab page control, called ExtendedTabbedPage, that exposes some color properties that could help you:

I can't get Swipe or Tint color to work on ExtendedTabbedPage?

I cannot get swipe or tint to work either on ExtendedTabbedPage. Can someone please confirm that ExtendedTabbedPage.cs still works with the latest version of Xamarin/Xamarin Forms?
Any information would be greatly appreciated. Thanks.

@David.6954 I'm seeing the same thing.

@AliRFarahnak, @David.6954, @ErikAndersen.1430 ExtendedTabbedPage is only in v1.2.x for iOS at this time. What platforms are you guys using?

I'm targeting iOS. I don't have my computer by me at the moment, but I just subclassed ExtendedTabbedPage and put all of my code inside its constructor. Then I set it as my root view controller like so:

window.RootViewController = new MyCustomExtendedTabbedPage().CreateViewController();

Actually, I forgot that I'm using the Xamarin.Forms 1.3 preview, so that's probably why it isn't working.

Okay, so I confirmed that I'm using v1.2.3.6257 and it is still not working. Here's my code:

TabForm.xaml
TabForm.xaml.cs
AppDelegate.m

window.RootViewController = new TabForm().CreateViewController();

My NavigationForm is just a subclass of a NavigationPage. My DirectoryForm and AssignmentsForm are subclasses of ContentPage.

This works for me.

Thanks Steve! Your example worked just fine. Any ideas on how to achieve the same thing using custom renderers in Android while retaining the ActionBar at the bottom (much like on iOS)?

After further investigation, I unfortunately found that the code below in fact did not work. Apparently, my custom renderer was still applied, even though I did not use the implementation in my view. I gave it another shot, and found that the following worked just fine as well:

var tabs = new TabbedPage() { BackgroundColor = Color.FromRgb(76, 108, 139) };

No need to make custom renderers for applying a simple background color anymore, it'd seem.

@SteveChadbourne Thanks for posting, works fine for me.
hai friends, i need to fill the tab view in the screen. how can i do it in xamarin forms?

using System;
using Xamarin.Forms;

namespace Resturent_demo
{
    public class Search : TabbedPage
    {
        public Search ()
        {
            this.Title = "tabbedPage";
            this.ItemsSource = new NamedColor[] {
                new NamedColor ("Red", Color.Red),
                new NamedColor ("Yellow", Color.Yellow),
                new NamedColor ("Green", Color.Green),
                new NamedColor ("Aqua", Color.Aqua),
                new NamedColor ("Blue", Color.Blue),
                new NamedColor ("Purple", Color.Purple),

here is my code. where should i place the property of the tabs, such as color changing and fit for screen etc...

Does ExtendedTabbedPage still live using Xamarin.Forms 1.4+?

sorry for asking like this again, do you have any example?

@shamnad I'm trying to set a color on the tabbed bar:

public class MyFriendPage : ExtendedTabbedPage
{
    public MyFriendPage()
    {
        TintColor = Color.FromHex("00806E");
        BarTintColor = Color.FromHex("00806E");
        Title = "Contacts";
        Children.Add(MyFriendSearchPage()); // Content Page
        Children.Add(MyFriendList()); // Content Page
    }
}

But I can't see it working on Android. I'm using the latest version of Xamarin.Forms 1.4.0 and XLabs.Forms 2.0. Any ideas?

@MiguelCervantes i am new to Xamarin.Forms, that's why i am asking like this. i can't inherit ExtendedTabbedPage. i am also trying to customize the tab view in Xamarin.Forms. Or is it a class that you created in your program? if that class is system defined, which package would I have to add to my project?

I'm sorry dude @Shamnad, I misunderstood your question. In order to use ExtendedTabbedPage you need to download the XLabs.Forms package via NuGet. Once downloaded, simply add it to your page:

using XLabs.Forms.Controls;

After that you can inherit from ExtendedTabbedPage, using the sample code above. Actually it works with TabbedPage without the extended properties, but I can't change the bar color with ExtendedTabbedPage; that's why I'm asking if it still works.
Hope this sample helps, thanks @MiguelCervantes

@MiguelCervantes I am having the same issues. I extend the ExtendedTabbedPage and set the TintColor and the BarTintColor but nothing changes. Considering this was the main point of the control, it is a bit odd.

This works perfectly!
https://forums.xamarin.com/discussion/comment/192005
In C, strings are one special kind of array: a string is an array of char values:

char name[7];

I introduced the char type when I introduced types, but in short it is commonly used to store letters of the ASCII chart. A string can be initialized like you initialize a normal array:

char name[7] = { 'F', 'l', 'a', 'v', 'i', 'o' };

Or more conveniently with a string literal (also called a string constant), a sequence of characters enclosed in double quotes:

char name[7] = "Flavio";

You can print a string via printf() using %s:

printf("%s", name);

Do you notice how "Flavio" is 6 chars long, but I defined an array of length 7? Why? This is because the last character in a string must be a 0 value, the string terminator, and we must make space for it. This is important to keep in mind, especially when manipulating strings.

Speaking of manipulating strings, there's one important standard library that is provided by C: string.h. This library is essential because it abstracts many of the low-level details of working with strings, and provides us a set of useful functions. You can load the library in your program by adding this on top:

#include <string.h>

And once you do that, you have access to:

- strcpy() to copy a string over another string
- strcat() to append a string to another string
- strcmp() to compare two strings for equality
- strncmp() to compare the first n characters of two strings
- strlen() to calculate the length of a string

and many, many more. I will introduce all those string functions in separate blog posts, but just know that they exist.
https://flaviocopes.com/c-strings/
Graphics View is a new canvas framework for Qt 4. Superseding QCanvas, Graphics View has lots of new features, including item grouping, item interaction through events, and floating-point coordinates. Still, Graphics View's design is similar to QCanvas's, ensuring that the transition to the new framework is as painless as possible. In this article, we will port the "canvas" example included with Qt 3 to use Graphics View, demonstrating the porting process step by step.

Qt 3's canvas example consists of about 1000 lines of code. We will start with the original sources from the Qt 3 source package and port it one step at a time. At the end of the process, we will have two applications that behave exactly the same, but each using a different set of classes and a different version of Qt.

We start by making a copy of the examples/canvas directory from Qt 3. Then, using a command-line interpreter set up for Qt 4.2, we run the qt3to4 porting tool on the example's canvas.pro file. This starts an automated in-place modification of the example, taking care of the most tedious parts of porting an existing Qt 3 application to Qt 4.

$ qt3to4 canvas.pro
Using rules file: qt-4.2/tools/porting/src/q3porting.xml
Parsing...
Convert file tmp/canvas/canvas.cpp? (Y)es, (N)o, (A)ll
A
Wrote to file: tmp/canvas/canvas.cpp
Wrote to file: tmp/canvas/main.cpp
Wrote to file: tmp/canvas/canvas.h
Wrote to file: tmp/canvas/canvas.pro
Writing log to portinglog.txt

The qt3to4 tool starts by asking whether or not it should convert canvas.cpp. Simply type A to tell it to port all files. The tool generates a log file called portinglog.txt that shows all modifications that were made. The resulting program uses Q3Canvas and friends from the Qt3Support library.

We must make a few manual modifications to make the example compile with Qt 4. In this article's code snippets, we follow the diff program's convention and use a leading plus sign (+) to indicate that a code line is being added to the program.
Similarly, a leading minus (-) identifies a line that must be removed.

 #include <qapplication.h>
 #include <qimage.h>
+#include <QDesktopWidget>

In main.cpp, we must include <QDesktopWidget>, as QDesktopWidget is referenced indirectly from QApplication::desktop() at the end of the file. In Qt 3, this was not necessary because desktop() returned a plain QWidget pointer. Two modifications are required in canvas.cpp:

-    pixmap.convertFromImage(image, OrderedAlphaDither);
+    pixmap.convertFromImage(image, Qt::OrderedAlphaDither);

-    Main *m = new Main(canvas, 0, 0, WDestructiveClose);
+    Main *m = new Main(canvas, 0, 0, Qt::WDestructiveClose);

In Qt 3, Qt was a class. Since most Qt classes derived directly or indirectly from Qt, we could usually omit the Qt:: prefix. In Qt 4, Qt is a namespace, so we must either prefix every Qt member with the namespace name or add a using namespace Qt; directive at the beginning of our source files. The porting tool catches most of these cases, but not all of them.

At this point, the canvas example is a stand-alone Qt 4 application that compiles and runs correctly. The only issue is that we rely on the Qt3Support library, more precisely on Q3Canvas. We have three options at our disposal:

1. We can call it a day. After all, linking against Qt3Support is not a crime.
2. We can replace Q3Canvas with QtCanvas. The QtCanvas class is provided as a Qt Solution. Its purpose is to make it possible to use the old canvas framework without linking in the entire Qt3Support library.
3. We can port the application to Graphics View.

Porting to Graphics View means that we have a more solid basis for further development. For example, if at a later point we want to add item grouping to the canvas example, we can build upon Graphics View's grouping support. In the rest of this article, we will see how to carry out the third option. To help us in the process, we can consult the Porting to Graphics View page of the Qt online documentation.
Using the porting tables found there as a guide, we start by replacing Q3Canvas and Q3CanvasView with QGraphicsScene and QGraphicsView in all source files. We must also change the includes:

-#include <q3canvas.h>
+#include <QGraphicsItem>
+#include <QGraphicsScene>
+#include <QGraphicsView>

We also replace Q3CanvasItems with their closest equivalents in Graphics View.

Unsurprisingly, if we try to compile the example now, we get many errors and warnings. As a general rule when porting, it's a good idea to quickly bring the project up to a state where it compiles again, even if it doesn't work properly yet. Following this philosophy, the next step is to comment out any block of code that references functions that don't exist in the new API. This includes incompatible constructors and syntax errors. If a block of code accesses many functions that don't exist in the new API, we simply comment out the entire block. For this example, commenting out all erroneous code is a five-minute operation that quickly brings us to a point where the application compiles again.

Once the example compiles, we can run it and see what it looks like. The result isn't very exciting yet: all we have is a dysfunctional menu system and a gray canvas area. Let's start by fixing QGraphicsScene's constructor in main.cpp, so that the size of the canvas is back to what it was in the original example:

-    QGraphicsScene canvas; // (800, 600);
+    QGraphicsScene canvas(0, 0, 800, 600);

Unlike Q3Canvas, QGraphicsScene lets us specify an arbitrary top-left corner for the scene. To obtain the same behavior as before, we must explicitly pass (0, 0) to the constructor.
 FigureEditor::FigureEditor(QGraphicsScene &c, QWidget *parent,
                            const char *name, Qt::WFlags f)
-    // : QGraphicsView(&c, parent, name, f)
+    : QGraphicsView(&c, parent)
 {
+    setObjectName(name);
+    setWindowFlags(f);
 }

In the FigureEditor constructor, we uncomment the base class initialization, we remove the name and flags arguments from QGraphicsView's constructor, and instead call setObjectName() and setWindowFlags() explicitly. If we compile and run the example now, it will look exactly like the original example but without any items. This is a good indication that we are on the right track.

The next step is straightforward and quite fun to do: porting the item classes one at a time. For most cases, it's just a matter of dealing with code that we commented out ourselves. We will show how to port ImageItem, the base class for the butterfly and logo items.

 class ImageItem : public QGraphicsRectItem
 {
 public:
     ImageItem(QImage img, QGraphicsScene *canvas);
-    int rtti() const { return imageRTTI; }
+    int type() const { return imageRTTI; }
     bool hit(const QPoint &) const;

 protected:
-    void drawShape(QPainter &);
+    void paint(QPainter *, const QStyleOptionGraphicsItem *,
+               QWidget *);

 private:
     QImage image;
     QPixmap pixmap;
 };

The rtti() virtual function in Q3CanvasItem is called type() in QGraphicsItem, so we simply rename it in the ImageItem subclass. Instead of drawShape(), we reimplement paint(), which plays the same role but has a different signature.

 ImageItem::ImageItem(QImage img, QGraphicsScene *canvas)
-    // : QGraphicsRectItem(canvas), image(img)
+    : QGraphicsRectItem(0, canvas), image(img)
 {
-    // setSize(image.width(), image.height());
+    setRect(0, 0, image.width(), image.height());

Since Graphics View items can be created as children of other items, we must pass a null pointer as the parent to the QGraphicsRectItem constructor to create a top-level item.
Also, there is no setSize() function in QGraphicsRectItem; instead, we call setRect() and pass (0, 0) as the top-left corner.

-void ImageItem::drawShape(QPainter &p)
+void ImageItem::paint(QPainter *p,
+                      const QStyleOptionGraphicsItem *,
+                      QWidget *)
 {
 #if defined(Q_WS_QWS)
-    p.drawImage(int(x()), int(y()), image, 0, 0, -1, -1,
-                Qt::OrderedAlphaDither);
+    p->drawImage(0, 0, image, 0, 0, -1, -1,
+                 Qt::OrderedAlphaDither);
 #else
-    p.drawPixmap(int(x()), int(y()), pixmap);
+    p->drawPixmap(0, 0, pixmap);
 #endif
 }

We turn the drawShape() implementation into a paint() implementation. The tricky part here is that with Graphics View, the QPainter is set up so that we must draw in local coordinates. As a consequence, we call drawPixmap() with (0, 0) as the top-left corner.

If we compile and run the application now, we obtain logos and butterflies stacked on top of each other in the window's top-left corner. This shouldn't come as a surprise, considering that we've commented out several code sections, including the item placement code in Main::addButterfly(), Main::addLogo(), etc. Let's start by fixing addButterfly():

 QAbstractGraphicsShapeItem *i =
     new ImageItem(butterflyimg[rand() % 4], &canvas);
-// i->move(rand() % (canvas.width()
-//                   - butterflyimg->width()),
-//         rand() % (canvas.height()
-//                   - butterflyimg->height()));
-// i->setZ(rand() % 256 + 250);
-i->show();
+i->setPos(rand() % int(canvas.width()
+                       - butterflyimg->width()),
+          rand() % int(canvas.height()
+                       - butterflyimg->height()));
+i->setZValue(rand() % 256 + 250);

QGraphicsItem::setPos() does the same as what move() did before, and setZ() is now called setZValue(). We also need a couple of int casts because QGraphicsScene's width() and height() functions now return floating-point values and % is only defined for integers. Notice that we don't need to call show() anymore, because Graphics View items are visible by default.
Similar changes are required in addLogo():

 QAbstractGraphicsShapeItem *i =
     new ImageItem(logoimg[rand() % 4], &canvas);
-// i->move(rand() % (canvas.width()
-//                   - logoimg->width()),
-//         rand() % (canvas.height()
-//                   - logoimg->width()));
-// i->setZ(rand() % 256 + 256);
-i->show();
+i->setPos(rand() % int(canvas.width()
+                       - logoimg->width()),
+          rand() % int(canvas.height()
+                       - logoimg->width()));
+i->setZValue(rand() % 256 + 256);

If we run the example now, all butterflies and logos are placed at random positions as we would expect. Our example is starting to look more and more like the original.

Next function: addSpline(). For Q3CanvasSpline, the natural transition is to port to QGraphicsPathItem, which is based on QPainterPath. QPainterPath supports curves, but its interface is different from that of Q3CanvasSpline, so some porting is required:

-// QGraphicsPathItem *i =
-//     new QGraphicsPathItem(&canvas);
+QGraphicsPathItem *i =
+    new QGraphicsPathItem(0, &canvas);
 Q3PointArray pa(12);
 pa[0] = QPoint(0, 0);
 pa[1] = QPoint(size / 2, 0);
 ...
 pa[11] = QPoint(-size / 2, 0);
-i->setControlPoints(pa);
+QPainterPath path;
+path.moveTo(pa[0]);
+for (int i = 1; i < pa.size(); i += 3)
+    path.cubicTo(pa[i], pa[(i + 1) % pa.size()],
+                 pa[(i + 2) % pa.size()]);
+i->setPath(path);

Because QGraphicsPathItem is a generic vector path item (not a spline item), we must build the curve shape manually using QPainterPath's moveTo() and cubicTo() functions instead of calling Q3CanvasSpline::setControlPoints().

Porting the other items is straightforward, so let's skip ahead a bit and add some navigation support to the view instead. In Main::rotateClockwise(), we can replace three lines of code with one to get rotation support.
 void Main::rotateClockwise()
 {
-    // QMatrix m = editor->worldMatrix();
-    // m.rotate(22.5);
-    // editor->setWorldMatrix(m);
+    editor->rotate(22.5);
 }

We can do similar changes to the other functions accessible through the View menu (zoomIn(), zoomOut(), etc.). In Main::print(), we must reintroduce printing support:

 void Main::print()
 {
     if (!printer)
         printer = new QPrinter;
     if (printer->setup(this)) {
         QPainter pp(printer);
-        // canvas.drawArea(QRect(0, 0, canvas.width(),
-        //                       canvas.height()),
-        //                 &pp, FALSE);
+        canvas.render(&pp);
     }
 }

The only change here is due to the differences in the signatures of Q3Canvas::drawArea() and QGraphicsScene::render(). It turns out we can leave out most arguments to render() because suitable default values are provided with the new API.

That's it! We now have a fully functional canvas example based on the new Graphics View framework. As you can see from this article, moving from Q3Canvas to Graphics View is a fairly straightforward operation, once we've understood the main differences between the two APIs.

There are a few pitfalls, though. Some Q3Canvas features that were targeted at 2D game programming, such as tiles and sprites, are not directly supported in Graphics View; the "Porting to Graphics View" page explains how to implement these features without too much hassle. In addition, existing code that implicitly relied on integer precision might still compile but behave differently with Graphics View.

The complete source code for the ported canvas example is distributed in Qt 4.2's examples/graphicsview/portedcanvas directory. Qt 4.2 also includes a portedasteroids example that corresponds to the Qt 3 asteroids example, which also used QCanvas. To conclude, I would like to wish you good luck with porting to Graphics View! If you run into trouble, let us know so we can help and possibly improve the porting documentation.
http://doc.trolltech.com/qq/qq21-portingcanvas.html
Active Record defines the object-relational mapping between Ruby objects and database tables. The first section of this article will deal with Active Record and how to work with CRUD operations using it. The later section of the article guides you in developing a full-blown web application using the Active Record API.

Active Record Basics

There are various implementations of the object-relational mapping model available in Ruby. One such popular implementation is Active Record. Using the Active Record API in an application leads to writing fewer lines of code. Active Record also provides various utility classes for directly creating tables from the application using simpler syntax. The mapping between Ruby classes and database tables can be elegantly done. Active Record follows the convention of having plural names for table names and singular names for class names. For example, the class 'Employee' will map directly to the 'employees' table in the Active Record context. It is also possible to dynamically generate method names through Active Record, an example of which will be seen later in this article.

Creating records

In this section, we will see the basics of using the Active Record API; more specifically, we will see how to map Ruby classes to a database table for creating records in the database. Note that this example uses the MySQL database, so before using the application, make sure a compatible version of the MySQL gem is installed on your machine.
require "logger"
require "rubygems"
require "active_record"
require "pp"

ActiveRecord::Base.logger = Logger.new(STDOUT)
ActiveRecord::Base.establish_connection(:adapter => "mysql",
  :database => "ruby", :username => "root", :password => "XXX")

class Bank < ActiveRecord::Base
end

Bank.delete_all

hdfc_bank = Bank.new();
hdfc_bank.id = '1';
hdfc_bank.name = "HDFC Bank";
hdfc_bank.operation_date = Date.today;
hdfc_bank.head_office = "Mumbai";
hdfc_bank.save;
puts ("HDFC Bank object created");

sbi_bank = Bank.new();
sbi_bank.id = '2';
sbi_bank.name = "SBI Bank";
sbi_bank.operation_date = Date.today;
sbi_bank.head_office = "Bangalore";
sbi_bank.save;
puts ("SBI Bank object created");

icici_bank = Bank.new();
icici_bank.id = '3';
icici_bank.name = "ICICI Bank";
icici_bank.operation_date = Date.today;
icici_bank.head_office = "Delhi";
icici_bank.save;
puts ("ICICI Bank object created");
A new empty bank record is created using the statement Bank.new(), after which we initialize various properties of the bank object. A call to save() on the Bank object persists the entity to the database.

Finding Records

In the previous section, we saw how to use the Active Record API for inserting data into the database. In this section, we will see how to find data using easy-to-use predefined data-aware methods.

require "logger"
require "rubygems"
require "active_record"
require "pp"

ActiveRecord::Base.logger = Logger.new(STDOUT)
ActiveRecord::Base.establish_connection(:adapter => "mysql", :database => "ruby", :username => "root", :password => "XXX")

class Bank < ActiveRecord::Base
end

bank_object = Bank.find(:first)
puts ("Id is #{bank_object.id}");
puts ("Name is #{bank_object.name}");

bank_object = Bank.find(:last)
puts ("Id is #{bank_object.id}");
puts ("Name is #{bank_object.name}");

bank_object = Bank.find(2)
puts ("Id is #{bank_object.id}");
puts ("Name is #{bank_object.name}");

all_banks = Bank.find_by_sql("select * from banks");
for bank in all_banks
  puts ("#{bank.name}");
end

sbi_bank = Bank.find_by_name("SBI Bank");
puts ("Name is #{sbi_bank.name}");

icici_bank = Bank.find_by_head_office("Delhi");
puts ("Name is #{icici_bank.name}");

Bank.find(:all).each do |bank|
  puts "#{bank.name}"
end

The constants :first and :last carry special meaning in the context of Active Record; used in tandem with the find() method, they fetch the first and the last records from the database. It is also possible to retrieve data using the primary key value. We have used find() by passing in a value of 2; in this case a comparison happens between the primary key column and 2. Other variations for finding objects are through queries and dynamic methods. For example, in the above code we have used a query for fetching all the bank objects from the database.
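The find_by_name and find_by_head_office methods used above are not defined anywhere in the Bank class; Active Record generates them dynamically. As a rough illustration of how such dynamic finders are possible, here is a simplified sketch using plain Ruby and an in-memory array of rows. This is not Active Record's actual implementation; it only demonstrates the method_missing hook that makes this style of API possible.

```ruby
# Simplified sketch of dynamic finders: NOT Active Record's real code,
# just an in-memory illustration of the method_missing technique.
class TinyTable
  def initialize(rows)
    @rows = rows  # an array of hashes standing in for database rows
  end

  # Called for any method the object does not define, e.g. find_by_name.
  def method_missing(name, *args)
    if name.to_s.start_with?("find_by_")
      attribute = name.to_s.sub("find_by_", "").to_sym
      @rows.find { |row| row[attribute] == args.first }
    else
      super
    end
  end

  def respond_to_missing?(name, include_private = false)
    name.to_s.start_with?("find_by_") || super
  end
end

banks = TinyTable.new([
  { :name => "HDFC Bank", :head_office => "Mumbai" },
  { :name => "SBI Bank",  :head_office => "Bangalore" }
])

puts banks.find_by_name("SBI Bank")[:head_office]
puts banks.find_by_head_office("Mumbai")[:name]
```

Active Record's real implementation is more involved (it builds SQL from the attribute name and caches the generated methods), but the shape of the API comes from this same language feature.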
Another useful strategy for retrieving data from the database is the find_by_<property_name> notation. This means it is possible to call the methods find_by_name(), find_by_head_office() and find_by_operation_date() directly on the Bank class by passing in the appropriate property value.

Edit Records

Having seen the usage of create and find, we will now see how to use Active Record for editing persistent objects. The following code snippet illustrates the usage.

require "logger"
require "rubygems"
require "active_record"
require "pp"

ActiveRecord::Base.logger = Logger.new(STDOUT)
ActiveRecord::Base.establish_connection(:adapter => "mysql", :database => "ruby", :username => "root", :password => "XXX")

class Bank < ActiveRecord::Base
end

bank = Bank.find(1);
bank.name = "HDFC";
bank.save;

Bank.update_all("head_office = 'New Location'");
Bank.delete_all();

We have used a flavour of the find() method to retrieve the object from the database, and then updated its properties in the regular way. A call to save() will now update the corresponding entity in the database instead of creating a new one. Likewise, there are other useful methods such as update_all() and delete_all(). update_all() accepts an expression; in the example we have used 'head_office = New Location', which sets every head_office column value to 'New Location'. Similarly, delete_all() removes all the records from the database.

Establishing relationships

Complex relationships between objects, as well as the corresponding mapping at the database level, can be easily achieved using Active Record. The following example illustrates a relationship between blog and post objects. A blog can have multiple posts, and each post must know which blog it belongs to.
require "logger"
require "rubygems"
require "activerecord"

ActiveRecord::Base.logger = Logger.new(STDOUT)
ActiveRecord::Base.establish_connection(:adapter => "mysql", :database => "ruby", :username => "root", :password => "XXX")

ActiveRecord::Schema.define do
  create_table :blogs, :force => true do |b|
    b.string :name
    b.string :title
    b.string :description
  end

  create_table :posts, :force => true do |p|
    p.integer :blog_id
    p.text :description
  end
end

class Blog < ActiveRecord::Base
  has_many :posts
end

class Post < ActiveRecord::Base
  belongs_to :blog
end

Post.delete_all();
Blog.delete_all();

java_blog = Blog.create(
  :name => "Java Blog",
  :title => "Java beat Blog",
  :description => "Contains Java Articles and Tips"
)

post1 = Post.new();
post1.id = 'P1'
post1.description = 'Java Articles are great';
post1.blog = java_blog;
post1.save;

post2 = Post.new();
post2.id = 'P2'
post2.description = 'Java Tips are very useful';
post2.blog = java_blog;
post2.save;

cpp_blog = Blog.create(
  :name => "CPP Blog",
  :title => "CPP Blog",
  :description => "Contains CPP Articles and Tips"
)

post3 = Post.new();
post3.id = 'P3'
post3.description = 'CPP Articles are great';
post3.blog = cpp_blog;
post3.save;

post4 = Post.new();
post4.id = 'P4'
post4.description = 'CPP Tips are very useful';
post4.blog = cpp_blog;
post4.save;

The above example also illustrates the usage of Active Record's Schema for creating tables and relationships directly from the application. The keyword 'has_many' defines a one-to-many relationship between the Blog and Post objects, and 'belongs_to' ensures that the master reference to the Blog is preserved in the Post object.
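What has_many and belongs_to generate can be pictured with a plain-Ruby, in-memory analogue. This is not Active Record (there is no database, and the wiring below is hand-written purely for illustration): the parent gains a collection accessor, and the child gains a back reference to its parent.

```ruby
# Hand-written, in-memory analogue of has_many / belongs_to.
# Not Active Record: just an illustration of the accessors it generates.
class Blog
  attr_reader :name, :posts
  def initialize(name)
    @name = name
    @posts = []            # what `has_many :posts` exposes on the parent
  end
end

class Post
  attr_reader :description, :blog
  def initialize(description, blog)
    @description = description
    @blog = blog           # what `belongs_to :blog` exposes on the child
    blog.posts << self     # Active Record derives this link from blog_id
  end
end

java_blog = Blog.new("Java Blog")
Post.new("Java Articles are great", java_blog)
Post.new("Java Tips are very useful", java_blog)

puts java_blog.posts.size            # navigate parent -> children
puts java_blog.posts.first.blog.name # navigate child -> parent
```

In the real Active Record version, the link is persisted through the blog_id foreign-key column on the posts table rather than an in-memory array.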
After running the application, the table structure will look something similar to the following.

Blog
Id | Name      | Title          | Description
1  | Java Blog | Java beat Blog | Contains Java Articles and Tips
2  | CPP Blog  | CPP Blog       | Contains CPP Articles and Tips

Post
Id | Blog Id | Description
1  | 1       | Java Articles are great
2  | 1       | Java Tips are very useful
3  | 2       | CPP Articles are great
4  | 2       | CPP Tips are very useful

Creating a database-aware application

In this example, we will create a sample application for task management. Basically, the application will provide options for creating, deleting, editing and viewing task items. We will take the support of various utilities that come as part of the Rails distribution for creating this sample.

Creating the project

Create a project called 'task' by executing the following command. The command ensures that the basic set of artifacts necessary for a Rails project is created.

rails new task

The database connection details go in config/database.yml:

development:
  adapter: mysql
  database: javabeat_dev
  username: root
  password: ####
  pool: 5
  timeout: 5000

test:
  adapter: mysql
  database: javabeat_test
  username: root
  password: ####
  pool: 5
  timeout: 5000

production:
  adapter: mysql
  database: javabeat_prod
  username: root
  password: ####
  pool: 5
  timeout: 5000

Declare the MySQL gem:

gem 'mysql', '2.8.1'

Then create the databases:

rake db:create

Scaffolding

In Rails terms, scaffolding refers to the process of generating artifacts such as controllers, models and views. In our case, we want to create a controller, a model and multiple views for the task item. Execute the following command:

rails generate scaffold TaskItem tile:string summary:text start_date:date end_date:date status:string priority:string total_hours_of_work:integer

This generates the model:

class TaskItem < ActiveRecord::Base
end

and the controller:

class TaskItemsController < ApplicationController
  ...
end

The generated 'new' action looks like this:

def new
  @task_item = TaskItem.new
  respond_to do |format|
    format.html # new.html.erb
    format.xml { render :xml => @task_item }
  end
end
def create
  @task_item = TaskItem.new(params[:task_item])
  respond_to do |format|
    if @task_item.save
      format.html { redirect_to(@task_item, :notice => 'Task item was successfully created.') }
      format.xml { render :xml => @task_item, :status => :created, :location => @task_item }
    else
      format.html { render :action => "new" }
      format.xml { render :xml => @task_item.errors, :status => :unprocessable_entity }
    end
  end
end

def update
  @task_item = TaskItem.find(params[:id])
  respond_to do |format|
    if @task_item.update_attributes(params[:task_item])
      format.html { redirect_to(@task_item, :notice => 'Task item was successfully updated.') }
      format.xml { head :ok }
    else
      format.html { render :action => "edit" }
      format.xml { render :xml => @task_item.errors, :status => :unprocessable_entity }
    end
  end
end

def show
  @task_item = TaskItem.find(params[:id])
  respond_to do |format|
    format.html # show.html.erb
    format.xml { render :xml => @task_item }
  end
end

Delete Task

For deleting a task, we follow similar logic to 'edit', where the id of the task is used to fetch the task to be deleted. The method destroy() actually deletes the task object.

def destroy
  @task_item = TaskItem.find(params[:id])
  @task_item.destroy
  respond_to do |format|
    format.html { redirect_to(task_items_url) }
    format.xml { head :ok }
  end
end

View all Tasks

The code snippet for viewing all the tasks is given below.

def index
  @task_items = TaskItem.all
  respond_to do |format|
    format.html # index.html.erb
    format.xml { render :xml => @task_items }
  end
end
<h1>Listing all the task items</h1>
<table border = '1'>
  <tr>
    <th>Tile</th>
    <th>Summary</th>
    <th>Start date</th>
    <th>End date</th>
    <th>Status</th>
    <th>Priority</th>
    <th>Total hours of work</th>
    <th></th>
    <th></th>
    <th></th>
  </tr>
<% @task_items.each do |task_item| %>
  <tr>
    <td><%= task_item.tile %></td>
    <td><%= task_item.summary %></td>
    <td><%= task_item.start_date %></td>
    <td><%= task_item.end_date %></td>
    <td><%= task_item.status %></td>
    <td><%= task_item.priority %></td>
    <td><%= task_item.total_hours_of_work %></td>
    <td><%= link_to 'Show', task_item %></td>
    <td><%= link_to 'Edit', edit_task_item_path(task_item) %></td>
    <td><%= link_to 'Destroy', task_item, :confirm => 'Are you sure?', :method => :delete %></td>
  </tr>
<% end %>
</table>
<br />
<%= link_to 'New Task item', new_task_item_path %>

<h1>Create new task page</h1>
<%= render 'form' %>
<%= link_to 'Back', task_items_path %>

<h1>Edit the selected task</h1>
<%= render 'form' %>
<%= link_to 'Show', @task_item %> |
<%= link_to 'Back', task_items_path %>

<h2> View information about the selected task </h2>
<b>Tile:</b> <%= @task_item.tile %>
<b>Summary:</b> <%= @task_item.summary %>
<b>Start date:</b> <%= @task_item.start_date %>
<b>End date:</b> <%= @task_item.end_date %>
<b>Status:</b> <%= @task_item.status %>
<b>Priority:</b> <%= @task_item.priority %>
<b>Total hours of work:</b> <%= @task_item.total_hours_of_work %>
<%= link_to 'Edit', edit_task_item_path(@task_item) %> |
<%= link_to 'Back', task_items_path %>
http://www.javabeat.net/creating-database-aware-applications-in-ruby-on-rails/2/
Yes that's good advice. Yes that's good advice. Try quick sort and count the sorts? use bubble sort and count the sort. yes salem is right. Goodbye man. . . You have left behind a legacy . . . First in your family and second in your teachings. Seems logical. Yes, you can, but I disagree with post #2, second part. An important consideration if you wish to stay faithful to the ELS principal is... 'If you take your block of text to be a two dimensional array, then you have to imagine the two vertical edges of... I don't think the risk contest would get many participants. Much for the same reason why any chess AI contests have never really been successful. It's a bit too time consuming to create the AI,... Only if the program uses a completely random method to place the tiles. If you design the scramble method to shuffle the tiles as you would on a real-life tactile board, this shouldn't be a problem. I don't know then. By not work you mean? Are you getting any compiler errors? No try jawib's code on its own. Go to the menu... File>new>project>windows application> Type in a name for your file. Then copy and paste your code in there. So is the webmaster going to do anything about this? I think things are just fine the way they are. I personally don't like scroll bars in code tags. And as for a [cpp] or [c] syntaxer, you could just write your own. The only problem I have is... I rather like this example from the FAQ #include <string> #include <iostream> #include <vector> int main(void) { std::string numbers_str =... When you eventually get this far... Good luck Try using strings instead? #include <cstdlib> #include <iostream> #include <string> using namespace std; int main() There are, I'm not sure if you already knew this, three possible cases. Two circles may intersect in two imaginary points, a single degenerate point, or two distinct points. ... Where did the extra point come from? I thought you were Quzah for a moment there... 
;) Try compiling in .cpp You might need a cin.get() as well near the end of main to pause the program. might be beaten to this but here goes... #include <ctype.h> #include <iomanip> #include <fstream> #include <string> #include <iostream> using namespace std; Where's the fun in that? We wouldn't have so much fun nitpicking other people's code, saying oooh look that's not portable and other stuff ;) It's the little battles for supremacy which make...
https://cboard.cprogramming.com/search.php?s=b2fd929d72bd51c0372fa8a64e2d8bc0&searchid=4922572
QSerialPort doesn't work in QThread

Hello, I want to use QSerialPort inside a QThread. Unfortunately, the serial port inside a thread doesn't work: the code compiles with no errors, the application runs, and the write function of QSerialPort returns the number of bytes it sends, but the receiver can't get anything. If I put QSerialPort outside the thread, everything works perfectly. I use Qt SDK 5.11, Qt Creator 4.6.1 and GCC 5.3.1 on Ubuntu 18.04 LTS, with a virtual port (socat). The code is:

CThread.h

#ifndef CTHREAD_H
#define CTHREAD_H

#include <QObject>
#include <QThread>
#include <QtSerialPort/QtSerialPort>

class CThread : public QThread
{
    Q_OBJECT
private:
    bool m_cancel;
    static const QString PORT_NAME;
    static const int BAUD_RATE;
    QSerialPort *m_serialPort;
    void init();
public:
    explicit CThread();
    virtual ~CThread() override;

    // QThread interface
protected:
    void run() override;
};
#endif // CTHREAD_H

CThread.cpp

#include "cthread.h"
#include <QDebug>

const QString CThread::PORT_NAME = "/dev/pts/1";
const int CThread::BAUD_RATE = 38400;

void CThread::init()
{
    m_cancel = false;
    m_serialPort = new QSerialPort();
    m_serialPort->setPortName(PORT_NAME);
    m_serialPort->setBaudRate(BAUD_RATE);
    m_serialPort->open(QIODevice::ReadWrite);
}

CThread::CThread()
{
}

CThread::~CThread()
{
    qDebug() << "~CThread";
    m_cancel = true;
    // wait(500);
    if (m_serialPort->isOpen())
        m_serialPort->close();
    if (m_serialPort != nullptr)
    {
        delete m_serialPort;
    }
    m_serialPort = nullptr;
}

void CThread::run()
{
    int i = 10;
    init();
    char data[] = {0x00, 0x01, 0x02, 0x03, 0x04, 0x05};
    if (m_serialPort->isOpen())
        while (i)
        {
            qDebug() << m_serialPort->write(data, 6);
            i--;
        }
    qDebug() << "end of thread";
}

main.cpp

#include <QCoreApplication>
#include <QObject>
#include <QThread>
#include "cthread.h"

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);
    CThread thread;
    QObject::connect(&thread, &QThread::finished, &thread, &QObject::deleteLater);
    thread.start();
    return a.exec();
}

I really appreciate your help. Regards,

@Alien said in QSerialPort doesn't work in QThread: "but the receiver can't get anything". I don't see where your code attempts to do anything with the serial port in the main thread (not the CThread), other than deleteLater(), if that is what your problem is? So I conclude the code as posted works fine?

Please read through this post and redo your threading. Overwriting run is, for your use case, inadequate/not recommended.

aha_1980: Your main problem is that you create the serial port in one thread, but use it in another one. Reason: CThread::init() is run in the calling thread, but CThread::run() runs in a new thread. It should already work if you create the serial port object within run(). Regards

I would try and move this:

const QString CThread::PORT_NAME = "/dev/pts/1";
const int CThread::BAUD_RATE = 38400;

either into the header or into the constructor of the class. IIRC, "global" variables inside the cpp part of the class may be initiated after the functions are already called!?

Guys, thank you for replying. I searched around this problem before posting the question. I know that the run function will execute in another thread; that's why I put the init function inside the run function. @aha_1980 is it wrong to put the init function in run, or is it wrong to have it private? @J-Hilk "Overwriting run is for your use case inadequate/not recommended." What is recommended? I can't get your point, could you please give me more information? (Also, I changed PORT_NAME and BAUD_RATE and the application still doesn't work.) @JonB does it work for you? Could you please put the code here?

I would imagine that your QSerialPort will not only send data once; your goal is communication between your app and your hardware. However, by overwriting run, you will end up implementing your very own SIGNAL/SLOT handling, data passing between threads, etc.
By going with the worker-class QThread::moveToThread approach, you end up passing a lot of work to the framework. It also has the nice benefit that you can create your QSerialPort in its own class, non-threaded, make sure everything works as you want it to, and later move it into its own thread with hardly any changes inside your code.

@J-Hilk I've read how-to-really-truly-use-qthreads-the-full-explanation. I can't understand why she wrote "The main thing to keep in mind when using a QThread is that it's not a thread. It's a wrapper around a thread object. This wrapper provides the signals, slots and methods to easily use the thread object within a Qt project. This should immediately show why the recommended way of using QThreads in the documentation, namely to sub-class it and implement your own run() function, is very wrong." If overriding the run function is very wrong, why do the Qt documentation examples introduce this way even in Qt 5.11? Also, I used the Maya Posch approach and my app still can't use the serial port in a thread.

You are probably just deleting the serial port too early. The easy but ugly way to solve it would be adding m_serialPort->waitForBytesWritten after m_serialPort->write. Your CThread destructor is a race condition. It is executed in the main thread, and you are calling methods on m_serialPort and, even worse, deleting it directly. I really recommend you follow @J-Hilk's advice.

@VRonin said in QSerialPort doesn't work in QThread: "m_serialPort->waitForBytesWritten". What do you recommend for using QSerialPort in a QThread?
This change to the docs was made with Qt 5.9, iirc.

Hello and welcome @Alien, I am glad you read Maya's blog and redid your threading. The worker-thread approach is the best way to set up the threads. With your serial port in the worker object, you will only need QThread to run the event queue, which is VERY important to QSerialPort event handling. As you have already found out, overriding "run" will block event queue processing. One thing I did not see is where you are connecting to the readyRead signal. This will allow your worker object to read the data from the serial port as it arrives. If you are still having issues, post your new threading model so we can see where the disconnect is happening and our eyes can give you insight.

Thanks for all of your help. If my understanding is right, it's a bad idea to use a serial port inside a thread while QSerialPort is async itself; also, when I implement QThread's run function, I actually disrupt the event dispatcher of that thread, so signals and slots can't work properly for QSerialPort in the thread due to the infinite while inside run. If my understanding is still wrong, please help me and make me aware of that. Yours,

@Buckwheat how to run the event queue inside my thread? A code snippet would be helpful.

Call QThread::exec() inside run(). I still suggest you convert to the worker object design, though.

QThread has an event queue and will run freely if you do NOT override run! Using the technique outlined by Maya will allow the thread's event queue to run. You just need to connect your signals for the serial port and things will run freely.
Here is a sample for starting a GNSS receiver in a thread (using Maya's method):

*** NOTE: This is really pseudo code as it comes from our proprietary code ***

mpQ_GnssRcvrThread = new QThread;

// The receiver code is implemented in an event-based fashion as a QObject
// but will be owned by a dedicated thread, to ensure independence from
// risk of blocking operations being used by other parts of the system.
mpQ_GnssRcvr->moveToThread (mpQ_GnssRcvrThread);

// Trigger worker start upon thread start
connect (mpQ_GnssRcvrThread, &QThread::started, mpQ_GnssRcvr, &GNSSReceiver::start);

// Delete
connect (mpQ_GnssRcvrThread, &QThread::finished, mpQ_GnssRcvr, &GNSSReceiver::deleteLater);
connect (mpQ_GnssRcvrThread, &QThread::finished, mpQ_GnssRcvrThread, QThread::deleteLater);

Now in the GNSSReceiver object (derived from QObject):

mQ_SerialPort = new QSerialPort (this);
mQ_SerialPort->open (...);

// Connect data handling
connect (mQ_SerialPort, &QSerialPort::readyRead, this, &handleCommsChan);

In handleCommsChan:

QByteArray Q_Data = mQ_SerialPort->readAll ();
... do stuff ...

The GNSSReceiver object runs inside of the QThread event loop. Writing to the serial port can be done asynchronously. The only Qt object that has issues with worker objects is QTimer. They need to be created in the thread space and assert errors if you try to start/stop them outside of the thread. I use this technique for serial port, network, and serial bus (CAN) interfacing (asynchronous objects). I am liking QFuture for worker threads (functions to do something) currently.

@VRonin, as you know, if you get rid of run (or at least use run to initialize your thread and call QThread::exec on exiting) the thread works. I prefer to treat the thread as a container and just call object->moveToThread (new QThread) myself for most things.
https://forum.qt.io/topic/91681/qserialport-doesn-t-work-in-qthread
This is a "common" bug in .NET services where it uses a construct like: <element type="s:schema" ..../> but doesn't import the schema namespace itself into the schema. There are two options: 1) Add an <import namespace="" location="....."/> 2) Change the above to something like: <xsd:any /> or something that is generic. The downside of (1) is that it will generate a ton of useless code if you wsdl2java it. However, it likely is more "proper". Dan On Wednesday, May 04, 2011 7:17:15 AM DharmalingamP wrote: > Hi, > > I need to create a Web Service Client for an external .NET web service. > The Eclipse is showing the following error in WSDL and couldn't create the > web service client. > > src-resolve.4.2: Error resolving component 's:schema'. It was detected that > 's:schema' is in namespace '', but > components from this namespace are not referenceable from schema document > ' > > C:/Workspace_Webservices/SampleClient/WebContent/META-INF/wsdl/Sample.wsdl' > . If this is the incorrect namespace, perhaps the prefix of 's:schema' > needs to be changed. If this is the correct namespace, then an appropriate > 'import' tag should be added to > ' > ple.wsdl'. > > My environment details: > Tomcat 6.0, Apache CXF 2.3.0 > > Please anybody help me out. Thanks in advance. > > Thanks and Regards, > Dharmalingam P. > > -- > View this message in context: >- > s-schema-tp4369529p4369529.html Sent from the cxf-issues mailing list > archive at Nabble.com. -- Daniel Kulp dkulp@apache.org Talend -
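To make option (1) concrete, the fix is to add an import of the XML Schema namespace inside the WSDL's inline schema (at the schema level the attribute is schemaLocation; the WSDL-level equivalent uses location, as in Dan's snippet). The fragment below is an illustrative sketch, not taken from the message: the prefix s is assumed to be bound to the XML Schema namespace, as is typical for these .NET-generated WSDLs, and the schemaLocation value is only an example.

```xml
<s:schema xmlns:s="http://www.w3.org/2001/XMLSchema" elementFormDefault="qualified">
  <!-- Option 1: import the XML Schema namespace so that s:schema resolves -->
  <s:import namespace="http://www.w3.org/2001/XMLSchema"
            schemaLocation="http://www.w3.org/2001/XMLSchema.xsd"/>
  ...
</s:schema>
```

For option (2), the element that currently reads <element type="s:schema" ..../> would instead be declared to accept arbitrary content with <xsd:any/>, at the cost of losing the schema information in generated client code.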
http://mail-archives.apache.org/mod_mbox/cxf-dev/201105.mbox/%3C201105051703.57805.dkulp@apache.org%3E
30 November 2009 05:00 [Source: ICIS news] By John Richardson and Malini Hariharan SINGAPORE (ICIS news)--China's polyethylene (PE) and polypropylene (PP) virgin resin demand will rise by as much as 38% in 2009 from a year ago on the back of robust domestic consumption, Shanghai-based commodity information service CBI said on Monday. High-density polyethylene (HDPE) demand was expected to grow by 38% this year. PP demand would grow by 24% to 13m tonnes, he added. A drop in demand for recycling material also helped push demand to fresh highs, said CBI and a Shanghai-based source with a major polyolefin producer. This follows either dips in demand during 2008 or modest increases, depending on the grade of polyolefin. LLDPE fell by 4.5% and PP by 1.4%, with LDPE rising by 2.2% and HDPE by 1% last year, he said. "We estimate that overall PE demand will grow by around 31.5% and PP by about 24%," said Longston. "A big factor behind the strong recovery is a big drop in the use of recycled material." Virgin resin prices had remained too low to justify converters using scrap material so far this year, said the source with the polyolefin producer. "In September, though, the delta or gap between recycled and virgin material was getting very close to its minimum level - $400/tonne," he added. A fall in exports of finished goods - delivered wrapped in plastic film which is shipped back to Imports of waste and scrap of polymers of ethylene were down 11% in January-October over the same months last year to 1.73m tonnes, according to a study by International Trader Publications (ITP) based on China Customs data. ITP is a US-based analysis service of global chemicals-trade data. "I would agree that demand has grown very strongly this year. In the case of PP we think it's grown by about 20-25%," he continued. Domestic polyolefin demand had also surged on huge government economic stimulus, including a rise in bank lending, he said.
"This has led to a steep rise in automobile and real-estate sales with the resulting rise in property prices triggering a construction boom." Government vouchers providing discounts off the price of white goods such as washing machines and refrigerators were also behind the recovery in polyolefins, he said. Information service provider ICIS owns a minority stake in CBI.
http://www.icis.com/Articles/2009/11/30/9267970/chinas-polymer-demand-to-surge-38-in-2009-cbi.html
In this problem, we have a rectangle of zeroes and ones. A "toggle" of a subrectangle consists of taking every 0 in the subrectangle and turning it into a 1, and taking every 1 in the subrectangle and turning it into a 0. Every subrectangle we toggle must include the top-left corner of the grid. We want to toggle as few subrectangles as possible to turn every number in the rectangle to zero. If we look at the various squares in the grid, we see that the ones closer to the top-left corner can be toggled by more subrectangles, whereas the ones closer to the bottom-right corner can be toggled by fewer subrectangles. In particular, we note that the bottom-right corner can only be toggled by one subrectangle: the entire rectangle. Whether we need to toggle the entire rectangle is therefore entirely determined by whether the bottom-right corner needs to be toggled. After we decide whether to toggle that one, we can consider the square directly to the left of the bottom-right corner. That square can now only be toggled by one remaining rectangle, so the decision there is uniquely determined as well. We can then consider every square in the bottom row from right to left and toggle the corresponding subrectangle whenever that square needs to be toggled. After inspecting every square in the bottom row, the bottom row is guaranteed to be entirely zeroes. Thus, we can treat the rectangle as if it had one fewer row, and repeat the above process. We do this repeatedly until the entire rectangle is correctly formatted.
import java.io.*;
import java.util.*;

public class cowtip {
    public static void main(String[] args) throws IOException {
        // initialize file I/O
        BufferedReader br = new BufferedReader(new FileReader("cowtip.in"));
        PrintWriter pw = new PrintWriter(new BufferedWriter(new FileWriter("cowtip.out")));
        // read in the size of the grid
        int n = Integer.parseInt(br.readLine());
        // allocate a 2D array for the grid
        char[][] grid = new char[n][n];
        // define constants to indicate which squares are correct
        final char WRONG = '1';
        final char RIGHT = '0';
        // read in the grid
        for (int i = 0; i < n; i++) {
            // read in a row of the grid
            String s = br.readLine();
            for (int j = 0; j < n; j++) {
                // update the relevant row in the array
                grid[i][j] = s.charAt(j);
            }
        }
        int numTips = 0;
        // loop over the rectangles to consider from bottom to top, right to left
        for (int i = n - 1; i >= 0; i--) {
            for (int j = n - 1; j >= 0; j--) {
                if (grid[i][j] == WRONG) {
                    // the rectangle with bottom-right corner at (i, j) needs to be toggled
                    numTips++;
                    for (int a = 0; a <= i; a++) {
                        for (int b = 0; b <= j; b++) {
                            // flip each entry in that rectangle
                            if (grid[a][b] == WRONG) {
                                grid[a][b] = RIGHT;
                            } else {
                                grid[a][b] = WRONG;
                            }
                        }
                    }
                }
            }
        }
        // print the answer
        pw.println(numTips);
        // close the file
        pw.close();
    }
}
http://usaco.org/current/data/sol_cowtip_bronze_jan17.html
Published Tuesday, November 27, 2007 9:55 PM by kevindente

Hi, why not just use a dummy placeholder call in the first line? Something like "var dummy=0;" and you're gone? It does not seem to be very smart, but hey, your problem should be fixed, or not? Or are there any other limitations we have to consider first? Matthias Denkmaier

Have you directly reported this in Microsoft's bug tracker? Steve

A huge limitation would be something more along the lines of "I can't set any breakpoints in anonymous functions." There are plenty of workarounds to achieve what you're looking for. It's nothing more than a minor annoyance actually. jayson knight

Can't you just insert the "debugger" keyword where you want to break? That adds a line, doesn't it? Jeff Atwood
Lucas Goodwin First of all I'm glad someone else has run into this problem and I'm not losing my mind. Thanks for writing this up. I now have a support group. ;) and, um, this is WAY more than just an annoyance. Almost ALL of my javascript is written in this fashion so I don't add objects & functions to the global namespace (as "aFunc" does in the above example)!!! The fact that VS2008 gets tripped up on this inexcusable. Overall I really like it as a debugger, but if I can't set a breakpoint properly within a *valid function* (even if it is anonymous) without having to do some goofy workaround like adding a meaningless line of code or use the debugger statement just infuriates me. evankstone
http://weblogs.asp.net/kdente/archive/2007/11/27/is-the-visual-studio-2008-javascript-debugger-crippled.aspx
If you’ve spent much time posting on Reddit, you’ll know how frustrating it can be to see post after post flop. You might have posted fifteen times, only to receive three comments and five upvotes. You’ll catch yourself wondering why the hell people rave about this “Reddit” site so much. But the blame here isn’t to be placed with Reddit.

Charles Chu is a writer, minimalist, digital nomad, and self-experimenter. He dissects high-achievers and shares his own quirky experiments in his free weekly newsletter, The Open Circle. I started to notice that some of Charles’ Reddit posts were beginning to attract a lot more attention than most. Hundreds of upvotes and tons of discussion were attracted through two posts in particular: one on long-term travel, and one on Charles’ brutally honest time tracking routine. Then when I heard that his first few posts had helped drive around 80,000 visits to his site, I needed to get in touch.

Luckily, Charles tracks and analyzes each of his experiments meticulously. Reddit was one such experiment. So, I decided to dig deep into what Charles had discovered, looking for insight into what it takes to craft a popular Reddit post that readers can’t help but engage with.

Understand Reddit Before Posting

When Reddit first launched in 2005 it was, according to blogger Sebastian Marshall, a “wellspring of nuanced, helpful, pro-social, highly analytical communities”. As the site became more popular, however, the average quality of discussion decreased. Subreddits were spammed by marketers and people searching for nothing other than clicks to their website.
But despite Reddit becoming awash with self-promoters, it’s still the hangout of those more nuanced, helpful, analytical communities. You can see this in the popularity of helpful, relevant posts, compared to the disinterestedness and hostility toward more self-promoting posts. Combine this with the democratic, anti-authoritarian vibe of Reddit, and you start to get a feeling for its underlying culture, and what’s more likely to resonate with the site’s more active users.

The Benefits of Posting on Reddit

During our conversation, Charles mentioned that the reason he first decided to experiment with Reddit was because he had “no network and no audience”. But he still wanted his content to provide value to people from day one. Reddit seemed to offer a solution. With such a huge user base, and around one million individual subreddits, the ability to reach a good number of targeted people in virtually any niche is a benefit few other sites can offer. And with Reddit being “the most democratic network” Charles knew of online, the hypothesis was that “good content will always attract views”.
Choosing the Right Topic

Many posts about succeeding on Reddit will recommend finding a subreddit, researching past posts, then coming up with ideas based on what performs well in those subreddits. This highly analytical approach can understandably lead to more predictable, stagnant ideas. Charles, however, does things differently. His filter for ideas is, “does it make me really, really excited?”. If it does, he’ll write the post. If it doesn’t, he won’t. This approach is based on the idea that if Charles finds the topic exciting, he’ll be able to find other people on Reddit who will be excited about it, too. And these are precisely the kind of people Charles wants to reach, provide value to, and, if his content is valuable enough, perhaps prompt to visit his site, too.

Finding the Right Subreddit

Once the idea is found, before writing a word you should aim to know exactly which subreddit you’ll be publishing the content to. Usually, Charles will try to locate 2–5 subreddits that largely fit his content niche (there’s a subreddit for any niche). Redditlist is an easy way to see how many subscribers each subreddit has. Browse your short-list of subreddits, asking questions such as:

- How many users are in the subreddit? (It’s easier to get to #1 in smaller subreddits, but you will receive fewer views.)
- Is it open to linking to URLs, or are only text posts accepted?
- What’s the overall quality of the posts like?
- Is there a lot of spam?
- Is it easy to find high quality discussions?

Once you’ve gone through this process, you should find it pretty easy to narrow down your shortlist to just one or two subreddits. These should be those with the highest quality posts, discussions, and active users. The post should be crafted to perform well in this specific subreddit. To quantify this, a heuristic Charles uses is that if your post reaches #1 in a subreddit, it can potentially drive 5–10% of that subreddit to your site.
As you’re starting out, aiming at subreddits with 10,000–150,000 subscribers will be more achievable. Some people may feel the urge to post their content to many subreddits. But Charles believes that many users in a single niche will check most of the relevant subreddits anyway. Posting to all of these will therefore have diminishing returns, so he prefers to publish to just one or two (though he has not experimented with this in detail).

Analyzing the Subreddit

The last step Charles takes before writing his post is to research the kind of posts that perform well in his chosen subreddit. This will guide the angle and structure of his own content to give it the best chance of success. To do this, he filters the subreddit to show the top posts from the last year. These top performing posts will then be displayed in descending order. Go through a good number of these, asking questions such as:

- What kind of content does well? Are these informative pieces, graphics, stories, etc.?
- Are there any common themes?
- What topics seem to generate a lot of discussion?
- Are personal or more “corporate” headlines preferred?

When doing this yourself, as you start noticing trends among these popular posts, it’ll help you structure your own post in a way that’s more likely to succeed in this particular subreddit. For instance, you may learn that your idea will be more popular if it’s displayed as a graphic, or written up as a personal story. You may even get an idea of how long your post should be. To go back to what was mentioned earlier, the aim is for this research to guide the structure and angle of your post, not necessarily to dictate the idea you’re trying to convey.

Crafting the Post

When it comes to crafting the post, Charles usually likes to do this so that the content is posted directly onto Reddit, instead of simply posting a link. This is partly because “the best readers (in my opinion) are in subreddits that don’t allow direct linking or personal promotion”.
Plus, “it’s easier to put yourself in the top 1% of quality for a subreddit if you write a text post. It shows people you are a human, and that you have a personal story”. The quantity and quality of comments on a text post are also much higher, offering you the chance to include other links to relevant content in response to the conversation that happens on the post itself. With that in mind, the body of the post should be crafted to provide as much value as possible. For this, Charles abides by what he calls the 10x Rule: “If I’m not providing 10x as much value as I’m asking for, don’t do it”. You should be aiming to offer content that’s better than 99% of the other content on that subreddit. This means you’re going to have to put in some work. Charles has spent a good few hours on each of his top performing Reddit posts, with a healthy amount of time spent editing the post to make it as concise as possible. And when you do ask for something in return (say, a click to your website), keep the link relevant, so that clicking on it will provide even more contextual value to the reader, rather than just pointing to your home page. Finally, don’t forget formatting. Although Reddit doesn’t allow much in this regard, “try to use bold and bullet points to let readers scan more easily”.

Crafting a Worthy Headline

It’s your headline that’s going to entice people to actually click on your post, so think carefully about this. Your initial research should have already shown which kind of headlines do well in your subreddit of choice, so use this as your guide. But you should also aim to have a healthier understanding of how headlines work. For this, Charles recommends reading Copyblogger’s Copywriting 101 and Tim Ferriss’ article on the subject. Generally speaking, more personal headlines do well (so try using personal pronouns like I, you, me, he, she, us, them, etc.), while also promising additional value inside the post.
Headlines like “X Ways to Do Y” perform pretty poorly, though there will be some subreddits where they do well.

Engaging With Your Readers

It’s not just good etiquette to respond to any comments on your Reddit post. When people are scanning a subreddit and see a post with lots of comments, they will be more likely to click through to see what all the fuss is about. Your own comments and replies contribute to that number, too. There are other benefits on top of this. As mentioned, the ensuing conversation will give you opportunities to link to other pieces of content you’ve published. This all helps to establish yourself as an expert in the field (if that’s what you want). You’ll probably come away with lots of ideas for other posts you could write, too. Another benefit is that all of your comments are visible when people check out your Reddit profile. All of this interaction will prevent other users from thinking you only push your own content. This makes them far more friendly when you do publish your own content because they can see that you’ve provided plenty of value to the community in the past.

It’s All an Experiment

There is, of course, more than one way to skin a cat. What’s covered in this article is how Charles has managed to see some impressive success during his short time using Reddit. It’s this approach that has driven tens of thousands of visits to his site. Fortunately for us, his strategy is pretty easy to replicate by anyone who wishes to do so, so you could use it too, to have your content seen by a large number of interested readers. Let us know how you get on in the comments. And if you have any other tips for succeeding on Reddit, please share them!
https://www.makeuseof.com/tag/how-to-consistently-hit-the-top-page-of-reddit/
Files Generated During Replay

This section describes what occurs when a Vuser script is replayed, and describes the files that are created.

The options.txt file is created. This file includes command line parameters to the preprocessor.

Example of an options.txt file:

-DCCI
-D_IDA_XL
-DWINNT
-Ic:\tmp\Vuser (name and location of Vuser include files)
-IE:\LRUN45B2\include (name and location of include files)
-ec:\tmp\Vuser\logfile.log (name and location of output logfile)
c:\tmp\Vuser\VUSER.c (name and location of file to be processed)

The file Vuser.c is created. This file contains 'includes' to all the relevant .c and .h files.

Example of a Vuser.c file:

#include "E:\LRUN45B2\include\lrun.h"
#include "c:\tmp\web\init.c"
#include "c:\tmp\web\run.c"
#include "c:\tmp\web\end.c"

The C preprocessor cpp.exe is invoked in order to 'fill in' any macro definitions, precompiler directives, and so on, from the development files. The following command line is used:

cpp -foptions.txt

The file pre_cci.c is created, which is also a C file (pre_cci.c is defined in the options.txt file). The file logfile.log (also defined in options.txt) is created, containing any output of this process. This file should be empty if there are no problems with the preprocessing stage. If the file is not empty, it is almost certain that the next stage of compilation will fail due to a fatal error.

The cci.exe C compiler is now invoked to create a platform-dependent pseudo-binary file (.ci) to be used by the Vuser driver program, which will interpret it at runtime. The cci takes the pre_cci.c file as input. The file pre_cci.ci is created as follows:

cci -errout c:\tmp\Vuser\logfile.log -c pre_cci.

The file logfile.log is the log file containing the output of the compilation. The file pre_cci.ci is now renamed to Vuser.ci. Since the compilation can contain both warnings and errors, and since the driver does not know the results of this process, the driver first checks if there are entries in the logfile.log file.
If there are, it then checks whether the file Vuser.ci has been built. If the file size is not zero, it means that the cci has succeeded in compiling; if not, then the compilation has failed and an error message is given.

The relevant driver is now run, taking both the .usr file and the Vuser.ci file as input. For example:

mdrv.exe -usr c:\tmp\Vuser\.usr -out c:\tmp\Vuser -file c:\tmp\Vuser\Vuser.ci

The .usr file is needed since it tells the driver program which database is being used. This determines which libraries need to be loaded for the run. If there is an existing replay log file, output.txt (see the following entry), the log file is copied to output.bak.

- The output.txt file is created (in the path defined by the 'out' variable). This file contains the output messages that were generated during the script replay. These are the same messages that appear in the Replay view of VuGen's Output pane.
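The success check described above (an empty log means a clean compile; a non-empty log is only fatal when no usable Vuser.ci was produced) can be sketched in a few lines. This is an illustration of the logic only, not actual VuGen code, and the function name is made up:

```python
import os

def compilation_succeeded(log_path, ci_path):
    """Mimic the driver's check: an empty (or absent) logfile.log means a
    clean compile; if the log has entries, the compile still succeeded as
    long as the .ci file exists with a non-zero size (warnings only)."""
    log_has_entries = os.path.isfile(log_path) and os.path.getsize(log_path) > 0
    if not log_has_entries:
        return True
    # Log entries present: distinguish warnings from a failed build
    return os.path.isfile(ci_path) and os.path.getsize(ci_path) > 0
```

Called with the paths from the example above, this returns True for a clean or warnings-only compile and False when the log has entries but Vuser.ci is missing or empty.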
https://admhelp.microfocus.com/vugen/en/12.60-12.61/help/WebHelp/Content/VuGen/106150_r_overview_of_files_generated_during_replay.htm
Banner resizer effect in Actionscript 3

In this Flash tutorial you will learn how to create a banner resizer effect in ActionScript 3. This effect will increase the size of the image when your mouse is over the image and restore the original size when your mouse is not on the image. You will need to use the TweenLite plugin for this tutorial, which can be downloaded here. I have used four free stock images for my banner resizer effect, but you can use six images if you wish. Higher quality images will work best for this effect.

Banner resizer effect in Actionscript 3 – part 1

Step 1
Open a new Flash AS3 file. Import all the images you wish to use by selecting File > Import > Import to library. Make sure your images are all the same size. Then on the timeline rename 'layer 1' to 'Images'.

Step 2
Drag your images individually onto the stage like below. I have left a small gap in between each of the images. All of the images should be touching the stage boundaries. If your images are too large you can use the Free transform tool (Q) to resize the images.

Step 3
Starting with the top left image, convert it into a movie clip (F8), give it an appropriate name, then select the top left registration point and click OK. Now repeat this for all your images; so for the top right image you would select the top right registration point, and for the bottom left image you would select the bottom left registration point.

Step 4
Give each of your images the following instance name accordingly: image1_mc, image2_mc, image3_mc and image4_mc.

Step 5
On the timeline insert a new layer called Actions. Then open up the Actions panel and enter the following code.

//Imports the TweenLite plugin.
import com.greensock.*;

//The original width and height of the images.
var imageWidthOriginal:uint = 195;
var imageHeightOriginal:uint = 145;

//Array to hold the image instances.
var imageArr:Array = new Array(image1_mc, image2_mc, image3_mc, image4_mc);

//This loops through all the images in the array, adding mouse over and mouse out events.
for (var i:uint = 0; i < imageArr.length; i++) {
    imageArr[i].addEventListener(MouseEvent.MOUSE_OVER, sizeImage);
    imageArr[i].addEventListener(MouseEvent.MOUSE_OUT, reduceImage);
}

//This function increases the width and height of the image until it reaches
//the stage boundaries.
function sizeImage(event:MouseEvent):void {
    TweenLite.to(event.target, 1, {width:stage.stageWidth, height:stage.stageHeight});
    //This sets the moused-over image to the highest index position.
    setChildIndex(MovieClip(event.target), numChildren - 1);
}

//This function restores the images back to the original size.
function reduceImage(event:MouseEvent):void {
    TweenLite.to(event.target, 1, {width:imageWidthOriginal, height:imageHeightOriginal});
}

Step 6
Test your movie (Ctrl + Enter). Now try moving your mouse over the images below and you should see the images increase in size. Then move your mouse off the images and they should decrease in size.

Related tutorials
Black and white gallery
Banner selector

2 comments:

While executing, the following error messages appeared:
TwwenLite.as, Line 167 1017: The definition of base class TweenCore was not found.
TwwenLite.as, Line 409 1020: Method marked override must override another method.
TwwenLite.as, Line 524 1020: Method marked override must override another method.
TwwenLite.as, Line 535 1020: Method marked override must override another method.
---Paramasivan P N

You may not have imported the TweenLite packages properly.
http://www.ilike2flash.com/2010/02/banner-resizer-effect-in-actionscript-3.html
15 February 2012 17:51 [Source: ICIS news]

HOUSTON (ICIS)--Mexichem has put in place the necessary financing for its €531m ($699m) deal to acquire Dutch polyvinyl chloride (PVC) pipe maker Wavin, the Mexican chemical and petrochemical company said on Wednesday. Mexichem, which already holds 2% of Wavin’s shares, said it would finance the takeover of the remaining 98% of Wavin shares – valued at around €520m – with cash on its balance sheet and existing committed credit facilities. “Mexichem has taken all necessary measures to secure the funding of the offer [for Wavin],” it said in a statement. Wavin agreed to the takeover last week after Mexichem raised its all-cash bid to €10.50/share. Since last November, when the bid was first disclosed, Wavin had rejected Mexichem’s previous bids of €10.00/share, €9.00/share and €8.50/share. Mexichem’s deal for Wavin came after its recent acquisitions of INEOS Fluor, Polycid and Plasticos Rex, and of plastics compound producer Alphagary. Wavin, based in Zwolle near Amsterdam, is a supplier of plastic pipe systems. It has 18 manufacturing sites in Europe and one elsewhere.

($1 = €0.76)
http://www.icis.com/Articles/2012/02/15/9532628/mexichem-secures-financing-for-531m-takeover-of-dutch-pvc-firm.html
I am relatively new to Swift and I am following some basic tutorials, but I seem to be having a problem with some methods which attempt to allow the user to press return to minimise the keyboard, or to click off the keyboard so that it disappears. I understand why I am receiving these errors but have no clue how to go about fixing them. I feel something may have been changed in the newer version of Swift I am using, as the tutorial uses an older version. Could anyone possibly explain how to go about fixing these two errors? Any help would be greatly appreciated. Here is my source code (first error: value of type 'ViewController' has no member 'text'; second: the touchesBegan method does not override any method from its superclass):

import UIKit

class ViewController: UIViewController {

    @IBAction func buttonPressed(sender: AnyObject) {
        label.text = textArea.text
    }

    @IBOutlet weak var textArea: UITextField!
    @IBOutlet weak var label: UILabel!

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.
        self.text.delegate = self
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    override func touchesBegan(touches: NSSet, withEvent event: UIEvent) {
        self.view.endEditing(true)
    }

    func textFieldShouldReturn(textField: UITextField!) -> Bool {
        textField.resignFirstResponder()
        return true
    }
}

You have 2 problems here, based on the images you posted:

1) The method touchesBegan you are using is not correct.

Correct one: func touchesBegan(_ touches: Set<UITouch>, withEvent event: UIEvent?)

Yours: func touchesBegan(touches: NSSet, withEvent event: UIEvent)

I think you want a delegate for the UITextField, but this one is not it: touchesBegan is a method of UIResponder and not of UITextFieldDelegate. Here you can find the reference for the UITextFieldDelegate.

2) The variable text doesn't exist in your code.
I think you wanted to use textArea instead. Hope this can help you, happy coding!
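Putting the two fixes together, the corrected controller might look like the following. This is a sketch against the Swift 2-era UIKit API the answer quotes (the exact parameter syntax for touchesBegan varies across Swift versions), the outlet names follow the question, and it can only be exercised inside an iOS project:

```swift
import UIKit

class ViewController: UIViewController, UITextFieldDelegate {

    @IBOutlet weak var textArea: UITextField!
    @IBOutlet weak var label: UILabel!

    @IBAction func buttonPressed(sender: AnyObject) {
        label.text = textArea.text
    }

    override func viewDidLoad() {
        super.viewDidLoad()
        // Fix 2: the outlet is named textArea; there is no `text` member
        self.textArea.delegate = self
    }

    // Fix 1: this signature matches UIResponder, so `override` compiles
    override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
        self.view.endEditing(true)
    }

    // Called via UITextFieldDelegate; dismisses the keyboard on return
    func textFieldShouldReturn(textField: UITextField) -> Bool {
        textField.resignFirstResponder()
        return true
    }
}
```

Note that the class now declares conformance to UITextFieldDelegate, so textFieldShouldReturn is actually called once the text field's delegate is set.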
https://codedump.io/share/ltA5sbw4VBfB/1/swift-method-doesn39t-override-any-method-from-its-superclass
import pandas as pd

data = pd.read_csv('../data/examp-data.txt')
print(data)

   x    y  z
0  1  2.0  3
1  2  2.4  6
2  3  1.9  8

We can also choose to use the first row of the data as the index (the numbers to the left of the table). The index has special properties and is particularly useful for time-series data. Values in index columns should be unique.

data_w_index = pd.read_csv('../data/examp-data.txt', index_col=0)
print(data_w_index)

     y  z
x
1  2.0  3
2  2.4  6
3  1.9  8

You can export to a number of different formats directly from Pandas, including csv, Excel, json, hdf, and SQL. You can export using the .to_* methods on a data frame. So, if you want to export to csv:

data.to_csv('../data/pandas_output.csv')
!cat ./data/pandas_output.csv

,x,y,z
0,1,2.0,3
1,2,2.4,6
2,3,1.9,8

The !cat command is just a quick way to show what the resulting datafile looks like. If you don't want the index information you can easily get rid of it.

data.to_csv('../data/pandas_output.csv', index=False)
!cat ./data/pandas_output.csv

x,y,z
1,2.0,3
2,2.4,6
3,1.9,8

data['y']

0    2.0
1    2.4
2    1.9
Name: y, dtype: float64

Multiple columns can be selected using a list of names in the desired order.

data[['z', 'x']]
data_w_index

data_w_index.loc[2]

y    2.4
z    6.0
Name: 2, dtype: float64

data_w_index.iloc[0]

y    2
z    3
Name: 1, dtype: float64

data.iloc[0]

x    1
y    2
z    3
Name: 0, dtype: float64

You can also grab a slice of rows from the dataframe.

data_w_index[0:2]

for row in data.values:
    print(row * 2)

[ 2.   4.   6. ]
[ 4.   4.8  12. ]
[ 6.   3.8  16. ]

data.values[0]
data.ix[1]

x    2.0
y    2.4
z    6.0
Name: 1, dtype: float64

Often when working with data frames we won't want to loop over rows, but instead use vectorized operations for things like subsetting and math (see below).

You can get subsets of the data by giving conditions inside [].

data[data['y'] > 2.0]

The syntax for specifying multiple conditions for subsetting is to include each condition in () and separate them with the & symbol.
data[(data['x'] > 1) & (data['y'] > 2)]

Math is done by treating columns of the data frame just like they are variables.

data['y'] * data['z'] + 2

0     8.0
1    16.4
2    17.2
dtype: float64

If you want to use mathematical functions, use the ones in numpy, not the ones in math.

import numpy as np
np.log(data['y']) * np.sqrt(data['z'])

0    1.200566
1    2.144452
2    1.815437
dtype: float64

To work on chunks of data in automated ways we can use grouping. First, let's grab some data.

url = ""
data = pd.read_csv(url, delimiter="\t")
data.head()

The command data.head() shows us the first few rows of the dataset. The following code groups the data based on the value in the order column, calculates the average of the mass(g) column for every order, and prints out the results.

data_by_order = data.groupby('order')
for order, order_data in data_by_order:
    avg_mass = np.mean(order_data['mass(g)'])
    print("The average mass of {} is {} grams".format(order, avg_mass))

The average mass of Artiodactyla is 112939.74354 grams
The average mass of Carnivora is 42705.6617766 grams
The average mass of Cetacea is 9115442.46673 grams
The average mass of Dermoptera is 0.5 grams
The average mass of Hyracoidea is 3030.835 grams
The average mass of Insectivora is 58.0276923077 grams
The average mass of Lagomorpha is 1316.13261905 grams
The average mass of Macroscelidea is 11.522 grams
The average mass of Perissodactyla is 694486.666667 grams
The average mass of Pholidota is 7980.0 grams
The average mass of Primates is 5066.5575641 grams
The average mass of Proboscidea is 3342500.0 grams
The average mass of Rodentia is 496.813082707 grams
The average mass of Scandentia is 190.357142857 grams
The average mass of Sirenia is 1169400.0 grams
The average mass of Tubulidentata is 60000.0 grams
The average mass of Xenarthra is 7238.5 grams
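The per-group loop above can also be replaced with a single vectorized call. This sketch uses a small made-up data frame (since the dataset URL is not shown here) with the same column names as the mammal data:

```python
import pandas as pd

# Stand-in for the mammal dataset; groupby(...).mean() computes every
# group's average mass in one call, with no explicit loop needed.
df = pd.DataFrame({
    'order': ['Rodentia', 'Rodentia', 'Carnivora'],
    'mass(g)': [100.0, 300.0, 5000.0],
})

avg_mass = df.groupby('order')['mass(g)'].mean()
print(avg_mass['Rodentia'])   # 200.0
print(avg_mass['Carnivora'])  # 5000.0
```

The result is a Series indexed by the grouping column, which is usually easier to work with than printed strings.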
http://nbviewer.jupyter.org/github/datacarpentry/semester-biology/blob/gh-pages/materials/pandas.ipynb
(For more resources related to this topic, see here.) Principles for UI/UX There are many dos and don'ts in Windows Phone UI development. It is good to remember that we have a screen with a limited size where the elements should be placed comfortably. Fast and fluid This is the main goal of Windows Phone UX. Starting with the phone menu, we can switch between tiles and application lists; if we touch an element action it is performed very fast. We have to remember while creating our own software about providing responsivity to user interaction, even if we need to load some data or connect to a server, the user has to be informed about loading. Going through our application, the user should feel that he or she controls and can touch every element. Many controls provide animation out of the box and we need to use it; we need to impress our users while seeing motion in our application. The application has to be alive! Grid system The following design pattern provides consistency and high UX to Windows Phone users. It helps in organizing the application elements. A grid system will provide unity across applications and will make our application look similar to the Windows Store application. Talking about the grid layout, we mean arranging the content using grid lines. As we can see, the grid is where the application design starts. Each square is located 12 pixels from the other square. Single square has 25 x 25 pixels. In order to provide the best user experience to our application, we need to ensure that the elements are arranged appropriately. Looking at the left-hand side of the image, we find that each big tile contains 6 x 6 squares with 12 pixels margins. Windows is one When using a grid system and other design patterns for Windows Phone, porting to Windows Store will be possible and can be done more easily. Unfortunately at the moment, there are no common markets for Windows Phone application and Windows Store (applications for Windows 8 and Windows RT). 
The current solution is to create the application in a way that it allows portability to other platforms, such as separating the application's logic from UI. Even then some specific changes have to be done. Controls design best practices Fonts From the GUI and UX point of view, it is critical to make text clear and legible. Entire text should be properly contrasted to the background. The font used in Windows Phone, is Sans serif, which is widely used in web and mobile applications. The Sans serif font type has many fonts, which one should we use? There is no simple answer for that, it depends on the control and context in which we use the text. However, it is recommended to use Segoe UI, Calibri, or Cambria font types. - Segoe UI: This is best used in UI elements such as buttons, checkboxes, datepickers, and others like these. - Calibri: This is best used for input and content text such as e-mail messages. When using this font it is good to set the font size to 13. - Cambria: This font type is recommended for articles or any other big pieces of text, depending on content section, we can use 9, 11, 20 pt Cambria font. An example of each of the these fonts is as follows: - 11 pt Segoe UI font - 13 pt Calibri font - 11 pt Cambria font Tiles and notifications My friend who is the owner of a company that creates mobile applications, once said to me that 50% of an application's success depends on how nice and good looking the icon/tile is. After thinking about it, I imagine he was right. The tile is the first thing that a user sees when starting to use our application (remember what I said about first impression?). The Windows Phone tile is not only a static icon that represents our application, but it can be live and show some simplified content as well. A default tile is a static icon and can display some information such as message count after it updates, and it returns to the default tile only after another update. 
An application tile can be pinned to the Start screen and set in one of 3 sizes: A small tile is built from 3 x 3 grid units. It is half the width and height of medium tile. It was introduced in Windows Phone 8 and 7.8 update of Windows Phone 7. Only a medium-size tile was available in Windows Phone 7. Every application pinned to the Start screen starts with medium tile. Along with the small tile, a large tile was introduced in Windows Phone 8. Why we should use the tile that is updating (live tile)? Because the tile is the front door of our application and if it gives some notification to user it really says, "Hey! Come here and see what I've got for you!". An updated tile assures the users that our application is fresh and active, even if it has not been run for some time. - If we want to allow the user to pin our application in large/wide tile, we should provide a wide image, point it in the application manifest, and mark the Supports wide size option. There are some good practices which are worth following. - If our application has the content that is refreshed at least once every few days and can be interesting for the user, we should enable our application to use wide tile; if not it is not necessary. - Forget about the wide tile if the application doesn't use notifications. There are three available tile templates that will help us to create tiles as follows: - Iconic template which is mainly used for e-mails, messaging, RSS, and social networking apps. - Flip templates that are used in an application gives the user a lot of information like weather application. - Cycle template for galleries and photo applications that cycles 1 to 9 images in a tile—applicable only for medium and wide tile. Bindings Basically, binding is a mechanism that the whole MVVM pattern relays on. It is very powerful and is the "glue" between the view and the exposed fields and commands. 
The functionality that we want to implement is shown in the following diagram: Model We are going to use the Model class for storing data about users. public class UserModel:INotifyPropertyChanged { private string name {get;set; public string Name { get; set { name = value; RaisePropertyChanged(); } } public string LastName { get; set{(…)} } public string Gender { get; set{(…)} } public DateTime DateOfBirth { get; set{(…)} } public event PropertyChangedEventHandler PropertyChanged; (…) } One thing that we still have to implement is the property changed event subscription for all fields within this UserModel class. ViewModel Now, UserModel will be wrapped into UserViewModel. Our ViewModel will also implement the INotifyPropertyChanged interface for updating View when the VieModels object change. public class UserViewModel:INotifyPropertyChanged { public UserModel CurrentUser { get; set; } public string FullName { get { if (this.CurrentUser != null) { return string.Format("{0} {1}", this.CurrentUser.Name, this.CurrentUser.LastName); } return ""; } } public List<string> ListOfPossibleGenders = new List<string>(){"Male","Female"}; (…) } As we can see, UserModel is wrapped in two ways: the first is wrapping the entire Model and some properties that are transformed into FullName. The FullName property contains Name and LastName that comes from UserModel object. View I'm going to create a view, piece by piece, to show how things should be done. At the beginning, we should create a new UserControl object called UserView in the Views folder. We will do almost everything now in XAML; so, we have to open the UserView.xaml file and add a few things. In the root node, we have to add namespace for our ViewModel folder. xmlns:models="clr-namespace:MyMVVMApplication.ViewModel" Because of this line, our ViewModels will be accessible in the XAML code. 
<UserControl.DataContext>
    <models:UserViewModel/>
</UserControl.DataContext>

This sets the DataContext of our view in XAML and has its equivalent in the code-behind. If we wish to set DataContext in the code-behind, we have to go to the constructor in the .cs file.

public UserView()
{
    InitializeComponent();
    var viewModel = new UserViewModel();
    this.DataContext = viewModel;
}

A better approach is to define the ViewModel instance in XAML, because we then have IntelliSense support in binding expressions for the members that are public in the ViewModel and the exposed model. As we can see, the ViewModel contains the CurrentUser property that will store and expose the user data to the view. It has to be public, and the thing that is missing here is the change notification in the CurrentUser setter. However, we already know how to do that, so it is not described again.

<StackPanel>
    <TextBlock Text="Name"></TextBlock>
    <TextBox Name="txtName"
             Text="{Binding CurrentUser.Name, Mode=TwoWay}">
    </TextBox>
    <TextBlock Text="Lastname"></TextBlock>
    <TextBox Name="txtLastName"
             Text="{Binding CurrentUser.LastName, Mode=TwoWay}">
    </TextBox>
    <TextBlock Text="Gender"></TextBlock>
    <ListBox Name="lstGender"
             ItemsSource="{Binding ListOfPossibleGenders}"
             SelectedItem="{Binding CurrentUser.Gender, Mode=TwoWay}">
    </ListBox>
</StackPanel>

The preceding example shows how to set a binding using the exposed model, as well as a list or property that was implemented directly in the ViewModel. We did all of this without a single line of code-behind! This is really good because our sample is testable, and it is very easy to write unit tests. We use the TwoWay binding mode, which automatically populates the control with the ViewModel's value; when the user edits the control, the ViewModel property gets updated as well.

Summary

Creating complex applications that are intuitive and simple to use is really hard without the knowledge of the UI and UX principles that were described in this article. We also covered a huge part of XAML: bindings.
Resources for Article : Further resources on this subject: - Windows 8 with VMware View [Article] - How to Build a RSS Reader for Windows Phone 7 [Article] - Introducing the Windows Store [Article]
SIGNBIT(3)                 Linux Programmer's Manual                SIGNBIT(3)

NAME
       signbit - test sign of a real floating-point number

SYNOPSIS
       #include <math.h>

       int signbit(x);

       Compile with -std=c99; link with -lm.

DESCRIPTION
       signbit() is a generic macro which can work on all real floating-point
       types.  It returns a non-zero value if the value of x has its sign bit
       set.

       This is not the same as x < 0.0, because IEEE 754 floating point
       allows zero to be signed.  The comparison -0.0 < 0.0 is false, but
       signbit(-0.0) will return a non-zero value.

CONFORMING TO
       C99.  This function is defined in IEC 559 (and the appendix with
       recommended functions in IEEE 754/IEEE 854).

SEE ALSO
       copysign(3)

COLOPHON
       This page is part of release 3.05 of the Linux man-pages project.  A
       description of the project, and information about reporting bugs, can
       be found at.

GNU                               2002-08-10                        SIGNBIT(3)
Question: using strcpy in C++ gives an error. What is wrong with this code?

#include <iostream>
#include <cstring>
using namespace std;

int main() {
    char st1[20] = "State of Ca";
    char st2[5] = {' '};
    strcpy(st2, st1);
    cout << st2 << endl;
    return 0;
}

Answer: the destination buffer is too small. "State of Ca" is 11 characters plus the terminating '\0', so strcpy() needs 12 bytes, but st2 is only 5 bytes; the copy writes past the end of the array. Give the destination a named size and use strncpy(), making sure the result stays null-terminated:

const int ST2_LEN = 5;
...
char st2[ST2_LEN] = {' '};
strncpy(st2, st1, ST2_LEN - 1);
st2[ST2_LEN - 1] = '\0';
...
(For more resources on Search Engine, see here.)

Client API implementations for Sphinx

Sphinx comes with a number of native searchd client API implementations. Some third-party open source implementations for Perl, Ruby, and C++ are also available. All APIs provide the same set of methods and implement the same network protocol; as a result, they all work in a similar fashion. All examples in this article are for the PHP implementation of the Sphinx API. However, you can just as easily use other programming languages. Sphinx is used with PHP more widely than with any other language.

Search using client API

Let's see how we can use the native PHP implementation of the Sphinx API to search. We will add the configuration related to searchd and then create a PHP file that searches the index using the Sphinx client API implementation for PHP.

Time for action – creating a basic search script

- Add the searchd section to the configuration file (its options are discussed in the next section).
- Start the searchd daemon (as the root user):

  $ sudo /usr/local/sphinx/bin/searchd -c /usr/local/sphinx/etc/sphinx-blog.conf

- Copy the sphinxapi.php file (the class with the PHP implementation of the Sphinx API) from the Sphinx source directory to your working directory:

  $ mkdir /path/to/your/webroot/sphinx
  $ cd /path/to/your/webroot/sphinx
  $ cp /path/to/sphinx-0.9.9/api/sphinxapi.php ./

- Create a simple_search.php script that uses the PHP client API class to search the sphinx-blog index, and execute it in the browser:

  <?php
  require_once('sphinxapi.php');

  // Instantiate the sphinx client
  $client = new SphinxClient();

  // Set search options
  $client->SetServer('localhost', 9312);
  $client->SetConnectTimeout(1);
  $client->SetArrayResult(true);

  // Query the index
  $results = $client->Query('php');

  // Output the matched results in raw format
  print_r($results['matches']);

- The output of the given code, as seen in a browser, will be similar to what's shown in the following screenshot:

What just happened?
Firstly, we added the searchd configuration section to our sphinx-blog.conf file. Once we were done with adding the searchd configuration options, we started the searchd daemon as the root user. We passed the path of the configuration file as an argument to searchd. The default configuration file used is /usr/local/sphinx/etc/sphinx.conf.

After a successful startup, searchd listens on all network interfaces, including all the configured network cards on the server, at port 9312. If we want searchd to listen on a specific interface, then we can specify the hostname or IP address in the value of the listen option:

listen = 192.168.1.25:9312

The listen setting defined in the configuration file can be overridden on the command line while starting searchd by using the -l command line argument. There are other (optional) arguments that can be passed to searchd, as seen in the following screenshot:

searchd needs to be running at all times when we are using the client API. The first thing you should always check is whether searchd is running or not, and start it if it is not running.

We then created a PHP script to search the sphinx-blog index. To search the Sphinx index, we need to use the Sphinx client API. As we are working with a PHP script, we copied the PHP client implementation class (sphinxapi.php), which comes along with the Sphinx source, to our working directory so that we can include it in our script. However, you can keep this file anywhere on the file system as long as you can include it in your PHP script. Throughout this article we will be using /path/to/webroot/sphinx as the working directory and we will refer to this directory simply as webroot.

We initialized the SphinxClient class and then used the following class methods to set up the Sphinx client API:

- SphinxClient::SetServer($host, $port)—This method sets the searchd hostname and port.
All subsequent requests use these settings unless this method is called again with different parameters. The default host is localhost and the default port is 9312.

- SphinxClient::SetConnectTimeout($timeout)—This is the maximum time allowed to spend trying to connect to the server before giving up.
- SphinxClient::SetArrayResult($arrayresult)—This is a PHP client API-specific method. It specifies whether the matches should be returned as an array or a hash. The default value is false, which means that matches will be returned in a PHP hash format, where document IDs will be the keys, and other information (attributes, weight) will be the values. If $arrayresult is true, then the matches will be returned in plain arrays with complete per-match information.

After that, the actual querying of the index was pretty straightforward using the SphinxClient::Query($query) method. It returned an array with the matched results, as well as other information such as errors, fields in the index, attributes in the index, total records found, time taken for the search, and so on. The actual results are in the $results['matches'] variable. We can run a loop on the results, and it is a straightforward job to get the actual document's content from the document ID and display it.

Matching modes

When a full-text search is performed on the Sphinx index, different matching modes can be used by Sphinx to find the results. The following matching modes are supported by Sphinx:

- SPH_MATCH_ALL—This is the default mode and it matches all query words, that is, only records that match all of the queried words will be returned.
- SPH_MATCH_ANY—This matches any of the query words.
- SPH_MATCH_PHRASE—This matches the query as a phrase and requires a perfect match.
- SPH_MATCH_BOOLEAN—This matches the query as a Boolean expression.
- SPH_MATCH_EXTENDED—This matches the query as an expression in Sphinx's internal query language.
- SPH_MATCH_EXTENDED2—This matches the query using the second version of the Extended matching mode.
This supersedes SPH_MATCH_EXTENDED as of v0.9.9.
- SPH_MATCH_FULLSCAN—In this mode the query terms are ignored and no text-matching is done, but filters and grouping are still applied.

Time for action – searching with different matching modes

- Create a PHP script display_results.php in your webroot with the following code:

<?php
// Database connection credentials
$dsn = 'mysql:dbname=myblog;host=localhost';
$user = 'root';
$pass = '';

// Instantiate the PDO (PHP 5 specific) class
try {
    $dbh = new PDO($dsn, $user, $pass);
} catch (PDOException $e) {
    echo 'Connection failed: ' . $e->getMessage();
}

// PDO statement to fetch the post data
$query = "SELECT p.*, a.name FROM posts AS p " .
    "LEFT JOIN authors AS a ON p.author_id = a.id " .
    "WHERE p.id = :post_id";
$post_stmt = $dbh->prepare($query);

// PDO statement to fetch the post's categories
$query = "SELECT c.name FROM posts_categories AS pc " .
    "LEFT JOIN categories AS c ON pc.category_id = c.id " .
    "WHERE pc.post_id = :post_id";
$cat_stmt = $dbh->prepare($query);

// Function to display the results in a nice format
function display_results($results, $message = null)
{
    global $post_stmt, $cat_stmt;

    if ($message) {
        print "<h3>$message</h3>";
    }

    if (!isset($results['matches'])) {
        print "No results found<hr />";
        return;
    }

    foreach ($results['matches'] as $result) {
        // Get the data for this document (post) from db
        $post_stmt->bindParam(':post_id', $result['id'], PDO::PARAM_INT);
        $post_stmt->execute();
        $post = $post_stmt->fetch(PDO::FETCH_ASSOC);

        // Get the categories of this post
        $cat_stmt->bindParam(':post_id', $result['id'], PDO::PARAM_INT);
        $cat_stmt->execute();
        $categories = $cat_stmt->fetchAll(PDO::FETCH_ASSOC);

        // Output title, author and categories
        print "Id: {$post['id']}<br />" .
            "Title: {$post['title']}<br />" .
            "Author: {$post['name']}";

        $cats = array();
        foreach ($categories as $category) {
            $cats[] = $category['name'];
        }
        if (count($cats)) {
            print "<br />Categories: " . implode(', ', $cats);
        }

        print "<hr />";
    }
}

- Create a PHP script search_matching_modes.php in your webroot with the following code:

<?php
// Include the api class
require_once('sphinxapi.php');

// Include the file which contains the function to display results
require_once('display_results.php');

$client = new SphinxClient();

// Set search options
$client->SetServer('localhost', 9312);
$client->SetConnectTimeout(1);
$client->SetArrayResult(true);

// SPH_MATCH_ALL mode will be used by default
// and we need not set it explicitly
display_results(
    $client->Query('php'),
    '"php" with SPH_MATCH_ALL');
display_results(
    $client->Query('programming'),
    '"programming" with SPH_MATCH_ALL');
display_results(
    $client->Query('php programming'),
    '"php programming" with SPH_MATCH_ALL');

// Set the mode to SPH_MATCH_ANY
$client->SetMatchMode(SPH_MATCH_ANY);
display_results(
    $client->Query('php programming'),
    '"php programming" with SPH_MATCH_ANY');

// Set the mode to SPH_MATCH_PHRASE
$client->SetMatchMode(SPH_MATCH_PHRASE);
display_results(
    $client->Query('php programming'),
    '"php programming" with SPH_MATCH_PHRASE');
display_results(
    $client->Query('scripting language'),
    '"scripting language" with SPH_MATCH_PHRASE');

// Set the mode to SPH_MATCH_FULLSCAN
$client->SetMatchMode(SPH_MATCH_FULLSCAN);
display_results(
    $client->Query('php'),
    '"php programming" with SPH_MATCH_FULLSCAN');

- Execute search_matching_modes.php in a browser.

What just happened?

The first thing we did was to create a script, display_results.php, which connects to the database and gathers additional information on the matched posts. This script has a function, display_results(), that outputs the Sphinx results in a nice format. The code is pretty much self-explanatory.

Next, we created the PHP script that actually performs the search.
We used the following matching modes and queried using different search terms:

- SPH_MATCH_ALL (the default mode, which doesn't need to be set explicitly)
- SPH_MATCH_ANY
- SPH_MATCH_PHRASE
- SPH_MATCH_FULLSCAN

Let's see what the output of each query was and try to understand it:

display_results(
    $client->Query('php'),
    '"php" with SPH_MATCH_ALL');
display_results(
    $client->Query('programming'),
    '"programming" with SPH_MATCH_ALL');

The output for these two queries can be seen in the following screenshot:

The first two queries returned all posts containing the words "php" and "programming" respectively. We got posts with ids 2 and 5 for "php", and 5 and 8 for "programming". The third query was for posts containing both words, that is "php programming", and it returned the following result:

This time we only got the post with id 5, as this was the only post containing both words of the phrase "php programming". We used SPH_MATCH_ANY to search for any of the words in the search phrase:

// Set the mode to SPH_MATCH_ANY
$client->SetMatchMode(SPH_MATCH_ANY);
display_results(
    $client->Query('php programming'),
    '"php programming" with SPH_MATCH_ANY');

The function call returns the following output (results):

As expected, we got posts with ids 5, 2, and 8. All these posts contain either "php" or "programming" or both. Next, we tried our hand at SPH_MATCH_PHRASE, which returns only those records that match the search phrase exactly, that is, all words in the search phrase appear in the same order and consecutively in the index:

// Set the mode to SPH_MATCH_PHRASE
$client->SetMatchMode(SPH_MATCH_PHRASE);
display_results(
    $client->Query('php programming'),
    '"php programming" with SPH_MATCH_PHRASE');
display_results(
    $client->Query('scripting language'),
    '"scripting language" with SPH_MATCH_PHRASE');

The previous two function calls return the following results:

The query "php programming" didn't return any results because there were no posts that matched that exact phrase.
However, a post with id 2 matched the next query: "scripting language". The last matching mode we used was SPH_MATCH_FULLSCAN. When this mode is used, the search phrase is completely ignored (in our case "php" was ignored), and Sphinx returns all records from the index:

// Set the mode to SPH_MATCH_FULLSCAN
$client->SetMatchMode(SPH_MATCH_FULLSCAN);
display_results(
    $client->Query('php'),
    '"php programming" with SPH_MATCH_FULLSCAN');

The function call returns the following result (for brevity, only a part of the output is shown in the following image):

SPH_MATCH_FULLSCAN mode is automatically used if an empty string is passed to the SphinxClient::Query() method. SPH_MATCH_FULLSCAN matches all indexed documents, but the search query still applies all the filters when sorting and grouping. However, the search query will not perform any full-text searching. This is particularly useful in cases where we only want to apply filters and don't want to perform any full-text matching (for example, filtering all blog posts by categories).

Boolean query syntax

Boolean mode queries allow expressions to make use of a complex set of Boolean rules to refine their searches. These queries are very powerful when applied to full-text searching. When using Boolean query syntax, certain characters have special meaning, as given in the following list:

- &: Explicit AND operator
- |: OR operator
- -: NOT operator
- !: NOT operator (alternate)
- (): Grouping

Let's try to understand each of these operators using an example.
Time for action – searching using Boolean query syntax

- Create a PHP script search_boolean_mode.php in your webroot, with the same client setup as in the previous scripts, followed by this code:

display_results(
    $client->Query('php programming'),
    '"php programming" (default mode)');

// Set the mode to SPH_MATCH_BOOLEAN
$client->SetMatchMode(SPH_MATCH_BOOLEAN);

// Search using AND operator
display_results(
    $client->Query('php & programming'),
    '"php & programming"');

// Search using OR operator
display_results(
    $client->Query('php | programming'),
    '"php | programming"');

// Search using NOT operator
display_results(
    $client->Query('php -programming'),
    '"php -programming"');

// Search by grouping terms
display_results(
    $client->Query('(php & programming) | (leadership & success)'),
    '"(php & programming) | (leadership & success)"');

// Demonstrate how OR precedence is higher than AND
display_results(
    $client->Query('development framework | language'),
    '"development framework | language"');

// This won't work
display_results($client->Query('-php'), '"-php"');

- Execute the script in a browser (the output is shown in the next section).

What just happened?

We created a PHP script to see how different Boolean operators work. Let's understand the working of each of them.

The first search query, "php programming", did not use any operator. There is always an implicit AND operator, so the "php programming" query actually means "php & programming". In the second search query we explicitly used the & (AND) operator. Thus the output of both queries was exactly the same, as shown in the following screenshot:

Our third search query used the OR operator. If either of the terms gets matched whilst using OR, the document is returned. Thus "php | programming" will return all documents that match either "php" or "programming", as seen in the following screenshot:

The fourth search query used the NOT operator. In this case, the word that comes just after the NOT operator should not be present in the matched results.
So "php -programming" will return all documents that match "php" but do not match "programming". We get results as seen in the following screenshot:

Next, we used the grouping operator. This operator is used to group other operators. We searched for "(php & programming) | (leadership & success)", and this returned all documents which matched either "php" and "programming", or "leadership" and "success", as seen in the next screenshot:

After that, we fired a query to see how OR has a higher precedence than AND. The query "development framework | language" is treated by Sphinx as "(development) & (framework | language)". Hence we got documents matching "development & framework" and "development & language", as shown here:

Lastly, we saw how a query like "-php" does not return anything. Ideally it should have returned all documents which do not match "php", but for technical and performance reasons such a query is not evaluated. When this happens we get the following output:

Extended query syntax

Apart from the Boolean operators, there are some more specialized operators and modifiers that can be used when using the extended matching mode. Let's understand this with an example.
Time for action – searching with extended query syntax

- Create a PHP script search_extended_mode.php in your webroot, with the same client setup as in the previous scripts, followed by this code:

// Set the mode to SPH_MATCH_EXTENDED2
$client->SetMatchMode(SPH_MATCH_EXTENDED2);

// Returns documents whose title matches "php" and
// content matches "significant"
display_results(
    $client->Query('@title php @content significant'),
    'field search operator');

// Returns documents where "development" comes
// before 8th position in content field
display_results(
    $client->Query('@content[8] development'),
    'field position limit modifier');

// Returns only those documents where both title and content
// match "php" and "namespaces"
display_results(
    $client->Query('@(title,content) php namespaces'),
    'multi-field search operator');

// Returns documents where any of the fields
// matches "games"
display_results(
    $client->Query('@* games'),
    'all-field search operator');

// Returns documents where the "development framework"
// phrase matches exactly
display_results(
    $client->Query('"development framework"'),
    'phrase search operator');

// Returns documents where there are three words
// between "people" and "passion"
display_results(
    $client->Query('"people passion"~3'),
    'proximity search operator');

// Returns documents where any two
// words from the phrase match
display_results(
    $client->Query('"people development passion framework"/2'),
    'quorum search operator');

- Execute the script in a browser (the output is explained in the next section).

What just happened?

To use the extended query syntax, we set the match mode to SPH_MATCH_EXTENDED2:

$client->SetMatchMode(SPH_MATCH_EXTENDED2);

The first operator we used was the field search operator. Using this operator we can tell Sphinx which fields to search against (instead of searching against all fields). In our example we searched for all documents whose title matches "php" and whose content matches "significant".
As an output, we got the post (document) with id 5, which was the only document that satisfied this matching condition, as shown below:

@title php @content significant

The search for that term returns the following result:

Following this we used the field position limit modifier. The modifier instructs Sphinx to select only those documents where "development" comes before the 8th position in the content field, that is, it limits the search to the first eight positions within the given field.

@content[8] development

And we get the following result:

Next, we used the multiple field search operator. With this operator you can specify which fields (combined) should match the queried terms. In our example, documents are only matched when both the title and content match "php" and "namespaces".

@(title,content) php namespaces

This gives the following result:

The all-field search operator was used next. In this case the query is matched against all fields.

@* games

This search term gives the following result:

The phrase search operator works exactly the same as when we set the matching mode to SPH_MATCH_PHRASE; this operator implicitly does the same. So, a search for the phrase "development framework" returns the post with id 7, since the exact phrase appears in its content.

"development framework"

The search term returns the following result:

Next we used the proximity search operator. The proximity distance is specified in words, adjusted for word count, and applies to all words within quotes. So, "people passion"~3 means there must be a span of less than five words that contains both the words "people" and "passion". We get the following result:

The last operator we used is called the quorum operator. With it, Sphinx returns only those documents that match the given threshold of words. "people development passion framework"/2 matches those documents where at least two of the four words in the query match.
Our query returns the following result:

Using what we have learnt above, you can create complex search queries by combining any of the previously listed search operators. For example:

@title programming "photo gallery" -(asp|jsp) @* opensource

The query means that:

- The document's title field should match 'programming'
- The same document must also contain the words 'photo' and 'gallery' adjacently in any of the fields
- The same document must not contain the words 'asp' or 'jsp'
- The same document must contain the word 'opensource' in any of its fields

There are a few more operators in extended query syntax, and you can see their examples at.

Summary

In this article, we saw how to use the Sphinx API to search from within your application. Working with the index:

- We wrote different search queries
- We saw how PHP's implementation of the Sphinx client API can be used in PHP applications to issue some powerful search queries

Further resources on this subject:

- Drupal 6 Search Engine Optimization [Book]
- Search Engine Optimization in Joomla! [Article]
- Blogger: Improving Your Blog with Google Analytics and Search Engine Optimization [Article]
Re: [Cryptography] In the face of cooperative end-points, PFS doesn't help

    This space is of particular interest to me. I implemented just one of these and published the protocol (rather than pimp my blog, if anyone wants to read up on the protocol description feel free to email me and I'll send you a link). The system itself was built around a fairly simple PKI which …

Re: [Cryptography] Books on modern cryptanalysis

    On 11 Sep 2013 18:37, Bernie Cosell ber...@fantasyfarm.com wrote: The recent flood of discussions has touched on many modern attacks on cryptosystems. I'm long out of the crypto world [I last had a crypto clearance *before* differential cryptanalysis was public info!]. Attacks that leak a …

Re: [Cryptography] Summary of the discussion so far

    On 13 Sep 2013, at 21:46, Nico Williams wrote: On Fri, Sep 13, 2013 at 03:17:35PM -0400, Perry E. Metzger wrote: On Thu, 12 Sep 2013 14:53:28 -0500 Nico Williams n...@cryptonector.com wrote: Traffic analysis can't really be defeated, not in detail. What's wrong with mix networks? …

Re: [Cryptography] prism proof email, namespaces, and anonymity

    On Fri, Sep 13, 2013 at 10:12 PM, Perry E. Metzger pe...@piermont.com wrote: On Fri, 13 Sep 2013 16:55:05 -0400 John Kelsey crypto@gmail.com wrote: Everyone, The more I think about it, the more important it seems that any anonymous email like communications system *not* include …

Re: [Cryptography] End to end

    On 17 Sep 2013 15:47, Christoph Gruber gr...@guru.at wrote: On 2013-09-16 Phillip Hallam-Baker hal...@gmail.com wrote: [snip] If people are sending email through the corporate email system then in many cases the corporation has a need/right to see what they are sending/receiving. [snip] …

Re: [Cryptography] End to end

    On 18 Sep 2013 07:44, Christoph Gruber gr...@guru.at wrote: On 2013-09-17 Max Kington mking...@webhanger.com wrote: [snip] Hence, store in the clear, keep safe at rest using today's archival mechanism and when that starts to get dated move onto the next one en-masse, for all your media …

Re: [Cryptography] PRISM-Proofing and PRISM-Hardening

    On 19 Sep 2013 19:11, Bill Frantz fra...@pwpconsult.com wrote: On 9/19/13 at 5:26 AM, rs...@akamai.com (Salz, Rich) wrote: I know I would be a lot more comfortable with a way to check the mail against a piece of paper I received directly from my bank. I would say this puts you in the sub…