text
stringlengths
454
608k
url
stringlengths
17
896
dump
stringclasses
91 values
source
stringclasses
1 value
word_count
int64
101
114k
flesch_reading_ease
float64
50
104
This is awesome, thanks for the update. Also, you forgot to mention the addition of the NFS packages, which means CDK can now consume NFS storage OOTB in addition to having Ceph as a backend. Great work folks :) ++ Sam -- Samuel Cozannet Cloud, Big Data and IoT Strategy Team Business Development - Cloud and ISV Ecosystem Changing the Future of Cloud Ubuntu <> / Canonical UK LTD <> / Juju <> samuel.cozan...@canonical.com mob: +33 616 702 389 skype: samnco Twitter: @SaMnCo_23 [image: View Samuel Cozannet's profile on LinkedIn] <> On Thu, Oct 13, 2016 at 9:53 PM, Charles Butler < charles.but...@canonical.com> wrote: > 10/13 Release Notes > > Greetings everyone! It's been a short but busy week for us. We've landed > a lot more bugfixes and some new features for you all to kick the tires. > We're excited to push this week’s rollup as it contains the early work > (alpha?) in consuming Ceph RBD volumes for persistent volume storage in > your kubernetes workloads. > > It’s missing from the readme, so here is a quick rundown below the release > notes. As always, bugs/comments/questions are all welcome. > > > Or you can find us on irc in #juju on irc.freenode.net > > > Layer-docker > > > - > > Added DOCKER_OPTS passthrough config option. This enables end users to > configure the runtime of their docker-engine (Such as insecure registries) > without having to add python code to the layers and/or re-build a fork. > > > - > > Corrected an immutable config when attempting to switch between > archive docker package and the docker-engine package from upstream. > > > Thanks @brianlbaird and @simonklb for driving this feature during > dev/testing. > > Flannel > > > - > > Corrected the directory glob pattern on flannel which was failing and > causing some false positives in the cloud weather report testing tool. > > > Kubernetes Master > > - > > Added a create-rbd-pv action to enlist persistent storage from Ceph. > - > > This requires the use of the ceph-mon charm from the > openstack-charmers-next branch. > - > > Closed a bug where running microbots would yield an EOF error due to > proxy settings. Consult the README for limited egress environments. (Thanks > @ryebot and @cynerva) > > > Kubernetes Worker > > - > > Added a kubectl wrapper for context with manifests, and a kubectl > wrapper for arbitrary keyword args. > - > > Various lint fixes. > - > > Worker nodes now cleanly remove themselves from the cluster during the > stop hook. (Thanks to @ryebot and @cynerva) > - > > The ingress controller now scales to the number of deployed worker > units. 1 ingress controller per 1 worker unit. (Thanks to @ryebot and > @cynerva) > > > Canonical Distribution of Kubernetes Bundle > > > - > > Added documentation for proxy settings in network limited environments > to the bundle README. (Thanks to @ryebot and @cynerva) > - > > Updated the README with additional limitation notes about which charms > are not compatible enough to run in LXD at this time. > - > > Bumped each charm to their latest revision. > > > Kubernetes Core Bundle > > A minimalist bundle, only deploying the bare minimum required to evaluate > kubernetes. Useful when doing laptop development or resource constrained > environments. (Thanks @cynerva and @ryebot) > > > > - > > The kubernetes core bundle has been updated with our latest release of > the Canonical Distribution of Kubernetes (CDK) charms. > - > > Brand new README imported from the CDK bundle. > > > - we’re still > testing this minimal bundle, and it will be published in the charm store as > early as next week. Thanks for your early interest! > > Etcd > > - > > Refactored the test to gate on the status messages before reading the > deployment as ready and proceeding with executing tests. > > > > Quick Rundown on how to enlist RBD PV’s > > You’ll need to be running at bare-minimum the ceph-mon charm from the > ~openstack-charmers-next namespace. > > juju deploy cs:~openstack-charmers-next/xenial/ceph-mon -n 3 > > juju deploy cs:ceph-osd -n 3 > > From here you will need to enlist the OSD storage devices: > > For example on GCE you would fulfill this request with GCE Persistent > Disks: > > juju add-storage ceph-osd/0 osd-devices=gce,10gb > > juju add-storage ceph-osd/1 osd-devices=gce,10gb > > juju add-storage ceph-osd/2 osd-devices=gce,10gb > > This will allocate 30gb of storage, across the 3 OSD device nodes, and > begin your ceph replicated file storage cluster. > > Next is to relate the storage cluster with the kubernetes master: > > juju add-relation kubernetes-master ceph-mon > > We’re now ready to enlist Persistent Volumes in Kubernetes which our > workloads can consume via PersistentVolumeClaims > > juju run-action kubernetes-master/0 create-rbd-pv name=test size=50 > > Tailing a watch on your kubernetes cluster like the following, you should > see the PV become enlisted and be marked as available: > > watch kubectl get pv --all-namespaces > > NAME CAPACITY ACCESSMODES STATUS CLAIM REASON AGE > > test 50M RWO Available 10s > > > To consume these Persistent Volumes, your pods will need an associated PVC > with them, and is outside the scope of this tutorial. See: > for more > information. > > This work is early so please let us know if you using storage and remember > to open issues at:- > kubernetes/issues > > -- > Juju Charmer > Canonical Group Ltd. > Ubuntu - Linux for human beings | > Juju - The fastest way to model your application | > > -- > Juju mailing list > Juju@lists.ubuntu.com > Modify settings or unsubscribe at: > mailman/listinfo/juju > > -- Juju mailing list Juju@lists.ubuntu.com Modify settings or unsubscribe at:
https://www.mail-archive.com/juju@lists.ubuntu.com/msg05142.html
CC-MAIN-2017-51
refinedweb
863
53.51
You can subscribe to this list here. Showing 25 50 100 250 results of 316 *> Author: *Oleg Broytmann ** > Put them here: hey, that's great! good to have a wiki. It would also be good to have it= s existence made known on the sqlobject.org website - I don't see any mention of it there. At the moment the wiki is not responsive - it's taking up to 5 minutes to serve up a page - but I did get a chance to look at it earlier. cheers, -- Stewart Midwinter stewart@... stewart.midwinter@... Skype, GoogleTalk, iChatAV, MSN, Yahoo: midtoad AIM:midtoad1 On 11/30/05, Luke Opperman <luke@...> wrote: > def test_transaction_commit_sync_multi(): > if not supports('transactions'): > return > setupClass(TestSOTrans) > trans =3D TestSOTrans._connection.transaction() > trans2 =3D TestSOTrans._connection.transaction() > try: > TestSOTrans(name=3D'bob') > bIn =3D TestSOTrans.byName('bob', connection=3Dtrans) > bIn2 =3D TestSOTrans.byName('bob', connection=3Dtrans2) > bIn.name =3D 'robert' > assert bIn2.name =3D=3D 'bob' > trans.commit() > trans2.commit() > E assert bIn2.name =3D=3D 'robert' > > assert <TestSOTrans 30 name=3D'bob'>.name =3D=3D 'robert' > finally: > TestSOTrans._connection.autoCommit =3D True > > The issue is that trans2 made no change to Bob's name, but it does not pi= ck up > the change. Sometimes this will be the intended behavior, sometimes not. > Rollback works correctly, because rollback() expires this transaction's > instances. > > My initial preference for this would be to a) allow calls to begin() afte= r > commit(), and have begin() do the expiring of existing instances. I think the notion is that: 1) transactions are isolated 2) you can commit as many times as you want How it works is consistent with that... but, I'm not sure what the preferred behavior would be here. If Transactions are short-lived it probably doesn't make much difference. If you have Transactions open for a while, then it really could make a difference. Kevin Oleg Broytmann wrote: > I backported the fix to 0.7 branch. This is the last bug I wanted to fix > in 0.7. Time to start 0.7.1 beta cycle. I'd like to get the SQLite threading bug fixed for 0.7.1, but that's all I have in mind. -- Ian Bicking / ianb@... / Hello! I just committed a fix for BLOBs on PySQLite2; PySQLite2 doesn't export encode() and decode() from libsqlite. My patch tests if pysqlite1 is available; if it is - SQLObject uses encode()/decode() from it; if it is not available - SQLObject uses base64. This means one cannot simply replace pysqlite1 with PySQLite2 - BLOB columns in existing databases must be reencoded. Well, this is better than not supporting BLOBs at all with PySQLite2. I backported the fix to 0.7 branch. This is the last bug I wanted to fix in 0.7. Time to start 0.7.1 beta cycle. Oleg. -- Oleg Broytmann phd@... Programmers don't die, they just GOSUB without RETURN. Forgot to attach file, it's now on SF #1370278. - Luke There is one caveat to this behavior: if you have cacheValues=False, then instances are cached, but every column access is a re-query to the database. This used to be the default, I see now that cacheValues is True by default. I am attaching a diff against rev 1326 for our fix for this, and includes a test for it. Forcing the connection/transaction cached instances to expire during commit allows you to use cacheValues=True and transactions, but you are correct (from another email) that in a multi-PROCESS situation, you really can't use cacheValues=True unless all your access is through Transactions (but with it False, it is safe). There is an outstanding issue left after this fix, just not sure whether it's a problem or not, the following test shows it: def test_transaction_commit_sync_multi(): if not supports('transactions'): return setupClass(TestSOTrans) trans = TestSOTrans._connection.transaction() trans2 = TestSOTrans._connection.transaction() try: TestSOTrans(name='bob') bIn = TestSOTrans.byName('bob', connection=trans) bIn2 = TestSOTrans.byName('bob', connection=trans2) bIn.name = 'robert' assert bIn2.name == 'bob' trans.commit() trans2.commit() E assert bIn2.name == 'robert' > assert <TestSOTrans 30.name == 'robert' finally: TestSOTrans._connection.autoCommit = True The issue is that trans2 made no change to Bob's name, but it does not pick up the change. Sometimes this will be the intended behavior, sometimes not. Rollback works correctly, because rollback() expires this transaction's instances. My initial preference for this would be to a) allow calls to begin() after commit(), and have begin() do the expiring of existing instances. Thoughts? - Luke I have added this to the SF tracker, #1370261. - Luke Hello, Ian. test_boundattributes.py hangs here. Python 2.2, 2.3 and 2.4, SQLite (PySQLite 1 and 2) and PostgreSQL. Hangs until [Ctrl]+[C]. Infinite loop somewhere, I suppose... Oleg. -- Oleg Broytmann phd@... Programmers don't die, they just GOSUB without RETURN. On 11/30/05, Oleg Broytmann <phd@...> wrote: > On Thu, Dec 01, 2005 at 01:11:44AM +0800, Yuan HOng wrote: > > It seems the select() method doesn't return a result set that reflects > > changes to the data rows made by other processes. > > Very much depends on transactions and SQLObject caching. I just wrote about this yesterday. If you enclose *all* of your logical transactions (such as a web request) in a Transaction, SQLObject's caching should not pose a problem for you, even in a multiprocess environment. Kevin On Thu, Dec 01, 2005 at 01:11:44AM +0800, Yuan HOng wrote: > It seems the select() method doesn't return a result set that reflects > changes to the data rows made by other processes. Very much depends on transactions and SQLObject caching. Oleg. -- Oleg Broytmann phd@... Programmers don't die, they just GOSUB without RETURN. On Wed, Nov 30, 2005 at 09:50:39AM -0700, Stewart Midwinter wrote: > I'd be willing to work on some recipes - at least as far as my knowledge > goes. Put them here: Oleg. -- Oleg Broytmann phd@... Programmers don't die, they just GOSUB without RETURN. SXQgc2VlbXMgdGhlIHNlbGVjdCgpIG1ldGhvZCBkb2Vzbid0IHJldHVybiBhIHJlc3VsdCBzZXQg dGhhdCByZWZsZWN0cwpjaGFuZ2VzIHRvIHRoZSBkYXRhIHJvd3MgbWFkZSBieSBvdGhlciBwcm9j ZXNzZXMuCgpUbyBpbGx1c3RyYXRlLCBzdXBwb3NlIEkgaGF2ZSBhIHBlcnNvbiB0YWJsZSB3aXRo IGlkLCBuYW1lIGFuZCBhZ2UsCmFuZCB0aGUgY29ycmVzcG9uZGluZyBTUUxPamJlY3QgY2xhc3M6 CgpjbGFzcyBQZXJzb24oU1FMT2piZWN0KToKICAgIG5hbWU9U3RyaW5nQ29sKCkKICAgIGFnZT1J bnRlZ2VyQ29sKCkKCkluIGFwcGxpY2F0aW9uIEEsIEkgY2FsbCBwID0gUGVyc29uLnNlbGVjdCgp IGFuZCBwcmludCB0aGUgbmFtZXMgYW5kCmFnZXMuIFRoZW4gSSBjaGFuZ2UgZm9yIGluc3RhbmNl IHRoZSBuYW1lIG9mIG9uZSByZXR1cmVkIHBlcnNvbiwgdXNpbmcKYW5vdGhlciBwcm9ncmFtLCBs aWtlIGEgZGF0YWJhc2UgY2xpZW50LgoKVGhlbiBJIGNhbGwgUGVyc29uLnNlbGVjdCgpIGZvciBh IHNlY29uZCB0aW1lIGFuZCBwcmludCB0aGUgbmFtZXMgYW5kCmFnZXMuIEkgd291bGQgZXhwZWN0 IHRvIHNlZSB0aGUgY2hhbmdlZCBuYW1lLCBidXQgaXQgcmVtYWlucyB0aGUgc2FtZQphcyBpbiB0 aGUgZmlyc3QgY2FsbCB0byBzZWxlY3QoKS4KCkkgaGF2ZSB0byBpdGVyYXRlIG92ZXIgdGhlIHNl bGVjdCByZXN1bHRzIGFuZCBjYWxsIHN5bmMoKSBvbiBlYWNoCm9iamVjdCB0aGUgZW5zdXJlIHRo ZXkgbm93IHJlZmxlY3QgdGhlIGN1cnJlbnQgc3RhdHVzIG9mIHRoZSBkYXRhYmFzZS4KCkJ1dCBz aW5jZSB0aGUgUGVyc29uLnNlbGVjdCgpIGNhbGwgc2VuZCBvdXQgYW4gU1FMIGxpa2UgJ3NlbGVj dCBpZCwKbmFtZSwgYWdlIGZyb20gcGVyc29uJywgaXQgc2hvdWxkIGFscmVhZHkgZ2V0IHRoZSB1 cGRhdGVkIHZhbHVlcyBmcm9tCnRoZSBkYXRhYmFzZS4gSSBmaW5kIGl0IG5vdCBxdWl0ZSB1bmRl cnN0YW5kYWJsZSB3aHkgdGhlIHJlc3VsdCBzZXQKZG9lc24ndCBhbHJlYWR5IGNvbnRhaW4gdXBk YXRlZCBkYXRhPwoKV2hhdCBzaGFsbCBJIGRvIHRvIG1ha2Ugc2VsZWN0KCkgcmV0dXJuIHVwZGF0 ZWQgdmFsdWVzPwoKLS0KSG9uZyBZdWFuCgq087ncvNLN+MnPvaiyxLOsytAK17DQ3tew5Oq9qLLE 0rvVvsq9ubrO7wpodHRwOi8vd3d3LmhvbWVtYXN0ZXIuY24K thanks for the follow-up Jon; no I didn't see your post concerning the 3-wa= y join. I gather the SQL below is simply a restatement of what you put in your othe= r You are defining two joins to the neighbourhood table, and using an alias t= o do so. The way you define the alias is simply to state the name of the tabl= e (neighbourhood) and then its alias (nhfr or nhto). Is that right? I'll poke around tonight with that and see what I come up with. While I did get a solution using SQLobject, by using a couple of pre-querie= s to define two IN lists, it would obviously be more elegant to be able to do it all in one go with SQLobject. A couple of observations: The challenge I find is how to express SQL queries in SQLobject. What's missing for me in the site documentation is a recipe-style section where various typical SQL queries are expressed in SQLobject terminology. After reading SAM's 10-minute guide to SQL I was able to fashion together SQL queries to solve my problem, but hours of putzing around with SQLobject hasn't yet produced the desired results. Either SQLobject is more complicated than SQL, or we need more and better documentation. Or am I missing it somewhere, and I'm overlooking it? I'd be willing to work on some recipes - at least as far as my knowledge goes. What I think we need a whole lot of is something like this: "you want to select a single column from a table according to some criterion. here's how: SQL: SELECT prod_name from products where prod_price < 3.49; SQLobject: Product.Select(Product.q.prod_name, where=3D(Product.q.prod_pric= e< 3.49)) " note: I have no idea whether the SQLobject expression is correct! There are lots of examples of complicated SQLobject expressions in the documentation, with their equivalent SQL, which is great if you already kno= w SQLobject and want to learn how to write SQL. But for many of us, I'd wager that we know some python, and a little SQL, and nothing about SQLobject, so we need examples that enable us to go in the other direction, from SQL to SQLobject. thanks to a few gurus on the list, we can often get solutions to our problems. It would be great to be able to unload the work on them by having some recipes available so that they don't need to answer the same questions over and over. At the moment, there doesn't seem to be any way for users t= o contribute to the documentation. cheers S On 11/30/05, jon@... <jon@...> wrote: > > Stewart, > > Did you see my post concerning the 3-way join? That is your solution. > cheers, -- Stewart Midwinter stewart@... stewart.midwinter@... Skype, GoogleTalk, iChatAV, MSN, Yahoo: midtoad AIM:midtoad1 Hi, Ian Bicking wrote: > It's not really intentional. There's a patch related to this (at > least generally) that might resolve this issue: > -- I haven't had a chance to really look at it yet, I'm afraid, but > if everything looks good it should go in. Ok, i looked through the patch and applied it. It works, but has a shortcoming. If you define a foreignKey via some_id = SomeCol(..., foreignKey="some") it want to name the foreignKey also as some_id. Which is bad i think. The good thing is, that now you just have to change one line to change the name of the generated variable. To get the old behaviour i had to change this (new col.py line 214): # this is in case of ForeignKey, where we rename the column # and append an ID self.origName = origName or name into this: # this is in case of ForeignKey, where we rename the column # and append an ID self.origName = origName or name[:-2] Yes, this is still not very nice, but it works with the old ids, and is easier to change. And it should be easy to change this into an Style method. For now i had to change id to -3 to work with my scheme, but if you think this should be Style thingie i will make a patch up for it. My current problem is, that i don't see when the origName is directly provided, so i don't know if it has to get changed too. Perhaps i will find some time to look further into it. Andres Freund PS: Sorry Ian, for sending it directly to you, i have another email client at work than at home, so my customized keybindings didnt work. On Tue, Nov 29, 2005 at 10:32:46PM -0600, Ian Bicking wrote: > UNION is peculiar to me, I haven't seen it before. It doesn't really > fit with SQLObject. Absolutely. > However, you can always just produce two selects > and concatenate the results This is an equivalent of SELECT UNION ALL. SELECT UNION filters the united result and removes duplicates. Oleg. -- Oleg Broytmann phd@... Programmers don't die, they just GOSUB without RETURN. I =3D (perhap= s some kind of join is needed?), but I can build some individual queries that get me partly there. The following two queries get me lists of the neighbourhoods that meet the start and end criteria. select2 =3D Neighbourhood.select(Neighbourhood.q.quadrant=3D=3D'NE') select3 =3D Neighbourhood.select(Neighbourhood.q.quadrant=3D=3D'NW') Now I want to create a query that selects records whose hoodfrom is in the results of select2 and whose hoodto is in the results of select3. I tried this: select =3D =3D [] for sel in select2: nelist.append(sel.name) And I did something similar for the results of select3, building a nwlist. Then I re-ran the select query, modifying it to use the lists instead of th= e queries: select =3D@... stewart.midwinter@... Skype, GoogleTalk, iChatAV, MSN, Yahoo: midtoad AIM:midtoad1 Jon Rosen wrote: >. UNION is peculiar to me, I haven't seen it before. It doesn't really fit with SQLObject. However, you can always just produce two selects and concatenate the results; since Python is polymorphic, if those two tables are sufficiently similar (similar enough for UNION) then the two objects should go together. But you'll have to do it in two queries. -- Ian Bicking | ianb@... | Hi Stewart, I am going to take a stab anyway based on what I think you are trying to do. You want to find all the routes whose quadrant found in the neighbourhood table as joined by hoodfrom is NE and whose quadrant also found in the neighbourhood table but joined by a different column, in this case hoodto, is NW. The best way to do that is a straightforward THREE-WAY join as follows: select origin, dest, hoodfrom, hoodto, nf.name, nf.quadrant, nt.name, nt.quadrant from route, neighbourhood nf, neighbourhood nt where route.hoodfrom = nf.name and route.hoodto = nt.name and nf.quadrant = 'NE' and nt.quadrant = 'NW'; No fuss, no muss, no unions and no correlated subqueries! ;-) In effect, we create two "copies" of the neighbourhood table, join one to route based on hoodto, the other to route based on hoodfrom and get two different quadrants back in each row, one for the hoodfrom side and one for the hoodto side. Of course, there is no actual second copy of the table created, SQL does this with join "magic". The only requirement is to distinguish the two "copies" of the same table in the same query. That is done with the table aliases nf and nt (neighbourhood-from and -to, respectively). I use shortnames but you can use any name you want. Once you use the alias in the FROM clause, you have to qualify the column references with the alias name (otherwise they would be ambiguous). I hope this makes sense (and again, I have NO idea how to do this with SQLObject yet, sorry!) Good luck! Jon hmmm, thanks Jon. How about if I take a different tack, one that I mentioned in the previous note to Andy: The statement would be simpler if I just selected all records that matched > hoodfrom =3D=3D 'SE' (which could include perhaps 25% of all the records = in the > database), then do a 2nd query on the results of this query, and match on > hoodto =3D=3D 'NW'. I'm not sure yet how to query the results of a query, but I'll look into that. S -- Stewart Midwinter stewart@... stewart.midwinter@... Skype, GoogleTalk, iChatAV, MSN, Yahoo: midtoad AIM:midtoad1 Stewart, Actually, Andy didn't notice that the two WHERE clauses in the UNION query reference different columns for hoodfrom and hoodto and so the simple IN example isn't the proper answer.. The subquery version won't give you anything like what I think you want. The outer query picks up those rows similar to the first query in the union example but then RESTRICTS those rows based on the subquery (the EXISTS clause). However, the exists clause isn't "correlated" to the outer query and therefore, if there is even ONE row that matches the inner query, all of the rows in the outer query will be returned (since the EXISTS will ALWAYS be true) and if there are NO rows that matche the inner query, you won't get ANYTHING back from the outer query (black or white, all or nothing). To be correlated, the inner query has to reference a value from the outer query, and since you use BOTH tables in both the inner query and outer query and don't use any aliases, there can be no correlation. As for how to do EITHER of these queries with SQLObject, I don't know the answer and would also be interested in learning that. Jon Rosen Andy Todd wrote: ), but >>the example there used cursor(), which I haven't used up to now. >> >>here's my SQL query, which produces the desired results in command-line >>mySQL: >>mysql> select origin, dest, hoodfrom, hoodto, name, quadrant >> -> from route, neighbourhood >> -> where route.hoodfrom = neighbourhood.name >> -> and neighbourhood.> -> union >> -> select origin, dest, hoodfrom, hoodto, name, quadrant >> -> from route, neighbourhood >> -> where route.hoodto = neighbourhood.name >> -> and neighbourhood.quadrant = 'NW'; >> >>And here's the same query using subquery: >>select origin, dest, hoodfrom, hoodto, name, quadrant from route, >>neighbourhood >>where (route.hoodfrom = neighbourhood.name and neighbourhood.quadrant = >>'NE') >>and exists (select origin, dest, hoodfrom, hoodto, name, quadrant from >>route, neighbourhood >> where route.hoodto = neighbourhood.name and neighbourhood.quadrant = = neighbourhood.name > -> and neighbourhood.quadrant IN ('NE', 'NW') > >Regards, >Andy >-- >>From the desk of Andrew J Todd esq > > >------------------------------------------------------- >This SF.net email is sponsored by: Splunk Inc. Do you grep through log files >for problems? Stop! Download the new AJAX search engine that makes >searching your log files as easy as surfing the web. DOWNLOAD SPLUNK! > >_______________________________________________ >sqlobject-discuss mailing list >sqlobject-discuss@... > > > > -- --------------------------------------------------------------- "The difference between theory and reality is that in theory, there is no difference between theory and reality, but in reality, there is." - Anonymous), bu= t > the example there used cursor(), which I haven't used up to now. > > here's my SQL query, which produces the desired results in command-line > mySQL: > mysql> select origin, dest, hoodfrom, hoodto, name, quadrant > -> from route, neighbourhood > -> where route.hoodfrom =3D neighbourhood.name > -> and neighbourhood.quadrant =3D 'NE' > -> union > -> select origin, dest, hoodfrom, hoodto, name, quadrant > -> from route, neighbourhood > -> where route.hoodto =3D neighbourhood.name > -> and neighbourhood.quadrant =3D 'NW'; > > And here's the same query using subquery: > select origin, dest, hoodfrom, hoodto, name, quadrant from route, > neighbourhood > where (route.hoodfrom =3D neighbourhood.name and neighbourhood.quadrant = =3D > 'NE') > and exists (select origin, dest, hoodfrom, hoodto, name, quadrant from > route, neighbourhood > where route.hoodto =3D neighbourhood.name and neighbourhood.quadrant =3D= =3D neighbourhood.name -> and neighbourhood.quadrant IN ('NE', 'NW') Regards, Andy -- From the desk of Andrew J Todd esq Andres Freund wrote: > Im using sqlobject in combination with sqlos and im a mostly happy user > except one issue: > I have an existing database so i cant change the naming style of that. > The problem is, that columns referring to other tables are named like > this: table_name_id. If i define that to be a foreign key it generates a > variable name line table_name_ which i find ugly, especially as they > occur in my interface. > Is there any reason why this is hardcoded (col.py line 182(in > trunk)/181)? Beside that no one changed it so far (which is a valid > reason, but I will change it myself then). Because i would like to > change this, but i dont see any way, to get around this naming. It's not really intentional. There's a patch related to this (at least generally) that might resolve this issue: -- I haven't had a chance to really look at it yet, I'm afraid, but if everything looks good it should go in. -- Ian Bicking / ianb@... / Hi! I'm writting an app which uses many threads and I'm getting some exceptions from SQLObject. So, my question is: is there a way to use SQLObject with threads? I'm using SQLObject 0.7 and MySql 5.0.x. The errors I get are like this: Exception in thread Thread-23: Traceback (most recent call last): File "C:\Python24\lib\threading.py", line 444, in __bootstrap self.run() File "C:\Python24\lib\threading.py", line 424, in run self.__target(*self.__args, **self.__kwargs) File "D:\projetos\kasamba\pinnaclesports\pinnaclesports.py", line 325, in get_bets bet_list +=3D getBetsFromSoup(soup) File "D:\projetos\kasamba\pinnaclesports\pinnaclesports.py", line 288, in getBetsFromSoup bet_list +=3D extractMatchInfo(section, match_no) File "D:\projetos\kasamba\pinnaclesports\pinnaclesports.py", line 208, in extractMatchInfo create_over_under_bet(over_under, side_name=3Dteam1_name, td=3Drows[start+5], type=3D'overunder') File "D:\projetos\kasamba\pinnaclesports\pinnaclesports.py", line 193, in create_over_under_bet moneyadj=3Dmoneyadj, maxbet=3Dmaxbet) File "D:\projetos\kasamba\pinnaclesports\dbhelper.py", line 148, in creat= e_bet moneyadj=3Dmoneyadj, maxbet=3Dmaxbet) File "c:\python24\lib\site-packages\SQLObject-0.7.0-py2.4.egg\sqlobject\m= ain.py", line 1183, in __init__ self._create(id, **kw) File "c:\python24\lib\site-packages\SQLObject-0.7.0-py2.4.egg\sqlobject\m= ain.py", line 1207, in _create self.set(**kw) File "c:\python24\lib\site-packages\SQLObject-0.7.0-py2.4.egg\sqlobject\m= ain.py", line 1084, in set raise AttributeError, '%s (with attribute %r)' % (e, name) AttributeError: 'Bet' object has no attribute 'id' (with attribute 'gamePar= t') Thanks, JP I just want to confirm a suspicion that I have. Transactions have their own CacheSet, and there is *no* interplay between the Transaction's CacheSet and the DBConnection's CacheSet. So, if you: * pull an object from the database * switch to a transaction * pull the same object from the database * update it * commit the transaction * access that object again outside of the Transaction you'll end up with stale data. Is that correct? It seems to me that, when answering requests in a multithreaded environment, you should do *all* database access in Transactions (a new one for each request), which gives you a reasonable approximation of the "unit of work" mode of operation that works well with tools like Hibernate. If you use Transactions for writing and straight DBConnections for reading, you could end up with stale data appearing... Kevin -- Kevin Dangoor Author of the Zesty News RSS newsreader company: Using rev 1326 (probably long before this, found it during attempting to migrate an 0.5-based app to svn version). I've attached a simple test case, the problem is in main.SQLObject.set. Flow: 1. set plain setters 2. set nonplain setters (in this case, a non-db column) 2a. in the problem scenario, these then set some db columns 2b. which in _SO_setValue add their correct values to _SO_createValues 3. set then updates _SO_createValues with the original dict, which has the default values for the db columns set by the nonplain. Not sure if there's an intentional reason to delay populating _SO_createValues, by moving self._SO_createValues.update(kw) between steps 1 and 2 the tests pass for me: [luke@... sqlobject]$ svn diff main.py Index: main.py =================================================================== --- main.py (revision 1326) +++ main.py (working copy) @@ -1131,6 +1131,7 @@ if to_python: value = to_python(dbValue, self._SO_validatorState) setattr(self, instanceName(name), value) + self._SO_createValues.update(kw) for name, value in extra.items(): try: getattr(self.__class__, name) @@ -1142,7 +1143,6 @@ except AttributeError, e: raise AttributeError, '%s (with attribute %r)' % (e, name) - self._SO_createValues.update(kw) self.dirty = True return
http://sourceforge.net/p/sqlobject/mailman/sqlobject-discuss/?style=flat&viewmonth=200511
CC-MAIN-2014-49
refinedweb
3,802
57.98
30 import java.util.Date; 31 import java.util.concurrent.locks.Condition; 32 33 import org.apache.http.util.Args; 34 35 /** 36 * Represents a thread waiting for a connection. 37 * <p> 38 * This class implements throw away objects. It is instantiated whenever 39 * a thread needs to wait. Instances are not re-used, except if the 40 * waiting thread experiences a spurious wake up and continues to wait. 41 * </p> 42 * <p> 43 * All methods assume external synchronization on the condition 44 * passed to the constructor. 45 * Instances of this class do <i>not</i> synchronize access! 46 * </p> 47 * 48 * @since 4.0 49 * 50 * @deprecated (4.2) do not use 51 */ 52 @Deprecated 53 public class WaitingThread { 54 55 /** The condition on which the thread is waiting. */ 56 private final Condition cond; 57 58 /** The route specific pool on which the thread is waiting. */ 59 //@@@ replace with generic pool interface 60 private final RouteSpecificPool pool; 61 62 /** The thread that is waiting for an entry. */ 63 private Thread waiter; 64 65 /** True if this was interrupted. */ 66 private boolean aborted; 67 68 69 /** 70 * Creates a new entry for a waiting thread. 71 * 72 * @param cond the condition for which to wait 73 * @param pool the pool on which the thread will be waiting, 74 * or {@code null} 75 */ 76 public WaitingThread(final Condition cond, final RouteSpecificPool pool) { 77 78 Args.notNull(cond, "Condition"); 79 80 this.cond = cond; 81 this.pool = pool; 82 } 83 84 85 /** 86 * Obtains the condition. 87 * 88 * @return the condition on which to wait, never {@code null} 89 */ 90 public final Condition getCondition() { 91 // not synchronized 92 return this.cond; 93 } 94 95 96 /** 97 * Obtains the pool, if there is one. 98 * 99 * @return the pool on which a thread is or was waiting, 100 * or {@code null} 101 */ 102 public final RouteSpecificPool getPool() { 103 // not synchronized 104 return this.pool; 105 } 106 107 108 /** 109 * Obtains the thread, if there is one. 110 * 111 * @return the thread which is waiting, or {@code null} 112 */ 113 public final Thread getThread() { 114 // not synchronized 115 return this.waiter; 116 } 117 118 119 /** 120 * Blocks the calling thread. 121 * This method returns when the thread is notified or interrupted, 122 * if a timeout occurrs, or if there is a spurious wakeup. 123 * <p> 124 * This method assumes external synchronization. 125 * </p> 126 * 127 * @param deadline when to time out, or {@code null} for no timeout 128 * 129 * @return {@code true} if the condition was satisfied, 130 * {@code false} in case of a timeout. 131 * Typically, a call to {@link #wakeup} is used to indicate 132 * that the condition was satisfied. Since the condition is 133 * accessible outside, this cannot be guaranteed though. 134 * 135 * @throws InterruptedException if the waiting thread was interrupted 136 * 137 * @see #wakeup 138 */ 139 public boolean await(final Date deadline) 140 throws InterruptedException { 141 142 // This is only a sanity check. We cannot synchronize here, 143 // the lock would not be released on calling cond.await() below. 144 if (this.waiter != null) { 145 throw new IllegalStateException 146 ("A thread is already waiting on this object." + 147 "\ncaller: " + Thread.currentThread() + 148 "\nwaiter: " + this.waiter); 149 } 150 151 if (aborted) { 152 throw new InterruptedException("Operation interrupted"); 153 } 154 155 this.waiter = Thread.currentThread(); 156 157 boolean success = false; 158 try { 159 if (deadline != null) { 160 success = this.cond.awaitUntil(deadline); 161 } else { 162 this.cond.await(); 163 success = true; 164 } 165 if (aborted) { 166 throw new InterruptedException("Operation interrupted"); 167 } 168 } finally { 169 this.waiter = null; 170 } 171 return success; 172 173 } // await 174 175 176 /** 177 * Wakes up the waiting thread. 178 * <p> 179 * This method assumes external synchronization. 180 * </p> 181 */ 182 public void wakeup() { 183 184 // If external synchronization and pooling works properly, 185 // this cannot happen. Just a sanity check. 186 if (this.waiter == null) { 187 throw new IllegalStateException 188 ("Nobody waiting on this object."); 189 } 190 191 // One condition might be shared by several WaitingThread instances. 192 // It probably isn't, but just in case: wake all, not just one. 193 this.cond.signalAll(); 194 } 195 196 public void interrupt() { 197 aborted = true; 198 this.cond.signalAll(); 199 } 200 201 202 } // class WaitingThread
https://hc.apache.org/httpcomponents-client-4.5.x/httpclient/xref/org/apache/http/impl/conn/tsccm/WaitingThread.html
CC-MAIN-2018-05
refinedweb
708
66.94
Hi, given a cstring, I need to extract the digits in it, the digits are prefixed with either a '+' or '-'. Like ,.,.,.,+3ACT,.,.,.,.-12,.,.,.,.,.,.,.,actgncgt #OUTPUT 3 12 I've made a working program that does what I want, but it seems overly complicated. Does anyone have an idea if this can be done smarter, better, faster? Thanks in advance Btw I checked the program with valgrind and there are no leaks or errors #include <stdio.h> #include <string.h> #include <stdlib.h> int main(){ char tmp_array[100]; const char* seq = "+1236,,..,,actgn+3ACT-4CCCC"; printf("%s\n",seq); for(int i=0;i<strlen(seq);i++){ if(seq[i]!='+'&&seq[i]!='-') continue; int j=i+1; while(j<strlen(seq)){ if(seq[j]>='0' &&seq[j]<='9'){ j++; }else break; } strncpy(tmp_array,seq+i+1,j-i-1); tmp_array[j-i-1]='\0'; printf("numbers in substrings are: %d\n",atoi(tmp_array)); } return 0; }
https://www.daniweb.com/programming/software-development/threads/201053/extract-multidigits-from-a-char-substring
CC-MAIN-2018-47
refinedweb
154
67.96
SOAPpy : How do I prefix items with a string in a SOAP/XML request? Discussion in 'Python' started by Doug Farrell, Aug 28, 2003.,526 - Darryl L. Pierce - Dec 10, 2004 SOAPpy/ZSI/Twisted SOAP over stdin/stdout?Harry George, Nov 30, 2004, in forum: Python - Replies: - 1 - Views: - 413 - Harry George - Dec 1, 2004 - Replies: - 1 - Views: - 793 - bruno modulix - Apr 14, 2005 SOAPpy SOAP and/or WSDL QuestionRodney, Dec 4, 2005, in forum: Python - Replies: - 0 - Views: - 388 - Rodney - Dec 4, 2005 removing a namespace prefix and removing all attributes not in that same prefixChris Chiasson, Nov 12, 2006, in forum: XML - Replies: - 6 - Views: - 652 - Richard Tobin - Nov 14, 2006
http://www.thecodingforums.com/threads/soappy-how-do-i-prefix-items-with-a-string-in-a-soap-xml-request.321758/
CC-MAIN-2014-52
refinedweb
113
75.34
The blocks controls accept extra config so you can precisely control how the controls render and behave. insetControls and focusRing are the two we will configure. interface BlocksControlsProps { children: any index: number insetControls?: boolean focusRing?: boolean | FocusRingStyles } interface FocusRingStyles { offset?: number | { x: number; y: number } borderRadius?: number } Tip: The offset values render in pixels. Right now the focus ring is bleeding off the page. First, we'll adjust the offset; this is the amount of distance between the edge of the block element and where the 'ring' displays. Since this component is 'page-width', we'll also inset the controls to render within the block area. This way if the block renders at the very top of the page, the controls don't get cut off. components/Hero.js export function Hero() { return ( <div className="hero"> <div className="wrapper wrapper--narrow"> <h1> - <InlineTextarea name="headline" /> + <InlineTextarea name="headline" focusRing={false} /> </h1> <p> - <InlineTextarea name="subtext" /> + <InlineTextarea name="subtext" focusRing={false} /> </p> </div> </div> ); } export const heroBlock = { Component: ({ index }) => ( <BlocksControls index={index} + focusRing={{ offset: 0 }} + insetControls > <Hero /> </BlocksControls> ), template: { label: 'Hero', defaultItem: { headline: 'Suspended in a Sunbeam', subtext: 'Dispassionate extraterrestrial observer', }, fields: [], }, }; Notice how we added focusRing={false} to the Inline Fields. This is totally up to your preference whether you want the child fields to render their focus ring. For this demo, we chose to hide them for a cleaner aesthetic. If you wanted to have even more control over the focus ring offset, you could pass in specific x & y values. export const heroBlock = { Component: ({ index }) => ( <BlocksControls index={index} focusRing={{ offset: { x: -10, y: -18 } }} insetControls > <Hero /> </BlocksControls> ), template: { //... }, } You can also adjust the border radius or the amount of curve at the border intersections. In this example it is set to 0, making the focus ring border have square corners. export const heroBlock = { Component: ({ index }) => ( <BlocksControls index={index} focusRing={{ offset: { x: -5, y: -20 }, borderRadius: 0 }} insetControls > <Hero /> </BlocksControls> ), template: { //... }, } We will leave the zero border-radius setting out of the demo, but it's a great examples of all the control at your disposal over the focus ring. Go ahead and tinker with the styles to get the controls to your liking!
https://tinacms.org/guides/general/inline-blocks/customize-controls
CC-MAIN-2020-34
refinedweb
365
54.02
There is some documentation to explain this already, but below is a step-by-step that shows how to use an Excel spreadsheet as a Data Source for both unit and web tests. First, let’s set the stage. I’m going to use a solution containing a class library and a web site. The class library has a single class with a single method that simply returns a “hello”-type greeting. namespace SimpleLibrary { public class Class1 { public string GetGreeting(string name) { return "Hello, " + name; } } } For my VB friends out there: Namespace SimpleLibrary Public Class Class1 Public Function GetGreeting(ByVal name As String) As String Return "Hello, " & name End Function End Class End Namespace Unit Testing So now I’m going to create a unit test to exercise the “GetGreeting” method. (As always, tests go into a Test project. I’m calling mine “TestStuff”.) Here’s my straightforward unit test: [TestMethod()] public void GetGreetingTest() { Class1 target = new Class1(); string name = "Steve"; string expected = "Hello, " + name; string actual; actual = target.GetGreeting(name); Assert.AreEqual(expected, actual); } In VB: <TestMethod()> _ Public Sub GetGreetingTest() Dim target As Class1 = New Class1 Dim name As String = "Steve" Dim expected As String = "Hello, " & name Dim actual As String actual = target.GetGreeting(name) Assert.AreEqual(expected, actual) End Sub I’ll run it once to make sure it builds, runs, and passes: I have an Excel file with the following content in Sheet1: Nothing fancy, but I reserve the right to over-simplify for demo purposes. 🙂 To create a data-driven unit test that uses this Excel spreadsheet, I basically follow the steps you’d find on MSDN, with the main difference being in how I wire up my data source. I click on the ellipsis in the Data Connection String property for my unit test. Follow these steps to set up the Excel spreadsheet as a test data source for a unit test. - In the New Test Data Source Wizard dialog, select “Database”. - Click “New Connection”. - In the “Choose Data Source” dialog, slect “Microsoft ODBC Data Source” and click “Continue”. (For additional details about connection strings & data sources, check this out.) - In “Connection Properties”, select the “Use connection string” radio button, then click “Build”. - Choose if you want to use a File Data Source or a Machine Data Source. For this post, I’m using a Machine Data Source - Select the “Machine Data Source” tab, select “Excel Files” and click Ok - Browse to and select your Excel file. - Click “Test Connection” to make sure everything’s golden. - Click Ok to close “Connection Properties” - Click Next - You should see the worksheets listed in the available tables for this data source. - In my example, I’ll select “Sheet1$” - Click “Finish” - You should get a message asking if you want to copy your data file into the project and add as a deployment item. Click Yes. - You should now see the appropriate values in Data Connection String and Data Table Name properties, as well as your Excel file listed as a deployment item: - Now I return to my unit test, note that it’s properly decorated, and make a change to the “name” variable assignment to reference my data source (accessible via TestContext): void GetGreetingTest() { Class1 target = new Class1(); string name = TestContext.DataRow["FirstName"].ToString(); string expected = "Hello, " + name; string actual; actual = target.GetGreeting(name); Assert.AreEqual(expected, actual); } Again, in VB: Sub GetGreetingTest() Dim target As Class1 = New Class1 Dim name As String = TestContext.DataRow("FirstName").ToString() Dim expected As String = "Hello, " + name Dim actual As String actual = target.GetGreeting(name) Assert.AreEqual(expected, actual) End Sub - Now, running the unit test shows me that it ran a pass for each row in my sheet Yippee! Web Testing You can achieve the same thing with a web test. So I’m going to first create a simple web test that records me navigating to the website (at Default.aspx), entering a name in the text box, clicking, submit, and seeing the results. After recording, it looks like this. See “TxtName=Steve”? The value is what I want to wire up to my Excel spreadsheet. To do that: - Click on the “Add Data Source” toolbar button. - Enter a data source name (I’m using “ExcelData”) - Select “Database” as the data source type, and click Next - Go through the same steps in the Unit Testing section to set up a data connection to the Excel file. (Note: If you’ve already done the above, and therefore the Excel file is already in your project and a deployment item, browse to and select the copy of the Excel file that’s in your testing project. That will save you the hassle of re-copying the file, and overwriting.) - You’ll now see a Data Sources node in my web test: - Select the parameter you want to wire to the data source (in my case, TxtName), and view its properties. - Click the drop-down arrow in the Value property, and select the data field you want to use. - Now save and run your web test again. If you haven’t used any other data-driven web tests in this project, you’ll notice that there was only one pass. That’s because your web test run configuration is set to a fixed run count (1) by default. To make changes for each run, click “Edit run settings” and select “One run per data source row”. To make sure all rows in data sources are always leveraged, edit your .testrunconfig file to specify as such. - Now run it again, and you should see several passes in your test results: That’s it in a simple nutshell! There are other considerations to keep in mind such as concurrent access, additional deployment items, and perhaps using system DSNs, but this should get you started. How do you get this property window to show information for the method? Mine just shows the .cs file properties. Saritha – in what context? Can you provide more details (or a scenario) of what you’re trying to accomplish? Hi, I would like to use data from a column in a excel file for eg. populate a list from a column. How would I go about it? Thanks, Saritha.
https://blogs.msdn.microsoft.com/slange/2009/09/03/data-driven-tests-in-team-system-using-excel-as-the-data-source/
CC-MAIN-2016-44
refinedweb
1,039
61.77
There's not much on the forum to help you out with true false statements, that I could find. Do a forum or Google search for "bools". There's not much on the forum to help you out with true false statements, that I could find. Do a forum or Google search for "bools". Or for the exact answer to your question, putting a string into char c [] puts it in read only format, so you can't output a value that is read only, it also has to be able to write the value (so... Char c [] is an array of characters, initialize it's size: char c [3]. I'm trying to throw an overflow exception: void Stack::push(char c) { if(top == max_size) throw Overflow(); } Read over object oriented design: S.O.L.I.D: The First 5 Principles of Object Oriented Design | Scotch Object-oriented analysis and design - Wikipedia and try to design a small program based... Try libcurl. If you're a newbie to c, why not try Python programming instead. There are many Internet networking books on python. Using ftp libraries in c seems... Mastering python :) Just got through 150 pages of a Python book, by Mark Lutz. Everything said about it above is right, easy to learn, oop, less lines of code than same type of program written in c/c++, lots of built... I've been having issues with other compilers, so visual c++ express was the one that has worked the best for me. I've also been busy with work, and I would also like to learn Python, so c is also... So I was reading a book at Barnes and Noble, and it was a Python book, and it said that Python now has over a million active users. Why is Python getting so popular? It strikes me as if Python is... I will studying the documentation better for the functions. Yes, I don't know the specifics of the undefined behavior. Yes, I understand, although I took my c course back in 2002. For the undefined behavior, yes, I don't understand the functions I'm using well enough to use them without getting warnings, but I will... Sorry about that. I thought the problem was interesting, since I'm learning c and incorporating the c standard library myself, so I put some time into solving this problem as well. But I will try... I learned c in the academic setting close to 15 years ago, so I do understand what you mean. But I was merely trying to solve the problem, as I am relearning c myself. As I continue to get more... Hi, I only gave the code for the portion which was solved using advice on this message board: using string functions, using fgets, and using strstr. But what I showed was only half of my main, but I... int main(array<System::String ^> ^args) { char sentence[100]; char toReplace[20]; char replacement[20]; printf("Enter a sentence: "); fgets(sentence, 100, stdin); Probably not. I looked up similar questions on Google, and it required several built in string functions to complete. As Salem mentioned, a good function to start with is strstr in string.h. ... You will probably need to use several built in functions using the standard c library, and as the instructions states, you will have to create a single user defined function. The best c library to... int main(array<System::String ^> ^args) { int n, reverse=0; printf("Enter a number to reverse (enter 0 to end): \n"); scanf("%d", &n); while(n!=0) { #include "stdafx.h" #include <cstdio> using namespace System; int main(array<System::String ^> ^args) { int c, nl; Yes, remember that the values of variables must be stored in memory before any calculations can be made, otherwise the values will be null if nothing is stored in them. Getting input from the user... Shouldn't you be calculating age after you get the input from the user, not before you get the input from the user? The way you wrote the program, there are no values associated with year and... I probably should have said that an array is a linear list, but tends to be stored in sequential memory units. I apologize, I meant assert (). The a automatically goes to uppercase when you type it in at the beginning of a sentence, and I put down Assert in all instances, though I meant assert (). I think it's simply a bug in the compiler. A float only has the capacity to hold 7 digits of precision. So to hold larger numbers you would have to use a double. I don't think there is a specific...
https://cboard.cprogramming.com/search.php?s=2855d459d219576a8b3d598f08cdf3eb&searchid=6400745
CC-MAIN-2021-10
refinedweb
788
72.66
:OK - I rm -rf'ed /usr/src/*,/usr/obj/*, re-cvsup'ed from :dragonflybsd.org and built new world and kernel plus a reboot. :Still the same sockstat error :-( : :Here is the url to the ktrace output: : : -Erik,0xbfbffc8c,0,0) 62308 sockstat RET __sysctl 0x8058000,0xbfbffc8c,0,0) 62308 sockstat RET __sysctl -1 errno 12 Cannot allocate memory @@ -37,7 +37,7 @@ * * @(#)kern_descrip.c 8.6 (Berkeley) 4/19/94 * $FreeBSD: src/sys/kern/kern_descrip.c,v 1.81.2.19 2004/02/28 00:43:31 tegge Exp $ - * $DragonFly: src/sys/kern/kern_descrip.c,v 1.38 2005/01/14 19:28:10 dillon Exp $ + * $DragonFly$ */ #include "opt_compat.h" @@ -1708,7 +1708,9 @@ struct filedesc *fdp; struct file *fp; struct proc *p; - int error, n; + int count; + int error; + int n; /* * Note: because the number of file descriptors is calculated @@ -1716,35 +1718,47 @@ * there is information leakage from the first loop. However, * it is of a similar order of magnitude to the leakage from * global system statistics such as kern.openfiles. + * + * When just doing a count, note that we cannot just count + * the elements and add f_count via the filehead list because + * threaded processes share their descriptor table and f_count might + * still be '1' in that case. */ - if (req->oldptr == NULL) { - n = 16; /* A slight overestimate. */ - LIST_FOREACH(fp, &filehead, f_list) - n += fp->f_count; - return (SYSCTL_OUT(req, 0, n * sizeof(kf))); - } + count = 0; error = 0; LIST_FOREACH(p, &allproc, p_list) { if (p->p_stat == SIDL) continue; - if (!PRISON_CHECK(req->td->td_proc->p_ucred, p->p_ucred) != 0) { + if (!PRISON_CHECK(req->td->td_proc->p_ucred, p->p_ucred) != 0) continue; - } - if ((fdp = p->p_fd) == NULL) { + if ((fdp = p->p_fd) == NULL) continue; - } for (n = 0; n < fdp->fd_nfiles; ++n) { if ((fp = fdp->fd_ofiles[n]) == NULL) continue; - kcore_make_file(&kf, fp, p->p_pid, - p->p_ucred->cr_uid, n); - error = SYSCTL_OUT(req, &kf, sizeof(kf)); - if (error) - break; + if (req->oldptr == NULL) { + ++count; + } else { + kcore_make_file(&kf, fp, p->p_pid, + p->p_ucred->cr_uid, n); + error = SYSCTL_OUT(req, &kf, sizeof(kf)); + if (error) + break; + } } if (error) break; } + + /* + * When just calculating the size, overestimate a bit to try to + * prevent system activity from causing the buffer-fill call + * to fail later on. + */ + if (req->oldptr == NULL) { + count = (count + 16) + (count / 10); + error = SYSCTL_OUT(req, NULL, count * sizeof(kf)); + } return (error); }
https://www.dragonflybsd.org/mailarchive/bugs/2005-02/msg00049.html
CC-MAIN-2017-04
refinedweb
377
57.2
Asked by: Two OCS2k7 Enterprise pool point to one SQL2005 server Question Hi All, Here is my exsiting environment: AD Architecture: 1. One forest and one child domain Eg. company.com (Root domain), abc.company.com (Child domain) OCS2k7 Enterprise Architecture: 1. One OCS2k7 Enterprise edition in Root domain ( for example: In USA) 2. One OCS2k7 Enterprise edition in Child domain (for example: In Hong Kong) SQL Server: 1. One SQL server 2005 hosted on USA (Root domain) for serving USA OCS server 2. One SQL server 2005 hosted on Hong Kong (Child domain) for servving Hong Kong OCS server For my question. I need to implment additional OCS2k7 Enterprise edition server on child domain, whether can i use exsiting SQL server in child domain or i should implement additional SQL server for the new OCS2k7 server? Because if i use the exising sql server, i'm afraid whether there is any impact or not after i implement additional OCS server, for example, will it override the existing database or not? Thanks a lot for your help. KennethSunday, November 2, 2008 3:36 AM All replies - The Supportability Guide states that a dedicated SQL instance must be used for any OCS back-end databases, so I'd imagine that as long as you install an additional instance you can use the same SQL server for multiple pools.Sunday, November 2, 2008 4:18 AMModerator Hi Jeff, So, Do i need to use the same SQL instance for two OCS server pool in child domain for best practice? Is it impossible to use two SQL instance for two OCS server pool? Best Regards KennethSunday, November 2, 2008 5:23 AM Let me rephrase, you must use a separate dedicated SQL instance for each OCS server pool. The OCS databases of each pool will all have the same defined names and cannot be collocated on the same SQL instance. I would think that best practice would be to deploy dedicated SQL servers for separate pool as typical multiple pools are used in geographically dispersed or hot-standby DR scenarios and using the same SQL back-end server in these would not be advantageous. For performance reasons separate SQL servers would defintely be the recommended approach.Sunday, November 2, 2008 12:45 PMModerator Hi Jeff, Thanks a lot for your recommendation. On the other hand, if i create two Enterprise pool in the same child domain with two SQL server for each pool, whether the presence state can be exchanged each other or not. For example. Pool01: (abc.company.com <--Child domain) IM: User A Pool02: (abc.company.com <--Child domain) IM: User B Is there any problem when User A change the state in IM such as "Away" status, whether the presence state of User A will be reflect on User B communicator? Thanks Best Regards Kenneth ChowSunday, November 2, 2008 1:06 PM Hi All, I am also looking for an answer to above question guys. Will the 2 pools on different SQL instances be able to communicate even? REgards, AliWednesday, November 26, 2008 12:45 PM - Yes, if both pools are part of the same OCS deployment (same forest) then all communications would be natively supported (IM, presence, etc).Wednesday, November 26, 2008 2:11 PMModerator Hi Jeff, Thanks for your reply. I am trying to setup a test lab using the same forest which I have "abc.com". I want to create a new pool in my enterprise pools which will be using a different sql instance on sql clustered server which has our sql instance/db in production. I have seen in the installation docuemnt in the "create enterprise wizard" that there is a window which asks you to reuse existing database. If you check the box "replace any existing database" , is this going to remove my db which is in production. The other option which if I did not select replace will be using an existing db which I do not want. I have read that I need to select OCS namespace while creating the pool. So can I use "abc.com" for my test lab in a different new pool or I have to define a new namespace. The OCS resource kit is not clear in explaining the above point. Regards, A. ZaherWednesday, November 26, 2008 6:58 PM As long as you point the second pool to a separate SQL instance then it can't overwrite your other pool's databases as they are in a completely different instance.Wednesday, November 26, 2008 8:33 PMModerator
https://social.microsoft.com/Forums/en-US/e3c97af2-18f4-48e6-9560-34783719f593/two-ocs2k7-enterprise-pool-point-to-one-sql2005-server?forum=communicationsserversetup
CC-MAIN-2021-10
refinedweb
761
59.84
On 8/12/19 4:05 PM, David Hildenbrand wrote: >>> --- >>> include/linux/mmzone.h | 11 ++ >>> include/linux/page_reporting.h | 63 +++++++ >>> mm/Kconfig | 6 + >>> mm/Makefile | 1 + >>> mm/page_alloc.c | 42 ++++- >>> mm/page_reporting.c | 332 +++++++++++++++++++++++++++++++++ >>> 6 files changed, 448 insertions(+), 7 deletions(-) >>> create mode 100644 include/linux/page_reporting.h >>> create mode 100644 mm/page_reporting.c >>> >>> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h >>> index d77d717c620c..ba5f5b508f25 100644 >>> --- a/include/linux/mmzone.h >>> +++ b/include/linux/mmzone.h >>> @@ -559,6 +559,17 @@ struct zone { >>> /* Zone statistics */ >>> atomic_long_t vm_stat[NR_VM_ZONE_STAT_ITEMS]; >>> atomic_long_t vm_numa_stat[NR_VM_NUMA_STAT_ITEMS]; >>> +#ifdef CONFIG_PAGE_REPORTING >>> + /* Pointer to the bitmap in PAGE_REPORTING_MIN_ORDER granularity */ >>> + unsigned long *bitmap; >>> + /* Preserve start and end PFN in case they change due to hotplug */ >>> + unsigned long base_pfn; >>> + unsigned long end_pfn; >>> + /* Free pages of granularity PAGE_REPORTING_MIN_ORDER */ >>> + atomic_t free_pages; >>> + /* Number of bits required in the bitmap */ >>> + unsigned long nbits; >>> +#endif >>> } ____cacheline_internodealigned_in_smp; >> Okay, so the original thing this patch set had going for it was that >> it was non-invasive. However, now you are adding a bunch of stuff to >> the zone. That kind of loses the non-invasive argument for this patch >> set compared to mine. >> > Adding something to "struct zone" is certainly less invasive than core > buddy modifications, just saying (I agree that this is suboptimal. I > would have guessed that all that's needed is a pointer to some private > structure here). I think having just a pointer to a private structure makes sense here. If I am not wrong then I can probably make an allocation for it for each populated zone at the time I enable page reporting. > However, the migratetype thingy below looks fishy to me. > >> If we are going to continue further with this patch set then it might >> be worth looking into dynamically allocating the space you need for >> this block. At a minimum you could probably look at making the bitmap >> an RCU based setup so you could define the base and end along with the >> bitmap. It would probably help to resolve the hotplug issues you still >> need to address. > Yeah, I guess that makes. > [...] >> So as per your comments in the cover page, the two functions above >> should also probably be plugged into the zone resizing logic somewhere >> so if a zone is resized the bitmap is adjusted. >> >>> +/** >>> + * zone_reporting_init - For each zone initializes the page reporting >>> fields >>> + * and allocates the respective bitmap. >>> + * >>> + * This function returns 0 on successful initialization, -ENOMEM otherwise. >>> + */ >>> +static int zone_reporting_init(void) >>> +{ >>> + struct zone *zone; >>> + int ret; >>> + >>> + for_each_populated_zone(zone) { >>> +#ifdef CONFIG_ZONE_DEVICE >>> + /* we can not report pages which are not in the buddy */ >>> + if (zone_idx(zone) == ZONE_DEVICE) >>> + continue; >>> +#endif >> I'm pretty sure this isn't needed since I don't think the ZONE_DEVICE >> zone will be considered "populated". >> > I think you are right (although it's confusing, we will have present > sections part of a zone but the zone has no present_pages - screams like > a re factoring - leftover from ZONE_DEVICE introduction). I think in that case it is safe to have this check here. What do you guys suggest? > -- Thanks Nitesh --------------------------------------------------------------------- To unsubscribe, e-mail: virtio-dev-unsubscr...@lists.oasis-open.org For additional commands, e-mail: virtio-dev-h...@lists.oasis-open.org
https://www.mail-archive.com/virtio-dev@lists.oasis-open.org/msg05036.html
CC-MAIN-2019-43
refinedweb
529
55.74
I noticed an "interview question" that was posted on StackOverflow awhile ago. It's not particularly complicated -- basically asking "given two strings, how to tell if one is the rotated version of the other?" Some discussion in the question deals with various faster methods, but the simplest answer is a Python version: def isrotation(s1, s2): return len(s1) == len(s2) and s1 in 2*s2 If we wanted to implement this in Factor, we might want to consider using "short circuit" combinators (which will apply a series of boolean tests and stop on the first test that fails). We will also use the convention that a word? (ending in a "?") returns a boolean. : rotation? ( s1 s2 -- ? ) { [ [ length ] bi@ = ] [ dup append subseq? ] } 2&& ; We can test it, to make sure it works: ( scratchpad ) "stack" "tacks" rotation? . t ( scratchpad ) "foo" "bar" rotation? . f Since strings are sequences of characters and this solution uses sequence operations ( length, append, and subseq?), it is already generalized to operate on other types of sequences. For example, arrays: ( scratchpad ) { 1 2 3 } { 2 3 1 } rotation? . t So, the next time you get this question in an interview, maybe you can solve it in Factor!
http://re-factor.blogspot.com/2010/10/is-rotation.html
CC-MAIN-2017-13
refinedweb
199
64.81
Re: QueryInterface for interface * failed. (Using COM Interop dll in web application) - From: "Vasil Buraliev" <vasil_buraliev@xxxxxxxxx> - Date: Wed, 31 May 2006 22:19:44 +0200 I solve the problem. I rewrite my COM using ATL. Now everyting is ok. Ph. "Vasil Buraliev" <vasil_buraliev@xxxxxxxxx> wrote in message news:%238qH%23vAgGHA.3900@xxxxxxxxxxxxxxxxxxxxxxx Hi everyone... I'm faceing a problem that I want to shae with you and I'm hoping that somebody will give write solution. I wrote a COM dll in Visual C++ 6.0 and MFC that generates some report. I'm using Visual Studio (ver. 2003) to create interop for COM dll cause I want to use in a web project (asp.net 1.1, c#). I'm adding my COM dll in References of the web project and everything is cool. COM dll is registered to OS and I'm able to use it in web application. Here is the simplified example of how I'm using the Interop in web project: public class X : Base { protected Object comObj; // some other methods protected ShowReport() { try { comObj = new ReportsClass(); String htmlFormatedStr = ((ReportsClass)comObj).GetReport(/*some id*/); } catch(COMException comExc) { // log exception } catch(Exception exc) { // log general exception } } } I don't have any problem on development enviroment (Windows XP, VS2003, .NET Framework 1.1, IIS 5.1). I'm getting results and everything is fine. Page that is the container of the user control where my COM object is instantiated has AspCompat = "true" in @Page directive. The problem is when I deploy same version on test enviroment (Winwods 2003, .NET Framework 1.1, IIS 6) and it's very strange. I start web application on test enviroment for the first time and everything is cool I can send request for report and I'm getting the proper results. After I quit browser (IE) and start application again I got error (QueryInterface for interface OssDsDCOM.IProducts failed... see details of the exception to the bottom of the this message) and I can not see report. If I restart IIS (IISReset) then I'm able again to use com object without any problems untill next closeing of the browser. (This is the pattern one that is based on the following rules. I can use com obj only first time when I make request to instantiate and I can call it unlimited number of times and it works cool untill I close browser, than I should restart IIS if I want to use com obj again) I tryed also what it will be if I restart IIS start web applicaton and use com obj to get proper results and try to open onother browser (CTRL+N) and try to use report (com obj) from other prowser without closing the fist one where com obj is working corectlly in that time but, ni second browser I got the known exception which details can be found to the bottom of this I chnage test enviroment and I tried to another phisical server (Windows 2003, .NET Framework 1.1, IIS 6) and problem continue to exist. So, would you please help me to solve this problem cause it's very importan to me. Thank you for your time reading this description about my problem. Regards, Vasil Buraliev ************************************************************************ DETAILS ABOUT EXCEPTION ************************************************************************ MESSAGE: QueryInterface for interface OssDsDCOM.IProducts failed. SOURCE: mscorlib.RuntimeType.ForwardCallToInvokeMember(String memberName, BindingFlags flags, Object target, Int32[] aWrapperTypes, MessageData& msgData) at OssDsDCOM.ProductsClass.GetCurrentView(String lpLegalActorID) at DistrSys.WebUI.users.modules.UCConDGUIDetails.ShowProduct(String strProductID, ArrayList prodParams) TARGET SITE NAME: InvokeDispMethod For more information, see Help and Support Center at. ************************************************************************ . - References: - QueryInterface for interface * failed. (Using COM Interop dll in web application) - From: Vasil Buraliev - Prev by Date: Re: COM object and COM server - Next by Date: Re: C# question: dealing with unsigned char* from C++ .lib - Previous by thread: QueryInterface for interface * failed. (Using COM Interop dll in web application) - Next by thread: Re: COM Interop Problems - Index(es):
http://www.tech-archive.net/Archive/DotNet/microsoft.public.dotnet.framework.interop/2006-05/msg00248.html
crawl-002
refinedweb
658
64.41
The Biggest Mistake Static Analysis Could Have Prevented The Biggest Mistake Static Analysis Could Have Prevented Static analysis and code quality tools are abundant, but underused. Read on to see why you should use them, and how to avoid having your next project cost you your job. Join the DZone community and get the full member experience.Join For Free As I’ve probably mentioned before, many of my clients pay me to come do assessments of their codebases, application portfolios and software practice. And, as you can no doubt imagine, some of my sturdiest, trustiest tools in the tool chest for this work are various forms of static analysis. Sometimes I go to client sites by plane, train or automobile (okay, never by train). Sometimes I just remote in. Sometimes I do fancy write-ups. Sometimes, I present my findings with spiffy slide decks. And sometimes, I simply deliver a verbal report without fanfare. The particulars vary, but what never varies is why I’m there. Here’s a hint: I’m never there because the client wants to pay my rate to brag about how everything is great with their software. Where Does It All Go Wrong? Given what I’m describing here, one might conclude that I’m some sort of code snob and that I am, at the very least, heavily judging everyone’s code. And, while I’ll admit that every now and then I think, “the daily WTF would love this,” mostly I’m not judging at all – just cataloging. After all, I wasn’t sitting with you during the pre-release death march, nor was I the one thinking, “someone is literally screaming at me, so global variable it is.” I earnestly tell developers at client sites that I don’t know that I’d have done a lot better walking a mile in their shoes. What I do know is that I’d have, in my head, a clearer map from “global variable today” to “massive pain tomorrow” and be better able to articulate it to management. But, on the whole, I’m like a home inspector checking out a home that was rented and subsequently trashed by a rock band; I’m writing up an assessment of the damage and not judging their lifestyle. But for my clients, I’m asked to do more than inspect and catalog – I also have to do root cause analysis and offer suggestions. So, “maybe pass a house rule limiting renters to a single bottle of whiskey per night,” to return to the house inspector metaphor. And cataloging all of these has led me to be a veritable human encyclopedia of preventable software development mistakes. I was contemplating some of these mistakes recently and asking myself, “which was the biggest one” and “which would have been the most preventable with even simple analysis in place?” It was interesting to realize, after a while, that the clear answer was not at all what you’d expect. Some of the Biggies Before the best candidate, some obvious runners up occurred to me that line up with the kind of thing you might expect. There was the juggernaut assembly. It was a .NET solution with only one project, but man, what a project. It probably should have been about 30 projects, and when I asked why it wasn’t, there was awkward silence. It turned out that there had been talk of splitting it up, and there had even been attempts. Where things got sticky, however, was around the fact that there was a rather large and complex namespace dependency cycle among some of the more prominent dependencies. Efforts had revolved around turning namespaces into assemblies, and, while namespace dependencies are tolerated by the compiler, assembly ones…not so much. The “great split-up” then became one of those things that the team resolved to do “the next time we get some breathing room.” And, as anyone (including those saying it) would likely have predicted, “the next time” never came. Had there been relatively basic static analysis in place, these folks could have seen a warning about it the first time someone created a cycle from the previously acyclic graph. As it stood, who knows how many months or years elapsed between its introduction and discovery. Of course, there are others that are easy to explain. There was the method with a cyclomatic complexity pushing 4 digits that someone probably would have wrangled before it got to 3 digits. There was the untested class that every other class in the code based touched, directly or indirectly (I’m sure you can predict one of the problems I heard about there). There was the codebase with the lowest cohesion score I’ve ever seen, accompanied by complaints of weird bugs in components caused by changes in other, ‘unrelated’ components. The Worst of All But the worst case I’ve seen was not really like these. It wasn’t a matter of some dreadful pocket of code or some mistake that could have been caught early. Instead, it was an entire codebase that never should have been. I’m going to change some details here so as not to offer clues as to true identity, so let’s just say that I was doing a tour consulting with a large shop with a large application portfolio. Historically, they’d had dozens or even hundreds of Java applications, but they were starting to dip their toe into .NET, and specifically C#. By the time I’d gotten there, they’d taken a converted, long-tenured Java developed and tasked him with building out a ‘framework’ to enable rapid development of future .NET applications within the company, and they’d hired on a bunch of .NET folks to assist in this. When I got there, the codebase was… disconcerting. There were anti-patterns and common pitfall errors galore, as well as strained use of inheritance and zany, unnecessary runtime binding schemes. The most amazing feature, though, was a base “DataTransferObject” class, from which every property bag object in the application inherited, that, in the instance constructor, iterated overall of its own reflected properties, and stored a hash of their string name to their expression value, in an instance variable. Every simple DTO in the system took 0.25 seconds to be instantiated. It was a mess. And it was a mess that they were furiously prototyping all over the organization, in spite of the diplomatic protests of some of the newer .NET dev hires. Static Analysis As Reality Check You might wonder how this is a case that static analysis could have solved. After all, they could have been dinged for the excessive inheritance, but there aren’t any “do you have an explanation-defying reflection scheme in your constructor” queries, nor are there any obvious warnings for “are you relating objects with lots of magic strings?” Static analysis wouldn’t have caught these errors per se. But, what it would have done was lit up like a well-decorated Christmas tree on this nascent codebase, indicating to anyone who was looking that there was a sizable gulf between their code and what the industry considers good code. And that might just have caused someone in a position to do so to put the brakes on rolling out this boondoggle en masse, before it was too late. There isn’t any single line of code that’s going to bring your business to its knees (in all likelihood, anyway), nor is there going to be a specific tipping point with method complexity, fan-in, or anything like that. Those are mistakes that get made and get corrected. But static analysis, as a whole, shines a bright light on whether the trusted staff at an organization knows what it’s doing or not. The biggest mistake I have seen and continue to see, without question, is that organizations trust a single, tenured developer to be infallible and to steer the ship with only subordinates as co-pilots. They are the guardians, but no one guards them. Until you introduce static analysis to guard the guardians. Published at DZone with permission of Erik Dietrich , DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own. {{ parent.title || parent.header.title}} {{ parent.tldr }} {{ parent.linkDescription }}{{ parent.urlSource.name }}
https://dzone.com/articles/the-biggest-mistake-static-analysis-could-have-pre
CC-MAIN-2019-39
refinedweb
1,397
60.75
This week I ventured into new territory and wrote a custom schema validation for the Elixir/Phoenix project I've been working on. Ecto.Changeset has a good amount of prebuilt validators that will accomplish most tasks, but if its necessary to validate in a manner outside of the established validations, that can be done with a custom validation. A custom validation function can be just about anything, the only requirement is that it returns a changeset, just like a built-in validation does. This example deals with three fields: revenue, expense and net_gain. Ultimately, the validation is that revenue minus expense equals net_gain: This function utilizes two helper functions: get_field/3 and add_error/4. These are Ecto.Changeset helpers and can be accessed by adding import Ecto.Changeset to the top of the module. While looking around the docs there are a few other functions that could be convenient in different scenarios, like get_change/3, fetch_field/2, and fetch_change/2, but for this purpose get_field/3 fits the bill. In both cases of the do block, changeset is returned. If the match checks out, the validation is true and the changeset is simply passed on. If the math does not match up, an error is tacked onto the :error field of the changeset and returned.. For instance, if a field was empty in the validate_required/3 function, an error would be added to the changeset, and it would keep working down the pipeline of validations. When that empty field gets to the custom validation function, it needs to be handled in some way, or it could throw unexpected errors. This can be dealt with by incorporating a changeset.valid? call into the function, which checks the :errors field of the changeset and return false if any exist: The logic shifts from a case statement to a with expression, to better handle the multiple checks. If you're not familiar with the with expression, I wrote about it a pretty detailed explanation of it. Back to the logic, the catch with simply incorporating changeset.valid? is that it's a broad check, if there are any errors, it will return false. It works when piped into the function like this: But, if an engineer were to come in later and add another field to the validate_required check, one that has nothing to do with the custom validation after it, it could result in a false positive in the first bit of logic of mathematical_validation/1. For example: In the above example, if the first_name field was empty, an error would be added to the changeset via validate_required. The first check of mathematical_validation/1 would fail, resulting in the logic portion of the validation not being checked at all, even though the error was not in the any of the three fields that function is concerned about. A more verbose strategy would be to check for the presence of the fields explicitly before stepping into the logic to be performed, like this: Now, before the mathematical validation is performed, it is checking that all the necessary fields are present. There is some redundancy with the validate_required function that is called before this custom function in the pipeline. This is where a rabbit-hole could start. Either the redundancy is accepted, or steps could be taken to remove the validate_required check entirely and add appropriate error messages to each of the nil checks in mathematical_validation/1 individually. Properly handling the incoming errors is probably the trickiest part to writing a custom validation. But after the error handling has been considered, literally any logic can be used as a validation as long as a changeset is returned, and that's pretty powerful. This post is part of an ongoing This Week I Learned series. I welcome any critique, feedback, or suggestions in the comments. Discussion (2) Great write-up! One other way I would maybe handle the need to check for those fields' existence is by having another function clause for mathematical_validation/1: This cleans up the actual logic for this function, of course this wouldn't work I guess if you didn't have the validate_required check before it in the pipeline, so it all kinda depends on the requirement. Mostly I just wanted to say hiiii, long time no talk :) Yeah, I think I started with something like this, but the feedback in the PR was that if other, unrelated validations were inserted in the wrong place in the pipeline then the errorsfield might show a false positive, leading to this custom function not running at all. But, it obviously wasn't too bad of an idea if thats what you thought too! Sorry for the late response, kind of dropped the ball.
https://dev.to/noelworden/how-to-write-a-custom-elixir-schema-validation-167e
CC-MAIN-2022-21
refinedweb
792
57.71
Microsoft Scripting Guy Ed Wilson here. The Smokey Mountains are about 10 degrees cooler than Charlotte, North Carolina, is. The problem is that it was nearly 100 degrees Fahrenheit in Charlotte. Anyway, getting to spend some time with my old high school friend has been fun, even if he does not know Windows PowerShell from a seashell. Oh well, he still has good taste in music. On the drive up into the mountains, the Scripting Wife was commenting on the beautiful trees, creeks, and occasional wild animal; I, on the other hand, was thinking of ways to improve my firewall script. When we arrived at the cabin John had not yet arrived, so I got out my laptop and got to work on my improved firewall script. The Get-EnabledFireWallRules.ps1 script uses an enumeration to parse the protocol instead of displaying a protocol number like the script from yesterday did. In addition, the Format-Table cmdlet uses a hash table to perform the lookup for the enumeration value as well as to interpret the direction of the rule. The complete Get-EnabledFireWallRules.ps1 script is shown here. Get-EnabledFireWallRules.ps1 Function New-ProtocolEnum { $enum = " namespace myspace { public enum protocol { HOPOPT = 0, ICMPv4 = 1, IGMP = 2, TCP = 6, UDP = 17, IPv6 = 41, IPv6Route = 43, IPv6Frag = 44 ,GRE = 47, ICMPv6 = 58, IPv6NoNxt = 59, IPv6Opts = 60, VRRP = 112, PGM = 113, L2TP = 115 } } " Add-Type -TypeDefinition $enum -Language CSharpVersion3 } #end function New-ProtocolEnum Function Test-LoadedEnum { Param([string]$enum) Try { [reflection.assembly]::GetAssembly([type]$enum) | out-null New-Object psobject -Property ` @{ "Name" = $enum.tostring() ; "Loaded" = [bool]$true } } Catch [system.exception] { New-Object psobject -Property ` @{ "Name" = $enum.tostring() ; "Loaded" = [bool]$false } } } #end function Test-LoadedEnum Function import-Enum { Param($rtn) If ($rtn.Loaded) { "$($rtn.name) is loaded" } else { "$($rtn.name) NOT loaded. Loading ..." New-ProtocolEnum } } #end function import-enum Function Get-FireWallRules { $fw = New-Object -ComObject hnetcfg.fwpolicy2 $currentProfile = $fw.CurrentProfileTypes $fw.Rules | Where-Object {$_.enabled -AND $_.Profiles -eq $currentProfile} | Sort-Object -Property direction | Format-Table -Property localports, @{LABEL="Protocol"; EXPRESSION={[enum]::parse([type]"myspace.protocol",$_.protocol)} }, @{LABEL="Direction"; EXPRESSION={ SWITCH($_.direction){ 1{"in"} 2{"out"}} } }, name -autosize } #end function Get-FirewallRules # *** ENTRY POINT TO SCRIPT *** import-enum -rtn (test-LoadedEnum -enum "myspace.protocol") Get-FireWallRules The New-ProtocolEnum function is used to create the myspace.protocol enumeration. The creation of new enums was discussed in a Weekend Scripter article a couple of weeks ago. The mapping of the protocol numbers to names is discussed in a TechNet article in the Library. The New-ProtocolEnum function is shown here: The Test-LoadedEnum function is lifted from a recent Weekend Scripter article that I wrote while in Hilton Head, South Carolina. See that article for a discussion of how it works. The complete Test-LoadedEnum function is shown here: I decided to write a function to determine if I need to load the enumeration or not. This removed some of the logic from the beginning of the script. In addition, it provides a completely re-usable function. The Import-Enum function is shown here: The Get-FireWallRules function is much like the script from yesterday’s post. The difference is that it tests for the current profile and uses two hash tables to create customized table entries. The one that does the protocol lookup from the myspace.protocol enum is shown here: @{LABEL="Protocol"; EXPRESSION={[enum]::parse([type]"myspace.protocol",$_.protocol)} } The key was using the system.enum static parse method, the type constraint for the myspace.protocol enum, and passing the protocol number. This technique was discussed in the working with enumerations and values Weekend Scripter article. The complete Get-FirewallRules function is shown here: The entry point to the script calls the Import-Enum function while passing the result from the Test-LoadedEnum function. Next, it calls the Get-FirewallRules function. This is shown here: When the Get-EnabledFireWallRules.ps1 script runs, the output is displayed that is shown in the following image. The cool thing about all this is that it provides an excellent example for using our recent discussion of working with enums. Well, I think I hear the “old folks” stirring inside the cabin. I believe I overheard Becky and the Scripting Wife talking about making pancakes, so perhaps I will go inside and see if they need a taste tester. I hope you have a great weekend. There is a mistake: '$_.Profiles -eq $currentProfile'. Any of firewall rules can belong to a number of policy profiles (Domain = 1; Private = 2; Public = 4). So the code should be corrected like this: '$_.Profiles -band $currentProfile'. This code checks the occurance of profile type id in 'Profiles' property bitmask. By the way, the returned 'CurrentProfiles' bitmask can have more than 1 bit set if multiple profiles are active or current at the same time. Our code correctly handles this situation. Is the webpage inserting a space that is killing the script? PS C:\Windows\system32> C:\Windows\System32\get-enabledfirewallrules.ps1 New-Object : Cannot bind parameter 'Property'. Cannot convert the " " value of type "System.String" to type "System.Collections.Hashtable". At C:\Windows\System32\get-enabledfirewallrules.ps1:27 char:30 + New-Object psobject -Property <<<< ` + CategoryInfo : InvalidArgument: (:) [New-Object], ParameterBindingException + FullyQualifiedErrorId : CannotConvertArgumentNoMessage,Microsoft.PowerShell.Commands.NewObjectCommand NOT loaded. Loading ...
http://blogs.technet.com/b/heyscriptingguy/archive/2010/07/04/hey-scripting-guy-weekend-scripter-improving-yesterday-s-windows-firewall-script.aspx
CC-MAIN-2014-52
refinedweb
883
50.84
ncl_dashdc man page DASHDC — Defines a dash pattern with labels. If DASHDC is called when the "quick" version of Dashline is used, an error exit results. Synopsis CALL DASHDC (IPAT,JCRT,JSIZE) C-Binding Synopsis #include <ncarg/ncargC.h> void c_dashdc (char *ipat, int jcrt, int jsize) Description - IPAT (an input constant or variable of type CHARACTER) specifies the dash pattern to be used. Although IPAT is of arbitrary length, 60 characters seems to be a practical limit. This pattern is repeated for successive line segments until the full line is drawn. A dollar sign in IPAT indicates solid; an apostrophe indicates a gap; blanks are ignored. Any character in IPAT which is not a dollar sign, apostrophe, or blank is considered to be part of a line label. Each line label can be at most 15 characters in length. Sufficient white space is reserved in the dashed line for writing line labels. - JCRT (an input expression of type INTEGER) specifies that the length to be assigned to each increment of the line pattern is (JCRT/1023.) NDCs (Normalized Device Coordinates). Each increment is either a gap (represented by a dollar sign in IPAT) or a line segment (represented by an apostrophe in IPAT). JCRT must be greater than or equal to 1. - JSIZE (an input expression of type INTEGER) specifies the width of the plotted characters, as follows: - 0 .0078 NDCs - 1 .0117 NDCs - 2 .0156 NDCs - 3 .0234 NDCs - >3 JSIZE/1023. NDCs C-Binding Description The C-binding argument descriptions are the same as the FORTRAN argument descriptions. Examples Use the ncargex command to see the following relevant examples: tdashc, tdashp, tdashs, fcoord1, fcoord2, fdldashc, fdldashd. Usage DASHDC may be called to define a dash pattern for any of the four versions of Dashline except the "quick" version; if you call it when the "quick" version is in use, an error exit will result. A dash pattern defined by a call to DASHDC will supersede one defined by an earlier call to DASHDB or DASHDC. Access To use DASHDC or c_dashdc, load the NCAR Graphics libraries ncarg, ncarg_gks, and ncarg_c, preferably in that order. See Also Online: dashline, dashline_params, curved, dashdb, frstd,.
https://www.mankier.com/3/ncl_dashdc
CC-MAIN-2018-05
refinedweb
365
65.52
#define MICROPY_FLOAT_IMPL on (MICROPY_FLOAT_IMPL_DOUBLE) and compiled micropython for port esp32. Everything works. Code: Select all >>> 1/3 0.333333333333333 Code: Select all >>> 1/3 0.333333333333333 Code: Select all def dist_waypoints(self, lat1, lon1, lat2, lon2): """ Calculate distance betweeen two coordinates. """ try: lon1, lat1, lon2, lat2 = map(radians, [lon1, lat1, lon2, lat2]) result = 6371 * (acos(sin(lat1) * sin(lat2) + cos(lat1) * cos(lat2) * cos(lon1 - lon2))) meters = int(result * 1000.0); return meters except Exception as e: self.exception_save(e) sys.exit() return 0 def bearing_dest(self, lat1, lon1, lat2, lon2): """ Calculate bearing destination betweeen two coordinates. """ try: theta1 = radians(lat1) theta2 = radians(lat2) delta1 = radians(lat2-lat1) delta2 = radians(lon2-lon1) y = sin(delta2) * cos(theta2) x = cos(theta1)*sin(theta2) - sin(theta1)*cos(theta2)*cos(delta2) brng = atan2(y,x) bearing = degrees(brng) bearing = (bearing + 360) % 360 return bearing except Exception as e: self.exception_save(e) return 0 Can I have an example please? because I don't quite understand. Thanks.Can I have an example please? because I don't quite understand. Thanks.pythoncoder wrote: ↑Mon May 25, 2020 6:20 amI don't know your application. Is worldwide coverage needed? If not, you could express lat and long relative to a local datum for calculations, converting to absolute values only for final display.
https://forum.micropython.org/viewtopic.php?f=2&t=8431&p=47872
CC-MAIN-2020-45
refinedweb
217
51.34
tag:blogger.com,1999:blog-91518801694903564012014-10-06T21:30:02.171-07:00Coding RelicRandom musings on software in an embedded world.Denton Gentry Option<p>We're almost out of IPv4 addresses, yet IPv6 <a href="">deployment is still very, very slow</a>. This is a recipe for disaster. I'm talking <i>End of Internet predicted, film at 11</i> scale disaster. Something must be done. Steps must be taken.</p> <p.</p> <p>"How could that possibly work?" people might ask. Go ahead, ask it... I'm glad that you asked, because I have a ready-made explanation waiting. We will leverage a solution to a similar problem which has scaled tremendously well over the last several decades: email addresses.</p> <p>On early email systems like AOL, addresses had to be unique. This led to such ridiculous conventions as appending a number to the end of a popular address, like <i>CompuGuy112673</i> (because really, <i>everybody</i> wants to have an email address like CompuGuy). The beauty of Internet email addressing is in breaking them up into a federated system, so compuguy@foo.com and compuguy@bar.com can simultaneously exist.</p> <p>Therefore I propose to use this same solution for IP addressing. We will allow each <a href="">Autonomous System</a> to maintain its own IP address space. The same IP address can simultaneously exist within multiple ASNs. We are effectively adding a level of indirection to the destination in the IP header. Disambiguating the actual destination will be done via a new IP option.</p> <p><img border="0" width="310" height="238" align="right" src="" style="border: none; margin: 1em 0 1em 1em;" alt="IP Option field appended to frame" title="There is no problem which cannot be solved by an additional layer of indirection.".</p> <p.</p> <br/><p>Updates to the DNS A record and all the application code which looks up IP addresses is left as an exercise for the reader.</p> <br/><p>For historical reasons, this new option is referred to as the BangIP option. An earlier version of this system used bang-separated IP addresses as a source routing path. That portion of the proposal has been deprecated, but we retain the name for its sentimental value.</p><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div><img src="" height="1" width="1" alt=""/>Denton Gentry Octal<p>An article in the March 8 issue of the journal <i>PLoS Computational Biology</i> (as reported by <i><a href="">Science Daily)</a></i> states:</p> <blockquote.</blockquote> <p>From this we can derive one inescapable conclusion: DEC was right about <a href="">octal</a> all along.</p> <br/><p><span style="color: #aaa; font-style=italic;">(Thanks to Sean Hafeez for posting a <a href="">link to the <i>Science Daily</i></a> article on <a href="">Google+</a>)</span></p><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div><img src="" height="1" width="1" alt=""/>Denton Gentry Commands in Python<p itemprop='description'>In software development sometimes you spend time on an implementation which you are unreasonably proud of, but ultimately decide not to use in the product. This is one such story.</p> <p>I needed to retrieve information from an attached disk, such as its model and serial number. There are commands which can do this, like hdparm, <a href="">sdparm</a>, and <a href="">smartctl</a>, but initially I tried to avoid building in a dependency on any such tools by interrogating it from the hard drive directly. In pure Python.</p> <p <a href="">Python struct module</a> to define the data structure sent along with the ioctl.</p> <pre style="font-family: monospace; font-size: medium; line-height: 1.3em; margin-left: 1em;"><br />def GetDriveId(dev):<br /> """Return information from interrogating the drive.<br /><br /> This routine issues a HDIO_GET_IDENTITY ioctl to a block device,<br /> which only root can do.<br /><br /> Args:<br /> dev: name of the device, such as 'sda' or '/dev/sda'<br /><br /> Returns:<br /> (serial_number, fw_version, model) as strings<br /> """<br /> # from /usr/include/linux/hdreg.h, struct hd_driveid<br /> # 10H = misc stuff, mostly deprecated<br /> # 20s = serial_no<br /> # 3H = misc stuff<br /> # 8s = fw_rev<br /> # 40s = model<br /> # ... plus a bunch more stuff we don't care about.<br /> struct_hd_driveid = '@ 10H 20s 3H 8s 40s'<br /> HDIO_GET_IDENTITY = 0x030d<br /> if dev[0] != '/':<br /> dev = '/dev/' + dev<br /> with open(dev, 'r') as fd:<br /> buf = fcntl.ioctl(fd, HDIO_GET_IDENTITY, ' ' * 512)<br /> fields = struct.unpack_from(struct_hd_driveid, buf)<br /> serial_no = fields[10].strip()<br /> fw_rev = fields[14].strip()<br /> model = fields[15].strip()<br /> return (serial_no, fw_rev, model)<br /></pre> <br/><p>No no wait, stop snickering, it does work! It has to run as root, which is one reason why I eventually abandoned this approach.</p> <pre style="font-family: monospace; font-size: medium; line-height: 1.3em; margin-left: 1em;"><br />$ sudo python hdio.py<br />('5RY0N6BD', '3.ADA', 'ST3250310AS')<br /></pre> <br/>.</p> <br/><pre style="font-family: monospace; font-size: medium; line-height: 1.3em; margin-left: 1em;"><br />class AtaCmd(ctypes.Structure):<br /> """ATA Command Pass-Through<br />"""<br /><br /> _fields_ = [<br /> ('opcode', ctypes.c_ubyte),<br /> ('protocol', ctypes.c_ubyte),<br /> ('flags', ctypes.c_ubyte),<br /> ('features', ctypes.c_ubyte),<br /> ('sector_count', ctypes.c_ubyte),<br /> ('lba_low', ctypes.c_ubyte),<br /> ('lba_mid', ctypes.c_ubyte),<br /> ('lba_high', ctypes.c_ubyte),<br /> ('device', ctypes.c_ubyte),<br /> ('command', ctypes.c_ubyte),<br /> ('reserved', ctypes.c_ubyte),<br /> ('control', ctypes.c_ubyte) ]<br /><br /><br />class SgioHdr(ctypes.Structure):<br /> """<scsi/sg.h> sg_io_hdr_t."""<br /><br /> _fields_ = [<br /> ('interface_id', ctypes.c_int),<br /> ('dxfer_direction', ctypes.c_int),<br /> ('cmd_len', ctypes.c_ubyte),<br /> ('mx_sb_len', ctypes.c_ubyte),<br /> ('iovec_count', ctypes.c_ushort),<br /> ('dxfer_len', ctypes.c_uint),<br /> ('dxferp', ctypes.c_void_p),<br /> ('cmdp', ctypes.c_void_p),<br /> ('sbp', ctypes.c_void_p),<br /> ('timeout', ctypes.c_uint),<br /> ('flags', ctypes.c_uint),<br /> ('pack_id', ctypes.c_int),<br /> ('usr_ptr', ctypes.c_void_p),<br /> ('status', ctypes.c_ubyte),<br /> ('masked_status', ctypes.c_ubyte),<br /> ('msg_status', ctypes.c_ubyte),<br /> ('sb_len_wr', ctypes.c_ubyte),<br /> ('host_status', ctypes.c_ushort),<br /> ('driver_status', ctypes.c_ushort),<br /> ('resid', ctypes.c_int),<br /> ('duration', ctypes.c_uint),<br /> ('info', ctypes.c_uint)]<br /><br />def SwapString(str):<br /> """Swap 16 bit words within a string.<br /><br /> String data from an ATA IDENTIFY appears byteswapped, even on little-endian<br /> achitectures. I don't know why. Other disk utilities I've looked at also<br /> byte-swap strings, and contain comments that this needs to be done on all<br /> platforms not just big-endian ones. So... yeah.<br /> """<br /> s = []<br /> for x in range(0, len(str) - 1, 2):<br /> s.append(str[x+1])<br /> s.append(str[x])<br /> return ''.join(s).strip()<br /><br />def GetDriveIdSgIo(dev):<br /> """Return information from interrogating the drive.<br /><br /> This routine issues a SG_IO ioctl to a block device, which<br /> requires either root privileges or the CAP_SYS_RAWIO capability.<br /><br /> Args:<br /> dev: name of the device, such as 'sda' or '/dev/sda'<br /><br /> Returns:<br /> (serial_number, fw_version, model) as strings<br /> """<br /><br /> if dev[0] != '/':<br /> dev = '/dev/' + dev<br /> ata_cmd = AtaCmd(opcode=0xa1, # ATA PASS-THROUGH (12)<br /> protocol=4<<1, # PIO Data-In<br /> # flags field<br /> # OFF_LINE = 0 (0 seconds offline)<br /> # CK_COND = 1 (copy sense data in response)<br /> # T_DIR = 1 (transfer from the ATA device)<br /> # BYT_BLOK = 1 (length is in blocks, not bytes)<br /> # T_LENGTH = 2 (transfer length in the SECTOR_COUNT field)<br /> flags=0x2e,<br /> features=0, sector_count=0,<br /> lba_low=0, lba_mid=0, lba_high=0,<br /> device=0,<br /> command=0xec, # IDENTIFY<br /> reserved=0, control=0)<br /> ASCII_S = 83<br /> SG_DXFER_FROM_DEV = -3<br /> sense = ctypes.c_buffer(64)<br /> identify = ctypes.c_buffer(512)<br /> sgio = SgioHdr(interface_id=ASCII_S, dxfer_direction=SG_DXFER_FROM_DEV,<br /> cmd_len=ctypes.sizeof(ata_cmd),<br /> mx_sb_len=ctypes.sizeof(sense), iovec_count=0,<br /> dxfer_len=ctypes.sizeof(identify),<br /> dxferp=ctypes.cast(identify, ctypes.c_void_p),<br /> cmdp=ctypes.addressof(ata_cmd),<br /> sbp=ctypes.cast(sense, ctypes.c_void_p), timeout=3000,<br /> flags=0, pack_id=0, usr_ptr=None, status=0, masked_status=0,<br /> msg_status=0, sb_len_wr=0, host_status=0, driver_status=0,<br /> resid=0, duration=0, info=0)<br /> SG_IO = 0x2285 # <scsi/sg.h><br /> with open(dev, 'r') as fd:<br /> if fcntl.ioctl(fd, SG_IO, ctypes.addressof(sgio)) != 0:<br /> print "fcntl failed"<br /> return None<br /> if ord(sense[0]) != 0x72 or ord(sense[8]) != 0x9 or ord(sense[9]) != 0xc:<br /> return None<br /> # IDENTIFY format as defined on pg 91 of<br /> #<br /> serial_no = SwapString(identify[20:40])<br /> fw_rev = SwapString(identify[46:53])<br /> model = SwapString(identify[54:93])<br /> return (serial_no, fw_rev, model)<br /></pre> <br/><p>For the unbelievers out there, this one works too.</p> <pre style="font-family: monospace; font-size: medium; line-height: 1.3em; margin-left: 1em;"><br />$ sudo python sgio.py<br />('5RY0N6BD', '3.ADA', 'ST3250310AS')<br /></pre> <br/> <b>ton</b> <a href="">sdparm</a> and <a href="">smartctl</a> as needed.</p> <p>I'll just leave this post here for search engines to find. I'm sure there is a ton of demand for this information.</p> <p>Stop snickering.</p><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div><img src="" height="1" width="1" alt=""/>Denton Gentry DHCP VIVO config<p.</p> <p>DHCP has always allowed for vendor extensions of the available options, <a href="">inheriting this support</a>.</p> <p>DHCP6 defined a more complex encoding, where each vendor includes their unique <a href="">IANA Enterprise Number</a> as part of its option. Options from different vendors can be accommodated simultaneously. This <a href="">Vendor-Identifying Vendor Options (VIVO)</a> encoding was also added back to DHCP4 as options 124 and 125. DHCP4 thus has two separate vendor option mechanisms in common use.</p> <br/><h3>ISC DHCPd</h3><p>The <a href="">ISC DHCP server</a>.</p> <p>Avoiding magic byte strings by specifying the format of the options is more difficult to get working, but easier to maintain and understand. We'll consider an example here.</p> <ul><li>Vendor: Frobozzco</li><li>IANA Enterprise Number (IEN): 12345</li><li>Code #1: a text string containing the location within the maze.</li><li>Code #2: an integer describing the percentage likelihood of being eaten by a grue.<br/><i>In practice this is always 100%, which many clients simply hard-code.</i></li></ul> <br/><p}</p> <pre style="font-family: monospace; font-size: medium; line-height: 1.3em; margin-left: 3em;"> /></pre> <br/><p>Owing to the long and sordid history of numbering conflicts, most vendor extensions define a <i>secret handshake.</i>.</p> <pre style="font-family: monospace; font-size: medium; line-height: 1.3em; margin-left: 3em;"><br />option dhcp6.vendor-class code 16 = {integer 32, integer 16, string};<br /><br /># length=14 bytes, Frobozzco IEN, content=look north<br />send dhcp6.vendor-class 12345 14 "look north";<br /></pre> <br/><p.</p> <pre style="font-family: monospace; font-size: medium; line-height: 1.3em; margin-left: 3em;"><br />script "/usr/local/sbin/dhclient-script";<br /> /><br />option dhcp6.vendor-class code 16 = {integer 32, integer 16, string};<br /><br />interface "eth0" {<br /> also request dhcp6.vendor-opts;<br /> send dhcp6.vendor-class 12345 10 "look north";<br />}<br /></pre> <br/><h3>dhclient-script</h3><p>On the client we also must provide the script for dhclient to run. The OS vendor will have provided one, often in /sbin or /usr/sbin. We'll copy it, and add handling.</p> <p>dhclient passes in environment variables for each DHCP option. The name of the variable is "new_<option space name>_<option name>" For the example config above, we'd define a shell script function to write our two options to files in /tmp.</p> <pre style="font-family: monospace; font-size: medium; line-height: 1.3em; margin-left: 3em;"><br />make_frobozzco_files() { <br /> mkdir /tmp/frobozzco <br /> if [ "x${new_frobozzco_maze_location}" != x ] ; then <br /> echo ${new_frobozzco_maze_location} > /tmp/frobozzco/maze_location <br /> fi <br /> if [ "x${new_frobozzco_grue_probability}" != x ] ; then<br /> echo ${new_frobozzco_grue_probability} > /tmp/frobozzco/grue_probability<br /> fi <br />}<br /></pre> <p>The dhclient-script provided with the OS will have handling for DNS nameservers. Adding a call to make_frobozzco_files at the same points in the script which handle /etc/resolv.conf is a reasonable approach to take.</p> <p>When <a href="">list of certificates from mozilla.org,</a> and there are various Perl and Python scripts floating around in the search engines to assemble this list into a PEM file suitable for libssl.</p> <p>2011 was not a good year for certificate authorities. <a href="">DigiNotar was seized</a> by the Dutch government after it became clear they had been thoroughly breached and generated fraudulent certificates for many large domains. Several Comodo resellers <a href="">were similarly compromised</a> and generated bogus certs for some of the same sites. Browser makers responded by encoding a list of toxic certificates into the browser, to reject any certificate signed by them.</p> <p><i>Encoding a list of toxic certificates</i> is the key phrase in that paragraph. As of 2011, Mozilla's certdata.txt contains both trusted CAs and absolutely untrustworthy, revoked CAs. There is metadata in the entry describing how it should be treated, but several of the scripts floating around grab <i>everything</i> listed in certdata.txt and <b>put it in the PEM file.</b> This is disastrous.</p> <p:</p> <ul> <li>Adam Langley's <a href="">extract-nss-root-certs</a>, written in Go. <a href="">Read his announcement for more information</a>.</li> <li>OpenSUSE's <a href="">extractcerts.pl</a>, written in Perl</li></ul><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div><img src="" height="1" width="1" alt=""/>Denton Gentry<p>SOPA isn't dead. It hasn't been defeated. It hasn't been stopped. Its just regrouping.</p> <p>Their main mistake was in allowing it to become publicly known too long before a decisive vote. Its backers will try again, next time ramming it through in the dead of night. They'll give it a scary title, as anything can be justified if the title of the bill is scary enough.</p> <p>Bills like SOPA are an attempt to legislate a return to media economics the way it used to be, where the sheer cost of distributing content formed a high barrier to entry. Its the economics of scarcity. Better yet, the law would require <i>someone else</i> to pay the cost of creating this scarcity. If the cost of any infringement, intentional or not, third party or first party, can be made so overwhelming as to be ruinous (and incidentally decoupled from any notion of the actual damage from the infringement), then cheap distribution via the Internet can be made expensive again. We can get back to the cozy media business of prior decades.</p> <p>Its time to stop playing defense, desperately trying to stop each of these bills.</p> <p>Its time to start playing offense.</p> <br/><p>The workings of government are obscure and impenetrable. There are reams of data produced in the form of minutes, committee reports, the <i>Federal Register,</i> and other minutiae, but the whole remains an opaque mass. Lobbyists and political operatives thrive in this environment, as they understand more about the mechanisms by which it operates. Yet one of the recent core competencies of the technology industry is Big Data. There are conclusions which can be drawn from trends within the dataset without having to semantically understand all of it.</p> <p>I have to believe there are things the tech industry can do beyond simply increasing lobbying budgets.</p><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div><img src="" height="1" width="1" alt=""/>Denton Gentry Silicon<p>Last week <a href="">Greg Ferro</a> wrote about <a href="">the use of merchant silicon</a> in networking products. I'd like to share some thoughts on the topic.</p> <br/><h3>Chip cost</h3><p><img border="0" width="71" height="78" src="" align="right" style="border: 1px #777; margin: 0 0 1em 1em;" alt="Fistful of dollars" title="Those should probably be $10 bills, with inflation and all".</p> <p.</p> <p>In my experience, chip price was not a decisive factor in the wholesale move to merchant silicon.</p> <br/><h3>Non Recurring Engineering (NRE cost)</h3><p><img border="0" width="73" height="66" src="" align="right" style="border: 1px #777; margin: 0 0 1em 1em;" alt="Silicon chip" title="That is not actually a switch chip. Shhhh. Do not tell anyone, please.".</p> <p <b>did</b> pay the cost of development, but it would be factored into the unit price and pay-as-you-go rather than upfront.</p> <p.</p> <p>In my experience, eliminating the burden of NRE was not a decisive factor in the move to merchant silicon.</p> <br/><h3>Schedule</h3><p><img border="0" width="141" height="100" src="" align="right" style="border: 1px #777; margin: 0 0 1em 1em;" alt="Gantt chart" title="That is from my real day-job project. No fooling.">The merchant silicon vendors of the world can dedicate more ASIC engineers to their projects. This isn't as big a win as it sounds: tripling the size of the design team does <u>not</u> result in a chip with 3x the features or in 1/3rd the time. As with software projects (see <i>The Mythical Man Month),</i> the increasing coordination overhead of a larger team results in steeply diminishing returns.</p> <p.</p> <p.</p> <p>Yet in my experience at least, though schedule is a decisive factor, this isn't the full story.</p> <br/><h3>Misaligned Incentives</h3><p>When leading a chip development effort, the biggest fear is not that the chip will have bugs. Many ASIC bugs can be worked around in software.</p> <p>The biggest fear is not that the chip will be larger and more costly than planned. That is a negotiation issue with the silicon fab, or a business issue in positioning the product.</p> <p>The biggest fear is that the chip will be <b>late.</b>.</p> <p.</p> <p.</p> <br/><h3>The Point of No Return</h3><p.</p> .</p> <p>It can easily become a self-fulfilling prophecy: serious consideration of a move to merchant silicon leads to loss of the capability to develop custom ASICs.</p> <br/><h3>Why it Matters</h3><p.</p> <p.</p><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div><img src="" height="1" width="1" alt=""/>Denton Gentry<p>Earlier this week <a href="">Sam Biddle of Gizmodo</a> published <a href="">How the Hashtag Is Ruining the English Language</a>, decrying the use of hashtags to add additional color or meaning to text. Quoth the article, "The hashtag is a vulgar crutch, a lazy reach for substance in the personal void – written clipart." #getoffhislawn</p> <p</p> <p>Yet language evolves to suit our needs and to fit advances communications technology. A specific example: in the US we commonly say "Hello" as a greeting. Its considered polite, and it has always been the common practice... except that it <i>hasn't.</i> <a href="">The greeting <i>Hello</i> entered the English language</a> in the mid 19th century <a href="">with the invention of the telephone.</a> The custom until that time of speaking only after a proper introduction simply didn't work on the telephone, it wasn't practical over the distances involved to coordinate so many people. Use of <i>Hello</i></p> <p <u>haven't< <a href="">Blaise Pascal</a> in the 17th century.</p> <br/><h3>Disambiguation</h3><p>Gizmodo even elicited a response from <a href="">Noam Chomsky,</a> probably via email, "Don't use Twitter, almost never see it."</p> , <b>or</b> that anyone bothered by hashtags shouldn't use Twitter so they won't see them. He probably means the former, but in an in-person conversation there would be no ambiguity. Facial expression would convey his unfamiliarity with Twitter.</p> <p</p><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div><img src="" height="1" width="1" alt=""/>Denton Gentry Is Everywhere<center><img border="0" width="585" height="418" src="" style="margin-top: 1em; border: 0px;" alt="Large Ditch Witch" title="Is the Large Hadron Collider a Proton Refactoring tool, or an actual subatomic IDE?" /></center> <p>The utilities used to run from poles, now they are underground. The functionality is unchanged, but the implementation is cleaner.</p><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div><img src="" height="1" width="1" alt=""/>Denton Gentry Inheritance<center><img border="0" width="500" height="375" src="" style="margin-top: 1em; border: 0px;" alt="Hot Dog cut to resemble octopus tentacles" title="My daughter thought this was funny, but refused to eat it." /></center><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div><img src="" height="1" width="1" alt=""/>Denton Gentry Ada Initiative 2012<p><a href=""><img border="0" width="204" height="137" align="right" src="" style="margin: 0;" alt="Donate to the Ada Initiative"></a>Earlier this year I donated seed funding to the <a href="">Ada Initiative</a>, a non-profit organization dedicated to increasing participation of women in open technology and culture. <a href="">One of their early efforts</a> was development of an <a href="">example anti-harassment policy</a>.</p> <p><a href="">The Ada Initiative is now raising funds</a> for <a href="">2012 activities, including:</a></p> <ul><li>Ada’s Advice: a guide to resources for helping women in open tech/culture</li><li>Ada’s Careers: a career development community for women in open tech/culture</li><li>First Patch Week: help women write and submit a patch in a week</li><li><a href="">AdaCamp</a> and AdaCon: (un)conferences for women in open tech/culture</li><li>Women in Open Source Survey: annual survey of women in open source</li></ul> <br/> <br/><h3>For me personally</h3><p>There are many barriers discouraging women from participating in the technology field. Donating to the Ada Initiative is one thing I'm doing to try to change that. I'm posting this to <a href="">ask other people to join me in supporting this effort</a>.</p> <p>My daughter is 6. The status quo is unacceptable. Time is short.</p><br/> <center><a href=""><img border="0" width="480" height="360" src="" style="border: 1px #777; margin: 0;" alt="My daughter wearing Google hat" title="She loves the hat, BTW."></a></center><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div><img src="" height="1" width="1" alt=""/>Denton Gentry Go Gadget Google Currents!<p>Last week <a href="">Google introduced</a> <a href="">Currents</a>, a publishing and distribution platform for smartphones and tablets. I decided to publish this blog as an edition, and wanted to walk through how it works.</p> <br/> <br/><h3>Publishing an Edition</h3><p><img border="0" width="303" height="355" src="" align="right" style="margin: 0 0 0 1em;" alt="Google Currents producer screenshot" title="Someday I hope to advance to Executive Producer.">Setting up the <a href="">publisher side of Google Currents</a> was straightforward. I entered data in a few tabs of the interface:</p> <p><b>Edition settings:</b> Entered the name for the blog, and the Google Analytics ID used on the web page.</p> <p><b>Sections:</b> added a "Blog" section, sourced from the RSS feed for this blog. I use <a href="">Feedburner</a> to post-process the raw RSS feed coming from Blogger. However I saw no difference in the layout of the articles in Google Currents between Feedburner and the Blogger feed. As Currents provides statistics using Google Analytics, I didn't want to have double counting by having the same users show up in the Feedburner analytics. I went with the RSS feed from Blogger.</p> <p><b>Sections->Blog:</b> After adding the Blog section I customized its CSS slightly, to use the paper tape image from the blog masthead as a header. I uploaded a 400x50 version of the image to the Media Library, and modified the CSS like so:</p><pre style="font-family: monospace; margin-left: 1em;">.customHeader {<br /> background-color: #f5f5f5;<br /> display: -webkit-box;<br /> <b>background-image: url('attachment/CAAqBggKMNPYLDDD3Qc-GoogleCurrentsLogo.jpg');</b><br /> <b>background-repeat: repeat-x;</b><br /> height: <b>50px;</b><br /> -webkit-box-flex: 0;<br /> -webkit-box-orient: horizontal;<br /> -webkit-box-pack: center;<br />}</pre> <p><b>Manage Articles:</b> I didn't do anything special here. Once the system has fetched content from RSS it is possible to tweak its presentation here, but I doubt I will do that. There is a limit to the amount of time I'll spend futzing.</p> <p><b>Media Library:</b> I uploaded the header graphic to use in the Sections tab.</p> <p><b>Grant access:</b> anyone can read this blog.</p> <p><b>Distribute:</b> I had to click to verify content ownership. As I had already gone through the verification process for <a href="">Google Webmaster Tools</a>, the Producer verification went through without additional effort. I then clicked "Distribute" and voila!</p> <br/> <br clear="right" /><h3>The Point?</h3><p><img border="0" width="391" height="506" src="" align="right" style="margin: 0 0 0 1em;" alt="iPad screenshot of this site in Google Currents" title="Yeah, it is an iPad not an Android device. Send me your hate.">Much of the publisher interface concerns formatting and presentation of articles. RSS feeds generally require significant work on the formatting to look reasonable, a service performed by Feedburner and by tools like Flipboard and Google Currents. Nonetheless, I don't think the formatting is the main point, presentation is a means to an end. RSS is a reasonable transport protocol, but people have pressed it into service as the supplier of presentation and layout as well by wrapping a UI around it. Its not very good at it. Publishing tools have to expend effort on presentation and layout to make it useable.</p> <p>Nonetheless, for me at least, the main point of publishing to Google Currents is discoverability. I'm hopeful it will evolve into a service which doesn't just show me material I already <i>know</i> I'm interested in, but also becomes good at suggesting new material which fits my interests.</p> <br/> <br/><h3>Community Trumps Content</h3><p><a href="">A concern has been expressed that content distribution tools</a> like this, which use web protocols but are not a web page, will kill off the blog comments which motivate many smaller sites to continue publishing. The thing is, in my experience at least, blog comments all but died long ago. Presentation of the content had nothing to do with it: Community trumps Content. That is, people motivated to leave comments tend to gravitate to an online community where they can interact. They don't confine themselves to material from a single site. Only the most <a href="">massive blogs</a> have the gravitational attraction to hold a community together. The rest quickly lose their atmosphere to Reddit/Facebook/Google+/etc. I am grateful when people leave comments on the blog, but I get just as much edification from a comment on a social site, and just as much consternation if the sentiment is negative, as if it is here. It is somewhat more difficult for me to <i>find</i> comments left on social sites, but let me be perfectly clear: that is <b>my</b> problem, and my job to stay on top of.</p> <br/> <br/><h3>The Mobile Web</h3><p>One other finding from setting up Currents: <a href="">the Blogger mobile templates are quite good.</a> The formatting of this site in a mobile browser is very nice, and similar to the formatting which Currents comes up with. To me Currents is mostly about discoverability, not just presentation.< for Jumbo Frames<p>This weekend <a href="">Greg Ferro</a> published <a href="">an article about jumbo frames</a>. He points to <a href="">recent measurements showing no real benefit</a> with large frames. Some years ago I worked on NIC designs, and at the time we talked about Jumbo frames a lot. It was always a tradeoff: improve performance by sacrificing compatibility, or live with the performance until hardware designs could make the 1500 byte MTU be as efficient as jumbo frames. The latter school of thought won out, and they delivered on it. Jumbo frames no longer offer a significant performance advantage.</p> <p>Roughly speaking, software overhead for a networking protocol stack can be divided into two chunks:</p><ul><li><b>Per-byte</b> which increases with each byte of data sent. Data copies, encryption, checksums, etc make up this kind of overhead.</li><li><b>Per-packet</b> which increases with each <u>packet</u> regardless of how big the packet is. Interrupts, socket buffer manipulation, protocol control block lookups, and context switches are examples of this kind of overhead.</li></ul> <br/> <br/><h3>Wayback machine to 1992</h3><p>I'm going to talk about the evolution of operating systems and NICs starting from the 1990s, but will focus on Unix systems. DOS and MacOS 6.x were far more common back then, but modern operating systems evolved more similarly to Unix than to those environments.</p> <p><img border="0" width="116" height="130" src="" align="right" style="border: 1px #777; margin: 0 0 0 1em;" alt="Address spaces in user space, kernel, and NIC hardware" title="Three stages to orbit.">Lets consider a typical processing path for sending a packet in a Unix system in the early 1990s:</p><ol><li>Application calls write(). System copies a chunk of data into the kernel, to mbufs/mblks/etc.</li><li>Kernel buffers handed to TCP/IP stack, which looks up the protocol control block (PCB) for the socket.</li><li>Stack calculates a TCP checksum and populates the TCP, IP, and Ethernet headers.</li><li>Ethernet driver copies kernel buffers out to the hardware. Programmed I/O using the CPU to copy was quite common in 1992.</li><li>Hardware interrupts when the transmission is complete, allowing the driver to send another packet.</li></ol> <p>Altogether the data was copied two and a half times: from user space to kernel, from kernel to NIC, plus a pass over the data to calculate the TCP checksum. There were additionally <i>per packet</i> overheads in looking up the PCB, populating headers, and handling interrupts.</p> <p>The receive path was similar, with a NIC interrupt kicking off processing of each packet and two and a half copies up to the receiving application. There was more per-packet overhead for receive: where transmit could look up the PCB once and process a sizable chunk of data from the application in one swoop, RX always gets one packet at a time.</p> <p>Jumbo frames were a performance advantage in this timeframe, but not a huge one. Larger frames reduced the per-packet overhead, but the per-byte overheads were significant enough to dominate the performance numbers.</p> <br/> <br/><h3>Wayback Machine to 1999</h3><p>An early optimization was elimination of the separate pass over the data for the TCP checksum. It could be folded into one of the data copies, and NICs also quickly added hardware support. <i>[Aside: the separate copy and checksum passes in 4.4BSD allowed years of academic papers to be written, putting whatever cruft they liked into the protocol, yet still portraying it as a performance improvement by incidentally folding the checksum into a copy.]</i> NICs also evolved to be DMA devices; the memory subsystem still had to bear the overhead of the copy to hardware, but the CPU load was alleviated. Finally, operating systems got smarter about leaving gaps for headers when copying data into the kernel, eliminating a bunch of memory allocation overhead to hold the TCP/IP/Ethernet headers.</p> <p><img border="0" width="400" height="260" src="" align="right" style="border: 1px #777; margin: 0 0 0 1em;" alt="Packet size vs throughput in 2000, 2.5x for 9180 byte vs 1500" title="I admit it: until about 1995, I thought ATM was a good idea.">I have data on packet size versus throughput in this timeframe, collected in the last months of 2000. It was gathered for a presentation at <a href="">LCN 2000</a>. It used an OC-12 ATM interface, where LAN emulation allowed MTUs up to 18 KBytes. I had to find an <b>old</b> system to run these, the modern systems of the time could almost max out the OC-12 link with 1500 byte packets. I recall it being a Sparcstation-20. The ATM NIC supported TCP checksums in hardware and used DMA.</p> <p>Roughly the year 1999 was the peak of when jumbo frames would have been most beneficial. Considerable work had been done by that point to reduce per-byte overheads, eliminating the separate checksumming pass and offloading data movement from the CPU. Some work had been done to reduce the per-packet overhead, but not as much. <i>After</i> 1999 additional hardware focussed on reducing the per-packet overhead, and jumbo frames gradually became less of a win.</p> <br/> <br/><h3>LSO/LRO</h3><p><img border="0" width="94" height="175" src="" align="right" style="border: 1px #777; margin: 0 0 0 1em;" alt="Protocol stack handing a chunk of data to NIC" title="Microsoft refers to their accelerated protocol stack as Chimney, so I drew it like a bunch of bricks.">Large Segment Offload (LSO), referred to as TCP Segmentation Offload (TSO) in Linux circles, is a technique to copy a large chunk of data from the application process and hand it as-is to the NIC. The protocol stack generates a single set of Ethernet+TCP+IP header to use as a template, and the NIC handles the details of incrementing the sequence number and calculating fresh checksums for a new header prepended to each packet. Chunks of 32K and 64K are common, so the NIC transmits 21 or 42 TCP segments without further intervention from the protocol stack.</p> <p>The interesting thing about LSO and Jumbo frames is that Jumbo frames no longer make a difference. The CPU only gets involved for every large chunk of data, the overhead is the same whether that chunk turns into 1500 byte or 9000 byte packets on the wire. The main impact of the frame size is the number of ACKs coming back, as most TCP implementations generate an ACK for every other frame. Transmitting jumbo frames would reduce the number of ACKs, but that kind of overhead is below the noise floor. We just don't care.</p> <p>There is a similar technique for received packets called, imaginatively enough, Large Receive Offload (LRO). For LSO the NIC and protocol software are in control of when data is sent. For LRO, packets just arrive whenever they arrive. The NIC has to gather packets from each flow to hand up in a chunk. Its quite a bit more complex, and doesn't tend to work as well as LSO. As modern web application servers tend to send far more data than they receive, LSO has been of much greater importance than LRO.</p> <p>Large Segment Offload mostly removed the justification for jumbo frames. Nonetheless support for larger frame sizes is almost universal in modern networking gear, and customers who were already using jumbo frames have generally carried on using them. Moderately larger frame support is also helpful for carriers who want to encapsulate customer traffic within their own headers. I expect hardware designs to continue to accommodate it.</p> <br/> <br/><h3>TCP Calcification</h3><p>There has been a big downside of pervasive use of LSO: it has become the immune response preventing changes in protocols. NIC designs vary widely in their implementation of the technique, and some of them are very rigid. Here "rigid" is a euphemism for "mostly crap." There are NICs which hard-code how to handle protocols as they existed in the early part of this century: Ethernet header, optional VLAN header, IPv4/IPv6, TCP. Add any new option, or any new header, and some portion of existing NICs will not cope with it. Making changes to existing protocols or adding new headers is vastly harder now, as changes are likely to throw the protocol back into the slow zone and render moot any of the benefits it brings.</p> <p>It used to be that any new TCP extension had to carefully negotiate between sender and receiver in the SYN/SYN+ACK to make sure both sides would support an option. Nowadays due to LSO and to the pervasive use of middleboxes, we basically cannot add options to TCP at all.</p> <p>I guess the moral is, <i>"be careful what you wish for."< Followup<p>In August this site <a href="">published</a> <a href="">a series</a> <a href="">of posts</a> <a href="">about</a> the <a href="">Juniper QFabric</a>. Since then Juniper has <a href="">released hardware documentation</a> for the QFabric components, so its time for a followup.</p> <p><img border="0" width="313" height="158" align="right" src="" style="border: none; margin: 1em 0 1em 1em;" alt="QF edge Nodes, Interconnects, and Directors" title="It looks a bit like a suspension bridge.".</p> <p><img border="0" width="436" height="161" align="right" src="" style="border: none; margin: 1em 0 1em 1em;" alt="Control header prepended to frame" title="There is no problem which cannot be solved by an additional layer of indirection.". <a href="">QFabric functions much more like the collection of switch chips inside a modular chassis</a>:.</p> <br/> <br/><h3>Node Groups</h3><p>The <a href="">Hardware Documentation</a> describes two kinds of Node Groups, Server and Network, which gather multiple edge Nodes together for common purposes.</p><ul><li>Server Node Groups are straightforward: normally the edge Nodes are independent, connecting servers and storage to the fabric. Pairs of edge switches can be configured as Server Node Groups for redundancy, allowing LAG groups to span the two switches.</li><li>Network Node Groups configure up to eight edge Nodes to interconnect with remote networks. Routing protocols like BGP or OSPF run on the Director systems, so the entire Group shares a common Routing Information Base and other data.</li></ul> <p?</p> <p><img border="0" width="216" height="89" align="right" src="" style="border: none; margin: 0 0 1em 1em;" alt="Ingress fanout to four LAG member ports" title="Four years of EE classes just so I can draw a trapezoid as a mux.".</p> <p><img border="0" width="211" height="175" align="right" src="" style="border: none; margin: 0 0 1em 1em;" alt="Ingress fanout to four LAG member ports" title="Four years of EE classes just so I can draw a trapezoid as a mux.">The downside of implementing LAG at ingress is that <i>every chip</i> has to know the membership of all LAGs in the system. Whenever a LAG member port goes down, <i>all</i>.</p> <p>I feel compelled to emphasize again: I'm making this up. I don't know how QFabric is implemented nor why Juniper made the choices they made. Its just fun to speculate.</p> <br/> <br/><h3>Virtualized Junos</h3><p>Regarding the Director software, the <a href="">Hardware Documentation</a> says, <i>"[Director devices] run the Junos operating system (Junos OS) on top of a CentOS foundation."</i> Now <b>that</b> is an interesting choice. Way, way back in the mists of time, Junos started from <a href="">NetBSD</a> as its base OS. NetBSD is still a viable project and runs on modern x86 machines, yet Juniper chose to hoist Junos atop a Linux base instead.</p> <p <a href="">Xen hypervisor</a>.</p> <p><b>Update:</b> in the comments, <a href="">Brandon Bennett</a> and <a href="">Julien Goodwin</a> both note that Junos used <a href="">FreeBSD</a> as its base OS, not <a href="">NetBSD</a>.</p> <p.</p> <p>Aside, redux: <a href="">Junosphere</a>.</p> <br/> <br/><h3>Misc Notes</h3><ul> <li>The Director communicates with the Interconnects and Nodes via a separate control network, handled by Juniper's previous generation EX4200. This is an example of <a href="">using a simpler network</a> to bootstrap and control a more complex one.</li> <li>QFX3500 has four QSFPs for 40 gig Ethernet. These can each be broken out into four 10G Ethernet ports, except the first one which supports only three 10G ports. That is <u>fascinating.</u> I wonder what the fourth one does?</li></ul> <p>Thats all for now. We may return to QFabric as it becomes more widely deployed or as additional details surface.< BGP<p>Last week <a href="">Martin Casado</a> published some thoughts about using <a href="">OpenFlow and Software Defined Networking for simple forwarding</a>. That is, does SDN help in distributing shortest path routes for IP prefixes? BGP/OSPF/IS-IS/etc are pretty good for this, with the added benefit of being fully distributed and thoroughly debugged.</p> <p>The <a href="">full article</a> is worth a read. The summary (which Martin himself supplied) is <i>"I find it very difficult to argue that SDN has value when it comes to providing simple connectivity."</i> Existing routing protocols are quite good at distributing shortest path prefix routes, the real value of SDN is in handling more complex behaviors.</p> <p>To expand on this a bit, there have been various efforts over the years to tailor forwarding behavior using more esoteric cost functions. The monetary cost of using a link is a common one to optimize for, as it provides justification for spending on a development effort and also because the business arrangements driving the pricing tend not to distill down to simple weights on a link. Providers may want to keep their customer traffic off of competing networks who are in a position to steal the customer. Transit fees may kick in if a peer delivers significantly more traffic than it receives, providing an incentive to preferentially send traffic through a peer in order to keep the business arrangement equitable. Many of these examples are covered in <a href="">slides from a course</a> by <a href="">Jennifer Rexford</a>, who spent several years working on such topics at <a href="">AT&T Research</a>.</p> <p><img border="0" width="273" height="141" src="" align="right" style="border: none; margin: 0 0 0 1em;" alt="BGP peering between routers at low weight, from each router to controller at high weight" title="Its a whirling shield of BGP peers.">Until quite recently these systems had to be constructed using a standard routing protocol, because that is what the routers would support. BGP is a reasonable choice for this because its interoperability between modern implementations is excellent. The optimization system would peer with the routers, periodically recompute the desired behavior, and export those choices as the best route to destinations. To avoid having the Optimizer be a single point of failure able to bring down the entire network, the routers would retain peering connections with each other at a low weight as a fallback. The fallback routes would never be used so long as the Optimizer routes are present.</p> <p>This works. It solves real problems. However it is hard to ignore the fact that BGP <b>adds no value</b> in the implementation of the optimization system. Its just an obstacle in the way of getting entries into the forwarding tables of the switch fabric. It also constrains the forwarding behaviors to those which BGP can express, generally some combination of destination address and QoS.</p> <p><img border="0" width="265" height="150" src="" align="right" style="border: none; margin: 0 0 0 1em;" alt="BGP peering between routers, SDN to controller" title="You can no longer see the router for all the protocols.">Product support for software defined networking is <a href="">now</a> <a href="">appearing</a> <a href="">in the market</a>. These are generally parallel control paths alongside the existing routing protocols. SDN deposits routes into the same forwarding tables as BGP and OSPF, with some priority or precedence mechanism to control arbitration.</p> <p>By using an SDN protocol these optimization systems are no longer constrained to what BGP can express, they can operate on any information which the hardware supports. Yet even here there is an awkward interaction with the other protocols. Its useful to keep the peering connections with other routers as a fallback in case of controller failure, but they are not well integrated. We can only set precedences between SDN and BGP and hope for the best.</p> <p>I do wonder if the existing implementation of routing protocols needs a more significant rethink. There is great value in retaining compatibility with the external interfaces: being able to peer with existing BGP/OSPF/etc nodes is a huge benefit. In contrast, there is little value to retaining the internal implementation choices inside the router. The existing protocols could be made to cooperate more flexibly with other inputs. More speculatively, extensions to the protocol itself could label routes which are expected to be overridden by another source, and only present as a fallback path.< Computer is the Network<p>Modern.</p> <p><a href="">Software defined networks</a>.</p> <br/> <br/><h3>Decisions at the Edge</h3><p.</p> <p.</p> <p.</p> <br/> <br/><h3>Software Switches to the Rescue</h3><p>A number of market segments have gradually moved to a model where the first network element to touch the packet is implemented mostly in software. This allows the hope of substantially increasing their capability. A few examples:</p> <img border="0" width="131" height="84" src="" align="right" style="border: 1px #777; margin: 0 0 0 1em;" alt="vswitch running in the Hypervisor" title="Why the world does the Nexus 1000v have a CLI?"><p><b>Datacenters</b>: The first hop is a software switch running in the Hypervisor, like the <a href="">VMware vSwitch</a> or <a href="">Cisco Nexus 1000v</a>.</p> <br clear="right"/><img border="0" width="117" height="84" src="" align="right" style="border: 1px #777; margin: 0 0 0 1em;" alt="WAN Optimizer with 4 CPUs" title="Yes folks, most of them are just PCs."><p><b>Wide Area Networks</b>: WAN optimizers have become quite popular because they save money by reducing the amount of traffic sent over the WAN. These are mostly software products at this point, implementing protocol-specific compression and deduplication. Forthcoming 10 Gig products from <a href="">Infineta</a> appear to be the <a href="">first products containing significant amounts of custom hardware</a>.</p> <br clear="right"/><img border="0" width="100" height="84" src="" align="right" style="border: 1px #777; margin: 0 0 0 1em;" alt="Wifi AP with CPU, Wifi MAC, and Ethernet MAC" title="Like the TV antenna? I drew it myself."><p><b>Wifi Access Points</b>: Traditional, thick APs as seen in the consumer and carrier-provided equipment market are a CPU with Ethernet and Wifi, forwarding packets in software.<br/>Thin APs for Enterprise use as deployed by <a href="">Aruba</a>/Airespace/etc are rather different, the real forwarding happens in hardware back at a central controller.</p> <br clear="right"/><img border="0" width="90" height="84" src="" align="right" style="border: 1px #777; margin: 0 0 0 1em;" alt="Cable modem with DOCSIS and Ethernet" title="I resisted the urge to draw it sitting on a TV."><p><b>Carrier Network Access Units</b>: Like Wifi APs, access gear for DSL and DOCSIS networks is usually a CPU with the appropriate peripherals and forwards frames in software.</p> <br clear="right"/><img border="0" width="87" height="87" src="" align="right" style="border: 1px #777; margin: 0 0 0 1em;" alt="Enterprise switch with CPU handling all packets, and a big red X through it" title="There is much bitterness in this picture."><p><b>Enterprise</b>: <a href="">out of band approaches</a>.</p> <br/> <br/><h3>The Computer is the Network</h3><p>The Sun Microsystems tagline through most of the 1980s was <i><a href="">The Network is the Computer</a>.</i>.</p> <p.< Point<p>Last week at the <a href="">Web 2.0 Summit in San Francisco</a>, Twitter CEO Dick Costolo talked about recent growth in the service and how iOS5 had caused a <a href="">sudden 3x jump in signups</a>. He also said daily Tweet volume had reached 250 million. There are many, many estimates of the volume of Tweets sent, but I know of only three which are verifiable as directly from Twitter:</p> <ul> <li>50M tweets/day in March, 2010 <a href="">according to a Twitter blog post</a>.</li> <li>140M tweets/day in March, 2011 <a href="">according to that same Twitter blog post</a>.</li> <li>250M tweets/day in late October, 2011 <a href="">according to Dick Costolo</a>.</li></ul> <p>Graphing these on a log scale shows the <i>rate of growth</i> in Tweet volume, <strike>roughly tripling in two years</strike> almost tripling in one year.</p> <center><img border="0" width="640" height="363" src="" style="border: 1px #777; margin: 1em 0 1em 0;" alt="Graph of average daily Tweet volume" title="Four data points + much handwaving == blog post!"></center> <p>This graph is misleading though, as we have so few data points. It is very likely that, like signups for the service, the rate of growth in tweet volume suddenly increased after iOS5 shipped. Lets assume the <i>rate of growth</i> also tripled for the few days after the iOS5 launch, and zoom in on the tail end of the graph. It is quite similar up until a sharp uptick at the end.</p> <center><img border="0" width="640" height="363" src="" style="border: 1px #777; margin: 1em 0 1em 0;" alt="Speculative graph of average daily Tweet volume, knee of curve at iOS5 launch." title="Four data points + one made up data point + much handwaving == blog post!"></center> <p>The reality is somewhere between those two graphs, but likely still steep enough to be <b>terrifying</b> to the engineers involved. iOS5 will absolutely have an impact on the daily volume of Tweets, it would be ludicrous to think otherwise. It probably isn't so abrupt a knee in the curve as shown here, but it has to be substantial. Tweet growth is on a new and steeper slope now. It used to triple in a bit over a year, now it will triple in way less than one year.</p> <br/> <br/><h3>Why this matters</h3><p>Even five months ago, the traffic to carry the Twitter Firehose <a href="">was becoming a challenge to handle</a>. At that time the average throughput was 35 Mbps, with spikes up to about 138 Mbps. Scaling those numbers to today would be 56 Mbps sustained with spikes to 223 Mbps, and about one year until the spikes exceed a gigabit.</p> <p>The indications I've seen are that the feed from Twitter is still sent uncompressed. Compressing using gzip (or <a href="">Snappy</a>) would gain some breathing room, but not solve the underlying problem. The underlying problem is that the volume of data is increasing way, way faster than the capacity of the network and computing elements tasked with handling it. Compression can reduce the absolute number of bits being sent (at the cost of even more CPU), but not reduce the <i>rate of growth.</i></p> <p>Fundamentally, there is a limit to how fast a single HTTP stream can go. As described in the <a href="">post earlier this year</a>, we've scaled network and CPU capacity by going horizontal and spreading load across more elements. Use of a single very fast TCP flow restricts the handling to a single network link and single CPU in a number of places. The network capacity has some headroom still, particularly by throwing money at it in the form of 10G Ethernet links. The capacity of a single CPU core to process the TCP stream is the more serious bottleneck. At some point relatively soon it will be more cost effective to split the Twitter firehose across multiple TCP streams, for easier scaling. The Tweet ID (or a new sequence number) could put tweets back into an absolute order when needed.</p> <center><img border="0" width="463" height="91" src="" style="border: 1px #777; margin: 1em 0 1em 0;" alt="Unbalanced link aggregation with a single high speed HTTP firehose." title="It looks like a tartan. Clearly a Scottish network."></center> <br/> <p><b>Update:</b> My math was off. Even before the iOS5 announcement, the rate of growth was nearly tripling in one year. Corrected Trodden Technology Paths<p>Modern.</p> <p.</p> <img border="0" width="224" height="186" align="right" src="" style="border: none; margin: 0;" alt="Large CPU with many cores, and a small 68k CPU in the corner." title="Can you run Hypercard on that thing?"> <p>Many, and I'd hazard to guess <i>most,</i> complex CPU designs reduce their verification cost and design risk by relying on a far simpler CPU buried within the system to handle the earliest stages of initialization. For example, the <a href="">Montalvo x86 CPU</a>.</p> <br/> <br/><h3>Warning: Sudden Segue Ahead</h3><p.</p> <p>Networking will also face some of the same issues as modern CPUs, where the optimal design for performance in normal operation is not suitable for handling its own control and maintenance. Last week's <a href="">ruminations about L2 learning</a> are one example: though we can make a case for software provisioning of MAC addresses, the result is a network which doesn't handle topology changes without software stepping in to reprovision.</p> <p.</p> <p>All in all, its an exciting time to be in networking.< HTTPClient Chunked Downloads<p><a href="">Tornado</a> is an open source web server in Python. It was originally developed to power <a href="">friendfeed.com</a>, and excels at non-blocking operations for real-time web services.</p> <p().</p> <p>The streaming_callback will be called for each chunk of data from the server. 4 KBytes is a common chunk size. The async_callback will be called when the file has been fully fetched; the response.data will be empty</p> <pre style="font-family: monospace; font-size: small; line-height: 1.3em; margin-left: 3em;">#!/usr/bin/python<br /><br />import os<br />import tempfile<br />import tornado.httpclient<br />import tornado.ioloop<br /><br />class HttpDownload(object):<br /> def __init__(self, url, ioloop):<br /> self.ioloop = ioloop<br /> self.tempfile = tempfile.NamedTemporaryFile(delete=False)<br /> req = tornado.httpclient.HTTPRequest(<br /> url = url,<br /> <b>streaming_callback = self.streaming_callback</b>)<br /> http_client = tornado.httpclient.AsyncHTTPClient()<br /> http_client.fetch(req, self.async_callback)<br /><br /> <b>def streaming_callback(self, data):<br /> self.tempfile.write(data)</b><br /><br /> def async_callback(self, response):<br /> self.tempfile.flush()<br /> self.tempfile.close()<br /> if response.error:<br /> print "Failed"<br /> os.unlink(self.tempfile.name)<br /> else:<br /> print("Success: %s" % self.tempfile.name)<br /> self.ioloop.stop()<br /><br />def main():<br /> ioloop = tornado.ioloop.IOLoop.instance()<br /> dl = HttpDownload("", ioloop)<br /> ioloop.start()<br /><br />if __name__ == '__main__':<br /> main()<br /></pre> 2 History<p><span style="font-weight: bold;">Why use L2 networks in datacenters?</span><br/>Virtual machines need to move from one physical server to another, to balance load. To avoid disrupting service, their IP address cannot change as a result of this move. That means the servers need to be in the same L3 subnet, leading to enormous L2 networks.</p> <p><span style="font-weight: bold;">Why are enormous L2 networks a problem?</span><br/>A switch looks up the destination MAC address of the packet it is forwarding. If the switch knows what port that MAC address is on, it sends the packet to that port. If the switch does not know where the MAC address is, it floods the packet to <i>all</i> ports. The amount of flooding traffic tends to rise as the number of stations attached to the L2 network increases.</p> <img border="0" width="227" height="173" align="right" src="" style="border: none; margin: 0;" alt="Transition from half duplexed Ethernet to L2 switching." title="I miss vampire taps."> <p><span style="font-weight: bold;">Why do L2 switches flood unknown address packets?</span><br/>So they can learn where that address is. Flooding the packet to all ports means that if that destination exists, it should see the packet and respond. The source address in the response packet lets the switches learn where that address is.</p> <p><span style="font-weight: bold;">Why do L2 switches need to learn addresses dynamically?</span><br/>Because they replaced simpler repeaters (often called hubs). Repeaters required no configuration, they just repeated the packet they saw on one segment to all other segments. Requiring extensive configuration of MAC addresses for switches would have been an enormous drawback.</p> <p><span style="font-weight: bold;">Why did repeaters send packets to all segments?</span><br/>Repeaters were developed to scale up Ethernet networks. Ethernet at that time mostly used coaxial cable. Once attached to the cable, the station could see all packets from all other stations. Repeaters kept that same property.</p> <p><span style="font-weight: bold;">How could all stations see all packets?</span><br/>There were <a href="">limits placed</a> on the maximum cable length, propagation delay through a repeater, and the number of repeaters in an Ethernet network. The speed of light in the coaxial cable used for the original Ethernet networks is 0.77c, or 77% of the speed of light in a vacuum. Ethernet has a minimum packet size to allow sufficient time for the first bit of the packet to propagate all the way across the topology and back before the packet ends transmission.</p> <p>So there you go. We build datacenter networks this way because of the speed of light in coaxial cable.< All the Way Down<p><a href="">Jean-Baptiste Queru</a> recently wrote a <a href="">brilliant essay titled <i>Dizzying but invisible depth,</i></a> a description of the sheer, unimaginable complexity at each layer of modern computing infrastructure. It is worth a read.</p><div class="feedflare"> <a href=""><img src="" border="0"></img></a> <a href=""><img src="" border="0"></img></a> </div><img src="" height="1" width="1" alt=""/>Denton Gentry Ritchie, 1941-2011<a href=""><img border="0" width="226" height="298" align="right" src="" style="border: none; margin: 0;" alt="Kernighan and Ritchie _The C Programming Language_" title="Image courtesy Wikipedia."></a><p><a href="">K&R C</a> is the finest programming language book ever published. Its <i>terseness</i> is a hallmark of the work of Dennis Ritchie; it says exactly what needs to be said, and nothing more.</p> <p>Rest in Peace, Dennis Ritchie.</p> <p>The first generation of computer pioneers are already gone. We're beginning to lose the second generation.<<p>In the last decade we have enjoyed a renaissance of programming language development. Clojure, Scala, Python, C#/F#/et al, Ruby (and Rails), Javascript, node.js, Haskell, Go, and the list goes on. Development of many of those languages started in the 1990s, but adoption accelerated in the 2000s.</p> <p>Why now? There are probably a lot of reasons, but I want to opine on one.</p> <div style="width: 20em; text-align: center; font-style: italic; margin: auto; border: 1px #777; background-color: #eee; padding: 1em;">HTTP is our program linker.</div> <!-- <p>Commonly used operating systems have for decades been written primarily in some variant of C. The system libraries often only had C bindings, which in turn encouraged other third party libraries to be in C in order to easily link. Programming languages which evolved in that era had to easily interface with C code, it was impractical to redevelop every facility one might want to use. The Java Native Interface was very important during the first few years of Java deployment, while Python and Tcl also had simple extension mechanisms. Java, Perl, and Python gradually developed extensive standard libraries, though even these used a lot of C code.</p>--> <p>We no longer have to worry about linking to a gazillion libraries written in different languages, with all of the compatibility issues that entails. We no longer build large software systems by linking it all into ginormous binaries, and that loosens a straightjacket which made it difficult to stray too far from C. We dabbled with DCE/CORBA/SunRPC as a way to decouple systems, but RPC marshaling semantics still dragged in a bunch of assumptions about data types.</p> <p>It took the web and the model of software as a service running on server farms to really decompose large systems into cooperating subsystems which could be implemented any way they like. Facebook can implement <a href="">chat in Erlang</a>, Akamai can use Clojure, Google can mix C++ with Java/Python/Go/etc. It is all connected together via HTTP, sometimes carrying SOAP or other RPCs, and sometimes with RESTful interfaces even inside the system.< Ada, 2011<div style="width: 80%; font-style: italic; margin: auto;".<div style="text-align: right; color: #bbb; font-size: 80%;"><a href="">findingada.com</a></div></div> <br/><p>For Ada Lovelace Day 2010 I analyzed a patent for a <a href="">frequency hopping control system for guided torpedoes</a>, granted to <a href="">Hedy Lamarr</a> and <a href="">George Antheil</a>. For Ada Lovelace Day this year I want to share a story from early in my career.</p> <p.</p> <p.</p> <p><img border="0" width="320" height="126" align="right" src="" style="border: none; margin: 0;" alt="Cell loss == packet loss." title="Theres a hole in my packet, dear Liza, dear Liza, theres a hole in my packet, dear Liza, a hole.".</p> <p>Allyn Romanow at Sun Microsystems and <a href="">Sally Floyd</a> from the Lawrence Berkeley Labs conducted a series of simulations, ultimately <a href="">resulting in a paper</a> on how to deal with congestion. If a cell had to be dropped, drop the rest of the cells in that packet. Furthermore, deliberately <a href="">dropping packets early</a>.</p> <p.</p> <p>In this industry we tend to celebrate engineers who spend massive effort putting out fires. What I learned from Allyn, Sally, and Renee is that the truly <i>great</i> engineers see the fire coming, and keep it from spreading in the first place.</p> <p><b>Update:</b> <a href="">Dan McDonald</a> worked at Sun in the same timeframe, and <a href="">posted his own recollections</a> of working with Allyn, Sally, and Renee. As <a href="">he put it on Google+</a>, "Good choices for people, poor choice for technology." <i>(i.e. ATM Considered Harmful).</i>< Uniform Network Access<p><img border="0" width="212" height="178" align="right" src="" style="border: none; margin: 0;" alt="Four CPUs in a ring, with RAM attached to each." title="Confession: I find cache coherency overrated."><a href="">Non Uniform Memory Access</a> is common in modern x86 servers. RAM is connected to each CPU, which connect to each other. Any CPU can access any location in RAM, but will incur additional latency if there are multiple hops along the way. This is the <i>non-uniform</i> part: some portions of memory take longer to access than others.</p> <p>Yet the NUMA we use today is NUMA <span style="font-size: 90%;">in</span> <span style="font-size: 80%;">the</span> <span style="font-size: 70%;">small</span>.... <b>except for performance.</b></p> <p.</p> <p.</p> <br/> <br/><h3>A Segue to Web Applications</h3><!-- <p>The common conception of a web application is multiple front end servers running PHP/Rails/etc, all communicating with a backend MySQL/Postgres/etc database. Though a vast number of sites do work that way, web applications have rapidly accumukated far more moving parts. Large web apps consist of cooperating processes, sometimes co-resident on a single system and sometimes distributed, but communicating by IP and web protocols in either case. This reliance on networked protocols has made possible a renaissance in programming languages: linking them together is now done using HTTP and not via an ELF toolchain. Components of the web application can be implemented in Ruby, Python, Clojure, Scala, Erlang, node.js, etc, whatever is most appropriate for that specific piece, with HTTP plumbing them together.</p>--> <img border="0" width="307" height="222" align="right" src="" style="border: none; margin: 0;" alt="Tuning knobs for CPU, Memory, Network." title="There are little knobs on the wall of every datacenter. I have photographic proof."><p.</p> <br/> <br/><h3>Further Segue To Overlay Networks</h3><p>There is a lot of effort being put <a href="">into overlay networks</a> <a href="">for virtualized datacenters</a>, to create an L2 network atop an L3 infrastructure. This allows the infrastructure to run as an L3 network, which we are pretty good at scaling and managing, while the service provided to the VMs behaves as an L2 network.</p> <p>Yet once the packets are carried in IP tunnels they can, through the magic of routing, be carried across a WAN to another facility. The datacenter network can be transparently extended to include resources in several locations. Transparently, <b>except for performance.</b>.</p> .<
http://feeds.feedburner.com/CodingRelic
CC-MAIN-2015-35
refinedweb
11,263
55.84
World trade in cereals in 1998/99 is currently forecast at 199 million tonnes, down 8 million tonnes, or 4 percent, from the previous year and 2 million tonnes lower than reported in June. Most of the anticipated contraction in world imports would be in wheat and rice mainly because of reduced import demand in a number of low-income food-deficit countries where domestic production is estimated to increase in 1998. By contrast, coarse grain imports are forecast to increase slightly. Two important developments that took place in recent months are expected to weigh on the short-term outlook for cereal trade, i.e. the decline in petroleum prices and the financial turmoil facing several countries. For several oil-exporting, grain-importing countries, the sharp drop in oil export earnings could lead to smaller grain purchases. The continuing financial turmoil in Asia and more recently in the Russian Federation could also force some of the countries affected to curtail their foreign cereal purchases despite smaller domestic output in some cases and despite the slide in international cereal prices expressed in US dollars in recent months. As a result, commercial imports by some of these countries may fall short of covering their deficit. At the same time, larger supplies in major exporting countries may facilitate a substantial increase in the food aid component of total cereal trade. Against this background and taking into account the recent decision by the United States to donate an additional 2.5 million tonnes of wheat to countries in need, food aid shipments in 1998/99 are tentatively forecast to rebound from the previous years estimated 5.5 million tonnes to about 8 million tonnes. The forecast for world imports of wheat and wheat flour (in wheat equivalent) in 1998/99 (July/June) has been raised slightly from the previous report, by 500 000 tonnes, to 90.5 million tonnes, which would be some 5.5 million tonnes below the revised estimate for imports in 1997/98. Apart from the factors mentioned above, this years good crops in a number of countries, following favourable weather conditions, is another reason for this reduction. Overall, wheat imports by the developing countries are now forecast to fall by some 5 million tonnes to 73 million tonnes. Also, somewhat smaller imports are anticipated among the developed countries, particularly in the EC. The sharpest decline is expected in Asia, where total imports may amount to just 42 million tonnes, down more than 4 million tonnes from the previous year and the lowest volume in almost a decade. In Pakistan, a bumper 1998 crop could esult in at least 3 million tonnes lower imports compared to last year. Also in India, large domestic supplies could lead to a decline of about 500 000 tonnes in imports, while wheat purchases by the Islamic Republic of Iran could plunge for the second consecutive year, dropping by some 700 000 tonnes, largely due to above-average domestic crops and, to some extent, the decline in its earnings from oil revenues. Among the Asian countries in financial difficulty, the forecast for imports by Indonesia has been lowered by 400 000 tonnes to 3.8 million tonnes, against 4.2 million tonnes in the previous season. The current forecast includes the recently announced 500 000 tonnes food aid donation by the United States. SOURCE: FAO 1/ Highly tentative The rise in domestic prices, partly resulting from reductions in flour subsidies and the gradual liberalization of the domestic wheat market, is the main reason for the likely reduction in commercial purchases by Indonesia. By contrast, imports by the Republic of Korea, Malaysia and the Philippines could remain largely the same as last year because the drop in international wheat prices and the abundance of low quality wheat is expected to maintain its competitive edge vis-à-vis imports of coarse grains for feed. In Africa, wheat imports are expected to decline by about 1.5 million tonnes to 22 million tonnes. All of this decrease would be on account of reduced requirements in several countries in North Africa due to larger domestic production, particularly in Morocco and Tunisia. However, most countries in Latin America and the Caribbean are likely to import as much as last year, while Brazil, the regions largest wheat importer, is forecast to import over 6 million tonnes, some 500 000 tonnes more than in the previous year. The decline in international prices could encourage larger purchases by Brazil given the continuing strong growth in domestic consumption. In Europe, the forecast 1 million tonnes decline in wheat imports would almost entirely reflect smaller purchases by the EC following this years bumper output and larger availabilities of high quality wheat in the Community. In the CIS, despite this years drastic decline in wheat production, especially in the Russian Federation, imports are expected to rise only by 200 000 tonnes to about 2.7 million tonnes. However, this forecast remains extremely tentative because of uncertainties associated with the impact of the current financial turmoil on the countries ability to import. Turning to exports, the forecast decline in this years trade will weigh heavily on shipments from the major exporting countries, with additional export availabilities from a number of other countries, such as Hungary, Turkey and Syria, also adding to competition for markets. Aggregate wheat exports from the 5 major exporters in 1998/99 (July/June) are forecast to reach 83 million tonnes, against 87 million tonnes in the previous season. The decline would be mostly due to expected reductions in sales from Argentina, Australia and Canada while those from the EC and the United States are forecast to rise. Exports from the Russian Federation and Ukraine to outside the CIS countries are also forecast to fall substantially, mainly as a result of lower domestic output, while foreign sales by Romania are expected to be reduced as production is anticipated to fall below the previous years bumper level. World trade in coarse grains in 1998/99 (July/June) is now forecast at 88.5 million tonnes, some 2.5 million tonnes less than earlier anticipated, but 1 million tonnes above the previous years estimated imports. This months downward revisions mainly concern several countries in Asia. Trade is expected to remain close to the previous years volume for almost all types of coarse grains except for maize and barley, which are likely to increase slightly to 64 million tonnes and 14 million tonnes, respectively, mainly as a result of higher demand from some countries in Latin America. The small rise in total coarse grain imports by the developing countries, to 58 million tonnes, would account for nearly all of the increase in global coarse grain purchases, while those by the developed countries are forecast to remain close to the previous years volume. In Asia, imports are expected to remain unchanged at 53 million tonnes following this months downward adjustments to forecasts for imports by China, Japan and the Islamic Republic of Iran. For Japan, downward adjustments from the earlier prediction are based on the expected slowdown in demand from the feed sector. In Africa, imports by most countries in North Africa are likely to decline because of good crops. However, larger imports are forecast for a number of countries in the southern region, particularly in Lesotho, South Africa, Zambia and Zimbabwe, due to reduced maize crops. In Central America, the likely decline in sorghum crops in Mexico is expected to result in slightly higher imports while in South America the drop in maize production in Brazil and Venezuela is expected to lead to larger purchases by both countries compared to the previous season. Among countries in Europe, the increase of about 500 000 tonnes in aggregate imports would be mainly on account of larger barley purchases by the Czech Republic and larger maize imports by Poland, mainly resulting from poorer crop prospects. Currently the forecast for imports into the CIS points to the same low level as in the previous season, despite a significant reduction in output expected. The anticipated modest rise in world trade of coarse grains is expected to be entirely met by the five major exporters as their combined production is forecast to increase for the fourth consecutive year, resulting in ample exportable supplies. Among other exporters, Hungary and Romania would also have large export surpluses this season, while China, which exported an estimated 7 million tonnes of maize in the previous season and ranked the world's third largest exporter after the United States and Argentina, may reduce its sales to 3 million tonnes, mainly because of smaller carryovers from the previous season. The forecast for global rice trade in 1998 has been adjusted upwards from the last report by 1.7 million tonnes to a record 23.8 million tonnes, which is 4.8 million tonnes more than the estimated 1997 volume and about 3 million tonnes above the previous record in 1995. The upward revision is mainly a result of large imports and/or import commitments to date by several of the major importing countries whose domestic output was severely reduced by adverse weather related to El Niño. The current flood situation in several of the Asian countries is another factor behind the upward revision. The forecast of Indonesias rice imports has been increased by 1.5 million tonnes from the previous report to a record 5 million tonnes, following a bigger fall in the 1998 paddy production than originally anticipated. During the first 6 months of the year, Indonesia is estimated to have imported in excess of 3.2 million tonnes of rice, over three times the total imports estimated for the whole of 1997. Taiwan Province of China is reported to have joined Japan in offering a rice loan of 200 000 tonnes to Indonesia with an option of either paying back in cash or through a barter deal. There are reports that Indonesia and Viet Nam are currently engaged in negotiations for barter deals or deferred payment arrangements for about 400 000 tonnes of rice. The forecast of rice imports by the Philippines has also been adjusted upward by 350 000 tonnes, to 1.55 million tonnes based on contracted volumes to date. However, the final import figure will largely depend on whether the country will be affected by La Niña-related floods which have been predicted for the last quarter of the year. The forecast for Bangladesh has been raised by 500 000 tonnes from the previous report to 1 million tonnes based on shipments to date. Large quantities of rice were imported during the first four months of the year when domestic supplies were tight and prices had risen, a result of lower output from the 1997 Aman crop. In addition, devastating and widespread floods are threatening the current crop. By contrast, the forecast for the Islamic Republic of Iran has been reduced by half from the previous report to 600 000 tonnes due to good production prospects and a slower pace of imports. Also for China (Mainland), the forecast for 1998 imports has been lowered by 100 000 tonnes to 300 000 tonnes based on imports to date and the anticipation that any shotfall this year will be met from stocks. In Brazil, the Government has taken steps to facilitate increased rice imports by lowering the tariffs on brown and milled rice originating from non-MERCOSUR countries from the 1998 rate of 21 percent to 13 percent and 15 percent, respectively. Rice imports in 1998 are forecast to increase by 46 percent from the adjusted 1997 level to 1.2 million tonnes. A higher share of Brazils 1998 rice import requirements will come from non-MERCOSUR sources, including the United States, Thailand and Viet Nam, since Argentina and Uruguay, its traditional suppliers, also experienced production declines. On the export side, the forecast for rice shipments out of Thailand for 1998 has been raised by 400 000 tonnes from previous estimates to 6 million tonnes due to consistently high demand on the international market and a good output from the second-season crop. Exports during the first half of 1998 are estimated at over 3 million tonnes, compared to about 2.3 million tonnes during the same period in 1997. In Viet Nam, rice exports were temporarily suspended in mid-April to ensure domestic food security in the midst of a drought that had affected much of the country. The Government lifted the freeze on new export sales effective July 1, 1998 but reintroduced an export tax of 1 percent on certain grades of rice. However, in mid-August, the Government announced a new temporary ban on fresh commercial export sales again citing food security concerns as the reason behind the decision. Nevertheless, expected export figures have been increased by 200 000 tonnes from the previous forecast to the Government target of 4 million tonnes based on shipments to date. During the first half of the year, Viet Nam shipped close to 3 million tonnes, compared to less than 2 million tonnes during the same period in 1997. The export quota for the period July to September was fixed at 600 000 tonnes. The decision about export volumes for the remainder of the year will be made in September after reviewing the yields from the summer-autumn crop. The forecast for India's exports in 1998 has been increased by 200 000 tonnes from the previous forecast to 2.4 million tonnes based on an upward revision to its 1997 paddy output. Chinas (Mainland) 1998 projected rice exports have also been revised upwards by 700 000 tonnes from the previous report to 2.4 million tonnes based on exports to date and an upward revision to its 1997 production. During the first half of 1998, Chinas shipments amounted to over 1.2 million tonnes compared to 940 000 tonnes during the whole of 1997. Anticipated exports from the Taiwan Province of China have been increased by 150 000 tonnes from earlier expectations to 250 000 tonnes. The bumper harvest in Tanzania is expected to result in exports of about 100 000 tonnes to its neighbours, particularly Uganda and Kenya. For 1999, global rice trade is provisionally forecast to decline from the 1998 projected record by about 10-15 percent as production in 1998 in many of the major importing countries is expected to recover from the lower weather-reduced levels in 1997. Increased production, and therefore lower imports, may materialize particularly in Indonesia, the Philippines and Brazil, three of the leading importers thus far in 1998.
http://www.fao.org/docrep/004/w9687e/w9687e05.htm
CC-MAIN-2015-48
refinedweb
2,429
55.58
Using if else if statement in C Reading time: 30 minutes if else if is a conditional statement that allows a program to execute different code statements based upon a particular value or expression. It is natively supported in C programming language and similarly, in other languages as well. if statement in C The syntax of the if statement in C programming is: if (test expression) { /* statements to be executed if the test expression is true*/ } Flowchart: How if statement works? The if statement evaluates the test expression inside the parenthesis (). - If the test expression is evaluated to true, statements inside the body of if are executed. - If the test expression is evaluated to false, statements inside the body of if are not executed. Example: if statement #include<stdio.h> using namespace std; int main() { int i = 10; if (i > 15){ cout<<"10 is less than 15"; } cout<<"I am Not in if"; } Output I am Not in if if else Statement in C The if statement may have an optional else block. The syntax of the if..else statement is: if (test expression) { /* statements to be executed if the test expression is true */ } else { /* statements to be executed if the test expression is false */ } Flowchart: How if...else statement works? If the test expression is evaluated to true, - statements inside the body of if are executed. - statements inside the body of else are skipped from execution. If the test expression is evaluated to false, - statements inside the body of else are executed - statements inside the body of if are skipped from execution. Example: if...else statement // Check whether an integer is odd or even #include <stdio.h> int main() { int number; printf("Enter an integer: "); scanf("%d", &number); // True if the remainder is 0 if (number%2 == 0) { printf("%d is an even integer.",number); } else { printf("%d is an odd integer.",number); } return 0; } Output Enter an integer: 7 7 is an odd integer. if else if ladder in C if else if statements can check for multiple conditions and can take multiple code paths. There are multiple statements to be checked and in a particular order. Once a statement evaluates to true, the rest of the statements are not checked. If no statement (in if and if else) evaluates to true, then the code statements in the else part is executed. Syntax: if (test expression1) { // statement(s) } else if(test expression2) { // statement(s) } else if (test expression3) { // statement(s) } . . else { // statement(s) } Flowchart: Example: if-else-if ladder // Program to relate two integers using =, > or < symbol #include <stdio.h> int main() { int number1, number2; printf("Enter two integers: "); scanf("%d %d", &number1, &number2); //checks if the two integers are equal. if(number1 == number2) { printf("Result: %d = %d",number1,number2); } //checks if number1 is greater than number2. else if (number1 > number2) { printf("Result: %d > %d", number1, number2); } //checks if both test expressions are false else { printf("Result: %d < %d",number1, number2); } return 0; } Output Enter two integers: 12 23 Result: 12 < 23 Nested if else in C We can place if else statements within if else statements and the nesting can go on to any level. - It is advised to keep nested if else limited by 2 levels as going beyond it can increase the complexity of the code and make it difficult to understand and more prone to runtime errors. Syntax: if (condition1) { // Executes when condition1 is true if (condition2) { // Executes when condition2 is true } } Flowchart: Example 4: Nested if else // C program to illustrate nested-if statement int main() { int i = 10; if (i == 10) { // First if statement if (i < 15) printf("i is smaller than 15"); // Nested - if statement // Will only be executed if statement above // it is true if (i < 12) printf("i is smaller than 12 too"); else printf("i is greater than 15"); } return 0; } Output: i is smaller than 15 i is smaller than 12 too
https://iq.opengenus.org/if-else-if-in-c/
CC-MAIN-2019-47
refinedweb
655
58.32
Do. - R. Need more Help with R for Machine Learning? Take my free 14-day email course and discover how to use R on your project (with sample code). Click to sign-up and also get a free PDF Ebook version of the course. Start Your FREE Mini-Course Now!. Yet it works after installing ellipse packages Nice! Thanks for the post. I tried Google first when I saw the error, interestingly the 5th search result is the link back to this post. 🙂 It works after installing ellipse package. Thanks Jason for this great learning tutorial! Glad to hear it! the most important piece of information missing in the text above: install.packages(“ellipse”) Thanks Rajendra !!! Thanks for the tip!. Perfect remarks. Always follow the instructions of the tutorial. Great tutorial Jason, as usual of course. Thanks. Thanks for highlighting the problem. True, it was hard to find a solution elsewhere on the Internet! Thanks! Your comment saved me!. Yup .. was solved. Please check in discussion. 1) You have to install ‘ellipse” package. which is missing install.packages(“ellipse”) 2) If you change plot=pairs, you can see output. If you want, ellipse, please install ellipse package.. Hi, I have installed the “caret” package. But after this when i am loading through library(caret), I am getting the below error: Error: package or namespace load failed for ‘ggplot2’ in loadNamespace(j <- i[[1L]], c(lib.loc, .libPaths()), versionCheck = vI[[j]]): there is no package called ‘munsell’ Error: package ‘ggplot2’ could not be loaded I’m sorry, I have not seen this error. Perhaps check on stackoverflow if anyone has had this fault or consider posting the error there. Hi Jason, Post some R&D was able to resolve it. Below are the actions i did. install.packages(“lattice”) install.packages(“ggplot2”) install.packages(“munsell”) install.packages(“ModelMetrics”) library(lattice) library(munsell) library(ggplot2) library(caret) Nice work, glad to hear you figured it out. Hi Jason, Need one help again. Thanks in advance. Since this is my first Data Science Project, so the question. What and how to interpret from the result of BoxPlot. It will be of help if you can kindly explain a bit of the outcome of the BoxPlot. The box plot shows the middle of the data. The box is the 25th to 75th percentile with a line showing the 50th percentile (median). It is a fast way to get an idea of the spread of the data. More here: Hello Dr Brownlee, I am new to machine learning and attempting to go through your tutorial. I keep getting an error saying that the accuracy matrix values are missing for this line: results <- resamples(list(lda=fit.lda, cart=fit.cart, knn=fit.knn, svm=fit.svm, rf=fit)) The accuracy matrix for lad works however cart, knn, svn and rf do not work. Do you have any suggestions for how to fix this? Thanks I’m sorry to hear that. Confirm your packages are up to date. sir, how could i plot this confusionMatrix “confusionMatrix(predictions, validation$Species)”? Looks good. > predictions confusionMatrix(predictions, validation$Species) Error in confusionMatrix(predictions, validation$Species) : object ‘predictions’ not found Could anyone clarify this error ? predictions confusionMatrix(predictions, validation$Species) Error in confusionMatrix(predictions, validation$Species) : object ‘predictions’ not found Could anyone clarify this error ?Earlier I posted something wrong Perhaps double check that you have all of the code from the post? Hi, I am beginner in this so may be the question I am going to ask wont make sense but I would request you to please answer: So when we say lets predict something, what exactly we are predicting here ? In case of a machine (motor, pump etc) data(current, RPM, vibration) what is that can be predicted ? Regards, Saurabh In this tutorial, given the measurements of iris flowers, we use a model to predict the species. set.seed(7) > fit.lda <- train(Species~., data = data, method = "lda", metric = metric, trControl = control) The error i got, and also tried to install mass package but it not getting installed properly and showing the error again and again please help me sir. ERROR:- Error in unloadNamespace(package) : namespace ‘MASS’ is imported by ‘lme4’, ‘pbkrtest’, ‘car’ so cannot be unloaded Failed with error: ‘Package ‘MASS’ version 7.3.45 cannot be unloaded’ Error in unloadNamespace(package) : namespace ‘MASS’ is imported by ‘lme4’, ‘pbkrtest’, ‘car’ so cannot be unloaded Error in library(p, character.only = TRUE) : Package ‘MASS’ version 7.3.45 cannot be unloaded I’m sorry to hear that. Perhaps try installing the MASS package by itself in a new session? Hello Jason, My question is regarding scaling. For some algorithms like adaboost/xgboost it is recommended to scale all the data. My question is how do I unscale the final predictions. I used the scale() function in R. The unscale() function expects the center(which could be mean/median) value of the predicted values. But my predicted values are already scaled. How can I unscale them to the appropriate predicted values. I am referring to prediction on unlabeled data set. I have searched for this in many websites but have not found any answer. Perhaps scale the data yourself, and use the coefficients min/max or mean/stdev to invert the scaling? I am getting an error while summarize the accuracy of models, Error in summary(results) : object ‘results’ not found You may have missed some code? > library(tidyverse) Sir while adding this library in R, I have installed the package then also it is showing following the error: please help me Error in loadNamespace(j <- i[[1L]], c(lib.loc, .libPaths()), versionCheck = vI[[j]]) : there is no package called ‘bindrcpp’ Error: package or namespace load failed for ‘tidyverse’ Sorry, I am not familiar with that package or the error. Perhaps try posting on stackoverflow? Dear Jason, I am not familiar with R tool. When I started reading this tutorial, I thought of installing R. After the installation when I typed the Rcommand, I got the following error message. Please give me the suggestion… > install.packages(“caret”) Installing package into ‘C:/Users/Ratna/Documents/R/win-library/3.4’ (as ‘lib’ is unspecified) — Please select a CRAN mirror for use in this session — trying URL ‘’ Content type ‘application/zip’ length 5097236 bytes (4.9 MB) downloaded 4.9 MB package ‘caret’ successfully unpacked and MD5 sums checked The downloaded binary packages are in C:\Users\Ratna\AppData\Local\Temp\RtmpQLxeTE\downloaded_packages > Great work! Hi Jasson, I tried the following but got the error, > library(caret) Error: package or namespace load failed for ‘caret’ in loadNamespace(i, c(lib.loc, .libPaths()), versionCheck = vI[[i]]): there is no package called ‘kernlab’ > It looks like you might need to install the “kernlab” package. Thanks Jasson!!!! You’re welcome. e1071 error i have installed all packages….. What error did you get? Hi All, When I created the updated ‘dataset’ in step 2.3 with the 120 observations, the dataset for some reason created 24 N/A values leaving only 96 actual observations. Copy and pasted the code from the post above. Any idea what caused or how to fix so that the ‘dataset’ is inclusive of all the training data observations? Doesn’t seem to be anything wrong with the IRIS dataset or either of the validation_index or validation datasets. Perhaps double check you have the most recent version of R? Update to OP, I reran the original commands from that section and was able to pull in all 120 observations for the training data. Not sure why it didn’t fetch all the data the first time but looks ok now. Glad to hear it. Just confirming, the above tutorial is a multiclass problem? Therefore, I should be able to apply the above methodology to a different k=3 problem. Is this correct? Yes. Jason, For my first Machine Learning Project, this was EXTREMELY helpful and I thank you for the tutorial. I had no problems going through the script and even applied to a dummy dataset and it worked great. So thank you. My question is more related to automation. Instead of manually assessing the accuracy of each model to determine which one to use for prediction, is there a way to automatically call the model with the highest accuracy in the “predictions <- predict([best model], validation)" script. Hope to hear from you soon. Well done! Great question. Generally, once we find the best performing model, we can train a final model that we save/load and use to make predictions on new data. This post will show you how: And this post covers the philosophy of the approach: I did not get 100% Accuracy after following the tutorial example. I got : Confusion Matrix and Statistics Reference Prediction Iris-setosa Iris-versicolor Iris-virginica Iris-setosa 10 0 0 Iris-versicolor 0 8 0 Iris-virginica 0 2 10 Overall Statistics Accuracy : 0.9333 95% CI : (0.7793, 0.9918) No Information Rate : 0.3333 P-Value [Acc > NIR] : 8.747e-12 Kappa : 0.9 Mcnemar’s Test P-Value : NA Statistics by Class: Class: Iris-setosa Class: Iris-versicolor Class: Iris-virginica Sensitivity 1.0000 0.8000 1.0000 Specificity 1.0000 1.0000 0.9000 Pos Pred Value 1.0000 1.0000 0.8333 Neg Pred Value 1.0000 0.9091 1.0000 Prevalence 0.3333 0.3333 0.3333 Detection Rate 0.3333 0.2667 0.3333 Detection Prevalence 0.3333 0.2667 0.4000 Balanced Accuracy 1.0000 0.9000 0.9500 > Perhaps try running the example multiple times? sir, i want to learn r programing at vedio based tutorial which is the best tutorial to learn r programming quickly Sorry, I don’t have good advice on how to learn R, I focus on teaching how to learn machine learning for R. For learning R I strongly recommend the Coursera.org “R Programming” certification course, When I took it it was free, now is paid, something around USD 50. Thanks for the tip. Json, nice article. I left working code with minor fixes in this repo, please comment on, thanks, Carlos Thanks for sharing. what if the dataset is used EuStockMarkets, I error continue Sorry, I don’t know about that dataset. successfully done, and got the result.Thanks for the great tutorial. But now i wonder, what to do further, how to use it in a generic manner for any dataset. How to use the created pred.model anywhere. Yes, you can use this process on other datasets. ohk, but to use any dataset we need to make the dataset similar to that of the iris dataset, like 4 numberic columns, and one class. Also, accuracy output is similar over the traning dataset , and the validation dataset, but how does that help me to predict now what type of flower would be next if i provide it the similar parameters. Now, for example i have to create a model which predicts the cpu utilization of the servers in my Vcenter or complete DC, how can i create a model which will take my continious dataset and predict that when the CPU utilization will go high and i can take proactive measures. This process will help you work through your predictive modeling problem systematically: Hello Jason, Thanks for the clear and set by step instructions. But I just want to understand what I need to do after creating the model and calculating its accuracy ? Can you please explain to draw some conclusions/predictions on the iris data set we used ? You can finalize and start using it. See the whole process here: Hi Sir, For the confusionMatrix(predictions, validation$Species) command , I am getting an output as follows: [,1] [,2] [1,] 0 0 [2,] 0 10 I am not getting the same output as you got. Any suggestions on what I may be doing wrong.?The code worked exactly till this command. Perhaps double check that you copied all of the code exactly? And that your Python environment and libraries are up to date? Hello good day Jason. Thank you very much for the tutorial I have been very useful but I have a question, in the section of “print (fit.lda)” does not deploy “Accuracy SD Kappa SD”. What remains of the tutorial if you have given me exact, could you help me with this doubt ?. Greetings. The API may have changed slightly since I wrote the post nearly 2 years ago. Great article for a beginner like me Jason! Appreciate your work in sharing your knowledge and educating. Is there a model fit for ‘multinomial logistic regression’ algorithm? Thank you! There is, but I would not recommend it. Try LDA instead. Upon further reading of other articles written by you, I realize that I may not need to use ‘Regression’. My dataset has category variables as input and category attributes as output as well (having 7 levels). So, it is a classification problem and I’m assuming I can use one of the 5 models/fit you have given as examples here in this Iris project. Can you let me know if this is correct understanding? – Thank you This post may help clear you the difference between classification and regression: It works for me with the iris data. Thanks a lot Jason! But there are no “Accuracy SD Kappa SD ” from the output of the fit models. Should I change some settings to get them? I believe the API may have changed. Dear Jason Brownlee I have a dataset with 36 predictors and one for classes (“1”, “2”, “3”) that I got it through clustering in the previous step. My question is: how can I reduce all my predictors into five variables representing specific dimensions in my study? Should I run PCA separately to produce a new dataset with 5 predictors and one for classes or is there any other ways? Thank you in advance. Yes, you would run dimensionality reduction first to create a new input dataset with the same number of rows. Hi Jason, I am getting the error – Error: could not find function “trainControl” on typing tc<-trainControl(method="cv",number=10). What can be the solution for this? Perhaps caret is not installed or caret is not loaded? Maybe a very stupid question. But I read “Build 5 different models to predict species from flower measurements”. So now I am wondering what the predictions of the model tell me about this, how I can use it. For example I now go to the forest take some measurements, assume that the flower is one of those tested, and want to know which flower it is exactly. In a traditional regression formula it is straightforward as you can put in your measurements in the formula and the calculated estimates and get an outcome. But I don’t know how to use the outcomes in this case. Great question. Once we choose a model and config, we develop a final model trained on all data and save it. I write about this here: You can then load the model feed in an input (e.g. a set of measures) and use it to make predictions for those measures. Does that help? Your Tutorial is just awesome . Thanks its really helpful Thanks, I’m glad to hear that. This tutorial really helpful. Thanks Jason. Thanks, I’m glad to hear that. Hi Json how are ? I am new in machine learning. i want to invent a unique idea and prof about islami banking and conventional banking. how can i do that. if any suggestion please give me and i cant fund any islami banking data set like loan info or deposit bla bla bla. i want your valuable information Sorry, I don’t know about banking. Dear Sir, I am getting the following error Error in [.data.frame(out, , c(x$perfNames, “Resample”)) : undefined columns selected when i execute results <- resamples(list(lda=fit.lda,nb=fit.nb, cart=fit.cart, knn=fit.knn, svm=fit.svm, rf=fit.rf)) What can be the solution for this? Did you copy all of the code from the tutorial? Hi Jason, First of all great work. May God bless you for all your sincere efforts in sharing the knowledge. You are making a big difference to the lives of people. Thank you for that. I have a basic question. Now we have a best fit model – how to use it in day to day usage – is there a way I can measure the dimensions of a flower and “apply” them in some kind of equation which will give the predicted flower name? How to use the results? Kindly advise when you are free. Hi Jason – found another of your post: Thank you. Hi Jason – the post was good in telling what to do. However the how part is still missing. Hence still need help. Thank you. You can make predictions as follows: yhat = predict(Xnew) Where Xnew are new measurements of flowers. Thanks. Also see this post: Dear Jason, Thank you very much for your response. Yes – I was about to post that this link was indeed helpful in operationalizing the results. Thank you very much. Please keep up the great work. Hussain. Glad to hear it. Hi Jason, Thank you for sharing your methods and codes. It was very useful and easy to follow. Could you please share how to score a new dataset using one of the models? For example, in my training, random forest has the best accuracy. Now I want to apply that model on a new dataset that doesn’t have the outcome variables, and make prediction. Thank you Great question, I answer it in this post: Thanks Jason. I read through the link. I already finalized my model, now I need save the model and apply it for operational use. The dataset that I want to score doesn’t have the outcome variable. I am not sure which command I should use to make prediction after I have the final model. Can you suggest R codes to do so? You can use the predict() function to make a prediction with your finalized model. Hi Jason! Amazing post! I have the same doubt @TNguyen did. I Finalized the model and we know that LDA is the best model to apply in this case. How I predict the outcome variables (species) in a new dataframe without this variable? IN summary, how I deploy the model on a new dataset? Sorry, I´m new in this field and I´m learning new things all the time! Good question, I have an answer here that might help: Here is a tutorial for finalizing a model in R: Hey, Thanks for the great tutorial. I have a problem and don’t know what’s wrong in the section 3.1 Dimensions. When I execute dim(datset) I get the answer NULL. Do you know why R Studio doesn’t show me the dimensions of the “dataset”? Best regards Martin Perhaps confirm that you loaded the data? Very nice, Its given overall structure to write the ML in R. Thanks! Hey, I am working on the package called polisci and I am asked to build a multiple linear regression modal. My dependent variable is human development index and my independent variable is economic freedom. Could ou please tell me how can I perform multiple linear regression modal. How do I go about in steps and what is the syntax in R to get to the results and get a graph? Any help would be greatly appreciated. Please help me as I am an undergrad student and I am learning this for the first time Thanks in advance Sorry, I don’t have examples of time series forecasting in R. Here are some resources that you can use: Thanks Jason, Was able to execute the program in one go.. Excellent description Well done! Jason, Thank you very much for you above work. Its Ohsomesss, I am new to data science and want to make my carrier. I found so useful this superb…… You’re welcome, I’m glad it helped. Please suggest me a path to become data scientist step by step, and how to become champion in R and python ?? Sure, start right here: Thanks, Jason! This is a very helpful post. I did exactly as suggested, but when i print(fir.lda), I do not have the accuracy SD or kappa SD. How should I get them? Thanks Perhaps the API has changed. Amazing tutorial! I just need to install 2 packages: e1071 and Ellipse After that, i wrote every single line, and i really appreciate the big effoct you done to explain so clear!!! Thank you I’m glad it helped. Thanks for the great tutorial. I have a problem and don’t know what’s wrong in the section 6. Make predictions . When I execute predictions <- predict(fit.lda, validation) confusionMatrix(predictions, validation$Species) I get the error "error data and reference should be factors with the same levels."like this Do you know why R Studio doesn’t show me the Make predictions of the “dataset”? Perhaps try running the script from the command line? Please check above link ^ When I try to do the featurePlots I get NULL. I installed the ellipse package without error. featurePlot(x=x, y=y, plot=”ellipse”) NULL > # box and whisker plots for each attribute > featurePlot(x=x, y=y, plot=”box”) NULL > # density plots for each attribute by class value > scales featurePlot(x=x, y=y, plot=”density”, scales=scales) NULL everything up to this point worked fine I’m sorry to hear that. Perhaps there is another package that you must install? I was also getting same error. You would like to check below link for the solution: Thanks for sharing. Great tutorial Jason! Inspired me to look up and a learn a bit more about LDA and KNN etc. which is a bonus! Great self-learning experience. I have experience with analytics but am a relative R newbie but I could understand and follow with some googling about the underlying methods and R functions.. so, thanks! One thing… the final results comparison in Section 5.3 are different in my case and are different each time I run through it. Reason is likely that in Step 2.3 there is no set.seed() prior. So, when you create the validation dataset which is internally a random sample in createDataPartition().. results are different in the end? Thanks. Well done. Thanks. Yes, some minor differences should be expected. Jason Brownlee you the real MVP! hanks, I’m glad the tutorial helped. Hello this is very helpful, but i don’t get how i should read the Scatterplot Matrix Each plot compares one variable to another. It can help you get an idea of any obvious relationships between variables. I have problem in this…. #,] 1 2 3 4 5 6 #,] Please help me out What is the problem exactly? it can’t findout the objects….and function also..! what can i do? What objects? Jason, you’re indeed a MVP! Ran this in R 3.5. 1. install.packages(“caret”, dependencies = c(“Depends”, “Suggests”)) ran for almost an hour. May be connectivity to mirrors. 2. install.packages(“randomForest”) & library(“randomForest”) needed Would definitely recommend this to all ML aspirants as a “hello world!” Hearty Thanks! Well done! Thanks for the tips. First I’d like to say THANK YOU for making this available! It has given me the courage to pursue other ML endeavors. The only issue I have is that when summarizing the results of the LDA model using the print(fit.lda), my results do not show standard deviation. Do you know if this is due to a setting in R that needs to be changed? Any help is appreciated! Best, Giovanni Yes, I believe the API changed since I wrote the tutorial. Hi! First of all great tutorial, I followed and achieved the expected results Really helped me overcome ML jitters. Very very grateful to you. But I really wanted to know the mathematical side of these algorithms, what do these do and how? Also, it would be wonderful if you could explain things like “relaxation=free” (What does this mean?) That do not have a straight answer on Google Thanks Regards Thanks for the feedback Shane. We focus on the applied side of ML here. For the math, I recommend an academic textbook. Very nice tutorial. The caret package is a great invent. where can I find a rapid theory of the methods to understand it better? Right here: Thanks, Brownlee. You’re welcome.
https://machinelearningmastery.com/machine-learning-in-r-step-by-step/
CC-MAIN-2018-34
refinedweb
4,133
67.25
Red Hat Bugzilla – Bug 1306024 Restricting project counts and names Last modified: 2017-03-08 13:14 EST Description of problem: I am in the need of restricting the project count and the name a user can have. I have been provided with: and as pointers by rjhowe@redhat.com. Based on the information he mentioned that this in upstream with possible 3.2 release. In our environment (education space), it is a pretty important thing to have as resources are a bit more finite and project count limits in conjunction with resource quota and limits allows us to maximize what we offer our customers for free. The second part of the bug report neither myself nor Ryan have been able to find much info on. In v2 currently, we are able to restrict the namespace by doing a couple of simple code modifications which allow us to restrict the project name to the user's username (as an example), thus providing a simple audit trail for troubleshooting/etc. Will there be any chance of getting this functionality added, as modifying v3 is a bit harder than it is with v2 :)? This is related: Will release with 3.2. This will allow you to configure the system so only N projects can be created per user. So I am guessing that my second request should be filed as an RFE? Also, there is one last thing that we are currently able to do in v2 with a few lines of code changed. The public hostname of a route. Right now we can only limit to say *.ose.devapps.unc.edu in v3, but is there some mechanism to insert say their username for their project ex: User: boris hostnames available: *-boris.ose.devapps.unc.edu I've created a Trello card for the second part here: Boris there isn't a current mechanism that I know of to default the route name. Sounds like another RFE. This is similar to: Since the feature "project request limitation" will be released in 3.2, so verify this.
https://bugzilla.redhat.com/show_bug.cgi?id=1306024
CC-MAIN-2017-34
refinedweb
345
70.02
The boolean data type has only two valid values: true and false. These two values are called boolean literals. We can use boolean literals as boolean done; // Declares a boolean variable named done done = true; // Assigns true to done A boolean variable cannot be cast to any other data type and vice versa. boolean is the type returned by all relational operators, as in the case of a < b. boolean is the type required by the conditional expressions that govern the control statements such as if and for. Here is a program that demonstrates the boolean type: public class Main { public static void main(String args[]) { boolean b;/* w w w . java 2s .co m*/ b = false; System.out.println("b is " + b); b = true; System.out.println("b is " + b); b = false; if (b) System.out.println("This is not executed."); // outcome of a relational operator is a boolean value System.out.println("10 > 9 is " + (10 > 9)); } } The code above generates the following result.
http://www.java2s.com/Tutorials/Java/Java_Data_Type/0080__Java_boolean_Data_Type.htm
CC-MAIN-2018-13
refinedweb
166
66.94
By: Charlie Calvert Abstract: In the first part of this two part article you will learn how to use interfaces to define a contract between two classes that works at a high level of abstraction and promotes reuse. The extensive use of interfaces is one of the most powerful features of the Java Development Kit. Java takes full advantage of the power of interfaces and uses them to provide standards that help us build easily reusable code. In this two part article I am going to look first at interfaces in general, and then at one narrow case in which Java uses interfaces and events to help you provide a means for loosely coupling objects. This latter technology promotes a simple to use, "plug and play," type of object reuse. The term "loose coupling" probably cannot be defined in a definitive manner. It is popularly used to describe the way web services allow clients and servers to be created in entirely separate development processes. However, I am not going to use the word in that context. Instead, I am going to show how interfaces can provide a high degree of autonomy for individual objects used inside a single application. The objects I will explore will have very few dependencies on other objects. In this sense, they are "loosely" coupled to the other objects in their program. Because the objects are so autonomous, they will be easy to maintain and easy to reuse. As this article will show, combining interfaces with events can provide a simple, easy means to allow developers to promote reuse. The ultimate goal is to allow the creation of objects that can be used by multiple client objects in much the same way that a web service can be used by multiple clients. These loosely coupled objects will have few direct dependencies binding them together. Furthermore, the dependencies that they do have should be defined by clear standards that can be easily replicated by other clients that wish to consume these objects. The establishment by an object of a clear standard, of a well defined contract, makes that object easily reusable. Someone on the Java development team who understood interfaces decided to make them a big part of the Java SDK. There are hundreds of examples in the J2SE SDK of the correct way to use interfaces. This article is going to focus on only a few of them. Here are two key benefits derived from using interfaces: An interface provides a means of setting a standard. It defines a contract that promotes reuse. If an object implements an interface then that object is promising. The next few section of the text will tackle each of these benefits in turn. Contracts are important because they promote reuse. In the west, most of us have a contract that when we meet one another in formal situations we will shake hands as a way of greeting. Having this contract simplifies the act of meeting someone. In the same way, we have a contract that states that saying goodbye in a telephone conversation means that the conversation is over. If that convention, if that standard, if that contract, did not exist then phone conversations would be more difficult. The exact same purpose is served by the contract established by an interface: It provides a standard way of handling a particular task. A interface provides a good way of establishing the convention that two objects should live by when they form a connection. You can declare methods in an interface, but you cannot use the interface to implement those methods. Instead, you use a class to implement the methods found in one or more interfaces. Consider the following simple interface: public interface Runnable { public abstract void run(); } This class provides a declaration for a method called run. It does not, and cannot, provide an implementation for that method. This particular interface states that any class that implements Runnable will contain a method called run that is declared to be public and void. Here is a class that implements the Runnable interface: public class MyClass implements Runnable { public void run() { System.out.println("I implement the runnable interface"); } } In saying that MyClass implements the Runnable interface, we are saying that it is guaranteed to conform to a particular standard. That standard states that any class which implements Runnable must contain a method called run which is declared to be both public and void. It should now be clear to you how you can use an interface to define a contract between an object and its consumer. In particular, the interface promises that a particular class will contain certain methods. The interface is a contract between that class and the class that uses it. The contract states that the implementing class contains certain methods with certain signatures. It is the basis on which a relationship can be established between an object and its consumer. By now you might be getting the sense that interfaces aren't really as complicated as they may have seemed at first. In fact, there are few concepts in programming that are much simpler than interfaces. There is no mystery here at all – at least not on the syntactical level. As the Runnable – MyClass example shows, the syntax for using an interface in Java is very simple. Interfaces represent a fairly high level of abstraction. If we talk about a class that implements Runnable, then we are not talking about a specific class. We are talking about a group of classes that implements a particular behavior. This abstraction can be captured in a UML diagram. In particular, Figure 1 shows the relationship between the Runnable interface and MyClass. Figure 1: The dotted line ending in a closed triangle is the UML way of saying that MyClass implements that Runnable interface. Any class that implements an interface can be captured in diagram similar to the one shown in Figure 1. All one needs to do is draw a dotted line ending in a closed triangle between the implementing class and the interface that it implements. Why is this useful? Why do we care? What is it about this diagram that adds any value to our program? A UML diagram is easy to understand. In complex programs, with dozens or even hundreds of classes and interfaces, it can be very hard to see the relationship between the classes you have created. As they say, a picture is worth a thousand words. It is easier to look at a picture and see the relationship between objects than it is to study many source files and try to see the relationship between the chunks of code found in each file. Interfaces are all about making your life as a programmer easier. Just as the convention of shaking hands makes it easier to meet someone new, so does the existence of an interface make it easier to allow two objects to begin speaking to one another. UML diagrams make it easier for you to see the relationship between classes. UML diagrams capture one way in which interfaces can help programmers deal with complex behavior through relatively easy to understand abstractions. There is, however, another sense in which abstractions can be captured by interfaces. In the J2SE SDK, there are a set of classes all of which implement the List interface. Examples of these classes include Vector, ArrayList and LinkedList. Consider the following simple class: import java.util.Vector; import java.util.ArrayList; import java.util.LinkedList; public class Untitled1 { ArrayList arrayList = new ArrayList(); Vector vector = new Vector(); LinkedList linkedList = new LinkedList(); } The Untitled1 class creates instances of three classes which implement the List interface. Figure 2 shows what class Untitled1 looks like in a UML diagram. Figure 3, 4 and 5 show that ArrayList, Vector and LinkedList all implement the List interface. Figure 6 shows a UML view of the List interface. Figure 2: In this UML dialog a solid line ending in an open arrow shows that Untitled1 contains instances of the ArrayList, LinkedList and Vector classes. Compare with the dotted line and closed arrow symbol shown in Figure 3. Figure 3. ArrayList implements the List interface. Figure 4. LinkedList implements the List interface. Figure 5. Vector implements the List interface. Figure 6. The List interface as seen in a UML diagram. The plus signs represent public methods. Compare with Figure 2, where JBuilder standard icons from the Structure Pane are displayed. JBuilder allows displaying UML in either mode. The ArrayList, LinkedList and Vector classes all conform to the contract established by the List interface. Unlike the Runnable interface, the List interface declares multiple methods. By conforming to this contract, the ArrayList, LinkedList and Vector classes all promise to implement the methods of the List interface such as add(), get(), indexOf() and isEmpty(). In other words, these classes all display the behavior associated with the List interface. Is this sense, they all belong to the same family. Most experienced drivers can pilot any reasonably sized car. They can do this because the interface for a car is the same in most vehicles, whether that car is a Honda, a Ford or a BMW. In the same way, most developers who know the List interface can use the ArrayList, LinkedList and Vector classes. Just as all cars have a steering wheel and a transmission, so do all classes that implement the List interface have methods such as add(), get() and indexOf(). In this sense, all these classes behave in the same manner. To illustrate this point in a practical example, let's extend MyClass to support the List interface. An example of how to do this is shown in Listing 1. Listing 1: A class that uses the List interface. package untitled10; import java.util.List; import java.util.Iterator; public class MyClass implements Runnable { List myList = null; public MyClass(List myList) { this.myList = myList; } private String isListEmpty(List myList) { if (myList.isEmpty()) return "True"; else return "False"; } private void show(List myList) { Iterator itr = myList.iterator(); while (itr.hasNext()) { System.out.println((String)itr.next()); } } public void run() { isListEmpty(myList); myList.add("Sam"); myList.add("Mary"); myList.add("Tom"); myList.add("Sue"); Object item = myList.get(1); System.out.println("Retrieved Item: " + item); System.out.println("Index of retrieved item: " + myList.indexOf(item)); System.out.println("The list before removing an item called " + item + ":"); show(myList); myList.remove(item); System.out.println("The list after removing an item called " + item); show(myList); } } Notice that the constructor for MyClass now takes an object that supports the List interface: List myList = null; public MyClass(List myList) { this.myList = myList; } Notice furthermore that the other code in MyClass exercises the List interface in various ways. The output from this class might look like this: Retrieved Item: Mary Index of retrieved item: 1 The list before removing an item called Mary: Sam Mary Tom Sue The list after removing an item called Mary Sam Tom Sue Any instance of the Vector, ArrayList, or LinkedList class can now be passed into MyClass. For instance, the following code is perfectly legal. ArrayList list = new ArrayList(); Thread thread = new Thread(new MyClass(list)); thread.start(); So is the code shown in this example: Vector list = new Vector(); Thread thread = new Thread(new MyClass(list)); thread.start(); And so is this code: LinkedList list = new LinkedList(); Thread thread = new Thread(new MyClass(list)); thread.start(); As you can see, one class, called MyClass, is able to consume three entirely different classes called Vector, ArrayList and LinkedList. This is a valuable form of reuse. It is made possible by the fact that the Vector, ArrayList and LinkedList classes all support the List interface. Suppose you create a standard JBuilder application that supports a class called Frame1 which is a descendant of JFrame. Suppose that Frame1 contains a method for handling button clicks that looks like this: class Frame1 extends JFrame { ... Code omitted here void jButton1_actionPerformed(ActionEvent e) { LinkedList list = new LinkedList(); Thread thread = new Thread(new MyClass(list)); thread.start(); } } Notice that the relationship between Frame1, LinkedList and MyClass is very abstract. All Frame1 needs to know about MyClass is that it supports the Runnable interface, and can thus be placed in a thread. And all that MyClass needs to know about Frame1 is that it knows how to create a thread. MyClass does not even need to know what type of class it is being passed in its constructor: new MyClass(list). All MyClass needs to know is that the class it is being passed supports the List interface. All these classes are linked together with a very high degree of abstraction. The knowledge they have about one another is on a strictly need to know basis. They don't know any more about each other than is absolutely necessary for them to interact. In short, they are relatively loosely coupled. In this article you have learned how interfaces can be used to specify a contract between two classes. This interface provides a standard means for one class to consume another class. You also learned that an interface provides a high level of abstraction that allows you to easily define certain well know patterns of behavior. Examples of these patterns were illustrated by the Runnable and List interfaces. This article also illustrated that objects which conform to a particular interface can support a form of "loose coupling." This highly abstracted relationship between two classes supports an admirable degree of reuse. This is the end of the first part of this article. In the second part you will learn how to use a particular interface called ActionListener to define a way for one class to consume another class. This relationship will be so loosely coupled that almost any class can quickly learn to consume any object that conforms to the contract defined by this interface. Server Response from: SC3
http://edn.embarcadero.com/article/30372
crawl-002
refinedweb
2,317
63.9
LOTTOmania 2005 1.1.7 Sponsored Links You might be interested in: Download location for LOTTOmania 2005 1.1.7 Let the computer to pick your lotto numbers....read more NOTE: You are now downloading LOTTOmania 2005 1.1.7. This trial download is provided to you free of charge. Please purchase it to get the full version of this software. Select a download mirror, or Buy full version LOTTOmania 2005 1.1.7 description LOTTOmania 2005 uses advanced statistical analysis to select the most popular winning patterns consisted of active , average and passive numbers. It works with almost all lotto-type lotteries that draw 4-8 numbers out of a number pool from 1 to 99....read more LOTTOmania 2005 1.1.7 Screenshot LOTTOmania 2005 1.1.7 Keywords Bookmark LOTTOmania 2005 1.1.7 LOTTOmania 2005 1.1.7 Copyright WareSeeker.com do not provide cracks, serial numbers etc for LOTTOmania 2005 1.1.7. Any sharing links from rapidshare.com, yousendit.com or megaupload.com are also prohibited. Featured Software Want to place your software product here? Please contact us for consideration. Contact WareSeeker.com Related Software LottoMania 2000 is a lottery software to support the analysis of numbers for Lotto games based on drawing of 5,6 numbers and Keno. It helps the players of European, American, Canadian and other worldw Free Download Power and ease of use software for lotto players. Free Download Keep track of scores when playing darts.You can input the score when playing darts at home or in a pub by clicking on a virtual dartboard, type in the score into the score-inputbar or speak the score with the implemented voice-recognition software.Dartscore is implemented with great sound effects and graphics. The program has great statistics. All data is saved into a database for history and statistics. Scorekeeping has never been this easy. Free Download FASTCUBE is a tool for effective data analysis. Free Download OraDeveloper Tools is a set of IDE add-ins designed to automate and simplify the process of developing applications with Oracle from Visual Studio or ... Free Download This Add-In to Microsoft Visual Studio.NET 2005 significantly enhances source code printing capabilities. Print a complete Solution, selected projects, project items, namespaces, classes, modules. The output can be exported to PDF and RTF. 2005. - Naruto Volume 1 (1-7) 火影忍者 第1卷 - Joe Galaxy® .NET 2005.b - Joe Galaxy®.NET 2005 2005 - VB.NET 2005 Beta Free Training - The NET Framework Architecture Part 2 - VB.NET 2005 Tutorials: Using the Data Form Wizard - VB.NET 2005 Tutorials: Complex Data Binding - VB.NET 2005 Tutorials: Creating Web Service Project - VB.NET 2005 Tutorials: Validation Favourite Software - Sales Master PRO 2005 2005 - LMD-Tools Special Edition (Delphi 2005) 10.0 - Delphi 2005 - VB.NET 2005 Tutorials: Editing Data With ADO .NET - VB.NET 2005 Tutorials: Simple Data Binding - VB.NET 2005 Tutorials: Finding and Sorting Data in DataSets - VB.NET 2005 Tutorials: DomainUpDown and NumericUpDown - VB.NET 2005 Tutorials: Using XML Data
http://wareseeker.com/download/lottomania-2005-1.1.7.rar/60235648
CC-MAIN-2014-41
refinedweb
505
53.37
The attrubite is applicable to functions and variables and changes the linkage of the subject to internal. Following the proposal in The attrubite is applicable to functions and variables and changes the linkage of the subject to internal. Following the proposal in This new version supports attribute((internal_linkage)) on classes and even namespaces! No diagnostic is issued for the following C test case: int x __attribute__((internal_linkage)); int x __attribute__((common)); int *f() { return &x; } Added a [[clang::internal_linkage]] spelling to the attribute. Added tests for namespace re-declarations with and without the attribute. I would like to hold off on adding the namespace attribute. There were persuasive reasons to not have attributes on namespaces that was discussed in EWG in Kona, and this is a feature we could add on later if there's sufficient design consensus. I would like to see one more test, just to make sure that a Var subject doesn't also allow it on a parameter: void f(int a [[clang::internal_linkage]]); Aside from that, LGTM! Hm, the current implementation allows all of the following: void f(int a [[clang::internal_linkage]]) { // 1 int b [[clang::internal_linkage]]; // 2 static int c [[clang::internal_linkage]]; // 3 } I'll fix (1). Is it OK to allow (2) and (3)? The attribute has no effect because the declarations already have internal linkage, so I'd say it behaves as documented. This is an interesting test case, though: inline int foo() { static int __attribute__((internal_linkage)) x; return x++; } If foo gets inlined, those call sites will use and update 'x'. If foo is not inlined, one definition of foo will win, and every caller will use its version of 'x'. We could emit a warning, but I kind of don't care. If you're using internal_linkage, you are operating outside the rules of C++. You're expecting multiple copies of these things to be emitted.
https://reviews.llvm.org/D13925
CC-MAIN-2019-30
refinedweb
316
61.06
61065/extract-individual-zscores-values-sample-extract-function Hi @Mike. First, read both the csv ...READ MORE Hi, it is pretty simple, to be ...READ MORE You can find the explanation and implementation ...READ MORE can you give an example using a ...READ MORE Hey @Akki, you can use the numpy ...READ MORE This should work: from bs4 import BeautifulSoup html_doc='''<tr id="tr_m_1570:240HJY" ...READ MORE The optimal CutOff value is the point ...READ MORE Hey, Web scraping is a technique to automatically ...READ MORE Hey, @Subi, Regarding your query, you can go ...READ MORE Hello @kartik, Set show_change_link to True (False by default) in your inline ...READ MORE OR At least 1 upper-case and 1 lower-case letter Minimum 8 characters and Maximum 50 characters Already have an account? Sign in.
https://www.edureka.co/community/61065/extract-individual-zscores-values-sample-extract-function
CC-MAIN-2021-39
refinedweb
134
69.58
compiler, optimize, for loop, while I would like to suggest the compiler optimize the common case of for loops, that is, for (var <- Range [by step]) for (var <- int to int [by step]) for (var <- int until int [by step]) def matMulUsingIterators ( a : Array[Array[Double]], b : Array[Array[Double]], c : Array[Array[Double]]) : Unit = { val b_j = new Array[Double](b.length) for (j <- 0 until b(0).length) { for (k <- 0 until b.length) { b_j(k) = b(k)(j) } for (i <- 0 until a.length) { val c_i = c(i) val a_i = a(i) var s = 0.0d; for (k <- 0 until b.length) { s += a_i(k) * b_j(k) } c_i(j) = s } } def matMulUsingRanges ( val jRange = 0 until b(0).length; val kRange = 0 until b.length; val iRange = 0 until a.length; for (j <- jRange) { for (k <- kRange) { for (i <- iRange) { val c_i = c(i); val a_i = a(i); for (k <- kRange) { } are much slower than the same algorithm coded with while loops: def matMulUsingWhileLoop ( a : Array[Array[Double]], b : Array[Array[Double]], c : Array[Array[Double]]) : Unit = { val m = a.length; val p = b(0).length; val n = b.length; val b_j = new Array[Double](b.length); var i = 0; var j = 0; var k = 0; while (j < p) { k = 0 while (k < n) { b_j(k) = b(k)(j); k += 1 i = 0 while (i < m) { k = 0; while (k < n) { s += a_i(k) * b_j(k); k += 1 c_i(j) = s; i += 1 j += 1; but the while loop code is more complex and error prone. (Sorry, Trac appears to remove some line breaks; I added some explicit semis but might have missed some; I'll try attaching actual working source code) Running this while measuring time in nanoseconds: Iterators 2,807,815,301ns Ranges 2,789,958,191ns While Loop 190,778,574ns MatMul by Iterators is 14 times as slow as with while loops. It does not appear that the Hotspot runtime profiling and optimization dramatically helps this performance problem This performance problem can hurt adoption of Scala for many types of uses/applications. // Scala code to compare performance of nested int loops object MatMul { val jRange = 0 until b(0).length // p val kRange = 0 until b.length // n val iRange = 0 until a.length // m def matMulUsingLimits ( val m = a.length val p = b(0).length val n = b.length for (j <- 0 until p) { for (k <- 0 until n) { for (i <- 0 until m) { for (k <- 0 until n) { var i = 0; var j = 0; var k = 0 k = 0 j += 1 def time[R](block: => R) : (Long, R) = { val start = System.nanoTime() val result : R = block val time = System.nanoTime() - start (time, result) val format = new java.text.DecimalFormat("0,000'ns'"); def report[R](label: String, result: (Long, R)) = { println(label + " " + format.format(result._1)) } private val FACTOR = 5 private val M = 80 private val N = 70 private val P = 60 def main(args : Array[String]) = { for (trial <- 3 until 0 by -1) { val factor = (if (System.getProperty("factor") != null) Integer.parseInt(System.getProperty("factor")) else FACTOR) val multiplier = if (trial == 1) factor else 1; val m = M * multiplier val n = N * multiplier val p = P * multiplier val a = new Array[Array[Double]](m,n) val b = new Array[Array[Double]](n,p) val c = new Array[Array[Double]](m,p) println("\nMultiply c[" + m + "," + p + "] = a[" + m + "," + n + "] times b[" + n + "," + p + "]\n"); val whileTime = time(matMulUsingWhileLoop(a,b,c)) val iterTime = time(matMulUsingIterators(a,b,c)) report("Iterators ", iterTime) report("Limits ", time(matMulUsingLimits(a,b,c))) report("Ranges ", time(matMulUsingRanges(a,b,c))) report("While Loop ", whileTime) println("MatMul by Iterators is " + iterTime._1 / whileTime._1 + " times as slow as with while loops.") } } This is a very important performance enhancement for numerical work. Seems that this could be solved with some pattern matching in the compiler, recognizing a range with no filtering (in the simplest case). Could then remap to a while form for the next stage. +1 It would be nice if for-comprehensions with simple filters could be optimized as well, turning <pre>for (i <- 1 to 10 if shouldProcess) {}</pre> into <pre>var i = 1 while (i < 10) { if (shouldProcess) { } i += 1 } And extra nice if this would work with random access sequences. This is actually the only thing keeping me from using Scala. Replying to [comment:12 PhDP]: > +1 > > This is actually the only thing keeping me from using Scala. Have you tried '-optimize'? It can help a lot. It's very unlikely this will move from library to compiler-generated loops. I haven't benchmarked (with and without -optimize) to see whether the current compilation scheme for "simple loops" is good enough. But in case it isn't, looks like the single place to change in the compiler is method TreeBuilder.makeFor(). According to its comment, it performs five transformations. Prepending as special case a transformation for "simple loops" would not change semantics. (Well, assuming that a local definition does not shadow the usual ones: "to" in "1 to 10", "Range", and so on) Miguel I tried to create a script with the following: def timeit(f : () => Unit) { val t1 = System.currentTimeMillis() f() val t2 = System.currentTimeMillis() println(t2-t1) } def repeat(n : Int, f : Int => Unit) : Unit = { var i = 0 while (i<n) { f(i) i += 1 def test0() { var sum = 0 while (i < 1000000000) { sum += i println(sum) def test1() { repeat(1000000000, i => { }) def test2() { for(i <- 0 until 1000000000) { timeit(test0) timeit(test1) timeit(test2) Result is: -1243309312 467 504 11899 May be this 'repeat' is a workaround? Warning: works only with 'scala -optimise'. This is not very stable, sometimes some seemingly minor modifications, i.e. moving the code outside of the function, break it and I get 12000 for 'repeat'. Replying to [comment:28 mellit]: > I tried to create a script with the following: > > > } > }} Thanks for the suggestion. I hesitate to try or rely on this for several reasons: 1) having to remember to not use the standard for loop is problematic. The syntax is different and not based on generators and less general, although one could certainly patch it to be more general; i.e pass in start and end and optional increment values: repeat(start:Int, end:Int, increment:Int) or repeat(range:Range) etc. (and also handle backwards iteration when called for). 2) This may still involve at least an extra function call per iteration. If the compiler has to inject other synthetic calls into the body to allow access to other lexically scoped variables, this may also affect performance. The goal of this optimization is to achieve Java-level performance of for loops where possible. 3) most critically, if/when the Scala compiler implements this ticket's optimization, then all code using this repeat control would not get the optimization, requiring code maintenance to undo it. Please see update on this ticket sent to scala-user, also available here: I very much agree with mgarcia's comment of nine months ago that TreeBuilder.makeFor already does a whole pile of tree transformations and there is no convincing reason we shouldn't add one which has this kind of impact. Failing agreement on that point, I believe we have a pressing responsibility to clean up the parsing phase and plugin architecture sufficiently that it would be possible to do this transformation with a compiler plugin. Hello, I've just written such a compiler plugin, which you can install straight away on 2.8.0 with sbaz for testing : It's probably full of bugs, but it rewrites int-range for loops with no filters and Array[T].foreach, Array[T].map into equivalent while loops. You can have a look at the auto tests to see the supported cases : Looking forward to seeing something like this mainstream Cheers – zOlive I've adapted the fannkuch and nbody benchmarks that were in the scala-user thread mentioned previously and I had to adapt it a bit (inlining the ranges that were stored as val range = x until y). Here's the modified code : And to run it (with ScalaCL plugin installed via sbaz: sbaz install scalacl-compiler-plugin) : DISABLE_SCALACL_PLUGIN=1 scalac fannkuch.scala && scala fannkuch scalac fannkuch.scala && scala fannkuch DISABLE_SCALACL_PLUGIN=1 scalac nbody.scala && scala nbody scalac nbody.scala && scala nbody With the plugin turned on, the performance of the three variants (While, Limit, Range) is the same (the first while is actually slower, I haven't investigated why). (Sorry for spamming you again, this should be the last time) I've just enhanced the plugin with more conversions to while loops : Also, the conversions should now work on method references and inline lambdas the same way. Further progress and plans can be tracked at the bottom of this page : "Spire also provides a loop macro called cfor whose syntax bears a slight resemblance to a traditional for-loop from C or Java. This macro expands to a tail-recursive function, which will inline literal function arguments." Here's another loop macro with an arguably better syntax: Scalaxy/Loops (which reuses code from ScalaCL): There's a different approach that I've tried in scalaBlitz: dont require users switch from using standard library while they code, but instead give a macro that changes implementation methods, replacing standard library implementation with macro-based one. Here's small description of it: The Range example in this ticket will be compiled to while loops and get same performance.
https://issues.scala-lang.org/si/jira.issueviews:issue-html/SI-1338/SI-1338.html
CC-MAIN-2018-34
refinedweb
1,587
60.85
MAKE Directives MAKE directives resemble directives in languages such as C and Pascal. In MAKE, directives perform various control functions, such as displaying commands onscreen before executing them. MAKE directives begin either with an exclamation point or a period, and they override any options given on the command line. Directives that begin with an exclamation point must appear at the start of a new line. Contents MAKE Directives and Their Command-Line Options The following table lists the MAKE directives and their corresponding command-line options: Using Macros in Directives You can use the $d macro with the !if conditional directive to perform some processing if a specific macro is defined. Follow $d with a macro name enclosed in parentheses or braces, as shown in the following example: !if $d(DEBUG) #If DEBUG is defined, bcc32 -v f1.cpp f2.cpp #compile with debug information; !else #otherwise bcc32 -v- f1.cpp f2.cpp #don't include debug information. !endif Null Macros While an undefined macro name causes an !ifdef MacroName test to return false, MacroName defined as null will return true. You define a null macro by following the equal sign = in the macro definition with either spaces or a return character. For example, the following line defines a null macro in a makefile: NULLMACRO = Either of the following lines can define a null macro on the MAKE command line: NULLMACRO ="" -DNULLMACRO !if and Other Conditional Directives The !if directive works like C if statements. As shown here, the syntax of !if and the other conditional directives resembles compiler conditionals: The following expressions are equivalent: !ifdef macro /* is equivalent to */ !if $d(macro) ifndef macro /* is equivalent to */ !if !$d(macro) These rules apply to conditional directives: - One !elsedirective is allowed between !if, !ifdef, or !ifndefand !endif. - Multiple !elifdirectives are allowed between !if, !ifdef, or !ifndef, !elseand !endif. - You cannot split rules across conditional directives. - You can nest conditional directives. !if, !ifdef, and !ifndefmust have matching !endifdirectives within the same file. The following information can be included between the !if and !endif directives: - Macro definition - Explicit rule - Implicit rule - Include directive - !error directive - !undef directive In an if statement, a conditional expression consists of decimal, octal, or hexadecimal constants and the operators shown in the following table: The operators marked with the * sign also work with string expressions. MAKE evaluates a conditional expression as either a 32-bit signed integer or a character string.
http://docwiki.embarcadero.com/RADStudio/Tokyo/en/MAKE_Directives
CC-MAIN-2018-22
refinedweb
404
50.53
Here we will go through some of the typical use cases for VCS. There are endless opportunities for tracking all aspects of website behavior. These examples will hopefully give you a good idea of what is possible and provide you with a bit of inspiration. Most of these examples can be implemented with a few lines of VCL code in your Varnish setup. They work with both the vanilla Varnish Cache release and the Varnish Cache Plus release. By default, vcs-agent installs with the -d parameter enabled. This configuration automatically generates a key for each URL, HOST, and a global ALL key. Manually tagging a request with a key is done in VCL, by writing an std.log() line prefixed with the string "vcs-key:". The default key configuration is equivalent to the following VCL, used as an example: sub vcl_deliver { std.log("vcs-key:ALL"); std.log("vcs-key:HOST/" + req.http.Host); std.log("vcs-key:URL/" + req.http.Host + req.url); } In the above example, all requests will be tagged with the following keys: Hostheader ( example.com) Host+ URL( example.com/foo) vcs-key ALL For std.log() you will also need to include the std VMOD, with an import std; directive in your VCL. VCS has a flat namespace. Every key is created in this namespace. So, in order to add a bit of organization to your VCS setup we recommend your split the namespace into various sub-namespaces. To split a namespace we recommend you use a separator. We recommend you use / and we’ll be using it in our examples here. The reasons for splitting the namespace would be to create queries against VCS that gives you some subset of the data in VCS. Lets say that you use VCS to track the number of views on your website. If you prepend those keys with VIEWS you can query VCS to give you a top list of the views by asking it to show you the top list of every key beginning with VIEWS. Then you might have another query that gives you the top list of caches misses - MISSES and other logical groups. To omit certain URLs (or requests) from the default keys, remove the -d parameter from vcs-agent’s systemd configuration. Next, add the following VCL to generate the VCS keys, using an if statement to skip the requests you do not want to send to VCS: sub vcl_deliver { if (req.url != "/healthcheck") { std.log("vcs-key:ALL"); std.log("vcs-key:HOST/" + req.http.Host); std.log("vcs-key:URL/" + req.http.Host + req.url); } } In the above example, requests for /healthcheck will not be sent to VCS. To track which URLs have the slowest response times, we can make use of VCS’ ability to provide a sorted list of response times for the keys it is tracking. Simply issuing a request for: /all/top_ttfb will produce a list of the keys associated with the 10 slowest requests. To further get a breakdown of this, for example to get the actual URLs, we can make use of the default keys and combine this with VCS’ regex matching capabilities: /match/^URL/top_ttfb The abbreviation ttfb stands for time to first byte, and is the time between Varnish first started handling the request until it started transmitting the first byte to the client. For a news site there are a few specific things you might want to track. CMS systems typically have unique article IDs that identify one article. Logging the article IDs into VCS gives you easy real time access to what stories are being read right now. We have customers that are embedding this information on their websites generating the what is hot right now lists we often see on a news site. Logging the article ID and not just the URL make the list ignore different presentations of the same article and makes the list about the articles themselves. It also removes the need to normalize the URL in any way, so query strings that annotate links will not pollute the list itself. If your CMS can produce an x-artid header you should be all set. In vcl_deliver you would need to add the following: sub vcl_deliver { std.log("vcs-key:ARTICLE_ID/" + resp.http.x-artid); } You can expand on the setup in several ways. One might for instance also want to measure the social impact of each article by looking at the referrer header (if set). In vcl_deliver add the following: sub vcl_deliver { if (req.http.referer) { std.log("vcs-key:ARTREF/" + resp.http.x-artid + "/" + req.http.referer); } } You might also want to expand it further by looking at the user agent and adding a separate time series for mobile views. In vcl_deliver: sub vcl_deliver { if (req.http.user-agent ~ "mobile") { std.log("vcs-key:MOBILE/" + resp.http.x-artid); } } Many websites want to measure conversions. A conversions might be having a user click a link to sign up, putting an item in the shopping basket. Another use case would be for a paid content site, where the conversion happens with the user clicking the sign up page when reading a specific article. The first step is to identify the conversion taking place, typically done by looking at the request URL, maybe in combination with the HTTP method used. In this example our article page might be /news/art/23245. On that page there is a link pointing to in VCS with the article as the main key we would need the following VCL in vcl_deliver: sub vcl_deliver { if (req.url == "/signup") { set req.http.artid = regsub(...); std.log("vcs-key:CONVERSION/SIGNUP/" + req.http.artid); } } For a more in depth discussion on using VCS to track conversions, and also a how-to on doing AB testing with Varnish and VCS, please see this blog post: If you are streaming HLS/HDS/Smooth/DASH through Varnish you might want to count the number of users on each Varnish server. This might be useful for statistical reasons but might also be used for directing traffic to your various Varnish Cache clusters. The tricky part is to uniquely identify a user. In order to do this you need some sort of session cookie to be preset on the client. All the HTTP video clients are suppose to support cookies. If there is a cookie already present we can probably utilize it, if not we have to generate a random one. We recommend using the cookie VMOD when working with cookies. It will make the VCL much more readable. The following VCL sets a cookie if there is none present. In vcl_deliver: import cookie; sub vcl_deliver { cookie.parse(req.http.cookie); set req.http.X-vcsid = cookie.get("_vcsid"); if (req.http.X-vcsid == "") { set req.http.X-vcsid = std.random(1, 10000000) + "." + std.random(1, 10000000); set resp.http.Set-Cookie = "_vcsid=" + req.http.X-vcsid + "; HttpOnly; Path=/"; } std.log("vcs-key:SESSION/" + req.http.X-vcsid + "/" + req.http.Host + req.url); } There is a blog post on the matter that discusses this in some detail: In an e-commerce setting VCS can be used to give stats about how various SKUs behave. A typical use case would be running statistics on which SKUs receive what traffic. In addition there are various other aspects that VCS can help gather data on: In vcl_deliver: sub vcl_deliver { if (req.url ~ "/sku/\d+") { set req.http.sku = regsub(...); std.log("vcs-key:VIEWSKU/" + req.http.sku); if (req.http.referer ~ "facebook.com|twitter.com") { std.log("vcs-key:SOCIAL/" + req.http.sku); } if (req.http.referer ~ "yahoo.com|google.com") { std.log("vcs-key:ORGANIC/" + req.http.sku); } if (req.url ~ "/ajax/put/\d+") { std.log("vcs-key:PUTBASKET/" + req.http.sku); } } }
https://docs.varnish-software.com/varnish-custom-statistics/use-cases/
CC-MAIN-2021-31
refinedweb
1,302
64.41
> Hi Unity friends, I have a problem with loading and playing an audio file dynamically. I hope you can help! :-) One. I have added an audio file called 01.wav to Resourses/SFX/ directory. The file is 44100HZ, stereo, 0:547 seconds long. Two. The following MonoBehavior class works perfectly. I drag my audio asset on the "clip" field, then I call PlayAudioClip() and it plays it without a problem. public class SoundController : MonoBehaviour { public AudioClip clip; public void PlayAudioClip() { AudioSource.PlayClipAtPoint(clip, Camera.main.transform.position); } } Three. But, when I try to dynamically create an AudioClip object from the file in a regular C# file (where there is opportunity to initialize an AudioClip with drag and drop), for some reason it doesn't play the sound for me. I can see the creation of "one shot audio" in the hierarchy, but no sound is played. Here is the code I'm trying: AudioClip clip = AudioClip.Create("SFX/01", (int)(0.547 * 44100), 2, 44100, false, false); AudioSource.PlayClipAtPoint(clip, Camera.main.transform.position); I have tried various file formats and file settings, but they were not helpful. Also, I changed different address formats such as "Resources/SFX/01", "01" or "SFX/01.wav", but it didn't help either. Thank you for your help! Cheers Did you ever figure out any solution to this, @1h2o1o377? I'm currently having the same issue... @jagels, I didn't find a way to solve the problem above, but I found another way to play clips: 1. Create a MonoBehaviour class and attach its script to an object. 2. Add this public property to the class: public List<AudioClip> SfxClips = new List<AudioClip>(); In the GUI drag and drop your audio files on top of the SfxClips list to populate it. Add this method to your class to play sfx clips: public void PlaySfxClip(int i) { AudioSource.PlayClipAtPoint(SfxClips[i], Camera.main.transform.position); } Good luck! ;-) Cheers, Sia public void PlaySfxClip(int i) { AudioSource.PlayClipAtPoint(SfxClips[i], Camera.main.transform. AudioSource.clip.time won't work? 2 Answers Check scilence in audio clip using FFT. 0 Answers Audio: -3db automatic attenuation on any audio playing? 0 Answers Accessing low passed filtered audio data 0 Answers Ambisonic Audio Support for PS4 0 Answers
https://answers.unity.com/questions/949587/audioclipcreate-playclipatpoint-not-working.html
CC-MAIN-2019-18
refinedweb
379
59.4
from fastai.test_utils import * This is the decorator we will use for all of our scheduling functions, as it transforms a function taking (start, end, pos) to something taking (start, end) and return a function depending of pos. annealings = "NO LINEAR COS EXP".split() p = torch.linspace(0.,1,100) fns = [SchedNo, SchedLin, SchedCos, SchedExp] for fn, t in zip(fns, annealings): plt.plot(p, [fn(2, 1e-2)(o) for o in p], label=t) f = SchedPoly(2,1e-2,0.5) plt.plot(p, [f(o) for o in p], label="POLY(0.5)") plt.legend(); sched = SchedLin(0, 2) test_eq(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.5, 1., 1.5, 2.]) sched = SchedCos(0, 2) test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.29289, 1., 1.70711, 2.]) sched = SchedNo(0, 2) test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0., 0., 0., 0.]) sched = SchedExp(1, 2) test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [1., 1.18921, 1.41421, 1.68179, 2.]) sched = SchedPoly(0, 2, 2) test_close(L(map(sched, [0., 0.25, 0.5, 0.75, 1.])), [0., 0.125, 0.5, 1.125, 2.]) p = torch.linspace(0.,1,100) pows = [0.5,1.,2.] for e in pows: f = SchedPoly(2, 0, e) plt.plot(p, [f(o) for o in p], label=f'power {e}') plt.legend(); pcts must be a list of positive numbers that add up to 1 and is the same length as scheds. The generated function will use scheds[0] from 0 to pcts[0] then scheds[1] from pcts[0] to pcts[0]+pcts[1] and so forth. p = torch.linspace(0.,1,100) f = combine_scheds([0.3,0.7], [SchedCos(0.3,0.6), SchedCos(0.6,0.2)]) plt.plot(p, [f(o) for o in p]); p = torch.linspace(0.,1,100) f = combine_scheds([0.3,0.2,0.5], [SchedLin(0.,1.), SchedNo(1.,1.), SchedCos(1., 0.)]) plt.plot(p, [f(o) for o in p]); This is a useful helper function for the 1cycle policy. pct is used for the start to middle part, 1-pct for the middle to end. Handles floats or collection of floats. For example: f = combined_cos(0.25,0.5,1.,0.) plt.plot(p, [f(o) for o in p]); scheds is a dictionary with one key for each hyper-parameter you want to schedule, with either a scheduler or a list of schedulers as values (in the second case, the list must have the same length as the the number of parameters groups of the optimizer). learn = synth_learner() sched = {'lr': SchedLin(1e-3, 1e-2)} learn.fit(1, cbs=ParamScheduler(sched)) n = len(learn.dls.train) test_close(learn.recorder.hps['lr'], [1e-3 + (1e-2-1e-3) * i/n for i in range(n)]) [0, 3.6711432933807373, 1.281074047088623, '00:00'] The 1cycle policy was introduced by Leslie N. Smith et al. in Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates. It schedules the learning rate with a cosine annealing from lr_max/div to lr_max then lr_max/div_final (pass an array to lr_max if you want to use differential learning rates) and the momentum with cosine annealing according to the values in moms. The first phase takes pct_start of the training. You can optionally pass additional cbs and reset_opt. learn = synth_learner(lr=1e-2) xb,yb = learn.dls.one_batch() init_loss = learn.loss_func(learn.model(xb), yb) learn.fit_one_cycle(2) xb,yb = learn.dls.one_batch() final_loss = learn.loss_func(learn.model(xb), yb) assert final_loss < init_loss [0, 5.174861907958984, 1.1238961219787598, '00:00'] [1, 2.6633129119873047, 0.2623096704483032, '00:00'] lrs,moms = learn.recorder.hps['lr'],learn.recorder.hps['mom'] test_close(lrs, [combined_cos(0.25,1e-2/25,1e-2,1e-7)(i/20) for i in range(20)]) test_close(moms, [combined_cos(0.25,0.95,0.85,0.95)(i/20) for i in range(20)]) learn = synth_learner() learn.fit_one_cycle(2) [0, 4.469752788543701, 3.857470989227295, '00:00'] [1, 4.1990861892700195, 3.5734033584594727, '00:00'] learn.recorder.plot_sched() learn = synth_learner() learn.fit_flat_cos(2) [0, 33.230430603027344, 27.790645599365234, '00:00'] [1, 29.089080810546875, 21.577194213867188, '00:00'] learn.recorder.plot_sched() This schedule was introduced by Ilya Loshchilov et al. in SGDR: Stochastic Gradient Descent with Warm Restarts. It consists of n_cycles that are cosine annealings from lr_max (defaults to the Learner lr) to 0, with a length of cycle_len * cycle_mult**i for the i-th cycle (first one is cycle_len-long, then we multiply the length by cycle_mult at each epoch). You can optionally pass additional cbs and reset_opt. learn = synth_learner() with learn.no_logging(): learn.fit_sgdr(3, 1) test_eq(learn.n_epoch, 7) iters = [k * len(learn.dls.train) for k in [0,1,3,7]] for i in range(3): n = iters[i+1]-iters[i] #The start of a cycle can be mixed with the 0 of the previous cycle with rounding errors, so we test at +1 test_close(learn.recorder.lrs[iters[i]+1:iters[i+1]], [SchedCos(learn.lr, 0)(k/n) for k in range(1,n)]) learn.recorder.plot_sched() learn.fine_tune(1) [0, 0.8103247880935669, 0.6641361713409424, '00:00'] [0, 0.6594769954681396, 0.6119114756584167, '00:00'] class LRFinder[source] LRFinder( start_lr= 1e-07, end_lr= 10, num_it= 100, stop_div= True) :: ParamScheduler Training with exponentially growing learning rate with tempfile.TemporaryDirectory() as d: learn = synth_learner(path=Path(d)) init_a,init_b = learn.model.a,learn.model.b with learn.no_logging(): learn.fit(20, cbs=LRFinder(num_it=100)) assert len(learn.recorder.lrs) <= 100 test_eq(len(learn.recorder.lrs), len(learn.recorder.losses)) #Check stop if diverge if len(learn.recorder.lrs) < 100: assert learn.recorder.losses[-1] > 4 * min(learn.recorder.losses) #Test schedule test_eq(learn.recorder.lrs, [SchedExp(1e-7, 10)(i/100) for i in range_of(learn.recorder.lrs)]) #No validation data test_eq([len(v) for v in learn.recorder.values], [1 for _ in range_of(learn.recorder.values)]) #Model loaded back properly test_eq(learn.model.a, init_a) test_eq(learn.model.b, init_b) test_eq(learn.opt.state_dict()['state'], [{}, {}]) First introduced by Leslie N. Smith in Cyclical Learning Rates for Training Neural Networks, the LR Finder trains the model with exponentially growing learning rates from start_lr to end_lr for num_it and stops in case of divergence (unless stop_div=False) then plots the losses vs the learning rates with a log scale. A good value for the learning rates is then either: - one tenth of the minimum before the divergence - when the slope is the steepest Those two values are returned by default by the Learning Rate Finder. with tempfile.TemporaryDirectory() as d: learn = synth_learner(path=Path(d)) weights_pre_lr_find = L(learn.model.parameters()) lr_min,lr_steep = learn.lr_find() weights_post_lr_find = L(learn.model.parameters()) test_eq(weights_pre_lr_find, weights_post_lr_find) print(f"Minimum/10: {lr_min:.2e}, steepest point: {lr_steep:.2e}") Minimum/10: 7.59e-02, steepest point: 1.32e-06
https://docs.fast.ai/callback.schedule.html
CC-MAIN-2020-50
refinedweb
1,170
62.24
When you write code to specify an amount of time, use the class or method that best meets your needs: the Duration class, Period class, or the ChronoUnit.between method. A Duration measures an amount of time using time-based values (seconds, nanoseconds). A Period uses date-based values (years, months, days). A Duration is most suitable in situations that measure machine-based time, such as code that uses an Instant object. A Duration object is measured in seconds or nanoseconds and does not use date-based constructs such as years, months, and days, though the class provides methods that convert to days, hours, and minutes. A Duration can have a negative value, if it is created with an end point that occurs before the start point. The following code calculates, in nanoseconds, the duration between two instants: Instant t1, t2; ... long ns = Duration.between(t1, t2).toNanos(); The following code adds 10 seconds to an Instant: Instant start; ... Duration gap = Duration.ofSeconds(10); Instant later = start.plus(gap); A Duration is not connected to the timeline, in that it does not track time zones or daylight saving time. Adding a Duration equivalent to 1 day to a ZonedDateTime results in exactly 24 hours being added, regardless of daylight saving time or other time differences that might result. The ChronoUnit enum, discussed in the The Temporal Package, defines the units used to measure time. The ChronoUnit.between method is useful when you want to measure an amount of time in a single unit of time only, such as days or seconds. The between method works with all temporal-based objects, but it returns the amount in a single unit only. The following code calculates the gap, in milliseconds, between two time-stamps: import java.time.Instant; import java.time.temporal.Temporal; import java.time.temporal.ChronoUnit; Instant previous, current, gap; ... current = Instant.now(); if (previous != null) { gap = ChronoUnit.MILLIS.between(previous,current); } ... To define an amount of time with date-based values (years, months, days), use the Period class. The Period class provides various get methods, such as getMonths, getDays, and getYears, so that you can extract the amount of time from the period. The total period of time is represented by all three units together: months, days, and years. To present the amount of time measured in a single unit of time, such as days, you can use the ChronoUnit.between method. The following code reports how old you are, assuming that you were born on January 1, 1960. The Period class is used to determine the time in years, months, and days. The same period, in total days, is determined by using the ChronoUnit.between method and is displayed in parentheses: LocalDate today = LocalDate.now(); LocalDate birthday = LocalDate.of(1960, Month.JANUARY, 1); Period p = Period.between(birthday, today); long p2 = ChronoUnit.DAYS.between(birthday, today); System.out.println("You are " + p.getYears() + " years, " + p.getMonths() + " months, and " + p.getDays() + " days old. (" + p2 + " days total)"); The code produces output similar to the following: You are 53 years, 4 months, and 29 days old. (19508 days total) To calculate how long it is until your next birthday, you could use the following code from the Birthday example. The Period class is used to determine the value in months and days. The ChronoUnit.between method returns the value in total days and is displayed in parentheses. LocalDate birthday = LocalDate.of(1960, Month.JANUARY, 1); LocalDate nextBDay = birthday.withYear(today.getYear()); //If your birthday has occurred this year already, add 1 to the year. if (nextBDay.isBefore(today) || nextBDay.isEqual(today)) { nextBDay = nextBDay.plusYears(1); } Period p = Period.between(today, nextBDay); long p2 = ChronoUnit.DAYS.between(today, nextBDay); System.out.println("There are " + p.getMonths() + " months, and " + p.getDays() + " days until your next birthday. (" + p2 + " total)"); The code produces output similar to the following: There are 7 months, and 2 days until your next birthday. (216 total) These calculations do not account for time zone differences. If you were, for example, born in Australia, but currently live in Bangalore, this slightly affects the calculation of your exact age. In this situation, use a Period in conjunction with the ZonedDateTime class. When you add a Period to a ZonedDateTime, the time differences are observed.
http://docs.oracle.com/javase/tutorial/datetime/iso/period.html
CC-MAIN-2015-35
refinedweb
712
60.01
instead of start-server after kill-server execute this :: adb usb -- this restarts the adb in usb mode so the sequence of steps is adb kill-server adb usb This will do the work for you , I don't know the exact reason why this is happening , maybe it refreshes its connections when you restart in usb mode which I know it should be doing when we start the server after killing it , its a minor bug anyways You could implement your job so that it periodically checks if it's allowed to continue. This is best practice for long-running jobs anyway. If that's in place, you can easily provide a UI for the feature - be it on application restart or individual per job. To see the immediate changes you can copy your js files and paste it in your target folder of entrypoint project. And a browser refresh will suffice. Sometimes clearing of browser cache also required. This might not be a classy solution but it will reflect the changes and in cases where you don't want to restart the server again and again to check whether it works or not. while going through documentation for mongodb sharding I found following statement Because all components of a sharded cluster must communicate with each other over the network, there are special restrictions regarding the use of localhost addresses: If you use either “localhost” or “127.0.0.1” as the host identifier, then you must use “localhost” or “127.0.0.1” for all host settings for any MongoDB instances in the cluster. This applies to both the host argument to addShard and the value to the mongos --configdb run time option. If you mix localhost addresses with remote host address, MongoDB will produce errors. on page "" This implies that when you are in test envir There's no time! I believe the reason your thread instances won't listen to the interrupt is because it is super busy running the forward() method! Try to see if appending: import time #somewhere in your code time.sleep(0) at the end of your method, will make the threads listen! "Sleeping" zero seconds is equal to checking for signals. Read more about it You can actually override onPause() and call finish() from there. However, just don't do that. Users expect a common behavior, if you override the way users expect to interact with your app they will be confused. There certainly is a difference. At the very least, the garbage collector is operating and parsing objects during a GC pause, and it is not doing so during a kill -STOP. Furthermore, the garbage collector will be calling finalizers which could potentially be where your crash bug is located, and this behaviour would not be duplicatable via kill -STOP. No that isn't possible. User always can kill your app and OS also always can kill your app. One you can do is use onBackPressed() method to do something on click back button by user. EDIT : You can also create a background service, which will relaunch your app on kill, but remember that service is also "killable". You can try system() or exec(), but it might not work (or return permission denied errors) as cron processes are executed by either the current user or root, and the web server user doesn't usually have access to these processes. I always do something like this: isDead = false; function startCountdown() { if(isDead) { return; } if((wsCount - 1) >= 0){ wsCount = wsCount - 1; // Display countdown $("#countdown").html('<span id="timecount">' + wsCount + '</span>.'); timing = setTimeout(startCountdown, 1000); } else{ // Redirect to session kill alert('Goodbye'); } } now you can "kill" it with setting isDead to true. Also: your code has horrible dumplicity: function revive() { idleTime = 0; idleRedirect = 0; //startCountdown.die(); isDead = true; } $(this).mousemove(revive First a import of the user32.dll is done to be able to use the GetWindowThreadProcessId Then the Kill method receives the outlook app by parameter and obtains the process and kills it public static class OutlookKiller { [DllImport("user32.dll", EntryPoint = "GetWindowThreadProcessId", SetLastError = true, CharSet = CharSet.Unicode, ExactSpelling = true, CallingConvention = CallingConvention.StdCall)] private static extern long GetWindowThreadProcessId(long hWnd, out long lpdwProcessId); public static void Kill(ref Microsoft.Office.Interop.Outlook.Application app) { long processId = 0; long appHwnd = (long)app.Hwnd; GetWindowThreadProcessId(appHwnd, out processId); Process prc = Process.GetProcessById((int)processId); prc.Kill This should be enough to end all queued effects on the divs of the header. $("#headerwrapper div").finish() In older versions of jQuery, use .stop(true,true) in place of .finish() I wouldn't suggest attempting to resume it other than starting it over from the beginning. Every Android app has its own user id, group id, and most of the time runs within its own process. So your app probably have no privilege to kill other process. There is obviously a design flaw, you'd be better state out what actually you want to do. Try EventMachine.stop_event_loop, it will “cause all open connections and accepting servers to be run down and closed”. newProcess1.on('close', function (code) { console.log('child process ' + newProcess1.pid + ' exited with code ' + code); newProcess2.kill(); newProcess3.kill(); }); It depends on your application logic. If you just feed the data into the database without any CPU intensive tasks, then most of your application time will be spent on IO and threads would be sufficient. If you are doing some CPU intensive suff then you should use the multiprocessing module so you can use all your CPU cores, which threads wont allow you because of the GIL. Using subprocess would just add an additional task of implementing the same stuff that's already implemented in the multiprocessing module so I would skip that (why reinvent the wheel). And gevents is just an event loop I don't see how will that be better than using threads. But if I'm wrong please correct me, I never used gevent. you. Function passthru() spawns a shell to run your command and then blocks until the passthru process returns. Those are independent processes with different Process IDs than the php interpreter running your script. You can kill the script but you won't kill the processes it started. However the spawned processes have the same Process Group ID (PGID) and you can use that to kill them or sent them any other signal. The PGID in our case would be the same as the Process ID (PID) of the php script. To see the PGIDs you can execute the command: ps axjf and you will get something like: PPID PID PGID SID TTY TPGID STAT UID TIME COMMAND 24077 12484 12484 24077 pts/9 12484 S+ 1000 0:00 | \_ php sleepScript.php 12484 12486 12484 24077 pts/9 12484 S+ 1000 0:00 | You can block almost all signals, with the notable exception of SIGKILL. By default the kill command sends SIGTERM, which you can block. Read about the sigaction system call to learn how to block signals.). Uncaught SyntaxError: Unexpected end of input jquery.js:6 Uncaught ReferenceError: jQuery is not defined Check if you uploaded jQuery correctly. The file might have been truncated in the process. UPDATE Checking your jQuery file, the file truncates on line 6 with these as the last few characters. if(this[0]){var b=f(a,this[0].owne SOLUTION Re-upload the jQuery file (jquery.js) again, or Use jQuery CDN: Change: <script src='' type='text/javascript'></script> To: <script src='' type='text/javascript'></script> C-c works Also, you can let it break on error. (which, I think, is the default). I sometimes temporarily do :se nowrapscan to avoid "infinitely" looping over my buffer Also, to speed up macro execution, make it silent: :silent! norm 1000@q Basically the interface of your application is controlled and updated on your apps main thread. Therefore if you run some code which ties up the main thread then your interface will not have a chance to update itself until the code is complete. So to fix that you run the code in a background thread and thus your interface will be able to update itself. I don't know if you can do this in AppleScriptObjC because I'm not too familiar with it. Here's how I do it in objective-c. I create a handler (someHandler) and then run this code. Note that since this handler isn't run in the main thread which has an automatically generated release pool, you will have to create and drain a release pool in your handler. [NSThread detachNewThreadSelector:@selector(someHandler) toTarget:self withObject:nil]; You must use clearInterval. Put the variable with your setInterval as global, then you can stop it anywhere. <html> <body> <input type="text" id="clock"> <script language=javascript> var int=self.setInterval(function(){clock()},1000); function clock() { var d=new Date(); var t=d.toLocaleTimeString(); document.getElementById("clock").value=t; } </script> </form> <button onclick="int=window.clearInterval(int)">Stop</button> </body> </html> Here you can find this example e much more info about clearInterval. Hope it helps! Hide the CSS classes that are showing up. .mojozoom_marker, .mojozoom_imgctr { display: none; } mojozoom_marker is the box (with crosshairs) that shows up within the thumbnail image. mojozoom_imgctr is the enlarged, "zoomed-in" version that appears on the side of the image, on hover. If you do: ps aux or something similar (see man ps for the many different possible commands) you should be able to find the PID of the java process (might be difficult if there are many java processes running*). Then do: kill PID If that doesn't work, try: kill -9 PID But this will not give the process a chance to shut down cleanly. *) The reason this might be difficult with many java processes running, is that on some OS's, Java versions, etc, the process name might simply be "java", which makes it hard to distinguish them. Does your bash script produce any output on stderr? It looks like you're reading its stdout via getInputStream() but you're not doing anything with getErrorStream(). If you don't read stderr then the process could hang if its stderr buffer fills up. Best practice when invoking processes is to read both stdout and stderr in separate threads. You must read them in parallel threads to avoid blocking. You don't need to have separate Java threads just for exec(). Each exec() call will start a separate process which executes in a separate thread of execution. The separate Java threads don't buy you anything. You can do all of the exec() calls from a single thread. My recommendation: start all of the processes from a single thread. For each Process object you receive, start two background th Please see the documentation regarding the Future task. From that what I understand is, if the execution started, we cannot cancel it. Then what we can do to get the effect of cancelling is to interrupt the thread which is running the Future task mayInterruptIfRunning - true Inside your runnable, at different places, you need to check whether the thread is Interrupted and return if interrupted and by that way only we can cancel it. Thread.isInterrupted() Sample : private Runnable ExecutorRunnable = new Runnable() { @Override public void run() { // Before coming to this run method only, the cancel method has // direct grip. like if cancelled, it will avoid calling the run // method. // Do some Operation... // Checking for thread in You can use .addEventListener and set the 3rd parameter to true. That will make it fire first before any other click listeners. Example: document.addEventListener('click', function(e) { if(!$('#main-wrapper').hasClass('show-right-menu')) return false; // we don't need this if the menu is closed if(!$(e.target).parents(".right-menu").length) { // if the target is not located in the menu, we cancel the click e.stopPropagation(); } }, true); Here is a quick live example This wont run every 0.01 seconds, it will be called only once after a delay of 0.01 secs from the current time Even if it gets called only once, you can again cancel the request using [NSObject cancelPreviousPerformRequestsWithTarget:self selector:@selector(checkTheUsersText) object:nil]; For more details please refer this Maybe you could fire an "CancelWorker" event in your "VeryLongTimeComputingFunc" and in the EventHandler you stop the BackgroundWorker with "worker.CancelAsync()". This should work: class BackgroundClass { public event EventHandler CancelWorker; BackgroundWorker worker = new BackgroundWorker(); BackgroundClass() { CancelWorker += new EventHandler(BackgroundClass_CancelWorker); } void BackgroundClass_CancelWorker(object sender, EventArgs e) { worker.CancelAsync(); } void RunBackgroundWorker() { worker.DoWork += (sender, args) => { VeryLongTimeComputingFunction(); }; } void VeryLongTimeComputingFunction() { if (CancelWorker != null) { CancelW Assuming you have created the data/db directory under bin after install. Start a terminal for your mongo server Go to mongo/bin directory Run the command ./mongod Start a terminal for your mongo shell Go to mongo/bin directory Run the command (make sure you put the name of the database) ./mongo test Ok, I think I did it. The gps library has a non-blocking method to check if data is available, so now it looks like: def run(self): global gpsd while self.running: try: if gpsd.waiting(): #only True if data is available gpsd.next() #this will continue to loop and grab EACH set of gpsd info to clear the buffer self.file_descriptor.write(str(int(time.time())) + ',' + str(gpsd.fix.latitude) + ',' + str(gpsd.fix.longitude) + ',' + str(gpsd.fix.altitude) + ',' + str(gpsd.fix.speed) + ' ') self.file_descriptor.flush() time.sleep(5) except: raise And it's working properly. Thanks! There is no out of the box solution available for this but this can easily be implemented. You can fire a monitor thread on method call which can monitor the timing and either kill the process or notify some other method. You can of course implement logic in a several ways from here.(); }); }); }); First. The keyword GO divides the file into separate requests. Each request separately processed by the server. RETURN exits only from first request, other requests will be run. Try this: select 1 RETURN select 2 go select 3 go Second, SET NOEXEC ON is dangerous thing, it blocks all subsequent execution. Try this: select 1 SET NOEXEC ON RETURN select 2 go select 3 go SET NOEXEC OFF go You can create procedure on all servers, but return from it in the beginig if database name like something. Or you can remove GO and create stored proc with dynamic SQL: IF DB_NAME() like '%mydb%' BEGIN EXEC dbo.sp_executesql @statement = N' CREATE PROCEDURE [dbo].[my proc] AS BEGIN select 1 END' END You can use WebWorkers. This will work in a separate thread and allow you to do heavy calculations without intefering with the UI thread. You can signal the worker to stop. See here for examples on how to use: If WebWorkers isn't an option you can use a flag to force the function to stop: var _stop; function displayNumber(num) { _stop = false; while(!_stop) { //calc } } And then on button2 click: <input type="button" value="Two" onclick="_stop=true;displayNumber(2)"> Instead of a loop you can check _stop after each segment of the calculation is done. Just note that if the calculation is long and busy you might not g
http://www.w3hello.com/questions/stop-or-kill-mongodb-server-from-running
CC-MAIN-2018-17
refinedweb
2,540
63.8
Agenda See also: IRC log <shadi> JK: added additional header properties ... divided into URI host port and absPath SA: put stuff into usefull namespace ... now we have 5 different namespaces JK: rfc2965 redefined cookie header SA: what cookie is more deployd JK: name is the same, value may be different SA: netscape cookie part, actual description in commend JK: in the commend described: this header is defined in rfc... SA: should have pointer to the sources where it's based - like rdf:seeAlso ... cookies part: see what differences are in there - should forward to the annotea people, what we have come up with, give them a couple of tweaks JK: agrees SA: prop. next weeks, send out comment, edit working group note CV: should have it before february SA: 3-4 weeks should be enough to commend ... robust metadata from nick, unfortunately nick is not here :((( ... something that should be on top of EARL ... lot's of the checkpoints have dependecies, sometimes they're clear sometimes not ... table: if we hash the table, some cp apply to the table not outside - aslong as no changes outside the table happen nothing happens JK: more about how to implement the checking tool (easier for the programmer) - less for the EARL report ... if table does not change compare hashes ... SA: many of the WCAG cp, can be described by what is the relationship to others ... e.g. image + description, can be hashed easily JK: and the context does not change SA: in many cases the context can be described by a hash ... should EARL express some of these persistency mechanisms? ... who is interessted working on this issue? CV: what about the reliability of these descriptions - comparing docs/results ... use case: something cases in the page even if the xpath does not SA: table: today manual eval, if we run test tomorrow can we reuse the result? ... need some kind of persistency, else we would have to run test again ... can be define priority levels on this issue? JK: could be helpfull for manual checks SA: some tools combine manual/automatic checks ... being able to reuse test results, is this useful? CR: this is really important JK: if there is a possibility to reduce the number of checks for manual evaluators -> this is good, does not make sense for automatic checks SA: what is the priority on this for the ERT WG? silence SA: seems to be no key issue for the group JK: if you don't have access to the the namespace mapping, no mapping unless you get an element / prefix namespace mapping ... no requirement for a docs to use prefixes ... if you want to locate elements, you normally take prefixes (map to URI) SA: if the tool know how to map, it can figure out it anyway JK: XPATH uses prefixes, these can be mapped, element uses namespaces, they may use prefixes but they are not required to <scribe> ACTION: (For all) think about the XPATH issue, and solutions about it [recorded in] SA: XML docs has doctype (e.g. XHTML), though the literal is in the same namespace JK: several locs in the report, put it outside and used namespace class for the mapping SA: for all again, use XPATH parsers and give feedback :)
http://www.w3.org/2005/11/30-er-minutes.html
CC-MAIN-2015-18
refinedweb
544
67.59
Portable Chrome browser setup utility Started by Skitty, 1 post in this topic You need to be a member in order to leave a comment Sign up for a new account in our community. It's easy! Register a new account Already have an account? Sign in here. Similar Content - [SOLVED] Trouble to access Google Chrome database - AutoIt Script to Run RoboForm2Go from USB Drive? By AlHazred Just like it says in the title. A scripter from my company gave me a script to make it run, but he's not around anymore, and my drive finally failed with no working backup. I run PortableRoboForm () version 6 from a USB drive to automate password management for hundreds of accounts with different passwords.RoboForm lets me do this, and they even have a Portable version, but it's not that portable and I'd like to use it inside a PortableApps () launcher. I'd like to create a script to go to the root of the USB drive and run "PortableRoboForm.exe". PortableApps only sees executables in subdirectories of the \PortableApps folder of the USB drive. RoboForm2Go only installs and executes in the root directory of the USB drive. I tried using the fantastic little _GetFileDrive function, but I keep getting an error indicating it's trying to run the file in the subdirectory so obviously I'm not using _GetFileDrive right. Any help for a newbie would be appreciated. ; Change the working directory to the root of this file. FileChangeDir(_GetFileDrive(@ScriptDir)) ; Run the PortableRoboForm executable. Run(PortableRoboForm.exe,"") ; Get the drive letter of a filepath. Idea from _PathSplit. Func _GetFileDrive($sFilePath) Return StringLeft($sFilePath, StringInStr($sFilePath, ":", 2, 1) + 1) EndFunc ;==>_GetFileDriveEDIT: Nevermind, found the option to set it up to run from the PortableApps menu. - Security warning on Chrome when download SciTE By J2TeaM When I was trying to download SciTE, Google Chrome show an error (warning) page. I know it is false positive, but I think we need to report to Google to remove this warning page. Download link. - Get Portable Devices By Danyfirex Well an Implementation of IPortableDeviceManager Interface to get all Portable Device conected to our pc. #include <Array.au3> #include <WinAPICom.au3> Opt("MustDeclareVars", 1) Saludos - Trying to get a Tester's attention! By JibsMan I need to determine a way to get a Test Engineer or QA Tech's attention. Audio may not work as the tester may not have headphones or speakers plugged in. What I was thinking of doing was creating and playing a video that shifts from black to white and back until the tester hits a key, but that may interfere with what we are testing, or if there was a way to do something like this using AutoIt. That is, using AutoIt to Flash the screen, not play the video. Searching the inet for "Flash Video AutoIt" or other combinations brings up lots of answers to Video Flashing problems with monitors, but nothing to help me with this. I am writing tests for a video utility and there are some areas of the test that require the tester to LOOK at the monitor to verify video quality or other things that can only be validated by looking at the screen. Other tests are validated internally and automatically post "Pass" or "Fail" messages to the log file. Yeah I know they are supposed to be watching but they test multiple systems at once and can't watch everything. Are there features of AutoIt that could help me? Thanks JibsMan
https://www.autoitscript.com/forum/topic/141104-portable-chrome-browser-setup-utility/?pid=992032
CC-MAIN-2016-22
refinedweb
589
61.97
#include <hallo.h> * MJ Ray [Sun, Mar 07 2004, 11:44:16PM]: > >hardware manufacturers (in the last instance) only. Do you think that > >they produce everything built in their devices? > > Do you really think that hardware manufacturers don't decide what to > build into their devices? Of course they do, but they have different primary goals, eg. produce the hardware product in this century, make it good enough to sell enough of it. Or do you prefer hardware that is 10 times slower or incompatible to what 95% of the market uses, beeing 200% more expensive? > >Are you really so naive > >to think that everything in the hardware world can be powered by free > >software only? > > Are you so naive to think that all this stuff about "3rd party IP" is > the end of the line? Huch? I did never say ALL. > >[...] The vendors of Debian media are free to master them > >as needed and they often (?always?) integrate non-free. The term > >"official" does not mean much then. > > Your comments seem inconsistent with reality. Check the CD vendors > list for many offers of official CDs. Very far from all vendors offer > non-free. A-Ha. Looking at the tree most-known CD seller in my country (Lehmanns, LinuxLand, Schlittermann), I guess that 90% of the sold media actually contain non-free software. And moving the non-free tree to another server just to "draw a line" for no real reasons sounds a bit childish to me. Regards, Eduard. -- Ein Blinder und ein Tauber wollen sich duellieren. Sagt der Blinde: "Ist der Taube schon da?" Sagt der Taube: "Hat der Blinde schon geschossen?"
https://lists.debian.org/debian-vote/2004/03/msg00376.html
CC-MAIN-2016-40
refinedweb
273
74.39
Red-Black trees are ordered binary trees with one extra attribute in each node: the color, which is either red or black. Like the Treap, and the AVL Tree, a Red-Black tree is a self-balancing tree that automatically keeps the tree’s height as short as possible. Since search times on trees are dependent on the tree’s height (the higher the tree, the more nodes to examine), keeping the height as short as possible increases performance of the tree. Red black trees were introduced by Rudolf Bayer as “Symmetric Bimary B-Trees” in his 1972 paper, Symmetric Binary B-Trees: Data Structure and Maintenance Algorithms, published in Acta Informatica, Volume 1, pages 290-306. Later, Leonidas J.Guibas and Robert Sedgewick added the red and black property and gave the tree its name (see: Guibas, L. and Sedgewick, R. "A Dichromatic Framework for Balanced Trees" In Proc. 19th IEEE Symp. Foundations of Computer Science, pp. 8-21, 1978). Apparently, Java’s TreeMap class is implemented as a Red-Black tree as well as IBM's old ISAM (Indexed Sequential Access Method ISAM) and SoftCraft's Btrieve. This article provides a Red-BlackTree implementation in the C# language. Ordered binary trees are popular and fundamental data structures that store data in linked nodes. Each node has, at most, 2 child nodes linked to itself. Some nodes may not have any child nodes, others may have one child node, but no node will have more than two child nodes. A node having at least one child node is referred to as a parent node. Ultimately, all nodes of a tree are child nodes of the root node. The root node is the top node of the entire tree. Every child node contains a value, or a key, that determines its position in the tree relative to its parent. Since the root node is the top parent, all nodes are organized relative to the root node in branches. Child nodes on the left side of the root have keys that are less than the parent’s key, and child nodes on the right have keys that are greater than the root. This property is extended to every node of the tree. Because each node is linked (or points) to the next node (unless it is a leaf), the tree can be walked (or traversed) to produce an ordered list of keys. Binary trees combine the functionality of ordered arrays and linked lists. Ordered Binary trees are not without problems. If items are added to the tree in sequential (ascending or descending) order, the result is a vertical tree. This results in the worst case searching time. Essentially, each item adds to the height of the tree which increases the time to retrieve any given node. If the tree contains 10 nodes, it will take 10 comparisons (beginning at the root) to reach the 10th node. Thus an ordered binary tree's worst case searching time is O(n) or linear time. However, if items are inserted randomly, the height of the tree is shortened as nodes are spread horizontally. Therefore, trees created from random items have better look-up times than trees created from ordered items. More formally, the time it takes to search an ordered binary tree depends on its topology. The greater the breadth, the faster the performance. Trees are said to be perfectly balanced when all their leaf nodes are at the same level. So, the closer the tree is to being perfectly balanced, the faster it will perform. In many applications, if not most, there isn’t a convenient way to randomize the input prior to inserting it into an ordered tree. Fortunately, this isn’t necessary. Self-balancing trees reorder their nodes after insertions and deletions to keep the tree balanced. By reordering the nodes, self- balancing trees give the effect of random input. Rebalancing is accomplished by rotating nodes left or right. This won’t destroy their key order. In other words, the tree is restructured but the child nodes maintain their key order relative to their parents. To rotate right, push node X down and to the right. Node X's left child replaces X, and the left child's right child becomes X's left child. To rotate left, push node X down and to the left. Node X's right child replaces X, and the right child's left child becomes X's right child. Different balancing algorithms exist. Treaps use a random priority in the nodes to randomize and balance the tree. AVL trees use a balance-factor. Red-Black trees use color to balance the tree. Red-Black trees are ordered binary trees where each node uses a color attribute, either red or black, to keep the tree balanced. Rarely do balancing algorithms perfectly balance a tree but they come close. For a red-black tree, no leaf is more than twice as far from the root as any other. A red-black tree has the following properties: The last property, in particular, keeps the tree height short and increases the breadth of the tree. By forcing each leaf to have the same black height, the tree will tend to spread horizontally, which increases performance. The leaf nodes that are labeled “nil” are sentinel nodes. These nodes contain null or nil values, and are used to indicate the end of a subtree. They are crucial to maintaining the red-black properties and are key to a successful implementation. Sentinel nodes are always colored black. Therefore, standalone red nodes, such as “24” and “40” in Figure 6, automatically have two black child leaves. Sentinel nodes are not always displayed in red-black tree depictions but they are always implied. For optimum performance, all data structures and algorithms used in an application should be evaluated and chosen based on the need of the application. Red-Black trees perform well. The average and worst-case insert, delete, and search time is O(lg n). In applications where the data is constantly changing, red-black trees can perform faster than arrays and linked lists. The project available for download includes a red-black tree implementation and a Test project that gives examples using the tree. Extract the zip file into a directory of your choice. The zipped file will create its own directory called RedBlackCS. The project is contained with the RedBlackCS namespace and consists of four classes: RedBlackCS ReadBlack RedBlackEnumerator RedBlackException RedBlackNode To use the tree, include the RedBlackCS.dll as a Reference to the calling project. To create a RedBlack object, call the default constructor: RedBlack RedBlack redBlack = new RedBlack(); The RedBlack's Add method requires a key and data object passed as arguments. Add public void Add(IComparable key, object data) In order for the RedBlack object to make the necessary key comparisons, the key object must implement the .NET IComparable interface: IComparable public class MyKey : IComparable { private int intMyKey; public int Key { get { return intMyKey; } set { intMyKey = value; } } public MyKey(int key) { intMyKey = key; } public int CompareTo(object key) { if(Key > ((MyKey)key).Key) return 1; else if(Key < ((MyKey)key).Key) return -1; else return 0; } } Calling the GetData() method passing a key object as an argument retrieves a data object from the tree. GetData() public object GetData(IComparable key) Nodes are removed by calling the Remove() method. Remove() public void Remove(IComparable key) Additionally, the RedBlack class contains several other methods that offer convenient functionality: GetMinKey() GetMaxKey() GetMinValue() GetMaxValue() GetEnumerator() Treap Keys() Values() RemoveMin() RemoveMax() The sample project demonstrates various method calls to the RedBlack tree and displays the effect of the calls by dumping the tree’s contents to the Console. Executing the sample project produces the following partial output: The RedBlackEnumerator returns the keys and/or the data objects contained, in ascending or descending order. To implement this functionality, I used the .NET Stack class to keep the next node in sequence on the top of the Stack. As the tree is traversed, each child node is pushed onto the stack until the next node in sequence is found. This keeps the child nodes towards the top of the stack and the parent nodes further down in the stack. Stack Also, unlike my Treap implementation, the RedBlack class saves the last node retrieved (or added) in the event that the same key is requested. This probably won’t happen often, but if it does, it will save a tree walk searching for the key. I’m sure there’re many. One in particular would be to replace the IComparable interface with an Int32. This removes the need for a separate class that implements the IComparable interface since the Int32 class already implements the IComparable interface. This would make the implementation less general but it would speed up performance, I think. Int32 It would be nice if the test project displayed the tree in a graphical format, even a simple.
http://www.codeproject.com/Articles/8287/Red-Black-Trees-in-C?fid=105549&df=90&mpp=10&sort=Position&spc=None&select=1388937&tid=4030096
CC-MAIN-2015-48
refinedweb
1,494
63.8
setgroups() Set supplementary group IDs Synopsis: #include <unistd.h> int setgroups( int ngroups, const gid_t *gidset ); Since: BlackBerry 10.0.0 Arguments: - ngroups - The number of entries in the gidset array. - gidset - An array of the supplementary group IDs that you want to assign to the current user. This number of entries in this array can't exceed sysconf(_SC_NGROUPS_MAX). Library: libc Use the -l c option to qcc to link against this library. This library is usually included automatically. Description: The setgroups() function sets the group access list of the current user to the array of group IDs in gidset. In order to set new groups or delete existing groups, your process must have the PROCMGR_AID_SETGID ability enabled. For more information, see procmgr_ability(). Errors: - EFAULT - The gidset argument isn't a valid pointer. - EPERM - The calling process doesn't have the required permission; see procmgr_ability(). Last modified: 2014-06-24 Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/s/setgroups.html
CC-MAIN-2016-07
refinedweb
167
60.82
Pointer: A simple definition of a pointer can be a data type that stores the address of other data type. Some of the operators that one must know before understanding the rule to declare pointers are: A. Address of operator: Suppose we have a variable ‘a’ of integer datatype. So, to mention it’s address we can write ‘&a’. i.e., &a indicates address of a. B. Dereference operator: * Basic syntax to declare a pointer: data_type* name; For example, int* a; char* p; float* s; (One thing is important to keep in mind is that ,we can’t store the address of integer data_type in a pointer having data_type other than integer, and the same rule goes for other data types as well.) i.e., it is wrong to write : int a = 5; float* b = &a; The data_type of pointer ‘b’ should also be integer only. Let us try to explore the concept of pointers using an example mentioned below: int a = 5; int* p = &a; Here, we have declared a variable ‘a’ of integer data_type and it is storing a value 5. Also in the next line we have declared a pointer ‘p’ and it is storing the address of ‘a’. If we print the value of p, we will simply get the address of ‘a’ and if we print *p , we will get the value at the address, being stored by p(that is why * is called as “Dereference operator” or simply “value at”). Address of a variable can be any hexadecimal number or any garbage value ,which is evaluated by computer only and can vary from computer to computer. Code Example: #include<iostream> #include<bits/stdc++.h> using namespace std; int main(){ int a=5; int* p=&a; cout<<"a= "<<a<<endl; cout<<"p= "<<p<<endl; cout<<"*p= "<<*p<<endl; return 0; } Output: a= 5 p= 0x61ff08 *p= 5 The output value printed above (p= 0x61ff08) may vary on your code editor, as every system stores variables at different memory locations. Pointer to pointer: A pointer that stores the address of other pointer. Example, int a=5; int* b= &a; int** c= &b; Here, - ‘a’ is simply a variable of integer data_type, which is storing an integer 5. - ‘b’ is a pointer which is storing the address of ‘a’. (*b will give 5 as output) - ‘c’ is pointer to pointer which is storing the address of pointer ‘b’. (*c will give address of ‘a’ as output and **c will give 5 as output) *b means value at &a i.e., 5, *c means value at address of b i.e., address of a. Now, since *c means &a then **c will be value at &a i.e., 5. Code Example: #include<iostream> #include<bits/stdc++.h> using namespace std; int main(){ int a=5; int* b=&a; int** c=&b; cout<<a<<endl; cout<<&a<<endl; cout<<b<<endl; cout<<&b<<endl; cout<<*b<<endl; cout<<c<<endl; cout<<*c<<endl; cout<<**c<<endl; return 0; } Output: 5 0x61ff08 0x61ff08 0x61ff04 5 0x61ff04 0x61ff08 5 The output values printed above (something like, 0x61ff08) may vary on your code editor, as every system stores variables at different memory locations. This article was exclusively written by Sadhana Sharma. Read Next: Sieve of Eratosthenes
https://hacktechhub.com/pointer-in-c/
CC-MAIN-2022-40
refinedweb
544
68.1
With a Great Power comes Great Responsibility. I’m referring to the incredible power of defining custom operators as function names. I was convinced that this feature was introduced by C++, but a quick look on Wikipedia was enough to dispell this myth. Starting from Algol 68, programmers were enabled to redefine operators. Not all languages have this feature and even those who do vary in what the programmer can do. The idea is cool, take for example math operators like +and -, they are widely and precisely understood, they are brief and concise. Programmers are usually happy identifying the default semantic for common language operators such as *, /, <, >and by language-specific domain, ==, !=and %. If you define your mathematical structure, then having the option to operate on them via mathematical operators is a great way to write clear and concise code. What can possibly go wrong? Well, until now – at least in production code – I never found a +operator redefined to perform subtraction. But I have found some cases where the programmer made up a custom semantic for operators applied to custom types. The most innocent is the +to concatenate strings. Note that although this may be quite natural to most programmers (exposed to high-level programming languages), it wouldn’t make sense for a mathematician. Just to point out the white elephant, +is commutative, string concatenation is not. But this slippery path looks so inviting – why not use /to start parallel threads, or <<to write stuff in a stream? This easily leads to write-only code where a restricted circle is able to type in the code faster and possibly understand it but leaves outsiders head scraping in front of mysterious gibberish. Java saw that this was BAD™ and refused altogether the concept (this seems natural in Java’s mission to take blades, spikes and booms out of the programmer’s hand). If you want to add two vectors v1 and v2, then call add(v1,v2) and nobody gets hurt. Time passes, Scala enters. Possibly to contest Java, Scala features an even greater power than C++, you can define any symbol you want as a custom operator. And since basically all types are boxed, custom operators may be applied to language types. As for C++, this is not bad per se, but it is easy to be carried away and inventing every kind of funny operator, just to curse yourself a month later or being hunted by coworkers if you are not coding alone. Which kind of custom operators have I found in (and removed from) my code? I add the types (something that you may not have the chance to find so easily in a source code where they are omitted everywhere possible) -+(Byte) : Intto convert bytes to their unsigned equivalents (replaced by an implicit class with toUintmethod). --->(JsValue) : Unitto send data down the websocket. +?(T):Option[T]to implement a sum that may or may not have a result. def ==(data: Array[Byte]) : Booleanmethod of a GpioCommclass used to check if a packet is equal to the given byte sequence. Lesson learned – despite being popularized by successful libraries (e.g. optics) refrain the temptation of defining custom operators. Really no one, but you may have a clue on what -@&£means besides being random keystrokes. Not always your reader may have a modern IDE to look up the definition, and even if she has it, it is an extra time and effort needed to understand your code. One thought on “Our Fathers’ Faults – Operator @!#” Alberto says: As a mathematician, I am pretty at ease with non-commutative operations. Think about matrix multiplication, for instance. However it’s true that the symbol + is used more often in commutative contexts. Anyway, if you introduce a new operator, be sure to give it a catchy name:
https://www.maxpagani.org/2019/11/21/our-fathers-faults-operator/
CC-MAIN-2022-21
refinedweb
635
62.48
How do you export a QtWidget class from a library? - Michael.R.LegakoL-3com.com Does anybody know how to export a QtWidget class (for both WIndows and Linux)? Like most people who write C++, I have some home grown macros that work for either Windows or Linux that allow me to export any class. (They use _declspec(dll[im,ex]port) in Windows, and equate to nothing in Linux). But they don't seem to work with Qt. I want to export a custom control in a library. It implements a MainWIndow::statusBar by combining all the individual controls; i.e., QLabel, QLineEdit, etc into one QWidget class. The class compiles but does not show up as an export when I look at the library with Depends. So the class is defined as: class ZStatusBar: public QWidget { Q_OBJECT ... QLineEdit m_oThing1; QLabel m_oThing2; QPixmap m_oIcon1; ... } Typically, in Windows one would use: class AFX_EXT_CLASS ZStatusBar: public QWidget { } where AFX_EXT_CLASS is a Microsoft macro that equates to _declspec(dllimport), or _declspec(dllexport) (depending on where the header is being included. But this doesn't compile for me, so what does one do in Qt? OK, I tried using the guidelines you pointed me to. (In fact I had already found them on my own, and had tried them. So in a small header file dll_defns.h, I added these definitions: #ifdef COMPILING_DLL_SOURCE #define QTCLASS Q_DECL_EXPORT #else #define QTCLASS Q_DECL_IMPORT #endif Then in my QtWidget class header I define my class using the QTCLASS macro class QTCLASS ZStatusBar: public QWidget { Q_OBJECT ... } Finally in the .pro file of the library I add the definition: DEFINES += COMPILING_DLL_SOURCE This is exactly what the export guidelines suggest, yet when I compile with these changes, I get many warnings, and 1 error: The warnings all are of this type: warning C4273: 'ZStatusBar::qt_static_metacall' : inconsistent dll linkage The Error is: debug\moc_public.cpp(146) : error C2491: 'ZStatusBar::staticMetaObject' : definition of dllimport static data member not allowed Is there something else I'm missing? Hi, Did you do a full rebuild after adding the macro ? @SGaist I sure thought so, but I'll try again. Are you saying that the changes outlined above are adequate to export a class of this type? Yes it should. I didn't realise, moc_public.cpp looks unusual. What part are you compiling ? SGaist, I put these definitions in dll_defns.h (I have changed some of the macro names to see if I've got some weird name conflict): #ifdef COMPILING_QT_SOURCE #define EXPORTQTCLASS Q_DECL_EXPORT #else #define EXPORTQTCLASS Q_DECL_IMPORT #endif Then I include this header in my class definition header: #include "dll_defns.h" class EXPORTQTCLASS ZStatusBar: public QWidget { Q_OBJECT ... } Finally, I put this define in my Library .pro file DEFINES += COMPILING_QT_SOURCE After wiping out the previous build files, and any previous output dll, the compile fails with the 1st error being: D:\Year_2015\svn_nyfr_sw_1\SW\include\API_NYFR_Msg_ICD/public.hpp(75) : error C2470: 'ZStatusBar' : looks like a functio n definition, but there is no parameter list; skipping apparent body But if I remove EXPORTQTCLASS from the class definition, everything compiles and links. (But in that case, the class is not exported). Is there perhaps some include required to define Q_DECL_{EX,IM}PORT? (Not that I'm seeing it complain about this)... @Michael.R.LegakoL-3com.com said: Is there perhaps some include required to define Q_DECL_{EX,IM}PORT? (Not that I'm seeing it complain about this)... That's the downside of macros: If they're not defined, the compiler has absolutely no idea what they're meant to be. Q_DECL_IMPORTand Q_DECL_EXPORTare defined in #include <QtGlobal>: SGaist, The class I'm compiling is a class that builds a statusbar for MainWindow::statusBar. It does have signals, and slots, and sets up a number of connect()'s to tie them together. Basically, when my app gets status messages, they get decoded into various things on the status bar the user needs to see; icons get selected for display, text is displayed, etc.... So the class is a QWidget class that contains various Qt controls: QLineEdit -- used for text output QLabel -- used for icons QPixmap -- used to hold the icon images A timer is started to so I can display a clock with the current time in one of the text fields. Nothing particularly unusual. I've been using this status bar for some time in 3 separate GUI's, but I was also maintaining 3 separate sets of source code. The idea of creating a custom Widget to implement the status bar is basically my attempt to condense 3 sets of redundant code into one custom QWidget that works for all of them. I confess I haven't had to understand the moc files since they have previously just worked.... Mike sGaist, I was missing the #include <QtGlobal> but adding it has not changed the result. I still get D:\Year_2015\svn_nyfr_sw_1\SW\include\API_NYFR_Msg_ICD/public.hpp(75) : error C2470: 'ZStatusBar' : looks like a functio n definition, but there is no parameter list; skipping apparent body The header skipping only fields of the same type is: #ifndef ZSTATUSBAR_H #define ZSTATUSBAR_H #include <QtGlobal> /* Defines: Q_DECL_EXPORT, Q_DECL_IMPORT / #include <QWidget> #include <QStatusBar> #include <QLineEdit> #include <QLabel> #include <QTimerEvent> #include <API_NYFR_Msg_ICD/public_types.h> / Defines: StatusMsg_TYPE / #include <Protos/common_macros.h> / Defines: LIN_SIZ */ #include <Protos/dll_defns.h> class EXPORTQTCLASS ZStatusBar: public QWidget { Q_OBJECT public: ZStatusBar(QWidget *parent=0); virtual ~ZStatusBar(); int Create(QStatusBar *statusBar); void EmitValue(int item,int iSubItem,int iValue); void EmitText(int item,char *sValue); void EmitStatus(StatusMsg_TYPE Msg2,int bHeartBeat,int iRole); void EmitInit(); / Used to put status bar status as 'unknown' */ signals: void updateStatusSignal(StatusMsg_TYPE *Msg2,int bHeartBeat,int iRole); void updateValueSignal(int item,int iSubItem,int iValue); void updateTextSignal(int item,char *sValue); void initSignal(void); protected: void timerEvent(QTimerEvent *event); private slots: void updateValueSlot(int item, int iSubItem, int iValue); void updateTextSlot(int item, char *sValue); void updateStatusSlot(StatusMsg_TYPE *Msg2,int bHeartBeat,int iRole); void initSlot(); private: char *get_today(char *format1,char *sResult); bool m_bCAC; // Status Bar Content QLabel m_oConnectivityIcon[8]; /* 8 LEDs showing system connectivity / QLineEdit m_oNavTime; / PixMaps */ QPixmap m_oRC_Lock[2]; int m_iNavTimerID; char m_sPathFile[LIN_SIZ]; }; #endif // ZSTATUSBAR_H @Michael.R.LegakoL-3com.com said: D:\Year_2015\svn_nyfr_sw_1\SW\include\API_NYFR_Msg_ICD/public.hpp(75) : error C2470: 'ZStatusBar' : looks like a functio n definition, but there is no parameter list; skipping apparent body What's at line 75? Usually, that message means the compiler saw " class EXPORTQTCLASS ZStatusBar: public QWidget" but doesn't understand what EXPORTQTCLASSis. So, the compiler thought that ZStatusBaris a function that returns class EXPORTQTCLASS, and so it looks for the parameter list after the "function" name (e.g. " ZStatusBar(int param1, const QString& param2)"). However, it couldn't find a parameter list, so it skips everything that comes after that. In short, the compiler is confused because your EXPORTQTCLASSdoesn't eventually expand to _declspec(dllexport). Are you using Qt Creator? If so, hover your mouse cursor over EXPORTQTCLASSand see what it expands to. Also, try replacing all instances of EXPORTQTCLASSwith Q_DECL_EXPORTand see what happens. (Again, hover your mouse cursor over Q_DECL_EXPORTand see what it expands to) @JKSH I think you were right. The macro I was using to export with (EXPORTQTCLASS) is somehow not getting equated to Q_DECL_EXPORT. Replacing the class definition: class EXPORTQTCLASS ZStatusBar: public QWidget with class Q_DECL_EXPORT ZStatusBar: public QWidget solves the issue and everything compiles and links. Of course that won't work when I include that header in the application that needs to use ZStatusBar, but that does narrow the issue down to figuring out why the macro is not assigned the correct value. Thanks! Essentially I think this solves the issue... Mike OK, now I think I've isolated why the macro Q_DECL_EXPORT never gets assigned. It was never defined. SGaist suggested that the macros QDECL_EXPORT, and Q_DECL_IMPORT are given values by <QtGlobal>, but I have discovered that these two macros are actually assigned within qcompilerdetection.h. Maybe QtGlobal is supposed to include something that eventually is supposed to include qcompilerdetection.h??? The path to qcompilerdetection.h is C:/Qt/Qt5.2.0/5.2.0/msvc2012_64/include/QtCore/qcompilerdetection.h. I would think the file would be included using #include <QtCore/qcompilerdetection.h> But this doesn't seem to work. I think getting the two macros defined somehow is the issue. On the machine I'm on, MSVS2012 is the compiler, the machine is 64 bits, and 5.2.0 is the version of Qt in use. @JKSH, and @SGaist , Thanks both of you. The problem wasn't your advice. <QtGlobal> DOES define Q_DECL_{EX,IM}PORT (both of them). The real issue was a fairly subtle mistake on my part. I was using the macro COMPILING_Q_LIB to switch between these two macros, but in my .pro there was a DEFINE statement using = following the DEFINE for COMPILING_Q_LIB (which also used the = operator). So the 2nd define statement wiped out all previous defines including the one for COMPILING_Q_LIB. (It would be a good thing to have the compiler detect when more than one DEFINE statement is using the = operator. I'm sure quite a number of the really subtle issues you are seeing are because of that one thing!) Apparently my odyssey isn't over yet. Although I was able to get my QWidget to compile and link, and although depends does show that the object is exported, now I'm seeing an entirely new kind of issue when I try to link the new QWidget object in with an application. In the compiler output of the application, I now get the error: debug\moc_public.cpp(148) : error C2491: 'ZStatusBar::staticMetaObject' : definition of dllimport static data member not allowed And sure enough, looking at the depends exports I see the static meta object in the export list. In fact there is both a qt_static_metacall, and a staticMetaObject although the above error seems to point just to the latter. Since I don't directly generate the files that create these things, how do I prevent them from being included when the object is exported? I haven't got bitten by that one while building Qt based shared libraries. Can you try to make a small minimal library to see if it's still happening ? - JKSH Moderators debug\moc_public.cpp(148) : error C2491: 'ZStatusBar::staticMetaObject' : definition of dllimport static data member not allowed The project that uses ZStatusBar should treat its header as an external file, and add the header to INCLUDEPATH. That project should not contain a copy of the header (in other words, that project should not have " HEADERS += public.h"). Think of it this way: You #include qwidget.h, but you don't copy qwidget.h into your project. You should treat your ZStatusBar header in the same manner. If your project contains a copy of the header, the meta-object compiler will try to generate code for the QObjects in that header. However, this is illegal because the generated code already exists in your library. @JKSH , Direct hit! Score 1 for JKSH. I guess in retrospect your suggestion is obvious since a class header is more than just prototypes, but can also contain implementations. Removing the ZStatusBar header from HEADERS yet making sure the path to the header is in INCLUDEPATH did the trick. Good Catch! At this point, both the ZStatusBar library, and the application that uses it are compiling and linking, so this is pretty good evidence that the original issue of this thread has been solved. Mike
https://forum.qt.io/topic/60559/how-do-you-export-a-qtwidget-class-from-a-library/7
CC-MAIN-2019-04
refinedweb
1,917
55.34
Hi Alan.I gather you were away when I first posted these patches, so can Iresubmit them for your comments? They are: 1. Fix a buglet in the top level makefile which hits when one specifies a relative directory for INSTALL_MOD_PATH rather than an absolute one. Currently, the modules get put in the wrong directory when this happens. 2. Increase the precision of the BogoMIPS counter to 12 bits, this being the safe maximum to which it can be tuned on all systems. The resulting values fit the description for this counter far better than the current values on most of the systems I have here, as all but one of them add four 1's to the binary value. 3. This patch allows the ISA NE2K driver to only claim the ports it actually uses when other devices have already claimed some of the ports that it does not use which fall in the range it would otherwise claim, but restricts this facility to when a new configuration option is defined. It also includes documentation for the new option.I would like to see the first two patches included in the next 2.2series kernel, and the first one may also be relevant for the 2.3series kernel - I haven't tried that out as I need the VFAT filesystem which is currently not supported there.As for the third patch, I can only comment that this patch was inuse successfully on one of my systems with an ISA NE2K card set for0x360 when ide1 was using 0x376, and I never had any problems withit. If the printer port that occupies 0x378-0x37F is defined, thenthe interface is still not auto-probed as the fact that port 0x37Fis in use is enough to say that it can't work there.Best wishes from Riley GM7GOD / KB8PPG********************************************************************===8<=== CUT ===>8===--- linux-2.2.10/Makefile~ Sat May 29 02:10:19 1999+++ linux-2.2.10/Makefile Sun Jul 18 11:02:02 1999@@ -301,6 +301,9 @@ modules_install: @( \ MODLIB=$(INSTALL_MOD_PATH)/lib/modules/$(KERNELRELEASE); \+ if [ "`echo $MODLIB | cut -b 1`" != "/" ]; then \+ MODLIB=$$TOPDIR/$$MODLIB ; \+ fi; \ cd modules; \ MODULES=""; \ inst_mod() { These="`cat $$1`"; MODULES="$$MODULES $$These"; \===8<=== CUT ===>8======8<=== CUT ===>8===--- linux-2.2.10/init/main.c~ Tue May 11 17:57:14 1999+++ linux-2.2.10/init/main.c Sun Jul 18 18:16:58 1999@@ -956,9 +956,9 @@ unsigned long loops_per_sec = (1<<12); /* This is the number of bits of precision for the loops_per_second. Each- bit takes on average 1.5/HZ seconds. This (like the original) is a little+ bit takes on average 1.5/HZ seconds. This gives better than 0.05% and- better than 1% */+ is about the limit of stable resolution on most processors. */-#define LPS_PREC 8+#define LPS_PREC 12 void __init calibrate_delay(void) {===8<=== CUT ===>8======8<=== CUT ===>8===--- linux-2.2.10/Documentation/Configure.help~ Mon Jun 14 03:54:06 1999+++ linux-2.2.10/Documentation/Configure.help Wed Jul 21 18:13:07 1999@@ -6156,6 +6156,26 @@ The module will be called ne.o. If you want to compile it as a module, say M here and read Documentation/modules.txt as well as Documentation/networking/net-modules.txt.++Allow overlapping non-intersecting I/O ports+CONFIG_NE2000_PARTIAL+ The NE2000/1000 driver actually needs just the address offsets from+ 0x00 to 0x11, plus the address offset 0x1F, within the 32 I/O ports+ that it claims, but some clone cards incorrectly fail to decode all+ of the address lines, thus resulting in every port being effectively+ used by the card.++ The case where this is commonly relevant is when the card has been+ configured for port 0x360, the secondary IDE interface is using port+ 0x376, and the printer interface based at port 0x378 is ABSENT. In+ this case, with suitable cards, this option can be enabled to allow+ the network adapter and the secondary IDE channel to co-exist.++ Note that this ONLY affects the ISA driver, as the drivers for the+ other bus systems do not need this option.++ If this means nothing to you, or you are not having problems with an+ ISA NE2000/NE1000 ethernet interface, say N here and rest in peace. SK_G16 support CONFIG_SK_G16--- linux-2.2.10/drivers/net/Config.in~ Mon Jun 7 22:35:22 1999+++ linux-2.2.10/drivers/net/Config.in Wed Jul 21 18:13:07 1999@@ -102,6 +102,9 @@ tristate 'ICL EtherTeam 16i/32 support' CONFIG_ETH16I fi tristate 'NE2000/NE1000 support' CONFIG_NE2000+ if [ "$CONFIG_NE2000" = "y" ]; then+ bool ' Allow overlapping non-intersecting I/O ports' CONFIG_NE2000_PARTIAL+ fi if [ "$CONFIG_EXPERIMENTAL" = "y" ]; then bool 'SEEQ8005 support (EXPERIMENTAL)' CONFIG_SEEQ8005 fi--- linux-2.2.10/drivers/net/ne.c~ Sun Mar 7 23:47:46 1999+++ linux-2.2.10/drivers/net/ne.c Wed Jul 21 18:13:07 1999@@ -18,16 +18,22 @@ Changelog: - Paul Gortmaker : use ENISR_RDC to monitor Tx PIO uploads, made+ Paul Gortmaker : use ENISR_RDC to monitor Tx PIO uploads, made- sanity checks and bad clone support optional.+ sanity checks and bad clone support optional.- Paul Gortmaker : new reset code, reset card after probe at boot.+ Paul Gortmaker : new reset code, reset card after probe at boot.- Paul Gortmaker : multiple card support for module users.+ Paul Gortmaker : multiple card support for module users.- Paul Gortmaker : Support for PCI ne2k clones, similar to lance.c+ Paul Gortmaker : Support for PCI ne2k clones, similar to lance.c- Paul Gortmaker : Allow users with bad cards to avoid full probe.+ Paul Gortmaker : Allow users with bad cards to avoid full probe.- Paul Gortmaker : PCI probe changes, more PCI cards supported.+ Paul Gortmaker : PCI probe changes, more PCI cards supported. rjohnson@analogic.com : Changed init order so an interrupt will only- occur after memory is allocated for dev->priv. Deallocated memory+ occur after memory is allocated for dev->priv.- last in cleanup_modue()+ Deallocated memory last in cleanup_modue()+ rhw@memalpha.cx : Modified ISA probe to correctly deal with cards+ that fully decode offsets 0x10 and 0x1f. These+ do not need to allocate the full 32 addresses,+ but only the 18 addresses actually used, but+ the full 32 addresses are still claimed if they+ are still available. */ @@ -116,7 +122,9 @@ #define NE_CMD 0x00 #define NE_DATAPORT 0x10 /* NatSemi-defined port window offset. */ #define NE_RESET 0x1f /* Issue a read to reset, a write to clear. */-#define NE_IO_EXTENT 0x20++#define NE_IO_EXTENT 0x12 /* Actual registers used, less port 0x1f */+#define NE_IO_EXTENT_X 0x20 /* Full window area size */ #define NE1SM_START_PG 0x20 /* First page of TX buffer */ #define NE1SM_STOP_PG 0x40 /* Last page +1 of RX ring */@@ -167,7 +175,7 @@ #ifdef HAVE_DEVLIST struct netdev_entry netcard_drv =-{"ne", ne_probe1, NE_IO_EXTENT, netcard_portlist};+ {"ne", ne_probe1, NE_IO_EXTENT_X, netcard_portlist}; #else /*@@ -197,7 +205,12 @@ /* Last resort. The semi-risky ISA auto-probe. */ for (base_addr = 0; netcard_portlist[base_addr] != 0; base_addr++) { int ioaddr = netcard_portlist[base_addr];+#ifndef CONFIG_NE2000_PARTIAL- if (check_region(ioaddr, NE_IO_EXTENT))+ if (check_region(ioaddr, NE_IO_EXTENT_X))+#else+ if (check_region(ioaddr, NE_IO_EXTENT) ||+ check_region(ioaddr+NE_RESET, 1))+#endif continue; if (ne_probe1(dev, ioaddr) == 0) return 0;@@ -220,7 +233,7 @@ while ((pdev = pci_find_device(pci_clone_list[i].vendor, pci_clone_list[i].dev_id, pdev))) { pci_ioaddr = pdev->base_address[0] & PCI_BASE_ADDRESS_IO_MASK; /* Avoid already found cards from previous calls */- if (check_region(pci_ioaddr, NE_IO_EXTENT))+ if (check_region(pci_ioaddr, NE_IO_EXTENT_X)) continue; pci_irq_line = pdev->irq; if (pci_irq_line) break; /* Found it */@@ -466,7 +479,29 @@ } } dev->base_addr = ioaddr;- request_region(ioaddr, NE_IO_EXTENT, name);+#ifndef MODULE+ if (check_region(ioaddr, NE_IO_EXTENT_X))+ {+ /* Insert code here to verify that all address lines+ are fully decoded. Suggested algorithm: Set NIC to+ send a packet to 0.0.0.0 then read from the port at+ offset 0x17 and see if that resets the transmitter.+ If it does, return the code for 'probe failed'.++ When this code has been inserted, the following+ configuration option will go away.+ */++#ifndef CONFIG_NE2000_PARTIAL+ return 0;+#else+ request_region(ioaddr, NE_IO_EXTENT, name);+ request_region(ioaddr+NE_RESET, 1, name);+#endif+ }+ else+#endif+ request_region(ioaddr, NE_IO_EXTENT_X, name); for(i = 0; i < ETHER_ADDR_LEN; i++) { printk(" %2.2x", SA_prom[i]);@@ -820,7 +855,7 @@ if (dev->priv != NULL) { void *priv = dev->priv; free_irq(dev->irq, dev);- release_region(dev->base_addr, NE_IO_EXTENT);+ release_region(dev->base_addr, NE_IO_EXTENT_X); unregister_netdev(dev); kfree(priv); }==
http://lkml.org/lkml/1999/8/8/6
CC-MAIN-2016-36
refinedweb
1,351
54.42
Python 201: A Tutorial On Threads Python 201: A Tutorial On Threads Mike Driscoll takes us through Python's threading module with a focus on I/O operations and details on locks, timers, and more. Join the DZone community and get the full member experience.Join For Free Jumpstart your Angular applications with Indigo.Design, a unified platform for visual design, UX prototyping, code generation, and app development. The threading module was first introduced in Python 1.5.2 as an enhancement of the low-level thread module. The threading module makes working with threads much easier and allows the program to run multiple operations at once. Note that the threads in Python work best with I/O operations, such as downloading resources from the Internet or reading files and directories on your computer. If you need to do something that will be CPU intensive, then you will want to look at Python’s multiprocessing module instead. The reason for this is that Python has the Global Interpreter Lock (GIL) that basically makes all threads run inside of one master thread. Because of this, when you go to run multiple CPU intensive operations with threads, you may find that it actually runs slower. So we will be focusing on what threads do best: I/O operations! Intro to Threads A thread lets you run a piece of long running code as if it were a separate program. It’s kind of like calling subprocess except that you are calling a function or class instead of a separate program. I always find it helpful to look at a concrete example. Let’s take a look at something really simple: import threading def doubler(number): """ A function that can be used by a thread """ print(threading.currentThread().getName() + '\n') print(number * 2) print() if __name__ == '__main__': for i in range(5): my_thread = threading.Thread(target=doubler, args=(i,)) my_thread.start() Here we import the threading module and create a regular function called doubler. Our function takes a value and doubles it. It also prints out the name of the thread that is calling the function and prints a blank line at the end. Then in the last block of code, we create five threads and start each one in turn. You will note that when we instantiate a thread, we set its target to our doubler function and we also pass an argument to the function. The reason the args parameter looks a bit odd is that we need to pass a sequence to the doubler function and it only takes one argument, so we need to put a comma on the end to actually create a sequence of one. Note that if you’d like to wait for a thread to terminate, you would need to call its join() method. When you run this code, you should get the following output: Thread-1 0 Thread-2 2 Thread-3 4 Thread-4 6 Thread-5 8 Of course, you normally wouldn’t want to print your output to stdout. This can end up being a really jumbled mess when you do. Instead, you should use Python’s logging module. It’s thread-safe and does an excellent job. Let’s modify the example above to use the logging module and name our threads while we’ll at it: import logging import threading def get_logger(): logger = logging.getLogger("threading_example") logger.setLevel(logging.DEBUG) fh = logging.FileHandler("threading): my_thread = threading.Thread( target=doubler, name=thread_names[i], args=(i,logger)) my_thread.start() The big change in this code is the addition of the get_logger function. This piece of code will create a logger that’s set to the debug level. It will save the log to the current working directory (i.e. where the script is run from) and then we set up the format for each line logged. The format includes the time stamp, the thread name, the logging level and the message logged. In the doubler function, we change our print statements to logging statements. You will note that we are passing the logger into the doubler function when we create the thread. The reason we do this is that if you instantiated the logging object in each thread, you would end up with multiple logging singletons and your log would have a lot of duplicate lines in it. Lastly, we name our threads by creating a list of names and then setting each thread to a specific name using the name parameter. When you run this code, you should get a log file with the following contents: 2016-07-24 20:39:50,055 - Mike - DEBUG - doubler function executing 2016-07-24 20:39:50,055 - Mike - DEBUG - doubler function ended with: 0 2016-07-24 20:39:50,055 - George - DEBUG - doubler function executing 2016-07-24 20:39:50,056 - George - DEBUG - doubler function ended with: 2 2016-07-24 20:39:50,056 - Wanda - DEBUG - doubler function executing 2016-07-24 20:39:50,056 - Wanda - DEBUG - doubler function ended with: 4 2016-07-24 20:39:50,056 - Dingbat - DEBUG - doubler function executing 2016-07-24 20:39:50,057 - Dingbat - DEBUG - doubler function ended with: 6 2016-07-24 20:39:50,057 - Nina - DEBUG - doubler function executing 2016-07-24 20:39:50,057 - Nina - DEBUG - doubler function ended with: 8 That output is pretty self-explanatory, so let’s move on. I want to cover one more topic in this section. Namely, subclassing threading.Thread. Let’s take this last example and instead of calling Thread directly, we’ll create our own custom subclass. Here is the updated code: import logging import threading class MyThread(threading.Thread): def __init__(self, number, logger): threading.Thread.__init__(self) self.number = number self.logger = logger def run(self): """ Run the thread """ logger.debug('Calling doubler') doubler(self.number, self.logger) def get_logger(): logger = logging.getLogger("threading_example") logger.setLevel(logging.DEBUG) fh = logging.FileHandler("threading_class): thread = MyThread(i, logger) thread.setName(thread_names[i]) thread.start() In this example, we just subclassed threading.Thread. We pass in the number that we want to double and the logging object as before. But this time, we set the name of the thread differently by calling setName on the thread object. We still need to call start on each thread, but you will notice that we didn’t need to define that in our subclass. When you call start, it will run your thread by calling the run method. In our class, we call the doubler function to do our processing. The output is pretty much the same except that I added an extra line of output. Go ahead and run it to see what you get. Locks and Synchronization When you have more than one thread, then you may find yourself needing to consider how to avoid conflicts. What I mean by this is that you may have a use case where more than one thread will need to access the same resource at the same time. If you don’t think about these issues and plan accordingly, then you will end up with some issues that always happen at the worst of times and usually in production. The solution is to use locks. A lock is provided by Python’s threading module and can be held by either a single thread or no thread at all. Should a thread try to acquire a lock on a resource that is already locked, that thread will basically pause until the lock is released. Let’s look at a fairly typical example of some code that doesn’t have any locking functionality but that should have it added: import threading total = 0 def update_total(amount): """ Updates the total by the given amount """ global total total += amount print (total) if __name__ == '__main__': for i in range(10): my_thread = threading.Thread( target=update_total, args=(5,)) my_thread.start() What would make this an even more interesting example would be to add a time.sleep call that is of varying length. Regardless, the issue here is that one thread might call update_total and before it’s done updating it, another thread might call it and attempt to update it too. Depending on the order of operations, the value might only get added to once. Let’s add a lock to the function. There are two ways to do this. The first way would be to use a try/finally as we want to ensure that the lock is always released. Here’s an example: import threading total = 0 lock = threading.Lock() def update_total(amount): """ Updates the total by the given amount """ global total lock.acquire() try: total += amount finally: lock.release() print (total) if __name__ == '__main__': for i in range(10): my_thread = threading.Thread( target=update_total, args=(5,)) my_thread.start() Here we just acquire the lock before we do anything else. Then we attempt to update the total and finally, we release the lock and print the current total. We can actually eliminate a lot of this boilerplate using Python’s with statement: import threading total = 0 lock = threading.Lock() def update_total(amount): """ Updates the total by the given amount """ global total with lock: total += amount print (total) if __name__ == '__main__': for i in range(10): my_thread = threading.Thread( target=update_total, args=(5,)) my_thread.start() As you can see, we no longer need the try/finally as the context manager that is provided by the with statement does all of that for us. Of course you will also find yourself writing code where you need multiple threads accessing multiple functions. When you first start writing concurrent code, you might do something like this: import threading total = 0 lock = threading.Lock() def do_something(): lock.acquire() try: print('Lock acquired in the do_something function') finally: lock.release() print('Lock released in the do_something function') return "Done doing something" def do_something_else(): lock.acquire() try: print('Lock acquired in the do_something_else function') finally: lock.release() print('Lock released in the do_something_else function') return "Finished something else" if __name__ == '__main__': result_one = do_something() result_two = do_something_else() This works alright in this circumstance, but suppose you have multiple threads calling both of these functions. While one thread is running over the functions, another one could be modifying the data too and you’ll end up with some incorrect results. The problem is that you might not even notice the results are wrong immediately. What’s the solution? Let’s try to figure that out. A common first thought would be to add a lock around the two function calls. Let’s try modifying the example above to look like the following: import threading total = 0 lock = threading.RLock() def do_something(): with lock: print('Lock acquired in the do_something function') print('Lock released in the do_something function') return "Done doing something" def do_something_else(): with lock: print('Lock acquired in the do_something_else function') print('Lock released in the do_something_else function') return "Finished something else" def main(): with lock: result_one = do_something() result_two = do_something_else() print (result_one) print (result_two) if __name__ == '__main__': main() When you actually go to run this code, you will find that it just hangs. The reason is that we just told the threading module to acquire the lock. So when we call the first function, it finds that the lock is already held and blocks. It will continue to block until the lock is released, which will never happen. The real solution here is to use a Re-Entrant Lock. Python’s threading module provides one via the RLock function. Just change the line lock = threading.Lock() to lock = threading.RLock() and try re-running the code. Your code should work now! If you want to try the code above with actual threads, then we can replace the call to main with the following: if __name__ == '__main__': for i in range(10): my_thread = threading.Thread( target=main) my_thread.start() This will run the main function in each thread, which will in turn call the other two functions. You’ll end up with 10 sets of output too. Timers The threading module has a neat class called Timer that you can use to represent an action that should take place after a specified amount of time. They actually spin up their own custom thread and are started using the same start() method that a regular thread uses. You can also stop a timer using its cancel method. It should be noted that you can even cancel the timer before it’s even started. The other day I ran into a use case where I needed to communicate with a subprocess I had started but I needed it to timeout. While there are lots of different approaches to this particular problem, my favorite solution was using the threading module’s Timer class. For this example, we will look at using the ping command. In Linux, the ping command will run until you kill it. So the Timer class becomes especially handy in Linux-land. Here’s an example: import subprocess from threading import Timer kill = lambda process: process.kill() cmd = ['ping', ''] ping = subprocess.Popen( cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE) my_timer = Timer(5, kill, [ping]) try: my_timer.start() stdout, stderr = ping.communicate() finally: my_timer.cancel() print (str(stdout)) Here we just set up a lambda that we can use to kill the process. Then we start our ping job and create a Timer object. You will note that the first argument is the time in seconds to wait, then the function to call and the argument to pass to the function. In this case, our function is a lambda and we pass it a list of arguments where the list happens to only have one element. If you run this code, it should run for about five seconds and then print out the results of the ping. Other Thread Components The threading module includes support for other items too. For example, you can create a Semaphore which is one of the oldest synchronization primitives in computer science. Basically, a Semaphore manages an internal counter which will be decremented whenever you call acquire on it and incremented when you call release. The counter is designed in such a way that it cannot go below zero. So if you happen to call acquire when it’s zero, then it will block. Another useful tool that’s included is the Event. It will allow you to communicate between threads using signals. We will be looking at an example that uses an Event in the next section. Finally in Python 3.2, the Barrier object was added. The Barrier is a primitive that basically manages a thread pool wherein the threads have to wait for each other. To pass the barrier, the thread needs to call the wait() method which will block until all the threads have made the call. Then it will release all the threads simultaneously. Thread Communication There are some use cases where you will want to have your threads communicate with each other. As we mentioned earlier, you can use create an Event for this purpose. But a more common method is to use a Queue. For our example, we’ll actually use both! Let’s see what that looks like: import threading from queue import Queue def creator(data, q): """ Creates data to be consumed and waits for the consumer to finish processing """ print('Creating data and putting it on the queue') for item in data: evt = threading.Event() q.put((item, evt)) print('Waiting for data to be doubled') evt.wait() def my_consumer(q): """ Consumes some data and works on it In this case, all it does is double the input """ while True: data, evt = q.get() print('data found to be processed: {}'.format(data)) processed = data * 2 print(processed) evt.set() q.task_done() if __name__ == '__main__': q = Queue() data = [5, 10, 13, -1] thread_one = threading.Thread(target=creator, args=(data, q)) thread_two = threading.Thread(target=my_consumer, args=(q,)) thread_one.start() thread_two.start() q.join() Let’s break this down a bit. First off, we have a creator (AKA a producer) function that we use to create data that we want to work on (or consume). Then we have another function that we use for processing the data that we are calling my_consumer. The creator function will use the Queue’s put method to put the data into the Queue and the consumer will continually check for more data and process it when it becomes available. The Queue handles all the acquires and releases of the locks so you don’t have to. In this example, we create a list of values that we want to double. Then we create two threads, one for the creator / producer and one for the consumer. You will note that we pass a Queue object to each thread which is the magic behind how the locks get handled. The queue will have the first thread feed data to the second. When the first puts some data into the queue, it also passes in an Event and then waits for the event to finish. Then in the consumer, the data is processed and when it’s done, it calls the set method of the Event which tells the first thread that the second is done processing and it can continue. The very last line of code call’s the Queue object’s join method which tells the Queue to wait for the threads to finish. You will note that we have set up a sentinel value of -1. The consumer will check for this value and when it comes across it, it will break out of its infinite loop and that thread will stop. The first thread ends when it runs out of items to put into the Queue. Wrapping Up We covered a lot of material here. You have learned the following: - The basics of threading - How locking works - What Events are and how they can be used - How to use a Timer - Inter-Thread Communication using Queues / Events Now that you know how threads are used and what they are good for, I hope you will find many good uses for them in your own code. Take a look at an Indigo.Design sample application }}
https://dzone.com/articles/python-201-a-tutorial-on-threads
CC-MAIN-2019-09
refinedweb
3,047
64.1
Probably, the freeze is because apache 1.3 doesn't support threading. I guess the same applies to the "req" object as most GUIs: Only access the Request object from the thread that created it. It might work if you make the thread object like this: def run(self): self.result = "Hello from thread!\n" then after the join, use "req.write(w.result)" to output the result from the thread. Solution would be to remove the threading, since it is useless in a HTTP server anyway. The client has to wait for the data to be sent back anyway. If you want to use another handler for parts of the message, just send the client a "304" (redirect) response, or (if you feel like having fun) send a HTTP request to your own HTTP server, which will use another process or thread to handle it. I use HTTP client connections inside the HTTP server on our replication server (which acts as a sort of 'proxy': work on a replication slave of the MySQL master database and also caches files) to fetch files from the master server. -- Mike Looijmans -----Original Message----- From: Russell Yanofsky <rey4 at columbia.edu> To: mod_python at modpython.org <mod_python at modpython.org> Date: Sunday, June 01, 2003 11:34 PM Subject: [mod_python] Mod_Python 2.7.8 and threading >Hi, > > > >_______________________________________________ >Mod_python mailing list >Mod_python at modpython.org > > > >
https://modpython.org/pipermail/mod_python/2003-June/013698.html
CC-MAIN-2022-21
refinedweb
232
74.59
A few days ago I was working on a React web application. Once I finished my development, I had ejected the app to get more control of the underlying scripts and webpack bundling. But then after ejecting the application stopped running and was throwing a Babel issue. If you are having a similar problem then this article will help you in Solving Babel issue in Ejected React Application. What is the issue? After ejecting from the React application, when you try to run the application locally you will get a JavaScript error saying that it does not understand JSX syntax. What’s happening here is that Babel can’t compile JSX syntax. Somehow it does not understand it. Before ejecting the application works fine. After ejecting we get the error. Let’s continue and see how to fix this thing. How to reproduce the issue? Start by bootstrapping a new React project (if you do not have one) npx create-react-app test-application Use the npx command as this is recommended by the Create React App (CRA). Once your project is created, run it locally. cd test-application yarn start You should have your React application running at localhost:3000 now. 3000 being the default port. Go ahead and add your own component if you would like to. Copy paste the code that I have shared below. /* /src/components/Page1.js */ import React from 'react'; const Page1 = () => { return ( <div> <h1>This is Page1</h1> </div> ); } export default Page1; Now import Page1 inside app.js /* /src/App.js */ import React from 'react'; import './App.css'; import Page1 from './components/Page1'; function App() { return ( <div className="App"> <Page1 /> </div> ); } export default App; Your local server should be already running. Once you save the file, it will reload the page and you should see a page like below Now, let’s eject the React application. yarn eject After eject completion, start the local server again to run the application. yarn start Now, you will be greeted with the error below. index.js:1 ./src/components/Page1/Page1.js SyntaxError: /Users/josephkhan/htdocs/new-preparations/reactjs/test-application/src/components/Page1/Page1.js: Unexpected token (5:8) 3 | const Page1 = () => { 4 | return ( > 5 | <div> | ^ 6 | <h1>This is Page1</h1> 7 | </div> 8 | ); It throws an Unexpected token syntax error. The issue is that Babel is no longer able to understand JSX (HTML like) syntax inside a .js file. Let’s see how to fix the issue. How to fix the issue? Go ahead and create a babel.config.js file inside your project root touch babel.config.js Copy the configuration below and paste it inside your babel.config.js file. module.exports = function(api) { const presets = ["react-app"]; const plugins = []; if (api.env("development")) { // plugins.push('react-hot-loader/babel'); } return { presets, plugins }; }; Start the local server again yarn start Now your application should be able to run again. Why did this error happen? – While ejecting the CRA scripts somehow messed up with the Babel configs and missed the React preset. This did not happen in previous versions of Create React App. These are the versions that I had when the error showed up. "react": "^16.13.1", "react-dom": "^16.13.1", "react-scripts": "3.4.1" Cheers! If you enjoyed this post and want similar articles to be delivered to your inbox directly, you can subscribe to my newsletters. I send out an email every two weeks with new articles, tips & tricks, news, free materials. No spamming, of course.
https://josephkhan.me/solving-babel-issue-in-ejected-react-application/
CC-MAIN-2020-50
refinedweb
590
59.4
This is the first article in a series that will demonstrate how to use various new features of Vista from native C++ code. The sample code is built with Visual Studio 2005, WTL 7.5, and the Windows SDK. I've classified these articles as Intermediate because I won't be covering the basics of Win32 APIs 2005 are similar.) The Aero theme and glass effects, along with the desktop window manger (DWM), are major new features in Vista that Microsoft is pushing heavily. Here in this first article, I'll demonstrate how to use Aero glass in a frame window-based app and a dialog-based app. Incorporating glass into your app is one way to make it distinctive (and, let's face it, look cool) when the Aero theme is enabled. When Aero is the active theme, and Vista determines that your video card can handle it, the desktop is drawn using the DWM. DWM renders the desktop using a process called composition. The DWM automatically uses Aero theme elements in the non-client area of top-level windows. (This is similar to how XP automatically themes top-level windows.) This does not always add the glass effects, though; if the computer is running on batteries, or the user just decides to turn transparency off, the non-client areas will not be glass: If you do enable transparent glass in the Personalization|Visual Appearance Control Panel applet, then the non-client areas will be transparent: Notice how the frame has a green hue (that's the wallpaper showing through), and a couple of desktop icons are visible in the caption bar. The key thing to remember is that your code only has to worry about whether composition is enabled, not what the glass settings are, because the DWM handles drawing the glass itself. The first sample program is an SDI app with no view window, toolbar, or status bar. After running the WTL AppWizard, the first thing we need to do is set up the #defines in stdafx.h so we can use Vista features. Vista is Windows version 6, and the IE version in Vista is 7, so the beginning of stdafx.h should look like this: #define #define WINVER 0x0600 #define _WIN32_WINNT 0x0600 #define _WIN32_IE 0x0700 Then we include the ATL and WTL header files: #define _WTL_NO_WTYPES // Don't define CRect/CPoint/CSize in WTL headers #include <atlbase.h> #include <atltypes.h> // shared CRect/CPoint/CSize #include <atlapp.h> extern CAppModule _Module; #include <atlwin.h> #include <atlframe.h> #include <atlmisc.h> #include <atlcrack.h> #include <atltheme.h> // XP/Vista theme support #include <dwmapi.h> // DWM APIs If you make these changes and compile now, you'll get four errors in atltheme.h. For example, here is the code for CTheme::GetThemeTextMetrics() which won't compile: CTheme::GetThemeTextMetrics() HRESULT GetThemeTextMetrics(..., PTEXTMETRICW pTextMetric) { ATLASSERT(m_hTheme != NULL); // Note: The cast to PTEXTMETRIC is because uxtheme.h // incorrectly uses it instead of PTEXTMETRICW return ::GetThemeTextMetrics(m_hTheme, ..., (PTEXTMETRIC) pTextMetric); } The cast in the call to the GetThemeTextMetrics() API is a workaround for a mistake in uxtheme.h in the Platform SDK. However, the Windows SDK does not have this mistake, so the cast causes an error. You can remove the cast in that function and the other three that have the same workaround. GetThemeTextMetrics() Adding glass is done by extending the glass effect from the non-client area into the client area. The API that does this is DwmExtendFrameIntoClientArea(). DwmExtendFrameIntoClientArea() takes two parameters, the HWND of our frame window, and a MARGINS struct that says how far the glass should be extended on each of the four sides of the window. We can call this API in OnCreate(): DwmExtendFrameIntoClientArea() HWND MARGINS OnCreate() LRESULT CMainFrame::OnCreate(LPCREATESTRUCT lpcs) { // frame initialization here... // Add glass to the bottom of the frame. MARGINS mar = {0}; mar.cyBottomHeight = 100; DwmExtendFrameIntoClientArea ( m_hWnd, &mar ); return 0; } If you run this code, you won't notice any difference: This happens because the glass effect relies on the transparency of the window being correct. In order for the glass to appear, the pixels in the region (in this case, 100 pixels at the bottom of the client area) must have their alpha values set to 0. The easiest way to do this is to paint the area with a black brush, which sets the color values (red, green, blue, and alpha) of the pixels to 0. We can do this in OnEraseBkgnd(): OnEraseBkgnd() BOOL CMainFrame::OnEraseBkgnd ( HDC hdc ) { CDCHandle dc = hdc; CRect rcClient; GetClientRect(rcClient); dc.FillSolidRect(rcClient, RGB(0,0,0)); return true; } With this change, the frame window looks like this: The bottom 100 pixels are now glass! Adding glass to the window is the easy part, adding your own UI on top of the glass is a bit trickier. Since the alpha values of the pixels have to be maintained properly, we have to use drawing APIs that understand alpha and set the alpha values properly. The bad news is that GDI almost entirely ignores alpha - the only API that maintains it is BitBlt() with the SRCCOPY raster operation. Therefore, apps have to use GDI+ or the theme APIs for drawing, since those APIs were written with alpha in mind. BitBlt() SRCCOPY A common use of glass in the apps that ship with Vista is for a status area (replacing the status bar common control). For example, Windows Media Player 11 shows the play controls and current track information in the glass area at the bottom of the window: In this section, I'll demonstrate how to draw text on the glass area, and how to add the glow effect so the text is readable against any background. Vista has broken away from the old look of MS Sans Serif and Tahoma, and now uses Segoe UI as the default UI font. Our app should also use Segoe UI (or whatever other fonts might come in the future), so we create a font based on the current theme. If themes are disabled (for example, the user is running the Windows Classic color scheme), then we fall back to the SystemParametersInfo() API. SystemParametersInfo() We'll first need to add theme support to CMainFrame. This is pretty simple since WTL has a class for dealing with themes: CThemeImpl. We add CThemeImpl to the inheritance list, and chain messages to CThemeImpl so that code can handle notifications when the active theme changes. CMainFrame CThemeImpl class CMainFrame : public CFrameWindowImpl<CMainFrame>, public CMessageFilter, public CThemeImpl<CMainFrame> { // ... BEGIN_MSG_MAP(CMainFrame) CHAIN_MSG_MAP(CThemeImpl<CMainFrame>) // ... END_MSG_MAP() protected: CFont m_font; // font we'll use to draw text }; In the CMainFrame constructor, we call CThemeImpl::SetThemeClassList(), which specifies the window class whose theme we'll be using. For plain windows (that is, windows that are not common controls), use the name "globals": CThemeImpl::SetThemeClassList() CMainFrame::CMainFrame() { SetThemeClassList ( L"globals" ); } Finally, in OnCreate(), we can read the font info from the theme and create a font for our own use: LRESULT CMainFrame::OnCreate ( LPCREATESTRUCT lpcs ) { // ... // Determine what font to use for the text. LOGFONT lf = {0};; } m_font.CreateFontIndirect ( &lf ); return 0; } Drawing text on glass involves these steps: DrawThemeTextEx() BitBit() Since our drawing code will be different depending on whether composition is enabled, we'll need to check the composition state during the drawing process. The API that checks the state is DwmIsCompositionEnabled(). Since that API can fail, and the enabled state isn't indicated in the return value, CMainFrame has a wrapper called IsCompositionEnabled() that is easier to use: DwmIsCompositionEnabled() IsCompositionEnabled() bool CMainFrame::IsCompositionEnabled() const { HRESULT hr; BOOL bEnabled; hr = DwmIsCompositionEnabled(&bEnabled); return SUCCEEDED(hr) && bEnabled; } Now let's go through OnEraseBkgnd() and see how each step is done. Since this app is a clock, we first get the current time with GetTimeFormat(). GetTimeFormat() BOOL CMainFrame::OnEraseBkgnd(HDC hdc) { CDCHandle dc = hdc; CRect rcClient, rcText; GetClientRect ( rcClient ); dc.FillSolidRect ( rcClient, RGB(0,0,0) ); rcText = rcClient; rcText.top = rcText.bottom - 100; // Get the current time. TCHAR szTime[64]; GetTimeFormat ( LOCALE_USER_DEFAULT, 0, NULL, NULL, szTime, _countof(szTime) ); If composition is enabled, then we'll do the composited drawing steps. We first set up a memory DC: if ( IsCompositionEnabled() ) { // Set up a memory DC and bitmap that we'll draw into CDC dcMem; CBitmap bmp; BITMAPINFO dib = {0}; dcMem.CreateCompatibleDC ( dc ); Next, we fill in the BITMAPINFO struct to make a 32-bpp bitmap, with the same width and height as the glass area. One important thing to note is that the bitmap height (the biHeight member of the BITMAPINFOHEADER) is negative. This is done because normally, BMPs are stored in bottom-to-top order in memory; however, DrawThemeTextEx() needs the bitmap to be in top-to-bottom order. Setting the height to a negative value does this. BITMAPINFO biHeight BITMAPINFOHEADER dib.bmiHeader.biSize = sizeof(BITMAPINFOHEADER); dib.bmiHeader.biWidth = rcText.Width(); dib.bmiHeader.biHeight = -rcText.Height(); dib.bmiHeader.biPlanes = 1; dib.bmiHeader.biBitCount = 32; dib.bmiHeader.biCompression = BI_RGB; bmp.CreateDIBSection ( dc, &dib, DIB_RGB_COLORS, NULL, NULL, 0 ); Now that our graphics objects are created, we can draw the text. // Set up the DC dcMem.SelectBitmap ( bmp ); dcMem.SelectFont ( m_font ); // Draw the text! DTTOPTS dto = { sizeof(DTTOPTS) }; const UINT uFormat = DT_SINGLELINE|DT_CENTER|DT_VCENTER|DT_NOPREFIX; CRect rcText2 = rcText; dto.dwFlags = DTT_COMPOSITED|DTT_GLOWSIZE; dto.iGlowSize = 10; rcText2 -= rcText2.TopLeft(); // same rect but with (0,0) as the top-left DrawThemeTextEx ( m_hTheme, dcMem, 0, 0, CT2CW(szTime), -1, uFormat, rcText2, &dto ); The DTTOPTS struct controls how the text is drawn. The flags indicate that we're drawing composited text, and we want the text to have a glow effect added. Finally, we blit from the in-memory bitmap to the screen: DTTOPTS // Blit the text to the screen. BitBlt ( dc, rcText.left, rcText.top, rcText.Width(), rcText.Height(), dcMem, 0, 0, SRCCOPY ); } // end if (IsCompositionEnabled()) If composition isn't enabled, we draw the text with GDI calls: else { const UINT uFormat = DT_SINGLELINE|DT_CENTER|DT_VCENTER|DT_NOPREFIX; // Set up the DC dc.SetTextColor ( RGB(255,255,255) ); dc.SelectFont ( m_font ); dc.SetBkMode ( TRANSPARENT ); // Draw the text! dc.DrawText ( szTime, -1, rcText, uFormat ); } return true; // we drew the entire background } Here's what the composited text looks like: Just to illustrate the usefulness of the glow effect, here's the text against the same background, but without the glow: When DWM composition is enabled or disabled, the system broadcasts a WM_DWMCOMPOSITIONCHANGED message to all top-level windows. If composition is being turned on, we need to call DwmExtendFrameIntoClientArea() again to tell the DWM what part of our window should be glass: WM_DWMCOMPOSITIONCHANGED LRESULT CMainFrame::OnCompositionChanged(...) { if ( IsCompositionEnabled() ) { MARGINS mar = {0}; mar.cyBottomHeight = 100; DwmExtendFrameIntoClientArea ( m_hWnd, &mar ); } return 0; } The process for adding glass to a dialog is similar to the frame window case, but there are a few differences that require some slightly different code. The sample dialog-based app adds glass to the top of the window; in the text below, code that is changed or added compared to the previous sample is shown in bold. As before, we tell CThemeImpl which window class theme to use, and call DwmExtendFrameIntoClientArea() to add glass to the window frame. CMainDlg::CMainDlg() { SetThemeClassList ( L"globals" ); } BOOL CMainDlg::OnInitDialog ( HWND hwndFocus, LPARAM lParam ) { // (wizard-generared init code omitted) // Add glass to the top of the window. if ( IsCompositionEnabled() ) { MARGINS mar = {0}; mar.cyTopHeight = 150; DwmExtendFrameIntoClientArea ( m_hWnd, &mar ); } Notice that we need to explicitly call OpenThemeData(). We didn't need to call it in the frame window example because CThemeImpl calls it in its WM_CREATE handler. Since dialogs receive WM_INITDIALOG instead, and CThemeImpl doesn't handle WM_INITDIALOG, we need to call OpenThemeData() ourselves. OpenThemeData() WM_CREATE WM_INITDIALOG Next, we construct the font to use for the text. We also make the font larger, just to show how the glow looks on larger text. // Determine what font to use for the text. LOGFONT lf = {0}; OpenThemeData();; } lf.lfHeight *= 3; m_font.CreateFontIndirect ( &lf ); The dialog has a large static text control at the top of the window, which is where we'll draw the time. This code sets the owner-draw style on the control, so we can put all our text-drawing code in OnDrawItem(). OnDrawItem() [__b__] // Set up the owner-draw static control m_wndTimeLabel.Attach ( GetDlgItem(IDC_CLOCK) ); m_wndTimeLabel.ModifyStyle ( SS_TYPEMASK, SS_OWNERDRAW ); Finally, we call EnableThemeDialogTexture() so the dialog's background is drawn using the current theme. EnableThemeDialogTexture() [__b__] // Other initialization EnableThemeDialogTexture ( ETDT_ENABLE ); // Start a 1-second timer so we update the clock every second. SetTimer ( 1, 1000 ); return TRUE; } As before, we need to fill the glass area with a black brush so the glass shows through. Since the built-in dialog window proc draws the dialog's background in response to WM_ERASEBKGND, and handles details like non-square or semi-transparent controls, we need to do our painting on OnPaint() instead of OnEraseBkgnd(). WM_ERASEBKGND OnPaint() void CMainDlg::OnPaint ( HDC hdc ) { CPaintDC dc(m_hWnd); CRect rcGlassArea; if ( IsCompositionEnabled() ) { GetClientRect ( rcGlassArea ); rcGlassArea.bottom = 150; dc.FillSolidRect(rcGlassArea, RGB(0,0,0)); } } In OnTimer(), we get the current time, then set the static control's text to that string: OnTimer() void CMainDlg::OnTimer ( UINT uID, TIMERPROC pProc ) { // Get the current time. TCHAR szTime[64]; GetTimeFormat ( LOCALE_USER_DEFAULT, 0, NULL, NULL, szTime, _countof(szTime) ); m_wndTimeLabel.SetWindowText ( szTime ) } The SetWindowText() call makes the static control redraw, which results in a call to OnDrawItem(). The code in OnDrawItem() is just like the frame window example, so I won't repeat it here. Here's what the clock looks like: SetWindowText() As mentioned earlier, any drawing on the glass area needs to use alpha-aware APIs such as GDI+. The sample project uses the GDI+ Image class to draw a logo in the top-left corner of the dialog, as shown here: Image The logo is read from the mylogo.png file in the same directory as the EXE. Notice that the alpha transparency around the logo is preserved, since the code uses GDI+ to draw the logo. Another option is to make the entire window glass. There is a shortcut for this, just set the first member of the MARGINS struct to -1: MARGINS mar = {-1}; DwmExtendFrameIntoClientArea ( m_hWnd, &mar ); If we did this in our dialog, the results wouldn't be that good: Notice how the text in the four buttons is the wrong color, and there's an opaque rect around each button. In general, transparency and child windows don't mix very well. If you do want an all-glass dialog, the parts with controls should be drawn with an opaque background, as in the Mobility Center app: Adding glass to your apps is a good way to make them visually distinctive, and it can provide a richer status area than what can be accomplished with the status bar common control. This article should give you a good starting point, and an understanding of the DWM APIs that you'll use when adding glass to an app written in native C++. Much of this info was gleaned from the PRS319 session at PDC 2005 ("Building Applications That Look Great in Windows Vista"). I only discovered it after finishing the article, but Kenny Kerr has a huge blog post covering many glass topics. Check out his whole Vista for Developers series, too, they're well worth the time. » This article has no explicit license attached to it but may contain usage terms in the article text or the download files themselves. If in doubt please contact the author via the discussion board below. A list of licenses authors might use can be found here SetWindowTheme(hwnd, L"", L"") _anil_ wrote:Is there any method to run the above project in XP. SolidBrush br ( Color(0,128,0) ); // dark green, alpha defaults to 255 General News Suggestion Question Bug Answer Joke Praise Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
https://www.codeproject.com/Articles/15770/Vista-Goodies-in-C-Using-Glass-in-Your-UI?msg=3226148
CC-MAIN-2017-22
refinedweb
2,645
54.02
[Solved] QSound and error 2019 Hi, I got a problem when I try to use the QSound or QMediaPlayer QTCreator throws a “error LNK2019: external symbol not solved”. here’s my code: @#include "mainwindow.h" #include <QApplication> /#include <QtMultimedia/QMediaContent> #include <QtMultimedia/QMediaPlayer> #include <QUrl>/ #include <QtMultimedia/QSound> int main(int argc, char argv[]) { QApplication a(argc, argv); MainWindow w; w.show(); / QMediaContent media(QUrl::fromLocalFile("pan.wav")); QMediaPlayer player; player.setMedia(media); player.play();*/ QSound::play("pan.wav"); return a.exec(); }@ I put “pan.wav” in each “release” and “source” folder. When I take out all the lines that have to deal with the sound processing it all compiles fine. I really need a little help here. Thanks :) It is not clear to me which version you are using. For Qt 4 you need according to "this": @ QT += multimedia @ in your .pro file. I am using QT5 For "Qt5 there is the same. ": @ QT += multimedia @ Do you have this line in your .pro file? Thanks for the answer, unfortunately I tried it but I still have the same issue. You need to post the .pro and the linker output then. ok. here is the .pro: @#------------------------------------------------- Project created by QtCreator 2013-01-20T19:04:47 #------------------------------------------------- QT += core gui QT += widgets QT += multimedia greaterThan(QT_MAJOR_VERSION, 4): QT += widgets TARGET = Melody_gen TEMPLATE = app SOURCES += main.cpp mainwindow.cpp HEADERS += mainwindow.h FORMS += mainwindow.ui @ and here is the output: \Users\leo\AppData\Local\Temp\Melody_gen.exe.1108.1841.jom main.obj : error LNK2019: symbole externe non résolu "__declspec(dllimport) public: static void __cdecl QSound::play(class QString const &)" (_imp?play@QSound@@SAXABVQString@@@Z) référencé dans la fonction _main release\Melody_gen.exe : fatal error LNK1120: 1 externes non résolus jom: C:\Users\leo\Documents\c++\Melody_gen-build-Desktop_Qt_5_0_0_MSVC2010_32bit_SDK-Release\Makefile.Release [release\Melody_gen.exe] Error 1120 jom: C:\Users\leo\Documents\c++\Melody_gen-build-Desktop_Qt_5_0_0_MSVC2010_32bit_SDK-Release\Makefile [release] Error 2 21:36:23: The process "C:\Qt\Qt5.0.0\Tools\QtCreator\bin\jom.exe" exited with code 2. Error while building/deploying project Melody_gen (kit: Desktop Qt 5.0.0 MSVC2010 32bit (SDK)) When executing step 'Make' Try giving the QSound static call the full path of the .wav file, file system and all.. you mean trying this? @ QSound::play("C:/Users/leo/documents/c++/melody-gen/pan.wav");@ Because When I try this I still get the same error... [quote author="patouf35" date="1358908620"]you mean trying this? @ QSound::play("C:/Users/leo/documents/c++/melody-gen/pan.wav");@ Because When I try this I still get the same error...[/quote] IMHO the path is not the problem. You have a linking problem well before you can start the application. The linker does not find the function QSound::play. My rudimentary French is good enough to catch this. Since the other Qt stuff is not a problem, it must have something to do with the required libs for multimedia. Unfortunately, I have no experience in multimedia nor Qt 5. I am a bit on a loss here. Did you ever rerun qmake in creator? When you add something in a .pro file, this file has to be converted to a makefile which will be used by jom in your case. Most likely the additions to your pro-file have not been transported to the makefile. Under "Edit" you should have the "projects" window on the left. Right mouse click on the project name you should find in the pop-up on 3rd position something like "Run qmake". Do this and a rebuild to be sure. Thanks a lot! When I ran qmake it worked. Thanks again! :) You are welcome ;-)
https://forum.qt.io/topic/23511/solved-qsound-and-error-2019
CC-MAIN-2018-09
refinedweb
608
61.53
On Thu, Apr 17, 2008 at 11:54 AM, Jim White <jim@pagesmiths.com> wrote: > Xavier Hanin wrote: > > On Mon, Apr 14, 2008 at 1:30 AM, Jim White <jim@pagesmiths.com> wrote: > > ... > > > > > > > >. > > > > > > > Yes you can use poms instead of Ivy files if you prefer, Ivy makes the > > difference using the file extension: .pom => parses it as a pom, > > anything > > else => parses it as an Ivy file. Unless you implement your own parser, > > which can handle other cases (hint to implement a parser actually > > supporting > > the groovy syntax you use :-)). > > > > Well, I was generating a POM with a .pom extension and was getting an > error. I will try and get some more details at another time. > > But everything that is supported by Ivy can be done in Ivy files, so you > > don't need to actually use a pom. The trick to support classifiers in > > ivy > > files is to use an extra attribute. Here is an example: > > <?xml version="1.0" encoding="UTF-8"?> > > <ivy-module > > <info organisation="org.apache" > > > > > > /> > > <dependencies> > > <dependency org="net.sf.json-lib" name="json-lib" rev="2.2.1"> > > <artifact name="json-lib" type="jar" m: > > </dependency> > > </dependencies> > > </ivy-module> > > > > With your syntax, you will lack xml namespaces (xml not that bad > > sometimes > > :-)). But under the hood Ivy see it as a classifier attribute, so if you > > disable validation as Gilles suggested, or use your own syntax with your > > own > > parser and your own validation, you can simply support the classifier > > attribute. > > > > Actually Groovy's MarkupBuilder does support XML namespaces because it is > rather literally minded about such things. That's nice, I didn't know that. > > >'s+MarkupBuilder<> > > I put an attribute named 'xmlns:m2' in the root element and then I can do > this: > > XWINGS.IVY { > info(organisation:"org.ifcx", module:"WingsIvyTest") > dependencies { > dependency(org:'net.sf.json-lib', name:'json-lib', rev:'2.2.1' > , conf:'default->runtime') > { > artifact(name:'json-lib', type:'jar' > , 'm2:classifier':'jdk15') > } > } > } > > And that works to get the right artifact the first time. > > Now the trouble is that if you change the value of the classifier Ivy > doesn't notice that the artifact in the local cache is wrong. I think that > the classifier needs to be appended to the module name in the cache for this > to work properly. You're probably right, could you plse open an issue in jira? Obviously the workaround is to set the cache pattern yourself. > > > BTW, if you end up implementing a module descriptor parser for the joy of > > using a groovy syntax in your metadata, please share the result with the > > community! > > > > Hmm, I am not a big fan of Groovy builder syntax outside of Groovy > scripts. > > There are folks doing stuff like Gant which is Groovy scripts that use > AntBuilder and use their own script launching mechanism rather than Ant's, > and I think that is not a good direction because it fails to effectively > leverage the support Ant has in so many development environments. > > The parsing thing I do want to do though is annotation and Javadoc > processors that generate Ivy files and/or POM files using syntax like: > > /** > * @use */ Neat! > > > Also I'm planning a thing called "OOHTML" that will make creating these > sorts of files more fun too. Keep us informed, that sounds like an interesting project. > > > BTW again :-), I like the idea of using groovy inside Open Office, and > > even > > more with Ivy :-) > > > > Thanks! > > I also have a thing called "AntAnywhere" which automates making things > runnable using JavaWebStart. Ivy integration has been next on the list of > things to do for that, and will be done eventually I think... > > This is a very interesting project. Nice to see you think about Ivy integration here too! Xavier > <> > > Jim > > -- Xavier Hanin - Independent Java Consultant
http://mail-archives.apache.org/mod_mbox/ant-ivy-user/200804.mbox/%3C635a05060804170346o73f8ebddo83474b2244ef910b@mail.gmail.com%3E
CC-MAIN-2014-42
refinedweb
628
64.81
1465975 story Atom 1.0 vs RSS 2.0 198 Posted by Hemos on Monday July 18, 2005 @09:26AM from the the-unseen-battle? dept. heeeraldo writes "Is there another format war on the horizon? This wiki compares the two, and finds that even though RSS has far greater deployment (and mindshare), Atom 1.0 solves a lot of the problems associated with it." Can't tell the difference (Score:3, Insightful) So, as a conclusion: Noone cares. Re:Can't tell the difference (Score:3, Insightful) Most users != Slashdotters Re:Can't tell the difference (Score:2) No question (Score:3, Insightful) Re:No question (Score:4, Interesting) Re:No question (Score:3, Interesting) Re:No question (Score:2) Re:No question (Score:3, Informative) So all feeds supported in Longhorn will be: RSS 0.9x RSS 1.0 RSS 2.0 ATOM 0.3 ATOM 1.0 Re:No question (Score:3, Informative) Correct (Score:2) Re:No question (Score:2) Microsoft said that they'd support Atom alongside RSS if it was finished before Longhorn. The Atom 1.0 RFC will be finalised any day now. Neither... (Score:5, Funny) Regards, Yogix It's called namespaces... (Score:5, Informative) I would consider... (Score:2, Interesting) Re:I would consider... (Score:3, Funny) Re:I would consider... (Score:5, Informative) RSS and Atom are standardised ways of having a live list of stories appear from say a newssite (like this one) in various programs. Firefox calls these live bookmarks. I came here using firefox by clicking on my toolbar, seeing all of the new stories, and deciding I was interested in this one. You can also use it for desktop "news ticker" applets. The trouble with RSS (short answer) is that there are at least three different versions of it invented by different people. As far as I know there was an RSS 0.7, then someone else invented a new protocol and called it RSS 1, then the original person invented RSS and called it version 2, but some people argue 2 is worse than 1 :(. All of these standard's owners have been accused of not taking on board comments from the wider community. Atom is another protocol for doing the same thing. Technical issues aside, it gets my vote because they didn't decide to call it RSS 3. Or RSS 10. Re:I would consider... (Score:2) If you don't like something about a protocol, is the correct thing to Re:I would consider... (Score:2, Funny) That's because there was already an RSS 2! Re:I would consider... (Score:3, Informative) The trouble with RSS (short answer) is that there are at least three different versions of it invented by different people. Three? Try nine [diveintomark.org]. As far as I know there was an RSS 0.7, then someone else invented a new protocol and called it RSS 1, then the original person invented RSS and called it version 2 No. The short version is that somebody at Netscape invented 0.9something based on RDF. The public release (another 0.9something) was rushed for my.netscape.com and wasn't based on RDF. Then Netsca Re:I would consider... (Score:3, Interesting) The w3 refactored HTML 4.01 into XHTML 1.0 using XML instead of SGML. This is similar to the RDF to standard XML change in RSS. Then, the w3 modularized XHTML 1.0 Strict into XHTML 1.1, similar to the back and forth Re:I would consider... (Score:3, Informative) Well, these are are XML syndication formats. In other words, they move headlines and article summaries from server to user in machine-parseable format. There's RSS, which is the reigning de facto standard, but it also is regrettably very, very liberally specified, and even less frequently heeded. Everyone's extending it to their own heart's content more or less competently. There are lots of different variations. Not easy to implement, not easy to learn. Atom is an attempt to make a real standard-like sta Hmm... (Score:4, Funny) Re:Hmm... (Score:2) whoa nelly (Score:4, Interesting) Re:whoa nelly (Score:5, Insightful) Re:whoa nelly (Score:2) Re:whoa nelly (Score:2, Interesting) This is the point: Atom is just a fork. RSS is a real concept. Forks come and go, a concept stands. Re:whoa nelly (Score:2) Re:whoa nelly (Score:2) Parent Makes No Sense (Score:5, Informative) The parent post really doesn't make any sense at all. Re:whoa nelly (Score:2) There are at least some out there that would say the article is heavily [docuverse.com] biased [sourcelabs.com]. Not that these responses aren't. Me? I like RSS 2.0. It's simple (for what is does), and extensible (for most of what it doesn't). To each their own. Re:whoa nelly (Score:2) I'm sure it could be argued OGG Vorbis completely demolishes everything that MP3 is/was/used to be also, but does it matter? Like MP3, RSS has already won the mindshare war. Those three letters are already stuck in the minds of bloggers as the very definition of How To Syndicate Content. Re:whoa nelly (Score:2) "Bag of bytes" probably would have been more fair/accurate, at least. It is plaintext, after all. Once again (Score:5, Insightful) That said, one nice thing about this format war is that there doesn't have to be a loser. It's fairly easy to handle multiple formats in software (note the number of redundant music formats), unlike hardware which is usually impossible. If the process of reading RSS tags or Atom tags is made transparent to the user, who cares who wins? Re:Once again (Score:3, Insightful) Re:Once again (Score:2) Whether or not IE supports a standard has a big bearing on uptake. Look how much more widespread jpeg is to png. Re:Once again (Score:4, Informative) Re:Once again (Score:2) Re:Once again (Score:2) Re:Once again (Score:2, Informative) In any case, the majority of sites I visit still use GIFs (1987) for generic elements, like the rounded end on separators and story icons here. AFAIK, PNG was never aimed at replacing JPEG... its main aim was to provide a better, Compuserve-free GIF alternative. Re:Once again (Score:2, Informative) Re:Once again (Score:2) In any case, GIF is still the most common generic site image format on the sites I visit so I do not quite see this alledged favourism in action. Maybe there is a study somewhere with actual online image format statistics that would say otherwise. And with GIF's liberation (or even throughout the Unisys/Compuserve episode), most site designers/maintainers simply applied the good old " Re:Once again (Score:3, Informative) No, LZW was a major motivator for creating PNG [wikipedia.org], not a mark against it. PNG is LZW free. Also it isn't limited to 256 colors like GIF. AFAIK, PNG was never aimed at replacing JPEG... its main aim was to provide a better, Compuserve-free GIF alternative. You're right about that though, if not for the right reasons. PNG wasn't really designed to have anything to do with JPEG, they mostl Re:Once again (Score:2) Looks like you did not notice the past tense in "its main aim was to provide a better, Compuserve-free GIF alternative"... back when PNG first came along, this _was_ one of the reasons, even if it no longer is _now_. Re:Once again (Score:2) I did indeed notice it. That would be why my first sentence after your quote was, "You're right about this though". I added the "for the wrong reasons" because "Compuserve-free" is not entirely accurate. Unisys was the patent holder on LZW, not CompuServe. CS licensed it from Unisys to make it available for their users in the GIF89 format, which is why it became associated with CompuServe but they weren't technically the "bad guys' here. CS didn't control L Re:Once again (Score:2, Insightful) That's a completely back asswards way of looking at it. Website opperators are forced to cater to broken IE implementations not because they are attracted to its features, but because that's what 80% of their visitors are using. And no, if you're a commercial website you can't just say "Screw 'em if they're not smart enough to use Firefox." So back to the original point, if no one is using Atom, why would website operators publish in Atom? Though I do agree with the point that's been made that it's easy Re:Once again (Score:2) Nope, consumers (Score:2) You are right about that to the point that the web site owners decide what format to use... The consumers will use what the webmasters use. Now that is mistaken. The consumers will use what the consumers use. I know that sounds redundant, but consider - Safari already has built in RSS support. IE will have built in RSS support. So how many consumers will actually use Ato Exactly (Score:2) Actually, I never researched the differences, so every time I had a choice between Atom and RSS feeds from the same source, I always chose RSS, because I thought Atom was an older style, and also thought that if I ever switched to another reader, it'd be easier to move my feeds if they were all RSS. Re:Once again (Score:2) Please tell me you didn't really mean to imply that technical sophistication is achieved by making pretty widgets. Re:Once again (Score:2) Please tell me you didn't really mean to imply that all technical sophistication has nothing to do with user functionality and ergonomics Re:Once again (Score:2) Please tell me you didn't really mean to imply that all technical sophistication has nothing to do with user functionality and ergonomics ;) . Certainly not. :-) But then (at least to my mind), "pretty widgets" doesn't imply user functionality and ergonomics, either. I think we agree: a product with great technical sophistication can be killed by a bad user interface (which is also technical to some extent), but lack of effective marketing to bring the product to peoples' attention can make both irre The crucial distinction (Score:2) They've probably been itching for another good format war to take sides on. Re:Once again (Score:2) Re:Once again (Score:2) How true...I never run into problems trying to get CSS1 and CSS2 to work properly across IE/Safari/Firefox/Konqueror/etc. Granted, some of them implement the standard improperly (*cough...IE..*), but this is what happens when multiple standards exist for the same purpose. I for one don't like re-creating my work for RSS 0.7/1.0/Atom/whatever...that's 3x the work I have to do. And then we'll get RSS 10.0... Re:Once again (Score:2) That's right - if they do the same thing noone cares. I suspect somebody clever will come up with a killer app that will require a feature that either one or the other has that we're not noticing right now, and then noone will remember the other format. As it should be. format war? (Score:2, Insightful) I mean there are still 60% who still use that incompatible Browser because they believe that it is the internet and the Modem is a special powercord. It's not pretty folks (Score:3, Funny) Smack! Kapow! At least put your hands in front of your face. Whack! Bam! Get up off the mat, RSS!!! Get up!! I can't watch anymore... Atom's More Than A Syndication Format (Score:5, Informative) What's been (all but) finalized is the syndication format (and rules for extending it). This allows the working group to firm up the details of the publishing API, which, for my money, is the real payoff with Atom. A pretty good overview of the history of RSS and the motivations behind Atom is here [computer.org]. Which one is growing? (Score:4, Insightful) Besides industry support, my only question would be "which one is growing?" Which of these formats is expected to get a new version number sometime soon? If you ask me, that is why Microsoft is talking about adding "extensions" to RSS -- by growing and adapting the standard, it gets more bells and whistles, more application support, and more momentum in the development community. Oracle: More Complicated Pricing Model Needed? [whattofix.com] Sadly.. (Score:2, Funny) Cache (Score:3, Informative) RSS 2.0 vs. Atom vs. RSS 1.0 (Score:5, Insightful) AFAIK the format war between RSS 2.0 and RSS 1.0 hasn't even ended yet. In spite of the version numbering, RSS 2.0 is more of a .95 than a 2.0 since it's an incremental improvement over .94. It doesn't really add any capabilities to RSS 1.0 (both can support enclosures). The only real difference is that RSS 1.0 is based on RDF while 2.0 isn't; this supposedly makes 2.0 simpler, but potentially less capable. It's a pity that all the RSS folks couldn't simply hash together a common standard rather than wasting time on competing standards. Is 2.0 really that much simpler than 1.0? Is 1.0 really that much more capable than 2.0? Does Atom really add much to the mix? It seems that it ought to be possible to find a middle ground. What's Wrong With RSS 1.0 (Score:2) RSS 1.0 is infinitely extensible because it can be combined with other RDF schemas. In order to extend RSS 9.x, the standard must be extended. This allows its expansion to be controlled, which seems more managable, but as time goes on, features get duct-taped to it in ugly ways. Because of RSS 1.0's extensibility, its syntax is less human-friendly. This was an Re:RSS 2.0 vs. Atom vs. RSS 1.0 (Score:2) Without getting too much into the politics of the syndication world, the reason is that no-one wants to touch RSS 1.0 with a ten-foot pole (even without the bitterness and fallout fro One thing (Score:5, Interesting) RSS 2.0 Is Like Perl (Score:2) RSS 1.0 is slightly more complex but a gajillion times more elegant. It has actual standards for metadata [resource.org]. Re:One thing (Score:2) Preach it, brother! I've got an open source aggregator (<plug mode="shameless">Feed for Mac OS X [keeto.net]</plug>) and it seems most of the 'bug fixes' I have to do are directly related to some fools home-grown interpretation of how to deliver content. Effectively, RSS (the concept, not the format) is in the 'tag soup' phase that the web was in seven years ago. While I expect this will all settle down as the concept (and value) of standardization is realized by content publishes and CMS vendors, it cu GUID (Score:4, Interesting) But because an Atom feed must include a guid element, the client has a way of uniquely identifying an item. This means that when you subscribe to an atom feed, you're not going to see duplicate articles the way you do with RSS when the RSS feed doesn't include a guid or any unique identifier (which is legal) and the client has to make one up by hashing the content. I wrote a bit about this here [stevex.org]. Re:GUID (Score:2) as long as rss doesn't require you to at least include blank versions of standardized tags we will have the same problems that html has. lots of people out there writing bad code that don't work well for the diversity of readers. False dichotomy (Score:2) For those who don't know, RSS 0.9x was basically Dave Winer's personal plaything. When the standards community put together an RSS 1.0 standard, he took his most recent 0.9x 'standard' and renamed it RSS 2.0 to make it look more up-to-date. Why not take RSS 1.0 and fix the few problems it has? What Problems? (Score:2) Sure, RSS 1.0 takes more work to understand up front, but once you get RDF isn't it just another schema? And these days, now that blog software has automatic feeds and there are aggregators available on every platform, how many humans actually need to read it? Re:What Problems? (Score:2) We can go on and on... (Score:2) Newster is aggregatates news from many websites that publish them in RSS. Once the news bits are in the database it uses the ATOM API to post it to the blog. And then it republishes it in ATOM (because its a blogger service). So what we have here is a website that p No e-mail obfuscation? (Score:2) As someone who's implemented them both (Score:4, Informative) Atom wins hands-down. Things are actually well specified [atompub.org] . I can just walk through the atom specification, implementing it as I go, and not have any questions about what is required, what type of content can be present in any one element, I don't have to look up five even less well-specified different modules just to get the basics of the feed together (and thus also don't have to worry about namespaces), what elements and attributes mean (actually, I spent a minor five minutes agonizing over what I should put in the term atribute of the category element, given that the label attribute contains the human readable version, before realizing that I was completely free in this, as the "scheme" os up to myself, and deciding to mirror how categories are named in the url on the website (which I found to be consistent with various other already existing atom 1.0 feeds [intertwingly.net] that I checked)), or... well, basically any kind of question that you need to think about as you implement a new and previously unknown specification. RSS on the other hand (any of the 9 incompatible versions)... *shudders* Those specifications don't tell me anything. I copy/paste from other feeds and heavily use the feedvalidator [feedvalidator.org], but... *shakes his head* Once all feedreaders have been updated to support Atom 1.0 completely, I'll go and pull the plug on the remaining RSS feeds, and good riddance too! RSS has terribly crappy version control (Score:2) 1) It was written by Mark Pilgrim, one of the major minds behind creating the specification for Atom. 2) Mark's a personal friend of mine, and I personally think he makes sense. The point of the article is, however, that RSS is terribly broken and fragmented, versions aren't compatible with each other, and it's just a plain mess. Look further on his site and you'll see articles as to not only why he helped c Which RSS format is this? (Score:2) Semantics (Score:2) So while RSS is stuck with regular HTML (escaped markup, whoa!) and images in its contents, Atom can already embed other XML namespaces like XHTML, SVG, MathML, FOAF, Dublin Core... I think the comparison is similar to the HTML/XHTML one: though right now they can give the same results, in the (not so distant I hope) future Atom/XHTML will become the languages of choice. Not a lot of people use XHTML+SVG yet, but with Opera supporting it Podcasting (Score:2, Informative) It's funny how this writeup doesn't even mention enclosures, despite the hundreds of thousands of people downloading content this way. The only place it comes up is in the chart at the end, which makes some side reference to <link rel="enclosure"> in Atom, which is Re:Podcasting (Score:2) Atom has thought about rich media far more than RSS2.0. For example, one of the problems of podcasting is that popular podcasts require stacks of bandwidth. One solution is to offer a bit torrent link. RSS2.0 only allows one enclosure per item, so you can't offer both a straight download and a torrent. In Atom its st Re:Podcasting (Score:2) I just don't get it. Then again, i don't get a lot of recent computer trends... i'm turning 25, and already feel old. Folks, please support Atom (Score:2) With that in mind, please support Atom in your future projects. Atom really is a better user experience for end users, and it's better specified and easier to work with as a developer. The RSS folks are great, and they've put in a ton of hard work, but the Atom spec is Just Better right now, and offers at lot more bang for the developmental buck, not to mention it handles feed aggregations much better. IE, Firefox/Sage and Safa Error in the first paragraph? (Score:2) 2005/07/13: RSS 2.0 is widely deployed and Atom 1.0 not at all. Er... blogger.com (Google's blog service) uses Atom. I think that might count as having been deployed, just maybe... Buzzword regurgitation (Score:2) Multicast/Unicast push protocol (Score:2) Apparently, much bandwidth is wasted just because people can't get themselves out of the only-OSI-level-7+HTTP corset. Re:Firefox support? (Score:3, Informative) Re:Firefox support? (Score:3, Informative) Re:pwned (Score:3, Informative) Unmolested version [intertwingly.net] - get it while it lasts copy and paste from google cache (Score:2) I just copied the one from google cache [72.14.207.104] back into the wiki - we'll see how long it takes before that asshole takes it down again. Re:pwned (Score:2, Funny) Grow up Re:We use it! (Score:2) Re:We use it! (Score:5, Interesting) Re:We use it! (Score:2) Re:We use it! (Score:2) "They block the transparent proxy" The reason for this is because about the only thing you can't forge is your apparent, from the Slashdot server's perspective, WAN IP address. Your real WAN or LAN IP is passed in an easy to manipulate X_FORWARDED_FOR or HTTP_VIA HTTP header (both non-standard HTTP/1.1). Of course, If you add a fake IP address to this header then a legitimate user-agent or proxy should still append you're remote IP. Although, I don Re:As If I Cared (Score:4, Interesting) Re:Where's the comparison? (Score:5, Informative) RSS2.0 had a problem last year where Reuters suffered a public embarrassment adopting the format. They followed the specification correctly, and it resulted in silent data loss - their stock identifiers were in angled brackets and got treated as an HTML tag by news aggregators. It wasn't rocket science, but this simple thing turned out to be impossible to do with RSS2.0 - it was tried many times. After the funky feed debacle, the community realised that a separate format independent of RSS2.0 was the only way to fix the underlying problem. The proponents of RSS2.0 tried to fix the silent data loss, and ended up breaking backwards compatibility with RSS0.92 - something they weren't prepared to do before Atom. Re:What is this stuff *for* anyway? (Score:2) Re:What is this stuff *for* anyway? (Score:4, Interesting) I'm not talking about just Dilbert comics or other entertainment outlets. Imagine notification of software updates. Email is lousy for this sort of thing when you get hundreds of emails per day. It's not searchable and it sits in your own account. Another benefit of RSS is control over the lists. You ever get an email from someone you know that didn't really come from someone you know, yet had a nice virus payload attached? This doesn't do that. Any info that comes from the RSS channel is something YOU have subscribed to and unsubscribing is dead easy. Further, with an RSS Reader I use called Feed On Feeds [minutillo.com], you can access its mySQL backend from any other software to do what you want with the information streams. There are many other readers that use this same philosophy. If you MUST have mailing lists, well, then mail out from there; not all of these sites have mailing lists and this would make a great way to present it in that format. You can reblog select posts, or a channel combining a number of other channels. Re:Who cares? (Score:2) That doesn't mean it will take over from RSS
http://slashdot.org/story/05/07/18/117226/atom-10-vs-rss-20?sdsrc=prev
CC-MAIN-2014-23
refinedweb
4,158
74.19
my code the following partthe following partCode:#include<iostream> #include<cstring> using namespace std; int main() { int x; x=1; int choice; int usenr; usenr=001; char usename[20]; char comment[250]; int age; char pwd[10]; while(x==1) { cout<<endl; cout<<"This program demonstrates all functions i worked with up to now!\n"; cout<<"These include... \n IF...ELSE \n LOOPS \n FUNCTIONS \n SWITCH CASE \n POINTERS \n STRINGS \n"; cout<<"PLease select from the following...\n"; cout<<" 1.EDIT PROFILE \n 2.VIEW PROFILE \n"; cout<<" 3.EXIT \n"; cin>>choice; switch(choice) { case 1: cout<<"To chnge your profile you need to enter your password.\n Please enter your password now \n"; cin.getline(pwd,10); if(strcmp(pwd,"rhino")==0) { cout<<"Admin access granted!\n"; cout<<"Please enter your name: \n"; cin.getline(usename,20); cout<<"Please enter your age: \n"; cin>>age; cout<<"Tell us a little bout yourself(max 250 characters):\n"; cin.getline(comment,250); } else { cout<<"Sorry, wrong password!\n"; } break; case 2: cout<<"Welcome "<<usename<<" This is your profile:\n"; cout<<"User Number: "<<usenr<<"\n"; cout<<"User name: "<<usename<<"\n"; cout<<"User age: "<<age<<"\n"; cout<<"Your comment is: "<<comment<<"\n"; cout<<"Please make sure this is correct and press enter\n"; cin.get(); break; case 3: cout<<"Are you sure?\n"; break; default: cout<<"Sorry, incorrect input\n"; break; } cout<<"Thank you for your participation! \n To try again press 1. \n or else press any key to exit\n"; cin>>x; } cin.get(); } why is this? what am i missing?why is this? what am i missing?Code:cin.getline(pwd,10)
http://cboard.cprogramming.com/cplusplus-programming/74989-cplusplus-problem-reading-input.html
CC-MAIN-2015-14
refinedweb
275
70.19
Object Publishing on the Web - Quixote From a user's perspective This weekend it seems to be de rigueur to comment on web object publishing, inspired by Ben Bangert's article on 'best of breed' controllers for MVC web frameworks. A long discussion about Zope and its ancestors ensued in the comments - Quixote borrows the key idea from Zope - simple object publishing via the web, and, in my opinion, succeeds because it kept it simple. I'm taking a very quick stab at explaining how the Quixote web application development framework hangs together, from a mere users perspective (me) rather than a web framework developers perspective. There are a number of other Quixote tutorials and documentation out there; in this effort I'm aiming to (but may miss completely) explain some Quixote basics, and perhaps impact enough information such that a prospective web framework shopper will themselves be able to put Quixote in context with some of the other web frameworks. Knowing me, very quick will soon turn into long and convoluted, although as kids soccer is fast approaching on the clock, perhaps the time constraint will work to good effect. Nevertheless, expect revisions! On Groovie, Ben takes a look at Best of breed Controllers for MVC web frameworks - its a worthwhile read that dives into some of the differences between the currently hot names, although I am surprised not to at least see a mention of Quixote. I don't believe the good folks that wrote Quixote and its related tools are much into the self-promotion (some articles), but they certainly have written some fine web development software and it deserves a little talking up now and then. Perhaps other Quixote users like myself will jump into the fray. Now at version 2.2, Quixote continues to evolve and progress, off a very fine base. Its in production use across a wide array of web platforms ranging from simple Python servers, to python Medusa, mod_python, AOL server, Apache, lighttpd, twisted and no doubt more. I use Apache (but perhaps soon lighttpd) in conjunction with scgi, which is a replacement for FastCGI that works particularly well. lighttpd supports scgi natively now too. Carrying on from Ben's article, I'm at a loss to plug Quixote's object publishing method into his categories. In Quixote, objects are accessible according to URL (Ben calls this implicit mapping) but the application developer must take this into account when designing the UI (Ben calls this programmatic mapping). What app doesn't, I wonder? I suspect most frameworks that allow something like resolve to calling the new() method of a class called calendar use some sort of rule set (whether its by convention or explicit) to determine what gets called where. Quixote's design allows your UI to naturally follow the url from root to end: / (root) will call calendar which will have one or more methods such as _index new edit delete In reality, your typical Quixote application will have object classes and separate, but related, object UI classes, looking rather like this: / (root UI class) will call CalendarUI which will have one or more methods such as _index new ... each of which will act upon a Calendar object) edit ... and perhaps this also dives deeper and... deeper... CalendarEditUI add_resource book_resource delete ... We'll look later at how the hierarchy of UI classes resolves as you drive down, but lets first start off with a basic project skeleton - there is nothing written in stone in this regard; perhaps it might be useful if the Quixote package included a project skeleton builder. Quixote encourages, but does not enforce, separation between objects and UI. A typical project might look like this: myapp/ bin/ runapp.py (this is your executable that runs the app) doc/ obj/ calendar.py contact.py resource.py ui/ calendar.ptl* (UI classes for object) contact.ptl (UI classes for object) home.ptl (perhaps an index class or function for home page content) publisher.py (typically a subclass of Publisher for your app) resource.ptl (UI classes for object) root.ptl (or call it driver.ptl or what turns your crank: this defines the root web interface from which all things flow. Q-fans might call theirs qslash.ptl) util.ptl (UI helper classes and functions used elsewhere) __init__.py date.py (various helper bits used by both obj and ui) utils.py *don't worry about these ptl extensions for now. They more or less contain straight python code, no need to learn anything new here... move along. We shall have a look at a Quixote application from the ground (from the driver or 'root' interface) on up (or down depending on how you view these things. Actually we'll start one level below the driver - every driver needs a car (or better yet, a bicycle!) - and in this case our car is a Publisher, which is the interface between your web server and your web application. I'll use a simple python-based HTTP server for my example, but this is principally the same whether one uses Apache, Medusa, lighttpd, or AOL server, only the app to server interfaces change. In this case my app script is running both the web server (simple_server) and the application itself (create_publisher). bin/runapp.py: from typicalapp.publisher import create_publisher if __name__ == '__main__': from quixote.server.simple_server import run print 'creating demo listening on' run(create_publisher, host='localhost', port=8080) Ok, that's pretty straight forward. Now on to the heart of the matter, publisher.py: from quixote.publish import Publisher from typicalapp.ui.root import RootDirectory def create_publisher(): return Publisher(RootDirectory(), display_exceptions='plain') Of course a real application might subclass Publisher to do all sorts of other things, including registering a session handler, filtering output through HTMLTidy, custom logging, whatever... Next, what are we publishing? Why, RootDirectory of course - from there all things flow. First, here's prototypical the web hello, world example. We are going to look at two different examples of this and explain why there are two different ways of pumping HTML out from your Quixote application. First - the base case: class RootDirectory(Directory): _q_exports = ['', 'hello'] def _q_index(self): return '''<html> <body>Welcome to the Quixote demo. Here is a <a href="hello">link</a>. </body> </html> ''' def hello(self): return '<html><body>Hello world!</body></html>' def goodbye(self): return '<html><body>Cheers! Salut!</body></html>' Pretty straightforward, no? Hey, guess what, there really is no step one. You don't have to design some complicated regex just to get to the root, or to any object no matter how complex your web applications url hierarchy may get to. To the regex challenged, this might be reason enough to consider Quixote! The URL your client/user is resolved from start towards its end, from 'root' on down. This a call ro will have Quixote dish up whatever has been defined as its root UI. In most cases this will be a class with one or more methods. Ok, there are two things happening here that are not immediately obvious - what are _q_index and _q_exports? Before going there, let me interject: If there is one basic principle in Quixote its *there shall be no magic*. To a large extent this is true, there is no magic, however it wouldn't be a framework without a few basic rules and conventions. _q_exports is a list containing all methods exposed by the class to the web. A '' name in exports allows the _q_index function to be exposed as a callable. Extra work? Perhaps, but the rule is be explicit in Quixote-land. This is a basic defense against exposing things that you don't want to, forcing the developer to think about what will be shown the world. This list can be dynamic of course, but more on that at another time. When Quixote gets to the requested object along the url requested, it expects to find a callable which returns a result; in our example about the callable _q_index() is returning some simple html. If the client had requested, Quixote would resolve or dispatch the request to the hello method of RootDirectory. Had the client requested, Quixote will return a 404 error, as the goodbye method was not listed in _q_exports. Clear as mud? Excellent, lets press on. Wait! Before going on, lets look at one minor tweak to RootDirectory and introduce the concept of Quixote's Python Template Language (PTL), which is to traditional templating languages as the un-Cola is to regular colas. It spits out html, but it doesn't look like the other stuff you drink. PTL is a hook to Python that allows you to write Python functions that integrate textual output more easily. An example will help. First, we have to enable PTL in our publisher script before PTL modules or functions can be called. That's easy, just add to runnapp.py: from quixote import enable_ptl enable_ptl() enable_ptl() is called before create_pubilsher is called. Now we can re-write our prototypical hello, world RootDirectory class as such: class RootDirectory(Directory): _q_exports = ['', 'hello'] def _q_index [html](self): ''' <html> <body>Welcome to the Quixote demo. Here is a <a href="hello">link</a>. </body> </html> ''' def hello [html](self): ''' <html> <body> <ol> ''' for n in range(1,101): '<li>Hello world, %d times!</li>\n' % n '''</ol> </body> </html> ''' # *Note*: you don't HAVE to use ptl extensions for all UI functions... def goodbye(self): return '<html><body>Cheers! Salut!</body></html>' Look closely or you'll miss the difference, its pretty slight. Doesn't look like much of a change, does it? On the surface, not much has changed other than the introduction of a [html] decorator-like extension to python, and the ability to intersperse text in among python code. This latter ability becomes very handy when your "template" is computationally complex, and allows you to lever what you already know - Python - rather than struggle with learning a new templating language and possibly force-fit your needs into whatever constraints the template language demands. You do not need to use PTL! Many Quixote developers use other templating packages including Cheetah, Kid, or their own. Andrew Kuchling once described PTL in a whitepaper: Without us ever realizing this while designing it, PTL turns the idea of HTML templating on its head: the default is program code, and there's an escape sequence into HTML. The escape sequence is just Python's notation for string literals, which is a compact and easily readable notation. Functions that actually contain HTML therefore can look a little messy, but most HTML only appears e lowest-level functions; the bulk of our PTL code simply calls other PTL templates and inserts the odd <br>or <table>here and there. Greg Ward also discussed PTL in a comprehensive article on Quixote in Linux Journal. Other links to articles on Quixote can be found on the project web site. PTL is not web designer friendly, but it is very programmer friendly. I'm not a web designer but have created some half decent xhtml/css based designs - my workflow is to prototype the design in straight xthml and css, and then decide how I want to chunk same up within my application. I don't find this to be problematic for the overall design of a site or web application. Bits and pieces within the application I rarely find the need to prototype in xhtml first; generally I find I spend more time refactoring ptl code into smaller reusable widgets. Your mileage may vary, but for myself, I find it now hurts my eyes to look at html peppered with %this%, %some for loop that%. I'd rather look at plain python. Oh, by the way, vim has a ptl syntax add-on too. PTL also delivers other benefits, chiefly automatic escaping of strings that are not explicitly safe. Read the project documentation on PTL for further information. In installment two we'll look at how Quixote traverses a hierarchy of UI objects.
http://mikewatkins.ca/2005/10/02/
crawl-003
refinedweb
2,012
62.68
Hello Alex When you mention "pulls a list" I assume this is code. Can you post the code? The syssiteassets folder is where all blocks that are marked as "For this site" live. I suspect the code that generates the list isn't filtering correctly for each site. David Hi David, thanks for your reply. The page controller code is as follows: public class ResourcesPageController : PageController<ResourcesPage> { public ActionResult Index(ResourcesPage currentPage) { var results = new List<SearchResultViewModel>(); var categories = new List<int>(); try { results = ResourcesHelper.GetAll(categories); } catch (System.Exception) { throw; } var vm = new ResourcesPageViewModel(currentPage) { Categories = new List<CategoryViewModel>(), Resources = results }; var cats = new List<CategoryViewModel>(); foreach (var item in currentPage.TypesCategoryRoot ?? new CategoryList()) { var cat = CategoryHelper.CreateCategoryViewModel(item); cats.Add(cat); } vm.Categories = cats.ToList(); return View(vm); } [HttpPost] public JsonResult Search(SearchCriteria criteria) { var catList = criteria.Categories ?? new List<int>(); var results = new List<SearchResultViewModel>(); try { results = ResourcesHelper.Search(criteria.SearchTerm, catList, criteria.SortBy); } catch (System.Exception e) { throw new System.Exception(e.ToString()); } return Json(results); } } Presumably the key to the answer here is in that foreach loop where currentPage.TypesCategoryRoot isn't doing what I think it should be doing. Can you see anything? thanks, Alex Update to shut this thread down. I can't speak of the correctness of the code we're running, but it works most of the time. However I did manage to remove the resources which were appearing in our second site through the CMS. In Admin > Tools > Manage Content, i located the hidden resources. after removing one of them the resources still appeared on site B, but instead of coming from sysassests they were now linking to the trash. So I looked at the trash, lots of things had appeared in there, cleared the trash and the resources were now gone. result. Hi All, We're running v11.4 in DXC and have a multi site setup. We have 'resource' blocks in a folder which is specifically for Site A and a page type which pulls a list of the resources from that folder. and it's the same setup for site B. The resource page should only pull resources for its own site. However in site B resource from both site A and B are showing up and the ones from site A are linking to.... Has anyone come across this before? I don't really know what the syssiteassets folder is and couldn't find much enlightening information before making this post. Any ideas how to troubleshoot and fix? thanks in advance, Alex
https://world.optimizely.com/forum/developer-forum/CMS/Thread-Container/2018/7/resources-from-one-site-appearing-on-another-syssiteassets/
CC-MAIN-2022-33
refinedweb
426
59.6
I have a program that I need to modify but I am stuck. Here are the terms. - Modify the program by inserting a big part into a loop that will play the program over and over and will display a number of times you have played the game. - Have the program ask you how many times you want to play the game. - Then the program will play the game the number of times you request. - Keeping track of the number of times there was a win or loss. - Have the program display the percent (%) of times there was a win. - The answer should be a little under 50% (about 47%). Here is the sample program that I have to go off to include these stipulations above. #include <iostream> #include <cstdlib> #include <ctime> using namespace std; int rolldice (); void main () { enum Status {CONTINUE, WON, LOST}; int Sum, MyPoint; Status GameStatus; srand(time(0)); //Roll the dice Sum = rolldice(); cout << "You rolled " << Sum << '.' << endl; switch (Sum) { case 7: case 11: GameStatus = WON; break; case 2: case 3: case 12: GameStatus = LOST; break; default: GameStatus = CONTINUE; MyPoint = Sum; cout << "Point is " << MyPoint << '.' << endl; break; } while (GameStatus == CONTINUE) { Sum = rolldice(); cout << "The roll is " << Sum << '.' << endl; if (Sum == MyPoint) GameStatus = WON; else if (Sum == 7) GameStatus = LOST; } if (GameStatus == WON) cout << "You win the game." << endl; else cout << "You lose the game." << endl; } int rolldice () { int Die1, Die2; int Total; Die1 = (rand() % 6) +1; Die2 = (rand() % 6) +1; Total = Die1 + Die2; return Total; } Can anyone help me out?! click You chose an angry face icon for this thread... Exactly what are you angry at? Is it your own noobness maybe If you want to learn programming, being angry is the worst possible starting point. Nobody cares how it works as long as it works "Have the program ask you how many times you want to play the game. " Can you do.
http://forums.codeguru.com/showthread.php?471908-Issues-overloading-operator&goto=nextnewest
CC-MAIN-2020-24
refinedweb
317
80.82
1. UpdatePanel, ScriptManager and other ASP.NET Ajax elements are squiggled as 'unrecognized'. 2. Formatting lost in UpdatePanel when switching from Design to Source view. 3. Weird __designer::wfdid attributes appear on ASP.NET Ajax elements 4. No intellisense is available for any of the new controls. The reason it is happening is that structure of the ASP.NET Ajax-enabled Web site has changed. Name of the Ajax assembly changed as well as it's location and the namespace. Here is what you can do: 1. Before you install ASP.NET Ajax RC, make sure you uninstalled any previous releases. 2. After you install ASP.NET Ajax RC: - Remove reference to the old assembly which may still be sitting in the bin folder of the Web site or simply delete the old assembly from bin. - Delete cached intellisense schemas in - C:\Documents and Settings\USER\Application Data\Microsoft\VWDExpress\8.0\ReflectedSchemas - C:\Documents and Settings\USER\Application Data\Microsoft\Visual Studio\8.0\ReflectedSchemas - Add a web config file to the Web site from C:\Program Files\Microsoft ASP.NET\ASP.NET 2.0 AJAX Extensions\v1.0.xxxx. 原文地址: 【译文】(待有时间我再翻译出来) 如何修正更新到ASP.NET AJAX 1.0之后的智能提示影响
https://blog.51cto.com/zhoufoxcn/167067
CC-MAIN-2021-49
refinedweb
199
54.69
so, after some debugging i see that the event editorcreated will be fired to early. I change this: Code: // Fix editor size when control will be visible (function fixEditorSize() { // If element is not visible yet, wait. if( !this.isVisible() ) { arguments.callee.defer( 50, this ); return; } var size = this.getSize(); this.withEd( function() { this._setEditorSize( size.width, size.height); // Indicate that editor is created this.fireEvent("editorcreated"); // add here }); }).call( this ); // Indicate that editor is created //this.fireEvent("editorcreated"); // this i remove and add this above Version 0.8.1 released Version 0.8.1 released Guys, please check out new version - 0.8.1: Removed dependency on MIframe. Rewritten is the code for editor resize. Hope, now it will work correctly in all cases. @Dumbledore Your change is not incorporated yet. Will do it soon. Great! Thank you for sharing, xor.ExtJS 3.4, WAMP Apache 2.2.17, PHP 5.3.5, MySQL 5.5.8 another small patch for the 0.8.1 add this in onRender: Code: // Create TinyMCE editor. this.ed = new tinymce.Editor(id, this.tinymceSettings); // Create a new Windows Group for the dialogs this.ed.windowGroup = new Ext.WindowGroup(); this.ed.windowGroup.zseed = 12000; Code: var win = new Ext.Window( { title: s.name, width: s.width, height: s.height, minWidth: s.min_width, minHeight: s.min_height, resizable: true, maximizable: s.maximizable, minimizable: s.minimizable, modal: true, stateful: false, constrain: true, layout: "fit", manager : this.editor.windowGroup, items: [ new Ext.BoxComponent({ autoEl: { tag: 'iframe', src: s.url || s.file }, style : 'border-width: 0px;' }) ] }); Why do you put the whole component into "(function() {"? This is not usual for a component. I have removed it and it works fine without. You could also add the ComponentMgr registration which allows creation by xtype: Code: Ext.ComponentMgr.registerType("tinymce", Ext.ux.TinyMCE); uwolfer, registerType is there already. It is on the line 512 of my distrubution. Outer "(function()" is intended to hide some implementation details from polluting global namespace. It allows for making some kind of private variables and functions. Why do you think you should remove it? Is there any objective reason? Version 0.8.2 Version 0.8.2 Hello! Just released version 0.8.2. It incorporates changes offered by Dumbledore, refactored a bit. I also redesigned component page. It has a link to download area, so grab release from there: Demo package does not include demo page any more. Only test-example files and release notes. I think it is more logical, but tell me if you miss the index page. Finally, I've made a donation button with help of Moneybookers. Some of you wanted to support the component development, so now it is possible. It is there, on the side bar of the component's page. I hope it is working. If not, please let me know. Last edited by xor; 19 Apr 2010 at 11:34 PM. Reason: Added link Hello xor, I'm using tinymce in a window containing a form panel and a tabpanel. This tabpanel contains tree tabs with a tinymce editor inside. If I click on each tab everything works ok, but when don't click all tabs and save the form/close the window I get the following errors in FF: e is null Line 7537 and el is undefined Line 5947 Does this sound familliar to you (or anybody else)????? What am I doing wrong, I can't find the problem and hope somebody can help me.. I'm using Ext.ux.TinyMCE.js version v0.8.2 (without iframes) hansl1963, can you make little test case based on one of test.*.html files in my distribution? Put there your tab panel configuration and form submit code (no server-side is required, as error occurs immediately as I got from your explanation). I will debug it and see what's wrong. I also have changed the code that it does not break if TinyMCE is not included in a specific page. It also improves loading performance since it does initialization (and override of TinyMCE stuff) in case of usage, not on init. See the attached patch for some changes and improvements.
http://www.sencha.com/forum/showthread.php?24787-Updated-Ext.ux.TinyMCE-TinyMCE-form-field-(v0.7b1)/page36
CC-MAIN-2013-48
refinedweb
694
62.75
NAME quotactl — manipulate file system quotas LIBRARY Standard C Library (libc, -lc) SYNOPSIS #include <sys/types.h> #include <ufs/ufs/quota.h> int quotactl(const char *path, int cmd, int id, void *addr); DESCRIPTION The quotactl() system call enables, disables and manipulates file system quotas. A quota control command given by cmd operates on the given filename path for the given user or group id. (NOTE: One should use the QCMD macro defined in file system..h>). Only the usage fields are used. This system call is restricted to the super-user. Q_SYNC Update the on-disk copy of quota usages. The command type specifies which type of quotas are to be updated. The id and addr arguments are ignored. RETURN VALUES The quotactl() function returns the value 0 if successful; otherwise the value -1 is returned and the global variable errno is set to indicate the error. ERRORS The quotactl() system call will fail if: [EOPNOTSUPP] The kernel has not been compiled with the QUOTA a pathname. [EROFS] In Q_QUOTAON, the quota file resides on a read-only file system. [EIO] An I/O error occurred while reading from or writing to a file containing quotas. [EFAULT] An invalid addr was supplied; the associated structure could not be copied in or out of the kernel. [EFAULT] The path argument points outside the process's).
http://manpages.ubuntu.com/manpages/precise/en/man2/quotactl.2freebsd.html
CC-MAIN-2016-22
refinedweb
225
57.98
Accounts with zero posts and zero activity during the last months will be deleted periodically to fight SPAM! #include <stdio.h>int main(void){ printf("Hello World! 测试"); return 0;} I suggest posting a link to the file or attaching the file.Also state the correct encoding and the wrong encoding value detected.NOTE: If this is a program run-time issue search for the solution because it is NOT a CB issue.It is posted somewhere on this board.Tim S. This is a bug which was existed long time ago. I have created a video for demo this issue: However, are you sure you've saved your file in a proper file format like UTF-8? From your video it seems not. Strange is also that you are not being warned about that issue. Usually C::B does so. I had uploaded another video which show CB can not correctly detect the utf-8 file that save by itself: Quote from: edison on October 17, 2014, 06:27:23 amI had uploaded another video which show CB can not correctly detect the utf-8 file that save by itself:Well what happens is perfectly OK. As you create an UTF-8 w/o BOM and have setup windows-936 as default encoding it will be used when opening the file. There is no way you can distinguish exactly between UTF-8 and windows-936 in case you've only ANSI characters in the file.So either you use UTF-8 with BOM or start just coding your Korean (whats-o-ever) stuff into the file. ...not to forget that another perfect solution is to use a file with bom if the target compiler supports this.
http://forums.codeblocks.org/index.php?topic=19737.msg134839
CC-MAIN-2020-05
refinedweb
287
63.39
Rainbow adds text color, background color and style for console and command line output in Swift. It is born for cross platform software logging in terminals, working in both Apple's platforms and Linux. Usage Nifty way, using the String extension, and print the colorized string: import Rainbow print("Red text".red) print("Blue background".onBlue) print("Light green text on white background".lightGreen.onWhite) print("Underline".underline) print("Cyan with bold and blinking".cyan.bold.blink) print("Plain text".red.onYellow.bold.clearColor.clearBackgroundColor.clearStyles) It will give you something like this: You can also use the more verbose way if you want: import Rainbow let output = "The quick brown fox jumps over the lazy dog" .applyingCodes(Color.red, BackgroundColor.yellow, Style.bold) print(output) // Red text on yellow, bold of course :) Motivation and Compatibility Thanks to the open source of Swift, developers now could write cross platform programs with the same language. And I believe the command line software would be the next great platform for Swift. Colorful and well organized output always helps us to understand what happens. It is really a necessary utility to create wonderful software. Rainbow should work well in both OS X and Linux terminals. It is smart enough to check whether the output is connected to a valid text terminal or not, to decide the log should be modified or not. This could be useful when you want to send your log to a file instead to console. Although Rainbow is first designed for console output in terminals, you could use it in Xcode with XcodeColors plugin installed too. It will enable color output for better debugging experience in Xcode. Please notice, after Xcode 8, third party plugins in bundle (like XcodeColors) is not supported anymore. See this. Install Rainbow 3.x supports from Swift 4. If you need to use Rainbow in Swift 3, use Rainbow 2.1 instead. Swift Package Manager If you are developing a cross platform software in Swift, Swift Package Manager might be your choice for package management. Just add the url of this repo to your Package.swift file as a dependency: // swift-tools-version:4.0 import PackageDescription let package = Package( name: "YourAwesomeSoftware", dependencies: [ .package(url: "", from: "3.0.0") ] ) Then run swift build whenever you get prepared. You could know more information on how to use Swift Package Manager in Apple's official page. CocoaPods Add the RainbowSwift pod to your Podfile: source '' platform :ios, '8.0' pod 'RainbowSwift', '~> 3.0' And you need to import RainbowSwift instead of Rainbow if you install it from CocoaPods. // import Rainbow import RainbowSwift print("Hello CocoaPods".red) Carthage Carthage is a decentralized dependency manager for Cocoa application. To integrate Rainbow with Carthage, add this to your Cartfile: github "onevcat/Rainbow" ~> 3.0 Run carthage update to build the framework and drag the built Rainbow.framework into your Xcode project (as well as embed it in your target if necessary). Follow and contact me on Twitter or Sina Weibo. If you find an issue, just open a ticket on it. Pull requests are warmly welcome as well. License Rainbow is released under the MIT license. See LICENSE for details. Github Help us keep the lights on Dependencies Used By Total: Releases 3.0.0 - Nov 26, 2017 Swift 4 support. 2.1.0 - Aug 3, 2017 Expose Rainbow. extractModes as public. 2.0.1 - Sep 30, 2016 Support for Linux. 2.0.0 - Sep 25, 2016 Swift 3 compatibility. 1.1.0 - Mar 24, 2016 Support for Swift 2.2
https://swiftpack.co/package/onevcat/Rainbow
CC-MAIN-2019-18
refinedweb
592
68.57
type() method returns class type of the argument(object) passed as parameter. type() function is mostly used for debugging purposes. Two different types of arguments can be passed to type() function, single and three argument. If single argument type(obj) is passed, it returns the type of given object. If three arguments type(name, bases, dict) is passed, it returns a new type object. Syntax : type(object) type(name, bases, dict) Parameters : name : name of class, which later corresponds to the __name__ attribute of the class. bases : tuple of classes from which the current class derives. Later corresponds to the __bases__ attribute. dict : a dictionary that holds the namespaces for the class. Later corresponds to the __dict__ attribute. Returntype : returns a new type class or essentially a metaclass. Code #1 : Output : True False True True True Code #2 : Output : <class 'dict'> <class 'list'> <class 'tuple'> Code #3 : Output : Both class have different object type. Code #4 : Use of type(name, bases, dict) Output : {'__module__': '__main__', 'var1': 'GeeksforGeeks', '__weakref__': , 'b': 2009, '__dict__': , '__doc__': None} {'b': 2018, '__doc__': None, '__module__': '__main__', 'a': 'Geeks'} Applications : - type() function is basically used for debugging purposes. When used other string functions like .upper(), .lower(), .split() with text extracted from a web crawler, it might not work because they might be of different type which doesn’t support string functions. And as a result it will keep throwing errors, which are very difficult to debug [Consider the error as : GeneratorType has no attribute lower() ] . type() function can be used at that point to determine the type of text extracted and then change it to other forms of string before we use string functions or any other operations on it. - type() with three arguments can be used to dynamically initialize classes or existing classes with attributes. It is also used to register database tables with SQL. - Pygorithm module in Python - Python | Getting started with SymPy module - numpy.polysub() in Python - Python | Plotting Area charts in excel sheet using XlsxWriter module - Python | Pandas series.str.get() - Python | Pandas Series.str.partition() - Python | Pandas Split strings into two List/Columns using str.split() - Python | Pandas Series.str.strip(), lstrip() and rstrip() - Internal working of Python - Generating Word Cloud in.
https://www.geeksforgeeks.org/python-type-function/
CC-MAIN-2018-39
refinedweb
369
65.62
Is there a way to find the center point of a cylinder with RhinoScript? I have looked at rs.IsCylinder and rs.SurfaceCylinder but neither gives a center point from what I can see. I noticed RhinoCommon has a property called “Center” for cylinders - - Is there a way to get this property through RhinoScript? SurfaceCylinder(surface_id) Returns: tuple(plane, number, number): of the cylinder plane, height, radius on success I believe the origin of the plane returned is the center of the cylinder’s defining circle. Hi, Probably found not the most effective way (using Python), but at least it works import rhinoscriptsyntax as rs def midPt(): input = rs.GetObject(“pick a cylinder”) diag = rs.AddLine(rs.BoundingBox(input)[0], rs.BoundingBox(input)[6]) diag_domain = rs.CurveDomain(diag) mid_pt = rs.EvaluateCurve(diag, diag_domain[1]/2) cleanup = [] cleanup.append(diag) rs.DeleteObject(cleanup) rs.AddPoint(mid_pt) midPt() Best of luck, Ilja You are correct, the origin should give me what I need thanks. ilmar_ik - This is handy but it doesn’t work on partial cylinders because of the bounding box method. @Helvetosaur - It appears this is not correct after all. I thought I had tried using origin in the past and it didn’t give me what I needed, which is why I asked the question originally. I just found a situation where it does not give me the center of a defining circle. In this file, it appears to give the center of the defining arc’s domain. Cylinder.3dm (34.7 KB) Sample program ↓ import rhinoscriptsyntax as rs cyl = rs.GetObject(" Pick Cylinder ") cyl_properties = rs.SurfaceCylinder(cyl) cyl_origin = cyl_properties[0][0] rs.AddPoint(cyl_origin) print("Cylinder Properties: " + str(cyl_properties)) print("Cylinder Origin Rounded X: " + str(round(cyl_origin[0],3))) print("Cylinder Origin Rounded Y: " + str(round(cyl_origin[1],3))) print("Cylinder Origin Rounded Z: " + str(round(cyl_origin[2],3))) I found a way to get my point, but this is still not foolproof for all cylinders. Is there a better way? import rhinoscriptsyntax as rs cyl = rs.GetObject(" Pick Cylinder ") cyl_properties = rs.SurfaceCylinder(cyl) cyl_rad = cyl_properties[2] cent_pt_list =[] edge_list = rs.DuplicateEdgeCurves(cyl) for edge in edge_list: if rs.IsArc(edge): edge_rad = rs.ArcRadius(edge) if edge_rad == cyl_rad: arc_cent = rs.ArcCenterPoint(edge) cent_pt_list.append(arc_cent) rs.DeleteObject(edge) cyl_center = (cent_pt_list[0] + cent_pt_list[1])/2 rs.AddPoint(cyl_center) Yep, looks like you are correct, it just gets the plane of the cylinder and the origin has nothing to do with the center of the arc - my bad. Your method might work if you fix it a bit - doesn’t right now, I can explain later - assuming that all circular ends of the cylinder fragment haven’t been trimmed off - i.e. it can find an arc edge. If not, it will fail… Your fixed script might look like the following: import rhinoscriptsyntax as rs #get file tolerance tol=rs.UnitAbsoluteTolerance() cyl = rs.GetObject(" Pick Cylinder ") cyl_properties = rs.SurfaceCylinder(cyl) cyl_rad = cyl_properties[2] cent_pt_list =[] edge_list = rs.DuplicateEdgeCurves(cyl) for edge in edge_list: #need to check for arcs *or circles* if rs.IsArc(edge) or rs.IsCircle(edge): edge_rad = rs.ArcRadius(edge) #find values *within tolerance* if abs(edge_rad-cyl_rad)<tol: arc_cent = rs.ArcCenterPoint(edge) cent_pt_list.append(arc_cent) rs.DeleteObject(edge) #make sure you got *exactly* two points if len(cent_pt_list)==2: cyl_center = (cent_pt_list[0] + cent_pt_list[1])/2 rs.AddPoint(cyl_center) One of the easy mistakes to make is to have a line of code like this: if a == b: do something But in the world of floating point math, a will never equal b exactly - there will always be a tiny difference, even if it’s out at 12 decimal points. So it will virtually always return False. The way to check for “equality” between floating point values is to see if they are equal to within a given tolerance - the commonly accepted way is to see if the absolute value of the difference between the two is less than the tolerance. So you have: if abs(a - b)<tol: do something As to completely trimmed cylinder bits, as I said, your script above will not find the center. The scriptlet below should get you the center - albeit it’s the center of the axis line of the untrimmed underlying surface. It uses some RhinoCommon to avoid duplicating edges and then deleting them, as well as accessing a function .Center that doesn’t to be currently available via rhinoscriptsyntax. import rhinoscriptsyntax as rs cylID=rs.GetObject("Pick Cylinder Surface",8,True) if cylID: #get the underlying surface geometry (it's a brep face actually) cyl_srf = rs.coercesurface(cylID) if cyl_srf: #try to find the cylinder from the surface rc,cyl = cyl_srf.TryGetCylinder() #if successful, add a point at the cylinder "center" if rc: rs.AddPoint(cyl.Center) HTH, --Mitch I see, I ran into this issue and I was working around it by using round() I was working around this by using rs.IsSurfaceTrimmed() and un-trimming if necessary. Your RhinoCommon example looks much cleaner than what I came up with using just rhinoscript. Thanks for the help.
https://discourse.mcneel.com/t/find-center-point-of-cylinder-with-rs/64327
CC-MAIN-2022-33
refinedweb
846
58.48
- Neopixel strip with WiFi .). Only if we use a NodeMCU will we need a logical level converter, because this board uses a 3.3 volt logic, while NeoPixels need to be driven with a 5-volt logic (actually you could keep a 3.3 volt logic if the NeoPixel power supply was between 3.3 and 3.8 volts, but we have a 5-volt power supply). First, we need to download the file archive of this project from GitHub where we will find both the sketches for the two boards and the Python sources in the library, including some animation examples. We prepare the Arduino IDE development environment to be able to program both boards: for Fishino we have to download the libraries from the site and verify that the firmware version is aligned with the version of the libraries (for details visit the page “Firmware update” in the “Docs” section of the site). To prepare for programming NodeMCU, we must instead open the IDE settings via the “File” menu and click on the icon to the right of the “Additional URLs for the Board Manager”; a window will open where we can paste the following string:. After doing this first step, we can install the board, by clicking on the main screen of the IDE Tools->Board->Board Manager… the “Tab Manager” window will open; in the search box type “esp8266” and install the latest version of “esp8266 by ESP8266 Community”. At this point in the section Tools -> Tab, we will see the NodeMCU 1.0 tab under the section “ESP8266 Modules”: let’s select it and change the parameter “Upload speed” to 115.200. The last step to complete the configuration of our development environment (necessary for both boards) is to download the NeoPixel management library distributed by Adafruit Let’s download the zip and unpack it in the Arduino “libraries” folder, then restart the IDE to import the new library; now we are ready to open the “NeoPy_Fishino” sketch (selecting “Arduino Nano” as tab in the Tools menu): modify the values MY_SSID and MY_PASS of our Wi-Fi network and the number of LEDs that we intend to connect to the board, instead we don’t modify the PORT and PIN values; to set a static IP we decompose and modify the line of IPADDR, finally we load the sketch on Fishino. Now connect the NodeMCU board to your PC and open the “NeoPy_NodeMCU” sketch (selecting “NodeMCU 1.0” as the tab in the Tools menu), also here we need to modify only the values contained in the “SETUP” section leaving unchanged the values of PORT and PIN; then load the sketch on NodeMCU. The operation of the system is represented in the diagram in Fig. 1: through the library in Python “NeoPy” we create an object that represents our NeoPixel installation, then we go to set the LEDs, updating only the array contained in the object itself with the methods “Set()” or “SetAll()”; with the method “Show()” is packaged the array with the information of all the LEDs and sent using the UDP protocol to the specified endpoint (IP and port). Fig. 1 As we can see in Listing 1 (sketch for Fishino Guppy) and Listing 2 (code for NodeMCU) the sketches are very similar: in the “setup” function the connection to the Wi-Fi network is initialized with the parameters previously set, then a UDP server is created listening on the specified port. In the “loop” function the packet is received in UDP and its length is validated: in fact, a correct packet must have a length equal to three times the number of specified LEDs, as an array of three bytes will represent the colour of each LED; for example, for two LEDs the packet will be RGBRGB. listing1 Then the UDP string unpacking and the setting of each LED takes place; finally, the method to update all the LEDs with the command “strip.show()” is called up. To develop this system, we chose the UDP protocol, because one of its strengths is the ability to send and receive packets much faster than TCP, and the transmission speed is necessary in case we have to reproduce effects with relatively rapid colour changes, one of the UDP protocol weaknesses is that the loss of packets is not managed, especially in case of slow network or high transmission speeds. For this reason, to create our lighting effects, we will have to use small delays so as not to overlap the incoming packets on the Fishino or the NodeMCU. Listing 2 After programming the two boards, we follow Fig. 2 to connect pin 3 of Fishino to the data line of the NeoPixel strip (pin IN) using jumpers: remember to insert the 470 Ohm resistor. We then power the star by connecting it to the power supply via pins 5V and GND and Fishino by connecting it to pins 5V and GND. Following Fig. 3 we connect the D3 pin of the NodeMCU to a channel in the 3.3-volt section of the level converter, then we leave the same channel but the 5-volt section and connect with the 470-ohm resistor to the data line of the NeoPixel strip (white cable). We supply the low voltage part of the converter with NodeMCU’s 3V3 and GND pins; through the two cables coming from the power supply, we supply the 5-volt part of the converter, the NodeMCU (through the VIN and GND pins) and the NeoPixel strip. Now let’s power up the two NeoPixel installations and check that they are connected to our Wi-Fi network (for example by accessing the router configuration page or with the free Advanced IP Scanner software); download the latest available version of Python 3.x and install it on our PC. Fig. 2 PYTHON DEVELOPMENT To develop the Wi-Fi controller, we chose Python because it is a modern, flexible, intuitive and easy to learn a language; it is also cross-platform, so the code we wrote can be run on Windows, Apple operating systems and Linux (in our case on Raspberry Pi Pi). On the official website, we can also find a complete guide to this coding language. We open the newly installed IDLE program, which looks like a simple notepad, but allows us to write and run programs in Python: we immediately save the empty file in the same folder where neopy.py is located. Through the NeoPy library we have the following commands available: - Set(N, (R, G, B)): we can set the LED number N (in our example from 0 to 55) with the colour formed by R, G, B (each can assume a value from 0 to 255); for example, to set the fifth LED to green the command will be object.Set(4, (0, 255, 0)); - SetAll((R, G, B)): very similar to the previous command, but in this case, we set all the LEDs on the same colour paying attention to the double parentheses; for example to set all the LEDs to blue the command will be object.SetAll((0, 0, 255)); - SetBrightness(L): is used to set all the LEDs a percentage of brightness L (value between 0 and 100), default value 80; for example, to set half of the brightness the command will object.SetBrightness(50); - Wheel(V) object: returns a type value (R, G, B) based on the V parameter passed (value between 0 and 255 that passes all colours); for example to set all LEDs to a random colour the command will object.SetAll(object.Wheel(RAND_NUMBER) object); - Show() object: it is used to effectively send the UDP command via Wi-Fi and make effective all the changes we have made, physically setting the LEDs. Fig. 3 Now that we know the available commands, let’s write this simple program: from neopy import NeoPy import time star= NeoPy(56, “192.168.1.3”) star.SetBrightness(30) for i in range(56): star.Set(i, (255, 0, 0)) star.Show() time.sleep(0.5) In the first line we imported the NeoPy library, while in the second line we imported the “time” library that we will need later to time the animation; then we instantiated a NeoPy object in the variable “star” indicating 56 LEDs and the IP address 192.168.1.3 (the default port is 4242 and must be the same as in the sketch). We then set the overall brightness to 30% and we created a for cycle in which, at each step, the variable “i” will assume values ranging from 0 to 55, always at each step we set one LED at a time on the red colour with the method “Set()”, then update the LEDs with “Show()” and wait for half a second thanks to the object “time”. Let’s be careful to align the three commands inside the for loop with a tab; otherwise, Python will give us compilation error, save and press F5 to run the program: if everything has been set correctly, we will see the NeoPixels on the star animate. We can instantiate as many objects as we want, for example with the following program we instantiate both the star and the NeoPixel strip and then color them one in white and the other in red: from neopy import NeoPy star= NeoPy(56, “192.168.1.3”) strip= NeoPy(150, “192.168.1.19”) star.SetAll((255, 255, 255)) star.Show() strip.SetAll((255, 0, 0)) strip.Show() In the project repository downloaded from GitHub, we can also find the files “examples_star.py” and “examples_strip.py” that contain some examples and will help us to understand the various scripts better, in order to create the animations. Let’s move now to Raspberry Pi to try the same Python scripts we created and started on the PC: let’s download a new Raspbian image from (Raspbian Lite will be fine as we don’t need the graphical interface), write it on the MicroSD with Win32DiskImager, insert the MicroSD in Raspberry Pi, feed Raspberry Pi and connect it to the same network our NeoPixels are connected to. With an SSH terminal (such as Putty or MobaXTerm) we connect to Raspberry Pi (user “pi”, password “Raspberry Pi”) and move to the folder “pi”: cd /home/pi/ We install git with the command (where required we press Y and ENTER): sudo apt-get install git Here, too, we download the files belonging to this project from GitHub and enter the relative folder with these two commands: git clone cd NeoPy/ Check if you are in the same folder as the file “neopy.py” with the command ls -l and create a new file “test.py”: nano test.py Copy the code of the small program previously written to turn on the star one LED at a time and close the file saving it with CTRL+X, then Y and ENTER. Now, let’s try to run the program with the command: python3 test.py The NeoPixel star will light up just like when we launched the same program on the PC; this serves to avoid keeping a PC on to act as a Wi-Fi controller for all NeoPixel installations: rather we’ll keep on Raspberry Pi which is much more compact and less expensive than electricity. Now let’s suppose that we have created several Python programs on our Raspberry Pi; each program performs different effects on our NeoPixel installations and must be executed at certain times of the day. It would be very uncomfortable to remember to launch them manually every time, and for this reason, crontab comes to our aid; crontab is a scheduler present in Raspbian to which we can indicate the exact moment in which we want to launch a program: at first impact, the syntax will be a bit ‘complicated, but we will analyze it in detail. Let’s type the command: crontab-e The first time we will be asked which editor we want to use to edit the schedule file we type the command: 2 (Nano) and let’s ENTER; this will open the editing window to insert the lines of the tasks, each line corresponds to a program that we want to execute and must be composed of six parameters separated from the space: MI H D MO DW COMMAND. Let’s see the parameters in detail: - MI: minutes, a value from 0 to 59, or * which means “all”; - H: hours, a value from 0 to 23, or * which means “all”; - D: day of the month, a value from 1 to 31, or * which means “all”; - MO: month, a value from 1 to 12, or * which means “all”; - DW: day of the week, a value from 0 to 6 (where 0 is Sunday, and 6 is Saturday), or * which means “all”; - COMMAND: the command to execute (remember always to enter the full path to the Python file). Let’s move to the bottom of the file and write this line: 0 * * * * python3 /home/pi/NeoPy/test.py We have just set the execution of our test.py at minute zero, of every hour, of every day of the month, of every month, of every day of the week; we can now save and close the file with CTRL+X, then Y and ENTER and wait for the new time stroke for the script start and check that the star lights up just as if we had launched the script by hand: in this way we can schedule the execution of all the scripts we want by adding new lines in the crontab. CONCLUSION With the NeoPixels controlled in Wi-Fi and the scheduling system on Raspberry Pi we can, for example, place a LED strip in the room and simulate the sunrise at a specific time to create a luminous alarm clock, or we can create beautiful lighting effects in the garden after sunset. Besides, by placing the LEDs in some rooms of the house, we can switch them on in a timed and random way to simulate our presence in the house or, again, we can connect sensors to Raspberry Pi and control the lighting according to their status. FROM OPENSTORE Switching-mode power supply 50W 5V STRIP 150 LED RGB ADDRESSABLE WS2812B – NEOPIXEL STRIP 300 LED RGB ADDRESSABLE WS2812B- NEOPIXEL FT1300M – CHRISTMAS STAR WITH LED NEOPIXEL
https://www.open-electronics.org/how-to-control-neopixel-strip-with-wifi/
CC-MAIN-2021-04
refinedweb
2,406
55.51
Keywords are the reserved words in Python. We cannot use a keyword as variable name, function name or any other identifier. Here's a list of all keywords in Python Programming. If we give the function an odd number, None is returned implicitly. >>> True and False False >>> True or False True >>> not False True: >>>. Learn more about Python break and continue statement.: class ExampleClass: def function1(parameters): … def function2(parameters): … Learn more about Python Objects and Class. def is used to define a user-defined function. Function is a block of related statements, which together does some specific task. It helps us organize code into manageable chunks and also to do some repetitive task. The usage of def is shown below: def function_name(parameters): … Learn more about Python functions.. Learn more about Python if and if...else Statement.: def reciprocal(num): try: r = 1/num except: print('Exception caught') return return r print(reciprocal(10)) print(reciprocal(0)) Output 0.1 Exception caught None: if num == 0: raise ZeroDivisionError('cannot divide'). Learn more about exception handling in Python programming.: names = ['John','Monica','Steven','Robin'] for i in names: print('Hello '+i) Output Hello John Hello Monica Hello Steven Hello Robin Learn more about Python for loop.(). Learn more on Python modules and import statement. global is used to declare that a variable inside the function is global (outside the function). If we need to read the value of a global variable, it is not necessary to define it as global. This is understood. is created which is not visible outside this function. Although we modify this local variable to 15, the global variable remains unchanged. This is clearly visible in our output.. Learn more about Python lamda function.. def outer_function(): a = 5 def inner_function(): nonlocal a a = 10 print("Inner function: ",a) inner_function() print("Outer function: ",a) outer_function() Output Inner function: 10 Outer function: 10 keyword is as follows: def outer_function(): a = 5 def inner_function(): a = 10 print("Inner function: ",a) inner_function() print("Outer function: ",a) outer_function() Output Inner function: 10 Outer function: 5 Here, we do not declare that the variable a inside the nested function is nonlocal. Hence, a new local variable with the same name is created, but the non-local a is not modified as seen in our output.. Learn more about Python while loop.. with open('example.txt', 'w') as my_file: my_file.write('Hello world!'), >>>.
https://www.programiz.com/python-programming/keyword-list
CC-MAIN-2018-30
refinedweb
400
57.98
I have searched and tested code several different ways and cannot get this to work. Every way I try to implement code, I am not seeing any filtered results... It only shows all data. I could really use some guidance in either setting up components differently or needing more code. Any input is helpful!! I am newer to code but understand the basics. Since I have been testing so many different ways to code this, I currently have code for my first dropdown #StateDropdown and would like it to update results onChange. I have a second dropdown #TypeDropdown and would like end users to use both, one, or none to filter. Current code (again just to test if it'll work for the first dropdown): import wixData from 'wix-data'; export function StateDropdown_change(event) { //Add your code for this event here: } wixData.query("PMR_Dog_Database") // Query the collection for any items whose "State" field contains // the value the user selected in the dropdown .eq("state", $w("#StateDropdown").value) .find() // Run the query .then(res => { $w("#DogResults").data = res.items; });
https://www.wix.com/corvid/forum/community-discussion/filtering-repeater-w-2-dropdowns
CC-MAIN-2019-47
refinedweb
178
64.61
Opened 5 years ago Closed 12 months ago #5786 enhancement closed fixed (fixed) Add timeout implementation to Deferred, based on cancellation support Description (last modified by ) If you can cancel a Deferred, you should be able to time it out as well. Some non-obvious points: - It'd be useful to have an ability to distinguish between different timeouts (and cancellation) via the exception in the Failure. - You may want to add multiple timeouts, and have them affect different levels of the callback chain. Consider a high-level request than opens a connection, then sends out a HTTP request. The code that creates the network connection might want to add a 60 second timeout on the connection attempt. Code calling the combo high-level API might want a 10 minute timeout, which applies to combination of network connection and HTTP request. Change History (52) comment:1 Changed 5 years ago by comment:2 Changed 5 years ago by Basically I disagree with this feature and I would like to reject it and close the ticket as invalid. I was super happy when setTimeout went away, and I want to get defer out of twisted.internet. If we do need such a feature I really think it should be external to Deferred itself, but I would rather just have reactor.callLater(n, d.cancel) be the idiom here. But without a lot more justification in terms of concrete use-cases, I would say we just shouldn't do this. comment:3 Changed 5 years ago by Point 2 as a use case was based on actual code I've just written. The implementation I will provide is indeed just a slight improvement over reactor.callLater(n, d.cancel) (which is insufficient only because it leaves dangling DelayedCalls). The minimal implementation you suggest, or the slightly improved one I intend to implement, both provide point 2 as a side effect of the implementation: def getConnection(reactor, endpoint): d = endpoint.connect(Factory()) addTimeout(reactor, d, 60) # 1 minute timeout limited to connection return d def sendCommand(reactor, command): d = getConnection(reactor, makeEndpoint()) d.addCallback(lambda conn: conn.request(command)) addTimeout(reactor, d, 600) # 10 minute timeout on whole process return d Even if your implementation was sufficient (and it almost is), it should be documented, and the nuances in point 2 explained. comment:4 Changed 5 years ago by A correction: your naive implementation (in addition the resource leak) does *not* enable to functionality of multiple timeouts as in the example, because it doesn't cancel the first timeout when the Deferred fires. So another reason to actually implement this and provide it in Twisted: the naive way is broken. comment:5 follow-up: 7. comment:6 follow-up: 8 Changed 5 years ago by As for point 1, whose description I just updated: Given multiple timeouts on a single Deferred, you want a way for a GUI, or a log message, to distinguish between them. Did the operation fail because the connection attempt timed out or because the command timed out? Now, you could do this yourself; every time you call addTimeout you also do: addTimeout(reactor, d, 60) def explainReason(reason): reason.trap(CancelledError) return Failure(MyTimeoutReason()) d.addErrback(explainReason) But of course that will give the wrong result if someone *manually* cancels, and it's also needlessly repetitive boiler plate. So point 1 is also necessary. comment:7 follow-up: 9). comment:8 Changed 5 years ago by As for point 1, whose description I just updated: Given multiple timeouts on a single Deferred, you want a way for a GUI, or a log message, to distinguish between them. OK, that might need to be a feature built in to deferred cancellation; I'd rather that it not be reflected by the type of the exception, but with an attribute of some kind. But, it's not deterministic. You cancel the Deferred from the consumer's perspective, and the originator gets to callback or errback it with whatever value or error it likes. You can't depend on the ability to catch different errors. Did the operation fail because the connection attempt timed out or because the command timed out? Now, you could do this yourself; every time you call addTimeout you also do: So wait - are you saying that the timeout will be creating a new Deferred for each timeout, and ... chaining the previous Deferred in? That's interesting. That could reliably give you different exceptions, by having a custom canceler. I hadn't thought of that. But of course that will give the wrong result if someone *manually* cancels, and it's also needlessly repetitive boiler plate. So point 1 is also necessary. I still don't quite see how this follows, but maybe I'm starting to get your point. comment:9 Changed 5 years ago by). Hmm. I hadn't thought much about adjustment; I guess my thought is that Deferred timeouts are for the absolute it-must-take-this-long-and-no-longer sort of timeout, where you set it and forget it. As opposed to "did I get any bytes in past 60 seconds" where you're constantly resetting, but that happens on the protocol level. In particular, because Deferreds are a "give me one result" API. Does this make sense? Or can you think of a counter-example where you would want to adjust it? (Perhaps each layer adding a timeout should be able to adjust its timeout, though, even if there's no way to access other layers' timeouts). Propagation happens automatically via the way cancellation knows how to pass cancellation to Deferreds returned by callbacks. At this point I think I'll switch to coding and tests, so we have something more concrete to discuss. comment:10 Changed 5 years ago by comment:11 Changed 5 years ago by comment:12 Changed 5 years ago by If adjusting the timeout is a usecase you want to support, it seems like the easiest way to handle that is to have setTimeout return an object that can adjust that timeout. Just returning the {{{IDelayedCall}} would perhaps be sufficient. comment:13 Changed 5 years ago by The basic implementation is done (other than what Tom suggested), so I'd appreciate comments before moving on to documentation. comment:14 Changed 5 years ago by comment:15 Changed 5 years ago by comment:16 Changed 5 years ago by The implementation is completely done. I'd appreciate a review on the design before moving on to documentation. comment:17 Changed 5 years ago by Reviewing. comment:18 Changed 5 years ago by Hey Itamar. Thanks for pursuing this. Aesthetic judgments: - The naming is just awful. - The 'add' in the name addDeferredTimeoutimplies that this is something you might want to use multiple times on the same Deferred. But that would result in multiple cancellations. Which, of course, you might want in some very rarified circumestances, but generally that would not be the case. I suggest a verb-ier name, like timeOutDeferred. - The callIDvariable implies that callLaterreturns some opaque identifier. It doesn't, and it hasn't fora really long time. The thing it does return, an IDelayedCall, would be better encapsulated by a name like, e.g. delayedCall. - In naming the exceptionparameter, you should describe its role and not its type. What is this exception used for? - The documentation is basically incoherent. - The word 'timeout' is used 11 times in the documentation. First as a noun, in the name, where it's slightly ambiguous what it's describing. Then it's used as a noun describing something else - the amount of time before the timeout. Then, as the first word, it's used as a verb. Please clean this up to use "timeout" as little as possible and instead use active words that describe what's happening, like "canceled" or "called back" or "failed". I recommend the parameter be called something like "seconds" or "interval" or "delay". - "Registering a custom canceller is recommended." Recommended by whom? For what purpose? In what way? (In other words: avoid the passive voice, please.) What would such a custom canceller do, especially with respect to this specific API which recommends it? Also: you don't register a custom canceller, you specify one. Registration implies a post-creation API that modifies some state. Deferreds must be created with their canceller. Furthermore, the user of the addDeferredTimeoutAPI will, often as not, be consuming rather than originating a Deferred, and therefore won't be able to "register" a custom canceller. I don't think that this docstring should recommend anything specific about cancellers, custom or otherwise, but it should provide a reference to the cancellation documentation (although something as terse as @see: L{Deferred.cancel}would be sufficient.) - I do not care about the API's inner thoughts and feelings about its role or its level of knowledge. I want to know what it does. The whole second paragraph that describes whether it is "aware" of certain callbacks or not is very confusing. Please rephrase it to describe its behavior with respect to those callbacks - will they be called? - NO AMBIGUOUS ANTECEDENTS. "it will not be able to time it out". WHAT. - Omit the needless word. "In particular" seems to convey no information. "typically the reactor... for testing" seems similarly useless. If this is important information, it should be on IReactorTimeitself, not on some random API. - The test docstrings use unclear language to describe what they're testing. test_noTimeoutIfCallback- says that it "will not cause a timeout". This is a negative assertion and therefore impossible to prove :). It should say something more specific about the state of the Deferred. Several other tests refer to 'causing timeouts' as well. test_timeoutsays something about the Deferredbeing "fired in time". In what time? - I am a little confused by what it means to cancel the timeout "manually" as opposed to just preventing the Deferred from being cancelled. - Attempting to throw an exception over the heads of a bunch of intervening callbacks, and hoping that it will reach the target point in the callback stack, strikes me as poor form. This should be introduced at some point in the call chain where you can make a definitive statement about all the enumerated failure modes. You need to be able to specify not only the exception that you want to raise, but also the exception(s) that you want to trap. But, ultimately, I don't think that this feature of cancellation is really helpful at all and I think it should be removed. - A more useful feature, which I don't see any way to easily add to this externally, would be to force the timeout to happen immediately Failure.checkis more efficient than Failure.trapand would likely read better in this code anyway. Mandatory Stuff (violated policies, et. al.): - The docstring is missing a couple of required elements. - It's missing a @returnand it doesn't return None. Documentation of the return type is always super important. - You should also include separate @typeannotations for each parameter (and an @rtype). - This needs a place in the narrative documentation, as your pre-review comment suggests. A narrative explanation might help to better explain the relationship between the call chain and the timeout, which, except for the exception-type-translation stuff (which I don't think works entirely right and you don't really need anyway) is actually quite straightforward. - The exception type translation suffers from a fatal flaw and can never really do what it's advertising 100% reliably. If you have a deferred A whose callback returns a deferred B, and B's custom canceller does a partial recovery and returns a result, but then the next callback on A raises CancelledErrorfor some unrelated reason, it will appear as though the timeout exception was raised, when in fact, the timeout just made the operation complete faster and the cancellation was from somewhere else. But, again, I don't think this feature serves any real purpose: the real utility in something like this would be in conjunction with a custom canceller that would make the first exception raised by the Deferredbeing cancelled into a TimeoutError. More importantly though, nothing about the documentation or the tests explains why I would care about the distinction between these exceptions at the consumer's point in the call chain. This needs to be explained, and explained in fairly painstaking detail. In short, this appears to be almost-adequately documented and adequately tested, and I can see no procedural reason not to merge it. However, I'm fairly unhappy with the design and the documentation seems as unhelpful as some of our oldest, crummiest documentation. I think this would be a lot better if it didn't attempt the exception-translation stuff and just did the one thing that users would generally forget, which is the reference-cleanup of the timeout callback itself when the Deferred fires; exception translation could be added as a separate feature later. comment:19 Changed 5 years ago by Most of this is things I can just go and fix, but I don't want to punt on the custom exception use case without having a design. And it is a real use case: if my HTTP request timed out, "failed to look up host" is a very different thing than "response timed out", if only for debugging purposes (but possibly for business logic as well). A reasonable design, which is implicit in your critique, is having the custom canceller be able to raise some exception other than CancelledError... but that is not currently part of the cancellation API. Does that seem like a reasonable feature to add? If so, it could then be used to implement custom exceptions for timeouts. Obviously these would be two additional, separate tickets. comment:20 follow-up:. comment:21 Changed 5 years ago by And it is a real use case: if my HTTP request timed out, "failed to look up host" is a very different thing than "response timed out", if only for debugging purposes (but possibly for business logic as well). Have you implemented this using this cancellation API? It'd be helpful to see the code. comment. There's no such thing as a "final" errback, as you well know. The use-case is that you're using Twisted's APIs, and they provide a certain working model (i.e. Deferred control flow) and you expect it to be honored consistently ;). More seriously, cancellation would be intercepted for the same reason that any application would implement a SIGINT handler. There are some applications which you just want to do as much work as possible and when a user calls cancel() that's their way of saying "OK, I've waited long enough, just give me what you've got". Consider a 3D rendering program with a preview function, where the output of the Deferred is going to be an image either way, but may be lower resolution if you cancel it before it's all the way done. comment:23 Changed 4 years ago by The work to be done here: - Remove the exceptionargument to the new function. We can always add that functionality later. - Then, address all review comments that are still applicable. comment:24 Changed 4 years ago by comment:25 Changed 4 years ago by comment:26 Changed 4 years ago by Thanks for working on this. - Most of the tests want to be changed to use successResultOfand/or failureResultOf. - Consider using IDelayedCall.activeinstead of IReactorTime.getDelayedCallsto check if the timeout has been cancelled. - In test_timeout, the call will trigger after advancing excatly 10. test_callbackStackshould probably be split into (at least) two tests, one checking that the cancellation doesn't happen for the first timeout. And the scond, that the cancellation doest happen for the second timeout. Please resubmit for review after addressing the above points. - It is unclear to me why the final example in the narrative documentation is in the form of interaction with the python interpreter. Condsider changing it to match the other examples. - The pargraph about waiting on a deferred in the documentation for timeOutDeferredis awkward, but I don't have any concrete suggetions for improving it. Particularly "The waited deferred" feels ungrammatical. - Consider renaming the function timeoutDeferred. I'm inclined to treat "timeout" as a single compound word, rather than two words. Existing usage in twisted seems to agree: 56 files where it is treated as a single word compared to 18). comment:27 follow-up: 28 Changed 4 years ago by comment:28 Changed 4 years ago by Hi itamar, are you going to take over and finish this ticket? I'm sorry that I'm busying with job finding and finishing my paper so I haven't done my tickets yet. It's great if you finish this ticket. Also my vacation will start at next week and last a whole month so I will be available then. Again, sorry about the delayed tickets. comment:29 Changed 4 years ago by I was hoping to finish it, but you can feel free to take it back over if you have the time. It'd be great to have you working on Twisted again if you do! comment:30 Changed 4 years ago by comment:31 Changed 3 years ago by Still remaining is fixing up howto, and then it's ready for review again. comment:32 Changed 3 years ago by comment:33 Changed 3 years ago by OK, ready for review again. comment:34 Changed 3 years ago by Thanks Itamar, (and kaizhang) Notes: - Builds pass (apart from some spurious errors) - The new documentation renders correctly in sphinx. - Tests pass locally - Merges cleanly Points: - source:branches/deferred-timeouts-5786-4/twisted/internet/defer.py - "Don't use this. Use L{timeoutDefered}." -- Maybe add a link to a ticket about deprecating then removing this API. - source:branches/deferred-timeouts-5786-4/twisted/test/test_defer.py - "class TimeoutTests(unittest.TestCase):" -- Maybe inherit from SynchronousTestCase instead. - test_noTimeoutIfCallback - The first two assertions seem unnecessary - And the important last assertion might be better tested by checking IDelayedCall.active on the return value of timeoutDeferred. - test_noTimeoutIfErrback - Same applies here. - test_noTimeoutIfCancel - Again I might be nicer to check whether the DelayedCall is still active. - test_timeout - Seems like it would be better to advance the reactor to exactly the timeout value rather than *slightly* before then after. - And again assert that the DelayedCall is not active. - test_callbackStack - I found the docstring difficult to understand...or at least the docstring didn't seem to accurately describe the test implementation. - And I'm not sure this test is necessary since it seems to be testing the standard cancellation behaviour rather than anything specific to timeoutDeferred. - test_multipleTimeouts - Again, isn't this just testing the standard cancellation behaviour of chained deferreds, or am I misunderstanding? - test_cancelReturnedDelayedCall - I'd quite like to see this near the top of the TestCase because it's such an important part of the new API and so that the test story flows neatly into subsequent tests that then make use of the returned DelayedCall. - I don't see a response to glyph's 3D rendering usecase in #5786:comment:22. His comment seems to be more about the cancellation API in general, but perhaps his point here is that there are circumstances where you need to differentiate between cancellation due to a timeout and some other cancellation reason. How could a user do that if they used this API? - Some of my comments match those made by tom in #5786:comment:26 As far as I'm concerned this can be merged after you answer or address the numbered points above. It's been through 3 rounds of review from glpyh, exarkun and tom.prince. But if you'd like another opinion then resubmit for another review. comment:35 follow-up:. Unless the user passed a cancellation function in in the first place, in which case bets are off. Perhaps these two pieces of code could be merged to address this ticket? comment. Partially to clarify for other observers, and partially just to make sure I understand for myself the additional feature that the code from Otter is implementing; Otter's timeout_deferred doesn't attempt what Itamar originally suggested, which is to say, a different exception at the start of the callback chain - you call cancel() and the canceler does whatever it does, so the initial callbacks in the chain receive whatever values they expect, including CancelledError. Instead, it adds a callback to the Deferred at the point in the chain where it is called, which means that the caller can know what kind of translation they're getting and when they're getting it, and not interfere with the earlier callback chain. The one issue I have with the code in Otter is that it's not quite flexible enough. While translating CancelledError to TimeoutError is in the case of a failure is usually what you want, there are weird edge-cases. Maybe a partial result is unacceptable, and you really want to fail with a TimeoutError unconditionally once you've cancelled. Maybe you're dealing with a subclass of CancelledError that exposes some important information about the progress during cancellation, and naively translating to TimeoutError naively would throw that away and that's bad. So I would say that while this makes a reasonable default, the exception (or result!) translation callable ought to be an optional parameter to timeoutDeferred, effectively a callback added via addBoth, but one which is only invoked if the timeout has fired. Unless the user passed a cancellation function in in the first place, in which case bets are off. Actually I think that the behavior here is pretty well-defined; if they specified a custom canceler, they can get the behavior that they implemented; with a custom error translator this can behave however the caller wants. Perhaps these two pieces of code could be merged to address this ticket? Yes; looking over Itamar's branch, it seems like it would be very straightforward to add the error-translation stuff from otter in (with the tweak I recommended) and get something that satisfies everything brought up in the discussion on this ticket so far. Thanks! comment:37 Changed 2 years ago by comment:38 Changed 19 months ago by Hello. Looks like cyli has addressed all the issues which rwall had with otter's implementation. If so, should I submit a patch with that code? You can review it and I can fix any issues raised. I would really like to have this in Twisted as I am looking to use it in other projects too. comment:39 Changed 19 months ago by Please do. comment:40 Changed 19 months ago by comment:41 Changed 16 months ago by Stealing for PyCon2016 comment:43 Changed 16 months ago by comment:44 Changed 16 months ago by comment:45 Changed 15 months ago by Thanks for the changes. Major comments: - I don't understand the translateCancellation example. I was expecting it to be based on the previous example. From the translateCancellation docs and API docs I don't understand how I should use the translateCancellation... should I return something from it? How it should know how to handle timeout cancellations ? So maybe it can be renamed to onTimeoutCancel as it will be called when the deferred is cancelled with a timeout. In this way users can do whatever they want here :) - Do we need the new twisted.internet.task.TimedOutError ? Can we re-use twisted.internet.error.TimeoutError ? - We will be on 16.4.0 next. - I think that cancelledToTimedOutError should be _cancelledToTimedOutError to make it explicit a private function. - I can not make sense of the last part of the release notes: can produce a TimedOutError, or other custom error, distinct CancelledErrors. Minor comments: - In api doc we should have L{int} instead of C{int} to make it explicit that this is a cross reference. Same for C{None}. - I don't think there is any value in using _DummyException in tests... it can be DummyException or mabye give it a less generic name... like SomeGenericException or CustomCancelException ... the current DummyException is used for both purposes but I think that this does not help making the tests easier to read :) - I think that the functionality tested by test_defaultTranslationPreservesCancellationFunctionCallback should be part of the narrative docs ... or if the current documentation already talks about it should be some more obvious that when a custom cancellation is used, the timeout will no longer work. Since I don't understand how you would like the translateCancellation to be used, I am not reviewing the remaining tests as I don't know if they are valid or not. Please note that there is also this ticket #8533 which tries to clean the current mess in twisted.internet.defer regarding the TimeoutError and the timeout() function. For this ticket, I think that you can ignore that work, but maybe it will help to get an idea of what we had in the past. Please check my current comments and resubmit for review . Thanks! comment:46 Changed 13 months ago by Hi Adi! Thank you for your detailed review. I think I have addressed all your review comments: - I've renamed the callable timeoutDeferred takes to onTimeoutCancel(thanks for the suggestion!) - I've Cleaned up the documentation so that rather than discussing how to "translate custom CancelledErrors to TimedOutErrors" we talk about the onTimeoutCancelcallable as just a function that gets called when the deferred is timed out, which can be used for logging or returning different values. This seems like a more reasonable use case. - I totally forgot twisted.internet.error.TimeoutErrorwas there, thanks - I've used it instead. - I've fix the tests to be more clear about explicitly testing that timing out doesn't happen if the deferred is callbacked/errbacked/canceled before the timeout, since canceling again can cancel dependent deferreds. - I've bumped the @since tag to 16.5 and updated the topfile to just talk about the function itself rather than all the options it can take or errors it can return. Thanks! () comment:47 Changed 13 months ago by Thanks cyli! I think this is almost ready to go; looks like all the existing feedback was dealt with. There are a couple of minor issues though: - In the tradition of the methods on Deferred, timeoutDeferredshould really return its argument. This would make chaining possible, which can make code that uses this read more nicely. For example, you could shorten your example in the docs to: later = task.deferLater(reactor, delay, f) def called(result): print("{0} seconds later:".format(delay), result) return timeoutDeferred(later, 3, reactor).addBoth(called) - If I set a timeout on a Deferred, but then I *later* decide that I want to cancel or extend the timeout, how would I do that? The reason I make this blocking feedback is that it probably affects the signature here and I'd like to avoid a deprecation cycle. However, I'm not sure that it's actually an important thing for the API to support. Then the non-blocking things: - Over the years twisted.internet.taskhas taken on some really unfortunate scope-creep. The reason it's called taskis that it was originally the home of Cooperator, which allows you to break up a long-running task into small pieces to run cooperatively with the reactor. However, Cooperatorneeded to run periodically in the reactor, so that gave rise to LoopingCall, which lived next to it in the same module. Later, since LoopingCallhad a Deferredin it, and also had some time-related code, deferLatercame to live in this module as well. Now it seems like taskhas become the dumping ground for everything related to any potential intersection between Deferredand the passage of time. The net result here is that we have this confusingly-named module which no longer really has anything to do with "tasks" at all. The relevance to this change is that it makes me wonder if this functionality should live in defer.pyafter all. The previous Deferred.setTimeoutwas a disaster which should not have been in defer.py, but that was because it imported the reactor directly and therefore tightly bound defer.pyto the rest of twisted.internet. Given that this method takes an IReactorTime from the outside, it could become a method on Deferredwithout having defer.pyimport anything. I'm sorry that this review point was quite long-winded; I just wanted to give the necessary background here. However, I'm going to leave it up to the author's discretion; I think the possibilities should be leaving it where it is, moving it to a top-level function in defer.py, or maybe even making it a method on Deferred. - You don't need to use C{}/ L{}markup in comments; it won't be processed by anything. - Rather than using [0]for function closeures, I've started getting in the habit of using attributes on the function objects themselves. So for example, instead of timedOut = [False], something like this instead: def timeItOut(): timeItOut.didTimeOut = True deferred.cancel() timeItOut.didTimeOut = False "..." if timeItOut.didTimeOut: "..." Totally optional, it just lets you skip using zero as a magic number. (It is trading in one hack for another; what we want is to use the nonlocal keyword, but no dice there until we can drop python 2...) Thanks again for checking back in on this long-suffering issue :). I think the first required point is really all you need to take care of here, and that's a sufficiently minor detail (assuming you add a test for it) that I think you can just land it without re-review. If you want to make any more significant changes based on my optional feedback, you should probably submit for re-review. Up to you! comment:48 Changed 13 months ago by Thanks for reviewing, Glyph! At the San Francisco Twisted meetup, there was a discussion amongst attendees (including glyph, runciter, moshez, and others) about some of these API design discussions. Summary: - No one particularly objects to moving this to the deferpackage. - No one could really think of a good use case for needing to cancel a timeout. The possible cases that were suggested were better served being implemented using LoopingCall, and perhaps we should document an example of such a case where a LoopingCallbased implementation would be more appropriate. - Everyone seem to like the idea of deferred.addTimeout(...), so that one could do something like: deferred.addCallbacks(...).addCallback(...).addErrback(...).addTimeout(...).addCallback(...) I'll make these changes. :) comment:49 Changed 12 months ago by Updated - I think maybe the LoopingCall vs addTimeout docs should maybe go in a different prose doc ticket? Maybe something like "timing recipes"? I'm not sure yet what a good section for that would be - it seems kind of random to say "oh and in this one very special case, use LoopingCall}} instead of {{{addTimeout. comment:50 Changed 12 months ago by Feel free to put this back in the review queue. comment:51 Changed 12 months ago by From GitHub: The implementation is good and the documentation sufficient. I would like at least some of the tests I put into that comment included in this PR to elucidate the interaction between timeouts and existing callbacks and errbacks. They're merely nice to have, though, so if I don't see them here in the next 24 hours I'll merge this and open a new PR to add them. Replying to itamar: These are non-obvious :). Counterpoint: no, it wouldn't. What code would ever want to know this, and why? Cancellation can occur for any number of causes; timeouts are one of them, but there are plenty of others, and I think it would actually be bad to have code distinguish, for the most part, because you don't want code that works when a user explicitly cancels but fails when a timeout happens. We should probably have a discussion about how Tubes handle this case (the "progress" callback), since there is some stuff that you can do that's generic in support of this use-case, but I don't think it fits at this level. Having a Deferredpublish any API to set timeouts on events, e.g. other Deferredsand function calls, that aren't in your code and you can't observe, is probably bad. These would just be arguments to the functions that initiate certain activities, and those timeouts would be handled internally and would have specific error conditions associated with them. For example, you will also want a timeout on "time between bytes received on this connection" when a download is in progress, as distinct from "time to download the whole file" and "time to initially connect", but that's none of the Deferred's business (how would you even expose all of those distinct stages?).
http://twistedmatrix.com/trac/ticket/5786
CC-MAIN-2017-39
refinedweb
5,517
62.88
It's not the same without you Join the community to find out what other Atlassian users are discussing, debating and creating. Hello, we have a system build in a way that people are grouped up under affected components in a custom multiple group picker field called "Komponenten". Everything is dependant on those groups and system works fine but for users and process supervisors it would be important to know who the people behind affected component are for this whole process to be more transparent. At the moment we are keeping list of components and people in charge of them in external Excel sheet so you can imagine we want to improve that. Is there a way to get this somehow with Script runner and Scripted field maybe? Custom field or any other suggestion is welcome. Best regards, Rok That's a perfect use-case for a scripted field, and almost one I've done myself quite recently. The group-picker I used was a single select, and they specifically wanted just the user names as a long block of text, but the code below should get you started. import com.atlassian.jira.component.ComponentAccessor def groupManager = ComponentAccessor.getGroupManager() def customFieldManager = ComponentAccessor.getCustomFieldManager() def cf = customFieldManager.getCustomFieldObjectByName("Group to List") def theusers = groupManager.getUserNamesInGroup(issue.getCustomFieldValue(cf)) return theusers.toString() I forgot - there's a minor problem with scripting this. If the members of the group are changed, the results of the scripted field will be wrong until the issue is re-indexed (i.e. project re-index or any edit or transition on the issue) Isn't the contents updated when you view the issue, but searching needs to wait for the issue to be reindexed? Thank you both for your answers! I'll try to adjust the code so that it works with multiple group picker field somehow. About re-indexing, I don't know what are we going to do, we can try to ask from admins when they make a group modification to do re-index or something. Cheers,.
https://community.atlassian.com/t5/Jira-questions/Having-the-members-of-groups-info-somehow-listed-in-Issue/qaq-p/293064
CC-MAIN-2018-30
refinedweb
343
54.63
Posted 25 Oct 2018 Link to this post Good morning, I am trying to finish off a project and I am having trouble getting the PDF assemblies and namespaces sorted out. I am using VS 2015, and the Telerik UI for WinForms version2018.3.2016 I wanted to export a RadPanel that has a few controls. But I can't seem to find the RadFixedDocument class, or the Telerik.Windows.Pdf namespace at all. I was following documentation at Getting Started and RadFixedDocument Class and I have all the named assemblies referenced, but I still cannot see the Telerik.Windows.Pdf namespace at all. Much less the sought after RadFixedDocument object. At this point I am sure I have more assemblies referenced that I actually need, I have attached a screenshot of what I have included so far. Can anyone help me find what I am missing?? Thank you, Mike Posted 25 Oct 2018 in reply to Michael Link to this post I did figure this out after a while. It looks like the documentation is a little off. It says the objects are defined in Telerik.Windows.Pdf.Documents.Fixed.Model Where I eventually found them is Telerik Telerik.Windows.Documents.Fixed.Model Works great now that I've found it, but if anyone else is looking for the Telerik.Windows.Pdf namespace, look in the above. Posted 26 Oct 2018 Link to this post
https://www.telerik.com/forums/pdf-radfixeddocument-processing
CC-MAIN-2019-13
refinedweb
236
66.94
Where a simple xor gets transformed beyond what it ever thought Briefing Required Background This tutorial is the continuation of this one. If you are not familiar with LLVM pass development you should read the previous tutorial, as the basics won't be covered in this tutorial. To go through the code samples given in this tutorial you will need to be able to read C, C++ code and simple LLVM bytecode. Riddle Let's start with a riddle. The obfuscation we are going to implement will replace an operation OP by the sequence of instructions below. Your job is to try to guess what OP is. Given: - two integers D and X of same bit width - \(Y = D \text{ OP } X\) - \(f:x = \sum x_i \cdot 2^i \mapsto x' = \sum x_i \cdot 3^i\) - \(h:x = \sum x_i \cdot 3^i \mapsto x' = \sum (x_i \text{ mod } 2) \cdot 2^i\) We are going to rewrite the operation OP as follows: - Transform operands D and X with f: \(D'=f(D) \text{ and } X'=f(X)\) - Apply the ADD operation to the new operands: \(A=D'+X'=\sum (d_i + x_i) \cdot 3^i\) - Transform back the result with h: \(A'=h(A)=h(D'+X')\) - And we magically obtain: \(h(D'+X')=D \text{ OP } X\) Don't cheat and try to find out what OP could be. Obfuscation Ok for those of you who have guessed and the ones that skipped ahead, the answer is X-OR. Oops, not that XOR, this one: The sequence of operations above (operands transformation, addition, result transformation) is a more complex, harder to understand way to code a XOR. If you want to make sure that the transformation is not trivial, post in the comments whether you found the solution of the riddle; and if you did, the time it took you. We chose to obfuscate XORs because this operator is very present in cryptography... and because we all get bored sometimes. But you shouldn't trust me. Taking a look at OpenSSL code base: $ find $OPENSSL_SRC -name "*.c" -exec grep -cE " \^ " {} \; | awk '{s+=$1} END {print s}' we find around 648 XORs. Behind the Scenes Now, since programmers don't really deal in magic (at least not officially), let's try to understand what happened. The first thing to understand is that a XOR operation is basically an ADD operation without carry. This means that if we have a representation of our operands in which the ADD carries won't propagate, then XOR and ADD are equivalent. And this is exactly what we have done. The function f takes the base 2 coefficients (bits) of the input and multiplies them by the corresponding power of 3. You may have noticed that this operation is almost a change of basis operation from base 2 to base 3 in which the 'bits' are not transformed. Since \(d_i \le 1 \text{ and } x_i \le 1\) then \(a_i \le 2\). This means that the bits of A are never going to propagate to upper bits, since the sum of \(d_i\) and \(x_i\) is smaller than the 'basis' in which they are represented (here 3). Because no carry has propagated, applying a modulo 2 on the bits of A and writing the result in base 2 (which is done by the h function) will give us the same behaviour as a XOR operation on D and X. Indeed, an addition modulo 2 is equivalent to a XOR. Bottom line: by changing the representation of XOR operands we are able to assure that the ADD operation will not propagate carries. Which means that in this new representation XORs ans ADDs are almost the same. And a simple modulo takes care of this difference. Requirements Environment To use the passes we are going to develop, you will need LLVM and clang sources and you'll have to build them. If you need details on how to get these, you can refer to the 'Requirements' section of the previous LLVM tutorial. To make sure that we all have the same basic project infrastructure you can checkout the corresponding git repository: $ git clone In this article we are not going to explain every line of code, just the interesting parts. This is why you will need the git repository. At every step of the development you will be given the name of the branch holding the appropriate code state (we are going to develop the obfuscation in 3 steps). Each commit is a fully functional pass, the complexity of which increases with every commit. From now on we will be working exclusively inside the cloned git repo llvm-passes (we will refer to it as $PASSDIR). It contains the followings: - cmake: cmake definitions to check the Python environment. Required to generate our passes test suites. - doc: contains the sources of this tutorial, in case you find a shaming typo. - llvm-passes: contains one subdirectory per pass, and a CMakeList.txt used to generate the passes. - tests: tests and validation for our passes, contains one directory per pass. The tests are using llvm-lit, the LLVM integrated validation tool. - CMakeList.txt: the file used to generate the required Makefiles LLVM: the Programmer's Stone! To implement the obfuscation detailed above we are going a create an LLVM BasicBlockPass. A FunctionPass might also be a reasonable choice because we will work on XOR chains (XORs using the result of other XORs as operands) later, and this choice will have a direct impact on our algorithms (spoiler!). Here is our plan of attack: - Write a basic pass transforming single XORs:basic_xor - Write a more complex pass transforming chained XORs:chained_xor - Write another pass splitting chained bitwise operations in order to combine it with the XOR obfuscation:propagated_transfo We'll start with the basic_xor branch, you might want to checkout this branch: $ git checkout basic_xor Turning XORs into ADDs Enough chit-chat! To implement the first version of the obfuscation we need to: - Find all the XORs in the current BasicBlock. - Choose the base used to transform the XOR operands. In the introduction we use base 3, but this can be generalized to an arbitrary base (almost arbitrary...). - Transform the XORs' operands. - Create an ADD between the transformed operands. - Transform back the result of the ADD to a standard representation. - Replace all uses of the result of the original instruction by the result of 5. I will look for you Let's start with the easy part. To find the XORs we are going to iterate through every instruction in each basic block and check if it is a XOR. The checking function looks like this: BinaryOperator *isEligibleInstruction(Instruction *Inst) { BinaryOperator *Op = dyn_cast<BinaryOperator>(Inst); if (not Op) return nullptr; if (Op->getOpcode() == Instruction::BinaryOps::Xor) return Op; return nullptr; } Nothing mind-blowing here, but if you are not familiar with LLVM API this might interest you. I will find you Once we have found a XOR we will need to pick a base for the transformation. It is a perfect opportunity to introduce diversity in our obfuscations. If we were to use the same base for every XORs the obfuscation pattern would be trivially identifiable. 'But you said earlier we could choose an arbitrary base, so let's pick a random number and stop wasting my time.' Humm... we may have oversimplified things a little. In theory the base can be arbitrary (greater than 2!). But if we obfuscate operands which type is \(N_b\) bits long, we will need to store \(S = \sum (d_i + x_i) \cdot base^i \text{ , } i < N_b\). Are you beginning to see the problem? This value can become HUGE, well above what a 'standard' type might hold. But we are programmers so 'huge' is not accurate enough... The maximum value of S is \(base^{N_b} - 1\). This means that we need \(floor(log_2(base^{N_b} - 1)) + 1\) bits to store S. The good thing is that LLVM allows you create integer variables with an arbitrary bit size. Thanks to the LLVM API we can hold and apply almost any operation to integers of any size. This is awesome! LLVM is doing all the work for us! And to take advantage of this we only need two functions. A function that, given the number of bits of the operands and a base, returns the required number of bits to represent the obfuscated operands: unsigned requiredBits(unsigned OriginalSize, unsigned TargetBase) { assert(OriginalSize); if (TargetBase <= 2 or OriginalSize >= MaxSupportedSize) return 0; // 'Exact' formula : std::ceil(std::log2(std::pow(TargetBase, OriginalSize) - 1)); unsigned ret = (unsigned)std::ceil(OriginalSize * std::log2(TargetBase)); // Need to make sure that the base can be represented too... // (For instance if the OriginalSize == 1 and TargetBase == 4) ret = std::max(ret, (unsigned)std::floor(std::log2(TargetBase)) + 1); return ret <= MaxSupportedSize ? ret : 0; } Except for the approximated formula to compute the required number of bits there is another difference with the theory. This part is tricky so hang on tight. The returned number of bits actually has to hold two different types of value: - The number S. (This is what we wrote the function for). - The value of the base itself: TargetBase. This is because we need to compute the values of \(TargetBase^i\). For instance if OriginalSize == 1 and TargetBase == 4 we only need 2 bits to store S but 2 bits is not enough to hold the value 4. Still there? Remember when I said we could apply any operation to any bit size? Well there is an exception, because of this bug. LLVM does not support division of integers of more than 128 bits. This is why there are MaxSupportedSize checks in the previous function. Because of this limit we need another function that, given the original size of the XOR operands, will return the maximum base we can use for the operands transformation. // Returns the max supported base for the given OriginalNbBit // 31 is the max base to avoid overflow 2**sizeof(unsigned) in requiredBits unsigned maxBase(unsigned OriginalNbBit) { assert(OriginalNbBit); const unsigned MaxSupportedBase = sizeof(unsigned) * 8 - 1; if (OriginalNbBit >= MaxSupportedSize) return 0; if (MaxSupportedSize / OriginalNbBit > MaxSupportedBase) return MaxSupportedBase; return unsigned(2) << ((MaxSupportedSize / OriginalNbBit) - 1); } With \(M_s\) the maximum supported size and \(N_b\) the original number of bits of the operands, the maximum supported base is \(M_b = 2^{(M_s/N_b)}\). But we have to make sure that this value is not going to overflow an unsigned. For instance if L is 1 (for a boolean) the maximum base would be \(M_b = 2^128 - 1\). And on a 64 bits OS, the maximum value for an unsigned is usually \(2^32 - 1\): this is why the \((M/N_b > M_b)\) test is required. We know the constraints on the base choice, so we can randomly pick one in \([3, maxBase(N_b)]\). And I Will... Transform You? Ok, now we have XORs, we have transformation bases, so we're ready to implement the transformations. - We will need two functions: - One generating the instructions corresponding to the function f: rewriteAsBaseN - The other generating the instructions corresponding to the function h: transformToBaseTwoRepr There is nothing worth talking about in rewriteAsBaseN. Just take a look at the way we handle types if you are not familiar with LLVM types. Value *rewriteAsBaseN(Value *Operand, unsigned Base, IRBuilder<> &Builder) { const unsigned OriginalNbBit = Operand->getType()->getIntegerBitWidth(), NewNbBit = requiredBits(OriginalNbBit, Base); if(!NewNbBit) return nullptr; Type *NewBaseType = IntegerType::get(Operand->getContext(), NewNbBit); Constant *IRBase = ConstantInt::get(NewBaseType, Base); // Initializing variables Value *Accu = ConstantInt::getNullValue(NewBaseType), *Mask = ConstantInt::get(NewBaseType, 1), *Pow = ConstantInt::get(NewBaseType, 1); // Extending the original value to NewNbBit for bitwise and Value *ExtendedOperand = Builder.CreateZExt(Operand, NewBaseType); for(unsigned Bit = 0; Bit < OriginalNbBit; ++Bit) { // Updating NewValue Value *MaskedNewValue = Builder.CreateAnd(ExtendedOperand, Mask); Value *BitValue = Builder.CreateLShr(MaskedNewValue, Bit); Value *NewBit = Builder.CreateMul(BitValue, Pow); Accu = Builder.CreateAdd(Accu, NewBit); // Updating Exponent Pow = Builder.CreateMul(Pow, IRBase); // Updating Mask Mask = Builder.CreateShl(Mask, 1); } return Accu; } The most interesting part in transformToBaseTwoRepr is the use of APInt to hold the \(base^{N_b - 1}\) value. Since regular types might not be large enough to hold this value, we use an APInt to compute it at runtime (when the pass is applied). This is done by the function APIntPow. (If you need more info you can check the doc.) Value *transformToBaseTwoRepr(Value *Operand, unsigned Base, Type *OriginalType, IRBuilder<> &Builder) { Type *ObfuscatedType = Operand->getType(); const unsigned OriginalNbBit = OriginalType->getIntegerBitWidth(); APInt APBase(ObfuscatedType->getIntegerBitWidth(), Base); // Initializing variables Value *R = Operand, *IRBase = ConstantInt::get(ObfuscatedType, Base), *IR2 = ConstantInt::get(ObfuscatedType, 2), *Accu = ConstantInt::getNullValue(ObfuscatedType); // Computing APInt max operand in case we need more than 64 bits Value *Pow = ConstantInt::get(ObfuscatedType, APIntPow(APBase, OriginalNbBit - 1)); // Euclide Algorithm for(unsigned Bit = OriginalNbBit; Bit > 0; --Bit) { // Updating NewValue Value *Q = Builder.CreateUDiv(R, Pow); Q = Builder.CreateURem(Q, IR2); Value *ShiftedBit = Builder.CreateShl(Q, Bit - 1); Accu = Builder.CreateOr(Accu, ShiftedBit); R = Builder.CreateURem(R, Pow); // Updating Exponent Pow = Builder.CreateUDiv(Pow, IRBase); } // Cast back to original type return Builder.CreateZExtOrTrunc(Accu, OriginalType); } // Builds the APInt exponent value at runtime // Required if the exponent value overflows uint64_t static APInt APIntPow(APInt const& Base, unsigned Exponent) { APInt Accu(Base.getBitWidth(), 1u); for(; Exponent != 0; --Exponent) Accu *= Base; return Accu; } Show Time Using the Pass The git branch basic_xor will allow you to run the pass without having to re-develop it yourself. The build process is the following: $ cd $PASSDIR $ mkdir build $ cd build $ cmake -DLLVM_ROOT=path/to/your/llvm/build .. $ make Once the pass is built you will need a test code. For instance write the following code in a file basic_test.c: #include <stdio.h> #include <stdint.h> int main() { volatile uint8_t a = 0, b = 1, c = 0; b=a^4; c=b+1; printf("%d\n", b); return 0; } We are using volatile variables to prevent LLVM from computing the XOR value at compile time and removing the XOR altogether. You can now run the pass on the generated bytecode: $ clang -S -emit-llvm path/to/test/basic_test.c -o basic_test.ll $ opt -S -load $PASSDIR/build/llvm-passes/LLVMX-OR.so -X_OR path/to/test/basic_test.ll -o obfuscated.ll And to make sure the obfuscation is not trivial, you can optimize the obfuscated code: $ opt -S path/to/test/obfuscated.ll -O2 -o obfuscated_optimized.ll and make sure the XOR is not back. Generated Code The original LLVM bytecode now looks like this: %4 = xor i32 %3, 4 %5 = trunc i32 %4 to i8 store volatile i8 %5, i8* %b, align 1 %6 = load volatile i8* %b, align 1 %7 = zext i8 %6 to i32 %8 = add nsw i32 %7, 1 %9 = trunc i32 %8 to i8 store volatile i8 %9, i8* %c, align 1 %10 = load volatile i8* %b, align 1 %11 = zext i8 %10 to i32 %12 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i32 %11) ret i32 0 } You can see that, even though we used 8 bits variables, LLVM extended them to 32 bits to apply the XOR. This means that the obfuscation will work with 32 bits integers as OriginalType. Here is a portion of the obfuscated code after applying the pass. ; Beginning of the obfuscation ; produced by rewriteAsBaseN %4 = zext i32 %3 to i51 %5 = and i51 %4, 1 %6 = lshr i51 %5, 0 %7 = mul i51 %6, 1 %8 = add i51 0, %7 . . . %129 = and i51 %4, 2147483648 %130 = lshr i51 %129, 31 %131 = mul i51 %130, 617673396283947 %132 = add i51 %128, %131 ; New add corresponding to the XOR! %133 = add i51 %132, 9 ; Transforming back the result ; produced by transformToBaseTwoRepr %134 = udiv i51 %133, 617673396283947 %135 = urem i51 %134, 2 %136 = shl i51 %135, 31 %137 = or i51 0, %136 %138 = urem i51 %133, 617673396283947 . . . %289 = udiv i51 %288, 1 %290 = urem i51 %289, 2 %291 = shl i51 %290, 0 %292 = or i51 %287, %291 %293 = urem i51 %288, 1 %294 = trunc i51 %292 to i32 ; Original XOR, to be optimized out later %295 = xor i32 %3, 4 %296 = trunc i32 %294 to i8 store volatile i8 %296, i8* %b, align 1 %297 = load volatile i8* %b, align 1 %298 = zext i8 %297 to i32 ; Operation using the result of the obfuscation instead ; of the XOR (%295) %299 = add nsw i32 %298, 1 %300 = trunc i32 %299 to i8 store volatile i8 %300, i8* %c, align 1 %301 = load volatile i8* %b, align 1 %302 = zext i8 %301 to i32 %303 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i32 %302) ret i32 0 } There are 2 important things to notice in this code: - You may have noticed that the instructions generated only convert the first XOR operand (a). The other operand was the literal 4 in the original code. Since this value is known at compile time, the IRBuilder will compute the transformation—at compile time—and generate the corresponding transformed literal. This is why the second operand of %133 is a literal 9. If you are not convinced here is the transformation: \(4 = 1*2^2 + 0*2^1 + 0*2^0 \mapsto 1*3^2 + 0*3^1 + 0*3^0 = 9\). The IRBuilder has successfully converted the original 4 literal into 9 at compile time, without generating any instructions! - The XOR is still in the obfuscated code. This is because we haven't asked LLVM to delete it. However we have rendered it useless when we replaced all of its uses by the result of the obfuscation. This means that the XOR will be deleted by the optimization pass we are going to apply. Last thing we need to do is to optimize the code to remove the unused XORs and try to compensate the performance loss (we will check this later). We will not look at this code but you can check that the XORs are gone: $ grep -Ec ' xor ' path/to/test/obfuscated_optimized.ll 0 Production Ready? Validation To make sure the obfuscation produces the same results as the original code you can use the test suite. $ cd $PASSDIR/build $ make && make check One of the tests downloads, compiles and runs the test suite of OpenSSL. This may take some time but since OpenSSL heavily uses XORs, it helped us a lot to find very tricky bugs (remember the requiredBits function :p). Performances The enormous increase in compilation time is due to the fact that obfuscation of a single XOR generates about 300 new instructions (for 32 bits operands), and that many optimizations don't scale linearly with the number of instructions. Regarding execution time, it is easy to understand that replacing one simple XOR operation by 300 expensive instructions (mul, div, mod) is going to slow things down a bit... But before you decide that this obfuscation is too expensive for production, remember that the obfuscation should only be applied to the relevant parts of code (crypto functions, DRM enforcement...). And, even there, it should only be applied to a subset of the eligible XORs to avoid making the pattern too obvious! However, when validating your obfuscation you want to apply on every candidate to make sure to hit as many tricky cases as possible. A Few Improvements Even if we apply the obfuscation to a small number of XORs, we might still want to speed things up. And we also might want to make the pattern less obvious. To do so we are going to add the following to our pass: - Handling chained XORs. Right now the a = b xor c xor d sequence would be turned into: - Transform b and c into b' and c' - Create add1' = b' + c' - Apply modulo 2 on add1' bits and transform into base 2 gives us add1 - Transform add1 and d into add1'' and d' - Create add2' = add1'' + d' - Apply modulo 2 on add2' bits and transform into base 2 gives us add2 - Store add2 in a Instead of doing this we could transform each operand only once and chain the adds on the transformed representations. This would give us the following sequence: - Transform b, c and d into b', c' and d' - Create add1' such as add1' = b' + c' - Create add2' such as add2' = add1' + d' - Apply modulo 2 on add2' bits and transform into base 2 gives us add2 - Store add2 in a This will reduce the number of transformations, which will reduce the number of instructions generated, making the code faster and the obfuscation a little less obvious. This is not that trivial, but we will get the details sorted out later. - If you have taken a look at the non-optimized obfuscated code, you've probably noticed that the pattern is very easy to spot. Each computation of a power of the base appears very clearly… 'Awesome an exponentiation \o/' To make the transformation less regular and make pattern matching harder, we could randomize the order of the transformations operations. As we will see, this will require a change of transformation algorithms, but if there is chance that it might annoy reverse engineers then it's worth our time :). From now on, we will work on the code in the chained_xor branch: $ git checkout basic_xor Handling Chained XORs What we want to do now is to avoid redundant transformations of XOR operands. And to do so we need the following: - Detect and store the XOR chains for analysis. - Make sure that the base we choose is large enough to handle successive adds. Tree Saplings What we call a XOR chain is a set of XORs which have a least one operand in the set. Or simply put a set of XORs using other XORs as operand. The following code contains such a chain: int main() { volatile uint32_t a = 0xffffffff, c = 0xffffffef, d = 0xfeffffef; uint32_t b=a^0xffffffff^c^d; printf("%u\n", b); return 0; } The most natural way to store dependency information is to use a directed graph (acyclic in our case). Here is the DAG (Directed Acyclic Graph) representing the chain in the previous code. This example may seam oversimplified but since XOR is a commutative and associative operation, LLVM optimizations will always be able to reduce any XOR sequence into a graph of this type (and they usually do...). But our obfuscation will have to be able to handle non-optimized code hence our algorithms will have to be generic. Growing the Trees Building the DAG is pretty easy thanks to LLVM's SSA representation. Each instruction has some Use, generally other instruction that use it as an operand. So building the DAG is just a matter of walking the uses and the operands of each instruction, keeping the ones that involve a XOR and leaving the others aside. The recursive part looks like this: void walkInstructions(Tree_t &T, Instruction *Inst) { if(not isEligibleInstruction(Inst)) return; [...] for (auto const &NVUse : Inst->uses()) { if(Instruction *UseInst = dyn_cast<Instruction>(NVUse.getUser())) { walkInstructions(T, UseInst); } } [...] for (auto const &Op : Inst->operands()) { Instruction *OperandInst = dyn_cast<Instruction>(&Op); if (OperandInst and isEligibleInstruction(OperandInst)) T.at(Inst).insert(OperandInst); } } Range-based loops from C++11 are really handy! Climbing Trees If you read the introduction, you should remember that the base 'change' is intended to prevent the ADD carry from propagating. If we want to handle chained XORs we have to make sure that no carry is going to propagate when chaining ADDs. For the previous example, it means that \(a_i + c_i + d_i < Base, i \in [0, N_b[\) To determine the minimum base eligible for the tree transformation we use the following algorithm: unsigned minimalBase(Value *Node, Tree_t const &T, std::map<Value *, unsigned> &NodeBaseMap) { // Emplace new value and check if already passed this node if (NodeBaseMap[Node] != 0) return NodeBaseMap.at(Node); Instruction *Inst = dyn_cast<Instruction>(Node); // We reached a leaf if (not Inst or T.find(Inst) == T.end()) { NodeBaseMap.at(Node) = 1; return 1; } else { // Recursively check operands unsigned sum = 0; for (auto const &Operand : Inst->operands()) { if (NodeBaseMap[Operand] == 0) minimalBase(Operand, T, NodeBaseMap); sum += NodeBaseMap.at(Operand); } // Compute this node's min base NodeBaseMap[Node] = sum; return sum; } } This algorithm will recusively go through the tree, and assign to each node X the maximum value that its \(x_i, i \in [0, N_b[\) can attain. And this maximum is: - 1 for a leaf because a leaf is directly converted from binary. - The sum of its parents' maxima for any other node. If this is not clear enough you can take a look at the edge labels in the above graph. To choose a base for a tree we, need to apply the previous algorithm to all the roots of tree. The minimum base for the tree will then be the maximum of the returned values. Finally we randomly pick a base between the minimum and the maximum (see maxBase function) if possible. unsigned chooseTreeBase(Tree_t const &T, Tree_t::mapped_type const &Roots) { assert(T.size()); unsigned Max = maxBase( T.begin()->first->getType()->getIntegerBitWidth()), Min = 0; // Computing minimum base // Each node of the tree has a base equal to the sum of its two // successors' min base std::map<Value *, unsigned> NodeBaseMap; for (auto const &Root : Roots) Min = std::max(minimalBase(Root, T, NodeBaseMap), Min); if (++Min < 3 or Min > Max) return 0; std::uniform_int_distribution<unsigned> Rand(Min, Max); return Rand(Generator); } Cut Them Down! The last thing to do with these trees is to transform them. This will be done as before in the runOnBasicBlock function. This function will now apply a recursive transformation on all the roots of each tree. (We won't paste the code here so you should open the $PASSDIR/llvm-passes/X-OR/X-OR.cpp.) The recursive transformation function recursiveTransform will, given a node N: - - Check each of N's operands: - - - If it has not been transformed, i.e it is not in TransfoRegister: - - If it is not a XOR or if it's a XOR not in the current BasicBlock, transform it and register the association (original value, new base) \(\mapsto\) transformed value in TransfoRegister. - Else call recursively recursiveTransform on the operand. - Else recover the transformed value. - Once the operands have been transformed, apply an ADD on the transformed operands and register the result of the add in TransfoRegister as (original XOR, new base) \(\mapsto\) new add. We register the new value so that when the recursive function hits a XOR operand, we use the result of the ADD as the new operand. - Prepare the transformed back value of the ADD in case the result of the XOR is used outside of the tree (i.e by something else than a XOR, or by a XOR outside the current BasicBlock). And replace those uses with the new transformed back value. Breaking the Patterns Okay, after changing everything to handle chained XOR let's do something easier... We want to be able to randomly re-order the transformations' instructions. However, the transformation algorithms we are currently using do not allow this. But let's pull our sleeves up and find new ones! rewriteAsBaseN Changing the rewriteAsBaseN is trivial. The only thing we need to change is the way the successive exponents are computed. for(unsigned Bit = 0; Bit < OriginalNbBit; ++Bit) { . // Updating Exponent Pow = Builder.CreateMul(Pow, IRBase); . } In the original version of the algorithm we updated the exponent when going through the loop. But if we want to go through the loop in a random order, we will need to compute the exponents beforehand (don't forget that we need to use APInt to compute those exponents). We can store those values in a mapping \(i \mapsto Base^i\). This mapping will be computed on demand, since we can not compute it for every possible base. If you are interested in the details of the function getExponentMap please refer to the code. Here is the new rewriteAsBaseN function: Value *rewriteAsBaseN(Value *Operand, unsigned Base, IRBuilder<> &Builder) { const unsigned OriginalNbBit = Operand->getType()->getIntegerBitWidth(), NewNbBit = requiredBits(OriginalNbBit, Base); if (not NewNbBit) { return nullptr; } Type *NewBaseType = IntegerType::get(Operand->getContext(), NewNbBit); auto const &ExpoMap = getExponentMap(Base, OriginalNbBit, NewBaseType); // Initializing variables Value *Accu = Constant::getNullValue(NewBaseType), *InitMask = ConstantInt::get(NewBaseType, 1u); // Extending the original value to NewNbBit for bitwise and Value *ExtendedOperand = Builder.CreateZExt(Operand, NewBaseType); auto Range = getShuffledRange(OriginalNbBit); for (auto Bit : Range) { Value *Mask = Builder.CreateShl(InitMask, Bit); Value *MaskedNewValue = Builder.CreateAnd(ExtendedOperand, Mask); Value *BitValue = Builder.CreateLShr(MaskedNewValue, Bit); Value *Expo = ConstantInt::get(NewBaseType, ExpoMap.at(Bit)); Value *NewBit = Builder.CreateMul(BitValue, Expo); Accu = Builder.CreateAdd(Accu, NewBit); } return Accu; } The getShuffledRange function returns a random shuffle of \([0, N_b[\). transformToBaseTwoRepr This one is a bit trickier. So far we used Euclide's algorithm, but it is too tightly linked to the computation order. The new algorithm we are going to use to recover the \(x_i\) from \(\sum x_i \cdot Base^i\) is the following: \(x_j = \frac{\sum x_i \cdot Base^i}{Base^j} \text{ mod } Base\) And we are going to use the same getExponentMap as earlier for the different exponents. Value *transformToBaseTwoRepr(Value *Operand, unsigned Base, Type *OriginalType, IRBuilder<> &Builder) { Type *ObfuscatedType = Operand->getType(); const unsigned OriginalNbBit = OriginalType->getIntegerBitWidth(); // Initializing variables Value *IR2 = ConstantInt::get(ObfuscatedType, 2u), *IRBase = ConstantInt::get(ObfuscatedType, Base), *Accu = Constant::getNullValue(ObfuscatedType); auto const &ExpoMap = getExponentMap(Base, OriginalNbBit, ObfuscatedType); auto Range = getShuffledRange(OriginalNbBit); for (auto Bit : Range) { Value *Pow = ConstantInt::get(ObfuscatedType, ExpoMap.at(Bit)); Value *Q = Builder.CreateUDiv(Operand, Pow); Q = Builder.CreateURem(Q, IRBase); Q = Builder.CreateURem(Q, IR2); Value *ShiftedBit = Builder.CreateShl(Q, Bit); Accu = Builder.CreateOr(Accu, ShiftedBit); } // Cast back to original type return Builder.CreateZExtOrTrunc(Accu, OriginalType); } Code Sample After all this work, let's take a look at the code produced. Here is the code to obfuscate: int main() { volatile uint32_t a = -1, b = 42, c = 100; printf("%d\n", a^b^c); return 0; } This chosen code is very simple, to make it easier to explain. We are not going to optimize the obfuscated bytecode because optimizations completely break our patterns (which is a good thing). This makes understanding the bytecode very laborious ... "I don't want to do it anymore, please let me gooooooooooo!" ... and our debugging goblins are becoming crazy. Or is it me? $ clang -Xclang -load -Xclang $PASSDIR/build/llvm-passes/LLVMX-OR.so path/to/chained.c -O0 -S -emit-llvm define i32 @main() #0 { ; Some boring stuff %2 = load volatile i32* %a, align 4 %3 = load volatile i32* %b, align 4 ; Transforming 'a' %4 = zext i32 %2 to i64 %5 = and i64 %4, 64 %6 = lshr i64 %5, 6 %7 = mul i64 %6, 4096 %8 = add i64 0, %7 ; Transforming 'b' %133 = zext i32 %3 to i64 %134 = and i64 %133, 2048 %135 = lshr i64 %134, 11 %136 = mul i64 %135, 4194304 %137 = add i64 0, %136 ; Applying 'a^b' %262 = add i64 %132, %261 ; Preparing an exit point. ; Will be optimized out since it's unused. ; Transforming 'c' %425 = load volatile i32* %c, align 4 %426 = zext i32 %425 to i64 %427 = and i64 %426, 67108864 %428 = lshr i64 %427, 26 %429 = mul i64 %428, 4503599627370496 %430 = add i64 0, %429 ; Applying '(a^b)^c' %555 = add i64 %262, %554 ; Transforming back '(a^b)^c' %556 = udiv i64 %555, 4611686018427387904 %557 = urem i64 %556, 4 %558 = urem i64 %557, 2 %559 = shl i64 %558, 31 %560 = or i64 0, %559 ; Final value %716 = trunc i64 %715 to i32 ; Some boring stuff } Good news it's working as expected! You should probably optimize the bytecode and take a look at it, just to see how it looks like. But the transformations are hard to recognize! Performances As you can see, when reducing the number of transformations thanks to the chained XORs, we have reduced compile time by ~15%. But at the same time we have increased execution time by ~10%. One of the reasons of this slowdown is that, by chaining XORs, we use larger bases. And using a larger base means using larger integer types. In the previous version, an obfuscated i32 XOR was most likely to be transformed using a type 'smaller' than i64. Which meant that all transformation instructions could use the CPU hard coded instructions. However, with chained XORs it is likely that the obfuscated types are greater than i64 and require the use of software implementation of mul, mod for non-native integer size... But even if the complexity of instructions increases, their number is reduced. This double variation probably helps mitigate the slowdown. To have a better understanding of what is happening we are going to benchmark the following code: #define LOOP 100000000 int main() { volatile uint32_t a, b = -1, c = 100, d = -10, e = 750, f = 854721, g = 42; for(size_t i = 0; i < LOOP; ++i) { a = b^c^d^e^f^g; } printf("%d\n", a); return 0; } We are going to change the number of XOR executed in the loop and study the variations in the number of instruction, compilation time, execution time and obfuscated types. Don't put this in your hot paths :-) Divide to Conquer The last thing we will do to improve this pass is to combine with another pass. The size (in bits) of the operands we want to obfuscate has a huge impact on: - Wether or not we can apply the obfuscation on a XOR chain. For instance the longest 64 bits XOR chain we can obfuscate is 4 XORs long. More than this would require to use integers greater than 128 bits which are not supported. - The speed of the instructions used and their number (see the performance section above). Therefore it would be nice to reduce the size of those operands before applying the X-OR pass. One way to do this would be to develop a pass that: - Split the XOR operands into smaller variables. - Apply XORs on the new operands. - Merge the results. Transforming this code snippet... %res = xor i32 %a, %b ... Would look like this: Actually, this transformation could be applied not only to XORs but to any bitwise operator (XOR, AND, OR). And you could chain transformations in the exact same way we chained XORs transformations! Bottom line: this new pass would be pretty similar to X-OR. We will now use the last branch propagated_transfo: $ git checkout propagated_transfo Core Logic To take advantage of the work we have already done, we have extracted a generic 'propagated transformation' class. This class will detect eligible variables (to be defined by the specific transformation), build the dependency trees and apply the transformations (to be defined). The only main change we have to make to the functions we developed for X-OR is to handle transformation turning one Value into an array of Value. If you are interested in developing a new transformation with the same properties as X-OR you should be able to use it pretty easily. However, we will not get into the details of its implementation here. Get a Knife Since this new pass is very similar to X-OR the interesting parts are the new transformation functions. The 'forward' transformation splits a variable into \(\frac{N_b}{SplitSize}\) new variables. Each new variable will be obtained by masking ans shifting the original variable: std::vector<Value *> transformOperand(Value *Operand, IRBuilder<> &Builder) override { const unsigned OriginalNbBit = Operand->getType()->getIntegerBitWidth(), SplitSize = SizeParam, NumberNewOperands = OriginalNbBit / SplitSize; Type *NewType = IntegerType::get(Operand->getContext(), SplitSize); std::vector<Value *> NewOperands(NumberNewOperands); Value *InitMask = ConstantInt::get(Operand->getType(), -1); InitMask = Builder.CreateLShr(InitMask, OriginalNbBit - SplitSize); auto Range = getShuffledRange(NumberNewOperands); for (auto I : Range) { Value *Mask = Builder.CreateShl(InitMask, SplitSize * I); Value *MaskedNewValue = Builder.CreateAnd(Operand, Mask); Value *NewOperandValue = Builder.CreateLShr(MaskedNewValue, I * SplitSize); // Using NewOperands to keep the order of split operands NewOperands[I] = Builder.CreateTrunc(NewOperandValue, NewType); } return NewOperands; } And to transform back a vector of Value, we do the exact opposite: Value *transformBackOperand(std::vector<Value *> const &Operands, IRBuilder<> &Builder) override { assert(Operands.size()); const unsigned NumberOperands = Operands.size(), SplitSize = SizeParam; Value *Accu = Constant::getNullValue(OriginalType); auto Range = getShuffledRange(NumberOperands); for (auto I : Range) { Value *ExtendedOperand = Builder.CreateZExt(Operands[I], OriginalType); Value *ShiftedValue = Builder.CreateShl(ExtendedOperand, I * SplitSize); Accu = Builder.CreateOr(Accu, ShiftedValue); } return Accu; } Pretty straight forward. But since we only handle splits of identical size (for simplicity), we need to choose a SplitSize that is a divisor of \(N_b\). This is done by computing all the divisors of \(N_b\) (in \(O(sqrt(N_b))\)) and randomly picking one of them. A Blunt Knife After applying the split obfuscation to this code: int main() { volatile uint32_t a = -1, b = 100, c = 42; printf("%d\n", a | (b & c)); return 0; } With: $ clang -Xclang -load -Xclang $PASSDIR/build/llvm-passes/LLVMSplitBitwiseOp.so split.c -O0 -S -emit-llvm We get: define i32 @main() #0 { ; LLVM stuff %2 = load i32* %a, align 4 %3 = load i32* %b, align 4 %4 = load i32* %c, align 4 ; Transforming 'b' %5 = and i32 %3, 3 %6 = lshr i32 %5, 0 %7 = trunc i32 %6 to i2 ; Transforming 'c' %53 = and i32 %4, 192 %54 = lshr i32 %53, 6 %55 = trunc i32 %54 to i2 ; Applying 'b & c' %101 = and i2 %46, %94 %102 = and i2 %22, %88 %103 = and i2 %10, %64 ; Unused back transformation of 'b & c' %117 = zext i2 %107 to i32 %118 = shl i32 %117, 10 %119 = or i32 0, %118 ; Original 'b & c' now unused %165 = and i32 %3, %4 ; Transforming 'a' %166 = and i32 %2, 3 %167 = lshr i32 %166, 0 %168 = trunc i32 %167 to i2 ; Applying 'a | (b & c)' %214 = or i2 %210, %107 %215 = or i2 %207, %103 %216 = or i2 %186, %111 ; Back transformation of 'a | (b & c)' %230 = zext i2 %226 to i32 %231 = shl i32 %230, 6 %232 = or i32 0, %231 %279 = call i32 (i8*, ...)* @printf(i8* getelementptr inbounds ([4 x i8]* @.str, i32 0, i32 0), i32 %277) } So everything looks good, right? Well, now try to optimize the obfuscated code... $ clang -Xclang -load -Xclang $PASSDIR/build/llvm-passes/LLVMSplitBitwiseOp.so split.c -O2 -S -emit-llvm Everything is gone :/. LLVM managed to understand our transformation and optimize it out. So let's file a bug report to the LLVM devs telling them that their optimizations are annoying and that they should nerf them. Or we could try to combine this transformation with the X-OR obfuscation! Working Together To combine the two passes you can either apply them one by one with opt or apply them both at one: $ LD_LIBRARY_PATH=$PASSDIR/build/llvm-passes clang -Xclang -load -Xclang LLVMSplitBitwiseOp.so -Xclang -load -Xclang LLVMX-OR.so split.c -S -emit-llvm After applying the two optimizations, the code becomes too big to paste here. But this happens: - The XORs are split into several smaller ones, hence generating a forest of independent small XORs trees (actually DAGs). - Each XOR tree is independently obfuscated by X-OR. This means that the obfuscated types of each subtree can be different (and they really are in practice)! - And the optimizer will not optimize out the splits! I'll let you take a look at the result. With the given example LLVM produces ~1300 obfuscated LLVM instructions from the original ~10. When optimizing with -O2 the ~1300 instructions are reduced to ~600. It looks like LLVM managed to merge some parts of the transformations. However, since I don't want to loose what sanity I have left, I haven't looked too closely to what's happening... If you have enough courage, let us know in the comments! Performances Here are the statistics when building OpenSSL: We have increased compilation time by 40% compared to non-chained X-OR, but since we added a new pass this seams reasonable. Regarding runtime we have gained 10%! This is probably due to the reduction of the size of integer types used during the X-OR obfuscation, but I have not checked it in depth. Now you should remember that obfuscation are not meant to be applied on the whole program to be obfuscated. Those performances measurements are worst case scenarios for a program using a lot of XORs! So don't throw out this obfuscation because of those numbers. THE END! In this post, we tried to present the different steps of obfuscation pass development, from the conception to the improvements. There are a few things that could be improved, most notably handling operations other than XORs. But we'll leave that to you!
https://blog.quarkslab.com/turning-regular-code-into-atrocities-with-llvm-the-return.html
CC-MAIN-2019-09
refinedweb
6,744
52.7
This content has been marked as final. Show 18 replies 15. Re: Spawn Thread should be kept alive indefinitely807603 Feb 21, 2008 1:46 PM (in response to 800351)I might be reinventing the wheel, but it is working. I do not know if the Executors.newFixedThreadPool(1) can make my code simpler, I have to investigate how it works. Common request in the first thread: try { ..... impl.sendTextFile(_iNFOFile); if(!SingletonThread.getInstance().isAlive()){ SingletonThread.getInstance().start(); } } catch (RemoteException e) { if(!SingletonThread.getInstance().isAlive()){ SingletonThread.getInstance().start(); } SingletonThread.getInstance().serialize(_iNFOFile, impl, inputMessageRequest.getHeaderInfo().getClaimNumber()); // } SingletonThread: public class SingletonThread extends Thread{ private static SingletonThread instance; .... public synchronized static MidasSingletonThread getInstance(){ if(instance == null){ instance = new MidasSingletonThread(); instance.setDaemon(true); // initialize other variables } return instance; } private MidasSingletonThread() { super(); } public void run() { try{ while(true){ deserialize(); //the files in directory instance.sleep(4000); } }catch(Exception e){ e.printStackTrace(); } } public void serialize(....){ } public void deserialize(){ //desirialize and FTP the file, if succesful, delete it } public boolean sendByFTP(Object _iNFOFile, String filename){ } public void deleteFile(String filename){ } } Thanks for your support. Edited by: pbasil on Feb 21, 2008 5:45 AM 16. Re: Spawn Thread should be kept alive indefinitely800351 Feb 22, 2008 12:20 AM (in response to 807603) pbasil wrote:One's thought is not always right but I think I finally see your app level requirement which you have provided only very scarce description. If the Singleton is useless, what is the right approach to have an instance of the thread and just one that is always working? And the solution for that might be: Start a resumer thread, or a daemon, which is not needed to be a singleton but practically only one instance runs throughout your whole app lifetime. The resumer thread runs an infinite loop in its run() method picking up file from a queue and doing FTP send task for the file. The resumer thread does wait() when the queue is empty. App main thread, your thread A, simply puts file which does need FTP resuming onto the queue and notifies the resumer thread in the place of the code where you previously called thread's run() and serialize() method(in the reply #3 code). 17. Re: Spawn Thread should be kept alive indefinitely807603 Feb 22, 2008 1:02 AM (in response to 800351)Yes, sorry, I didn't make myself clear at the beginning. I don�t seem to understand how both threads in your description can communicate. Let me explain the scenario I am facing: I am running on a web service enabling layer, so many incoming requests from outside send me information via SOAP that I should forward as a text file over FTP. If the FTP server is down, I have to provide a contingency plan. Thus I serialize files in a directory and try to FTP them later on (waiting 30 minutes for instance). That is why I came out with the idea of a Singleton Thread (static) I can call from any thread request. Therefore Thread A, B, C, that are all those requests I receive, should communicate with the Singleton thread. I use A (the first of all incoming requests) to instantiate the Singleton. The next ones just check whether it is running and request the single instance JUST in case the FTP server is down and they have to serialize the file that failed. I hope it clarifies the requirements I need to implement, many thanks. PS. Someone mentioned Executors.newFixedThreadPool(1), however I am running on JDK 1.4 and not 1.5, I think java.util.concurrent is not available until 1.5 18. Re: Spawn Thread should be kept alive indefinitely800351 Feb 22, 2008 1:15 AM (in response to 807603) how both threads in your description can communicateWell, thread A, B, C et al put the file onto the singleton queue, no, a single instance of a queue should suffice, and noify() your FtpLater thread which was doing wait() while the queue was empty. The FtpLater thread is a single instance of a thread, not needed to be a singleton.
https://community.oracle.com/message/8829875?tstart=0
CC-MAIN-2017-09
refinedweb
685
62.98
Joe Wrobel Web Profile Builder 2.0.0.0 It's been over five years since I've made any updates to this project. I had basically left it for dead because I personally no longer have a need for it. I know a lot of people do still rely on it though. I had some free time recently so I decided to give the project a little bump to make it easier to use and more accessible to those who do still use it. What's changed? - Most importantly, this is no longer required to be installed in the GAC. - There is no installer at all anymore! - Now it can be included in the project source control and referenced locally. - Added support to install using NuGet. - PM> Install-Package WebProfileBuilder - Simplified the configuration. - Support for configuration via Web.config has been removed. This was more of a "nice to have" feature and added unneeded complexity to the code base. - All configurable options are still supported, but now it has to be configured in the web project file. See below for a complete example of the configuration. - Moved project home to CodePlex. - Added build automation to the source code using NAnt. IMPORTANT NOTES: - The core code base has not been changed. I didn't want to introduce any bugs, so I only changed the code necessary to achieve my goal. All code changes were related to configuration. - If you are new to WebProfileBuilder, know the following: - The generated profile class does not get automatically included into the project. You must use the Solution Explorer to show all files, then manually include the generated class into your project. You only need to do this once. - You also must create the "Profile" property in your Page class. See below for an example. Example web project file: <?xml version="1.0" encoding="utf-8"?> <Project ToolsVersion="4.0" DefaultTargets="Build" xmlns=""> <!-- ... other project content ... --> <!--WebProfileBuilder setup.--> <UsingTask TaskName="BuildWebProfile" AssemblyFile="..\packages\WebProfileBuilder.2.0.0.0\tools\ WebProfileBuilder.dll" /> <Target Name="BeforeBuild"> <!--WebSiteRoot, RootNamespace, and Language are required.--> <!--ClassName, Directory, and FileName are optional for additional customization.--> <BuildWebProfile WebSiteRoot="$(ProjectDir)" RootNamespace="$(RootNamespace)" Language="$(Language)" ClassName="MyWebProfile" Directory="CodeFiles" FileName="MyWebProfile" /> </Target> <!-- ... other project content ... --> </Project> Example page class: using System; using System.Collections.Generic; using System.Web.UI; namespace CsExample { public partial class _Default : Page { //... other class content ... public static MyWebProfile Profile { get { return MyWebProfile.Current; } } //... other class content ... } } Conditional Project Reference I develop a lot of different applications. They range from large inventory tracking websites, to Windows services, to user interfaces encapsulating scripts written by someone else. I get asked to make many different things. It can sometimes be difficult to keep from reinventing the wheel in different projects. So it's very useful to maintain a common library to reference from the different projects. Regardless of what you keep in the library, utilities, base classes, or the dreaded "helper methods", it does become challenging to work with. Automating ClickOnce Deployment Building a ClickOnce deployment outside of Visual Studio can be a difficult task. One point I want to make clear is that there is no magic going on to make your application deployable using ClickOnce. Well, unless you are using the tooling inside of Visual Studio, in which case there is a lot of magic happening. Much to its credit, Visual Studio does make it very easy to setup and publish a ClickOnce deployment for your application. That said, my suggestion would be to just use Visual Studio if it fits your workflow. However, if you need a fully automated solution to create a ClickOnce deployment outside of Visual Studio, then continue reading. In my environment, my builds are automated using NAnt, which are then built on a build server using CruseControl.Net. My end goal with automating the ClickOnce deployment was to mimic the output as created by Visual Studio. I didn't have to do this, but I wanted to just incase I might ever need to resort back to using Visual Studio. I didn't want to get caught in a situation where my automated ClickOnce deployment files conflict with the files generated by Visual Studio. In general, a ClickOnce deployment requires two files. An application manifest and a deployment manifest. The application manifest contains details of the application. Some of these details include dependencies, security privileges, and a complete listing of every file required by the application. The deployment manifest contains details of, you guessed it, the deployment. For this file, my focus is primarily on the deployment strategy. It is also worth noting that this file will contain a dependency which is basically a pointer to the application manifest. I know I'm only scratching the surface of what these two files actually contain. I'm calling out the details which are directly relevant here. I'm trying hard to avoid using the phrase, "beyond the scope of this article", but there it is. As much as I dislike that phrase, I'm using it anyway. Really, if you want to know more about these two files, look at the Microsoft documentation. As I went through this process, I did find a walkthrough in the Microsoft documentation that you may find helpful. I followed the steps myself, but it didn't take me where I wanted to go and was hard to follow due to lack of detail. When I finished the walkthrough, I had more questions than when I started. But it did help to guide me in the right direction, so I want to point it out. Steps to automate a ClickOnce application 1. Obtain or create a ".pfx" key file for signing the manifest files. The key file can also be used to sign the assembly/executable of the application if wanted, but not required. You can create a key file using the "Signing" tab of the project properties window. I think there are other kinds of keys that can be used for signing, but I am not an expert in this area so I'm saying as little about it as possible. 2. Add an "app.manifest" file to the project. This will give you a physical file that you can make custom edits to if needed. I personally didn't need to make any custom edits, but at least I have that option if I ever need to. This file will get updated post build using the Mage.exe utility. 3. Build the project/application to get all the files required for the application to run. Also, copy any extra files needed to deploy with the app such as the main app icon. The goal here is to create a folder containing your entire application. All your resource files, data files, help files, referenced dlls, everything. I use NAnt to automate this process. 4. Make any last changes to configuration files or whatever content you need to change for the deployment target. This is important. Do not change any application content after the application manifest has been updated because it will make the application manifest invalid. The application manifest contains hash codes for every file. This is a security measure to prevent any tampering with the files. 5. Use Mage.exe to update the application manifest. Note, if you are using the ".deploy" extension for your files, you'll want to do this step before appending the ".deploy" extension to the files. If this doesn't make sense right now, don't worry about it yet. I'll explain more about this with web hosted deployment. Below is an example of this command. mage.exe -Update build\ClickOnceExample-Release\ClickOnceExample.exe.manifest -ToFile "build\ClickOnceExample-Release\Application Files\1.1.0.6125\ClickOnceExample.exe.manifest" -FromDirectory "build\ClickOnceExample-Release\Application Files\1.1.0.6125" -Version 1.1.0.6125 6. Use Mage.exe to sign the application manifest using your personal ".pfx" key file. Once the file is signed, your done with it. Leave it alone. Below is an example of this command. mage.exe -Sign "build\ClickOnceExample-Release\Application Files\1.1.0.6125\ClickOnceExample.exe.manifest" -CertFile src\Robolize.ClickOnceExample\ClickOnceExample.pfx -Password ClickOnceExample 7. Create a deployment manifest file. This is the most troublesome part and I'll do my best to explain. I didn't find any single good solution to generate this file. One thing I can't explain is that once I added the app.manifest to the project, Visual Studio started generating a deployment manifest as well. I guess it's one of those magic tricks that Visual Studio performs. I didn't find a way to make this file not generate. I wouldn't mind, but I couldn't use the file as it was and I don't know of a way to set any properties for the generation of the file to make it useable. But in the end it doesn't really matter because it will get overwritten anyway. This became a two-step process because not all the desired properties of the deployment manifest can be set using one method or the other. When using Mage.exe, you cannot set the update strategy to "beforeApplicationStartup", which is what I wanted. Someone else did some in-depth research on this issue and discovered a brick wall. I took his word for it, but you can see for yourself here. When using the "GenerateDeploymentManifest" task, you cannot properly set the "EntryPoint" because the final output of the application files will have a different directory structure than the default flat output from building inside of Visual Studio. So here's what to do. 7a. First, setup the "GenerateDeploymentManifest" task in the "AfterBuild" target of the main project file. This will generate a usable deployment manifest file.The most important properties to set in the "GenerateDeploymentManifest" task are the following: 1: <?xml version="1.0" encoding="utf-8"?> 2: <Project ToolsVersion="4.0" DefaultTargets="Build" 3: 4: 5: <!-- Other project content... --> 6: 7: <Target Name="AfterBuild"> 8: <GenerateDeploymentManifest 9: AssemblyName="ClickOnceExample.application" 10: Product="ClickOnceExample" 11: Install="true" 12: UpdateEnabled="true" 13: UpdateMode="Foreground" 14: CreateDesktopShortcut="true" 15: OutputManifest="$(OutputPath)\ClickOnceExample.application" 16: EntryPoint="$(OutputPath)ClickOnceExample.exe.manifest" 17: TargetFrameworkVersion="4.5" 18: 19: </Target> 20: </Project> - Install="true" - UpdateEnabled="true" - UpdateMode="Foreground" 7b. Next, use Mage.exe to update the deployment manifest file. This step assumes that the application directory structure has been setup either manually or as part of the build script. One important note to make for updating the deployment manifest with Mage.exe, Do not set the "Install" flag here because it will wipe out the settings set by the "GenerateDeploymentManifest" task and defeat the purpose of having used it in the first place. Below is an example of this command. mage.exe -Update build\ClickOnceExample-Release\ClickOnceExample.application -AppManifest "build\ClickOnceExample-Release\Application Files\1.1.0.6125\ClickOnceExample.exe.manifest" -AppCodeBase "Application Files\1.1.0.6125\ClickOnceExample.exe.manifest" -Publisher "Robolize Division" -Version 1.1.0.6125 -ProviderUrl The most important properties to set here are the following: - AppManifest - AppCodeBase - Publisher (defaults to "Microsoft" if not set.) - Version - ProviderUrl 8. Use Mage.exe to sign the deployment manifest using your personal ".pfx" key file. As with the application manifest, once you sign it, you're done with it. Leave it alone. Below is an example of this command. mage.exe -Sign build\ClickOnceExample-Release\ClickOnceExample.application -CertFile src\Robolize.ClickOnceExample\ClickOnceExample.pfx -Password ClickOnceExample 9. Optionally copy the deployment manifest file to the location of the application manifest file for safe keeping. This is not required, but it may come in useful if you ever need to revert to an earlier version. You could just replace the current deployment manifest with an old version pointing to the old version of the application. Web Hosted Deployment I mentioned earlier that I would explain the ".deploy" extension. The ".deploy" extension is primarily used when the ClickOnce deployment is being hosted within a website. By default, the web server will not allow downloading files with specific extensions like ".exe", ".config", and others. The web server can be configured to allow these file extensions to be downloaded, but for security reasons, you really don't want to do that. It is better to just give every file a ".deploy" extension so there is only that one extension to be concerned with. If you need to use the ".deploy" extension, the setting MapFileExtensions="true" in the "GenerateDeploymentManifest" task should do the trick. I personally am using the file share deployment so I don't have the need to do this. I have not gone through the steps to verify that what I'm saying actually works, but I don’t see any reason why it wouldn't work. Other Thoughts Apparently there is a way to publish a ClickOnce deployment by executing MSBuild on a project file and specifying the "publish" target. This didn't work for me and I assume it's because I execute MSBuild on a solution file, not a project file. I always use solution files because everything I work on is made up of multiple projects. I didn't go far down this path, but you can have a look here and here. The Mage.exe tool, on my system, is located here, C:\Program Files (x86)\Microsoft SDKs\Windows\v8.0A\bin\NETFX 4.0 Tools\mage.exe It could be located somewhere else on your system. Looking at the entire process I describe here, it might be confusing to follow because the interweaving of steps between NAnt and MSBuild. To help with understanding, I created a simple demo solution utilizing the entire process. You can download and inspect my demo example here. This demo I created is actually a good example of using NAnt for build automation. It contains the core aspects which are present in all applications I manage. Conditional Formatting in the Silverlight DataGrid I’ve been an asp.net developer for some time now and I was excited to jump on to Silverlight when it 2.0 was released a few months ago. One thing I really struggled with was applying conditional formatting to the individual cells in the DataGrid control. Coming from an asp.net background, I carried a lot of assumptions with me (big mistake). I thought I could get a hold of the rows or cells collection and have my way with it, but no such luck. I stumbled down several paths which all ultimately lead to dead ends. After killing hours (maybe days) on trying to figure this out, I had to let it go and move on. Now a month later I decided to give it another shot and I finally got it. The answer was right in front of me all along. I knew about the IValueConverter interface, but I didn’t fully understand its capabilities. I thought it was only used for converting an object into a text representation or vice versa. Actually, you can return anything you want from it. So you can return a Button, Grid, or whatever. Another aspect I couldn’t figure out was how to get access to page members from within the Convert method. For example, I wanted to render a button in the cell and wire up the button’s click event to a method in the page. Sure, you could do this to a certain extent using templates, but then I couldn’t find a way to change the template conditionally based on a value in the bound data item. The solution I came up with was to create a delegate along with a class which implemented the IValueConverter interface and exposed two events. One for converting and the other for converting back. I can then declare this converter in the resources collection and setup a handler in the page as shown below. <UserControl.Resources> <local:UniversalConverter x: <local:UniversalConverter x: </UserControl.Resources> Here is the markup i used in the “First Name” column of the DataGrid. <data:DataGridTemplateColumn <data:DataGridTemplateColumn.CellTemplate> <DataTemplate> <ContentControl Content="{Binding Converter={StaticResource nameConverter}}" HorizontalContentAlignment="Stretch" VerticalContentAlignment="Stretch" /> </DataTemplate> </data:DataGridTemplateColumn.CellTemplate> </data:DataGridTemplateColumn> Here is the “ConvertName” method I wired up from the converter defined in the resources collection. 1: private object ConvertName(object value, Type targetType, object parameter, CultureInfo culture) { 2: Employee employee = value as Employee; 3: if (employee == null) { 4: return value; 5: } 6: 7: if (employee.FirstName.Contains('a')) { 8: Button btn = new Button(); 9: btn.Content = employee.FirstName; 10: btn.Click += ((sender, e) => { 11: HtmlPage.Window.Alert( 12: string.Format( "There is a button here because \"{0}\" contains an \"a\".", 13: employee.FirstName)); 14: }); 15: 16: return btn; 17: } 18: return new TextBlock {Text = employee.FirstName}; 19: } This is the IConverter class and delegate I created to handle the conversions. 1: public class UniversalConverter : IValueConverter { 2: 3: public object Convert(object value, Type targetType, object parameter, CultureInfo culture) { 4: return this.OnConverting(value, targetType, parameter, culture); 5: } 6: 7: public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture) { 8: return this.OnConvertingBack(value, targetType, parameter, culture); 9: } 10: 11: protected object OnConverting(object value, Type targetType, object parameter, CultureInfo culture) { 12: UniversalConverterHandler handler = this.Converting; 13: if (handler != null) { 14: return handler(value, targetType, parameter, culture); 15: } 16: return value; 17: } 18: 19: protected object OnConvertingBack(object value, Type targetType, object parameter, CultureInfo culture) { 20: UniversalConverterHandler handler = this.ConvertingBack; 21: if (handler != null) { 22: return handler(value, targetType, parameter, culture); 23: } 24: return value; 25: } 26: 27: public event UniversalConverterHandler Converting; 28: 29: public event UniversalConverterHandler ConvertingBack; 30: 31: } 32: 33: public delegate object UniversalConverterHandler(object value, Type targetType, object parameter, CultureInfo culture); And finally, here is a screenshot of the DataGrid. (Ugly, I know, but it proves my point.) Here is a link to the complete solution. I hope you find this useful and I look forward to hearing feedback and suggestions. Most importantly, let me know if you know of a better way to do this. Thanks -Joe Web Profile Builder 1.3 I Web Profile Builder 1.2.0.0 Released Files. Web Profile Builder 1.1.0.0 Released Files can be downloaded from the Web Profile Builder project page. If you are unfamiliar with Web Profile Builder, you can read my initial blog post about it here. Changes made for release 1.1.0.0: - Added the ability to detect changes made to the profile section of the web.config file and only rebuild the Profile class if changes have been made. Notes: - If you used the previous release, remember to uninstall it first. - Also, if you used the previous release and added the customize section in the web.config file, remember to update the assembly reference to “WebProfileBuilder.WebProfileConfigurationSection, WebProfileBuilder, Version=1.1.0.0, Culture=neutral, PublicKeyToken=01d50f1f82943b0c”. Thank you to everybody who provided me with valuable feedback. This release should address all of your concerns. Thanks -Joe ClientID Problem In External JavaScript Files Solved Well,". To access the controls in the external JavaScript file, handle the "ready" event of the "PageControls" object like shown here. Web Profile Builder for Web Application Projects Files. -. A more elegant solution to display GridView header and footer when the data source is empty. I
http://weblogs.asp.net/joewrobel
CC-MAIN-2014-41
refinedweb
3,225
50.94
The line where it turns the leds on: setAllPixelsRGB(red,green,blue); Is there a way to create a fade in effect like what happens in the kickstarter video to create more or less a motion controlled night light? Mainly interested in the smooth fade in effect and i couldn't really find any existing code from the examples that had that effect. Maybe also throw in code so that you trigger the motion sensor, lights fade to bright but stays on for at least x seconds until it gets another motion reading at which point it will cause the leds to fade out (or just a auto timeout after a while). Also will there be instructions on how to solder the contact for a battery? Full code below: Code: Select all #include "PlumduinoHardware.h" // Leave this line first. Do not edit this line. This causes Arduino // to include background functions when turning your code into // machine language Wink can understand. // Below is the "setup" function. It runs one time as soon as Plumduino turns on. You can add stuff // to this function if you want, but hardwareBegin() should always be the first code in the list. void setup(){ hardwareBegin(); //initialize Wink's brain to work with his circuitry } int s1, s2; int hue, brightness; void loop() { s1 = analogRead(Slide1); //read slider 1 s2 = analogRead(Slide2); //read slider 2 hue = s1 * 0.35; //scale down 1024 scale of slider to 360 degrees brightness = s2 /4; //scale down 1024 scale of slider to 255 brightness getColorWheel(hue, brightness); while (digitalRead(Motion) == LOW){ setAllPixelsRGB(0,0,0); //set all pixels to off } // code will continue here when the motion sensor pin goes high setAllPixelsRGB(red,green,blue); //results of getColorWheel() plugged in for color values while (digitalRead(Motion) == HIGH){ // do nothing (wait for pin to go low again) } } //closing curly of the “loop()” function
http://forum.plumgeek.com/viewtopic.php?f=15&t=812&p=1278&sid=d61ac38a336dca7b7df4f383284dd33b
CC-MAIN-2018-26
refinedweb
312
65.46
54150/mininam-error-ovs-vsctl-bridge-named-ovs-vsctl-bridge-named I'm facing an issue while trying to run MiniNAM GUI for my mininet script. When I run my python script, it creates network and then gives an error ovs-vsctl: no bridge named s1, ovs-vsctl: no bridge named s2 Terminal logs: *** Creating network *** Adding controller Unable to contact the remote controller at 127.0.0.1:6653 Unable to contact the remote controller at 127.0.0.1:6633 Setting remote controller to 127.0.0.1:6653 *** Adding hosts: h1 h2 h3 h4 h5 h6 h7 h8 r1 r2 r3 *** Adding switches: s1 s2 s3 s4 *** Adding links: (h1, s1) (h2, s1) (h3, s2) (h4, s2) (h5, s3) (h6, s3) (h7, s4) (h8, s4) (r1, r2) (r1, r3) (r2, s1) (r2, s2) (r3, s3) (r3, s4) *** Configuring hosts h1 h2 h3 h4 h5 h6 h7 h8 r1 r2 r3 *** Starting CLI: ovs-vsctl: no bridge named s1 mininet> ovs-vsctl: no bridge named s2 ovs-vsctl: no bridge named s3 ovs-vsctl: no bridge named s4 def run(): # Our Network object, and its initiation net = Mininet( topo=NetworkTopo(), controller=lambda name: RemoteController( name, ip='127.0.0.1' ), link=TCLink, switch=OVSKernelSwitch, autoSetMacs=True ) mininam = MiniNAM(cheight=600, cwidth=1000 , net=net) Use the following command to install tkinter ...READ MORE Try installing it from the terminal with ...READ MORE Requests is not available for use by ...READ MORE You're missing a close paren in this ...READ MORE suppose you have a string with a ...READ MORE if you google it you can find. ...READ MORE Syntax : list. count(value) Code: colors = ['red', 'green', ...READ MORE can you give an example using a ...READ MORE You need to download and install the ...READ MORE Hi @Vasgi, for the code you're trying ...READ MORE OR
https://www.edureka.co/community/54150/mininam-error-ovs-vsctl-bridge-named-ovs-vsctl-bridge-named
CC-MAIN-2019-35
refinedweb
310
81.02
11-08-13 11:48:49, Toralf Förster wrote: > id. > > It can re reproduced, if > - the NFS share is an EXT3 or EXT4 directory > - and it is created at file located at tempfs and mounted via loop device > - and the NFS server is forced to umount the NFS share > - and the server forced to restart the NSF service afterwards > - and trinity is used > > I could find a scenario for an automated bisect. 2 times it brought this commit > commit 68a3396178e6688ad7367202cdf0af8ed03c8727 > Author: J. Bruce Fields <bfields@...> > Date: Thu Mar 21 11:21:50 2013 -0400 > > nfsd4: shut down more of delegation earlier Added Bruce to CC. > to be the one after which the user mode linux server crashes with a back trace like this: > > > $ cat /mnt/ramdisk/bt.v3.11-rc4-172-g8ae3f1d > [New LWP 14025] > Core was generated by `/home/tfoerste/devel/linux/linux earlyprintk ubda=/home/tfoerste/virtual/uml/tr'. > Program terminated with signal 6, Aborted. > #0 0xb77ef424 in __kernel_vsyscall () > #0 0xb77ef424 in __kernel_vsyscall () > #1 0x083a33c5 in kill () > #2 0x0807163d in uml_abort () at arch/um/os-Linux/util.c:93 > #3 0x08071925 in os_dump_core () at arch/um/os-Linux/util.c:138 > #4 0x080613a7 in panic_exit (self=0x85a1518 <panic_exit_notifier>, unused1=0, unused2=0x85d6ce0 <buf.15904>) at arch/um/kernel/um_arch.c:240 > #5 0x0809a3b8 in notifier_call_chain (nl=0x0, val=0, v=0x85d6ce0 <buf.15904>, nr_to_call=-2, nr_calls=0x0) at kernel/notifier.c:93 > #6 0x0809a503 in __atomic_notifier_call_chain (nr_calls=<optimized out>, nr_to_call=<optimized out>, v=<optimized out>, val=<optimized out>, nh=<optimized out>) at kernel/notifier.c:182 > #7 atomic_notifier_call_chain (nh=0x85d6cc4 <panic_notifier_list>, val=0, v=0x85d6ce0 <buf.15904>) at kernel/notifier.c:191 > #8 0x08400ba8 in panic (fmt=0x0) at kernel/panic.c:128 > #9 0x0818edf4 in ext4_put_super (sb=0x4a042690) at fs/ext4/super.c:818 > #10 0x081010d2 in generic_shutdown_super (sb=0x4a042690) at fs/super.c:418 > #11 0x0810209a in kill_block_super (sb=0x0) at fs/super.c:1028 > #12 0x08100f6a in deactivate_locked_super (s=0x4a042690) at fs/super.c:299 > #13 0x08101001 in deactivate_super (s=0x4a042690) at fs/super.c:324 > #14 0x08118e0c in mntfree (mnt=<optimized out>) at fs/namespace.c:891 > #15 mntput_no_expire (mnt=0x0) at fs/namespace.c:929 > #16 0x0811a2f5 in SYSC_umount (flags=<optimized out>, name=<optimized out>) at fs/namespace.c:1335 > #17 SyS_umount (name=134541632, flags=0) at fs/namespace.c:1305 > #18 0x0811a369 in SYSC_oldumount (name=<optimized out>) at fs/namespace.c:1347 > #19 SyS_oldumount (name=134541632) at fs/namespace.c:1345 > #20 0x080618e2 in handle_syscall (r=0x49e919d4) at arch/um/kernel/skas/syscall.c:35 > #21 0x08073c0d in handle_trap (local_using_sysemu=<optimized out>, regs=<optimized out>, pid=<optimized out>) at arch/um/os-Linux/skas/process.c:198 > #22 userspace (regs=0x49e919d4) at arch/um/os-Linux/skas/process.c:431 > #23 0x0805e65c in fork_handler () at arch/um/kernel/process.c:160 > #24 0x00000000 in ?? () > > > > A real system however would not crash bug would give a kernel BUG as reported here: > We have deleted inodes (regular files) in the orphan list during ext4_put_super(). My guess is that NFS is still holding some inode references to these inodes and thus inodes don't get deleted. So ext3/4 would be just a victim here. > Furthermore the server won't be able any longer to reboot - it would hang > infinitely in the reboot phase. Just the magic sysrq keys still works > then. Well, this is likely because the filesystem cannot be shut down. Honza -- Jan Kara <jack@...> SUSE Labs, CR View entire thread
https://sourceforge.net/p/user-mode-linux/mailman/message/31278567/
CC-MAIN-2017-47
refinedweb
578
52.66
Other Aliasgetgrent, setgrent SYNOPSIS #include <sys/types.h> #include <grp.h> struct group *getgrent(void); void setgrent(void); void endgrent(void); Feature Test Macro Requirements for glibc (see feature_test_macros(7)): setgrent(): - _XOPEN_SOURCE >= 500 || /* Glibc since 2.19: */ _DEFAULT_SOURCE || /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE getgrent(), endgrent(): - _XOPEN_SOURCE >= 500 || /* Since glibc 2.12: */ _POSIX_C_SOURCE >= 200809L || /* Glibc versions <= 2.19: */ _BSD_SOURCE || _SVID_SOURCE DESCRIPTIONThe getgrent() function returns a pointer to a structure containing the broken-out fields of a record in the group database (e.g., the local group file /etc/group, NIS, and LDAP). The first time getgrent()). RETURN VALUETheFor an explanation of the terms used in this section, see attributes(7). In the above table, grent in race:grent signifies that if any of the functions setgrent(), getgrent(), or endgrent() are used in parallel in different threads of a program, then data races could occur. CONFORMING TOPOSIX.1-2001, POSIX.1-2008, SVr4, 4.3BSD. COLOPHONThis page is part of release 4.06 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
http://manpages.org/endgrent/3
CC-MAIN-2019-30
refinedweb
188
60.31
Opened 7 years ago Closed 7 years ago Last modified 15 months ago #9206 closed enhancement (worksforme) patch: authorize using Remote-User: header Description for obscure reasons, my tracd-behind-proxypass wasn't able to authorize the usual way. This patch will make trac trust the 'Remote-User:' header, even if the adaptor (CGI/etc) didn't otherwise set the remote_user field. This probably should be a configuration option as it would otherwise have security implications but is perfect for our use - apache LDAP-based authentication. I'm aware of LdapPlugin but this seemed simpler and with broader implications (could work for other authentication types, kerberos etc). Attachments (2) Change History (19) Changed 7 years ago by comment:1 Changed 7 years ago by Thanks for the patch. Would you mind adding that configuration option? This would be also the place to briefly document why you would need to set up such option. The test could also be written if not remote_user and req.get_header('Remote-User'):, I suppose that would be slightly more efficient. Changed 7 years ago by better patch with option comment:2 Changed 7 years ago by The better patch here automatically logs you in if present (instead of requiring you to click login), and has a configuration option. comment:3 Changed 7 years ago by Why not make this an implementation of the IAuthenticator extension point interface? E.g. That way we can keep trac free of the change that would likely raise security issues, and you are free to deploy your plugin to all the trac installations that you have. comment:4 Changed 7 years ago by Carsten, That's a great idea. I'll do that and post a link here, thanks. -Steven comment:5 Changed 7 years ago by comment:6 Changed 6 years ago by i'm trying to do the same thing: basic auth on <Location> mod_proxy forwarded to tracd. unfortunately tracd doesn't seem to take this auth info no matter which way i try. tried: - with project/plugins/remote-user-auth.py from trac.config import BoolOption from trac.web.api import IAuthenticator - without project/plugins/remote-user-auth.py - tracd with —basic-auth="*,htpasswd,My Proxy Realm" - tracd without —basic-auth - with rewriterule RewriteEngine On RewriteCond %{LA-U:REMOTE_USER} (.+) RewriteRule . - [E=RU:%1] RequestHeader add X-Forwarded-User %{RU}e - without rewriterule what am i missing? comment:7 Changed 6 years ago by Solved, written up at TracStandalone@90. comment:8 Changed 6 years ago by Very good - it did solve my problem as well. I have transformed the script into a setuptools Trac plugin that can be packaged and installed just like any other trac-hack. Along with polishing the documentation it is now ready - download at comment:9 Changed 5 years ago by so i'm wondering what it would take for this ticket to reach fixed? comment:10 Changed 21 months ago by It seems like the configuration option is unnecessary when the authenticator is packaged as a single-file plugins since the behavior can be controlled by enabling/disabling the plugin. The security risk seems relatively low provided there are no caveats to the statement documented on the Django site: This warning doesn’t apply to RemoteUserMiddleware in its default configuration with header = 'REMOTE_USER', since a key that doesn’t start with HTTP_ in request.META can only be set by your WSGI server, not directly from an HTTP request header. Any objections to applying attachment:9206-remoteuser.patch, perhaps renaming the option to trust_remote_user_header? Alternatively, we could put the IAuthenticator in /contrib or /tracopt/web/auth. comment:11 Changed 15 months ago by We discussed this a bit more in gmessage:trac-users:5p8DjrgFHvw/atuGxm90DQAJ. I'm unsure how the login button can function in the recipe TracStandalone#Authenticationfortracdbehindaproxy. Perhaps the site described in the recipe is completely unavailable to anonymous users, and authentication occurs in the web server before hitting the Trac application. For the login button to work, it looks like req.environ['REMOTE_USER'] needs to be set. A different solution to the issue of running TracStandalone behind a proxy might be to plugin in a different implementation of AuthenticationMiddleware. Making the middleware "pluggable" is probably related to #11585 (or at least the related work that was done in Bloodhound). comment:12 Changed 15 months ago by Yes! The site is completely unavailable to anonymous. Sorry for not making that clear. comment:13 follow-up: 14 Changed 15 months ago by Thanks for the additional information. It's good to be able to reconcile the behavior of your site with what we are seeing elsewhere. There are very few places that req.remote_user is used: $grep -R -in "req.remote_user" . --exclude-dir=.git --exclude-dir=build --exclude=*.pyc ./trac/web/auth.py:93: if req.remote_user: ./trac/web/auth.py:94: authname = req.remote_user ./trac/web/auth.py:150: in req.remote_user will be converted to lower case before ./trac/web/auth.py:155: if not req.remote_user: ./trac/web/auth.py:164: remote_user = req.remote_user It's only used in LoginModule.authenticate and LoginModule._do_login. Plugins such as AccountManagerPlugin access req.remote_user as well. Summarizing what I see in the code, req.authname can be populated in an IAuthenticator implementation from the value of the HTTP_REMOTE_USER header, or in the case of LoginModule.authenticate, from the value of req.remote_user. req.remote_user is either set by AuthenticationMiddleware or by the WSGI application. What I don't understand is why LoginModule needs to access req.remote_user in addition to req.authname: tags/trac-1.0.9/trac/web/auth.py@:148,157,161#L131. If trac.web.main.RequestDispatcher.authenticate is called before processing the request to the /login path (which I'm not entirely sure is the case), couldn't we just rely on the IAuthenticator implementation to set req.authname and use the value of req.authname in LoginModule._do_login rather than req.remote_user? comment:14 Changed 15 months ago by Summarizing what I see in the code, req.authnamecan be populated in an IAuthenticatorimplementation from the value of the HTTP_REMOTE_USERheader, or in the case of LoginModule.authenticate, from the value of req.remote_user. req.remote_useris either set by AuthenticationMiddleware or by the WSGI application. If the patch is put to Trac, I think we should be able to use any header for the remote-user's name, not only Remote-User. The HTTP_REMOTE_USER variable can be set by remote attacker, easily. The following commands can set the HTTP_REMOTE_USER variable for Apache. $ curl -o /dev/null --header 'REMOTE-USER: xx' # works with apache 2.2 and 2.4 $ curl -o /dev/null --header 'REMOTE!USER: xx' # works with apache 2.2 $ curl -o /dev/null --header 'REMOTE.USER: xx' # works with apache 2.2 $ curl -o /dev/null --header 'REMOTE=USER: xx' # works with apache 2.2 RequestHeader unset can remove the header. However, it cannot remove in the cases except use of -. RequestHeader unset Remote-User early IMO, I don't recommend to use HTTP header…. comment:15 follow-up: 16 Changed 15 months ago by I misunderstood the statement in the Django documentation that I referenced in comment:10. I had hoped we could modify AuthenticationMiddleware to set the REMOTE_USER from an HTTP header, as in the flask example. If that's not secure, any other ideas on how to set REMOTE_USER in TracStandalone when the web server acts as a proxy and handles authentication? It seems like a generalized problem that we should try to support in Trac. comment:16 Changed 15 months ago by Replying to Ryan J Ollos: I misunderstood the statement in the Django documentation that I referenced in comment:10. I had hoped we could modify AuthenticationMiddleware to set the REMOTE_USERfrom an HTTP header, as in the flask example. That example is insecure, I think. If an HTTP header is set on the reverse proxy, the reverse proxy must remove the header from remote. Apache 2.4: RequestHeader unset Remote-User early Nginx: server { listen *:3000; server_name localhost; location / { proxy_pass; proxy_redirect /; proxy_set_header Remote-User ""; } location /login { proxy_pass; proxy_redirect /; proxy_set_header Remote-User $remote_user; auth_basic "auth"; auth_basic_user_file "./htpasswd.txt"; } } However, non-alphanumeric characters are replaced with _ in Apache 2.2 (e.g. Remote!User: admin ⇒ HTTP_REMOTE_USER: admin). It's hard to remove such headers only using configurations. Since Apache 2.4, such headers is not converted to HTTP_ variables. This changes is introduced in. If that's not secure, any other ideas on how to set REMOTE_USERin TracStandalone when the web server acts as a proxy and handles authentication? It seems like a generalized problem that we should try to support in Trac. Another idea is adding configurable option to use secret header. If the header is unforeseeable, it would be unable to send the header from remote. Untested patch: trac/web/auth.py diff --git a/trac/web/auth.py b/trac/web/auth.py index 81be36fac..64a54812a 100644 comment:17 Changed 15 months ago by Thanks, it sounds like a good idea to use a secret key for servers that don't allow secure web server configuration. We might run into the same issue of req.remote_user not being set in LoginModule._do_login. If that's the case, maybe it's possible to implement a similar idea of specifying the secret key as an option of TracStandalone and renaming the key in AuthenticationMiddleware. I'll do some testing in the coming days. auth-from-header.patch
https://trac.edgewall.org/ticket/9206
CC-MAIN-2017-17
refinedweb
1,586
50.43
Ferris provides a few utilities for working with the Blobstore API and Cloud Storage to upload and serve binary files. The Upload component can take the guesswork out of uploading binary files on App Engine. Automatically handles file upload fields that need to use the blobstore. This works by: - Detecting if you’re on an add or edit action (you can add additional actions with upload_actions, or set process_uploads to True) - Adding the upload_url template variable that points to the blobstore - Updating the form_action and form_encoding scaffolding variables to use the new blobstore action - Processing uploads when they come back - Adding each upload’s key to the form data so that it can be saved to the model Does not require that the controller subclass BlobstoreUploadHandler, however to serve blobs you must either use the built-in Download controller or create a custom controller that subclasses BlobstoreDownloadHandler. This component is designed to work instantly with with scaffolding and forms. Almost no configuration is needed: from ferris import Model, ndb, Controller, scaffold from ferris.components.upload import Upload class Picture(Model): file = ndb.BlobKeyProperty() class Pictures(Controller): class Meta: components = (scaffold.Scaffolding, Upload) add = scaffold.add edit = scaffold.edit list = scaffold.list view = scaffold.view delete = scaffold.delete However, there are instances where you need more direct access. This is possible as well. Upload happens in two phases. First, you have to generate an upload url and provide that to the client. The client then uploads files to that URL. When the upload is successful the special upload handler will redirect back to your action with the blob data. Here’s an example of that flow for a JSON/REST API: from ferris import Controller, route from ferris.components.upload import Upload class Upload(Controller): class Meta: components = (Upload,) @route def url(self): return self.components.upload.generate_upload_url(action='complete') @route def complete(self): uploads = self.components.upload.get_uploads() for blobinfo in uploads: logging.info(blobinfo.filename) return 200 Get all uploads sent to this controller. Returns: A dictionary mapping field names to a list of blobinfo objects. This blobinfos will have an additional cloud_storage property if they have been uploaded to cloud storage but be aware that this will not be persisted. Ferris includes a download controller that is disabled by default for security reasons. To begin using it first enable it in app/routes.py: from ferris.controllers.download import Download routing.route_controller(Download) You can now generate urls to download files: uri("download", blobkey=blobkey) uri("download-with-filename", blobkey=blobkey, filename="kitty.jpg") Google cloud storage is mostly compatible with the existing blobstore API. This means you can upload and serve items the exact same way without any change. However, there are some caveats (see below). To make all uploads for a controller go to cloud storage all you need to do is configure the bucket name: class Upload(Controller): class Meta: components = (Upload,) cloud_storage_bucket = "my-bucket" Note Locally the App Engine SDK will emulate Cloud Storage, but once deployed you must ensure the App Engine Application has access to the given bucket. Now all files will be stored with a unique name on cloud storage and a blobkey will be generated that points to that cloud storage item. You can use the download handler as above to serve blobkeys that point to cloud storage objects, however, serving items in this way does not take advantage of the cloud storage CDN or caching and can be very slow for small items such as images. In order to remedy this you should serve the item directly from cloud storage. However, in order to generate a serving URL you have to have to cloud storage object name. Unfortunately, the App Engine blobkey does not provide this information. As such, you must acquire this object name during the upload step. If you’re using the easy setup of a Model and Form all you have to do is add a field to the model like such: class Picture(Model): file = ndb.BlobKeyProperty() file_cloud_storage = ndb.StringProperty() The upload component will detect these [field]_cloud_storage properties and ensure that these field are populated with the cloud storage object name. If you’re doing things manually (as with the API example above) you’ll need to get the object name yourself: @route def complete(self): uploads = self.components.upload.get_uploads() for blobinfo in uploads: logging.info(blobinfo.filename) logging.info(blobinfo.cloud_storage.gs_object_name) return 200 To generate a serving URL: serving_url = "" % (bucket_name, object_name) Note These URLs will not work locally as the SDK does not actually upload anything to cloud storage.
http://ferris-framework.appspot.com/docs21/users_guide/uploads_and_downloads.html
CC-MAIN-2017-13
refinedweb
768
56.35
A. Need this program in Java You are asked to write a program for doctorâ€s to help them keep track of their billing information. You need to be able to keep track of patient information such as patient number, last name, first name, and full address. Also, there needs to be a way to keep track of how much the patient owes, but you need to store this information separately from where you are storing the patientâ€s personal information. This will allow anyone who works at the doctorâ€s office to obtain information about the patient without seeing their financial information unless they work at the front desk. The program must be able to store multiple patients and corresponding billing information (at least 4 records each). Have a menu that will print an entire list of patients, search for patient by last name, print patients that owe more than $50, and allow the user to quit. When printing the amount owed make sure that the patientâ€s information is included in the printout. 1. Print Patient List 2. Search Patients (By Last Name) 3. Patient's Balance Greater Than $50 4. Quit Sample output for Options 1 & 2: Patient Number: 145 Name: Word, John Address: 9 Turn Road, Greer, SC 29640 Sample output for Option 3: Patient Number: 111 Name: Smith, Bill Address: 123 Home Way, Spartanburg, SC 29303 Bill Smith Amount Due: $200.0 Patient Number: 123 Name: Johnson, Jack Address: 111 Tree Lane, Greenville, SC 29614 Jack Johnson Amount Due: $100.0 Hint: The best way to keep track of billing for a patient would be to include the patientâ€s record in the billing record without recording the patientâ€s information twice.• Show less I have a question about a program I am writing in VISUAL BASIC : This is what the rules to the program are: Write a complete Visual Basic program to do the following: Joe’s Pizza Palace needs an application to calculate the number of slices a pizza of any size can be divided into. The application should do the following: ? Allow the user to enter the diameter of the pizza, in inches. ? Calculate the number of slices that can be cut from a pizza that size. ? Display a message that indicates the number of slices. To calculate the number of slices that can be cut from the pizza, you must know these facts: ? Each slice should have an area of 14.125 inches. ? To calculate the number of slices, divide the area of the pizza by 14.125. ? The area of the pizza is calculated with the formula: Area = ? r2 Note: The ? is pi which is equal to 3.14159.The r is the radius of the pizza. Divide the diameter by 2 to get the radius. Form: The application should be done form-based. You may design your own form. It must have the following: ? a label indicating that it is Joe’s Pizza Palace ? a field with label to enter the size (diameter) of the pizza ? a field with label to display the number of slices ? buttons to calculate the number of slices and to exit the application Output: Use the following test data to determine if the application is calculating properly: Diameter of Pizza Number of Slices 22 inches 27 15 inches 13 12 inches 8 My issue is that I wrote it in Try/Catch and I'm wondering what I can do to rewrite it so it still operates without using Try/Catch. Also Option Strict On needs to be used. I really am confused. I am very new to Visual Basic and am hoping someone can show me the way. Thank you very much. Below I have included the program and the GUI. Option Strict On Public Class Form1 'This is Melanie Manion's Pizza Pi Midterm on January 30th of 2015 'My goal with this project is to demonstrate that I can show in this application I can calculate the number of slices a pizza of any size can be divided into. Private Sub btnCalculateslices_Click(sender As Object, e As EventArgs) Handles btnCalculateslices.Click 'Declare the variables for calculation Dim decDiampizza As Decimal Dim decCalculateslices As Integer Dim decslicesize As Decimal = CDec(14.125) Dim decradius As Decimal Dim decArea As Decimal Try 'Calculate and display number of slices decDiampizza = CDec(txtdiameterofpizza.Text) decradius = decDiampizza / 2 decArea = CDec(3.14159 * (decradius) * (decradius)) decCalculateslices = CInt(decArea / decslicesize) MessageBox.Show("Number of Slices: " & decCalculateslices) Catch ex As Exception MessageBox.Show("Input must be numeric") End Try End Sub Private Sub btnClear_Click(ByVal sender As Object, e As EventArgs) Handles btnClear.Click 'Clear the field’s information txtdiameterofpizza.Clear() txtdiameterofpizza.Focus() End Sub Private Sub btnExit_Click(sender As Object, e As EventArgs) Handles btnExit.Click 'Close the form Me.Close() End Sub Private Function Calculateslices() As Object Throw New NotImplementedException End Function Private Function txtDiampizza() As Object Throw New NotImplementedException End Function End Class • Show less• Show less I have been a programmer for almost 1 year. As an ADHD adult, naturally I don't have the same strength of attention on ordinary stuffs as my colleagues do. And I find the catastrophe made by me are usually caused by trivial negligence. Like for today, I found the cron process on the server collapsed in the morning. After half hour of debugging. I found I wrote in the cron * 4 * * * sh daily_task.sh instead of 0 4 * * * sh daily_task.sh Which runs the huge shell 59 times in the morning instead of the intended 1 time. Is there some kind of cultivatable behaviour or some tools or anything that can help me at least reduce such kind of mistake? How do you do to avoid such kind of mistake?• Show less I wrote this. doesn't matter what am I supposed to do with it. It just doesn't complie! Using G++ 4.9 In the line which I insert data into the set I get error! commenting out that line will solve the problem. But, why am I getting error? #include <iostream> #include <set> #include <string> using namespace std; struct data { string s; int x, y; }; int main() { int n; cin >> n; set <data> Set; for (int i = 0; i < n; i++) { data temp; cin >> temp.s >> temp.x >> temp.y; Set.insert(temp); } // Prints the size cout << Set.size() << endl; int cnt = 0; set <data> :: iterator it; for (it = Set.begin(); it != Set.end(); ++it) cout << (*it).s << (++cnt % 5 ? " " : "\n"); return 0; } I've been reading Aleph One's paper on Smashing the Stack for Fun and Profit. I wrote down example1.c from his paper, modified it a bit to see what the stack looks like on my system. I'm running Ubuntu (64-bit) on a VM on an Intel i5 M 480. The paper says that a stack will have the following structure. It also says that the word size is 4 bytes. I read up on word sizes and determined that on a 64-bit OS that is not "long-enabled" the word size is still 32 bits or 4 bytes. enter image description here However, when I run my custom code: void function(int a, int b, int c) { char buffer1[5]; char buffer2[10]; memset(buffer1, "\xaa", sizeof(buffer1)); } void main() { function(1, 2, 3); } I do not get the same stack structure as the paper. Yes, I'm aware that the paper was published in 1998 but I haven't found any article on the internet stating that the stack structure has been modified greatly. Here's what my stack looks like (I'm also uploading GDB screenshots for verification, in case I've misinterpreted the stack): Lower memory Higher memory ------------------------------------------------------------------------- | int c | int b | int a | buffer1 | buffer2 | RBP | RET | | 4 bytes | 4 bytes | 4 bytes | 16 bytes | 16 bytes | 8 bytes | 8 bytes | ------------------------------------------------------------------------- enter image description here enter image description here Now for my questions: Why has the stack structure changed? What is with the extra space given to buffer1 and buffer2? According to the paper they should have only 8 bytes and 12 bytes allotted. However, buffer2 gets an extra 6 bytes and only then does buffer1 begin and even buffer1 is allotted 16 bytes. Am I missing something here? I read about slack space being given as a protective mechanism, is this it? I have found the following loop annotation in a big project I am working on (pseudocode): var someOtherArray = []; for (var i = 0, n = array.length; i < n; i++) { someOtherArray[i] = modifyObjetFromArray(array[i]); } What brought my attention is this extra "n" variable. I have never seen a for lop written in this way before. Obviously in this scenario there is no reason why this code couldn't be written in the following way (which I'm very much used to): var someOtherArray = []; for (var i = 0; i < array.length; i++) { someOtherArray[i] = modifyObjetFromArray(array[i]); } But it got me thinking. Is there a scenario when writing such a for loop would make sense? The idea comes to mind that "array" length may change during the for loop execution, but we don't want to loop further than the original size, but I can't imagine such a scenario. Shrinking the array inside the loop does not make much sense either, because we are likely to get OutOfBoxException. Is there a known design pattern where this annotation is useful?• Show less Various articles on SCRUM methodology explicitly state that sizing should not focus on time, bur rather on the more abstract "complexity" or "effort needed" of the task. How should a task be sized if the task is trivial to do, but requires a really long time to complete (say, half the sprint or the whole sprint)? Should it be sized 1, 2, 3, or more like 20-40-Inf? As an example of such a task, I can give the following: Convert the translated texts from the technical PDF documentation (10 000+ texts) in the correct translations format for Android and iOS.• Show less I am writing an angular application, and I'm wondering how much client side memory to use. I'm currently working on a scenario where there are 2 dropdowns. The second will load new values depending on the selection of the first. I'm thinking the max # of total records in the 2nd dropdown would be around 2000-3000 items, each being around 2k each. Each selection would display probably 10-15 items of the 2000-3000. Should I load the entire array into memory and parse the selected values from there, or should I read from the server every time the first dropdown changes? I know for a desktop this wouldn't be a big deal. But we support phones and tablets, and I'm not sure how much memory to worry about with these devices.• Show less Answer the questions with clear answer and understandable, and if you got the answer in the internet please cite the website. In Linux : 1- In Linux, use diff lisaRB.txt D1/ls1.txt and diff lisaRC.txt D2/ls2.txt Explain the differences. Focus on the i-numbers. 2- explain what the tar command does when used with the options, -czf. Use man or the Internet to research your answer. 3- describing what the -e option does for the Linux zip command? 4- explain why username.tgz.zip and username.tgz have different sizes?• Show less Code written in C++. Recently, a research team collected genetic samples from around the Great Smoky Mountains in search of new species. You've enlisted to help the team by making a database of the genetic sequences. For this lab, you get to parse the input and store it in DNA objects (in preparation for inserting the information into a linked list). The parsing will be done in a class called SequenceDatabase. A description of these classes follows: class DNA: This class represents a single DNA sequence and should contain: Appropriate constructor(s) Data members to store: Label Accession ID (which is unique) Sequence Length of the sequence Index of the coding region (or -1 if not applicable) A print() method that prints the above information (used in lab1) Appropriate "get" and "set" methods class SequenceDatabase: This class should contain: Appropriate constructor(s) Method to process commands from a specified file. Commands are as follows (fields are separated by tabs): D (allocates memory for a new DNA object, which in lab1 will be added to a linked list; for now, allocate memory and print out "Adding ...", where is the ID number (see the example output below)) O (in lab1, obliterates the specified DNA entry; for now, print out "Obliterating ...") P (in lab1, prints the specified DNA entry; for now, print out "Printing ...") S (in lab1, displays the number of DNA entries; for now, print out "Entries: NYI") Driver file #include #include using namespace std; // notice the first letter is a lower case s #include "sequenceDatabase.h" int main( /*int argc, char argv[] */ ){ string commandsFilename = "lab0-commands-short.tab"; // Read in a filename from STDIN (or defualt to one) // If nothing is entered (really just a return) then use the listed filename. // Otherwise, read one from STDIN. char firstChar; string stdinFilename; cout << "Please enter the commands filename (or simply press return to use " << commandsFilename << ")\n"; cin.get( firstChar); if( firstChar != '\n'){ cin >> stdinFilename; // replace the default filename commandsFilename = firstChar + stdinFilename; } SequenceDatabase entries; // use SequenceDatabase entries{ }; for C++ 11 cout << "Importing " << commandsFilename << endl; entries.importEntries( commandsFilename); return 0; } Data File D taxon1 12345 agtcgatcagaagatctcct 20 -1 P 12345 O 12345 S P 9999 O 9999 Join Chegg Study Guided textbook solutions created by Chegg experts Learn from step-by-step solutions for 2,500+ textbooks in Math, Science, Engineering, Business and more 24/7 Study Help Answers in a pinch from experts and subject enthusiasts all semester long
http://www.chegg.com/homework-help/definitions/binary-search-3?cp=CHEGGFREESHIP
CC-MAIN-2015-06
refinedweb
2,325
63.39
Like many languages, Java has a ton of built-in tools and methods we can use. However, Java is downright huge! We can't load every little bit of Java into every program just in case we might want to use one of its tools! This would make our application run very slowly. Instead, when we want to use a specific part of Java, we can explicitly import it into our program. That way we're only loading tools we need, and our applications can remain speedy and well-organized. In this lesson we'll walk through how to import code. We will create an application that asks the user what their favorite color is, gathers their answer, and provides a response based upon how they answered. Some code seen in this lesson will be quite advanced, but nevertheless the only way to reliably receive user input running an app inside IntelliJ. Don't worry if this feels confusing or overwhelming at first - we'll show exactly how to work with this code, and you are not expected to be able to write it from scratch, or memorize it. If you are confused by the try /catch - here's a super brief summary. We'll revisit this more in depth later in the class. A try/catch block allows us to say: "Hey Java, i want you to run some code that might, under some circumstances, cause an error (Input/Output exception - something is most likely null when it shouldn't be). If that error happens, don't panic or crash, just print the stacktrace (error log)". This allows us to handle errors a bit more gracefully and make our apps more stable. In order to gather the user's response through the console, we'll import Java's BufferedReader class. This will allow us to retrieve user-inputted text from the command line. Also, throughout this process you'll hear terms like classes, or packages. Keep in mind that a Class is, as discussed, a Java file that has been compiled. A package is simply a group of files - kind of like a directory. First, create a project, a process we're becoming quickly familiar with, called favorites. In our src/main/java directory we'll make a file called FavoriteColor.java and set up our required class and method. This part should look familiar: public class FavoriteColor { public static void main(String[] args) { //try typing psvm + tab to autocomplete! } } Remember, the name of our class must directly reflect the name of our file. Since our file is called FavoriteColor.java, our class name is FavoriteColor. Now, we want our application to ask the user what their favorite color is in the command line. Let's add code to print "What's your favorite color?" for the user: public class FavoriteColor { public static void main(String[] args) { System.out.println("What is your favorite color?"); } } When data travels from within our application out to the user, like it is here when our application prints "What is your favorite color?" to the command line, it's known as system out. Notice that this is reflected in our line of code: System.out precedes the println() function. Now, compile and run your program by choosing Run > Run > FavoriteColor. And we should see "What is your favorite color?" in our terminal. Perfect! Now, let's add code that allows the user to enter a response, and our application to collect that response. Carefully copy the following code into your FavoriteColor.java file: public class FavoriteColor { public static void main(String[] args) { System.out.println("What is your favorite color?"); try{ BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(System.in)); String inputString = bufferedReader.readLine(); System.out.println("Color entered : " + inputString); } catch(IOException e) { e.printStackTrace(); } } } Wow, this is looking pretty complicated. And, lots of things on our page are red, and there are squiggly lines in places. Don't worry! You don't need to understand everything here. And we'll fix the squiggles in a sec. The red areas and squiggles appear because we haven't yet imported the specific areas of Java we're trying to use. We need to manually import it. We can do this by adding the following line to the top of our FavoriteColor.java: import java.io.BufferedReader; Including this line has granted our file access to methods in Java's BufferedReader class. Great. This made one of the red areas disappear! Now, this isn't the only way to import code. There is a much easier to import code with IntelliJ. As you can see, you have a red squiggly under InputStreamReader - this indicates that IntelliJ's code checker has found an issue or inconsistency with your code. It's because we haven't yet imported import java.io.InputStreamReader; from our code. If you hover over the code, it'll tell you it "Cannot resolve symbol InputStreamReader". A symbol is Java's umbrella term for any variable, data type or object it can't locate. Let's import the necessary code Java needs to work with InputStreamReader objects the smart way. Place your cursor next to InputStreamReader, until you see an underline. (you may need to move the cursor back or forward a few spaces. It'll get easier to make work after a few tries.) When you see the underline, hit the windows + enter key on our classroom macs ( alt + enter or option + enter on your home machine). Sweet! IntelliJ is able to import our code for us. Remember this command, it's super useful. Do this again for the IOException, and now IntelliJ shouldn't show any red squiggles and is feeling much happier. Good stuff. While we are at it, let's change our very factual System.out.println("Color entered : " + inputString); to a friendlier System.out.println("Your favorite color is " + inputString + "? Me too!"); This is what our complete code looks like now: import java.io.BufferedReader; import java.io.IOException; import java.io.InputStreamReader; public class FavoriteColor { public static void main(String[] args) { System.out.println("What is your favorite color?"); try{ BufferedReader bufferedReader = new BufferedReader(new InputStreamReader(System.in)); String inputString = bufferedReader.readLine(); System.out.println("Your favorite color is " + inputString + "? Me too!"); } catch(IOException e) { e.printStackTrace(); } } } We can compile and run our application one more time. When you see the phrase "What is your favorite color?" place the cursor below it, and type your input, then press enter. You should see something like this: What is your favorite color? red Your favorite color is red? Me too! Process finished with exit code 0 When the terminal stops, it is waiting for you to provide a response and hit Enter. If we type in "blue", and hit enter, we should see the response "Your favorite color is blue? Me too!". Nice work! We are now interacting with a Java program we wrote ourselves. Pretty cool!
https://www.learnhowtoprogram.com/java/java-basics-9a4b2b2a-6de4-44b5-9f35-0e506177d73b/importing-code-and-receiving-user-input
CC-MAIN-2019-04
refinedweb
1,157
67.96
About | Projects | Docs | Forums | Lists | Bugs | Get Gentoo! | Support | Planet | Wiki On Wed, Aug 31, 2005 at 01:32:04AM +1000, Finn Thain wrote: > > > > Reasoning is, how do you know that pkg xyz is actually the package > > > > you're after? > > > mac os, automated updates mean that most of the time, there will be some > vendor packages that the tree hasn't been tested against. These have to be > masked until the user does emerge sync. Alright, so I'm just being a tool 'coz I thought you were talking about dynamic mapping (vs dev managed mappings). Nevermind me :) > BTW, do repos share a namespace? Presented with the same cpv in several > repos, is portage's behaviour defined yet? repo's have their own *total* namespace now; an overlay + repo is different though since an overlay is slaved to a repo. <=2.1 basically lacks any true support for N repos; you can have a portdir(+overlays), a vdb, and a bintree. Rewrite has no such restriction built into it. >. Well, considering I'm seriously considering when/if rewrite is released, it's released as two packages; portage-core, and portage-ebuild... yes. Very modular. There pretty much is one point of required entry into the code; getting the config obj- from there it loads the code it needs, instantiating objects on the fly. Aside from the entry point/config obj, everything else is intended to be configurable. ~harring Updated Jun 17, 2009 Summary: Archive of the gentoo-osx mailing list. Donate to support our development efforts. Your browser does not support iframes.
http://archives.gentoo.org/gentoo-osx/msg_b902a5a48fbcb59fa17c4238706eef24.xml
CC-MAIN-2015-06
refinedweb
262
64.3
IRC log of dawg on 2005-08-02 Timestamps are in UTC. 14:23:58 [RRSAgent] RRSAgent has joined #dawg 14:23:58 [RRSAgent] logging to 14:24:05 [DanC] Meeting: RDF Data Access WG Weekly 14:24:11 [DanC] Regrets: SouriD 14:24:49 [DanC] DanC has changed the topic to: DAWG 2 Aug. scribe: AndyS 14:25:10 [DanC] agenda + Convene, take roll, review records and agenda 14:25:17 [DanC] Agenda: 14:25:26 [EliasT] EliasT has joined #dawg 14:25:27 [DanC] agenda + ISSUE resultsMimeType 14:25:36 [DanC] agenda + SPARQL results publication 14:25:43 [DanC] agenda + toward Protocol last call 14:25:53 [DanC] agenda + SPARQL QL comment status and editorial comments 14:26:03 [DanC] agenda + issue valueTesting 14:26:14 [DanC] agenda + issue badIRIRef 14:26:24 [DanC] agenda + comment "Query forms should be resources, not operations" 14:26:30 [DanC] agenda + Test cases publication 14:27:43 [kendall] whew, crowded agenda :> 14:28:50 [DanC] yeah... it took me several hours to write 14:29:15 [kendall] i bet 14:29:51 [Zakim] SW_DAWG()10:30AM has now started 14:29:52 [Zakim] +Kendall_Clark 14:30:17 [Zakim] +[IBMCambridge] 14:30:19 [Zakim] -[IBMCambridge] 14:30:20 [Zakim] +[IBMCambridge] 14:30:28 [EliasT] Zakim, IBMCambridge is EliasT 14:30:28 [Zakim] +EliasT; got it 14:30:38 [Zakim] +??P8 14:30:43 [DaveB] Zakim, ??P8 is DaveB 14:30:43 [Zakim] +DaveB; got it 14:30:44 [Zakim] +Jeen_Broekstra 14:31:02 [Zakim] +DanC 14:31:14 [Zakim] +HowardK 14:31:21 [DanC] Zakim, take up item 1 14:31:21 [Zakim] agendum 1. "Convene, take roll, review records and agenda" taken up [from DanC] 14:31:32 [DanC] Agenda: 14:31:43 [DanC] Regrets: SouriD 14:31:55 [Zakim] +EricP 14:32:41 [ericP] Meeting: RDF Data Access 14:32:44 [DanC] -> minutes 26 Jul 2005 Date: 2005/08/01 15:34:54 14:32:51 [ericP] Scribe: AndyS 14:33:01 [Zakim] +??P13 14:33:05 [AndyS] zakim, ??P13 is AndyS 14:33:05 [Zakim] +AndyS; got it 14:33:10 [DanC] RESOLVED: to approve 26 Jul minutes 14:33:25 [Zakim] +[IPcaller] 14:33:26 [JosD] JosD has joined #dawg 14:33:30 [DanC] next meeting: 9 Aug? 14:33:49 [DanC] Zakim, IPcaller is JanneS 14:33:54 [Zakim] +JanneS; got it 14:34:04 [DanC] Zakim, who's on the phone? 14:34:04 [Zakim] On the phone I see Kendall_Clark, EliasT, DaveB, Jeen_Broekstra, DanC, HowardK, EricP, AndyS, JanneS 14:34:04 [JanneS] JanneS has joined #dawg 14:34:06 [DanC] Zakim, who's on the phone? 14:34:06 [Zakim] On the phone I see Kendall_Clark, EliasT, DaveB, Jeen_Broekstra, DanC, HowardK, EricP, AndyS, JanneS 14:34:26 [Zakim] +Jos_De_Roo 14:34:52 [AndyS] Next meeting: 9 August / next scribe : JanneS 14:35:23 [AndyS] Actions continued 14:35:47 [DanC] Zakim, next agendum 14:35:47 [Zakim] agendum 2. "ISSUE resultsMimeType" taken up [from DanC] 14:37:25 [AndyS] Yes but without broken lines :-) 14:37:38 [AndyS] ACTION: ericP to add "don't normalize" to rq23 14:37:59 [AndyS] ACTION: EricP to add test in 0096 to rq23 tests. label "approved" and 14:37:59 [AndyS] ref 14:38:31 [kendall] zakim, mute me 14:38:31 [Zakim] Kendall_Clark should now be muted 14:38:34 [DanC] ACTION: DaveB respond to "sparqlResults namespace" comment [CONTINUES] 14:39:22 [DanC] Zakim, next agendum 14:39:22 [Zakim] agendum 3. "SPARQL results publication" taken up [from DanC] 14:39:24 [AndyS] Actions done from item 2 14:39:24 [kendall] [OK?] == are you happy message 14:39:37 [kendall] zakim, unmute me 14:39:37 [Zakim] Kendall_Clark should no longer be muted 14:40:08 [AndyS] Actions done as noted in agenda 14:40:19 [AndyS] ACTION: EricP to publish rf1 14:40:29 [DanC] Zakim, next agendum 14:40:29 [Zakim] agendum 4. "toward Protocol last call" taken up [from DanC] 14:41:19 [AndyS] Actions done as noted in agenda (item 4) 14:42:11 [AndyS] Kendall: v1.55 isn't ready but KC will OK it if everyone else does 14:42:15 [Zakim] +NickG 14:42:22 [SteveH] Zakim, NickG is SteveH 14:42:22 [Zakim] +SteveH; got it 14:42:25 [SteveH] Zakim, mute me 14:42:25 [Zakim] SteveH should now be muted 14:42:26 [DanC] welcome SteveH 14:42:35 [AndyS] q+ to ask about SOAP binding 14:42:53 [AndyS] DanC: no red items should be there 14:43:14 [AndyS] EricP suggests replacing with @@ 14:43:23 [DaveB] the <sparql> result has the old namespace 14:44:32 [AndyS] DanC: wants examples to be in test suite for protocol 14:44:48 [AndyS] .. hence will become normative (maybe not right now) 14:46:14 [AndyS] DanC: Comments on error handling and conformance pending - chance to address now 14:46:36 [AndyS] e.g. definitions of SPARQL query service 14:47:01 [AndyS] WSDL magic? (definition?? :-) 14:48:04 [AndyS] EricP to ask Philippe whether WSDL 2.0 is considered defining 14:50:15 [DanC] todo list: (1) examples (2) post binding (3) conformance/definitions 14:50:46 [kendall] (4) some kind of non-normative wsdl 1.1 14:52:15 [DanC] (5) something about output serialization... is it required? 14:52:39 [kendall] the output types are certainly specified otherwise 14:52:46 [AndyS] May be optional in which case the alternatives we can send don't matter 14:53:13 [SteveH] I could do tests if that helps 14:53:46 [DanC] ACTION Elias: elaborate "DESCRIBE with simple RDF dataset" and a few other examples 14:54:29 [DanC] ACTION KendallC: update to [what?] 14:55:06 [AndyS] .../2005/sparql-results# (which I rememeber as I keep getting it wrong :-() 14:55:13 [SteveH] Zakim, unmute me 14:55:13 [Zakim] SteveH should no longer be muted 14:55:33 [AndyS] SteveH: tests offer 14:55:58 [AndyS] SteveH: Start with test content negotiation 14:56:03 [DanC] ACTION SteveH: elaborate "CONSTRUCT with content negotiation" into a test case 14:56:22 [SteveH] Zakim, mute me 14:56:22 [Zakim] SteveH should now be muted 14:57:45 [DanC] ACTION EricP: discuss conformance w.r.t. WSDL 2 with PLH and send some notes to the WG, preferably including suggested text for conformance 14:58:57 [kendall] I think there's a WSDL 2.0 eclipse plugin... 15:00:19 [AndyS] WSDL 2.0: Can be one step ahead of another rec which is ref'ed 15:00:49 [AndyS] WSDL 2.0: Talking about spring 2006 15:02:20 [AndyS] DanC plans to spend time on other things from CR 15:02:43 [AndyS] DanC: CR may be long to allow sync with other recs 15:03:20 [AndyS] KC: is v1.55 an LC candidate? 15:03:49 [AndyS] q? 15:04:06 [AndyS] DaveB: has red in it 15:04:21 [DanC] Zakim, pick a victim 15:04:21 [Zakim] Not knowing who is chairing or who scribed recently, I propose Jos_De_Roo 15:04:36 [DanC] Zakim, pick a victim 15:04:36 [Zakim] Not knowing who is chairing or who scribed recently, I propose Kendall_Clark 15:04:40 [SteveH] whats the deadline? 15:05:19 [AndyS] AndyS will review - needs 3 clear days 15:05:25 [SteveH] I cant do it by a week today 15:06:29 [AndyS] NB My Thursday is earlier than KC's 15:07:05 [AndyS] I can review regardless of state 15:07:09 [kendall] ah, good 15:07:31 [DanC] ACTION AndyS: review proto-wd/ as LC candidate after signal from KC, perhaps by 9 Aug but more likely 16 Aug 15:08:36 [AndyS] q- 15:09:51 [kendall] soap traces would be required, I think.... 15:10:20 [ericP] ACTION EricP: review protocol document for last call when KendallC says it's ready 15:12:24 [kendall] FWIW, I've heard enough to work on adding it, but that really puts "being done by Wednesday" in jeopardy 15:12:44 [DanC] ACTION SteveH: review proto-wd by 16th Aug, if not 9th Aug 15:12:55 [AndyS] KC: Adding SOAP delays the LC candidate 15:14:45 [DanC] -> Experience with SPARQL/P/SOAP/Axis 15:14:45 [EliasT] 15:14:57 [AndyS] 15:15:36 [DanC] [[ 15:15:37 [DanC] <wsdl:binding 15:15:37 [DanC] <soap:binding style="document" transport=" "/> 15:15:39 [DanC] ... 15:15:40 [DanC] ]] 15:16:07 [Zakim] +[IBMCambridge] 15:18:39 [AndyS] DanC: suggest a separate tech report from the WG for WSDL 1.1 15:19:14 [DanC] ACTION LeeF: draft WSDL 1.1 for SPARQL thingy with AndyS and Elias 15:19:44 [DanC] ... ETA 1 month 15:19:59 [kendall] kendall has joined #dawg 15:20:08 [jeen] jeen has joined #dawg 15:20:09 [JosD] JosD has joined #dawg 15:20:12 [DanC] <DanC> ACTION LeeF: draft WSDL 1.1 for SPARQL thingy with AndyS and Elias ETA 1 month 15:20:12 [LeeF] LeeF has joined #dawg 15:21:14 [EliasT] EliasT has joined #dawg 15:21:52 [DanC] ACTION DanC: ask WSDL WG to review WSDL 1.1 and WSDL 2 SPARQL protocol stuff, once both are available 15:21:56 [AndyS] AndyS suggest ask the WSDL WG re compatiblity of our WSDL 1.1 for wire format 15:22:40 [DanC] RRSAgent, list action 15:22:40 [RRSAgent] I'm logging. I don't understand 'list action', DanC. Try /msg RRSAgent help 15:22:41 [DanC] RRSAgent, list actions 15:22:41 [RRSAgent] I see 13 open action items saved in : 15:22:41 [RRSAgent] ACTION: ericP to add "don't normalize" to rq23 [1] 15:22:41 [RRSAgent] recorded in 15:22:41 [RRSAgent] ACTION: EricP to add test in 0096 to rq23 tests. label "approved" and [2] 15:22:41 [RRSAgent] recorded in 15:22:41 [RRSAgent] ACTION: DaveB respond to "sparqlResults namespace" comment [CONTINUES] [3] 15:22:41 [RRSAgent] recorded in 15:22:41 [RRSAgent] ACTION: EricP to publish rf1 [4] 15:22:41 [RRSAgent] recorded in 15:22:41 [RRSAgent] ACTION: Elias to elaborate "DESCRIBE with simple RDF dataset" and a few other examples [5] 15:22:41 [RRSAgent] recorded in 15:22:41 [RRSAgent] ACTION: KendallC to update to [what?] [6] 15:22:41 [RRSAgent] recorded in 15:22:41 [RRSAgent] ACTION: SteveH to elaborate "CONSTRUCT with content negotiation" into a test case [7] 15:22:41 [RRSAgent] recorded in 15:22:41 [RRSAgent] ACTION: EricP to discuss conformance w.r.t. WSDL 2 with PLH and send some notes to the WG, preferably including suggested text for conformance [8] 15:22:41 [RRSAgent] recorded in 15:22:41 [RRSAgent] ACTION: AndyS to review proto-wd/ as LC candidate after signal from KC, perhaps by 9 Aug but more likely 16 Aug [9] 15:22:41 [RRSAgent] recorded in 15:22:41 [RRSAgent] ACTION: EricP to review protocol document for last call when KendallC says it's ready [10] 15:22:41 [RRSAgent] recorded in 15:22:41 [RRSAgent] ACTION: SteveH to review proto-wd by 16th Aug, if not 9th Aug [11] 15:22:41 [RRSAgent] recorded in 15:22:41 [RRSAgent] ACTION: LeeF to draft WSDL 1.1 for SPARQL thingy with AndyS and Elias [12] 15:22:41 [RRSAgent] recorded in 15:22:41 [RRSAgent] ACTION: DanC to ask WSDL WG to review WSDL 1.1 and WSDL 2 SPARQL protocol stuff, once both are available [13] 15:22:41 [RRSAgent] recorded in 15:24:15 [AndyS] AndyS sends regrets for Aug 16 15:24:32 [EliasT] Zakim, mute me 15:24:32 [Zakim] EliasT should now be muted 15:25:12 [AndyS] DanC reviews LC status 15:26:21 [kendall]. 15:26:29 [DanC] Zakim, pick a victim 15:26:29 [Zakim] Not knowing who is chairing or who scribed recently, I propose DanC 15:26:33 [kendall] (err, I'd like that on the record.) 15:26:37 [DanC] Zakim, pick a victim 15:26:37 [Zakim] Not knowing who is chairing or who scribed recently, I propose DanC 15:26:41 [DanC] Zakim, pick a victim 15:26:41 [Zakim] Not knowing who is chairing or who scribed recently, I propose Jos_De_Roo 15:26:47 [DanC] Zakim, pick a victim 15:26:47 [Zakim] Not knowing who is chairing or who scribed recently, I propose AndyS 15:29:40 [AndyS] AndyS+EricP wil do an ack section (not prior art) 15:29:42 [kendall] I'm very +1 on having an ack section in the ql spec. 15:29:57 [DanC] ACTION AndyS: take the "Backslashes in string literals" comment 15:30:04 [DanC] Zakim, next agendum 15:30:04 [Zakim] agendum 5. "SPARQL QL comment status and editorial comments" taken up [from DanC] 15:30:12 [DanC] Zakim, close agendum 5 15:30:12 [Zakim] agendum 5 closed 15:30:13 [Zakim] I see 4 items remaining on the agenda; the next one is 15:30:15 [Zakim] 6. issue valueTesting [from DanC] 15:30:16 [DanC] Zakim, next agendum 15:30:16 [Zakim] agendum 6. "issue valueTesting" taken up [from DanC] 15:30:16 [AndyS] AndyS wil take the escapes in literals 15:30:20 [kendall] zakim, mute me 15:30:20 [Zakim] Kendall_Clark should now be muted 15:31:02 [AndyS] ACTION AndyS: Propose text for escapes in literals 15:31:46 [DanC] action -14 15:31:52 [AndyS] ACTION: EricP to finish extendedType-eq-pass-result.n3 15:31:53 [AndyS] DONE 15:32:20 [DanC] PROPOSED: to accept extendedType-eq-pass-result.n3 (manifest 1.4 ...) 15:32:30 [DanC] PROPOSED: to accept extendedType-eq-pass-result.n3 (manifest 1.4 query 1.5) 15:32:34 [DanC] PROPOSED: to accept extendedType-eq-pass-result.n3 (manifest 1.4 query 1.5 results 1.2) 15:33:07 [kendall] zakim, unmute me 15:33:07 [Zakim] Kendall_Clark should no longer be muted 15:33:14 [DanC] so RESOLVED. 15:33:27 [DaveB] er, that's missing the query: # $Id: extendedType-eq-pass.rq,v 1.5 2005/08/02 03:30:19 eric Exp $ 15:33:39 [SteveH] action please, so the minuites get referneces 15:34:04 [DanC] ACTION SteveH: update test matierials to show extendedType-eq-pass-result.n3 approved in this 2 Aug meeting 15:34:24 [ericP] labeled "approved" 15:36:09 [kendall] Tim sent a message about this a few minutes ago -- to comments list, I mean 15:37:17 [kendall] I think I will be conviced iff the claims about de Morgan's law are valid. Not sure yet, though. 15:37:58 [DanC] q+ to ask if there's a test case for every sense of the word "error" 15:38:50 [DanC] ack danc 15:38:50 [Zakim] DanC, you wanted to ask if there's a test case for every sense of the word "error" 15:40:36 [ericP] 'XXI'^^:romanNumeral = 21 15:44:39 [DanC] ew... sounds like our design for sorting takes a different approach from our design for valueTesting. blech. new information to me. 15:45:33 [DanC] any volunteers to make test cases out of 'XXI'^^:romanNumeral = 21 and points nearby? I'd like to take this to email. 15:46:00 [AndyS] Agreed - tricky to do in a telecon 15:46:07 [kendall] zakim, unmute me 15:46:07 [Zakim] Kendall_Clark was not muted, kendall 15:46:11 [kendall] zakim, mute me 15:46:11 [Zakim] Kendall_Clark should now be muted 15:46:45 [ericP] -> RDF = test 15:46:57 [kendall] zakim, unmute me 15:46:57 [Zakim] Kendall_Clark should no longer be muted 15:47:09 [DanC] ACTION DaveB: make 'XXI'^^:romanNumeral = 21 and points nearby into test cases (or ask questions in email). 15:47:28 [ericP] [[ 15:47:29 [ericP] Returns TRUE if the two arguments are the same RDF term or if they are literals known to have the same value. The latter is tested with an XQuery function appropriate to the arguments. 15:47:33 [ericP] ]] 15:47:39 [JanneS] I need to leave for tonight.. I will be scribing next week. ciao 15:47:50 [DanC] hasta, Janne 15:47:53 [Zakim] -JanneS 15:48:21 [DanC] Zakim, next agendum 15:48:21 [Zakim] agendum 7. "issue badIRIRef" taken up [from DanC] 15:49:01 [DanC] SELECT ?x WHERE { <foo###bar> dc:title ?x }. 15:49:46 [SteveH] wouldn't foo###bar be relative to the base URI? 15:49:55 [DanC] but only one # is allowed 15:50:00 [SteveH] ah! right 15:52:04 [DanC] ACTION AndyS: draft language re <foo###bar> errors 15:52:30 [DanC] other case: <foo bar> 15:55:07 [AndyS] Need an error test case form 15:55:14 [SteveH] yes 15:55:29 [AndyS] ACTION AndyS: Use IRI ref not RDF URI ref as worls has moved on 15:56:01 [DanC] Zakim, next agendum 15:56:01 [Zakim] agendum 8. "comment "Query forms should be resources, not operations"" taken up [from DanC] 15:56:09 [kendall] zakim, unmute me 15:56:09 [Zakim] Kendall_Clark was not muted, kendall 15:59:05 [DanC] ACTION KC: ask Baker for clarification 15:59:16 [DanC] Zakim, next agendum 15:59:16 [Zakim] agendum 9. "Test cases publication" taken up [from DanC] 16:00:12 [DanC] ADJOURN. 16:00:16 [Zakim] -[IBMCambridge] 16:00:18 [Zakim] -DaveB 16:00:20 [SteveH] bye 16:00:22 [Zakim] -Jeen_Broekstra 16:00:23 [Zakim] -SteveH 16:00:25 [Zakim] -Jos_De_Roo 16:00:26 [Zakim] -Kendall_Clark 16:00:29 [Zakim] -HowardK 16:00:46 [ericP] XML Results Format published 16:01:02 [DanC] woohoo! 16:01:48 [Zakim] -DanC 16:01:50 [Zakim] -EricP 16:02:38 [Zakim] -AndyS 16:02:39 [Zakim] -EliasT 16:02:39 [Zakim] SW_DAWG()10:30AM has ended 16:02:41 [Zakim] Attendees were Kendall_Clark, EliasT, DaveB, Jeen_Broekstra, DanC, HowardK, EricP, AndyS, JanneS, Jos_De_Roo, SteveH, [IBMCambridge] 16:02:47 [AndyS] AndyS has left #dawg 17:06:24 [DaveB] DaveB has joined #dawg 17:06:32 [DaveB] er, re 17:06:35 [DaveB] @@FIXME@@ At publication time replace and references to result2.xsd in the linked files to the final publication location under /TR/. 17:06:58 [DaveB] just before the example where the change shouldve' been done :) 17:14:15 [ericP] a/ 17:14:26 [ericP] will do, in about an hour 17:14:31 [DanC] cool 17:17:13 [DanC] ericp, your response to timbl seems to be confused. you gave an example with SELECT but the results look like CONSTRUCT: 17:17:14 [DanC] [] fire:eek [] 17:17:33 [ericP] oops, pasto 17:19:25 [ericP] resent 17:20:39 [DanC] I was a little surprised you flagged your message as [OK?] but i guess it makes sense 17:21:51 [DanC] The example you gave wasn't from the spec nor from the test cases. I like the discussion to focus on the WG's materials. I'd be more comfortable if you put that test in the WG test suite at some point. 17:23:05 [DanC] shouldn't the temp result be "314"^^:degreesK ? 17:27:57 [LeeF] Also, note that TBL's scenario features smokeDetected being false rather than true 17:28:05 [LeeF] Which changes the tradeoff 18:12:23 [danbri] danbri has joined #dawg 18:12:55 [danbri] bjoern_"@@FIXME@@ At publication time replace and references to result2.xsd in the linked files to the final publication location under /TR/." -- 18:13:00 [danbri] from #swig just now 18:13:40 [LeeF] Thanks, danbri. I believe ericP already agreed to fix that soon. 18:14:04 [Zakim] Zakim has left #dawg 18:14:25 [danbri] ok cool 18:29:01 [DanC] ericp, [OK?] goes at the end. [closed] goes at the beginning. 18:29:19 [DanC] my tools are doing something wierd with your [OK?] message. I wonder if I can get them to stop it 20:17:58 [DanC] ericp? WD-rdf-sparql-XMLres doesn't seem to be fixed yet. 20:18:58 [DanC] DaveB, I'm not confident I can fix it. can you watch while I try, and confirm/review? 20:22:05 [ericP] DanC, yeah, mit meeting and a couple side missions kept me busy 20:22:55 [DanC] ah. hi. 20:23:38 [DanC] WWW/TR/2005/WD-rdf-sparql-XMLres-20050801/Overview.html 20:25:09 [DanC] ah... good... dave... 20:25:13 [DanC] I have ericp on the phone... 20:25:24 [DanC] neither of us is really confident we can get this right without you checking 20:25:27 [DaveB] delete: xsi:schemaLocation=" "> 20:25:40 [DaveB] replace: xsi:schemaLocation=" " 20:25:50 [DaveB] delete: paragraph @@FIXME@@ 20:26:36 [DaveB] and for extra credit, replace ">References" with "References" a little later ;) 20:26:41 [DanC] eric asks: dated or undated? ah. dated. 20:26:47 [DaveB] yes, dated 20:27:32 [DaveB] I guess in CR/PR/... it can go near or under the 2005/sparql-results area. A discussion for later. 20:28:18 [ericP] DaveB, could you inspect? 20:28:44 [DanC] WD-rdf-sparql-XMLres-20050801/Overview.html 1.1 Tue Aug 2 20:28:03 2005 UTC 20:29:05 [DaveB] looks good 20:29:06 [ericP] i hit result2.xsd and Overview.html 20:30:14 [DaveB] what did you change in result2.xsd ? 20:31:06 [DaveB] ahh nothing according to the cvs ID. I mis-understood. 20:31:43 [ericP] sorry, output2.srx 20:32:03 [ericP] hmm, shouldn't be there anyways 20:32:32 [DaveB] it's needed for my validation with wxs 20:32:43 [DaveB] I check xml, relaxng, wxs valid every make 20:33:04 [ericP] ok. i leave it alone 20:33:20 [DaveB] that's all, thanks.
http://www.w3.org/2005/08/02-dawg-irc
CC-MAIN-2014-42
refinedweb
3,673
65.46
Opened 13 months ago Last modified 13 months ago #3620 new Bug _ArraySort on 2D is not stable but the documentation says it is Description _ArraySort claims to be stable in its in-script documentation if you look at the "modified" header. LazyCoder - added $iSubItem option; Tylo - implemented stable QuickSort algo; Jos - changed logic to correctly Sort arrays with mixed Values and Strings; Melba23 - implemented stable pivot algo If you want to sort 2D arrays then the help file tells you that only quick- or insert-sort is used on these (a closer look on the Array.au3 shows you that only quicksort is used). But if you sort with this code for example you will see that the array tells you that [5, Cherry] comes before [3, Cherry] where it should be the other way round since this algorithm is supposed to be stable. Stable algorithms do not change the order of items if they equal the compared value. If you look at the __ArrayQuickSort2D inside of the Array.au3 you will see that elements are swapped if $L is less or equal than $R. (l. 1813 commented with ; Swap on 3.3.14.2) First the array is sorted on its first column showing that [3, Cherry] comes before [5, Cherry] and then you can see that after sorting the second column they are switched. Code to reproduce: #include <Array.au3> Local $aTestArray[5][2] = [[5, "Cherry"], [4, "Banana"], [3, "Cherry"], [2, "Orange"], [1, "Apple"]] ;Sort the whole array ascending on the first column _ArraySort($aTestArray, 0, 0, 0, 0) _ArrayDisplay($aTestArray) ;Sort the whole array descending on the second column _ArraySort($aTestArray, 0, 0, 0, 1) _ArrayDisplay($aTestArray) This also happens on the 3.3.14.5. Attachments (0) Change History (8) comment:1 Changed 13 months ago by Jos comment:2 Changed 13 months ago by anonymous The bug is that the algorithm claims to be stable which it is not. If you sort the second column than you compare these elements with each other. If you find two identical keys then you are not allowed to switch them in a stable algorithm. You don't need to give an extra parameter to decide which keys to compare explicitly for stability, you could but you don't have to. And if you don't the usual way would be to take the items in the column you're sorting in. These items are switched if the an item is lower or equal / greater or equal but changing that to lower / greater would cause the algorithm not to switch items (on equality) therefore resulting in a stable algorithm. Either the implementation of the 2D sort has to change (1D sort with InsertionSort should be stable) or the documentation simply has to be altered because one of them is not correct. comment:3 Changed 13 months ago by Jos In case you feel something is really wrong then simply submit a proposal to "fix" this UDF, but I am not convinced that you correct about not being allowed to change the order of the records in case the primary key is equal. Jos comment:4 Changed 13 months ago by Melba23 If you want to sort items where you want the columns to act as groupings then use my ArrayMultiColSort UDF - the link is in my sig. M23 comment:5 Changed 13 months ago by anonymous Well I implemented my own stable algorithm by using MergeSort and expanding it into two dimensions (using the sorting column as the equality check) and it seems to work quite good. The decision is up to the devs what now the solution of this ticket be. Change the documentation to call the quicksort algorithm (1D and 2D) unstable or implement an existing stable solution (for example Melba's) or just leave everything how it is. I would be very happy to see a _ArraySortStable or _ArraySort with a parameter flag to use a stable algorithm while sorting since this would be very useful in the standard UDF library. Thanks for your efforts, ticket can be closed after a final answer please. comment:6 Changed 13 months ago by Melba23 Please post your code so that we can see what you did. M23 comment:7 Changed 13 months ago by anonymous Sure thing! I don't know how to attach files here so I have to link this from the german forum. The example code remains almost the same #include <Array.au3> #include "_ArraySortStable2D.au3" Local $aTestArray[5][2] = [[5, "Cherry"], [4, "Banana"], [3, "Cherry"], [2, "Orange"], [1, "Apple"]] ;Sort the whole array ascending on the first column _ArraySortStable2D($aTestArray, 0, Default, Default, False) ; Col = 0, Start = Begin, End = End, Ascending _ArrayDisplay($aTestArray) ;Sort the whole array descending on the second column _ArraySortStable2D($aTestArray, 1, Default, Default, False) ; Col = 1, Start = Begin, End = End, Ascending _ArrayDisplay($aTestArray) The result is correct and I tried this in my current project where I sort items with the ListView (which sorts them using a stable algorithm). I tested around 500 items on various colums and always got the same result. If there is one thing that might not match, it will be the compare function. comment:8 Changed 13 months ago by anonymous This was fairly a quick UDF so I might be changing a few things in the next few days. The compare function uses StringUpper and does not recognize Uppercase and Lowercase differences. But that is just a rather small issue and nothing serious. Guidelines for posting comments: - You cannot re-open a ticket but you may still leave a comment if you have additional information to add. - In-depth discussions should take place on the forum. For more information see the full version of the ticket guidelines here. .. and what is the bug in your mind? They are properly sorted on the indicated column and there is no given which sequence "records" with a similar key will be. Jos
https://www.autoitscript.com/trac/autoit/ticket/3620
CC-MAIN-2019-18
refinedweb
996
56.29
Hello, Again When you write larger programs, it is usually a good idea to wrap your code up in one or more classes. The following example is adapted from the “hello world” program in Matt Conway’s A Tkinter Life Preserver.() root.destroy() # optional; see description below Running the Example When you run this example, the following window appears. If you click the right button, the text “hi there, everyone!” is printed to the console. If you click the left button, the program stops. Note: Some Python development environments have problems running Tkinter examples like this one. The problem is usually that the enviroment uses Tkinter itself, and the mainloop call and the quit calls interact with the environment’s expectations. Other environments may misbehave if you leave out the explicit destroy call. If the example doesn’t behave as expected, check for Tkinter-specific documentation for your development environment. Details This sample application is written as a class. The constructor (the __init__ method) is called with a parent widget (the master), to which it adds a number of child widgets. The constructor starts by creating a Frame widget. A frame is a simple container, and is in this case only used to hold the other two widgets. class App: def __init__(self, master): frame = Frame(master) frame.pack() The frame instance is stored in a local variable called frame. After creating the widget, we immediately call the pack method to make the frame visible. We then create two Button widgets, as children to the frame. self.button = Button(frame, text="QUIT", fg="red", command=frame.quit) self.button.pack(side=LEFT) self.hi_there = Button(frame, text="Hello", command=self.say_hi) self.hi_there.pack(side=LEFT) This time, we pass a number of options to the constructor, as keyword arguments. The first button is labelled “QUIT”, and is made red (fg is short for foreground). The second is labelled “Hello”. Both buttons also take a command option. This option specifies a function, or (as in this case) a bound method, which will be called when the button is clicked. The button instances are stored in instance attributes. They are both packed, but this time with the side=LEFT argument. This means that they will be placed as far left as possible in the frame; the first button is placed at the frame’s left edge, and the second is placed just to the right of the first one (at the left edge of the remaining space in the frame, that is). By default, widgets are packed relative to their parent (which is master for the frame widget, and the frame itself for the buttons). If the side is not given, it defaults to TOP. The “hello” button callback is given next. It simply prints a message to the console everytime the button is pressed: def say_hi(self): print "hi there, everyone!" Finally, we provide some script level code that creates a Tk root widget, and one instance of the App class using the root widget as its parent: root = Tk() app = App(root) root.mainloop() root.destroy() The mainloop call enters the Tk event loop, in which the application will stay until the quit method is called (just click the QUIT button), or the window is closed. The destroy call is only required if you run this example under certain development environments; it explicitly destroys the main window when the event loop is terminated. Some development environments won’t terminate the Python process unless this is done. More on widget references) More on widget names.
http://www.effbot.org/tkinterbook/tkinter-hello-again.htm
CC-MAIN-2014-15
refinedweb
595
64.51
#include <slang/util/SafeIndexedVector.h> template<typename T, typename Index> SafeIndexedVector class Contents - Reference SafeIndexedVector - a flat random-access container that uses a strongly typed integer type for indexing, so that clients can store indices without chance of mistaking them for some other value. Indices are never invalidated until they are removed from the index, at which point they are placed on a freelist and potentially reused. The index uses a vector internally for managing storage and therefore has the same performance characteristics when adding new elements and there are no open slots in the freelist. Note that index zero is always reserved as an invalid sentinel value. The Index type must be explicitly convertible to and from size_t. T should be default-constructible, and its default constructed state should represent an invalid / empty value. Public functions - auto add(const T& item) -> Index - Add a new item to the vector by copying and return an Index to its location. - auto add(T&& item) -> Index - Add a new item to the vector by moving and return an Index to its location. - template<typename... Args>auto emplace(Args && ... args) -> Index - Construct a new item in the vector and return an Index to its location. - void remove(Index index) - void clear() - Removes all items from the vector. - auto size() const -> size_t - auto empty() const -> bool
https://sv-lang.com/classslang_1_1_safe_indexed_vector.html
CC-MAIN-2021-04
refinedweb
222
52.09
Problem with QtSerialPort on Windows Hello all! I'm trying to develop a simple command line program to write to a device connected through USB-Serial adapter. Using the example application from "here": (scroll down to "Simple Example" that lists connected serial devices), it shows the device. But whenever I normally open a connection to the device, it gives error "void __thiscall QSerialPortPrivate::detectDefaultSettings(void): Unexpected flow control settings" on open, and nothing goes through to the device even if I alter the flow control, parity and other settings afterwards. It doesn't give any other error or warning while the port is open, nor is there any errors set for the port. The code I'm trying to execute is as follows: @ // QByteArr bytearr is introduced and filled with data before this snippet in the program code " << serial.portName() << " " << (success ? "OK" : "FAIL"); serial.write(bytearr); serial.waitForBytesWritten(-1); serial.close(); } @ The serial port name in use has been confirmed with the example application. The device I'm trying to write to accepts commands as bytes, and it does not send any response back at all. The weird thing is, I can get the program shown above to work by running another application, that opens the same port, and continuously (in while(true) loop) reads all data from the port and writes it to command line, and then manually closing the program. Although this test program for reading gives the same error about flow control settings, after running the test program the other program for writing works for as long as the usb cable is connected. Running the test program also removes the flow control setting error message from appearing in either of the programs. This is the test program, that should read everything from the port (it doesn't work like it should, but it resolves the issue): @#include <QtCore/QCoreApplication> #include <QtCore/QDebug> #include <QtSerialPort/QSerialPort> #include <QtSerialPort/QSerialPortInfo> QT_USE_NAMESPACE int main(int argc, char *argv[]) {: " << (success ? "OK" : "FAIL"); while(true) { if(serial.waitForReadyRead(-1)) { QByteArray out = serial.readAll(); for(int i=0; i< out.length(); i++) { qDebug() << (int) out[i]; } } } serial.close(); qDebug() << "Connection closed."; } qDebug() << "Program exiting."; return 0; }@ Although my question/problem should be obvious by now, this way for this to work isn't very optimal. Is there something I'm doing wrong in the writing application / something that I'm not taking into account, or is this a bug in Windows 8 environment? The program code has been proven to work as it is in Ubuntu Linux environment, and it kinda works in windows environment as well, but only after running the reader program. Calling the readAll function for 1M times in a for-loop doesn't work to simulate the effect of running the reader program, nor does opening the serial port without closing it first before trying to write. I am running Qt 5.2 32-bit, my compiler is MSVC2012 and my os is Windows 8 64-bit, although this has been confirmed to happen on 64-bit Windows 7. 32-bit Windows has not been tested. Any help is greatly appreciated! [quote] “void __thiscall QSerialPortPrivate::detectDefaultSettings(void): Unexpected flow control settings” [/quote] It is normal, don't pay attention. It is wrong in your case (though it will work): [code] ... serial.write(bytearr); serial.flush(); serial.close(); ... [/code] should be: [code] ... serial.write(bytearr); serial.waitForBytesWritten(); serial.close(); ... [/code] Next, it is wrong in your case: [code] ... while(true) { QByteArray out = serial.readAll(); for(int i=0; i< out.length(); i++) { qDebug() << (int) out[i]; } } serial.close(); ... [/code] Should be: [code] ... while(true) { if (serial.waitForReadyRead()) { QByteArray out = serial.readAll(); for(int i=0; i< out.length(); i++) { qDebug() << (int) out[i]; } } } serial.close(); ... [/code] So, please look an examples of QtSerialPort and read Qt documentation. kuzulis! did not you mention that async approach is the best for serial communication in qt already? yes. it is. but in your case it is synch approach. kuzulis, I'm not questioner :). I saw your helpful comments in forum. Ahh.. Sorry.. :) Never mind, your comments always are helpful. Thank you for your replies. Although I'm sure these are things to be taken into account, they are not the thing in this case. The latter program is provided only because running it makes the first program for writing work. Whether it actually reads anything or not is not the issue, and making the change you suggested would probably break the reader program even further, as the device will not send anything to the program at all, thus it will never actually reach the readAll part. As for the change in writer program, waitForBytesWritten has already been tried and it did not change anything in how the program worked (or didn't). Your suggested fix also contains a problem, as waitForBytesWritten expects integer parameter for wait timeout in msecs. Although you mention that the error/warning message can be ignored, results show otherwise as it will not appear when the program works as it should (after running reader program once). But I also understand it is not the cause of my problems, only an effect of something else not working. And that "something else" is what I need help with most likely. I already say to you that it is necessary to change in your code when using the synchronous approach (because your code demonstrated sync approach). But I recommend to use asynchronous approach with use of signals/slots. @kuzulls And I already said, what you suggested didn't fix the problem. :) I also tried the fix you gave for the reader program, and it actually did start working, and it also still works so that it makes the writer program work afterwards until usb cable is removed. But it doesn't solve the problem I need help with. The writer program (first piece of code) has to work without the reader program (second piece of code). Inspired from the fixes, I also attempted having one waitForReadRead call with long timeout before trying to write there, but this didn't solve the problem which is, the device does not receive (or does not receive correctly) what I'm trying to send there unless I run the second piece of code first. As for it being sync'd or async'd, it doesn't matter at this point. Purpose of this program / project at this stage is to get the connection to device to work for writing, it's okay if the program has to wait for the connection. Also it's worthwhile to mention (like I mentioned in OP) that this works in other environments than windows flawlessly, even with the "flawed" code I have now fixed (without seeing any changes in results really).
https://forum.qt.io/topic/36638/problem-with-qtserialport-on-windows
CC-MAIN-2018-26
refinedweb
1,131
63.29
Created on 2015-05-28 16:31 by yselivanov, last changed 2015-05-29 21:11 by yselivanov. This issue is now closed. Stefan, This patch should solve the problem with types.coroutine accepting only pure python generator functions. The approach is, however, slightly different from what you've proposed. Instead of having a wrapper class (delegating .throw, .send etc to a wrapped object), we now simply check if the returned value of the wrapped function is an instance of collections.abc.Coroutine. Issue 24315 enables duck typing for coroutines, so if a cython-based coroutine implements all coroutine abstract methods, it will automatically pass types.coroutine. New changeset 7356f71fb0a4 by Yury Selivanov in branch '3.5': Issue 24316: Fix types.coroutine() to accept objects from Cython New changeset 748c55375225 by Yury Selivanov in branch 'default': Issue 24316: Fix types.coroutine() to accept objects from Cython I just noticed that I hadn't used the real "types.coroutine" in my Py3.5 tests when reporting back in issue 24017. When I pass a Cython generator through it, I get """ Traceback (most recent call last): File "tests/run/test_coroutines_pep492.pyx", line 245, in test_coroutines_pep492.CoroutineTest.test_func_5 (test_coroutines_pep492.c:13445) for el in bar(): File "/opt/python3.5/lib/python3.5/types.py", line 197, in wrapped 'non-coroutine: {!r}'.format(coro)) TypeError: callable wrapped with types.coroutine() returned non-coroutine: <generator object at 0x7f178c458898> """ This is actually obvious, given that the sole purpose of the decorator is to turn something that is a Generator and *not* a Coroutine into something that is a Coroutine, as a means for the user to say "but I know better". So checking for the return value being a Coroutine is wrong. Instead, it should check that it's a Generator and if it's not an Awaitable, wrap it as a self-returning Awaitable. That's more or less what my proposed implementation in issue 24017 did: class types_coroutine(object): def __init__(self, gen): self._gen = gen class as_coroutine(object): def __init__(self, gen): self._gen = gen self.send = gen.send self.throw = gen.throw self.close = gen.close def __await__(self): return self._gen def __call__(self, *args, **kwargs): return self.as_coroutine(self._gen(*args, **kwargs)) > I just noticed that I hadn't used the real "types.coroutine" in my Py3.5 tests when reporting back in issue 24017. Please test thoroughly the attached patch. One failing test in "test_coroutines": test_func_5. The reason is that the GeneratorWrapper is not iterable (and there is no reason it shouldn't be, given that it wraps a Generator). That was my fault, I had already added an __iter__ method but didn't copy it in my previous message. Adding it as follows fixes the test for me: def __iter__(self): return self.__wrapped__ Alternatively, "__iter__ = __await__" would do the same. BTW, it's not only for compiled generators but also for normal Python functions that construct Python generators internally and return them, or that delegate the generator creation in some way. With this change, it's enough to decorate the constructor function and not each of the potential generators that it returns. Please test the attached patch. > BTW, it's not only for compiled generators but also for normal Python functions that construct Python generators internally and return them You're right, that's why I used "primarily" word in that comment ;) types.coroutine() is only used by asyncio.coroutine() so far, and returning generator objects from "factory" functions isn't a very common pattern in asyncio. Updated patch. Wrapper now proxies gi_code, gi_running and gi_frame Ok, now the problem with *this* patch is that __iter__ and __await__ are special methods that are being looked up on the type, not the instance. Similarly __next__, I think, as it also has its own (type) slot. But I think you're right that __next__ is also needed here. I'm attaching a patch that works for me. > I'm attaching a patch that works for me. Looks like we were working in parallel ;) I've incorporated your changes. Please look at the new patch (hopefully this one is final) Your latest patch works for me. New changeset 8080b53342e8 by Yury Selivanov in branch '3.5': Issue 24316: Wrap gen objects returned from callables in types.coroutine New changeset c0434ef75177 by Yury Selivanov in branch 'default': Issue 24316: Wrap gen objects returned from callables in types.coroutine Committed. Thanks, Stefan! Stefan, please take a look at this issue #24325 too.
https://bugs.python.org/issue24316
CC-MAIN-2018-17
refinedweb
750
58.69
This article is a continuation of the previous four, for creating good software. We previously touched on unit tests and functional tests and test categorizing all possible applications. Unfortunately, this article would not arise if the others would not occur. Without any knowledge about testing the application, you would not be able to comprehend this topic. And now let’s see how they relate to web applications. We all have to deal with customers. I personally believe that the customer is not a person, the client is a state of mind. This is a man who came to graduation nude. Whether you’re designing web sites, or simply carrying out penetration tests, very often the client wants tangible evidence that we were able to correctly perform a process. Often, this is after our work is found in disbelief by a customer or our opinion is undermined by someone from outside. How can I prevent this? Well, this article is an attempt to answer these questions. And again, we will learn a few new tools that make our lives easier. Automated web application testing primarily facilitates the creation of documentation, both in penetration testing and web application writing. They allow you to run any browser and allow it to automatically come quietly on the page, add a comment, just log in, or anything else. There are much better ways of penetration test documentation than the standard methods, such as writing a report on the tests. They simply demonstrate the tangible result of our actions. The customer can see that the page contains errors and that it can be actually very easy to manipulate because the scripts are full of holes. XPath, the XML manipulation using a web browser When using the syntax of web pages we have two choices. We can refer directly to HTML or XHTML, and we can also use XPath. It’s hard to determine which solution is better, all in all I think both are just as good, but if there is a tree of DOM elements, it is much better to use XPath. A full description of the XPath can be found at, but we are at the beginning with only the basics. Indicate the XPath node or set of nodes using the path location. This path is in turn made up of one or more location steps separated from each other with / or / /. If the path begins with a /, we call it an absolute path as it gives the full path to the root node. Otherwise, call it a relative path, it starts from the current context node named node. For Firefox, I found a few extras perfectly suited to the demonstration of XPath. I liked most FirePath features. You can download it from the. In addition, the FireBug installed. It is a tool for analyzing and editing the application source code. FireBug is located at. After the installation of these two supplements, an additional button appears: Running FireBug, the window looks like this: If we take the tree DOM elements for a site we know, the situation looks like this: FireBug can also show you what was taken and where. Here is a screen demonstrating our favorite site: FireBug is a great tool for manual analysis of web pages. However, it does not allow automatic analysis. But thanks to FireBug when developing applications, we know what we have to look for and where. FireBug also is a very intuitive and very transparent tool. Introduction to Automation – Selenium IDE Selenium is a tool for automated testing. It lets you record and read tests, and import them into a variety of formats, such as C # or Java, so that later they could fire on a single mouse click. Selenium for Firefox can be downloaded here:. It is available as a separate server or library in addition to Firefox. After running Selenium IDE, it looks as follows: Why are we so caught up in Selenium IDE? Because in reality it is really a very simple tool to take full control of the web pages. it allows you to automate many operations and test the correctness of their use. If we have to deal with some event whose effects are predictable, and it is fully reproducible, and we do it every day, why not allow Selenium to do it for us? Selenium also allows easy recording and playback functions, so you automate almost everything. Thanks to automated steps in the code, this includes even Ruby on Rails, without knowing the language. All you have to write is the appropriate test and Selenium will do it all for us, in such a way that it will look like we are doing it with the mouse. Best of all, Selenium allows you to record your activities on the Web page, and then recreate it by clicking only one button. With Selenium you can even test Web applications, while having no idea about programming. I let each one test record. Here are the steps that I made: First I typed the page address. Second I walked by clicking on the link to the Third, I chose your article on cryptographic libraries. The Selenium looks like this: When you click on the button “Play entire test suite” Selenium can do it automatically for you. Here’s what happened: And it looks like our test source code, written in HTML: <?> New Test </ title> </ Head> <body> <table border="1"> New Test </ td> </ tr> </ Thead> <tbody> <tr> <td> open </ td> <td> / </ td> <td> </ td> </ Tr> <tr> <td> clickAndWait </ td> <td> link = InfoSec Resources </ td> <td> </ td> </ Tr> <tr> <td> click </ td> <td> css = a [title = "A Review of Selected Cryptographic Libraries"]> img.attachment-archive-image.wp-post-image </ td> <td> </ td> </ Tr> </ Tbody> </ table> </ Body> </ Html> Selenium is a really cool toy in experienced hands. The set of tests which it offers is huge. I do not know how it happened that the tool itself does not have any gigantic proportions, and so can perform a lot of action at the beginning. I did not think it can even be said that testing of applications using Selenium proves that simple. Selenium stores all the information as a Web Object Model. Basic tests using Selenium in Java I am not an advocate of Java or C # only. As in the previous article, we focused on testing using C #, now let’s go for a change in Java. Only first I will introduce the new concept, which is WebDriver. I do not mind using WebDriver with NUnit with Visual Studio or Eclipse with JUnit. WebDriver is really another tool for automated testing of web applications and to check whether they really act in accordance with our expectations. The main objective of WebDriver is really to provide users of this API, which would have been easy for them to understand. In this way, we can make our tests very clear and very simple to maintain. First, get our working environment to work with WebDriver. For this purpose, we need to download both the selenium-server and selenium-client. We can do this by. Then create a new project in Eclipse with any name. Now you need to join the library. To do this, right click on the project, then select Properties, Java Build Path, Libraries and Add external JARs and add the selenium-java and selenium-server-standalone. The window of Libraries should look then like this: Now we can easily begin to create the first WebDriver. On the “File” menu, select “New class” and fill it with the following wording. This class shows how to add a new comment to the selected article on InfoSec: import org.openqa.selenium.By; import org.openqa.selenium.WebDriver; import org.openqa.selenium.WebElement; import org.openqa.selenium.htmlunit.HtmlUnitDriver; public class MyExample { public static void main(String[] args) { // Create a new instance of the html unit driver WebDriver driver = new HtmlUnitDriver(); // And now use this to visit example article // in InfoSec Institute driver.get(“”); WebElement element = driver.findElement(By.name(“author”)); element.sendKeys(“Adrian”); WebElement element2 = driver.findElement(By.name(“author”)); element2.sendKeys(“adrian.stolarski@gmail.com”); WebElement element3 = driver.findElement(By.name(“comment”)); element2.sendKeys(“This is a webdriver tests in my article for InfoSec!”); // Now submit the form. WebDriver will find the form for us from the element element.submit(); // Check the title of the page System.out.println(“Page title is: “ + driver.getTitle()); } } See for yourself what it is simple and intuitive. Now think, why, for instance, if we find a MySQL error injection, do not test for it in the form of WebDriver? Or find some XSS and again we can do code documentation using WebDriver. WebDriver also allows you to download the title, keywords, description page, as well as thousands of different things. Now you can go to the Web Object Model. Web Object Model Selenium tests not only allow a single Web application functionality. It also allows you to perform tests using the Web Object Model, which allows testing the use case. In the case of Eclipse, you need to do this is to install TestNG, which is a framework for developing this type of testing. In the case of Eclipse TestNG, installation looks like this: First, start Eclipse by clicking the Eclipse icon in the Eclipse folder that’s mentioned earlier. Second, click on Help-> Install New Softwares. Enter “” in the “Work With” and press Enter. Third, you should see TestNG. Fourth, select it and then press Next till you reach Finish. Fifth, restart Eclipse. Then proceed as in the previous paragraph for Selenium. Add Selenium and we start writing our use case. Here is mine: - I want to go to - I want to go to the page - Then I want to go to - I want to add a sample comment This time we create a new JUnit Test Case and fill it with the following content: import static org.junit.Assert.*; import org.junit.Test; import com.thoughtworks.selenium.*; import org.testng.annotations.*; import static org.testng.Assert.*; import java.util.regex.Pattern; public class SeleniumTest extends SeleneseTestNgHelper { @Test public void testMyTest() throws Exception {”); } } And so just enjoy all the benefits of a Web Object Model. This is of course not everything, and just a quick introduction. The whole Web Object Model, you can write a whole book. This is really very addictive and offers many possibilities. Look for more on the network on testing web applications using the Web Object Model. And finally, I will show how the same test looks in C #: using System; using System.Text; using System.Text.RegularExpressions; using System.Threading; using NUnit.Framework; using Selenium; namespace SeleniumTests { [TestFixture] public class my { TheMyTest() {”); } } } Note that both tests are very similar, almost look the same. This is just one of the advantages of using Selenium. It simply does not matter whether we write a test for Java and C #, both tests will look the same. I hope you have fun with writing the test. Summary The main purpose of this article was to show everyone how to use some tools to help you create Web application tests. Thanks to them, we have learned to do a lot better documentation to our own tests, than by the written word. Also I showed how to run selected tests and show them to the client. I bet you that after reading the documentation, there is reliable evidence that the tests were really performed, and the results are not fabricated. Thanks to being written by Selenium and remote control tools such documentation, it will be 100% reproducible for the customer. The test will be able to be fired by any employee thereof, using both Selenium, as well as any Web browser. In addition, I want to note one more thing. A lot of companies recognize that every programmer should be able to design tests using Selenium, and knowledge of the environment is really a huge advantage. In this article, you have an excellent introduction to Selenium and I am sure it will always be useful. This gave me the idea for the next two articles in this series. The first article will be back to the unit tests. I’ll show you a few techniques to create good unit tests that actually assist the software development process. In the second I will show, if I can, how to perform automated testing desktop applications. In the meantime, thank you for your attention and I wish you a nice day!
http://resources.infosecinstitute.com/creating-a-professional-application-how-to-create-tests-part-5/
CC-MAIN-2017-51
refinedweb
2,086
64.81
A few weeks ago, O'Reilly Network ran an article on PMD, an open source, Java static-analysis tool sponsored under the umbrella of the Defense Advanced Research Projects Agency (DARPA) project "Cougaar." That article covered some of the basics of PMD--it's built on an Extended Backus Naur Format (EBNF) grammar, from which JavaCC generates a parser and JJTree generates an Java Abstract Syntax Tree (AST), and comes with a number of ready-to-run rules that you can run on your own source code. You can also write your own rules to enforce coding practices specific to your organization. In this article, we'll take a closer look at the AST, how it is generated, and some of its complexities. Then we'll write a custom PMD rule to find the creation of Thread objects. We'll write this custom rule two ways, first in the form of a Java class, and then in the form of an XPath expression. Thread Recall from the first article that the Java AST is a tree structure that represents a chunk of Java source code. For example, here's a simple code snippet and the corresponding AST: Thread t = new Thread(); FieldDeclaration Type Name VariableDeclarator VariableDeclaratorId VariableInitializer Expression PrimaryExpression PrimaryPrefix AllocationExpression Name Arguments Here we can see that the AST is a standard tree structure: a hierarchy of nodes of various types. All of the node types and their valid children are defined in the EBNF grammar file. For example, here's the definition of a FieldDeclaration: FieldDeclaration void FieldDeclaration() : { } { ( "public" { ((AccessNode) jjtThis).setPublic( true ); } | "protected" { ((AccessNode) jjtThis).setProtected( true ); } | "private" { ((AccessNode) jjtThis).setPrivate( true ); } | "static" { ((AccessNode) jjtThis).setStatic( true ); } | "final" { ((AccessNode) jjtThis).setFinal( true ); } | "transient" { ((AccessNode) jjtThis).setTransient( true ); } | "volatile" { ((AccessNode) jjtThis).setVolatile( true ); } )* Type() VariableDeclarator() ( "," VariableDeclarator() )* ";" } A FieldDeclaration is composed of a Type followed by at least one VariableDeclarator; for example, int x,y,z = 0;. A FieldDeclaration may also be preceeded by a couple of different modifiers, that is, Java keywords like transient or private. Since these modifiers are separated by a pipe symbol and followed by an asterisk, any number can appear in any order. All of these grammar rules eventually can be traced back to the Java Language Specification (JLS) (see the References section below). Type VariableDeclarator int x,y,z = 0; transient private Related Reading Java Enterprise Best Practices By The O'Reilly Java Authors The grammar doesn't enforce nuances like "a field can't be both public and private". That's the job of a semantic layer that would be built into a full compiler such as javac or Jikes. PMD avoids the job of validating modifiers--and the myriad other tasks a compiler must perform--by assuming the code is compilable. If it's not, PMD will report an error, skip that source file, and move on. After all, if a source file can't even be compiled, there's not much use in trying to check it for unused code. javac Jikes Looking closer at the grammar snippet above, we can also see some custom actions that occur when a particular token is found. For example, when the keyword public is found at the start of a FieldDeclaration, the parser that JavaCC generates will call the method setPublic(true) on the current node. The PMD grammar is full of this sort of thing, and new actions are continually being added. By the time a source code file makes it through the parser, a lot of work has been done that makes rule writing much easier. public setPublic(true) Now that we've reviewed the AST a bit more, let's write a custom PMD rule. As mentioned before, we'll assume we're writing Enterprise Java Beans, so we shouldn't be using some of the standard Java library classes. We shouldn't open a FileInputStream, start a ServerSocket, or instantiate a new Thread. To make sure our code is safe for use inside of an EJB container, let's write a rule that checks for Thread creation. FileInputStream ServerSocket Let's start by writing a Java class that traverses the AST. From the first article, recall that JJTree generates AST classes that support the Visitor pattern. Our class will register for callbacks when it hits a certain type of AST node, then poke around the surrounding nodes to see if it's found something interesting. Here's some boilerplace code: Visitor // Extend AbstractRule to enable the Visitor pattern // and get some handy utility methods public class EmptyIfStmtRule extends AbstractRule { } If you look back up at the AST for that initial code snippet--Thread t = new Thread();--you will find an AST type called an AllocationExpression. Yup, that sounds like what we're looking for: allocation of new Thread objects. Let's add in a hook to notify us when it hits a new [something] node: Thread t = new Thread(); AllocationExpression new [something] public class EmptyIfStmtRule extends AbstractRule { // make sure we get a callback for any object creation expressions public Object visit(ASTAllocationExpression node, Object data){ return super.visit(node, data); } } We've put a super.visit(node,data) in there so the Visitor will continue to visit children of this node. This lets us catch allocations within allocations, i.e., new Foo(new Thread()). Let's add in an if statement to exclude array allocations: super.visit(node,data) new Foo(new Thread()) if public class EmptyIfStmtRule extends AbstractRule { public Object visit(ASTAllocationExpression node, Object data){ // skip allocations of arrays and primitive types: // new int[], new byte[], new Object[] if ((node.jjtGetChild(0) instanceof ASTName) { return super.visit(node, data); } } } We're not concerned about array allocations, not even Thread-related allocations like Thread[] threads = new Thread[];. Why not? Because instantiating an array of Thread object references doesn't really create any new Thread objects. It just creates the object references. We'll focus on catching the actual creation of the Thread objects. Finally, let's add in a check for the Thread name: Thread[] threads = new Thread[]; public class EmptyIfStmtRule extends AbstractRule { public Object visit(ASTAllocationExpression node, Object data){ if ((node.jjtGetChild(0) instanceof ASTName && ((ASTName)node.jjtGetChild(0)).getImage().equals("Thread")) { // we've found one! Now we'll record a RuleViolation and move on ctx.getReport().addRuleViolation( createRuleViolation(ctx, node.getBeginLine())); } return super.visit(node, data); } } That about wraps up the Java code. Back in the first article, we described a PMD ruleset and the XML rule definition. Here's a possible ruleset definition containing the rule we just wrote: <> <example> <![CDATA[ Thread t = new Thread(); // don't do this! ]]> </example> </rule> </ruleset> You can put this ruleset on your CLASSPATH or refer to it directly, like this: CLASSPATH java net.sourceforge.pmd.PMD /path/to/src xml /path/to/ejbrules.xml Recently Daniel Sheppard enhanced PMD to allow rules to be written using XPath. We won't explain XPath completely here--it would require a large book--but generally speaking, XPath is a way of querying an XML document. You can write an XPath query to get a list of nodes that fit a certain pattern. For example, if you have an XML document with a list of departments and employees, you could write a simple XPath query that returns all the employees in a given department, and you wouldn't need to write DOM-traversal or SAX-listener code. XPath and XPointer Locating Content in XML Documents By John E. Simpson That's all well and good, but how does querying XML documents relate to PMD? Daniel noticed that an AST is a tree, just like an XML document. He downloaded the Jaxen XPath engine and wrote a class called a DocumentNavigator that allows Jaxen to traverse the AST. Jaxen gets the XPath expression, evaluates it, applies it to the AST, and returns a list of matching nodes to PMD. PMD creates RuleViolation objects from the matching nodes and moves along to the next source file. DocumentNavigator RuleViolation XPath is a new language, though, so why write PMD rules using XPath when you're already a whiz-bang Java programmer? The reason is that it's a whole lot easier to write simple rules using XPath. To illustrate, here's the "DontCreateThreadsRule" written as an XPath expression: //AllocationExpression[Name/@Image='Thread'][not(ArrayDimsAndInits)] Concise, eh? There's no Java class to track--you don't have to compile anything or put anything else on your CLASSPATH. Just add the XPath expression to your rule definition like this: <> <properties> <property name="xpath"> <value> <![CDATA[ //AllocationExpression[Name/@Image='Thread'][not(ArrayDimsAndInits)]> ]]> </value> </property> </properties> <example> <![CDATA[ Thread t = new Thread(); // don't do this! ]]> </example> </rule> </ruleset> Refer to the rule as usual to run it on your source code. You can learn a lot about XPath by looking at how the built-in PMD rules identify nodes, and you can also try out new XPath expressions using a PMD utility called the ASTViewer. Run this utility by executing the astviewer.bat or astviewer.sh scripts in the etc/ directory of the PMD distribution. It will bring up a window that looks like Figure 1. Type some code into the left-hand panel, put an XPath expression in the text field, click the "Go" button at the bottom of the window, and the other panels will be populated with the AST and the results of the XPath query. ASTViewer Figure 1. Screenshot of ASTViewer When should you use XPath to write a PMD rule? My initial thought is, "Anytime you can." I think that you'll find that many simple rules can be written using XPath, especially those that are checking for braces or a particular name. For example, almost all of the rules in the PMD basic ruleset and braces ruleset are now written as very short, concise XPath expressions. The more complicated rules--primarily those dealing with the symbol table--are probably still easiest to write in Java. We'll see, though. At some point we may even wrap the symbol table in a DocumentNavigator. There's still a lot of work to do on PMD. Now that this XPath infrastructure is in place, it might be possible to write an interactive rule editor. Ideally, you could open a GUI, type in a code snippet, select certain AST nodes, and an XPath expression that finds those nodes would be generated for you. PMD can always use more rules, of course. Currently, there are over 40 feature requests on the web site just waiting for someone to implement them. Also, PMD has a pretty weak symbol table, so it occasionally picks up a false positive. There's plenty of room for contributors to jump in and improve the code. This article has presented a more in-depth look at the Java AST and how it's defined. We've written a PMD rule that checks for Thread creation using two techniques--a Java class and an XPath query. Give PMD a try and see what it finds in your code today! Thanks to the Cougaar program and DARPA for supporting PMD. Thanks to Dan Sheppard for writing the XPath integration. Thanks also to the many other.
http://archive.oreilly.com/pub/a/onjava/2003/04/09/pmd_rules.html
CC-MAIN-2015-22
refinedweb
1,870
55.03
Test::Moose::MockObjectCompile - A Module to help when testing compile time Moose use Test::Moose::MockObjectCompile; use Test::More; my $mock = Test::Moose::MockObjectCompile->new(); $mock->roles([qw{Some::Role Some::Other::Role}]); $mock->mock('method1'); lives_ok {$mock->compile} 'Test that roles don't clash and required methods are there'; a list of roles to apply to your package. a list of Moose packages you want your package to extend the constructor for a MockObjectCompile(r) it expects a hashref with the package key passed in to define the package name or it will throw an exception. simulates a compile of the mocked Moose Object with the definition defined in your roles and extend attributes and whatever you told it to mock. mocks a method in your compiled Mock Moose Object. It expects a name for the method and an optional coderef. $mock->mock('method1', '{ push @stuff, $_[1];}'); Some things to keep in mind are: this module actually compiles your package this means that any subsequent compiles only modify the package they don't replace it. If you want to make sure you don't have stuff haning around from previouse compiles change the package or make a new instance with a different package name. This way you can be sure you start out with a fresh module namespace. Jeremy Wall <jeremy@marzhillstudios.com> This program is free software you can redistribute it and/or modify it under the same terms as Perl itself. See
http://search.cpan.org/dist/Test-Moose-MockObjectCompile/lib/Test/Moose/MockObjectCompile.pm
CC-MAIN-2016-07
refinedweb
246
57.71
marble #include <GeoParser.h> Detailed Description Definition at line 40 of file GeoParser.h. Member Typedef Documentation Definition at line 43 of file GeoParser.h. Constructor & Destructor Documentation Definition at line 39 of file GeoParser.cpp. Definition at line 46 of file GeoParser.cpp. Member Function Documentation Definition at line 62 of file GeoParser.h. Definition at line 200 of file GeoParser.cpp. Definition at line 117 of file GeoParser.cpp. This method is intended to check if the current element being served by the GeoParser is a valid Document Root element. This method is to be implemented by GeoDataParser/GeoSceneParser and must check based on the current XML Document type, e.g. KML, GPX etc. - Returns trueif the element is a valid document root. Definition at line 122 of file GeoParser.cpp. Definition at line 193 of file GeoParser.cpp. Main API for reading the XML document. This is the only method that is necessary to call to start the GeoParser. To retrieve the resulting data see - See also - releaseDocument() and - releaseModel() Definition at line 74 of file GeoParser.cpp. retrieve the parsed document and reset the parser If parsing was successful, retrieve the resulting document and set the contained m_document pointer to 0. Definition at line 205 of file GeoParser.cpp. Member Data Documentation Definition at line 89 of file GeoParser.h. Definition at line 90 of file GeoParser.h. The documentation for this class was generated from the following files: Documentation copyright © 1996-2014 The KDE developers. Generated on Sun Oct 19 2014 22:19:35 by doxygen 1.8.7 written by Dimitri van Heesch, © 1997-2006 KDE's Doxygen guidelines are available online.
http://api.kde.org/4.x-api/kdeedu-apidocs/marble/html/classMarble_1_1GeoParser.html
CC-MAIN-2014-42
refinedweb
278
52.26
Hi Janus, Le 27/01/2013 19:49, Janus Weil a écrit : > > subroutine sub (arg) > procedure(sub) :: arg > end subroutine > You forgot to precise that this case (which is basically comment #4 in the PR) is *not* fixed by the patch, as it fails later on at translation stage. I have made up my mind that it's not possible for the middle-end to build such a recursive type. So `arg' will have to have a variadic function type. No patch yet, sorry; I have just figured it out. > Anyway, should we bump the mod version with this patch, or should we > rather avoid it? > I forgot the reason why we are so reluctant to do it. Module versions are not a rare resource. I'm in favor of bumping (and any time we change module format). About the patch, one nit: Index: gcc/fortran/gfortran.h =================================================================== --- gcc/fortran/gfortran.h (revision 195493) +++ gcc/fortran/gfortran.h (working copy) @@ -974,8 +974,6 @@ typedef struct gfc_component struct gfc_component *next; /* Needed for procedure pointer components. */ - struct gfc_formal_arglist *formal; - struct gfc_namespace *formal_ns; struct gfc_typebound_proc *tb; } gfc_component; The comment should probably be removed as well. > The patch was regtested on x86_64-unknown-linux-gnu. Ok for trunk? > OK from my side; you may or may not need someone else's ack as I'm the coauthor. Or maybe wait for the fix for comment #4? Mikael
https://gcc.gnu.org/pipermail/gcc-patches/2013-January/357078.html
CC-MAIN-2021-21
refinedweb
235
75
* class for breaking up an OID into it's component tokens, ala22 * java.util.StringTokenizer. We need this class as some of the23 * lightweight Java environment don't support classes like24 * StringTokenizer.25 */26 public class OIDTokenizer27 {28 private String oid;29 private int index;30 31 public OIDTokenizer(32 String oid)33 {34 this.oid = oid;35 this.index = 0;36 }37 38 public boolean hasMoreTokens()39 {40 return (index != -1);41 }42 43 public String nextToken()44 {45 if (index == -1)46 {47 return null;48 }49 50 String token;51 int end = oid.indexOf('.', index);52 53 if (end == -1)54 {55 token = oid.substring(index);56 index = -1;57 return token;58 }59 60 token = oid.substring(index, end);61 62 index = end + 1;63 return token;64 }65 }66 Java API By Example, From Geeks To Geeks. | Our Blog | Conditions of Use | About Us_ |
http://kickjava.com/src/org/apache/geronimo/util/asn1/OIDTokenizer.java.htm
CC-MAIN-2017-04
refinedweb
149
59.6
API - importing from Scipy¶ In Python the distinction between what is the public API of a library and what are private implementation details is not always clear. Unlike in other languages like Java, it is possible in Python to access “private” function or objects. Occasionally this may be convenient, but be aware that if you do so your code may break without warning in future releases. Some widely understood rules for what is and isn’t public in Python are: - Methods / functions / classes and module attributes whose names begin with a leading underscore are private. - If a class name begins with a leading underscore none of its members are public, whether or not they begin with a leading underscore. - If a module name in a package begins with a leading underscore none of its members are public, whether or not they begin with a leading underscore. - If a module or package defines __all__ that authoritatively defines the public interface. - If a module or package doesn’t define __all__ then all names that don’t start with a leading underscore are public. Note Reading the above guidelines one could draw the conclusion that every private module or object starts with an underscore. This is not the case; the presence of underscores do mark something as private, but the absence of underscores do not mark something as public. In Scipy there are modules whose names don’t start with an underscore, but that should be considered private. To clarify which modules these are we define below what the public API is for Scipy, and give some recommendations for how to import modules/functions/objects from Scipy. Guidelines for importing functions from Scipy¶ The scipy namespace itself only contains functions imported from numpy. These functions still exist for backwards compatibility, but should be imported from numpy directly. Everything in the namespaces of scipy submodules is public. In general, it is recommended to import functions from submodule namespaces. For example, the function curve_fit (defined in scipy/optimize/minpack.py) should be imported like this: from scipy import optimize result = optimize.curve_fit(...) This form of importing submodules is preferred for all submodules except scipy.io (because io is also the name of a module in the Python stdlib): from scipy import interpolate from scipy import integrate import scipy.io as spio In some cases, the public API is one level deeper. For example the scipy.sparse.linalg module is public, and the functions it contains are not available in the scipy.sparse namespace. Sometimes it may result in more easily understandable code if functions are imported from one level deeper. For example, in the following it is immediately clear that lomax is a distribution if the second form is chosen: # first form from scipy import stats stats.lomax(...) # second form from scipy.stats import distributions distributions.lomax(...) In that case the second form can be chosen, if it is documented in the next section that the submodule in question is public. API definition¶ Every submodule listed below is public. That means that these submodules are unlikely to be renamed or changed in an incompatible way, and if that is necessary a deprecation warning will be raised for one Scipy release before the change is made. - scipy.cluster - vq - hierarchy - scipy.constants - scipy.fftpack - scipy.integrate - scipy.interpolate - scipy.io - arff - harwell_boeing - idl - matlab - netcdf - wavfile - scipy.linalg - scipy.linalg.blas - scipy.linalg.lapack - scipy.linalg.interpolative - scipy.misc - scipy.ndimage - scipy.odr - scipy.optimize - scipy.signal - scipy.sparse - linalg - csgraph - scipy.spatial - distance - scipy.special - scipy.stats - distributions - mstats - scipy.weave
https://docs.scipy.org/doc/scipy-0.16.1/reference/api.html
CC-MAIN-2019-18
refinedweb
596
55.34
Here we will see how to create Random Linear Extension of a Directed Acyclic Graph (DAG). The Linear extension is basically the topological sorting of DAG. Let us consider the graph is like below − The topological sorting for a directed acyclic graph is the linear ordering of vertices. For every edge u-v of a directed graph, the vertex u will come before vertex v in the ordering. As we know that the source vertex will come after the destination vertex, so we need to use a stack to store previous elements. After completing all nodes, we can simply display them from stack. Nodes after topological sorted order − 5 4 2 3 1 0 Input − The start vertex u, An array to keep track which node is visited or not. A stack to store nodes. Output − Sorting the vertices in topological sequence in the stack. Begin mark u as visited for all vertices v which is adjacent with u, do if v is not visited, then topoSort(c, visited, stack) done push u into stack End Input − The given directed acyclic graph. Output − Sequence of nodes. Begin initially mark all nodes as unvisited for all nodes v of the graph, do if v is not visited, then topoSort(i, visited, stack) done pop and print all elements from the stack End #include<iostream> #include<stack> #define NODE 6 using namespace std; int graph[NODE][NODE] = { {0, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0}, {0, 0, 0, 1, 0, 0}, {0, 1, 0, 0, 0, 0}, {1, 1, 0, 0, 0, 0}, {1, 0, 1, 0, 0, 0} }; void topoSort(int u, bool visited[], stack<int> &stk) { visited[u] = true; //set as the node v is visited for(int v = 0; v<NODE; v++) { if(graph[u][v]){ //for allvertices v adjacent to u if(!visited[v]) topoSort(v, visited, stk); } } stk.push(u); //push starting vertex into the stack } void performTopologicalSort() { stack<int> stk; bool vis[NODE]; for(int i = 0; i<NODE; i++) vis[i] = false; //initially all nodes are unvisited for(int i = 0; i<NODE; i++) if(!vis[i]) //when node is not visited topoSort(i, vis, stk); while(!stk.empty()) { cout << stk.top() << " "; stk.pop(); } } main() { cout << "Nodes after topological sorted order: "; performTopologicalSort(); } Nodes after topological sorted order: 5 4 2 3 1 0
https://www.tutorialspoint.com/cplusplus-program-to-create-a-random-linear-extension-for-a-dag
CC-MAIN-2021-43
refinedweb
392
58.52
productivity of computer science is only possible because it is built upon an elegant and powerful set of fundamental ideas. All computing begins with representing information, specifying logic to process it, and designing abstractions that manage the complexity of that logic. Mastering these fundamentals will require us to understand precisely how computers interpret computer programs and carry out computational processes. These fundamental ideas have long been taught at Berkeley using the classic textbook Structure and Interpretation of Computer Programs (SICP) by Harold Abelson and Gerald Jay Sussman with Julie Sussman. These lecture notes borrow heavily from that textbook, which the original authors have kindly licensed for adaptation and reuse. The embarkment of our intellectual journey requires no revision, nor should we expect that it ever. —Abelson and Sussman, SICP (1993) A language isn’t something you learn so much as something you join. In order to define computational processes, we need a programming language; preferably one many humans and a great variety of computers can all understand. In this course, we will learn the Python language. Python is a widely used programming language that has recruited enthusiasts from many professions: web programmers, game engineers, scientists, academics, and even designers of new programming languages. When you learn Python, you join a million-person-strong community of developers. Developer communities are tremendously important institutions: members help each other solve problems, share their code and experiences, and collectively develop software and tools. Dedicated members often achieve celebrity and widespread esteem for their contributions. Perhaps someday you will be named among these elite Pythonistas. The Python language itself is the product of a large volunteer community that prides itself on the diversity of its contributors. The language was conceived and first implemented by Guido van Rossum in the late 1980's. The first chapter of his Python 3 Tutorial explains why Python is so popular, among the many languages available today. Python excels as an instructional language because, throughout its history, Python's developers have emphasized the human interpretability of Python code, reinforced by the Zen of Python guiding principles of beauty, simplicity, and readability. Python is particularly appropriate for this course because its broad set of features support a variety of different programming styles, which we will explore. While there is no single way to program in Python, there are a set of conventions shared across the developer community that facilitate the process of reading, understanding, and extending existing programs. come naturally as you progress through the course. However, Python is a rich language with many features and uses, and we consciously introduce them slowly as we layer on fundamental computer science concepts. For experienced students who want to inhale all of the details of the language quickly, we recommend reading Mark Pilgrim's book Dive Into Python 3, which is freely available online. The topics in that book differ substantially from the topics of this course, but the book contains very valuable practical information on using the Python language. Be forewarned: unlike these notes, Dive Into Python 3 assumes substantial programming experience. The best way to get started programming in Python is to interact with the interpreter directly. This section describes how to install Python 3, initiate an interactive session with the interpreter, and start programming. As with all great software, Python has many versions. This course will use the most recent stable version of Python 3 (currently Python 3.2). Many computers have older versions of Python installed already, but those will not suffice for this course. You should be able to use any computer for this course, but expect to install Python 3. Don't worry, Python is free. Dive Into Python 3 has detailed installation instructions for all major platforms. These instructions mention Python 3.1 several times, but you're better off with Python 3.2 (although the differences are insignificant for this course). All instructional machines in the EECS department have Python 3.2 already installed. In an interactive Python session, you type some Python code after the prompt, >>>. The Python interpreter reads and evaluates what you type, carrying out your various commands. There are several ways to start an interactive session, and they differ in their properties. Try them all to find out what you prefer. They all use exactly the same interpreter behind the scenes. In any case, if you see the Python prompt, >>>, then you have successfully started an interactive session. These notes depict example interactions using the prompt, followed by some input. >>> 2 + 2 4 Controls: Each session keeps a history of what you have typed. To access that history, press <Control>-P (previous) and <Control>-N (next). <Control>-D exits a session, which discards this history. And, as imagination bodies forthThe forms of things to unknown, and the poet's penTurns them to shapes, and gives to airy nothingA local habitation and a name. —William Shakespeare, A Midsummer-Night's Dream To give Python the introduction it deserves, we will begin with an example that uses several language features. In the next section, we will have to start from scratch and build up the language piece by piece. Think of this section as a sneak preview of powerful features to come. Python has built-in support for a wide range of common programming activities, like manipulating text, displaying graphics, and communicating over the Internet. The import statement >>> from urllib.request import urlopen loads functionality for accessing data on the Internet. In particular, it makes available a function called urlopen, which can access the content at a uniform resource locator (URL), which is a location of something on the Internet. Statements & Expressions. Python code consists of statements and expressions. Broadly, computer programs consist of instructions to either Statements typically describe actions. When the Python interpreter executes a statement, it carries out the corresponding action. On the other hand, expressions typically describe computations that yield values. When Python evaluates an expression, it computes its value. This chapter introduces several types of statements and expressions. The assignment statement >>> shakespeare = urlopen('') associates the name shakespeare with the value of the expression that follows. That expression applies the urlopen function to a URL that contains the complete text of William Shakespeare's 37 plays, all in a single text document. Functions. Functions encapsulate logic that manipulates data. A web address is a piece of data, and the text of Shakespeare's plays is another. The process by which the former leads to the latter may be complex, but we can apply that process using only a simple expression because that complexity is tucked away within a function. Functions are the primary topic of this chapter. Another assignment statement >>> words = set(shakespeare.read().decode().split()) associates the name words to the set of all unique words that appear in Shakespeare's plays, all 33,721 of them. The chain of commands to read, decode, and split, each operate on an intermediate computational entity: data is read from the opened URL, that data is decoded into text, and that text is split into words. All of those words are placed in a set. Objects. A set is a type of object, one that supports set operations like computing intersections and testing membership. An object seamlessly bundles together data and the logic that manipulates that data, in a way that hides the complexity of both. Objects are the primary topic of Chapter 2. The expression >>> {w for w in words if len(w) >= 5 and w[::-1] in words} {'madam', 'stink', 'leets', 'rever', 'drawer', 'stops', 'sessa', 'repaid', 'speed', 'redder', 'devil', 'minim', 'spots', 'asses', 'refer', 'lived', 'keels', 'diaper', 'sleek', 'steel', 'leper', 'level', 'deeps', 'repel', 'reward', 'knits'} is a compound expression that evaluates to the set of Shakespearian words that appear both forward and in reverse. The cryptic notation w[::-1] enumerates each letter in a word, but the -1 says to step backwards (:: here means that the positions of the first and last characters to enumerate are defaulted.) When you enter an expression in an interactive session, Python prints its value on the following line, as shown. Interpreters. Evaluating compound expressions requires a precise procedure that interprets code in a predictable way. A program that implements such a procedure, evaluating compound expressions and statements, is called an interpreter. The design and implementation of interpreters is the primary topic of Chapter 3. When compared with other computer programs, interpreters for programming languages are unique in their generality. Python was not designed with Shakespeare or palindromes in mind. However, its great flexibility allowed us to process a large amount of text with only a few lines of code. In the end, we will find that all of these core concepts are closely related: functions are objects, objects are functions, and interpreters are instances of both. However, developing a clear understanding of each of these concepts and their role in organizing code is critical to mastering the art of programming. Python is waiting for your command. You are encouraged to experiment with the language, even though you may not yet know its full vocabulary and structure. However, be prepared for errors. While computers are tremendously fast and flexible, they are also extremely rigid. The nature of computers is described in Stanford's introductory course as The fundamental equation of computers is: computer = powerful + stupid Computers are very powerful, looking at volumes of data very quickly. Computers can perform billions of operations per second, where each operation is pretty simple. Computers are also shockingly stupid and fragile. The operations that they can do are extremely rigid, simple, and mechanical. The computer lacks anything like real insight .. it's nothing like the HAL 9000 from the movies. If nothing else, you should not be intimidated by the computer as if it's some sort of brain. It's very mechanical underneath it all. Programming is about a person using their real insight to build something useful, constructed out of these teeny, simple little operations that the computer can do. —Francisco Cai and Nick Parlante, Stanford CS101 The rigidity of computers will immediately become apparent as you experiment with the Python interpreter: even the smallest spelling and formatting changes will cause unexpected outputs and errors. Learning to interpret errors and diagnose the cause of unexpected errors is called debugging. Some guiding principles of debugging are: Incremental testing, modular design, precise assumptions, and teamwork are themes that persist throughout this course. Hopefully, they will also persist throughout your computer science career. A programming language is more than just a means for instructing a computer to perform tasks. The language also serves as a framework within which we organize our ideas about mechanisms for accomplishing this: and should have methods for combining and abstracting both functions and data. Having experimented with the full Python interpreter, we now must start anew, methodically developing the Python language piece by piece.. A function in Python is more than just an input-output mapping; it describes a computational process. However, the way in which Python expresses function application is the same as in mathematics. >>> max(7.5, 9.5) 9.5 This call expression has subexpressions: the operator precedes parentheses, which enclose a comma-delimited list of operands. The operator must be a function. The operands can be any values; in this case they are numbers. several others who may read your code in the future. Finally, their names available by default, so as to avoid complete chaos. Instead, it organizes the functions and other quantities that it knows about into modules, which together comprise the Python Library. To use these elements, one imports them. For example, the math module provides a variety of familiar mathematical functions: >>> from math import sqrt, exp >>> sqrt(256) 16.0 >>> exp(1) 2.718281828459045 or exp).. We can also assign multiple values to multiple names in a single statement, where names and expressions are separated by commas. >>> area, circumference = pi * radius * radius, 2 * pi * radius >>> area 314.1592653589793 >>> circumference 62.83185307179586 The = symbol is called the assignment operator in Python (and many other languages). Assignment is Python's print a function: >>> max <built-in function max> We can use assignment statements to give new names to existing functions. >>> f = max >>> f <built-in function max> >>> f(3, 4) 4 And successive assignment statements can rebind a name to a new value. >>> f = 2 >>> f 2 In Python, the names bound via assignment are often called variable names because they can be bound to a variety of different values in the course of executing a program. >>> mul(add(2, mul(4, 6)), add(3, 5)) 208 requires that this evaluation procedure be applied four times. If we draw each expression that we evaluate, we can visualize the hierarchical structure of this process. This illustration is called an expression tree. In computer science, trees grow from the top down. The objects at each point in a tree are called nodes; in this case, they are expressions paired with their values. Evaluating its root, the full expression, statement or expression has its own evaluation or execution procedure, which we will introduce incrementally as we proceed. loosely say that numerals (and expressions) themselves evaluate to values in the context of Python programs. As we continue to develop a formal model of evaluation, we will find that diagramming the internal state of the interpreter helps us track the progress of our evaluation procedure. An essential part of these diagrams is a representation of a function.. Non-pure functions. In addition to returning a value, applying a non-pure function can generate side effects, which make some change to the state of the interpreter or computer. A common side effect is to generate additional output beyond the return value, using the print function. >>> print(-2) -2 >>> Signatures. Functions differ in the number of arguments that they are allowed to take. To track these requirements, we draw each function in a way that shows the function name and names of its arguments. The function abs takes only one argument called number; providing more or fewer will result in an error. The function print can take an arbitrary number of arguments, hence its rendering as print(...). A description of the arguments that a function can take is called the function's signature., rather than a tab.. Our subset of Python is now complex enough that the meaning of programs is non-obvious. What if a formal parameter has the same name as a built-in function? Can two functions share names without confusion? To resolve such questions, we must describe environments in more detail. An environment in which an expression is evaluated consists of a sequence of frames, depicted as boxes. Each frame contains bindings, which associate a name with its corresponding value. There is a single global frame that contains name bindings for all built-in functions (only abs and max are shown). We indicate the global frame with a globe symbol. Assignment and import statements add entries to the first frame of the current environment. So far, our environment consists only of the global frame. >>> from math import pi >>> tau = 2 * pi A def statement also binds a name to the function created by the definition. The resulting environment after defining square appears below: These environment diagrams show the bindings of the current environment, along with the values (which are not part of any frame) to which names are bound. Notice that the name of a function is repeated, once in the frame, and once as part of the function itself. This repetition is intentional: many different names may refer to the same function, but that function itself has only one intrinsic name. However, looking up the value for a name in an environment only inspects name bindings. The intrinsic name of a function does not play a role in looking up names. In the example we saw earlier, >>> f = max >>> f <built-in function max> The name max is the intrinsic name of the function, and that's what you see printed as the value for f. In addition, both the names max and f are bound to that same function in the global environment. As we proceed to introduce additional features of Python, we will have to extend these diagrams. Every time we do, we will list the new features that our diagrams can express. New environment Features: Assignment and user-defined function definition. To evaluate a call expression whose operator names a user-defined function, the Python interpreter follows a process similar to the one for evaluating expressions with a built-in operator function. That is, the interpreter evaluates the operand expressions, and then applies the named function to the resulting arguments. The act of applying a user-defined function introduces a second local frame, which is only accessible to that function. To apply a user-defined function to some arguments: The environment in which the body is evaluated consists of two frames: first the local frame that contains argument bindings, then the global frame that contains everything else. Each instance of a function application has its own independent local frame. This figure includes two different aspects of the Python interpreter: the current environment, and a part of the expression tree related to the current line of code being evaluated. We have depicted the evaluation of a call expression that has a user-defined function (in blue) as a two-part rounded rectangle. Dotted arrows indicate which environment is used to evaluate the expression in each part. shall see how this model can serve as a blueprint for implementing a working interpreter for a programming language. New environment Feature: Function application. Let us again consider our two simple definitions: >>> from operator import add, mul >>> def square(x): return mul(x, x) >>> def sum_squares(x, y): return add(square(x), square(y)) And the process that evaluates the following call expression: >>> sum_squares(5, 12) 169. In this diagram, the local frame points to its successor, the global frame. All local frames must point to a predecessor, and these links define the sequence of frames that is the current environment. the following environment diagrams, we will call this frame A and replace arrows pointing to this frame with the label A as well. In operand 0, square names a user-defined function in the global frame, while x names the number 5 in the local frame. Python applies square to 5 by introducing yet another local frame that binds x to 5. Using this local frame, the body expression mul(x, x) evaluates to 25. Our evaluation procedure now turns to operand 1, for which y names the number 12. Python evaluates the body of square again, this time introducing yet another local environment frame that binds x to 12. Hence, operand 1 evaluates to 144. Finally, applying addition to the arguments 25 and 144 yields a final value for the body of sum_squares: 169. This figure, while complex, serves to illustrate many of the fundamental ideas we have developed so far. Names are bound to values, which spread across many local frames that all precede a single global frame that contains shared names. Expressions are tree-structured, and the environment must be augmented each time a subexpression contains a call to a user-defined function. All of this machinery exists to ensure that names resolve to the correct values at the correct points in the expression tree.. Our. programming community. As a side effect of following these conventions, you will find that your code becomes more internally consistent. Review these guidelines periodically as you write programs, and soon your names will be delightfully Pythonic. user should not need to know how the function is implemented in order to use it. The Python Library has this property. Many developers use the functions defined there, but few ever inspect their implementation. In fact, many implementations of Python Library functions are not written in Python at all, but instead in the C language.. Python also allows subexpression grouping with parentheses, to override the normal precedence rules or make the nested structure of an expression more explicit. >>> (2 + 3) * (4 + 5) 45 evaluates to the same result as >>> mul(add(2, 3), add(4, 5)) 45 You should feel free to use these operators and parentheses in your programs. Idiomatic Python prefers operators over call expressions for simple mathematical operations. Functions: >>> k * k_b * t / v >>> pressure(1, 273.15) 2269.974834 Here, pressure is defined to take three arguments, but only two are provided in the call expression that follows. In this case, the value for n is taken from the def statement defaults (which looks like an assignment to n, although as this discussion suggests, it is more of a conditional assignment.) As a guideline, most data values used in a function's body should be expressed as default values to named arguments, so that they are easy to inspect and can be changed by the function caller. Some values that never change, like the fundamental constant k_b, can be defined in the global frame. The expressive power of the functions that we can define at this point is very limited, because we have not introduced a way to make tests and to perform different operations depending on the result of a test. Control statements will give us this capacity. Control statements differ fundamentally from the expressions that we have studied so far. They deviate from the strict evaluation of subexpressions from left to write, and get their name from the fact that they control what the interpreter should do next, possibly based on the values of expressions. So far, we have primarily considered how to evaluate expressions. However, we have seen three kinds of statements: assignment, def, and return statements. These lines of Python code are not themselves expressions, although they all contain expressions as components. To emphasize that the value of a statement is irrelevant (or nonexistant), we describe statements as being executed rather than evaluated. applied. (spaces, not tabs). Any variation in indentation will cause an error. Originally, we stated that the body of a user-defined function consisted only of a return statement with a single return expression. In fact, functions can define a sequence of operations that extends beyond a single expression. The structure of compound Python statements naturally allows us to extend our concept of a function body to multiple statements.: >>> def percent_difference(x, y): difference = abs(x-y) return 100 * difference / x >>> percent_difference(40, 50) 25.0 So far, local assignment hasn't increased the expressive power of our function definitions. It will do so, when combined with the control statements below. In addition, local assignment also plays a critical role in clarifying the meaning of complex expressions by assigning names to intermediate quantities. New environment Feature: Local assignment. Python has a built-in function for computing absolute values. >>> abs(-2) 2 We would like to be able to implement such a function ourselves, but we cannot currently define a function that has a test and a choice. We would like to express that if x is positive, abs(x) returns x. Furthermore, if x is 0, abs(x) returns 0. Otherwise, abs(x) returns -x. In Python, we can express this choice with a conditional statement. >>> def absolute_value(x): """Compute abs(x).""" if x > 0: return x elif x == 0: return 0 else: return -x >>> absolute_value(-2) == abs(-2) True This implementation of absolute_value raises several important issues. Conditional statements. A conditional statement in Python consist of a series of headers and suites: a required if clause, an optional sequence of elif clauses, and finally an optional else clause: if <expression>: <suite> elif <expression>: <suite> else: <suite> When executing a conditional statement, each clause is considered in order. can never be assigned or returned. Python includes several false values, including 0, None, and the boolean value False. All other numbers are true values. In Chapter 2, we will see that every native data type testing (==), tests. Functions that perform tests potential of computers to make us powerful.. To build up the nth value, we need to track how many values we've created (k), along with the kth value (curr) and its predecessor (pred), like so: >>> def fib(n): """Compute the nth Fibonacci number, for n >= 2.""" pred, curr = 0, 1 # Fibonacci numbers k = 2 # Position of curr in the sequence while k < n: pred, curr = curr, pred + curr # Re-bind pred and curr k = k + 1 # Re-bind k return curr >>> fib(8) 13 the state of the environment. Note that we have also used the word "test" as a technical term for the expression in the header of an if or while statement. It should be clear from context when we use the word "test" to denote an expression, and when we use it to denote a verification mechanism.nd Fibonacci number should be 1' assert fib(50) == 7778742049, 'Error at the 50th Fibonacci number' When writing Python in files, rather than directly into the interpreter, tests should be run_docstring_examples >>> run_docstring_examples(sum_naturals, globals()) When writing Python in files, all doctests in a file can be run by starting Python with the doctest command line option: python3 -m doctest <python_source_file> The key to effective testing is to write (and run) tests immediately after (or even before) implementing new functions. A test that applies a single function is called a unit test. Exhaustive unit testing is a hallmark of good program design. We have seen that functions are, in effect, abstractions that describe compound operations independent of the particular values of their arguments. In square, >>> def square(x): return x * x we are not talking about the square of a particular number, but rather about a method for obtaining the square of any number. Of course we could get along without ever defining this function, by always writing expressions such as >>> 3 * 3 9 >>> 5 * 5 25 and never mentioning square explicitly. This practice would suffice for simple computations like square, but would become arduous for more complex examples. In general, lacking function definition would put us at the disadvantage of forcing us to work always at the level of the particular operations that happen to be primitives in the language (multiplication, in this case) rather than in terms of higher-level operations. Our programs would be able to compute squares, but our language would lack the ability to express the concept of squaring. One of the things we should demand from a powerful programming language is the ability to build abstractions by assigning names to common patterns and then to work in terms of the abstractions directly. Functions provide this ability. As we will see in the following examples, there are common programming patterns that recur in code, but are used with a number of different functions. These patterns can also be abstracted, by giving them names. To express certain general patterns as named concepts, we will need to construct functions that can accept other functions as arguments or return functions as values. Functions that manipulate functions are called higher-order functions. This section shows how higher-order functions can serve as powerful abstraction mechanisms, vastly increasing the expressive power of our language. Consider the following three functions, which all compute summations. The first, sum_naturals, computes the sum of natural numbers up to n: >>> def sum_naturals(n): total, k = 0, 1 while k <= n: total, k = total + k, k + 1 return total >>> sum_naturals(100) 5050 The second, sum_cubes, computes the sum of the cubes of natural numbers up to n. >>> def sum_cubes(n): total, k = 0, 1 while k <= n: total, k = total +>(k) return total The presence of such a common pattern is strong evidence that there is a useful abstraction waiting to be brought to the surface. Each of these functions is a summation of terms. As program designers, we would like our language to be powerful enough so that we can write a function that expresses the concept of summation itself rather than only functions that compute particular sums. We can do so readily in Python by taking the common template shown above and transforming the "slots" into formal parameters: >>> def summation(n, term, next): total, k = 0, 1 while k <= n: total, k = total + term(k), next(k) return total Notice that summation takes as its arguments the upper bound n together with the functions term and next. We can use summation just as we would any function, and it expresses summations succinctly: >>> def cube(k): return pow(k, 3) >>> def successor(k): return k + 1 >>> def sum_cubes(n): return summation(n, cube, successor) >>> sum_cubes(3) 36 Using an identity function that returns its argument, we can also sum integers. >>> def identity(k): return k >>> def sum_naturals(n): return summation(n, identity, successor) >>> sum_naturals(10) 55 We can also define pi_sum piece by piece, using our summation abstraction to combine components. >>> def pi_term(k): denominator = k * (k + 2) return 8 / denominator >>> def pi_next(k): return k + 4 >>> def pi_sum(n): return summation(n, pi_term, pi_next) >>> pi_sum(1e6) 3.1415906535898936 We introduced user-defined functions as a mechanism for abstracting patterns of numerical operations so as to make them independent of the particular numbers involved. With higher-order functions, we begin to see a more powerful kind of abstraction: some functions express general methods of computation, independent of the particular functions they call. Despite this conceptual extension of what a function means, our environment model of how to evaluate a call expression extends gracefully to the case of higher-order functions, without change. When a user-defined function is applied to some arguments, the formal parameters are bound to the values of those arguments (which may be functions) in a new local frame. Consider the following example, which implements a general method for iterative improvement and uses it to compute the golden ratio. An iterative improvement algorithm begins with a guess of a solution to an equation. It repeatedly applies an update function to improve that guess, and applies a test to check whether the current guess is "close enough" to be considered correct. >>> def iter_improve(update, test, guess=1): while not test(guess): guess = update(guess) return guess The test function typically checks whether two functions, f and g, are near to each other for the value-5): return abs(x - y) < tolerance The golden ratio, often called phi, is a number that appears frequently in nature, art, and architecture. It can be computed via iter_improve using the golden_update, and it converges when its successor is equal to its square. >>> def golden_update(guess): return 1/guess + 1 >>> def golden_test(guess): return near(guess, square, successor) At this point, we have added several bindings to the global frame. The depictions of function values are abbreviated for clarity. Calling iter_improve with the arguments golden_update and golden_test will compute an approximation to the golden ratio. >>> iter_improve(golden_update, golden_test) 1.6180371352785146 By tracing through the steps of our evaluation procedure, we can see how this result is computed. First, a local frame for iter_improve is constructed with bindings for update, test, and guess. In the body of iter_improve, the name test is bound to golden_test, which is called on the initial value of guess. In turn, golden_test calls near, creating a third local frame that binds the formal parameters f and g to square and successor. Completing the evaluation of near, we see that the golden_test, and we didn't even illustrate the whole thing. Second, it is only by virtue of the fact that we have an extremely general evaluation procedure that small components can be composed into complex processes. Understanding that procedure allows us to validate and inspect the process we have created. As always, our new general method iter_improve needs a test to check its correctness. The golden ratio can provide such a test, because it also has an exact closed-form solution, which we can compare to this iterative result. >>> phi = 1/2 + pow(5, 1/2)/2 >>> def near_test(): assert near(phi, square, successor), 'phi * phi is not near phi + 1' >>> def iter_improve_test(): approx_phi = iter_improve(golden_update, golden_test) assert approx_eq(phi, approx_phi), 'phi differs from its approximation' New environment Feature: Higher-order functions. Extra for experts. We left out a step in the justification of our test. For what range of tolerance values e can you prove that if near(x, square, successor) is true with tolerance value e, then approx_eq(phi, x) is true with the same tolerance? The above examples demonstrate how the ability to pass functions as arguments significantly enhances the expressive power of our programming language. Each general concept or equation maps onto its own short function. One negative consequence of this approach to programming is that the global frame becomes cluttered with names of small functions. Another problem is that we are constrained by particular function signatures: the update argument to iter_improve must take exactly one argument. In Python, nested function definitions address both of these problems, but require us to amend our environment model slightly. Let's consider a new problem: computing the square root of a number. Repeated application of the following update converges to the square root of x: >>> def average(x, y): return (x + y)/2 >>> def sqrt_update(guess, x): return average(guess, x/guess) This two-argument update function is incompatible with iter_improve, and it just provides an intermediate value; we really only care about taking square roots in the end. The solution to both of these issues is to place function definitions inside the body of other definitions. >>> def square_root(x): def update(guess): return average(guess, x/guess) def test(guess): return approx_eq(square(guess), x) return iter_improve(update, test) Like local assignment, local def statements only affect the current local frame. These functions are only in scope while square_root is being evaluated. Consistent with our evaluation procedure, these local def statements don't even get evaluated until square_root is called. Lexical scope. Locally defined functions also have access to the name bindings in the scope in which they are defined. In this example, update refers to the name x, which is a formal parameter of its enclosing function square_root. This discipline of sharing names among nested definitions is called lexical scoping. Critically, the inner functions have access to the names in the environment where they are defined (not where they are called). We require two extensions to our environment model to enable lexical scoping. Previous to square_root, all functions were defined in the global environment, and so they were all associated with the global environment. When we evaluate the first two clauses of square_root, we create functions that are associated with a local environment. In the call >>> square_root(256) 16.00000000000039 the environment first adds a local frame for square_root and evaluates the def statements for update and test (only update is shown). Subsequently, the name update resolves to this newly defined function, which is passed as an argument to iter_improve. Within the body of iter_improve, we must apply our update function to the initial guess of 1. This final application creates an environment for update that begins with a local frame containing only g, but with the preceding frame for square_root still containing a binding for x. The most crucial part of this evaluation procedure is the transfer of an environment associated with a function to the local frame in which that function is evaluated. This transfer is highlighted by the blue arrows in this diagram. In this way, the body of update can resolve a value for x. Hence, we realize two key advantages of lexical scoping in Python. The update function carries with it some data: the values referenced in the environment in which it was defined. Because they enclose information in this way, locally defined functions are often called closures. New environment Feature: Local function definition. We can achieve even more expressive power in our programs by creating functions whose returned values are themselves functions. An important feature of lexically scoped programming languages is that locally defined functions keep their associated environment when they are returned. The following example illustrates the utility of this feature. With many simple functions >>> add_one_and_square = compose1(square, successor) >>> add_one_and_square(12) 169 The 1 in compose1 indicates that the composed functions and returned result all take 1 argument. This naming convention isn't enforced by the interpreter; the 1 is just part of the function name. At this point, we begin to observe the benefits of our investment in a rich model of computation. No modifications to our environment model are required to support our ability to return functions in this way. So far, every time we want to define a new function, we need to give it a name. But for other types of expressions, we don’t need to associate intermediate products. Lambda expressions are limited: They are only useful for simple, one-line functions that evaluate and return a single expression. In those special cases where they apply, lambda expressions can be quite expressive. >>> def compose1(f,g): return lambda x: f(g(x)) We can understand the structure of a lambda expression by constructing a corresponding English sentence: lambda x : f(g(x)) "A function that takes x and returns f(g(x))" Some programmers find that using unnamed functions from lambda expressions is shorter and more direct. However, compound lambda expressions are notoriously illegible, despite their brevity. The following definition is correct, but some. If you can make your program easier to interpret, you. This final extended example shows how function values, local defintions, and lambda expressions can work together to express general ideas concisely. Newton's method is a classic iterative approach to finding the arguments of a mathematical function that yield a return value of 0. These values are called roots of a single-argument mathematical function. Finding a root of a function is often equivalent to solving a related math problem. Thus, a general method for finding roots will also provide us an algorithm to compute square roots and logarithms. Moreover, the equations for which we want to compute roots only contain simpler operations: multiplication and exponentiation. A comment before we proceed: it is easy to take for granted the fact that we know how to compute square roots and logarithms. Not just Python, but your phone, your pocket calculator, and perhaps even your watch can do so for you. However, part of learning computer science is understanding how quantities like these can be computed, and the general approach presented here is applicable to solving a large class of equations beyond those built into Python. Before even beginning to understand Newton's method, we can start programming; this is the power of functional abstractions. We simply translate our previous statements into code. >>> def square_root(a): return find_root(lambda x: square(x) - a) >>> def logarithm(a, base=2): return find_root(lambda x: pow(base, x) - a) Of course, we cannot apply any of these functions until we define find_root, and so we need to understand how Newton's method works. Newton's method is also an iterative improvement algorithm: it improves a guess of the root for any function that is differentiable. Notice that both of our functions of interest change smoothly; graphing x versus f(x) for on a 2-dimensional plane shows that both functions produce a smooth curve without kinks that crosses 0 at the appropriate point. Because they are smooth (differentiable), these curves can be approximated by a line at any point. Newton's method follows these linear approximations to find function roots.. Our Newton update expresses the computational process of following this tangent line to 0. We approximate the derivative of the function by computing its slope over a very small interval. >>> def approx_derivative(f, x, delta=1e-5): df = f(x + delta) - f(x) return df/delta >>> def newton_update(f): def update(x): return x - f(x) / approx_derivative(f, x) return update Finally, we can define the find_root function in terms of newton_update, our iterative improvement algorithm, and a test to see if f(x) is near 0. We supply a larger initial guess to improve performance for logarithm. >>> def find_root(f, initial_guess=10): def test(x): return approx_eq(f(x), 0) return iter_improve(newton_update(f), test, initial_guess) >>> square_root(16) 4.000000000026422 >>> logarithm(32, 2) 5.000000094858201 As you experiment with Newton's method, be aware that it will not always converge. The initial guess of iter_improve must be sufficiently close to the root, and various conditions about the function must be met. Despite this shortcoming, Newton's method is a powerful general computational method for solving differentiable equations. In fact, very fast algorithms for logarithms and large integer division employ variants of the technique.. Control structures, on the other hand, do not: you cannot pass if to a function the way you can sum. Python provides special syntax to apply higher-order functions as part of executing a def statement, called a decorator. Perhaps the most common example is a trace. >>> def trace1(fn): def wrapped(x): print('-> ', fn, '(', x, ')') return fn(x) return wrapped >>> @trace1 def triple(x): return 3 * x >>> triple(12) -> <function triple at 0x102a39848> ( 12 ) 36 In this example, A higher-order function trace1 is defined, which returns a function that precedes a call to its argument with a print statement that outputs the argument. The def statement for triple has an annototation, @trace1, which affects the execution rule for def. As usual, the function triple is created. However, the name triple is not bound to this function. Instead, the name triple is bound to the returned function value of calling trace1 on the newly defined triple function. In code, this decorator is equivalent to: >>> def triple(x): return 3 * x >>> triple = trace1(triple) In the projects for this course, decorators are used for tracing, as well as selecting which functions to call when a program is run from the command line. Extra for experts. The actual rule is that the decorator symbol @ may be followed by an expression (@trace1 is just a simple expression consisting of a single name). Any expression producing a suitable value is allowed. For example, with a suitable definition, you could define a decorator check_range so that decorating a function definition with @check_range(1, 10) would cause the function's results to be checked to make sure they are integers between 1 and 10. The call check_range(1,10) would return a function that would then be applied to the newly defined function before it is bound to the name in the def statement. A short tutorial on decorators by Ariel Ortiz gives further examples for interested students.
http://inst.eecs.berkeley.edu/~cs61a/sp12/book/functions.html
CC-MAIN-2014-52
refinedweb
7,121
52.9
ISO 8583 jPOS bridge configuration This Service Virtualization beta feature introduces support for the ISO 8583 protocol, the messaging system used for card-based electronic transactions. ISO 8583 support Service Virtualization supports ISO 8583 indirectly, by converting messages into XML using the standalone jPOS server, an open source implementation of the international ISO 8583 standard. Service Virtualization provides an external extension based on jPOS that works with an XML virtual service. The Service Virtualization ISO 8583 jPOS bridge converts ISO 8583 messages into XML and passes them back and forth between jPOS and Service Virtualization. To configure jPOS to send data to the XML virtual service, you need to edit several files which are provided by Service Virtualization. jPOS periodically scans the folder in which the configuration files are located in order to detect changes. Before you begin Prerequisite: Java 8 is required to run the SV ISO 8583 bridge. The bridge looks for your Service Virtualization Designer or Server installation and use its Java runtime environment. If the scripts cannot find Service Virtualization and its Java, you must install Java manually and then set the JAVA_HOME environment variable to point to the Java installation folder. - Prerequisite: Obtain and install jPOS 2.0.4. It is recommended to install jPOS on the same machine as Service Virtualization. For download and installation details, see On the machine where jPOS is installed, set the JPOS_HOME environment variable to the jPOS installation folder. Unzip the SV ISO-8583 jPOS bridge package, located in the Service Virtualization Tools\Iso8583 folder. By default, these folders are located in: Designer: C:\Program Files\Micro Focus\Service Virtualization Designer\Tools\Iso8583 Server: C:\Program Files\Micro Focus\Service Virtualization Server\Tools\Iso8583 Follow all the steps in the next sections to configure the ISO 8583 jPOS bridge. Create a virtual service Create and configure a new XML/HTTP service to work with the bridge. - In the Service Virtualization Designer, create a new XML over HTTP virtual service. - Set the virtual service to use the HTTP Gateway agent. On the Service Properties page, enter the URL path for the virtual service to use as the real service URL. Configure it as follows: host>:<port> where: host= The machine on which jPOS runs. port= The port used for the connection from Service Virtualization to jPOS. This is the port defined in the 10_jetty.xmlfile, described in Configure the connection to jPOS. By default, the port number is 6001. Enter this URL into the real-service pathproperty of the configuration file, as described in the next section, Configure jPOS. - Enter the URL of the virtual service into the the sv-serviceproperty of the jPOS configuration file, as described in the next section. Configure jPOS You configure jPOS using XML files located in the deploy folder of the bridge package. To add a new virtual service to simulate the ISO 8583 protocol, you need to create and configure a new XML file and add the file to the deploy folder. Service Virtualization provides a template fo assist you in creating the jPOS configuration file. In the <Service Virtualization installation folder>\Tools\Iso8583\HP.SV.Iso8583 folder\deploy folder, make a copy of 50_service_template.txt. Note: jPOS periodically scans the deploy folder in order to detect changes, such as new files. If there is an error in the configuration file, it will generate an error and jPOS will rename the file with a .BAD extension. To prevent this: - Do not edit the configuration file while jPOS is running. - If you must reconfigure the configuration file while jPOS is running, create a copy of the file and save it in another location. When you finish editing, copy the file back to the deploy folder, and rename it with the .xml extension. Define the TCP port on which jPOS will listen for requests from your client. - In the jPOS configuration file for your service, enter the port value in the port property, or accept the default value of 6000. - If you are using SSL, see Configure SSL. Define communication between jPOS and your real service. - In the host and port properties of the configuration file, define the host and port number of the real service. In the real-service pathproperty of the configuration file, define the path used by Service Virtualization to access the real service. <real-service This is the value you entered for the real service endpoint in the new virtual service. For details, see Create a virtual service. - If you are using SSL, see Configure SSL. - Enter the URL of the new virtual service into the sv-serviceproperty of the configuration file. Configure additional options. There are several other parameters you can modify in the configuration file to control the connection to the real service: - When you are finished configuring the file, rename it with an .xml extension. Make sure the file is saved in the deploy folder. Configure the connection to jPOS The 10_jetty.xml file, also located in the deploy folder of the SV ISO-8583 jPOS bridge package, defines the connection to the jPOS server, and provides the interface for Service Virtualization to pass messages to jPOS. You can change the default port set in this file. Configure SSL There are two points at which you can configure SSL: between your client and jPOS, and between jPOS and the real service. Listener You configure communication to allow jPOS to receive requests from your client, by obtaining a certificate and private key which jPOS will present to the client. On your jPOS machine, generate a certificate and name it the hostname of your jPOS machine. Set up the Java KeyStore containing the certificate and corresponding private key. For example, to generate an RSA certificate and private key protected by the password changeitusing the -keypassoption, and storing it in the file keystore.jksprotected by the password changeitusing the -storepassoption, you use the following command: keytool -genkey -alias server-alias -keyalg RSA -keypass changeit -storepass changeit -keystore keystore.jks Keytool will request that you enter the hostname of your jPOS machine. When complete, keytool will display a summary page. Make sure that CN is equal to the hostname. For detailed instructions, refer to Oracle documentation. In the jPOS configuration file for your virtual service: - uncomment the server-socket-factoryelement - enter the path to your KeyStore file relative to the jPOS installation folder - enter the password for the KeyStore and private key You may also need to export the certificate for the client to use during the handshake, by running the following command: keytool -export -alias server-alias -storepass changeit -file server.cer -keystore keystore.jks Sender To configure SSL communication between jPOS and the real service: Obtain the real service's certificate and put it either in Windows certificate authority or in the Java TrustStore. If you have a certificate in a server.cer file that was exported from the Java KeyStore, you can use Java keytoolto create the TrustStore. It will create the file truststore.jksprotected by the password changeitand import the certificate to the TrustStore. keytool -import -v -trustcacerts -alias server-alias -file server.cer -keystore truststore.jks -storepass changeit For full details on working with the Java KeyStore and TrustStore, refer to the Oracle documentation. Configure the SSL-related elements in the jPOS configuration file. In the <channel>element, uncomment the following properties and set the values: If you have a certificate in Windows trust store, set the winstoreproperty to true. - If you use Java TrustStore, set its location and password. - To disable the server certificate check completely, set the serverauthproperty to false. Configure logging The logging configuration file, 00_logger.xml, is located in the deploy folder of the bridge package. By default, logged messages are stored in q2.log in the log folder of the bridge package. You can configure three logging elements for jPOS: Filtering is based on jPOS logging methodology where each log message has two properties: - realm - the source of the message - tag - message type or severity You can define filtering rules to allow or deny logging based only on tags or on realm/tag combinations. Example: Example: In this example of a logged message, you can see the log realm and the tag <info>. <log realm="com.hp.sv.iso8583.log.JettyLogger" at="2015-11-05T16:44:18.575"> <info> Logging initialized @1236ms </info> </log> To disable info logging for the JettyLogger realm, you can include following line in your logging configuration file: <property name="deny" value="com.hp.sv.iso8583.log.Jetty/info"/> In some cases, it may be helpful to log the communication between jPOS and the client , and betwen jPOS and the real service. To enable this, set the wiredump property in the channel element of the jPOS configuration file. For more details, see the comments in the log file. Configure the ISO 8583 protocol To work with ISO 8583, you need to perform some additional configuration to describe your particular instance of the protocol. The areas that require configuration are channel, packager, and correlation. Channel When using the ISO 8583 protocol, messages are sent over a TCP connection from the client to the service. The protocol allows multiple requests to be sent over a single connection, and the service can respond to them in any order. Since the TCP connection is just a stream of bytes, there needs to be a way to identify individual messages. This is solved by providing the message length before each message. The stream of bytes sent, therefore, contains the sequence of <len, message> pairs. Supported encoding There are multiple ways of encoding the message length into the stream. jPOS provides an implementation for several of these encodings. The following table lists the encodings supported by jPOS out of the box. To use one of the supported encodings, copy the implementation class name into the channel class attribute in the jPOS configuration file. Custom encoding If your channel encoding does not fall into any of the supported categories, you may need to implement it on your own. To configure a custom encoding, extend org.jpos.iso.BaseChannel and implement the getMessageLength() and setMessageLength() methods. To use the new class in jPOS: - Create a folder named lib under the deploy folder of the bridge package. Create a JAR file and put it into the \deploy\lib folder. jPOS will automatically load these JAR files into classpath. Enter your class (my.MyChannel in the example below) in the classattribute of the channelelement in the jPOS configuration file. For example, the following code uses simple encoding where the first byte is the ASCII 'L' character followed by four ASCII characters with length. Example: package my; import java.io.IOException; import org.jpos.iso.BaseChannel; import org.jpos.iso.ISOException; import org.jpos.iso.ISOUtil; public class MyChannel extends BaseChannel { @Override protected void sendMessageLength(int len) throws IOException { // Some check for min and max permitted values if (len > 9999) throw new IOException ("len exceeded"); else if (len < 0) throw new IOException ("invalid length"); // First comes 'L' serverOut.write('L'); // Then four ASCII characters with length - ISOUtil.zeropad() converts int to String with specified length serverOut.write(ISOUtil.zeropad(len, 4).getBytes()); } @Override protected int getMessageLength() throws IOException, ISOException { int l = 0; byte[] b = new byte[5]; // While length is 0 keep reading - 0 means keep alive message, see in code below while (l == 0) { // Read exactly five bytes serverIn.readFully(b, 0, 5); // Make sure the start is correct - 'L' if (b[0] != 'L') { throw new ISOException("Expected 'L' at the start and not byte 0x" + Integer.toHexString(b[0])); } // Extract characters from position 1 - after 'L' String s = new String(b, 1, 4); try { // Parse String into int if ((l = Integer.parseInt(s)) == 0) { // Length is 0 - send the same to output - keep alive message serverOut.write(b); serverOut.flush(); } } catch (NumberFormatException e) { // Could not be parsed - error throw new ISOException ("Invalid message length " + s); } } // Return read length return l; } } Logging channel You may need to try to reverse engineer the channel length encoding. If so, it may be useful to have a channel implementation which just reads the incoming message and logs it into the TCP stream. This will not generate any response so your client will wait forever or time out. To use the logging channel, update the port, and, if needed, the SSL configuration, in the 50_log-only.txt file in the deploy folder, and rename the file with an .xml extension. This will deploy the logging channel to the specified port and enable the logging of every byte that is received by jPOS from the client. The log entry will look as follows: Example: <log realm="com.hp.sv.iso8583.impl.LogOnlyChannel" at="Tue Jul 14 10:50:54.247 CEST 2015"> <info> READ FROM 127.0.0.1:49417: 0x30 </info> </log> Packager When a message is correctly identified in the TCP stream, it needs to be broken down into individual fields. This is the job of the packager. Again, the specification is quite open and fields can be encoded in various ways. You need to either specify which packager implementation to use, or you can use the jPOS GenericPackager, which uses an XML configuration file describing the format. Put that configuration file in the conf folder of the bridge package, and reference it from the packager-config element in the jPOS configuration file. There is a sample configuration file in the conffolder called demo.xml that can be used as a template. You need to define all the fields that your messages contain. Each field is defined by the isofield element with the following attributes: The jPOS default field packagers come from the org.jpos.iso package. The following table summarizes some of their properties: Correlation As stated above, the real (or virtual) service may respond to requests in any order. This means there must be a way to match a request with its corresponding response. This is achieved by using the same value in certain fields of the message. For example field 41 can have the value ABCD in both the request and the response. There can be multiple fields used for correlation, with some of them optional (meaning that not each message carries them, but if the message does have the field, it is used for correlation). You must specify the correlation fields in the jPOS configuration file using the key attribute in the mux element. The value is a comma-separated list of fields (numbers), which are used for correlation. At least one field must be defined, and at least one of the fields must be present in the message. If these conditions are not met, the correlation will not work and responses will be discarded. The indication that correlations are not configured correctly is timeouts in the jPOS log when waiting for responses during Service Virtualization Learning or Standby modes. Run jPOS You can run jPOS either as a standalone application or as a Windows service. Note: Make sure you have reviewed the installation and configuration instructions for jPOS in the section Before you begin. The following files are included with the ISO 8583 bridge package, located in the <Service Virtualization installation folder>\Tools\Iso8583\HP.SV.Iso8583 folder.
https://admhelp.microfocus.com/sv/en/2022/Help/Content/UG/c_ISO8583_jPOS.htm
CC-MAIN-2022-21
refinedweb
2,542
54.93
Odoo Help This community is for beginners and experts willing to share their Odoo knowledge. It's not a forum to discuss ideas, but a knowledge base of questions and their answers. Zoho.com: Connection unexpectedly closed I'm trying to configure access to a catch-all mail hosted by Zoho.com with my own domain. Everything worked fine before, with a Gmail address (not my domain, just a regular Gmail account) but with Zoho I can't send emails. 1) Incoming is configured for IMAP and working flawlessly. 2) Outgoing is configured as follows: Description: Zoho (catch-all) Priority: 10 SMTP Server: smtp.zoho.com SMTP Port: 465 Debugging: [not flagged] Connection Security: SSL/TLS Username: <my_catch_all_email@mydomain.com)< Password: <password> The [Test Connection] works but when sending emails from any application I get the infamous message "SMTPServerDisconnected: Connection unexpectedly closed". I already tried to set Alias Domain to 'localhost' and '127.0.0.1' with the same results. Couldn't find much help on the net so I'm posting here... Thanks in advance for any help! I also have same problem. After one day of investigation, I found the problem is because Zoho only allow you send email from SMTP_USER. So my fix is change code in /opt/odoo/odoo-server/openerp/addons/base/ir/ir_mail_server.py You need to change "smtp_from" to "smtp_user" and also replace message['from'] to smtp_user try: smtp = self.connect(smtp_server, smtp_port, smtp_user, smtp_password, smtp_encryption or False, smtp_debug) #smtp.sendmail(smtp_from, smtp_to_list, message.as_string()) #AIO FIXX 20150308 : Because of Zoho mail does not allow relay mail #That means the mail must sent from smtp_user #so we replace smtp_from => smtp_user #Also need to replace message['From'] to smtp_user #Here we only replace email address part but keep email name #example: AIO Robotics Inc. <old@email.com> to AIO Robotics Inc. <smtp_user email> from email.utils import parseaddr,formataddr (oldname,oldemail) = parseaddr(message['From']) #exact name and address newfrom = formataddr((oldname,smtp_user)) #use original name with new address message.replace_header('From', newfrom) #need to use replace_header instead '=' to prevent double field smtp.sendmail(smtp_user, smtp_to_list, message.as_string()) finally: if smtp is not None: smtp!
https://www.odoo.com/forum/help-1/question/zoho-com-connection-unexpectedly-closed-51162
CC-MAIN-2016-50
refinedweb
360
50.84
I remember when I first started using the internet (in 1993 using Winsock on a 14400 USR sportster modem) about just how exciting it was to no longer feel isolated on my computer. Nevertheless, it has still taken me 10 years or so to actually start programming for it - no-one ever said I ran before I could walk! And it has really only come about as a result of writing ToDoList, an XML 'based' task-list-thingy, also posted on CodeProject. I had decided on XML as the data format precisely because it could be uploaded to my web site and processed using JavaScript, to say nothing of XSL transforms and the myriad of opportunities this presents for reporting. And yet how was I still doing this uploading on a day-today basis? Using an FTP utility, that's how. "And what's wrong with that", you may (or you may not) ask? Nothing, from a strictly functional perspective, it's just that the process often strikes me as all so disjointed: saving a document in one app and then starting up a separate utility to transfer the file just screams at me for an overhaul. What I really desired was to be able to simply call something like GetFile() (or run a standalone app), optionally with no parameters, and have it present all the necessary GUI required to allow me to get any file from anywhere and in the manner to which I've become accustomed over the last 8-9 years, namely via something like the standard Open/Save dialogs. GetFile() And that is what I've tried to do here. EasyFtp is a no-brainer utility (no disrespect to anyone out there without a brain) which simply wraps a bunch of classes which do all the work. Note: The reason I've wrapped it as an EXE is to allow it to be supported by TodoList's 'Tool' interface. It can just as easily be used by dropping the classes directly into your own application as I will explain later. I've already hinted at some of the requirements but here's a more exhaustive list: The basic workflows that I envisaged were these: Note: These numbers correspond directly to the image numbers above. Sounds simple doesn't it? And the beauty of it was that it was just as simple as it appears, taking into account (as I always do) that I had already written a lot of the code necessary to make it work for other projects. And the particular code in question is CRuntimeDlg which I first presented in ToDoList as a means of constructing dialog boxes without the use of the RC editor and the resultant dependency on RC based dialog templates. CRuntimeDlg Note: If you're not clear on why this is such a significant issue, consider what has to happen if you want to reuse code that relies on dialog resources: Instead, CRuntimeDlg will allow you embed dialog control definitions within the dialog's .cpp file without any other fiddling about. Any control (possibly except ActiveX, at present) that can be placed by Visual Studio's resource editor can also be used in a CRuntimeDlg based dialog. It all adds up to being able to move the files anywhere without a second thought - works for me every time. This is a rather simplified diagram which shows the principal class relationships {{DIAGRAM_START : EasyFtp Class Diagram .------------. |CEasyFtpApp | | | |Application | |class | .v-----------· |uses | .-v---------------. |CRemoteFile | | | |Orchestrates the | |GUI and does the | |uploading and | ---< downloading >-------- | ·-v---------------· | uses| |uses |uses | | | .--------------v v------------------. v--------------------. |CServerDialog | |CRemoteFileDialog | |CProgressDlg | | | | | | | |Retrieves the | |Remote version of | |Shows dload/uload | |server details| |CFileDialog | |progress and doubles| | | | | | as a cancel dialog | ·-------------v· ·v-----------------· ·v-------------------· | | | derived| |derived |derived from| |from |from ------>v-----------------<------ |CRuntimeDlg | | | |Implements dialogs| | without resource | |templates | | | ·------------------· }}DIAGRAM_END (drawn courtesy of CodePlotter © AbstractSpoon 2003) The rest of the code comprises of utility classes, the most interesting of which are: CDeferWndMove Wrapper around ::DeferWindowPos() offering handy additions like OffsetCtrl() and ResizeCtrl(). ::DeferWindowPos() OffsetCtrl() ResizeCtrl() CDlgUnits Wrapper around ::MapDialogRect() offering overloads to convert shorts, ints, longs, POINTs, SIZEs and RECTs to and from pixels and dialog units (DLUs). ::MapDialogRect() short int long CSysImageList Wrapper around the Windows system image list (which provides access to file icons). CFileEdit CEdit derivative providing integrated browsing capabilities and using an enlarged non-client border in which to draw the file's icon. CEdit By default, i.e. with nothing on the command line, EasyFtp will default to 'Download' mode, and will follow the workflow outlined above. However, the following command line switches are available so that you can streamline or modify these defaults (note: switches must be preceded by - or /): Specifies you want to upload a file, with nothing else on the command line this will display the workflow in the article image. Specifies the remote path to upload to, or download from. This can be a full path (less the server bit) or just a folder, in which case it needs a trailing forward slash. Specifies the local path to upload from, or download to. This can be a full path or a folder (no trailing backslash required). Specifies the agent string to use (useful if EasyFtp is being spawned by another app). Specifies the server location e.g... Specifies the user name, if none is specified then an 'anonymous' login will be performed. Specifies the password for the account, can be left blank if the username is blank. Specifies anonymous login, empty username/password strings are passed. Specifies _not_ to convert filenames to lowercase when uploading. Specifies suppression of the 'confirm overwrite' dialog which appears if the upload or download target path already exists. If you would rather integrate the code directly into your own application then it's equally easy. #include "[path]\remotefile.h" // note: this is straight out of Easy // uploading a file (downloading is almost identical) // sLocalPath and sRemotePath will contain the user's // choice on exit // or simply "rf.SetFile()" RMERR nErr = rf.SetFile(sLocalPath, sRemotePath); // error handling switch (nErr) { case RMERR_SUCCESS: // note: if downloading we would now // do something with the downloaded // file pointed to by sLocalPath break; case RMERR_USERCANCELLED: break; default: { CString sMessage; if (sLocalPath.IsEmpty()) sMessage.Format("Sorry, the requested upload to '%s' could not / be completed for the following reason:\n\n%s", sServer, rf.GetLastError()); else sMessage.Format("Sorry, the upload of '%s' to '%s' could not / be completed for the following reason:\n\n%s", sLocalPath, sServer, rf.GetLastError()); AfxMessageBox(sMessage, MB_OK | MB_ICONEXCLAMATION); } break; } This article, along with any associated source code and files, is licensed under The Creative Commons Attribution-ShareAlike 2.5 License General News Suggestion Question Bug Answer Joke Rant Admin Math Primers for Programmers
http://www.codeproject.com/Articles/6193/EasyFtp-1-3-2-for-Applications?fid=33975&df=90&mpp=25&sort=Position&spc=Relaxed&tid=1116387
CC-MAIN-2013-20
refinedweb
1,121
50.16
The weak purity is big. With DMD 2.050 many Phobos functions will be able to support the pure attribute. This is just a little example, swap: A more complex example, sort, with the problems I've found (using 2.050alpha): One of the troubles I've found is with "auto pure" nested functions, this asserts: // see import std.traits: FunctionAttribute, functionAttributes; void main() { static pure int foo1(int x) { return x; } pure int foo2(int x) { return x; } static assert(functionAttributes!(foo1) & FunctionAttribute.PURE); // asserts static assert(functionAttributes!(foo2) & FunctionAttribute.PURE); // asserts } Weak pure functions may become so common in my code that I'd like them to be weak pure on default :-) I know that because of C compatibility this is not an acceptable change in D. Another step forward may come from an @outer attribute, that allows to put a final stop to the unruly usage of globals (outer) variables as done in C (Spark language already has something similar, and its usage is obligatory. In D the @outer is meant to be optional): int x = 100; int y = 200; @outer(in x, inout y) int foo(int z) { y = x + z; return y; } The usage of #outer is optional, but if you use it then all constraints implied you see in that code are enforced. See for more info: Eventually it will become useful to have a way to apply or not apply the pure attribute to a function according to a compile-time test, so a function template may become pure or not according to the kind of template arguments it receives. Bye, bearophile
http://forum.dlang.org/thread/ia7rp3$l9v$1@digitalmars.com
CC-MAIN-2014-41
refinedweb
269
56.69
How to set password encryption in Talend Studio In this example, we will use ROT13 to encrypt the password for a MySQL database connection, so that the password "talend" is transformed to "gnyraq" in the Studio. For more information on the ROT13 algorithm, see. This article applies to all versions of Talend Studio. Creating a custom routine In this procedure we will create a custom routine to execute the algorithm for password encryption. Procedure - In the Repository tree view of your Talend Studio, expand the Code node, right-click Routines and select Create routine from the contextual menu to create a new routine named MyRoutine. - In the new routine that opened in the routine editor, add a function named decrypt and define the specify mechanism to decrypt the encryption string of the password. While it's possible to use any algorithm (SHA, DES, etc.), a very simple decryption mechanism, ROT13, is specified in this example. The code of the function reads as follows: public class MyRoutine { public static String decrypt(String encryptedPassword) { StringBuffer output = new StringBuffer();(); } } Validating password transformation using a demo Job In this procedure we will create a demo Job to validate the password encryption. In this Job, we use a tMysqlInput to read data from a table called person from the database (it can be configured to read data from any a table in your case) and print the result on the console with tLogRow component. Before you begin This example assumes that you have a MySQL database, with the following information: - host name: localhost - port: 3306 - database name: test - user name: root - password: talend - table name: person - table columns: - id: type Inter (INT), 2 characters long - name: type String (VARCHAR), 20 characters long - sex: type String (VARCHAR), 1 character long Procedure - Create a new Job and name it EncryptPasswordWithR0T13Demo. - Add a tMysqlInput component and a tLogRow component to your Job. - Open the Contexts view, click the [+] button to create a variable named password, of type String. - Click the Value field and type in the encryption string, gnyraq in our example, which is the string transformed from the real password "talend" using ROT13. - In the Basic settings view of tMysqlInput, click the [...] button next to the Password field, and enter the expression that calls the custom routine function MyRoutine.decrypt(context.password)in the Enter a new password dialog box. - Configure the schema and the other parameters required to read data from your database. - Execute the Job to check whether you are able to connect to the database and read data from it. Results The Run console displays the data retrieved from the specified database table. Troubleshooting password encryption If the Job fails, you may see the following error information on the Run console: Exception in component tMysqlInput_1 java.sql.SQLException: Access denied for user 'root'@'localhost' (using password: YES) at com.mysql.jdbc.SQLError.createSQLException(SQLError.java:1075) This error indicates that you have failed the authentication when trying to connect to the database. Check that: - You have the right user name/password for the database connection before encrypting the password using ROT13. - You have transformed the right encryption string from your real password using a custom routine. - You have provided the encryption string as the default value of context variable. - You have correctly configured the database component to call the custom routine function.
https://help.talend.com/reader/fB2kyYR87AynuxflsCSzdg/root
CC-MAIN-2020-40
refinedweb
558
51.18